Thursday, November 12, 2009

TriZetto Hosted Batch Architecture

TriZetto Hosting has a framework in place for executing batch jobs, called the Hosted Batch Architecture, or HBA for short. If you are a TriZetto Hosting customer writing custom code to extend or enhance the core functionality of QNXT or Facets, you will eventually reach a crossroads where you need a job created to execute that custom code. This is where the HBA comes into play: you will have to make sure your job is HBA compatible.

Before I get into what it means to be HBA compatible, I'd like to bring up the benefits the HBA provides. The HBA gives all jobs a common look and feel for execution and supportability. This is a very important principle in custom development practice: keep things consistent. It means that all jobs are kicked off the same way, they use the same resources (such as I/O and database connections), and when an error occurs, it is handled in the same manner. In my opinion this is a very powerful benefit for TriZetto customers, because it reduces the burden and the cost of troubleshooting problems when they arise.

The HBA in and of itself is a "black box," so I cannot really tell you what the HBA is internally. The TriZetto documentation refers to it as a framework, so that is what I call it here. From a customer's point of view, I can share that jobs can be configured using XML, and that you can use VB Script to extend the capabilities of the HBA if the need arises.


Creating jobs that are HBA compatible

Note: this section is entirely QNXT specific and speaks to version 4.51 of the HBA.

In a previous post, I discussed customizing QNXT using the Custom Database. Once you have a Custom Database set up, you will need custom code to take various read and write actions against that database. A common approach is to write this custom code as SQL Server Integration Services (SSIS) packages. SSIS allows you to extract, transform and load data in and out of SQL Server.

Within an SSIS package you define Connection Managers for every resource the package consumes or publishes: database connections, file system locations (an /input or /output folder, for example), SMTP servers, and anything else that is part of the package's workflow. A common practice in SSIS development is to place the values of these resources in a configuration file (a .dtsConfig file) and bind the configuration in at runtime. That way, if you have varying configurations depending on which environment the package runs in (development, test or production), you don't have to change code in the package to deploy to those environments.

To use a configuration file, you define Package Variables within your SSIS package and use those variables when creating Connection Managers, so that resource names are never hard coded into the package. At run time, when the configuration file is bound in, the Package Variables are populated with the appropriate values for that environment, and the Connection Managers are ready to be consumed from and published to.
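To make this concrete, here is a minimal sketch of loading a package and applying a .dtsConfig programmatically through the SSIS runtime API (the Microsoft.SqlServer.ManagedDTS assembly). The package path, configuration file and variable name are hypothetical placeholders; normally the binding happens automatically when the package is executed with a configuration file specified, so this is just to illustrate the mechanism.

using System;
using Microsoft.SqlServer.Dts.Runtime;  // reference Microsoft.SqlServer.ManagedDTS

class PackageConfigSketch
{
    static void Main()
    {
        Application app = new Application();

        // Hypothetical paths - substitute your own package and .dtsConfig file
        Package package = app.LoadPackage(@"C:\ssis\CustomExtract.dtsx", null);
        package.ImportConfigurationFile(@"C:\ssis\CustomExtract.Dev.dtsConfig");

        // The configuration populates Package Variables; Connection Managers
        // reference these variables instead of hard coded resource names
        Console.WriteLine(package.Variables["User::OutputFolder"].Value);

        DTSExecResult result = package.Execute();
        Console.WriteLine("Package result: {0}", result);
    }
}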

With the HBA, you employ a similar practice. The key differences are that:

(a) The configuration file is not a .dtsConfig. Instead, it is a proprietary TriZetto file called Common System Properties; and if any local configurations are needed, there is a separate configuration file called the HBA Wrapper, also referred to as the job XML.

(b) The Package Variables must be placed within an HBA namespace.

TriZetto provides an HBA Development Guide which details the particulars of how to build HBA compliant SSIS packages, including, among other things, the Package Variables you must use. Becoming HBA compatible means employing these practices in your custom code.

Recommended Reading

If you are a TriZetto customer and are looking to familiarize yourself with the HBA, in addition to reading the HBA Development Guide, I recommend you also request information about:

  • Common System Properties - this will come in the form of a list of variables and their meanings
  • HBA Wrapper Dev Guide - this has a list of commands you can execute inside of the HBA. For example, there are commands for file manipulation like fileDelete() and fileCompress().
  • Client Specific - you can enhance the functionality of the HBA with VB Script by creating a Client Specific VB file that is imported into the HBA
  • Auto Date Parameters - there is a series of date variables that are predefined and available to you in the HBA. For example, there is a date variable that always returns the first day of the current quarter, or the last day of the month 
Stay Tuned ...

In a future post, I will discuss a unique challenge that you have to solve with respect to the HBA. Since the HBA is only available in your hosted environments, you need a mechanism to unit test your code to make sure that you are consistently creating HBA compatible custom code.

Wednesday, November 11, 2009

Creating an Authorization or Referral with QNXT 3.4 SDK

When you install a QNXT EU (Execution Unit), you also get the QNXT SDK. The QNXT SDK is really just a Microsoft Word document that you will find in the \Program Files directory after the install is finished. You do not have to select anything special as part of the install process; the SDK is installed by default.

This Word document lists a series of assemblies that can be used to harness some of the functionality of the Execution Unit. This document does not go into great detail as to how to use these assemblies. Also, no code samples are provided. So you are left more or less to your own devices to figure out how to get what you need out of the SDK.

This blog post describes how we used the QNXT 3.4 SDK to create authorizations/referrals. As near as I can tell, what I am describing here is not documented anywhere else, including the TriZetto Customer Exchange website. This is unsupported custom code, but it has passed TriZetto code review, which may not mean much to you unless you are a TriZetto Hosting customer. If you are a Hosting customer, then passing code review means that TriZetto will deploy this code to your production environment.

There are two pre-requisites to get the sample code provided in this blog post to run:

  1. Install the QNXT 3.4 EU locally
  2. Create a QNXT User with (a) the appropriate QEnvironment added (e.g., Integrated Dev) and (b) the “Authorization Assignment” role added/granted

Next, you will need to bind in the following reference assemblies to compile this code:

  • QCSI Globals.dll
  • QCSI Internal Proxy.dll
  • QCSI Messages Authorization.dll
  • QCSI Messages CommonTypes.dll
  • QCSI Proxy Authorization.dll
  • QFrame Common Messages.dll

All of the above are installed into the GAC on the EU. If you are using Visual Studio to build your project, you will need to xcopy these out of the GAC first so that they will show up in your Add References wizard.

Without any further ado, here is a working unit test (MbUnit) to create an authorization/referral using the SDK:
using System;
using Q.QFrame.Messages;
using Q.Proxy;
using Q.Global;
using MbUnit.Framework;

namespace AuthPosting.Tests
{

[TestFixture]
public class AuthPostingTests
{

[Test]
public void CreateReferralTest()
{
QNXTProxy _proxy = new QNXTProxy();
if (!_proxy.InitQNXT("user", "password", "environment", "plandataAlias")) Assert.Fail();

AuthCreateUpdateRequestMessageType _request = new AuthCreateUpdateRequestMessageType();
_request.Referral = new ReferralMessageType();
_request.Referral.EnrollId = "";      // QNXT Enrollment ID
_request.Referral.MemId = "";       // QNXT Member ID
_request.Referral.ServiceCode = ""; // QNXT Template ID
_request.Referral.Cob = 0;
_request.Referral.ReferTo =""; // QNXT ReferTo Provider Id
_request.Referral.EffDate = DateTime.Today.ToString();
_request.Referral.ReferFrom = ""; // QNXT ReferFrom Provider Id
_request.Referral.Emergency = 0;
_request.Referral.ReferralDate = DateTime.Today.ToString();
_request.Referral.TransferInOut = 0;
_request.Referral.AdmitDate = DateTime.Today.ToString();
_request.Referral.IssueInitial = "webusr";
_request.Referral.DischargeDate = DateTime.Parse("2078-12-31").ToString();
_request.Referral.TermDate = DateTime.Parse("2078-12-31").ToString();
_request.Referral.PayToAffiliationId = ""; // QNXT PayTo Affiliation ID
_request.Referral.AttProvid = ""; // QNXT Attending Provider Id
_request.Referral.AppealDate = DateTime.Parse("2078-12-31").ToString();
_request.Referral.AccChg = Convert.ToDecimal(0.00);
_request.Referral.Acuity =  "Urgent";
_request.Referral.Admit = 1;
_request.Referral.AdmitDate = DateTime.Today.ToString();
_request.Referral.AdmitPhys = "";
_request.Referral.AdmtProvid = ""; // QNXT Admitting Provider Id
_request.Referral.AuthStatus = AuthStatusType.MEDREVIEW;
_request.Referral.Diagnosis = "";
_request.Referral.DisDiagnosis = "";
_request.Referral.Dispositionid = "";
_request.Referral.ProcessLogId = "";
_request.Referral.ReceiptDate = DateTime.Parse("2078-12-31").ToString();
_request.Referral.ReferToLocation = "";
_request.Referral.ReferToPar = YesNoType.N;
_request.Referral.ReferToProvType = "";
_request.Referral.Source = ReferralMessageTypeSource.Q;
_request.Referral.DecrementType = ReferralMessageTypeDecrementType.SVC;
_request.Referral.Status = AuthStatusType.INPROCESS;

_request.Referral.ReferralText = new ReferralTextMessageType();
_request.Referral.ReferralText.Reason = "Test referral reason"; // free-text reason for the referral

switch("Auto")
{
case "Auto":
_request.Referral.AccidentCause = ReferralMessageTypeAccidentCause.A;
break;
case "Employment":
_request.Referral.AccidentCause = ReferralMessageTypeAccidentCause.E;
break;
case "Others":
_request.Referral.AccidentCause = ReferralMessageTypeAccidentCause.O;
break;
case "No":
default:
_request.Referral.AccidentCause = ReferralMessageTypeAccidentCause.Item;
break;
}

_request.Referral.AuthDiags = new AuthDiagMessageType[1];
_request.Referral.AuthDiags[0] = new AuthDiagMessageType();
_request.Referral.AuthDiags[0].DiagCode = "100.0";
_request.Referral.AuthDiags[0].Sequence = "1";
_request.Referral.AuthDiags[0].DiagQualifier = AuthDiagMessageTypeDiagQualifier.PRINCIPAL;


AuthCreateUpdateProxy _authProxy = new AuthCreateUpdateProxy(_proxy.Session);

AuthCreateUpdateResponseMessageType _response = _authProxy.ProcessMessage(_request);
Assert.IsNotNull(_response);
}



}
}

Note that the AuthCreateUpdateProxy ProcessMessage() call does not perform any data validation. So if validation is important to you, you will have to perform it outside of the SDK. Also, the SDK does not support:

  • Adding data to custom attributes
  • Creating an authorization service
  • Creating an authorization alert/memo

These operations will also need to be performed outside of the SDK.

Here is a list of some other tests we found useful in ensuring the quality of the SDK interface (a sketch of the general pattern follows the list):

  • Add authorization with wrong or missing Member Id
  • Add authorization with wrong or missing Enrollment Id
  • Add authorization with wrong or missing Auth Template Id
  • Add authorization with wrong or missing Refer From Provider Id
  • Add authorization with wrong or missing Refer To Provider Id
  • Add authorization with wrong or missing Affiliation Id
  • Add authorization with wrong or missing Admitting Provider Id
  • Add authorization with wrong or missing Diagnosis Code
  • Add duplicate authorization
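As one concrete illustration of the pattern (a sketch of my own, not TriZetto sample code), here is what a negative test can look like using only the proxy API shown above. It assumes InitQNXT() simply returns false when initialization fails; how the AuthCreateUpdate response reports field-level errors for the cases above depends on your SDK version, so treat this as a template rather than a definitive implementation.

[Test]
public void InitQNXTWithBadCredentialsFails()
{
    // Hypothetical bad credentials - initialization is expected to fail cleanly
    QNXTProxy _proxy = new QNXTProxy();
    bool initialized = _proxy.InitQNXT("baduser", "badpassword", "environment", "plandataAlias");
    Assert.IsFalse(initialized);
}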

Thursday, July 30, 2009

Custom NAnt ILMerge Task. No Assembly Required

ILMerge is a Microsoft utility that enables you to incorporate referenced .NET assemblies into a combined assembly, removing the need to distribute external assemblies. If you are using ILMerge, and you have an automated build process using NAnt, you have several options for integrating ILMerge into your build:

1. You can use the <exec> task and build your command line arguments as a string.

2. You can use a 3rd party (or write your own) custom tasks library like this one: http://code.google.com/p/ilmerge-tasks/wiki/HowToUse

3. Or you can write an inline custom task using the <script> tag in your NAnt script


I started with option #1. I quickly realized that without <fileSet> support, this could become a difficult option to maintain.

I then moved to option #2. This library didn't work for me. I suspect it's because my \tools folder, which contains ILMerge.exe and ILMergeTask.dll, is on an R:\ drive, and that this trips up the .NET reference search. I got back an error that ILMerge was not found.

Which left me with option #3. NAnt provides a <script> tag which allows you to write extensions to core functionality, so I wrote my own ILMergeTask. Note that I could have compiled this into a task library as well. The real difference versus option #2 is that the ILMergeTask I wrote extends ExternalProgramBase (as opposed to Task), which allows me to pass in the path of ILMerge.exe.

Usage is pretty familiar:

<ilmerge outputfile="Combined.Assembly.exe"
         program="R:\tools\ilmerge\ilmerge.exe"
         primary="Primary.Assembly.exe"
         logfile="${log.dir}\ilmerge.log">
    <assemblies>
        <include name="*.dll" />
    </assemblies>
</ilmerge>

And here is the code for the custom task:
  <script language="C#" prefix="custom" >
<references>
<include name="System.dll" />
<include name="NAnt.Core.dll" />
</references>
<imports>
<import namespace="System" />
<import namespace="System.Collections" />
<import namespace="System.Collections.Specialized" />
<import namespace="NAnt.Core.Types" />
<import namespace="NAnt.Core.Util" />
<import namespace="NAnt.Core.Tasks" />
</imports>

<code>
<![CDATA[
[TaskName("ilmerge")]
public class ILMergeTask : ExternalProgramBase {

private FileSet m_assemblies;
private string m_logFile;
private string m_outputFile;
private string m_primaryFile;


[TaskAttribute("program", Required = true)]
[StringValidator(AllowEmpty = false)]
public override string ExeName
{
get { return base.ExeName; }
set { base.ExeName = value; }
}


public override string ProgramArguments
{
get { return string.Empty; }
}

[BuildElement("assemblies", Required=true)]
public virtual FileSet InputAssemblies
{
get
{
return this.m_assemblies;
}
set
{
this.m_assemblies = value;
}
}

[TaskAttribute("logfile")]
public virtual string LogFile
{
get
{
if (this.m_logFile == null)
{
return null;
}
return this.Project.GetFullPath(this.m_logFile);
}
set
{
this.m_logFile = StringUtils.ConvertEmptyToNull(value);
}
}

[TaskAttribute("primary", Required=true), StringValidator(AllowEmpty=false)]
public virtual string PrimaryFile
{
get
{
if (this.m_primaryFile == null)
{
return null;
}
return this.Project.GetFullPath(this.m_primaryFile);
}
set
{
this.m_primaryFile = StringUtils.ConvertEmptyToNull(value);
}
}

[TaskAttribute("outputfile", Required=true), StringValidator(AllowEmpty=false)]
public virtual string OutputFile
{
get
{
if (this.m_outputFile == null)
{
return null;
}
return this.Project.GetFullPath(this.m_outputFile);
}
set
{
this.m_outputFile = StringUtils.ConvertEmptyToNull(value);
}
}


protected override void ExecuteTask()
{
try
{
Log(Level.Info, "Executing ILMerge.exe");
Log(Level.Info, string.Format("/out:\"{0}\"", m_outputFile));
Log(Level.Info, string.Format("/log:\"{0}\"", m_logFile));
Arguments.Add(new Argument(string.Format("/out:\"{0}\"", m_outputFile)));

Log(Level.Info, string.Format("assembly[{0}]: {1}", "primary", m_primaryFile));
Arguments.Add(new Argument(string.Format("\"{0}\"", m_primaryFile)));

for (int i = 0; i < m_assemblies.FileNames.Count; i++)
{
Log(Level.Info, string.Format("assembly[{0}]: {1}", i, m_assemblies.FileNames[i]));
Arguments.Add(new Argument(string.Format("\"{0}\"", m_assemblies.FileNames[i])));
}

Arguments.Add(new Argument(string.Format("/log:\"{0}\"", m_logFile)));

base.FailOnError = false;
base.ExecuteTask();
}
catch (Exception ex)
{
throw new BuildException(string.Format("Error executing ILMerge: {0}", ex.Message), Location, ex);
}
}
}
]]>
</code>
</script>




Big kudos to the guy on Google Code and the guy who wrote Reflector ;-)

Friday, July 17, 2009

How to harden your iMac Apache Web Server

Note: I am running Leopard 10.5.6 and Apache 2.2. Other versions of OS X and/or Apache may work differently.

A few months ago I started Apache Web Server on my iMac, and today I'm going to apply a “minimum necessary” configuration to Apache so it is more secure. In future posts, I plan to write about putting your Apache Web Server "in jail" and enabling DDNS, so that your hardened iMac can be a web host on the Internet.

Prerequisite
When I first bought my iMac, I secured it using Apple’s Security Configuration as a guide. If you buy your iMac at Best Buy, you can also pay Geek Squad $40 to do this for you. If you’re looking for an abridged version, this guide is also good - http://www.macshadows.com/kb/index.php?title=Hardening_Mac_OS_X. Also, Apple puts out an audit tool with lots of other security tips - http://support.apple.com/downloads/Common_Criteria_Tools_for_10_4.

The first thing to do before securing your Apache Web Server is to decide on the functionality you want from Apache. I decided on some very basic items:

  1. Only static HTML pages will be served.
  2. Ok, I’m adding SSI (http://httpd.apache.org/docs/2.2/howto/ssi.html)
  3. The server must support the virtual hosting mechanism.
  4. The server must log all web requests (including information about web browsers).

Configure Apache

To apply these configurations, you have to edit the web server configuration file.

Open a terminal: Finder->Applications->Utilities->Terminal
Edit the configuration file: sudo bbedit /private/etc/apache2/httpd.conf

BBEdit is a text editor that I bought. You can use any text editor here, even TextEdit, which came pre-installed on my iMac.

You do not need to enable the root password for this; if you are logged in as an Administrator user, you can sudo and execute this command. My finished configuration is as follows (in the interest of space, I removed the comments normally found in the default configuration file):

ServerRoot "..."
Listen 80
LoadModule authz_host_module libexec/apache2/mod_authz_host.so
LoadModule include_module libexec/apache2/mod_include.so
LoadModule log_config_module libexec/apache2/mod_log_config.so
LoadModule expires_module libexec/apache2/mod_expires.so
LoadModule mime_module libexec/apache2/mod_mime.so
LoadModule dir_module libexec/apache2/mod_dir.so



User www
Group www



ServerAdmin esuyer at gmail dot com
UseCanonicalName Off
ServerSignature Off
HostnameLookups Off
ServerTokens Prod
DocumentRoot ".../www"
Timeout 300
KeepAlive On
MaxKeepAliveRequests 100
KeepAliveTimeout 15

MinSpareServers 5
MaxSpareServers 10
StartServers 5
MaxClients 150
MaxRequestsPerChild 0


Options None
AllowOverride None
Order deny,allow
Deny from all


Options +Includes
Order allow,deny
Allow from all


DirectoryIndex index.htm


Order allow,deny
Deny from all
Satisfy All


Order allow,deny
Deny from all
Satisfy All


Order allow,deny
Deny from all
Satisfy All

ErrorLog "/private/var/log/apache2/error_log"
LogLevel warn

LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
LogFormat "%h %l %u %t \"%r\" %>s %b" common


LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\" %I %O" combinedio


CustomLog "/private/var/log/apache2/access_log" common



ScriptAliasMatch ^/cgi-bin/((?!(?i:webobjects)).*$) "/Library/WebServer/CGI-Executables/$1"


DefaultType text/plain


TypesConfig /private/etc/apache2/mime.types
AddType application/x-compress .Z
AddType application/x-gzip .gz .tgz
AddType text/html .shtml
AddOutputFilter INCLUDES .shtml


Include /private/etc/apache2/extra/httpd-vhosts.conf

SSLRandomSeed startup builtin
SSLRandomSeed connect builtin




Compared to the default configuration file, the following important changes have been made:

  • The number of enabled modules has been reduced to a minimum.
  • Apache's processes (except for the root process) are set to be executed with unique regular user/group privileges.
  • Apache discloses as little information about itself as possible.
  • Access rights to the website's content are set to be more restrictive.

Wednesday, July 15, 2009

QNXT Integration Specialist Job Description

If you are planning to customize QNXT with functionality that is specific to your business, you have several options. All of these options revolve around application integration; under the TriZetto license, you do not have the rights to customize the QNXT application itself. You will need to fill several roles in your organization to make this happen. One of these roles is an Integration Specialist. Here is a job description for this role.


Position Description
The Integration Specialist is responsible for designing, developing and implementing software solutions based on requirements from the business, with an emphasis on enterprise solution development utilizing Microsoft BizTalk Server 2006, SQL Server 2005 and .NET. We need a cooperative team player who can work closely with a manager and team members as well as independently. The ability to function well in a fast-paced work environment and to work on multiple time-sensitive projects concurrently is necessary. Excellent written and verbal communication skills, as well as detail-oriented organizational skills, are also required. Software development experience should include extensive object-oriented design experience.


Required Skills:
  • Minimum of 7 years in-depth hands-on development experience with Object Oriented Analysis, Design and Development of custom .NET applications and application integrations.
  • Minimum of 5 years development experience with C# .NET
  • Minimum of 4 years development experience with SQL Server, and T-SQL
  • One of the following:
  1. Minimum of 2 years development experience with BizTalk 2006
  2. Minimum of 2 years development experience with SSIS 2005
  • Strong verbal/written communication skills
  • Demonstrated ability to be productive independently and/or as a team member
  • Strong organizational, interpersonal and analytical skills


You’ll notice I did not include QNXT experience … QNXT experience would be ideal. QNXT Hosting experience would be nirvana. I had no luck finding people with any QNXT experience in Worcester, MA, and I’m reasonably certain we were well above the average salary range for this position. At the time of this writing, there are exactly 3 QNXT Hosting customers. I’m not sure that QNXT has the market penetration to warrant an effective match on this skill set. In place of QNXT experience, I recommend all Integration Specialists go through this first 90-day plan.


First 90 days
Look for more posts on each item below (coming soon on this blog). If you are at the beginning of your implementation project, you may find yourself with some down time while planning is happening. Use this window to work through the 90-day plan. This research/education is invaluable, in my opinion.

  • Certified Product Professional – Technical System Integration training. This is a QNXT certification
  • Install QNXT and the QNXT SDK
  • Write a simple .NET console app that uses the QNXT SDK, e.g. create an authorization
  • If you are a hosting customer,
  1. Write an SSIS package that is HBA Compliant and that adheres to the Self Service deployment structure
  2. Deploy and Execute to hosting using the Self Service facilities
  • Create build/make scripts for your deployment artifacts, e.g. using NAnt or other .NET scripting tools
  • Create unit tests for your deployment artifacts, e.g. using NUnit, FIT or other automated testing tools

Tuesday, July 14, 2009

QNXT Custom Database

Warning: this post is not for the faint of heart. I will be discussing the customization of QNXT.


There are two types of customization that you can do with QNXT: configuration and custom coding. Configuration is a business process. You configure QNXT by defining your business rules using the configuration functionality built into QNXT. Some common configurations health plans apply in QNXT are the business rules around adjudicating claims, member eligibility verification, and authorizations. There are many business processes that you can configure using the native, or out of the box, functionality provided by QNXT.

Custom coding is a development process. There are two types of custom code development: supported and unsupported. I’m going to cover the differences between supported and unsupported code in another post. Supported and unsupported code are both deployed to the QNXT Custom Database.

Custom coding is not used to enhance the QNXT application itself. Put another way, the QNXT application is not something that can be modified under the QNXT license and support agreement. Custom coding is used to extend QNXT, or to allow for integration with QNXT. These customizations are applied outside of QNXT itself. Some common examples of custom code are extracts and interfaces to other line of business applications like Finance/Accounting, CRM and Case Management.

QNXT allows for several points of integration. The ones we use are the QNXT SDK (which I will cover in a separate post) and the QNXT Custom Database. The purpose of the QNXT Custom Database is to provide a layer of abstraction over the out of the box QNXT database, called Plan Data. Changes that ordinarily could be applied to Plan Data, by way of new stored procedures, views, and other database objects, are applied to this Custom database instead. This way the Plan Data database remains unchanged, and all customer customizations are in one place: the Custom database.

The QNXT Custom Database hosts all custom database code, as well as any data that supports these customizations. There is also a QNXT Stage Database, which is also used for customization. The difference is that the Stage database is permanent in structure but temporary in storage - it is a temp database. If you have an extract, for example, that requires some pre-processing or post-processing, you can temporarily host your interim-state data in Stage.

TriZetto Hosting

What I’ve described so far is true of all QNXT installations. These next few paragraphs speak to the specifics of the QNXT Custom Database if you are a hosting customer. Each QNXT environment hosts a QNXT Custom and Stage database. If you have regions with several environments, this likely means that a single SQL Server instance at TriZetto hosts several Custom and Stage databases. For example, in a single database instance you may have development, unit test, training and configuration versions of the Custom and Stage databases - eight databases in total. This creates an interesting custom code promotion problem. In the typical scenario most of us are used to, when promoting a change from development, to unit test, to model office and production, you expect the database names in each of these environments to be the same. With the QNXT Custom Database at TriZetto Hosting this is not the case: you promote from Custom_Dev, to Custom_UT, to Custom_PPMO, and Custom_Prod.

There are a couple of solutions to this problem. TriZetto supports a ticket-based promotion process that handles these name changes as you promote up. In this example, a customer may choose to set up a promotion process where they deliver custom code to the Custom_Dev database, and TriZetto manages the promotion of these changes going forward. This promotion process is tailored to customer needs. In our case, we chose our first promotion group to be PPMO. This means we manage the delivery of code to Custom_Dev, Custom_UT and Custom_PPMO, which puts more of this burden on us. We chose to do this because we realized a time savings during our implementation effort by opening fewer tickets. Instead of putting in a ticket (which may have a 24 to 48 hour turnaround depending on the change) to promote to Custom_Dev, we delivered the code there ourselves, and didn't engage the ticketing system until we needed to deploy to Custom_PPMO. By that time, the custom code had gone through development, quality assurance and user acceptance testing.

If you are a hosting customer, you’ll likely be executing code in the QNXT Custom Database using a job scheduler and the TriZetto Hosted Batch Architecture (also known as the HBA, more on this in another post), which has Common System Properties that allow you to connect to it.
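One practical consequence for custom code is that the environment-specific database name (Custom_Dev, Custom_UT, and so on) should never be hard coded. Here is a minimal sketch of reading it from configuration instead; the appSettings keys, server name and use of integrated security are hypothetical placeholders of my own - in a hosted HBA job the equivalent values would come from Common System Properties.

using System;
using System.Configuration;   // reference System.Configuration
using System.Data.SqlClient;

class CustomDatabaseConnectionSketch
{
    static void Main()
    {
        // Hypothetical settings - each environment's config supplies its own values,
        // e.g. CustomDatabaseName = Custom_Dev in development, Custom_UT in unit test
        string server = ConfigurationManager.AppSettings["CustomDatabaseServer"];
        string database = ConfigurationManager.AppSettings["CustomDatabaseName"];

        var builder = new SqlConnectionStringBuilder
        {
            DataSource = server,
            InitialCatalog = database,
            IntegratedSecurity = true
        };

        using (var connection = new SqlConnection(builder.ConnectionString))
        {
            connection.Open();
            Console.WriteLine("Connected to {0}", connection.Database);
        }
    }
}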

Full Disclosure re QNXT

I hope to be writing a lot about QNXT and TriZetto Hosting on this blog, so I wanted to make this clear to everyone reading this: I do not work for TriZetto, nor am I a partner of TriZetto’s. I work for a health plan that is a TriZetto customer. Everything I am writing here is about my experience and my point of view. My company went through a three year implementation effort with QNXT and TriZetto Hosting, ending in 2010. We were TriZetto’s very first hosting customer. A large part of the reason I started this blog is that I am interested in collaborating with the QNXT community. I encourage you to ask a question, leave a comment, tell me about your experience, and/or challenge my views.

Thursday, July 2, 2009

How to connect your Apple TimeCapsule to an existing network

1. Unpack your Apple TimeCapsule

2. Plug a network cable into a LAN port on your TimeCapsule and a LAN port on your existing network/router. Then plug the TimeCapsule power cable into a power outlet. When the TimeCapsule comes on and configures itself, a blinking amber light will come on; this signifies a network connection problem. The next few steps will fix this problem.

3. On your Mac, Airport Utility should pop up after a few minutes. It comes out of the box with Leopard, but it is also on the install CD that accompanies the TimeCapsule


4. Click the Manual Setup button


5. Click Internet in top navigation. Then under Connection Sharing, select "Off (Bridge Mode)". Then click the Update button. The TimeCapsule will reboot, and will come back up with a solid green status light, signifying it is ready to use.


If you don't need it, I recommend you also turn the Wireless Network off.

Monday, June 8, 2009

TriZetto QNXT Hosting and Time Zone

TriZetto Hosting's answer to the Time Zone globalization problem is to set the server time on the operating system to the Time Zone of their customers. This way, without having to make any coding changes to the QNXT application, they can store datetime data in their customer's Time Zone.

Sunday, May 17, 2009

Setting a Time Zone in your .NET application

One important consideration when designing software systems is globalization.  This is nothing new - we have had software for many years that must transcend various cultures and languages.  But there is an aspect of globalization that has become more mainstream in the last two years - Time Zone globalization.  The evidence I offer to substantiate this opinion is that in the last two years M$ has released major versions of two product lines - SQL Server 2008 and the .NET Framework 3.5 - which have features that specifically target the Time Zone globalization problem (the TimeZoneInfo and DateTimeOffset classes in .NET 3.5, and the DateTimeOffset data type in SQL Server 2008).


The Time Zone globalization problem is not new.  On Google, I found articles about solving this problem that date back to 2000 and beyond.  I think what makes this problem more relevant today is that software systems are more distributed; managed software (aka hosted software), software as a service, and software in the cloud are more mainstream today than ever before.



What is the Time Zone globalization problem?


I’m going to attempt to describe this problem in the form of an example.  You are a software company.  10 years ago you built a web application that you sell and license to your customers.  Your product is very good and has exploded in popularity.  Last year you decided to expand your product line by offering this product as a managed service.  Now, instead of your customers owning a license to your product, you host the product and they pay for the privilege of using it on a subscription basis.


When your customers license and own your product - there is no Time Zone problem.  Your customers install your product on their servers where they have complete control of the Time Zones.  If your customer is a Massachusetts company, but their data center is in Colorado - it makes no difference.  The customer is in control of the data center; if they want to set all of their Colorado servers to Eastern Time because that is what makes sense to their business, that is their prerogative.  


When your customers subscribe to your managed service - there is a potential Time Zone problem.  You, as the managed service provider, are in control of the software and the servers, and thereby in control of the Time Zone on those servers.  Your hosting data center is in Colorado, so therefore you set the Time Zone to Mountain Time.  This presents the first side effect of the Time Zone globalization problem:


Your customer’s data is now in your local Time Zone, not theirs.


This may lead to various usability problems.  Your customers' employees, who are users of your product, will have to convert back and forth between their local Time Zone and Mountain Time.  For example, let's say the hosted application is a Customer Service Call Center.  If a Customer Service Rep wants to set a call back reminder for 2pm Eastern Time, they'll have to remember to set the reminder to 4pm (because the server Time Zone is Mountain Time, which is offset by 2 hours from Eastern Time).


Let's take this example a bit further.  For various system integration reasons, you exchange data on a regular basis with your customers.  Your customer sends you data in one Time Zone, and you send data back in another.  This presents the next side effect of the Time Zone globalization problem:


Your customer’s data is no longer in one Time Zone.


This may lead to various business rule problems.  For example, let's say the hosted application is a claims processing application, and the customer hosts their financial accounting application in-house in a different Time Zone.  To reconcile claims against payments, you exchange data on a nightly basis.  Every so often, the job runs past 2:00am Mountain Time, which increments to the next day on the server and picks up the next day of claims.  This is an unexpected condition and the job fails.



Possible Solutions


There are several ways to solve this problem.  One possible solution is to set the server, or operating system, time to the Time Zone of your customer.  This way you are back to one Time Zone and the problem ceases to exist.  This solution is not always possible: if you are a managed service provider, you likely have customers across various Time Zones, so setting server time to Eastern Time may help one customer and hurt another.


Another possibility is to fix this problem at the database level.  With SQL Server 2005, the datetime data type is stored as two 4-byte integers.  The first 4 bytes store the number of days before or after 1/1/1900.  The second four bytes store the time of day represented as the number of 1/300-second units after midnight.  This data type is not Time Zone aware, so SQL Server 2005 is not Time Zone aware.  If you are storing the datetime “5/17/2009 20:00”, SQL Server has no way to distinguish whether you mean Eastern Time, Mountain Time, or another Time Zone.  With SQL Server 2005, you can write code that applies the appropriate offset (for example with DATEADD) to convert between Time Zones either when the data is stored or when it is queried.


With SQL Server 2008, there is a new data type, DateTimeOffset, which is Time Zone aware:


http://blogs.msdn.com/manisblog/archive/2007/08/28/sql-server-2008-enhancements-in-date-and-time-data-types.aspx


Using this data type, the Time Zone offset can be stored along with the date/time.  This does not solve the usability problem described above, as you will still need to convert the datetime for display in a different Time Zone.  But it does help you solve the systems problem when exchanging data.
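Here is a quick sketch (standard .NET 3.5, with made-up sample values) of what DateTimeOffset preserves that a plain DateTime does not:

using System;

class DateTimeOffsetSketch
{
    static void Main()
    {
        // A claim received at 8:00 PM Eastern (UTC-4 during daylight saving time)
        DateTimeOffset eastern = new DateTimeOffset(2009, 5, 17, 20, 0, 0, TimeSpan.FromHours(-4));

        // The same instant expressed in Mountain Time (UTC-6)
        DateTimeOffset mountain = eastern.ToOffset(TimeSpan.FromHours(-6));

        Console.WriteLine(eastern);              // 8:00 PM with a -04:00 offset
        Console.WriteLine(mountain);             // 6:00 PM with a -06:00 offset
        Console.WriteLine(eastern == mountain);  // True - both represent the same point in time
        Console.WriteLine(eastern.UtcDateTime);  // 5/18/2009 12:00:00 AM UTC
    }
}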


The third possibility is to solve this problem in the application itself.  When this issue came up for me at work this week, I set out looking for the magic web.config setting that would let us set the Time Zone to Eastern Time.  Unfortunately, this setting does not exist out of the box.  The setting I was thinking of was CultureInfo, which lets you set how date/time information is formatted for display (e.g., 05/17/2009 or 17-May-2009).  But there is no setting for Time Zone.


However, writing your own setting is relatively straightforward.  The first thing you’ll need to decide is your reference Time Zone.  GMT/UTC is the real-life model of a reference Time Zone.  So my recommendation is to reference all of your date/time properties and fields using the built-in UTC support in .NET and SQL Server.  This is optional, but it will make the rest of what I’m about to suggest easier on you.


To reference data in UTC, stop using server time and start using UTC time.  Put another way, stop using DateTime.Now in .NET and GETDATE() in SQL Server, and start using DateTime.UtcNow and GETUTCDATE().  Here is a good article by Scott Mitchell that shows some code examples:


http://aspnet.4guysfromrolla.com/articles/081507-1.aspx


Next, at some point - either before storing or before displaying your data - you will need to convert from your reference Time Zone to your local Time Zone.  This is where the TimeZoneInfo class (new with .NET 3.5) comes into play:


http://msdn.microsoft.com/en-us/library/system.timezoneinfo.aspx



Here is a quick code example which converts the current UTC time to Hawaiian Standard Time:


// Capture the current moment in the reference Time Zone (UTC)
DateTime nowDateTime = DateTime.UtcNow;

// Convert it to the target Time Zone for display
DateTime newDateTime = TimeZoneInfo.ConvertTime(
    nowDateTime,
    TimeZoneInfo.FindSystemTimeZoneById("Hawaiian Standard Time"));

Console.WriteLine("Now (UTC): {0}", nowDateTime);

Console.WriteLine("Now in Hawaii: {0}", newDateTime);


That’s it.  Not as easy as a built-in web.config setting, but this gives you the tools you need to build your own setting.
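For example, here is a minimal sketch of such a home-grown setting (the appSettings key name DisplayTimeZoneId is my own invention): read the target Time Zone from configuration once, and convert UTC values on their way out for display.

using System;
using System.Configuration;  // reference System.Configuration

static class LocalTimeSetting
{
    // web.config / app.config would contain something like:
    //   <appSettings>
    //     <add key="DisplayTimeZoneId" value="Eastern Standard Time" />
    //   </appSettings>
    private static readonly TimeZoneInfo DisplayZone =
        TimeZoneInfo.FindSystemTimeZoneById(
            ConfigurationManager.AppSettings["DisplayTimeZoneId"]);

    // Convert a UTC value (e.g. from DateTime.UtcNow or GETUTCDATE()) for display
    public static DateTime ToDisplayTime(DateTime utcDateTime)
    {
        return TimeZoneInfo.ConvertTimeFromUtc(utcDateTime, DisplayZone);
    }
}

Usage is then a one-liner wherever you display a date: DateTime local = LocalTimeSetting.ToDisplayTime(DateTime.UtcNow);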


If you have applied good separation of concerns and object-oriented application layering principles in your design, then applying this conversion right before you persist application data should be pretty straightforward.




Friday, May 15, 2009

Starting Apache 2.2 Web Server on Mac Leopard 10.5.6

I bought my iMac with Leopard OS pre-installed.  To start Apache, choose System Preferences from the Dock.  Next choose Sharing, and select Web Sharing.


That's it!  Apache Web Server is now running.  Browse to the address below to view your home page:

http://127.0.0.1/

Coming soon:

  • How to harden your iMac Apache Web Server

  • Enabling DDNS

Thursday, May 14, 2009

TriZetto QNXT Hosting

About 18 months ago, my company decided to embark on a core system replacement project.  We sell health insurance, so our core system is used for processing claims, plan and benefit administration, and procuring new members and providers.  There were several reasons why we felt replacing our core system was a good idea.  Our old core system, GE IDX, was not designed for health plan administration, and GE was about to end-of-life their support for this part of their product line.  Also, the landscape of health insurance has been changing in Massachusetts for some time, and we needed a core system that would help us be more agile when introducing changes and stay ahead of an increasingly demanding list of customer needs.

After evaluating a handful of leading vendors in this space, we settled on TriZetto QNXT as our new core system.  We also decided to be the very first TriZetto Hosting customer (if you would like to hear more about how I feel about being “first,” see my profile).  It’s for this reason that I decided that QNXT and TriZetto Hosting would be one of the first topics I would cover in this blog.  I intend to write about:

  • The good, bad, and ugly of our implementation effort

  • The TriZetto Hosting environment

  • The QNXT SDK

  • How we built our own QNXT Web Portal

  • And last, but certainly not least: custom code delivery, promotion and deployment

If you are a TriZetto customer like me, I hope you can relate to all or some of these topics, and that the things I have to write will move you to leave a comment.