
jeverest

In short, the Everest Framework is designed to ease the creation, formatting, and transmission of HL7v3 structures with remote systems.

Home


The "framework" provides a series of consistent, well documented components that, when used together, provide a flexible mechanism for supporting HL7v3 standards within application. Through a combination of automatically generated code and carefully constructed handwritten modules, Everest has the ability to serialize, validate, and transmit structures. Everest comes bundled with basic serialization capabilities for:

  • HL7 Clinical Document Architecture r2
  • HL7v3 Messaging
    • Normative Edition 2008
    • Normative Edition 2010
  • pan-Canadian Messaging Specifications
    • R02.04.01
    • R02.04.02
    • R02.04.03


The serialization assemblies bundled with Everest represent the structures contained within the corresponding MIF files, together with documentation (where license permits), functionality (validation, casting, etc.), and structure meta-data. Additional standards, or documentation for the bundled DLLs, can be generated by processing Model Interchange Format (MIF) files (versions 2.1.2, 2.1.3, 2.1.4, 2.1.5 and 2.1.6) using either the GPMR or GPMR Wizard tools bundled with the framework. The following additional standards are known to work with Everest but are not included:

  • pan-Canadian CeRX 4.3 messaging
  • Universal Normative Edition 2009


Everest currently supports serializing structures to/from the following formats:

  • XML ITS 1.0
  • HL7v3 XML Data Types R1 (UV and CA extensions)
  • HL7v3 XML Data Types R2
  • Binary format


Everest currently supports transporting structures to/from other systems using the following connectors:

  • Windows Communication Foundation (Server/Client mode) (basicHttpBinding, wsHttpBinding, ws2007HttpBinding, netTcpBinding)
  • File Systems (Server/Client)
  • Msmq (Publish only)
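
As a quick sketch of how a connector is used: the endpoint name, the instance variable, and the exact Open/Send/Receive pattern below are assumptions for illustration only; consult the connector documentation for your Everest version.

// Hypothetical sketch only — connection string and call pattern are assumptions
var formatter = new MARC.Everest.Formatters.XML.ITS1.Formatter();
formatter.GraphAides.Add(new MARC.Everest.Formatters.XML.Datatypes.R1.DatatypeFormatter());

var connector = new WcfClientConnector("endpointname=myClientEndpoint"); // assumed endpoint name
connector.Formatter = formatter;
connector.Open();
var sendResult = connector.Send(instance);          // instance: any IGraphable structure you have built
var receiveResult = connector.Receive(sendResult);  // correlate the response with the request
connector.Close();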


The pillars of Everest are:

  • Intuitiveness: All components within Everest are designed to be intuitive to developers. Great care has been taken to reduce the complexity of the framework and allow developers to focus on HL7v3 messaging.
  • Standards Compliance: Being a standards-based framework, one of the foundational pieces is standards compliance. The Everest framework is more than just a serialization engine; it will generate messages, transport them, and validate instances in a standards-compliant manner.
  • Quality: Everest code is held to the highest standard of quality in terms of regression testing and documentation. All changes made to the framework are reviewed for their quality and are subject to over 8,000 tests.
  • Performance: Everest has been designed with long-term performance in mind. Many of the methods within Everest (especially formatting) have the ability to "learn" and become faster the more they are used.
  • Flexibility: Everest has been designed to be flexible enough to support new HL7v3 standards as they are introduced.
Architecture

The MARC-HI Everest Framework is modeled using a very loosely coupled architecture. This design allows application developers to program against one set of HL7v3 models, and serialize/de-serialize to many different Implementable Technology Specification (ITS) formats. The MARC-HI Everest Framework also allows applications to consume or produce these models using a wide array of transport mechanisms.

This flexible architecture ensures that the internal canonical data of your application is safely insulated from changes in the HL7v3 ITS, or transport specifications.
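
As a rough sketch of what this insulation looks like in practice (using the formatter classes discussed later in this document; MCCI_IN000002CA stands in here for any generated model class, and swapping graph aides on a single formatter instance is an assumption for illustration), the same instance can be graphed against different data type representations simply by changing the graph aide:

// Sketch: the model code does not change, only the graph aide does
var instance = new MCCI_IN000002CA(); // any generated interaction/model class
var formatter = new MARC.Everest.Formatters.XML.ITS1.Formatter();
formatter.ValidateConformance = false;

// Graph using HL7v3 XML Data Types R1...
formatter.GraphAides.Add(new MARC.Everest.Formatters.XML.Datatypes.R1.DatatypeFormatter());
formatter.Graph(Console.OpenStandardOutput(), instance);

// ...then switch to Data Types R2 without touching the model
formatter.GraphAides.Clear();
formatter.GraphAides.Add(new MARC.Everest.Formatters.XML.Datatypes.R2.DatatypeR2Formatter());
formatter.Graph(Console.OpenStandardOutput(), instance);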

framework diagram

Continuous Integration Status:

You can view the CI build results of Everest here: http://ci-services.fyfesoftware.ca:8081/job/Everest%20Community%20Build/

 

Collection Shortcuts

The goal of the Everest framework data types is to provide functionality that allows developers to easily construct and interact with the HL7v3 data types. In previous versions of the Everest Framework, creating collections could be difficult. Consider the AD data type, which is nothing more than a collection of ADXP components; in previous versions of Everest, creating this structure would look something like this:


AD homeAddress = new AD(
    new SET<CS<PostalAddressUse>>()
    {
        PostalAddressUse.Alphabetic,
        PostalAddressUse.Direct
    },
    new ADXP[] {
        new ADXP("123 Main Street", AddressPartType.StreetAddressLine),
        new ADXP("West", AddressPartType.Direction),
        new ADXP("Hamilton", AddressPartType.City),
        new ADXP("Ontario", AddressPartType.State),
        new ADXP("Canada", AddressPartType.Country)
    }
);


This code can be quite large and is difficult to follow if not styled properly (indentation is key here). So, to make the construction of sets a little easier, we've added static "creator" methods to each of the collection data types. They are used as follows:


AD homeAddress = AD.CreateAD(
    SET<PostalAddressUse>.CreateSET(
        PostalAddressUse.Alphabetic,
        PostalAddressUse.Direct
    ),
    new ADXP("123 Main Street", AddressPartType.StreetAddressLine),
    new ADXP("West", AddressPartType.Direction),
    new ADXP("Hamilton", AddressPartType.City),
    new ADXP("Ontario", AddressPartType.State),
    new ADXP("Canada", AddressPartType.Country)
);


The benefit of this shortcut is illustrated better with more complex sets such as SXPR and QSET. The following snippet constructs an SXPR representing the numbers {1..10} intersected with the result of a union of {3..5} and {7..9}.


SXPR<INT> result = new SXPR<INT>()
{
    new IVL<INT>(1, 10),
    new SXPR<INT>()
    {
        Operator = SetOperator.Intersect,
        Terms = new LIST<SXCM<INT>>()
        {
            new IVL<INT>(3, 5),
            new IVL<INT>(7, 9)
            {
                Operator = SetOperator.Inclusive
            }
        }
    }
};


Using the new shortcut constructor, the same expression can be written as:


SXPR<INT> result = new SXPR<INT>(
    new IVL<INT>(1, 10),
    new SXPR<INT>(
        new IVL<INT>(3, 5),
        new IVL<INT>(7, 9)
        {
            Operator = SetOperator.Inclusive
        }
    )
    {
        Operator = SetOperator.Intersect
    }
);


These new shortcut methods are intended to assist developers even more than previous Data Types implementations did, and are one of the many improvements in the Everest 1.0 data types library.
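
As an aside, the QSET family mentioned earlier (the Data Types R2 replacement for SXPR) can express the same set. The sketch below is hypothetical: it assumes QSI (intersection) and QSU (union) expose CreateQSI/CreateQSU creator methods analogous to SET.CreateSET, so verify against the API reference for your Everest version.

// Hypothetical sketch — QSI/QSU creator methods are assumed here, not confirmed
QSI<INT> qsetResult = QSI<INT>.CreateQSI(
    new IVL<INT>(1, 10),        // {1..10}
    QSU<INT>.CreateQSU(         // union of...
        new IVL<INT>(3, 5),     // {3..5}
        new IVL<INT>(7, 9)      // {7..9}
    )
);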

SNOMED Expressions

First, I will provide a little bit of an overview. In HL7v3, clinical concepts within messages are represented using one of four different data types (two in the Data Types R2 specification). These are:

| R1 | R2 | Summary |
|----|-------|---------|
| CS | CS | Coded Simple – a simple code where only the code mnemonic is known |
| CV | CD.CV | Coded Value – a more complex code structure whereby the code system from which the mnemonic is taken is unknown at design time |
| CE | CD.CE | Code with Equivalents – a CV instance where translations (or equivalents) can optionally be specified |
| CD | CD | Concept Descriptor – a code mnemonic taken from a code system, optionally with one or more concept roles which qualify the primary code. For example, the code LEFT qualifies FOOT to mean LEFT FOOT |
| CR | N/A | Concept Role – a name/value pair where the value concept qualifies the semantic meaning of the primary mnemonic by way of the named concept |

Wait a minute! Notice some differences? Well, for starters, CV and CE are no longer "proper" types according to Data Types R2; they are flavors of CD. This is an appropriate change, as they remain structurally identical to the R1 structures.


The big change comes in the concept descriptor. Notice how the CR data type is not present in Data Types R2. When I first saw this I thought nothing of it; however, when looking at how each is represented on the wire, the difference is very pronounced.


I'm a code kind of guy, so I thought I would explain this using code. First off, Everest uses a hybrid of DT R1 and R2, so the codified data types in Everest resemble those found in R1 (and are mapped to the appropriate R2 flavors on formatting). With that in mind, let's represent the following example in Everest: "severe burn on the skin between the fourth and fifth toes on the left side".


First, we create the primary code of "burn":


var burnCode = new CD<string>("284196006", "2.16.840.1.113883.6.96") {
    DisplayName = "Burn of Skin",
    CodeSystemName = "SNOMED-CT",
    CodeSystemVersion = "2009"
};

Next, we want to qualify "Burn of Skin" with a severity of "Severe". This is accomplished by creating a CR instance:


// Severity
var severityQualifier = new CR<string>(
    new CV<String>("246112005", "2.16.840.1.113883.6.96")
        { DisplayName = "Severity" },
    new CD<String>("24484000", "2.16.840.1.113883.6.96")
        { DisplayName = "Severe" }
);


Next, our code has a finding site. The burn was located on the skin between the fourth and fifth toes, so once again we create another CR instance:


// Finding Site
var findingSiteQualifier = new CR<String>(
    new CV<String>("363698007", "2.16.840.1.113883.6.96")
             { DisplayName = "Finding Site" },
    new CD<String>("113185004", "2.16.840.1.113883.6.96")
             { DisplayName = "Skin Between fourth and fifth toes" }
);


Next, we want to describe the fact that the burn was found on the skin between the fourth and fifth toes "on the left hand side". Obviously, we want to create another qualifier for this:


// Laterality
var lateralityQualifier = new CR<String>(
    new CV<String>("272741003", "2.16.840.1.113883.6.96")
        { DisplayName = "Laterality" },
    new CD<string>("7771000", "2.16.840.1.113883.6.96")
        { DisplayName = "Left Side" }
);


But how would we structure these concept roles to describe the situation? First we have to look at each term and ask the question, "What does this qualify?" So, for example, does "Laterality of Left Side" qualify the burn? Technically, no; the laterality qualifies the finding site (i.e., we found the burn on the toes on the left-hand side). So we want to add the lateralityQualifier to the findingSiteQualifier's value:


// Laterality applies to the finding site
findingSiteQualifier.Value.Qualifier = new LIST<CR<string>>() { lateralityQualifier };


What does the finding site qualify? Technically, the finding site doesn't qualify the severity; it qualifies the primary code (i.e., the burn was "found on" the skin…), and the same applies to the severity (i.e., we didn't find a severe skin between toes, we found a severe burn). So we add these two qualifiers to the primary code:


// Finding site and severity apply to primary code
burnCode.Qualifier = new LIST<CR<string>>() {
    severityQualifier,
    findingSiteQualifier
};


Now comes the easy part: formatting the data type using the Data Types R1 formatter:


var formatter = new MARC.Everest.Formatters.XML.ITS1.Formatter();
formatter.ValidateConformance = false;
formatter.GraphAides.Add(
    typeof(MARC.Everest.Formatters.XML.Datatypes.R1.DatatypeFormatter)
);

// Setup the writer
StreamWriter sw = new StreamWriter("C:\\temp\\temp.xml");
XmlWriter xw = XmlWriter.Create(sw, new XmlWriterSettings() { Indent = true });
XmlStateWriter xsw = new XmlStateWriter(xw);

// Format and produce the XML file
try
{
    xsw.WriteStartElement("code", "urn:hl7-org:v3");
    xsw.WriteAttributeString("xmlns", "xsi", null, "http://www.w3.org/2001/XMLSchema-instance");
    var p = formatter.Graph(xsw, burnCode);
    xsw.WriteEndElement();
}
finally
{
    xw.Close();
    sw.Flush();
    formatter.Dispose();
}


The output of this is the following XML:

<code code="284196006" codeSystem="2.16.840.1.113883.6.96" codeSystemName="SNOMED-CT" codeSystemVersion="2009" displayName="Burn of Skin">
<qualifier inverted="false">
<name code="246112005" codeSystem="2.16.840.1.113883.6.96"
displayName="Severity" />
      <value code="24484000" codeSystem="2.16.840.1.113883.6.96"
displayName="Severe" />
   </qualifier>
   <qualifier inverted="false">
      <name code="363698007" codeSystem="2.16.840.1.113883.6.96"
displayName="Finding Site" />
<value code="113185004" codeSystem="2.16.840.1.113883.6.96"
displayName="Skin Between fourth and fifth toes">
          <qualifier inverted="false">
              <name code="272741003" codeSystem="2.16.840.1.113883.6.96"
displayName="Laterality" />
<value code="7771000" codeSystem="2.16.840.1.113883.6.96"
displayName="Left Side" />
</qualifier>
      </value>
</qualifier>
</code>


But as I mentioned previously, CR is not supported in DT R2. So the question arises: "How do I qualify a code in HL7v3 DT R2?" Well, the answer is not so simple. In DT R2, the concepts for SNOMED terms are described using an expression language defined by IHTSDO. The SNOMED expression for our scenario is:


284196006|Burn of Skin|:{246112005|Severity|=24484000|Severe|,363698007|Finding Site|=(113185004|Skin Between fourth and fifth toes|:272741003|Laterality|=7771000|Left|)}


Intuitive, right? Not really. So how do I represent this in a CD instance? Well, the answer is really ugly, and in my opinion violates first normal form (I will write an opinion post later about my thoughts on using 1NF in XML and how standards bodies seem to have forgotten it).
Anyway, what is this supposed to look like in DT R2? The answer is below:


<code code="284196006:{246112005=24484000,363698007=(113185004:272741003=7771000)}"
      codeSystem="2.16.840.1.113883.6.96"
      codeSystemName="SNOMED-CT"
       codeSystemVersion="2009">
             <displayName value="Burn of Skin"/>
</code>


I warned you it wasn't pretty. So how do you get Everest to format a concept descriptor like this? Well, the answer is simple: change this line of code:


// Old Line: formatter.GraphAides.Add(typeof(MARC.Everest.Formatters.XML.Datatypes.R1.DatatypeFormatter));
formatter.GraphAides.Add(typeof(MARC.Everest.Formatters.XML.Datatypes.R2.DatatypeR2Formatter));


And Everest will automatically handle the generation of these expressions for SNOMED concepts. Parsing? It is the same. Everest 1.0's data types R2 formatter has been developed so that you are shielded from having to understand the complexities of SNOMED expressions.
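
As a rough illustration of that parsing claim, the sketch below reads the R2-encoded <code> element back into a CD<String> and walks the reconstructed qualifiers. The Parse(XmlReader, Type) overload and the Structure property on its result are assumptions on my part; double-check the exact parsing API against the Everest documentation for your version.

// Hypothetical sketch — the Parse overload and result shape are assumptions, not confirmed API
var formatter = new MARC.Everest.Formatters.XML.ITS1.Formatter();
formatter.GraphAides.Add(typeof(MARC.Everest.Formatters.XML.Datatypes.R2.DatatypeR2Formatter));
using (var xr = XmlReader.Create("C:\\temp\\temp.xml"))
{
    var parseResult = formatter.Parse(xr, typeof(CD<String>)); // assumed overload
    var parsed = parseResult.Structure as CD<String>;
    // Everest has rebuilt the qualifier hierarchy from the SNOMED expression
    foreach (var qualifier in parsed.Qualifier)
        Console.WriteLine("{0} = {1}", qualifier.Name.Code, qualifier.Value.Code);
}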


As a matter of fact, when parsing a SNOMED concept with a SNOMED expression, Everest will construct the appropriate hierarchy of concept roles for you.
What do I think of this change in HL7? I think it was pointless, and it simply over-complicates processing of XML instances. In my opinion, there is no need to mix the hierarchical language of SNOMED expressions as an attribute within the hierarchical container of XML. At least a framework like Everest has enough logic in the formatting of codes to shield the developer from changes like this.
Next time, I'm going to blog about a good change in R2: changes in the continuous set expression data types (SXPR).

Custom SOAP Headers

For this example, I'll show you how to read/write the WS-Addressing headers in a received message from the WCF connectors. First, to access the SOAP headers from a message received from a WcfServerConnector, you can simply access the Headers property on the WcfReceiveResult. The following code is written in the MessageAvailable event handler for a WcfServerConnector:


static void conn_MessageAvailable(object sender, MARC.Everest.Connectors.UnsolicitedDataEventArgs e)
{
    // Get the sending connector that raised the event
    var connector = sender as WcfServerConnector;
    if (connector == null)
        throw new ArgumentException("Must be called from a WcfServerConnector", "sender");

    // Receive the message
    var receiveResult = connector.Receive() as WcfReceiveResult;

    // Pretty standard Everest stuff; next, emit the value of the WS-Addressing headers
    if (receiveResult.Headers != null)
    {
        Console.WriteLine(receiveResult.Headers.To);
        Console.WriteLine(receiveResult.Headers.Action);
    }
}


We can access the Headers collection just like any other WCF message headers; this applies to constructing the response as well. To construct the response, populate the ResponseHeaders on the receiveResult prior to calling Send() on the server connector:


receiveResult.ResponseHeaders = new System.ServiceModel.Channels.MessageHeaders(
    receiveResult.Headers.MessageVersion);
receiveResult.ResponseHeaders.Add(MessageHeader.CreateHeader("myHeader", "urn:my-ns:com", "Value"));
connector.Send(new MCCI_IN000002CA(), receiveResult);


This code will return the following SOAP header:


<tns:myHeader xmlns:tns="urn:my-ns:com">Value</tns:myHeader>


More complex headers can be added the same way you would add standard System.ServiceModel.Channels.MessageHeader objects. It is also possible to send message headers using the overloaded Send() method on the WcfClientConnector:


var conn = new WcfClientConnector();
// trimmed
MessageHeaders messageHeaders = new System.ServiceModel.Channels.MessageHeaders(MessageVersion.Soap12);
messageHeaders.Add(MessageHeader.CreateHeader("myHeader", "urn:my-ns:com", "Value"));
conn.Send(instance, messageHeaders);

Emitting XML Comments

A great question came to me the other day: how does one pretty-up the XML output generated by Everest so that humans can read it? Of course, there are the old tricks of indentation and formatting the output; however, that only gets us so far.


Wouldn't it be great if Everest had the capacity to emit comments in the XML instances? Sadly, this isn't a use case for vanilla Everest; however, there is a way to do this easily in the upcoming 1.2 release of Everest (being released on June 5th, by the way).


Let's say I want to emit a comment that annotates the <acceptAckCode> element, something like this:


<?xml version="1.0" encoding="utf-8"?>
<PRPA_IN101301UV02 ITSVersion="XML_1.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="urn:hl7-org:v3">
  <id root="F043D3BF-02C4-48BF-8C7E-6FFEE7D75B52" />
  <creationTime value="20130531171954.939-0400" />
  <interactionId root="2.16.840.1.113883.1.18" extension="PRPA_IN101301UV02" />
  <processingCode code="T" />
  <processingModeCode code="A" />
  <!--The acknowledgement code-->
  <acceptAckCode code="AL" />
</PRPA_IN101301UV02>

The way I would do this is to add an extension method to IGraphable that allows users to add comments:

public static class CommentExtension
{
    // Process-wide registry mapping graphable instances to their comments
    private static List<KeyValuePair<IGraphable, String>> s_comments = new List<KeyValuePair<IGraphable, string>>();
    private static Object s_syncLock = new object();

    public static void AddComment(this IGraphable me, string comment)
    {
        lock (s_syncLock)
            s_comments.Add(new KeyValuePair<IGraphable, String>(me, comment));
    }

    public static string GetComment(this IGraphable me)
    {
        return s_comments.Find(o => o.Key == me).Value;
    }
}


We can then extend the XmlIts1Formatter and override the WriteElementUtil method to emit the comment added prior to serializing the element:

public class XmlIts1FormatterWithComments : XmlIts1Formatter
{
    public override void WriteElementUtil(System.Xml.XmlWriter s,
               string elementName,
               MARC.Everest.Interfaces.IGraphable g,
               Type propType,
               MARC.Everest.Interfaces.IGraphable context,
               XmlIts1FormatterGraphResult resultContext)
    {
        String comment = g.GetComment();
        if (comment != null)
            s.WriteComment(comment);
        base.WriteElementUtil(s, elementName, g, propType, context, resultContext);
    }
}


Then, using this new formatter we can simply add the comment and format!


PRPA_IN101301UV02 test = new PRPA_IN101301UV02(
    Guid.NewGuid(),
    DateTime.Now,
    PRPA_IN101301UV02.GetInteractionId(),
    ProcessingID.Training,
    ProcessingMode.Archive,
    AcknowledgementCondition.Always);
test.AcceptAckCode.AddComment("The acknowledgement code");
var formatter = new XmlIts1FormatterWithComments();
formatter.GraphAides.Add(new DatatypeFormatter());
formatter.Graph(Console.OpenStandardOutput(), test);
Console.ReadKey();


Hope that helps anyone else with the same problem!