Writing Conformance Tests
The conformance test environment is made up of four components: conformance tool, test engine, simulators, and orchestration. This section gives an overview of these major elements.
The conformance tool is the user interface for conformance testing. It pulls together the capabilities of the test engine, simulators, and orchestration and presents the user with a consistent view. This tool resides entirely in the user interface code running in the browser; it has no server-side elements.
The test engine is the server-side engine for running and evaluating tests. Test definitions are stored in a testkit, a directory containing test definitions. The main testkit resides in the toolkit WAR file and contains all the tests delivered with Toolkit. Users can create their own tests in a testkit that resides in the External Cache.
The test engine implements an actor, or part of an actor, that initiates a transaction. So the focus of the test engine is to send a transaction, receive the response, and grade the response. A transaction can be coded so that it depends on the results of a previous transaction, pulling values from already run transactions and inserting them in new transactions.
Running a test generates a persistent log. When one test depends on another it does so through the log.
Simulators are mini-implementations of actors, or parts of actors, that receive transactions. Said in another, non-IHE way: a simulator is a server and the test engine is a client. Each simulator implements one or more IHE actors. When a simulator implements multiple actors, it is because it is convenient, from a testing perspective, to do so.
A simulator, once created, is always running as long as Toolkit is running. All simulator state is kept in the filesystem (external cache/simdb). Simulators are not destroyed when Toolkit is stopped; they resume operation as soon as Toolkit is restarted - usually by starting Tomcat.
To test a transaction implemented by an actor it is necessary to have another actor to test against. You can think of these other actors as test references. The test engine and a collection of simulators make up the test references. The system under test (SUT) lives among these test references, exchanging messages according to the design of a test. This collection of test references that makes up the test environment is constructed by a process known as orchestration.
The term orchestration comes out of the literature on Service Oriented Architectures, where it is defined as the coordination and arrangement of multiple services exposed as a single aggregate service. Here the multiple services are the test engine and the collection of simulators. Together they construct the test environment around the SUT.
Within the conformance tool, this environment initialization is performed automatically given the actor type and, possibly, the option selected. The initialization includes building supporting simulators, sending Patient Identity Feed messages to Registry actors (either simulator or SUT), and sending messages that load known test data.
A single test is known by its ID and is composed of three layers: step, section, and test.
A step is the sending of a single transaction. A step cannot be executed independently. It is always part of a section and can only be run by running the section.
A section is a collection of steps and the order they are to be executed in. When a section is run it attempts to run all of its steps. A section terminates when all of its steps have been run or one of the steps terminates with an error.
We recommend test authors create test sections containing only a single step. It displays better in the UI that way. If there are two steps (transactions) that must always be executed together that is a good reason to have multiple steps in a section.
A section may be executed independently of other sections in the test, but may be written to depend on other sections. An example of this dependency: the first section submits data and the second section retrieves that data. Without the first, the second must fail.
A test is a collection of sections and the order in which they will be executed. Typically a test has an easy-to-describe purpose. This purpose is displayed in the UI for the user. Sections are the step-wise implementation of that purpose. It is expected that the sections will only be understood once the user comprehends the purpose of the test.
Writing a test starts with the structure of a test definition. A test is defined as a directory with a prescribed structure and contents. We separate the structure of a test from the structure of the testkit holding the test, which is described in a later section.
A test is a directory with the following contents:
- readme.txt (file)
- index.idx (file)
- section (directory)
For example:
The Test/
    readme.txt
    index.idx
    section1/
    section2/
The directory name The Test is the test ID and will be displayed in the UI.
Readme.txt is what it looks like, a README describing the test. This is the documentation for the overall test. There is other documentation for each section of the test as well.
This file is made up of two parts: the first line and the rest of the file. The first line is a short description of the test. In the conformance tool this first line is displayed next to the test name on the test bar. The rest of the file is a description of the test. This will be displayed in the conformance tool when the test is opened - the test bar is clicked.
The one line title, line 1, is interpreted as plain text. The rest of the file is interpreted as either a small subset of Markdown or HTML. The format is determined by examining the file.
Example:
15800/
    readme.txt
with
DocumentEntry Update
Tests ability of Registry to accept a metadata update which changes a few simple attributes of a DocumentEntry. This exercises the basic operation of the Update DocumentEntry Metadata operation as defined in section 3.57.4.1.3.3.1 of the Metadata Update Supplement.
generates the corresponding test description display in the UI (screenshot omitted).
The index.idx file specifies the order in which the sections are executed. The file is formatted as plain text; each line contains one section name.
Example:
original
update
query_by_uniqueid

where each line is the name of a sub-directory holding that section.
This is a directory containing the definition of a section. Here section is not the actual name but a placeholder: each section is a directory and the directory name is the section name. index.idx lists these directory names. A test must contain at least one section.
Example:
original/
    readme.txt
    single_doc.xml
    testplan.xml
Readme.txt is the documentation for the section. It is different in that the first line is not special. The entire file is interpreted as Markdown or HTML.

testplan.xml contains the execution instructions for the section.
A section (directory) contains the following mandatory files:
- readme.txt
- testplan.xml
It may contain other files. Usually testplan.xml references other files. The files it references are to be found in this directory.
This file contains text in a subset of Markdown syntax or HTML. In the conformance tool this text is displayed when the section bar is clicked and opened. Unlike the test-level readme.txt, the first line is not special. The exact format of the file is determined by inspection (if it appears to be HTML it is interpreted as HTML).
The detailed execution of the section is coded in this XML file. It is essentially a script and the test engine is the interpreter of the script.
The outer wrapper of XML looks like
<TestPlan>
<Test>11966/submit</Test> <!--(1)-->
<!--(2)-->
</TestPlan>
(1) Define the test name and section. Note that the test name and section are defined here AND in the directory structure. The directory structure is used in execution. Mis-naming here is really confusing for users. Also note the structure of the name: testname/sectionname. This is used in many places in the test engine.
(2) Steps go here. Test steps are executed in the order listed in the TestPlan.
A testplan contains one or more steps.
<TestStep id="submit"> <!--(1)-->
<Goal> <!--(2)-->
</Goal>
<ExpectedStatus>Success</ExpectedStatus> <!--(3)-->
<ProvideAndRegisterTransaction> <!--(4)-->
<MetadataFile>filename</MetadataFile> <!--(5)-->
<Document id="Document01">filename</Document> <!--(6)-->
</ProvideAndRegisterTransaction>
</TestStep>
(1) Test steps are always named; TestStep/@id specifies the name.
(2) This is displayed in the conformance tool when you open the step. The goal is formatted text (Markdown subset).
(3) Success, Failure, or PartialSuccess. The expected status encodes your expectations; if a different status is received, an error is generated.
(4) Identifies the transaction; Provide and Register is shown in this example.
(5) File containing the metadata. The metadata file is the XML file containing the metadata template. More on this later.
(6) File containing a document. This example is a Provide and Register transaction, which is used to submit documents. To add documents to the test step, store each in its own file and reference it in a Document element. The filename is always a local filename, interpreted relative to the section directory. The id attribute is the XML id attribute of the DocumentEntry (ExtrinsicObject) associated with this content. The Document instruction is optional and most transactions do not use it.
All instructions shown, other than Document, are required.
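Pulling these pieces together, a minimal complete testplan.xml for a hypothetical one-step section might look like the following sketch. The test name (MyTest/submit), step id, and filenames are illustrative only, not part of toolkit:

<TestPlan>
    <Test>MyTest/submit</Test> <!-- hypothetical; must match the test and section directory names -->
    <TestStep id="submit_doc">
        <Goal>Submit a single document and verify the transaction succeeds</Goal>
        <ExpectedStatus>Success</ExpectedStatus>
        <ProvideAndRegisterTransaction>
            <MetadataFile>single_doc.xml</MetadataFile> <!-- metadata template stored in this section directory -->
            <Document id="Document01">my_document.txt</Document> <!-- id ties this file to an ExtrinsicObject in the metadata (see below) -->
        </ProvideAndRegisterTransaction>
    </TestStep>
</TestPlan>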
Here is an example of the connection between the MetadataFile and Document elements.
<Document id="Document01">filename</Document> <!--(1)-->
(1) Element of the TestPlan (testplan.xml).
<rs:SubmitObjectsRequest xmlns:rs="urn:oasis:names:tc:ebxml-regrep:registry:xsd:3.0">
<rim:LeafRegistryObjectList xmlns:rim="urn:oasis:names:tc:ebxml-regrep:rim:xsd:3.0">
<rim:ExtrinsicObject
id="Document01" <!--(1)-->
objectType="urn:uuid:7edca82f-054d-47f2-a032-9b2a5b5186c1"
mimeType="text/plain">
...
</rim:ExtrinsicObject>
</rim:LeafRegistryObjectList>
</rs:SubmitObjectsRequest>
(1) Note the same ID - Document01.
The metadata files in the Testkit are templates for real messages. Here is the processing that occurs before the message is sent on the wire.
Patient ID management - Patient IDs are controlled by Orchestration using the UseId feature of Testplan. Basically, a Patient ID is created and submitted via a Testplan in the orchestration process. That Testplan publishes the ID via either the UseId or Report mechanism. Testplans used in tests reference those Patient IDs.
Note: The term *Test* is used in two different ways here. A *Test* can be used as a utility to send data to a system under test or simulator to generate an initial state. In this case the Test is really just a utility based on the Testplan, with the test engine as the interpreter of the Testplan. The second form of the term Test is a real test - a challenge you want to succeed at - which is structurally identical but has a different purpose. Of course this real test probably has some added assertions that validate its operation.
UUID management - by default the test engine sends the entryUUID attributes as they are coded in the Testplan. This can be changed by using the <AssignUuids/> instruction within the Test Step. This instruction causes unique UUIDs to be generated and inserted into the submission.
SubmissionSet.sourceId management - Toolkit has a single hardcoded sourceId that it uses in all submissions. The value is stored in the WAR file at toolkitx/xdstest/sourceid.txt
DocumentEntry.uniqueId management - as with UUIDs, these values are uniquely generated at runtime.
Metadata attribute ordering - Metadata attributes (the content inside an ExtrinsicObject for example) are automatically sorted into Schema order before submission. Also, many of the older templates are from the XDS.a era. These will be automatically converted to XDS.b rules before sending.
All transactions define unique parameters that are used to code the transaction. There is also a base set that all transactions understand. First, the base set.
All transactions inherit from Basic Transaction. It offers the following tags:
MetadataFile - file holding the transaction contents (PnR, SQ, etc). Except for FHIR based content this is always XML.
AssignUuids - By convention ebRIM metadata is authored with symbolic IDs that are eventually translated to UUIDs by the receiving system. If this instruction is missing or has value false the symbolic IDs are submitted as coded. If true then UUIDs are generated and replace the ID attributes in a way consistent with ebRIM.
NoAssignUids - XDS/XCA require Documents be assigned DocumentEntry.uniqueIds. This happens by default. If the parameter is true then UniqueIds are not assigned which means the value coded in the XML is used.
NoConvert - By default ebRIM metadata is cleaned up and converted to ebRIM 3.0 (tests can be coded in ebRIM 2.0). The conversion includes correcting Schema-relevant attribute ordering. If true, this cleanup is not performed. This can be used to generate invalid/mis-formatted requests.
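As a sketch of how these base instructions are coded inside a transaction element (the step id, filename, and value forms are assumptions, not toolkit requirements):

<TestStep id="submit_unconverted">
    <ExpectedStatus>Success</ExpectedStatus>
    <ProvideAndRegisterTransaction>
        <MetadataFile>raw_metadata.xml</MetadataFile> <!-- hypothetical template -->
        <AssignUuids/> <!-- generated UUIDs replace the symbolic IDs -->
        <NoConvert>true</NoConvert> <!-- send the template as coded, skipping cleanup/conversion -->
    </ProvideAndRegisterTransaction>
</TestStep>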
Report - Report/UseReport is a pub/sub mechanism for sharing details between tests. A test can publish a name/value pair using Report. A later test can consume this value using UseReport. If a UseReport instruction is executed referencing a test/section that has not been run (no log.xml file) then a test execution error occurs. Report is always coded inside a Transaction element (StoredQueryTransaction for example).
<Report name="repuid" section="Result">
//*[local-name()='ExtrinsicObject']/*[local-name()='Slot'][@name='repositoryUniqueId']/*[local-name()='ValueList']/*[local-name()='Value']
</Report>
This generates a Report in the log file with the name repuid. To generate the value it scans the Result section of the log.xml file and runs the specified XPath to extract a value.
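Since Report is always coded inside a Transaction element, a sketch of its placement in a stored query step might look like this (the step id and query template filename are assumptions):

<TestStep id="query_for_repuid">
    <ExpectedStatus>Success</ExpectedStatus>
    <StoredQueryTransaction>
        <MetadataFile>query.xml</MetadataFile> <!-- hypothetical stored query template -->
        <Report name="repuid" section="Result">
            //*[local-name()='ExtrinsicObject']/*[local-name()='Slot'][@name='repositoryUniqueId']/*[local-name()='ValueList']/*[local-name()='Value']
        </Report>
    </StoredQueryTransaction>
</TestStep>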
UseReport - references a Report. The value published in the Report is grabbed and inserted into the message being generated.
<UseReport
reportName="repuid" <!-- 1 -->
test="1111" <!-- 2 -->
section="original" <!-- 3 -->
step="original" <!-- 4 -->
useAs="orig_uuid"/> <!-- 5 -->
(1) References a named report.
(2) Test the Report is to be found in (this attribute is optional - if missing, the current test is searched).
(3) Test section to look in.
(4) Test step to look in.
(5) Text in the message template to be replaced. This is sometimes coded as $name$ to make sure it is unique; the dollar signs are part of the name. When the replacement is done, only XML element values and attribute values are scanned.
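To illustrate useAs, here is a sketch of a fragment from a hypothetical GetDocuments query template. The literal text orig_uuid is the replacement target; at runtime the test engine substitutes the value published by the Report:

<rim:AdhocQuery id="urn:uuid:5c4f972b-d56b-40ac-a5fc-c8ca9b40b9d4"
        xmlns:rim="urn:oasis:names:tc:ebxml-regrep:rim:xsd:3.0"> <!-- GetDocuments stored query -->
    <rim:Slot name="$XDSDocumentEntryEntryUUID">
        <rim:ValueList>
            <rim:Value>('orig_uuid')</rim:Value> <!-- orig_uuid is replaced by the reported value -->
        </rim:ValueList>
    </rim:Slot>
</rim:AdhocQuery>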
ParseMetadata - If false no attempt is made to parse or interpret the metadata. Default is true.
NoMetadata - all metadata preparations are bypassed.
SOAPHeader - contains one or more valid XML elements that are added to the SOAP header.
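A sketch of SOAPHeader usage; the header element and its namespace are invented purely for illustration:

<ProvideAndRegisterTransaction>
    <MetadataFile>single_doc.xml</MetadataFile>
    <SOAPHeader>
        <ex:TestHeader xmlns:ex="urn:example:headers">value</ex:TestHeader> <!-- hypothetical element; any valid XML may appear here -->
    </SOAPHeader>
</ProvideAndRegisterTransaction>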
UseId - An older Report/UseReport mechanism that is only applied to IDs (UUID, uniqueId).
Executing a Testplan automatically generates a listing of the IDs that were generated for it. This content is put in log.xml. An example of the listing is:
<AssignedPatientId>
<Assign
symbol="Document01"
id="P0608085456.4^^^&1.3.6.1.4.1.21367.13.20.1000&ISO"/>
<Assign
symbol="SubmissionSet01"
id="P0608085456.4^^^&1.3.6.1.4.1.21367.13.20.1000&ISO"/>
</AssignedPatientId>
<AssignedUids>
<Assign
symbol="Document01"
id="1.2.42.20180608085458.2"/>
<Assign
symbol="SubmissionSet01"
id="1.2.42.20180608085458.3"/>
</AssignedUids>
<AssignedSourceId>
<Assign
symbol="SubmissionSet01"
id="1.3.6.1.4.1.21367.4"/>
</AssignedSourceId>
<AssignedUuids>
<Assign
symbol="Document01"
id="urn:uuid:d72e4b81-9c21-41db-8129-8147b6f627d1"/>
<Assign
symbol="SubmissionSet01"
id="urn:uuid:8785810f-a83f-4737-8df2-c2972ca9a06f"/>
<Assign
symbol="ID_1358943916_2"
id="urn:uuid:aac6d649-6e52-4e79-8bb0-cf54aad8a2c2"/>
<Assign
symbol="ID_1358943916_1"
id="urn:uuid:6ff66b5b-7cc2-4808-99a6-e1164f7cc412"/>
</AssignedUuids>
To make use of one of these IDs, code a UseId statement:
<UseId
testdir="../original" <!-- 1 -->
id="Document01" <!-- 2 -->
symbol="$uid$" <!-- 3 -->
step="original" <!-- 4 -->
section="AssignedUids"/> <!-- 5 -->
(1) Relative directory path between this test/section and the test/section holding the log of the other test.
(2) The ID to extract from the other test.
(3) Symbol to replace in our test.
(4) Step in the other test.
(5) Section of the log output to pull the ID from.
UseRepositoryUniqueId - used in a Retrieve transaction. Points to a step result/metadata from which a repositoryUniqueId should be extracted for use in the Retrieve.
Assertions - validations run on test results.
<Assertions>
    <DataRef file="THIS" as="output"/> <!--(1)-->
    <Assert id="same_logicalId"> <!--(2)-->
        count(//*[local-name()='ExtrinsicObject'][@lid="orig_uuid"]) = 2 <!--(3)-->
    </Assert>
</Assertions>
(1) Use this DataRef statement as shown. It indicates that the assertions reference content in the current log file.
(2) Assert statement - the id attribute shows up in the log and the display, so make the name helpful.
(3) XPath statement to evaluate against the <Result> section of the log.
XDSb - Back when XDS.a was a thing, this instruction triggered SOAP and metadata element ordering according to XDS.b rules. XDS.a is deprecated and XDSb is now the default.
NoPatientId - By default the specified Patient ID is inserted into all the necessary places. This instruction says not to alter the template in this way.
WaitBefore - milliseconds of delay to pause before sending this transaction.
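For example, a sketch of a query step that pauses five seconds before sending (assuming WaitBefore, like the other base tags, is coded inside the transaction element; the filename is illustrative):

<StoredQueryTransaction>
    <WaitBefore>5000</WaitBefore> <!-- delay in milliseconds before the transaction is sent -->
    <MetadataFile>query.xml</MetadataFile> <!-- hypothetical query template -->
</StoredQueryTransaction>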
Tests are composed as described above. To make a test appear in the Conformance tool you must also code which Profile/Actor/Transaction the test applies to; this is described in its own section.
This section documents what could be considered best practices for test design in toolkit.
- Write all your tests in the external cache. The Toolkit Configuration tool can be used to build a testkit in your environment. Write all your tests here even if they will later be integrated into the internal testkit. Why? Because you can edit/save/run these tests very quickly. If they reside in the internal testkit you have to redeploy Toolkit between edit and run - at least that is the way WAR deploys operate in IntelliJ and Eclipse.
- Make sure all the actors and transactions you need are constructed first. Again, cycling between big builds and test writing is time consuming.
- Link your new tests to an actor/option in the Conformance tool early. This allows them to be run in the tool as you write them. Even if you partition up the tests wrong the first time, they are easy to move.
- There are two major parts to the Conformance tool - orchestration and tests. Orchestration builds the test environment for a collection of tests. This usually consists of creating and submitting Patient IDs and loading test data. It is more painful to write orchestration steps than actual tests, so think through what you need. Orchestration setup requires coding Java/Groovy and rebuilding Toolkit, so if you can, get all your orchestration done first and then build tests.
- A test should be an independent thing. Order of execution of tests should not be depended on. If you have several things that must be done in order then you should be thinking about one of two approaches. First, the things you depend on could be done in orchestration. These should be low risk - orchestration should never fail and should be very generic. If your idea does not fit here then you should be looking at multi-part tests.
- Most tests are multi-part tests. A well-written test is a collection of sections, each with a single step. Let's review: a step runs a single transaction. A section can contain multiple steps. You can choose to run, or re-run, a section. When a section runs it runs all the steps it contains, in order. If you write a section with multiple steps, that means you never want those steps run on their own - always together. Sections can be run independently but always in order. If you have 5 transactions to run to accomplish a test, start your organization as a single test with 5 sections, each section containing a single step (see the sketch after this list).
- Another consideration: if the user gets stuck because his application is broken, he can run a section over and over until his code is fixed. If a section is a single step (and therefore a single transaction) this makes system debugging easier.
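As a sketch of that five-section organization (all names hypothetical):

MyTest/
    readme.txt
    index.idx        (lists: submit, update, query, retrieve, verify)
    submit/
        readme.txt
        testplan.xml
    update/
        readme.txt
        testplan.xml
    query/
    retrieve/
    verify/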