Enceladus TestUtils

Hermes is an E2E testing tool created mainly for use in the ABSA OSS ecosystem, but it also provides a number of generic tools and utilities that are usable in other projects. For more information, please look at our Hermes GitHub Pages.

To Build

Use either of the commands below, depending on your Scala and Spark versions.

sbt ++2.11.12 assembly -DSPARK_VERSION=2.4.7
sbt ++2.12.12 assembly -DSPARK_VERSION=3.2.2

Known to work with:

  • Spark 2.4.2 - 3.2.2 [1]
  • Java 1.8.0_191-b12
  • Scala 2.11.12 and 2.12.12

[1] There are now Spark version guards in place to protect from false positives. If someone is willing to test older versions, we are happy to extend them. These guards apply only when Hermes is used as a Spark job, not as a library.

How to generate Code coverage report

sbt ++2.11.12 jacoco -DSPARK_VERSION=2.4.7
sbt ++2.12.12 jacoco -DSPARK_VERSION=3.2.2

The code coverage report will be generated at:

{project-root}/{module}/target/scala-{scala_version}/jacoco/report/html
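
For illustration, building a module named datasetComparison (the module name here is just an example) with Scala 2.11 would place the HTML report at approximately:

{project-root}/datasetComparison/target/scala-2.11/jacoco/report/html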

Dataset Comparison

A Spark job for comparing data sets. Because it leverages Spark, there are almost no limitations on data sources or the size of the data.

Running

Basic running example

spark-submit \
/path/to/jar/file \
--format <format of the reference and new data sets> \
--new-path /path/to/new/data/set \
--ref-path /path/to/referential/data/set \
--out-path /path/to/diff/output \
--keys key1,key2,key3

Where

Datasets Comparison
Usage: spark-submit [spark options] --class za.co.absa.hermes.datasetComparison.DatasetComparisonJob hermes.jar [options]

  --[ref|new]-format            Format of the raw data (csv, xml, parquet, fixed-width, etc.). Use the prefix only
                                    when comparing two different formats. Mandatory.
  --new-path|--new-dbtable      Path to the new dataset or dbtable (jdbc), just generated and to be tested. Mandatory.
  --ref-path|--ref-dbtable      Path to the supposedly correct data set or dbtable (jdbc). Mandatory.
  --out-path                    Path where the `ComparisonJob` will save the differences.
                                    This effectively creates a folder in which you will find two
                                    other folders, expected_minus_actual and actual_minus_expected.
                                    Both hold parquet data sets of differences (minus as in
                                    relative complement). Mandatory.
  --keys                        If there are known unique keys, they can be specified for better
                                   output. Keys should be specified one by one, with a , (comma)
                                   between them. Optional.
  others                        Other options depend on the selected format (e.g. --delimiter and --header for
                                   csv, --rowTag for xml). When comparing two different formats, use the ref|new prefix
                                   for each of these options. For more information, check Spark's documentation for
                                   the options available for the format you are using. Optional.

  --help                        Prints a text similar to this one.


Other configurations are Spark dependent and are out of the scope of this README.
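
For example, to compare a reference data set stored as csv with a newly produced parquet data set, the format-specific options get the ref|new prefix described above. The following is only a sketch with illustrative paths and values, assuming the prefixed csv options (--ref-delimiter, --ref-header) behave as described in the usage text:

spark-submit \
--class za.co.absa.hermes.datasetComparison.DatasetComparisonJob \
/path/to/jar/file \
--ref-format csv \
--ref-delimiter "," \
--ref-header true \
--new-format parquet \
--ref-path /path/to/referential/data/set \
--new-path /path/to/new/data/set \
--out-path /path/to/diff/output \
--keys key1,key2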

Info File Comparison

Atum's (and its derivatives') Info file comparison. It runs as part of the E2E Runner, but can also be run as a plain old jar file.

Running

Basic running example

java -jar \
/path/to/jar/file \
--new-path /path/to/new/data/set \
--ref-path /path/to/referential/data/set \
--out-path /path/to/diff/output

E2E Runner

Currently runs both Standardization and Conformance of the [Enceladus][enceladus] project on the data provided. After each, a comparison job is run to check the results against expected reference data.

This tool is planned for an upgrade in the near future to become a general E2E Runner for user-defined runs.

Basic running example:

spark-submit \
/path/to/jar/file \
--menas-credentials-file /path/to/credentials/file \
--dataset-name <datasetName> \
--dataset-version <datasetVersion> \
--report-date <reportData> \
--report-version <reportVersion> \
--raw-format <rawFormat> \
--keys <key1,key2,...>