Create PFB-based verbatim manifest format (#6040)
nadove-ucsc committed Apr 12, 2024
1 parent b18acc2 commit ae149dc
Showing 10 changed files with 1,133 additions and 14 deletions.
6 changes: 5 additions & 1 deletion lambdas/service/app.py
@@ -228,7 +228,7 @@
# changes and reset the minor version to zero. Otherwise, increment only
# the minor version for backwards compatible changes. A backwards
# compatible change is one that does not require updates to clients.
'version': '7.2'
'version': '7.3'
},
'tags': [
{
@@ -1414,6 +1414,10 @@ def manifest_route(*, fetch: bool, initiate: bool):
manifest in [JSONL][5] format. Each line contains an
unaltered metadata entity from the underlying repository.
- `{ManifestFormat.verbatim_pfb.value}` for a verbatim
manifest in the [PFB format][3]. This format is mainly
used for exporting data to Terra.
[1]: https://bd2k.ini.usc.edu/tools/bdbag/
[2]: https://software.broadinstitute.org/firecloud/documentation/article?id=10954
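The docstring above enumerates the manifest formats exposed by the endpoint. As a rough illustration, the sketch below requests the new `verbatim.pfb` format; the deployment URL is hypothetical, and the use of PUT plus the `Status`/`Location`/`Retry-After` keys reflect the general convention of the fetch-style endpoints rather than anything specified in this commit.

```python
# Sketch: requesting a verbatim PFB manifest from the Azul service.
# The service URL is hypothetical; the /fetch/manifest/files path and the
# Status/Location/Retry-After polling keys are assumed conventions of the
# fetch-style endpoints and may differ in a given deployment.
import json
import time

import requests

SERVICE = 'https://service.example.azul.org'  # hypothetical deployment

params = {
    'catalog': 'dcp2',            # default catalog per the API description
    'format': 'verbatim.pfb',     # the format added by this commit
    'filters': json.dumps({}),    # empty filters: all files in the catalog
}
body = requests.put(f'{SERVICE}/fetch/manifest/files', params=params).json()
while body.get('Status') == 301:  # manifest still being generated (assumed)
    time.sleep(body.get('Retry-After', 10))
    body = requests.put(body['Location']).json()
print('Download the PFB from:', body['Location'])
```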
12 changes: 7 additions & 5 deletions lambdas/service/openapi.json
@@ -3,7 +3,7 @@
"info": {
"title": "azul_service",
"description": "\n# Overview\n\nAzul is a REST web service for querying metadata associated with\nboth experimental and analysis data from a data repository. In order\nto deliver response times that make it suitable for interactive use\ncases, the set of metadata properties that it exposes for sorting,\nfiltering, and aggregation is limited. Azul provides a uniform view\nof the metadata over a range of diverse schemas, effectively\nshielding clients from changes in the schemas as they occur over\ntime. It does so, however, at the expense of detail in the set of\nmetadata properties it exposes and in the accuracy with which it\naggregates them.\n\nAzul denormalizes and aggregates metadata into several different\nindices for selected entity types. Metadata entities can be queried\nusing the [Index](#operations-tag-Index) endpoints.\n\nA set of indices forms a catalog. There is a default catalog called\n`dcp2` which will be used unless a\ndifferent catalog name is specified using the `catalog` query\nparameter. Metadata from different catalogs is completely\nindependent: a response obtained by querying one catalog does not\nnecessarily correlate to a response obtained by querying another\none. Two catalogs can contain metadata from the same sources or\ndifferent sources. It is only guaranteed that the body of a\nresponse by any given endpoint adheres to one schema,\nindependently of which catalog was specified in the request.\n\nAzul provides the ability to download data and metadata via the\n[Manifests](#operations-tag-Manifests) endpoints. The\n`curl` format manifests can be used to\ndownload data files. Other formats provide various views of the\nmetadata. Manifests can be generated for a selection of files using\nfilters. These filters are interchangeable with the filters used by\nthe [Index](#operations-tag-Index) endpoints.\n\nAzul also provides a [summary](#operations-Index-get_index_summary)\nview of indexed data.\n\n## Data model\n\nAny index, when queried, returns a JSON array of hits. Each hit\nrepresents a metadata entity. Nested in each hit is a summary of the\nproperties of entities associated with the hit. An entity is\nassociated either by a direct edge in the original metadata graph,\nor indirectly as a series of edges. The nested properties are\ngrouped by the type of the associated entity. The properties of all\ndata files associated with a particular sample, for example, are\nlisted under `hits[*].files` in a `/index/samples` response. It is\nimportant to note that while each _hit_ represents a discrete\nentity, the properties nested within that hit are the result of an\naggregation over potentially many associated entities.\n\nTo illustrate this, consider a data file that is part of two\nprojects (a project is a group of related experiments, typically by\none laboratory, institution or consortium). Querying the `files`\nindex for this file yields a hit looking something like:\n\n```\n{\n \"projects\": [\n {\n \"projectTitle\": \"Project One\"\n \"laboratory\": ...,\n ...\n },\n {\n \"projectTitle\": \"Project Two\"\n \"laboratory\": ...,\n ...\n }\n ],\n \"files\": [\n {\n \"format\": \"pdf\",\n \"name\": \"Team description.pdf\",\n ...\n }\n ]\n}\n```\n\nThis example hit contains two kinds of nested entities (a hit in an\nactual Azul response will contain more): There are the two projects\nentities, and the file itself. These nested entities contain\nselected metadata properties extracted in a consistent way. 
This\nmakes filtering and sorting simple.\n\nAlso notice that there is only one file. When querying a particular\nindex, the corresponding entity will always be a singleton like\nthis.\n",
"version": "7.2"
"version": "7.3"
},
"tags": [
{
@@ -9479,10 +9479,11 @@
"terra.bdbag",
"terra.pfb",
"curl",
"verbatim.jsonl"
"verbatim.jsonl",
"verbatim.pfb"
]
},
"description": "\nThe desired format of the output.\n\n- `compact` (the default) for a compact,\n tab-separated manifest\n\n- `terra.bdbag` for a manifest in the\n [BDBag format][1]. This provides a ZIP file containing two\n manifests: one for Participants (aka Donors) and one for\n Samples (aka Specimens). For more on the format of the\n manifests see [documentation here][2].\n\n- `terra.pfb` for a manifest in the [PFB\n format][3]. This format is mainly used for exporting data to\n Terra.\n\n- `curl` for a [curl configuration\n file][4] manifest. This manifest can be used with the curl\n program to download all the files listed in the manifest.\n\n- `verbatim.jsonl` for a verbatim\n manifest in [JSONL][5] format. Each line contains an\n unaltered metadata entity from the underlying repository.\n\n[1]: https://bd2k.ini.usc.edu/tools/bdbag/\n\n[2]: https://software.broadinstitute.org/firecloud/documentation/article?id=10954\n\n[3]: https://github.com/uc-cdis/pypfb\n\n[4]: https://curl.haxx.se/docs/manpage.html#-K\n\n[5]: https://jsonlines.org/\n"
"description": "\nThe desired format of the output.\n\n- `compact` (the default) for a compact,\n tab-separated manifest\n\n- `terra.bdbag` for a manifest in the\n [BDBag format][1]. This provides a ZIP file containing two\n manifests: one for Participants (aka Donors) and one for\n Samples (aka Specimens). For more on the format of the\n manifests see [documentation here][2].\n\n- `terra.pfb` for a manifest in the [PFB\n format][3]. This format is mainly used for exporting data to\n Terra.\n\n- `curl` for a [curl configuration\n file][4] manifest. This manifest can be used with the curl\n program to download all the files listed in the manifest.\n\n- `verbatim.jsonl` for a verbatim\n manifest in [JSONL][5] format. Each line contains an\n unaltered metadata entity from the underlying repository.\n\n- `verbatim.pfb` for a verbatim\n manifest in the [PFB format][3]. This format is mainly\n used for exporting data to Terra.\n\n[1]: https://bd2k.ini.usc.edu/tools/bdbag/\n\n[2]: https://software.broadinstitute.org/firecloud/documentation/article?id=10954\n\n[3]: https://github.com/uc-cdis/pypfb\n\n[4]: https://curl.haxx.se/docs/manpage.html#-K\n\n[5]: https://jsonlines.org/\n"
}
],
"responses": {
@@ -10887,10 +10888,11 @@
"terra.bdbag",
"terra.pfb",
"curl",
"verbatim.jsonl"
"verbatim.jsonl",
"verbatim.pfb"
]
},
"description": "\nThe desired format of the output.\n\n- `compact` (the default) for a compact,\n tab-separated manifest\n\n- `terra.bdbag` for a manifest in the\n [BDBag format][1]. This provides a ZIP file containing two\n manifests: one for Participants (aka Donors) and one for\n Samples (aka Specimens). For more on the format of the\n manifests see [documentation here][2].\n\n- `terra.pfb` for a manifest in the [PFB\n format][3]. This format is mainly used for exporting data to\n Terra.\n\n- `curl` for a [curl configuration\n file][4] manifest. This manifest can be used with the curl\n program to download all the files listed in the manifest.\n\n- `verbatim.jsonl` for a verbatim\n manifest in [JSONL][5] format. Each line contains an\n unaltered metadata entity from the underlying repository.\n\n[1]: https://bd2k.ini.usc.edu/tools/bdbag/\n\n[2]: https://software.broadinstitute.org/firecloud/documentation/article?id=10954\n\n[3]: https://github.com/uc-cdis/pypfb\n\n[4]: https://curl.haxx.se/docs/manpage.html#-K\n\n[5]: https://jsonlines.org/\n"
"description": "\nThe desired format of the output.\n\n- `compact` (the default) for a compact,\n tab-separated manifest\n\n- `terra.bdbag` for a manifest in the\n [BDBag format][1]. This provides a ZIP file containing two\n manifests: one for Participants (aka Donors) and one for\n Samples (aka Specimens). For more on the format of the\n manifests see [documentation here][2].\n\n- `terra.pfb` for a manifest in the [PFB\n format][3]. This format is mainly used for exporting data to\n Terra.\n\n- `curl` for a [curl configuration\n file][4] manifest. This manifest can be used with the curl\n program to download all the files listed in the manifest.\n\n- `verbatim.jsonl` for a verbatim\n manifest in [JSONL][5] format. Each line contains an\n unaltered metadata entity from the underlying repository.\n\n- `verbatim.pfb` for a verbatim\n manifest in the [PFB format][3]. This format is mainly\n used for exporting data to Terra.\n\n[1]: https://bd2k.ini.usc.edu/tools/bdbag/\n\n[2]: https://software.broadinstitute.org/firecloud/documentation/article?id=10954\n\n[3]: https://github.com/uc-cdis/pypfb\n\n[4]: https://curl.haxx.se/docs/manpage.html#-K\n\n[5]: https://jsonlines.org/\n"
}
],
"responses": {
1 change: 1 addition & 0 deletions src/azul/plugins/__init__.py
@@ -142,6 +142,7 @@ class ManifestFormat(Enum):
terra_pfb = 'terra.pfb'
curl = 'curl'
verbatim_jsonl = 'verbatim.jsonl'
verbatim_pfb = 'verbatim.pfb'


T = TypeVar('T', bound='Plugin')
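Since the enum member's value is exactly the string clients pass in the `format` query parameter, the new member round-trips through the usual `Enum` lookup; a minimal sketch:

```python
from azul.plugins import ManifestFormat

# The member's value is the wire-format string used in the `format` parameter.
assert ManifestFormat.verbatim_pfb.value == 'verbatim.pfb'
assert ManifestFormat('verbatim.pfb') is ManifestFormat.verbatim_pfb
```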
5 changes: 4 additions & 1 deletion src/azul/plugins/metadata/anvil/__init__.py
@@ -75,7 +75,10 @@ def manifest_formats(self) -> Sequence[ManifestFormat]:
return [
ManifestFormat.compact,
ManifestFormat.terra_pfb,
*iif(config.enable_replicas, [ManifestFormat.verbatim_jsonl])
*iif(config.enable_replicas, [
ManifestFormat.verbatim_jsonl,
ManifestFormat.verbatim_pfb
])
]

def transformer_types(self) -> Iterable[Type[BaseTransformer]]:
139 changes: 133 additions & 6 deletions src/azul/service/avro_pfb.py
@@ -1,3 +1,4 @@
import bisect
from collections import (
defaultdict,
)
@@ -16,6 +17,7 @@
ClassVar,
MutableSet,
Self,
Sequence,
)
from uuid import (
UUID,
@@ -58,6 +60,7 @@
value_and_unit,
)
from azul.types import (
AnyMutableJSON,
JSON,
MutableJSON,
)
@@ -189,6 +192,17 @@ def from_json(cls,
id_ = _reversible_join('.', map(str, (name, id_, len(ids))))
return cls(id=id_, name=name, object=object_)

@classmethod
def for_replica(cls, replica: MutableJSON, schema: JSON) -> Self:
name, object_ = replica['replica_type'], replica['contents']
cls._add_missing_fields(name, object_, schema)
# Note that it is possible for two distinct replicas to have the same
# entity ID. For example, replicas representing the DUOS registration
# of AnVIL datasets have the same ID as the replica for the dataset
# itself. Terra appears to combine PFB entities with the same ID
# into a single row.
return cls(id=replica['entity_id'], name=name, object=object_)

@classmethod
def _add_missing_fields(cls, name: str, object_: MutableJSON, schema):
"""
@@ -215,6 +229,8 @@ def _add_missing_fields(cls, name: str, object_: MutableJSON, schema):
else:
assert 'null' in field_type['items'], field
default_value = [None]
elif field_type == 'null':
default_value = None
else:
assert False, field
object_[field_name] = default_value
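
For context, here is a hypothetical replica document of the shape `for_replica` consumes; only the `entity_id`, `replica_type` and `contents` keys come from the code above, the values are invented:

```python
# Hypothetical replica document; for_replica() reads these three keys.
replica = {
    'entity_id': '1509ef40-d1ba-440d-b298-16b7c173dcd4',
    'replica_type': 'anvil_file',
    'contents': {
        'file_format': '.vcf.gz',
        'data_modality': [],
    },
}
# for_replica() fills in any fields the schema declares but `contents` lacks,
# then constructs the entity verbatim from the document:
#
#     PFBEntity(id=replica['entity_id'],
#               name=replica['replica_type'],
#               object=replica['contents'])
```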
@@ -251,7 +267,7 @@ def to_entity(cls, entity: PFBEntity) -> Self:
return cls(dst_id=entity.id, dst_name=entity.name)


def pfb_metadata_entity(entity_types: Iterable[str]) -> MutableJSON:
def pfb_metadata_entity(entity_types: Iterable[str], links: bool = True) -> MutableJSON:
"""
The Metadata entity encodes the possible relationships between tables.
@@ -266,7 +282,7 @@ def pfb_metadata_entity(entity_types: Iterable[str]) -> MutableJSON:
'name': entity_type,
'ontology_reference': '',
'values': {},
'links': [] if entity_type == 'files' else [{
'links': [] if not links or entity_type == 'files' else [{
'multiplicity': 'MANY_TO_MANY',
'dst': 'files',
'name': 'files'
@@ -294,6 +310,19 @@ def pfb_schema_from_field_types(field_types: FieldTypes) -> JSON:
return _avro_pfb_schema(entity_schemas)


def pfb_schema_from_replicas(replicas: Iterable[JSON]) -> tuple[Sequence[str], JSON]:
schemas_by_replica_type = {}
for replica in replicas:
replica_type, replica_contents = replica['replica_type'], replica['contents']
_update_replica_schema(schema=schemas_by_replica_type,
path=(replica_type,),
key=replica_type,
value=replica_contents)
schemas_by_replica_type = sorted(schemas_by_replica_type.items())
keys, values = zip(*schemas_by_replica_type)
return keys, _avro_pfb_schema(values)


def _avro_pfb_schema(azul_avro_schema: Iterable[JSON]) -> JSON:
"""
The boilerplate Avro schema that comprises a PFB's schema is returned in
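
A sketch of `pfb_schema_from_replicas` applied to two invented replicas of the same type; the nullable union for `size` follows from `_update_replica_schema`, added further down in this file:

```python
# Hypothetical replicas of the same type, differing in one field.
replicas = [
    {
        'entity_id': 'aaa', 'replica_type': 'anvil_file',
        'contents': {'file_format': '.bam', 'size': 123},
    },
    {
        'entity_id': 'bbb', 'replica_type': 'anvil_file',
        'contents': {'file_format': '.vcf.gz', 'size': None},
    },
]
replica_types, schema = pfb_schema_from_replicas(replicas)
assert replica_types == ('anvil_file',)
# The returned schema embeds one record per replica type; `size` ends up as
# the union ['null', 'long'] because one replica carried None for it.
```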
@@ -474,6 +503,13 @@ def _inject_reference_handover_values(entity: MutableJSON, doc: JSON):
# that all of the primitive field types are nullable
# https://github.com/DataBiosphere/azul/issues/4094

_json_to_pfb_types = {
bool: 'boolean',
float: 'double',
int: 'long',
str: 'string'
}

_nullable_to_pfb_types = {
null_bool: ['null', 'boolean'],
null_float: ['null', 'double'],
@@ -561,10 +597,7 @@ def _entity_schema_recursive(field_types: FieldTypes,
'type': 'array',
'items': {
'type': 'array',
'items': {
int: 'long',
float: 'double'
}[field_type.ends_type.native_type]
'items': _json_to_pfb_types[field_type.ends_type.native_type]
}
}
}
@@ -603,3 +636,97 @@
pass
else:
assert False, field_type


def _update_replica_schema(*,
schema: MutableJSON,
path: Sequence[str],
key: str,
value: AnyMutableJSON):
try:
old_type = schema[key]
except KeyError:
schema[key] = _new_replica_schema(path=path, value=value)
else:
if old_type == []:
schema[key] = _new_replica_schema(path=path, value=value)
elif value is None:
if old_type == 'null' or isinstance(old_type, list):
pass
else:
schema[key] = ['null', old_type]
elif old_type == 'null':
schema[key] = [
'null',
_new_replica_schema(path=path, value=value)
]
elif isinstance(value, list):
if isinstance(old_type, list):
old_type = old_type[1]
assert old_type['type'] == 'array', old_type
for v in value:
_update_replica_schema(schema=old_type,
path=path,
key='items',
value=v)
elif isinstance(value, dict):
if isinstance(old_type, list):
old_type = old_type[1]
assert old_type['type'] == 'record', old_type
old_fields = {field['name']: field for field in old_type['fields']}
for k in value.keys() | old_fields.keys():
try:
field = old_fields[k]
except KeyError:
field = {
'name': k,
'namespace': '.'.join(path),
'type': 'null'
}
bisect.insort(old_type['fields'], field, key=itemgetter('name'))
new_value = value[k]
else:
new_value = value.get(k)
_update_replica_schema(schema=field,
path=(*path, k),
key='type',
value=new_value)
else:
new_type = _json_to_pfb_types[type(value)]
if isinstance(old_type, list):
old_type = old_type[1]
assert old_type == new_type, (old_type, value)


def _new_replica_schema(*,
path: Sequence[str],
value: AnyMutableJSON,
) -> AnyMutableJSON:
if value is None:
result = 'null'
elif isinstance(value, list):
# Empty list indicates "no type" (empty union). This will be replaced
# with an actual type unless we never encounter a non-empty array.
result = {'type': 'array', 'items': []}
for v in value:
_update_replica_schema(schema=result,
path=path,
key='items',
value=v)
elif isinstance(value, dict):
name = '.'.join(path)
result = {
'name': name,
'type': 'record',
'fields': [
{
'name': k,
'namespace': name,
'type': _new_replica_schema(path=(*path, k), value=v)
}
for k, v in sorted(value.items())
]
}
else:
result = _json_to_pfb_types[type(value)]
return result
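
Together, `_update_replica_schema` and `_new_replica_schema` implement a schema-inference pass over arbitrary JSON: the latter maps a single value to an Avro type, and the former widens an existing type when later values disagree, by adding `'null'` to a union, filling in an empty array's item type, or inserting a record field that earlier documents lacked. A sketch of that widening on invented input:

```python
# Invented input; only illustrates the widening rules of the helpers above.
schema = {}
_update_replica_schema(schema=schema, path=('donor',), key='donor',
                       value={'age': 42, 'phenotypes': []})
_update_replica_schema(schema=schema, path=('donor',), key='donor',
                       value={'age': None, 'phenotypes': ['HP:0000118']})
donor = schema['donor']
fields = {f['name']: f['type'] for f in donor['fields']}
assert fields['age'] == ['null', 'long']
assert fields['phenotypes'] == {'type': 'array', 'items': 'string'}
```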