Add Incremental Data Loading Documentation #10816

Merged · 3 commits · Jul 16, 2024
8 changes: 8 additions & 0 deletions docs/Data-Loading-Maintaining-Studies.md
@@ -27,6 +27,14 @@ For example:
```
./cbioportalImporter.py -s ../../../test/scripts/test_data/study_es_0/
```

## Importing part of the data
To import only some new or updated data entries, specify the `-d` option instead of `-s`:
```
./cbioportalImporter.py -d <path to data directory>
```
Although the `-d` option accepts a directory that follows the same structure as the study directory, not all data types are supported for incremental upload.
For more details on incremental data loading, see [this page](./Incremental-Data-Loading.md).
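For illustration, a minimal incremental data directory might look like this (the file names are hypothetical; any supported data type can be included, each with its meta and data file):

```
incremental_data/
├── meta_clinical_patients.txt
├── data_clinical_patients.txt
├── meta_mutations.txt
└── data_mutations.txt
```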

## Deleting a study
To remove a study, run:
```
…
```
5 changes: 5 additions & 0 deletions docs/Data-Loading.md
@@ -53,6 +53,11 @@ The validation can be run standalone, but it is also integrated into the [metaImport script](/Using-the-metaImport-script.md).
## Loading Data
To load the data into cBioPortal, the [metaImport script](/Using-the-metaImport-script.md) has to be used. This script first validates the data and, if validation succeeds, loads the data.

### Incremental Loading

You can add or update data entries of certain data types without re-uploading the whole study.
To do this, specify the `--data_directory` (or `-d`) option instead of `--study_directory` (or `-s`) when running the [metaImport script](./Using-the-metaImport-script.md).
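For example, an incremental load against a local portal might look like this sketch (the data directory path is a placeholder; `-o` overrides validation warnings):

```
./metaImport.py -d ../incremental_data/ -u http://localhost:8080 -o
```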

## Removing a Study
To remove a study, the [cbioportalImporter script](/Data-Loading-Maintaining-Studies.md#deleting-a-study) can be used.

61 changes: 61 additions & 0 deletions docs/Incremental-Data-Loading.md
@@ -0,0 +1,61 @@
# Incremental Data Loading

To add or update a few entries (patient/sample/genetic profile) more quickly, especially for larger studies, you can use incremental data loading instead of re-uploading the entire study.

## Granularity of Incremental Data Loading

Think of updating an entry as a complete replacement of that entry's data for a particular data type.
When you update an entry, you must provide the complete data of that data type for the entry again.
For example, if you want to add or update the `Gender` attribute of a patient by incrementally uploading the `PATIENT_ATTRIBUTES` data type, you have to supply **all** other attributes of this patient again.
Note that in this case you do not have to re-supply the sample information or molecular data for this patient, as those are separate data types and the same rule applies to each of them independently.
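For example, a hypothetical `data_clinical_patients.txt` that updates the `Gender` of one patient still has to carry every attribute defined for that patient (the attribute names and values below are illustrative; the four `#` header rows follow the usual clinical file format):

```
#Patient Identifier	Gender	Diagnosis Age
#Patient identifier	Patient gender	Age at diagnosis
#STRING	STRING	NUMBER
#1	1	1
PATIENT_ID	GENDER	AGE
P-0000001	Female	61
```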

**Note:** Although incremental upload will create a genetic profile (name, description, etc.) when you upload molecular data for the first time, it does not update these profile (metadata) attributes on subsequent uploads.
It simply reuses the existing genetic profile as long as none of the identifying attributes (`cancer_study_identifier`, `genetic_alteration_type`, `datatype`, and `stable_id`) have changed.
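As a sketch, the identifying attributes live in the molecular data's meta file, alongside the profile (metadata) attributes that are not updated on re-upload (all values below are hypothetical):

```
cancer_study_identifier: study_es_0
genetic_alteration_type: MRNA_EXPRESSION
datatype: EXPRESSION
stable_id: rna_seq_mrna
profile_name: mRNA expression
profile_description: Expression levels
data_filename: data_expression.txt
```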

## Usage
To load data incrementally, specify the `--data_directory` (or `-d`) option instead of `--study_directory` (or `-s`) when running the [metaImport script](./Using-the-metaImport-script.md) or the `cbioportalImporter.py` script.

The data directory follows the same structure and data format as the study directory.
The data files should contain complete information for the entries you want to add or update.

## Supported Data Types
Please note that incremental upload is supported for a subset of data types only.
Unsupported data types must be omitted from the directory.

Here is the list of supported data types, as specified in the `datatype` attribute of the meta file:

- `CASE_LIST`
- `CNA_CONTINUOUS`
- `CNA_DISCRETE`
- `CNA_DISCRETE_LONG`
- `CNA_LOG2`
- `EXPRESSION`
- `GENERIC_ASSAY_BINARY` (sample level only; `patient_level: false`)
- `GENERIC_ASSAY_CATEGORICAL` (sample level only; `patient_level: false`)
- `GENERIC_ASSAY_CONTINUOUS` (sample level only; `patient_level: false`)
- `METHYLATION`
- `MUTATION`
- `MUTATION_UNCALLED`
- `PATIENT_ATTRIBUTES`
- `PROTEIN`
- `SAMPLE_ATTRIBUTES`
- `SEG`
- `STRUCTURAL_VARIANT`
- `TIMELINE` (aka clinical events)

You might want to check the `INCREMENTAL_UPLOAD_SUPPORTED_META_TYPES` variable in the `cbioportal_common.py` module of the `cbioportal-core` project to ensure this list is up to date.

These are the known data types for which incremental upload is not currently supported:

- `CANCER_TYPE`
- `GENERIC_ASSAY_BINARY` (patient level; `patient_level: true`)
- `GENERIC_ASSAY_CATEGORICAL` (patient level; `patient_level: true`)
- `GENERIC_ASSAY_CONTINUOUS` (patient level; `patient_level: true`)
- `GISTIC_GENES`
- `GSVA_PVALUES`
- `GSVA_SCORES`
- `PATIENT_RESOURCES`
- `RESOURCES_DEFINITION`
- `SAMPLE_RESOURCES`
- `STUDY_RESOURCES`
- `STUDY`
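The check implied by the two lists above can be sketched in Python. This is not the actual `cbioportal-core` code (the `INCREMENTAL_UPLOAD_SUPPORTED_META_TYPES` variable mentioned earlier remains the source of truth); it simply mirrors the lists on this page, including the sample-level-only restriction for generic assay data:

```python
# Data types that support incremental upload, per the list above.
# Generic assay types are supported at the sample level only.
INCREMENTAL_SUPPORTED = frozenset({
    "CASE_LIST", "CNA_CONTINUOUS", "CNA_DISCRETE", "CNA_DISCRETE_LONG",
    "CNA_LOG2", "EXPRESSION", "GENERIC_ASSAY_BINARY",
    "GENERIC_ASSAY_CATEGORICAL", "GENERIC_ASSAY_CONTINUOUS",
    "METHYLATION", "MUTATION", "MUTATION_UNCALLED", "PATIENT_ATTRIBUTES",
    "PROTEIN", "SAMPLE_ATTRIBUTES", "SEG", "STRUCTURAL_VARIANT", "TIMELINE",
})

def supports_incremental(datatype: str, patient_level: bool = False) -> bool:
    """Return True if a meta file's datatype can be uploaded incrementally."""
    if datatype.startswith("GENERIC_ASSAY_") and patient_level:
        return False  # patient-level generic assay data is not supported
    return datatype in INCREMENTAL_SUPPORTED
```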
13 changes: 12 additions & 1 deletion docs/Using-the-metaImport-script.md
@@ -11,7 +11,7 @@ and then run the following command:
This will tell you the parameters you can use:
```
$./metaImport.py -h
usage: metaImport.py [-h] -s STUDY_DIRECTORY
usage: metaImport.py [-h] [-s STUDY_DIRECTORY | -d DATA_DIRECTORY]
[-u URL_SERVER | -p PORTAL_INFO_DIR | -n]
[-jar JAR_PATH] [-html HTML_TABLE]
[-v] [-o] [-r] [-m]
optional arguments:
-h, --help show this help message and exit
-s STUDY_DIRECTORY, --study_directory STUDY_DIRECTORY
path to directory.
-d DATA_DIRECTORY, --data_directory DATA_DIRECTORY
path to data directory for incremental upload.
-u URL_SERVER, --url_server URL_SERVER
URL to cBioPortal server. You can set this if your URL
is not http://localhost/cbioportal
```

@@ -68,5 +70,14 @@ This example imports the study to the localhost, creates an html report and show…

By adding `-o`, warnings will be overridden and import will start after validation.

#### Incremental Upload

To load data incrementally, specify the `--data_directory` (or `-d`) option instead of `--study_directory` (or `-s`).
Incremental upload lets you update data entries of certain data types without re-uploading the whole study.
The data directory follows the same structure and data format as the study directory.
It should contain complete information about the entries you want to add or update.
Please note that some data types, such as the study metadata itself, are not supported and must not be present in the data directory.
See [Incremental Data Loading](./Incremental-Data-Loading.md) for more details.
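For example, an incremental upload that also produces an HTML report might look like this sketch (the data directory path is a placeholder):

```
./metaImport.py -d ../incremental_data/ -u http://localhost:8080 -html report.html -o
```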

## Development / debugging mode
For developers and specific testing purposes, an extra script, `cbioportalImporter.py`, is available which imports data regardless of validation results. Check [this page](Data-Loading-For-Developers.md) for more information on how to use it.
3 changes: 3 additions & 0 deletions docs/deployment/docker/example_commands.md
@@ -27,6 +27,9 @@
```
docker-compose run \
…
```
:warning: after importing a study, remember to restart `cbioportal-container`
to see the study on the home page. Run `docker-compose restart cbioportal`.

To load data incrementally, specify the `-d` option instead of `-s`.
For more details on incremental data loading, see [this page](./Incremental-Data-Loading.md).
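Following the pattern of the import command above, an incremental load might look like the sketch below; the service name and mount path are assumptions and should match your own compose setup:

```
docker-compose run \
  --rm cbioportal \
  metaImport.py \
  --url_server "http://cbioportal:8080" \
  -d /study/incremental_data \
  -o
```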

#### Using cached portal side-data ####

In some setups the data validation step may not have direct access to the web API, for instance when the web API is only accessible to authenticated browser sessions. You can use this command to generate a cached folder of files that the validation script can use instead. Make sure to replace `<path_to_portalinfo>` with the absolute path where the cached folder is going to be generated.