
Commit 796317e

Merge pull request #244 from ENCODE-DCC/dev

v2.0.1

2 parents: 43c10a6 + 769ca5a

File tree: 5 files changed, +144 −47 lines changed

.circleci/config.yml

Lines changed: 16 additions & 2 deletions

@@ -31,12 +31,20 @@ commands:
       steps:
         - run:
             command: |
-              sudo apt-get update && sudo apt-get install software-properties-common git wget curl default-jre -y
+              sudo apt-get update && sudo apt-get install software-properties-common git wget curl -y
+
+              # install java 11
+              sudo add-apt-repository ppa:openjdk-r/ppa -y
+              sudo apt-get update && sudo apt-get install openjdk-11-jdk -y
+              # automatically set 11 as default java
+              sudo update-java-alternatives -a
+
               sudo add-apt-repository ppa:deadsnakes/ppa -y
               sudo apt-get update && sudo apt-get install python3.6 -y
               sudo wget --no-check-certificate https://bootstrap.pypa.io/get-pip.py
               sudo python3.6 get-pip.py
               sudo ln -s /usr/bin/python3.6 /usr/local/bin/python3
+
               pip3 install --upgrade pip
               pip3 install caper google-cloud-storage

@@ -51,11 +59,17 @@ commands:

               echo ${GCLOUD_SERVICE_ACCOUNT_SECRET_JSON} > tmp_secret_key.json
               export GOOGLE_APPLICATION_CREDENTIALS=$PWD/tmp_secret_key.json
+
+              # add docker image to input JSON
+              cat ${INPUT} | jq ".+{\"chip.docker\": \"${TAG}\"}" > input_with_docker.json
+
               caper run ../../../chip.wdl \
                 --backend gcp --gcp-prj ${GOOGLE_PROJECT_ID} \
                 --gcp-service-account-key-json $PWD/tmp_secret_key.json \
                 --out-gcs-bucket ${CAPER_OUT_DIR} --tmp-gcs-bucket ${CAPER_TMP_DIR} \
-                -i ${INPUT} -m metadata.json --docker ${TAG}
+                -i input_with_docker.json -m metadata.json --docker ${TAG}
+
+              rm -f input_with_docker.json

               res=$(jq '.outputs["chip.qc_json_ref_match"]' metadata.json)
               [[ "$res" != true ]] && exit 100

README.md

Lines changed: 61 additions & 20 deletions
@@ -5,11 +5,33 @@

 ## Download new Caper>=2.0

-New Caper is out. You need to update your Caper to work with the latest ENCODE ATAC-seq pipeline.
+New Caper is out. You need to update your Caper to work with the latest ENCODE ChIP-seq pipeline.
 ```bash
 $ pip install caper --upgrade
 ```

+## Local/HPC users and new Caper>=2.0
+
+There are many changes to the local/HPC backends: `local`, `slurm`, `sge`, `pbs` and `lsf` (newly added). Make a backup of your current Caper configuration file `~/.caper/default.conf` and run `caper init` to reset/initialize the configuration file for your chosen backend. Then edit the configuration file and follow the instructions in it.
+```bash
+$ cd ~/.caper
+$ cp default.conf default.conf.bak
+$ caper init [YOUR_BACKEND]
+```
+
+In order to run a pipeline, you need to add one of the following flags to specify the environment in which each task runs: `--conda`, `--singularity` or `--docker`. These flags are not required for cloud backend users (`aws` and `gcp`).
+```bash
+# for example
+$ caper run ... --singularity
+```
+
+For Conda users, **RE-INSTALL THE PIPELINE'S CONDA ENVIRONMENTS AND DO NOT ACTIVATE A CONDA ENVIRONMENT BEFORE RUNNING PIPELINES**. Caper will internally call `conda run -n ENV_NAME CROMWELL_JOB_SCRIPT`, so just make sure that the pipeline's new Conda environments are correctly installed.
+```bash
+$ scripts/uninstall_conda_env.sh
+$ scripts/install_conda_env.sh
+```
+
+
 ## Introduction
 This ChIP-Seq pipeline is based off the ENCODE (phase-3) transcription factor and histone ChIP-seq pipeline specifications (by Anshul Kundaje) in [this google doc](https://docs.google.com/document/d/1lG_Rd7fnYgRpSIqrIfuVlAz2dW1VaSQThzk836Db99c/edit#).
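
The `conda run` detail above is why you should no longer activate an environment yourself: Caper selects the environment per job. A minimal sketch of that internal call, using the pipeline's default environment name from `chip.wdl` and a hypothetical job script name:

```bash
# run a job script inside the named Conda env without activating it first
# (the script name is hypothetical; Caper generates the real one)
conda run -n encode-chip-seq-pipeline bash cromwell_job_script.sh
```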

@@ -21,34 +43,35 @@ This ChIP-Seq pipeline is based off the ENCODE (phase-3) transcription factor an

 ## Installation

-
 1) Make sure that you have Python>=3.6. Caper does not work with Python2. Install Caper and check its version >=2.0.
 ```bash
 $ python --version
 $ pip install caper
-$ caper -v
 ```
+2) Make a backup of your Caper configuration file `~/.caper/default.conf` if you are upgrading from old Caper (<2.0.0). Reset/initialize Caper's configuration file. Read Caper's [README](https://github.com/ENCODE-DCC/caper/blob/master/README.md) carefully to choose a backend for your system. Follow the instructions in the configuration file.
+```bash
+# make a backup of ~/.caper/default.conf if you already have it
+$ caper init [YOUR_BACKEND]

-2) Git clone this pipeline.
-> **IMPORTANT**: use `~/chip-seq-pipeline2/chip.wdl` as `[WDL]` in Caper's documentation.
+# then edit ~/.caper/default.conf
+$ vi ~/.caper/default.conf
+```

+3) Git clone this pipeline.
+> **IMPORTANT**: use `~/chip-seq-pipeline2/chip.wdl` as `[WDL]` in Caper's documentation.
 ```bash
 $ cd
 $ git clone https://github.com/ENCODE-DCC/chip-seq-pipeline2
 ```

-
-3) (Optional for Conda) Install pipeline's Conda environments if you don't have Singularity or Docker installed on your system. We recommend to use Singularity instead of Conda. If you don't have Conda on your system, install [Miniconda3](https://docs.conda.io/en/latest/miniconda.html).
+4) (Optional for Conda) Install the pipeline's Conda environments if you don't have Singularity or Docker installed on your system. We recommend using Singularity instead of Conda. If you don't have Conda on your system, install [Miniconda3](https://docs.conda.io/en/latest/miniconda.html).
 ```bash
 $ cd chip-seq-pipeline2
+# uninstall old environments (<2.0.0)
+$ bash scripts/uninstall_conda_env.sh
 $ bash scripts/install_conda_env.sh
 ```

-
-4) Follow [Caper's README](https://github.com/ENCODE-DCC/caper) carefully. Find an instruction for your platform and run `caper init`. Edit the initialized Caper's configuration file (`~/.caper/default.conf`).
-```bash
-$ caper init [YOUR_PLATFORM]
-$ vi ~/.caper/default.conf
-```

 ## Test run
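
After `caper init`, `~/.caper/default.conf` is a flat key=value file whose keys mirror Caper's command-line flags. An illustrative fragment for a SLURM cluster — the key names here are assumptions for illustration only; edit the file that `caper init` actually generates rather than copying this:

```
# ~/.caper/default.conf (illustrative fragment, keys assumed)
backend=slurm
slurm-partition=your_partition
```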

@@ -63,10 +86,35 @@ The followings are just examples. Please read [Caper's README](https://github.co

 # Or submit it as a leader job (with long/enough resources) to SLURM (Stanford Sherlock) with Singularity
 # It will fail if you directly run the leader job on login nodes
-$ sbatch -p [SLURM_PARTITON] -J [WORKFLOW_NAME] --export=ALL --mem 4G -t 4-0 --wrap "caper chip atac.wdl -i https://storage.googleapis.com/encode-pipeline-test-samples/encode-chip-seq-pipeline/ENCSR000DYI_subsampled_chr19_only.json --singularity"
+$ sbatch -p [SLURM_PARTITION] -J [WORKFLOW_NAME] --export=ALL --mem 4G -t 4-0 --wrap "caper run chip.wdl -i https://storage.googleapis.com/encode-pipeline-test-samples/encode-chip-seq-pipeline/ENCSR000DYI_subsampled_chr19_only.json --singularity"
 ```

+## Running a pipeline on Terra/Anvil (using Dockstore)
+
+Visit our pipeline repo on [Dockstore](https://dockstore.org/workflows/github.com/ENCODE-DCC/chip-seq-pipeline2). Click on `Terra` or `Anvil`. Follow Terra's instructions to create a workspace on Terra and add Terra's billing bot to your Google Cloud account.
+
+Download this [test input JSON for Terra](https://storage.googleapis.com/encode-pipeline-test-samples/encode-chip-seq-pipeline/ENCSR000DYI_subsampled_chr19_only.terra.json), upload it through Terra's UI and then run the analysis.
+
+If you want to use your own input JSON file, make sure that all files in the input JSON are on a Google Cloud Storage bucket (`gs://`). URLs will not work.
+
+## Running a pipeline on DNAnexus (using Dockstore)
+
+Sign up for a new account on [DNAnexus](https://platform.dnanexus.com/) and create a new project on either AWS or Azure. Visit our pipeline repo on [Dockstore](https://dockstore.org/workflows/github.com/ENCODE-DCC/chip-seq-pipeline2). Click on `DNAnexus`. Choose a destination directory on your DNAnexus project, click on `Submit` and visit DNAnexus. This submits a conversion job whose status you can check under `Monitor` in the DNAnexus UI.
+
+Once conversion is done, download one of the following input JSON files according to the platform (AWS or Azure) chosen for your DNAnexus project:
+- AWS: https://storage.googleapis.com/encode-pipeline-test-samples/encode-chip-seq-pipeline/ENCSR000DYI_subsampled_chr19_only_dx.json
+- Azure: https://storage.googleapis.com/encode-pipeline-test-samples/encode-chip-seq-pipeline/ENCSR000DYI_subsampled_chr19_only_dx_azure.json
+
+You cannot use these input JSON files directly. Go to the destination directory on DNAnexus and click on the converted workflow `chip`. You will see input file boxes on the left-hand side of the task graph. Expand them and define FASTQs (`fastq_repX_R1`) and `genome_tsv` as in the downloaded input JSON file. Click on the `common` task box and define the other non-file pipeline parameters.
+
+## Running a pipeline on DNAnexus (using our pre-built workflows)
+
+See [this document](docs/tutorial_dx_web.md) for details.
+

 ## Input JSON file

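
To satisfy the `gs://` requirement above, local inputs can be staged to a bucket with `gsutil` before editing the input JSON; a sketch with a hypothetical file and bucket name:

```bash
# stage a local FASTQ on GCS so Terra can read it (bucket/path hypothetical)
gsutil cp rep1.subsampled.fastq.gz gs://your-bucket/inputs/
# then reference "gs://your-bucket/inputs/rep1.subsampled.fastq.gz" in the input JSON
```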

@@ -82,13 +130,6 @@ You can run this pipeline on [truwl.com](https://truwl.com/). This provides a we

 If you do not run the pipeline on Truwl, you can still share your use-case/job on the platform by getting in touch at [info@truwl.com](mailto:info@truwl.com) and providing your inputs.json file.

-## Running a pipeline on DNAnexus
-
-You can also run this pipeline on DNAnexus without using Caper or Cromwell. There are two ways to build a workflow on DNAnexus based on our WDL.
-
-1) [dxWDL CLI](docs/tutorial_dx_cli.md)
-2) [DNAnexus Web UI](docs/tutorial_dx_web.md)
-
 ## How to organize outputs

 Install [Croo](https://github.com/ENCODE-DCC/croo#installation). **You can skip this installation if you have installed pipeline's Conda environment and activated it**. Make sure that you have python3(> 3.4.1) installed on your system. Find a `metadata.json` on Caper's output directory.
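
For the Croo step above, basic usage is a single command pointed at the run's `metadata.json`; a sketch with a hypothetical output path:

```bash
$ pip install croo
# organize pipeline outputs under the current directory (metadata path hypothetical)
$ croo /path/to/caper_out/chip/WORKFLOW_ID/metadata.json
```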

chip.wdl

Lines changed: 29 additions & 25 deletions
@@ -7,16 +7,20 @@ struct RuntimeEnvironment {
 }

 workflow chip {
-    String pipeline_ver = 'v2.0.0'
+    String pipeline_ver = 'v2.0.1'

     meta {
-        version: 'v2.0.0'
-        author: 'Jin wook Lee (leepc12@gmail.com) at ENCODE-DCC'
-        description: 'ENCODE TF/Histone ChIP-Seq pipeline'
+        version: 'v2.0.1'
+
+        author: 'Jin wook Lee'
+        email: 'leepc12@gmail.com'
+        description: 'ENCODE TF/Histone ChIP-Seq pipeline. See https://github.com/ENCODE-DCC/chip-seq-pipeline2 for more details. e.g. example input JSON for Terra/Anvil.'
+        organization: 'ENCODE DCC'
+
         specification_document: 'https://docs.google.com/document/d/1lG_Rd7fnYgRpSIqrIfuVlAz2dW1VaSQThzk836Db99c/edit?usp=sharing'

-        default_docker: 'encodedcc/chip-seq-pipeline:v2.0.0'
-        default_singularity: 'library://leepc12/default/chip-seq-pipeline:v2.0.0'
+        default_docker: 'encodedcc/chip-seq-pipeline:v2.0.1'
+        default_singularity: 'library://leepc12/default/chip-seq-pipeline:v2.0.1'
         croo_out_def: 'https://storage.googleapis.com/encode-pipeline-output-definition/chip.croo.v5.json'

         parameter_group: {
@@ -67,8 +71,8 @@ workflow chip {
     }
     input {
         # group: runtime_environment
-        String docker = 'encodedcc/chip-seq-pipeline:v2.0.0'
-        String singularity = 'library://leepc12/default/chip-seq-pipeline:v2.0.0'
+        String docker = 'encodedcc/chip-seq-pipeline:v2.0.1'
+        String singularity = 'library://leepc12/default/chip-seq-pipeline:v2.0.1'
         String conda = 'encode-chip-seq-pipeline'
         String conda_macs2 = 'encode-chip-seq-pipeline-macs2'
         String conda_spp = 'encode-chip-seq-pipeline-spp'
@@ -117,9 +121,9 @@ workflow chip {
         Array[File] bams = []
         Array[File] nodup_bams = []
         Array[File] tas = []
-        Array[File?] peaks = []
-        Array[File?] peaks_pr1 = []
-        Array[File?] peaks_pr2 = []
+        Array[File] peaks = []
+        Array[File] peaks_pr1 = []
+        Array[File] peaks_pr2 = []
         File? peak_ppr1
         File? peak_ppr2
         File? peak_pooled
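
The `bams`/`nodup_bams`/`tas`/`peaks*` inputs above let a run resume from intermediate outputs instead of FASTQs; with this type change, every entry supplied in `peaks` must be a real file rather than a possibly-undefined one. A hypothetical resume fragment (paths are placeholders) using the `chip.tas` input:

```json
{
    "chip.pipeline_type": "tf",
    "chip.genome_tsv": "/path/to/genome.tsv",
    "chip.tas": ["rep1.tagAlign.gz", "rep2.tagAlign.gz"],
    "chip.paired_end": false
}
```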
@@ -1703,7 +1707,7 @@ workflow chip {
     # we have all tas and ctl_tas (optional for histone chipseq) ready, let's call peaks
     scatter(i in range(num_rep)) {
         Boolean has_input_of_call_peak = defined(ta_[i])
-        Boolean has_output_of_call_peak = i<length(peaks) && defined(peaks[i])
+        Boolean has_output_of_call_peak = i<length(peaks)
         if ( has_input_of_call_peak && !has_output_of_call_peak && !align_only_ ) {
             call call_peak { input :
                 peak_caller = peak_caller_,

@@ -1748,7 +1752,7 @@ workflow chip {

         # call peaks on 1st pseudo replicated tagalign
         Boolean has_input_of_call_peak_pr1 = defined(spr.ta_pr1[i])
-        Boolean has_output_of_call_peak_pr1 = i<length(peaks_pr1) && defined(peaks_pr1[i])
+        Boolean has_output_of_call_peak_pr1 = i<length(peaks_pr1)
         if ( has_input_of_call_peak_pr1 && !has_output_of_call_peak_pr1 && !true_rep_only ) {
             call call_peak as call_peak_pr1 { input :
                 peak_caller = peak_caller_,

@@ -1777,7 +1781,7 @@ workflow chip {

         # call peaks on 2nd pseudo replicated tagalign
         Boolean has_input_of_call_peak_pr2 = defined(spr.ta_pr2[i])
-        Boolean has_output_of_call_peak_pr2 = i<length(peaks_pr2) && defined(peaks_pr2[i])
+        Boolean has_output_of_call_peak_pr2 = i<length(peaks_pr2)
         if ( has_input_of_call_peak_pr2 && !has_output_of_call_peak_pr2 && !true_rep_only ) {
             call call_peak as call_peak_pr2 { input :
                 peak_caller = peak_caller_,
@@ -2061,7 +2065,7 @@ workflow chip {
         call reproducibility as reproducibility_overlap { input :
             prefix = 'overlap',
             peaks = select_all(overlap.bfilt_overlap_peak),
-            peaks_pr = overlap_pr.bfilt_overlap_peak,
+            peaks_pr = if defined(overlap_pr.bfilt_overlap_peak) then select_first([overlap_pr.bfilt_overlap_peak]) else [],
             peak_ppr = overlap_ppr.bfilt_overlap_peak,
             peak_type = peak_type_,
             chrsz = chrsz_,

@@ -2074,7 +2078,7 @@ workflow chip {
         call reproducibility as reproducibility_idr { input :
             prefix = 'idr',
             peaks = select_all(idr.bfilt_idr_peak),
-            peaks_pr = idr_pr.bfilt_idr_peak,
+            peaks_pr = if defined(idr_pr.bfilt_idr_peak) then select_first([idr_pr.bfilt_idr_peak]) else [],
             peak_ppr = idr_ppr.bfilt_idr_peak,
             peak_type = peak_type_,
             chrsz = chrsz_,

@@ -2112,7 +2116,7 @@ workflow chip {
             ctl_lib_complexity_qcs = select_all(filter_ctl.lib_complexity_qc),

             jsd_plot = jsd.plot,
-            jsd_qcs = jsd.jsd_qcs,
+            jsd_qcs = if defined(jsd.jsd_qcs) then select_first([jsd.jsd_qcs]) else [],

             frip_qcs = select_all(call_peak.frip_qc),
             frip_qcs_pr1 = select_all(call_peak_pr1.frip_qc),

@@ -2122,13 +2126,13 @@ workflow chip {
             frip_qc_ppr2 = call_peak_ppr2.frip_qc,

             idr_plots = select_all(idr.idr_plot),
-            idr_plots_pr = idr_pr.idr_plot,
+            idr_plots_pr = if defined(idr_pr.idr_plot) then select_first([idr_pr.idr_plot]) else [],
             idr_plot_ppr = idr_ppr.idr_plot,
             frip_idr_qcs = select_all(idr.frip_qc),
-            frip_idr_qcs_pr = idr_pr.frip_qc,
+            frip_idr_qcs_pr = if defined(idr_pr.frip_qc) then select_first([idr_pr.frip_qc]) else [],
             frip_idr_qc_ppr = idr_ppr.frip_qc,
             frip_overlap_qcs = select_all(overlap.frip_qc),
-            frip_overlap_qcs_pr = overlap_pr.frip_qc,
+            frip_overlap_qcs_pr = if defined(overlap_pr.frip_qc) then select_first([overlap_pr.frip_qc]) else [],
             frip_overlap_qc_ppr = overlap_ppr.frip_qc,
             idr_reproducibility_qc = reproducibility_idr.reproducibility_qc,
             overlap_reproducibility_qc = reproducibility_overlap.reproducibility_qc,
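
The repeated `if defined(x) then select_first([x]) else []` rewrite above is the standard WDL idiom for coalescing an optional array — the type a call output acquires when the call sits inside a conditional block — into a concrete `Array[File]`. A minimal standalone sketch of the pattern (workflow, task and variable names are hypothetical):

```wdl
version 1.0

workflow coalesce_optional_array {
    input {
        Boolean make_files = false
    }
    if (make_files) {
        scatter (i in range(3)) {
            call touch { input: name = 'file_' + i + '.txt' }
        }
    }
    # outside the if-block, touch.out has type Array[File]?;
    # coalesce it into a concrete Array[File]
    Array[File] files = if defined(touch.out) then select_first([touch.out]) else []

    output {
        Array[File] out_files = files
    }
}

task touch {
    input {
        String name
    }
    command {
        touch '~{name}'
    }
    output {
        File out = name
    }
}
```

Passing `[]` instead of an undefined optional is what lets the `reproducibility` and `qc_report` task inputs in the hunks below drop their `Array[File]?` declarations.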
@@ -2945,7 +2949,7 @@ task reproducibility {
         # in a sorted order. for example of 4 replicates,
         # 1,2 1,3 1,4 2,3 2,4 3,4.
         # x,y means peak file from rep-x vs rep-y
-        Array[File]? peaks_pr   # peak files from pseudo replicates
+        Array[File] peaks_pr    # peak files from pseudo replicates
         File? peak_ppr          # Peak file from pooled pseudo replicate.
         String peak_type
         File chrsz              # 2-col chromosome sizes file
@@ -3060,9 +3064,9 @@ task qc_report {
         Array[File] xcor_plots
         Array[File] xcor_scores
         File? jsd_plot
-        Array[File]? jsd_qcs
+        Array[File] jsd_qcs
         Array[File] idr_plots
-        Array[File]? idr_plots_pr
+        Array[File] idr_plots_pr
         File? idr_plot_ppr
         Array[File] frip_qcs
         Array[File] frip_qcs_pr1

@@ -3071,10 +3075,10 @@ task qc_report {
         File? frip_qc_ppr1
         File? frip_qc_ppr2
         Array[File] frip_idr_qcs
-        Array[File]? frip_idr_qcs_pr
+        Array[File] frip_idr_qcs_pr
         File? frip_idr_qc_ppr
         Array[File] frip_overlap_qcs
-        Array[File]? frip_overlap_qcs_pr
+        Array[File] frip_overlap_qcs_pr
         File? frip_overlap_qc_ppr
         File? idr_reproducibility_qc
         File? overlap_reproducibility_qc

ENCSR000DYI_subsampled_chr19_only.terra.json

Lines changed: 15 additions & 0 deletions

@@ -0,0 +1,15 @@
+{
+    "chip.pipeline_type" : "tf",
+    "chip.genome_tsv" : "gs://encode-pipeline-genome-data/genome_tsv/v3/hg38_chr19_chrM.terra.tsv",
+    "chip.fastqs_rep1_R1" : ["gs://encode-pipeline-test-samples/encode-chip-seq-pipeline/ENCSR000DYI/fastq_subsampled/rep1.subsampled.25.fastq.gz"
+    ],
+    "chip.fastqs_rep2_R1" : ["gs://encode-pipeline-test-samples/encode-chip-seq-pipeline/ENCSR000DYI/fastq_subsampled/rep2.subsampled.20.fastq.gz"
+    ],
+    "chip.ctl_fastqs_rep1_R1" : ["gs://encode-pipeline-test-samples/encode-chip-seq-pipeline/ENCSR000DYI/fastq_subsampled/ctl1.subsampled.25.fastq.gz"
+    ],
+    "chip.ctl_fastqs_rep2_R1" : ["gs://encode-pipeline-test-samples/encode-chip-seq-pipeline/ENCSR000DYI/fastq_subsampled/ctl2.subsampled.25.fastq.gz"
+    ],
+    "chip.paired_end" : false,
+    "chip.title" : "ENCSR000DYI (subsampled 1/25, chr19_chrM only)",
+    "chip.description" : "CEBPB ChIP-seq on human A549 produced by the Snyder lab"
+}

ENCSR936XTK_subsampled_chr19_only.terra.json

Lines changed: 23 additions & 0 deletions

@@ -0,0 +1,23 @@
+{
+    "chip.pipeline_type" : "tf",
+    "chip.genome_tsv" : "gs://encode-pipeline-genome-data/genome_tsv/v3/hg38_chr19_chrM.terra.tsv",
+    "chip.fastqs_rep1_R1" : ["gs://encode-pipeline-test-samples/encode-chip-seq-pipeline/ENCSR936XTK/fastq_subsampled/rep1-R1.subsampled.50.fastq.gz"
+    ],
+    "chip.fastqs_rep1_R2" : ["gs://encode-pipeline-test-samples/encode-chip-seq-pipeline/ENCSR936XTK/fastq_subsampled/rep1-R2.subsampled.50.fastq.gz"
+    ],
+    "chip.fastqs_rep2_R1" : ["gs://encode-pipeline-test-samples/encode-chip-seq-pipeline/ENCSR936XTK/fastq_subsampled/rep2-R1.subsampled.50.fastq.gz"
+    ],
+    "chip.fastqs_rep2_R2" : ["gs://encode-pipeline-test-samples/encode-chip-seq-pipeline/ENCSR936XTK/fastq_subsampled/rep2-R2.subsampled.50.fastq.gz"
+    ],
+    "chip.ctl_fastqs_rep1_R1" : ["gs://encode-pipeline-test-samples/encode-chip-seq-pipeline/ENCSR936XTK/fastq_subsampled/ctl1-R1.subsampled.80.fastq.gz"
+    ],
+    "chip.ctl_fastqs_rep1_R2" : ["gs://encode-pipeline-test-samples/encode-chip-seq-pipeline/ENCSR936XTK/fastq_subsampled/ctl1-R2.subsampled.80.fastq.gz"
+    ],
+    "chip.ctl_fastqs_rep2_R1" : ["gs://encode-pipeline-test-samples/encode-chip-seq-pipeline/ENCSR936XTK/fastq_subsampled/ctl2-R1.subsampled.80.fastq.gz"
+    ],
+    "chip.ctl_fastqs_rep2_R2" : ["gs://encode-pipeline-test-samples/encode-chip-seq-pipeline/ENCSR936XTK/fastq_subsampled/ctl2-R2.subsampled.80.fastq.gz"
+    ],
+    "chip.paired_end" : true,
+    "chip.title" : "ENCSR936XTK (subsampled 1/50, chr19 and chrM Only)",
+    "chip.description" : "ZNF143 ChIP-seq on human GM12878"
+}
