metagenomicspipeline is a bioinformatics pipeline that processes shotgun metagenomics reads to obtain taxonomy profiles (relative abundances) and pathway abundance tables using bioBakery software. In the current version, the MetaPhlAn, HUMAnN and Kraken2 databases are downloaded as part of the pipeline and stored in the `db` folder, so the first run takes a bit longer. The pipeline performs the following steps:
- Input check
- Preprocessing
- Taxonomy profiles: MetaPhlAn (default) or Kraken2
- Pathway abundances: HUMAnN
- Present a MultiQC report
If you only want to use certain parts of the pipeline, you can use the flags `--skip_processing`, `--skip_metaphlan` and `--skip_humann` to skip the corresponding subworkflows. If you want Kraken2 profiles instead of MetaPhlAn profiles, use `--skip_kraken false`.
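For example, a run that produces Kraken2 profiles instead of MetaPhlAn profiles could look like this (a sketch combining the two flags above, as also mentioned in the citation section below; the profile, samplesheet and output directory are placeholders, and any other required parameters such as `--genome` should be added as in the full command further below):

```bash
nextflow run barbarahelena/metagenomicspipeline \
    -profile docker \
    --input samplesheet.csv \
    --outdir results \
    --skip_metaphlan \
    --skip_kraken false
```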
:::note
If you are new to Nextflow and nf-core, please refer to this page on how to set up Nextflow. Make sure to test your setup with `-profile test` before running the workflow on actual data.
:::
First, prepare a samplesheet with your input data that looks as follows:
`samplesheet.csv`:

```csv
sample,fastq_1,fastq_2
CONTROL_REP1,AEG588A1_S1_L002_R1_001.fastq.gz,AEG588A1_S1_L002_R2_001.fastq.gz
```
Each row represents one sample with its fastq file(s). The pipeline accepts both single-end and paired-end fastq files. For single-end data, use only `fastq_1`.
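For example, a single-end samplesheet might look like this (the sample name and filename below are placeholders; here the `fastq_2` field is simply left empty, which is the usual nf-core-style convention, but check the usage documentation for the exact format this pipeline expects):

```csv
sample,fastq_1,fastq_2
CONTROL_REP2,AEG588A2_S2_L002_R1_001.fastq.gz,
```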
Now, you can run the pipeline using:
```bash
nextflow run barbarahelena/metagenomicspipeline \
    -profile <docker/singularity/.../institute> \
    --input samplesheet.csv \
    --genome GRCh38 \
    --outdir <OUTDIR>
```
The subsampling level is 20 million reads by default. If you want to change this, specify it with `--subsamplelevel`, for example `--subsamplelevel 10000000` to set it to 10 million.
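Putting it together, a run with a 10 million read subsampling level might look like this (the profile and paths below are placeholders):

```bash
nextflow run barbarahelena/metagenomicspipeline \
    -profile singularity \
    --input samplesheet.csv \
    --genome GRCh38 \
    --outdir results \
    --subsamplelevel 10000000
```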
I recommend using the `docker` or `singularity` profile: the containerized environments make runs a lot less error prone. If you don't have container software installed, you can also use `conda` as profile. For users at the University of Amsterdam: use the `snellius` profile if you are working on Snellius (this profile uses Singularity by default).
You can also run a test that uses the `samplesheet_test.csv` in the assets folder (you don't need your own data for this), using:

```bash
nextflow run barbarahelena/metagenomicspipeline \
    -profile test \
    --outdir <OUTDIR>
```
If you are working on an HPC, there might be specific rules on how many jobs the pipeline can submit in a specific timeframe. I wrote separate instructions for use on HPCs in the usage documentation.
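As a rough illustration of the kind of settings involved (a generic Nextflow sketch with made-up values, not this pipeline's own HPC instructions; see the usage documentation and your cluster's policies for the actual recommendations), job submission can be capped in a custom config that you pass to the run with `-c`:

```groovy
// custom_hpc.config -- hypothetical example values
executor {
    queueSize       = 20         // keep at most 20 jobs queued/running at once
    submitRateLimit = '10/1min'  // submit at most 10 jobs per minute
}
```

You would then add `-c custom_hpc.config` to the `nextflow run` command shown above.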
For more details and further functionality, please refer to the usage documentation and the parameter documentation.
All output of the different parts of the pipeline is stored in subdirectories of the output directory. These directories are named after the tools that were used (`metaphlan`, `humann`, etc.). Other important outputs are the MultiQC report in the `multiqc` folder and the execution HTML report in the `pipeline_info` folder.
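As a rough sketch (the exact folders depend on the options you used, and the names below simply follow the tool-based naming described above rather than a verified listing), the output directory is organised along these lines:

```
<OUTDIR>/
├── metaphlan/       # MetaPhlAn taxonomy profiles (unless skipped)
├── kraken2/         # Kraken2 profiles (only when run with --skip_kraken false)
├── humann/          # HUMAnN pathway abundances (unless skipped)
├── multiqc/         # MultiQC report
└── pipeline_info/   # execution reports, including the HTML execution report
```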
For more details on the pipeline output, please refer to the output documentation.
I used the nf-core template as much as possible, with the nf-core taxprofiler pipeline and Eduard's metagenomics pipeline as examples.
If you would like to contribute to this pipeline, please see the contributing guidelines. For further information or help, don't hesitate to get in touch.
This pipeline uses bioBakery software, including MetaPhlAn and HUMAnN. It also uses Kraken2 if you specify this in the options (`--skip_metaphlan` combined with `--skip_kraken false`). Please cite the papers of these tools. An extensive list of references for the tools used by the pipeline can be found in the `CITATIONS.md` file.
If you use metagenomicspipeline for your analysis, please cite it using the following DOI: 10.5281/zenodo.10663326.