fix: pip install mikado now works (#443)
* Update requirements.txt and environment.yml and fix scipy import issue.
* update README
MmasterT authored Dec 19, 2023
1 parent c8f6b7c commit c27d7e0
Showing 4 changed files with 48 additions and 22 deletions.
55 changes: 39 additions & 16 deletions README.md
@@ -5,25 +5,48 @@

# Mikado - pick your transcript: a pipeline to determine and select the best RNA-Seq prediction

Mikado is a lightweight Python3 pipeline to identify the most useful or “best” set of transcripts from multiple transcript assemblies. Our approach leverages transcript assemblies generated by multiple methods to define expressed loci, assign a representative transcript and return a set of gene models that selects against transcripts that are chimeric, fragmented or with short or disrupted CDS. Loci are first defined based on overlap criteria and each transcript therein is scored based on up to 50 available metrics relating to ORF and cDNA size, relative position of the ORF within the transcript, UTR length and presence of multiple ORFs. Mikado can also utilize BLAST data to score transcripts based on protein similarity and to identify and split chimeric transcripts. Optionally, junction confidence data as provided by [Portcullis][Portcullis] can be used to improve the assessment. The best-scoring transcripts are selected as the primary transcripts of their respective gene loci; additionally, Mikado can bring back other valid splice variants that are compatible with the primary isoform.
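The per-locus selection described above can be illustrated schematically (a toy sketch, not Mikado's actual scoring code; the metric names, weights and transcript IDs are invented): each transcript in a locus receives a weighted sum over its metrics, and the top scorer becomes the primary transcript.

```python
# Toy model of per-locus transcript selection (hypothetical metrics/weights).
weights = {"cdna_length": 0.5, "orf_fraction": 2.0, "utr_penalty": -1.0}

# Two candidate transcripts for the same locus, with normalised metric values.
transcripts = {
    "asm1.t1": {"cdna_length": 0.8, "orf_fraction": 0.9, "utr_penalty": 0.1},
    "asm2.t1": {"cdna_length": 1.0, "orf_fraction": 0.4, "utr_penalty": 0.5},
}

def score(metrics):
    """Weighted sum of metric values; negative weights select against a metric."""
    return sum(weights[name] * value for name, value in metrics.items())

# The best-scoring transcript becomes the primary transcript of the locus.
primary = max(transcripts, key=lambda t: score(transcripts[t]))
print(primary)  # asm1.t1
```

Here `asm1.t1` wins because its long ORF outweighs `asm2.t1`'s longer cDNA, mirroring how Mikado selects against fragmented or poorly coding models.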

Mikado uses GTF or GFF files as mandatory input. Non-mandatory but highly recommended input data can be generated by obtaining a set of reliable splicing junctions with [Portcullis][Portcullis], by locating coding ORFs on the transcripts using either [Transdecoder][Transdecoder] or [Prodigal][Prodigal], and by obtaining homology information through either [BLASTX][Blast+] or [DIAMOND][DIAMOND].

Our approach can also accommodate sequences generated by *de novo* Illumina assemblers or reads generated by long-read technologies such as PacBio.

Extended documentation is hosted on ReadTheDocs: http://mikado.readthedocs.org/

## Installation

### Using mamba

Download mamba using pip:

```bash
pip install mamba==0.27.0
```

Create a mamba environment using the `environment.yml` file:

```bash
mamba env create -f environment.yml
conda activate mikado2
```

Check that Mikado runs:

```bash
mikado --help
```



Mikado can also be installed from PyPI with pip (**deprecated**):

`pip3 install mikado`

Alternatively, you can clone the repository and install from source with:

    pip wheel -w dist .
    pip install dist/*.whl

You can verify the correctness of the installation with the unit tests (*outside of the source folder*, as otherwise Python will get confused and try to use the `Mikado` source folder instead of the system installation):

    python -c "import Mikado; Mikado.test(); Mikado.test(label='slow')"
@@ -40,29 +63,29 @@ The steps above will ensure that any additional python dependencies will be installed
### Additional dependencies

Mikado by itself requires only the presence of a database solution, such as SQLite (although we also support MySQL and PostgreSQL).
However, the Daijin pipeline requires additional programs to run.
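As a minimal illustration of why SQLite is the lightweight default (a toy sketch with a hypothetical table, not Mikado's actual schema), a SQLite database can be created, populated and queried with no server process at all:

```python
import sqlite3

# An in-memory SQLite database; Mikado would use an on-disk file instead.
conn = sqlite3.connect(":memory:")

# Hypothetical junctions table, purely for illustration.
conn.execute("CREATE TABLE junctions (chrom TEXT, start INTEGER, stop INTEGER)")
conn.execute("INSERT INTO junctions VALUES ('Chr1', 1000, 2000)")

rows = conn.execute("SELECT chrom, start, stop FROM junctions").fetchall()
print(rows)  # [('Chr1', 1000, 2000)]
```

A server-backed database such as MySQL or PostgreSQL would require a running daemon and credentials, which is why SQLite is the simplest choice for single-user runs.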

For driving Mikado through Daijin, the following programs are required:

- [DIAMOND][DIAMOND] or [Blast+][Blast+] to provide protein homology. DIAMOND is preferred for its speed.
- [Prodigal][Prodigal] or [Transdecoder][Transdecoder] to calculate ORFs. The versions of Transdecoder that we tested scale poorly in terms of runtime and disk usage, depending on the size of the input dataset. Prodigal is much faster and lighter; however, the data in our paper were generated with Transdecoder, not Prodigal. Currently we set Prodigal as the default.
- Mikado also makes use of a dataset of high-quality RNA-Seq junctions. We use [Portcullis][Portcullis] to calculate these data alongside the alignments and assemblies.

If you plan to generate the alignment and assembly part as well through Daijin, the pipeline requires the following:

- SAMTools
- If you have short-read RNA-Seq data:
  - At least one short-read RNA-Seq aligner, chosen among [GSNAP], [GMAP][GMAP], [STAR][STAR], [TopHat2][TopHat2], [HISAT2][HISAT2]
  - At least one RNA-Seq assembler, chosen among [StringTie][StringTie], [Trinity][Trinity], [Cufflinks], [CLASS2][CLASS2]. Trinity additionally requires [GMAP][GMAP].
  - [Portcullis][Portcullis] is optional, but highly recommended to retrieve high-quality junctions from the data
- If you have long-read RNA-Seq data:
  - At least one long-read RNA-Seq aligner, currently a choice between [STAR][STAR] and [GMAP][GMAP]

## Development guide

We provide SourceTrail files ([https://www.sourcetrail.com/](https://www.sourcetrail.com/)) to aid in development.
As required by the SourceTrail application, these files are present in the master directory as "Mikado.srctrl*".

## Citing Mikado

If you use Mikado in your work, please consider citing:
@@ -84,4 +107,4 @@ If you also use Portcullis to provide reliable junctions to Mikado, either independently
[HISAT2]: http://ccb.jhu.edu/software/hisat2
[StringTie]: https://ccb.jhu.edu/software/stringtie/
[Trinity]: https://github.com/trinityrnaseq/trinityrnaseq
[CLASS2]: http://ccb.jhu.edu/people/florea/research/CLASS2/
7 changes: 4 additions & 3 deletions environment.yml
@@ -19,12 +19,13 @@ dependencies:
- samtools>=1.11
- htslib>=1.11
- pysam>=0.15.3
- - defaults::python>=3.6
+ - python>=3.6,<3.10
- pyyaml>=5.1.2
- scipy>=1.3.1
- gmap==2021.08.25
- snakemake-minimal>=5.7.0
- - sqlalchemy>=1.4.0
- - sqlalchemy-utils>=0.34.1
+ - sqlalchemy>1.4.0,<2
+ - sqlalchemy-utils==0.34.1
- sqlite
- tabulate>=0.8.5
- wheel
4 changes: 2 additions & 2 deletions requirements.txt
@@ -11,8 +11,8 @@ pysam>=0.15.3
pyyaml>=5.1.2
scipy>=1.3.1
snakemake>=5.7.0
- sqlalchemy>=1.4.0
- sqlalchemy-utils>=0.37
+ sqlalchemy>1.4.0,<2
+ sqlalchemy-utils==0.37
tabulate>=0.8.5
pytest>=5.4.1
python-rapidjson>=1.0.0
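The tightened pin `sqlalchemy>1.4.0,<2` excludes the SQLAlchemy 2.x series, presumably to avoid its breaking API changes while keeping 1.4 bugfix releases. A simplified sketch of what the specifier accepts (the `satisfies` helper is hypothetical and handles numeric versions only; real resolvers use `packaging.specifiers`):

```python
def satisfies(version: str) -> bool:
    """Simplified check of the 'sqlalchemy>1.4.0,<2' pin.

    Hypothetical helper for illustration: compares dotted numeric
    versions as integer tuples, so pre-releases are not handled.
    """
    parts = tuple(int(p) for p in version.split("."))
    return parts > (1, 4, 0) and parts < (2,)

print(satisfies("1.4.50"))  # True: a 1.4.x bugfix release is accepted
print(satisfies("2.0.23"))  # False: SQLAlchemy 2.x is excluded
print(satisfies("1.4.0"))   # False: the bound is exclusive (>, not >=)
```

Note that `==0.37` for sqlalchemy-utils is an exact pin, so only that one release will ever be installed.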
4 changes: 3 additions & 1 deletion setup.py
@@ -12,8 +12,10 @@
import re
import sys
import numpy as np
- from scipy._build_utils import numpy_nodepr_api

+ ### See comment here https://github.com/cython/cython/issues/2498
+ numpy_nodepr_api = dict(define_macros=[("NPY_NO_DEPRECATED_API",
+                                         "NPY_1_9_API_VERSION")])

here = path.abspath(path.dirname("__file__"))
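The change above defines `numpy_nodepr_api` inline instead of importing it from `scipy._build_utils`, a private module that newer scipy releases no longer expose (this is what broke `pip install mikado`). A sketch of how such a macro dict is typically consumed by a build script (the extension name and source file here are hypothetical, not Mikado's actual module list):

```python
from setuptools import Extension

# Same inline dict as in the commit: defining NPY_NO_DEPRECATED_API pins
# Cython extensions to the non-deprecated NumPy C API
# (see https://github.com/cython/cython/issues/2498).
numpy_nodepr_api = dict(
    define_macros=[("NPY_NO_DEPRECATED_API", "NPY_1_9_API_VERSION")]
)

# Hypothetical extension showing how the dict is unpacked into Extension().
ext = Extension("example_ext", sources=["example_ext.pyx"], **numpy_nodepr_api)
print(ext.define_macros)  # [('NPY_NO_DEPRECATED_API', 'NPY_1_9_API_VERSION')]
```

Defining the dict locally removes the dependency on scipy internals, so the build no longer breaks when scipy reorganises its private build utilities.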

