The buildJunctions function in the mapping pipeline builds a bed4 file of junctions:
contig, start, end, strand
to pass to various mappers. It does this by taking a bundle of gffs from a transcript iterator, sorting them into start order, and then walking along, creating intervals from the end of one entry to the start of the next.
From `cgat-flow/cgatpipelines/tools/pipeline_mapping.py`, lines 619 to 635 in 87fcdbc:

```python
        # subtract one: these are not open/closed coordinates but
        # the 0-based coordinates
        # of first and last residue that are to be kept (i.e., within the
        # exon).
        outf.write("%s\t%i\t%i\t%s\n" %
                   (gff.contig, end - 1, gff.start, gff.strand))
        end = gff.end
        njunctions += 1
    outf.close()
```
Unfortunately this fails to account for the fact that the gffs list will contain entries other than exons (it includes both CDS and transcript entries). In particular, the transcript entry will often sort first on start, but its end lies beyond the end of every other entry. The result is invalid entries whose start coordinate is higher than their end, and intervals that do not refer to junctions at all.
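To see the failure concretely, here is a minimal sketch of the walk on toy coordinates. The `GFF` namedtuple and `build_junctions` helper are illustrative stand-ins for the pipeline's actual record type and function, not its real code:

```python
import io
from collections import namedtuple

# Hypothetical stand-in for a CGAT GTF record; only the fields used here.
GFF = namedtuple("GFF", "contig feature start end strand")

def build_junctions(gffs, outf):
    """Sketch of the interval walk: sort on start, then emit the gap
    from the end of one entry to the start of the next."""
    gffs = sorted(gffs, key=lambda g: g.start)
    end = gffs[0].end
    for gff in gffs[1:]:
        outf.write("%s\t%i\t%i\t%s\n" %
                   (gff.contig, end - 1, gff.start, gff.strand))
        end = gff.end

bundle = [
    GFF("chr1", "transcript", 90, 400, "+"),  # spans the whole gene model
    GFF("chr1", "exon", 100, 200, "+"),
    GFF("chr1", "exon", 300, 400, "+"),
]

# The transcript entry sorts first, so its end (400) seeds the running
# "end", and the first emitted interval is chr1 399 100 + (start > end).
buggy = io.StringIO()
build_junctions(bundle, buggy)

# Restricting the bundle to exons yields the single valid junction.
fixed = io.StringIO()
build_junctions([g for g in bundle if g.feature == "exon"], fixed)
print(fixed.getvalue())  # chr1  199  300  +
```

Filtering on the feature field before the walk, as in the last call, is one plausible fix.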
This file is used by mapReadsWithTopHat2 and mapReadsWithHisat. One assumes no one has used TopHat2 for years; however, it leads Hisat2 to produce an invalid BAM file. See DaehwanKimLab/hisat2#365.
On a related, but unaffected, note: the 2-pass STAR method is currently only partly implemented in the mapping pipeline.
The pipeline currently uses each run's individual junction file for both passes. I have now corrected the second pass to use a joint junctions file built from all junctions found in the first pass (with some minor filtering of low-confidence junctions). I will make a PR very soon.
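As a rough illustration of that joint-junctions step, the sketch below merges per-sample STAR `SJ.out.tab` files (standard 9-column layout: chrom, intron start, intron end, strand, motif, annotated, unique reads, multi-mapped reads, max overhang) and drops junctions with little uniquely mapped read support. The `merge_sj_files` helper and the `min_unique_reads` cutoff are hypothetical, not the pipeline's actual filter:

```python
import csv
import os
import tempfile
from collections import defaultdict

def merge_sj_files(paths, min_unique_reads=3):
    """Sum unique-read support per junction across samples and keep
    junctions passing a (hypothetical) low-confidence cutoff."""
    unique = defaultdict(int)
    for path in paths:
        with open(path) as fh:
            for row in csv.reader(fh, delimiter="\t"):
                key = (row[0], int(row[1]), int(row[2]), row[3])
                unique[key] += int(row[6])  # column 7: uniquely mapped reads
    return sorted(k for k, n in unique.items() if n >= min_unique_reads)

# Toy demo: two samples; one shared junction, one weakly supported one.
samples = [
    [["chr1", "201", "299", "1", "1", "0", "2", "0", "20"]],
    [["chr1", "201", "299", "1", "1", "0", "4", "1", "25"],
     ["chr1", "501", "599", "1", "1", "0", "1", "0", "10"]],
]
paths = []
for rows in samples:
    with tempfile.NamedTemporaryFile("w", suffix=".SJ.out.tab",
                                     delete=False, newline="") as fh:
        csv.writer(fh, delimiter="\t").writerows(rows)
        paths.append(fh.name)

joint = merge_sj_files(paths)  # shared junction kept (2 + 4 unique reads),
print(joint)                   # the 1-read junction is filtered out
for p in paths:
    os.remove(p)
```

The merged list could then be fed to STAR's second pass (e.g. via `--sjdbFileChrStartEnd`), though the exact mechanism used in the pipeline may differ.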