fatal runtime error: stack overflow #131
Thanks for the report - stay tuned for an update on the unspecific-digest front soon. Your guess might be correct - some operating systems add guard pages at the stack boundary, and the heap overflowing into the stack could trigger a false stack overflow.
Just a note: I also run into this issue, but with a specific digest (generally when using TMT mods). It's been pretty challenging to get Sage to work with some data sets compared to other options (Comet, XTandem, MSGF+, etc.).
Can you describe the issue you're having? I have run many thousands of searches of TMT labeled files 😉
Specifically the same issue mentioned in the thread:
I understand it's probably due to the large FASTA (metaproteomics), but I mainly wanted to point out that the issue also occurs when using a non-specific search. This was tested on multiple machines, one of which had 256 GB of memory.

I saw the other thread about splitting up the FASTA, and that's possibly a solution, but the FASTA is already relatively tiny for this field (850K sequences pre-decoy), and I don't encounter issues with the other typical search options (MSGF+, Comet, etc.) unless I use a much larger FASTA. It's also probably not related to TMT itself, but to using more than the typical 2 mods (which tends to happen only with multiplexed files). I generally haven't had as many issues when using only 2 mods on the system above, though I do frequently run out of memory on lower-memory systems (~64 GB) even with 2 mods.

Anyway, I appreciate your efforts. Fast searches really aid my metaproteomics work -- I just haven't been able to make Sage work reliably enough to replace the other options, even though I'd like to, because I've been getting better results with Sage when it does work.
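Regarding the FASTA-splitting workaround mentioned above, here is a minimal sketch of chunking a FASTA by sequence count. The helper name and in-memory interface are hypothetical (not part of Sage), and note that combining target-decoy FDR estimates across split searches takes extra care:

```rust
// Sketch: split FASTA text into `n_chunks` roughly equal pieces by
// sequence count, so each piece can be searched separately.
fn split_fasta(fasta: &str, n_chunks: usize) -> Vec<String> {
    // collect one String per record (header line + sequence lines)
    let mut records: Vec<String> = Vec::new();
    for line in fasta.lines() {
        if line.starts_with('>') {
            records.push(String::new());
        }
        if let Some(rec) = records.last_mut() {
            rec.push_str(line);
            rec.push('\n');
        }
    }
    // ceiling division so every record lands in some chunk
    let per = (records.len() + n_chunks - 1) / n_chunks;
    records
        .chunks(per.max(1))
        .map(|chunk| chunk.concat())
        .collect()
}

fn main() {
    let fasta = ">p1\nMKTAYIAK\n>p2\nGELVIS\n>p3\nPEPTIDE\n";
    let chunks = split_fasta(fasta, 2);
    println!("{} chunks", chunks.len());
    print!("{}", chunks[1]);
}
```

Each chunk can then be written to its own file and searched independently; the tricky part is score calibration and FDR when merging results afterwards.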
Hi everyone,

For future reference, setting the thread stack size can work around this. The Rust docs linked below describe how a spawned thread's stack size is determined, including `std::thread::Builder::stack_size` and the `RUST_MIN_STACK` environment variable:

https://doc.rust-lang.org/std/thread/#stack-size

Best,
Hello @lazear,
I encounter the following error: `fatal runtime error: stack overflow`
This is happening on a big CPU node when I run an unspecific digest with 240 CPUs in parallel and ~560 GB RAM consumption, with `min_length = 7` and `max_length = 20`. It works with `max_length = 15`, though. I wanted to test how far I can push the max peptide length.

While I can't pinpoint this to something specific, I think it may have to do with the stack being full, since the fragment index is already huge with the unspecific digest. Is it possible the stack runs into the heap?
I also know about #97 already, and I am happy with a max length of 15, since most HLA-1 peptides fall within 8-12 aa anyway.
Just wanted to let you know that this error message can happen.