fix PepQuery2 memory #722
base: master
Conversation
```diff
@@ -32,7 +32,7 @@
     #end if
     ## PepQuery command
     pepquery
-    -Xmx\$[ \${GALAXY_MEMORY_MB:-8192} / 1024 ]g
+    -Xmx\$(( \${GALAXY_MEMORY_MB:-8192} - 256 - (\${GALAXY_SLOTS:-1} / 5) - \${GALAXY_MEMORY_MB:-8192} * (1 / 20) ))m
```
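One note on the new expression: bash `$(( ))` arithmetic is integer-only, so the `(1 / 20)` and `(\${GALAXY_SLOTS:-1} / 5)` sub-expressions truncate to 0 for small slot counts. A quick sketch of what the expression actually evaluates to with the defaults:

```sh
# Bash arithmetic truncates toward zero, so (1 / 20) == 0 and (1 / 5) == 0;
# with the defaults GALAXY_MEMORY_MB=8192 and GALAXY_SLOTS=1:
echo $(( 8192 - 256 - (1 / 5) - 8192 * (1 / 20) ))   # prints 7936
# i.e. the wrapper would emit -Xmx7936m
```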
How do you arrive at this formula?

If you want to make the memory depend on the number of available threads, it may be better to use \$GALAXY_MEMORY_MB_PER_SLOT?

Wouldn't it be a simple solution to let the admin just give more memory to the application?
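A hedged sketch of what the per-slot variant could look like as a wrapper line, using the same Cheetah escaping as the diff above (the `:-1024` fallback and the 4/5 headroom factor are illustrative assumptions, not values from this PR):

```sh
## Hypothetical alternative: derive the heap from the per-slot allocation
## so it scales with the thread count, keeping ~20% headroom for the JVM.
-Xmx\$(( \${GALAXY_MEMORY_MB_PER_SLOT:-1024} * \${GALAXY_SLOTS:-1} * 4 / 5 ))m
```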
> If you want to make the memory depend on the number of available threads, it may be better to use $GALAXY_MEMORY_MB_PER_SLOT?

Yes, I should indeed use that one.

> Wouldn't it be a simple solution to let the admin just give more memory to the application?

The `-8192` part sets a default value (`\${GALAXY_MEMORY_MB:-8192}` falls back to 8192 MB when GALAXY_MEMORY_MB is unset). The change is needed because I hit a case where the JVM used all of the memory specified in Galaxy for the heap (-Xmx), which led to an out-of-memory error, because the JVM also needs memory for the stack and other overhead. The formula is not super precise, but it follows a recommendation for JVM memory settings from this source.
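For what it's worth, a sketch of the same intent (fixed base, per-thread allowance, ~5% percentage overhead) written so the percentage term survives bash integer arithmetic; the constants are simply the ones from the diff:

```sh
# mem * (1 / 20) truncates to 0 in bash; mem / 20 keeps the ~5% term.
mem=${GALAXY_MEMORY_MB:-8192}
heap=$(( mem - 256 - ${GALAXY_SLOTS:-1} / 5 - mem / 20 ))
echo "-Xmx${heap}m"   # with the defaults: -Xmx7527m
```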
Here is what I do on my instance. In each destination of the job_conf I add

<env id="_JAVA_OPTIONS">-Xmx@JAVAMEM@ -Xms256m</env>

and set @JAVAMEM@ to something lower than the available memory (by roughly 33%). I guess this does not work here because the bioconda recipe seems not to respect _JAVA_OPTIONS, but I might be wrong. If I'm right, it would be good to adapt the recipe; a grep for _JAVA_OPTIONS in the bioconda recipes should guide you.

Would be interesting to ask the TPV / admin community how they handle Java memory.
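One way to check the _JAVA_OPTIONS question empirically: HotSpot JVMs print a "Picked up _JAVA_OPTIONS" notice on stderr when they honor the variable. A rough sketch (the same check applies when run through the pepquery wrapper script):

```sh
# Search the recipes for wrappers that already forward _JAVA_OPTIONS:
grep -r _JAVA_OPTIONS bioconda-recipes/recipes/

# Runtime check: a HotSpot JVM that honors the variable prints
# "Picked up _JAVA_OPTIONS: -Xmx512m" as the first line on stderr.
_JAVA_OPTIONS="-Xmx512m" java -version 2>&1 | head -n 1
```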
Wouldn't it be cleaner to subtract that 33% already in the wrapper and not later? Otherwise all admins would need to think of it (like me, for example), and we would need an additional destination plus extra configuration lines.
Test 2 is failing for me on the main branch with --docker; could someone check whether it fails just for me or in general?
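For anyone reproducing this, presumably something along these lines (the tool path is a guess; adjust it to the repo layout):

```sh
# Hypothetical reproduction of the failing test; the XML path is assumed.
planemo test --docker tools/pepquery2/pepquery2.xml
```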
@mira-miracoli we now have something similar in the default tool. Do we still need this PR?
I changed the Java options because JVM overhead can lead to failing jobs when the tool uses the full amount defined in GALAXY_MEMORY_MB as heap, leaving nothing for the stack and other overhead.