Workflow running out of memory #570
Comments
I am seeing similar behavior with Cromwell. I give a task 64 GB. In AWS Batch, I see a warning next to the memory information and an "Essential container in task exited" error. However, when I click on the job definition, it appears to have only 8 GB of memory allocated.
Thanks for reporting this issue. Is this an issue with the 1.5.2 release as well?
It is still an issue with v1.5.2 (Cromwell).
@spitfiredd The child processes are spawned with a default of 1 vCPU and 1024 MiB of memory. If tasks need more memory or CPU, you would typically request these via the process directives for CPU and memory (https://www.nextflow.io/docs/latest/process.html#cpus and https://www.nextflow.io/docs/latest/process.html#memory).
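As a sketch of what those directives look like (the process name and tool here are hypothetical, not from this workflow):

```groovy
// Hypothetical Nextflow process requesting more than the
// 1 vCPU / 1024 MiB default, so the AWS Batch container is sized to match.
process alignReads {
    cpus 8
    memory '64 GB'

    script:
    """
    bwa mem -t ${task.cpus} ref.fa reads.fq > aligned.sam
    """
}
```

Defaults can also be set once for all processes (or per process name) in `nextflow.config` under the `process` scope, e.g. `process { memory = '16 GB' }`.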
@biofilos AGC is currently using an older version of Cromwell. This older version uses a deprecated call to AWS Batch, hence the error. In our next release we will update the version of Cromwell used. As a possible workaround, you might consider deploying a
Describe the Bug
Worker processes are not spawned with enough memory and do not scale up; Nextflow therefore fails with exit status 137 (the container was killed after running out of memory).
Steps to Reproduce
Child processes are spawned with 1 vCPU and 1024 MiB of memory.
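A minimal sketch of a process that would reproduce this (assumed example, not from the reported workflow): with no `memory` directive, the task runs in the default 1 vCPU / 1024 MiB container and is killed with exit status 137 once it allocates more than that.

```groovy
// Hypothetical reproduction: no cpus/memory directives, so the
// AWS Batch container gets the 1 vCPU / 1024 MiB default.
process eatMemory {
    script:
    """
    # Allocate ~2 GiB, exceeding the 1024 MiB default -> OOM kill (137)
    python3 -c "buf = bytearray(2 * 1024**3)"
    """
}

workflow {
    eatMemory()
}
```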
Relevant Logs
Main Process
Child Process
Expected Behavior
Processes are spawned with enough memory, or resources scale to meet the workload.
Actual Behavior
The container ran out of memory.
Screenshots
Additional Context
Ran the workflow with the following command:
agc workflow run foo --context dev
Operating System: Linux
AGC Version: 1.5.1
Was AGC setup with a custom bucket: no
Was AGC setup with a custom VPC: no