Support `disk` directive for local executor #5652
Conversation
Signed-off-by: Ben Sherman <bentshermann@gmail.com>
The documentation looks good. I'll leave the code review for someone else.
Signed-off-by: Ben Sherman <bentshermann@gmail.com>
Not sure about this PR. First, the available disk storage is a dynamic value, while CPUs and memory are static.

Moreover, the main reason for tracking CPUs and memory is to throttle task submission and avoid over-allocating the available resources. That cannot be done with disk storage, so ultimately it would just throw an error when the task runs out of space.
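For context, the throttling referred to here is the local executor's existing behaviour of capping concurrent tasks against static resource limits, which can be set in the executor config scope (the values below are illustrative):

```groovy
// Existing Nextflow config options (illustrative values). The local executor
// throttles task submission so that running tasks never exceed these limits.
executor {
    cpus   = 8        // max CPUs made available to the local executor
    memory = '32 GB'  // max memory made available to the local executor
}
```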
log.debug "Local executor is using a remote work directory -- task disk requirements will be ignored"
return 0
}
(session.getExecConfigProp(name, 'disk', session.workDir.toFile().getUsableSpace()) as MemoryUnit).toBytes()
But the available disk changes over time. How do we take into account that free space can increase or decrease while the workflow is running?
I will let @schorlton-bugseq make his case since he submitted the original issue. As for my thoughts: disk works exactly the same way as memory. There is a total amount and a currently available amount. The local executor doesn't prevent any task from using more than its allocated memory (unless Docker is enabled), it just uses the task resources as "hints" to limit the parallelism accordingly. The same is true for disk: it's just a hint that lets the user limit the parallelism based on how much disk space they estimate each task will need.

The only practical difference is that the steady-state disk usage is likely higher than the steady-state memory usage, so it's more accurate to use the currently available disk space at the start of the run as the "total", rather than the true total. Overall, it's a simple change that's opt-in and provides the same guarantees as the memory tracking, so I'm fine with it.
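To illustrate the "hint" behaviour described above, here is a hypothetical process (names and values are made up) whose concurrency would, with this PR, be limited by the local executor based on its disk request:

```groovy
// Hypothetical example: `disk` acts only as a scheduling hint for the local
// executor. With ~1 TB usable at launch, at most ~5 such tasks run at once.
process DOWNLOAD_REFERENCE {
    executor 'local'
    disk '200 GB'

    input:
    val url

    output:
    path 'ref.fa.gz'

    script:
    """
    wget -O ref.fa.gz '$url'
    """
}
```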
First, I want to thank the three of you for such a fast turnaround on this and for being so receptive to the request! The main behaviour I'm interested in is not even kicking off a process if its specified disk requirement is greater than the available disk space; otherwise one risks filling the disk and failing the process, which could take a long time to fail and cause unnecessary headache/cost if the directive were not honored. Specific examples where I see this being useful are processes whose outputs will be predictably massive and fill standard disks, e.g. downloading large files (TBs in size), generating simulated data, or certain read alignment tasks (e.g. hundreds of secondary/supplementary alignments per read).

I leave the specifics of the parallelism to you, as I don't fully appreciate the complexities of Nextflow job scheduling or its guarantees. One could imagine, in a complex scenario, keeping track of the disk allocated to each process and updating the available disk after the process completes, to account for allocated vs actually used disk space in a dynamic fashion... this seems complex, although maybe you already have it figured out! Alternatively, some simpler assumptions with clear documentation would probably more than suffice for most Nextflow users and the use cases described above. Thanks again for your hard work and responsiveness!
I think I see the problem now. Whereas task memory is always released back to the system, disk space is not. Tasks can leave behind any amount of output files, each with its own lifetime depending on what the downstream tasks take as input. I think it would require a lot of complexity for the local executor to track that usage accurately enough to be useful.

The most I think we could do here is a minimal solution based on Josh's suggestion, where a task is simply not launched if its disk requirement exceeds the currently available disk space.

It was a good exercise to try out this idea, but I'm less confident now that it would be a good idea to implement.
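If that minimal check were ever implemented, a rough sketch might look like the following. This is hypothetical code, not part of this PR; it only reuses the `File::getUsableSpace()` call from the diff excerpt above.

```groovy
import nextflow.util.MemoryUnit

// Hypothetical sketch (not part of this PR): decide whether a task's `disk`
// request fits in the space currently usable under the work directory.
boolean hasEnoughDisk(MemoryUnit requested, File workDir) {
    if( requested == null )
        return true                               // no `disk` directive -> nothing to check
    return requested.toBytes() <= workDir.getUsableSpace()
}

// Example usage with made-up values:
println hasEnoughDisk(new MemoryUnit('500 GB'), new File('/tmp'))
```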
Agree with Ben's latest comment. Closing without merging.
Close #5636

This PR adds support for the `disk` directive to the local executor. It uses `File::getUsableSpace()` to estimate the total available disk space at the beginning of the run. Disk requirements are ignored when using the local executor with a remote filesystem via Fusion.
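For reference, the JDK call the estimate relies on can be tried standalone (the path below is hypothetical):

```groovy
// java.io.File#getUsableSpace() returns the number of bytes currently usable
// on the partition containing this path (0 if the path does not exist).
def workDir = new File('/path/to/work')   // hypothetical work directory
println "Usable space: ${workDir.getUsableSpace()} bytes"
```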