Open
Labels: enhancement (New feature or request), high-priority (Important issue, but not a bottleneck)
Description
DelftBlue has quite a strict limit on the number of jobs that can be concurrently scheduled, which is limiting for single-threaded programs (since many CPUs are available per job). Slurm has a feature for this: you can use srun within an sbatch script to schedule multiple tasks inside a single job (only relevant fields shown):
#!/bin/bash
#SBATCH --array=0-15
#SBATCH --ntasks=8
#SBATCH --mem-per-cpu=100M
#SBATCH --cpus-per-task=1
set -x
# Launch 8 single-task job steps in parallel within this job's allocation
for i in $(seq 1 8); do
    srun -c1 -n1 --exact --mem-per-cpu=100M my_exec "$SLURM_ARRAY_TASK_ID" "$i" &
done
wait  # block until all job steps have finished
This schedules 8 sub-jobs within the resources allocated by the sbatch, without running into the concurrent-job limit.
Would it be possible to implement this feature in Gourd? E.g. add an ntasks config field so that the chunks are automatically also divided over jobs. Thanks in advance!
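For concreteness, a hypothetical sketch of what this could look like in an experiment config; the `ntasks` field, the section name, and the TOML layout are all assumptions for illustration, not existing Gourd syntax:

```toml
# Hypothetical sketch: pack multiple single-CPU runs into one Slurm job,
# so each chunk becomes a single sbatch containing `ntasks` srun job steps.
[slurm]
ntasks = 8              # assumed field: job steps launched via `srun -n1 --exact` per job
cpus_per_task = 1
mem_per_cpu = "100M"
```

With something like this, Gourd could divide each chunk over ntasks parallel srun steps per job, cutting the number of concurrently scheduled jobs by roughly a factor of ntasks.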