Have you tried only using `sbatch`? As far as I can tell, the HPC docs at my company only use `sbatch` for MPI jobs (i.e. they put the `mpirun` command directly into a shell script with `#SBATCH` directives).
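A minimal sketch of that kind of script, assuming 8 MPI ranks and that `mytool` is on the `PATH` (job name, task count and walltime are placeholders):

```sh
#!/bin/bash
#SBATCH --job-name=mytool-mpi
#SBATCH --ntasks=8            # 8 MPI processes
#SBATCH --time=01:00:00       # placeholder walltime

# mpirun is called directly in the batch script; the script is
# submitted with a plain `sbatch submit.sh`, no srun involved.
mpirun -np 8 mytool
```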
Imagine I have a rule which should run `mytool` via MPI on 8 processes. Locally I would start this with e.g. `mpirun -np 8 mytool`. On a slurm cluster I would write a submit script and then inside the script start the application with `srun` (sketched below).
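A minimal sketch of such a submit script (the task count and walltime are assumptions; the point is only the `srun` launch):

```sh
#!/bin/bash
#SBATCH --ntasks=8            # 8 MPI processes
#SBATCH --time=01:00:00       # placeholder walltime

# srun starts mytool once per task of the allocation (8 here),
# as recommended in https://slurm.schedmd.com/mpi_guide.html
srun mytool
```

The script itself is submitted with `sbatch submit.sh`; the MPI startup happens via `srun` inside the job.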
For MPI and slurm it is crucial to start the parallelized application (`mytool`) via `srun`, see also https://slurm.schedmd.com/mpi_guide.html.

I assumed that when using this profile to submit jobs to a slurm cluster with Snakemake, the rule's command would be wrapped in a job script and submitted via `sbatch`, so that `mytool` could then be started with `srun` inside the job.
But instead Snakemake does the following: `snakemake` is run again with lots of parameters to ensure that only the current rule is run (basically the contents of `{exec_job}` in https://github.com/Snakemake-Profiles/slurm/blob/master/%7B%7Bcookiecutter.profile_name%7D%7D/slurm-jobscript.sh).
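To make the contrast concrete: the generated job script does not contain the rule's shell command itself, but a fresh `snakemake` invocation, roughly of the following shape (a hypothetical, abbreviated illustration, not the verbatim `{exec_job}` expansion; the rule and file names are made up):

```sh
# Inside the sbatch job, snakemake is re-run for just this one rule;
# the rule's shell command is then executed as a plain subprocess,
# so srun never wraps mytool.
snakemake --snakefile Snakefile --force --keep-target-files \
    --allowed-rules run_mytool --cores 8 results/mytool.out
```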
So my questions are: is it possible to run MPI rules with this profile as described above, i.e. to start `mytool` via `srun` inside the job?

Explicitly pinging @jdblischak because you may have experience on this topic as well.
cc: @TobiasMeisel