
Problems with Snakemake 5.6.0 standard resources #89

Open
holtgrewe opened this issue Mar 30, 2022 · 4 comments
@holtgrewe

Apparently, Snakemake >=5.6.0 defines

  • mem_mb
  • disk_mb
  • tmpdir

by default.

At least when using the Snakemake profile, the default value for mem_mb takes precedence over a mem resource set in the rule.

E.g.,

rule foo:
  resources:
    mem='8G'

will lead to the default mem_mb=1000 being used instead.
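
As a comment further down confirms (see @johnstonmj below), declaring the memory as mem_mb avoids the clash with the default. A minimal workaround sketch, with the example's 8G simply expressed in MB:

rule foo:
  resources:
    mem_mb=8000  # 8G in MB; overrides the 1000 MB default instead of being shadowed by it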

@holtgrewe (Author)

This also leads to mem_per_thread not being settable: the injected default mem_mb is converted to --mem, which then conflicts with the --mem-per-cpu that the rule's resource produces.


sbatch: fatal: --mem, --mem-per-cpu, and --mem-per-gpu are mutually exclusive.
Traceback (most recent call last):
  File "/etc/xdg/snakemake/cubi-v1/slurm-submit.py", line 82, in <module>
    jobid = slurm_utils.submit_job(jobscript, **sbatch_options)
  File "/etc/xdg/snakemake/cubi-v1/slurm_utils.py", line 182, in submit_job
    raise e
  File "/etc/xdg/snakemake/cubi-v1/slurm_utils.py", line 180, in submit_job
    res = sp.check_output(cmd)
  File "/data/gpfs-1/users/euskircp_c/work/miniconda/lib/python3.7/subprocess.py", line 411, in check_output
    **kwargs).stdout
  File "/data/gpfs-1/users/euskircp_c/work/miniconda/lib/python3.7/subprocess.py", line 512, in run
    output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['sbatch', '--parsable', '--cluster=cubi', '--time=02:00:00', '--mem=5532', '--mem-per-cpu=6G', '--cpus-per-task=12',...
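
One possible guard, sketched against the traceback above (the sbatch_options dict comes from the traceback, but its key spelling is an assumption, not the profile's actual code):

# Hypothetical guard in slurm-submit.py, applied before submit_job():
# if the job requests per-CPU memory, drop the injected total-memory
# default so sbatch never sees the mutually exclusive pair.
if "mem-per-cpu" in sbatch_options:
    sbatch_options.pop("mem", None)
jobid = slurm_utils.submit_job(jobscript, **sbatch_options)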

@percyfal (Collaborator) commented Apr 6, 2022

I can reproduce this using snakemake>=7.0. I would guess the proper place to apply a patch is here

@bs-az commented Apr 19, 2022

My solution to this was to change the order of priority here, running convert_job_properties() before applying any defaults from my cluster_config.json (see the sketch below). When using the HPC profile I don't really want the resources from the Snakefile anyway, since they would be tuned for local execution rather than for SLURM, but I understand the order is a matter of preference.

Edit for clarity: I mean I don't want any resources from the Snakefile to take priority over those in __defaults__ if there is a name clash.
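
A minimal sketch of that reordering (convert_job_properties() and __defaults__ are taken from the comment above; job_properties and cluster_config are assumed names for the job's properties and the parsed cluster_config.json):

# Hypothetical order in slurm-submit.py: convert the job's own
# resources first, then overlay the profile defaults, so that on a
# name clash the value from cluster_config.json's __defaults__ wins.
options = convert_job_properties(job_properties)
options.update(cluster_config.get("__defaults__", {}))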

@johnstonmj

I am also encountering inconsistent resource mapping with Snakemake 7.6.2.

Setting

resources:
    mem_mb = 30000

works, with scontrol show jobid reporting 30000M of memory.

But

mem = 30000

or

mem = "30G"

only provides the default 1000M.
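
For illustration, one way such a mapping could behave (a sketch of the expected conversion, not the profile's actual code):

import re

def mem_to_mb(value):
    """Map a Slurm-style memory spec such as 30000 or "30G" to integer MB."""
    if isinstance(value, int):
        return value  # assume bare integers are already MB
    match = re.fullmatch(r"(\d+)([KMGT]?)", str(value).strip().upper())
    if not match:
        raise ValueError(f"unrecognized memory spec: {value!r}")
    number, unit = int(match.group(1)), match.group(2)
    factor = {"": 1, "K": 1 / 1024, "M": 1, "G": 1024, "T": 1024 ** 2}[unit]
    return int(number * factor)

assert mem_to_mb(30000) == 30000
assert mem_to_mb("30G") == 30720  # 30 GiB expressed in MiB

With a conversion like this applied before the defaults, mem = "30G" would land as mem_mb=30720 rather than falling back to the 1000M default.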
