
Auto define slurmd memory and CPU configuration #27

Closed
sjpb wants to merge 8 commits

Conversation

@sjpb (Collaborator) commented Aug 16, 2023

This PR enables the CPU and memory configuration of a node to be automatically defined on slurmd pod startup, rather than having to be modified in slurm.conf.

Some background

Non-cloud/non-autoscaling Slurm daemons can be started in two modes:

  • Dynamic Future nodes (slurmd -F)
  • Dynamic Normal nodes (slurmd -Z)
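In invocation terms, the two modes look like this (a sketch; see the Slurm dynamic nodes documentation):

```
slurmd -F    # Dynamic Future node: needs a prior State=FUTURE definition
slurmd -Z    # Dynamic Normal node: registers itself with its real config
```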

Dynamic Normal nodes do not need to be defined before slurmd startup; they automatically pass their actual memory and CPU configuration to the slurmctld on startup. However, the slurm.conf SlurmctldParameters=cloud_reg_addrs setting cannot be used with them. With stable pod hostnames (= slurmd NodeNames), this means that if a pod update changes the IP of a particular pod/Slurm node, slurmctld loses communication with it, because the stored IP never gets updated.
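For reference, the relevant slurm.conf settings look like the following (an illustrative fragment; NoAddrCache also appears in this repo's slurm.conf):

```
# slurm.conf fragment: have slurmctld take a dynamic node's address from its
# registration, so the stored IP is refreshed when a pod re-registers
SlurmctldParameters=cloud_reg_addrs
CommunicationParameters=NoAddrCache
```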

Dynamic Future nodes do need to be defined before slurmd startup (with State=FUTURE), but cloud_reg_addrs works with them. This is the approach used in both the current main branch and this PR. In the main branch, the nodes are defined in slurm.conf, but the default node definition results in 1 CPU and 1 MB of memory, so these must be manually adjusted in slurm.conf to match the k8s worker node configurations.

This PR instead uses scontrol create node to create Slurm nodes dynamically on pod startup, with the memory/CPU/etc. configuration of the actual node. It was expected that this would mean no NodeName= definitions would be required in slurm.conf at all. However, slurmd appears to segfault on startup in that configuration. Therefore, a default node definition is left in slurm.conf, and on pod startup the node definition is deleted and recreated with the actual pod configuration.
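A minimal sketch of that startup logic (the entrypoint shape, variable names, and State= value are assumptions, not the PR's actual script; the scontrol calls are echoed rather than executed, since they require a running slurmctld):

```shell
#!/bin/sh
# Hypothetical slurmd pod entrypoint sketch -- not the PR's actual script.
NODE_NAME="${HOSTNAME}"
CPUS="$(nproc)"                     # actual CPU count visible to the pod
# Slurm's RealMemory is in MB; derive it from /proc/meminfo (kB -> MB):
REAL_MEMORY="$(awk '/^MemTotal:/ {print int($2 / 1024)}' /proc/meminfo)"

# Delete the default definition from slurm.conf, then recreate the node with
# the real configuration (echoed here; a live cluster would run these):
echo scontrol delete NodeName="${NODE_NAME}"
echo scontrol create NodeName="${NODE_NAME}" \
  CPUs="${CPUS}" RealMemory="${REAL_MEMORY}" State=FUTURE
# ...after which the entrypoint would start slurmd in Dynamic Future mode (-F).
```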

Testing

This can be tested by launching a job which requires more than 1 CPU (in Slurm terms) per node, e.g. on the login node, as the rocky user, in ~:

srun  -N1 --ntasks-per-node=2 /usr/lib64/openmpi/bin/mpitests-IMB-MPI1 pingpong

@sjpb sjpb changed the title slurmd nodes add themselves with appropriate config on startup Auto define slurmd memory and CPU configuration Aug 16, 2023
```diff
@@ -52,7 +52,8 @@ CommunicationParameters=NoAddrCache

 # NODES
 MaxNodeCount=10
-NodeName=slurmd-[0-9] State=FUTURE CPUs=4
+NodeName=slurmd-[0-9] State=FUTURE
+TreeWidth=65533
```
@sjpb (author) commented on this change:

Note this should be in main anyway, although with only 2x nodes it doesn't seem to make a difference: See https://slurm.schedmd.com/dynamic_nodes.html#config

@sjpb sjpb marked this pull request as ready for review August 17, 2023 09:02

sjpb commented Aug 17, 2023

@sd109 I tested this: after a clean install I ran a job OK, changed the slurmd args to -vv and upgraded (to force slurmd deletion/recreation), ran a job OK, changed them again, and ran a job OK.

@sjpb sjpb requested a review from sd109 August 17, 2023 14:33
@sjpb sjpb marked this pull request as draft August 18, 2023 09:56
@sjpb sjpb marked this pull request as ready for review August 18, 2023 10:24
@sd109 (Collaborator) left a comment:
LGTM

This was referenced Aug 18, 2023

sjpb commented Sep 13, 2023

Closed, as #35 appears to work

@sjpb sjpb closed this Sep 13, 2023