feat: all reduce bench slurm pyxis #101
Merged
The SLURM Pyxis container plugin allows for easily reproducible scripts: because the environment is containerized, the benchmark harness script does not rely on any host-machine dependencies besides standard SLURM, the pyxis SLURM plugin, and the NVIDIA drivers. The SLURM Pyxis container plugin is widely used across many companies and is steadily gaining adoption.

On CSPs that have enabled the SLURM Pyxis container plugin, such as CoreWeave, Crusoe, Oracle, Azure, etc., all_reduce_bench.py can easily be run & reproduced via a single pyxis-based launch command (a sketch is shown below).

Note that this launcher will also work on AWS & GCP once you swap nvcr.io#nvidia/pytorch:25.02-py3 for an AWS/GCP-specific container image that has all of the required env vars and the AWS EFA or GCP GPUDirect-TCPX / GPUDirect-TCPXO (aka FastTrak) or gIB NCCL net plugin included.
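For reference, a minimal sketch of such a launch, assuming a 2-node x 8-GPU allocation, a hypothetical partition name, and that the benchmark is started via torchrun with the standard env:// rendezvous (the actual launcher in this PR may differ):

```bash
#!/bin/bash
# Hedged sketch only -- partition, counts and paths are placeholders.
#SBATCH --job-name=all_reduce_bench
#SBATCH --partition=gpu            # hypothetical partition name
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=1        # one torchrun launcher per node
#SBATCH --gpus-per-node=8
#SBATCH --time=00:15:00

# rendezvous endpoint = first node of the allocation
MASTER_ADDR=$(scontrol show hostnames "$SLURM_JOB_NODELIST" | head -n 1)
MASTER_PORT=6000

# pyxis pulls the NGC PyTorch image and bind-mounts the benchmark directory
srun --container-image=nvcr.io#nvidia/pytorch:25.02-py3 \
     --container-mounts="$PWD":/workspace \
     --container-workdir=/workspace \
     torchrun --nnodes=$SLURM_NNODES --nproc_per_node=8 \
              --rdzv_backend=c10d --rdzv_endpoint=$MASTER_ADDR:$MASTER_PORT \
              all_reduce_bench.py
```

In practice only the container image, the partition name, and the node/GPU counts would typically need adjusting per cluster, since everything else runs inside the container.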
Testing
I ran some quick smoke tests to make sure this script works properly on two HGX H100 700W SXM nodes (16 GPUs total) connected over 400G InfiniBand NDR at a 1-hop distance.