# MPIBenchmarks.jl

`MPIBenchmarks.jl` is a collection of benchmarks for `MPI.jl`, the Julia wrapper for the Message Passing Interface (MPI).

The goal is to have a suite of benchmarks which allows `MPI.jl` users to

- compare the performance of different MPI libraries used with `MPI.jl`;
- compare the performance of `MPI.jl` with analogous benchmarks written in other languages, such as C/C++.

For this purpose we have benchmarks inspired by the Intel(R) MPI Benchmarks (IMB) and the OSU Micro-Benchmarks, to make it easier to compare results with these established MPI benchmark suites.

**NOTE**: `MPIBenchmarks.jl` is a work in progress. Contributions are very much welcome!
## Installation

To install `MPIBenchmarks.jl`, open a Julia REPL, type `]` to enter the Pkg REPL mode, and run the command

```
add https://github.com/JuliaParallel/MPIBenchmarks.jl
```

`MPIBenchmarks.jl` currently requires Julia v1.6 and `MPI.jl` v0.20.
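Alternatively, the package can be installed non-interactively (e.g., from a script or CI job) with the general `Pkg` API; a minimal sketch:

```julia
using Pkg

# Add the package directly from its Git repository.
Pkg.add(url = "https://github.com/JuliaParallel/MPIBenchmarks.jl")
```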
## Usage

`MPIBenchmarks.jl` currently provides the following benchmark types:

- collective:
  - `IMBAllreduce()`: inspired by IMB Allreduce
  - `IMBAlltoall()`: inspired by IMB Alltoall
  - `IMBGatherv()`: inspired by IMB Gatherv
  - `IMBReduce()`: inspired by IMB Reduce
  - `OSUAllreduce()`: inspired by OSU Allreduce
  - `OSUAlltoall()`: inspired by OSU Alltoall
  - `OSUReduce()`: inspired by OSU Reduce
- point-to-point:
  - `IMBPingPong()`: inspired by IMB PingPong
  - `IMBPingPing()`: inspired by IMB PingPing
  - `OSULatency()`: inspired by OSU Latency
After loading the package with

```julia
using MPIBenchmarks
```

run a benchmark with the function `benchmark(<BENCHMARK TYPE>)`, replacing `<BENCHMARK TYPE>` with the name of the benchmark you want to run, from the list above. The benchmark types take the following arguments:

- optional positional argument:
  - `T::Type`: data type to use for the communication. It must be a bits type, with a size in bytes which is a power of 2. Default is `UInt8`.
- keyword arguments:
  - `verbose::Bool`: whether to print some information to `stdout`. Default is `true`.
  - `max_size::Int`: maximum size of the data to transmit, in bytes. It must be a power of 2 and larger than the size of the datatype `T`.
  - `filename::Union{String,Nothing}`: name of the output CSV file where to save the results of the benchmark. If `nothing`, the file is not written. Default is a string with the name of the given benchmark (e.g., `"julia_imb_pingpong.csv"` for `IMBPingPong`).
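For instance, combining the arguments above, a run with a custom datatype and output file could look like the following sketch (it assumes the positional type and the keyword arguments are passed to the benchmark type's constructor, and must be launched under MPI as described below):

```julia
using MPIBenchmarks

# OSU-style Allreduce with Float32 elements, messages capped at 64 KiB,
# results written to a custom CSV file.
benchmark(OSUAllreduce(Float32; max_size = 1 << 16, filename = "osu_allreduce_f32.csv"))

# Same benchmark with default UInt8 data, console output silenced,
# and no CSV file written.
benchmark(OSUAllreduce(; verbose = false, filename = nothing))
```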
**NOTE**: kernels of the benchmarks in the IMB and OSU suites are usually very
similar, if not identical. After all, they benchmark the same MPI functions.
However, there are usually subtle differences, for example with regard to the
number of iterations, the datatypes used, how buffers are initialized, etc.,
which can slightly affect the results. `MPIBenchmarks.jl` tries to match what the
original benchmarks do, but there is no guarantee about this and there may still
be unwanted differences. If you spot any, please open an issue or submit a pull
request. As a rule of thumb, the OSU benchmarks tend to be easier to follow than
the IMB ones, so our implementations of the former should generally be more
faithful than our implementations of the latter.
### Running the benchmarks

Write a script like the following:

```julia
using MPIBenchmarks

# Collective benchmarks
benchmark(IMBAllreduce())
benchmark(IMBAlltoall())
benchmark(IMBGatherv())
benchmark(IMBReduce())
benchmark(OSUAllreduce())
benchmark(OSUAlltoall())
benchmark(OSUReduce())

# Point-to-point benchmarks.
# NOTE: they require exactly two MPI processes.
benchmark(IMBPingPong())
benchmark(IMBPingPing())
benchmark(OSULatency())
```

Then execute it with a command like

```
mpiexecjl -np 2 julia --project mpi_benchmarks.jl
```
where:

- `mpiexecjl` is `MPI.jl`'s wrapper for `mpiexec`;
- `2` is the number of MPI processes to launch. Use any other number suitable for your benchmarks, typically at least 2. Note that point-to-point benchmarks require exactly 2 processes, so if you want to use more processes for the collective benchmarks you will have to run them in a separate script;
- `mpi_benchmarks.jl` is the name of the script you created.
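If `mpiexecjl` is not yet on your `PATH`, `MPI.jl` can install the wrapper for you; a minimal sketch (by default the script is placed in `~/.julia/bin`, which you then add to your `PATH`):

```julia
using MPI

# Install the mpiexecjl wrapper script for the MPI library used by MPI.jl.
MPI.install_mpiexecjl()
```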
## License

The `MPIBenchmarks.jl` package is licensed under the MIT "Expat" License. The original
author is Mosè Giordano.