How to compare HPX perf between ETH and IB? #6588
-
How do you know that it is 'not working'? What do you expect to see? Also, (assuming …
-
Hi @hkaiser, First, I'm executing alltoall, allgather, and allreduce in HPX and measuring the execution time with tic-toc, but there is no difference between ETH and IB. Are you asking because there are no commands like mpirun or mpiexec? I am submitting jobs through SLURM, so I am running them in a multi-process manner. Would it be correct to say that the command below executes it in distributed mode? mpirun -np 4 ./hpx_test
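The tic-toc measurement mentioned above can also be done at the shell level around the whole run, which avoids any in-process instrumentation. A minimal sketch (the sleep is a stand-in for the actual srun/hpx_test invocation on the cluster; nanosecond resolution via GNU date is assumed):

```shell
# Tic-toc timing at the shell level; replace the sleep stand-in with the
# real benchmark command, e.g.: srun ./hpx_test --hpx:threads=1 ...
start_ns=$(date +%s%N)          # tic
sleep 0.2                       # stand-in for the benchmark run
end_ns=$(date +%s%N)            # toc
elapsed_ms=$(( (end_ns - start_ns) / 1000000 ))
echo "elapsed: ${elapsed_ms} ms"
```

Running the same wrapper once per transport configuration keeps the two measurements directly comparable.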
-
Yes, that answers my question. This runs things in distributed mode. Depending on your srun setup, however, this may run all four ranks on the same physical compute node, in which case networking would go through shared memory in any case. Also, enabling logging will slow down the execution significantly, which might hide any performance differences you're trying to assess. In any case, to answer your initial question: HPX does not interfere with the MPI environment settings you specify.
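The on-node placement caveat above is easy to check before benchmarking. A sketch, assuming a SLURM cluster (node counts and task counts are illustrative):

```shell
# Check where SLURM actually places the ranks: hostnames should differ
# if the job really spans multiple nodes.
srun -N 2 --ntasks-per-node=2 hostname | sort | uniq -c
# If all ranks report the same hostname, MPI traffic stays on-node
# (shared memory / vader), and ETH vs IB will make no measurable difference.
```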
-
Thank you for your prompt response. If there are any issues with my execution, I would like to correct them.
-
Hi, how can I compare the performance of Ethernet and InfiniBand in HPX?
I am testing on a cluster where both Ethernet and InfiniBand are connected. Controlling the transport through OMPI parameters does not seem to work, so I would like to ask for clarification.
Here are our commands:
ETH:
export OMPI_MCA_pml=ob1
export OMPI_MCA_btl=tcp,self,vader
hpx_test --hpx:print-bind --hpx:threads=1 --hpx:ini=hpx.logging.level=4 --hpx:debug-hpx-log --hpx:ini=hpx.parcel.mpi.enable=1 --hpx:ini=hpx.parcel.tcp.enable=0 --hpx:ini=hpx.parcel.mpi.multithreaded=0
IB:
export OMPI_MCA_pml=ucx
export UCX_TLS=all
hpx_test --hpx:print-bind --hpx:threads=1 --hpx:ini=hpx.logging.level=4 --hpx:debug-hpx-log --hpx:ini=hpx.parcel.mpi.enable=1 --hpx:ini=hpx.parcel.tcp.enable=0 --hpx:ini=hpx.parcel.mpi.multithreaded=0
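One way to confirm that the two configurations really use different networks is to ask Open MPI and UCX directly. A sketch, assuming UCX tooling is installed on the nodes (the interface name eth0 is an assumption; use the actual Ethernet device on your cluster):

```shell
# For the UCX/IB path: raise the UCX log level so the selected transports
# are printed at startup, and list what UCX can see on this node.
export UCX_LOG_LEVEL=info
ucx_info -d | grep Transport

# For the ob1/TCP path: pin the TCP BTL to the Ethernet interface so it
# cannot silently fall back to the IPoIB interface.
export OMPI_MCA_btl_tcp_if_include=eth0   # interface name is an assumption
```

If the ETH run accidentally goes over IPoIB, the two measurements will look suspiciously similar even though the transport selection "worked".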
Could it be that other HPX runtime options are taking precedence?
Thank you in advance for your support.