Following up on HighFive: running with more than one MPI rank should work. During testing, however, I got a batch of oversubscription errors whenever I requested two or more ranks:
```
There are not enough slots available in the system to satisfy the 4
slots that were requested by the application:

  /build/src/parquet2hdf5

Either request fewer slots for your application, or make more slots
available for use.

A "slot" is the Open MPI term for an allocatable unit where we can
launch a process. The number of slots available are defined by the
environment in which Open MPI processes are run:

  1. Hostfile, via "slots=N" clauses (N defaults to number of
     processor cores if not provided)
  2. The --host command line parameter, via a ":N" suffix on the
     hostname (N defaults to 1 if not provided)
  3. Resource manager (e.g., SLURM, PBS/Torque, LSF, etc.)
  4. If none of a hostfile, the --host command line parameter, or an
     RM is present, Open MPI defaults to the number of processor cores

In all the above cases, if you want Open MPI to default to the number
of hardware threads instead of the number of processor cores, use the
--use-hwthread-cpus option.

Alternatively, you can use the --oversubscribe option to ignore the
number of available slots when deciding the number of processes to
launch.
```
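The error message itself lists the likely fixes. A sketch of the three options, assuming the build container exposes fewer CPU cores than the number of ranks requested (the hostfile name `hosts.txt` is just an example, not from the original report):

```shell
# Option 1: allow more ranks than Open MPI detects as available slots.
mpirun --oversubscribe -n 4 /build/src/parquet2hdf5

# Option 2: count hardware threads rather than physical cores as slots.
mpirun --use-hwthread-cpus -n 4 /build/src/parquet2hdf5

# Option 3: declare the slot count explicitly via a hostfile.
echo "localhost slots=4" > hosts.txt
mpirun --hostfile hosts.txt -n 4 /build/src/parquet2hdf5
```

Option 1 is the usual choice in CI containers, where the visible core count is often smaller than the rank count the tests ask for.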