Description
After a bunch of discussion, the intended semantics of min/max on scalar floats have been clarified to be exactly what we have documented for years already: they behave like the IEEE 754-2019 minimumNumber/maximumNumber operations, except that signed zeros are ordered non-deterministically. (Equivalently: they behave like IEEE 754-2008 minNum/maxNum, except that when one input is a signaling NaN and the other is a number, the number is returned.) This corresponds to llvm.minimumnum/llvm.maximumnum with the nsz flag in LLVM (as clarified by llvm/llvm-project#172012). #153343 fixes codegen to actually implement those semantics.
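Concretely, the clarified scalar semantics are what `f32::max` documents: a NaN input is ignored in favor of the number, and the ordering of signed zeros is left unspecified. A minimal sketch (assuming the documented behavior of the standard library methods):

```rust
fn main() {
    // A (quiet) NaN input is ignored in favor of the number.
    assert_eq!(f32::NAN.max(1.0), 1.0);
    assert_eq!(1.0f32.max(f32::NAN), 1.0);
    // Only if both inputs are NaN is the result NaN.
    assert!(f32::NAN.max(f32::NAN).is_nan());
    // Signed zeros are ordered non-deterministically:
    // this may be 0.0 or -0.0, so we do not assert on it.
    let _z = (-0.0f32).max(0.0);
}
```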
For SIMD, we have a bunch of intrinsics where the same question arises: simd_fmin and simd_fmax, as well as simd_reduce_min and simd_reduce_max, which also support floats. They are documented to use "IEEE-754 maxNum", which I presume refers to the IEEE 754-2008 operation. This is strangely inconsistent with the scalar operations we expose. It is also not what const-eval and Miri do; they implement the same semantics for SIMD min/max as for scalar min/max. The only difference is the behavior for signaling NaNs, which apparently none of the test suites cover. But obviously, Miri having different behavior than rustc is bad.
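To make the divergence concrete, here is a sketch of both rules as plain reference functions (these helpers are illustrative assumptions, not the actual implementations; the signaling-NaN bit pattern assumes the usual f32 encoding, and NaN bit patterns are not guaranteed to be preserved on every target):

```rust
// f32 signaling NaN: exponent all ones, quiet bit (bit 22) clear, nonzero payload.
fn is_snan(x: f32) -> bool {
    x.is_nan() && x.to_bits() & 0x0040_0000 == 0
}

// Clarified scalar semantics: any NaN input (quiet or signaling) is
// ignored in favor of a number.
fn max_scalar(a: f32, b: f32) -> f32 {
    if a.is_nan() { b } else if b.is_nan() { a } else if a > b { a } else { b }
}

// IEEE 754-2008 maxNum, as currently documented for simd_fmax: only
// *quiet* NaNs are ignored; a signaling NaN input makes the result NaN.
fn max_num_2008(a: f32, b: f32) -> f32 {
    if is_snan(a) || is_snan(b) { return f32::NAN; }
    max_scalar(a, b)
}

fn main() {
    let snan = f32::from_bits(0x7f80_0001);
    let qnan = f32::NAN;
    // Quiet NaNs: both rules agree.
    assert_eq!(max_scalar(qnan, 1.0), 1.0);
    assert_eq!(max_num_2008(qnan, 1.0), 1.0);
    // Signaling NaNs: the rules diverge.
    assert_eq!(max_scalar(snan, 1.0), 1.0);
    assert!(max_num_2008(snan, 1.0).is_nan());
}
```

So a test suite that only feeds in quiet NaNs cannot tell the two apart, which matches the observation that the discrepancy went unnoticed.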
So... should we change the SIMD intrinsics to match what the scalar operations do? I think we should. However, LLVM does not currently seem to have a version of llvm.vector.reduce.* with minimumnum/maximumnum semantics (Cc @nikic), so it is currently tricky to implement this correctly. We also have to be careful about stdarch using these intrinsics to implement vendor intrinsics -- there are some uses of all of these intrinsics in stdarch (Cc @folkertdev @Amanieu). This also affects portable-simd, but since that is unstable it seems fine; AFAIK the design goal there is generally to match the scalar standard library (Cc @workingjubilee @calebzulawski).