Fix NaN propagation for float16 min and max operators #22161

Open
wants to merge 3 commits into main

Conversation

adamreeve
Contributor

Description

This makes min and max with NaN for either operand always return NaN for float16 data, matching the behaviour of float and double.

The behaviour for floats and doubles was previously fixed for the CPU provider in #21492 and for the CUDA provider in #19984, but those PRs didn't fix float16 because the float16 tests triggered ASan errors. The underlying memory access violations with float16 data were fixed in #22135, so this PR is a follow-up that makes float16 min and max behave the same as float and double for both the CPU and CUDA providers, now that tests can be added for this.
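
For illustration only (this snippet is not part of the PR), here is a minimal standalone sketch of the intended semantics, assuming Eigen::half is available via Eigen/Core: a plain comparison-based min silently drops the NaN because (NaN < x) is false, while the NaN-propagating form returns NaN whenever either operand is NaN.

    #include <cmath>
    #include <iostream>
    #include <Eigen/Core>

    int main() {
      Eigen::half nan_h = Eigen::NumTraits<Eigen::half>::quiet_NaN();
      Eigen::half one_h = Eigen::half(1.0f);

      // Naive comparison: (NaN < 1.0) is false, so this returns 1.0 and loses the NaN.
      Eigen::half naive_min = nan_h < one_h ? nan_h : one_h;

      // NaN-propagating form: returns NaN if either operand is NaN,
      // which is the behaviour this PR gives min and max for float16.
      Eigen::half propagating_min =
          (std::isnan(static_cast<float>(nan_h)) || std::isnan(static_cast<float>(one_h)))
              ? Eigen::NumTraits<Eigen::half>::quiet_NaN()
              : (nan_h < one_h ? nan_h : one_h);

      std::cout << static_cast<float>(naive_min) << " "          // prints 1
                << static_cast<float>(propagating_min) << "\n";  // prints nan
    }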

Motivation and Context

Relevant previous issues (not float16 specific):

@@ -360,6 +366,16 @@ __device__ __inline__ double _Min(double a, double b) {
   return (isnan(a) || isnan(b)) ? std::numeric_limits<double>::quiet_NaN() : ( a < b ? a : b );
 }

+template <>
+__device__ __inline__ half _Min(half a, half b) {
+  return ISNAN_HALF(a) ? a : (ISNAN_HALF(b) ? b : (a < b ? a : b));
Contributor

@tianleiwu Sep 20, 2024


May use __hmin_nan(a, b) for _Min and __hmax_nan(a, b) for _Max when a/b are half or bfloat16 (cast to __nv_bfloat16). See https://docs.nvidia.com/cuda/archive/12.1.0/cuda-math-api/group__CUDA__MATH____HALF__COMPARISON.html#group__CUDA__MATH____HALF__COMPARISON_1g9752fee573b47368538ab19495ab9623

The difference is that those functions return a canonical NaN, which is more consistent with float/double (see line 366).
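
For reference, a minimal sketch of what that suggestion could look like (this is not the code in this PR, and __hmin_nan/__hmax_nan availability depends on the CUDA toolkit version and target architecture, so check the linked docs before relying on them):

    template <>
    __device__ __inline__ half _Min(half a, half b) {
      // __hmin_nan returns a canonical NaN when either operand is NaN.
      return __hmin_nan(a, b);
    }

    template <>
    __device__ __inline__ half _Max(half a, half b) {
      // __hmax_nan does the same for the max case.
      return __hmax_nan(a, b);
    }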

Comment on lines 759 to +764
       if (is_min) {
-        output_vec_map = input_1_vec_map.min(static_cast<Eigen::half>(per_iter_bh.ScalarInput0<MLFloat16>()));
+        output_vec_map = input_1_vec_map.template min<Eigen::PropagateNaN>(
+            static_cast<Eigen::half>(per_iter_bh.ScalarInput0<MLFloat16>()));
       } else {
-        output_vec_map = input_1_vec_map.max(static_cast<Eigen::half>(per_iter_bh.ScalarInput0<MLFloat16>()));
+        output_vec_map = input_1_vec_map.template max<Eigen::PropagateNaN>(
+            static_cast<Eigen::half>(per_iter_bh.ScalarInput0<MLFloat16>()));
Contributor

@tianleiwu Sep 20, 2024


How about logic like

        Eigen::half scalar_input = static_cast<Eigen::half>(per_iter_bh.ScalarInput0<MLFloat16>());
        scalar_input = std::isnan(static_cast<float>(scalar_input)) ? Eigen::NumTraits<Eigen::half>::quiet_NaN() : scalar_input;

        if (is_min) {
            output_vec_map = input_1_vec_map.isNaN().select(
                Eigen::NumTraits<Eigen::half>::quiet_NaN(),
                input_1_vec_map.template min<Eigen::PropagateNaN>(scalar_input)
            );
        } else {
            output_vec_map = input_1_vec_map.isNaN().select(
                Eigen::NumTraits<Eigen::half>::quiet_NaN(),
                input_1_vec_map.template max<Eigen::PropagateNaN>(scalar_input)
            );
        }

@@ -790,9 +794,9 @@ static Status MinMaxMLFloat16(const OpKernel& inst, OpKernelContext* context) {
   EigenVectorArrayMap<Eigen::half> output_vec_map(output, num_elements);

   if (is_min) {
-    output_vec_map = input_0_vec_map.min(input_1_vec_map);
+    output_vec_map = input_0_vec_map.template min<Eigen::PropagateNaN>(input_1_vec_map);
Contributor

@tianleiwu Sep 20, 2024


How about logic like

output_vec_map = (input_0_vec_map.isNaN() || input_1_vec_map.isNaN()).select(
                Eigen::NumTraits<Eigen::half>::quiet_NaN(),
                input_0_vec_map.template min<Eigen::PropagateNaN>(input_1_vec_map)
            );
