
Propagate NaNs in the CPU min and max operators #21492

Merged: 6 commits into microsoft:main on Jul 29, 2024

Conversation

adamreeve (Contributor) commented on Jul 25, 2024

Description

Propagates NaN values in the min and max operators so that min or max with a NaN in either input always produces NaN.

This only fixes NaN propagation for the float and double data types, due to invalid read errors when testing NaNs with float16 data: see #21492 (comment).
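
For illustration, here's a minimal sketch of the intended semantics (PropagatingMin is a hypothetical standalone helper, not the kernel code this PR changes):

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <limits>

// std::min(a, b) returns a when (b < a) is false, and any comparison with
// NaN is false, so a NaN in the second argument is silently dropped.
// A NaN-propagating variant returns NaN if either input is NaN.
template <typename T>
T PropagatingMin(T a, T b) {
  return (std::isnan(a) || std::isnan(b)) ? std::numeric_limits<T>::quiet_NaN()
                                          : std::min(a, b);
}

int main() {
  const float nan = std::numeric_limits<float>::quiet_NaN();
  std::printf("%f\n", std::min(1.0f, nan));        // prints 1.000000: NaN lost
  std::printf("%f\n", PropagatingMin(1.0f, nan));  // prints nan: NaN propagated
}
```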

Motivation and Context

Fixes #21455 (Incorrect NaN handling for Min and Max operators on CPU with a single element input).

skottmckay (Contributor) commented:

Should there be a test where the NaN is the scalar input, to check that it is propagated throughout the broadcast?

e.g. an input of shape {2,2} with no NaNs and an input of shape {1} containing NaN should result in an all-NaN output, IIUC.
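
Such a test might look like the following sketch using onnxruntime's OpTester utility (the test name and opset version are illustrative, not the PR's actual test code):

```cpp
#include <limits>

#include "gtest/gtest.h"
#include "test/providers/provider_test_utils.h"

namespace onnxruntime {
namespace test {

// A {2,2} input with no NaNs and a {1} scalar NaN input: broadcasting the
// scalar should make every output element NaN.
TEST(MathOpTest, Min_13_Float_ScalarNaNBroadcast) {
  const float nan = std::numeric_limits<float>::quiet_NaN();
  OpTester test("Min", 13);
  test.AddInput<float>("data_0", {2, 2}, {1.0f, 2.0f, 3.0f, 4.0f});
  test.AddInput<float>("data_1", {1}, {nan});
  test.AddOutput<float>("min", {2, 2}, {nan, nan, nan, nan});
  test.Run();
}

}  // namespace test
}  // namespace onnxruntime
```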

adamreeve (Contributor, Author) commented:

@microsoft-github-policy-service agree company="G-Research"

adamreeve (Contributor, Author) commented:

> Should there be a test where the NaN is the scalar to check it is propagated throughout the broadcast?

Good idea, thanks. I've added those tests now and fixed the formatting errors.

skottmckay (Contributor) commented:

/azp run Windows ARM64 QNN CI Pipeline,Windows x64 QNN CI Pipeline,Windows CPU CI Pipeline,Windows GPU CI Pipeline,Windows GPU TensorRT CI Pipeline,ONNX Runtime Web CI Pipeline,Linux CPU CI Pipeline,Linux CPU Minimal Build E2E CI Pipeline,Linux GPU CI Pipeline,Linux GPU TensorRT CI Pipeline

skottmckay (Contributor) commented:

/azp run Linux OpenVINO CI Pipeline,Linux QNN CI Pipeline,MacOS CI Pipeline,orttraining-amd-gpu-ci-pipeline,orttraining-linux-ci-pipeline,orttraining-linux-gpu-ci-pipeline,orttraining-ortmodule-distributed,onnxruntime-binary-size-checks-ci-pipeline,Big Models,Linux Android Emulator QNN CI Pipeline

skottmckay (Contributor) commented:

/azp run Android CI Pipeline,iOS CI Pipeline,ONNX Runtime React Native CI Pipeline

azure-pipelines (bot) commented:

Azure Pipelines successfully started running 3 pipeline(s).

azure-pipelines (bot) commented:

Azure Pipelines successfully started running 9 pipeline(s).

azure-pipelines (bot) commented:

Azure Pipelines successfully started running 10 pipeline(s).

skottmckay previously approved these changes on Jul 28, 2024
adamreeve (Contributor, Author) commented:

It looks like there are some issues with the ASan runs and the MLFloat16 tests.

And the web builds also seem to have a problem with these new tests: https://dev.azure.com/onnxruntime/onnxruntime/_build/results?buildId=1447211&view=logs&j=990616d7-1f75-5fe5-5c67-c84a39482fba&t=67a495a8-8444-507d-6780-4eb1ddef3707&l=14255

The same problem was encountered in #19984 (comment), where no changes were made to element_wise_ops.cc but tests were added for handling NaNs with MLFloat16.

I'm not sure what's causing this; at first glance it looks like a bug in Eigen rather than onnxruntime. The best way forward is probably to do the same as before and revert the MLFloat16-related changes for now; I can open a new issue to follow up on this.

skottmckay (Contributor) commented:

/azp run Windows ARM64 QNN CI Pipeline,Windows x64 QNN CI Pipeline,Windows CPU CI Pipeline,Windows GPU CI Pipeline,Windows GPU TensorRT CI Pipeline,ONNX Runtime Web CI Pipeline,Linux CPU CI Pipeline,Linux CPU Minimal Build E2E CI Pipeline,Linux GPU CI Pipeline,Linux GPU TensorRT CI Pipeline

skottmckay (Contributor) commented:

/azp run Linux OpenVINO CI Pipeline,Linux QNN CI Pipeline,MacOS CI Pipeline,orttraining-amd-gpu-ci-pipeline,orttraining-linux-ci-pipeline,orttraining-linux-gpu-ci-pipeline,orttraining-ortmodule-distributed,onnxruntime-binary-size-checks-ci-pipeline,Big Models,Linux Android Emulator QNN CI Pipeline

skottmckay (Contributor) commented:

/azp run Android CI Pipeline,iOS CI Pipeline,ONNX Runtime React Native CI Pipeline

azure-pipelines (bot) commented:

Azure Pipelines successfully started running 3 pipeline(s).

azure-pipelines (bot) commented:

Azure Pipelines successfully started running 9 pipeline(s).

azure-pipelines (bot) commented:

Azure Pipelines successfully started running 10 pipeline(s).

skottmckay merged commit 7543dd0 into microsoft:main on Jul 29, 2024; 82 checks passed.

adamreeve deleted the min_max_nan_fix branch on July 29, 2024 at 23:25.
tianleiwu pushed a commit that referenced this pull request on Sep 24, 2024:
This makes min and max with NaN for either operand always return NaN for
float16 data, matching the behaviour of float and double.

The behaviour for float and double was previously fixed for the CPU provider in #21492 and the CUDA provider in #19984, but those PRs didn't fix the behaviour for float16 because the tests caused ASan errors. The memory access violations with float16 data have now been fixed in #22135, so this PR is a follow-up that makes float16 min and max behave the same as float and double for both the CPU and CUDA providers, now that tests for this can be added.

Motivation and Context

Relevant previous issues (not float16 specific):
* #21455
* onnx/onnx#6003
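
As an aside, detecting NaN for half-precision values can be done directly on the bit pattern; a minimal sketch (this Half struct and helper are hypothetical, not onnxruntime's MLFloat16 API):

```cpp
#include <cstdint>
#include <cstdio>

// Hypothetical 16-bit float wrapper for illustration; onnxruntime's real
// half-precision type is MLFloat16.
struct Half {
  uint16_t bits;
};

// IEEE 754 binary16: a value is NaN when the exponent bits (0x7C00) are all
// set and the mantissa (low 10 bits) is nonzero.
bool IsNaNHalf(Half h) {
  return (h.bits & 0x7C00) == 0x7C00 && (h.bits & 0x03FF) != 0;
}

int main() {
  Half quiet_nan{0x7E00};  // canonical quiet NaN
  Half infinity{0x7C00};   // +inf: exponent all ones, mantissa zero
  std::printf("%d %d\n", IsNaNHalf(quiet_nan), IsNaNHalf(infinity));  // 1 0
}
```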