Implement reduce precision FP8 MNIST training example. #87

Merged
merged 1 commit into from
Jan 16, 2024

Conversation

balancap (Contributor)
Supports FP8 simulated training using the `ml_dtypes` library, allowing custom casts on the forward and backward passes.
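The idea of a custom cast on the forward and backward passes can be sketched with `jax.custom_vjp`: round activations through an FP8 format on the forward pass and round gradients through a (typically wider-range) FP8 format on the backward pass, keeping the computation itself in the original dtype. This is only an illustrative sketch under assumed conventions (`float8_e4m3fn` forward, `float8_e5m2` backward); the `fp8_cast` name and helpers are hypothetical, not the PR's actual implementation.

```python
import jax
import jax.numpy as jnp
import ml_dtypes


@jax.custom_vjp
def fp8_cast(x):
    # Forward: simulate FP8 by round-tripping through float8_e4m3fn,
    # then casting back to the input dtype.
    return x.astype(ml_dtypes.float8_e4m3fn).astype(x.dtype)


def fp8_cast_fwd(x):
    # Keep the input as residual so the backward pass knows the dtype.
    return fp8_cast(x), x


def fp8_cast_bwd(x, g):
    # Backward: simulate FP8 on the gradient using float8_e5m2,
    # which trades mantissa bits for a wider exponent range.
    return (g.astype(ml_dtypes.float8_e5m2).astype(x.dtype),)


fp8_cast.defvjp(fp8_cast_fwd, fp8_cast_bwd)
```

Inserting `fp8_cast` around a layer's inputs or weights then simulates FP8 training while all arithmetic still runs in, say, `float32`.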

@balancap force-pushed the implement-reduce-precision-fp8-mnist branch from 6b8d366 to f6dbd53 on January 16, 2024 at 15:09
@balancap force-pushed the implement-reduce-precision-fp8-mnist branch from f6dbd53 to aa4b779 on January 16, 2024 at 15:33
@balancap self-assigned this on Jan 16, 2024
@balancap added the ops (Ops coverage) and experiments (Experiments) labels on Jan 16, 2024
@balancap merged commit a1ea373 into main on Jan 16, 2024
2 checks passed
@balancap deleted the implement-reduce-precision-fp8-mnist branch on January 16, 2024 at 16:25
@balancap linked an issue on Jan 16, 2024 that may be closed by this pull request
Labels: experiments (Experiments), ops (Ops coverage)
Projects: None yet
Development: successfully merging this pull request may close these issues:
MNIST training example in FP8
1 participant