[benchmarks] Fix execution with AMP precision. #6512
Conversation
Hi @ysiraichi, did you see any recent issue where the GPU XLA backend also failed to run? I am not sure whether this is related to the data-format changes.
Not really...
Overall, LGTM.
It's not exactly an error. But this is how AMP is supposed to be used. It is also how PyTorch's benchmarking script uses it (which matters if we want comparable numbers).
This PR changes the benchmarks so that AMP is activated only during the forward execution.
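A minimal sketch of the pattern this PR adopts (not the PR's actual diff; the model, optimizer, and shapes below are illustrative): the autocast context wraps only the forward computation, while `backward()` and the optimizer step run outside it, matching how PyTorch's own benchmarking script uses AMP.

```python
import torch

# Hypothetical toy model and data, for illustration only.
model = torch.nn.Linear(8, 4)
opt = torch.optim.SGD(model.parameters(), lr=0.01)
x = torch.randn(2, 8)
target = torch.randn(2, 4)

# AMP is enabled only for the forward pass...
with torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    loss = torch.nn.functional.mse_loss(model(x), target)

# ...while the backward pass and optimizer step run
# outside the autocast context.
loss.backward()
opt.step()
```

On CUDA devices this would use `device_type="cuda"` (and typically a `GradScaler` for float16); the structural point is the same: only the forward execution sits inside the autocast region.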
cc @miladm