Describe the bug
PCC is dropping due to precision loss in the logaddexp2 function. While debugging, we observed that exp2 returns zero for certain inputs, whereas Torch produces nonzero values at that precision level, which results in the PCC drop.
@ttmtrajkovic I have discussed this with @umadevimcw offline. The issue above is that the exp2 implementation on device does not output the value 1.1977575e-21, which is representable in float16b (bfloat16); PyTorch with the float16b data format does represent it, and this causes precision failures downstream.
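A quick host-side check (a minimal sketch using plain PyTorch, independent of the device) confirms that 1.1977575e-21 survives a round-trip through bfloat16, i.e. the format itself can hold the value the device flushes to zero:

```python
import torch

# 1.1977575e-21 is roughly 2**-69.5; bfloat16 shares float32's 8-bit
# exponent, so normal values down to ~1.18e-38 are representable.
x = torch.tensor(1.1977575e-21, dtype=torch.float32)
x_bf16 = x.to(torch.bfloat16)

print(x_bf16.item())         # ~1.19e-21, not 0.0
assert x_bf16.item() != 0.0  # representability is not the problem
```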
@umadevimcw @eyonland please let us know the priority of this issue: is it only failing unit tests, or is it also failing on models due to the downstream precision issue?
This issue blocks #6391, #8634, #13973, and #13930.
To Reproduce
Steps to reproduce the behavior:
Copy-paste the code below to reproduce the precision loss.
In this code, the input values are fixed for debugging purposes to showcase the precision loss.
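A minimal sketch of such a comparison, assuming the ttnn Python API (ttnn.open_device, ttnn.from_torch, ttnn.exp2, ttnn.to_torch) and a hypothetical fixed input of -69.5, chosen so that torch.exp2 yields ~1.1977575e-21; the exact tensors in the original report may differ:

```python
import torch
import ttnn

# Hypothetical fixed input: exp2(-69.5) ~= 1.1977e-21, which is
# representable in bfloat16 but observed as 0 on device.
torch_input = torch.full((1, 1, 32, 32), -69.5, dtype=torch.bfloat16)
golden = torch.exp2(torch_input)  # nonzero reference values

device = ttnn.open_device(device_id=0)
tt_input = ttnn.from_torch(
    torch_input, dtype=ttnn.bfloat16, layout=ttnn.TILE_LAYOUT, device=device
)
tt_output = ttnn.to_torch(ttnn.exp2(tt_input))
ttnn.close_device(device)

# If the device flushes these to 0.0, PCC against golden collapses.
print(golden.flatten()[0].item(), tt_output.flatten()[0].item())
```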
Expected behavior
Device exp2 should produce values matching torch.exp2 at bfloat16 precision for the same inputs (e.g., ~1.1977575e-21 rather than 0), so that logaddexp2 and the ops depending on it pass their PCC checks.
Screenshots
Please complete the following environment information:
Additional context
TT's exp2 op internally depends on the exp op (mathematically, exp2(x) = exp(x · ln 2)), so any precision or range limitation in exp propagates to exp2.
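As a rough illustration of why that dependency matters (a host-side sketch, not the actual device kernel; the clamped input range below is hypothetical), an exp with a limited argument range makes exp2 flush to zero even when the true result is representable:

```python
import math

def device_like_exp(x: float, min_arg: float = -40.0) -> float:
    # Hypothetical exp approximation with a limited input range:
    # arguments below min_arg flush the result to zero.
    return 0.0 if x < min_arg else math.exp(x)

def exp2_via_exp(x: float) -> float:
    # exp2 built on top of exp: exp2(x) = exp(x * ln 2).
    return device_like_exp(x * math.log(2.0))

print(exp2_via_exp(-69.5))  # 0.0, since -69.5 * ln 2 ≈ -48.2 < -40
print(2.0 ** -69.5)         # ~1.1977e-21, the representable reference
```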