Fixed bug in precision logic #232

Merged: 1 commit, Oct 12, 2023
Commits on Mar 21, 2023

  1. Fixed bug in precision logic

    My change aligns val() with the implementation upstream. Without it, the data samples are ALWAYS cast to half precision when run on CUDA. However, the model may not be half precision in that case, per the logic above this section (the "if training: ... else: ..." block, specifically lines 145 and 149). As a result, the code ALWAYS crashes on CUDA when --half is not passed: the model weights are full precision, but all inputs are half precision.
    
    My change ensures that weights and inputs have the same precision. Additionally, half precision is used if and only if --half is passed, the device is not CPU, and the specific backend supports half precision (see the sketch after this commit entry).
    SIR-unit authored Mar 21, 2023
    Commit a8689e4
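A minimal sketch of the corrected precision logic described above, assuming a PyTorch-style val() function. The signature, the dataloader, and the variable names are hypothetical, reconstructed from the commit description rather than taken from the repository:

```python
import torch

def val(model, dataloader, device, half=False):
    """Hypothetical validation loop illustrating the corrected precision logic."""
    # Half precision is used if and only if --half was passed AND the device
    # is not CPU (CPU backends generally lack half-precision inference support).
    half = half and device.type != "cpu"

    # Cast the model once so that weights and inputs share a single dtype.
    model = model.half() if half else model.float()
    model.eval()

    with torch.no_grad():
        for images, _targets in dataloader:
            images = images.to(device)
            # The buggy version cast inputs to half unconditionally on CUDA,
            # crashing whenever the weights were full precision. Casting
            # conditionally keeps the input dtype aligned with the weights.
            images = images.half() if half else images.float()
            _ = model(images)
```

Deriving both casts from the single half flag makes the weight/input dtype agreement an invariant, rather than a coincidence of two independently maintained code paths.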