
New feature: single precision build #485

Merged · 10 commits merged into NCAR:main on Jun 10, 2024

Conversation

@scrasmussen (Member) commented May 24, 2024

Single Precision SCM Build

New single precision build for the single column model.

Build steps:

cmake ../src -D32BIT=ON

  • added precision_analysis.py script to easily build and run single and double precision (a sketch of such a driver follows this list)
$ python etc/scripts/precision_analysis.py --configure
$ python etc/scripts/precision_analysis.py --build
$ python etc/scripts/precision_analysis.py --run
  • added single precision build and run steps to ci_run_scm_rts.yml workflow
  • added documentation
  • automatic whitespace cleanup from editor
  • will add Jupyter Notebook with analysis of single vs. double precision output to PR
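
To illustrate the kind of wrapper this script provides, here is a minimal, hypothetical sketch of a configure/build driver in the spirit of etc/scripts/precision_analysis.py. The option names, build directory, and make invocation are assumptions rather than the actual script's interface, and the --run step is omitted.

```python
# Hypothetical sketch of a configure/build driver similar in spirit to
# etc/scripts/precision_analysis.py; paths and options are assumptions.
import argparse
import subprocess
from pathlib import Path


def run(cmd, cwd=None):
    """Echo a command and run it, stopping on the first failure."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, cwd=cwd, check=True)


def configure(build_dir: Path, single_precision: bool) -> None:
    """Configure the SCM build, optionally enabling the single precision option."""
    build_dir.mkdir(parents=True, exist_ok=True)
    cmake_args = ["cmake", "../src"]
    if single_precision:
        cmake_args.append("-D32BIT=ON")  # single precision build
    run(cmake_args, cwd=build_dir)


def build(build_dir: Path) -> None:
    run(["make", "-j4"], cwd=build_dir)


if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="toy precision build driver")
    parser.add_argument("--configure", action="store_true")
    parser.add_argument("--build", action="store_true")
    parser.add_argument("--single", action="store_true",
                        help="configure with -D32BIT=ON")
    args = parser.parse_args()

    build_dir = Path("bin")  # assumed build directory
    if args.configure:
        configure(build_dir, args.single)
    if args.build:
        build(build_dir)
```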

@scrasmussen (Member Author) commented:

PDF of the Jupyter notebook with relative error plots of single vs. double precision: single_vs_double_precision_output.pdf. These slides will be shared and discussed in the Thursday meeting.

The first batch of output differences looks at the whole run; the second batch is from the first time step, and those differing variables are all diagnostic tendencies.

@scrasmussen (Member Author) commented:

I've added more error-analysis metrics, so the max relative error, standard error, and mean absolute error are now printed for each variable above a threshold. Looking at the mean absolute error, things tend to look better, which I'm interpreting as good, but I'm not sure how much confidence to take from that.

I've split the analysis into the whole time series and just the period after initialization.
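
For reference, a small numpy sketch of the metrics described above (max relative error, mean absolute error, and standard error per variable, with a reporting threshold). The epsilon guard, array shapes, and stand-in data are illustrative assumptions, not the notebook's actual code.

```python
import numpy as np


def error_metrics(single, double, eps=1e-12):
    """Compare a single precision field against its double precision reference."""
    diff = single.astype(np.float64) - double
    # Guard the denominator so near-zero reference values do not blow up the ratio.
    rel_err = np.abs(diff) / np.maximum(np.abs(double), eps)
    return {
        "max_rel_err": float(np.max(rel_err)),
        "mean_abs_err": float(np.mean(np.abs(diff))),
        # Standard error of the mean difference.
        "std_err": float(np.std(diff) / np.sqrt(diff.size)),
    }


# Toy usage: a double precision "truth" and its float32 round trip, shaped (time, level).
rng = np.random.default_rng(0)
double = rng.normal(loc=280.0, scale=5.0, size=(360, 64))
single = double.astype(np.float32)

metrics = error_metrics(single, double)
print(metrics)  # in practice, only variables above a chosen threshold would be reported
```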

@grantfirl (Collaborator) commented:

@scrasmussen Thanks for all of the plots and analysis, Soren. I wonder what the results look like if we just plotted time-mean profiles of the same variables. E.g., for each precision, choose a time period (probably between 1 and 6 hours, either at the beginning or another time) and plot the mean profile of each variable over that time period. Also, perhaps plot the minimum and maximum profiles to get a sense of the range. If both single and double precision are plotted on the same plot, this is a traditional way to "eyeball" whether the differences really matter.

If we're only looking at relative errors, this can look really bad when dividing by really small numbers, which happens all the time. The MAE is helpful for sure and is perhaps more useful, IMO. Looking at profiles of MAE next to the profiles listed above should give a good sense of how much to worry about the differences.

Another test would be to do the same analysis, but for GNU vs. Intel compilers and/or Release/Debug modes, which have different optimizations. If these comparisons all look hand-wavily similar, that should tell us something too.
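
A sketch of the time-mean profile comparison suggested above, assuming each variable is available as a (time, level) array; the 1-6 hour window, output frequency, colors, and stand-in data are illustrative assumptions.

```python
import matplotlib.pyplot as plt
import numpy as np


def profile_stats(field, t0, t1):
    """Time-mean, minimum, and maximum profiles over the time-index window [t0, t1)."""
    window = field[t0:t1, :]
    return window.mean(axis=0), window.min(axis=0), window.max(axis=0)


# Toy (time, level) fields standing in for one variable from each run.
rng = np.random.default_rng(0)
levels = np.arange(64)
dp = 280.0 + rng.normal(scale=0.5, size=(360, 64))  # double precision run
sp = dp.astype(np.float32)                           # single precision run

t0, t1 = 60, 360  # e.g. hours 1-6 with output every minute
for run, color, label in [(dp, "blue", "double"), (sp, "red", "single")]:
    mean, lo, hi = profile_stats(run, t0, t1)
    plt.plot(mean, levels, color=color, label=label)
    plt.fill_betweenx(levels, lo, hi, color=color, alpha=0.15)  # min/max envelope

plt.xlabel("variable")
plt.ylabel("model level")
plt.legend()
plt.show()
```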

@scrasmussen (Member Author) commented:

@grantfirl thanks for your input! Here are plots of the single and double precision mean profiles on one plot, and the difference between the single and double variables on another. There is only one time range, but one PDF shows the mean over time and the other shows the mean over the vertical dimension across time. To me it seems that these pass the "eyeball" test.

Note: single precision is plotted in red and double precision in blue.
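
The two views described here reduce to a choice of averaging axis on a (time, level) array; a minimal numpy sketch, with the shapes, names, and synthetic data assumed:

```python
import numpy as np

# One output variable per run, shaped (time, level); synthetic stand-in data.
rng = np.random.default_rng(0)
field_dp = rng.normal(size=(360, 64))
field_sp = field_dp.astype(np.float32)

mean_profile_diff = field_sp.mean(axis=0) - field_dp.mean(axis=0)  # mean over time: one value per level
mean_series_diff = field_sp.mean(axis=1) - field_dp.mean(axis=1)   # mean over levels: one value per time step
```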

@grantfirl merged commit 337d1d2 into NCAR:main on Jun 10, 2024
16 of 23 checks passed
@scrasmussen deleted the new-feature/single-precision-build branch on October 17, 2024 17:24