[HPC] Proposal: Update terminology for clarity #511
Comments
I would endorse "throughput" and then either "time to train" or "time to solution". I would suggest using a term that is consistent between Training and HPC, especially as we contemplate making the two more similar. Appreciate the suggestions from the team at Nvidia!
TTT sounds good too. Aligning with MLPerf-Training on it is a good idea as well.
This can probably be closed now. Is that right, @nvaprodromou?
Introduction:
After collecting feedback from engineers, clients, and press, NVIDIA presented a list of proposals that aim to increase the popularity of the MLPerf HPC benchmark suite. Please see our slide deck for more information on our feedback-gathering process and insights.
Proposal: Rename strongly- and weakly-scaled benchmarks
See slide 12 in the proposals slide deck.
We propose to update the suite's terminology to reduce confusion when parsing results (a sketch of what each metric captures follows this list):
- Rename the strongly-scaled benchmark metric to "time to train" (or "time to solution").
- Rename the weakly-scaled benchmark metric to "throughput".
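For context, a rough sketch of what the two metrics capture; the notation below is chosen here for illustration and is not taken from the MLPerf HPC rules:

\[
\text{TimeToTrain} = t_{\text{converged}} - t_{\text{start}}
\qquad \text{(strongly-scaled: one model instance trained on the full allocated partition)}
\]
\[
\text{Throughput} = \frac{M}{t_{\text{all converged}} - t_{\text{start}}}
\qquad \text{(weakly-scaled: } M \text{ concurrent model instances trained to the target quality)}
\]

The proposed names describe these quantities directly, whereas "strongly-scaled" and "weakly-scaled" require readers to know the scaling convention before they can interpret a score.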
This proposal aims to improve the popularity of the MLPerf HPC benchmark suite by addressing the following aspects:
Discussion
Pros:
Cons: