Clarify allowed changes in the system scale for Inference #178
Conversation
MLCommons CLA bot: All contributors have signed the MLCommons CLA ✍️ ✅
A _Preview_ system is a system which did not qualify as an _Available_ system as of the previous MLPerf submission date, but will qualify in the next submission after 140 days of the current submission date, or by the next MLPerf submission date, whichever is more, and which the submitter commits to submitting as an _Available_ system by that time. If it is not submitted in that submission round with equal or better performance (allowing for noise), the _Preview_ benchmark will be marked as invalid. A _Preview_ submission must include performance on at least one benchmark which will be considered _MLPerf Compatible_ (xref:MLPerf_Compatibility_Table.adoc[see the MLPerf Compatibility Table]) in the upcoming round where transition to _Available_ is made (consult SWG for Benchmark Roadmap).
On each of the benchmarks that are previewed and are _Compatible_, the _Available_ submission must show _equal or better performance_ than the _Preview_ submission, allowing for noise, for changes in the benchmark definition, or for changes in the system scale (defined as the number of system components principally determining performance, e.g. accelerator chips):
* Training: An _Available_ submission can be on a system larger than the largest system used for _Preview_, or smaller than the smallest system used for _Preview_:
Here, the original rule implies an `AND` condition and not `OR`, right? And I think `AND` makes sense, and the same can be applied to inference too. I.e., if the Preview submission was on, say, 1, 2, 4 and 8 accelerators, a submitter can do an Available submission on 1 and 8 accelerators, or even 1 and 16 accelerators. But the smallest and the largest scales must be submitted. This ensures that both scaling up and scaling down of performance are demonstrated.
I second this; we should still keep:
"across at least the smallest and the largest scale of the systems used for Preview submission on that benchmark (e.g. Available Training submissions can be on scales smaller than the smallest and larger than the largest scale used for Preview submission)"
@psyhtest Bullet items in your diff are not showing up correctly in the rendered Markdown.
* For an _Available_ system that is larger than the _Preview_ system, performance per accelerator must be equal or better.
* Inference with Power measurements: An _Available_ submission must be on a system of the same scale as used for _Preview_.
* Power-normalized performance (not absolute performance) must be equal or better.
Any other changes must be approved by the relevant Working Group prior to submission. |
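The proposed rules boil down to two simple comparisons: per-accelerator performance for a larger Available system, and performance per watt for Inference with Power. A minimal sketch of those checks, assuming an illustrative 5% noise allowance (the policy text does not fix a number; all names here are hypothetical):

```python
# Illustrative sketch of the comparison rules above. The function names
# and NOISE_TOLERANCE value are assumptions for demonstration, not part
# of the MLPerf policy.

NOISE_TOLERANCE = 0.05  # assumed 5% allowance for run-to-run noise


def qualifies_training(preview_perf, preview_accels,
                       available_perf, available_accels):
    """Training: for a larger Available system, performance per
    accelerator must be equal or better, allowing for noise."""
    preview_per_accel = preview_perf / preview_accels
    available_per_accel = available_perf / available_accels
    return available_per_accel >= preview_per_accel * (1 - NOISE_TOLERANCE)


def qualifies_inference_power(preview_perf, preview_watts,
                              available_perf, available_watts):
    """Inference with Power: power-normalized performance (perf/W),
    not absolute performance, must be equal or better."""
    preview_perf_per_watt = preview_perf / preview_watts
    available_perf_per_watt = available_perf / available_watts
    return available_perf_per_watt >= preview_perf_per_watt * (1 - NOISE_TOLERANCE)


# Example: Preview at 8 accelerators, Available at 16 with the same
# per-accelerator throughput qualifies.
print(qualifies_training(800.0, 8, 1600.0, 16))  # True
```

Note that the power-normalized check can pass even when absolute performance drops, which is exactly the distinction the "not absolute performance" wording draws.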
The rule for Training submissions accompanied by Power measurements is missing. I recommend keeping this line under the Training section: "For submissions accompanied by power measurements, 'equal or better' must use power-normalized performance rather than absolute performance."
Was discussed in the working group and is no longer pursued.
Fixes #176. Consider Inference with and without Power measurements. Unify with Training (without Power).