Following many recent discussions at MLCommons about improving the repeatability and reproducibility of MLPerf inference benchmarks, we suggest looking at similar initiatives at computer systems conferences (artifact evaluation and reproducibility initiatives) and possibly adopting their methodology and badges.
Our repeatability study for MLPerf inference v3.1 highlights repeatability issues similar to those we have already seen at compiler, systems, and ML conferences:

- Containers were useful as a snapshot, but they are not guaranteed to work on new hardware or with new software.
A potential solution is to improve the repeatability of MLPerf submissions (full reproducibility is probably too costly, if not impossible, at this stage) by introducing MLPerf reproducibility badges similar to the ACM reproducibility badges:
"MLPerf submission available" badge is published along submission only if all artifacts are publicly available for external user to rebuild the submission (code, data, configurations, workflows, etc) .
We can evaluate results after the submission deadline and before the publication deadline, and assign badges to all results in the officially published final table. This may motivate everyone to improve the quality of their submissions and earn all such badges in the future, rather than having the community discover such issues only after the MLPerf results are published.
"MLPerf submission functional/repeatable" badge if anyone can perform a short valid run for a given submission in a fully automated way. The MLCommons Automation and Reproducibility TaskForce can then extend MLCommons CM workflow for MLPerf to run that submission via a common interface in a unified way.