MLPerf Benchmark Results in the MLCommons CM format

This repository contains compacted and aggregated results from the MLPerf Inference, MLPerf Training and TinyMLPerf benchmarks in the MLCommons Collective Mind (CM) format, used by the MLCommons CK Playground being developed by the MLCommons taskforce on automation and reproducibility.

The goal is to make it easier for the community to analyze MLPerf results, add derived metrics (such as performance/Watt) and constraints, generate graphs, prepare reports and attach reproducibility reports.

How to import raw MLPerf results to CK/CM format

Install MLCommons CM framework.

MLPerf inference benchmark results

Follow this README from the related CM automation script.

You can see aggregated results here.

TinyMLPerf benchmark results

Follow this README from the related CM automation script.

You can see aggregated results here.

MLPerf training benchmark results

Follow this README from the related CM automation script.

You can see aggregated results here.

How to update this repository with new results

Using your own Python script

You can use this repository to analyze, reuse, update and improve MLPerf results in the compact CM format by calculating and adding derived metrics (such as performance/Watt) or by attaching links to reproducibility reports that will be visible at the MLCommons CK playground.

Install MLCommons CM framework.
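CM is distributed as the cmind package on PyPI, so installation typically boils down to the following (adjust the pip/Python invocation to your environment):

pip install cmind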

Pull CM repository with automation recipes and with MLPerf results in the CM format:

cm pull repo mlcommons@ck
cm pull repo mlcommons@cm4mlperf-results

Find CM entries with MLPerf inference v3.1 experiments from the command line:

cm find experiment --tags=mlperf-inference,v3.1

Find CM entries with MLPerf inference v3.1 experiments from Python:

import cmind

# Find all CM "experiment" entries tagged with MLPerf inference v3.1 results
r = cmind.access({'action':'find',
                  'automation':'experiment,a0a2d123ef064bcb',
                  'tags':'mlperf-inference,v3.1'})

if r['return']>0: cmind.error(r)

# Each entry in the list is a CM artifact with a path on disk
lst = r['list']

for experiment in lst:
    print(experiment.path)
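To add a derived metric such as performance/Watt, you can walk the files inside each experiment entry and update the stored results. The sketch below only illustrates the idea: it assumes each entry contains cm-result.json files holding a list of results with Result and Power fields, but the actual file and field names may differ, so check a pulled entry before relying on them.

import json
import os

import cmind

# Locate all MLPerf inference v3.1 experiment entries (same query as above)
r = cmind.access({'action':'find',
                  'automation':'experiment,a0a2d123ef064bcb',
                  'tags':'mlperf-inference,v3.1'})
if r['return']>0: cmind.error(r)

for experiment in r['list']:
    # Walk the entry directory looking for result files.
    # NOTE: 'cm-result.json', 'Result' and 'Power' are assumptions made for
    # this illustration; adjust them to the actual schema of the stored results.
    for root, _, files in os.walk(experiment.path):
        for name in files:
            if name != 'cm-result.json':
                continue
            with open(os.path.join(root, name)) as fp:
                results = json.load(fp)
            for result in results:
                perf = result.get('Result')
                power = result.get('Power')
                if perf and power:
                    # Derived metric: performance per Watt
                    result['Performance_per_Watt'] = perf / power
            # Write the updated results back to the file if you want the
            # derived metric to appear at the CK playground
            # (kept read-only in this sketch).

The same approach works for attaching links to reproducibility reports: add the extra fields to the result dictionaries and write the files back.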

Using CM script

We created a sample CM script in this repository that you can use and extend to add derived metrics:

cm run script "process mlperf-inference results" --experiment_tags=mlperf-inference,v3.1

Copyright

2021-2023 MLCommons

License

Apache 2.0

Project coordinators

Grigori Fursin and Arjun Suresh.

Contact us

This project is maintained by the MLCommons taskforce on automation and reproducibility, cTuning foundation and cKnowledge.org.

Join our Discord server to ask questions, provide feedback and participate in further development.
