Commit
fixed results repo
gfursin committed Apr 10, 2024
1 parent b5f4ea6 commit 03aa743
Showing 19 changed files with 24 additions and 24 deletions.
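The commit is a mechanical rename of the results repository from `ck_mlperf_results` to `cm4mlperf-results` across the docs (one hunk also moves the repo from the `ctuning` to the `mlcommons` organization). A minimal sketch of how such a rename could be applied to a local checkout — this helper is not part of the commit, and it assumes GNU `sed` and GNU `grep`:

```shell
# Sketch: apply the ck_mlperf_results -> cm4mlperf-results rename across
# Markdown files in the current tree (assumes GNU sed's in-place -i flag).
grep -rl --include='*.md' 'ck_mlperf_results' . | while read -r f; do
  sed -i 's#/ck_mlperf_results#/cm4mlperf-results#g' "$f"
done
```

Keeping the leading `/` in the pattern limits the substitution to URL paths and avoids touching the `mlcommons@ck_mlperf_results` CM repo aliases, which the commit updates separately.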
2 changes: 1 addition & 1 deletion cm-mlops/automation/experiment/README-extra.md
@@ -37,7 +37,7 @@ The goal is to provide a common interface to run, record, share, visualize and reproduce experiments
on any platform with any software, hardware and data.

The community helped us test a prototype of our "experiment" automation to record results in a unified CM format
-from [several MLPerf benchmarks](https://github.com/mlcommons/ck_mlperf_results)
+from [several MLPerf benchmarks](https://github.com/mlcommons/cm4mlperf-results)
including [MLPerf inference](https://github.com/mlcommons/inference) and [MLPerf Tiny](https://github.com/mlcommons/tiny),
visualize them at the [MLCommons CM platform](https://access.cknowledge.org/playground/?action=experiments&tags=all),
and improve them by the community via [public benchmarking, optimization and reproducibility challenges](https://access.cknowledge.org/playground/?action=challenges).
@@ -1,6 +1,6 @@
### Challenge

-Check past MLPerf inference results in [this MLCommons repository](https://github.com/mlcommons/ck_mlperf_results)
+Check past MLPerf inference results in [this MLCommons repository](https://github.com/mlcommons/cm4mlperf-results)
and add derived metrics such as result/No of cores, power efficiency, device cost, operational costs, etc.

Add clock speed as a third dimension to graphs and improve Bar graph visualization.
@@ -30,6 +30,6 @@ Check [this ACM REP'23 keynote](https://doi.org/10.5281/zenodo.8105339) to learn
### Results

All accepted results will be publicly available in the CM format with derived metrics
-in this [MLCommons repository](https://github.com/mlcommons/ck_mlperf_results),
+in this [MLCommons repository](https://github.com/mlcommons/cm4mlperf-results),
in [MLCommons Collective Knowledge explorer](https://access.cknowledge.org/playground/?action=experiments)
and at official [MLCommons website](https://mlcommons.org).
@@ -29,7 +29,7 @@ Official results:
* https://github.com/mlcommons/inference_results_v3.0/tree/main/open/cTuning

Results in the MLCommons CK/CM format:
-* https://github.com/ctuning/ck_mlperf_results
+* https://github.com/mlcommons/cm4mlperf-results

Visualization and comparison with derived metrics:
* [MLCommons Collective Knowledge Playground](https://access.cknowledge.org/playground/?action=experiments&tags=mlperf-inference,v3.0).
@@ -82,6 +82,6 @@ with PRs from participants [here](https://github.com/ctuning/mlperf_inference_su
### Results

All accepted results will be publicly available in the CM format with derived metrics
-in this [MLCommons repository](https://github.com/mlcommons/ck_mlperf_results),
+in this [MLCommons repository](https://github.com/mlcommons/cm4mlperf-results),
in [MLCommons Collective Knowledge explorer](https://access.cknowledge.org/playground/?action=experiments)
and at official [MLCommons website](https://mlcommons.org).
@@ -31,6 +31,6 @@ Check [this ACM REP'23 keynote](https://doi.org/10.5281/zenodo.8105339) to learn
### Results

All accepted results will be publicly available in the CM format with derived metrics
-in this [MLCommons repository](https://github.com/mlcommons/ck_mlperf_results),
+in this [MLCommons repository](https://github.com/mlcommons/cm4mlperf-results),
in [MLCommons Collective Knowledge explorer](https://access.cknowledge.org/playground/?action=experiments)
and at official [MLCommons website](https://mlcommons.org).
@@ -31,6 +31,6 @@ Check [this ACM REP'23 keynote](https://doi.org/10.5281/zenodo.8105339) to learn
### Results

All accepted results will be publicly available in the CM format with derived metrics
-in this [MLCommons repository](https://github.com/mlcommons/ck_mlperf_results),
+in this [MLCommons repository](https://github.com/mlcommons/cm4mlperf-results),
in [MLCommons Collective Knowledge explorer](https://access.cknowledge.org/playground/?action=experiments)
and at official [MLCommons website](https://mlcommons.org).
@@ -32,6 +32,6 @@ Check [this ACM REP'23 keynote](https://doi.org/10.5281/zenodo.8105339) to learn
### Results

All accepted results will be publicly available in the CM format with derived metrics
-in this [MLCommons repository](https://github.com/mlcommons/ck_mlperf_results),
+in this [MLCommons repository](https://github.com/mlcommons/cm4mlperf-results),
in [MLCommons Collective Knowledge explorer](https://access.cknowledge.org/playground/?action=experiments)
and at official [MLCommons website](https://mlcommons.org).
@@ -29,6 +29,6 @@ Check [this ACM REP'23 keynote](https://doi.org/10.5281/zenodo.8105339) to learn
### Results

All accepted results will be publicly available in the CM format with derived metrics
-in this [MLCommons repository](https://github.com/mlcommons/ck_mlperf_results),
+in this [MLCommons repository](https://github.com/mlcommons/cm4mlperf-results),
in [MLCommons Collective Knowledge explorer](https://access.cknowledge.org/playground/?action=experiments)
and at official [MLCommons website](https://mlcommons.org).
@@ -34,6 +34,6 @@ Check [this ACM REP'23 keynote](https://doi.org/10.5281/zenodo.8105339) to learn
### Results

All accepted results will be publicly available in the CM format with derived metrics
-in this [MLCommons repository](https://github.com/mlcommons/ck_mlperf_results),
+in this [MLCommons repository](https://github.com/mlcommons/cm4mlperf-results),
in [MLCommons Collective Knowledge explorer](https://access.cknowledge.org/playground/?action=experiments)
and at official [MLCommons website](https://mlcommons.org).
@@ -33,6 +33,6 @@ Check [this ACM REP'23 keynote](https://doi.org/10.5281/zenodo.8105339) to learn
### Results

All accepted results will be publicly available in the CM format with derived metrics
-in this [MLCommons repository](https://github.com/mlcommons/ck_mlperf_results),
+in this [MLCommons repository](https://github.com/mlcommons/cm4mlperf-results),
in [MLCommons Collective Knowledge explorer](https://access.cknowledge.org/playground/?action=experiments)
and at official [MLCommons website](https://mlcommons.org).
@@ -41,6 +41,6 @@ This challenge is under preparation.
### Results

All accepted results will be publicly available in the CM format with derived metrics
-in this [MLCommons repository](https://github.com/mlcommons/ck_mlperf_results),
+in this [MLCommons repository](https://github.com/mlcommons/cm4mlperf-results),
in [MLCommons Collective Knowledge explorer](https://access.cknowledge.org/playground/?action=experiments)
and at official [MLCommons website](https://mlcommons.org).
@@ -31,6 +31,6 @@ Check [this ACM REP'23 keynote](https://doi.org/10.5281/zenodo.8105339) to learn
### Results

All accepted results will be publicly available in the CM format with derived metrics
-in this [MLCommons repository](https://github.com/mlcommons/ck_mlperf_results),
+in this [MLCommons repository](https://github.com/mlcommons/cm4mlperf-results),
in [MLCommons Collective Knowledge explorer](https://access.cknowledge.org/playground/?action=experiments)
and at official [MLCommons website](https://mlcommons.org).
@@ -36,6 +36,6 @@ Open ticket: [GitHub](https://github.com/mlcommons/ck/issues/696)
### Results

All accepted results will be publicly available in the CM format with derived metrics
-in this [MLCommons repository](https://github.com/mlcommons/ck_mlperf_results),
+in this [MLCommons repository](https://github.com/mlcommons/cm4mlperf-results),
in [MLCommons Collective Knowledge explorer](https://access.cknowledge.org/playground/?action=experiments)
and at official [MLCommons website](https://mlcommons.org).
@@ -36,5 +36,5 @@ in mid June 2023.

### Results

-All results will be available in [this GitHub repo](https://github.com/ctuning/ck_mlperf_results)
+All results will be available in [this GitHub repo](https://github.com/ctuning/cm4mlperf-results)
and can be visualized and compared using the [MLCommons Collective Knowledge Playground](https://access.cknowledge.org/playground/?action=experiments&tags=mlperf-tiny).
2 changes: 1 addition & 1 deletion cm-mlops/script/generate-mlperf-tiny-submission/README.md
@@ -145,7 +145,7 @@ ___
- CM script: [run-how-to-run-server](https://github.com/how-to-run/server/tree/master/script/run-how-to-run-server)
- CM script: [app-mlperf-inference-nvidia](https://github.com/cknowledge/cm-tests/tree/master/script/app-mlperf-inference-nvidia)
- CM script: [get-axs](https://github.com/cknowledge/cm-tests/tree/master/script/get-axs)
-  - CM script: [process-mlperf-inference-results](https://github.com/mlcommons/ck_mlperf_results/tree/master/script/process-mlperf-inference-results)
+  - CM script: [process-mlperf-inference-results](https://github.com/mlcommons/cm4mlperf-results/tree/master/script/process-mlperf-inference-results)
- CM script: [activate-python-venv](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/activate-python-venv)
- CM script: [add-custom-nvidia-system](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/add-custom-nvidia-system)
- CM script: [app-image-classification-onnx-py](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/app-image-classification-onnx-py)
@@ -9,7 +9,7 @@ and link reproducibility reports as shown in these examples:
* [Power efficiency to compare Qualcomm, Nvidia and Sima.ai devices](https://cKnowledge.org/mlcommons-mlperf-inference-gui-derived-metrics-and-conditions)
* [Reproducibility report for Nvidia Orin](https://access.cknowledge.org/playground/?action=experiments&name=mlperf-inference--v3.0--edge--closed--image-classification--offline&result_uid=3751b230c800434a)

-Aggreaged results are available in [this MLCommons repository](https://github.com/mlcommons/ck_mlperf_results).
+Aggreaged results are available in [this MLCommons repository](https://github.com/mlcommons/cm4mlperf-results).

You can see these results at [MLCommons CK playground](https://access.cknowledge.org/playground/?action=experiments&tags=mlperf-inference,all).

@@ -24,10 +24,10 @@ Pull the MLCommons CK repository with automation recipes for interoperable MLOps

```bash
cm pull repo mlcommons@ck
```

-Pull already imported results (v2.0, v2.1, v3.0, v3.1) from this [mlcommons@ck_mlperf_results repo](https://github.com/mlcommons/ck_mlperf_results):
+Pull already imported results (v2.0, v2.1, v3.0, v3.1) from this [mlcommons@cm4mlperf-results repo](https://github.com/mlcommons/cm4mlperf-results):

```bash
-cm pull repo mlcommons@ck_mlperf_results
+cm pull repo mlcommons@cm4mlperf-results
```

Install repository with raw MLPerf inference benchmark results with {NEW VERSION}:
@@ -61,7 +61,7 @@ cm run script "import mlperf inference to-experiment _skip_checker"
Import to a specific repo:

```bash
-cm run script "import mlperf inference to-experiment" --target_repo=mlcommons@ck_mlperf_results
+cm run script "import mlperf inference to-experiment" --target_repo=mlcommons@cm4mlperf-results
```

Visualize results on your local machine via CK playground GUI:
@@ -73,7 +73,7 @@ These results are also available in the [public CK playground](https://access.ck

## Further analysis of results

-Please check this [README](https://github.com/mlcommons/ck_mlperf_results#how-to-update-this-repository-with-new-results).
+Please check this [README](https://github.com/mlcommons/cm4mlperf-results#how-to-update-this-repository-with-new-results).

# Contact us

@@ -9,7 +9,7 @@ and link reproducibility reports as shown in these examples:
* [Power efficiency to compare Qualcomm, Nvidia and Sima.ai devices](https://cKnowledge.org/mlcommons-mlperf-inference-gui-derived-metrics-and-conditions)
* [Reproducibility report for Nvidia Orin](https://access.cknowledge.org/playground/?action=experiments&name=mlperf-inference--v3.0--edge--closed--image-classification--offline&result_uid=3751b230c800434a)

-Aggreaged results are available in [this MLCommons repository](https://github.com/mlcommons/ck_mlperf_results).
+Aggreaged results are available in [this MLCommons repository](https://github.com/mlcommons/cm4mlperf-results).

You can see aggregated results at the [MLCommons CK playground](https://access.cknowledge.org/playground/?action=experiments&tags=mlperf-tiny,all).

@@ -7,7 +7,7 @@ The goal is to make it easier for the community to analyze MLPerf results,
add derived metrics such as performance/Watt and constraints,
and link reproducibility reports.

-Aggreaged results are available in [this MLCommons repository](https://github.com/mlcommons/ck_mlperf_results).
+Aggreaged results are available in [this MLCommons repository](https://github.com/mlcommons/cm4mlperf-results).

You can see these results at [MLCommons CK playground](https://access.cknowledge.org/playground/?action=experiments&tags=mlperf-training,all).

2 changes: 1 addition & 1 deletion docs/mlperf/inference/README.md
@@ -236,7 +236,7 @@ You can pull all past MLPerf results in the CM format, import your current exper
with derived metrics on your system using the Collective Knowledge Playground as follows:

```bash
-cm pull repo mlcommons@ck_mlperf_results
+cm pull repo mlcommons@cm4mlperf-results
cmr "get git repo _repo.https://github.com/ctuning/mlperf_inference_submissions_v3.1" \
--env.CM_GIT_CHECKOUT=main \
--extra_cache_tags=mlperf-inference-results,community,version-3.1
```
