This is an automatically generated list of portable and reusable automation recipes (CM scripts) with a human-friendly interface (CM). They make it possible to run a growing number of ad-hoc MLPerf, MLOps, and DevOps scripts from MLCommons projects and research papers in a unified way on any operating system, with any software and hardware, natively or inside containers.
Click on any automation recipe below to learn how to run and reuse it via the CM command line, Python API, or GUI.
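As a minimal sketch, the command line and Python API are interchangeable: the `cm run script --tags=...` command maps to a single `cmind.access` call with an equivalent request dictionary. The example below assumes the `cmind` package is installed (`pip install cmind`) and uses the `detect-os` script from the list below; it is illustrative, not an exhaustive description of the API.

```python
# Sketch of the CM Python API, mirroring the command line
#   cm run script --tags=detect,os --out=con
# The request is a plain dictionary passed to cmind.access().
request = {
    'action': 'run',        # what to do with the automation
    'automation': 'script', # the CM automation type (CM scripts)
    'tags': 'detect,os',    # tags that select a script from this index
    'out': 'con',           # print the script's output to the console
}

try:
    import cmind
    result = cmind.access(request)
    # CM calls return a dict; 'return' == 0 indicates success
    assert result['return'] == 0, result.get('error')
except ImportError:
    # cmind is not installed in this environment; the request dict
    # above still documents the call shape.
    pass
```

The same request structure is reused across CM automations, which is why the GUI, CLI, and Python API stay consistent.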
CM scripts can be easily chained together into automation workflows using deps
and tags
keys,
while all environment variables and paths are automatically updated
for a given task and platform using simple JSON or YAML.
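For illustration, the chaining described above is driven by each script's JSON/YAML meta description. The fragment below is a hypothetical sketch (the alias and dependency tags are taken from the index on this page, but the exact fields of any given script's real metadata may differ): each entry under `deps` selects another CM script by its tags, and CM resolves and runs the chain in order.

```yaml
# Hypothetical fragment of a CM script's meta description (_cm.yaml)
alias: app-image-classification-onnx-py
tags:                         # tags by which this script itself is found
  - app
  - image-classification
  - onnx
  - python
deps:                         # scripts to run first, selected by tags
  - tags: detect,os           # resolve platform details
  - tags: get,python3         # detect or install Python
  - tags: get,ml-model,resnet50   # fetch the model via its CM script
```

Each dependency exports its results as environment variables and paths, which CM propagates to the scripts that follow it in the chain.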
Note that CM is a community project developed and extended by MLCommons members and individual contributors. You can find the source code of the CM scripts maintained by MLCommons here. Please join the Discord server to participate in collaborative development or provide your feedback.
© 2022-2024 MLCommons
- AI/ML datasets
- AI/ML frameworks
- AI/ML models
- AI/ML optimization
- CM interface prototyping
- CUDA automation
- Cloud automation
- Collective benchmarking
- Compiler automation
- Dashboard automation
- Detection or installation of tools and artifacts
- DevOps automation
- Docker automation
- GUI
- Legacy CK support
- MLPerf benchmark support
- Modular AI/ML application pipeline
- Modular MLPerf benchmarks
- Modular MLPerf inference benchmark pipeline
- Modular MLPerf training benchmark pipeline
- Modular application pipeline
- Platform information
- Python automation
- Remote automation
- Reproduce MLPerf benchmarks
- Reproducibility and artifact evaluation
- Tests
- TinyML automation
- get-croissant
- get-dataset-cifar10
- get-dataset-cnndm
- get-dataset-coco
- get-dataset-coco2014
- get-dataset-criteo
- get-dataset-imagenet-aux
- get-dataset-imagenet-calibration
- get-dataset-imagenet-helper
- get-dataset-imagenet-train
- get-dataset-imagenet-val
- get-dataset-kits19
- get-dataset-librispeech
- get-dataset-openimages
- get-dataset-openimages-annotations
- get-dataset-openimages-calibration
- get-dataset-openorca
- get-dataset-squad
- get-dataset-squad-vocab
- get-preprocessed-dataset-criteo
- get-preprocessed-dataset-imagenet
- get-preprocessed-dataset-kits19
- get-preprocessed-dataset-librispeech
- get-preprocessed-dataset-openimages
- get-preprocessed-dataset-openorca
- get-preprocessed-dataset-squad
- get-preprocesser-script-generic
- get-google-saxml
- get-onnxruntime-prebuilt
- get-qaic-apps-sdk
- get-qaic-platform-sdk
- get-qaic-software-kit
- get-rocm
- get-tvm
- install-qaic-compute-sdk-from-src
- install-rocm
- install-tensorflow-for-c
- install-tensorflow-from-src
- install-tflite-from-src
- convert-ml-model-huggingface-to-onnx
- get-bert-squad-vocab
- get-dlrm
- get-ml-model-3d-unet-kits19
- get-ml-model-bert-base-squad
- get-ml-model-bert-large-squad
- get-ml-model-dlrm-terabyte
- get-ml-model-efficientnet-lite
- get-ml-model-gptj
- get-ml-model-huggingface-zoo
- get-ml-model-llama2
- get-ml-model-mobilenet
- get-ml-model-neuralmagic-zoo
- get-ml-model-resnet50
- get-ml-model-retinanet
- get-ml-model-retinanet-nvidia
- get-ml-model-rnnt
- get-ml-model-stable-diffusion
- get-ml-model-tiny-resnet
- get-ml-model-using-imagenet-from-model-zoo
- get-tvm-model
- destroy-terraform
- get-aws-cli
- get-terraform
- install-aws-cli
- install-terraform-from-src
- run-terraform
- get-aocl
- get-cl (Detect or install Microsoft C compiler)
- get-compiler-flags
- get-compiler-rust
- get-gcc (Detect or install GCC compiler)
- get-go
- get-llvm (Detect or install LLVM compiler)
- install-gcc-src
- install-ipex-from-src (Build IPEX from sources)
- install-llvm-prebuilt (Install prebuilt LLVM compiler)
- install-llvm-src (Build LLVM compiler from sources (can take >30 min))
- install-onednn-from-src (Build oneDNN from sources)
- install-onnxruntime-from-src (Build onnxruntime from sources)
- install-pytorch-from-src (Build pytorch from sources)
- install-pytorch-kineto-from-src (Build pytorch kineto from sources)
- install-torchvision-from-src (Build pytorchvision from sources)
- install-tpp-pytorch-extension (Build TPP-PEX from sources)
- install-transformers-from-src (Build transformers from sources)
- get-android-sdk
- get-aria2
- get-bazel
- get-blis
- get-brew
- get-cmake
- get-cmsis_5
- get-docker
- get-generic-sys-util
- get-google-test
- get-java
- get-javac
- get-lib-armnn
- get-lib-dnnl
- get-lib-protobuf
- get-lib-qaic-api
- get-nvidia-docker
- get-openssl
- get-rclone
- get-sys-utils-cm
- get-sys-utils-min
- get-xilinx-sdk
- get-zendnn
- install-bazel
- install-cmake-prebuilt
- install-gflags
- install-github-cli
- install-numactl-from-src (Build numactl from sources)
- install-openssl
- benchmark-program
- compile-program
- convert-csv-to-md
- copy-to-clipboard
- create-conda-env
- create-patch
- detect-sudo
- download-and-extract
- download-file
- download-torrent
- extract-file
- fail
- get-conda
- get-git-repo
- get-github-cli
- pull-git-repo
- push-csv-to-spreadsheet
- set-device-settings-qaic
- set-echo-off-win
- set-performance-mode
- set-sqlite-dir
- tar-my-folder
- add-custom-nvidia-system
- benchmark-any-mlperf-inference-implementation
- build-mlperf-inference-server-nvidia
- generate-mlperf-inference-submission
- generate-mlperf-inference-user-conf
- generate-mlperf-tiny-report
- generate-mlperf-tiny-submission
- generate-nvidia-engine
- get-mlperf-inference-intel-scratch-space
- get-mlperf-inference-loadgen
- get-mlperf-inference-nvidia-common-code
- get-mlperf-inference-nvidia-scratch-space
- get-mlperf-inference-results
- get-mlperf-inference-results-dir
- get-mlperf-inference-src
- get-mlperf-inference-submission-dir
- get-mlperf-inference-sut-configs
- get-mlperf-inference-sut-description
- get-mlperf-logging
- get-mlperf-power-dev
- get-mlperf-tiny-eembc-energy-runner-src
- get-mlperf-tiny-src
- get-mlperf-training-nvidia-code
- get-mlperf-training-src
- get-nvidia-mitten
- get-spec-ptd
- import-mlperf-inference-to-experiment
- import-mlperf-tiny-to-experiment
- import-mlperf-training-to-experiment
- install-mlperf-logging-from-src
- prepare-training-data-bert
- prepare-training-data-resnet
- preprocess-mlperf-inference-submission
- process-mlperf-accuracy
- push-mlperf-inference-results-to-github
- run-mlperf-inference-mobilenet-models
- run-mlperf-inference-submission-checker
- run-mlperf-power-client
- run-mlperf-power-server
- run-mlperf-training-submission-checker
- truncate-mlperf-inference-accuracy-log
- app-image-classification-onnx-py
- app-image-classification-tf-onnx-cpp
- app-image-classification-torch-py
- app-image-classification-tvm-onnx-py
- app-stable-diffusion-onnx-py
- app-loadgen-generic-python
- app-mlperf-inference
- app-mlperf-inference-ctuning-cpp-tflite
- app-mlperf-inference-mlcommons-cpp
- app-mlperf-inference-mlcommons-python
- benchmark-program-mlperf
- run-mlperf-inference-app
- activate-python-venv (Activate virtual Python environment)
- get-generic-python-lib
- get-python3
- install-generic-conda-package
- install-python-src
- install-python-venv
- app-mlperf-inference-nvidia
- reproduce-mlperf-octoml-tinyml-results
- reproduce-mlperf-training-nvidia
- wrapper-reproduce-octoml-tinyml-submission
- print-croissant-desc
- print-hello-world
- print-hello-world-java
- print-hello-world-javac
- print-hello-world-py
- print-python-version
- run-python
- test-download-and-extract-artifacts
- test-set-sys-user-cm
- upgrade-python-pip
- create-fpgaconvnet-app-tinyml
- create-fpgaconvnet-config-tinyml
- flash-tinyml-binary
- get-microtvm
- get-zephyr
- get-zephyr-sdk
- activate-python-venv (Activate virtual Python environment.)
- add-custom-nvidia-system
- app-image-classification-onnx-py
- app-image-classification-tf-onnx-cpp
- app-image-classification-torch-py
- app-image-classification-tvm-onnx-py
- app-image-corner-detection
- app-loadgen-generic-python
- app-mlperf-inference
- app-mlperf-inference-ctuning-cpp-tflite
- app-mlperf-inference-dummy
- app-mlperf-inference-intel
- app-mlperf-inference-mlcommons-cpp
- app-mlperf-inference-mlcommons-python
- app-mlperf-inference-nvidia
- app-mlperf-inference-qualcomm
- app-mlperf-training-nvidia
- app-mlperf-training-reference
- app-stable-diffusion-onnx-py
- benchmark-any-mlperf-inference-implementation
- benchmark-program
- benchmark-program-mlperf
- build-docker-image
- build-dockerfile
- build-mlperf-inference-server-nvidia
- calibrate-model-for.qaic
- compile-model-for.qaic
- compile-program
- convert-csv-to-md
- convert-ml-model-huggingface-to-onnx
- copy-to-clipboard
- create-conda-env
- create-fpgaconvnet-app-tinyml
- create-fpgaconvnet-config-tinyml
- create-patch
- destroy-terraform
- detect-cpu
- detect-os
- detect-sudo
- download-and-extract
- download-file
- download-torrent
- dump-pip-freeze
- extract-file
- fail
- flash-tinyml-binary
- generate-mlperf-inference-submission
- generate-mlperf-inference-user-conf
- generate-mlperf-tiny-report
- generate-mlperf-tiny-submission
- generate-nvidia-engine
- get-android-sdk
- get-aocl
- get-aria2
- get-aws-cli
- get-bazel
- get-bert-squad-vocab
- get-blis
- get-brew
- get-ck
- get-ck-repo-mlops
- get-cl (Detect or install Microsoft C compiler.)
- get-cmake
- get-cmsis_5
- get-compiler-flags
- get-compiler-rust
- get-conda
- get-croissant
- get-cuda
- get-cuda-devices
- get-cudnn
- get-dataset-cifar10
- get-dataset-cnndm
- get-dataset-coco
- get-dataset-coco2014
- get-dataset-criteo
- get-dataset-imagenet-aux
- get-dataset-imagenet-calibration
- get-dataset-imagenet-helper
- get-dataset-imagenet-train
- get-dataset-imagenet-val
- get-dataset-kits19
- get-dataset-librispeech
- get-dataset-openimages
- get-dataset-openimages-annotations
- get-dataset-openimages-calibration
- get-dataset-openorca
- get-dataset-squad
- get-dataset-squad-vocab
- get-dlrm
- get-dlrm-data-mlperf-inference
- get-docker
- get-gcc (Detect or install GCC compiler.)
- get-generic-python-lib
- get-generic-sys-util
- get-git-repo
- get-github-cli
- get-go
- get-google-saxml
- get-google-test
- get-ipol-src
- get-java
- get-javac
- get-lib-armnn
- get-lib-dnnl
- get-lib-protobuf
- get-lib-qaic-api
- get-llvm (Detect or install LLVM compiler.)
- get-microtvm
- get-ml-model-3d-unet-kits19
- get-ml-model-bert-base-squad
- get-ml-model-bert-large-squad
- get-ml-model-dlrm-terabyte
- get-ml-model-efficientnet-lite
- get-ml-model-gptj
- get-ml-model-huggingface-zoo
- get-ml-model-llama2
- get-ml-model-mobilenet
- get-ml-model-neuralmagic-zoo
- get-ml-model-resnet50
- get-ml-model-retinanet
- get-ml-model-retinanet-nvidia
- get-ml-model-rnnt
- get-ml-model-stable-diffusion
- get-ml-model-tiny-resnet
- get-ml-model-using-imagenet-from-model-zoo
- get-mlperf-inference-intel-scratch-space
- get-mlperf-inference-loadgen
- get-mlperf-inference-nvidia-common-code
- get-mlperf-inference-nvidia-scratch-space
- get-mlperf-inference-results
- get-mlperf-inference-results-dir
- get-mlperf-inference-src
- get-mlperf-inference-submission-dir
- get-mlperf-inference-sut-configs
- get-mlperf-inference-sut-description
- get-mlperf-inference-utils
- get-mlperf-logging
- get-mlperf-power-dev
- get-mlperf-tiny-eembc-energy-runner-src
- get-mlperf-tiny-src
- get-mlperf-training-nvidia-code
- get-mlperf-training-src
- get-nvidia-docker
- get-nvidia-mitten
- get-onnxruntime-prebuilt
- get-openssl
- get-preprocessed-dataset-criteo
- get-preprocessed-dataset-imagenet
- get-preprocessed-dataset-kits19
- get-preprocessed-dataset-librispeech
- get-preprocessed-dataset-openimages
- get-preprocessed-dataset-openorca
- get-preprocessed-dataset-squad
- get-preprocesser-script-generic
- get-python3
- get-qaic-apps-sdk
- get-qaic-platform-sdk
- get-qaic-software-kit
- get-rclone
- get-rocm
- get-spec-ptd
- get-sys-utils-cm
- get-sys-utils-min
- get-tensorrt
- get-terraform
- get-tvm
- get-tvm-model
- get-xilinx-sdk
- get-zendnn
- get-zephyr
- get-zephyr-sdk
- gui
- import-mlperf-inference-to-experiment
- import-mlperf-tiny-to-experiment
- import-mlperf-training-to-experiment
- install-aws-cli
- install-bazel
- install-cmake-prebuilt
- install-cuda-package-manager
- install-cuda-prebuilt
- install-gcc-src
- install-generic-conda-package
- install-gflags
- install-github-cli
- install-ipex-from-src (Build IPEX from sources.)
- install-llvm-prebuilt (Install prebuilt LLVM compiler.)
- install-llvm-src (Build LLVM compiler from sources (can take >30 min).)
- install-mlperf-logging-from-src
- install-nccl-libs
- install-numactl-from-src (Build numactl from sources.)
- install-onednn-from-src (Build oneDNN from sources.)
- install-onnxruntime-from-src (Build onnxruntime from sources.)
- install-openssl
- install-pip-package-for-cmind-python
- install-python-src
- install-python-venv
- install-pytorch-from-src (Build pytorch from sources.)
- install-pytorch-kineto-from-src (Build pytorch kineto from sources.)
- install-qaic-compute-sdk-from-src
- install-rocm
- install-tensorflow-for-c
- install-tensorflow-from-src
- install-terraform-from-src
- install-tflite-from-src
- install-torchvision-from-src (Build pytorchvision from sources.)
- install-tpp-pytorch-extension (Build TPP-PEX from sources.)
- install-transformers-from-src (Build transformers from sources.)
- launch-benchmark
- prepare-training-data-bert
- prepare-training-data-resnet
- preprocess-mlperf-inference-submission
- print-croissant-desc
- print-hello-world
- print-hello-world-java
- print-hello-world-javac
- print-hello-world-py
- print-python-version
- process-ae-users
- process-mlperf-accuracy
- prune-bert-models
- prune-docker
- publish-results-to-dashboard
- pull-git-repo
- push-csv-to-spreadsheet
- push-mlperf-inference-results-to-github
- remote-run-commands
- reproduce-ipol-paper-2022-439
- reproduce-mlperf-octoml-tinyml-results
- reproduce-mlperf-training-nvidia
- run-docker-container
- run-mlperf-inference-app
- run-mlperf-inference-mobilenet-models
- run-mlperf-inference-submission-checker
- run-mlperf-power-client
- run-mlperf-power-server
- run-mlperf-training-submission-checker
- run-python
- run-terraform
- save-mlperf-inference-implementation-state
- set-device-settings-qaic
- set-echo-off-win
- set-performance-mode
- set-sqlite-dir
- set-venv
- tar-my-folder
- test-download-and-extract-artifacts
- test-mlperf-inference-retinanet
- test-set-sys-user-cm
- truncate-mlperf-inference-accuracy-log
- upgrade-python-pip
- wrapper-reproduce-octoml-tinyml-submission