A collection of portable, reusable and cross-platform automation recipes (CM scripts) with a human-friendly interface and minimal dependencies to make it easier to build, run, benchmark and optimize AI, ML and other applications and systems across diverse and continuously changing models, data sets, software and hardware (cloud/edge)


Unified and cross-platform CM interface for DevOps, MLOps and MLPerf

This repository contains reusable, cross-platform automation recipes for running DevOps, MLOps, and MLPerf tasks through a simple, human-readable Collective Mind (CM) interface that adapts to different operating systems, software, and hardware.

All CM scripts have a simple Python API, an extensible JSON/YAML meta-description, and unified input/output, so they can be reused across projects either individually or chained together into portable automation workflows, applications, and web services that adapt to continuously changing models, data sets, software, and hardware.
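As an illustration, here is a minimal sketch of invoking a CM script through that Python API. This is a hedged example: it assumes the `cmind` package is installed (`pip install cmind`), and the tags (`detect,os`) and the non-zero-`return`-means-error convention follow common CM usage, which may differ across versions.

```python
# Hedged sketch: run a CM script via the cmind Python API.
# Guarded import so the snippet degrades gracefully when cmind is absent.
def run_cm_script(tags: str):
    """Run a CM script selected by its tags; return the result dict or None."""
    try:
        import cmind  # the CM automation language package from PyPI
    except ImportError:
        print("cmind is not installed; install it with `pip install cmind`")
        return None
    r = cmind.access({'action': 'run',
                      'automation': 'script',
                      'tags': tags,
                      'quiet': True})
    if r['return'] > 0:  # CM convention: non-zero 'return' signals an error
        print('CM error:', r.get('error'))
    return r

# Example (hypothetical tags): run_cm_script('detect,os')
```

The same dictionary-in/dictionary-out shape is what makes CM scripts chainable: each script's unified output can feed the next script's input.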

We develop and test CM scripts as a community effort to support the following projects:

You can read this arXiv paper to learn more about the motivation and long-term vision behind CM.

Please provide your feedback or submit your issues here.

Catalog

Online catalog: cKnowledge, MLCommons.

Citation

Please use this BibTeX file to cite this project.

A few demos

Install CM and virtual env

Install the MLCommons CM automation language.
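A minimal setup sketch, assuming Python 3.8+ is available as `python3` and that `cmind` is the PyPI distribution of CM:

```shell
python3 -m venv cm-venv            # create an isolated virtual environment
. cm-venv/bin/activate             # activate it (Windows: cm-venv\Scripts\activate)
pip install cmind                  # install the CM automation language from PyPI
cm --version                       # verify that the `cm` CLI is on the PATH
```

Using a virtual environment keeps CM and its Python dependencies separate from system packages.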

Pull this repository

cm pull repo mlcommons@cm4mlops --branch=dev

Run image classification using CM

cm run script "python app image-classification onnx _cpu" --help

cm run script "download file _wget" --url=https://cKnowledge.org/ai/data/computer_mouse.jpg --verify=no --env.CM_DOWNLOAD_CHECKSUM=45ae5c940233892c2f860efdf0b66e7e
cm run script "python app image-classification onnx _cpu" --input=computer_mouse.jpg
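The value passed via `--env.CM_DOWNLOAD_CHECKSUM` above is an MD5 digest. If you want to double-check a downloaded file by hand, a small Python sketch using only the standard library's `hashlib` (the file name is taken from the demo above):

```python
import hashlib

def md5_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Return the MD5 hex digest of a file, reading it in chunks."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# After downloading computer_mouse.jpg, compare against the expected digest:
# assert md5_of("computer_mouse.jpg") == "45ae5c940233892c2f860efdf0b66e7e"
```

Reading in chunks keeps memory use constant even for large model or dataset files.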

cmr "python app image-classification onnx _cpu" --input=computer_mouse.jpg
cmr --tags=python,app,image-classification,onnx,_cpu --input=computer_mouse.jpg
cmr 3d5e908e472b417e --input=computer_mouse.jpg

cm docker script "python app image-classification onnx _cpu" --input=computer_mouse.jpg

cm gui script "python app image-classification onnx _cpu"

Re-run experiments from the ACM/IEEE MICRO'23 paper

See script/reproduce-ieee-acm-micro2023-paper-96.

Run MLPerf ResNet CPU inference benchmark via CM

cm run script --tags=run-mlperf,inference,_performance-only,_short  \
   --division=open \
   --category=edge \
   --device=cpu \
   --model=resnet50 \
   --precision=float32 \
   --implementation=mlcommons-python \
   --backend=onnxruntime \
   --scenario=Offline \
   --execution_mode=test \
   --power=no \
   --adr.python.version_min=3.8 \
   --clean \
   --compliance=no \
   --quiet \
   --time

Run MLPerf BERT CUDA inference benchmark v4.1 via CM

cmr "run-mlperf inference _find-performance _full _r4.1" \
   --model=bert-99 \
   --implementation=nvidia \
   --framework=tensorrt \
   --category=datacenter \
   --scenario=Offline \
   --execution_mode=test \
   --device=cuda  \
   --docker \
   --docker_cm_repo=mlcommons@cm4mlops \
   --docker_cm_repo_flags="--branch=mlperf-inference" \
   --test_query_count=100 \
   --quiet

Run MLPerf SDXL reference inference benchmark v4.1 via CM

cm run script \
	--tags=run-mlperf,inference,_r4.1 \
	--model=sdxl \
	--implementation=reference \
	--framework=pytorch \
	--category=datacenter \
	--scenario=Offline \
	--execution_mode=valid \
	--device=cuda \
	--quiet

License

Apache 2.0

CM concepts

Check our ACM REP'23 keynote.

Authors

Grigori Fursin and Arjun Suresh

Major script developers

Arjun Suresh, Anandhu S, Grigori Fursin

Funding

We thank cKnowledge.org, the cTuning Foundation, and MLCommons for sponsoring this project!

Acknowledgments

We thank all volunteers, collaborators and contributors for their support, fruitful discussions, and useful feedback!
