Collective Knowledge (CK) is a community-driven project dedicated to supporting open science, enhancing reproducible research, and fostering collaborative learning on how to run AI, ML, and other emerging workloads in the most efficient and cost-effective way across diverse models, datasets, software, and hardware: [ white paper ].
It includes the following sub-projects.
Starting in February 2025, CMX V4+ serves as a drop-in, backward-compatible replacement for the earlier Collective Mind framework (CM) and other MLCommons automation prototypes, while providing a simpler and more robust interface.
The Common Metadata eXchange framework (CMX) was developed to support open science and facilitate collaborative, reproducible, and reusable research, development, and experimentation based on FAIR principles.
It helps users non-intrusively convert their software projects into file-based repositories of portable and reusable artifacts (code, data, models, scripts) with extensible metadata, a unified command-line interface, and a simple Python API.
Such artifacts can be easily chained together into portable and technology-agnostic automation workflows, enabling users to rerun, reproduce, and reuse complex experimental setups across diverse and rapidly evolving models, datasets, software, and hardware.
For example, CMX helps modularize, automate, and customize MLPerf benchmarks, as sketched below.
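As a minimal sketch of the unified CLI (the automation tags below are illustrative placeholders, not a verified entry in the current catalog):

```bash
# Install the CMX framework (distributed as the cmind package on PyPI).
pip install cmind

# Hedged sketch: run a portable automation recipe selected by tags through
# the unified CLI. CMX locates the artifact in the registered repositories,
# reads its metadata, and resolves any chained dependencies.
# The tags are illustrative placeholders; browse the online catalog for
# the recipes actually available in your installed repositories.
cmx run script --tags=detect,os
```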
See the project page for more details.
We have developed a collection of portable, extensible, and technology-agnostic automation recipes with a common CLI and Python API (CM scripts) to unify and automate all the manual steps required to compose, run, benchmark, and optimize complex ML/AI applications on diverse platforms with any software and hardware.
The two key automations are script and cache: see the online catalog at the CK playground and the online MLCommons catalog.
CM scripts extend the concept of cmake with simple Python automations, native scripts, and JSON/YAML meta descriptions. They require Python 3.8+ with minimal dependencies and are continuously extended by the community and MLCommons members to run natively on Ubuntu, macOS, Windows, RHEL, Debian, Amazon Linux, and other operating systems, in the cloud or inside automatically generated containers, while maintaining backward compatibility.
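As a rough illustration of how the script and cache automations interact (a sketch in the legacy CM command style; the tags are examples rather than a definitive reference):

```bash
# Run an automation recipe selected by tags; outputs that other scripts can
# reuse (detected tools, downloaded models, built binaries) are recorded in
# the cache. The tags are illustrative placeholders.
cm run script --tags=detect,os --out=json

# Inspect the cached artifacts produced by previous script runs; re-running
# the same script reuses this cached state instead of rebuilding it.
cm show cache
```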
See the online MLPerf documentation
at MLCommons to run MLPerf inference benchmarks across diverse systems using CMX.
Just install it via pip install cmind and substitute the following commands and flags:

- cm -> cmx
- mlc -> cmx run mlc
- mlcr -> cmxr
- -v -> --v
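For example, a legacy command maps to CMX as follows (the recipe name and flag are hypothetical placeholders used only to illustrate the substitution):

```bash
# Before (legacy interface); hypothetical recipe name and flag:
#   mlcr some-mlperf-recipe --some_flag=value -v

# After (CMX drop-in replacement):
cmxr some-mlperf-recipe --some_flag=value --v
```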
Collective Knowledge Playground - a unified, open-source platform designed to index all CM/CMX automations, similar to PyPI, and to assist users in preparing CM/CMX commands to:
- aggregate, process, visualize, and compare MLPerf benchmarking results for AI and ML systems
- run MLPerf benchmarks
- organize open and reproducible optimization challenges and tournaments.
Artifact Evaluation automation - a community-driven initiative leveraging CK, CM and CMX to automate artifact evaluation and support reproducibility efforts at ML and systems conferences.
- CM-MLOps (2021)
- CM4MLOps (2022-2024)
- CK automation framework v1 and v2
Copyright (c) 2021-2025 MLCommons
Grigori Fursin, the cTuning foundation, and OctoML donated this project to MLCommons to benefit everyone.
Copyright (c) 2014-2021 cTuning foundation
- CM, CM4MLOps and MLPerf automations: MLCommons infra WG
- CMX (the next generation of CM since 2025): Grigori Fursin
To learn more about the motivation behind this project, please explore the following articles and presentations:
- HPCA'25 article "MLPerf Power: Benchmarking the Energy Efficiency of Machine Learning Systems from Microwatts to Megawatts for Sustainable AI": [ Arxiv ], [ tutorial to reproduce results using CM/CMX ]
- "Enabling more efficient and cost-effective AI/ML systems with Collective Mind, virtualized MLOps, MLPerf, Collective Knowledge Playground and reproducible optimization tournaments": [ ArXiv ]
- ACM REP'23 keynote about the MLCommons CM automation framework: [ slides ]
- ACM TechTalk'21 about Collective Knowledge project: [ YouTube ] [ slides ]
- Journal of the Royal Society'20: [ paper ]
This open-source project was created by Grigori Fursin and sponsored by cTuning.org, OctoAI and HiPEAC. Grigori donated this project to MLCommons to modularize and automate MLPerf benchmarks, benefit the community, and foster its development as a collaborative, community-driven effort.
We thank MLCommons, FlexAI and cTuning for supporting this project, as well as our dedicated volunteers and collaborators for their feedback and contributions!
If you found the CM automations helpful, kindly reference this article: [ ArXiv ], [ BibTex ].
You are welcome to contact the author to discuss long-term plans and potential collaboration.