MLPerf Mobile Inference Benchmark is an open-source benchmark suite for measuring how fast mobile devices (e.g., phones and laptops) can run AI tasks. The benchmark is run through an app that is currently available for Android and iOS.
Please see the MLPerf Mobile Inference Benchmark paper for a detailed description of the benchmarks, along with the motivation and guiding principles behind the benchmark suite. If you use any part of this benchmark (e.g., reference implementations, submissions, etc.), please cite the following:
@article{janapa2022mlperf,
  title={{MLPerf} Mobile Inference Benchmark: An industry-standard open-source machine learning benchmark for on-device {AI}},
  author={Janapa Reddi, Vijay and Kanter, David and Mattson, Peter and Duke, Jared and Nguyen, Thai and Chukka, Ramesh and Shiring, Ken and Tan, Koan-Sin and Charlebois, Mark and Chou, William and El-Khamy, Mostafa and others},
  journal={Proceedings of Machine Learning and Systems},
  volume={4},
  pages={352--369},
  year={2022}
}
To participate in the MLPerf Mobile Benchmark or submit results, please join the MLCommons Mobile Working Group.
This repo describes the suite of models currently or previously adopted by the MLPerf Mobile Benchmark. The benchmark comprises a set of computer vision models and language understanding models. This repo also contains the guiding rules for participating in the MLPerf Mobile Benchmark and officially submitting results.