diff --git a/.nojekyll b/.nojekyll
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/404.html b/404.html
new file mode 100644
index 0000000000..19de19aa02
--- /dev/null
+++ b/404.html
@@ -0,0 +1,588 @@
+Following our successful community submission to MLPerf inference v3.0,
+we will set up new weekly conference calls shortly - please stay tuned for more details!
+Please add your topics for discussion to the meeting notes
+or submit them via GitHub tickets.
+Please join our mailing list here.
+See our R&D roadmap for Q4 2022 and Q1 2023
+MLCommons is a non-profit consortium of 50+ companies that was originally created
+to develop a common, reproducible and fair benchmarking methodology for new AI and ML hardware.
+MLCommons has developed an open-source reusable module called loadgen
+that efficiently and fairly measures the performance of inference systems.
+It generates traffic for scenarios that were formulated by a diverse set of experts from MLCommons
+to emulate the workloads seen in mobile devices, autonomous vehicles, robotics, and cloud-based setups.
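To make the loadgen description above concrete, here is a minimal sketch of driving loadgen from Python via its mlperf_loadgen bindings. The stub SUT/QSL callbacks and sample counts are illustrative, and constructor signatures may differ slightly between loadgen releases.

```python
# A minimal, illustrative sketch of driving MLPerf loadgen from Python
# (package: mlperf_loadgen). The "inference" below is a stub and the exact
# constructor signatures may vary slightly across loadgen releases.
import array
import mlperf_loadgen as lg

TOTAL_SAMPLES = 1024   # samples the QSL claims to provide (illustrative)
PERF_SAMPLES = 256     # samples kept resident during the run (illustrative)

def issue_queries(query_samples):
    # Called by loadgen with a list of QuerySample objects to process.
    responses, keep_alive = [], []
    for qs in query_samples:
        result = array.array("B", [0])       # stub inference output
        keep_alive.append(result)            # keep buffers alive until completion
        ptr, length = result.buffer_info()
        responses.append(lg.QuerySampleResponse(qs.id, ptr, length))
    lg.QuerySamplesComplete(responses)       # report completions back to loadgen

def flush_queries():
    pass                                     # nothing is buffered in this stub

def load_samples(indices):
    pass                                     # a real SUT would stage data in RAM here

def unload_samples(indices):
    pass

settings = lg.TestSettings()
settings.scenario = lg.TestScenario.Offline  # SingleStream, MultiStream, Server or Offline
settings.mode = lg.TestMode.PerformanceOnly

sut = lg.ConstructSUT(issue_queries, flush_queries)
qsl = lg.ConstructQSL(TOTAL_SAMPLES, PERF_SAMPLES, load_samples, unload_samples)
lg.StartTest(sut, qsl, settings)             # loadgen generates traffic and measures the run
lg.DestroyQSL(qsl)
lg.DestroySUT(sut)
```

Once a real model replaces the stub, the same skeleton covers the other scenarios by changing settings.scenario.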
+MLCommons has also prepared several reference ML tasks, models and datasets
+for vision, recommendation, language processing and speech recognition
+to let companies benchmark and compare their new hardware in terms of accuracy, latency, throughput and energy
+in a reproducible way twice a year.
+The first goal of this open automation and reproducibility taskforce is to
+develop a lightweight and open-source automation meta-framework
+that can make MLOps and DevOps more interoperable, reusable, portable,
+deterministic and reproducible.
+We then use this automation meta-framework to develop plug&play workflows
+for the MLPerf benchmarks to make it easier for newcomers to run them
+across diverse hardware, software and data, and to automatically plug in
+their own ML tasks, models, data sets, engines, libraries and tools.
+Another goal is to use these portable MLPerf workflows to help students, researchers and
+engineers participate in crowd-benchmarking and exploration of the design space trade-offs
+(accuracy, latency, throughput, energy, size, etc.) of their ML Systems from the cloud to the
+edge using the mature MLPerf methodology, while automating the submission
+of their Pareto-efficient configurations to the open division of the MLPerf
+inference benchmark.
+The final goal is to help end users reproduce MLPerf results
+and deploy the most suitable ML/SW/HW stacks in production
+based on their requirements and constraints.
+This MLCommons taskforce is developing an open-source and technology-neutral
+Collective Mind meta-framework (CM)
+to modularize ML Systems and automate their benchmarking, optimization
+and design space exploration across continuously changing software, hardware and data.
+CM is the second generation of the MLCommons CK workflow automation framework
+that was originally developed to make it easier to reproduce research papers and validate them in the real world.
+As a proof of concept, this technology was successfully used to automate
+MLPerf benchmarking and submissions
+from Qualcomm, HPE, Dell, Lenovo, dividiti, Krai, the cTuning foundation and OctoML.
+For example, it was used and extended by Arjun Suresh
+and several other engineers to automate the record-breaking MLPerf inference benchmark submission for Qualcomm AI 100 devices.
+The goal of this group is to help users automate all the steps to prepare and run MLPerf benchmarks
+across any ML models, data sets, frameworks, compilers and hardware
+using the MLCommons CM framework.
+Here is an example of the current manual and error-prone MLPerf benchmark preparation steps:
+[figure: manual MLPerf benchmark preparation steps]
+Here is the concept of CM-based automated workflows:
+[figure: CM-based automated MLPerf workflows]
+We finished prototyping the new CM framework in the summer of 2022 based on the feedback of CK users
+and successfully used it to modularize MLPerf and automate the submission of benchmarking results to MLPerf inference v2.1.
+See this tutorial for more details.
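To illustrate what a CM automation call looks like in practice, here is a minimal sketch using the cmind Python package. The 'detect,os' script tags are an assumption (the available scripts depend on which CM repositories have been pulled, e.g. mlcommons@ck), so treat this as the shape of the workflow rather than a definitive command.

```python
# A minimal, illustrative sketch of calling a CM automation from Python
# (package "cmind", installed with: pip install cmind).
# The script tags are an assumption and require a CM repository with portable
# scripts to be pulled first (for example: cm pull repo mlcommons@ck).
import cmind

# Run a portable CM script that detects the host OS and records its properties.
r = cmind.access({'action': 'run',
                  'automation': 'script',
                  'tags': 'detect,os',
                  'out': 'con'})

# Every CM call returns a dictionary; 'return' == 0 means success.
if r['return'] > 0:
    raise RuntimeError(r.get('error', 'CM automation failed'))
```

The command-line equivalent is roughly `cm run script --tags=detect,os`; the MLPerf workflows build on the same mechanism, chaining such portable scripts to detect or install dependencies and then run the benchmark.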
+We continue developing CM as an open-source educational toolkit
+to help the community learn how to modularize, crowd-benchmark, optimize and deploy
+Pareto-efficient ML Systems based on the mature MLPerf methodology and portable CM scripts -
+please check the deliverables section to keep track of our community developments
+and do not hesitate to join this community effort!
+See our R&D roadmap for Q4 2022 and Q1 2023
+HPCA'22 presentation "MLPerf design space exploration and production deployment"
+Tools:
+Google Drive (public access)
+This project is supported by MLCommons, OctoML
+and many great contributors.
+Moved to https://github.com/ctuning/artifact-evaluation/blob/master/docs/checklist.md
+Moved to https://github.com/ctuning/artifact-evaluation/blob/master/docs/faq.md
+A review preference is a small integer that indicates how much you want to review a submission. Positive numbers mean you want to review, negative numbers mean you don’t, and −100 means you think you have a conflict. −20 to 20 is a typical range for real preferences; multiple submissions can have the same preference. The automatic assignment algorithm attempts to assign reviews in descending preference order, using topic scores to break ties. Different users’ preference values are not compared and need not use the same scale.<\/p>\n\n
The list shows all submissions and their topics (high interest topics<\/span>, low interest topics<\/span>). “Topic score” summarizes your interest in the submission’s topics. Select a column heading to sort by that column. Enter preferences in the text boxes or on the paper pages. You may also upload preferences from a text file; see the “Download” and “Upload” links below the paper list.<\/p>",
+ "random_pids": true,
+ "response": [
+ {
+ "id": 1,
+ "name": "",
+ "open": "",
+ "done": "",
+ "grace": 0,
+ "condition": "all",
+ "wordlimit": 500,
+ "truncate": false,
+ "instructions": "The authors’ response should address reviewer concerns and correct misunderstandings. Make it short and to the point; the conference deadline has passed. Try to stay within {wordlimit} words."
+ }
+ ],
+ "response_active": false,
+ "review": [
+ {
+ "id": 3,
+ "name": "Full-Evaluation",
+ "soft": "",
+ "done": "",
+ "external_soft": "",
+ "external_done": ""
+ },
+ {
+ "id": 2,
+ "name": "Kick-the-Tires",
+ "soft": "",
+ "done": "",
+ "external_soft": "",
+ "external_done": ""
+ }
+ ],
+ "review_blind": "blind",
+ "review_default_external_round": "",
+ "review_default_round": "Kick-the-Tires",
+ "review_identity_visibility_external": "after_review",
+ "review_identity_visibility_pc": false,
+ "review_open": false,
+ "review_proposal": "no",
+ "review_proposal_editable": "no",
+ "review_rating": "pc",
+ "review_self_assign": true,
+ "review_terms": "Please, use the following guidelines to review artifacts: cTuning.org\/ae\/reviewing.html<\/a>.",
+ "review_visibility_author": "yes",
+ "review_visibility_author_condition": "",
+ "review_visibility_author_tags": "",
+ "review_visibility_external": true,
+ "review_visibility_lead": true,
+ "review_visibility_pc": "assignment_complete",
+ "rf": [
+ {
+ "id": "s04",
+ "name": "Evaluator expertise",
+ "order": 1,
+ "type": "radio",
+ "description": "",
+ "required": true,
+ "visibility": "au",
+ "condition": "all",
+ "values": [
+ {
+ "id": 1,
+ "symbol": 1,
+ "name": "Some familiarity",
+ "order": 1
+ },
+ {
+ "id": 2,
+ "symbol": 2,
+ "name": "Knowledgeable",
+ "order": 2
+ },
+ {
+ "id": 3,
+ "symbol": 3,
+ "name": "Expert",
+ "order": 3
+ }
+ ],
+ "scheme": "svr"
+ },
+ {
+ "id": "s01",
+ "name": "Artifact publicly available?",
+ "order": 2,
+ "type": "radio",
+ "description": "The author-created artifacts relevant to this paper will receive an ACM \"artifact available\" badge only if<\/strong> they have been placed on a publicly accessible archival repository such as Zenodo, FigShare and Dryad. A DOI will be then assigned to their artifacts and must be provided in the Artifact Appendix!\n\n Note: publisher repositories, institutional repositories or open commercial repositories are acceptable only if they have a declared plan to enable permanent accessibility! Personal web pages, GitHub, GitLab, BitBucket, Google Drive and DropBox are not acceptable for this purpose!<\/p>\n\n Artifacts do not need to have been formally evaluated in order for an article to receive this badge. In addition, they need not be complete in the sense described above. They simply need to be relevant to the study and add value beyond the text in the article. Such artifacts could be something as simple as the data from which the figures are drawn, or as complex as a complete software system under study.<\/p>",
+ "required": true,
+ "visibility": "au",
+ "condition": "all",
+ "values": [
+ {
+ "id": 1,
+ "symbol": 1,
+ "name": "Publicly available",
+ "order": 1
+ },
+ {
+ "id": 2,
+ "symbol": 2,
+ "name": "Not publicly available",
+ "order": 2
+ }
+ ],
+ "scheme": "svr"
+ },
+ {
+ "id": "s02",
+ "name": "Artifact functional?",
+ "order": 3,
+ "type": "radio",
+ "description": " Moved to https://github.com/ctuning/artifact-evaluation/blob/master/docs/hotcrp-config/README.md Moved to https://github.com/ctuning/artifact-evaluation/blob/master/docs/reviewing.md Moved to https://github.com/ctuning/artifact-evaluation/blob/master/docs/submission.md\n\n