Updated the docs (#1724)
* Support batch-size in llama2 run

* Add Rclone-Cloudflare download instructions to README.md

* Add Rclone-Cloudflare download instructions to README.md

* Minor wording edit to README.md

* Add Rclone-Cloudflare download instructions to README.md

* Add Rclone-GDrive download instructions to README.md

* Add new and old instructions to README.md

* Tweak language in README.md

* Language tweak in README.md

* Minor language tweak in README.md

* Fix typo in README.md

* Count error when logging errors: submission_checker.py

* Fixes #1648, restrict loadgen uncommitted error message to within the loadgen directory

* Update test-rnnt.yml (#1688)

Stopping the GitHub Action for rnnt

* Added docs init

Added GitHub Action for website publish

Update benchmark documentation

Update publish.yaml

Update publish.yaml

Update benchmark documentation

Improved the submission documentation

Fix taskname

Removed unused images

* Fix benchmark URLs

* Fix links

* Add _full variation to run commands

* Added script flow diagram

* Added docker setup command for CM, extra run options

* Added support for docker options in the docs

* Added --quiet to the CM run_cmds in docs

* Fix the test query count for cm commands

* Support ctuning-cpp implementation

* Added commands for mobilenet models

* Docs cleanup

* Docs cleanup

* Added separate files for dataset and models in the docs

* Remove redundant tab in the docs

* Fixes some WIP models in the docs

* Use the official docs page for CM installation

* Fix the dead link in docs

* Fix indentation issue in docs

* Added dockerinfo for nvidia implementation

* Added run options for gptj

* Added execution environment tabs

* Cleanup of the docs

* Cleanup of the docs

* Reordered the sections of the docs page

* Removed an unnecessary heading in the docs

* Fixes the commands for datacenter

---------

Co-authored-by: Nathan Wasson <nathanw@mlcommons.org>
Co-authored-by: anandhu-eng <anandhukicks@gmail.com>
3 people authored Jun 7, 2024
1 parent 8af5229 commit 70d6678
Showing 22 changed files with 667 additions and 515 deletions.
43 changes: 43 additions & 0 deletions docs/benchmarks/image_classification/get-resnet50-data.md
@@ -0,0 +1,43 @@
# Image Classification using ResNet50

## Dataset

The benchmark implementation run command automatically downloads the validation and calibration datasets and performs the necessary preprocessing. If you want to download only the datasets, you can use the commands below.

=== "Validation"
The ResNet50 validation run uses the Imagenet 2012 validation dataset, which consists of 50,000 images.

### Get Validation Dataset
```
cm run script --tags=get,dataset,imagenet,validation -j
```
=== "Calibration"
The ResNet50 calibration dataset consists of 500 images selected from the Imagenet 2012 validation dataset. There are two alternative options for the calibration dataset.

### Get Calibration Dataset Using Option 1
```
cm run script --tags=get,dataset,imagenet,calibration,_mlperf.option1 -j
```
### Get Calibration Dataset Using Option 2
```
cm run script --tags=get,dataset,imagenet,calibration,_mlperf.option2 -j
```
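
If you want to confirm where CM placed a dataset, the JSON summary printed by `-j` can help. A minimal sketch, assuming the path is reported under an environment key like `CM_DATASET_PATH` (the key name is an assumption — inspect the actual output):

```bash
# Run the download unattended and keep a copy of the output for inspection.
cm run script --tags=get,dataset,imagenet,validation --quiet -j | tee imagenet-download.log

# Grep the JSON summary for dataset path keys.
# (CM_DATASET_PATH is an assumed key name — verify against the real output.)
grep -o '"CM_DATASET[A-Z_]*": *"[^"]*"' imagenet-download.log
```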

## Model
The benchmark implementation run command automatically downloads the required model and performs the necessary conversions. If you want to download only the official model, you can use the commands below.

Get the Official MLPerf ResNet50 Model

=== "Tensorflow"

### Tensorflow
```
cm run script --tags=get,ml-model,resnet50,_tensorflow -j
```
=== "Onnx"

### Onnx
```
cm run script --tags=get,ml-model,resnet50,_onnx -j
```
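
CM caches what it downloads, so re-running the same command should reuse the cached files. To check whether the model is already cached, something like the following should work (`cm show cache` with tag filters is an assumption — verify against your CM version):

```bash
# List cached CM artifacts matching the ResNet50 model tags.
# (Assumes `cm show cache --tags=...` is available in your CM version.)
cm show cache --tags=ml-model,resnet50
```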

59 changes: 59 additions & 0 deletions docs/benchmarks/image_classification/mobilenets.md
Original file line number Diff line number Diff line change
@@ -0,0 +1,59 @@
# Image Classification using Mobilenet models

Mobilenet models are not official MLPerf models, so they cannot be used for a Closed division MLPerf inference submission. However, since they run on the Imagenet dataset, they can be used for an Open division submission. Only CPU runs are supported at present.

## TFLite Backend

=== "Mobilenet-V1"
### Mobilenet V1
```bash
cm run script --tags=run,mobilenet-models,_tflite,_mobilenet-v1 --adr.compiler.tags=gcc
```
=== "Mobilenet-V2"
### Mobilenet V2
```bash
cm run script --tags=run,mobilenet-models,_tflite,_mobilenet-v2 --adr.compiler.tags=gcc
```
=== "Mobilenet-V2"
### Mobilenet V2
```bash
cm run script --tags=run,mobilenet-models,_tflite,_mobilenet-v2 --adr.compiler.tags=gcc
```
=== "Mobilenets"
### Mobilenet V1,V2,V3
```bash
cm run script --tags=run,mobilenet-models,_tflite,_mobilenet --adr.compiler.tags=gcc
```
=== "Efficientnet"
### Efficientnet
```bash
cm run script --tags=run,mobilenet-models,_tflite,_efficientnet --adr.compiler.tags=gcc
```

## ARMNN Backend
=== "Mobilenet-V1"
### Mobilenet V1
```bash
cm run script --tags=run,mobilenet-models,_tflite,_armnn,_mobilenet-v1 --adr.compiler.tags=gcc
```
=== "Mobilenet-V2"
### Mobilenet V2
```bash
cm run script --tags=run,mobilenet-models,_tflite,_armnn,_mobilenet-v2 --adr.compiler.tags=gcc
```
=== "Mobilenet-V2"
### Mobilenet V2
```bash
cm run script --tags=run,mobilenet-models,_tflite,_armnn,_mobilenet-v2 --adr.compiler.tags=gcc
```
=== "Mobilenets"
### Mobilenet V1,V2,V3
```bash
cm run script --tags=run,mobilenet-models,_tflite,_armnn,_mobilenet --adr.compiler.tags=gcc
```
=== "Efficientnet"
### Efficientnet
```bash
cm run script --tags=run,mobilenet-models,_tflite,_armnn,_efficientnet --adr.compiler.tags=gcc
```
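
To compare the two backends on the same model, the documented tag variations can be looped over directly. A small sketch using only the tags shown on this page, plus the `--quiet` flag mentioned in the commit notes to avoid interactive prompts:

```bash
#!/usr/bin/env bash
# Run Mobilenet V1 with the plain TFLite backend and the ARMNN backend.
# Only tag variations shown in this page are used here.
set -e
for backend_tags in "_tflite" "_tflite,_armnn"; do
  echo "=== mobilenet-v1 with ${backend_tags} ==="
  cm run script --tags=run,mobilenet-models,${backend_tags},_mobilenet-v1 \
      --adr.compiler.tags=gcc --quiet
done
```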

54 changes: 6 additions & 48 deletions docs/benchmarks/image_classification/resnet50.md
@@ -1,68 +1,26 @@
# Image Classification using ResNet50

## Dataset

The benchmark implementation run command automatically downloads the validation and calibration datasets and performs the necessary preprocessing. If you want to download only the datasets, you can use the commands below.

=== "Validation"
The ResNet50 validation run uses the Imagenet 2012 validation dataset, which consists of 50,000 images.

### Get Validation Dataset
```
cm run script --tags=get,dataset,imagenet,validation -j
```
=== "Calibration"
The ResNet50 calibration dataset consists of 500 images selected from the Imagenet 2012 validation dataset. There are two alternative options for the calibration dataset.

### Get Calibration Dataset Using Option 1
```
cm run script --tags=get,dataset,imagenet,calibration,_mlperf.option1 -j
```
### Get Calibration Dataset Using Option 2
```
cm run script --tags=get,dataset,imagenet,calibration,_mlperf.option2 -j
```

## Model
The benchmark implementation run command automatically downloads the required model and performs the necessary conversions. If you want to download only the official model, you can use the commands below.

Get the Official MLPerf ResNet50 Model

=== "Tensorflow"

### Tensorflow
```
cm run script --tags=get,ml-model,resnet50,_tensorflow -j
```
=== "Onnx"

### Onnx
```
cm run script --tags=get,ml-model,resnet50,_onnx -j
```

## Benchmark Implementations
=== "MLCommons-Python"
### MLPerf Reference Implementation in Python
## MLPerf Reference Implementation in Python

{{ mlperf_inference_implementation_readme (4, "resnet50", "reference") }}

=== "Nvidia"
### Nvidia MLPerf Implementation
## Nvidia MLPerf Implementation

{{ mlperf_inference_implementation_readme (4, "resnet50", "nvidia") }}

=== "Intel"
### Intel MLPerf Implementation
## Intel MLPerf Implementation

{{ mlperf_inference_implementation_readme (4, "resnet50", "intel") }}

=== "Qualcomm"
### Qualcomm AI100 MLPerf Implementation
## Qualcomm AI100 MLPerf Implementation

{{ mlperf_inference_implementation_readme (4, "resnet50", "qualcomm") }}

=== "MLCommon-C++"
### MLPerf Modular Implementation in C++
=== "MLCommons-C++"
## MLPerf Modular Implementation in C++

{{ mlperf_inference_implementation_readme (4, "resnet50", "cpp") }}
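
The tabs above are expanded by the `mlperf_inference_implementation_readme` macro into complete run commands. As a rough illustration of their general shape only — the tags and flags below are assumptions pieced together from the commit notes (the `_full` variation, `--quiet`), not the macro's actual output:

```bash
# Illustrative sketch, not the generated command: check the rendered docs page
# for the exact tags and options for your implementation and device.
cm run script --tags=run-mlperf,inference,_full \
    --model=resnet50 --implementation=reference --quiet
```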
45 changes: 4 additions & 41 deletions docs/benchmarks/language/bert.md
@@ -1,44 +1,7 @@
# Question Answering using Bert-Large

## Dataset

The benchmark implementation run command automatically downloads the validation and calibration datasets and performs the necessary preprocessing. If you want to download only the datasets, you can use the commands below.

=== "Validation"
The BERT validation run uses the SQuAD v1.1 dataset.

### Get Validation Dataset
```
cm run script --tags=get,dataset,squad,validation -j
```

## Model
The benchmark implementation run command automatically downloads the required model and performs the necessary conversions. If you want to download only the official model, you can use the commands below.

Get the Official MLPerf Bert-Large Model

=== "Pytorch"

### Pytorch
```
cm run script --tags=get,ml-model,bert-large,_pytorch -j
```
=== "Onnx"

### Onnx
```
cm run script --tags=get,ml-model,bert-large,_onnx -j
```
=== "Tensorflow"

### Tensorflow
```
cm run script --tags=get,ml-model,bert-large,_tensorflow -j
```

## Benchmark Implementations
=== "MLCommons-Python"
### MLPerf Reference Implementation in Python
## MLPerf Reference Implementation in Python

BERT-99
{{ mlperf_inference_implementation_readme (4, "bert-99", "reference") }}
@@ -47,7 +10,7 @@ Get the Official MLPerf Bert-Large Model
{{ mlperf_inference_implementation_readme (4, "bert-99.9", "reference") }}

=== "Nvidia"
### Nvidia MLPerf Implementation
## Nvidia MLPerf Implementation

BERT-99
{{ mlperf_inference_implementation_readme (4, "bert-99", "nvidia") }}
@@ -56,15 +19,15 @@ Get the Official MLPerf Bert-Large Model
{{ mlperf_inference_implementation_readme (4, "bert-99.9", "nvidia") }}

=== "Intel"
### Intel MLPerf Implementation
## Intel MLPerf Implementation
BERT-99
{{ mlperf_inference_implementation_readme (4, "bert-99", "intel") }}

BERT-99.9
{{ mlperf_inference_implementation_readme (4, "bert-99.9", "intel") }}

=== "Qualcomm"
### Qualcomm AI100 MLPerf Implementation
## Qualcomm AI100 MLPerf Implementation

BERT-99
{{ mlperf_inference_implementation_readme (4, "bert-99", "qualcomm") }}
38 changes: 38 additions & 0 deletions docs/benchmarks/language/get-bert-data.md
@@ -0,0 +1,38 @@
# Question Answering using Bert-Large

## Dataset

The benchmark implementation run command automatically downloads the validation and calibration datasets and performs the necessary preprocessing. If you want to download only the datasets, you can use the commands below.

=== "Validation"
The BERT validation run uses the SQuAD v1.1 dataset.

### Get Validation Dataset
```
cm run script --tags=get,dataset,squad,validation -j
```

## Model
The benchmark implementation run command automatically downloads the required model and performs the necessary conversions. If you want to download only the official model, you can use the commands below.

Get the Official MLPerf Bert-Large Model

=== "Pytorch"

### Pytorch
```
cm run script --tags=get,ml-model,bert-large,_pytorch -j
```
=== "Onnx"

### Onnx
```
cm run script --tags=get,ml-model,bert-large,_onnx -j
```
=== "Tensorflow"

### Tensorflow
```
cm run script --tags=get,ml-model,bert-large,_tensorflow -j
```
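
By default the files land in the CM cache. If you want them in a specific directory, CM download scripts are assumed to take an output-directory option; the `--outdirname` flag below is an assumption, so confirm it against the script's help output before relying on it:

```bash
# Assumed flag: --outdirname redirects the download target away from the CM
# cache. Verify support in your CM version before use.
cm run script --tags=get,ml-model,bert-large,_onnx -j --outdirname=$HOME/models/bert-large
```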

25 changes: 25 additions & 0 deletions docs/benchmarks/language/get-gptj-data.md
@@ -0,0 +1,25 @@
# Text Summarization using GPT-J

## Dataset

The benchmark implementation run command automatically downloads the validation and calibration datasets and performs the necessary preprocessing. If you want to download only the datasets, you can use the commands below.

=== "Validation"
The GPT-J validation run uses the CNNDM dataset.

### Get Validation Dataset
```
cm run script --tags=get,dataset,cnndm,validation -j
```

## Model
The benchmark implementation run command automatically downloads the required model and performs the necessary conversions. If you want to download only the official model, you can use the commands below.

Get the Official MLPerf GPT-J Model

=== "Pytorch"

### Pytorch
```
cm run script --tags=get,ml-model,gptj,_pytorch -j
```
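
The GPT-J checkpoint is large, so an unattended download is convenient. Per the commit notes above, `--quiet` suppresses CM's interactive prompts:

```bash
# Unattended download: --quiet accepts CM's default prompt answers, and -j
# prints a JSON summary of the run at the end.
cm run script --tags=get,ml-model,gptj,_pytorch --quiet -j
```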
26 changes: 26 additions & 0 deletions docs/benchmarks/language/get-llama2-70b-data.md
@@ -0,0 +1,26 @@
# Text Summarization using LLAMA2-70b

## Dataset

The benchmark implementation run command automatically downloads the validation and calibration datasets and performs the necessary preprocessing. If you want to download only the datasets, you can use the commands below.

=== "Validation"
The LLAMA2-70b validation run uses the Open ORCA dataset.

### Get Validation Dataset
```
cm run script --tags=get,dataset,openorca,validation -j
```

## Model
The benchmark implementation run command automatically downloads the required model and performs the necessary conversions. If you want to download only the official model, you can use the commands below.

Get the Official MLPerf LLAMA2-70b Model

=== "Pytorch"

### Pytorch
```
cm run script --tags=get,ml-model,llama2-70b,_pytorch -j
```
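
The LLAMA2-70b weights are on the order of a few hundred gigabytes, so it is worth checking free disk space before kicking off the downloads. A minimal sketch using only the commands above plus standard coreutils (the 300 GB threshold is a rough assumption — adjust for your storage layout and the actual checkpoint size):

```bash
#!/usr/bin/env bash
# Verify free disk space, then fetch the dataset and model unattended.
set -e
required_gb=300
free_gb=$(df --output=avail -BG "$HOME" | tail -1 | tr -dc '0-9')
if [ "${free_gb}" -lt "${required_gb}" ]; then
  echo "Need ~${required_gb} GB free, found ${free_gb} GB" >&2
  exit 1
fi
cm run script --tags=get,dataset,openorca,validation --quiet -j
cm run script --tags=get,ml-model,llama2-70b,_pytorch --quiet -j
```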

