valid result directory structure update + LLAMA2 access token & model permission link update #15

Merged
4 changes: 4 additions & 0 deletions docs/benchmarks/language/get-llama2-70b-data.md
@@ -28,4 +28,8 @@ Get the Official MLPerf LLAMA2-70b Model
```
cm run script --tags=get,ml-model,llama2-70b,_pytorch -j
```

!!! tip

    Downloading the llama2-70B model from Hugging Face will prompt you for your Hugging Face username and password. Note that the password required is the [**access token**](https://huggingface.co/settings/tokens) generated for your account. Additionally, ensure that your account has been granted access to the [llama2-70B](https://huggingface.co/meta-llama/Llama-2-70b-chat-hf) model.
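To avoid typing the token interactively, it can be read from the environment. A minimal sketch, assuming an `HF_TOKEN` environment variable; the helper below is hypothetical and not part of the CM tooling:

```python
import os

def hf_token():
    """Read the Hugging Face access token from the environment.

    The download's password prompt expects this access token,
    not your Hugging Face account password.
    """
    token = os.environ.get("HF_TOKEN")
    if not token:
        raise RuntimeError(
            "Set HF_TOKEN to an access token generated at "
            "https://huggingface.co/settings/tokens"
        )
    return token
```

The token can then be supplied to whatever download step prompts for a password, keeping the secret out of shell history and scripts.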

42 changes: 25 additions & 17 deletions docs/submission/index.md
@@ -12,27 +12,35 @@ hide:
=== "Non CM based benchmark"
If you have not followed the `cm run` commands under the individual model pages in the [benchmarks](../index.md) directory, please make sure that the result directory is structured in the following way.
```
└── System description ID(SUT Name)
    ├── system_meta.json
    └── Benchmark
        └── Scenario
            ├── Performance
            |   └── run_x # 1 run for all scenarios
            |       ├── mlperf_log_summary.txt
            |       └── mlperf_log_detail.txt
            ├── Accuracy
            |   ├── mlperf_log_summary.txt
            |   ├── mlperf_log_detail.txt
            |   ├── mlperf_log_accuracy.json
            |   └── accuracy.txt
            └── Compliance_Test_ID
                ├── Performance
                |   └── run_x # 1 run for all scenarios
                |       ├── mlperf_log_summary.txt
                |       └── mlperf_log_detail.txt
                ├── Accuracy
                |   ├── baseline_accuracy.txt
                |   ├── compliance_accuracy.txt
                |   ├── mlperf_log_accuracy.json
                |   └── accuracy.txt
                ├── verify_performance.txt
                └── verify_accuracy.txt # for TEST01 only
```
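Before packaging a submission, the layout above can be sanity-checked with a short script. A minimal sketch, not part of the CM tooling; the helper name is an assumption, and `run_1` stands in for the `run_x` directories, so adjust it to your actual run names:

```python
import os

# Files expected under each Scenario directory, following the layout above.
# "run_1" is a placeholder for the run_x directories; adjust as needed.
REQUIRED_FILES = [
    os.path.join("Performance", "run_1", "mlperf_log_summary.txt"),
    os.path.join("Performance", "run_1", "mlperf_log_detail.txt"),
    os.path.join("Accuracy", "mlperf_log_accuracy.json"),
    os.path.join("Accuracy", "accuracy.txt"),
]

def missing_files(scenario_dir):
    """Return the required files that are absent under scenario_dir."""
    return [
        f for f in REQUIRED_FILES
        if not os.path.isfile(os.path.join(scenario_dir, f))
    ]
```

Running `missing_files` over every `Benchmark/Scenario` directory before submission catches an incomplete tree early, rather than at submission-checker time.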

!!! tip

    - The `cm_sut_info.json` should contain the following keys:
        - `system_name`
        - `implementation`
        - `device`
        - `framework`
        - `run_config`

<details>
<summary>Click here if you are submitting in open division</summary>

* The `model_mapping.json` file, which maps each custom model's full name to the corresponding official model name, should be included inside the SUT folder. The format of the JSON file is:
