
fix batch size duplication issue #416

Merged: 2 commits into mlperf-inference from anandhu-eng-patch-1 on Oct 24, 2024
Conversation

@anandhu-eng (Contributor) commented Oct 24, 2024

In reference to issue: #414

@anandhu-eng anandhu-eng requested a review from a team as a code owner October 24, 2024 10:42

github-actions bot commented Oct 24, 2024

MLCommons CLA bot: All contributors have signed the MLCommons CLA ✍️ ✅

@anandhu-eng anandhu-eng marked this pull request as draft October 24, 2024 10:42
@anandhu-eng anandhu-eng marked this pull request as ready for review October 24, 2024 10:55
@@ -264,7 +264,11 @@ def get_run_cmd_reference(os_info, env, scenario_extra_options, mode_extra_options
    cmd = env['CM_PYTHON_BIN_WITH_PATH'] + " run.py --backend=" + env['CM_MLPERF_BACKEND'] + " --scenario=" + env['CM_MLPERF_LOADGEN_SCENARIO'] + \
        env['CM_MLPERF_LOADGEN_EXTRA_OPTIONS'] + scenario_extra_options + mode_extra_options + dataset_options + quantization_options
    if env['CM_MLPERF_BACKEND'] == "deepsparse":
        cmd += " --batch_size=" + env.get('CM_MLPERF_LOADGEN_MAX_BATCHSIZE', '1') + " --model_path=" + env['MODEL_FILE']
    if "--batch-size" in cmd:
        cmd = cmd.replace("--batch-size", "--batch_size")
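A note on the last hunk line: Python strings are immutable, so str.replace returns a new string and its result must be reassigned (a bare cmd.replace(...) would be a no-op). Below is a minimal standalone sketch of the normalization this hunk performs; the function name and the example command are illustrative only, not taken from cm4mlops:

    def normalize_batch_size_flag(cmd: str) -> str:
        # Rewrite the hyphenated --batch-size flag into the underscored
        # --batch_size form that the reference run.py expects.
        if "--batch-size" in cmd:
            # str.replace returns a new string; without reassignment the
            # call would silently do nothing.
            cmd = cmd.replace("--batch-size", "--batch_size")
        return cmd

    print(normalize_batch_size_flag("run.py --backend=deepsparse --batch-size=1"))
    # -> run.py --backend=deepsparse --batch_size=1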
arjunsuresh (Contributor) commented:

Why is this replacement done?

@anandhu-eng (Contributor, Author) replied:

Hi @arjunsuresh, this change was made on the assumption that the batch size had already been set in an earlier section of the code. But I think that unless the backend is deepsparse there is no need to set the batch size at all: I did not find any such argument here, and none appears in the command generated through CM below:

CM script::benchmark-program/run.sh

Run Directory: /home/anandhu/CM/repos/local/cache/c451e090cdb24951/inference/language/bert

CMD: /home/anandhu/CM/repos/local/cache/c5f60ba1e1b144af/bertthreading/bin/python3 run.py --backend=pytorch --scenario=Offline   --mlperf_conf '/home/anandhu/CM/repos/local/cache/c451e090cdb24951/inference/mlperf.conf' --user_conf '/home/anandhu/CM/repos/anandhu-eng@cm4mlops/script/generate-mlperf-inference-user-conf/tmp/6f0153a99ab04e1bab5cc2fd237604b1.conf' 2>&1 ; echo \$? > exitstatus | tee '/home/anandhu/CM/repos/local/cache/9ebcafa34f154897/test_results/intel_spr_i9-reference-cpu-pytorch-v2.4.1-default_config/bert-99/offline/performance/run_1/console.out'

I have changed the code in c7cdc80.
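As a hedged illustration of the reasoning above (only the deepsparse backend needs an explicit batch size appended; the pytorch command shown above carries no batch-size argument), the guard might look like the sketch below. This is an assumption about the intent, not the actual contents of commit c7cdc80:

    def append_deepsparse_options(cmd: str, env: dict) -> str:
        # Only deepsparse requires an explicit batch size and model path
        # on the command line; for other backends nothing is appended.
        if env.get('CM_MLPERF_BACKEND') == "deepsparse":
            cmd += " --batch_size=" + env.get('CM_MLPERF_LOADGEN_MAX_BATCHSIZE', '1')
            cmd += " --model_path=" + env['MODEL_FILE']
        return cmd

    env = {'CM_MLPERF_BACKEND': 'pytorch'}
    print(append_deepsparse_options("run.py --backend=pytorch", env))
    # -> run.py --backend=pytorch   (unchanged: no batch size appended)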

@arjunsuresh arjunsuresh merged commit 358347c into mlperf-inference Oct 24, 2024
35 checks passed
@github-actions github-actions bot locked and limited conversation to collaborators Oct 24, 2024
@anandhu-eng anandhu-eng deleted the anandhu-eng-patch-1 branch October 24, 2024 15:36