fix batch size duplication issue #416
Conversation
MLCommons CLA bot: All contributors have signed the MLCommons CLA ✍️ ✅
```diff
@@ -264,7 +264,11 @@ def get_run_cmd_reference(os_info, env, scenario_extra_options, mode_extra_optio
     cmd = env['CM_PYTHON_BIN_WITH_PATH'] + " run.py --backend=" + env['CM_MLPERF_BACKEND'] + " --scenario=" + env['CM_MLPERF_LOADGEN_SCENARIO'] + \
         env['CM_MLPERF_LOADGEN_EXTRA_OPTIONS'] + scenario_extra_options + mode_extra_options + dataset_options + quantization_options
     if env['CM_MLPERF_BACKEND'] == "deepsparse":
         cmd += " --batch_size=" + env.get('CM_MLPERF_LOADGEN_MAX_BATCHSIZE', '1') + " --model_path=" + env['MODEL_FILE']
+        if "--batch-size" in cmd:
+            cmd.replace("--batch-size", "--batch_size")
```
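One point worth noting about the added lines: in Python, `str.replace` returns a new string rather than modifying the string in place, so as written the call has no effect on `cmd` unless its result is assigned back. A minimal illustration:

```python
# str.replace returns a new string; the original is left unchanged.
cmd = "run.py --batch-size=8"

cmd.replace("--batch-size", "--batch_size")        # result discarded; cmd unchanged
print(cmd)                                         # run.py --batch-size=8

cmd = cmd.replace("--batch-size", "--batch_size")  # assign back for the edit to stick
print(cmd)                                         # run.py --batch_size=8
```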
Why is this replacement done?
Hi @arjunsuresh, this change was made on the assumption that the batch size had already been set in an earlier section of the code. But I think that, unless the backend is deepsparse, there is no need to set the batch size: I did not find any such argument here, and none appears in the command generated through CM below:
```
CM script::benchmark-program/run.sh
Run Directory: /home/anandhu/CM/repos/local/cache/c451e090cdb24951/inference/language/bert
CMD: /home/anandhu/CM/repos/local/cache/c5f60ba1e1b144af/bertthreading/bin/python3 run.py --backend=pytorch --scenario=Offline --mlperf_conf '/home/anandhu/CM/repos/local/cache/c451e090cdb24951/inference/mlperf.conf' --user_conf '/home/anandhu/CM/repos/anandhu-eng@cm4mlops/script/generate-mlperf-inference-user-conf/tmp/6f0153a99ab04e1bab5cc2fd237604b1.conf' 2>&1 ; echo \$? > exitstatus | tee '/home/anandhu/CM/repos/local/cache/9ebcafa34f154897/test_results/intel_spr_i9-reference-cpu-pytorch-v2.4.1-default_config/bert-99/offline/performance/run_1/console.out'
```
I have changed the code in c7cdc80.
In reference to issue #414.
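For illustration only, a minimal sketch of the kind of guard described above (the helper name `add_backend_options` is hypothetical; the actual change is in c7cdc80): append the batch-size flag only when the backend is deepsparse, and only when no batch-size option is already present in the command.

```python
# Hypothetical sketch, not the actual c7cdc80 change: only the deepsparse
# backend needs an explicit batch size, and the flag is appended just once.
def add_backend_options(cmd: str, env: dict) -> str:
    if env.get('CM_MLPERF_BACKEND') == "deepsparse":
        # Skip the flag if a batch-size option is already part of the command.
        if "--batch_size" not in cmd and "--batch-size" not in cmd:
            cmd += " --batch_size=" + env.get('CM_MLPERF_LOADGEN_MAX_BATCHSIZE', '1')
        cmd += " --model_path=" + env['MODEL_FILE']
    return cmd
```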