
update URLs in benchmark tests to use localhost #2704

Merged: 1 commit into master from issues/benchmark_update_url on Oct 11, 2023

Conversation

agunapal
Collaborator

Description

Update URLs in benchmark runs to use localhost

Fixes #(issue)

Type of change

Please delete options that are not relevant.

  • Bug fix (non-breaking change which fixes an issue)
  • Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • New feature (non-breaking change which adds functionality)
  • This change requires a documentation update

Feature/Issue validation/testing

Starting AB benchmark suite...


Configured execution parameters are:
{'url': 'https://torchserve.pytorch.org/mar_files/mnist_v2.mar', 'gpus': 'all', 'exec_env': 'local', 'batch_size': 1, 'batch_delay': 100, 'workers': 4, 'concurrency': 10, 'requests': 500000, 'input': './examples/image_classifier/mnist/test_data/0.png', 'content_type': 'application/jpg', 'image': '', 'docker_runtime': '', 'backend_profiling': False, 'handler_profiling': False, 'generate_graphs': False, 'config_properties': './benchmarks/config.properties', 'inference_model_url': 'predictions/benchmark', 'report_location': '/tmp', 'tmp_dir': '/tmp', 'result_file': '/tmp/benchmark/result.txt', 'metric_log': '/tmp/benchmark/logs/model_metrics.log', 'inference_url': 'http://127.0.0.1:8080', 'management_url': 'http://127.0.0.1:8081', 'config_properties_name': 'config.properties'}


Preparing local execution...
*Terminating any existing Torchserve instance ...
torchserve --stop
Removing orphan pid file.
TorchServe is not currently running.
*Setting up model store...
*Starting local Torchserve instance...
torchserve --start --model-store /tmp/model_store --workflow-store /tmp/wf_store --ts-config /tmp/benchmark/conf/config.properties > /tmp/benchmark/logs/model_metrics.log
*Testing system health...
{
  "status": "Healthy"
}

*Registering model...
{
  "status": "Model \"benchmark\" Version: 2.0 registered with 4 initial workers"
}
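The registration step above goes through TorchServe's management API on the management address (`http://127.0.0.1:8081` per the configured parameters). A minimal sketch of the request URL such a registration effectively issues; the query parameters are the documented management-API ones, but the exact call the benchmark script makes is not shown in this log:

```python
from urllib.parse import urlencode

def register_url(management_url, mar_url, workers, model_name):
    # A POST to /models on the management port registers a model archive.
    query = urlencode({
        "url": mar_url,
        "model_name": model_name,
        "initial_workers": workers,
        "synchronous": "true",
    })
    return f"{management_url}/models?{query}"

print(register_url(
    "http://127.0.0.1:8081",
    "https://torchserve.pytorch.org/mar_files/mnist_v2.mar",
    4,
    "benchmark",
))
```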



Executing warm-up ...
ab -c 10  -n 50000.0 -k -p /tmp/benchmark/input -T  application/jpg http://127.0.0.1:8080/predictions/benchmark > /tmp/benchmark/result.txt
Completed 5000 requests
Completed 10000 requests
Completed 15000 requests
Completed 20000 requests
Completed 25000 requests
Completed 30000 requests
Completed 35000 requests
Completed 40000 requests
Completed 45000 requests
Completed 50000 requests
Finished 50000 requests


Executing inference performance tests ...
ab -c 10  -n 500000 -k -p /tmp/benchmark/input -T  application/jpg http://127.0.0.1:8080/predictions/benchmark > /tmp/benchmark/result.txt
Completed 50000 requests
Completed 100000 requests
Completed 150000 requests
Completed 200000 requests
Completed 250000 requests
Completed 300000 requests
Completed 350000 requests
Completed 400000 requests
Completed 450000 requests
Completed 500000 requests
Finished 500000 requests
*Unregistering model ...
{
  "status": "Model \"benchmark\" unregistered"
}

*Terminating Torchserve instance...
torchserve --stop
TorchServe has stopped.
Apache Bench Execution completed.


Generating Reports...
Dropping 800165 warmup lines from log

Writing extracted PredictionTime metrics to /tmp/benchmark/predict.txt 

Writing extracted HandlerTime metrics to /tmp/benchmark/handler_time.txt 

Writing extracted QueueTime metrics to /tmp/benchmark/waiting_time.txt 

Writing extracted WorkerThreadTime metrics to /tmp/benchmark/worker_thread.txt 

Writing extracted CPUUtilization metrics to /tmp/benchmark/cpu_percentage.txt 

Writing extracted MemoryUtilization metrics to /tmp/benchmark/memory_percentage.txt 

Writing extracted GPUUtilization metrics to /tmp/benchmark/gpu_percentage.txt 

Writing extracted GPUMemoryUtilization metrics to /tmp/benchmark/gpu_memory_percentage.txt 

Writing extracted GPUMemoryUsed metrics to /tmp/benchmark/gpu_memory_used.txt 
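Each "Writing extracted ... metrics" line above corresponds to pulling one named metric out of `model_metrics.log` into its own file. A sketch of that per-metric extraction step, assuming StatsD-style lines of the form `MetricName.Unit:value|#...` (an assumption about the log format; the real script's parsing logic is not shown here):

```python
import re

# Assumed TorchServe-style metric line: "MetricName.Unit:value|#tags..."
METRIC_RE = re.compile(r"(?P<name>\w+)\.(?P<unit>\w+):(?P<value>[0-9.]+)\|")

def extract_metric(lines, metric_name):
    """Collect all values logged for one metric name."""
    values = []
    for line in lines:
        m = METRIC_RE.search(line)
        if m and m.group("name") == metric_name:
            values.append(float(m.group("value")))
    return values

sample = [
    "2023-10-11T22:00:00 - PredictionTime.ms:5.2|#ModelName:benchmark",
    "2023-10-11T22:00:00 - QueueTime.ms:0.1|#ModelName:benchmark",
]
print(extract_metric(sample, "PredictionTime"))  # values for predict.txt
```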
*Generating CSV output...
Saving benchmark results to /tmp

Test suite execution complete.

Checklist:

  • Did you have fun?
  • Have you added tests that prove your fix is effective or that this feature works?
  • Has code been commented, particularly in hard-to-understand areas?
  • Have you made corresponding changes to the documentation?


codecov bot commented Oct 11, 2023

Codecov Report

Merging #2704 (da7bdbf) into master (c69defd) will decrease coverage by 0.11%.
The diff coverage is n/a.

❗ Current head da7bdbf differs from pull request most recent head 974486d. Consider uploading reports for the commit 974486d to get more accurate results

@@            Coverage Diff             @@
##           master    #2704      +/-   ##
==========================================
- Coverage   72.44%   72.34%   -0.11%     
==========================================
  Files          85       85              
  Lines        3963     3963              
  Branches       58       58              
==========================================
- Hits         2871     2867       -4     
- Misses       1088     1092       +4     
  Partials        4        4              

see 2 files with indirect coverage changes


@agunapal agunapal added this pull request to the merge queue Oct 11, 2023
@@ -1,5 +1,5 @@
-inference_address=http://0.0.0.0:8080
-management_address=http://0.0.0.0:8081
+inference_address=http://127.0.0.1:8080
+management_address=http://127.0.0.1:8081
Collaborator

@namannandan namannandan Oct 11, 2023


Benchmarking also supports running Torchserve in a container environment. This config will not work in this case. I believe this doesn't break any of our current workflows though since we run torchserve directly on the host for benchmarking.
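The reviewer's point is about interface binding: `0.0.0.0` listens on all interfaces (reachable through a container's published ports), while `127.0.0.1` is loopback-only, so inside a container the host's `127.0.0.1` is unreachable. A small self-contained sketch of the loopback case:

```python
import socket

# Bind a listener to loopback only, as the new config does with 127.0.0.1.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))  # port 0: let the OS pick a free port
server.listen(1)
host, port = server.getsockname()

# A connection from the same host succeeds...
client = socket.create_connection((host, port), timeout=2)
print("loopback connect ok:", host)
# ...but a listener bound this way is not visible on other network
# interfaces, which is why the container-based benchmark mode needs
# 0.0.0.0 (bind all interfaces) rather than loopback.
client.close()
server.close()
```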

Merged via the queue into master with commit 645d9a5 Oct 11, 2023
13 checks passed
@agunapal agunapal deleted the issues/benchmark_update_url branch October 11, 2023 23:10