
Profile TorchServe Handler (preprocess vs inference vs post-process) #2470

Merged
agunapal merged 25 commits into master from feature/ts_benchmark_profile on Aug 24, 2023

Conversation

@agunapal agunapal commented Jul 18, 2023

Description

Update the benchmark script to report a split of preprocess, inference, and postprocess times.

  • Additional logging, optionally enabled by config in model-config.yaml
  • Additional metrics in the benchmark report, optionally enabled by config
  • Includes a working example with ResNet50 (a sketch of the timing approach follows below)
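
The per-phase timing added by this PR lives in ts/handler_utils/timer.py (see the coverage report below). As a rough illustration of the idea, here is a minimal, self-contained sketch; the decorator name `timed`, the `phase_times` dict, and the `profile_enabled` switch are assumptions made for this example, not the actual API:

```python
# Minimal sketch of per-phase handler timing, in the spirit of ts/handler_utils/timer.py.
# Not the actual implementation: the decorator name, the phase_times dict, and the
# profile_enabled switch are assumptions made for this illustration.
import time
from functools import wraps


def timed(phase):
    """Record the wall-clock time of one handler phase, in milliseconds."""

    def decorator(func):
        @wraps(func)
        def wrapper(self, *args, **kwargs):
            start = time.perf_counter()
            result = func(self, *args, **kwargs)
            self.phase_times[phase] = (time.perf_counter() - start) * 1000
            return result

        return wrapper

    return decorator


class ProfiledHandler:
    """Toy handler with the three phases the benchmark report splits out."""

    def __init__(self, profile_enabled=True):
        # In TorchServe this switch would come from model-config.yaml;
        # here it is just a constructor argument.
        self.profile_enabled = profile_enabled
        self.phase_times = {}

    @timed("preprocess")
    def preprocess(self, data):
        return data  # decode/transform inputs here

    @timed("inference")
    def inference(self, batch):
        return batch  # run the model here

    @timed("postprocess")
    def postprocess(self, preds):
        return preds  # format outputs here

    def handle(self, data):
        out = self.postprocess(self.inference(self.preprocess(data)))
        if self.profile_enabled:
            # These timings surface in the benchmark report below as the
            # backend_*_mean columns.
            for phase, ms in self.phase_times.items():
                print(f"{phase}: {ms:.2f} ms")
        return out
```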

Fixes #(issue)

Type of change

Please delete options that are not relevant.

  • Bug fix (non-breaking change which fixes an issue)
  • Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • New feature (non-breaking change which adds functionality)
  • This change requires a documentation update

Feature/Issue validation/testing

Please describe the Unit or Integration tests that you ran to verify your changes and relevant result summary. Provide instructions so it can be reproduced.
Please also list any relevant details for your test configuration.

TorchServe Benchmark on GPU

Date: 2023-07-18 00:56:21

TorchServe Version: 0.8.1

eager_mode_resnet50

| version | Benchmark | Batch size | Batch delay | Workers | Model | Concurrency | Input | Requests | TS failed requests | TS throughput | TS latency P50 | TS latency P90 | TS latency P99 | TS latency mean | TS error rate | Model_p50 | Model_p90 | Model_p99 | handler_time_mean | predict_mean | waiting_time_mean | worker_thread_mean | cpu_percentage_mean | memory_percentage_mean | gpu_percentage_mean | gpu_memory_percentage_mean | gpu_memory_used_mean | backend_preprocess_mean | backend_inference_mean | backend_postprocess_mean |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0.8.1 | AB | 1 | 100 | 4 | .mar | 100 | input | 10000 | 0 | 97.91 | 998 | 1259 | 1420 | 1021.319 | 0.0 | 19.04 | 24.84 | 26.19 | 38.7 | 38.83 | 974.83 | 0.42 | 50.0 | 35.11 | 18.0 | 28.51 | 6566.0 | 29.85 | 7.32 | 0.27 |
| 0.8.1 | AB | 16 | 100 | 4 | .mar | 100 | input | 10000 | 9999 | 102.3 | 949 | 1380 | 1750 | 977.526 | 99.99 | 436.65 | 556.46 | 576.54 | 610.41 | 610.67 | 349.69 | 10.15 | 100.0 | 36.51 | 8.0 | 30.13 | 6938.0 | 585.39 | 19.59 | 0.26 |
| 0.8.1 | AB | 2 | 100 | 4 | .mar | 100 | input | 10000 | 9998 | 94.47 | 1042 | 1258 | 1488 | 1058.591 | 99.98 | 38.15 | 56.68 | 62.99 | 81.12 | 81.36 | 966.6 | 2.16 | 0.0 | 34.35 | 10.5 | 27.11 | 6242.0 | 70.01 | 8.01 | 0.24 |
| 0.8.1 | AB | 32 | 100 | 4 | .mar | 100 | input | 10000 | 9119 | 132.2 | 735 | 1157 | 1501 | 756.404 | 91.19 | 110.47 | 194.85 | 225.53 | 513.62 | 513.85 | 16.01 | 6.62 | 100.0 | 38.35 | 2.0 | 33.7 | 7760.0 | 484.35 | 24.86 | 0.26 |
| 0.8.1 | AB | 4 | 100 | 4 | .mar | 100 | input | 10000 | 3 | 100.3 | 975 | 1213 | 1453 | 997.042 | 0.03 | 82.37 | 124.11 | 133.04 | 154.0 | 154.24 | 829.77 | 3.7 | 100.0 | 34.59 | 12.0 | 27.38 | 6304.0 | 141.65 | 8.01 | 0.29 |
| 0.8.1 | AB | 64 | 100 | 4 | .mar | 100 | input | 10000 | 9382 | 213.37 | 465 | 552 | 641 | 468.677 | 93.82 | 260.14 | 295.21 | 305.11 | 366.7 | 366.78 | 59.21 | 14.16 | 0.0 | 48.69 | 15.0 | 52.48 | 12084.0 | 312.83 | 52.37 | 0.34 |
| 0.8.1 | AB | 8 | 100 | 4 | .mar | 100 | input | 10000 | 9999 | 100.38 | 969 | 1263 | 1571 | 996.255 | 99.99 | 199.61 | 268.68 | 285.87 | 309.21 | 309.48 | 671.79 | 6.71 | 100.0 | 34.99 | 14.0 | 28.18 | 6490.0 | 293.13 | 10.72 | 0.28 |
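
To show why this split is useful, here is a quick reading of the batch-size-1 row (values copied from the table above; the snippet is only an illustration, not part of the PR):

```python
# The three backend_*_mean columns break the ~38.7 ms handler_time_mean of the
# batch-size-1 row into its phases. Values are copied from the table above.
row = {
    "backend_preprocess_mean": 29.85,   # ms
    "backend_inference_mean": 7.32,     # ms
    "backend_postprocess_mean": 0.27,   # ms
}
total = sum(row.values())
for phase, ms in row.items():
    print(f"{phase}: {ms:.2f} ms ({100 * ms / total:.1f}% of the profiled time)")
# Preprocess accounts for roughly 80% of the handler time here, which is
# exactly the kind of bottleneck this breakdown is meant to expose.
```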

Checklist:

  • Did you have fun?
  • Have you added tests that prove your fix is effective or that this feature works?
  • Has code been commented, particularly in hard-to-understand areas?
  • Have you made corresponding changes to the documentation?

@agunapal agunapal changed the title (WIP) Profile TorchServe Handler (preprocess vs inference vs post-process) Profile TorchServe Handler (preprocess vs inference vs post-process) Jul 18, 2023
@agunapal agunapal changed the title Profile TorchServe Handler (preprocess vs inference vs post-process) (WIP) Profile TorchServe Handler (preprocess vs inference vs post-process) Jul 18, 2023

codecov bot commented Jul 18, 2023

Codecov Report

Merging #2470 (106bcf9) into master (d47b14d) will decrease coverage by 0.23%.
The diff coverage is 53.84%.

❗ Current head 106bcf9 differs from pull request most recent head a531548. Consider uploading reports for the commit a531548 to get more accurate results.

@@            Coverage Diff             @@
##           master    #2470      +/-   ##
==========================================
- Coverage   72.86%   72.64%   -0.23%     
==========================================
  Files          78       79       +1     
  Lines        3697     3733      +36     
  Branches       58       58              
==========================================
+ Hits         2694     2712      +18     
- Misses        999     1017      +18     
  Partials        4        4              
| Files Changed | Coverage | Δ |
|---|---|---|
| ts/handler_utils/timer.py | 32.00% <32.00%> | (ø) |
| ...orch_handler/unit_tests/test_utils/mock_context.py | 86.95% <66.66%> | (-3.05%) ⬇️ |
| ts/torch_handler/base_handler.py | 69.09% <100.00%> | (+0.57%) ⬆️ |
| ts/torch_handler/image_classifier.py | 90.00% <100.00%> | (+1.11%) ⬆️ |
| ts/torch_handler/vision_handler.py | 90.90% <100.00%> | (+0.58%) ⬆️ |


@agunapal agunapal changed the title (WIP)Profile TorchServe Handler (preprocess vs inference vs post-process) Profile TorchServe Handler (preprocess vs inference vs post-process) Jul 18, 2023
@msaroufim msaroufim self-requested a review July 21, 2023 20:06
Review threads:
  • examples/benchmarking/resnet50/README.md (resolved)
  • examples/benchmarking/resnet50/README.md (resolved)
  • ts/handler_utils/timer.py (outdated, resolved)
@agunapal agunapal requested a review from lxning July 24, 2023 21:59
@agunapal agunapal merged commit 03ad862 into master Aug 24, 2023
12 checks passed
@agunapal agunapal deleted the feature/ts_benchmark_profile branch August 24, 2023 22:24