
Continuous batching for single GPU LLM inference #2628

Merged: 49 commits into master, Oct 4, 2023

Conversation

@mreso (Collaborator) commented Sep 29, 2023

Description

This PR enables continuous batching for LLM inference by adding a new batch aggregator that keeps jobs in the batch until they are finished.
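The core idea above — finished jobs leave the batch while unfinished jobs stay, and freed slots are backfilled from the queue each step — can be sketched as follows. This is a minimal illustration with hypothetical names (`Job`, `ContinuousBatchAggregator`), not the PR's actual TorchServe implementation:

```python
from collections import deque
from dataclasses import dataclass, field


@dataclass
class Job:
    """A single inference request being generated token by token."""
    job_id: str
    prompt: str
    tokens: list = field(default_factory=list)
    done: bool = False


class ContinuousBatchAggregator:
    """Toy continuous-batching aggregator (hypothetical, for illustration):
    finished jobs are evicted between generation steps and their slots are
    immediately refilled from the waiting queue, so the batch never has to
    drain fully before new requests are admitted."""

    def __init__(self, max_batch_size: int):
        self.max_batch_size = max_batch_size
        self.queue: deque = deque()   # jobs waiting for a batch slot
        self.batch: list = []         # jobs currently being generated

    def submit(self, job: Job) -> None:
        self.queue.append(job)

    def next_batch(self) -> list:
        # Keep only unfinished jobs in the batch.
        self.batch = [j for j in self.batch if not j.done]
        # Backfill the freed slots from the queue.
        while self.queue and len(self.batch) < self.max_batch_size:
            self.batch.append(self.queue.popleft())
        return self.batch
```

With a batch size of 2 and three submitted jobs, the third job enters the batch as soon as one of the first two finishes, without waiting for the whole batch to complete.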

Fixes #(issue)

Type of change


  • New feature (non-breaking change which adds functionality)
  • This change requires a documentation update

Feature/Issue validation/testing

Please describe the Unit or Integration tests that you ran to verify your changes and relevant result summary. Provide instructions so it can be reproduced.
Please also list any relevant details for your test configuration.

  • pytest test/pytest/test_continuous_batching.py
============================================================================================================ test session starts =============================================================================================================
platform linux -- Python 3.10.12, pytest-7.3.1, pluggy-1.3.0
rootdir: /home/ubuntu/serve
plugins: mock-3.10.0, cov-4.1.0
collected 3 items

test/pytest/test_continuous_batching.py ..2023-10-03T13:48:12,231 [INFO ] W-9000-streaming_handler_1.0 org.pytorch.serve.wlm.ContinuousBatching - Connection to client got closed; Removing job: 9fbdacb3-a91f-40e8-8fd6-2e7944162aae
2023-10-03T13:48:12,232 [INFO ] W-9000-streaming_handler_1.0-stdout MODEL_METRICS - PredictionTime.ms:10.69|#ModelName:streaming_handler,Level:Model|#hostname:ip-172-31-15-101,requestID:9fbdacb3-a91f-40e8-8fd6-2e7944162aae,timestamp:1696340892
2023-10-03T13:48:12,232 [DEBUG] W-9000-streaming_handler_1.0 org.pytorch.serve.wlm.WorkerThread - sent a reply, jobdone: true
2023-10-03T13:48:12,232 [INFO ] W-9000-streaming_handler_1.0 org.pytorch.serve.wlm.WorkerThread - Backend response time: 11
2023-10-03T13:48:12,232 [INFO ] W-9000-streaming_handler_1.0 TS_METRICS - WorkerThreadTime.Milliseconds:1.0|#Level:Host|#hostname:ip-172-31-15-101,timestamp:1696340892
.                                                                                                                                                                                            [100%]

============================================================================================================== warnings summary ==============================================================================================================
ts/torch_handler/base_handler.py:13
 /home/ubuntu/serve/ts/torch_handler/base_handler.py:13: DeprecationWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html
   from pkg_resources import packaging

-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
======================================================================================================= 3 passed, 1 warning in 14.54s ========================================================================================================

Checklist:

  • Did you have fun?
  • Have you added tests that prove your fix is effective or that this feature works?
  • Has code been commented, particularly in hard-to-understand areas?
  • Have you made corresponding changes to the documentation?

logger = logging.getLogger(__name__)


class StreamingHandler(BaseHandler):
Collaborator:
Should we move this handler to ts_handler/distributed, or move the core function to handler_utils/distributed?

Collaborator (author):

Let's postpone this to a later PR. I want to get more clarity on the details of the TP implementation first, and see what the overlap between them is, to make sure we only move the generic parts into core.
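For context, the `StreamingHandler` under discussion streams generated tokens back to the client as they are produced, which is what lets the aggregator evict a job the moment its generation ends. A schematic sketch — the class body and `stream_tokens` method below are illustrative assumptions, not the PR's actual code:

```python
class StreamingHandler:
    """Schematic streaming handler (hypothetical): yields one token per
    generation step so the serving layer can forward each token to the
    client immediately and mark the job done once EOS is emitted."""

    def __init__(self, eos_token: str = "<eos>"):
        self.eos_token = eos_token

    def stream_tokens(self, prompt: str):
        # Stand-in for a real per-step model.generate() call:
        # here we simply echo the prompt word by word.
        for word in prompt.split():
            yield word
        # Signal the end of generation; the caller removes the job
        # from the continuous batch when it sees this token.
        yield self.eos_token
```

In the real handler, each yielded token would be sent as an intermediate response, and the final EOS-terminated reply closes out the job.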

@codecov bot commented Oct 3, 2023

Codecov Report

Merging #2628 (fc300b6) into master (a6fd770) will increase coverage by 1.05%.
The diff coverage is 97.26%.

❗ Current head fc300b6 differs from pull request most recent head 7855a9c. Consider uploading reports for the commit 7855a9c to get more accurate results

@@            Coverage Diff             @@
##           master    #2628      +/-   ##
==========================================
+ Coverage   71.34%   72.39%   +1.05%     
==========================================
  Files          85       85              
  Lines        3905     3956      +51     
  Branches       58       58              
==========================================
+ Hits         2786     2864      +78     
+ Misses       1115     1088      -27     
  Partials        4        4              
Files                                            Coverage            Δ
ts/context.py                                    77.92% <100.00%>    (+10.38%) ⬆️
ts/tests/unit_tests/test_otf_codec_protocol.py   100.00% <100.00%>   (ø)
ts/protocol/otf_message_handler.py               82.41% <75.00%>     (+9.82%) ⬆️


@mreso mreso changed the title from "[WIP] Feature/continous batching for streaming" to "Continous batching for single GPU LLM inference" Oct 3, 2023
@mreso mreso marked this pull request as ready for review October 3, 2023 15:31
@mreso mreso requested a review from lxning October 3, 2023 16:39
@HamidShojanazeri (Collaborator) left a comment:
LGTM

@mreso mreso enabled auto-merge October 3, 2023 20:54
@mreso mreso added this pull request to the merge queue Oct 4, 2023
@github-merge-queue github-merge-queue bot removed this pull request from the merge queue due to failed status checks Oct 4, 2023
@mreso mreso added this pull request to the merge queue Oct 4, 2023
Merged via the queue into master with commit 8d12993 Oct 4, 2023
12 of 13 checks passed