Update torch.export load with new API #2906
Conversation
# The below config is needed for max batch_size = 16
# https://github.com/pytorch/pytorch/pull/116152
Can we check the batch_size input to reflect the max batch_size limitation?
Thanks. Updated the comments. There is no limitation on batch_size.
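For background, any upper bound on the batch dimension comes from the dynamic-shape spec given at export time, not from the handler. A minimal sketch of how such a bound is declared — the model, shapes, and the max value of 16 here are illustrative, not taken from this PR:

```python
import torch
from torch.export import Dim

# Toy model purely for illustration.
model = torch.nn.Linear(10, 5).eval()
example_input = torch.randn(2, 10)

# Declare the batch dimension as dynamic with an explicit upper bound;
# the compiled artifact rejects batches larger than `max` at run time.
batch = Dim("batch", min=1, max=16)

# aot_compile returns the path to the compiled .so artifact.
so_path = torch._export.aot_compile(
    model,
    (example_input,),
    dynamic_shapes={"input": {0: batch}},
)
```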
Overall LGTM; left some minor comments.
     True
-    if packaging.version.parse(torch.__version__) > packaging.version.parse("2.1.1")
+    if packaging.version.parse(torch.__version__) > packaging.version.parse("2.2.2")
This would set PT_230_AVAILABLE to true if a 2.2.3 is released. Is that correct?
@ankithagunapal Is this check needed due to API changes between PT 2.2.0 and the latest nightlies, which are 2.3.x?
Yes, this needs the latest nightlies. The new API got merged only in the last week or so. Based on the current release cadence, it's 2.3.0 after 2.2.2 (two patch releases after every major release), but I will keep this in mind in case this changes.
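One way to make the gate robust to both nightlies and a hypothetical 2.2.3 patch release is to compare the base version against 2.3.0 instead of testing "greater than the last patch release". A sketch of that alternative (the flag name simply mirrors the one in the diff):

```python
import torch
from packaging import version

# Nightlies report versions like "2.3.0.dev20240214", which PEP 440
# orders *below* "2.3.0", so compare on the base version instead.
base = version.parse(torch.__version__).base_version
PT230_AVAILABLE = version.parse(base) >= version.parse("2.3.0")
```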
test/pytest/test_torch_export.py
Outdated
@@ -123,6 +123,6 @@ def test_torch_export_aot_compile_dynamic_batching(custom_working_directory):
     data["body"] = byte_array_type

     # Send a batch of 16 elements
nit: outdated comment
Thanks. Updated.
test/pytest/test_torch_export.py
Outdated
@@ -123,6 +123,6 @@ def test_torch_export_aot_compile_dynamic_batching(custom_working_directory):
     data["body"] = byte_array_type

     # Send a batch of 16 elements
-    result = handler.handle([data for i in range(15)], ctx)
+    result = handler.handle([data for i in range(32)], ctx)
How about parameterizing the test and testing both?
Thanks for catching this. I changed it.
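A sketch of that parameterization — the payload and batch sizes are placeholders; the real test builds `data` and calls `handler.handle` as in the diff above:

```python
import pytest

# Each batch size runs as its own test case.
@pytest.mark.parametrize("batch_size", [16, 32])
def test_dynamic_batching(batch_size):
    data = {"body": b"example"}  # stand-in for the real request payload
    batch = [data for _ in range(batch_size)]
    # The actual test would do: result = handler.handle(batch, ctx)
    assert len(batch) == batch_size
```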
@@ -53,10 +53,10 @@
 )
 PT2_AVAILABLE = False

-if packaging.version.parse(torch.__version__) > packaging.version.parse("2.1.1"):
-    PT220_AVAILABLE = True
+if packaging.version.parse(torch.__version__) > packaging.version.parse("2.2.2"):
ditto
Hi @chauhang, I tried that first: #2812
Description
This PR integrates the new torch._export.aot_load API and deletes the existing implementation from TorchServe.
It also updates the pytest to support batch sizes greater than 15.
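For illustration, loading and running an ahead-of-time compiled model with the new API looks roughly like this — the path, device, and shapes are placeholders:

```python
import torch

# The .so is produced ahead of time by torch._export.aot_compile.
so_path = "/path/to/model.so"  # placeholder path
runner = torch._export.aot_load(so_path, device="cuda")

# Batch sizes above 15 work as long as they stay within the
# dynamic-shape bounds declared at export time.
batch = torch.randn(32, 10, device="cuda")
output = runner(batch)
```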
Relevant PRs:
pytorch/pytorch#116152
pytorch/pytorch#117610
pytorch/pytorch#117948
Fixes #(issue)
Type of change
Feature/Issue validation/testing
Checklist: