example for using TorchInductor caching with torch.compile #2925
Conversation
self.manifest = ctx.manifest
properties = ctx.system_properties
if (
I'm not sure I understand the point of this PR? Why add a layer of indirection to set a torch config via the torchserve yaml config when I can just directly set an environment variable?
EDIT: Ah, actually, since you're using os.environ this will introduce some config isolation, which is useful.
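The config isolation mentioned above can be sketched roughly like this: the handler reads a compile-cache setting from its model's YAML config and exports it through `os.environ`, so the setting applies only to that worker process. The key names (`pt2`, `fx_graph_cache`, `cache_dir`) are illustrative assumptions here, not necessarily TorchServe's exact schema.

```python
import os

def apply_cache_config(model_yaml_config: dict) -> None:
    """Hypothetical sketch: export TorchInductor cache settings from a
    model's YAML config into this worker's environment."""
    pt2_cfg = model_yaml_config.get("pt2", {})
    if pt2_cfg.get("fx_graph_cache"):
        # Enables TorchInductor's FX graph cache for this worker process
        # only; other workers are unaffected unless they also set it.
        os.environ["TORCHINDUCTOR_FX_GRAPH_CACHE"] = "1"
    if "cache_dir" in pt2_cfg:
        # Point the inductor cache at a directory shared across restarts.
        os.environ["TORCHINDUCTOR_CACHE_DIR"] = pt2_cfg["cache_dir"]

apply_cache_config({"pt2": {"fx_graph_cache": True, "cache_dir": "/tmp/ts_cache"}})
print(os.environ["TORCHINDUCTOR_FX_GRAPH_CACHE"])  # → 1
```

Because each TorchServe worker is its own process, the environment change does not leak to workers of other models, which is the isolation being discussed.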
Yes, that is the idea.
Have you also tested this for TS with multiple models? Is there any downside to setting an environment variable from one model's config yaml for the other workers on TS? i.e., if two workers tried to set different values, would that work?
I haven't tried multi-model, but I updated the example to be a multi-worker example.
Do you know if pytorch core will log a line to say the cache was hit? Might be useful for people to debug.
Unfortunately, no.
So I'd recommend at least adding some debug statement after building pytorch from source so you're sure things work. Might as well upstream that too.
@ankithagunapal Great to see the speed with the cache. Let's also verify for multiple workers.
It would also be good to reference these articles for additional tips on reducing/debugging compile-time issues:
https://pytorch.org/blog/training-production-ai-models/#34-controlling-just-in-time-compilation-time
https://pytorch.org/docs/stable/torch.compiler_profiling_torch_compile.html
@msaroufim There are counters. I added this logic in the handler.
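One possible way to surface cache hits from the handler is via `torch._dynamo.utils.counters`, a sketch of which is below. The counter key names (`fxgraph_cache_hit`/`fxgraph_cache_miss`) are an assumption and may vary across PyTorch versions, so treat this as illustrative rather than the exact logic added in this PR.

```python
import os

# Set the flag before torch.compile runs, since inductor's config
# reads it from the environment.
os.environ["TORCHINDUCTOR_FX_GRAPH_CACHE"] = "1"

from torch._dynamo.utils import counters

# In a real handler you would first run a warm-up inference through the
# torch.compile'd model, then read these counters to confirm whether the
# FX graph cache was hit or missed on this worker.
hits = counters["inductor"]["fxgraph_cache_hit"]
misses = counters["inductor"]["fxgraph_cache_miss"]
print(f"fx graph cache: hits={hits} misses={misses}")
```

Logging these two numbers after warm-up gives operators a quick way to verify that the cache is actually being used, which addresses the debuggability concern raised above.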
Modified the example to work with multiple workers, and added a section with the additional links you mentioned. @chauhang Merging this PR for now.
Description
This PR shows 2 examples:
- Using the TORCHINDUCTOR_FX_GRAPH_CACHE environment variable
- Setting the torchinductor cache with a config

Fixes #(issue)
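For the config-based variant, a model config fragment might look roughly like the following. The `pt2` section and field names are assumptions for illustration, not necessarily the exact schema used by this PR's example.

```yaml
# Hypothetical model-config.yaml fragment
pt2:
  mode: max-autotune        # compile mode passed to torch.compile
  fx_graph_cache: true      # maps to TORCHINDUCTOR_FX_GRAPH_CACHE=1
```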
Type of change
Feature/Issue validation/testing
Please describe the Unit or Integration tests that you ran to verify your changes and relevant result summary. Provide instructions so it can be reproduced.
Please also list any relevant details for your test configuration.
Seeing a 40% reduction in compile time with resnet18 and max-autotune.
Checklist: