TorchServe linux-aarch64 experimental support #3071
Conversation
…serve into feature/aarch64_support
Any strategy for how we'll test this in CI? Because if not, it might be best to mark this support as early preview or experimental, with the expectation that some things might break.
Good point. I updated the plan to include what needs to be implemented for CI and regression testing: #3072. Once we remove the dependency on TorchText, we'll know what else is remaining.
@msaroufim Updated the documentation to say it's experimental.
Thanks for adding the SpeechT5 example for Graviton. It would be good to also include the new WavLLM model added at https://github.com/microsoft/SpeechT5/tree/main/WavLLM. For the WaveGlow example updates, please update the README to indicate that the example also works on Graviton instances, not just on Nvidia GPUs.
        )
        return output

    def postprocess(self, inference_output):
How is the response being sent to the client side? It would be good to add support for streaming responses, and to make the /tmp location configurable in case the deployment server does not have a "/tmp" folder.
Yeah, the output path can be set in model-config.yaml.
Set the output_dir in the config file.
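Based on this thread, the handler configuration could look roughly like the sketch below. This is an assumption, not the merged file: the output_dir key name and the paths follow the reviewer's suggestion and may differ in the final model-config.yaml.

```yaml
# Hypothetical model-config.yaml sketch based on this review thread;
# the output_dir key and all paths are assumptions, not the merged config.
handler:
    model: "model"
    vocoder: "vocoder"
    processor: "processor"
    speaker_embeddings: "speaker_embeddings"
    output_dir: "/home/model-server/tmp"
```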
@@ -0,0 +1,48 @@
# Text to Speech synthesis with SpeechT5

This is an example showing text-to-speech synthesis using the SpeechT5 model. It has been verified to work on a (linux-aarch64) Graviton 3 instance.
Thanks for adding the Speech Synthesis example for Graviton. It would be good to see whether the new WavLLM model at https://github.com/microsoft/SpeechT5/tree/main/WavLLM, recently added to the Microsoft SpeechT5 repository, can also be included.
model: "./model"
vocoder: "./vocoder"
processor: "./processor"
speaker_embeddings: "./speaker_embeddings"
The dot prefix can be removed.
Updated
        self.processor = SpeechT5Processor.from_pretrained(processor)
        self.model = SpeechT5ForTextToSpeech.from_pretrained(model)
        self.vocoder = SpeechT5HifiGan.from_pretrained(vocoder)
Without the model_dir prefix, do these paths resolve correctly?
Updated. They work.
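The path handling discussed above can be sketched as a small helper that joins each configured artifact path onto the model directory before the from_pretrained calls. This is an illustrative sketch, not code from the PR: resolve_artifacts is a hypothetical name, and the key names mirror the model-config.yaml snippet earlier in this review.

```python
import os


def resolve_artifacts(model_dir, config):
    """Join each configured artifact path onto the model directory.

    Hypothetical helper illustrating the reviewer's point about the
    model_dir prefix; key names mirror the config snippet above.
    """
    names = ("model", "vocoder", "processor", "speaker_embeddings")
    return {name: os.path.join(model_dir, config[name]) for name in names}
```

With paths resolved this way, the handler works regardless of the worker's current working directory, which is what the model_dir prefix question is getting at.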
Description
TorchServe on linux aarch64 - Experimental
Plan
TorchServe has been verified to work on linux aarch64 for some of the examples. Regression tests have not been run yet. This was tested on an Amazon Graviton 3 instance (m7g.4xlarge).
Installation
Currently, installation from PyPI or from source works. Conda binaries will be available once this PR is merged.
Optimizations
You can also enable these optimizations for Graviton 3 to get improved performance. More details can be found in this blog.
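For reference, the kinds of settings described for Graviton can be applied as environment variables before starting TorchServe. This is a sketch based on the AWS Graviton technical guide's PyTorch recommendations, not TorchServe defaults; the right values depend on your workload.

```shell
# Environment tuning commonly suggested for PyTorch on Graviton 3
# (per the AWS Graviton technical guide; a starting point, not a default).
export DNNL_DEFAULT_FPMATH_MODE=BF16  # allow bfloat16 fast-math kernels where supported
export LRU_CACHE_CAPACITY=1024        # cap PyTorch's per-thread allocation cache
echo "$DNNL_DEFAULT_FPMATH_MODE $LRU_CACHE_CAPACITY"
```

Set these in the shell (or service unit) that launches torchserve so the worker processes inherit them.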
Example
This example of text-to-speech synthesis was verified to work on Graviton 3.
Fixes #(issue)
Type of change
Please delete options that are not relevant.
Feature/Issue validation/testing
Please describe the Unit or Integration tests that you ran to verify your changes and relevant result summary. Provide instructions so it can be reproduced.
Please also list any relevant details for your test configuration.
Test A
Logs for Test A
Test B
Logs for Test B
Checklist: