[RFC][Feature request] Loading model onto specific GPU #2282
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. You might also look at our discussion channels.
Unstale
Hi @jaketae, you can easily select the device using `CUDA_VISIBLE_DEVICES=1`. With this, only GPU 1 will be visible to PyTorch, and it will be forced to use that GPU. Example:
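A minimal sketch of this approach; inside the process, the remapped card shows up as `cuda:0`:

```python
import os

# CUDA_VISIBLE_DEVICES must be set before torch initializes CUDA,
# so set it before importing torch (or prefix it on the command line).
os.environ["CUDA_VISIBLE_DEVICES"] = "1"

import torch

print(torch.cuda.device_count())  # -> 1: only physical GPU 1 is visible
x = torch.zeros(1).cuda()         # lands on physical GPU 1, seen as cuda:0
```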
You can also use `torch.cuda.set_device(device_num)` to set the device; however, I recommend using `CUDA_VISIBLE_DEVICES`.
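For completeness, a short sketch of the `torch.cuda.set_device` alternative:

```python
import torch

torch.cuda.set_device(1)              # make GPU 1 the default CUDA device
model = torch.nn.Linear(4, 4).cuda()  # .cuda() without an index now targets GPU 1
print(torch.cuda.current_device())    # -> 1
```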
@Edresson, thanks for the reply. I'm aware of `CUDA_VISIBLE_DEVICES`. For context, I am building StoryTeller, a package that leverages TTS, diffusion, and GPT to create a narrated video of a story. I was wondering if there is a way to specify to which device the model should be loaded. If this is not supported in CoquiTTS, I'm happy to open a PR. My initial thought is to modify these lines:

TTS/TTS/utils/synthesizer.py, lines 118 to 119 in 14d45b5

but if maintainers have other thoughts or design philosophies, I'm curious to hear them as well.
I just reopened this in case anyone is willing to contribute.
@erogol @Edresson, thanks for the reply. I'm interested in contributing. Some questions: should we rename the `gpu` flag to a more general `device` argument (a breaking change), or should we add `device` as a new optional keyword argument and keep `gpu` for backwards compatibility?

Once there is some agreement around the design, I'll submit a PR soon. Thanks!
Thanks @jaketae for your interest. I like the second option. I'd not worry about BC: since we have not released a major version, by definition we don't promise BC. But of course, for convenience, it'd be nice to do the transition with a warning.
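A minimal sketch of what such a transition could look like (the helper name and its placement are hypothetical):

```python
import warnings
from typing import Optional

def _resolve_device(gpu: bool = False, device: Optional[str] = None) -> str:
    """Map the legacy `gpu` flag and the new `device` argument to a device string."""
    if device is not None:
        return device
    if gpu:
        warnings.warn(
            "`gpu` is deprecated, please use `device='cuda'` instead.",
            DeprecationWarning,
        )
        return "cuda"
    return "cpu"
```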
Apologies for the delay; I was carried away by other work and did not get the chance to work on this. I can still give it a try in the next few weeks, but if anyone else wants to give it a shot, feel free!
Feel free to send a PR whenever. |
WIP draft PR in #2855. |
Context
The general API for TTS inference looks like this:
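Roughly, with an illustrative model name:

```python
from TTS.api import TTS

# Illustrative model name; `gpu` is the flag discussed below.
tts = TTS(model_name="tts_models/en/ljspeech/tacotron2-DDC", gpu=True)
tts.tts_to_file(text="Hello world!", file_path="output.wav")
```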
`gpu` is a `bool`, so users can only specify `True` or `False`.

Problem
If the user is loading the model on a machine with multiple GPU cards, they might want to do something like
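(a hypothetical call; a `device` argument like this does not exist yet)

```python
tts = TTS(model_name="tts_models/en/ljspeech/tacotron2-DDC", device="cuda:1")
```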
Although running the script with `CUDA_VISIBLE_DEVICES` is an option, this could be limiting if the user is dealing with loading multiple models and wants the specific TTS model to go to `cuda:x` and another NLP model to go to `cuda:y`, for example.

Proposed Solution
Tracing the `gpu` flag, it appears to be used in places like `synthesizer.py`:

TTS/TTS/utils/synthesizer.py, lines 118 to 119 in 14d45b5
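The gist of the referenced snippet (paraphrased here, not the exact source):

```python
if use_cuda:
    self.tts_model.cuda()  # the flag only allows an all-or-nothing .cuda() call
```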
Instead of calling `.cuda()`, perhaps we can make the flag more flexible, say `device`, and call `.to(device)` instead. Obviously, changing the name of the argument would be breaking, so to preserve BC, we should introduce this as another optional keyword argument.

I'm not too familiar with the details of the TTS API, so any feedback would be appreciated!
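A minimal sketch of the idea, assuming a `device` keyword is threaded down to the model-loading code (the helper and its name are hypothetical):

```python
import torch
from typing import Optional

def _to_device(model: torch.nn.Module, use_cuda: bool,
               device: Optional[str] = None) -> torch.nn.Module:
    # Hypothetical helper: `device` is the new optional argument; when it is
    # omitted, the legacy `use_cuda` behavior is preserved for BC.
    if device is not None:
        return model.to(device)  # e.g. "cuda:1", "cpu"
    return model.cuda() if use_cuda else model
```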