MacOS Performance - any tricks to making it usable? #156

Open

ajkessel opened this issue Sep 24, 2024 · 2 comments

@ajkessel
I'm testing this library on a relatively recent Mac (info below) and it is incredibly slow. Generating a ten-word speech sample takes about an hour with default settings. Are there any tricks to getting reasonable performance? I know we can't leverage CUDA, but PyTorch does report MPS as available (a quick check is included after the specs below). Has anyone had better luck?

Model Name: iMac
Model Identifier: iMac20,2
Processor Name: 10-Core Intel Core i9
Processor Speed: 3.6 GHz
Number of Processors: 1
Total Number of Cores: 10
L2 Cache (per Core): 256 KB
L3 Cache: 20 MB
Hyper-Threading Technology: Enabled
Memory: 32 GB
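
For reference, a minimal check that MPS is actually usable from PyTorch (plain PyTorch API, nothing library-specific):

import torch

# Report whether the MPS backend was compiled in and is usable on this machine
print('MPS built:', torch.backends.mps.is_built())
print('MPS available:', torch.backends.mps.is_available())

# Pick MPS if it is available, otherwise fall back to CPU
device = torch.device('mps' if torch.backends.mps.is_available() else 'cpu')
print('Would use device:', device)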

@MiniXC commented Oct 11, 2024

I have the same issue on CPU (although not on a Mac). Turning off the "optimize" flag for the pipeline makes things faster, but it leads to the model generating random high-pitched noise.

@MiniXC commented Oct 11, 2024

The issue seems to be half precision (at least on my end). The following makes generation on CPU much faster:

import torch  # needed for torch.float32 below; Pipeline is this library's pipeline class

# Build the pipeline without the default (half-precision) optimization pass
pipe = Pipeline(optimize=False, torch_compile=False)
# Re-run optimization on both stages, but force float32 instead of fp16
pipe.t2s.optimize(max_batch_size=1, dtype=torch.float32, torch_compile=False)
pipe.s2a.optimize(max_batch_size=1, dtype=torch.float32, torch_compile=False)
pipe.s2a.dtype = torch.float32
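
Forcing float32 matters on CPU because half-precision tensors are typically converted or emulated there and end up slower than float32. With the pipeline configured as above, generation runs as usual; for example (the generate_to_file call and its argument order are an assumption here, substitute whatever generation entry point your install exposes):

# Hypothetical usage, adjust to your version of the library
pipe.generate_to_file('output.wav', 'This is a quick CPU test sentence.')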
