I'm testing this library on a relatively recent Mac (info below) and it is incredibly slow. Generating a ten-word speech sample takes about an hour with default settings. Are there any tricks to getting reasonable performance? I know we can't leverage CUDA, but PyTorch does show mps as available. Has anyone had better luck?
Model Name: iMac
Model Identifier: iMac20,2
Processor Name: 10-Core Intel Core i9
Processor Speed: 3.6 GHz
Number of Processors: 1
Total Number of Cores: 10
L2 Cache (per Core): 256 KB
L3 Cache: 20 MB
Hyper-Threading Technology: Enabled
Memory: 32 GB
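A minimal sketch of checking for the MPS backend and falling back to CPU, using the standard PyTorch device API (the `model.to(device)` step at the end is a hypothetical placeholder for whatever model object this library exposes):

```python
import torch

def pick_device() -> torch.device:
    # MPS requires macOS 12.3+ and a Metal-capable GPU; on machines where
    # it isn't supported, is_available() returns False and we fall back to CPU.
    if torch.backends.mps.is_available():
        return torch.device("mps")
    return torch.device("cpu")

device = pick_device()
print(f"Running on: {device}")
# Hypothetical usage with this library's model object:
#   model = model.to(device)
#   inputs = inputs.to(device)
```

Note that even when `mps` is reported as available, not every operator is implemented on the MPS backend, so some ops may silently fall back to CPU (or error) depending on the PyTorch version.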
I have the same issue on CPU (although not on a Mac). Turning off the "optimize" flag for the pipeline makes things faster, but leads to the model generating random high-pitched noise.