[Bug] Assertion srcIndex < srcSelectDimSize failed #2971
Comments
Hey, thanks for the bug report. Do you mind sharing the reference as well so we can reproduce?
I'm also getting the same error.
I think the GPU is running out of memory. I'm using this to make an audiobook out of RoyalRoad content. I'm breaking the content up into chapters and then running text-to-speech on an already initialized GPU. I am going to try re-initializing TTS every chapter. Failing that, I'll try Tortoise or another model.

```python
import re
from TTS.api import TTS

# Initialize TTS
#tts = TTS("tts_models/multilingual/multi-dataset/xtts_v1", gpu=True)

# Load and split the input file
file = open("Primal Hunter, The - Zogarth.txt", "r")
text_read = file.read()
text_set = re.split(delimiter, text_read)

# Remove the first item
text_set = text_set[1:]

# Pair up items in the list
chapter_num = [text_set[i] for i in range(0, len(text_set) - 1, 2)]

# Function to convert wav to mp3
def convert_wav_to_mp3(wav_file, mp3_file):
    ...

# Function to get the last processed chapter from chapters.txt
def get_last_processed_chapter():
    ...

# Output directory setup
output_dir = "output"
last_chapter = get_last_processed_chapter()

# Determine where to start processing
if last_chapter:
    ...

# Open both files.txt and chapters.txt for writing (append mode to avoid overwriting)
with open('chapters.txt', 'a') as chapters, open('files.txt', 'a') as files:
    ...
```
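For reference, a minimal sketch of the per-chapter re-initialization idea mentioned above, assuming the standard `TTS.api` interface; the chapter loop, file names, and reference clip are illustrative and not from the original comment:

```python
import torch
from TTS.api import TTS

def synthesize_chapter(chapter_text, out_path):
    # Re-create the model for each chapter so GPU memory held by the
    # previous chapter's synthesis is released before the next one starts.
    tts = TTS("tts_models/multilingual/multi-dataset/xtts_v1", gpu=True)
    tts.tts_to_file(
        text=chapter_text,
        speaker_wav="reference.wav",  # illustrative speaker sample
        language="en",
        file_path=out_path,
    )
    del tts
    torch.cuda.empty_cache()  # return cached CUDA memory to the driver

for i, chapter in enumerate(["Chapter one text...", "Chapter two text..."]):
    synthesize_chapter(chapter, f"chapter_{i:03d}.wav")
```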
Hi @feizi @Omegastick @isaac1987a, it happens because the GPT encoder is able to produce more tokens than gpt_max_audio_tokens. max_length should be set to self.max_mel_tokens (see TTS/TTS/tts/layers/xtts/gpt.py, line 551 at commit d21f15c).
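To illustrate the failure mode being described (this is not the library code): when generation emits more token ids than the table sized for the audio-token budget can hold, the subsequent GPU lookup indexes out of range, which is exactly the srcIndex < srcSelectDimSize assertion. A small self-contained sketch with a made-up table size:

```python
import torch

GPT_MAX_AUDIO_TOKENS = 605  # hypothetical budget, stands in for gpt_max_audio_tokens
audio_embedding = torch.nn.Embedding(GPT_MAX_AUDIO_TOKENS, 1024)

ok_ids = torch.tensor([0, 42, 604])   # all within the table
bad_ids = torch.tensor([0, 42, 605])  # one id past the table size

audio_embedding(ok_ids)  # fine
try:
    # IndexError on CPU; on CUDA this surfaces as the
    # "srcIndex < srcSelectDimSize" device-side assert instead.
    audio_embedding(bad_ids)
except IndexError as exc:
    print("out-of-range token id:", exc)
```

Capping max_length at self.max_mel_tokens keeps the generated sequence within the range the rest of the model indexes with.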
I added this fix in a private branch and it should be fixed in the next release. A workaround for you in the meantime is to split long sentences into smaller ones (sketched below).
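A minimal sketch of that workaround, assuming the standard `TTS.api` interface; the naive regex splitter, reference clip, and file names are illustrative:

```python
import re
from TTS.api import TTS

tts = TTS("tts_models/multilingual/multi-dataset/xtts_v1", gpu=True)

long_text = open("chapter.txt", encoding="utf-8").read()

# Split on sentence-ending punctuation so each synthesis call stays well
# under the model's audio-token budget.
sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", long_text) if s.strip()]

for i, sentence in enumerate(sentences):
    tts.tts_to_file(
        text=sentence,
        speaker_wav="reference.wav",
        language="en",
        file_path=f"sentence_{i:04d}.wav",
    )
```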
I will close this issue because it was fixed in PR #3086, which will be merged soon. Feel free to reopen it if needed.
Hello everyone, I'm having the same problem, but only with v2 of the XTTS model.

Explanation of the error: in my scenario I managed to work out how to trigger it.

Logs:

Ways I call processing:
Same problem with the XTTS-v2 model using the latest code.
@CRochaVox Hi, did you fix it?
@Poeroz @davaavirtualplus @CRochaVox Hi guys, have you managed to solve the problem?
Describe the bug
Sometimes, XTTS inference will fail with a long list of

../aten/src/ATen/native/cuda/Indexing.cu:1093: indexSelectSmallIndex: block: [0,0,0], thread: [95,0,0] Assertion srcIndex < srcSelectDimSize failed.

exceptions. It seems random; around 1 in 20 calls fail. Longer inputs seem more likely to fail, but I might be imagining it.

Once it fails once, the Python runtime has to be restarted. Any further attempts to use CUDA give

RuntimeError: CUDA error: device-side assert triggered

To Reproduce
Run the example code from the docs a few times:
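The snippet itself did not survive in this extract; for context, a sketch along the lines of the documented XTTS usage, run in a loop because the failure is intermittent. The model name, reference clip, and sample sentence are assumptions, not copied from the original report:

```python
from TTS.api import TTS

tts = TTS("tts_models/multilingual/multi-dataset/xtts_v1", gpu=True)

# Repeat the call, since only roughly 1 in 20 invocations hits the assert.
for i in range(20):
    tts.tts_to_file(
        text="It took me quite a long time to develop a voice, and now that I have it I'm not going to be silent.",
        speaker_wav="reference.wav",
        language="en",
        file_path=f"output_{i}.wav",
    )
```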
Expected behavior
It should run every time without issue.
Logs
And here's the relevant stacktrace:
Environment
Additional context
No response