Describe the bug

The `formatting_your_dataset` doc recommends writing your dataset in this format, with a note that it will be compatible with the LJSpeech formatter:
```
# metadata.txt
audio1|This is my sentence.
audio2|This is maybe my sentence.
audio3|This is certainly my sentence.
audio4|Let this be your sentence.
...
```
If you create a dataset in that format and try to follow the instructions in tutorial_for_nervous_beginners, you'll get an error:
```
root@937a34667dbe:~# CUDA_VISIBLE_DEVICES="0" python3 TTS/bin/train_tts.py --config_path config.json
Traceback (most recent call last):
  File "/root/TTS/bin/train_tts.py", line 71, in <module>
    main()
  File "/root/TTS/bin/train_tts.py", line 47, in main
    train_samples, eval_samples = load_tts_samples(
  File "/root/TTS/tts/datasets/__init__.py", line 120, in load_tts_samples
    meta_data_train = formatter(root_path, meta_file_train, ignored_speakers=ignored_speakers)
  File "/root/TTS/tts/datasets/formatters.py", line 201, in ljspeech
    text = cols[2]
IndexError: list index out of range
```
Looking at the code for that formatter, it expects three pipe-separated columns per line (it indexes `cols[2]`), while the recommended format only provides two.
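A minimal sketch of the mismatch (hypothetical code, not the actual formatter; the real one lives in `TTS/tts/datasets/formatters.py`). The two-column line from the docs produces only two fields after splitting on `|`, so any `cols[2]` lookup raises `IndexError`:

```python
# Line in the format the formatting_your_dataset doc recommends (two columns)
two_col_line = "audio1|This is my sentence."

# Line in the shape the upstream LJSpeech metadata.csv actually uses
# (three columns; the field names here are my assumption)
three_col_line = "LJ001-0001|Raw text|Normalized text"

cols = three_col_line.split("|")
print(cols[2])  # three fields, so cols[2] exists

cols = two_col_line.split("|")
try:
    print(cols[2])
except IndexError as e:
    # This is the same failure the traceback above shows
    print("IndexError:", e)
```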
The formatter in question: TTS/TTS/tts/datasets/formatters.py, lines 198 to 202 at commit 9963519.
To Reproduce
transcript.txt
config.json

```
CUDA_VISIBLE_DEVICES="0" python3 TTS/bin/train_tts.py --config_path config.json
```
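One possible workaround, sketched under my own assumption about the expected layout (three pipe-separated columns, with the third one being the text the formatter reads): pad each two-column line by repeating the transcription, so the `cols[2]` lookup succeeds. `pad_metadata` is a hypothetical helper, not part of the TTS codebase:

```python
def pad_metadata(lines):
    """Pad two-column metadata lines (id|text) to three columns (id|text|text)."""
    out = []
    for line in lines:
        cols = line.rstrip("\n").split("|")
        if len(cols) == 2:
            # Reuse the transcription as the third ("normalized") column
            cols.append(cols[1])
        out.append("|".join(cols))
    return out

print(pad_metadata(["audio1|This is my sentence."]))
# ['audio1|This is my sentence.|This is my sentence.']
```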
Expected behavior
No response
Logs
No response
Environment
This is a docker image based on the coqui-ai/tts docker image:

Additional context

No response