# VITS Korean Version

Based on the original VITS repository: https://github.com/jaywalnut310/vits

## Installation

1. Python >= 3.6
2. Clone this repository
3. Install the Python requirements listed in `requirements.txt`
    1. You may need to install espeak first: `apt-get install espeak`
4. Download datasets
    1. Download and extract the LJ Speech dataset, then rename or create a link to the dataset folder: `ln -s /path/to/LJSpeech-1.1/wavs DUMMY1`
    2. For the multi-speaker setting, download and extract the VCTK dataset and downsample the wav files to 22050 Hz (a resampling sketch follows the commands below). Then rename or create a link to the dataset folder: `ln -s /path/to/VCTK-Corpus/downsampled_wavs DUMMY2`
5. Build Monotonic Alignment Search and run preprocessing if you use your own datasets.
```sh
# Cython-version Monotonic Alignment Search
cd monotonic_align
python setup.py build_ext --inplace

# Preprocessing (g2p) for your own datasets. Preprocessed phonemes for LJ Speech and VCTK have already been provided.
# python preprocess.py --type ljs --filelists filelists/ljs_audio_text_train_filelist.txt filelists/ljs_audio_text_val_filelist.txt filelists/ljs_audio_text_test_filelist.txt
# python preprocess.py --type vctk --filelists filelists/vctk_audio_sid_text_train_filelist.txt filelists/vctk_audio_sid_text_val_filelist.txt filelists/vctk_audio_sid_text_test_filelist.txt
```
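
Step 4.2 asks for 22050 Hz VCTK audio but does not show how to produce it. Below is a minimal resampling sketch, assuming `librosa` and `soundfile` are installed; the source and destination paths are placeholders for your local VCTK copy.

```python
# Sketch: downsample VCTK wavs to 22050 Hz before linking them as DUMMY2.
# SRC_DIR and DST_DIR are placeholders; adjust to your local paths.
import glob
import os

import librosa
import soundfile as sf

SRC_DIR = "/path/to/VCTK-Corpus/wav48"            # original 48 kHz wavs
DST_DIR = "/path/to/VCTK-Corpus/downsampled_wavs"  # output linked as DUMMY2
TARGET_SR = 22050

for src in glob.glob(os.path.join(SRC_DIR, "**", "*.wav"), recursive=True):
    # librosa resamples on load when sr is given
    wav, _ = librosa.load(src, sr=TARGET_SR)
    dst = os.path.join(DST_DIR, os.path.relpath(src, SRC_DIR))
    os.makedirs(os.path.dirname(dst), exist_ok=True)
    sf.write(dst, wav, TARGET_SR)
```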

## Training Examples

```sh
# LJ Speech format
python train.py -c configs/ljs_base.json -m ljs_base

# Start training from a pretrained model - LJ Speech format
python train.py -c configs/ljs_base.json -m ljs_base -w pre_trained

# VCTK
python train_ms.py -c configs/vctk_base.json -m vctk_base
```

## Inference Example

See `inference_cpu.ipynb`.
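
For reference, the snippet below is a minimal single-speaker inference sketch modeled on the upstream VITS inference notebook; the checkpoint path and input sentence are placeholders, and the text cleaners configured in `configs/ljs_base.json` determine which languages the `text` module can handle.

```python
# Minimal single-speaker inference sketch (modeled on the upstream VITS notebook).
# The checkpoint path and input text are placeholders.
import torch

import commons
import utils
from models import SynthesizerTrn
from text import text_to_sequence
from text.symbols import symbols


def get_text(text, hps):
    # Convert raw text to a tensor of symbol ids using the cleaners from the config.
    text_norm = text_to_sequence(text, hps.data.text_cleaners)
    if hps.data.add_blank:
        text_norm = commons.intersperse(text_norm, 0)
    return torch.LongTensor(text_norm)


hps = utils.get_hparams_from_file("./configs/ljs_base.json")
net_g = SynthesizerTrn(
    len(symbols),
    hps.data.filter_length // 2 + 1,
    hps.train.segment_size // hps.data.hop_length,
    **hps.model)
net_g.eval()
utils.load_checkpoint("/path/to/G_xxx.pth", net_g, None)  # placeholder checkpoint path

stn_tst = get_text("Example sentence to synthesize.", hps)
with torch.no_grad():
    x_tst = stn_tst.unsqueeze(0)
    x_tst_lengths = torch.LongTensor([stn_tst.size(0)])
    audio = net_g.infer(x_tst, x_tst_lengths, noise_scale=0.667,
                        noise_scale_w=0.8, length_scale=1.0)[0][0, 0].cpu().numpy()
# `audio` is a float waveform at hps.data.sampling_rate, e.g. write it with
# soundfile.write("out.wav", audio, hps.data.sampling_rate)
```

For a multi-speaker (VCTK) checkpoint, the upstream notebook additionally constructs `SynthesizerTrn` with `n_speakers=hps.data.n_speakers` and passes a speaker-id tensor via the `sid` argument of `infer`.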