# Flask-Tacotron2-TTS-Web-App

This repo was forked from NVIDIA/Tacotron2 for inference testing only (not for training).

Because I wasn't familiar with Flask, I forked CodeDem/flask-musing-streaming for the web front end.

If you want to test NVIDIA Tacotron2 models in a Jupyter notebook, you are better off trying the inference notebook in NVIDIA/Tacotron2 instead.

*(example screenshot)*

## Installation

1. Install PyTorch 1.0 (an NVIDIA CUDA GPU is required).

2. Install the Python dependencies: `pip install -r requirement.txt`

3. Clone WaveGlow into this repo: `git clone https://github.com/NVIDIA/waveglow.git`, or use `git submodule init; git submodule update`.

4. You need both a Tacotron2 and a WaveGlow model. Use either:

   1. NVIDIA/Tacotron2's models for the inference demo: Tacotron 2, WaveGlow

   2. or my trained models:

      - Tacotron2: English_90k_steps (LJSpeech dataset), Korean_162k_steps (KSS dataset)
      - WaveGlow: waveglow_152k_steps, trained on the Korean dataset

## Usage

Run the web app: `python app.py`

Or test TTS from the console: `python console_test.py`

You can change the model paths in `config.json`.
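For reference, a `config.json` along these lines is what the app would read; the exact key names here are assumptions, so check them against the `config.json` shipped in this repo:

```json
{
    "tacotron2_path": "models/tacotron2_english_90k.pt",
    "waveglow_path": "models/waveglow_152k.pt",
    "sampling_rate": 22050
}
```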

## Results

You may see a `Warning! Reached max decoder steps` message on the console.

In that case, the synthesized audio will be capped at about 11 seconds and end in garbled sound.

This problem happens often with my Korean-trained model, but rarely with my English-trained model.
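The ~11-second cap follows from the decoder hitting its step limit. With NVIDIA Tacotron2's default hyperparameters (`max_decoder_steps=1000`, `hop_length=256`, `sampling_rate=22050`; assumed here, check `hparams.py` in your checkout), the maximum length works out as:

```python
# Why truncated audio comes out near 11 seconds: each decoder step
# emits one mel frame, and each mel frame covers hop_length samples.
# Values are NVIDIA Tacotron2 defaults (assumed; verify in hparams.py).
max_decoder_steps = 1000   # mel frames the decoder may emit
hop_length = 256           # audio samples per mel frame
sampling_rate = 22050      # samples per second (Hz)

max_samples = max_decoder_steps * hop_length
max_seconds = max_samples / sampling_rate
print(f"max synthesized length: {max_seconds:.1f} s")  # ~11.6 s
```

So when you hear an abrupt ~11-second clip, the decoder ran out of steps before predicting a stop token, which is why the tail sounds garbled.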

I can't hear any difference in the synthesized audio between `waveglow_256channels.pt` (the WaveGlow demo model) and my `waveglow_152k`.