
Foreign Accent Conversion by Synthesizing Speech from Phonetic Posteriorgrams

This repository hosts the code we used to prepare our Interspeech'19 paper, "Foreign Accent Conversion by Synthesizing Speech from Phonetic Posteriorgrams".

Install

This project uses conda to manage all the dependencies; install Miniconda if you have not done so. We used Python 3.8.

# Clone the repo
git clone https://github.com/a2d8a4v/fac-via-ppg.git
cd fac-via-ppg
PROJECT_ROOT_DIR=$(pwd)

# install Miniconda (download the installer from https://repo.anaconda.com/miniconda/ first)
sh Miniconda3-latest-Linux-x86_64.sh

# activate the environment
. YOUR_CONDA_DIR_PATH/bin/activate

# install the pinned package versions from environment.yml. This takes some time to finish.
conda env update --file environment.yml --prune

# install pykaldi. This takes a long time to finish.
# if the pykaldi install fails, see https://github.com/pykaldi/pykaldi#installation
git clone https://github.com/pykaldi/pykaldi.git
pushd pykaldi
cd tools
./check_dependencies.sh  # checks if system dependencies are installed
./install_protobuf.sh    # installs both the C++ library and the Python package
./install_clif.sh        # installs both the C++ library and the Python package
./install_kaldi.sh       # installs the C++ library
cd ..
python setup.py install
popd

# Compile protocol buffer to get the data_utterance_pb2.py file
protoc -I=src/common --python_out=src/common src/common/data_utterance.proto

# Include src in your PYTHONPATH
export PYTHONPATH=$PROJECT_ROOT_DIR/src:$PYTHONPATH

If conda complains that some packages are missing, you can very likely find a similar version of each missing package in Anaconda's package archive.

If you are using PyTorch >= 1.3, you may need to remove the byte() method call in src.common.utils.get_mask_from_lengths.
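The change amounts to keeping the boolean tensor that comparison ops return instead of casting it. A minimal sketch of such a mask helper, written for PyTorch >= 1.3 (the actual function in src/common/utils.py may differ in details):

```python
import torch

def get_mask_from_lengths(lengths):
    """Boolean padding mask; sketch of the PyTorch >= 1.3 variant (no .byte())."""
    max_len = int(torch.max(lengths).item())
    ids = torch.arange(0, max_len, device=lengths.device)
    # Comparison ops return torch.bool tensors in modern PyTorch,
    # so the old .byte() cast is unnecessary for masking.
    return ids < lengths.unsqueeze(1)
```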

Run unit tests

cd test

# Remember to make this script executable
./run_coverage.sh

This only runs a few sanity checks, so don't worry if the test coverage looks low :)

Depending on your git configs, you may or may not need to recreate the symbolic links in test/data.

Train PPG-to-Mel model

Change the default parameters in src/common/hparams.py:create_hparams(). Specify the training and validation data in text files; see data/filelists for examples.
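The exact filelist format is defined by the examples in data/filelists; assuming each line begins with a file path (optionally followed by |-separated fields), a quick hypothetical sanity check before training might look like:

```python
import os

def check_filelist(path):
    """Hypothetical helper: return paths referenced in a filelist that do not exist.

    Assumes one utterance per line, with the file path in the first
    |-separated field; adjust the split if your filelists differ.
    """
    missing = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            utt_path = line.split("|")[0]
            if not os.path.isfile(utt_path):
                missing.append(utt_path)
    return missing
```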

cd src/script
mkdir -pv ../../output
mkdir -pv ../../checkpoints
python train_ppg2mel.py

The FP16 mode will not work, unfortunately :(

Train WaveGlow model

Change the default parameters in src/waveglow/config.json. The training data should be specified in the same manner as the PPG-to-Mel model.

cd src/script
python train_waveglow.py

View training progress

Each of your output dirs should contain a log directory; that is the LOG_DIR to use below.

tensorboard --logdir=${LOG_DIR}
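If you accumulate several output dirs, a small helper (hypothetical; it only assumes the log dirs live somewhere under output/) can pick the newest one to pass to tensorboard:

```python
import glob
import os

def latest_log_dir(root="output"):
    """Return the most recently modified 'log' directory under root, or None."""
    candidates = [
        p for p in glob.glob(os.path.join(root, "**", "log"), recursive=True)
        if os.path.isdir(p)
    ]
    return max(candidates, key=os.path.getmtime) if candidates else None
```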

Generate speech synthesis

Use src/script/generate_synthesis.py; you can find pre-trained models in the Links section.

generate_synthesis.py [-h] --ppg2mel_model PPG2MEL_MODEL
                           --waveglow_model WAVEGLOW_MODEL
                           --teacher_utterance_path TEACHER_UTTERANCE_PATH
                           --output_dir OUTPUT_DIR
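For a concrete invocation, the snippet below just assembles an example command line; the checkpoint and wav paths are placeholders, not files shipped with the repo:

```python
# Hypothetical paths; substitute your own checkpoints and source utterance.
cmd = [
    "python", "src/script/generate_synthesis.py",
    "--ppg2mel_model", "checkpoints/my_ppg2mel_checkpoint",
    "--waveglow_model", "checkpoints/my_waveglow_checkpoint",
    "--teacher_utterance_path", "data/teacher_utterance.wav",
    "--output_dir", "output/synthesis",
]
print(" ".join(cmd))
```

Run the printed command from the shell, or launch it in Python with subprocess.run(cmd, check=True).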

Links

Citation

Please kindly cite the following paper if you use this code repository in your work:

@inproceedings{zhao2019ForeignAC,
  author={Guanlong Zhao and Shaojin Ding and Ricardo Gutierrez-Osuna},
  title={{Foreign Accent Conversion by Synthesizing Speech from Phonetic Posteriorgrams}},
  year=2019,
  booktitle={Proc. Interspeech 2019},
  pages={2843--2847},
  doi={10.21437/Interspeech.2019-1778},
  url={http://dx.doi.org/10.21437/Interspeech.2019-1778}
}
