
Releases: DigitalPhonetics/IMS-Toucan

Improved TTS in 7000 Languages

25 Jul 07:16

What's Changed

This release provides new checkpoints and adds a few improvements that did not make it into the previous release due to time constraints. For more information on the universal TTS model for 7000 languages, please refer to the previous release, v3.0.

  • Prosody prediction for pitch, energy, and durations is now stochastic: values are sampled from a predicted distribution instead of assuming a one-to-one mapping (a rough sketch of the idea follows this list).
  • Added support for more IPA modifiers to cover more languages
  • Added more languages into the pretraining
  • Overhauled language similarity prediction modules and visualization
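
As a rough illustration of the stochastic prosody idea from the first bullet (this is a generic sketch, not the toolkit's actual implementation, which may use a different kind of distribution):

```python
import torch

class StochasticProsodyPredictor(torch.nn.Module):
    """Generic sketch: instead of regressing a single prosody value per phone,
    predict the parameters of a distribution and sample from it, so repeated
    synthesis yields varied pitch/energy/duration curves."""

    def __init__(self, hidden_dim=192):
        super().__init__()
        self.mean_head = torch.nn.Linear(hidden_dim, 1)
        self.logvar_head = torch.nn.Linear(hidden_dim, 1)

    def forward(self, encoder_states):
        # encoder_states: (batch, phones, hidden_dim)
        mean = self.mean_head(encoder_states)
        std = torch.exp(0.5 * self.logvar_head(encoder_states))
        # draw a sample instead of returning the deterministic mean
        return mean + std * torch.randn_like(std)
```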

Full Changelog: v3.0...v3.1

TTS in 7000 Languages

10 Jun 15:21

This release extends the toolkit's functionality and provides new checkpoints.

  • We improved the overall TTS quality, with further enhancements already on the way.
  • Watermarking has been added to prevent misuse.
  • We extended support to almost all languages in the ISO 639-3 standard (that's over 7000 languages!).
  • With a few clever designs, we were able to extrapolate from a checkpoint pretrained on 462 languages to a checkpoint that can speak all languages for which we now support a text frontend (a sketch of this approximation idea follows this list)!
  • Lots of simplifications and quality-of-life changes.
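
A rough sketch of the extrapolation idea (the function and distance measure below are illustrative placeholders; see the paper linked below for how it is actually done): an unseen language's embedding can be approximated from the embeddings of its closest seen languages.

```python
import torch

def approximate_language_embedding(unseen_lang, seen_embeddings, distance_fn, k=5):
    """Illustrative only: build an embedding for a language that was never seen
    in training as an inverse-distance-weighted average of the embeddings of
    its k most similar seen languages. `distance_fn` stands in for whatever
    language-similarity measure is used."""
    distances = {lang: distance_fn(unseen_lang, lang) for lang in seen_embeddings}
    nearest = sorted(distances, key=distances.get)[:k]
    weights = torch.tensor([1.0 / (distances[lang] + 1e-8) for lang in nearest])
    weights = weights / weights.sum()
    stacked = torch.stack([seen_embeddings[lang] for lang in nearest])
    return (weights.unsqueeze(1) * stacked).sum(dim=0)
```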

This is the outcome of a collaboration with colleagues from the University of Groningen and the Fraunhofer IIS in Erlangen. Together with our group from the University of Stuttgart, we have built this model, which is the first of its kind.

We will present this at Interspeech 2024. The full list of authors is Florian Lux, Sarina Meyer, Lyonel Behringer, Frank Zalkow, Phat Do, Matt Coler, Emanuël Habets, and Thang Vu.

Paper: https://arxiv.org/abs/2406.06403
Dataset: https://huggingface.co/datasets/Flux9665/BibleMMS
Interactive Demo: https://huggingface.co/spaces/Flux9665/MassivelyMultilingualTTS
Static Demo: https://anondemos.github.io/MMDemo/

Prompting Controlled Emotional TTS

10 Jun 14:03

In this release you can condition your TTS model on emotional prompts during training and transfer the emotion in any prompt to synthesized speech during inference.

Demo samples are available at https://anondemos.github.io/Prompting/
A demo space is available at https://huggingface.co/spaces/Thommy96/promptingtoucan

Using pretrained models:
You can use the pretrained models for inference by simply providing an instance of the sentence embedding extractor, a speaker id and a prompt (see run_sent_emb_test_suite.py).

Training your own model:
You will need to extract a number of prompts and their sentence embeddings for every emotion category you want to include during training (see e.g. extract_yelp_sent_embs.py).
In your training pipeline, you then need to load these sentence embeddings and pass them to the train loop. You should also provide the dimensionality of the embeddings when instantiating the TTS model and set static_speaker_embedding=True (see TrainingInterfaces/TrainingPipelines/ToucanTTS_Sent_Finetuning.py). Depending on how many speakers there are in the datasets you use for training, you need to adapt the dimensionality of the speaker embedding table in the TTS model. Finally, check that the datasets you use are covered by the functions that extract emotion and speaker id from the file path (Utility/utils.py).

ChallengeDataContribution

01 Dec 15:40
Pre-release
v2.asvspoof

Fixes a popping noise and an incorrect path in the downloader.

ToucanTTS

10 Apr 18:22

We pack a number of design choices into a new architecture, which will be the basis for our multilingual and low-resource research going forward. We call it ToucanTTS and, as usual, provide pretrained models. The synthesis quality is very good, training is very stable, and it requires few datapoints for training from scratch and even fewer for finetuning. These properties are hard to quantify, so it's probably best to try it out yourself.

We also offer the option to use a BigVGAN vocoder, which sounds very nice but is a bit slow on CPU. On GPU, the new vocoder is definitely recommended.

Blizzard Challenge 2023

04 Apr 14:15

Improved Controllable Multilingual

22 Feb 17:08

This release extends the toolkit's functionality and provides new checkpoints.

  • new sampling rate for the vocoder: using 24 kHz instead of 48 kHz lowers the theoretical upper bound for quality, but produces fewer artifacts in practice
  • flow-based PostNet from PortaSpeech is included in the new TTS model, which brings cleaner results at essentially no extra cost
  • new controllability options through artificial speaker generation in a lower-dimensional space with a better embedding function (see the sketch after this list)
  • quality-of-life changes, such as an integrated finetuning example, an arbiter for selecting which train loop to use, and vocoder finetuning (although that should rarely be necessary)
  • diverse bugfixes and speed increases
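
A minimal sketch of the artificial-speaker idea mentioned above (dimensions and architecture are illustrative, not the toolkit's actual design): sample a random vector in a small latent space and map it through a learned embedding function to obtain a new speaker embedding.

```python
import torch

class ArtificialSpeakerGenerator(torch.nn.Module):
    """Illustrative only: a learned embedding function that turns random draws
    from a low-dimensional latent space into speaker embeddings, so new,
    artificial voices can be sampled at will."""

    def __init__(self, latent_dim=16, speaker_embedding_dim=192):
        super().__init__()
        self.latent_dim = latent_dim
        self.embedding_function = torch.nn.Sequential(
            torch.nn.Linear(latent_dim, 128),
            torch.nn.Tanh(),
            torch.nn.Linear(128, speaker_embedding_dim),
        )

    def sample(self, n=1):
        z = torch.randn(n, self.latent_dim)   # random point in the latent space
        return self.embedding_function(z)     # mapped to a usable speaker embedding
```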

This release breaks backwards compatibility; please download the new models or stick to a prior release if you rely on your old models.

Future releases will include one more change to the vocoder (a BigVGAN generator) and lots of changes to scale up the multilingual capabilities of a single model.

Controllable Speakers

25 Oct 15:16

This release extends the toolkit's functionality and provides new checkpoints.

  • self-contained embeddings: we no longer use an external embedding model for TTS conditioning; instead, we train one that is specifically tailored for this use
  • new vocoder: Avocodo replaces HiFi-GAN
  • new controllability options through artificial speaker generation
  • quality-of-life changes, such as Weights & Biases integration, a graphical demo script, and automated model downloading
  • diverse bugfixes and speed increases

This release breaks backwards compatibility; please download the new models or stick to a prior release if you rely on your old models.

Support all Types of Languages

20 May 10:04

This release extends the toolkit's functionality and provides new checkpoints.

New Features:

  • support for all phonemes in the IPA standard through an extended lookup of articulatory features
  • support for some suprasegmental markers in the IPA standard through parsing (tone, lengthening, primary stress)
  • praat-parselmouth for greatly improved pitch extraction
  • faster phonemization
  • word boundaries are added, which are invisible to the aligner and the decoder, but can help the encoder in multilingual scenarios
  • tonal languages added, tested, and included in the pretraining (Chinese, Vietnamese)
  • Scorer class to inspect data given a trained model and dataset cache (provided pretrained models can be used for this)
  • intuitive controls for scaling durations and variance in pitch and energy (a sketch of such controls follows this list)
  • diverse bugfixes and speed increases
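
The duration/pitch/energy controls mentioned above amount to something like the following (a generic sketch; the toolkit's actual parameter names may differ):

```python
import torch

def apply_prosody_controls(durations, pitch, energy,
                           duration_scale=1.0,
                           pitch_variance_scale=1.0,
                           energy_variance_scale=1.0):
    """Generic sketch: stretch or compress predicted durations and widen or
    narrow the variation of pitch and energy around their utterance means."""
    durations = durations * duration_scale
    pitch = pitch.mean() + (pitch - pitch.mean()) * pitch_variance_scale
    energy = energy.mean() + (energy - energy.mean()) * energy_variance_scale
    return durations, pitch, energy

# e.g. slower speech with more expressive pitch:
# durations, pitch, energy = apply_prosody_controls(durations, pitch, energy,
#                                                   duration_scale=1.2,
#                                                   pitch_variance_scale=1.5)
```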

Note:

  • This release breaks backwards compatibility. Make sure you are using the associated pretrained models. Old checkpoints and dataset caches become incompatible. Only HiFiGAN remains compatible.
  • Work on upcoming releases is already in progress. Improved voice adaptation will be our next goal.
  • To use the pretrained checkpoints, download them, create their corresponding directories and place them into your clone as follows (you have to rename the HiFiGAN and FastSpeech2 checkpoints once in place):
...
Models
├─ Aligner
│   └─ aligner.pt
├─ FastSpeech2_Meta
│   └─ best.pt
└─ HiFiGAN_combined
    └─ best.pt
...

Multi Language and Multi Speaker

01 Mar 20:37

  • self-contained aligner to get high-quality durations quickly and easily, without reliance on external tools or knowledge distillation
  • modelling speakers and languages jointly but disentangled, so you can use speakers across languages (a sketch of this conditioning follows the list)
  • look at the demo section for an interactive online demo
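
A minimal sketch of the joint-but-disentangled conditioning idea (dimensions and names are illustrative): speaker and language are looked up in separate embedding tables and added to the encoder output independently, so any speaker can be combined with any language at inference time.

```python
import torch

class MultiSpeakerMultiLingualConditioning(torch.nn.Module):
    """Illustrative only: separate speaker and language embedding tables whose
    entries are added to the encoder output independently, so any speaker
    embedding can be paired with any language embedding."""

    def __init__(self, n_speakers, n_languages, hidden_dim=192):
        super().__init__()
        self.speaker_table = torch.nn.Embedding(n_speakers, hidden_dim)
        self.language_table = torch.nn.Embedding(n_languages, hidden_dim)

    def forward(self, encoder_states, speaker_id, language_id):
        # encoder_states: (batch, time, hidden_dim)
        spk = self.speaker_table(speaker_id).unsqueeze(1)     # (batch, 1, hidden_dim)
        lang = self.language_table(language_id).unsqueeze(1)  # (batch, 1, hidden_dim)
        return encoder_states + spk + lang
```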

A pretrained FastSpeech2 model that can speak many languages in many different voices, a HiFiGAN model, and an Aligner model are attached to this release.