An automatic prosodic boundary annotation tool for Text-to-Speech Synthesis (TTS).


Multi-modal Automatic Prosody Annotation with Contrastive Pretraining of SSWP

This is the official implementation of the following paper (arXiv):

Multi-Modal Automatic Prosody Annotation with Contrastive Pretraining of SSWP
Jinzuomu Zhong, Yang Li, Hui Huang, Korin Richmond, Jie Liu, Zhiba Su, Jing Guo, Benlai Tang, Fengjie Zhu

Abstract: In expressive and controllable Text-to-Speech (TTS), explicit prosodic features significantly improve the naturalness and controllability of synthesised speech. However, manual prosody annotation is labor-intensive and inconsistent. To address this issue, this paper proposes a novel two-stage automatic annotation pipeline. In the first stage, we use contrastive pretraining of Speech-Silence and Word-Punctuation (SSWP) pairs to enhance the prosodic information in latent representations. In the second stage, we build a multi-modal prosody annotator comprising pretrained encoders, a text-speech fusing scheme, and a sequence classifier. Experiments on English prosodic boundaries demonstrate that our method achieves state-of-the-art (SOTA) performance, with F1 scores of 0.72 and 0.93 for Prosodic Word and Prosodic Phrase boundaries respectively, while remaining remarkably robust to data scarcity.
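
For intuition only, the sketch below shows a minimal CLAP-style symmetric contrastive objective over paired speech and text embeddings, which is the general shape of the first-stage SSWP pretraining; the function name, temperature value, and in-batch negative scheme are illustrative assumptions, not the repository's actual training code.

import torch
import torch.nn.functional as F

def sswp_contrastive_loss(speech_emb, text_emb, temperature=0.07):
    # speech_emb, text_emb: [batch, dim] projections from the speech
    # (with silence) and text (with punctuation) encoders; matched
    # SSWP pairs share a row index.
    speech_emb = F.normalize(speech_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    logits = speech_emb @ text_emb.t() / temperature  # [batch, batch]
    targets = torch.arange(logits.size(0), device=logits.device)
    # Diagonal entries are positives; the remaining rows/columns act
    # as in-batch negatives, as in CLIP/CLAP-style pretraining.
    return (F.cross_entropy(logits, targets)
            + F.cross_entropy(logits.t(), targets)) / 2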

Model Architecture

The Architecture of Our Proposed Work

[Figure: model architecture]

Results & Demos

Objective Evaluation

The results of our proposed work, compared with previous benchmarks, are shown below.

[Figure: objective evaluation results]

Audio sample demos are available at: Demo

Quickstart

Environment Installation

conda create -n clap python=3.10
conda activate clap
# you can also install PyTorch by following the official instructions (https://pytorch.org/get-started/locally/)
pip install torch==1.11.0+cu113 torchvision==0.12.0+cu113 torchaudio==0.11.0+cu113 -f https://download.pytorch.org/whl/torch_stable.html
pip install -r requirements.txt

Multi-modal Prosody Annotation

Data Process

The model requires the input data to be force-aligned with Kaldi and then converted to the format of the sample data.

Text features are stored in *.json; audio features are stored in *.mel.npy.

./sample_data/prosody_annotation
├── wordboundarylevel_ling
│   └── internal-spk1-test
└── wordboundarylevel_mel
    └── internal-spk1-test
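
As a rough sketch of how one utterance's features line up (the utterance ID utt0001 and the shape comments below are hypothetical; the sample data is the authoritative reference for file names and contents):

import json
import numpy as np

# Hypothetical utterance ID; actual file names follow the sample data.
ling_path = "./sample_data/prosody_annotation/wordboundarylevel_ling/internal-spk1-test/utt0001.json"
mel_path = "./sample_data/prosody_annotation/wordboundarylevel_mel/internal-spk1-test/utt0001.mel.npy"

with open(ling_path) as f:
    text_feats = json.load(f)  # word/punctuation features derived from the Kaldi alignment

mel = np.load(mel_path)  # mel-spectrogram array, e.g. [frames, n_mels]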

To convert audio to mel spectrograms, use the following command:

cd data_process
python 01_wav2mel.py
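
If you need to reproduce the conversion outside that script, a minimal librosa-based extraction looks roughly like the sketch below; the sample rate, FFT/hop sizes, and mel-band count are placeholder values, and data_process/01_wav2mel.py remains the authoritative implementation.

import librosa
import numpy as np

# Placeholder parameters; match those used in data_process/01_wav2mel.py.
wav, sr = librosa.load("utterance.wav", sr=16000)
mel = librosa.feature.melspectrogram(y=wav, sr=sr, n_fft=1024,
                                     hop_length=256, n_mels=80)
log_mel = np.log(np.clip(mel, 1e-5, None)).T  # [frames, n_mels]
np.save("utterance.mel.npy", log_mel)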

Sample Inference Script

bash ./example/01_inference_prosody_annotation.sh

Released Multi-modal Prosody Annotator

./released_model/finetuned_prosody_annotator.pt
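
To sanity-check the checkpoint before running inference, a plain torch.load should suffice; its internal layout (bare state_dict vs. wrapped dict) is not documented here, so the key inspection below is only a guess.

import torch

ckpt = torch.load("./released_model/finetuned_prosody_annotator.pt",
                  map_location="cpu")
# Layout is an assumption: print the top-level keys to see whether this
# is a bare state_dict or a wrapper with extra training metadata.
print(list(ckpt.keys())[:10] if isinstance(ckpt, dict) else type(ckpt))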

References

We are grateful for the following open-source contributions. Most of our code is based on LAION-AI's CLAP, with the Conformer component from ESPnet.
