This is the official implementation of the following paper (arXiv):
Multi-Modal Automatic Prosody Annotation with Contrastive Pretraining of SSWP
Jinzuomu Zhong, Yang Li, Hui Huang, Korin Richmond, Jie Liu, Zhiba Su, Jing Guo, Benlai Tang, Fengjie Zhu
Abstract: *In expressive and controllable Text-to-Speech (TTS), explicit prosodic features significantly improve the naturalness and controllability of synthesised speech. However, manual prosody annotation is labor-intensive and inconsistent. To address this issue, a two-stage automatic annotation pipeline is novelly proposed in this paper. In the first stage, we use contrastive pretraining of Speech-Silence and Word-Punctuation (SSWP) pairs to enhance prosodic information in latent representations. In the second stage, we build a multi-modal prosody annotator, comprising pretrained encoders, a text-speech fusing scheme, and a sequence classifier. Experiments on English prosodic boundaries demonstrate that our method achieves state-of-the-art (SOTA) performance with 0.72 and 0.93 F1 score for Prosodic Word and Prosodic Phrase boundary respectively, while bearing remarkable robustness to data scarcity.*
The results of our proposed work, compared with previous benchmarks, are reported in the paper.
Audio sample demos are available at: Demo
conda create -n clap python=3.10
conda activate clap
# you can also install pytorch by following the official instruction (https://pytorch.org/get-started/locally/)
pip install torch==1.11.0+cu113 torchvision==0.12.0+cu113 torchaudio==0.11.0+cu113 -f https://download.pytorch.org/whl/torch_stable.html
pip install -r requirements.txt
The model requires the input data to be force-aligned with Kaldi and then converted to the format of the sample data. Text features are stored in *.json files; audio features are stored in *.mel.npy files (see the loading sketch after the directory layout below).
./sample_data/prosody_annotation
├── wordboundarylevel_ling
│ └── internal-spk1-test
└── wordboundarylevel_mel
└── internal-spk1-test
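As a quick sanity check, the sketch below loads one aligned sample. The utterance file names and the JSON layout are illustrative assumptions, not the repository's actual schema; inspect the sample data for the exact format.

```python
import json
import numpy as np

# Hypothetical file names for one utterance; the actual naming under
# ./sample_data/prosody_annotation may differ.
text_path = "sample_data/prosody_annotation/wordboundarylevel_ling/internal-spk1-test/utt0001.json"
mel_path = "sample_data/prosody_annotation/wordboundarylevel_mel/internal-spk1-test/utt0001.mel.npy"

with open(text_path, "r", encoding="utf-8") as f:
    ling = json.load(f)  # Kaldi-aligned linguistic features (assumed schema)

mel = np.load(mel_path)  # mel spectrogram, e.g. shape (n_frames, n_mels)

print(type(ling), mel.shape, mel.dtype)
```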
To convert audio to mel spectrograms, refer to the following commands:
cd data_process
python 01_wav2mel.py
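For reference, a typical wav-to-mel conversion looks roughly like the sketch below. The sampling rate, FFT, hop, and mel-bin values here are illustrative assumptions, not necessarily the parameters used by `01_wav2mel.py`; always use that script for data the model will consume.

```python
import librosa
import numpy as np

# Illustrative parameters; check 01_wav2mel.py for the values the model expects.
SR, N_FFT, HOP, N_MELS = 16000, 1024, 256, 80

wav, _ = librosa.load("utt0001.wav", sr=SR)
mel = librosa.feature.melspectrogram(
    y=wav, sr=SR, n_fft=N_FFT, hop_length=HOP, n_mels=N_MELS
)
log_mel = np.log(np.clip(mel, a_min=1e-5, a_max=None))  # log compression
np.save("utt0001.mel.npy", log_mel.T)  # saved as (n_frames, n_mels)
```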
Run inference with the example script:
bash ./example/01_inference_prosody_annotation.sh
The released checkpoint for the finetuned prosody annotator is expected at:
./released_model/finetuned_prosody_annotator.pt
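To verify the checkpoint downloaded correctly, a minimal inspection sketch is shown below, assuming standard PyTorch serialization; the key names printed are whatever the checkpoint actually contains, not a documented schema.

```python
import torch

# Load on CPU just to inspect; assumes the file was written with torch.save.
ckpt = torch.load("released_model/finetuned_prosody_annotator.pt", map_location="cpu")
print(type(ckpt))
if isinstance(ckpt, dict):
    print(list(ckpt.keys())[:10])  # e.g. a state_dict or nested fields
```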
We are grateful for the following open-source contributions. Most of our code is based on LAION-AI's CLAP, with the Conformer component from ESPnet.