SpeechLLM is a multi-modal large language model (LLM) trained to analyze and predict metadata from a speaker's turn in a conversation. The model integrates a speech encoder that transforms speech signals into meaningful speech embeddings; these embeddings, combined with a text instruction, are then processed by the LLM to generate predictions.
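At a high level, the forward pass fuses the two modalities: the speech encoder turns audio into frame-level embeddings, a projection maps them into the LLM's embedding space, and the result is concatenated with the embedded text instruction before decoding. The sketch below only illustrates this idea; the module names (speech_encoder, connector, llm) are placeholders, not the repository's actual components.

import torch

def speechllm_forward(audio_waveform, instruction_ids, speech_encoder, connector, llm):
    # Illustrative fusion of speech and text; the repository's real modules may differ.
    speech_features = speech_encoder(audio_waveform)           # (batch, frames, encoder_dim)
    speech_embeds = connector(speech_features)                 # project to (batch, frames, llm_dim)
    text_embeds = llm.get_input_embeddings()(instruction_ids)  # (batch, tokens, llm_dim)
    inputs_embeds = torch.cat([speech_embeds, text_embeds], dim=1)
    # Forward pass over the fused sequence; generation would use llm.generate(inputs_embeds=...).
    return llm(inputs_embeds=inputs_embeds)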
The model takes a 16 kHz speech audio file as input and predicts the following:
- SpeechActivity: whether the audio signal contains speech (True/False)
- Transcript: ASR transcript of the audio
- Gender of the speaker (Female/Male)
- Age of the speaker (Young/Middle-Age/Senior)
- Accent of the speaker (Africa/America/Celtic/Europe/Oceania/South-Asia/South-East-Asia)
- Emotion of the speaker (Happy/Sad/Anger/Neutral/Frustrated)
# Load model directly from huggingface
from transformers import AutoModel
import torchaudio

model = AutoModel.from_pretrained("skit-ai/speechllm-2B", trust_remote_code=True)
model.generate_meta(
    audio_path="path-to-audio.wav",  # 16 kHz, mono
    audio_tensor=torchaudio.load("path-to-audio.wav")[0],  # [Optional] pass either audio_path or audio_tensor; torchaudio.load returns (waveform, sample_rate)
    instruction="Give me the following information about the audio [SpeechActivity, Transcript, Gender, Emotion, Age, Accent]",
    max_new_tokens=500,
    return_special_tokens=False
)
# Model Generation
'''
{
"SpeechActivity" : "True",
"Transcript": "Yes, I got it. I'll make the payment now.",
"Gender": "Female",
"Emotion": "Neutral",
"Age": "Young",
"Accent" : "America",
}
'''
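The generation is returned as a string in the JSON-like format above, so it is convenient to parse it into a Python dict before using the fields. A minimal sketch, reusing the model loaded above and assuming generate_meta returns that string; ast.literal_eval is used because it also tolerates the trailing comma shown in the example:

import ast

output = model.generate_meta(
    audio_path="path-to-audio.wav",
    instruction="Give me the following information about the audio [SpeechActivity, Transcript, Gender, Emotion, Age, Accent]",
    max_new_tokens=500,
)
meta = ast.literal_eval(output)  # parse the dict-like string into a Python dict
print(meta["Transcript"], meta["Gender"], meta["Emotion"])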
Try the model in the Google Colab Notebook. Also, check out our blog on SpeechLLM for end-to-end conversational agents (User Speech -> Response).
We released the speechllm-2B and speechllm-1.5B model checkpoints on Hugging Face 🤗.
Model | Speech Encoder | LLM | checkpoint url |
---|---|---|---|
speechllm-2B | facebook/hubert-xlarge-ll60k | TinyLlama/TinyLlama-1.1B-Chat-v1.0 | Huggingface |
speechllm-1.5B | microsoft/wavlm-large | TinyLlama/TinyLlama-1.1B-Chat-v1.0 | Huggingface |
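Both checkpoints expose the same custom-code interface, so the smaller model loads the same way; a sketch, assuming the 1.5B checkpoint is published under the same organization:

from transformers import AutoModel

# Assumes skit-ai/speechllm-1.5B follows the same remote-code interface as speechllm-2B.
model = AutoModel.from_pretrained("skit-ai/speechllm-1.5B", trust_remote_code=True)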
Benchmark results for speechllm-2B:

Dataset | Type | Word Error Rate (%) | Gender Acc | Age Acc | Accent Acc |
---|---|---|---|---|---|
librispeech-test-clean | Read Speech | 6.73 | 0.9496 | - | - |
librispeech-test-other | Read Speech | 9.13 | 0.9217 | - | - |
CommonVoice test | Diverse Accent, Age | 25.66 | 0.8680 | 0.6041 | 0.6959 |

Benchmark results for speechllm-1.5B:

Dataset | Type | Word Error Rate (%) | Gender Acc | Age Acc | Accent Acc |
---|---|---|---|---|---|
librispeech-test-clean | Read Speech | 11.51 | 0.9594 | - | - |
librispeech-test-other | Read Speech | 16.68 | 0.9297 | - | - |
CommonVoice test | Diverse Accent, Age | 26.02 | 0.9476 | 0.6498 | 0.8121 |
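The word error rates above compare predicted transcripts against reference transcripts. A minimal sketch of how such a number can be computed, using the jiwer package (an assumption for illustration, not a dependency of this repository) and placeholder transcript lists:

import jiwer

# Placeholder ground-truth and predicted transcripts; in practice these come from the
# test set references and the "Transcript" field of generate_meta outputs.
references = ["yes i got it i'll make the payment now"]
hypotheses = ["yes i got it i will make the payment now"]

# jiwer.wer returns the corpus-level word error rate in the 0.0-1.0 range.
print(f"WER: {100 * jiwer.wer(references, hypotheses):.2f}%")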
Install the necessary packages from requirements.txt and make sure the CUDA versions match your environment. Then prepare the audio dataset in the same format as data_samples/train.csv and data_samples/dev.csv. If new tasks (e.g. noise or environment class) have to be added, update dataset.py accordingly.
pip install -r requirements.txt
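The columns expected in the CSVs are defined by data_samples/train.csv and dataset.py; the snippet below only illustrates the general idea of one labelled utterance per row. The column names here are hypothetical placeholders, so match them to the sample files before training:

import pandas as pd

# Hypothetical schema for illustration; align the column names with data_samples/train.csv.
rows = [{
    "audio_path": "data/utt_0001.wav",
    "transcript": "Yes, I got it. I'll make the payment now.",
    "gender": "Female",
    "age": "Young",
    "accent": "America",
    "emotion": "Neutral",
}]
pd.DataFrame(rows).to_csv("my_train.csv", index=False)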
Update the config in train.py, such as audio_encoder_name, llm_name, and the other hyperparameters.
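A sketch of the kind of edit meant here, assuming these names are plain variables in train.py (the script's actual config structure may differ):

# Inside train.py (illustrative; adapt to the script's actual config structure).
audio_encoder_name = "microsoft/wavlm-large"  # or "facebook/hubert-xlarge-ll60k"
llm_name = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"
learning_rate = 1e-4  # hypothetical hyperparameter values
batch_size = 8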
python train.py
After training, update the checkpoint path and the test dataset path (same format as train.csv/dev.csv), then run the evaluation.
python test.py
To try the model in an interactive demo, launch the Streamlit app:
streamlit run app.py
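As a rough idea of what such a demo involves, here is a minimal sketch that uploads an audio file and shows the predicted metadata; it is not the repository's actual app.py:

import streamlit as st
from transformers import AutoModel

@st.cache_resource
def load_model():
    # Cache the model across reruns of the Streamlit script.
    return AutoModel.from_pretrained("skit-ai/speechllm-2B", trust_remote_code=True)

st.title("SpeechLLM demo")
uploaded = st.file_uploader("Upload a 16 kHz mono WAV file", type=["wav"])
if uploaded is not None:
    with open("uploaded.wav", "wb") as f:
        f.write(uploaded.getvalue())
    st.write(load_model().generate_meta(
        audio_path="uploaded.wav",
        instruction="Give me the following information about the audio [SpeechActivity, Transcript, Gender, Emotion, Age, Accent]",
        max_new_tokens=500,
    ))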
The models provided in this repository are not perfect and may produce errors in Automatic Speech Recognition (ASR), gender identification, age estimation, accent recognition, and emotion detection. Additionally, these models may exhibit biases related to gender, age, accent, and emotion. Please use with caution, especially in production environments, and be aware of potential inaccuracies and biases.
This project is released under the Apache 2.0 license, as found in the LICENSE file. The released checkpoints and code are intended for research purposes, subject to the licenses of the facebook/hubert-xlarge-ll60k, microsoft/wavlm-large, and TinyLlama/TinyLlama-1.1B-Chat-v1.0 models.
@misc{Rajaa_SpeechLLM_Multi-Modal_LLM,
  author = {Rajaa, Shangeth and Tushar, Abhinav},
  title = {{SpeechLLM: Multi-Modal LLM for Speech Understanding}},
  url = {https://github.com/skit-ai/SpeechLLM}
}