
mmser

SERVER: Multi-modal Speech Emotion Recognition using Transformer-based and Vision-based Embeddings

This repository provides the code to reproduce the multi-modal speech emotion recognition model presented in the paper "SERVER: Multi-modal Speech Emotion Recognition using Transformer-based and Vision-based Embeddings".

Abstract

This paper proposes a multi-modal approach for speech emotion recognition (SER) that uses both text and audio inputs. The audio embedding is extracted with a vision-based architecture, VGGish, while the text embedding is extracted with a transformer-based architecture, BERT. These embeddings are then fused by concatenation to recognize emotional states. To evaluate the effectiveness of the proposed method, the benchmark IEMOCAP dataset is employed. Experimental results indicate that the proposed method is competitive with, and outperforms most of, the latest state-of-the-art multi-modal SER methods, achieving 63.10% unweighted accuracy (UA) and 63.00% weighted accuracy (WA) on IEMOCAP. In future work, multi-task learning and multi-lingual extensions will be investigated to improve the performance and robustness of multi-modal SER. For reproducibility, our code is publicly available.
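
Below is a minimal sketch of the concatenation-based fusion described above. It assumes a 128-dimensional VGGish audio embedding, a 768-dimensional BERT text embedding, and four emotion classes (a common IEMOCAP setup); the class name, hidden size, dropout rate, and class count are illustrative assumptions, not the exact configuration used in the paper.

# Minimal fusion sketch (illustrative, not the authors' exact model):
# a VGGish audio embedding and a BERT text embedding are concatenated
# and fed to a small classifier over the emotion classes.
import torch
import torch.nn as nn

class FusionSER(nn.Module):
    def __init__(self, audio_dim=128, text_dim=768, num_classes=4):
        super().__init__()
        self.classifier = nn.Sequential(
            nn.Linear(audio_dim + text_dim, 256),
            nn.ReLU(),
            nn.Dropout(0.3),
            nn.Linear(256, num_classes),
        )

    def forward(self, audio_emb, text_emb):
        # audio_emb: (batch, 128) VGGish embedding, averaged over frames
        # text_emb:  (batch, 768) pooled BERT embedding of the transcript
        fused = torch.cat([audio_emb, text_emb], dim=-1)
        return self.classifier(fused)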

Dependencies

  • Python 3.7
  • PyTorch 1.10.0
  • Transformers 4.22.0
  • TensorBoardX 2.5.1
  • PyTorch Lightning 1.6.5
  • torchvggish-gpu 0.1 [1, 2, 3]

Usage

Run PreState.ipynb to train, predict, analyze, and visualize all experimental results reported in the paper. A sketch of how the two embeddings can be extracted with the listed dependencies is shown below.
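
The following sketch shows one way the audio and text embeddings could be obtained with the listed dependencies, using the public torchvggish hub entry point and Hugging Face Transformers. The file name, transcript text, and pooling choices are assumptions for illustration and not necessarily the exact pipeline in PreState.ipynb.

# Illustrative embedding extraction (assumed inputs and pooling choices).
import torch
from transformers import BertTokenizer, BertModel

# Audio embedding: VGGish produces one 128-d vector per ~0.96 s frame.
vggish = torch.hub.load('harritaylor/torchvggish', 'vggish')
vggish.eval()
frames = vggish.forward('example_utterance.wav')            # (num_frames, 128)
audio_emb = frames.float().mean(dim=0, keepdim=True)        # (1, 128)

# Text embedding: 768-d pooled BERT output for the utterance transcript.
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
bert = BertModel.from_pretrained('bert-base-uncased')
inputs = tokenizer('example transcript of the utterance', return_tensors='pt')
with torch.no_grad():
    text_emb = bert(**inputs).pooler_output                 # (1, 768)

# The two embeddings can then be concatenated and classified, e.g. with the
# FusionSER sketch shown under the abstract.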

Citation

If you use this code or part of it, please cite the following paper:

@inproceedings{DBLP:conf/iciit/PhamDPN23,
  author       = {Nhat Truong Pham and
                  Duc Ngoc Minh Dang and
                  Bich Ngoc Hong Pham and
                  Sy Dzung Nguyen},
  title        = {{SERVER:} Multi-modal Speech Emotion Recognition using Transformer-based
                  and Vision-based Embeddings},
  booktitle    = {Proceedings of the 2023 8th International Conference on Intelligent
                  Information Technology, {ICIIT} 2023, Da Nang, Vietnam, February 24-26,
                  2023},
  pages        = {234--238},
  publisher    = {{ACM}},
  year         = {2023},
  url          = {https://doi.org/10.1145/3591569.3591610},
  doi          = {10.1145/3591569.3591610},
  timestamp    = {Fri, 21 Jul 2023 22:25:37 +0200},
  biburl       = {https://dblp.org/rec/conf/iciit/PhamDPN23.bib},
  bibsource    = {dblp computer science bibliography, https://dblp.org}
}

References


[1] S. Hershey et al., ‘CNN Architectures for Large-Scale Audio Classification’, in International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2017. Available: https://arxiv.org/abs/1609.09430, https://ai.google/research/pubs/pub45611

[2] Harri Taylor et al., ‘Pytorch port of Google Research's VGGish model used for extracting audio features’, v0.1, Sep 27, 2019. Available: https://github.com/harritaylor/torchvggish/releases/tag/v0.1

[3] Nhat Truong Pham, ‘torchvggish-gpu’, Sep 29, 2022. Available: https://github.com/nhattruongpham/torchvggish-gpu
