MELD: A Multimodal Multi-Party Dataset for Emotion Recognition in Conversation
MMSA is a unified framework for Multimodal Sentiment Analysis.
An official implementation for "UniVL: A Unified Video and Language Pre-Training Model for Multimodal Understanding and Generation"
Multimodal sentiment analysis: multiple fusion methods based on BERT + ResNet
This repository contains the official implementation code of the paper Improving Multimodal Fusion with Hierarchical Mutual Information Maximization for Multimodal Sentiment Analysis, accepted at EMNLP 2021.
A Tool for extracting multimodal features from videos.
Context-Dependent Sentiment Analysis in User-Generated Videos
The code for our IEEE ACCESS (2020) paper Multimodal Emotion Recognition with Transformer-Based Self Supervised Feature Fusion.
CM-BERT: Cross-Modal BERT for Text-Audio Sentiment Analysis (MM 2020)
Code for the paper "VistaNet: Visual Aspect Attention Network for Multimodal Sentiment Analysis", AAAI'19
Learning Language-guided Adaptive Hyper-modality Representation for Multimodal Sentiment Analysis
Official implementation of the paper "MSAF: Multimodal Split Attention Fusion"
This repository contains the implementation of the paper "Bi-Bimodal Modality Fusion for Correlation-Controlled Multimodal Sentiment Analysis"
A Facial Expression-Aware Multimodal Multi-task Learning Framework for Emotion Recognition in Multi-party Conversations (ACL 2023)
Multimodal sentiment analysis using hierarchical fusion with context modeling
Code and data for the ACL 2022 main conference paper "MSCTD: A Multimodal Sentiment Chat Translation Dataset"
Towards Robust Multimodal Sentiment Analysis with Incomplete Data
NAACL 2022 paper on Analyzing Modality Robustness in Multimodal Sentiment Analysis
[EMNLP 2022] This repository contains the official implementation of the paper "MM-Align: Learning Optimal Transport-based Alignment Dynamics for Fast and Accurate Inference on Missing Modality Sequences"
DeepCU: Integrating Both Common and Unique Latent Information for Multimodal Sentiment Analysis, IJCAI-19
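Several of the repositories above implement fusion of per-modality features (e.g. BERT text embeddings plus ResNet image embeddings). As a minimal, hedged sketch of the two most common strategies, the snippet below contrasts early fusion (concatenating feature vectors before a classifier) with late fusion (averaging per-modality scores after separate classifiers). The feature dimensions (768 for text, 2048 for images) and the random stand-in vectors are illustrative assumptions, not taken from any listed repository.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical pre-extracted features: a BERT-style text vector (768-d)
# and a ResNet-style image vector (2048-d). Random values stand in for
# real encoder outputs purely for illustration.
text_feat = rng.standard_normal(768)
image_feat = rng.standard_normal(2048)

def early_fusion(text_vec, image_vec):
    """Early fusion: concatenate modality features into one vector
    that a single downstream classifier would consume."""
    return np.concatenate([text_vec, image_vec])

def late_fusion(text_score, image_score, weight=0.5):
    """Late fusion: combine sentiment scores produced by separate
    per-modality classifiers with a convex weighting."""
    return weight * text_score + (1 - weight) * image_score

fused = early_fusion(text_feat, image_feat)
print(fused.shape)  # (2816,) — 768 text dims + 2048 image dims
```

Early fusion lets a classifier model cross-modal interactions directly but requires all modalities at inference time; late fusion degrades more gracefully when a modality is missing, which is the concern addressed by the robustness-oriented papers listed above.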