Personalized Multimodal Emotion Prediction

Humans express emotions through different modalities such as facial expressions, vocal tone, and speech content, and each person relies on these modalities in different proportions. In this research, we introduce a personalized emotion prediction model that focuses on four key emotions: happy, sad, neutral, and angry. What makes this model special is its ability to adapt to each individual user, learning how they personally express emotions and adjusting the importance of each modality accordingly.

We’ve integrated pretrained models for each modality, and the system improves over time through user feedback. Unlike traditional models that treat every user identically, this one adapts in real time, making emotion detection more accurate and personalized for each user. The personalization mechanism is a weighted-average ensemble whose per-user modality weights are updated through online learning.
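To make the mechanism concrete, here is a minimal sketch of a per-user weighted-average ensemble whose modality weights are updated online from user feedback. This is not the exact implementation from the repository; the modality names, the multiplicative-weights style update rule, and the learning rate are illustrative assumptions.

```python
import numpy as np

EMOTIONS = ["happy", "sad", "neutral", "angry"]
MODALITIES = ["face", "voice", "text"]  # assumed modality set


class PersonalizedEnsemble:
    def __init__(self, lr: float = 0.5):
        # Start with equal trust in every modality for a new user.
        self.weights = np.ones(len(MODALITIES)) / len(MODALITIES)
        self.lr = lr  # assumed learning rate for the online update

    def predict(self, modality_probs: dict) -> str:
        """Weighted average of the per-modality probability vectors."""
        stacked = np.stack([modality_probs[m] for m in MODALITIES])  # (M, 4)
        fused = self.weights @ stacked                               # (4,)
        return EMOTIONS[int(np.argmax(fused))]

    def update(self, modality_probs: dict, true_label: str):
        """Online feedback step: modalities that assigned high probability
        to the user-confirmed emotion gain weight; the rest lose weight."""
        idx = EMOTIONS.index(true_label)
        correctness = np.array([modality_probs[m][idx] for m in MODALITIES])
        self.weights *= np.exp(self.lr * correctness)
        self.weights /= self.weights.sum()  # keep weights a distribution


# Usage: one prediction/feedback round for a single user.
user_model = PersonalizedEnsemble()
probs = {
    "face":  np.array([0.7, 0.1, 0.1, 0.1]),
    "voice": np.array([0.2, 0.3, 0.4, 0.1]),
    "text":  np.array([0.6, 0.2, 0.1, 0.1]),
}
print(user_model.predict(probs))   # -> "happy"
user_model.update(probs, "happy")  # feedback shifts weight toward face/text
print(user_model.weights)
```

Keeping one weight vector per user is what makes the ensemble personalized: a user who conveys emotion mostly through their face will, over repeated feedback rounds, see the facial-expression model dominate the fusion.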
