- The project aimed to develop a deep learning model that identifies human emotions from brain signals.
- The model combines a Convolutional Neural Network (CNN), a Sparse Autoencoder (SAE), and a Deep Neural Network (DNN). Features extracted by the CNN are first passed to the SAE for encoding and decoding; the resulting reduced-redundancy representation is then used as the input to the DNN for the classification task.
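The CNN → SAE → DNN data flow can be sketched as a single forward pass. This is a minimal illustration with untrained random weights, not the project's actual implementation: the input shape (32 EEG channels x 128 samples), filter counts, code dimension, and number of emotion classes are all assumed for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def cnn_features(x, n_filters=8, ksize=5):
    # Toy CNN stage: a 1-D filter bank shared across channels,
    # ReLU nonlinearity, then mean-pooling over time.
    filters = rng.standard_normal((n_filters, ksize)) * 0.1
    feats = []
    for f in filters:
        conv = np.stack([np.convolve(ch, f, mode="valid") for ch in x])
        feats.append(np.maximum(conv, 0.0).mean(axis=1))  # (channels,)
    return np.concatenate(feats)  # (channels * n_filters,)

def sae_encode(feats, code_dim=64):
    # Sparse-autoencoder stand-in: a linear encoder/decoder pair with
    # random weights; in the real model these are trained with a
    # sparsity penalty so the code removes redundancy in the features.
    w_enc = rng.standard_normal((code_dim, feats.size)) * 0.05
    w_dec = rng.standard_normal((feats.size, code_dim)) * 0.05
    code = np.maximum(w_enc @ feats, 0.0)   # encoding (compressed features)
    recon = w_dec @ code                    # decoding (reconstruction target)
    return code, recon

def dnn_classify(code, n_classes=4):
    # Toy DNN head: one hidden layer + softmax over emotion classes.
    w1 = rng.standard_normal((32, code.size)) * 0.1
    w2 = rng.standard_normal((n_classes, 32)) * 0.1
    h = np.maximum(w1 @ code, 0.0)
    logits = w2 @ h
    p = np.exp(logits - logits.max())
    return p / p.sum()

# One simulated EEG window: 32 channels x 128 samples (assumed shape).
x = rng.standard_normal((32, 128))
feats = cnn_features(x)          # CNN feature extraction
code, recon = sae_encode(feats)  # SAE encoding/decoding
probs = dnn_classify(code)       # DNN classification
print(feats.shape, code.shape, probs.shape)
```

The key design point the sketch shows is that the DNN classifier consumes the SAE's bottleneck code, not the raw CNN features, so the classifier operates on a lower-dimensional, less redundant representation.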
- A classification accuracy of 86% was obtained.
The motivation for using EEG signals is outlined below.
- Facial expressions can be misread, so emotions judged from them may be false.
- Since brain responses (EEG) are used instead, the accuracy is expected to be higher.
- EEG is non-invasive.
- EEG-based recognition can help identify the emotions of people with severe disabilities, enabling better communication with them.
Dataset: https://www.eecs.qmul.ac.uk/mmv/datasets/deap/
Reference: https://www.frontiersin.org/articles/10.3389/fnsys.2020.00043/full