The "Real-Time Emotion Analysis System" project focuses on identifying human emotions from facial expressions using a Convolutional Neural Network (CNN). Trained on the FER-2013 dataset, the CNN classifies emotions into seven categories: Angry, Disgust, Fear, Happy, Sad, Surprise, and Neutral. The system provides accurate, real-time emotion recognition, with applications in human-computer interaction, customer service, and mental health monitoring.
Python, Pandas, NumPy, Scikit-learn (sklearn), Seaborn, Matplotlib, TensorFlow, Keras, ResNet50V2, VGG16, OpenCV, Gradio.
- Dataset: The project uses FER-2013, a publicly available dataset of labeled facial-expression images.
- Emotion Classification: Classifies images and videos into seven emotion categories.
- Real-time Detection: Capable of detecting emotions in real-time using webcam input.
- Pre-trained Model: Includes a pre-trained model for quick setup and use.
- Interactive Interface: Provides a user-friendly interface for testing and visualization.
- Python 3.7+
- pip
- OpenCV
- Other dependencies listed in 'requirements.txt'
- Clone the repository:

  ```bash
  git clone https://github.com/Praveenola/Real-Time-Emotion-Analysis-System.git
  cd Real-Time-Emotion-Analysis-System
  ```
- Create and activate a virtual environment (optional but recommended):

  ```bash
  python -m venv venv
  source venv/bin/activate  # On Windows use `venv\Scripts\activate`
  ```
- Install the required dependencies:

  ```bash
  pip install -r requirements.txt
  ```
- To test the model on an image:

  ```bash
  python test.py --image path/to/image.jpg
  ```
- To use the webcam for real-time emotion detection:

  ```bash
  python test.py --webcam
  ```
- Run the Gradio app:

  ```bash
  gradio app.py
  ```
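For orientation, here is a minimal sketch of what `app.py` might contain, assuming a Keras model trained on 48×48 grayscale FER-2013 crops; the model path, preprocessing, and label order are illustrative assumptions, not the project's exact code:

```python
# Hypothetical minimal app.py; the model path and preprocessing are
# assumptions based on FER-2013 (48x48 grayscale), not the exact project code.
import cv2
import gradio as gr
from tensorflow.keras.models import load_model

# Label order must match the order used during training.
EMOTIONS = ["Angry", "Disgust", "Fear", "Happy", "Sad", "Surprise", "Neutral"]
model = load_model("Custom_CNN_model.keras")  # adjust to your model file

def predict(image):
    # Convert the uploaded RGB image to a normalized 48x48 grayscale crop.
    gray = cv2.cvtColor(image, cv2.COLOR_RGB2GRAY)
    face = cv2.resize(gray, (48, 48)).astype("float32") / 255.0
    probs = model.predict(face.reshape(1, 48, 48, 1), verbose=0)[0]
    return {label: float(p) for label, p in zip(EMOTIONS, probs)}

demo = gr.Interface(fn=predict, inputs=gr.Image(), outputs=gr.Label(num_top_classes=3))
demo.launch()
```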
The emotion detection model is built on a Convolutional Neural Network (CNN) and incorporates transfer-learning architectures such as ResNet50V2 and VGG16. It is trained on the FER-2013 dataset to classify facial expressions into seven emotion categories. A sketch of how such a model can be assembled follows.
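As one illustration, here is a minimal sketch of a ResNet50V2-based classifier in Keras, assuming 48×48 grayscale FER-2013 inputs replicated to three channels; the layer sizes and hyperparameters are assumptions, not the project's exact configuration:

```python
# Hypothetical sketch of a transfer-learning head on ResNet50V2;
# layer sizes and hyperparameters are illustrative assumptions.
from tensorflow.keras import layers, models
from tensorflow.keras.applications import ResNet50V2

NUM_CLASSES = 7  # Angry, Disgust, Fear, Happy, Sad, Surprise, Neutral

inputs = layers.Input(shape=(48, 48, 1))              # FER-2013 is 48x48 grayscale
rgb = layers.Concatenate()([inputs, inputs, inputs])  # backbone expects 3 channels
base = ResNet50V2(include_top=False, weights="imagenet", input_tensor=rgb)
base.trainable = False                                # freeze the backbone initially

x = layers.GlobalAveragePooling2D()(base.output)
x = layers.Dense(256, activation="relu")(x)
x = layers.Dropout(0.5)(x)
outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)

model = models.Model(inputs, outputs)
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
```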
To train the model, use the following command:

```bash
python train.py --dataset path/to/FER-2013 --epochs 50 --batch_size 64
```
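For reference, a hypothetical sketch of the data pipeline `train.py` might use, assuming FER-2013 is laid out as one folder per class; the directory layout, augmentation settings, and the `model` object (from the sketch above) are assumptions:

```python
# Hypothetical training data pipeline; folder layout and augmentation settings
# are assumptions. `model` is the compiled model from the sketch above.
from tensorflow.keras.preprocessing.image import ImageDataGenerator

datagen = ImageDataGenerator(rescale=1.0 / 255,
                             horizontal_flip=True,
                             rotation_range=10,
                             validation_split=0.1)

train_gen = datagen.flow_from_directory(
    "path/to/FER-2013/train", target_size=(48, 48), color_mode="grayscale",
    batch_size=64, class_mode="categorical", subset="training")
val_gen = datagen.flow_from_directory(
    "path/to/FER-2013/train", target_size=(48, 48), color_mode="grayscale",
    batch_size=64, class_mode="categorical", subset="validation")

model.fit(train_gen, validation_data=val_gen, epochs=50)
```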
A pre-trained model is included in the repository for quick use and testing.
The provided deployment code sets up a real-time emotion detection system using a webcam. It uses a pre-trained deep learning model to classify emotions from facial expressions detected in the live video feed: OpenCV handles video capture and face detection, and Keras handles emotion classification. A minimal sketch of the full loop appears after the walkthrough below.
- Python 3.x: Ensure Python is installed on your system.
- Libraries: Install the necessary Python libraries with the following command:

  ```bash
  pip install keras opencv-python numpy
  ```
- Pre-trained Models: You need a pre-trained emotion detection model saved in the `.keras` or `.h5` format. Replace the model path in the code as needed.
- Face Classifier: `haarcascade_frontalface_default.xml`
  - Download the Haar Cascade model from the OpenCV GitHub repository (under `data/haarcascades`).
- Emotion Classification Model:
  - Ensure you have either `Custom_CNN_model.keras` or `Final_Resnet50_Best_model.keras` available in your working directory.
- Setup Model Paths: Update the paths to the model files in the code if necessary:

  ```python
  classifier = load_model(r'path/to/your/model.keras')
  ```
- Run the Code:
  - Save the provided code in a Python script file, e.g., `deploy_emotion_detection.py`.
  - Execute the script using:

    ```bash
    python deploy_emotion_detection.py
    ```
- Interactive Video Feed:
  - The code starts capturing video from your webcam.
  - It detects faces in each frame, classifies emotions, and displays the results in real time.
- Exit the Application:
  - Press the 'q' key while the video feed window is active to exit the application.
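Putting the walkthrough together, here is a minimal sketch of such a loop, assuming a Haar Cascade face detector and a 48×48 grayscale model trained on FER-2013; the file paths and preprocessing details are placeholders to adjust:

```python
# Hypothetical sketch of the webcam loop described above; model and cascade
# paths are placeholders, and 48x48 grayscale input is an assumption.
import cv2
import numpy as np
from tensorflow.keras.models import load_model

# Label order must match the order used during training.
EMOTIONS = ["Angry", "Disgust", "Fear", "Happy", "Sad", "Surprise", "Neutral"]
face_cascade = cv2.CascadeClassifier("haarcascade_frontalface_default.xml")
classifier = load_model("Custom_CNN_model.keras")

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
        # Crop the face, resize to the model's input size, and normalize.
        roi = cv2.resize(gray[y:y + h, x:x + w], (48, 48)).astype("float32") / 255.0
        probs = classifier.predict(roi.reshape(1, 48, 48, 1), verbose=0)[0]
        label = EMOTIONS[int(np.argmax(probs))]
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.putText(frame, label, (x, y - 10),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.9, (0, 255, 0), 2)
    cv2.imshow("Real-Time Emotion Analysis", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press 'q' to quit, as noted above
        break

cap.release()
cv2.destroyAllWindows()
```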
- No Webcam Feed: Ensure your webcam is connected and properly configured. Check webcam permissions in your system settings.
- Model Not Loading: Verify the model file path and ensure it matches the format expected by `load_model()`; a quick sanity check follows this list.
- Dependencies: Ensure all required Python packages are installed correctly.
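If the model fails to load, this quick hypothetical check confirms the file loads and exposes the expected seven-way output (the path is a placeholder):

```python
# Hypothetical sanity check; replace the path with your model file.
from tensorflow.keras.models import load_model

model = load_model("Custom_CNN_model.keras")
model.summary()
print(model.output_shape)  # expect (None, 7) for the seven emotion classes
```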
- The model's accuracy and performance depend on the quality of the training process and the dataset used.
- For enhanced performance or additional features, consider fine-tuning the model or integrating additional pre-processing steps; a fine-tuning sketch follows.
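As one example of such fine-tuning, assuming the frozen ResNet50V2 backbone from the model sketch above, the top of the backbone can be unfrozen and retrained at a lower learning rate; the layer count and learning rate here are assumptions:

```python
# Hypothetical fine-tuning pass; `base`, `model`, `train_gen`, and `val_gen`
# refer to the earlier sketches. Layer count and learning rate are assumptions.
from tensorflow.keras.optimizers import Adam

base.trainable = True
for layer in base.layers[:-30]:   # keep all but the last 30 layers frozen
    layer.trainable = False

model.compile(optimizer=Adam(learning_rate=1e-5),
              loss="categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_gen, validation_data=val_gen, epochs=10)
```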
Contributions are always welcome!
Please see `contributing.md` for ways to get started and adhere to this project's code of conduct.
I welcome contributions to the Real-Time Emotion Analysis project! Follow these steps to contribute:
- Fork the repository:
  - Click the "Fork" button at the top right of this repository page to create a copy of the repository under your own GitHub account.
- Create your feature branch:
  - Open a terminal and clone the forked repository to your local machine (substitute your own fork's URL):

    ```bash
    git clone https://github.com/Praveenola/Real-Time-Emotion-Analysis-System.git
    cd Real-Time-Emotion-Analysis-System
    ```

  - Create a new branch for your feature or bugfix:

    ```bash
    git checkout -b feature/new-feature
    ```
- Commit your changes:
  - Make the necessary changes in your local repository.
  - Stage the changes:

    ```bash
    git add .
    ```

  - Commit the changes with a descriptive message:

    ```bash
    git commit -m 'Add new feature'
    ```
- Push to the branch:
  - Push the changes to your forked repository:

    ```bash
    git push origin feature/new-feature
    ```
- Open a Pull Request:
  - Go to the original repository on GitHub; you will see a prompt to open a Pull Request from your new branch.
  - Provide a descriptive title and a detailed description of your changes, then submit the Pull Request.
Thank you for contributing!
We would like to extend our heartfelt thanks to the following individuals and organizations for their contributions and support:
- FER-2013 Dataset: We gratefully acknowledge the creators of the FER-2013 dataset, which provided the essential data for training and evaluating the emotion detection model.
- ResNet and VGG Teams: Special thanks to the teams behind the ResNet50V2 and VGG16 architectures for their groundbreaking work in deep learning and computer vision, which greatly enhanced the performance of our model.
- Open Source Community: Our project benefits immensely from the tools and libraries available in the open-source community. We appreciate the developers and maintainers of the various Python libraries and frameworks we utilized.
- Contributors: Thank you to all the contributors who help improve this project. Your efforts in reviewing code, reporting issues, and suggesting features are invaluable.
- Supporters and Users: A big thank you to everyone who has supported and used this project. Your feedback and enthusiasm drive us to continuously improve and innovate.
If you have contributed in any way or provided feedback, we appreciate your support and involvement in making this project better.