## About

The Emotion Detection For Facial Expressions project detects emotions by analyzing facial features with OpenCV and Python. The system predicts and recognizes the expressions of individuals facing the camera by leveraging Convolutional Neural Networks (CNNs) trained on facial images: it detects faces, extracts features, and classifies emotional expressions in real time.
Facial expressions are a natural way to communicate emotional states and intentions. The facial expression detection method consists of three main steps: face detection, feature extraction, and modeling. This project provides a simple, practical solution for detecting human emotions, contributing to the development of emotion detection systems in the digital era.
## Table of Contents

- About
- Files
- Training
- Testing with Face Landmarks
- Testing without Face Landmarks
- Dependencies
- Additional Resources
- Author
## Files

- Dataset: Dataset used for training the emotion detection model.
- Emotion Detection For Facial Expressions.pdf: Documentation providing details about the project.
- Emotion_2.h5: Pre-trained model weights saved in HDF5 format.
- Training.ipynb: Jupyter Notebook containing the code for training the CNN model.
- Test_with_face_landmarks.ipynb: Jupyter Notebook for testing the trained model with face landmarks.
- Test_with_out_face_landmarks.ipynb: Jupyter Notebook for testing the trained model without face landmarks.
- haarcascade_frontalface_default.xml: XML file containing the pre-trained Haar Cascade classifier for face detection.
- shape_predictor_68_face_landmarks.dat: Data file containing the shape predictor for detecting 68 facial landmarks.
- README.md: This file, providing an overview of the project.
## Training

The `Training.ipynb` notebook contains the code for training the CNN model using the provided dataset. It includes data preprocessing, model architecture definition, training process configuration, and model evaluation.
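The notebook itself is the authoritative source for the architecture. As a rough sketch, a Keras training setup for this kind of task might look like the following; the 48×48 grayscale input and the 7 emotion classes are assumptions (typical of FER-style datasets), not confirmed details of `Emotion_2.h5`:

```python
# Hypothetical sketch of a CNN for emotion classification; the real
# architecture and hyperparameters live in Training.ipynb.
from tensorflow.keras import layers, models

def build_emotion_cnn(input_shape=(48, 48, 1), num_classes=7):
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(64, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),  # reduce overfitting on small datasets
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Training and saving would then look roughly like:
# model.fit(train_images, train_labels, epochs=30, validation_split=0.1)
# model.save("Emotion_2.h5")
```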
## Testing with Face Landmarks

The `Test_with_face_landmarks.ipynb` notebook demonstrates how to use the trained model for real-time emotion detection with facial landmarks. It uses OpenCV for face detection and Dlib for locating the 68 facial landmarks. The system captures video input from a camera and overlays emotion labels, along with the landmarks, on detected faces in real time.
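A condensed sketch of such a detection-plus-landmarks loop is shown below. The emotion label order and the 48×48 grayscale model input are assumptions (check `Training.ipynb` for the actual values); the notebook remains the reference implementation.

```python
import numpy as np

# Assumed label order; the trained model defines the real mapping.
EMOTIONS = ["Angry", "Disgust", "Fear", "Happy", "Sad", "Surprise", "Neutral"]

def top_emotion(probs):
    """Map a softmax probability vector to its most likely emotion label."""
    return EMOTIONS[int(np.argmax(probs))]

def run_landmark_demo():
    # Heavy dependencies imported here so top_emotion stays usable on its own.
    import cv2
    import dlib
    from tensorflow.keras.models import load_model

    model = load_model("Emotion_2.h5")
    detector = cv2.CascadeClassifier("haarcascade_frontalface_default.xml")
    predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

    cap = cv2.VideoCapture(0)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for (x, y, w, h) in detector.detectMultiScale(gray, 1.3, 5):
            face = cv2.resize(gray[y:y + h, x:x + w], (48, 48)) / 255.0
            probs = model.predict(face.reshape(1, 48, 48, 1), verbose=0)[0]
            # Draw the 68 landmarks inside the detected face rectangle.
            shape = predictor(gray, dlib.rectangle(x, y, x + w, y + h))
            for i in range(68):
                p = shape.part(i)
                cv2.circle(frame, (p.x, p.y), 1, (0, 255, 0), -1)
            cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 0, 0), 2)
            cv2.putText(frame, top_emotion(probs), (x, y - 10),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.8, (255, 0, 0), 2)
        cv2.imshow("Emotion Detection", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()
```

Press `q` in the preview window to stop the loop.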
## Testing without Face Landmarks

The `Test_with_out_face_landmarks.ipynb` notebook showcases how to use the trained model for real-time emotion detection without facial landmarks. As in the previous method, OpenCV performs the face detection; the system captures video input from a camera and overlays emotion labels on detected faces in real time.
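The landmark-free path reduces to detect, preprocess, predict. A minimal sketch, with a dependency-free preprocessing helper, could look like this; the 48×48 input size and the label order are assumptions, and the notebook holds the actual code:

```python
import numpy as np

def preprocess_face(gray_frame, box, size=48):
    """Crop a detected face and shape it for the CNN: (1, size, size, 1) in [0, 1].

    The 48x48 grayscale input is an assumption; match whatever Emotion_2.h5 expects.
    """
    x, y, w, h = box
    face = gray_frame[y:y + h, x:x + w]
    rows = np.linspace(0, face.shape[0] - 1, size).astype(int)  # nearest-neighbour
    cols = np.linspace(0, face.shape[1] - 1, size).astype(int)  # resize without cv2
    face = face[np.ix_(rows, cols)]
    return (face.astype("float32") / 255.0).reshape(1, size, size, 1)

def run_demo():
    # Heavy dependencies imported here so preprocess_face stays importable alone.
    import cv2
    from tensorflow.keras.models import load_model

    model = load_model("Emotion_2.h5")
    detector = cv2.CascadeClassifier("haarcascade_frontalface_default.xml")
    labels = ["Angry", "Disgust", "Fear", "Happy", "Sad", "Surprise", "Neutral"]  # assumed order

    cap = cv2.VideoCapture(0)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for box in detector.detectMultiScale(gray, 1.3, 5):
            probs = model.predict(preprocess_face(gray, box), verbose=0)[0]
            x, y, w, h = box
            cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 0, 0), 2)
            cv2.putText(frame, labels[int(np.argmax(probs))], (x, y - 10),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.8, (255, 0, 0), 2)
        cv2.imshow("Emotion Detection", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()
```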
## Dependencies

This project requires the following dependencies:
- Python 3.x
- TensorFlow
- Keras
- NumPy
- Matplotlib
- OpenCV
- Dlib
- Jupyter Notebook
You can install these dependencies using pip:

```shell
pip install tensorflow keras numpy matplotlib opencv-python dlib jupyter
```
## Author

Gulam Kibria Chowdhury  
Software Developer || Competitive Programmer  
Sylhet, Bangladesh  
Gmail: gkchowdhury101@gmail.com