Project Overview: Face Detection using VGG16 and TensorFlow
Repository contents:
- face_detection.ipynb: Notebook that trains the model (reaches 99% validation accuracy).
- live_face_detection.py: Real-time face detection script built on OpenCV.
- Demo images: Sample images for testing.
- screen_shot.png (1 & 2): Screenshots of the live face detection in action.
- haarcascade_frontalface_default.xml: Haar cascade used to detect faces in the live footage.
Requirements:
- TensorFlow 1.13.1
- Keras 2.3.1
- OpenCV (cv2) 4.1.1
- Python 3.7.7
- Anaconda 4.7.11
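To confirm that your environment matches the versions above, a quick check like the following can help. This is a minimal sketch; it assumes the packages are already installed (for example inside an Anaconda environment).

```python
# Minimal environment sanity check: print the installed versions so they can
# be compared against the versions listed above.
import sys

import cv2
import keras
import tensorflow as tf

print("Python    :", sys.version.split()[0])
print("TensorFlow:", tf.__version__)
print("Keras     :", keras.__version__)
print("OpenCV    :", cv2.__version__)
```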
How to run:
- Download (or clone) the entire repository.
- Replace the training and testing data with your own dataset.
- Open face_detection.ipynb in Jupyter Notebook and run all cells.
What face_detection.ipynb does:
- Imports the necessary libraries.
- Defines variables such as image_size and the train/test paths.
- Downloads VGG16 and uses it as the base of the model.
- Builds and trains the model, validates it, and plots the training and validation accuracy (see the sketch after this list).
- Tests the model on a sample image.
- Saves the model as 'Final_Model_Face.h5'.
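The notebook steps above correspond roughly to the transfer-learning sketch below. This is a minimal sketch, not the notebook's exact code: the folder names (dataset/train, dataset/test), batch size, epoch count, and number of classes are assumptions to adjust for your own data, which is expected to have one sub-folder per person.

```python
# Minimal sketch of the transfer-learning pipeline described above.
# Paths, batch size, epochs, and num_classes are assumptions; adjust them to your data.
from keras.applications.vgg16 import VGG16
from keras.preprocessing.image import ImageDataGenerator
from keras.layers import Flatten, Dense
from keras.models import Model
import matplotlib.pyplot as plt

image_size = 224                      # VGG16's default input resolution
train_path = 'dataset/train'          # assumed layout: dataset/train/<person_name>/*.jpg
test_path = 'dataset/test'            # assumed layout: dataset/test/<person_name>/*.jpg
num_classes = 3                       # e.g. Ash, Malav, Nani

# Load VGG16 pre-trained on ImageNet without its classification head, and freeze it.
base = VGG16(weights='imagenet', include_top=False,
             input_shape=(image_size, image_size, 3))
for layer in base.layers:
    layer.trainable = False

# Add a small classification head for the face classes.
x = Flatten()(base.output)
output = Dense(num_classes, activation='softmax')(x)
model = Model(inputs=base.input, outputs=output)
model.compile(optimizer='adam', loss='categorical_crossentropy',
              metrics=['accuracy'])

# Feed images from the train/test folders.
train_gen = ImageDataGenerator(rescale=1.0 / 255).flow_from_directory(
    train_path, target_size=(image_size, image_size), batch_size=32,
    class_mode='categorical')
test_gen = ImageDataGenerator(rescale=1.0 / 255).flow_from_directory(
    test_path, target_size=(image_size, image_size), batch_size=32,
    class_mode='categorical')

history = model.fit_generator(train_gen, epochs=5, validation_data=test_gen)

# Plot training vs. validation accuracy.
plt.plot(history.history['accuracy'], label='train')
plt.plot(history.history['val_accuracy'], label='validation')
plt.legend()
plt.show()

# Save the trained model so live_face_detection.py can load it.
model.save('Final_Model_Face.h5')
```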
What live_face_detection.py does:
- Imports the libraries and loads 'Final_Model_Face.h5'.
- Opens the webcam and confirms it is working.
- Uses 'haarcascade_frontalface_default.xml' to detect faces in each frame.
- Converts each detected face to an array and feeds it to the model for prediction (a sketch follows this list).
- Displays the predicted name (Ash, Malav, or Nani) on the live camera feed.
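The live-detection loop can be sketched as follows. This is a minimal sketch, not the script itself: it assumes the model expects 224x224 RGB inputs scaled to [0, 1] and that the class order is Ash, Malav, Nani; both must match how the model was trained in the notebook.

```python
# Minimal sketch of the live-detection loop described above.
# The label order and preprocessing are assumptions; they must match the training setup.
import cv2
import numpy as np
from keras.models import load_model

labels = ['Ash', 'Malav', 'Nani']     # assumed class order
model = load_model('Final_Model_Face.h5')
face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')

cap = cv2.VideoCapture(0)             # open the default webcam
while True:
    ret, frame = cap.read()
    if not ret:                       # webcam not available / frame dropped
        break

    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)

    for (x, y, w, h) in faces:
        # Crop the detected face, convert BGR -> RGB to match the training
        # images, resize to the model's input size, and normalise to [0, 1].
        face = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2RGB)
        face = cv2.resize(face, (224, 224))
        face = np.expand_dims(face.astype('float32') / 255.0, axis=0)

        pred = model.predict(face)
        name = labels[int(np.argmax(pred))]

        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.putText(frame, name, (x, y - 10),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.9, (0, 255, 0), 2)

    cv2.imshow('Live Face Detection', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):   # press 'q' to quit
        break

cap.release()
cv2.destroyAllWindows()
```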
- If you run into any issues, double-check your file paths and installed module versions.
Thank you.
Ashish
LinkedIn: https://www.linkedin.com/in/ashishbarvaliya/