Welcome to Harpocrates, an innovative device designed to detect American Sign Language (ASL) gestures and provide real-time predictions. Currently, it supports basic signs like "Hello" and "Thank you," with the potential for expanding its repertoire through further training.
Built with Python, TensorFlow, and MediaPipe.
Harpocrates uses MediaPipe for hand, pose, and face detection, and feeds the extracted keypoints to a Sequential model with LSTM and Dense layers for prediction. Each action is captured as a sequence of 30 frames, giving the model the temporal context it needs to recognize a gesture rather than a single pose. Once a gesture is predicted, it is converted to speech, making communication seamless.
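As a rough illustration of this pipeline, here is a minimal sketch assuming MediaPipe's Holistic solution and a TensorFlow/Keras model. The exact layer sizes, label names, and feature dimensions below are assumptions for illustration, not the project's verified configuration.

```python
import numpy as np
import mediapipe as mp
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

mp_holistic = mp.solutions.holistic

ACTIONS = ["hello", "thank_you"]  # hypothetical labels for the supported signs
SEQUENCE_LENGTH = 30              # 30 frames captured per action
NUM_KEYPOINTS = 1662              # 33*4 pose + 468*3 face + 2 * (21*3) hand values

def extract_keypoints(results):
    """Flatten MediaPipe Holistic landmarks into one feature vector,
    zero-filling any body part that was not detected in the frame."""
    pose = (np.array([[p.x, p.y, p.z, p.visibility]
                      for p in results.pose_landmarks.landmark]).flatten()
            if results.pose_landmarks else np.zeros(33 * 4))
    face = (np.array([[p.x, p.y, p.z]
                      for p in results.face_landmarks.landmark]).flatten()
            if results.face_landmarks else np.zeros(468 * 3))
    left = (np.array([[p.x, p.y, p.z]
                      for p in results.left_hand_landmarks.landmark]).flatten()
            if results.left_hand_landmarks else np.zeros(21 * 3))
    right = (np.array([[p.x, p.y, p.z]
                       for p in results.right_hand_landmarks.landmark]).flatten()
             if results.right_hand_landmarks else np.zeros(21 * 3))
    return np.concatenate([pose, face, left, right])

# Sequential stack of LSTM and Dense layers, compiled with the
# categorical_accuracy metric that the training graph below tracks.
model = Sequential([
    LSTM(64, return_sequences=True, activation="relu",
         input_shape=(SEQUENCE_LENGTH, NUM_KEYPOINTS)),
    LSTM(128, return_sequences=True, activation="relu"),
    LSTM(64, return_sequences=False, activation="relu"),
    Dense(64, activation="relu"),
    Dense(32, activation="relu"),
    Dense(len(ACTIONS), activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["categorical_accuracy"])
```

At inference time, the most recent 30 frames of keypoints can be kept in a sliding window and the predicted label spoken aloud. The README does not name the speech library, so pyttsx3 below is an assumption, as is the 0.8 confidence threshold:

```python
import cv2
import pyttsx3

engine = pyttsx3.init()      # offline text-to-speech (assumed library choice)
cap = cv2.VideoCapture(0)
sequence = []

with mp_holistic.Holistic(min_detection_confidence=0.5,
                          min_tracking_confidence=0.5) as holistic:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB input; OpenCV captures BGR frames.
        results = holistic.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        sequence.append(extract_keypoints(results))
        sequence = sequence[-SEQUENCE_LENGTH:]  # sliding window of 30 frames

        if len(sequence) == SEQUENCE_LENGTH:
            probs = model.predict(np.expand_dims(sequence, axis=0), verbose=0)[0]
            if probs.max() > 0.8:  # only speak confident predictions
                engine.say(ACTIONS[int(np.argmax(probs))])
                engine.runAndWait()

cap.release()
```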
Harpocrates serves as a bridge between sign language users and those unfamiliar with it, facilitating communication and understanding.
This graph shows the model's categorical accuracy improving over successive training epochs.
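A curve like this can be reproduced from the Keras training history. Continuing from the sketch above, and assuming hypothetical X_train/y_train arrays of keypoint sequences and one-hot labels:

```python
import matplotlib.pyplot as plt

# X_train / y_train are hypothetical training arrays;
# 200 epochs is an assumed training length.
history = model.fit(X_train, y_train, epochs=200)

plt.plot(history.history["categorical_accuracy"])
plt.xlabel("Epoch")
plt.ylabel("Categorical accuracy")
plt.title("Categorical accuracy over training epochs")
plt.show()
```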
Face, pose, and hand detection in action.
Real-time gesture prediction.
3D model of Harpocrates.
This project draws inspiration and guidance from Nicholas Renotte's tutorial.
Feel free to explore and contribute to Harpocrates! 🚀