# A Work in Progress
This project aims to build a machine learning model that translates Indian Sign Language (ISL) to English text, acting as a simple medium of communication for people unfamiliar with sign language.
Hand recognition is done using the MediaPipe Hands solution in Python.
Tutorials I referred to:
- Real-time Hand Gesture Recognition using TensorFlow & OpenCV
- Python: Hand landmark estimation with MediaPipe
Currently, only dataset creation has been implemented (`save_gestures.py`).
## Instructions to create the dataset
- Create a virtual environment using `virtualenv` and activate it.
- Run `pip install -r requirements.txt`.
- To just play around with the hand detection, run `hand_recognition.py`.
- To start creating the dataset, run `save_gestures.py`. To overwrite old data, run `python save_gestures.py --new`. If a `gestures.csv` file already exists in the working directory, new data is appended to it by default.
- Press 'C' on your keyboard to start capturing a gesture.
- Enter the name of the gesture in the terminal.
- Raise your hand in front of the camera while making the gesture; the pixel coordinates of the detected landmarks will be captured automatically.
- Once the number of recorded datapoints reaches `TOTAL_DATAPOINTS`, capturing stops.
- Press 'C' to start recording a new gesture, or 'Q' to terminate the program.
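The saving steps above can be sketched roughly as follows. This is a hypothetical helper, not the actual internals of `save_gestures.py`; the `TOTAL_DATAPOINTS` value and the `save_gesture`/`parse_args` names are assumptions for illustration:

```python
# Sketch of the append-vs-overwrite logic behind the --new flag.
import argparse
import csv
import os

TOTAL_DATAPOINTS = 1000  # assumed value; capture stops at this many rows
CSV_PATH = "gestures.csv"

def parse_args(argv=None):
    parser = argparse.ArgumentParser(description="Record ISL gesture landmarks")
    parser.add_argument("--new", action="store_true",
                        help="overwrite old data instead of appending")
    return parser.parse_args(argv)

def save_gesture(name, landmark_rows, overwrite=False, path=CSV_PATH):
    """Write labelled landmark rows; append unless overwriting or no file exists."""
    mode = "w" if overwrite or not os.path.exists(path) else "a"
    with open(path, mode, newline="") as f:
        writer = csv.writer(f)
        for row in landmark_rows[:TOTAL_DATAPOINTS]:
            writer.writerow([name, *row])

args = parse_args(["--new"])  # simulate `python save_gestures.py --new`
save_gesture("hello", [[10, 20, 30, 40]], overwrite=args.new,
             path="demo_gestures.csv")
print(open("demo_gestures.csv").read().strip())  # hello,10,20,30,40
```

Each CSV row stores the gesture name as the label followed by the flattened landmark coordinates, which is a convenient format to feed into a classifier later.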
## To Do
- Study more about ISL and decide what changes need to be made.
- Test out different machine learning models and architectures.
- Work on deployment.