This project is a sign language alphabet recognizer built with Python, OpenCV, and a convolutional neural network (CNN) for classification. The goal is to build a neural network that can identify letters of the American Sign Language (ASL) alphabet and translate them into text and voice.
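For illustration, a minimal CNN classifier for the 26 letters might look like the sketch below. This is not necessarily the architecture used in this project; the layer sizes and the assumed 64x64 input resolution are placeholders.

```python
# Minimal CNN sketch for 26-letter classification (illustrative only;
# the actual architecture used in this project may differ).
from tensorflow.keras import layers, models

NUM_CLASSES = 26           # letters A-Z
INPUT_SHAPE = (64, 64, 3)  # assumed resize of the 200x200 source images

def build_model():
    model = models.Sequential([
        layers.Conv2D(32, (3, 3), activation="relu", input_shape=INPUT_SHAPE),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(64, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```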
The primary data source for this project is a compiled American Sign Language (ASL) dataset from Kaggle. It consists of 18,200 images, each 200x200 pixels, split across 26 classes (one per letter A-Z) with 700 images per class.
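A possible way to load such a dataset with OpenCV is sketched below. It assumes the images are extracted into one subdirectory per letter (e.g. `dataset/A/...`); the actual folder layout, file names, and training resolution may differ.

```python
# Sketch: load the ASL alphabet images with OpenCV into NumPy arrays.
# Assumes a layout of dataset/<LETTER>/<image file>; adjust paths as needed.
import os
import string

import cv2
import numpy as np

IMG_SIZE = 64  # assumed training resolution; the source images are 200x200

def load_dataset(root="dataset"):
    images, labels = [], []
    for label, letter in enumerate(string.ascii_uppercase):
        letter_dir = os.path.join(root, letter)
        for name in os.listdir(letter_dir):
            img = cv2.imread(os.path.join(letter_dir, name))
            if img is None:  # skip unreadable files
                continue
            img = cv2.resize(img, (IMG_SIZE, IMG_SIZE))
            images.append(img)
            labels.append(label)
    X = np.array(images, dtype="float32") / 255.0  # normalize to [0, 1]
    y = np.array(labels)
    return X, y
```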
- Python
- OpenCV
- NumPy
- Pandas
- TensorFlow
- Keras
All the requirements can be installed from the "requirements.txt" file:
```bash
pip install -r requirements.txt
```