The Sign Talk Web App is a platform designed to facilitate communication with deaf individuals by translating their signs into understandable text. This web application employs a combination of HTML, CSS, JavaScript, and Python to create an interactive and user-friendly experience.
- Sign-to-Text Translation: Easily understand what deaf individuals are communicating through sign language.
- User-Friendly Interface: Intuitive design for seamless navigation and a positive user experience.
- Multi-Platform Compatibility: Accessible through various web browsers for desktop and mobile devices.
Frontend:
- HTML
- CSS
- JavaScript
Backend:
- Python

Tools:
- Visual Studio Code
- Kaggle (for data-related tasks)
- PyCharm
- Affinity Designer (for logo design)
For any inquiries or feedback, please contact:
- Nirmal Khadka at nirmal.201719@ncit.edu.np
- Deepu Shah at deepu.201707@ncit.edu.np
- Puspha Raj Sangrula
- Skanda Nupana
This repository contains Python code for two related projects: a hand gesture data collection script and a hand gesture recognition script. The data collection script captures images of hand gestures using a webcam, while the recognition script utilizes a pre-trained model to classify the hand gestures in real-time.
Data Collection Script:
- Utilizes OpenCV and the cvzone library for hand detection.
- Captures and saves images of hand gestures with predefined labels.
- Allows users to customize the folder for saving images and set the number of images to capture.
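A minimal sketch of the capture loop is shown below. It assumes cvzone's HandDetector API; the folder path, image size, and padding values are illustrative, and aspect-ratio handling is omitted for brevity:

```python
import os

import cv2
from cvzone.HandTrackingModule import HandDetector

cap = cv2.VideoCapture(0)
detector = HandDetector(maxHands=1)

folder = "Data/Hello"  # hypothetical destination for captured images
imgSize = 300          # every saved sample is resized to this square
offset = 20            # padding around the detected hand
counter = 0
os.makedirs(folder, exist_ok=True)

sample = None
while True:
    success, img = cap.read()
    if not success:
        break
    hands, img = detector.findHands(img)
    if hands:
        # Crop the hand region with padding, clamped to the frame edges.
        x, y, w, h = hands[0]["bbox"]
        crop = img[max(0, y - offset):y + h + offset,
                   max(0, x - offset):x + w + offset]
        if crop.size:
            sample = cv2.resize(crop, (imgSize, imgSize))
    cv2.imshow("Data Collection", img)
    key = cv2.waitKey(1)
    if key == ord("s") and sample is not None:
        counter += 1
        cv2.imwrite(f"{folder}/Image_{counter}.jpg", sample)
        print(f"Saved image {counter}")
    elif key == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```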
Recognition Script:
- Combines hand detection using cvzone with a pre-trained gesture recognition model.
- Recognizes and classifies hand gestures in real-time.
- Displays bounding boxes and text annotations around the detected hands and their classifications.
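A minimal sketch of the recognition loop follows, assuming cvzone's Classifier wrapper around a pre-trained Keras model; the file names under Model/ and the label list are assumptions:

```python
import cv2
from cvzone.HandTrackingModule import HandDetector
from cvzone.ClassificationModule import Classifier

cap = cv2.VideoCapture(0)
detector = HandDetector(maxHands=1)
# Assumed file names; use whatever Model.zip actually contains.
classifier = Classifier("Model/keras_model.h5", "Model/labels.txt")
labels = ["A", "B", "C"]  # hypothetical label set

imgSize = 300
offset = 20

while True:
    success, img = cap.read()
    if not success:
        break
    hands, img = detector.findHands(img)
    if hands:
        x, y, w, h = hands[0]["bbox"]
        crop = img[max(0, y - offset):y + h + offset,
                   max(0, x - offset):x + w + offset]
        if crop.size:
            # Classify the fixed-size crop, then annotate the frame with a
            # bounding box and the predicted label.
            sample = cv2.resize(crop, (imgSize, imgSize))
            prediction, index = classifier.getPrediction(sample, draw=False)
            cv2.rectangle(img, (x - offset, y - offset),
                          (x + w + offset, y + h + offset), (255, 0, 255), 2)
            cv2.putText(img, labels[index], (x, y - offset - 10),
                        cv2.FONT_HERSHEY_SIMPLEX, 1.5, (255, 0, 255), 2)
    cv2.imshow("Recognition", img)
    if cv2.waitKey(1) == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```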
Requirements:
- Python 3.x
- OpenCV
- cvzone library
- TensorFlow (for the recognition script)
Clone the repository:
git clone https://github.com/Nepal-College-of-Information-Technology/ai-mini-project-pands
Install the required libraries:
pip install -r requirements.txt
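The repository's requirements.txt is the source of truth; if you need to recreate it, the libraries listed above suggest it would contain at least the following (package names only; pin versions as needed):

```
opencv-python
cvzone
mediapipe     # required by cvzone's HandDetector
tensorflow    # required by the recognition script
```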
Download the pre-trained gesture recognition model and labels from Model.zip and extract them into the Model folder.
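The exact contents of Model.zip are defined by the repository; assuming the Teachable-Machine-style export that cvzone's Classifier expects, the extracted folder would look something like:

```
Model/
├── keras_model.h5   # pre-trained gesture classifier
└── labels.txt       # one class label per line
```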
Run the data collection script:
python data_collection.py
Press the 's' key to save images. Images will be saved in the specified folder.
Run the recognition script:
python recognition.py
The script will display real-time hand gesture recognition with bounding boxes and text annotations.
Folder and Counter:
- You can customize the folder for saving images in the data collection script by modifying the folder variable.
- The counter variable keeps track of the number of captured images.
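For example, the relevant configuration near the top of data_collection.py might look like this (the path is a placeholder):

```python
folder = "Data/Hello"  # destination folder for captured images (placeholder)
counter = 0            # incremented each time a frame is saved
```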
Model:
- The recognition script uses a pre-trained model located in the Model folder. Ensure the model and labels are correctly downloaded.
This project is licensed under the MIT License - see the LICENSE file for details.
- Hand detection powered by the cvzone library.
- Gesture recognition model trained using TensorFlow.
Nirmal Khadka
Feel free to contribute, report issues, or suggest improvements. Happy coding!