
AndroidDevChallenge


Android application for translating sign language to text or speech

Tell us what your idea is.
Describe in 250 words what the feature or service will do and how you’ll use Machine Learning to push the bar:



I plan to implement a machine learning model using Firebase and TensorFlow Lite in the app I am developing.
I want to help bridge the gap between deaf and mute people and those who can hear and speak.
Sign language is complex, and most of the population has never been taught it, so a communication gap separates the two groups.
Currently, understanding a deaf or mute person usually requires an interpreter, who charges a hefty fee; the most common alternative is to write things down.
There have been recent developments in this area, but they require sophisticated equipment attached to the mobile device.
My plan is a bit different: I want to leverage the power and flexibility of TensorFlow and integrate it within my Android application.
Using neural networks, I will train a model on videos and images of sign language.
Once the model is trained, I will integrate it with the app to translate real-time video into text or speech. The Google Vision API can also be integrated for finer results, and AutoML in Firebase is a way to speed up development.
What makes this project unique is that it only requires the camera on a person's phone: all we need to do is point it at someone using sign language.
Similarly, text can be converted into animated sign language for deaf users to understand.
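
As a rough illustration of the pipeline described above (not a finished implementation), the Kotlin sketch below loads a bundled TensorFlow Lite model, classifies one preprocessed camera frame, and speaks the result. The model file name `sign_classifier.tflite`, the placeholder label list, and the `[1][224][224][3]` input shape are all assumptions for illustration; a real model trained on sign-language video would define its own.

```kotlin
// Minimal sketch: on-device sign classification plus speech output.
// Assumes a hypothetical sign_classifier.tflite bundled in assets that maps
// a 224x224 RGB frame to one score per sign label.
import android.content.Context
import android.speech.tts.TextToSpeech
import org.tensorflow.lite.Interpreter
import java.io.FileInputStream
import java.nio.MappedByteBuffer
import java.nio.channels.FileChannel

class SignTranslator(context: Context) {

    private val interpreter = Interpreter(loadModel(context, "sign_classifier.tflite"))
    private val tts = TextToSpeech(context) { /* init status ignored in this sketch */ }
    private val labels = listOf("hello", "thanks", "yes", "no") // placeholder label set

    // Memory-map the model file from the APK's assets folder.
    private fun loadModel(context: Context, name: String): MappedByteBuffer {
        val fd = context.assets.openFd(name)
        FileInputStream(fd.fileDescriptor).channel.use { channel ->
            return channel.map(FileChannel.MapMode.READ_ONLY, fd.startOffset, fd.declaredLength)
        }
    }

    // Classify one preprocessed frame (shape [1][224][224][3], values 0..1)
    // and speak the top-scoring label aloud.
    fun translate(frame: Array<Array<Array<FloatArray>>>) {
        val scores = Array(1) { FloatArray(labels.size) }
        interpreter.run(frame, scores)
        val best = scores[0].indices.maxByOrNull { scores[0][it] } ?: return
        tts.speak(labels[best], TextToSpeech.QUEUE_FLUSH, null, "sign")
    }
}
```

In a real app, frames would come from a camera analyzer (for example CameraX's ImageAnalysis), be resized and normalized to the model's input shape, and the speech output could be swapped for on-screen text.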



Tell us how you plan on bringing it to life.
Describe where your project is, how you could use Google’s help in the endeavor, and how you plan on using On-Device ML technology to bring the concept to life. The best submissions have a great idea combined with a concrete path of where you plan on going, which should include:

  1. any potential sample code you’ve already written,
  2. a list of the ways you could use Google’s help,
  3. as well as the timeline on how you plan on bringing it to life by May 1, 2020.

I have just started working on the project and have been learning all of the technologies involved, from machine learning to Android development. To speed up the process, I have brought in some friends with common interests. We really hope to solve this problem and make the product widely accessible. In the future, we plan to build a portable device, similar to Google Glass, that does the work automatically.
Sample code is written for Android, and we are currently in the experimentation phase, trying different approaches to find the best method for the purpose.
We found that Google Cloud's Vision API is very accurate for this kind of work; the only constraint is that it needs an internet connection.
If possible, we would like to make it an on-device application that does not always require an internet connection (a rough sketch of this follows below).
There are around 500 million deaf people in the world, and this number is expected to rise to 900 million by 2050. We need better ways to communicate in the future; we cannot keep relying on outdated workarounds for something as essential to human life as communication.
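
As a comparison point for the cloud-based Vision API, the sketch below shows what fully on-device inference can look like, using ML Kit's bundled general-purpose image labeler (the standalone successor to Firebase ML Kit, assuming the `com.google.mlkit:image-labeling` dependency). The labeler here is a generic classifier, not a sign-language model; it only illustrates that once the model ships inside the app, no internet connection is needed at runtime. The same pattern would apply to a custom TFLite sign model.

```kotlin
// Minimal sketch of on-device inference with ML Kit's bundled image labeler.
// No network connection is required at runtime; the model ships with the app.
import android.graphics.Bitmap
import com.google.mlkit.vision.common.InputImage
import com.google.mlkit.vision.label.ImageLabeling
import com.google.mlkit.vision.label.defaults.ImageLabelerOptions

fun labelOnDevice(bitmap: Bitmap) {
    val labeler = ImageLabeling.getClient(ImageLabelerOptions.DEFAULT_OPTIONS)
    val image = InputImage.fromBitmap(bitmap, /* rotationDegrees = */ 0)
    labeler.process(image)
        .addOnSuccessListener { labels ->
            // Each result carries a label string and a confidence score.
            labels.forEach { println("${it.text}: ${it.confidence}") }
        }
        .addOnFailureListener { e -> e.printStackTrace() }
}
```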


How Google could help us achieve the goal:

  1. Assign us a mentor for the project
  2. Make some software free for experimentation purposes
  3. Help us reach a wide audience
  4. Help spread the technology to every part of the world and make it feasible
  5. Provide a dataset for the project

Timeline:

By November 2019:

Project structuring and setup
Android application beta build
Research and learning

December 2019:

Finalising the best possible method for the application
Data acquisition and cleaning
Documenting and training models

January 2020:

Development and cloud integration

February 2020:

Development and UI/UX design

March 2020:

Testing and beta program

April 2020:

Testing and deployment


Tell us about you.
A great idea is just one part of the equation; we also want to learn a bit more about you. Share with us some of your other projects so we can get an idea of how we can assist you with your project.

Hello! I am Parag Ghorpade, a third-year computer engineering student in India.
I love coding and working on machine learning projects. My projects so far have mainly been based on machine learning and TensorFlow: self-driving cars, face recognition, and text recognition. Some were coded from scratch and some used cloud services and APIs.
I recently started developing Android applications because I wanted a way to deploy my ML models.
I am the Developer Student Clubs Lead on my campus and have worked closely with industry professionals to bring their knowledge to our college.
I have also been involved with GDG Pune, and I am glad to have their mentoring in almost every part of my career. GDG feels like a family: like-minded people working to contribute to society.
We want to solve real-world problems and make the solutions available to all; with this in mind, I would like to open-source the project for the community.
I have recently started working on some open-source projects and would love to be a key contributor.
I have been thinking about this project for quite some time, and I am willing to collaborate with everyone to make it possible.
