
# AndroidDevChallenge


An SOS system for everyone, using an ML model trained on audio samples.

Tell us what your idea is.
Describe in 250 words what the feature or service will do and how you’ll use Machine Learning to push the bar:



This app is an SOS app that helps anyone in a moment of distress. Most currently available solutions fail in practice because the user may not be able to open the device (screen lock, weak access) or may not have the device within reach. This system lets the user record a preferred voice signal of their own as a distress trigger, and it also recognizes distress patterns from surrounding noise and the user's panicked voice.

  • Problem:

    • In distress, the victim might not have access to their device.
    • Even with the device in hand, they might not be able to open the SOS app or press the button combination that triggers a distress call.
    • Not everyone installs an SOS app on their device.
  • Why people should care:

    • In an emergency, the user gets help without having to take any action.
    • The system notifies their family members, and the police, ambulance, and fire brigade if required.
  • Proposal: Build a system where the device acts on behalf of the user in an emergency such as an accident, a fire, or even a kidnapping.

    • The system is an app consisting of the following modules: App -> Trained ML model -> Server
      • App - the user interface, where the user specifies personal contacts as emergency contacts and a few other settings.
      • On-device ML model - a trained model deployed on the user's device that analyses the surrounding situation and acts on the user's behalf.
      • Server - used to retrain the model remotely, to make it more accurate.
  • Why ML: As discussed above, the user might not have access to their device, so an on-device model can act on their behalf.
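The on-device detection module above can be sketched in Kotlin. This is a minimal, hypothetical sketch: `distressScore` is a stand-in for the trained audio classifier (the real app would run TensorFlow Lite inference), and the threshold and frame-count values are assumptions, not decided design.

```kotlin
import kotlin.math.sqrt

// Stand-in for the trained model: returns a "distress score" in [0, 1]
// from the RMS energy of one PCM audio frame. The real app would feed the
// frame to an on-device TensorFlow Lite classifier instead.
fun distressScore(frame: FloatArray): Float {
    val rms = sqrt(frame.map { it * it }.average()).toFloat()
    return rms.coerceIn(0f, 1f)
}

class DistressDetector(
    private val threshold: Float = 0.6f,  // assumed score cutoff
    private val requiredFrames: Int = 3   // consecutive frames before alerting
) {
    private var streak = 0

    // Feed one audio frame; returns true when a distress alert should fire.
    // Requiring several consecutive high-scoring frames avoids one-off spikes
    // (a door slam, a dropped phone) triggering a false alarm.
    fun onFrame(frame: FloatArray): Boolean {
        streak = if (distressScore(frame) >= threshold) streak + 1 else 0
        return streak >= requiredFrames
    }
}
```

The debouncing choice (consecutive frames rather than a single score) matters here because the app listens continuously in the background, so false positives would page emergency contacts.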



Tell us how you plan on bringing it to life.
Describe where your project is, how you could use Google’s help in the endeavor, and how you plan on using On-Device ML technology to bring the concept to life. The best submissions have a great idea combined with a concrete path of where you plan on going, which should include:

  1. any potential sample code you’ve already written,
  2. a list of the ways you could use Google’s help,
  3. as well as the timeline on how you plan on bringing it to life by May 1, 2020.

I haven't started working on the project yet. I have been learning the relevant technologies, from machine learning to Android development. To speed up the process, I have brought in some friends with common interests. We really hope to solve this problem and to make the product widely accessible. In the future, we would be glad to see this system integrated with Play services and Google Assistant.
Sample code will be written for Android. I found that Firebase ML Kit is well suited to this kind of work, and it makes it easy to run a TensorFlow Lite model on the Android client. This doesn't always require an internet connection, but sending the distress signal to emergency contacts requires at least a cellular network, and cellular charges may apply depending on the carrier.
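To make the offline/online split concrete, here is a hedged sketch of the one step that does need the network: building the SMS body sent to each emergency contact after the on-device model flags distress. The names, the message format, and the idea of attaching a last known location are all assumptions for illustration; on Android the resulting string would be handed to `SmsManager`, but the sketch itself is plain Kotlin.

```kotlin
// Hypothetical emergency contact record, as configured in the App module.
data class EmergencyContact(val name: String, val phone: String)

// Builds the SMS body. Location may be unknown if there is no GPS fix,
// so the map link is included only when coordinates are available.
fun buildAlertMessage(userName: String, lat: Double?, lon: Double?): String {
    val location = if (lat != null && lon != null)
        "Last known location: https://maps.google.com/?q=$lat,$lon"
    else
        "Location unavailable"
    return "SOS: $userName may be in distress. $location. " +
        "This message was sent automatically by their device."
}
```

In the real app this would run after the detector fires, looping over the contacts the user configured, which is why only this step (and not the ML inference) depends on cellular connectivity.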


How Google could help us achieve the goal:

  1. Assign us a mentor for the project
  2. Make some software free for experimentation
  3. Help us reach a wide audience
  4. Help spread the technology to every part of the world and make it feasible
  5. Provide data sets for the project
  6. Integrate it with Play services and Google Assistant

Timeline:

By 15th December 2019:

  • Project structuring and setup
  • Android application beta model
  • Research and learning

By 31st December 2019:

  • Finalising the best possible method for the application
  • Acquiring/generating and cleaning data samples
  • Documenting and training models

January 2020:

  • Development and ML integration

February 2020:

  • Development and UI/UX design

March 2020:

  • Testing and beta program

April 2020:

  • Testing and deployment


Tell us about you.
A great idea is just one part of the equation; we also want to learn a bit more about you. Share with us some of your other projects so we can get an idea of how we can assist you with your project.

*Hello! I am Santosh Gaikar, Android Developer at HaikuJam Technologies Pvt Ltd. I love Android and have over four years of industry experience with it. I have worked on BLE and Android IoT projects in the past; currently I am working on the HaikuJam app, a collaborative social writing app that lets strangers write together.*
