CS5500: Foundations of Software Engineering
Team 3 - Daniel Lisko, Matthew Vargas, Michael Chang, Semaa Amin, and Yvette Green
Image Classification with Jetson Nano coupled with Output from Arduino
- Inspiration came from the confusion users face when trying to dispose of waste properly
- Help reduce the manual effort of sorting different categories of waste
- Personal vs. Commercial
- Distributed prototypes at every waste bin can help crowdsource model improvement; the same model can help automate robotic sorting at large disposal facilities
- Nvidia Jetson Nano Developer Kit: https://developer.nvidia.com/embedded/jetson-nano-developer-kit
- Kaggle: https://www.kaggle.com/
- Logitech C270 HD Webcam
- Python
- Arduino board - MKR WiFi 1010
- ThingSpeak: https://thingspeak.com/ - Channel, TalkBack & ThingTweet
- A USB Logitech camera is connected to the Jetson Nano and takes a picture of a piece of garbage.
- The image is classified by a machine learning model trained on the Kaggle dataset of recyclable objects (see the classification sketch after this list).
- The image is identified as one of the following categories:
- Cardboard: 0
- Glass: 1
- Metal: 2
- Paper: 3
- Plastic: 4
- Trash: 5
- ThingSpeak's ThingTweet app posts the result to Twitter.
- ThingSpeak's TalkBack feature changes the LED on the Arduino board based on the identified category (see the ThingSpeak sketch after this list).
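A minimal Python sketch of the capture-and-classify step described above. It assumes a Keras/TensorFlow model saved as `trash_classifier.h5` (trained on the Kaggle recyclables dataset) with a 224x224 input; the file name, input size, and preprocessing are illustrative placeholders rather than the team's exact artifacts.

```python
import cv2
import numpy as np
import tensorflow as tf

# Labels 0-5 in the same order as the workflow above.
CATEGORIES = ["cardboard", "glass", "metal", "paper", "plastic", "trash"]

# Placeholder model file; assumed to be a Keras model trained on the Kaggle dataset.
model = tf.keras.models.load_model("trash_classifier.h5")

def capture_frame(device_index=0):
    """Grab a single frame from the Logitech C270 attached to the Jetson Nano."""
    camera = cv2.VideoCapture(device_index)
    ok, frame = camera.read()
    camera.release()
    if not ok:
        raise RuntimeError("Could not read a frame from the webcam")
    return frame

def classify(frame):
    """Return (label_index, label_name) for the captured image."""
    resized = cv2.resize(frame, (224, 224))          # assumed model input size
    batch = np.expand_dims(resized.astype("float32") / 255.0, axis=0)
    probabilities = model.predict(batch)[0]
    label = int(np.argmax(probabilities))
    return label, CATEGORIES[label]

if __name__ == "__main__":
    index, name = classify(capture_frame())
    print(f"Identified category {index}: {name}")
```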
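A minimal Python sketch of the ThingSpeak side of the pipeline, using the public ThingTweet and TalkBack HTTP endpoints. `THINGTWEET_KEY`, `TALKBACK_KEY`, `TALKBACK_ID`, and the `LED_<n>` command string are placeholders; the Arduino MKR 1010 is assumed to poll the TalkBack queue and set its LED based on the command it pops.

```python
import requests

THINGTWEET_KEY = "YOUR_THINGTWEET_API_KEY"  # placeholder credential
TALKBACK_KEY = "YOUR_TALKBACK_API_KEY"      # placeholder credential
TALKBACK_ID = "12345"                       # placeholder TalkBack queue ID

def tweet_result(label_name):
    """Post the identified category to Twitter via ThingSpeak's ThingTweet app."""
    requests.post(
        "https://api.thingspeak.com/apps/thingtweet/1/statuses/update",
        data={"api_key": THINGTWEET_KEY,
              "status": f"Smart bin identified an item as {label_name}"},
        timeout=10,
    )

def queue_led_command(label_index):
    """Queue a TalkBack command that the Arduino polls to change its LED."""
    requests.post(
        f"https://api.thingspeak.com/talkbacks/{TALKBACK_ID}/commands.json",
        data={"api_key": TALKBACK_KEY, "command_string": f"LED_{label_index}"},
        timeout=10,
    )

if __name__ == "__main__":
    tweet_result("plastic")
    queue_led_command(4)
```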
- Expand the model for more accurate predictions
- Crowdsource images to expand our training data set (e.g., a CAPTCHA-style labeling model)
- Connect to a database (scalability)
- Expand to include more categories
- E-waste and compost
- Possibly present the project at a SmartCities event
- Upgrade hardware
- 4GB or 8GB Jetson Nano
- Use more capable Arduino boards to add LED strip lights, LCD screens, and sound