Build an iOS game powered by Core ML and Watson Visual Recognition
Use Watson Visual Recognition and Core ML to create a Kitura-based iOS game that challenges users to find objects from a predetermined list.
Artificial Intelligence
Whether you are identifying pieces of art in a museum or creating a game, there are many use cases for computer vision on a mobile device. With Core ML, detecting objects has never been faster; and with Watson Visual Recognition and Watson Studio, creating a model couldn't be easier. This code pattern shows you how to create your own iOS game to challenge players to find a variety of predetermined objects as fast as they can.
By David Okun, Sanjeev Ghimire, and Anton McConville
- Coming soon!
In this code pattern, we will create a timed iOS game that challenges users to find items from a list of objects. The list of objects is customizable, and Watson Visual Recognition is used to train a Core ML model on it. The Core ML model is deployed to the iOS device when the user initializes the app. The beauty of Core ML is that the recognition is done on the device rather than over an HTTP call, meaning it's that much faster. The code pattern also uses Kitura to power a leaderboard, Cloudant to persist user records and best times, and Push Notifications to let a user know when they have been removed from the top of the leaderboard.
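To make the "on device" point concrete, here is a minimal sketch of classifying a single frame with Core ML via Apple's Vision framework. It illustrates the general mechanism rather than the app's actual code (the app drives this through Lumina), and `WatsonClassifier` is a placeholder for whatever Xcode-generated class your downloaded .mlmodel produces.

```swift
import UIKit
import CoreML
import Vision

// Minimal on-device classification sketch. `WatsonClassifier` is a placeholder
// name for the Xcode-generated class of the Core ML model you deploy; swap in
// your own model type.
func classify(_ image: UIImage) {
    guard let cgImage = image.cgImage,
          let model = try? VNCoreMLModel(for: WatsonClassifier().model) else { return }

    let request = VNCoreMLRequest(model: model) { request, _ in
        // The top observation is the most confident label for the frame.
        guard let results = request.results as? [VNClassificationObservation],
              let best = results.first else { return }
        print("Saw \(best.identifier) (confidence: \(best.confidence))")
    }

    // Everything happens locally; no HTTP call is made during classification.
    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try? handler.perform([request])
}
```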
Our application has been published to the App Store under the name "Watson ML", and we encourage folks to give it a try. It comes with a built-in model for identifying six objects: shirts, jeans, apples, plants, notebooks, and, lastly, a plush bee. Our app could not have been built without fantastic pre-existing content from other IBMers. We use David Okun's Lumina project and Anton McConville's Avatar generator microservice; see the references below for more information.
We include instructions on how to modify the application to fit your own needs. Feel free to fork the code and modify it to create your own conference swag game, scavenger hunt, guided tour, or team-building or training event.
When the reader has completed this Code Pattern, they will understand how to:
- Create a custom visual recognition model in Watson Studio
- Develop a Swift-based iOS application
- Deploy a Kitura-based leaderboard
- Detect objects with Core ML and Lumina
The flow of the application is as follows:
- Generate a Core ML model using Watson Visual Recognition and Watson Studio (see the sketch after this list).
- User runs the iOS application for the first time.
- The iOS application calls out to the Avatar microservice to generate a random username.
- The iOS application makes a call to Cloudant to create a user record.
- The iOS application notifies the Kitura service that the game has started.
- The user aims the phone's camera as they search for items, using Core ML to identify them.
- The user receives a push notification if they are bumped from the leaderboard.
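As a rough sketch of the first step above, this is approximately how an iOS app can pull the trained Core ML model down from Watson Visual Recognition using the Watson Swift SDK. The `updateLocalModel` call and the `VisualRecognition` initializer are from the 2018-era SDK, exact parameter lists vary by SDK release, and the API key and classifier ID shown are placeholders.

```swift
import VisualRecognitionV3 // Watson Swift SDK

// Placeholder credentials; substitute the values from your own
// Watson Visual Recognition service instance and custom classifier.
let apiKey = "your-api-key"
let classifierID = "your-custom-classifier-id"

// Initializer shape varies between SDK releases; this matches the 2018-era SDK.
let visualRecognition = VisualRecognition(version: "2018-03-19", apiKey: apiKey)

// Download (or refresh) the Core ML model that Watson trained for the custom
// classifier. Once cached, classification runs entirely on the device.
visualRecognition.updateLocalModel(
    classifierID: classifierID,
    failure: { error in print("Model update failed: \(error)") },
    success: { print("Local Core ML model is up to date") }
)
```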
This code pattern features the following components and technologies:
- Core ML: A framework that allows you to integrate machine learning models into your apps.
- Kitura: Kitura is a free and open-source web framework written in Swift, developed by IBM and licensed under Apache 2.0. It’s an HTTP server and web framework for writing Swift server applications.
- Watson Visual Recognition: Understands the contents of images by tagging them with visual concepts, finding human faces, approximating age and gender, and finding similar images in a collection.
- Artificial Intelligence: Artificial intelligence can be applied to disparate solution spaces to deliver disruptive technologies.
- Mobile: Systems of engagement are increasingly using mobile technology as the platform for delivery.
Coming off the heels of THINK 2018, where we announced our "Deploy a Core ML model with Watson Visual Recognition" code pattern, we decided to challenge ourselves to come up with a unique application for Vivatech, ideally one that could be used in a hackathon. We ended up creating (what we think is) a pretty darn fun iOS game that showcases Watson Visual Recognition's Core ML functionality.
We decided to create an iOS game that has a user search for objects as quickly as they can. The object classification is done using a Core ML model that is deployed with Watson Visual Recognition, and the "searching" is done with the Lumina framework (it's pretty awesome, check it out!).
Upon launching the app and starting the game, the user is prompted to search for six items that can be easily found around a home, office, or conference center. The six objects are: a shirt, jeans, plant, apple, notebook, and (because of the IBM Bee Logo) a plush bee.
We didn't just stop at creating the app. We also used Kitura for our server-side logic to handle the leaderboard and anonymized user provisioning.
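For a sense of what the server side looks like, here is a minimal Kitura sketch of a leaderboard using Codable routing. The `Score` type, routes, and in-memory store are illustrative stand-ins (the real service persists to Cloudant and triggers push notifications), not the code pattern's actual schema.

```swift
import Kitura
import KituraContracts

// Illustrative leaderboard entry; field names are placeholders, not the
// code pattern's actual Cloudant schema.
struct Score: Codable {
    let username: String
    let seconds: Double
}

let router = Router()
var scores = [Score]() // in-memory stand-in for the Cloudant-backed store

// Return the leaderboard, fastest times first.
router.get("/scores") { (respondWith: ([Score]?, RequestError?) -> Void) in
    respondWith(scores.sorted { $0.seconds < $1.seconds }, nil)
}

// Record a finished game. The real service would also decide whether anyone
// was bumped from the top of the leaderboard and send a push notification.
router.post("/scores") { (score: Score, respondWith: (Score?, RequestError?) -> Void) in
    scores.append(score)
    respondWith(score, nil)
}

Kitura.addHTTPServer(onPort: 8080, with: router)
Kitura.run()
```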
We did a soft launch at a Swift meetup last week and have started to receive reviews. We encourage folks to install the app from the App Store, or go directly to the code on GitHub. Feel free to fork the repo and modify it to fit your own needs. In under an hour, you can likely use Watson Visual Recognition to deploy your own Core ML model. Whether you're building a museum tour, a team-building workshop, or your own conference app, this code pattern is for you!
- Lumina: A camera designed in Swift for easily integrating CoreML models - as well as image streaming, QR/Barcode detection, and many other features.
- IBM’s Watson Visual Recognition service to support Apple Core ML technology: Blog from the code pattern author, Steve Martinelli.
- Deploy a Core ML model with Watson Visual Recognition: Code pattern that shows you how to create a Core ML model using Watson Visual Recognition and deploy it into an iOS application.
- AI Everywhere with IBM Watson and Apple Core ML: Blog from the code pattern author, Sridhar Sudarsan.
- Watson Studio Tooling: Start creating your own Watson Visual Recognition classifier.