This demo app lets users quickly obtain calorie information for foods using their phone's camera. The object detection model uses a MobileNetV2 SSD architecture trained via transfer learning on 15 food classes from the Open Images v4 dataset.
Note: this app was adapted from the TensorFlow Lite Object Detection Android Demo.
To build the pretrained demo in Android Studio, select "Open an existing Android Studio project" and navigate to the `Food.AI/Food.AI` directory. Then, connect a device and press Run.
Tip: to view additional details (e.g., detection confidence and inference time) when detecting foods, press a volume key.
- Create a directory in Google Drive called `food_detection`.
- Add the training dataset and `label_map.pbtxt` to `food_detection` (a sample `label_map.pbtxt` is shown after this list).
- Open `FoodAI_train.ipynb` and follow the notebook instructions.
- To use the newly trained model, download `food_detect.tflite` from `model_checkpoints/tflite_model/` and move it to the assets folder in Android Studio. It should replace the existing pretrained model.
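For reference, `label_map.pbtxt` uses the standard TensorFlow Object Detection API label map format. A minimal sketch with illustrative class names (the real file lists all 15 food classes):

```
# IDs start at 1; 0 is implicitly reserved for the background class.
item {
  id: 1
  name: 'Apple'
}
item {
  id: 2
  name: 'Banana'
}
```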
- Create a directory in Google Drive called `food_detection`.
- Use the OIDv4 ToolKit to download images and bounding box annotations for the desired classes.
- Change the classes in `OIDv4_ToolKit/classes.txt` accordingly. Then, zip the `OIDv4_ToolKit` folder and upload it to `food_detection`.
- Modify `label_map.pbtxt` to match the custom classes and upload it to `food_detection`.
- When generating the TFRecords, set the flags to point to the location of the dataset.
- Edit the number of classes in the model configuration file (`s = re.sub('90', 'NUM_CUSTOM_CLASSES', s)`; see the sketch after this list).
- Download `food_detect.tflite` from `model_checkpoints/tflite_model/` and move it to the assets folder in Android Studio. It should replace the existing pretrained model.
- Modify `food_labelmap.txt` accordingly. Make sure to keep `???` as the first line (see the example after this list).
- Modify `calorie_info.txt` to reflect the custom classes.
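The class-count edit above is a plain string substitution over the pipeline config, as the `re.sub` call suggests. A minimal sketch of the idea — the config filename and the class count of 15 are assumptions, not values taken from this repo:

```python
import re

CONFIG_PATH = 'pipeline.config'  # assumed name; use the config the notebook downloads
NUM_CUSTOM_CLASSES = 15          # replace with your own class count

with open(CONFIG_PATH) as f:
    s = f.read()

# The stock COCO config specifies 90 classes; swap in the custom count.
# Note this replaces every literal '90' in the file, so verify the
# result if '90' could appear elsewhere (e.g., in a path or step count).
s = re.sub('90', str(NUM_CUSTOM_CLASSES), s)

with open(CONFIG_PATH, 'w') as f:
    f.write(s)
```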
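On the app side, `food_labelmap.txt` is a plain list of class names, one per line, in the same order as the label map; the `???` on the first line is the placeholder for the background class. With the illustrative classes from above:

```
???
Apple
Banana
```

The format of `calorie_info.txt` is specific to this app, so mirror the structure of the file already shipped in the assets folder when adding your custom classes.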