Face Detection Android App

🌀 Overview

📱 This is a camera app that continuously detects faces (drawing bounding boxes) in the frames seen by your device's back camera 📷.

(Demo screenshots: light and dark theme)

🚩 Three models have been fine-tuned from the pre-trained MobileNet V2 backbone:

  • Detect_640.tflite: SSD MobileNet V2 FPNLite 640x640
  • Detect_640_quantized.tflite: post-training weight-quantized SSD MobileNet V2 FPNLite 640x640
  • Detect_LITE.tflite: SSD MobileNet V2 FPNLite 320x320

The models have been trained on the WIDER FACE dataset (http://shuoyang1213.me/WIDERFACE/).

The following instructions walk you through building and running the demo on an Android device 😃.

Models

Models are stored in the assets folder.

Adding metadata to the exported TFLite model

To use your own custom model, you need to add your label metadata to the exported TFLite model.

https://www.tensorflow.org/lite/models/convert/metadata

!pip install tflite-support

from tflite_support.metadata_writers import object_detector, writer_utils

ObjectDetectorWriter = object_detector.MetadataWriter
_MODEL_PATH = "detect.tflite"
# Task Library expects label files that are in the same format as the one below.
_LABEL_FILE = "labelmap.txt"
_SAVE_TO_PATH = "detect_metadata.tflite"

# Input normalization parameters: (pixel - mean) / std maps [0, 255] to [-1, 1].
_INPUT_NORM_MEAN = 127.5
_INPUT_NORM_STD = 127.5

# Create the metadata writer.
writer = ObjectDetectorWriter.create_for_inference(
    writer_utils.load_file(_MODEL_PATH), [_INPUT_NORM_MEAN], [_INPUT_NORM_STD],
    [_LABEL_FILE])

# Verify the metadata generated by the metadata writer.
print(writer.get_metadata_json())

# Populate the metadata into the model and save it.
writer_utils.save_file(writer.populate(), _SAVE_TO_PATH)
  • Get labelmap.txt from the second column of class-descriptions-boxable.
  • In DetectorActivity.java set TF_OD_API_IS_QUANTIZED to false.
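
To confirm that the metadata was written correctly, you can read it back from the saved model. A minimal check, assuming the same tflite-support package and the detect_metadata.tflite output from above:

from tflite_support import metadata

# Load the populated model and print its metadata and packed associated files.
displayer = metadata.MetadataDisplayer.with_model_file("detect_metadata.tflite")
print(displayer.get_metadata_json())
print(displayer.get_packed_associated_file_list())  # should list labelmap.txt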

🚀 Post-training quantization

Quantizing weights

Adapted from https://www.tensorflow.org/model_optimization/guide/quantization/post_training:

import tensorflow as tf

# Post-training quantization is a conversion technique that can reduce model size
# while also improving CPU and hardware accelerator latency, with little
# degradation in model accuracy.

# Convert the model (FROZEN_TFLITE_PATH is the path to the SavedModel directory).
converter = tf.lite.TFLiteConverter.from_saved_model(FROZEN_TFLITE_PATH)

# To quantize the model on export, set the optimizations flag to optimize for size.
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

# Save the model.
with open(TFLITE_MODEL, 'wb') as f:
    f.write(tflite_model)

In DetectorActivity.java set TF_OD_API_IS_QUANTIZED to true.
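
Before wiring the quantized model into the app, it can be worth a quick sanity check on desktop. A minimal sketch, assuming TensorFlow is installed and using the Detect_640_quantized.tflite file from the assets folder:

import numpy as np
import tensorflow as tf

# Load the quantized model and allocate its tensors.
interpreter = tf.lite.Interpreter(model_path="Detect_640_quantized.tflite")
interpreter.allocate_tensors()

# Inspect the expected input shape and dtype.
input_details = interpreter.get_input_details()[0]
print(input_details["shape"], input_details["dtype"])

# Run one forward pass on a dummy frame of the right shape.
dummy = np.zeros(input_details["shape"], dtype=input_details["dtype"])
interpreter.set_tensor(input_details["index"], dummy)
interpreter.invoke()
for out in interpreter.get_output_details():
    print(out["name"], interpreter.get_tensor(out["index"]).shape)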

Build the face detection app 📱

Prerequisites

  • If you don't already have it, install Android Studio by following the instructions on its website.
  • You need an Android device and an Android development environment with minimum API 21.
  • Android Studio 4.2 or above.

Building

  • Open Android Studio, and from the Welcome screen, select Open an existing Android Studio project.

  • From the Open File or Project window that appears, navigate to and select the tensorflow-lite/examples/object_detection/android directory from wherever you cloned the TensorFlow Lite sample GitHub repo. Click OK.

  • If it asks you to do a Gradle Sync, click OK.

  • You may also need to install various platforms and tools if you get errors like "Failed to find target with hash string 'android-21'" and similar.

  • Click the Run button (the green arrow) or select Run > Run 'android' from the top menu. You may need to rebuild the project using Build > Rebuild Project.

  • If it asks you to use Instant Run, click Proceed Without Instant Run.

  • Also, you need to have an Android device plugged in with developer options enabled at this point. See here for more details on setting up developer devices.

Switch between inference solutions (Task library vs TFLite Interpreter)

Inside Android Studio, you can change the build variant to whichever one you want to build and run—just go to Build > Select Build Variant and select one from the drop-down menu. See configure product flavors in Android Studio for more details.

From the Gradle CLI, running ./gradlew build creates APKs for both solutions under app/build/outputs/apk.

Note: If you simply want the out-of-box API to run the app, we recommend lib_task_api for inference. If you want to customize your own models and control the detail of inputs and outputs, it might be easier to adapt your model inputs and outputs by using lib_interpreter.
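
The lib_task_api flavor uses the Android Task Library; the same out-of-box API is also exposed in Python, which can help when experimenting with a custom model before building the app. A sketch, assuming tflite-support 0.4 or later, the metadata-populated model from above, and a placeholder image path test_face.jpg:

from tflite_support.task import core, processor, vision

# Build an object detector from the metadata-populated model.
base_options = core.BaseOptions(file_name="detect_metadata.tflite")
detection_options = processor.DetectionOptions(max_results=10, score_threshold=0.3)
options = vision.ObjectDetectorOptions(base_options=base_options,
                                       detection_options=detection_options)
detector = vision.ObjectDetector.create_from_options(options)

# Run detection on a test image; each detection carries a bounding box and scores.
image = vision.TensorImage.create_from_file("test_face.jpg")
for detection in detector.detect(image).detections:
    print(detection.bounding_box, detection.categories[0].score)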
