# Image Description for Dementia Prevention

A project for the 'Mobile Computing and Its Applications' course at SNU.

## Project Description

Please refer to mobileComputing_2023_project_final_presentation.pdf.

## Prepare the TensorFlow Lite model

Contents of image_captioning_custom.ipynb:

- Prepare the Flickr8k dataset
- Load and quantize ResNet50 (the encoder); a minimal sketch of one quantization approach follows this list
- Train a 2-layer Transformer (the decoder) and save its weights to a checkpoint
- Evaluate and compare the two captioning models (original encoder vs. quantized encoder)
- Other tasks (save the vocabulary file, plot attention maps, ...)
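The notebook's quantization step is not reproduced here, but post-training dynamic-range quantization with the TFLite converter is one common way to shrink a Keras encoder. The sketch below is a minimal example under that assumption; the exact quantization scheme, model configuration, and file names in image_captioning_custom.ipynb may differ.

```python
import tensorflow as tf

# Pretrained ResNet50 as the image encoder (ImageNet weights, no classifier head).
encoder = tf.keras.applications.ResNet50(
    weights="imagenet", include_top=False, pooling="avg"
)

# Post-training dynamic-range quantization via the TFLite converter.
converter = tf.lite.TFLiteConverter.from_keras_model(encoder)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

# Save the quantized encoder for on-device inference.
with open("resnet50_encoder_quantized.tflite", "wb") as f:
    f.write(tflite_model)
```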

## How to run the server for decoder model inference

1. Move into the decoder_server directory. It contains a Dockerfile that installs the dependencies, such as TensorFlow and Flask. controller.py is the entry point of the HTTP server (a minimal sketch of such a controller appears after this list), and decoder.py defines the decoder model and its inference functions.

2. Execute the commands below. (The image is built for the AMD64 architecture only.)

   ```bash
   docker build -t tensorflow-decoder-server .
   docker run -it -p 8123:8123 --name decoder_server tensorflow-decoder-server
   ```
3. You are now inside the container. Move into /home and run the command below.
   ```bash
   python controller.py
   ```
4. Modify the server IP address in the Android code to match your server's IP address; it is set in app/src/main/java/com/mobilecomputing/mobilecomputingproject/QuizActivity.kt. (A quick way to test the running server before changing the app is sketched after this list.)
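For reference, a minimal Flask controller in the spirit of controller.py could look like the sketch below. The /decode endpoint path, the JSON payload shape, and the generate_caption helper imported from decoder.py are illustrative assumptions, not the repository's actual API.

```python
from flask import Flask, jsonify, request

# Hypothetical helper; decoder.py's real inference function may differ.
from decoder import generate_caption

app = Flask(__name__)

@app.route("/decode", methods=["POST"])  # endpoint path is an assumption
def decode():
    # Expect encoder features posted as JSON by the Android client.
    features = request.get_json()["features"]
    caption = generate_caption(features)
    return jsonify({"caption": caption})

if __name__ == "__main__":
    # Bind to all interfaces on port 8123 to match the docker run mapping above.
    app.run(host="0.0.0.0", port=8123)
```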
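To sanity-check the server before editing QuizActivity.kt, you can post a request from Python. The /decode endpoint and the 2048-dimensional dummy feature payload below carry over the same illustrative assumptions as the controller sketch; adjust them to whatever controller.py actually expects.

```python
import requests

# Replace localhost with your server's IP when testing over the network.
resp = requests.post(
    "http://localhost:8123/decode",
    json={"features": [[0.0] * 2048]},  # dummy encoder features (assumed shape)
    timeout=30,
)
print(resp.json())
```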