
## Goal

This codelab is meant to provide reference implementations (and some optimization advice) for different quantized kernels.

Python is obviously much easier to read and understand.

This codelab is not intended to cover quantization scheme design or quantization recipes for different kernels.
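As a taste of what such a reference implementation looks like, here is a minimal sketch of affine quantization and a quantized elementwise multiply in NumPy. The function names, the uint8 range, and the float rescaling step are illustrative assumptions rather than the codelab's actual code; a production kernel would typically replace the float multiplier with an integer fixed-point one.

```python
import numpy as np

def quantize(x, scale, zero_point, dtype=np.uint8):
    """Affine-quantize a float array: q = round(x / scale) + zero_point."""
    q = np.round(x / scale) + zero_point
    info = np.iinfo(dtype)
    return np.clip(q, info.min, info.max).astype(dtype)

def dequantize(q, scale, zero_point):
    """Recover approximate float values from quantized integers."""
    return scale * (q.astype(np.float32) - zero_point)

def quantized_mul(q1, s1, z1, q2, s2, z2, s_out, z_out):
    """Quantized elementwise multiply: the real-valued product
    (q1 - z1)*s1 * (q2 - z2)*s2 is rescaled into the output's grid."""
    acc = (q1.astype(np.int32) - z1) * (q2.astype(np.int32) - z2)
    q_out = np.round(acc * (s1 * s2 / s_out)) + z_out
    return np.clip(q_out, 0, 255).astype(np.uint8)
```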

## Kernels Supported

- Mul
- Fixed Point add, sub, mul (see the fixed-point sketch after this list)
- Fixed Point div
- Fixed Point sin
- Fixed Point tanh
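The common idea behind the fixed-point kernels is to represent real numbers as scaled integers and keep track of the binary point across operations. The sketch below assumes an illustrative Q16.16 format and hypothetical helper names; it is not the codelab's implementation.

```python
FRACTIONAL_BITS = 16  # assumed Q16.16 format for this sketch

def to_fixed(x):
    """Encode a real number as a scaled integer: raw = round(x * 2**FRACTIONAL_BITS)."""
    return int(round(x * (1 << FRACTIONAL_BITS)))

def to_float(q):
    """Decode a fixed-point value back to a float."""
    return q / float(1 << FRACTIONAL_BITS)

def fixed_add(a, b):
    # Same format on both sides: a plain integer add suffices.
    return a + b

def fixed_mul(a, b):
    # The raw product has 2*FRACTIONAL_BITS fractional bits;
    # shift back down with rounding to stay in Q16.16.
    return (a * b + (1 << (FRACTIONAL_BITS - 1))) >> FRACTIONAL_BITS

def fixed_div(a, b):
    # Pre-shift the numerator so the quotient keeps FRACTIONAL_BITS.
    return (a << FRACTIONAL_BITS) // b

# Example: 1.5 * 2.25 = 3.375
print(to_float(fixed_mul(to_fixed(1.5), to_fixed(2.25))))  # -> 3.375
```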

## Materials

- TensorFlow Lite Model Optimization Toolkit
- GemmLowp
- TensorFlow Lite Kernels

## Credits

The quantization kernel computation methods came from benoitjacob@, raziel@, suharshs@, and many other people on the TFLite team.