renjie-liu/quantization-kernel-codelab


Goal

This codelab provides reference implementations (and some optimization advice) for various quantized kernels.

The implementations are written in Python, which is much easier to read and understand.

This codelab is not intended to cover quantization scheme design or quantization recipes for different kernels.

Kernels Supported

  • Mul

  • Fixed Point add, sub, mul

  • Fixed Point div

  • Fixed Point sin

  • Fixed Point tanh
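To illustrate the kind of kernel the codelab covers, here is a minimal sketch (not taken from the repository) of an element-wise quantized Mul using standard affine quantization. The function names, the int8 output type, and the specific scales and zero points are assumptions made for this example.

```python
import numpy as np

def quantize(x, scale, zero_point, dtype=np.int8):
    # Affine quantization: q = round(x / scale) + zero_point,
    # clamped to the target integer range.
    info = np.iinfo(dtype)
    q = np.round(x / scale) + zero_point
    return np.clip(q, info.min, info.max).astype(dtype)

def dequantize(q, scale, zero_point):
    # Inverse mapping back to real values.
    return scale * (q.astype(np.float64) - zero_point)

def quantized_mul(a_q, a_scale, a_zp, b_q, b_scale, b_zp,
                  out_scale, out_zp):
    # Element-wise Mul in the integer domain: subtract zero points,
    # multiply in a wider accumulator, then rescale the product by
    # (a_scale * b_scale / out_scale) into the output quantization.
    acc = (a_q.astype(np.int32) - a_zp) * (b_q.astype(np.int32) - b_zp)
    multiplier = a_scale * b_scale / out_scale
    out = np.round(acc * multiplier) + out_zp
    info = np.iinfo(np.int8)
    return np.clip(out, info.min, info.max).astype(np.int8)

# Example: 2.0 * 3.0 computed entirely on int8 values.
a_q = quantize(np.array([2.0]), 0.05, 0)   # -> 40
b_q = quantize(np.array([3.0]), 0.05, 0)   # -> 60
out_q = quantized_mul(a_q, 0.05, 0, b_q, 0.05, 0, 0.1, 0)
print(dequantize(out_q, 0.1, 0))           # close to 6.0
```

Production kernels (e.g. in TFLite) replace the floating-point `multiplier` with an integer fixed-point multiplier plus a right shift, so the whole computation stays integer-only; the float rescale here is a simplification for readability.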

Materials

TensorFlow Lite Model Optimization Toolkit

GemmLowp

TensorFlow Lite Kernels

Credits

The quantized kernel computation methods come from benoitjacob@, raziel@, suharshs@, and many other people on the TFLite team.
