Create tiny ML systems for on-device learning.
Latest version: ver0.8.1-demo.
Edited by ZHOU Qihua, 2021.06.22, Tue.
Folder | Description |
---|---|
simplecnn_lac | 3-layer CNN (1 CONV + 2 FC) based on QAT & LAC |
alexnet_lac | 8-layer CNN (5 CONV + 3 FC) based on QAT & LAC |
vgg11_lac | 11-layer CNN (8 CONV + 3 FC) based on QAT & LAC |
quantizer | INT8 quantization module |
common | Common neural network modules |
dataset | MNIST dataset and data loader |
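The QAT-based models above simulate INT8 arithmetic while training in FP32: tensors are quantized to the INT8 grid and immediately dequantized in the forward pass, so the network learns under the same rounding error that low-precision execution incurs. A minimal NumPy sketch of this idea (symmetric, per-tensor; `fake_quant_int8` is an illustrative helper, not the repo's KMQuantizer):

```python
import numpy as np

def fake_quant_int8(x):
    """Simulate INT8 precision during FP32 training.

    Quantize to the integer grid [-127, 127], then dequantize back to
    FP32, so downstream computation sees INT8 rounding error.
    """
    max_abs = float(np.max(np.abs(x)))
    if max_abs == 0.0:
        return x
    scale = max_abs / 127.0                      # FP32 units per INT8 step
    q = np.clip(np.round(x / scale), -127, 127)  # values on the INT8 grid
    return (q * scale).astype(np.float32)        # back to FP32

x = np.array([-1.5, 0.0, 0.3, 2.0], dtype=np.float32)
x_hat = fake_quant_int8(x)
# Per-element rounding error is at most half a quantization step
assert np.max(np.abs(x - x_hat)) <= (2.0 / 127.0) / 2 + 1e-7
```

The loss-aware compensation (LAC) described in the paper then corrects for the accumulated quantization error during training; see the citation below for details.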
For example, to train the simplecnn_lac model on the MNIST dataset, the core file is simplecnn_lac/train_convnet.py.
Python File | Description |
---|---|
simplecnn_lac/train_simplecnn_LAC_mnsit.py | Main entrance of the training procedure |
simplecnn_lac/simplecnn_LAC_mnist.py | Builds the 3-layer CNN |
quantizer/KMQuantizer.py | Quantization functions |
common/trainer | Training handler |
common/layer | Layers of the neural network |
The following Python packages are required:
- Python 3.x (3.6 is recommended)
- NumPy
- Matplotlib
Change into the simplecnn_lac folder and execute the Python file:

```shell
$ cd simplecnn_lac
$ python train_simplecnn_LAC_mnsit.py
```
Alternatively, open the root folder in your IDE, choose train_simplecnn_LAC_mnsit.py, and click the run button. Please configure the Python interpreter correctly.
Octo: INT8 Training with Loss-aware Compensation and Backward Quantization for Tiny On-device Learning, In USENIX Annual Technical Conference (USENIX ATC), July 2021.
```bibtex
@inproceedings{octo_atc21,
  title     = {Octo: INT8 Training with Loss-aware Compensation and Backward Quantization for Tiny On-device Learning},
  author    = {Qihua Zhou and Song Guo and Zhihao Qu and Jingcai Guo and Zhenda Xu and Jiewei Zhang and Tao Guo and Boyuan Luo and Jingren Zhou},
  booktitle = {2021 {USENIX} Annual Technical Conference ({USENIX} {ATC} 21)},
  year      = {2021},
  url       = {https://www.usenix.org/conference/atc21/presentation/zhou-qihua},
  publisher = {{USENIX} Association},
  month     = jul,
}
```