
# Quantized Depth Estimation

Lukasz Staniszewski

PyTorch Lightning Config: Hydra Template

black isort

## Description

This repository serves as the codebase for a project focused on applying effective neural network architectures to Depth Estimation, along with researching the best quantization methods to reduce their size. Project documentation can be found here.

## Installation

### Pip

```sh
# clone project
git clone https://github.com/lukasz-staniszewski/quantized-depth-estimation
cd quantized-depth-estimation

# [OPTIONAL] create conda environment
conda create -n myenv python=3.10.13
conda activate myenv

# install pytorch according to instructions
# https://pytorch.org/get-started/

# install requirements
pip install -r requirements.txt
```

### Conda

```sh
# clone project
git clone https://github.com/lukasz-staniszewski/quantized-depth-estimation
cd quantized-depth-estimation

# create conda environment and install dependencies
conda env create -f environment.yaml -n myenv

# activate conda environment
conda activate myenv
```

## How to run

Start by setting `PYTHONPATH` (inside the project's root directory):

```sh
export PYTHONPATH=$PWD
```

Train a model with the default configuration:

```sh
# train on CPU
python src/train.py trainer=cpu

# train on GPU
python src/train.py trainer=gpu
```

Train a model with a chosen experiment configuration from `configs/experiment/`:

```sh
python src/train.py experiment=experiment_name.yaml
```

You can override any parameter from the command line like this:

```sh
python src/train.py trainer.max_epochs=20 data.batch_size=64
```

Evaluate your model:

```sh
python src/eval.py ckpt_path=<PATH>
```

To quantize your model, configure the quantization settings inside the quantization config file and run:

```sh
python src/quantize.py ckpt_path=<PATH>
```

There, you can easily set up your quantization scenario:

```yaml
...
inference_speed: True       # set to True if you want to check quantized model inference speed

quantization:
  methods:
    # - "fuse_bn"           # fuse batch norm
    - "ptq"                 # post-training quantization
    - "qat"                 # quantization-aware training
  ptq:
    batches_limit: 250      # max number of batches for PTQ calibration
  qat:
    max_epochs: 10          # max number of epochs for QAT
  quant_config:
    dummy_input_shape: [1, 3, 224, 224]
    is_per_tensor: False    # True for per-tensor quantization, False if you prefer per-channel
    is_asymmetric: True
    backend: "qnnpack"      # 'qnnpack' for mobile devices or 'fbgemm' for servers
    disable_requantization_for_cat: True
    use_cle: True           # whether to use Cross-Layer Equalization before doing PTQ/QAT
    overwrite_set_ptq: True # set this to False if you don't use PTQ
```
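
To illustrate the general PTQ flow this config controls (prepare, calibrate on a limited number of batches, convert), here is a minimal sketch using PyTorch's built-in eager-mode `torch.ao.quantization` API. It is an illustration of the technique, not this repository's actual pipeline; the model and calibration data are stand-ins:

```python
# Minimal eager-mode post-training quantization (PTQ) sketch with PyTorch.
# Flow: attach a qconfig -> prepare (insert observers) -> calibrate on a few
# batches -> convert to a quantized model. Model and data are placeholders.
import torch
import torch.nn as nn
from torch.ao.quantization import (
    DeQuantStub, QuantStub, convert, get_default_qconfig, prepare,
)


class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.quant = QuantStub()       # marks where float input gets quantized
        self.conv = nn.Conv2d(3, 8, 3, padding=1)
        self.relu = nn.ReLU()
        self.dequant = DeQuantStub()   # marks where output returns to float

    def forward(self, x):
        return self.dequant(self.relu(self.conv(self.quant(x))))


model = TinyNet().eval()

# Pick a backend, as in the config: 'qnnpack' (mobile) or 'fbgemm' (server).
engine = "fbgemm" if "fbgemm" in torch.backends.quantized.supported_engines else "qnnpack"
torch.backends.quantized.engine = engine
model.qconfig = get_default_qconfig(engine)

# Insert observers that record activation ranges during calibration.
prepared = prepare(model)

# Calibration: feed a limited number of batches (cf. ptq.batches_limit).
batches_limit = 4
with torch.no_grad():
    for _ in range(batches_limit):
        prepared(torch.randn(1, 3, 224, 224))  # dummy_input_shape from config

# Replace observed modules with their quantized counterparts.
quantized = convert(prepared)
```

The `dummy_input_shape` in the config plays the same role as the random tensors here: it fixes the input geometry the quantizer traces and calibrates with.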

## Reproducing results

### Training

```sh
python src/train.py experiment=nyu_efffnet
```

### Quantization

Download the `quantize.yaml` file from the GitHub repository and put it in the `configs/` directory.

Run:

```sh
python src/quantize.py ckpt_path=<CORRECT RUN PATH>/checkpoints/epoch_023.ckpt
```

## TODO

Check those links:
