Model Compression with NNI

As larger neural networks with more layers and nodes are considered, reducing their storage and computational cost becomes critical, especially for real-time applications. Model compression can be used to address this problem.

NNI provides a model compression toolkit to help users compress and speed up their models with state-of-the-art compression algorithms and strategies. The core features supported by NNI model compression are:

  • Support many popular pruning and quantization algorithms.
  • Automate the model pruning and quantization process with state-of-the-art strategies and NNI's auto-tuning power.
  • Speed up a compressed model to reduce its inference latency and size.
  • Provide friendly and easy-to-use compression utilities for users to dive into the compression process and results.
  • Offer a concise interface for users to customize their own compression algorithms.

Note that the interface and APIs are unified for both PyTorch and TensorFlow; currently only the PyTorch version is supported, and the TensorFlow version will be supported in the future.
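
To illustrate the unified interface, here is a minimal sketch of a typical pruning workflow on a PyTorch model. The import path, the config_list format, and the export_model arguments follow the NNI 1.x/2.x documentation and should be treated as assumptions; they may differ across NNI versions.

.. code-block:: python

   from torchvision.models import resnet18
   # Import path follows the NNI 2.x layout; NNI 1.x used nni.compression.torch instead (assumption).
   from nni.algorithms.compression.pytorch.pruning import LevelPruner

   model = resnet18()

   # Each entry in config_list selects a set of layers and a target sparsity.
   config_list = [{
       'sparsity': 0.8,          # prune 80% of the weights
       'op_types': ['default'],  # apply to the default supported layer types
   }]

   # Wrap the model with a pruner and generate the pruning masks.
   pruner = LevelPruner(model, config_list)
   model = pruner.compress()

   # ... fine-tune the masked model as usual, then export the weights and masks.
   pruner.export_model(model_path='pruned_model.pth', mask_path='mask.pth')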

The supported algorithms fall into two categories: pruning algorithms and quantization algorithms.

Pruning algorithms compress the original network by removing redundant weights or channels of layers, which can reduce model complexity and address the over-fitting issue.

.. list-table::
   :header-rows: 1
   :widths: 25 75

   * - Name
     - Brief Introduction of Algorithm
   * - Level Pruner
     - Prunes the specified ratio of each weight based on the absolute values of the weights.
   * - AGP Pruner
     - Automated gradual pruning (To prune, or not to prune: exploring the efficacy of pruning for model compression). Reference Paper
   * - Lottery Ticket Pruner
     - The pruning process used by "The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks". It prunes a model iteratively. Reference Paper
   * - FPGM Pruner
     - Filter Pruning via Geometric Median for Deep Convolutional Neural Networks Acceleration. Reference Paper
   * - L1Filter Pruner
     - Prunes filters with the smallest L1 norm of weights in convolution layers (Pruning Filters for Efficient Convnets). Reference Paper
   * - L2Filter Pruner
     - Prunes filters with the smallest L2 norm of weights in convolution layers.
   * - ActivationAPoZRankFilterPruner
     - Prunes filters based on the APoZ metric (average percentage of zeros), which measures the percentage of zeros in the activations of (convolutional) layers. Reference Paper
   * - ActivationMeanRankFilterPruner
     - Prunes filters with the smallest mean value of output activations.
   * - Slim Pruner
     - Prunes channels in convolution layers by pruning the scaling factors in BN layers (Learning Efficient Convolutional Networks through Network Slimming). Reference Paper
   * - TaylorFO Pruner
     - Prunes filters based on the first-order Taylor expansion of the weights (Importance Estimation for Neural Network Pruning). Reference Paper
   * - ADMM Pruner
     - Pruning based on the ADMM optimization technique. Reference Paper
   * - NetAdapt Pruner
     - Automatically simplifies a pretrained network to meet a resource budget by iterative pruning. Reference Paper
   * - SimulatedAnnealing Pruner
     - Automatic pruning with a guided heuristic search method, the Simulated Annealing algorithm. Reference Paper
   * - AutoCompress Pruner
     - Automatic pruning by iteratively calling SimulatedAnnealing Pruner and ADMM Pruner. Reference Paper
   * - AMC Pruner
     - AMC: AutoML for Model Compression and Acceleration on Mobile Devices. Reference Paper

You can refer to this :githublink:`benchmark <docs/en_US/CommunitySharings/ModelCompressionComparison.rst>` for the performance of these pruners on some benchmark problems.
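
Switching between the pruners in the table above only requires changing the pruner class and its config_list. As a concrete example, the sketch below configures L1FilterPruner to prune half of the filters in all convolution layers; the import path and config_list keys follow the NNI 1.x/2.x documentation and are assumptions for other versions.

.. code-block:: python

   from torchvision.models import resnet18
   from nni.algorithms.compression.pytorch.pruning import L1FilterPruner

   model = resnet18()

   # Prune 50% of the filters in every Conv2d layer, ranked by the L1 norm of the filter weights.
   config_list = [{
       'sparsity': 0.5,
       'op_types': ['Conv2d'],
   }]

   pruner = L1FilterPruner(model, config_list)
   model = pruner.compress()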

Quantization algorithms compress the original network by reducing the number of bits required to represent weights or activations, which can reduce the computations and the inference time.

.. list-table::
   :header-rows: 1
   :widths: 25 75

   * - Name
     - Brief Introduction of Algorithm
   * - Naive Quantizer
     - Quantizes weights to 8 bits by default.
   * - QAT Quantizer
     - Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference. Reference Paper
   * - DoReFa Quantizer
     - DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with Low Bitwidth Gradients. Reference Paper
   * - BNN Quantizer
     - Binarized Neural Networks: Training Deep Neural Networks with Weights and Activations Constrained to +1 or -1. Reference Paper
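
A quantizer is configured in the same style as a pruner. The following is a minimal sketch for QAT Quantizer, assuming the config_list keys (quant_types, quant_bits, op_types) and the optimizer argument documented for NNI 1.x/2.x; details may differ across versions.

.. code-block:: python

   import torch
   from torchvision.models import resnet18
   from nni.algorithms.compression.pytorch.quantization import QAT_Quantizer

   model = resnet18()
   optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

   # Quantize the weights and outputs of Conv2d/Linear layers to 8 bits.
   config_list = [{
       'quant_types': ['weight', 'output'],
       'quant_bits': {'weight': 8, 'output': 8},
       'op_types': ['Conv2d', 'Linear'],
   }]

   quantizer = QAT_Quantizer(model, config_list, optimizer)
   model = quantizer.compress()
   # ... keep training so the quantization parameters are learned (quantization-aware training).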

Given a targeted compression ratio, it is hard to obtain the best compression result in one shot. An automatic model compression algorithm usually needs to explore the compression space by compressing different layers with different sparsities. NNI provides such algorithms to free users from specifying the sparsity of each layer in a model. Moreover, users can leverage NNI's auto-tuning power to automatically compress a model. Detailed documentation can be found here.
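
For example, an automatic pruner can be given only an overall sparsity target plus an evaluation function, and it searches for per-layer sparsities on its own. The sketch below uses SimulatedAnnealingPruner; the evaluator callback and the constructor arguments are assumptions based on the NNI 1.x/2.x documentation and may differ in other versions.

.. code-block:: python

   from torchvision.models import resnet18
   from nni.algorithms.compression.pytorch.pruning import SimulatedAnnealingPruner

   model = resnet18()

   def evaluator(model):
       # Placeholder: plug in your own validation loop and return an accuracy-like score.
       return 0.0

   # Only an overall sparsity target is given; per-layer sparsities are found by the search.
   config_list = [{'sparsity': 0.5, 'op_types': ['Conv2d']}]

   pruner = SimulatedAnnealingPruner(model, config_list, evaluator=evaluator)
   model = pruner.compress()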

The final goal of model compression is to reduce inference latency and model size. However, existing model compression algorithms mainly use simulation to check the performance (e.g., accuracy) of the compressed model: pruning algorithms use masks, and quantization algorithms still store quantized values in float32. Given the output masks and quantization bits produced by those algorithms, NNI can truly speed up the model. The detailed tutorial of Model Speedup can be found here.
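
A minimal sketch of the speedup step is shown below, continuing from the pruning sketch earlier on this page; the ModelSpeedup import path and argument names follow the NNI 1.x/2.x documentation and may differ in other versions.

.. code-block:: python

   import torch
   from nni.compression.pytorch import ModelSpeedup

   # `model` and 'mask.pth' come from the pruning sketch above.
   dummy_input = torch.randn(1, 3, 224, 224)  # traces the network to find dependent layers

   m_speedup = ModelSpeedup(model, dummy_input, 'mask.pth')
   m_speedup.speedup_model()
   # The masked layers are now really replaced by smaller ones, reducing latency and model size.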

Compression utilities include some useful tools for users to understand and analyze the model they want to compress. For example, users can check the sensitivity of each layer to pruning, or easily calculate the FLOPs and parameter size of a model. Please refer to here for a complete list of compression utilities.
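
As an example of these utilities, the sketch below counts FLOPs and parameters with the flops counter; the import path and the returned values follow the NNI 1.x/2.x documentation and are assumptions for other versions.

.. code-block:: python

   from torchvision.models import resnet18
   from nni.compression.pytorch.utils.counter import count_flops_params

   model = resnet18()

   # Count FLOPs and parameters for a given input shape.
   # Some NNI versions return only (flops, params); newer ones also return per-layer results.
   flops, params, results = count_flops_params(model, (1, 3, 224, 224))
   print(f'FLOPs: {flops}, parameters: {params}')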

NNI model compression provides a simple interface for users to customize a new compression algorithm. The design philosophy of the interface is to let users focus on the compression logic while hiding framework-specific implementation details. The detailed tutorial for customizing a new compression algorithm (pruning algorithm or quantization algorithm) can be found here.
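
A minimal sketch of a customized pruner is given below, assuming the Pruner base class and the calc_mask hook described in the NNI 1.x/2.x customization tutorial; the import path, the wrapper attributes, and the returned mask dictionary are assumptions that may differ across versions.

.. code-block:: python

   from torchvision.models import resnet18
   # Exact import path varies across NNI versions (assumption); e.g. nni.compression.torch in NNI 1.x.
   from nni.compression.pytorch import Pruner

   class ThresholdPruner(Pruner):
       """Hypothetical pruner: keeps weights whose absolute value is above a fixed threshold."""

       def calc_mask(self, wrapper, **kwargs):
           weight = wrapper.module.weight.data
           # The compression logic: 1 keeps a weight, 0 prunes it.
           mask = (weight.abs() > 0.01).type_as(weight)
           return {'weight_mask': mask}

   model = resnet18()
   config_list = [{'sparsity': 0.5, 'op_types': ['Conv2d']}]
   pruner = ThresholdPruner(model, config_list)
   model = pruner.compress()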