diff --git a/docs/en_US/Compressor/Overview.md b/docs/en_US/Compressor/Overview.md
index 7ee603e3e3..5fc8e45c5d 100644
--- a/docs/en_US/Compressor/Overview.md
+++ b/docs/en_US/Compressor/Overview.md
@@ -95,7 +95,17 @@ pruner.update_epoch(epoch)
 The other is `step`, it can be called with `pruner.step()` after each minibatch. Note that not all algorithms need these two APIs, for those that do not need them, calling them is allowed but has no effect.
 
-__[TODO]__ The last API is for users to export the compressed model. You will get a compressed model when you finish the training using this API. It also exports another file storing the values of masks.
+If you are pruning your model, you can easily export the compressed model with the following API. The `state_dict` of the sparse model weights is stored in `model.pth`, which can be loaded with `torch.load('model.pth')`:
+
+```python
+pruner.export_model(model_path='model.pth')
+```
+
+The `mask_dict` and the pruned model in `onnx` format (`input_shape` needs to be specified) can also be exported like this:
+
+```python
+pruner.export_model(model_path='model.pth', mask_path='mask.pth', onnx_path='model.onnx', input_shape=[1, 1, 28, 28])
+```
 
 ## Customize new compression algorithms
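To illustrate how the exported artifacts can be consumed afterwards, here is a minimal, hypothetical sketch. It assumes a PyTorch module class `MyNet` (a placeholder for whatever architecture was pruned) and that `model.pth` and `mask.pth` were produced by the `pruner.export_model` call documented above; the exact layout of the mask file may differ between NNI versions, so the mask inspection is only illustrative.

```python
import torch

# Rebuild the same architecture that was pruned (MyNet is a placeholder
# for your own model class), then load the exported sparse weights.
model = MyNet()
model.load_state_dict(torch.load('model.pth'))
model.eval()

# The mask file can be loaded the same way; as a rough sanity check,
# report how many values were zeroed out in each tensor-valued entry.
mask_dict = torch.load('mask.pth')
for name, mask in mask_dict.items():
    if torch.is_tensor(mask):
        sparsity = 1.0 - mask.float().mean().item()
        print(f'{name}: {sparsity:.2%} of values masked out')
```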