This repository has been archived by the owner on Sep 18, 2024. It is now read-only.

fix pruner bugs and add model compression README #1624

Merged
merged 14 commits into microsoft:v1.1 on Oct 21, 2019

Conversation

tanglang96
Contributor

fix pruner bugs and add model compression README

@tanglang96 changed the title from V1.1 to fix pruner bugs and add model compression README on Oct 18, 2019
pruner.update_epoch(epoch)

pruner.update_epoch(epoch + 1)
Contributor

Let's discuss whether epoch should start from 0 or 1 next time.
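The epoch bookkeeping under discussion can be sketched with a stand-in pruner. This is an illustrative toy (`ToyPruner` and `fine_tune` are made-up names, not the NNI API): the trainer runs epochs `0..n-1` and, per this PR's change, reports the *next* epoch to the pruner after finishing each one.

```python
# Minimal stand-in (NOT the NNI Pruner API) to illustrate the epoch
# bookkeeping discussed above.
class ToyPruner:
    def __init__(self):
        self.now_epoch = 0

    def update_epoch(self, epoch):
        # Record which epoch the pruner should treat as current.
        self.now_epoch = epoch

def fine_tune(pruner, num_epochs):
    seen = []
    for epoch in range(num_epochs):
        seen.append(pruner.now_epoch)   # epoch the pruner believes it is in
        # ... train one epoch here ...
        pruner.update_epoch(epoch + 1)  # hand the pruner the upcoming epoch
    return seen

print(fine_tune(ToyPruner(), 3))  # prints [0, 1, 2]
```

With `update_epoch(epoch + 1)` the pruner's view stays aligned with 0-based training epochs, which is the convention the thread settles on.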

@@ -2,7 +2,7 @@
import torch
from .compressor import Pruner

__all__ = ['LevelPruner', 'AGP_Pruner', 'SensitivityPruner']
__all__ = ['LevelPruner', 'AGP_Pruner']
Contributor

You removed SensitivityPruner, so please update the doc accordingly.

python main_torch_pruner.py
```

Model compression can be configured in 2 ways
Contributor

This example uses AGP Pruner. Initializing a pruner requires a user-provided configuration, which can be supplied in two ways:


Model compression can be configured in 2 ways

- By reading ```configure_example.yaml```, this can make codes clean when your configuration is complicated
Contributor

codes -> code

pruner = AGP_Pruner(configure_list)
```
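The second way, passing the configuration directly in Python, might look like the sketch below. The key names follow the README snippet reviewed later in this PR (`initial_sparsity`, `final_sparsity`, `start_epoch`, `end_epoch`); `frequency` and `op_types` are shown as plausible additional keys and should be checked against NNI's actual schema.

```python
# Hypothetical config_list for AGP_Pruner; key names follow the README
# snippet in this PR, but the exact schema is defined by NNI, not here.
configure_list = [{
    'initial_sparsity': 0,    # sparsity at the start of pruning
    'final_sparsity': 0.8,    # target sparsity when pruning ends
    'start_epoch': 0,         # pruning begins at epoch 0 (per this PR)
    'end_epoch': 10,          # pruning reaches final sparsity at epoch 10
    'frequency': 1,           # assumed key: prune every epoch
    'op_types': ['default'],  # assumed key: which layer types to prune
}]
```

The YAML file option expresses the same dictionaries; the in-code list is convenient for short configurations, while the YAML file keeps the training script clean when the configuration grows.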

Please notice that when ```pruner(model)``` called, our model compression codes will be **automatically injected** and you can fine-tune your model **without** any modifications, masked weights **won't** be updated any more during fine tuning.
Contributor

When pruner(model) is called, your model is injected with masks as embedded operations. For example, if a layer takes a weight as input, we insert an operation between the weight and the layer; this operation takes the weight as input and outputs a new weight with the mask applied. The masks are therefore applied whenever computation goes through these operations, and you can fine-tune your model without any modifications.
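The inserted operation can be sketched in plain Python (this is illustrative only, not NNI's injection code): every use of the weight goes through an elementwise multiply with a 0/1 mask, so pruned entries contribute nothing to the forward pass, and their gradients are likewise zeroed during fine-tuning.

```python
# Illustrative sketch (NOT NNI's actual injection code): the pruner inserts
# an operation between a stored weight and the layer that consumes it, so
# every forward pass sees weight * mask instead of the raw weight.
def apply_mask(weight, mask):
    # The inserted operation: elementwise multiply by the 0/1 mask.
    return [w * m for w, m in zip(weight, mask)]

weight = [0.5, 1.2, 0.8, 2.0]
mask = [1, 0, 1, 0]  # zeros mark pruned entries

print(apply_mask(weight, mask))  # prints [0.5, 0.0, 0.8, 0.0]
```

Because the mask multiply sits in the computation graph, gradients flowing back to masked positions are multiplied by zero, which is why the README can say masked weights won't be updated during fine-tuning.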

@@ -8,7 +8,6 @@ We have provided two naive compression algorithms and four popular ones for user
|---|---|
Contributor

“We have provided two naive compression algorithms and four popular ones for users, including three pruning algorithms and three quantization algorithms:” this line should also be updated

@@ -48,7 +48,7 @@ from nni.compression.tensorflow import AGP_Pruner
config_list = [{
'initial_sparsity': 0,
'final_sparsity': 0.8,
'start_epoch': 1,
Contributor

what is the meaning of start_epoch=0, end_epoch=10?

Contributor Author

@QuanluZhang start_epoch=0, end_epoch=10 means pruning starts at epoch 0 and ends at epoch 10. The algorithm's previous default was start_epoch=1, but we usually start from 0, so I changed them all.
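The schedule these two keys control can be sketched as follows. AGP (automated gradual pruning, Zhu & Gupta) ramps sparsity from `initial_sparsity` to `final_sparsity` along a cubic curve between `start_epoch` and `end_epoch`; the function below is a hedged reconstruction of that schedule, not NNI's exact code.

```python
def agp_sparsity(epoch, initial_sparsity=0.0, final_sparsity=0.8,
                 start_epoch=0, end_epoch=10):
    """Cubic sparsity ramp used by automated gradual pruning (AGP).

    A sketch of the schedule configured by start_epoch/end_epoch above,
    not NNI's exact implementation.
    """
    if epoch < start_epoch:
        return initial_sparsity
    if epoch >= end_epoch:
        return final_sparsity
    # Fraction of the pruning window remaining, in (0, 1].
    frac = 1 - (epoch - start_epoch) / (end_epoch - start_epoch)
    return final_sparsity + (initial_sparsity - final_sparsity) * frac ** 3

print(agp_sparsity(0))   # prints 0.0 -- pruning starts at epoch 0
print(agp_sparsity(10))  # prints 0.8 -- final sparsity reached at epoch 10
```

With start_epoch=0 the ramp begins on the very first training epoch, which is why the default was changed here to match 0-based epoch counting.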

@liuzhe-lz liuzhe-lz merged commit 9d468d2 into microsoft:v1.1 Oct 21, 2019