
Update README.md to add commands to run quantized model with pretrained weights #1547

Merged 1 commit into master from quant-models-doc on Nov 4, 2019

Conversation

@hx89 (Contributor) commented Oct 31, 2019:

Update README.md to add commands to run quantized model with pretrained weights.

@hx89 hx89 requested review from raghuramank100 and fmassa October 31, 2019 18:21
@fmassa (Member) left a comment:

This is an improvement of what we had before, thanks for the PR!

I have a question about mobilenet not using fbgemm by default, but I'm merging this PR as is for now.
If something should be changed to accommodate mobilenet, can you do it in a follow-up PR?

Thanks!

For all quantized models except inception_v3:
```
python references/classification/train_quantization.py --data-path='imagenet_full_size/' \
    --device='cpu' --test-only --backend='fbgemm' --model='<model_name>'
```
A Member commented on the snippet above:
I think mobilenet doesn't use fbgemm backend, @raghuramank100 can you confirm?
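The backend question above maps to PyTorch's quantized engine setting, which the `--backend` flag in the command controls. As context, a minimal sketch (assuming a local PyTorch install; the engine choice here is illustrative, not the repo's configuration) of inspecting and selecting the engine:

```python
import torch

# Engines compiled into this build of PyTorch.
# 'fbgemm' targets x86 servers; 'qnnpack' targets ARM/mobile.
engines = torch.backends.quantized.supported_engines
print(engines)

# Select an engine before running a quantized model, guarding
# against builds that lack fbgemm support.
if 'fbgemm' in engines:
    torch.backends.quantized.engine = 'fbgemm'
    print(torch.backends.quantized.engine)
```

If mobilenet indeed prefers a different backend, it would be selected the same way through this setting.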

@fmassa fmassa merged commit bb261c5 into master Nov 4, 2019
@fmassa fmassa deleted the quant-models-doc branch November 4, 2019 09:50