Pull request #69: Merged (11 commits, Feb 11, 2020)
deployment/docker/Dockerfile (1 addition, 1 deletion)
@@ -52,7 +52,7 @@ RUN python3 -m pip --no-cache-dir install Keras==2.1.6
# PyTorch
#
RUN python3 -m pip --no-cache-dir install torch==1.2.0
-RUN python3 -m pip install torchvision==0.4.0
+RUN python3 -m pip install torchvision==0.5.0

#
# sklearn 0.20.0
docs/en_US/Compressor/Overview.md (2 additions, 2 deletions)
@@ -1,7 +1,7 @@
# Model Compression with NNI
As larger neural networks with more layers and nodes are considered, reducing their storage and computational cost becomes critical, especially for some real-time applications. Model compression can be used to address this problem.

-We are glad to announce the alpha release for model compression toolkit on top of NNI, it's still in the experiment phase which might evolve based on usage feedback. We'd like to invite you to use, feedback and even contribute.
+We are glad to introduce the model compression toolkit on top of NNI. It is still in an experimental phase and may evolve based on usage feedback. We invite you to use it, give feedback, and contribute.

NNI provides an easy-to-use toolkit to help users design and apply compression algorithms. It currently supports PyTorch through a unified interface. To compress a model, users only need to add a few lines to their code. Several popular model compression algorithms are built into NNI. Users can further leverage NNI's auto-tuning power to find the best compressed model, as detailed in [Auto Model Compression](./AutoCompression.md). Users can also customize new compression algorithms through NNI's interface; see the tutorial [here](#customize-new-compression-algorithms).
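The "few lines of code" workflow above ultimately boils down to masking out unimportant weights. Below is a minimal NumPy sketch of the idea behind a level (magnitude) pruner; it illustrates the concept only and is not NNI's actual API (`level_prune` is a hypothetical helper name):

```python
import numpy as np

def level_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude weights until `sparsity`
    fraction of the tensor is zero (ties at the threshold may
    prune slightly more)."""
    k = int(weights.size * sparsity)
    if k == 0:
        return weights.copy()
    # k-th smallest absolute value acts as the pruning threshold.
    threshold = np.sort(np.abs(weights), axis=None)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask

w = np.array([0.9, -0.05, 0.4, -0.7, 0.01, 0.3])
pruned = level_prune(w, sparsity=0.5)  # half of the entries become zero
```

A real pruner in NNI additionally hooks into the training loop so the masks are applied (and possibly re-computed) as the model trains.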

@@ -335,7 +335,7 @@ class YourQuantizer(Quantizer):
If you do not customize `QuantGrad`, the default backward pass uses the Straight-Through Estimator.
_Coming Soon_ ...
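As a concrete illustration of that default behavior, here is a small NumPy sketch of a Straight-Through Estimator: the forward pass rounds activations to discrete levels, while the backward pass pretends the rounding was the identity and passes the gradient through (gated to the clipping range, where the forward is non-constant). This is a conceptual sketch with hypothetical function names, not NNI's `QuantGrad` interface:

```python
import numpy as np

def quant_forward(x, bits=2):
    # Uniform quantization of values in [0, 1] to 2**bits levels.
    levels = 2 ** bits - 1
    return np.round(np.clip(x, 0.0, 1.0) * levels) / levels

def quant_backward_ste(x, grad_output):
    # Straight-Through Estimator: treat round() as identity, so the
    # incoming gradient passes through unchanged inside the clipping
    # range and is zeroed outside it.
    pass_through = (x >= 0.0) & (x <= 1.0)
    return grad_output * pass_through

x = np.array([0.1, 0.45, 0.8, 1.3])
y = quant_forward(x)                         # quantized activations
g = quant_backward_ste(x, np.ones_like(x))   # gradient w.r.t. x
```

Without some estimator like this, the rounding step would have zero gradient almost everywhere and the quantized network could not be trained.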

-## **Reference and Feedback**
+## Reference and Feedback
* To [report a bug](https://github.com/microsoft/nni/issues/new?template=bug-report.md) for this feature in GitHub;
* To [file a feature or improvement request](https://github.com/microsoft/nni/issues/new?template=enhancement.md) for this feature in GitHub;
* To know more about [Feature Engineering with NNI](https://github.com/microsoft/nni/blob/master/docs/en_US/FeatureEngineering/Overview.md);
docs/en_US/FeatureEngineering/Overview.md (4 additions, 4 deletions)
@@ -7,7 +7,7 @@ For now, we support the following feature selector:
- [GBDTSelector](./GBDTSelector.md)


-# How to use?
+## How to use?

```python
from nni.feature_engineering.gradient_selector import GradientFeatureSelector
```

@@ -30,7 +30,7 @@ print(fgs.get_selected_features(...))

When using a built-in selector, first `import` the feature selector and `initialize` it. Call the selector's `fit` function to pass your data to it; after that, use `get_selected_features` to retrieve the important features. The function parameters differ between selectors, so check the docs before using one.

-# How to customize?
+## How to customize?

NNI provides _state-of-the-art_ feature selector algorithms as built-in selectors. NNI also supports building a feature selector yourself.
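To make the `fit` / `get_selected_features` protocol concrete, here is a toy selector that ranks features by variance. The class name and its scoring rule are illustrative assumptions, not an NNI built-in; a real custom selector should derive from NNI's base selector interface instead:

```python
import numpy as np

class VarianceSelector:
    """Toy selector following the fit / get_selected_features protocol
    described above; ranks feature columns by variance (illustrative
    only, not part of NNI)."""

    def __init__(self, top_k=2):
        self.top_k = top_k
        self._scores = None

    def fit(self, X, y=None):
        X = np.asarray(X, dtype=float)
        self._scores = X.var(axis=0)  # per-column variance as the score
        return self

    def get_selected_features(self):
        if self._scores is None:
            raise RuntimeError("call fit() first")
        # Indices of the top_k highest-variance columns.
        return np.argsort(self._scores)[::-1][:self.top_k].tolist()

X = [[1.0, 0.0,  10.0],
     [1.0, 1.0, -10.0],
     [1.0, 0.0,  10.0]]
selector = VarianceSelector(top_k=1).fit(X)
selected = selector.get_selected_features()  # the most variable column
```

The same two-method shape is what lets selectors be swapped interchangeably in the usage pattern shown earlier.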

@@ -239,7 +239,7 @@ print("Pipeline Score: ", pipeline.score(X_train, y_train))

```

-# Benchmark
+## Benchmark

`Baseline` means no feature selection: the data is passed directly to LogisticRegression. For this benchmark, we use only 10% of the training data as test data. For the GradientFeatureSelector, we take only the top 20 features. The metric is the mean accuracy on the given test data and labels.
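The evaluation protocol just described (a 10% held-out split, a top-20 feature restriction, and mean accuracy as the score) can be sketched as follows. The helper names are hypothetical; this is not the benchmark script itself:

```python
import numpy as np

def top_k_columns(scores, k=20):
    """Indices of the k highest-scoring feature columns."""
    return np.argsort(np.asarray(scores))[::-1][:k]

def mean_accuracy(y_true, y_pred):
    """Fraction of correct predictions: the benchmark metric."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return float((y_true == y_pred).mean())

# Hold out 10% of the training rows as the test split, as in the setup above.
rng = np.random.default_rng(0)
n_rows = 100
idx = rng.permutation(n_rows)
test_idx, train_idx = idx[:n_rows // 10], idx[n_rows // 10:]

acc = mean_accuracy([1, 0, 1, 1], [1, 0, 0, 1])  # 3 of 4 correct
```

In the real benchmark the retained columns feed a LogisticRegression model, and `mean_accuracy` corresponds to its `score` on the held-out rows.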

@@ -257,7 +257,7 @@ The benchmark dataset can be downloaded from [here](https://www.csie.ntu.edu.tw

The code can be referenced at `/examples/feature_engineering/gradient_feature_selector/benchmark_test.py`.

-## **Reference and Feedback**
+## Reference and Feedback
* To [report a bug](https://github.com/microsoft/nni/issues/new?template=bug-report.md) for this feature in GitHub;
* To [file a feature or improvement request](https://github.com/microsoft/nni/issues/new?template=enhancement.md) for this feature in GitHub;
* To know more about [Neural Architecture Search with NNI](https://github.com/microsoft/nni/blob/master/docs/en_US/NAS/Overview.md);
docs/en_US/NAS/CDARTS.md (0 additions, 4 deletions)
@@ -46,16 +46,12 @@ bash run_retrain_cifar.sh
.. autoclass:: nni.nas.pytorch.cdarts.CdartsTrainer
:members:

-.. automethod:: __init__
-
.. autoclass:: nni.nas.pytorch.cdarts.RegularizedDartsMutator
:members:

.. autoclass:: nni.nas.pytorch.cdarts.DartsDiscreteMutator
:members:

-.. automethod:: __init__
-
.. autoclass:: nni.nas.pytorch.cdarts.RegularizedMutatorParallel
:members:
```
docs/en_US/NAS/DARTS.md (4 additions, 2 deletions)
@@ -43,8 +43,10 @@ python3 retrain.py --arc-checkpoint ./checkpoints/epoch_49.json
.. autoclass:: nni.nas.pytorch.darts.DartsTrainer
:members:

.. automethod:: __init__

.. autoclass:: nni.nas.pytorch.darts.DartsMutator
:members:
```

## Limitations

* DARTS doesn't support DataParallel and needs to be customized in order to support DistributedDataParallel.
docs/en_US/NAS/ENAS.md (0 additions, 4 deletions)
@@ -37,10 +37,6 @@ python3 search.py -h
.. autoclass:: nni.nas.pytorch.enas.EnasTrainer
:members:

-.. automethod:: __init__
-
.. autoclass:: nni.nas.pytorch.enas.EnasMutator
:members:

-.. automethod:: __init__
```