pull code #33

Merged: 18 commits, merged Oct 8, 2019
39 changes: 25 additions & 14 deletions README_zh_CN.md
@@ -10,7 +10,7 @@

NNI (Neural Network Intelligence) is a toolkit for automated machine learning (AutoML). It searches for the best neural network architecture and/or hyperparameters with a variety of tuning algorithms, and supports different running environments such as a single machine, multiple local machines, and the cloud.

### **NNI [v1.0](https://github.com/Microsoft/nni/releases) has been released! &nbsp;[<img width="48" src="docs/img/release_icon.png" />](#nni-released-reminder)**
### **NNI [v1.0](https://github.com/Microsoft/nni/blob/master/docs/zh_CN/Release_v1.0.md) has been released! &nbsp;[<img width="48" src="docs/img/release_icon.png" />](#nni-released-reminder)**

<p align="center">
<a href="#nni-has-been-released"><img src="docs/img/overview.svg" /></a>
@@ -22,7 +22,7 @@ NNI (Neural Network Intelligence) is a toolkit for automated machine learning (AutoML)
<td>
</td>
<td>
<b>Frameworks and Libraries</b>
<b>Supported Frameworks and Libraries</b>
<img src="docs/img/bar.png"/>
</td>
<td>
@@ -62,8 +62,8 @@ NNI (Neural Network Intelligence) is a toolkit for automated machine learning (AutoML)
<ul>
<li><b>Examples</b></li>
<ul>
<li><a href="examples/trials/mnist-distributed-pytorch">MNIST-pytorch</li></a>
<li><a href="examples/trials/mnist-distributed">MNIST-tensorflow</li></a>
<li><a href="examples/trials/mnist-pytorch">MNIST-pytorch</li></a>
<li><a href="examples/trials/mnist">MNIST-tensorflow</li></a>
<li><a href="examples/trials/mnist-keras">MNIST-keras</li></a>
<li><a href="docs/zh_CN/TrialExample/GbdtExample.md">Auto-gbdt</a></li>
<li><a href="docs/zh_CN/TrialExample/Cifar10Examples.md">Cifar10-pytorch</li></a>
@@ -92,7 +92,7 @@ NNI (Neural Network Intelligence) is a toolkit for automated machine learning (AutoML)
<li><a href="docs/zh_CN/Tuner/BuiltinTuner.md#BOHB">BOHB</a></li>
<li><a href="docs/zh_CN/Tuner/BuiltinTuner.md#GPTuner">GP Tuner</a></li>
</ul>
<li><b><a href="docs/zh_CN/CommunitySharings/NasComparision.md">网络结构搜索</a> Tuner</b></li>
<li><b><a href="docs/zh_CN/AdvancedFeature/GeneralNasInterfaces.md">NAS</a> Tuner</b></li>
<ul>
<li><a href="docs/zh_CN/Tuner/BuiltinTuner.md#NetworkMorphism">Network Morphism</a></li>
<li><a href="examples/tuners/enas_nni/README_zh_CN.md">ENAS</a></li>
@@ -332,19 +332,30 @@ You can use these commands to get more information about the experiment
* [Customize Tuner](docs/zh_CN/Tuner/CustomizeTuner.md)
* [Implement a custom training service](docs/zh_CN/TrainingService/HowToImplementTrainingService.md)

## **External Repositories**

Below are some NNI usage examples provided by contributors. Thanks to our lovely contributors, and we welcome more and more people to join us!

* Run [ENAS](examples/tuners/enas_nni/README_zh_CN.md) in NNI
* Run [Neural Architecture Search](examples/trials/nas_cifar10/README_zh_CN.md) in NNI
* [Automatic feature engineering with NNI](examples/trials/auto-feature-engineering/README_zh_CN.md)
## **Other Repositories and References**

Some NNI usage examples and relevant documents, published with their authors' permission.

* ### **External Repositories**

* Run [ENAS](examples/tuners/enas_nni/README_zh_CN.md) in NNI
* Run [Neural Architecture Search](examples/trials/nas_cifar10/README_zh_CN.md) in NNI
* [Automatic feature engineering with NNI](examples/trials/auto-feature-engineering/README_zh_CN.md)
* [Hyperparameter tuning for matrix factorization](https://github.com/microsoft/recommenders/blob/master/notebooks/04_model_select_and_optimize/nni_surprise_svd.ipynb) with NNI
* ### **Relevant Articles**

* [Hyperparameter optimization comparison](docs/zh_CN/CommunitySharings/HpoComparision.md)
* [Neural architecture search comparison](docs/zh_CN/CommunitySharings/NasComparision.md)
* [Parallelizing a sequential algorithm: TPE](docs/zh_CN/CommunitySharings/ParallelizingTpeSearch.md)
* [Automatically tuning SVD with NNI](docs/zh_CN/CommunitySharings/RecommendersSvd.md)
* [Automatically tuning SPTAG with NNI](docs/zh_CN/CommunitySharings/SptagAutoTune.md)
* **Blog** - [Comparison of AutoML tools (Advisor, NNI and Google Vizier)](http://gaocegege.com/Blog/%E6%9C%BA%E5%99%A8%E5%AD%A6%E4%B9%A0/katib-new#%E6%80%BB%E7%BB%93%E4%B8%8E%E5%88%86%E6%9E%90) by [@gaocegege](https://github.com/gaocegege) - the summary and analysis section of its review of the design and implementation of kubeflow/katib

## **Feedback**

* Join the discussion on [Gitter](https://gitter.im/Microsoft/nni?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge)
* Ask questions on [Stack Overflow](https://stackoverflow.com/questions/tagged/nni?sort=Newest&edited=true) with the NNI tag
* Join the discussion on [Gitter](https://gitter.im/Microsoft/nni?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge).
* [File an issue on GitHub](https://github.com/microsoft/nni/issues/new/choose).
* Ask questions on [Stack Overflow](https://stackoverflow.com/questions/tagged/nni?sort=Newest&edited=true) with the nni tag.

## **License**

2 changes: 1 addition & 1 deletion azure-pipelines.yml
@@ -15,7 +15,7 @@ jobs:
displayName: 'Install nni toolkit via source code'
- script: |
python3 -m pip install flake8 --user
IGNORE=./tools/nni_annotation/testcase/*:F821,./examples/trials/mnist-nas/*/mnist*.py:F821
IGNORE=./tools/nni_annotation/testcase/*:F821,./examples/trials/mnist-nas/*/mnist*.py:F821,./examples/trials/nas_cifar10/src/cifar10/general_child.py:F821
python3 -m flake8 . --count --per-file-ignores=$IGNORE --select=E9,F63,F72,F82 --show-source --statistics
displayName: 'Run flake8 tests to find Python syntax errors and undefined names'
- script: |
4 changes: 2 additions & 2 deletions deployment/docker/Dockerfile
@@ -68,8 +68,8 @@ RUN python3 -m pip --no-cache-dir install Keras==2.1.6
#
# PyTorch
#
RUN python3 -m pip --no-cache-dir install torch==0.4.1
RUN python3 -m pip install torchvision==0.2.1
RUN python3 -m pip --no-cache-dir install torch==1.2.0
RUN python3 -m pip install torchvision==0.4.0

#
# sklearn 0.20.0
9 changes: 7 additions & 2 deletions docs/en_US/AdvancedFeature/MultiPhase.md
@@ -8,8 +8,6 @@ Typically each trial job gets a single configuration (e.g., hyperparameters) from

The above cases can be supported by the same feature, i.e., multi-phase execution. To support them, a trial job should be able to request multiple configurations from the tuner, and the tuner is aware of whether two configuration requests come from the same trial job or from different ones. In multi-phase mode, a trial job can also report multiple final results.

Note that `nni.get_next_parameter()` and `nni.report_final_result()` should be called sequentially: __call the former, then the latter, and repeat this pattern__. If `nni.get_next_parameter()` is called several times in a row and `nni.report_final_result()` is then called once, the result is associated with the last configuration, i.e., the one retrieved by the last `get_next_parameter` call. No result is associated with the earlier `get_next_parameter` calls, which may break some multi-phase algorithms.

## Create multi-phase experiment

### Write trial code which leverages multi-phase:
@@ -23,6 +21,9 @@ It is pretty simple to use multi-phase in trial code; an example is shown below:
for i in range(5):
# get parameter from tuner
tuner_param = nni.get_next_parameter()
# nni.get_next_parameter returns None if the tuner cannot generate any more hyperparameters.
if tuner_param is None:
break

# consume the params
# ...
@@ -32,6 +33,10 @@
# ...
```

In multi-phase experiments, each time the API `nni.get_next_parameter()` is called, it returns a new hyperparameter configuration generated by the tuner; the trial code then consumes this configuration and reports its final result. `nni.get_next_parameter()` and `nni.report_final_result()` should be called sequentially: __call the former, then the latter, and repeat this pattern__. If `nni.get_next_parameter()` is called several times in a row and `nni.report_final_result()` is then called once, the result is associated with the last configuration, i.e., the one retrieved by the last `get_next_parameter` call. No result is associated with the earlier `get_next_parameter` calls, which may break some multi-phase algorithms.

Note that `nni.get_next_parameter()` returns `None` if the tuner cannot generate any more hyperparameters.
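
For illustration, here is a minimal, self-contained trial sketch of this calling pattern; `train_and_evaluate` and its returned metric are placeholders for your own training code, not part of the NNI API.

```python
import nni


def train_and_evaluate(params):
    """Placeholder: train a model with `params` and return its metric."""
    return 0.0


if __name__ == '__main__':
    while True:
        # Request the next configuration from the tuner.
        params = nni.get_next_parameter()
        if params is None:
            # The tuner cannot generate any more configurations.
            break
        metric = train_and_evaluate(params)
        # Report the result; it is associated with the `params` obtained above.
        nni.report_final_result(metric)
```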

__2. Experiment configuration__

To enable multi-phase, you should also add `multiPhase: true` to your experiment YAML configuration file. If this line is not added, `nni.get_next_parameter()` would always return the same configuration. A minimal sketch of such a configuration is shown below.
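
Aside from `multiPhase: true`, which the text above specifies, every field value in this sketch (experiment name, tuner, trial command, and so on) is an illustrative placeholder rather than a recommended setting.

```yaml
# Minimal sketch of an experiment configuration with multi-phase enabled.
# All values other than `multiPhase: true` are illustrative placeholders.
authorName: default
experimentName: multi_phase_example
trialConcurrency: 1
maxExecDuration: 1h
maxTrialNum: 10
trainingServicePlatform: local
searchSpacePath: search_space.json
useAnnotation: false
multiPhase: true            # allow a trial to call nni.get_next_parameter() multiple times
tuner:
  builtinTunerName: TPE     # placeholder; pick a tuner that supports multi-phase
trial:
  command: python3 mytrial.py
  codeDir: .
  gpuNum: 0
```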