This repository has been archived by the owner on Sep 18, 2024. It is now read-only.

pdarts update #1753

Merged: 31 commits, Nov 22, 2019
Commits (31)
- 10ec345  update pdarts to use new darts (squirrelsc, Nov 20, 2019)
- 5b68e91  duplicate darts code to support pdarts (Nov 20, 2019)
- 9f1fb8a  add abstract methods for pdarts (Nov 20, 2019)
- 3c8e95e  fix bug (Nov 20, 2019)
- 0676ba0  fix base trainer (Nov 20, 2019)
- 63ed173  fix bug on mutator (squirrelsc, Nov 20, 2019)
- 58b913b  try to improve performance (squirrelsc, Nov 20, 2019)
- a1646a5  optimize code to reduce duplicated files. (Nov 20, 2019)
- ccd2369  update document path (squirrelsc, Nov 20, 2019)
- e402dd8  update format (squirrelsc, Nov 20, 2019)
- 49e0aa5  update code to get validate run every time. (squirrelsc, Nov 20, 2019)
- 12b3c61  change urls to official ones. (squirrelsc, Nov 20, 2019)
- c8c466f  add header and simplify code (squirrelsc, Nov 20, 2019)
- 81c5070  fix bug that may get None (squirrelsc, Nov 20, 2019)
- 5746364  merge refactored code (squirrelsc, Nov 21, 2019)
- b821b12  update code for new refactoring (squirrelsc, Nov 21, 2019)
- e8c0646  fix call backs (squirrelsc, Nov 21, 2019)
- ef0d907  fix a bug on missing import (squirrelsc, Nov 21, 2019)
- 8322a95  fix runtime bug (squirrelsc, Nov 21, 2019)
- a58492a  fix callback code's location. (squirrelsc, Nov 21, 2019)
- c474820  fix previous bug throughtly... (squirrelsc, Nov 21, 2019)
- 269507b  update document and remove a duplicated line. (squirrelsc, Nov 21, 2019)
- 07b15cd  update logs (squirrelsc, Nov 21, 2019)
- 882e040  remove useless file (squirrelsc, Nov 21, 2019)
- 2f09e99  set log level to info (squirrelsc, Nov 21, 2019)
- 1837fae  fix format and test (squirrelsc, Nov 21, 2019)
- 0eddd37  add more logger for examples (squirrelsc, Nov 21, 2019)
- 2ed3161  fix log information (squirrelsc, Nov 22, 2019)
- d030fe0  fix pylint errors (squirrelsc, Nov 22, 2019)
- 380ed85  update logger names (squirrelsc, Nov 22, 2019)
- 85ccac0  add dependencies (squirrelsc, Nov 22, 2019)
2 changes: 1 addition & 1 deletion docs/en_US/AdvancedFeature/MultiPhase.md
@@ -79,7 +79,7 @@ With this information, the tuner could know which trial is requesting a configuration

### Tuners support multi-phase experiments:

[TPE](../Tuner/HyperoptTuner.md), [Random](../Tuner/HyperoptTuner.md), [Anneal](../Tuner/HyperoptTuner.md), [Evolution](../Tuner/EvolutionTuner.md), [SMAC](../Tuner/SmacTuner.md), [NetworkMorphism](../Tuner/NetworkmorphismTuner.md), [MetisTuner](../Tuner/MetisTuner.md), [BOHB](../Tuner/BohbAdvisor.md), [Hyperband](../Tuner/HyperbandAdvisor.md), [ENAS tuner](https://github.com/countif/enas_nni/blob/master/nni/examples/tuners/enas/nni_controller_ptb.py).
[TPE](../Tuner/HyperoptTuner.md), [Random](../Tuner/HyperoptTuner.md), [Anneal](../Tuner/HyperoptTuner.md), [Evolution](../Tuner/EvolutionTuner.md), [SMAC](../Tuner/SmacTuner.md), [NetworkMorphism](../Tuner/NetworkmorphismTuner.md), [MetisTuner](../Tuner/MetisTuner.md), [BOHB](../Tuner/BohbAdvisor.md), [Hyperband](../Tuner/HyperbandAdvisor.md).

### Training services support multi-phase experiment:
[Local Machine](../TrainingService/LocalMode.md), [Remote Servers](../TrainingService/RemoteMachineMode.md), [OpenPAI](../TrainingService/PaiMode.md)
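
The multi-phase feature referenced above lets one trial request several configurations from the tuner in sequence. Below is a minimal sketch of such a trial using the standard `nni.get_next_parameter()` / `nni.report_final_result()` calls; the `run_training` helper and the `None` check for an exhausted tuner are illustrative assumptions, not part of this PR.

```python
import nni

def run_training(params):
    # Hypothetical training routine; replace with real trial logic.
    return (params.get("x", 0.0) - 1.0) ** 2

if __name__ == "__main__":
    # A multi-phase trial asks the tuner for several configurations in turn
    # and reports one final result per configuration.
    for _ in range(4):
        params = nni.get_next_parameter()
        if params is None:  # assumed signal that the tuner has nothing more to offer
            break
        nni.report_final_result(run_training(params))
```
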
139 changes: 77 additions & 62 deletions docs/en_US/NAS/Overview.md
@@ -1,62 +1,77 @@
# Neural Architecture Search (NAS) on NNI

Automatic neural architecture search is taking an increasingly important role on finding better models. Recent research works have proved the feasibility of automatic NAS, and also found some models that could beat manually designed and tuned models. Some of representative works are [NASNet][2], [ENAS][1], [DARTS][3], [Network Morphism][4], and [Evolution][5]. There are new innovations keeping emerging.

However, it takes great efforts to implement NAS algorithms, and it is hard to reuse code base of existing algorithms in new one. To facilitate NAS innovations (e.g., design and implement new NAS models, compare different NAS models side-by-side), an easy-to-use and flexible programming interface is crucial.

With this motivation, our ambition is to provide a unified architecture in NNI, to accelerate innovations on NAS, and apply state-of-art algorithms on real world problems faster.

## Supported algorithms

NNI supports below NAS algorithms now, and being adding more. User can reproduce an algorithm, or use it on owned dataset. we also encourage user to implement other algorithms with [NNI API](#use-nni-api), to benefit more people.

Note, these algorithms run standalone without nnictl, and supports PyTorch only.

### DARTS

The main contribution of [DARTS: Differentiable Architecture Search][3] on algorithm is to introduce a novel algorithm for differentiable network architecture search on bilevel optimization.

#### Usage

```bash
### In case NNI code is not cloned.
git clone https://github.com/Microsoft/nni.git

cd examples/nas/darts
python search.py
```

### P-DARTS

[Progressive Differentiable Architecture Search: Bridging the Depth Gap between Search and Evaluation](https://arxiv.org/abs/1904.12760) bases on DARTS(#DARTS). It main contribution on algorithm is to introduce an efficient algorithm which allows the depth of searched architectures to grow gradually during the training procedure.

#### Usage

```bash
### In case NNI code is not cloned.
git clone https://github.com/Microsoft/nni.git

cd examples/nas/pdarts
python main.py
```

## Use NNI API

NOTE, we are trying to support various NAS algorithms with unified programming interface, and it's in very experimental stage. It means the current programing interface may be updated significantly.

*previous [NAS annotation](../AdvancedFeature/GeneralNasInterfaces.md) interface will be deprecated soon.*

### Programming interface

The programming interface of designing and searching a model is often demanded in two scenarios.

1. When designing a neural network, there may be multiple operation choices on a layer, sub-model, or connection, and it's undetermined which one or combination performs best. So it needs an easy way to express the candidate layers or sub-models.
2. When applying NAS on a neural network, it needs an unified way to express the search space of architectures, so that it doesn't need to update trial code for different searching algorithms.

NNI proposed API is [here](https://github.com/microsoft/nni/tree/dev-nas-refactor/src/sdk/pynni/nni/nas/pytorch). And [here](https://github.com/microsoft/nni/tree/dev-nas-refactor/examples/nas/darts) is an example of NAS implementation, which bases on NNI proposed interface.

[1]: https://arxiv.org/abs/1802.03268
[2]: https://arxiv.org/abs/1707.07012
[3]: https://arxiv.org/abs/1806.09055
[4]: https://arxiv.org/abs/1806.10282
[5]: https://arxiv.org/abs/1703.01041
# Neural Architecture Search (NAS) on NNI

Automatic neural architecture search is playing an increasingly important role in finding better models. Recent research has proved the feasibility of automatic NAS and has found models that beat manually designed and tuned ones. Representative works include [NASNet][2], [ENAS][1], [DARTS][3], [Network Morphism][4], and [Evolution][5]. New innovations keep emerging.

However, it takes great effort to implement a NAS algorithm, and it is hard to reuse the code base of existing algorithms in a new one. To facilitate NAS innovation (e.g., designing and implementing new NAS models, comparing different NAS models side by side), an easy-to-use and flexible programming interface is crucial.

With this motivation, our ambition is to provide a unified architecture in NNI to accelerate innovation on NAS and apply state-of-the-art algorithms to real-world problems faster.

## Supported algorithms

NNI currently supports the NAS algorithms listed below and is adding more. Users can reproduce an algorithm or apply it to their own datasets. We also encourage users to implement other algorithms with the [NNI API](#use-nni-api) to benefit more people.

Note that these algorithms run standalone without nnictl and support PyTorch only.

### Dependencies

* Install the latest NNI
* PyTorch 1.2+
* git

### DARTS

The main algorithmic contribution of [DARTS: Differentiable Architecture Search][3] is a novel method for differentiable network architecture search based on bilevel optimization.

#### Usage

```bash
# In case the NNI code is not cloned yet. If it is already cloned, skip this line and enter the code folder.
git clone https://github.com/Microsoft/nni.git

# search the best architecture
cd examples/nas/darts
python3 search.py

# train the best architecture
python3 retrain.py --arc-checkpoint ./checkpoints/epoch_49.json
```
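
For orientation, `search.py` essentially wires a model containing NNI mutables into a `DartsTrainer` and calls `train()`. Below is a minimal sketch based on the imports that appear later in this PR's `examples/nas/darts/search.py` diff; the `CNN` constructor arguments, the plain `ToTensor` data pipeline, and the hyper-parameter values are illustrative assumptions, not a verbatim excerpt of the example.

```python
import torch
import torch.nn as nn
from torchvision import datasets, transforms

from nni.nas.pytorch.callbacks import ArchitectureCheckpoint, LearningRateScheduler
from nni.nas.pytorch.darts import DartsTrainer

from model import CNN        # the example's search model (constructor arguments below are assumed)
from utils import accuracy   # the example's top-k accuracy metric

# The real script applies the usual CIFAR-10 augmentation; ToTensor keeps the sketch short.
transform = transforms.Compose([transforms.ToTensor()])
dataset_train = datasets.CIFAR10("./data", train=True, download=True, transform=transform)
dataset_valid = datasets.CIFAR10("./data", train=False, download=True, transform=transform)

model = CNN(32, 3, 16, 10, 8)  # input size, input channels, channels, classes, layers (assumed)
optimizer = torch.optim.SGD(model.parameters(), 0.025, momentum=0.9, weight_decay=3e-4)
lr_scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=50, eta_min=0.001)

trainer = DartsTrainer(
    model,
    loss=nn.CrossEntropyLoss(),
    metrics=lambda output, target: accuracy(output, target, topk=(1,)),
    optimizer=optimizer,
    num_epochs=50,
    dataset_train=dataset_train,
    dataset_valid=dataset_valid,
    batch_size=64,
    log_frequency=10,
    # Callbacks step the LR schedule and dump the best architecture after each epoch.
    callbacks=[LearningRateScheduler(lr_scheduler), ArchitectureCheckpoint("./checkpoints")],
)
trainer.train()  # alternates architecture (alpha) updates and weight updates
```
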

### P-DARTS

[Progressive Differentiable Architecture Search: Bridging the Depth Gap between Search and Evaluation](https://arxiv.org/abs/1904.12760) is based on [DARTS](#darts). Its main algorithmic contribution is an efficient scheme that allows the depth of searched architectures to grow gradually during training.

#### Usage

```bash
# In case the NNI code is not cloned yet. If it is already cloned, skip this line and enter the code folder.
git clone https://github.com/Microsoft/nni.git

# search the best architecture
cd examples/nas/pdarts
python3 search.py

# train the best architecture; the procedure is the same as for DARTS
cd examples/nas/darts
python3 retrain.py --arc-checkpoint ./checkpoints/epoch_2.json
```

## Use NNI API

NOTE: we are trying to support various NAS algorithms with a unified programming interface, which is still at a very experimental stage. This means the current programming interface may change significantly.

*The previous [NAS annotation](../AdvancedFeature/GeneralNasInterfaces.md) interface will be deprecated soon.*

### Programming interface

A programming interface for designing and searching a model is typically needed in two scenarios.

1. When designing a neural network, there may be multiple operation choices for a layer, sub-model, or connection, and it is unknown which one or which combination performs best. So an easy way to express the candidate layers or sub-models is needed.
2. When applying NAS to a neural network, a unified way to express the architecture search space is needed, so that trial code does not have to change for different search algorithms.

The proposed NNI API is [here](https://github.com/microsoft/nni/tree/master/src/sdk/pynni/nni/nas/pytorch), and [here](https://github.com/microsoft/nni/tree/master/examples/nas/darts) is an example NAS implementation based on it.
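
As a concrete illustration of scenario 1, a candidate layer can be declared with the mutable primitives from the linked `nni.nas.pytorch` package. The sketch below is an assumption-based example of `LayerChoice`; treat the exact class names and signatures as tentative, since the interface is still experimental.

```python
import torch.nn as nn
from nni.nas.pytorch import mutables

class SearchCell(nn.Module):
    """One layer whose operation is left for the NAS algorithm to choose."""
    def __init__(self, channels):
        super().__init__()
        # Candidate operations for this position; the mutator/trainer decides
        # which one (or which weighted mix, for DARTS) is actually used.
        self.op = mutables.LayerChoice([
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.Conv2d(channels, channels, 5, padding=2),
            nn.MaxPool2d(3, stride=1, padding=1),
        ])

    def forward(self, x):
        return self.op(x)
```
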

[1]: https://arxiv.org/abs/1802.03268
[2]: https://arxiv.org/abs/1707.07012
[3]: https://arxiv.org/abs/1806.09055
[4]: https://arxiv.org/abs/1806.10282
[5]: https://arxiv.org/abs/1703.01041
8 changes: 0 additions & 8 deletions docs/en_US/Tutorial/SearchSpaceSpec.md
@@ -73,12 +73,6 @@ All types of sampling strategies and their parameter are listed here:
* Which means the variable value is a value like `round(exp(normal(mu, sigma)) / q) * q`
* Suitable for a discrete variable with respect to which the objective is smooth and gets smoother with the size of the variable, which is bounded from one side.

* `{"_type": "mutable_layer", "_value": {mutable_layer_infomation}}`
* Type for [Neural Architecture Search Space][1]. Value is also a dictionary, which contains key-value pairs representing respectively name and search space of each mutable_layer.
* For now, users can only use this type of search space with annotation, which means that there is no need to define a json file for search space since it will be automatically generated according to the annotation in trial code.
* The following HPO tuners can be adapted to tune this search space: TPE, Random, Anneal, Evolution, Grid Search,
Hyperband and BOHB.
* For detailed usage, please refer to [General NAS Interfaces][1].

## Search Space Types Supported by Each Tuner

@@ -105,5 +99,3 @@ Known Limitations:
* Only Random Search/TPE/Anneal/Evolution tuner supports nested search space

* We do not support nested search space "Hyper Parameter" in visualization now, the enhancement is being considered in [#1110](https://github.com/microsoft/nni/issues/1110), any suggestions or discussions or contributions are warmly welcomed

[1]: ../AdvancedFeature/GeneralNasInterfaces.md
2 changes: 0 additions & 2 deletions docs/en_US/advanced.rst
@@ -3,5 +3,3 @@ Advanced Features

.. toctree::
MultiPhase<./AdvancedFeature/MultiPhase>
AdvancedNas<./AdvancedFeature/AdvancedNas>
NAS Programming Interface<./AdvancedFeature/GeneralNasInterfaces>
14 changes: 12 additions & 2 deletions examples/nas/darts/retrain.py
@@ -1,4 +1,5 @@
import logging
import time
from argparse import ArgumentParser

import torch
@@ -10,8 +11,17 @@
from nni.nas.pytorch.fixed import apply_fixed_architecture
from nni.nas.pytorch.utils import AverageMeter

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)
logger = logging.getLogger()

fmt = '[%(asctime)s] %(levelname)s (%(name)s/%(threadName)s) %(message)s'
logging.Formatter.converter = time.localtime
formatter = logging.Formatter(fmt, '%m/%d/%Y, %I:%M:%S %p')

std_out_info = logging.StreamHandler()
std_out_info.setFormatter(formatter)
logger.setLevel(logging.INFO)
logger.addHandler(std_out_info)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")


17 changes: 15 additions & 2 deletions examples/nas/darts/search.py
@@ -1,14 +1,27 @@
import logging
import time
from argparse import ArgumentParser

import datasets
import torch
import torch.nn as nn

import datasets
from model import CNN
from nni.nas.pytorch.callbacks import LearningRateScheduler, ArchitectureCheckpoint
from nni.nas.pytorch.callbacks import (ArchitectureCheckpoint,
LearningRateScheduler)
from nni.nas.pytorch.darts import DartsTrainer
from utils import accuracy

logger = logging.getLogger()

fmt = '[%(asctime)s] %(levelname)s (%(name)s/%(threadName)s) %(message)s'
logging.Formatter.converter = time.localtime
formatter = logging.Formatter(fmt, '%m/%d/%Y, %I:%M:%S %p')

std_out_info = logging.StreamHandler()
std_out_info.setFormatter(formatter)
logger.setLevel(logging.INFO)
logger.addHandler(std_out_info)

if __name__ == "__main__":
parser = ArgumentParser("darts")
13 changes: 13 additions & 0 deletions examples/nas/enas/search.py
@@ -1,3 +1,5 @@
import logging
import time
from argparse import ArgumentParser

import torch
@@ -10,6 +12,17 @@
from nni.nas.pytorch.callbacks import LearningRateScheduler, ArchitectureCheckpoint
from utils import accuracy, reward_accuracy

logger = logging.getLogger()

fmt = '[%(asctime)s] %(levelname)s (%(name)s/%(threadName)s) %(message)s'
logging.Formatter.converter = time.localtime
formatter = logging.Formatter(fmt, '%m/%d/%Y, %I:%M:%S %p')

std_out_info = logging.StreamHandler()
std_out_info.setFormatter(formatter)
logger.setLevel(logging.INFO)
logger.addHandler(std_out_info)

if __name__ == "__main__":
parser = ArgumentParser("enas")
parser.add_argument("--batch-size", default=128, type=int)
25 changes: 0 additions & 25 deletions examples/nas/pdarts/datasets.py

This file was deleted.

65 changes: 0 additions & 65 deletions examples/nas/pdarts/main.py

This file was deleted.
