Merge pull request #80 from microsoft/master
pull code
Showing 100 changed files with 2,916 additions and 799 deletions.

# TextNAS

## Introduction

This is the implementation of the TextNAS algorithm proposed in the paper [TextNAS: A Neural Architecture Search Space tailored for Text Representation](https://arxiv.org/pdf/1912.10729.pdf). TextNAS is a neural architecture search algorithm tailored for text representation. Specifically, it is built on a novel search space consisting of operators widely adopted to solve various NLP tasks, and it supports multi-path ensemble within a single network to balance the width and depth of the architecture.

The search space of TextNAS contains:

* 1-D convolutional operator with filter size 1, 3, 5, 7
* recurrent operator (bi-directional GRU)
* self-attention operator
* pooling operator (max/average)

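The search algorithm assembles these operators into candidate layers. As a purely illustrative sketch (not the example's actual `ops.py`; every class name and shape convention here is an assumption), the candidates could be wrapped as PyTorch modules sharing a `(batch, channels, length)` interface:

```python
import torch
import torch.nn as nn

class ConvBN(nn.Module):
    """1-D convolution candidate with a given filter size (1, 3, 5 or 7)."""
    def __init__(self, channels, kernel_size):
        super().__init__()
        self.conv = nn.Conv1d(channels, channels, kernel_size,
                              padding=kernel_size // 2)  # keeps length for odd sizes
        self.bn = nn.BatchNorm1d(channels)

    def forward(self, x):  # x: (batch, channels, length)
        return torch.relu(self.bn(self.conv(x)))

class BiGRU(nn.Module):
    """Recurrent candidate: bi-directional GRU (assumes an even channel count)."""
    def __init__(self, channels):
        super().__init__()
        self.gru = nn.GRU(channels, channels // 2,
                          batch_first=True, bidirectional=True)

    def forward(self, x):
        out, _ = self.gru(x.transpose(1, 2))  # GRU expects (batch, length, channels)
        return out.transpose(1, 2)

class SelfAttention(nn.Module):
    """Self-attention candidate (single-head for brevity)."""
    def __init__(self, channels):
        super().__init__()
        self.attn = nn.MultiheadAttention(channels, num_heads=1, batch_first=True)

    def forward(self, x):
        seq = x.transpose(1, 2)
        out, _ = self.attn(seq, seq, seq)
        return out.transpose(1, 2)

class Pool(nn.Module):
    """Max/average pooling candidate that preserves the sequence length."""
    def __init__(self, mode="max"):
        super().__init__()
        pool = nn.MaxPool1d if mode == "max" else nn.AvgPool1d
        self.pool = pool(kernel_size=3, stride=1, padding=1)

    def forward(self, x):
        return self.pool(x)
```

In the actual example, these operators are implemented in `ops.py` and composed into the search model in `model.py`.
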
Following the ENAS algorithm, TextNAS also utilizes parameter sharing to accelerate the search and adopts a reinforcement-learning controller for architecture sampling and generation. Please refer to the paper for more details.

## Preparation

Prepare the word vectors and the SST dataset, and organize them in the `data` directory as shown below:

```
textnas
├── data
│   ├── sst
│   │   └── trees
│   │       ├── dev.txt
│   │       ├── test.txt
│   │       └── train.txt
│   └── glove.840B.300d.txt
├── dataloader.py
├── model.py
├── ops.py
├── README.md
├── search.py
└── utils.py
```

The following links might be helpful for finding and downloading the corresponding datasets:

* [GloVe: Global Vectors for Word Representation](https://nlp.stanford.edu/projects/glove/)
  * [glove.840B.300d.txt](http://nlp.stanford.edu/data/glove.840B.300d.zip)
* [Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank](https://nlp.stanford.edu/sentiment/)
  * [trainDevTestTrees_PTB.zip](https://nlp.stanford.edu/sentiment/trainDevTestTrees_PTB.zip)

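For instance, assuming the archive layouts described on the pages above (the GloVe zip expands to `glove.840B.300d.txt` and the SST zip to a `trees/` folder), the following commands would put everything in place:

```bash
# run from the textnas folder
mkdir -p data

# word vectors
wget http://nlp.stanford.edu/data/glove.840B.300d.zip
unzip glove.840B.300d.zip -d data

# SST dataset (expands to data/sst/trees)
mkdir -p data/sst
wget https://nlp.stanford.edu/sentiment/trainDevTestTrees_PTB.zip
unzip trainDevTestTrees_PTB.zip -d data/sst
```
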
## Examples

### Search Space

[Example code](https://github.com/microsoft/nni/tree/master/examples/nas/textnas)

```bash
# in case the NNI code is not cloned; if it is cloned already, skip this line
git clone https://github.com/Microsoft/nni.git

# enter the example folder and search the best architecture
cd examples/nas/textnas
python3 search.py

# view more options for search
python3 search.py -h
```

After each search epoch, 10 sampled architectures will be tested directly. Their performance is expected to be 40%–42% after 10 epochs.

By default, 20 sampled architectures will be exported into the `checkpoints` directory for the next step.

### Retrain

```bash
# in case the NNI code is not cloned; if it is cloned already, skip this line
git clone https://github.com/Microsoft/nni.git

# enter the example folder
cd examples/nas/textnas

# by default, retrain on SST-2
sh run_retrain.sh
```

## Reference

TextNAS directly uses EnasTrainer; please refer to [ENAS](./ENAS.md) for the trainer APIs.

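As a rough sketch of what wiring the example into the trainer might look like, based on the legacy `nni.nas.pytorch` (NNI v1.x) interface — argument names and signatures may differ between NNI versions, and `read_data_sst` plus the `Model` constructor arguments are assumptions, not the exact example code:

```python
import torch.nn as nn
import torch.optim as optim
from nni.nas.pytorch.enas import EnasTrainer  # legacy NNI v1.x NAS API

from model import Model                # search model defined in model.py
from dataloader import read_data_sst   # hypothetical loader name

def accuracy(output, target):
    # fraction of correct predictions
    return (output.argmax(dim=1) == target).float().mean().item()

# assumption: the loader returns torch Dataset objects for train/valid
dataset_train, dataset_valid = read_data_sst("data")
model = Model()

trainer = EnasTrainer(
    model,
    loss=nn.CrossEntropyLoss(),
    metrics=lambda output, target: {"acc": accuracy(output, target)},
    reward_function=accuracy,
    optimizer=optim.SGD(model.parameters(), lr=0.02),
    batch_size=128,
    num_epochs=10,
    dataset_train=dataset_train,
    dataset_valid=dataset_valid,
)
trainer.train()                                  # shared-weight search with RL controller
trainer.export("checkpoints/architecture.json")  # dump a sampled architecture
```
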
PBT Tuner on NNI
===

## PBTTuner

Population Based Training (PBT) comes from [Population Based Training of Neural Networks](https://arxiv.org/abs/1711.09846v1). It is a simple asynchronous optimization algorithm that effectively utilizes a fixed computational budget to jointly optimize a population of models and their hyperparameters to maximize performance. Importantly, PBT discovers a schedule of hyperparameter settings rather than following the generally sub-optimal strategy of trying to find a single fixed set to use for the whole course of training.

PBTTuner initializes a population with several trials. Users can set a specific number of training epochs. After a certain number of epochs, the parameters and hyperparameters of a trial with poor metrics are replaced with those of a better-performing trial (exploit), and the inherited hyperparameters are then perturbed (explore).

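A minimal sketch of one exploit-and-explore step, assuming a population of records holding hyperparameters, a score, and checkpoint directories — all names here are illustrative, not PBTTuner's internals:

```python
import copy
import random

def pbt_step(population, perturb_factors=(0.8, 1.2), bottom_frac=0.2):
    """One PBT step: the bottom trials copy a top trial (exploit),
    then jitter the copied hyperparameters (explore)."""
    ranked = sorted(population, key=lambda t: t["score"], reverse=True)
    cutoff = max(1, int(len(ranked) * bottom_frac))
    top, bottom = ranked[:cutoff], ranked[-cutoff:]

    for trial in bottom:
        donor = random.choice(top)
        # exploit: inherit the donor's hyperparameters and checkpoint
        trial["hyperparameters"] = copy.deepcopy(donor["hyperparameters"])
        trial["load_checkpoint_dir"] = donor["save_checkpoint_dir"]
        # explore: perturb each numeric hyperparameter
        for name, value in trial["hyperparameters"].items():
            if isinstance(value, (int, float)):
                trial["hyperparameters"][name] = value * random.choice(perturb_factors)
    return population
```
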
In our implementation, the training epochs executed by the trial code are regarded as one step of PBT, which differs from other tuners. At the end of each step, the PBT tuner performs exploitation and exploration, replacing some trials with new ones. This is implemented by constantly modifying the values of `load_checkpoint_dir` and `save_checkpoint_dir`: we can directly change `load_checkpoint_dir` to replace parameters and hyperparameters, and `save_checkpoint_dir` to save a checkpoint that will be loaded in the next step. To this end, we need a shared folder that is accessible to all trials.

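On the trial side, the flow might look like the following sketch. The `load_checkpoint_dir`/`save_checkpoint_dir` keys come from the tuner as described above; `build_model`, `train_one_epoch`, `evaluate`, and the `epochs_per_step` key are hypothetical helpers, not part of NNI:

```python
import os
import torch
import nni

params = nni.get_next_parameter()        # includes the two checkpoint dirs
model, optimizer = build_model(params)   # hypothetical helper

# resume from the checkpoint chosen by the tuner (exploit), if one exists
load_path = os.path.join(params["load_checkpoint_dir"], "model.pth")
if os.path.isfile(load_path):
    model.load_state_dict(torch.load(load_path))

# one PBT step == a fixed number of training epochs
for epoch in range(params.get("epochs_per_step", 1)):
    train_one_epoch(model, optimizer, params)  # hypothetical helper

# save a checkpoint for the next PBT step, then report the metric
os.makedirs(params["save_checkpoint_dir"], exist_ok=True)
torch.save(model.state_dict(),
           os.path.join(params["save_checkpoint_dir"], "model.pth"))
nni.report_final_result(evaluate(model))       # hypothetical evaluate()
```
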
If the experiment is running in local mode, users can provide an argument `all_checkpoint_dir`, which will be the base folder of `load_checkpoint_dir` and `save_checkpoint_dir` (`checkpoint_dir` is set to `all_checkpoint_dir/<population-id>/<step>`). By default, `all_checkpoint_dir` is set to `~/nni/experiments/<exp-id>/checkpoint`. If the experiment is in non-local mode, users should instead provide a path in a shared storage folder that is mounted at `all_checkpoint_dir` on the worker machines (it is not necessarily accessible from the machine that runs the tuner).
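
For example, in an NNI v1-style experiment YAML the tuner section might look like the sketch below; treat the exact class-argument names as an assumption and check your NNI version's PBTTuner reference:

```yaml
tuner:
  builtinTunerName: PBTTuner
  classArgs:
    optimize_mode: maximize
    # base folder for load/save_checkpoint_dir; must be on shared
    # storage mounted on all worker machines in non-local mode
    all_checkpoint_dir: /shared/nni/checkpoints
```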