This repository has been archived by the owner on Sep 18, 2024. It is now read-only.

Commit

Improve grammar, spelling, and wording within the English documentation. (#2223)

* Fix broken English in the Overview

Fix a lot of awkward or misleading phrasing.
Fix a few spelling errors.
Fix past tense vs. present tense issues (can vs. could, supports vs. supported).

* Sentences typically shouldn't begin with "and"; fixed in installation.rst.

* Fixed a bit of bad grammar and awkward phrasing in the Linux installation instructions.

* An additional, single correction in the Linux instructions.

* Fix a lot of bad grammar and phrasing in the Windows installation instructions

* Fix a variety of grammar and spelling problems in the Docker instructions

Lots of awkward phrasing.
Lots of tense issues (could vs. can).
Lots of spelling errors (especially "offical").
Lots of missing articles.
Docker is a proper noun and should be capitalized.

* Missing article in the Windows install instructions.

* Change some "refer to this"s to "see here"s.

* Fix a lot of bad grammar and confusing wording in Quick Start

tab "something" should be the "something" tab.
Tense issues (e.g., Modified vs. Modify).

* Fix some awkward phrasing in the hyperparameter tuning directory.

* Clean up grammar and phrasing in trial setup.

* Fix broken English in the tuner directory.

* Correct a bunch of bad wording throughout the Hyperparameter Tuning overview

Lots of missing articles.
Swapped out "Example usage" for "Example config", because that's what it is; usage isn't exemplified at all.
I have no idea what the note at the end of the TPE section is trying to say, so I left it untouched, but it should be changed to something that makes sense.

* Fix, as best I can, the weird wording in the TPE, Random Search, and Anneal tuners

Fixed many incomplete sentences and bad wording.
The first sentence in the Parallel TPE optimization section doesn't make sense to me, but I left it in case it's supposed to be that way. That sentence was copied from the blog post.

* Improve wording in the naive evolution description.

* Minor changes to SMAC page wording.

* Improve some wording, but mostly formatting, on the Metis Tuner page.

* Minor grammatical fix on the Metis page.

* Minor edits to the Batch tuner description.

* Minor fixes to the grid search description.

* Better wording for the GPTuner description.

* Fix a lot of wording in the Network Morphism description.

* Improve wording in the Hyperband description.

* Fix a lot of confusing wording, spelling, and grammatical errors in the BOHB description.

* Fix a lot of confusing and some redundant wording in the PPOTuner description.

* Improve wording in the Builtin Assessors overview.

* Fix some wording in the Assessor overview.

* Improved some wording in the Median Assessor's description.

* Improve wording and grammar in the curve fitting assessor description.

* Improved some grammar and wording on the WebUI tutorial page.

* Improved wording and grammar in the NAS overview.

Also deletes one redundant copy of a note that was stated twice.

* Improved grammar and wording in the NAS quickstart.

* Improve much of the wording and grammar in the NAS guide.

* Replace "Requirement of classArg" with "classArgs requirements:" in two files

The tuner and builtin assessor docs: one instance in HyperoptTuner.md and one in BuiltinAssessor.md.

Co-authored-by: AHartNtkn <AHartNtkn@users.noreply.github.com>
AHartNtkn authored Mar 24, 2020
1 parent d1bc0cf commit 86a27f4
Showing 29 changed files with 414 additions and 417 deletions.
30 changes: 15 additions & 15 deletions docs/en_US/Assessor/BuiltinAssessor.md
@@ -1,21 +1,21 @@
# Built-in Assessors

NNI provides state-of-the-art tuning algorithm in our builtin-assessors and makes them easy to use. Below is the brief overview of NNI current builtin Assessors:
NNI provides state-of-the-art tuning algorithms within our builtin-assessors and makes them easy to use. Below is a brief overview of NNI's current builtin Assessors.

Note: Click the **Assessor's name** to get the Assessor's installation requirements, suggested scenario and using example. The link for a detailed description of the algorithm is at the end of the suggested scenario of each Assessor.
Note: Click the **Assessor's name** to get each Assessor's installation requirements, suggested usage scenario, and a config example. A link to a detailed description of each algorithm is provided at the end of the suggested scenario for each Assessor.

Currently we support the following Assessors:
Currently, we support the following Assessors:

|Assessor|Brief Introduction of Algorithm|
|---|---|
|[__Medianstop__](#MedianStop)|Medianstop is a simple early stopping rule. It stops a pending trial X at step S if the trial’s best objective value by step S is strictly worse than the median value of the running averages of all completed trials’ objectives reported up to step S. [Reference Paper](https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/46180.pdf)|
|[__Curvefitting__](#Curvefitting)|Curve Fitting Assessor is a LPA(learning, predicting, assessing) algorithm. It stops a pending trial X at step S if the prediction of final epoch's performance worse than the best final performance in the trial history. In this algorithm, we use 12 curves to fit the accuracy curve. [Reference Paper](http://aad.informatik.uni-freiburg.de/papers/15-IJCAI-Extrapolation_of_Learning_Curves.pdf)|
|[__Curvefitting__](#Curvefitting)|Curve Fitting Assessor is an LPA (learning, predicting, assessing) algorithm. It stops a pending trial X at step S if the prediction of the final epoch's performance worse than the best final performance in the trial history. In this algorithm, we use 12 curves to fit the accuracy curve. [Reference Paper](http://aad.informatik.uni-freiburg.de/papers/15-IJCAI-Extrapolation_of_Learning_Curves.pdf)|

## Usage of Builtin Assessors

Use builtin assessors provided by NNI SDK requires to declare the **builtinAssessorName** and **classArgs** in `config.yml` file. In this part, we will introduce the detailed usage about the suggested scenarios, classArg requirements, and example for each assessor.
Usage of builtin assessors provided by the NNI SDK requires one to declare the **builtinAssessorName** and **classArgs** in the `config.yml` file. In this part, we will introduce the details of usage and the suggested scenarios, classArg requirements, and an example for each assessor.

Note: Please follow the format when you write your `config.yml` file.
Note: Please follow the provided format when writing your `config.yml` file.

<a name="MedianStop"></a>

@@ -25,12 +25,12 @@ Note: Please follow the format when you write your `config.yml` file.
**Suggested scenario**

It is applicable in a wide range of performance curves, thus, can be used in various scenarios to speed up the tuning progress. [Detailed Description](./MedianstopAssessor.md)
It's applicable in a wide range of performance curves, thus, it can be used in various scenarios to speed up the tuning progress. [Detailed Description](./MedianstopAssessor.md)

**Requirement of classArg**
**classArgs requirements:**

* **optimize_mode** (*maximize or minimize, optional, default = maximize*) - If 'maximize', assessor will **stop** the trial with smaller expectation. If 'minimize', assessor will **stop** the trial with larger expectation.
* **start_step** (*int, optional, default = 0*) - A trial is determined to be stopped or not, only after receiving start_step number of reported intermediate results.
* **start_step** (*int, optional, default = 0*) - A trial is determined to be stopped or not only after receiving start_step number of reported intermediate results.

**Usage example:**
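A minimal `config.yml` sketch for Medianstop, following the YAML layout used in the config examples elsewhere in these docs (the concrete values are only illustrative):

```yaml
assessor:
  # use the builtin Medianstop assessor
  builtinAssessorName: Medianstop
  classArgs:
    # 'maximize' stops trials whose running average is worse (lower) than the median
    optimize_mode: maximize
    # wait for this many intermediate results before judging a trial
    start_step: 5
```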

@@ -53,15 +53,15 @@ assessor:

**Suggested scenario**

It is applicable in a wide range of performance curves, thus, can be used in various scenarios to speed up the tuning progress. Even better, it's able to handle and assess curves with similar performance. [Detailed Description](./CurvefittingAssessor.md)
It's applicable in a wide range of performance curves, thus, it can be used in various scenarios to speed up the tuning progress. Even better, it's able to handle and assess curves with similar performance. [Detailed Description](./CurvefittingAssessor.md)

**Requirement of classArg**
**classArgs requirements:**

* **epoch_num** (*int, **required***) - The total number of epoch. We need to know the number of epoch to determine which point we need to predict.
* **epoch_num** (*int, **required***) - The total number of epochs. We need to know the number of epochs to determine which points we need to predict.
* **optimize_mode** (*maximize or minimize, optional, default = maximize*) - If 'maximize', assessor will **stop** the trial with smaller expectation. If 'minimize', assessor will **stop** the trial with larger expectation.
* **start_step** (*int, optional, default = 6*) - A trial is determined to be stopped or not, we start to predict only after receiving start_step number of reported intermediate results.
* **threshold** (*float, optional, default = 0.95*) - The threshold that we decide to early stop the worse performance curve. For example: if threshold = 0.95, optimize_mode = maximize, best performance in the history is 0.9, then we will stop the trial which predict value is lower than 0.95 * 0.9 = 0.855.
* **gap** (*int, optional, default = 1*) - The gap interval between Assesor judgements. For example: if gap = 2, start_step = 6, then we will assess the result when we get 6, 8, 10, 12...intermedian result.
* **start_step** (*int, optional, default = 6*) - A trial is determined to be stopped or not only after receiving start_step number of reported intermediate results.
* **threshold** (*float, optional, default = 0.95*) - The threshold that we use to decide to early stop the worst performance curve. For example: if threshold = 0.95, optimize_mode = maximize, and the best performance in the history is 0.9, then we will stop the trial who's predicted value is lower than 0.95 * 0.9 = 0.855.
* **gap** (*int, optional, default = 1*) - The gap interval between Assesor judgements. For example: if gap = 2, start_step = 6, then we will assess the result when we get 6, 8, 10, 12...intermediate results.

**Usage example:**
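A minimal `config.yml` sketch for the Curvefitting assessor, using the classArgs listed above (the concrete values are only illustrative):

```yaml
assessor:
  # use the builtin Curvefitting assessor
  builtinAssessorName: Curvefitting
  classArgs:
    # total number of epochs, i.e., the target point to predict
    epoch_num: 20
    optimize_mode: maximize
    # start predicting after this many intermediate results
    start_step: 6
    # stop a trial whose predicted value is below threshold * best final performance
    threshold: 0.95
    # assess every `gap` intermediate results
    gap: 1
```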

26 changes: 13 additions & 13 deletions docs/en_US/Assessor/CurvefittingAssessor.md
@@ -2,31 +2,31 @@ Curve Fitting Assessor on NNI
===

## 1. Introduction
Curve Fitting Assessor is a LPA(learning, predicting, assessing) algorithm. It stops a pending trial X at step S if the prediction of final epoch's performance is worse than the best final performance in the trial history.
The Curve Fitting Assessor is an LPA (learning, predicting, assessing) algorithm. It stops a pending trial X at step S if the prediction of the final epoch's performance is worse than the best final performance in the trial history.

In this algorithm, we use 12 curves to fit the learning curve, the large set of parametric curve models are chosen from [reference paper][1]. The learning curves' shape coincides with our prior knowlwdge about the form of learning curves: They are typically increasing, saturating functions.
In this algorithm, we use 12 curves to fit the learning curve. The set of parametric curve models are chosen from this [reference paper][1]. The learning curves' shape coincides with our prior knowledge about the form of learning curves: They are typically increasing, saturating functions.

![](../../img/curvefitting_learning_curve.PNG)

We combine all learning curve models into a single, more powerful model. This combined model is given by a weighted linear combination:

![](../../img/curvefitting_f_comb.gif)

where the new combined parameter vector
with the new combined parameter vector

![](../../img/curvefitting_expression_xi.gif)

Assuming additive a Gaussian noise and the noise parameter is initialized to its maximum likelihood estimate.
Assuming additive Gaussian noise and the noise parameter being initialized to its maximum likelihood estimate.
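Following the referenced paper, the weighted combination, the combined parameter vector \xi, and the additive Gaussian noise model can be sketched roughly as (K is the number of parametric curve models, 12 here):

```latex
f_{\mathrm{comb}}(t \mid \xi) = \sum_{k=1}^{K} w_k \, f_k(t \mid \theta_k), \qquad
\xi = (w_1, \ldots, w_K, \theta_1, \ldots, \theta_K, \sigma^2)

y_t = f_{\mathrm{comb}}(t \mid \xi) + \varepsilon, \qquad \varepsilon \sim \mathcal{N}(0, \sigma^2)
```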

We determine the maximum probability value of the new combined parameter vector by learing the historical data. Use such value to predict the future trial performance, and stop the inadequate experiments to save computing resource.
We determine the maximum probability value of the new combined parameter vector by learning the historical data. We use such a value to predict future trial performance and stop the inadequate experiments to save computing resources.

Concretely,this algorithm goes through three stages of learning, predicting and assessing.
Concretely, this algorithm goes through three stages of learning, predicting, and assessing.

* Step1: Learning. We will learning about the trial history of the current trial and determine the \xi at Bayesian angle. First of all, We fit each curve using the least squares method(implement by `fit_theta`) to save our time. After we obtained the parameters, we filter the curve and remove the outliers(implement by `filter_curve`). Finally, we use the MCMC sampling method(implement by `mcmc_sampling`) to adjust the weight of each curve. Up to now, we have dertermined all the parameters in \xi.
* Step1: Learning. We will learn about the trial history of the current trial and determine the \xi at the Bayesian angle. First of all, We fit each curve using the least-squares method, implemented by `fit_theta`. After we obtained the parameters, we filter the curve and remove the outliers, implemented by `filter_curve`. Finally, we use the MCMC sampling method. implemented by `mcmc_sampling`, to adjust the weight of each curve. Up to now, we have determined all the parameters in \xi.

* Step2: Predicting. Calculates the expected final result accuracy(implement by `f_comb`) at target position(ie the total number of epoch) by the \xi and the formula of the combined model.
* Step2: Predicting. It calculates the expected final result accuracy, implemented by `f_comb`, at the target position (i.e., the total number of epochs) by \xi and the formula of the combined model.

* Step3: If the fitting result doesn't converge, the predicted value will be `None`, in this case we return `AssessResult.Good` to ask for future accuracy information and predict again. Furthermore, we will get a positive value by `predict()` function, if this value is strictly greater than the best final performance in history * `THRESHOLD`(default value = 0.95), return `AssessResult.Good`, otherwise, return `AssessResult.Bad`
* Step3: If the fitting result doesn't converge, the predicted value will be `None`. In this case, we return `AssessResult.Good` to ask for future accuracy information and predict again. Furthermore, we will get a positive value from the `predict()` function. If this value is strictly greater than the best final performance in history * `THRESHOLD`(default value = 0.95), return `AssessResult.Good`, otherwise, return `AssessResult.Bad`

The figure below is the result of our algorithm on MNIST trial history data, where the green point represents the data obtained by Assessor, the blue point represents the future but unknown data, and the red line is the Curve predicted by the Curve fitting assessor.

@@ -60,11 +60,11 @@ assessor:
```

## 3. File Structure
The assessor has a lot of different files, functions and classes. Here we will only give most of those files a brief introduction:
The assessor has a lot of different files, functions, and classes. Here we briefly describe a few of them.

* `curvefunctions.py` includes all the function expression and default parameters.
* `modelfactory.py` includes learning and predicting, the corresponding calculation part is also implemented here.
* `curvefitting_assessor.py` is a assessor which receives the trial history and assess whether to early stop the trial.
* `curvefunctions.py` includes all the function expressions and default parameters.
* `modelfactory.py` includes learning and predicting; the corresponding calculation part is also implemented here.
* `curvefitting_assessor.py` is the assessor which receives the trial history and assess whether to early stop the trial.

## 4. TODO
* Further improve the accuracy of the prediction and test it on more models.
2 changes: 1 addition & 1 deletion docs/en_US/Assessor/MedianstopAssessor.md
@@ -3,4 +3,4 @@ Medianstop Assessor on NNI

## Median Stop

Medianstop is a simple early stopping rule mentioned in the [paper](https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/46180.pdf). It stops a pending trial X at step S if the trial’s best objective value by step S is strictly worse than the median value of the running averages of all completed trials’ objectives reported up to step S.
Medianstop is a simple early stopping rule mentioned in this [paper](https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/46180.pdf). It stops a pending trial X after step S if the trial’s best objective value by step S is strictly worse than the median value of the running averages of all completed trials’ objectives reported up to step S.
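For the maximize case, with y_T(s) denoting trial T's objective reported at step s, the rule can be sketched as:

```latex
\text{stop trial } X \text{ at step } S
\quad \Longleftrightarrow \quad
\max_{s \le S} y_X(s) \;<\; \operatorname{median}\left\{ \frac{1}{S} \sum_{s=1}^{S} y_T(s) \;:\; T \text{ completed} \right\}
```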