diff --git a/docs/en_US/CommunitySharings/AutoCompletion.rst b/docs/en_US/CommunitySharings/AutoCompletion.rst
deleted file mode 100644
index f9b97a1efb..0000000000
--- a/docs/en_US/CommunitySharings/AutoCompletion.rst
+++ /dev/null
@@ -1,57 +0,0 @@
-Auto Completion for nnictl Commands
-===================================
-
-NNI's command line tool **nnictl** support auto-completion, i.e., you can complete a nnictl command by pressing the ``tab`` key.
-
-For example, if the current command is
-
-.. code-block:: bash
-
-   nnictl cre
-
-By pressing the ``tab`` key, it will be completed to
-
-.. code-block:: bash
-
-   nnictl create
-
-For now, auto-completion will not be enabled by default if you install NNI through ``pip``\ , and it only works on Linux with bash shell. If you want to enable this feature on your computer, please refer to the following steps:
-
-Step 1. Download ``bash-completion``
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-.. code-block:: bash
-
-   cd ~
-   wget https://raw.githubusercontent.com/microsoft/nni/{nni-version}/tools/bash-completion
-
-Here, {nni-version} should by replaced by the version of NNI, e.g., ``master``, ``v2.3``. You can also check the latest ``bash-completion`` script :githublink:`here `.
-
-.. cannot find :githublink:`here `.
-
-Step 2. Install the script
-^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-If you are running a root account and want to install this script for all the users
-
-.. code-block:: bash
-
-   install -m644 ~/bash-completion /usr/share/bash-completion/completions/nnictl
-
-If you just want to install this script for your self
-
-.. code-block:: bash
-
-   mkdir -p ~/.bash_completion.d
-   install -m644 ~/bash-completion ~/.bash_completion.d/nnictl
-   echo '[[ -f ~/.bash_completion.d/nnictl ]] && source ~/.bash_completion.d/nnictl' >> ~/.bash_completion
-
-Step 3. Reopen your terminal
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-Reopen your terminal and you should be able to use the auto-completion feature. Enjoy!
-
-Step 4. Uninstall
-^^^^^^^^^^^^^^^^^
-
-If you want to uninstall this feature, just revert the changes in the steps above.
diff --git a/docs/en_US/CommunitySharings/community_sharings.rst b/docs/en_US/CommunitySharings/community_sharings.rst
index 1e3c517c40..a5fc0a841d 100644
--- a/docs/en_US/CommunitySharings/community_sharings.rst
+++ b/docs/en_US/CommunitySharings/community_sharings.rst
@@ -15,7 +15,6 @@ Use Cases and Solutions
    Feature Engineering
    Performance measurement, comparison and analysis
    Use NNI on Google Colab
-   Auto Completion for nnictl Commands
 
 External Repositories and References
 ====================================
diff --git a/docs/en_US/NAS/QuickStart.rst b/docs/en_US/NAS/QuickStart.rst
index 2aa8492036..b3e239f95a 100644
--- a/docs/en_US/NAS/QuickStart.rst
+++ b/docs/en_US/NAS/QuickStart.rst
@@ -180,7 +180,7 @@ The complete code of a simple MNIST example can be found :githublink:`here `__ for details.
+Users can visualize their experiment in the same way as visualizing a normal hyper-parameter tuning experiment. For example, open ``localhost:8081`` in your browser; 8081 is the port that you set in ``exp.run``. Please refer to `here <../Tutorial/WebUI.rst>`__ for details.
 
 Export Top Models
 -----------------
diff --git a/docs/en_US/Overview.rst b/docs/en_US/Overview.rst
index 647b16951d..a4fe5d5dd8 100644
--- a/docs/en_US/Overview.rst
+++ b/docs/en_US/Overview.rst
@@ -54,7 +54,7 @@ For each experiment, the user only needs to define a search space and update a f
 Step 2: `Update model codes `__
 
-Step 3: `Define Experiment `__
+Step 3: `Define Experiment `__
diff --git a/docs/en_US/Tutorial/Contributing.rst b/docs/en_US/Tutorial/Contributing.rst
index 43486a2be9..6f2fd50217 100644
--- a/docs/en_US/Tutorial/Contributing.rst
+++ b/docs/en_US/Tutorial/Contributing.rst
@@ -67,8 +67,8 @@ Our documentation is built with :githublink:`sphinx `.
 * Before submitting the documentation change, please **build homepage locally**: ``cd docs/en_US && make html``, then you can see all the built documentation webpage under the folder ``docs/en_US/_build/html``. It's also highly recommended taking care of **every WARNING** during the build, which is very likely the signal of a **deadlink** and other annoying issues.
 *
-  For links, please consider using **relative paths** first. However, if the documentation is written in Markdown format, and:
+  For links, please consider using **relative paths** first. However, if the documentation is written in reStructuredText format, and:
 
   * It's an image link which needs to be formatted with embedded html grammar, please use global URL like ``https://user-images.githubusercontent.com/44491713/51381727-e3d0f780-1b4f-11e9-96ab-d26b9198ba65.png``, which can be automatically generated by dragging picture onto `Github Issue `__ Box.
-  * It cannot be re-formatted by sphinx, such as source code, please use its global URL. For source code that links to our github repo, please use URLs rooted at ``https://github.com/Microsoft/nni/tree/v2.3/`` (:githublink:`mnist.py ` for example).
+  * It cannot be re-formatted by sphinx, such as source code, please use its global URL. For source code that links to our github repo, please use URLs rooted at ``https://github.com/Microsoft/nni/tree/master/`` (:githublink:`mnist.py ` for example).
diff --git a/docs/en_US/Tutorial/QuickStart.rst b/docs/en_US/Tutorial/QuickStart.rst
index 1256e75bd3..e93dac8bba 100644
--- a/docs/en_US/Tutorial/QuickStart.rst
+++ b/docs/en_US/Tutorial/QuickStart.rst
@@ -4,7 +4,7 @@ QuickStart
 Installation
 ------------
 
-We currently support Linux, macOS, and Windows. Ubuntu 16.04 or higher, macOS 10.14.1, and Windows 10.1809 are tested and supported. Simply run the following ``pip install`` in an environment that has ``python >= 3.6``.
+Currently, NNI supports running on Linux, macOS and Windows. Ubuntu 16.04 or higher, macOS 10.14.1, and Windows 10.1809 are tested and supported. Simply run the following ``pip install`` in an environment that has ``python >= 3.6``.
 
 Linux and macOS
 ^^^^^^^^^^^^^^^
@@ -20,21 +20,17 @@ Windows
 
    python -m pip install --upgrade nni
 
-.. Note:: For Linux and macOS, ``--user`` can be added if you want to install NNI in your home directory; this does not require any special privileges.
+.. Note:: For Linux and macOS, ``--user`` can be added if you want to install NNI in your home directory, which does not require any special privileges.
 
 .. Note:: If there is an error like ``Segmentation fault``, please refer to the :doc:`FAQ `.
 
-.. Note:: For the system requirements of NNI, please refer to :doc:`Install NNI on Linux & Mac ` or :doc:`Windows `.
+.. Note:: For the system requirements of NNI, please refer to :doc:`Install NNI on Linux & Mac ` or :doc:`Windows `.
 
 If you want to use docker, refer to :doc:`HowToUseDocker `.
 
-Enable NNI Command-line Auto-Completion (Optional)
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-After the installation, you may want to enable the auto-completion feature for **nnictl** commands. Please refer to this `tutorial <../CommunitySharings/AutoCompletion.rst>`__.
 
 "Hello World" example on MNIST
 ------------------------------
 
-NNI is a toolkit to help users run automated machine learning experiments. It can automatically do the cyclic process of getting hyperparameters, running trials, testing results, and tuning hyperparameters. Here, we'll show how to use NNI to help you find the optimal hyperparameters for a MNIST model.
+NNI is a toolkit to help users run automated machine learning experiments. It can automatically do the cyclic process of getting hyperparameters, running trials, testing results, and tuning hyperparameters. Here, we'll show how to use NNI to help you find the optimal hyperparameters on the MNIST dataset.
 
 Here is an example script to train a CNN on the MNIST dataset **without NNI**:
@@ -63,9 +59,9 @@ Here is an example script to train a CNN on the MNIST dataset **without NNI**:
     }
     main(params)
 
-The above code can only try one set of parameters at a time; if we want to tune learning rate, we need to manually modify the hyperparameter and start the trial again and again.
+The above code can only try one set of parameters at a time. If you want to tune the learning rate, you need to manually modify the hyperparameter and start the trial again and again.
 
-NNI is born to help the user do tuning jobs; the NNI working process is presented below:
+NNI is born to help users with tuning jobs; its working process is presented below:
 
 .. code-block:: text
@@ -80,26 +76,20 @@ NNI is born to help the user do tuning jobs; the NNI working process is presente
    6: Stop the experiment
    7: return hyperparameter value with best final result
 
-If you want to use NNI to automatically train your model and find the optimal hyper-parameters, you need to do three changes based on your code:
+.. note::
 
-Three steps to start an experiment
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+   If you want to use NNI to automatically train your model and find the optimal hyper-parameters, there are two approaches:
 
-**Step 1**: Write a ``Search Space`` file in JSON, including the ``name`` and the ``distribution`` (discrete-valued or continuous-valued) of all the hyperparameters you need to search.
+   1. Write a config file and start the experiment from the command line.
+   2. Configure and launch the experiment directly from a Python file.
 
-.. code-block:: diff
+   In this part, we will focus on the first approach. For the second approach, please refer to `this tutorial `__\ .
 
-   - params = {'batch_size': 32, 'hidden_size': 128, 'lr': 0.001, 'momentum': 0.5}
-   + {
-   +     "batch_size": {"_type":"choice", "_value": [16, 32, 64, 128]},
-   +     "hidden_size":{"_type":"choice","_value":[128, 256, 512, 1024]},
-   +     "lr":{"_type":"choice","_value":[0.0001, 0.001, 0.01, 0.1]},
-   +     "momentum":{"_type":"uniform","_value":[0, 1]}
-   + }
 
-*Example:* :githublink:`search_space.json `
+Step 1: Modify the ``Trial`` Code
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
-**Step 2**\ : Modify your ``Trial`` file to get the hyperparameter set from NNI and report the final result to NNI.
+Modify your ``Trial`` file to get the hyperparameter set from NNI and report the final results to NNI.
 
 .. code-block:: diff
@@ -128,55 +118,83 @@
 
 *Example:* :githublink:`mnist.py `
 
-**Step 3**\ : Define a ``config`` file in YAML which declares the ``path`` to the search space and trial files. It also gives other information such as the tuning algorithm, max trial number, and max duration arguments.
+
+Step 2: Define the Search Space
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Define a ``Search Space`` in a YAML file, including the ``name`` and the ``distribution`` (discrete-valued or continuous-valued) of all the hyperparameters you want to search.
 
 .. code-block:: yaml
 
-   authorName: default
-   experimentName: example_mnist
-   trialConcurrency: 1
-   maxExecDuration: 1h
-   maxTrialNum: 10
-   trainingServicePlatform: local
-   # The path to Search Space
-   searchSpacePath: search_space.json
-   useAnnotation: false
-   tuner:
-     builtinTunerName: TPE
-   # The path and the running command of trial
-   trial:
-     command: python3 mnist.py
-     codeDir: .
-     gpuNum: 0
+   searchSpace:
+     batch_size:
+       _type: choice
+       _value: [16, 32, 64, 128]
+     hidden_size:
+       _type: choice
+       _value: [128, 256, 512, 1024]
+     lr:
+       _type: choice
+       _value: [0.0001, 0.001, 0.01, 0.1]
+     momentum:
+       _type: uniform
+       _value: [0, 1]
+
+*Example:* :githublink:`config_detailed.yml `
+
+You can also write your search space in a JSON file and specify the file path in the configuration. For a detailed tutorial on how to write the search space, please see `here `__.
+
+
+Step 3: Configure the Experiment
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+In addition to the search space defined in `Step 2 `__, you need to configure the experiment in the YAML file. It specifies the key information of the experiment, such as the trial files, the tuning algorithm, the max trial number, and the max duration.
+
+.. code-block:: yaml
+
+   experimentName: MNIST              # An optional name to distinguish the experiments
+   trialCommand: python3 mnist.py     # NOTE: change "python3" to "python" if you are using Windows
+   trialConcurrency: 2                # Run 2 trials concurrently
+   maxTrialNumber: 10                 # Generate at most 10 trials
+   maxExperimentDuration: 1h          # Stop generating trials after 1 hour
+   tuner:                             # Configure the tuning algorithm
+     name: TPE
+     classArgs:                       # Algorithm specific arguments
+       optimize_mode: maximize
+   trainingService:                   # Configure the training platform
+     platform: local
+
+The experiment config reference can be found `here <../reference/experiment_config.rst>`__.
 
 .. _nniignore:
 
-.. Note:: If you are planning to use remote machines or clusters as your :doc:`training service <../TrainingService/Overview>`, to avoid too much pressure on network, we limit the number of files to 2000 and total size to 300MB. If your codeDir contains too many files, you can choose which files and subfolders should be excluded by adding a ``.nniignore`` file that works like a ``.gitignore`` file. For more details on how to write this file, see the `git documentation `__.
+.. Note:: If you are planning to use remote machines or clusters as your :doc:`training service <../TrainingService/Overview>`, to avoid too much pressure on network, NNI limits the number of files to 2000 and total size to 300MB. If your codeDir contains too many files, you can choose which files and subfolders should be excluded by adding a ``.nniignore`` file that works like a ``.gitignore`` file. For more details on how to write this file, see the `git documentation `__.
+
+*Example:* :githublink:`config_detailed.yml ` and :githublink:`.nniignore `
+
+All the code above is already prepared and stored in :githublink:`examples/trials/mnist-pytorch/`.
 
-*Example:* :githublink:`config.yml ` and :githublink:`.nniignore `
 
-All the code above is already prepared and stored in :githublink:`examples/trials/mnist-pytorch/ `.
+
+Step 4: Launch the Experiment
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
 Linux and macOS
-^^^^^^^^^^^^^^^
+***************
 
-Run the **config.yml** file from your command line to start an MNIST experiment.
+Run the **config_detailed.yml** file from your command line to start the experiment.
 
 .. code-block:: bash
 
-   nnictl create --config nni/examples/trials/mnist-pytorch/config.yml
+   nnictl create --config nni/examples/trials/mnist-pytorch/config_detailed.yml
 
 Windows
-^^^^^^^
+*******
 
-Run the **config_windows.yml** file from your command line to start an MNIST experiment.
+Change ``python3`` to ``python`` in the ``trialCommand`` field of the **config_detailed.yml** file, and run the **config_detailed.yml** file from your command line to start the experiment.
 
 .. code-block:: bash
 
-   nnictl create --config nni\examples\trials\mnist-pytorch\config_windows.yml
-
-.. Note:: If you're using NNI on Windows, you probably need to change ``python3`` to ``python`` in the config.yml file or use the config_windows.yml file to start the experiment.
+   nnictl create --config nni\examples\trials\mnist-pytorch\config_detailed.yml
 
 .. Note:: ``nnictl`` is a command line tool that can be used to control experiments, such as start/stop/resume an experiment, start/stop NNIBoard, etc. Click :doc:`here ` for more usage of ``nnictl``.
@@ -208,24 +226,25 @@ Wait for the message ``INFO: Successfully started experiment!`` in the command l
 
    8. nnictl --help             get help information about nnictl
    -----------------------------------------------------------------------
 
-If you prepared ``trial``\ , ``search space``\ , and ``config`` according to the above steps and successfully created an NNI job, NNI will automatically tune the optimal hyper-parameters and run different hyper-parameter sets for each trial according to the requirements you set. You can clearly see its progress through the NNI WebUI.
+If you prepared ``trial``\ , ``search space``\ , and ``config`` according to the above steps and successfully created an NNI job, NNI will automatically tune the optimal hyper-parameters and run different hyper-parameter sets for each trial according to the defined search space. You can clearly see its progress through the WebUI.
+
-WebUI
------
+Step 5: View the Experiment
+^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
-After you start your experiment in NNI successfully, you can find a message in the command-line interface that tells you the ``Web UI url`` like this:
+After starting the experiment successfully, you can find a message in the command-line interface that tells you the ``Web UI url`` like this:
 
 .. code-block:: text
 
    The Web UI urls are: [Your IP]:8080
 
-Open the ``Web UI url`` (Here it's: ``[Your IP]:8080``\ ) in your browser; you can view detailed information about the experiment and all the submitted trial jobs as shown below. If you cannot open the WebUI link in your terminal, please refer to the `FAQ `__.
+Open the ``Web UI url`` (Here it's: ``[Your IP]:8080``\ ) in your browser; you can view detailed information about the experiment and all the submitted trial jobs as shown below. If you cannot open the WebUI link in your terminal, please refer to the `FAQ `__.
 
-View overview page
-^^^^^^^^^^^^^^^^^^
+View Overview Page
+******************
 
-Information about this experiment will be shown in the WebUI, including the experiment trial profile and search space message. NNI also supports downloading this information and the parameters through the **Experiment summary** button.
+Information about this experiment will be shown in the WebUI, including the experiment profile and search space message. NNI also supports downloading this information and the parameters through the **Experiment summary** button.
 
 .. image:: ../../img/webui-img/full-oview.png
@@ -233,11 +252,10 @@ Information about this experiment will be shown in the WebUI, including the expe
    :target: ../../img/webui-img/full-oview.png
    :alt: overview
 
+View Trials Detail Page
+***********************
 
-View trials detail page
-^^^^^^^^^^^^^^^^^^^^^^^
-
-We could see best trial metrics and hyper-parameter graph in this page. And the table content includes more columns when you click the button ``Add/Remove columns``.
+You can see the best trial metrics and the hyper-parameter graph on this page. The table includes more columns when you click the ``Add/Remove columns`` button.
 
 .. image:: ../../img/webui-img/full-detail.png
@@ -245,9 +263,8 @@ We could see best trial metrics and hyper-parameter graph in this page. And the
    :target: ../../img/webui-img/full-detail.png
    :alt: detail
 
-
-View experiments management page
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+View Experiments Management Page
+********************************
 
 On the ``All experiments`` page, you can see all the experiments on your machine.
 
 .. image:: ../../img/webui-img/managerExperimentList/expList.png
@@ -255,22 +272,18 @@ On the ``All experiments`` page, you can see all the experiments on your machine
    :target: ../../img/webui-img/managerExperimentList/expList.png
    :alt: Experiments list
 
+For more detailed usage of the WebUI, please refer to `this doc <./WebUI.rst>`__.
 
-More detail please refer `the doc <./WebUI.rst>`__.
-
 Related Topic
 -------------
 
+* `How to debug? `__
+* `How to write a trial? <../TrialExample/Trials.rst>`__
+* `How to try different Tuners? <../Tuner/BuiltinTuner.rst>`__
+* `How to try different Assessors? <../Assessor/BuiltinAssessor.rst>`__
+* `How to run an experiment on the different training platforms? <../training_services.rst>`__
+* `How to use Annotation? `__
+* `How to use the command line tool nnictl? `__
+* `How to launch Tensorboard on WebUI? `__
 
-* `Launch Tensorboard on WebUI `__
-* `Try different Tuners <../Tuner/BuiltinTuner.rst>`__
-* `Try different Assessors <../Assessor/BuiltinAssessor.rst>`__
-* `How to use command line tool nnictl `__
-* `How to write a trial <../TrialExample/Trials.rst>`__
-* `How to run an experiment on local (with multiple GPUs)? <../TrainingService/LocalMode.rst>`__
-* `How to run an experiment on multiple machines? <../TrainingService/RemoteMachineMode.rst>`__
-* `How to run an experiment on OpenPAI? <../TrainingService/PaiMode.rst>`__
-* `How to run an experiment on Kubernetes through Kubeflow? <../TrainingService/KubeflowMode.rst>`__
-* `How to run an experiment on Kubernetes through FrameworkController? <../TrainingService/FrameworkControllerMode.rst>`__
-* `How to run an experiment on Kubernetes through AdaptDL? <../TrainingService/AdaptDLMode.rst>`__
diff --git a/docs/en_US/Tutorial/SearchSpaceSpec.rst b/docs/en_US/Tutorial/SearchSpaceSpec.rst
index 27ce2c31d3..fc5f25efc9 100644
--- a/docs/en_US/Tutorial/SearchSpaceSpec.rst
+++ b/docs/en_US/Tutorial/SearchSpaceSpec.rst
@@ -7,14 +7,13 @@ Search Space
 Overview
 --------
 
-In NNI, tuner will sample parameters/architecture according to the search space, which is defined as a json file.
+In NNI, the tuner will sample parameters/architectures according to the search space.
 
 To define a search space, users should define the name of the variable, the type of sampling strategy and its parameters.
 
+* An example of a search space definition in a JSON file is as follows:
 
-* An example of a search space definition is as follow:
-
-.. code-block:: yaml
+.. code-block:: json
 
    {
       "dropout_rate": {"_type": "uniform", "_value": [0.1, 0.5]},
@@ -24,7 +23,9 @@ To define a search space, users should define the name of the variable, the type
      "learning_rate": {"_type": "uniform", "_value": [0.0001, 0.1]}
    }
 
-Take the first line as an example. ``dropout_rate`` is defined as a variable whose priori distribution is a uniform distribution with a range from ``0.1`` to ``0.5``.
+Take the first line as an example. ``dropout_rate`` is defined as a variable whose prior distribution is a uniform distribution with a range from ``0.1`` to ``0.5``.
+
+.. note:: In the `experiment configuration (V2) schema `_, NNI supports defining the search space directly in the configuration file; detailed usage can be found `here `__. When using the Python API, users can write the search space in the Python file; refer to `here `__.
 
 Note that the available sampling strategies within a search space depend on the tuner you want to use. We list the supported types for each builtin tuner below. For a customized tuner, you don't have to follow our convention and you will have the flexibility to define any type you want.
@@ -38,7 +39,7 @@ All types of sampling strategies and their parameter are listed here:
 
   ``{"_type": "choice", "_value": options}``
 
-  * The variable's value is one of the options. Here ``options`` should be a list of numbers or a list of strings. Using arbitrary objects as members of this list (like sublists, a mixture of numbers and strings, or null values) should work in most cases, but may trigger undefined behaviors.
+  * The variable's value is one of the options. Here ``options`` should be a list of **numbers** or a list of **strings**. Using arbitrary objects as members of this list (like sublists, a mixture of numbers and strings, or null values) should work in most cases, but may trigger undefined behaviors.
   * ``options`` can also be a nested sub-search-space, this sub-search-space takes effect only when the corresponding element is chosen. The variables in this sub-search-space can be seen as conditional variables. Here is an simple :githublink:`example of nested search space definition `. If an element in the options list is a dict, it is a sub-search-space, and for our built-in tuners you have to add a ``_name`` key in this dict, which helps you to identify which element is chosen. Accordingly, here is a :githublink:`sample ` which users can get from nni with nested search space definition. See the table below for the tuners which support nested search spaces.
 *
diff --git a/docs/en_US/Tutorial/SetupNniDeveloperEnvironment.rst b/docs/en_US/Tutorial/SetupNniDeveloperEnvironment.rst
index 4b580e8352..4f610ca7b2 100644
--- a/docs/en_US/Tutorial/SetupNniDeveloperEnvironment.rst
+++ b/docs/en_US/Tutorial/SetupNniDeveloperEnvironment.rst
@@ -42,19 +42,19 @@ And open WebUI to check if everything is OK
 ^^^^^^^^^^^^^^^^^
 
 Python
-^^^^^^
+******
 
 Nothing to do, the code is already linked to package folders.
 
 TypeScript (Linux and macOS)
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+****************************
 
 * If ``ts/nni_manager`` is changed, run ``yarn watch`` under this folder. It will watch and build code continually. The ``nnictl`` need to be restarted to reload NNI manager.
 * If ``ts/webui`` is changed, run ``yarn dev``\ , which will run a mock API server and a webpack dev server simultaneously. Use ``EXPERIMENT`` environment variable (e.g., ``mnist-tfv1-running``\ ) to specify the mock data being used. Built-in mock experiments are listed in ``src/webui/mock``. An example of the full command is ``EXPERIMENT=mnist-tfv1-running yarn dev``.
 * If ``ts/nasui`` is changed, run ``yarn start`` under the corresponding folder. The web UI will refresh automatically if code is changed. There is also a mock API server that is useful when developing. It can be launched via ``node server.js``.
 
 TypeScript (Windows)
-^^^^^^^^^^^^^^^^^^^^
+********************
 
 Currently you must rebuild TypeScript modules with `python3 setup.py build_ts` after edit.
diff --git a/docs/en_US/reference/experiment_config.rst b/docs/en_US/reference/experiment_config.rst
index 44b0d57c33..ba65f3a0ee 100644
--- a/docs/en_US/reference/experiment_config.rst
+++ b/docs/en_US/reference/experiment_config.rst
@@ -819,7 +819,7 @@ If the path does not exist, it will be created automatically. Recommended to use
 remoteMountPoint
 """"""""""""""""
 
-The path that the storage will be mounted in the remote achine.
+The path that the storage will be mounted in the remote machine.
 
 type: ``str``
@@ -876,7 +876,7 @@ If the path does not exist, it will be created automatically. Recommended to use
 remoteMountPoint
 """"""""""""""""
 
-The path that the storage will be mounted in the remote achine.
+The path that the storage will be mounted in the remote machine.
 
 type: ``str``
diff --git a/examples/trials/mnist-pytorch/config_detailed.yml b/examples/trials/mnist-pytorch/config_detailed.yml
index 2bf65fd126..d85fb48f76 100644
--- a/examples/trials/mnist-pytorch/config_detailed.yml
+++ b/examples/trials/mnist-pytorch/config_detailed.yml
@@ -30,7 +30,7 @@ trialConcurrency: 4 # Run 4 trials concurrently.
 maxTrialNumber: 10 # Generate at most 10 trials.
 maxExperimentDuration: 1h # Stop generating trials after 1 hour.
 
-tuner: # Configure the tuning alogrithm.
+tuner: # Configure the tuning algorithm.
   name: TPE # Supported algorithms: TPE, Random, Anneal, Evolution, GridSearch, GPTuner, PBTTuner, etc.
   # Full list: https://nni.readthedocs.io/en/latest/Tuner/BuiltinTuner.html
   classArgs: # Algorithm specific arguments. See the tuner's doc for details.
diff --git a/examples/trials/mnist-tfv2/config_detailed.yml b/examples/trials/mnist-tfv2/config_detailed.yml
index 6c2fd1a487..ed31da19da 100644
--- a/examples/trials/mnist-tfv2/config_detailed.yml
+++ b/examples/trials/mnist-tfv2/config_detailed.yml
@@ -33,7 +33,7 @@ trialConcurrency: 4 # Run 4 trials concurrently.
 maxTrialNumber: 10 # Generate at most 10 trials.
 maxExperimentDuration: 1h # Stop generating trials after 1 hour.
 
-tuner: # Configure the tuning alogrithm.
+tuner: # Configure the tuning algorithm.
   name: TPE # Supported algorithms: TPE, Random, Anneal, Evolution, GridSearch, GPTuner, PBTTuner, etc.
   # Full list: https://nni.readthedocs.io/en/latest/Tuner/BuiltinTuner.html
   classArgs: # Algorithm specific arguments. See the tuner's doc for details.
diff --git a/nni/algorithms/hpo/curvefitting_assessor/curvefitting_assessor.py b/nni/algorithms/hpo/curvefitting_assessor/curvefitting_assessor.py
index 885886e89b..b91583bb39 100644
--- a/nni/algorithms/hpo/curvefitting_assessor/curvefitting_assessor.py
+++ b/nni/algorithms/hpo/curvefitting_assessor/curvefitting_assessor.py
@@ -71,7 +71,7 @@ def trial_end(self, trial_job_id, success):
             else:
                 self.set_best_performance = True
                 self.completed_best_performance = self.trial_history[-1]
-            logger.info('Updated complted best performance, trial job id: %s', trial_job_id)
+            logger.info('Updated completed best performance, trial job id: %s', trial_job_id)
         else:
             logger.info('No need to update, trial job id: %s', trial_job_id)
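
The QuickStart changes above mention a second way to launch an experiment besides ``nnictl create``: configuring and launching it directly from a Python file. The snippet below is a rough companion sketch of that approach for the MNIST example, mirroring the fields of ``config_detailed.yml`` with the ``nni.experiment`` API of NNI 2.x; the attribute names are assumptions based on that API and may differ between NNI versions, so treat it as a starting point rather than the canonical tutorial code.

.. code-block:: python

   # Sketch: launch the MNIST example from Python instead of a YAML config.
   # Assumes the NNI 2.x ``nni.experiment`` API and that mnist.py sits in the current directory.
   from nni.experiment import Experiment

   # Same search space as the YAML/JSON examples in the QuickStart changes.
   search_space = {
       'batch_size': {'_type': 'choice', '_value': [16, 32, 64, 128]},
       'hidden_size': {'_type': 'choice', '_value': [128, 256, 512, 1024]},
       'lr': {'_type': 'choice', '_value': [0.0001, 0.001, 0.01, 0.1]},
       'momentum': {'_type': 'uniform', '_value': [0, 1]},
   }

   experiment = Experiment('local')                      # training service platform
   experiment.config.experiment_name = 'MNIST'
   experiment.config.trial_command = 'python3 mnist.py'  # use "python" on Windows
   experiment.config.trial_code_directory = '.'
   experiment.config.search_space = search_space
   experiment.config.tuner.name = 'TPE'
   experiment.config.tuner.class_args['optimize_mode'] = 'maximize'
   experiment.config.trial_concurrency = 2
   experiment.config.max_trial_number = 10
   experiment.config.max_experiment_duration = '1h'

   # Blocks until the experiment finishes; the WebUI is served on the given port.
   experiment.run(8080)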