
Hyperparameters optimization with Optuna #150

Merged
merged 10 commits into airbus:master from nhuet:optuna
Mar 5, 2024

Conversation


@nhuet nhuet commented Jan 25, 2024

No description provided.

@nhuet nhuet force-pushed the optuna branch 2 times, most recently from d54780d to c8c2d0b on January 30, 2024 12:25
nhuet added 9 commits March 5, 2024 11:11
When using Optuna to tune the hyperparameters of a solver,
one can use this callback to prune unpromising trials.

The callback reports an intermediate fit to Optuna every `report_nb_steps` steps.

Avoid setting this parameter to 1 when using a solver with short steps,
such as local search algorithms, as this would excessively slow down
the run of a single trial. Indeed, this pruning by Optuna is rather
designed for epochs in deep learning trainers, which each last quite a long time.

We add a test for this callback and thus add optuna to the test
dependencies.

We do not add optuna to the dependencies of discrete-optimization, as the
library is functional without it. This is just a callback provided to
ease the use of Optuna to tune the hyperparameters.
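The report-and-prune pattern described above can be sketched as follows. This is an illustrative stand-in, not the actual discrete-optimization callback: the class name, the `on_step_end(step, fit)` hook, and `TrialPrunedError` are assumptions, while `trial.report()` and `trial.should_prune()` mirror Optuna's real `Trial` interface.

```python
# Minimal sketch of the report-every-N-steps pruning logic, assuming a
# hypothetical `on_step_end(step, fit)` callback hook. `trial.report()` and
# `trial.should_prune()` mirror Optuna's Trial interface; the rest is
# illustrative.

class TrialPrunedError(Exception):
    """Stand-in for optuna.TrialPruned."""

class OptunaPruningCallbackSketch:
    def __init__(self, trial, report_nb_steps=1):
        self.trial = trial
        self.report_nb_steps = report_nb_steps

    def on_step_end(self, step, fit):
        # Report an intermediate fit to Optuna only every `report_nb_steps`
        # steps, so that solvers with short steps (e.g. local search) do not
        # pay the reporting and pruning-check overhead at every iteration.
        if step % self.report_nb_steps == 0:
            self.trial.report(fit, step)
            if self.trial.should_prune():
                raise TrialPrunedError(f"pruned at step {step}")
```

With `report_nb_steps=1` every solver step is reported, which suits long deep-learning-style epochs but would slow down fast local-search iterations, hence the warning above.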
@nhuet nhuet marked this pull request as ready for review March 5, 2024 16:31
…d their hyperparameters

- update OptunaPruningSingleFitCallback `report_nb_steps` param
   - rename it to `optuna_report_nb_steps` to identify it more easily in
     kwargs
   - give it a default value if the user does not define it in the optuna script

- add a full example with
   - solvers_to_test: list of solver classes to test
   - kwargs_fixed_by_solver: kwargs to pass to `__init__`, `init_model`, and `solve`,
      except for hyperparameters
   - solvers_by_name: mapping string -> solver class (by default using
     solver_class.__name__, but should be overridden if different solvers
     share the same name)
   - problem: defining the problem to solve
   - using default objectives by default, and deducing from them
     - the direction ("minimize" or "maximize")
     - the objective name to display in the optuna dashboard
   - objective(trial) function: can be left as is
   - study_name: name given to the study
   - storage_path: path to the file used to log the study (can be the
     same for several studies). For easy parallelisation, this can be an NFS
     path. A JournalFileStorage will be created at that path if none
     exists.
   - optuna_nb_trials: number of trials to be executed by optuna
      - if relaunched, another batch of the same number of trials will be added
      - if parallelized, each node/process will launch this same number of trials
   - seed: fixed to get reproducible results, but should be None for
     parallelization to avoid having the same trials on each node/process
   - duplicate trials
      - failed trials: by default, they are ignored by optuna, so we
        explicitly prune subsequent trials with exactly the same
        hyperparameters
      - complete trials: TPESampler can (and will, when converging on
        categorical hyperparameters) suggest duplicate trials. In that
        case we raise a warning and simply return the previously
        computed fit, as we are fully deterministic. See https://optuna.readthedocs.io/en/stable/faq.html#how-can-i-ignore-duplicated-samples
        and also optuna/optuna#2021
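The duplicate-trial policy above can be sketched independently of Optuna. The trial records here are plain dicts, a deliberate simplification of what `study.get_trials()` returns (Optuna's `FrozenTrial` objects); only the decision logic is the point.

```python
# Sketch of the duplicate-trial policy described above: prune when the same
# hyperparameters already failed, reuse the stored fit when they already
# completed (the objective being fully deterministic), otherwise run the
# solver. Trial records are plain dicts standing in for Optuna's FrozenTrial.

def decide_on_duplicates(params, past_trials):
    for t in past_trials:
        if t["params"] != params:
            continue
        if t["state"] == "FAIL":
            # Same hyperparameters already failed: prune this trial so the
            # sampler does not keep resampling a known-bad configuration.
            return "prune", None
        if t["state"] == "COMPLETE":
            # Deterministic objective: return the previously computed fit
            # instead of re-running the solver.
            return "reuse", t["value"]
    return "run", None
```

In the real objective function, "prune" would translate to raising `optuna.TrialPruned` and "reuse" to returning the stored value directly, per the Optuna FAQ recipe linked above.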

The optuna results can be monitored with

	optuna-dashboard optuna-journal.log

(if `storage_path` is left as is)
@g-poveda g-poveda merged commit f2ea0a5 into airbus:master Mar 5, 2024
16 checks passed
@nhuet nhuet deleted the optuna branch March 7, 2024 08:57