This repository has been archived by the owner on Dec 16, 2022. It is now read-only.

[WIP] Introduce Optuna-related logic into allennlp.commands.train to support pruning in distributed training #5341

Closed
wants to merge 8 commits

Conversation


@himkt commented Aug 3, 2021

🚧 Under construction 🚧

Related to #5338. See also optuna/optuna#2796.
Other background information can be found in optuna/optuna#1990 and himkt/allennlp-optuna#20.

Changes proposed in this pull request:

  • Introduce trial as an optional argument for train_model and _train_worker
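The change above can be illustrated with a minimal sketch. Note that the names and signatures below are hypothetical, not the actual AllenNLP code from this PR: a toy `train_model` accepts an optional `trial`, reports a validation metric each epoch, and stops early when the trial asks to prune. A duck-typed `FakeTrial` stands in for `optuna.trial.Trial` (whose real pruning API is `trial.report(value, step)`, `trial.should_prune()`, and raising `optuna.TrialPruned`) so the sketch runs without Optuna installed.

```python
class PrunedException(Exception):
    """Stand-in for optuna.TrialPruned."""


def train_model(epochs, trial=None):
    """Toy training loop; reports metrics to ``trial`` when one is given.

    Hypothetical sketch of the proposed signature: ``trial`` is optional,
    so existing callers that never pass it are unaffected.
    """
    metric = 0.0
    for epoch in range(epochs):
        metric += 0.1  # pretend the validation metric improves each epoch
        if trial is not None:
            # With real Optuna this would be trial.report(metric, step=epoch)
            # followed by ``raise optuna.TrialPruned()`` if should_prune().
            trial.report(metric, step=epoch)
            if trial.should_prune():
                raise PrunedException(f"pruned at epoch {epoch}")
    return metric


class FakeTrial:
    """Duck-typed stand-in mimicking optuna.trial.Trial's pruning API."""

    def __init__(self, prune_after):
        self.prune_after = prune_after
        self.reports = []

    def report(self, value, step):
        self.reports.append((step, value))

    def should_prune(self):
        return len(self.reports) > self.prune_after


# Without a trial, training runs to completion.
final = train_model(epochs=5)

# With a trial that prunes after 2 reports, training stops early.
try:
    train_model(epochs=5, trial=FakeTrial(prune_after=2))
    pruned = False
except PrunedException:
    pruned = True
```

In the distributed case a worker such as `_train_worker` would receive the same optional `trial` and perform the report/prune check on its own rank, which is why the PR threads the argument through both entry points.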

Before submitting

  • I've read and followed all steps in the Making a pull request
    section of the CONTRIBUTING docs.
  • I've updated or added any relevant docstrings following the syntax described in the
    Writing docstrings section of the CONTRIBUTING docs.
  • If this PR fixes a bug, I've added a test that will fail without my fix.
  • If this PR adds a new feature, I've added tests that sufficiently cover my new functionality.

After submitting

  • All GitHub Actions jobs for my pull request have passed.
  • codecov/patch reports high test coverage (at least 90%).
    You can find this under the "Actions" tab of the pull request once the other checks have finished.


himkt commented Sep 1, 2021

I'm closing this PR, as noted in #5338 (comment).

@himkt closed this on Sep 1, 2021
@himkt deleted the feat/optuna branch on Nov 13, 2021