
Describe Usage of Avalanche updaters and their Limitations #202

Merged: 3 commits into main from mw-avalanche-docs, Apr 24, 2023

Conversation

wistuba
Contributor

@wistuba wistuba commented Apr 24, 2023

By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.

@wistuba wistuba requested a review from 610v4nn1 April 24, 2023 15:47
@wistuba wistuba requested a review from lballes April 24, 2023 15:53
@github-actions

Coverage report

The coverage rate went from 85.68% to 85.68% ➡️

None of the new lines are part of the tested code. Therefore, there is no coverage data about them.

610v4nn1
610v4nn1 previously approved these changes Apr 24, 2023
Contributor

@610v4nn1 610v4nn1 left a comment


Please check the comments before merging

@@ -4,7 +4,7 @@ Getting Started

 This section covers the usage of Renate from the installation to the training
 of a model. The content is intended to explain the basic steps to be taken when creating a new
-training pipeline based on Rente.
+training pipeline based on Renate.

Good catch!

:doc:`how_to_run_training`. You can select an Avalanche updater by passing the respective string to the ``updater``
argument of :py:func:`~renate.training.training.run_training_job`. The available Avalanche options are
``"Avalanche-ER"``, ``"Avalanche-EWC"``, ``"Avalanche-LwF"``, and ``"Avalanche-iCaRL"``.
More details about these algorithms are given in :doc:`supported_algorithms`.
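As a quick sanity check before launching a job, the updater string can be validated against the four Avalanche options listed above. This is a minimal sketch in pure Python, not part of Renate itself; the commented-out `run_training_job` call is illustrative only, since its exact signature depends on the installed Renate version:

```python
# The Avalanche updater strings documented above.
AVALANCHE_UPDATERS = {"Avalanche-ER", "Avalanche-EWC", "Avalanche-LwF", "Avalanche-iCaRL"}


def is_avalanche_updater(updater: str) -> bool:
    """Return True if `updater` names one of the Avalanche-based updaters."""
    return updater in AVALANCHE_UPDATERS


# The training call would then pass the string to the ``updater`` argument,
# roughly (shown as a comment; check the Renate API reference for the full
# signature):
# run_training_job(config_file=..., updater="Avalanche-ER", ...)
```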

More updaters can be created if needed.

No Multi-Fidelity HPO
---------------------
Currently, we do not support multi-fidelity hyperparameter optimization with Avalanche updaters. For that reason,
please do not use ``asha`` as a scheduler but use ``random`` or ``bo`` instead.
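The constraint above can be enforced up front rather than discovered at job launch. The helper below is a hypothetical sketch (not a Renate API): it rejects the ``asha`` scheduler whenever an Avalanche updater is selected, following the rule stated in this section:

```python
def check_scheduler(updater: str, scheduler: str) -> str:
    """Reject scheduler/updater combinations this section marks unsupported.

    Avalanche updaters do not support multi-fidelity HPO, so "asha" is
    disallowed for them; "random" and "bo" remain valid choices.
    """
    if updater.startswith("Avalanche-") and scheduler == "asha":
        raise ValueError(
            "Avalanche updaters do not support multi-fidelity HPO; "
            "use 'random' or 'bo' instead."
        )
    return scheduler
```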

It would be great to link the HPO page here

@610v4nn1 610v4nn1 removed the request for review from lballes April 24, 2023 16:14
@wistuba wistuba merged commit f2c7d15 into main Apr 24, 2023
@wistuba wistuba deleted the mw-avalanche-docs branch April 24, 2023 16:14