Hyperparameter evolution in active learning batch mode context #2837
In an active learning batch-mode setting, datapoints are iteratively added to the labelled training set over multiple cycles, with the model retrained on the current labelled dataset in each cycle. When running hyperparameter evolution, is it enough to run the generations on only the first cycle of training, or do I need to run the generations for every cycle of training on the currently available labelled set? I feel like the first cycle alone ought to be enough, but maybe I'm wrong. A toy sketch of the setup I mean is below. Thanks in advance.
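For concreteness, a minimal sketch of that loop with evolution run once, on the first cycle only. `evolve_hyps`, `train_model`, and `select_batch` are illustrative stand-ins (not YOLOv5 APIs), and the data is fake:

```python
import random

# Illustrative stand-ins for the real pipeline; none of these are YOLOv5 APIs.
def evolve_hyps(labelled, generations):
    """Stand-in for hyperparameter evolution on the given labelled pool."""
    return {"lr0": 0.01, "momentum": 0.937}  # placeholder evolved hyps

def train_model(labelled, hyps):
    """Stand-in for a full training run with the given hyperparameters."""
    return {"hyps": hyps, "n_train": len(labelled)}

def select_batch(model, unlabelled, k):
    """Stand-in acquisition function: choose k points to send for labelling."""
    return random.sample(unlabelled, min(k, len(unlabelled)))

labelled = list(range(100))          # toy initial labelled pool
unlabelled = list(range(100, 1000))  # toy unlabelled pool

# The strategy in question: evolve hyperparameters once, on the first
# cycle's data only, then reuse them for every retraining cycle.
hyps = evolve_hyps(labelled, generations=300)

for cycle in range(5):
    model = train_model(labelled, hyps)            # retrain each cycle
    batch = select_batch(model, unlabelled, k=50)  # acquisition step
    labelled += batch                              # oracle labels the batch
    unlabelled = [x for x in unlabelled if x not in batch]
```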
@johanenforcer there will typically be < 1.0 correlation between your evolution results and your training results unless they are one and the same scenario (same arguments, dataset, epochs, etc.). For example, you can probably evolve decent hyperparameters on COCO by training to 150 epochs rather than 300, saving some time in the process, but those hyperparameters are evolved for 150 epochs of training, not 300, and will not correlate 100% with the best hyperparameters for 300 epochs. How far what you evolve on deviates from what you train on is up to you; logically, the further they deviate, the less the evolution results will correlate with your training results.
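To make that trade-off concrete, here is a toy mutate-and-select sketch in the spirit of this advice: evolve on a cheaper proxy budget (150 epochs), then do one full-length run with the winner. In YOLOv5 itself this corresponds to the `train.py --evolve` flow, but the code below is a generic illustration, not YOLOv5's implementation; `fitness` is a stand-in for a full training run returning a score such as mAP.

```python
import random

def fitness(hyps, epochs):
    """Toy stand-in for 'train for `epochs` and return a score';
    `epochs` is shown only to mirror the real cost knob."""
    return -((hyps["lr0"] - 0.01) ** 2) - ((hyps["momentum"] - 0.94) ** 2)

def mutate(hyps, sigma=0.2):
    """Scale each hyp by a random factor, as in a simple genetic scheme."""
    return {k: v * (1 + sigma * random.uniform(-1, 1)) for k, v in hyps.items()}

best = {"lr0": 0.02, "momentum": 0.9}
best_score = fitness(best, epochs=150)     # evolve on the short schedule

for generation in range(300):
    candidate = mutate(best)
    score = fitness(candidate, epochs=150)
    if score > best_score:                 # keep the fitter candidate
        best, best_score = candidate, score

final_score = fitness(best, epochs=300)    # one full-length training run
print(best, final_score)
```

The same logic applies across active learning cycles: the first cycle's small labelled pool is a proxy scenario, so hyperparameters evolved on it will correlate imperfectly with the best hyperparameters for later, larger cycles.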