
implement mini-epochs in training #60

Merged on Oct 17, 2021 (5 commits into HKU-BAL:main from the miniepochs branch)

Conversation

ftostevin-ont (Collaborator)

This PR implements mini-epochs (multiple rounds of validation per full traversal of the training data) in Train.py.
I have found that running with --mini_epochs 5 slightly improves the accuracy of the resulting model, since the model at an intermediate stage of an epoch may be better than the one at the end of the epoch. If no --mini_epochs parameter is provided, training behaves the same as it does currently.

To facilitate this change, training batch generation has been reimplemented as a keras.utils.Sequence subclass, since Sequence provides an on_epoch_end hook that can be used to count mini-epochs and shuffle only once a full epoch has been completed. This code is based on the SequenceBatcher implemented in Medaka.
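As a rough illustration of the mechanism (not the PR's actual code, which follows Medaka's SequenceBatcher), a Sequence subclass along these lines can expose one mini-epoch's share of batches per Keras epoch and defer shuffling until the whole dataset has been traversed. The class and parameter names below are illustrative, not the identifiers used in Train.py:

```python
# Minimal sketch of the mini-epoch idea, assuming x and y are numpy
# arrays; MiniEpochSequence and its parameters are hypothetical names.
import numpy as np
from tensorflow import keras

class MiniEpochSequence(keras.utils.Sequence):
    def __init__(self, x, y, batch_size, mini_epochs=1):
        self.x, self.y = x, y
        self.batch_size = batch_size
        self.mini_epochs = mini_epochs
        self.mini_epoch = 0  # which slice of the full pass we are on
        self.indices = np.arange(len(x))
        self.batches_per_pass = len(x) // batch_size

    def __len__(self):
        # Keras sees one mini-epoch's share of batches per "epoch".
        return self.batches_per_pass // self.mini_epochs

    def __getitem__(self, idx):
        # Offset into the full pass by the mini-epochs already consumed.
        start = (self.mini_epoch * len(self) + idx) * self.batch_size
        sel = self.indices[start:start + self.batch_size]
        return self.x[sel], self.y[sel]

    def on_epoch_end(self):
        # Keras calls this after every mini-epoch (its "epoch");
        # reshuffle only once the full dataset has been traversed.
        self.mini_epoch = (self.mini_epoch + 1) % self.mini_epochs
        if self.mini_epoch == 0:
            np.random.shuffle(self.indices)
```

Under a scheme like this, validation (and any checkpointing callback) runs at the end of every Keras epoch, i.e. mini_epochs times per full traversal, so the epochs argument passed to model.fit would need to be multiplied by mini_epochs to keep the number of full passes over the data unchanged.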

@aquaskyline (Member)

Testing by fine-tuning a model. Will roll out with r7.

@zhengzhenxian merged commit 31cdf49 into HKU-BAL:main on Oct 17, 2021
@ftostevin-ont deleted the miniepochs branch on April 29, 2022