This repository has been archived by the owner on Oct 9, 2023. It is now read-only.

improve finetuning #39

Merged
tchaton merged 17 commits into master from better_finetune on Feb 1, 2021

Conversation

@tchaton tchaton (Contributor) commented on Feb 1, 2021

What does this PR do?

This PR makes fine-tuning parametrisation more flexible. If we agree on the API, I will add it to the doc.
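For context, the call pattern this enables would look roughly like the sketch below. This is a minimal sketch only: model and datamodule are assumed to be a Flash task and DataModule built elsewhere, and the "freeze_unfreeze" strategy name is taken from the review discussion further down.

import flash

# model: a Flash task (e.g. an image classifier); datamodule: a Flash DataModule.
# Both are assumed to have been created elsewhere.
trainer = flash.Trainer(max_epochs=10)

# Select a built-in finetuning strategy by name and use its defaults.
trainer.finetune(model, datamodule=datamodule, strategy="freeze_unfreeze")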

Fixes # (issue)

Before submitting

  • Was this discussed/approved via a GitHub issue? (not needed for typos and docs improvements)
  • Did you read the contributor guideline, Pull Request section?
  • Did you make sure your PR does only one thing, instead of bundling different changes together?
  • Did you make sure to update the documentation with your changes?
  • Did you write any new necessary tests? [not needed for typos/docs]
  • Did you verify new and existing tests pass locally with your changes?
  • If you made a notable change (that affects users), did you update the CHANGELOG?

PR review

  • Is this pull request ready for review? (if not, please submit in draft mode)

Anyone in the community is free to review the PR once the tests have passed.
If we didn't discuss your PR in GitHub issues, there's a high chance it will not be merged.

Did you have fun?

Make sure you had fun coding 🙃

@tchaton tchaton self-assigned this Feb 1, 2021
@codecov codecov bot commented on Feb 1, 2021

Codecov Report

Merging #39 (79cd57b) into master (bd3d50d) will decrease coverage by 1.63%.
The diff coverage is 68.35%.

Impacted file tree graph

@@            Coverage Diff             @@
##           master      #39      +/-   ##
==========================================
- Coverage   87.06%   85.43%   -1.64%     
==========================================
  Files          28       29       +1     
  Lines         866      927      +61     
==========================================
+ Hits          754      792      +38     
- Misses        112      135      +23     
Flag        Coverage Δ
unittests   85.43% <68.35%> (-1.64%) ⬇️

Flags with carried forward coverage won't be shown.

Impacted Files             Coverage Δ
flash/core/trainer.py      72.50% <64.28%> (-17.50%) ⬇️
flash/core/finetuning.py   69.38% <69.38%> (ø)
flash/core/model.py        94.66% <100.00%> (+0.14%) ⬆️

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update bd3d50d...54bb69a.

@Borda Borda added the enhancement (New feature or request) label on Feb 1, 2021
@Borda Borda (Member) left a comment

I am missing docs, at least for the classes, as the names are not very intuitive :/

@carmocca carmocca (Contributor) commented on Feb 1, 2021

I don't like how we allow both a str and passing arguments via kwargs. It is ugly and hard for users to know what parameters are available.

I think it would be cleaner to allow a str to create the callback with some sensible defaults.

And if the user wants to set their own parameters, have them instantiate the callback themselves and pass it:

# unfreezes at epoch 1 by default
trainer.finetune(model, datamodule=datamodule, strategy='freeze_unfreeze')

# if you want to change it
trainer.finetune(model, datamodule=datamodule, strategy=FreezeUnfreeze(unfreeze_epoch=50))
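
Taking "instantiate the callback themselves" one step further, a user could also write their own strategy. The snippet below is a minimal sketch only, assuming pytorch_lightning's BaseFinetuning callback (its freeze_before_training / finetune_function hooks, as of early 2021) and a model that exposes a backbone attribute; the class name UnfreezeBackboneAtEpoch is made up for illustration.

from pytorch_lightning.callbacks import BaseFinetuning


class UnfreezeBackboneAtEpoch(BaseFinetuning):
    """Hypothetical strategy: keep the backbone frozen, then unfreeze it at a chosen epoch."""

    def __init__(self, unfreeze_epoch: int = 10):
        super().__init__()
        self.unfreeze_epoch = unfreeze_epoch

    def freeze_before_training(self, pl_module):
        # Freeze the backbone before training starts (assumes `pl_module.backbone` exists).
        self.freeze(pl_module.backbone)

    def finetune_function(self, pl_module, epoch, optimizer, opt_idx):
        # At the chosen epoch, unfreeze the backbone and add its parameters
        # to the optimizer with a lower learning rate.
        if epoch == self.unfreeze_epoch:
            self.unfreeze_and_add_param_group(pl_module.backbone, optimizer, lr=1e-5)


# Passed just like a built-in strategy:
# trainer.finetune(model, datamodule=datamodule, strategy=UnfreezeBackboneAtEpoch(unfreeze_epoch=5))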

@tchaton tchaton (Contributor, Author) commented on Feb 1, 2021


Sounds fine to me!

@tchaton tchaton merged commit edccf3b into master Feb 1, 2021
@tchaton tchaton deleted the better_finetune branch February 1, 2021 18:31
Labels: enhancement (New feature or request)
Projects: None yet
3 participants