This repository has been archived by the owner on Oct 9, 2023. It is now read-only.

Fix pretraining_transforms for ImageEmbedder #1196

Merged: 30 commits from fix/ImageEmbedder/transforms into master on Mar 4, 2022

Conversation

@krshrimali (Contributor) commented on Feb 25, 2022

What does this PR do?

Attempts to fix #1185. Before this change, pretraining_transforms was never called.
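
For context, here is a minimal sketch of the configuration this fix targets, using the option names from the test added in this PR (treat the exact constructor signature as an assumption, not the definitive API):

# a sketch of configuring the embedder so the pretraining transform is applied;
# option names are taken from the parametrized test quoted later in this PR,
# the rest is assumed
from flash.image import ImageEmbedder

embedder = ImageEmbedder(
    backbone="vision_transformer",
    training_strategy="simclr",
    head="simclr_head",
    pretraining_transform="simclr_transform",
)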

Before submitting

  • Was this discussed/approved via a GitHub issue? (not needed for typos and docs improvements)
  • Did you read the contributor guideline, Pull Request section?
  • Did you make sure your PR does only one thing, instead of bundling different changes together?
  • Did you make sure to update the documentation with your changes?
  • Did you write any new necessary tests? [not needed for typos/docs]
  • Did you verify new and existing tests pass locally with your changes?
  • If you made a notable change (that affects users), did you update the CHANGELOG?

PR review

  • Is this pull request ready for review? (if not, please submit in draft mode)

Anyone in the community is free to review the PR once the tests have passed.
If we didn't discuss your PR in GitHub issues, there's a high chance it will not be merged.

Did you have fun?

Make sure you had fun coding 🙃

@codecov (bot) commented on Feb 25, 2022

Codecov Report

Merging #1196 (bb27c2a) into master (36377b7) will increase coverage by 0.12%.
The diff coverage is 100.00%.


@@            Coverage Diff             @@
##           master    #1196      +/-   ##
==========================================
+ Coverage   90.99%   91.11%   +0.12%     
==========================================
  Files         284      284              
  Lines       12740    12755      +15     
==========================================
+ Hits        11593    11622      +29     
+ Misses       1147     1133      -14     
Flag        Coverage Δ
unittests   91.11% <100.00%> (+0.12%) ⬆️

Flags with carried forward coverage won't be shown.

Impacted Files Coverage Δ
flash/core/data/io/input_transform.py 72.72% <100.00%> (+0.13%) ⬆️
flash/image/embedding/heads/vissl_heads.py 100.00% <100.00%> (ø)
flash/image/embedding/losses/vissl_losses.py 100.00% <100.00%> (+8.33%) ⬆️
flash/image/embedding/model.py 91.22% <100.00%> (+2.76%) ⬆️
flash/image/embedding/vissl/adapter.py 89.01% <100.00%> (+0.77%) ⬆️
flash/image/embedding/backbones/vissl_backbones.py 74.07% <0.00%> (-7.41%) ⬇️
flash/core/adapter.py 98.38% <0.00%> (+1.61%) ⬆️
...ash/image/embedding/strategies/vissl_strategies.py 100.00% <0.00%> (+8.57%) ⬆️
... and 2 more

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Last update 36377b7...bb27c2a.

@krshrimali krshrimali marked this pull request as draft February 25, 2022 13:31
@krshrimali (Contributor, Author):

Update: looks like it's still not called. WIP :(

Review threads (outdated, resolved): flash/core/data/transforms.py, flash/core/adapter.py, flash/image/embedding/model.py
@ethanwharris added the "bug / fix (Something isn't working)" label Feb 28, 2022
@ethanwharris ethanwharris added this to the 0.7.x milestone Feb 28, 2022
@krshrimali krshrimali marked this pull request as ready for review March 2, 2022 16:34
Review thread (outdated, resolved): CHANGELOG.md
@ethanwharris (Collaborator) left a comment:

Awesome!!! LGTM, just a few small comments

Review thread (outdated, resolved): flash/image/embedding/model.py
Comment on lines 55 to 61
"backbone, training_strategy, head, pretraining_transform",
[
("vision_transformer", "simclr", "simclr_head", "simclr_transform"),
("vision_transformer", "dino", "dino_head", "dino_transform"),
("vision_transformer", "barlow_twins", "simclr_head", "barlow_twins_transform"),
("vision_transformer", "swav", "swav_head", "swav_transform"),
],
A collaborator replied:
Nice!
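
For context, the quoted lines above are the argument list of a pytest parametrization; a hedged reconstruction of the surrounding test header might look like this (the test function name and body are hypothetical):

# how the quoted arguments plug into pytest.mark.parametrize; the enclosing
# test function is a hypothetical placeholder
import pytest

@pytest.mark.parametrize(
    "backbone, training_strategy, head, pretraining_transform",
    [
        ("vision_transformer", "simclr", "simclr_head", "simclr_transform"),
        ("vision_transformer", "dino", "dino_head", "dino_transform"),
        ("vision_transformer", "barlow_twins", "simclr_head", "barlow_twins_transform"),
        ("vision_transformer", "swav", "swav_head", "swav_transform"),
    ],
)
def test_embedder_strategies(backbone, training_strategy, head, pretraining_transform):
    ...  # build the embedder with these options and run a short fit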

Review thread (outdated, resolved): flash/core/adapter.py
@krshrimali (Contributor, Author):

@ethanwharris - how do we get rid of the Codecov failures? Can you please help with that?

Also, there are these tests failing: https://dev.azure.com/PytorchLightning/lightning%20Flash/_build/results?buildId=59765&view=logs&j=fb683405-d979-52da-6de9-2541dff429a6

# Failed while doing: nltk.download("punkt", quiet=True, force=False)

With an error:

Resource punkt not found.
Please use the NLTK Downloader to obtain the resource:

And the suggested fix from the CI:

# suggestion from the error:
import nltk
nltk.download('punkt')

Should we try downloading punkt as part of the Azure CI config?
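
A guarded variant of that suggestion, which only touches the network when the resource is actually missing (a sketch, assuming the CI machines can reach the NLTK download server):

# look for the tokenizer data first and download only if it is absent
import nltk

try:
    nltk.data.find("tokenizers/punkt")
except LookupError:
    nltk.download("punkt", quiet=True)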

The mergify bot added and removed the has conflicts label twice on Mar 3, 2022.
@krshrimali (Contributor, Author):

Thank you @ethanwharris for fixing the failures! All the tests pass now 🚀. Regarding the DeepSource failures, I think we can safely ignore them for now? (Is there a way to auto-merge despite these failures?)

I have a minor suggestion, though: https://deepsource.io/gh/PyTorchLightning/lightning-flash/run/cc57fc8b-c459-4137-8c24-69f7ddc9b119/python/PYL-W0221 - I think DeepSource is right here, and we can make loss_fn, backbone, head, and hooks keyword arguments (it won't be a BC-breaking change) in a separate PR, to make sure that we override correctly. What do you think?
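
To illustrate the suggestion, here is a minimal sketch of the keyword-argument idea (the class and method names are hypothetical, not the actual Flash signatures):

# PYL-W0221 flags overrides whose parameters differ from the base method;
# accepting the components as keyword arguments with defaults keeps every
# override signature-compatible without breaking existing callers
class Strategy:
    def configure(self, *, loss_fn=None, backbone=None, head=None, hooks=None):
        self.loss_fn, self.backbone, self.head, self.hooks = loss_fn, backbone, head, hooks

class SimCLRStrategy(Strategy):
    def configure(self, *, loss_fn=None, backbone=None, head=None, hooks=None):
        # same keyword-only signature as the base class, so the override matches
        super().configure(loss_fn=loss_fn, backbone=backbone, head=head, hooks=hooks)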

@ethanwharris ethanwharris merged commit 8b78676 into master Mar 4, 2022
@ethanwharris ethanwharris deleted the fix/ImageEmbedder/transforms branch March 4, 2022 11:54
ethanwharris added two commits that referenced this pull request on Mar 30, 2022 (Co-authored-by: Ethan Harris <ethanwharris@gmail.com>)
Labels: bug / fix (Something isn't working)
Projects: None yet
Development: successfully merging this pull request may close "Training fails for dino and moco strategies (shape mismatch errors)"
2 participants