This repository has been archived by the owner on Oct 9, 2023. It is now read-only.

Conversation

@ar90n (Contributor) commented on Apr 4, 2022

What does this PR do?

This PR is motivated by the second change described in this comment. It contains the following modifications.

  • Make the training_strategy, head, and pretraining_transform arguments of ImageEmbedder.__init__ optional (see the usage sketch after this list)
  • Add a DefaultAdapter to the Embedding task, inspired by the one used in the Image Classification task
  • Fix a FiftyOne integration test so it runs correctly
  • Add test cases for embedding-only usage without VISSL
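
Below is a rough usage sketch of the embedding-only workflow this change enables. The backbone name, file paths, and batch size are illustrative assumptions, not values taken from the PR.

```python
# Rough sketch (assumed example values) of using ImageEmbedder without any
# VISSL-specific arguments once training_strategy / head / pretraining_transform
# become optional.
import flash
from flash.image import ImageClassificationData, ImageEmbedder

embedder = ImageEmbedder(backbone="resnet18")  # no training strategy required

datamodule = ImageClassificationData.from_files(
    predict_files=["image_1.png", "image_2.png"],
    batch_size=2,
)

trainer = flash.Trainer()
embeddings = trainer.predict(embedder, datamodule=datamodule)
```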

Part of #1031

This PR depends on #1264, so it won't be marked Ready for review until #1264 is merged.

Before submitting

  • Was this discussed/approved via a GitHub issue? (no need for typos and docs improvements)
  • Did you read the contributor guideline, Pull Request section?
  • Did you make sure your PR does only one thing, instead of bundling different changes together?
  • Did you make sure to update the documentation with your changes?
  • Did you write any new necessary tests? [not needed for typos/docs]
  • Did you verify new and existing tests pass locally with your changes?
  • If you made a notable change (that affects users), did you update the CHANGELOG?

PR review

  • Is this pull request ready for review? (if not, please submit in draft mode)

Anyone in the community is free to review the PR once the tests have passed.
If we didn't discuss your PR in GitHub issues, there's a high chance it will not be merged.

Did you have fun?

Make sure you had fun coding 🙃

@ethanwharris (Collaborator) left a comment

@ar90n This is awesome!!! Just a few minor comments 😃

flash/image/embedding/model.py (outdated review thread)

def training_step(self, batch: Any, batch_idx: int) -> Any:
    batch = (batch[DataKeys.INPUT], batch[DataKeys.TARGET])
    return Task.training_step(self._task, batch, batch_idx)
@ethanwharris (Collaborator) commented:

Let's instead raise an error here, as we only support training the embedder when using the VISSL integration. We can do something similar to what we have here: https://github.com/PyTorchLightning/lightning-flash/blob/167f5e73fe28630830e15ed40502979a29d38a68/flash/text/embedding/model.py#L81
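
For reference, a minimal sketch of that suggestion, modeled on the linked TextEmbedder code (the error message wording is an assumption):

```python
# Sketch: raise instead of delegating to the adapter, since training the
# embedder is only supported through the VISSL integration (message assumed).
def training_step(self, batch: Any, batch_idx: int) -> Any:
    raise NotImplementedError(
        "Training an `ImageEmbedder` is only supported with the VISSL integration. "
        "Please provide a VISSL `training_strategy`."
    )
```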

@ar90n (Contributor, Author) commented:
I fixed it. Please review again!


def default(head: Optional[str] = None, loss_fn: Optional[str] = None, **kwargs):
    if loss_fn in IMAGE_EMBEDDER_LOSS_FUNCTIONS:
        loss_fn = IMAGE_EMBEDDER_LOSS_FUNCTIONS.get(loss_fn)(**kwargs)
@ethanwharris (Collaborator) commented:

I guess just warn here if the loss function or head aren't None (as we don't have any loss functions or heads that don't use VISSL).
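
A minimal sketch of that suggestion, assuming the existing `default` signature (the warning text is illustrative):

```python
import warnings
from typing import Any, Optional

# Sketch: warn when head or loss_fn are passed to the default (non-VISSL)
# strategy, since only the VISSL training strategies consume them.
def default(head: Optional[str] = None, loss_fn: Optional[str] = None, **kwargs: Any):
    if head is not None or loss_fn is not None:
        warnings.warn("`head` and `loss_fn` are only used by the VISSL training strategies.")
    ...  # existing lookup / return logic unchanged
```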

@ar90n (Contributor, Author) commented:
I agree with you. On second thought, it is enough for `default` to always return `(None, None, [])`. I think this keeps `default` simple. What do you think?

I modified this PR as described above. If that's OK, please review it.
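
A minimal sketch of the simplified `default` described above (the return types are assumptions based on the comment):

```python
from typing import Any, List, Optional, Tuple

# Sketch: the default (non-VISSL) strategy has no head, loss function, or hooks,
# so it can simply always return (None, None, []).
def default(head: Optional[str] = None, loss_fn: Optional[str] = None, **kwargs: Any) -> Tuple[None, None, List]:
    return None, None, []
```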

@ethanwharris deleted the Lightning-Universe:feature/ssl_backbones branch on Apr 6, 2022 at 11:29.
@ar90n mentioned this pull request on Apr 6, 2022.