Make dependency of VISSL optional #1271
Conversation
@ar90n This is awesome!!! Just a few minor comments 😃
def training_step(self, batch: Any, batch_idx: int) -> Any:
    batch = (batch[DataKeys.INPUT], batch[DataKeys.TARGET])
    return Task.training_step(self._task, batch, batch_idx)
Let's instead raise an error here, as we only support training the embedder when using the VISSL integration. We can do something similar to what we have here: https://github.com/PyTorchLightning/lightning-flash/blob/167f5e73fe28630830e15ed40502979a29d38a68/flash/text/embedding/model.py#L81
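A minimal sketch of the suggested guard; the class name, exception type, and message below are assumptions for illustration, not the exact code from the linked text embedding model:

from typing import Any


class DefaultAdapter:  # hypothetical stand-in for the non-VISSL adapter
    def training_step(self, batch: Any, batch_idx: int) -> Any:
        # Assumed behaviour: training the embedder is only supported through the
        # VISSL integration, so the non-VISSL path raises instead of training.
        raise NotImplementedError(
            "Training an ImageEmbedder without the VISSL integration is not supported. "
            "Install VISSL and select one of the VISSL training strategies."
        )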
I fixed it. Please review again!
def default(head: Optional[str] = None, loss_fn: Optional[str] = None, **kwargs):
    if loss_fn in IMAGE_EMBEDDER_LOSS_FUNCTIONS:
        loss_fn = IMAGE_EMBEDDER_LOSS_FUNCTIONS.get(loss_fn)(**kwargs)
I guess just warn here if the loss function or head aren't None (as we don't have any loss functions or heads that don't use VISSL).
I agree with you. On second thought, it is enough that default always returns (None, None, []). I think this keeps default simple. What do you think?
I have modified this PR accordingly. If that's OK, please review it.
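A minimal sketch of that simplification, assuming a standalone default helper; the warning message and behaviour are illustrative, not necessarily the exact code in this PR:

import warnings
from typing import List, Optional, Tuple


def default(head: Optional[str] = None, loss_fn: Optional[str] = None, **kwargs) -> Tuple[None, None, List]:
    # Assumed behaviour: no head or loss function exists outside the VISSL
    # integration, so warn if either was provided and return empty defaults.
    if head is not None or loss_fn is not None:
        warnings.warn(
            "The default strategy does not use a head or loss function; "
            "the provided values will be ignored."
        )
    return None, None, []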
Co-authored-by: Ethan Harris <ethanwharris@gmail.com>
What does this PR do?
This PR is motivated by the second change in this comment, and it contains the following modifications (a sketch of the resulting signature is given after this list):
- Make training_strategy, head, and pretraining_transform optional in the arguments of __init__ of ImageEmbedder
- Add DefaultAdapter to the Embedding task, inspired by the Image Classification task's adapter
Part of #1031
This PR depends on #1264, so it won't be marked Ready for review until #1264 is merged.
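A minimal sketch of the signature change, assuming a default adapter is used when no training strategy is given; the class body is illustrative and not the actual Flash implementation:

from typing import Any, Optional


class ImageEmbedder:  # simplified stand-in, not the actual Flash class
    def __init__(
        self,
        backbone: str = "resnet18",
        training_strategy: Optional[str] = None,
        head: Optional[str] = None,
        pretraining_transform: Optional[str] = None,
        **kwargs: Any,
    ) -> None:
        # Assumed behaviour: with no training strategy, head, or transform given,
        # fall back to a default adapter that only supports embedding extraction;
        # VISSL is only required when a training strategy is explicitly requested.
        self.uses_vissl = training_strategy is not None
        self.training_strategy = training_strategy
        self.head = head
        self.pretraining_transform = pretraining_transform

Under these assumptions, ImageEmbedder(backbone="resnet18") would work for embedding extraction without VISSL installed, while passing a training_strategy would require the VISSL integration.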
Before submitting
PR review
Anyone in the community is free to review the PR once the tests have passed.
If we didn't discuss your PR in GitHub issues, there's a high chance it will not be merged.
Did you have fun?
Make sure you had fun coding 🙃