Hi Kevin,

I noticed that the current implementation of VICReg doesn't have a gather layer for the input embeddings. Does it still work as expected when collecting embeddings from mini-batches across multiple GPUs? See https://github.com/facebookresearch/vicreg/blob/a73f567660ae507b0667c68f685945ae6e2f62c3/main_vicreg.py#L200 for the original VICReg implementation (sketched below).
Regards,
Notody
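For context, the gather layer in the linked main_vicreg.py is an autograd function along these lines (a paraphrased sketch, not a verbatim copy; `FullGatherLayer` is the name used in that repo):

```python
import torch
import torch.distributed as dist

class FullGatherLayer(torch.autograd.Function):
    """Gather tensors from all processes, with gradient support.
    A plain dist.all_gather does not propagate gradients back to the
    per-process inputs, so backward() reduces them manually."""

    @staticmethod
    def forward(ctx, x):
        output = [torch.zeros_like(x) for _ in range(dist.get_world_size())]
        dist.all_gather(output, x)
        return tuple(output)

    @staticmethod
    def backward(ctx, *grads):
        all_gradients = torch.stack(grads)
        dist.all_reduce(all_gradients)
        return all_gradients[dist.get_rank()]

# Typical use: x = torch.cat(FullGatherLayer.apply(x), dim=0)
```

Without something like this, each GPU computes the VICReg variance and covariance terms over its local mini-batch only, which is the concern raised above.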
Yeah, I don't think it does. For me, the ideal solution would be to make it compatible with DistributedLossWrapper. That means changing either DistributedLossWrapper or VICRegLoss.
In v2.0 this will work:

```python
loss_fn = DistributedLossWrapper(loss=VICRegLoss())
loss = loss_fn(embeddings, ref_emb=ref_emb)
```
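A slightly fuller sketch of that v2.0 usage (the import paths and the random-tensor setup are my assumptions; it also presumes a torch.distributed process group has already been initialized, e.g. via torchrun):

```python
import torch
from pytorch_metric_learning.losses import VICRegLoss
from pytorch_metric_learning.utils import distributed as pml_dist

# The wrapper gathers embeddings from every process before the loss
# runs, mirroring the gather step in the original VICReg script.
loss_fn = pml_dist.DistributedLossWrapper(loss=VICRegLoss())

# Stand-in projector outputs for the two augmented views of a local batch.
embeddings = torch.randn(32, 128, requires_grad=True)
ref_emb = torch.randn(32, 128)

loss = loss_fn(embeddings, ref_emb=ref_emb)
loss.backward()
```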