Implement Hugging Face Image Embedding MLTransform #31536
Conversation
Assigning reviewers. If you would like to opt out of this review, comment `assign to next reviewer`.

R: @damccorm for label python.

The PR bot will only process comments in the main thread (not review comments).
```
https://www.sbert.net/docs/sentence_transformer/pretrained_models.html#image-text-models # pylint: disable=line-too-long
for a list of sentence_transformers models.
columns: List of columns to be embedded.
min_batch_size: The minimum batch size to be used for inference.
```
Shall we add all these options to `__init__`, similarly to `SentenceTransformerEmbeddings`?
This is a bit of a weird case where those parameters are passed up as kwargs and handled by the `EmbeddingsManager`. I'd be okay with having these explicitly in the constructor, though.
Yes, this wound up looking cleaner. PTAL.
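For reference, an explicit constructor along the lines discussed above might look roughly like the sketch below; the exact parameter set and the `EmbeddingsManager` keyword names are assumptions rather than the PR's final code.

```python
from typing import List
from typing import Optional

from apache_beam.ml.transforms.base import EmbeddingsManager


class SentenceTransformerImageEmbeddings(EmbeddingsManager):
  """Sketch only: surfaces batching options explicitly instead of **kwargs."""
  def __init__(
      self,
      model_name: str,
      columns: List[str],
      *,
      min_batch_size: Optional[int] = None,
      max_batch_size: Optional[int] = None,
      large_model: bool = False,
      **kwargs):
    self.model_name = model_name
    # The batching/model-size options are still handled by EmbeddingsManager;
    # they are simply named explicitly here rather than hiding in **kwargs.
    super().__init__(
        columns=columns,
        min_batch_size=min_batch_size,
        max_batch_size=max_batch_size,
        large_model=large_model,
        **kwargs)
```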
```
        large_model=self.large_model)

  def get_ptransform_for_processing(self, **kwargs) -> beam.PTransform:
    # wrap the model handler in a _TextEmbeddingHandler since
```
`_TextEmbeddingHandler`? And do we need to create `SentenceTransformerImageEmbeddings`? Shall we just add `image_model: bool` in `SentenceTransformerEmbeddings`? Or can we infer the model type automatically and then call either `_ImageEmbeddingHandler` or `_TextEmbeddingHandler`?
Swapping to a bool assignment may be cleaner (and is how we probably need to handle the Inference API version as well); let me take a run at writing that real quick.
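A minimal sketch of what that bool-based dispatch could look like, assuming the handler classes live in `apache_beam.ml.transforms.base` and the config exposes an `image_model` flag; this is illustrative, not the PR's actual diff.

```python
import apache_beam as beam
from apache_beam.ml.inference.base import RunInference
from apache_beam.ml.transforms.base import _ImageEmbeddingHandler
from apache_beam.ml.transforms.base import _TextEmbeddingHandler


# Method on the sentence-transformers embedding config class (sketch).
def get_ptransform_for_processing(self, **kwargs) -> beam.PTransform:
  # Choose the handler wrapper from a single image_model flag instead of
  # keeping a separate image-specific config class.
  handler_cls = (
      _ImageEmbeddingHandler if self.image_model else _TextEmbeddingHandler)
  return RunInference(model_handler=handler_cls(self))
```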
LGTM. Thanks.
Adds support for generating image embeddings via the sentence-transformers library.
Part of #31500
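A rough usage sketch of the feature, assuming the config ends up exposing an `image_model` flag as discussed in the review thread; the model name, column name, and paths are illustrative placeholders.

```python
import apache_beam as beam
from apache_beam.ml.transforms.base import MLTransform
from apache_beam.ml.transforms.embeddings.huggingface import (
    SentenceTransformerEmbeddings)
from PIL import Image

# Hypothetical local image; any PIL.Image the chosen model accepts would do.
image = Image.open('/path/to/image.jpg')

embedding_config = SentenceTransformerEmbeddings(
    model_name='clip-ViT-B-32',  # an image-text model from sentence-transformers
    columns=['image'],
    image_model=True)  # assumed flag name, per the review thread above

with beam.Pipeline() as pipeline:
  _ = (
      pipeline
      | beam.Create([{'image': image}])
      | MLTransform(write_artifact_location='/tmp/artifacts').with_transform(
          embedding_config))
```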
Thank you for your contribution! Follow this checklist to help us incorporate your contribution quickly and easily:

- Mention the appropriate issue in your description (for example: `addresses #123`), if applicable. This will automatically add a link to the pull request in the issue. If you would like the issue to automatically close on merging the pull request, comment `fixes #<ISSUE NUMBER>` instead.
- Update `CHANGES.md` with noteworthy changes.

See the Contributor Guide for more tips on how to make review process smoother.
To check the build health, please visit https://github.com/apache/beam/blob/master/.test-infra/BUILD_STATUS.md
GitHub Actions Tests Status (on master branch)
See CI.md for more information about GitHub Actions CI or the workflows README to see a list of phrases to trigger workflows.