New OpenCLIP Ecommerce & Retail Models - Image Classification Example #929
ellie-sleightholm started this conversation in Show and tell
Introduction
Marqo have recently announced two new state-of-the-art embedding models for multimodal fashion search and recommendations: Marqo-FashionCLIP and Marqo-FashionSigLIP. These models are supported in open-clip, so I thought I'd create a Show and Tell of these models in action 🔥

For the full code and a guided walk-through, visit this article or try the models out yourself in Google Colab!
1. Install Libraries
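The original post's install commands aren't shown here, so this is a minimal sketch: `open_clip_torch` provides the model loading, and I'm assuming the Hugging Face `datasets` library and Pillow for the data-loading steps that follow.

```shell
# open_clip_torch: the open-clip implementation that supports Marqo-FashionCLIP
# datasets + pillow: assumed here for loading a fashion dataset and its images
pip install open_clip_torch datasets pillow
```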
2. Load a Dataset
3. Load Marqo-FashionCLIP Model and Preprocessing
4. Perform Image Classification with Marqo-FashionCLIP
Conclusion
In this post, we've used Marqo-FashionCLIP for image classification on a small fashion dataset. We classified three specific images from the dataset, computing the predicted sub-category for each and comparing it with the true sub-category.
Additional Resources
Marqo-FashionCLIP on Hugging Face
Marqo-FashionCLIP on GitHub
Article: Marqo Launches Family of Embedding Models for Ecommerce and Retail