CLIP Latent Exploration

Minimal working example illustrating the use of CLIP (Contrastive Language-Image Pre-Training) embeddings.

The example uses (image, caption) pairs from Google's Conceptual Captions dataset, which is available via the Hugging Face Hub. CLIP is used via the official OpenAI implementation at https://github.com/openai/CLIP.
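
A minimal sketch of how the data and model might be loaded. The Hub dataset id ("conceptual_captions") and the column names ("image_url", "caption") are assumptions based on the public dataset card; the CLIP loading call follows the official OpenAI package.

```python
# Sketch: load Conceptual Captions from the Hugging Face Hub and the CLIP model.
# Dataset id and column names are assumptions; check the dataset card.
import clip   # official implementation: pip install git+https://github.com/openai/CLIP.git
import torch
from datasets import load_dataset  # Hugging Face `datasets` package

device = "cuda" if torch.cuda.is_available() else "cpu"

# CLIP model plus the matching image preprocessing pipeline
model, preprocess = clip.load("ViT-B/32", device=device)

# Conceptual Captions (images are provided as URLs, captions as plain text)
dataset = load_dataset("conceptual_captions", split="train")
print(dataset[0])  # e.g. {"image_url": "...", "caption": "..."}
```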

In the example, both images and captions are embedded with CLIP, and the embeddings are then projected to a low-dimensional space via UMAP.
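
The following sketch shows the two steps for a small batch of pairs, assuming the `dataset`, `model`, and `preprocess` objects from the snippet above; downloading images from their URLs and the batch size of 256 are illustrative choices, not the repository's exact procedure.

```python
# Sketch: embed images and captions with CLIP, then project both to 2D with UMAP.
import io
import numpy as np
import requests
import torch
import umap  # pip install umap-learn
from PIL import Image

n_pairs = 256
records = dataset.select(range(n_pairs))

with torch.no_grad():
    # Encode images (fetched from their URLs and preprocessed for CLIP)
    images = []
    for url in records["image_url"]:
        img = Image.open(io.BytesIO(requests.get(url, timeout=10).content)).convert("RGB")
        images.append(preprocess(img))
    image_features = model.encode_image(torch.stack(images).to(device))

    # Encode the corresponding captions
    tokens = clip.tokenize(records["caption"], truncate=True).to(device)
    text_features = model.encode_text(tokens)

# Stack both modalities, L2-normalize, and reduce to 2D with UMAP
features = torch.cat([image_features, text_features]).cpu().numpy()
features = features / np.linalg.norm(features, axis=1, keepdims=True)
projection = umap.UMAP(n_components=2, metric="cosine").fit_transform(features)
print(projection.shape)  # (2 * n_pairs, 2): first half images, second half captions
```

The resulting 2D coordinates can then be scatter-plotted, coloring image and caption points differently to inspect how the two modalities mix in the shared CLIP latent space.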
