Official implementation and dataset for the NAACL 2024 paper "ComCLIP: Training-Free Compositional Image and Text Matching"
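ComCLIP is training-free, so its starting point is ordinary CLIP image-text scoring. Below is a minimal sketch of that baseline using the Hugging Face `transformers` CLIP classes; the checkpoint name and example inputs are illustrative and are not taken from the ComCLIP repository.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("example.jpg")  # hypothetical input image
# A compositional pair: same words, different subject/object roles.
captions = ["a dog chasing a ball", "a ball chasing a dog"]

inputs = processor(text=captions, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# logits_per_image holds temperature-scaled cosine similarities.
probs = outputs.logits_per_image.softmax(dim=-1)
print(probs)
```

Vanilla CLIP often ranks such word-order variants inconsistently, which is the failure mode compositional matching methods target.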
A deep learning model that generates natural-language descriptions of images.
Image captioning model with a ResNet50 encoder and an LSTM decoder
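For reference, a minimal sketch of this encoder-decoder pattern in PyTorch; the embedding sizes and vocabulary size are illustrative assumptions, not this repository's actual configuration.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class EncoderCNN(nn.Module):
    def __init__(self, embed_size: int):
        super().__init__()
        resnet = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
        # Drop the classification head; keep the pooled 2048-d feature.
        self.backbone = nn.Sequential(*list(resnet.children())[:-1])
        self.fc = nn.Linear(resnet.fc.in_features, embed_size)

    def forward(self, images):
        feats = self.backbone(images).flatten(1)  # (B, 2048)
        return self.fc(feats)                     # (B, embed_size)

class DecoderRNN(nn.Module):
    def __init__(self, embed_size: int, hidden_size: int, vocab_size: int):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_size)
        self.lstm = nn.LSTM(embed_size, hidden_size, batch_first=True)
        self.fc = nn.Linear(hidden_size, vocab_size)

    def forward(self, features, captions):
        # Prepend the image feature as the first "token" (teacher forcing).
        embeds = torch.cat([features.unsqueeze(1), self.embed(captions)], dim=1)
        hiddens, _ = self.lstm(embeds)
        return self.fc(hiddens)  # (B, T+1, vocab_size)
```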
Download the Flickr8k and Flickr30k image captioning datasets
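Once the images and annotation files are obtained (both datasets require a manual request/download), torchvision ships dataset classes that read them. A minimal sketch, with placeholder paths:

```python
from torchvision import datasets, transforms

tfm = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])

flickr30k = datasets.Flickr30k(
    root="data/flickr30k/images",          # folder of .jpg files (placeholder path)
    ann_file="data/flickr30k/results.txt", # token-format caption file (placeholder path)
    transform=tfm,
)
image, captions = flickr30k[0]  # image tensor and a list of reference captions
print(len(flickr30k), captions[0])
```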
Visual Elocution Synthesis
PyTorch implementation of CLIP (Radford et al., 2021) from scratch, trained on Flickr8k + Flickr30k
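The core of training CLIP from scratch is the symmetric contrastive (InfoNCE) objective over a batch of matched image-text pairs. A minimal sketch, with encoder outputs stubbed by random tensors and illustrative shapes:

```python
import torch
import torch.nn.functional as F

def clip_loss(image_embeds, text_embeds, temperature=0.07):
    # L2-normalize, then compute all-pairs cosine similarity.
    image_embeds = F.normalize(image_embeds, dim=-1)
    text_embeds = F.normalize(text_embeds, dim=-1)
    logits = image_embeds @ text_embeds.t() / temperature  # (B, B)

    # Matching pairs lie on the diagonal.
    targets = torch.arange(logits.size(0), device=logits.device)
    loss_i2t = F.cross_entropy(logits, targets)      # image -> text
    loss_t2i = F.cross_entropy(logits.t(), targets)  # text -> image
    return (loss_i2t + loss_t2i) / 2

loss = clip_loss(torch.randn(8, 512), torch.randn(8, 512))
print(loss.item())
```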
Karpathy split JSON files for image captioning
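A minimal sketch of reading a Karpathy split file (e.g. `dataset_flickr30k.json`). The field names below follow the common layout of these files, but verify them against the JSON you actually download:

```python
import json
from collections import defaultdict

with open("dataset_flickr30k.json") as f:
    data = json.load(f)

splits = defaultdict(list)
for img in data["images"]:
    # Each entry carries a filename, a split tag, and ~5 reference sentences.
    captions = [s["raw"] for s in img["sentences"]]
    splits[img["split"]].append((img["filename"], captions))

for name, items in splits.items():
    print(name, len(items))
```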
ImgCap is an image captioning model that automatically generates descriptive captions for images. It comes in two versions: a CNN + LSTM model and a CNN + LSTM + Attention model.
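The attention variant lets the decoder attend over spatial CNN features at each step. A minimal sketch of additive (Bahdanau-style) attention, with illustrative dimensions that are assumptions rather than ImgCap's actual settings:

```python
import torch
import torch.nn as nn

class AdditiveAttention(nn.Module):
    def __init__(self, feat_dim: int, hidden_dim: int, attn_dim: int):
        super().__init__()
        self.w_feat = nn.Linear(feat_dim, attn_dim)
        self.w_hidden = nn.Linear(hidden_dim, attn_dim)
        self.score = nn.Linear(attn_dim, 1)

    def forward(self, features, hidden):
        # features: (B, num_regions, feat_dim), hidden: (B, hidden_dim)
        energy = torch.tanh(self.w_feat(features) + self.w_hidden(hidden).unsqueeze(1))
        alpha = self.score(energy).squeeze(-1).softmax(dim=1)  # (B, num_regions)
        context = (alpha.unsqueeze(-1) * features).sum(dim=1)  # (B, feat_dim)
        return context, alpha

attn = AdditiveAttention(feat_dim=2048, hidden_dim=512, attn_dim=256)
context, alpha = attn(torch.randn(4, 49, 2048), torch.randn(4, 512))
print(context.shape, alpha.shape)  # (4, 2048) and (4, 49)
```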
Processes data produced by flickr30k_entities for use as regional descriptions for the DenseCap model
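The flickr30k_entities sentence files annotate entity phrases inline, in a form like `[/EN#283585/people Two young guys]`. The regex-based parse below is an illustrative assumption for pulling out (phrase, chain id, types) triples as region descriptions; the official flickr30k_entities repository ships its own loaders.

```python
import re

PHRASE = re.compile(r"\[/EN#(\d+)(/[^\s\]]+) ([^\]]+)\]")

def parse_sentence(annotated: str):
    """Return (phrase, chain_id, types) triples usable as region descriptions."""
    out = []
    for chain_id, types, phrase in PHRASE.findall(annotated):
        out.append((phrase, int(chain_id), types.strip("/").split("/")))
    return out

line = "[/EN#283585/people Two young guys] look at [/EN#283590/animals a dog]."
print(parse_sentence(line))
# [('Two young guys', 283585, ['people']), ('a dog', 283590, ['animals'])]
```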
Image caption generation using a Swin Transformer encoder and a GRU decoder with attention
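A minimal sketch of pairing a pretrained Swin encoder (via `timm`) with a GRU decoder. The model name, sizes, and pooled-feature design are illustrative assumptions, and the per-step attention over patch features (see the attention sketch above) is omitted for brevity:

```python
import timm
import torch
import torch.nn as nn

encoder = timm.create_model("swin_tiny_patch4_window7_224",
                            pretrained=True, num_classes=0)  # pooled features

class GRUDecoder(nn.Module):
    def __init__(self, feat_dim, embed_size, hidden_size, vocab_size):
        super().__init__()
        self.init_h = nn.Linear(feat_dim, hidden_size)
        self.embed = nn.Embedding(vocab_size, embed_size)
        self.gru = nn.GRU(embed_size, hidden_size, batch_first=True)
        self.fc = nn.Linear(hidden_size, vocab_size)

    def forward(self, feats, captions):
        # Seed the GRU's initial hidden state from the image feature.
        h0 = torch.tanh(self.init_h(feats)).unsqueeze(0)  # (1, B, hidden)
        out, _ = self.gru(self.embed(captions), h0)
        return self.fc(out)

images = torch.randn(2, 3, 224, 224)
feats = encoder(images)  # (2, 768) for swin_tiny
decoder = GRUDecoder(feats.size(1), 256, 512, vocab_size=10000)
logits = decoder(feats, torch.randint(0, 10000, (2, 12)))
print(logits.shape)  # (2, 12, 10000)
```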
"Flickr30k_image_captioning" is a project or repository focused on image captioning using the Flickr30k dataset. The project aims to develop and showcase algorithms and models that generate descriptive captions for images.
Implementation of OpenAI's CLIP using pretrained image and text encoders.
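A minimal sketch of this recipe: off-the-shelf image and text backbones with small projection heads into a shared embedding space, trained with a symmetric contrastive loss like the one sketched earlier. The backbone names and projection size are illustrative assumptions.

```python
import timm
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class CLIPFromPretrained(nn.Module):
    def __init__(self, proj_dim=256):
        super().__init__()
        self.image_encoder = timm.create_model("resnet50", pretrained=True, num_classes=0)
        self.text_encoder = AutoModel.from_pretrained("distilbert-base-uncased")
        self.image_proj = nn.Linear(self.image_encoder.num_features, proj_dim)
        self.text_proj = nn.Linear(self.text_encoder.config.hidden_size, proj_dim)

    def forward(self, images, input_ids, attention_mask):
        img = self.image_proj(self.image_encoder(images))
        txt = self.text_encoder(input_ids=input_ids, attention_mask=attention_mask)
        txt = self.text_proj(txt.last_hidden_state[:, 0])  # first-token embedding
        return img, txt  # feed into a symmetric contrastive loss

tok = AutoTokenizer.from_pretrained("distilbert-base-uncased")
batch = tok(["a dog on grass"], return_tensors="pt", padding=True)
model = CLIPFromPretrained()
img_emb, txt_emb = model(torch.randn(1, 3, 224, 224),
                         batch["input_ids"], batch["attention_mask"])
print(img_emb.shape, txt_emb.shape)
```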