RAP-MLLM: Retrieval-Augmented Personalization for Multimodal Large Language Model

News

  • 2024.11.24: Released code and model weights.

Personalize Your Multimodal Large Language Model via Retrieval-Augmented Generation.

RAP-LLaVA
Introduce user-specific concepts to RAP-LLaVA, and it can remember them and achieve strong performance on a variety of personalized multimodal generation tasks.

Visit our Project Page for more demonstrations.

Contents

  • Install
  • Models
  • Demo
  • Evaluation

Note: This repository is still under construction.

Install

  1. Clone the repo into a local folder.

git clone https://github.com/Hoar012/RAP-MLLM.git
cd RAP-MLLM

  2. Install packages.
conda create -n rap python=3.10 -y
conda activate rap
pip install --upgrade pip  # enable PEP 660 support
pip install -e .
pip install -e ".[train]"
pip install flash-attn --no-build-isolation

pip install -r requirements.txt
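
To sanity-check the environment after installation, a quick Python import test can help. This is only a sketch; it assumes the installed package exposes the llava module, as in upstream LLaVA.

# Quick environment check: verifies PyTorch and the installed package import cleanly.
# Assumes the package exposes the `llava` module, as in upstream LLaVA.
import torch
import llava  # noqa: F401

print(f"PyTorch {torch.__version__}; CUDA available: {torch.cuda.is_available()}")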

Models

Pretrained model weights are available on Hugging Face.

  • RAP-LLaVA: RAP-LLaVA-13b
  • RAP-Phi3-V: RAP-Phi3-mini
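
The weights can also be fetched ahead of time with huggingface_hub; a minimal sketch using the repo ID above:

# Optional: pre-download the model weights from Hugging Face.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="Hoar012/RAP-LLaVA-13b")
print(f"Weights saved to {local_dir}")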

Demo

Build Your Personal Database:

Each concept record in the database follows this format:

{
    "concept_dict": {
        "<concept>": {
            "name": "concept_name",
            "image": "image_path",
            "info": "",
            "category": ""
        }
    },
    "path_to_concept": {
        "image_path": "<concept>",
    }
}

We provide an example of the database in example_database.
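
For illustration, here is a minimal sketch of assembling such a database file in Python. The concept key, image path, and metadata below are placeholders, not real entries.

import json

# Placeholder concept entry; substitute your own name, image and metadata.
concept_key = "<my-dog>"
image_path = "./images/my_dog.jpg"

database = {
    "concept_dict": {
        concept_key: {
            "name": "my-dog",
            "image": image_path,
            "info": "A golden retriever named Max.",
            "category": "animal",
        }
    },
    # Reverse index mapping each image path back to its concept key.
    "path_to_concept": {
        image_path: concept_key,
    },
}

with open("database.json", "w") as f:
    json.dump(database, f, indent=4)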

CLI Demo:

python cli.py --model-path Hoar012/RAP-LLaVA-13b --image-file /path/to/test_image --retrieval --database ./example_database --topK 1

Evaluation

Prepare Data

Please download the test data used in the paper from the repositories of MyVLM and Yo'LLaVA.

Evaluation on Image Captioning

python eval/caption.py  --eval-file /path/to/eval_file --model-path Hoar012/RAP-LLaVA-13b --retrieval --database /path/to/database --topK 2

The eval-file records the image paths to be evaluated and their corresponding target concepts, formatted as follows:

{
    "/path/to/image": [
        "target_concept"
    ]
}
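
A short sketch of writing such an eval file; the image paths and concept names here are placeholders.

import json

# Placeholder mapping from each test image to its target concept(s).
eval_data = {
    "./test_images/img_001.jpg": ["my-dog"],
    "./test_images/img_002.jpg": ["my-mug"],
}

with open("caption_eval.json", "w") as f:
    json.dump(eval_data, f, indent=4)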

Evaluation on Question Answering

python eval/VQA.py --eval-file /path/to/yollava-visual-qa.json --model-path Hoar012/RAP-LLaVA-13b --retrieval --database /path/to/database --topK 1

BibTeX

@misc{hao2024rememberretrievegenerateunderstanding,
    title={Remember, Retrieve and Generate: Understanding Infinite Visual Concepts as Your Personalized Assistant},
    author={Haoran Hao and Jiaming Han and Changsheng Li and Yu-Feng Li and Xiangyu Yue},
    year={2024},
    eprint={2410.13360},
    archivePrefix={arXiv},
    primaryClass={cs.CV},
    url={https://arxiv.org/abs/2410.13360}
}

Acknowledgement

LLaVA, MyVLM, Yo'LLaVA
