CXR-LLaVA for chest X-ray report generation. This is the code repository for the MBZUAI 23Fall AI701 project of Group 7 (members: Jinhong Wang, Yongxin Wang, Haokun Lin).

CXR-LLaVA: Chest X-Ray Large Language and Vision Assistant

🥰 This project is based on the codebase of LLaVA by Haotian Liu et al. Many thanks to them! As CXR-LLaVA has not yet been released as a paper, please cite their work if you are building further on CXR-LLaVA.

🤗 We have set up an Online Demo on Huggingface. Try it out!

Install Dependencies

  1. Clone this repository and navigate to the CXR-LLaVA folder
git clone https://github.com/TommyIX/CXR-LLaVA.git
cd CXR-LLaVA
  2. Install the package
conda create -n llava python=3.10 -y
conda activate llava
pip install --upgrade pip  # enable PEP 660 support
pip install -e .
pip install open-clip-torch
  3. If you are going to train the model, you also need to run:
pip install -e ".[train]"
pip install flash-attn --no-build-isolation
  4. If you want to rerun the evaluation, install two required libraries:
pip install pycocotools pycocoevalcap
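
After the editable install, a quick sanity check that the package is importable (this assumes the package keeps the upstream name llava):

python -c "import llava; print('llava import OK')"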

Get the model weights

If you intend to run the evaluation, you can download the pretrained weights from Huggingface: CXR-LLaVA-7b.

As we use meta-llama/Llama-2-7b as the base model, you need to follow its instructions to obtain the weights if you want to train CXR-LLaVA from scratch. We use openai/clip-vit-large-patch14 or microsoft/BiomedCLIP-PubMedBERT_256-vit_base_patch16 as the vision tower; download the corresponding weights and point the directory in each training script to them (relative paths are accepted).
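
One way to fetch the released checkpoint locally, assuming the Hugging Face repository id matches the --model-path used below (TommyIX/CXR-LLaVA-7b) and that git-lfs is installed:

# Clone the released CXR-LLaVA-7b checkpoint from Hugging Face (requires git-lfs)
git lfs install
git clone https://huggingface.co/TommyIX/CXR-LLaVA-7b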

Inference on CLI

CXR-LLaVA supports LLaVA's inference pipeline. 16-bit inference can be run on a single RTX 4090. You can refer to LLaVA's documentation for more details.

python -m llava.serve.cli \
    --model-path TommyIX/CXR-LLaVA-7b \
    --image-file "path/to/your/image.jpg"
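
If GPU memory is tight, the upstream LLaVA CLI also accepts quantized-loading flags; a sketch assuming this fork keeps the same arguments as upstream llava/serve/cli.py:

# Load the model in 4-bit to reduce memory usage (flag inherited from upstream LLaVA)
python -m llava.serve.cli \
    --model-path TommyIX/CXR-LLaVA-7b \
    --image-file "path/to/your/image.jpg" \
    --load-4bit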

Train

The pretraining (alignment) and finetuning stages use the same set of data. The tuning instruction JSON file can be downloaded using this link.

Please follow the Visual Instruction Tuning part of LLaVA to download the required natural-image data into /data/images, and save the Open-I data to /data/openi-images. The data organization should look like:

images
├── coco
│   └── train2017
├── gqa
│   └── images
├── ocr_vqa
│   └── images
├── textvqa
│   └── train_images
├── vg
│   ├── VG_100K
│   └── VG_100K_2
├── p11  // These are the MIMIC-CXR data folders
├── p12
├── ...
├── p19
├── xxx1.png  // These are all Open-I raw images.
├── xxx2.png
└── ...
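
One possible way to assemble this layout without copying the raw files is to symlink the MIMIC-CXR patient folders and Open-I images into the images directory; the source paths below (/data/mimic-cxr/files and /data/openi-images) are assumptions, so adjust them to wherever you stored the downloads:

# Link MIMIC-CXR patient folders (p11 ... p19) into the images directory
for p in /data/mimic-cxr/files/p1*; do
    ln -s "$p" /data/images/
done
# Link the raw Open-I PNGs alongside them
ln -s /data/openi-images/*.png /data/images/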

You can download the Open-I data from its official website. For the MIMIC-CXR data, please apply for access on PhysioNet.

You can use scripts/pretrain7b.sh and scripts/finetune7b.sh to conduct stage 1 and stage 2 training.
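
For example, after pointing the data and model paths inside the scripts to your local setup, the two stages can be launched from the repository root:

bash scripts/pretrain7b.sh    # stage 1: alignment pretraining
bash scripts/finetune7b.sh    # stage 2: instruction finetuning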

Evaluation

We provide eval_caption.py, mimic_caption.py, and openi_caption.py in the eval folder, covering all the experiments conducted in the report. Specify the model directory and run them to reproduce the results.
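
A hypothetical invocation for illustration only; the flag name is an assumption, so check each script's argument parser for the actual interface:

# --model-path is hypothetical; see the script's argparse for the real flag
python eval/mimic_caption.py --model-path /path/to/CXR-LLaVA-7b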

Appendices

You are welcome to open an issue or send me an email if you encounter any problems using MIMIC-CXR.
