This is the official implementation of Multi-Modal Masked Autoencoders for Medical Vision-and-Language Pre-Training (M3AE), published at MICCAI 2022.
Run the following command to install the required packages:
pip install -r requirements.txt
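As a quick sanity check of the environment, you can confirm that PyTorch is importable and sees your GPU. This is a minimal sketch, not part of the repository; it only assumes that requirements.txt installs torch:
# check_env.py (hypothetical helper, not included in the repository)
import torch

print("PyTorch version:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())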
You can download the models we pre-trained and fine-tuned on the corresponding datasets from here.
Please organize the pre-training datasets into the following structure:
root:[data]
+--pretrain_data
| +--roco
| | +--val
| | +--test
| | +--train
| +--medicat
| | +--release
| | +--net
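Before pre-processing, it can help to confirm that the expected directories are in place. This is a minimal sketch (the helper name and the data root are assumptions based on the tree above):
# check_pretrain_dirs.py (hypothetical helper, not included in the repository)
from pathlib import Path

root = Path("data/pretrain_data")
for rel in ["roco/train", "roco/val", "roco/test", "medicat/release", "medicat/net"]:
    path = root / rel
    print(f"{path}: {'ok' if path.exists() else 'MISSING'}")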
Run the following command to pre-process the data:
python prepro/prepro_pretraining_data.py
This will produce the following arrow files:
root:[data]
+--pretrain_arrows
| +--medicat_train.arrow
| +--medicat_val.arrow
| +--medicat_test.arrow
| +--roco_train.arrow
| +--roco_val.arrow
| +--roco_test.arrow
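If you want to inspect the pre-processing output, the arrow files can be read back with pyarrow. This is a minimal sketch assuming a ViLT-style Arrow IPC layout; the file path is taken from the tree above:
# inspect_arrow.py (hypothetical helper, not included in the repository)
import pyarrow as pa

path = "data/pretrain_arrows/roco_train.arrow"
table = pa.ipc.RecordBatchFileReader(pa.memory_map(path, "r")).read_all()
print("rows:", table.num_rows)
print("columns:", table.column_names)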
Now you can start pre-training the M3AE model:
bash run_scripts/pretrain_m3ae.sh
Please organize the fine-tuning datasets into the following structure:
root:[data]
+--finetune_data
| +--melinda
| | +--train.csv
| | +--dev.csv
| | +--test.csv
| | +--melinda_images
| +--slack
| | +--train.json
| | +--validate.json
| | +--test.json
| | +--imgs
| +--vqa_rad
| | +--trainset.json
| | +--valset.json
| | +--testset.json
| | +--images
| +--medvqa_2019
| | +--val
| | +--test
| | +--train
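Before running the pre-processing script, you may want to confirm that the annotation files parse. This is a minimal sketch; only the file path comes from the tree above, and it assumes the VQA-RAD annotations are a JSON array as distributed by the original dataset:
# peek_vqa_rad.py (hypothetical helper, not included in the repository)
import json

with open("data/finetune_data/vqa_rad/trainset.json") as f:
    records = json.load(f)  # assumed to be a list of question entries

print("number of records:", len(records))
print("first record:", records[0])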
Run the following command to pre-process the data:
python prepro/prepro_finetuning_data.py
This will produce the following arrow files:
root:[data]
+--finetune_arrows
| +--vqa_vqa_rad_train.arrow
| +--vqa_vqa_rad_val.arrow
| +--vqa_vqa_rad_test.arrow
| +--vqa_slack_train.arrow
| +--vqa_slack_test.arrow
| +--vqa_slack_val.arrow
| +--vqa_medvqa_2019_train.arrow
| +--vqa_medvqa_2019_val.arrow
| +--vqa_medvqa_2019_test.arrow
| +--cls_melinda_train.arrow
| +--cls_melinda_val.arrow
| +--cls_melinda_test.arrow
| +--irtr_roco_train.arrow
| +--irtr_roco_val.arrow
| +--irtr_roco_test.arrow
Now you can start fine-tuning the M3AE model:
bash run_scripts/finetune_m3ae.sh
You can also test our fine-tuned models directly:
bash run_scripts/test_m3ae.sh
NOTE: Running these tests is a good way to check that your environment is set up in the same way as ours: if it is, you should be able to reproduce the same results.
The code is based on ViLT, METER, and MAE. We thank the authors for open-sourcing their code and encourage users to cite their works when applicable.
If M3AE is useful for your research, please consider citing:
@inproceedings{chen2022m3ae,
  title={Multi-Modal Masked Autoencoders for Medical Vision-and-Language Pre-Training},
  author={Chen, Zhihong and Du, Yuhao and Hu, Jinpeng and Liu, Yang and Li, Guanbin and Wan, Xiang and Chang, Tsung-Hui},
  booktitle={International Conference on Medical Image Computing and Computer-Assisted Intervention},
  year={2022},
  organization={Springer}
}