This is the implementation for the paper:
Modality-aware Mutual Learning for Multi-modal Medical Image Segmentation
Early Accepted to MICCAI 2021
Data Preparation

- Download the data from the MICCAI 2018 BraTS Challenge.
- Convert the file names by

  python dataset_conversion/Task032_BraTS_2018.py

- Preprocess the data by

  python experiment_planning/nnUNet_plan_and_preprocess.py -t 32 --verify_dataset_integrity
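The conversion step also makes the BraTS labels consecutive: BraTS annotates {0, 1, 2, 4}, while nnUNet expects {0, 1, 2, 3}, so label 4 (enhancing tumor) is mapped to 3. A minimal sketch of that remapping (the function name `remap_brats_labels` is illustrative, not from the repo):

```python
import numpy as np

def remap_brats_labels(seg: np.ndarray) -> np.ndarray:
    """Map BraTS labels {0, 1, 2, 4} to consecutive labels {0, 1, 2, 3}."""
    out = seg.copy()
    out[seg == 4] = 3  # enhancing tumor: 4 -> 3
    return out

seg = np.array([0, 1, 2, 4, 4, 0])
print(remap_brats_labels(seg))  # -> [0 1 2 3 3 0]
```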
Train

- Train the model by

  python run/run_training.py 3d_fullres MAMLTrainerV2 32 0
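The trailing `0` selects cross-validation fold 0; nnUNet's default five-fold cross validation repeats the same command for folds 0 through 4. A dry-run sketch that only prints the per-fold commands (assumption: all other arguments stay the same across folds):

```python
# Dry run: build and print one training command per cross-validation fold.
cmds = [
    f"python run/run_training.py 3d_fullres MAMLTrainerV2 32 {fold}"
    for fold in range(5)  # nnUNet's default 5-fold cross validation
]
for cmd in cmds:
    print(cmd)
```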
Test

- Run inference on the test data by

  python inference/predict_simple.py -i INPUT_PATH -o OUTPUT_PATH -t 32 -f 0 -tr MAMLTrainerV2
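For inference, nnUNet expects one NIfTI file per modality in INPUT_PATH, suffixed `_0000` through `_0003` for the four BraTS channels. A small sketch of the expected layout (the case identifier and the T1/T1ce/T2/FLAIR channel order are assumptions; verify them against the `dataset.json` produced by the conversion step):

```python
# Sketch of nnUNet's per-modality input naming for one test case.
# Case id and modality order are assumptions, not taken from the repo.
modalities = ["T1", "T1ce", "T2", "FLAIR"]
case_id = "BraTS2018_case"  # hypothetical identifier
files = [f"{case_id}_{i:04d}.nii.gz" for i in range(len(modalities))]
print(files[0])  # -> BraTS2018_case_0000.nii.gz
```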
MAML is integrated with the out-of-the-box nnUNet. Please refer to it for more usage.
If you find this code and paper useful for your research, please cite our paper:
@inproceedings{zhang2021modality,
title={Modality-Aware Mutual Learning for Multi-modal Medical Image Segmentation},
author={Zhang, Yao and Yang, Jiawei and Tian, Jiang and Shi, Zhongchao and Zhong, Cheng and Zhang, Yang and He, Zhiqiang},
booktitle={International Conference on Medical Image Computing and Computer-Assisted Intervention},
pages={589--599},
year={2021},
organization={Springer}
}