Python 3.8, PyTorch 1.7.1, CUDA 11.0.
# Compile DCNv2:
cd $ROOT/models/modules/DCNv2
sh make.sh
For more implementation details about DCN, please see [DCNv2].
The IXI and BraTS2018 datasets can be downloaded at:
[IXI dataset] and [BraTS2018 dataset].
(1) The original data are .nii volumes. Split your data into training, validation, and test sets;
(2) Read the .nii volumes and save their slices as .png images into two separate folders:
python data/read_nii_to_img.py
[T1 folder:]
000001.png, 000002.png, 000003.png, ...
[T2 folder:]
000001.png, 000002.png, 000003.png, ...
# Note that the images in the T1 and T2 folders correspond one-to-one. The undersampled target images are generated automatically during the training phase.
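Step (2) above can be sketched as follows. This is a minimal, hedged illustration of slicing a 3D volume into per-slice 8-bit images, not the repo's `data/read_nii_to_img.py`; the nibabel loading line is shown only as a comment, and a random array stands in for a real volume.

```python
import numpy as np

def volume_to_uint8_slices(volume):
    """Normalize a 3D volume to [0, 255] and return one 2D uint8 array per axial slice."""
    vol = volume.astype(np.float64)
    vol = (vol - vol.min()) / max(vol.max() - vol.min(), 1e-8)
    return [(vol[:, :, i] * 255).round().astype(np.uint8) for i in range(vol.shape[2])]

# Loading with nibabel would look like (assumption, not executed here):
#   import nibabel as nib
#   volume = nib.load("subject_T1.nii").get_fdata()
# Each returned slice can then be written as 000001.png, 000002.png, ... (e.g. with Pillow),
# keeping T1 and T2 slice indices aligned so the folders correspond one-to-one.

volume = np.random.rand(64, 64, 10)   # stand-in for a loaded .nii volume
slices = volume_to_uint8_slices(volume)
print(len(slices), slices[0].shape, slices[0].dtype)
```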
The original fastMRI dataset can be downloaded at: [fastMRI dataset].
(1) For the paired fastMRI data (PD and FSPD), we follow the data preparation process of MINet and MTrans. For more details, please see [MINet] and [MTrans].
(2) In our code, you can prepare the fastMRI dataset using [data/fastmri_dataset.py].
Set your dataset path and training parameters in [configs/joint_optimization.yaml], then run
sh train_joint.sh
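The undersampled inputs mentioned in the data note are generated on the fly during training. A minimal sketch of the standard retrospective undersampling step (mask the 2D k-space of a fully-sampled image, then take a zero-filled inverse FFT) is below; the 1D Cartesian mask pattern here is illustrative, not the repo's learned mask.

```python
import numpy as np

def undersample(image, mask):
    """Apply a binary k-space mask and return the zero-filled reconstruction (magnitude)."""
    kspace = np.fft.fftshift(np.fft.fft2(image))
    return np.abs(np.fft.ifft2(np.fft.ifftshift(kspace * mask)))

h = w = 64
image = np.random.rand(h, w)
mask = np.zeros((h, w))
mask[:, w // 2 - 8 : w // 2 + 8] = 1.0   # keep low-frequency columns (after fftshift)
mask[:, ::4] = 1.0                        # plus every 4th column elsewhere

zero_filled = undersample(image, mask)
print(zero_filled.shape)
```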
CUDA_VISIBLE_DEVICES=0 python test_loupe_mask.py
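A LOUPE-style model learns a per-location sampling probability map, and extracting a fixed binary mask from it typically means keeping the highest-probability locations at the target acceleration. Whether `test_loupe_mask.py` uses exactly this top-k rule is an assumption; the sketch below shows that standard choice.

```python
import numpy as np

def binarize_mask(prob, acceleration=4):
    """Keep the top 1/acceleration fraction of k-space locations by learned probability."""
    n_keep = prob.size // acceleration
    flat = prob.ravel()
    idx = np.argpartition(flat, -n_keep)[-n_keep:]  # indices of the n_keep largest probs
    mask = np.zeros_like(flat)
    mask[idx] = 1.0
    return mask.reshape(prob.shape)

prob = np.random.rand(64, 64)   # stand-in for a learned probability map
mask = binarize_mask(prob, acceleration=4)
print(int(mask.sum()), mask.size)
```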
Set your dataset path and training parameters in [configs/only_reconstruction.yaml], set the path of your learned mask in the dataset file, and then run
sh train_rec.sh
Set your dataset path, mask path, and training parameters in [configs/only_reconstruction.yaml] and the dataset file, then run
sh train_rec.sh
Modify the test configurations in the Python file [test_PSNR.py], then run:
CUDA_VISIBLE_DEVICES=0 python test_PSNR.py
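For reference, PSNR on images normalized to [0, 1] follows the usual definition below; the exact normalization and data range used in [test_PSNR.py] are an assumption here.

```python
import numpy as np

def psnr(ref, rec, data_range=1.0):
    """Peak signal-to-noise ratio in dB between a reference and a reconstruction."""
    mse = np.mean((ref.astype(np.float64) - rec.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(data_range ** 2 / mse)

ref = np.zeros((8, 8))
rec = np.full((8, 8), 0.1)
print(round(psnr(ref, rec), 2))  # MSE = 0.01 -> 10 * log10(1 / 0.01) = 20.0
```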
Our code is built on LOUPE and BasicSR; thanks to the authors for releasing their code!