MCDDPM: Multichannel Conditional Denoising Diffusion Model for Unsupervised Anomaly Detection in Brain MRI
This repository contains the code implementation for the paper "MCDDPM: Multichannel Conditional Denoising Diffusion Model for Unsupervised Anomaly Detection in Brain MRI" by Vivek Kumar Trivedi, Bheeshm Sharma and P. Balamurugan, accepted at CISP-BMEI 2024.
- Introduction
- Environment Set-up
- DataSets
- Data-Preprocessing
- Running MCDDPM
- Results
- Citation
- Acknowledgements
Detecting anomalies in brain MRI scans using supervised deep learning methods presents challenges due to anatomical diversity and the labor-intensive requirement of pixel-level annotations. Generative models like Denoising Diffusion Probabilistic Models (DDPMs) and their variants such as Patch-based DDPMs (pDDPMs), Masked DDPMs (mDDPMs), and Conditional DDPMs (cDDPMs) have emerged as powerful alternatives for unsupervised anomaly detection in brain MRI scans.
In this work, we propose an improved DDPM variant called the Multichannel Conditional Denoising Diffusion Probabilistic Model (MCDDPM) for unsupervised anomaly detection in brain MRI scans. The proposed model achieves high fidelity by exploiting additional information from healthy images during training, enriching the representation power of DDPM models while keeping computational cost and memory requirements on par with the DDPM, pDDPM and mDDPM models. Experimental results on the BraTS20 and BraTS21 datasets demonstrate the promising performance of the proposed method.
To set up the environment, use the following installation instructions.
- Clone the repository:
git clone https://github.com/vivekkumartri/MCDDPM.git
- Update the environment configuration: after cloning, update the path in the pc_environment.env file located in the repository to match your local setup.
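For illustration, a minimal sketch of what pc_environment.env might contain is shown below; the variable names are hypothetical placeholders, so keep the keys that already exist in the file shipped with the repository and only change their values to your local paths.
# Hypothetical example of pc_environment.env contents (placeholder keys and paths;
# edit the keys actually present in the repository's file rather than adding these).
DATA_DIR=/path/to/preprocessed_data
LOG_DIR=/path/to/logs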
- Navigate to the project directory:
cd MCDDPM
- Create and activate the Conda environment:
conda env create -f environment_mcddpm.yml
conda activate mcddpm
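As an optional sanity check, you can confirm that the Conda environment was created before moving on:
# The environment name 'mcddpm' comes from environment_mcddpm.yml
conda env list | grep mcddpm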
This project utilizes the following datasets:
- IXI: Information eXtraction from Images 2020 dataset.
- Download Working Link: IXI Dataset
- BraTS20: Brain Tumor Segmentation Challenge 2020 dataset.
- Download Working Link: BraTS20 Dataset on Kaggle
- BraTS21: Brain Tumor Segmentation Challenge 2021 dataset.
- Download Working Link: BraTS21 Dataset on Kaggle
- MSLUB: The Multiple Sclerosis dataset from the University Hospital of Ljubljana.
- Download from the above link.
Before you begin processing, ensure that the downloaded ZIP files are extracted and arranged into the following directory structure:
├── IXI
│ ├── t2
│ │ ├── IXI1.nii.gz
│ │ ├── IXI2.nii.gz
│ │ └── ...
│ └── ...
├── MSLUB
│ ├── t2
│ │ ├── MSLUB1.nii.gz
│ │ ├── MSLUB2.nii.gz
│ │ └── ...
│ ├── seg
│ │ ├── MSLUB1_seg.nii.gz
│ │ ├── MSLUB2_seg.nii.gz
│ │ └── ...
│ └── ...
├── Brats21
│ ├── t2
│ │ ├── Brats1.nii.gz
│ │ ├── Brats2.nii.gz
│ │ └── ...
│ ├── seg
│ │ ├── Brats1_seg.nii.gz
│ │ ├── Brats2_seg.nii.gz
│ │ └── ...
│ └── ...
└── ...
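As an optional sanity check before preprocessing (assuming the layout above and standard Unix tools), you can confirm that the volumes were extracted where the scripts expect them:
# Count the extracted T2 volumes and preview a few segmentation files
find IXI/t2 -name "*.nii.gz" | wc -l
find Brats21/t2 -name "*.nii.gz" | wc -l
find MSLUB/seg -name "*_seg.nii.gz" | head -n 3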
The following preprocessing steps are performed on the datasets (note that only the T2 modality is used in this work):
- Skull Stripping: HD-BET is used to remove the skull from each volume.
- Affine Transformation: Volumes are aligned to match the T2 modality of the SRI24-Atlas for consistency.
- Non-Relevant Region Removal: Black, non-informative regions are removed from the images.
- Bias Field Correction: N4 bias field correction is applied to remove intensity inhomogeneities.
- Volume Resampling: For efficiency, the resolution is reduced by half, resulting in dimensions of [96 × 96 × 80] voxels.
- Slice Removal: 15 slices from both the top and bottom of the volumes are removed, parallel to the transverse plane.
To preprocess the IXI dataset, follow the steps below:
- Set up HD-BET:
# Script to automate the setup of HD-BET, a tool for brain extraction in medical images.
# Step 1: Clone the HD-BET repository
git clone https://github.com/MIC-DKFZ/HD-BET
# Step 2: Navigate into the HD-BET directory
cd HD-BET
# Step 3: Install the HD-BET package in editable mode
pip install -e .
# (Optional) Step 4: Modify the parameter directory.
# The default location for model parameters is ~/hd-bet_params.
# To change this, edit HD_BET/paths.py and adjust the `folder_with_parameter_files` variable.
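To check that HD-BET was installed correctly, you can optionally skull-strip a single volume by hand; the flags below follow the HD-BET README and may vary between HD-BET versions (the preprocessing scripts in this repository invoke HD-BET for you, so this step is not required):
# Skull-strip one T2 volume (file names are placeholders)
hd-bet -i IXI1.nii.gz -o IXI1_brain.nii.gz
# CPU-only variant with faster, less accurate settings
hd-bet -i IXI1.nii.gz -o IXI1_brain.nii.gz -device cpu -mode fast -tta 0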
- For the IXI dataset, run:
bash prepare_IXI.sh <input_dir> <output_dir>
- <input_dir>: Path to the directory where the dataset is stored, organized as described above.
- <output_dir>: Path where you want to store the preprocessed data.
Ensure that you replace <input_dir> and <output_dir> with the actual paths relevant to your setup.
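For example, with hypothetical local paths (replace them with your own):
bash prepare_IXI.sh /data/brain_mri/raw /data/brain_mri/preprocessed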
Please refer to the preprocessing/ directory in this repository for the preprocessing scripts of the other datasets: use prepare_Brats20.sh, prepare_Brats21.sh and prepare_MSLUB.sh for the BraTS20, BraTS21 and MSLUB datasets respectively.
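Assuming these scripts take the same <input_dir> <output_dir> arguments as prepare_IXI.sh, the corresponding commands would look like:
bash prepare_Brats20.sh <input_dir> <output_dir>
bash prepare_Brats21.sh <input_dir> <output_dir>
bash prepare_MSLUB.sh <input_dir> <output_dir>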
The table below provides information about the datasets used in this project:
For more details on the preprocessing of each dataset, refer to the respective dataset documentation and the preprocessing/ directory in this repository.
- Complete the environment setup: ensure you have followed the Environment Set-up instructions above to configure your environment properly.
- Train and run inference using MCDDPM: execute the following command to train the proposed MCDDPM model and perform inference with it:
python run.py experiment=/experiment/CISP_BMEI_MCDDPM/MCDDPM
- Comparative and ablation studies: for comparative and ablation study experiments, refer to the config/ directory in this repository for additional configurations and scripts. For example, to run the MCDDPM ablation without conditioning:
python run.py experiment=/experiment/CISP_BMEI_MCDDPM/MCDDPM_without_Condition
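Other experiment configurations under config/ can be launched in the same way by pointing experiment= at the corresponding file; the config name below is a placeholder, and this assumes the additional configs live alongside the MCDDPM ones:
python run.py experiment=/experiment/CISP_BMEI_MCDDPM/<config_name>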
We present a few qualitative and quantitative comparisons below.
If you use this code in your research, please cite our paper:
We thank Technocraft Centre for Applied Artificial Intelligence (TCA2I), IIT Bombay, for their generous funding support towards this project.
This project draws inspiration from, and is developed based on, the pddpm-uad repository.