Cosmin Bercea • Benedikt Wiestler • Daniel Rueckert • Julia A. Schnabel
If you find our work helpful, please cite our paper:
@inproceedings{bercea2023generalizing,
title={Generalizing Unsupervised Anomaly Detection: Towards Unbiased Pathology Screening},
author={Cosmin I. Bercea and Benedikt Wiestler and Daniel Rueckert and Julia A Schnabel},
booktitle={Medical Imaging with Deep Learning},
year={2023},
url={https://openreview.net/forum?id=8ojx-Ld3yjR}
}
Abstract: The main benefit of unsupervised anomaly detection is the ability to identify arbitrary instances of pathologies even in the absence of training labels or sufficient examples of the rare class(es). Even though much work has been done on using auto-encoders (AE) for anomaly detection, there are still two critical challenges to overcome: First, learning compact and detailed representations of the healthy distribution is cumbersome. Second, the majority of unsupervised algorithms are tailored to detect hyperintense lesions on FLAIR brain MR scans. We found that even state-of-the-art (SOTA) AEs fail to detect several classes of non-hyperintense anomalies on T1w brain MRIs, such as brain atrophy, edema, or resections. In this work, we propose reversed AEs (RA) to generate pseudo-healthy reconstructions and localize various brain pathologies. Our method outperformed SOTA methods on T1w brain MRIs, detecting more global anomalies (AUROC increased from 73.1 to 89.4) and local pathologies (detection rate increased from 52.6% to 86.0%).
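The core idea of pseudo-healthy reconstruction can be sketched in a few lines: subtract the model's pseudo-healthy output from the input and treat large residuals as candidate anomalies. The snippet below is only an illustrative sketch with toy data (the function names and the perfect reconstruction are ours, not the paper's implementation):

```python
import numpy as np

def anomaly_map(image, pseudo_healthy):
    """Pixel-wise residual between the input and its pseudo-healthy
    reconstruction; large values flag candidate anomalies."""
    return np.abs(image.astype(np.float32) - pseudo_healthy.astype(np.float32))

def anomaly_score(residual):
    """A simple image-level score: the mean residual intensity."""
    return float(residual.mean())

# Toy example: a 'healthy' image and a copy with a bright lesion patch.
healthy = np.zeros((64, 64), dtype=np.float32)
diseased = healthy.copy()
diseased[20:28, 20:28] = 1.0  # synthetic hyperintense lesion

# Pretend the model reconstructs the pseudo-healthy version perfectly.
residual = anomaly_map(diseased, healthy)
print(anomaly_score(residual) > anomaly_score(anomaly_map(healthy, healthy)))  # True
```

In practice the pseudo-healthy image comes from the trained reversed auto-encoder, and the residual map is used to localize pathologies.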
The code is based on the deep learning framework from the Institute of Machine Learning in Biomedical Imaging: https://github.com/compai-lab/iml-dl
1). Set up wandb (https://docs.wandb.ai/quickstart)
Sign up for a free account and log in to your wandb account.
wandb login
Paste the API key from https://wandb.ai/authorize when prompted.
2). Clone this repository
git clone https://github.com/ci-ber/RA.git
cd RA
3). Create a virtual environment with the needed packages (use conda_environment-osx.yaml for macOS)
cd ${TARGET_DIR}/RA
conda env create -f ra_environment.yaml
conda activate ra_env *or* source activate ra_env
Example installation:
- with CUDA:
pip3 install torch==1.9.1+cu111 torchvision==0.10.1+cu111 -f https://download.pytorch.org/whl/torch_stable.html
- without CUDA:
pip3 install torch==1.9.1 torchvision==0.10.1 -f https://download.pytorch.org/whl/torch_stable.html
Alternatively, you can use your own mid-axial T1w brain slices with our pre-trained weights, or train from scratch on other anatomies and modalities.
Move the datasets to the expected paths (listed in the CSV files under data/splits).
Extract the middle axial slice of each volume and save it as a PNG image.
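The slice-extraction step above can be sketched as follows. The code uses a random NumPy array as a stand-in volume (in practice you would load a NIfTI scan, e.g. with nibabel) and the helper names and normalization are our assumptions, not the repository's preprocessing code:

```python
import numpy as np

def middle_axial_slice(volume):
    """Return the middle slice along the last (axial) axis of a 3-D volume."""
    return volume[..., volume.shape[-1] // 2]

def to_uint8(slice2d):
    """Min-max normalize a slice to [0, 255] so it can be saved as a PNG."""
    lo, hi = float(slice2d.min()), float(slice2d.max())
    scaled = (slice2d - lo) / (hi - lo + 1e-8)
    return (scaled * 255).astype(np.uint8)

# Stand-in for a T1w volume; with real data you would do something like:
#   volume = nibabel.load("scan.nii.gz").get_fdata()
volume = np.random.rand(192, 192, 160)
img = to_uint8(middle_axial_slice(volume))
# Saving with Pillow (assumed available): Image.fromarray(img).save("slice.png")
print(img.shape, img.dtype)  # (192, 192) uint8
```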
[Optional] Set the config 'task' to test and load the model from • here •
python core/Main.py --config_path projects/RA/configs/fast_mri/ra.yaml
Refer to *.yaml files for experiment configurations.
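For orientation, a config fragment might look like the sketch below. Only the 'task' key is mentioned above; the other key and the exact schema come from the iml-dl framework, so treat this purely as an illustration and consult projects/RA/configs/fast_mri/ra.yaml for the real structure:

```yaml
# Illustrative fragment only -- see projects/RA/configs/fast_mri/ra.yaml
# for the actual schema defined by the iml-dl framework.
name: RA_T1w   # experiment name (assumed key)
task: test     # set to 'test' to run inference with pre-trained weights
```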