Lung Swapping Autoencoder: Learning a Disentangled Structure-Texture Representation of Chest Radiographs

This is the PyTorch implementation of the Lung Swapping Autoencoder (LSAE), published at MICCAI 2021. The extended version can be found at Lung Swapping Autoencoder: Learning a Disentangled Structure-Texture Representation of Chest Radiographs.

Preparation

CXRs and Masks of ChestX-ray14

You can download our pre-processed data through the following links. Please remember to modify the data paths in the commands below accordingly.

Data Splits of ChestX-ray14

We split ChestX-ray14 following the official website's split. To simplify annotation input, we generate a train list and a test list. Each line consists of the image name followed by the corresponding labels, like below:

00000001_002.png 0 1 1 0 0 0 0 0 0 0 0 0 0 0

If the image is positive for a class, the corresponding bit is 1; otherwise it is 0. The class index follows this order.
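For reference, each such line can be parsed into an image name and a 14-dimensional multi-hot label vector (a minimal sketch; the variable names are ours, not from the repo):

```python
# Parse one annotation line from the train/test list: the image name
# followed by 14 binary class labels (multi-hot, one bit per disease).
line = "00000001_002.png 0 1 1 0 0 0 0 0 0 0 0 0 0 0"

name, *bits = line.split()
labels = [int(b) for b in bits]

# Positive classes are the indices whose bit is 1.
positives = [i for i, b in enumerate(labels) if b == 1]
```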

Unsupervised Lung Swapping Pre-training

The command is as follows. Please fill in the bracketed placeholders with your own paths.

```shell
CUDA_VISIBLE_DEVICES=0,1 python -m torch.distributed.launch --nproc_per_node=2 \
  --master_port=8898 train_lsae.py \
  --size 256 \
  --batch 8 \
  --lr 0.001 \
  --trlist data/trainval_list.txt \
  --tslist data/test_list.txt \
  --wandb \
  --proj_name lsae \
  [CXR_PATH] [CXR_Mask_PATH]
```

We provide the trained LSAE checkpoint, which is used to perform the downstream tasks.

Qualitative Results of Lung Swapping

Full Labeled Data Finetuning on ChestX-ray14

The command is as follows. Please fill in the bracketed placeholders with your own paths. Before running, download pretrained_lsae.pt and put it in the saved_ckpts directory.

```shell
CUDA_VISIBLE_DEVICES=0 python train_texencoder_cxr14.py \
  --path [CXR_PATH] \
  --batch 96 \
  --iter 35000 \
  --lr 0.01 \
  --lr_steps 26000 30000 \
  --trlist data/trainval_list.txt \
  --tslist data/test_list.txt \
  --enc_ckpt saved_ckpts/pretrained_lsae.pt \
  --wandb
```

The texture encoder in LSAE achieves 79.2% mAUC on ChestX-ray14. The quantitative comparison with Inception v3 and DenseNet 121 is shown in the following table, together with all the model weights.

| Model | Init Weights | Params (M) | mAUC (%) |
| --- | --- | --- | --- |
| DenseNet 121 [ckpt] | ImageNet pre-trained | 7 | 78.7 |
| Inception v3 [ckpt] | ImageNet pre-trained | 22 | 79.6 |
| Texture encoder in LSAE [ckpt] | LSAE pre-trained | 5 | 79.2 |
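mAUC here is the AUROC averaged over the 14 disease classes; a minimal sketch of the metric, written out by hand via the rank (Mann-Whitney) formulation rather than a library call (function names are ours):

```python
def roc_auc(y_true, y_score):
    """AUROC via the Mann-Whitney U statistic: the probability that a
    random positive is scored above a random negative (ties count 0.5)."""
    pos = [s for s, y in zip(y_score, y_true) if y == 1]
    neg = [s for s, y in zip(y_score, y_true) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def mean_auc(labels, scores):
    """Average the per-class AUROC; labels/scores are per-class lists."""
    aucs = [roc_auc(y, s) for y, s in zip(labels, scores)]
    return sum(aucs) / len(aucs)
```

In practice one would use `sklearn.metrics.roc_auc_score` per class; the hand-rolled version just makes the definition explicit.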

Cite

@inproceedings{zhou2021chest,
  title={Chest Radiograph Disentanglement for COVID-19 Outcome Prediction},
  author={Zhou, Lei and Bae, Joseph and Liu, Huidong and Singh, Gagandeep and Green, Jeremy and Samaras, Dimitris and Prasanna, Prateek},
  booktitle={International Conference on Medical Image Computing and Computer-Assisted Intervention},
  pages={345--355},
  year={2021},
  organization={Springer}
}

Acknowledgement

Our code is heavily based on the following open-source repositories. We appreciate their generous releases.