DrasCLR: A Self-supervised Framework of Learning Disease-related and Anatomy-specific Representation for 3D Lung CT Images

Official PyTorch implementation for the paper DrasCLR: A Self-supervised Framework of Learning Disease-related and Anatomy-specific Representation for 3D Lung CT Images, accepted by Medical Image Analysis.

Abstract

Large-scale volumetric medical images with annotation are rare, costly, and time-prohibitive to acquire. Self-supervised learning (SSL) offers a promising pre-training and feature extraction solution for many downstream tasks, as it only uses unlabeled data. Recently, SSL methods based on instance discrimination have gained popularity in the medical imaging domain. However, SSL pre-trained encoders may use many clues in the image that are not necessarily disease-related to discriminate an instance. Moreover, pathological patterns are often subtle and heterogeneous, requiring the desired method to represent anatomy-specific features that are sensitive to abnormal changes in different body parts. In this work, we present a novel SSL framework, named DrasCLR, for 3D lung CT images to overcome these challenges. We propose two domain-specific contrastive learning strategies: one aims to capture subtle disease patterns inside a local anatomical region, and the other aims to represent severe disease patterns that span larger regions. We formulate the encoder using a conditional hyper-parameterized network, in which the parameters are dependent on the anatomical location, to extract anatomically sensitive features. Extensive experiments on large-scale datasets of lung CT scans show that our method improves performance on various downstream prediction and segmentation tasks.
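For intuition only, the sketch below illustrates a location-conditioned ("hyper-parameterized") 3D convolution in PyTorch: a small hypernetwork maps an anatomical coordinate to the convolution weights, so the same input patch yields location-dependent features. This is not the authors' implementation; the module names, shapes, and coordinate encoding are all assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class LocationConditionedConv3d(nn.Module):
    # Hypothetical layer: a hypernetwork generates 3D conv weights from a location code.
    def __init__(self, in_ch=1, out_ch=16, k=3, loc_dim=3, hidden=64):
        super().__init__()
        self.in_ch, self.out_ch, self.k = in_ch, out_ch, k
        n_weights = out_ch * in_ch * k ** 3
        self.hyper = nn.Sequential(
            nn.Linear(loc_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, n_weights + out_ch),  # conv weights + biases
        )

    def forward(self, patch, loc):
        # patch: (1, in_ch, D, H, W) CT crop; loc: (loc_dim,) normalized patch location
        params = self.hyper(loc)
        w = params[: self.out_ch * self.in_ch * self.k ** 3].view(
            self.out_ch, self.in_ch, self.k, self.k, self.k)
        b = params[-self.out_ch:]
        return F.conv3d(patch, w, b, padding=self.k // 2)

layer = LocationConditionedConv3d()
x = torch.randn(1, 1, 32, 32, 32)       # a 3D patch from a CT scan
loc = torch.tensor([0.2, 0.5, 0.8])     # normalized (x, y, z) anatomical coordinate
print(layer(x, loc).shape)              # torch.Size([1, 16, 32, 32, 32])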

@article{drasclr2023,
    title = {DrasCLR: A Self-supervised framework of learning disease-related and anatomy-specific representation for 3D lung CT images},
    journal = {Medical Image Analysis},
    pages = {103062},
    year = {2023},
    issn = {1361-8415},
    doi = {https://doi.org/10.1016/j.media.2023.103062},
    url = {https://www.sciencedirect.com/science/article/pii/S1361841523003225},
    author = {Ke Yu and Li Sun and Junxiang Chen and Max Reynolds and Tigmanshu Chaudhary and Kayhan Batmanghelich},
    keywords = {Self-supervised learning, Contrastive learning, Label-efficient learning, 3D Medical imaging data}
    }

Requirements

Preprocess Data

Please follow the instructions here

Training

sh run_train.sh

The hyperparameter settings can be found in run_train.sh; we run the pre-training on 4 GPUs.
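As a rough illustration of the kind of contrastive objective the pre-training optimizes, the sketch below combines a MoCo-style InfoNCE loss on two augmentations of the same patch with a second term that uses a spatially neighboring patch as a positive, in the spirit of the two strategies described above. This is not the code invoked by run_train.sh; the encoder, momentum update, and queue maintenance are omitted, and all names are illustrative.

import torch
import torch.nn.functional as F

def info_nce(q, k_pos, queue, temperature=0.2):
    # q, k_pos: (N, C) L2-normalized embeddings; queue: (C, K) bank of negatives.
    l_pos = torch.einsum("nc,nc->n", q, k_pos).unsqueeze(-1)   # (N, 1)
    l_neg = torch.einsum("nc,ck->nk", q, queue)                # (N, K)
    logits = torch.cat([l_pos, l_neg], dim=1) / temperature
    labels = torch.zeros(q.size(0), dtype=torch.long)          # positive sits at index 0
    return F.cross_entropy(logits, labels)

# q: query patch embedding; k_same: key of the same patch under another augmentation;
# k_neighbor: key of a neighboring patch from the same scan (illustrative inputs).
N, C, K = 8, 128, 4096
q = F.normalize(torch.randn(N, C), dim=1)
k_same = F.normalize(torch.randn(N, C), dim=1)
k_neighbor = F.normalize(torch.randn(N, C), dim=1)
queue = F.normalize(torch.randn(C, K), dim=0)

loss = info_nce(q, k_same, queue) + info_nce(q, k_neighbor, queue)
print(loss.item())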

Feature extraction

python extract_feature.py \
  --exp-name='./ssl_exp/exp_neighbor_2_128_expert_8' \
  --checkpoint-patch='checkpoint_patch_0002.pth.tar'
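How the extracted features are stored is determined by extract_feature.py; assuming, purely for illustration, that they end up as per-subject NumPy arrays, a simple downstream classifier could consume them like this (the file names below are hypothetical):

import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical output files; the actual names and format come from extract_feature.py.
features = np.load("ssl_exp/exp_neighbor_2_128_expert_8/features.npy")  # (num_subjects, feature_dim), assumed
labels = np.load("ssl_exp/exp_neighbor_2_128_expert_8/labels.npy")      # (num_subjects,), assumed

clf = LogisticRegression(max_iter=1000).fit(features, labels)
print("train accuracy:", clf.score(features, labels))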

Pretrained weights

Dataset     Anatomy     Checkpoint
COPDGene    Lung        Download
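A typical way to reuse such a checkpoint is sketched below. It assumes a MoCo-style dictionary with a state_dict entry and prefixed query-encoder keys, which may differ from the released file, so treat the key handling as an assumption.

import torch

ckpt = torch.load("checkpoint_patch_0002.pth.tar", map_location="cpu")
state_dict = ckpt.get("state_dict", ckpt)

# Strip distributed-training / query-encoder prefixes if present (MoCo convention, assumed).
cleaned = {}
for k, v in state_dict.items():
    for prefix in ("module.encoder_q.", "encoder_q.", "module."):
        if k.startswith(prefix):
            k = k[len(prefix):]
            break
    cleaned[k] = v

# `model` would be the DrasCLR encoder defined in this repository:
# model.load_state_dict(cleaned, strict=False)
print(f"{len(cleaned)} tensors loaded")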

References

MoCo v2: https://github.com/facebookresearch/moco

Context-aware SSL: https://github.com/batmanlab/Context_Aware_SSL
