PyTorch implementation of the paper "BowelNet: Joint Semantic-Geometric Ensemble Learning for Bowel Segmentation from Both Partially and Fully Labeled CT Images". Email: chong.wang@adelaide.edu.au
The algorithm is a two-stage coarse-to-fine framework for the segmentation of the entire bowel (including duodenum, jejunum-ileum, colon, sigmoid, and rectum) from abdominal CT images. The first stage jointly localizes all bowel parts and is trained robustly on both partially and fully labeled samples. The second stage finely segments each localized bowel part using geometric bowel representations and hybrid pseudo labels.
(1) Joint localization of the five bowel parts using both partially- and fully-labeled images
(2) Fine segmentation of each part using geometric (i.e., boundary and skeleton) guidance; a minimal sketch of this two-stage inference flow is given below
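For orientation only, here is a minimal sketch of how such a coarse-to-fine inference could be wired up in PyTorch. The class names `CoarseLocalizationNet` and `FineSegmentationNet`, their single-convolution bodies, the ROI margin, and the `coarse_to_fine` driver are illustrative placeholders under our assumptions, not the networks released in this repository.

```python
import torch
import torch.nn as nn


class CoarseLocalizationNet(nn.Module):
    """Placeholder for the stage-1 joint localization network (background + 5 bowel parts)."""

    def __init__(self, num_classes: int = 6):
        super().__init__()
        self.conv = nn.Conv3d(1, num_classes, kernel_size=3, padding=1)

    def forward(self, x):
        return self.conv(x)  # logits of shape (B, num_classes, D, H, W)


class FineSegmentationNet(nn.Module):
    """Placeholder for a stage-2 per-part binary segmentation network."""

    def __init__(self):
        super().__init__()
        self.conv = nn.Conv3d(1, 2, kernel_size=3, padding=1)

    def forward(self, x):
        return self.conv(x)  # logits of shape (B, 2, d, h, w)


def bounding_box(mask: torch.Tensor, margin: int = 8):
    """Axis-aligned bounding box (with a safety margin) around the non-zero voxels of a mask."""
    idx = mask.nonzero(as_tuple=False)
    lo = (idx.min(dim=0).values - margin).clamp(min=0)
    hi = idx.max(dim=0).values + margin + 1
    return lo, hi


@torch.no_grad()
def coarse_to_fine(ct: torch.Tensor, coarse_net: nn.Module, fine_nets: dict) -> torch.Tensor:
    """ct: preprocessed CT volume of shape (1, 1, D, H, W); fine_nets maps bowel label -> network."""
    coarse_labels = coarse_net(ct).argmax(dim=1)[0]            # (D, H, W) coarse label map
    prediction = torch.zeros_like(coarse_labels)
    for label, fine_net in fine_nets.items():                  # label in {1, ..., 5}
        part_mask = coarse_labels == label
        if not part_mask.any():
            continue
        lo, hi = bounding_box(part_mask)                       # ROI around the localized part
        roi = ct[:, :, lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]
        fine_mask = fine_net(roi).argmax(dim=1)[0].bool()      # binary fine mask inside the ROI
        prediction[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]][fine_mask] = label
    return prediction


if __name__ == "__main__":
    volume = torch.randn(1, 1, 32, 64, 64)                     # dummy CT volume
    fine_nets = {c: FineSegmentationNet() for c in range(1, 6)}
    print(coarse_to_fine(volume, CoarseLocalizationNet(), fine_nets).shape)
```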
We use a large private abdominal CT dataset with partially and fully labeled segmentation masks. Our dataset structure is as follows:
BowelSegData
├── Fully_labeled_5C
│   ├── abdomen
│   │   ├── <patient_1>.nii.gz
│   │   ...
│   ├── male
│   │   ├── <patient_1>.nii.gz
│   │   ...
│   └── female
│       ├── <patient_1>.nii.gz
│       ...
├── Colon_Sigmoid
│   ├── abdomen
│   │   ├── <patient_1>.nii.gz
│   │   ...
│   ├── male
│   │   ├── <patient_1>.nii.gz
│   │   ...
│   └── female
│       ├── <patient_1>.nii.gz
│       ...
└── Smallbowel
    ├── abdomen
    │   ├── <patient_1>.nii.gz
    │   ...
    ├── male
    │   ├── <patient_1>.nii.gz
    │   ...
    └── female
        ├── <patient_1>.nii.gz
        ...
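A small sketch for indexing this layout could look like the following. The subset and region folder names are taken from the tree above, while the `index_dataset` helper, the dataset root path, and the returned tuple structure are illustrative assumptions.

```python
from pathlib import Path

SUBSETS = ("Fully_labeled_5C", "Colon_Sigmoid", "Smallbowel")   # sub-datasets in the tree above
REGIONS = ("abdomen", "male", "female")                          # body-region folders per subset


def index_dataset(root: str):
    """Collect (subset, region, nifti_path) tuples for every case found in the layout above."""
    samples = []
    for subset in SUBSETS:
        for region in REGIONS:
            for path in sorted((Path(root) / subset / region).glob("*.nii.gz")):
                samples.append((subset, region, path))
    return samples


if __name__ == "__main__":
    for subset, region, path in index_dataset("BowelSegData"):   # hypothetical root directory
        print(subset, region, path.name)
```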
Preprocessing includes cropping the abdominal body region. We average all 2D CT slices to form a mean image and then apply thresholding to it to obtain the abdominal body region (excluding the CT bed).
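A rough sketch of that step is shown below. The Hounsfield threshold, the largest-connected-component filtering used to discard the CT bed, and the assumption that axial slices lie along the last axis are ours, not necessarily the exact choices used to preprocess the dataset.

```python
import numpy as np
import nibabel as nib
from scipy import ndimage


def crop_abdominal_body(nifti_path: str, hu_threshold: float = -200.0):
    """Crop a CT volume to the abdominal body region found in the slice-wise mean image."""
    volume = nib.load(nifti_path).get_fdata()      # assumed (H, W, D) with axial slices on the last axis
    mean_image = volume.mean(axis=2)                # average all 2D axial slices into one mean image

    # Threshold the mean image, then keep only the largest connected component,
    # which removes the CT bed and other small bright structures (assumed heuristic).
    foreground = mean_image > hu_threshold
    labeled, num = ndimage.label(foreground)
    if num > 1:
        sizes = ndimage.sum(foreground, labeled, range(1, num + 1))
        foreground = labeled == (1 + int(np.argmax(sizes)))

    # In-plane bounding box of the body mask, applied to every slice.
    rows = np.where(foreground.any(axis=1))[0]
    cols = np.where(foreground.any(axis=0))[0]
    return volume[rows[0]:rows[-1] + 1, cols[0]:cols[-1] + 1, :]
```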
If you are interested in this work or use the software, please consider citing the paper:
@article{wang2022bowelnet,
  title={BowelNet: Joint Semantic-Geometric Ensemble Learning for Bowel Segmentation From Both Partially and Fully Labeled CT Images},
  author={Wang, Chong and Cui, Zhiming and Yang, Junwei and Han, Miaofei and Carneiro, Gustavo and Shen, Dinggang},
  journal={IEEE Transactions on Medical Imaging},
  volume={42},
  number={4},
  pages={1225--1236},
  year={2022},
  publisher={IEEE}
}