If you want more details, please visit my blog.
- docopt: pip install docopt
- imgaug: pip install imgaug
- openslide-python: either download the project from the openslide-python GitHub and run python setup.py install, or pip install openslide-python
- tifffile: pip install tifffile
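After installing the requirements, a quick sanity check is to import them. This snippet is not part of the repository; note that openslide-python also needs the native OpenSlide library installed on the system.

```python
# Sanity check: all four Python dependencies should import without errors.
import docopt
import imgaug
import openslide
import tifffile

print("imgaug", imgaug.__version__)
print("openslide-python", openslide.__version__)
print("tifffile", tifffile.__version__)
```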
openslide is used to read the WSIs.
The .svs and .ndpi formats have been tested.
If you want to use another format, add a matching line in inferManager.py (line 61).
ex) wsi_list += glob.glob(self.input_path+"/*.ndpi")
Because of the spatial size of a WSI, it is not efficient to allocate memory for checking the results at the resolution of the original WSI.
Instead, it is easier to check the thumbnails in the "output/thumbnail" folder.
You can set the rescale factor for the WSI thumbnails to any value in the interval (0, 1) with the '--rescale' argument.
If you pass a value outside this range, ex) -1, the program does not generate thumbnails.
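For reference, the following is a minimal sketch of what this thumbnail step looks like with openslide; the folder layout, output names, and the 0.05 rescale value are assumptions, and the repository's actual implementation lives in inferManager.py.

```python
import glob
import os
import openslide

rescale = 0.05           # must lie in (0, 1); a value such as -1 would skip thumbnails
input_path = "./dataset"

# Collect WSIs; add more glob patterns here to support other formats.
wsi_list = glob.glob(input_path + "/*.svs")
wsi_list += glob.glob(input_path + "/*.ndpi")

for path in wsi_list:
    slide = openslide.OpenSlide(path)
    w, h = slide.dimensions  # level-0 (full resolution) size
    if 0 < rescale < 1:
        thumb = slide.get_thumbnail((int(w * rescale), int(h * rescale)))
        name = os.path.splitext(os.path.basename(path))[0]
        thumb.save(f"./output/thumbnail/{name}.png")  # assumes the folder exists
```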
Code Structure
├── classification_model
│
├── segmentation_model
│
├── infer
│   └── inferManager.py
│
├── dataset
│   ├── example1.svs
│   ├── example2.svs
│   └── ....
│
├── output
│   ├── prediction
│   │   ├── example1_v.tif
│   │   ├── example2_v.tif
│   │   ├── ....
│   │   ├── example1_wt.tif
│   │   ├── example2_wt.tif
│   │   └── ....
│   │
│   └── thumbnail
│       ├── example1_v.png
│       ├── example2_v.png
│       ├── ....
│       ├── example1_wt.png
│       ├── example2_wt.png
│       └── ....
│
├── pretrained
│   ├── whole_cls_tumor_net.pth
│   ├── ambiguours_tumor_net.pth
│   ├── viable_tumor_net.pth
│   └── viable_seg_net.pth
│
├── model
├── run_infer.py
├── run_infer.sh
└── ....
Only a single GPU is supported; inference takes about 10 minutes per WSI (40x).
- Put your data into the input_path (default: ./dataset)
- Download the pretrained models into "./pretrained"
- Run run_infer.sh (a sketch of the command-line arguments is shown below)
- You can find your results in the output_path (default: ./output).
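run_infer.sh wraps run_infer.py, whose exact interface is not reproduced here. The following is a minimal docopt-style sketch of the arguments mentioned in this README; the option names --input_path and --output_path and the default rescale value of 0.1 are assumptions, only --rescale and the default folders are stated in this README.

```python
"""Run whole/viable tumor inference on WSIs.

Usage:
  run_infer.py [--input_path=<dir>] [--output_path=<dir>] [--rescale=<f>]

Options:
  --input_path=<dir>   Folder containing .svs/.ndpi WSIs [default: ./dataset]
  --output_path=<dir>  Folder for predictions and thumbnails [default: ./output]
  --rescale=<f>        Thumbnail scale in (0, 1); e.g. -1 disables thumbnails [default: 0.1]
"""
from docopt import docopt

if __name__ == "__main__":
    args = docopt(__doc__)
    rescale = float(args["--rescale"])
    print(args["--input_path"], args["--output_path"], rescale)
```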
prediction: This folder contains the binary maps of the segmentation results, one per original WSI (1-to-1 mapping); a sketch of how these maps can be used to estimate tumor burden is given below.
- "?_v.tif" : viable tumor prediction
- "?_wt.tif" : whole tumor area prediction
thumbnail: This folder contains thumbnails, i.e. the resized segmentation results, scaled by the "--rescale" argument within the range (0, 1)
- "?_v.png" : resized viable tumor prediction
- "?_wt.png" : resized whole tumor prediction
PAIP2019 is the first challenge organized by the Pathology AI Platform (PAIP)
PAIP2019 homepage
- Background: Liver cancer is one of the most common cancers. Early diagnosis of liver cancer is crucial for the best prognosis of patients. In this project, we propose a method for whole and viable liver tumor segmentation.
- Tumor definition
- Whole tumor area: This area is defined as the outermost boundary enclosing all dispersed viable tumor cell nests, tumor necrosis, and tumor capsule.
- Viable tumor area: This region is defined as the viable tumor cell nests, delineated as precisely as possible along the boundary between the tumor cells and the surrounding stroma.
- Dataset
- The training dataset contains 50 WSIs
- The validation dataset contains 10 WSIs
- The test dataset contains 40 WSIs
- Evaluation
- Task1: Liver Cancer Segmentation
- Task2: Viable Tumor Burden Estimation
Figure 1. Test set segmentation results
- Validation Results
- 28 Aug. 2019
- Task1 score: 0.6975
- Task2 score: 0.6558
- Test Results
- Task1 rank: 5th
- Task1 score: 0.665227214
- Task2 rank: 3rd
- Task2 score: 0.633028622