
Repository for Semantic Segmentation using Deep Learning for MoNuSeg Dataset

In this repository, we work with already created ground truth segmentation masks.

What is Semantic Segmentation?

The goal of semantic image segmentation is to label each pixel of an image with the class it represents. Because we predict a class for every pixel in the image, this task is commonly referred to as dense prediction. Major applications of semantic segmentation include bio-medical diagnosis, geo-sensing, and autonomous vehicles.
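As a rough illustration (shapes only, not tied to any specific model in this repository), a classifier produces one prediction per image, while a segmentation model produces one prediction per pixel:

```python
import numpy as np

patch = np.zeros((256, 256, 3))                # one RGB input patch

# Image classification: a single class probability (or vector) per image.
classification_output = np.zeros((1,))

# Semantic segmentation (dense prediction): one class probability per pixel.
segmentation_output = np.zeros((256, 256, 1))
```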

Diagrams

UNet

[UNet architecture diagram]

SegNet

DeepLabv3

Network Summaries

UNet

[UNet model summary]

Nested UNet (with EfficientNet Backbone)

[Nested UNet model summary]

SegNet

Refer to the respective notebook

DeepLabv3

Refer to the respective notebook
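For orientation, the snippet below is a minimal, hedged sketch of the UNet encoder-decoder pattern with skip connections in Keras; the depth and filter counts are illustrative and do not necessarily match the configurations summarized in the notebooks.

```python
from keras.models import Model
from keras.layers import Input, Conv2D, MaxPooling2D, Conv2DTranspose, concatenate

def tiny_unet(input_shape=(256, 256, 3)):
    """Two-level UNet sketch; filter counts are illustrative only."""
    inputs = Input(input_shape)

    # Encoder: convolutions followed by downsampling.
    c1 = Conv2D(32, 3, activation='relu', padding='same')(inputs)
    p1 = MaxPooling2D(2)(c1)
    c2 = Conv2D(64, 3, activation='relu', padding='same')(p1)
    p2 = MaxPooling2D(2)(c2)

    # Bottleneck.
    b = Conv2D(128, 3, activation='relu', padding='same')(p2)

    # Decoder: upsampling with skip connections from the encoder.
    u2 = Conv2DTranspose(64, 2, strides=2, padding='same')(b)
    c3 = Conv2D(64, 3, activation='relu', padding='same')(concatenate([u2, c2]))
    u1 = Conv2DTranspose(32, 2, strides=2, padding='same')(c3)
    c4 = Conv2D(32, 3, activation='relu', padding='same')(concatenate([u1, c1]))

    # One sigmoid output per pixel for binary nucleus segmentation.
    outputs = Conv2D(1, 1, activation='sigmoid')(c4)
    return Model(inputs, outputs)

model = tiny_unet()
model.summary()
```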

Pre-processing

The MoNuSeg dataset contains 30 images for training and 14 images for testing, each of size 1000x1000. To facilitate training, 256x256 patches for every image, along with their corresponding masks, can be generated using view_as_windows. More details can be found in patch_generator.ipynb. Each image in the example generates 36 patches, giving 1584 patches overall.
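A minimal sketch of this patch extraction is shown below; the 128-pixel stride is an assumption chosen so that a 1000x1000 image yields a 6x6 grid of 36 patches (the exact step used is in patch_generator.ipynb).

```python
import numpy as np
from skimage.util import view_as_windows

def extract_patches(img, patch_size=256, step=128):
    """Slide a patch_size window over the image; step=128 gives a 6x6 grid
    (36 patches) for a 1000x1000 input. The stride is an assumption."""
    windows = view_as_windows(
        img, (patch_size, patch_size, img.shape[2]),
        step=(step, step, img.shape[2]))
    return windows.reshape(-1, patch_size, patch_size, img.shape[2])

image = np.zeros((1000, 1000, 3), dtype=np.uint8)   # stand-in for one MoNuSeg tile
patches = extract_patches(image)
print(patches.shape)                                # (36, 256, 256, 3)
```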

Pre-requisites

Main packages required are:

  • Keras 2.2.3
  • TensorFlow 1.15.0
  • NumPy
  • scikit-image (skimage)

Further details are available in the respective notebooks.

Features

The following can be found in the respective notebooks:

  • Pre-processing
  • Dataset visualizations
  • Network summaries
  • Training
  • Inference
  • Visualization of predicted results

Models

This repository contains three different semantic segmentation models:

  • UNet (Training + inference)
  • SegNet (Training + inference)
    • supports pooling indices
  • DeepLabv3 (Training + inference)
    • supports MobileNetV2 and Xception backbones

You can find trained models in respective folders.

Usage

All the available notebooks are standalone and can be run directly in Google Colab. The models can be trained by loading data from Google Drive; for inference, you can load the saved weights from the respective directories and visualize multiple images from the notebook. For inference only, comment out the model.fit cell in that particular notebook and run the remaining cells to get the results.
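As a hedged sketch of the inference-only path (the weight file name, image name, and 0.5 threshold below are placeholders, not the exact names used in this repository):

```python
import numpy as np
from keras.models import load_model
from skimage.io import imread

# Hypothetical file names; substitute saved weights and a patch from the repo.
# A model compiled with a custom loss (e.g. dice_loss) may need custom_objects={...}.
model = load_model('unet_monuseg.h5')
patch = imread('sample_patch.png') / 255.0            # one 256x256 RGB patch

prob = model.predict(patch[np.newaxis, ...])[0]       # per-pixel probabilities
mask = (prob[..., 0] > 0.5).astype(np.uint8)          # binarize at an assumed 0.5 threshold
```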

Performance Graphs

UNet

Loss function used: binary_crossentropy

[UNet loss and accuracy curves]

Nested UNet (UNet++)

Loss function used: binary_crossentropy

[UNet++ loss and accuracy curves]

SegNet

Loss function used: binary_crossentropy

[SegNet loss and accuracy curves]

DeepLabv3

Loss function used: dice_loss

[DeepLabv3 loss and Dice coefficient curves]
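dice_loss is not a built-in Keras loss; a common soft-Dice formulation, given here as a hedged sketch rather than the exact definition used in the notebooks, is:

```python
from keras import backend as K

def dice_coefficient(y_true, y_pred, smooth=1.0):
    """Soft Dice: 2*|A intersect B| / (|A| + |B|), smoothed to avoid division by zero."""
    y_true_f = K.flatten(y_true)
    y_pred_f = K.flatten(y_pred)
    intersection = K.sum(y_true_f * y_pred_f)
    return (2.0 * intersection + smooth) / (K.sum(y_true_f) + K.sum(y_pred_f) + smooth)

def dice_loss(y_true, y_pred):
    return 1.0 - dice_coefficient(y_true, y_pred)

# Example usage:
# model.compile(optimizer='adam', loss=dice_loss, metrics=[dice_coefficient])
```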

Quantitative Results

UNet

Results             Values
Test Loss           0.42978010276706874
Test Accuracy       0.8665722351216205
Dice coefficient    0.8034556142745479
F1-score            0.7157416380664551

Nested UNet (UNet++) with EfficientNet Encoder

Results             Values
Test Loss           0.15891806781291962
Test Accuracy       0.9142027497291565
Dice coefficient    0.7618255615234375
F1-score            0.7453052997589111

SegNet

Results             Values
Test Loss           0.3588969517837871
Test Accuracy       0.8288334337147799
Dice coefficient    0.8644718094305559
F1-score            0.6526657884771173

DeepLabv3

Results             Values
Dice Loss           0.0832267701625824
Dice coefficient    0.9167312383651733
Accuracy            0.7988329529762268

Some Predicted Results

UNet

[UNet predicted masks]

Nested UNet (UNet++) with EfficientNet Encoder

[UNet++ predicted masks]

SegNet

[SegNet predicted masks]

DeepLabv3

[DeepLabv3 predicted masks]