Implementation of the paper:
Segmentation-aware Image Denoising Without Knowing True Segmentation
Sicheng Wang, Bihan Wen, Junru Wu, Dacheng Tao, Zhangyang Wang
We propose a segmentation-aware image denoising model dubbed U-SAID, which does not need any ground-truth segmentation map during training and can therefore be applied directly to any image dataset. We demonstrate that the denoised images generated by U-SAID have:
- better visual quality;
- stronger robustness for subsequent semantic segmentation tasks.
We also demonstrate U-SAID's superior generalizability in three respects:
- denoising unseen types of images;
- pre-processing unseen noisy images for segmentation;
- pre-processing unseen images for unseen high-level tasks.
U-SAID network architecture. The USA (unsupervised segmentation-aware) module is composed of a feature embedding sub-network that transforms the denoised image into a feature space, followed by an unsupervised segmentation sub-network that projects the features to a segmentation map and computes its pixel-wise uncertainty. A minimal sketch of this module is given below.
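The following is a minimal PyTorch sketch of such a module, assuming a small convolutional embedding, a K-way 1x1 segmentation head, and softmax entropy as the pixel-wise uncertainty; the layer widths and class count are illustrative choices, not the paper's exact configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class USAModule(nn.Module):
    """Sketch of the USA module: embed the denoised image, project the
    features to a K-way segmentation map, and score its uncertainty."""
    def __init__(self, in_channels=3, feat_channels=64, num_classes=8):
        super().__init__()
        # Feature embedding sub-network: denoised image -> feature space.
        self.embed = nn.Sequential(
            nn.Conv2d(in_channels, feat_channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(feat_channels, feat_channels, 3, padding=1),
            nn.ReLU(inplace=True),
        )
        # Unsupervised segmentation sub-network: features -> class logits.
        self.segment = nn.Conv2d(feat_channels, num_classes, 1)

    def forward(self, denoised):
        feat = self.embed(denoised)
        prob = F.softmax(self.segment(feat), dim=1)
        # Pixel-wise uncertainty: entropy of the per-pixel class
        # distribution; averaging it gives a scalar training signal.
        entropy = -(prob * torch.log(prob + 1e-8)).sum(dim=1)
        return prob, entropy.mean()
```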
Visual comparison on Kodak images.
Semantic segmentation results on the Pascal VOC 2012 validation set.
Requirements:
- PyTorch
- torchvision
- OpenCV for Python
- tensorboardX (TensorBoard for PyTorch)
Training: run `USAID_train.py`.
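`USAID_train.py` implements the full training loop; the sketch below only illustrates the presumed shape of the objective, a reconstruction loss plus a weighted uncertainty term from the USA module sketched above. The stand-in denoiser, the noise level, and the weight `lam` are illustrative assumptions, not values from the paper.

```python
import torch
import torch.nn.functional as F

denoiser = torch.nn.Conv2d(3, 3, 3, padding=1)  # stand-in for the real denoising network
usa = USAModule()                               # from the sketch above
opt = torch.optim.Adam(list(denoiser.parameters()) + list(usa.parameters()))

clean = torch.rand(4, 3, 64, 64)                # dummy batch of clean patches
noisy = clean + 0.1 * torch.randn_like(clean)   # synthetic Gaussian noise

denoised = denoiser(noisy)
_, uncertainty = usa(denoised)
lam = 0.1  # assumed weighting of the uncertainty term, not the paper's value
loss = F.mse_loss(denoised, clean) + lam * uncertainty

opt.zero_grad()
loss.backward()
opt.step()
```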
Pre-trained model: `Saved_Models/USAID.pth`.
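Below is a minimal loading and inference sketch, assuming the checkpoint stores a full serialized module that maps a noisy image tensor to a denoised one; if it only stores a state_dict, instantiate the network class from this repo first and load the weights into it. The input/output file names and the noise level are placeholders.

```python
import cv2
import numpy as np
import torch

# Assumed to be a full serialized module; verify against how the repo saves it.
model = torch.load('Saved_Models/USAID.pth', map_location='cpu')
model.eval()

# Read an image with OpenCV, add synthetic Gaussian noise, and denoise it.
img = cv2.imread('input.png').astype(np.float32) / 255.0
noisy = np.clip(img + np.random.normal(0, 25.0 / 255.0, img.shape), 0.0, 1.0)
x = torch.from_numpy(noisy).permute(2, 0, 1).unsqueeze(0).float()

with torch.no_grad():
    denoised = model(x).clamp(0.0, 1.0)

out = (denoised.squeeze(0).permute(1, 2, 0).numpy() * 255.0).astype(np.uint8)
cv2.imwrite('denoised.png', out)
```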
If you use this code for your research, please cite our paper:
@misc{1905.08965,
  Author = {Sicheng Wang and Bihan Wen and Junru Wu and Dacheng Tao and Zhangyang Wang},
  Title = {Segmentation-Aware Image Denoising without Knowing True Segmentation},
  Year = {2019},
  Eprint = {arXiv:1905.08965},
}