
UDASOD-UPL

Unsupervised Domain Adaptive Salient Object Detection Through Uncertainty-Aware Pseudo-Label Learning
Pengxiang Yan, Ziyi Wu, Mengmeng Liu, Kun Zeng, Liang Lin, Guanbin Li
Thirty-Sixth AAAI Conference on Artificial Intelligence (AAAI), 2022
[Paper] [Supplemental Materials]

Install

This code has been tested with Python=3.7 (via Anaconda3), PyTorch=1.6.0, and CUDA=10.2.

# Install PyTorch-1.6.0
$ conda install pytorch==1.6.0 torchvision==0.7.0 cudatoolkit=10.2 -c pytorch

# Install Dependencies
$ pip install numpy opencv-python matplotlib tqdm yacs albumentations tensorboard

# Install apex
$ git clone https://github.com/NVIDIA/apex
$ cd apex
$ pip install -v --disable-pip-version-check --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" ./
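
Before moving on, you may want a quick sanity check that the environment is usable. The minimal snippet below is not part of the repository; it only confirms the PyTorch/CUDA versions and that apex imports cleanly.

```python
# Quick environment sanity check (not part of the original repository).
import torch

print("PyTorch:", torch.__version__)           # expected: 1.6.0
print("CUDA available:", torch.cuda.is_available())
print("CUDA (build):", torch.version.cuda)     # expected: 10.2

try:
    from apex import amp  # noqa: F401 -- mixed-precision utilities
    print("apex imported successfully")
except ImportError as err:
    print("apex is not installed correctly:", err)
```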

Datasets

Proposed dataset - SYNSOD

In this paper, we propose a synthetic salient object detection dataset (SYNSOD) consisting of 11,197 images and their corresponding pixel-level annotations.

Public datasets

Our code may use the following public datasets: [PASCAL-S], [ECSSD], [HKU-IS], [DUT-OMRON], [DUTS], [SOD]. Please download these datasets and unzip them into the data folder when necessary.

The data folder should be organized as follows:

├── CG4
│   ├── image
│   ├── mask
│   ├── test.txt
│   └── train.txt
├── DUT-OMRON
│   ├── image
│   ├── mask
│   └── test.txt
├── DUTS
│   ├── image
│   ├── mask
│   ├── test.txt
│   ├── train.txt
│   └── train_val.txt
├── ECSSD
│   ├── image
│   ├── mask
│   └── test.txt
├── HKU-IS
│   ├── image
│   ├── mask
│   └── test.txt
├── PASCAL-S
│   ├── image
│   ├── mask
│   └── test.txt
└── SOD
    ├── image
    ├── mask
    └── test.txt
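
As a convenience, a small script like the sketch below can verify that the layout above is in place before training or evaluation. It assumes the datasets live under ./data; the expected entries are taken directly from the tree shown above.

```python
# Hypothetical sanity check for the data/ layout shown above.
import os

DATA_ROOT = "data"  # assumption: datasets are unzipped under ./data

EXPECTED = {
    "CG4":       ["image", "mask", "test.txt", "train.txt"],
    "DUT-OMRON": ["image", "mask", "test.txt"],
    "DUTS":      ["image", "mask", "test.txt", "train.txt", "train_val.txt"],
    "ECSSD":     ["image", "mask", "test.txt"],
    "HKU-IS":    ["image", "mask", "test.txt"],
    "PASCAL-S":  ["image", "mask", "test.txt"],
    "SOD":       ["image", "mask", "test.txt"],
}

missing = [
    os.path.join(DATA_ROOT, dataset, entry)
    for dataset, entries in EXPECTED.items()
    for entry in entries
    if not os.path.exists(os.path.join(DATA_ROOT, dataset, entry))
]
print("missing entries:", missing if missing else "none")
```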

Evaluation

You can download the predicted saliency maps for evaluation: [Google Drive] [Baidu Pan](passwd: 4q1l)

Modify the config file config/eval.yaml and run the evaluation script:

$ python eval.py \
    --exp_config config/eval.yaml \
    --pred_dir <pred_dir>

The predicted saliency masks directory <pred_dir> should look like:

├── duts_te
│   └── mask
│       ├── *.png
...
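
For reference, the core MAE metric is simply the mean per-pixel difference between a predicted saliency map and its ground-truth mask after normalizing both to [0, 1]. The sketch below only illustrates that computation and is not the repo's eval.py (which covers the full metric suite); it assumes prediction and ground-truth PNGs share file names.

```python
# Illustrative MAE computation; eval.py is the authoritative evaluation.
import os
import cv2
import numpy as np

def mae_over_dir(pred_dir, gt_dir):
    """Average per-pixel |pred - gt| over all masks found in pred_dir."""
    scores = []
    for name in sorted(os.listdir(pred_dir)):
        pred = cv2.imread(os.path.join(pred_dir, name), cv2.IMREAD_GRAYSCALE)
        gt = cv2.imread(os.path.join(gt_dir, name), cv2.IMREAD_GRAYSCALE)
        if pred is None or gt is None:
            continue
        if pred.shape != gt.shape:
            pred = cv2.resize(pred, (gt.shape[1], gt.shape[0]))
        scores.append(np.abs(pred / 255.0 - gt / 255.0).mean())
    return float(np.mean(scores))

# Example call with placeholder paths:
# print(mae_over_dir("<pred_dir>/duts_te/mask", "data/DUTS/mask"))
```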


Testing

You can download the trained model weights: [Google Drive] [Baidu Pan](passwd: 1s86).

Modify the checkpoint path in config/test.yaml and run the testing script:

$ CUDA_VISIBLE_DEVICES=0 python test.py \
    --exp_config config/test.yaml \
    --save_res

The predicted saliency masks and evaluation results will be saved in test_res_{date}/.

Training

  1. The saliency detector is based on LDF. If you want to train from scratch, download the pretrained ResNet-50 from here and save it to checkpoints/. Then set MODEL.BACKBONE_PATH in config/ldf_train.yaml to the path of the downloaded weights.

  2. Run the following command to generate the detail and body maps of the source domain for LDF training (an illustrative sketch of this decomposition is given after this list):

$ python tools/generate_body_detail.py

  3. Start training:

$ CUDA_VISIBLE_DEVICES=0 python train.py \
    --exp_config config/ldf_train.yaml

  4. (Optional) Visualize the training process in TensorBoard:

$ tensorboard --logdir tb_runs

  5. (Optional) For more training scripts, refer to scripts/training.sh.
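
As referenced in step 2, LDF-style training splits each ground-truth mask into a body map (interior pixels, weighted by their distance from the boundary) and a detail map (the residual near the boundary). The sketch below only illustrates that idea; tools/generate_body_detail.py in this repo is the authoritative implementation and may differ in pre-processing and normalization.

```python
# Illustrative body/detail decomposition in the spirit of LDF.
# tools/generate_body_detail.py is the authoritative version.
import cv2
import numpy as np

def body_detail(mask_path):
    mask = cv2.imread(mask_path, cv2.IMREAD_GRAYSCALE)
    # Distance of each foreground pixel to the nearest background pixel.
    dist = cv2.distanceTransform(mask, distanceType=cv2.DIST_L2, maskSize=5)
    dist = np.sqrt(dist)                    # compress the dynamic range
    if dist.max() > 0:
        dist = dist / dist.max()            # normalize to [0, 1]
    body = (mask * dist).astype(np.uint8)   # interior-weighted "body"
    detail = cv2.subtract(mask, body)       # boundary-focused "detail"
    return body, detail
```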

Citation

If you find this work helpful, please consider citing

@article{yan2022unsupervised,
  title={Unsupervised Domain Adaptive Salient Object Detection Through Uncertainty-Aware Pseudo-Label Learning},
  author={Yan, Pengxiang and Wu, Ziyi and Liu, Mengmeng and Zeng, Kun and Lin, Liang and Li, Guanbin},
  journal={arXiv preprint arXiv:2202.13170},
  year={2022}
}

FAQ

Q1: Is the proposed approach an unsupervised salient object detection method?

A1: We define our approach as an unsupervised domain adaptive salient object detection method, not an unsupervised one. Only when comparing with other methods do we follow the convention of existing deep USOD works, which define "unsupervised learning" as learning without human annotations; most of these works train on noisy labels generated by traditional methods.

Acknowledgement

Thanks to the third-party libraries:

  • Saliency detector: LDF by weijun88
  • Evaluation Toolbox: PySODMetrics by lartpang

Contact

For any questions, feel free to open an issue or contact us:

  • Pengxiang Yan @Kinpzz, Email: yanpx (at) live.com
  • Ziyi Wu @Z1Wu, Email: wuzy39 (at) mail2.sysu.edu.cn
  • For the dataset generation part: Mengmeng Liu @luimoli, Email: liumm97 (at) outlook.com