[ECCV 2018]: T2Net: Synthetic-to-Realistic Translation for Depth Estimation Tasks

Synthetic2Realistic

This repository provides the PyTorch implementation for training and testing of T2Net, from "T2Net: Synthetic-to-Realistic Translation for Solving Single-Image Depth Estimation Tasks" by Chuanxia Zheng, Tat-Jen Cham and Jianfei Cai at NTU. A video is available on YouTube.

  • Outdoor Translation

  • Indoor Translation

  • Extension (WS-GAN, unpaired Image-to-Image Translation, horse2zebra)

This repository can be used for training and testing of:

  • Unpaired image-to-image translation
  • Single-image depth estimation

Getting Started

Installation

This code was tested with PyTorch 0.4.0, CUDA 8.0, Python 3.6 and Ubuntu 16.04.

  • Install visdom and dominate:
pip install visdom dominate
  • Clone this repo:
git clone https://github.com/lyndonzheng/Synthetic2Realistic
cd Synthetic2Realistic

Datasets

The indoor synthetic dataset is rendered from SUNCG, and the indoor realistic dataset comes from NYUv2. The outdoor synthetic dataset is vKITTI, and the outdoor realistic dataset is KITTI.
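The training command below reads plain-text file lists (e.g. trainA_SYN.txt). Assuming each list holds one image path per line — an assumption based on the flag names, not a documented format — a small helper like this (not part of the repository) could generate them:

```python
import os

def write_file_list(image_dir, out_txt, exts=(".png", ".jpg")):
    """Write one absolute image path per line, sorted for a stable order."""
    paths = sorted(
        os.path.join(image_dir, name)
        for name in os.listdir(image_dir)
        if name.lower().endswith(exts)
    )
    with open(out_txt, "w") as fh:
        fh.write("\n".join(paths))
    return len(paths)

# Example (hypothetical paths, matching the flags used below):
# write_file_list("/dataset/Image2Depth31_KITTI/trainA_SYN",
#                 "/dataset/Image2Depth31_KITTI/trainA_SYN.txt")
```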

Training

Warning: The input sizes need to be multiples of 64. The feature GAN model needs to be changed for different scales.

  • Train a model with multi-domain datasets:
python train.py --name Outdoor_nyu_wsupervised --model wsupervised
--img_source_file /dataset/Image2Depth31_KITTI/trainA_SYN.txt
--img_target_file /dataset/Image2Depth31_KITTI/trainA.txt
--lab_source_file /dataset/Image2Depth31_KITTI/trainB_SYN.txt
--lab_target_file /dataset/Image2Depth31_KITTI/trainB.txt
--shuffle --flip --rotation
  • To view training results and loss plots, run python -m visdom.server and open http://localhost:8097 in your browser.
  • Training results will be saved under the checkpoints folder. More training options can be found in options.
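The warning above requires input sizes that are multiples of 64 (so encoder feature maps align with decoder skip connections after repeated downsampling). A minimal sketch, not part of the repository, for snapping an arbitrary size to a valid one:

```python
def snap_to_multiple(size, base=64):
    """Round a (height, width) pair down to the nearest multiple of `base`.

    Inputs whose sides are not multiples of 64 are resized or cropped
    to a valid shape before being fed to the network.
    """
    h, w = size
    return (max(base, h // base * base), max(base, w // base * base))

# snap_to_multiple((192, 640))   -> (192, 640)   already valid
# snap_to_multiple((375, 1242))  -> (320, 1216)  rounded down
```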

Testing

  • Test the model
python test.py --name Outdoor_nyu_wsupervised --model test
--img_source_file /dataset/Image2Depth31_KITTI/testA_SYN80
--img_target_file /dataset/Image2Depth31_KITTI/testA

Estimation

  • Depth estimation; the evaluation code is based on monodepth:
python evaluation.py --split eigen --file_path ./datasplit/
--gt_path ''your path''/KITTI/raw_data_KITTI/
--predicted_depth_path ''your path''/result/KITTI/predicted_depth_vk
--garg_crop
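monodepth-style evaluation scripts report a standard set of error metrics on the Eigen split. A self-contained sketch of three of the common ones (absolute relative error, RMSE, and the delta < 1.25 accuracy), written here for illustration rather than copied from the repository:

```python
import math

def depth_metrics(gt, pred):
    """Common monocular-depth metrics over paired lists of positive depths:
    absolute relative error, RMSE, and the fraction of pixels whose
    prediction is within a factor of 1.25 of the ground truth."""
    n = len(gt)
    abs_rel = sum(abs(g - p) / g for g, p in zip(gt, pred)) / n
    rmse = math.sqrt(sum((g - p) ** 2 for g, p in zip(gt, pred)) / n)
    a1 = sum(max(g / p, p / g) < 1.25 for g, p in zip(gt, pred)) / n
    return {"abs_rel": abs_rel, "rmse": rmse, "a1": a1}
```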

Trained Models

The pretrained model for the indoor scene, weakly supervised.

The pretrained model for the outdoor scene, weakly supervised.

Note: Since the original model in the paper was trained on a single GPU, this pretrained model is the multi-GPU version.

Citation

If you use this code for your research, please cite our paper:

@inproceedings{zheng2018t2net,
  title={T2Net: Synthetic-to-Realistic Translation for Solving Single-Image Depth Estimation Tasks},
  author={Zheng, Chuanxia and Cham, Tat-Jen and Cai, Jianfei},
  booktitle={Proceedings of the European Conference on Computer Vision (ECCV)},
  pages={767--783},
  year={2018}
}

Acknowledgments

Code is inspired by Pytorch-CycleGAN.
