Learning to Generate Time-lapse Videos Using Multi-stage Dynamic Generative Adversarial Networks

This is the official code of the CVPR 2018 paper.

CVPR 2018 Paper | Project Page | Dataset

Usage

  1. Requirements:
    • download our time-lapse dataset
    • Python 2.7
    • PyTorch 0.3.0 or 0.3.1
    • ffmpeg
  2. Testing:
    • download our pretrained models
    • run python test.py --cuda --testf your_test_dataset_folder (a small helper sketch follows this list)
  3. Sample outputs:
    • ./sample_outputs contains MP4 files that were generated on my machine.
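
The test command above is all that the README specifies. Below is a minimal wrapper sketch around it, assuming that test.py writes its generated clips as MP4 files into an output folder; the ./sample_outputs path is borrowed from this repo, while the layout of your_test_dataset_folder and the exact output location are assumptions, not documented behavior. The sketch runs the documented command and then uses ffmpeg (a listed requirement) to dump the first frame of each generated clip for quick inspection.

    # run_test_and_preview.py -- hypothetical helper, not part of the original repo.
    # Assumes: test.py accepts --cuda and --testf as documented, and the generated
    # MP4 clips end up in OUTPUT_DIR (an assumption, not documented).
    from __future__ import print_function
    import os
    import subprocess

    TEST_FOLDER = "your_test_dataset_folder"   # replace with your prepared test set
    OUTPUT_DIR = "./sample_outputs"            # assumed output location of test.py

    # 1. Run the documented test command.
    subprocess.check_call(["python", "test.py", "--cuda", "--testf", TEST_FOLDER])

    # 2. Use ffmpeg to grab the first frame of each generated clip so results
    #    can be checked quickly without a video player.
    for name in sorted(os.listdir(OUTPUT_DIR)):
        if not name.endswith(".mp4"):
            continue
        clip = os.path.join(OUTPUT_DIR, name)
        preview = clip.replace(".mp4", "_frame0.png")
        subprocess.check_call(["ffmpeg", "-y", "-i", clip, "-frames:v", "1", preview])
        print("wrote preview:", preview)

The two-step structure simply mirrors the Testing and Sample outputs items above; if test.py writes frames or videos elsewhere on your machine, adjust OUTPUT_DIR accordingly.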

Citing

@InProceedings{Xiong_2018_CVPR,
  author    = {Xiong, Wei and Luo, Wenhan and Ma, Lin and Liu, Wei and Luo, Jiebo},
  title     = {Learning to Generate Time-Lapse Videos Using Multi-Stage Dynamic Generative Adversarial Networks},
  booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  month     = {June},
  year      = {2018}
}
