Official PyTorch implementation of "Generating Videos with Dynamics-aware Implicit Generative Adversarial Networks" by
Sihyun Yu*,1,
Jihoon Tack*,1,
Sangwoo Mo*,1,
Hyunsu Kim2,
Junho Kim2,
Jung-Woo Ha2,
Jinwoo Shin1.
1KAIST, 2NAVER AI Lab (KAIST-NAVER Hypercreative AI Center)
TL;DR: We make video generation scalable by leveraging implicit neural representations.
paper | project page
Illustration of the (a) generator and (b) discriminator of DIGAN. The generator produces video INR weights from random content and motion vectors; the resulting INR maps the input 2D grid {(x, y)} and time t to an image. Two discriminators judge the realism of each image and of the motion (from a pair of images and their time difference), respectively.
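The INR inputs described above can be sketched as a normalized coordinate grid {(x, y)} paired with a scalar time t. The snippet below is a minimal illustration of constructing such inputs (the actual repo may normalize coordinates differently, e.g. to [-1, 1]):

```python
import numpy as np

def make_inr_inputs(height, width, t):
    # Build a normalized 2D grid {(x, y)} in [0, 1] and pair every
    # coordinate with the scalar time t, mirroring the (x, y, t) inputs
    # the video INR consumes. Sketch only; not part of the repo.
    ys, xs = np.meshgrid(
        np.linspace(0.0, 1.0, height),
        np.linspace(0.0, 1.0, width),
        indexing="ij",
    )
    coords = np.stack([xs, ys], axis=-1)            # (H, W, 2)
    times = np.full((height, width, 1), float(t))   # (H, W, 1)
    return np.concatenate([coords, times], axis=-1)  # (H, W, 3)

inputs = make_inr_inputs(4, 4, t=0.5)
print(inputs.shape)  # (4, 4, 3)
```

Evaluating the INR on the same grid with different t values yields the individual frames of a video.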
conda create -n digan python=3.8
conda activate digan
pip install torch==1.8.0+cu111 torchvision==0.9.0+cu111 -f https://download.pytorch.org/whl/torch_stable.html
pip install hydra-core==1.0.6
pip install tqdm scipy scikit-learn av ninja
pip install click gitpython requests psutil einops tensorboardX
One should organize the video dataset as follows:
UCF-101
|-- train
    |-- class1
        |-- video1.avi
        |-- video2.avi
        |-- ...
    |-- class2
        |-- video1.avi
        |-- video2.avi
        |-- ...
    |-- ...
Video dataset
|-- train
    |-- video1
        |-- frame00000.png
        |-- frame00001.png
        |-- ...
    |-- video2
        |-- frame00000.png
        |-- frame00001.png
        |-- ...
    |-- ...
|-- val
    |-- video1
        |-- frame00000.png
        |-- frame00001.png
        |-- ...
    |-- ...
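As a sanity check after preparing data, the frame-based layout above can be indexed with a few lines of Python. `index_frame_dataset` is a hypothetical helper, not part of the repo; it assumes frames are stored as .png files (adapt the suffix if yours differ):

```python
import tempfile
from pathlib import Path

def index_frame_dataset(root):
    # Walk <root>/<split>/<video>/<frame>.png and return
    # {split: {video: num_frames}} for a quick sanity check.
    index = {}
    for split_dir in sorted(p for p in Path(root).iterdir() if p.is_dir()):
        index[split_dir.name] = {
            v.name: len(list(v.glob("*.png")))
            for v in sorted(p for p in split_dir.iterdir() if p.is_dir())
        }
    return index

# Build a tiny example tree matching the layout above, then index it.
root = Path(tempfile.mkdtemp())
for split, videos in [("train", ["video1", "video2"]), ("val", ["video1"])]:
    for video in videos:
        d = root / split / video
        d.mkdir(parents=True)
        for i in range(3):
            (d / f"frame{i:05d}.png").touch()

index = index_frame_dataset(root)
print(index)  # {'train': {'video1': 3, 'video2': 3}, 'val': {'video1': 3}}
```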
- Download links: UCF-101, Sky Time-lapse, TaiChi-HD
- For the Kinetics-food dataset, see prepare_data/README.md
To train the model, navigate to the project directory and run:
python src/infra/launch.py hydra.run.dir=. +experiment_name=<EXP_NAME> +dataset.name=<DATASET>
You may change training options by modifying configs/main.yml and configs/digan.yml.
The available values of <DATASET> are {UCF-101, sky, taichi, kinetics}.
The default data path is /data; you can change it by modifying configs/main.yml.
Compute FVD and KVD of a trained model:
python src/scripts/compute_fvd_kvd.py --network_pkl <MODEL_PATH> --data_path <DATA_PATH>
Generate and visualize videos (as gif and mp4):
python src/scripts/generate_videos.py --network_pkl <MODEL_PATH> --outdir <OUTPUT_PATH>
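For reference, writing a generated frame sequence out as a gif takes only a few lines with Pillow. This is a minimal sketch of what such export looks like; the repo's generate_videos.py script handles gif and mp4 export itself, and `save_gif` below is a hypothetical helper:

```python
import os
import tempfile

import numpy as np
from PIL import Image

def save_gif(frames, path, fps=8):
    # frames: a list of (H, W, 3) uint8 arrays, one per video frame.
    images = [Image.fromarray(f) for f in frames]
    images[0].save(
        path,
        save_all=True,
        append_images=images[1:],
        duration=int(1000 / fps),  # per-frame display time in ms
        loop=0,                    # loop forever
    )

# Dummy 8-frame "video" of brightening gray frames.
frames = [np.full((16, 16, 3), i * 16, dtype=np.uint8) for i in range(8)]
out = os.path.join(tempfile.mkdtemp(), "sample.gif")
save_gif(frames, out)
```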
Generated video results of DIGAN on TaiChi (top) and Sky (bottom) datasets.
More generated video results are available at the following site.
One can download the pretrained checkpoints from the following link.
@inproceedings{
yu2022digan,
title={Generating Videos with Dynamics-aware Implicit Generative Adversarial Networks},
author={Yu, Sihyun and Tack, Jihoon and Mo, Sangwoo and Kim, Hyunsu and Kim, Junho and Ha, Jung-Woo and Shin, Jinwoo},
booktitle={International Conference on Learning Representations},
year={2022},
}
This code is mainly built upon the StyleGAN2-ADA and INR-GAN repositories.
We also used code from the following repositories: DiffAug, VideoGPT, and MDGAN.
Copyright 2022-present NAVER Corp.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright notice, this
list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.