Related topics: controllable GANs and GAN inversion.
These notes mainly collect work that leverages pre-trained GANs.
Some methods finetune pretrained GANs on new datasets, which typically results in higher performance compared to training from scratch, especially in the limited-data regime.
To extend the success of GANs to the limited-data regime, it is common to use pretraining.
Image-to-image translation can be done by injecting encoded features into StyleGAN.
Image inpainting and outpainting can be realized by locating appropriate codes in the latent space.
Various latent optimization methods have been designed for inpainting, style transfer, morphing, colorization, denoising, and super-resolution.
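The common recipe behind these latent optimization methods is to freeze the generator and optimize only the latent code against a (possibly masked) reconstruction loss. A minimal sketch for inpainting, where a toy linear generator stands in for a real frozen StyleGAN (real methods backpropagate the same masked loss, often perceptual rather than plain L2, through the actual network):

```python
import numpy as np

# Toy stand-in for a frozen pre-trained generator: a fixed random
# linear map from latent code to a flattened "image".
rng = np.random.default_rng(0)
LATENT_DIM, IMG_DIM = 8, 32
G = rng.normal(size=(IMG_DIM, LATENT_DIM))  # frozen "generator" weights

def generate(z):
    """Map a latent code to a flattened 'image'."""
    return G @ z

# Ground-truth image and a mask of known pixels (0 marks the hole).
z_true = rng.normal(size=LATENT_DIM)
target = generate(z_true)
mask = np.ones(IMG_DIM)
mask[10:20] = 0.0

# Gradient descent on L(z) = || mask * (G z - target) ||^2:
# only the known pixels drive the latent code.
z = np.zeros(LATENT_DIM)
for _ in range(1000):
    grad = 2.0 * G.T @ (mask * (generate(z) - target))
    z -= 0.01 * grad

# The optimized code regenerates the full image, filling the hole
# with the generator's prior.
inpainted = generate(z)
```

Style transfer, colorization, denoising, and super-resolution follow the same pattern with a different degradation operator in place of the mask.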
Some works use the generator as a fixed decoder and facilitate disentanglement by training one encoder for identity and another for pose.
Richardson et al. perform image translation by training encoders that map sketches or semantic maps into StyleGAN's W space.
This builds on the observation that recent GANs provide a naturally disentangled latent space for generation.
In contrast to training-based methods, pre-trained GANs have been shown to possess good disentanglement properties out of the box: manipulating the latent space directly induces semantic changes in the image space.
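The manipulation itself is usually just a linear move along a semantic direction, w' = w + α·d. A minimal sketch, where a toy linear generator stands in for a frozen StyleGAN and the direction is hypothetical (in practice it would be found via PCA on latent codes, supervised attribute classifiers, or similar):

```python
import numpy as np

# Toy stand-in for a frozen pre-trained generator with a
# disentangled latent space.
rng = np.random.default_rng(1)
LATENT_DIM, IMG_DIM = 4, 16
G = rng.normal(size=(IMG_DIM, LATENT_DIM))

def generate(w):
    return G @ w

w = rng.normal(size=LATENT_DIM)  # latent code of some image

# A hypothetical semantic direction (e.g. "smile" in a face model).
direction = np.zeros(LATENT_DIM)
direction[0] = 1.0

# Editing = moving the code along the direction; only the generator's
# response to that one factor changes in the output.
alpha = 3.0
edited = generate(w + alpha * direction)
delta = edited - generate(w)  # = alpha * G[:, 0] for this linear toy
```

With a disentangled space, varying α sweeps a single attribute while leaving the rest of the image unchanged.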
Work on leveraging pre-trained representations for GANs can be divided into two categories:
- transferring parts of a GAN to a new dataset.
- using pre-trained models to control and improve GANs.
the pre-trained model contains
This dataset does not exist: training models from generated images
Victor Besnier, Himalaya Jain, Andrei Bursuc, Matthieu Cord, Patrick Pérez
[ICASSP 2020]
Ensembling with Deep Generative Views
Lucy Chai, Jun-Yan Zhu, Eli Shechtman, Phillip Isola, Richard Zhang
[CVPR 2021]
Generative Interventions for Causal Learning
Chengzhi Mao, Augustine Cha, Amogh Gupta, Hao Wang, Junfeng Yang, Carl Vondrick
[CVPR 2021]
Data Augmentation Using GANs
Fabio Henrique Kiyoiti dos Santos Tanaka, Claus Aranha
[ACML 2019]
Conditional Infilling GANs for Data Augmentation in Mammogram Classification
Eric Wu, Kevin Wu, David Cox, William Lotter
[MICCAI 2018]
Finding an Unsupervised Image Segmenter in Each of Your Deep Generative Models
Luke Melas-Kyriazi, Christian Rupprecht, Iro Laina, Andrea Vedaldi
Transferring GANs: generating images from limited data
Yaxing Wang, Chenshen Wu, Luis Herranz, Joost van de Weijer, Abel Gonzalez-Garcia, Bogdan Raducanu
[ECCV 2018] (UAB)
Seeing What a GAN Cannot Generate
David Bau, Jun-Yan Zhu, Jonas Wulff, William Peebles, Hendrik Strobelt, Bolei Zhou, Antonio Torralba
[ICCV 2019] (MIT, CUHK) [Code]
Projected GANs Converge Faster
Axel Sauer, Kashyap Chitta, Jens Müller, Andreas Geiger
[NeurIPS 2021] (MPI)
Image2StyleGAN: How to Embed Images into the StyleGAN Latent Space?
Style Generator Inversion for Image Enhancement and Animation
Image Processing Using Multi-Code GAN Prior
Exploiting Deep Generative Prior for Versatile Image Restoration and Manipulation
Disentangling in Latent Space by Harnessing a Pretrained Generator
Encoding in Style: A StyleGAN Encoder for Image-to-Image Translation
Image Manipulation with Perceptual Discriminators