Image Generation from Small Datasets via Batch Statistics Adaptation

The author's official minimal implementation of "Image Generation from Small Datasets via Batch Statistics Adaptation".

Requirements

chainer>=5.0.0
opencv_python
numpy
scipy
Pillow
PyYAML

Dataset preparation

data_path
├── dataset1
├── dataset2
...
  • Place all training samples in the same directory for each dataset.
  • Set data_path in configs/default.yml to the root data directory.
  • Set dataset in configs/default.yml to the name of the dataset directory (a quick path check is sketched below).
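The following is a minimal sketch (not part of the repository) for verifying that the paths are set correctly; it assumes configs/default.yml exposes data_path and dataset as top-level keys, as described above.

import os
import yaml

# Load the training config and count the images found in the dataset directory.
with open("configs/default.yml") as f:
    config = yaml.safe_load(f)

dataset_dir = os.path.join(config["data_path"], config["dataset"])
images = [f for f in os.listdir(dataset_dir)
          if f.lower().endswith((".png", ".jpg", ".jpeg"))]
print("Found {} training samples in {}".format(len(images), dataset_dir))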

Run

For single-GPU training, run

python ./train.py --config_path configs/default.yml

For multi-GPU training, run

mpirun python ./train.py --config_path configs/default.yml

Multi-GPU training is supported only for the BigGAN-based model; we used 4 GPUs to train BigGAN.
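For example, a 4-process launch with Open MPI would look like the following (the exact flags depend on your MPI installation and cluster setup):

mpirun -np 4 python ./train.py --config_path configs/default.yml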

Inference

Initialize the generator and load pretrained weights.

e.g. BigGAN-based model

import chainer

# AdaBIGGAN is defined in this repository; import it from the corresponding module.
gen = AdaBIGGAN(config, datasize, comm=comm)
chainer.serializers.load_npz("your_pretrained_model.h5", gen.gen)
gen.to_gpu(device)      # send to GPU if necessary
gen.gen.to_gpu(device)  # send to GPU if necessary

Random sampling

e.g. randomly sample 5 images with temperature 0.5 and without truncation

random_imgs = gen.random(tmp=0.5, n=5, truncate=False)
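The snippet below is a hedged sketch (not part of the repository) of how the sampled images could be written to disk. It assumes the generator returns an array or chainer.Variable of shape (N, 3, H, W) with values roughly in [-1, 1]; adjust the scaling if the actual output format differs.

import numpy as np
from PIL import Image
import chainer

arr = random_imgs.array if isinstance(random_imgs, chainer.Variable) else random_imgs
arr = chainer.cuda.to_cpu(arr)  # move off the GPU if necessary
arr = np.clip((arr + 1.0) * 127.5, 0, 255).astype(np.uint8)  # assumed [-1, 1] range -> uint8
for i, img in enumerate(arr):
    Image.fromarray(img.transpose(1, 2, 0)).save("sample_{}.png".format(i))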

Interpolation

e.g. interpolate between the 0th and 1st training images

interpolated_imgs = gen.interpolate(source=0, dest=1, num=5)

Acknowledgement

A PyTorch re-implementation by Satoshi Tsutsui and Minjun Li is also available.

License

MIT (see LICENSE.txt and LICENSE-sngan).
