PyTorch implementation of AnimeGAN for fast photo animation
- Paper: AnimeGAN: a novel lightweight GAN for photo animation (Semantic Scholar, or from Tachibana Yoshino's repo)
- Original TensorFlow implementation by Tachibana Yoshino
- Try it on Hugging Face
- Demo and Docker image on Replicate
- Sample anime video: https://www.youtube.com/watch?v=45ASFOR3rNU
Input | Animation |
---|---|
- 09/06/2024: Integrated on Hugging Face Spaces; try it here
- 02/06/2024: Arcane (result here) and Shinkai styles released
- 05/05/2024: Added a color_transfer module to retain the original colors of generated images. See here.
- 23/04/2024: Added DDP training.
- 16/04/2024: AnimeGANv2 (Hayao style) released with training code
git clone https://github.com/ptran1203/pytorch-animeGAN.git
cd pytorch-animeGAN
Run inference on your local machine
--src can be a directory or an image file
python3 inference.py --weight hayao:v2 --src /your/path/to/image_dir --out /path/to/output_dir
- Python code
from inference import Predictor

predictor = Predictor(
    'hayao:v2',
    # if True, the generated image retains the original colors of the input image
    retain_color=True
)
url = 'https://github.com/ptran1203/pytorch-animeGAN/blob/master/example/result/real/1%20(20).jpg?raw=true'
predictor.transform_file(url, "anime.jpg")
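When retain_color is enabled, a color-transfer step maps the colors of the stylized output back toward the input photo. The snippet below is only a minimal sketch of that general idea (mean/std matching in LAB space); the repository's actual color_transfer module may work differently, and transfer_color is a hypothetical helper name.

```python
# Hypothetical sketch of color transfer via LAB mean/std matching.
# Not the repository's color_transfer module; illustration only.
import cv2
import numpy as np

def transfer_color(content_bgr: np.ndarray, generated_bgr: np.ndarray) -> np.ndarray:
    """Shift the generated image's LAB statistics toward those of the original photo."""
    gen = cv2.cvtColor(generated_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
    ref = cv2.cvtColor(content_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
    gen_mean, gen_std = gen.mean(axis=(0, 1)), gen.std(axis=(0, 1)) + 1e-6
    ref_mean, ref_std = ref.mean(axis=(0, 1)), ref.std(axis=(0, 1))
    out = (gen - gen_mean) / gen_std * ref_std + ref_mean
    return cv2.cvtColor(np.clip(out, 0, 255).astype(np.uint8), cv2.COLOR_LAB2BGR)
```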
Model name | Model | Dataset | Weight |
---|---|---|---|
Hayao | AnimeGAN | train_photo + Hayao style | generator_hayao.pt |
Shinkai | AnimeGAN | train_photo + Shinkai style | generator_shinkai.pt |
Hayao:v2 | AnimeGANv2 | Google Landmark v2 + Hayao style | GeneratorV2_gldv2_Hayao.pt |
Shinkai:v2 | AnimeGANv2 | Google Landmark v2 + Shinkai style | GeneratorV2_gldv2_Shinkai.pt |
Arcane:v2 | AnimeGANv2 | FFHQ faces + Arcane style | GeneratorV2_ffhq_Arcane_210624_e350.pt |
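The released .pt files can also be inspected with plain torch.load before wiring them into your own code. The snippet below is a generic sketch: the filename is an example taken from the table above, and the checkpoint key layout is an assumption, so print it rather than relying on it.

```python
import torch

# Example filename from the table above; download the weight file you want first.
ckpt = torch.load("GeneratorV2_gldv2_Hayao.pt", map_location="cpu")

# The file may be a raw state_dict or a dict wrapping one (assumption) -- inspect it.
state_dict = ckpt.get("model_state_dict", ckpt) if isinstance(ckpt, dict) else ckpt
print(type(ckpt), list(state_dict.keys())[:5])
```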
- Training notebook on Google Colab
- Inference notebook on Google Colab
wget -O anime-gan.zip https://github.com/ptran1203/pytorch-animeGAN/releases/download/v1.0/dataset_v1.zip
unzip anime-gan.zip
=> The dataset will be extracted to a folder named dataset in your current directory.
You need a video file on your machine.
Step 1. Create anime images from the video
python3 script/video_to_images.py --video-path /path/to/your_video.mp4 \
    --save-path dataset/MyCustomData/style \
    --image-size 256
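For reference, frame extraction itself is a few lines of OpenCV. The sketch below only illustrates the idea and is not the repository's script/video_to_images.py (the every_n sampling parameter is an assumption).

```python
# Minimal sketch of turning a video into style images; not the repo's script.
import os
import cv2

def extract_frames(video_path, save_dir, image_size=256, every_n=10):
    """Save every n-th frame, resized to a square image."""
    os.makedirs(save_dir, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    idx = saved = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % every_n == 0:
            cv2.imwrite(os.path.join(save_dir, f"{saved:06d}.jpg"),
                        cv2.resize(frame, (image_size, image_size)))
            saved += 1
        idx += 1
    cap.release()

extract_frames("/path/to/your_video.mp4", "dataset/MyCustomData/style")
```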
Step 2. Create an edge-smoothed version of the dataset from Step 1.
python3 script/edge_smooth.py --dataset MyCustomData --image-size 256
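Edge smoothing follows the idea from the AnimeGAN paper: detect edges in the style images, dilate them, and blur only that region, so the generator is penalized for producing hard, photo-like edges. The function below is a rough sketch of that procedure, not the repository's script/edge_smooth.py.

```python
# Rough sketch of edge smoothing (Canny edges -> dilate -> blur only around edges).
import cv2
import numpy as np

def edge_smooth(img_bgr: np.ndarray, kernel_size: int = 5) -> np.ndarray:
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 100, 200)
    # Dilate so the blur covers a small band around every edge.
    dilated = cv2.dilate(edges, np.ones((kernel_size, kernel_size), np.uint8))
    blurred = cv2.GaussianBlur(img_bgr, (kernel_size, kernel_size), 0)
    mask = (dilated > 0)[..., None]
    return np.where(mask, blurred, img_bgr)
```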
To train AnimeGAN from the command line, run train.py as follows (remove the inline # comments before running):
python3 train.py --anime_image_dir dataset/Hayao \
    --real_image_dir dataset/photo_train \
    --model v2 \         # animeGAN version, can be v1 or v2
    --batch 8 \
    --amp \              # turn on Automatic Mixed Precision training
    --init_epochs 10 \
    --exp_dir runs \
    --save-interval 1 \
    --gan-loss lsgan \   # one of [lsgan, hinge, bce]
    --init-lr 1e-4 \
    --lr-g 2e-5 \
    --lr-d 4e-5 \
    --wadvd 300.0 \      # adversarial loss weight for D
    --wadvg 300.0 \      # adversarial loss weight for G
    --wcon 1.5 \         # content loss weight
    --wgra 3.0 \         # Gram loss weight
    --wcol 30.0 \        # color loss weight
    --use_sn             # if set, use spectral normalization (default: False)
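The --gan-loss flag chooses how the discriminator's real/fake logits are turned into an adversarial loss. As a reference for what the three options typically mean (a generic sketch, not necessarily this repository's exact formulation):

```python
# Generic discriminator objectives for the three --gan-loss options (sketch only).
import torch
import torch.nn.functional as F

def d_loss(real_logits, fake_logits, kind="lsgan"):
    if kind == "lsgan":   # least-squares GAN: push real logits to 1, fake to 0
        return ((real_logits - 1) ** 2).mean() + (fake_logits ** 2).mean()
    if kind == "hinge":   # hinge loss on the margin
        return F.relu(1 - real_logits).mean() + F.relu(1 + fake_logits).mean()
    if kind == "bce":     # classic GAN with sigmoid cross-entropy
        return (F.binary_cross_entropy_with_logits(real_logits, torch.ones_like(real_logits))
                + F.binary_cross_entropy_with_logits(fake_logits, torch.zeros_like(fake_logits)))
    raise ValueError(f"unknown gan loss: {kind}")
```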
To convert images in a folder, or a single image, run inference.py, for example:
--src and --out can each be a directory or a file
python3 inference.py --weight path/to/Generator.pt \
--src dataset/test/HR_photo \
--out inference_images
To convert a video to its anime version:
Be careful when choosing --batch-size; a large batch size can cause a CUDA out-of-memory error if the video resolution is too high.
python3 inference.py --weight hayao:v2 \
    --src test_vid_3.mp4 \
    --out test_vid_3_anime.mp4 \
    --batch-size 4
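Lowering --batch-size reduces memory use because a batch is simply a stack of consecutive frames pushed through the generator at once, so peak memory scales with batch size times frame resolution. The read_batches helper below is a hypothetical illustration of that batching pattern, not the repository's video pipeline.

```python
# Hypothetical sketch of reading a video in fixed-size frame batches with OpenCV.
import cv2
import numpy as np

def read_batches(video_path, batch_size=4):
    """Yield arrays of shape (<=batch_size, H, W, 3) holding consecutive frames."""
    cap = cv2.VideoCapture(video_path)
    frames = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frames.append(frame)
        if len(frames) == batch_size:
            yield np.stack(frames)
            frames = []
    if frames:            # flush the final, possibly smaller batch
        yield np.stack(frames)
    cap.release()
```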
Input | Hayao style v2 |
---|---|
Input | Arcane |
---|---|