Dual Encoder GAN Inversion for High-Fidelity 3D Head Reconstruction from Single Images

Accepted to NeurIPS 2024.

Paper | Project Website | BibTeX

Authors

Ahmet Berke Gökmen*, Bahri Batuhan Bilecen*, Ayşegül Dündar
* Indicates Equal Contribution

Results

grid_images_last.mp4

TODO

  • Release Website
  • Release Paper
  • Release Code
  • Release Checkpoints

Requirements and installation

  • Make sure you have 64-bit Python 3.8, PyTorch 1.11 (or above), and CUDA 11.3 (or above).
  • Preferably, create a new environment via conda or venv and activate the environment.
  • Clone repository: git clone --recursive https://github.com/berkegokmen1/dual-enc-3d-gan-inversion
  • Install pip dependencies: cd ./dual-enc-3d-gan-inversion && pip install -r requirements.txt
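Before installing, it can help to confirm the interpreter meets the Python requirement above. A minimal sketch; `check_python` is an illustrative helper, not part of the repository:

```python
import sys

def check_python(min_version=(3, 8)):
    """Return True if this interpreter meets the minimum version listed above."""
    return sys.version_info[:2] >= min_version

if __name__ == "__main__":
    if not check_python():
        raise SystemExit("64-bit Python 3.8+ is required")
    print("Python version OK:", sys.version.split()[0])
```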

Checkpoints

Network                    Filename
PanoHead                   easy-khair-180-gpc0.8-trans10-025000.pkl
Latent Avg.                latent_avg.npy
Visible Net                visible_net.pt
Occluded Net               occluded_net.pt
IR-SE50 Model              model_ir_se50.pth
CurricularFace Backbone    CurricularFace_Backbone.pth
MTCNN                      mtcnn/

Make sure to update configs/paths_config.py accordingly.
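The exact variable names in configs/paths_config.py depend on the repository, but the mapping will look roughly like this sketch (the directory layout and key names below are placeholder assumptions):

```python
# Hypothetical sketch of the mapping configs/paths_config.py expects;
# the real variable names and directory layout in the repo may differ.
model_paths = {
    "panohead": "pretrained_models/easy-khair-180-gpc0.8-trans10-025000.pkl",
    "latent_avg": "pretrained_models/latent_avg.npy",
    "visible_net": "pretrained_models/visible_net.pt",
    "occluded_net": "pretrained_models/occluded_net.pt",
    "ir_se50": "pretrained_models/model_ir_se50.pth",
    "curricular_face": "pretrained_models/CurricularFace_Backbone.pth",
    "mtcnn": "pretrained_models/mtcnn/",
}
```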

Dataset preparation

1- Download the FFHQ Dataset and the LPFF Dataset, and combine the two datasets following LPFF's approach.

2- Then follow PanoHead's approach for pose extraction and face alignment. For this, complete PanoHead's setup procedure and make sure you do not skip the setup of 3DDFA_V2.

3- Run the background removal tool on the combined dataset via ./remove-background.sh, after setting the appropriate paths.

4- Lastly, run ./gen_synth_data.sh to create the synthetic dataset detailed in the paper, again making sure to set the appropriate paths.

Training

You can run the following commands to train the two encoders separately.

./train_occluded.sh
./train_visible.sh

Inference

You can run the following command to run inference with the trained checkpoints (or the downloaded ones) and generate images, videos, and meshes.

./infer.sh

Contributions

Pull requests are welcome.

Questions

You may reach me through LinkedIn.

BibTeX

@misc{bilecen2024dualencoderganinversion,
      title={Dual Encoder GAN Inversion for High-Fidelity 3D Head Reconstruction from Single Images}, 
      author={Bahri Batuhan Bilecen and Ahmet Berke Gokmen and Aysegul Dundar},
      year={2024},
      eprint={2409.20530},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2409.20530}, 
}

License

Copyright 2024 Bilkent DLR

Licensed under the Apache License, Version 2.0 (the "License")

