This is the official repository for
Anurag Ranjan, Kwang Moo Yi, Rick Chang, Oncel Tuzel. FaceLit: Neural 3D Relightable Faces. CVPR 2023.
Demo video: interp_view_light.mp4
conda env create -f facelit/environment.yml
conda activate facelit
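A quick sanity check that the environment is active (this assumes the environment ships PyTorch with CUDA support, as in EG3D):
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"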
Download pretrained models
bash download_models.sh
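The checkpoints are expected under pretrained/ (the exact filenames depend on the release); you can verify the download with:
ls pretrained/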
Generate video demos.
python gen_videos.py --outdir=out --trunc=0.7 --seeds=0-3 --grid=2x2 --network=pretrained/NETWORK.pkl --light_cond=True --entangle=[camera, light, lightcam, specular, specularcam]
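The --entangle flag takes one of the listed values; judging by the option names, it selects whether the demo interpolates the camera, the lighting, or a combination of the two (optionally with specular effects). For example, a lighting-only demo with the same placeholder checkpoint:
python gen_videos.py --outdir=out --trunc=0.7 --seeds=0-3 --grid=2x2 --network=pretrained/NETWORK.pkl --light_cond=True --entangle=light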
Train with a neural rendering resolution of 64x64
python train.py --outdir=out --cfg=ffhq --data=DATA_DIR --gpus=8 --batch=32 --gamma=1 --gen_pose_cond=True --gen_light_cond=True --light_mode=[diffuse, full] --normal_reg_weight=1e-4 --neural_rendering_resolution_final=64
Fine-tune with a neural rendering resolution of 128x128
python train.py --outdir=out --cfg=ffhq --data=DATA_DIR --gpus=8 --batch=32 --gamma=1 --gen_pose_cond=True --gen_light_cond=True --light_mode=[diffuse, full] --normal_reg_weight=1e-4 --neural_rendering_resolution_final=128 --resume=pretrained/NETWORK.pkl
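Concretely, the two-stage recipe might look like the following (DATA_DIR remains a placeholder, --light_mode=full is chosen here, and PATH_TO_STAGE1_SNAPSHOT.pkl stands for whichever snapshot the first run wrote):
python train.py --outdir=out --cfg=ffhq --data=DATA_DIR --gpus=8 --batch=32 --gamma=1 --gen_pose_cond=True --gen_light_cond=True --light_mode=full --normal_reg_weight=1e-4 --neural_rendering_resolution_final=64
python train.py --outdir=out --cfg=ffhq --data=DATA_DIR --gpus=8 --batch=32 --gamma=1 --gen_pose_cond=True --gen_light_cond=True --light_mode=full --normal_reg_weight=1e-4 --neural_rendering_resolution_final=128 --resume=PATH_TO_STAGE1_SNAPSHOT.pkl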
We use the dataset from EG3D and obtain camera and illumination parameters using DECA.
git clone https://github.com/YadiraF/DECA.git
cd DECA
git checkout 022ed52
bash install_conda.sh
conda activate deca-env
bash fetch_data.sh
Apply our patch
git apply FACELIT_DIR/third_party/deca.patch
To generate the DECA fits, run generate_deca_fits.sh.
Evaluating the models requires setting up DECA (see above) and Deep3DFaceRecon (see below).
Use the following fork to set up Deep3DFaceRecon_pytorch:
git clone https://github.com/Xiaoming-Zhao/Deep3DFaceRecon_pytorch
To run the evaluation, run eval_metrics.sh. Note that due to randomness in the generation process, the metrics reported might vary by ±2%.
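Given that run-to-run variance, it can help to repeat the evaluation a few times and compare the numbers. A simple loop for doing so (this assumes eval_metrics.sh needs no arguments; check the script before running):
for run in 1 2 3; do
  echo "evaluation run ${run}"
  bash eval_metrics.sh
done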
@inproceedings{ranjan2023,
author = {Anurag Ranjan and Kwang Moo Yi and Rick Chang and Oncel Tuzel},
title = {FaceLit: Neural 3D Relightable Faces},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
year = {2023}
}
This code is based on EG3D; we thank the authors for their contribution. We also use portions of the code from GMPI.