This repository contains the key source code for our paper in the ECCV'24 CADL Workshop:

LightAvatar: Efficient Head Avatar as Dynamic Neural Light Field [arxiv] [pdf]

LightAvatar provides a proof-of-concept framework that employs a neural light field (NeLF) to build photorealistic 3D head avatars. It takes 3DMM parameters and a camera pose as input and renders the RGB image via a single network forward pass, with no mesh required as input. Thanks to the carefully designed NeLF network, LightAvatar can render 512x512 images at 174 FPS on a consumer-grade GPU (RTX 3090) with a stock deep learning framework.
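To make the "single network forward pass" idea above concrete, here is a toy sketch (not the released TensorFlow modules; all layer sizes, parameter dimensions, and names are hypothetical) of a NeLF-style mapping from per-pixel rays plus shared 3DMM/pose conditioning to RGB, using a small NumPy MLP:

```python
import numpy as np

def render_rays(ray_dirs, expr_params, cam_pose, weights):
    """Toy NeLF forward pass: concatenate each ray direction with the
    shared 3DMM expression parameters and camera pose, then map the
    result to RGB with a small MLP. All shapes are hypothetical."""
    n = ray_dirs.shape[0]
    cond = np.concatenate([expr_params, cam_pose])      # shared conditioning vector
    x = np.concatenate([ray_dirs, np.tile(cond, (n, 1))], axis=1)
    for W, b in weights[:-1]:
        x = np.maximum(x @ W + b, 0.0)                  # ReLU hidden layers
    W, b = weights[-1]
    rgb = 1.0 / (1.0 + np.exp(-(x @ W + b)))            # sigmoid -> RGB in [0, 1]
    return rgb

# Hypothetical sizes: 3-D ray direction, 64 expression params, 6-DoF pose.
rng = np.random.default_rng(0)
dims = [3 + 64 + 6, 128, 128, 3]
weights = [(rng.standard_normal((i, o)) * 0.1, np.zeros(o))
           for i, o in zip(dims[:-1], dims[1:])]
rgb = render_rays(rng.standard_normal((512 * 512, 3)),
                  rng.standard_normal(64), rng.standard_normal(6), weights)
print(rgb.shape)  # (262144, 3): one forward pass yields RGB for every pixel
```

The point of the sketch is the interface, not the architecture: unlike NeRF-style volume rendering, each pixel's color comes from a single network evaluation, which is what enables the high frame rate reported above.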
Huan Wang1,2,†, Feitong Tan2, Ziqian Bai2,3, Yinda Zhang2, Shichen Liu2, Qiangeng Xu2, Menglei Chai2, Anish Prabhu2, Rohit Pandey, Sean Fanello2, Zeng Huang2, and Yun Fu1
1Northeastern University, USA 2Google, USA 3Simon Fraser University, Canada
†Work done when Huan was an intern at Google.
Corresponding author: Huan Wang, huan.wang.cool@gmail.com.
The code is based on TensorFlow. Due to Google IP restrictions, we are unable to release the complete code. Here we release the key code modules (model architecture and loss function) for reference. Should you have any questions, feel free to raise an issue or contact Huan Wang (huan.wang.cool@gmail.com).
If our work or code helps you, please consider citing our paper. Thank you!
@inproceedings{wang2024lightavatar,
author = {Huan Wang and Feitong Tan and Ziqian Bai and Yinda Zhang and Shichen Liu and Qiangeng Xu and Menglei Chai and Anish Prabhu and Rohit Pandey and Sean Fanello and Zeng Huang and Yun Fu},
title = {LightAvatar: Efficient Head Avatar as Dynamic Neural Light Field},
booktitle = {ECCV Workshop},
year = {2024}
}