
MonoGaussianAvatar: Monocular Gaussian Point-based Head Avatar

Yufan Chen†,1, Lizhen Wang2, Qijing Li2, Hongjiang Xiao3, Shengping Zhang*,1, Hongxun Yao1, Yebin Liu2

1Harbin Institute of Technology   2Tsinghua University   3Communication University of China
*Corresponding author   †Work done during an internship at Tsinghua University

Getting Started

  • Create a conda or Python environment and activate it, e.g., conda create -n monogshead python=3.9; conda activate monogshead.
  • Install PyTorch 1.11.0 with conda install pytorch==1.11.0 torchvision==0.12.0 torchaudio==0.11.0 cudatoolkit=11.3 -c pytorch. This version works with both PyTorch3D and functorch.
  • Install PyTorch3D:
conda install -c fvcore -c iopath -c conda-forge fvcore iopath
conda install pytorch3d -c pytorch3d
  • Install the other requirements: cd ../monogaussianavatar; pip install -r requirement.txt; pip install -U face-detection-tflite
  • Install the Gaussian rasterizer:
cd submodules/
git clone https://github.com/graphdeco-inria/gaussian-splatting --recursive
cd gaussian-splatting/
pip install -e submodules/diff-gaussian-rasterization
cd ../..
  • Download the FLAME model (choose FLAME 2020), unzip it, and copy 'generic_model.pkl' into ./code/flame/FLAME2020
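After these steps, a quick sanity check can catch a misplaced FLAME file before training starts. This is a minimal sketch, assuming it is run from the repository root:

```shell
# Verify the FLAME 2020 model file is where the code expects it
# (path taken from the step above).
FLAME_PKL=./code/flame/FLAME2020/generic_model.pkl
if [ -f "$FLAME_PKL" ]; then
    echo "FLAME model found at $FLAME_PKL"
else
    echo "Missing $FLAME_PKL -- unzip FLAME 2020 and copy generic_model.pkl there" >&2
fi
```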

Preparing dataset

Our data format is the same as IMavatar's. You can download a preprocessed dataset from Google Drive (subjects 1 and 2).

If you'd like to generate your own dataset, please follow the instructions in the IMavatar repo.

Link your dataset folder to ./data/datasets and your experiment output folder to ./data/experiments.
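One way to create those links on Linux/macOS; the /path/to/... source paths are placeholders, not paths from this repo:

```shell
# Point ./data/datasets and ./data/experiments at your actual storage.
# Replace the /path/to/your/... placeholders with real locations.
mkdir -p ./data
ln -sfn /path/to/your/datasets ./data/datasets
ln -sfn /path/to/your/experiments ./data/experiments
```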

Pre-trained model

Download a pretrained model from . Uncompress it and put it into the experiment folder ./data/experiments.

Training

python scripts/exp_runner.py --conf ./confs/subject1.conf [--is_continue]

Evaluation

Set the --is_eval flag for evaluation. Optionally set --checkpoint (otherwise the latest checkpoint is used) and --load_path.

python scripts/exp_runner.py --conf ./confs/subject1.conf --is_eval [--checkpoint 60] [--load_path ...]
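For reference, the flag surface used by the training and evaluation commands above can be sketched with argparse. This is a hypothetical reconstruction, not the actual contents of scripts/exp_runner.py, which may define its options differently:

```python
import argparse

# Hypothetical sketch of the flags seen in the commands above;
# the real scripts/exp_runner.py may differ.
parser = argparse.ArgumentParser(description="MonoGaussianAvatar runner (sketch)")
parser.add_argument("--conf", required=True, help="path to a .conf experiment config")
parser.add_argument("--is_continue", action="store_true", help="resume training")
parser.add_argument("--is_eval", action="store_true", help="run evaluation instead of training")
parser.add_argument("--checkpoint", default="latest", help="checkpoint epoch to load")
parser.add_argument("--load_path", default=None, help="explicit path to load a model from")

# Mirrors the evaluation command shown above.
args = parser.parse_args(["--conf", "./confs/subject1.conf", "--is_eval", "--checkpoint", "60"])
print(args.conf, args.is_eval, args.checkpoint)
```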

GPU requirement

We train our models on a single NVIDIA RTX 3090 GPU (24 GB).

Citation

If you find our code or paper useful, please cite as:

@inproceedings{chen2024monogaussianavatar,
  title={MonoGaussianAvatar: Monocular Gaussian Point-based Head Avatar},
  author={Chen, Yufan and Wang, Lizhen and Li, Qijing and Xiao, Hongjiang and Zhang, Shengping and Yao, Hongxun and Liu, Yebin},
  booktitle={ACM SIGGRAPH 2024 Conference Papers},
  pages={1--9},
  year={2024}
}
