Chengan He1, Jun Saito2, James Zachary2, Holly Rushmeier1, Yi Zhou2
1Yale University, 2Adobe Research
This code was developed on Ubuntu 20.04 with Python 3.9, CUDA 11.3 and PyTorch 1.9.0.
To begin with, please set up the virtual environment with Anaconda:
conda create -n nemf python=3.9
conda activate nemf
pip install -r requirements.txt
Our code relies on SMPL as the body model. You can download our processed version from here.
AMASS mocap data is used to train and evaluate our model in most of the experiments. After downloading its raw data, we run the processing scripts provided by HuMoR to unify frame rates, detect contacts, and remove problematic sequences. We then split the data into training, validation and testing sets by running:
python src/datasets/amass.py amass.yaml
You can also download our processed AMASS dataset from here.
The dog mocap data from MANN is used to evaluate the reconstruction capability of our model on long sequences; the raw data can be downloaded from here. To process the data, we first manually remove some sequences captured on uneven terrain and then run:
python src/datasets/animal.py dog_mocap.yaml
Our processed data can be downloaded from here.
Note: All the preprocessed data mentioned above should be extracted to the data directory. Otherwise, you need to update the configuration files to point to the path where you extracted it.
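For example, a downloaded archive can be unpacked into place from the command line (the archive name below is only illustrative):

mkdir -p data
unzip amass_processed.zip -d data/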
We provide a pre-trained generative NeMF model and a global motion predictor. Download and extract them to the outputs directory.
We deploy our trained model as a generic motion prior to solve different motion tasks. To run the applications we showed in the paper, use:
python src/application.py --config application.yaml --task {application task} --save_path {save path}
Here we implement several applications, including motion_reconstruction, latent_interpolation, motion_inbetweening, motion_renavigating, aist_inbetweening, and time_translation. For aist_inbetweening, you need to download the motion data and dance videos from the AIST++ Dataset and place them under data/aist. You also need to have FFmpeg installed to process these videos.
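For example, to run motion in-betweening with the pre-trained model and save the results, you could use (the save path below is only illustrative):

python src/application.py --config application.yaml --task motion_inbetweening --save_path outputs/motion_inbetweening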
To evaluate our trained model on different tasks, use:
python src/evaluation.py --config evaluation.yaml --task {evaluation task} --load_path {load path}
The tasks we implement here include ablation_study, comparison, inbetween_benchmark, super_sampling, and smoothness_test, which cover the tables and figures we showed in the paper. The quantitative results will be saved in .csv files.
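For example, to run the in-betweening benchmark with the pre-trained model, you could use (the load path below is only illustrative):

python src/evaluation.py --config evaluation.yaml --task inbetween_benchmark --load_path outputs/generative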
To evaluate FID and Diversity, we provide a pre-trained feature extractor here, which is essentially an auto-encoder. You can train a new one on your own data by running:
python src/train.py evaluation.yaml
In our paper we proposed three different models: a single-motion NeMF that overfits specific motion sequences, a generative NeMF that learns a motion prior, and a global motion predictor that generates root translations separately. Below we describe how to train these three models from scratch.
To train the single-motion NeMF on AMASS sequences, use:
python src/train_basic.py basic.yaml
The code will obtain sequences of 32, 64, 128, 256, and 512 frames and reconstruct them at 30, 60, 120, and 240 fps.
To train the model on dog mocap sequences, use:
python src/train_basic.py basic_dog.yaml
To train the generative NeMF on the AMASS dataset, use:
python src/train.py generative.yaml
To train the global motion predictor on the AMASS dataset, use:
python src/train_gmp.py gmp.yaml
Our codebase outputs .npz data following the AMASS data format, so the results can be visualized directly with the SMPL-X Blender add-on. To render skeleton animations of .bvh files, we use the rendering scripts provided in deep-motion-editing.
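Since the outputs follow the AMASS format, a quick sanity check before importing a result into Blender is to list the fields stored in the file (the file path below is only illustrative):

python -c "import numpy as np; data = np.load('outputs/sample.npz'); print(list(data.keys()))"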
- Our BVH I/O code is adapted from the work of Daniel Holden.
- The code in src/rotations.py is adapted from PyTorch3D.
- The code in src/datasets/amass.py is adapted from AMASS.
- The code in src/nemf/skeleton.py is taken from deep-motion-editing, and our code structure is also based on it.
- Part of the code in src/evaluation.py is adapted from Action2Motion.
- Part of the code in src/utils.py is taken from HuMoR.
- The code in src/soft_dtw_cuda.py is taken from pytorch-softdtw-cuda.
Huge thanks to these great open-source projects!
If you found this code or paper useful, please consider citing:
@inproceedings{he2022nemf,
author = {He, Chengan and Saito, Jun and Zachary, James and Rushmeier, Holly and Zhou, Yi},
title = {NeMF: Neural Motion Fields for Kinematic Animation},
booktitle = {NeurIPS},
year = {2022}
}
If you run into any problems or have questions, please create an issue or contact chengan.he@yale.edu.