Left: SMPL-X human mesh registered with SMPLify-X, middle: SMPLpix render, right: ground truth. Video.

SMPLpix: Neural Avatars from 3D Human Models

The SMPLpix neural rendering framework combines deformable 3D models such as SMPL-X with the power of image-to-image translation frameworks (also known as pix2pix models).
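
The pipeline boils down to supervised image-to-image translation: a rasterized SMPL-X mesh image is fed to a UNet-style network trained to reproduce the matching ground-truth video frame. The snippet below is a minimal PyTorch sketch of this idea, not the repository's actual model; the toy encoder-decoder and plain L1 loss are illustrative stand-ins for the full UNet (see the --n_unet_blocks flag below) and its training losses.

import torch
import torch.nn as nn

# Toy encoder-decoder standing in for the SMPLpix UNet; the
# architecture and plain L1 loss here are illustrative only.
class TinyTranslator(nn.Module):
    def __init__(self, in_ch=3, out_ch=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, out_ch, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x)

model = TinyTranslator()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One training step on a dummy pair: the network learns to map a
# rasterized SMPL-X mesh image to the corresponding video frame.
render = torch.rand(1, 3, 256, 256)  # input: mesh rasterization
frame = torch.rand(1, 3, 256, 256)   # target: ground-truth frame
optimizer.zero_grad()
loss = nn.functional.l1_loss(model(render), frame)
loss.backward()
optimizer.step()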

Please check our WACV 2021 paper or a 5-minute explanatory video for more details on the framework.

Important note: this repository is a re-implementation of the original framework, created by the same author after the end of the internship. It does not contain the original Amazon multi-subject, multi-view training data and code, and it uses full mesh rasterizations as inputs rather than point projections (as described here).

Demo

Process a video into a SMPLpix dataset: Open In Colab
Train SMPLpix: Open In Colab

Prepare the data

(Figure: OpenPose and SMPLify-X preprocessing demo)

We provide a Colab notebook for preparing a SMPLpix training dataset. It will allow you to create your own neural avatar given a monocular video of a human moving in front of the camera.
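
The notebook runs the full preprocessing pipeline (frame extraction, keypoint detection with OpenPose, SMPLify-X fitting). As a taste of the first step, here is a minimal sketch that splits a video into frames, assuming OpenCV is installed; all paths are illustrative:

import os

import cv2  # pip install opencv-python

# Split the source video into individual frames; paths are illustrative.
video_path = 'my_video.mp4'
frames_dir = 'frames'
os.makedirs(frames_dir, exist_ok=True)

capture = cv2.VideoCapture(video_path)
frame_idx = 0
while True:
    ok, frame = capture.read()
    if not ok:  # end of video (or read error)
        break
    cv2.imwrite(os.path.join(frames_dir, f'{frame_idx:06d}.png'), frame)
    frame_idx += 1
capture.release()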

Run demo training

We provide some preprocessed data which allows you to run and test the training pipeline right away:

git clone https://github.com/sergeyprokudin/smplpix
cd smplpix
python setup.py install
python smplpix/train.py --workdir='/content/smplpix_logs/' \
                        --data_url='https://www.dropbox.com/s/coapl05ahqalh09/smplpix_data_test_final.zip?dl=0'
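
Here, --data_url points the script at the demo archive on Dropbox. Should you prefer to download and unpack the archive yourself, a minimal sketch using the standard requests and zipfile modules (note the dl=1 query parameter, which asks Dropbox for a direct file download; the target directory is illustrative):

import io
import zipfile

import requests

# Dropbox serves the raw file when dl=1 is used instead of dl=0.
url = 'https://www.dropbox.com/s/coapl05ahqalh09/smplpix_data_test_final.zip?dl=1'
response = requests.get(url)
response.raise_for_status()

# Unpack the archive; the target path is illustrative.
with zipfile.ZipFile(io.BytesIO(response.content)) as archive:
    archive.extractall('/content/smplpix_data')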

Train on your own data

You can train SMPLpix on your own data by specifying the path to the data root directory:

python smplpix/train.py --workdir='/content/smplpix_logs/' \
                        --data_dir='/path/to/data'

The directory should contain train, validation and test folders, each of which should contain input and output folders. Check the structure of the demo dataset for reference.
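
For reference, the expected layout is sketched below (the root name and image file names are arbitrary):

data_root/
    train/
        input/     (e.g. rendered mesh images)
        output/    (matching ground-truth frames)
    validation/
        input/
        output/
    test/
        input/
        output/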

You can also specify various training parameters via the command line. For example, to reproduce the results of the demo video:

python smplpix/train.py --workdir='/content/smplpix_logs/' \
                        --data_url='https://www.dropbox.com/s/coapl05ahqalh09/smplpix_data_test_final.zip?dl=0' \
                        --downsample_factor=2 \
                        --n_epochs=500 \
                        --sched_patience=2 \
                        --batch_size=4 \
                        --n_unet_blocks=5 \
                        --n_input_channels=3 \
                        --n_output_channels=3 \
                        --eval_every_nth_epoch=10

Check args.py for the full list of parameters.

More examples

Animating with novel poses


Left: poses from the test video sequence, right: SMPLpix renders. Video.

Rendering faces


Left: FLAME face model inferred with DECA, middle: ground truth test video, right: SMPLpix render. Video.

Thanks to Maria Paola Forte for providing the sequence.

Few-shot artistic neural style transfer


Left: rendered AMASS motion sequence, right: generated SMPLpix animations. Full video. See the explanatory video for details.

Credits to Alexander Kabarov for providing the training sketches.

Citation

If you find our work useful in your research, please consider citing:

@inproceedings{prokudin2021smplpix,
  title={SMPLpix: Neural Avatars from 3D Human Models},
  author={Prokudin, Sergey and Black, Michael J and Romero, Javier},
  booktitle={Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision},
  pages={1810--1819},
  year={2021}
}

License

See the LICENSE file.
