The articulated 3D pose of the human body is high-dimensional and complex. Many applications rely on a prior distribution over valid human poses, but modeling this distribution is difficult. Here we present VPoser, a learning-based variational human pose prior trained on a large dataset of human poses represented as SMPL bodies. This body prior can be used as part of an Inverse Kinematics (IK) solver for many tasks, such as fitting a body model to images; it is the main contribution of this repository and is used in SMPLify-X. VPoser has the following features:
- defines a prior over SMPL pose parameters
- is end-to-end differentiable
- provides a way to penalize impossible poses while admitting valid ones
- effectively models correlations among the joints of the body
- introduces an efficient, low-dimensional representation for human pose
- can be used to generate valid 3D human poses for data-dependent tasks
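To make the "penalize impossible poses while admitting valid ones" idea concrete, here is a minimal, hypothetical sketch (not the actual VPoser API): a VAE-style prior maps a high-dimensional pose vector to a low-dimensional latent code, and the negative log-density of that code under a standard normal gives a pose-validity penalty. The linear "encoder" and the dimensions below are illustrative stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

POSE_DIM = 63    # 21 body joints x 3 axis-angle components (SMPL body pose)
LATENT_DIM = 32  # VPoser-style low-dimensional latent space

# Stand-in for a trained encoder: a fixed random linear projection.
W_enc = rng.standard_normal((LATENT_DIM, POSE_DIM)) / np.sqrt(POSE_DIM)

def encode(pose):
    """Map a pose vector to its latent code (illustrative linear 'encoder')."""
    return W_enc @ pose

def prior_penalty(pose):
    """Negative log-density of the latent code under N(0, I), up to a constant.

    Small for poses whose codes sit near the prior's mode, large otherwise;
    being differentiable, it can be minimized alongside other fitting losses.
    """
    z = encode(pose)
    return 0.5 * float(z @ z)

plausible = 0.05 * rng.standard_normal(POSE_DIM)  # small joint rotations
extreme = 3.0 * rng.standard_normal(POSE_DIM)     # wildly twisted joints

print(prior_penalty(plausible) < prior_penalty(extreme))
```

Because the penalty is a smooth function of the pose, it plugs directly into gradient-based optimizers, which is what makes the prior useful inside IK and image-fitting pipelines.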
- Description
- Installation
- Tutorials
- Advanced IK Capabilities
- Train VPoser
- Citation
- License
- Acknowledgments
- Contact
- FAQ
Requirements
- Python 3.7
- PyTorch 1.7.1
Clone this repo and run the following from the root folder:
pip install -r requirements.txt
python setup.py develop
Given the positions of some key points, one can recover the necessary body-joint rotations via inverse kinematics (IK). The keypoints can be either 3D (joint locations, 3D mocap markers on the body surface) or 2D (as in SMPLify-X). We provide a comprehensive IK engine with a flexible keypoint-definition interface, demonstrated in the tutorials:
One can define keypoints on the SMPL body, e.g. joints or any locations relative to the body surface, and fit body model parameters to them while exploiting VPoser's efficient learned pose parameterization. The supported features are:
- Batch enabled
- Flexible key point definition
- LBFGS with Wolfe line search and the Adam optimizer already supported
- No need to initialize the body (optimization always starts from zero)
- Optimizes body pose, translation, and global orientation jointly and iteratively
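The core idea behind keypoint-based IK can be sketched on a toy problem. The following is an illustrative example, not the repository's engine: it fits the two joint angles of a planar 2-link arm to a target keypoint by gradient descent on the squared keypoint error, starting from zero just as the engine above does. All names and constants here are hypothetical.

```python
import numpy as np

LINK1, LINK2 = 1.0, 1.0  # link lengths of the toy arm

def fk(theta):
    """Forward kinematics: joint angles -> end-effector (keypoint) position."""
    t1, t2 = theta
    x = LINK1 * np.cos(t1) + LINK2 * np.cos(t1 + t2)
    y = LINK1 * np.sin(t1) + LINK2 * np.sin(t1 + t2)
    return np.array([x, y])

def jacobian(theta):
    """Analytic Jacobian of fk with respect to the joint angles."""
    t1, t2 = theta
    return np.array([
        [-LINK1 * np.sin(t1) - LINK2 * np.sin(t1 + t2), -LINK2 * np.sin(t1 + t2)],
        [ LINK1 * np.cos(t1) + LINK2 * np.cos(t1 + t2),  LINK2 * np.cos(t1 + t2)],
    ])

target = np.array([1.2, 0.8])  # observed keypoint to fit
theta = np.zeros(2)            # start from zero, no initialization needed

for _ in range(500):
    err = fk(theta) - target
    grad = jacobian(theta).T @ err  # gradient of 0.5 * ||fk(theta) - target||^2
    theta -= 0.1 * grad             # plain gradient descent step

print(np.round(fk(theta), 3))  # close to the target keypoint
```

The repository's engine follows the same pattern at full body scale: a differentiable body model plays the role of `fk`, LBFGS or Adam replaces the hand-rolled descent loop, and the VPoser latent code replaces raw joint angles so the search stays on the manifold of valid poses.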
We train VPoser as a variational autoencoder that learns a latent representation of human pose and regularizes the distribution of the latent codes toward a standard normal distribution. We train our prior on data from the AMASS dataset, which aggregates the SMPL pose parameters of various publicly available human motion capture datasets.
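The training objective described above can be written down compactly. The sketch below is not the actual training code; it only shows the standard VAE loss shape: a reconstruction term plus the closed-form KL divergence between the encoder's diagonal Gaussian q(z|pose) = N(mu, diag(sigma^2)) and the standard normal prior N(0, I). The KL weight is an illustrative placeholder.

```python
import numpy as np

def kl_to_standard_normal(mu, log_var):
    """KL( N(mu, diag(exp(log_var))) || N(0, I) ), summed over latent dims."""
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)

def vae_loss(pose, recon_pose, mu, log_var, kl_weight=5e-3):
    """Reconstruction error plus weighted KL regularizer (weight is illustrative)."""
    recon = np.sum((pose - recon_pose) ** 2)
    return recon + kl_weight * kl_to_standard_normal(mu, log_var)

# A latent distribution matching the prior exactly incurs zero KL penalty.
mu = np.zeros(32)
log_var = np.zeros(32)  # sigma = 1
print(kl_to_standard_normal(mu, log_var))  # 0.0
```

Regularizing the latent codes toward N(0, I) is what lets the trained decoder double as a pose generator: sampling z from a standard normal and decoding it yields valid 3D human poses.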
Please cite the following paper if you use this code directly or indirectly in your research/projects:
@inproceedings{SMPL-X:2019,
  title = {Expressive Body Capture: 3D Hands, Face, and Body from a Single Image},
  author = {Pavlakos, Georgios and Choutas, Vasileios and Ghorbani, Nima and Bolkart, Timo and Osman, Ahmed A. A. and Tzionas, Dimitrios and Black, Michael J.},
  booktitle = {Proceedings IEEE Conf. on Computer Vision and Pattern Recognition (CVPR)},
  year = {2019}
}
Also note that if you train your own VPoser on the AMASS dataset for your research, please follow its respective citation guidelines.
The code in this repository is developed by Nima Ghorbani while at Perceiving Systems, Max-Planck Institute for Intelligent Systems, Tübingen, Germany.
If you have any questions you can contact us at smplx@tuebingen.mpg.de.
For commercial licensing, contact ps-licensing@tue.mpg.de
Software Copyright License for non-commercial scientific research purposes. Please read carefully the terms and conditions and any accompanying documentation before you download and/or use the SMPL-X/SMPLify-X model, data and software, (the "Model & Software"), including 3D meshes, blend weights, blend shapes, textures, software, scripts, and animations. By downloading and/or using the Model & Software (including downloading, cloning, installing, and any other use of this github repository), you acknowledge that you have read these terms and conditions, understand them, and agree to be bound by them. If you do not agree with these terms and conditions, you must not download and/or use the Model & Software. Any infringement of the terms of this agreement will automatically terminate your rights under this License.