This repo is the official PyTorch implementation of MultiAct: Long-Term 3D Human Motion Generation from Multiple Action Labels (AAAI 2023, Oral).
- Install PyTorch and Python >= 3.8.13, then run
  ```
  sh requirements.sh
  ```
  to install the required Python packages. You should slightly change the torchgeometry kernel code following here (a sketch of the usual patch follows this list).
- Download the pre-trained model from here and unzip it in `${ROOT}/output`.
- Prepare the BABEL dataset following here.
- Prepare the SMPL-H body model following here.
- Run
  ```
  python generate.py --env gen --gpu 0 --mode gen_short
  ```
  for short-term generation.
- Run
  ```
  python generate.py --env gen --gpu 0 --mode gen_long
  ```
  for long-term generation.
- Generated motions are stored in `${ROOT}/output/gen_release/vis/`.
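The torchgeometry change referenced above is likely the widely circulated mask dtype patch in `torchgeometry/core/conversions.py`: newer PyTorch versions reject `1 - mask` on bool tensors, so the arithmetic negation inside `rotation_matrix_to_quaternion()` is replaced with logical negation (e.g. `mask_c1 = mask_d2 * (1 - mask_d0_d1)` becomes `mask_c1 = mask_d2 * ~mask_d0_d1`). Follow the linked instructions as the authority; below is only a minimal sketch of the underlying issue.

```python
# Minimal illustration of the dtype issue behind the torchgeometry patch
# (a sketch; the actual edit happens in torchgeometry/core/conversions.py).
import torch

mask = torch.rand(4) > 0.5  # comparisons yield bool tensors in modern PyTorch
print(~mask)                # patched form: logical negation works on bool
# print(1 - mask)           # original form: raises an error on bool tensors
```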
- Prepare the BABEL dataset following here.
- Unzip AMASS and the babel_v1.0_release folder into the dataset directory as below.
  ```
  ${ROOT}
  |-- dataset
  |   |-- BABEL
  |   |   |-- AMASS
  |   |   |-- babel_v1.0_release
  ```
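Once unzipped, the BABEL labels are plain JSON, so you can sanity-check the layout directly. A minimal sketch, assuming the release ships per-split files such as `train.json` (check your download for the exact names):

```python
# Peek at the BABEL annotations; the split file name is an assumption
# about the babel_v1.0_release contents, not something this repo defines.
import json

with open('dataset/BABEL/babel_v1.0_release/train.json') as f:
    anns = json.load(f)
print(len(anns), 'annotated sequences in the train split')
```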
- Prepare the SMPL-H body model from here.
- Place the human body 3D model files in the human_models directory as below.
  ```
  ${ROOT}
  |-- human_models
  |   |-- SMPLH_MALE.pkl
  |   |-- SMPLH_FEMALE.pkl
  |   |-- SMPLH_NEUTRAL.npz
  ```
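A quick way to confirm the files are readable before training or generation; the listed array names are typical of SMPL-H releases, not guaranteed:

```python
# Sanity-check the neutral SMPL-H archive by listing its stored arrays.
import numpy as np

model = np.load('human_models/SMPLH_NEUTRAL.npz', allow_pickle=True)
print(sorted(model.files))  # expect entries such as 'v_template' and 'shapedirs'
```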
- We use the body visualizer code released in this repo.
- Running requirements.sh installs the body visualizer in `${ROOT}/body_visualizer/`.
- Run
  ```
  python train.py --env train --gpu 0
  ```
- Note that running this command will overwrite the downloaded checkpoints.
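Because training overwrites the downloaded checkpoints, you may want to copy them aside first. A one-line sketch; the backup directory name is arbitrary:

```python
# Preserve the pre-trained checkpoints before train.py overwrites them.
import shutil

shutil.copytree('output', 'output_pretrained_backup')
```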
- Run
  ```
  python test.py --env test --gpu 0
  ```
- Note that the variation of the generated results depends on the random sampling of the latent vector from the estimated prior Gaussian distribution, so the evaluation results may differ slightly from the metric scores reported in our paper.
- The evaluation results are stored in the log file in `${ROOT}/output/test_release/log/`.
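If you need repeatable numbers despite the stochastic latent sampling, the generic remedy is to fix the RNG seeds before evaluation. This is only a sketch of that idea; test.py does not necessarily expose such an option:

```python
# Generic seeding sketch for reproducible sampling; not a test.py flag.
import random

import numpy as np
import torch

SEED = 0
random.seed(SEED)
np.random.seed(SEED)
torch.manual_seed(SEED)
torch.cuda.manual_seed_all(SEED)
```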
- Run
  ```
  python generate.py --env gen --gpu 0 --mode gen_short
  ```
  for short-term generation.
- Generated motions are stored in `${ROOT}/output/gen_release/vis/single_step_unseen`.
- Run
  ```
  python generate.py --env gen --gpu 0 --mode gen_long
  ```
  for long-term generation.
- Generated motions are stored in `${ROOT}/output/gen_release/vis/long_term/(exp_no)/(sample_no)/(step-by-step motion)`.
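To browse the generated samples programmatically, a small sketch that walks the output tree described above:

```python
# List every generated motion file under the long-term output directory.
from pathlib import Path

root = Path('output/gen_release/vis/long_term')
for path in sorted(root.rglob('*')):
    if path.is_file():
        print(path.relative_to(root))
```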
- Modify the environment file `${ROOT}/envs/gen.yaml` to match your purpose (a hedged example follows this list).
- Set `resume: True` in the environment file.
- Specify `resume_exp`, `resume_sample`, and `resume_step` to determine the point from which to continue the generation.
- Generated motions are stored in `${ROOT}/output/gen_release/vis/long_term/(next_exp_no)/(sample_no)/(step-by-step motion)`.
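A hedged example of what the resume block in `${ROOT}/envs/gen.yaml` might look like; the key names come from the steps above, while the values (and any surrounding keys) are illustrative assumptions:

```yaml
# Resume long-term generation from a previous run; values are illustrative.
resume: True
resume_exp: 0       # experiment number to resume from
resume_sample: 2    # sample number within that experiment
resume_step: 3      # generation step at which to continue
```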
```bibtex
@InProceedings{Lee2023MultiAct,
  author    = {Lee, Taeryung and Moon, Gyeongsik and Lee, Kyoung Mu},
  title     = {MultiAct: Long-Term 3D Human Motion Generation from Multiple Action Labels},
  booktitle = {AAAI Conference on Artificial Intelligence (AAAI)},
  year      = {2023}
}
```