Code for the paper "The SFU-Store-Nav 3D Virtual Human Platform for Human-Aware Robotics", presented at the ICRA 2021 workshop on Long-term Human Motion Prediction.
This repo contains code to create 3D simulations for the SFU-Store-Nav dataset. The retail scene was re-created in Blender, and the 3D body shape and pose estimations were combined with motion capture data to have virtual humans interact with the scene as in the original experiment. We also propose an LSTM Variational Autoencoder that learns a latent representation of human pose and regularizes the distribution of the latent code to be a normal distribution. We can then predict future poses given an input sequence.
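For orientation, below is a minimal sketch of that idea, assuming SMPL-style per-frame pose vectors; the class name, dimensions, and loss weighting are hypothetical, and the actual model lives in the pose forecasting notebook:

```python
import torch
import torch.nn as nn


class PoseLSTMVAE(nn.Module):
    """Encode a past pose sequence into a latent code z ~ N(mu, sigma),
    then decode z into a sequence of future poses."""

    def __init__(self, pose_dim=63, hidden_dim=128, latent_dim=32):
        super().__init__()
        self.encoder = nn.LSTM(pose_dim, hidden_dim, batch_first=True)
        self.to_mu = nn.Linear(hidden_dim, latent_dim)
        self.to_logvar = nn.Linear(hidden_dim, latent_dim)
        self.from_z = nn.Linear(latent_dim, hidden_dim)
        self.decoder = nn.LSTM(pose_dim, hidden_dim, batch_first=True)
        self.to_pose = nn.Linear(hidden_dim, pose_dim)

    def forward(self, past, horizon):
        # past: (batch, seq_len, pose_dim)
        _, (h_n, _) = self.encoder(past)
        mu, logvar = self.to_mu(h_n[-1]), self.to_logvar(h_n[-1])
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        h = self.from_z(z).unsqueeze(0)            # init decoder hidden state from z
        state = (h, torch.zeros_like(h))
        frame, preds = past[:, -1:], []
        for _ in range(horizon):                   # autoregressive rollout of future poses
            out, state = self.decoder(frame, state)
            frame = self.to_pose(out)
            preds.append(frame)
        return torch.cat(preds, dim=1), mu, logvar


def elbo_loss(pred, target, mu, logvar, beta=1e-3):
    # Reconstruction + KL(q(z|x) || N(0, I)) -- the "regularize to normal" term.
    rec = nn.functional.mse_loss(pred, target)
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + beta * kl
```

The KL term is what pulls the latent code toward a standard normal distribution; sampling different `z` values at test time then yields different plausible future pose sequences for the same input.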
This code is built upon VPoser, and we use their methods to visualize the SMPL body.
If you only want to run the pose forecasting, you can skip to that section. Otherwise, the first few steps explain how to gather the data and visualize it in Blender.
Get the original SFU-Store-Nav dataset from here.
We use VIBE to get the SMPL body parameters and meshes. You will need to download the SMPL body from here. You can use the Colab demo here, but modify it to add the `--save_obj` flag:

```
!python demo.py --vid_file VID_NAME.avi --output_folder OUTPUT_NAME --save_obj
```

After running, save the corresponding `.obj` files and the `.pkl` file.
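If you want to sanity-check the `.pkl` before moving on to Blender, a sketch along these lines should work; the output path and key names follow the VIBE README and may differ across versions:

```python
import joblib

# VIBE saves one entry per tracked person; each entry holds per-frame
# SMPL parameters ('pose', 'betas') among other outputs.
results = joblib.load("OUTPUT_NAME/VID_NAME/vibe_output.pkl")  # path may vary
for person_id, track in results.items():
    print(person_id, track["pose"].shape, track["betas"].shape)
```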
We use the Stop Motion Obj plugin in Blender to import the body meshes. Follow the instructions, then in Blender:
- Click File > Import > Mesh Sequence
- Navigate to the folder where your mesh sequence is stored
- In the File Name box, provide `0`
- Leave the Cache Mode set to Cached
- Click Select Folder and wait while your sequence is loaded
- Click the sequence in "Scene Collection"
- Open the Object tab in the Properties panel
- Click Mesh Sequence > Advanced > Bake Sequence
If VIBE returned multiple body IDs, repeat these steps for each ID folder.
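The plugin is the supported route, but if you prefer to script the import, a rough `bpy` sketch of the same effect is below. It bypasses Stop Motion Obj entirely, the paths are placeholders, and `bpy.ops.import_scene.obj` is the built-in OBJ importer in Blender 2.8x-3.x (removed in 4.0):

```python
import os
import bpy

mesh_dir = "/path/to/OUTPUT_NAME/meshes/0001"  # one body-ID folder
obj_files = sorted(f for f in os.listdir(mesh_dir) if f.endswith(".obj"))

for frame, fname in enumerate(obj_files, start=1):
    bpy.ops.import_scene.obj(filepath=os.path.join(mesh_dir, fname))
    obj = bpy.context.selected_objects[0]
    # Show each imported mesh only on its own frame.
    for f, hidden in ((frame - 1, True), (frame, False), (frame + 1, True)):
        obj.hide_viewport = obj.hide_render = hidden
        obj.keyframe_insert("hide_viewport", frame=f)
        obj.keyframe_insert("hide_render", frame=f)
```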
You will need to put the corresponding `.csv` files into a folder of the form `YOUR ROOT DIR/csvs/VIDEO_NAME`. Then, set the `root_path` and `video_name` variables in `set_scene.py` and run it.
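For example, with placeholder values:

```python
# In set_scene.py:
root_path = "YOUR ROOT DIR"   # the folder containing csvs/
video_name = "VIDEO_NAME"     # matches the csvs/VIDEO_NAME subfolder
```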
To run the pose forecasting:

- Download the SMPL body from here and save it under `human_body_prior/smpl/models/neutral.pkl`.
- Follow the instructions here to make the pkl file compatible with Python 3.
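If you prefer to do that conversion by hand, the usual fix looks roughly like this; it is a sketch rather than the linked instructions, and it assumes the standard chumpy-based SMPL pickle (chumpy must be installed for the one-off load):

```python
import pickle
import numpy as np

src = "human_body_prior/smpl/models/neutral.pkl"

# The original model pickle was written under Python 2, hence the latin1 encoding.
with open(src, "rb") as f:
    model = pickle.load(f, encoding="latin1")

# Replace chumpy arrays with plain numpy arrays so the file loads without chumpy.
model = {k: np.array(v) if "chumpy" in str(type(v)) else v for k, v in model.items()}

with open(src, "wb") as f:
    pickle.dump(model, f)
```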
- Download a trained VPoser from the SMPL-X project website.
- Change `torchgeometry/core/conversions.py`, lines 302-304, to the following (PyTorch 1.2+ no longer supports `1 - mask` on boolean tensors, so the masks must be inverted with `~` instead):

```python
mask_c1 = mask_d2 * ~mask_d0_d1
mask_c2 = ~mask_d2 * mask_d0_nd1
mask_c3 = ~mask_d2 * ~mask_d0_nd1
```
- Run the pose forecasting notebook. It is compatible with Colab, but the visualizations require a GPU.