- Clone this repository:

  ```bash
  git clone https://github.com/stevenlsw/Semi-Hand-Object.git
  ```

- Install the dependencies:

  ```bash
  pip install -r requirements.txt
  ```
- Download the MANO model files (`mano_v1_2.zip`) from the MANO website. Unzip and put `mano/models/MANO_RIGHT.pkl` into `assets/mano_models`.
- Download the YCB object models used in the HO3D dataset. Put the unzipped folder `object_models` under `assets`.
- The structure should look like this (see the sanity check after this list):
  ```
  Semi-Hand-Object/
    assets/
      mano_models/
        MANO_RIGHT.pkl
      object_models/
        006_mustard_bottle/
          points.xyz
          textured_simple.obj
        ......
  ```
- Download and unzip the HO3D dataset to a path of your choice; the unzipped path is referred to as `$HO3D_root`.
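As a quick sanity check of the asset layout above, the following minimal sketch (a convenience script, not part of the repository; run it from the repository root) prints whether the expected files are present:

```python
from pathlib import Path

# Check the expected asset layout described in the tree above.
# Run from the Semi-Hand-Object repository root.
assets = Path("assets")
expected = [
    assets / "mano_models" / "MANO_RIGHT.pkl",
    assets / "object_models" / "006_mustard_bottle" / "points.xyz",
    assets / "object_models" / "006_mustard_bottle" / "textured_simple.obj",
]
for path in expected:
    print(f"{path}: {'found' if path.exists() else 'MISSING'}")
```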
The table below reports hand and object pose estimation performance on the HO3D dataset. Hand pose results are evaluated on the official CodaLab challenge. The hand metrics are mean joint/mesh error after Procrustes alignment; the object metric is ADD-0.1D, the percentage of frames in which the average object vertex error is within 10% of the object diameter.

Our model uses a transformer architecture to perform hand-object contextual reasoning.
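The full architecture is defined in the released code; purely as an illustration of the idea, the sketch below shows one way hand and object feature tokens could exchange context through cross-attention. The module name, token shapes, and dimensions here are assumptions, not the exact architecture used in the paper.

```python
import torch
import torch.nn as nn

class HandObjectCrossAttention(nn.Module):
    """Illustrative sketch: each stream attends to the other so that hand
    and object features are refined with mutual context."""

    def __init__(self, dim=256, heads=8):
        super().__init__()
        self.hand_from_obj = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.obj_from_hand = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, hand_tokens, obj_tokens):
        # hand_tokens: (B, N_hand, dim), obj_tokens: (B, N_obj, dim)
        hand_ctx, _ = self.hand_from_obj(hand_tokens, obj_tokens, obj_tokens)
        obj_ctx, _ = self.obj_from_hand(obj_tokens, hand_tokens, hand_tokens)
        return hand_tokens + hand_ctx, obj_tokens + obj_ctx


# Example usage with random features (shapes are assumptions):
hand = torch.randn(2, 21, 256)   # e.g. one token per hand joint
obj = torch.randn(2, 64, 256)    # e.g. tokens from an object feature map
hand_out, obj_out = HandObjectCrossAttention()(hand, obj)
```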
Please download the trained model and save it to a path of your choice; the model path is referred to as `$resume`.
| trained model | joint↓ | mesh↓ | cleanser↑ | bottle↑ | can↑ | average↑ |
|---|---|---|---|---|---|---|
| link | 0.99 | 0.95 | 92.2 | 80.4 | 55.7 | 76.1 |
Evaluate the model with:

```bash
python traineval.py --evaluate --HO3D_root={path to the dataset} --resume={path to the model} --test_batch=24 --host_folder=exp_results
```
The testing results will be saved in `$host_folder`, which contains the following files:

- `option.txt` (saved options)
- `object_result.txt` (object pose evaluation performance)
- `pred.json` (run `zip -j pred.zip pred.json` and submit `pred.zip` to the official challenge for hand evaluation)
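For reference only, here is a minimal NumPy sketch of how the two reported metrics can be computed: mean joint error after Procrustes alignment and per-frame ADD-0.1D. It is an assumption-based illustration, not the challenge's or this repository's evaluation code.

```python
import numpy as np

def procrustes_align(pred, gt):
    """Align pred (N, 3) to gt (N, 3) with a similarity transform
    (rotation, scale, translation) and return the aligned points."""
    mu_p, mu_g = pred.mean(0), gt.mean(0)
    p, g = pred - mu_p, gt - mu_g
    U, S, Vt = np.linalg.svd(p.T @ g)   # SVD of the cross-covariance matrix
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:            # avoid an improper rotation (reflection)
        Vt[-1] *= -1
        S[-1] *= -1
        R = Vt.T @ U.T
    scale = S.sum() / (p ** 2).sum()
    return scale * p @ R.T + mu_g

def mean_joint_error_pa(pred_joints, gt_joints):
    """Mean joint error after Procrustes alignment (same idea for mesh vertices)."""
    aligned = procrustes_align(pred_joints, gt_joints)
    return np.linalg.norm(aligned - gt_joints, axis=1).mean()

def add_01d(pred_verts, gt_verts, diameter):
    """ADD-0.1D for one frame: average vertex error within 10% of the diameter."""
    return np.linalg.norm(pred_verts - gt_verts, axis=1).mean() < 0.1 * diameter
```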
To train on the HO3D dataset, please download the preprocessed files. They contain the training list and labels generated from the original dataset to accelerate training. Put the unzipped folder `ho3d-process` in the current directory.
Then run:

```bash
python traineval.py --HO3D_root={path to the dataset} --train_batch=24 --host_folder=exp_results
```

The trained models will be automatically saved in `$host_folder`.
If you find this work useful, please consider citing:

```bibtex
@inproceedings{liu2021semi,
  title={Semi-Supervised 3D Hand-Object Poses Estimation with Interactions in Time},
  author={Liu, Shaowei and Jiang, Hanwen and Xu, Jiarui and Liu, Sifei and Wang, Xiaolong},
  booktitle={Proceedings of the IEEE conference on computer vision and pattern recognition},
  year={2021}
}
```
- Google Colab demo
We thank:
- obman_train provided by Yana Hasson
- segmentation-driven-pose provided by Yinlin Hu