Skeleton Annotation is a customizable pipeline for annotating human skeletons in image datasets. It is aimed at researchers and developers working in computer vision, particularly on pose estimation and person/object detection. It integrates with LabelMe, offering export options for manual adjustments.
- Person and Object Detection: Utilizes customizable models for accurate detection.
- Image Extraction: Capable of extracting images of specific persons or objects.
- Pose Prediction: Supports pose annotation prediction with customizable models.
- LabelMe Integration: Export pose annotations to LabelMe format for manual adjustments.
Left: the original image; right: the visualisation of the object/person detection.
Left: the input image for pose estimation; middle: the pose estimation; right: the annotation converted to LabelMe format.
- Ubuntu system
- Python 3.8
- mmengine 2.0.0
- Clone Repository:

```bash
git clone https://github.com/g-ch/skeleton_annotation.git
```

- Clone Libraries: Inside 'skeleton_annotation', clone the 'mmpose' and 'mmdetection' libraries. Visit MMPose Installation and MMDetection Installation for more details.
- Create Conda Environment: Use the first lines in 'requirements.txt' to create an environment.
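A quick way to confirm the environment is set up correctly is to import the three core packages. This is a minimal sanity check, not part of the pipeline itself:

```python
# Minimal sanity check (not part of the pipeline): confirm the OpenMMLab
# packages installed above are importable in the new environment.
import mmengine
import mmdet
import mmpose

print("mmengine:", mmengine.__version__)
print("mmdet:", mmdet.__version__)
print("mmpose:", mmpose.__version__)
```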
- Configure Paths: Modify the three directories in 'config.py' (see the sketch after this list for an illustration).
- Run Config: Navigate to ~/Pipeline/ in the terminal and execute 'config.py' to customize detection, pose estimation, and output format settings.
- Prepare Dataset: Place your images in the '0_input_dataset' folder.
- Execute Runner: Run 'runner.py'. The output is verbose, showing any warnings.
- View Results: Check the final results and intermediate files in the working tree.
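The snippet below is only a hypothetical illustration of the path settings in 'config.py'; the actual variable names may differ. The point is simply that three absolute directories must be filled in:

```python
# Hypothetical illustration only: the real variable names in config.py may
# differ. Three directories need to point at your local setup.
INPUT_DIR = "/home/user/Pipeline/0_input_dataset"   # folder with the input images
MMPOSE_DIR = "/home/user/Pipeline/mmpose"           # cloned mmpose library
MMDET_DIR = "/home/user/Pipeline/mmdetection"       # cloned mmdetection library
```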
Optional: LabelMe Output Correction
- Detection: Change the detection model in the config. Built-in models are available in the MMDetection ModelZoo.
The detection model can be changed simply by choosing a different one and entering it into the config; the built-in models are listed in the MMDetection ModelZoo (https://mmdetection.readthedocs.io/en/latest/model_zoo.html). If a personally trained model is preferred, the path to that model can be specified instead.
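As an illustration of how such a swap can look in code, the sketch below uses MMDetection's DetInferencer API directly; the model alias and file paths are examples, not the pipeline's actual settings:

```python
# Sketch of selecting a detection model with MMDetection's DetInferencer;
# the model alias and file paths are examples, not the pipeline's settings.
from mmdet.apis import DetInferencer

# A built-in ModelZoo model selected by name (weights download automatically)...
inferencer = DetInferencer(model='rtmdet_tiny_8xb32-300e_coco')

# ...or a personally trained model via explicit config and checkpoint paths.
inferencer = DetInferencer(
    model='my_configs/my_detector.py',
    weights='work_dirs/my_detector/epoch_12.pth',
)

results = inferencer('0_input_dataset/example.jpg')
```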
After person detection, each detected individual is cropped into a separate image using its bounding box. Whether a cropped image is created depends on the detection model's confidence: the threshold can be changed in 'config.py' and should be a value between 0.0 and 1.0, representing a certainty between 0% and 100%. To cut out other objects instead, line 22 in 'cut_out_bb.py' can be changed to another label; note that this makes the subsequent pose-estimation part of the pipeline useless.
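The sketch below illustrates the idea of confidence-thresholded cropping; it is not the actual 'cut_out_bb.py', and the threshold constant, label, and detection format are assumptions made for the example:

```python
# Illustrative sketch of confidence-thresholded cropping, not the actual
# cut_out_bb.py. Detections are assumed to be dicts with 'label', 'score',
# and 'bbox' (x1, y1, x2, y2 in pixels).
from PIL import Image

DET_SCORE_THRESHOLD = 0.5  # between 0.0 and 1.0, i.e. 0%..100% certainty
TARGET_LABEL = "person"    # change this to crop other object classes

def crop_detections(image_path, detections):
    """Return one cropped sub-image per detection above the threshold."""
    image = Image.open(image_path)
    crops = []
    for det in detections:
        if det["label"] == TARGET_LABEL and det["score"] >= DET_SCORE_THRESHOLD:
            x1, y1, x2, y2 = det["bbox"]
            crops.append(image.crop((x1, y1, x2, y2)))
    return crops
```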
- Pose Estimation: Customize the pose estimation and visualisation model in 'config.py'. See MMPose Configs.
Within 'config.py' the model used for pose estimation can also be customised. For built-in 2D person pose estimation models, see: https://github.com/open-mmlab/mmpose/tree/main/configs/body_2d_keypoint. This works best if a trained state of the model, a checkpoint file, is also provided. If a personally trained model is preferred, its directory can be provided instead. Furthermore, the visualisation output options can be changed: a heatmap can be added, and the threshold for the visualisation of keypoints can be adjusted. Beware: when changing to an annotation format other than the COCO format, the .json file created for LabelMe will stop working.
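For illustration, running a customised model through MMPose's MMPoseInferencer could look like the following sketch; the config and checkpoint paths are placeholders, not the pipeline's real settings:

```python
# Sketch of running a customised 2D pose model via MMPose's MMPoseInferencer;
# the config and checkpoint paths below are placeholders.
from mmpose.apis import MMPoseInferencer

inferencer = MMPoseInferencer(
    pose2d='mmpose/configs/body_2d_keypoint/my_pose_config.py',  # model config
    pose2d_weights='checkpoints/my_pose_checkpoint.pth',         # checkpoint file
)

# Results come back as a generator; each item holds the predicted keypoints.
result_generator = inferencer('cropped_person.jpg', show=False)
result = next(result_generator)
print(result['predictions'])
```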
Within 'config.py' there are two options for the format of the .json file that is created: one where only the keypoints are visible, and one where the connecting lines between the keypoints are also visualised. To alter the output, keypoints have to be moved manually by drag and drop. When the connecting lines are also visualised, their endpoints need to be moved separately from the keypoints, also by drag and drop.
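To make the difference between the two formats concrete, here is a sketch of the corresponding LabelMe JSON shapes; the coordinates, labels, and file names are invented for illustration, but the field names follow LabelMe's standard schema:

```python
# Sketch of the two LabelMe output styles: "point" shapes for keypoints,
# plus optional "line" shapes for the skeleton edges. All values here are
# invented for illustration; the field names follow LabelMe's JSON schema.
import json

keypoints = {"nose": (210.0, 98.5), "left_eye": (222.3, 90.1)}

shapes = [
    {"label": name, "points": [[x, y]], "shape_type": "point",
     "group_id": None, "flags": {}}
    for name, (x, y) in keypoints.items()
]

# Second output style: also emit the connecting lines. Their endpoints are
# separate from the point shapes, which is why they must be dragged separately.
shapes.append({
    "label": "nose-left_eye",
    "points": [list(keypoints["nose"]), list(keypoints["left_eye"])],
    "shape_type": "line", "group_id": None, "flags": {},
})

annotation = {
    "version": "5.2.1", "flags": {}, "shapes": shapes,
    "imagePath": "cropped_person.jpg", "imageData": None,
    "imageHeight": 480, "imageWidth": 640,
}

with open("cropped_person.json", "w") as f:
    json.dump(annotation, f, indent=2)
```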
- Zhang, Arthur and Eranki, Chaitanya and Zhang, Christina and Hong, Raymond and Kalyani, Pranav and Kalyanaraman, Lochana and Gamare, Arsh and Bagad, Arnav and Esteva, Maria and Biswas, Joydeep, "UT Campus Object Dataset (CODa)", Texas Data Repository, 2023
- MMPose Contributors, "OpenMMLab Pose Estimation Toolbox and Benchmark", https://github.com/open-mmlab/mmpose, 2020
- Chen, Kai and Wang, Jiaqi and Pang, Jiangmiao and Cao, Yuhang and Xiong, Yu and Li, Xiaoxiao and Sun, Shuyang and Feng, Wansen and Liu, Ziwei and Xu, Jiarui and Zhang, Zheng and Cheng, Dazhi and Zhu, Chenchen and Cheng, Tianheng and Zhao, Qijie and Li, Buyu and Lu, Xin and Zhu, Rui and Wu, Yue and Dai, Jifeng and Wang, Jingdong and Shi, Jianping and Ouyang, Wanli and Loy, Chen Change and Lin, Dahua, "Open MMLab Detection Toolbox and Benchmark", https://github.com/open-mmlab/mmdetection, 2019