
Skeleton Annotation

Table of Contents

  1. Introduction
  2. Features
  3. Installation and Setup
  4. Usage
  5. Customisation
  6. References

Introduction

Skeleton Annotation is a customizable pipeline for annotating human skeletons in image datasets. It is aimed at researchers and developers working in computer vision, particularly on pose estimation and person/object detection. It integrates with LabelMe, offering export options for manual adjustments.

Features

  • Person and Object Detection: Utilizes customizable models for accurate detection.
  • Image Extraction: Capable of extracting images of specific persons or objects.
  • Pose Prediction: Supports pose annotation prediction with customizable models.
  • LabelMe Integration: Export pose annotations to LabelMe format for manual adjustments.

Figure: on the left, the original image; on the right, the visualisation of the object/person detection.

Figure: on the left, the input image for pose estimation; in the middle, the pose estimation; on the right, the annotation converted to LabelMe format.

Installation and Setup

Prerequisites

  • Ubuntu system
  • Python 3.8
  • mmengine 2.0.0

Step-by-Step Installation

  1. Clone Repository: git clone https://github.com/g-ch/skeleton_annotation.git.
  2. Clone Libraries: Inside 'skeleton_annotation', clone the mmpose and mmdetection libraries. See the MMPose and MMDetection installation guides for details.
  3. Create Conda Environment: Use the first lines in 'requirements.txt' to create an environment.
  4. Configure Paths: Modify the three directories in 'config.py' (a sketch follows this list).
  5. Run Config: Navigate to ~/Pipeline/ in the terminal and execute 'config.py' to customize detection, pose estimation, and output format settings.
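
The exact variable names depend on the repository's version of 'config.py', but the three path settings from step 4 might look like this minimal sketch (all names and paths here are illustrative assumptions, not the repository's actual code):

```python
# Hypothetical sketch of the three directory settings in 'config.py';
# the actual variable names and paths in the repository may differ.
INPUT_DIR = "/home/user/Pipeline/0_input_dataset"               # input images
MMDETECTION_DIR = "/home/user/skeleton_annotation/mmdetection"  # cloned mmdetection
MMPOSE_DIR = "/home/user/skeleton_annotation/mmpose"            # cloned mmpose
```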

Usage

  1. Prepare Dataset: Place your images in the '0_input_dataset' folder.

  2. Execute Runner: Run 'runner.py'. The output is verbose, showing any warnings.

  3. View Results: Check the final results and intermediate files in the working tree (see the inspection sketch after this list).
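
After 'runner.py' finishes, the LabelMe .json files can also be inspected programmatically. A minimal sketch, assuming a hypothetical output folder name (the actual folder names in the working tree may differ):

```python
# Minimal sketch for inspecting the pipeline's LabelMe output;
# '4_labelme_output' is a hypothetical folder name, not necessarily the
# repository's actual output directory.
import json
from pathlib import Path

output_dir = Path("4_labelme_output")
for json_file in sorted(output_dir.glob("*.json")):
    annotation = json.loads(json_file.read_text())
    print(json_file.name, "-", len(annotation.get("shapes", [])), "shapes")
```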

Optional: LabelMe Output Correction

  1. Create a new environment and install LabelMe.
  2. Open the .json files and adjust them manually.

Customisation

- Detection: Change the detection model in the config. Built-in models are available in the MMDetection ModelZoo.

The detection model can be changed simply by choosing a different one and entering it into the config. The built-in models are found in the MMDetection ModelZoo (https://mmdetection.readthedocs.io/en/latest/model_zoo.html). If a personally trained model is preferred, the path to this model can also be specified.
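
For illustration, this is roughly how a ModelZoo model is loaded with the MMDetection Python API; the config and checkpoint paths below are ModelZoo examples and assumptions, not this pipeline's defaults:

```python
# Sketch of loading a ModelZoo detection model with the MMDetection API.
# The config/checkpoint paths are example values, not this repository's defaults.
from mmdet.apis import init_detector, inference_detector

config_file = "mmdetection/configs/faster_rcnn/faster-rcnn_r50_fpn_1x_coco.py"
checkpoint_file = "checkpoints/faster_rcnn_r50_fpn_1x_coco.pth"  # downloaded weights

model = init_detector(config_file, checkpoint_file, device="cpu")
result = inference_detector(model, "0_input_dataset/example.jpg")
```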

- Cropping Images: Adjust the threshold for creating cropped images in 'config.py'.

After the person detection is done, the individuals are cut into separate cropped pictures by their bounding boxes. The threshold for the creation of these cropped images is the certainty of the detection model. It can be changed in 'config.py' and should be a value between 0.0 and 1.0, representing a threshold between 0% and 100% certainty. If it is desired to cut out other objects, line 22 in 'cut_out_bb.py' can be changed to another label. Note that this makes the next part of the pipeline (person pose estimation) useless.
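
The thresholding amounts to keeping only detections whose confidence exceeds the configured value. A minimal sketch, with variable and function names that are assumptions rather than the repository's actual code:

```python
# Illustrative sketch of confidence-thresholded cropping; names are assumptions,
# not the code in 'cut_out_bb.py'.
import cv2

SCORE_THRESHOLD = 0.7  # value in [0.0, 1.0], i.e. 0%..100% certainty

def crop_detections(image_path, bboxes):
    """bboxes: iterable of (x1, y1, x2, y2, score) tuples from the detector."""
    image = cv2.imread(image_path)
    crops = []
    for x1, y1, x2, y2, score in bboxes:
        if score >= SCORE_THRESHOLD:  # keep only confident detections
            crops.append(image[int(y1):int(y2), int(x1):int(x2)])
    return crops
```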

- Pose Estimation: Customize the pose estimation and visualisation model in 'config.py'. See MMPose Configs.

Within 'config.py', the model used for pose estimation can also be customised. For built-in 2D person pose estimation models, see https://github.com/open-mmlab/mmpose/tree/main/configs/body_2d_keypoint. This works best if a trained state of the model (a checkpoint file) is also provided. If a personally trained model is preferred, its directory can be specified instead. Furthermore, the visualisation output options can be changed: a heatmap can be added, and the threshold for the visualisation of keypoints can be adjusted. Beware: if changing to an annotation format other than the COCO format, the .json file created for LabelMe will stop working.
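
As an illustration, a built-in 2D keypoint model plus its checkpoint can be loaded with the MMPose 1.x Python API roughly as follows; the paths are ModelZoo examples, not the pipeline's defaults:

```python
# Sketch of loading a built-in MMPose model with a checkpoint file (MMPose 1.x).
# Config/checkpoint paths are example values, not this repository's defaults.
from mmpose.apis import init_model, inference_topdown

config_file = ("mmpose/configs/body_2d_keypoint/topdown_heatmap/coco/"
               "td-hm_hrnet-w32_8xb64-210e_coco-256x192.py")
checkpoint_file = "checkpoints/hrnet_w32_coco_256x192.pth"  # trained state

model = init_model(config_file, checkpoint_file, device="cpu")
results = inference_topdown(model, "cropped_person.jpg")  # hypothetical crop path
```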

- Pose Correction: Choose between different .json output formats in 'config.py'.

Within 'config.py' there are two options for the format of the .json file that is created: one where solely the keypoints are visible, and one where the connecting lines between the keypoints are also visualised. To alter the output, keypoints must be moved manually by dragging and dropping. When the connecting lines are also visualised, the endpoints of these lines need to be moved separately from the keypoints, also by dragging and dropping.
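
The two variants differ only in which LabelMe shapes are written. A hedged sketch of what the .json contents might look like (labels and coordinates are made-up example values, not the tool's exact output):

```python
# Illustrative LabelMe shapes for the two output formats described above;
# field values are example data, not this tool's actual output.
import json

keypoints_only = {
    "shapes": [
        {"label": "left_shoulder", "points": [[120.0, 85.0]], "shape_type": "point"},
        {"label": "left_elbow", "points": [[135.0, 140.0]], "shape_type": "point"},
    ],
    "imagePath": "person_0.jpg",
}

with_lines = {
    "shapes": keypoints_only["shapes"] + [
        # The line duplicates both endpoints, which is why they must be dragged
        # separately from the keypoints during manual correction.
        {"label": "left_upper_arm", "points": [[120.0, 85.0], [135.0, 140.0]],
         "shape_type": "line"},
    ],
    "imagePath": "person_0.jpg",
}

print(json.dumps(with_lines, indent=2))
```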

References

  1. Zhang, A., Eranki, C., Zhang, C., Hong, R., Kalyani, P., Kalyanaraman, L., Gamare, A., Bagad, A., Esteva, M., and Biswas, J., "UT Campus Object Dataset (CODa)", Texas Data Repository, 2023.
  2. MMPose Contributors, "OpenMMLab Pose Estimation Toolbox and Benchmark", https://github.com/open-mmlab/mmpose, 2020.
  3. Chen, K., Wang, J., Pang, J., Cao, Y., Xiong, Y., Li, X., Sun, S., Feng, W., Liu, Z., Xu, J., Zhang, Z., Cheng, D., Zhu, C., Cheng, T., Zhao, Q., Li, B., Lu, X., Zhu, R., Wu, Y., Dai, J., Wang, J., Shi, J., Ouyang, W., Loy, C. C., and Lin, D., "Open MMLab Detection Toolbox and Benchmark", https://github.com/open-mmlab/mmdetection, 2019.
