
AMBER pose estimation


The AMBER_pose_estimation.py script will run your videos through all pose estimation and post-pose estimation steps required to prepare files for use in SimBA.

To run the script:


1) Open the Windows command prompt with administrator privileges

2) Activate your DeepLabCut conda environment:
conda activate DEEPLABCUT

3) Change your directory to the AMBER-pipeline directory containing the files downloaded when you cloned the AMBER repository, using cd /d path/to/directory on Windows.
e.g. if the AMBER-pipeline folder is located on the Desktop: cd /d C:\Desktop\AMBER-pipeline

4) Make sure all the videos you want to run are located in a single folder (anywhere on your computer). The script will run the pose estimation steps on every video file in the folder you give it, so move any videos you do not want run to another location. Copy the address of the folder containing the videos to run.
Note: in Windows, you can copy the directory path by right-clicking on the folder name in the file explorer and selecting “Copy address as text”. You can then paste it into the command window.
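
If you want to double-check which files the script will pick up, a short Python snippet like the sketch below lists the video files in the folder. The extensions shown are an assumption, so adjust them to match your recordings.

```python
# Quick sanity check (not part of AMBER): list the video files in the folder
# you are about to pass to the script. The extensions here are an assumption.
from pathlib import Path

video_dir = Path(r"C:\Desktop\hannah_test_short")  # folder you will pass to the script
video_exts = {".mp4", ".avi", ".mov"}              # assumed extensions

videos = sorted(p for p in video_dir.iterdir() if p.suffix.lower() in video_exts)
for v in videos:
    print(v.name)
print(f"{len(videos)} video(s) found")
```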



5) To run pose estimation, enter python, the script name, and then the path to the directory where your videos are located.
e.g. python AMBER_pose_estimation.py C:\Desktop\hannah_test_short
Press Enter to execute the command.

The script will automatically run the following steps:
1. Pose estimation for dams for all videos
2. Create videos to check dam tracking
3. Pose estimation for pups for all videos
4. Create videos to check pup detections
5. "Unpickle" pup detection files to convert them to csv
6. Join and reformat pup and dam pose estimation output so it is ready to use with SimBA
Note: The above steps can also be completed separately using the individual files supplied with AMBER
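
For reference, steps 1 and 2 above correspond roughly to standard DeepLabCut calls. The sketch below only illustrates those two steps done by hand; the config path and video type are placeholders, and the AMBER script's actual arguments may differ.

```python
# Rough sketch of steps 1-2 (dam pose estimation and tracking-check videos)
# done manually with DeepLabCut. Paths and videotype are placeholders.
import deeplabcut

dam_config = r"C:\path\to\AMBER-pipeline\dam_model\config.yaml"  # placeholder path to the dam model config
video_dir = r"C:\Desktop\hannah_test_short"

# Step 1: run the dam pose estimation model on every video in the folder
deeplabcut.analyze_videos(dam_config, [video_dir], videotype=".mp4", save_as_csv=True)

# Step 2: create labeled videos to check dam tracking
deeplabcut.create_labeled_video(dam_config, [video_dir], videotype=".mp4")
```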

The DeepLabCut output files will appear in the same directory as your videos. Two new folders will also be created:

First, the pose_estimation_videos folder contains the videos with the dam and pup tracking points so you can check model performance. These videos are moved to a separate directory to make importing videos into SimBA easier later on.

Second, the AMBER_joined_pose_estimation folder contains the reformatted pose estimation files with dam and pup tracking that should be used during behavior classification in SimBA.
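
If you want to peek at one of the joined files before importing it into SimBA, a minimal pandas sketch like the one below works. The file name is hypothetical, and the multi-row header layout is an assumption based on typical DeepLabCut-style CSVs.

```python
# Minimal sketch for inspecting one joined pose estimation file.
# The file name is hypothetical and the header layout is an assumption.
import pandas as pd

csv_path = r"C:\Desktop\hannah_test_short\AMBER_joined_pose_estimation\video1.csv"
df = pd.read_csv(csv_path, header=[0, 1, 2])  # assumed multi-row header (scorer / bodypart / coords)

print(df.shape)   # (frames, tracked columns)
print(df.head())
```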

6) Check your pose estimation videos to ensure that the tracking looks good before proceeding to behavior classification. If the pose estimation models are not performing well, you may need to label additional frames from your videos and retrain the dam or pup models.

If you feel confident in the pose estimation models' performance and want to skip the “create tracking videos” step, you can pass “skip_create_videos” when you run the script.
e.g. python AMBER_pose_estimation.py C:\Desktop\hannah_test_short skip_create_videos
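
For context, “skip_create_videos” is simply an extra command-line argument after the folder path. The sketch below shows one minimal way such an argument could be handled; it is not necessarily how AMBER_pose_estimation.py actually parses its arguments.

```python
# Minimal sketch of handling a positional folder path plus an optional
# "skip_create_videos" flag; AMBER_pose_estimation.py may parse its
# arguments differently.
import sys

video_dir = sys.argv[1]
skip_videos = len(sys.argv) > 2 and sys.argv[2] == "skip_create_videos"

print(f"Running pose estimation on videos in: {video_dir}")
if skip_videos:
    print("Skipping the 'create tracking videos' step")
```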

7) Exit your deeplabcut conda environment
conda deactivate
