
RobVisLab/camera_calibration



Automatic Camera Calibration

This page accompanies our paper [1] on automatic calibration of depth cameras. The presented calibration target and automatic feature extraction are not limited to depth cameras but can also be used with conventional cameras. The provided code is designed as an addon to the widely known camera calibration toolbox of Jean-Yves Bouguet.

Calibration Target

The used calibration target consists of a central marker and circular patterns:

[Figure: the calibration target]

Our automatic feature detection starts by searching for the central marker and then iteratively refines the circular markers around it (depicted as a black dashed line). Compared to standard checkerboard targets, our method has the following advantages:
  • Target does not have to be visible as a whole
  • Detection of groups of circular patterns is more robust to perspective distortions than line crossings
  • Feature detection is more accurate for low-resolution cameras (like ToF or Event Cameras)

Example Detection Result

The following images show the result of the automatic feature detection on two exemplary images from the paper. The calibration target is detected in the gray value image and reprojected to the corresponding depth image from a Microsoft Kinect v2.0:

[Figures: detection result in the gray-value image (left) and projected to the depth image (right)]



How to use the code

The provided source code is used as an addon to the Bouguet Camera Calibration Toolbox. Installation and usage therefore amount to:
  • Clone the calibration toolbox from GitHub [GitLab Link]
  • Run download_script.sh to update the Bouguet Toolbox to the latest version
  • Running autocalibration.m and selecting the images from testdata/image_xxx.jpg starts the mono calibration of the camera.
  • The calibration target can be created using the make_target.m function. Remember to measure it after printing!
  • To use the method with the GUI of the Toolbox, simply start calib_gui_normal_auto.m, which asks for the target parameters interactively.
  • Stereo calibration requires calib_stereo_auto.m instead of calib_stereo.m, because our method does not detect all grid points in all images!
The following parameters have to be set:
  • parameters.approx_marker_width_pixels: Approximate minimum size of the center marker in pixels
  • parameters.grid_width_mm: Grid width (distance between points) in millimeters
  • parameters.checker_aspect_ratio: Aspect ratio (= height/width)
  • parameters.grid_coordinates_h: Horizontal grid dimensions (e.g. -11:11)
  • parameters.grid_coordinates_v: Vertical grid dimensions (e.g. -18:16)
The target can be created using the function template = make_target(grid_width_pixels, grid_width_mm, grid_coordinates_h, grid_coordinates_v), e.g.: target = make_target(240, 5, -18:18, -10:10);
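As a sketch, the parameter setup and target creation described above might look like this in MATLAB. All numeric values are illustrative assumptions, not prescriptions, and the final imwrite step assumes the returned template is an image matrix:

```matlab
% Example parameter setup for the automatic feature detection.
% The numeric values below are illustrative; adapt them to your
% camera resolution and your printed target.
parameters.approx_marker_width_pixels = 40;      % approx. minimum center-marker size in pixels (assumed value)
parameters.grid_width_mm              = 5;       % distance between points in mm -- measure after printing!
parameters.checker_aspect_ratio       = 1;       % aspect ratio (= height/width)
parameters.grid_coordinates_h         = -18:18;  % horizontal grid dimensions
parameters.grid_coordinates_v         = -10:10;  % vertical grid dimensions

% Create a printable target with the same grid layout.
% Signature: template = make_target(grid_width_pixels, grid_width_mm, ...
%                                   grid_coordinates_h, grid_coordinates_v)
target = make_target(240, parameters.grid_width_mm, ...
                     parameters.grid_coordinates_h, parameters.grid_coordinates_v);

% Assuming the template is an image matrix, save it for printing.
imwrite(target, 'calibration_target.png');
```

After printing, re-measure the actual grid width on paper and use that value for parameters.grid_width_mm, since printer scaling can deviate from the nominal size.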


How to cite the materials

We grant permission to use the code on this website. If you use the code in your own publication, we request that you cite our paper [1]. If you want to cite this website, please use the URL "https://rvlab.icg.tugraz.at/calibration/".


References

  1. Learning Depth Calibration of Time-of-Flight Cameras   [supp]
    David Ferstl, Christian Reinbacher, Gernot Riegler, Matthias Ruether, and Horst Bischof
    In Proceedings of the British Machine Vision Conference (BMVC), 2015
    Abstract
    We present a novel method for an automatic calibration of modern consumer Time-of-Flight cameras. Usually, these sensors come equipped with an integrated color camera. Albeit they deliver acquisitions at high frame rates they usually suffer from incorrect calibration and low accuracy due to multiple error sources. Using information from both cameras together with a simple planar target, we will show how to accurately calibrate both color and depth camera and tackle most error sources inherent to Time-of-Flight technology in a unified calibration framework. Automatic feature detection minimizes user interaction during calibration. We utilize a Random Regression Forest to optimize the manufacturer supplied depth measurements. We show the improvements to commonly used depth calibration methods in a qualitative and quantitative evaluation on multiple scenes acquired by an accurate reference system for the application of dense 3D reconstruction.
