
Camera intrinsic parameters corresponding to the Scene image? #269

Closed
ShukuiZhang opened this issue Jun 9, 2017 · 28 comments

@ShukuiZhang

Where can I find intrinsic parameters for the camera generating Scene images, e.g. focal length, principal point?

@saihv
Contributor

saihv commented Jun 9, 2017

I figured those out by using the following formula, where HFOV is 90 degrees for the default PIPCamera. It may be an approximation, but reconstructing side-by-side scene images into point clouds did work properly for me.

Horizontal FoV = 2 * arctan( width / (2 * f) ), i.e. f = width / (2 * tan(FoV / 2)) (then assuming fx = fy = f)
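
As a quick sketch of that formula (assuming square pixels and the principal point at the image center):

```python
import numpy as np

def intrinsics_from_hfov(width, height, hfov_deg=90.0):
    """Pinhole intrinsic matrix from image size and horizontal FoV."""
    f = width / (2.0 * np.tan(np.radians(hfov_deg) / 2.0))  # fx = fy = f
    cx, cy = width / 2.0, height / 2.0                      # principal point at center
    return np.array([[f, 0.0, cx],
                     [0.0, f, cy],
                     [0.0, 0.0, 1.0]])

# tan(45 deg) = 1, so for the default 90-degree camera f = width / 2:
print(intrinsics_from_hfov(1920, 1080))  # f is 960 for a 1080p image
```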

@lovettchris
Member

See also point clouds.

@ShukuiZhang
Author

ShukuiZhang commented Jun 25, 2017

Hi @lovettchris.
In the point cloud code, cv2.reprojectImageTo3D() takes the generated "depth" image as input, while it is meant to be a "disparity" image according to the OpenCV documentation. Does this mean that the generated depth image is actually a "disparity" image?

Update: I tried point_cloud.py with the recorded "depth" image, and it gives reasonable 3D points at the original depth image size. However, I changed the image size for both the scene image and the depth image when recording them, so the projectionMatrix shown in the code does not work. The question, then, is how to work out the proper projectionMatrix in my case. Can you please share the code and parameters used to derive the matrix? FYI, I set both sizes to 800x600 (798x598 in reality). Thank you!
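
In the meantime, here is my sketch of how such a matrix could be built for an arbitrary resolution, assuming the 90-degree-FoV pinhole model from above and a known stereo baseline (the baseline value below is a placeholder):

```python
import numpy as np

def make_Q(width, height, hfov_deg=90.0, baseline_m=0.25):
    """OpenCV-style disparity-to-depth matrix Q for cv2.reprojectImageTo3D.

    baseline_m is a placeholder -- use the real distance between the
    left and right cameras of your vehicle model.
    """
    f = width / (2.0 * np.tan(np.radians(hfov_deg) / 2.0))
    cx, cy = width / 2.0, height / 2.0
    return np.float32([[1, 0, 0, -cx],
                       [0, 1, 0, -cy],
                       [0, 0, 0,   f],
                       [0, 0, 1.0 / baseline_m, 0]])

# points_3d = cv2.reprojectImageTo3D(disparity, make_Q(798, 598))
```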

@marcelinomalmeidan

Okay, this was bugging me as well, so I prepared an experiment to get the camera parameters.
First, I got one of the students in our lab to prepare a simulation environment for Unreal with the checkerboard:
[screenshot: checkerboard calibration environment in Unreal]

Then, using a ROS publisher developed by our team, I got images from AirSim into ROS and used the ROS camera calibration package to capture images and calibrate the RGB camera.
[screenshot: ROS camera calibration GUI]

These are the parameters that the camera calibrator spat out:
```
width
1920

height
1080

[narrow_stereo]

camera matrix
959.661112 0.000000 959.385435
0.000000 959.693478 539.533360
0.000000 0.000000 1.000000

distortion
-0.000591 0.000519 0.000001 -0.000030 0.000000

rectification
1.000000 0.000000 0.000000
0.000000 1.000000 0.000000
0.000000 0.000000 1.000000

projection
959.779968 0.000000 959.290331 0.000000
0.000000 959.867798 539.535675 0.000000
0.000000 0.000000 1.000000 0.000000
```

@marcelinomalmeidan

marcelinomalmeidan commented Jul 6, 2017

If you want to use my image dataset to run your own calibration procedure, here is the set of images:
https://drive.google.com/file/d/0B3yTdb-QXQ9-U0sxS2R4QTJFeUk/view?usp=sharing
Each square is 50 cm x 50 cm.

@ShukuiZhang
Author

Thank you so much, @marcelinomalmeidan! This is definitely an option.
My concerns here are twofold:

  1. I need to get the 3D points from the images, which requires the depth map. Since what is given are disparity maps, the stereo baseline is also necessary. @lovettchris
  2. The images, and thus the intrinsics, are for 1920x1080. I'm not sure whether recording disparity and scene images at such a large size will hurt recording performance, since I'm not on a high-end machine.

@saihv
Contributor

saihv commented Jul 6, 2017

@ShukuiZhang As you can see, the focal length in the intrinsic matrix for a 1080p image is very close to 960 (1920/2), as predicted by the formula I mentioned above, because the FoV is 90 degrees. And the principal point is (w/2, h/2). I'd say you'd get a pretty good estimate just by dividing the image width by 2.

@ShukuiZhang
Author

ShukuiZhang commented Jul 6, 2017

Yes, I see that point, and I agree it is a simple yet valuable approach. @saihv
On the other hand, I would prefer values as close to the "ground truth" as possible, because I have no idea how much even this small difference will affect the point clouds. I think the effect is more pronounced for points far away from the camera.

@saihv
Contributor

saihv commented Jul 6, 2017

Perhaps if @marcelinomalmeidan can link to the chessboard world, it would be easy to replicate the camera calibration procedure for different camera resolutions? And the baseline between left and right PIP could possibly be obtained by looking at the drone model (FlyingPawn.uasset in Unreal)?

@ShukuiZhang
Author

Just got an update from the issue regarding the baseline.

@marcelinomalmeidan

marcelinomalmeidan commented Jul 7, 2017

I am taking a different approach from what @lovettchris did... instead of computing a reprojection matrix and building the point cloud myself, I wanted to use the depth_image_proc package.
If you check "point_cloud_xyz", it only requires a camera_info (calibration) and a depth map as input, and it produces a point cloud.
We basically already know the parameters fx, fy, cx, and cy for all resolutions (thanks @saihv for pointing that out!). In addition, Tx should be constant regardless of resolution.
Using the package, I get a point cloud, but it's not scaled correctly: it's much bigger than everything else. Also, the ground is not flat.
Does anyone know how to use the depth_image_proc package and how to work this out? I made a video of my point cloud output (0:23 shows what I was talking about with the ground not being flat):
https://youtu.be/oOmab7CPq0k
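
For reference, a minimal sketch of a CameraInfo publisher with those parameters filled in, assuming the fx = fy = width/2 intrinsics discussed above (topic and frame names are placeholders matching my setup):

```python
#!/usr/bin/env python
import rospy
from sensor_msgs.msg import CameraInfo

def make_camera_info(width, height):
    f = width / 2.0                      # 90-degree HFOV => f = width / 2
    cx, cy = width / 2.0, height / 2.0
    info = CameraInfo()
    info.width, info.height = width, height
    info.distortion_model = "plumb_bob"
    info.D = [0.0] * 5                   # simulated camera: assume no distortion
    info.K = [f, 0.0, cx, 0.0, f, cy, 0.0, 0.0, 1.0]
    info.R = [1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0]
    info.P = [f, 0.0, cx, 0.0, 0.0, f, cy, 0.0, 0.0, 0.0, 1.0, 0.0]
    return info

if __name__ == "__main__":
    rospy.init_node("airsim_camera_info")
    pub = rospy.Publisher("/Airsim/camera_info", CameraInfo, queue_size=1)
    info = make_camera_info(1920, 1080)
    info.header.frame_id = "camera"
    rate = rospy.Rate(30)
    while not rospy.is_shutdown():
        info.header.stamp = rospy.Time.now()
        pub.publish(info)
        rate.sleep()
```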

@marcelinomalmeidan

@saihv, I'll find a way of sharing the checkerboard world!

@marcelinomalmeidan

marcelinomalmeidan commented Jul 8, 2017

If anyone wants to reproduce my results, this is the ROS image/depth publisher that I use:
https://github.com/marcelinomalmeidan/publishAirsimImgs

If you want to use it with depth_image_proc, you will have to download its repo, then add one line of code to image_pipeline/depth_image_proc/src/nodelets/point_cloud_xyz.cpp.
After line 109, you add:
```cpp
cloud_msg->header.frame_id = "camera";
```

In addition, you can run the package by executing a launch file with the following content:

```xml
<launch>

  <!-- Nodelet manager for this pipeline -->
  <node pkg="nodelet" type="nodelet" args="manager" name="depth_transforms_manager" output="screen"/>

  <!-- Convert to point cloud -->
  <node pkg="nodelet" type="nodelet" name="cloudify"
        args="load depth_image_proc/point_cloud_xyz depth_transforms_manager --no-bond">

    <!-- Input: Camera calibration and metadata (sensor_msgs/CameraInfo) -->
    <remap from="rgb/camera_info" to="/Airsim/camera_info"/>
    <!-- Input: Rectified depth image -->
    <remap from="image_rect" to="/Airsim/depth"/>

  </node>

</launch>
```

@sytelus
Contributor

sytelus commented Jul 9, 2017

For the camera's intrinsic parameters, see this discussion. Basically, you need to convert the depth to the camera plane and then use FOV = 90 to create the matrix. This code has examples of many of these things.
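
For illustration, a minimal sketch of that conversion, assuming the recorded image stores the distance along each pixel's viewing ray and the 90-degree-FoV pinhole intrinsics discussed above:

```python
import numpy as np

def perspective_to_planar(depth, hfov_deg=90.0):
    """Convert distance-along-ray depth into planar depth (Z in the camera frame)."""
    h, w = depth.shape
    f = w / (2.0 * np.tan(np.radians(hfov_deg) / 2.0))
    u = np.arange(w) - w / 2.0           # pixel offsets from the principal point
    v = np.arange(h) - h / 2.0
    uu, vv = np.meshgrid(u, v)
    # distance along a ray = Z * |(u, v, f)| / f, so divide that factor back out
    return depth * f / np.sqrt(uu ** 2 + vv ** 2 + f ** 2)
```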

@sytelus sytelus closed this as completed Jul 9, 2017
@sytelus
Contributor

sytelus commented Jul 9, 2017

@marcelinomalmeidan Thanks for your contribution! Would you be able to send the ROS publisher you mentioned in your post as a pull request? That sounds like a great thing to have :).

Also, I've updated how the depth is generated, so you might want to rerun your calibration. I've also added new APIs that allow you to get stereo + depth images. The simGetImages() API lets you get the left, right, and depth images simultaneously. Its return value is a struct that contains the camera position, orientation, and timestamp.
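
As a sketch of such a call via the Python client (camera names and image-type enum values have changed across AirSim versions, so treat these as placeholders):

```python
import airsim

client = airsim.MultirotorClient()
client.confirmConnection()

# One call returns all requested images, so left, right and depth
# come from (nearly) the same simulation instant.
responses = client.simGetImages([
    airsim.ImageRequest("front_left", airsim.ImageType.Scene),
    airsim.ImageRequest("front_right", airsim.ImageType.Scene),
    airsim.ImageRequest("front_left", airsim.ImageType.DepthPlanar,
                        pixels_as_float=True),
])
for r in responses:
    # Each response also carries camera_position, camera_orientation, time_stamp.
    print(r.width, r.height, r.time_stamp)
```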

@sytelus sytelus reopened this Jul 9, 2017
@marcelinomalmeidan

@sytelus, thanks! We worked on adapting my code to the new API today, but I had some problems:

  • First of all, the AirSim simulation keeps crashing while I am pulling images out of it. It happens after pulling images for a while (sometimes it takes 10 seconds, sometimes minutes). Visual Studio throws an exception (it seems to be the same exception every time):
    "Exception thrown at 0x00007FFD84887944 (UE4Editor-Engine.dll) in UE4Editor.exe: 0xC0000005: Access violation reading location 0x0000000000000001C0.
    If there is a handler for this exception, the program may be safely continued."

  • Second, the left and right cameras don't seem to be synchronized. I poll the images simultaneously from AirSim, but when I visualize them, they don't appear well synchronized. Because of this, I have not been successful in getting stereo calibration to work: every time I run a new calibration sequence, I get different results. This also implies that stereo disparity won't work even with a successful calibration.

  • How do I adjust the resolution of the images? I used to go to Unreal Engine's Contents/HUDAssets and look for the camera targets, but they are not there anymore.

@sytelus
Contributor

sytelus commented Jul 11, 2017

I saw the first issue and made a fix. I've tested it by running AirSim and generating images for a couple of hours.

I will have to check the 2nd issue, but my guess is that there is a lag between when you set the pose and when the vehicle is actually moved to that pose by Unreal. One way to check whether that's the issue is to insert a delay: for example, call simSetPose(pose), then sleep for 1 sec, then get the image. I think you should then get the same image consistently. You can try reducing the delay to, say, 0.1 sec and see if that's optimal.

For the 3rd issue, I've just added a feature to set resolution, FOV, etc. Please scroll down this doc for more info.

Try out the latest code!

All of this code is hot right out of the oven, so consider this an advance warning :).
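
For example, with the current settings.json schema, the capture resolution and FOV are set under CaptureSettings (key names may differ in your version, so treat this as a sketch and check the doc):

```json
{
  "SettingsVersion": 1.2,
  "CameraDefaults": {
    "CaptureSettings": [
      { "ImageType": 0, "Width": 800, "Height": 600, "FOV_Degrees": 90 }
    ]
  }
}
```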

@marcelinomalmeidan

@sytelus, I had to delete one line in Airsim/cmake/AirLib/CMakeLists.txt to get it to compile:
`${AIRSIM_ROOT}/AirLib/src/controllers/Settings.cpp`
(this file does not exist anymore)

@sytelus
Contributor

sytelus commented Jul 12, 2017

Yes, I realized that. I've also made other fixes for Linux today.

@JonathanSchmalhofer

@marcelinomalmeidan: Did you make any progress on stereo camera calibration? I tried to reproduce your approach as follows:

  • Set the Unreal unit to cm.
  • Add a box in the Unreal Editor (default size is 100x100x100 - I guess this is in Unreal Units, so with the above setting 100 cm, right?) and scale it to 0.01 x 15 x 10 (so the checkerboard is 15 m wide and 10 m high).
  • Add a checkerboard texture with 15x10 squares. With the above settings, each square should be 1000 mm x 1000 mm.
  • I then wrote an application that takes a PNG screenshot each time I hit ENTER, based on the StereoImage example.

I can then take screenshots like the following:
[screenshots: left_00000 / right_00000 stereo pair]

I then tried loading them into the stereoCameraCalibrator app from MATLAB R2015a, but most of the images are rejected (probably because of the blocks still in the background? not sure, will check). The results seem to be nonsense anyway, as the distance between the left and right cameras is estimated to be around 196 m.

I also tried the Camera Calibration Toolbox for MATLAB, which I used a few years ago with good results on real images. When calibrating the left and right cameras separately, I get results that seem more accurate. When loading them into the stereo_gui, I get an estimated distance from the right to the left camera of approx. 136 mm, which seems more reasonable. But when running the stereo camera calibration, I get NaNs for all parameters during optimization. Not sure why.

Anyone else tried this before?
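
In case it helps, a rough OpenCV equivalent of this pipeline (the pattern size, square size, and file names below are placeholders matching my setup above):

```python
import glob
import cv2
import numpy as np

# 15x10 squares => 14x9 interior corners; 1.0 m squares (both placeholders).
pattern = (14, 9)
square_m = 1.0

objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square_m

obj_pts, left_pts, right_pts, size = [], [], [], None
for lf, rf in zip(sorted(glob.glob("left_*.png")), sorted(glob.glob("right_*.png"))):
    gl = cv2.imread(lf, cv2.IMREAD_GRAYSCALE)
    gr = cv2.imread(rf, cv2.IMREAD_GRAYSCALE)
    size = gl.shape[::-1]  # (width, height)
    okl, cl = cv2.findChessboardCorners(gl, pattern)
    okr, cr = cv2.findChessboardCorners(gr, pattern)
    if okl and okr:        # keep only pairs where both views detect the full board
        obj_pts.append(objp)
        left_pts.append(cl)
        right_pts.append(cr)

# Calibrate each camera separately, then fix intrinsics for the stereo step.
_, K1, D1, _, _ = cv2.calibrateCamera(obj_pts, left_pts, size, None, None)
_, K2, D2, _, _ = cv2.calibrateCamera(obj_pts, right_pts, size, None, None)
err, K1, D1, K2, D2, R, T, E, F = cv2.stereoCalibrate(
    obj_pts, left_pts, right_pts, K1, D1, K2, D2, size,
    flags=cv2.CALIB_FIX_INTRINSIC)
print("RMS reprojection error:", err)
print("baseline (m):", np.linalg.norm(T))
```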

@JonathanSchmalhofer

OK, as expected, the MATLAB Stereo Camera Calibration App had problems auto-detecting the checkerboard when there are similar objects in the background (e.g. the blocks from the AirSim environment 'Blocks'). I removed them, et voilà, all pictures were taken into account for calibration:

[screenshot: MATLAB Stereo Camera Calibration App with all image pairs accepted]

I got the following results (note that MATLAB reports the intrinsic matrix transposed relative to the usual OpenCV convention):

Left Camera (#1)

Intrinsic Matrix

```
571.402636917134      0                  0
-0.0833452467134678   571.544739332093   0
400.639999395168      300.427684345250   1
```

Focal Length

```
571.402636917134  571.544739332093
```

Principal Point

```
400.639999395168  300.427684345250
```

Skew

```
-0.083345246713468
```

Radial Distortion

```
0.000415036899996763  0.000556254050889156  -0.00275943702139659
```

Tangential Distortion

```
-0.000101728992258693  0.000205945187867511
```

Right Camera (#2)

Intrinsic Matrix

```
571.222935812704      0                  0
-0.0184437475149830   571.365120227730   0
400.363512365099      300.369927037223   1
```

Focal Length

```
571.222935812704  571.365120227730
```

Principal Point

```
400.363512365099  300.369927037223
```

Skew

```
-0.018443747514983
```

Radial Distortion

```
0.00204149753440190  -0.00666695925778359  0.00559941924299634
```

Tangential Distortion

```
-9.25453504591489e-05  -7.74448998384522e-05
```

Stereo Camera Relation (#2 relative to #1)

Rotation

```
0.999999877459772      1.16767450105424e-07   -0.000495055983578262
-1.62889434791495e-07  0.999999995660114      -9.31651505587194e-05
0.000495055970551118   9.31652197816300e-05   0.999999873119906
```

Translation

```
-124.610118434116  0.00529493336163922  -3.87205255315630
```

@olivia-skydio

@marcelinomalmeidan were you able to get synchronized image positions for the left and right cameras? I am currently trying to do the same as you to generate stereo data, and the positions are off. Any advice on what to do? I've tried inserting a delay after simSetPose and before simGetImages, but it doesn't seem to help.

@nikolaid77

Camera parameters (e.g. focal length) found by the calibration approaches seem to be in pixel units. How can I obtain the pixel size in Unreal Units, so that the camera parameters can be converted to Unreal Units?

@JonathanSchmalhofer

JonathanSchmalhofer commented May 8, 2018

@nikolaid77: The values I obtained above using MATLAB should all be in SI units, i.e. [mm].

Edit: At least according to the MATLAB Vision Toolbox online help: Link

@nikolaid77

@JonathanSchmalhofer: Thanks. However, a focal length of 0.57 m seems too big to be correct (I mean, in the correct units). But maybe I am wrong.

@husha1993

@marcelinomalmeidan
Hi, would you be willing to share the checkerboard world and the set of images used to calibrate the camera?
The following link no longer seems to work: https://drive.google.com/file/d/0B3yTdb-QXQ9-U0sxS2R4QTJFeUk/view?usp=sharing

@marcelinomalmeidan

@husha1993, I don't have any of this anymore, and it's been a long time since I last played with AirSim. Sorry, but I don't think I can be of much help =/

@karimmamer

Hi @JonathanSchmalhofer,
Thanks a lot for sharing your results. What were the image dimensions used for the calibration?
