Camera intrinsic parameters corresponding to the Scene image? #269
Where can I find the intrinsic parameters for the camera generating Scene images, e.g. focal length, principal point?

Comments
I figured those out using the following formula, where HFOV is 90 degrees for the default PIPCamera. It may be an approximation, but reconstructing side-by-side scene images into point clouds did work properly for me: HFOV = 2 * arctan(width / (2f)), so f = width / (2 * tan(HFOV / 2)) (then assuming fx = fy = f).
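A minimal Python sketch of that relation (the helper name is mine, not from AirSim):

```python
import math

def focal_length_px(image_width, hfov_deg=90.0):
    # HFOV = 2 * arctan(width / (2 * f))  =>  f = width / (2 * tan(HFOV / 2))
    return image_width / (2.0 * math.tan(math.radians(hfov_deg) / 2.0))

# For the default 90-degree PIPCamera, f is simply width / 2:
print(focal_length_px(1920))  # 960.0
```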
See also point clouds.
Hi @lovettchris. Update: I tried point_cloud.py with the recorded depth image, and it gives reasonable 3D points at the original depth image size. However, I modified the image size for both the scene image and the depth image when recording them, so the projectionMatrix shown in the code does not work. The question, then, is how to work out the proper projectionMatrix in my case. Can you please share the code and parameters used to derive the matrix? FYI, I set both sizes to 800x600 (798x598 in reality). Thank you!
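For anyone hitting the same resolution problem: under the fx = fy = w/2 assumption above, intrinsics for an arbitrary image size can be built directly. This is only a pinhole-model sketch, not the exact projectionMatrix layout that point_cloud.py expects:

```python
import numpy as np

def intrinsic_matrix(width, height, hfov_deg=90.0):
    # Pinhole intrinsics with the principal point at the image center.
    f = width / (2.0 * np.tan(np.radians(hfov_deg) / 2.0))
    return np.array([[f,   0.0, width  / 2.0],
                     [0.0, f,   height / 2.0],
                     [0.0, 0.0, 1.0]])

K = intrinsic_matrix(798, 598)  # the 800x600 setting reportedly yields 798x598 frames
```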
Okay, this was bugging me as well, so I prepared an experiment to get the camera parameters. Using a ROS publisher developed by our team, I got images from AirSim into ROS, and then used ROS' camera calibration package to capture images and calibrate the RGB camera. The calibrator's [narrow_stereo] output included the image height, camera matrix, distortion, rectification, and projection parameters.
If you want to use my image dataset to run your own calibration procedure, here is the set of images:
Thank you so much! @marcelinomalmeidan This is definitely an option.
@ShukuiZhang As you can see, the focal length in the intrinsic matrix for a 1080p image is very close to 960 (1920/2), consistent with the formula I mentioned above, because the FoV is 90 degrees; and the principal point is (w/2, h/2). I'd say you'd get a pretty good estimate by dividing the image width by 2.
Yes, I see that point, and I agree it's a simple yet valuable approach. @saihv
Perhaps if @marcelinomalmeidan can link to the chessboard world, it would be easy to replicate the camera calibration procedure for different camera resolutions? And the baseline between left and right PIP could possibly be obtained by looking at the drone model (FlyingPawn.uasset in Unreal)? |
Just got an update from the issue regarding the baseline.
I am taking a different approach from what @lovettchris did: instead of getting a reprojection matrix and calculating the point cloud myself, I wanted to use the depth_image_proc package.
@saihv, I'll find a way of sharing the checkerboard world! |
If anyone wants to reproduce my results, this is the ROS image/depth publisher that I use: If you want to use it with depth_image_proc, you will have to download its repo, then add one line of code to image_pipeline/depth_image_proc/src/nodelets/point_cloud_xyz.cpp. In addition, you can run the package by executing a launch file.
For the intrinsic parameters of the camera, see this discussion. Basically you need to convert depth to the camera plane and then use FOV = 90 to create the matrix. This code has examples of doing many of these things.
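A sketch of that back-projection in Python, assuming the depth image has already been converted to planar (camera-plane) depth in meters; this mirrors the idea, not AirSim's exact code:

```python
import numpy as np

def depth_to_points(depth, hfov_deg=90.0):
    # depth: HxW array of camera-plane depths. Back-project each pixel
    # through a pinhole model with f = w / (2 * tan(HFOV / 2)).
    h, w = depth.shape
    f = w / (2.0 * np.tan(np.radians(hfov_deg) / 2.0))
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - w / 2.0) * depth / f
    y = (v - h / 2.0) * depth / f
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)  # Nx3 points
```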
@marcelinomalmeidan Thanks for your contribution! Would you be able to send the ROS publisher you mentioned in your post as a pull request? That sounds like a great thing to have :). Also, I've updated how the depth is generated, so you might want to rerun your calibration. I've added new APIs as well that allow you to get stereo + depth images: the simGetImages() API lets you get left, right, and depth images simultaneously. Its return value is a struct that contains camera position, orientation, and timestamp.
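For reference, this is roughly how the call looks in today's Python client (camera names and ImageType values follow the current airsim package; the API at the time of this thread differed slightly):

```python
import airsim

client = airsim.MultirotorClient()
client.confirmConnection()

# One request per image; all three are captured in the same call.
responses = client.simGetImages([
    airsim.ImageRequest("front_left", airsim.ImageType.Scene),
    airsim.ImageRequest("front_right", airsim.ImageType.Scene),
    airsim.ImageRequest("front_left", airsim.ImageType.DepthPlanar, pixels_as_float=True),
])
for r in responses:
    print(r.camera_position, r.camera_orientation, r.time_stamp)
```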
@sytelus, thanks! We worked on adapting my code to the new API today, but I had some problems:
I saw the first issue and made a fix; I've tested it by running AirSim and generating images for a couple of hours. I will have to check on the 2nd issue, but my guess is that there was a lag between when you set the pose and when the vehicle is actually moved to that pose by Unreal. One way to check whether that's the issue is by inserting a delay: for example, call simSetPose(pose), then sleep for 1 sec, then get the image. I think you should then get the same image consistently. You can try reducing the delay to, say, 0.1 sec and see if that's enough. For the 3rd issue, I've just added a feature to set resolution, FOV, etc.; please scroll down this doc for more info. Try out the latest code! All of this code is hot right out of the oven, so fair warning :).
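The suggested delay experiment, sketched against the current Python API (simSetVehiclePose is today's name for the simSetPose call discussed here):

```python
import time
import airsim

client = airsim.MultirotorClient()
client.confirmConnection()

pose = airsim.Pose(airsim.Vector3r(0, 0, -2), airsim.to_quaternion(0, 0, 0))
client.simSetVehiclePose(pose, ignore_collision=True)
time.sleep(1.0)  # give Unreal time to actually move the vehicle; try 0.1 s next
response = client.simGetImages([airsim.ImageRequest("front_left", airsim.ImageType.Scene)])[0]
```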
@sytelus, I had to delete one line in Airsim/cmake/AirLib/CMakeLists.txt to get it to compile: |
Yes, I realized that. I've also made other fixes for Linux today. |
@marcelinomalmeidan: Did you make any progress on stereo camera calibration? I tried to reproduce your approach as follows:
I can then take screenshots like the following: I then tried loading them into the stereoCameraCalibrator from MATLAB R2015a, but most of the images are rejected (probably because of the blocks still in the background? Not sure, will check), and the results seem to be nonsense, as the distance between the left and right cameras is estimated to be around 196 m.

I also tried the Camera Calibration Toolbox for MATLAB, which I used a few years ago with good results on real images. When performing separate calibrations for the left and right cameras, I do get results that seem more accurate; loading them in the stereo_gui gives an estimated distance from the right to the left camera of approx. 136 mm, which seems more reasonable to me. But when running the stereo camera calibration, I get NaNs for all parameters during optimization. Not sure why. Has anyone else tried this before?
OK, as expected, the MATLAB Stereo Camera Calibration App had problems auto-detecting the checkerboard when there are similar objects in the background (e.g. the blocks from the AirSim environment 'Blocks'). I removed them, et voilà, all pictures were taken into account for calibration. I got the following results:

Left Camera (#1)
- Focal Length: 571.402636917134, 571.544739332093
- Principal Point: 400.639999395168, 300.427684345250
- Skew: -0.083345246713468
- Radial Distortion: 0.000415036899996763, 0.000556254050889156, -0.00275943702139659
- Tangential Distortion: -0.000101728992258693, 0.000205945187867511

Right Camera (#2)
- Focal Length: 571.222935812704, 571.365120227730
- Principal Point: 400.363512365099, 300.369927037223
- Skew: -0.018443747514983
- Radial Distortion: 0.00204149753440190, -0.00666695925778359, 0.00559941924299634
- Tangential Distortion: -9.25453504591489e-05, -7.74448998384522e-05

Stereo Camera Relation (#2 relative to #1)
- Rotation (first row): 0.999999877459772, 1.16767450105424e-07, -0.000495055983578262
- Translation: -124.610118434116, 0.00529493336163922, -3.87205255315630
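As a sanity check, the stereo baseline implied by that translation vector (in the checkerboard's units, presumably mm) can be computed directly:

```python
import numpy as np

t = np.array([-124.610118434116, 0.00529493336163922, -3.87205255315630])
print(np.linalg.norm(t))  # ~124.67, i.e. a baseline of roughly 12.5 cm
```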
@marcelinomalmeidan were you able to get image positions for the left and right camera that were synchronized? I am currently trying to do the same as you to generate stereo data and the positions are off. Any advice what to do? I've tried inserting a delay after simSetPose and before simGetImages but it doesn't seem to help. |
Camera parameters (e.g. focal length) found by the calibration approaches seem to be in pixel units. How can I obtain the pixel size in Unreal units, so that the camera parameters can be converted to Unreal units?
@nikolaid77: From the values I obtained above using MATLAB, they should all be in SI units, i.e. [mm]. Edit: at least according to the MATLAB Vision Toolbox online help: Link
@JonathanSchmalhofer: Thanks. However, a focal length of 0.57 m seems too big to be correct (I mean, in those units). But maybe I am wrong.
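If the calibrated focal length is actually in pixel units (as suggested above), converting it to millimeters needs the physical sensor size, which a simulated Unreal camera doesn't really have. A sketch of the usual conversion, with a made-up sensor width:

```python
f_px = 571.4            # calibrated focal length in pixels (left camera, above)
sensor_width_mm = 6.4   # hypothetical sensor width; not a real AirSim value
image_width_px = 798

f_mm = f_px * sensor_width_mm / image_width_px
print(f_mm)  # ~4.58 mm for this made-up sensor
```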
@marcelinomalmeidan |
@husha1993, I don't have any of this anymore, and it's been a long time since I last played with AirSim. Sorry, but I don't think I can be of much help =/
Hi @JonathanSchmalhofer, |