How to calculate corresponding points' coordinates from an RGBD image? #4473

Open
StardustLu opened this issue Apr 9, 2022 Discussed in #4355 · 2 comments

@StardustLu

Discussed in #4355

Originally posted by StardustLu February 16, 2022
Dear all,
I am trying to calculate the world-frame coordinates of corresponding points from an RGBD image, but the result is not good. If I understand it correctly, the pose recorded by AirSim should have no error, so what is wrong with my calculation?

I set the intrinsic parameters following issue #269;
I set the extrinsic parameters from airsim_rec.txt.
Here is my calculation code:

import numpy as np
from scipy.spatial.transform import Rotation

def calPworld(u, v, d, q, t):
    # camera intrinsics for 1920 x 1080 at 90 degree FOV (from issue #269)
    fx = 959.661112
    fy = 959.693478
    cx = 959.385435
    cy = 539.533360
    # q is given in (X, Y, Z, W) order, which is what scipy expects
    R = Rotation.from_quat(q).as_matrix()

    # camera position as a 3x1 column vector
    T = np.array([t]).transpose()

    # back-project pixel (u, v) with depth d into camera coordinates
    Z = d
    X = (u - cx) * Z / fx
    Y = -(v - cy) * Z / fy
    cp = np.array([[X], [Y], [Z]])
    # my attempt to map the camera-frame point into world coordinates
    return np.dot(R.transpose(), cp - T)

where:

  • q denotes (Q_X, Q_Y, Q_Z, Q_W) from airsim_rec.txt
  • t denotes (POS_X, POS_Y, POS_Z) from airsim_rec.txt
  • d denotes the depth value from the depth image
  • u, v denote the pixel coordinates in the RGB image
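
For completeness, this is roughly how I pull q and t out of a row of airsim_rec.txt (a minimal sketch assuming the tab-separated columns shown further below; the pandas parsing is my own, not part of AirSim):

import pandas as pd

# airsim_rec.txt is tab-separated with columns such as
# POS_X, POS_Y, POS_Z, Q_W, Q_X, Q_Y, Q_Z (plus the image file name, etc.)
rec = pd.read_csv("airsim_rec.txt", sep="\t")
row = rec.iloc[0]

# scipy expects quaternions in (x, y, z, w) order
q = [row["Q_X"], row["Q_Y"], row["Q_Z"], row["Q_W"]]
t = [row["POS_X"], row["POS_Y"], row["POS_Z"]]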

Here is my settings.json used to record the RGBD images (the depth image seems to be fixed at 256 x 144):

{
  "SeeDocsAt": "https://github.com/Microsoft/AirSim/blob/master/docs/settings.md",
  "SettingsVersion": 1.2,
  "SimMode": "ComputerVision",
  "Recording": {
    "RecordInterval": 1,
    "Cameras": [
        { "CameraName": "front_center", "ImageType": 0, "PixelsAsFloat": false, "Compress": true },
        { "CameraName": "front_center", "ImageType": 1, "PixelsAsFloat": true, "Compress": true }
    ]
  },
  "CameraDefaults": {
    "CaptureSettings": [
      {
        "ImageType": 0,
        "Width": 1920,
        "Height": 1080,
        "FOV_Degrees": 90
      },
      {
        "ImageType": 1,
        "Width": 1920,
        "Height": 1080,
        "FOV_Degrees": 90
      }
    ]
  }
}
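
For reference, the intrinsics hard-coded in calPworld are meant to match these capture settings via the pinhole relation from issue #269 (my assumption: square pixels and the principal point at the image center):

import math

width, height = 1920, 1080
fov_deg = 90.0

# focal length in pixels from the horizontal FOV: fx = W / (2 * tan(FOV / 2))
fx = width / (2.0 * math.tan(math.radians(fov_deg) / 2.0))  # 960.0 for these settings
fy = fx                              # square pixels assumed
cx, cy = width / 2.0, height / 2.0   # principal point at the image center
print(fx, fy, cx, cy)                # close to the 959.66 / 959.69 / 959.39 / 539.53 above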

For example, take a pair of corresponding points p1(214, 280) and p2(654, 280), and the poses recorded in airsim_rec.txt:

POS_X	POS_Y	POS_Z	Q_W	Q_X	Q_Y	Q_Z
38.0266	-57.7638	-43.7433	0.999962	-3.12201e-09	3.57614e-07	0.00873036	
38.3268	-74.9576	-43.7433	0.999962	-3.12201e-09	3.57614e-07	0.00873036	

I can't find documentation of the depth unit, which seems to differ slightly from issue #1054. I tested it and the scale factor seems to be 100, so the depths of these corresponding points are 55.029296875 and 46.2890625.
But the resulting world points end up far away from each other: P1(-79.48, 74.04, 98.77), P2(-51.52, 88.38, 90.03).
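
Concretely, this is the call I make for the two points (a minimal sketch using calPworld from above; I assume p1 comes from the first recorded frame and p2 from the second, with the depths already divided by the scale factor of 100):

# both rows share the same orientation; quaternion reordered to (x, y, z, w) for scipy
q = [-3.12201e-09, 3.57614e-07, 0.00873036, 0.999962]
t1 = [38.0266, -57.7638, -43.7433]
t2 = [38.3268, -74.9576, -43.7433]

P1 = calPworld(214, 280, 55.029296875, q, t1)
P2 = calPworld(654, 280, 46.2890625, q, t2)
print(P1.ravel())  # roughly (-79.48, 74.04, 98.77)
print(P2.ravel())  # roughly (-51.52, 88.38, 90.03)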
What's wrong with my calculation?


@OPyshkin

Have you rectified the input images?

@StardustLu (Author)

> Have you rectified the input images?

Thanks for your reply :)
I did nothing to the RGBD images. I just edited settings.json and recorded them, then computed the correspondences from the RGBD image and the extrinsic parameters in airsim_rec.txt.
