Describe the bug
Thank you very much for providing the open source code. I have a few questions:
1) After completing the dynamic point integration in the preprocessing step, a PLY file (dense cloud) and a corresponding image are saved. Since the dense point cloud is generated from all the LiDAR data in the bag file, how do you determine which image corresponds to this saved dense LiDAR point cloud?
2) To reproject the LiDAR data onto the image, I used the provided example data, extracted the images and LiDAR point clouds, and saved them by their timestamps. I then selected the LiDAR point cloud and the image whose timestamps are closest to each other and used the script below to reproject the point cloud onto the image. The extrinsic values for your example data were obtained with SuperGlue, and the camera intrinsics are the ones specified in this repository's documentation. The output does not look correct. Could there be an issue with the script?
To Reproduce
import cv2
import numpy as np

# Load the image and LiDAR scan saved with the closest timestamps
image = cv2.imread("/mnt/3e91585b-beda-4cd4-aa01-6d4d656f2d2a/extracted_ouster/1679988354.830.png")
point_cloud = np.load("/mnt/3e91585b-beda-4cd4-aa01-6d4d656f2d2a/extracted_ouster/1679988354.860.npy")
point_cloud = point_cloud[:, :3]  # keep only the x, y, z columns

# 4x4 extrinsic matrix (assumed LiDAR -> camera) estimated with SuperGlue
projection_matrix = np.float64([
    [-0.03116, -0.01028, 0.99946, 0.05415],
    [-0.99951, -0.00016, -0.03117, 0.00762],
    [0.00048, -0.99995, -0.01027, 0.10261],
    [0.00000, 0.00000, 0.00000, 1.00000]
])

def fuse_lidar_camera(image, point_cloud, projection_matrix):
    # Rotation as a Rodrigues vector and translation from the extrinsic matrix
    rvec, _ = cv2.Rodrigues(projection_matrix[:3, :3])
    tvec = projection_matrix[:3, 3]

    # Camera intrinsics and distortion coefficients from the repository documentation
    camera_matrix = np.float64([
        [1452.711762456289, 0, 1265.25895179213],
        [0, 1455.877531619469, 1045.818593664107],
        [0, 0, 1]
    ])
    distortion = np.float64([
        [-0.04203564850455424, 0.0873170980751213, 0.002386381727224478,
         0.005629700706305988, -0.04251149335870252]
    ])

    # Discard points that end up behind the camera; they would otherwise be
    # projected to spurious pixel locations
    points_cam = (projection_matrix[:3, :3] @ point_cloud.T).T + tvec
    point_cloud = point_cloud[points_cam[:, 2] > 0]

    # Calculate distances to color the points by range
    distances = np.linalg.norm(point_cloud, axis=1)
    max_distance = distances.max()

    projected_points, _ = cv2.projectPoints(point_cloud.reshape(-1, 1, 3),
                                            rvec, tvec, camera_matrix, distortion)

    # Map normalized distances to a JET colormap; reshape to Nx1 so
    # applyColorMap returns one color per point
    distance_normalized = (255 * (1 - distances / max_distance)).astype(np.uint8)
    colormap = cv2.applyColorMap(distance_normalized.reshape(-1, 1), cv2.COLORMAP_JET)

    for i, point in enumerate(projected_points):
        x, y = int(round(point[0][0])), int(round(point[0][1]))
        if 0 <= x < image.shape[1] and 0 <= y < image.shape[0]:
            color = colormap[i][0].tolist()          # color for this point
            cv2.circle(image, (x, y), 2, color, -1)  # draw the colored point
    return image

result_image = fuse_lidar_camera(image, point_cloud, projection_matrix)

# Downscale the result for display and save it to disk
scale_percent = 50  # percentage of original size
width = int(result_image.shape[1] * scale_percent / 100)
height = int(result_image.shape[0] * scale_percent / 100)
resized_image = cv2.resize(result_image, (width, height), interpolation=cv2.INTER_AREA)

cv2.imwrite('fused.png', resized_image)
cv2.namedWindow("fused", cv2.WINDOW_NORMAL)  # resizable window
cv2.imshow("fused", resized_image)
cv2.waitKey(0)
cv2.destroyAllWindows()
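One thing I am unsure about is the direction of the extrinsic. The script above assumes the 4x4 matrix maps LiDAR points into the camera frame; if the calibration result is actually expressed in the opposite direction, it would have to be inverted first. A minimal sketch of that check (the variable names are mine, and whether the inversion is needed is exactly my question):

# Sketch: if the estimated transform is camera-to-LiDAR rather than
# LiDAR-to-camera, invert it before projecting (assumption on my side;
# the direction depends on how the calibration result is defined).
T = projection_matrix
R, t = T[:3, :3], T[:3, 3]
T_inv = np.eye(4)
T_inv[:3, :3] = R.T
T_inv[:3, 3] = -R.T @ t
# result_image = fuse_lidar_camera(image, point_cloud, T_inv)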
The image and LiDAR point cloud were extracted based on timestamps as described above; both are attached here for your reference.
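For completeness, this is roughly how the timestamp pairing was done (a minimal sketch; the directory name and the convention of naming files by their timestamp in seconds are my own setup, not part of this repository):

# Pair each extracted image with the LiDAR scan whose timestamp is closest.
# File names are assumed to be the message timestamps in seconds.
from pathlib import Path
import numpy as np

def nearest_pairs(image_dir, cloud_dir):
    images = sorted(Path(image_dir).glob("*.png"), key=lambda p: float(p.stem))
    clouds = sorted(Path(cloud_dir).glob("*.npy"), key=lambda p: float(p.stem))
    cloud_times = np.array([float(p.stem) for p in clouds])
    pairs = []
    for img in images:
        t = float(img.stem)
        idx = int(np.argmin(np.abs(cloud_times - t)))
        pairs.append((img, clouds[idx], abs(cloud_times[idx] - t)))
    return pairs

# The pair used above has a ~30 ms offset between camera and LiDAR timestamps:
# pairs = nearest_pairs("extracted_ouster", "extracted_ouster")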
Expected behavior
The projected LiDAR points should line up with the corresponding structures in the image, but the reprojection is not as expected.
Screenshots and sample data
The output of the re-projection:
Environment:
OS: Ubuntu 20.04 (Jetson Orin)
ROS version: ROS1 Noetic
Additional context
Other Questions:
3) Could you provide the pixel_to_cm conversion factor or the sensor size of your camera? I need it to measure the reprojection error of the results obtained with your data (a rough sketch of the conversion I have in mind is shown after this list).
4) Do you have an estimate of the maximum range within which the algorithm can still optimize the initial guess? In other words, how inaccurate can the initial guess be for the algorithm to still refine it and produce reliable results?
5) I have been collecting data indoors with my setup, but I notice that a lot of the ceiling is included in the data. Are there any limitations of this method in indoor spaces?
6) Are the parameters used in your code suitable for any type of sensor and environment, or do some of them need to be adjusted for optimal performance with our specific sensors (LiDAR and camera) and environment?
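Regarding question 3, this is the conversion I have in mind (a sketch only; the example focal length is taken from the intrinsics above, while the depth value is a placeholder of mine):

# Approximate metric size of a pixel reprojection error at a given scene depth,
# using the pinhole relation: size ≈ error_px * depth / focal_length.
# Alternatively, with the sensor size, pixel_pitch = sensor_width_mm / image_width_px
# gives the error on the sensor plane instead.
def pixel_error_to_cm(error_px, fx_px, depth_m):
    return error_px * depth_m / fx_px * 100.0

# e.g. a 3 px error at 5 m depth with fx ≈ 1452.7 px:
# pixel_error_to_cm(3, 1452.711762456289, 5.0) ≈ 1.03 cm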
I apologize for the number of questions, but understanding these points is essential for progressing with my own data. Thank you for your time; I appreciate any suggestions you can provide and look forward to your response.