Intensity/Range spherical image from Pandar64 point cloud #92

Open
digiamm opened this issue Jan 10, 2021 · 2 comments

Comments


digiamm commented Jan 10, 2021

I have some problems when projecting the Pandar64 3D point cloud into a spherical image. Here is a small snippet:

import math

import numpy as np
import matplotlib.pyplot as plt

import pandaset
from pandaset import geometry

# load dataset
dataset = pandaset.DataSet("/path/to/dataset")
seq001 = dataset["001"]
seq001.load()


np.set_printoptions(precision=4, suppress=True)

# generate projected points
seq_idx = 0
lidar = seq001.lidar

# useless pose? (the transform that uses it is commented out below)
pose = lidar.poses[seq_idx] 
pose_homo_transformation = geometry._heading_position_to_mat(pose['heading'], pose['position'])
print(pose_homo_transformation)

data = lidar.data[seq_idx]
# this retrieves both PandarGT and Pandar64 points
both_lidar_clouds = lidar.data[seq_idx].to_numpy()
# get only points belonging to pandar 64 mechanical lidar
idx_pandar64 = np.where(both_lidar_clouds[:, 5] == 0)[0]
points3d_lidar_xyzi = both_lidar_clouds[idx_pandar64][:, :4]
print("number of points of mechanical lidar Pandar64:", len(idx_pandar64))
print("number of points of lidar PandarGT:", len(data)-len(idx_pandar64))

num_rows = 64                 # the number of laser beams
num_columns = int(360 / 0.2)  # horizontal field of view / horizontal angular resolution

# vertical fov of pandar64, 40 deg
fov_up = math.radians(15)
fov_down = math.radians(-25)

# init empty images
intensity_img = np.full((num_rows, num_columns), fill_value=-1, dtype=np.float32)
range_img = np.full((num_rows, num_columns), fill_value=-1, dtype=np.float32)

# get abs full vertical fov
fov = np.abs(fov_down) + np.abs(fov_up) 

# transform points
# R = pose_homo_transformation[0:3, 0:3]
# t = pose_homo_transformation[0:3, 3]
# # print(R)
# # print(t)
# points3d_lidar_xyzi[:, :3] = points3d_lidar_xyzi[:, :3] @ np.transpose(R)

# get depth of all points
depth = np.linalg.norm(points3d_lidar_xyzi[:, :3], 2, axis=1)

# get scan components
scan_x = points3d_lidar_xyzi[:, 0]
scan_y = points3d_lidar_xyzi[:, 1]
scan_z = points3d_lidar_xyzi[:, 2]
intensity = points3d_lidar_xyzi[:, 3]

# get angles of all points
yaw = -np.arctan2(scan_y, scan_x)
pitch = np.arcsin(scan_z / depth)

# get projections in image coords
proj_x = 0.5 * (yaw / np.pi + 1.0)                  # in [0.0, 1.0]
proj_y = 1.0 - (pitch + abs(fov_down)) / fov        # in [0.0, 1.0]

# scale to image size using angular resolution
proj_x *= num_columns                              # in [0.0, width]
proj_y *= num_rows                                 # in [0.0, height]

# round and clamp for use as index
proj_x = np.floor(proj_x)
out_x_projections = proj_x[np.logical_or(proj_x >= num_columns, proj_x < 0)]  # just to check how many points fall outside the image
proj_x = np.minimum(num_columns - 1, proj_x)
proj_x = np.maximum(0, proj_x).astype(np.int32)   # in [0,W-1]

proj_y = np.floor(proj_y)
out_y_projections = proj_y[np.logical_or(proj_y >= num_rows, proj_y < 0)]  # just to check how many points fall outside the image
proj_y = np.minimum(num_rows - 1, proj_y)
proj_y = np.maximum(0, proj_y).astype(np.int32)   # in [0,H-1]

print("projections out of image: ", len(out_x_projections), len(out_y_projections))
print("percentage of points out of image bound: ", len(out_x_projections)/len(idx_pandar64)*100, len(out_y_projections)/len(idx_pandar64)*100)

# order in decreasing depth
indices = np.arange(depth.shape[0])
order = np.argsort(depth)[::-1]
depth = depth[order]
intensity = intensity[order]
indices = indices[order]
proj_y = proj_y[order]
proj_x = proj_x[order]

# assign to images
range_img[proj_y, proj_x] = depth
intensity_img[proj_y, proj_x] = intensity

plt.figure(figsize=(20, 4), dpi=300)
plt.imshow(intensity_img, cmap='gray', vmin=0.5, vmax=50)
plt.show()

plt.figure(figsize=(20, 4), dpi=300)
plt.imshow(range_img, vmin=0.5, vmax=80)
plt.show()  

This projection produces an image that is cut in half: the lower part is completely empty.

[Figures: depth_projection.png, intensity_projection.png]

I've also tried projecting the raw data into a spherical depth/intensity image (as in the raw_depth_projection tutorial), and I get completely different results in terms of quality and resolution.

[Figures: intensity_from_raw_data, depth_from_raw_data]
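
For reference, the tutorial-style projection does not recompute angles at all: it scatters points directly by their recorded laser and firing indices. A minimal sketch of that approach, assuming the frame DataFrame carries laser_id and column_id columns (as a comment below notes, some downloads do not include them):

frame = lidar.data[seq_idx]
p64 = frame[frame['d'] == 0]                    # Pandar64 returns only

rows = p64['laser_id'].to_numpy().astype(int)   # beam index, 0..63
cols = p64['column_id'].to_numpy().astype(int)  # azimuth/firing index
xyz = p64[['x', 'y', 'z']].to_numpy()

raw_range_img = np.full((rows.max() + 1, cols.max() + 1), -1, dtype=np.float32)
raw_range_img[rows, cols] = np.linalg.norm(xyz, axis=1)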

I don't understand what kind of problem I'm having: whether it is related to the cloud's reference frame, to some Pandar64 internal parameters that I'm messing up, or to something else. I'd really appreciate some help. Thank you in advance.

@zeng-hello-world

This is because the Pandar64's laser beams have an uneven vertical angle step. You should use its actual per-beam vertical angles for your proj_y generation.
#67 (comment)
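
In other words, rows should come from a nearest-beam lookup rather than uniform pitch binning. A minimal sketch of that idea; beam_elevations_deg is a placeholder for the 64 per-beam elevation angles, which should be taken from the Pandar64 datasheet or the sensor's calibration file:

def pitch_to_row(pitch_rad, beam_elevations_deg):
    # beam_elevations_deg: (64,) per-beam elevation angles in degrees,
    # sorted top beam first -- use the real Pandar64 table here
    pitch_deg = np.degrees(pitch_rad)
    # |pitch - elevation| for every (point, beam) pair
    diff = np.abs(pitch_deg[:, None] - beam_elevations_deg[None, :])
    # nearest physical beam gives the row index in [0, 63]
    return np.argmin(diff, axis=1)

# replaces the uniform proj_y computation in the snippet above:
# proj_y = pitch_to_row(pitch, beam_elevations_deg)

Each point then lands on the row of the laser that most plausibly produced it, instead of being squeezed into uniform 40/64 = 0.625 degree bins.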

@MaxChanger

Hello @digiamm @zeyu-hello @xpchuan-95, I have a similar problem. Does raw_depth_projection.ipynb in the tutorials actually work?
In the dataset I downloaded (e.g. pandaset_1), loading pandaset_1/048/lidar/00.pkl.gz shows there are no (laser_id, column_id) variables in it, so I can't do the correct projection either.
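
A quick way to check which columns a given archive actually provides (a hedged sketch using the same devkit calls as the snippet above):

dataset = pandaset.DataSet("/path/to/pandaset_1")
seq = dataset["048"]
seq.load()

print(seq.lidar.data[0].columns)
# base archives typically expose only: x, y, z, i, t, d
# (no laser_id / column_id)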
