
Add support for multi-sensor scenarios #232

Merged · 22 commits · Oct 27, 2023

Conversation

@nachovizzo (Collaborator) commented Oct 3, 2023

Description

These are the required changes to use KISS-ICP as an odometry source in multi-sensor scenarios. Although the original ticket is about the comparison, we need these changes to obtain the odometry in the base_footprint coordinate frame.

This PR also relates to and fixes #174

Tf tree fix

So the tf logic was a total disaster, and I hope to fix it with this PR. The basic idea is as follows.

  • If the user requires a particular base_frame, in our case base_footprint (but it could be base_link), then all the point clouds must be expressed in that coordinate frame; for this reason, I changed the logic to report the odometry as seen from the base_frame. If the user does not specify a base_frame, I assume a LiDAR-only system (such as a dataset, etc.) and keep the original frame_id from the PointCloud2 message.
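The branching above can be sketched as follows. This is a minimal NumPy sketch, not the wrapper's actual API: the function names and the `T_base_cloud` extrinsic argument are hypothetical stand-ins for the tf lookup.

```python
import numpy as np

def transform_points(T, points):
    """Apply a 4x4 homogeneous transform T to an (N, 3) point array."""
    return points @ T[:3, :3].T + T[:3, 3]

def resolve_cloud_frame(points, cloud_frame, base_frame, T_base_cloud):
    """If the user configured a base_frame, express the cloud in it;
    otherwise keep the original frame_id (LiDAR-only setup)."""
    if base_frame:
        return transform_points(T_base_cloud, points), base_frame
    return points, cloud_frame
```

With an empty `base_frame`, the cloud passes through untouched, which preserves the original dataset-style behavior.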

Pending changes

Acknowledgments


This fix was made possible thanks to Dexory

Signed-off-by: Ignacio Vizzo <ignacio@dexory.com>
@tizianoGuadagnino (Collaborator)

The frame of reference is actually wrong in this change. You are transforming the point cloud into the base frame before deskewing, which is not correct. The location of the points will be completely different in that frame; we should always keep it in the LiDAR frame. The same holds for the downsampling, I think: we will get a different set of "keypoints". It probably doesn't matter for performance, but the pipeline will give different results when using the ROS wrapper versus the Python rosbag dataloader.

@tizianoGuadagnino (Collaborator)

It might be an idea to change the frame of the resulting pose from KISS. So we compute the odometry in the LiDAR frame as before, but then we change the frame to base_link (or whatever) right before publishing into the ROS ecosystem. What do you think? @nachovizzo @benemer

PS: This also implies that the local map must be expressed in the base_link frame before being published, which is probably a pain in the ass.
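The frame change suggested here amounts to conjugating each LiDAR-frame pose by the extrinsic calibration. A minimal sketch, assuming the odometry trajectory starts at identity and a known base-to-LiDAR extrinsic `T_base_lidar` (names hypothetical, not the wrapper's API):

```python
import numpy as np

def change_pose_frame(T_lidar_pose, T_base_lidar):
    """Re-express a LiDAR-frame odometry pose as base_link motion.

    Conjugating by the extrinsic keeps the trajectory starting at
    identity while describing the motion of the base frame instead
    of the motion of the sensor.
    """
    return T_base_lidar @ T_lidar_pose @ np.linalg.inv(T_base_lidar)
```

The local map, by contrast, would only need a single one-off transform into base_link before publishing, but it has to be applied to every map point on every publish, which is the painful part.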

@benemer (Member) commented Oct 4, 2023

> It might be an idea to change the frame of the resulting pose from KISS. So we compute the odometry in the LiDAR frame as before, but then we change the frame to base_link (or whatever) right before publishing into the ROS ecosystem. What do you think? @nachovizzo @benemer
>
> PS: This also implies that the local map must be expressed in the base_link frame before being published, which is probably a pain in the ass.

I agree that de-skewing and downsampling should happen in the sensor frame. We had a look at moving these outside of the registration as a sort of pre-processing step, but this becomes quite messy since we need, for example, the previous poses for de-skewing.

@nachovizzo (Collaborator, Author)

Thanks for the feedback, guys. I'm trying to find an alternative solution to this problem and will be back soon'ish!

@nachovizzo (Collaborator, Author)

@tizianoGuadagnino @benemer ready for another round of reviews ...

@playertr (Contributor)

Should these changes to the base frame logic apply to the core library rather than the ROS wrappers? For instance, I would like to run

kiss_icp_pipeline vehicle.bag --topics /lidar1 /lidar2 /lidar3 --vehicle_base_frame /vehicle_base

to process bags offline. If it were made possible to support this via the C++ or Python API, these changes to the ROS wrapper may be redundant.

(I'm still learning about this library, so let me know if I'm off base here.)

@nachovizzo (Collaborator, Author)

> Should these changes to the base frame logic apply to the core library rather than the ROS wrappers? For instance, I would like to run
>
> kiss_icp_pipeline vehicle.bag --topics /lidar1 /lidar2 /lidar3 --vehicle_base_frame /vehicle_base
>
> to process bags offline. If it were made possible to support this via the C++ or Python API, these changes to the ROS wrapper may be redundant.
>
> (I'm still learning about this library, so let me know if I'm off base here.)

Very good observation 👏. Indeed, this makes sense.

What I can't quite picture (but maybe I'm not thinking big enough) is why a user would be running the Python pipeline in such scenarios. If you have a use case, please comment.

We always tried to keep the core library as simple and as application-independent as possible. The ROS wrappers are an example of an application that consumes the core library, and that's why I liked my colleagues' suggestion of not interfering with the clouds. KISS sees and processes everything in an egocentric world; the only frame it knows is the LiDAR.

Any other ideas?

Thanks for commenting :)

@playertr (Contributor)

>> Should these changes to the base frame logic apply to the core library rather than the ROS wrappers? For instance, I would like to run
>>
>> kiss_icp_pipeline vehicle.bag --topics /lidar1 /lidar2 /lidar3 --vehicle_base_frame /vehicle_base
>>
>> to process bags offline. If it were made possible to support this via the C++ or Python API, these changes to the ROS wrapper may be redundant.
>> (I'm still learning about this library, so let me know if I'm off base here.)
>
> Very good observation 👏. Indeed, this makes sense.
>
> What I can't quite picture (but maybe I'm not thinking big enough) is why a user would be running the Python pipeline in such scenarios. If you have a use case, please comment.
>
> We always tried to keep the core library as simple and as application-independent as possible. The ROS wrappers are an example of an application that consumes the core library, and that's why I liked my colleagues' suggestion of not interfering with the clouds. KISS sees and processes everything in an egocentric world; the only frame it knows is the LiDAR.
>
> Any other ideas?
>
> Thanks for commenting :)

The use case I am considering is offline bag analysis for a vehicle with multiple lidars. In my workflow, I need to create a fused pointcloud throughout an entire trajectory using three lidars. Because this analysis is done offline, the Python pipeline is superior to a pipeline using ROS transport for two reasons:

  • rosbag play/record is limited to real time (or hacky-feeling multiples of real time using the -r flag), while the Python pipeline can process point clouds as fast as it can
  • on systems with limited IO performance, rosbag record can drop messages if IO is saturated, making bag analysis through the ROS wrapper nondeterministic

That said, I appreciate the disciplined design decision that Kiss operates in an egocentric world, using only LiDAR. It is not too costly to convert all three lidars into a common frame prior to feeding a bag to Kiss.
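That conversion step can be sketched as below: stack the per-LiDAR clouds after mapping each into the common vehicle frame, assuming the per-sensor 4x4 extrinsic calibrations are known. Nothing here is KISS-ICP API; it is pre-processing done before the data ever reaches the pipeline.

```python
import numpy as np

def fuse_clouds(clouds, extrinsics):
    """Fuse per-LiDAR clouds into one cloud in the common vehicle frame.

    `clouds[i]` is an (N_i, 3) point array; `extrinsics[i]` is the 4x4
    transform mapping LiDAR i's frame into the vehicle frame.
    """
    fused = [pts @ T[:3, :3].T + T[:3, 3] for pts, T in zip(clouds, extrinsics)]
    return np.vstack(fused)
```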

(as an aside, my project's needs for registering the point cloud to GPS coordinates and for accommodating imprecisely known sensor extrinsics might end up pulling me away from KISS-ICP and toward a more general optimization framework, though I would prefer to keep it simple)

@benemer (Member) commented Oct 18, 2023

Approved from my side :)

nachovizzo and others added 10 commits October 19, 2023 13:56
This also fixes a small bug: the order of the transformation composition was the opposite before, so we were obtaining base2cloud instead of cloud2base. Since we multiply on both sides we can't really see the difference, but it was conceptually wrong.
* Build system changes for tf fix

* Modify params for tf fix

* Add ROS 1 tf fixes similar to ROS 2

* Update rviz config

* Remove unused debug publishers

* Remove unnecessary smart pointers

* Update ROS 1 to match ROS 2 changes
Fixing the CI now is a big pain
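The inverted composition mentioned in the commit message above ("we were obtaining base2cloud") can be illustrated with a minimal sketch; the frame names and values are hypothetical. Composing the chain in the wrong direction silently yields the inverse of the intended transform.

```python
import numpy as np

# cloud2base maps LiDAR-frame coordinates into the base frame:
# a 90-degree yaw plus a fixed mounting offset (made-up values).
Rz = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
cloud2base = np.eye(4)
cloud2base[:3, :3] = Rz
cloud2base[:3, 3] = [0.5, 0.0, 0.2]

# Looking the chain up the other way around gives base2cloud, i.e.
# exactly the inverse of what we wanted.
base2cloud = np.linalg.inv(cloud2base)
```

The two matrices differ whenever the extrinsic is not the identity, which is why the bug was conceptually wrong even if downstream algebra happened to hide it.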
@nachovizzo nachovizzo changed the title Fix TF tree usage in ROS 2 wrapper Add support for multi-sensor scenarios Oct 20, 2023
@benemer (Member) left a comment

Looks good from my side.

@nachovizzo (Collaborator, Author)

@tizianoGuadagnino do you have any concerns, or may I merge it?

@tizianoGuadagnino (Collaborator)

I'm preparing the new release, so I'm gonna merge it myself ;) Thanks to everybody!

@tizianoGuadagnino tizianoGuadagnino merged commit 062cc44 into main Oct 27, 2023
17 checks passed
@tizianoGuadagnino tizianoGuadagnino deleted the nacho/fix_ros_tf_tree_usage branch October 27, 2023 11:09
Successfully merging this pull request may close these issues.

In which frame has the laser scan data to be