Integration with point clouds #178
-
Hello. First and foremost, I congratulate the authors on their impressive work. I would like to try the cuRobo package with obstacles described by a point cloud (e.g., data that would come from a LiDAR). It seems to me that the current method requires transforming the point cloud into RGBD data by simulating a camera and then using cuRobo's interface with nvblox. Is there a more convenient way to handle this? Thanks in advance.
Replies: 3 comments
-
Nvblox has a LIDAR sensor input; however, that input is projected to a depth image and then integrated: https://nvblox.readthedocs.io/en/pre-release/classnvblox_1_1Lidar.html The way to use it currently would be to convert the point cloud to a depth image and send that to cuRobo, as we haven't exposed LIDAR as an input in the PyTorch interface.
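This is not an official cuRobo API, just a minimal sketch of the "project the point cloud to a depth image" step in plain PyTorch. The virtual camera pose, intrinsics, and image size are assumptions you would choose to cover your workspace; the resulting depth image, together with that pose and those intrinsics, is what you would then feed into the depth-image integration path.

```python
import torch


def pointcloud_to_depth_image(
    points_world: torch.Tensor,   # (N, 3) points in the world frame
    cam_pose: torch.Tensor,       # (4, 4) world-from-camera transform of the virtual camera
    intrinsics: torch.Tensor,     # (3, 3) pinhole intrinsics [[fx, 0, cx], [0, fy, cy], [0, 0, 1]]
    height: int,
    width: int,
) -> torch.Tensor:
    """Project a point cloud onto a virtual pinhole camera, keeping the closest depth per pixel."""
    # Transform points into the camera frame (camera-from-world = inverse of cam_pose).
    world_to_cam = torch.linalg.inv(cam_pose)
    pts_h = torch.cat([points_world, torch.ones_like(points_world[:, :1])], dim=-1)
    pts_cam = (world_to_cam @ pts_h.T).T[:, :3]

    # Keep only points in front of the camera.
    z = pts_cam[:, 2]
    valid = z > 1e-3
    pts_cam, z = pts_cam[valid], z[valid]

    # Pinhole projection to integer pixel coordinates.
    fx, fy = intrinsics[0, 0], intrinsics[1, 1]
    cx, cy = intrinsics[0, 2], intrinsics[1, 2]
    u = (fx * pts_cam[:, 0] / z + cx).long()
    v = (fy * pts_cam[:, 1] / z + cy).long()

    # Discard points that fall outside the image.
    in_bounds = (u >= 0) & (u < width) & (v >= 0) & (v < height)
    u, v, z = u[in_bounds], v[in_bounds], z[in_bounds]

    # Z-buffer: write far points first so nearer points overwrite them; 0 marks "no return".
    depth = torch.zeros(height, width, dtype=points_world.dtype, device=points_world.device)
    flat_idx = v * width + u
    order = torch.argsort(z, descending=True)
    depth.view(-1)[flat_idx[order]] = z[order]
    return depth
```

Note that a single pinhole view cannot capture a full 360° LiDAR scan, so you may need to render several virtual cameras (or crop the cloud to the region in front of the robot) and integrate each resulting depth image separately.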
-
Thanks! I have another question, but it is about a different topic, so I will open another discussion.
-
Excuse me,