Merge pull request #79 from NVIDIA-ISAAC-ROS/release-dp3
Isaac ROS 0.30.0 (DP3)
jaiveersinghNV authored Apr 6, 2023
2 parents 711e844 + 0e81a94 commit 90f8890
Showing 30 changed files with 2,003 additions and 176 deletions.
3 changes: 3 additions & 0 deletions .gitattributes
@@ -1,3 +1,6 @@
# Ignore Python files in linguist
*.py linguist-detectable=false

# Images
*.gif filter=lfs diff=lfs merge=lfs -text
*.jpg filter=lfs diff=lfs merge=lfs -text
174 changes: 108 additions & 66 deletions README.md


13 changes: 11 additions & 2 deletions docs/elbrus-slam.md
@@ -18,6 +18,15 @@ At this moment, a connection is added to the PoseGraph which makes a loop from t

The procedure for adding landmarks is designed so that if a landmark is not observed in the place where it was expected, it is marked for eventual deletion. This allows Elbrus to be used over changing terrain.

Along with visual data, Elbrus can use Inertial Measurement Unit (IMU) measurements. It automatically switches to the IMU when VO is unable to estimate a pose, for example, when there is dark lighting or there are long solid surfaces in front of the camera. Using an IMU usually leads to a significant performance improvement under poor visual conditions.

If the image input degrades severely (for example, the lights are turned off, or there is dramatic motion blur from a bump while driving), the following motion estimation algorithms maintain acceptable pose tracking quality (a sketch for verifying the IMU input follows this list):

* The IMU readings integrator provides acceptable pose tracking quality for up to about 1 second.

* In case of IMU failure, the constant velocity integrator continues to apply the last linear and angular velocities reported by Stereo VIO before the failure. This provides acceptable pose tracking quality for about 0.5 seconds.
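
As a quick sanity check that IMU fusion is actually in use, you can inspect the node's IMU parameter and the IMU input rate. The snippet below is a minimal sketch: the node and parameter names (`/visual_slam`, `enable_imu`) and the topic name (`/camera/imu`, as used in the RealSense tutorial) are assumptions that depend on your launch configuration, so adjust them to what `ros2 node list` and `ros2 param list` report.

```bash
# Hypothetical node and parameter names; adjust to your launch configuration.
ros2 param get /visual_slam enable_imu

# Confirm IMU data is being published (topic name depends on your camera driver).
ros2 topic hz /camera/imu --window 20
```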

## List of Useful Visualizations

* `visual_slam/vis/observations_cloud` - Point cloud for 2D Features
@@ -28,10 +37,10 @@ The procedure for adding landmarks is designed such that if we do not see a land

## Saving the map

Naturally, we would like to save the stored landmarks and pose graph in a map. We have implemented a ROS2 action to save the map to the disk called `SaveMap`.
Naturally, we would like to save the stored landmarks and pose graph in a map. We have implemented a ROS 2 action called `SaveMap` to save the map to disk.

## Loading and Localization in the Map

Once the map has been saved to the disk, it can be used later to localize the robot. To load the map into the memory, we have made a ROS2 action called `LoadMapAndLocalize`. It requires a map file path and a prior pose, which is an initial guess of where the robot is in the map. Given the prior pose and current set of camera frames, Elbrus tries to find the pose of the landmarks in the requested map that matches the current set. If the localization is successful, Elbrus will load the map in the memory. Otherwise, it will continue building a new map.
Once the map has been saved to disk, it can be used later to localize the robot. To load the map into memory, we have made a ROS 2 action called `LoadMapAndLocalize`. It requires a map file path and a prior pose, which is an initial guess of where the robot is in the map. Given the prior pose and the current set of camera frames, Elbrus tries to find the pose of the landmarks in the requested map that matches the current set. If the localization is successful, Elbrus will load the map into memory. Otherwise, it will continue building a new map.

Both `SaveMap` and `LoadMapAndLocalize` can take some time to complete. Hence, they are designed to be asynchronous to avoid interfering with odometry calculations.
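
As a quick way to confirm that both actions are available once the node is running, you can list the action servers and inspect the goal definition. This is a minimal sketch, assuming the default action names used in the Isaac Sim tutorial below:

```bash
# List the action servers exposed by the visual SLAM node.
ros2 action list | grep visual_slam

# Inspect the goal, result, and feedback fields of the SaveMap action.
ros2 interface show isaac_ros_visual_slam_interfaces/action/SaveMap
```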
30 changes: 16 additions & 14 deletions docs/tutorial-isaac-sim.md
@@ -4,7 +4,9 @@

## Overview

This tutorial walks you through a pipeline to estimate 3D pose of the camera with [Visual SLAM](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_visual_slam) using images from Isaac Sim.
This tutorial walks you through a graph to estimate the 3D pose of the camera with [Visual SLAM](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_visual_slam) using images from Isaac Sim.

Last validated with [Isaac Sim 2022.2.1](https://docs.omniverse.nvidia.com/app_isaacsim/app_isaacsim/release_notes.html#id1)

## Tutorial Walkthrough

@@ -25,19 +27,19 @@ This tutorial walks you through a pipeline to estimate 3D pose of the camera wit
```

4. Install and launch Isaac Sim following the steps in the [Isaac ROS Isaac Sim Setup Guide](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_common/blob/main/docs/isaac-sim-sil-setup.md)
5. Open up the Isaac ROS Common USD scene (using the "content" window) located at:
5. Open up the Isaac ROS Common USD scene (using the *Content* tab) located at:

`omniverse://localhost/NVIDIA/Assets/Isaac/2022.1/Isaac/Samples/ROS2/Scenario/carter_warehouse_apriltags_worker.usd`.
```text
http://omniverse-content-production.s3-us-west-2.amazonaws.com/Assets/Isaac/2022.2.1/Isaac/Samples/ROS2/Scenario/carter_warehouse_apriltags_worker.usd
```

And wait for it to load completely.
> **Note:** To use a different server, replace `localhost` with `<your_nucleus_server>`
6. Go to the stage tab and select `/World/Carter_ROS/ROS_Cameras/ros2_create_camera_right_info`, then in properties tab -> Compute Node -> Inputs -> stereoOffset X change `0` to `-175.92`.
6. Go to the *Stage* tab and select `/World/Carter_ROS/ROS_Cameras/ros2_create_camera_right_info`, then in *Property* tab *-> OmniGraph Node -> Inputs -> stereoOffset X* change `0` to `-175.92`.
<div align="center"><img src="../resources/Isaac_sim_set_stereo_offset.png" width="500px"/></div>

7. Enable the right camera for a stereo image pair. Go to the stage tab and select `/World/Carter_ROS/ROS_Cameras/enable_camera_right`, then tick the `Condition` checkbox.
7. Enable the right camera for a stereo image pair. Go to the *Stage* tab and select `/World/Carter_ROS/ROS_Cameras/enable_camera_right`, then tick the *Condition* checkbox.
<div align="center"><img src="../resources/Isaac_sim_enable_stereo.png" width="500px"/></div>
8. Press **Play** to start publishing data from the Isaac Sim application.
<div align="center"><img src="../resources/Isaac_sim_visual_slam.png" width="800px"/></div>
<div align="center"><img src="../resources/Isaac_sim_play.png" width="800px"/></div>
9. In a separate terminal, start `isaac_ros_visual_slam` using the launch files:

```bash
@@ -67,17 +69,17 @@ This tutorial walks you through a pipeline to estimate 3D pose of the camera wit

## Saving and using the map

As soon as you start the visual SLAM node, it starts storing the landmarks and the pose graph. You can save them in a map and store the map onto a disk. Make a call to the `SaveMap` ROS2 Action with the following command:
As soon as you start the visual SLAM node, it starts storing the landmarks and the pose graph. You can save them in a map and store the map onto a disk. Make a call to the `SaveMap` ROS 2 Action with the following command:

```bash
ros2 action send_goal /visual_slam/save_map isaac_ros_visual_slam_interfaces/action/SaveMap "{map_url: /path/to/save/the/map}"
```

</br>
<br>
<div align="center"><img src="../resources/Save_map.png" width="400px"/></div>
</br>
<br>
<div align="center"><img src="../resources/RViz_isaac_sim_mapping.png" width="800px" alt="Sample run before saving the map" title="Sample run before saving the map."/></div>
</br>
<br>

Now, you will try to load and localize in the previously saved map. First, stop the `visual_slam` node launched for creating and saving the map, then relaunch it.

Expand All @@ -88,7 +90,7 @@ ros2 action send_goal /visual_slam/load_map_and_localize isaac_ros_visual_slam_i
```

<div align="center"><img src="../resources/Load_and_localize.png" width="400px"/></div>
</br>
<br>

Once the above step returns success, you have successfully loaded and localized your robot in the map. If it results in failure, the current landmarks from the approximate start location may not match the stored landmarks, and you need to provide another valid prior pose.
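
If localization keeps failing, it can also help to inspect the goal definition and retry with a different prior pose closer to where the robot actually is. This is a sketch only: the goal field names below (`map_url`, `localize_near_point`) are assumptions, so check the output of `ros2 interface show` before sending the goal.

```bash
# Print the goal/result/feedback definition of the LoadMapAndLocalize action.
ros2 interface show isaac_ros_visual_slam_interfaces/action/LoadMapAndLocalize

# Retry with a different prior pose; replace the assumed field names with
# whatever the interface definition above actually reports.
ros2 action send_goal /visual_slam/load_map_and_localize \
  isaac_ros_visual_slam_interfaces/action/LoadMapAndLocalize \
  "{map_url: /path/to/save/the/map, localize_near_point: {x: 1.0, y: 0.5, z: 0.0}}"
```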

@@ -98,7 +100,7 @@ Once the above step returns success, you have successfully loaded and localized
<figcaption>Before Localization</figcaption>
</figure>
</div>
</br>
<br>
<div align="center">
<figure class="image">
<img src="../resources/After_localization.png" width="600px" alt="After localization" title="After localization."/>
267 changes: 267 additions & 0 deletions docs/tutorial-realsense.md
@@ -0,0 +1,267 @@
# Tutorial for Visual SLAM using a RealSense camera with integrated IMU

<div align="center"><img src="../resources/realsense.gif" width="600px"/></div>

## Overview

This tutorial walks you through setting up [Isaac ROS Visual SLAM](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_visual_slam) with a [RealSense camera](https://www.intel.com/content/www/us/en/architecture-and-technology/realsense-overview.html).

> **Note**: The [launch file](../isaac_ros_visual_slam/launch/isaac_ros_visual_slam_realsense.launch.py) provided in this tutorial is designed for a RealSense camera with integrated IMU. If you want to run this tutorial with a RealSense camera without an IMU (like RealSense D435), then change the `enable_imu` param in the launch file to `False` (a sketch for locating this parameter follows these notes).
<!-- Split blockquote -->
> **Note**: This tutorial requires a compatible RealSense camera from the list available [here](https://github.com/NVIDIA-ISAAC-ROS/.github/blob/main/profile/realsense-setup.md#camera-compatibility).
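
If you are not sure whether IMU fusion is enabled in your copy of the launch file, a quick check is to search for the `enable_imu` parameter. This is a minimal sketch, assuming the workspace layout from the Quickstart; inside the container, substitute `/workspaces/isaac_ros-dev` for `${ISAAC_ROS_WS}`.

```bash
# Show where enable_imu is set in the RealSense launch file referenced above.
grep -n "enable_imu" \
  ${ISAAC_ROS_WS}/src/isaac_ros_visual_slam/isaac_ros_visual_slam/launch/isaac_ros_visual_slam_realsense.launch.py
```
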
## Tutorial Walkthrough - VSLAM execution

1. Complete the [RealSense setup tutorial](https://github.com/NVIDIA-ISAAC-ROS/.github/blob/main/profile/realsense-setup.md).

2. Complete the [Quickstart section](../README.md#quickstart) in the main README.

3. \[Terminal 1\] Run `realsense-camera` node and `visual_slam` node

Make sure you have your RealSense camera attached to the system, and then start the Isaac ROS container.

```bash
isaac_ros_container
```

> Or if you did not add the command in [step 1-3 of the quickstart section](../README.md#quickstart):
>
> ```bash
> cd ${ISAAC_ROS_WS}/src/isaac_ros_common && \
> ./scripts/run_dev.sh ${ISAAC_ROS_WS}
> ```

4. \[Terminal 1\] Inside the container, build and source the workspace:

```bash
cd /workspaces/isaac_ros-dev && \
colcon build --symlink-install && \
source install/setup.bash
```

5. \[Terminal 1\] Run the launch file, which launches the example, and wait for 5 seconds:

```bash
ros2 launch isaac_ros_visual_slam isaac_ros_visual_slam_realsense.launch.py
```

6. \[Terminal 2\] Attach a second terminal to check the operation.

Attach another terminal to the running container for issuing other ROS 2 commands.

```bash
isaac_ros_container
```

First, check that you can see all the expected ROS 2 topics.

```bash
ros2 topic list
```

> Output example:
>
> ```bash
> /camera/accel/imu_info
> /camera/accel/metadata
> /camera/accel/sample
> /camera/extrinsics/depth_to_accel
> /camera/extrinsics/depth_to_gyro
> /camera/extrinsics/depth_to_infra1
> /camera/extrinsics/depth_to_infra2
> /camera/gyro/imu_info
> /camera/gyro/metadata
> /camera/gyro/sample
> /camera/imu
> /camera/infra1/camera_info
> /camera/infra1/image_rect_raw
> /camera/infra1/image_rect_raw/compressed
> /camera/infra1/image_rect_raw/compressedDepth
> /camera/infra1/image_rect_raw/theora
> /camera/infra1/metadata
> /camera/infra2/camera_info
> /camera/infra2/image_rect_raw
> /camera/infra2/image_rect_raw/compressed
> /camera/infra2/image_rect_raw/compressedDepth
> /camera/infra2/image_rect_raw/theora
> /camera/infra2/metadata
> /parameter_events
> /rosout
> /tf
> /tf_static
> /visual_slam/imu
> /visual_slam/status
> /visual_slam/tracking/odometry
> /visual_slam/tracking/slam_path
> /visual_slam/tracking/vo_path
> /visual_slam/tracking/vo_pose
> /visual_slam/tracking/vo_pose_covariance
> /visual_slam/vis/gravity
> /visual_slam/vis/landmarks_cloud
> /visual_slam/vis/localizer
> /visual_slam/vis/localizer_loop_closure_cloud
> /visual_slam/vis/localizer_map_cloud
> /visual_slam/vis/localizer_observations_cloud
> /visual_slam/vis/loop_closure_cloud
> /visual_slam/vis/observations_cloud
> /visual_slam/vis/pose_graph_edges
> /visual_slam/vis/pose_graph_edges2
> /visual_slam/vis/pose_graph_nodes
> /visual_slam/vis/velocity
> ```

Check the output frequency of the `realsense-camera` node.

```bash
ros2 topic hz /camera/infra1/image_rect_raw --window 20
```
> Example output:
>
> ```bash
> average rate: 89.714
> min: 0.011s max: 0.011s std dev: 0.00025s window: 20
> average rate: 90.139
> min: 0.010s max: 0.012s std dev: 0.00038s window: 20
> average rate: 89.955
> min: 0.011s max: 0.011s std dev: 0.00020s window: 20
> average rate: 89.761
> min: 0.009s max: 0.013s std dev: 0.00074s window: 20
> ```
>
> Press `Ctrl` + `C` to stop the output.

You can also check the frequency of the IMU topic.

```bash
ros2 topic hz /camera/imu --window 20
```
> Example output:
>
> ```bash
> average rate: 199.411
> min: 0.004s max: 0.006s std dev: 0.00022s window: 20
> average rate: 199.312
> min: 0.004s max: 0.006s std dev: 0.00053s window: 20
> average rate: 200.409
> min: 0.005s max: 0.005s std dev: 0.00007s window: 20
> average rate: 200.173
> min: 0.004s max: 0.006s std dev: 0.00028s window: 20
> ```
Lastly, check if you are getting the output from the `visual_slam` node at the same rate as the input.
```bash
ros2 topic hz /visual_slam/tracking/odometry --window 20
```
> Example output
>
> ```bash
> average rate: 58.086
> min: 0.002s max: 0.107s std dev: 0.03099s window: 20
> average rate: 62.370
> min: 0.001s max: 0.109s std dev: 0.03158s window: 20
> average rate: 90.559
> min: 0.009s max: 0.013s std dev: 0.00066s window: 20
> average rate: 85.612
> min: 0.002s max: 0.100s std dev: 0.02079s window: 20
> average rate: 90.032
> min: 0.010s max: 0.013s std dev: 0.00059s window: 20
> ```
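
You can also peek at the status topic to confirm the node is tracking. This is a sketch; the exact fields of the status message depend on the `isaac_ros_visual_slam_interfaces` release, so treat the output just as a liveness check.

```bash
# Stream the visual SLAM status messages (Ctrl+C to stop).
ros2 topic echo /visual_slam/status
```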

## Tutorial Walkthrough - Visualization

At this point, you have two options for checking the `visual_slam` output:

- **Live visualization**: Run RViz2 live while the `realsense-camera` and `visual_slam` nodes are running.
- **Offline visualization**: Record a rosbag file and check the recorded data offline (possibly on a different machine).

Running RViz2 on a remote PC over the network is difficult, especially when subscribing to image topics, because of the added burden on the ROS 2 network transport. Working with RViz2 in an X11-forwarded window is also difficult because of network speed limitations. Therefore, if you are running `visual_slam` on a Jetson, it is generally recommended **NOT** to evaluate with live visualization (the first option).

### Live visualization

1. \[Terminal 2\] Open RViz2 from the second terminal:

```bash
rviz2 -d src/isaac_ros_visual_slam/isaac_ros_visual_slam/rviz/realsense.cfg.rviz
```
As you move the camera, the position and orientation of the frames should correspond to how the camera moved relative to its starting pose.

<div align="center"><img src="../resources/realsense.gif" width="600px"/></div>

### Offline visualization

1. \[Terminal 2\] Save a rosbag file

Record the output in a rosbag file (along with the input data for later visual inspection).
```bash
export ROSBAG_NAME=courtyard-d435i
ros2 bag record -o ${ROSBAG_NAME} \
/camera/imu /camera/accel/metadata /camera/gyro/metadata \
/camera/infra1/camera_info /camera/infra1/image_rect_raw \
/camera/infra1/metadata \
/camera/infra2/camera_info /camera/infra2/image_rect_raw \
/camera/infra2/metadata \
/tf_static /tf \
/visual_slam/status \
/visual_slam/tracking/odometry \
/visual_slam/tracking/slam_path /visual_slam/tracking/vo_path \
/visual_slam/tracking/vo_pose /visual_slam/tracking/vo_pose_covariance \
/visual_slam/vis/landmarks_cloud /visual_slam/vis/loop_closure_cloud \
/visual_slam/vis/observations_cloud \
/visual_slam/vis/pose_graph_edges /visual_slam/vis/pose_graph_edges2 \
/visual_slam/vis/pose_graph_nodes
ros2 bag info ${ROSBAG_NAME}
```

If you plan to run the rosbag on a remote machine (PC) for evaluation, you can send the rosbag file to your remote machine.

```bash
export IP_PC=192.168.1.100
# Set PC_USER to your username on the remote machine.
export PC_USER=<pc_username>
scp -r ${ROSBAG_NAME} ${PC_USER}@${IP_PC}:/home/${PC_USER}/workspaces/isaac_ros-dev/
```

2. \[Terminal A\] Launch RViz2

> If you are SSH-ing into the Jetson from your PC, make sure you have enabled X forwarding by adding the `-X` option to the SSH command.
>
> ```bash
> ssh -X ${USERNAME_ON_JETSON}@${IP_JETSON}
> ```
Launch the Isaac ROS container.
```bash
isaac_ros_container
```

Run RViz2 with a configuration file for visualizing the set of messages from the Visual SLAM node.

```bash
cd /workspaces/isaac_ros-dev
rviz2 -d src/isaac_ros_visual_slam/isaac_ros_visual_slam/rviz/vslam_keepall.cfg.rviz
```

3. \[Terminal B\] Play back the recorded rosbag

Attach another terminal to the running container.
```bash
isaac_ros_container
```
Play the recorded rosbag file.
```bash
ros2 bag play ${ROSBAG_NAME}
```

RViz should start showing a visualization like the following.

<div align="center"><img src="../resources/RViz_0217-cube_vslam-keepall.png" width="600px"/></div>
