
Pipeline Implementation #43

Open
j3soon opened this issue Aug 14, 2024 · 6 comments
@j3soon (Owner) commented Aug 14, 2024

I'm thinking about the best approach for implementing pipelines based on existing workspaces (such as #27 and #28).

The primary goal is ease of use while keeping code duplication minimal.

For example, the VLP-16 to Husky pipeline could be built by symlinking vlp_ws and husky_ws (and gitignoring the links), which would fully reuse the code of those two workspaces without duplication. I don't think this is the best way, though...

I’m currently thinking of introducing only the required code and configs for vlp_to_husky_ws, where:

  • vlp_to_husky_ws/src/dummy_controller/dummy_controller/publisher.py publishes the commands.
  • vlp_to_husky_ws/docker/compose.yaml extends the compose files from vlp_ws and husky_ws.
  • and other necessary files.
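For the compose part, one way this could look (a rough sketch only; all file paths and service names below are assumptions, not the repo's actual layout) is to use Compose's service-level `extends`:

```yaml
# vlp_to_husky_ws/docker/compose.yaml (sketch; paths/service names are assumptions)
services:
  vlp-ws:
    extends:
      file: ../../vlp_ws/docker/compose.yaml
      service: vlp-ws          # reuse the VLP-16 container definition as-is
  husky-ws:
    extends:
      file: ../../husky_ws/docker/compose.yaml
      service: husky-ws        # reuse the Husky container definition as-is
  vlp-to-husky-ws:
    build: .                   # only the pipeline-specific glue lives here
    command: ros2 run dummy_controller publisher
```

Each extended service reuses the original container definition, while the pipeline-specific service only adds the glue code. Note that `extends` intentionally does not carry over `depends_on`, so any inter-service dependencies would need to be restated in this file.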

This way, we can treat vlp_to_husky_ws just like a normal workspace (that depends on some workspaces) with minimal code duplication. Moreover, we can easily integrate existing workspaces such as gazebo_world_ws or isaac_sim_ws in the future.

I'm curious about @YuZhong-Chen's and @Assume-Zhan's thoughts on this. I look forward to your comments when you have time.

@YuZhong-Chen (Collaborator) commented:
Reusing a previous workspace would make management more convenient, but some aspects might not be ideal. For example, if we want to simulate the Husky and the ZED in Gazebo, our URDF file must include both. However, the files for the two robots will be placed in different install folders within their respective workspaces/containers, leading to import failures and other issues. You can take a look here for more details. Unless we restructure the entire workspace to use packages for separation, I don't think this will be a good approach.

@j3soon (Owner, Author) commented Aug 15, 2024

Thanks for your comment! I understand that some aspects may not be ideal, but the current copy-and-paste approach would lead to maintenance difficulties in the long run. (There could be a lot of duplicate code, e.g., five near-identical copies of husky_ws scattered throughout this repo.)

How about we only copy necessary files and use workspace overlay/underlay? Something like:

zed_to_husky_ws
└── src
    ├── dummy_controller
    ├── husky_control # this may not be required due to overlaying?
    ├── husky_description # this contains the zed+husky URDF (maybe rename to `robot_description`?)
    └── husky_gazebo # (maybe rename to `robot_gazebo`?)

and we'd source the husky_ws, zed_ws, and zed_to_husky_ws environments, in that order?
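The overlay/underlay chaining above could be sketched as a small entrypoint script (a sketch only; all workspace paths are assumptions):

```shell
#!/bin/bash
# Hypothetical helper: source each workspace's setup file in
# underlay-to-overlay order. Later workspaces shadow earlier ones, so
# zed_to_husky_ws (the top overlay) can override same-named packages
# from husky_ws/zed_ws (the underlays).
source_workspaces() {
  local ws
  for ws in "$@"; do
    if [ -f "$ws/install/setup.bash" ]; then
      # shellcheck disable=SC1091
      source "$ws/install/setup.bash"
    else
      echo "skipping $ws (no install/setup.bash)"
    fi
  done
}

# Underlays first, overlay last (paths are assumptions):
source_workspaces ~/husky_ws ~/zed_ws ~/zed_to_husky_ws
```

The key design point is ordering: whichever setup file is sourced last wins when two workspaces provide a package with the same name, which is exactly what lets the overlay override underlay packages.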

If this is possible, we can minimize code duplication (by reusing underlay packages) and allow code modifications (by overriding the underlay packages through overlay).

So the Gazebo simulation would run entirely in the zed_to_husky_ws container (reusing packages from the other workspaces, possibly after installing some extra simulation dependencies), without requiring other containers. For real-world deployment, the zed_to_husky_ws container would be reduced to running only the dummy_controller package, with the extra zed_ws and husky_ws containers running alongside it to interface with the real hardware.

We can configure this by using a docker compose file to extend the compose configs of zed_ws and husky_ws. This compose file can also distribute the containers across different hardware (for example, laptop and Jetson boards).


Sidenote: I think restructuring the current workspaces into packages may not be feasible.

@YuZhong-Chen (Collaborator) commented Aug 15, 2024

I can roughly understand your idea, and I like it! Conceptually, I think it's quite similar to how the kobuki_driver_ws was designed before. Since the method used there was feasible, I believe the current solution should also work.

However, when dealing with zed_to_husky_ws, there might be a need to fine-tune some things in husky_ws to make it more convenient. I'll test this out and see how to handle it better!

But first, I want to test using gazebo_world_ws to see if it's possible to reuse parts of husky_ws and make the Husky work in the simulated world inside gazebo_world_ws. Since only husky_ws has already been merged into the main branch, working on zed_to_husky_ws first might cause dependency issues with the changes in zed_ws. So handling gazebo_world_ws first should be relatively simpler.

@j3soon (Owner, Author) commented Aug 15, 2024

Yes, I agree with you! Please try it out in gazebo_world_ws when you have time. Thanks!

If this code structure seems to work, we can continue working on vlp_to_kobuki_ws and vlp_to_husky_ws.

As for the pipelines including ZED or RealSense, we may wait until the workspaces for them are fixed and merged.

It's worth noting that we should also allow vlp_to_husky_ws and vlp_to_kobuki_ws to reuse the worlds in gazebo_world_ws.

@j3soon (Owner, Author) commented Aug 21, 2024

It just occurred to me that it may also be possible for husky_ws to reuse the citysim package in gazebo_world_ws, thereby reducing the number of duplicated copies of citysim in this repo.

This can be addressed after the gazebo_world_ws PR though.

@j3soon (Owner, Author) commented Sep 6, 2024

Just came across the multi-machine support feature in launch files, which may be useful for running pipelines across machines. Although it is supported in ROS 1, it isn't supported in ROS 2 yet.

References:


Pipelines may require launching multiple containers across workspaces at once (potentially across machines), and it would be preferable if a single launch file could launch nodes across containers (and machines).

Some random thoughts:

  • Launching multiple containers across workspaces.
    => A docker compose file for the pipeline that extends compose files from other workspaces.
  • Launching multiple containers across machines.
    => A bash script that directly ssh-es into each machine and runs docker compose up (using pre-generated ssh keys and hostnames to allow direct access).
  • Launching nodes across containers and workspaces.
    => Use ExecuteProcess in launch files to ssh into other containers (on the host or on another machine) to launch the nodes.
    (This would require sshd to be installed in each container, and a unique hostname for each container, though...)
    References:
