
Using sensor data QoS profile. #101

Closed
alluring-mushroom opened this issue Sep 7, 2022 · 5 comments

Comments

@alluring-mushroom

We have a Linux ROS2 system with many sources of high-bandwidth data, including several SICK LMS511 lidars. We are locking down our QoS policies, and I have noticed that the SICK driver uses the default QoS policy.

We are converting many topics that don't need to be reliable to best_effort, specifically rmw_qos_profile_sensor_data, and I'm curious what the general opinion is on using this for the SICKs, as they seem to be exactly what this profile targets. Specifically, it seems unlikely anyone would need to receive every single point cloud without loss. Perhaps this is a useful change, and could be implemented as an option to keep backwards compatibility?
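For reference, the two presets being compared differ mainly in reliability and queue depth. A minimal sketch in plain Python (field values per the ROS 2 rmw preset definitions; these dicts are an illustration, not the real rmw C structs):

```python
# Sketch of the two rmw QoS presets discussed in this issue, as plain dicts
# (values per the ROS 2 rmw defaults; not the actual rmw_qos_profile_t structs).
default_profile = {
    "reliability": "reliable",     # publisher retransmits until acknowledged
    "history": "keep_last",
    "depth": 10,
}
sensor_data_profile = {
    "reliability": "best_effort",  # drop samples rather than block and retry
    "history": "keep_last",
    "depth": 5,
}

# The request in this issue: let the driver publish with the
# sensor-data preset instead of the default one.
changed = {k for k in default_profile if default_profile[k] != sensor_data_profile[k]}
print(sorted(changed))  # → ['depth', 'reliability']
```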

Thank you for your time.

@rostest
Collaborator

rostest commented Sep 8, 2022

Thanks for your feedback. In general, we recommend designing the network topology to provide sufficient bandwidth for transmitting the ROS messages when using the lidars.

We will add your request to our backlog and plan it for the next release, implementing it so that the QoS profile can be configured in the launch file. The next release is planned for the end of October 2022.

@alluring-mushroom
Author

Thank you for your consideration and prompt response 😄

@aiplemaSICKAG
Collaborator

Hi,
in the meantime you can always change the QoS on the subscriber side to best_effort. Since the effective policy is negotiated between publisher and subscriber, this results in best-effort data handling (see the section "QoS compatibilities" on the page that you refer to: https://docs.ros.org/en/humble/Concepts/About-Quality-of-Service-Settings.html#qos-compatibilities).

Best regards,
Manuel
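The compatibility rule Manuel refers to can be sketched as a toy model (plain Python, not the rmw API; the only incompatible reliability pairing is a best-effort publisher with a reliable subscriber):

```python
# Toy model of ROS 2 reliability QoS negotiation (illustration, not the rmw API).
# A connection is made unless the subscriber demands more than the publisher
# offers; delivery over the link then behaves like the weaker of the two policies.

def connects(pub: str, sub: str) -> bool:
    # "reliable" offers strictly more than "best_effort", so the only
    # incompatible pairing is best-effort publisher + reliable subscriber.
    return not (pub == "best_effort" and sub == "reliable")

def effective(pub: str, sub: str) -> str:
    # The link only behaves reliably if both endpoints request reliability.
    assert connects(pub, sub)
    return "reliable" if pub == sub == "reliable" else "best_effort"

# The case from this thread: reliable driver, best-effort subscriber.
print(effective("reliable", "best_effort"))  # → best_effort
```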

@alluring-mushroom
Author

@aiplemaSICKAG
Oh, that is useful advice. I knew the profiles were compatible, but I assumed the link would still appear to behave reliably, with the publisher retrying (since it is reliable) and the subscriber simply not caring either way.

Thank you for this clarification!

rostest pushed a commit that referenced this issue Nov 22, 2022
Update: Dynamical configuration of an additional pointcloud transform by rosparam, #104
Update: Configuration of ROS quality of service by launchfile, #101
@rostest
Collaborator

rostest commented Nov 22, 2022

The QoS can now be configured in the launchfiles by parameter ros_qos:

        <!-- Configuration of ROS quality of service: -->
        <!-- On ROS-1, parameter "ros_qos" sets the queue_size of ros publisher -->
        <!-- On ROS-2, parameter "ros_qos" sets the QoS of ros publisher to one of the following predefined values: -->
        <!-- 0: rclcpp::SystemDefaultsQoS(), 1: rclcpp::ParameterEventsQoS(), 2: rclcpp::ServicesQoS(), 3: rclcpp::ParametersQoS(), 4: rclcpp::SensorDataQoS() -->
        <!-- See e.g. https://docs.ros2.org/dashing/api/rclcpp/classrclcpp_1_1QoS.html#ad7e932d8e2f636c80eff674546ec3963 for further details about ROS2 QoS -->
        <!-- Default value is -1, i.e. queue_size=10 on ROS-1 or qos=rclcpp::SystemDefaultsQoS() on ROS-2 is used. -->
        <param name="ros_qos" type="int" value="-1"/>  <!-- Default QoS=-1, i.e. do not overwrite; use queue_size=10 on ROS-1 or qos=rclcpp::SystemDefaultsQoS() on ROS-2 -->
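For example, to publish with the sensor-data profile that this issue asked for (value 4 in the mapping above), the parameter would be set as follows (hypothetical launch-file excerpt):

```xml
<!-- Hypothetical excerpt: select rclcpp::SensorDataQoS() (best-effort) on ROS-2 -->
<param name="ros_qos" type="int" value="4"/>
```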

Please check out the latest release 2.8.14 and rebuild. Thanks!
