QoS depth settings for clients/service ignored #1785
Yes. I can reproduce this problem.

```c
dds_qset_reliability(qos, DDS_RELIABILITY_RELIABLE, DDS_SECS(1));
dds_qset_history(qos, DDS_HISTORY_KEEP_ALL, DDS_LENGTH_UNLIMITED);
```

So your settings don't take effect.
I can reproduce this issue with
@fujitatomoya As far as I can see, the QoS settings are correctly passed to the created DDS writers and readers. But then, whenever the reader has a request ready, the listener takes the request and keeps it internally in the callback here. It is that method that is not respecting the history depth.
@MiguelCompany thanks for the quick response ❗ What would we want to do next? Probably create a sub-issue on https://github.com/ros2/rmw_fastrtps?
That's been there since the initial commit. A.k.a. when I quickly cobbled together a half-working RMW layer without knowing much about people's expectations regarding the behaviour 🙂 From what I can tell looking at the ...

That doesn't necessarily mean I think it wise to let the application (especially the service) control the queue depth in this manner. In general, a service is expected to receive multiple requests at the same time; letting it silently drop requests makes it quite unreliable. There have been many issues about service reliability ...

On the client side it makes somewhat more sense, in that an application really does control how many requests it sends out in rapid succession and how it correlates the responses with the requests. Even so, at first glance the behaviour you get with a shallow history depth seems to fit better with ROS 2's actions than with its service/client mechanism.
Yeah. Please do.
Completely agree.
I created an issue under https://github.com/ros2/rmw_fastrtps.
Thanks for your detailed explanation.
I think you may have realized that the
Thanks for locating the source code.
I think that the history check could look something like this:

```cpp
const eprosima::fastrtps::HistoryQosPolicy & history = reader->get_qos().history();
if (eprosima::fastrtps::KEEP_LAST_HISTORY_QOS == history.kind &&
  list.size() >= static_cast<size_t>(history.depth))
{
  list.pop_front();
}
```

I'd like to hear if you have any further suggestions.
You are absolutely correct that that also needs to be removed, but I am afraid you may be giving me too much credit, at least this time: I had only such a fleeting look at it — essentially during a context switch from one of many things I had to attend to, to another of those many things — that I overlooked it ... 😳 Well, good thing I didn't attempt to also do a pull request 😀
You are correct @iuhilnehc-ynos. Having that code on both service and client callbacks would be enough, I think.
It may be a good idea to have different default profiles for services and clients. There is currently only this one, which may be okay for clients, but is presumably not suitable for services (which should maybe use KEEP_ALL history). What do you think @eboasson?
@Barry-Xu-2018 @iuhilnehc-ynos we can just go ahead and create PRs for
@fujitatomoya thank you for the ping. AFAICT this shouldn't be a problem with rmw_connextdds. In fact, I tried the reproducer provided by @mauropasse and things worked as expected (i.e. only the last request sent by the client gets processed):

```
$ RMW_IMPLEMENTATION=rmw_connextdds install/cs_test/bin/cs_test
Setting OoS depth = 1 for client and service
Client: async_send_request number: 0
Client: async_send_request number: 1
Client: async_send_request number: 2
Client: async_send_request number: 3
Client: async_send_request number: 4
Client: async_send_request number: 5
Client: async_send_request number: 6
Client: async_send_request number: 7
Client: async_send_request number: 8
Client: async_send_request number: 9
Client: async_send_request number: 10
Client: async_send_request number: 11
Client: async_send_request number: 12
Client: async_send_request number: 13
Client: async_send_request number: 14
Client: async_send_request number: 15
Press key to spin the Server
a
Press key to spin the Client
service_executor->spin()
[INFO] [1633027743.048037298] [service]: Handle service. Request: 15 + 15 = 30
a
client_executor->spin()
[INFO] [1633027744.552063149] [client]: Client callback. Result of 15 + 15 = 30
a
rclcpp::shutdown()
```
@asorbini that is great! Thanks for checking!
@iuhilnehc-ynos Thank you
We still have ros2/rmw_cyclonedds#340 open. BTW, I am not keen to backport this (it can be backported, though) because it changes default behavior. I don't think anyone would notice this change, but we want to keep behavior stable within a distribution.
I will go ahead and close this; please re-open if anything comes up.
Awesome, all working as expected now. Thank you guys!
Seems that setting the QoS depth for clients and services is ignored:
The following program shows the issue:
The output: