Publishing is slow in Docker with MultiThreadedExecutor #1487
So if I understand this bug report right, this is happening with some combination of Docker and the MultiThreadedExecutor. One other test that would be interesting to do is to write your own very simple version of the publisher. Another test that would be interesting would be to switch from a …
That is right. I will indeed make some more tests to narrow it down and give the results here.
Ok so here is the code for the simple publisher I am using:
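In outline it is a node with a wall timer firing at 1 kHz that publishes on each tick, spun by a MultiThreadedExecutor. The sketch below is a minimal reconstruction under those assumptions; the std_msgs/String payload and the node/topic names are illustrative, not necessarily the exact code from this comment.

#include <chrono>
#include <memory>

#include "rclcpp/rclcpp.hpp"
#include "std_msgs/msg/string.hpp"

using namespace std::chrono_literals;

// Minimal high-rate publisher: a 1 ms wall timer publishes a small message on every tick.
class SimplePublisher : public rclcpp::Node
{
public:
  SimplePublisher()
  : Node("simple_publisher")
  {
    publisher_ = create_publisher<std_msgs::msg::String>("chatter", 10);
    timer_ = create_wall_timer(
      1ms,
      [this]() {
        std_msgs::msg::String msg;
        msg.data = "hello";
        publisher_->publish(msg);
      });
  }

private:
  rclcpp::Publisher<std_msgs::msg::String>::SharedPtr publisher_;
  rclcpp::TimerBase::SharedPtr timer_;
};

int main(int argc, char ** argv)
{
  rclcpp::init(argc, argv);
  auto node = std::make_shared<SimplePublisher>();
  // Swapping this for rclcpp::executors::SingleThreadedExecutor is the comparison
  // discussed throughout this issue.
  rclcpp::executors::MultiThreadedExecutor executor;
  executor.add_node(node);
  executor.spin();
  rclcpp::shutdown();
  return 0;
}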
Results: well below the expected rate, sometimes a bit more (max 300 Hz); a simple SingleThreadedExecutor reaches the expected rate.
Very interestingly, I tried the exact same code (made a small package from scratch, only dependent on rclcpp) and tested on …
So what I thought is a combination of Docker and the MultiThreadedExecutor.
Same thing in a fresh …
I think the reason is the same as in ros2/ros2#1035 (comment). I tried #1382 to see if it can also fix the problem; it helps, but at this high frequency it cannot fix the problem completely. I think that to address this problem, the same TimerBase object should be taken only once and not be scheduled on other threads.
I am interested in this issue.
After checking the source code, I found that the mutex wait_mutex_ is overused at rclcpp/rclcpp/src/rclcpp/executors/multi_threaded_executor.cpp (lines 79 to 97 in eddb938) and again where the scheduled timer is erased after execution. I think there is a way to fix this issue; it will break the ABI, but maybe it's worth adding a new mutex.
diff --git a/rclcpp/include/rclcpp/executors/multi_threaded_executor.hpp b/rclcpp/include/rclcpp/executors/multi_threaded_executor.hpp
index c18df5b7..8ae3b44d 100644
--- a/rclcpp/include/rclcpp/executors/multi_threaded_executor.hpp
+++ b/rclcpp/include/rclcpp/executors/multi_threaded_executor.hpp
@@ -87,6 +87,7 @@ private:
   std::chrono::nanoseconds next_exec_timeout_;

   std::set<TimerBase::SharedPtr> scheduled_timers_;
+  std::mutex scheduled_timers_mutex_;
 };

 }  // namespace executors
diff --git a/rclcpp/src/rclcpp/executors/multi_threaded_executor.cpp b/rclcpp/src/rclcpp/executors/multi_threaded_executor.cpp
index 0dfdc354..a692f7c5 100644
--- a/rclcpp/src/rclcpp/executors/multi_threaded_executor.cpp
+++ b/rclcpp/src/rclcpp/executors/multi_threaded_executor.cpp
@@ -76,14 +76,17 @@ MultiThreadedExecutor::run(size_t)
   while (rclcpp::ok(this->context_) && spinning.load()) {
     rclcpp::AnyExecutable any_exec;
     {
-      std::lock_guard<std::mutex> wait_lock(wait_mutex_);
-      if (!rclcpp::ok(this->context_) || !spinning.load()) {
-        return;
-      }
-      if (!get_next_executable(any_exec, next_exec_timeout_)) {
-        continue;
+      {
+        std::lock_guard<std::mutex> wait_lock(wait_mutex_);
+        if (!rclcpp::ok(this->context_) || !spinning.load()) {
+          return;
+        }
+        if (!get_next_executable(any_exec, next_exec_timeout_)) {
+          continue;
+        }
       }
       if (any_exec.timer) {
+        std::lock_guard<std::mutex> wait_lock(scheduled_timers_mutex_);
         // Guard against multiple threads getting the same timer.
         if (scheduled_timers_.count(any_exec.timer) != 0) {
           // Make sure that any_exec's callback group is reset before
@@ -103,7 +106,7 @@
     execute_any_executable(any_exec);

     if (any_exec.timer) {
-      std::lock_guard<std::mutex> wait_lock(wait_mutex_);
+      std::lock_guard<std::mutex> wait_lock(scheduled_timers_mutex_);
       auto it = scheduled_timers_.find(any_exec.timer);
       if (it != scheduled_timers_.end()) {
         scheduled_timers_.erase(it);
What do you think about the above patch?
Just tested on the latest sources.
So I guess something has already been implemented in a previous commit, yet it is not entirely resolved.
Confirmed that @iuhilnehc-ynos's solution makes things better:
It also resolves the problem with ros2_control_demo. I am not sure what side effects it may have, but at least it does solve this issue.
Thanks! Yeah, I thought that way in the past, but I think there will be a multi-threading problem with that fix. Please see the inline comment.
I have to admit that case is really rare, but it cannot guarantee that …
It definitely helped. Thanks.
Anyway, I'll look at the related issues later.
Not at all 😄. I might be missing something, but if the …
It seems #1168 already did the same thing, so I will not create a new PR for it. Updated: it seems …
I have no idea how to fix this, so I will unassign myself (cc @clalancette). Well, I have ideas of things to try, but the scope of this is much bigger than what I would expect from a randomly assigned issue. I will try to explain how I understand things to work and why that's an issue:
My idea would be to limit how "executables" can be scheduled, so that when one worker has scheduled an "executable" for execution, no other worker can take it as "ready" (a rough sketch of this idea follows below). That completely forbids the case of wanting to execute a callback of the same executable in parallel (e.g. two messages of the same subscription), but I guess forbidding that is fine, as allowing it can potentially lead to "out of order" execution. The single-threaded executor also has its own problems: …
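Going back to the scheduling idea above: a rough C++ illustration (purely hypothetical, not rclcpp's actual implementation; the names ExecutableEntity, try_claim, and worker_loop are invented) of a worker that must atomically claim an entity before executing it, so that no other worker can take it as ready in the meantime.

#include <atomic>

// Hypothetical sketch, not rclcpp API: each executable entity carries a flag
// that a worker must atomically claim before running any of its callbacks.
struct ExecutableEntity
{
  std::atomic<bool> claimed{false};

  bool try_claim() {return !claimed.exchange(true);}  // true only for the winning worker
  void release() {claimed.store(false);}
};

// Worker loop sketch: entities already claimed by another worker are skipped,
// so two workers never execute callbacks of the same entity in parallel.
template<typename GetNextReady, typename Execute>
void worker_loop(std::atomic<bool> & spinning, GetNextReady get_next_ready, Execute execute)
{
  while (spinning.load()) {
    ExecutableEntity * entity = get_next_ready();
    if (entity == nullptr || !entity->try_claim()) {
      continue;  // nothing ready, or another worker owns it: look for other work
    }
    execute(*entity);
    entity->release();
  }
}

This is essentially what the scheduled_timers_ set in the patch above already does for timers; the proposal is to apply the same restriction to every kind of executable.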
I now remembered #1328, which I opened a while ago.
#1328 effectively fixes this issue.
I don't think that's ok to forbid. It's completely reasonable to want to execute the same callback multiple times in parallel, less so for timers, but definitely so for subscriptions. Even for timers, I think it's a reasonable thing to want to do, though I don't know of any cases where that's what you'd want. Perhaps you're using a timer to poll for work to be done, and so sometimes the callback is quick (maybe just printing information) and occasionally it takes longer, even longer than the period of the timer. In that scenario you'd like to be able to have callbacks be concurrent so that while you're processing the work in the infrequent case, you can still be responsive and print status messages in the meantime from other instances of the timer callbacks. It might make sense to make the trade-off in #1328, since the use cases it fixes are likely more common, but it definitely doesn't address the issue correctly.
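For what it's worth, the way to opt in to that concurrent-callback behavior today is a reentrant callback group. Below is a minimal sketch of the polling-timer scenario described above, assuming a Foxy-era rclcpp API; the node name, period, and the work_available/process_work helpers are invented for illustration.

#include <chrono>
#include <memory>

#include "rclcpp/rclcpp.hpp"

using namespace std::chrono_literals;

// Sketch: a timer in a Reentrant callback group, so a MultiThreadedExecutor may
// start a new instance of the callback while a previous, slower instance is
// still processing work.
class PollingNode : public rclcpp::Node
{
public:
  PollingNode()
  : Node("polling_node")
  {
    group_ = create_callback_group(rclcpp::CallbackGroupType::Reentrant);
    timer_ = create_wall_timer(
      100ms,
      [this]() {
        if (work_available()) {
          process_work();  // occasionally takes longer than the 100 ms period
        } else {
          RCLCPP_INFO(get_logger(), "idle");  // stays responsive in the meantime
        }
      },
      group_);
  }

private:
  bool work_available() {return false;}  // placeholder for a real check
  void process_work() {}                 // placeholder for the long-running work

  rclcpp::CallbackGroup::SharedPtr group_;
  rclcpp::TimerBase::SharedPtr timer_;
};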
I suppose that's the case. I vaguely remember stalling that pull request with Pedro for that reason. I still think something in that direction could be the right solution.
Would you mind trying @ivanpauno's fix #1516 to check the performance? That would be really appreciated.
It definitely improves things, but the rate is inconsistent. Here are some results:
The expected rate is 1 kHz, which the SingleThreadedExecutor can follow perfectly fine. See the results above; @iuhilnehc-ynos's solution could track it more accurately.
@buschbapti thanks for checking; that really helps. According to #1487 (comment), it does have the multi-threading problem I mentioned, so I'd like to go with @ivanpauno's fix at this moment.
Perfect. At least it gives some pretty decent improvements and makes it usable even at those frequencies. Thanks for taking care of this, and thanks @ivanpauno for the solution.
I am using ROS 2 Foxy from the official ros2 Docker images to build ros2_control and test their new implementations. I have noticed that when starting a joint state publisher that is set to publish at 200 Hz, monitoring the topic with
ros2 topic hz /joint_states
gives at best 30 Hz. I know this has nothing to do specifically with ros2_control, because I had a similar issue when trying my own lifecycle publisher nodes. Basically, attaching the node to a MultiThreadedExecutor produces a similar behavior where the topic is published at a much slower rate than expected, sometimes by a factor of 10. Changing to a SingleThreadedExecutor solves the issue. The problem is that ros2_control relies on this MultiThreadedExecutor.
I do believe this comes from the combination of Docker and the MultiThreadedExecutor. I tested it on multiple computers and got similar behavior. I haven't been able to test on a non-Docker installation, as it requires Ubuntu 20.04, which I don't have. But I will try it just in case.
Steps to reproduce:
- create a simple node that publishes at a high rate and spin it with a MultiThreadedExecutor
- monitor the topic rate with ros2 topic hz
Alternatively, follow the ros2_control_demo (installed on a Docker Foxy image) and monitor /joint_states.