
fix server's request header's memory leakage #289

Closed
wants to merge 1 commit into from

Conversation

reed-lau
Contributor

In ROS 2's server-client mode:
the client calls send_request on the server, the server calls take_request and then send_response back to the client, and after the client calls take_response the RPC is finished.

On the server side, take_request creates (in fact, PyMem_Malloc is called) a header (of type rmw_request_id_t) that is used to identify the request. Conceptually, the header should be destroyed after send_response. The official implementation forgets to PyMem_Free it, which causes a memory leak.
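To make the pattern concrete, here is a minimal CPython-extension sketch of the flow described above. The struct and function names are illustrative stand-ins, not the actual _rclpy.c code.

/* Simplified sketch of the leak described above; names are illustrative,
 * not the real _rclpy.c implementation. */
#include <Python.h>
#include <stdint.h>

typedef struct
{
  int8_t writer_guid[16];
  int64_t sequence_number;
} request_id_t;  /* stand-in for rmw_request_id_t */

/* take_request-style function: allocates the header and hands it to Python. */
static PyObject *
take_request_sketch(PyObject * self, PyObject * args)
{
  (void)self;
  (void)args;
  request_id_t * header = PyMem_Malloc(sizeof(request_id_t));
  if (!header) {
    return PyErr_NoMemory();
  }
  /* ... rmw_take_request() would fill in *header here ... */
  return PyCapsule_New(header, "request_id_t", NULL);
}

/* send_response-style function: uses the header but never frees it,
 * so each request leaks sizeof(request_id_t) bytes. */
static PyObject *
send_response_sketch(PyObject * self, PyObject * args)
{
  (void)self;
  PyObject * capsule;
  if (!PyArg_ParseTuple(args, "O", &capsule)) {
    return NULL;
  }
  request_id_t * header = PyCapsule_GetPointer(capsule, "request_id_t");
  if (!header) {
    return NULL;
  }
  /* ... rmw_send_response(..., header, ...) would go here ... */
  /* Missing cleanup: PyMem_Free(header); -- this is the leak. */
  Py_RETURN_NONE;
}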

Signed-off-by: reed-lau geoliuwei@gmail.com

@tfoote tfoote added the in review Waiting for review (Kanban column) label Mar 21, 2019
@dirk-thomas
Member

The header is a parameter to the function rclpy_send_response, therefore I don't think the function can assume ownership and free the memory. The caller of the function might want to continue using the header parameter after the function returns. So the cleanup likely needs to be done somewhere else.
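For illustration only, one way the cleanup could be done somewhere else is to let the Python-side object own the memory, for example by giving the capsule a destructor. The capsule name and function below are hypothetical, not what rclpy actually does.

/* Hypothetical sketch: let the capsule own the header so it is freed when the
 * Python object is garbage collected. Names are illustrative only. */
#include <Python.h>

static void
request_id_capsule_destructor(PyObject * capsule)
{
  void * header = PyCapsule_GetPointer(capsule, "request_id_t");
  if (header) {
    PyMem_Free(header);
  }
}

/* In the take_request path the capsule would then be created as
 *   PyCapsule_New(header, "request_id_t", request_id_capsule_destructor);
 * so neither send_response nor its caller needs an explicit free. */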

@reed-lau
Contributor Author

reed-lau commented Mar 21, 2019

Yes, it may be released somewhere else, but the current state is that no one releases it, which causes a memory leak on the server side; the amount is one sizeof(rmw_request_id_t) per request. We could also add an interface such as _rclpy_destroy_request_id to _rclpy.c, but from my point of view the request_id's lifecycle can last until _rclpy_send_response has finished.
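A rough sketch of that alternative, assuming the header is passed around as a capsule; the function name echoes the _rclpy_destroy_request_id idea above, and everything else is an assumption rather than the actual rclpy API.

/* Sketch of an explicit destroy function the Python-side caller could invoke
 * once it is done with the request header (e.g. after send_response). */
#include <Python.h>

static PyObject *
rclpy_destroy_request_id(PyObject * self, PyObject * args)
{
  (void)self;
  PyObject * capsule;
  if (!PyArg_ParseTuple(args, "O", &capsule)) {
    return NULL;
  }
  void * header = PyCapsule_GetPointer(capsule, "request_id_t");
  if (!header) {
    return NULL;  /* wrong capsule or NULL pointer */
  }
  PyMem_Free(header);
  /* The caller must not touch the capsule's pointer after this call. */
  Py_RETURN_NONE;
}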

To reproduce the memory leak:
change the add_two_ints_client to an infinite while loop; each request will cause the server to leak sizeof(rmw_request_id_t) bytes.

@dirk-thomas dirk-thomas added in progress Actively being worked on (Kanban column) requires-changes bug Something isn't working and removed in review Waiting for review (Kanban column) labels Mar 21, 2019
@dirk-thomas
Member

change the add_two_ints_client to an infinite while loop

Please provide the exact patch you are using.

@reed-lau
Contributor Author

diff --git a/demo_nodes_py/demo_nodes_py/services/add_two_ints_client.py b/demo_nodes_py/demo_nodes_py/services/add_two_ints_client.py
index 3535846..c708ffc 100644
--- a/demo_nodes_py/demo_nodes_py/services/add_two_ints_client.py
+++ b/demo_nodes_py/demo_nodes_py/services/add_two_ints_client.py
@@ -25,15 +25,16 @@ def main(args=None):
     cli = node.create_client(AddTwoInts, 'add_two_ints')
     while not cli.wait_for_service(timeout_sec=1.0):
         print('service not available, waiting again...')
-    req = AddTwoInts.Request()
-    req.a = 2
-    req.b = 3
-    future = cli.call_async(req)
-    rclpy.spin_until_future_complete(node, future)
-    if future.result() is not None:
-        node.get_logger().info('Result of add_two_ints: %d' % future.result().sum)
-    else:
-        node.get_logger().error('Exception while calling service: %r' % future.exception())
+    while True:
+      req = AddTwoInts.Request()
+      req.a = 2
+      req.b = 3
+      future = cli.call_async(req)
+      rclpy.spin_until_future_complete(node, future)
+      if future.result() is not None:
+          node.get_logger().info('Result of add_two_ints: %d' % future.result().sum)
+      else:
+          node.get_logger().error('Exception while calling service: %r' % future.exception())
 
     node.destroy_node()
     rclpy.shutdown()

@reed-lau
Contributor Author

@dirk-thomas

@dirk-thomas
Member

I can confirm the increasing memory usage over time (I would recommend using while rclpy.ok(): though). But the patch in this PR doesn't fix the problem for me (setting aside that the free is done in the wrong location).

@dirk-thomas
Member

#302 is similar and it would be good to address that one first to make sure the simple cases don't have an increasing memory usage.

@dirk-thomas
Member

As suspected the fix for #302 seems to also address the memory increase reported in this ticket. Please try the patch from ros2/rcl#418 to check that it addresses the problem for you too.

Since it does so for me and since this patch was not mergeable, I will go ahead and close the ticket for now. Please feel free to continue commenting with your findings.

@dirk-thomas dirk-thomas added duplicate This issue or pull request already exists and removed in progress Actively being worked on (Kanban column) labels Apr 17, 2019