Potential memory leak in aiohttp server #4478
Comments
Try upgrading the multidict package. There's been a huge refactoring with a number of subsequent fixes and patch releases.
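As an editorial aside, a quick way to confirm which multidict build is actually imported at runtime (before and after an upgrade) is a check along these lines:

import multidict

# Shows the installed version and where the package is loaded from.
print(multidict.__version__, multidict.__file__)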
Thanks for the tip @webknjaz. Upgrading did not help. Using objgraph to inspect the heap:

(Pdb) objgraph.show_most_common_types()
function 18050
dict 15242
CIMultiDict 14775
_KeysView 14462
tuple 10103
OrderedDict 9816
list 6121
FrameSummary 5029
weakref 4935
getset_descriptor 2881

Getting the most leaking objects:

roots = objgraph.get_leaking_objects()
(Pdb) objgraph.show_most_common_types(objects=roots)
_KeysView 14462
dict 1309
set 241
tuple 33
list 11
SignalDict 8
weakref 5
method 5
slice 2
CTypeDescr 2
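As an aside, the object graph mentioned later in the report could be generated with a sketch along these lines, assuming objgraph and Graphviz are installed and the process is paused at a point where the leak is visible (the object type and filename here are only examples):

import objgraph

# Sample a few of the suspicious objects, here CIMultiDict instances.
suspects = objgraph.by_type("CIMultiDict")[:3]

# Draw what keeps them alive; rendering the image requires Graphviz.
objgraph.show_backrefs(suspects, max_depth=5, filename="cimultidict_backrefs.png")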
How about downgrading?
After testing multidict in different scenarios I was unable to detect any memory leak; everything is returned back to the allocator. Sorry, I cannot analyze this further without code that reproduces the leak.
I have not seen this memory leak either. Likely the fault is in the application code rather than aiohttp itself. Is your app code storing requests somewhere, by any chance? You should try to create a minimal example that reproduces the leak, and post it.
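For readers following this advice, a minimal reproducer would start from something as bare as the sketch below, run under the same aiohttp/Python versions while memory is observed under load; everything here is illustrative and not taken from the reporter's application.

from aiohttp import web

async def handler(request: web.Request) -> web.Response:
    # Deliberately trivial handler: if memory still grows with this,
    # the leak is not in the application logic.
    return web.json_response({"ok": True})

def make_app() -> web.Application:
    app = web.Application()
    app.router.add_get("/", handler)
    return app

if __name__ == "__main__":
    web.run_app(make_app(), port=8080)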
@asvetlov @gjcarneiro @webknjaz thanks for looking into this, and sorry for not being able to provide an isolated example. I have managed to isolate this issue to a middleware function which creates an OpenTracing span for every incoming request. I am not sure why this causes a memory leak, but I think it has nothing to do with aiohttp, so this issue can be closed. The middleware in question does the following, and I don't see anything obvious here that leaks memory. Perhaps something in the vendor implementation causes it; I will debug further.

from typing import Callable

import opentracing
from aiohttp import web
from opentracing.propagation import Format

@web.middleware
async def opentracing_middleware(request: web.Request, handler: Callable):
    """Tracing middleware function which is applied for all handlers. Extracts a
    span context from the request and creates a new span using the context as the
    parent. If there is no context, starts a new span without a reference."""
    # Avoid polluting the traces by ignoring the health endpoint.
    if request.rel_url.path == "/health":
        return await handler(request)
    try:
        span_context = opentracing.tracer.extract(
            format=Format.HTTP_HEADERS, carrier=request.headers
        )
    except (
        opentracing.InvalidCarrierException,
        opentracing.SpanContextCorruptedException,
    ):
        span_context = None
    with opentracing.tracer.start_active_span(
        child_of=span_context,
        operation_name=request.match_info.handler.__name__,
        finish_on_close=True,
        tags=default_server_tags(request),
    ) as scope:  # noqa
        return await handler(request)
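For context, a middleware like this would typically be attached when the application is built, roughly as in the sketch below; the application setup shown here is illustrative, not the reporter's actual code.

from aiohttp import web

# Attach the middleware for all handlers at application construction time.
# opentracing_middleware is the function defined above.
app = web.Application(middlewares=[opentracing_middleware])
web.run_app(app)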
Long story short
I have a small HTTP API written with aiohttp as the backend. I am seeing a constantly growing memory footprint which results in segmentation faults in production. The segfaults seem to occur somewhat randomly. I am not able to get the core dumps at the moment for analysis, so I have resorted to debugging this locally. I do not have conclusive proof, but I am looking for some pointers on where to go from here.
The API makes two external calls per request: one to an external API using aiohttp.ClientSession and another to DynamoDB to fetch data. In both cases we maintain a separate session for the lifetime of the application.
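A minimal sketch of that long-lived session pattern, assuming aiohttp's cleanup_ctx is used to tie the ClientSession to the application lifetime (names are illustrative):

from aiohttp import ClientSession, web

async def client_session_ctx(app: web.Application):
    # Created once at startup, reused by handlers, closed once at shutdown.
    app["client_session"] = ClientSession()
    yield
    await app["client_session"].close()

app = web.Application()
app.cleanup_ctx.append(client_session_ctx)
web.run_app(app)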
Expected behaviour
Stable memory consumption
Actual behaviour
Growing memory footprint at low RPS. The following graph and tracemalloc data are from a short test where the API was run locally and traffic was generated at roughly 45 requests per second.
A tracemalloc snapshot comparison between the start and end of the test points to the RequestHandler.data_received() and TCPConnector._wrap_create_connection() methods having the largest increase in memory. Objgraph points to a large number of CIMultiDict objects in memory, and the following object graph can be generated (not sure how helpful this is). Additionally, I am seeing the errors reported in #3535 when traffic generation stops.
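For reference, a snapshot comparison like the one described above can be produced with a sketch along these lines (the snapshot points and the number of printed entries are arbitrary choices):

import tracemalloc

tracemalloc.start(25)  # keep up to 25 frames per allocation traceback

before = tracemalloc.take_snapshot()
# ... generate traffic against the server for a while ...
after = tracemalloc.take_snapshot()

# Print the ten call sites with the largest growth between snapshots.
for stat in after.compare_to(before, "lineno")[:10]:
    print(stat)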
Any pointers on where to go from here for further debugging would be much appreciated.
Steps to reproduce
Unfortunately none at the moment. I will try to isolate a reproducible snippet.
Your environment
Memory consumption grows on both macOS (my laptop) and on an Ubuntu-based Docker image running on Kubernetes.
aiohttp version is 3.6.2 with uvloop on Python 3.7.5, for both server and client.