docker-compose built with python2 hangs in write() when writing logs to stderr (PyRDP's default) #175
Yesterday morning I deployed the new container with
This morning I figured out how to do so and very quickly got something unexpected:
I'm getting these non-stop, every few seconds. I wonder what debug mode changes so that these trip the debugger when they usually don't.
This seems related to a bad client implementation and/or client/server incompatibilities. We get a ton of these every few seconds and found no easy way of skipping them, so I need to switch strategies here. I considered patching
Trying to use
Then send a SIGABRT to the process when it is not responsive and check the docker logs.
The process was hung again today. Sending the signal didn't produce the expected stack trace; this points to a bug outside our Python code. Trying to attach to it with gdb gave this warning:
Going in the container to debug:
Running threads:
Backtrace:
Python backtrace:
Python code where we are stalled:
I'm not sure what to think of this. Is this our bug or an upstream bug?
I had a bit of time to read through the call stack and I think I have an idea of what the problem might be: @obilodeau mentioned while showing me the issue that the
The Python call stack is blocked in the
And looking at the native stack, we're stuck in libc's
My theory is this: the docker-compose output stops flushing its stdout buffer, meaning that all the logs pile up in the buffer, eventually triggering the
The issue could be either the actual flushing to disk (I'm not sure if logging to disk is enabled?) or flushing to the docker-compose
Next Steps
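As a first step, the buffer-fills-then-write-blocks part of this theory can be reproduced in isolation. The sketch below is illustrative only (not PyRDP code): it writes into a pipe that nobody reads. With a blocking file descriptor the writer would hang in write() exactly as described, so the demo uses a non-blocking descriptor to observe the full buffer instead of deadlocking:

```python
import os

# Create a pipe and never read from the read end, simulating a stalled
# docker-compose that stops draining its child's stdout.
r, w = os.pipe()
os.set_blocking(w, False)  # a blocking fd would hang this demo in write()

written = 0
try:
    while True:
        written += os.write(w, b"x" * 4096)  # logs piling up in the buffer
except BlockingIOError:
    # The kernel pipe buffer is now full. With a blocking fd, the next
    # write() would stall forever -- the theorized hang.
    pass

print("pipe buffer filled after", written, "bytes")
os.close(r)
os.close(w)
```

The exact number of bytes before the buffer fills depends on the kernel's pipe buffer size.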
After a bit of searching through docker-compose issues, I've found this, which has the exact same symptoms as ours but on
Congrats @alxbl, you found it! I still had a hung process being debugged by
and then the logging to
Trying to think of a fix, I don't think it's going to be as simple as
I wonder why it's not happening with all the other containers that we run...
I think PyRDP most likely doesn't output anything on stderr, so it's empty (or was drained before the process blocked). As for which containers are affected, we'll have to wait for a root-cause analysis from the compose team, or we could try the fixes proposed in the issue history (although I haven't had time to read through all of it attentively yet). In any case, we should keep an eye on the progress of the linked issue.
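One possible mitigation on the PyRDP side would be to send logs to a file inside the container rather than to stderr, so a stalled consumer on the other end of the stdio pipe cannot block the process in write(). This is a hypothetical sketch, not the fix that was adopted; the logger name and path are made up:

```python
import logging
import tempfile
from logging.handlers import RotatingFileHandler

# Hypothetical: route logs to a rotating file instead of stderr, so a stalled
# docker-compose reader cannot backpressure the mitm process.
log_path = tempfile.gettempdir() + "/pyrdp-mitm.log"  # illustrative path
handler = RotatingFileHandler(log_path, maxBytes=1_000_000, backupCount=3)
handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))

logger = logging.getLogger("pyrdp")  # logger name is an assumption
logger.addHandler(handler)
logger.setLevel(logging.INFO)
logger.info("logging to disk instead of the stdio pipe")
```

The trade-off is that logs are no longer visible through `docker logs` and need log rotation inside the container, which is why fixing the consumer side is preferable.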
We looked more closely at the problem this morning with @alxbl. The
Looking at the culprit commit highlighted by the upstream bug: docker/docker-py@e9f31e1#diff-008bbf7c1c902c0264d9678151eadc64L26
We took a look at the docker-compose version running on that system:
It is using Python 2. Taking a look at the current latest version:
It uses Python 3. So, for now, we are going to try running this new upstream version and see if it fixes our problem.
The process is still running on the server and logging requests. This looks good.
I'm considering this fixed now, since the problem lies upstream and there is an upstream fix for it.
Workaround: use a version of docker-compose built with Python 3 to avoid the upstream issue.
Update
tl;dr: Run a docker-compose version built with Python 3 when running PyRDP in long-term mitm mode.
This thread also contains numerous tips for debugging and troubleshooting PyRDP when experiencing hard-to-reproduce problems.
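To tell which interpreter a given docker-compose binary was built with, `docker-compose version` prints the interpreter version. The sketch below parses that output; the exact "CPython version: ..." line format and the sample strings are assumptions based on typical compose 1.x output:

```python
import re

def built_with_python2(version_output: str) -> bool:
    """Return True if `docker-compose version` output reports CPython 2.x."""
    match = re.search(r"CPython version:\s*(\d+)\.", version_output)
    return match is not None and match.group(1) == "2"

# Illustrative outputs; real values depend on the installed compose build.
old = "docker-compose version 1.16.1, build 6d1ac21\nCPython version: 2.7.13"
new = "docker-compose version 1.24.0, build 0aa59064\nCPython version: 3.6.8"
print(built_with_python2(old), built_with_python2(new))  # -> True False
```

If the output reports CPython 2.x, upgrade docker-compose before running PyRDP in long-term mitm mode.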
Original description
Using current master, pyrdp-mitm hangs in the docker container. Sending a HUP signal doesn't crash it or produce a stack trace, which means the bug could be outside the Python code itself. Sending a KILL signal from outside the container seemingly stalled the whole container; next time I'll try from inside the container. #174 could help in debugging the issue.