Fix possible pipeline connections leak #3104

Merged
merged 5 commits on Jan 11, 2024
54 changes: 28 additions & 26 deletions redis/cluster.py
@@ -2166,32 +2166,34 @@ def _send_cluster_commands(
  # we don't multiplex on the sockets as they come available,
  # but that shouldn't make too much difference.
  node_commands = nodes.values()
- for n in node_commands:
-     n.write()
-
- for n in node_commands:
-     n.read()
-
- # release all of the redis connections we allocated earlier
- # back into the connection pool.
- # we used to do this step as part of a try/finally block,
- # but it is really dangerous to
- # release connections back into the pool if for some
- # reason the socket has data still left in it
- # from a previous operation. The write and
- # read operations already have try/catch around them for
- # all known types of errors including connection
- # and socket level errors.
- # So if we hit an exception, something really bad
- # happened and putting any of
- # these connections back into the pool is a very bad idea.
- # the socket might have unread buffer still sitting in it,
- # and then the next time we read from it we pass the
- # buffered result back from a previous command and
- # every single request after to that connection will always get
- # a mismatched result.
- for n in nodes.values():
-     n.connection_pool.release(n.connection)
+ try:
+     node_commands = nodes.values()
+     for n in node_commands:
+         n.write()
+
+     for n in node_commands:
+         n.read()
+ finally:
+     # release all of the redis connections we allocated earlier
+     # back into the connection pool.
+     # we used to do this step as part of a try/finally block,
[Review comment] This comment seems out-of-date — isn't it explaining why not to do exactly what this change is doing?

It should probably be updated to explain why this is safe, not why it isn't. 😅

+     # but it is really dangerous to
+     # release connections back into the pool if for some
+     # reason the socket has data still left in it
+     # from a previous operation. The write and
+     # read operations already have try/catch around them for
+     # all known types of errors including connection
+     # and socket level errors.
+     # So if we hit an exception, something really bad
+     # happened and putting any of
+     # these connections back into the pool is a very bad idea.
+     # the socket might have unread buffer still sitting in it,
+     # and then the next time we read from it we pass the
+     # buffered result back from a previous command and
+     # every single request after to that connection will always get
+     # a mismatched result.
+     for n in nodes.values():
+         n.connection_pool.release(n.connection)

  # if the response isn't an exception it is a
  # valid response from the node
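
For illustration, here is a minimal, self-contained sketch of the pattern this change adopts: allocate connections, attempt the pipeline's write and read phases, and release every connection in a finally block so that an exception cannot leak them. FakePool and send_commands are hypothetical stand-ins, not redis-py APIs; the real change operates on the NodeCommands objects and cluster connection pool shown in the diff above.

# Hypothetical stand-in for a connection pool; it only tracks how many
# connections are currently checked out.
class FakePool:
    def __init__(self):
        self.in_use = 0

    def get_connection(self):
        self.in_use += 1
        return object()

    def release(self, connection):
        self.in_use -= 1


def send_commands(nodes, fail=False):
    # nodes maps node name -> (pool, connection); `fail` simulates a socket
    # error raised during the write/read phase.
    try:
        for pool, connection in nodes.values():
            if fail:
                raise ConnectionError("simulated socket error during write()/read()")
    finally:
        # Without this finally block an exception above would leave every
        # allocated connection checked out, analogous to the leak this PR fixes.
        for pool, connection in nodes.values():
            pool.release(connection)


pool = FakePool()
nodes = {
    "node-1": (pool, pool.get_connection()),
    "node-2": (pool, pool.get_connection()),
}
try:
    send_commands(nodes, fail=True)
except ConnectionError:
    pass
assert pool.in_use == 0  # both connections were returned despite the error

Note that this sketch only demonstrates the control flow of the fix; it does not model the unread-buffer concern raised in the code comment, which is the subject of the review comment above.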
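
The long code comment describes why blindly releasing a connection after a failed read is risky: if a reply is still sitting in the socket's receive buffer, the next user of that connection reads the stale reply, and from then on every reply belongs to the previous command. A toy model of that failure mode, using a hypothetical BufferedConnection in place of a real socket:

from collections import deque


class BufferedConnection:
    # Hypothetical connection whose replies queue up in a buffer, like data
    # waiting in a socket's receive buffer.
    def __init__(self):
        self.replies = deque()

    def send(self, command):
        # Pretend the server answers immediately; the reply waits to be read.
        self.replies.append(f"reply-to:{command}")

    def read(self):
        return self.replies.popleft()


conn = BufferedConnection()

# First caller writes a command but never reads the reply (e.g. an exception
# was raised), and the connection is released with one reply still buffered.
conn.send("GET key1")

# Second caller reuses the connection from the pool.
conn.send("GET key2")
print(conn.read())   # "reply-to:GET key1"  <- stale reply for the wrong command

conn.send("GET key3")
print(conn.read())   # "reply-to:GET key2"  <- still off by one; every reply now
                     #    belongs to the previous command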