Description
A fix for a bug that occurred during the response to a PubSub server `RECONNECT` message. In essence, a single WebSocket connection object spawned 2 new WebSockets, both with the same pool index. My understanding is that there should only ever be a single WebSocket for each index in the pool, and that reconnection should only produce a single new WebSocket.
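As a minimal illustration of that invariant (the names here are assumptions for the sketch, not the project's actual identifiers), the pool can be thought of as a list where slot `i` holds the one live socket for index `i`, and reconnection replaces that slot in place:

```python
from types import SimpleNamespace

pool = []  # invariant: pool[i] is the single live socket for index i

def submit(sock):
    sock.index = len(pool)
    pool.append(sock)

def reconnect(index, new_sock):
    # Reconnection must yield exactly ONE new socket, replacing the old
    # one at its index; two sockets sharing an index breaks the invariant.
    new_sock.index = index
    pool[index] = new_sock

submit(SimpleNamespace())        # one socket at index 0
reconnect(0, SimpleNamespace())  # still exactly one socket at index 0
```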
I've also updated the `pre-commit` script with a new version of `isort` and changed the `flake8` repo to GitHub.
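For reference, a change of this shape in `.pre-commit-config.yaml` looks roughly like the following; the exact `rev` pins here are placeholders rather than the versions used in this commit:

```yaml
repos:
  # flake8's canonical repo moved to GitHub, so the repo URL changes
  - repo: https://github.com/PyCQA/flake8
    rev: 3.9.2  # placeholder; pin to the actual current release
    hooks:
      - id: flake8
  - repo: https://github.com/PyCQA/isort
    rev: 5.9.3  # placeholder; pin to the actual current release
    hooks:
      - id: isort
```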
INFO Logs

DEBUG Logs
I was monitoring the info logs for a local Docker image and noticed that, after a PubSub `RECONNECT` message had finished being handled, I was getting 2 logs for each PubSub message. You can see this in the logs above: at 18:54:28 there are 2 logs for each `on_message` and `claim_bonus`, and though I've redacted the info, they are for the same channel. Looking through the debug logs, I could see that I was also getting 2 corresponding `points_earned` and `claim_available` PubSub messages each. Additionally, from this point forward, the logs always showed these messages arriving in pairs.

After reviewing the code paths, I could see the issue was caused by `on_close` being called during the `handle_reconnection` flow. This causes an additional call to `handle_reconnection`, and the flow gets run fully again, creating an additional WebSocket and, since both are created on the same index, detaching the 1st new WebSocket from the pool's list. The result is that you end up with 1 new WebSocket in the pool and another new one detached from the pool, but both are operational. The sketch below illustrates the re-entry and one way to guard against it.
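To make the failure mode and the shape of a fix concrete, here's a toy model; all of the names (`FakeSocket`, `create_websocket`, the `is_reconnecting` guard) are assumptions for illustration, not necessarily the actual identifiers or the exact change in this commit:

```python
# Toy model with assumed names, not the project's actual code. The key
# idea: reconnection must ignore the on_close that it itself triggers.

pool = {}

class FakeSocket:
    def __init__(self, index):
        self.index = index
        self.is_reconnecting = False

    def close(self):
        on_close(self)  # closing the socket fires the close callback

def create_websocket(index):
    sock = FakeSocket(index)
    pool[index] = sock  # exactly one live socket per pool index
    return sock

def on_close(ws):
    # Without this guard, on_close fires mid-reconnection and starts a
    # second handle_reconnection, spawning a duplicate socket at the
    # same index: one in the pool, one detached, both operational.
    if ws.is_reconnecting:
        return
    handle_reconnection(ws)

def handle_reconnection(ws):
    ws.is_reconnecting = True   # guard against re-entry via on_close
    ws.close()                  # fires on_close on the old socket
    create_websocket(ws.index)  # exactly one replacement, same index

old = create_websocket(0)
handle_reconnection(old)
assert pool[0] is not old  # one new socket in the pool, the old one gone
```

Any mechanism that stops the reconnection flow from being triggered twice for the same socket (a flag, unregistering the close handler before closing, etc.) would close the same loop; the guard flag above is just the simplest to sketch.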
Also, regarding `pre-commit`, I noticed when making a local commit that the hook failed to run. It turns out some of the packages were outdated, so I've fixed that as part of the commit.

Type of change
How Has This Been Tested?
It's kind of difficult to test because it relies on these 2 events happening in quick succession; however, I'm confident this is the cause of the issue. If this issue hasn't come up before, that's probably because it's pretty rare for the server to send a `RECONNECT` (the docs suggest this happens typically for server maintenance), and even rarer still for `on_close` to get called in that 60-second window. A sketch of how the race could be reproduced is below.
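Purely as an illustration (a toy model with assumed names, not a test from this commit), the double-reconnect can be reproduced deterministically without real sockets by letting the close callback re-enter an unguarded reconnection flow:

```python
# Hypothetical repro sketch with assumed names: without a re-entry guard,
# the close callback re-enters the reconnection flow and two replacement
# sockets end up being created for the same index.

pool = {}
created = []

def create_websocket(index):
    sock = {"index": index, "closing": False}
    created.append(sock)
    pool[index] = sock  # a later call overwrites the earlier socket, detaching it
    return sock

def handle_reconnection(ws):
    if not ws["closing"]:
        ws["closing"] = True
        handle_reconnection(ws)  # simulates on_close firing mid-flow
    create_websocket(ws["index"])

old = create_websocket(0)
created.clear()
handle_reconnection(old)
assert len(created) == 2          # two replacement sockets were spawned...
assert created[1] is pool[0]      # ...the 2nd landed in the pool,
assert created[0] is not pool[0]  # ...the 1st is detached but still alive
```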
Checklist: