
Add support for CLIENT SETINFO #2857

Merged (23 commits) on Aug 9, 2023
Conversation

@dvora-h (Collaborator) commented Jul 19, 2023

Pull Request check-list

Please make sure to review and check all of these items:

  • Does $ tox pass with this change (including linting)?
  • Do the CI tests pass with this change (enable them first in your forked repo and wait for the GitHub Actions build to finish)?
  • Is the new or changed code fully tested?
  • Is a documentation update included (if this change modifies existing APIs or introduces new ones)?
  • Is there an example added to the examples folder (if applicable)?
  • Was the change added to the CHANGES file?

NOTE: these things are not required to open a PR and can be done
afterwards / while the PR is open.

Description of change

closes #2856
closes #2682
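For context, a hedged usage sketch of what this adds. The `lib_name`/`lib_version` keyword arguments and the `client_info()` call are assumptions about the client surface, not quoted from the diff:

```python
import redis

# With this change the client sends CLIENT SETINFO LIB-NAME / LIB-VER as
# part of the connection handshake (understood by Redis 7.2+).
r = redis.Redis(host="localhost", port=6379)

# Assumed override knobs: callers can change the reported values.
r2 = redis.Redis(lib_name="my-app", lib_version="1.2.3")

# CLIENT INFO on the server side should now include lib-name= and lib-ver=.
print(r2.client_info().get("lib-name"))
```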

@dvora-h added the feature label on Jul 19, 2023
Review threads: redis/asyncio/connection.py (outdated, resolved), redis/connection.py (outdated, resolved), redis/utils.py (resolved)
@codecov-commenter commented Jul 26, 2023

Codecov Report

Patch coverage: 63.88%; project coverage change: +13.58% 🎉

Comparison is base (b0abd55) at 77.33% compared to head (77cef76) at 90.91%.
The report is 1 commit behind head on master.


Additional details and impacted files
@@             Coverage Diff             @@
##           master    #2857       +/-   ##
===========================================
+ Coverage   77.33%   90.91%   +13.58%     
===========================================
  Files         126      126               
  Lines       32178    32384      +206     
===========================================
+ Hits        24884    29443     +4559     
+ Misses       7294     2941     -4353     
Files Changed                         Coverage Δ
redis/cluster.py                      92.70% <ø> (+74.31%) ⬆️
tests/conftest.py                     86.02% <0.00%> (+10.20%) ⬆️
tests/test_asyncio/test_commands.py   97.12% <15.00%> (-0.77%) ⬇️
tests/test_commands.py                96.90% <19.04%> (-0.46%) ⬇️
redis/commands/core.py                87.08% <50.00%> (+0.17%) ⬆️
tests/test_asyncio/conftest.py        84.89% <85.71%> (+5.34%) ⬆️
redis/utils.py                        91.13% <90.00%> (+7.08%) ⬆️
redis/_parsers/helpers.py             77.60% <100.00%> (+1.00%) ⬆️
redis/asyncio/client.py               91.99% <100.00%> (ø)
redis/asyncio/cluster.py              91.66% <100.00%> (+74.67%) ⬆️
... and 7 more

... and 18 files with indirect coverage changes


@chayim (Contributor) commented Jul 31, 2023

@kristjanvalur I'm digging through what I think is the initial issue (#2349), the subsequent PR (#2557), and the recent fix in #2859, and, to be honest, I'm a bit flummoxed about why the test tests/test_asyncio/test_connection.py::test_connection_disconect_race[AsyncRESP2Parser] fails and then cascades.

It ends with a timeout issue, but there's something missing in my understanding. Can I talk you into having a look 🙏? I'll keep looking in the meantime.

@kristjanvalur (Contributor) commented:
Hi, sure, what does it have to do with this PR? Where are you seeing the behaviour you describe?

@chayim (Contributor) commented Aug 1, 2023

So in theory it doesn't, which is why I'm flummoxed. On connection we now call Redis's identification commands (this is good!) and allow users to opt out should they choose.

But this test fails because we hit the 30s timeout, which was the purpose of the test. Further down the failure deck, everything fails due to what appears to be data left behind on the async retrieval side.

Unfortunately, the stack traces haven't been useful. Using this CI result as an example: after the test fails, we eventually get read failures such as the one below, but honestly that's a red herring. The real issue is data being left in the reader.

ERROR tests/test_asyncio/test_timeseries.py::test_get[single-python-parser] - IndexError: pop from empty list

To me, this spells out the need to properly reset the async side prior to each test, but that's another issue.

@kristjanvalur (Contributor) commented Aug 1, 2023

Okay, I see. I will try to figure out what is going on. I was confused because a brief look at the failing tests didn't show me the issue you were describing, but clearly everything blows up.
And yes, the tests are not independent enough somehow.

@kristjanvalur (Contributor) commented:
OK, what is happening is that the 30s timeout occurs as part of the test harness. The connect call fails and the test is aborted, but the coroutine running the test is never killed. That is, this code:

```python
with patch.object(asyncio, "open_connection", open_connection):
    await conn.connect()
```

never exits the context manager. This means that asyncio.open_connection remains patched for all subsequent tests.
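To illustrate the failure mode, here is a hedged, self-contained sketch (not the actual test; `conn` stands in for any async connection object that calls asyncio.open_connection internally):

```python
import asyncio
from unittest.mock import patch

async def hanging_open_connection(*args, **kwargs):
    # Stand-in for the mocked stream setup that never completes.
    await asyncio.sleep(3600)

async def leaky_test(conn):
    with patch.object(asyncio, "open_connection", hanging_open_connection):
        # If the harness hard-kills the run here instead of cancelling this
        # coroutine, __exit__ never executes and asyncio.open_connection
        # stays patched for every test that runs afterwards.
        await conn.connect()
```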
Now, the timeout you see is a weird one and shouldn't happen. It comes from base_events.py, here:

```python
timeout = None
if self._ready or self._stopping:
    timeout = 0
elif self._scheduled:
    # Compute the desired timeout.
    when = self._scheduled[0]._when
    timeout = min(max(0, when - self.time()), MAXIMUM_SELECT_TIMEOUT)

event_list = self._selector.select(timeout)
```

That is, we are hitting a hard timeout on select and there is nothing to catch it. For async code, we need async timeouts.
I am also not clear on why this test would suddenly start to time out like this. I'll continue to look at what is happening.
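A hedged sketch of what "async timeouts" means here: wrap the awaited call in asyncio.wait_for, so the timeout is raised inside the task, cancellation propagates, and any `with` blocks unwind normally. `conn` and the helper name are illustrative:

```python
import asyncio

async def connect_with_timeout(conn, timeout=30.0):
    # Raises asyncio.TimeoutError *inside* the task; the coroutine is
    # cancelled cleanly, so context managers (and mock patches) unwind.
    await asyncio.wait_for(conn.connect(), timeout=timeout)
```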

@kristjanvalur (Contributor) commented:
This test should not fail during connect; it is designed to test what happens during disconnect. But I need to find out why the test fails in this particular way: we should be getting a "nice" timeout, which would ensure that the patching is undone.

@kristjanvalur (Contributor) commented:
Ah, it is pytest_timeout. Yes, if that triggers during an async test, then none of the async cleanup code is executed. We should never rely on such a timeout as part of an async test; we should consider it only a failsafe.
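In other words, a hedged sketch of that layering (assuming the pytest-timeout and pytest-asyncio plugins; `conn` is a hypothetical fixture, and this is not the repo's actual configuration):

```python
import asyncio
import pytest

@pytest.mark.timeout(120)   # pytest-timeout: failsafe only, should never fire
@pytest.mark.asyncio        # pytest-asyncio runs the coroutine on the loop
async def test_connect(conn):
    # The real limit lives inside the event loop, so cancellation
    # propagates and async cleanup code still runs on timeout.
    await asyncio.wait_for(conn.connect(), timeout=5.0)
```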

@kristjanvalur (Contributor) commented:
So, yes, we are timing out here:

```python
try:
    # set the library name and version
    await self.send_command("CLIENT", "SETINFO", "LIB-NAME", "redis-py")
    await self.read_response()
    await self.send_command("CLIENT", "SETINFO", "LIB-VER", get_lib_version())
    await self.read_response()
except ResponseError:
    pass
```

during the first self.read_response(). This is because we have already patched the send/receive machinery. Let me find a way to fix this test.

@kristjanvalur (Contributor) commented:
I've pushed three commits here: https://github.com/mainframeindustries/redis-py/tree/client-setinfo
The first two address problems in the test suite (when running in VSCode with a devcontainer); the third is the actual fix for your problem.

@chayim (Contributor) commented Aug 2, 2023

@kristjanvalur thanks a tonne. I don't think I could have debugged this at all... The byproduct was bizarre, especially since, frankly, it seemed completely illogical to me. This is my way of saying ❤️

@dvora-h, can you take a look at merging this into our branch and seeing how it lies?

@kristjanvalur (Contributor) commented Aug 2, 2023

Right, so, to summarize: the issue was that the new version handshake wasn't getting through because the test had mocked the streams. This caused the hard timeout to fire, which stopped the Task (without cleanup), which left the async connection object in a mocked/patched state. Mocking out the command methods was enough to make the version handshake a no-op.
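A hedged sketch of that fix direction (names are illustrative and the actual commits may differ): patch the connection's command methods so the SETINFO handshake becomes a no-op under the mocked streams:

```python
from unittest.mock import patch

async def no_op(*args, **kwargs):
    # Swallows the CLIENT SETINFO round trips that the mocked streams
    # can no longer service.
    return None

async def connect_under_mocked_streams(conn):
    with patch.object(conn, "send_command", no_op), patch.object(
        conn, "read_response", no_op
    ):
        await conn.connect()
```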

@kristjanvalur (Contributor) commented:
There is a problem with mocking on Python 3.7; let me fix that.

@kristjanvalur (Contributor) commented:
I've pushed an additional commit fixing 3.7; feel free to merge or cherry-pick it.

@chayim (Contributor) commented Aug 3, 2023

> Right, so, to summarize: the issue was that the new version handshake wasn't getting through because the test had mocked the streams. This caused the hard timeout to fire, which stopped the Task (without cleanup), which left the async connection object in a mocked/patched state. Mocking out the command methods was enough to make the version handshake a no-op.

Awesome. My gut was leaning toward repatching the mock for the function... which I get. Handshaking across clients is my current bailiwick :D

```python
self.send_command("CLIENT", "SETINFO", "LIB-NAME", self.lib_name)
self.read_response()
if self.lib_version:
    self.send_command("CLIENT", "SETINFO", "LIB-VER", self.lib_version)
```
A contributor commented on the snippet above:
Pipeline SETINFO and SELECT, same as in the async suggestion above.

A contributor replied:

Specifically not done that way because of the interface, but it's a suggestion we'd debated previously. At this level, the pipeline shouldn't be used (yet). IMHO, once connections are reorganized, everything sent on connect should be pipelined; agreed.

The first contributor replied:

Well, you probably debated it out of band, not in this PR :) What "interface" are you speaking of?

A pipeline isn't an object "to be used"; it is just a way of sending a bunch of commands without waiting for each individual response before sending the next one. It is always supported by the server.
The spec specifically recommends pipelining startup commands to reduce latency. We now have two extra round trips to the server at startup, in addition to the optional one to select a db.
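A hedged sketch of the pipelined handshake being suggested, written against the snippet quoted above (the method name is illustrative and the merged code may differ):

```python
def on_connect_pipelined(self):
    # Send every startup command before reading any reply, so the
    # handshake costs one round trip instead of one per command.
    self.send_command("CLIENT", "SETINFO", "LIB-NAME", self.lib_name)
    if self.lib_version:
        self.send_command("CLIENT", "SETINFO", "LIB-VER", self.lib_version)
    if self.db:
        self.send_command("SELECT", self.db)

    # ...then drain the replies in the same order.
    self.read_response()          # LIB-NAME reply
    if self.lib_version:
        self.read_response()      # LIB-VER reply
    if self.db:
        self.read_response()      # SELECT reply
```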

@chayim merged commit f121cf2 into redis:master on Aug 9, 2023. 55 checks passed.
Labels: feature (New feature)

Successfully merging this pull request may close these issues:

  • Clients should identify themselves on connect
  • Add support for new redis command CLIENT SETINFO