Fix Python 3.10 test issues #8555
Conversation
Huh.
Fair point @nicoddemus - but do you have any idea what's up with the failures on Windows?
Ahh sorry, missed those comments. Not at a quick glance, will try to test this locally when I get the chance. 👍
testing/test_pytester.py
Outdated
```python
    repr(r) == "<RunResult ret=ExitCode.TESTS_FAILED len(stdout.lines)=3"
    " len(stderr.lines)=4 duration=0.50s>"
)
expected = [
```
I suggest we just make it consistent here -- make it use one or the other (the new one looks better to me)
I suppose we will definitely search for "3.10" when dropping that version, so in the end it doesn't really matter, I guess.
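If the assertion ends up gated on the interpreter version, a minimal sketch could look like the following; the pre-3.10 repr string is a placeholder made up for illustration, not taken from the real test:

```python
import sys

# Sketch only: pick the expected RunResult repr based on the running Python,
# so the literal "3.10" is easy to grep for when that version is dropped.
if sys.version_info >= (3, 10):
    expected_repr = (
        "<RunResult ret=ExitCode.TESTS_FAILED len(stdout.lines)=3"
        " len(stderr.lines)=4 duration=0.50s>"
    )
else:
    # Placeholder form, assumed for illustration only.
    expected_repr = "<RunResult ret=<ExitCode.TESTS_FAILED: 1> ...>"

print(expected_repr)
```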
testing/python/metafunc.py
Outdated
```diff
@@ -448,7 +448,10 @@ def test_idmaker_enum(self) -> None:
         enum = pytest.importorskip("enum")
         e = enum.Enum("Foo", "one, two")
         result = idmaker(("a", "b"), [pytest.param(e.one, e.two)])
-        assert result == ["Foo.one-Foo.two"]
+        assert result in [
```
I described my preferred solution here, though it shouldn't hold up this PR.
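As a rough, self-contained illustration of the "accept either repr" idea in the diff above (the "one-two" alternative is an assumed form, not necessarily what Python 3.10 produced):

```python
import enum

# Sketch: the generated test ID depends on how the enum member stringifies,
# which differed during the Python 3.10 pre-releases, so accept either form.
Foo = enum.Enum("Foo", "one, two")
result = [f"{Foo.one}-{Foo.two}"]
assert result in (["Foo.one-Foo.two"], ["one-two"])  # "one-two" is a guessed alternative
print(result)
```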
OK, did some debugging (sorry for the delay), and here is what I found.

First, I could not reproduce the failures by running the tests in isolation; I needed to do a binary search to find the minimal pair of tests that, when executed in order, fail the second one:

```python
import pytest

pytest.main([
    "testing/test_nose.py::test_raises",
    "testing/acceptance_test.py::test_warn_on_async_function",
    "-x",
    "-v"],
)
```

Running this script reproduces the failure in the second test. Looking at `test_raises`:

```python
def test_raises(pytester: Pytester) -> None:
    pytester.makepyfile("import nose")
```

And indeed the problem happens, so importing `nose` is what triggers it.

Why does it fail? Well, `test_warn_on_async_function` changes the warning filtering to `default`:

```python
@pytest.mark.filterwarnings("default")
def test_warn_on_async_function(pytester: Pytester) -> None:
```

And it checks the number of warnings exactly:

```python
result.stdout.fnmatch_lines(
    [
        "test_async.py::test_1",
        "test_async.py::test_2",
        "test_async.py::test_3",
        "*async def functions are not natively supported*",
        "*3 skipped, 3 warnings in*",
    ]
)
```

So when an extra warning leaks in, the exact count no longer matches and the test fails.

Why is this happening only on Windows? I suspect there's some Windows-specific code which causes that warning to be triggered, but I didn't research it further.

Why is the warning leaking from one test to the other? Not sure either.

What to do? I think the test is a little brittle: it expects exactly 3 warnings even though it changed the warnings filter to `default`. How about changing it to:

```python
result.stdout.fnmatch_lines(
    [
        "test_async.py::test_1",
        "test_async.py::test_2",
        "test_async.py::test_3",
        "*async def functions are not natively supported*",
        "*3 skipped, * warnings in*",
    ]
)
```

?

Didn't check the other tests specifically, but I suspect it is the same problem.
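For reference, a rough sketch of how the bisection described at the top of that comment could be automated; the test IDs are placeholders and it assumes a single "culprit" test, which is a simplification:

```python
import pytest

# Known-failing test that only fails when certain tests run before it (placeholder ID).
FAILING = "testing/acceptance_test.py::test_warn_on_async_function"


def fails_after(prefix: list[str]) -> bool:
    # Run the candidate prefix followed by the failing test; a non-zero exit
    # code from pytest.main() means the failure was reproduced.
    return pytest.main([*prefix, FAILING, "-x", "-q"]) != 0


def bisect_culprit(candidates: list[str]) -> list[str]:
    # Keep the half of the candidates that still reproduces the failure.
    # Simplification: assumes one single test is responsible.
    while len(candidates) > 1:
        half = len(candidates) // 2
        first, second = candidates[:half], candidates[half:]
        candidates = first if fails_after(first) else second
    return candidates


if __name__ == "__main__":
    all_candidates = ["testing/test_nose.py::test_raises"]  # placeholder list
    print(bisect_culprit(all_candidates))
```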
I agree with your suggestion.
Sounds good @nicoddemus. Still busy with my pytest training, but I plan to pick this up tomorrow or on Friday. I think it's really important to have a passing CI again before everyone starts ignoring (or being confused by) the commit status.
Definitely, thanks for tackling this @The-Compiler!
Looks like there are way more tests failing though; in some cases we expect a certain order in the warning output, for example, and the new warnings mess everything up. Still no idea why that's only the case on Windows, though. Will need to dig into it some more I guess, but not today.
This should be ready for another round of review now - and hopefully it will finally result in a green CI everywhere, fingers crossed! 🤞 A couple of comments:
(the "During handling of the above exception, another exception occurred:" line and everything above it is new). The new output maybe isn't optimal from a user perspective, but if we want to change that, I suggest opening a separate issue.
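For context, that separator line comes from ordinary Python exception chaining: raising a new exception while another one is being handled makes the interpreter print the original traceback first, then that message. A minimal standalone example, unrelated to the pytest change itself:

```python
import traceback

try:
    try:
        {}["missing"]  # raises KeyError
    except KeyError:
        # Raising here, while the KeyError is being handled, produces the
        # "During handling of the above exception, another exception occurred:" line.
        raise RuntimeError("follow-up failure")
except RuntimeError:
    traceback.print_exc()
```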
Awesome, thanks!
Taking the liberty to merge this, since it's approved and I'd really like to see passing PR status checks again. @bluetech if you feel strongly about changing pytest to always output consistent enum reprs, I suggest you open a new issue. I can see where you're coming from, but personally I think it's too minor to worry about, as those reprs aren't typically exposed to users.
Fix Python 3.10 test issues (cherry picked from commit adc1974)
Adjust enum reprs for Python 3.10
Note that the Enum reprs change was reverted upstream. The first Python version with the revert is 3.10.0b4. We can adapt the `if` to say 3.11, or we can revert it for now, or we can adapt the `if` to be more complex (and check for the actual pre-release).
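A sketch of what the first and last of those options could look like in terms of `sys.version_info`; this is not the actual patch, just an illustration of the checks involved:

```python
import sys

# Illustration only, not the actual patch.

# Option "adapt the if to say 3.11": only expect the new enum reprs on 3.11+.
expect_new_reprs = sys.version_info >= (3, 11)

# Option "check for the actual pre-release": sys.version_info is
# (major, minor, micro, releaselevel, serial) and compares element-wise;
# release levels sort alphabetically (alpha < beta < candidate < final),
# so this is True from 3.10.0b4 (where the revert landed) onwards.
reverted_reprs = sys.version_info >= (3, 10, 0, "beta", 4)

print(expect_new_reprs, reverted_reprs)
```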
I vote to adapt the `if`.
On it.
Potential alternative to #8553. Fixes the Python 3.10 tests for me, without #8540. See #8546, though I wouldn't close it yet, as we might want to change the enum reprs instead?