
Fix Python 3.10 test issues #8555

Merged: 5 commits into pytest-dev:main on May 5, 2021

Conversation

The-Compiler (Member)

Potential alternative to #8553. Fixes the Python 3.10 tests for me, without #8540. See #8546, though I wouldn't close it yet, as we might want to change the enum reprs instead?

@The-Compiler requested a review from hroncok on April 14, 2021, 09:40
The-Compiler (Member Author)

Huh. ubuntu-py310 works, but on windows-py310, all tests using @pytest.mark.filterwarnings("default") (I think?) still fail. I'm stumped.

The-Compiler (Member Author)

Fair point @nicoddemus - but do you have any idea what's up with the failures on Windows?

nicoddemus (Member)

> Fair point @nicoddemus - but do you have any idea what's up with the failures on Windows?

Ahh sorry, I missed those comments. No idea at a quick glance; I'll try to test this locally when I get the chance. 👍

repr(r) == "<RunResult ret=ExitCode.TESTS_FAILED len(stdout.lines)=3"
" len(stderr.lines)=4 duration=0.50s>"
)
expected = [
Review comment (Member):

I suggest we just make it consistent here -- make it use one or the other (the new one looks better to me)

Review comment (Member):

I suppose we will definitely search for "3.10" when dropping that version, so in the end it doesn't really matter, I guess.

@@ -448,7 +448,10 @@ def test_idmaker_enum(self) -> None:
enum = pytest.importorskip("enum")
e = enum.Enum("Foo", "one, two")
result = idmaker(("a", "b"), [pytest.param(e.one, e.two)])
assert result == ["Foo.one-Foo.two"]
assert result in [
Review comment (Member):

I described my preferred solution here, though it shouldn't hold up this PR.

nicoddemus (Member) commented Apr 24, 2021

OK, I did some debugging (sorry for the delay), and here is what I found.

First, I could not reproduce the failures by running the tests in isolation; I had to do a binary search to find the minimal pair of tests that, when executed in order, make the second one fail:

import pytest

pytest.main(
    [
        "testing/test_nose.py::test_raises",
        "testing/acceptance_test.py::test_warn_on_async_function",
        "-x",
        "-v",
    ]
)

Running this script causes test_warn_on_async_function to fail, which it doesn't when run in isolation.

Looking at test_nose.py::test_raises, we don't see anything special in there, so I suspected that merely importing nose was triggering the problem. I changed the test to:

def test_raises(pytester: Pytester) -> None:
    pytester.makepyfile("import nose")

And indeed the problem still happens, so merely importing nose triggers the extra warning in test_warn_on_async_function.

Why does it fail? Well, the test changes the warning filtering to default:

@pytest.mark.filterwarnings("default")
def test_warn_on_async_function(pytester: Pytester) -> None:

And it checks the number of warnings exactly:

result.stdout.fnmatch_lines(
    [
        "test_async.py::test_1",
        "test_async.py::test_2",
        "test_async.py::test_3",
        "*async def functions are not natively supported*",
        "*3 skipped, 3 warnings in*",
    ]
)

So when nose is imported, an extra warning is triggered, which causes that check to fail (expected 3 warnings, got 4).

Why is this happening only on Windows? I suspect there is some Windows-specific code that triggers the warning, but I didn't research it further.

Why is the warning leaking from one test to the other? Not sure either.


What to do?

I think the test is a little brittle: it expects exactly 3 warnings even though it changed the warnings filter to default, so any extra warning raised in the future would make it fail. I don't think we need to check the exact number of warnings here; checking the messages is enough, so perhaps this:

result.stdout.fnmatch_lines(
    [
        "test_async.py::test_1",
        "test_async.py::test_2",
        "test_async.py::test_3",
        "*async def functions are not natively supported*",
        "*3 skipped, * warnings in*",
    ]
)

?

I didn't check the other tests specifically, but I suspect they have the same problem.

bluetech (Member)

> What to do?

I agree with your suggestion.

The-Compiler (Member Author)

Sounds good @nicoddemus. I'm still busy with my pytest training, but I plan to pick this up tomorrow or on Friday. I think it's really important to have a passing CI again before everyone starts ignoring (or being confused by) the commit status.

nicoddemus (Member)

Definitely, thanks for tackling this @The-Compiler!

The-Compiler (Member Author)

Looks like there are way more tests failing, though - for example, sometimes we expect a certain order in the warning output, and the new warnings mess everything up. Still no idea why that's only the case on Windows.

I'll need to dig into it some more, I guess, but not today.

The-Compiler (Member Author) commented May 4, 2021

This should be ready for another round of review now - and hopefully finally result in a green CI everywhere, fingers crossed! 🤞

A couple of comments:

  • Agreed with making the version checks more explicit, based on @nicoddemus' suggestions.
  • I disagree with @nicoddemus' proposal above on how to handle warnings in tests. I don't think we should write our integration tests so that they are less strict and pass even if other warnings occur - in fact, quite the opposite: if unexpected warnings occur, the tests should fail. So instead of relaxing the matching, I adjusted the filterwarnings marks so they don't unconditionally set default (or always) for all warnings, which would override our existing filters in pyproject.toml as well as the default error filter, i.e. potentially hide real issues. Instead, they now only affect the warning class the test in question actually uses (see the first sketch below this list).
  • test_collect_symlink_dir failed for me on Windows (where creating symlinks needs special privileges). I'm not sure how it could ever have run in earlier versions, as it didn't use symlink_or_skip like all the others (see the second sketch below this list). Maybe it only passes on CI because the CI runners have those privileges. I didn't investigate further, as this PR already took a lot more work than expected 😅
  • The first beta brought a couple of new issues, which should now all be fixed as well. One of them is "Update threading.Event.isSet() calls" (execnet#127); the other is changed output for SyntaxErrors in skip/xfail expressions:
_______________________ ERROR at setup of test_nameerror _______________________
name 'asd' is not defined

During handling of the above exception, another exception occurred:
Error evaluating 'skipif' condition
    asd
NameError: name 'asd' is not defined
________________________ ERROR at setup of test_syntax _________________________
invalid syntax. Perhaps you forgot a comma? (<xfail condition>, line 1)

During handling of the above exception, another exception occurred:
Error evaluating 'xfail' condition
    syntax error
     ^
SyntaxError: invalid syntax

(the "During handling of the above exception, another exception occurred:" line and everything above it is new).

The new output maybe isn't optimal from a user perspective, but if we want to change that, I suggest opening a separate issue.

  • I also rebased and cleaned up the commits - I'd suggest not squashing this PR, as it contains different kinds of changes.
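
Two editor's sketches for the points above - illustrations only, not code taken from this PR's diff.

First, the scoped filterwarnings mark. The exact warning class is an assumption (pytest.PytestUnhandledCoroutineWarning is the class behind the "async def functions are not natively supported" message); the point is the action::category filter syntax, which leaves all other configured filters untouched:

import pytest
from _pytest.pytester import Pytester

# Too broad: "default" resets *every* warning filter, overriding the
# filters configured in pyproject.toml (including any "error" entries):
#     @pytest.mark.filterwarnings("default")
#
# Scoped: only the warning class this test actually exercises is reset
# to "default"; everything else keeps its configured behavior.
@pytest.mark.filterwarnings("default::pytest.PytestUnhandledCoroutineWarning")
def test_warn_on_async_function(pytester: Pytester) -> None:
    ...

Second, the symlink_or_skip idea - roughly what such a helper looks like, not necessarily pytest's exact implementation:

import os
import pytest

def symlink_or_skip(src, dst):
    # Create a symlink, or skip the calling test when the platform or
    # the current user's privileges (as on Windows) don't allow it.
    try:
        os.symlink(str(src), str(dst))
    except (OSError, NotImplementedError) as e:
        pytest.skip(f"symlinks not supported: {e}")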

nicoddemus (Member) left a review:

Awesome, thanks!

The-Compiler (Member Author)

I'm taking the liberty of merging this, since it's approved and I'd really like to see passing PR status checks again. @bluetech, if you feel strongly about changing pytest to always output consistent enum reprs, I suggest you open a new issue. I can see where you're coming from, but personally I think it's too minor to worry about, as those reprs aren't typically exposed to users.

@The-Compiler merged commit adc1974 into pytest-dev:main on May 5, 2021
mgorny pushed a commit to mgorny/pytest that referenced this pull request on May 5, 2021:
Fix Python 3.10 test issues

(cherry picked from commit adc1974)
hroncok (Member) commented Jul 11, 2021

> Adjust enum reprs for Python 3.10

Note that the Enum repr change was reverted upstream:

https://mail.python.org/archives/list/python-dev@python.org/message/LSTMFAPSPD3BGZ4D6HQFODXZVB3PLYKF/

The first Python version with the revert is 3.10.0b4. We can adapt the if to say 3.11, or we can revert it for now, or we can adapt the if to be more complex (and check for the actual pre-release). A sketch of that last variant is below.
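
A minimal sketch of that pre-release check (editor's illustration, assuming the new Enum repr was present throughout the 3.10 pre-releases up to, but not including, 3.10.0b4):

import sys

# True only on the 3.10 pre-releases that still carry the new Enum repr
# (reverted in 3.10.0b4). sys.version_info is a comparable five-field
# tuple, and its releaselevel strings happen to sort in release order:
# "alpha" < "beta" < "candidate" < "final".
NEW_ENUM_REPR = sys.version_info[:2] == (3, 10) and sys.version_info < (3, 10, 0, "beta", 4)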

nicoddemus (Member)

> We can adapt the if to say 3.11, or we can revert it for now, or we can adapt the if to be more complex (and check for the actual pre-release).

I vote to adapt the if to use 3.11.

hroncok (Member) commented Jul 12, 2021

On it.

hroncok (Member) commented Jul 12, 2021

#8895
