Is there an option to disable dependencies, e.g. via some parser option?
If someone wants to run only one test suite, but it depends on a test from a different suite, the dependent tests will be skipped.
If there are many heavily dependent tests, the dependencies save a lot of time when the whole suite is run, but they are counterproductive when only part of the tests are to be run (one has to open the file and comment out the dependencies).
No, currently there is no such option. To be honest, I don't know what such an option should look like.
If I understand you correctly, you have a large test suite and you only want to run a subset of the tests. But some of the selected tests depend on other tests that are not in the selection, so these tests get skipped because their dependencies have not been run.

What I could imagine would be to add a global option ignore_unknown_dependency. If set to False, the default, a test will be skipped unless all of its dependencies have been run successfully; this is the current behavior. If set to True, a test will be skipped only if one of its dependencies has been skipped or has failed, i.e. dependencies that have not been run at all would be ignored. Would that suit your use case?
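For illustration, a minimal sketch of how such an option could be registered in a conftest.py. The option name ignore_unknown_dependency and its exact form are only the proposal above; the plugin does not currently provide it:

```python
# Hypothetical sketch only: registers the proposed option so that the plugin
# (or a local hook) could consult it.  Here it is modeled as a simple boolean
# flag; the real spelling and semantics would be up to the plugin.
def pytest_addoption(parser):
    parser.addoption(
        "--ignore-unknown-dependency",
        action="store_true",
        default=False,
        help="Skip a test only if one of its dependencies was skipped or "
             "failed; dependencies that have not been run at all are ignored.",
    )
```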
But test_BBB1 is skipped only because test_AAA has not been run at all.
With such an option, running pytest -v -k BBB --ignore_unknown_dependency True would then run test_BBB1 instead of skipping it.
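For reference, a minimal sketch of the scenario in that example, assuming both tests live in the same module (the layout is an assumption):

```python
import pytest

@pytest.mark.dependency()
def test_AAA():
    pass

@pytest.mark.dependency(depends=["test_AAA"])
def test_BBB1():
    pass

# pytest -v        -> test_AAA and test_BBB1 both run.
# pytest -v -k BBB -> test_AAA is deselected, so test_BBB1 is skipped because
#                     its dependency has not been run; under the proposed
#                     option the unknown dependency would be ignored and
#                     test_BBB1 would run.
```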