[Feature]: Possibility to annotate a test step as optional, i.e. test execution always continues after it #7786
Comments
@klhex Are your tests isolated from each other or not? It is important to answer this question, because there are different solutions depending on it.

If your tests are isolated, I wonder whether specifying …

If your tests are not isolated, then we are probably talking about some "steps" (as you mention) that are all part of a single multi-step test. And in this case you'd like some of your steps to not fail the test completely, but still be reflected in the test report as "failed steps". Do I understand this correctly?
@dgozman Thanks for looking into this. The typical structure of our e2e tests looks like the following extract, and the complete version of it corresponds to one test file [copy & pasted from my comment in another issue]:

```ts
// playwright-test stuff.
import { test, expect } from '@playwright/test';

test.describe('Test Case 123: Apply for consumer loan', () => {
  test.describe('Page 1 - Personal Data', () => {
    test('Correct page is shown', async () => {
    });
    test('Existing customer question is answered', async () => {
    });
    test('Mandatory fields are filled with valid data', async () => {
    });
    test('Navigation to next page', async () => {
    });
  });
  test.describe('Page 2 - Contact Data', () => {
    test('Correct page is shown', async () => {
    });
    test('Mandatory fields are filled with valid data', async () => {
    });
    test('Navigation to next page', async () => {
    });
  });
  test.describe('Page 3 - Profession & Financial Data', () => {
    test('Correct page is shown', async () => {
    });
    test('Proof of income is uploaded', async () => {
    });
    test('Mandatory fields are filled with valid data', async () => {
    });
    test('All document links can be opened', async () => {
    });
    test('Navigation to next page', async () => {
    });
  });
});
```

The great majority of the test steps (i.e. the `test(...)` blocks) are not isolated. For example: if a test step fails to fill out a mandatory field on the current page, it won't be possible to continue to the next page.

And none of the test steps are flaky, which is why retrying them would not make sense. Regarding the mail tests, for example: if one such test checking a specific mail fails, it either means that the mail had not been created by the backend (= bug) or that its checked content (usually the subject) has changed (= either a bug or an intended change which requires adjustment of the corresponding test). But they usually don't fail because a mail arrives late, since there's enough wait time before the mail-check part of the whole test suite starts (should one fail because a mail arrived late, this would point to a latency issue somewhere that would need investigation, and therefore the test should also fail in this scenario).

Regarding your last paragraph: you're right, and you have understood it correctly. I'd run an e2e test file such as the above example with option "-x" so that the whole test case/suite (comprising all tests included in the file) would stop being executed after the first failed test step (e.g. …).

I've just noticed something important regarding the described scenario and the current behavior of CLI option "-x" (I am currently evaluating the migration from Mocha to PW Test for our e2e tests and haven't checked all aspects yet, which is why I wasn't aware of this): currently, option "-x" (or "--max-failures") seems to apply to all tests that are executed by PW Test, regardless of whether they are spread among different files or not. This would not be suitable for our scenario, in which we have one e2e test case (such as the above) per file. If I am executing several e2e test cases (= several files) in parallel, I wouldn't want execution of file B (e.g. e2e test case "open checking account") to stop because of a failed test step in file A (e.g. e2e test case "apply for consumer loan"), because the two files are not logically connected. Is it currently already possible to limit the effect of "-x" on a per-file basis?
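Regarding per-file behavior: one existing Playwright mechanism that comes close is `test.describe.serial`, which skips the remaining tests in the group once one of them fails, independently of "-x". A minimal plain-TypeScript sketch of those skip-after-failure semantics (not Playwright code itself):

```typescript
// Sketch of "serial" semantics: steps run in order; once one fails,
// the remaining steps in the same group are skipped rather than run.
type Step = { name: string; run: () => Promise<void> };

async function runSerial(steps: Step[]): Promise<string[]> {
  const results: string[] = [];
  let failed = false;
  for (const step of steps) {
    if (failed) {
      results.push(`${step.name}: skipped`);
      continue;
    }
    try {
      await step.run();
      results.push(`${step.name}: passed`);
    } catch {
      results.push(`${step.name}: failed`);
      failed = true;
    }
  }
  return results;
}
```

Wrapping each spec file's tests in their own serial group would keep a failure in one file from affecting another, though whether that fits the reporting needs described above is a separate question.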
I'd like to add an explanation of why most people writing e2e tests (i.e. simulating typical customer journeys in the tested application) likely have to use the "--max-failures" or "-x" command line option of PW Test: in e2e tests, most (if not all) test steps need to be executed sequentially (because most human users/customers of an application perform only one activity at a time, e.g. clicking or typing ;-) and there is often a logical dependency between test steps.

Example: page 1 of the tested application contains a mandatory field and a button "Next" that allows navigating to page 2 once all mandatory fields on page 1 have been correctly filled out. Let's assume PW Test is run without option "-x" (stop after first failed test step), and that filling out that mandatory field for some reason does not work (the test step fails) and the field remains empty. Without option "-x" enabled, PW Test would continue executing test steps after the failed one and would try to click "Next", but the app would not be able to navigate to the next page because of the missing input in the mandatory field. PW Test would then continue executing all remaining test steps of the whole e2e test, and each step would run into a timeout error because the app is still waiting on page 1 and subsequent pages cannot be navigated to.

And since "-x" or "--max-failures" is therefore needed for e2e tests, not having an option to mark a test step as optional (i.e. execution continues after failure even if "-x" etc. is enabled) hurts whenever you have test steps in your e2e test which don't need to be successful for the customer to have an overall successful customer journey (e.g. being able to order a product at the end of the e2e test).
I have stumbled upon an additional use case: sometimes (as in our case) a customer journey through your tested app differs a bit between the various stages you may be running tests on. For example, in our case we've got 3 stages before PROD (test, integration, QA), and some backend services, such as checking the customer's email address for TLS support, for whatever reason are not running on all stages, thereby producing a different app layout (e.g. popups, warnings, dialogs, text information -- i.e. stuff under test) depending on the stage. Having a `test.optional` feature would allow you to avoid boilerplate test code such as "IF stage = test THEN click the checkbox to the left of the text 'we could not confirm that your email address supports TLS, please confirm that you may receive your data unencrypted'" etc. by just rendering the `page.check()` as optional.

Another such example: for whatever reason our cookie consent popup does not appear on the test and int stages. But since we usually test on the QA stage, our test code contains a confirmation click handling the cookie consent popup. However, the same test code would not work on test/int because it expects the popup to appear. The easiest solution would be to wrap that click as `test.optional`. It would still show up as an erroneous test in the log but would not stop execution of the test suite (end-to-end tests) when "-x" / "--max-failures" mode is activated (-x is standard for our e2e tests).
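As a stop-gap for such stage-dependent popups, a tiny helper can swallow a single step's failure while still logging it (`optionalStep` is an illustrative name, not a Playwright API):

```typescript
// Hypothetical helper: run one step, log its failure, but let the
// surrounding test continue instead of failing it.
async function optionalStep<T>(
  name: string,
  fn: () => Promise<T>,
): Promise<T | undefined> {
  try {
    return await fn();
  } catch (err) {
    console.warn(`optional step "${name}" failed:`, err);
    return undefined; // caller continues; the failure is only logged
  }
}
```

Usage would look like `await optionalStep('cookie consent', () => consentButton.click())`, where `consentButton` stands for whatever locator the test already has. Unlike the requested `test.optional`, this does not surface the failure in the test report.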
Happy New Year, everyone! @dgozman Any updates on this feature request? IMHO it would be a perfect candidate for starting the new year, setting Playwright Test further apart from most other test runners, which (AFAIK) don't offer such a useful feature. 😃
Now that we have the equally useful soft assertions, the next logical step is soft tests (see above), for example `test.soft(...)`, isn't it?
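Playwright's soft assertions (`expect.soft(...)`) record a failure without aborting the rest of the test body; conceptually they are a failure collector that reports everything at the end. A standalone sketch of that idea:

```typescript
// Minimal failure collector mirroring the idea behind soft assertions:
// failed checks are recorded, then reported together at the end.
class SoftAsserter {
  private failures: string[] = [];

  check(condition: boolean, message: string): void {
    if (!condition) this.failures.push(message);
  }

  assertAll(): void {
    if (this.failures.length > 0) {
      throw new Error(`soft assertion failures:\n${this.failures.join('\n')}`);
    }
  }
}
```

A `test.soft(...)` would lift the same idea one level up, from assertions inside a test to tests inside a file.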
Why was this issue closed?

Thank you for your involvement. This issue was closed due to limited engagement (upvotes/activity), lack of recent activity, and insufficient actionability. To maintain a manageable database, we prioritize issues based on these factors. If you disagree with this closure, please open a new issue and reference this one. More support or clarity on its necessity may prompt a review. Your understanding and cooperation are appreciated.
Feature request
This is a feature request for Playwright Test (PW's test runner).
Please provide a possibility to flag a specific test step (i.e. a `test(...)` block) as optional, which would have the effect of continuing test execution after this test regardless of whether "--max-failures" or "-x" is set (see PW Test CLI). Note: when "--max-failures" is set, it would first, as usual, repeat the test X times, but if the last attempt still fails it would not stop and would instead continue with the next test.

Use case: this feature would be especially useful for e2e tests which simulate usual/average customer/end-user activities in the tested application (customer journey). Often, there are important and not-so-important steps in a customer's journey through your application. The important ones are, for example, the ones relating to mandatory fields, while a not-so-important test step might be filling out an optional select box for marketing purposes ("Please select where you found out about our product..." --> Google search, TV ad, recommendation from a friend, ...).
For the optional (but still relevant) test steps I wouldn't want my whole test suite (test file) to stop if such an optional step fails, but I would want my test suite to stop after an important test step fails (e.g. customer enters their email address).
In our case, since the majority of the test steps in our e2e tests are important, we'd use the "-x" option, i.e. test execution stops after the first failure. But so far this option applies to all tests, even though every one of our e2e test files contains some optional tests.
Another example where such an option would help: our application sends out several emails to the customer. Part of our e2e tests consists of checking whether those emails have actually arrived at the indicated email address. Although this part (mailbox checking) doesn't use the PW API (because we don't want to depend on UI changes of an email provider), these email checks are seamlessly integrated into the e2e tests and therefore use the same test runner. However, currently, with the "-x" option active, the mail checks would stop after the first check failure. That means that if, let's say, the 4th mail out of 10 has not been received, the test runner would stop after checking the 4th mail and wouldn't check the remaining 6, even though these might have arrived fine.
Yet another example: sometimes you may want to add some "smoke" or validation tests to your e2e tests, such as confirming that it is not possible to enter a letter in a date field. If such a test failed, it would not imply that a customer is unable to successfully use your application (e.g. request a loan), because they would still be able to enter a valid date in the date field. In that case you wouldn't want your e2e test execution to stop, but you would like to be notified about that failure in the log...
By adding this feature, PW Test would offer a useful option which, AFAIK, is not (yet?) available in major test runners such as Mocha and Jest.
This is how it could look like:
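A hypothetical sketch of the requested semantics — a step marked optional is reported on failure but does not count toward the "--max-failures" budget (nothing below is an existing Playwright API):

```typescript
// Hypothetical semantics for "optional" steps under --max-failures:
// an optional step's failure is logged but does not consume the budget.
type Step = { name: string; optional?: boolean; run: () => Promise<void> };

async function runWithMaxFailures(
  steps: Step[],
  maxFailures: number,
): Promise<string[]> {
  const log: string[] = [];
  let hardFailures = 0;
  for (const step of steps) {
    if (hardFailures >= maxFailures) {
      log.push(`${step.name}: skipped`); // budget exhausted, as with -x
      continue;
    }
    try {
      await step.run();
      log.push(`${step.name}: passed`);
    } catch {
      log.push(`${step.name}: failed${step.optional ? ' (optional)' : ''}`);
      if (!step.optional) hardFailures++; // only hard failures count
    }
  }
  return log;
}
```

With `maxFailures = 1` (i.e. "-x"), an optional marketing-survey step could fail without skipping the rest of the journey, while a failed mandatory-field step would still stop the file.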
This feature had already been suggested before in a side-comment of another issue, see here.