Polish off retry #992
Conversation
Force-pushed from ec57acf to 78cedb6
Force-pushed from a812fd9 to dbe2ba8
ping @brasmusson. I wasn't able to reproduce the crash you mentioned, but the totals in the second scenario do look funny, don't they? Any thoughts why? |
Note that also the pretty formatter output is messy (missing steps etc) but I don't care about that right now. |
Force-pushed from 3a27fa1 to eb81b6b
@mattwynne The crash I'm talking about is this one (Travis build), however it does not occur after the Event bus has been moved to the core (which so far has only happened on the v2.x-bugfix branch). |
Force-pushed from dbe2ba8 to df2ebf9
OK I believe this is ready to merge. Anyone else want to take a look @cucumber/cucumber-ruby ? |
- changed the acceptance test to use more descriptive names for the test scenarios (I kept getting confused)
- fixed a bug in the acceptance tests caused by our scenarios all running in one process and using global variables (I added init scripts to reset them to zero at the start of a test run)
- tidied up the docs a bit in the acceptance test
- removed the @wip tag
- added the retry filter into the filter chain in the runtime

There's a puzzle here with the output on the second scenario, with two retries. I've detailed what I think the summary should be, but there seems to be a passing scenario missing from the results totals. Did I make a mistake, or is the code still wrong?
Force-pushed from 3df7b4d to 6532374
@@ -229,6 +229,7 @@ def filters
  filters << Cucumber::Core::Test::LocationsFilter.new(filespecs.locations)
  filters << Filters::Randomizer.new(@configuration.seed) if @configuration.randomize?
  filters << Filters::Quit.new
+ filters << Filters::Retry.new(@configuration)
Should Quit be the last filter, so that it is possible to abort the retry of a long-running test case?
I expect I'm missing something, but won't the Quit filter be invoked as soon as the first attempt to run the test case is passed on by the retry filter?
I expect the following behaviour:

    Given I use retry
    And a long-running, failing test has started to execute
    When Cucumber.wants_to_quit is set
    Then the long-running, failing test case is only executed once

As far as I can tell, the order of the Quit and the Retry filters needs to be swapped to get this behaviour.
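The ordering question can be seen with a toy model of the filter chain. This is a sketch under assumptions — `Runner`, `QuitFilter` and `RetryFilter` below are invented stand-ins, not Cucumber's real `Filters::Quit`/`Filters::Retry` classes — but it shows why the filter closest to the runner is the one that sees every retry attempt:

```ruby
# Toy filter chain (invented classes, not Cucumber's actual API).
# Each filter passes test cases on to its receiver; the last filter
# constructed around the runner is the one closest to it.

class Runner
  attr_reader :executions

  def initialize(quit_flag)
    @executions = 0
    @quit_flag = quit_flag
  end

  def test_case(_tc)
    @executions += 1
    @quit_flag[:wants_to_quit] = true # simulate the user requesting quit mid-run
    raise "step failed"               # this scenario never passes
  end
end

class QuitFilter
  def initialize(receiver, quit_flag)
    @receiver, @quit_flag = receiver, quit_flag
  end

  def test_case(tc)
    raise "quit" if @quit_flag[:wants_to_quit]
    @receiver.test_case(tc)
  end
end

class RetryFilter
  def initialize(receiver, attempts)
    @receiver, @attempts = receiver, attempts
  end

  def test_case(tc)
    tries = 0
    begin
      tries += 1
      @receiver.test_case(tc)
    rescue RuntimeError => e
      raise if e.message == "quit" # an abort is not a failure worth retrying
      retry if tries < @attempts
    end
  end
end

# Current order: Quit outermost, Retry next to the runner.
# Quit only checks once, so all 3 attempts run despite wants_to_quit.
quit = { wants_to_quit: false }
runner = Runner.new(quit)
chain = QuitFilter.new(RetryFilter.new(runner, 3), quit)
chain.test_case(:shakey) rescue nil
puts runner.executions # => 3

# Swapped order: Retry outermost, Quit next to the runner.
# Every retry attempt passes through Quit, so the run aborts after one attempt.
quit = { wants_to_quit: false }
runner = Runner.new(quit)
chain = RetryFilter.new(QuitFilter.new(runner, quit), 3)
chain.test_case(:shakey) rescue nil
puts runner.executions # => 1
```

Under this toy model, swapping the two filters is what produces the "only executed once" behaviour described above.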
I have been giving this a try locally and I noticed that retry does not work when it is invoked from a profile. This is my cucumber.yml (which works fine for everything but retry):

```yaml
#Profiles
default: -p retry -p base_options -p cli_summary -p allure -p json_pretty -p test_tags -r features
create_runtime_log: -p base_options -p cli_pretty -p test_tags -p parallel_logger -r features
#Options
retry: --retry 3
base_options: --no-source --color
test_tags: --tags ~@ignore
#Formatters
cli_summary: --format summary
cli_pretty: --format pretty
allure: --format AllureCucumber::Formatter --out /fake_dir
json_pretty: --format json_pretty --out report/json-results/results_<%= rand(36**10).to_s(36) %>.json
parallel_logger: --format ParallelTests::Gherkin::RuntimeLogger --out parallel_runtime_new.log
```

Moreover, I have noticed that when using the pretty or summary formatter, the failure output at the end of the test run is duplicated for each individual failure of the same test. This seems dangerous for re-run logic and reporting. In my view, Cucumber should simply report the test as passed or failed.

```
Failing Scenarios:
cucumber -p retry -p base_options -p cli_pretty -p allure -p json_pretty -p test_tags features/tests/repo/login.feature:32
cucumber -p retry -p base_options -p cli_pretty -p allure -p json_pretty -p test_tags features/tests/repo/login.feature:32
cucumber -p retry -p base_options -p cli_pretty -p allure -p json_pretty -p test_tags features/tests/repo/login.feature:32
cucumber -p retry -p base_options -p cli_pretty -p allure -p json_pretty -p test_tags features/tests/repo/login.feature:32
``` |
@tk8817 - thanks for reporting! @mattwynne and @brasmusson - do you think we should move this to a separate issue or fix it as part of this one? |
No, a test that passes after retry is not the same as
@danascheider The summary reporting and exit code handling when using |
@brasmusson - I agree with you 100%, apologies if my comment did not display that. As you can see in my example, the test failed 3 times; it never passed. My contention is that the formatter shows it as 3 failures, and if I were to re-run based on my junit/json results, it would attempt to re-run the same test case 3 times. I was not attempting to make a claim about cases which switch from failed to passed, and indeed adding a status of 'flaky' would be very interesting. My larger concern is the fact that retry does not work when invoked from cucumber.yml. Best, |
@danascheider I'd rather deal with these as separate issues I think. The profiles thing seems like a bug, but the formatting / reporting thing needs more thinking about, and will probably be more work to implement. @tk8817 try using the new |
Hmm, I am not articulating this well :( The issue is not about the visual output in realtime. I noted with the

```
Failing Scenarios:
cucumber -p retry -p base_options -p cli_pretty -p allure -p json_pretty -p test_tags features/tests/repo/login.feature:32
cucumber -p retry -p base_options -p cli_pretty -p allure -p json_pretty -p test_tags features/tests/repo/login.feature:32
cucumber -p retry -p base_options -p cli_pretty -p allure -p json_pretty -p test_tags features/tests/repo/login.feature:32
cucumber -p retry -p base_options -p cli_pretty -p allure -p json_pretty -p test_tags features/tests/repo/login.feature:32
```

My concerns with this are
Until there is a strategy to coin tests as

One alternate option would be to only report the final run of the scenario. As Matt said, a test will only be run exactly as many times as it needs to be. The upside is much cleaner reporting / re-running support. The downside is NO indication that a test was retried in that reporting. Rock, meet hard place :) |
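One way to reconcile these options is to collapse all attempts of a scenario into a single reported status, keeping a distinct "flaky" outcome. A minimal sketch, assuming only :passed/:failed attempt results — `overall_status` is an invented helper, not Cucumber's API:

```ruby
# Invented helper, not part of Cucumber: reduce the results of every
# attempt of one scenario to a single status for the summary line.
def overall_status(attempt_results)
  if attempt_results.last == :passed
    # Passed eventually: flaky if it needed more than one attempt.
    attempt_results.size > 1 ? :flaky : :passed
  else
    # Every attempt failed: report one failure, not one line per attempt.
    :failed
  end
end

puts overall_status([:passed])                            # => passed
puts overall_status([:failed, :failed, :passed])          # => flaky
puts overall_status([:failed, :failed, :failed, :failed]) # => failed
```

With this reduction, the login.feature:32 example above would appear once under "Failing Scenarios:", and a pass-after-retry would land under a hypothetical "Flaky Scenarios:" heading instead.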
With respect to the summary and the pretty formatters I think the current behaviour is correct:
This is what happened, and it should therefore be reported that way by the summary formatter (and similarly by the pretty formatter).

With respect to the summary printout (side note - Cucumber-JVM prints the summary statistics, snippets etc. separately, so if the pretty formatter output is sent to a file, the summary statistics etc. are not included in the file), there is no need to list the same test case several times under "Failing Scenarios:"; it should rather be listed once under "Flaky Scenarios:" - and the same in the scenario statistics.

The Json formatter should absolutely include every individual execution of a flaky test case: we can neither assume that the same step fails each time a flaky test case fails, nor, if the same step actually happens to fail, that the error is the same. Everything needs to be included in the Json formatter output. Using the Json formatter output it should be possible to find out exactly what happened, so each time any step definition or hook has executed, the information about this needs to be in the Json formatter output.

With respect to the JUnit formatter, reporting a flaky test case once could be an option. As the Cucumber statuses are mapped down to the three statuses, |
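To make the "keep every execution" idea concrete, here is a sketch of what such a report entry might look like. All field names and failure details below are invented for illustration; this is not the actual Json formatter schema:

```ruby
require "json"

# Invented report shape: one entry per scenario, but every attempt kept,
# including which step failed and with what error on each run.
report = {
  "scenario" => "features/tests/repo/login.feature:32",
  "status"   => "flaky", # the collapsed status, for re-run tooling
  "attempts" => [
    { "attempt" => 1, "status" => "failed", "failed_step" => "I log in",
      "error" => "timeout" },
    { "attempt" => 2, "status" => "failed", "failed_step" => "I see the repo",
      "error" => "element not found" },
    { "attempt" => 3, "status" => "passed" }
  ]
}

puts JSON.pretty_generate(report)
```

A re-run tool would read only the top-level `status` (so the scenario is retried once, not three times), while a flakiness dashboard could dig into `attempts` to see that different steps failed on different runs.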
OK merging this one, let's discuss enhancements in new tickets. |
Somehow I merged this into the wrong branch instead of master. Have fixed that in 810384e |
This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs. |
We've had a problem since we merged #920 that the retry behaviour doesn't quite work in integration. This PR is my attempt to resolve that, so we can remove the @wip tags and close #982 and #981.

There's a puzzle here with the output on the second scenario, with two retries. I've detailed what I think the summary should be, but there seems to be a passing scenario missing from the results totals, so the second scenario (Retry twice, so Shakey starts to pass too) is currently failing. Did I make a mistake, or is the code still wrong?

TO DO
- the Retry twice, so Shakey starts to pass too scenario