
--retry flag does not seem to be doing anything #1000

Closed
cristian-m-vasile opened this issue Jul 25, 2016 · 16 comments
@cristian-m-vasile commented Jul 25, 2016

cucumber --help specifies this: --retry ATTEMPTS  Specify the number of times to retry failing tests (default: 0)
However, when running cucumber --retry 5, nothing seems to change compared to a normal run. Is this feature actually working?

@danascheider
Contributor

Hi Cristian! The --retry flag is a new feature, so this could be a bug. Could you give us some more information? It'd be helpful to know the Cucumber version you're using, as well as any other options you're passing, such as --format.

Any other context you can give us would also be great, such as any code you're testing and any tests expected to be flaky. Thanks for reporting!

@brasmusson
Contributor

You're right; #992 is also needed to make the retry functionality work.

@danascheider
Contributor

Oh, has that not been merged yet?

@mattwynne
Member

No it hasn't, @danascheider. I think it would help to do #999 first, actually.

@cristian-m-vasile
Author

Hi @danascheider, my Cucumber version is 2.4.0 and the command I'm using is bundle exec cucumber --retry 1 --tags=@this. I'm afraid I can't send you any examples of flaky tests, but I can tell you that they sometimes throw Selenium::WebDriver::Error::StaleElementReferenceError or even RSpec::Expectations::ExpectationNotMetError when the JavaScript executes too slowly.
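For illustration only, here is a hypothetical Capybara/Selenium step of the kind that fails this way; the step name and selectors are invented, not taken from the actual suite:

```ruby
# Hypothetical step: a slow JavaScript update can surface either exception.
When(/^I open the results panel$/) do
  find('#results-toggle').click              # may raise Selenium::WebDriver::Error::StaleElementReferenceError
  expect(page).to have_css('#results-panel') # may raise RSpec::Expectations::ExpectationNotMetError on timeout
end
```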

@danascheider
Contributor

Thank you @cristian-m-vasile! It looks like we have some changes we need to merge to make this functionality work, so it is Cucumber, not you. I'm going to review those changes right now.

@luke-hill
Contributor

Hi there. I'm running this as an experimental feature through Jenkins. Before I copy and paste a load of code in here so you can see what I'm doing at my company, can I confirm whether this "should" be working currently, or whether there are additional items required first?

P.S. Many thanks for this functionality. We currently use Rake tasks, which are a bit more confusing, as we then need to set up our profiles (sketched below).

Luke
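For context, a minimal sketch of the Rake-task-plus-profile setup mentioned in the P.S. above; the task and profile names and the rerun.txt path are illustrative, assuming a cucumber.yml that defines matching profiles:

```ruby
# Rakefile sketch: the first task records failures, the second re-runs only those.
require 'cucumber/rake/task'

Cucumber::Rake::Task.new(:features) do |t|
  t.profile = 'default'  # e.g. cucumber.yml: default: --format rerun --out rerun.txt --format pretty
end

Cucumber::Rake::Task.new(:rerun_failures) do |t|
  t.profile = 'rerun'    # e.g. cucumber.yml: rerun: @rerun.txt --format pretty
end
```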

@danascheider
Contributor

Hi @luke-hill, right now this functionality is not expected to work. This is our oversight and we're working on getting some other PRs merged that will fix it!

mattwynne added this to the 3.0 milestone Sep 2, 2016
@mattwynne
Member

@luke-hill this should be fixed on the master branch. Can you give it a try?

@mattwynne
Member

@luke-hill I've released version 3.0.0.pre.1 just now. Try that and see if you get any improvement.
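For anyone trying it: Bundler only installs a prerelease when the version is pinned explicitly, so a Gemfile entry along these lines is needed:

```ruby
# Gemfile: pin the prerelease named above so Bundler will install it.
gem 'cucumber', '3.0.0.pre.1'
```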

@luke-hill
Contributor

@mattwynne I'm not sure. I ran a set of tests which I know are problematic in certain environments and got the following report in Jenkins:

38 scenarios (16 failed, 22 passed)

Now, the two features that were run contained a total of (14 + 12) = 26 scenarios/outlines.

Something about the report that is output also doesn't sit well with me. (Not sure whether I should paste screenshots here.) But one of the failures shown to me has passing steps at every juncture (maybe it failed the first time?).

The way we currently re-run tests, we can see what happened at each re-run, so we get something like this:

Run 1: 26 scenarios: 11 passed, 15 failed.
Run 2: 15 scenarios: 9 passed, 6 failed.
Run 3: 6 scenarios: 6 failed.

Hope this makes sense; apologies if not.
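For comparison, a sketch of the kind of re-run loop that produces per-run summaries like the ones above; the three-run cap and the rerun.txt path are illustrative, not our exact setup:

```ruby
# Each pass re-executes only the scenarios that failed on the previous pass,
# so every pass prints its own totals, one summary per run.
max_runs = 3
args = '--format rerun --out rerun.txt --format pretty'
passed = system("bundle exec cucumber #{args}")
run = 1
until passed || run >= max_runs
  run += 1
  passed = system("bundle exec cucumber @rerun.txt #{args}")
end
exit(passed ? 0 : 1)
```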

@rishi-freshbooks commented Sep 29, 2016

Hey -- if a test fails on the first run but passes on a subsequent run, shouldn't the exit code be 0? Right now, it's exiting with a non-zero status code.

The summary looks like this:

16:28:34 22 scenarios (1 failed, 21 passed)
16:28:34 321 steps (1 failed, 4 skipped, 316 passed)
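A quick way to observe the status being described (the command mirrors the one earlier in this thread):

```ruby
# Run with one retry, then print the exit status; currently non-zero even
# when the failing scenario passes on the retry.
system('bundle exec cucumber --retry 1')
puts $?.exitstatus
```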

@mattwynne
Member

@rishi-freshbooks you're right, I think the exit code should be zero if the scenario passed after retry (unless perhaps you were in strict mode). Could you raise a new ticket about that, please?

@mattwynne
Member

@luke-hill right now it's just going to count and output all the test cases that were run; the totals-printing bit doesn't know anything about retries, so it treats two runs of the same scenario as two test cases.

I'd like some help getting a spec for what the totals output should look like. Could you create a ticket and help us write a spec for it?

@luke-hill
Contributor

@mattwynne Sure thing. We've got a fairly comprehensive set of functions we use for our retries, combining Rake tasks and the cucumber.yml file. As I'm a GitHub newcomer, where do you want me to start documenting this? Under Issues, or somewhere else? (Sorry!)

Luke

lock bot commented Apr 16, 2019

This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.

lock bot locked as resolved and limited conversation to collaborators Apr 16, 2019