
Error for duplicate snapshot name when retrying test #142

Closed
rorystephenson opened this issue Jun 9, 2020 · 4 comments

@rorystephenson

We have automatic retries for our feature specs since they are complex and intermittently fail for various reasons. A spec can fail after an incorrect snapshot has already been sent, and when we retry the spec and send the snapshot again we get:

```
[percy] StatusCodeError 400 - {"errors":[{"status":"bad_request"},{"source":{"pointer":"/data/attributes/base"},"detail":"The name of each snapshot must be unique, and this name already exists in the build: 'order flow allows a PAO ticket to be bought: completed departure date picker [mobile-web-browser]' -- You can fix this by passing a 'name' param when creating the snapshot. See the docs for more info on identifying snapshots for your specific client: https://percy.io/docs"}]}
```

This is obviously the documented behaviour.

Ideally we'd detect the wrong element selection and avoid sending the incorrect snapshot in the first place but, for reasons that would take a while to explain and are unrelated to this issue, that's not feasible.

Instead, we'd like to be able to do one of the following:

  1. Delay snapshot sending until the spec has actually passed.
  2. Overwrite the existing snapshot, potentially gated behind a flag that indicates overwriting is allowed (which we'd only set on our retry runs).

Would you consider one of these two options? The first one can be implemented in this gem and I can write a PR. The second would likely require changes to the Percy agent and to your API.

@wwilsman
Contributor

Hi @rorystephenson!

Once a snapshot is received by Percy, that snapshot will immediately start processing (rendering, screenshotting, diffing). So when you take a snapshot with Percy, you should be sure that the DOM is accurate and settled for that specific snapshot beforehand. My advice would be to remove retries from your UI tests and address why they are needed, since that’s usually a bad sign and can cause other issues as well. If the retries are absolutely a necessity, then you can create a custom helper or hook that will only take the snapshot when the test is actually passing. I don't think we want this behavior directly in the SDK as that would mean having to support it for other SDKs as well to keep them in feature parity with each other.
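
For example, with RSpec you could use an after hook that checks the example's result and only snapshots when it finished cleanly. A rough sketch, assuming percy-capybara's Percy.snapshot helper (the exact snapshot call depends on your SDK version):

```ruby
# Rough sketch: only upload a snapshot once the example has passed,
# so a failing (and later retried) run never sends anything to Percy.
# Assumes percy-capybara's Percy.snapshot; adjust for your SDK version.
RSpec.configure do |config|
  config.after(:each, type: :feature) do |example|
    Percy.snapshot(page, name: example.full_description) if example.exception.nil?
  end
end
```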

@rorystephenson
Author

Hi @wwilsman, thanks for the response.

> So when you take a snapshot with Percy, you should be sure that the DOM is accurate and settled for that specific snapshot beforehand. My advice would be to remove retries from your UI tests and address why they are needed, since that’s usually a bad sign and can cause other issues as well.

100% agreed that retries are not great and ideally would be removed. Unfortunately the tests are long and complex, essentially running through our whole application, and often fail for reasons that are just not worth the time it takes to chase down. These failures are things like browser processes failing to start or chromedriver bugs causing incorrect elements to be selected. They're not generally bugs in the app or test suite, and they can take a very long time to track down and report to the relevant projects.

That being said...

> I don't think we want this behavior directly in the SDK as that would mean having to support it for other SDKs as well to keep them in feature parity with each other.

Yep, fair enough, and I figured this would be the case.

> If the retries are absolutely a necessity, then you can create a custom helper or hook that will only take the snapshot when the test is actually passing.

Given that multiple snapshots are taken within tests and we need to take the snapshots when the DOM is in the desired state, do you have any tips on how to queue up these snapshots and send them at the end of the test? As far as I can see, right now it would be necessary to monkey patch.

@wwilsman
Contributor

With multiple snapshots per test, it would be a little more complicated than I let on initially. Like you mentioned, you'll want a queue of snapshots that you can send at the end of the test.

Our SDKs are fairly straightforward and built so that custom SDKs are straightforward to create as well. The snapshot method in our SDKs grabs a serialized string representation of your DOM and sends it off to a background process to be handled and uploaded to our servers. To accomplish what you're looking for, you'll want to queue up those DOM strings and send them at the end of a successful test.

Since Ruby makes monkey patching easy, I would actually recommend that over writing something completely custom (unless of course that's something you'd rather do). You could either monkey patch the snapshot method and change this line, or monkey patch that underlying post method directly to push those body payloads into a queue. Then add a new method to loop over that queue and actually post the payloads. At the very bottom of your test, or in a hook, you can then call that additional method to send off the queued snapshots from that test. If your test fails, you can clear that queue before the retry, or when you add to the queue you can overwrite previous snapshots like you mentioned before. The implementation details are really up to you.
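
To sketch the queueing idea (the QueuedPercySnapshots module and its method names here are hypothetical, and the deliver block stands in for however you call the SDK's underlying post method):

```ruby
require 'percy' # from percy-capybara; the require path may differ by SDK version

# Hypothetical sketch: capture each DOM at snapshot time, upload later.
module QueuedPercySnapshots
  QUEUE = []

  # Overrides Percy.snapshot: serialize the DOM now (its state matters
  # at this exact moment), but defer the actual upload.
  def snapshot(page, options = {})
    QUEUE << { dom: page.html, options: options }
  end

  # Call from an after hook once the test has passed; the block is a
  # stand-in for posting one payload via the SDK's post method.
  def flush_snapshots(&deliver)
    QUEUE.each { |payload| deliver.call(payload) }
  ensure
    QUEUE.clear
  end
end

Percy.singleton_class.prepend(QueuedPercySnapshots)
```

On a passing run you would call Percy.flush_snapshots from an after hook; on a failed run you would skip the flush and clear QueuedPercySnapshots::QUEUE before the retry.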

@Robdel12
Contributor

Going to close this issue -- now that this SDK uses @percy/cli the issue is tracked upstream: percy/cli#281
