Puppeteer fix for creating blank (off page) screenshots #808
Conversation
This is very clever! Very similar to the Chromy implementation, which provides very consistent screenshots. Thank you for this PR.
I will test this ASAP. And yes, I would expect some differences between the existing screenshots and the new ones. Could you please run that report and then save the result as a PDF in this PR so we can see the differences?
Since this PR is similar to Chromy's implementation, does it suffer from the Chromy
@garris Here is a PDF containing only the failed tests, because the full output of the tests produced a 30MB PDF which GitHub didn't like. @matthew-dean I haven't tested it with vh, sorry. Good to hear the positive feedback; I am happy it worked for others as well.
@matthew-dean do you mind trying this on your project where you specify vh units?
@garris thoughts about the test results? Do you need the complete test results? I can try to divide them into 3 parts to be able to attach them here.
Hi, can you please post a simple app in the example directory which demonstrates this issue? It is hard to tell from your screenshots what is really happening, so a simple page where you can produce the broken behavior and then show how your change fixes it would help. Please check that into the examples directory. I personally have not experienced this issue, so I need to understand better what this is fixing. Thank you!
Also, when you generate PDF files: if you are on a Mac you should be able to "print to PDF". This way you are not generating huge bitmaps; only genuine source images are saved and placed inside the PDF documents.
@garris Oh, sure, I'll pull this branch and run right now.
Hmm 🤔 this branch seems to run slower than Chromy on the same project. Have you done performance timing comparisons between Chromy and Puppeteer as engines? Maybe I'm wrong; I haven't run a stopwatch, and my Backstop tests already take forever. It could be that Puppeteer is just less verbose in its output. It doesn't have the
I am getting a number of 1-pixel diffs for some components (~80 out of 400), BUT they aren't significant, and everything else is now capturing correctly. This is more of an FYI: Chromy and Puppeteer tests are not 1-to-1 in terms of pixel precision. But normally one would just choose one engine.
@matthew-dean please check the report summary for a test duration. It would be great if you could post:
1. the duration of your test in Chromy
2. the duration of your test in Puppeteer
3. the duration of your test in this new version of Puppeteer
I will do testing of this one. Will update you when this is published to NPM. Thanks!!! |
ok -- published to backstop@beta! validate here... |
@cactusa Thank you for this PR -- it is clearly creating developer happiness already -- however -- I think I have been having the opposite problem. After applying your patch my screenshots are capturing unexpected screen areas. Here is an example -- the item on the left is before -- the center item is with your PR... So, I am curious why some engineers clearly have this issue while others don't. What is your environment? OS, RAM, NPM, NODE versions? And if possible could you please post a scenario which would enable me to reproduce? Thank you!
@garris FWIW I have seen similar glitches on my end -- but I admit they beat having only blank screenshots 🤷‍♂️
Which glitches? Can you be specific and post examples? I am trying to understand the root cause of the issue. As you can imagine -- it would be silly to release a version which is guaranteed to break most users' tests. On a separate note: are the blank screenshots happening because you have an element below a scroll -- IOW they are not visible on the screen? Did you try viewing the screenshot process using
Sorry. I didn't look at your screenshot carefully enough and perhaps it's different. I have seen glitches where screenshots are vertically slightly off, i.e., they show some of the area above the selector target. These glitches seem stable/reproducible so that tests will pass if I make it a reference; in the project where I was only getting blanks, that's still an improvement. |
@garris I will try to address all of your requests today.
I will try to explain in detail what the issue is and what this PR is trying to fix.

WRT your "I have been having the opposite problem" issue: from your screenshot I can see that the TEST image is taken from a page that is slightly different from the REFERENCE. It looks like the "Reports to" section is new and has not been captured in the original REFERENCE image. And the blue label at the top of the TEST image seems to be a "debug panel" which might have been mistakenly enabled when you were running the tests.

Here is a printed PDF of the full smoke tests as you requested: npm - v4.2.0

@garris have I covered everything? Let me know if you need anything else.
@cactusa -- ok -- this helps a lot -- thank you very much for that detail. I have to apologize -- that screenshot from last night had some genuine differences which made it confusing. Here is a new screenshot comparing the previous reference with your patch. (PLEASE NOTE: GREY AREAS REMOVED FOR COMPLIANCE -- PLEASE IGNORE THAT) And here is a test I just ran without your patch...

Chrome obviously does not render items that are offscreen. This would explain why some of your images render correctly, some are cut off (presumably where the scroll border intersects the images), and the rest don't render at all. It would also explain why your solution fixes the issue -- because you force a fullscreen render and then scope down by cropping.

There is in fact another user who has run into this issue and is looking to fix the problem by scrolling. Does this approach also fix your issue?

The issue for me is that both fixes have side effects which need to be addressed before we can release. I like your approach because it does not change state. But there is clearly an unexpected geometry issue -- and -- there is a report that we may also have a perf issue. Conversely, I like the scrolling approach because it is lightweight -- but it changes state in the client app -- which will likely have other side effects. Also, the scrolling approach alone will not work in cases where the target element does not completely fit within the bounds of the enclosing element.

There are a few things to consider here, so we should think about what solution or combination of solutions will truly address the issue. Thank you for all your help here -- I am sure we can come up with a good solution!
This reverts commit 698258d.

# Conflicts:
#	package-lock.json
#	package.json
I have just created branch
Also -- perhaps the vertical offset I am seeing is due to box-model -- i.e. I suspect there are inner element margin overflows etc. which are affecting the boundingBox values. I will check this later today.
¯\_(ツ)_/¯ I've looked around in the generated html_report and I can't find anything that says anything about test duration. (Looked manually, did a search for |
To your comment - I am not convinced that this is what is causing it. My tests start off from a mobile-sized screen and then move on to tablet and desktop sizes. For the mobile screen size, which has width and height defined in the backstop config, the screen is tiny; its rendered height simply would not fit all of the screenshots that are taken correctly. If your theory were correct, it should have started failing after the first two successful screenshots in the mobile instance of the test run.
The only issue I can see from your test results is that your TEST image contains a blue button/label that you cannot see in the original REFERENCE image. I am assuming that this is not expected.
Does the selector class that you used to target that specific section of the page have any positioning or negative margins that might have resulted in this cropped image? In terms of performance I don't really know whether it would be a problem. The only thing I know is that el.screenshot() uses page.screenshot() underneath as well. I can run the tests with and without my fix to compare run times and will update this post when I'm done.
@garris So following your comments, I have tested a few things and here are my findings:
Based on the above I can say that there is definitely something weird with element.screenshot(). According to its official documentation, element.screenshot() is supposed to scroll to the element and then take a screenshot. This, however, does not happen as you would expect. It may be that the element is not yet rendered at the location it scrolls to, so it takes a screenshot of blank space before the element is rendered.

I have also tried to incorporate the solution for scrolling (#765) before element.screenshot(). It actually made the results a bit worse: I still get the blank screenshots, but now some of the ones that were successful are weirdly cropped. BackstopJS Report3.pdf

Any thoughts @garris ? I am really trying to get this fix or any fix through so I can return my visual tests to their place in the deployment pipeline. Chromy just doesn't cut it anymore. Any updates about:
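For reference, the scroll-then-capture variant discussed above can be sketched roughly like this (an illustrative sketch only; `scrollThenCapture` and `scrollOffsetFor` are names invented here, not the actual #765 code):

```javascript
// Rough sketch of scrolling an element into view before capturing it.
// `page` is assumed to be a Puppeteer Page; names here are illustrative.

// Pure helper: how far to scroll so the element's top edge enters the
// viewport (0 if the whole box already fits on screen).
function scrollOffsetFor(boxTop, boxHeight, viewportHeight) {
  if (boxTop >= 0 && boxTop + boxHeight <= viewportHeight) return 0;
  return boxTop;
}

async function scrollThenCapture(page, selector, path) {
  await page.evaluate(sel => {
    document.querySelector(sel).scrollIntoView();
  }, selector);
  // Give the renderer a moment to paint; without a delay the element can
  // still be blank when the screenshot fires (the symptom described above).
  await new Promise(resolve => setTimeout(resolve, 100));
  const el = await page.$(selector);
  await el.screenshot({ path });
}
```

As noted above, even with a fixed delay this variant changes scroll state in the page under test, which is one of the side effects being weighed in this thread.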
@cactusa Thank you for looking at the perf times. That looks totally fine. And also -- thank you for looking at setting the VP height. I looked at the element for margin or other positioning offsets and didn't find anything other than box-shadow -- which I set to none and still got the same result.

This looks like it is not an issue with BackstopJS per se. I am assuming that if you attempted to take snapshots of your selectors with a vanilla Puppeteer script you would get the same issue? If this is true then the problem should really be addressed in Puppeteer. Have you searched the Puppeteer repo for similar issues? Are there maybe flags that could potentially solve the issue?

Your solution looks like it should work -- but as you can imagine -- we can't just publish the change knowing it will break existing tests. I think before we go further we need to understand what the true issue is. I appreciate you working on this issue -- and I want to give you the time you need to address the root cause.

At the moment, your patch is published to backstopjs@beta. For your purposes you should be able to point your build scripts there and get your CI pipeline stabilized. Does it work for you to use @beta for now while we continue to research the root cause? I am about to go on holiday. I'll follow up when I can. Thank you!
@garris I am not sure whether it's an issue with Puppeteer or we are not using the function in its intended way. Either way, I can try to post an issue about this in Puppeteer if I get any spare time; I have already put too much time into this. I can work with a beta version, sure, but the problem is that the docker image is pointing to the latest version, so even if I pull the beta version and build the docker image, it will still pull the latest backstopjs version instead of the beta version. If you can help with that it would be great. Otherwise I will have to somehow edit where the docker image is pulling from, commit it into my own fork, and work from that.
I see. The other option would be to put it behind a flag in the config, e.g. adding
If you add this flag I would merge this right away. I appreciate the time you have invested, so I hope we're moving to a workable intermediate resolution for your day job. Thanks!
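For illustration, such an opt-in flag might sit at the top level of backstop.json like this (the flag name `usePageClipForScreenshots` is invented here for the sketch, since the comment above does not settle on a name):

```json
{
  "id": "example_project",
  "engine": "puppeteer",
  "usePageClipForScreenshots": true,
  "viewports": [{ "label": "desktop", "width": 1280, "height": 800 }],
  "scenarios": [
    {
      "label": "Off-screen component",
      "url": "http://localhost:3000/",
      "selectors": [".my-component"]
    }
  ]
}
```

With the flag defaulting to false, existing tests would keep the current element.screenshot() behavior, and only users opting in would get the new capture path.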
@garris that sounds like a plan. I greatly appreciate your flexibility and willingness to help. I will get on it as soon as I can. Thanks again! |
The beta version contains the content of this PR garris/BackstopJS#808, which we need to improve the reliability of the test suite results. The PR merge had been reverted here garris/BackstopJS@ca5d2b2ac7f906f1d6f967d9af59480163ee3e94 as it was creating issues in some edge cases, and as such it's kept in the beta version. Nonetheless, those changes work well for us, which is why we are using them.
@pkra no, sorry. I have been crazy busy at work. I can't guarantee when I will be doing it, but I am definitely doing it.
@cactusa thanks for the response. No worries. If there's a way to help, I should have some spare time coming up in a week or so. |
I'm facing this problem too. I have an array capturing several elements (around 30); sometimes it works, sometimes it doesn't.
For anyone interested in this solution. There is a new PR here: #901 |
Details of the issue can be found in this issue #704
Ran lint, which produced two changes that only add empty spaces ... meh.
My change is tiny and has to do with the way we use Puppeteer to take screenshots of specific elements on a page. Instead of using the element.screenshot() function to directly take a screenshot of the selected element, I have used page.screenshot() and provided the boundingBox object of the element as the reference for the element's location on the page. For some reason this way produces the expected screenshots. I am surprised it doesn't just work the way it was implemented, because in Puppeteer's API documentation they claim that element.screenshot() essentially uses page.screenshot() underneath ... weird.
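The approach described above can be sketched as follows (a minimal illustration, not the actual diff in this PR; `captureSelector` and `boxToClip` are names invented here, and `page` is assumed to be a Puppeteer Page):

```javascript
// Minimal sketch of the clip-based capture described above: screenshot
// the whole page clipped to the element's bounding box, instead of
// calling element.screenshot() directly.

// Pure helper: convert an element's boundingBox into the integer clip
// rect that page.screenshot() expects, clamping negative coordinates.
function boxToClip(box) {
  return {
    x: Math.max(0, Math.floor(box.x)),
    y: Math.max(0, Math.floor(box.y)),
    width: Math.max(1, Math.ceil(box.width)),
    height: Math.max(1, Math.ceil(box.height))
  };
}

async function captureSelector(page, selector, path) {
  const el = await page.$(selector);
  const box = await el.boundingBox(); // null if the element has no layout
  if (!box) throw new Error(`No bounding box for ${selector}`);
  // el.screenshot() can come back blank for off-screen elements, so clip
  // a page-level screenshot to the element's box instead.
  await page.screenshot({ path, clip: boxToClip(box) });
}
```

Because the element is never scrolled to, this variant leaves page state untouched, which is the trade-off discussed in the conversation above.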
@garris I have run the smoke tests and I am getting some failures. The thing is that the reference images for the tests that fail look a bit wrong, so I'm not sure what to do. Should image changes for the smoke tests be committed as well?