Hello @xeolabs,
Just wanted to post some experiments I did recently: E2E tests for Xeokit.
So far I have written 4 tests for 4 existing examples; they live here: https://github.com/xeokit/xeopy/tree/master/xeopy/endtoend_xeokit_tests
One of them works like this: the script automatically searches for annotations, clicks them one after another, and makes sure each click displays the corresponding label.
But there are also 3 more. One of them (https://github.com/xeokit/xeopy/blob/master/xeopy/endtoend_xeokit_tests/test_annotations_clickFlyToPosition.py) clicks an annotation and then compares two images (an expected one and the actual one) to make sure the camera flew to exactly the same position after the click.
A few additional notes:

- Their purpose would be to catch cases where we break some existing behaviour after an update, and also cases where something stopped working without us knowing (like this one: https://xeokit.github.io/xeokit-sdk/examples/scenemodel/#benchmarking_via_spector (I will just delete that one)). The goal is generally to increase our confidence in updates and their quality.
- These tests run against the actual examples posted on the Xeokit website. I think this is how it should be, because this way we test the real environment, not a test environment that could behave differently.
- These tests are quite slow (they are E2E tests, which will always be slower than unit tests). This means they should probably run automatically without us having to think about it, e.g. after each release or once a day, and we would only look at the report afterwards.
- The only downside I see right now is that these tests will need to be kept up to date (same as with the .d.ts files), which will take some additional time. So it would be good to decide which examples we cover: all of them, or only some? If only some, maybe we should list the best candidates. Or maybe only the V3 ones?
- In some way these tests could also be used to measure other factors, e.g. loading time, to catch that kind of regression too (see the sketch after this list).
Looking forward to hearing your opinion on that, Lindsay :)