[Usage] Fix flaky UI Counters test #100979
Conversation
Pinging @elastic/kibana-core (Team:Core)
Pinging @elastic/kibana-telemetry (Team:KibanaTelemetry)
Could you run it against the flaky test runner? Such flakiness is often not reproducible locally as much as on CI.
@mshustov thanks! I was wondering if that is actually a thing 😅
@pgayvallet @mshustov Flaky test runner passed: https://kibana-ci.elastic.co/job/kibana+flaky-test-suite-runner/1599/
@elasticmachine merge upstream
@@ -32,7 +32,7 @@ export default function ({ getService }: FtrProviderContext) {
         .expect(200);

       // wait for SO to index data into ES
-      await new Promise((res) => setTimeout(res, 5 * 1000));
+      await new Promise((res) => setTimeout(res, 8 * 1000));
Can we retry requesting /api/saved_objects/_find?type=usage-counters
until it succeeds, instead of relying on a manual delay?
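As a rough illustration of that suggestion, a polling helper inside the FTR test could look something like the sketch below. This assumes `supertest` comes from `getService('supertest')` as in the rest of the suite; the helper name, timeout, and the `total > 0` check are illustrative, not taken from this PR.

```ts
// Illustrative only: poll the saved objects API until usage counters are
// searchable, instead of sleeping for a fixed number of seconds.
const waitForUsageCounters = async (timeoutMs = 30_000, intervalMs = 500) => {
  const start = Date.now();
  while (true) {
    const { body } = await supertest
      .get('/api/saved_objects/_find?type=usage-counters')
      .expect(200);
    if (body.total > 0) {
      return body;
    }
    if (Date.now() - start > timeoutMs) {
      throw new Error('Timed out waiting for usage-counters saved objects');
    }
    await new Promise((res) => setTimeout(res, intervalMs));
  }
};
```

The short sleep still exists between attempts, but the test returns as soon as the data is there instead of always paying the full fixed delay.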
That is doable, although the delay pattern is used all across Kibana tests (200+ places).
Agreed, but that doesn't mean it's a good thing:
- it leads to false positives due to changes in Kibana internals (what you are fixing right now)
- it introduces unnecessary delays in the execution flow; maybe the condition is already true after 2 seconds
- it's not clear how to pick a correct value, so devs tend to copy-paste existing ones
I completely agree. What I meant is that we might need to resolve it across all tests rather than just this one. I've created a helper util (with tests) to retry assertions. We can move it outside the tests/.../telemetry folder in a separate PR later on. Let me know what you think.
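For reference, a retry-assertion helper along those lines could be sketched roughly as below. This is a hypothetical illustration; the actual try_assertion_until.ts added in this PR may use a different signature and defaults.

```ts
// Hypothetical sketch: retry an async assertion until it stops throwing
// or the attempts are exhausted.
export async function tryAssertionUntil<T>(
  assertion: () => Promise<T>,
  { retries = 30, intervalMs = 1000 }: { retries?: number; intervalMs?: number } = {}
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < retries; attempt++) {
    try {
      // Resolves as soon as the assertion passes, e.g. the expected
      // usage counter saved objects are found.
      return await assertion();
    } catch (error) {
      lastError = error;
      await new Promise((res) => setTimeout(res, intervalMs));
    }
  }
  throw lastError;
}
```

A test could then wrap the _find request and its expectations in such a helper instead of relying on a fixed 8-second sleep.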
> We can move it outside the tests/.../telemetry folder in a separate PR later on.

OK then. Let's create an issue for Core-related tests.
Review thread on test/api_integration/apis/telemetry/utils/try_assertion_until.ts (outdated, resolved)
💚 Build Succeeded (Metrics [docs])
💚 Backport successful
This backport PR will be merged automatically after passing CI.
Increase the duration to wait for ES indices to refresh after SO increments from 5 to 8 seconds. 8 seconds is long enough to remove flakiness; the same fix is already applied in other similar telemetry test cases.

Ran this test 1000 times locally with no flakiness. Ran the test 1000 times on the Kibana flaky test runner (https://kibana-ci.elastic.co/job/kibana+flaky-test-suite-runner/1599/).

Closes #93159 and #98240