
fix(tracer, metrics): use polling instead of fixed wait in e2e tests #654

Merged
merged 2 commits into aws-powertools:main from fix/e2e-tests-wait-behavior on Mar 14, 2022

Conversation

@ghost commented Mar 11, 2022

Description of your changes

This PR changes the way the E2E tests for metrics and tracer perform their checks.
Instead of waiting for a fixed amount of time (and hoping that the expected data is available once the wait elapses), the tests now actively and periodically poll the corresponding services (CloudWatch for metrics, X-Ray for traces). Once the expected number of result items has arrived, the tests perform their validation as usual. If not enough (or too many) items arrive, the polling stops after a timeout, so the tests fail as intended.

The implementation relies on polling using unique IDs as keys. For metrics, the namespaces already serve as such unique IDs in the form of UUIDs. For traces, instead of introducing additional UUID annotations, it turned out to be easier to use a unique Lambda function name for each test run.

This PR adds a new dev dependency: the promise-retry package (and its corresponding type declaration package).
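For illustration, here is a minimal sketch of what such a poll-with-retry check could look like. It assumes aws-sdk v2 and the promise-retry package; the helper name, retry settings, and the strict count check are illustrative only, not the exact code added by this PR.

```ts
// Minimal sketch of the polling idea (not the exact helper added by this PR).
import promiseRetry from 'promise-retry';
import { CloudWatch } from 'aws-sdk';

const cloudwatch = new CloudWatch();

// Poll CloudWatch until the metrics published under the test run's unique
// (UUID) namespace have arrived, or give up after the configured retries.
const getMetrics = (namespace: string, expectedCount: number) =>
  promiseRetry(async (retry, attempt) => {
    const response = await cloudwatch.listMetrics({ Namespace: namespace }).promise();
    const metrics = response.Metrics ?? [];
    if (metrics.length !== expectedCount) {
      // Schedules the next attempt; after the last retry the error propagates
      // and the test fails, as it should.
      retry(new Error(`Expected ${expectedCount} metrics, got ${metrics.length} (attempt ${attempt})`));
    }

    return metrics;
  }, { retries: 10, minTimeout: 2000 });
```

The traces check follows the same pattern against X-Ray, using the unique Lambda function name to identify the traces belonging to the current test run.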

How to verify this change

Run the E2E tests for tracer and metrics multiple times:

cd ./packages/tracing/ && for i in {1..10}; do npm run test:e2e; done && cd ../..
cd ./packages/metrics/ && for i in {1..10}; do npm run test:e2e; done && cd ../..

Explore the test results.

NOTE: The tests might fail for a different (legitimate) reason, so a red test result doesn't necessarily indicate an issue with this PR. If a test fails, compare the actually stored trace or metric data with the expected values using the AWS CLI or the Management Console. If the values differ, the failure is not caused by the way the tests gather the data and must be investigated separately.
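For example, a rough SDK-based equivalent of that manual check (assuming aws-sdk v2; the function name and arguments here are made up for illustration) could look like this:

```ts
// Illustration only: dump what actually landed in CloudWatch and X-Ray for a
// failed run, so it can be compared with the values the test expected.
import { CloudWatch, XRay } from 'aws-sdk';

const inspectRun = async (namespace: string, startTime: Date, endTime: Date): Promise<void> => {
  // Metrics stored under the test run's unique namespace.
  const storedMetrics = await new CloudWatch().listMetrics({ Namespace: namespace }).promise();
  console.log(JSON.stringify(storedMetrics.Metrics, null, 2));

  // Trace summaries recorded during the test window; narrow further by the
  // unique Lambda function name if needed.
  const storedTraces = await new XRay()
    .getTraceSummaries({ StartTime: startTime, EndTime: endTime })
    .promise();
  console.log(JSON.stringify(storedTraces.TraceSummaries, null, 2));
};
```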

Related issues, RFCs

#644

PR status

Is this ready for review?: YES
Is it a breaking change?: NO

Checklist

  • My changes meet the tenets criteria
  • I have performed a self-review of my own code
  • I have commented my code where necessary, particularly in areas that should be flagged with a TODO, or hard-to-understand areas
  • I have made corresponding changes to the documentation
  • I have made corresponding changes to the examples
  • My changes generate no new warnings
  • The code coverage hasn't decreased
  • I have added tests that prove my change is effective and works
  • New and existing unit tests pass locally and in Github Actions
  • Any dependent changes have been merged and published in downstream module
  • The PR title follows the conventional commit semantics

Breaking change checklist

  • I have documented the migration process
  • I have added, implemented necessary warnings (if it can live side by side)

By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.

Dmitry Balabanov added 2 commits March 10, 2022 15:17
Instead of fixed-time waiting, the tests will poll for results.
It fixes the false-negative result when traces need more time to arrive.
Instead of fixed-time waiting, the tests will poll for metrics.
It fixes the false-negative result when data needs more time to arrive.
@dreamorosi assigned ghost Mar 14, 2022
@dreamorosi added this to the production-ready-release milestone Mar 14, 2022
@dreamorosi added the tracer, metrics, and internal labels Mar 14, 2022

@flochaz (Contributor) left a comment

Nice!

@dreamorosi (Contributor) left a comment
Hi @AWSDB thanks a lot for the PR, just ran the e2e tests a couple of times and the duration of the tracer one went down more than 50%!

@dreamorosi merged commit 6d4ab75 into aws-powertools:main Mar 14, 2022
@ghost deleted the fix/e2e-tests-wait-behavior branch March 14, 2022 14:55

@dreamorosi (Contributor) left a comment

Great one, thanks a lot.
