RUMM-1588 + RUMM-1610 Update E2E test monitors to new metric format #601
What and why?
⚙️ This PR updates our E2E test monitors to the new format. It adds:

- a `feature` tag (`logs` | `trace` | `rum` | `core`) to each monitor;
- a `monitor:behavior` tag to all monitors asserting data;
- a `monitor:performance` tag to all monitors asserting performance;
- a `"perf_measure"` span with the test method sent as its resource name.

As a bonus for troubleshooting, each monitor now renders its corresponding E2E test code in the monitor UI.
This helps with understanding each monitor, especially whether it expects "data" or "no data".
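
For illustration only, here is how those tags combine on the two kinds of monitors generated for a single test method. The names and values below are hypothetical, not the repository's actual monitor definitions:

```python
# Hypothetical sketch of the tagging scheme described above; monitor names
# and values are made up, not taken from the repository.
rum_behavior_monitor = {
    "name": "rum_manual_view_event: data is reported",
    "tags": ["feature:rum", "monitor:behavior"],       # asserts that data arrives
    "message": "<rendered E2E test code goes here>",   # the troubleshooting bonus
}

rum_performance_monitor = {
    "name": "rum_manual_view_event: performance is within limits",
    # asserts the duration of the "perf_measure" span, whose resource name
    # is the test method:
    "tags": ["feature:rum", "monitor:performance"],
}
```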
How?
- added a `${{feature}}` variable to the monitors template - it has no default value, so the linter will emit an error if it is not specified;
- `monitor:behavior` and `monitor:performance` tags are hardcoded in each template;
- the test method is sent as the `resource_name` for each span, while a common `operation_name` is used;
- code rendering is done through the template system: the code is parsed while reading monitors from the test files (a rough sketch of the linter check and the code rendering follows this list).
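
A minimal sketch of those two mechanisms - the required `${{feature}}` variable and the test-code rendering - is below. It is not the repository's actual generator: the template syntax, file names, and function names are assumptions made purely to illustrate the behaviour.

```python
import re
from pathlib import Path

# Hypothetical monitor template. Only the `${{feature}}` variable, the hardcoded
# `monitor:behavior` tag and the rendered-test-code idea come from this PR;
# the syntax and field names are made up for illustration.
TEMPLATE = """\
name:    "${{feature}}: ${{test_method}} reports data"
tags:    "feature:${{feature}}", "monitor:behavior"
message: "${{test_code}}"
"""

DEFAULTS: dict = {}  # `${{feature}}` deliberately has no default value


def lint(template: str, values: dict) -> list:
    """Report every `${{...}}` variable that has neither a value nor a default."""
    errors = []
    for variable in sorted(set(re.findall(r"\$\{\{(\w+)\}\}", template))):
        if variable not in values and variable not in DEFAULTS:
            errors.append("error: `${{" + variable + "}}` has no value and no default")
    return errors


def extract_test_method(source: str, method_name: str) -> str:
    """Roughly capture a test method's code: its declaration line down to the
    next non-blank line at the same or lower indentation level."""
    lines = source.splitlines()
    start = next(i for i, line in enumerate(lines) if method_name in line)
    indent = len(lines[start]) - len(lines[start].lstrip())
    end = start + 1
    while end < len(lines) and (
        not lines[end].strip()
        or len(lines[end]) - len(lines[end].lstrip()) > indent
    ):
        end += 1
    return "\n".join(lines[start:end + 1])


def render_monitor(template: str, test_file: Path, method_name: str, feature: str) -> str:
    """Substitute template variables, embedding the E2E test code so that the
    monitor UI can display it alongside the monitor."""
    values = {
        "feature": feature,  # one of: logs | trace | rum | core
        "test_method": method_name,
        "test_code": extract_test_method(test_file.read_text(), method_name),
    }
    errors = lint(template, values)
    if errors:
        raise ValueError("\n".join(errors))
    merged = {**DEFAULTS, **values}
    return re.sub(r"\$\{\{(\w+)\}\}", lambda m: merged[m.group(1)], template)


# Hypothetical usage:
# print(render_monitor(TEMPLATE, Path("RUMMonitorE2ETests.swift"),
#                      "test_rum_manual_view_event", feature="rum"))
```

Leaving `${{feature}}` without a default forces every monitor definition to state which part of the SDK it covers, which is exactly when the linter error is wanted.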
I regenerated all monitors by first deleting them, and then running `terraform apply`.

Review checklist