[APM] Script for creating functional test archive #76926
Conversation
async function run() {
  stampLogger();
  const archiveName = (argv.name as string | undefined) ?? 'apm_8.0.0';
Instead of testing against 8.0.0 (aka master), I think we should start testing against released versions.
Since 8.0.0 is not as constant as the version implies, an archive made at the very beginning of 8.0 development will be different from an archive made when 8.0.0 is actually released, and different again from the released minors. E.g. right now we are not testing 7.10.
We are also missing out on the opportunity to run the same tests against several different versions (to ensure backwards compatibility).
We might not want snapshots for every minor, but having a couple per major would be nice.
Instead of letting the user specify the archive name, I think it should be set automatically based on the stack version by calling GET ${esUrl}
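As a hedged sketch of that suggestion (the helper names are illustrative, not from the actual script): the Elasticsearch root endpoint returns cluster metadata including the version number, which could be used to derive the archive name automatically.

```typescript
// Illustrative sketch: derive the archive name from the cluster's
// reported version instead of a user-supplied --name flag.
// GET ${esUrl} (the Elasticsearch root endpoint) returns metadata
// shaped like { version: { number: '7.9.2' }, ... }.

interface EsRootResponse {
  version: { number: string };
}

// Pure helper so the naming rule is easy to test in isolation.
function archiveNameFromVersion(res: EsRootResponse): string {
  return `apm_${res.version.number}`;
}

// Assumed wiring (requires a fetch implementation, e.g. Node 18+):
// fetch the root endpoint, then name the archive after the version.
async function resolveArchiveName(esUrl: string): Promise<string> {
  const res = await fetch(esUrl);
  const body = (await res.json()) as EsRootResponse;
  return archiveNameFromVersion(body);
}
```

With this approach an archive generated against a 7.9.2 cluster would be named apm_7.9.2, making the stack version part of the fixture's identity.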
I think locking it down like this would be the real benefit of using this script: if we have a strict process for updating the archives that's encoded in code it's impossible to screw it up. Just run the script, commit the new archives and call it a day.
I would be okay with locking it down further as a start. @cauemarcondes do you think there's still a need for separate archives (if so, could you clarify what should be configurable)?
I'm not sure if we should worry too much about versions. I imagine this snapshot would be regularly updated, and in general code from earlier versions works with newer versions (across a major). WDYT about seeing how a simple approach works? We can always improve later.
I imagine this snapshot would be regularly updated and in general code from earlier versions works with newer versions (across a major).
Isn't that the problem, though? That we always develop features with the expectation that a certain field is always available, and forget to test older versions where it's not?
Either way:
WDYT about seeing how a simple approach works? We can always improve later.
I'm good with the simple solution 👍
@dgieselaar when should we run this script? I suppose we need a fixed time range when we generate the file, am I right?
@cauemarcondes no, it will use now-1h to now-30m by default.
But if I run this script, which by default uses now-1h to now-30m, and replace the es_archiver data, all tests would fail. Maybe I got it wrong.
@cauemarcondes: to some extent. It automatically generates a file with the time range that we can import in our tests. But the data will change. Another reason to avoid snapshot tests 🙂
But even without using snapshots, basically all tests would fail since the data is not the same. To have such an automated scenario we'd have to write our tests not to expect static values but instead calculate them based on the data received.
@cauemarcondes yes. I don't think we should be doing snapshot(-esque) testing for API calls. There are other ways to test the responses of our APIs.
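The idea of calculating expectations from the received data, as a hedged sketch (the document type and helper below are illustrative, not from the actual APM test suite): a test derives the expected value from the same documents the API reads, so the assertion holds no matter when the archive was generated.

```typescript
// Illustrative sketch: derive the expectation from the data instead
// of a hard-coded snapshot value.

interface TransactionDoc {
  duration: number; // hypothetical transaction duration field
}

// Average duration computed from the raw documents; an API test can
// compare the endpoint's reported average against this, so the
// assertion stays valid even when the archived data changes.
function expectedAvgDuration(docs: TransactionDoc[]): number {
  if (docs.length === 0) return 0;
  const total = docs.reduce((sum, d) => sum + d.duration, 0);
  return total / docs.length;
}
```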
Pinging @elastic/apm-ui (Team:apm)
* [APM] Script for creating functional test archive
* Lock down variables; add documentation
* Update tests
const rangeQueries = [
  {
    range: {
      '@timestamp': {
        gte,
        lt,
      },
    },
  },
  {
    range: {
      timestamp: {
        gte,
        lt,
      },
    },
  },
];

// some of the data is timeless/content
const query = {
  bool: {
    should: [
      ...rangeQueries,
      {
        bool: {
          must_not: [
            {
              exists: {
                field: '@timestamp',
              },
            },
            {
              exists: {
                field: 'timestamp',
              },
            },
          ],
        },
      },
    ],
    minimum_should_match: 1,
  },
};
This is probably more a comment for #76522, but I'm only noticing this now: is it not possible to specify a query per index? I see that esarchiver takes indices as a separate argument, e.g. node scripts/es_archiver save ${archiveName} ${indices}. Would it make sense to treat this as a shorthand if no --query is given, but ignore it if a query is given, and then require the query to contain the index?
So your query above would be:
const query = [
  {
    index: 'apm-*-transaction,apm-*-span,apm-*-error,apm-*-metric',
    body: { range: { '@timestamp': { gte, lt } } },
  },
  {
    index: '.ml-anomalies*,.ml-config',
    body: { range: { timestamp: { gte, lt } } },
  },
];
That would make this a whole lot simpler.
I think that's incompatible with how esarchiver works today: it scrolls through the hits for one search. It doesn't create a separate search for every index.
usage:
node x-pack/plugins/apm/scripts/create-functional-tests-archive --es-url=https://admin:changeme@localhost:9200 --kibana-url=https://localhost:5601
This will create an archive from the specified cluster. By default it uses the following parameters:
from: now-1h
to: now-30m
indices: apm-*-transaction,apm-*-span,apm-*-error,apm-*-metric,.ml-anomalies*,.ml-config
name: apm_8.0.0
The script will copy the generated archive to {basic,trial}/fixtures/es_archiver/${name}. It will also generate files that contain the time range of the generated archive, so that tests can import the time ranges.