
feat: recording verification rule #128

Merged · 9 commits · merged into main from the verification branch · Sep 6, 2024

Conversation

@Llandy3d (Member) commented Sep 4, 2024

Closes https://github.com/grafana/k6-cloud/issues/2530

This PR adds the "backend" logic for the Recording Verification Rule.

This rule adds checks that compare the status codes of the responses against the ones from the recording.
I've added this as a separate new rule because it's very specific and that seems to make sense, but it's open for debate whether it should be part of a bigger Verification rule that handles user-defined verification + this global recording check 🤔
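
For illustration, a minimal sketch of the kind of check this rule generates for a recorded response (the actual generated snippet is shown later in the review thread; the URL and the 200 status code are placeholders):

    import http from 'k6/http'
    import { check } from 'k6'

    export default function () {
      const resp = http.get('https://example.com/api/items') // placeholder URL
      // Sketch only: compare the live response status against the recorded one (200 here).
      check(resp, {
        'status matches recording': (r) => r.status === 200,
      })
    }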

@@ -21,7 +22,7 @@ export function createNewGeneratorFile(recordingPath = ''): GeneratorFileData {
     testData: {
       variables: [],
     },
-    rules: [],
+    rules: [createEmptyRule('recording-verification')],
@Llandy3d (Member Author):

this rule will be present on every newly created generator as a default that can be removed

@e-fisher (Collaborator) left a comment:

Sweet! 🤩 Works as advertised and adds checks with correct status codes 👌

A few comments:

  1. Not a blocker, but it would be nice to have a test case covering this in codegen.test.ts
  2. Should we terminate the test when a check fails? Currently there's no indication that checks failed, but I'm guessing that will be added as another task
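
A hypothetical shape for the codegen.test.ts case mentioned in point 1, assuming a Vitest-style runner; generateScript and the recording fixture are illustrative names, not the repo's actual codegen API:

    // Sketch only: names like generateScript and the fixture shape are illustrative,
    // not necessarily the repo's real codegen entry point.
    import { describe, expect, it } from 'vitest' // assuming a Vitest-style runner

    describe('recording verification rule', () => {
      it('adds a status check matching the recorded response', () => {
        const recording = [{ response: { statusCode: 200 } }] // minimal hypothetical fixture
        const generator = createNewGeneratorFile() // includes the rule by default after this PR
        const script = generateScript(generator, recording) // hypothetical codegen call
        expect(script).toContain('status matches recording')
        expect(script).toContain('r.status === 200')
      })
    })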

@Llandy3d (Member Author) commented Sep 5, 2024

> Sweet! 🤩 Works as advertised and adds checks with correct status codes 👌
>
> A few comments:
>
>   1. Not a blocker, but it would be nice to have a test case covering this in codegen.test.ts
>   2. Should we terminate the test when a check fails? Currently there's no indication that checks failed, but I'm guessing that will be added as another task

According to the k6 docs on checks, they are not supposed to make the test fail.

Regarding the missing indication: check information seems to be available only in the end-of-test summary. I was already thinking that we might need a way to retrieve the checks and show them in the Validator, but we don't get them for free :(

@e-fisher (Collaborator) commented Sep 5, 2024

> According to the k6 docs on checks, they are not supposed to make the test fail.

That makes sense in the context of performance testing, but when using the Validator I would expect it to stop when something goes wrong, allowing me to see which request breaks it. I see 2 possible solutions here:

  • Add a rule option: Stop execution when check fails [yes/no]
  • Don't stop execution, but display an icon (⚠️) near requests whose check failed, with a hoverable tooltip showing the reason (expected status 200, got 403)

WDYT? These suggestions aren't meant to be part of this PR though, just ideas for future improvements.

@Llandy3d (Member Author) commented Sep 5, 2024

> > According to the k6 docs on checks, they are not supposed to make the test fail.
>
> That makes sense in the context of performance testing, but when using the Validator I would expect it to stop when something goes wrong, allowing me to see which request breaks it. I see 2 possible solutions here:
>
>   • Add a rule option: Stop execution when check fails [yes/no]
>   • Don't stop execution, but display an icon (⚠️) near requests whose check failed, with a hoverable tooltip showing the reason (expected status 200, got 403)
>
> WDYT? These suggestions aren't meant to be part of this PR though, just ideas for future improvements.

In the immediate future I think we might get checks information only at the end of the test run 🤔 That would rule out option 2 short-term.

For option 1, I was looking into how to make them fail, and I think we can combine them with Thresholds: check the overall error rate for checks and expect all of them to pass, failing otherwise. Combined with a setting like you mentioned, that could be a nice way to do it 🤔

@e-fisher (Collaborator) commented Sep 5, 2024

> For option 1, I was looking into how to make them fail, and I think we can combine them with Thresholds: check the overall error rate for checks and expect all of them to pass, failing otherwise. Combined with a setting like you mentioned, that could be a nice way to do it 🤔

Yeah, that's what we do for e2e tests:

    thresholds: {
      checks: [
        {
          threshold: 'rate == 1.00',
        },
      ],
    },
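
If a "stop execution when a check fails" option is added later, k6 thresholds can also abort the run via abortOnFail; a sketch of what that could look like (not part of this PR, and the delay value is just an assumption):

    export const options = {
      thresholds: {
        checks: [
          {
            threshold: 'rate == 1.00',
            abortOnFail: true, // stop the test run once a check has failed
            delayAbortEval: '5s', // assumed grace period before k6 evaluates the abort
          },
        ],
      },
    }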


@Llandy3d requested a review from e-fisher on September 5, 2024 at 14:45
@Llandy3d (Member Author) commented Sep 5, 2024

Re-requesting review as I've added a way to retrieve checks information to show in the Validator.

There is one big issue: this can be brittle. If the user defines their own handleSummary, this will break the script. Retrieving this information via a custom handleSummary is a nice and quick way to get it, but we should consider having the ability to export logs of this kind without relying on handleSummary :(
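
For context, a minimal sketch of the approach, assuming the root group's checks array from the end-of-test summary is enough (nested groups keep their own checks); the exact snippet this PR generates may differ, and writing to stdout is just one way to hand the data back to the app:

    // Sketch only: pull check results out of the end-of-test summary.
    export function handleSummary(data) {
      // Checks attached directly to the root group; checks inside named groups
      // live under data.root_group.groups.
      const checks = data.root_group.checks.map((c) => ({
        name: c.name,
        passes: c.passes,
        fails: c.fails,
      }))
      return {
        stdout: JSON.stringify({ checks }), // the Validator could parse this from the run output
      }
    }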

@e-fisher (Collaborator) left a comment:

After looking into this some more: currently we generate one check per status code, which looks something like this:

       ✗ is status 200
        ↳  53% — ✓ 758 / ✗ 665
       ✗ is status 401
        ↳  26% — ✓ 14 / ✗ 38
       ✗ is status 204
        ↳  0% — ✓ 0 / ✗ 24
       ✗ is status 403
        ↳  0% — ✓ 0 / ✗ 23

That might not be very helpful. Perhaps we should name the check "response status code matches recording" so that all checks generated by the verification rule are grouped under a single check? Another option would be adding the request URL to the check name, but that might generate too many checks 🤔


const verificationSnippet = `
check(resp, {
  'Recording Verification Rule: status matches recording': (r) => r.status === ${response.statusCode},
})
`
@Llandy3d (Member Author):

The checks are now grouped under a single name.

This still respects group differences: if you have two groups, there will be two checks.


Comment on lines +107 to +118
const handleSummaryIndex = scriptLines.findIndex(
  (line) =>
    // NOTE: if the custom handleSummary is commented out we can still insert our snippet
    // this check should be improved
    line.includes('export function handleSummary(') && !line.includes('//')
)

// NOTE: checks work only if the user doesn't define a custom summary handler
// if no custom handleSummary is defined we add our version to retrieve checks
if (handleSummaryIndex === -1) {
  scriptLines.push(checksSnippet)
}
@Llandy3d (Member Author):

We only add the checks-retrieval logic via handleSummary when the user hasn't defined a custom summary handler, since we retrieve that information from the summary.

@Llandy3d requested a review from e-fisher on September 6, 2024 at 10:08
@e-fisher (Collaborator) left a comment:

LGTM! 🙌

@Llandy3d merged commit 35241b6 into main on Sep 6, 2024 (1 check passed)
@Llandy3d deleted the verification branch on September 6, 2024 at 10:35