Add k6 benchmarking to run on PRs #392
Conversation
Signed-off-by: Joe Elliott <number101010@gmail.com>
Good stuff! I carefully combed through all the changes. These are some nice metrics. Can we add a section to the README about what these metrics mean and which ones we should watch in every CI run?
s := cortex_e2e.NewConcreteService(
	"k6",
	k6Image,
	cortex_e2e.NewCommandWithoutEntrypoint("sh", "-c", "sleep 3600"),
)
Do we need the sleep here? This will only start up once tempo is up and running, i.e. after this:
require.NoError(t, s.StartAndWaitReady(tempo))
We just need a process that starts and doesn't exit. If you run
docker run loadimpact/k6:latest
then it immediately exits. So we start this k6 container with sleep so it just hangs around for a while, and then use exec to kick off our tests.
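The keep-alive-then-exec pattern described above can be sketched with plain docker commands (the container name `k6` and the script path `/scripts/smoke_test.js` are illustrative, not the exact ones used in CI):

```shell
# Override the image's k6 entrypoint with a long sleep so the container
# stays alive; by itself `docker run loadimpact/k6:latest` exits immediately.
docker run -d --name k6 --entrypoint sh loadimpact/k6:latest -c "sleep 3600"

# Later, exec into the still-running container to kick off a test script.
docker exec k6 k6 run /scripts/smoke_test.js

# Clean up when done.
docker rm -f k6
```

This mirrors what `NewCommandWithoutEntrypoint("sh", "-c", "sleep 3600")` does inside the e2e framework: the sleep is just a placeholder process, and the actual benchmark runs are triggered via exec once tempo is ready.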
This looks awesome :)
Added some comments, but nothing too important.
I also think that having some docs about the metrics is a good idea.
Not super concerned about this being perfectly defined. I really just want a regular record of the performance of our write/query paths for every PR. That way, if/when we do have performance issues, we can at least go back through our PRs and get some idea of what may have occurred. This is very much a "first pass" at this feature, and a lot of work could be put into improving these tests to better exercise the query path or compaction.
I like the new changes! LGTM :)
Nice, I like the separate workflow for benchmarks. :)
What this PR does:
The two tests will create output like:
Which will be good records of performance changes for each PR.
cc @dgzlopes