Make the k6 terminal output more Human Readable #1319
Comments
I like most of the ideas here in general, though because of the way metrics are implemented in k6, the end-of-test summary is currently generated very organically, so it might be a bit tricky to handle all of the corner cases if we add some rigid structure to it. For example, run a script that doesn't make any HTTP requests but adds a custom metric... That said, it's definitely not impossible to add structure; we just have to evaluate the effort and prioritize accordingly. The thing I dislike most about the current proposal is the indentation of the so-called "minor" metrics. This has 2 big issues:
That said, I like pretty much everything else you suggested 🎉. I've written my ideas about filtering/whitelisting metrics in #1321. I think hiding the "minor" metrics by default isn't only a cosmetic/UX issue, but also a very important performance one, so it should be handled in a more fundamental way. And over Slack you suggested a very good solution for the summary layout flexibility - to have the end-of-test summary as a template file. We can ship k6 with a default one that we think is good for the majority of users, but we can also give advanced users the option to specify their own alternative template.
I agree with this. I'm not convinced about the remaining minor metrics. The template solution is very flexible, but after some thinking, I'm a little concerned that it might not be so easy for users to make changes to it.
If we didn't have sub-metrics, I wouldn't mind, but since we do, I think this would only add confusion. After all, sub-metrics contain subsets of the values from the "parent" metric, so their stats would be somewhat similar. Whereas…
I was thinking that users could specify their custom template in a file and pass the path to that file as a CLI flag. Since the end-of-test summary is only applicable for local execution (i.e.
The metric skipping would probably be better handled by #1321
True, though I'd consider this a feature for advanced users. I assume we're going to spend some time properly documenting the default template, and maybe even offer a few alternatives ourselves. And I assume that we'd have to answer the occasional support question...
Yes, that might work, though the flag should probably be…
Just a random thought, but if we have support for text templates of the end-of-test CLI summary, it's a very small amount of work to also support HTML templates... This would allow us to very easily produce well-formatted executive reports at the end of a test run! 🎉 I thought we had an open issue about something like this, but I couldn't find it, so I'm posting this here.
How about implementing the TAP standard?
@mostafa, maybe I don't understand what TAP is about from the examples on that page, but I don't think it will be in any way useful in solving the problems presented in this issue? 😕
It's an output formatting standard for test suites. It can be used for checks to show better output, for example.
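For illustration, TAP is a line-oriented plain-text format; a couple of k6 check results rendered as TAP might look roughly like this (hypothetical output, not something k6 emits today):

```
TAP version 13
1..2
ok 1 - status is 200
not ok 2 - response body contains crocodiles
```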
@sniku, now that we have the text summary generated via jslib, can you please instead open several issues/PRs in the jslib repo, one for each specific suggestion and issue you've found in the current text summary? For example, having a new option for…
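For reference, a minimal sketch of that kind of customization using k6's `handleSummary()` hook and the jslib `textSummary` helper (the jslib version and the options shown are assumptions):

```javascript
import { textSummary } from 'https://jslib.k6.io/k6-summary/0.0.1/index.js';

export default function () {
  // ... regular test logic ...
}

// handleSummary() replaces the default end-of-test summary with whatever is returned here.
export function handleSummary(data) {
  return {
    // Render a tweaked version of the standard text summary to stdout...
    stdout: textSummary(data, { indent: ' ', enableColors: true }),
    // ...and also dump the raw summary data for custom post-processing.
    'summary.json': JSON.stringify(data, null, 2),
  };
}
```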
This is a proposal to redefine the k6 terminal output format.
The goal of this proposal is to improve the user experience for terminal users by bringing more focus to the important metrics and hiding, by default, the metrics that are not important to a typical k6 user.
Feature Description
The most important standard metric in k6 is called `http_req_duration`. This metric is currently printed in the middle of "minor metrics" such as `http_req_receiving` and `http_req_connecting`, making it less prominent than it should be. It seems that metrics are currently printed in alphabetical order, not in order of importance.
Suggested Solution
I'm suggesting that the above output should look similar to this:
Changes:

- `http_reqs` and `http_req_duration` are next to each other
- `iterations` and `iteration_duration` are next to each other
- `data_sent` and `data_received` are next to each other, and in the correct order

Better yet, I believe that the "minor metrics" should not be displayed by default, since most users don't need them. I suggest we hide the minor metrics unless the user specifies an option to view them (a `--print-minor-metrics` configuration option or similar).

The default output could be simplified to:
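For illustration, with completely made-up values, such a simplified summary might look something like this:

```
http_reqs..............: 1200    40.00/s
http_req_duration......: avg=112ms min=80ms med=105ms max=310ms p(90)=180ms p(95)=220ms
iterations.............: 600     20.00/s
iteration_duration.....: avg=1.1s  min=1s    med=1.1s  max=1.4s
data_sent..............: 1.2 MB  40 kB/s
data_received..........: 8.1 MB  270 kB/s
checks.................: 100.00% ✓ 1200 ✗ 0
```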
I believe the above format is more useful for the majority of k6 users than the current verbose, alphabetical format.
Complex scripts
More complex scripts that define sub-metrics such as `http_req_duration{name:PublicCrocs}`, as well as groups, should also be considered.

Example
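A minimal k6 script of the kind described (illustrative only; the `test-api.k6.io` endpoint and the threshold value are assumptions), defining a group and a tagged request that produces the `http_req_duration{name:PublicCrocs}` sub-metric:

```javascript
import http from 'k6/http';
import { group, check, sleep } from 'k6';

export const options = {
  vus: 10,
  duration: '30s',
  thresholds: {
    // Requests tagged with name=PublicCrocs are aggregated into this sub-metric,
    // which then shows up in the summary as http_req_duration{name:PublicCrocs}.
    'http_req_duration{name:PublicCrocs}': ['p(95)<500'],
  },
};

export default function () {
  group('public API', function () {
    const res = http.get('https://test-api.k6.io/public/crocodiles/', {
      tags: { name: 'PublicCrocs' },
    });
    check(res, { 'status is 200': (r) => r.status === 200 });
  });
  sleep(1);
}
```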
Suggested output