
Feature/prometheusv2 #478

Closed · wants to merge 2 commits

Conversation

@V3ckt0r commented Jan 19, 2018

This is an alternative PR to #415 using the K6 engine directly to transpose metrics to Prometheus.
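
The rough shape of that approach, sketched below with placeholder metric names and values (this is not the PR's actual code), is to register Prometheus metrics derived from the engine's results and serve them from a standard /metrics endpoint via the official Go client:

```go
package main

import (
	"net/http"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

func main() {
	// Hypothetical gauge standing in for a k6 metric such as an
	// http_req_duration percentile; in the PR the value would come
	// from the k6 engine rather than being hard-coded.
	p95 := prometheus.NewGauge(prometheus.GaugeOpts{
		Name: "k6_http_req_duration_p95_seconds",
		Help: "95th percentile of HTTP request duration (placeholder).",
	})
	prometheus.MustRegister(p95)
	p95.Set(0.250)

	// Expose everything registered above in the Prometheus text format.
	http.Handle("/metrics", promhttp.Handler())
	http.ListenAndServe(":9090", nil)
}
```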

@V3ckt0r mentioned this pull request Jan 19, 2018
@codecov-io commented Jan 19, 2018

Codecov Report

Merging #478 into master will decrease coverage by 9.9%.
The diff coverage is 100%.

Impacted file tree graph

@@            Coverage Diff             @@
##           master     #478      +/-   ##
==========================================
- Coverage   72.31%   62.41%   -9.91%     
==========================================
  Files         132       93      -39     
  Lines        9703     6667    -3036     
==========================================
- Hits         7017     4161    -2856     
- Misses       2272     2273       +1     
+ Partials      414      233     -181
Impacted Files Coverage Δ
api/server.go 73.33% <100%> (+7.81%) ⬆️
stats/sink.go 5.63% <0%> (-94.37%) ⬇️
lib/models.go 20.98% <0%> (-73.54%) ⬇️
stats/cloud/collector.go 0% <0%> (-70.39%) ⬇️
lib/runner.go 0% <0%> (-67.65%) ⬇️
lib/netext/dialer.go 34.48% <0%> (-59.89%) ⬇️
stats/cloud/config.go 0% <0%> (-56.87%) ⬇️
stats/cloud/errors.go 0% <0%> (-54.84%) ⬇️
stats/influxdb/util.go 34.78% <0%> (-43.48%) ⬇️
cmd/config.go 33.33% <0%> (-41.5%) ⬇️
... and 100 more

Continue to review full report at Codecov.

Legend
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 3fc300c...43c6ea4.

@jonathonlacher (Contributor)

Could you add a gauge for p99 as well?

@V3ckt0r (Author) commented Jan 22, 2018

@jonathon-L, yeah, sure, I'll add that in.

Before doing further work, I'll wait for some feedback.

@V3ckt0r (Author) commented Jan 29, 2018

hey @liclac @robingustafsson,

Any thoughts on this and #415?

Thanks,
B

@jonathonlacher (Contributor)

I'm not sure if this is in-scope for this PR, but ultimately exposing these metrics in histogram format would provide the most value. Gauges are fine for graphing an individual test. However, comparing tests over time requires storing the metrics for each request, or at least storing histogram data of each request.

@robingustafsson (Member)

@V3ckt0r Great, thanks for contributing this! Apologies for taking so long to review. I think this PR looks preferable to #415 for the reasons discussed in that PR (avoiding an extra HTTP call to /v1/metrics and exposing metrics more directly in a Prometheus-native way).

There are some issues we need to solve, though, in terms of the metrics data. First, the statistics that k6 outputs through the Sink interface are global: avg, med, p(X), etc. are computed across the entire set of samples for a metric; they are not time-bucketed, per-stage, or periodized since the last call, or anything like that (the idea being that this would be handled by the result storage system: InfluxDB, Prometheus, Load Impact Insights, etc.).

The second issue relates to Trend metrics: before calling sink.Format() on a Trend metric we need to make sure that sink.Calc() has been called. If it hasn't, the float64 samples in the TrendSink.Values slice will still be in chronological (or worse) order, which means that the med and all p(X) values will be wrong.
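
As a toy illustration of that ordering concern (deliberately simplified, not k6's actual TrendSink code), percentiles taken over an unsorted sample slice come out wrong until something like Calc() sorts it first:

```go
package main

import (
	"fmt"
	"sort"
)

// trend is a stand-in for a Trend-style sink holding raw samples.
type trend struct{ values []float64 }

// p indexes into the slice as if it were already sorted.
func (t *trend) p(pct float64) float64 {
	i := int(pct * float64(len(t.values)-1))
	return t.values[i]
}

func main() {
	t := &trend{values: []float64{120, 80, 300, 95, 110}} // chronological order
	fmt.Println("median before sorting:", t.p(0.5))       // prints 300: wrong

	sort.Float64s(t.values) // the kind of work a Calc() step does
	fmt.Println("median after sorting:", t.p(0.5)) // prints 110: correct
}
```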

Given this, I think the most appropriate approach would be, as @jonathon-L said above, to expose Trend metrics in histogram format. I've only had a quick look at the Prometheus Go client docs and code, so I have no real sense of the amount of work needed to make that happen; I'd like to hear ideas for how to solve the issues mentioned above. Thoughts?
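
For the histogram direction, a rough sketch with the Prometheus Go client (placeholder metric name and buckets, not code from this PR) would feed each Trend sample into a histogram via Observe(), so medians and percentiles are computed on the Prometheus side instead of from k6's global sink statistics:

```go
package prometheusout

import "github.com/prometheus/client_golang/prometheus"

// httpReqDuration collects raw request durations; Prometheus derives
// quantiles from the bucket counts at query time.
var httpReqDuration = prometheus.NewHistogram(prometheus.HistogramOpts{
	Name:    "k6_http_req_duration_seconds",
	Help:    "HTTP request duration as observed by k6.",
	Buckets: prometheus.ExponentialBuckets(0.01, 2, 10), // 10ms up to ~5s
})

func init() {
	prometheus.MustRegister(httpReqDuration)
}

// observeTrendSample would be called once per sample as it arrives from the
// engine, instead of reading the pre-aggregated avg/med/p(X) values.
func observeTrendSample(durationSeconds float64) {
	httpReqDuration.Observe(durationSeconds)
}
```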

@V3ckt0r (Author) commented May 8, 2018

Whoops, sorry, I must admit I dropped the ball on this @robingustafsson @jonathon-L 😵

I see what you mean about the way the Sink interface handles the percentiles/avg/med etc. I didn't appreciate this before, so I thought taking them as-is was correct. I agree that letting Prometheus handle this is better, and that using histograms to do it is the way forward, as @jonathon-L said.

Correct me if I'm wrong, but I think the second point you raised about sink.Calc() wouldn't be an issue given the above, since the percentile/med/avg etc. will be calculated in Prometheus.

In terms of the use case for this functionality, how do you envisage it working? Are you thinking that the metrics will be tagged with the particular endpoint being tested? For instance, something like http_reqs{endpoint="https://google.com"} for a k6 test of https://google.com.
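
A hedged sketch of what that labelling could look like with the Go client (the metric and label names below are illustrative, not something this PR defines): a counter vector keyed by an "endpoint" label produces series of exactly that shape when scraped.

```go
package main

import "github.com/prometheus/client_golang/prometheus"

// httpReqs counts requests per tested endpoint, yielding series such as
// http_reqs{endpoint="https://google.com"} in the /metrics output.
var httpReqs = prometheus.NewCounterVec(
	prometheus.CounterOpts{
		Name: "http_reqs",
		Help: "Total HTTP requests made by the test, per endpoint.",
	},
	[]string{"endpoint"},
)

func main() {
	prometheus.MustRegister(httpReqs)

	// Record one request against the endpoint under test.
	httpReqs.WithLabelValues("https://google.com").Inc()
}
```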

@mstoykov (Contributor)

Thanks a lot for this PR (and the one before it ;)).
We would like to support Prometheus, and I think we can push this along. I am going to try to look into the specifics of the implementation by the end of the week and come up with an updated list of issues/things to do. @V3ckt0r, do you still feel like you can continue pushing it along?

@V3ckt0r (Author) commented Jan 21, 2019

Yeah, I'll brush off the cobwebs and try to pick this back up 😅

…et the metrics, instead of make use of the internal api as 415 does.
@CLAassistant commented Apr 21, 2019

CLA assistant check
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution.
You have signed the CLA already but the status is still pending? Let us recheck it.

)

var (
metrics = make([]v1.Metric, 0)


metrics is a global variable (from gochecknoglobals)

@@ -0,0 +1,229 @@
package prometheus


vendor/github.com/prometheus/client_model/go/metrics.pb.go:96:37: InternalMessageInfo not declared by package proto (from typecheck)

metrics = make([]v1.Metric, 0)
)

func HandlePrometheusMetrics() http.Handler {


exported function HandlePrometheusMetrics should have comment or be unexported (from golint)

})
}

//prometheus exporter


comment on exported type Exporter should be of the form "Exporter ..." (with optional leading article) (from golint)


"github.com/cloudflare/cfssl/log"
"github.com/loadimpact/k6/api/common"
"github.com/loadimpact/k6/api/v1"


File is not goimports-ed (from goimports)

@V3ckt0r (Author) commented Apr 21, 2019

rebased @mstoykov

@SuperQ commented Aug 27, 2019

Any interest in moving this forward again?

@mstoykov (Contributor)

@SuperQ at least for the moment we are not inclined to add more outputs, as #1064 could basically solve all our problems without us needing to support and develop all the outputs on our own. This is not a definitive "no", but it is a "no" for now, and probably until we have time to look into #1064 and properly evaluate it.

At this point you can use telegraf or some other tool (for example) to translate from some of our currently supported outputs (most likely influxdb) to prometheus.

@SuperQ commented Aug 27, 2019

I would suggest moving to Prometheus/OpenMetrics-compatible metrics as a base for your instrumentation. This is the direction InfluxDB is going, as they are also adopting OpenMetrics, and Telegraf already knows how to ingest Prometheus/OpenMetrics. This means that neither system needs any kind of adapter.
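
For reference, the text exposition format being discussed looks roughly like this (the metric name, label, and value below are made up for illustration):

```
# HELP http_reqs Total HTTP requests made by the test
# TYPE http_reqs counter
http_reqs{endpoint="https://google.com"} 1027
```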

@na-- (Member) commented Aug 27, 2019

@SuperQ, thanks for mentioning it. There is a separate issue where we can discuss switching to/adding support for OpenMetrics: #858. I admit that I still haven't familiarized myself with the concepts there, but I'll mention it in the telegraf issue (#1064) as something we should evaluate before sinking a lot of time into it. Still, "OpenMetrics is coming soon" makes me a bit hesitant to bet on it...

@SuperQ commented Aug 27, 2019

While "OpenMetrics" is still technically a draft, it's based on the widely adopted Prometheus format that has been stable since 2014. For the most part, it's 99% the same as the current Prometheus data model, with a few small extensions and some minor cleanup of the specification.

@mstoykov (Contributor) commented Mar 31, 2022

Thanks everybody for working on this, especially @V3ckt0r 🙇

k6 has had support for output extensions for some time now. As of today it also has a template repo for output extensions. This, plus all the other output extensions, should be enough for people to extract this code into an extension and use it.

Given that, I am closing this PR.

@mstoykov closed this Mar 31, 2022