
feat: introduce metrics for failed evaluations #584

Merged
merged 3 commits (Apr 13, 2023)

Conversation

odubajDT
Contributor

@odubajDT odubajDT commented Apr 3, 2023

This PR

  • introduces a new metric that buckets evaluations by reason
  • includes minor refactoring
  • adds unit tests

Related Issues

Fixes #559

How to test

Run flagd via

make run

Run

curl -X POST "localhost:8013/schema.v1.Service/ResolveString" -d '{"flagKey":"fibAlgo","context":{}}' -H "Content-Type: application/json"
curl -X POST "localhost:8013/schema.v1.Service/ResolveString" -d '{"flagKey":"myStringFlag","context":{}}' -H "Content-Type: application/json"
curl -X POST "localhost:8013/schema.v1.Service/ResolveString" -d '{"flagKey":"myStringFlagNonExisting","context":{}}' -H "Content-Type: application/json"
curl -X POST "localhost:8013/schema.v1.Service/ResolveString" -d '{"flagKey":"myStringFlagNonExisting2","context":{}}' -H "Content-Type: application/json"
curl -X POST "localhost:8013/schema.v1.Service/ResolveInt" -d '{"flagKey":"myStringFlag","context":{}}' -H "Content-Type: application/json"

Open http://localhost:8014/metrics and see something like the following

# HELP impressions_total The number of evaluation for a given flag
# TYPE impressions_total counter
impressions_total{feature_flag_key="fibAlgo",feature_flag_provider_name="flagd",feature_flag_reason="DEFAULT",feature_flag_variant="recursive",otel_scope_name="openfeature/flagd",otel_scope_version=""} 1
impressions_total{feature_flag_key="myStringFlag",feature_flag_provider_name="flagd",feature_flag_reason="STATIC",feature_flag_variant="key1",otel_scope_name="openfeature/flagd",otel_scope_version=""} 1

and

# HELP reasons_total The number of evaluations for a given reason
# TYPE reasons_total counter
reasons_total{feature_flag_provider_name="flagd",feature_flag_reason="DEFAULT",otel_scope_name="openfeature/flagd",otel_scope_version=""} 1
reasons_total{feature_flag_provider_name="flagd",feature_flag_reason="STATIC",otel_scope_name="openfeature/flagd",otel_scope_version=""} 1
reasons_total{exception_type="FLAG_NOT_FOUND",feature_flag_provider_name="flagd",feature_flag_reason="ERROR",otel_scope_name="openfeature/flagd",otel_scope_version=""} 2
reasons_total{exception_type="TYPE_MISMATCH",feature_flag_provider_name="flagd",feature_flag_reason="ERROR",otel_scope_name="openfeature/flagd",otel_scope_version=""} 1
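The bucketing behind `reasons_total` can be illustrated with a minimal, self-contained Go sketch (not the PR's actual code, which uses the OpenTelemetry metrics API): each evaluation increments a counter keyed by reason, and ERROR evaluations are additionally split by exception type, matching the label sets shown above. The `bucket` and `render` helpers are hypothetical names for illustration.

```go
package main

import (
	"fmt"
	"sort"
)

// key identifies one bucket of the hypothetical reasons counter:
// evaluations are grouped by reason, and ERROR evaluations are
// further split by the exception (error) type.
type key struct {
	reason    string
	errorType string
}

// bucket increments the counter for the given reason / error type pair.
func bucket(counts map[key]int, reason, errorType string) {
	counts[key{reason, errorType}]++
}

// render emits the counters in a Prometheus-like exposition format,
// sorted for deterministic output.
func render(counts map[key]int) []string {
	var lines []string
	for k, n := range counts {
		if k.errorType != "" {
			lines = append(lines, fmt.Sprintf(
				`reasons_total{exception_type=%q,feature_flag_reason=%q} %d`,
				k.errorType, k.reason, n))
		} else {
			lines = append(lines, fmt.Sprintf(
				`reasons_total{feature_flag_reason=%q} %d`, k.reason, n))
		}
	}
	sort.Strings(lines)
	return lines
}

func main() {
	counts := map[key]int{}
	// Simulate the five evaluations triggered by the curl commands above.
	bucket(counts, "DEFAULT", "")
	bucket(counts, "STATIC", "")
	bucket(counts, "ERROR", "FLAG_NOT_FOUND")
	bucket(counts, "ERROR", "FLAG_NOT_FOUND")
	bucket(counts, "ERROR", "TYPE_MISMATCH")
	for _, l := range render(counts) {
		fmt.Println(l)
	}
}
```

Running this prints four series with counts 1, 1, 2, and 1, mirroring the `feature_flag_reason` and `exception_type` labels in the scrape output above (the real metric also carries provider-name and OTel scope labels).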

@codecov

codecov bot commented Apr 3, 2023

Codecov Report

Merging #584 (17b1eab) into main (c25d0ef) will increase coverage by 0.27%.
The diff coverage is 88.09%.

@@            Coverage Diff             @@
##             main     #584      +/-   ##
==========================================
+ Coverage   70.78%   71.05%   +0.27%     
==========================================
  Files          22       22              
  Lines        2249     2277      +28     
==========================================
+ Hits         1592     1618      +26     
- Misses        592      594       +2     
  Partials       65       65              
Impacted Files Coverage Δ
core/pkg/eval/json_evaluator.go 87.72% <33.33%> (ø)
core/pkg/service/flag-evaluation/flag_evaluator.go 67.82% <70.00%> (-0.52%) ⬇️
core/pkg/eval/ievaluator.go 100.00% <100.00%> (ø)
core/pkg/telemetry/metrics.go 100.00% <100.00%> (ø)


@odubajDT odubajDT changed the title from "feat: adapt metrics to possibly agregate evaluations by reason" to "feat: fix" Apr 4, 2023
@odubajDT odubajDT changed the title from "feat: fix" to "feat: introduce impression metrics for failed evaluations" Apr 4, 2023
@odubajDT odubajDT marked this pull request as ready for review April 4, 2023 14:09
@odubajDT odubajDT requested a review from a team as a code owner April 4, 2023 14:09
Member

@thisthat thisthat left a comment


In the ticket, the agreement was to create a new metric instead of reusing impressions.

@odubajDT odubajDT marked this pull request as draft April 4, 2023 14:15
@odubajDT odubajDT marked this pull request as ready for review April 5, 2023 11:45
@beeme1mr beeme1mr self-requested a review April 5, 2023 16:16
Member

@beeme1mr beeme1mr left a comment


This works great from a functionality perspective. I'll let others comment on the code itself.

@odubajDT odubajDT force-pushed the feat/559/metrics branch 3 times, most recently from 6ea04cb to b3359e1, April 9, 2023 16:46
@odubajDT odubajDT changed the title from "feat: introduce impression metrics for failed evaluations" to "feat: introduce metrics for failed evaluations" Apr 10, 2023
@odubajDT odubajDT force-pushed the feat/559/metrics branch 2 times, most recently from 4885ce8 to 5bdb5c5, April 10, 2023 18:14
Signed-off-by: odubajDT <ondrej.dubaj@dynatrace.com>

Successfully merging this pull request may close these issues.

Metric for evaluations by reason (including errors)
7 participants