
docs: Fix must-have Ruler alerts definition (#6058)
* Fix must-have Ruler alerts definition

The ThanosRuler missing-rule-intervals alert definition used the wrong comparator sign, which confused users trying to create the rule.

Signed-off-by: Maxim Muzafarov <m.muzafarov@gmail.com>

* Update docs/components/rule.md

Co-authored-by: Saswata Mukherjee <saswataminsta@yahoo.com>
Signed-off-by: Maxim Muzafarov <m.muzafarov@gmail.com>

---------

Signed-off-by: Maxim Muzafarov <m.muzafarov@gmail.com>
Co-authored-by: Saswata Mukherjee <saswataminsta@yahoo.com>
m-messiah and saswatamcode authored Feb 4, 2023
1 parent 8d80a44 commit 48e82c5
Showing 1 changed file with 1 addition and 1 deletion.
docs/components/rule.md: 1 addition & 1 deletion
@@ -158,7 +158,7 @@ The most important metrics to alert on are:

* `prometheus_rule_evaluation_failures_total`. If greater than 0, it means that the rule failed to be evaluated, which results in either a gap in the rule's data or a potentially missed alert. This metric might indicate problems on the query API endpoint you use. Alert heavily on this if it happens for longer than your alert thresholds. The `strategy` label tells you whether the failures come from rules that tolerate [partial response](#partial-response) or not. See the example alerting rules after this list.

- * `prometheus_rule_group_last_duration_seconds < prometheus_rule_group_interval_seconds` If the difference is large, it means that rule evaluation took more time than the scheduled interval. It can indicate that your query backend (e.g. Querier) takes too much time to evaluate the query, i.e. that it is not fast enough to fill the rule. This might indicate other problems like slow StoreAPIs or a too complex query expression in a rule.
+ * `prometheus_rule_group_last_duration_seconds > prometheus_rule_group_interval_seconds` If the difference is positive, it means that rule evaluation took more time than the scheduled interval, and data for some intervals could be missing. It can indicate that your query backend (e.g. Querier) takes too much time to evaluate the query, i.e. that it is not fast enough to fill the rule. This might indicate other problems like slow StoreAPIs or a too complex query expression in a rule.

* `thanos_rule_evaluation_with_warnings_total`. If you use rules and alerts with the [partial response strategy](#partial-response) set to "warn", this metric tells you how many evaluations ended up with some kind of warning. To see the actual warnings, check the WARN log level. This might suggest that those evaluations returned a partial response and might not be accurate.

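For reference, the two conditions described above translate directly into Prometheus-style alerting rules that Ruler itself can evaluate. The following is a minimal sketch; the group name, alert names, `for` durations, severity labels, and annotations are illustrative assumptions, not the official Thanos mixin definitions.

```yaml
groups:
  - name: thanos-rule-health                 # hypothetical group name
    rules:
      - alert: ThanosRuleEvaluationFailures  # hypothetical alert name
        # Any non-zero rate of evaluation failures means gaps in rule data
        # or potentially missed alerts.
        expr: rate(prometheus_rule_evaluation_failures_total[5m]) > 0
        for: 10m                             # assumed duration; tune to your thresholds
        labels:
          severity: warning
        annotations:
          description: "Rule evaluations are failing; check the query API endpoint Ruler uses."

      - alert: ThanosRuleGroupEvaluationSlow # hypothetical alert name
        # A positive difference means evaluation took longer than the scheduled
        # interval, so data for some intervals could be missing.
        expr: prometheus_rule_group_last_duration_seconds > prometheus_rule_group_interval_seconds
        for: 15m                             # assumed duration
        labels:
          severity: warning
        annotations:
          description: "Rule group evaluation takes longer than its interval; the query backend may be too slow."
```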
