From 03b851124e0b493240e34bc4621c7ebddeb2d7e0 Mon Sep 17 00:00:00 2001
From: Maxim Muzafarov
Date: Thu, 19 Jan 2023 13:59:17 +0000
Subject: [PATCH 1/2] Fix must have Ruler alerts definition

ThanosRuler missing rule intervals metric used the wrong comparator sign, confusing users trying to create the rule.

Signed-off-by: Maxim Muzafarov
---
 docs/components/rule.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/components/rule.md b/docs/components/rule.md
index bef4f56b11..e9ad0619dc 100644
--- a/docs/components/rule.md
+++ b/docs/components/rule.md
@@ -158,7 +158,7 @@ The most important metrics to alert on are:
 
 * `prometheus_rule_evaluation_failures_total`. If greater than 0, it means that that rule failed to be evaluated, which results in either gap in rule or potentially ignored alert. This metric might indicate problems on the queryAPI endpoint you use. Alert heavily on this if this happens for longer than your alert thresholds. `strategy` label will tell you if failures comes from rules that tolerate [partial response](#partial-response) or not.
 
-* `prometheus_rule_group_last_duration_seconds < prometheus_rule_group_interval_seconds` If the difference is large, it means that rule evaluation took more time than the scheduled interval. It can indicate that your query backend (e.g Querier) takes too much time to evaluate the query, i.e. that it is not fast enough to fill the rule. This might indicate other problems like slow StoreAPis or too complex query expression in rule.
+* `prometheus_rule_group_last_duration_seconds > prometheus_rule_group_interval_seconds` If the difference is positive, it means that rule evaluation took more time than the scheduled interval and some intervals data could be missing. It can indicate that your query backend (e.g Querier) takes too much time to evaluate the query, i.e. that it is not fast enough to fill the rule. This might indicate other problems like slow StoreAPis or too complex query expression in rule.
 
 * `thanos_rule_evaluation_with_warnings_total`. If you choose to use Rules and Alerts with [partial response strategy's](#partial-response) value as "warn", this metric will tell you how many evaluation ended up with some kind of warning. To see the actual warnings see WARN log level. This might suggest that those evaluations return partial response and might not be accurate.
 

From 8fc6e7728e8a49b2dbd563ec474b21d9c193a268 Mon Sep 17 00:00:00 2001
From: Maxim Muzafarov
Date: Fri, 3 Feb 2023 08:43:25 +0000
Subject: [PATCH 2/2] Update docs/components/rule.md

Co-authored-by: Saswata Mukherjee
Signed-off-by: Maxim Muzafarov
---
 docs/components/rule.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/components/rule.md b/docs/components/rule.md
index e9ad0619dc..cd7558d902 100644
--- a/docs/components/rule.md
+++ b/docs/components/rule.md
@@ -158,7 +158,7 @@ The most important metrics to alert on are:
 
 * `prometheus_rule_evaluation_failures_total`. If greater than 0, it means that that rule failed to be evaluated, which results in either gap in rule or potentially ignored alert. This metric might indicate problems on the queryAPI endpoint you use. Alert heavily on this if this happens for longer than your alert thresholds. `strategy` label will tell you if failures comes from rules that tolerate [partial response](#partial-response) or not.
 
-* `prometheus_rule_group_last_duration_seconds > prometheus_rule_group_interval_seconds` If the difference is positive, it means that rule evaluation took more time than the scheduled interval and some intervals data could be missing. It can indicate that your query backend (e.g Querier) takes too much time to evaluate the query, i.e. that it is not fast enough to fill the rule. This might indicate other problems like slow StoreAPis or too complex query expression in rule.
+* `prometheus_rule_group_last_duration_seconds > prometheus_rule_group_interval_seconds` If the difference is positive, it means that rule evaluation took more time than the scheduled interval, and data for some intervals could be missing. It can indicate that your query backend (e.g Querier) takes too much time to evaluate the query, i.e. that it is not fast enough to fill the rule. This might indicate other problems like slow StoreAPis or too complex query expression in rule.
 
 * `thanos_rule_evaluation_with_warnings_total`. If you choose to use Rules and Alerts with [partial response strategy's](#partial-response) value as "warn", this metric will tell you how many evaluation ended up with some kind of warning. To see the actual warnings see WARN log level. This might suggest that those evaluations return partial response and might not be accurate.
 
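The corrected comparison can be used directly as an alert expression. Below is a minimal sketch of a Prometheus-format rule file that Thanos Ruler (or Prometheus) could load; the group name, alert name, `for` duration, severity label, and annotation text are illustrative assumptions, not taken from the Thanos mixin.

```yaml
# Sketch of an alerting rule built on the documented comparison.
# Only the expr reflects the documentation; all other names are hypothetical.
groups:
  - name: thanos-rule-health
    rules:
      - alert: ThanosRuleGroupEvaluationSlowerThanInterval
        # The last evaluation of a rule group took longer than the interval it
        # is scheduled at, so data for some intervals could be missing.
        expr: prometheus_rule_group_last_duration_seconds > prometheus_rule_group_interval_seconds
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: 'Rule group {{ $labels.rule_group }} takes longer to evaluate than its interval.'
```

Because both gauges carry the same label set (including `rule_group`), the `>` comparison matches series one-to-one, so the alert fires per rule group rather than for the Ruler as a whole.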