Ensure WorkflowTaskExecutionFailureCounter is called with a tag #1658
Conversation
@@ -1170,7 +1170,7 @@ func (weh *workflowExecutionEventHandlerImpl) ProcessEvent(
 	}
 	defer func() {
 		if p := recover(); p != nil {
-			weh.metricsHandler.Counter(metrics.WorkflowTaskExecutionFailureCounter).Inc(1)
+			incrementWorkflowTaskFailureCounter(weh.metricsHandler, "NonDeterminismError")
I don't think the panics here are always caused just by non-determinism; what if a user's workflow code panics because of a code bug?
Also, I am pretty sure we can still end up emitting this metric twice; that should be addressed to close #1450.
> I don't think the panics here are always caused just by non-determinism; what if a user's workflow code panics because of a code bug?
From my testing, when a user's workflow code panics, we trigger the call in internal_task_pollers.go. The only way I saw us recover a panic from this defer call is when we panicIllegalState from the state machine. But to be safe, I can add a check on the panic message for TMPRL1100, using the NonDeterminismError tag when it matches and defaulting to WorkflowError for all other scenarios.
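A minimal sketch of what that check could look like, assuming the tag values discussed in this thread; the helper name and its placement are illustrative, not the actual SDK implementation:

```go
package internal

import (
	"fmt"
	"strings"
)

// workflowTaskFailureReason is a hypothetical helper: it inspects the recovered
// panic value and picks a failure-reason tag. TMPRL1100 is the code that appears
// in non-determinism error messages; anything else is treated as a workflow bug.
func workflowTaskFailureReason(panicValue interface{}) string {
	msg := fmt.Sprintf("%v", panicValue)
	if strings.Contains(msg, "TMPRL1100") {
		return "NonDeterminismError"
	}
	return "WorkflowError"
}
```

The defer in ProcessEvent could then call incrementWorkflowTaskFailureCounter(weh.metricsHandler, workflowTaskFailureReason(p)) instead of hard-coding "NonDeterminismError".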
> Also, I am pretty sure we can still end up emitting this metric twice; that should be addressed to close #1450.
Do you have any idea what scenario could cause this double logging? I've tried to hit this scenario over the last few days and haven't been able to come up with anything.
The issue itself is about Prometheus erroring out because the metric was emitted twice, once with a tag set and once without, which is now resolved with this PR (calling it with different tag sets doesn't cause Prometheus to throw an error). But I'm happy to address the double logging; I'm just not able to find a scenario where we double log.
Sure, let me try to come up with a counterexample.
I got a scenario that hits both ProcessEvent and RespondTaskCompletedWithMetrics. It looks like when both of these functions emit this metric, only the RespondTaskCompletedWithMetrics inc gets counted.
A few scenarios I ran:
- When RespondTaskCompletedWithMetrics is the only one emitting the metric, the count is 1
- When ProcessEvent is the only one emitting the metric, the count is 0
- When neither emits the metric, the metric doesn't show up at http://localhost:9090/metrics at all

Is there some kind of scoping or something that's causing the earlier ProcessEvent metric to get reset? I'm using the metrics sample to emit metrics to http://localhost:9090/metrics: https://github.com/yuandrew/playground/tree/duplicate_logging
What was changed
Ensure the WorkflowTaskExecutionFailureCounter metric always includes a tag set.
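A rough sketch of what a helper like the incrementWorkflowTaskFailureCounter call in the diff could look like; the tag key name ("failure_reason") is an assumption for illustration, not necessarily what this PR uses:

```go
package internal

import "go.temporal.io/sdk/internal/common/metrics"

// Sketch only: attach a failure-reason tag before incrementing, so every
// emission of WorkflowTaskExecutionFailureCounter carries the same label set.
func incrementWorkflowTaskFailureCounter(metricsHandler metrics.Handler, failureReason string) {
	metricsHandler.
		WithTags(map[string]string{"failure_reason": failureReason}).
		Counter(metrics.WorkflowTaskExecutionFailureCounter).
		Inc(1)
}
```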
Why?
Prometheus returns an error when you try to increment the same counter if one instance has a tag set and the other doesn't.
I validated that Prometheus is okay with different tag sets (in this case, if we happen to call one instance with the WorkflowError tag and the other with the NonDeterminismError tag). I'm not sure there's a scenario where we increment this counter from different sources, but if there is, it shouldn't be an issue with Prometheus.
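For reference, a small standalone sketch of the behavior described above, using the Prometheus Go client directly rather than the SDK's metrics pipeline: registering the same metric name with and without labels fails, while different label values on the same label key are fine.

```go
package main

import (
	"fmt"

	"github.com/prometheus/client_golang/prometheus"
)

func main() {
	reg := prometheus.NewRegistry()

	// Counter that always carries a failure_reason label.
	tagged := prometheus.NewCounterVec(prometheus.CounterOpts{
		Name: "workflow_task_execution_failed",
		Help: "Workflow task execution failures.",
	}, []string{"failure_reason"})

	// Same metric name, but with no labels at all.
	untagged := prometheus.NewCounter(prometheus.CounterOpts{
		Name: "workflow_task_execution_failed",
		Help: "Workflow task execution failures.",
	})

	fmt.Println(reg.Register(tagged))   // <nil>: first registration succeeds
	fmt.Println(reg.Register(untagged)) // error: same name with different label names

	// Different label values on the same key are fine.
	tagged.WithLabelValues("NonDeterminismError").Inc()
	tagged.WithLabelValues("WorkflowError").Inc()
}
```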
Checklist
- Closes #1450: workflow_task_execution_failed is used with different sets of labels, causing errors with Prometheus