I'm investigating replacing my current sidecar consul-agent setup in k8s with consul-k8s. While testing this out I found that if consul-k8s has a problem talking to Consul (for example), it simply logs errors and carries on. Since this is a single pod responsible for syncing state, my expectation is that there would be some way to know the pod is having problems so it can be rescheduled. From my looking around I don't see any mechanism (no ports exposed, no CLI calls to make, etc.). Are there plans to add these checks?
This is a good catch! #67 added a health endpoint to the catalog sync pod, and the final piece of this is piped through the Helm chart in consul-helm PR 123 which will be merged shortly.
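Once the chart change lands, wiring that endpoint into a liveness probe is what lets Kubernetes act on the failure. A minimal sketch of what the probe might look like on the catalog sync pod; the port and path here are illustrative assumptions, not the actual values from #67 or the chart:

```yaml
# Sketch of a liveness probe for the catalog sync container.
# Port 8080 and /health/ready are assumed placeholders -- check the
# endpoint actually exposed by #67 / the consul-helm chart.
livenessProbe:
  httpGet:
    path: /health/ready
    port: 8080
  initialDelaySeconds: 30
  periodSeconds: 5
  failureThreshold: 3
```

With something like this in place, if the sync process can't reach Consul and the health endpoint reports unhealthy, the kubelet restarts the container after three consecutive failures instead of letting it log errors indefinitely.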