New Feature Request
Envoy is a popular choice as a sidecar in microservice architectures. Is it possible to send a reverse notification to an app when a pod/instance fails its health check and is removed from the cluster? Could this be achieved via an extension?
I'll expand with the scenario below.
Scenario: Envoy used as a sidecar for an orchestrator (like Netflix Conductor)
The orchestrator and Envoy are deployed as a single pod in Kubernetes. Envoy is configured to health-check the services (all deployed in the same k8s cluster). If a particular core service starts failing Envoy's health check (or its k8s readiness probe), the platform might want the orchestrator to take itself out of rotation at the load balancer (F5 or AWS Load Balancer).
The workaround I use today is to read the failed-health-check logs and poll the Envoy admin API for cluster health, then act on that.
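As a sketch of that workaround, the snippet below polls Envoy's admin `/clusters?format=json` endpoint and counts hosts that have not failed active health checking. The admin address, cluster name, and threshold are hypothetical placeholders:

```python
import json
import urllib.request


def healthy_host_count(clusters_json: dict, cluster_name: str) -> int:
    """Count hosts in one cluster that have no active health-check failure.

    `clusters_json` is the parsed response of GET /clusters?format=json
    on Envoy's admin interface.
    """
    for cluster in clusters_json.get("cluster_statuses", []):
        if cluster.get("name") != cluster_name:
            continue
        return sum(
            1
            for host in cluster.get("host_statuses", [])
            # A host is counted healthy if the failed_active_health_check
            # flag is absent or false in its health_status.
            if not host.get("health_status", {}).get(
                "failed_active_health_check", False
            )
        )
    raise KeyError(f"cluster {cluster_name!r} not found")


def fetch_clusters(admin_url: str = "http://localhost:9901") -> dict:
    # Assumes the admin interface is reachable on localhost:9901 (hypothetical).
    with urllib.request.urlopen(f"{admin_url}/clusters?format=json") as resp:
        return json.load(resp)


if __name__ == "__main__":
    # Hypothetical cluster name and threshold; act when too few hosts remain.
    if healthy_host_count(fetch_clusters(), "core-service") < 2:
        print("below threshold: take self out of rotation")
```

In practice this loop runs on a timer inside (or alongside) the orchestrator, which is exactly the polling the feature request would make unnecessary.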
What I would expect:
When Envoy has fewer than N healthy hosts in a cluster, it would call a configured POST endpoint (gRPC would work too) with details such as:
- cluster info plus any other metadata we have configured
- number of healthy hosts Envoy sees
- configured minimum number of healthy hosts
- total number of hosts Envoy has (healthy + unhealthy)
- last health-check failure time
With this info, the destination server can act on it.
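To make the proposal concrete, here is a minimal receiver the destination server might run. The payload shape and field names (`healthy_hosts`, `min_healthy_hosts`, etc.) are illustrative, not an existing Envoy API:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer


def should_leave_rotation(event: dict) -> bool:
    """Decide whether to take this instance out of rotation.

    `event` is the hypothetical notification payload, e.g.:
    {"cluster": "core-service", "metadata": {},
     "healthy_hosts": 1, "min_healthy_hosts": 2,
     "total_hosts": 4, "last_failure_time": "..."}
    """
    return event["healthy_hosts"] < event["min_healthy_hosts"]


class HealthEventHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        event = json.loads(self.rfile.read(length))
        if should_leave_rotation(event):
            # e.g. flip the readiness probe so the external LB drains this pod
            pass
        self.send_response(204)
        self.end_headers()


# To run the receiver (blocks forever):
# HTTPServer(("", 8080), HealthEventHandler).serve_forever()
```

The key design point is that the decision stays with the application: Envoy would only report the cluster state, and the orchestrator chooses how to react.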