We had an outage in our k8s cluster yesterday. After it cleared up, the remaining side effect was that, while the backends were unavailable, haproxy had sent a bunch of 404s to the CDN and browsers, which cached them, rather than 503s. In the nginx ingress they recently changed this to what I think is more sensible behavior: kubernetes/ingress-nginx#1513
I was affected by this recently as well. It's good to see it has been changed upstream in the nginx ingress. I believe haproxy-ingress should reflect this pattern as well.
For a temporary solution, I built a custom default backend that returns a 503 for specific subdomains. This is definitely a workaround, though; I think having the default backend receive traffic when a service has no endpoints available is confusing. A rough sketch of that backend follows.
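For reference, here is a minimal sketch of that kind of custom default backend, in Go. It assumes the ingress forwards the original `Host` header through to the default backend; the subdomain list and listen port are hypothetical placeholders:

```go
package main

import (
	"log"
	"net/http"
	"strings"
)

// failClosedHosts lists the (hypothetical) subdomains that should
// answer 503 instead of the stock 404 while their backends have no
// endpoints available.
var failClosedHosts = map[string]bool{
	"app.example.com": true,
	"api.example.com": true,
}

func handler(w http.ResponseWriter, r *http.Request) {
	host := r.Host
	// Strip an optional :port before matching.
	if i := strings.IndexByte(host, ':'); i >= 0 {
		host = host[:i]
	}
	if failClosedHosts[host] {
		http.Error(w, "Service Unavailable", http.StatusServiceUnavailable)
		return
	}
	// Everything else behaves like the stock default backend.
	http.NotFound(w, r)
}

func main() {
	http.HandleFunc("/", handler)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```

Only the named subdomains fail closed with a 503, so CDNs and browsers won't cache a 404 for them during an outage; every other host still gets the usual 404.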
This should be fixed as of the current snapshot version, v0.5-snapshot.2. Closing. If the problem persists, let me know by updating this same issue. Thanks for pointing this out.
It'd be nice if you included the git commit that made the change (either by referencing the issue in the commit message or by adding a reference to the commit when you close the issue).