Consul SD causing 100% CPU usage if the service is down #627
nicot pushed a commit to nicot/kit that referenced this issue on Dec 3, 2017:
Add backoff to the Consul instancer loop. Fixes go-kit#627.

nicot pushed a commit to nicot/kit that referenced this issue on Dec 3, 2017:
Justification for jitter and growth factor: https://aws.amazon.com/blogs/architecture/exponential-backoff-and-jitter/. Add backoff to the Consul instancer loop. Fixes go-kit#627.

peterbourgon pushed a commit that referenced this issue on Apr 2, 2018:
* Add backoff package. Justification for jitter and growth factor: https://aws.amazon.com/blogs/architecture/exponential-backoff-and-jitter/. Add backoff to the Consul instancer loop. Fixes #627.
* Revert "Add backoff package". This reverts commit 924501a.
* Get rid of external package and update exponential
* Add instancer backoff
* Fix old exponential name
* Add doc comment
* Fixup & respond to review
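The merged change adds exponential backoff with jitter to the instancer's watch loop. Below is a minimal, self-contained sketch of that pattern ("full jitter", as described in the linked AWS article); the function name and constants are illustrative, not the exact code merged into go-kit.

package main

import (
	"errors"
	"math/rand"
	"time"
)

// retryWithBackoff calls fn until it succeeds, sleeping between attempts
// with exponential backoff and full jitter: the ceiling doubles after every
// failure (capped at max), and the actual sleep is a uniformly random
// duration below that ceiling, so many clients don't retry in lockstep.
func retryWithBackoff(fn func() error, base, max time.Duration) {
	delay := base
	for {
		if err := fn(); err == nil {
			return
		}
		time.Sleep(time.Duration(rand.Int63n(int64(delay)))) // full jitter
		if delay *= 2; delay > max {
			delay = max
		}
	}
}

func main() {
	attempts := 0
	retryWithBackoff(func() error {
		attempts++
		if attempts < 5 {
			return errors.New("consul agent not reachable yet")
		}
		return nil
	}, 50*time.Millisecond, 10*time.Second)
}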
Hey,
It seems that if the Consul service is down, the Instancer will constantly try to get the list of services, causing 100% CPU usage until the service is back up. There should probably be a backoff mechanism that only retries after X amount of time. Here is how I'm using the client:
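The reporter's exact snippet isn't reproduced in this text; the following is a minimal sketch of typical go-kit sd/consul Instancer wiring, with a hypothetical Consul address, service name, and tag chosen to mirror the log output below (import paths as used in the 2017-era go-kit/kit).

package main

import (
	"context"
	"io"
	"os"

	"github.com/go-kit/kit/endpoint"
	"github.com/go-kit/kit/log"
	"github.com/go-kit/kit/sd"
	consulsd "github.com/go-kit/kit/sd/consul"
	"github.com/go-kit/kit/sd/lb"
	consulapi "github.com/hashicorp/consul/api"
)

func main() {
	logger := log.NewLogfmtLogger(os.Stderr)

	// Point the Consul API client at the agent (hypothetical address).
	cfg := consulapi.DefaultConfig()
	cfg.Address = "192.168.99.100:8500"
	apiClient, err := consulapi.NewClient(cfg)
	if err != nil {
		logger.Log("err", err)
		os.Exit(1)
	}
	client := consulsd.NewClient(apiClient)

	// Watch healthy instances of "maindb" tagged "local". When the agent
	// is unreachable, this watch loop is what retries without pause.
	instancer := consulsd.NewInstancer(client, logger, "maindb", []string{"local"}, true)
	defer instancer.Stop()

	// Turn discovered instances into endpoints and load-balance over them.
	factory := func(instance string) (endpoint.Endpoint, io.Closer, error) {
		// A real factory would build an HTTP or gRPC client endpoint here.
		e := func(ctx context.Context, request interface{}) (interface{}, error) { return nil, nil }
		return e, nil, nil
	}
	endpointer := sd.NewEndpointer(instancer, factory, logger)
	balancer := lb.NewRoundRobin(endpointer)
	_ = balancer
}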
Terminal output:
ts=2017-10-26T11:08:33.168160872Z caller=instancer.go:69 service=maindb tags=[local] err="Get http://192.168.99.100:8500/v1/health/service/maindb?passing=1&tag=local&wait=10000ms: dial tcp 192.168.99.100:8500: getsockopt: connection refused"
ts=2017-10-26T11:08:33.171253831Z caller=instancer.go:69 service=maindb tags=[local] err="Get http://192.168.99.100:8500/v1/health/service/maindb?passing=1&tag=local&wait=10000ms: dial tcp 192.168.99.100:8500: getsockopt: connection refused"
ts=2017-10-26T11:08:33.175242518Z caller=instancer.go:69 service=maindb tags=[local] err="Get http://192.168.99.100:8500/v1/health/service/maindb?passing=1&tag=local&wait=10000ms: dial tcp 192.168.99.100:8500: getsockopt: connection refused"
ts=2017-10-26T11:08:33.176126961Z caller=instancer.go:69 service=maindb tags=[local] err="Get http://192.168.99.100:8500/v1/health/service/maindb?passing=1&tag=local&wait=10000ms: dial tcp 192.168.99.100:8500: getsockopt: connection refused"
...