Kubernetes watch connection is not released when reload fails to create a runner #12096
Comments
https://github.com/elastic/beats/blob/master/libbeat/cfgfile/list.go#L90
The runner may fail to be created, but the Kubernetes watch connection has already been started during factory.Create.
Could you please share the config you are using? Starting a runner should not create a new watcher.
FYI, the config files in use:
- inputs.d/emptydir.json
- inputs.d/stdout.json
I followed the debug log and the source code. It seems that when creating a new input, the processors are started on the first line, and add_kubernetes_metadata starts its watch at that point; so if any error happens later in the new-input process, it leaves behind a useless watch. I'm new to Filebeat, so if I missed something please tell me, thanks.
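To make that sequence concrete, here is a minimal Go sketch of the flow as I understand it from this thread; the names (watchProcessor, startProcessor, newInput) are illustrative stand-ins, not the actual beats code:

```go
// Minimal sketch of the reported flow, not the actual beats code: the
// processor (standing in for add_kubernetes_metadata) opens its watch while
// the input is being constructed, so a later construction error leaks it.
package main

import (
	"errors"
	"fmt"
)

// watchProcessor stands in for add_kubernetes_metadata: creating it opens a
// long-lived watch connection that must be closed explicitly.
type watchProcessor struct{ open bool }

func startProcessor() *watchProcessor {
	fmt.Println("watch connection opened")
	return &watchProcessor{open: true}
}

func (p *watchProcessor) Close() {
	p.open = false
	fmt.Println("watch connection closed")
}

// newInput mirrors the order described above: processors are started first,
// then the rest of the input setup can still fail.
func newInput(valid bool) (*watchProcessor, error) {
	proc := startProcessor()
	if !valid {
		// No cleanup here: proc keeps its watch open even though the
		// runner is never started.
		return nil, errors.New("input creation failed")
	}
	return proc, nil
}

func main() {
	// Each reload period retries runner creation; every failed attempt
	// leaves one more watch connection behind.
	for i := 0; i < 3; i++ {
		if _, err := newInput(false); err != nil {
			fmt.Println("runner not started:", err)
		}
	}
}
```

Running it prints one "watch connection opened" per failed attempt and never a matching close, which is the accumulation pattern described later in this issue.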
I wonder if #12106 would fix this |
Yes, fix goroutine leak #12125 adds a cleanup check for when input creation fails, but I wonder whether it also fixes the Kubernetes watch leak, because that watch is created by the add_kubernetes_metadata processor.
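Reusing the stand-in types from the sketch above, a cleanup-on-failure check (conceptually what such a fix would need to do for processor-owned resources) might look like this; again these are illustrative names, not the actual change in #12125:

```go
// Hypothetical pattern: release whatever the constructor already started
// before returning the error, so failed reload attempts don't accumulate
// watch connections.
func newInputWithCleanup(valid bool) (*watchProcessor, error) {
	proc := startProcessor()
	if !valid {
		proc.Close() // close the watch before propagating the error
		return nil, errors.New("input creation failed")
	}
	return proc, nil
}
```

Whether the actual fix reaches resources owned by the add_kubernetes_metadata processor is exactly the open question in this comment.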
I think we don't currently have a way to free processors when an input stops. A way to avoid this issue is configuring the
https://discuss.elastic.co/t/kubernetes-watch-connection-never-close-when-inputreload-create-runner-failed/180112
Running Filebeat in a Kubernetes cluster with input reload enabled: when the reload fails to start a runner, the kubernetes processor has already started a watch. Ten seconds later the reload tries to start the runner again and fails again, so over time it keeps a large number of connections to kube-apiserver open. In my case Filebeat had created more than 10,000 connections, which caused kube-apiserver to consume 10 GB of memory.
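For reference, this kind of periodic input reload is configured via filebeat.config.inputs; a minimal sketch, assuming the external input files live under inputs.d/ as mentioned above (a reload.period of 10s matches the ten-second retry described here):

```yaml
# filebeat.yml (sketch): load external input files and reload them periodically
filebeat.config.inputs:
  enabled: true
  path: ${path.config}/inputs.d/*.json   # e.g. the emptydir.json / stdout.json files above
  reload.enabled: true
  reload.period: 10s                     # each period retries any runner that failed to start
```

With this in place, every failed runner creation is retried on the next period, so any watcher opened during the failed attempt is leaked again and again.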