[Filebeat] Panic in K8s autodiscover #21843
Comments
Pinging @elastic/integrations-platforms (Team:Platforms)
@ChrsMark Hm, after upgrading it still panics, but the stack trace is slightly different (libbeat/autodiscover/template.Mapper.GetConfig):
Shall I open a new issue?
Hi @boernd! What kind of upgrade did you perform? The fix should be in with 7.9.3. The line the error indicates was writing to the map before the fix:
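For illustration, unsynchronized writes to a plain Go map make the runtime abort with a fatal "concurrent map writes" error, which is the class of panic being discussed here. A minimal standalone sketch of that failure mode, not the actual Beats code:

```go
package main

import "sync"

func main() {
	configs := map[string]string{}
	var wg sync.WaitGroup
	for i := 0; i < 2; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for j := 0; j < 10000; j++ {
				configs["key"] = "value" // unguarded write: the runtime may abort with "concurrent map writes"
			}
		}()
	}
	wg.Wait()
}
```

The conventional fix for this pattern is to guard the map with a sync.Mutex (or a sync.RWMutex for read-heavy access), or to switch to sync.Map.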
Strange, pod description gives me:
The upgrade was just updating the Helm values.yaml, but I will check again tomorrow just to be sure.
@ChrsMark The line mentioned above is still in the v7.9.3 tag? I can also find the commit in the 7.9 branch, but not in the tagged version.
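One way to check whether a given commit made it into a tagged release (the SHA below is a placeholder, not the actual fix commit):

```sh
# Lists every tag whose history contains the commit
git tag --contains 1a2b3c4
```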
@boernd Sorry for the confusion here. I investigated this and found that the version was cut before the backport was merged, so the fix didn't make it into 7.9.3. Not sure if you can wait for the next release.
After upgrading from 7.5.2 to 7.9.2, I observe sporadic container crashes with the following stack trace:
I use Kubernetes autodiscover with a lot of conditions, e.g.
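Something along these lines, a minimal sketch of a conditions-based autodiscover setup (the namespace and container names are placeholders, not the actual conditions from the report):

```yaml
filebeat.autodiscover:
  providers:
    - type: kubernetes
      templates:
        - condition:
            equals:
              kubernetes.namespace: my-namespace   # placeholder
          config:
            - type: container
              paths:
                - /var/log/containers/*-${data.kubernetes.container.id}.log
        - condition:
            contains:
              kubernetes.container.name: nginx     # placeholder
          config:
            - type: container
              paths:
                - /var/log/containers/*-${data.kubernetes.container.id}.log
```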
Filebeat Version: 7.9.2
Operating System: Official docker image running on OpenShift 3.11
Discuss Forum URL: https://discuss.elastic.co/t/filebeat-panic-in-k8s-autodiscover/252144
Steps to Reproduce: Not sure how to reproduce at the moment. Since the rollout of the new version (~14-hour timeframe) it has happened three times (a total of 202 Filebeat pods are deployed, running as a DaemonSet).