New CoreDNS pod warnings in log #4919
Yes, that's expected. If you import a globbed set of files but no files match, coredns will log a warning about it. We added support for this import in #4397, but since it is optional whether the user creates any customizations, the warning will be logged if they do not. This is tracked upstream at coredns/coredns#3600 (comment); there does not seem to be any interest in fixing it.
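For context, the extension point added in #4397 amounts to import directives in the k3s-managed Corefile along these lines (a sketch, not the exact shipped file; the mount path reflects how the optional configmap is projected into the pod):

```
.:53 {
    errors
    health
    kubernetes cluster.local in-addr.arpa ip6.arpa
    forward . /etc/resolv.conf
    cache 30
    import /etc/coredns/custom/*.override
}
import /etc/coredns/custom/*.server
```

When no key in the optional coredns-custom configmap matches `*.override` or `*.server`, both import globs come up empty and coredns emits the warning on every reload cycle.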
I believe the issue is that there IS a configmap with a configured filename as a key that matches the glob pattern, but coredns is not seeing it because the pod has not been restarted. There is no way to provide this configmap as part of the server install, so you must provision it after the server has started. Does the pod see the optional configmap without restarting? It looks to me like coredns sees the configmap on the next reload cycle and the messages stop, so I guess it does. I couldn't find good documentation on this behavior when searching, though.
@brandond Do you have any update on this? In this case there is a file match, so coredns should NOT log a warning. As you can see in the configmap, there is a matching key. As mentioned in the ticket you linked, if the file exists there is NO warning, but in k3s we still see the warning.
The file only exists if you create and populate the customization configmap. If you don't do this for yourself, then you will see warnings in the logs. |
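As a sketch, a minimal customization configmap that satisfies the glob looks something like this (the key name `example.override` is arbitrary; any key ending in `.override` should match, and the `log` directive is just a harmless placeholder):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns-custom
  namespace: kube-system
data:
  example.override: |
    # merged into the main server block; enables query logging
    log
```

Keys ending in `.server` would instead be imported at the top level, outside the main server block.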
Is the configmap below considered valid?
The warnings still show in the log, as mentioned in post #1.
Did you delete the coredns pod after creating that configmap? It has to exist at the time the pod is created to be projected into the pod as a volume. That's just how optional volumes work. |
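Illustrative commands for that step, assuming the default `k8s-app=kube-dns` label that k3s puts on the coredns pods (requires a live cluster, so adjust to your environment):

```
# After creating the coredns-custom configmap, force a pod restart so the
# optional volume is projected with the new keys present:
kubectl -n kube-system delete pod -l k8s-app=kube-dns
```

The deployment controller recreates the pod, and the new pod's volume projection includes the configmap keys.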
First, my customization in the configmap did get picked up by coredns, so I thought there was no need to delete it. After your suggestion I tried deleting the pod and letting k3s recreate it, but the new pod still gives the same warning.
Hmm. If it's getting picked up by coredns then I'm not sure why it would be complaining about it in the logs. I'd suggest taking it up with the coredns team but as you saw in the issue linked above they don't seem too concerned about it. If it's working then what are you still trying to resolve? |
It turned out someone on the team changed the configmap name from coredns-custom to coredns-custom-config since I last tested. This caused those warnings, and they are legitimate. Once I fixed the configmap name, the warnings went away and it started working again. Thanks, and sorry for the false alarm.
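A quick sanity check for that kind of drift, assuming the default name and namespace k3s expects:

```
kubectl -n kube-system get configmap coredns-custom
# "Error from server (NotFound)" here means coredns has nothing to
# import, so the glob warnings are expected.
```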
Would it be possible to hide the warning?
Very annoying message spam in Loki/Grafana.
Could it at least show up once at startup instead of spamming? I have 20k lines of the same warning over 5 days of uptime.
Closing this out in favor of tracking the upstream issue at coredns/coredns#3600 (comment). We will leave the import statement in the config file as a user extension point; it is unfortunate that coredns logs so aggressively when the files don't exist.
Is your feature request related to a problem? Please describe.
Related to #462
Verifying the log output shows some WARNING messages.
Describe the solution you'd like
Validate that current behavior does not affect functionality
Steps to reproduce
Validation steps:
Using version: v1.23.1-rc1+k3s1
Create 1 server node and 1 agent node.
Deploy a coredns-custom configmap, run the commands, and inspect the output.
Once the server is ready, deploy this:
For logs, this warning was displayed:
kubectl logs -n kube-system coredns-84c56f7bfb-qcl4q
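The warning in question looks roughly like this (exact glob paths depend on how the Corefile mounts the custom volume):

```
[WARNING] No files matching import glob pattern: /etc/coredns/custom/*.override
[WARNING] No files matching import glob pattern: /etc/coredns/custom/*.server
```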