warnings spamming logs: "wrong number of labels for upstream peer, empty labels will be used instead" in nginx-plus ingress controller #5607
Comments
Hi @reddyblokesh thanks for reporting! Be sure to check out the docs and the Contributing Guidelines while you wait for a human to take a look at this 🙂 Cheers!
@reddyblokesh thank you for contacting us about this. This looks to be a duplicate of #5010. Can you confirm whether you are using gRPC?
No, we aren't using gRPC; that's the catch. Our backend is based on a REST API.
How come this behaviour is not visible in the NGINX Ingress Controller OSS? Is there a version in which this should be resolved, or any workaround to fix this so we can patch it and rebuild the image from source?
@reddyblokesh Can you please let us know your Ingress Controller configuration, either a Helm values file or the manifest file? Thank you.
nginx.txt
@reddyblokesh Thank you for adding your NGINX configuration. I tried to reproduce the problem but am not able to see the same results as you. I have attached my Ingress Controller deployment manifest: ic-deployment.txt. I used https://github.com/nginxinc/kubernetes-ingress/tree/main/examples/ingress-resources/complete-example as a sample app. Can you let us know if your Ingress Controller configuration is different from this one?
The reason is that you don't seem to have any upstreams, or any ConfigMap with upstreams. The error occurs when the exporter doesn't find the labels it expects (hardcoded in the exporter) for the upstreams. Mainly, when you use multiple upstreams, it tries to capture the labels for each of them, and when it doesn't find them it throws warnings. Please recheck the nginx.conf I shared.
When NGINX Plus is present, the Ingress Controller should be adding a zone and specific labels that relate to the backend service.
Agreed, but this is badly spamming our access logs, which is overwhelming our Splunk. Is there a workaround or patch to disable this?
@reddyblokesh It's not in main yet, but we're hoping it will be fixed in 3.5.2 when that releases. |
@reddyblokesh, this fix has now been released in release 3.5.2. Please update to this version to solve your problem.
@reddyblokesh we're closing this; if you encounter any issues, please feel free to open another one. Thanks!
@vepatel @AlexFenlon this issue has not been resolved even in 3.5.2. The change you merged for the Prometheus labels only switched the message from warning to debug, so now we are getting debug messages instead. And since the log level is controlled at the exporter level, we couldn't change it to warning or info either:
level=debug ts=2024-06-17T07:23:43.507Z caller=nginx_plus.go:849 msg="wrong number of labels for upstream peer, empty labels will be used instead" upstream=tritonExternalAzure-dev peer=\ expected=1 got=0
@vepatel @AlexFenlon is the fix planned for the next release?
Hi @reddyblokesh, we are targeting release NIC v3.6.1. Could you provide more info about the upstream configuration you use?
Describe the bug
Our access logs are spammed with these warnings in the NGINX Plus Ingress Controller:
"wrong number of labels for upstream peer, empty labels will be used instead"
NOTE: this behaviour does not occur in the NGINX Ingress Controller (free/OSS), only in the NGINX Plus Ingress Controller.
=================================================================================================
origin of this error: https://github.com/nginxinc/nginx-prometheus-exporter/blob/main/collector/nginx_plus.go#L838
level.Warn(c.logger).Log("msg", "wrong number of labels for upstream, empty labels will be used instead", "upstream", name, "expected", len(c.variableLabelNames.UpstreamServerVariableLabelNames), "got", len(varLabelValues))
=================================================================================================
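For context, the fallback behaviour that the warning describes can be sketched roughly as follows. This is a minimal illustration, not the exporter's actual code: the helper name `padLabels` and the specific label names are assumptions made for the example.

```go
package main

import "fmt"

// padLabels mimics the fallback described by the exporter's log message:
// if the number of collected label values does not match the number of
// expected label names, the collected values are discarded and empty
// strings are used instead ("empty labels will be used instead").
// Hypothetical helper for illustration; not the exporter's real API.
func padLabels(expected, got []string) ([]string, bool) {
	if len(got) == len(expected) {
		return got, true
	}
	return make([]string, len(expected)), false
}

func main() {
	// Illustrative label names only; the real exporter receives its
	// variable label names from the Ingress Controller.
	expected := []string{"service", "resource_type", "resource_name", "resource_namespace"}

	// got=0 while expected=4, matching the warnings in the logs above.
	labels, ok := padLabels(expected, []string{})
	fmt.Println(ok)     // false: this is the case that triggers the warning
	fmt.Println(labels) // four empty strings substituted for the missing labels
}
```

In other words, the metric is still emitted, but with blank label values, and the mismatch itself is what produces the log line each scrape.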
To Reproduce
Steps to reproduce the behavior:
=================================================================================================
level=warn ts=2024-05-23T08:06:54.604Z caller=nginx_plus.go:838 msg="wrong number of labels for upstream, empty labels will be used instead" upstream=xxxxxxxxxxxxx expected=4 got=0
level=warn ts=2024-05-23T08:06:54.604Z caller=nginx_plus.go:849 msg="wrong number of labels for upstream peer, empty labels will be used instead" upstream=xxxxxxx-dev peer=10.x.x.x:443 expected=1 got=0
level=warn ts=2024-05-23T08:06:54.604Z caller=nginx_plus.go:838 msg="wrong number of labels for upstream, empty labels will be used instead" upstream=xxxxxxxxx-dev expected=4 got=0
level=warn ts=2024-05-23T08:06:54.604Z caller=nginx_plus.go:849 msg="wrong number of labels for upstream peer, empty labels will be used instead" upstream=xxxxxxxxxx-dev peer=10.x.x.x expected=1 got=0
level=warn ts=2024-05-23T08:06:55.138Z caller=nginx_plus.go:838 msg="wrong number of labels for upstream, empty labels will be used instead" upstream=xxxxxxxxxxxx-dev expected=4 got=0
level=warn ts=2024-05-23T08:06:55.138Z caller=nginx_plus.go:849 msg="wrong number of labels for upstream peer, empty labels will be used instead" upstream=xxxxxxxxxxx-dev peer=10.x.x.x expected=1 got=0
level=warn ts=2024-05-23T08:06:55.138Z caller=nginx_plus.go:838 msg="wrong number of labels for upstream, empty labels will be used instead" upstream=xxxxxxxxxx-dev expected=4 got=0
level=warn ts=2024-05-23T08:06:55.138Z caller=nginx_plus.go:849 msg="wrong number of labels for upstream peer, empty labels will be used instead" upstream=xxxxxxxxxx-dev peer=10.x.x.x expected=1 got=0
===================================================================================
Expected behavior
Our access logs should contain only actual access log entries (which do work), but these warnings are spamming our logs.
NOTE: this behaviour does not occur in the NGINX Ingress Controller (free/OSS).
Your environment
Client Version: v1.28.4
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.26.13
WARNING: version difference between client (1.28) and server (1.26) exceeds the supported minor version skew of +/-1