Problems with container logs in release 0.18.0/0.18.1 #10413
Comments
I'm just a passer-by, but this looks to me like a Too Many Requests (429) response from Loki.
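For reference, if that diagnosis is right, the knob on the Loki side is the per-tenant ingestion limit in limits_config. A minimal sketch, assuming the 429s come from Loki's ingestion rate limits; the values below are purely illustrative, not a recommendation:

```yaml
# Excerpt of a hypothetical Loki server config (e.g. loki.yaml).
# Raising these limits, or throttling the client, is the usual way to stop
# Loki from answering ingest requests with HTTP 429.
limits_config:
  ingestion_rate_mb: 8        # illustrative per-tenant ingest rate
  ingestion_burst_size_mb: 16 # illustrative burst allowance above that rate
```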
@jszwedko apologies, I'm just a commenter here, not the original poster. You probably wanted to tag @k8s-comandante
Doh, apologies!
Hi @jszwedko, yes, my main pain is that Vector still loses container logs after some time of operation. As soon as I restart the daemon, it finds the new logs and starts consuming them.
Thanks for the response! It does sound like this is the same as #8616 then. I'll close this issue, but feel free to leave any additional thoughts over there!
Hello everyone!
I am using Vector to collect logs from containers in a k8s cluster.
Based on these issues, I updated the agent to version 0.18.0 and then to 0.18.1 in the hope that my problem with lost logs would be solved, but it has not been. I see this in the agent logs:
2021-12-13T09:54:02.787861Z INFO source{component_kind="source" component_id=kubernetes_logs_redline component_type=kubernetes_logs component_name=kubernetes_logs_redline}:file_server: vector::internal_events::file::source: Stopped watching file. file=/var/log/pods/redline-api-56998874bf-pzltm_6f00e0a1-7590-4136-8d7d-edb1d1ec3255/redline-api/0.log
The problem remained exactly the same as in #8616. Once I restarted the DaemonSet, it detected new logs and started consuming them.
The problem also remained exactly the same as in #7527:
2021-12-13T09:53:47.283959Z ERROR source{component_kind="source" component_id=agent component_type=kubernetes_logs component_name=agent}: vector::internal_events::kubernetes::instrumenting_watcher: Watch stream failed. error=Desync { source: Desync } internal_log_rate_secs=5
2021-12-13T09:53:47.283990Z WARN source{component_kind="source" component_id=agent component_type=kubernetes_logs component_name=agent}: vector::internal_events::kubernetes::reflector: Handling desync. error=Desync
There is also a new error that I had not encountered before; the logs contain a lot of messages like this:
2021-12-13T09:51:00.364788Z ERROR sink{component_kind="sink" component_id=loki-product component_type=loki component_name=loki-product}:request{request_id=799}: vector_core::stream::driver: Service call failed. error=ServerError { code: 429 } request_id=799
Either I'm doing something wrong, or the 0.18 release did not solve the problems above. Please advise.
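For context, the pipeline described above roughly corresponds to a configuration like the sketch below. The component ids (kubernetes_logs_redline, loki-product) are taken from the log lines; the endpoint, labels, and numbers are placeholders rather than the actual config. The request section is where Vector's per-sink throttling can be tuned if Loki keeps answering with 429:

```yaml
# Hypothetical Vector config sketch (YAML form); component ids come from the
# logs above, everything else is a placeholder.
sources:
  kubernetes_logs_redline:
    type: kubernetes_logs            # tails /var/log/pods/... on each node

sinks:
  loki-product:
    type: loki
    inputs: ["kubernetes_logs_redline"]
    endpoint: "http://loki.monitoring.svc:3100"   # placeholder endpoint
    encoding:
      codec: json
    labels:
      pod: "{{ kubernetes.pod_name }}"            # placeholder label set
    request:
      rate_limit_num: 100            # max requests per window (illustrative)
      rate_limit_duration_secs: 1    # window length in seconds
```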