
Loki driver for Docker flooding local logs with empty line message #3384

Closed
opsxcq opened this issue Feb 25, 2021 · 5 comments · Fixed by #4396
Labels
component/docker-driver · good first issue · Hacktoberfest · help wanted · keepalive

Comments


opsxcq commented Feb 25, 2021

Describe the bug

Apparently the log level set here

levelVal := os.Getenv("LOG_LEVEL")

is not being respected here:

level.Debug(l.logger).Log("msg", "ignoring empty line", "line", string(m.Line))
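
For reference, this is roughly how go-kit level filtering has to be wired for LOG_LEVEL to take effect (a minimal sketch assuming the driver uses github.com/go-kit/kit/log; the function wiring below is illustrative, not the driver's actual code). If the logger is never wrapped in level.NewFilter, or the message is emitted above Debug, it reaches stderr and dockerd copies it into /var/log/daemon.log:

package main

import (
	"os"

	"github.com/go-kit/kit/log"
	"github.com/go-kit/kit/log/level"
)

// newLogger builds a stderr logger and applies a level filter derived from
// LOG_LEVEL. Without the level.NewFilter wrapper, level.Debug(...) output is
// never suppressed, regardless of what LOG_LEVEL is set to.
func newLogger() log.Logger {
	logger := log.NewLogfmtLogger(os.Stderr)

	var opt level.Option
	switch os.Getenv("LOG_LEVEL") {
	case "debug":
		opt = level.AllowDebug()
	case "warn":
		opt = level.AllowWarn()
	case "error":
		opt = level.AllowError()
	default: // "info" or unset
		opt = level.AllowInfo()
	}
	return level.NewFilter(logger, opt)
}

func main() {
	l := newLogger()
	// With LOG_LEVEL=info this call produces no output; if the filter is
	// missing (or the message is actually logged at Info), the line reaches
	// stderr and therefore daemon.log.
	level.Debug(l).Log("msg", "ignoring empty line", "line", "")
}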

Loki docker driver was installed using

https://github.com/opsxcq/ansible-role-linux-server/blob/master/tasks/monitoring.yml#L29

Below are some configuration files which may help:

#cat /etc/docker/daemon.json 
{
    "graph": "/data/docker",
    "log-driver": "loki",
    "log-opts": {
        "loki-url": "http://XXXXXX:3100/loki/api/v1/push",
        "loki-batch-size": "5000",
        "loki-retries": "5",
        "max-size": "10m",
        "max-file": "3"
    }
}

And the problem: /var/log/daemon.log is full of these messages:

Feb 25 10:44:22 thor dockerd[1035]: time="2021-02-25T10:44:22Z" level=info msg="level=info ts=2021-02-25T10:44:22.431371136Z caller=loki.go:56 container_id=fc7f7f7d21c0b0b43e59f08814dd1b76c733506537cc22731000ba7fdf1c0c1e msg=\"ignoring empty line\" line=" plugin=f58a796ad30493cc8779dd3b66303578b59b8ccad4dfabe92c306a9c178c039d

The real problem is that this fills the whole mountpoint with junk logs, piling up at a few GB/day.

To Reproduce
Steps to reproduce the behavior:

  1. Start Loki (SHA or version)
  2. Install the Loki Docker driver (https://github.com/opsxcq/ansible-role-linux-server/blob/master/tasks/monitoring.yml#L29)
  3. Leave it running with a container that produces empty lines (see the sketch below)
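
A container that triggers the message could be as trivial as the following entrypoint (illustrative Go, not part of the original report):

package main

import (
	"fmt"
	"time"
)

// Prints an empty stdout line every second; each one arrives at the driver
// as an empty m.Line and triggers the "ignoring empty line" log.
func main() {
	for {
		fmt.Println()
		time.Sleep(time.Second)
	}
}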

Loki driver config.json (default, not touched)


{"plugin":{"Config":{"Args":{"Description":"","Name":"","Settable":null,"Value":null},"Description":"Loki Logging Driver","DockerVersion":"17.09.0-ce","Documentation":"https://github.com/grafana/loki","Entrypoint":["/bin/docker-driver"],"Env":[{"Description":"Set log level to output for plugin logs","Name":"LOG_LEVEL","Settable":["value"],"Value":"info"}],"Interface":{"Socket":"loki.sock","Types":["docker.logdriver/1.0"]},"IpcHost":false,"Linux":{"AllowAllDevices":false,"Capabilities":null,"Devices":null},"Mounts":null,"Network":{"Type":"host"},"PidHost":false,"PropagatedMount":"","User":{},"WorkDir":"","rootfs":{"diff_ids":["sha256:2b09fd0540a8bb13f84b5dc7c905c782afa6086668b9b9fee19ca75f33bef727"],"type":"layers"}},"Enabled":true,"Id":"f58a796ad30493cc8779dd3b66303578b59b8ccad4dfabe92c306a9c178c039d","Name":"loki:latest","PluginReference":"docker.io/grafana/loki-docker-driver:latest","Settings":{"Args":[],"Devices":[],"Env":["LOG_LEVEL=info"],"Mounts":[]}},"Rootfs":"/data/docker/plugins/f58a796ad30493cc8779dd3b66303578b59b8ccad4dfabe92c306a9c178c039d/rootfs","Config":"sha256:f47742f58cade5c85373f06f331e81fa3a41fa4ea6585012b38dba56eab1abf3","Blobsums":["sha256:6bc9287f49ad6c2a2dcfb7e5f3b67066400817c9387f460a40146c0efe38b671"],"SwarmServiceID":""}

Expected behavior

Respect the log level defined as info.

Environment:

  • Debian buster x64 with docker
  • Ansible
#docker info
Containers: 5
 Running: 5
 Paused: 0
 Stopped: 0
Images: 56
Server Version: 18.09.1
Storage Driver: overlay2
 Backing Filesystem: xfs
 Supports d_type: true
 Native Overlay Diff: true
Logging Driver: loki
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
 Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 9754871865f7fe2f4e74d43e2fc7ccd237edcbce
runc version: 1.0.0~rc6+dfsg1-3
init version: v0.18.0 (expected: fec3683b971d9c3ef73f284f176672c44b448662)
Security Options:
 apparmor
 seccomp
  Profile: default
Kernel Version: 4.19.0-8-amd64
Operating System: Debian GNU/Linux 10 (buster)
OSType: linux
Architecture: x86_64
CPUs: 8
Total Memory: 15.67GiB
Name: thor
ID: OIPI:MVDP:4GIL:FHSG:LIVT:BGSS:E6JC:L46M:SJLI:TG3Q:J23X:C6GN
Docker Root Dir: /data/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false

stale bot commented Jun 3, 2021

This issue has been automatically marked as stale because it has not had any activity in the past 30 days. It will be closed in 7 days if no further activity occurs. Thank you for your contributions.

stale bot added the stale label Jun 3, 2021

opsxcq commented Jun 3, 2021

It is not stale, the problem is still happening. Please don't delete it because I may have time to fix it soon.

stale bot removed the stale label Jun 3, 2021
cyriltovena pushed a commit to cyriltovena/loki that referenced this issue Jun 11, 2021
…#3384)

Updated tests
Regenerated documentation
Added CHANGELOG entry

Signed-off-by: Christopher Bradford <christopher.bradford@datastax.com>

Regenerate documentation and update certificate key file to private key file

Signed-off-by: Christopher Bradford <christopher.bradford@datastax.com>

stale bot commented Jul 9, 2021

This issue has been automatically marked as stale because it has not had any activity in the past 30 days. It will be closed in 7 days if no further activity occurs. Thank you for your contributions.

stale bot added the stale label Jul 9, 2021
kavirajk added the keepalive label and removed the stale label Jul 9, 2021

opsxcq commented Jul 10, 2021

Please keep this issue open because this error is still happening and is still very annoying.

@slim-bean
Collaborator

@opsxcq would you be able to open a PR to remove this line? I'm not sure there is any real value in having it, so we should just remove it.
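
For illustration, the suggested change would amount to roughly this in the driver's read loop (a sketch with assumed variable names, not the actual diff; the change that closed this issue is #4396):

// Sketch only: drop empty lines silently instead of logging each one.
// "m" stands in for the logdriver message being consumed in the read loop.
if len(m.Line) == 0 {
	continue
}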

slim-bean added the help wanted, good first issue, and component/docker-driver labels Sep 30, 2021
owen-d added a commit that referenced this issue Sep 30, 2021