
How to disable this WARN machine_libipmctl.go:64] There are no NVM devices! #3198

Closed
zawadaa opened this issue Nov 14, 2022 · 9 comments · Fixed by #3359

zawadaa commented Nov 14, 2022

My environment is a virtual machine without an NVM device.
I start cAdvisor like this:

docker run \
  --volume=/:/rootfs:ro \
  --volume=/var/run:/var/run:ro \
  --volume=/sys:/sys:ro \
  --volume=/var/lib/docker/:/var/lib/docker:ro \
  --volume=/dev/disk/:/dev/disk:ro \
  --publish=8080:8080 \
  --detach=true \
  --name=cadvisor \
  --privileged \
  --device=/dev/kmsg \
  --restart=unless-stopped \
  gcr.io/cadvisor/cadvisor:v0.46.0

Periodically, this warning appears in the logs:

W1114 11:03:35.331845 1 machine_libipmctl.go:64] There are no NVM devices!

How can I disable this message?

@abraxaswd

It looks like this issue also appears on Ubuntu 22.04 LTS, where it is not possible to start the container and get any data into Prometheus.

Creatone self-assigned this Dec 1, 2022
Creatone (Collaborator) commented Dec 1, 2022

The gcr.io/cadvisor/cadvisor:v0.46.0 image is built with every build tag set, which means the binary is compiled from nvm/machine_libipmctl.go rather than nvm/machine_no_libipmctl.go. Unfortunately, this build tag is the only mechanism that controls whether NVM-related metrics are gathered.

To avoid the warning you need to build an image without the libipmctl tag. You can do that by removing lines 44-46 of deploy/Dockerfile:

-   if [ "$(uname --machine)" = "x86_64" ]; then \
-         export GO_TAGS="$GO_TAGS,libipmctl"; \
-   fi; \
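
For anyone following along, a rebuild without the tag would look roughly like this. This is only a sketch: it assumes you are working from a cAdvisor source checkout, that deploy/Dockerfile is the file quoted above, and that the repository root is a suitable build context; the image tag cadvisor:no-libipmctl is just an illustrative name.

git clone https://github.com/google/cadvisor.git
cd cadvisor
# remove the three GO_TAGS lines shown above from deploy/Dockerfile, then
# build the image from the repository root:
docker build -t cadvisor:no-libipmctl -f deploy/Dockerfile .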

Creatone closed this as completed Dec 1, 2022

skupjoe commented Jan 13, 2023

This is also happening for me on v0.47.1 using the image directly from gcr.io/cadvisor. I don't think it should be necessary to build this image ourselves to simply ignore this!

@abraxaswd

The problem on Ubuntu was fixed after installing updates for the Ubuntu 22.04 LTS kernel and containerd.io. The warning still appears, but data is transferred to Prometheus and Grafana. I used the gcr.io/cadvisor v0.46.0 container image, and it keeps working even after upgrading to v0.47.1.

zawadaa (Author) commented Jan 13, 2023

I don't think it should be necessary to build this image ourselves to simply ignore this!

Totally agree! This should be configurable at runtime, like many other metrics are. Too bad I don't know how to achieve that :-(
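
For context, the runtime mechanism referred to here is cAdvisor's --disable_metrics flag, which is passed after the image name and switches off other metric groups (the values below are a subset of the ones used later in this thread). As Creatone explained above, the NVM code path is selected at build time, so in these versions there is no flag value that silences this particular warning; the example only illustrates the existing flag.

docker run --detach --name=cadvisor --publish=8080:8080 \
  --volume=/:/rootfs:ro --volume=/var/run:/var/run:ro --volume=/sys:/sys:ro \
  --volume=/var/lib/docker/:/var/lib/docker:ro \
  gcr.io/cadvisor/cadvisor:v0.46.0 \
  --disable_metrics=percpu,sched,tcp,udp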


skupjoe commented Mar 30, 2023

Can this be reopened? I am noticing this on another deployment of cAdvisor on a completely different machine.

@namlh-eureka

I got this warning when using gcr.io/cadvisor/cadvisor-amd64:v0.47.2. I'm running cAdvisor as a Docker service in Swarm mode:

version: '3'

services:
  cadvisor-exporter:
    image: gcr.io/cadvisor/cadvisor-amd64:v0.47.2
    volumes:
      - /:/rootfs:ro
      - /var/run:/var/run:ro
      - /sys:/sys:ro
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - /var/lib/docker/:/var/lib/docker:ro
    networks:
      - monitoring
    deploy:
      mode: global
    command:
      - "--housekeeping_interval=30s"
      - "--docker_only=true"
      - "--disable_metrics=percpu,sched,tcp,udp,disk,diskIO,hugetlb,referenced_memory,cpu_topology,resctrl"

networks:
  monitoring:
    external: true
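
For reference, a stack file like this would typically be deployed along the following lines; the file name monitoring.yml is assumed here, and the external monitoring overlay network has to exist before the stack is created.

# create the external overlay network once (skip if it already exists),
# then deploy the stack
docker network create --driver overlay monitoring
docker stack deploy -c monitoring.yml monitoring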

iwankgb (Collaborator) commented Jul 28, 2023

It should clearly be reopened :)

@safoueneraddaoui

Is the problem resolved?
