
ZFS snapshots showing up among filesystems #2152

Closed
lapo-luchini opened this issue Sep 24, 2021 · 8 comments · Fixed by #2227

Comments

@lapo-luchini
Contributor

Host operating system: output of uname -a

FreeBSD (hostname) 13.0-RELEASE-p4 FreeBSD 13.0-RELEASE-p4 #0: Tue Aug 24 07:33:27 UTC 2021 root@amd64-builder.daemonology.net:/usr/obj/usr/src/amd64.amd64/sys/GENERIC amd64

node_exporter version: output of node_exporter --version

node_exporter, version 1.2.2 (branch: release-1.2, revision: 0)
  build user:       root
  build date:
  go version:       go1.16.7
  platform:         freebsd/amd64

node_exporter command line flags

/usr/local/bin/node_exporter --web.listen-address=:9100 --collector.textfile.directory=/var/tmp/node_exporter

Are you running node_exporter in Docker?

No.

What did you do that produced an error?

http http://localhost:9100/metrics | egrep 'device="[^@+]+@' | head

What did you expect to see?

Nothing at all (only actual filesystems, no snapshots).

What did you see instead?

% http http://localhost:9100/metrics | egrep 'device="[^@+]+@' | head
node_filesystem_avail_bytes{device="z/home@2017-01-01_05.30.00--9y",fstype="zfs",mountpoint="/home/.zfs/snapshot/2017-01-01_05.30.00--9y"} 2.020526154752e+12
node_filesystem_avail_bytes{device="z/home@2017-02-01_06.21.04--9y",fstype="zfs",mountpoint="/home/.zfs/snapshot/2017-02-01_06.21.04--9y"} 2.020526154752e+12
node_filesystem_avail_bytes{device="z/home@2017-03-01_05.30.00--9y",fstype="zfs",mountpoint="/home/.zfs/snapshot/2017-03-01_05.30.00--9y"} 2.020526154752e+12
node_filesystem_avail_bytes{device="z/home@2017-04-01_05.30.00--9y",fstype="zfs",mountpoint="/home/.zfs/snapshot/2017-04-01_05.30.00--9y"} 2.020526154752e+12
node_filesystem_avail_bytes{device="z/home@2017-05-01_05.30.00--9y",fstype="zfs",mountpoint="/home/.zfs/snapshot/2017-05-01_05.30.00--9y"} 2.020526154752e+12
node_filesystem_avail_bytes{device="z/home@2017-06-01_05.30.00--9y",fstype="zfs",mountpoint="/home/.zfs/snapshot/2017-06-01_05.30.00--9y"} 2.020526154752e+12
node_filesystem_avail_bytes{device="z/home@2017-07-01_05.30.00--9y",fstype="zfs",mountpoint="/home/.zfs/snapshot/2017-07-01_05.30.00--9y"} 2.020526154752e+12
node_filesystem_avail_bytes{device="z/home@2017-08-01_05.30.00--9y",fstype="zfs",mountpoint="/home/.zfs/snapshot/2017-08-01_05.30.00--9y"} 2.020526154752e+12
node_filesystem_avail_bytes{device="z/home@2017-09-01_06.11.59--9y",fstype="zfs",mountpoint="/home/.zfs/snapshot/2017-09-01_06.11.59--9y"} 2.020526154752e+12
node_filesystem_avail_bytes{device="z/home@2017-10-01_05.30.00--9y",fstype="zfs",mountpoint="/home/.zfs/snapshot/2017-10-01_05.30.00--9y"} 2.020526154752e+12

Please note that % mount | fgrep -c @ returns 0, i.e. no snapshot is explicitly mounted in the system.

Something even stranger: only a small fraction of the snapshots show up there.
Maybe those are the ones I have accessed at one time or another (via .zfs/snapshot/name)?
I would still expect node_exporter to ignore any and all snapshots… I can filter those lines out during scraping, but I would rather not enumerate them in the first place than do the work and then throw it away at scrape time.

@lapo-luchini
Contributor Author

Yes, it is indeed related to "opening" the snapshots:

% http http://localhost:9100/metrics | egrep 'device="[^@+]+@' | head
% ls /home/.zfs/snapshot
20190311        20190416        20190603        20190726
20190404        20190524-12.0p5 20190712        20190906
% http http://localhost:9100/metrics | egrep 'device="[^@+]+@' | head
node_filesystem_avail_bytes{device="z/home@20190311",fstype="zfs",mountpoint="/home/.zfs/snapshot/20190311"} 5.10893596672e+11
node_filesystem_avail_bytes{device="z/home@20190404",fstype="zfs",mountpoint="/home/.zfs/snapshot/20190404"} 5.10893596672e+11
node_filesystem_avail_bytes{device="z/home@20190416",fstype="zfs",mountpoint="/home/.zfs/snapshot/20190416"} 5.10893596672e+11
node_filesystem_avail_bytes{device="z/home@20190524-12.0p5",fstype="zfs",mountpoint="/home/.zfs/snapshot/20190524-12.0p5"} 5.10893596672e+11
node_filesystem_avail_bytes{device="z/home@20190603",fstype="zfs",mountpoint="/home/.zfs/snapshot/20190603"} 5.10893596672e+11
node_filesystem_avail_bytes{device="z/home@20190712",fstype="zfs",mountpoint="/home/.zfs/snapshot/20190712"} 5.10893596672e+11
node_filesystem_avail_bytes{device="z/home@20190726",fstype="zfs",mountpoint="/home/.zfs/snapshot/20190726"} 5.10893596672e+11
node_filesystem_avail_bytes{device="z/home@20190906",fstype="zfs",mountpoint="/home/.zfs/snapshot/20190906"} 5.10893596672e+11
node_filesystem_device_error{device="z/home@20190311",fstype="zfs",mountpoint="/home/.zfs/snapshot/20190311"} 0
node_filesystem_device_error{device="z/home@20190404",fstype="zfs",mountpoint="/home/.zfs/snapshot/20190404"} 0

I tried this on a different host from the one I opened the bug on.

I verified this on node_exporter versions 1.1.2 and 1.2.2.

@discordianfish
Member

We're just exposing /proc/1/mounts, so I'm surprised that mount is not showing them either. I'd assume mount also looks at /proc/mounts.

You could set --collector.filesystem.mount-points-exclude to exclude them. But yes, if they are not really mounted it sounds like a bug somewhere.
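As a stopgap, the exclusion could look something like this, building on the reporter's original command line (the regex is illustrative only; adjust it to your pool layout):

```shell
/usr/local/bin/node_exporter \
  --web.listen-address=:9100 \
  --collector.textfile.directory=/var/tmp/node_exporter \
  --collector.filesystem.mount-points-exclude='^/.+/\.zfs/snapshot($|/)'
```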

@lapo-luchini
Contributor Author

FreeBSD doesn't mount /proc by default, so I guess you're using some other source on that OS.
I'll try to read the sources (I speak no Go, but as far as reading goes I should be able to follow it…).

@lapo-luchini
Contributor Author

It seems that this comes from filesystem_bsd.go GetStats(), which in turn uses the getmntinfo(3) system call.
I'll check that in more detail as I find time, maybe with a minimal C program to dump the result of that libc call.

@KrisShannon

The ones you see are indeed the ones that have been mounted by accessing the .zfs/snapshot/name directory.

They don't show up in a normal call to mount because they are mounted with a special flag (MNT_IGNORE), which mount uses to skip over them unless it is called with -v (verbose):

mount.c - skip printing ignored mounts unless verbose mode is enabled

If you run mount -v | fgrep -c @ you should see the same number as the collector is returning.

@KrisShannon

A quick check of the FreeBSD source tree and commit history shows that this MNT_IGNORE flag is only used for ZFS snapshots.

The original motivation was to keep df from reporting on mounted snapshot directories by default.

It's not really documented but it definitely seems to be the intent that these are hidden mounts that don't need to be reported on.

I would agree with @lapo-luchini that these probably shouldn't be reported by the collector.

@lapo-luchini
Contributor Author

It's almost a one-liner; should I create a PR anyway?

@discordianfish
Member

discordianfish commented Nov 27, 2021

It's not really documented but it definitely seems to be the intent that these are hidden mounts that don't need to be reported on.

I would agree with @lapo-luchini that these probably shouldn't be reported by the collector.

+1

It's almost a one-liner; should I create a PR anyway?

Yes please!

lapo-luchini added a commit to lapo-luchini/node_exporter that referenced this issue Nov 27, 2021
Closes prometheus#2152.

Signed-off-by: Lapo Luchini <lapo@lapo.it>
SuperQ pushed a commit that referenced this issue Dec 1, 2021
* Ignore filesystems flagged as MNT_IGNORE.
Closes #2152.

Signed-off-by: Lapo Luchini <lapo@lapo.it>
SuperQ pushed a commit that referenced this issue Dec 4, 2021
* Ignore filesystems flagged as MNT_IGNORE.
Closes #2152.

Signed-off-by: Lapo Luchini <lapo@lapo.it>
oblitorum pushed a commit to shatteredsilicon/node_exporter that referenced this issue Apr 9, 2024
* Ignore filesystems flagged as MNT_IGNORE.
Closes prometheus#2152.

Signed-off-by: Lapo Luchini <lapo@lapo.it>