Expose per dataset ZFS metrics #1602
Is that available as an unprivileged user? Or does it require root permissions to read?
What version of ZFS is that? I'm not seeing it on my 0.7.5.
I tried reading it from a non-root user and it worked. EDIT: Also, can I work on this issue?
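For anyone who wants to repeat that check, here is a minimal sketch that simply tries to open and dump one per-dataset kstat file as whatever user runs it; the pool name and objset ID below are hypothetical:

```go
package main

import (
	"fmt"
	"io"
	"os"
)

func main() {
	// Hypothetical pool name and objset ID; adjust for your system.
	const path = "/proc/spl/kstat/zfs/tank/objset-0x4c3"

	f, err := os.Open(path)
	if err != nil {
		// A permission error here would mean root is required after all.
		fmt.Fprintln(os.Stderr, "open failed:", err)
		os.Exit(1)
	}
	defer f.Close()

	data, err := io.ReadAll(f)
	if err != nil {
		fmt.Fprintln(os.Stderr, "read failed:", err)
		os.Exit(1)
	}
	fmt.Print(string(data))
}
```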
I have some ideas about the metric and label names to accomplish this task. My proposals are very similar to the existing structure used to query metrics in the I/O file. I have two alternatives; please weigh in on which is better:

Alternative 1:
Pros:
Cons:

Alternative 2:
Pros:
Cons:
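For concreteness, here is a sketch of what the second alternative could look like with client_golang, using the node_zfs_zpool_dataset_nread name that appears later in this thread; the label names and help text are assumptions, not the merged implementation:

```go
package main

import (
	"fmt"

	"github.com/prometheus/client_golang/prometheus"
)

// One Desc, reused for every dataset. The existing pool-level metric
// (node_zfs_zpool_nread) is left untouched, so nothing breaks.
var datasetNread = prometheus.NewDesc(
	"node_zfs_zpool_dataset_nread",
	"Bytes read from the dataset, including reads served from the ARC.",
	[]string{"zpool", "dataset"}, // e.g. zpool="tank", dataset="tank/home"
	nil,
)

func main() {
	// One sample per objset file, with the value taken from its nread row.
	m := prometheus.MustNewConstMetric(
		datasetNread, prometheus.CounterValue, 20787773826774,
		"tank", "tank/home",
	)
	fmt.Println(m.Desc())
}
```

The appeal of this style is that the new Desc lives alongside the existing pool-level metrics rather than relabeling them, which is why it cannot break existing alerts.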
I like option 2 better; option 1 might break existing alerts / notifications / etc. But I would like to support this issue, since I was just looking for exactly that. The metrics are available starting with ZFS 0.8.
Thanks for the response @thoro.
Hello moderators Brian and Johannes! I would also like to let you know that the second alternative didn't work as planned. I'm guessing it's because of this. The number of [...] However, the query structure below worked like a charm:
FYI, I figured out what the problem was. As the docstring says, the combination of metric name, label names, and help string that I was using was getting overwritten and behaving erratically.
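For illustration, here is a minimal reproduction of that pitfall with client_golang, assuming the erratic behavior came from emitting one metric name with differing help strings; all names here are illustrative:

```go
package main

import (
	"fmt"

	"github.com/prometheus/client_golang/prometheus"
)

type badCollector struct{}

func (badCollector) Describe(ch chan<- *prometheus.Desc) {}

func (badCollector) Collect(ch chan<- prometheus.Metric) {
	// Same metric name, different help strings: inconsistent.
	d1 := prometheus.NewDesc("node_zfs_zpool_dataset_nread", "help A", []string{"dataset"}, nil)
	d2 := prometheus.NewDesc("node_zfs_zpool_dataset_nread", "help B", []string{"dataset"}, nil)
	ch <- prometheus.MustNewConstMetric(d1, prometheus.CounterValue, 1, "tank/a")
	ch <- prometheus.MustNewConstMetric(d2, prometheus.CounterValue, 2, "tank/b")
}

func main() {
	reg := prometheus.NewPedanticRegistry()
	reg.MustRegister(badCollector{})
	if _, err := reg.Gather(); err != nil {
		// Reports the help-string mismatch for the shared metric name.
		fmt.Println("gather error:", err)
	}
}
```

The fix is to build one Desc per metric name and reuse it for every dataset, as in the earlier sketch.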
Would the sum of node_zfs_zpool_dataset_nread be equal to node_zfs_zpool_nread? Does that make sense?
Actually, they don't match, because node_zfs_zpool_dataset_nread counts any read that happens on the dataset, including those served from the ARC cache. On the other hand, node_zfs_zpool_nread only counts reads that hit disk, including zpool scrubs, which don't show up in any dataset.
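A toy sketch (all numbers invented) of why the two counters can disagree in both directions:

```go
package main

import "fmt"

func main() {
	// Hypothetical per-dataset read byte counters for pool "tank".
	datasetNread := map[string]uint64{
		"tank/home":    500, // includes reads served from the ARC cache
		"tank/backups": 300,
	}
	var datasetTotal uint64
	for _, n := range datasetNread {
		datasetTotal += n
	}

	// Hypothetical pool-level counter: only reads that hit disk,
	// plus scrub I/O that belongs to no dataset.
	poolNread := uint64(600)

	fmt.Printf("sum over datasets: %d, pool counter: %d\n", datasetTotal, poolNread)
	// ARC hits inflate the dataset sum, while scrub reads appear
	// only in the pool counter, so neither bounds the other.
}
```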
Makes sense; then go with option 2.
I'm a bit curious about the node_zfs_zpool_dataset_nread naming. Would dataset there be the whole path to the dataset, which in my case can be up to 70 characters long, or would it just be the "basename", with the full path distinguishable by the dataset tag? Edit: just realised it's probably "dataset" since it's dataset statistics.
While we're at it, it would be nice to make the new metric names follow Prometheus conventions.
Using the example in my initial comment, it's 'ZPOOL NAME/DATASET NAME'.
@pmb311 yeah, I just wondered if the dataset name would end up in the metric name, since the dataset name could be something like "Remote systems/backups/webservers/web0001/customerdata/customer0200/staticdata" for example.
This functionality was added in #1632. Perhaps this issue can be closed?
Yes, this feature has been added! @aqw
Host operating system: output of uname -a:
Linux foo1.example.com 4.19.67 #1 SMP Thu Aug 22 16:06:16 EDT 2019 x86_64 GNU/Linux
node_exporter version: output of node_exporter --version:
node_exporter, version 0.18.1 (branch: release-0.18, revision: 0037b4808adeaa041eb5dd699f7427714ca8b8c0)
build user: foo
build date: 20191030-22:24:38
go version: go1.13.3
node_exporter command line flags:
/usr/sbin/node_exporter --collector.zfs
Are you running node_exporter in Docker?
No
What did you do that produced an error?
N/A
What did you expect to see?
We'd like to collect the per-dataset contents of /proc/spl/kstat/zfs/ZPOOL NAME/objset-*
E.g.:
root@foo:~$ cat /proc/spl/kstat/zfs/ZPOOL NAME/objset-0x4c3
49 1 0x01 7 1904 882869614188 7661358045725488
name type data
dataset_name 7 ZPOOL NAME/DATASET NAME
writes 4 162659962
nwritten 4 169357302418427
reads 4 19860562
nread 4 20787773826774
nunlinks 4 5326
nunlinked 4 5326
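For reference, here is a minimal sketch of how this objset format could be parsed. The reading of the kstat type column (4 = uint64 counter, 7 = string) is inferred from the sample above, and the embedded sample uses a hypothetical dataset name:

```go
package main

import (
	"bufio"
	"fmt"
	"strconv"
	"strings"
)

// A pared-down stand-in for one objset-* file; the dataset name is hypothetical.
const sample = `49 1 0x01 7 1904 882869614188 7661358045725488
name         type data
dataset_name 7    tank/home
writes       4    162659962
nwritten     4    169357302418427
reads        4    19860562
nread        4    20787773826774
nunlinks     4    5326
nunlinked    4    5326
`

func main() {
	counters := map[string]uint64{}
	dataset := ""

	sc := bufio.NewScanner(strings.NewReader(sample))
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) < 3 || fields[0] == "name" {
			continue // skip the "name type data" header
		}
		name, typ := fields[0], fields[1]
		// Join in case the value itself contains spaces (dataset names can).
		value := strings.Join(fields[2:], " ")
		if typ == "7" { // kstat string, i.e. the dataset_name row
			dataset = value
			continue
		}
		if n, err := strconv.ParseUint(value, 10, 64); err == nil {
			counters[name] = n // uint64 counters (type 4)
		}
		// Non-numeric rows, such as the leading kstat header line,
		// fail ParseUint and are silently skipped.
	}
	fmt.Printf("dataset=%s nread=%d nunlinked=%d\n",
		dataset, counters["nread"], counters["nunlinked"])
}
```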
What did you see instead?
The contents of this file are not collected by node_exporter