
Incorrect volume name or error "no Volumes were given" #11

Closed
yongzhang opened this issue Mar 27, 2017 · 4 comments

yongzhang commented Mar 27, 2017

Hi,

Can anyone explain why all of my volume names are reported as "devops-registry" in the metrics?

gluster_exporter version: v0.2.6
GlusterFS server version: 3.10.0

# HELP gluster_node_size_free_bytes Free bytes reported for each node on each instance. Labels are to distinguish origins
# TYPE gluster_node_size_free_bytes counter
gluster_node_size_free_bytes{hostname="10.10.0.100",path="/glusterfsvolumes/devops/devops-alertmanager/brick",volume="devops-registry"} 1.05489092608e+11
gluster_node_size_free_bytes{hostname="10.10.0.100",path="/glusterfsvolumes/devops/devops-es-data0/brick",volume="devops-registry"} 5.2823959552e+11
gluster_node_size_free_bytes{hostname="10.10.0.100",path="/glusterfsvolumes/devops/devops-es-data1/brick",volume="devops-registry"} 5.2823959552e+11
gluster_node_size_free_bytes{hostname="10.10.0.100",path="/glusterfsvolumes/devops/devops-es-data2/brick",volume="devops-registry"} 5.2823959552e+11
gluster_node_size_free_bytes{hostname="10.10.0.100",path="/glusterfsvolumes/devops/devops-grafana/brick",volume="devops-registry"} 1.0409398272e+10
gluster_node_size_free_bytes{hostname="10.10.0.100",path="/glusterfsvolumes/devops/devops-influxdb/brick",volume="devops-registry"} 9.925582848e+09
gluster_node_size_free_bytes{hostname="10.10.0.100",path="/glusterfsvolumes/devops/devops-prometheus/brick",volume="devops-registry"} 1.0427400192e+11
gluster_node_size_free_bytes{hostname="10.10.0.100",path="/glusterfsvolumes/devops/devops-registry/brick",volume="devops-registry"} 4.8898015232e+10
gluster_node_size_free_bytes{hostname="10.10.0.101",path="/glusterfsvolumes/devops/devops-alertmanager/brick",volume="devops-registry"} 1.05489092608e+11
gluster_node_size_free_bytes{hostname="10.10.0.101",path="/glusterfsvolumes/devops/devops-es-data0/brick",volume="devops-registry"} 5.2823959552e+11
gluster_node_size_free_bytes{hostname="10.10.0.101",path="/glusterfsvolumes/devops/devops-es-data1/brick",volume="devops-registry"} 5.2823959552e+11
gluster_node_size_free_bytes{hostname="10.10.0.101",path="/glusterfsvolumes/devops/devops-es-data2/brick",volume="devops-registry"} 5.2823959552e+11
gluster_node_size_free_bytes{hostname="10.10.0.101",path="/glusterfsvolumes/devops/devops-grafana/brick",volume="devops-registry"} 1.0409398272e+10
gluster_node_size_free_bytes{hostname="10.10.0.101",path="/glusterfsvolumes/devops/devops-influxdb/brick",volume="devops-registry"} 9.925582848e+09
gluster_node_size_free_bytes{hostname="10.10.0.101",path="/glusterfsvolumes/devops/devops-prometheus/brick",volume="devops-registry"} 1.04273997824e+11
gluster_node_size_free_bytes{hostname="10.10.0.101",path="/glusterfsvolumes/devops/devops-registry/brick",volume="devops-registry"} 4.8898019328e+10

Here's my gluster volume info:

Volume Name: devops-influxdb
Type: Replicate
Volume ID: 2803fc56-cdc6-469e-a57e-7982fc20023c
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 10.10.0.100:/glusterfsvolumes/devops/devops-influxdb/brick
Brick2: 10.10.0.101:/glusterfsvolumes/devops/devops-influxdb/brick
Options Reconfigured:
nfs.disable: on
transport.address-family: inet
 
Volume Name: devops-prometheus
Type: Replicate
Volume ID: 89c44318-e975-408d-9a6c-d15e44fddd0d
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 10.10.0.100:/glusterfsvolumes/devops/devops-prometheus/brick
Brick2: 10.10.0.101:/glusterfsvolumes/devops/devops-prometheus/brick
Options Reconfigured:
nfs.disable: on
transport.address-family: inet
 
Volume Name: devops-registry
Type: Replicate
Volume ID: 2bb07777-248d-46aa-863a-dad64a5207d0
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 10.10.0.100:/glusterfsvolumes/devops/devops-registry/brick
Brick2: 10.10.0.101:/glusterfsvolumes/devops/devops-registry/brick
Options Reconfigured:
nfs.disable: on
transport.address-family: inet

Error logs from syslog:

Mar 27 15:25:03 prdsh01glus01 gluster_exporter[23074]: time="2017-03-27T15:25:03+08:00" level=warning msg="no Volumes were given." source="main.go:286"
Mar 27 15:25:08 prdsh01glus01 gluster_exporter[23074]: time="2017-03-27T15:25:08+08:00" level=warning msg="no Volumes were given." source="main.go:286"
Mar 27 15:25:32 prdsh01glus01 gluster_exporter[23074]: time="2017-03-27T15:25:32+08:00" level=warning msg="no Volumes were given." source="main.go:286"
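
One detail worth noting: every brick is labeled "devops-registry", which is the last volume listed in the gluster volume info output above, while the brick paths themselves come out right. That pattern is typical of a loop that reuses one shared variable for the volume while building per-brick metrics. A minimal Go sketch of that pitfall (hypothetical; not necessarily the exporter's actual code):

package main

import "fmt"

type volume struct {
	name   string
	bricks []string
}

func main() {
	vols := []volume{
		{"devops-influxdb", []string{"10.10.0.100:/glusterfsvolumes/devops/devops-influxdb/brick"}},
		{"devops-registry", []string{"10.10.0.100:/glusterfsvolumes/devops/devops-registry/brick"}},
	}

	// Bug: every closure captures the same `current` variable, which is
	// only read after the loops finish, so all metrics see the last
	// volume's name while the brick path stays correct.
	var current volume
	var metrics []func() string
	for _, v := range vols {
		current = v
		for _, b := range v.bricks {
			brick := b // per-iteration copy: the path label comes out right
			metrics = append(metrics, func() string {
				return fmt.Sprintf("path=%q volume=%q", brick, current.name)
			})
		}
	}
	for _, m := range metrics {
		fmt.Println(m()) // both lines print volume="devops-registry"
	}
}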
yongzhang changed the title from "Incorrect volume name" to "Incorrect volume name or error "no Volumes were given"" Mar 27, 2017
ofesseler added the bug label Mar 27, 2017
ofesseler added this to the v0.2.7 milestone Mar 27, 2017
ofesseler (Owner) commented

@hiscal2015 thanks for reporting this error; it seems that gluster_node_size_free_bytes and gluster_node_size_total_bytes are affected.

The warning message is more or less a reminder that you're implicitly querying all volumes.
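
For anyone hitting the warning: it should go away once the volumes are named explicitly at startup. A hypothetical invocation, assuming a -volumes flag taking a comma-separated list (the flag name and format are an assumption based on the project README; check gluster_exporter -h on your build):

gluster_exporter -volumes="devops-influxdb,devops-prometheus,devops-registry"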

yongzhang (Author) commented

@ofesseler Looking forward to v0.2.7; this is a wonderful exporter!

ofesseler self-assigned this Mar 28, 2017
ofesseler added a commit that referenced this issue Mar 28, 2017: "To verify issue #11 and test volume names"
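
A standalone check in the same spirit as that commit can be sketched in Go: parse gluster volume info --xml and make sure each parsed volume name is distinct. The XML element path cliOutput/volInfo/volumes/volume/name is an assumption based on GlusterFS's --xml output; the exporter's real parsing code may differ.

package main

import (
	"encoding/xml"
	"fmt"
	"os/exec"
)

// cliOutput models just the volume names from `gluster volume info --xml`.
type cliOutput struct {
	Volumes []struct {
		Name string `xml:"name"`
	} `xml:"volInfo>volumes>volume"`
}

func main() {
	out, err := exec.Command("gluster", "volume", "info", "--xml").Output()
	if err != nil {
		panic(err)
	}
	var cli cliOutput
	if err := xml.Unmarshal(out, &cli); err != nil {
		panic(err)
	}
	seen := map[string]bool{}
	for _, v := range cli.Volumes {
		if seen[v.Name] {
			fmt.Println("duplicate volume name parsed:", v.Name)
		}
		seen[v.Name] = true
		fmt.Println(v.Name)
	}
}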
yongzhang (Author) commented

@ofesseler Thanks for fixing this. Can you upload the latest release to the "Releases" tab? It seems I'm having some trouble building it myself... Thanks.

ofesseler (Owner) commented

I made a new release.
