most variables not exported due to "unsupported value type" #44
Comments
Hey there @x3nb63. Can you share the full command line arguments or query string parameters sent to the exporter? A quick check confirms that you should only see 'false' for exporting a variable if the argument that limits what to export is set. There is also a default filter applied, based on the most common attributes folks seem to be interested in, that is probably coming into play. You are also correct to note that the UPS name isn't exported as a label, since the exporter expects either that there is only one UPS or that the UPS to be scraped is passed as a parameter (allowing you to rely on the job name or other labels automatically set by Prometheus).
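As a sketch of the kind of invocation the maintainer is asking about (the flag name is taken from the exporter's README; the exact variable list shown here is an assumption for illustration, not the real default):

```shell
# Hypothetical invocation: explicitly restrict which NUT variables
# are exported. Any variable not in this list is filtered out, which
# shows up in the logs as the variable not being exported.
./nut_exporter \
  --nut.vars.enable="battery.charge,battery.voltage,ups.load,ups.status"
```

If that flag was never set, the built-in default filter mentioned above still applies, so most variables reported by upsd would be skipped either way.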
hello @DRuggeri; here is what my Prometheus presumably should be doing, done by hand. No labels except for
the variables in the default filter are reported by that UPS, all of them:
as for the last part: then my understanding of the scrape config may be wrong. I derived it from the example, but I actually want this one job to scrape three UPSes. So it looks like this:
which results in getting values I can't distinguish in queries. EDIT: note that actually all three upsmon servers know all three UPSes each; only, per server, one is primary while two are secondary. My hope was/is to figure this out with relabel_configs and/or a PromQL query ...
alright, part if not the origin of this issue may be my misunderstanding of who talks to whom in the NUT system: my ... but I don't want to run 3 nut-exporters (one per ... So my hope was/is that a single exporter can do this, pretty much no matter how many UPSes I have. (And to complicate it further: these servers share these UPSes because they each have multiple power supplies ... so I actually have multiple sets of servers + UPSes, which may form a scrape job, but ideally not ... which is what needs sorting out in the queries, whether a failed power supply is involved ... I thought the labels would allow this.)
all right, for now I handle it by creating three scrape jobs. It works, but it contradicts my understanding of Prometheus, where scrape jobs are the things that discover what exists, pretty much continuously. For instance, I have just one scrape job per K8s cluster, and each discovers all Pods and other objects for the entire cluster. Following that, my ideal scrape config for UPSes would be:
where each ...
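A single multi-target scrape job along those lines can be sketched like this (a sketch only: it assumes the exporter accepts the UPS name as a `ups` query parameter, the way multi-target exporters such as blackbox_exporter do; the hostnames, port, and metrics path are placeholders):

```yaml
scrape_configs:
  - job_name: nut
    metrics_path: /ups_metrics        # assumed metrics path
    static_configs:
      - targets: [ups1, ups2, ups3]   # UPS names, not real addresses
    relabel_configs:
      # Pass the UPS name to the exporter as the ?ups= query parameter ...
      - source_labels: [__address__]
        target_label: __param_ups
      # ... keep it as a label so queries can tell the UPSes apart ...
      - source_labels: [__param_ups]
        target_label: ups
      # ... and point the actual scrape at the single exporter instance.
      - target_label: __address__
        replacement: nut-exporter.example.com:9199   # placeholder address
```

With this pattern, one exporter serves all three UPSes, and every sample carries a `ups` label for distinguishing them in PromQL.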
Hi, I'm a bit late to the party, but maybe someone will find this useful in the future. I also had the issue that nothing except 3 variables actually got exported. I just set
EDIT:
I am running the docker image
druggeri/nut_exporter:3.1.1@sha256:0d9a0a00554081876178369ab9d46717e002fcf550b18dcd85f98c315438b524
and experience pretty much all variables not being exported, except ups.load, ups.status, battery.charge, battery.voltage, and two input.* ones. See these log lines:
These values are numbers, INTEGER mostly, and some it appears to convert from STRING ... but then it decides not to export them.
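The behavior being described can be pictured with a small standalone sketch (this is not the exporter's actual code, just an illustration of the kind of numeric coercion where values that don't parse as numbers get dropped instead of exported):

```python
def coerce(value: str):
    """Return a float if the NUT variable value parses as a number, else None."""
    try:
        return float(value)   # "100", "13.5" -> usable as a gauge value
    except ValueError:
        return None           # e.g. "OL" -> unsupported value type, not exported

raw = {"battery.charge": "100", "ups.load": "9", "ups.status": "OL"}
exported = {name: num for name, num in
            ((n, coerce(v)) for n, v in raw.items()) if num is not None}
print(exported)   # the non-numeric ups.status is dropped
```

The puzzle in this issue is that values which clearly are numeric still end up on the "not exported" side, which points at a filter rather than at type conversion.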
Another observation is: those that are exported are missing labels. For example:
That is all that gets scraped. Both labels are from Prometheus, none from nut-exporter; not even the UPS name "ups1", which it successfully recognized. And I don't use
relabel_configs:
so far.