Hi,
First, an idea for an improvement: for manual metrics, provide an extension to `refresh_metrics/1` that selects a specific metric group to update (e.g. `:beam`, `:application`, etc.).
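As a rough sketch of what that could look like (the second argument, and the assumption that this lives on `PromEx.ManualMetricsManager`, are mine, not an existing API):

```elixir
# Today: refresh_metrics/1 refreshes every manual metric.
PromEx.ManualMetricsManager.refresh_metrics(MyApp.PromEx)

# Proposed (hypothetical): refresh only the named metric groups.
PromEx.ManualMetricsManager.refresh_metrics(MyApp.PromEx, [:beam, :application])
```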
Now a question, which may be more of a Prometheus question. I am scraping metrics every 30 seconds; these are important but cheap to generate. I want to add a whole set of new metrics. These are more expensive to generate, and there will be a significant number of them, but they can be scraped infrequently.
I really don't want to be generating these metrics every 30 seconds, but I don't see any way to create a second metrics server (so that there would be two jobs and two scrapes).
What I thought of doing was to generate the metrics hourly, send them with the other metrics in the next scrape, and then delete them from the ETS table so they won't be sent to Prometheus in any future scrapes until another hour has passed. This is mostly to ensure that we don't end up with crazy high cardinality. A sketch of the idea is below.
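Roughly along these lines; `generate_expensive_metrics/0` and `clear_expensive_metrics/0` are hypothetical stand-ins for however the values get written to and deleted from the exporter's ETS table, and the 90-second linger is an assumption that gives the 30-second scraper a couple of chances to pick the values up:

```elixir
defmodule MyApp.ExpensiveMetrics do
  @moduledoc """
  Sketch of the generate-hourly / clear-after-scrape idea.
  The two private helpers are placeholders, not real PromEx calls.
  """
  use GenServer

  @hour :timer.hours(1)
  # Keep the values visible long enough for the 30s scraper to see them.
  @linger :timer.seconds(90)

  def start_link(opts), do: GenServer.start_link(__MODULE__, opts, name: __MODULE__)

  @impl true
  def init(_opts) do
    send(self(), :generate)
    {:ok, %{}}
  end

  @impl true
  def handle_info(:generate, state) do
    generate_expensive_metrics()
    # Clear shortly after the next scrape, regenerate in an hour.
    Process.send_after(self(), :clear, @linger)
    Process.send_after(self(), :generate, @hour)
    {:noreply, state}
  end

  def handle_info(:clear, state) do
    clear_expensive_metrics()
    {:noreply, state}
  end

  # Placeholders -- the real versions would write/delete the series
  # in whatever ETS table the exporter reads from.
  defp generate_expensive_metrics, do: :ok
  defp clear_expensive_metrics, do: :ok
end
```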
I'd have to use a non-standard PromQL query in the Grafana dashboard, of course, using `last_over_time` or something equivalent.
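For example, something like this, where the metric name and the 2h lookback window are just illustrative (the window needs to be comfortably longer than the hourly generation interval):

```promql
# Carry the most recent hourly sample forward so panels don't show gaps.
last_over_time(myapp_expensive_metric[2h])
```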
Can you think of any better ways to do this?
Thanks and Happy New Year!!!