Do you have a Grafana instance? frigga makes sure you don't scrape metrics in Prometheus that you don't present in Grafana dashboards.

Scrape only the relevant metrics in Prometheus, according to your Grafana dashboards - see the before and after snapshots. frigga generates `keep` filters in `metric_relabel_configs` and adds them to your `prometheus.yml` file.

frigga is especially useful for Grafana Cloud customers, since pricing is based on the number of ingested DataSeries.
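For reference, a `keep` filter in `metric_relabel_configs` generally looks like the snippet below. This is a hand-written sketch, not frigga's actual output: the job name, target, and metric names in the regex are placeholders, and the real regex is built from the metrics frigga finds in your dashboards.

```yaml
# Illustrative only - frigga generates the regex from your Grafana dashboards
scrape_configs:
  - job_name: node-exporter            # placeholder job
    static_configs:
      - targets: ["localhost:9100"]    # placeholder target
    metric_relabel_configs:
      # Keep only the metric names used in the dashboards; every other
      # series is dropped before ingestion
      - source_labels: [__name__]
        regex: "node_cpu_seconds_total|node_memory_MemAvailable_bytes|up"
        action: keep
```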
Requires Python 3.6.7+ (for the pip installation)
```bash
$ pip install frigga
```

Or use the Docker image:

```bash
docker run --rm -it unfor19/frigga
```

For ease of use, add an alias to your `~/.bashrc` file:

```bash
alias frigga="docker run --rm -it unfor19/frigga"
```
The usage text below is auto-generated by unfor19/replacer-action; see readme.yml
```
Usage: frigga [OPTIONS] COMMAND [ARGS]...

Options:
  -ci, --ci  Use this flag to avoid confirmation prompts
  --help     Show this message and exit.

Commands:
  client-start       Alias: cs
  grafana-list       Alias: gl
  prometheus-apply   Alias: pa
  prometheus-get     Alias: pg
  prometheus-reload  Alias: pr
  version            Print the installed version
  webserver-start    Alias: ws
```
- Grafana - Import the dashboard "frigga - Jobs Usage" (ID: 12537) to Grafana, and check out the number of DataSeries
- Grafana - Generate an API Key for Viewer
- frigga - Get the list of metrics that are used in your Grafana dashboards

  ```bash
  $ frigga gl  # gl is grafana-list, or good luck :)
  Grafana url [http://localhost:3000]: http://my-grafana.grafana.net
  Grafana api key: (hidden)
  >> [LOG] Getting the list of words to ignore when scraping from Grafana ...
  >> [LOG] Found a total of 269 unique metrics to keep
  ```

  `.metrics.json` - automatically generated in pwd

  ```json
  {
    "all_metrics": [
      "cadvisor_version_info",
      "container_cpu_usage_seconds_total",
      "container_last_seen",
      "container_memory_max_usage_bytes",
      ...
    ]
  }
  ```
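  To sanity-check the generated file, you can count the collected metric names with jq (assuming jq is installed; any JSON tool would do):

  ```bash
  # Count how many metric names frigga collected from the dashboards
  $ jq '.all_metrics | length' .metrics.json
  269
  ```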
- Add the following snippet to the bottom of your `prometheus.yml` file. Check the example in docker-compose/prometheus-original.yml

  ```yaml
  ---
  name: frigga
  exclude_jobs: []
  ```
- frigga - Use the `.metrics.json` file to apply the rules to your existing `prometheus.yml`

  ```bash
  $ frigga pa  # pa is prometheus-apply, or pam-tada-dam
  Prom yaml path [docker-compose/prometheus.yml]: /etc/prometheus/prometheus.yml
  Metrics json path [./.metrics.json]: /home/willywonka/.metrics.json
  >> [LOG] Reading documents from docker-compose/prometheus.yml ...
  >> [LOG] Done! Now reload docker-compose/prometheus.yml with 'frigga pr -u http://localhost:9090'
  ```
- As mentioned in the previous step, reload the `prometheus.yml` into Prometheus. Here are two ways of doing it:
  - "Kill" Prometheus

    ```bash
    $ docker exec $PROM_CONTAINER_NAME kill -HUP 1
    ```

  - Send a POST request to `/-/reload` - this requires Prometheus to be loaded with `--web.enable-lifecycle`, for example, see docker-compose.yml

    ```bash
    $ frigga prometheus-reload --prom-url http://localhost:9090
    ```

    Or with curl

    ```bash
    $ curl -X POST http://localhost:9090/-/reload
    ```
- "Kill" Prometheus
- Make sure the `prometheus.yml` was loaded successfully to Prometheus

  ```bash
  $ docker logs --tail 10 $PROM_CONTAINER_NAME
  level=info ts=2020-06-27T15:45:34.514Z caller=main.go:799 msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml
  level=info ts=2020-06-27T15:45:34.686Z caller=main.go:827 msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml
  ```
- Grafana - Now check the "frigga - Jobs Usage" dashboard; the numbers should be significantly lower (up to 60% or even more)
- git clone this repository
- Run Docker daemon (Docker for Desktop)
- Make sure ports 3000, 8080, 9100 are not in use (state=closed)

  ```bash
  docker run --rm -it --network=host unfor19/net-tools nmap -p 8080,3000,9100 -n localhost
  ```
- Deploy the services locally: Prometheus, Grafana, node-exporter and cadvisor

  ```bash
  $ bash docker-compose/deploy_stack.sh
  Creating network "frigga_net1" with the default driver
  ...
  >> Grafana - Generating API Key - for Viewer
  eyJrIjoiT29hNGxGZjAwT2hZcU1BSmpPRXhndXVwUUE4ZVNFcGQiLCJuIjoibG9jYWwiLCJpZCI6MX0=
  # Save this key ^^^
  ```
- Open your browser, navigate to http://localhost:3000
  - Username and password are admin:admin
  - You'll be prompted to update your password, so just keep using `admin` or hit Skip
- Go to the Jobs Usage dashboard; you'll see that Prometheus is processing ~2800 DataSeries
- Get all the metrics that are used in your Grafana dashboards

  ```bash
  $ export GRAFANA_API_KEY=the-key-that-was-generated-in-the-deploy-locally-step
  $ frigga gl -gurl http://localhost:3000 -gkey $GRAFANA_API_KEY
  >> [LOG] Getting the list of words to ignore when scraping from Grafana ...
  >> [LOG] Found a total of 269 unique metrics to keep
  # Generated .metrics.json in pwd
  ```
- Check the number of data series BEFORE filtering with frigga

  ```bash
  $ frigga pg -u http://localhost:9090  # prometheus-get
  >> [LOG] Total number of data-series: 1863
  ```
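  If you want to cross-check this number against Prometheus itself, a generic PromQL query counts the currently active series (this is plain Prometheus, not part of frigga; the URL assumes the local demo setup):

  ```bash
  # Count all time series Prometheus currently exposes (any metric name)
  $ curl -s 'http://localhost:9090/api/v1/query' --data-urlencode 'query=count({__name__=~".+"})'
  ```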
- Apply the rules to `prometheus.yml`, keep the defaults

  ```bash
  $ frigga pa  # prometheus-apply
  Prom yaml path [docker-compose/prometheus.yml]:
  Metrics json path [./.metrics.json]:
  ...
  >> [LOG] Done! Now reload docker-compose/prometheus.yml with 'docker exec $PROM_CONTAINER_NAME kill -HUP 1'
  ```
- Reload `prometheus.yml` to Prometheus

  ```bash
  $ frigga pr -u http://localhost:9090  # prometheus-reload
  >> [LOG] Successfully reloaded Prometheus - http://localhost:9090/-/reload
  ```
- Check the number of data series AFTER filtering with frigga

  ```bash
  $ frigga pg -u http://localhost:9090  # prometheus-get
  >> [LOG] Total number of data-series: 898
  # Decreased from 1863 to 898 - a 51% reduction!
  ```
- Go to Jobs Usage; you'll see that Prometheus is processing only ~898 DataSeries (previously ~1863)
  - In case you don't see the change, don't forget to hit the refresh button
- Cleanup

  ```bash
  $ docker-compose -p frigga --file docker-compose/docker-compose.yml down
  ```
- Grafana Cloud - the main reason for writing this tool was to lower costs as a Grafana Cloud customer; this is achieved by sending only the relevant DataSeries to Grafana Cloud
- Saves disk space on the machine running Prometheus
- Improves PromQL performance by querying fewer metrics; noticeable only when processing high volumes
- After applying the rules, `prometheus.yml` becomes less readable. Since it's not a file you edit on a daily basis, that's acceptable
- The memory usage of Prometheus increases slightly, around ~30MB; not significant, but worth mentioning
- If you intend to use more metrics, for example after adding a new dashboard that uses new metrics, you'll need to repeat the process: `frigga gl` and `frigga pa` (see the sketch below)
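A minimal sketch of that refresh cycle, using the same commands and defaults as in the local demo above (Grafana on :3000, Prometheus on :9090, and a valid GRAFANA_API_KEY):

```bash
# Regenerate .metrics.json from the updated dashboards
frigga gl -gurl http://localhost:3000 -gkey $GRAFANA_API_KEY
# Re-apply the keep filters to prometheus.yml (accept the prompts' defaults)
frigga pa
# Reload the updated configuration into Prometheus
frigga pr -u http://localhost:9090
```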
Report issues/questions/feature requests in the Issues section.

Pull requests are welcome! Ideally, create a feature branch and an issue for every single change you make. These are the steps:
- Fork this repo
- Create your feature branch from master (`git checkout -b my-new-feature`)
- Install from source

  ```bash
  $ git clone https://github.com/${GITHUB_OWNER}/frigga.git && cd frigga
  ...
  $ pip install --upgrade pip
  ...
  $ python -m venv ./ENV
  $ . ./ENV/bin/activate
  ...
  $ (ENV) pip install --editable .
  ...
  # Done! Now when you run 'frigga' it will get automatically updated when you modify the code
  ```

- Add the code of your new feature
- Test - make sure the `frigga grafana-list` and `frigga prometheus-apply` commands work
- Commit your remarkable changes (`git commit -am 'Added new feature'`)
- Push to the branch (`git push --set-upstream origin my-new-feature`)
- Create a new Pull Request and tell us about your changes
Created and maintained by Meir Gabay
This project is licensed under the MIT License - see the LICENSE file for details