Cluster Monitoring stack for ARM / X86-64 platforms

The Prometheus Operator for Kubernetes provides easy monitoring definitions for Kubernetes services, as well as deployment and management of Prometheus instances.

This has been tested on a hybrid ARM64 / X86-64 Kubernetes cluster deployed as described in this article.

This repository collects Kubernetes manifests, Grafana dashboards, and Prometheus rules, combined with documentation and scripts, to provide easy-to-operate, end-to-end Kubernetes cluster monitoring with Prometheus using the Prometheus Operator.

The content of this project is written in jsonnet and is an extension of the fantastic kube-prometheus project.

To continue using my previous stack with manifests and previous versions of the operator and components, use the legacy repo tag from: https://github.com/carlosedp/prometheus-operator-ARM/tree/legacy.

Components included in this package:

  • The Prometheus Operator
  • Highly available Prometheus
  • Highly available Alertmanager
  • Prometheus node-exporter
  • kube-state-metrics
  • CoreDNS
  • Grafana
  • SMTP relay to Gmail for Grafana notifications

There are additional modules (disabled by default) to monitor other components of the infrastructure. These can be enabled or disabled in the vars.jsonnet file by setting the corresponding module entry in installModules to true or false.

The additional modules are:

  • ARM_exporter to generate temperature metrics
  • MetalLB metrics
  • Traefik metrics
  • ElasticSearch metrics
  • APC UPS metrics

There are also options to set the ingress domain suffix and enable persistence for Grafana and Prometheus.

After changing these parameters, rebuild the manifests with make.
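
For reference, the relevant part of vars.jsonnet looks roughly like the sketch below. The individual key names (the module entries and the persistence block) are illustrative assumptions based on the description above, so check the actual file for the exact spelling:

{
  // Optional modules, disabled by default (key names are illustrative)
  installModules: {
    armExporter: false,       // ARM_exporter temperature metrics
    metallbExporter: false,   // MetalLB metrics
    traefikExporter: false,   // Traefik metrics
    elasticExporter: false,   // ElasticSearch metrics
    upsExporter: false,       // APC UPS metrics
  },

  // Ingress URL suffix, e.g. your node IP with nip.io
  suffixDomain: '192.168.1.15.nip.io',

  // Persistence for Prometheus and Grafana (illustrative key name)
  enablePersistence: {
    prometheus: false,
    grafana: false,
  },
}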

Quickstart

The repository already provides a set of compiled manifests to be applied to the cluster. The deployment can be customized through the jsonnet files.

To simply deploy the stack, run:

$ make deploy

# Or manually:

$ kubectl apply -f manifests/

# It can take a few seconds for the apply above to fully create the following resources, so verify they are ready before proceeding.
$ until kubectl get customresourcedefinitions servicemonitors.monitoring.coreos.com ; do date; sleep 1; echo ""; done
$ until kubectl get servicemonitors --all-namespaces ; do date; sleep 1; echo ""; done

$ kubectl apply -f manifests/ # This command may sometimes need to be run twice (to work around a race condition).

If you get an error from applying the manifests, run make deploy or kubectl apply -f manifests/ again. Sometimes the CRDs required by some of the resources have not been registered yet.

Customizing for K3s

To run a K3s cluster with the monitoring stack on it, follow these steps:

# Download K3s binary
wget https://github.com/rancher/k3s/releases/download/`curl -s https://api.github.com/repos/rancher/k3s/releases/latest | grep -oP '"tag_name": "\K(.*)(?=")'`/k3s && chmod +x k3s

# Move to your path
sudo mv k3s /usr/local/bin

# Start K3s
sudo k3s server --docker &

To generate the metrics with all metadata required by the dashboards, K3s needs to be started with Docker as the runtime.

Now, to deploy the monitoring stack on your K3s cluster, there are four parameters to be configured in vars.jsonnet:

  1. Set k3s.enabled to true.
  2. Change your K3s master node IP (your VM or host IP) in k3s.master_ip.
  3. Edit suffixDomain to have your node IP with the .nip.io suffix. This will be your ingress URL suffix.
  4. Set the traefikExporter parameter to true to collect Traefik metrics and deploy its dashboard.

After changing these values, run make to build the manifests and k3s kubectl apply -f manifests/ to apply the stack to your cluster. In case of errors on some resources, re-run the command.
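
As a reference, the K3s-related settings in vars.jsonnet end up looking roughly like this (the IP address is only an example, and the exact key layout is an assumption based on the steps above):

{
  k3s: {
    enabled: true,
    master_ip: '192.168.1.15',   // your K3s master / VM / host IP
  },

  // Ingress URL suffix built from the node IP
  suffixDomain: '192.168.1.15.nip.io',

  installModules: {
    traefikExporter: true,       // collect Traefik metrics and deploy its dashboard
  },
}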

Now you can open the applications. To list the created ingresses, run k3s kubectl get ingress --all-namespaces.

Some dashboards show no values because some cAdvisor metrics do not have complete metadata when K3s is started with the default script or without the --docker argument. Check the open issues for more information.

Updating the ingress suffixes

To avoid rebuilding all manifests, there is a make target to update the Ingress URL suffix to a different one (using nip.io) matching your host IP. Run make change_suffix IP="[IP-ADDRESS]" to change the ingress route IP for Grafana, Prometheus and Alertmanager and reapply the manifests. If you have a K3s cluster, run make change_suffix IP="[IP-ADDRESS]" K3S=k3s.

Customizing

The content of this project consists of a set of jsonnet files making up a library to be consumed.
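
Customizations typically follow the upstream kube-prometheus pattern of importing the library and overriding its configuration. The snippet below is only an illustration of that pattern, not this project's actual main.jsonnet (whose import paths and field names may differ):

local kp = (import 'kube-prometheus/kube-prometheus.libsonnet') + {
  _config+:: {
    namespace: 'monitoring',
  },
};

// Render one manifest per object in the Prometheus component
{ ['prometheus-' + name]: kp.prometheus[name] for name in std.objectFields(kp.prometheus) }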

Pre-reqs

The project requires jsonnet-bundler and the jsonnet compiler. The Makefile does the heavy lifting of installing them; you need Go already installed:

git clone https://github.com/carlosedp/cluster-monitoring
cd cluster-monitoring
make vendor
# Change the jsonnet files...
make

After this, a new customized set of manifests is built into the manifests directory. To apply it to your cluster, run:

make deploy

To uninstall, run:

make teardown

Images

This project depends on the following images (all support ARM, ARM64 and AMD64 through manifests):

  • Alertmanager
  • Blackbox_exporter
  • Node_exporter
  • Snmp_exporter
  • Prometheus
  • ARM_exporter
  • Prometheus-operator
  • Prometheus-adapter
  • Grafana
  • Kube-state-metrics
  • Addon-resizer (note: this image is a clone of the AMD64, ARM64 and ARM images joined with a manifest; it is cloned and generated by the build_images.sh script)
  • configmap_reload
  • prometheus-config-reloader
  • SMTP-server
  • Kube-rbac-proxy
