Installs the kube-prometheus stack to provide easy-to-operate, end-to-end Kubernetes cluster monitoring with Prometheus and exporters, using the Prometheus Operator. The included chart provides:
- Prometheus Operator - provides Kubernetes-native deployment and management of Prometheus and related monitoring components.
- Prometheus - The open-source monitoring toolkit.
- Prometheus node-exporter - Prometheus exporter for hardware and OS metrics exposed by *NIX kernels.
- Prometheus Adapter for Kubernetes Metrics APIs - leverages the metrics collected by Prometheus to allow autoscaling based on those metrics.
- kube-state-metrics - A service that listens to the Kubernetes API server and generates metrics about the state of the objects.
Follow the private key docs to obtain your Coralogix private key.
The Prometheus Operator requires a secret called coralogix-keys, containing the relevant private key under a secret key called PRIVATE_KEY, inside the same namespace that the chart is installed in:
kubectl create secret generic coralogix-keys \
--from-literal=PRIVATE_KEY=<private-key>
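Assuming the chart will be installed into the monitoring namespace (as in the helm command further down), the same command with an explicit namespace, plus a quick check of the result, might look like this:

kubectl create secret generic coralogix-keys \
  --namespace=monitoring \
  --from-literal=PRIVATE_KEY=<private-key>

kubectl get secret coralogix-keys --namespace=monitoring -o yaml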
The created secret should look like this:
apiVersion: v1
data:
  PRIVATE_KEY: <base64-encoded-private-key>
kind: Secret
metadata:
  name: coralogix-keys
  namespace: <the-release-namespace>
type: Opaque
Depending on your region, you need to configure the correct Coralogix endpoint. The available endpoints are listed at
https://coralogix.com/docs/coralogix-endpoints/.
Set the endpoint under the global key in values.yaml:
#values.yaml:
---
global:
  endpoint: "<remote_write_endpoint>"
The remote write configuration can also be set explicitly on the Prometheus spec:
#values.yaml:
---
kube-prometheus-stack:
  prometheus:
    prometheusSpec:
      secrets: [] ## important when not using a secret
      remoteWrite:
        - url: '<remote_write_endpoint>'
          name: 'crx'
          remoteTimeout: 120s
          bearerToken: '<private_key>'
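If the Coralogix Helm repository has not been added yet, add it before installing (the repository URL below is the publicly documented Coralogix charts repository; adjust it if your environment uses a mirror):

helm repo add coralogix-charts-virtual https://cgx.jfrog.io/artifactory/coralogix-charts-virtual
helm repo update

Then install the release: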
helm upgrade --install prometheus-coralogix coralogix-charts-virtual/prometheus-operator-coralogix \
--namespace=monitoring \
-f values.yaml
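Once the release is installed, a quick sanity check (resource names depend on the release name, so treat these as illustrative) could be:

kubectl get pods --namespace=monitoring
kubectl get prometheus --namespace=monitoring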
To uninstall the release:
helm uninstall prometheus-coralogix \
--namespace=monitoring
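If the coralogix-keys secret is no longer needed after uninstalling, it can be removed as well (Helm does not delete it, since it was created manually):

kubectl delete secret coralogix-keys --namespace=monitoring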
This chart uses the kube-prometheus-stack chart.
To add labels to metrics via the Prometheus configuration, you can use the externalLabels key in the values.yaml file, as shown below:
kube-prometheus-stack:
  prometheus:
    prometheusSpec:
      externalLabels:
        cluster: MyCluster
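After editing values.yaml, re-running the same upgrade command shown above applies the new external labels to the running Prometheus:

helm upgrade --install prometheus-coralogix coralogix-charts-virtual/prometheus-operator-coralogix \
--namespace=monitoring \
-f values.yaml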
For this example, we are going to deploy an app that exposes metrics:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
      - name: example-app
        image: fabxc/instrumented_app
        ports:
        - name: web
          containerPort: 8080
Once we have the application running, we need to create a Prometheus PodMonitor.
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: example-app
  labels:
    team: frontend
spec:
  selector:
    matchLabels:
      app: example-app
  podMetricsEndpoints:
  - port: web
As you can see in this example, the selector matches every pod that has the label app: example-app.
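Assuming both manifests are saved locally (the file names below are illustrative), applying them and confirming that the PodMonitor exists might look like:

kubectl apply -f example-app-deployment.yaml
kubectl apply -f example-app-podmonitor.yaml
kubectl get podmonitor example-app

The next example deploys node-exporter as a DaemonSet in a dedicated monitoring namespace, so that host-level metrics are exposed on every node: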
apiVersion: v1
kind: Namespace
metadata:
  name: monitoring
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  labels:
    app.kubernetes.io/component: exporter
    app.kubernetes.io/name: node-exporter
  name: node-exporter
  namespace: monitoring
spec:
  selector:
    matchLabels:
      app.kubernetes.io/component: exporter
      app.kubernetes.io/name: node-exporter
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 2
  template:
    metadata:
      labels:
        app.kubernetes.io/component: exporter
        app.kubernetes.io/name: node-exporter
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/path: '/metrics'
        prometheus.io/port: "9100"
    spec:
      hostPID: true
      hostIPC: true
      hostNetwork: true
      enableServiceLinks: false
      containers:
      - name: node-exporter
        image: prom/node-exporter
        imagePullPolicy: IfNotPresent
        securityContext:
          privileged: true
        args:
        - '--path.sysfs=/host/sys'
        - '--path.rootfs=/root'
        - --collector.filesystem.ignored-mount-points=^/(dev|proc|sys|var/lib/docker/.+|var/lib/kubelet/pods/.+)($|/)
        - --collector.netclass.ignored-devices=^(veth.*)$
        ports:
        - containerPort: 9100
          protocol: TCP
        resources:
          limits:
            cpu: 100m
            memory: 100Mi
          requests:
            cpu: 50m
            memory: 50Mi
        volumeMounts:
        - name: sys
          mountPath: /host/sys
          mountPropagation: HostToContainer
        - name: root
          mountPath: /root
          mountPropagation: HostToContainer
      tolerations:
      - operator: Exists
        effect: NoSchedule
      volumes:
      - name: sys
        hostPath:
          path: /sys
      - name: root
        hostPath:
          path: /
kubectl create -f <exporter.yml>
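To verify that node-exporter is running on every node and serving metrics, something along these lines should work (replace the pod placeholder with one of the actual node-exporter pod names):

kubectl get daemonset node-exporter --namespace=monitoring
kubectl get pods --namespace=monitoring -l app.kubernetes.io/name=node-exporter
kubectl port-forward --namespace=monitoring <one-of-the-node-exporter-pods> 9100:9100
curl -s http://localhost:9100/metrics | head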
Once we have node-exporter running on every node, we need to create a Prometheus ServiceMonitor so we can scrape the metrics.
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  labels:
    k8s-app: node-exporter
  name: node-exporter
  namespace: default
spec:
  endpoints:
  - path: /metrics
    port: metrics
  jobLabel: k8s-app
  selector:
    matchLabels:
      app.kubernetes.io/name: prometheus-node-exporter
This ServiceMonitor will match any Service that has the label app.kubernetes.io/name: prometheus-node-exporter.
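A ServiceMonitor discovers its scrape targets through a Kubernetes Service, so the label in matchLabels has to be present on a Service object. As a sketch, a headless Service exposing the node-exporter DaemonSet above under that label (the name and label values here are illustrative, not part of the chart) could look like this:

apiVersion: v1
kind: Service
metadata:
  name: node-exporter
  namespace: monitoring
  labels:
    app.kubernetes.io/name: prometheus-node-exporter
spec:
  clusterIP: None          # headless; Prometheus scrapes the individual endpoints
  selector:
    app.kubernetes.io/component: exporter
    app.kubernetes.io/name: node-exporter
  ports:
  - name: metrics          # must match the port name referenced by the ServiceMonitor
    port: 9100
    targetPort: 9100

Note that, unless spec.namespaceSelector is set, a ServiceMonitor only selects Services in its own namespace, so the ServiceMonitor above would either need a namespaceSelector or have to be created in the same namespace as the Service.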