The Lumigo Kubernetes operator provides a one-click solution for monitoring Kubernetes clusters with Lumigo.
Install the Lumigo Kubernetes operator in your Kubernetes cluster with Helm:
helm repo add lumigo https://lumigo-io.github.io/lumigo-kubernetes-operator
helm install lumigo lumigo/lumigo-operator --namespace lumigo-system --create-namespace --set cluster.name=<cluster_name>
Note: You can change the namespace from lumigo-system to a name of your choosing, but be aware that doing so requires adjusting the namespace used in the steps below.
(The cluster.name setting is optional, but highly advised; see the Naming your cluster section.)
You can verify that the Lumigo Kubernetes operator is up and running with:
$ kubectl get pods -n lumigo-system
NAME READY STATUS RESTARTS AGE
lumigo-kubernetes-operator-7fc8f67bcc-ffh5k 2/2 Running 0 56s
Note: While installing the Lumigo Kubernetes operator via kustomize is generally expected to work (except for the removal of instrumentation on uninstallation), it is not actually supported¹.
On EKS, the pods of the Lumigo Kubernetes operator itself must run on nodes backed by Amazon EC2 virtual machines.
Your monitored applications, however, can run on the Fargate profile without any issues.
Installing the Lumigo Kubernetes operator on an EKS cluster without EC2-backed nodegroups results in the operator pods staying in the Pending state:
$ kubectl describe pod -n lumigo-system lumigo-kubernetes-operator-5999997fb7-cvg5h
Namespace: lumigo-system
Priority: 0
Service Account: lumigo-kubernetes-operator
Node: <none>
Labels: app.kubernetes.io/instance=lumigo
app.kubernetes.io/name=lumigo-operator
control-plane=controller-manager
lumigo.auto-trace=false
lumigo.cert-digest=dJTiBDRVJUSUZJQ
pod-template-hash=5999997fb7
Annotations: kubectl.kubernetes.io/default-container: manager
kubernetes.io/psp: eks.privileged
Status: Pending
(The reason for this limitation is a long story, but it is necessary for Lumigo to figure out which EKS cluster the operator is sending data from.) If you are installing the Lumigo Kubernetes operator on an EKS cluster with only the Fargate profile, add a managed nodegroup, for example as sketched below.
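A minimal sketch of adding such a managed nodegroup with eksctl (the cluster name, nodegroup name, instance type, and node count below are placeholders to adapt to your environment):
eksctl create nodegroup \
  --cluster <cluster_name> \
  --name lumigo-operator-nodes \
  --node-type t3.medium \
  --nodes 2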
Kubernetes clusters do not have a built-in notion of their own identity¹, but when running multiple Kubernetes clusters, you almost certainly have names for them.
The Lumigo Kubernetes operator automatically adds to your telemetry the k8s.cluster.uid OpenTelemetry resource attribute, set to the value of the UID of the kube-system namespace, but UIDs are not meant for humans to remember and recognize easily.
The Lumigo Kubernetes operator allows you to set a human-readable name using the cluster.name Helm setting, which enables you to filter all your tracing data by cluster in Lumigo's Explore view.
¹ Not even Amazon EKS clusters, as their ARN is not available anywhere inside the cluster itself.
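If you installed the operator without a cluster name, you can set one later with a Helm upgrade; a minimal sketch that keeps the other deployed values unchanged (the cluster name is a placeholder):
helm upgrade lumigo lumigo/lumigo-operator \
  --namespace lumigo-system \
  --reuse-values \
  --set cluster.name=<cluster_name>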
You can check which version of the Lumigo Kubernetes operator you have deployed in your cluster as follows:
$ helm ls -A
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
lumigo lumigo-system 2 2023-07-10 09:20:04.233825 +0200 CEST deployed lumigo-operator-13 13
The version of the Lumigo Kubernetes operator is reported in the APP VERSION column.
To upgrade to a newer version of the Lumigo Kubernetes operator, run:
helm repo update
helm upgrade lumigo lumigo/lumigo-operator --namespace lumigo-system
The Lumigo Kubernetes operator automatically adds distributed tracing to pods created via:
- Deployments (apps/v1.Deployment)
- DaemonSets (apps/v1.DaemonSet)
- ReplicaSets (apps/v1.ReplicaSet)
- StatefulSets (apps/v1.StatefulSet)
- CronJobs (batch/v1.CronJob)
- Jobs (batch/v1.Job)
The distributed tracing is provided by the Lumigo OpenTelemetry distribution for JS, the Lumigo OpenTelemetry distribution for Java and the Lumigo OpenTelemetry distribution for Python.
The Lumigo Kubernetes operator will automatically trace all Java, Node.js and Python processes found in the containers of pods created in the namespaces that Lumigo traces.
To activate automatic tracing for resources in a namespace, create in that namespace a Kubernetes secret containing your Lumigo token, and reference it from a Lumigo (operator.lumigo.io/v1alpha1.Lumigo) custom resource.
Save the following into lumigo.yml:
apiVersion: v1
kind: Secret
metadata:
  name: lumigo-credentials
stringData:
  # Kubectl won't allow you to deploy this dangling anchor.
  # Get the actual value from Lumigo following this documentation: https://docs.lumigo.io/docs/lumigo-tokens
  token: *lumigo-token # <--- Change this! Example: t_123456789012345678901
---
apiVersion: operator.lumigo.io/v1alpha1
kind: Lumigo
metadata:
  labels:
    app.kubernetes.io/name: lumigo
    app.kubernetes.io/instance: lumigo
    app.kubernetes.io/part-of: lumigo-operator
  name: lumigo
spec:
  lumigoToken:
    secretRef:
      name: lumigo-credentials # This must match the name of the secret; the secret must be in the same namespace as this Lumigo custom resource
      key: token # This must match the key in the Kubernetes secret (don't touch)
After creating lumigo.yml, deploy it in the desired namespace:
kubectl apply -f lumigo.yml -n <YOUR_NAMESPACE>
ℹ️ Important note: Apply the secret and the custom resource to the namespace in which you wish to start tracing, not to lumigo-system.
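For example, assuming a hypothetical application namespace named my-app:
kubectl create namespace my-app   # only if the namespace does not exist yet
kubectl apply -f lumigo.yml -n my-app
kubectl get lumigo -n my-app      # the Lumigo resource should be listed shortly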
Each Lumigo resource keeps in its state a list of the resources it currently instruments:
$ kubectl describe lumigo -n my-namespace
Name: lumigo
Namespace: my-namespace
API Version: operator.lumigo.io/v1alpha1
Kind: Lumigo
Metadata:
... # Data removed for readability
Spec:
... # Data removed for readability
Status:
Conditions:
... # Data removed for readability
Instrumented Resources:
API Version: apps/v1
Kind: StatefulSet
Name: my-statefulset
Namespace: my-namespace
Resource Version: 320123
UID: 93d6d809-ac2a-43a9-bc07-f0d4e314efcc
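If you prefer a machine-readable view, a minimal sketch using jq; the instrumentedResources field name is assumed here from the describe output above and may differ:
kubectl get lumigo lumigo -n my-namespace -o json | jq '.status.instrumentedResources'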
The tracing feature can be turned off entirely if it is not desired, by setting the spec.tracing.enabled field to false in the Lumigo resource:
apiVersion: operator.lumigo.io/v1alpha1
kind: Lumigo
metadata:
  labels:
    app.kubernetes.io/name: lumigo
    app.kubernetes.io/instance: lumigo
    app.kubernetes.io/part-of: lumigo-operator
  name: lumigo
spec:
  lumigoToken: ...
  tracing:
    enabled: false
  logging:
    enabled: true # usually set to `true` when `tracing.enabled` is `false`; otherwise injecting Lumigo into the workload would not be useful
This is usually done when only logs from the workloads should be sent to Lumigo, regardless of the tracing context in which the logs were generated.
Note that this does not prevent the injection of the Lumigo distro into pods; it only stops the distro from sending traces to Lumigo. For more fine-grained control over the injection in general, see the Opting out for specific resources section.
The Lumigo Kubernetes operator can automatically forward logs emitted by traced pods to Lumigo's log-management solution, supporting several logging providers (currently logging for Python apps, and Winston and Bunyan for Node.js apps).
Enabling log forwarding is done by adding the spec.logging.enabled field to the Lumigo resource:
apiVersion: operator.lumigo.io/v1alpha1
kind: Lumigo
metadata:
  labels:
    app.kubernetes.io/name: lumigo
    app.kubernetes.io/instance: lumigo
    app.kubernetes.io/part-of: lumigo-operator
  name: lumigo
spec:
  lumigoToken: ... # same token used for tracing
  logging:
    enabled: true # enables log forwarding for pods with tracing injected
Workloads using runtimes not supported by the current Lumigo OpenTelemetry distributions (e.g., Go, Rust) can still send logs to Lumigo, via the log files that Kubernetes manages for containers on each node in the cluster. The Lumigo Kubernetes operator will automatically collect logs from those files and send them to Lumigo once the following setting is applied when installing the operator:
helm upgrade -i lumigo lumigo/lumigo-operator \
  # ...
  --set "clusterCollection.logs.enabled=true" \
  --set "lumigoToken.value=t_123456789012345678901"
This will automatically collect logs from the /var/log/pods folder on each node and forward them to Lumigo (with the exception of the kube-system and lumigo-system namespaces).
To further customize which namespaces, pods, and containers are included in log collection, the following settings can be provided:
echo "
lumigoToken:
value: t_123456789012345678901
clusterCollection:
logs:
enabled: true
include:
- namespacePattern: some-ns
podPattern: some-pod-*
containerPattern: some-container-*
exclude:
- containerPattern: some-other-container-*
" | helm upgrade -i lumigo lumigo/lumigo-operator --values -
In the example above, logs from all containers prefixed with some-container- running in pods prefixed with some-pod- (effectively, pods from a specific deployment) under the some-ns namespace will be collected, with the exception of logs from containers prefixed with some-other-container- in the aforementioned namespace and pods.
Notes about the settings:
- include and exclude are arrays of glob patterns to include or exclude logs, where each pattern is a combination of namespacePattern, podPattern and containerPattern (all are optional).
- If a pattern is not provided for one of the components, it is treated as a wildcard - e.g., including pods by specifying only podPattern will include all containers of those pods in all namespaces.
- Each exclude value is checked against the paths matched by include, meaning that if a path is matched by both include and exclude, it will be excluded.
- By default, all logs from all pods in all namespaces are included, with no exclusions. Exceptions are the kube-system and lumigo-system namespaces, which are always added to the default or provided exclusion list.
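The same include and exclude settings can also be passed as --set flags instead of piping a values file; a minimal sketch, assuming the same hypothetical patterns as in the example above:
helm upgrade -i lumigo lumigo/lumigo-operator \
  --set "lumigoToken.value=t_123456789012345678901" \
  --set "clusterCollection.logs.enabled=true" \
  --set "clusterCollection.logs.include[0].namespacePattern=some-ns" \
  --set "clusterCollection.logs.include[0].podPattern=some-pod-*" \
  --set "clusterCollection.logs.include[0].containerPattern=some-container-*" \
  --set "clusterCollection.logs.exclude[0].containerPattern=some-other-container-*"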
To prevent the Lumigo Kubernetes operator from injecting tracing into pods managed by a resource in a namespace that contains a Lumigo resource, add the lumigo.auto-trace label set to "false":
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: hello-node
    lumigo.auto-trace: "false" # <-- No injection will take place
  name: hello-node
  namespace: my-namespace
spec:
  selector:
    matchLabels:
      app: hello-node
  template:
    metadata:
      labels:
        app: hello-node
    spec:
      containers:
      - command:
        - /agnhost
        - netexec
        - --http-port=8080
        image: registry.k8s.io/e2e-test-images/agnhost:2.39
        name: agnhost
In the logs of the Lumigo Kubernetes operator, you will see a message like the following:
1.67534267851615e+09 DEBUG controller-runtime.webhook.webhooks wrote response {"webhook": "/v1alpha1/inject", "code": 200, "reason": "the resource has the 'lumigo.auto-trace' label set to 'false'; resource will not be mutated", "UID": "6d341941-c47b-4245-8814-1913cee6719f", "allowed": true}
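If you prefer not to edit the manifest, the same opt-out label can be applied imperatively with kubectl; a minimal sketch, assuming the hello-node Deployment from the example above:
# Opt the Deployment out of injection
kubectl label deployment hello-node lumigo.auto-trace=false -n my-namespace
# Opt it back in by removing the label (note the trailing dash)
kubectl label deployment hello-node lumigo.auto-trace- -n my-namespace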
By default, when detecting a new Lumigo resource in a namespace, the Lumigo controller will instrument existing resources of the supported types. The injection will cause new pods to be created for daemonsets, deployments, replicasets, statefulsets and jobs; cronjobs will spawn injected pods at the next iteration. To turn off the automatic injection of existing resources, create the Lumigo resource as follows:
apiVersion: operator.lumigo.io/v1alpha1
kind: Lumigo
metadata:
  labels:
    app.kubernetes.io/name: lumigo
    app.kubernetes.io/instance: lumigo
    app.kubernetes.io/part-of: lumigo-operator
  name: lumigo
spec:
  lumigoToken: ...
  tracing:
    injection:
      injectLumigoIntoExistingResourcesOnCreation: false # Default: true
By default, when detecting the deletion of the Lumigo resource in a namespace, the Lumigo controller will remove instrumentation from existing resources of the supported types. The removal of the injection will cause new pods to be created for daemonsets, deployments, replicasets, statefulsets and jobs; cronjobs will spawn non-injected pods at the next iteration. To turn off the automatic removal of injection from existing resources, create the Lumigo resource as follows:
apiVersion: operator.lumigo.io/v1alpha1
kind: Lumigo
metadata:
  labels:
    app.kubernetes.io/name: lumigo
    app.kubernetes.io/instance: lumigo
    app.kubernetes.io/part-of: lumigo-operator
  name: lumigo
spec:
  lumigoToken: ...
  tracing:
    injection:
      removeLumigoFromResourcesOnDeletion: false # Default: true
Note: The removal of injection from existing resources does not occur on uninstallation of the Lumigo Kubernetes operator, as the role-based access control has likely already been deleted.
The Lumigo Kubernetes operator will automatically collect Kubernetes object versions in the namespaces with a Lumigo resource in an active state, and send them to Lumigo for issue detection (e.g., when your pods crash).
The collected object types are: corev1.Event, corev1.Pod, apps/v1.Deployment, apps/v1.DaemonSet, apps/v1.ReplicaSet, apps/v1.StatefulSet, batch/v1.CronJob, and batch/v1.Job.
Besides events, the object versions (e.g., pods, replicasets and deployments) are needed to correlate events across the owner-reference chain, e.g., a pod belongs to a replicaset, which belongs to a deployment.
To disable the automated collection of Kubernetes events and object versions, you can configure your Lumigo resources as follows:
apiVersion: operator.lumigo.io/v1alpha1
kind: Lumigo
metadata:
  labels:
    app.kubernetes.io/name: lumigo
    app.kubernetes.io/instance: lumigo
    app.kubernetes.io/part-of: lumigo-operator
  name: lumigo
spec:
  lumigoToken: ...
  infrastructure:
    kubeEvents:
      enabled: false # Default: true
When a Lumigo resource is deleted from a namespace, the collection of Kubernetes events and object versions is automatically halted.
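For example, deleting the Lumigo resource created earlier (named lumigo in the examples above) halts the collection in that namespace:
kubectl delete lumigo lumigo -n my-namespace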
By default, the manager logs at the INFO level and above.
The current log level can be viewed by running:
kubectl -n lumigo-system get deploy lumigo-lumigo-operator-controller-manager -o=json | jq '.spec.template.spec.containers[0].args'
With the default settings, there will be no log level explicitly set and the above command will return:
[
"--health-probe-bind-address=:8081",
"--metrics-bind-address=127.0.0.1:8080",
"--leader-elect"
]
To set the log level to only show ERROR level logs, run:
kubectl -n lumigo-system patch deploy lumigo-lumigo-operator-controller-manager --type=json -p='[{"op": "add", "path": "/spec/template/spec/containers/0/args/-", "value": "--zap-log-level=error"}]'
If a log level is already set, instead of using the add operation we use replace and change the path from /args/- to the index of the argument containing the log level setting, such as /args/3:
kubectl -n lumigo-system patch deploy lumigo-lumigo-operator-controller-manager --type=json -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/args/3", "value": "--zap-log-level=info"}]'
NOTE: The container argument array is zero indexed, so the first argument is at index 0.
To remove the Lumigo Kubernetes operator, run:
helm delete lumigo --namespace lumigo-system
In namespaces where the Lumigo resource has both spec.tracing.injection.enabled and spec.tracing.injection.removeLumigoFromResourcesOnDeletion set to true, supported resources that have been injected by the Lumigo Kubernetes operator will be updated to remove the injection, with the following caveat:
Note: The removal of injection from existing resources does not apply to batch/v1.Job resources, as their corev1.PodSpec is immutable after the batch/v1.Job resource has been created.
The Lumigo Kubernetes operator injector webhook uses a self-signed certificate that is automatically generated during the installation of the Helm chart. The generated certificate has a 365-day expiration, and a new certificate is generated every time you upgrade the Lumigo Kubernetes operator's Helm chart.
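Since a fresh certificate is generated on every upgrade of the Helm chart, re-running the upgrade before the 365 days elapse is a simple way to rotate the certificate; a minimal sketch, keeping the currently deployed values unchanged:
helm repo update
helm upgrade lumigo lumigo/lumigo-operator --namespace lumigo-system --reuse-values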
The Lumigo Kubernetes operator will add events to the resources it instruments with the following reasons and in the following cases:
| Reason | Created on resource types | Under which conditions |
|---|---|---|
| LumigoAddedInstrumentation | apps/v1.Deployment, apps/v1.DaemonSet, apps/v1.ReplicaSet, apps/v1.StatefulSet, batch/v1.CronJob | A Lumigo resource exists in the namespace, and the resource is instrumented with Lumigo as a result |
| LumigoCannotAddInstrumentation | apps/v1.Deployment, apps/v1.DaemonSet, apps/v1.ReplicaSet, apps/v1.StatefulSet, batch/v1.CronJob | A Lumigo resource exists in the namespace, and the resource should be instrumented with Lumigo as a result, but an error occurs |
| LumigoUpdatedInstrumentation | apps/v1.Deployment, apps/v1.DaemonSet, apps/v1.ReplicaSet, apps/v1.StatefulSet, batch/v1.CronJob | A Lumigo resource exists in the namespace, and the resource has its Lumigo instrumentation updated as a result |
| LumigoCannotUpdateInstrumentation | apps/v1.Deployment, apps/v1.DaemonSet, apps/v1.ReplicaSet, apps/v1.StatefulSet, batch/v1.CronJob | A Lumigo resource exists in the namespace, and the resource should have its Lumigo instrumentation updated as a result, but an error occurs |
| LumigoRemovedInstrumentation | apps/v1.Deployment, apps/v1.DaemonSet, apps/v1.ReplicaSet, apps/v1.StatefulSet, batch/v1.CronJob | A Lumigo resource is deleted from the namespace, and the resource has its Lumigo instrumentation removed as a result |
| LumigoCannotRemoveInstrumentation | apps/v1.Deployment, apps/v1.DaemonSet, apps/v1.ReplicaSet, apps/v1.StatefulSet, batch/v1.CronJob | A Lumigo resource is deleted from the namespace, and the resource should have its Lumigo instrumentation removed as a result, but an error occurs |
Footnotes
¹ The user experience of having to install Cert Manager is unnecessarily complex, and Kustomize layers, while they may be fine for one's own applications, are simply unsound for a batteries-included, rapidly-evolving product like the Lumigo Kubernetes operator. Specifically, please expect your Kustomize layers to stop working with any release of the Lumigo Kubernetes operator.