
Lightrun Kubernetes Operator

The Lightrun Kubernetes (K8s) Operator makes it easy to insert Lightrun agents into your K8s workloads without changing your Docker or manifest files. The Lightrun K8s Operator project was initially scaffolded using the operator-sdk and the Kubebuilder book, and aims to follow the Kubernetes Operator pattern.

Description

To add a Lightrun agent to an application running on Kubernetes, you normally have to:

  1. Install the agent into the Kubernetes pod.
  2. Notify the running application to start using the installed agent.

The Lightrun K8s operator performs these steps for you.

Important - read the Limitations section before deploying to production.

Requirements

  • Kubernetes >= 1.19

Example

To set up the Lightrun K8s operator:

  1. Create namespaces for the operator and for the test deployment
kubectl create namespace lightrun-operator
kubectl create namespace lightrun-agent-test

The lightrun-operator namespace is hardcoded in the example operator.yaml because of its Role and RoleBinding objects. If you want to deploy the operator to a different namespace, use the Helm chart instead.

  2. Deploy the operator to the operator namespace
kubectl apply -f https://raw.githubusercontent.com/lightrun-platform/lightrun-k8s-operator/main/examples/operator.yaml -n lightrun-operator
  3. Create a simple deployment for testing

App source code: PrimeMain.java

kubectl apply -f https://raw.githubusercontent.com/lightrun-platform/lightrun-k8s-operator/main/examples/deployment.yaml -n lightrun-agent-test
  4. Download the Lightrun agent config
curl https://raw.githubusercontent.com/lightrun-platform/lightrun-k8s-operator/main/examples/lightrunjavaagent.yaml > agent.yaml
  5. Update the following config parameters in the agent.yaml file (see the sketch after this list for where these fields live):
  • serverHostname - for SaaS it is app.lightrun.com; for on-prem installations use your own hostname

  • lightrun_key - you can find this value on the setup page, 2nd step

  • pinned_cert_hash - you can fetch it from https://<serverHostname>/api/getPinnedServerCert (you have to be authenticated)

  6. Create the agent custom resource
kubectl apply -f agent.yaml -n lightrun-agent-test
  7. Go to the Lightrun server and check that the new agent appears in the list of registered agents
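
For orientation, below is a trimmed, illustrative sketch of what agent.yaml roughly contains. Field names follow the example file at the time of writing and may differ between operator versions; all values are placeholders, so always start from the downloaded file rather than from this sketch.

apiVersion: v1
kind: Secret
metadata:
  name: lightrun-secrets
stringData:
  lightrun_key: "<lightrun_key from the setup page>"
  pinned_cert_hash: "<pinned_cert_hash from /api/getPinnedServerCert>"
---
apiVersion: agents.lightrun.com/v1beta
kind: LightrunJavaAgent
metadata:
  name: example-cr
spec:
  deploymentName: app                  # the deployment to patch (matches examples/deployment.yaml)
  containerSelector:
    - app                              # containers that should get the agent (illustrative name)
  secretName: lightrun-secrets
  serverHostname: app.lightrun.com     # or your on-prem hostname
  agentEnvVarName: JAVA_TOOL_OPTIONS
  initContainer:
    image: lightruncom/k8s-operator-init-java-agent-linux:latest   # illustrative tag
    sharedVolumeName: lightrun-agent-init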

Example with Helm Chart

The Helm chart is published in the repository's helm-repo branch.

  • Add the repo to your Helm repository list
helm repo add lightrun-k8s-operator https://lightrun-platform.github.io/lightrun-k8s-operator
  • Install the Helm chart:

Using default values

helm install lightrun-k8s-operator lightrun-k8s-operator/lightrun-k8s-operator -n lightrun-operator --create-namespace

Using custom values file

helm install lightrun-k8s-operator lightrun-k8s-operator/lightrun-k8s-operator -f <values file> -n lightrun-operator --create-namespace

helm upgrade --install and helm install --dry-run may not work properly due to limitations in how Helm handles CRDs. You can find more info in the Helm documentation on CRDs.

  • Uninstall the Helm chart.
helm delete lightrun-k8s-operator

The CRDs will not be deleted, due to the same Helm CRD limitations.
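
If you also want to remove the CRD (and with it every LightrunJavaAgent resource in the cluster), you can delete it manually. The command below assumes the default CRD name; verify it first with kubectl get crd.

kubectl delete crd lightrunjavaagents.agents.lightrun.com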

Chart version vs controller version

For the sake of simplicity, we keep the same version for both the controller image and the Helm chart. This helps ensure that the controller's behavior stays aligned with the installed CRDs, preventing resource validation failures.
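
For example, to install a specific, matching release (replace the placeholder with a published chart version):

helm install lightrun-k8s-operator lightrun-k8s-operator/lightrun-k8s-operator --version <chart version> -n lightrun-operator --create-namespace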

Limitations

Contributing Guide

If you have an idea for an improvement or find a bug, do not hesitate to open an issue, or simply fork the repository and create a pull request. Please open an issue first for any big changes.

make post-commit-hook
Run this command to add a post-commit hook. It regenerates the RBAC rules and the CRD from the code after every commit, so you won't forget to do it. You'll need to commit those generated changes as well.

Test It Out Locally

You’ll need a Kubernetes cluster to run against. You can use KIND or K3S to get a local cluster for testing, or run against a remote cluster.
Note: When using make commands, your controller will automatically use the current context in your kubeconfig file (i.e. whatever cluster kubectl cluster-info shows).

  1. Clone the repo
git clone git@github.com:lightrun-platform/lightrun-k8s-operator.git
cd lightrun-k8s-operator
  2. Install the CRDs into the cluster:
make install
  3. Run your controller (this will run in the foreground):
make run
  4. Open another terminal tab and deploy a simple app to your cluster
kubectl apply -f ./examples/deployment.yaml
kubectl get deployments app
  5. Update lightrun_key, pinned_cert_hash and serverHostname in the CR example file

  6. Create the LightrunJavaAgent custom resource

kubectl apply -f ./config/samples/agents_v1beta_lightrunjavaagent.yaml

At this point you will see in the controller logs that the controller recognized the new resource and started working. If you run the following command, you will see the changes made by the controller (init container, volume, patched ENV var).

kubectl describe deployments app
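
The snippet below is an illustrative sketch of the kind of patch to expect in the deployment spec. The actual init container image, volume name, mount path, and agent arguments depend on your CR and the operator version; all names here are placeholders.

spec:
  template:
    spec:
      initContainers:
        - name: lightrun-agent-init            # added by the operator (name is illustrative)
          image: lightruncom/k8s-operator-init-java-agent-linux:latest
          volumeMounts:
            - name: lightrun-agent-init
              mountPath: /lightrun
      containers:
        - name: app
          env:
            - name: JAVA_TOOL_OPTIONS          # patched so the JVM loads the agent
              value: "-agentpath:/lightrun/agent/lightrun_agent.so"
          volumeMounts:
            - name: lightrun-agent-init
              mountPath: /lightrun
      volumes:
        - name: lightrun-agent-init
          emptyDir: {}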

License

Copyright 2022 Lightrun

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.