Running AutoKuma in Kubernetes without Docker? #58

Open
nogweii opened this issue Jul 25, 2024 · 12 comments


@nogweii

nogweii commented Jul 25, 2024

I am thinking about setting up AutoKuma to run in my homelab, but it is powered by Kubernetes running on containerd rather than Docker.

Can I run AutoKuma in my homelab, at least with just static configs for the time being?

(It would also be cool if it could support a custom resource in Kubernetes as an alternative to container labels.)

@BigBoot
Owner

BigBoot commented Jul 26, 2024

Hi, you can just disable the docker integration by setting AUTOKUMA__DOCKER__ENABLED=false.

As for Kubernetes, I think native Kubernetes support (i.e. using CRDs etc.) is way out of scope for AutoKuma. Reading (Pod, Deployment, DaemonSet, etc.) labels using the Kubernetes API, on the other hand, is something I see as possible, although I don't have any plans to implement this myself (i.e. I'm open to PRs).

Depending on how you manage your cluster (i.e. using Terraform/OpenTofu, Pulumi, etc.), you might be able to automatically mount specific ConfigMaps as static Monitor Definitions.
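
For example, a rough (untested) sketch of that approach, with illustrative names and mount path:

apiVersion: v1
kind: ConfigMap
metadata:
  name: autokuma-static-monitors
data:
  example.json: |
    {
      "name": "Example Monitor",
      "type": "http",
      "url": "https://example.com"
    }

and in the AutoKuma pod spec:

# mount the ConfigMap contents where AutoKuma looks for static monitor files
volumeMounts:
  - name: static-monitors
    mountPath: /autokuma/static-monitors
volumes:
  - name: static-monitors
    configMap:
      name: autokuma-static-monitors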

@AurimasNav

AurimasNav commented Aug 26, 2024

Hi, you can just disable the docker integration by setting AUTOKUMA__DOCKER__ENABLED=false.

As for Kubernetes, I think native Kubernetes support (i.e. using CRDs etc.) is way out of scope for AutoKuma. Reading (Pod, Deployment, DaemonSet, etc.) labels using the Kubernetes API, on the other hand, is something I see as possible, although I don't have any plans to implement this myself (i.e. I'm open to PRs).

Depending on how you manage your cluster (i.e. using Terraform/OpenTofu, Pulumi, etc.), you might be able to automatically mount specific ConfigMaps as static Monitor Definitions.

There is a problem with mounting ConfigMaps as volumes, due to how Kubernetes does this (it creates symlinks) and how AutoKuma then tries to sync them:

WARN [autokuma::sync] Encountered error during sync: Unable to deserialize: Unsupported static monitor file type: /autokuma/static-monitors/..2024_08_26_08_07_18.3501614066, supported: .json, .toml

When I mount the ConfigMap in k8s it looks like this:

root@autokuma-58fb5b9fdf-b97mc:/autokuma/static-monitors# ls
example.json

root@autokuma-58fb5b9fdf-b97mc:/autokuma/static-monitors# ls -lah
total 12K
drwxrwxrwx 3 root root 4.0K Aug 26 08:07 .
drwxr-xr-x 3 root root 4.0K Aug 26 08:07 ..
drwxr-xr-x 2 root root 4.0K Aug 26 08:07 ..2024_08_26_08_07_18.3501614066
lrwxrwxrwx 1 root root   32 Aug 26 08:07 ..data -> ..2024_08_26_08_07_18.3501614066
lrwxrwxrwx 1 root root   19 Aug 26 08:07 example.json -> ..data/example.json

For now I am using subPath as a workaround, but it is not fun to manage since every JSON file needs to be a separate volume mount in the k8s deployment.
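
For reference, the workaround looks roughly like this (a sketch; volume and file names are illustrative):

# mounting a single file via subPath avoids the ..data symlink structure,
# but has to be repeated for every file in the ConfigMap
volumeMounts:
  - name: kuma-config
    mountPath: /autokuma/static-monitors/example.json
    subPath: example.json
volumes:
  - name: kuma-config
    configMap:
      name: kuma-static-monitors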

As a possible solution, AutoKuma could simply look for .json and .toml files in the directory and ignore everything else.

@polarathene

I don't have experience with k8s, but if it helps, this project implemented discovery support via annotations.

@BigBoot
Owner

BigBoot commented Oct 21, 2024

I've added a native Kubernetes integration for creating Monitors using CRs. This works fine using a local minikube cluster. However, since I haven't used Kubernetes in some time, I'll need some help creating a set of deployment YAMLs for a typical deployment: just a basic set which would work in a typical cluster with RBAC enabled etc.

Additionally, I'd need to know which settings I need to make configurable.

If anyone who's looking for a native Kubernetes integration can provide these things, I'll enable the integration with the next release.

@emouawad

Wonderful news! Not an expert, but I'd like to help if needed. It's really strange how little known AutoKuma is compared to the popularity of Uptime Kuma - they have a huge user base and 600+ contributors!
To me AutoKuma is the missing core of Uptime Kuma - without it I'd never use it (DevOps engineer here) - because GitOps is so much better than UI-based configuration.

If you can release it, I'd like to try it as soon as possible - I can share my deployment YAMLs if it works out well.
YAML deployments are nice, but ultimately a Helm chart is probably best suited.

You already mentioned a minikube deployment - those should be enough for now anyway.

@BigBoot
Owner

BigBoot commented Oct 22, 2024

@emouawad the integration is available in the dev channel (ghcr.io/bigboot/autokuma:master).
You'd need to enable the integration (and probably disable Docker so it doesn't spam the logs) with AUTOKUMA__KUBERNETES__ENABLED=true (and AUTOKUMA__DOCKER__ENABLED=false).

The CRDs need to be applied beforehand; you can find them at autokuma/kubernetes/crds-autokuma.yml.

An example CR looks like this:

apiVersion: "autokuma.bigboot.dev/v1"
kind: KumaEntity
metadata:
  name: hello-k8s-monitor
spec:
  config: 
    name: Static Json Example
    type: http
    url: https://example.com

The integration should pick up your service account/cluster config automatically when running inside the cluster.
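
Put together, the relevant container env is just (sketch, once the CRDs are applied):

# enable the Kubernetes integration and silence the Docker one
env:
  - name: AUTOKUMA__KUBERNETES__ENABLED
    value: "true"
  - name: AUTOKUMA__DOCKER__ENABLED
    value: "false"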

@emouawad

Hey @BigBoot - Works in GKE - Thanks!

RBAC Needed

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: autokuma
rules:
  - apiGroups: ["autokuma.bigboot.dev"]
    resources: ["*"]
    verbs: ["list", "patch", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: autokuma-binding
subjects:
  - kind: ServiceAccount
    name: default
    namespace: uptime-kuma
roleRef:
  kind: ClusterRole
  name: autokuma
  apiGroup: rbac.authorization.k8s.io

@emouawad

@BigBoot Would it be possible to add Status Pages as well?

I believe the CR might need a bit of a change to add a unique id/key and bind the status pages to it, so that the friendly name stays updatable?

@BigBoot
Owner

BigBoot commented Oct 22, 2024

There's #81 for keeping track of this, and ENTITY_TYPES.md describing how to resolve the different kinds of entities. As of now, support for status pages is notably still completely missing though.

Unfortunately I cannot influence the actual Uptime Kuma id, it gets assigned by Uptime Kuma on the database insert. Most entity types support resolving by an "autokuma id"; in the case of Kubernetes CRs that's the metadata.name.

@tschlaepfer

@BigBoot are you still interested in the K8s deployment YAML files? I'm happy to provide mine; I deployed AutoKuma on EKS and it works perfectly fine.

@aelogonpin

aelogonpin commented Nov 20, 2024

@tschlaepfer Hi, could you share your code? I'm trying to do this on a k0s cluster, but I can't manage to get it initialized with the uptime-kuma instance.

I'm including my code in case you can spot what the problem might be.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: autokuma
  namespace: kuma
spec:
  replicas: 1
  selector:
    matchLabels:
      app: autokuma
  template:
    metadata:
      labels:
        app: autokuma
    spec:
      containers:
        - name: autokuma
          image: ghcr.io/bigboot/autokuma:master
          env:
            - name: AUTOKUMA__KUBERNETES__ENABLED
              value: "true" # Enable the Kubernetes integration
            - name: AUTOKUMA__DOCKER__ENABLED
              value: "false" # Disable the Docker integration
            - name: AUTOKUMA__KUMA__URL
              value: "http://uptime-kuma-service:3001" # URL of the Uptime Kuma service
            - name: AUTOKUMA__KUMA__USERNAME
              value: "username"
            - name: AUTOKUMA__KUMA__PASSWORD
              value: "passw"
          volumeMounts:
            - name: kuma-config
              mountPath: /autokuma/static-monitors # Where AutoKuma will look for the monitors
      volumes:
        - name: kuma-config
          configMap:
            name: kuma-static-monitors # ConfigMap containing the static monitors


apiVersion: v1
kind: ConfigMap
metadata:
  name: kuma-static-monitors
  namespace: kuma
data:
  example.json: |
    {
      "name": "Test nginx server",
      "type": "http",
      "url": "http://nginx-service.default.svc.cluster.local:9113/metrics"
    }


@tschlaepfer

@aelogonpin I have shared my deployment in another issue in this project, please have a look here: #91 (comment)

From a quick look at your deployment, I think you are missing the AUTOKUMA__STATIC_MONITORS environment variable.
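
Something along these lines should do it (a sketch; the value needs to match wherever the ConfigMap is mounted in your deployment):

# point AutoKuma at the directory containing the static monitor files
env:
  - name: AUTOKUMA__STATIC_MONITORS
    value: "/autokuma/static-monitors"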

Hope this helps.
