
feat: cilium add-mode support #312

Merged
merged 4 commits into squat:main from RouxAntoine:main on May 20, 2022
Conversation

@RouxAntoine (Contributor)

When CNI management by Kilo is disabled, we can use the existing cluster's CNI setup thanks to add-on mode:

https://kilo.squat.ai/docs/introduction#add-on-mode

This is a really basic implementation based on comment #81 (comment).
@squat (Owner) commented May 4, 2022

@RouxAntoine thanks so much for putting this PR together 😍
Have you tried running this yourself on a cluster?

@RouxAntoine (Contributor, Author)

You're welcome. Yes, it seems to be running, with Cilium as a replacement for kube-proxy, so as in the linked comment I have manually mounted a kubeconfig.

Only one peer:

[screenshot]

This is my pod configuration:

resource "kubernetes_service_account_v1" "kilo_service_account" {
  metadata {
    name = "kilo"
    namespace = local.default_namespace
  }
}

resource "kubernetes_cluster_role_v1" "kilo_cluster_role" {
  metadata {
    name = "kilo"
  }
  rule {
    api_groups = [""]
    resources = ["nodes"]
    verbs = ["list", "patch", "watch"]
  }
  rule {
    api_groups = ["kilo.squat.ai"]
    resources = ["peers"]
    verbs = ["list", "update", "watch"]
  }
  rule {
    api_groups = ["apiextensions.k8s.io"]
    resources = ["customresourcedefinitions"]
    verbs = ["create"]
  }
}

resource "kubernetes_cluster_role_binding_v1" "kilo_cluster_role_binding" {
  metadata {
    name = "kilo"
  }
  role_ref {
    api_group = "rbac.authorization.k8s.io"
    kind      = "ClusterRole"
    name      = kubernetes_cluster_role_v1.kilo_cluster_role.metadata.0.name
  }
  subject {
    kind = "ServiceAccount"
    name = kubernetes_service_account_v1.kilo_service_account.metadata.0.name
    namespace = local.default_namespace
  }
}

resource "kubernetes_daemon_set_v1" "kilo_deployment" {
  metadata {
    name = "kilo-daemonset"
    namespace = local.default_namespace
    labels = {
      app = "kilo-daemonset"
    }
  }
  spec {
    selector {
      match_labels = {
        "app.kubernetes.io/name" = "kilo"
        "app.kubernetes.io/part-of" = "kilo"
      }
    }
    template {
      metadata {
        labels = {
          "app.kubernetes.io/name" = "kilo"
          "app.kubernetes.io/part-of" = "kilo"
        }
      }
      spec {
        service_account_name = kubernetes_service_account_v1.kilo_service_account.metadata.0.name
        # Kilo needs the host network to manage the node's WireGuard interface.
        host_network = true
        container {
          name = "kilo"
          image = "docker.registry/kilo"
          security_context {
            privileged = true
          }
          args = [
            "--hostname=$(NODE_NAME)",
            # Kubeconfig is mounted from the host because Cilium replaces kube-proxy.
            "--kubeconfig=/etc/kubernetes/kubeconfig",
            # Add-on mode: leave CNI management to the existing Cilium installation.
            "--cni=false",
            "--encapsulate=crosssubnet",
            "--clean-up-interface=true",
            # The Cilium compatibility mode added by this PR.
            "--compatibility=cilium",
            "--local=false",
            "--subnet=172.31.254.0/24",
            "--log-level=all"
          ]
          port {
            container_port = 51820
            protocol = "UDP"
            host_port = 51820
          }
          env {
            name = "NODE_NAME"
            value_from {
              field_ref {
                field_path = "spec.nodeName"
              }
            }
          }
          volume_mount {
            name       = "lib-modules"
            mount_path = "/lib/modules"
            read_only = true
          }
          volume_mount {
            name       = "kilo-dir"
            mount_path = "/var/lib/kilo"
          }
          volume_mount {
            name       = "xtables-lock"
            mount_path = "/run/xtables.lock"
            read_only = false
          }
          volume_mount {
            name       = "kubeconfig"
            mount_path = "/etc/kubernetes/kubeconfig"
            sub_path = "admin.conf"
            read_only = true
          }
        }
        volume {
          name = "lib-modules"
          host_path {
            path = "/lib/modules"
          }
        }
        volume {
          name = "kilo-dir"
          persistent_volume_claim {
            claim_name = kubernetes_persistent_volume_claim_v1.kilo_dir.metadata.0.name
          }
        }
        volume {
          name = "xtables-lock"
          host_path {
            path = "/run/xtables.lock"
            type = "FileOrCreate"
          }
        }
        volume {
          name = "kubeconfig"
          host_path {
            path = "/etc/kubernetes"
          }
        }
      }
    }
  }
}

resource "kubernetes_persistent_volume_claim_v1" "kilo_dir" {
  metadata {
    name = "kilo-pvc"
    namespace = local.default_namespace
  }
  spec {
    access_modes = ["ReadWriteMany"]
    resources {
      requests = {
        storage = "2Gi"
      }
    }
  }
}

@RouxAntoine (Contributor, Author) commented May 4, 2022

Moreover, to build the image I use manual tooling, because the Makefile does not seem to work with my local Docker configuration (I use Lima: https://github.com/lima-vm/lima):

export CGO_ENABLED=0; export GOOS=linux; export GOARCH=arm64; go build -ldflags '-X github.com/squat/kilo/pkg/version.Version=0.1.0-dirty' -o bin/linux/$GOARCH/kg ./cmd/kg/...

export CGO_ENABLED=0; export GOOS=linux; export GOARCH=arm64; go build -ldflags '-X github.com/squat/kilo/pkg/version.Version=0.1.0-dirty' -o bin/$GOOS/$GOARCH/kgctl ./cmd/kgctl/...

docker-multi-arch-builder build -n kilo --platforms linux/arm64,linux/amd64 -v debug

commit RouxAntoine@0ef31ca#diff-76ed074a9305c04054cdebb9e9aad2d818052b07091de1f20cad0bbac34ffb52R157-R160

@RouxAntoine (Contributor, Author)

At the same time I changed this on a custom branch (not included here):

diff --git a/pkg/mesh/backend.go b/pkg/mesh/backend.go
index 203661d..3b62c38 100644
--- a/pkg/mesh/backend.go
+++ b/pkg/mesh/backend.go
@@ -85,7 +85,7 @@ func (n *Node) Ready() bool {
                n.Endpoint.Ready() &&
                n.Key != wgtypes.Key{} &&
                n.Subnet != nil &&
-               time.Now().Unix()-n.LastSeen < int64(checkInPeriod)*2/int64(time.Second)
+               time.Now().Unix()-n.LastSeen < int64(checkInPeriod)*8/int64(time.Second)
 }

on commit RouxAntoine@0ef31ca#diff-9d1920ed195f23a0a31a5d736410a029c56e65f207971525797d1b900302bfceR88

Indeed, I have some time drift between my local kgctl and the kg server.

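As a rough illustration of the window this condition enforces, here is a small standalone Go sketch (the 30-second checkInPeriod below is an assumed value for illustration; Kilo's actual constant may differ):

package main

import (
	"fmt"
	"time"
)

func main() {
	// Assumed check-in period; see pkg/mesh for Kilo's real value.
	checkInPeriod := 30 * time.Second

	// Original condition: a node is Ready only if its LastSeen
	// timestamp is within 2x the check-in period.
	original := int64(checkInPeriod) * 2 / int64(time.Second) // 60 seconds

	// Patched condition: widen the window to 8x to tolerate clock
	// drift between kgctl's clock and the node running kg.
	widened := int64(checkInPeriod) * 8 / int64(time.Second) // 240 seconds

	fmt.Printf("readiness window: %ds originally, %ds after the patch\n", original, widened)
}

With a 30-second period, widening the multiplier from 2 to 8 grows the tolerated gap between the two clocks from 60 to 240 seconds.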
@squat (Owner) commented May 4, 2022

Ack. I am hoping to move away entirely from using this lastseen annotation and instead use the Kilo pod readiness checks, so that we don't require any time synchronization at all.

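For context, a readiness probe is evaluated by the kubelet against the local pod, so no cross-node clock comparison is involved. A minimal Go sketch of the kind of endpoint such a probe could poll (illustrative only, not Kilo's actual implementation; the port and path are placeholders):

package main

import (
	"log"
	"net/http"
)

func main() {
	// A readinessProbe in the DaemonSet spec would GET this path;
	// a real agent would report ready only once the WireGuard
	// interface is configured, rather than always returning 200.
	http.HandleFunc("/health", func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusOK)
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}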
@RouxAntoine (Contributor, Author)

Interesting, I did not understand the purpose of this condition.

The tests are flaky, is that normal?

@squat (Owner) commented May 5, 2022

No, it's not normal. I haven't seen a flaky test in almost a year. In any case, the e2e tests pass now :)

Could you also add an example manifest for Kilo with Cilium? If not, we can certainly translate your HCL to YAML later.

@squat (Owner) commented May 5, 2022

Thank you @RouxAntoine I'll take another closer look during lunch :)

@RouxAntoine force-pushed the main branch 2 times, most recently from 4c63474 to 8e14ae5 on May 9, 2022 08:27
@RouxAntoine (Contributor, Author) commented May 12, 2022

@squat I rewrote the pull request by mistake three days ago, sorry. Are any more changes needed on this pull request? Do you think it could be merged in one of the next releases?

@squat (Owner) commented May 12, 2022

Yes, I absolutely want this in the next release.

Two review threads on pkg/encapsulation/cilium.go (outdated, resolved).
@squat (Owner) left a comment:
Sorry for the delay! This looks good overall; I just have a few nits and one more important consideration for concurrency safety.

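For context, a generic Go sketch of the kind of guard such a concurrency-safety comment usually asks for (illustrative only; the type and field names are invented and this is not the actual pkg/encapsulation/cilium.go code):

package main

import (
	"fmt"
	"sync"
)

// encapsulator mimics a component whose cached state may be read and
// written from multiple goroutines, so access is serialized by a mutex.
type encapsulator struct {
	mu     sync.Mutex
	subnet string
}

func (e *encapsulator) Set(s string) {
	e.mu.Lock()
	defer e.mu.Unlock()
	e.subnet = s
}

func (e *encapsulator) Get() string {
	e.mu.Lock()
	defer e.mu.Unlock()
	return e.subnet
}

func main() {
	e := &encapsulator{}
	var wg sync.WaitGroup
	for i := 0; i < 4; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			// Concurrent writers and readers; without the mutex this
			// would be a data race detectable with `go test -race`.
			e.Set(fmt.Sprintf("10.4.%d.0/24", i))
			_ = e.Get()
		}(i)
	}
	wg.Wait()
	fmt.Println("final subnet:", e.Get())
}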
@squat (Owner) left a comment:
Thanks @RouxAntoine. Very cool 🎉
In a follow-up, I'd like to add an e2e test to assert that this keeps working 👍

@squat merged commit 4be792e into squat:main on May 20, 2022