Kubernetes on ARM64 with CDK
On an arm64 machine running Ubuntu 16.04, remove any apt-installed juju and lxd packages and rely on the snaps instead:
$ apt remove -y juju lxd lxd-client
$ snap install lxd
$ snap install juju --classic
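A quick sanity check that the snaps are in place (output will vary by version):
$ snap list lxd juju
$ juju version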
Initialize LXD, accepting the defaults for everything except the storage backend (dir), the bridge name (lxdbr1), and IPv6 (none):
$ lxd init # You may need to run this as root.
Would you like to use LXD clustering? (yes/no) [default=no]: no
Do you want to configure a new storage pool (yes/no) [default=yes]? yes
Name of the new storage pool [default=default]: default
Name of the storage backend to use (btrfs, ceph, dir, lvm, zfs) [default=zfs]: dir
Would you like to connect to a MAAS server (yes/no) [default=no]? no
Would you like to create a new network bridge (yes/no) [default=yes]? yes
What should the new bridge be called [default=lxdbr0]? lxdbr1
What IPv4 address should be used (CIDR subnet notation, “auto” or “none”) [default=auto]? auto
What IPv6 address should be used (CIDR subnet notation, “auto” or “none”) [default=auto]? none
Would you like LXD to be available over the network (yes/no) [default=no]? no
Would you like stale cached images to be updated automatically (yes/no) [default=yes]? yes
Would you like a YAML "lxd init" preseed to be printed [default=no]? no
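If you prefer to skip the prompts, the same answers can be fed to lxd init as a preseed on stdin. This is a sketch of an equivalent preseed, not output from the tool; adjust the pool and bridge names if yours differ:
$ cat <<EOF | lxd init --preseed
config: {}
storage_pools:
- name: default
  driver: dir
networks:
- name: lxdbr1
  type: bridge
  config:
    ipv4.address: auto
    ipv6.address: none
profiles:
- name: default
  devices:
    root:
      path: /
      pool: default
      type: disk
    eth0:
      name: eth0
      nictype: bridged
      parent: lxdbr1
      type: nic
EOF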
$ juju bootstrap
Clouds
aws
aws-china
aws-gov
azure
azure-china
cloudsigma
google
joyent
localhost
oracle
rackspace
Select a cloud [localhost]: localhost
Enter a name for the Controller [localhost-localhost]: localhost-localhost
Creating Juju controller "localhost-localhost" on localhost/localhost
Looking for packaged Juju agent version 2.3.5 for arm64
To configure your system to better support LXD containers, please see: https://github.com/lxc/lxd/blob/master/doc/production-setup.md
Launching controller instance(s) on localhost/localhost...
- juju-5d1a96-0 (arch=arm64)
Installing Juju agent on bootstrap instance
Fetching Juju GUI 2.12.1
Waiting for address
Attempting to connect to 10.205.42.149:22
Connected to 10.205.42.149
Running machine configuration script...
Bootstrap agent now started
Contacting Juju controller at 10.205.42.149 to verify accessibility...
Bootstrap complete, "localhost-localhost" controller now available
Controller machines are in the "controller" model
Initial model "default" added
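The interactive prompts above can be skipped by naming the cloud and controller directly, which should be equivalent to the answers shown:
$ juju bootstrap localhost localhost-localhost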
Edit the profile for Juju's default model (substitute your model name for default as needed):
$ lxc profile edit juju-default
Replace the contents with this:
name: juju-default
config:
  boot.autostart: "true"
  linux.kernel_modules: ip_tables,ip6_tables,netlink_diag,nf_nat,overlay
  raw.lxc: |
    lxc.apparmor.profile=unconfined
    lxc.mount.auto=proc:rw sys:rw cgroup:rw
    lxc.cgroup.devices.allow=a
    lxc.cap.drop=
  security.nesting: "true"
  security.privileged: "true"
description: ""
devices:
  aadisable:
    path: /sys/module/nf_conntrack/parameters/hashsize
    source: /dev/null
    type: disk
  aadisable1:
    path: /sys/module/apparmor/parameters/enabled
    source: /dev/null
    type: disk
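If you would rather not use the interactive editor, the same profile can be applied from a file and then verified (juju-default.yaml here is just a hypothetical name for the YAML above saved locally):
$ lxc profile edit juju-default < juju-default.yaml
$ lxc profile show juju-default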
Create a bundle.yaml defining the applications and relations needed for a CDK cluster:
series: xenial
services:
  easyrsa:
    charm: cs:~containers/easyrsa-68
    num_units: 1
  etcd:
    charm: cs:~containers/etcd-126
    num_units: 1
    options:
      channel: 3.2/stable
  kubeapi-load-balancer:
    charm: cs:~containers/kubeapi-load-balancer-88
    expose: true
    num_units: 1
  kubernetes-master:
    charm: cs:~containers/kubernetes-master-144
    num_units: 1
    options:
      channel: 1.11/stable
  kubernetes-worker:
    charm: cs:~containers/kubernetes-worker-163
    expose: true
    num_units: 2
    options:
      channel: 1.11/stable
      default-backend-image: gcr.io/google_containers/defaultbackend-arm64:1.4
      kubelet-extra-args: 'pod-infra-container-image=gcr.io/google-containers/pause-arm64:3.1'
      nginx-image: cdkbot/nginx-ingress-controller-arm64:0.9.0-beta.15
relations:
- - kubernetes-master:kube-api-endpoint
  - kubeapi-load-balancer:apiserver
- - kubernetes-master:loadbalancer
  - kubeapi-load-balancer:loadbalancer
- - kubernetes-master:kube-control
  - kubernetes-worker:kube-control
- - kubernetes-master:certificates
  - easyrsa:client
- - etcd:certificates
  - easyrsa:client
- - kubernetes-master:etcd
  - etcd:db
- - kubernetes-worker:certificates
  - easyrsa:client
- - kubernetes-worker:kube-api-endpoint
  - kubeapi-load-balancer:website
- - kubeapi-load-balancer:certificates
  - easyrsa:client
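Before deploying, juju deploy's --dry-run flag can be used to confirm the bundle parses and to preview the planned changes without creating anything (optional):
$ juju deploy ./bundle.yaml --dry-run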
Create overlay bundle(s) that will be merged into bundle.yaml when deployed.
CDK charms can be configured to install snaps and docker images via proxies if needed:
applications:
  etcd:
    options:
      snap_proxy: https://squid.internal:3128
  kubernetes-master:
    options:
      snap_proxy: https://squid.internal:3128
  kubernetes-worker:
    options:
      snap_proxy: https://squid.internal:3128
      http_proxy: http://squid.internal:3128
      https_proxy: https://squid.internal:3128
(where https://squid.internal:3128 is an example proxy and port)
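The same charm options can also be set or changed after deployment with juju config, using the same example proxy URL:
$ juju config kubernetes-worker snap_proxy=https://squid.internal:3128 \
    http_proxy=http://squid.internal:3128 \
    https_proxy=https://squid.internal:3128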
CDK supports Flannel, Calico, and Canal (Flannel connectivity + Calico policy) as the container networking provider:
flannel.yaml:
applications:
  flannel:
    charm: cs:~containers/flannel-81
relations:
- - flannel:etcd
  - etcd:db
- - flannel:cni
  - kubernetes-master:cni
- - flannel:cni
  - kubernetes-worker:cni

calico.yaml:
applications:
  calico:
    charm: cs:~containers/calico-116
    options:
      calico-node-image: cdkbot/node-arm64:v2.6.10
      calico-policy-image: cdkbot/kube-controllers-arm64:v1.0.4
relations:
- - calico:etcd
  - etcd:db
- - calico:cni
  - kubernetes-master:cni
- - calico:cni
  - kubernetes-worker:cni

canal.yaml:
applications:
  canal:
    charm: cs:~containers/canal-112
    options:
      calico-node-image: cdkbot/node-arm64:v2.6.10
      calico-policy-image: cdkbot/kube-controllers-arm64:v1.0.4
relations:
- - canal:etcd
  - etcd:db
- - canal:cni
  - kubernetes-master:cni
- - canal:cni
  - kubernetes-worker:cni
$ juju deploy ./bundle.yaml --overlay=<[flannel|calico|canal].yaml> [--overlay proxy.yaml]
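While the deployment settles, progress can be watched from another terminal (optional):
$ watch -c juju status --color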
Wait for the deployment to finish and settle. When it's ready, it should look like this:
$ juju status
Model    Controller           Cloud/Region         Version  SLA
default  localhost-localhost  localhost/localhost  2.3.5    unsupported

App                    Version  Status  Scale  Charm                  Store       Rev  OS      Notes
easyrsa                3.0.1    active      1  easyrsa                jujucharms   33  ubuntu
etcd                   2.3.8    active      1  etcd                   jujucharms    7  ubuntu
flannel                0.9.1    active      2  flannel                jujucharms   18  ubuntu
kubeapi-load-balancer  1.10.3   active      1  kubeapi-load-balancer  jujucharms   55  ubuntu  exposed
kubernetes-master      1.9.4    active      1  kubernetes-master      jujucharms   97  ubuntu
kubernetes-worker      1.9.4    active      1  kubernetes-worker      jujucharms   77  ubuntu  exposed

Unit                      Workload  Agent  Machine  Public address  Ports           Message
easyrsa/0*                active    idle   0        10.205.42.96                    Certificate Authority connected.
etcd/0*                   active    idle   1        10.205.42.119   2379/tcp        Healthy with 1 known peer
kubeapi-load-balancer/0*  active    idle   2        10.205.42.7     443/tcp         Loadbalancer ready.
kubernetes-master/0*      active    idle   3        10.205.42.205   6443/tcp        Kubernetes master running.
  flannel/0*              active    idle            10.205.42.205                   Flannel subnet 10.1.85.1/24
kubernetes-worker/0*      active    idle   4        10.205.42.184   80/tcp,443/tcp  Kubernetes worker running.
  flannel/1               active    idle            10.205.42.184                   Flannel subnet 10.1.101.1/24

Machine  State    DNS            Inst id        Series  AZ  Message
0        started  10.205.42.96   juju-44f41f-0  xenial      Running
1        started  10.205.42.119  juju-44f41f-1  xenial      Running
2        started  10.205.42.7    juju-44f41f-2  xenial      Running
3        started  10.205.42.205  juju-44f41f-3  xenial      Running
4        started  10.205.42.184  juju-44f41f-4  xenial      Running

Relation provider                    Requirer                             Interface         Type         Message
easyrsa:client                       etcd:certificates                    tls-certificates  regular
easyrsa:client                       kubeapi-load-balancer:certificates   tls-certificates  regular
easyrsa:client                       kubernetes-master:certificates       tls-certificates  regular
easyrsa:client                       kubernetes-worker:certificates       tls-certificates  regular
etcd:cluster                         etcd:cluster                         etcd              peer
etcd:db                              flannel:etcd                         etcd              regular
etcd:db                              kubernetes-master:etcd               etcd              regular
kubeapi-load-balancer:loadbalancer   kubernetes-master:loadbalancer       public-address    regular
kubeapi-load-balancer:website        kubernetes-worker:kube-api-endpoint  http              regular
kubernetes-master:cni                flannel:cni                          kubernetes-cni    subordinate
kubernetes-master:kube-api-endpoint  kubeapi-load-balancer:apiserver      http              regular
kubernetes-master:kube-control       kubernetes-worker:kube-control       kube-control      regular
kubernetes-worker:cni                flannel:cni                          kubernetes-cni    subordinate
SSH into the master to start poking at the cluster:
$ juju ssh kubernetes-master/0
$ kubectl get no
NAME           STATUS  ROLES   AGE  VERSION
juju-44f41f-4  Ready   <none>  8m   v1.9.4
$ kubectl get po
NAME                                              READY  STATUS   RESTARTS  AGE
default-http-backend-sfh6r                        1/1    Running  0         8m
nginx-ingress-kubernetes-worker-controller-bbvn2  1/1    Running  0         8m
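To drive the cluster from your own machine instead of the master, copy the kubeconfig that the kubernetes-master charm writes to its home directory (assumes kubectl is installed locally):
$ mkdir -p ~/.kube
$ juju scp kubernetes-master/0:config ~/.kube/config
$ kubectl get nodes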