Running CDK in a restricted environment
- Set up a mirror
- Bootstrap an offline Juju environment
- Install CDK using cdk-shrinkwrap
- The cdk-offline test harness
- Upgrades
To use CDK in a restricted-network (or offline) environment, you will need to set up a mirror that makes the required deployment data available to machines in that environment. This includes apt packages, Docker images, and Juju metadata. The mirror will need approximately 200GB of free disk space.
Set up a mirror
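Before proceeding, you may want to confirm that the mirror host has enough free space on the filesystem that will hold the mirror data (the path below assumes the default apt-mirror location used in this guide):
# check free space where apt-mirror will store the archive
df -h /var/spool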
Install packages on the mirror:
sudo apt update
sudo apt install -y apache2 apt-mirror docker.io simplestreams
Configure apt-mirror:
sudo tee /etc/apt/mirror.list > /dev/null <<EOL
set nthreads 20
set _tilde 0
deb http://archive.ubuntu.com/ubuntu xenial main restricted universe multiverse
deb http://archive.ubuntu.com/ubuntu xenial-security main restricted universe multiverse
deb http://archive.ubuntu.com/ubuntu xenial-updates main restricted universe multiverse
deb http://archive.ubuntu.com/ubuntu xenial-backports main restricted universe multiverse
deb-src http://archive.ubuntu.com/ubuntu xenial main restricted universe multiverse
deb-src http://archive.ubuntu.com/ubuntu xenial-security main restricted universe multiverse
deb-src http://archive.ubuntu.com/ubuntu xenial-updates main restricted universe multiverse
deb-src http://archive.ubuntu.com/ubuntu xenial-backports main restricted universe multiverse
clean http://archive.ubuntu.com/ubuntu
EOL
Run apt-mirror (this will download ~190GB of data and can take a while):
sudo apt-mirror
Configure the apache2 mirror site:
sudo tee /etc/apache2/sites-available/ubuntu-mirror.conf > /dev/null <<EOL
<VirtualHost *:80>
ServerName cdk-juju
ServerAlias *
DocumentRoot /var/spool/apt-mirror/mirror/archive.ubuntu.com/
LogLevel info
ErrorLog /var/log/apache2/mirror-archive.ubuntu.com-error.log
CustomLog /var/log/apache2/mirror-archive.ubuntu.com-access.log combined
<Directory /var/spool/apt-mirror/>
Options Indexes FollowSymLinks
AllowOverride None
Require all granted
</Directory>
</VirtualHost>
EOL
Enable the mirror site:
sudo a2ensite ubuntu-mirror.conf
sudo systemctl restart apache2
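To confirm the mirror is being served, you can fetch one of the mirrored release files through apache2 (this assumes curl is installed and apt-mirror has completed at least one run):
curl -I http://localhost/ubuntu/dists/xenial/Release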
Configure docker (ensure the following IP and port are correct for your mirror):
PRIMARYIP=`hostname -i`
export REGISTRY="$PRIMARYIP:5000"
sudo tee /etc/docker/daemon.json > /dev/null <<EOL
{
"insecure-registries": ["$REGISTRY"]
}
EOL
sudo systemctl restart docker
Start the registry process:
sudo docker run -d -p 5000:5000 --restart=always --name registry registry:2
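You can verify the registry is up by querying the standard Docker Registry v2 catalog endpoint (the list will be empty until images are pushed):
curl http://$REGISTRY/v2/_catalog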
Add the pause, ingress, and default backend images:
sudo docker pull k8s.gcr.io/pause-amd64:3.1
sudo docker tag k8s.gcr.io/pause-amd64:3.1 ${REGISTRY}/pause-amd64:3.1
sudo docker push ${REGISTRY}/pause-amd64:3.1
sudo docker pull quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.16.1
sudo docker tag quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.16.1 ${REGISTRY}/kubernetes-ingress-controller/nginx-ingress-controller:0.16.1
sudo docker push ${REGISTRY}/kubernetes-ingress-controller/nginx-ingress-controller:0.16.1
sudo docker pull k8s.gcr.io/defaultbackend:1.4
sudo docker tag k8s.gcr.io/defaultbackend:1.4 ${REGISTRY}/defaultbackend:1.4
sudo docker push ${REGISTRY}/defaultbackend:1.4
If you plan to enable CDK addons (which include the Kubernetes dashboard, metrics-server, kube-dns, etc.), you will need the following images in your registry:
sudo docker pull cdkbot/addon-resizer-amd64:1.8.1
sudo docker tag cdkbot/addon-resizer-amd64:1.8.1 ${REGISTRY}/addon-resizer-amd64:1.8.1
sudo docker push ${REGISTRY}/addon-resizer-amd64:1.8.1
sudo docker pull k8s.gcr.io/heapster-amd64:v1.5.3
sudo docker tag k8s.gcr.io/heapster-amd64:v1.5.3 ${REGISTRY}/heapster-amd64:v1.5.3
sudo docker push ${REGISTRY}/heapster-amd64:v1.5.3
sudo docker pull k8s.gcr.io/heapster-influxdb-amd64:v1.3.3
sudo docker tag k8s.gcr.io/heapster-influxdb-amd64:v1.3.3 ${REGISTRY}/heapster-influxdb-amd64:v1.3.3
sudo docker push ${REGISTRY}/heapster-influxdb-amd64:v1.3.3
sudo docker pull k8s.gcr.io/heapster-grafana-amd64:v4.4.3
sudo docker tag k8s.gcr.io/heapster-grafana-amd64:v4.4.3 ${REGISTRY}/heapster-grafana-amd64:v4.4.3
sudo docker push ${REGISTRY}/heapster-grafana-amd64:v4.4.3
sudo docker pull k8s.gcr.io/k8s-dns-kube-dns-amd64:1.14.10
sudo docker tag k8s.gcr.io/k8s-dns-kube-dns-amd64:1.14.10 ${REGISTRY}/k8s-dns-kube-dns-amd64:1.14.10
sudo docker push ${REGISTRY}/k8s-dns-kube-dns-amd64:1.14.10
sudo docker pull k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64:1.14.10
sudo docker tag k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64:1.14.10 ${REGISTRY}/k8s-dns-dnsmasq-nanny-amd64:1.14.10
sudo docker push ${REGISTRY}/k8s-dns-dnsmasq-nanny-amd64:1.14.10
sudo docker pull k8s.gcr.io/k8s-dns-sidecar-amd64:1.14.10
sudo docker tag k8s.gcr.io/k8s-dns-sidecar-amd64:1.14.10 ${REGISTRY}/k8s-dns-sidecar-amd64:1.14.10
sudo docker push ${REGISTRY}/k8s-dns-sidecar-amd64:1.14.10
sudo docker pull k8s.gcr.io/kubernetes-dashboard-amd64:v1.8.3
sudo docker tag k8s.gcr.io/kubernetes-dashboard-amd64:v1.8.3 ${REGISTRY}/kubernetes-dashboard-amd64:v1.8.3
sudo docker push ${REGISTRY}/kubernetes-dashboard-amd64:v1.8.3
sudo docker pull k8s.gcr.io/metrics-server-amd64:v0.2.1
sudo docker tag k8s.gcr.io/metrics-server-amd64:v0.2.1 ${REGISTRY}/metrics-server-amd64:v0.2.1
sudo docker push ${REGISTRY}/metrics-server-amd64:v0.2.1
If you plan to enable Calico/Canal, add the following to your registry:
sudo docker pull quay.io/calico/node:v2.6.10
sudo docker tag quay.io/calico/node:v2.6.10 ${REGISTRY}/calico/node:v2.6.10
sudo docker push ${REGISTRY}/calico/node:v2.6.10
sudo docker pull quay.io/calico/kube-controllers:v1.0.4
sudo docker tag quay.io/calico/kube-controllers:v1.0.4 ${REGISTRY}/calico/kube-controllers:v1.0.4
sudo docker push ${REGISTRY}/calico/kube-controllers:v1.0.4
If you plan to include Sonatype Nexus or Rancher in your environment, add the following to your registry:
sudo docker pull sonatype/nexus3:latest
sudo docker tag sonatype/nexus3:latest ${REGISTRY}/nexus3:latest
sudo docker push ${REGISTRY}/nexus3:latest
sudo docker pull rancher/rancher:latest
sudo docker tag rancher/rancher:latest ${REGISTRY}/rancher:latest
sudo docker push ${REGISTRY}/rancher:latest
If you cannot docker pull and docker push on the same machine because of ingress/egress restrictions, use docker save and docker load with a connected machine to make the above images available from your mirror. For example:
## on a connected system
sudo docker pull k8s.gcr.io/pause-amd64:3.1
sudo docker pull quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.16.1
sudo docker pull <additional-images>
sudo docker save -o images.tar \
k8s.gcr.io/pause-amd64 \
quay.io/kubernetes-ingress-controller/nginx-ingress-controller \
<additional-images>
## transfer the tarball from above to your mirror
## on your restricted mirror
PRIMARYIP=`hostname -i`
export REGISTRY="$PRIMARYIP:5000"
sudo docker load --input=images.tar
sudo docker tag k8s.gcr.io/pause-amd64:3.1 ${REGISTRY}/pause-amd64:3.1
sudo docker push ${REGISTRY}/pause-amd64:3.1
sudo docker tag quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.16.1 ${REGISTRY}/kubernetes-ingress-controller/nginx-ingress-controller:0.16.1
sudo docker push ${REGISTRY}/kubernetes-ingress-controller/nginx-ingress-controller:0.16.1
sudo docker tag <additional-images> ${REGISTRY}/<additional-images>
sudo docker push ${REGISTRY}/<additional-images>
Mirror the simplestreams agent and image data used by Juju:
export WORKDIR=/var/spool/sstreams/juju
sudo sstream-mirror --no-verify --progress --max=1 \
--path=streams/v1/index2.sjson \
https://streams.canonical.com/juju/tools/ \
$WORKDIR 'arch=amd64' 'release~(xenial|bionic)' 'version~(2.2|2.3|2.4)'
export WORKDIR=/var/spool/sstreams/lxdkvm
sudo sstream-mirror --keyring=/usr/share/keyrings/ubuntu-cloudimage-keyring.gpg \
--progress --max=1 --path=streams/v1/index.json \
https://cloud-images.ubuntu.com/releases/ \
$WORKDIR/_latest 'arch=amd64' 'release~(trusty|xenial)' \
'ftype~(lxd.tar.xz|squashfs|root.tar.xz|root.tar.gz|disk1.img|.json|.sjson)'
Juju metadata must be served over SSL. Generate an OpenSSL config and a self-signed certificate:
sudo mkdir -p /etc/pki/tls/private/
sudo mkdir -p /etc/pki/tls/certs/
# Ensure the following IP is correct for your mirror
export PRIMARYIP=`hostname -i`
sudo tee /root/$HOSTNAME.conf > /dev/null <<EOL
[ req ]
prompt = no
default_bits = 4096
distinguished_name = req_distinguished_name
req_extensions = req_ext
[ req_distinguished_name ]
C=GB
ST=London
L=London
O=Canonical
OU=Canonical
CN=$HOSTNAME
[ req_ext ]
subjectAltName = @alt_names
[alt_names]
DNS.1 = $HOSTNAME
DNS.2 = $PRIMARYIP
IP.1 = $PRIMARYIP
EOL
sudo openssl req -new -newkey rsa:4096 -days 3650 -nodes -x509 \
-config /root/$HOSTNAME.conf \
-keyout /etc/pki/tls/private/mirror.key \
-out /etc/pki/tls/certs/mirror.crt
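As a quick sanity check, you can confirm the generated certificate carries the expected subject alternative names:
sudo openssl x509 -in /etc/pki/tls/certs/mirror.crt -noout -text | grep -A1 'Subject Alternative Name'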
Configure the apache2 metadata site:
sudo tee /etc/apache2/sites-available/sstreams-mirror.conf > /dev/null <<EOL
<VirtualHost *:443>
ServerName sstreams.cdk-juju
ServerAlias *
DocumentRoot /var/spool/sstreams/
SSLCACertificatePath /etc/ssl/certs
SSLCertificateFile /etc/pki/tls/certs/mirror.crt
SSLEngine On
SSLCertificateKeyFile /etc/pki/tls/private/mirror.key
LogLevel info
ErrorLog /var/log/apache2/mirror-lxdkvm-error.log
CustomLog /var/log/apache2/mirror-lxdkvm-access.log combined
<Directory /var/spool/sstreams/>
Options Indexes FollowSymLinks
AllowOverride None
Require all granted
</Directory>
</VirtualHost>
EOL
Enable the metadata site:
sudo a2enmod ssl
sudo a2ensite sstreams-mirror.conf
sudo systemctl restart apache2
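To confirm the metadata is reachable over SSL, you can request a directory listing of the mirrored Juju agent stream (the -k flag skips verification of the self-signed certificate; the path assumes the sstream-mirror output locations used above):
curl -sk https://$PRIMARYIP/juju/streams/v1/ | head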
Bootstrap an offline Juju environment
Juju works with clouds: they may be public (AWS), private (MAAS), local (LXD containers), or manual (pre-provisioned machines). The most common use case in restricted-network environments is to use pre-provisioned machines as a 'manual' cloud. The next steps highlight the requirements for this scenario; skip ahead to the bootstrap command below if you are not planning to use pre-provisioned machines with Juju.
Before Juju can use a manual cloud, ensure your Juju client machine (typically a laptop or management workstation) can SSH to the pre-existing machine you want to use as your controller. Next, add the cloud to Juju:
juju add-cloud
Choose 'manual' and input the requested information when prompted. For more information, see the manual cloud documentation.
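If you prefer a non-interactive approach, a manual cloud can also be described in a YAML file and added in one step (the cloud name and controller address below are placeholders):
cat > manual-cloud.yaml <<EOL
clouds:
  mycloud:
    type: manual
    endpoint: ubuntu@<controller-ip>
EOL
juju add-cloud mycloud manual-cloud.yaml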
Bootstrap Juju using the mirror configured earlier:
export MIRROR=<mirror-hostname-or-ip>
juju bootstrap \
--config apt-mirror=http://$MIRROR/ubuntu/ \
--config agent-stream=release \
--config container-image-metadata-url=https://$MIRROR/lxdkvm/ \
--config agent-metadata-url=https://$MIRROR/juju/ \
--debug
At this point, Juju will walk you through the bootstrap process. If you created a manual cloud, you should see it listed in the "Select a cloud" prompt. Once completed, you are ready to deploy applications. For most clouds, Juju will add machines as needed to fulfill a deployment request. If you bootstrapped a 'manual' cloud, you will need to tell Juju about the machines that it can use:
# do this for each machine you want to make available to Juju
juju add-machine ssh:<user>@<ip>
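Once added, the machines should appear with a started status in the output of:
juju machines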
Install CDK using cdk-shrinkwrap
cdk-shrinkwrap will create a tarball of charms, resources, snaps, and scripts that facilitate deployment of CDK in a restricted-network environment.
On your juju client machine, get the required source:
sudo apt install git unzip
git clone https://github.com/juju-solutions/cdk-shrinkwrap.git
Enter the repo directory and run the script for your target CDK bundle (using canonical-kubernetes as an example):
cd cdk-shrinkwrap
./shrinkwrap.py canonical-kubernetes --channel stable
Note: the machine architecture where you run shrinkwrap.py must match your target machine architecture.
This will produce a versioned and timestamped tarball of CDK, e.g., canonical-kubernetes-stable-2018-08-08-18-02-01.tar.gz.
Untar the archive and run the deploy.sh script:
tar -xf canonical-kubernetes-*.tar.gz
cd canonical-kubernetes-*
# verify deploy.sh accurately reflects your environment
./deploy.sh
Note: you should inspect the deploy.sh script to ensure it reflects your environment. For example, if you added machines to Juju manually, adjust MACHINES= to enumerate the pre-existing machine IDs (e.g., MACHINES="0 1 2 3 n") rather than having the script call juju add-machine.
The tarball also includes add-unit-* scripts for customizing your deployment. After running deploy.sh, you may want to add additional units to individual applications. This can be done with:
./add-unit-[easyrsa|etcd|flannel|kubeapi-load-balancer|kubernetes-master|kubernetes-worker].sh
Configure the kubernetes-worker charm to use the docker registry you set up earlier:
export REGISTRY=<registry-hostname-or-ip>:<port>
juju config kubernetes-worker docker-opts="--insecure-registry=$REGISTRY" \
kubelet-extra-args="pod-infra-container-image=$REGISTRY/pause-amd64:3.1" \
default-backend-image="$REGISTRY/defaultbackend:1.4" \
nginx-image="$REGISTRY/kubernetes-ingress-controller/nginx-ingress-controller:0.16.1"
If you plan to enable CDK addons, configure the kubernetes-master charm with the registry you set up earlier:
export REGISTRY=<registry-hostname-or-ip>:<port>
juju config kubernetes-master addons-registry=$REGISTRY \
enable-nvidia-plugin=false
Otherwise, configure the kubernetes-master charm to disable the addons:
juju config kubernetes-master enable-dashboard-addons=false \
enable-kube-dns=false \
enable-metrics=false \
enable-nvidia-plugin=false
The cdk-offline test harness
Testing an offline deployment is challenging. We've created cdk-offline to simulate an offline environment in AWS. It creates a VPC with two subnets, one private and one public. One machine is created on the public subnet, which hosts a squid proxy and the Juju client; we'll refer to this machine as the client machine. On the private subnet, a controller is bootstrapped.
A gateway is created and attached to the VPC. A route is associated with the private subnet that routes all traffic through the client machine. On the client machine, iptables rules redirect all traffic destined for ports 80 and 443 to ports 3129 and 3130, respectively, where squid is listening. All other traffic is simply dropped.
The squid proxy server will proxy your apt mirror (which you set up earlier).
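For reference, the redirection described above can be achieved with NAT rules along these lines (an illustrative sketch of the kind of rules the harness sets up, not an exact copy of them):
# redirect forwarded HTTP/HTTPS traffic to the local squid intercept ports
sudo iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 3129
sudo iptables -t nat -A PREROUTING -p tcp --dport 443 -j REDIRECT --to-port 3130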
Preparation:
- Create an apt mirror
- Create a private docker registry
- Create a cdk-shrinkwrap tarball
- Configure your AWS CLI tool for the ap-southeast-2 region (this may be made configurable later)
Clone cdk-offline:
git clone https://github.com/juju-solutions/cdk-offline.git
Enter the repo directory and run deploy-squid.sh:
cd cdk-offline
./deploy-squid.sh <your apt mirror IP address>
Be ready to enter your AWS credentials when prompted. These are used to bootstrap the Juju environment and are not persisted.
The deployment can take a while. When it completes, several pieces of information are printed, including the IP address of the client machine. Make a note of this address.
During the deployment, a private key will be created and stored in the file cdk-offline.pem so that you can log into the client machine. Use this key to copy your cdk-shrinkwrap tarball to the client machine, e.g.:
scp -i ./cdk-offline.pem canonical-kubernetes-stable-2018-08-08-18-02-01.tar.gz ubuntu@123.45.67.89:
Then ssh to the client machine:
ssh -i ./cdk-offline.pem ubuntu@123.45.67.89
On the client machine, unpack the cdk-shrinkwrap tarball:
tar -xf canonical-kubernetes-*.tar.gz
As noted in the cdk-shrinkwrap section above, the unpacked cdk-shrinkwrap directory will contain a deploy.sh script. Edit this file by changing:
juju add-machine -n X
To this:
juju add-space cdkoffline 172.32.0.0/24
juju add-machine -n X --constraints spaces=cdkoffline
Finally, run the deployment script:
./deploy.sh
Once complete, you should have a working CDK installation inside a simulated offline environment in AWS.
Upgrades
In an internet-connected deployment, the charms download snap package updates directly from the snap store. In an offline deployment, where the charms cannot communicate with the snap store, snap updates must be downloaded and attached to the charms manually. A script for doing this is shown below.
#!/bin/bash
set -eux
SNAP_CHANNEL="1.12/stable"
ALL_SNAPS="kube-apiserver kube-scheduler kube-controller-manager kube-proxy kubectl kubelet cdk-addons"
MASTER_SNAPS="kube-apiserver kube-scheduler kube-controller-manager kube-proxy kubectl cdk-addons"
WORKER_SNAPS="kube-proxy kubelet kubectl"
# Download latest snaps from designated channel
for snap in $ALL_SNAPS
do
snap download --channel=$SNAP_CHANNEL $snap
done
rm *.assert
# Attach new snaps to master units
for snap in $MASTER_SNAPS
do
juju attach kubernetes-master $snap=`ls ${snap}_*.snap`
done
# Attach new snaps to worker units
for snap in $WORKER_SNAPS
do
juju attach kubernetes-worker $snap=`ls ${snap}_*.snap`
done
# Upgrade to new snaps on masters, one at a time
for unit in `juju status --format json | jq -r '.applications|.["kubernetes-master"].units | keys[]'`
do
juju run-action $unit upgrade --wait
done
# Upgrade to new snaps on workers, one at a time
for unit in `juju status --format json | jq -r '.applications|.["kubernetes-worker"].units | keys[]'`
do
juju run-action $unit upgrade --wait
done