ci(brigade.js): add e2e job #955
The run here got further, but failed while loading images in, and then failed on the …
From the log, I see that only these 3 images fail to load:

```
Error: failed to load image: exit status 1
kind not installed or error loading image: brigadecore/brigade-generic-gateway:c20a3a5
Loading brigadecore/brigade-vacuum:c20a3a5
Loading brigadecore/brig:c20a3a5
Loading brigadecore/brigade-worker:c20a3a5
Error: failed to load image: exit status 1
kind not installed or error loading image: brigadecore/brigade-worker:c20a3a5
Loading brigadecore/git-sidecar:c20a3a5
Error: failed to load image: exit status 1
kind not installed or error loading image: brigadecore/git-sidecar:c20a3a5
```

So, is it safe to assume that all the other images have been loaded correctly? Weird. Is there any way to freeze the cluster at this stage and do …
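As an aside, one way to make the image loading step more robust might be to retry the failed `kind load docker-image` calls. A minimal sketch as a Brigade job, assuming the image tags from the log above and an image that already has docker and kind available; the job name and image are illustrative:

```javascript
const { events, Job } = require("brigadier");

events.on("exec", (e, p) => {
  // Only the three images that failed in the log above.
  const images = [
    "brigadecore/brigade-generic-gateway:c20a3a5",
    "brigadecore/brigade-worker:c20a3a5",
    "brigadecore/git-sidecar:c20a3a5"
  ];

  const loader = new Job("load-images", "radumatei/golang-dind:1.11-dev"); // any image with docker + kind works
  loader.privileged = true;
  // Retry each `kind load docker-image` up to 3 times before giving up.
  loader.tasks = images.map(img =>
    `for i in 1 2 3; do kind load docker-image ${img} && break || sleep 5; done`
  );
  return loader.run();
});
```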
After some digging, it turns out the dind container needs to run privileged and mount `/lib/modules` and `/sys/fs/cgroup` from the host. Here's an example of a working pod specification that can be used to start a Docker-in-Docker container:

```yaml
---
apiVersion: v1
kind: Pod
metadata:
  name: dind-k8s
spec:
  containers:
    - name: dind
      image: radumatei/golang-dind:1.11-dev
      securityContext:
        privileged: true
      volumeMounts:
        - mountPath: /lib/modules
          name: modules
          readOnly: true
        - mountPath: /sys/fs/cgroup
          name: cgroup
        - name: dind-storage
          mountPath: /var/lib/docker
  volumes:
    - name: modules
      hostPath:
        path: /lib/modules
        type: Directory
    - name: cgroup
      hostPath:
        path: /sys/fs/cgroup
        type: Directory
    - name: dind-storage
      emptyDir: {}
```

This means that with the current Brigade release, we can't set up a pod with these mounts. However, #966 and brigadecore/brigadier#22 add support for this (although there are some checks needed). Here's how the above would translate into a Brigade job:

```javascript
const { events, Job } = require("brigadier")
events.on("exec", (e, p) => {
  const docker = new Job("dind", "radumatei/golang-dind:1.11-dev")
  docker.privileged = true;
  docker.volumeConfig = [
    {
      mount: {
        name: "modules",
        mountPath: "/lib/modules",
        readOnly: true
      },
      volume: {
        name: "modules",
        hostPath: {
          path: "/lib/modules",
          type: "Directory"
        }
      }
    },
    {
      mount: {
        name: "cgroup",
        mountPath: "/sys/fs/cgroup"
      },
      volume: {
        name: "cgroup",
        hostPath: {
          path: "/sys/fs/cgroup",
          type: "Directory"
        }
      }
    },
    {
      mount: {
        name: "docker-graph-storage",
        mountPath: "/var/lib/docker"
      },
      volume: {
        name: "docker-graph-storage",
        emptyDir: {}
      }
    }
  ]

  docker.tasks = [
    // start the Docker daemon in the background and give it time to come up
    "dockerd-entrypoint.sh &",
    "sleep 20",
    // install kubectl and kind
    "curl -LO https://storage.googleapis.com/kubernetes-release/release/`curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/linux/amd64/kubectl",
    "chmod +x kubectl",
    "mv kubectl /go/bin/",
    "wget https://github.com/kubernetes-sigs/kind/releases/download/v0.4.0/kind-linux-amd64",
    "chmod +x kind-linux-amd64",
    "mv kind-linux-amd64 /go/bin/kind",
    // sanity-check the Docker daemon, then create the kind cluster
    "docker run hello-world",
    "kind create cluster",
    `export KUBECONFIG="$(kind get kubeconfig-path)"`,
    "kubectl cluster-info",
    // clear the service env vars injected by the host cluster
    "unset $(env | grep KUBERNETES_ | xargs)",
    "kubectl get pods -w"
  ];

  docker.run()
})
```

Once we agree on the structure of the public API of #966, I'll go ahead and create a new library. **Edit:** see brigadecore/brigade-utils#20
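As a teaser, here's a hypothetical sketch of what such a library could look like from the user's side, folding the privileged/volume/bootstrap boilerplate above into a single job type. The `KindJob` name and package path are assumptions here, not a released API:

```javascript
const { events } = require("brigadier");
// Assumed package and export; see brigadecore/brigade-utils#20 for the actual proposal.
const { KindJob } = require("@brigadecore/brigade-utils");

events.on("exec", (e, p) => {
  // A KindJob would come pre-configured with the privileged security context,
  // host mounts, and dind/kind bootstrap tasks from the example above.
  const e2e = new KindJob("e2e");
  e2e.tasks.push(
    "cd /src",
    "make e2e" // assumed make target that runs the e2e suite against the kind cluster
  );
  return e2e.run();
});
```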
@radu-matei I dusted this branch off... I'm using the latest version of the …
You are right: I did merge the PR, but didn't release a new version to NPM yet.
We're getting closer! The e2e job itself runs great when executing the event locally, sans the usual GH Check notifications. But when wrapped with the latter, I'm currently seeing brigadecore/brigade-utils#29 (that's why the check results never report back here). cc @radu-matei
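For context, the wrapping looks roughly like this; a minimal sketch assuming the `Check` helper exported by brigade-utils (the image name and task are illustrative):

```javascript
const { events, Job } = require("brigadier");
const { Check } = require("@brigadecore/brigade-utils");

// Build the e2e job itself; this is the part that runs fine locally.
function e2e(e, p) {
  const job = new Job("e2e", "vdice/go-dind:latest"); // illustrative tag
  job.privileged = true;
  job.tasks = ["make e2e"]; // assumed entrypoint for the e2e suite
  return job;
}

// Wrap the job so its result is reported back through the GitHub Checks API.
events.on("check_suite:requested", (e, p) => {
  return new Check(e, p, e2e(e, p)).run();
});
```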
The E2E job is passing and this is now ready for review!
🎉 I propose we monitor the cluster for a few runs, to make sure there aren't any memory/… issues.
**Update 10/28/19:** The various kinks/issues seem to have been ironed out and, as seen in the Check suite for this PR, the e2e job is running successfully. This PR is now ready for review.
This is where I'm at with adding @dgkanatsios's e2e tests to CI. The only real change to enable running in a container was the bin dir setup.
As kind requires a Docker daemon to run (Kubernetes In Docker!), I went the route of running the tests in a Docker-in-Docker fashion, using an image that wraps docker:stable-dind with some utilities we need (https://github.com/vdice/go-dind).
The part I'm stuck on is getting the kind cluster to properly launch in the context of a container in a k8s pod, as is the case when running via Brigade.
It variously fails on setup, as in: …
Or, the cluster creation may be successful, but any attempts at contacting the API server fail: …
As a comparison, the kind cluster launches just fine (and the e2e tests pass!) when running via the same Docker image directly (using Docker for Mac): …
So perhaps k8s-level logic is somehow interfering with running kind (in a Docker container in a k8s pod)? (For my testing, I've been using a pretty stock AKS cluster to run the Brigade job off of this branch.)
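One suspect worth isolating (and the reason for the `unset` task in the job above) is the set of service discovery environment variables Kubernetes injects into every pod (`KUBERNETES_SERVICE_HOST` and friends), which kubectl can pick up instead of the kind kubeconfig. A minimal sketch of checking just that step, reusing the dind image from above; note it strips the `=value` part so `unset` only sees variable names:

```javascript
const { events, Job } = require("brigadier");

events.on("exec", (e, p) => {
  const job = new Job("kind-env-check", "radumatei/golang-dind:1.11-dev");
  job.privileged = true;
  job.tasks = [
    // Show the service vars injected by the host (AKS) cluster.
    "env | grep KUBERNETES_ || true",
    // Clear them, keeping only the names so `unset` accepts the arguments.
    "unset $(env | grep KUBERNETES_ | cut -d= -f1 | xargs)",
    // If interference was the problem, kubectl should now hit the kind cluster
    // (assuming it was created and KUBECONFIG exported earlier in the job).
    "kubectl cluster-info"
  ];
  return job.run();
});
```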
Thoughts/ideas, @dgkanatsios and others?