diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
index 83fbb8bda..647bfa890 100644
--- a/CONTRIBUTING.md
+++ b/CONTRIBUTING.md
@@ -4,6 +4,7 @@ The Cartographer project team welcomes contributions from the community. If you
 wish to contribute code and you have not signed our [contributor license
 agreement](https://cla.vmware.com/cla/1/preview), our bot will update the issue when you open a Pull Request. For any
 questions about the CLA process, please refer to our [FAQ](https://cla.vmware.com/faq).
+
 ## Development Dependencies
 
 - [`ctlptl`]: for deploying local changes to a local registry
@@ -13,7 +14,6 @@ The Cartographer project team welcomes contributions from the community. If you
 - [`kind`]: to run a local cluster
 - [`ko`]: for building and pushing the controller's container image
 - [`kuttl`]: for integration testing
-- [`pack`]: for building the controller's container image using buildpacks.
 
 [`ctlptl`]: https://github.com/tilt-dev/ctlptl
 [`go`]: https://golang.org/dl/
@@ -22,19 +22,30 @@ The Cartographer project team welcomes contributions from the community. If you
 [`kind`]: https://kind.sigs.k8s.io/docs/user/quick-start/
 [`ko`]: https://github.com/google/ko
 [`kuttl`]: https://github.com/kudobuilder/kuttl
-[`pack`]: https://github.com/buildpacks/pack
 [`ytt`]: https://github.com/vmware-tanzu/carvel-ytt
+
 ## Running a local cluster
-A local kind cluster with Cartographer installed can be stood up with the command:
-```yaml
-make deploy-local
+
+A local Kubernetes cluster with a local registry and Cartographer installed can
+be stood up with the [hack/setup.sh](./hack/setup.sh) script:
+
+```bash
+# here we're performing a few actions, one after another:
+#
+# - bringing the cluster up with a local registry already trusted
+# - installing cert-manager, a dependency of Cartographer
+# - installing cartographer from the single-file release
+#
+./hack/setup.sh cluster cert-manager cartographer
 ```
 
 This cluster will use a local registry. The controller running in the cluster
 will demonstrate the behavior of the codebase when the deploy command was run
 (Devs can check expected behavior by rerunning the deploy command).
 
+
 ## Running the tests
 
 ### Unit tests
@@ -45,6 +56,7 @@ Nothing else required aside from doing the equivalent of `go test ./...`:
 make test
 ```
 
+
 ### Integration tests
 
 Integration tests involve a Kubernetes API server, and a persistence service (etcd).
@@ -56,18 +68,19 @@ There are two sets of Integration tests:
    or
    make test-kuttl-kind  # see below section about testing with a full cluster.
   ```
-
-2. Those that require asynchronous testing, run using [ginkgo](https://onsi.github.io/ginkgo/).
+
+2. Those that require asynchronous testing, run using [ginkgo](https://onsi.github.io/ginkgo/).
 ```
 make test-integration
 ```
 
+
 ### Running integration tests without a complete cluster
 
-For speed, both `kuttl` and the ginkgo tests can (and usually do) use
+For speed, both `kuttl` and the ginkgo tests can (and usually do) use
 [`envtest` from controller-runtime](https://pkg.go.dev/sigs.k8s.io/controller-runtime/pkg/envtest)
 
-The `envTest` package runs a simplified control plane, using `kubernetes-apiserver` and `etcd`. You'll need `etcd`
+The `envtest` package runs a simplified control plane, using `kube-apiserver` and `etcd`. You'll need `etcd`
 and `kube-apiserver` installed in `/usr/local/kubebuilder/bin` and can download them with:
 
@@ -78,34 +91,15 @@ curl -sSLo envtest-bins.tar.gz "https://storage.googleapis.com/kubebuilder-tools
 
 **Note:** `envtest` cannot run pods, so it is limited, but it is typically all that's needed to test controllers and webhooks.
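+
+Once downloaded, the binaries need to land in `/usr/local/kubebuilder/bin`. A
+minimal sketch, assuming the archive keeps the usual top-level
+`kubebuilder/bin/` layout of the kubebuilder-tools tarballs:
+
+```bash
+# unpack etcd and kube-apiserver into /usr/local/kubebuilder/bin,
+# the default location where envtest looks for them
+#
+sudo tar -C /usr/local -xzf envtest-bins.tar.gz
+```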
+
 ### Running integration tests with a complete cluster
 
-Declarative `kuttl` tests can be run against a real cluster, using `kind`. This approach is slower but can be useful
+Declarative `kuttl` tests can be run against a real cluster, using `kind`. This approach is slower but can be useful
 sometimes for easy debugging with the `kubectl` command line tool.
 
 ```
 make test-kuttl-kind  # see below section about testing with a full cluster.
 ```
 
-## Updating GitLab CI configuration
-
-The configuration for GitLab CI that is picked up by the GitLab runner
-([./.gitlab-ci.yml](./.gitlab-ci.yml)) is generated (through `make
-gen-ci-manifest`) based of a template that you can find at
-[./.gitlab/.gitlab-ci.yml](./.gitlab/.gitlab-ci.yml).
-
-i.e., if you want to introduce extra commands for pipeline, make sure you
-update `./.gitlab/.gitlab-ci.yml` and then run `make gen-ci-manifest`. If you
-want to include or update dependencies in the base image, update
-`./.gitlab/Dockerfile` and then run `make gen-ci-manifest`.
-
-This allows us to declaratively express how the base image to be used when
-running the tests should look like
-([./.gitlab/Dockerfile](./.gitlab/Dockerfile)) and have such image reference
-specified in the final CI manifest by leveraging [kbld].
-
-See `gen-ci-manifest` on [./Makefile](./Makefile) to know more about how the
-generation takes place.
-
 ## Merge Request Flow
 
@@ -115,16 +109,20 @@ the merge request. If the work is not blocking other stories, the merge request
 overnight, to allow others on the team time to read. The following morning, a
 merge should be completed.
 
+
 ## Maintaining a useful commit history
 
 The commit history should be legible and (to our greatest ability) logical. In
 pursuit of this:
 
 1. Use small commits. Keeping logical work in its own commit helps document
    code.
+
 1. Remove WIP commits from a branch before merging it into another. If a WIP
    commit is made at the end of a day, a soft reset the following morning can
    help ensure that only logical commits remain in the branch.
+
 1. When merging, do not fast forward. E.g. use `git merge --no-ff`. This will
    make clear the small commits that belong to a logical group.
+
 1. When merging Branch A into Branch B, perform a rebase of Branch A on Branch
    B. This will ensure that the commits of Branch A are easily readable when
    reading Branch B's history.
@@ -149,63 +147,95 @@ section](#development-dependencies).
 
 ### What it consists of
 
-Releasing Cartographer consists of producing a YAML file that contains all the
-necessary Kubernetes objects that needs to be submitted to a cluster in order
-to have the controller running there with the proper access to the Kubernetes
-API in such environment.
+Releasing Cartographer consists of producing:
+
+- a YAML file that contains all the Kubernetes objects that need to be
+  submitted to a cluster in order to have the controller running there with
+  proper access to the Kubernetes API in that environment.
+
+- [carvel Packaging] objects to integrate with the package management APIs
+  offered by [kapp-controller].
+
+```
+./release/
+├── cartographer.yaml
+└── package
+    ├── package-install.yaml
+    ├── package-metadata.yaml
+    └── package.yaml
+```
+
+Although the process is automated with GitHub Actions whenever a new tag is
+pushed, it can also be performed manually (the GitHub workflow just wraps the
+steps below).
 
 ```bash
-# grab the credentials from lastpass for dockerhub
+# prepare ~/.docker/config.json with the credentials necessary for
+# pushing images to a container image registry.
+#
+# if you're a maintainer, head to `lastpass` and search for the
+# projectcartographer dockerhub credentials.
 #
 docker login
 
-# point `make` at the `release` target, which takes care of generating any YAML
-# files based of Go code, as well as building the container image with the
-# controller that's then placed in the Deployment's pod template spec.
+
+# run the release generation script pointing at our registry of choice
+# (for a final release, `projectcartographer`).
 #
-# when done, a `releases/release.yaml` file will have been populated with all
-# the YAML necessary for bringing `cartographer` up in a Kubernetes cluster via
-# `kubectl apply -f ./releases/release.yaml`.
+# when done, the `release` directory will be populated with a series of
+# YAML files as described in the example above.
 #
-KO_DOCKER_REPO=projectcartographer \
-  make release
+REGISTRY=projectcartographer ./hack/release.sh
 ```
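+
+If you just want to sanity-check the result, the plain-YAML flavor of the
+release can be applied directly. A minimal sketch, assuming the `./release`
+layout shown above and a cluster that already has cert-manager installed:
+
+```bash
+# submit the single-file release to the current cluster
+#
+kubectl apply -f ./release/cartographer.yaml
+```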
 
-That final file (`releases/release.yaml`) consists of:
 
-1. `CustomResourceDefinition` objects that tells `kube-apiserver` how to expand
-   its API to now know about Cartographer primitives
+## Running the end-to-end test
 
-1. RBAC objects that are bound to the controller ServiceAccount so that the
-   controller can reach the Kubernetes API server to retrieve state and mutate
-   it when necessary
+Cartographer has a script that allows users to:
 
-1. Deployment-related objects that stand Cartographer up in a Pod so our code runs
-   inside the cluster (using that ServiceAccount that grants it the permissions
-   when it comes to reaching out to the API).
+- create a local cluster with a local registry
+- push the image of the controller to that cluster
+- run the controller
+- create the supply chain(s) and workload(s) in the example directory
+- ensure that the expected objects are created by the Cartographer controller
 
-As the `Deployment` needs a container image for the pods to use to run our
-controller, we must have a way of building that container image in the first
-place. For that, we make use of `ko`, which given a YAML file where it can find
-an `image: ko://<import-path>`, it then replaces that with the reference to the
-image it built and pushed to a registry configured via `KO_DOCKER_REPO` (see
-[deployment.yaml](./config/manager/deployment.yaml)).
+To run the tests, first make sure you have Docker running and no Kubernetes
+cluster already running, then use the [setup.sh](./hack/setup.sh) script:
 
-## Running the e2e tests
-Cartographer has a script that allows users to:
- - create a local cluster with a local repository
 - - push the image of the controller to that cluster
 - - run the controller
 - - create the supply chain(s) and workload(s) the example directory
 - - assure that the expected objects are created by the Cartographer controller
-
-To run the tests:
 ```bash
-./hack/ci/e2e.sh run
+# 1. create a local cluster using kind as well as a
+#    local container registry that is already
+#    trusted by the cluster, including
+#    pre-requisite dependencies like kapp-controller
+#    and cert-manager.
+#
+# 2. produce a new local release of cartographer and
+#    have it installed in the local cluster
+#
+# 3. install all the dependencies that are used by
+#    the examples
+#
+# 4. submit the example and wait for the final
+#    deployment (knative service with our app) to
+#    occur.
+#
+./hack/setup.sh cluster cartographer example-dependencies example
 ```
 
-To teardown (necessary if users wish to rerun the tests):
+
+Once the execution has finished, you can either play around with the
+environment or tear it down.
+
 ```bash
-./hack/ci/e2e.sh teardown
+# get rid of all of the containers created to
+# support the cluster and its local registry
+#
+./hack/setup.sh teardown
 ```
+
+ps.: those commands can all be specified at once, for instance:
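+
+```bash
+# the subcommands run in the order given, so a single invocation can
+# bring everything up, run the example, and clean up after itself
+# (an illustrative combination; pick the subcommands you need)
+#
+./hack/setup.sh cluster cartographer example-dependencies example teardown
+```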
+
+[carvel Packaging]: https://carvel.dev/kapp-controller/docs/latest/packaging/
+[imgpkg bundle]: https://carvel.dev/imgpkg/docs/latest/
diff --git a/README.md b/README.md
index 756dc0e56..ba5ce41db 100644
--- a/README.md
+++ b/README.md
@@ -129,7 +129,7 @@ Succeeded
 
 ps.: if you didn't use `kapp`, but instead just `kubectl apply`, make sure you
 wait for the deployment to finish before proceeding as `kubectl apply` doesn't
-wait by default: 
+wait by default:
 
 ```bash
 kubectl get deployment --namespace cartographer-system --watch
 ```
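+
+Alternatively, `kubectl` can block until the rollout completes. A minimal
+sketch (the deployment name is an assumption; check the output of the command
+above for the actual one):
+
+```bash
+# wait until the controller deployment reports a successful rollout
+#
+kubectl rollout status deployment/cartographer-controller \
+  --namespace cartographer-system
+```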
@@ -146,6 +146,126 @@ Once finished, Project Cartographer has been installed in the cluster
 - navigate to the [examples directory](./examples) for a walkthrough.
 
 
+### extra: installation using Carvel Packaging
+
+Although Cartographer can be installed via plain `kubectl apply` or `kapp
+deploy` as mentioned above, this repository also provides [carvel Packaging]
+objects.
+
+To make use of them, first make sure the following prerequisites are satisfied:
+
+1. admin access to a Kubernetes cluster and [cert-manager] installed (see
+   above)
+
+2. [kapp-controller] is already installed in the cluster
+
+```bash
+kubectl get crd packageinstalls.packaging.carvel.dev
+```
+```console
+NAME                                    CREATED AT
+packageinstalls.packaging.carvel.dev   2021-09-13T14:32:00Z
+```
+
+In case it isn't (i.e., you see _"packageinstalls.packaging.carvel.dev" not
+found_), proceed with installing it:
+
+```bash
+kapp deploy --yes -a kapp-controller \
+  -f https://github.com/vmware-tanzu/carvel-kapp-controller/releases/download/v0.24.0/release.yml
+```
+```console
+Target cluster 'https://127.0.0.1:39993' (nodes: cartographer-control-plane)
+
+Changes
+
+Namespace  Name                                                     Kind
+(cluster)  apps.kappctrl.k14s.io                                    CustomResourceDefinition
+^          internalpackagemetadatas.internal.packaging.carvel.dev  CustomResourceDefinition
+^          internalpackages.internal.packaging.carvel.dev          CustomResourceDefinition
+^          kapp-controller                                         Namespace
+
+
+2:56:08PM: ---- waiting on 1 changes [14/15 done] ----
+2:56:13PM: ok: reconcile apiservice/v1alpha1.data.packaging.carvel.dev (apiregistration.k8s.io/v1) cluster
+2:56:13PM: ---- applying complete [15/15 done] ----
+2:56:13PM: ---- waiting complete [15/15 done] ----
+
+Succeeded
+```
+
+3. the `default` service account has the capabilities necessary for submitting
+   all of the objects above to the cluster
+
+```bash
+kubectl create clusterrolebinding default-cluster-admin \
+  --clusterrole=cluster-admin \
+  --serviceaccount=default:default
+```
+```console
+clusterrolebinding.rbac.authorization.k8s.io/default-cluster-admin created
+```
+
+That done, submit the packaging objects to Kubernetes so that `kapp-controller`
+will materialize them into an installation of Cartographer:
+
+```bash
+kapp deploy --yes -a cartographer -f ./release/package
+```
+```console
+Target cluster 'https://127.0.0.1:42483' (nodes: cartographer-control-plane)
+
+Changes
+
+Namespace  Name                              Kind             Conds.  Age  Op      Op st.  Wait to    Rs  Ri
+default    cartographer.carto.run            PackageMetadata  -       -    create  -       reconcile  -   -
+^          cartographer.carto.run.0.0.0-dev  Package          -       -    create  -       reconcile  -   -
+^          cartographer.carto.run.0.0.0-dev  PackageInstall   -       -    create  -       reconcile  -   -
+
+...
+
+1:14:44PM: ---- applying 2 changes [0/3 done] ----
+1:14:44PM: create packagemetadata/cartographer.carto.run (data.packaging.carvel.dev/v1alpha1) namespace: default
+1:14:54PM: ok: reconcile packageinstall/cartographer.carto.run.0.0.0-dev (packaging.carvel.dev/v1alpha1) namespace: default
+1:14:54PM: ---- applying complete [3/3 done] ----
+1:14:54PM: ---- waiting complete [3/3 done] ----
+
+Succeeded
+```
+
+ps.: if you relocated the images to a private registry that requires
+authentication, make sure you create a Secret with the credentials for that
+registry as well as a `SecretExport` object to make those credentials available
+to other namespaces.
+
+```yaml
+apiVersion: v1
+kind: Secret
+metadata:
+  name: registry-credentials
+type: kubernetes.io/dockerconfigjson  # needs to be this type
+stringData:
+  .dockerconfigjson: |
+    {
+      "auths": {
+        "<registry>": {
+          "username": "<username>",
+          "password": "<password>"
+        }
+      }
+    }
+
+---
+apiVersion: secretgen.carvel.dev/v1alpha1
+kind: SecretExport
+metadata:
+  name: registry-credentials
+spec:
+  toNamespaces:
+    - "*"
+```
+
 
 ## Uninstall
 
 Having installed all the objects using [kapp], which keeps track of all of them
@@ -225,8 +345,9 @@ Refer to [CODE-OF-CONDUCT.md](CODE-OF-CONDUCT.md) for details on our code of con
 Refer to [LICENSE](LICENSE) for details.
 
-
 [admission webhook]: https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/
+[carvel Packaging]: https://carvel.dev/kapp-controller/docs/latest/packaging/
 [cert-manager]: https://github.com/jetstack/cert-manager
+[kapp-controller]: https://carvel.dev/kapp-controller/
 [kapp]: https://carvel.dev/kapp/
 [kind]: https://github.com/kubernetes-sigs/kind