Merge pull request kubernetes-sigs#380 from andrewsykim/improve-docs
docs: clarify workflow using clusterctl and kubectl
k8s-ci-robot committed Jun 27, 2019
2 parents 5c7030a + a167556 commit dd8599c
Showing 2 changed files with 33 additions and 58 deletions.
55 changes: 2 additions & 53 deletions README.md
@@ -19,57 +19,6 @@ You can reach the maintainers of this project at:

Participation in the Kubernetes community is governed by the [Kubernetes Code of Conduct](code-of-conduct.md).

### Quick Start
### Getting Started

Go [here](docs/README.md) for an example of how to get up and running with Cluster API using vSphere.

### Where to get the containers

The containers for this provider are currently hosted at `gcr.io/cnx-cluster-api/`. Each release of the
container is tagged with the corresponding release version. Please note, the release tagging scheme changed to
stay consistent with the main Cluster API repo. Also note that these are Docker containers: they must be pulled
by a container runtime and cannot simply be downloaded.
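
For example, a container runtime such as Docker can pull a released image directly (the tag shown here corresponds to the 0.2.0 release listed below):

```bash
# Pull the 0.2.0 release of the vSphere provider image
docker pull gcr.io/cnx-cluster-api/vsphere-cluster-api-provider:0.2.0
```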

| vSphere provider version | container url |
| --- | --- |
| 0.1.0 | gcr.io/cnx-cluster-api/vsphere-cluster-api-provider:v0.1 |
| 0.2.0 | gcr.io/cnx-cluster-api/vsphere-cluster-api-provider:0.2.0 |

| main Cluster API version | container url |
| --- | --- |
| 0.1.0 | gcr.io/k8s-cluster-api/cluster-api-controller:0.1.0 |

To use a specific version (instead of `:latest`), replace the image tag in the generated `provider-components.yaml`,
as described in the quick start guide.
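
As a minimal sketch (assuming GNU `sed` and that the generated file references the `:latest` tag), you could pin the image to a released version like this:

```bash
# Pin the provider image to a released tag instead of :latest (the tag shown is illustrative)
sed -i 's|vsphere-cluster-api-provider:latest|vsphere-cluster-api-provider:0.2.0|' provider-components.yaml
```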

### Compatibility Matrix

Below are tables showing the compatibility between versions of the vSphere provider, the main Cluster API,
Kubernetes versions, and operating systems. Please note, these tables only cover version 0.2 of the vSphere provider.
Because of the way this provider bootstraps nodes (e.g. using the Ubuntu package manager to pull some components),
changes in some packages broke version 0.1 (this may be resolved at some point), so the compatibility tables for
that provider version are not provided here.

Compatibility matrix for Cluster API versions and the vSphere provider versions.

| | Cluster API 0.1.0 |
|--- | --- |
| vSphere Provider 0.2.0 | ✓ |

Compatibility matrix for the vSphere provider versions and Kubernetes versions.

| |k8s 1.11.x|k8s 1.12.x|k8s 1.13.x|k8s 1.14.x|
|---|---|---|---|---|
| vSphere Provider 0.2.0 | ✓ | ✓ | ✓ | ✓ |

Compatibility matrix for the vSphere provider versions and node OS. Further OS support may be added in future releases.

| | Ubuntu Xenial Cloud Image | Ubuntu Bionic Cloud Image |
| --- | --- | --- |
| vSphere Provider 0.2.0 | ✓ | ✓ |

Users may download the cloud images here:

[Ubuntu Xenial (16.04)](https://cloud-images.ubuntu.com/xenial/current/)

[Ubuntu Bionic (18.04)](https://cloud-images.ubuntu.com/bionic/current/)
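
For example (the exact file name under the `current` directory may change between builds; the OVA name here is an assumption), the Bionic cloud image can be fetched with:

```bash
# Download the Ubuntu Bionic cloud image OVA (file name may vary between builds)
wget https://cloud-images.ubuntu.com/bionic/current/bionic-server-cloudimg-amd64.ova
```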
See the [Getting Started](docs/getting_started.md) guide to get up and going with Cluster API for vSphere.
36 changes: 31 additions & 5 deletions docs/getting_started.md
@@ -166,14 +166,26 @@ path that it ran (i.e. `out/kubeconfig`). This is the **admin** kubeconfig file
going forward to spin up multiple clusters using Cluster API; however, it is recommended that you create dedicated roles
with limited access before doing so.

Note that from this point forward, you no longer need to use `clusterctl` to provision clusters since your management cluster
(the cluster used to manage workload clusters) has been created. Workload clusters should be provisioned by applying Cluster API resources
directly on the management cluster using `kubectl`. More on this below.

## Managing Workload Clusters using the Management Cluster

With your management cluster bootstrapped, it's time to reap the benefits of Cluster API. From this point forward,
clusters and machines (belonging to a cluster) are simply provisioned by creating `cluster`, `machine` and `machineset` resources.
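
For example, once resources have been applied, the Cluster API objects on the management cluster can be inspected directly with `kubectl`:

```bash
# List Cluster API resources on the management cluster (output will vary)
kubectl get clusters,machines,machinesets
```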

Taking the generated `out/cluster.yaml` and `out/machine.yaml` files from earlier as a reference, you can create a cluster with the
initial control plane node by just editing the names of the cluster and machine resources. For example, the following cluster and
machine resources will provision a cluster named "prod-workload" with 1 initial control plane node:
Using the same `prod-yaml` make target, generate Cluster API resources for a new cluster, this time with a different name:
```
$ CLUSTER_NAME=prod-workload make prod-yaml
```
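
The target writes the generated manifests under `out/prod-workload/`. A quick listing might look roughly like the following (the exact contents can vary by provider version):

```bash
$ ls out/prod-workload
addons.yaml  cluster.yaml  machines.yaml  machineset.yaml
```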

**NOTE**: The `make prod-yaml` target is not required to manage your Cluster API resources at this point but is used to simplify this guide.
You should manage your Cluster API resources in the same way you would manage your application yaml files for Kubernetes. Use the
generated yaml files from `make prod-yaml` as a reference.

The Cluster and Machine resources in `out/prod-workload/cluster.yaml` and `out/prod-workload/machines.yaml` define your workload
cluster with its initial control plane node.

```yaml
---
@@ -227,7 +239,7 @@ spec:
controlPlane: "1.13.6"
```
To add 3 additional worker nodes to your cluster, create a machineset like the following:
To add 3 additional worker nodes to your cluster, see the generated machineset file `out/prod-workload/machineset.yaml`:

```yaml
apiVersion: "cluster.k8s.io/v1alpha1"
@@ -269,7 +281,7 @@ spec:
controlPlane: "1.13.6"
```

Run `kubectl apply -f` to apply the above files on your management cluster and it should start provisioning the new cluster.
Run `kubectl apply -f` to apply the above files on your management cluster and it should start provisioning the new cluster:
```bash
$ cd out/prod-workload
$ kubectl apply -f cluster.yaml
cluster.cluster.k8s.io/prod-workload created
$ kubectl apply -f machines.yaml
machine.cluster.k8s.io/prod-workload-controlplane-1 created
$ kubectl apply -f machineset.yaml
machineset.cluster.k8s.io/prod-workload-machineset-1 created
```
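
To follow provisioning progress, you can watch the Cluster API resources on the management cluster, for example:

```bash
# Watch machines being provisioned; press Ctrl+C to stop watching
kubectl get machines -w
# Inspect the cluster object for status and events
kubectl describe cluster prod-workload
```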

Clusters provisioned by the management cluster to run your application workloads are called [Workload Clusters](https://github.com/kubernetes-sigs/cluster-api/blob/master/docs/book/GLOSSARY.md#workload-cluster).

The `kubeconfig` file to access workload clusters should be accessible as a Kubernetes Secret on the management cluster. As of today, the
@@ -286,3 +308,7 @@ $ kubectl get secret prod-workload-kubeconfig -o=jsonpath='{.data.value}' | base
```

Now that you have the `kubeconfig` for your Workload Cluster, you can start deploying your applications there.
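
As a sketch (assuming the decoded secret was saved to a file named `prod-workload.kubeconfig`, and using `my-app.yaml` as a placeholder for your own manifest), you would point `kubectl` at the workload cluster like this:

```bash
# Talk to the workload cluster instead of the management cluster
kubectl --kubeconfig=prod-workload.kubeconfig get nodes
kubectl --kubeconfig=prod-workload.kubeconfig apply -f my-app.yaml
```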

**NOTE**: Workload clusters do not have any addons applied aside from those added by kubeadm. Nodes in your workload clusters
will remain in the `NotReady` state until you apply a CNI addon. The `addons.yaml` file generated by `make prod-yaml` contains a default Calico
addon which you can use; otherwise, apply custom addons based on your use case.
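
For example (the kubeconfig file name is an assumption from the previous step; the `addons.yaml` path follows the other generated files), applying the default Calico addon to the workload cluster would look like:

```bash
# Apply the generated CNI addon to the *workload* cluster so its nodes become Ready
kubectl --kubeconfig=prod-workload.kubeconfig apply -f out/prod-workload/addons.yaml
```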
