diff --git a/docs/src/SUMMARY.md b/docs/src/SUMMARY.md
index 302604f1..89cb5195 100644
--- a/docs/src/SUMMARY.md
+++ b/docs/src/SUMMARY.md
@@ -34,6 +34,9 @@
   - [Using Antrea](./networking/antrea.md)
   - [Custom Networking](./networking/custom-networking.md)
   - [Private Cluster](./networking/private-cluster.md)
+- [Managed Clusters (OKE)](./managed/managedcluster.md)
+  - [Boot volume expansion](./managed/boot-volume-expansion.md)
+  - [Networking customizations](./managed/networking.md)
 - [Reference](./reference/reference.md)
   - [API Reference](./reference/api-reference.md)
   - [Glossary](./reference/glossary.md)
diff --git a/docs/src/gs/create-workload-cluster.md b/docs/src/gs/create-workload-cluster.md
index 3fd34db7..c63393b1 100644
--- a/docs/src/gs/create-workload-cluster.md
+++ b/docs/src/gs/create-workload-cluster.md
@@ -3,7 +3,9 @@
 ## Workload Cluster Templates
 
 Choose one of the available templates for to create your workload clusters from the
-[latest released artifacts][latest-release]. Each workload cluster template can be
+[latest released artifacts][latest-release]. Please note that the templates provided
+are to be considered as references and can be customized further as described in
+the [CAPOCI API Reference][api-reference]. Each workload cluster template can be
 further configured with the parameters below.
 
 ## Workload Cluster Parameters
@@ -194,6 +196,6 @@ By default, the [OCI Cloud Controller Manager (CCM)][oci-ccm] is not installed i
 [calico]: ../networking/calico.md
 [cni]: https://www.cni.dev/
 [oci-ccm]: https://github.com/oracle/oci-cloud-controller-manager
-[latest-release]: https://github.com/oracle/cluster-api-provider-oci/releases
+[latest-release]: https://github.com/oracle/cluster-api-provider-oci/releases/latest
 [install-oci-ccm]: ./install-oci-ccm.md
 [configure-authentication]: ./install-cluster-api.html#configure-authentication
diff --git a/docs/src/managed/boot-volume-expansion.md b/docs/src/managed/boot-volume-expansion.md
new file mode 100644
index 00000000..e2a0098e
--- /dev/null
+++ b/docs/src/managed/boot-volume-expansion.md
@@ -0,0 +1,50 @@
+# Increase boot volume
+
+The default boot volume size of worker nodes is 50 GB. Follow the steps below
+to increase the boot volume size.
+
+## Increase the boot volume size in the spec
+
+The following snippet shows how to increase the boot volume size of the instances.
+
+```yaml
+kind: OCIManagedMachinePool
+spec:
+  nodeSourceViaImage:
+    bootVolumeSizeInGBs: 100
+```
+
+## Extend the root partition
+
+In order to take advantage of the larger size, you need to [extend the partition for the boot volume][boot-volume-extension].
+This can be done with a custom cloud-init script. The following cloud-init script extends the root partition.
+
+```bash
+#!/bin/bash
+
+# DO NOT MODIFY
+curl --fail -H "Authorization: Bearer Oracle" -L0 http://169.254.169.254/opc/v2/instance/metadata/oke_init_script | base64 --decode >/var/run/oke-init.sh
+
+## run oke provisioning script
+bash -x /var/run/oke-init.sh
+
+### adjust block volume size
+/usr/libexec/oci-growfs -y
+
+touch /var/log/oke.done
+```
+
+Encode the file contents into a base64 encoded value as follows.
+
+```bash
+cat cloud-init.sh | base64 -w 0
+```
+
+Add the encoded value to the `user_data` field of the `OCIManagedMachinePool` spec as shown below.
+
+```yaml
+kind: OCIManagedMachinePool
+spec:
+  nodeMetadata:
+    user_data: "<base64-encoded-value>"
+```
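+
+As a convenience, the encoding and update can be scripted. The following is a minimal
+sketch that assumes the cloud-init script above was saved as `cloud-init.sh` and that the
+machine pool object is named `${CLUSTER_NAME}-mp-0` (adjust both to your environment). It
+encodes the script, verifies that the encoding round-trips, and shows one way to set the
+value on an existing `OCIManagedMachinePool` using `kubectl patch`.
+
+```bash
+# Encode the cloud-init script as a single-line base64 string.
+USER_DATA=$(base64 -w 0 < cloud-init.sh)
+
+# Sanity check: decoding the value should reproduce the original script.
+diff <(echo "${USER_DATA}" | base64 --decode) cloud-init.sh && echo "encoding OK"
+
+# One way to set the value on an existing machine pool object
+# (the pool name below is an example; use the name from your cluster).
+kubectl patch ocimanagedmachinepool "${CLUSTER_NAME}-mp-0" --type merge \
+  -p '{"spec":{"nodeMetadata":{"user_data":"'"${USER_DATA}"'"}}}'
+```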
+
+[boot-volume-extension]: https://docs.oracle.com/en-us/iaas/Content/Block/Tasks/extendingbootpartition.htm
\ No newline at end of file
diff --git a/docs/src/managed/managedcluster.md b/docs/src/managed/managedcluster.md
new file mode 100644
index 00000000..00f30e03
--- /dev/null
+++ b/docs/src/managed/managedcluster.md
@@ -0,0 +1,93 @@
+# Managed Clusters (OKE)
+- **Feature status:** Experimental
+- **Feature gate:** OKE=true,MachinePool=true
+
+Cluster API Provider for OCI (CAPOCI) experimentally supports managing OCI Container
+Engine for Kubernetes (OKE) clusters. CAPOCI implements this with three
+custom resources:
+- `OCIManagedControlPlane`
+- `OCIManagedCluster`
+- `OCIManagedMachinePool`
+
+## Workload Cluster Parameters
+
+The following Oracle Cloud Infrastructure (OCI) configuration parameters are available
+when creating a managed workload cluster on OCI using one of our predefined templates:
+
+| Parameter                             | Default Value       | Description                                                                                                              |
+|---------------------------------------|---------------------|--------------------------------------------------------------------------------------------------------------------------|
+| `OCI_COMPARTMENT_ID`                  |                     | The OCID of the compartment in which to create the required compute, storage and network resources.                     |
+| `OCI_MANAGED_NODE_IMAGE_ID`           |                     | The OCID of the image for the Kubernetes worker nodes. Please read the [doc][node-images-shapes] for more details.      |
+| `OCI_MANAGED_NODE_SHAPE`              | VM.Standard.E4.Flex | The [shape][node-images-shapes] of the Kubernetes worker nodes.                                                          |
+| `OCI_MANAGED_NODE_MACHINE_TYPE_OCPUS` | 1                   | The number of OCPUs allocated to the worker node instance.                                                               |
+| `OCI_SSH_KEY`                         |                     | The public SSH key to be added to the Kubernetes nodes. It can be used to log in to the node and troubleshoot failures. |
+
+## Pre-Requisites
+
+### Environment Variables
+
+Managed clusters require the following feature flags to be set as environment variables before [installing
+CAPI and CAPOCI components using clusterctl][install-cluster-api].
+
+```bash
+export EXP_MACHINE_POOL=true
+export EXP_OKE=true
+```
+
+### OCI Security Policies
+
+Please read the [OKE policy configuration doc][oke-policies] and add the necessary policies for the user group.
+If instance principal is being used as the authentication mechanism, also add the corresponding policies for the
+dynamic group. Please read the [installation doc][install-cluster-api] to learn more about the supported
+authentication mechanisms.
+
+## Workload Cluster Templates
+
+Choose one of the available templates to create your workload clusters from the
+[latest released artifacts][latest-release]. The managed cluster templates are of the
+form `cluster-template-managed-<flavor>.yaml`. The default managed template is
+`cluster-template-managed.yaml`. Please note that the templates provided are to be considered
+as references and can be customized further as described in the [CAPOCI API Reference][api-reference].
+
+## Supported Kubernetes versions
+
+The [OKE documentation][supported-versions] lists the Kubernetes versions currently supported by OKE.
+
+## Create a new OKE cluster
+
+The following command will create an OKE cluster using the default template. The created node pool uses
+[VCN native pod networking][vcn-native-pod-networking].
+
+```bash
+OCI_COMPARTMENT_ID=<compartment-id> \
+OCI_MANAGED_NODE_IMAGE_ID=<oke-image-id> \
+OCI_SSH_KEY=<ssh-key> \
+KUBERNETES_VERSION=v1.24.1 \
+NAMESPACE=default \
+clusterctl generate cluster <cluster-name> \
+--from cluster-template-managed.yaml | kubectl apply -f -
+```
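+
+Once the manifests are applied, provisioning can be followed from the management cluster.
+The snippet below is a minimal sketch that assumes the cluster was created as
+`<cluster-name>` in the `default` namespace; the OKE control plane typically takes several
+minutes to become ready before a kubeconfig is available.
+
+```bash
+# Watch the cluster and node pool come up.
+kubectl get cluster,machinepool -n default
+
+# Retrieve the workload cluster kubeconfig and list the worker nodes.
+clusterctl get kubeconfig <cluster-name> -n default > <cluster-name>.kubeconfig
+kubectl --kubeconfig <cluster-name>.kubeconfig get nodes
+```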
+
+## Create a new private OKE cluster
+
+The following command will create a private OKE cluster. In this template, the control plane endpoint subnet is a
+private subnet and the API endpoint is accessible only within the subnet. The created node pool uses
+[VCN native pod networking][vcn-native-pod-networking].
+
+```bash
+OCI_COMPARTMENT_ID=<compartment-id> \
+OCI_MANAGED_NODE_IMAGE_ID=<oke-image-id> \
+OCI_SSH_KEY=<ssh-key> \
+KUBERNETES_VERSION=v1.24.1 \
+NAMESPACE=default \
+clusterctl generate cluster <cluster-name> \
+--from cluster-template-managed-private.yaml | kubectl apply -f -
+```
+
+[node-images-shapes]: https://docs.oracle.com/en-us/iaas/Content/ContEng/Reference/contengimagesshapes.htm
+[oke-policies]: https://docs.oracle.com/en-us/iaas/Content/ContEng/Concepts/contengpolicyconfig.htm
+[install-cluster-api]: ../gs/install-cluster-api.md
+[latest-release]: https://github.com/oracle/cluster-api-provider-oci/releases/latest
+[api-reference]: ../reference/api-reference.md
+[supported-versions]: https://docs.oracle.com/en-us/iaas/Content/ContEng/Concepts/contengaboutk8sversions.htm#supportedk8sversions
+[vcn-native-pod-networking]: https://docs.oracle.com/en-us/iaas/Content/ContEng/Concepts/contengpodnetworking_topic-OCI_CNI_plugin.htm
\ No newline at end of file
diff --git a/docs/src/managed/networking.md b/docs/src/managed/networking.md
new file mode 100644
index 00000000..9b6cb720
--- /dev/null
+++ b/docs/src/managed/networking.md
@@ -0,0 +1,44 @@
+# Networking customizations
+
+## Use a pre-existing VCN
+
+The following `OCIManagedCluster` snippet can be used to use a pre-existing VCN.
+
+```yaml
+kind: OCIManagedCluster
+spec:
+  compartmentId: "${OCI_COMPARTMENT_ID}"
+  networkSpec:
+    skipNetworkManagement: true
+    vcn:
+      id: "<vcn-id>"
+      networkSecurityGroups:
+        - id: "<control-plane-endpoint-nsg-id>"
+          role: control-plane-endpoint
+          name: control-plane-endpoint
+        - id: "<worker-nsg-id>"
+          role: worker
+          name: worker
+        - id: "<pod-nsg-id>"
+          role: pod
+          name: pod
+      subnets:
+        - id: "<control-plane-endpoint-subnet-id>"
+          role: control-plane-endpoint
+          name: control-plane-endpoint
+          type: public
+        - id: "<worker-subnet-id>"
+          role: worker
+          name: worker
+        - id: "<pod-subnet-id>"
+          role: pod
+          name: pod
+        - id: "<service-lb-subnet-id>"
+          role: service-lb
+          name: service-lb
+          type: public
+```
+
+## Use flannel as CNI
+
+Use the template `cluster-template-managed-flannel.yaml` as an example of using flannel as the CNI. The template
+sets the correct parameters in the spec as well as creates the required security rules in the Network Security Groups (NSGs).
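+
+As a reference, the flannel template can be generated and applied in the same way as the
+default managed template. The sketch below uses placeholder values; `NODE_MACHINE_COUNT`
+is set because the template's `MachinePool` references it.
+
+```bash
+OCI_COMPARTMENT_ID=<compartment-id> \
+OCI_MANAGED_NODE_IMAGE_ID=<oke-image-id> \
+OCI_SSH_KEY=<ssh-key> \
+KUBERNETES_VERSION=v1.24.1 \
+NODE_MACHINE_COUNT=1 \
+NAMESPACE=default \
+clusterctl generate cluster <cluster-name> \
+--from cluster-template-managed-flannel.yaml | kubectl apply -f -
+```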
diff --git a/exp/api/v1beta1/ocimanagedcluster_webhook.go b/exp/api/v1beta1/ocimanagedcluster_webhook.go index dcbead4b..f85c5036 100644 --- a/exp/api/v1beta1/ocimanagedcluster_webhook.go +++ b/exp/api/v1beta1/ocimanagedcluster_webhook.go @@ -593,7 +593,7 @@ func (c *OCIManagedCluster) GetLBServiceDefaultEgressRules() []infrastructurev1b return []infrastructurev1beta1.EgressSecurityRuleForNSG{ { EgressSecurityRule: infrastructurev1beta1.EgressSecurityRule{ - Description: common.String("Pod to Kubernetes API endpoint communication (when using VCN-native pod networking)."), + Description: common.String("Load Balancer to Worker nodes node ports."), Protocol: common.String("6"), TcpOptions: &infrastructurev1beta1.TcpOptions{ DestinationPortRange: &infrastructurev1beta1.PortRange{ diff --git a/exp/api/v1beta1/ocimanagedmachinepool_webhook_test.go b/exp/api/v1beta1/ocimanagedmachinepool_webhook_test.go index beaa2a24..782124fe 100644 --- a/exp/api/v1beta1/ocimanagedmachinepool_webhook_test.go +++ b/exp/api/v1beta1/ocimanagedmachinepool_webhook_test.go @@ -44,6 +44,23 @@ func TestOCIManagedMachinePool_CreateDefault(t *testing.T) { })) }, }, + { + name: "should not override cni type", + m: &OCIManagedMachinePool{ + Spec: OCIManagedMachinePoolSpec{ + NodePoolNodeConfig: &NodePoolNodeConfig{ + NodePoolPodNetworkOptionDetails: &NodePoolPodNetworkOptionDetails{ + CniType: FlannelCNI, + }, + }, + }, + }, + expect: func(g *gomega.WithT, c *OCIManagedMachinePool) { + g.Expect(c.Spec.NodePoolNodeConfig.NodePoolPodNetworkOptionDetails).To(Equal(&NodePoolPodNetworkOptionDetails{ + CniType: FlannelCNI, + })) + }, + }, } for _, test := range tests { diff --git a/templates/cluster-template-managed-flannel.yaml b/templates/cluster-template-managed-flannel.yaml new file mode 100644 index 00000000..14e2617f --- /dev/null +++ b/templates/cluster-template-managed-flannel.yaml @@ -0,0 +1,285 @@ +apiVersion: cluster.x-k8s.io/v1beta1 +kind: Cluster +metadata: + labels: + cluster.x-k8s.io/cluster-name: "${CLUSTER_NAME}" + name: "${CLUSTER_NAME}" + namespace: "${NAMESPACE}" +spec: + infrastructureRef: + apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 + kind: OCIManagedCluster + name: "${CLUSTER_NAME}" + namespace: "${NAMESPACE}" + controlPlaneRef: + apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 + kind: OCIManagedControlPlane + name: "${CLUSTER_NAME}" + namespace: "${NAMESPACE}" +--- +apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 +kind: OCIManagedCluster +metadata: + labels: + cluster.x-k8s.io/cluster-name: "${CLUSTER_NAME}" + name: "${CLUSTER_NAME}" +spec: + compartmentId: "${OCI_COMPARTMENT_ID}" + networkSpec: + apiServerLoadBalancer: + name: "" + vcn: + cidr: 10.0.0.0/16 + networkSecurityGroups: + - egressRules: + - egressRule: + description: Allow Kubernetes API endpoint to communicate with OKE. + destination: all-iad-services-in-oracle-services-network + destinationType: SERVICE_CIDR_BLOCK + isStateless: false + protocol: "6" + - egressRule: + description: Path Discovery. + destination: all-iad-services-in-oracle-services-network + destinationType: SERVICE_CIDR_BLOCK + icmpOptions: + code: 4 + type: 3 + isStateless: false + protocol: "1" + - egressRule: + description: Allow Kubernetes API endpoint to communicate with worker + nodes. + destination: 10.0.64.0/20 + destinationType: CIDR_BLOCK + isStateless: false + protocol: "6" + tcpOptions: + destinationPortRange: + max: 10250 + min: 10250 + - egressRule: + description: Path Discovery. 
+ destination: 10.0.64.0/20 + destinationType: CIDR_BLOCK + icmpOptions: + code: 4 + type: 3 + isStateless: false + protocol: "1" + ingressRules: + - ingressRule: + description: Kubernetes worker to Kubernetes API endpoint communication. + isStateless: false + protocol: "6" + source: 10.0.64.0/20 + sourceType: CIDR_BLOCK + tcpOptions: + destinationPortRange: + max: 6443 + min: 6443 + - ingressRule: + description: Kubernetes worker to Kubernetes API endpoint communication. + isStateless: false + protocol: "6" + source: 10.0.64.0/20 + sourceType: CIDR_BLOCK + tcpOptions: + destinationPortRange: + max: 12250 + min: 12250 + - ingressRule: + description: Path Discovery. + icmpOptions: + code: 4 + type: 3 + isStateless: false + protocol: "1" + source: 10.0.64.0/20 + sourceType: CIDR_BLOCK + - ingressRule: + description: External access to Kubernetes API endpoint. + isStateless: false + protocol: "6" + source: 0.0.0.0/0 + sourceType: CIDR_BLOCK + tcpOptions: + destinationPortRange: + max: 6443 + min: 6443 + name: control-plane-endpoint + role: control-plane-endpoint + - egressRules: + - egressRule: + description: Allow pods on one worker node to communicate with pods on other worker nodes. + destination: "10.0.64.0/20" + destinationType: CIDR_BLOCK + isStateless: false + protocol: "all" + - egressRule: + description: Allow worker nodes to communicate with OKE. + destination: all-iad-services-in-oracle-services-network + destinationType: SERVICE_CIDR_BLOCK + isStateless: false + protocol: "6" + - egressRule: + description: Path Discovery. + destination: 0.0.0.0/0 + destinationType: CIDR_BLOCK + icmpOptions: + code: 4 + type: 3 + isStateless: false + protocol: "1" + - egressRule: + description: Kubernetes worker to Kubernetes API endpoint communication. + destination: 10.0.0.8/29 + destinationType: CIDR_BLOCK + isStateless: false + protocol: "6" + tcpOptions: + destinationPortRange: + max: 6443 + min: 6443 + - egressRule: + description: Kubernetes worker to Kubernetes API endpoint communication. + destination: 10.0.0.8/29 + destinationType: CIDR_BLOCK + isStateless: false + protocol: "6" + tcpOptions: + destinationPortRange: + max: 12250 + min: 12250 + ingressRules: + - ingressRule: + description: Allow pods on one worker node to communicate with pods on other worker nodes. + isStateless: false + protocol: "all" + source: 10.0.64.0/20 + sourceType: CIDR_BLOCK + - ingressRule: + description: Allow Kubernetes API endpoint to communicate with worker nodes. + isStateless: false + protocol: "6" + source: 10.0.0.8/29 + sourceType: CIDR_BLOCK + - ingressRule: + description: Path Discovery. + icmpOptions: + code: 4 + type: 3 + isStateless: false + protocol: "1" + source: 0.0.0.0/0 + sourceType: CIDR_BLOCK + - ingressRule: + description: Load Balancer to Worker nodes node ports. + isStateless: false + protocol: "6" + source: 10.0.0.32/27 + sourceType: CIDR_BLOCK + tcpOptions: + destinationPortRange: + max: 32767 + min: 30000 + name: worker + role: worker + - egressRules: + - egressRule: + description: Load Balancer to Worker nodes node ports. 
+ destination: 10.0.64.0/20 + destinationType: CIDR_BLOCK + isStateless: false + protocol: "6" + tcpOptions: + destinationPortRange: + max: 32767 + min: 30000 + ingressRules: + - ingressRule: + description: Accept http traffic on port 80 + isStateless: false + protocol: "6" + source: 0.0.0.0/0 + sourceType: CIDR_BLOCK + tcpOptions: + destinationPortRange: + max: 80 + min: 80 + - ingressRule: + description: Accept https traffic on port 443 + isStateless: false + protocol: "6" + source: 0.0.0.0/0 + sourceType: CIDR_BLOCK + tcpOptions: + destinationPortRange: + max: 443 + min: 443 + name: service-lb + role: service-lb + subnets: + - cidr: 10.0.0.8/29 + name: control-plane-endpoint + role: control-plane-endpoint + type: public + - cidr: 10.0.0.32/27 + name: service-lb + role: service-lb + type: public + - cidr: 10.0.64.0/20 + name: worker + role: worker + type: private +--- +kind: OCIManagedControlPlane +apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 +metadata: + name: "${CLUSTER_NAME}" + namespace: "${NAMESPACE}" +spec: + version: "${KUBERNETES_VERSION}" + clusterPodNetworkOptions: + - cniType: "FLANNEL_OVERLAY" +--- +apiVersion: cluster.x-k8s.io/v1beta1 +kind: MachinePool +metadata: + name: ${CLUSTER_NAME}-mp-0 + namespace: default +spec: + clusterName: ${CLUSTER_NAME} + replicas: ${NODE_MACHINE_COUNT} + template: + spec: + clusterName: ${CLUSTER_NAME} + bootstrap: + dataSecretName: "" + infrastructureRef: + apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 + kind: OCIManagedMachinePool + name: ${CLUSTER_NAME}-mp-0 + version: ${KUBERNETES_VERSION} +--- +apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 +kind: OCIManagedMachinePool +metadata: + name: ${CLUSTER_NAME}-mp-0 + namespace: default +spec: + version: "${KUBERNETES_VERSION}" + nodeShape: "${OCI_MANAGED_NODE_SHAPE=VM.Standard.E4.Flex}" + sshPublicKey: "${OCI_SSH_KEY}" + nodeSourceViaImage: + imageId: "${OCI_MANAGED_NODE_IMAGE_ID}" + bootVolumeSizeInGBs: ${OCI_MANAGED_NODE_BOOT_VOLUME_SIZE=50} + nodeMetadata: + # The custom cloud int script generated from the script scripts.oke-custom-cloud-init.sh + user_data: "IyEvYmluL2Jhc2gKY3VybCAtLWZhaWwgLUggIkF1dGhvcml6YXRpb246IEJlYXJlciBPcmFjbGUiIC1MMCBodHRwOi8vMTY5LjI1NC4xNjkuMjU0L29wYy92Mi9pbnN0YW5jZS9tZXRhZGF0YS9va2VfaW5pdF9zY3JpcHQgfCBiYXNlNjQgLS1kZWNvZGUgPi92YXIvcnVuL29rZS1pbml0LnNoCnByb3ZpZGVyX2lkPSQoY3VybCAtLWZhaWwgLUggIkF1dGhvcml6YXRpb246IEJlYXJlciBPcmFjbGUiIC1MMCBodHRwOi8vMTY5LjI1NC4xNjkuMjU0L29wYy92Mi9pbnN0YW5jZS9pZCkKYmFzaCAvdmFyL3J1bi9va2UtaW5pdC5zaCAtLWt1YmVsZXQtZXh0cmEtYXJncyAiLS1wcm92aWRlci1pZD1vY2k6Ly8kcHJvdmlkZXJfaWQiCg==" + nodeShapeConfig: + ocpus: "${OCI_MANAGED_NODE_MACHINE_TYPE_OCPUS=1}" + nodePoolNodeConfig: + nodePoolPodNetworkOptionDetails: + cniType: "FLANNEL_OVERLAY" +--- \ No newline at end of file diff --git a/templates/cluster-template-managed-private.yaml b/templates/cluster-template-managed-private.yaml index b0a189f6..0a127e59 100644 --- a/templates/cluster-template-managed-private.yaml +++ b/templates/cluster-template-managed-private.yaml @@ -90,6 +90,5 @@ spec: imageId: "${OCI_MANAGED_NODE_IMAGE_ID}" bootVolumeSizeInGBs: ${OCI_MANAGED_NODE_BOOT_VOLUME_SIZE=50} nodeShapeConfig: - memoryInGBs: "${OCI_MANAGED_NODE_MACHINE_MEMORY=16}" ocpus: "${OCI_MANAGED_NODE_MACHINE_TYPE_OCPUS=1}" --- \ No newline at end of file diff --git a/templates/cluster-template-managed.yaml b/templates/cluster-template-managed.yaml index 31b4ade8..f6ff7911 100644 --- a/templates/cluster-template-managed.yaml +++ b/templates/cluster-template-managed.yaml @@ -71,6 +71,5 @@ spec: 
imageId: "${OCI_MANAGED_NODE_IMAGE_ID}" bootVolumeSizeInGBs: ${OCI_MANAGED_NODE_BOOT_VOLUME_SIZE=50} nodeShapeConfig: - memoryInGBs: "${OCI_MANAGED_NODE_MACHINE_MEMORY=16}" ocpus: "${OCI_MANAGED_NODE_MACHINE_TYPE_OCPUS=1}" --- \ No newline at end of file