Add doc for OKE and bug fixes #172

Merged · 1 commit · Feb 11, 2023
3 changes: 3 additions & 0 deletions docs/src/SUMMARY.md
@@ -34,6 +34,9 @@
- [Using Antrea](./networking/antrea.md)
- [Custom Networking](./networking/custom-networking.md)
- [Private Cluster](./networking/private-cluster.md)
+- [Managed Clusters (OKE)](./managed/managedcluster.md)
+- [Boot volume expansion](./managed/boot-volume-expansion.md)
+- [Networking customizations](./managed/networking.md)
- [Reference](./reference/reference.md)
- [API Reference](./reference/api-reference.md)
- [Glossary](./reference/glossary.md)
6 changes: 4 additions & 2 deletions docs/src/gs/create-workload-cluster.md
@@ -3,7 +3,9 @@
## Workload Cluster Templates

Choose one of the available templates to create your workload clusters from the
-[latest released artifacts][latest-release]. Each workload cluster template can be
+[latest released artifacts][latest-release]. Please note that the templates provided
+are to be considered as references and can be customized further as described in
+the [CAPOCI API Reference][api-reference]. Each workload cluster template can be
further configured with the parameters below.

## Workload Cluster Parameters
@@ -194,6 +196,6 @@ By default, the [OCI Cloud Controller Manager (CCM)][oci-ccm] is not installed i
[calico]: ../networking/calico.md
[cni]: https://www.cni.dev/
[oci-ccm]: https://github.com/oracle/oci-cloud-controller-manager
-[latest-release]: https://github.com/oracle/cluster-api-provider-oci/releases
+[latest-release]: https://github.com/oracle/cluster-api-provider-oci/releases/latest
[install-oci-ccm]: ./install-oci-ccm.md
[configure-authentication]: ./install-cluster-api.html#configure-authentication
50 changes: 50 additions & 0 deletions docs/src/managed/boot-volume-expansion.md
@@ -0,0 +1,50 @@
# Increase boot volume

The default boot volume size of worker nodes is 50 GB. Follow the steps below
to increase the boot volume size.

## Increase the boot volume size in spec

The following snippet shows how to increase the boot volume size of the instances.

```yaml
kind: OCIManagedMachinePool
spec:
nodeSourceViaImage:
bootVolumeSizeInGBs: 100
```

## Extend the root partition

To take advantage of the larger size, you need to [extend the partition for the boot volume][boot-volume-extension].
A custom cloud-init script can be used for this. The following cloud-init script extends the root partition.

```bash
#!/bin/bash

# DO NOT MODIFY
curl --fail -H "Authorization: Bearer Oracle" -L0 http://169.254.169.254/opc/v2/instance/metadata/oke_init_script | base64 --decode >/var/run/oke-init.sh

## run oke provisioning script
bash -x /var/run/oke-init.sh

### adjust block volume size
/usr/libexec/oci-growfs -y

touch /var/log/oke.done
```

Base64-encode the file contents as follows.
```bash
cat cloud-init.sh | base64 -w 0
```

Add the encoded value to the `OCIManagedMachinePool` spec as shown below.
```yaml
kind: OCIManagedMachinePool
spec:
nodeMetadata:
user_data: "<base64 encoded value from above>"
```
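
Once a node from the pool comes up, you can verify that the root partition was actually extended. A minimal sketch, assuming the node is reachable over SSH with the key provided via `OCI_SSH_KEY` and a default user such as `ubuntu` (both the user name and reachability depend on your image and network setup):

```bash
# Hypothetical verification step: inspect the root filesystem and block
# devices on a worker node after oci-growfs has run.
ssh ubuntu@<node-ip> 'df -h / && lsblk'
```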

[boot-volume-extension]: https://docs.oracle.com/en-us/iaas/Content/Block/Tasks/extendingbootpartition.htm
93 changes: 93 additions & 0 deletions docs/src/managed/managedcluster.md
@@ -0,0 +1,93 @@
# Managed Clusters (OKE)
- **Feature status:** Experimental
- **Feature gate:** OKE=true,MachinePool=true

Cluster API Provider for OCI (CAPOCI) experimentally supports managing OCI Container
Engine for Kubernetes (OKE) clusters. CAPOCI implements this with three
custom resources:
- `OCIManagedControlPlane`
- `OCIManagedCluster`
- `OCIManagedMachinePool`

## Workload Cluster Parameters

The following Oracle Cloud Infrastructure (OCI) configuration parameters are available
when creating a managed workload cluster on OCI using one of our predefined templates:

| Parameter                              | Default Value       | Description                                                                                                              |
|----------------------------------------|---------------------|--------------------------------------------------------------------------------------------------------------------------|
| `OCI_COMPARTMENT_ID`                   |                     | The OCID of the compartment in which to create the required compute, storage and network resources.                      |
| `OCI_MANAGED_NODE_IMAGE_ID`            |                     | The OCID of the image for the Kubernetes worker nodes. See the [images and shapes doc][node-images-shapes] for details.  |
| `OCI_MANAGED_NODE_SHAPE`               | VM.Standard.E4.Flex | The [shape][node-images-shapes] of the Kubernetes worker nodes.                                                           |
| `OCI_MANAGED_NODE_MACHINE_TYPE_OCPUS`  | 1                   | The number of OCPUs allocated to the worker node instance.                                                                |
| `OCI_SSH_KEY`                          |                     | The public SSH key to be added to the Kubernetes nodes. It can be used to log in to the node and troubleshoot failures.   |

## Pre-Requisites

### Environment Variables

Managed clusters also require the following feature flags to be set as environment variables before [installing
CAPI and CAPOCI components using clusterctl][install-cluster-api]:

```bash
export EXP_MACHINE_POOL=true
export EXP_OKE=true
```

### OCI Security Policies

Read the [policy configuration doc][oke-policies] and add the necessary policies for the user group.
If instance principal is being used as the authentication mechanism, add the corresponding policies
for the dynamic group instead. See the [installation doc][install-cluster-api] for more details on
authentication mechanisms.
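
For illustration only, user-group policy statements for OKE typically take the following shape; the group and compartment names are placeholders, and the authoritative list of required statements is in the policy doc above:

```
Allow group <group-name> to manage cluster-family in compartment <compartment-name>
Allow group <group-name> to manage virtual-network-family in compartment <compartment-name>
Allow group <group-name> to manage instance-family in compartment <compartment-name>
```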

## Workload Cluster Templates

Choose one of the available templates to create your workload clusters from the
[latest released artifacts][latest-release]. The managed cluster templates are of the
form `cluster-template-managed-<flavour>.yaml`. The default managed template is
`cluster-template-managed.yaml`. Please note that the templates provided are to be considered
as references and can be customized further as described in the [CAPOCI API Reference][api-reference].

## Supported Kubernetes versions
The [OKE documentation][supported-versions] lists the Kubernetes versions currently supported by OKE.
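
If the OCI CLI is configured, the currently accepted versions can also be listed directly. A sketch, assuming your CLI version exposes the `kubernetes-versions` field on the cluster options response:

```bash
# List the Kubernetes versions OKE currently accepts for new clusters
oci ce cluster-options get --cluster-options-id all \
  --query 'data."kubernetes-versions"'
```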

## Create a new OKE cluster

The following command will create an OKE cluster using the default template. The created node pool uses
[VCN native pod networking][vcn-native-pod-networking].

```bash
OCI_COMPARTMENT_ID=<compartment-id> \
OCI_MANAGED_NODE_IMAGE_ID=<ubuntu-custom-image-id> \
OCI_SSH_KEY=<ssh-key> \
KUBERNETES_VERSION=v1.24.1 \
NAMESPACE=default \
clusterctl generate cluster <cluster-name> \
--from cluster-template-managed.yaml | kubectl apply -f -
```
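
After the workload cluster is provisioned, its kubeconfig can be retrieved with standard Cluster API tooling, for example:

```bash
# Fetch the kubeconfig for the new cluster and list its nodes
clusterctl get kubeconfig <cluster-name> -n default > <cluster-name>.kubeconfig
kubectl --kubeconfig <cluster-name>.kubeconfig get nodes
```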

## Create a new private OKE cluster

The following command will create a private OKE cluster. In this template, the control plane endpoint subnet is a
private subnet and the API endpoint is accessible only within the subnet. The created node pool uses
[VCN native pod networking][vcn-native-pod-networking].

```bash
OCI_COMPARTMENT_ID=<compartment-id> \
OCI_MANAGED_NODE_IMAGE_ID=<ubuntu-custom-image-id> \
OCI_SSH_KEY=<ssh-key> \
KUBERNETES_VERSION=v1.24.1 \
NAMESPACE=default \
clusterctl generate cluster <cluster-name> \
--from cluster-template-managedprivate.yaml | kubectl apply -f -
```
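
Provisioning can take several minutes. One way to watch progress from the management cluster (a sketch; the lowercase resource names are assumed to match the CRD kinds):

```bash
# Inspect the Cluster API view of the provisioning cluster
clusterctl describe cluster <cluster-name> -n default
kubectl get cluster,ocimanagedcontrolplane,ocimanagedmachinepool -n default
```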



[node-images-shapes]: https://docs.oracle.com/en-us/iaas/Content/ContEng/Reference/contengimagesshapes.htm
[oke-policies]: https://docs.oracle.com/en-us/iaas/Content/ContEng/Concepts/contengpolicyconfig.htm
[install-cluster-api]: ../gs/install-cluster-api.md
[latest-release]: https://github.com/oracle/cluster-api-provider-oci/releases/latest
[api-reference]: ../reference/api-reference.md
[supported-versions]: https://docs.oracle.com/en-us/iaas/Content/ContEng/Concepts/contengaboutk8sversions.htm#supportedk8sversions
[vcn-native-pod-networking]: https://docs.oracle.com/en-us/iaas/Content/ContEng/Concepts/contengpodnetworking_topic-OCI_CNI_plugin.htm
44 changes: 44 additions & 0 deletions docs/src/managed/networking.md
@@ -0,0 +1,44 @@
# Networking customizations
## Use a pre-existing VCN

The following `OCIManagedCluster` snippet can be used to use a pre-existing VCN.

```yaml
kind: OCIManagedCluster
spec:
compartmentId: "${OCI_COMPARTMENT_ID}"
networkSpec:
skipNetworkManagement: true
vcn:
id: "<vcn-id>"
networkSecurityGroups:
- id: "<control-plane-endpoint-nsg-id>"
role: control-plane-endpoint
name: control-plane-endpoint
- id: "<worker-nsg-id>"
role: worker
name: worker
- id: "<pod-nsg-id>"
role: pod
name: pod
subnets:
- id: "<control-plane-endpoint-subnet-id>"
role: control-plane-endpoint
name: control-plane-endpoint
type: public
- id: "<worker-subnet-id>"
role: worker
name: worker
- id: "<pod-subnet-id>"
role: pod
name: pod
- id: "<service-lb-subnet-id>"
role: service-lb
name: service-lb
type: public
```
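
Because `skipNetworkManagement: true` tells CAPOCI to use the supplied network resources instead of creating them, a quick sanity check after applying is to confirm that the cluster object carries the expected VCN ID (a sketch; the lowercase resource name is assumed to match the CRD kind):

```bash
# Read back the VCN ID recorded on the cluster object
kubectl get ocimanagedcluster <cluster-name> -n default \
  -o jsonpath='{.spec.networkSpec.vcn.id}'
```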

## Use flannel as CNI

Use the template `cluster-template-managed-flannel.yaml` as an example of using flannel as the CNI. The template
sets the correct parameters in the spec and creates the proper security rules in the Network Security Groups (NSGs). It is consumed the same way as the default managed template, as shown below.
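
A usage sketch, following the same parameters as the default managed template:

```bash
OCI_COMPARTMENT_ID=<compartment-id> \
OCI_MANAGED_NODE_IMAGE_ID=<image-id> \
OCI_SSH_KEY=<ssh-key> \
KUBERNETES_VERSION=v1.24.1 \
NAMESPACE=default \
clusterctl generate cluster <cluster-name> \
  --from cluster-template-managed-flannel.yaml | kubectl apply -f -
```
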
2 changes: 1 addition & 1 deletion exp/api/v1beta1/ocimanagedcluster_webhook.go
@@ -593,7 +593,7 @@ func (c *OCIManagedCluster) GetLBServiceDefaultEgressRules() []infrastructurev1b
return []infrastructurev1beta1.EgressSecurityRuleForNSG{
{
EgressSecurityRule: infrastructurev1beta1.EgressSecurityRule{
Description: common.String("Pod to Kubernetes API endpoint communication (when using VCN-native pod networking)."),
Description: common.String("Load Balancer to Worker nodes node ports."),
Protocol: common.String("6"),
TcpOptions: &infrastructurev1beta1.TcpOptions{
DestinationPortRange: &infrastructurev1beta1.PortRange{
17 changes: 17 additions & 0 deletions exp/api/v1beta1/ocimanagedmachinepool_webhook_test.go
@@ -44,6 +44,23 @@ func TestOCIManagedMachinePool_CreateDefault(t *testing.T) {
}))
},
},
{
name: "should not override cni type",
m: &OCIManagedMachinePool{
Spec: OCIManagedMachinePoolSpec{
NodePoolNodeConfig: &NodePoolNodeConfig{
NodePoolPodNetworkOptionDetails: &NodePoolPodNetworkOptionDetails{
CniType: FlannelCNI,
},
},
},
},
expect: func(g *gomega.WithT, c *OCIManagedMachinePool) {
g.Expect(c.Spec.NodePoolNodeConfig.NodePoolPodNetworkOptionDetails).To(Equal(&NodePoolPodNetworkOptionDetails{
CniType: FlannelCNI,
}))
},
},
}

for _, test := range tests {