---
title: Yurtadm init/join
authors:
- "@windydayc"
reviewers:
- "@Peeknut"
- "@rambohe-ch"
creation-date: 2022-07-05
last-updated: 2022-07-05
status: provisional

---

# Based on sealer to provide a solution to create high availability OpenYurt cluster

## Table of Contents

- [Based on sealer to provide a solution to create high availability OpenYurt cluster](#based-on-sealer-to-provide-a-solution-to-create-high-availability-openyurt-cluster)
  - [Table of Contents](#table-of-contents)
  - [Glossary](#glossary)
  - [Summary](#summary)
  - [Motivation](#motivation)
  - [Goals](#goals)
  - [Proposal](#proposal)
    - [OpenYurt Node Classification](#openyurt-node-classification)
    - [OpenYurt Components](#openyurt-components)
    - [Deployment Forms of OpenYurt Components](#deployment-forms-of-openyurt-components)
    - [Yurtadm init](#yurtadm-init)
      - [Kube-proxy](#kube-proxy)
      - [CoreDNS](#coredns)
      - [Kube-controller-manager](#kube-controller-manager)
      - [Deploy OpenYurt Components](#deploy-openyurt-components)
      - [Build Sealer CloudImage](#build-sealer-cloudimage)
    - [Yurtadm join](#yurtadm-join)
    - [High Availability Solution](#high-availability-solution)
    - [Generate images automatically](#generate-images-automatically)

## Glossary

Refer to the [OpenYurt Glossary](https://github.com/openyurtio/openyurt/blob/master/docs/proposals/00_openyurt-glossary.md).

## Summary

In this proposal, we will improve the `yurtadm init` and `yurtadm join` commands to create high-availability OpenYurt clusters by integrating the sealer tool, and provide automatic generation of the cluster image for each corresponding version.

## Motivation

At present, installing an OpenYurt cluster is still somewhat complicated, and as OpenYurt keeps releasing new versions, there is no unified installation method that can simply and automatically install clusters of the various versions.

In addition, the `yurtadm` command currently cannot cope with scenarios that require high availability. Therefore, it is necessary to provide a way to create a high-availability OpenYurt cluster.

## Goals

- Support creating high-availability OpenYurt clusters on Kubernetes v1.18 and above.
- Support generating cluster images automatically.

## Proposal

### OpenYurt Node Classification

Currently, the overall architecture of OpenYurt is as follows:

![openyurt-arch](../img/arch.png)

Nodes in OpenYurt can be classified as follows:

| Type | Introduction | Label |
| ---- | ------------ | ----- |
| Master | The master node in Kubernetes; it can also be used to deploy and run the OpenYurt central control components. | openyurt.io/is-edge-worker: false |
| Cloud node | Connected to the Kubernetes master through the intranet; mainly used to deploy and run the OpenYurt central control components. | openyurt.io/is-edge-worker: false |
| Edge node | Connected to the Kubernetes master through the public network and generally close to the edge production environment; mainly used to deploy and run edge business containers. | openyurt.io/is-edge-worker: true |

### OpenYurt Components

- **YurtHub:**

  A node-level sidecar that proxies the traffic between the components on the node and the kube-apiserver. It has two operating modes: Edge and Cloud.

  - In Edge mode, the traffic of kubelet, kube-proxy, Flannel and other cloud-native components accessing the cloud kube-apiserver passes through YurtHub, and YurtHub caches the data returned by the cloud to the local disk. When the cloud-edge network is abnormal, YurtHub uses the locally cached data to keep the edge business running.
  - In Cloud mode, since the cloud network is stable, there is no need to detect the connection between the node and the kube-apiserver. YurtHub forwards all requests to the kube-apiserver and does not cache data locally. Therefore, YurtHub in Cloud mode disables the modules related to local request processing, while the service-topology-related functions can still be used normally.

- **YurtControllerManager:**

  Controllers running in the cloud. They currently include the NodeLifeCycle Controller (which does not evict Pods on autonomous nodes) and the YurtCSRController (which approves certificate requests from edge nodes).

- **YurtAppManager:**

  The manager for cross-region resources and workloads. It currently includes NodePool (node pool management), YurtAppSet (formerly UnitedDeployment, workload management at the node pool level), YurtAppDaemon (a node-pool-level DaemonSet), and YurtIngress (the node-pool-level Ingress controller manager).

- **YurtTunnel (Server/Agent):**

  Builds a cloud-edge reverse tunnel with mutual authentication and encryption to forward O&M and monitoring traffic from the cloud to the edge.

### Deployment Forms of OpenYurt Components

| Component | Form | Installation location |
| --------- | ---- | ---------------------- |
| yurthub | Static Pod | All cloud nodes and edge nodes |
| yurt-controller-manager | Deployment | Master and cloud nodes |
| yurt-app-manager | Deployment | Master and cloud nodes |
| yurt-tunnel-server | Deployment | Master and cloud nodes |
| yurt-tunnel-agent | DaemonSet | Edge nodes |

### Yurtadm init

Under the hood, `yurtadm init` will be implemented with sealer, which is used to install the master nodes. Therefore, we first need to build an OpenYurt cluster image, which mainly involves the following:

#### Kube-proxy

A Kubernetes cluster deployed by kubeadm generates a kubeconfig for kube-proxy. If Service Topology and Topology Aware Hints are not configured, kube-proxy uses this kubeconfig to get the full set of endpoints.

In the cloud-edge scenario, edge nodes may not be able to communicate with each other, so endpoints need to be filtered by nodepool topology. Therefore kube-proxy is modified as follows (a sketch of the resulting configuration is shown after this list):

- Add the feature gate `EndpointSliceProxying: true`.
- Remove the kubeconfig parameter so that kube-proxy uses InClusterConfig to connect to the apiserver.

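For illustration, the relevant part of the kube-proxy ConfigMap after these two changes could look roughly like the sketch below. It is abridged, and field values other than the feature gate are assumptions taken from a typical kubeadm deployment; the install script shown later applies an equivalent patch.

```yaml
# Sketch of the patched kube-proxy ConfigMap (abridged).
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-proxy
  namespace: kube-system
data:
  config.conf: |
    apiVersion: kubeproxy.config.k8s.io/v1alpha1
    kind: KubeProxyConfiguration
    mode: ipvs
    featureGates:
      EndpointSliceProxying: true    # consume EndpointSlices (needed for nodepool service topology)
    # no clientConnection.kubeconfig here, so kube-proxy falls back to
    # InClusterConfig when talking to the apiserver
```
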
#### CoreDNS

In ordinary scenarios, CoreDNS is deployed as a Deployment. In cloud-edge scenarios, domain name resolution requests should not cross `NodePool` boundaries, so CoreDNS needs to be deployed as a `DaemonSet` or a `YurtAppDaemon` so that hostnames can be resolved to the tunnel server address. At the same time, an annotation is added to the kube-dns service, and the OpenYurt service topology mechanism is used to keep DNS traffic inside the node pool at the edge.

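As an illustration, the service topology annotation that the install script later applies with `kubectl annotate` is equivalent to the following excerpt (only the annotation is relevant here; the rest of the kube-dns Service is unchanged):

```yaml
# Excerpt: kube-dns Service with the service topology annotation (sketch).
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  annotations:
    # restrict DNS resolution to endpoints in the client's own node pool
    openyurt.io/topologyKeys: openyurt.io/nodepool
```
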
#### Kube-controller-manager

In order for yurt-controller-manager to work properly, the default nodelifecycle controller needs to be turned off. It can be disabled by adjusting the `--controllers` flag and restarting kube-controller-manager.

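On a kubeadm-based control plane this usually means editing the kube-controller-manager static Pod manifest (typically /etc/kubernetes/manifests/kube-controller-manager.yaml). The excerpt below is only a sketch; the flag value matches the ClusterConfiguration shown later in the Clusterfile:

```yaml
# Excerpt from the kube-controller-manager static Pod manifest (sketch).
spec:
  containers:
  - name: kube-controller-manager
    command:
    - kube-controller-manager
    # keep all default controllers except nodelifecycle
    - --controllers=-nodelifecycle,*,bootstrapsigner,tokencleaner
    # ... other flags unchanged
```
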
#### Deploy OpenYurt Components

- **Label nodes.** When a node is disconnected from the apiserver, only pods running on autonomous edge nodes are not evicted. Therefore, we first need to divide the nodes into cloud nodes and edge nodes with the `openyurt.io/is-edge-worker` label.
- **Deploy yurt-controller-manager.** yurt-controller-manager needs to be deployed to prevent pods on autonomous edge nodes from being evicted while the nodes are disconnected from the apiserver.
- **Deploy yurt-app-manager.** Deploy yurt-app-manager to support node-pool-level pod management in edge scenarios.
- **Deploy yurt-tunnel.** Deploy yurt-tunnel-server and yurt-tunnel-agent respectively: the YurtTunnel Server is deployed on cloud nodes as a Deployment, and the YurtTunnel Agent is deployed on edge nodes as a DaemonSet.

> Note: yurthub is not installed on the master.

#### Build Sealer CloudImage

According to the above, the Kubefile to build is roughly as follows:

```dockerfile
FROM kubernetes:v1.19.8-alpine

# flannel: https://github.com/sealerio/applications/tree/main/flannel
COPY flannel/cni .
COPY flannel/init-kube.sh /scripts/
COPY flannel/kube-flannel.yml manifests/

COPY shell-plugin.yaml plugins

# openyurt
COPY yurt-yamls/*.yaml manifests
COPY install-openyurt.sh .
RUN chmod 777 install-openyurt.sh

CMD kubectl apply -f manifests/kube-flannel.yml
CMD ./install-openyurt.sh
```

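The OpenYurt cluster image can then be built from this Kubefile with sealer's build command (for example, something along the lines of `sealer build -t openyurt-cluster:latest .`; the exact flags may vary between sealer versions).
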
install-openyurt.sh:

```shell
#!/bin/bash

echo "[INFO] Start installing OpenYurt."

## label node
kubectl label node $HOSTNAME openyurt.io/is-edge-worker=false

## install openyurt components
kubectl apply -f manifests/yurt-controller-manager.yaml
kubectl apply -f manifests/yurt-tunnel-agent.yaml
kubectl apply -f manifests/yurt-tunnel-server.yaml
kubectl apply -f manifests/yurt-app-manager.yaml
kubectl apply -f manifests/yurthub-cfg.yaml

## configure coredns
kubectl apply -f manifests/yurt-coredns.yaml
kubectl annotate svc kube-dns -n kube-system openyurt.io/topologyKeys='openyurt.io/nodepool'
kubectl scale --replicas=0 deployment/coredns -n kube-system

## configure kube-proxy
kubectl patch cm -n kube-system kube-proxy --patch '{"data": {"config.conf": "apiVersion: kubeproxy.config.k8s.io/v1alpha1\nbindAddress: 0.0.0.0\nfeatureGates:\n EndpointSliceProxying: true\nbindAddressHardFail: false\nclusterCIDR: 100.64.0.0/10\nconfigSyncPeriod: 0s\nenableProfiling: false\nipvs:\n excludeCIDRs:\n - 10.103.97.2/32\n minSyncPeriod: 0s\n strictARP: false\nkind: KubeProxyConfiguration\nmode: ipvs\nudpIdleTimeout: 0s\nwinkernel:\n enableDSR: false\nkubeconfig.conf:"}}' && kubectl delete pod --selector k8s-app=kube-proxy -n kube-system

echo "[INFO] OpenYurt is successfully installed."
```

Clusterfile:

```yaml
apiVersion: sealer.cloud/v2
kind: Cluster
metadata:
  name: my-cluster
spec:
  image: openyurt-cluster:latest
  hosts:
    - ips: [ 192.168.152.130,192.168.152.131 ]
      roles: [ master ]
  ssh:
    passwd: zf123456
    user: root

---

## Custom configurations must specify kind, will be merged to default kubeadm configs
kind: ClusterConfiguration
controllerManager:
  extraArgs:
    controllers: -nodelifecycle,*,bootstrapsigner,tokencleaner
```

> Some configurable parameters have been temporarily omitted.

### Yurtadm join

The process of `yurtadm join` is basically the same as before and is mainly divided into the following phases:

1. PreparePhase: initialization related to the system environment, e.g. cleaning the /etc/kubernetes/manifests directory, disabling SELinux, and checking and installing kubelet.
2. PreflightPhase: runs the kubeadm-join-related pre-flight checks.
3. JoinNodePhase: joins the node to the OpenYurt cluster. Yurthub, kubelet and other components are started here, and the kubelet started in this phase connects directly to yurthub instead of the apiserver (see the sketch after this list).
4. PostCheckPhase: health checks of the node, kubelet, yurthub, etc.

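For illustration, the kubeconfig that `yurtadm join` generates for the kubelet points at the local yurthub endpoint rather than the remote kube-apiserver. The sketch below only shows the shape of such a kubeconfig; the file path and the yurthub proxy port (commonly 10261) are assumptions and may differ between OpenYurt versions.

```yaml
# Kubelet kubeconfig written by yurtadm join (illustrative sketch).
apiVersion: v1
kind: Config
clusters:
- name: default-cluster
  cluster:
    # the kubelet talks to the local yurthub proxy instead of the kube-apiserver
    server: http://127.0.0.1:10261
contexts:
- name: default-context
  context:
    cluster: default-cluster
    namespace: default
current-context: default-context
```
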
### High Availability Solution

Sealer implements cluster high availability with a lightweight load balancer, lvscare. Compared with other load balancers, lvscare is very small (only a few hundred lines of code), and it only maintains the IPVS rules instead of forwarding traffic itself, so it is very stable. It runs on each node and directly probes the apiservers: if one crashes, the corresponding rule is removed, and once the apiserver restarts it is automatically added back, which is equivalent to a node-local dedicated load balancer.

![sealer-high-availability](../img/sealer-high-availability.png)

The high availability of OpenYurt can be realized with the help of sealer's high availability.

The following is an example of sealer's Clusterfile:

```yaml
apiVersion: sealer.cloud/v2
kind: Cluster
metadata:
  name: default-kubernetes-cluster
spec:
  image: kubernetes:v1.19.8
  ssh:
    passwd: xxx
  hosts:
    - ips: [ 192.168.152.132,192.168.152.133 ]
      roles: [ master ]
    - ips: [ 192.168.0.5 ]
      roles: [ node ]
```

Multiple masters and nodes can be configured in sealer, which already provides high availability.

Therefore, `yurtadm init` can rely on sealer to achieve high availability of the masters.

Since yurthub itself can point to multiple apiserver addresses and selects one of them for the reverse proxy according to a load-balancing algorithm, high availability is already achieved on the node side. Therefore, the `yurtadm join` logic can stay roughly unchanged; we only need to ensure that yurthub can be configured with multiple kube-apiserver addresses, as sketched below.

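For illustration, a yurthub configured with multiple apiserver addresses might look roughly like the excerpt below. The `--server-addr` flag and its comma-separated format reflect the current yurthub flags but should be treated as an assumption here; the image tag and addresses are placeholders taken from the example Clusterfile above.

```yaml
# Excerpt from a yurthub static Pod manifest (sketch).
spec:
  containers:
  - name: yurt-hub
    image: openyurt/yurthub:latest
    command:
    - yurthub
    - --v=2
    # multiple kube-apiserver addresses; yurthub load-balances among them
    - --server-addr=https://192.168.152.132:6443,https://192.168.152.133:6443
    - --node-name=$(NODE_NAME)
```
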
### Generate images automatically

The cluster images can be built and published automatically through GitHub Actions.
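For illustration, a workflow could build and push the sealer cluster image whenever a release tag is pushed. The sketch below is only an outline, under the assumption that sealer provides docker-like `build`/`login`/`push` subcommands; the workflow name, registry, and secret names are placeholders.

```yaml
# .github/workflows/release-cluster-image.yml (illustrative sketch)
name: release-cluster-image
on:
  push:
    tags:
      - "v*"
jobs:
  build-and-push:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Install sealer
        run: |
          # placeholder: download and install a pinned sealer release here
          echo "install sealer"
      - name: Build and push the OpenYurt cluster image
        run: |
          sealer build -t registry.example.com/openyurt-cluster:${GITHUB_REF_NAME} .
          sealer login registry.example.com -u ${{ secrets.REGISTRY_USER }} -p ${{ secrets.REGISTRY_PASSWORD }}
          sealer push registry.example.com/openyurt-cluster:${GITHUB_REF_NAME}
```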