add KubeEdge installation procedure. #30

Merged · 4 commits · Dec 7, 2023
10 changes: 6 additions & 4 deletions README.md
@@ -1,15 +1,15 @@
[![build-docker-images](https://github.com/fujitatomoya/ros_k8s/actions/workflows/build-docker-images.yml/badge.svg)](https://github.com/fujitatomoya/ros_k8s/actions/workflows/build-docker-images.yml)

# ROS Kubernetes
# ROS Kubernetes / KubeEdge

## Abstract

This repository provides tutorials on how to use ROS and ROS 2 with a Kubernetes cluster system.
This repository provides tutorials on how to use ROS and ROS 2 with Kubernetes and KubeEdge cluster systems.
Users may need some knowledge of Kubernetes to understand what is really going on in the cluster system.

## Motivation

The primary goal of this repository is to let everyone try ROS and ROS 2 with a Kubernetes cluster.
The primary goal of this repository is to let everyone try ROS and ROS 2 with Kubernetes and KubeEdge clusters.
Using container images and container orchestration allows application developers to be agnostic of the system platform and to focus only on application logic.
ROS and ROS 2 provide good isolation between nodes, so we can take full advantage of both the application runtime framework and container orchestration.

@@ -51,6 +51,7 @@ This environment is very much useful to try or test your container images or ser
- [Install Kubernetes Packages](./docs/Install_Kubernetes_Packages.md)
- [Build ROS / ROS 2 Full Docker Images](./docs/Build_Docker_Images.md)
- [Setup Kubernetes Cluster](./docs/Setup_Kubernetes_Cluster.md)
- [Setup KubeEdge Cloud/Edge Node](./docs/Setup_KubeEdge.md)
- [Setup Virtualized Kubernetes Cluster](./docs/Setup_Virtualized_Cluster.md)
- [ROS Deployment Demonstration](./docs/ROS_Deployment_Demonstration.md)
- [ROS 2 Deployment Demonstration](./docs/ROS2_Deployment_Demonstration.md)
@@ -64,7 +65,8 @@ This environment is very much useful to try or test your container images or ser

## Reference

- [kubernetes official documentation](https://kubernetes.io/docs/home/)
- [Kubernetes Official Documentation](https://kubernetes.io/docs/home/)
- [KubeEdge Official Documentation](https://kubeedge.io/docs/welcome/getting-started)
- [Kubernetes IN Docker](https://kind.sigs.k8s.io/)
- [ROS Noetic](http://wiki.ros.org/noetic)
- [ROS Rolling](https://docs.ros.org/en/rolling/)
207 changes: 207 additions & 0 deletions docs/Setup_KubeEdge.md
@@ -0,0 +1,207 @@
# Setup KubeEdge

KubeEdge extends native containerized application orchestration capabilities to hosts at the edge.
That said, KubeEdge requires a Kubernetes cluster to be already running in the cluster infrastructure.
Tunneling between cloudcore and edgecore provides control-plane connectivity over the internet, so that edge devices behind NAT or on a different network can join the cluster running in the cloud infrastructure.
KubeEdge edge nodes behave like transparent Kubernetes worker nodes, so the same operations used with Kubernetes can be applied.

![KubeEdge System Overview](./../images/kubeedge-system-overview.png)

## Reference

- [KubeEdge Github](https://github.com/kubeedge/kubeedge)
- [KubeEdge Official Documentation](https://kubeedge.io/)

## Kubernetes Compatibility

**<span style="color: red;">CAUTION</span>**

According to the [KubeEdge Kubernetes Compatibility](https://github.com/kubeedge/kubeedge#kubernetes-compatibility), Kubernetes `v1.25.13` is not officially supported yet.
To use KubeEdge, we need to downgrade Kubernetes to `v1.23.17` as follows.

```bash
root@tomoyafujita-HP-Compaq-Elite-8300-SFF:~# apt install -y --allow-downgrades kubeadm=1.23.17-00 kubelet=1.23.17-00 kubectl=1.23.17-00
```
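
Optionally, the downgraded packages can be pinned so that a later `apt upgrade` does not move them off `v1.23.17` (a minimal sketch; adjust to your package management setup):

```bash
# Pin the downgraded packages so apt does not upgrade them again.
apt-mark hold kubeadm kubelet kubectl
```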

## Container Network Interface (CNI)

Although the [KubeEdge Roadmap](https://github.com/kubeedge/kubeedge/blob/master/docs/roadmap.md#integration-and-verification-of-third-party-cni) mentions CNI support, each CNI implementation still requires CNI-specific steps to be instantiated with KubeEdge.

For more details, see:
- [KubeEdge didn't support Weave CNI](https://github.com/kubeedge/kubeedge/issues/3935)
- [KubeEdge edgecore supports CNI Cilium](https://github.com/kubeedge/kubeedge/issues/4844)

At this moment, we use the host network interface only.

The KubeEdge community has been developing [edgemesh](https://github.com/kubeedge/edgemesh) as the next-generation data-plane component, including support as a CNI.
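
In practice, using the host network means the demo pods run directly in the node's network namespace. A minimal sketch of the relevant pod-spec fields is shown below; the full example is `yaml/ros2-sample-hostnic.yaml` added in this change (the `dnsPolicy` line is optional and not part of that sample):

```yaml
# Pod spec fragment: use the node's network namespace instead of a CNI-managed pod network.
spec:
  hostNetwork: true
  dnsPolicy: ClusterFirstWithHostNet  # optional: keep cluster DNS resolution with hostNetwork
```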

## Setup Kubernetes API Server

As explained above, KubeEdge requires a running Kubernetes cluster.
In other words, the Kubernetes API server must be running before the KubeEdge installation.

See [Setup Kubernetes API Server](./Setup_Kubernetes_Cluster.md#setup-kubernetes-api-server) and [Access API-server](./Setup_Kubernetes_Cluster.md#access-api-server).


Kubernetes requires one of the CNI implementations to be running in order to set up the cluster.
See [Deploy CNI Plugin](https://github.com/fujitatomoya/ros_k8s/blob/master/docs/Setup_Kubernetes_Cluster.md#deploy-cni-plugin) to start the CNI for Kubernetes. (This CNI can only be used by Kubernetes worker nodes, not by KubeEdge edge nodes.)

**<span style="color: red;">TODO: CNI needs to be uninstalled</span>**

The CNI is only required to bring the Kubernetes API-server node up and running, because we are going to deploy cloudcore to the same physical node as the Kubernetes API-server.
Without a CNI deployed, the API-server node stays `NotReady`, and we are not able to deploy cloudcore to it, since containers cannot be deployed to any `NotReady` nodes.
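
As a sketch of the TODO above, once `cloudcore` is running the CNI could be removed by deleting whichever manifest was applied in the step above. The example below assumes Flannel was used and is only illustrative; adjust it to the CNI actually deployed, and note that the node may report `NotReady` again after the CNI is removed:

```bash
# Remove the CNI that was only needed to bring the API-server node to Ready.
# Replace the manifest with the one that was actually applied.
kubectl delete -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
```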

## Setup KubeEdge

### Install keadm

- KubeEdge Cloud Core Node (amd64)

```bash
root@tomoyafujita-HP-Compaq-Elite-8300-SFF:~/ros_k8s# wget https://github.com/kubeedge/kubeedge/releases/download/v1.14.2/keadm-v1.14.2-linux-amd64.tar.gz
root@tomoyafujita-HP-Compaq-Elite-8300-SFF:~/ros_k8s# tar -zxvf keadm-v1.14.2-linux-amd64.tar.gz
keadm-v1.14.2-linux-amd64/
keadm-v1.14.2-linux-amd64/version
keadm-v1.14.2-linux-amd64/keadm/
keadm-v1.14.2-linux-amd64/keadm/keadm
root@tomoyafujita-HP-Compaq-Elite-8300-SFF:~/ros_k8s# cp keadm-v1.14.2-linux-amd64/keadm/keadm /usr/local/bin/keadm
root@tomoyafujita-HP-Compaq-Elite-8300-SFF:~/ros_k8s# keadm version
version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.2", GitCommit:"5036064115fad46232dee1c8ad5f1f84fde7984b", GitTreeState:"clean", BuildDate:"2023-09-04T01:54:06Z", GoVersion:"go1.17.13", Compiler:"gc", Platform:"linux/amd64"}
```

- KubeEdge Edge Node (arm64)

```bash
root@ubuntu:~/ros_k8s# wget https://github.com/kubeedge/kubeedge/releases/download/v1.14.2/keadm-v1.14.2-linux-arm64.tar.gz
root@ubuntu:~/ros_k8s# tar -zxvf keadm-v1.14.2-linux-arm64.tar.gz
keadm-v1.14.2-linux-arm64/
keadm-v1.14.2-linux-arm64/version
keadm-v1.14.2-linux-arm64/keadm/
keadm-v1.14.2-linux-arm64/keadm/keadm
root@ubuntu:~/ros_k8s# cp keadm-v1.14.2-linux-arm64/keadm/keadm /usr/local/bin/keadm
root@ubuntu:~/ros_k8s# keadm version
version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.2", GitCommit:"5036064115fad46232dee1c8ad5f1f84fde7984b", GitTreeState:"clean", BuildDate:"2023-09-04T01:54:04Z", GoVersion:"go1.17.13", Compiler:"gc", Platform:"linux/arm64"}
```

### Cloud Core

- With this configuration, we will deploy the KubeEdge `cloudcore` to the Kubernetes master node. The master node normally carries a taint that prevents pods from being scheduled on it, in order to reserve system resources for Kubernetes. We need to remove that taint so that the `cloudcore` pods can be deployed to the master node. (If this is not done, `keadm init` will fail with `Error: timed out waiting for the condition`.)

```bash
root@tomoyafujita-HP-Compaq-Elite-8300-SFF:~# kubectl get nodes -o json | jq '.items[].spec.taints'
[
{
"effect": "NoSchedule",
"key": "node-role.kubernetes.io/master"
}
]

root@tomoyafujita-HP-Compaq-Elite-8300-SFF:~# kubectl taint nodes tomoyafujita-hp-compaq-elite-8300-sff node-role.kubernetes.io/master:NoSchedule-
node/tomoyafujita-hp-compaq-elite-8300-sff untainted

root@tomoyafujita-HP-Compaq-Elite-8300-SFF:~# kubectl get nodes -o json | jq '.items[].spec.taints'
null
```

- start KubeEdge `cloudcore`.

```bash
root@tomoyafujita-HP-Compaq-Elite-8300-SFF:~# keadm init --advertise-address=192.168.1.248 --profile version=v1.12.1
Kubernetes version verification passed, KubeEdge installation will start...
CLOUDCORE started
=========CHART DETAILS=======
NAME: cloudcore
LAST DEPLOYED: Thu Sep 14 22:09:35 2023
NAMESPACE: kubeedge
STATUS: deployed
REVISION: 1

root@tomoyafujita-HP-Compaq-Elite-8300-SFF:~# kubectl get all -n kubeedge
NAME READY STATUS RESTARTS AGE
pod/cloudcore-77b5dfdd57-btmlp 1/1 Running 0 24s

NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/cloudcore 1/1 1 1 24s

NAME DESIRED CURRENT READY AGE
replicaset.apps/cloudcore-77b5dfdd57 1 1 1 24s
```
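
Optionally, the `cloudcore` logs can be checked to confirm that it started without errors and is waiting for edge nodes (a quick sanity check; the exact output varies by version):

```bash
# Inspect cloudcore logs for startup errors.
kubectl logs -n kubeedge deployment/cloudcore
```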

### Edge Core

The KubeEdge edge node can be joined to the cluster via the following commands.

- get security token from cloudcore.

```bash
root@tomoyafujita-HP-Compaq-Elite-8300-SFF:/# keadm gettoken
40d9bb8bb2c3818728da3d46f1a78b58f4b9fba8665cc392ded4698d1eb5cab1.eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE2OTQ4NDA5ODN9.niGZHHdR7s89K4-919fCNKEVTyudb8DtTmE9p5PFzKg
```

- start KubeEdge `edgecore`

```bash
root@ubuntu:~# keadm join --cloudcore-ipport=192.168.1.248:10000 --token=40d9bb8bb2c3818728da3d46f1a78b58f4b9fba8665cc392ded4698d1eb5cab1.eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE2OTQ4NDA5ODN9.niGZHHdR7s89K4-919fCNKEVTyudb8DtTmE9p5PFzKg --kubeedge-version=v1.12.1 --runtimetype=docker --cgroupdriver systemd
I0915 06:28:19.544584 10424 command.go:845] 1. Check KubeEdge edgecore process status
I0915 06:28:19.578838 10424 command.go:845] 2. Check if the management directory is clean
I0915 06:28:19.579238 10424 join.go:107] 3. Create the necessary directories
I0915 06:28:19.580874 10424 join.go:184] 4. Pull Images
Pulling kubeedge/installation-package:v1.12.1 ...
Pulling kubeedge/pause:3.6 ...
Pulling eclipse-mosquitto:1.6.15 ...
I0915 06:28:19.590357 10424 join.go:184] 5. Copy resources from the image to the management directory
I0915 06:28:27.255374 10424 join.go:184] 6. Start the default mqtt service
I0915 06:28:27.256005 10424 join.go:107] 7. Generate systemd service file
I0915 06:28:27.256873 10424 join.go:107] 8. Generate EdgeCore default configuration
I0915 06:28:27.256988 10424 join.go:270] The configuration does not exist or the parsing fails, and the default configuration is generated
W0915 06:28:27.263577 10424 validation.go:71] NodeIP is empty , use default ip which can connect to cloud.
I0915 06:28:27.270836 10424 join.go:107] 9. Run EdgeCore daemon
I0915 06:28:34.350843 10424 join.go:435]
I0915 06:28:34.350933 10424 join.go:436] KubeEdge edgecore is running, For logs visit: journalctl -u edgecore.service -xe

root@ubuntu:~# systemctl status edgecore
● edgecore.service
Loaded: loaded (/etc/systemd/system/edgecore.service; enabled; vendor preset: enabled)
Active: active (running) since Fri 2023-09-15 06:28:34 UTC; 1min 9s ago
Main PID: 10603 (edgecore)
Tasks: 16 (limit: 9190)
Memory: 31.4M
CPU: 11.565s
CGroup: /system.slice/edgecore.service
└─10603 /usr/local/bin/edgecore
...<snip>
```

- check cluster nodes.

```bash
root@tomoyafujita-HP-Compaq-Elite-8300-SFF:~# kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
tomoyafujita-hp-compaq-elite-8300-sff Ready control-plane,master 104m v1.23.17 192.168.1.248 <none> Ubuntu 20.04.6 LTS 5.15.0-83-generic docker://24.0.5
ubuntu Ready agent,edge 97s v1.22.6-kubeedge-v1.12.1 192.168.1.238 <none> Ubuntu 22.04.3 LTS 5.15.0-1034-raspi docker://24.0.5
```

## KubeEdge Test Deployment

```bash
root@tomoyafujita-HP-Compaq-Elite-8300-SFF:/home/tomoyafujita/DVT/github.com/fujitatomoya/ros_k8s/yaml# kubectl apply -f ros2-sample-hostnic.yaml
deployment.apps/ros2-talker-1 created
deployment.apps/ros2-listener-1 created

root@tomoyafujita-HP-Compaq-Elite-8300-SFF:/home/tomoyafujita/DVT/github.com/fujitatomoya/ros_k8s/yaml# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
ros2-listener-1-64c4c996b4-scm9k 1/1 Running 0 14m 192.168.1.248 tomoyafujita-hp-compaq-elite-8300-sff <none> <none>
ros2-talker-1-bdd899d8d-drt2f 1/1 Running 0 14m 192.168.1.238 ubuntu <none> <none>
```
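
To verify that data actually flows from the talker on the edge node to the listener, the listener's output can be checked (a minimal check, assuming the `ros2 topic echo` output appears in the pod logs; the deployment name is taken from the manifest above):

```bash
# The listener should keep printing the string published by the talker.
kubectl logs deployment/ros2-listener-1 --tail=5
```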

## Break Down KubeEdge

```bash
root@tomoyafujita-HP-Compaq-Elite-8300-SFF:~# keadm reset --force
I0915 00:26:18.977824 114454 util_unix.go:104] "Using this format as endpoint is deprecated, please consider using full url format." deprecatedFormat="" fullURLFormat="unix://"
Failed to remove MQTT container: failed to new container runtime: unable to determine image API version: rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing dial unix: missing address"
root@tomoyafujita-HP-Compaq-Elite-8300-SFF:~# \rm -rf /etc/kubeedge/*
root@tomoyafujita-HP-Compaq-Elite-8300-SFF:~# kubeadm reset --force
[reset] Reading configuration from the cluster...
root@tomoyafujita-HP-Compaq-Elite-8300-SFF:~# \rm -rf $HOME/.kube/config
```
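
The commands above are run on the cloud side. The edge node can be cleaned up in a similar way (a sketch, assuming the default installation paths used by `keadm`):

```bash
# On the edge node: stop edgecore and remove the KubeEdge configuration.
keadm reset --force
rm -rf /etc/kubeedge
```
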
Binary file added images/kubeedge-system-overview.png
Binary file modified images/system_overview.odp
67 changes: 67 additions & 0 deletions yaml/ros2-sample-hostnic.yaml
@@ -0,0 +1,67 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ros2-talker-1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ros2-talker-1
  template:
    metadata:
      labels:
        app: ros2-talker-1
    spec:
      containers:
      - image: tomoyafujita/ros:rolling
        command: ["/bin/bash", "-c"]
        args: ["source /opt/ros/$ROS_DISTRO/setup.bash && ros2 topic pub /chatter1 std_msgs/String \"data: Hello, I am talker-1\""]
        imagePullPolicy: IfNotPresent
        tty: true
        name: ros2-talker-1
      nodeSelector:
        kubernetes.io/hostname: ubuntu
      hostNetwork: true
      tolerations:
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule
      - key: node-role.kubernetes.io/control-plane
        operator: Exists
        effect: NoSchedule
      restartPolicy: Always

---

apiVersion: apps/v1
kind: Deployment
metadata:
  name: ros2-listener-1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ros2-listener-1
  template:
    metadata:
      labels:
        app: ros2-listener-1
    spec:
      containers:
      - image: tomoyafujita/ros:rolling
        command: ["/bin/bash", "-c"]
        args: ["source /opt/ros/$ROS_DISTRO/setup.bash && ros2 topic echo /chatter1 std_msgs/String"]
        imagePullPolicy: IfNotPresent
        tty: true
        name: ros2-listener-1
      nodeSelector:
        kubernetes.io/hostname: tomoyafujita
      hostNetwork: true
      tolerations:
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule
      - key: node-role.kubernetes.io/control-plane
        operator: Exists
        effect: NoSchedule
      restartPolicy: Always
1 change: 1 addition & 0 deletions yaml/ubuntu22-daemonset.yaml
@@ -18,6 +18,7 @@ spec:
        command: ["/bin/bash", "-c"]
        args: ["sleep 3600"]
        imagePullPolicy: IfNotPresent
      #hostNetwork: true
      tolerations:
      - key: node-role.kubernetes.io/master
        operator: Exists