From 0e8dab5ad1b09e3f821b7d3eca9f799303024277 Mon Sep 17 00:00:00 2001
From: Lenny Chen
Date: Fri, 5 Jan 2024 11:06:47 -0800
Subject: [PATCH 1/5] add additional known issues

---
 .../clusters/edge/networking/vxlan-overlay.md |  9 +++++--
 .../site-installation/cluster-deployment.md   |  8 ++++++
 docs/docs-content/integrations/harbor-edge.md | 27 +++++++++++++++++++
 3 files changed, 42 insertions(+), 2 deletions(-)

diff --git a/docs/docs-content/clusters/edge/networking/vxlan-overlay.md b/docs/docs-content/clusters/edge/networking/vxlan-overlay.md
index c1eee0406f..837d30d041 100644
--- a/docs/docs-content/clusters/edge/networking/vxlan-overlay.md
+++ b/docs/docs-content/clusters/edge/networking/vxlan-overlay.md
@@ -32,6 +32,11 @@ The Analytics team of a manufacturing company is deploying an Edge host to their
|---------------------|-----------------------|
| Upon recovery, each Kubernetes component inside the Edge host requests an IP address from the DHCP server, and receives a different IP address than their original IP address before the outage happened. Since Kubernetes expects several components in the control plane to have stable IP addresses, the cluster becomes non-operational and the assembly line is unable to resume operations | Each Kubernetes component inside the Edge host has a virtual IP address in the overlay network. Upon recovery, their IP addresses in the overlay network remain the same despite their IP addresses changing in the underlying DHCP network. The Edge host is able to assume its workload and the assembly line resumes operations |
+## Limitations
+- When adding multiple nodes to an existing cluster with overlay enabled, failure to add one node will block the addition of the other nodes.
+
+- When deleting an Edge host from a cluster with overlay enabled, ensure the node doesn't have the `palette-webhook` pod on it, or the node will be stuck in the deleting state. You can use the command `kubectl get pods -all-namespaces -output wide` to identify which node `palette-webhook` is on.
+
## Prerequisites

* At least one Edge host registered with your Palette account.
@@ -84,7 +89,7 @@ You will not be able to change the network overlay configurations after the clus
- In the Calico pack YAML file default template, uncomment `FELIX_IPV6SUPPORT` and set its value to `scbr-100` and uncomment `manifests.calico.env.calicoNode.IP_AUTODETECTION_METHOD` and set its value to `interface=scbr-100`.
+ In the Calico pack YAML file default template, uncomment `FELIX_MTUIFACEPATTERN` and set its value to `scbr-100`, and uncomment `manifests.calico.env.calicoNode.IP_AUTODETECTION_METHOD` and set its value to `interface=scbr-100`.
```yaml {8,11}
manifests:
  calico:
@@ -93,7 +98,7 @@ You will not be able to change the network overlay configurations after the clus
      # Additional env variables for calico-node
      calicoNode:
        #IPV6: "autodetect"
-        FELIX_IPV6SUPPORT: "scbr-100"
+        FELIX_MTUIFACEPATTERN: "scbr-100"
        #CALICO_IPV6POOL_NAT_OUTGOING: "true"
        #CALICO_IPV4POOL_CIDR: "192.168.0.0/16"
        IP_AUTODETECTION_METHOD: "interface=scbr-100"
diff --git a/docs/docs-content/clusters/edge/site-deployment/site-installation/cluster-deployment.md b/docs/docs-content/clusters/edge/site-deployment/site-installation/cluster-deployment.md
index 02a5160116..c0a49a8469 100644
--- a/docs/docs-content/clusters/edge/site-deployment/site-installation/cluster-deployment.md
+++ b/docs/docs-content/clusters/edge/site-deployment/site-installation/cluster-deployment.md
@@ -20,6 +20,10 @@ Select the workflow that best fits your needs.

Use the following steps to create a new host cluster so that you can add Edge hosts to the node pools.

+### Limitations
+
+- In a multi-node cluster with PXK-E as its Kubernetes layer, you cannot specify a custom Network Interface Card (NIC). When you add an Edge host to such a cluster, leave the NIC field as its default value.
+
### Prerequisites

- A registered Edge host.
@@ -157,6 +161,10 @@ You can also use the command `kubectl get nodes` to review the status of all nod

You can add Edge hosts to the node pool of an existing host cluster. Use the following steps to add the Edge host to the node pool.

+### Limitations
+
+- In a multi-node cluster with PXK-E as its Kubernetes layer, you cannot specify a custom Network Interface Card (NIC). When you add an Edge host to such a cluster, leave the NIC field as its default value.
+
### Prerequisites

- A registered Edge host.
diff --git a/docs/docs-content/integrations/harbor-edge.md b/docs/docs-content/integrations/harbor-edge.md
index 3fa0f049a4..0ba1201904 100644
--- a/docs/docs-content/integrations/harbor-edge.md
+++ b/docs/docs-content/integrations/harbor-edge.md
@@ -167,9 +167,36 @@ If you didn't provide a certificate or are using a self-signed certificate, Dock

+### Known Issues
+
+The following known issues exist in the Harbor 1.0.0 release.
+
+- The Harbor DB pod might fail to start due to file permission issues. This is a [known issue](https://github.com/goharbor/harbor-helm/issues/1676) in the Harbor GitHub repository. Refer to the [Troubleshooting section](#scenario---harbor-db-pod-fails-to-start) for a workaround.
+
+## Troubleshooting
+
+### Scenario - Harbor DB Pod Fails to Start
+
+When you start a cluster with the Harbor pack, the **harbor-database** pod might fail to start and get stuck in the **CrashLoopBackOff** state. This might be due to a known issue with the Harbor pack related to file permissions. The workaround is to delete the pod; a new pod will be created automatically.
+
+#### Debug Steps
+
+1. Issue the following command to identify the pods with names that start with `harbor-database`.
+
+   ```shell
+   kubectl get pods --namespace harbor -o wide
+   ```
+
+2. Delete the pod you identified in the previous step. Replace `POD_NAME` with the name of the pod. If there are multiple pods, run the command for each pod.
+
+   ```shell
+   kubectl delete pod POD_NAME --namespace harbor
+   ```
+
## Terraform

You can reference the Harbor Edge-Native Config pack in Terraform with a data resource.
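As a quick aside on the Debug Steps added above, the two commands can be combined into one check-and-delete pass. A minimal sketch, assuming the pack's default `harbor` namespace and a StatefulSet-style pod name such as `harbor-database-0` (both assumptions; verify the actual name from the first command before deleting):

```shell
# List the Harbor pods and confirm which harbor-database pod is stuck
# in the CrashLoopBackOff state. The "harbor" namespace is the pack
# default and may differ in your cluster.
kubectl get pods --namespace harbor --output wide | grep harbor-database

# Delete the stuck pod; its controller recreates it automatically.
# "harbor-database-0" is a hypothetical pod name for illustration.
kubectl delete pod harbor-database-0 --namespace harbor
```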
From f227d4c1fd3e0673b085ff723addbc3fe6e97a4c Mon Sep 17 00:00:00 2001
From: Lenny Chen
Date: Fri, 5 Jan 2024 11:22:04 -0800
Subject: [PATCH 2/5] address vale comments

---
 docs/docs-content/integrations/harbor-edge.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/docs-content/integrations/harbor-edge.md b/docs/docs-content/integrations/harbor-edge.md
index 0ba1201904..caa7bf1f24 100644
--- a/docs/docs-content/integrations/harbor-edge.md
+++ b/docs/docs-content/integrations/harbor-edge.md
@@ -190,7 +190,7 @@ When you start a cluster with the Harbor pack, the **harbor-database** pod might
   kubectl get pods --namespace harbor -o wide
   ```

-2. Delete the pod you identified in the previous step. Replace `POD_NAME` with the name of the pod. If there are multiple pods, run the command for each pod.
+2. Delete the pod you identified in the previous step. Replace `POD_NAME` with the name of the pod. If there are multiple pods, use the command for each pod.

   ```shell
   kubectl delete pod POD_NAME --namespace harbor
   ```

From 27ab16a5fe6c19be3db8da5e1820b89fa2f4bfbd Mon Sep 17 00:00:00 2001
From: Lenny Chen <55669665+lennessyy@users.noreply.github.com>
Date: Fri, 5 Jan 2024 13:13:55 -0800
Subject: [PATCH 3/5] Apply suggestions from code review

Co-authored-by: Karl Cardenas
---
 docs/docs-content/clusters/edge/networking/vxlan-overlay.md | 2 +-
 docs/docs-content/integrations/harbor-edge.md               | 4 ++--
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/docs/docs-content/clusters/edge/networking/vxlan-overlay.md b/docs/docs-content/clusters/edge/networking/vxlan-overlay.md
index 837d30d041..3a978ef301 100644
--- a/docs/docs-content/clusters/edge/networking/vxlan-overlay.md
+++ b/docs/docs-content/clusters/edge/networking/vxlan-overlay.md
@@ -35,7 +35,7 @@ The Analytics team of a manufacturing company is deploying an Edge host to their
## Limitations
- When adding multiple nodes to an existing cluster with overlay enabled, failure to add one node will block the addition of the other nodes.

-- When deleting an Edge host from a cluster with overlay enabled, ensure the node doesn't have the `palette-webhook` pod on it, or the node will be stuck in the deleting state. You can use the command `kubectl get pods -all-namespaces -output wide` to identify which node `palette-webhook` is on.
+- When deleting an Edge host from a cluster with overlay enabled, ensure the node doesn't have the `palette-webhook` pod on it, or the node will be stuck in the deleting state. You can use the command `kubectl get pods --all-namespaces --output wide` to identify which node `palette-webhook` is on.

## Prerequisites

diff --git a/docs/docs-content/integrations/harbor-edge.md b/docs/docs-content/integrations/harbor-edge.md
index caa7bf1f24..8184fb5226 100644
--- a/docs/docs-content/integrations/harbor-edge.md
+++ b/docs/docs-content/integrations/harbor-edge.md
@@ -171,7 +171,7 @@ If you didn't provide a certificate or are using a self-signed certificate, Dock
The following known issues exist in the Harbor 1.0.0 release.

-- The Harbor DB pod might fail to start due to file permission issues. This is a [known issue](https://github.com/goharbor/harbor-helm/issues/1676) in the Harbor GitHub repository. Refer to the [Troubleshooting section](#scenario---harbor-db-pod-fails-to-start) for a workaround.
+- The Harbor database pod might fail to start due to file permission issues. This is a [known issue](https://github.com/goharbor/harbor-helm/issues/1676) in the Harbor GitHub repository.
Refer to the [Troubleshooting section](#scenario---harbor-db-pod-fails-to-start) for a workaround.

@@ -187,7 +187,7 @@ When you start a cluster with the Harbor pack, the **harbor-database** pod might
1. Issue the following command to identify the pods with names that start with `harbor-database`.

   ```shell
-   kubectl get pods --namespace harbor -o wide
+   kubectl get pods --namespace harbor --output wide
   ```

2. Delete the pod you identified in the previous step. Replace `POD_NAME` with the name of the pod. If there are multiple pods, use the command for each pod.

   ```shell
   kubectl delete pod POD_NAME --namespace harbor
   ```

From fb61e37e49f61aef70340b54150a5fd9954f89f5 Mon Sep 17 00:00:00 2001
From: Lenny Chen
Date: Fri, 5 Jan 2024 13:20:57 -0800
Subject: [PATCH 4/5] add link to pxk-e

---
 .../site-deployment/site-installation/cluster-deployment.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/docs-content/clusters/edge/site-deployment/site-installation/cluster-deployment.md b/docs/docs-content/clusters/edge/site-deployment/site-installation/cluster-deployment.md
index c0a49a8469..defb4f61c1 100644
--- a/docs/docs-content/clusters/edge/site-deployment/site-installation/cluster-deployment.md
+++ b/docs/docs-content/clusters/edge/site-deployment/site-installation/cluster-deployment.md
@@ -163,7 +163,7 @@ You can add Edge hosts to the node pool of an existing host cluster. Use the fol
### Limitations

-- In a multi-node cluster with PXK-E as its Kubernetes layer, you cannot specify a custom Network Interface Card (NIC). When you add an Edge host to such a cluster, leave the NIC field as its default value.
+- In a multi-node cluster with [PXK-E](../../../../integrations/kubernetes-edge.md) as its Kubernetes layer, you cannot specify a custom Network Interface Card (NIC). When you add an Edge host to such a cluster, leave the NIC field as its default value.

### Prerequisites

From a8b23b192cf7f666d690554d5c89b21fc69c3c45 Mon Sep 17 00:00:00 2001
From: Lenny Chen
Date: Fri, 5 Jan 2024 13:57:38 -0800
Subject: [PATCH 5/5] add admonition for tech preview

---
 .../clusters/edge/networking/vxlan-overlay.md | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/docs/docs-content/clusters/edge/networking/vxlan-overlay.md b/docs/docs-content/clusters/edge/networking/vxlan-overlay.md
index 3a978ef301..f38b8e262b 100644
--- a/docs/docs-content/clusters/edge/networking/vxlan-overlay.md
+++ b/docs/docs-content/clusters/edge/networking/vxlan-overlay.md
@@ -11,11 +11,11 @@ Edge clusters are often deployed in locations where network environments are not
Palette allows you to create a virtual overlay network on top of the physical network, and the virtual IP addresses of all cluster components are managed by Palette. Inside the cluster, the different components use the virtual IP addresses to communicate with each other instead of the underlying IP addresses that could change due to external factors. If the cluster experiences an outage with the overlay network enabled, components inside the cluster retain their virtual IP addresses in the overlay network, even if their IP addresses in the underlying physical network have changed, protecting the cluster from an outage.
-
![VxLAN Overlay Architecture](/clusters_edge_site-installation_vxlan-overlay_architecture.png)
-
+:::caution
+Enabling an overlay network on a cluster is a Tech Preview feature and is subject to change. Do not use this feature in production workloads.
+:::

## When Should You Consider Enabling Overlay Network?

If your Edge clusters are deployed in network environments that fit the following descriptions, you should consider enabling an overlay network for your cluster:
@@ -33,9 +33,9 @@ The Analytics team of a manufacturing company is deploying an Edge host to their
| Upon recovery, each Kubernetes component inside the Edge host requests an IP address from the DHCP server, and receives a different IP address than their original IP address before the outage happened. Since Kubernetes expects several components in the control plane to have stable IP addresses, the cluster becomes non-operational and the assembly line is unable to resume operations | Each Kubernetes component inside the Edge host has a virtual IP address in the overlay network. Upon recovery, their IP addresses in the overlay network remain the same despite their IP addresses changing in the underlying DHCP network. The Edge host is able to assume its workload and the assembly line resumes operations |

## Limitations
-- When adding multiple nodes to an existing cluster with overlay enabled, failure to add one node will block the addition of the other nodes.
+- When adding multiple Edge hosts to an existing cluster with overlay enabled, failure to add one host will block the addition of the other hosts.

-- When deleting an Edge host from a cluster with overlay enabled, ensure the node doesn't have the `palette-webhook` pod on it, or the node will be stuck in the deleting state. You can use the command `kubectl get pods --all-namespaces --output wide` to identify which node `palette-webhook` is on.
+- When a cluster has overlay enabled, do not delete an Edge host that has the `palette-webhook` pod on it, or the Edge host will be stuck in the deleting state. You can use the command `kubectl get pods --all-namespaces --output wide` to identify which node `palette-webhook` is on. If you need to remove such an Edge host, reach out to our support team by opening a ticket through our [support page](http://support.spectrocloud.io/).

## Prerequisites
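For the `palette-webhook` lookup described in the limitation above, a minimal sketch of the check. The command comes straight from the doc text; interpreting its output is the only addition, and the `NODE` column name assumes standard `kubectl` wide output:

```shell
# Find every palette-webhook pod across namespaces. The NODE column in
# the wide output identifies the Edge host that must not be deleted.
kubectl get pods --all-namespaces --output wide | grep palette-webhook
```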