diff --git a/docs/docs-content/clusters/edge/networking/vxlan-overlay.md b/docs/docs-content/clusters/edge/networking/vxlan-overlay.md
index c1eee0406f..f38b8e262b 100644
--- a/docs/docs-content/clusters/edge/networking/vxlan-overlay.md
+++ b/docs/docs-content/clusters/edge/networking/vxlan-overlay.md
@@ -11,11 +11,11 @@ Edge clusters are often deployed in locations where network environments are not
Palette allows you to create a virtual overlay network on top of the physical network and manages the virtual IP addresses of all cluster components. Inside the cluster, the components use their virtual IP addresses to communicate with each other instead of the underlying IP addresses, which could change due to external factors. If the cluster experiences an outage with the overlay network enabled, components inside the cluster retain their virtual IP addresses in the overlay network even if their IP addresses in the underlying physical network have changed, protecting the cluster from disruption.
-
-
![VxLAN Overlay Architecture](/clusters_edge_site-installation_vxlan-overlay_architecture.png)
-
+
+:::caution
+
+Enabling an overlay network on a cluster is a Tech Preview feature and is subject to change. Do not use this feature in production workloads.
+
+:::
+
## When Should You Consider Enabling Overlay Network?
If your Edge clusters are deployed in network environments that fit the following descriptions, you should consider enabling an overlay network for your cluster:
@@ -32,6 +32,11 @@ The Analytics team of a manufacturing company is deploying an Edge host to their
|---------------------|-----------------------|
| Upon recovery, each Kubernetes component inside the Edge host requests an IP address from the DHCP server and receives a different IP address than it had before the outage. Since Kubernetes expects several control plane components to have stable IP addresses, the cluster becomes non-operational and the assembly line is unable to resume operations | Each Kubernetes component inside the Edge host has a virtual IP address in the overlay network. Upon recovery, their IP addresses in the overlay network remain the same despite their IP addresses changing in the underlying DHCP network. The Edge host is able to resume its workload and the assembly line resumes operations |
+## Limitations
+- When adding multiple Edge hosts to an existing cluster with the overlay network enabled, a failure to add one host blocks the addition of the remaining hosts.
+
+- When a cluster has the overlay network enabled, you cannot delete an Edge host that hosts the `palette-webhook` pod, or the Edge host will be stuck in the deleting state. You can use the command `kubectl get pods --all-namespaces --output wide` to identify which node `palette-webhook` is on, as shown in the example after this list. If you need to remove an Edge host that has the `palette-webhook` pod on it, reach out to our support team by opening a ticket through our [support page](http://support.spectrocloud.io/).
+
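+For example, the following command filters the pod listing to locate `palette-webhook`. This is a minimal sketch that assumes a Unix-like shell with `grep` available; the `NODE` column of the output shows the Edge host the pod is scheduled on.
+
+```shell
+kubectl get pods --all-namespaces --output wide | grep palette-webhook
+```
+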
## Prerequisites
* At least one Edge host registered with your Palette account.
@@ -84,7 +89,7 @@ You will not be able to change the network overlay configurations after the clus
- In the Calico pack YAML file default template, uncomment `FELIX_IPV6SUPPORT` and set its value to `scbr-100` and uncomment `manifests.calico.env.calicoNode.IP_AUTODETECTION_METHOD` and set its value to `interface=scbr-100`.
+ In the Calico pack YAML file default template, uncomment `FELIX_MTUIFACEPATTERN` and set its value to `scbr-100`. Then uncomment `manifests.calico.env.calicoNode.IP_AUTODETECTION_METHOD` and set its value to `interface=scbr-100`.
```yaml {8,11}
manifests:
calico:
@@ -93,7 +98,7 @@ You will not be able to change the network overlay configurations after the clus
# Additional env variables for calico-node
calicoNode:
#IPV6: "autodetect"
- FELIX_IPV6SUPPORT: "scbr-100"
+ FELIX_MTUIFACEPATTERN: "scbr-100"
#CALICO_IPV6POOL_NAT_OUTGOING: "true"
#CALICO_IPV4POOL_CIDR: "192.168.0.0/16"
IP_AUTODETECTION_METHOD: "interface=scbr-100"
diff --git a/docs/docs-content/clusters/edge/site-deployment/site-installation/cluster-deployment.md b/docs/docs-content/clusters/edge/site-deployment/site-installation/cluster-deployment.md
index 02a5160116..defb4f61c1 100644
--- a/docs/docs-content/clusters/edge/site-deployment/site-installation/cluster-deployment.md
+++ b/docs/docs-content/clusters/edge/site-deployment/site-installation/cluster-deployment.md
@@ -20,6 +20,10 @@ Select the workflow that best fits your needs.
Use the following steps to create a new host cluster so that you can add Edge hosts to the node pools.
+### Limitations
+
+- In a multi-node cluster with PXK-E as its Kubernetes layer, you cannot specify a custom Network Interface Card (NIC). When you add an Edge host to such a cluster, leave the NIC field at its default value.
+
### Prerequisites
- A registered Edge host.
@@ -157,6 +161,10 @@ You can also use the command `kubectl get nodes` to review the status of all nod
You can add Edge hosts to the node pool of an existing host cluster. Use the following steps to add the Edge host to the node pool.
+### Limitations
+
+- In a multi-node cluster with [PXK-E](../../../../integrations/kubernetes-edge.md) as its Kubernetes layer, you cannot specify a custom Network Interface Card (NIC). When you add an Edge host to such a cluster, leave the NIC field at its default value.
+
### Prerequisites
- A registered Edge host.
diff --git a/docs/docs-content/integrations/harbor-edge.md b/docs/docs-content/integrations/harbor-edge.md
index 3fa0f049a4..8184fb5226 100644
--- a/docs/docs-content/integrations/harbor-edge.md
+++ b/docs/docs-content/integrations/harbor-edge.md
@@ -167,9 +167,36 @@ If you didn't provide a certificate or are using a self-signed certificate, Dock
+### Known Issues
+
+The following known issues exist in the Harbor 1.0.0 release.
+
+- The Harbor database pod might fail to start due to file permission issues. This is a [known issue](https://github.com/goharbor/harbor-helm/issues/1676) in the Harbor GitHub repository. Refer to the [Troubleshooting section](#scenario---harbor-db-pod-fails-to-start) for a workaround.
+
+## Troubleshooting
+
+### Scenario - Harbor DB Pod Fails to Start
+
+When you start a cluster with the Harbor pack, the **harbor-database** pod might fail to start and get stuck in the **CrashLoopBackOff** state. This might be due to a known issue with the Harbor pack related to file permissions. The workaround is to delete the pod; a new pod is created automatically in its place.
+
+#### Debug Steps
+
+1. Issue the following command to identify the pods with names that start with `harbor-database`.
+
+    ```shell
+    kubectl get pods --namespace harbor --output wide
+    ```
+
+2. Delete the pod you identified in the previous step. Replace `POD_NAME` with the name of the pod. If there are multiple pods, repeat the command for each pod.
+
+    ```shell
+    kubectl delete pod POD_NAME --namespace harbor
+    ```
+
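+3. Verify that a new **harbor-database** pod has been created and is in the **Running** state. As a quick check, you can list the pods in the `harbor` namespace again, for example:
+
+    ```shell
+    kubectl get pods --namespace harbor --output wide
+    ```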
+
## Terraform
You can reference the Harbor Edge-Native Config pack in Terraform with a data resource.