
docs: add several known issues #1974

Merged · 6 commits · Jan 5, 2024
Changes from 1 commit
9 changes: 7 additions & 2 deletions docs/docs-content/clusters/edge/networking/vxlan-overlay.md
@@ -32,6 +32,11 @@ The Analytics team of a manufacturing company is deploying an Edge host to their
|---------------------|-----------------------|
| Upon recovery, each Kubernetes component inside the Edge host requests an IP address from the DHCP server and receives a different IP address than it had before the outage. Since Kubernetes expects several control plane components to have stable IP addresses, the cluster becomes non-operational and the assembly line is unable to resume operations. | Each Kubernetes component inside the Edge host has a virtual IP address in the overlay network. Upon recovery, the components' overlay IP addresses remain the same even though their addresses in the underlying DHCP network change. The Edge host is able to resume its workload and the assembly line resumes operations. |

## Limitations
- When you add multiple nodes to an existing cluster with overlay enabled, a failure to add one node blocks the addition of the remaining nodes.

- When you delete an Edge host from a cluster with overlay enabled, ensure the node does not have the `palette-webhook` pod on it, or the node will be stuck in the deleting state. You can use the command `kubectl get pods --all-namespaces --output wide` to identify which node `palette-webhook` is on.
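The lookup above can be scripted. The sketch below runs the same filter against simulated `kubectl get pods --all-namespaces --output wide` output so the shape of the result is visible; the pod and node names are hypothetical, and on a real cluster you would pipe the live command output instead:

```shell
# Simulated `kubectl get pods --all-namespaces --output wide` output.
# The pod and node names are hypothetical placeholders.
pods="NAMESPACE   NAME                    READY   STATUS    NODE
palette     palette-webhook-5f7d9   1/1     Running   edge-node-2
harbor      harbor-database-0       1/1     Running   edge-node-1"

# Print the node (last column) hosting the palette-webhook pod.
node=$(printf '%s\n' "$pods" | awk '/palette-webhook/ {print $NF}')
echo "$node"   # prints edge-node-2
```

Once you know the node, you can cordon and drain it or move the pod before initiating the delete.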

## Prerequisites

* At least one Edge host registered with your Palette account.
@@ -84,7 +89,7 @@ You will not be able to change the network overlay configurations after the clus
<Tabs>
<TabItem value="calico" label="Calico">

In the Calico pack YAML file default template, uncomment `FELIX_IPV6SUPPORT` and set its value to `scbr-100` and uncomment `manifests.calico.env.calicoNode.IP_AUTODETECTION_METHOD` and set its value to `interface=scbr-100`.
In the Calico pack YAML file default template, uncomment `FELIX_MTUIFACEPATTERN` and set its value to `scbr-100`. Then, uncomment `manifests.calico.env.calicoNode.IP_AUTODETECTION_METHOD` and set its value to `interface=scbr-100`.
```yaml {8,11}
manifests:
calico:
@@ -93,7 +98,7 @@
# Additional env variables for calico-node
calicoNode:
#IPV6: "autodetect"
FELIX_IPV6SUPPORT: "scbr-100"
FELIX_MTUIFACEPATTERN: "scbr-100"
#CALICO_IPV6POOL_NAT_OUTGOING: "true"
#CALICO_IPV4POOL_CIDR: "192.168.0.0/16"
IP_AUTODETECTION_METHOD: "interface=scbr-100"
```
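Calico treats the value after `interface=` in `IP_AUTODETECTION_METHOD` as a regular expression matched against the host's interface names. As a quick sanity check, you can test which names a pattern selects; the sketch below uses a simulated interface list (on a real Edge host you would read the names from `ip -o link` instead):

```shell
# Simulated interface names; on a host, derive these from `ip -o link`.
ifaces="lo
eth0
scbr-100
cali0a1b2c3d4e5"

# An anchored pattern avoids accidentally matching similarly named interfaces.
match=$(printf '%s\n' "$ifaces" | grep -E '^scbr-100$')
echo "$match"   # prints scbr-100
```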
@@ -20,6 +20,10 @@ Select the workflow that best fits your needs.

Use the following steps to create a new host cluster so that you can add Edge hosts to the node pools.

### Limitations

- In a multi-node cluster with PXK-E as its Kubernetes layer, you cannot specify a custom Network Interface Card (NIC). When you add an Edge host to such a cluster, leave the NIC field at its default value.

### Prerequisites

- A registered Edge host.
@@ -157,6 +161,10 @@ You can also use the command `kubectl get nodes` to review the status of all nod

You can add Edge hosts to the node pool of an existing host cluster. Use the following steps to add the Edge host to the node pool.

### Limitations

- In a multi-node cluster with PXK-E as its Kubernetes layer, you cannot specify a custom Network Interface Card (NIC). When you add an Edge host to such a cluster, leave the NIC field at its default value.

### Prerequisites

- A registered Edge host.
27 changes: 27 additions & 0 deletions docs/docs-content/integrations/harbor-edge.md
@@ -167,9 +167,36 @@ If you didn't provide a certificate or are using a self-signed certificate, Dock
</TabItem>
</Tabs>

### Known Issues

The following known issues exist in the Harbor 1.0.0 release.

- The Harbor DB pod might fail to start due to file permission issues. This is a [known issue](https://github.com/goharbor/harbor-helm/issues/1676) in the Harbor GitHub repository. Refer to the [Troubleshooting section](#scenario---harbor-db-pod-fails-to-start) for a workaround.

</TabItem>
</Tabs>

## Troubleshooting

### Scenario - Harbor DB Pod Fails to Start

When you start a cluster with the Harbor pack, the **harbor-database** pod might fail to start and become stuck in the **CrashLoopBackOff** state. This may be caused by a known issue with the Harbor pack related to file permissions. As a workaround, delete the pod; a replacement pod is created automatically.

#### Debug Steps

1. Issue the following command to identify the pods with names that start with `harbor-database`.

```shell
kubectl get pods --namespace harbor -o wide
```

2. Delete the pod you identified in the previous step. Replace `POD_NAME` with the name of the pod. If there are multiple pods, run the command for each pod.

```shell
kubectl delete pod POD_NAME --namespace harbor
```
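The two steps can be combined into a single pipeline. The sketch below shows the name-filtering half against simulated `kubectl get pods --namespace harbor --no-headers` output (pod names other than `harbor-database-0` are hypothetical); on a real cluster, you would pipe the selected names to `xargs kubectl delete pod --namespace harbor`:

```shell
# Simulated `kubectl get pods --namespace harbor --no-headers` output.
pods="harbor-core-6c9f8       1/1   Running
harbor-database-0       0/1   CrashLoopBackOff
harbor-registry-7b4d2   1/1   Running"

# Select every pod whose name starts with harbor-database.
targets=$(printf '%s\n' "$pods" | awk '/^harbor-database/ {print $1}')
echo "$targets"   # prints harbor-database-0
```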


## Terraform

You can reference the Harbor Edge-Native Config pack in Terraform with a data resource.
Expand Down