docs: add new known issue #3340
Conversation
✅ Deploy Preview for docs-spectrocloud ready!
Let's get back to it once the behavior is clear :)
@@ -16,6 +16,7 @@ The following table lists all known issues that are currently active and affecti

| Description | Workaround | Publish Date | Product Component |
| ----------- | ---------- | ------------ | ----------------- |
| When you use content bundles to provision a new cluster without using the local Harbor registry, it's possible for the images to be pulled from external networks instead of from the content bundle. If your Edge host has no connection to the internet, some pods may enter the `ImagePullBackOff` state at first, but eventually the pods will be created using images from the content bundle. | No workarounds | July 11, 2024 | Edge |
| When you use content bundles to provision a new cluster without using the local Harbor registry, it's possible for the images to be pulled from external networks instead of from the content bundle. If your Edge host has no connection to the internet, some pods may enter the `ImagePullBackOff` state at first, but eventually the pods will be created using images from the content bundle. | No workarounds | July 11, 2024 | Edge |
| When you use content bundles to provision a new cluster without using the local Harbor registry, it's possible for the images to be pulled from external networks instead of from the content bundle. If your Edge host has no connection to the internet, some pods may enter the `ImagePullBackOff` state at first, but eventually the pods will be created using images from the content bundle. | No workaround. | July 11, 2024 | Edge |
Added a period to "workaround" and removed the s.
After reading the issue description, I feel like there is a workaround, and I confirmed it with Venkatesh. I've added it to the table now.
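For anyone reproducing this on an airgapped Edge host, a minimal way to watch for the transient `ImagePullBackOff` state, assuming `kubectl` access to the cluster (the pod and namespace names are placeholders):

```shell
# List pods across all namespaces and filter for image pull failures.
# On an airgapped Edge host this state should clear once the images are
# sourced from the content bundle.
kubectl get pods --all-namespaces | grep -E 'ImagePullBackOff|ErrImagePull'

# Inspect a stuck pod's events to confirm where the image pull is failing.
kubectl describe pod <pod-name> --namespace <namespace>
```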
| Azure IaaS clusters are having issues with deployed load balancers and ingress deployments when using Kubernetes versions 1.29.0 and 1.29.4. Incoming connections time out as a result due to a lack of network path inside the cluster. Azure AKS clusters are not impacted. | Use a Kubernetes version lower than 1.29.0. | June 12, 2024 | Clusters |
| OIDC integration with Virtual Clusters is not functional. All other operations related to Virtual Clusters are operational. | No workaround is available. | June 11, 2024 | Virtual Clusters |
| The VerteX enterprise cluster is unable to complete backup operations. | No workaround is available. | June 6, 2024 | VerteX |
| Deploying self-hosted Palette or VerteX to a vSphere environment fails if vCenter has standalone hosts directly under a Datacenter. Persistent Volume (PV) provisioning fails due to an upstream issue with the vSphere Container Storage Interface (CSI) for all versions before v3.2.0. Palette and VerteX use the vSphere CSI version 3.1.2 internally. The issue may also occur in workload clusters deployed on vSphere using the same vSphere CSI for storage volume provisioning. | If you encounter the following error message when deploying self-hosted Palette or VerteX: `ProvisioningFailed failed to provision volume with StorageClass "spectro-storage-class". Error: failed to fetch hosts from entity ComputeResource:domain-xyz`, then use the following workaround. Remove standalone hosts directly under the Datacenter from vCenter and allow the volume provisioning to complete. After the volume is provisioned, you can add the standalone hosts back. You can also use a service account that does not have access to the standalone hosts as the user that deployed Palette. | May 21, 2024 | Self-Hosted |
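A quick way to surface this `ProvisioningFailed` error is sketched below, assuming `kubectl` access (the PVC name and namespace are placeholders):

```shell
# PVCs stuck in Pending indicate that volume provisioning has not completed.
kubectl get pvc --all-namespaces

# The Events section of the describe output contains the ProvisioningFailed
# message emitted by the vSphere CSI provisioner.
kubectl describe pvc <pvc-name> --namespace <namespace>
```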
🚫 [vale] reported by reviewdog 🐶
[Vale.Spelling] Did you really mean 'Datacenter'?
| K3s version 1.27.7 has been marked as _Deprecated_. This version has a known issue that causes clusters to crash. | Upgrade to a newer version of K3s to avoid the issue, such as versions 1.26.12, 1.28.5, and 1.27.11. You can learn more about the issue in the [K3s GitHub issue](https://github.com/k3s-io/k3s/issues/9047) page. | April 14, 2024 | Packs, Clusters |
| When deploying a multi-node AWS EKS cluster with the Container Network Interface (CNI) [Calico](../integrations/calico.md), the cluster deployment fails. | A workaround is to use the AWS VPC CNI in the interim while the issue is resolved. | April 14, 2024 | Packs, Clusters |
| If a Kubernetes cluster deployed onto VMware is deleted, and later re-created with the same name, the cluster creation process fails. The issue is caused by existing resources remaining inside the PCG, or the System PCG, that are not cleaned up during the cluster deletion process. | Refer to the [VMware Resources Remain After Cluster Deletion](../troubleshooting/pcg.md#scenario---vmware-resources-remain-after-cluster-deletion) troubleshooting guide for resolution steps. | April 14, 2024 | Clusters |
| In a VMware environment, self-hosted Palette instances do not receive a unique cluster ID when deployed, which can cause issues during a node repave event, such as a Kubernetes version upgrade. Specifically, Persistent Volumes (PVs) and Persistent Volume Claims (PVCs) will experience start problems due to the lack of a unique cluster ID. | To resolve this issue, refer to the [Volume Attachment Errors Volume in VMware Environment](../troubleshooting/palette-upgrade.md#volume-attachment-errors-volume-in-vmware-environment) troubleshooting guide. | April 14, 2024 | Self-Hosted |
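For the VMware cluster ID row above, a rough way to spot the PV and PVC start problems during a repave is to check volume attachments and warning events; a sketch, assuming `kubectl` access:

```shell
# VolumeAttachment objects show whether each PV is attached to the expected node.
kubectl get volumeattachments

# Warning events surface attach/detach and mount failures cluster-wide.
kubectl get events --all-namespaces --field-selector type=Warning
```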
🚫 [vale] reported by reviewdog 🐶
[Vale.Spelling] Did you really mean 'PVs'?
🚫 [vale] reported by reviewdog 🐶
[Vale.Spelling] Did you really mean 'PVCs'?
| Day-2 operations related to infrastructure changes, such as modifying the node size and count, when using MicroK8s are not taking effect. | No workaround is available. | April 14, 2024 | Packs, Clusters |
| If a cluster that uses the Rook-Ceph pack experiences network issues, it's possible for the file mount to become and remain unavailable even after the network is restored. | This is a known issue disclosed in the [Rook GitHub repository](https://github.com/rook/rook/issues/13818). To resolve this issue, refer to the [Rook-Ceph](../integrations/rook-ceph.md#file-mount-becomes-unavailable-after-cluster-experiences-network-issues) pack documentation. | April 14, 2024 | Packs, Edge |
| Edge clusters on Edge hosts with ARM64 processors may experience instability issues that cause cluster failures. | ARM64 support is limited to a specific set of Edge devices. Currently, Nvidia Jetson devices are supported. | April 14, 2024 | Edge |
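Related to the ARM64 row, one way to confirm the architecture your Edge nodes report, assuming `kubectl` access:

```shell
# Print each node with its CPU architecture label; ARM64 Edge devices
# (currently Nvidia Jetson) report arm64 in the label column.
kubectl get nodes -L kubernetes.io/arch
```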
🚫 [vale] reported by reviewdog 🐶
[Vale.Spelling] Did you really mean 'Jetson'?
* docs: add new known issue
* docs: add additonal note
* docs: fix typo
* docs: add clarification
* docs: add workaround

Co-authored-by: Lenny Chen <lenny.chen@spectrocloud.com>

(cherry picked from commit d7723cc)
💔 Some backports could not be created
Note: Successful backport PRs will be merged automatically after passing CI.

Manual backport

To create the backport manually, run:
Questions? Please refer to the Backport tool documentation and see the GitHub Action logs for details.
Describe the Change
This PR adds a new known issue regarding content bundles.
Changed Pages
💻 Add Preview URL for Page
Jira Tickets
🎫 PE-4691
Backports
Can this PR be backported?