
[SURE-3752] [EKS] Allow the managed node group warning to be cleared #301

Closed
2 of 3 tasks
kkaempf opened this issue Dec 6, 2023 · 5 comments
Labels: JIRA, Must shout, kind/bug, kind/usability


kkaempf commented Dec 6, 2023

SURE-3752

Business case

Using self-managed node groups in a registered EKS cluster is at times preferred or desired; the warning adds an annoyance to the Rancher experience.

Request description:

Allow the warning 'Cluster must have at least one managed nodegroup' to be cleared, or make it informational only.

Actual behavior:

Red warning is always present in the UI

Expected behavior:

When the cluster is functioning as usual, with the cluster-agent deployed and nodes registered, the warning is unnecessary.

Cluster health could also be confirmed via the EKS API endpoint or by detecting a cluster-agent tunnel connection. If the warning could be cleared, users would have a way to remove it.
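For illustration, here is a minimal sketch of the kind of EKS API health check described above, using the AWS SDK for Go v2. The cluster name is a placeholder and the downgrade-to-informational logic is an assumption for this example, not the operator's actual code:

```go
// Sketch: confirming EKS cluster health via the EKS API, as the request
// suggests. The cluster name and AWS credentials/region are assumptions.
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/eks"
	"github.com/aws/aws-sdk-go-v2/service/eks/types"
)

func main() {
	ctx := context.Background()

	cfg, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		log.Fatalf("loading AWS config: %v", err)
	}
	client := eks.NewFromConfig(cfg)

	// "moon-imported" is a placeholder cluster name.
	out, err := client.DescribeCluster(ctx, &eks.DescribeClusterInput{
		Name: aws.String("moon-imported"),
	})
	if err != nil {
		log.Fatalf("describing cluster: %v", err)
	}

	// If the control plane reports ACTIVE, the warning arguably carries no
	// actionable information and could be downgraded to informational.
	if out.Cluster.Status == types.ClusterStatusActive {
		fmt.Println("control plane is ACTIVE; warning could be informational")
	} else {
		fmt.Printf("cluster status: %s\n", out.Cluster.Status)
	}
}
```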

Workaround:

Is a workaround available and implemented? No (kind of)
What is the workaround: Add a managed node group (see the sketch below)
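For completeness, the workaround can also be applied programmatically. A minimal sketch with the AWS SDK for Go v2 follows; every identifier (cluster, node group, role ARN, subnets) is a placeholder, and the console or eksctl achieves the same result:

```go
// Sketch of the workaround: adding a managed node group so the warning
// clears. All names below are placeholders; adjust to your environment.
package main

import (
	"context"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/eks"
	"github.com/aws/aws-sdk-go-v2/service/eks/types"
)

func main() {
	ctx := context.Background()

	cfg, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		log.Fatalf("loading AWS config: %v", err)
	}
	client := eks.NewFromConfig(cfg)

	_, err = client.CreateNodegroup(ctx, &eks.CreateNodegroupInput{
		ClusterName:   aws.String("moon-imported"),
		NodegroupName: aws.String("default-managed"),
		NodeRole:      aws.String("arn:aws:iam::111122223333:role/eksNodeRole"),
		Subnets:       []string{"subnet-0abc", "subnet-0def"},
		// One small node is enough to satisfy the managed-nodegroup check.
		ScalingConfig: &types.NodegroupScalingConfig{
			MinSize:     aws.Int32(1),
			MaxSize:     aws.Int32(1),
			DesiredSize: aws.Int32(1),
		},
	})
	if err != nil {
		log.Fatalf("creating managed node group: %v", err)
	}
	log.Println("managed node group requested; warning should clear once active")
}
```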

PRs


kkaempf commented Dec 6, 2023

see also rancher/dashboard#9618

@salasberryfin salasberryfin self-assigned this Dec 13, 2023
@salasberryfin salasberryfin moved this from Backlog to In Progress (5 max) in CAPI & Hosted Kubernetes providers (EKS/AKS/GKE) Dec 13, 2023
@salasberryfin salasberryfin moved this from In Progress (5 max) to Blocked in CAPI & Hosted Kubernetes providers (EKS/AKS/GKE) Dec 14, 2023
@salasberryfin salasberryfin moved this from Blocked to In Progress (5 max) in CAPI & Hosted Kubernetes providers (EKS/AKS/GKE) Dec 15, 2023
@salasberryfin
Contributor

@mjura and I investigated this issue and found some relevant details:

  • We validated that, if the EKS cluster uses self-managed node groups, Rancher is able to deploy the agent to the existing instances and the cluster appears as Active in the dashboard. When the number of instances in the self-managed node group is set to 0, or the instances are not properly associated with the cluster, the agent won't be deployed and the cluster will remain Waiting. The screenshot in the original issue shows that moon-imported has 0 machines available, which is most likely the root cause of the issue.
  • Even though the agent can be deployed when there are no managed node groups, the cluster specification is designed around this type of compute for EKS clusters, so behavior may be unstable in specific situations (see the sketch after this list).
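For context, the sketch below shows why the warning fires for such clusters: self-managed node groups are plain EC2 Auto Scaling groups, so EKS's managed-nodegroup listing comes back empty. The cluster name is a placeholder:

```go
// Sketch: a cluster backed only by self-managed node groups returns an
// empty managed-nodegroup list, which is the condition the warning
// reacts to. "moon-imported" is a placeholder cluster name.
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/eks"
)

func main() {
	ctx := context.Background()

	cfg, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		log.Fatalf("loading AWS config: %v", err)
	}
	client := eks.NewFromConfig(cfg)

	out, err := client.ListNodegroups(ctx, &eks.ListNodegroupsInput{
		ClusterName: aws.String("moon-imported"),
	})
	if err != nil {
		log.Fatalf("listing managed node groups: %v", err)
	}

	if len(out.Nodegroups) == 0 {
		fmt.Println("no managed node groups found (self-managed only)")
	} else {
		fmt.Println("managed node groups:", out.Nodegroups)
	}
}
```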

Furthermore, the design docs state that only using managed node groups is supported in EKS. You can read about this decision here.

We consider this sufficient to close this issue.


mjura commented Dec 27, 2023

This will be fixed in rancher/rancher#43881

mjura added a commit to mjura/rancher that referenced this issue Dec 27, 2023
mjura added a commit to mjura/rancher that referenced this issue Dec 27, 2023
mjura added a commit to mjura/rancher-docs that referenced this issue Dec 28, 2023
Issue: rancher/eks-operator#301

Update eks cluster configuration with section about:
- Launching self-managed Amazon Linux nodes
- IAM roles for service accounts

mjura commented Dec 28, 2023

Documentation part rancher/rancher-docs#1048

@mjura mjura reopened this Dec 28, 2023
@mjura mjura self-assigned this Dec 28, 2023
@mjura mjura moved this from Done to PR to be reviewed in CAPI & Hosted Kubernetes providers (EKS/AKS/GKE) Dec 28, 2023
@mjura mjura moved this from PR to be reviewed to To Test in CAPI & Hosted Kubernetes providers (EKS/AKS/GKE) Jan 17, 2024
mjura added a commit to mjura/rancher-docs that referenced this issue Jan 17, 2024
Issue: rancher/eks-operator#301

Update eks cluster configuration with section about:
- Launching self-managed Amazon Linux nodes
- IAM roles for service accounts
@cpinjani cpinjani self-assigned this Jan 17, 2024
@cpinjani
Contributor

Verified as fixed on builds:

v2.8-d0a177f1745fe14a0b3de7483aa3ef895d1ab967-head
v2.9-b57600cdc9b427f30f6c964ad46c535cce8f5c86-head

Able to import EKS cluster with self-managed nodes - ✅
Error "Cluster must have at least one managed nodegroup" is not displayed and cluster can be explored - ✅


martyav added a commit to rancher/rancher-docs that referenced this issue Mar 7, 2024
* Update eks cluster configuration

Issue: rancher/eks-operator#301

Update eks cluster configuration with section about:
- Launching self-managed Amazon Linux nodes
- IAM roles for service accounts

* Apply suggestions from code review

* Apply suggestions from code review

* fixed bad link

* versioning

---------

Co-authored-by: Marty Hernandez Avedon <marty.avedon@suse.com>