
[BUG] k3s proxied downstream cluster does not work on v1.24.4+k3s1 but does work on v1.24.6+k3s1 #39284

Closed
slickwarren opened this issue Oct 12, 2022 · 4 comments
Assignees: slickwarren
Labels: area/k3s, area/provisioning-v2, kind/bug-qa, team/hostbusters
Milestone: v2.7.1

Comments

slickwarren (Contributor) commented Oct 12, 2022

Rancher Server Setup

  • Rancher version: v2.6-head (e139976)
  • Installation option (Docker install/Helm Chart): docker
    • If Helm Chart, Kubernetes Cluster and version (RKE1, RKE2, k3s, EKS, etc): n/a
  • Proxy/Cert Details: n/a

Information about the Cluster

  • Kubernetes version: v1.24.4+k3s1
  • Cluster Type (Local/Downstream): downstream
    • If downstream, what type of cluster? (Custom/Imported or specify provider for Hosted/Infrastructure Provider): custom, proxied cluster

User Information

  • What is the role of the user logged in? (Admin/Cluster Owner/Cluster Member/Project Owner/Project Member/Custom)
    • If custom, define the set of permissions: tested as a standard user, as a cluster owner, and as an admin

Describe the bug

k3s proxied downstream cluster does not work on v1.24.4+k3s1 but does work on v1.24.6+k3s1

To Reproduce

  • deploy a proxy for use by the downstream cluster
  • in Rancher, deploy a custom cluster, specify the HTTP_PROXY, HTTPS_PROXY, and NO_PROXY environment variables (see the sketch after this list), and use the latest k3s version available
  • connect a node that can reach the proxy from the first step to the custom cluster.
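
For step 2, Rancher passes the agent environment variables down to the downstream agents, and for step 3 they typically also need to be set when running the node registration command so the install script itself can reach Rancher through the proxy. A minimal sketch with a hypothetical proxy endpoint and placeholders in place of the real registration command copied from the Rancher UI:

# Hypothetical proxy endpoint; adjust NO_PROXY to your own network.
export HTTP_PROXY=http://proxy.example.com:3128
export HTTPS_PROXY=http://proxy.example.com:3128
export NO_PROXY=localhost,127.0.0.1,0.0.0.0,10.0.0.0/8,cattle-system.svc

# Placeholder registration command; the real one (server URL, token, role
# flags, CA checksum) comes from the cluster's registration page in the UI.
curl -fL https://<rancher-server>/system-agent-install.sh | sudo \
  HTTP_PROXY="$HTTP_PROXY" HTTPS_PROXY="$HTTPS_PROXY" NO_PROXY="$NO_PROXY" \
  sh -s - --server https://<rancher-server> --token <registration-token> \
  --etcd --controlplane --worker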

Result

The node makes an initial connection, but never reaches an Active state.

Expected Result

The node should be able to register with Rancher from behind the proxy.

Screenshots

Additional context

rancher-system-agent logs:

level=info msg="[Applyinator] Running command: sh [-c k3s etcd-snapshot list --etcd-s3=false 2>/dev/null]"
rancher-system-agent[2790]: time="2022-10-12T18:09:47Z" level=info msg="[82a072010f1716ef0f3c866533fcbef07010c7ce199cc7c51a6d16b7d20dce0a_0:stdout]: Name Location Size Created"
rancher-system-agent[2790]: time="2022-10-12T18:09:47Z" level=info msg="[Applyinator] Command sh [-c k3s etcd-snapshot list --etcd-s3=false 2>/dev/null] finished with err: <nil> and exit code: 0"
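
For reference, the system agent runs as a systemd unit on nodes installed through the registration script, so output like the above can be followed with journalctl (a sketch, assuming the default unit name used by the install script):

# Follow the Rancher system agent logs on the affected node.
journalctl -u rancher-system-agent -f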

k3s logs just keep spamming this message:

k3s[2936]: I1012 18:50:27.610159    2936 scope.go:110] "RemoveContainer" containerID="c9617aee67a6aae3a26ef124208a83bfa2dbc9029b0dbb0b570389a4eb9000e0"
k3s[2936]: I1012 18:50:36.190489    2936 scope.go:110] "RemoveContainer" containerID="c9617aee67a6aae3a26ef124208a83bfa2dbc9029b0dbb0b570389a4eb9000e0"
k3s[2936]: I1012 18:50:36.191032    2936 scope.go:110] "RemoveContainer" containerID="704d2ab8b106ef5fb27b04e75341bfcc47a258dbb15649986c51a00f4db90a22"
k3s[2936]: E1012 18:50:36.191476    2936 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-register\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cluster-register pod=cattle-cluster-agent-6f55b75c86-ppcq6_cattle-system(94520d8a-72e0-427f-b45b-b6842edeeebc)\"" pod="cattle-system/cattle-cluster-agent-6f55b75c86-ppcq6" podUID=94520d8a-72e0-427f-b45b-b6842edeeebc
k3s[2936]: I1012 18:50:50.609819    2936 scope.go:110] "RemoveContainer" containerID="704d2ab8b106ef5fb27b04e75341bfcc47a258dbb15649986c51a00f4db90a22"
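
The CrashLoopBackOff above points at the cluster-register container of the cattle-cluster-agent deployment; its own logs, rather than the kubelet's, usually show the underlying connection or proxy error. A sketch of how to pull them, assuming this is run on a k3s server node (agent-only nodes use the k3s-agent unit):

# Follow the k3s service logs themselves.
journalctl -u k3s -f

# Inspect the crashing cluster-register container named in the log above;
# --previous prints the output of the last failed run.
k3s kubectl -n cattle-system logs deployment/cattle-cluster-agent \
  -c cluster-register --previous
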
slickwarren added the kind/bug-qa, area/k3s, and area/provisioning-v2 labels on Oct 12, 2022
slickwarren added this to the v2.6.9 milestone on Oct 12, 2022
slickwarren self-assigned this on Oct 12, 2022
Sahota1225 added the release-note label on Oct 13, 2022
Sahota1225 (Contributor) commented:

Proxy is broken and a fix will be available in a later version.

Sahota1225 changed the milestone from v2.6.9 to v2.7.1 on Oct 13, 2022
zube bot added the [zube]: To Triage and team/hostbusters labels on Dec 21, 2022
Sahota1225 (Contributor) commented:

@slickwarren can you please test this with 1.24.9 and see if the issue still exists?

snasovich (Collaborator) commented:

@slickwarren, moving it to "To Test" to check the above. If it's not reproducible on 1.24.9 (the latest to be in 2.7.x), let's close it. We're not going to fix 1.24.4 retroactively.

snasovich added the [zube]: To Test label and removed the release-note label on Jan 5, 2023
zube bot removed the [zube]: To Triage label on Jan 5, 2023
slickwarren (Contributor, Author) commented:

Confirmed, this is working on v1.24.8+k3s1. Will close this issue.

zube bot removed the [zube]: Done label on Apr 7, 2023
Projects: none
Development: no branches or pull requests
Participants: 3