Fails to start on Kubernetes 1.27 #307

Closed
langesven opened this issue Oct 11, 2023 · 6 comments · Fixed by #308
Labels
bug Something isn't working

Comments

@langesven

Brief summary

Hi,

We've recently been planning to give the k6-operator a go, so I set it up yesterday in our AWS EKS cluster (v1.27.5-eks-43840fb), but the manager fails to start with the output below:

kube-rbac-proxy I1011 06:18:25.661351       1 main.go:190] Valid token audiences: 
kube-rbac-proxy I1011 06:18:25.661433       1 main.go:262] Generating self signed cert as no cert is provided
kube-rbac-proxy I1011 06:18:26.427467       1 main.go:311] Starting TCP socket on 0.0.0.0:8443
kube-rbac-proxy I1011 06:18:26.427757       1 main.go:318] Listening securely on 0.0.0.0:8443
manager panic: runtime error: invalid memory address or nil pointer dereference
manager [signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x12cd7f0]
manager 
manager goroutine 1 [running]:
manager k8s.io/client-go/discovery.convertAPIResource(...)
manager     /go/pkg/mod/k8s.io/client-go@v0.26.1/discovery/aggregated_discovery.go:88
manager k8s.io/client-go/discovery.convertAPIGroup({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc000870978, 0x15}, {0x0, 0x0}, {0x0, 0x0}, ...}, ...})
manager     /go/pkg/mod/k8s.io/client-go@v0.26.1/discovery/aggregated_discovery.go:69 +0x5f0
manager k8s.io/client-go/discovery.SplitGroupsAndResources({{{0xc000870018, 0x15}, {0xc0000444e0, 0x1b}}, {{0x0, 0x0}, {0x0, 0x0}, {0x0, 0x0}, ...}, ...})
manager     /go/pkg/mod/k8s.io/client-go@v0.26.1/discovery/aggregated_discovery.go:35 +0x2f8
manager k8s.io/client-go/discovery.(*DiscoveryClient).downloadAPIs(0x1832030?)
manager     /go/pkg/mod/k8s.io/client-go@v0.26.1/discovery/discovery_client.go:310 +0x478
manager k8s.io/client-go/discovery.(*DiscoveryClient).GroupsAndMaybeResources(0xc00051a1e0?)
manager     /go/pkg/mod/k8s.io/client-go@v0.26.1/discovery/discovery_client.go:198 +0x5c
manager k8s.io/client-go/discovery.ServerGroupsAndResources({0x1a9f4d0, 0xc000401aa0})
manager     /go/pkg/mod/k8s.io/client-go@v0.26.1/discovery/discovery_client.go:392 +0x59
manager k8s.io/client-go/discovery.(*DiscoveryClient).ServerGroupsAndResources.func1()
manager     /go/pkg/mod/k8s.io/client-go@v0.26.1/discovery/discovery_client.go:356 +0x25
manager k8s.io/client-go/discovery.withRetries(0x2, 0xc000129048)
manager     /go/pkg/mod/k8s.io/client-go@v0.26.1/discovery/discovery_client.go:621 +0x72
manager k8s.io/client-go/discovery.(*DiscoveryClient).ServerGroupsAndResources(0x0?)
manager     /go/pkg/mod/k8s.io/client-go@v0.26.1/discovery/discovery_client.go:355 +0x3a
manager k8s.io/client-go/restmapper.GetAPIGroupResources({0x1a9f4d0?, 0xc000401aa0?})
manager     /go/pkg/mod/k8s.io/client-go@v0.26.1/restmapper/discovery.go:148 +0x42
manager sigs.k8s.io/controller-runtime/pkg/client/apiutil.NewDynamicRESTMapper.func1()
manager     /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.14.6/pkg/client/apiutil/dynamicrestmapper.go:94 +0x25
manager sigs.k8s.io/controller-runtime/pkg/client/apiutil.(*dynamicRESTMapper).setStaticMapper(...)
manager     /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.14.6/pkg/client/apiutil/dynamicrestmapper.go:130
manager sigs.k8s.io/controller-runtime/pkg/client/apiutil.NewDynamicRESTMapper(0xc0000ceab0?, {0x0, 0x0, 0x9fe144030cd13d01?})
manager     /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.14.6/pkg/client/apiutil/dynamicrestmapper.go:110 +0x174
manager sigs.k8s.io/controller-runtime/pkg/cluster.setOptionsDefaults.func1(0xc000796310?)
manager     /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.14.6/pkg/cluster/cluster.go:217 +0x25
manager sigs.k8s.io/controller-runtime/pkg/cluster.New(0xc00024b200, {0xc0001299e8, 0x1, 0x0?})
manager     /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.14.6/pkg/cluster/cluster.go:159 +0x18d
manager sigs.k8s.io/controller-runtime/pkg/manager.New(_, {0xc000796310, 0x0, 0x0, {{0x1a9d6d0, 0xc00068d380}, 0x0}, 0x1, {0x183d9e3, 0x10}, ...})
manager     /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.14.6/pkg/manager/manager.go:351 +0xf9
manager main.main()
manager     /workspace/main.go:64 +0x335
Stream closed EOF for k6-operator-system/k6-operator-controller-manager-6b7c55979d-rhngb (manager)

This is something I've seen before when upgrading from K8s 1.26 to K8s 1.27; for example, the k9s binary showed the same behaviour until it was updated.

I am assuming this is related to the version of the client-go library the operator is built with.
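
For anyone who wants to poke at this outside the operator, here's a minimal sketch that makes the same discovery call the manager makes at startup. This is just an illustration, assuming a kubeconfig at the default location and client-go pinned at v0.26.1 (as in the trace above):

    package main

    import (
        "fmt"

        "k8s.io/client-go/discovery"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Load the default kubeconfig (~/.kube/config).
        config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }

        dc, err := discovery.NewDiscoveryClientForConfig(config)
        if err != nil {
            panic(err)
        }

        // Same call the manager reaches via controller-runtime's REST mapper
        // (see the GetAPIGroupResources frame in the trace). With client-go
        // v0.26.1 against a 1.27 API server, this is where the panic can
        // occur; with newer client-go it should return normally.
        groups, resources, err := discovery.ServerGroupsAndResources(dc)
        fmt.Println("groups:", len(groups), "resources:", len(resources), "err:", err)
    }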

k6-operator version or image

ghcr.io/grafana/k6-operator:controller-v0.0.11rc3

K6 YAML

Stock bundle.yaml from https://github.com/grafana/k6-operator#bundle-deployment

Other environment details (if applicable)

No response

Steps to reproduce the problem

  • have Kubernetes 1.27
  • deploy the operator

Expected behaviour

  • operator starts and I can create its custom resources to trigger it

Actual behaviour

  • operator does not start and goes into CrashLoopBackOff due to the error pasted above
@langesven added the bug label on Oct 11, 2023
@yorugac
Collaborator

yorugac commented Oct 11, 2023

Hi @langesven 👋 Thanks for the report, and good timing, actually: I'm just preparing a PR to update deps because of the CI issue, so I might as well update client-go.

I assume you meant this one in k9s: derailed/k9s#2055 and derailed/k9s#2075 - thanks for the pointer! It looks like 0.26.4 should work, as well as 0.27.0+ 🤔
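
In go.mod terms, the bump would look roughly like this (versions here are purely illustrative; the PR will pin the exact ones):

    require (
        // controller-runtime v0.15.x is built against the v0.27.x k8s libraries
        k8s.io/api v0.27.2
        k8s.io/apimachinery v0.27.2
        k8s.io/client-go v0.27.2
        sigs.k8s.io/controller-runtime v0.15.0
    )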

@yorugac
Collaborator

yorugac commented Oct 11, 2023

For the record: sadly, this case is not reproducible with a kind cluster 😞
The CI job here is successful: https://github.com/grafana/k6-operator/actions/runs/6487684065/job/17618588971

@langesven
Author

Yes, I think it's not something that really happens on a vanilla cluster; it actually needs a bit of garbage floating around 😅
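
I suspect the "garbage" in our case is a stale or broken aggregated API. A rough sketch to flag such APIService objects via the kube-aggregator client (illustrative only, not output from this cluster):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/tools/clientcmd"
        aggregator "k8s.io/kube-aggregator/pkg/client/clientset_generated/clientset"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }

        client, err := aggregator.NewForConfig(config)
        if err != nil {
            panic(err)
        }

        // APIService objects whose Available condition is not True are the
        // usual suspects for tripping the old aggregated discovery parsing.
        list, err := client.ApiregistrationV1().APIServices().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }

        for _, svc := range list.Items {
            for _, cond := range svc.Status.Conditions {
                if cond.Type == "Available" && cond.Status != "True" {
                    fmt.Println(svc.Name, "not available:", cond.Message)
                }
            }
        }
    }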

If you have a preliminary image, I'll gladly give it a go in the cluster where I encountered this issue.

@yorugac
Collaborator

yorugac commented Oct 13, 2023

@langesven, the image with the dependency updates can be pulled as:
ghcr.io/grafana/k6-operator:30577947ac70e51717e5e1ca4b1c84c84dceb11d

Preliminary testing suggests it works, though I haven't covered all the scenarios yet. If you could try it with 1.27, that'd be great!

@langesven
Author

Thanks! This image works on my cluster; I was able to run the operator, start a test, etc. 😄 I'm probably running very basic things right now, but compared to it not starting at all before, I consider this a success 😁

@yorugac
Collaborator

yorugac commented Oct 17, 2023

🎉 Great to hear! Many thanks for checking, @langesven 😄
I'll test the update a bit more and merge it in, hopefully this week.
