
Faulty Initialisation of Cluster API using clusterctl #9101

Closed
d3bt3ch opened this issue Aug 1, 2023 · 4 comments · Fixed by #9107
Labels
kind/bug Categorizes issue or PR as related to a bug. needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one.

Comments


d3bt3ch commented Aug 1, 2023

While initialising Cluster API using clusterctl I get the following message, although the initialisation still completes. This was not happening with clusterctl v1.4.4.

[controller-runtime] log.SetLogger(...) was never called, logs will not be displayed:
goroutine 1 [running]:
runtime/debug.Stack()
        runtime/debug/stack.go:24 +0x64
sigs.k8s.io/controller-runtime/pkg/log.eventuallyFulfillRoot()
        sigs.k8s.io/controller-runtime@v0.15.0/pkg/log/log.go:59 +0xf0
sigs.k8s.io/controller-runtime/pkg/log.(*delegatingLogSink).WithName(0x14000400000, {0x104285f61, 0x14})
        sigs.k8s.io/controller-runtime@v0.15.0/pkg/log/deleg.go:147 +0x38
github.com/go-logr/logr.Logger.WithName({{0x104e3e0c8, 0x14000400000}, 0x0}, {0x104285f61?, 0x14000a02b78?})
        github.com/go-logr/logr@v1.2.4/logr.go:336 +0x48
sigs.k8s.io/controller-runtime/pkg/client.newClient(0x140025a2b40, {0x0, 0x140007fc150, {0x0, 0x0}, 0x0, {0x0, 0x0}, 0x0})
        sigs.k8s.io/controller-runtime@v0.15.0/pkg/client/client.go:115 +0x7c
sigs.k8s.io/controller-runtime/pkg/client.New(0x0?, {0x0, 0x140007fc150, {0x0, 0x0}, 0x0, {0x0, 0x0}, 0x0})
        sigs.k8s.io/controller-runtime@v0.15.0/pkg/client/client.go:101 +0x54
sigs.k8s.io/cluster-api/cmd/clusterctl/client/cluster.(*proxy).NewClient.func1()
        sigs.k8s.io/cluster-api/cmd/clusterctl/client/cluster/proxy.go:169 +0x68
sigs.k8s.io/cluster-api/cmd/clusterctl/client/cluster.retryWithExponentialBackoff.func1()
        sigs.k8s.io/cluster-api/cmd/clusterctl/client/cluster/client.go:232 +0x5c
k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0x14000a02eb8?)
        k8s.io/apimachinery@v0.27.2/pkg/util/wait/wait.go:145 +0x48
k8s.io/apimachinery/pkg/util/wait.ExponentialBackoff({0xee6b280, 0x3ff8000000000000, 0x3fb999999999999a, 0x9, 0x0}, 0x0?)
        k8s.io/apimachinery@v0.27.2/pkg/util/wait/backoff.go:461 +0x58
sigs.k8s.io/cluster-api/cmd/clusterctl/client/cluster.retryWithExponentialBackoff({0xee6b280, 0x3ff8000000000000, 0x3fb999999999999a, 0x9, 0x0}, 0x6?)
        sigs.k8s.io/cluster-api/cmd/clusterctl/client/cluster/client.go:230 +0xcc
sigs.k8s.io/cluster-api/cmd/clusterctl/client/cluster.(*proxy).NewClient(0x14000a030f8?)
        sigs.k8s.io/cluster-api/cmd/clusterctl/client/cluster/proxy.go:167 +0x98
sigs.k8s.io/cluster-api/cmd/clusterctl/client/cluster.listProviders({0x104e41d68?, 0x140002da150?}, 0x0?)
        sigs.k8s.io/cluster-api/cmd/clusterctl/client/cluster/inventory.go:324 +0x30
sigs.k8s.io/cluster-api/cmd/clusterctl/client/cluster.(*inventoryClient).List.func1()
        sigs.k8s.io/cluster-api/cmd/clusterctl/client/cluster/inventory.go:314 +0x2c
sigs.k8s.io/cluster-api/cmd/clusterctl/client/cluster.retryWithExponentialBackoff.func1()
        sigs.k8s.io/cluster-api/cmd/clusterctl/client/cluster/client.go:232 +0x5c
k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0x14000a03188?)
        k8s.io/apimachinery@v0.27.2/pkg/util/wait/wait.go:145 +0x48
k8s.io/apimachinery/pkg/util/wait.ExponentialBackoff({0xee6b280, 0x3ff8000000000000, 0x3fb999999999999a, 0x9, 0x0}, 0x10285174c?)
        k8s.io/apimachinery@v0.27.2/pkg/util/wait/backoff.go:461 +0x58
sigs.k8s.io/cluster-api/cmd/clusterctl/client/cluster.retryWithExponentialBackoff({0xee6b280, 0x3ff8000000000000, 0x3fb999999999999a, 0x9, 0x0}, 0x14000a032c8?)
        sigs.k8s.io/cluster-api/cmd/clusterctl/client/cluster/client.go:230 +0xcc
sigs.k8s.io/cluster-api/cmd/clusterctl/client/cluster.(*inventoryClient).List(0x1400081f140)
        sigs.k8s.io/cluster-api/cmd/clusterctl/client/cluster/inventory.go:313 +0xac
sigs.k8s.io/cluster-api/cmd/clusterctl/client/cluster.(*providerInstaller).Validate(0x1400048f4a0)
        sigs.k8s.io/cluster-api/cmd/clusterctl/client/cluster/installer.go:177 +0x38
sigs.k8s.io/cluster-api/cmd/clusterctl/client.(*clusterctlClient).Init(0x14000465410, {{{0x0, 0x0}, {0x0, 0x0}}, {0x16d5bf2af, 0xb}, {0x14000037fc0, 0x1, 0x1}, ...})
        sigs.k8s.io/cluster-api/cmd/clusterctl/client/init.go:135 +0x1c4
sigs.k8s.io/cluster-api/cmd/clusterctl/cmd.runInit()
        sigs.k8s.io/cluster-api/cmd/clusterctl/cmd/init.go:144 +0x1dc
sigs.k8s.io/cluster-api/cmd/clusterctl/cmd.glob..func13(0x10628bc00?, {0x140004de200?, 0x8?, 0x8?})
        sigs.k8s.io/cluster-api/cmd/clusterctl/cmd/init.go:86 +0x1c
github.com/spf13/cobra.(*Command).execute(0x10628bc00, {0x140004de180, 0x8, 0x8})
        github.com/spf13/cobra@v1.7.0/command.go:940 +0x5c8
github.com/spf13/cobra.(*Command).ExecuteC(0x10628d5e0)
        github.com/spf13/cobra@v1.7.0/command.go:1068 +0x35c
github.com/spf13/cobra.(*Command).Execute(...)
        github.com/spf13/cobra@v1.7.0/command.go:992
sigs.k8s.io/cluster-api/cmd/clusterctl/cmd.Execute()
        sigs.k8s.io/cluster-api/cmd/clusterctl/cmd/root.go:105 +0x2c
main.main()
        sigs.k8s.io/cluster-api/cmd/clusterctl/main.go:27 +0x1c

What did you expect to happen?

Clean initialisation

Cluster API version

1.5.0

Kubernetes version

NA

Anything else you would like to add?

No response

Label(s) to be applied

/kind bug

@k8s-ci-robot k8s-ci-robot added kind/bug Categorizes issue or PR as related to a bug. needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. labels Aug 1, 2023
@k8s-ci-robot (Contributor)

This issue is currently awaiting triage.

If CAPI contributors determine this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.


sbueringer commented Aug 2, 2023

@debjitk Thx for reporting

This should fix it: #9107
This is a side-effect of our bump to CR v0.15

Can you test whether this fixes the issue and that logging otherwise behaves normally, as expected?

(You can build clusterctl via `make clusterctl`)

(@richardcase @Ankitasw @wyike could be that you'll have to do something similar for clusterawsadm)
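
(For context: controller-runtime v0.15 prints this warning, together with the stack trace above, when its logger is used without `log.SetLogger` ever having been called. Below is a minimal Go sketch of the kind of change that suppresses the warning, assuming the fix simply sets a logger early in the CLI's startup; the actual change is in #9107.)

```go
package main

import (
	"github.com/go-logr/logr"

	ctrllog "sigs.k8s.io/controller-runtime/pkg/log"
)

func main() {
	// controller-runtime v0.15 warns (and dumps a stack trace) the first time
	// its delegating logger is used if log.SetLogger was never called.
	// Setting any logr.Logger up front suppresses the warning; logr.Discard()
	// simply drops the output. A real CLI would wire in its own logger here
	// instead of discarding logs.
	ctrllog.SetLogger(logr.Discard())

	// ... the rest of the CLI (e.g. cobra command execution) would run here.
}
```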


d3bt3ch commented Aug 6, 2023

@sbueringer Sure


d3bt3ch commented Aug 6, 2023

@sbueringer The issue is resolved with the following output...

Fetching providers
Skipping installing cert-manager as it is already installed
Installing Provider="cluster-api" Version="v1.5.0" TargetNamespace="capi-system"
Creating objects Provider="cluster-api" Version="v1.5.0" TargetNamespace="capi-system"
Creating inventory entry Provider="cluster-api" Version="v1.5.0" TargetNamespace="capi-system"
Installing Provider="bootstrap-kubeadm" Version="v1.5.0" TargetNamespace="capi-system"
Creating objects Provider="bootstrap-kubeadm" Version="v1.5.0" TargetNamespace="capi-system"
Creating inventory entry Provider="bootstrap-kubeadm" Version="v1.5.0" TargetNamespace="capi-system"
Installing Provider="control-plane-kubeadm" Version="v1.5.0" TargetNamespace="capi-system"
Creating objects Provider="control-plane-kubeadm" Version="v1.5.0" TargetNamespace="capi-system"
Creating inventory entry Provider="control-plane-kubeadm" Version="v1.5.0" TargetNamespace="capi-system"

Your management cluster has been initialized successfully!

You can now create your first workload cluster by running the following:

  clusterctl generate cluster [name] --kubernetes-version [version] | kubectl apply -f -

Error: unable to verify clusterctl version: unable to semver parse clusterctl GitVersion: strconv.ParseUint: parsing "": invalid syntax
