
Timeout for clusterctl #4334

Closed
teoyaomiqui opened this issue Mar 17, 2021 · 15 comments

@teoyaomiqui
Contributor

teoyaomiqui commented Mar 17, 2021

User Story

As a user of the clusterctl client library or CLI utility, it would greatly benefit me if I could specify a timeout when getting the kubeconfig for a cluster:

# clusterctl get kubeconfig foo --kubeconfig-context bar --timeout 10s

or

kubeconfig, err := c.GetKubeconfig(&client.GetKubeconfigOptions{
	ParentKubeconfigPath:    f,
	ParentKubeconfigContext: parentContext,
	ManagedClusterNamespace: ref.Namespace,
	ManagedClusterName:      ref.Name,
	Timeout:                 "10s", // or time.Second * 10
})

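Whichever form the option takes, the CLI layer would need to turn the --timeout flag value into a time.Duration at some point. A minimal sketch, assuming a string-valued flag (the variable and field names below are illustrative, not part of clusterctl today):

// Sketch only: convert the --timeout flag value (e.g. "10s") into a time.Duration.
// time.ParseDuration accepts values such as "10s", "1m30s" or "500ms".
timeout, err := time.ParseDuration(timeoutFlag)
if err != nil {
	return fmt.Errorf("invalid --timeout value %q: %w", timeoutFlag, err)
}
options.Timeout = timeout // hypothetical field on GetKubeconfigOptions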

Detailed Description

The current timeout is 300 seconds. If the management cluster is not reachable, clusterctl only fails after 10 retries with a 30-second timeout per attempt (the controller-runtime client's default behavior). When running this command as an interactive user this is acceptable, because you can interrupt it with Ctrl+C. However, when it is run as part of a CI/CD pipeline or another non-interactive script, we have to wait the full 300 seconds just to learn that the cluster is not reachable.

It would be a great help if we could specify this timeout. I think it is easy to implement by using a different context here:
https://github.com/kubernetes-sigs/cluster-api/blob/master/cmd/clusterctl/client/cluster/workload_cluster.go#L53

func (p *workloadCluster) GetKubeconfig(workloadClusterName string, namespace string, timeout time.Duration) (string, error) {
...
	// Bound the secret lookup with the caller-supplied timeout instead of
	// relying solely on the client's default retry behavior.
	ctxWithTimeout, cancel := context.WithTimeout(ctx, timeout)
	defer cancel()
	dataBytes, err := utilkubeconfig.FromSecret(ctxWithTimeout, cs, obj)
...
}
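To preserve today's behavior for callers that do not set the new parameter, the timeout could default to the existing 300-second window. A minimal sketch of that defaulting (where it would live is an open question, not something decided in this issue):

// Sketch only: fall back to the current 5-minute behavior when no timeout is supplied.
if timeout == 0 {
	timeout = 5 * time.Minute
}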

/kind feature
/area clusterctl

@k8s-ci-robot k8s-ci-robot added kind/feature Categorizes issue or PR as related to a new feature. area/clusterctl Issues or PRs related to clusterctl labels Mar 17, 2021
@vincepri
Member

Hi @teoyaomiqui, thanks for the report! +1 from my side for a configurable timeout and a flag for this command. Would you be able to work on it?

/milestone v0.4.x

@k8s-ci-robot k8s-ci-robot added this to the v0.4.x milestone Mar 17, 2021
@teoyaomiqui
Contributor Author

Sure, I can do this.

@vincepri
Member

/assign @teoyaomiqui
/lifecycle active

@k8s-ci-robot k8s-ci-robot added the lifecycle/active Indicates that an issue or PR is actively being worked on by a contributor. label Mar 17, 2021
@CecileRobertMichon CecileRobertMichon modified the milestones: v0.4.x, v0.4 Mar 22, 2021
@MadhavJivrajani

Hi @teoyaomiqui! Are you still working on this?

@teoyaomiqui
Contributor Author

@MadhavJivrajani I haven't had time to start on this yet, but I can start this week if that works with the release schedule.

@MadhavJivrajani

Hi @teoyaomiqui, I probably should've been clearer in my comment: I'm not sure about the release cycle either; I asked mostly because I was hoping to work on this issue if you weren't planning to. Let me know if I can take it up! 😄

@teoyaomiqui
Contributor Author

@MadhavJivrajani sure, you can pick it up :)

@MadhavJivrajani

/unassign @teoyaomiqui
/assign

@fabriziopandini
Member

Sorry for getting to this issue late, but what about considering an approach similar to https://cluster-api.sigs.k8s.io/clusterctl/configuration.html#cert-manager-timeout-override instead of adding a flag to all the clusterctl CLI commands?

@teoyaomiqui
Contributor Author

Sorry for getting to this issue late, but what about considering an approach similar to https://cluster-api.sigs.k8s.io/clusterctl/configuration.html#cert-manager-timeout-override instead of adding a flag to all the clusterctl CLI commands?

This approach would make clusterctl harder to use as a library. Instead of simply passing the timeout as an option to the command, a user would have to maintain a file with that configuration or reimplement the configuration interface in order to override it.

For example, here is how simple it is for us to use right now: https://github.com/airshipit/airshipctl/blob/master/pkg/clusterctl/client/client.go#L173-L182

It would be great if we could keep it just as simple, @fabriziopandini

@fabriziopandini
Member

fabriziopandini commented May 18, 2021

This approach would make clusterctl harder to use as a library.

As far as I remember, there are already three ways to set this value: if you don't want to use the config file, and you don't want to inject your own config provider, you can simply set an environment variable in your code and the library will pick up the setting.

Generalising a little bit, I don't think we should add a field to a method signature for every specific setting, otherwise things can easily get out of control.
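For what it's worth, the environment-variable route would only be a couple of lines from library code. A minimal sketch, assuming a hypothetical variable name (the real key is whatever clusterctl's config reader expects for this setting, analogous to the cert-manager-timeout override):

// Sketch only: the variable name below is hypothetical, not an existing clusterctl key.
// clusterctl resolves settings from its config file, an injected config provider,
// or environment variables, so library users need not ship a config file.
if err := os.Setenv("CLUSTERCTL_GET_KUBECONFIG_TIMEOUT", "10s"); err != nil {
	return err
}
// ...then construct the clusterctl client as usual; it will pick up the value.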

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. and removed lifecycle/active Indicates that an issue or PR is actively being worked on by a contributor. labels Aug 16, 2021
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Sep 15, 2021
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue.

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
