
Any way to avoid etcd dependency when it's not needed? #44

Closed
fengye87 opened this issue Mar 29, 2021 · 10 comments
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@fengye87

I'm building an apiserver with this project that doesn't use etcd as its backend. However, the resulting binary still requires an --etcd-servers flag to start normally, and it keeps trying to connect to that address until a real etcd service is reachable. During this connecting period, the API endpoint keeps returning 503 errors.

So, is there any way I can avoid the etcd dependency? If not, any suggestions on how to make it possible? I may put together a PR here.

@yue9944882
Member

@fengye87 I think it's a duplicate of kubernetes-sigs/apiserver-builder-alpha#583. I will get back with feasible fixes after reproducing it locally.

@yue9944882
Member

kubernetes-sigs/apiserver-builder-alpha#583 (comment) @fengye87 I saw a few warning-level log messages from the apiserver when it's actually working without etcd, but the custom resources work fine. The e2e test kubernetes-sigs/apiserver-builder-alpha#587 also confirms that.

@yue9944882
Member

@fengye87 I cut a new release of apiserver-runtime in which the etcd dependency can be completely removed by calling WithoutEtcd() in the builder flow. Can you try it?
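For reference, a minimal sketch of what the etcd-free builder flow looks like. This assumes the apiserver-runtime builder API described in this thread (builder.APIServer with WithResource, WithoutEtcd, and Execute); the v1alpha1.Foo type and its import path are placeholders, not from the thread:

```go
// Hypothetical main.go wiring WithoutEtcd() into the builder flow.
package main

import (
	"log"

	"sigs.k8s.io/apiserver-runtime/pkg/builder"

	// Placeholder import for your own API types.
	v1alpha1 "example.com/sample/pkg/apis/sample/v1alpha1"
)

func main() {
	err := builder.APIServer.
		WithResource(&v1alpha1.Foo{}). // Foo must implement resource.Object
		WithoutEtcd().                 // drop the etcd-backed storage entirely
		Execute()
	if err != nil {
		log.Fatal(err)
	}
}
```

With this flow the resulting binary should no longer require the --etcd-servers flag at startup.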

@fengye87
Author

@yue9944882 I can confirm WithoutEtcd() works as expected, but it doesn't solve my problem entirely.

I've created a sample to demonstrate my requirements. A few key points here:

  • A dummy resource Foo, ideally without any CRUD endpoints
  • A subresource Bar, which answers only to the Connect verb and has custom implementation
  • Neither of the above resources requires etcd

In the previous commit of the sample, I can get it to work, but only with an etcd. In the current commit, I can remove the etcd dependency only if I add one of the CRUD operations to Foo, which effectively disables the discovery of subresource Bar due to https://github.com/kubernetes-sigs/apiserver-runtime/blob/main/pkg/builder/builder_resource.go#L73. Moreover, if I don't add any CRUD operations to Foo, I have to provide a RESTOptionsGetter to the builder flow to avoid some nil-pointer panics while starting the server. However, I still get an etcdclient: no available endpoints error even with a proper RESTOptionsGetter.

@yue9944882
Member

A dummy resource Foo, ideally without any CRUD endpoints

@fengye87 For the Foo resource, just implementing the resource.Object interface should be fine, which will make it a classic Kubernetes resource. I'm not sure whether all CRUD endpoints can be disabled, but that doesn't seem to be a blocker for your case, does it?
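A hedged sketch of a Foo type implementing resource.Object, based on my reading of the apiserver-runtime v1.0.x interface; the exact method set may differ between releases, and the API group "sample.example.com" is a placeholder:

```go
// Minimal Foo type satisfying resource.Object (sketch, not verified against
// a specific apiserver-runtime release).
package v1alpha1

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"sigs.k8s.io/apiserver-runtime/pkg/builder/resource"
)

var _ resource.Object = &Foo{} // compile-time interface check

type Foo struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`
}

type FooList struct {
	metav1.TypeMeta `json:",inline"`
	metav1.ListMeta `json:"metadata,omitempty"`
	Items           []Foo `json:"items"`
}

func (f *Foo) GetObjectMeta() *metav1.ObjectMeta { return &f.ObjectMeta }
func (f *Foo) NamespaceScoped() bool             { return true }
func (f *Foo) New() runtime.Object               { return &Foo{} }
func (f *Foo) NewList() runtime.Object           { return &FooList{} }
func (f *Foo) IsStorageVersion() bool            { return true }

func (f *Foo) GetGroupVersionResource() schema.GroupVersionResource {
	// Placeholder group for this sketch.
	return schema.GroupVersionResource{
		Group:    "sample.example.com",
		Version:  "v1alpha1",
		Resource: "foos",
	}
}
```

Note that Foo and FooList also need DeepCopyObject methods to satisfy runtime.Object; those are typically generated with controller-gen rather than written by hand.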

A subresource Bar, which answers only to the Connect verb and has custom implementation

In the latest release of apiserver-runtime (v1.0.1), you can implement the resource.ConnectorSubResource interface for your Bar definition. Meanwhile, you're supposed to set up the connection between Foo and Bar as shown in this example. Basically, I think your case is really close to the example project pod-exec from our peer project.
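A sketch of what a connect-only Bar subresource might look like. This follows my reading of the v1.0.x interfaces (a Connect method in the style of Kubernetes' rest.Connecter, plus a subresource name); the exact method set of resource.ConnectorSubResource may differ between releases, and all names here are placeholders:

```go
// Connect-only subresource sketch: handles the Kubernetes "connect" verb
// with a plain HTTP handler, so no etcd-backed CRUD storage is involved.
package v1alpha1

import (
	"context"
	"net/http"

	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/apiserver/pkg/registry/rest"
)

type Bar struct{}

// SubResourceName is the path segment, e.g. /foos/{name}/bar.
func (b *Bar) SubResourceName() string { return "bar" }

func (b *Bar) New() runtime.Object { return &Bar{} }

// Connect returns the handler that serves the subresource endpoint.
func (b *Bar) Connect(ctx context.Context, id string, options runtime.Object, r rest.Responder) (http.Handler, error) {
	return http.HandlerFunc(func(w http.ResponseWriter, req *http.Request) {
		// id is the name of the parent Foo object being connected to.
		w.Write([]byte("hello from " + id))
	}), nil
}

func (b *Bar) NewConnectOptions() (runtime.Object, bool, string) {
	return nil, false, ""
}

func (b *Bar) ConnectMethods() []string { return []string{"GET"} }
```

Bar would likewise need runtime.Object's deepcopy machinery, and the parent Foo type has to expose Bar as one of its subresources per the example linked above.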

@fengye87
Author

@yue9944882 I've just had a look at the example project pod-exec you mentioned; it is very close to my sample. Can the pod-exec apiserver run without etcd? In my sample project, as soon as I introduce the WithoutEtcd option, my apiserver refuses to start.

@k8s-triage-robot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jul 27, 2021
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Aug 26, 2021
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue.


