
support SV API for CRDs (using workqueue and wait channel) #124543

Open
wants to merge 2 commits into base: master

Conversation

@richabanker (Contributor) commented Apr 26, 2024

What type of PR is this?

/kind feature

What this PR does / why we need it:

This PR is based off of #120582.

Pseudocode

  1. In the createCustomResourceDefinition and updateCustomResourceDefinition methods, create an svUpdateInfo object that stores the CRD, a wait channel that indicates whether the SV update for this CRD has finished, and some other metadata. Store this svUpdateInfo in manager.storageVersionUpdateInfoMap.
  2. In getOrCreateServingInfoFor(), read the latest CRD from manager.storageVersionUpdateInfoMap and also store svUpdateInfo.storageVersionProcessedCh inside crdInfo.
  3. While serving CR writes, check whether crdInfo.storageVersionProcessedCh is closed: only serve the request if it is, otherwise fail with 405 (StatusMethodNotAllowed).
  4. In createCustomResourceDefinition and updateCustomResourceDefinition, after the teardown of old handlers has completed, queue the new CRDs (by name) for SV updates.
  5. The manager processes SV updates for the queued CRDs: for every crd.Name polled from the queue, it looks up the latest CRD in storageVersionUpdateInfoMap and uses that to update the SV.
  6. The manager abandons an SV update if there are active teardowns of previous handlers ongoing (tracked via storageVersionUpdateInfoMap.teardownCount). Once the teardown of the previous handlers completes, the new CRD is re-enqueued to retry the SV update. (A rough Go sketch of this bookkeeping follows the list.)
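
For illustration, a minimal Go sketch of the bookkeeping described above (not the PR's actual code): the struct fields mirror the pseudocode, while canServeWrite is a hypothetical helper showing the step-3 check.

import (
    apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
)

// storageVersionUpdateInfo is the per-CRD bookkeeping described in the pseudocode.
type storageVersionUpdateInfo struct {
    crd *apiextensionsv1.CustomResourceDefinition
    // processedCh is closed once the StorageVersion update for this CRD has been published.
    processedCh chan struct{}
    // teardownCount tracks in-flight teardowns of older handlers for this CRD (step 6).
    teardownCount int
}

// canServeWrite implements the step-3 check: a CR write may be served only after
// the corresponding StorageVersion update has finished (processedCh is closed).
func canServeWrite(processedCh <-chan struct{}) bool {
    select {
    case <-processedCh:
        return true
    default:
        return false // caller should respond with 405 (StatusMethodNotAllowed)
    }
}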

Which issue(s) this PR fixes:

Fixes kubernetes/enhancements#2339

Special notes for your reviewer:

Does this PR introduce a user-facing change?


Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.:

- [KEP]: https://github.com/kubernetes/enhancements/tree/master/keps/sig-api-machinery/2339-storageversion-api-for-ha-api-servers

@k8s-ci-robot (Contributor)

Skipping CI for Draft Pull Request.
If you want CI signal for your change, please convert it to an actual PR.
You can still manually trigger a test run with /test all

@k8s-ci-robot (Contributor)

Adding the "do-not-merge/release-note-label-needed" label because no release-note block was detected, please follow our release note process to remove it.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot added do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. size/XL Denotes a PR that changes 500-999 lines, ignoring generated files. kind/feature Categorizes issue or PR as related to a new feature. do-not-merge/release-note-label-needed Indicates that a PR should not merge because it's missing one of the release note labels. cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. do-not-merge/needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. needs-priority Indicates a PR lacks a `priority/foo` label and requires one. area/apiserver area/test sig/api-machinery Categorizes an issue or PR as relevant to SIG API Machinery. sig/testing Categorizes an issue or PR as relevant to SIG Testing. and removed do-not-merge/needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. labels Apr 26, 2024
@k8s-ci-robot (Contributor)

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: richabanker
Once this PR has been reviewed and has the lgtm label, please assign deads2k for approval. For more information see the Kubernetes Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@richabanker (Contributor, Author)

cc @roycaihw

@roycaihw (Member) left a comment


Adding some context here for the new proposal. The proposal mainly addresses the race condition in "handling CR writes" caught by @deads2k.

"Handling CR writes" is equivalent to "unblocking one crdInfo", or "unblocking one handler".

In an ideal world, each new handler would have exactly one corresponding SV update. In reality, each new handler has zero or one corresponding SV update: the SV update may not happen because a newer handler comes along and short-circuits the old handler's SV update.

Essentially what we are trying to achieve here is "unblock one handler when its corresponding SV update succeeds".

I think instead of introducing intermediate concepts (e.g. comparing the CRD cache version and the SV cache version) that try to indirectly represent the ordering between "SV update" and "handler serving CR writes", we should just track the 1-to-1 relationship between a handler and its corresponding SV update, because an indirect representation is hard to get right across the various race conditions.

That means, when the SV update worker successfully performs an SV update, the corresponding handler should just know that its SV update succeeded.

To achieve this, I think we can do the following:

  1. In the createCustomResourceDefinition and updateCustomResourceDefinition methods, create a new crdInfo (i.e. the new handler) when an old crdInfo is removed from the map. Create a corresponding svUpdatedSucceeded channel and store it inside the crdInfo.
  2. Have a "pending SV update" cache, which is a map from "CRD name" to "SV + channel". The cache should be protected by a lock. Every time we write to the map for an existing key, simply overwrite the value (i.e. short-circuit older versions).
  3. Instead of having the SV update worker look at the CRD cache to decide which SV to update to, have it look at the new "pending SV update" cache. The worker grabs the "SV + channel", performs the SV update, and, upon success, closes the channel. (A rough sketch of such a cache follows this list.)
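
For concreteness, a rough sketch of what such a "pending SV update" cache could look like (the names, the use of the CRD object as the source of the desired storage version, and the processOne helper are illustrative assumptions, not the PR's actual code):

import (
    "context"
    "sync"

    apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
)

// pendingSVUpdate is the "SV + channel" value stored per CRD name.
type pendingSVUpdate struct {
    crd         *apiextensionsv1.CustomResourceDefinition // source of the desired storage version
    processedCh chan struct{}                             // closed once the SV update succeeds
}

type pendingSVCache struct {
    mu      sync.Mutex
    pending map[string]pendingSVUpdate // keyed by CRD name
}

// enqueue overwrites any older pending entry for the same CRD, short-circuiting it.
func (c *pendingSVCache) enqueue(crd *apiextensionsv1.CustomResourceDefinition, ch chan struct{}) {
    c.mu.Lock()
    defer c.mu.Unlock()
    c.pending[crd.Name] = pendingSVUpdate{crd: crd, processedCh: ch}
}

// processOne is what the SV update worker would run for one CRD name: claim the
// pending entry, perform the SV update, and close the channel on success.
func (c *pendingSVCache) processOne(ctx context.Context, name string, update func(context.Context, *apiextensionsv1.CustomResourceDefinition) error) error {
    c.mu.Lock()
    entry, ok := c.pending[name]
    if ok {
        delete(c.pending, name) // claim the entry; a newer write simply re-adds the key
    }
    c.mu.Unlock()
    if !ok {
        return nil
    }
    if err := update(ctx, entry.crd); err != nil {
        return err // caller can re-enqueue and retry
    }
    close(entry.processedCh) // unblocks the handler waiting on its SV update
    return nil
}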

@@ -290,8 +300,6 @@ func (r *crdHandler) ServeHTTP(w http.ResponseWriter, req *http.Request) {
return
}

terminating := apiextensionshelpers.IsCRDConditionTrue(crd, apiextensionsv1.Terminating)

crdInfo, err := r.getOrCreateServingInfoFor(crd.UID, crd.Name)
Member

We create handlers and track their corresponding SV updates in the createCustomResourceDefinition and updateCustomResourceDefinition methods. I think in the ServeHTTP method we should only get handlers, not create them, since we don't schedule an SV update here.

This is because we want to make sure the 1-to-1 relationship between a handler and its corresponding SV update is tracked.

Contributor Author

We are no longer creating handlers in createCustomResourceDefinition or updateCustomResourceDefinition as discussed offline.

@@ -453,14 +532,31 @@ func (r *crdHandler) createCustomResourceDefinition(obj interface{}) {
storageMap := r.customStorage.Load().(crdStorageMap)
oldInfo, found := storageMap[crd.UID]
if !found {
crdInfo, err := r.createServingInfoFor(crd.Name)
if err != nil {
klog.Errorf("createCustomResourceDefinition failed, error creating new handler: %v", err)
Member

nit: Previously you mentioned that createServingInfoFor may fail if the CRD was just created and hasn't yet been approved by the naming controller. When the naming controller approves the name and updates the CRD status, a new watch event will trigger a successful handler creation. An optimization here could be to detect whether the name has been approved and log at a different level (instead of Errorf).
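
If that optimization were pursued, a hypothetical helper might look like the sketch below (the function name is invented; it assumes the NamesAccepted condition is the right signal for naming-controller approval and that the surrounding file already imports apiextensionshelpers, apiextensionsv1, and klog):

// logHandlerCreationFailure demotes the log level when handler creation failed
// only because the CRD's names have not yet been accepted by the naming controller.
func logHandlerCreationFailure(crd *apiextensionsv1.CustomResourceDefinition, err error) {
    if !apiextensionshelpers.IsCRDConditionTrue(crd, apiextensionsv1.NamesAccepted) {
        // A later watch event (once the names are accepted) will retry handler
        // creation, so this is expected rather than an error.
        klog.V(4).Infof("handler creation for CRD %s deferred until its names are accepted: %v", crd.Name, err)
        return
    }
    klog.Errorf("createCustomResourceDefinition failed, error creating new handler: %v", err)
}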

Contributor Author

I think this comment may be obsolete now, because I reverted splitting getOrCreateServingInfoFor() into 2 parts (as was done previously); now the only change in getOrCreateServingInfoFor() is that it reads from manager.storageVersionUpdateInfoMap to determine the latest CRD. But please correct me if I misunderstood.

}

// updateStorageVersion updates a StorageVersion for the given CRD.
func (m *Manager) updateStorageVersion(ctx context.Context, crd *apiextensionsv1.CustomResourceDefinition) error {
Member

updateStorageVersion and updateCRDStorageVersion are duplicates

Contributor Author

Fixed. Thanks for catching that!

}

// UpdateActiveTeardownsCount updates the teardown count of the CRD in sync.Map.
func (m *Manager) UpdateActiveTeardownsCount(crdName string, value int) error {
Member

Can we unit test this method?

Contributor Author

Done.

func (m *Manager) alreadyPublishedLatestSV(svUpdateInfo *storageVersionUpdateInfo) bool {
select {
case <-svUpdateInfo.processedCh:
klog.V(4).Infof("Storageversion is already updated to the latest value for crd: %s, returning", svUpdateInfo.crd.Name)
Member

Is this situation possible?

Contributor Author

In the very rare situation where the worker has kicked in and is about to start the SV update for a CRD (say the latest version at this time is v2), and before it begins processing, a new version of the CRD (v3) becomes available and gets enqueued in the worker queue: the current worker will now find the latest version in the storageVersionUpdateInfoMap to be v3 and will update the SV to v3. When the queued SV update (v3) is picked up by the worker in the next iteration, this check lets it skip the already-published update.

This may be unnecessary, so I'm OK with removing this condition if you think that's better.

svInfo := val.(*storageVersionUpdateInfo)
svInfo.crd = crd
if processedCh != nil {
svInfo.processedCh = processedCh
Member

Can we use the Store method to update the sync.Map, to make sure the update is atomic?
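
For reference, a rough sketch of a Store-based update (the helper name is invented, and it assumes Manager keeps storageVersionUpdateInfoMap as a sync.Map holding the storageVersionUpdateInfo fields shown in the snippet above):

// storeLatest builds a fresh value (carrying over the existing processedCh when
// no new one is supplied) and replaces the whole map entry via sync.Map.Store,
// so readers never observe a partially mutated struct.
func (m *Manager) storeLatest(crd *apiextensionsv1.CustomResourceDefinition, processedCh chan struct{}) {
    newInfo := &storageVersionUpdateInfo{crd: crd}
    if processedCh != nil {
        newInfo.processedCh = processedCh
    } else if val, ok := m.storageVersionUpdateInfoMap.Load(crd.Name); ok {
        newInfo.processedCh = val.(*storageVersionUpdateInfo).processedCh
    }
    m.storageVersionUpdateInfoMap.Store(crd.Name, newInfo)
}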

Contributor Author

Done.

@k8s-ci-robot k8s-ci-robot added size/XXL Denotes a PR that changes 1000+ lines, ignoring generated files. and removed size/XL Denotes a PR that changes 500-999 lines, ignoring generated files. labels Jun 5, 2024
@richabanker (Contributor, Author)

/test all

@richabanker richabanker marked this pull request as ready for review June 5, 2024 23:15
@k8s-ci-robot k8s-ci-robot removed the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Jun 5, 2024
…ruth for latest CRDs in getOrCreateServingInfoFor
@k8s-ci-robot (Contributor)

@richabanker: The following tests failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

Test name                     Commit   Details  Required  Rerun command
pull-kubernetes-linter-hints  21b5b1b  link     false     /test pull-kubernetes-linter-hints
pull-kubernetes-verify-lint   21b5b1b  link     true      /test pull-kubernetes-verify-lint

Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.

@fedebongio (Contributor)

/triage accepted

@k8s-ci-robot k8s-ci-robot added triage/accepted Indicates an issue or PR is ready to be actively worked on. and removed needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. labels Jun 6, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all PRs.

This bot triages PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:

  • Mark this PR as fresh with /remove-lifecycle stale
  • Close this PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Sep 4, 2024