Add ProvisioningRequestProcessor #6488
Conversation
/lgtm
@kisieland: changing LGTM is restricted to collaborators. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Thanks, mostly LGTM. I don't particularly like the two-level processor, but I think with a bit of renaming and shuffling the code around it would be slightly cleaner. Specifically, I suggest:
- Renaming `ProvisioningRequestProcessor provreq.ProvisioningRequestProcessor` to `LoopStartNotifier *loopstart.ObserversList` (and perhaps its `Process` method to `Refresh`, akin to the cloud provider `Refresh` that also happens on loop start). This would be a new package & object (not interface), similar to the one in `scale_down_candidates_observer.go`.
- Renaming the `ProvisioningClassProcessor` interface to `ProvisioningRequestProcessor`.

That should allow us to later move more logic out of `RunOnce`. Does that make sense?
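The suggestion above can be sketched as follows. This is a hypothetical outline of the proposed `loopstart` package, assuming the names `ObserversList` and `Refresh` from the review comment; it is not the final implementation.

```go
// Hypothetical sketch of the suggested loopstart package: a concrete
// ObserversList object (not an interface) whose Refresh is invoked once
// at the start of every autoscaler loop. All names are assumptions taken
// from the review suggestion.
package main

import "fmt"

// Observer is anything that needs a callback at the start of each loop.
type Observer interface {
	Refresh()
}

// ObserversList fans a single loop-start notification out to all
// registered observers.
type ObserversList struct {
	observers []Observer
}

func NewObserversList(observers []Observer) *ObserversList {
	return &ObserversList{observers: observers}
}

// Refresh notifies every observer, akin to the cloud provider Refresh
// that also happens on loop start.
func (l *ObserversList) Refresh() {
	for _, o := range l.observers {
		o.Refresh()
	}
}

// countingObserver records how many loop starts it has seen.
type countingObserver struct{ count int }

func (c *countingObserver) Refresh() { c.count++ }

func main() {
	c := &countingObserver{}
	list := NewObserversList([]Observer{c})
	list.Refresh() // would be called once per StaticAutoscaler.RunOnce
	fmt.Println("loop starts observed:", c.count) // prints 1
}
```

Grouping such per-loop state updates behind one object would keep `RunOnce` free of individual, unrelated callback calls.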
// ProvisioningRequestProcessor processes ProvisioningRequests in the cluster.
type ProvisioningRequestProcessor interface {
	Process() error
This is a bit weird. Processors usually process something, ideally both accepting and returning a slice of a specific type. Instead, here you're just calling back a function that doesn't accept any parameters and returns a generic error type. It is hard to say what this interface, besides the name, has to do with provisioning requests.
Yes, currently ProvisioningRequests are hidden, so we don't have a list of ProvisioningRequests to pass here, and we don't need to return anything since nothing accepts the list of ProvisioningRequests.
This was an initial design decision from @kisieland, but I'm open to discussion if you think we need to change this approach.
	CleanUp()
}
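For context, a minimal stub satisfying this interface could look like the following. The `checkCapacityProcessor` name and counter are hypothetical; the PR's real processor lists ProvisioningRequests through a client and reconciles their conditions.

```go
package main

import "fmt"

// ProvisioningRequestProcessor processes ProvisioningRequests in the
// cluster (the interface from the diff above).
type ProvisioningRequestProcessor interface {
	Process() error
	CleanUp()
}

// checkCapacityProcessor is a hypothetical stub implementation; the real
// processor lists ProvisioningRequests via a client and updates their
// conditions inside Process.
type checkCapacityProcessor struct {
	runs int
}

func (p *checkCapacityProcessor) Process() error {
	p.runs++ // placeholder for listing ProvReqs and reconciling conditions
	return nil
}

// CleanUp releases any resources held by the processor.
func (p *checkCapacityProcessor) CleanUp() {}

func main() {
	var proc ProvisioningRequestProcessor = &checkCapacityProcessor{}
	if err := proc.Process(); err != nil {
		fmt.Println("process failed:", err)
		return
	}
	proc.CleanUp()
	fmt.Println("processed without error")
}
```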
// ProvisioningClassProcessor processes ProvisioningRequests in the cluster.
So, how is this different from `ProvisioningRequestProcessor`? According to the descriptions, they are the same thing.
I wrote a comment below with full context; I'll copy the part of it related to your question:
"Currently I'm adding support for check-capacity ProvisioningClass, but then I'll add support for atomicScaleUp ProvisioningClass. Each class will process ProvisioningRequest differently, so that's why ProvisioningRequestProcessor has a list of ProvisioningClassProcessor where each processor receives a list of ProvisioningRequests.
Initially I thought to have one interface for ProvisioningRequestProcessor and ProvisioningClassProcessor and wanted to pass nil list to ProvisioningRequestProcessor in the RunOnce() function, but @kisieland asked to have an empty input instead, so I introduced two processors."
@@ -507,6 +507,10 @@ func (a *StaticAutoscaler) RunOnce(currentTime time.Time) caerrors.AutoscalerErr
	a.AutoscalingContext.DebuggingSnapshotter.SetClusterNodes(l)
}

if err := a.processors.ProvisioningRequestProcessor.Process(); err != nil {
Is it important for this to happen here specifically? If it is only about notifying the code about new loop start, can we move towards abstracting all "loop start observers" away from static_autoscaler? There's a set of objects that do some state update per loop and this one looks like a good example. The reason I'm asking is that with over 400 lines this is one of the longest functions in the whole CA codebase and throwing in unrelated function calls decreases any reader's ability to reason about what really is happening here.
ProvisioningRequestProcessor is aimed at processing ProvisioningRequest conditions: it checks whether they are expired, or whether the capacity-booking mechanism has expired, and updates the ProvisioningRequest conditions if needed.
This processor should be called before ProvisioningRequestInjector (I haven't introduced it yet; it will be in the next PR). ProvisioningRequestInjector will be part of PodListProcessor (or will be called just after PodListProcessor) and will run after ProvisioningRequestFilter, which filters out of the unscheduled pods those that consume a ProvisioningRequest, so they don't trigger another scale-up.
Currently we don't list ProvisioningRequests at the beginning of the loop (as we do for regular pods). I added a client to the processor and list ProvReqs inside the processor; that's why the input is empty.
Currently I'm adding support for the check-capacity ProvisioningClass, but then I'll add support for the atomicScaleUp ProvisioningClass. Each class will process ProvisioningRequests differently, which is why ProvisioningRequestProcessor has a list of ProvisioningClassProcessors, where each processor receives a list of ProvisioningRequests.
Initially I thought to have one interface for ProvisioningRequestProcessor and ProvisioningClassProcessor and wanted to pass a nil list to ProvisioningRequestProcessor in the RunOnce() function, but @kisieland asked to have an empty input instead, so I introduced two processors.
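The two-level design described above can be sketched as follows. All type and field names here are illustrative assumptions, not the PR's exact code: the top-level processor lists ProvisioningRequests itself (hence the empty input) and dispatches them to per-class processors.

```go
package main

import "fmt"

// ProvisioningRequest is a minimal stand-in for the real CRD object.
type ProvisioningRequest struct {
	Name  string
	Class string // e.g. "check-capacity", later also an atomic scale-up class
}

// ProvisioningClassProcessor handles requests of a single provisioning class.
type ProvisioningClassProcessor interface {
	Class() string
	Process(reqs []*ProvisioningRequest) error
}

// requestProcessor is the top level: it lists all ProvisioningRequests via
// its client and routes each class's requests to the matching processor.
type requestProcessor struct {
	list            func() ([]*ProvisioningRequest, error) // stands in for the client
	classProcessors []ProvisioningClassProcessor
}

func (p *requestProcessor) Process() error {
	reqs, err := p.list()
	if err != nil {
		return err
	}
	// Group requests by provisioning class.
	byClass := map[string][]*ProvisioningRequest{}
	for _, r := range reqs {
		byClass[r.Class] = append(byClass[r.Class], r)
	}
	// Each class processor receives only its own requests.
	for _, cp := range p.classProcessors {
		if err := cp.Process(byClass[cp.Class()]); err != nil {
			return err
		}
	}
	return nil
}

// countingClassProcessor counts the requests of its class that it sees.
type countingClassProcessor struct {
	class string
	seen  int
}

func (c *countingClassProcessor) Class() string { return c.class }

func (c *countingClassProcessor) Process(reqs []*ProvisioningRequest) error {
	c.seen += len(reqs)
	return nil
}

func main() {
	cc := &countingClassProcessor{class: "check-capacity"}
	p := &requestProcessor{
		list: func() ([]*ProvisioningRequest, error) {
			return []*ProvisioningRequest{
				{Name: "a", Class: "check-capacity"},
				{Name: "b", Class: "other"},
			}, nil
		},
		classProcessors: []ProvisioningClassProcessor{cc},
	}
	if err := p.Process(); err != nil {
		panic(err)
	}
	fmt.Println("check-capacity requests processed:", cc.seen) // prints 1
}
```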
/label tide/merge-method-squash
Just two minor comments, otherwise LGTM.
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: kisieland, x13n, yaroslava-serdiuk. The full list of commands accepted by this bot can be found here. The pull request process is described here.
What type of PR is this?
/kind feature

What this PR does / why we need it:
ProvisioningRequestProcessor reconciles ProvisioningRequest state/conditions: it checks whether the BookingExpired condition should be added or whether the ProvisioningRequest has been pending for too long.
It also removes the CapacityFound condition and uses a uniform Provisioned condition instead.
Part of the implementation of https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/proposals/provisioning-request.md