
KEP-2943: Add initial draft on Network Policy Status #2947

Merged
merged 1 commit into kubernetes:master on Feb 3, 2022

Conversation

@rikatz (Contributor) commented Sep 6, 2021

/sig network
/assign @thockin @caseydavenport

@k8s-ci-robot k8s-ci-robot added sig/network Categorizes an issue or PR as relevant to SIG Network. cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. size/XL Denotes a PR that changes 500-999 lines, ignoring generated files. labels Sep 6, 2021
@k8s-ci-robot k8s-ci-robot requested review from dcbw and thockin September 6, 2021 16:56
@k8s-ci-robot k8s-ci-robot added the kind/kep Categorizes KEP tracking issues and PRs modifying the KEP directory label Sep 6, 2021

```
<<[UNRESOLVED optional short context or usernames ]>>
We need to define if .status.PolicyStatus[].Status will have its own type with
```
Member

Will a properly applied network policy have a "good" status (like "parsed") by default, or is the idea to reserve this field for errors only?

Member

It could be good to define the possible statuses as an enumeration with fixed values.
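For illustration, such an enumeration could be a string type with a fixed set of constants. This is only a sketch; the value names below are hypothetical and not taken from the KEP:

```go
package netpolstatus

// PolicyStatusValue is a hypothetical closed set of values an implementation
// could report in the proposed status field; the names are illustrative only.
type PolicyStatusValue string

const (
	PolicyStatusParsed         PolicyStatusValue = "Parsed"
	PolicyStatusPartialFailure PolicyStatusValue = "PartialFailure"
	PolicyStatusError          PolicyStatusValue = "Error"
)
```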

@youngnick (Contributor) left a comment

I added some comments based on the things I wish people had told me about status when I first started writing status for Kube objects.

I would also recommend, though, that unless namespacing the status by CNI plugin is vital, the Conditions should live at the root (that will make the status much easier to parse with tooling).

* Add the `NetworkPolicyStatus` struct as follows:
```go
// NetworkPolicyStatus describe the current state of the NetworkPolicy.
type NetworkPolicyStatus struct {
```
Contributor

This reads like it's expected that a NetworkPolicy will be implemented by multiple CNIs (because the status is namespaced by CNI name)? Is that a requirement or a common use case?

Contributor Author

I actually removed this and turned it into a Conditions field.

Maybe we should add a reporter to the Conditions field (@thockin this is not common, but since a Network Policy can be consumed by multiple providers, should we add the reporter of a condition as a field?)

Member

I don't think we need to add reporter at this point, see my comment below.

Comment on lines 167 to 204
```go
type CNIPolicyStatus struct {
	// CNI defines the CNI name
	CNI string

	// Status defines the status string (parsed, error, etc)
	Status string

	// Description defines the description of the Status (why it's on that status)
	Description string
}
```
@youngnick (Contributor) commented Sep 7, 2021

I'd recommend this use a metav1.Conditions array instead of just status and description fields, because:

  • Conditions have the ObservedGeneration and LastTransitionTime fields that are essential for checking that the version of the status you're seeing matches the version of the object (that is, that things are not still being updated).
  • Lets there be an expected main summary condition like Accepted or Valid that, when its status is true, indicates that everything was parsed correctly. When the status is false, the Reason and Message fields can be used to indicate the exact problem (like I would imagine Description is used here).
  • This also lets extra Conditions be easily added to the spec later (I would guess there might also be a use for a desired-true Ready Condition that lets you know when the rules have been completely programmed into the underlying CNI).

I asked the question above, but if the status does need to be namespaced by CNI name, then this would look like:

```go
type CNIPolicyStatus struct {
	// CNI defines the CNI name that this status is written by.
	CNI string

	// Conditions holds an array of metav1.Condition that describe the state of the NetworkPolicy.
	// When a CNI or other NetworkPolicy controller implementation parses this object,
	// it must ensure that the `Accepted` Condition is updated with the parsing outcome.
	// The Reason and Description must indicate the cause of the failure, if there was one.

	// (the Conditions field should also have a default that adds the `Accepted` Condition
	// with a `status` of `Unknown`.)
	Conditions []metav1.Condition
}
```

(edit: I tried a suggestion, but the diff was not very useful here.)

Ideally, the update will also include Reason constants for the common Condition reasons (like EverythingOK, FailedParsing or something).

The idea here is that this is a spec that needs to be implemented by lots of CNI plugins, so the more clear the behavior is, the less chance there is of drift between implementations.
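As a rough sketch of those Reason constants, reusing the example names from this comment (EverythingOK, FailedParsing) plus an Accepted condition type; these are illustrative, not necessarily what the KEP ends up defining:

```go
package netpolstatus

// Hypothetical shared constants so that implementations report the same
// condition Type and Reason strings instead of inventing their own.
const (
	// ConditionAccepted is the summary condition set after parsing the policy.
	ConditionAccepted = "Accepted"

	// ReasonEverythingOK means the whole policy was parsed and accepted.
	ReasonEverythingOK = "EverythingOK"
	// ReasonFailedParsing means part of the policy could not be parsed or is
	// not supported by this implementation.
	ReasonFailedParsing = "FailedParsing"
)
```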

Contributor

I second this recommendation; we could add an UnsupportedFeature NetworkPolicyConditionType as a new condition type, which can make use of the Message and Reason fields to provide more information on what the CNI is doing with regard to this.

```go
type NetworkPolicyConditionType string

const (
	NetworkPolicyReady NetworkPolicyConditionType = "Ready"
	UnsupportedFeature NetworkPolicyConditionType = "UnsupportedFeature"
)

type NetworkPolicyCondition struct {
	Type NetworkPolicyConditionType `json:"type,omitempty"`
	// Status of the condition, one of True, False, Unknown.
	Status v1.ConditionStatus `json:"status,omitempty"`

	// +optional
	// Last time the condition transitioned from one status to another.
	LastTransitionTime metav1.Time `json:"lastTransitionTime,omitempty"`
	// +optional
	// A human readable message indicating details about the transition.
	Message string `json:"message,omitempty"`
	// +optional
	// Unique, one-word, CamelCase reason for the condition's last transition.
	Reason string `json:"reason,omitempty"`
}

type NetworkPolicyStatus struct {
	Conditions []NetworkPolicyCondition `json:"conditions,omitempty"`
}
```

Contributor

I'm actually a -1 on making custom Condition types, I think unless you're adding something substantial to the metav1.Condition type, it's much better to use it directly. In particular, the ObservedGeneration field is absolutely vital for ensuring that the Condition isn't stale. Regardless, they'll deserialize the same, but code using the field defs directly will find it easier to use the metav1.Condition than a custom type. (I recommend this because I did it in Contour's HTTPProxy, and it was a mistake.)

I'd encourage supplying string constants for the Type, Reason and Message fields instead.
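To illustrate why the shared type is convenient, here is a minimal sketch assuming the proposed status.conditions field is a plain []metav1.Condition (networking/v1 does not have it at the time of this discussion, so a local slice stands in for it); the condition and reason names are the illustrative ones used above, and the helpers come from k8s.io/apimachinery/pkg/api/meta:

```go
package main

import (
	"fmt"

	networkingv1 "k8s.io/api/networking/v1"
	"k8s.io/apimachinery/pkg/api/meta"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	np := networkingv1.NetworkPolicy{}
	np.Generation = 3

	// Stand-in for the proposed status.conditions field.
	var conditions []metav1.Condition

	// Writer side: record the outcome of parsing the policy.
	meta.SetStatusCondition(&conditions, metav1.Condition{
		Type:               "Accepted",
		Status:             metav1.ConditionTrue,
		ObservedGeneration: np.Generation,
		Reason:             "EverythingOK",
		Message:            "policy parsed and accepted",
	})

	// Reader side: check the condition and make sure it is not stale.
	if c := meta.FindStatusCondition(conditions, "Accepted"); c != nil {
		stale := c.ObservedGeneration != np.Generation
		fmt.Printf("accepted=%v stale=%v reason=%s\n",
			c.Status == metav1.ConditionTrue, stale, c.Reason)
	}
}
```

The staleness check works because ObservedGeneration is copied from metadata.generation at the time the condition was written.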

Contributor Author

I'm a -1 on custom Condition types as well. I've updated the KEP to contain Conditions; I think we are on the same page here.

Contributor Author

Also, TIL that Conditions are actually a meta type. Let me fix that in the KEP.

```go
type CNIPolicyStatus struct {
	// CNI defines the CNI name
	CNI string
```
Member

CNI isn't involved in the enforcement of the network policy API, despite the fact that many network plugins also include a network policy implementation and some policy implementations rely on being involved in container networking.

I would recommend avoiding any naming that focuses on implementation details like this in general.

Contributor Author

PolicyControllerStatus?

@abhiraut (Contributor)

@rikatz my proposal is in #2947 (comment)


## Motivation

While implementing new features in Network Policy, we've realized that some CNIs were
Member

It sounds like these scenarios are for providers that only partially implement the network policy API, or have some restrictions / limitations in the objects they will accept from the API.

If that is the case, should we be handling this via some form of validation so that users get immediate feedback rather than needing to seek it out?

If I create an "invalid" policy for my cluster, I'd like to know about it at the time I attempt to create it, rather than a few weeks down the road when something unexpected happens.

Contributor Author

I agree with you, Casey, but in this case shouldn't that be some common validation webhook?

Also, just a reminder that this Status field was mostly a PRR request, to provide some 'feature usage' feedback.

Member

Yeah, I didn't mean to imply a solution. I was just suggesting that these use-cases sound better suited to validation as far as user-experience goes.

Could either be a webhook, some sort of capability negotiation, etc. Haven't thought that far into it!

Contributor

A webhook would be the responsibility of individual network policy providers and not something that will be part of k/k, right? I agree that would be the most appropriate, but it's not something we can solve with a KEP.

Member

webhook would be the responsibility of individual network policy providers

Yes, but populating the conditions field of a NetworkPolicy status is also ultimately up to the network policy provider to actually implement.

- Impact of its degraded performance or high-error rates on the feature:
-->

### Scalability

There is a scalability concern. All nodes are watching for Network Policy events in order to perform their enforcement. If the status field is updated by a cluster-wide controller, or some other entity, it means that the number of events sent for each Network Policy created will be two: one for the creation and a second for the update of the Network Policy status.

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Dec 14, 2021
@thockin (Member) left a comment

I took a quick spin thru this one. I don't fundamentally object to Status here, but I feel like I need to ask - why not events?

@wojtek-t (Member) commented Jan 4, 2022

I took a quick spin thru this one. I don't fundamentally object to Status here, but I feel like I need to ask - why not events?

@thockin

I guess #2709 (comment) is a trigger for this KEP and I'm 100% supportive of this change (although I also just skimmed it).

Re why not events - events are best-effort and ephemeral (default 1h TTL). When I launched my network policy and then went to a 1h meeting, I would like to be able to learn why my network policy doesn't work, and I won't be able to do that from the event, because it was already GC-ed. But I would be able to figure this out from the status.

@wojtek-t (Member) commented Jan 4, 2022

@rikatz - will you be able to get back to this work?

@rikatz (Contributor Author) commented Jan 4, 2022

Hi folks, being completely honest with you: I WANT to keep working on this, but I'm not sure I CAN, as I'm pretty low on bandwidth (pun intended) these last few days and don't wanna block anyone because of this.

I've pinged other folks on sig-net-policy-api (@astoycos I'm looking at you!) to check if someone can help me with this; otherwise I can take care of the KEP as long as it doesn't block other KEPs :)

@rikatz (Contributor Author) commented Jan 17, 2022

@wojtek-t I will re-read all the comments during this and next week and see what I can do.

Will try to attend sig net netpol meeting as well and discuss with the group how they want to prioritize this, and how I can get some help. I know the majority of the group is focused on cluster scoped netpol, so maybe even from that work we can get something that can be reused here.

@thockin what's your feeling about @wojtek-t's comments on Events? I mean, I agree that the Events TTL may be frustrating, but OTOH events may bring some more "pattern" than some open status, right? (If we use status, we will probably end up again with the "we need a new status field for feature XPTO" problem in the future.)

@astoycos (Contributor) commented Jan 18, 2022

Hi folks, being completely honest with you: I WANT to keep working on this, but I'm not sure I CAN, as I'm pretty low on bandwidth (pun intended) these last few days and don't wanna block anyone because of this.

@rikatz Totally understandable, I'm in the same boat. We had an out-of-band meeting with Tim last week and thankfully I don't think this is really blocking us at the moment with the new network policy objects moving forward. I think with CNP at least we are going to have a standard CNP status field that is simply a metav1.Conditions array, and for alpha we will not enforce any "standard" conditions that need to be reported, but instead allow CNIs to report what they see as prudent; then as the API evolves we can look at whether the status needs to be standardized.

In terms of this KEP (i.e. adding status to the existing NetPol), that may be a bit more difficult since it is already a mature API, unless we follow the same vein and allow CNIs to simply populate the status conditions with whatever they want.

@thockin (Member) left a comment

OK, I remove my objections in general, though there are specific bits of feedback here I agree with.

  • Conditions
  • Define a standard "FeatureNotSupported" condition (or similar)
  • Maybe an "Accepted" condition for well-behaved modern implementations to set
  • Don't say "CNI"

Question: do we need to support multiple sets of conditions (e.g. per controller) ? Is multiple implementations of NetPol a thing? In theory it could be (e.g. iptables and in a cloud firewall)...

@thockin (Member) commented Jan 25, 2022

It sounds like "not for v1.24" right?

@k8s-ci-robot k8s-ci-robot added size/XXL Denotes a PR that changes 1000+ lines, ignoring generated files. area/enhancements Issues or PRs related to the Enhancements subproject area/provider/azure Issues or PRs related to azure provider kind/api-change Categorizes issue or PR as related to adding, removing, or otherwise changing an API and removed size/XL Denotes a PR that changes 500-999 lines, ignoring generated files. labels Feb 1, 2022
@aanm left a comment

@aanm from "Non goals":

* Provide a feedback to users on whether a Network Policy was applied on all the cluster
nodes

I think that, from my perspective, the field would be there and the providers can use it the way they want, so if Cilium wants to use it to provide information like:

  • This kind of rule was blocked by the admin (like, users trying to implement a policy with CIDR 0.0.0.0/0 and this being blocked)
  • The provider cannot parse "SCTP" rules
  • Users passed an invalid CIDR

are valid cases, whereas

  • Policy could not be applied on node XXXX

is a decision for the netpol provider to implement or not.

@rikatz thanks for pointing this out! I read this proposal a couple of months back and I remember that scalability was a non-issue because of the non-goal "Provide a feedback to users on whether a Network Policy was applied on all the cluster nodes", but I missed this today. LGTM then.

@thockin (Member) commented Feb 2, 2022

Just to echo and emphasize:

  1. Each KNPP node will update its status on the KNP either stating: "Yes, it's enforced" or "No, I have some issues with this policy";

Any model that requires "all nodes" to check in is doomed to fail at scale. :)

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Feb 2, 2022
@rikatz (Contributor Author) commented Feb 2, 2022

/lifecycle active

@k8s-ci-robot k8s-ci-robot added the lifecycle/active Indicates that an issue or PR is actively being worked on by a contributor. label Feb 2, 2022
Comment on lines +211 to +218
Should Network Policy Status cover multiple network policy provider conditions?
If so, how can this be solved? Prefixing the Type field with the implementation
name as `myprovider.com/PartialFailure?`

Also, what kind of polarity should we use in a default condition Type? Positive polarity
like "Accepted=bool" or negative polarity like "PartialFailure=true"?

See the original discussion in https://github.com/kubernetes/enhancements/pull/2947#discussion_r796974848
Contributor

In Gateway API, we've settled on this (based on my personal opinions as much as anything else):

  • a small number (ideally one) of positive-polarity rollup conditions (Accepted would be the go here), but I could see some value in both Accepted, meaning the config is okay, and Ready, meaning the responsible controller has rolled the config out everywhere it needs to be. The idea for these is to provide a low-noise signal that either "everything is OK, no worries", or "something is wrong, you should check more".
  • If necessary, negative-polarity conditions that indicate the presence of a specific error (IPNotAllowed, or similar error conditions.) The Node error conditions are the exemplar here, I think. Also note that providing a good Reason and Message on an Accepted condition may be sufficient, and that these negative-polarity conditions should probably start out namespaced to the owning controller until there's some agreement about what's useful.

Having a Ready Condition as well as an Accepted one allows a user to easily distinguish between "My config is busted" and "my config hasn't been deployed yet", without requiring all nodes to update the object. I'd guess that it should be relatively straightforward for the policy controller to have some mechanism for knowing which nodes have the config version, even if that's just polling.

For any required Conditions, I strongly recommend providing string constants for Reason values you want to be well known, and document the cases you expect them to be used as clearly as possible.

However, having these non-namespaced summary Conditions required will make it difficult to impossible for multiple controllers to coordinate. If multiple controllers are really required, all Conditions should be namespaced with the name of the owning controller, and the API definitions can only suggest Type suffixes. This can be done just using the existing Type field (since it's kind of expected that it may contain domain-prefixed strings).

I'd agree with a comment from @aojea that questioned the need to support multiple network policy providers, personally, but my knowledge in this area is pretty out-of-date, so take that with a whole pile of salt.
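If per-controller namespacing did turn out to be necessary, the only change on the writer's side would be a domain-prefixed Type string. A minimal sketch, assuming a hypothetical provider domain myprovider.com and the same metav1.Condition helpers as above:

```go
package netpolstatus

import (
	"k8s.io/apimachinery/pkg/api/meta"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// setProviderScopedCondition is a sketch of a provider-namespaced condition
// write; "myprovider.com" and the Reason/Message text are made up for the
// example, and only the prefixed Type is the point being illustrated.
func setProviderScopedCondition(conditions *[]metav1.Condition, observedGeneration int64) {
	meta.SetStatusCondition(conditions, metav1.Condition{
		Type:               "myprovider.com/PartialFailure",
		Status:             metav1.ConditionTrue,
		ObservedGeneration: observedGeneration,
		Reason:             "UnsupportedFeature",
		Message:            "endPort is not supported by this implementation",
	})
}
```

Consumers that only understand a plain summary condition like Accepted would simply ignore the prefixed types.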

Member

I'd agree with a comment from @aojea that questioned the need to support multiple network policy providers, personally, but my knowledge in this area is pretty out-of-date, so take that with a whole pile of salt.

+1 to this one, but again, I'm far from being an expert here.

My main question is - do we have to address that for Alpha? While I think we should resolve that for beta, given that alpha is disabled by default, we can probably start with single-provider support only and reevaluate for beta.
@thockin - thoughts?

Contributor Author

I think, IMO, that in this case, as Tim stated before, we can deal with that at implementation time rather than right now. We are targeting alpha, so we can change as much as we want, right? :)

Thanks folks for your thoughts on this!

@wojtek-t (Member) left a comment

@rikatz I added multiple comments, but many of them are not blockers for Alpha (it's just that you already filled in all the sections, so I commented on them).


###### How can a rollout or rollback fail? Can it impact already running workloads?

A rollout won't cause any problem, as this is a new field.
Member

Technically, one can imagine crashing conversions in kube-apiserver or something like that...

Contributor Author

Sorry @wojtek-t, can you explain this a bit more? It's the only part I still need to adjust. What's the scenario? Conversions between old objects not supporting status and new ones?

Member

Yeah. Or more generically, the registry strategy-related code in kube-apiserver. The risk is low, but non-zero.

Member

I think the answer is to ACK "There is low, but non-zero, risk that registering a status strategy could have bugs that cause problems even when the gate is disabled."

Member

Yeah - I would just change to something like:

"Unexpected crashes of kube-apiserver. Running workloads will not be affected".

@antoninbas left a comment

@thockin @rikatz @jayunit100 Speaking on behalf of the Antrea project, there is no concern on our side when it comes to implementing this. From a user perspective, a validation webhook is better IMO, but I guess making it part of the API is a better incentive for CNI providers to implement it as opposed to “you should have a validation webhook if you don’t support all features”.

I understand the rationale for prefixing the Type field with a provider namespace, but 1) it seems unlikely to me that there will be multiple controllers for the NP implementation, and 2) I feel like it hurts the ability of users to consume this Status, as opposed to having a fixed / unique condition Type (e.g. Accepted) that can be checked to determine whether the NP can be implemented by the provider.

@antoninbas

+1

@caseydavenport (Member) commented Feb 2, 2022

I'm kind of surprised this "Scalability" entry is empty.

Yep, I had the same concerns originally @aanm, but am OK with the non-goal as stated. Have pushed back on the "know when my policy is fully implemented" requirement in the past as well for this same reason - it doesn't scale well.

My main concern with this at the moment is that I can't see us getting a lot of value from it.

Take the endPort example - Calico vA.B.C has support for it, but Calico vX.Y.Z existed prior to the endPort field being added to the API. It simply won't know about the field and won't be able to report back to the API that the policy isn't implemented correctly, and it never will be able to because the code is already released and in the wild.

Seems like this only works for when a network policy provider explicitly writes code to check for a specific policy formulation it doesn't allow, which is certainly better than nothing but might provide a false sense of security to users who see "Accepted" on a policy only to find out that the version they are running is just old.

TL;DR: /lgtm, has some limitations but better than nothing and worth playing with in alpha.

@thockin (Member) commented Feb 2, 2022

/lgtm

@k8s-ci-robot k8s-ci-robot added the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Feb 2, 2022
@rikatz (Contributor Author) commented Feb 2, 2022

@wojtek-t As the only missing part is for when targeting beta releases, and I don't wanna block on this, it's your call: should I

  • Add a TBD in that field?
  • Remove it?
  • Add something else?

Thanks once more! I know I'm being a bit annoying about this! Just want to get this done before the freeze.

@wojtek-t wojtek-t self-assigned this Feb 3, 2022
@wojtek-t (Member) left a comment

@rikatz - just two more small things


###### Does enabling the feature change any default behavior?

No
Member

I just realized that it actually should be "yes", because after enabling the feature gate network policy providers will start setting some status.

Contributor Author

They will only do so if it's implemented on their side. This is why I thought the answer was no. A change of "default behavior", to me, would be something like "enabling the feature gate will make all TCP connections be denied".

But let me change here :)


###### How can a rollout or rollback fail? Can it impact already running workloads?

A rollout won't cause any problem, as this is a new field.
Member

Yeah - I would just change to something like:

"Unexpected crashes of kube-apiserver. Running workloads will not be affected".

@k8s-ci-robot k8s-ci-robot removed the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Feb 3, 2022
@wojtek-t (Member) commented Feb 3, 2022

This is more than enough for PRR for Alpha. We can always tune it further in Beta.

/lgtm
/approve PRR

@k8s-ci-robot k8s-ci-robot added the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Feb 3, 2022
@k8s-ci-robot (Contributor)

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: aanm, rikatz, thockin, wojtek-t

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Feb 3, 2022
@wojtek-t (Member) commented Feb 3, 2022

It seems that everyone above was supportive, so cancelling hold.

/hold cancel

@k8s-ci-robot k8s-ci-robot removed the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Feb 3, 2022
@k8s-ci-robot k8s-ci-robot merged commit 1205bf4 into kubernetes:master Feb 3, 2022
@k8s-ci-robot k8s-ci-robot added this to the v1.24 milestone Feb 3, 2022
@thockin (Member) commented Feb 3, 2022

I don't think we need to sweat beta criteria at alpha time.

@wojtek-t (Member) commented Feb 3, 2022

I don't think we need to sweat beta criteria at alpha time.

Sure - I was just adding comments to them as I was also adding to the alpha ones. But I mentioned offline to @rikatz that the beta ones weren't blocking for alpha.
