
Adds support for BGP in OVNK #1636

Merged (6 commits, Aug 5, 2024)

Conversation

trozet
Contributor

@trozet trozet commented Jun 7, 2024

No description provided.

@openshift-ci openshift-ci bot requested review from danwinship and dougbtv June 7, 2024 02:24
@fedepaol
Member

fedepaol commented Jun 7, 2024

cc @oribon

@cgoncalves cgoncalves left a comment

A valuable starting point for the discussion, thanks!

Comment on lines 89 to 91
* As a baremetal or egress IP user, I do not want to have to restrict my nodes to the same layer 2 segment and prefer
to use a pure routing implementation to handle advertising virtual IP (VIP) movement across nodes.


Is the intent of the enhancement to also BGP advertise IPs from the control plane (e.g. K8S API server and node)?

Contributor Author

Yeah, that was one of the goals.


In such a case, the scope of this enhancement goes beyond what is mostly written in this document, particularly the title and introduction sections. I am not opposed to this idea, though.


Related to the FRR driver discussion below, where/how is it going to be implemented?

If control-plane IPs are to be BGP advertised, OVN-Kubernetes should not run/manage its driver instance, as OVNK's scope is Pod interfaces. Is it going to be possible to configure which IPs (control plane IPs, OVN-K managed IPs such as Pod and Egress IPs, or all) are advertised?

Contributor Author

yeah there are configurable options in the RouteAdvertisements CRD to control what you want to advertise. I don't see what that has to do with the FRR driver though. Maybe I am misunderstanding your question.
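For illustration, the kind of knob being referred to might look like this (field names are assumptions drawn from this thread, not the final schema):

```yaml
apiVersion: k8s.ovn.org/v1
kind: RouteAdvertisements
metadata:
  name: advertise-pods-and-eips
spec:
  # hypothetical knob selecting which IP types get advertised
  advertisements:
    - PodNetwork
    - EgressIP
```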

* Replacing the support that MetalLB provides today for advertising service IPs.
* Support for any other type of BGP speaker other than FRR.

### Future Goals


RPKI support?

Contributor Author

Is it necessary for edge? I guess we would be using it to validate routes propagated to us by the provider's fabric?


I'm interested in its potential benefits, but I haven't seen any requests for it yet. I was curious if you've heard any interest in integrating it. Edge clusters could require it, either to validate routes propagated to/by us to/from the provider's fabric or datacenter (reachable via 3rd party networks).

Contributor Author

I'm not very familiar with it so I'll defer to you. Shall I include it in future goals?

@trozet trozet force-pushed the ovnk_bgp branch 3 times, most recently from f681aba to 7ae25dc Compare June 12, 2024 21:33
Signed-off-by: Tim Rozet <trozet@redhat.com>
Signed-off-by: Tim Rozet <trozet@redhat.com>
Signed-off-by: Tim Rozet <trozet@redhat.com>

##### Egress IP

EgressIP feature that is dynamically moved between nodes. By enabling the feature in RouteAdvertisements, OVN-Kubernetes
Member

Can you provide a RouteAdvertisements example here for both primary EIP and secondary EIP scenarios?
For secondary EIP, it may have to choose another secondary interface on the node for establishing the BGP session with a neighbor. Doesn't it require changes to the CR to accommodate a source address or interface?

Contributor Author

Originally we planned on providing egress IP as well as other features on secondary networks. It looks like that functionality is no longer needed now that we can provide a layer 2 type network as the primary network, so we don't need to worry about providing egress IP functionality on secondary networks.

For providing egress IP multi-NIC functionality (which I think is really what you are getting at) across multiple interfaces for different user defined networks, there are 2 use cases (see the sketch after the list).

  1. Single BGP instance operating in the default VRF with user defined networks leaking into it (targetVRF=default):
    The user would be responsible for setting up BGP peering over those other NICs. Egress IP reachability would be advertised out of all of them. But egress traffic would only go out one of the interfaces.
  2. Multiple VRF BGP instances, using VRF lite to carry the user defined networks over specific interfaces (targetVRF=auto):
    The egress IP would only be advertised over the VRF instance that it maps to, and subsequently the host interface where the BGP peering is happening for that VRF. If egress IP multi-nic was configured to use an interface other than this VRF interface, it should be considered a misconfiguration. We can choose to explicitly raise an error here and/or forbid this behavior.
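For reference, a rough sketch of what the two cases might look like as RouteAdvertisements CRs (the targetVRF values come from this thread; the other fields are illustrative assumptions, not the final API):

```yaml
# Case 1: single BGP instance in the default VRF, UDN routes leaked into it
apiVersion: k8s.ovn.org/v1
kind: RouteAdvertisements
metadata:
  name: eip-default-vrf
spec:
  targetVRF: default
  advertisements:
    - EgressIP
  networkSelector:
    matchLabels:
      k8s.ovn.org/network: blue
---
# Case 2: per-network VRF instances (VRF lite); the egress IP is advertised
# only in the VRF that the selected network maps to
apiVersion: k8s.ovn.org/v1
kind: RouteAdvertisements
metadata:
  name: eip-per-vrf
spec:
  targetVRF: auto
  advertisements:
    - EgressIP
  networkSelector:
    matchLabels:
      k8s.ovn.org/network: blue
```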

Does that make sense?

Member

ok @trozet, for case 1, doesn't this just pick up the primary interface IP for BGP peering (the same IP is used as the next hop for pod IP and egress IP routes)? That's why I thought of providing an option to choose the required interface for BGP peering.

Contributor Author

Hmm so you are thinking of checking to see if BGP is enabled on this network, and it's advertising egress IP, then there should be a BGP peer on the selected egress IP interface? If so, it might be useful to warn the user or maybe post an event to the egress IP, but I'm not sure we want to forbid it.

Member

yes @trozet, that's what I meant, plus the ability to provide BGP over secondary interfaces. But if we plan to provide only L2 support on secondary interfaces, we can skip this for now.

Changes-Include:
 - Removal of support for advertising KAPI VIP
 - Removal of support for advertising service cluster IP/external IP
 - OpenShift API added

Signed-off-by: Tim Rozet <trozet@redhat.com>
Comment on lines 76 to 79
### Multi-homing, Link Redundancy, Fast Convergence

BGP can use multi-homing with ECMP routing in order to provide layer 3 failover. When a link goes down, BGP can reroute
via a different path. This functionality can be coupled with BFD in order to provide fast failover.
Contributor

Failover/redundancy of what exactly?

Member

BGP allows learning multiple paths to the same destination, weighted or not (that's the multi-homing / ECMP routing part). The redundancy is about the various paths available to reach the announced target; the failover is about learning more or less quickly that a particular path is down.

Contributor

Yes, I understand what the feature means in general; I'm asking for the specifics of why you brought it up here, since you don't talk about multiple paths or failover anywhere else in the enhancement.

E.g., are you saying, perhaps, that you are going to advertise that all pod IPs are reachable via all nodes?

Contributor Author

No, we are not advertising all IPs as reachable by any node. The multiple paths come from multiple NICs on the host connected to the BGP fabric. This is common in spine-and-leaf topologies. This way, if one link to a BGP router goes down, there is a second link available to another leaf, and the failover happens very quickly with BFD. I'll clarify the use case.


* To provide a user-facing API to allow configuration of iBGP or eBGP peers, along with typical BGP configuration
including communities, route filtering, etc.
* Support for advertising Egress IP addresses.
Contributor

Advertising egress IP addresses to whom? What functionality does this enable?

Member

this is useful when an egress-IP'd pod sends a packet to an endpoint within the fabric: in order to allow the return traffic to reach the pod, the endpoint's reply needs to find its way to the node.
Since egress IP is dynamic, here we announce it via BGP so the fabric will learn how to drive the return traffic to the proper node.

Contributor Author

in the user stories we have:

  • As a baremetal or egress IP user, I do not want to have to restrict my nodes to the same layer 2 segment and prefer
    to use a pure routing implementation to handle advertising virtual IP (VIP) movement across nodes.

which I think covers it...although we removed support for baremetal VIP from this enhancement so I'll clean that up a bit

* To provide a user-facing API to allow configuration of iBGP or eBGP peers, along with typical BGP configuration
including communities, route filtering, etc.
* Support for advertising Egress IP addresses.
* To enable BFD to BGP peers.
Contributor

I don't know a lot about BGP and I know even less about BFD, but this feels underspecified as well. Among other things, does this mean allowing ovn-k to do BFD to external BGP peers, or allowing the external network to do BFD to ovn-k? And why? What will this enable that wouldn't be possible without this feature?

Member

A BGP session can be associated with and backed by a BFD session towards the same neighbor.
BGP alone has a minimum link failure detection time of 3 seconds, while BFD can go sub-second.
When this is enabled, as soon as BFD declares the link dead, the BGP session disconnects and the routes through that link are removed.

Contributor Author

The question of "what" is doing BFD is a good one. OVN is capable of doing BFD. We use it today with Multiple External Gateways (MEG). In this case, when one of the external gateways goes down, BFD detects it within a couple hundred milliseconds, and OVN purges the routes to that gateway, so that egress traffic will choose a different one.

FRR has BFD daemon support, where it can run BFD in the host and detect if peers go down. With this enhancement OVN has no integration with FRR, so it cannot tell FRR to remove the routes from its RIB (although it could remove the routes from the OVN datapath). Trying to hack that together would be dicey until we have real support with OVN and FRR. Therefore, for the purpose of this enhancement, BFD will be done with FRR, so the flow would look like this (a rough config sketch follows the list):

  1. BFDd detects a failure and notifies FRR.
  2. FRR removes the routes from its RIB.
  3. OVNK notices via netlink that the routes are gone, and removes them from OVN.
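For reference, a BFD-backed peering in FRR-K8S might look roughly like this (addresses, ASNs, intervals, and the namespace are made-up values; the field layout follows the frrk8s.metallb.io FRRConfiguration API as best I understand it):

```yaml
apiVersion: frrk8s.metallb.io/v1beta1
kind: FRRConfiguration
metadata:
  name: bfd-backed-peering
  namespace: frr-k8s-system       # assumed namespace
spec:
  bgp:
    routers:
    - asn: 64512
      neighbors:
      - address: 172.30.0.3       # example external peer
        asn: 64512
        bfdProfile: fast-failover # ties the BGP session to a BFD session
    bfdProfiles:
    - name: fast-failover
      receiveInterval: 300        # ms; sub-second detection vs BGP's ~3s minimum
      transmitInterval: 300
```

When BFDd declares the peer down, FRR tears down the session and withdraws the routes, which OVNK then observes via netlink as described above.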

Contributor Author

added a BFD section in the doc


OVN-Kubernetes will leverage other projects that already exist to enable BGP in Linux. FRR will be used as the BGP
speaker and already has EVPN support for native Linux constructs like Linux bridges, VRF devices, VXLAN tunnels, etc.
FRR may need some code contributions to allow it to integrate with OVN and Open vSwitch. For FRR configuration, the
Contributor

"some"? Is this a trivial amount of work or a major part of the overall BGP-in-OVNK project?

Contributor Author

I think I copied this from the draft document that included EVPN and did not reword it. For detailed info about the work needed in OVN see: https://issues.redhat.com/browse/FDP-670?focusedId=25089147&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-25089147

Requirements 1 and 2 are applicable to this enhancement. I'll try to reword things in the enhancement to make more sense.

// FRR is required for using any OpenShift features that require dynamic
// routing. This includes BGP support for OVN-Kubernetes and MetalLB.
// +optional
DeployFRR *bool `json:"deployFRR,omitempty"`
Contributor

Seems like this should have a more generic name referring to "BGP" rather than referring to the specific implementation of it that we happen to currently be using.

Contributor Author

Yeah Jaime had the same feedback. I didn't want to list it as "BGP" because FRR is an entire routing stack itself, so it would also enable OSPF, etc. I think @jcaamano came up with "AdvancedRouting" so I'll go with that for now.
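A sketch of the renamed field, assuming the "AdvancedRouting" name sticks (illustrative only, not the final operator API):

```go
// AdvancedRouting deploys the FRR routing stack, which is required for any
// OpenShift features that need dynamic routing. This includes BGP support
// for OVN-Kubernetes and MetalLB. (Hypothetical rename of DeployFRR.)
// +optional
AdvancedRouting *bool `json:"advancedRouting,omitempty"`
```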


A networkSelector may select more than one network, including user-defined networks. In such a case the network subnets
will be checked by OVN-Kubernetes to determine if there is any overlap of the IP subnets. If so, an error status will be
reported to the CRD and no BGP configuration will be done by OVN-Kubernetes.
Contributor

Are the subnets in a network immutable after creation time? If that's not enforced by validation then you also need to recheck the RouteAdvertisements when the Network changes

Member

Changing them after creation (and after pods have been assigned a CIDR) can be messy, I think. What is going to happen to a pod with an IP that no longer belongs to the subnet?

I think it's reasonable to have the subnets immutable and enforce that, possibly with CEL transition rules, but I'd wait for @trozet

Contributor Author

It should not be possible to change the subnet after creation time. Thanks for the comment, I will add that to the UDN CRD PR: #1638
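For what it's worth, that could be enforced with a CEL transition rule on the subnets field, roughly like this kubebuilder marker (the field name and type are assumptions about the UDN CRD):

```go
// Subnets lists the subnets used by this network. Immutable after creation.
// +kubebuilder:validation:XValidation:rule="self == oldSelf",message="subnets is immutable"
Subnets []string `json:"subnets"`
```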

Comment on lines +287 to +290
The frrConfigurationSelector is used in order to determine which FRRConfiguration CR to use for building the OVN-Kubernetes
driven FRRConfiguration. OVN-Kubernetes needs to leverage a pre-existing FRRConfiguration to be able to find required
pieces of configuration like BGP peering, etc. If more than one FRRConfiguration is found matching the selector, then
an error will be propagated to the RouteAdvertisements CR and no configuration shall be done.
Contributor

If this is not actually frr-specific configuration then it should have a more generic name. If it is frr-specific then it probably shouldn't be. We should be providing high-level configuration, not escape hatches for people to directly configure the low-level implementation (which are likely to lead to compatibility problems and customers doing things we didn't intend for them to be doing).

Member

The two options here were either duplicating what we have in the FRRConfiguration CRD (mostly related to how to establish a BGP session with a given neighbor, such as IP, password, ASN, timers and eventually BFD) or relying on the existing configuration and selecting it, to avoid having multiple ways to describe the same thing.

On the other hand, an OVNK wrapper would make the API more sealed and would open the door to easily changing the underlying BGP implementation.

Contributor Author

The FRR technology is part of the API (FRR-K8S), and OVNK is generating FRR-K8S API from other FRR-K8S CRs, hence why the selector is there. I guess if we want to abstract from FRR in OVNK, we would need a more generic construct like:

```yaml
RouteAdvertisements:
  sourceRoutingConfiguration:
    type: frr-k8s
    selector:
      select: something
```

Is this what you are thinking, @danwinship?

Contributor

@danwinship danwinship Jul 11, 2024

I mean

  1. If we aren't certain that we will always want to be using frr as the backend, then we should be careful about exposing the fact that we currently are
  2. If we don't explicitly intend for the admin to be able to use every possible frr feature (including future ones we don't know about yet), then we shouldn't expose every possible frr config option.

Like, the network.operator.openshift.io object lets you configure arbitrary kube-proxy options and that was always somewhat problematic for openshift-sdn, because the admin might change the proxy behavior in some way we weren't really expecting or accounting for. Or with ingress-nginx, apparently some annotation values get substituted directly into the generated nginx config, and this has been a big source of CVEs, because there's no way ingress-nginx can understand every possible nginx config option to make sure you're using it in a way that makes sense from ingress-nginx.

Contributor Author

I see your point, but that is kind of outside the scope of this enhancement. FRR-K8S is a feature that is going to be GA'ed in 4.17 with or without OVNK using it. From the OVNK side we are restricting what config we will allow configured in FRR by using the routeAdvertisements CRD. However, nothing stops a user from going and creating an FRR-K8S rawConfig type CR, and putting whatever config they want in it.

@fedepaol have you thought about what support will look like for FRR-K8S in the case where a user does rawConfig?

Contributor

However, nothing stops a user from going and creating an FRR-K8S rawConfig type CR, and putting whatever config they want in it.

I assume you mean that cluster admins can do that, not anyone else, right?

If frr-k8s lets cluster admins specify arbitrary config, then that seems bad for support reasons, but it's not a security problem (and it's out of scope for this enhancement I guess).

But we need to make sure that in the case where a namespace admin creates a user-defined primary network for their namespace, that they can't also configure routing in any ways that the cluster admin didn't intend for them to. (Maybe this is already covered... I didn't check all the details of what config is allowed in what cases...)

Member

frr-k8s is very limited in what it allows you to do (basically, today it allows what MetalLB does: advertising prefixes, plus the ability to receive them too).

The rawConfig field is not being documented and there's no plan to support it, but it's there to allow experimenting and as a backdoor in case there are scenarios to be solved quickly. If a user wants to use rawConfig, there'll be a support exception and a clear agreement on what they can add and why they can use it.

Finally, the frr-k8s daemon API is namespace-scoped, and the instance ignores entries that do not belong to the namespace created by CNO for it, so it's definitely only for the cluster admin.

use an environment where a user has created an FRRConfiguration:

```yaml
apiVersion: frrk8s.metallb.io/v1beta1
kind: FRRConfiguration
# (remainder of the quoted example truncated here)
```
Contributor

ovn-k should not be depending on a metallb type...

Contributor Author

the FRR-K8S project (the API to configure FRR via kube) was done in the metallb project. Are you suggesting it move to its own project?

Contributor

oh, I was thinking this was more like some internal metallb type


### API Extensions
The FRR-K8S API will be used to create BGP peerings and other BGP-related configuration. A
RouteAdvertisements CRD will be introduced to determine which routes should be advertised for specific networks.
Contributor

@jcaamano jcaamano Jul 10, 2024

  • Can multiple RouteAdvertisements CRs select the same network?
  • Can a RouteAdvertisements CR select network blue and target VRF red? Or target VRF should be either blue, auto or default?

Contributor Author

  • Can multiple RouteAdvertisements CRs select the same network?

We should not allow this. If it happens we should report an error status to the CR.

  • Can a RouteAdvertisements CR select network blue and target VRF red? Or target VRF should be either blue, auto or default?

For now, we should not allow leaking between VRFs other than the default. If we leaked blue into red, when ingress traffic arrived into red destined for blue, we would have no ip rule or anything to steer it into the blue VRF. Given our limited time and capacity, I think we should just error in the CR if we detect this case.

agree?
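To make the failure visible, a condition like this on the RouteAdvertisements status could convey the error (the shape is an assumption, following the usual metav1.Condition pattern):

```yaml
# hypothetical status reported on the RouteAdvertisements CR
status:
  conditions:
  - type: Accepted
    status: "False"
    reason: ConflictingTargetVRF
    message: "network blue cannot be advertised into non-default VRF red"
```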

Signed-off-by: Tim Rozet <trozet@redhat.com>

In OVN, BFD is supported today with the Multiple External Gateways (MEG) feature. However, there is no support for
OVN to communicate with the FRR stack. Therefore for BFD support with BGP, FRR BFD daemon will be used in order to
detect BFD link failures. It will then signal to FRR that the peer is down, and FRR will remove the routes from
Member

nit: this also lets the external router remove routes advertised by OVNK / FRR-K8S

Contributor Author

clarified

Signed-off-by: Tim Rozet <trozet@redhat.com>
@jcaamano
Contributor

jcaamano commented Aug 5, 2024

/lgtm

@openshift-ci openshift-ci bot added the lgtm Indicates that a PR is ready to be merged. label Aug 5, 2024
@jcaamano
Contributor

jcaamano commented Aug 5, 2024

/approve

Contributor

openshift-ci bot commented Aug 5, 2024

@trozet: all tests passed!


@trozet
Contributor Author

trozet commented Aug 5, 2024

/approve

Contributor

openshift-ci bot commented Aug 5, 2024

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: jcaamano, trozet


@openshift-ci openshift-ci bot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Aug 5, 2024
@openshift-merge-bot openshift-merge-bot bot merged commit bdebb88 into openshift:master Aug 5, 2024
2 checks passed