CRD source: add event-handler support #2220

Merged

Conversation

ericrrath
Contributor

Description

When the --events flag is passed at startup, Source.AddEventHandler() is called
on each configured source. Most sources provide AddEventHandler()
implementations that invoke the reconciliation loop when the configured source
changes, but the CRD source had a no-op implementation. That is, when a custom
resource was created, updated, or deleted, external-dns remained unaware, and the
reconciliation loop did not fire until the configured interval had passed.

This change adds an informer (on the CRD specified by --crd-source-apiversion
and --crd-source-kind=DNSEndpoint), and a Source.AddEventHandler()
implementation that calls Informer.AddEventHandler(). Now when a custom
resource is created, updated, or deleted, the reconciliation loop is invoked.
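
Roughly, the wiring looks like the following sketch. This is not the exact diff: the crdSource fields and the AddEventHandler signature shown here are simplified assumptions; the event plumbing uses client-go's cache.ResourceEventHandlerFuncs.

package source

import (
	"context"

	"k8s.io/client-go/tools/cache"
)

// crdSource is a simplified stand-in for the real CRD source type.
type crdSource struct {
	informer cache.SharedIndexInformer
}

// AddEventHandler registers handler so that any create, update, or delete of
// the watched custom resource triggers a reconciliation run.
func (cs *crdSource) AddEventHandler(ctx context.Context, handler func()) {
	if cs.informer == nil {
		// No informer was started (e.g. --events was not passed); nothing to hook up.
		return
	}
	cs.informer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc:    func(obj interface{}) { handler() },
		UpdateFunc: func(oldObj, newObj interface{}) { handler() },
		DeleteFunc: func(obj interface{}) { handler() },
	})
}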

Testing

I ran external-dns with the "inmemory" provider, the "noop" registry, and the --events flag with a CRD source, and observed normal startup:

host:~ user$ external-dns \
> --provider=inmemory \
> --inmemory-zone=example.com \
> --registry=noop \
> --source=crd \
> --crd-source-apiversion=example.com/v1 \
> --crd-source-kind=DNSEndpoint \
> --events \
> --interval=1800s
INFO[0000] config: {APIServerURL: KubeConfig: RequestTimeout:30s DefaultTargets:[] ContourLoadBalancerService:heptio-contour/contour GlooNamespace:gloo-system SkipperRouteGroupVersion:zalando.org/v1 Sources:[crd] Namespace: AnnotationFilter: LabelFilter: FQDNTemplate: CombineFQDNAndAnnotation:false IgnoreHostnameAnnotation:false IgnoreIngressTLSSpec:false IgnoreIngressRulesSpec:false Compatibility: PublishInternal:false PublishHostIP:false AlwaysPublishNotReadyAddresses:false ConnectorSourceServer:localhost:8080 Provider:inmemory GoogleProject: GoogleBatchChangeSize:1000 GoogleBatchChangeInterval:1s GoogleZoneVisibility: DomainFilter:[] ExcludeDomains:[] RegexDomainFilter: RegexDomainExclusion: ZoneNameFilter:[] ZoneIDFilter:[] AlibabaCloudConfigFile:/etc/kubernetes/alibaba-cloud.json AlibabaCloudZoneType: AWSZoneType: AWSZoneTagFilter:[] AWSAssumeRole: AWSBatchChangeSize:1000 AWSBatchChangeInterval:1s AWSEvaluateTargetHealth:true AWSAPIRetries:3 AWSPreferCNAME:false AWSZoneCacheDuration:0s AzureConfigFile:/etc/kubernetes/azure.json AzureResourceGroup: AzureSubscriptionID: AzureUserAssignedIdentityClientID: BluecatConfigFile:/etc/kubernetes/bluecat.json CloudflareProxied:false CloudflareZonesPerPage:50 CoreDNSPrefix:/skydns/ RcodezeroTXTEncrypt:false AkamaiServiceConsumerDomain: AkamaiClientToken: AkamaiClientSecret: AkamaiAccessToken: AkamaiEdgercPath: AkamaiEdgercSection: InfobloxGridHost: InfobloxWapiPort:443 InfobloxWapiUsername:admin InfobloxWapiPassword: InfobloxWapiVersion:2.3.1 InfobloxSSLVerify:true InfobloxView: InfobloxMaxResults:0 InfobloxFQDNRegEx: DynCustomerName: DynUsername: DynPassword: DynMinTTLSeconds:0 OCIConfigFile:/etc/kubernetes/oci.yaml InMemoryZones:[example.com] OVHEndpoint:ovh-eu OVHApiRateLimit:20 PDNSServer:http://localhost:8081 PDNSAPIKey: PDNSTLSEnabled:false TLSCA: TLSClientCert: TLSClientCertKey: Policy:sync Registry:noop TXTOwnerID:default TXTPrefix: TXTSuffix: Interval:30m0s MinEventSyncInterval:5s Once:false DryRun:false UpdateEvents:true LogFormat:text MetricsAddress::7979 LogLevel:info TXTCacheInterval:0s TXTWildcardReplacement: ExoscaleEndpoint:https://api.exoscale.ch/dns ExoscaleAPIKey: ExoscaleAPISecret: CRDSourceAPIVersion:example.com/v1 CRDSourceKind:DNSEndpoint ServiceTypeFilter:[] CFAPIEndpoint: CFUsername: CFPassword: RFC2136Host: RFC2136Port:0 RFC2136Zone: RFC2136Insecure:false RFC2136GSSTSIG:false RFC2136KerberosRealm: RFC2136KerberosUsername: RFC2136KerberosPassword: RFC2136TSIGKeyName: RFC2136TSIGSecret: RFC2136TSIGSecretAlg: RFC2136TAXFR:false RFC2136MinTTL:0s RFC2136BatchChangeSize:50 NS1Endpoint: NS1IgnoreSSL:false NS1MinTTLSeconds:0 TransIPAccountName: TransIPPrivateKeyFile: DigitalOceanAPIPageSize:50 ManagedDNSRecordTypes:[A CNAME] GoDaddyAPIKey: GoDaddySecretKey: GoDaddyTTL:0 GoDaddyOTE:false} 
INFO[0000] Instantiating new Kubernetes client          
INFO[0000] Using kubeConfig                             
INFO[0000] Created Kubernetes client https://138.1.18.242:6443 
INFO[0008] All records are already up to date

Then I used the following files to create new instances of my custom resource, one in the default namespace, and one in the "foo" namespace (to verify that the event handler logic handles namespaces correctly):

host:~ user$ cat dnsendpoint-default.yaml 
apiVersion: example.com/v1
kind: DNSEndpoint
metadata:
  name: in-default-namespace
  namespace: default
spec:
  endpoints:
  - dnsName: 'in-default.example.com'
    recordType: A
    targets:
    - 192.0.2.1
host:~ user$ cat dnsendpoint-foo.yaml 
apiVersion: example.com/v1
kind: DNSEndpoint
metadata:
  name: in-foo-namespace
  namespace: foo
spec:
  endpoints:
  - dnsName: 'in-foo.example.com'
    recordType: A
    targets:
    - 192.0.2.2

I applied the files to create the resources, with a `sleep 30` in between so I could check whether external-dns detected the two creations a corresponding interval apart:

host:~ user$ kubectl apply -f dnsendpoint-default.yaml 
dnsendpoint.example.com/in-default-namespace created
host:~ user$ sleep 30
host:~ user$ kubectl apply -f dnsendpoint-foo.yaml 
dnsendpoint.example.com/in-foo-namespace created

Then I observed external-dns log output indicating that the resource creations were detected and processed, with the expected interval between them:

INFO[0035] CREATE: in-default.example.com 0 IN A  192.0.2.1 [] 
INFO[0041] All records are already up to date           
INFO[0078] CREATE: in-foo.example.com 0 IN A  192.0.2.2 [] 
INFO[0084] All records are already up to date           

Then I deleted the resources:

host:~ user$ kubectl delete dnsendpoint in-default-namespace
dnsendpoint.example.com "in-default-namespace" deleted
host:~ user$ sleep 30
host:~ user$ kubectl -n foo delete dnsendpoint in-foo-namespace
dnsendpoint.example.com "in-foo-namespace" deleted

... and observed logging output indicating the resource deletions were detected and processed:

INFO[0119] DELETE: in-default.example.com 0 IN A  192.0.2.1 [] 
INFO[0165] DELETE: in-foo.example.com 0 IN A  192.0.2.2 [] 

Fixes #ISSUE

Checklist

  • Unit tests updated - I couldn't think of a way to unit-test this yet
  • End user documentation updated - will check

@k8s-ci-robot k8s-ci-robot added cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. size/M Denotes a PR that changes 30-99 lines, ignoring generated files. labels Aug 12, 2021
source/store.go (review thread; outdated, resolved)
@njuettner njuettner self-assigned this Sep 1, 2021
@k8s-ci-robot k8s-ci-robot added the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Oct 20, 2021
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 18, 2022
// At present, client-go's fake.RESTClient (used by crd_test.go) is known to cause race conditions when used
// with informers: https://github.com/kubernetes/kubernetes/issues/95372
// So don't start the informer during testing.
startInformer := false
Contributor

It looks like there is a way around that. kubernetes/kubernetes#95897

@k0da
Contributor

k0da commented Jan 18, 2022

This is a must-have feature; can we get it in?

@k0da
Contributor

k0da commented Jan 18, 2022

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 18, 2022
This change disables the CRD source's informer during tests.  I made the mistake
of not running `make test` before the previous commit, and thus didn't realize
that leaving the informer enabled during the tests introduced a race condition:

	WARNING: DATA RACE
	Write at 0x00c0005aa130 by goroutine 59:
	  k8s.io/client-go/rest/fake.(*RESTClient).do()
		  /Users/erath/go/pkg/mod/k8s.io/client-go@v0.18.8/rest/fake/fake.go:113 +0x69
	  k8s.io/client-go/rest/fake.(*RESTClient).do-fm()
		  /Users/erath/go/pkg/mod/k8s.io/client-go@v0.18.8/rest/fake/fake.go:109 +0x64
	  k8s.io/client-go/rest/fake.roundTripperFunc.RoundTrip()
		  /Users/erath/go/pkg/mod/k8s.io/client-go@v0.18.8/rest/fake/fake.go:43 +0x3d
	  net/http.send()
		  /usr/local/go/src/net/http/client.go:251 +0x6da
	  net/http.(*Client).send()
		  /usr/local/go/src/net/http/client.go:175 +0x1d5
	  net/http.(*Client).do()
		  /usr/local/go/src/net/http/client.go:717 +0x2cb
	  net/http.(*Client).Do()
		  /usr/local/go/src/net/http/client.go:585 +0x68b
	  k8s.io/client-go/rest.(*Request).request()
		  /Users/erath/go/pkg/mod/k8s.io/client-go@v0.18.8/rest/request.go:855 +0x209
	  k8s.io/client-go/rest.(*Request).Do()
		  /Users/erath/go/pkg/mod/k8s.io/client-go@v0.18.8/rest/request.go:928 +0xf0
	  sigs.k8s.io/external-dns/source.(*crdSource).List()
		  /Users/erath/go/src/github.com/ericrrath/external-dns/source/crd.go:250 +0x28c
	  sigs.k8s.io/external-dns/source.NewCRDSource.func1()
		  /Users/erath/go/src/github.com/ericrrath/external-dns/source/crd.go:125 +0x10a
	  k8s.io/client-go/tools/cache.(*ListWatch).List()
		  /Users/erath/go/pkg/mod/k8s.io/client-go@v0.18.8/tools/cache/listwatch.go:106 +0x94
	  k8s.io/client-go/tools/cache.(*Reflector).ListAndWatch.func1.1.2()
		  /Users/erath/go/pkg/mod/k8s.io/client-go@v0.18.8/tools/cache/reflector.go:233 +0xf4
	  k8s.io/client-go/tools/pager.SimplePageFunc.func1()
		  /Users/erath/go/pkg/mod/k8s.io/client-go@v0.18.8/tools/pager/pager.go:40 +0x94
	  k8s.io/client-go/tools/pager.(*ListPager).List()
		  /Users/erath/go/pkg/mod/k8s.io/client-go@v0.18.8/tools/pager/pager.go:91 +0x1f4
	  k8s.io/client-go/tools/cache.(*Reflector).ListAndWatch.func1.1()
		  /Users/erath/go/pkg/mod/k8s.io/client-go@v0.18.8/tools/cache/reflector.go:258 +0x2b7

	Previous write at 0x00c0005aa130 by goroutine 37:
	  k8s.io/client-go/rest/fake.(*RESTClient).do()
		  /Users/erath/go/pkg/mod/k8s.io/client-go@v0.18.8/rest/fake/fake.go:113 +0x69
	  k8s.io/client-go/rest/fake.(*RESTClient).do-fm()
		  /Users/erath/go/pkg/mod/k8s.io/client-go@v0.18.8/rest/fake/fake.go:109 +0x64
	  k8s.io/client-go/rest/fake.roundTripperFunc.RoundTrip()
		  /Users/erath/go/pkg/mod/k8s.io/client-go@v0.18.8/rest/fake/fake.go:43 +0x3d
	  net/http.send()
		  /usr/local/go/src/net/http/client.go:251 +0x6da
	  net/http.(*Client).send()
		  /usr/local/go/src/net/http/client.go:175 +0x1d5
	  net/http.(*Client).do()
		  /usr/local/go/src/net/http/client.go:717 +0x2cb
	  net/http.(*Client).Do()
		  /usr/local/go/src/net/http/client.go:585 +0x68b
	  k8s.io/client-go/rest.(*Request).request()
		  /Users/erath/go/pkg/mod/k8s.io/client-go@v0.18.8/rest/request.go:855 +0x209
	  k8s.io/client-go/rest.(*Request).Do()
		  /Users/erath/go/pkg/mod/k8s.io/client-go@v0.18.8/rest/request.go:928 +0xf0
	  sigs.k8s.io/external-dns/source.(*crdSource).List()
		  /Users/erath/go/src/github.com/ericrrath/external-dns/source/crd.go:250 +0x28c
	  sigs.k8s.io/external-dns/source.(*crdSource).Endpoints()
		  /Users/erath/go/src/github.com/ericrrath/external-dns/source/crd.go:171 +0x13c4
	  sigs.k8s.io/external-dns/source.testCRDSourceEndpoints.func1()
		  /Users/erath/go/src/github.com/ericrrath/external-dns/source/crd_test.go:388 +0x4f6
	  testing.tRunner()
		  /usr/local/go/src/testing/testing.go:1193 +0x202

	Goroutine 59 (running) created at:
	  k8s.io/client-go/tools/cache.(*Reflector).ListAndWatch.func1()
		  /Users/erath/go/pkg/mod/k8s.io/client-go@v0.18.8/tools/cache/reflector.go:224 +0x36f
	  k8s.io/client-go/tools/cache.(*Reflector).ListAndWatch()
		  /Users/erath/go/pkg/mod/k8s.io/client-go@v0.18.8/tools/cache/reflector.go:316 +0x1ab
	  k8s.io/client-go/tools/cache.(*Reflector).Run.func1()
		  /Users/erath/go/pkg/mod/k8s.io/client-go@v0.18.8/tools/cache/reflector.go:177 +0x4a
	  k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1()
		  /Users/erath/go/pkg/mod/k8s.io/apimachinery@v0.18.8/pkg/util/wait/wait.go:155 +0x75
	  k8s.io/apimachinery/pkg/util/wait.BackoffUntil()
		  /Users/erath/go/pkg/mod/k8s.io/apimachinery@v0.18.8/pkg/util/wait/wait.go:156 +0xba
	  k8s.io/client-go/tools/cache.(*Reflector).Run()
		  /Users/erath/go/pkg/mod/k8s.io/client-go@v0.18.8/tools/cache/reflector.go:176 +0xee
	  k8s.io/client-go/tools/cache.(*Reflector).Run-fm()
		  /Users/erath/go/pkg/mod/k8s.io/client-go@v0.18.8/tools/cache/reflector.go:174 +0x54
	  k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
		  /Users/erath/go/pkg/mod/k8s.io/apimachinery@v0.18.8/pkg/util/wait/wait.go:56 +0x45
	  k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1()
		  /Users/erath/go/pkg/mod/k8s.io/apimachinery@v0.18.8/pkg/util/wait/wait.go:73 +0x6d

	Goroutine 37 (running) created at:
	  testing.(*T).Run()
		  /usr/local/go/src/testing/testing.go:1238 +0x5d7
	  sigs.k8s.io/external-dns/source.testCRDSourceEndpoints()
		  /Users/erath/go/src/github.com/ericrrath/external-dns/source/crd_test.go:376 +0x1fcf
	  testing.tRunner()
		  /usr/local/go/src/testing/testing.go:1193 +0x202

It looks like client-go's fake.RESTClient (used by crd_test.go) is known to
cause race conditions when used with informers:
<kubernetes/kubernetes#95372>.  None of the CRD tests
_depend_ on the informer yet, so disabling the informer at least allows the
existing tests to pass without race conditions.  I'll look into further changes
that 1) test the new event-handler behavior, and 2) allow all tests to pass
without race conditions.

njuettner suggested using a var instead of boolean literals for the
startInformer arg to NewCRDSource; good idea.
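
A simplified sketch of the gating described in these two commits follows; the helper name and signature are hypothetical, not the code in the PR. The idea is that the informer is only created and started when the caller asks for it, and crd_test.go passes a named variable such as startInformer := false instead of a bare literal, so the fake.RESTClient race never comes into play.

package source

import "k8s.io/client-go/tools/cache"

// startCRDInformer starts the given informer only when startInformer is true.
// Tests leave it false because client-go's fake.RESTClient is not safe to use
// with a running informer (kubernetes/kubernetes#95372).
func startCRDInformer(informer cache.SharedIndexInformer, startInformer bool) {
	if !startInformer {
		return
	}
	stopCh := make(chan struct{})
	go informer.Run(stopCh)
	// Block until the initial cache sync completes before handing the source out.
	cache.WaitForCacheSync(stopCh, informer.HasSynced)
}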
@k8s-ci-robot k8s-ci-robot removed the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Feb 5, 2022
@seanmalloy
Member

/kind feature

@k8s-ci-robot k8s-ci-robot added the kind/feature Categorizes issue or PR as related to a new feature. label Feb 11, 2022
@mgruener
Contributor

mgruener commented Mar 7, 2022

What remains to be done here? This would be quite an important feature for anyone who can't run external-dns with a very short sync interval.

We are running external-dns on ~40 clusters against the same AWS account, and with that setup we run into AWS API rate limiting if we set external-dns to sync every few minutes. Because of this, the normal reconcile only runs every 60 minutes, which means our users have to wait at least an hour until their deployments are fully up and reachable. If the external-dns CRD source supported event handling, this could be massively improved.

@ericrrath
Contributor Author

ericrrath commented Mar 7, 2022

I'm not aware of any more changes to the PR; I think this is ready to go once it's approved.

@jhoelzel

jhoelzel commented Apr 1, 2022

I would also love to see this PR approved; it's an amazing feature to have =)

@tanujd11
Contributor

Can we have this merged, please, if it is tested? We need to use this feature. Thanks.

@sameeraksc

sameeraksc commented Jul 6, 2022

We also need to use this feature. Thanks

@jhoelzel

jhoelzel commented Jul 6, 2022

It seems this is already implemented in the Helm chart:
repoURL: https://charts.bitnami.com/bitnami
targetRevision: 6.5.3
chart: external-dns

triggerLoopOnEvent: true
policy: sync
sources:
  - ingress
  - crd
  - service
domainFilters:
  - xxy.test.com
provider: pdns
txtOwnerId: prodk3s
txtPrefix: external-dns
registry: txt
crd:
  create: true

or, if you are using Argo CD (like you should):

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: external-dns
  namespace: argocd
  annotations:
    argocd.argoproj.io/sync-wave: "2"
spec:
  destination:
    namespace: external-dns
    server: https://kubernetes.default.svc
  project: system
  source:
    repoURL: https://charts.bitnami.com/bitnami
    targetRevision: 6.5.3
    chart: external-dns
    helm:
      values: |
        triggerLoopOnEvent: true
        policy: sync
        sources:
          - ingress
          - crd
          - service
        domainFilters:
          - k8s.xxx.xxx
        provider: pdns
        txtOwnerId: prodk3s
        txtPrefix: external-dns
        registry: txt
        pdns:
          apiUrl: http://xxx
          apiKey: xxx
          apiPort: 80
        crd:
          create: true
  syncPolicy:
    automated:
      prune: true
      selfHeal: true

and is also documented here:
https://github.com/kubernetes-sigs/external-dns/blob/master/docs/contributing/crd-source.md

I have been using it for a while now and it works like a charm

@metroshica

@njuettner Is there anything else missing for this to be merged in?

@sameeraksc

It's not working for us, @jhoelzel. @njuettner, can you please check whether everything is in order?

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Nov 14, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Dec 14, 2022
@Djelibeybi

/remove-lifecycle rotten

@k8s-ci-robot k8s-ci-robot removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label Dec 14, 2022
source/store.go (review thread; outdated, resolved)
mgruener suggested that the --events flag could be wired to control whether or
not the CRD source created and started its informer.  This commit makes that
change; good idea!
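
A hypothetical sketch of that wiring follows; the UpdateEvents field name matches the config dump earlier in this PR, but everything else is illustrative, not the actual external-dns code.

package main

import "fmt"

// config mirrors the relevant piece of external-dns configuration: the
// --events flag surfaces as UpdateEvents.
type config struct {
	UpdateEvents bool
}

// buildCRDSource stands in for the real source factory: the informer is only
// created and started when event handling was requested.
func buildCRDSource(cfg config) {
	startInformer := cfg.UpdateEvents
	if startInformer {
		fmt.Println("CRD source: creating and starting informer")
	} else {
		fmt.Println("CRD source: no informer; relying on interval-based sync only")
	}
}

func main() {
	buildCRDSource(config{UpdateEvents: true})
}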
Member

@njuettner njuettner left a comment

/lgtm

@k8s-ci-robot k8s-ci-robot added the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Apr 14, 2023
@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: ericrrath, jlamillan, nachomillangarcia, njuettner

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Apr 14, 2023
@njuettner
Member

/ok-to-test

@k8s-ci-robot k8s-ci-robot added the ok-to-test Indicates a non-member PR verified by an org member that is safe to test. label Apr 14, 2023
@k8s-ci-robot k8s-ci-robot merged commit e6ec8ea into kubernetes-sigs:master Apr 14, 2023