
etcd3 with enableEtcdTLS and enableTLSAuth failing on AWS #6024

Closed · Vlaaaaaaad opened this issue Oct 31, 2018 · 6 comments
Labels: lifecycle/rotten (Denotes an issue or PR that has aged beyond stale and will be auto-closed.)

@Vlaaaaaaad (Contributor)

TL;DR: etcd3 used with enableEtcdTLS and enableTLSAuth fails TLS auth, most likely because reverse DNS does not resolve to the expected names.
Is this a supported feature? Am I doing something wrong? Am I missing something?

Template:

1. What kops version are you running? The command kops version will display
this information.

1.10.0

2. What Kubernetes version are you running? kubectl version will print the
version if a cluster is running or provide the Kubernetes version specified as
a kops flag.

Trying to build a 1.10.6 cluster

3. What cloud provider are you using?
AWS in the us-east-1 region

4. What commands did you run? What is the simplest way to reproduce this issue?
kops create cluster -f mycluster.yaml, where mycluster.yaml is generated with kops toolbox template. The relevant parts:

  • etcd3 is used with enableEtcdTLS and enableTLSAuth both set to true
  • an AWS ACM SSL cert is used for the API LB
    • domains on the cert:
      • bob.vlaaaaaaad.experimental.example.com
      • *.bob.vlaaaaaaad.experimental.example.com
      • api.k8s.bob.vlaaaaaaad.experimental.example.com
      • *.api.k8s.bob.vlaaaaaaad.experimental.example.com
      • internal.k8s.bob.vlaaaaaaad.experimental.example.com
      • *.internal.k8s.bob.vlaaaaaaad.experimental.example.com
      • *.k8s.bob.vlaaaaaaad.experimental.example.com
      • k8s.bob.vlaaaaaaad.experimental.example.com

5. What happened after the commands executed?
The cluster was created, but failed to fully validate because of etcd TLS errors caused by reverse DNS resolution.

In the etcd.log I can see many errors such as:

2018-10-31 11:12:33.876726 I | etcdmain: rejected connection from "10.80.91.18:50678" 
      (error "tls: \"10.80.91.18\" does not match any of DNSNames [\"*.internal.bob.vlaaaaaaad.experimental.example.com\" \"localhost\"]", 
      ServerName "etcd-a.internal.bob.vlaaaaaaad.experimental.example.com",
      IPAddresses ["127.0.0.1"], 
      DNSNames ["*.internal.bob.vlaaaaaaad.experimental.example.com" "localhost"])

Reverse DNS for that IP does not resolve to a name under the cluster domain:

-> dig -x 10.80.91.18

; <<>> DiG 9.9.5-9+deb8u16-Debian <<>> -x 10.80.91.18
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 39576
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;18.91.80.10.in-addr.arpa.	IN	PTR

;; ANSWER SECTION:
18.91.80.10.in-addr.arpa. 20	IN	PTR	ip-10-80-91-18.ec2.internal.

;; Query time: 5 msec
;; SERVER: 10.80.0.2#53(10.80.0.2)
;; WHEN: Wed Oct 31 11:11:29 UTC 2018
;; MSG SIZE  rcvd: 94

-> nslookup 10.80.91.18
Server:		10.80.0.2
Address:	10.80.0.2#53

Non-authoritative answer:
18.91.80.10.in-addr.arpa	name = ip-10-80-91-18.ec2.internal.
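
To make the mismatch concrete, here is a minimal, hypothetical Go sketch (not etcd's actual code, just an illustration using the standard library) that reverse-looks-up the peer IP and tries to match the resulting PTR names against the cert's DNS names from the error above. Given the PTR record shown, it reports no match:

// Hypothetical sketch (not etcd's actual code): reverse-lookup the peer IP
// and check the resulting PTR names against the cert's DNS SANs.
package main

import (
    "fmt"
    "net"
    "strings"
)

// wildcardMatch reports whether name matches pattern, where a "*." prefix in
// the pattern matches exactly one extra DNS label.
func wildcardMatch(pattern, name string) bool {
    name = strings.ToLower(strings.TrimSuffix(name, "."))
    pattern = strings.ToLower(pattern)
    if !strings.HasPrefix(pattern, "*.") {
        return name == pattern
    }
    suffix := pattern[1:] // e.g. ".internal.bob.vlaaaaaaad.experimental.example.com"
    if !strings.HasSuffix(name, suffix) {
        return false
    }
    label := strings.TrimSuffix(name, suffix)
    return label != "" && !strings.Contains(label, ".")
}

func main() {
    peerIP := "10.80.91.18" // rejected connection source from etcd.log
    certDNSNames := []string{"*.internal.bob.vlaaaaaaad.experimental.example.com", "localhost"}

    ptrNames, err := net.LookupAddr(peerIP) // same lookup as `nslookup 10.80.91.18`
    if err != nil {
        fmt.Println("reverse lookup failed:", err)
        return
    }
    for _, name := range ptrNames {
        for _, san := range certDNSNames {
            if wildcardMatch(san, name) {
                fmt.Printf("match: %s ~ %s\n", name, san)
                return
            }
        }
    }
    fmt.Printf("no match: PTR names %v vs cert DNS names %v\n", ptrNames, certDNSNames)
}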

First lines of etcd.log:

2018-10-31 11:53:48.505663 I | pkg/flags: recognized and used environment variable ETCD_ADVERTISE_CLIENT_URLS=https://etcd-a.internal.bob.vlaaaaaaad.experimental.example.com:4001
2018-10-31 11:53:48.507936 I | pkg/flags: recognized and used environment variable ETCD_CERT_FILE=/srv/kubernetes/etcd.pem
2018-10-31 11:53:48.507975 I | pkg/flags: recognized and used environment variable ETCD_CLIENT_CERT_AUTH=true
2018-10-31 11:53:48.507997 I | pkg/flags: recognized and used environment variable ETCD_DATA_DIR=/var/etcd/data
2018-10-31 11:53:48.508033 I | pkg/flags: recognized and used environment variable ETCD_INITIAL_ADVERTISE_PEER_URLS=https://etcd-a.internal.bob.vlaaaaaaad.experimental.example.com:2380
2018-10-31 11:53:48.508048 I | pkg/flags: recognized and used environment variable ETCD_INITIAL_CLUSTER=etcd-a=https://etcd-a.internal.bob.vlaaaaaaad.experimental.example.com:2380,etcd-b=https://etcd-b.internal.bob.vlaaaaaaad.experimental.example.com:2380,etcd-c=https://etcd-c.internal.bob.vlaaaaaaad.experimental.example.com:2380
2018-10-31 11:53:48.508074 I | pkg/flags: recognized and used environment variable ETCD_INITIAL_CLUSTER_STATE=new
2018-10-31 11:53:48.508102 I | pkg/flags: recognized and used environment variable ETCD_INITIAL_CLUSTER_TOKEN=etcd-cluster-token-etcd
2018-10-31 11:53:48.508121 I | pkg/flags: recognized and used environment variable ETCD_KEY_FILE=/srv/kubernetes/etcd-key.pem
2018-10-31 11:53:48.508134 I | pkg/flags: recognized and used environment variable ETCD_LISTEN_CLIENT_URLS=https://0.0.0.0:4001
2018-10-31 11:53:48.508160 I | pkg/flags: recognized and used environment variable ETCD_LISTEN_PEER_URLS=https://0.0.0.0:2380
2018-10-31 11:53:48.508188 I | pkg/flags: recognized and used environment variable ETCD_NAME=etcd-a
2018-10-31 11:53:48.508208 I | pkg/flags: recognized and used environment variable ETCD_PEER_CERT_FILE=/srv/kubernetes/etcd.pem
2018-10-31 11:53:48.508222 I | pkg/flags: recognized and used environment variable ETCD_PEER_CLIENT_CERT_AUTH=true
2018-10-31 11:53:48.508239 I | pkg/flags: recognized and used environment variable ETCD_PEER_KEY_FILE=/srv/kubernetes/etcd-key.pem
2018-10-31 11:53:48.508247 I | pkg/flags: recognized and used environment variable ETCD_PEER_TRUSTED_CA_FILE=/srv/kubernetes/ca.crt
2018-10-31 11:53:48.508280 I | pkg/flags: recognized and used environment variable ETCD_TRUSTED_CA_FILE=/srv/kubernetes/ca.crt
2018-10-31 11:53:48.508324 I | etcdmain: etcd Version: 3.2.24
2018-10-31 11:53:48.508337 I | etcdmain: Git SHA: 420a45226
2018-10-31 11:53:48.508342 I | etcdmain: Go Version: go1.8.7
2018-10-31 11:53:48.508347 I | etcdmain: Go OS/Arch: linux/amd64
2018-10-31 11:53:48.508352 I | etcdmain: setting maximum number of CPUs to 2, total number of available CPUs is 2
2018-10-31 11:53:48.508415 I | embed: peerTLS: cert = /srv/kubernetes/etcd.pem, key = /srv/kubernetes/etcd-key.pem, ca = , trusted-ca = /srv/kubernetes/ca.crt, client-cert-auth = true
2018-10-31 11:53:48.509264 I | embed: listening for peers on https://0.0.0.0:2380
2018-10-31 11:53:48.509322 I | embed: listening for client requests on 0.0.0.0:4001
2018-10-31 11:53:48.528142 I | pkg/netutil: resolving etcd-a.internal.bob.vlaaaaaaad.experimental.example.com:2380 to 203.0.113.123:2380
2018-10-31 11:53:48.528614 I | pkg/netutil: resolving etcd-a.internal.bob.vlaaaaaaad.experimental.example.com:2380 to 203.0.113.123:2380
2018-10-31 11:53:49.570299 I | etcdserver: name = etcd-a
2018-10-31 11:53:49.570386 I | etcdserver: data dir = /var/etcd/data
2018-10-31 11:53:49.570416 I | etcdserver: member dir = /var/etcd/data/member
2018-10-31 11:53:49.570438 I | etcdserver: heartbeat = 100ms
2018-10-31 11:53:49.570459 I | etcdserver: election = 1000ms
2018-10-31 11:53:49.570479 I | etcdserver: snapshot count = 100000
2018-10-31 11:53:49.570509 I | etcdserver: advertise client URLs = https://etcd-a.internal.bob.vlaaaaaaad.experimental.example.com:4001
2018-10-31 11:53:49.570557 I | etcdserver: initial advertise peer URLs = https://etcd-a.internal.bob.vlaaaaaaad.experimental.example.com:2380
2018-10-31 11:53:49.570611 I | etcdserver: initial cluster = etcd-a=https://etcd-a.internal.bob.vlaaaaaaad.experimental.example.com:2380,etcd-b=https://etcd-b.internal.bob.vlaaaaaaad.experimental.example.com:2380,etcd-c=https://etcd-c.internal.bob.vlaaaaaaad.experimental.example.com:2380
2018-10-31 11:53:49.574614 I | etcdserver: starting member 4e471207b5a4e218 in cluster 1e10667ccf3d8cbc
2018-10-31 11:53:49.574706 I | raft: 4e471207b5a4e218 became follower at term 0
2018-10-31 11:53:49.574734 I | raft: newRaft 4e471207b5a4e218 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
2018-10-31 11:53:49.574754 I | raft: 4e471207b5a4e218 became follower at term 1
2018-10-31 11:53:49.581800 W | auth: simple token is not cryptographically signed
2018-10-31 11:53:49.586391 I | rafthttp: starting peer 147e07d111cd56bb...
2018-10-31 11:53:49.586454 I | rafthttp: started HTTP pipelining with peer 147e07d111cd56bb
2018-10-31 11:53:49.587028 I | rafthttp: started streaming with peer 147e07d111cd56bb (writer)
2018-10-31 11:53:49.587362 I | rafthttp: started streaming with peer 147e07d111cd56bb (writer)
2018-10-31 11:53:49.593177 I | rafthttp: started peer 147e07d111cd56bb
2018-10-31 11:53:49.593240 I | rafthttp: added peer 147e07d111cd56bb
2018-10-31 11:53:49.593285 I | rafthttp: starting peer 653239ededb46fed...
2018-10-31 11:53:49.593446 I | rafthttp: started HTTP pipelining with peer 653239ededb46fed
2018-10-31 11:53:49.593609 I | rafthttp: started streaming with peer 147e07d111cd56bb (stream MsgApp v2 reader)
2018-10-31 11:53:49.593882 I | rafthttp: started streaming with peer 147e07d111cd56bb (stream Message reader)
2018-10-31 11:53:49.594602 I | rafthttp: started streaming with peer 653239ededb46fed (writer)
2018-10-31 11:53:49.600188 I | rafthttp: started streaming with peer 653239ededb46fed (writer)
2018-10-31 11:53:49.601461 I | rafthttp: started peer 653239ededb46fed
2018-10-31 11:53:49.603426 I | rafthttp: added peer 653239ededb46fed
2018-10-31 11:53:49.603477 I | etcdserver: starting server... [version: 3.2.24, cluster version: to_be_decided]
2018-10-31 11:53:49.603933 I | rafthttp: started streaming with peer 653239ededb46fed (stream Message reader)
2018-10-31 11:53:49.604899 I | embed: ClientTLS: cert = /srv/kubernetes/etcd.pem, key = /srv/kubernetes/etcd-key.pem, ca = , trusted-ca = /srv/kubernetes/ca.crt, client-cert-auth = true
2018-10-31 11:53:49.605161 I | rafthttp: started streaming with peer 653239ededb46fed (stream MsgApp v2 reader)
2018-10-31 11:53:49.605794 I | etcdserver/membership: added member 147e07d111cd56bb [https://etcd-b.internal.bob.vlaaaaaaad.experimental.example.com:2380] to cluster 1e10667ccf3d8cbc
2018-10-31 11:53:49.605963 I | etcdserver/membership: added member 4e471207b5a4e218 [https://etcd-a.internal.bob.vlaaaaaaad.experimental.example.com:2380] to cluster 1e10667ccf3d8cbc
2018-10-31 11:53:49.606075 I | etcdserver/membership: added member 653239ededb46fed [https://etcd-c.internal.bob.vlaaaaaaad.experimental.example.com:2380] to cluster 1e10667ccf3d8cbc
2018-10-31 11:53:51.275042 I | raft: 4e471207b5a4e218 is starting a new election at term 1
2018-10-31 11:53:51.275098 I | raft: 4e471207b5a4e218 became candidate at term 2
2018-10-31 11:53:51.275125 I | raft: 4e471207b5a4e218 received MsgVoteResp from 4e471207b5a4e218 at term 2
2018-10-31 11:53:51.275139 I | raft: 4e471207b5a4e218 [logterm: 1, index: 3] sent MsgVote request to 653239ededb46fed at term 2
2018-10-31 11:53:51.275150 I | raft: 4e471207b5a4e218 [logterm: 1, index: 3] sent MsgVote request to 147e07d111cd56bb at term 2
2018-10-31 11:53:53.075429 I | raft: 4e471207b5a4e218 is starting a new election at term 2
2018-10-31 11:53:53.075466 I | raft: 4e471207b5a4e218 became candidate at term 3
2018-10-31 11:53:53.075478 I | raft: 4e471207b5a4e218 received MsgVoteResp from 4e471207b5a4e218 at term 3
2018-10-31 11:53:53.075489 I | raft: 4e471207b5a4e218 [logterm: 1, index: 3] sent MsgVote request to 147e07d111cd56bb at term 3
2018-10-31 11:53:53.075505 I | raft: 4e471207b5a4e218 [logterm: 1, index: 3] sent MsgVote request to 653239ededb46fed at term 3
2018-10-31 11:53:54.375077 I | raft: 4e471207b5a4e218 is starting a new election at term 3
2018-10-31 11:53:54.375111 I | raft: 4e471207b5a4e218 became candidate at term 4
2018-10-31 11:53:54.375122 I | raft: 4e471207b5a4e218 received MsgVoteResp from 4e471207b5a4e218 at term 4
2018-10-31 11:53:54.375134 I | raft: 4e471207b5a4e218 [logterm: 1, index: 3] sent MsgVote request to 147e07d111cd56bb at term 4
2018-10-31 11:53:54.375144 I | raft: 4e471207b5a4e218 [logterm: 1, index: 3] sent MsgVote request to 653239ededb46fed at term 4
2018-10-31 11:53:54.593771 W | rafthttp: health check for peer 147e07d111cd56bb could not connect: EOF
2018-10-31 11:53:54.604041 W | rafthttp: health check for peer 653239ededb46fed could not connect: <nil>

6. What did you expect to happen?
Certs to be created by kops, added to the nodes and everything to work.

7. Please provide your cluster manifest.

apiVersion: kops/v1alpha2
kind: Cluster
metadata:
  name: bob.vlaaaaaaad.experimental.example.com
spec:
  additionalPolicies:
    master: |
      [
        {
          "Effect": "Allow",
          "Action": ["*"],
          "Resource": "*"
        }
      ]
    node: |
      [
        {
          "Effect": "Allow",
          "Action": ["*"],
          "Resource": "*"
        }
      ]
  api:
    loadBalancer:
      sslCertificate: arn:aws:acm:us-east-1:1111111:certificate/111111-1111-1111-1111-1111
      type: Internal
  authorization:
    rbac: {}
  channel: stable
  cloudLabels:
    CreatedBy: kops
    Owner: vlaaaaaaad
  cloudProvider: aws
  configBase: s3://state-bucket/bob.vlaaaaaaad.experimental.example.com
  dnsZone: experimental.example.com
  etcdClusters:
  - enableEtcdTLS: true
    enableTLSAuth: true
    etcdMembers:
    - instanceGroup: master-bob.vlaaaaaaad.experimental.example.com-a
      name: a
      volumeIops: 123
      volumeSize: 11
      volumeType: io1
    - instanceGroup: master-bob.vlaaaaaaad.experimental.example.com-b
      name: b
      volumeIops: 123
      volumeSize: 11
      volumeType: io1
    - instanceGroup: master-bob.vlaaaaaaad.experimental.example.com-c
      name: c
      volumeIops: 123
      volumeSize: 11
      volumeType: io1
    name: main
    version: 3.2.24
  - enableEtcdTLS: true
    enableTLSAuth: true
    etcdMembers:
    - instanceGroup: master-bob.vlaaaaaaad.experimental.example.com-a
      name: a
      volumeIops: 121
      volumeSize: 10
      volumeType: io1
    - instanceGroup: master-bob.vlaaaaaaad.experimental.example.com-b
      name: b
      volumeIops: 121
      volumeSize: 10
      volumeType: io1
    - instanceGroup: master-bob.vlaaaaaaad.experimental.example.com-c
      name: c
      volumeIops: 121
      volumeSize: 10
      volumeType: io1
    name: events
    version: 3.2.24
  iam:
    allowContainerRegistry: false
    legacy: false
  kubeAPIServer:
    runtimeConfig:
      autoscaling/v2beta1: "true"
  kubeControllerManager:
    horizontalPodAutoscalerDownscaleDelay: 5m0s
    horizontalPodAutoscalerSyncPeriod: 15s
    horizontalPodAutoscalerUpscaleDelay: 30s
    horizontalPodAutoscalerUseRestClients: true
  kubelet:
    enableCustomMetrics: true
  kubernetesApiAccess:
  - 10.80.0.0/16
  - 123.123.123.123/16
  - 123.123.123.123/32
  kubernetesVersion: 1.10.6
  masterInternalName: api.internal.k8s.bob.vlaaaaaaad.experimental.example.com
  masterPublicName: api.k8s.bob.vlaaaaaaad.experimental.example.com
  networkCIDR: 10.80.0.0/16
  networkID: vpc-111111111111
  networking:
    kopeio-vxlan: {}
  nonMasqueradeCIDR: 100.64.0.0/10
  sshAccess:
  - 123.123.123.123/16
  - 10.80.0.0/16
  - 123.123.123.123/32
  subnets:
  - cidr: 10.80.32.0/19
    name: us-east-1a
    type: Private
    zone: us-east-1a
  - cidr: 10.80.64.0/19
    name: us-east-1b
    type: Private
    zone: us-east-1b
  - cidr: 10.80.96.0/19
    name: us-east-1c
    type: Private
    zone: us-east-1c
  - cidr: 10.80.0.0/22
    name: utility-us-east-1a
    type: Utility
    zone: us-east-1a
  - cidr: 10.80.4.0/22
    name: utility-us-east-1b
    type: Utility
    zone: us-east-1b
  - cidr: 10.80.8.0/22
    name: utility-us-east-1c
    type: Utility
    zone: us-east-1c
  topology:
    dns:
      type: Public
    masters: private
    nodes: private
---
apiVersion: kops/v1alpha2
kind: InstanceGroup
metadata:
  labels:
    kops.k8s.io/cluster: bob.vlaaaaaaad.experimental.example.com
  name: master-bob.vlaaaaaaad.experimental.example.com-a
spec:
  associatePublicIp: false
  image: kope.io/k8s-1.10-debian-jessie-amd64-hvm-ebs-2018-08-17
  machineType: m4.large
  maxPrice: "0.5"
  maxSize: 1
  minSize: 1
  role: Master
  subnets:
  - us-east-1a
---
apiVersion: kops/v1alpha2
kind: InstanceGroup
metadata:
  labels:
    kops.k8s.io/cluster: bob.vlaaaaaaad.experimental.example.com
  name: master-bob.vlaaaaaaad.experimental.example.com-b
spec:
  associatePublicIp: false
  image: kope.io/k8s-1.10-debian-jessie-amd64-hvm-ebs-2018-08-17
  machineType: m4.large
  maxPrice: "0.5"
  maxSize: 1
  minSize: 1
  role: Master
  subnets:
  - us-east-1b
---
apiVersion: kops/v1alpha2
kind: InstanceGroup
metadata:
  labels:
    kops.k8s.io/cluster: bob.vlaaaaaaad.experimental.example.com
  name: master-bob.vlaaaaaaad.experimental.example.com-c
spec:
  associatePublicIp: false
  image: kope.io/k8s-1.10-debian-jessie-amd64-hvm-ebs-2018-08-17
  machineType: m4.large
  maxPrice: "0.5"
  maxSize: 1
  minSize: 1
  role: Master
  subnets:
  - us-east-1c
---
apiVersion: kops/v1alpha2
kind: InstanceGroup
metadata:
  labels:
    kops.k8s.io/cluster: bob.vlaaaaaaad.experimental.example.com
  name: workers
spec:
  associatePublicIp: false
  cloudLabels:
    k8s.io/cluster-autoscaler/cluster/bob.vlaaaaaaad.experimental.example.com: ""
    k8s.io/cluster-autoscaler/enabled: ""
  image: kope.io/k8s-1.10-debian-jessie-amd64-hvm-ebs-2018-08-17
  machineType: m4.large
  maxPrice: "0.5"
  maxSize: 8
  minSize: 1
  role: Node
  subnets:
  - us-east-1a
  - us-east-1b
  - us-east-1c
---
apiVersion: kops/v1alpha2
kind: InstanceGroup
metadata:
  labels:
    kops.k8s.io/cluster: bob.vlaaaaaaad.experimental.example.com
  name: bastions
spec:
  associatePublicIp: true
  image: kope.io/k8s-1.10-debian-jessie-amd64-hvm-ebs-2018-08-17
  machineType: t2.micro
  maxSize: 1
  minSize: 1
  role: Bastion
  subnets:
  - utility-us-east-1a

8. Please run the commands with most verbose logging by adding the -v 10 flag.
Paste the logs into this report, or in a gist and provide the gist link here.

N/A

9. Anything else we need to know?
I tried editing the DHCP options to set the domain, but that did not seem to help.

Some relevant etcd issues: mostly etcd-io/etcd#8268, but also etcd-io/etcd#9575.

@aaroniscode

The root cause of this issue appears to be a breaking change in etcd v3.2 around TLS.

Snippets from the etcd security documentation:

Since v3.2.0, server resolves TLS DNSNames when checking SAN. For instance, if peer cert 
contains only DNS names (no IP addresses) in Subject Alternative Name (SAN) field, server
authenticates a peer only when forward-lookups (dig b.com) on those DNS names have 
matching IP with the remote IP address.

Above is the result of etcd-io/etcd#7767. Also:

Since v3.2.5, server supports reverse-lookup on wildcard DNS SAN. For instance, if peer
cert contains only DNS names (no IP addresses) in Subject Alternative Name (SAN) field,
server first reverse-lookups the remote IP address to get a list of names mapping to that
address (e.g. nslookup IPADDR). Then accepts the connection if those names have a
matching name with peer cert's DNS names (either by exact or wildcard match). If none is
matched, server forward-lookups each DNS entry in peer cert (e.g. look up 
example.default.svc when the entry is *.example.default.svc), and accepts connection only
when the host's resolved addresses have the matching IP address with the peer's remote
IP address. 
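
The reverse-lookup step fails here because the PTR record for the peer's IP points at the EC2-internal hostname (see the dig/nslookup output in the issue description). A rough, hypothetical sketch of the forward-lookup fallback described in the second quote (again, not etcd's actual code) looks like this:

// Hypothetical sketch of the documented forward-lookup fallback (not etcd's
// actual code): strip any "*." prefix from each DNS SAN entry, resolve the
// remainder, and accept the peer only if one of the resolved addresses
// equals the peer's remote IP.
package main

import (
    "fmt"
    "net"
    "strings"
)

func forwardLookupMatches(peerIP string, dnsSANs []string) bool {
    for _, san := range dnsSANs {
        host := strings.TrimPrefix(san, "*.") // "*.internal.example.com" -> "internal.example.com"
        addrs, err := net.LookupHost(host)
        if err != nil {
            continue // skip SAN entries that do not resolve
        }
        for _, addr := range addrs {
            if addr == peerIP {
                return true
            }
        }
    }
    return false
}

func main() {
    sans := []string{"*.internal.bob.vlaaaaaaad.experimental.example.com", "localhost"}
    fmt.Println(forwardLookupMatches("10.80.91.18", sans))
}

For that fallback to succeed, the SAN entry with the wildcard stripped would have to resolve to the peer's remote IP, which does not appear to be the case here either.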

kops certificates do not have IP addresses in the SANs. I believe this is because instances are created from autoscaling groups, so their IP addresses cannot be predicted. The etcd certificate for the cluster is created in the pki model and has the following SANs:

*.internal.<dns-name>, localhost, 127.0.0.1

Unless my understanding is wrong, this means that kops doesn't currently support etcd v3.2.0 and higher with multiple masters, as etcd is unable to authenticate its peers.
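
To double-check what the generated certificate actually contains, here is a minimal sketch (assuming the cert path /srv/kubernetes/etcd.pem, taken from ETCD_CERT_FILE in the etcd.log above) that prints its SANs:

// Minimal sketch: print the SANs of the kops-generated etcd certificate to
// confirm it carries only DNS names plus the loopback IP. The path is the
// ETCD_CERT_FILE value from the etcd.log above.
package main

import (
    "crypto/x509"
    "encoding/pem"
    "fmt"
    "os"
)

func main() {
    data, err := os.ReadFile("/srv/kubernetes/etcd.pem")
    if err != nil {
        panic(err)
    }
    block, _ := pem.Decode(data)
    if block == nil {
        panic("no PEM block found")
    }
    cert, err := x509.ParseCertificate(block.Bytes)
    if err != nil {
        panic(err)
    }
    fmt.Println("DNSNames:   ", cert.DNSNames)    // expected: [*.internal.<dns-name> localhost]
    fmt.Println("IPAddresses:", cert.IPAddresses) // expected: [127.0.0.1]
}
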

justinsb self-assigned this Nov 24, 2018
justinsb added a commit to justinsb/kops that referenced this issue Nov 25, 2018
This works around the (very unusual) etcd changes for validation of
peer certificates by DNS lookup, which were introduced in etcd 3.2.

Issue kubernetes#6024
justinsb added this to the 1.11 milestone Nov 25, 2018
@rdrgmnzs (Contributor) commented Dec 7, 2018

@justinsb now that #6112 has landed can we go ahead and close this?

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

k8s-ci-robot added the lifecycle/stale label (Denotes an issue or PR has remained open with no activity and has become stale.) Mar 7, 2019
justinsb modified the milestones: 1.11, 1.12 Mar 14, 2019
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label Apr 13, 2019
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot (Contributor)

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
