Failure in samples/scenario/create-check-and-delete-pod-with-local-persistent-volume.yaml #45

Open
msidana opened this issue Oct 25, 2019 · 9 comments

@msidana

msidana commented Oct 25, 2019

Scenario samples/scenario/create-check-and-delete-pod-with-local-persistent-volume.yaml fails with the persistentvolume-controller error "waiting for first consumer to be created before binding".

The persistent volume gets created, but the claim fails to bind.
Disk space is sufficient. Is there any prerequisite needed on the cluster for this scenario?

root@oplind-19:~/xrally-kubernetes# kubectl describe persistentvolumeclaim --all-namespaces
Name:          rally-bae606e5-iufeyg2q
Namespace:     c-rally-bae606e5-zpvrf10e
StorageClass:  c-rally-bae606e5-17ulfb0n
Status:        Pending
Volume:        
Labels:        <none>
Annotations:   <none>
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      
Access Modes:  
VolumeMode:    Filesystem
Mounted By:    rally-bae606e5-iufeyg2q
Events:
  Type    Reason                Age                From                         Message
  ----    ------                ----               ----                         -------
  Normal  WaitForFirstConsumer  14s (x2 over 20s)  persistentvolume-controller  waiting for first consumer to be created before binding
root@oplind-19:~/xrally-kubernetes#
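
For context, the WaitForFirstConsumer event above is the normal behaviour of a StorageClass whose volumeBindingMode is WaitForFirstConsumer: the claim stays Pending until a pod that uses it is actually scheduled, so the real question is why that pod cannot be scheduled. If needed, the binding mode of the scenario-created class can be confirmed with something like:

# kubectl get storageclass c-rally-bae606e5-17ulfb0n -o yaml | grep volumeBindingMode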

Scenario Logs:

TimeoutException: Rally tired waiting 50.00 seconds for Pod rally-bae606e5-iufeyg2q:186e8a4e-3bc8-442c-8194-b6644d4b5455 to become Running current status Pending

Traceback (most recent call last):
  File "/root/rally-plugins/venv/local/lib/python2.7/site-packages/rally/task/runner.py", line 71, in _run_scenario_once
    getattr(scenario_inst, method_name)(**scenario_kwargs)
  File "/root/rally-plugins/venv/local/lib/python2.7/site-packages/rally_plugins/scenarios/kubernetes/volumes/local_persistent_volume.py", line 98, in run
    status_wait=status_wait
  File "/root/rally-plugins/venv/local/lib/python2.7/site-packages/rally_plugins/scenarios/kubernetes/volumes/base.py", line 45, in run
    status_wait=status_wait
  File "/root/rally-plugins/venv/local/lib/python2.7/site-packages/rally/task/service.py", line 116, in wrapper
    return func(instance, *args, **kwargs)
  File "/root/rally-plugins/venv/local/lib/python2.7/site-packages/rally/task/atomic.py", line 91, in func_atomic_actions
    f = func(self, *args, **kwargs)
  File "/root/rally-plugins/venv/local/lib/python2.7/site-packages/rally_plugins/services/kube/kube.py", line 436, in create_pod
    volume=volume)
  File "/root/rally-plugins/venv/local/lib/python2.7/site-packages/rally_plugins/services/kube/kube.py", line 73, in wait_for_status
    timeout=(retries_total * sleep_time))
TimeoutException: Rally tired waiting 50.00 seconds for Pod rally-bae606e5-iufeyg2q:186e8a4e-3bc8-442c-8194-b6644d4b5455 to become Running current status Pending
@prazumovsky
Contributor

prazumovsky commented Oct 25, 2019

Does your node have the master role, or is it tainted?
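
For reference, node roles and taints can be checked with standard kubectl commands, for example:

# kubectl get nodes --show-labels
# kubectl describe nodes | grep -i taints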

@msidana
Author

msidana commented Oct 25, 2019

I have a two node setup. One master and one worker. Master is tainted.

@prazumovsky
Contributor

Also, does any node match the node affinity settings of the sample:

node_affinity:
  required:
    nodeSelectorTerms:
    - matchExpressions:
      - key: beta.kubernetes.io/os
        operator: In
        values:
        - linux
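
A quick way to see whether any node satisfies that expression is a matching label selector, for example:

# kubectl get nodes -l beta.kubernetes.io/os=linux --show-labels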

@prazumovsky
Contributor

I have a two node setup. One master and one worker. Master is tainted.

Is the pod created in this test Running, or is it stuck in Pending with "0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate."?

@prazumovsky prazumovsky added bug and removed question labels Oct 25, 2019
@msidana
Author

msidana commented Oct 25, 2019

The pod remains in Pending state due to "0/2 nodes are available: 2 node(s) didn't find available persistent volumes to bind."
Please find the logs below.

# kubectl get pod rally-615f3201-hewsk83y  --namespace c-rally-615f3201-181e0u9f -oyaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: "2019-10-25T16:34:35Z"
  labels:
    role: rally-615f3201-hewsk83y
  name: rally-615f3201-hewsk83y
  namespace: c-rally-615f3201-181e0u9f
  resourceVersion: "3039656"
  selfLink: /api/v1/namespaces/c-rally-615f3201-181e0u9f/pods/rally-615f3201-hewsk83y
  uid: 11df0a37-59e6-44d9-bdd8-62d11e4ee965
spec:
  containers:
  - image: gcr.io/google-samples/hello-go-gke:1.0
    imagePullPolicy: IfNotPresent
    name: rally-615f3201-hewsk83y
    resources: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /opt/check
      name: rally-615f3201-hewsk83y
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: c-rally-615f3201-181e0u9f-token-jpsjd
      readOnly: true
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  priority: 0
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext: {}
  serviceAccount: c-rally-615f3201-181e0u9f
  serviceAccountName: c-rally-615f3201-181e0u9f
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - name: rally-615f3201-hewsk83y
    persistentVolumeClaim:
      claimName: rally-615f3201-hewsk83y
  - name: c-rally-615f3201-181e0u9f-token-jpsjd
    secret:
      defaultMode: 420
      secretName: c-rally-615f3201-181e0u9f-token-jpsjd
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2019-10-25T16:34:35Z"
    message: '0/2 nodes are available: 2 node(s) didn''t find available persistent
      volumes to bind.'
    reason: Unschedulable
    status: "False"
    type: PodScheduled
  phase: Pending
  qosClass: BestEffort

Rally Logs:

2019-10-25 18:35:26.353 142356 INFO rally.task.runner [-] Task ef98091a-3ceb-4686-89ef-4f510ba0ca91 | ITER: 2 START
2019-10-25 18:36:17.373 142356 INFO rally.task.runner [-] Task ef98091a-3ceb-4686-89ef-4f510ba0ca91 | ITER: 2 END: Error TimeoutException: Rally tired waiting 50.00 seconds for Pod rally-615f3201-86g66as1:97659fad-671c-4958-b6e8-460b48576df6 to become Running current status Pending

The PV gets created:

# kubectl get pv --all-namespaces
NAME                      CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS                REASON   AGE
rally-615f3201-hewsk83y   1Gi        RWO            Retain           Available           c-rally-615f3201-fgi3em3a            2m35s

The PVCs remain in Pending state:

# kubectl get pvc --all-namespaces
NAMESPACE                   NAME                      STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS                AGE
c-rally-615f3201-181e0u9f   rally-615f3201-9bh0w82h   Pending                                      c-rally-615f3201-fgi3em3a   83s
c-rally-615f3201-181e0u9f   rally-615f3201-hewsk83y   Pending                                      c-rally-615f3201-fgi3em3a   3m56s
c-rally-615f3201-181e0u9f   rally-615f3201-mx456goz   Pending                                      c-rally-615f3201-fgi3em3a   31s

The PVCs stay Pending with the following event:

  Normal  WaitForFirstConsumer  7s (x13 over 3m4s)  persistentvolume-controller  waiting for first consumer to be created before binding
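
Given the scheduler message above, it may also be worth comparing the nodeAffinity recorded on the created PV with the labels of both nodes, for example (PV name taken from the output above):

# kubectl get pv rally-615f3201-hewsk83y -o yaml | grep -A 10 nodeAffinity
# kubectl get nodes --show-labels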

@msidana
Author

msidana commented Oct 25, 2019

Also, does any node match the node affinity settings of the sample:

How can I check this? Can you give me the command for it?

@prazumovsky
Contributor

I think the reason is an incorrect nodeAffinity. Please provide kubectl get node -o yaml output to confirm this. A possible solution is to add tolerations for the master node to the pod creation and to define nodeAffinity correctly for your Kubernetes cluster. I am not 100% sure that tolerations are necessary; defining nodeAffinity correctly on its own may resolve this issue.
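
For illustration only (these are plain pod-spec fields, not necessarily options exposed by the rally plugin), a master-node toleration and a hostname-based nodeAffinity would look roughly like this, with <worker-node-name> as a placeholder:

tolerations:
- key: node-role.kubernetes.io/master
  operator: Exists
  effect: NoSchedule
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - <worker-node-name>   # placeholder: the node that actually hosts the local volume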

@msidana
Author

msidana commented Oct 29, 2019

Here is the output:

# kubectl -oyaml get node
apiVersion: v1
items:
- apiVersion: v1
  kind: Node
  metadata:
    annotations:
      kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
      node.alpha.kubernetes.io/ttl: "0"
      volumes.kubernetes.io/controller-managed-attach-detach: "true"
    creationTimestamp: "2019-10-04T04:17:06Z"
    labels:
      beta.kubernetes.io/arch: amd64
      beta.kubernetes.io/os: linux
      kubernetes.io/arch: amd64
      kubernetes.io/hostname: oplind-19
      kubernetes.io/os: linux
      node-role.kubernetes.io/master: ""
    name: oplind-19
    resourceVersion: "3526314"
    selfLink: /api/v1/nodes/oplind-19
    uid: 023120a8-5e49-427a-8241-b0d3e88b21df
  spec:
    podCIDR: 10.233.64.0/24
  status:
    addresses:
    - address: <HIDDEN>
      type: InternalIP
    - address: oplind-19
      type: Hostname
    allocatable:
      cpu: 39800m
      ephemeral-storage: "423116284023"
      hugepages-1Gi: "0"
      hugepages-2Mi: "0"
      intel.com/intel_sriov_dpdk: "0"
      intel.com/intel_sriov_netdevice: "0"
      intel.com/mlnx_sriov_rdma: "0"
      memory: 64929468Ki
      pods: "110"
    capacity:
      cpu: "40"
      ephemeral-storage: 459110552Ki
      hugepages-1Gi: "0"
      hugepages-2Mi: "0"
      intel.com/intel_sriov_dpdk: "0"
      intel.com/intel_sriov_netdevice: "0"
      intel.com/mlnx_sriov_rdma: "0"
      memory: 65531868Ki
      pods: "110"
    conditions:
    - lastHeartbeatTime: "2019-10-04T04:18:24Z"
      lastTransitionTime: "2019-10-04T04:18:24Z"
      message: Calico is running on this node
      reason: CalicoIsUp
      status: "False"
      type: NetworkUnavailable
    - lastHeartbeatTime: "2019-10-29T10:42:32Z"
      lastTransitionTime: "2019-10-04T04:17:03Z"
      message: kubelet has sufficient memory available
      reason: KubeletHasSufficientMemory
      status: "False"
      type: MemoryPressure
    - lastHeartbeatTime: "2019-10-29T10:42:32Z"
      lastTransitionTime: "2019-10-04T04:17:03Z"
      message: kubelet has no disk pressure
      reason: KubeletHasNoDiskPressure
      status: "False"
      type: DiskPressure
    - lastHeartbeatTime: "2019-10-29T10:42:32Z"
      lastTransitionTime: "2019-10-04T04:17:03Z"
      message: kubelet has sufficient PID available
      reason: KubeletHasSufficientPID
      status: "False"
      type: PIDPressure
    - lastHeartbeatTime: "2019-10-29T10:42:32Z"
      lastTransitionTime: "2019-10-04T04:18:27Z"
      message: kubelet is posting ready status. AppArmor enabled
      reason: KubeletReady
      status: "True"
      type: Ready
    daemonEndpoints:
      kubeletEndpoint:
        Port: 10250
    images:
    - names:
      - voereir/touchstone-server-ubuntu:v1.1
      sizeBytes: 1151540050
    - names:
      - <none>@<none>
      - <none>:<none>
      sizeBytes: 1144980410
    - names:
      - <none>@<none>
      - <none>:<none>
      sizeBytes: 1144953804
    - names:
      - <none>@<none>
      - <none>:<none>
      sizeBytes: 879065039
    - names:
      - cmk:v1.3.1
      sizeBytes: 767072444
    - names:
      - python@sha256:9c6c97ea31915fc82d4adeca1f9aa8cbad0ca113f4237d350ab726cf05485585
      - python:3.4.6
      sizeBytes: 679592754
    - names:
      - nfvpe/multus@sha256:214cb880e1345e36db7867970ece5ba44e1708badaef79e6fcdded28f58a7752
      - nfvpe/multus:v3.2.1
      sizeBytes: 499877728
    - names:
      - golang@sha256:2293e952c79b8b3a987e1e09d48b6aa403d703cef9a8fa316d30ba2918d37367
      - golang:alpine
      sizeBytes: 359138126
    - names:
      - gcr.io/google-containers/kube-apiserver@sha256:120c31707be05d6ff5bd05e56e95cac09cdb75e3b533b91fd2c6a2b771c19609
      - gcr.io/google-containers/kube-apiserver:v1.15.3
      sizeBytes: 206843838
    - names:
      - gcr.io/google-containers/kube-controller-manager@sha256:0bf6211a0d8cb1c444aa3148941ae4dfbb43dfbbd2a7a9177a9594535fbed838
      - gcr.io/google-containers/kube-controller-manager:v1.15.3
      sizeBytes: 158743102
    - names:
      - calico/node@sha256:a2782b53500c96e35299b8af729eaf39423f9ffd903d9fda675073f4a063502a
      - calico/node:v3.7.3
      sizeBytes: 156259173
    - names:
      - calico/cni@sha256:258a0cb3c25022e44ebda3606112c40865adb67b8fb7be3d119f960957301ad6
      - calico/cni:v3.7.3
      sizeBytes: 135366007
    - names:
      - gcr.io/google_containers/kubernetes-dashboard-amd64@sha256:0ae6b69432e78069c5ce2bcde0fe409c5c4d6f0f4d9cd50a17974fea38898747
      - gcr.io/google_containers/kubernetes-dashboard-amd64:v1.10.1
      sizeBytes: 121711221
    - names:
      - ubuntu@sha256:1dfb94f13f5c181756b2ed7f174825029aca902c78d0490590b1aaa203abc052
      - ubuntu:xenial-20180417
      sizeBytes: 112952004
    - names:
      - nginx@sha256:23b4dcdf0d34d4a129755fc6f52e1c6e23bb34ea011b315d87e193033bcd1b68
      - nginx:1.15
      sizeBytes: 109331233
    - names:
      - kubernetesui/dashboard@sha256:ae756074fa3d1b72c39aa98cfc6246c6923e7da3beaf350d80b91167be868871
      - kubernetesui/dashboard:v2.0.0-beta5
      sizeBytes: 91466354
    - names:
      - kubernetesui/dashboard@sha256:a35498beec44376efcf8c4478eebceb57ec3ba39a6579222358a1ebe455ec49e
      - kubernetesui/dashboard:v2.0.0-beta4
      sizeBytes: 84034786
    - names:
      - gcr.io/google-containers/kube-proxy@sha256:6f910100972afda5b14037ccbba0cd6aa091bb773ae749f46b03f395380935c9
      - gcr.io/google-containers/kube-proxy:v1.15.3
      sizeBytes: 82408284
    - names:
      - gcr.io/google-containers/kube-scheduler@sha256:e365d380e57c75ee35f7cda99df5aa8c96e86287a5d3b52847e5d67d27ed082a
      - gcr.io/google-containers/kube-scheduler:v1.15.3
      sizeBytes: 81107582
    - names:
      - k8s.gcr.io/k8s-dns-node-cache@sha256:bd894670505be5ec57ead09b5e6a7ef96cba2217aad33ddcba5d292559b58345
      - k8s.gcr.io/k8s-dns-node-cache:1.15.4
      sizeBytes: 62534058
    - names:
      - k8s.gcr.io/cluster-proportional-autoscaler-amd64@sha256:0abeb6a79ad5aec10e920110446a97fb75180da8680094acb6715de62507f4b0
      - k8s.gcr.io/cluster-proportional-autoscaler-amd64:1.6.0
      sizeBytes: 47668785
    - names:
      - calico/kube-controllers@sha256:6bc3fd181fabb580df33442d728b39d4eeccae0883a50d85b1a26cac796a2999
      - calico/kube-controllers:v3.7.3
      sizeBytes: 46774220
    - names:
      - coredns/coredns@sha256:263d03f2b889a75a0b91e035c2a14d45d7c1559c53444c5f7abf3a76014b779d
      - coredns/coredns:1.6.0
      sizeBytes: 42155587
    - names:
      - kubernetesui/metrics-scraper@sha256:35fcae4fd9232a541a8cb08f2853117ba7231750b75c2cb3b6a58a2aaa57f878
      - kubernetesui/metrics-scraper:v1.0.1
      sizeBytes: 40101504
    - names:
      - quay.io/coreos/etcd@sha256:cb9cee3d9d49050e7682fde0a9b26d6948a0117b1b4367b8170fcaa3960a57b8
      - quay.io/coreos/etcd:v3.3.10
      sizeBytes: 39468433
    - names:
      - nfvpe/sriov-device-plugin:latest
      sizeBytes: 24637505
    - names:
      - aquasec/kube-bench@sha256:ddbcf94fee8c0535d8ddd903df61bcaa476c9f45984e7b3f1e7bb187d88d7e77
      - aquasec/kube-bench:0.0.34
      sizeBytes: 20356537
    - names:
      - gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e
      - gcr.io/google-samples/hello-go-gke:1.0
      sizeBytes: 11443478
    - names:
      - alpine@sha256:72c42ed48c3a2db31b7dafe17d275b634664a708d901ec9fd57b1529280f01fb
      - alpine:latest
      sizeBytes: 5581746
    - names:
      - appropriate/curl@sha256:c8bf5bbec6397465a247c2bb3e589bb77e4f62ff88a027175ecb2d9e4f12c9d7
      - appropriate/curl:latest
      sizeBytes: 5496756
    - names:
      - busybox@sha256:fe301db49df08c384001ed752dff6d52b4305a73a7f608f21528048e8a08b51e
      - busybox:latest
      sizeBytes: 1219782
    - names:
      - gcr.io/google-containers/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea
      - gcr.io/google_containers/pause-amd64@sha256:59eec8837a4d942cc19a52b8c09ea75121acc38114a2c68b98983ce9356b8610
      - gcr.io/google-containers/pause:3.1
      - gcr.io/google_containers/pause-amd64:3.1
      sizeBytes: 742472
    - names:
      - kubernetes/pause@sha256:b31bfb4d0213f254d361e0079deaaebefa4f82ba7aa76ef82e90b4935ad5b105
      - kubernetes/pause:latest
      sizeBytes: 239840
    nodeInfo:
      architecture: amd64
      bootID: 8d97a3c3-48b0-4de7-bd4f-8fa0d203e572
      containerRuntimeVersion: docker://18.9.7
      kernelVersion: 4.4.0-142-generic
      kubeProxyVersion: v1.15.3
      kubeletVersion: v1.15.3
      machineID: f797e94a52017be76fe095615d95ef4d
      operatingSystem: linux
      osImage: Ubuntu 16.04.6 LTS
      systemUUID: 4C4C4544-0047-4610-8039-B6C04F465632
- apiVersion: v1
  kind: Node
  metadata:
    annotations:
      kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
      node.alpha.kubernetes.io/ttl: "0"
      volumes.kubernetes.io/controller-managed-attach-detach: "true"
    creationTimestamp: "2019-10-04T04:17:56Z"
    labels:
      beta.kubernetes.io/arch: amd64
      beta.kubernetes.io/os: linux
      kubernetes.io/arch: amd64
      kubernetes.io/hostname: oplind-23
      kubernetes.io/os: linux
    name: oplind-23
    resourceVersion: "3526310"
    selfLink: /api/v1/nodes/oplind-23
    uid: e97ccb65-f31a-4350-8d98-4be336ff2ae7
  spec:
    podCIDR: 10.233.65.0/24
  status:
    addresses:
    - address:  <HIDDEN>
      type: InternalIP
    - address: oplind-23
      type: Hostname
    allocatable:
      cpu: 39900m
      ephemeral-storage: "1239200783562"
      hugepages-1Gi: "0"
      hugepages-2Mi: "0"
      intel.com/intel_sriov_dpdk: "0"
      intel.com/intel_sriov_netdevice: "0"
      intel.com/mlnx_sriov_rdma: "0"
      memory: 65179456Ki
      pods: "110"
    capacity:
      cpu: "40"
      ephemeral-storage: 1344618908Ki
      hugepages-1Gi: "0"
      hugepages-2Mi: "0"
      intel.com/intel_sriov_dpdk: "0"
      intel.com/intel_sriov_netdevice: "0"
      intel.com/mlnx_sriov_rdma: "0"
      memory: 65531856Ki
      pods: "110"
    conditions:
    - lastHeartbeatTime: "2019-10-10T07:25:56Z"
      lastTransitionTime: "2019-10-10T07:25:56Z"
      message: Calico is running on this node
      reason: CalicoIsUp
      status: "False"
      type: NetworkUnavailable
    - lastHeartbeatTime: "2019-10-29T10:42:29Z"
      lastTransitionTime: "2019-10-10T07:25:54Z"
      message: kubelet has sufficient memory available
      reason: KubeletHasSufficientMemory
      status: "False"
      type: MemoryPressure
    - lastHeartbeatTime: "2019-10-29T10:42:29Z"
      lastTransitionTime: "2019-10-10T07:25:54Z"
      message: kubelet has no disk pressure
      reason: KubeletHasNoDiskPressure
      status: "False"
      type: DiskPressure
    - lastHeartbeatTime: "2019-10-29T10:42:29Z"
      lastTransitionTime: "2019-10-10T07:25:54Z"
      message: kubelet has sufficient PID available
      reason: KubeletHasSufficientPID
      status: "False"
      type: PIDPressure
    - lastHeartbeatTime: "2019-10-29T10:42:29Z"
      lastTransitionTime: "2019-10-10T07:25:54Z"
      message: kubelet is posting ready status. AppArmor enabled
      reason: KubeletReady
      status: "True"
      type: Ready
    daemonEndpoints:
      kubeletEndpoint:
        Port: 10250
    images:
    - names:
      - voereir/touchstone-server-ubuntu:v1.1
      sizeBytes: 1151540050
    - names:
      - <none>@<none>
      - <none>:<none>
      sizeBytes: 1144980410
    - names:
      - <none>@<none>
      - <none>:<none>
      sizeBytes: 1144953804
    - names:
      - nfvpe/multus@sha256:214cb880e1345e36db7867970ece5ba44e1708badaef79e6fcdded28f58a7752
      - nfvpe/multus:v3.2.1
      sizeBytes: 499877728
    - names:
      - calico/node@sha256:a2782b53500c96e35299b8af729eaf39423f9ffd903d9fda675073f4a063502a
      - calico/node:v3.7.3
      sizeBytes: 156259173
    - names:
      - k8s.gcr.io/echoserver@sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb
      - k8s.gcr.io/echoserver:1.4
      sizeBytes: 140393469
    - names:
      - calico/cni@sha256:258a0cb3c25022e44ebda3606112c40865adb67b8fb7be3d119f960957301ad6
      - calico/cni:v3.7.3
      sizeBytes: 135366007
    - names:
      - nginx@sha256:922c815aa4df050d4df476e92daed4231f466acc8ee90e0e774951b0fd7195a4
      - nginx:latest
      sizeBytes: 126215561
    - names:
      - nginx@sha256:23b4dcdf0d34d4a129755fc6f52e1c6e23bb34ea011b315d87e193033bcd1b68
      - nginx:1.15
      sizeBytes: 109331233
    - names:
      - kubernetesui/dashboard@sha256:a35498beec44376efcf8c4478eebceb57ec3ba39a6579222358a1ebe455ec49e
      - kubernetesui/dashboard:v2.0.0-beta4
      sizeBytes: 84034786
    - names:
      - gcr.io/google-containers/kube-proxy@sha256:6f910100972afda5b14037ccbba0cd6aa091bb773ae749f46b03f395380935c9
      - gcr.io/google-containers/kube-proxy:v1.15.3
      sizeBytes: 82408284
    - names:
      - k8s.gcr.io/k8s-dns-node-cache@sha256:bd894670505be5ec57ead09b5e6a7ef96cba2217aad33ddcba5d292559b58345
      - k8s.gcr.io/k8s-dns-node-cache:1.15.4
      sizeBytes: 62534058
    - names:
      - coredns/coredns@sha256:263d03f2b889a75a0b91e035c2a14d45d7c1559c53444c5f7abf3a76014b779d
      - coredns/coredns:1.6.0
      sizeBytes: 42155587
    - names:
      - kubernetesui/metrics-scraper@sha256:35fcae4fd9232a541a8cb08f2853117ba7231750b75c2cb3b6a58a2aaa57f878
      - kubernetesui/metrics-scraper:v1.0.1
      sizeBytes: 40101504
    - names:
      - nfvpe/sriov-device-plugin:latest
      sizeBytes: 24637505
    - names:
      - aquasec/kube-bench@sha256:ddbcf94fee8c0535d8ddd903df61bcaa476c9f45984e7b3f1e7bb187d88d7e77
      - aquasec/kube-bench:0.0.34
      sizeBytes: 20356537
    - names:
      - gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e
      - gcr.io/google-samples/hello-go-gke:1.0
      sizeBytes: 11443478
    - names:
      - appropriate/curl@sha256:c8bf5bbec6397465a247c2bb3e589bb77e4f62ff88a027175ecb2d9e4f12c9d7
      - appropriate/curl:latest
      sizeBytes: 5496756
    - names:
      - busybox@sha256:fe301db49df08c384001ed752dff6d52b4305a73a7f608f21528048e8a08b51e
      - busybox:latest
      sizeBytes: 1219782
    - names:
      - gcr.io/google_containers/pause-amd64@sha256:59eec8837a4d942cc19a52b8c09ea75121acc38114a2c68b98983ce9356b8610
      - gcr.io/google_containers/pause-amd64:3.1
      sizeBytes: 742472
    - names:
      - kubernetes/pause@sha256:b31bfb4d0213f254d361e0079deaaebefa4f82ba7aa76ef82e90b4935ad5b105
      - kubernetes/pause:latest
      sizeBytes: 239840
    nodeInfo:
      architecture: amd64
      bootID: d9f32deb-06ea-4b79-a8e8-602eff70077e
      containerRuntimeVersion: docker://18.9.7
      kernelVersion: 4.4.0-165-generic
      kubeProxyVersion: v1.15.3
      kubeletVersion: v1.15.3
      machineID: 190525cd14aabd91f73f8d0d5d95efb2
      operatingSystem: linux
      osImage: Ubuntu 16.04.6 LTS
      systemUUID: 4C4C4544-0047-4410-804A-B6C04F465632
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
# 

@msidana
Author

msidana commented May 18, 2021

@prazumovsky Any update on this?
