
All Minikube cluster nodes not being scanned #600

Open
tas50 opened this issue Oct 12, 2022 · 8 comments

tas50 (Member) commented Oct 12, 2022

Describe the bug
All Minikube cluster nodes not being scanned

Only the first node shows up when running version 1.6.3 of the operator.

To Reproduce
Steps to reproduce the behavior:

  1. Spin up a 3 node cluster
  2. Install the operator

Expected behavior
All nodes should be scanned.
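The expected behavior can be stated as a small check (the names here are illustrative, mirroring minikube's default node naming; the operator's real asset names follow the node hostnames): diff the cluster's node names against the assets that actually got scanned.

```python
def missing_node_scans(cluster_nodes, scanned_assets):
    """Return cluster nodes that have no corresponding scanned asset."""
    scanned = set(scanned_assets)
    return sorted(n for n in cluster_nodes if n not in scanned)

# Example mirroring the report: a 3-node cluster where only the
# first node shows up after installing operator 1.6.3.
nodes = ["minikube", "minikube-m02", "minikube-m03"]
scanned = ["minikube"]
print(missing_node_scans(nodes, scanned))  # → ['minikube-m02', 'minikube-m03']
```

An empty result would mean every node was scanned, which is what the bug report says does not happen here.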

Screenshots
[screenshot]

Note: nodes 2 and 3 were scanned with the previous operator install.

czunker (Contributor) commented Oct 12, 2022

I cannot reproduce with kvm2:

[screenshot]

The command I used to start the cluster: minikube start --driver kvm2 --nodes 2

This is a fresh cluster and fresh install of the 1.6.3 operator.

czunker (Contributor) commented Oct 12, 2022

The same with the docker driver:

[screenshot]

czunker (Contributor) commented Oct 12, 2022

The next scan, an hour later, also worked for both nodes:

[screenshot]

@tas50 It seems like something is different between your cluster and my test cluster. Could you please check the CronJobs, Pods, and logs? Which minikube driver are you using?

czunker (Contributor) commented Oct 12, 2022

Just to be really sure, I also tried a three-node cluster, but that also works:

[screenshot]

joelddiaz (Contributor) commented Oct 12, 2022

I fired up a 3-node cluster as well (Linux, kvm2 driver, minikube 1.26.1) and I see all the minikube VMs:

[screenshot]

What I did see was that only a single node showed up among the actual "node" assets... but I think we were going to disable the 'node' asset type for now.

I'll try minikube on macOS next.

joelddiaz (Contributor) commented

Just tried macOS (arm) with minikube 1.27.0, and I see all three nodes that I asked for:

[screenshot]

tas50 (Member, Author) commented Oct 20, 2022

The cluster was getting into a bad state. The operator is working as expected.

@tas50 tas50 closed this as completed Oct 20, 2022
@tas50 tas50 reopened this Jan 30, 2025
tas50 (Member, Author) commented Jan 30, 2025

This issue is occurring again.

$ kubectl describe nodes
Name:               prod-k8s
Roles:              control-plane
Labels:             beta.kubernetes.io/arch=arm64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=arm64
                    kubernetes.io/hostname=prod-k8s
                    kubernetes.io/os=linux
                    minikube.k8s.io/commit=dd5d320e41b5451cdf3c01891bc4e13d189586ed
                    minikube.k8s.io/name=prod-k8s
                    minikube.k8s.io/primary=true
                    minikube.k8s.io/updated_at=2025_01_29T17_37_03_0700
                    minikube.k8s.io/version=v1.35.0
                    node-role.kubernetes.io/control-plane=
                    node.kubernetes.io/exclude-from-external-load-balancers=
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Wed, 29 Jan 2025 17:36:59 -0800
Taints:             <none>
Unschedulable:      false
Lease:
  HolderIdentity:  prod-k8s
  AcquireTime:     <unset>
  RenewTime:       Wed, 29 Jan 2025 18:09:02 -0800
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Wed, 29 Jan 2025 18:04:46 -0800   Wed, 29 Jan 2025 17:36:58 -0800   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Wed, 29 Jan 2025 18:04:46 -0800   Wed, 29 Jan 2025 17:36:58 -0800   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Wed, 29 Jan 2025 18:04:46 -0800   Wed, 29 Jan 2025 17:36:58 -0800   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            True    Wed, 29 Jan 2025 18:04:46 -0800   Wed, 29 Jan 2025 17:37:00 -0800   KubeletReady                 kubelet is posting ready status
Addresses:
  InternalIP:  192.168.49.2
  Hostname:    prod-k8s
Capacity:
  cpu:                10
  ephemeral-storage:  61202244Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  hugepages-32Mi:     0
  hugepages-64Ki:     0
  memory:             8127560Ki
  pods:               110
Allocatable:
  cpu:                10
  ephemeral-storage:  61202244Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  hugepages-32Mi:     0
  hugepages-64Ki:     0
  memory:             8127560Ki
  pods:               110
System Info:
  Machine ID:                 101e9e8aa65d48938816994a61d7c4e4
  System UUID:                101e9e8aa65d48938816994a61d7c4e4
  Boot ID:                    706ec043-419c-40f1-a8b1-59673b612aa3
  Kernel Version:             6.10.14-linuxkit
  OS Image:                   Ubuntu 22.04.5 LTS
  Operating System:           linux
  Architecture:               arm64
  Container Runtime Version:  docker://27.4.1
  Kubelet Version:            v1.32.0
  Kube-Proxy Version:         v1.32.0
PodCIDR:                      10.244.0.0/24
PodCIDRs:                     10.244.0.0/24
Non-terminated Pods:          (8 in total)
  Namespace                   Name                                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
  ---------                   ----                                ------------  ----------  ---------------  -------------  ---
  kube-system                 coredns-668d6bf9bc-xvbbx            100m (1%)     0 (0%)      70Mi (0%)        170Mi (2%)     32m
  kube-system                 etcd-prod-k8s                       100m (1%)     0 (0%)      100Mi (1%)       0 (0%)         32m
  kube-system                 kindnet-x47mh                       100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      32m
  kube-system                 kube-apiserver-prod-k8s             250m (2%)     0 (0%)      0 (0%)           0 (0%)         32m
  kube-system                 kube-controller-manager-prod-k8s    200m (2%)     0 (0%)      0 (0%)           0 (0%)         32m
  kube-system                 kube-proxy-86sln                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         32m
  kube-system                 kube-scheduler-prod-k8s             100m (1%)     0 (0%)      0 (0%)           0 (0%)         32m
  kube-system                 storage-provisioner                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         32m
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests    Limits
  --------           --------    ------
  cpu                850m (8%)   100m (1%)
  memory             220Mi (2%)  220Mi (2%)
  ephemeral-storage  0 (0%)      0 (0%)
  hugepages-1Gi      0 (0%)      0 (0%)
  hugepages-2Mi      0 (0%)      0 (0%)
  hugepages-32Mi     0 (0%)      0 (0%)
  hugepages-64Ki     0 (0%)      0 (0%)
Events:
  Type    Reason                   Age   From             Message
  ----    ------                   ----  ----             -------
  Normal  Starting                 31m   kube-proxy
  Normal  Starting                 32m   kubelet          Starting kubelet.
  Normal  NodeAllocatableEnforced  32m   kubelet          Updated Node Allocatable limit across pods
  Normal  NodeHasSufficientMemory  32m   kubelet          Node prod-k8s status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    32m   kubelet          Node prod-k8s status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     32m   kubelet          Node prod-k8s status is now: NodeHasSufficientPID
  Normal  RegisteredNode           32m   node-controller  Node prod-k8s event: Registered Node prod-k8s in Controller


Name:               prod-k8s-m02
Roles:              <none>
Labels:             beta.kubernetes.io/arch=arm64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=arm64
                    kubernetes.io/hostname=prod-k8s-m02
                    kubernetes.io/os=linux
                    minikube.k8s.io/commit=dd5d320e41b5451cdf3c01891bc4e13d189586ed
                    minikube.k8s.io/name=prod-k8s
                    minikube.k8s.io/primary=false
                    minikube.k8s.io/updated_at=2025_01_29T17_37_25_0700
                    minikube.k8s.io/version=v1.35.0
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Wed, 29 Jan 2025 17:37:24 -0800
Taints:             <none>
Unschedulable:      false
Lease:
  HolderIdentity:  prod-k8s-m02
  AcquireTime:     <unset>
  RenewTime:       Wed, 29 Jan 2025 18:09:02 -0800
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Wed, 29 Jan 2025 18:04:57 -0800   Wed, 29 Jan 2025 17:37:24 -0800   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Wed, 29 Jan 2025 18:04:57 -0800   Wed, 29 Jan 2025 17:37:24 -0800   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Wed, 29 Jan 2025 18:04:57 -0800   Wed, 29 Jan 2025 17:37:24 -0800   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            True    Wed, 29 Jan 2025 18:04:57 -0800   Wed, 29 Jan 2025 17:37:25 -0800   KubeletReady                 kubelet is posting ready status
Addresses:
  InternalIP:  192.168.49.3
  Hostname:    prod-k8s-m02
Capacity:
  cpu:                10
  ephemeral-storage:  61202244Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  hugepages-32Mi:     0
  hugepages-64Ki:     0
  memory:             8127560Ki
  pods:               110
Allocatable:
  cpu:                10
  ephemeral-storage:  61202244Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  hugepages-32Mi:     0
  hugepages-64Ki:     0
  memory:             8127560Ki
  pods:               110
System Info:
  Machine ID:                 ada798e27ee54c068ef56e853f621d5b
  System UUID:                ada798e27ee54c068ef56e853f621d5b
  Boot ID:                    706ec043-419c-40f1-a8b1-59673b612aa3
  Kernel Version:             6.10.14-linuxkit
  OS Image:                   Ubuntu 22.04.5 LTS
  Operating System:           linux
  Architecture:               arm64
  Container Runtime Version:  docker://27.4.1
  Kubelet Version:            v1.32.0
  Kube-Proxy Version:         v1.32.0
PodCIDR:                      10.244.1.0/24
PodCIDRs:                     10.244.1.0/24
Non-terminated Pods:          (3 in total)
  Namespace                   Name                                      CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
  ---------                   ----                                      ------------  ----------  ---------------  -------------  ---
  kube-system                 kindnet-8tz24                             100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      31m
  kube-system                 kube-proxy-2xrxh                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         31m
  mondoo-operator             mondoo-client-scan-api-d75db9cbb-thmqn    300m (3%)     1 (10%)     250M (3%)        450M (5%)      30m
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests        Limits
  --------           --------        ------
  cpu                400m (4%)       1100m (11%)
  memory             302428800 (3%)  502428800 (6%)
  ephemeral-storage  0 (0%)          0 (0%)
  hugepages-1Gi      0 (0%)          0 (0%)
  hugepages-2Mi      0 (0%)          0 (0%)
  hugepages-32Mi     0 (0%)          0 (0%)
  hugepages-64Ki     0 (0%)          0 (0%)
Events:
  Type    Reason                   Age                From             Message
  ----    ------                   ----               ----             -------
  Normal  Starting                 31m                kube-proxy
  Normal  NodeHasSufficientMemory  31m (x2 over 31m)  kubelet          Node prod-k8s-m02 status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    31m (x2 over 31m)  kubelet          Node prod-k8s-m02 status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     31m (x2 over 31m)  kubelet          Node prod-k8s-m02 status is now: NodeHasSufficientPID
  Normal  NodeAllocatableEnforced  31m                kubelet          Updated Node Allocatable limit across pods
  Normal  NodeReady                31m                kubelet          Node prod-k8s-m02 status is now: NodeReady
  Normal  RegisteredNode           31m                node-controller  Node prod-k8s-m02 event: Registered Node prod-k8s-m02 in Controller


Name:               prod-k8s-m03
Roles:              <none>
Labels:             beta.kubernetes.io/arch=arm64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=arm64
                    kubernetes.io/hostname=prod-k8s-m03
                    kubernetes.io/os=linux
                    minikube.k8s.io/commit=dd5d320e41b5451cdf3c01891bc4e13d189586ed
                    minikube.k8s.io/name=prod-k8s
                    minikube.k8s.io/primary=false
                    minikube.k8s.io/updated_at=2025_01_29T17_37_43_0700
                    minikube.k8s.io/version=v1.35.0
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Wed, 29 Jan 2025 17:37:43 -0800
Taints:             <none>
Unschedulable:      false
Lease:
  HolderIdentity:  prod-k8s-m03
  AcquireTime:     <unset>
  RenewTime:       Wed, 29 Jan 2025 18:08:58 -0800
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Wed, 29 Jan 2025 18:04:07 -0800   Wed, 29 Jan 2025 17:37:43 -0800   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Wed, 29 Jan 2025 18:04:07 -0800   Wed, 29 Jan 2025 17:37:43 -0800   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Wed, 29 Jan 2025 18:04:07 -0800   Wed, 29 Jan 2025 17:37:43 -0800   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            True    Wed, 29 Jan 2025 18:04:07 -0800   Wed, 29 Jan 2025 17:37:44 -0800   KubeletReady                 kubelet is posting ready status
Addresses:
  InternalIP:  192.168.49.4
  Hostname:    prod-k8s-m03
Capacity:
  cpu:                10
  ephemeral-storage:  61202244Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  hugepages-32Mi:     0
  hugepages-64Ki:     0
  memory:             8127560Ki
  pods:               110
Allocatable:
  cpu:                10
  ephemeral-storage:  61202244Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  hugepages-32Mi:     0
  hugepages-64Ki:     0
  memory:             8127560Ki
  pods:               110
System Info:
  Machine ID:                 6e067c15523b44138d8c55ad5abbfb5a
  System UUID:                6e067c15523b44138d8c55ad5abbfb5a
  Boot ID:                    706ec043-419c-40f1-a8b1-59673b612aa3
  Kernel Version:             6.10.14-linuxkit
  OS Image:                   Ubuntu 22.04.5 LTS
  Operating System:           linux
  Architecture:               arm64
  Container Runtime Version:  docker://27.4.1
  Kubelet Version:            v1.32.0
  Kube-Proxy Version:         v1.32.0
PodCIDR:                      10.244.2.0/24
PodCIDRs:                     10.244.2.0/24
Non-terminated Pods:          (3 in total)
  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
  kube-system                 kindnet-w4f6z                                          100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      31m
  kube-system                 kube-proxy-svnl4                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         31m
  mondoo-operator             mondoo-operator-controller-manager-868ccf859f-hthzc    100m (1%)     200m (2%)   70Mi (0%)        140Mi (1%)     30m
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests    Limits
  --------           --------    ------
  cpu                200m (2%)   300m (3%)
  memory             120Mi (1%)  190Mi (2%)
  ephemeral-storage  0 (0%)      0 (0%)
  hugepages-1Gi      0 (0%)      0 (0%)
  hugepages-2Mi      0 (0%)      0 (0%)
  hugepages-32Mi     0 (0%)      0 (0%)
  hugepages-64Ki     0 (0%)      0 (0%)
Events:
  Type    Reason                   Age                From             Message
  ----    ------                   ----               ----             -------
  Normal  Starting                 31m                kube-proxy
  Normal  Starting                 31m                kubelet          Starting kubelet.
  Normal  NodeHasSufficientMemory  31m (x2 over 31m)  kubelet          Node prod-k8s-m03 status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    31m (x2 over 31m)  kubelet          Node prod-k8s-m03 status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     31m (x2 over 31m)  kubelet          Node prod-k8s-m03 status is now: NodeHasSufficientPID
  Normal  NodeAllocatableEnforced  31m                kubelet          Updated Node Allocatable limit across pods
  Normal  NodeReady                31m                kubelet          Node prod-k8s-m03 status is now: NodeReady
  Normal  RegisteredNode           31m                node-controller  Node prod-k8s-m03 event: Registered Node prod-k8s-m03 in Controller
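For a quicker overview than the full describe dump, `kubectl get nodes` works; alternatively, node names and Ready conditions can be pulled out of the `kubectl describe nodes` text with a short script. This is a sketch parsing the text format shown above (the sample below is abbreviated from that output):

```python
import re

def node_ready_summary(describe_output: str) -> dict:
    """Map each node name to whether its Ready condition is True,
    parsed from `kubectl describe nodes` text output."""
    summary = {}
    current = None
    for line in describe_output.splitlines():
        m = re.match(r"Name:\s+(\S+)", line)
        if m:
            current = m.group(1)
            continue
        # Conditions rows are indented: "  Ready   True   <heartbeat> ..."
        m = re.match(r"\s+Ready\s+(True|False|Unknown)\s", line)
        if m and current:
            summary[current] = m.group(1) == "True"
    return summary

# Abbreviated sample in the same shape as the output above.
sample = """\
Name:               prod-k8s
Conditions:
  Ready            True    Wed, 29 Jan 2025 18:04:46 -0800   KubeletReady   kubelet is posting ready status
Name:               prod-k8s-m02
Conditions:
  Ready            True    Wed, 29 Jan 2025 18:04:57 -0800   KubeletReady   kubelet is posting ready status
"""
print(node_ready_summary(sample))  # → {'prod-k8s': True, 'prod-k8s-m02': True}
```

All three nodes in the output above report Ready=True, so from Kubernetes' point of view the cluster looks healthy even though only one node is being scanned.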
