Flatcar doesn't boot on OpenStack #15385

Closed
Wieneo opened this issue May 9, 2023 · 26 comments
Labels: kind/bug, lifecycle/rotten

Comments

Wieneo commented May 9, 2023

/kind bug

1. What kops version are you running? The command kops version will display this information.

Client version: 1.26.3

2. What Kubernetes version are you running? kubectl version will print the
version if a cluster is running or provide the Kubernetes version specified as
a kops flag.

Client Version: version.Info{Major:"1", Minor:"26", GitVersion:"v1.26.3", GitCommit:"9e644106593f3f4aa98f8a84b23db5fa378900bd", GitTreeState:"clean", BuildDate:"2023-03-15T13:33:11Z", GoVersion:"go1.19.7", Compiler:"gc", Platform:"darwin/arm64"}

Server Version: version.Info{Major:"1", Minor:"25", GitVersion:"v1.25.9", GitCommit:"a1a87a0a2bcd605820920c6b0e618a8ab7d117d4", GitTreeState:"clean", BuildDate:"2023-04-12T12:08:36Z", GoVersion:"go1.19.8", Compiler:"gc", Platform:"linux/amd64"}

3. What cloud provider are you using?
OpenStack

4. What commands did you run? What is the simplest way to reproduce this issue?

kops create cluster \
          --cloud openstack \
          --name flat-test.k8s.local \
          --state s3://kops-poc \
          --zones az1 \
          --master-zones az1 \
          --network-cidr 10.10.0.0/16 \
          --image "Flatcar Container Linux 3510.2.0" \
          --master-count=3 \
          --node-count=3 \
          --node-size SCS-16V:32:100 \
          --master-size SCS-8V:8:100 \
          --etcd-storage-type __DEFAULT__ \
          --api-loadbalancer-type public \
          --topology private \
          --ssh-public-key /tmp/id_rsa.pub \
          --networking calico \
          --os-ext-net ext01 \
          --os-octavia=true \
          --os-octavia-provider="amphora"

kops update cluster --name flat-test.k8s.local --yes --admin
kops validate cluster --wait 15m --name flat-test.k8s.local

-> Timeout

5. What happened after the commands executed?
Validation of the cluster never succeeds as systemd bootup of instances fails.
A look at the console of the instances reveals that Flatcar's ignition-fetch.service fails to start:

error at line 1 col 2: invalid character 'C' looking for beginning of value
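
The error suggests that Ignition tried to parse the userdata as JSON and stopped at the leading "C" of a MIME "Content-Type:" header (see the kOps-generated payload in section 8). A rough way to confirm what the instance actually received, assuming a config drive is attached (otherwise query the metadata service), is:

# Mount the OpenStack config drive read-only and peek at the userdata.
sudo mount -o ro /dev/disk/by-label/config-2 /mnt
head -c 120 /mnt/openstack/latest/user_data
# Alternatively, via the metadata service:
curl -s http://169.254.169.254/openstack/latest/user_data | head -c 120
# A MIME multipart payload starts with "Content-Type: multipart/mixed; ...", which is
# not valid JSON, hence the parse error at line 1, column 2.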

6. What did you expect to happen?
Flatcar boots up normally.

7. Please provide your cluster manifest. Execute
kops get --name my.example.com -o yaml to display your cluster manifest.
You may want to remove your cluster name and other sensitive information.

apiVersion: kops.k8s.io/v1alpha2
kind: Cluster
metadata:
  creationTimestamp: null
  generation: 1
  name: flat-test.k8s.local
spec:
  api:
    loadBalancer:
      type: Public
  authorization:
    rbac: {}
  channel: stable
  cloudConfig:
    openstack:
      blockStorage:
        bs-version: v3
        ignore-volume-az: false
      loadbalancer:
        floatingNetwork: ext01
        floatingNetworkID: ce897d51-94d9-4d00-bff6-bf7589a65993
        method: ROUND_ROBIN
        provider: amphora
        useOctavia: true
      monitor:
        delay: 1m
        maxRetries: 3
        timeout: 30s
      router:
        externalNetwork: ext01
  cloudProvider: openstack
  configBase: s3://kops-poc/flat-test.k8s.local
  etcdClusters:
  - cpuRequest: 200m
    etcdMembers:
    - instanceGroup: control-plane-az1-1
      name: etcd-1
      volumeType: __DEFAULT__
    - instanceGroup: control-plane-az1-2
      name: etcd-2
      volumeType: __DEFAULT__
    - instanceGroup: control-plane-az1-3
      name: etcd-3
      volumeType: __DEFAULT__
    manager:
      env:
      - name: ETCD_LISTEN_METRICS_URLS
        value: http://0.0.0.0:8081
      - name: ETCD_METRICS
        value: basic
    memoryRequest: 100Mi
    name: main
  - cpuRequest: 100m
    etcdMembers:
    - instanceGroup: control-plane-az1-1
      name: etcd-1
      volumeType: __DEFAULT__
    - instanceGroup: control-plane-az1-2
      name: etcd-2
      volumeType: __DEFAULT__
    - instanceGroup: control-plane-az1-3
      name: etcd-3
      volumeType: __DEFAULT__
    manager:
      env:
      - name: ETCD_LISTEN_METRICS_URLS
        value: http://0.0.0.0:8082
      - name: ETCD_METRICS
        value: basic
    memoryRequest: 100Mi
    name: events
  iam:
    allowContainerRegistry: true
    legacy: false
  kubelet:
    anonymousAuth: false
  kubernetesApiAccess:
  - 0.0.0.0/0
  kubernetesVersion: 1.25.9
  masterPublicName: api.flat-test.k8s.local
  networkCIDR: 10.10.0.0/16
  networking:
    calico: {}
  nodePortAccess:
  - 10.10.0.0/16
  nonMasqueradeCIDR: 100.64.0.0/10
  sshAccess:
  - 0.0.0.0/0
  subnets:
  - cidr: 10.10.32.0/19
    name: az1
    type: Private
    zone: az1
  - cidr: 10.10.0.0/22
    name: utility-az1
    type: Private
    zone: az1
  topology:
    dns:
      type: Public
    masters: private
    nodes: private

---

apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: "2023-05-09T07:07:32Z"
  generation: 1
  labels:
    kops.k8s.io/cluster: flat-test.k8s.local
  name: control-plane-az1-1
spec:
  image: Flatcar Container Linux 3510.2.0
  machineType: SCS-8V:8:100
  maxSize: 1
  minSize: 1
  nodeLabels:
    kops.k8s.io/instancegroup: control-plane-az1-1
  role: Master
  subnets:
  - az1

---

apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: "2023-05-09T07:07:32Z"
  generation: 1
  labels:
    kops.k8s.io/cluster: flat-test.k8s.local
  name: control-plane-az1-2
spec:
  image: Flatcar Container Linux 3510.2.0
  machineType: SCS-8V:8:100
  maxSize: 1
  minSize: 1
  nodeLabels:
    kops.k8s.io/instancegroup: control-plane-az1-2
  role: Master
  subnets:
  - az1

---

apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: "2023-05-09T07:07:32Z"
  generation: 1
  labels:
    kops.k8s.io/cluster: flat-test.k8s.local
  name: control-plane-az1-3
spec:
  image: Flatcar Container Linux 3510.2.0
  machineType: SCS-8V:8:100
  maxSize: 1
  minSize: 1
  nodeLabels:
    kops.k8s.io/instancegroup: control-plane-az1-3
  role: Master
  subnets:
  - az1

---

apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: "2023-05-09T07:07:32Z"
  generation: 1
  labels:
    kops.k8s.io/cluster: flat-test.k8s.local
  name: nodes-az1
spec:
  image: Flatcar Container Linux 3510.2.0
  machineType: SCS-16V:32:100
  maxSize: 3
  minSize: 3
  nodeLabels:
    kops.k8s.io/instancegroup: nodes-az1
  packages:
  - nfs-common
  role: Node
  subnets:
  - az1

8. Anything else do we need to know?
I compared the user data generated by kOps and other tools (Gardener) and they appear to use completely different formats.
kOps:

Content-Type: multipart/mixed; boundary="MIMEBOUNDARY"
MIME-Version: 1.0

--MIMEBOUNDARY
Content-Disposition: attachment; filename="nodeup.sh"
Content-Transfer-Encoding: 7bit
Content-Type: text/x-shellscript
Mime-Version: 1.0

#!/bin/bash
set -o errexit
set -o nounset
set -o pipefail

NODEUP_URL_AMD64=https://artifacts.k8s.io/binaries/kops/1.26.3/linux/amd64/nodeup,https://github.com/kubernetes/kops/releases/download/v1.26.3/nodeup-linux-amd64
NODEUP_HASH_AMD64=973ba5b414c8c702a1c372d4c37f274f44315b28c52fb81ecfd19b68c98461de
NODEUP_URL_ARM64=https://artifacts.k8s.io/binaries/kops/1.26.3/linux/arm64/nodeup,https://github.com/kubernetes/kops/releases/download/v1.26.3/nodeup-linux-arm64
NODEUP_HASH_ARM64=cf36d2300445fc53052348e29f57749444e8d03b36fa4596208275e6c300b720

export OS_APPLICATION_CREDENTIAL_ID='REDACTED'
export OS_APPLICATION_CREDENTIAL_SECRET='REDACTED'
export OS_AUTH_URL='https://intern1.api.pco.get-cloud.io:5000'
export OS_DOMAIN_ID=''
export OS_DOMAIN_NAME=''
export OS_PROJECT_DOMAIN_ID=''
export OS_PROJECT_DOMAIN_NAME=''
export OS_PROJECT_ID=''
export OS_PROJECT_NAME=''
export OS_REGION_NAME='intern1'
export OS_TENANT_ID=''
export OS_TENANT_NAME=''
export S3_ACCESS_KEY_ID=REDACTED
export S3_ENDPOINT=https://de-2.s3.psmanaged.com
export S3_REGION=
export S3_SECRET_ACCESS_KEY=REDACTED




sysctl -w net.core.rmem_max=16777216 || true
sysctl -w net.core.wmem_max=16777216 || true
sysctl -w net.ipv4.tcp_rmem='4096 87380 16777216' || true
sysctl -w net.ipv4.tcp_wmem='4096 87380 16777216' || true


function ensure-install-dir() {
  INSTALL_DIR="/opt/kops"
  # On ContainerOS, we install under /var/lib/toolbox; /opt is ro and noexec
  if [[ -d /var/lib/toolbox ]]; then
    INSTALL_DIR="/var/lib/toolbox/kops"
  fi
  mkdir -p ${INSTALL_DIR}/bin
  mkdir -p ${INSTALL_DIR}/conf
  cd ${INSTALL_DIR}
}

# Retry a download until we get it. args: name, sha, urls
download-or-bust() {
  local -r file="$1"
  local -r hash="$2"
  local -r urls=( $(split-commas "$3") )

  if [[ -f "${file}" ]]; then
    if ! validate-hash "${file}" "${hash}"; then
      rm -f "${file}"
    else
      return 0
    fi
  fi

  while true; do
    for url in "${urls[@]}"; do
      commands=(
        "curl -f --compressed -Lo "${file}" --connect-timeout 20 --retry 6 --retry-delay 10"
        "wget --compression=auto -O "${file}" --connect-timeout=20 --tries=6 --wait=10"
        "curl -f -Lo "${file}" --connect-timeout 20 --retry 6 --retry-delay 10"
        "wget -O "${file}" --connect-timeout=20 --tries=6 --wait=10"
      )
      for cmd in "${commands[@]}"; do
        echo "Attempting download with: ${cmd} {url}"
        if ! (${cmd} "${url}"); then
          echo "== Download failed with ${cmd} =="
          continue
        fi
        if ! validate-hash "${file}" "${hash}"; then
          echo "== Hash validation of ${url} failed. Retrying. =="
          rm -f "${file}"
        else
          echo "== Downloaded ${url} (SHA256 = ${hash}) =="
          return 0
        fi
      done
    done

    echo "All downloads failed; sleeping before retrying"
    sleep 60
  done
}

validate-hash() {
  local -r file="$1"
  local -r expected="$2"
  local actual

  actual=$(sha256sum ${file} | awk '{ print $1 }') || true
  if [[ "${actual}" != "${expected}" ]]; then
    echo "== ${file} corrupted, hash ${actual} doesn't match expected ${expected} =="
    return 1
  fi
}

function split-commas() {
  echo $1 | tr "," "\n"
}

function download-release() {
  case "$(uname -m)" in
  x86_64*|i?86_64*|amd64*)
    NODEUP_URL="${NODEUP_URL_AMD64}"
    NODEUP_HASH="${NODEUP_HASH_AMD64}"
    ;;
  aarch64*|arm64*)
    NODEUP_URL="${NODEUP_URL_ARM64}"
    NODEUP_HASH="${NODEUP_HASH_ARM64}"
    ;;
  *)
    echo "Unsupported host arch: $(uname -m)" >&2
    exit 1
    ;;
  esac

  cd ${INSTALL_DIR}/bin
  download-or-bust nodeup "${NODEUP_HASH}" "${NODEUP_URL}"

  chmod +x nodeup

  echo "Running nodeup"
  # We can't run in the foreground because of https://github.com/docker/docker/issues/23793
  ( cd ${INSTALL_DIR}/bin; ./nodeup --install-systemd-unit --conf=${INSTALL_DIR}/conf/kube_env.yaml --v=8  )
}

####################################################################################

/bin/systemd-machine-id-setup || echo "failed to set up ensure machine-id configured"

echo "== nodeup node config starting =="
ensure-install-dir

cat > conf/cluster_spec.yaml << '__EOF_CLUSTER_SPEC'
cloudConfig:
  manageStorageClasses: true
containerRuntime: containerd
containerd:
  logLevel: info
  runc:
    version: 1.1.4
  version: 1.6.18
docker:
  skipInstall: true
encryptionConfig: null
etcdClusters:
  events:
    cpuRequest: 100m
    manager:
      env:
      - name: ETCD_LISTEN_METRICS_URLS
        value: http://0.0.0.0:8082
      - name: ETCD_METRICS
        value: basic
    memoryRequest: 100Mi
    version: 3.5.7
  main:
    cpuRequest: 200m
    manager:
      env:
      - name: ETCD_LISTEN_METRICS_URLS
        value: http://0.0.0.0:8081
      - name: ETCD_METRICS
        value: basic
    memoryRequest: 100Mi
    version: 3.5.7
kubeAPIServer:
  allowPrivileged: true
  anonymousAuth: false
  apiAudiences:
  - kubernetes.svc.default
  apiServerCount: 3
  authorizationMode: Node,RBAC
  bindAddress: 0.0.0.0
  cloudProvider: external
  enableAdmissionPlugins:
  - NamespaceLifecycle
  - LimitRanger
  - ServiceAccount
  - DefaultStorageClass
  - DefaultTolerationSeconds
  - MutatingAdmissionWebhook
  - ValidatingAdmissionWebhook
  - NodeRestriction
  - ResourceQuota
  etcdServers:
  - https://127.0.0.1:4001
  etcdServersOverrides:
  - /events#https://127.0.0.1:4002
  image: registry.k8s.io/kube-apiserver:v1.25.9@sha256:c8518e64657ff2b04501099d4d8d9dd402237df86a12f7cc09bf72c080fd9608
  kubeletPreferredAddressTypes:
  - InternalIP
  - Hostname
  - ExternalIP
  logLevel: 2
  requestheaderAllowedNames:
  - aggregator
  requestheaderExtraHeaderPrefixes:
  - X-Remote-Extra-
  requestheaderGroupHeaders:
  - X-Remote-Group
  requestheaderUsernameHeaders:
  - X-Remote-User
  securePort: 443
  serviceAccountIssuer: https://api.internal.flat-test.k8s.local
  serviceAccountJWKSURI: https://api.internal.flat-test.k8s.local/openid/v1/jwks
  serviceClusterIPRange: 100.64.0.0/13
  storageBackend: etcd3
kubeControllerManager:
  allocateNodeCIDRs: true
  attachDetachReconcileSyncPeriod: 1m0s
  cloudProvider: external
  clusterCIDR: 100.96.0.0/11
  clusterName: flat-test.k8s.local
  configureCloudRoutes: false
  image: registry.k8s.io/kube-controller-manager:v1.25.9@sha256:23a76a71f2b39189680def6edc30787e40a2fe66e29a7272a56b426d9b116229
  leaderElection:
    leaderElect: true
  logLevel: 2
  useServiceAccountCredentials: true
kubeProxy:
  clusterCIDR: 100.96.0.0/11
  cpuRequest: 100m
  image: registry.k8s.io/kube-proxy:v1.25.9@sha256:42fe09174a5eb6b8bace3036fe253ed7f06be31d9106211dcc4a09f9fa99c79a
  logLevel: 2
kubeScheduler:
  image: registry.k8s.io/kube-scheduler:v1.25.9@sha256:19712fa46b8277aafd416b75a3a3d90e133f44b8a4dae08e425279085dc29f7e
  leaderElection:
    leaderElect: true
  logLevel: 2
kubelet:
  anonymousAuth: false
  cgroupDriver: systemd
  cgroupRoot: /
  cloudProvider: external
  clusterDNS: 100.64.0.10
  clusterDomain: cluster.local
  enableDebuggingHandlers: true
  evictionHard: memory.available<100Mi,nodefs.available<10%,nodefs.inodesFree<5%,imagefs.available<10%,imagefs.inodesFree<5%
  kubeconfigPath: /var/lib/kubelet/kubeconfig
  logLevel: 2
  podInfraContainerImage: registry.k8s.io/pause:3.6@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db
  podManifestPath: /etc/kubernetes/manifests
  protectKernelDefaults: true
  registerSchedulable: true
  shutdownGracePeriod: 30s
  shutdownGracePeriodCriticalPods: 10s
  volumePluginDirectory: /var/lib/kubelet/volumeplugins/
masterKubelet:
  anonymousAuth: false
  cgroupDriver: systemd
  cgroupRoot: /
  cloudProvider: external
  clusterDNS: 100.64.0.10
  clusterDomain: cluster.local
  enableDebuggingHandlers: true
  evictionHard: memory.available<100Mi,nodefs.available<10%,nodefs.inodesFree<5%,imagefs.available<10%,imagefs.inodesFree<5%
  kubeconfigPath: /var/lib/kubelet/kubeconfig
  logLevel: 2
  podInfraContainerImage: registry.k8s.io/pause:3.6@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db
  podManifestPath: /etc/kubernetes/manifests
  protectKernelDefaults: true
  registerSchedulable: true
  shutdownGracePeriod: 30s
  shutdownGracePeriodCriticalPods: 10s
  volumePluginDirectory: /var/lib/kubelet/volumeplugins/

__EOF_CLUSTER_SPEC

cat > conf/kube_env.yaml << '__EOF_KUBE_ENV'
CloudProvider: openstack
ConfigBase: s3://kops-poc/flat-test.k8s.local
InstanceGroupName: control-plane-az1-1
InstanceGroupRole: ControlPlane
NodeupConfigHash: 4Zb8f/LBOyZeX/RqQIDBgk8UmkTUd+ANhlam8okLPgU=

__EOF_KUBE_ENV

download-release
echo "== nodeup node config done =="

Gardener:

#cloud-config

coreos:
  update:
    reboot_strategy: "off"
  units:
  - name: update-engine.service
    mask: true
    command: stop
  - name: locksmithd.service
    mask: true
    command: stop
  - name: cloud-config-downloader.service
    enable: true
    content: |-
      [Unit]
      Description=Downloads the actual cloud config from the Shoot API server and executes it
      After=docker.service docker.socket
      Wants=docker.socket
      [Service]
      Restart=always
      RestartSec=30
      RuntimeMaxSec=1200
      EnvironmentFile=/etc/environment
      ExecStart=/var/lib/cloud-config-downloader/download-cloud-config.sh
      [Install]
      WantedBy=multi-user.target
    command: start
  - name: run-command.service
    enable: true
    content: |
      [Unit]
      Description=Oneshot unit used to run a script on node start-up.
      Before=containerd.service kubelet.service
      [Service]
      Type=oneshot
      EnvironmentFile=/etc/environment
      ExecStart=/opt/bin/run-command.sh
      [Install]
      WantedBy=containerd.service kubelet.service
    command: start
  - name: enable-cgroupsv2.service
    enable: true
    content: |
      [Unit]
      Description=Oneshot unit used to patch the kubelet config for cgroupsv2.
      Before=containerd.service kubelet.service
      [Service]
      Type=oneshot
      EnvironmentFile=/etc/environment
      ExecStart=/opt/bin/configure-cgroupsv2.sh
      [Install]
      WantedBy=containerd.service kubelet.service
    command: start
write_files:
- encoding: b64
  content: REDACTED
  path: /var/lib/cloud-config-downloader/credentials/server
  permissions: "644"
- encoding: b64
  content: REDACTED
  path: /var/lib/cloud-config-downloader/credentials/ca.crt
  permissions: "644"
- encoding: b64
  content: REDACTED
  path: /var/lib/cloud-config-downloader/download-cloud-config.sh
  permissions: "744"
- content: REDACTED
  path: /var/lib/cloud-config-downloader/credentials/bootstrap-token
  permissions: "644"
- content: |
    [Service]
    SyslogIdentifier=containerd
    ExecStart=
    ExecStart=/bin/bash -c 'PATH="/run/torcx/unpack/docker/bin:$PATH" /run/torcx/unpack/docker/bin/containerd --config /etc/containerd/config.toml'
  path: /etc/systemd/system/containerd.service.d/11-exec_config.conf
  permissions: "0644"
- content: |
    #!/bin/bash

    CONTAINERD_CONFIG=/etc/containerd/config.toml

    ALTERNATE_LOGROTATE_PATH="/usr/bin/logrotate"

    # initialize default containerd config if does not exist
    if [ ! -s "$CONTAINERD_CONFIG" ]; then
        mkdir -p /etc/containerd/
        /run/torcx/unpack/docker/bin/containerd config default > "$CONTAINERD_CONFIG"
        chmod 0644 "$CONTAINERD_CONFIG"
    fi

    # if cgroups v2 are used, patch containerd configuration to use systemd cgroup driver
    if [[ -e /sys/fs/cgroup/cgroup.controllers ]]; then
        sed -i "s/SystemdCgroup *= *false/SystemdCgroup = true/" "$CONTAINERD_CONFIG"
    fi

    # provide kubelet with access to the containerd binaries in /run/torcx/unpack/docker/bin
    if [ ! -s /etc/systemd/system/kubelet.service.d/environment.conf ]; then
        mkdir -p /etc/systemd/system/kubelet.service.d/
        cat <<EOF | tee /etc/systemd/system/kubelet.service.d/environment.conf
    [Service]
    Environment="PATH=/run/torcx/unpack/docker/bin:$PATH"
    EOF
        chmod 0644 /etc/systemd/system/kubelet.service.d/environment.conf
        systemctl daemon-reload
    fi

    # some flatcar versions have logrotate at /usr/bin instead of /usr/sbin
    if [ -f "$ALTERNATE_LOGROTATE_PATH" ]; then
        sed -i "s;/usr/sbin/logrotate;$ALTERNATE_LOGROTATE_PATH;" /etc/systemd/system/containerd-logrotate.service
        systemctl daemon-reload
    fi
  path: /opt/bin/run-command.sh
  permissions: "0755"
- content: |
    #!/bin/bash

    KUBELET_CONFIG=/var/lib/kubelet/config/kubelet

    if [[ -e /sys/fs/cgroup/cgroup.controllers ]]; then
            echo "CGroups V2 are used!"
            echo "=> Patch kubelet to use systemd as cgroup driver"
            sed -i "s/cgroupDriver: cgroupfs/cgroupDriver: systemd/" "$KUBELET_CONFIG"
    else
            echo "No CGroups V2 used by system"
    fi
  path: /opt/bin/configure-cgroupsv2.sh
  permissions: "0755"

@k8s-ci-robot added the kind/bug label May 9, 2023

Wieneo commented May 9, 2023

I found part of the problem!
I had .spec.additionalUserData set:

additionalUserData:
  - name: ps_cloud_init.txt
    type: text/cloud-config
    content: |
       REDACTED

Without additionalUserData set, the instances boot but don't join the cluster.

hakman commented May 9, 2023

Without additionalUserData set, the instances boot but don't join the cluster.

Did you try that with a fresh cluster?
What errors do you see during the boot sequence?

Wieneo commented May 9, 2023

I tried it with a fresh cluster.
The validate command fails with the following output (tailed):

KIND	NAME								MESSAGE
Machine	3d8b0b26-5470-42ca-9891-6feccd2a69aa				machine "3d8b0b26-5470-42ca-9891-6feccd2a69aa" has not yet joined cluster
Machine	436828e0-2463-4e67-86d2-8f02d37402c9				machine "436828e0-2463-4e67-86d2-8f02d37402c9" has not yet joined cluster
Machine	a2985f4d-9817-468d-b052-6d5addf58613				machine "a2985f4d-9817-468d-b052-6d5addf58613" has not yet joined cluster
Machine	a7dc858f-1928-4bdb-8717-59067894f05f				machine "a7dc858f-1928-4bdb-8717-59067894f05f" has not yet joined cluster
Machine	ba9f7f63-b455-4e1a-9586-afca4a10a4e9				machine "ba9f7f63-b455-4e1a-9586-afca4a10a4e9" has not yet joined cluster
Machine	d3528f13-474f-437c-bf3f-b7cae5113831				machine "d3528f13-474f-437c-bf3f-b7cae5113831" has not yet joined cluster
Pod	kube-system/calico-kube-controllers-59d58646f4-pkbkc		system-cluster-critical pod "calico-kube-controllers-59d58646f4-pkbkc" is pending
Pod	kube-system/coredns-7cc468f8df-sj9xb				system-cluster-critical pod "coredns-7cc468f8df-sj9xb" is pending
Pod	kube-system/coredns-autoscaler-5fc98c7959-49754			system-cluster-critical pod "coredns-autoscaler-5fc98c7959-49754" is pending
Pod	kube-system/csi-cinder-controllerplugin-56d6db9c57-zf4tc	system-cluster-critical pod "csi-cinder-controllerplugin-56d6db9c57-zf4tc" is pending
Pod	kube-system/dns-controller-74854cbb7f-qcm74			system-cluster-critical pod "dns-controller-74854cbb7f-qcm74" is pending
Validation Failed
W0509 11:04:25.234828     167 validate_cluster.go:232] (will retry): cluster not yet healthy
Error: validation failed: wait time exceeded during validation

I'm not quite sure what causes the error as the nodes seem fine:

master-az1-1-er3ovv ~ # systemctl --all --failed
UNIT LOAD ACTIVE SUB DESCRIPTION
0 loaded units listed

master-az1-1-er3ovv ~ # ctr -n k8s.io c ls
CONTAINER                                                           IMAGE                                                                                                                      RUNTIME
07246fe3adda81a248699108803e116b5260b4c7c391679d4e343967f0e25831    registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db                              io.containerd.runc.v2
12b4eddacab946e874dec0675e80c3e7cd81a52755ada184d4d9b7f9d6bf8330    registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db                              io.containerd.runc.v2
12e60b40ad10e604b863ae684c41271af426982a03aa97786cd3dafce0b6a6a4    registry.k8s.io/kube-controller-manager@sha256:23a76a71f2b39189680def6edc30787e40a2fe66e29a7272a56b426d9b116229            io.containerd.runc.v2
4688c12cdfcf9366fc8523409115494823a24b4f1ba0ccdb026d1230cef67e27    registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db                              io.containerd.runc.v2
54a21defe868f2f86bb588b5adb69d673879b06fd33f906b2fa6b558e6a38477    registry.k8s.io/etcdadm/etcd-manager@sha256:5ffb3f7cade4ae1d8c952251abb0c8bdfa8d4d9acb2c364e763328bd6f3d06aa               io.containerd.runc.v2
643563a3d3e4ab40fa49b632c144d918e9cad9d94e4bcd5d47e285923060024a    registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db                              io.containerd.runc.v2
678db0d6c86b5b694707dca9d0300d8d2107be82abb4fa36604e5c7799c139dd    registry.k8s.io/kube-controller-manager@sha256:23a76a71f2b39189680def6edc30787e40a2fe66e29a7272a56b426d9b116229            io.containerd.runc.v2
83da13e648f1d3b52dadfccb6f05c9cc9d7d28849aefd8797e0b70630daed1ca    registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db                              io.containerd.runc.v2
8bf86f696e1f9cc556100df803fb425217c0216af702d03722b46be078a11b40    registry.k8s.io/kube-apiserver@sha256:c8518e64657ff2b04501099d4d8d9dd402237df86a12f7cc09bf72c080fd9608                     io.containerd.runc.v2
8e41f4eaa58fce83da9d6cd8a421efef04df9176d98f9e8f85bc48623fbefccd    registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db                              io.containerd.runc.v2
972810dc74091a0cb8bca9518e5cd401c5e2ba2595780e43cb3a9d9e78dc8fcd    registry.k8s.io/etcdadm/etcd-manager@sha256:5ffb3f7cade4ae1d8c952251abb0c8bdfa8d4d9acb2c364e763328bd6f3d06aa               io.containerd.runc.v2
af2af2a34bf1a442213495428cb00b35047512f115dec94dad92e776f8a75e06    registry.k8s.io/kube-proxy@sha256:42fe09174a5eb6b8bace3036fe253ed7f06be31d9106211dcc4a09f9fa99c79a                         io.containerd.runc.v2
c8feaf253772950062b921e4f59369aae6d988940b79fa32da14dc9977681bb0    registry.k8s.io/kops/kube-apiserver-healthcheck@sha256:547c6bf1edc798e64596aa712a5cfd5145df0f380e464437a9313c1f1ae29756    io.containerd.runc.v2
c9dfe8396146b76247b262085a7a701ac5ece72847fb72984d2778cb1d24b28d    registry.k8s.io/kube-scheduler@sha256:19712fa46b8277aafd416b75a3a3d90e133f44b8a4dae08e425279085dc29f7e                     io.containerd.runc.v2
f6f69768c5571fe745d63c7ba0022ed91b010594363e3fb3d1a037ae358e02c5    registry.k8s.io/kube-apiserver@sha256:c8518e64657ff2b04501099d4d8d9dd402237df86a12f7cc09bf72c080fd9608                     io.containerd.runc.v2

Kubelet constantly logs the following error:

"Error getting node" err="node \"master-az1-1-er3ovv.novalocal\" not found"

Please let me know if you need more logs or info.

hakman commented May 9, 2023

This means that your control plane is up and running.
Maybe SSH to a node and look at the kops-configuration.service and kubelet.service logs.
Also, this may help: https://kops.sigs.k8s.io/operations/troubleshoot.
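
For example, something along these lines on an affected node (service names as used on kOps-provisioned nodes):

journalctl -u kops-configuration.service --no-pager | tail -n 200
journalctl -u kubelet.service --no-pager | tail -n 200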

Wieneo commented May 9, 2023

I looked into it and can't find the issue :(
I uploaded the logs of the mentioned services: https://gist.github.com/Wieneo/47cddf4dca42e3f8e46b9925b3e37961

hakman commented May 9, 2023

Could you try creating the cluster with --dns=none?

hakman commented May 9, 2023

@zetaab Any idea what might be wrong here?

zetaab commented May 9, 2023

No idea, I have not used Flatcar (we are using Ubuntu). I can try it tomorrow.

Wieneo commented May 9, 2023

Creating the cluster with --dns=none doesn't seem to fix the issue.

hakman commented May 9, 2023

Creating the cluster with --dns=none doesn't seem to fix the issue.

The goal is to understand why the failure happens. You are the only person with access to the logs.
My guess is that if you connect to the control plane, you should be able to see which pods are running, and the CCM or API server logs should contain some errors with hints.

gabriel-samfira commented May 9, 2023

The issue seems to stem from the fact that Flatcar uses the FQDN of the node as its hostname. The node registers itself using the short name, and then tries to authenticate itself against the control plane using the FQDN. This leads to errors like the following in kube-apiserver.log:

I0509 13:43:54.264397      11 node_authorizer.go:285] NODE DENY: 'nodes-nova-4fyhog' &authorizer.AttributesRecord{User:(*user.DefaultInfo)(0xc00887b7c0), Verb:"get", Namespace:"", APIGroup:"storage.k8s.io", APIVersion:"v1", Resource:"csinodes", Subresource:"", Name:"nodes-nova-4fyhog.novalocal", ResourceRequest:true, Path:"/apis/storage.k8s.io/v1/csinodes/nodes-nova-4fyhog.novalocal"}
I0509 13:43:55.264465      11 node_authorizer.go:285] NODE DENY: 'nodes-nova-4fyhog' &authorizer.AttributesRecord{User:(*user.DefaultInfo)(0xc008a04ec0), Verb:"get", Namespace:"", APIGroup:"storage.k8s.io", APIVersion:"v1", Resource:"csinodes", Subresource:"", Name:"nodes-nova-4fyhog.novalocal", ResourceRequest:true, Path:"/apis/storage.k8s.io/v1/csinodes/nodes-nova-4fyhog.novalocal"}
I0509 13:43:56.264000      11 node_authorizer.go:285] NODE DENY: 'nodes-nova-4fyhog' &authorizer.AttributesRecord{User:(*user.DefaultInfo)(0xc0088a83c0), Verb:"get", Namespace:"", APIGroup:"storage.k8s.io", APIVersion:"v1", Resource:"csinodes", Subresource:"", Name:"nodes-nova-4fyhog.novalocal", ResourceRequest:true, Path:"/apis/storage.k8s.io/v1/csinodes/nodes-nova-4fyhog.novalocal"}
I0509 13:43:57.265231      11 node_authorizer.go:285] NODE DENY: 'nodes-nova-4fyhog' &authorizer.AttributesRecord{User:(*user.DefaultInfo)(0xc008978700), Verb:"get", Namespace:"", APIGroup:"storage.k8s.io", APIVersion:"v1", Resource:"csinodes", Subresource:"", Name:"nodes-nova-4fyhog.novalocal", ResourceRequest:true, Path:"/apis/storage.k8s.io/v1/csinodes/nodes-nova-4fyhog.novalocal"}
I0509 13:43:58.265137      11 node_authorizer.go:285] NODE DENY: 'nodes-nova-4fyhog' &authorizer.AttributesRecord{User:(*user.DefaultInfo)(0xc008a35d80), Verb:"get", Namespace:"", APIGroup:"storage.k8s.io", APIVersion:"v1", Resource:"csinodes", Subresource:"", Name:"nodes-nova-4fyhog.novalocal", ResourceRequest:true, Path:"/apis/storage.k8s.io/v1/csinodes/nodes-nova-4fyhog.novalocal"}
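
A quick way to spot the mismatch on an affected node is to compare the static hostname with the name in the kubelet serving certificate (the cert path assumes the kOps default, as also shown in a later comment in this thread):

hostnamectl --static                                                   # e.g. nodes-nova-4fyhog.novalocal
openssl x509 -in /srv/kubernetes/kubelet-server.crt -noout -subject    # e.g. subject=CN = nodes-nova-4fyhog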

I accessed the node and ran the following:

hostnamectl set-hostname "nodes-nova-x04eyu"
systemctl restart systemd-networkd
systemctl restart kubelet

After which the node joined:

root@openstack-antelope:~# kubectl get nodes
NAME                                  STATUS   ROLES           AGE     VERSION
control-plane-nova-rruwba.novalocal   Ready    control-plane   6m35s   v1.26.3
nodes-nova-x04eyu                     Ready    node            67s     v1.26.3

This seems to be an older issue that manifested on AWS as well: flatcar/Flatcar#707

Not sure if this is something that should be fixed in Flatcar or in kops.
I will open a PR to address this in Flatcar in the coming days.

As a side note, kops validate cluster continues to fail with:

root@openstack-antelope:~# kops validate cluster
Using cluster from kubectl context: my-cluster.k8s.local

Validating cluster my-cluster.k8s.local

INSTANCE GROUPS
NAME			ROLE		MACHINETYPE	MIN	MAX	SUBNETS
control-plane-nova	ControlPlane	m1.medium	1	1	nova
nodes-nova		Node		m1.medium	1	1	nova

NODE STATUS
NAME			ROLE	READY
nodes-nova-x04eyu	node	True

VALIDATION ERRORS
KIND	NAME					MESSAGE
Machine	b776c5b9-85e2-423b-afb8-79c5b61883ef	machine "b776c5b9-85e2-423b-afb8-79c5b61883ef" has not yet joined cluster

Validation Failed
Error: validation failed: cluster not yet healthy

Even though the control plane node is up and Ready.

gabriel-samfira commented May 13, 2023

Hi folks,

A short update.

A fix for this issue has been merged in Flatcar and is now available in the nightly builds of the next alpha release. If you want to test it out, you can download it here:

https://bincache.flatcar-linux.net/images/amd64/3602.0.0/flatcar_production_openstack_image.img.bz2

Keep in mind this is not a stable release.

Thanks!

hakman commented May 13, 2023

Thanks for the update @gabriel-samfira. Any thoughts / info about additional userdata for cloudinit not working?

@gabriel-samfira

Flatcar is normally configured using Ignition during first boot. To maintain compatibility with cloud-init based environments, it also has its own agent, called coreos-cloudinit, that implements a subset of what cloud-init offers.

The additionalUserData feature in kops relies on the MIME multipart feature of cloud-init, which allows multiple files to be embedded in the userdata. This particular feature of cloud-init is not implemented in coreos-cloudinit.

There are two options to get this working. Either we implement multipart support in coreos-cloudinit or we add Ignition support in kops. Ignition is where most of the development is happening in Flatcar; it's the native way to configure it.
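
For context, an Ignition config is a single JSON document rather than a MIME multipart payload. A purely illustrative sketch (file name, path, contents and spec version are made up for this example; this is not something kOps generates today):

# Write an example Ignition config; this JSON would be passed verbatim as the
# instance userdata (e.g. openstack server create ... --user-data example-ignition.json).
cat > example-ignition.json <<'EOF'
{
  "ignition": { "version": "3.3.0" },
  "storage": {
    "files": [
      {
        "path": "/etc/example-from-ignition.conf",
        "mode": 420,
        "contents": { "source": "data:,hello%0A" }
      }
    ]
  }
}
EOF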

CC: @jepio @pothos

What do you think would be the best path forward?

pothos commented May 14, 2023

So far, the approach followed in similar efforts like CAPI support has been to use Ignition (Fedora CoreOS and other Ignition users will also benefit from that).

hakman commented May 15, 2023

At the moment, kOps doesn't have a way to know much about the distro image that is used before booting. It may be possible, but would require updating the implementation of all supported cloud providers. As things stand I see 3 possibilities:

  1. do nothing - continue as is, without supporting MIME multipart for Flatcar (eventually someone will contribute this feature if important enough for their use case)
  2. make user choose - add an option to specify the userdata format (either cloud-init or Ignition)
  3. add support for MIME multipart for Flatcar (not quite sure how big the effort would be here)

Any thoughts about 2 & 3?

@gabriel-samfira

I think we can have both 2 & 3.

The short-term solution would be to have MIME multipart support in coreos-cloudinit, but long term we will need to add Ignition support to kops, as that is the idiomatic (and in some cases, the only) way to configure distros that use Ignition.

I will open a separate issue for adding Ignition support in kops.

The immediate issue reported here should be fixed (sans the additionalUserData option) once a stable release of Flatcar is cut with the above-mentioned fix. @Wieneo could you test out the image I linked to and confirm it works for you?

@gabriel-samfira

A PR was created to add multipart support to coreos-cloudinit here:

hakman commented May 18, 2023

Thanks @gabriel-samfira. I appreciate the update.

Wieneo commented May 30, 2023

I tested the newest Flatcar alpha image and kOps bootstrapped the cluster successfully. 👍

@gabriel-samfira

Multipart MIME support has been merged into the main branch of Flatcar. This will probably be part of the next alpha release.

This means you'll be able to use additionalUserData when deploying with kops, as long as you only use the subset of cloud-config that coreos-cloudinit currently supports.
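
For example (an untested sketch; file names, paths and content here are illustrative), an additionalUserData entry that sticks to directives coreos-cloudinit implements, such as write_files, would look like:

additionalUserData:
- name: extra-config.txt
  type: text/cloud-config
  content: |
    #cloud-config
    write_files:
    - path: /etc/example.conf
      permissions: "0644"
      content: |
        hello from additionalUserData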

hakman commented Jun 9, 2023

Excellent. Thanks a lot @gabriel-samfira!

zadjadr commented Aug 3, 2023

I encountered a similar issue with Flatcar (5.15.119-flatcar) using kOps 1.27 on OpenStack.

The static hostname assigned to the hosts has the .openstack.internal suffix, while the K8s certificates that are created don't include it in the subject name.

So you get errors like this on the worker nodes:

Aug 03 10:16:16 nodes-es1-gaj3jq.openstack.internal kubelet[1475]: I0803 10:16:16.585379    1475 csi_plugin.go:913] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "nodes-es1-gaj3jq.openstack.internal" is forbidden: User "system:node:nodes-es1-gaj3jq" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope: can only access CSINode with the same name as the requesting node

After manually changing the hostname, the node connects to the cluster without issue.

core@nodes-es1-gaj3jq ~ $ openssl x509 -in /srv/kubernetes/kubelet-server.crt -noout -text
Certificate:
    Data:
        Version: 3 (0x2)
        Signature Algorithm: sha256WithRSAEncryption
        Issuer: CN = kubernetes-ca
        Subject: CN = nodes-es1-gaj3jq

core@nodes-es1-gaj3jq ~ $ hostnamectl
 Static hostname: nodes-es1-gaj3jq.openstack.internal
       Icon name: computer-vm

After fix:

core@nodes-es1-gaj3jq ~ $ hostnamectl
 Static hostname: nodes-es1-gaj3jq
       Icon name: computer-vm

This issue is fixed with the Flatcar beta release 3602.0.0.

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label Jan 25, 2024

zadjadr commented Jan 26, 2024

From my side, this can be closed. The current Flatcar stable release (3760.2.0) works.

@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label Feb 25, 2024
@hakman closed this as completed Feb 25, 2024