[BUG] TLS handshake timeout running kubectl cluster-info #1380

Open
prasannavl opened this issue Nov 30, 2023 · 1 comment
Labels
bug Something isn't working

Comments

@prasannavl

prasannavl commented Nov 30, 2023

Env

$ uname -a
Linux pvl-x1c 6.5.0-13-generic #13-Ubuntu SMP PREEMPT_DYNAMIC Fri Nov  3 12:16:05 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
- Fresh OS install of Ubuntu 23.10
- apt install podman podman-docker
- Added the docker group to the user and restarted (not strictly needed, just to be sure)

# Install k3d  
#
# Using the nix package manager, but it shouldn't really matter - just documenting for completeness.
# Tried a manual install as well instead of the nix package manager.

- sudo apt install nix-bin
- nix-channel --add https://nixos.org/channels/nixpkgs-unstable nixpkgs
- nix-channel --update
- nix-env -i kubectl
- nix-env -i k3d

Also tried: k3d from a manual install without nix, just for the sake of it, though it shouldn't make a difference. Same issue.
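
For completeness, a quick sanity check that the installed binaries resolve and run (standard commands, nothing nix-specific):

command -v k3d kubectl      # confirm which binaries are on PATH
k3d version                 # prints the k3d and bundled k3s versions
kubectl version --client    # client-only check, doesn't contact a cluster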

What did you do

  • How was the cluster created?
    • sudo $(which k3d) cluster create local (sudo needed due to my docker socket access)
$ sudo $(which k3d) cluster create local
INFO[0000] Prep: Network                                
INFO[0000] Created network 'k3d-local'                  
INFO[0000] Created image volume k3d-local-images        
INFO[0000] Starting new tools node...                   
INFO[0000] Starting Node 'k3d-local-tools'              
INFO[0001] Creating node 'k3d-local-server-0'           
INFO[0001] Creating LoadBalancer 'k3d-local-serverlb'   
INFO[0002] Using the k3d-tools node to gather environment information 
INFO[0002] HostIP: using network gateway 10.89.1.1 address 
INFO[0002] Starting cluster 'local'                     
INFO[0002] Starting servers...                          
INFO[0002] Starting Node 'k3d-local-server-0'           
INFO[0006] All agents already running.                  
INFO[0006] Starting helpers...                          
INFO[0006] Starting Node 'k3d-local-serverlb'           
INFO[0012] Injecting records for hostAliases (incl. host.k3d.internal) and for 2 network members into CoreDNS configmap... 
INFO[0015] Cluster 'local' created successfully!        
INFO[0015] You can now use it like this:                
kubectl cluster-info
$ sudo docker ps
Emulate Docker CLI using podman. Create /etc/containers/nodocker to quiet msg.
CONTAINER ID  IMAGE                               COMMAND               CREATED        STATUS            PORTS                    NAMES
93e5ab0c3db1  docker.io/rancher/k3s:v1.21.7-k3s1  server --tls-san ...  2 minutes ago  Up 2 minutes ago                           k3d-local-server-0
be0815522492  ghcr.io/k3d-io/k3d-proxy:5.6.0                            2 minutes ago  Up 2 minutes ago  0.0.0.0:42131->6443/tcp  k3d-local-serverlb
  • What did you do afterwards?

  • Copy the config over to the user account, since k3d was run as root (see the kubeconfig sketch after this list)

    • cat /root/.kube/config > ~/.kube/config
    • Note: Also tried without this, and with sudo kubectl, just in case, so everything runs as root. Same issue; just documenting for completeness.
  • Run kubectl to connect to the cluster

    • kubectl cluster-info, or any other kubectl command

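As an aside: since the cluster was created as root, k3d itself can print the kubeconfig, which avoids reading /root directly. A sketch (k3d kubeconfig get is part of the k3d v5 CLI; the sudo wrapper matches this report's setup):

# Print the kubeconfig for 'local'; the redirect runs outside sudo,
# so ~ expands to the regular user's home
sudo $(which k3d) kubeconfig get local > ~/.kube/config
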
Screenshots or terminal output

$ kubectl cluster-info -v=10                                                                                                                                                                  
I1130 07:58:54.379969  936349 loader.go:395] Config loaded from file:  /home/pvl/.kube/config-local                                                                                           
I1130 07:58:54.380366  936349 round_trippers.go:466] curl -v -XGET  -H "User-Agent: kubectl/v1.28.4 (linux/amd64) kubernetes/bae2c62" -H "Accept: application/json;g=apidiscovery.k8s.io;v=v2beta1;as=APIGroupDiscoveryList,application/json" 'https://0.0.0.0:33977/api?timeout=32s'
I1130 07:58:54.380760  936349 round_trippers.go:510] HTTP Trace: Dial to tcp:0.0.0.0:33977 succeed              
I1130 07:59:04.385250  936349 round_trippers.go:553] GET https://0.0.0.0:33977/api?timeout=32s  in 10004 milliseconds
I1130 07:59:04.385326  936349 round_trippers.go:570] HTTP Statistics: DNSLookup 0 ms Dial 0 ms TLSHandshake 10004 ms Duration 10004 ms
I1130 07:59:04.385355  936349 round_trippers.go:577] Response Headers:                                          
E1130 07:59:04.385590  936349 memcache.go:265] couldn't get current server API group list: Get "https://0.0.0.0:33977/api?timeout=32s": net/http: TLS handshake timeout
I1130 07:59:04.385634  936349 cached_discovery.go:120] skipped caching discovery info due to Get "https://0.0.0.0:33977/api?timeout=32s": net/http: TLS handshake timeout
I1130 07:59:04.387098  936349 round_trippers.go:466] curl -v -XGET  -H "Accept: application/json;g=apidiscovery.k8s.io;v=v2beta1;as=APIGroupDiscoveryList,application/json" -H "User-Agent: kubectl/v1.28.4 (linux/amd64) kubernetes/bae2c62" 'https://0.0.0.0:33977/api?timeout=32s'
I1130 07:59:04.388954  936349 round_trippers.go:510] HTTP Trace: Dial to tcp:0.0.0.0:33977 succeed                                                                                            
I1130 07:59:14.389969  936349 round_trippers.go:553] GET https://0.0.0.0:33977/api?timeout=32s  in 10002 milliseconds
I1130 07:59:14.390027  936349 round_trippers.go:570] HTTP Statistics: DNSLookup 0 ms Dial 1 ms TLSHandshake 10000 ms Duration 10002 ms
I1130 07:59:14.390048  936349 round_trippers.go:577] Response Headers:                                          
E1130 07:59:14.390139  936349 memcache.go:265] couldn't get current server API group list: Get "https://0.0.0.0:33977/api?timeout=32s": net/http: TLS handshake timeout
I1130 07:59:14.390161  936349 cached_discovery.go:120] skipped caching discovery info due to Get "https://0.0.0.0:33977/api?timeout=32s": net/http: TLS handshake timeout
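
The dial succeeds but the handshake never completes, so kubectl isn't needed to reproduce this. A minimal standalone check against the mapped loadbalancer port (33977 in this run; plain curl/openssl, nothing k3d-specific):

# If TLS itself is stuck, curl should hang the same way during the handshake
curl -vk --max-time 15 https://127.0.0.1:33977/

# openssl shows whether the proxy ever answers the ClientHello
openssl s_client -connect 127.0.0.1:33977 </dev/null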

Which OS & Architecture

  • output of k3d runtime-info
$ sudo $(which k3d) runtime-info
arch: amd64
cgroupdriver: systemd
cgroupversion: "2"
endpoint: /var/run/docker.sock
filesystem: extfs
infoname: pvl-x1c
name: docker
os: ubuntu
ostype: linux
version: 4.3.1

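Side note: runtime-info reports the rootful endpoint /var/run/docker.sock, while the podman info below exposes a rootless socket at /run/user/1000/podman/podman.sock. If the rootless setup is the intended one, k3d can be pointed at it explicitly; a sketch along the lines of k3d's podman guidance (DOCKER_HOST is honored by k3d, and the socket path is taken from the output below):

systemctl --user enable --now podman.socket    # make sure the rootless API socket is up
DOCKER_HOST=unix:///run/user/1000/podman/podman.sock k3d cluster create local
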
Which version of k3d

  • output of k3d version
$ sudo $(which k3d) version
k3d version v5.6.0
k3s version v1.21.7-k3s1 (default)

Which version of docker

  • output of docker version and docker info
$ docker version
Client:       Podman Engine
Version:      4.3.1
API Version:  4.3.1
Go Version:   go1.20.7
Built:        Thu Jan  1 05:30:00 1970
OS/Arch:      linux/amd64


$ docker info
host:
  arch: amd64
  buildahVersion: 1.28.2
  cgroupControllers:
  - memory
  - pids
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: conmon_2.1.6+ds1-1_amd64
    path: /usr/bin/conmon
    version: 'conmon version 2.1.6, commit: unknown'
  cpuUtilization:
    idlePercent: 96.37
    systemPercent: 0.94
    userPercent: 2.7
  cpus: 16
  distribution:
    codename: mantic
    distribution: ubuntu
    version: "23.10"
  eventLogger: journald
  hostname: cake-pvl-x1c
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
  kernel: 6.5.0-13-generic
  linkmode: dynamic
  logDriver: journald
  memFree: 13989158912
  memTotal: 33329016832
  networkBackend: netavark
  ociRuntime:
    name: crun
    package: crun_1.8.5-1_amd64
    path: /usr/bin/crun
    version: |-
      crun version 1.8.5
      commit: b6f80f766c9a89eb7b1440c0a70ab287434b17ed
      rundir: /run/user/1000/crun
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +WASM:wasmedge +YAJL
  os: linux
  remoteSocket:
    exists: true
    path: /run/user/1000/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: true
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: false
  serviceIsRemote: false
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns_1.2.0-1_amd64
    version: |-
      slirp4netns version 1.2.0
      commit: 656041d45cfca7a4176f6b7eed9e4fe6c11e8383
      libslirp: 4.7.0
      SLIRP_CONFIG_VERSION_MAX: 4
      libseccomp: 2.5.4
  swapFree: 34359734272
  swapTotal: 34359734272
  uptime: 22h 29m 55.00s (Approximately 0.92 days)
plugins:
  authorization: null
  log:
  - k8s-file
  - none
  - passthrough
  - journald
  network:
  - bridge
  - macvlan
  volume:
  - local
registries: {}
store:
  configFile: /home/pvl/.config/containers/storage.conf
  containerStore:
    number: 9
    paused: 0
    running: 0
    stopped: 9
  graphDriverName: vfs
  graphOptions: {}
  graphRoot: /home/pvl/.local/share/containers/storage
  graphRootAllocated: 981132795904
  graphRootUsed: 520202375168
  graphStatus: {}
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 19
  runRoot: /run/user/1000/containers
  volumePath: /home/pvl/.local/share/containers/storage/volumes
version:
  APIVersion: 4.3.1
  Built: 0
  BuiltTime: Thu Jan  1 05:30:00 1970
  GitCommit: ""
  GoVersion: go1.20.7
  Os: linux
  OsArch: linux/amd64
  Version: 4.3.1

@prasannavl
Author

Related issue: #838

Firewall related info:

ufw: inactive.

iptables:

$ sudo iptables --list
Chain INPUT (policy ACCEPT)
target     prot opt source               destination         

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination         
NETAVARK_FORWARD  all  --  anywhere             anywhere             /* netavark firewall plugin rules */

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination         

Chain NETAVARK_FORWARD (1 references)
target     prot opt source               destination         
ACCEPT     all  --  anywhere             10.89.1.0/24         ctstate RELATED,ESTABLISHED
ACCEPT     all  --  10.89.1.0/24         anywhere            

nf-tables rulesets:

$ sudo nft list ruleset
# Warning: table ip nat is managed by iptables-nft, do not touch!
table ip nat {
        chain POSTROUTING {
                type nat hook postrouting priority srcnat; policy accept;
                counter packets 9790 bytes 1325950 jump NETAVARK-HOSTPORT-MASQ
                ip saddr 10.89.1.0/24 counter packets 275 bytes 21688 jump NETAVARK-2EE7F3DEE5FA2
        }

        chain NETAVARK-HOSTPORT-SETMARK {
                counter packets 1 bytes 60 meta mark set mark or 0x2000
        }

        chain NETAVARK-HOSTPORT-MASQ {
                 meta mark & 0x00002000 == 0x00002000 counter packets 1 bytes 60 masquerade
        }

        chain NETAVARK-HOSTPORT-DNAT {
                tcp dport 33977  counter packets 6 bytes 360 jump NETAVARK-DN-2EE7F3DEE5FA2
        }

        chain PREROUTING {
                type nat hook prerouting priority dstnat; policy accept;
                fib daddr type local counter packets 316 bytes 22502 jump NETAVARK-HOSTPORT-DNAT
        }

        chain OUTPUT {
                type nat hook output priority -100; policy accept;
                fib daddr type local counter packets 2467 bytes 181491 jump NETAVARK-HOSTPORT-DNAT
        }

        chain NETAVARK-2EE7F3DEE5FA2 {
                ip daddr 10.89.1.0/24 counter packets 0 bytes 0 accept
                ip daddr != 224.0.0.0/4 counter packets 247 bytes 16150 masquerade
        }

        chain NETAVARK-DN-2EE7F3DEE5FA2 {
                ip saddr 10.89.1.0/24 ip daddr 0.0.0.0 tcp dport 33977 counter packets 0 bytes 0 jump NETAVARK-HOSTPORT-SETMARK
                ip saddr 127.0.0.1 ip daddr 0.0.0.0 tcp dport 33977 counter packets 0 bytes 0 jump NETAVARK-HOSTPORT-SETMARK
                ip daddr 0.0.0.0 tcp dport 33977 counter packets 0 bytes 0 dnat to 10.89.1.15:6443
        }
}
# Warning: table ip filter is managed by iptables-nft, do not touch!
table ip filter {
        chain NETAVARK_FORWARD {
                ip daddr 10.89.1.0/24 ct state related,established counter packets 52106 bytes 153831198 accept
                ip saddr 10.89.1.0/24 counter packets 40762 bytes 2399333 accept
        }

        chain FORWARD {
                type filter hook forward priority filter; policy accept;
                 counter packets 411233 bytes 520621661 jump NETAVARK_FORWARD
        }
}
table ip6 filter {
}

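To isolate whether this netavark DNAT path is at fault, the published port can be bypassed and the backend address from the NETAVARK-DN chain hit directly (a sketch; 10.89.1.15:6443 is taken from the dnat rule above):

# Talk to the k3s server container directly, skipping the hostport DNAT
curl -vk --max-time 15 https://10.89.1.15:6443/

# Then re-check the chain: if its counters stay at 0 while the published
# port still hangs, traffic never traverses the DNAT rules at all
sudo nft list chain ip nat NETAVARK-DN-2EE7F3DEE5FA2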