SSL issues when from within a pod i try to reach https resources #348

Closed
lzecca78 opened this issue Mar 6, 2019 · 3 comments

lzecca78 commented Mar 6, 2019

Hi, at the time of writing we are experiencing lots of issues related to microk8s networking.
Our setup is a vanilla VirtualBox installation of bento/ubuntu-18.04 with the following provisioning shell script:

sudo apt-get update && \
  sudo apt-get install -y unzip
if ! type microk8s.status > /dev/null; then
  sudo snap install microk8s --classic
  sudo snap alias microk8s.kubectl kubectl
  sudo snap alias microk8s.docker docker
fi
echo "--allow-privileged=true" | sudo tee -a /var/snap/microk8s/current/args/kubelet
echo "--iptables=false" | sudo tee -a /var/snap/microk8s/current/args/dockerd
echo "--allow-privileged=true" | sudo tee -a /var/snap/microk8s/current/args/kube-apiserver
sudo tee -a /etc/sysctl.conf <<EOF
  fs.file-max = 65536
EOF
sudo tee -a /etc/security/limits.conf <<EOF
* - nofile 65536
EOF
sudo tee /var/snap/microk8s/current/args/docker-daemon.json <<EOF
{
  "insecure-registries" : ["localhost:32000"],
  "default-ulimits": {
    "nofile": {
      "Name": "nofile",
      "Hard": 65536,
      "Soft": 65536
    }
  }
}
EOF

sudo sysctl -p
sudo systemctl restart snap.microk8s.daemon-docker
sudo systemctl restart snap.microk8s.daemon-kubelet.service
sudo systemctl restart snap.microk8s.daemon-apiserver.service
microk8s.status --wait-ready
microk8s.enable dns ingress storage

what is happening

The first problem is with skydns looking up /etc/resolv.conf: on Ubuntu 18.04, resolv.conf points to the systemd-resolved stub, which is not reachable outside the host, so the first step is to run ln -sf /run/systemd/resolve/resolv.conf /etc/resolv.conf, which makes the DNS service inside microk8s work correctly.
The real problem that surfaced after that: inside every pod running in the cluster, every call to an HTTPS resource produced weird error messages about a mismatched host in the SSL response, as if there were an http_proxy in the middle. We observed this behaviour in two situations:
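The relink step above can be guarded so it only fires when resolv.conf still points at the local stub resolver; a minimal sketch (127.0.0.53 is the systemd-resolved stub address that is the default on Ubuntu 18.04):

```shell
# Relink /etc/resolv.conf only if it still points at the
# systemd-resolved stub (127.0.0.53), which pods cannot reach.
ns=$(awk '/^nameserver/ {print $2; exit}' /etc/resolv.conf)
if [ "$ns" = "127.0.0.53" ]; then
  # Point at the real upstream list systemd-resolved maintains
  sudo ln -sf /run/systemd/resolve/resolv.conf /etc/resolv.conf
fi
```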

  1. a normal installation of a package inside an Alpine distribution (the same command, executing the same docker image outside the cluster, works without throwing this error):
     apk add vim
     fetch http://dl-cdn.alpinelinux.org/alpine/v3.8/main/x86_64/APKINDEX.tar.gz
     SSL certificate subject doesn't match host dl-cdn.alpinelinux.org
  2. using the awscli binary to fetch an object from an AWS S3 bucket (the same command, executing the same docker image outside the cluster, works without throwing this error):
     Fatal error: SSL validation failed for ...
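To see which certificate a pod is actually being served (and why its subject does not match the host), a small diagnostic can be run inside an affected pod. This is a sketch; cert_subject is a hypothetical helper name, not part of any tool mentioned above:

```shell
# Diagnostic sketch: print the subject of the certificate a host
# actually serves, so it can be compared with the hostname asked for.
cert_subject() {
  # -servername sends SNI so the server can choose the right cert
  echo | openssl s_client -connect "$1:443" -servername "$1" \
    2>/dev/null | openssl x509 -noout -subject
}

# Example (run inside an affected pod):
# cert_subject dl-cdn.alpinelinux.org
```

If the printed subject belongs to some other host, the TLS connection is being intercepted or misrouted somewhere between the pod and the destination.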

what should happen

I expect containers inside the cluster to work the same way as the ones I execute outside of it.

ktsakalozos (Member) commented

Hi @lzecca78

There is an open issue with a similar (if not the same) problem: gliderlabs/docker-alpine#386. They say it is a DNS issue. We have DNS set to 8.8.8.8, which you can change with microk8s.kubectl edit -n kube-system cm/kube-dns. Have you tried a different DNS server?
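For reference, changing the upstream resolver does not require an interactive edit. A sketch using the documented kube-dns upstreamNameservers ConfigMap key (1.1.1.1 here is an arbitrary example resolver, not a recommendation):

```shell
# Sketch: point kube-dns at a different upstream resolver.
# upstreamNameservers is a standard kube-dns ConfigMap key;
# its value is a JSON array encoded as a string.
microk8s.kubectl -n kube-system patch configmap/kube-dns \
  --type merge -p '{"data":{"upstreamNameservers":"[\"1.1.1.1\"]"}}'
```

kube-dns watches its ConfigMap, though restarting the kube-dns pods may be needed for the change to take effect promptly.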


lzecca78 commented Apr 1, 2019

@ktsakalozos I can confirm that the same issue also happened today to my colleague, on minikube as well. It is difficult to debug because his Vagrant box is provisioned in the same way as everyone else's (exactly the same configuration, setup, and environment), yet he is experiencing this issue. I will dig deeper into the logs to understand what the problem could be.


stale bot commented Apr 4, 2020

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

@stale stale bot added the inactive label Apr 4, 2020
@stale stale bot closed this as completed Jul 7, 2020