Why does calico-ipam allocate the network IP? #1710
Could you share the problems this caused for you? Typically this is OK, since Calico uses point-to-point routed interfaces rather than joining workloads to a router via an L2 network, and so the first address in a block can be used by a workload.
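To make that concrete: Calico carves the pool into blocks (by default /26), and with a per-pod /32 route the block's first address is just another allocatable address. A minimal sketch with Python's `ipaddress` module, using the blocks implied by the pod IPs reported below (the /26 block size is an assumption based on Calico's default):

```python
import ipaddress

# Blocks implied by the pod IPs in this thread; Calico's default block size is /26.
blocks = [
    ipaddress.ip_network("172.20.135.128/26"),
    ipaddress.ip_network("172.20.120.0/26"),
    ipaddress.ip_network("172.20.237.64/26"),
]

for block in blocks:
    # The address that looks like a "network IP" is simply the first address
    # of the block; with a point-to-point /32 route per pod it carries no
    # special network/broadcast semantics on the wire.
    print(block, "-> first allocatable:", block.network_address)
```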
here is the example:

```
$ kubectl run nginx1 --image=nginx --replicas=3 --port=80 --expose
$ kubectl run nginx2 --image=nginx --replicas=3 --port=80 --expose
$ kubectl get pod --all-namespaces -o wide
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE   IP               NODE
default       nginx1-7b4875b88b-9wcs9                    1/1     Running   0          51s   172.20.237.64    192.168.1.43
default       nginx1-7b4875b88b-bdnmd                    1/1     Running   0          51s   172.20.135.128   192.168.1.41
default       nginx1-7b4875b88b-hk4dz                    1/1     Running   0          51s   172.20.120.0     192.168.1.42
default       nginx2-55fdc5f6c8-jcdbm                    1/1     Running   0          25s   172.20.135.129   192.168.1.41
default       nginx2-55fdc5f6c8-pfbg4                    1/1     Running   0          25s   172.20.237.65    192.168.1.43
default       nginx2-55fdc5f6c8-vmvsc                    1/1     Running   0          25s   172.20.120.1     192.168.1.42
kube-system   calico-kube-controllers-68764bd457-p4tpk   1/1     Running   0          21h   192.168.1.43     192.168.1.43
kube-system   calico-node-4zq8m                          2/2     Running   0          21h   192.168.1.42     192.168.1.42
kube-system   calico-node-fw5hq                          2/2     Running   0          21h   192.168.1.43     192.168.1.43
kube-system   calico-node-jtlgp                          2/2     Running   0          21h   192.168.1.41     192.168.1.41
```

the first IPs allocated (172.20.237.64, 172.20.135.128, 172.20.120.0) are not reachable, while the later ones are:

```
$ kubectl run --rm -it busy --image=busybox /bin/sh
If you don't see a command prompt, try pressing enter.
/ # wget --spider --timeout=1 172.20.237.64
Connecting to 172.20.237.64 (172.20.237.64:80)
wget: can't connect to remote host (172.20.237.64): Connection refused
/ # wget --spider --timeout=1 172.20.135.128
Connecting to 172.20.135.128 (172.20.135.128:80)
wget: can't connect to remote host (172.20.135.128): Connection refused
/ # wget --spider --timeout=1 172.20.120.0
Connecting to 172.20.120.0 (172.20.120.0:80)
wget: can't connect to remote host (172.20.120.0): Connection refused
/ # wget --spider --timeout=1 172.20.237.65
Connecting to 172.20.237.65 (172.20.237.65:80)
/ # wget --spider --timeout=1 172.20.135.129
Connecting to 172.20.135.129 (172.20.135.129:80)
/ # wget --spider --timeout=1 172.20.120.1
Connecting to 172.20.120.1 (172.20.120.1:80)
```
Are you able to tell what is refusing the connection? Where is your cluster running, and how is the underlying network configured? When I run a similar test in GCE I see traffic working as expected, even to the zero address in the different blocks.
My 3-node k8s cluster runs on my own virtual machines (KVM) in the same subnet:

```
$ tcpdump -i ens3 host 172.20.120.0
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on ens3, link-type EN10MB (Ethernet), capture size 262144 bytes
08:58:22.250993 IP 192.168.1.41 > 172.20.120.0: ICMP echo request, id 4307, seq 1, length 64
08:58:22.251335 IP 172.20.120.0 > 192.168.1.41: ICMP echo reply, id 4307, seq 1, length 64
08:58:23.250669 IP 192.168.1.41 > 172.20.120.0: ICMP echo request, id 4307, seq 2, length 64
08:58:23.250773 IP 172.20.120.0 > 192.168.1.41: ICMP echo reply, id 4307, seq 2, length 64
08:58:27.305725 IP 192.168.1.41.45694 > 172.20.120.0.http: Flags [S], seq 1730342650, win 29200, options [mss 1460,sackOK,TS val 1377451619 ecr 0,nop,wscale 7], length 0
08:58:27.306022 IP 172.20.120.0.http > 192.168.1.41.45694: Flags [R.], seq 0, ack 1730342651, win 0, length 0
```
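This trace is useful because it separates the failure modes: the ICMP echo works (so routing to the .0 address is fine), and the TCP SYN is answered with an RST, which a client reports as "connection refused"; a silently dropped packet (e.g. an iptables DROP) would instead time out. A small illustration of that distinction in Python, probing a localhost port that is assumed to have no listener:

```python
import socket

def probe(host: str, port: int, timeout: float = 1.0) -> str:
    """Classify a TCP connect attempt the way wget's error messages do."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(timeout)
    try:
        s.connect((host, port))
        return "connected"      # SYN/ACK came back
    except ConnectionRefusedError:
        return "refused"        # the peer answered with an RST, as in the trace
    except socket.timeout:
        return "timeout"        # packets dropped silently (e.g. iptables DROP)
    finally:
        s.close()

# Port 1 on localhost is assumed closed, so the kernel answers with an RST.
print(probe("127.0.0.1", 1))
```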
some more info:
Calico node doesn't operate differently when deployed as a service vs. as a daemonset, so I would guess that this is somehow a configuration issue, though TBH I don't have the faintest idea what config would need to be changed.
thanks for your tips, the iptables DROP rules were not hit. Actually, it was the nginx server that reset the connection. The client sends a SYN and the server answers with an RST:

```
08:58:27.305725 IP 192.168.1.41.45694 > 172.20.120.0.http: Flags [S], seq 1730342650, win 29200, options [mss 1460,sackOK,TS val 1377451619 ecr 0,nop,wscale 7], length 0
08:58:27.306022 IP 172.20.120.0.http > 192.168.1.41.45694: Flags [R.], seq 0, ack 1730342651, win 0, length 0
```

so maybe it is the nginx server that somehow checks the IP it is binding to.
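For what it's worth, a plain wildcard bind does not by itself reject "network-looking" destination addresses: a socket bound to 0.0.0.0 accepts connections addressed to any local IP, so if nginx is refusing, it would be at the configuration level (e.g. which `listen`/`server` matches) rather than in `bind()`. A minimal localhost-only sketch of that point:

```python
import socket
import threading

def wildcard_bind_accepts() -> bool:
    """A listener bound to 0.0.0.0 accepts an incoming connection;
    bind() itself does not veto particular destination addresses."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", 0))          # wildcard address, ephemeral port
    srv.listen(1)
    port = srv.getsockname()[1]

    def accept_one():
        conn, _ = srv.accept()
        conn.close()

    t = threading.Thread(target=accept_one)
    t.start()
    try:
        cli = socket.create_connection(("127.0.0.1", port), timeout=2)
        cli.close()
        return True
    except OSError:
        return False
    finally:
        t.join()
        srv.close()

print(wildcard_bind_accepts())
```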
@gjmzj did you figure out the source of the problem? Do you mind if I close this issue? |
My suggestion is to skip the first IP in each block.
As discussed above, the use of the first IP in the block is OK in the Calico model because it's a ptp routed network. I think this sounds likely to be an nginx configuration issue. I'm going to close this. |
For example: I have a three-node k8s cluster using the Calico network and everything was fine. Then I installed kube-dns and noticed that pod kube-dns-566c7c77d8-lshlt got the IP 172.20.120.0, which is a network IP; in the networking industry we usually treat this kind of IP as unusable, and it actually raised some problems. Can we change this behavior? Can calico-ipam allocate the first host IP 172.20.120.1 instead of 172.20.120.0?