
Network connect multihost env | invalid container <nil> : #1402

Closed
cerias opened this issue Nov 11, 2015 · 8 comments

@cerias

cerias commented Nov 11, 2015

If I try to connect a container to a network, it only works when the container is on the same host as the swarm master. When I connect directly to the Docker daemon on another host, I can connect the container to the network there and everything works as expected.

Steps to reproduce:

# eval "$(docker-machine env --swarm docker-app1)"
# docker ps
CONTAINER ID        IMAGE                 COMMAND                  CREATED             STATUS              PORTS               NAMES
243268f57f74        elasticsearch:2.0.0   "/docker-entrypoint.s"   48 minutes ago      Up 48 minutes                           docker-app1/es3
19b319ccfd75        elasticsearch:2.0.0   "/docker-entrypoint.s"   49 minutes ago      Up 49 minutes                           docker-app2/es1
b9d355311060        elasticsearch:2.0.0   "/docker-entrypoint.s"   49 minutes ago      Up 49 minutes                           docker-app2/es2

# docker network ls
NETWORK ID          NAME                 DRIVER
b85370d450ec        docker-app1/host     host                
fa3f0425c192        test                 overlay             
cc1f7df9d179        docker-app2/bridge   bridge              
ceddf908f39d        docker-app2/none     null                
ef2d1f54becb        docker-app2/host     host                
15d19af8965b        docker-app1/bridge   bridge              
2e13fa81d306        docker-app1/none     null   

# docker network connect test es3
# docker network connect test es2
Error response from daemon: invalid container <nil> : nosuchcontainer: no such id: es2
# eval "$(docker-machine env  docker-app2)"
# docker network connect test es2

Inspecting the network lists the connected containers only for the current host.
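The output below presumably comes from something like:

# docker network inspect test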

[
    {
        "Name": "test",
        "Id": "fa3f0425c19243cc84c6f0ffd73a754f6f86ef8380e519f466eac7259b4dfcf2",
        "Scope": "global",
        "Driver": "overlay",
        "IPAM": {
            "Driver": "default",
            "Config": [
                {}
            ]
        },
        "Containers": {
            "b9d3553110600e558f7963fddcd8d737a8d694cb8d6d6e68196c9a2969fff716": {
                "EndpointID": "f3a82c84c451018d365fd567c5767ea62d14d3151f108edcd4e3f1d48f239e52",
                "MacAddress": "02:42:0a:00:01:03",
                "IPv4Address": "10.0.1.3/24",
                "IPv6Address": ""
            }
        },
        "Options": {}
    }
]

All machines were provisioned with docker-machine.

System information:
docker info

Containers: 6
Images: 11
Role: primary
Strategy: spread
Filters: health, port, dependency, affinity, constraint
Nodes: 2
 docker-app1: 192.168.100.49:2376
  └ Containers: 3
  └ Reserved CPUs: 0 / 2
  └ Reserved Memory: 0 B / 4.049 GiB
  └ Labels: disk=ssd, executiondriver=native-0.2, kernelversion=3.19.0-25-generic, mtype=app, operatingsystem=Ubuntu 14.04.3 LTS, provider=generic, storagedriver=aufs
 docker-app2: 192.168.100.46:2376
  └ Containers: 3
  └ Reserved CPUs: 0 / 2
  └ Reserved Memory: 0 B / 4.053 GiB
  └ Labels: disk=ssd, executiondriver=native-0.2, kernelversion=3.19.0-25-generic, mtype=app, operatingsystem=Ubuntu 14.04.3 LTS, provider=generic, storagedriver=aufs
CPUs: 4
Total Memory: 8.102 GiB
Name: 7415c30d0456

docker version

Client:
 Version:      1.9.0
 API version:  1.21
 Go version:   go1.4.2
 Git commit:   76d6bc9
 Built:        Tue Nov  3 17:43:42 UTC 2015
 OS/Arch:      linux/amd64

Server:
 Version:      swarm/1.0.0
 API version:  1.21
 Go version:   go1.5.1
 Git commit:   087e245
 Built:        
 OS/Arch:      linux/amd64

docker-machine -v

docker-machine version 0.5.0 (04cfa58)

The problem also appears when connecting containers via the REST API. Only containers on the host running the swarm master can be connected.
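For reference, a minimal sketch of the equivalent Remote API call (endpoint per Remote API v1.21; the network id is the test network's id from the listing above):

# the manager address is a placeholder; docker-machine exposes the swarm endpoint
# on port 3376, and a TLS setup additionally needs the machine's certificates passed to curl
curl -X POST -H "Content-Type: application/json" \
  -d '{"Container": "es2"}' \
  https://192.168.100.49:3376/v1.21/networks/fa3f0425c192/connect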

@euprogramador

I am having the same problem.

@dongluochen
Contributor

I can reproduce this problem with docker-machine. The issue is that the `docker network connect` request is routed to the wrong machine. In the following example, container meb666 is on mhs-demo0, but the connect request is routed to mhs-demo1. After some time (<1 min), swarm resolves this automatically and routes the request to the right machine.

Dongluos-MacBook-Pro:keypair dongluochen$ docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS               NAMES
ac680f500bcf        nginx               "nginx -g 'daemon off"   13 minutes ago      Up 13 minutes                           mhs-demo0/meb666

docker@mhs-demo1:/var/log$ hostname
mhs-demo1

...
time="2015-11-19T19:59:27.302973031Z" level=debug msg="Calling POST /v1.21/networks/70a64e7bea783ae3b4e3341d296c2c0887d0030662b44a3001dc869caedbc94b/connect"                                             
time="2015-11-19T19:59:27.303011718Z" level=info msg="POST /v1.21/networks/70a64e7bea783ae3b4e3341d296c2c0887d0030662b44a3001dc869caedbc94b/connect"                                                      
time="2015-11-19T19:59:27.306418295Z" level=error msg="Handler for POST /v1.21/networks/70a64e7bea783ae3b4e3341d296c2c0887d0030662b44a3001dc869caedbc94b/connect returned error: invalid container <nil> :
time="2015-11-19T19:59:27.306442701Z" level=error msg="HTTP Error" err="invalid container <nil> : nosuchcontainer: no such id: meb666" statusCode=404                                                     

@schmunk42

@dongluochen Thank you for the hint.

root@tex-roj:/repo/stacks# docker network connect internal 713
Error response from daemon: invalid container <nil> : nosuchcontainer: no such id: 713
root@tex-roj:/repo/stacks# docker network connect internal 71396a62af5b

Connection established here.

root@tex-roj:/repo/stacks# docker network connect internal 713
Error response from daemon: container already connected to network internal
root@tex-roj:/repo/stacks# docker network connect internal 71396a62af5b
Error response from daemon: invalid container <nil> : nosuchcontainer: no such id: 71396a62af5b
root@tex-roj:/repo/stacks# docker network connect internal 713
Error response from daemon: invalid container <nil> : nosuchcontainer: no such id: 713
root@tex-roj:/repo/stacks# docker network connect internal 71396a62af5b
Error response from daemon: invalid container <nil> : nosuchcontainer: no such id: 71396a62af5b
root@tex-roj:/repo/stacks# docker network connect internal 71396a62af5b
Error response from daemon: invalid container <nil> : nosuchcontainer: no such id: 71396a62af5b
root@tex-roj:/repo/stacks# docker network connect internal 71396a62af5b
Error response from daemon: invalid container <nil> : nosuchcontainer: no such id: 71396a62af5b
root@tex-roj:/repo/stacks# docker network connect internal 71396a62af5b
Error response from daemon: invalid container <nil> : nosuchcontainer: no such id: 71396a62af5b
root@tex-roj:/repo/stacks# docker network connect internal 71396a62af5b
Error response from daemon: invalid container <nil> : nosuchcontainer: no such id: 71396a62af5b
root@tex-roj:/repo/stacks# docker network connect internal 71396a62af5b
Error response from daemon: container already connected to network internal
root@tex-roj:/repo/stacks# docker network connect internal 71396a62af5b
Error response from daemon: invalid container <nil> : nosuchcontainer: no such id: 71396a62af5b

The problem occurs in a rather random pattern for me.

@aluzzardi
Contributor

Could the problem be that we create overlay networks by default?

If the engines are not properly set up to handle overlay networking (which docker-machine doesn't do), then I don't think it will work out of the box.

@vieux ?
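For context, multihost overlay networking on Engine 1.9 requires every daemon to be started against a key-value store. A minimal sketch of that setup with docker-machine, assuming a Consul instance at a placeholder address:

# the Consul address is a placeholder; substitute your own KV store
docker-machine create -d virtualbox \
  --engine-opt="cluster-store=consul://192.168.99.100:8500" \
  --engine-opt="cluster-advertise=eth1:2376" \
  mhs-demo0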

@schmunk42

@aluzzardi

If the engines are not properly set up to handle overlay networking (which docker-machine doesn't do), then I don't think it will work out of the box.

What do you mean exactly, setting up the kv-store?

By the way, my test ran the commands manually.

@ahmetb
Contributor

ahmetb commented Dec 10, 2015

I am running into the same problem, and it does not go away. I am trying to set up the following networks:

  • frontend network containers: fe1, fe2, fe-lb
  • backend network containers: be1, be2, be-lb
  • frontend-backend network containers: fe-lb, be-lb

First I create the overlay networks:

$ docker network create -d overlay frontend
$ docker network create -d overlay backend
$ docker network create -d overlay frontend-backend

Then I create the containers in the swarm cluster:

$ docker run -d --net=frontend --name fe1 nginx
$ docker run -d --net=frontend --name fe2 nginx
$ docker run -d --net=frontend --name fe-lb nginx

$ docker run -d --net=backend --name be1 nginx
$ docker run -d --net=backend --name be2 nginx
$ docker run -d --net=backend --name be-lb nginx

It looks like this:

CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS               NAMES
306fd2450781        nginx               "nginx -g 'daemon off"   5 minutes ago       Up 5 minutes        80/tcp, 443/tcp     mhs-demo1/be-lb
dd7930a533fc        nginx               "nginx -g 'daemon off"   5 minutes ago       Up 5 minutes        80/tcp, 443/tcp     mhs-demo0/be2
1e96a8001c30        nginx               "nginx -g 'daemon off"   5 minutes ago       Up 5 minutes        80/tcp, 443/tcp     mhs-demo2/be1
e68f50cd438e        nginx               "nginx -g 'daemon off"   5 minutes ago       Up 5 minutes        80/tcp, 443/tcp     mhs-demo1/fe2
42dbeeb585bd        nginx               "nginx -g 'daemon off"   5 minutes ago       Up 5 minutes        80/tcp, 443/tcp     mhs-demo0/fe1
e92cd39490aa        nginx               "nginx -g 'daemon off"   5 minutes ago       Up 5 minutes        80/tcp, 443/tcp     mhs-demo1/fe-lb

And then, as a final step, I am trying to attach the frontend-backend network to the *-lb containers:

$ docker network connect frontend-backend be-lb
Error response from daemon: invalid container <nil> : nosuchcontainer: no such id: be-lb
$ docker network connect frontend-backend mhs-demo1/be-lb
Error response from daemon: invalid container <nil> : nosuchcontainer: no such id: mhs-demo1/be-lb
$ docker network connect frontend-backend 306fd2450781
Error response from daemon: invalid container <nil> : nosuchcontainer: no such id: 306fd2450781

FWIW:

$ docker network ls
NETWORK ID          NAME                        DRIVER
35fb065de7fd        mhs-demo2/host              host                
058805fda8e3        mhs-demo2/docker_gwbridge   bridge              
96d6700b1761        mhs-demo1/none              null                
b528d051b632        mhs-demo2/none              null                
365870c0f296        frontend                    overlay             
67fcf49b6db1        backend                     overlay             
aa6235bed57c        frontend-backend            overlay             
32ffc983691d        mhs-demo1/docker_gwbridge   bridge              
9db608cca4b4        mhs-demo2/bridge            bridge              
e28a06cf195d        mhs-demo0/none              null                
a26cd8bf5479        mhs-demo0/host              host                
47b1bf0d9f33        mhs-demo1/bridge            bridge              
c2f96b5da004        mhs-demo1/host              host                
9fc44e5f108f        mhs-demo0/docker_gwbridge   bridge              
839c360adf09        mhs-demo0/bridge            bridge              

@abronan
Contributor

abronan commented Dec 10, 2015

@ahmetalpbalkan This is fixed in swarm:1.0.1, which was just released 😄 (the official image is probably not live yet, but will be very soon). Please make sure to update to the new version, which fixes a few bugs, this one included.
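A minimal sketch of picking up the fix on a docker-machine setup; the swarm manage/join containers then need to be recreated from the new image with their original arguments:

# point the client at the engine directly, then fetch the fixed image
eval "$(docker-machine env docker-app1)"
docker pull swarm:1.0.1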

@abronan
Contributor

abronan commented Dec 10, 2015

Fixed by #1438

@abronan abronan closed this as completed Dec 10, 2015