No servers are inside upstream in /etc/nginx/conf.d/default.conf #1020

valdemarrolfsen opened this issue Jan 6, 2018 · 16 comments

@valdemarrolfsen

I am working on running my application over HTTPS using the nginx-proxy and letsencrypt-nginx-proxy-companion images, but I am getting some strange results. My docker-compose.yml looks like this:

version: '2.2'
services:
  ignite-api:
    restart: always
    build: ./ignite-api
    ports:
      - "8000"
    links:
      - postgres:postgres
      - rabbit:rabbit
      - worker:worker
    volumes:
      - /usr/src/app/
      - /usr/src/app/static
    env_file: .env
    command: /usr/local/bin/gunicorn api.wsgi:application -w 2 -b :8000 --reload
    networks:
      - api-net
      - nginx-proxy
    environment:
      VIRTUAL_PORT: 8000
      VIRTUAL_HOST: ignite-api.local
      LETSENCRYPT_HOST: example.com
      LETSENCRYPT_EMAIL: example@email.com

  postgres:
    ...

  rabbit:
    ...

  worker:
    ...

  nginx-proxy:
    image: jwilder/nginx-proxy
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - "./nginx/vhost.d:/etc/nginx/vhost.d"
      - "./nginx/html:/usr/share/nginx/html"
      - "./nginx/certs:/etc/nginx/certs"
      - "/var/run/docker.sock:/tmp/docker.sock:ro"
    networks:
      - nginx-proxy

  letsencrypt-nginx-proxy-companion:
    image: jrcs/letsencrypt-nginx-proxy-companion
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock:ro"
    volumes_from:
      - "nginx-proxy"
    networks:
      - nginx-proxy

volumes:
  pgdata:
    driver: local
  rabbitmqdata:
    driver: local

networks:
  nginx-proxy:
    external:
      name: nginx-proxy
  api-net: 
    driver: bridge

When running docker-compose logs nginx-proxy, everything seems to be working fine, with the following output:

nginx-proxy_1                        | WARNING: /etc/nginx/dhparam/dhparam.pem was not found. A pre-generated dhparam.pem will be used for now while a new one
nginx-proxy_1                        | is being generated in the background.  Once the new dhparam.pem is in place, nginx will be reloaded.
nginx-proxy_1                        | forego     | starting dockergen.1 on port 5000
nginx-proxy_1                        | forego     | starting nginx.1 on port 5100
nginx-proxy_1                        | dockergen.1 | 2018/01/06 17:30:58 Generated '/etc/nginx/conf.d/default.conf' from 3 containers
nginx-proxy_1                        | dockergen.1 | 2018/01/06 17:30:58 Running 'nginx -s reload'
nginx-proxy_1                        | dockergen.1 | 2018/01/06 17:30:58 Watching docker events
nginx-proxy_1                        | dockergen.1 | 2018/01/06 17:30:59 Contents of /etc/nginx/conf.d/default.conf did not change. Skipping notification 'nginx -s reload'
nginx-proxy_1                        | dockergen.1 | 2018/01/06 17:30:59 Received event start for container c305e8685566
nginx-proxy_1                        | dockergen.1 | 2018/01/06 17:30:59 Contents of /etc/nginx/conf.d/default.conf did not change. Skipping notification 'nginx -s reload'
nginx-proxy_1                        | dockergen.1 | 2018/01/06 17:31:02 Received event start for container f98262bb33da
nginx-proxy_1                        | dockergen.1 | 2018/01/06 17:31:02 Contents of /etc/nginx/conf.d/default.conf did not change. Skipping notification 'nginx -s reload'
nginx-proxy_1                        | dockergen.1 | 2018/01/06 17:31:06 Received event start for container 9e22795bbfd1
nginx-proxy_1                        | dockergen.1 | 2018/01/06 17:31:07 Generated '/etc/nginx/conf.d/default.conf' from 6 containers
nginx-proxy_1                        | dockergen.1 | 2018/01/06 17:31:07 Running 'nginx -s reload'
nginx-proxy_1                        | 2018/01/06 17:31:26 [notice] 92#92: signal process started
nginx-proxy_1                        | Generating DH parameters, 2048 bit long safe prime, generator 2
nginx-proxy_1                        | This is going to take a long time
nginx-proxy_1                        | dhparam generation complete, reloading nginx
nginx-proxy_1                        | dockergen.1 | 2018/01/06 17:31:58 Received event start for container 4890d509bda7
nginx-proxy_1                        | dockergen.1 | 2018/01/06 17:31:58 Received event die for container 4890d509bda7
nginx-proxy_1                        | dockergen.1 | 2018/01/06 17:31:59 Generated '/etc/nginx/conf.d/default.conf' from 6 containers
nginx-proxy_1                        | dockergen.1 | 2018/01/06 17:31:59 Running 'nginx -s reload'
nginx-proxy_1                        | dockergen.1 | 2018/01/06 17:32:00 Received event start for container 4890d509bda7
nginx-proxy_1                        | dockergen.1 | 2018/01/06 17:32:00 Contents of /etc/nginx/conf.d/default.conf did not change. Skipping notification 'nginx -s reload'
nginx-proxy_1                        | dockergen.1 | 2018/01/06 17:32:00 Received event die for container 4890d509bda7
nginx-proxy_1                        | dockergen.1 | 2018/01/06 17:32:01 Contents of /etc/nginx/conf.d/default.conf did not change. Skipping notification 'nginx -s reload'
nginx-proxy_1                        | dockergen.1 | 2018/01/06 17:32:01 Contents of /etc/nginx/conf.d/default.conf did not change. Skipping notification 'nginx -s reload'

However, when inspecting the default nginx config, it does not seem to have been configured at all (command: docker-compose run nginx-proxy cat /etc/nginx/conf.d/default.conf):

server {
    listen       80;
    server_name  localhost;

    #charset koi8-r;
    #access_log  /var/log/nginx/host.access.log  main;

    location / {
        root   /usr/share/nginx/html;
        index  index.html index.htm;
    }

    #error_page  404              /404.html;

    # redirect server error pages to the static page /50x.html
    #
    error_page   500 502 503 504  /50x.html;
    location = /50x.html {
        root   /usr/share/nginx/html;
    }

    # proxy the PHP scripts to Apache listening on 127.0.0.1:80
    #
    #location ~ \.php$ {
    #    proxy_pass   http://127.0.0.1;
    #}

    # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
    #
    #location ~ \.php$ {
    #    root           html;
    #    fastcgi_pass   127.0.0.1:9000;
    #    fastcgi_index  index.php;
    #    fastcgi_param  SCRIPT_FILENAME  /scripts$fastcgi_script_name;
    #    include        fastcgi_params;
    #}

    # deny access to .htaccess files, if Apache's document root
    # concurs with nginx's one
    #
    #location ~ /\.ht {
    #    deny  all;
    #}
}

Any request to the domain also fails, although it is now served over https!

Any idea what I might have gotten wrong here?

Thank you very much!


cron410 commented Jan 9, 2018

I'm getting the same error today after updating to the latest version of this container.


EnorMOZ commented Jan 13, 2018

I think it might be related to nginx-proxy/docker-gen#270.

@valdemarrolfsen (Author)

@cron410 did you solve this issue in any way? I am desperately looking for a solution...


pvande commented Jan 26, 2018

@valdemarrolfsen I don't know that it will solve your problem, but docker-compose run nginx-proxy cat /etc/nginx/conf.d/default.conf will always show an unconfigured file – docker-compose run starts a new container instance to run your command and then exits.

docker-compose exec nginx-proxy cat /etc/nginx/conf.d/default.conf is what you'll need to extract the configuration updates.
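For reference (assuming the compose service is named nginx-proxy as above; nginx -T is just an optional extra that prints the full configuration nginx has actually loaded):

# starts a brand new container just for this command, so you only ever see the stock default.conf
docker-compose run --rm nginx-proxy cat /etc/nginx/conf.d/default.conf

# runs inside the already-running container, where docker-gen writes the generated config
docker-compose exec nginx-proxy cat /etc/nginx/conf.d/default.conf

# optional: dump the full configuration that nginx has actually loaded
docker-compose exec nginx-proxy nginx -T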


cron410 commented Feb 22, 2018

@valdemarrolfsen sorry for the late reply. When it works, it works fine for a long time; I believe it works after a host reboot, but I'm not sure. I stumbled across this issue again tonight after rebuilding some containers and finding that the nginx proxy failed to proxy for the replacement containers. It may be as simple as new config files not being generated because docker-gen is broken.


cron410 commented Feb 22, 2018

I found a fork of this repo, one month out of date, that works flawlessly: https://hub.docker.com/r/bbtsoftwareag/nginx-proxy-unrestricted-requestsize/


cnaslain commented Aug 3, 2018

@cron410 If you need to add an nginx parameter such as "client_max_body_size 0;", you don't need an alternative Docker image: simply mount a file with that additional config to /etc/nginx/conf.d/unrestricted_client_body_size.conf using a Docker volume. You'll still be able to use the official jwilder image.
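For example, something like this (a minimal sketch; the local ./nginx/conf.d path is arbitrary, and only the target path inside the container matters, since nginx includes every *.conf file in /etc/nginx/conf.d by default). Contents of ./nginx/conf.d/unrestricted_client_body_size.conf:

client_max_body_size 0;

And the corresponding volume in docker-compose.yml:

  nginx-proxy:
    image: jwilder/nginx-proxy
    volumes:
      - "./nginx/conf.d/unrestricted_client_body_size.conf:/etc/nginx/conf.d/unrestricted_client_body_size.conf:ro"
      - "/var/run/docker.sock:/tmp/docker.sock:ro"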


cron410 commented Aug 3, 2018

@cnaslain I'm looking for something I don't have to babysit. That image just works and doesn't vomit on itself in the middle of the night. The nginx proxy needs to hum along, doing one thing the most reliable way possible because so many things rely on it.

docker run -d \
  --name=nginx -p 80:80 \
  -v /var/run/docker.sock:/tmp/docker.sock:ro \
  --restart unless-stopped \
  bbtsoftwareag/nginx-proxy-unrestricted-requestsize:alpine


affektde commented Aug 24, 2018

Multi-container Docker running on 64-bit Amazon Linux/2.7.4 ships a new ECS agent version and a brand new Docker version, 18.*-ce.

With older versions of Docker and the AMI, the proxy recognized all web containers and built a new default.conf.

But now, with the newer images, it leads to:

No servers are inside upstream in /etc/nginx/conf.d/default.conf

and the Docker containers kill themselves.

Does anyone have the same problem with cloud services and the new Docker versions or AWS AMIs?
What's the reason for this behavior? I can't update Docker and the AMIs until we get a fix here :-D


k3a commented Aug 26, 2018

Get a shell in the running nginx-proxy container using docker exec:
docker exec -it <container_name_or_id> bash

and inspect default.conf:
cat /etc/nginx/conf.d/default.conf

There may be a reason written in a comment.

In my case, when I had a similar problem, the nginx-proxy container simply couldn't reach the target service container because they weren't on the same network.
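If that's the cause, putting both containers on a shared network fixes it. A minimal sketch (service and network names here are illustrative; the external network would be created beforehand with docker network create nginx-proxy):

version: '2.2'
services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    ports:
      - "80:80"
    volumes:
      - "/var/run/docker.sock:/tmp/docker.sock:ro"
    networks:
      - nginx-proxy

  my-app:
    image: my-app:latest
    environment:
      VIRTUAL_HOST: app.example.com
    networks:
      - nginx-proxy

networks:
  nginx-proxy:
    external:
      name: nginx-proxy

For a container that is already running, docker network connect nginx-proxy <container_name> attaches it to the network without recreating it.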


buchdag commented Jan 11, 2019

@affektde I'm seeing the exact same bug. No clue as to why.


kukaraj commented Jun 5, 2020

@buchdag is there any workaround to get nginx-proxy working with the new ECS AMI? I am facing the same issue.

@thismatters

This bug is happening because .Docker.CurrentContainerID, as described in the docker-gen documentation, is not being populated while docker-gen runs.
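For context, the nginx.tmpl used by nginx-proxy relies on that value to work out which networks the proxy itself is attached to, roughly like this (a simplified sketch, not the exact template code):

{{ $CurrentContainer := where $ "ID" .Docker.CurrentContainerID | first }}

{{/* Only containers that share a network with $CurrentContainer are emitted as
     upstream servers. If .Docker.CurrentContainerID comes back empty,
     $CurrentContainer is nil, no server lines are written, and nginx fails with
     "no servers are inside upstream". */}}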

@thismatters

Definitely related: nginx-proxy/docker-gen#284

thismatters added a commit to thismatters/nginx-proxy that referenced this issue Sep 17, 2020
@thismatters

Since this repo seems unmaintained, I've pushed an image to my own Docker Hub account to address this.


buchdag commented Jun 15, 2021

This should have been fixed by nginx-proxy/docker-gen#336 and nginx-proxy/docker-gen#345

buchdag closed this as completed Jun 15, 2021