No servers are inside upstream in /etc/nginx/conf.d/default.conf #1020
I'm getting the same error today after updating to the latest version of this container.
I think it might be related to nginx-proxy/docker-gen#270.
@cron410 did you solve this issue in any way? I am desperately looking for a solution...
@valdemarrolfsen I don't know that it will solve your problem, but
@valdemarrolfsen sorry for the late reply; when it works, it works fine for a long time. I believe it works after a host reboot, but I'm not sure. I stumbled across this issue again tonight after rebuilding some containers: the nginx proxy failed to proxy for the replacement containers. It may be as simple as new config files not being generated because docker-gen is fucked.
I found a fork of this repo, one month out of date, that works flawlessly! https://hub.docker.com/r/bbtsoftwareag/nginx-proxy-unrestricted-requestsize/
@cron410 If you need to add an nginx param such as `client_max_body_size 0;`, you don't need an alternative Docker image; simply mount a file with that additional config to /etc/nginx/conf.d/unrestricted_client_body_size.conf using a Docker volume argument. You'll still be able to use the official jwilder image.
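For example, that override can be expressed as a bind mount in docker-compose (a sketch, not anyone's actual file; the host-side file name is illustrative and should contain just the `client_max_body_size 0;` line):

```yaml
# Sketch: mount an extra nginx config fragment into the official image
# instead of switching to a forked image. nginx includes every *.conf
# file under /etc/nginx/conf.d by default.
services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
      - ./unrestricted_client_body_size.conf:/etc/nginx/conf.d/unrestricted_client_body_size.conf:ro
```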
@cnaslain I'm looking for something I don't have to babysit. That image just works and doesn't vomit on itself in the middle of the night. The nginx proxy needs to hum along, doing one thing the most reliable way possible, because so many things rely on it.
Multi-container Docker running on 64bit Amazon Linux/2.7.4 has a new ECS agent version and a brand-new Docker version, 18.*-ce. With older versions of Docker and the AMI, the proxy recognized all web containers and built a new default.conf, but with the newer images it leads to:

and the Docker containers kill themselves. Does anyone have the same problems with cloud services and new Docker versions or AMIs on AWS?
Get a shell in the running nginx-proxy container using docker exec and inspect default.conf: there may be a reason written in a comment. In my case, when I had a similar problem, the nginx-proxy container simply couldn't access the target service container because it wasn't using the same network.
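The steps above can be sketched as follows (these require a live Docker daemon; the container names `nginx-proxy` and `my-app` are placeholders for your own):

```shell
# Dump the generated config; docker-gen writes a comment above each
# server/upstream block recording what it matched (or why it didn't)
docker exec nginx-proxy cat /etc/nginx/conf.d/default.conf

# Or open an interactive shell for a closer look
docker exec -it nginx-proxy sh

# Check that the proxy and the target container share a network
docker inspect --format '{{json .NetworkSettings.Networks}}' nginx-proxy
docker inspect --format '{{json .NetworkSettings.Networks}}' my-app
```

If the two `docker inspect` outputs show no network in common, the proxy cannot reach the upstream, which produces exactly the "no servers are inside upstream" symptom.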
@affektde I'm seeing the exact same bug. No clue as to why.
@buchdag is there any workaround to get nginx-proxy working with the new ECS AMI? I am facing the same issue.
This bug is happening because the
definitely related: nginx-proxy/docker-gen#284
… `CurrentContainer`
Since this repo seems unmaintained, I've pushed an image to my own Docker Hub to address this.
This should have been fixed by nginx-proxy/docker-gen#336 and nginx-proxy/docker-gen#345.
I am working on running my application over https using nginx and the letsencrypt-nginx-proxy-companion images, but I am experiencing some weird results. My docker-compose.yml looks like this:
When running `docker-compose logs nginx-proxy`, everything seems to be working fine, with the following logs:

However, when inspecting the default nginx config (with `docker-compose run nginx-proxy cat /etc/nginx/conf.d/default.conf`), it does not seem like it has been configured at all... Any request to the domain also fails, although it is now served over https!
Any idea of what I might have gotten wrong here?
Thank you very much!
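(The reporter's actual docker-compose.yml and logs were not captured above. For orientation, a minimal sketch of the usual nginx-proxy + letsencrypt-nginx-proxy-companion pairing looks roughly like the following; service names, the `my-app` image, and the domain/email are placeholders, not the reporter's values.)

```yaml
# Sketch of a typical nginx-proxy + letsencrypt companion setup.
# Both containers share the certs/vhost/html volumes and the socket.
services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - certs:/etc/nginx/certs:ro
      - vhost:/etc/nginx/vhost.d
      - html:/usr/share/nginx/html
      - /var/run/docker.sock:/tmp/docker.sock:ro
  letsencrypt:
    image: jrcs/letsencrypt-nginx-proxy-companion
    environment:
      - NGINX_PROXY_CONTAINER=nginx-proxy
    volumes:
      - certs:/etc/nginx/certs
      - vhost:/etc/nginx/vhost.d
      - html:/usr/share/nginx/html
      - /var/run/docker.sock:/var/run/docker.sock:ro
  app:
    image: my-app            # placeholder for the proxied application
    environment:
      - VIRTUAL_HOST=example.com
      - LETSENCRYPT_HOST=example.com
      - LETSENCRYPT_EMAIL=admin@example.com
volumes:
  certs:
  vhost:
  html:
```

A common cause of an empty `default.conf` in setups like this is the `app` container not sharing a Docker network with `nginx-proxy`, or `VIRTUAL_HOST` not being set, so comparing against a known-good layout can help narrow it down.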