I have another project based on this one, which lets you configure a v2ray-wmess-tls proxy on a VPS -> 👉 HERE 👈
Nginx-proxy container + ACME SSL container + control script.
Lets you easily launch an entry router on your VPS's ports 80/443 which will automatically reroute requests, plus pin AND renew SSL certs, for each subsequent container you launch with the corresponding ENV variables (detailed here), but basically just these:
- VIRTUAL_HOST
- LETSENCRYPT_HOST
- and connect the container to the `inbound` network, more on that later
So you get this:
- (browser) https://subdomain.yourdomain.com =>
- (your vps) nginx-proxy =>
- auto-pin letsencrypt ssl =>
- reroute to container with project =>
- your container, with ssl 😊
This repo provides:
- nginx-proxy container, which receives all requests on your VPS's ports 80/443
- nginx-proxy-acme container, which is in charge of issuing certificates for any deployed container + auto-renewing them
- control script for user-friendly tweaks
- compose file for one-line launch as an alternative
- You should own a domain with an A record pointing to your VPS IP
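Before moving on, it's worth confirming that the A record has actually propagated. A quick check, assuming `dig` is available (from the `dnsutils` package) and using `subdomain.yourdomain.com` as a placeholder:

```shell
# Resolve the subdomain; the output should be your VPS's public IP.
# An empty result means the record hasn't propagated yet (or doesn't exist).
dig +short subdomain.yourdomain.com A
```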
- On your Ubuntu VPS you should install git and docker (commands below are from the official sites; pick what you need) docker | git
```shell
# git
sudo apt install git-all

# docker
sudo apt remove docker docker-engine docker.io containerd runc
sudo apt update
sudo apt install ca-certificates curl gnupg lsb-release
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt update
sudo apt install docker-ce docker-ce-cli containerd.io docker-compose-plugin

# run docker without sudo
sudo groupadd docker
sudo usermod -aG docker $USER
newgrp docker
```
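After installing (and after `newgrp docker` or re-logging in), a quick sanity check that the daemon is running and the group change took effect — note these must run on the VPS itself:

```shell
# Should print the version and run the test image without sudo
# if the group membership is active.
docker --version
docker run --rm hello-world
```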
- Stop apache on your server; it occupies port 80, which prevents nginx from taking its place.

```shell
sudo systemctl disable apache2
sudo systemctl stop apache2
```
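To make sure nothing else is still holding the ports nginx-proxy needs, you can check the listening sockets (a sketch using `ss`, which ships with Ubuntu):

```shell
# Lists anything still listening on 80/443; no output means the ports are free.
sudo ss -tlnp | grep -E ':80 |:443 ' || echo "ports 80/443 are free"
```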
- Clone this repo

```shell
git clone https://github.com/SanariSan/nginx-proxy-ssl
```

- Cd into the directory

```shell
cd nginx-proxy-ssl
```

- Make the script executable

```shell
chmod 755 ./start.sh
```
- Copy `.env.copy` to `.env` and replace the values with your own, using nano or another editor

```shell
cp .env.copy .env
nano .env
```
- Run the script with

```shell
/bin/bash ./start.sh
```

- Or, for a one-line setup, just run

```shell
docker-compose up --build --detach --force-recreate
```
Pictures are from the v2ray-wmess-tls project, but the menu is almost the same
- Main menu
- Go to section 1) (containers) and run a test certificate request (it's a dry run, no cert is generated)
- If that went fine, start all the containers
- Make sure all containers are up and running; you will see green circles
- Check out the logs in section 3) if needed.
- If you wish to enable autostart on boot, proceed to section 2).
- To enable BBR optimisation, proceed to option 6).
Run this one-liner to spin up a simple nodejs pingback server, wait ~1-2 minutes, then visit your domain and verify it opens over https and returns a Let's Encrypt SSL cert.

```shell
docker run -it --rm --name test -e "VIRTUAL_HOST=your.domain" -e "LETSENCRYPT_HOST=your.domain" --net inbound node:alpine sh -c 'echo "require(\"http\").createServer((_,res)=>{res.writeHead(200);res.end(\"OK\");}).listen(80)" | node'
```
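As an alternative to the browser check, you can verify the certificate from the command line with curl (`your.domain` is a placeholder, as above):

```shell
# -v prints the TLS handshake; look for an issuer line mentioning Let's Encrypt.
curl -sv https://your.domain -o /dev/null 2>&1 | grep -iE 'issuer|HTTP'
# The body itself should be the "OK" response from the test server:
curl -s https://your.domain
```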
The project uses the nginx-proxy and acme-companion containers. To make them work not only within this project, but also for proxying other projects, I attached the `inbound` network to both containers. Left more info about that here.
```yaml
...
networks:
  inbound:
    name: inbound
    external: true

services:
  app:
    image: ...
    networks:
      - inbound
      - default
    environment:
      VIRTUAL_HOST: 'subdomain.yourdomain.com'
      LETSENCRYPT_HOST: 'subdomain.yourdomain.com'

  postgres:
    image: ...
    networks:
      - default
    environment:
      NETWORK_ACCESS: 'internal'
...
```
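Note that `external: true` means compose expects the `inbound` network to already exist — the proxy setup above creates it when it starts. If you're wiring things up by hand, a safe way to ensure it exists:

```shell
# Create the shared network only if it isn't there yet.
docker network inspect inbound >/dev/null 2>&1 || docker network create inbound
```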
If you have multiple containers that communicate with each other, you have to start your containers on the `inbound` network and then run `docker network connect` to add a local network for the containers to communicate over (so as not to trash the `inbound` network):
```shell
docker run -d --rm \
  --name app-container \
  --net inbound \
  --env VIRTUAL_HOST='subdomain.yourdomain.com' \
  --env LETSENCRYPT_HOST='subdomain.yourdomain.com' \
  app

docker run -d --rm \
  --name postgres-container \
  --net inbound \
  --env NETWORK_ACCESS='internal' \
  postgres

docker network create local-net
docker network connect local-net app-container
docker network connect local-net postgres-container
```
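To confirm both containers actually ended up attached to the local network, you can inspect it (container names here match the example above):

```shell
# Lists the containers attached to local-net;
# both app-container and postgres-container should appear.
docker network inspect local-net --format '{{range .Containers}}{{.Name}} {{end}}'
```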
So, it's better to use docker-compose if you have more than one container!