Docker-based development workflow with Nomad and Consul
nomadev is an attempt to simplify setting up Nomad and Consul agents for local development workflows.
The setup is based on docker-compose and is configured to spawn a single docker container for each of the Nomad and Consul agents. Both agents are configured to run in server + client mode, which simplifies the setup for local use. It's possible to add more containers for additional servers/clients if required.
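A minimal docker-compose.yml for such a setup might look like the sketch below. The service names, image tags, and config paths here are illustrative assumptions, not necessarily the exact ones used by this repository:

```yaml
version: "3"

services:
  nomad:
    build: .              # image with the nomad binary and its config baked in
    network_mode: host    # agents share the host network
    volumes:
      - ./config/nomad.hcl:/etc/nomad/nomad.hcl

  consul:
    image: consul:1.10
    network_mode: host
    command: agent -server -bootstrap -ui -client=0.0.0.0
```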
You will need docker and docker-compose installed.
make docker-build && make docker-up
You should be able to access the following endpoints:
- http://localhost:4646/ui/jobs (Nomad UI)
- http://localhost:8500/ui/dev/services (Consul UI)
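Once the containers are up, you can also verify the agents over their standard HTTP APIs instead of the UI (assumes the stack is running locally):

```shell
# Nomad agent health (returns JSON with the server/client status)
curl -s http://localhost:4646/v1/agent/health

# Consul's current leader (a non-empty "host:port" string means a leader was elected)
curl -s http://localhost:8500/v1/status/leader
```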
(Grab the nomad and consul binaries to query the agents from your host.)
$ nomad server members
Name Address Port Status Leader Protocol Build Datacenter Region
iris.global 192.168.69.4 4648 alive true 2 1.1.5 dev global
$ consul members
Node Address Status Type Build Protocol DC Segment
iris 127.0.0.1:8301 alive server 1.10.3 2 dev <all>
You can check out the examples directory to explore a few job examples.
For example:
nomad job run examples/redis.nomad
The sample redis job exposes a random port on the host. Since we use net=host (host networking) to run our docker containers, the port is directly accessible from the host machine.
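The repository's examples/redis.nomad may differ, but a job with a dynamically allocated host port typically looks something like this sketch (standard Nomad job syntax; the datacenter name matches the dev datacenter shown in the output above):

```hcl
job "redis" {
  datacenters = ["dev"]

  group "cache" {
    network {
      port "redis" {
        to = 6379   # container port; the host side is chosen dynamically
      }
    }

    task "redis" {
      driver = "docker"

      config {
        image = "redis:6"
        ports = ["redis"]
      }
    }
  }
}
```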
$ nomad alloc status {{uuid}}
...
Allocation Addresses
Label Dynamic Address
*redis yes 192.168.69.4:23509 -> 6379
...
# Verify
$ docker run --rm --net=host redis:6 redis-cli -h 192.168.69.4 -p 23509 ping
PONG
Some important things to note:
Nomad configures the destination of artifact, template, etc. relative to the task working directory. If you're using the template stanza, Nomad passes the /alloc/<id>/<task>/local/ path as a bind mount option to the Docker daemon. This means that unless this exact path is present on your host machine, the task will fail to run.
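For example, a template stanza that renders into the task's local/ directory might look like this (a sketch using standard Nomad syntax; the file names and the relative-path volume mapping are illustrative):

```hcl
task "nginx" {
  driver = "docker"

  template {
    data        = "proxy_pass http://{{ env \"NOMAD_ADDR_redis\" }};"
    destination = "local/proxy.conf"   # relative to the task working directory
  }

  config {
    image = "nginx:alpine"
    volumes = [
      # local/ resolves to <data_dir>/alloc/<id>/nginx/local on the host,
      # which is why that exact path must also exist on the host machine
      "local/proxy.conf:/etc/nginx/conf.d/proxy.conf",
    ]
  }
}
```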
The only way around this is to mount /opt/nomad/data (or whatever NOMAD_DATA_DIR you choose inside nomad.hcl) to your host. The data directory path inside the container and on the host must be exactly the same.
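In docker-compose terms, that means a volume entry along these lines (a sketch; assumes the data directory is /opt/nomad/data on both sides):

```yaml
services:
  nomad:
    volumes:
      # identical path inside the container and on the host, so that the
      # bind-mount source paths Nomad hands to the Docker daemon resolve
      - /opt/nomad/data:/opt/nomad/data
```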
This can be verified by running docker inspect on any container spawned by nomad:
"Mounts": [
{
"Type": "bind",
"Source": "/opt/nomad/data/alloc/23d6cc4e-a7bd-d9df-b912-05256ef8a672/nginx/local/proxy.conf",
"Destination": "/etc/nginx/conf.d/proxy.conf",
"Mode": "",
"RW": false,
"Propagation": "rprivate"
}
]
This is the alloc directory that Nomad creates, where templates are rendered; the same paths are passed to the docker daemon when the task runs.
Refer to the docs for more details.
While this setup works fine for local development, it requires elevated privileges to function properly, including running the container as the root user with --privileged=true.
Additionally, the docker socket needs to be mounted if you want to use the docker task driver.
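In docker-compose terms, these privileges translate to something like the following (illustrative; this repository's actual compose file may differ):

```yaml
services:
  nomad:
    privileged: true
    volumes:
      # required for the docker task driver to talk to the host daemon
      - /var/run/docker.sock:/var/run/docker.sock
```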
Goes without saying, do not use this in production.
Nomad 1.3.0 supports systems with cgroups v2. If your system only supports v2 (i.e. you don't have /sys/fs/cgroup/unified on your machine), you must mount /sys/fs/cgroup:/sys/fs/cgroup:ro in docker-compose.yml. This repository does so by default. If you're on a system with only cgroups v1, you can remove this mount from the docker-compose.yml file.