
Nomad template not reading env variables if generated dynamically. #5982

Closed
manish-panwar opened this issue Jul 19, 2019 · 12 comments

manish-panwar commented Jul 19, 2019

Nomad version

Nomad v0.9.1 (4b2bdbd)

Operating system and Environment details

Photon OS

Issue

If the environment variables in a template are generated dynamically based on some logic, Nomad doesn't attach them to the task. If we provide hardcoded values, the variables are passed to the Docker container properly.

Reproduction steps

The following doesn't work, even though it generates the variable in key=value format:

template {
  data = <<EOH
{{ $node_ip := env "NOMAD_IP_zk_port" }}{{range $i, $services := service "zk"}}{{ if eq .Address $node_ip }}ZOO_MY_ID="{{ $i | add 1 }}"{{end}}{{end}}
EOH

  destination = "local/data/zkid.env"
  change_mode = "noop"
  env         = true
}

This works (mind the hardcoded value):

template {
  data = <<EOH
ZOO_MY_ID=123
EOH

  destination = "local/data/zkid.env"
  change_mode = "noop"
  env         = true
}

Job file (if appropriate)

job "zk" {
region = "global"
datacenters = ["dc1"]
type = "service"
constraint {
attribute = "${attr.kernel.name}"
value = "linux"
}
update {
stagger = "10s"
max_parallel = 1
}
group "zk" {
count = 3
restart {
interval = "5m"
attempts = 10
delay = "25s"
mode = "delay"
}
task "zk" {
driver = "docker"
template {
data = <<EOH
clientPort=2181
skipACL=yes
syncLimit=2
maxClientCnxns=60
4lw.commands.whitelist=*
tickTime=2000
dataDir=/data
dataLogDir=/datalog
reconfigEnabled=true
standaloneEnabled=false
initLimit=10
autopurge.snapRetainCount=3
autopurge.purgeInterval=4
dynamicConfigFile=/conf/zoo.cfg.dynamic
EOH
destination = "zookeeper/conf/zoo.cfg"
change_mode = "noop"
}
template {
data = <<EOH
{{range $i, $services := service "zk"}}
server.{{ $i | add 1 }}={{.Address}}:2888:3888;2181{{end}}
EOH
destination = "zookeeper/conf/zoo.cfg.dynamic"
change_mode = "noop"
}
template {
data = <<EOH
{{ $node_ip := env "NOMAD_IP_zk_port" }}{{range $i, $services := service "zk"}}{{ if eq .Address $node_ip }}ZOO_MY_ID="{{ $i | add 1 }}"{{end}}{{end}}
EOH
destination = "local/data/zkid.env"
change_mode = "noop"
env = true
}
config {
image = "zookeeper"
network_mode = "host"
labels {
group = "zk-docker"
}
volumes = [
"zookeeper/conf:/conf",
"/opt/zookeeper/data:/data",
"/opt/zookeeper/datalog:/datalog",
"/opt/zookeeper/logs:/logs"
]
}
env {
ZOO_STANDALONE_ENABLED = "false",
ZOO_LOG4J_PROP="DEBUG,ROLLINGFILE",
ZOO_CONF_DIR="/conf",
ZOO_DATA_DIR="/data",
ZOO_DATA_LOG_DIR="/datalog",
ZOO_LOG_DIR="/logs"
}
service {
name = "zk"
tags = [ "zk", "urlprefix-:2181;proto=tcp" ]
port = "zk_port"
address_mode = "host"
}
resources {
cpu = 1024 # 1024 Mhz
memory = 512 # 512MB
network {
port "zk_port" {
static = 2181
}
}
}
}
}
}

Nomad Client logs (if appropriate)

2019-07-19T00:37:30.329Z [DEBUG] http: request complete: method=GET path=/v1/agent/health?type=client duration=182.462µs
2019-07-19T00:37:36.174Z [DEBUG] client: updated allocations: index=26120 total=34 pulled=0 filtered=34
2019-07-19T00:37:36.175Z [DEBUG] client: allocation updates: added=0 removed=0 updated=0 ignored=34
2019-07-19T00:37:36.175Z [DEBUG] client: allocation updates applied: added=0 removed=0 updated=0 ignored=34 errors=0
2019-07-19T00:37:40.330Z [DEBUG] http: request complete: method=GET path=/v1/agent/health?type=client duration=192.917µs
2019-07-19T00:37:49.335Z [DEBUG] client: updated allocations: index=26126 total=34 pulled=1 filtered=33
2019-07-19T00:37:49.335Z [DEBUG] client: allocation updates: added=0 removed=0 updated=1 ignored=33
2019-07-19T00:37:49.335Z [DEBUG] client: allocation updates applied: added=0 removed=0 updated=1 ignored=33 errors=0
2019-07-19T00:37:49.339Z [DEBUG] consul.sync: sync complete: registered_services=0 deregistered_services=1 registered_checks=0 deregistered_checks=0
2019/07/19 00:37:49.341367 [DEBUG] (runner) receiving dependency health.service(zk|passing)
2019/07/19 00:37:49.341501 [DEBUG] (runner) initiating run
2019/07/19 00:37:49.341511 [DEBUG] (runner) checking template 4c71b33a7d393df453f4548df158fa44
2019/07/19 00:37:49.341716 [DEBUG] (runner) rendering "(dynamic)" => "/opt/nomad/alloc/cc983645-3a5c-c639-dfae-f7b1b180bdf7/zk/zookeeper/conf/zoo.cfg.dynamic"
2019/07/19 00:37:49.342265 [INFO] (runner) rendered "(dynamic)" => "/opt/nomad/alloc/cc983645-3a5c-c639-dfae-f7b1b180bdf7/zk/zookeeper/conf/zoo.cfg.dynamic"
2019/07/19 00:37:49.342284 [DEBUG] (runner) checking template 975fb82a72d69c6ce5200e90130c2cf2
2019/07/19 00:37:49.342466 [DEBUG] (runner) rendering "(dynamic)" => "/opt/nomad/alloc/cc983645-3a5c-c639-dfae-f7b1b180bdf7/zk/local/data/zkid.env"
2019/07/19 00:37:49.343193 [INFO] (runner) rendered "(dynamic)" => "/opt/nomad/alloc/cc983645-3a5c-c639-dfae-f7b1b180bdf7/zk/local/data/zkid.env"
2019/07/19 00:37:49.343208 [DEBUG] (runner) checking template a499eb31288cef07a2833b8b5167d325
2019/07/19 00:37:49.343305 [DEBUG] (runner) rendering "(dynamic)" => "/opt/nomad/alloc/cc983645-3a5c-c639-dfae-f7b1b180bdf7/zk/zookeeper/conf/zoo.cfg"
2019/07/19 00:37:49.343340 [DEBUG] (runner) diffing and updating dependencies
2019/07/19 00:37:49.343359 [DEBUG] (runner) health.service(zk|passing) is still needed
2019/07/19 00:37:49.343369 [DEBUG] (runner) watching 1 dependencies
2019/07/19 00:37:49.343375 [DEBUG] (runner) all templates rendered
2019/07/19 00:37:49.344134 [DEBUG] (runner) receiving dependency health.service(zk|passing)
2019/07/19 00:37:49.344283 [DEBUG] (runner) initiating run
2019/07/19 00:37:49.344378 [DEBUG] (runner) checking template 4c71b33a7d393df453f4548df158fa44
2019/07/19 00:37:49.344584 [DEBUG] (runner) rendering "(dynamic)" => "/opt/nomad/alloc/cc983645-3a5c-c639-dfae-f7b1b180bdf7/zk/zookeeper/conf/zoo.cfg.dynamic"
2019/07/19 00:37:49.345106 [INFO] (runner) rendered "(dynamic)" => "/opt/nomad/alloc/cc983645-3a5c-c639-dfae-f7b1b180bdf7/zk/zookeeper/conf/zoo.cfg.dynamic"
2019/07/19 00:37:49.345177 [DEBUG] (runner) checking template 975fb82a72d69c6ce5200e90130c2cf2
2019/07/19 00:37:49.345393 [DEBUG] (runner) rendering "(dynamic)" => "/opt/nomad/alloc/cc983645-3a5c-c639-dfae-f7b1b180bdf7/zk/local/data/zkid.env"
2019/07/19 00:37:49.345491 [DEBUG] (runner) checking template a499eb31288cef07a2833b8b5167d325
2019/07/19 00:37:49.345637 [DEBUG] (runner) rendering "(dynamic)" => "/opt/nomad/alloc/cc983645-3a5c-c639-dfae-f7b1b180bdf7/zk/zookeeper/conf/zoo.cfg"
2019/07/19 00:37:49.345718 [DEBUG] (runner) diffing and updating dependencies
2019/07/19 00:37:49.345786 [DEBUG] (runner) health.service(zk|passing) is still needed
2019/07/19 00:37:49.345850 [DEBUG] (runner) watching 1 dependencies
2019/07/19 00:37:49.345906 [DEBUG] (runner) all templates rendered
2019/07/19 00:37:49.442552 [DEBUG] (runner) receiving dependency health.service(zk|passing)
2019/07/19 00:37:49.442731 [DEBUG] (runner) initiating run
2019/07/19 00:37:49.442852 [DEBUG] (runner) checking template 4c71b33a7d393df453f4548df158fa44
2019/07/19 00:37:49.443141 [DEBUG] (runner) rendering "(dynamic)" => "/opt/nomad/alloc/cc983645-3a5c-c639-dfae-f7b1b180bdf7/zk/zookeeper/conf/zoo.cfg.dynamic"
2019/07/19 00:37:49.443747 [INFO] (runner) rendered "(dynamic)" => "/opt/nomad/alloc/cc983645-3a5c-c639-dfae-f7b1b180bdf7/zk/zookeeper/conf/zoo.cfg.dynamic"
2019/07/19 00:37:49.443880 [DEBUG] (runner) checking template 975fb82a72d69c6ce5200e90130c2cf2
2019/07/19 00:37:49.444205 [DEBUG] (runner) rendering "(dynamic)" => "/opt/nomad/alloc/cc983645-3a5c-c639-dfae-f7b1b180bdf7/zk/local/data/zkid.env"
2019/07/19 00:37:49.444327 [DEBUG] (runner) checking template a499eb31288cef07a2833b8b5167d325
2019/07/19 00:37:49.444494 [DEBUG] (runner) rendering "(dynamic)" => "/opt/nomad/alloc/cc983645-3a5c-c639-dfae-f7b1b180bdf7/zk/zookeeper/conf/zoo.cfg"
2019/07/19 00:37:49.444587 [DEBUG] (runner) diffing and updating dependencies
2019/07/19 00:37:49.444658 [DEBUG] (runner) health.service(zk|passing) is still needed
2019/07/19 00:37:49.444716 [DEBUG] (runner) watching 1 dependencies
2019/07/19 00:37:49.444765 [DEBUG] (runner) all templates rendered
2019-07-19T00:37:49.569Z [DEBUG] client: updated allocations: index=26130 total=34 pulled=0 filtered=34
2019-07-19T00:37:49.569Z [DEBUG] client: allocation updates: added=0 removed=0 updated=0 ignored=34
2019-07-19T00:37:49.569Z [DEBUG] client: allocation updates applied: added=0 removed=0 updated=0 ignored=34 errors=0
2019-07-19T00:37:49.730Z [DEBUG] client.alloc_runner.task_runner.task_hook.stats_hook: error fetching stats of task: alloc_id=cc983645-3a5c-c639-dfae-f7b1b180bdf7 task=zk error="container stopped"
2019-07-19T00:37:49.735Z [ERROR] client.driver_mgr.docker.docker_logger.nomad: reading plugin stderr: driver=docker error="read |0: file already closed"
2019-07-19T00:37:49.735Z [DEBUG] client.driver_mgr.docker.docker_logger: plugin process exited: driver=docker path=/usr/local/bin/nomad pid=23030
2019-07-19T00:37:49.735Z [DEBUG] client.driver_mgr.docker.docker_logger: plugin exited: driver=docker
2019-07-19T00:37:49.735Z [INFO ] client.gc: marking allocation for GC: alloc_id=cc983645-3a5c-c639-dfae-f7b1b180bdf7
2019-07-19T00:37:49.737Z [ERROR] client.alloc_runner.task_runner.task_hook.logmon.nomad: reading plugin stderr: alloc_id=cc983645-3a5c-c639-dfae-f7b1b180bdf7 task=zk error="read |0: file already closed"
2019-07-19T00:37:49.737Z [DEBUG] client.alloc_runner.task_runner.task_hook.logmon: plugin process exited: alloc_id=cc983645-3a5c-c639-dfae-f7b1b180bdf7 task=zk path=/usr/local/bin/nomad pid=22966
2019-07-19T00:37:49.737Z [DEBUG] client.alloc_runner.task_runner.task_hook.logmon: plugin exited: alloc_id=cc983645-3a5c-c639-dfae-f7b1b180bdf7 task=zk
2019/07/19 00:37:49.737507 [INFO] (runner) stopping
2019/07/19 00:37:49.737515 [DEBUG] (runner) stopping watcher
2019/07/19 00:37:49.737521 [DEBUG] (watcher) stopping all views
2019-07-19T00:37:49.737Z [DEBUG] client.alloc_runner.task_runner: task run loop exiting: alloc_id=cc983645-3a5c-c639-dfae-f7b1b180bdf7 task=zk
2019/07/19 00:37:49.737564 [INFO] (runner) received finish
2019-07-19T00:37:49.909Z [DEBUG] client: updated allocations: index=26132 total=34 pulled=0 filtered=34
2019-07-19T00:37:49.909Z [DEBUG] client: allocation updates: added=0 removed=0 updated=0 ignored=34
2019-07-19T00:37:49.909Z [DEBUG] client: allocation updates applied: added=0 removed=0 updated=0 ignored=34 errors=0
2019-07-19T00:37:50.331Z [DEBUG] http: request complete: method=GET path=/v1/agent/health?type=client duration=166.396µs
2019-07-19T00:37:50.571Z [DEBUG] client: updated allocations: index=26136 total=35 pulled=1 filtered=34
2019-07-19T00:37:50.571Z [DEBUG] client: allocation updates: added=1 removed=0 updated=0 ignored=34
2019-07-19T00:37:50.573Z [DEBUG] client: allocation updates applied: added=1 removed=0 updated=0 ignored=34 errors=0
2019-07-19T00:37:50.578Z [DEBUG] client.alloc_runner.task_runner.task_hook.logmon: starting plugin: alloc_id=924d8ea1-d640-f5c7-f3d8-8a9a8af25d4f task=zk path=/usr/local/bin/nomad args="[/usr/local/bin/nomad logmon]"
2019-07-19T00:37:50.578Z [DEBUG] client.alloc_runner.task_runner.task_hook.logmon: plugin started: alloc_id=924d8ea1-d640-f5c7-f3d8-8a9a8af25d4f task=zk path=/usr/local/bin/nomad pid=23134
2019-07-19T00:37:50.578Z [DEBUG] client.alloc_runner.task_runner.task_hook.logmon: waiting for RPC address: alloc_id=924d8ea1-d640-f5c7-f3d8-8a9a8af25d4f task=zk path=/usr/local/bin/nomad
2019-07-19T00:37:50.596Z [DEBUG] client.alloc_runner.task_runner.task_hook.logmon.nomad: plugin address: alloc_id=924d8ea1-d640-f5c7-f3d8-8a9a8af25d4f task=zk @module=logmon address=/tmp/plugin687436439 network=unix timestamp=2019-07-19T00:37:50.596Z
2019-07-19T00:37:50.596Z [DEBUG] client.alloc_runner.task_runner.task_hook.logmon: using plugin: alloc_id=924d8ea1-d640-f5c7-f3d8-8a9a8af25d4f task=zk version=2
2019-07-19T00:37:50.597Z [INFO ] client.alloc_runner.task_runner.task_hook.logmon.nomad: opening fifo: alloc_id=924d8ea1-d640-f5c7-f3d8-8a9a8af25d4f task=zk @module=logmon path=/opt/nomad/alloc/924d8ea1-d640-f5c7-f3d8-8a9a8af25d4f/alloc/logs/.zk.stdout.fifo timestamp=2019-07-19T00:37:50.597Z
2019-07-19T00:37:50.597Z [INFO ] client.alloc_runner.task_runner.task_hook.logmon.nomad: opening fifo: alloc_id=924d8ea1-d640-f5c7-f3d8-8a9a8af25d4f task=zk @module=logmon path=/opt/nomad/alloc/924d8ea1-d640-f5c7-f3d8-8a9a8af25d4f/alloc/logs/.zk.stderr.fifo timestamp=2019-07-19T00:37:50.597Z
2019/07/19 00:37:50.599446 [INFO] (runner) creating new runner (dry: false, once: false)
2019/07/19 00:37:50.599624 [DEBUG] (runner) final config: {"Consul":{"Address":"","Auth":{"Enabled":false,"Username":"","Password":""},"Retry":{"Attempts":12,"Backoff":250000000,"MaxBackoff":60000000000,"Enabled":true},"SSL":{"CaCert":"","CaPath":"","Cert":"","Enabled":false,"Key":"","ServerName":"","Verify":true},"Token":"","Transport":{"DialKeepAlive":30000000000,"DialTimeout":30000000000,"DisableKeepAlives":false,"IdleConnTimeout":90000000000,"MaxIdleConns":100,"MaxIdleConnsPerHost":3,"TLSHandshakeTimeout":10000000000}},"Dedup":{"Enabled":false,"MaxStale":2000000000,"Prefix":"consul-template/dedup/","TTL":15000000000},"Exec":{"Command":"","Enabled":false,"Env":{"Blacklist":[],"Custom":[],"Pristine":false,"Whitelist":[]},"KillSignal":2,"KillTimeout":30000000000,"ReloadSignal":null,"Splay":0,"Timeout":0},"KillSignal":2,"LogLevel":"WARN","MaxStale":2000000000,"PidFile":"","ReloadSignal":1,"Syslog":{"Enabled":false,"Facility":"LOCAL0"},"Templates":[{"Backup":false,"Command":"","CommandTimeout":30000000000,"Contents":"            clientPort=2181\n            skipACL=yes\n            syncLimit=2\n            maxClientCnxns=60\n            4lw.commands.whitelist=*\n            tickTime=2000\n            dataDir=/data\n            dataLogDir=/datalog\n            reconfigEnabled=true\n            standaloneEnabled=false\n            initLimit=10\n            autopurge.snapRetainCount=3\n            autopurge.purgeInterval=4\n            dynamicConfigFile=/conf/zoo.cfg.dynamic\n        ","CreateDestDirs":true,"Destination":"/opt/nomad/alloc/924d8ea1-d640-f5c7-f3d8-8a9a8af25d4f/zk/zookeeper/conf/zoo.cfg","ErrMissingKey":false,"Exec":{"Command":"","Enabled":false,"Env":{"Blacklist":[],"Custom":[],"Pristine":false,"Whitelist":[]},"KillSignal":2,"KillTimeout":30000000000,"ReloadSignal":null,"Splay":0,"Timeout":30000000000},"Perms":420,"Source":"","Wait":{"Enabled":false,"Min":0,"Max":0},"LeftDelim":"{{","RightDelim":"}}"},{"Backup":false,"Command":"","CommandTimeout":30000000000,"Contents":"          {{range $i, $services := service \"zk\"}}\n          server.{{ $i | add 1 }}={{.Address}}:2888:3888;2181{{end}}\n        ","CreateDestDirs":true,"Destination":"/opt/nomad/alloc/924d8ea1-d640-f5c7-f3d8-8a9a8af25d4f/zk/zookeeper/conf/zoo.cfg.dynamic","ErrMissingKey":false,"Exec":{"Command":"","Enabled":false,"Env":{"Blacklist":[],"Custom":[],"Pristine":false,"Whitelist":[]},"KillSignal":2,"KillTimeout":30000000000,"ReloadSignal":null,"Splay":0,"Timeout":30000000000},"Perms":420,"Source":"","Wait":{"Enabled":false,"Min":0,"Max":0},"LeftDelim":"{{","RightDelim":"}}"},{"Backup":false,"Command":"","CommandTimeout":30000000000,"Contents":"        {{ $node_ip := env \"NOMAD_IP_zk_port\" }}{{range $i, $services := service \"zk\"}}{{ if eq .Address $node_ip }}ZOO_MY_ID=\"{{ $i | add 1 }}\"{{end}}{{end}}\n        
","CreateDestDirs":true,"Destination":"/opt/nomad/alloc/924d8ea1-d640-f5c7-f3d8-8a9a8af25d4f/zk/local/data/zkid.env","ErrMissingKey":false,"Exec":{"Command":"","Enabled":false,"Env":{"Blacklist":[],"Custom":[],"Pristine":false,"Whitelist":[]},"KillSignal":2,"KillTimeout":30000000000,"ReloadSignal":null,"Splay":0,"Timeout":30000000000},"Perms":420,"Source":"","Wait":{"Enabled":false,"Min":0,"Max":0},"LeftDelim":"{{","RightDelim":"}}"}],"Vault":{"Address":"","Enabled":false,"Grace":15000000000,"RenewToken":false,"Retry":{"Attempts":12,"Backoff":250000000,"MaxBackoff":60000000000,"Enabled":true},"SSL":{"CaCert":"","CaPath":"","Cert":"","Enabled":true,"Key":"","ServerName":"","Verify":true},"Transport":{"DialKeepAlive":30000000000,"DialTimeout":30000000000,"DisableKeepAlives":false,"IdleConnTimeout":90000000000,"MaxIdleConns":100,"MaxIdleConnsPerHost":3,"TLSHandshakeTimeout":10000000000},"UnwrapToken":false},"Wait":{"Enabled":false,"Min":0,"Max":0}}
2019/07/19 00:37:50.599695 [INFO] (runner) creating watcher
2019/07/19 00:37:50.599805 [INFO] (runner) starting
2019/07/19 00:37:50.599828 [DEBUG] (runner) running initial templates
2019/07/19 00:37:50.599837 [DEBUG] (runner) initiating run
2019/07/19 00:37:50.599875 [DEBUG] (runner) checking template a499eb31288cef07a2833b8b5167d325
2019/07/19 00:37:50.600024 [DEBUG] (runner) rendering "(dynamic)" => "/opt/nomad/alloc/924d8ea1-d640-f5c7-f3d8-8a9a8af25d4f/zk/zookeeper/conf/zoo.cfg"
2019/07/19 00:37:50.600646 [INFO] (runner) rendered "(dynamic)" => "/opt/nomad/alloc/924d8ea1-d640-f5c7-f3d8-8a9a8af25d4f/zk/zookeeper/conf/zoo.cfg"
2019/07/19 00:37:50.600672 [DEBUG] (runner) checking template 4c71b33a7d393df453f4548df158fa44
2019/07/19 00:37:50.600875 [DEBUG] (runner) was not watching 1 dependencies
2019/07/19 00:37:50.600906 [DEBUG] (watcher) adding health.service(zk|passing)
2019/07/19 00:37:50.600925 [DEBUG] (runner) checking template 975fb82a72d69c6ce5200e90130c2cf2
2019/07/19 00:37:50.601063 [DEBUG] (runner) missing data for 1 dependencies
2019/07/19 00:37:50.601085 [DEBUG] (runner) diffing and updating dependencies
2019/07/19 00:37:50.601096 [DEBUG] (runner) watching 1 dependencies
2019/07/19 00:37:50.602375 [DEBUG] (runner) receiving dependency health.service(zk|passing)
2019/07/19 00:37:50.602403 [DEBUG] (runner) initiating run
2019/07/19 00:37:50.602412 [DEBUG] (runner) checking template a499eb31288cef07a2833b8b5167d325
2019/07/19 00:37:50.602512 [DEBUG] (runner) rendering "(dynamic)" => "/opt/nomad/alloc/924d8ea1-d640-f5c7-f3d8-8a9a8af25d4f/zk/zookeeper/conf/zoo.cfg"
2019/07/19 00:37:50.602554 [DEBUG] (runner) checking template 4c71b33a7d393df453f4548df158fa44
2019/07/19 00:37:50.602694 [DEBUG] (runner) rendering "(dynamic)" => "/opt/nomad/alloc/924d8ea1-d640-f5c7-f3d8-8a9a8af25d4f/zk/zookeeper/conf/zoo.cfg.dynamic"
2019/07/19 00:37:50.603065 [INFO] (runner) rendered "(dynamic)" => "/opt/nomad/alloc/924d8ea1-d640-f5c7-f3d8-8a9a8af25d4f/zk/zookeeper/conf/zoo.cfg.dynamic"
2019/07/19 00:37:50.603094 [DEBUG] (runner) checking template 975fb82a72d69c6ce5200e90130c2cf2
2019/07/19 00:37:50.603263 [DEBUG] (runner) rendering "(dynamic)" => "/opt/nomad/alloc/924d8ea1-d640-f5c7-f3d8-8a9a8af25d4f/zk/local/data/zkid.env"
2019/07/19 00:37:50.603642 [INFO] (runner) rendered "(dynamic)" => "/opt/nomad/alloc/924d8ea1-d640-f5c7-f3d8-8a9a8af25d4f/zk/local/data/zkid.env"
2019/07/19 00:37:50.603680 [DEBUG] (runner) diffing and updating dependencies
2019/07/19 00:37:50.603692 [DEBUG] (runner) health.service(zk|passing) is still needed
2019/07/19 00:37:50.603726 [DEBUG] (runner) watching 1 dependencies
2019/07/19 00:37:50.603741 [DEBUG] (runner) all templates rendered
2019-07-19T00:37:50.786Z [DEBUG] client: updated allocations: index=26140 total=35 pulled=0 filtered=35
2019-07-19T00:37:50.786Z [DEBUG] client: allocation updates: added=0 removed=0 updated=0 ignored=35
2019-07-19T00:37:50.786Z [DEBUG] client: allocation updates applied: added=0 removed=0 updated=0 ignored=35 errors=0
2019-07-19T00:37:51.009Z [DEBUG] client.driver_mgr.docker: docker pull succeeded: driver=docker image_ref=zookeeper:latest
2019-07-19T00:37:51.011Z [DEBUG] client.driver_mgr.docker: image reference count incremented: driver=docker image_name=zookeeper image_id=sha256:a873528df41f67308900374218cbcb334bf764074f9daa76482d86e53f058589 references=108
2019-07-19T00:37:51.011Z [DEBUG] client.driver_mgr.docker: configured resources: driver=docker task_name=zk memory=536870912 cpu_shares=1024 cpu_quota=0 cpu_period=0
2019-07-19T00:37:51.011Z [DEBUG] client.driver_mgr.docker: binding directories: driver=docker task_name=zk binds="[]string{"/opt/nomad/alloc/924d8ea1-d640-f5c7-f3d8-8a9a8af25d4f/alloc:/alloc", "/opt/nomad/alloc/924d8ea1-d640-f5c7-f3d8-8a9a8af25d4f/zk/local:/local", "/opt/nomad/alloc/924d8ea1-d640-f5c7-f3d8-8a9a8af25d4f/zk/secrets:/secrets", "/opt/nomad/alloc/924d8ea1-d640-f5c7-f3d8-8a9a8af25d4f/zk/zookeeper/conf:/conf", "/opt/zookeeper/data:/data", "/opt/zookeeper/datalog:/datalog", "/opt/zookeeper/logs:/logs"}"
2019-07-19T00:37:51.011Z [DEBUG] client.driver_mgr.docker: allocated static port: driver=docker task_name=zk ip=10.84.133.109 port=2181
2019-07-19T00:37:51.011Z [DEBUG] client.driver_mgr.docker: exposed port: driver=docker task_name=zk port=2181
2019-07-19T00:37:51.011Z [DEBUG] client.driver_mgr.docker: applied labels on the container: driver=docker task_name=zk labels=map[group:zk-docker]
2019-07-19T00:37:51.011Z [DEBUG] client.driver_mgr.docker: setting container name: driver=docker task_name=zk container_name=zk-924d8ea1-d640-f5c7-f3d8-8a9a8af25d4f
2019-07-19T00:37:51.028Z [INFO ] client.driver_mgr.docker: created container: driver=docker container_id=e09ba2f59c9833a6a13f840c935da9c5dc7358fa30b6964c3219e333912b27cc
2019-07-19T00:37:51.297Z [INFO ] client.driver_mgr.docker: started container: driver=docker container_id=e09ba2f59c9833a6a13f840c935da9c5dc7358fa30b6964c3219e333912b27cc
2019-07-19T00:37:51.297Z [DEBUG] client.driver_mgr.docker.docker_logger: starting plugin: driver=docker path=/usr/local/bin/nomad args="[/usr/local/bin/nomad docker_logger]"
2019-07-19T00:37:51.297Z [DEBUG] client.driver_mgr.docker.docker_logger: plugin started: driver=docker path=/usr/local/bin/nomad pid=23195
2019-07-19T00:37:51.297Z [DEBUG] client.driver_mgr.docker.docker_logger: waiting for RPC address: driver=docker path=/usr/local/bin/nomad
2019-07-19T00:37:51.330Z [DEBUG] client.driver_mgr.docker.docker_logger.nomad: plugin address: driver=docker address=/tmp/plugin147793528 network=unix @module=docker_logger timestamp=2019-07-19T00:37:51.329Z
2019-07-19T00:37:51.330Z [DEBUG] client.driver_mgr.docker.docker_logger: using plugin: driver=docker version=2
2019-07-19T00:37:51.331Z [DEBUG] client.driver_mgr.docker.docker_logger.nomad: using client connection initialized from environment: driver=docker @module=docker_logger timestamp=2019-07-19T00:37:51.331Z
2019-07-19T00:37:51.340Z [DEBUG] consul.sync: sync complete: registered_services=1 deregistered_services=0 registered_checks=0 deregistered_checks=0
2019/07/19 00:37:51.341018 [DEBUG] (runner) receiving dependency health.service(zk|passing)
2019/07/19 00:37:51.341032 [DEBUG] (runner) initiating run
2019/07/19 00:37:51.341040 [DEBUG] (runner) checking template a499eb31288cef07a2833b8b5167d325
2019/07/19 00:37:51.341161 [DEBUG] (runner) rendering "(dynamic)" => "/opt/nomad/alloc/924d8ea1-d640-f5c7-f3d8-8a9a8af25d4f/zk/zookeeper/conf/zoo.cfg"
2019/07/19 00:37:51.341199 [DEBUG] (runner) checking template 4c71b33a7d393df453f4548df158fa44
2019/07/19 00:37:51.341366 [DEBUG] (runner) rendering "(dynamic)" => "/opt/nomad/alloc/924d8ea1-d640-f5c7-f3d8-8a9a8af25d4f/zk/zookeeper/conf/zoo.cfg.dynamic"
2019/07/19 00:37:51.341821 [INFO] (runner) rendered "(dynamic)" => "/opt/nomad/alloc/924d8ea1-d640-f5c7-f3d8-8a9a8af25d4f/zk/zookeeper/conf/zoo.cfg.dynamic"
2019/07/19 00:37:51.341840 [DEBUG] (runner) checking template 975fb82a72d69c6ce5200e90130c2cf2
2019/07/19 00:37:51.342019 [DEBUG] (runner) rendering "(dynamic)" => "/opt/nomad/alloc/924d8ea1-d640-f5c7-f3d8-8a9a8af25d4f/zk/local/data/zkid.env"
2019/07/19 00:37:51.342074 [DEBUG] (runner) diffing and updating dependencies
2019/07/19 00:37:51.342122 [DEBUG] (runner) health.service(zk|passing) is still needed
2019/07/19 00:37:51.342134 [DEBUG] (runner) watching 1 dependencies
2019/07/19 00:37:51.342143 [DEBUG] (runner) all templates rendered
2019/07/19 00:37:51.362996 [DEBUG] (runner) receiving dependency health.service(zk|passing)
2019/07/19 00:37:51.363020 [DEBUG] (runner) initiating run
2019/07/19 00:37:51.363165 [DEBUG] (runner) checking template a499eb31288cef07a2833b8b5167d325
2019/07/19 00:37:51.363406 [DEBUG] (runner) rendering "(dynamic)" => "/opt/nomad/alloc/924d8ea1-d640-f5c7-f3d8-8a9a8af25d4f/zk/zookeeper/conf/zoo.cfg"
2019/07/19 00:37:51.363806 [DEBUG] (runner) checking template 4c71b33a7d393df453f4548df158fa44
2019/07/19 00:37:51.364112 [DEBUG] (runner) rendering "(dynamic)" => "/opt/nomad/alloc/924d8ea1-d640-f5c7-f3d8-8a9a8af25d4f/zk/zookeeper/conf/zoo.cfg.dynamic"
2019/07/19 00:37:51.364660 [INFO] (runner) rendered "(dynamic)" => "/opt/nomad/alloc/924d8ea1-d640-f5c7-f3d8-8a9a8af25d4f/zk/zookeeper/conf/zoo.cfg.dynamic"
2019/07/19 00:37:51.364776 [DEBUG] (runner) checking template 975fb82a72d69c6ce5200e90130c2cf2
2019/07/19 00:37:51.365044 [DEBUG] (runner) rendering "(dynamic)" => "/opt/nomad/alloc/924d8ea1-d640-f5c7-f3d8-8a9a8af25d4f/zk/local/data/zkid.env"
2019/07/19 00:37:51.365491 [INFO] (runner) rendered "(dynamic)" => "/opt/nomad/alloc/924d8ea1-d640-f5c7-f3d8-8a9a8af25d4f/zk/local/data/zkid.env"
2019/07/19 00:37:51.365562 [DEBUG] (runner) diffing and updating dependencies
2019/07/19 00:37:51.365628 [DEBUG] (runner) health.service(zk|passing) is still needed
2019/07/19 00:37:51.365689 [DEBUG] (runner) watching 1 dependencies
2019/07/19 00:37:51.365740 [DEBUG] (runner) all templates rendered
2019/07/19 00:37:51.461784 [DEBUG] (runner) receiving dependency health.service(zk|passing)
2019/07/19 00:37:51.461803 [DEBUG] (runner) initiating run
2019/07/19 00:37:51.461808 [DEBUG] (runner) checking template a499eb31288cef07a2833b8b5167d325
2019/07/19 00:37:51.461928 [DEBUG] (runner) rendering "(dynamic)" => "/opt/nomad/alloc/924d8ea1-d640-f5c7-f3d8-8a9a8af25d4f/zk/zookeeper/conf/zoo.cfg"
2019/07/19 00:37:51.461974 [DEBUG] (runner) checking template 4c71b33a7d393df453f4548df158fa44
2019/07/19 00:37:51.462140 [DEBUG] (runner) rendering "(dynamic)" => "/opt/nomad/alloc/924d8ea1-d640-f5c7-f3d8-8a9a8af25d4f/zk/zookeeper/conf/zoo.cfg.dynamic"
2019/07/19 00:37:51.462595 [INFO] (runner) rendered "(dynamic)" => "/opt/nomad/alloc/924d8ea1-d640-f5c7-f3d8-8a9a8af25d4f/zk/zookeeper/conf/zoo.cfg.dynamic"
2019/07/19 00:37:51.462613 [DEBUG] (runner) checking template 975fb82a72d69c6ce5200e90130c2cf2
2019/07/19 00:37:51.462808 [DEBUG] (runner) rendering "(dynamic)" => "/opt/nomad/alloc/924d8ea1-d640-f5c7-f3d8-8a9a8af25d4f/zk/local/data/zkid.env"
2019/07/19 00:37:51.462851 [DEBUG] (runner) diffing and updating dependencies
2019/07/19 00:37:51.462861 [DEBUG] (runner) health.service(zk|passing) is still needed
2019/07/19 00:37:51.462875 [DEBUG] (runner) watching 1 dependencies
2019/07/19 00:37:51.462882 [DEBUG] (runner) all templates rendered
2019-07-19T00:37:51.582Z [DEBUG] client: updated allocations: index=26143 total=35 pulled=0 filtered=35
2019-07-19T00:37:51.582Z [DEBUG] client: allocation updates: added=0 removed=0 updated=0 ignored=35
2019-07-19T00:37:51.582Z [DEBUG] client: allocation updates applied: added=0 removed=0 updated=0 ignored=35 errors=0
2019-07-19T00:38:00.332Z [DEBUG] http: request complete: method=GET path=/v1/agent/health?type=client duration=159.188µs
2019-07-19T00:38:01.507Z [DEBUG] client: updated allocations: index=26145 total=35 pulled=0 filtered=35
2019-07-19T00:38:01.507Z [DEBUG] client: allocation updates: added=0 removed=0 updated=0 ignored=35
2019-07-19T00:38:01.507Z [DEBUG] client: allocation updates applied: added=0 removed=0 updated=0 ignored=35 errors=0

Nomad Server logs (if appropriate)

@manish-panwar (Author)

I can send you the generated file as well if you guys want.

notnoop (Contributor) commented Jul 22, 2019

Thanks for reaching out. The issue here is that the template depends on the Consul service registration of the task itself, which happens while the task is launching and before it is marked as healthy. While the zk service containers are still launching and not yet healthy, {{range $i, $services := service "zk"}} is an empty set!

If I launch another service first and wait for it to be healthy, the environment variables are set correctly. Here, I tested against the example redis-cache service; note how all environment variables are set correctly except for ZOO_SELF_SERVICE:

The reproduction commands and observed output:

$ nomad job init --short
Example job file written to example.nomad
$ nomad job run ./example.nomad
==> Monitoring evaluation "d86a8c32"
    Evaluation triggered by job "example"
    Allocation "e92a9788" created: node "3b248445", group "cache"
    Evaluation status changed: "pending" -> "complete"
==> Evaluation "d86a8c32" finished with status "complete"
$ nomad job run ./zk.hcl
==> Monitoring evaluation "a1647501"
    Evaluation triggered by job "zk"
    Allocation "38fbb42a" created: node "3b248445", group "zk"
    Evaluation status changed: "pending" -> "complete"
==> Evaluation "a1647501" finished with status "complete"
$ nomad alloc logs 38fbb42a
>>>>>>>>>>>>>>> FROM ENV_VARS
ZOO_SIMPLE_RANGE=a
ZOO_OTHER_SERVICE=127.0.0.1
ZOO_SIMPLE_VAR=127.0.0.1

>>>>>>>>>>>>>>> FROM ENV_FILE
# test variable lookup
ZOO_SIMPLE_VAR="127.0.0.1"
# test range assignment
ZOO_SIMPLE_RANGE="a"
# test range over other services
ZOO_OTHER_SERVICE="127.0.0.1"
# test range over self services

The job file is:

job "zk" {
  region      = "global"
  datacenters = ["dc1"]
  type        = "service"

  group "zk" {
    count = 1

    task "zk" {
      driver = "docker"

      template {
        data = <<EOH
# test variable lookup
{{ $node_ip := env "NOMAD_IP_zk_port" }}ZOO_SIMPLE_VAR="{{ $node_ip }}"
# test range assignment
{{ range $i, $letters := "a" | split " " }}ZOO_SIMPLE_RANGE="{{ $letters }}"{{end}}
# test range over other services
{{ range $i, $services := service "redis-cache" }}ZOO_OTHER_SERVICE="{{ $services.Address }}"{{end}}
# test range over self services
{{range $i, $services := service "zk"}}ZOO_SELF_SERVICE="{{ $services.Address }}"{{end}}

EOH

        destination = "local/zkid.env"
        change_mode = "noop"
        env         = true
      }

      config {
        image        = "alpine:latest"
        command = "/bin/sh"
        args = ["-c", "echo '>>>>>>>>>>>>>>> FROM ENV_VARS'; printenv | grep ZOO_; echo; echo '>>>>>>>>>>>>>>> FROM ENV_FILE'; cat /local/zkid.env; sleep 40000"]
      }

      service {
        name         = "zk"
        port         = "zk_port"
        address_mode = "host"
      }

      resources {
        cpu    = 1024 # 1024 Mhz
        memory = 512  # 512MB

        network {
          port "zk_port" {}
        }
      }
    }
  }
}
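
A hedged diagnostic variant of the env template above (ZOO_TEMPLATE_RENDERED is an illustrative name, not something Nomad defines): an unconditional sentinel line makes it possible to tell "the env file was never injected into the task environment" apart from "the service range was empty at render time".

template {
  data = <<EOH
# Renders even when service "zk" is empty, so a missing ZOO_TEMPLATE_RENDERED
# means the rendered file was not injected into the task environment at all.
ZOO_TEMPLATE_RENDERED="yes"
{{ range $i, $services := service "zk" }}ZOO_SELF_SERVICE="{{ $services.Address }}"{{ end }}
EOH

  destination = "local/zkid.env"
  change_mode = "noop"
  env         = true
}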

notnoop closed this as completed Jul 22, 2019
@manish-panwar (Author)

@notnoop - I tested it even with hardcoded values (I mean {{range $i, $services := 10.10.10.10}}) and still saw the issue. Surprisingly, I could see Nomad returning all nodes where zk is deployed via {{range $i, $services := service "zk"}} even though the containers aren't healthy yet - but yes, I agree it's sort of a race condition, and Nomad may not return anything since we are looking up the task's own service. I raised this issue specifically about how the env variables are loaded by Nomad, not about how to do the lookup. Is it possible to enable some logging in Nomad so I can see what values Nomad reads from the rendered file? I am wondering if some extra whitespace or something similar is causing the issue.
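
One low-effort way to check for stray whitespace, without extra Nomad logging, is to dump the rendered file with non-printing characters made visible (the alloc ID and container ID below are placeholders, and cat -A assumes GNU coreutils):

# From the Nomad client, read the file out of the allocation directory:
nomad alloc fs <alloc-id> zk/local/data/zkid.env | cat -A

# Or from inside the running container (docker driver):
docker exec <container-id> cat -A /local/data/zkid.env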


lbachar commented Aug 8, 2019

I have a similar issue on Nomad 0.9.4 that happens sometimes and is not easy to reproduce - the template stanza listens to Consul service changes (not the task's own service) and should update both the file and the env vars. The file is created with the correct values, but Nomad doesn't attach the environment variables.

Template stanza for the kibana service:

template {
            data = <<EOH
                {{ range service "elasticsearch-logs" }}
                ELASTICSEARCH_URL="http://{{ .Address }}:{{ .Port }}"
                {{ end }}
            EOH
            destination = "local/elasticsearch.env"
            env         = true
        }

@langmartin (Contributor)

@lbachar thanks for the additional context! By design, consul-template substitution only blocks evaluation on key-value lookups; a range lookup returns immediately with whatever is available.

@angrycub has an example workaround that calls consul kv put after populating the range results. https://github.com/angrycub/nomad_example_jobs/tree/master/consul-template/coordination
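
A minimal sketch of that coordination idea, using an illustrative KV path zk/members: something outside the template (a script or sidecar task) publishes the discovered addresses to Consul KV, and the env template blocks on a key lookup, which waits for data, instead of a service range, which does not.

# Publish the member list once it is known (run from a script or sidecar):
consul kv put zk/members "10.0.0.1,10.0.0.2,10.0.0.3"

# Env template that blocks until the key exists in Consul KV:
template {
  data = <<EOH
ZK_MEMBERS="{{ key "zk/members" }}"
EOH

  destination = "local/zk-members.env"
  env         = true
}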

If the kibana template is set up with the default change_mode (restart), the task should restart every time the consul-template values change, updating the env. If it is and the env still isn't updated, we should open a new issue.

Thanks, Lang


lbachar commented Aug 11, 2019

@langmartin thanks for the quick response.
On Nomad 0.8.6 we see the values re-rendered according to the Consul service health checks.
Our template stanza has change_mode set to restart (the default value); consul-template works well here and renders the file local/elasticsearch.env, but the values are not set as env vars.
If I ssh into the Docker container I can see the file is rendered correctly with the service IP & port.

I tried to find similar issues here and I found those:
#3498
#3529

It looks like this was fixed before by the MergeTemplateEnv function, which I can't find in the codebase anymore.
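
Since the rendered file is expected to contain plain KEY=value lines, a quick sanity check (from inside the task container; sourcing it in a shell is not exactly Nomad's parser, but it catches malformed lines and stray characters) looks like this:

# Export everything the rendered file defines, then look for the variable:
set -a
. /local/elasticsearch.env
set +a
env | grep ELASTICSEARCH_URL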

Let me know if it's better to open a new issue instead of this one.

@langmartin (Contributor)

@lbachar Hey, sorry for the very slow response! It seems to me that #6112 is an open version of your template issue.


lbachar commented Aug 25, 2019

@langmartin Hi, indeed. I will follow #6112
Did you manage to reproduce the issue?

cgbaker (Contributor) commented Aug 27, 2019

@lbachar, yes, this issue has been reproduced and is being addressed.

Fuco1 (Contributor) commented Sep 29, 2019

This used to work on Nomad 0.9.0, but after updating to 0.9.5 I get the same result: the file is generated with the service data (from the range), but the process does not see it. However, in the allocation log I can see that the process was restarted due to a template change.
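
One way to confirm whether the restarted process actually received the variables (as opposed to the file merely being re-rendered) is to read the environment of the container's main process; the grep pattern is whatever variable prefix applies in your job:

# Inside the container: the environment PID 1 was started with, NUL-separated.
tr '\0' '\n' < /proc/1/environ | grep ZOO_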

cgbaker (Contributor) commented Oct 8, 2019

@Fuco1 , please follow #6112

@github-actions (bot)

I'm going to lock this issue because it has been closed for 120 days ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

github-actions bot locked as resolved and limited conversation to collaborators Nov 18, 2022