
Endpoint metrics are not displayed #6712

Closed
Infusible opened this issue Nov 15, 2019 · 19 comments


@Infusible

Nomad version

nomad version: 0.9.3

Operating system and Environment details

server: ubuntu 16
client: windows server 16

Issue

Telemetry is enabled on a Windows client, but the allocation metrics for my tasks are all zero.

Reproduction steps

I run my job and it runs, but there are no metrics. The metrics endpoint returns:

# HELP go_gc_duration_seconds A summary of the GC invocation durations.
# TYPE go_gc_duration_seconds summary
go_gc_duration_seconds{quantile="0"} 0
go_gc_duration_seconds{quantile="0.25"} 0
go_gc_duration_seconds{quantile="0.5"} 0
go_gc_duration_seconds{quantile="0.75"} 0
go_gc_duration_seconds{quantile="1"} 0.0034373
go_gc_duration_seconds_sum 0.2238819
go_gc_duration_seconds_count 4042
# HELP go_goroutines Number of goroutines that currently exist.
# TYPE go_goroutines gauge
go_goroutines 163
# HELP go_info Information about the Go environment.
# TYPE go_info gauge
go_info{version="go1.11.11"} 1
# HELP go_memstats_alloc_bytes Number of bytes allocated and still in use.
# TYPE go_memstats_alloc_bytes gauge
go_memstats_alloc_bytes 7.05904e+06
# HELP go_memstats_alloc_bytes_total Total number of bytes allocated, even if freed.
# TYPE go_memstats_alloc_bytes_total counter
go_memstats_alloc_bytes_total 1.6699625488e+10
# HELP go_memstats_buck_hash_sys_bytes Number of bytes used by the profiling bucket hash table.
# TYPE go_memstats_buck_hash_sys_bytes gauge
go_memstats_buck_hash_sys_bytes 2.107578e+06
# HELP go_memstats_frees_total Total number of frees.
# TYPE go_memstats_frees_total counter
go_memstats_frees_total 2.5496417e+08
# HELP go_memstats_gc_cpu_fraction The fraction of this program's available CPU time used by the GC since the program started.
# TYPE go_memstats_gc_cpu_fraction gauge
go_memstats_gc_cpu_fraction -3.889639412644311e-06
# HELP go_memstats_gc_sys_bytes Number of bytes used for garbage collection system metadata.
# TYPE go_memstats_gc_sys_bytes gauge
go_memstats_gc_sys_bytes 770048
# HELP go_memstats_heap_alloc_bytes Number of heap bytes allocated and still in use.
# TYPE go_memstats_heap_alloc_bytes gauge
go_memstats_heap_alloc_bytes 7.05904e+06
# HELP go_memstats_heap_idle_bytes Number of heap bytes waiting to be used.
# TYPE go_memstats_heap_idle_bytes gauge
go_memstats_heap_idle_bytes 3.997696e+06
# HELP go_memstats_heap_inuse_bytes Number of heap bytes that are in use.
# TYPE go_memstats_heap_inuse_bytes gauge
go_memstats_heap_inuse_bytes 1.06496e+07
# HELP go_memstats_heap_objects Number of allocated objects.
# TYPE go_memstats_heap_objects gauge
go_memstats_heap_objects 53665
# HELP go_memstats_heap_released_bytes Number of heap bytes released to OS.
# TYPE go_memstats_heap_released_bytes gauge
go_memstats_heap_released_bytes 589824
# HELP go_memstats_heap_sys_bytes Number of heap bytes obtained from system.
# TYPE go_memstats_heap_sys_bytes gauge
go_memstats_heap_sys_bytes 1.4647296e+07
# HELP go_memstats_last_gc_time_seconds Number of seconds since 1970 of last garbage collection.
# TYPE go_memstats_last_gc_time_seconds gauge
go_memstats_last_gc_time_seconds 1.573851380033981e+09
# HELP go_memstats_lookups_total Total number of pointer lookups.
# TYPE go_memstats_lookups_total counter
go_memstats_lookups_total 0
# HELP go_memstats_mallocs_total Total number of mallocs.
# TYPE go_memstats_mallocs_total counter
go_memstats_mallocs_total 2.55017835e+08
# HELP go_memstats_mcache_inuse_bytes Number of bytes in use by mcache structures.
# TYPE go_memstats_mcache_inuse_bytes gauge
go_memstats_mcache_inuse_bytes 6784
# HELP go_memstats_mcache_sys_bytes Number of bytes used for mcache structures obtained from system.
# TYPE go_memstats_mcache_sys_bytes gauge
go_memstats_mcache_sys_bytes 16384
# HELP go_memstats_mspan_inuse_bytes Number of bytes in use by mspan structures.
# TYPE go_memstats_mspan_inuse_bytes gauge
go_memstats_mspan_inuse_bytes 156560
# HELP go_memstats_mspan_sys_bytes Number of bytes used for mspan structures obtained from system.
# TYPE go_memstats_mspan_sys_bytes gauge
go_memstats_mspan_sys_bytes 196608
# HELP go_memstats_next_gc_bytes Number of heap bytes when next garbage collection will take place.
# TYPE go_memstats_next_gc_bytes gauge
go_memstats_next_gc_bytes 1.0885504e+07
# HELP go_memstats_other_sys_bytes Number of bytes used for other system allocations.
# TYPE go_memstats_other_sys_bytes gauge
go_memstats_other_sys_bytes 987198
# HELP go_memstats_stack_inuse_bytes Number of bytes in use by the stack allocator.
# TYPE go_memstats_stack_inuse_bytes gauge
go_memstats_stack_inuse_bytes 2.12992e+06
# HELP go_memstats_stack_sys_bytes Number of bytes obtained from system for stack allocator.
# TYPE go_memstats_stack_sys_bytes gauge
go_memstats_stack_sys_bytes 2.12992e+06
# HELP go_memstats_sys_bytes Number of bytes obtained from system.
# TYPE go_memstats_sys_bytes gauge
go_memstats_sys_bytes 2.0855032e+07
# HELP go_threads Number of OS threads created.
# TYPE go_threads gauge
go_threads 28
# HELP nomad_client_allocated_cpu nomad_client_allocated_cpu
# TYPE nomad_client_allocated_cpu gauge
nomad_client_allocated_cpu{datacenter="Stage",host="player-api-1",node_class="none",node_id="b294f165-3036-571f-d7e8-c10afb58bacd"} 4680
# HELP nomad_client_allocated_disk nomad_client_allocated_disk
# TYPE nomad_client_allocated_disk gauge
nomad_client_allocated_disk{datacenter="Stage",host="player-api-1",node_class="none",node_id="b294f165-3036-571f-d7e8-c10afb58bacd"} 600
# HELP nomad_client_allocated_memory nomad_client_allocated_memory
# TYPE nomad_client_allocated_memory gauge
nomad_client_allocated_memory{datacenter="Stage",host="player-api-1",node_class="none",node_id="b294f165-3036-571f-d7e8-c10afb58bacd"} 6000
# HELP nomad_client_allocated_network nomad_client_allocated_network
# TYPE nomad_client_allocated_network gauge
nomad_client_allocated_network{datacenter="Stage",device="Ethernet0",host="player-api-1",node_class="none",node_id="b294f165-3036-571f-d7e8-c10afb58bacd"} 200
# HELP nomad_client_allocations_blocked nomad_client_allocations_blocked
# TYPE nomad_client_allocations_blocked gauge
nomad_client_allocations_blocked{datacenter="Stage",host="player-api-1",node_class="none",node_id="b294f165-3036-571f-d7e8-c10afb58bacd"} 0
# HELP nomad_client_allocations_migrating nomad_client_allocations_migrating
# TYPE nomad_client_allocations_migrating gauge
nomad_client_allocations_migrating{datacenter="Stage",host="player-api-1",node_class="none",node_id="b294f165-3036-571f-d7e8-c10afb58bacd"} 0
# HELP nomad_client_allocations_pending nomad_client_allocations_pending
# TYPE nomad_client_allocations_pending gauge
nomad_client_allocations_pending{datacenter="Stage",host="player-api-1",node_class="none",node_id="b294f165-3036-571f-d7e8-c10afb58bacd"} 0
# HELP nomad_client_allocations_running nomad_client_allocations_running
# TYPE nomad_client_allocations_running gauge
nomad_client_allocations_running{datacenter="Stage",host="player-api-1",node_class="none",node_id="b294f165-3036-571f-d7e8-c10afb58bacd"} 2
# HELP nomad_client_allocations_terminal nomad_client_allocations_terminal
# TYPE nomad_client_allocations_terminal gauge
nomad_client_allocations_terminal{datacenter="Stage",host="player-api-1",node_class="none",node_id="b294f165-3036-571f-d7e8-c10afb58bacd"} 0
# HELP nomad_client_allocs_cpu_system nomad_client_allocs_cpu_system
# TYPE nomad_client_allocs_cpu_system gauge
nomad_client_allocs_cpu_system{alloc_id="8c75a774-02b9-b71c-7cb5-306e5acc32a4",host="player-api-1",job="player-api",namespace="",task="player-api",task_group="player-api"} 0
nomad_client_allocs_cpu_system{alloc_id="991c89c0-310a-0181-2eb3-a652213d9033",host="player-api-1",job="player-api",namespace="",task="player-api",task_group="player-api"} 0
# HELP nomad_client_allocs_cpu_throttled_periods nomad_client_allocs_cpu_throttled_periods
# TYPE nomad_client_allocs_cpu_throttled_periods gauge
nomad_client_allocs_cpu_throttled_periods{alloc_id="8c75a774-02b9-b71c-7cb5-306e5acc32a4",host="player-api-1",job="player-api",namespace="",task="player-api",task_group="player-api"} 0
nomad_client_allocs_cpu_throttled_periods{alloc_id="991c89c0-310a-0181-2eb3-a652213d9033",host="player-api-1",job="player-api",namespace="",task="player-api",task_group="player-api"} 0
# HELP nomad_client_allocs_cpu_throttled_time nomad_client_allocs_cpu_throttled_time
# TYPE nomad_client_allocs_cpu_throttled_time gauge
nomad_client_allocs_cpu_throttled_time{alloc_id="8c75a774-02b9-b71c-7cb5-306e5acc32a4",host="player-api-1",job="player-api",namespace="",task="player-api",task_group="player-api"} 0
nomad_client_allocs_cpu_throttled_time{alloc_id="991c89c0-310a-0181-2eb3-a652213d9033",host="player-api-1",job="player-api",namespace="",task="player-api",task_group="player-api"} 0
# HELP nomad_client_allocs_cpu_total_percent nomad_client_allocs_cpu_total_percent
# TYPE nomad_client_allocs_cpu_total_percent gauge
nomad_client_allocs_cpu_total_percent{alloc_id="8c75a774-02b9-b71c-7cb5-306e5acc32a4",host="player-api-1",job="player-api",namespace="",task="player-api",task_group="player-api"} 0
nomad_client_allocs_cpu_total_percent{alloc_id="991c89c0-310a-0181-2eb3-a652213d9033",host="player-api-1",job="player-api",namespace="",task="player-api",task_group="player-api"} 0
# HELP nomad_client_allocs_cpu_total_ticks nomad_client_allocs_cpu_total_ticks
# TYPE nomad_client_allocs_cpu_total_ticks gauge
nomad_client_allocs_cpu_total_ticks{alloc_id="8c75a774-02b9-b71c-7cb5-306e5acc32a4",host="player-api-1",job="player-api",namespace="",task="player-api",task_group="player-api"} 0
nomad_client_allocs_cpu_total_ticks{alloc_id="991c89c0-310a-0181-2eb3-a652213d9033",host="player-api-1",job="player-api",namespace="",task="player-api",task_group="player-api"} 0
# HELP nomad_client_allocs_cpu_user nomad_client_allocs_cpu_user
# TYPE nomad_client_allocs_cpu_user gauge
nomad_client_allocs_cpu_user{alloc_id="8c75a774-02b9-b71c-7cb5-306e5acc32a4",host="player-api-1",job="player-api",namespace="",task="player-api",task_group="player-api"} 0
nomad_client_allocs_cpu_user{alloc_id="991c89c0-310a-0181-2eb3-a652213d9033",host="player-api-1",job="player-api",namespace="",task="player-api",task_group="player-api"} 0
# HELP nomad_client_allocs_memory_allocated nomad_client_allocs_memory_allocated
# TYPE nomad_client_allocs_memory_allocated gauge
nomad_client_allocs_memory_allocated{alloc_id="8c75a774-02b9-b71c-7cb5-306e5acc32a4",host="player-api-1",job="player-api",namespace="",task="player-api",task_group="player-api"} 3.145728e+09
nomad_client_allocs_memory_allocated{alloc_id="991c89c0-310a-0181-2eb3-a652213d9033",host="player-api-1",job="player-api",namespace="",task="player-api",task_group="player-api"} 3.145728e+09
# HELP nomad_client_allocs_memory_cache nomad_client_allocs_memory_cache
# TYPE nomad_client_allocs_memory_cache gauge
nomad_client_allocs_memory_cache{alloc_id="8c75a774-02b9-b71c-7cb5-306e5acc32a4",host="player-api-1",job="player-api",namespace="",task="player-api",task_group="player-api"} 0
nomad_client_allocs_memory_cache{alloc_id="991c89c0-310a-0181-2eb3-a652213d9033",host="player-api-1",job="player-api",namespace="",task="player-api",task_group="player-api"} 0
# HELP nomad_client_allocs_memory_kernel_max_usage nomad_client_allocs_memory_kernel_max_usage
# TYPE nomad_client_allocs_memory_kernel_max_usage gauge
nomad_client_allocs_memory_kernel_max_usage{alloc_id="8c75a774-02b9-b71c-7cb5-306e5acc32a4",host="player-api-1",job="player-api",namespace="",task="player-api",task_group="player-api"} 0
nomad_client_allocs_memory_kernel_max_usage{alloc_id="991c89c0-310a-0181-2eb3-a652213d9033",host="player-api-1",job="player-api",namespace="",task="player-api",task_group="player-api"} 0
# HELP nomad_client_allocs_memory_kernel_usage nomad_client_allocs_memory_kernel_usage
# TYPE nomad_client_allocs_memory_kernel_usage gauge
nomad_client_allocs_memory_kernel_usage{alloc_id="8c75a774-02b9-b71c-7cb5-306e5acc32a4",host="player-api-1",job="player-api",namespace="",task="player-api",task_group="player-api"} 0
nomad_client_allocs_memory_kernel_usage{alloc_id="991c89c0-310a-0181-2eb3-a652213d9033",host="player-api-1",job="player-api",namespace="",task="player-api",task_group="player-api"} 0
# HELP nomad_client_allocs_memory_max_usage nomad_client_allocs_memory_max_usage
# TYPE nomad_client_allocs_memory_max_usage gauge
nomad_client_allocs_memory_max_usage{alloc_id="8c75a774-02b9-b71c-7cb5-306e5acc32a4",host="player-api-1",job="player-api",namespace="",task="player-api",task_group="player-api"} 0
nomad_client_allocs_memory_max_usage{alloc_id="991c89c0-310a-0181-2eb3-a652213d9033",host="player-api-1",job="player-api",namespace="",task="player-api",task_group="player-api"} 0
# HELP nomad_client_allocs_memory_rss nomad_client_allocs_memory_rss
# TYPE nomad_client_allocs_memory_rss gauge
nomad_client_allocs_memory_rss{alloc_id="8c75a774-02b9-b71c-7cb5-306e5acc32a4",host="player-api-1",job="player-api",namespace="",task="player-api",task_group="player-api"} 1.39792384e+08
nomad_client_allocs_memory_rss{alloc_id="991c89c0-310a-0181-2eb3-a652213d9033",host="player-api-1",job="player-api",namespace="",task="player-api",task_group="player-api"} 1.39706368e+08
# HELP nomad_client_allocs_memory_swap nomad_client_allocs_memory_swap
# TYPE nomad_client_allocs_memory_swap gauge
nomad_client_allocs_memory_swap{alloc_id="8c75a774-02b9-b71c-7cb5-306e5acc32a4",host="player-api-1",job="player-api",namespace="",task="player-api",task_group="player-api"} 0
nomad_client_allocs_memory_swap{alloc_id="991c89c0-310a-0181-2eb3-a652213d9033",host="player-api-1",job="player-api",namespace="",task="player-api",task_group="player-api"} 0
# HELP nomad_client_allocs_memory_usage nomad_client_allocs_memory_usage
# TYPE nomad_client_allocs_memory_usage gauge
nomad_client_allocs_memory_usage{alloc_id="8c75a774-02b9-b71c-7cb5-306e5acc32a4",host="player-api-1",job="player-api",namespace="",task="player-api",task_group="player-api"} 0
nomad_client_allocs_memory_usage{alloc_id="991c89c0-310a-0181-2eb3-a652213d9033",host="player-api-1",job="player-api",namespace="",task="player-api",task_group="player-api"} 0
# HELP nomad_client_host_cpu_idle nomad_client_host_cpu_idle
# TYPE nomad_client_host_cpu_idle gauge
nomad_client_host_cpu_idle{cpu="0,0",datacenter="Stage",host="player-api-1",node_class="none",node_id="b294f165-3036-571f-d7e8-c10afb58bacd"} 98
nomad_client_host_cpu_idle{cpu="0,1",datacenter="Stage",host="player-api-1",node_class="none",node_id="b294f165-3036-571f-d7e8-c10afb58bacd"} 85
nomad_client_host_cpu_idle{cpu="0,2",datacenter="Stage",host="player-api-1",node_class="none",node_id="b294f165-3036-571f-d7e8-c10afb58bacd"} 98
nomad_client_host_cpu_idle{cpu="0,3",datacenter="Stage",host="player-api-1",node_class="none",node_id="b294f165-3036-571f-d7e8-c10afb58bacd"} 92
nomad_client_host_cpu_idle{cpu="0,_Total",datacenter="Stage",host="player-api-1",node_class="none",node_id="b294f165-3036-571f-d7e8-c10afb58bacd"} 93
nomad_client_host_cpu_idle{cpu="_Total",datacenter="Stage",host="player-api-1",node_class="none",node_id="b294f165-3036-571f-d7e8-c10afb58bacd"} 93
# HELP nomad_client_host_cpu_system nomad_client_host_cpu_system
# TYPE nomad_client_host_cpu_system gauge
nomad_client_host_cpu_system{cpu="0,0",datacenter="Stage",host="player-api-1",node_class="none",node_id="b294f165-3036-571f-d7e8-c10afb58bacd"} 0
nomad_client_host_cpu_system{cpu="0,1",datacenter="Stage",host="player-api-1",node_class="none",node_id="b294f165-3036-571f-d7e8-c10afb58bacd"} 0
nomad_client_host_cpu_system{cpu="0,2",datacenter="Stage",host="player-api-1",node_class="none",node_id="b294f165-3036-571f-d7e8-c10afb58bacd"} 0
nomad_client_host_cpu_system{cpu="0,3",datacenter="Stage",host="player-api-1",node_class="none",node_id="b294f165-3036-571f-d7e8-c10afb58bacd"} 0
nomad_client_host_cpu_system{cpu="0,_Total",datacenter="Stage",host="player-api-1",node_class="none",node_id="b294f165-3036-571f-d7e8-c10afb58bacd"} 0
nomad_client_host_cpu_system{cpu="_Total",datacenter="Stage",host="player-api-1",node_class="none",node_id="b294f165-3036-571f-d7e8-c10afb58bacd"} 0
# HELP nomad_client_host_cpu_total nomad_client_host_cpu_total
# TYPE nomad_client_host_cpu_total gauge
nomad_client_host_cpu_total{cpu="0,0",datacenter="Stage",host="player-api-1",node_class="none",node_id="b294f165-3036-571f-d7e8-c10afb58bacd"} 98
nomad_client_host_cpu_total{cpu="0,1",datacenter="Stage",host="player-api-1",node_class="none",node_id="b294f165-3036-571f-d7e8-c10afb58bacd"} 97
nomad_client_host_cpu_total{cpu="0,2",datacenter="Stage",host="player-api-1",node_class="none",node_id="b294f165-3036-571f-d7e8-c10afb58bacd"} 98
nomad_client_host_cpu_total{cpu="0,3",datacenter="Stage",host="player-api-1",node_class="none",node_id="b294f165-3036-571f-d7e8-c10afb58bacd"} 98
nomad_client_host_cpu_total{cpu="0,_Total",datacenter="Stage",host="player-api-1",node_class="none",node_id="b294f165-3036-571f-d7e8-c10afb58bacd"} 97
nomad_client_host_cpu_total{cpu="_Total",datacenter="Stage",host="player-api-1",node_class="none",node_id="b294f165-3036-571f-d7e8-c10afb58bacd"} 97
# HELP nomad_client_host_cpu_user nomad_client_host_cpu_user
# TYPE nomad_client_host_cpu_user gauge
nomad_client_host_cpu_user{cpu="0,0",datacenter="Stage",host="player-api-1",node_class="none",node_id="b294f165-3036-571f-d7e8-c10afb58bacd"} 0
nomad_client_host_cpu_user{cpu="0,1",datacenter="Stage",host="player-api-1",node_class="none",node_id="b294f165-3036-571f-d7e8-c10afb58bacd"} 12
nomad_client_host_cpu_user{cpu="0,2",datacenter="Stage",host="player-api-1",node_class="none",node_id="b294f165-3036-571f-d7e8-c10afb58bacd"} 0
nomad_client_host_cpu_user{cpu="0,3",datacenter="Stage",host="player-api-1",node_class="none",node_id="b294f165-3036-571f-d7e8-c10afb58bacd"} 6
nomad_client_host_cpu_user{cpu="0,_Total",datacenter="Stage",host="player-api-1",node_class="none",node_id="b294f165-3036-571f-d7e8-c10afb58bacd"} 4
nomad_client_host_cpu_user{cpu="_Total",datacenter="Stage",host="player-api-1",node_class="none",node_id="b294f165-3036-571f-d7e8-c10afb58bacd"} 4
# HELP nomad_client_host_disk_available nomad_client_host_disk_available
# TYPE nomad_client_host_disk_available gauge
nomad_client_host_disk_available{datacenter="Stage",disk="C:",host="player-api-1",node_class="none",node_id="b294f165-3036-571f-d7e8-c10afb58bacd"} 7.795060736e+10
# HELP nomad_client_host_disk_inodes_percent nomad_client_host_disk_inodes_percent
# TYPE nomad_client_host_disk_inodes_percent gauge
nomad_client_host_disk_inodes_percent{datacenter="Stage",disk="C:",host="player-api-1",node_class="none",node_id="b294f165-3036-571f-d7e8-c10afb58bacd"} 0
# HELP nomad_client_host_disk_size nomad_client_host_disk_size
# TYPE nomad_client_host_disk_size gauge
nomad_client_host_disk_size{datacenter="Stage",disk="C:",host="player-api-1",node_class="none",node_id="b294f165-3036-571f-d7e8-c10afb58bacd"} 1.06846748672e+11
# HELP nomad_client_host_disk_used nomad_client_host_disk_used
# TYPE nomad_client_host_disk_used gauge
nomad_client_host_disk_used{datacenter="Stage",disk="C:",host="player-api-1",node_class="none",node_id="b294f165-3036-571f-d7e8-c10afb58bacd"} 2.8896137216e+10
# HELP nomad_client_host_disk_used_percent nomad_client_host_disk_used_percent
# TYPE nomad_client_host_disk_used_percent gauge
nomad_client_host_disk_used_percent{datacenter="Stage",disk="C:",host="player-api-1",node_class="none",node_id="b294f165-3036-571f-d7e8-c10afb58bacd"} 27.044471740722656
# HELP nomad_client_host_memory_available nomad_client_host_memory_available
# TYPE nomad_client_host_memory_available gauge
nomad_client_host_memory_available{datacenter="Stage",host="player-api-1",node_class="none",node_id="b294f165-3036-571f-d7e8-c10afb58bacd"} 4.479909888e+09
# HELP nomad_client_host_memory_free nomad_client_host_memory_free
# TYPE nomad_client_host_memory_free gauge
nomad_client_host_memory_free{datacenter="Stage",host="player-api-1",node_class="none",node_id="b294f165-3036-571f-d7e8-c10afb58bacd"} 0
# HELP nomad_client_host_memory_total nomad_client_host_memory_total
# TYPE nomad_client_host_memory_total gauge
nomad_client_host_memory_total{datacenter="Stage",host="player-api-1",node_class="none",node_id="b294f165-3036-571f-d7e8-c10afb58bacd"} 8.589328384e+09
# HELP nomad_client_host_memory_used nomad_client_host_memory_used
# TYPE nomad_client_host_memory_used gauge
nomad_client_host_memory_used{datacenter="Stage",host="player-api-1",node_class="none",node_id="b294f165-3036-571f-d7e8-c10afb58bacd"} 4.109418496e+09
# HELP nomad_client_unallocated_cpu nomad_client_unallocated_cpu
# TYPE nomad_client_unallocated_cpu gauge
nomad_client_unallocated_cpu{datacenter="Stage",host="player-api-1",node_class="none",node_id="b294f165-3036-571f-d7e8-c10afb58bacd"} 3720
# HELP nomad_client_unallocated_disk nomad_client_unallocated_disk
# TYPE nomad_client_unallocated_disk gauge
nomad_client_unallocated_disk{datacenter="Stage",host="player-api-1",node_class="none",node_id="b294f165-3036-571f-d7e8-c10afb58bacd"} 73505
# HELP nomad_client_unallocated_memory nomad_client_unallocated_memory
# TYPE nomad_client_unallocated_memory gauge
nomad_client_unallocated_memory{datacenter="Stage",host="player-api-1",node_class="none",node_id="b294f165-3036-571f-d7e8-c10afb58bacd"} 2191
# HELP nomad_client_unallocated_network nomad_client_unallocated_network
# TYPE nomad_client_unallocated_network gauge
nomad_client_unallocated_network{datacenter="Stage",device="Ethernet0",host="player-api-1",node_class="none",node_id="b294f165-3036-571f-d7e8-c10afb58bacd"} 800
# HELP nomad_client_uptime nomad_client_uptime
# TYPE nomad_client_uptime gauge
nomad_client_uptime{datacenter="Stage",host="player-api-1",node_class="none",node_id="b294f165-3036-571f-d7e8-c10afb58bacd"} 717098
# HELP nomad_runtime_alloc_bytes nomad_runtime_alloc_bytes
# TYPE nomad_runtime_alloc_bytes gauge
nomad_runtime_alloc_bytes{host="player-api-1"} 6.919096e+06
# HELP nomad_runtime_free_count nomad_runtime_free_count
# TYPE nomad_runtime_free_count gauge
nomad_runtime_free_count{host="player-api-1"} 2.54964032e+08
# HELP nomad_runtime_gc_pause_ns nomad_runtime_gc_pause_ns
# TYPE nomad_runtime_gc_pause_ns summary
nomad_runtime_gc_pause_ns{host="player-api-1",quantile="0.5"} 0
nomad_runtime_gc_pause_ns{host="player-api-1",quantile="0.9"} 0
nomad_runtime_gc_pause_ns{host="player-api-1",quantile="0.99"} 0
nomad_runtime_gc_pause_ns_sum{host="player-api-1"} 2.238819e+08
nomad_runtime_gc_pause_ns_count{host="player-api-1"} 4042
# HELP nomad_runtime_heap_objects nomad_runtime_heap_objects
# TYPE nomad_runtime_heap_objects gauge
nomad_runtime_heap_objects{host="player-api-1"} 52330
# HELP nomad_runtime_malloc_count nomad_runtime_malloc_count
# TYPE nomad_runtime_malloc_count gauge
nomad_runtime_malloc_count{host="player-api-1"} 2.55016368e+08
# HELP nomad_runtime_num_goroutines nomad_runtime_num_goroutines
# TYPE nomad_runtime_num_goroutines gauge
nomad_runtime_num_goroutines{host="player-api-1"} 155
# HELP nomad_runtime_sys_bytes nomad_runtime_sys_bytes
# TYPE nomad_runtime_sys_bytes gauge
nomad_runtime_sys_bytes{host="player-api-1"} 2.0855032e+07
# HELP nomad_runtime_total_gc_pause_ns nomad_runtime_total_gc_pause_ns
# TYPE nomad_runtime_total_gc_pause_ns gauge
nomad_runtime_total_gc_pause_ns{host="player-api-1"} 2.23881904e+08
# HELP nomad_runtime_total_gc_runs nomad_runtime_total_gc_runs
# TYPE nomad_runtime_total_gc_runs gauge
nomad_runtime_total_gc_runs{host="player-api-1"} 4042

My client config:

telemetry {
  publish_allocation_metrics = true
  publish_node_metrics       = true
  prometheus_metrics         = true
}
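With prometheus_metrics enabled, one quick check is to filter the Prometheus output for the nomad_client_allocs series, which should be non-empty whenever allocation metrics are being published. A minimal sketch; the here-doc payload below is a hypothetical stand-in for the live response from curl -s http://localhost:4646/v1/metrics?format=prometheus:

```shell
# Filter a Prometheus-format payload down to allocation metrics.
# The here-doc is sample data standing in for the live endpoint.
metrics=$(cat <<'EOF'
# HELP nomad_client_uptime nomad_client_uptime
nomad_client_uptime{host="player-api-1"} 717098
# HELP nomad_client_allocs_memory_rss nomad_client_allocs_memory_rss
nomad_client_allocs_memory_rss{task="player-api"} 1.39792384e+08
EOF
)
echo "$metrics" | grep '^nomad_client_allocs'
# prints: nomad_client_allocs_memory_rss{task="player-api"} 1.39792384e+08
```

If the filter prints nothing against the real endpoint, the client is not publishing allocation metrics at all; if it prints lines with zero values (as in this issue), publishing works but the task driver is not reporting stats.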

https://www.nomadproject.io/docs/telemetry/metrics.html

This data can also be viewed by sending a signal to the Nomad process: on Unix this is USR1, while on Windows it is BREAK. Once Nomad receives the signal, it will dump the current telemetry information to the agent's stderr.
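The signal mechanism described in the quoted docs can be sketched on Unix. The handler below prints a placeholder where Nomad's agent would dump its telemetry; this illustrates the USR1 plumbing only, not Nomad's actual handler:

```shell
# Install a SIGUSR1 handler, then signal this shell to trigger it.
# Against a real agent you would instead run: kill -USR1 <nomad-pid>
trap 'echo "== telemetry dump ==" >&2' USR1
kill -USR1 $$
```

Bash delivers the pending signal after the kill builtin returns, so the handler's output appears on stderr before the script exits.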

[Screenshot 2019-11-15 at 23:00:19]

and all my metrics are ZERO.

What should I do to get these metrics to appear?

@tgross
Member

tgross commented Nov 15, 2019

Hi @Infusible! Thanks for reporting this... I've been especially on the lookout for Windows issues recently!

It looks like your configuration is correct; the publish_allocation_metrics flag is what you want on the client. It looks like you're getting all the client metrics, so that bit is working at least. What task driver are you using?

@tgross tgross self-assigned this Nov 15, 2019
@tgross tgross added this to Needs Triage in Nomad - Community Issues Triage via automation Nov 15, 2019
@tgross tgross moved this from Needs Triage to Triaged in Nomad - Community Issues Triage Nov 15, 2019
@Infusible
Author

@tgross I use the raw_exec driver.

@Infusible
Author

Infusible commented Nov 16, 2019

For example, here is my job. This job is different from the one shown in the screenshot, but the result is the same: there are no metrics. My job definition uses the simplest job settings:

{
  "Stop": false,
  "Region": "global",
  "Namespace": "",
  "ID": "api",
  "ParentID": "",
  "Name": "api",
  "Type": "service",
  "Priority": 50,
  "AllAtOnce": false,
  "Datacenters": [
    "Stage"
  ],
  "Constraints": [
    {
      "LTarget": "${meta.team}",
      "RTarget": "api",
      "Operand": "="
    }
  ],
  "Affinities": null,
  "Spreads": null,
  "TaskGroups": [
    {
      "Name": "api",
      "Count": 2,
      "Update": {
        "Stagger": 30000000000,
        "MaxParallel": 1,
        "HealthCheck": "task_states",
        "MinHealthyTime": 30000000000,
        "HealthyDeadline": 60000000000,
        "ProgressDeadline": 0,
        "AutoRevert": false,
        "AutoPromote": false,
        "Canary": 0
      },
      "Migrate": null,
      "Constraints": null,
      "RestartPolicy": {
        "Attempts": 10,
        "Interval": 1800000000000,
        "Delay": 30000000000,
        "Mode": "fail"
      },
      "Tasks": [
        {
          "Name": "api",
          "Driver": "raw_exec",
          "User": "",
          "Config": {
            "args": [
              "-register:false"
            ],
            "command": "api.exe"
          },
          "Env": {
            "env": "test"
          },
          "Services": [
            {
              "Name": "${NOMAD_TASK_NAME}",
              "PortLabel": "HTTP",
              "AddressMode": "auto",
              "Tags": [
                "stage",
                "v1.0.16"
              ],
              "CanaryTags": null,
              "Checks": [
                {
                  "Name": "service: \"${NOMAD_TASK_NAME}\" check",
                  "Type": "http",
                  "Command": "",
                  "Args": null,
                  "Path": "/status",
                  "Protocol": "",
                  "PortLabel": "HTTP",
                  "AddressMode": "",
                  "Interval": 30000000000,
                  "Timeout": 10000000000,
                  "InitialStatus": "",
                  "TLSSkipVerify": false,
                  "Method": "",
                  "Header": null,
                  "CheckRestart": null,
                  "GRPCService": "",
                  "GRPCUseTLS": false
                }
              ]
            }
          ],
          "Vault": null,
          "Templates": null,
          "Constraints": null,
          "Affinities": null,
          "Resources": {
            "CPU": 2340,
            "MemoryMB": 3000,
            "DiskMB": 0,
            "IOPS": 0,
            "Networks": [
              {
                "Device": "",
                "CIDR": "",
                "IP": "",
                "MBits": 100,
                "ReservedPorts": null,
                "DynamicPorts": [
                  {
                    "Label": "HTTP",
                    "Value": 0
                  }
                ]
              }
            ],
            "Devices": null
          },
          "DispatchPayload": null,
          "Meta": null,
          "KillTimeout": 5000000000,
          "LogConfig": {
            "MaxFiles": 10,
            "MaxFileSizeMB": 10
          },
          "Artifacts": [
            {
              "GetterSource": "https://zip.dev.local/api-v1.0.16.zip",
              "GetterOptions": null,
              "GetterMode": "any",
              "RelativeDest": "local/"
            }
          ],
          "Leader": false,
          "ShutdownDelay": 0,
          "KillSignal": ""
        }
      ],
      "EphemeralDisk": {
        "Sticky": false,
        "SizeMB": 300,
        "Migrate": false
      },
      "Meta": null,
      "ReschedulePolicy": null,
      "Affinities": null,
      "Spreads": null
    }
  ],
  "Update": {
    "Stagger": 30000000000,
    "MaxParallel": 1,
    "HealthCheck": "",
    "MinHealthyTime": 0,
    "HealthyDeadline": 0,
    "ProgressDeadline": 0,
    "AutoRevert": false,
    "AutoPromote": false,
    "Canary": 0
  },
  "Periodic": null,
  "ParameterizedJob": null,
  "Dispatched": false,
  "Payload": null,
  "Meta": {
    "version": "v1.0.16",
    "environment": "stage"
  },
  "VaultToken": "",
  "Status": "running",
  "StatusDescription": "",
  "Stable": true,
  "Version": 45,
  "SubmitTime": 1573825812441338000,
  "CreateIndex": 1859178,
  "ModifyIndex": 1881547,
  "JobModifyIndex": 1881526
}

@Infusible
Author

@tgross any update?

@tgross tgross added this to the near-term milestone Nov 20, 2019
@tgross
Member

tgross commented Nov 20, 2019

@Infusible we'll let you know when we've had a chance to investigate. In the meantime, linking to #6349 which may be related.

@tgross tgross modified the milestones: near-term, 0.10.2 Nov 20, 2019
@tgross tgross moved this from Triaged to In Review in Nomad - Community Issues Triage Nov 20, 2019
@tgross tgross modified the milestones: 0.10.2, near-term Nov 20, 2019
@Infusible
Author

Thank you!

@tgross
Member

tgross commented Nov 21, 2019

Hey @Infusible just a heads up that I've verified that #6349 is working (on the soon-to-be-released 0.10.2). Here's my test outputs:

Test job

Running Docker containers on Windows 2016 is pretty painful because there's no support for Linux containers, and most of the public images Microsoft publishes are for 2019.

job "winsleep" {
  datacenters = ["dc1"]
  type        = "service"

  group "sleepy_win" {
    count = 1

    restart {
      attempts = 10
      interval = "5m"
      delay    = "25s"
      mode     = "delay"
    }

    constraint {
      attribute = "${attr.kernel.name}"
      operator  = "="
      value     = "windows"
    }

    task "sleepy" {
      driver = "docker"

      config {
        image = "mcr.microsoft.com/windows/servercore:ltsc2016"
        command = "powershell"
        args = ["-command", "Start-Sleep", "-s", "600"]
      }

      resources {
        cpu    = 500
        memory = 256
      }
    }
  }
}
Client metrics
PS C:\Users\Administrator> (Invoke-WebRequest -Uri http://localhost:4646/v1/metrics?format=prometheus).Content.ToString() | Write-Host

...
nomad_client_uptime{datacenter="dc1",node_class="none",node_id="e1b307f8-ff9d-08ff-2b5e-717dd0325412",node
_scheduling_eligibility="eligible",node_status="ready"} 8890
# HELP nomad_runtime_alloc_bytes nomad_runtime_alloc_bytes
# TYPE nomad_runtime_alloc_bytes gauge
nomad_runtime_alloc_bytes 5.837072e+06
# HELP nomad_runtime_free_count nomad_runtime_free_count
# TYPE nomad_runtime_free_count gauge
nomad_runtime_free_count 2.6426652e+07
# HELP nomad_runtime_gc_pause_ns nomad_runtime_gc_pause_ns
# TYPE nomad_runtime_gc_pause_ns summary
nomad_runtime_gc_pause_ns{quantile="0.5"} 0
nomad_runtime_gc_pause_ns{quantile="0.9"} 0
nomad_runtime_gc_pause_ns{quantile="0.99"} 0
nomad_runtime_gc_pause_ns_sum 1.40603e+07
nomad_runtime_gc_pause_ns_count 492
# HELP nomad_runtime_heap_objects nomad_runtime_heap_objects
# TYPE nomad_runtime_heap_objects gauge
nomad_runtime_heap_objects 40876
# HELP nomad_runtime_malloc_count nomad_runtime_malloc_count
# TYPE nomad_runtime_malloc_count gauge
nomad_runtime_malloc_count 2.6467528e+07
# HELP nomad_runtime_num_goroutines nomad_runtime_num_goroutines
# TYPE nomad_runtime_num_goroutines gauge
nomad_runtime_num_goroutines 141
# HELP nomad_runtime_sys_bytes nomad_runtime_sys_bytes
# TYPE nomad_runtime_sys_bytes gauge
nomad_runtime_sys_bytes 2.0069112e+07
# HELP nomad_runtime_total_gc_pause_ns nomad_runtime_total_gc_pause_ns
# TYPE nomad_runtime_total_gc_pause_ns gauge
nomad_runtime_total_gc_pause_ns 1.40603e+07
# HELP nomad_runtime_total_gc_runs nomad_runtime_total_gc_runs
# TYPE nomad_runtime_total_gc_runs gauge
nomad_runtime_total_gc_runs 492
...
etc, etc.
Allocation metrics
PS C:\Users\Administrator> (Invoke-WebRequest -Uri http://localhost:4646/v1/metrics?format=prometheus).Content.ToString() -split '\n' | Select-String "sleepy"

nomad_client_allocs_cpu_system{alloc_id="9a4a05ad-a975-6e1d-d644-a198f897a850",job="winsleep",namespace="default",task="sleepy",task_group="sleepy_win"} 0
nomad_client_allocs_cpu_throttled_periods{alloc_id="9a4a05ad-a975-6e1d-d644-a198f897a850",job="winsleep",namespace="default",task="sleepy",task_group="sleepy_win"} 0
nomad_client_allocs_cpu_throttled_time{alloc_id="9a4a05ad-a975-6e1d-d644-a198f897a850",job="winsleep",namespace="default",task="sleepy",task_group="sleepy_win"} 0
nomad_client_allocs_cpu_total_percent{alloc_id="9a4a05ad-a975-6e1d-d644-a198f897a850",job="winsleep",namespace="default",task="sleepy",task_group="sleepy_win"} 0
nomad_client_allocs_cpu_total_ticks{alloc_id="9a4a05ad-a975-6e1d-d644-a198f897a850",job="winsleep",namespace="default",task="sleepy",task_group="sleepy_win"} 0
nomad_client_allocs_cpu_user{alloc_id="9a4a05ad-a975-6e1d-d644-a198f897a850",job="winsleep",namespace="default",task="sleepy",task_group="sleepy_win"} 0
nomad_client_allocs_memory_allocated{alloc_id="9a4a05ad-a975-6e1d-d644-a198f897a850",job="winsleep",namespace="default",task="sleepy",task_group="sleepy_win"} 2.68435456e+08
nomad_client_allocs_memory_cache{alloc_id="9a4a05ad-a975-6e1d-d644-a198f897a850",job="winsleep",namespace="default",task="sleepy",task_group="sleepy_win"} 0
nomad_client_allocs_memory_kernel_max_usage{alloc_id="9a4a05ad-a975-6e1d-d644-a198f897a850",job="winsleep",namespace="default",task="sleepy",task_group="sleepy_win"} 0
nomad_client_allocs_memory_kernel_usage{alloc_id="9a4a05ad-a975-6e1d-d644-a198f897a850",job="winsleep",namespace="default",task="sleepy",task_group="sleepy_win"} 0
nomad_client_allocs_memory_max_usage{alloc_id="9a4a05ad-a975-6e1d-d644-a198f897a850",job="winsleep",namespace="default",task="sleepy",task_group="sleepy_win"} 1.46927616e+08
nomad_client_allocs_memory_rss{alloc_id="9a4a05ad-a975-6e1d-d644-a198f897a850",job="winsleep",namespace="default",task="sleepy",task_group="sleepy_win"} 9.7611776e+07
nomad_client_allocs_memory_swap{alloc_id="9a4a05ad-a975-6e1d-d644-a198f897a850",job="winsleep",namespace="default",task="sleepy",task_group="sleepy_win"} 0
nomad_client_allocs_memory_usage{alloc_id="9a4a05ad-a975-6e1d-d644-a198f897a850",job="winsleep",namespace="default",task="sleepy",task_group="sleepy_win"} 1.26980096e+08
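Side note for anyone finding this thread later: allocation metrics like the ones above are only exposed when the client agent's telemetry stanza enables them. A minimal sketch of the relevant agent config (option names from Nomad's telemetry configuration; the interval value here is just an example):

```hcl
telemetry {
  collection_interval        = "1s"
  disable_hostname           = true
  prometheus_metrics         = true
  publish_allocation_metrics = true
  publish_node_metrics       = true
}
```

Without publish_allocation_metrics = true, the nomad_client_allocs_* series won't be emitted at all.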

@Infusible
Author

Infusible commented Nov 21, 2019

Hi @tgross,
you started the service using driver = "docker", but I start mine using driver = "raw_exec", and no metrics are sent.

@tgross
Member

tgross commented Nov 21, 2019

Oh, good catch! Just checked that and we're looking good for the raw_exec driver as well.

raw_exec test job
job "infusible" {
  datacenters = ["dc1"]
  type        = "service"

  group "infusible" {
    count = 1

    constraint {
      attribute = "${attr.kernel.name}"
      operator  = "="
      value     = "windows"
    }

    task "infusible" {
      driver = "raw_exec"

      config {
        command = "powershell"
        args    = ["-command", "Start-Sleep", "-s", "600"]
      }
    }
  }
}
Allocation metrics
PS C:\Users\Administrator> (Invoke-WebRequest -Uri http://localhost:4646/v1/metrics?format=prometheus).Content.ToString() -split '\n' | Select-String "infusible"

nomad_client_allocs_cpu_system{alloc_id="e1b8eeb2-c5fd-5b34-936f-fd92ac1ad962",job="infusible",namespace="default",task="infusible",task_group="infusible"} 0
nomad_client_allocs_cpu_throttled_periods{alloc_id="e1b8eeb2-c5fd-5b34-936f-fd92ac1ad962",job="infusible",namespace="default",task="infusible",task_group="infusible"} 0
nomad_client_allocs_cpu_throttled_time{alloc_id="e1b8eeb2-c5fd-5b34-936f-fd92ac1ad962",job="infusible",namespace="default",task="infusible",task_group="infusible"} 0
nomad_client_allocs_cpu_total_percent{alloc_id="e1b8eeb2-c5fd-5b34-936f-fd92ac1ad962",job="infusible",namespace="default",task="infusible",task_group="infusible"} 0
nomad_client_allocs_cpu_total_ticks{alloc_id="e1b8eeb2-c5fd-5b34-936f-fd92ac1ad962",job="infusible",namespace="default",task="infusible",task_group="infusible"} 0
nomad_client_allocs_cpu_user{alloc_id="e1b8eeb2-c5fd-5b34-936f-fd92ac1ad962",job="infusible",namespace="default",task="infusible",task_group="infusible"} 0
nomad_client_allocs_memory_allocated{alloc_id="e1b8eeb2-c5fd-5b34-936f-fd92ac1ad962",job="infusible",namespace="default",task="infusible",task_group="infusible"} 3.145728e+08
nomad_client_allocs_memory_cache{alloc_id="e1b8eeb2-c5fd-5b34-936f-fd92ac1ad962",job="infusible",namespace="default",task="infusible",task_group="infusible"} 0
nomad_client_allocs_memory_kernel_max_usage{alloc_id="e1b8eeb2-c5fd-5b34-936f-fd92ac1ad962",job="infusible",namespace="default",task="infusible",task_group="infusible"} 0
nomad_client_allocs_memory_kernel_usage{alloc_id="e1b8eeb2-c5fd-5b34-936f-fd92ac1ad962",job="infusible",namespace="default",task="infusible",task_group="infusible"} 0
nomad_client_allocs_memory_max_usage{alloc_id="e1b8eeb2-c5fd-5b34-936f-fd92ac1ad962",job="infusible",namespace="default",task="infusible",task_group="infusible"} 0
nomad_client_allocs_memory_rss{alloc_id="e1b8eeb2-c5fd-5b34-936f-fd92ac1ad962",job="infusible",namespace="default",task="infusible",task_group="infusible"} 8.4320256e+07
nomad_client_allocs_memory_swap{alloc_id="e1b8eeb2-c5fd-5b34-936f-fd92ac1ad962",job="infusible",namespace="default",task="infusible",task_group="infusible"} 0
nomad_client_allocs_memory_usage{alloc_id="e1b8eeb2-c5fd-5b34-936f-fd92ac1ad962",job="infusible",namespace="default",task="infusible",task_group="infusible"} 0
nomad_client_allocs_running{alloc_id="e1b8eeb2-c5fd-5b34-936f-fd92ac1ad962",job="infusible",namespace="default",task="infusible",task_group="infusible"} 1

@Infusible
Author

And your job reported null memory usage (memory_usage = 0):
nomad_client_allocs_memory_usage{alloc_id="e1b8eeb2-c5fd-5b34-936f-fd92ac1ad962",job="infusible",namespace="default",task="infusible",task_group="infusible"} 0

That can't be right.

@Infusible
Author

I will check on version 0.10.2.

@tgross
Member

tgross commented Nov 21, 2019

Bah, that's my fault... I wasn't paying close enough attention. But that's a slightly different problem than I thought we were facing -- Nomad is exposing the metrics it has but not collecting them properly.

@Infusible
Author

But in the Nomad web UI my job showed metrics:
Screenshot 2019-11-21 at 21 51 46
:)

@tgross
Member

tgross commented Nov 21, 2019

Yeah, I think that was just a bad test; the whole point of "sleep" is that it doesn't use any resources. I ran a quick golang application that does CPU-intensive prime-number searching instead and got good results:

...
nomad_client_allocs_cpu_total_ticks{alloc_id="05c07a88-c21d-119c-2033-a3f0556deba2",job="primes",namespace="default",task="primes",task_group="primes"} 3769.5947265625
nomad_client_allocs_cpu_user{alloc_id="05c07a88-c21d-119c-2033-a3f0556deba2",job="primes",namespace="default",task="primes",task_group="primes"} 31.218175888061523
nomad_client_allocs_memory_allocated{alloc_id="05c07a88-c21d-119c-2033-a3f0556deba2",job="primes",namespace="default",task="primes",task_group="primes"} 3.145728e+08
...
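The prime-searching program itself isn't shown in the thread; here's a minimal sketch of an equivalent CPU-burning workload in Go (the structure, function names, and the 200000 limit are my own assumptions, not the actual test code). A real soak test would loop indefinitely so the allocation shows sustained CPU usage:

```go
package main

import "fmt"

// isPrime uses naive trial division -- deliberately inefficient,
// since the point of the workload is to burn CPU time.
func isPrime(n int) bool {
	if n < 2 {
		return false
	}
	for d := 2; d*d <= n; d++ {
		if n%d == 0 {
			return false
		}
	}
	return true
}

// primesBelow counts the primes strictly less than limit.
func primesBelow(limit int) int {
	count := 0
	for n := 2; n < limit; n++ {
		if isPrime(n) {
			count++
		}
	}
	return count
}

func main() {
	// Run one search pass; wrap this in an infinite loop to keep
	// the task busy until Nomad stops the allocation.
	fmt.Println("primes below 200000:", primesBelow(200000))
}
```

Run under driver = "raw_exec" (or any driver), this keeps a core busy long enough for nomad_client_allocs_cpu_* to report non-zero values.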

@Infusible
Author

Was the golang app run with driver = "raw_exec"?

@tgross
Member

tgross commented Nov 21, 2019

Yup!

@Infusible
Author

I will check version 0.10.2 and comment on this issue.

@tgross tgross modified the milestones: near-term, 0.10.2 Nov 21, 2019
@tgross tgross removed this from In Review in Nomad - Community Issues Triage Nov 21, 2019
@tgross
Member

tgross commented Nov 22, 2019

I'm going to close this as part of tracking our 0.10.2 completion. We expect to cut the release very soon. Feel free to re-open this if you find that 0.10.2 doesn't solve the problem.

@tgross tgross closed this as completed Nov 22, 2019
@github-actions

I'm going to lock this issue because it has been closed for 120 days ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@github-actions github-actions bot locked as resolved and limited conversation to collaborators Nov 16, 2022