
alloc logs returns 500 if alloc has been garbage collected #18216

Closed
johnnyplaydrums opened this issue Aug 15, 2023 · 6 comments · Fixed by #18232
Labels
stage/accepted (Confirmed, and intend to work on. No timeline commitment though.) theme/allocation API type/bug

Comments


johnnyplaydrums commented Aug 15, 2023

Long-time listener, second-time caller 😁. Nomad rocks, thank you for all you do. I tried to make the reproduction as simple as possible; let me know if there's anything I can improve on for next time. This is a minor bug and we're already working around it, so if it isn't worth investigating/resolving, we can close this.

Nomad version

Nomad v1.6.1

Operating system and Environment details

Ubuntu 22.04 and macOS 13.4.1

Issue

The v1/client/fs/logs/<alloc_id> endpoint, and the nomad alloc logs command, return a 500 if the allocation has been garbage collected by the client due to "new allocations and over max" (gc_max_allocs). Interestingly, if you then hit the v1/system/gc endpoint or run the nomad system gc CLI command, the 500 turns into a 404 (as expected).
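
For anyone hitting this through the official Go client rather than curl, here is a minimal sketch of how the error surfaces (assumptions on my part: the github.com/hashicorp/nomad/api package, a dev agent on the default address, and the GC'd alloc ID from above; adjust the task name to match your job):

package main

import (
	"fmt"
	"log"

	"github.com/hashicorp/nomad/api"
)

func main() {
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}

	// ID of an alloc the client has already garbage collected.
	allocID := "afb4c8c8-1d33-357b-2b43-602a07c2213d"

	// After a *server-side* GC (nomad system gc) this lookup fails
	// with the expected 404 "Unknown allocation".
	alloc, _, err := client.Allocations().Info(allocID, nil)
	if err != nil {
		log.Fatal(err)
	}

	// After a *client-only* GC the lookup still succeeds, but the log
	// stream errors out with the 500 described in this issue.
	cancel := make(chan struct{})
	frames, errCh := client.AllocFS().Logs(alloc, false, "test", "stdout", "start", 0, cancel, nil)
	select {
	case frame := <-frames:
		fmt.Printf("%s", frame.Data)
	case err := <-errCh:
		fmt.Println("logs error:", err) // Unexpected response code: 500 (...)
	}
}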

Reproduction steps

Run the nomad agent in dev mode with gc_max_allocs = 1 (for faster reproducibility)

nomad agent -dev -config=client.hcl

client.hcl contains

client {
  gc_max_allocs = 1
}

Then trigger at least 2 jobs/allocations so that gc_max_allocs = 1 triggers garbage collection. I used a parameterized job and dispatched it twice (job file below).

In the logs you will see something similar to this, referring to the first alloc that is now being garbage collected:

client.gc: garbage collecting allocation: alloc_id=afb4c8c8-1d33-357b-2b43-602a07c2213d reason="new allocations and over max (1)"

Then curl the logs endpoint for that allocation (or issue cli command), and you'll get a 500:

curl -i "http://127.0.0.1:4646/v1/client/fs/logs/afb4c8c8-1d33-357b-2b43-602a07c2213d?task=test&type=stdout"
HTTP/1.1 500 Internal Server Error
Content-Type: application/json
Vary: Accept-Encoding
Vary: Origin
Date: Tue, 15 Aug 2023 20:38:53 GMT
Content-Length: 182

failed to list entries: open /private/var/folders/q4/srtw1qzj1gv6261h2lnyx2440000gn/T/NomadClient1395255995/afb4c8c8-1d33-357b-2b43-602a07c2213d/alloc/logs: no such file or directory

We see this happen daily in our Nomad cluster running Nomad v1.5.3 on Ubuntu 22.04. We run a lot of batch/parameterized jobs and use the logs endpoint heavily to retrieve logs (so they're viewable in Airflow). I was able to reproduce it locally on macOS 13.4.1 with the latest Nomad version, v1.6.1.

Expected Result

We'd expect Nomad to return a 404 with the text Unknown allocation "afb4c8c8-1d33-357b-2b43-602a07c2213d".

Actual Result

We get a 500 from the API with the text
failed to list entries: open /private/var/folders/q4/srtw1qzj1gv6261h2lnyx2440000gn/T/NomadClient1395255995/afb4c8c8-1d33-357b-2b43-602a07c2213d/alloc/logs: no such file or directory.

Same thing if we use the nomad alloc logs command:

$ nomad alloc logs 067c2b24-25c3-dadc-16b5-edfde1621c40
Failed to read stdout file: error reading file: Unexpected response code: 500 (failed to list entries: open /private/var/folders/q4/srtw1qzj1gv6261h2lnyx2440000gn/T/NomadClient1395255995/afb4c8c8-1d33-357b-2b43-602a07c2213d/alloc/logs: no such file or directory)

In my local testing, I occasionally noticed that the first time I curl the logs endpoint I actually get a 200, but every request after that returns the 500. Most of the time I get a 500 right away, so if you get a 200 on the first request, query the logs API a second time.

Job file (if appropriate)

This will happen with any job file but I used this for reproducing the issue:

job "test" {
  datacenters = ["dc1"]
  type = "batch"
  parameterized {}
  group "test" {
    count = 1
    task "test" {
      driver = "raw_exec"
      config {
        command = "echo"
        args = ["hello world"]
      }
    }
  }
}

Nomad Server & Client (dev) logs

This is the full log output when reproducing the issue locally. You'll see the 500 error at the bottom. I ran this about 5 times (just to be sure) and got the same results every time.

$ nomad agent -dev -config=client.hcl
==> Loaded configuration from client.hcl
==> Starting Nomad agent...
==> Nomad agent configuration:

       Advertise Addrs: HTTP: 127.0.0.1:4646; RPC: 127.0.0.1:4647; Serf: 127.0.0.1:4648
            Bind Addrs: HTTP: [127.0.0.1:4646]; RPC: 127.0.0.1:4647; Serf: 127.0.0.1:4648
                Client: true
             Log Level: DEBUG
               Node Id: 562882b9-45c3-408a-9f6f-a40365455fad
                Region: global (DC: dc1)
                Server: true
               Version: 1.6.1

==> Nomad agent started! Log data will stream in below:

    2023-08-15T14:09:16.816-0700 [INFO]  nomad.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:fc7708b5-80c9-8136-9d58-e69089c28145 Address:127.0.0.1:4647}]"
    2023-08-15T14:09:16.816-0700 [INFO]  nomad.raft: entering follower state: follower="Node at 127.0.0.1:4647 [Follower]" leader-address= leader-id=
    2023-08-15T14:09:16.816-0700 [INFO]  nomad: serf: EventMemberJoin: Johns-MacBook-Pro.local.global 127.0.0.1
    2023-08-15T14:09:16.816-0700 [INFO]  nomad: starting scheduling worker(s): num_workers=10 schedulers=["system", "sysbatch", "service", "batch", "_core"]
    2023-08-15T14:09:16.816-0700 [DEBUG] nomad: started scheduling worker: id=b9a1d7c2-f600-d278-06f8-22c21cdcfeee index=1 of=10
    2023-08-15T14:09:16.816-0700 [DEBUG] nomad: started scheduling worker: id=d8aaf5f3-45bc-81a4-3b20-5243d59e8f66 index=2 of=10
    2023-08-15T14:09:16.816-0700 [DEBUG] nomad: started scheduling worker: id=38d6251d-9924-2302-f8c2-93b1f3748c37 index=3 of=10
    2023-08-15T14:09:16.816-0700 [DEBUG] nomad: started scheduling worker: id=404e2ad7-3c90-ac60-04d1-9a1dd031e62a index=4 of=10
    2023-08-15T14:09:16.816-0700 [DEBUG] nomad: started scheduling worker: id=fdc8a31a-41f7-920a-a3ee-d3d918ca3956 index=5 of=10
    2023-08-15T14:09:16.816-0700 [DEBUG] nomad: started scheduling worker: id=403dd22e-e39e-1076-b7c9-0fb26295e019 index=6 of=10
    2023-08-15T14:09:16.816-0700 [DEBUG] worker: running: worker_id=b9a1d7c2-f600-d278-06f8-22c21cdcfeee
    2023-08-15T14:09:16.817-0700 [DEBUG] worker: running: worker_id=38d6251d-9924-2302-f8c2-93b1f3748c37
    2023-08-15T14:09:16.817-0700 [DEBUG] worker: running: worker_id=fdc8a31a-41f7-920a-a3ee-d3d918ca3956
    2023-08-15T14:09:16.817-0700 [DEBUG] worker: running: worker_id=403dd22e-e39e-1076-b7c9-0fb26295e019
    2023-08-15T14:09:16.816-0700 [DEBUG] nomad: started scheduling worker: id=eb48b0a8-b3ef-fecd-7f7c-6124ce871cc6 index=7 of=10
    2023-08-15T14:09:16.817-0700 [DEBUG] nomad: started scheduling worker: id=dd48e3d7-2770-fd0a-4524-027995db0060 index=8 of=10
    2023-08-15T14:09:16.817-0700 [DEBUG] worker: running: worker_id=d8aaf5f3-45bc-81a4-3b20-5243d59e8f66
    2023-08-15T14:09:16.817-0700 [DEBUG] nomad: started scheduling worker: id=1dd00502-317f-8d71-0a7f-5c15aa8627a5 index=9 of=10
    2023-08-15T14:09:16.817-0700 [DEBUG] nomad: started scheduling worker: id=0f8a086b-d631-1c1c-5bb3-40600bf504d4 index=10 of=10
    2023-08-15T14:09:16.817-0700 [INFO]  nomad: started scheduling worker(s): num_workers=10 schedulers=["system", "sysbatch", "service", "batch", "_core"]
    2023-08-15T14:09:16.817-0700 [DEBUG] worker: running: worker_id=1dd00502-317f-8d71-0a7f-5c15aa8627a5
    2023-08-15T14:09:16.817-0700 [DEBUG] worker: running: worker_id=dd48e3d7-2770-fd0a-4524-027995db0060
    2023-08-15T14:09:16.817-0700 [DEBUG] worker: running: worker_id=404e2ad7-3c90-ac60-04d1-9a1dd031e62a
    2023-08-15T14:09:16.816-0700 [DEBUG] worker: running: worker_id=eb48b0a8-b3ef-fecd-7f7c-6124ce871cc6
    2023-08-15T14:09:16.817-0700 [DEBUG] worker: running: worker_id=0f8a086b-d631-1c1c-5bb3-40600bf504d4
    2023-08-15T14:09:16.817-0700 [INFO]  nomad: adding server: server="Johns-MacBook-Pro.local.global (Addr: 127.0.0.1:4647) (DC: dc1)"
    2023-08-15T14:09:16.817-0700 [DEBUG] nomad.keyring.replicator: starting encryption key replication
    2023-08-15T14:09:16.817-0700 [DEBUG] agent.plugin_loader.docker: using client connection initialized from environment: plugin_dir=""
    2023-08-15T14:09:16.817-0700 [INFO]  agent: detected plugin: name=exec type=driver plugin_version=0.1.0
    2023-08-15T14:09:16.817-0700 [INFO]  agent: detected plugin: name=qemu type=driver plugin_version=0.1.0
    2023-08-15T14:09:16.817-0700 [INFO]  agent: detected plugin: name=java type=driver plugin_version=0.1.0
    2023-08-15T14:09:16.817-0700 [INFO]  agent: detected plugin: name=docker type=driver plugin_version=0.1.0
    2023-08-15T14:09:16.817-0700 [INFO]  agent: detected plugin: name=raw_exec type=driver plugin_version=0.1.0
    2023-08-15T14:09:16.817-0700 [INFO]  client: using state directory: state_dir=/private/var/folders/q4/srtw1qzj1gv6261h2lnyx2440000gn/T/NomadClient2860287618
    2023-08-15T14:09:16.818-0700 [INFO]  client: using alloc directory: alloc_dir=/private/var/folders/q4/srtw1qzj1gv6261h2lnyx2440000gn/T/NomadClient547579027
    2023-08-15T14:09:16.818-0700 [INFO]  client: using dynamic ports: min=20000 max=32000 reserved=""
    2023-08-15T14:09:16.839-0700 [DEBUG] client.fingerprint_mgr: built-in fingerprints: fingerprinters=["arch", "cni", "consul", "cpu", "host", "landlock", "memory", "network", "nomad", "plugins_cni", "signal", "storage", "vault", "env_aws", "env_gce", "env_azure", "env_digitalocean"]
    2023-08-15T14:09:16.839-0700 [DEBUG] client.fingerprint_mgr: CNI config dir is not set or does not exist, skipping: cni_config_dir=/opt/cni/config
    2023-08-15T14:09:16.840-0700 [DEBUG] client.fingerprint_mgr: fingerprinting periodically: fingerprinter=consul initial_period=15s
    2023-08-15T14:09:16.840-0700 [DEBUG] client.fingerprint_mgr.cpu: detected CPU model: name="Apple M1 Max"
    2023-08-15T14:09:16.840-0700 [DEBUG] client.fingerprint_mgr.cpu: detected CPU efficiency core speed: mhz=2064
    2023-08-15T14:09:16.840-0700 [DEBUG] client.fingerprint_mgr.cpu: detected CPU power core speed: mhz=3228
    2023-08-15T14:09:16.840-0700 [DEBUG] client.fingerprint_mgr.cpu: detected CPU efficiency core count: cores=2
    2023-08-15T14:09:16.840-0700 [DEBUG] client.fingerprint_mgr.cpu: detected CPU power core count: cores=8
    2023-08-15T14:09:16.860-0700 [WARN]  client.fingerprint_mgr.landlock: failed to fingerprint kernel landlock feature: error="landlock not supported on this platform"
    2023-08-15T14:09:16.860-0700 [DEBUG] client.fingerprint_mgr.network: link speed could not be detected and no speed specified by user, falling back to default speed: interface=lo0 mbits=1000
    2023-08-15T14:09:16.860-0700 [DEBUG] client.fingerprint_mgr.network: detected interface IP: interface=lo0 IP=127.0.0.1
    2023-08-15T14:09:16.860-0700 [DEBUG] client.fingerprint_mgr.network: detected interface IP: interface=lo0 IP=::1
    2023-08-15T14:09:16.860-0700 [DEBUG] client.fingerprint_mgr.network: link speed could not be detected, falling back to default speed: interface=lo0 mbits=1000
    2023-08-15T14:09:16.860-0700 [DEBUG] client.fingerprint_mgr.network: link speed could not be detected, falling back to default speed: interface=gif0 mbits=1000
    2023-08-15T14:09:16.860-0700 [DEBUG] client.fingerprint_mgr.network: link speed could not be detected, falling back to default speed: interface=stf0 mbits=1000
    2023-08-15T14:09:16.860-0700 [DEBUG] client.fingerprint_mgr.network: link speed could not be detected, falling back to default speed: interface=anpi2 mbits=1000
    2023-08-15T14:09:16.860-0700 [DEBUG] client.fingerprint_mgr.network: link speed could not be detected, falling back to default speed: interface=anpi0 mbits=1000
    2023-08-15T14:09:16.860-0700 [DEBUG] client.fingerprint_mgr.network: link speed could not be detected, falling back to default speed: interface=anpi1 mbits=1000
    2023-08-15T14:09:16.860-0700 [DEBUG] client.fingerprint_mgr.network: link speed could not be detected, falling back to default speed: interface=en4 mbits=1000
    2023-08-15T14:09:16.860-0700 [DEBUG] client.fingerprint_mgr.network: link speed could not be detected, falling back to default speed: interface=en5 mbits=1000
    2023-08-15T14:09:16.860-0700 [DEBUG] client.fingerprint_mgr.network: link speed could not be detected, falling back to default speed: interface=en6 mbits=1000
    2023-08-15T14:09:16.860-0700 [DEBUG] client.fingerprint_mgr.network: link speed could not be detected, falling back to default speed: interface=en1 mbits=1000
    2023-08-15T14:09:16.860-0700 [DEBUG] client.fingerprint_mgr.network: link speed could not be detected, falling back to default speed: interface=en2 mbits=1000
    2023-08-15T14:09:16.860-0700 [DEBUG] client.fingerprint_mgr.network: link speed could not be detected, falling back to default speed: interface=en3 mbits=1000
    2023-08-15T14:09:16.860-0700 [DEBUG] client.fingerprint_mgr.network: link speed could not be detected, falling back to default speed: interface=bridge0 mbits=1000
    2023-08-15T14:09:16.860-0700 [DEBUG] client.fingerprint_mgr.network: link speed could not be detected, falling back to default speed: interface=ap1 mbits=1000
    2023-08-15T14:09:16.860-0700 [DEBUG] client.fingerprint_mgr.network: link speed could not be detected, falling back to default speed: interface=en0 mbits=1000
    2023-08-15T14:09:16.860-0700 [DEBUG] client.fingerprint_mgr.network: link speed could not be detected, falling back to default speed: interface=awdl0 mbits=1000
    2023-08-15T14:09:16.860-0700 [DEBUG] client.fingerprint_mgr.network: link speed could not be detected, falling back to default speed: interface=llw0 mbits=1000
    2023-08-15T14:09:16.860-0700 [DEBUG] client.fingerprint_mgr.network: link speed could not be detected, falling back to default speed: interface=utun0 mbits=1000
    2023-08-15T14:09:16.860-0700 [DEBUG] client.fingerprint_mgr.network: link speed could not be detected, falling back to default speed: interface=utun1 mbits=1000
    2023-08-15T14:09:16.860-0700 [DEBUG] client.fingerprint_mgr.network: link speed could not be detected, falling back to default speed: interface=utun2 mbits=1000
    2023-08-15T14:09:16.860-0700 [DEBUG] client.fingerprint_mgr.network: link speed could not be detected, falling back to default speed: interface=utun3 mbits=1000
    2023-08-15T14:09:16.860-0700 [DEBUG] client.fingerprint_mgr.network: link speed could not be detected, falling back to default speed: interface=utun4 mbits=1000
    2023-08-15T14:09:16.860-0700 [DEBUG] client.fingerprint_mgr.network: link speed could not be detected, falling back to default speed: interface=utun5 mbits=1000
    2023-08-15T14:09:16.860-0700 [WARN]  client.fingerprint_mgr.cni_plugins: failed to read CNI plugins directory: cni_path=/opt/cni/bin error="open /opt/cni/bin: no such file or directory"
    2023-08-15T14:09:16.862-0700 [DEBUG] client.fingerprint_mgr: fingerprinting periodically: fingerprinter=vault initial_period=15s
    2023-08-15T14:09:16.865-0700 [DEBUG] client.fingerprint_mgr.env_gce: could not read value for attribute: attribute=machine-type error="Get \"http://169.254.169.254/computeMetadata/v1/instance/machine-type\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
    2023-08-15T14:09:16.865-0700 [DEBUG] client.fingerprint_mgr.env_gce: error querying GCE Metadata URL, skipping
    2023-08-15T14:09:16.867-0700 [DEBUG] client.fingerprint_mgr.env_azure: could not read value for attribute: attribute=compute/azEnvironment error="Get \"http://169.254.169.254/metadata/instance/compute/azEnvironment?api-version=2019-06-04&format=text\": dial tcp 169.254.169.254:80: i/o timeout (Client.Timeout exceeded while awaiting headers)"
    2023-08-15T14:09:16.868-0700 [DEBUG] client.fingerprint_mgr.env_digitalocean: failed to request metadata: attribute=region error="Get \"http://169.254.169.254/metadata/v1/region\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
    2023-08-15T14:09:16.868-0700 [DEBUG] client.fingerprint_mgr: detected fingerprints: node_attrs=["arch", "cpu", "host", "network", "nomad", "signal", "storage"]
    2023-08-15T14:09:16.868-0700 [INFO]  client.plugin: starting plugin manager: plugin-type=csi
    2023-08-15T14:09:16.868-0700 [INFO]  client.plugin: starting plugin manager: plugin-type=driver
    2023-08-15T14:09:16.868-0700 [INFO]  client.plugin: starting plugin manager: plugin-type=device
    2023-08-15T14:09:16.868-0700 [DEBUG] client.device_mgr: exiting since there are no device plugins
    2023-08-15T14:09:16.868-0700 [DEBUG] client.driver_mgr: initial driver fingerprint: driver=exec health=undetected description="exec driver unsupported on client OS"
    2023-08-15T14:09:16.868-0700 [DEBUG] client.driver_mgr: initial driver fingerprint: driver=qemu health=undetected description=""
    2023-08-15T14:09:16.868-0700 [DEBUG] client.plugin: waiting on plugin manager initial fingerprint: plugin-type=driver
    2023-08-15T14:09:16.868-0700 [DEBUG] client.plugin: waiting on plugin manager initial fingerprint: plugin-type=device
    2023-08-15T14:09:16.868-0700 [DEBUG] client.plugin: finished plugin manager initial fingerprint: plugin-type=device
    2023-08-15T14:09:16.869-0700 [DEBUG] client.driver_mgr.docker: using client connection initialized from environment: driver=docker
    2023-08-15T14:09:16.869-0700 [DEBUG] client.driver_mgr: initial driver fingerprint: driver=raw_exec health=healthy description=Healthy
    2023-08-15T14:09:16.869-0700 [ERROR] client.driver_mgr.docker: failed to list pause containers for recovery: driver=docker error="Get \"http://unix.sock/containers/json?filters=%7B%22label%22%3A%5B%22com.hashicorp.nomad.alloc_id%22%5D%7D\": dial unix /var/run/docker.sock: connect: no such file or directory"
    2023-08-15T14:09:16.869-0700 [DEBUG] client.driver_mgr.docker: could not connect to docker daemon: driver=docker endpoint=unix:///var/run/docker.sock error="Get \"http://unix.sock/version\": dial unix /var/run/docker.sock: connect: no such file or directory"
    2023-08-15T14:09:16.869-0700 [DEBUG] client.driver_mgr: initial driver fingerprint: driver=docker health=undetected description="Failed to connect to docker daemon"
    2023-08-15T14:09:16.870-0700 [DEBUG] client.server_mgr: new server list: new_servers=[127.0.0.1:4647] old_servers=[]
    2023-08-15T14:09:16.876-0700 [DEBUG] client.driver_mgr: initial driver fingerprint: driver=java health=undetected description=""
    2023-08-15T14:09:16.876-0700 [DEBUG] client.driver_mgr: detected drivers: drivers="map[healthy:[raw_exec] undetected:[qemu exec docker java]]"
    2023-08-15T14:09:16.876-0700 [DEBUG] client.plugin: finished plugin manager initial fingerprint: plugin-type=driver
    2023-08-15T14:09:16.876-0700 [INFO]  client: started client: node_id=0964c1e4-7302-47bf-e300-11f8a11990bf
    2023-08-15T14:09:16.876-0700 [DEBUG] http: UI is enabled
    2023-08-15T14:09:16.876-0700 [DEBUG] http: UI is enabled
    2023-08-15T14:09:18.036-0700 [WARN]  nomad.raft: heartbeat timeout reached, starting election: last-leader-addr= last-leader-id=
    2023-08-15T14:09:18.036-0700 [INFO]  nomad.raft: entering candidate state: node="Node at 127.0.0.1:4647 [Candidate]" term=2
    2023-08-15T14:09:18.036-0700 [DEBUG] nomad.raft: voting for self: term=2 id=fc7708b5-80c9-8136-9d58-e69089c28145
    2023-08-15T14:09:18.036-0700 [DEBUG] nomad.raft: calculated votes needed: needed=1 term=2
    2023-08-15T14:09:18.036-0700 [DEBUG] nomad.raft: vote granted: from=fc7708b5-80c9-8136-9d58-e69089c28145 term=2 tally=1
    2023-08-15T14:09:18.036-0700 [INFO]  nomad.raft: election won: term=2 tally=1
    2023-08-15T14:09:18.036-0700 [INFO]  nomad.raft: entering leader state: leader="Node at 127.0.0.1:4647 [Leader]"
    2023-08-15T14:09:18.037-0700 [INFO]  nomad: cluster leadership acquired
    2023-08-15T14:09:18.041-0700 [DEBUG] nomad.autopilot: autopilot is now running
    2023-08-15T14:09:18.041-0700 [DEBUG] nomad.autopilot: state update routine is now running
    2023-08-15T14:09:18.041-0700 [INFO]  nomad.core: established cluster id: cluster_id=812897c2-4f47-b1d5-315b-cdc1e7f1441e create_time=1692133758041637000
    2023-08-15T14:09:18.041-0700 [INFO]  nomad: eval broker status modified: paused=false
    2023-08-15T14:09:18.041-0700 [INFO]  nomad: blocked evals status modified: paused=false
    2023-08-15T14:09:18.046-0700 [INFO]  nomad.keyring: initialized keyring: id=43516bae-87f8-64b8-dc3a-f02dea915544
    2023-08-15T14:09:18.072-0700 [INFO]  client: node registration complete
    2023-08-15T14:09:18.073-0700 [DEBUG] client: updated allocations: index=1 total=0 pulled=0 filtered=0
    2023-08-15T14:09:18.073-0700 [DEBUG] client: allocation updates: added=0 removed=0 updated=0 ignored=0
    2023-08-15T14:09:18.073-0700 [DEBUG] client: allocation updates applied: added=0 removed=0 updated=0 ignored=0 errors=0
    2023-08-15T14:09:18.073-0700 [DEBUG] client: state updated: node_status=ready
    2023-08-15T14:09:19.074-0700 [DEBUG] client: state changed, updating node and re-registering
    2023-08-15T14:09:19.079-0700 [INFO]  client: node registration complete
    2023-08-15T14:09:29.608-0700 [DEBUG] http: request complete: method=GET path=/v1/regions duration="150.375µs"
    2023-08-15T14:09:29.609-0700 [DEBUG] http: request complete: method=GET path=/v1/operator/license duration="2µs"
    2023-08-15T14:09:29.610-0700 [DEBUG] http: request failed: method=GET path=/v1/acl/token/self error="RPC Error:: 400,ACL support disabled" code=400
    2023-08-15T14:09:29.611-0700 [DEBUG] http: request complete: method=GET path=/v1/acl/token/self duration=1.485792ms
    2023-08-15T14:09:29.611-0700 [DEBUG] http: request complete: method=POST path=/v1/search/fuzzy duration=2.117125ms
    2023-08-15T14:09:29.611-0700 [DEBUG] http: request complete: method=GET path=/v1/agent/members duration=1.709333ms
    2023-08-15T14:09:29.643-0700 [DEBUG] http: request complete: method=GET path=/v1/jobs?meta=true duration="162.292µs"
    2023-08-15T14:09:29.643-0700 [DEBUG] http: request complete: method=GET path=/v1/namespaces duration="122.083µs"
    2023-08-15T14:09:29.644-0700 [DEBUG] http: request complete: method=GET path=/v1/node/pools duration="107.708µs"
    2023-08-15T14:09:29.651-0700 [DEBUG] http: request complete: method=GET path=/v1/agent/self duration="866.792µs"
    2023-08-15T14:09:29.692-0700 [DEBUG] http: request complete: method=GET path=/v1/namespaces duration="240.667µs"
    2023-08-15T14:09:30.560-0700 [DEBUG] http: request complete: method=GET path=/v1/namespaces duration="542.667µs"
    2023-08-15T14:09:31.639-0700 [DEBUG] http: request complete: method=POST path="/v1/jobs/parse?namespace=*" duration=6.00825ms
    2023-08-15T14:09:31.648-0700 [DEBUG] nomad.job.batch_sched: reconciled current state with desired state: eval_id=e7373130-59dd-73c2-a0e1-db99d82f8f0b job_id=test namespace=default
  results=
  | Total changes: (place 1) (destructive 0) (inplace 0) (stop 0) (disconnect 0) (reconnect 0)
  | Desired Changes for "test": (place 1) (inplace 0) (destructive 0) (stop 0) (migrate 0) (ignore 0) (canary 0)

    2023-08-15T14:09:31.649-0700 [DEBUG] nomad.job.batch_sched: setting eval status: eval_id=e7373130-59dd-73c2-a0e1-db99d82f8f0b job_id=test namespace=default status=complete
    2023-08-15T14:09:31.660-0700 [DEBUG] http: request complete: method=POST path=/v1/job/test/plan duration=15.425667ms
    2023-08-15T14:09:32.284-0700 [DEBUG] http: request complete: method=POST path=/v1/jobs duration=2.91725ms
    2023-08-15T14:09:32.285-0700 [DEBUG] http: request complete: method=GET path="/v1/jobs?meta=true&index=1" duration=2.601318791s
    2023-08-15T14:09:32.316-0700 [DEBUG] http: request complete: method=GET path=/v1/job/test duration="303.959µs"
    2023-08-15T14:09:32.326-0700 [DEBUG] http: request complete: method=GET path=/v1/namespaces duration="708.75µs"
    2023-08-15T14:09:32.327-0700 [DEBUG] http: request complete: method=GET path="/v1/jobs?meta=true&namespace=default" duration="383.209µs"
    2023-08-15T14:09:32.327-0700 [DEBUG] http: request complete: method=GET path=/v1/job/test/evaluations duration="189.917µs"
    2023-08-15T14:09:32.327-0700 [DEBUG] http: request complete: method=GET path=/v1/job/test/allocations duration="172.083µs"
    2023-08-15T14:09:32.352-0700 [DEBUG] http: request complete: method=GET path=/v1/job/test/summary?index=1 duration="265.792µs"
    2023-08-15T14:09:32.353-0700 [DEBUG] http: request complete: method=GET path="/v1/vars?prefix=nomad%2Fjobs%2Ftest" duration="493.167µs"
    2023-08-15T14:09:35.031-0700 [DEBUG] http: request complete: method=GET path="/v1/jobs?meta=true&namespace=default&index=11" duration=2.695465917s
    2023-08-15T14:09:35.031-0700 [DEBUG] http: request complete: method=GET path=/v1/job/test/evaluations?index=1 duration=2.697964042s
    2023-08-15T14:09:35.031-0700 [DEBUG] http: request complete: method=POST path=/v1/job/test/dispatch duration=4.110375ms
    2023-08-15T14:09:35.032-0700 [DEBUG] worker: dequeued evaluation: worker_id=1dd00502-317f-8d71-0a7f-5c15aa8627a5 eval_id=ea512a4d-2d7f-000d-b539-92442d588d7b type=batch namespace=default job_id=test/dispatch-1692133775-94d4480e node_id="" triggered_by=job-register
    2023-08-15T14:09:35.032-0700 [DEBUG] worker.batch_sched: reconciled current state with desired state: eval_id=ea512a4d-2d7f-000d-b539-92442d588d7b job_id=test/dispatch-1692133775-94d4480e namespace=default worker_id=1dd00502-317f-8d71-0a7f-5c15aa8627a5
  results=
  | Total changes: (place 1) (destructive 0) (inplace 0) (stop 0) (disconnect 0) (reconnect 0)
  | Desired Changes for "test": (place 1) (inplace 0) (destructive 0) (stop 0) (migrate 0) (ignore 0) (canary 0)

    2023-08-15T14:09:35.037-0700 [DEBUG] worker: submitted plan for evaluation: worker_id=1dd00502-317f-8d71-0a7f-5c15aa8627a5 eval_id=ea512a4d-2d7f-000d-b539-92442d588d7b
    2023-08-15T14:09:35.037-0700 [DEBUG] worker.batch_sched: setting eval status: eval_id=ea512a4d-2d7f-000d-b539-92442d588d7b job_id=test/dispatch-1692133775-94d4480e namespace=default worker_id=1dd00502-317f-8d71-0a7f-5c15aa8627a5 status=complete
    2023-08-15T14:09:35.038-0700 [DEBUG] worker: updated evaluation: worker_id=1dd00502-317f-8d71-0a7f-5c15aa8627a5 eval="<Eval \"ea512a4d-2d7f-000d-b539-92442d588d7b\" JobID: \"test/dispatch-1692133775-94d4480e\" Namespace: \"default\">"
    2023-08-15T14:09:35.039-0700 [DEBUG] worker: ack evaluation: worker_id=1dd00502-317f-8d71-0a7f-5c15aa8627a5 eval_id=ea512a4d-2d7f-000d-b539-92442d588d7b type=batch namespace=default job_id=test/dispatch-1692133775-94d4480e node_id="" triggered_by=job-register
    2023-08-15T14:09:35.038-0700 [DEBUG] http: request complete: method=GET path=/v1/job/test/allocations?index=1 duration=2.704991417s
    2023-08-15T14:09:35.044-0700 [DEBUG] http: request complete: method=GET path="/v1/job/test%2Fdispatch-1692133775-94d4480e" duration=4.229542ms
    2023-08-15T14:09:35.044-0700 [DEBUG] client: updated allocations: index=14 total=1 pulled=1 filtered=0
    2023-08-15T14:09:35.044-0700 [DEBUG] client: allocation updates: added=1 removed=0 updated=0 ignored=0
    2023-08-15T14:09:35.044-0700 [INFO]  client.alloc_runner.task_runner: Task event: alloc_id=067c2b24-25c3-dadc-16b5-edfde1621c40 task=test type=Received msg="Task received by client" failed=false
    2023-08-15T14:09:35.045-0700 [DEBUG] client: allocation updates applied: added=1 removed=0 updated=0 ignored=0 errors=0
    2023-08-15T14:09:35.046-0700 [DEBUG] client.alloc_runner.task_runner: lifecycle start condition has been met, proceeding: alloc_id=067c2b24-25c3-dadc-16b5-edfde1621c40 task=test
    2023-08-15T14:09:35.047-0700 [INFO]  client.alloc_runner.task_runner: Task event: alloc_id=067c2b24-25c3-dadc-16b5-edfde1621c40 task=test type="Task Setup" msg="Building Task Directory" failed=false
    2023-08-15T14:09:35.048-0700 [DEBUG] client.alloc_runner.task_runner.task_hook.logmon: starting plugin: alloc_id=067c2b24-25c3-dadc-16b5-edfde1621c40 task=test path=/Users/johnspencer/.asdf/installs/nomad/1.6.1/bin/nomad args=["/Users/johnspencer/.asdf/installs/nomad/1.6.1/bin/nomad", "logmon"]
    2023-08-15T14:09:35.050-0700 [DEBUG] client.alloc_runner.task_runner.task_hook.logmon: plugin started: alloc_id=067c2b24-25c3-dadc-16b5-edfde1621c40 task=test path=/Users/johnspencer/.asdf/installs/nomad/1.6.1/bin/nomad pid=43927
    2023-08-15T14:09:35.050-0700 [DEBUG] client.alloc_runner.task_runner.task_hook.logmon: waiting for RPC address: alloc_id=067c2b24-25c3-dadc-16b5-edfde1621c40 task=test path=/Users/johnspencer/.asdf/installs/nomad/1.6.1/bin/nomad
    2023-08-15T14:09:35.053-0700 [DEBUG] http: request complete: method=GET path="/v1/job/test%2Fdispatch-1692133775-94d4480e/allocations" duration="481.5µs"
    2023-08-15T14:09:35.053-0700 [DEBUG] http: request complete: method=GET path="/v1/job/test%2Fdispatch-1692133775-94d4480e/evaluations" duration="392.75µs"
    2023-08-15T14:09:35.053-0700 [DEBUG] http: request complete: method=GET path=/v1/namespaces duration="284.5µs"
    2023-08-15T14:09:35.053-0700 [DEBUG] http: request complete: method=GET path="/v1/jobs?meta=true&namespace=default" duration="331.083µs"
    2023-08-15T14:09:35.067-0700 [DEBUG] client.alloc_runner.task_runner.task_hook.logmon: using plugin: alloc_id=067c2b24-25c3-dadc-16b5-edfde1621c40 task=test version=2
    2023-08-15T14:09:35.067-0700 [DEBUG] client.alloc_runner.task_runner.task_hook.logmon.nomad: plugin address: alloc_id=067c2b24-25c3-dadc-16b5-edfde1621c40 task=test @module=logmon address=/var/folders/q4/srtw1qzj1gv6261h2lnyx2440000gn/T/plugin2256790035 network=unix timestamp=2023-08-15T14:09:35.067-0700
    2023-08-15T14:09:35.068-0700 [DEBUG] client.alloc_runner.task_runner.task_hook.logmon.nomad: opening fifo: alloc_id=067c2b24-25c3-dadc-16b5-edfde1621c40 task=test path=/private/var/folders/q4/srtw1qzj1gv6261h2lnyx2440000gn/T/NomadClient547579027/067c2b24-25c3-dadc-16b5-edfde1621c40/alloc/logs/.test.stdout.fifo @module=logmon timestamp=2023-08-15T14:09:35.068-0700
    2023-08-15T14:09:35.069-0700 [DEBUG] client.alloc_runner.task_runner.task_hook.logmon.nomad: opening fifo: alloc_id=067c2b24-25c3-dadc-16b5-edfde1621c40 task=test path=/private/var/folders/q4/srtw1qzj1gv6261h2lnyx2440000gn/T/NomadClient547579027/067c2b24-25c3-dadc-16b5-edfde1621c40/alloc/logs/.test.stderr.fifo @module=logmon timestamp=2023-08-15T14:09:35.069-0700
    2023-08-15T14:09:35.070-0700 [WARN]  client.alloc_runner.task_runner.task_hook.api: error creating task api socket: alloc_id=067c2b24-25c3-dadc-16b5-edfde1621c40 task=test path=/private/var/folders/q4/srtw1qzj1gv6261h2lnyx2440000gn/T/NomadClient547579027/067c2b24-25c3-dadc-16b5-edfde1621c40/test/secrets/api.sock error="listen unix /private/var/folders/q4/srtw1qzj1gv6261h2lnyx2440000gn/T/NomadClient547579027/067c2b24-25c3-dadc-16b5-edfde1621c40/test/secrets/api.sock: bind: invalid argument"
    2023-08-15T14:09:35.071-0700 [INFO]  client.driver_mgr.raw_exec: starting task: driver=raw_exec driver_cfg="{Command:echo Args:[hello world]}"
    2023-08-15T14:09:35.071-0700 [DEBUG] client.driver_mgr.raw_exec.executor: starting plugin: alloc_id=067c2b24-25c3-dadc-16b5-edfde1621c40 driver=raw_exec task_name=test path=/Users/johnspencer/.asdf/installs/nomad/1.6.1/bin/nomad args=["/Users/johnspencer/.asdf/installs/nomad/1.6.1/bin/nomad", "executor", "{\"LogFile\":\"/private/var/folders/q4/srtw1qzj1gv6261h2lnyx2440000gn/T/NomadClient547579027/067c2b24-25c3-dadc-16b5-edfde1621c40/test/executor.out\",\"LogLevel\":\"debug\",\"FSIsolation\":false,\"CpuTotalTicks\":29952}"]
    2023-08-15T14:09:35.073-0700 [DEBUG] client.driver_mgr.raw_exec.executor: plugin started: alloc_id=067c2b24-25c3-dadc-16b5-edfde1621c40 driver=raw_exec task_name=test path=/Users/johnspencer/.asdf/installs/nomad/1.6.1/bin/nomad pid=43928
    2023-08-15T14:09:35.073-0700 [DEBUG] client.driver_mgr.raw_exec.executor: waiting for RPC address: alloc_id=067c2b24-25c3-dadc-16b5-edfde1621c40 driver=raw_exec task_name=test path=/Users/johnspencer/.asdf/installs/nomad/1.6.1/bin/nomad
    2023-08-15T14:09:35.080-0700 [DEBUG] http: request complete: method=GET path="/v1/job/test%2Fdispatch-1692133775-94d4480e/summary?index=1" duration="438.333µs"
    2023-08-15T14:09:35.081-0700 [DEBUG] http: request complete: method=GET path="/v1/vars?prefix=nomad%2Fjobs%2Ftest" duration="299.125µs"
    2023-08-15T14:09:35.081-0700 [DEBUG] http: request complete: method=GET path="/v1/job/test%2Fdispatch-1692133775-94d4480e/deployment" duration="324.875µs"
    2023-08-15T14:09:35.090-0700 [DEBUG] client.driver_mgr.raw_exec.executor: using plugin: alloc_id=067c2b24-25c3-dadc-16b5-edfde1621c40 driver=raw_exec task_name=test version=2
    2023-08-15T14:09:35.094-0700 [INFO]  client.alloc_runner.task_runner: Task event: alloc_id=067c2b24-25c3-dadc-16b5-edfde1621c40 task=test type=Started msg="Task started by client" failed=false
    2023-08-15T14:09:35.094-0700 [INFO]  client.alloc_runner.task_runner: Task event: alloc_id=067c2b24-25c3-dadc-16b5-edfde1621c40 task=test type=Terminated msg="Exit Code: 0" failed=false
    2023-08-15T14:09:35.095-0700 [DEBUG] client.driver_mgr.raw_exec.executor.stdio: received EOF, stopping recv loop: alloc_id=067c2b24-25c3-dadc-16b5-edfde1621c40 driver=raw_exec task_name=test err="rpc error: code = Unavailable desc = error reading from server: EOF"
    2023-08-15T14:09:35.096-0700 [INFO]  client.driver_mgr.raw_exec.executor: plugin process exited: alloc_id=067c2b24-25c3-dadc-16b5-edfde1621c40 driver=raw_exec task_name=test path=/Users/johnspencer/.asdf/installs/nomad/1.6.1/bin/nomad pid=43928
    2023-08-15T14:09:35.096-0700 [DEBUG] client.driver_mgr.raw_exec.executor: plugin exited: alloc_id=067c2b24-25c3-dadc-16b5-edfde1621c40 driver=raw_exec task_name=test
    2023-08-15T14:09:35.096-0700 [INFO]  client.alloc_runner.task_runner: not restarting task: alloc_id=067c2b24-25c3-dadc-16b5-edfde1621c40 task=test reason="Restart unnecessary as task terminated successfully"
    2023-08-15T14:09:35.096-0700 [DEBUG] client.alloc_runner.task_runner.task_hook.logmon.stdio: received EOF, stopping recv loop: alloc_id=067c2b24-25c3-dadc-16b5-edfde1621c40 task=test err="rpc error: code = Unavailable desc = error reading from server: EOF"
    2023-08-15T14:09:35.097-0700 [INFO]  client.alloc_runner.task_runner.task_hook.logmon: plugin process exited: alloc_id=067c2b24-25c3-dadc-16b5-edfde1621c40 task=test path=/Users/johnspencer/.asdf/installs/nomad/1.6.1/bin/nomad pid=43927
    2023-08-15T14:09:35.097-0700 [DEBUG] client.alloc_runner.task_runner.task_hook.logmon: plugin exited: alloc_id=067c2b24-25c3-dadc-16b5-edfde1621c40 task=test
    2023-08-15T14:09:35.097-0700 [DEBUG] client.alloc_runner.task_runner: task run loop exiting: alloc_id=067c2b24-25c3-dadc-16b5-edfde1621c40 task=test
    2023-08-15T14:09:35.097-0700 [INFO]  client.gc: marking allocation for GC: alloc_id=067c2b24-25c3-dadc-16b5-edfde1621c40
    2023-08-15T14:09:35.105-0700 [DEBUG] http: request complete: method=GET path=/v1/node/0964c1e4-7302-47bf-e300-11f8a11990bf duration="310.583µs"
    2023-08-15T14:09:35.984-0700 [DEBUG] http: request complete: method=GET path="/v1/job/test%2Fdispatch-1692133775-94d4480e/allocations?index=14" duration=924.213125ms
    2023-08-15T14:09:35.985-0700 [DEBUG] http: request complete: method=GET path="/v1/job/test%2Fdispatch-1692133775-94d4480e?index=14" duration=906.109792ms
    2023-08-15T14:09:35.985-0700 [DEBUG] client: updated allocations: index=16 total=1 pulled=0 filtered=1
    2023-08-15T14:09:35.986-0700 [DEBUG] client: allocation updates: added=0 removed=0 updated=0 ignored=1
    2023-08-15T14:09:35.986-0700 [DEBUG] client: allocation updates applied: added=0 removed=0 updated=0 ignored=1 errors=0
    2023-08-15T14:09:36.795-0700 [DEBUG] http: request complete: method=GET path=/v1/job/test/allocations?index=1 duration="364.042µs"
    2023-08-15T14:09:36.795-0700 [DEBUG] http: request complete: method=GET path="/v1/jobs?meta=true&namespace=default&index=14" duration="450.791µs"
    2023-08-15T14:09:36.796-0700 [DEBUG] http: request complete: method=GET path=/v1/job/test/evaluations?index=1 duration="296.208µs"
    2023-08-15T14:09:36.817-0700 [DEBUG] http: request complete: method=GET path=/v1/job/test/summary?index=11 duration="490.333µs"
    2023-08-15T14:09:38.575-0700 [DEBUG] http: request complete: method=POST path=/v1/job/test/dispatch duration=20.598333ms
    2023-08-15T14:09:38.575-0700 [DEBUG] worker: dequeued evaluation: worker_id=0f8a086b-d631-1c1c-5bb3-40600bf504d4 eval_id=89cdc818-e699-717a-73a5-99ef9d2a1ea6 type=batch namespace=default job_id=test/dispatch-1692133778-35feb78a node_id="" triggered_by=job-register
    2023-08-15T14:09:38.575-0700 [DEBUG] worker.batch_sched: reconciled current state with desired state: eval_id=89cdc818-e699-717a-73a5-99ef9d2a1ea6 job_id=test/dispatch-1692133778-35feb78a namespace=default worker_id=0f8a086b-d631-1c1c-5bb3-40600bf504d4
  results=
  | Total changes: (place 1) (destructive 0) (inplace 0) (stop 0) (disconnect 0) (reconnect 0)
  | Desired Changes for "test": (place 1) (inplace 0) (destructive 0) (stop 0) (migrate 0) (ignore 0) (canary 0)

    2023-08-15T14:09:38.577-0700 [DEBUG] http: request complete: method=GET path="/v1/job/test%2Fdispatch-1692133775-94d4480e/evaluations?index=15" duration=3.516826792s
    2023-08-15T14:09:38.577-0700 [DEBUG] worker: submitted plan for evaluation: worker_id=0f8a086b-d631-1c1c-5bb3-40600bf504d4 eval_id=89cdc818-e699-717a-73a5-99ef9d2a1ea6
    2023-08-15T14:09:38.578-0700 [DEBUG] worker.batch_sched: setting eval status: eval_id=89cdc818-e699-717a-73a5-99ef9d2a1ea6 job_id=test/dispatch-1692133778-35feb78a namespace=default worker_id=0f8a086b-d631-1c1c-5bb3-40600bf504d4 status=complete
    2023-08-15T14:09:38.578-0700 [DEBUG] worker: updated evaluation: worker_id=0f8a086b-d631-1c1c-5bb3-40600bf504d4 eval="<Eval \"89cdc818-e699-717a-73a5-99ef9d2a1ea6\" JobID: \"test/dispatch-1692133778-35feb78a\" Namespace: \"default\">"
    2023-08-15T14:09:38.578-0700 [DEBUG] worker: ack evaluation: worker_id=0f8a086b-d631-1c1c-5bb3-40600bf504d4 eval_id=89cdc818-e699-717a-73a5-99ef9d2a1ea6 type=batch namespace=default job_id=test/dispatch-1692133778-35feb78a node_id="" triggered_by=job-register
    2023-08-15T14:09:38.578-0700 [DEBUG] client: updated allocations: index=19 total=2 pulled=1 filtered=1
    2023-08-15T14:09:38.578-0700 [DEBUG] client: allocation updates: added=1 removed=0 updated=0 ignored=1
    2023-08-15T14:09:38.578-0700 [INFO]  client.gc: garbage collecting allocation: alloc_id=067c2b24-25c3-dadc-16b5-edfde1621c40 reason="new allocations and over max (1)"
    2023-08-15T14:09:38.578-0700 [INFO]  client.alloc_runner.task_runner: Task event: alloc_id=067c2b24-25c3-dadc-16b5-edfde1621c40 task=test type=Killing msg="Sent interrupt. Waiting 5s before force killing" failed=false
    2023-08-15T14:09:38.578-0700 [INFO]  client.gc: marking allocation for GC: alloc_id=067c2b24-25c3-dadc-16b5-edfde1621c40
    2023-08-15T14:09:38.580-0700 [DEBUG] client.gc: alloc garbage collected: alloc_id=067c2b24-25c3-dadc-16b5-edfde1621c40
    2023-08-15T14:09:38.580-0700 [INFO]  client.alloc_runner.task_runner: Task event: alloc_id=739c0a5b-0570-02c3-4c12-420f36bae019 task=test type=Received msg="Task received by client" failed=false
    2023-08-15T14:09:38.580-0700 [DEBUG] client: allocation updates applied: added=1 removed=0 updated=0 ignored=1 errors=0
    2023-08-15T14:09:38.580-0700 [DEBUG] client.alloc_runner.task_runner: lifecycle start condition has been met, proceeding: alloc_id=739c0a5b-0570-02c3-4c12-420f36bae019 task=test
    2023-08-15T14:09:38.581-0700 [INFO]  client.alloc_runner.task_runner: Task event: alloc_id=739c0a5b-0570-02c3-4c12-420f36bae019 task=test type="Task Setup" msg="Building Task Directory" failed=false
    2023-08-15T14:09:38.581-0700 [DEBUG] client.alloc_runner.task_runner.task_hook.logmon: starting plugin: alloc_id=739c0a5b-0570-02c3-4c12-420f36bae019 task=test path=/Users/johnspencer/.asdf/installs/nomad/1.6.1/bin/nomad args=["/Users/johnspencer/.asdf/installs/nomad/1.6.1/bin/nomad", "logmon"]
    2023-08-15T14:09:38.584-0700 [DEBUG] http: request complete: method=GET path="/v1/job/test%2Fdispatch-1692133778-35feb78a" duration=1.010625ms
    2023-08-15T14:09:38.584-0700 [DEBUG] client.alloc_runner.task_runner.task_hook.logmon: plugin started: alloc_id=739c0a5b-0570-02c3-4c12-420f36bae019 task=test path=/Users/johnspencer/.asdf/installs/nomad/1.6.1/bin/nomad pid=43931
    2023-08-15T14:09:38.584-0700 [DEBUG] client.alloc_runner.task_runner.task_hook.logmon: waiting for RPC address: alloc_id=739c0a5b-0570-02c3-4c12-420f36bae019 task=test path=/Users/johnspencer/.asdf/installs/nomad/1.6.1/bin/nomad
    2023-08-15T14:09:38.588-0700 [DEBUG] http: request complete: method=GET path="/v1/job/test%2Fdispatch-1692133778-35feb78a/allocations" duration="349.583µs"
    2023-08-15T14:09:38.588-0700 [DEBUG] http: request complete: method=GET path="/v1/job/test%2Fdispatch-1692133778-35feb78a/evaluations" duration="299.042µs"
    2023-08-15T14:09:38.592-0700 [DEBUG] http: request complete: method=GET path=/v1/namespaces duration="263.875µs"
    2023-08-15T14:09:38.592-0700 [DEBUG] http: request complete: method=GET path="/v1/jobs?meta=true&namespace=default" duration="343.25µs"
    2023-08-15T14:09:38.603-0700 [DEBUG] client.alloc_runner.task_runner.task_hook.logmon: using plugin: alloc_id=739c0a5b-0570-02c3-4c12-420f36bae019 task=test version=2
    2023-08-15T14:09:38.603-0700 [DEBUG] client.alloc_runner.task_runner.task_hook.logmon.nomad: plugin address: alloc_id=739c0a5b-0570-02c3-4c12-420f36bae019 task=test @module=logmon address=/var/folders/q4/srtw1qzj1gv6261h2lnyx2440000gn/T/plugin3776088770 network=unix timestamp=2023-08-15T14:09:38.603-0700
    2023-08-15T14:09:38.604-0700 [DEBUG] client.alloc_runner.task_runner.task_hook.logmon.nomad: opening fifo: alloc_id=739c0a5b-0570-02c3-4c12-420f36bae019 task=test path=/private/var/folders/q4/srtw1qzj1gv6261h2lnyx2440000gn/T/NomadClient547579027/739c0a5b-0570-02c3-4c12-420f36bae019/alloc/logs/.test.stdout.fifo @module=logmon timestamp=2023-08-15T14:09:38.604-0700
    2023-08-15T14:09:38.605-0700 [DEBUG] client.alloc_runner.task_runner.task_hook.logmon.nomad: opening fifo: alloc_id=739c0a5b-0570-02c3-4c12-420f36bae019 task=test @module=logmon path=/private/var/folders/q4/srtw1qzj1gv6261h2lnyx2440000gn/T/NomadClient547579027/739c0a5b-0570-02c3-4c12-420f36bae019/alloc/logs/.test.stderr.fifo timestamp=2023-08-15T14:09:38.605-0700
    2023-08-15T14:09:38.605-0700 [WARN]  client.alloc_runner.task_runner.task_hook.api: error creating task api socket: alloc_id=739c0a5b-0570-02c3-4c12-420f36bae019 task=test path=/private/var/folders/q4/srtw1qzj1gv6261h2lnyx2440000gn/T/NomadClient547579027/739c0a5b-0570-02c3-4c12-420f36bae019/test/secrets/api.sock error="listen unix /private/var/folders/q4/srtw1qzj1gv6261h2lnyx2440000gn/T/NomadClient547579027/739c0a5b-0570-02c3-4c12-420f36bae019/test/secrets/api.sock: bind: invalid argument"
    2023-08-15T14:09:38.605-0700 [INFO]  client.driver_mgr.raw_exec: starting task: driver=raw_exec driver_cfg="{Command:echo Args:[hello world]}"
    2023-08-15T14:09:38.606-0700 [DEBUG] client.driver_mgr.raw_exec.executor: starting plugin: alloc_id=739c0a5b-0570-02c3-4c12-420f36bae019 driver=raw_exec task_name=test path=/Users/johnspencer/.asdf/installs/nomad/1.6.1/bin/nomad args=["/Users/johnspencer/.asdf/installs/nomad/1.6.1/bin/nomad", "executor", "{\"LogFile\":\"/private/var/folders/q4/srtw1qzj1gv6261h2lnyx2440000gn/T/NomadClient547579027/739c0a5b-0570-02c3-4c12-420f36bae019/test/executor.out\",\"LogLevel\":\"debug\",\"FSIsolation\":false,\"CpuTotalTicks\":29952}"]
    2023-08-15T14:09:38.607-0700 [DEBUG] client.driver_mgr.raw_exec.executor: plugin started: alloc_id=739c0a5b-0570-02c3-4c12-420f36bae019 driver=raw_exec task_name=test path=/Users/johnspencer/.asdf/installs/nomad/1.6.1/bin/nomad pid=43932
    2023-08-15T14:09:38.607-0700 [DEBUG] client.driver_mgr.raw_exec.executor: waiting for RPC address: alloc_id=739c0a5b-0570-02c3-4c12-420f36bae019 driver=raw_exec task_name=test path=/Users/johnspencer/.asdf/installs/nomad/1.6.1/bin/nomad
    2023-08-15T14:09:38.618-0700 [DEBUG] http: request complete: method=GET path="/v1/job/test%2Fdispatch-1692133778-35feb78a/summary?index=1" duration="583.958µs"
    2023-08-15T14:09:38.618-0700 [DEBUG] http: request complete: method=GET path="/v1/vars?prefix=nomad%2Fjobs%2Ftest" duration="412.542µs"
    2023-08-15T14:09:38.618-0700 [DEBUG] http: request complete: method=GET path="/v1/job/test%2Fdispatch-1692133778-35feb78a/deployment" duration="343µs"
    2023-08-15T14:09:38.625-0700 [DEBUG] client.driver_mgr.raw_exec.executor: using plugin: alloc_id=739c0a5b-0570-02c3-4c12-420f36bae019 driver=raw_exec task_name=test version=2
    2023-08-15T14:09:38.629-0700 [INFO]  client.alloc_runner.task_runner: Task event: alloc_id=739c0a5b-0570-02c3-4c12-420f36bae019 task=test type=Started msg="Task started by client" failed=false
    2023-08-15T14:09:38.630-0700 [INFO]  client.alloc_runner.task_runner: Task event: alloc_id=739c0a5b-0570-02c3-4c12-420f36bae019 task=test type=Terminated msg="Exit Code: 0" failed=false
    2023-08-15T14:09:38.630-0700 [DEBUG] client.driver_mgr.raw_exec.executor.stdio: received EOF, stopping recv loop: alloc_id=739c0a5b-0570-02c3-4c12-420f36bae019 driver=raw_exec task_name=test err="rpc error: code = Unavailable desc = error reading from server: EOF"
    2023-08-15T14:09:38.631-0700 [INFO]  client.driver_mgr.raw_exec.executor: plugin process exited: alloc_id=739c0a5b-0570-02c3-4c12-420f36bae019 driver=raw_exec task_name=test path=/Users/johnspencer/.asdf/installs/nomad/1.6.1/bin/nomad pid=43932
    2023-08-15T14:09:38.631-0700 [DEBUG] client.driver_mgr.raw_exec.executor: plugin exited: alloc_id=739c0a5b-0570-02c3-4c12-420f36bae019 driver=raw_exec task_name=test
    2023-08-15T14:09:38.631-0700 [INFO]  client.alloc_runner.task_runner: not restarting task: alloc_id=739c0a5b-0570-02c3-4c12-420f36bae019 task=test reason="Restart unnecessary as task terminated successfully"
    2023-08-15T14:09:38.632-0700 [DEBUG] client.alloc_runner.task_runner.task_hook.logmon.stdio: received EOF, stopping recv loop: alloc_id=739c0a5b-0570-02c3-4c12-420f36bae019 task=test err="rpc error: code = Unavailable desc = error reading from server: EOF"
    2023-08-15T14:09:38.633-0700 [INFO]  client.alloc_runner.task_runner.task_hook.logmon: plugin process exited: alloc_id=739c0a5b-0570-02c3-4c12-420f36bae019 task=test path=/Users/johnspencer/.asdf/installs/nomad/1.6.1/bin/nomad pid=43931
    2023-08-15T14:09:38.633-0700 [DEBUG] client.alloc_runner.task_runner.task_hook.logmon: plugin exited: alloc_id=739c0a5b-0570-02c3-4c12-420f36bae019 task=test
    2023-08-15T14:09:38.633-0700 [DEBUG] client.alloc_runner.task_runner: task run loop exiting: alloc_id=739c0a5b-0570-02c3-4c12-420f36bae019 task=test
    2023-08-15T14:09:38.633-0700 [INFO]  client.gc: marking allocation for GC: alloc_id=739c0a5b-0570-02c3-4c12-420f36bae019
    2023-08-15T14:09:39.049-0700 [DEBUG] http: request complete: method=GET path="/v1/job/test%2Fdispatch-1692133778-35feb78a?index=19" duration=432.793625ms
    2023-08-15T14:09:39.050-0700 [DEBUG] http: request complete: method=GET path="/v1/job/test%2Fdispatch-1692133778-35feb78a/allocations?index=19" duration=451.22825ms
    2023-08-15T14:09:39.050-0700 [DEBUG] client: updated allocations: index=21 total=2 pulled=0 filtered=2
    2023-08-15T14:09:39.051-0700 [DEBUG] client: allocation updates: added=0 removed=0 updated=0 ignored=2
    2023-08-15T14:09:39.051-0700 [DEBUG] client: allocation updates applied: added=0 removed=0 updated=0 ignored=2 errors=0
    2023-08-15T14:09:39.934-0700 [DEBUG] http: request complete: method=GET path=/v1/job/test/evaluations?index=15 duration="637.375µs"
    2023-08-15T14:09:39.940-0700 [DEBUG] http: request complete: method=GET path=/v1/job/test/allocations?index=16 duration=4.658667ms
    2023-08-15T14:09:39.945-0700 [DEBUG] http: request complete: method=GET path="/v1/jobs?meta=true&namespace=default&index=19" duration=4.713084ms
    2023-08-15T14:09:39.964-0700 [DEBUG] http: request complete: method=GET path=/v1/job/test/summary?index=16 duration="884.083µs"
    2023-08-15T14:09:54.948-0700 [ERROR] http: request failed: method=GET path="/v1/client/fs/logs/067c2b24-25c3-dadc-16b5-edfde1621c40?task=test&type=stdout" error="failed to list entries: open /private/var/folders/q4/srtw1qzj1gv6261h2lnyx2440000gn/T/NomadClient547579027/067c2b24-25c3-dadc-16b5-edfde1621c40/alloc/logs: no such file or directory" code=500
    2023-08-15T14:09:54.948-0700 [DEBUG] http: request complete: method=GET path="/v1/client/fs/logs/067c2b24-25c3-dadc-16b5-edfde1621c40?task=test&type=stdout" duration="344.916µs"

Let me know what else you need from me. Thanks again team!


lgfa29 commented Aug 16, 2023

Thank you for the report and kind words @johnnyplaydrums 💚

I was able to easily reproduce the problem following your detailed steps, and it turns out that distributed systems are hard 😅

The different results happen because, when the allocation has only been garbage collected on the client (not the servers), the client still holds a reference to it until the servers also GC it:

nomad/client/client.go

Lines 205 to 207 in e21ab7d

// allocs maps alloc IDs to their AllocRunner. This map includes all
// AllocRunners - running and GC'd - until the server GCs them.
allocs map[string]interfaces.AllocRunner

But the client GC does delete all of the alloc files, so it's not possible to read the logs, resulting in the 500.
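
To make the shape of the fix concrete, here is a minimal, self-contained sketch (hypothetical names, not the actual code from #18232): look the allocation up in the client's in-memory map first, and translate "known but already GC'd locally" into the same 404 as "never heard of it", instead of letting the read of the deleted .../alloc/logs directory bubble up as a 500.

package main

import (
	"errors"
	"fmt"
	"net/http"
)

var errUnknownAlloc = errors.New("Unknown allocation")

// allocRunner is a stand-in for the client's AllocRunner.
type allocRunner struct {
	destroyed bool // set once the client GC has deleted the alloc dir
}

// client is a stand-in for the Nomad client quoted above.
type client struct {
	allocs map[string]*allocRunner // all runners, running and GC'd
}

// logsStatus decides the HTTP status for a logs request.
func (c *client) logsStatus(allocID string) (int, error) {
	ar, ok := c.allocs[allocID]
	if !ok || ar.destroyed {
		// Before the fix, the destroyed case fell through to a file
		// read of the already-deleted log directory and returned 500.
		return http.StatusNotFound, fmt.Errorf("%w %q", errUnknownAlloc, allocID)
	}
	return http.StatusOK, nil
}

func main() {
	c := &client{allocs: map[string]*allocRunner{
		"afb4c8c8": {destroyed: true}, // GC'd on the client only
	}}
	fmt.Println(c.logsStatus("afb4c8c8")) // 404 Unknown allocation "afb4c8c8"
}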

I've opened #18232 to fix this and return a 404 instead.

lgfa29 added the theme/allocation, API, and stage/accepted labels Aug 17, 2023
@johnnyplaydrums (Author)

Hi @lgfa29, thanks for turning this around so quickly! The PR looks great. I noticed I could approve the PR, which I wasn't expecting, but since I'm not a core contributor or super well-versed in Go, I figured I'd leave that to others. Looks great though!


lgfa29 commented Aug 17, 2023

Ah yes, it is possible to approve PRs as community members. It's a good signal to us when people try the fix before merge, but we don't expect that to happen 😄

@johnnyplaydrums (Author)

Sounds good! I've never built Nomad locally so I wanted to give it a shot :) I built via Go since I didn't have Vagrant/VirtualBox set up and wasn't able to get them working after a short try. The Go build was very fast. I did discover an issue, not sure if you're already aware of it or not. I wasn't sure whether to put this in the PR or here, but didn't want to clutter the PR, so here I am 😅

I built 48a5407d5fb614f1b282d2374d6e46b977807e83 and ran through my error case, and now when I run bin/nomad alloc logs <alloc-id> it just hangs and never returns. The logs don't show any activity. When I Ctrl-C out of the command, I then see your new message in the logs: error="state for allocation a856154a-5163-471e-46f7-ceb0c8ae74ac not found on client" code=404. I wanted to double-check there wasn't anything weird about my local dev build, so I built main and confirmed I can run the alloc command and get the old 500 back right away (as expected). Here's a screenshot showing the alloc and curl commands hanging, and a full dump of the dev logs after that in case it's helpful.

[screenshot: nomad alloc logs and curl hanging with no output]

Logs

➜ nomad git:(b-return-404-on-gc-alloc) bin/nomad --version
Nomad v1.6.2-dev
BuildDate 2023-08-18T15:53:13Z
Revision 48a5407
➜ nomad git:(b-return-404-on-gc-alloc) sudo bin/nomad agent -dev -config=client.hcl
Password:
==> Loaded configuration from client.hcl
==> Starting Nomad agent...
==> Nomad agent configuration:
   Advertise Addrs: HTTP: 127.0.0.1:4646; RPC: 127.0.0.1:4647; Serf: 127.0.0.1:4648
        Bind Addrs: HTTP: [127.0.0.1:4646]; RPC: 127.0.0.1:4647; Serf: 127.0.0.1:4648
            Client: true
         Log Level: DEBUG
           Node Id: 5214f3e5-5897-61bd-bf99-091755465634
            Region: global (DC: dc1)
            Server: true
           Version: 1.6.2-dev

==> Nomad agent started! Log data will stream in below:

2023-08-18T11:44:22.289-0700 [INFO]  nomad.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:af4706f2-7e70-9881-b62d-0fc11e7a2d1c Address:127.0.0.1:4647}]"
2023-08-18T11:44:22.290-0700 [INFO]  nomad.raft: entering follower state: follower="Node at 127.0.0.1:4647 [Follower]" leader-address= leader-id=
2023-08-18T11:44:22.290-0700 [INFO]  nomad: serf: EventMemberJoin: John-Spencers-MacBook-Pro.local.global 127.0.0.1
2023-08-18T11:44:22.290-0700 [INFO]  nomad: starting scheduling worker(s): num_workers=10 schedulers=["service", "batch", "system", "sysbatch", "_core"]
2023-08-18T11:44:22.290-0700 [DEBUG] nomad: started scheduling worker: id=fcb8865f-0b2f-b7a7-0b5b-61e6316f5637 index=1 of=10
2023-08-18T11:44:22.290-0700 [DEBUG] nomad: started scheduling worker: id=9db8773b-f81c-2e90-bbc6-b08b61798701 index=2 of=10
2023-08-18T11:44:22.290-0700 [DEBUG] nomad: started scheduling worker: id=7a7ec10d-810d-2b46-b56e-0e06f1c85c5c index=3 of=10
2023-08-18T11:44:22.290-0700 [DEBUG] nomad: started scheduling worker: id=497a18ba-0695-36cb-a583-7d49822d9a04 index=4 of=10
2023-08-18T11:44:22.290-0700 [DEBUG] nomad: started scheduling worker: id=b7888111-0d7d-83f5-f0f6-99e838a0e8fe index=5 of=10
2023-08-18T11:44:22.290-0700 [DEBUG] nomad: started scheduling worker: id=06c524ea-ac9c-fe13-93cc-34ce1f32c53d index=6 of=10
2023-08-18T11:44:22.290-0700 [DEBUG] nomad: started scheduling worker: id=d7661b3d-841b-989d-c053-3a642fb54611 index=7 of=10
2023-08-18T11:44:22.290-0700 [DEBUG] nomad: started scheduling worker: id=bc05e9b5-162f-71f6-a7fe-398e21b7c3f2 index=8 of=10
2023-08-18T11:44:22.290-0700 [DEBUG] worker: running: worker_id=7a7ec10d-810d-2b46-b56e-0e06f1c85c5c
2023-08-18T11:44:22.290-0700 [DEBUG] worker: running: worker_id=497a18ba-0695-36cb-a583-7d49822d9a04
2023-08-18T11:44:22.290-0700 [DEBUG] worker: running: worker_id=06c524ea-ac9c-fe13-93cc-34ce1f32c53d
2023-08-18T11:44:22.290-0700 [DEBUG] worker: running: worker_id=fcb8865f-0b2f-b7a7-0b5b-61e6316f5637
2023-08-18T11:44:22.290-0700 [DEBUG] worker: running: worker_id=25a4cef5-9edd-8b51-8d2f-6c988acd4044
2023-08-18T11:44:22.290-0700 [DEBUG] worker: running: worker_id=9db8773b-f81c-2e90-bbc6-b08b61798701
2023-08-18T11:44:22.290-0700 [DEBUG] worker: running: worker_id=d7661b3d-841b-989d-c053-3a642fb54611
2023-08-18T11:44:22.290-0700 [DEBUG] worker: running: worker_id=bc05e9b5-162f-71f6-a7fe-398e21b7c3f2
2023-08-18T11:44:22.290-0700 [DEBUG] nomad: started scheduling worker: id=25a4cef5-9edd-8b51-8d2f-6c988acd4044 index=9 of=10
2023-08-18T11:44:22.290-0700 [DEBUG] worker: running: worker_id=b7888111-0d7d-83f5-f0f6-99e838a0e8fe
2023-08-18T11:44:22.290-0700 [DEBUG] worker: running: worker_id=feea49b4-59e0-c538-be99-61d11c553099
2023-08-18T11:44:22.290-0700 [DEBUG] nomad: started scheduling worker: id=feea49b4-59e0-c538-be99-61d11c553099 index=10 of=10
2023-08-18T11:44:22.290-0700 [INFO]  nomad: started scheduling worker(s): num_workers=10 schedulers=["service", "batch", "system", "sysbatch", "_core"]
2023-08-18T11:44:22.290-0700 [INFO]  nomad: adding server: server="John-Spencers-MacBook-Pro.local.global (Addr: 127.0.0.1:4647) (DC: dc1)"
2023-08-18T11:44:22.290-0700 [DEBUG] nomad.keyring.replicator: starting encryption key replication
2023-08-18T11:44:22.290-0700 [DEBUG] agent.plugin_loader.docker: using client connection initialized from environment: plugin_dir=""
2023-08-18T11:44:22.290-0700 [INFO]  agent: detected plugin: name=exec type=driver plugin_version=0.1.0
2023-08-18T11:44:22.290-0700 [INFO]  agent: detected plugin: name=qemu type=driver plugin_version=0.1.0
2023-08-18T11:44:22.290-0700 [INFO]  agent: detected plugin: name=java type=driver plugin_version=0.1.0
2023-08-18T11:44:22.290-0700 [INFO]  agent: detected plugin: name=docker type=driver plugin_version=0.1.0
2023-08-18T11:44:22.290-0700 [INFO]  agent: detected plugin: name=mock_driver type=driver plugin_version=0.1.0
2023-08-18T11:44:22.290-0700 [INFO]  agent: detected plugin: name=raw_exec type=driver plugin_version=0.1.0
2023-08-18T11:44:22.291-0700 [INFO]  client: using state directory: state_dir=/private/tmp/NomadClient4036804151
2023-08-18T11:44:22.291-0700 [INFO]  client: using alloc directory: alloc_dir=/private/tmp/NomadClient2111891431
2023-08-18T11:44:22.291-0700 [INFO]  client: using dynamic ports: min=20000 max=32000 reserved=""
2023-08-18T11:44:22.318-0700 [DEBUG] client.fingerprint_mgr: built-in fingerprints: fingerprinters=["arch", "cni", "consul", "cpu", "host", "landlock", "memory", "network", "nomad", "plugins_cni", "signal", "storage", "vault", "env_azure", "env_digitalocean", "env_aws", "env_gce"]
2023-08-18T11:44:22.318-0700 [DEBUG] client.fingerprint_mgr: CNI config dir is not set or does not exist, skipping: cni_config_dir=/opt/cni/config
2023-08-18T11:44:22.318-0700 [DEBUG] client.fingerprint_mgr: fingerprinting periodically: fingerprinter=consul initial_period=15s
2023-08-18T11:44:22.318-0700 [DEBUG] client.fingerprint_mgr.cpu: detected CPU model: name="Apple M1 Max"
2023-08-18T11:44:22.318-0700 [DEBUG] client.fingerprint_mgr.cpu: detected CPU efficiency core speed: mhz=2064
2023-08-18T11:44:22.318-0700 [DEBUG] client.fingerprint_mgr.cpu: detected CPU performance core speed: mhz=3228
2023-08-18T11:44:22.318-0700 [DEBUG] client.fingerprint_mgr.cpu: detected CPU efficiency core count: cores=2
2023-08-18T11:44:22.318-0700 [DEBUG] client.fingerprint_mgr.cpu: detected CPU performance core count: cores=8
2023-08-18T11:44:22.318-0700 [DEBUG] client.fingerprint_mgr.cpu: detected CPU core count: cores=10
2023-08-18T11:44:22.346-0700 [WARN]  client.fingerprint_mgr.landlock: failed to fingerprint kernel landlock feature: error="landlock not supported on this platform"
2023-08-18T11:44:22.346-0700 [DEBUG] client.fingerprint_mgr.network: link speed could not be detected and no speed specified by user, falling back to default speed: interface=lo0 mbits=1000
2023-08-18T11:44:22.346-0700 [DEBUG] client.fingerprint_mgr.network: detected interface IP: interface=lo0 IP=127.0.0.1
2023-08-18T11:44:22.346-0700 [DEBUG] client.fingerprint_mgr.network: detected interface IP: interface=lo0 IP=::1
2023-08-18T11:44:22.346-0700 [DEBUG] client.fingerprint_mgr.network: link speed could not be detected, falling back to default speed: interface=lo0 mbits=1000
2023-08-18T11:44:22.346-0700 [DEBUG] client.fingerprint_mgr.network: link speed could not be detected, falling back to default speed: interface=gif0 mbits=1000
2023-08-18T11:44:22.346-0700 [DEBUG] client.fingerprint_mgr.network: link speed could not be detected, falling back to default speed: interface=stf0 mbits=1000
2023-08-18T11:44:22.346-0700 [DEBUG] client.fingerprint_mgr.network: link speed could not be detected, falling back to default speed: interface=anpi1 mbits=1000
2023-08-18T11:44:22.346-0700 [DEBUG] client.fingerprint_mgr.network: link speed could not be detected, falling back to default speed: interface=anpi0 mbits=1000
2023-08-18T11:44:22.346-0700 [DEBUG] client.fingerprint_mgr.network: link speed could not be detected, falling back to default speed: interface=anpi2 mbits=1000
2023-08-18T11:44:22.346-0700 [DEBUG] client.fingerprint_mgr.network: link speed could not be detected, falling back to default speed: interface=en4 mbits=1000
2023-08-18T11:44:22.346-0700 [DEBUG] client.fingerprint_mgr.network: link speed could not be detected, falling back to default speed: interface=en5 mbits=1000
2023-08-18T11:44:22.346-0700 [DEBUG] client.fingerprint_mgr.network: link speed could not be detected, falling back to default speed: interface=en6 mbits=1000
2023-08-18T11:44:22.346-0700 [DEBUG] client.fingerprint_mgr.network: link speed could not be detected, falling back to default speed: interface=en1 mbits=1000
2023-08-18T11:44:22.346-0700 [DEBUG] client.fingerprint_mgr.network: link speed could not be detected, falling back to default speed: interface=en2 mbits=1000
2023-08-18T11:44:22.346-0700 [DEBUG] client.fingerprint_mgr.network: link speed could not be detected, falling back to default speed: interface=en3 mbits=1000
2023-08-18T11:44:22.346-0700 [DEBUG] client.fingerprint_mgr.network: link speed could not be detected, falling back to default speed: interface=bridge0 mbits=1000
2023-08-18T11:44:22.346-0700 [DEBUG] client.fingerprint_mgr.network: link speed could not be detected, falling back to default speed: interface=ap1 mbits=1000
2023-08-18T11:44:22.346-0700 [DEBUG] client.fingerprint_mgr.network: link speed could not be detected, falling back to default speed: interface=en0 mbits=1000
2023-08-18T11:44:22.346-0700 [DEBUG] client.fingerprint_mgr.network: link speed could not be detected, falling back to default speed: interface=awdl0 mbits=1000
2023-08-18T11:44:22.346-0700 [DEBUG] client.fingerprint_mgr.network: link speed could not be detected, falling back to default speed: interface=llw0 mbits=1000
2023-08-18T11:44:22.346-0700 [DEBUG] client.fingerprint_mgr.network: link speed could not be detected, falling back to default speed: interface=utun0 mbits=1000
2023-08-18T11:44:22.346-0700 [DEBUG] client.fingerprint_mgr.network: link speed could not be detected, falling back to default speed: interface=utun1 mbits=1000
2023-08-18T11:44:22.346-0700 [DEBUG] client.fingerprint_mgr.network: link speed could not be detected, falling back to default speed: interface=utun2 mbits=1000
2023-08-18T11:44:22.346-0700 [DEBUG] client.fingerprint_mgr.network: link speed could not be detected, falling back to default speed: interface=utun4 mbits=1000
2023-08-18T11:44:22.346-0700 [DEBUG] client.fingerprint_mgr.network: link speed could not be detected, falling back to default speed: interface=utun5 mbits=1000
2023-08-18T11:44:22.346-0700 [DEBUG] client.fingerprint_mgr.network: link speed could not be detected, falling back to default speed: interface=utun6 mbits=1000
2023-08-18T11:44:22.346-0700 [DEBUG] client.fingerprint_mgr.network: link speed could not be detected, falling back to default speed: interface=utun7 mbits=1000
2023-08-18T11:44:22.346-0700 [DEBUG] client.fingerprint_mgr.network: link speed could not be detected, falling back to default speed: interface=utun3 mbits=1000
2023-08-18T11:44:22.346-0700 [DEBUG] client.fingerprint_mgr.network: link speed could not be detected, falling back to default speed: interface=en7 mbits=1000
2023-08-18T11:44:22.346-0700 [WARN]  client.fingerprint_mgr.cni_plugins: failed to read CNI plugins directory: cni_path=/opt/cni/bin error="open /opt/cni/bin: no such file or directory"
2023-08-18T11:44:22.350-0700 [DEBUG] client.fingerprint_mgr: fingerprinting periodically: fingerprinter=vault initial_period=15s
2023-08-18T11:44:22.351-0700 [DEBUG] client.fingerprint_mgr.env_azure: could not read value for attribute: attribute=compute/azEnvironment error="Get \"http://169.254.169.254/metadata/instance/compute/azEnvironment?api-version=2019-06-04&format=text\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
2023-08-18T11:44:22.353-0700 [DEBUG] client.fingerprint_mgr.env_digitalocean: failed to request metadata: attribute=region error="Get \"http://169.254.169.254/metadata/v1/region\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
2023-08-18T11:44:22.356-0700 [DEBUG] client.fingerprint_mgr.env_gce: could not read value for attribute: attribute=machine-type error="Get \"http://169.254.169.254/computeMetadata/v1/instance/machine-type\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
2023-08-18T11:44:22.356-0700 [DEBUG] client.fingerprint_mgr.env_gce: error querying GCE Metadata URL, skipping
2023-08-18T11:44:22.356-0700 [DEBUG] client.fingerprint_mgr: detected fingerprints: node_attrs=["arch", "cpu", "host", "network", "nomad", "signal", "storage"]
2023-08-18T11:44:22.356-0700 [INFO]  client.plugin: starting plugin manager: plugin-type=csi
2023-08-18T11:44:22.356-0700 [INFO]  client.plugin: starting plugin manager: plugin-type=driver
2023-08-18T11:44:22.356-0700 [INFO]  client.plugin: starting plugin manager: plugin-type=device
2023-08-18T11:44:22.356-0700 [DEBUG] client.device_mgr: exiting since there are no device plugins
2023-08-18T11:44:22.356-0700 [DEBUG] client.plugin: waiting on plugin manager initial fingerprint: plugin-type=driver
2023-08-18T11:44:22.356-0700 [DEBUG] client.plugin: waiting on plugin manager initial fingerprint: plugin-type=device
2023-08-18T11:44:22.356-0700 [DEBUG] client.driver_mgr.docker: using client connection initialized from environment: driver=docker
2023-08-18T11:44:22.356-0700 [DEBUG] client.driver_mgr: initial driver fingerprint: driver=exec health=undetected description="exec driver unsupported on client OS"
2023-08-18T11:44:22.356-0700 [DEBUG] client.driver_mgr: initial driver fingerprint: driver=qemu health=undetected description=""
2023-08-18T11:44:22.356-0700 [DEBUG] client.plugin: finished plugin manager initial fingerprint: plugin-type=device
2023-08-18T11:44:22.356-0700 [DEBUG] client.driver_mgr: initial driver fingerprint: driver=mock_driver health=healthy description=Healthy
2023-08-18T11:44:22.356-0700 [DEBUG] client.driver_mgr: initial driver fingerprint: driver=raw_exec health=healthy description=Healthy
2023-08-18T11:44:22.357-0700 [DEBUG] client.driver_mgr.docker: could not connect to docker daemon: driver=docker endpoint=unix:///var/run/docker.sock error="Get \"http://unix.sock/version\": dial unix /var/run/docker.sock: connect: no such file or directory"
2023-08-18T11:44:22.357-0700 [DEBUG] client.driver_mgr: initial driver fingerprint: driver=docker health=undetected description="Failed to connect to docker daemon"
2023-08-18T11:44:22.357-0700 [ERROR] client.driver_mgr.docker: failed to list pause containers for recovery: driver=docker error="Get \"http://unix.sock/containers/json?filters=%7B%22label%22%3A%5B%22com.hashicorp.nomad.alloc_id%22%5D%7D\": dial unix /var/run/docker.sock: connect: no such file or directory"
2023-08-18T11:44:22.358-0700 [DEBUG] client.server_mgr: new server list: new_servers=[127.0.0.1:4647] old_servers=[]
2023-08-18T11:44:22.366-0700 [DEBUG] client.driver_mgr: initial driver fingerprint: driver=java health=undetected description=""
2023-08-18T11:44:22.366-0700 [DEBUG] client.driver_mgr: detected drivers: drivers="map[healthy:[mock_driver raw_exec] undetected:[exec qemu docker java]]"
2023-08-18T11:44:22.366-0700 [DEBUG] client.plugin: finished plugin manager initial fingerprint: plugin-type=driver
2023-08-18T11:44:22.366-0700 [INFO]  client: started client: node_id=0f9e5bf8-8bc5-f999-6742-4c5980a1545c
2023-08-18T11:44:22.366-0700 [DEBUG] http: UI is enabled
2023-08-18T11:44:22.366-0700 [DEBUG] http: UI is enabled
2023-08-18T11:44:23.375-0700 [WARN]  nomad.raft: heartbeat timeout reached, starting election: last-leader-addr= last-leader-id=
2023-08-18T11:44:23.375-0700 [INFO]  nomad.raft: entering candidate state: node="Node at 127.0.0.1:4647 [Candidate]" term=2
2023-08-18T11:44:23.375-0700 [DEBUG] nomad.raft: voting for self: term=2 id=af4706f2-7e70-9881-b62d-0fc11e7a2d1c
2023-08-18T11:44:23.375-0700 [DEBUG] nomad.raft: calculated votes needed: needed=1 term=2
2023-08-18T11:44:23.375-0700 [DEBUG] nomad.raft: vote granted: from=af4706f2-7e70-9881-b62d-0fc11e7a2d1c term=2 tally=1
2023-08-18T11:44:23.375-0700 [INFO]  nomad.raft: election won: term=2 tally=1
2023-08-18T11:44:23.375-0700 [INFO]  nomad.raft: entering leader state: leader="Node at 127.0.0.1:4647 [Leader]"
2023-08-18T11:44:23.376-0700 [INFO]  nomad: cluster leadership acquired
2023-08-18T11:44:23.381-0700 [DEBUG] nomad.autopilot: autopilot is now running
2023-08-18T11:44:23.381-0700 [DEBUG] nomad.autopilot: state update routine is now running
2023-08-18T11:44:23.382-0700 [INFO]  nomad.core: established cluster id: cluster_id=c837a793-a8e9-be3c-afed-44b5b6c437a1 create_time=1692384263381958000
2023-08-18T11:44:23.382-0700 [INFO]  nomad: eval broker status modified: paused=false
2023-08-18T11:44:23.382-0700 [INFO]  nomad: blocked evals status modified: paused=false
2023-08-18T11:44:23.387-0700 [INFO]  nomad.keyring: initialized keyring: id=80484fbd-9ae2-e2e4-21b4-621462d56e6a
2023-08-18T11:44:23.401-0700 [INFO]  client: node registration complete
2023-08-18T11:44:23.401-0700 [DEBUG] client: updated allocations: index=1 total=0 pulled=0 filtered=0
2023-08-18T11:44:23.401-0700 [DEBUG] client: allocation updates: added=0 removed=0 updated=0 ignored=0
2023-08-18T11:44:23.401-0700 [DEBUG] client: allocation updates applied: added=0 removed=0 updated=0 ignored=0 errors=0
2023-08-18T11:44:23.401-0700 [DEBUG] client: state updated: node_status=ready
2023-08-18T11:44:24.328-0700 [DEBUG] http: request complete: method=GET path=/v1/namespaces duration="836.542µs"
2023-08-18T11:44:24.328-0700 [DEBUG] http: request complete: method=GET path=/v1/node/pools duration="645.208µs"
2023-08-18T11:44:24.328-0700 [DEBUG] http: request complete: method=GET path=/v1/jobs?meta=true duration=1.727ms
2023-08-18T11:44:24.402-0700 [DEBUG] client: state changed, updating node and re-registering
2023-08-18T11:44:24.403-0700 [INFO]  client: node registration complete
2023-08-18T11:44:25.148-0700 [DEBUG] http: request complete: method=GET path=/v1/namespaces duration=1.273625ms
2023-08-18T11:44:40.711-0700 [DEBUG] http: request complete: method=POST path="/v1/jobs/parse?namespace=*" duration=2.367708ms
2023-08-18T11:44:40.731-0700 [DEBUG] nomad.job.batch_sched: reconciled current state with desired state: eval_id=a0492c67-8464-8e02-0356-60616be3c26c job_id=test3 namespace=default

results=
| Total changes: (place 1) (destructive 0) (inplace 0) (stop 0) (disconnect 0) (reconnect 0)
| Desired Changes for "test": (place 1) (inplace 0) (destructive 0) (stop 0) (migrate 0) (ignore 0) (canary 0)

2023-08-18T11:44:40.738-0700 [DEBUG] nomad.job.batch_sched: setting eval status: eval_id=a0492c67-8464-8e02-0356-60616be3c26c job_id=test3 namespace=default status=complete
2023-08-18T11:44:40.739-0700 [DEBUG] http: request complete: method=POST path=/v1/job/test3/plan duration=12.18675ms
2023-08-18T11:44:41.980-0700 [DEBUG] http: request complete: method=POST path=/v1/jobs duration=2.252209ms
2023-08-18T11:44:41.980-0700 [DEBUG] http: request complete: method=GET path="/v1/jobs?meta=true&index=1" duration=17.612027459s
2023-08-18T11:44:42.012-0700 [DEBUG] http: request complete: method=GET path=/v1/job/test3 duration="269.834µs"
2023-08-18T11:44:42.022-0700 [DEBUG] http: request complete: method=GET path=/v1/job/test3/allocations duration="162.5µs"
2023-08-18T11:44:42.023-0700 [DEBUG] http: request complete: method=GET path=/v1/job/test3/evaluations duration="163.458µs"
2023-08-18T11:44:42.023-0700 [DEBUG] http: request complete: method=GET path="/v1/jobs?meta=true&namespace=default" duration="138.792µs"
2023-08-18T11:44:42.024-0700 [DEBUG] http: request complete: method=GET path=/v1/namespaces duration="112.625µs"
2023-08-18T11:44:42.044-0700 [DEBUG] http: request complete: method=GET path=/v1/job/test3/summary?index=1 duration="171.792µs"
2023-08-18T11:44:42.044-0700 [DEBUG] http: request complete: method=GET path="/v1/vars?prefix=nomad%2Fjobs%2Ftest3" duration="542.833µs"
2023-08-18T11:44:45.745-0700 [DEBUG] http: request complete: method=POST path=/v1/job/test3/dispatch duration=1.456375ms
2023-08-18T11:44:45.745-0700 [DEBUG] http: request complete: method=GET path="/v1/jobs?meta=true&namespace=default&index=11" duration=3.716180834s
2023-08-18T11:44:45.745-0700 [DEBUG] http: request complete: method=GET path=/v1/job/test3/evaluations?index=1 duration=2.4343145s
2023-08-18T11:44:45.746-0700 [DEBUG] http: request complete: method=GET path=/v1/job/test3/evaluations?index=1 duration=3.718606209s
2023-08-18T11:44:45.746-0700 [DEBUG] worker: dequeued evaluation: worker_id=25a4cef5-9edd-8b51-8d2f-6c988acd4044 eval_id=4492845d-4d22-13d9-81d4-78bea492a16e type=batch namespace=default job_id=test3/dispatch-1692384285-f4eb9b34 node_id="" triggered_by=job-register
2023-08-18T11:44:45.746-0700 [DEBUG] worker.batch_sched: reconciled current state with desired state: eval_id=4492845d-4d22-13d9-81d4-78bea492a16e job_id=test3/dispatch-1692384285-f4eb9b34 namespace=default worker_id=25a4cef5-9edd-8b51-8d2f-6c988acd4044

results=
| Total changes: (place 1) (destructive 0) (inplace 0) (stop 0) (disconnect 0) (reconnect 0)
| Desired Changes for "test": (place 1) (inplace 0) (destructive 0) (stop 0) (migrate 0) (ignore 0) (canary 0)

2023-08-18T11:44:45.747-0700 [DEBUG] http: request complete: method=GET path=/v1/job/test3/summary?index=11 duration=2.435394916s
2023-08-18T11:44:45.747-0700 [DEBUG] http: request complete: method=GET path="/v1/jobs?meta=true&namespace=default&index=11" duration=2.435646916s
2023-08-18T11:44:45.752-0700 [DEBUG] http: request complete: method=GET path="/v1/job/test3%2Fdispatch-1692384285-f4eb9b34" duration="370.5µs"
2023-08-18T11:44:45.759-0700 [DEBUG] worker: submitted plan for evaluation: worker_id=25a4cef5-9edd-8b51-8d2f-6c988acd4044 eval_id=4492845d-4d22-13d9-81d4-78bea492a16e
2023-08-18T11:44:45.760-0700 [DEBUG] http: request complete: method=GET path=/v1/job/test3/allocations?index=1 duration=2.449890791s
2023-08-18T11:44:45.760-0700 [DEBUG] http: request complete: method=GET path=/v1/job/test3/allocations?index=1 duration=3.733190667s
2023-08-18T11:44:45.760-0700 [DEBUG] worker.batch_sched: setting eval status: eval_id=4492845d-4d22-13d9-81d4-78bea492a16e job_id=test3/dispatch-1692384285-f4eb9b34 namespace=default worker_id=25a4cef5-9edd-8b51-8d2f-6c988acd4044 status=complete
2023-08-18T11:44:45.761-0700 [DEBUG] client: updated allocations: index=14 total=1 pulled=1 filtered=0
2023-08-18T11:44:45.761-0700 [DEBUG] worker: updated evaluation: worker_id=25a4cef5-9edd-8b51-8d2f-6c988acd4044 eval="<Eval \"4492845d-4d22-13d9-81d4-78bea492a16e\" JobID: \"test3/dispatch-1692384285-f4eb9b34\" Namespace: \"default\">"
2023-08-18T11:44:45.761-0700 [DEBUG] client: allocation updates: added=1 removed=0 updated=0 ignored=0
2023-08-18T11:44:45.761-0700 [DEBUG] worker: ack evaluation: worker_id=25a4cef5-9edd-8b51-8d2f-6c988acd4044 eval_id=4492845d-4d22-13d9-81d4-78bea492a16e type=batch namespace=default job_id=test3/dispatch-1692384285-f4eb9b34 node_id="" triggered_by=job-register
2023-08-18T11:44:45.761-0700 [INFO]  client.alloc_runner.task_runner: Task event: alloc_id=a856154a-5163-471e-46f7-ceb0c8ae74ac task=test type=Received msg="Task received by client" failed=false
2023-08-18T11:44:45.761-0700 [DEBUG] client: allocation updates applied: added=1 removed=0 updated=0 ignored=0 errors=0
2023-08-18T11:44:45.764-0700 [DEBUG] http: request complete: method=GET path="/v1/job/test3%2Fdispatch-1692384285-f4eb9b34/allocations" duration="210.041µs"
2023-08-18T11:44:45.764-0700 [DEBUG] http: request complete: method=GET path="/v1/job/test3%2Fdispatch-1692384285-f4eb9b34/evaluations" duration="124µs"
2023-08-18T11:44:45.764-0700 [DEBUG] http: request complete: method=GET path=/v1/namespaces duration="123.584µs"
2023-08-18T11:44:45.764-0700 [DEBUG] http: request complete: method=GET path="/v1/jobs?meta=true&namespace=default" duration="161.25µs"
2023-08-18T11:44:45.768-0700 [DEBUG] client.alloc_runner.task_runner: lifecycle start condition has been met, proceeding: alloc_id=a856154a-5163-471e-46f7-ceb0c8ae74ac task=test
2023-08-18T11:44:45.768-0700 [INFO]  client.alloc_runner.task_runner: Task event: alloc_id=a856154a-5163-471e-46f7-ceb0c8ae74ac task=test type="Task Setup" msg="Building Task Directory" failed=false
2023-08-18T11:44:45.769-0700 [DEBUG] client.alloc_runner.task_runner.task_hook.logmon: starting plugin: alloc_id=a856154a-5163-471e-46f7-ceb0c8ae74ac task=test path=/Users/johnspencer/bowery/nomad/bin/nomad args=["/Users/johnspencer/bowery/nomad/bin/nomad", "logmon"]
2023-08-18T11:44:45.771-0700 [DEBUG] client.alloc_runner.task_runner.task_hook.logmon: plugin started: alloc_id=a856154a-5163-471e-46f7-ceb0c8ae74ac task=test path=/Users/johnspencer/bowery/nomad/bin/nomad pid=5875
2023-08-18T11:44:45.771-0700 [DEBUG] client.alloc_runner.task_runner.task_hook.logmon: waiting for RPC address: alloc_id=a856154a-5163-471e-46f7-ceb0c8ae74ac task=test path=/Users/johnspencer/bowery/nomad/bin/nomad
2023-08-18T11:44:45.793-0700 [DEBUG] http: request complete: method=GET path="/v1/job/test3%2Fdispatch-1692384285-f4eb9b34?index=12" duration="415.042µs"
2023-08-18T11:44:45.794-0700 [DEBUG] http: request complete: method=GET path="/v1/job/test3%2Fdispatch-1692384285-f4eb9b34/summary?index=1" duration="339.458µs"
2023-08-18T11:44:45.795-0700 [DEBUG] http: request complete: method=GET path="/v1/vars?prefix=nomad%2Fjobs%2Ftest3" duration="255.875µs"
2023-08-18T11:44:45.795-0700 [DEBUG] http: request complete: method=GET path="/v1/job/test3%2Fdispatch-1692384285-f4eb9b34/deployment" duration="283.292µs"
2023-08-18T11:44:45.800-0700 [DEBUG] client.alloc_runner.task_runner.task_hook.logmon: using plugin: alloc_id=a856154a-5163-471e-46f7-ceb0c8ae74ac task=test version=2
2023-08-18T11:44:45.800-0700 [DEBUG] client.alloc_runner.task_runner.task_hook.logmon.nomad: plugin address: alloc_id=a856154a-5163-471e-46f7-ceb0c8ae74ac task=test @module=logmon address=/tmp/plugin1125363476 network=unix timestamp=2023-08-18T11:44:45.800-0700
2023-08-18T11:44:45.801-0700 [DEBUG] client.alloc_runner.task_runner.task_hook.logmon.nomad: opening fifo: alloc_id=a856154a-5163-471e-46f7-ceb0c8ae74ac task=test @module=logmon path=/private/tmp/NomadClient2111891431/a856154a-5163-471e-46f7-ceb0c8ae74ac/alloc/logs/.test.stdout.fifo timestamp=2023-08-18T11:44:45.801-0700
2023-08-18T11:44:45.802-0700 [DEBUG] client.alloc_runner.task_runner.task_hook.logmon.nomad: opening fifo: alloc_id=a856154a-5163-471e-46f7-ceb0c8ae74ac task=test path=/private/tmp/NomadClient2111891431/a856154a-5163-471e-46f7-ceb0c8ae74ac/alloc/logs/.test.stderr.fifo @module=logmon timestamp=2023-08-18T11:44:45.802-0700
2023-08-18T11:44:45.802-0700 [DEBUG] http: request complete: method=GET path=/v1/node/0f9e5bf8-8bc5-f999-6742-4c5980a1545c duration="439.625µs"
2023-08-18T11:44:45.803-0700 [INFO]  client.driver_mgr.raw_exec: starting task: driver=raw_exec driver_cfg="{Command:echo Args:[hello world]}"
2023-08-18T11:44:45.803-0700 [DEBUG] client.driver_mgr.raw_exec.executor: starting plugin: alloc_id=a856154a-5163-471e-46f7-ceb0c8ae74ac driver=raw_exec task_name=test path=/Users/johnspencer/bowery/nomad/bin/nomad args=["/Users/johnspencer/bowery/nomad/bin/nomad", "executor", "{\"LogFile\":\"/private/tmp/NomadClient2111891431/a856154a-5163-471e-46f7-ceb0c8ae74ac/test/executor.out\",\"LogLevel\":\"debug\",\"FSIsolation\":false}"]
2023-08-18T11:44:45.804-0700 [DEBUG] client.driver_mgr.raw_exec.executor: plugin started: alloc_id=a856154a-5163-471e-46f7-ceb0c8ae74ac driver=raw_exec task_name=test path=/Users/johnspencer/bowery/nomad/bin/nomad pid=5876
2023-08-18T11:44:45.804-0700 [DEBUG] client.driver_mgr.raw_exec.executor: waiting for RPC address: alloc_id=a856154a-5163-471e-46f7-ceb0c8ae74ac driver=raw_exec task_name=test path=/Users/johnspencer/bowery/nomad/bin/nomad
2023-08-18T11:44:45.828-0700 [DEBUG] client.driver_mgr.raw_exec.executor: using plugin: alloc_id=a856154a-5163-471e-46f7-ceb0c8ae74ac driver=raw_exec task_name=test version=2
2023-08-18T11:44:45.832-0700 [INFO]  client.alloc_runner.task_runner: Task event: alloc_id=a856154a-5163-471e-46f7-ceb0c8ae74ac task=test type=Started msg="Task started by client" failed=false
2023-08-18T11:44:45.837-0700 [INFO]  client.alloc_runner.task_runner: Task event: alloc_id=a856154a-5163-471e-46f7-ceb0c8ae74ac task=test type=Terminated msg="Exit Code: 0" failed=false
2023-08-18T11:44:45.839-0700 [DEBUG] client.driver_mgr.raw_exec.executor.stdio: received EOF, stopping recv loop: alloc_id=a856154a-5163-471e-46f7-ceb0c8ae74ac driver=raw_exec task_name=test err="rpc error: code = Unavailable desc = error reading from server: EOF"
2023-08-18T11:44:45.840-0700 [INFO]  client.driver_mgr.raw_exec.executor: plugin process exited: alloc_id=a856154a-5163-471e-46f7-ceb0c8ae74ac driver=raw_exec task_name=test path=/Users/johnspencer/bowery/nomad/bin/nomad pid=5876
2023-08-18T11:44:45.840-0700 [DEBUG] client.driver_mgr.raw_exec.executor: plugin exited: alloc_id=a856154a-5163-471e-46f7-ceb0c8ae74ac driver=raw_exec task_name=test
2023-08-18T11:44:45.840-0700 [INFO]  client.alloc_runner.task_runner: not restarting task: alloc_id=a856154a-5163-471e-46f7-ceb0c8ae74ac task=test reason="Restart unnecessary as task terminated successfully"
2023-08-18T11:44:45.841-0700 [DEBUG] client.alloc_runner.task_runner.task_hook.logmon.stdio: received EOF, stopping recv loop: alloc_id=a856154a-5163-471e-46f7-ceb0c8ae74ac task=test err="rpc error: code = Unavailable desc = error reading from server: EOF"
2023-08-18T11:44:45.842-0700 [INFO]  client.alloc_runner.task_runner.task_hook.logmon: plugin process exited: alloc_id=a856154a-5163-471e-46f7-ceb0c8ae74ac task=test path=/Users/johnspencer/bowery/nomad/bin/nomad pid=5875
2023-08-18T11:44:45.842-0700 [DEBUG] client.alloc_runner.task_runner.task_hook.logmon: plugin exited: alloc_id=a856154a-5163-471e-46f7-ceb0c8ae74ac task=test
2023-08-18T11:44:45.842-0700 [DEBUG] client.alloc_runner.task_runner: task run loop exiting: alloc_id=a856154a-5163-471e-46f7-ceb0c8ae74ac task=test
2023-08-18T11:44:45.842-0700 [INFO]  client.gc: marking allocation for GC: alloc_id=a856154a-5163-471e-46f7-ceb0c8ae74ac
2023-08-18T11:44:46.538-0700 [DEBUG] client: updated allocations: index=16 total=1 pulled=0 filtered=1
2023-08-18T11:44:46.538-0700 [DEBUG] client: allocation updates: added=0 removed=0 updated=0 ignored=1
2023-08-18T11:44:46.538-0700 [DEBUG] client: allocation updates applied: added=0 removed=0 updated=0 ignored=1 errors=0
2023-08-18T11:44:46.538-0700 [DEBUG] http: request complete: method=GET path="/v1/job/test3%2Fdispatch-1692384285-f4eb9b34/allocations?index=14" duration=765.014167ms
2023-08-18T11:44:47.782-0700 [DEBUG] http: request complete: method=GET path="/v1/job/test3%2Fdispatch-1692384285-f4eb9b34?index=14" duration=1.123125ms
2023-08-18T11:44:47.783-0700 [DEBUG] http: request complete: method=GET path="/v1/job/test3%2Fdispatch-1692384285-f4eb9b34/summary?index=14" duration=1.066125ms
2023-08-18T11:44:48.365-0700 [DEBUG] http: request complete: method=GET path="/v1/jobs?meta=true&namespace=default&index=14" duration=1.265834ms
2023-08-18T11:44:48.365-0700 [DEBUG] http: request complete: method=GET path=/v1/job/test3/allocations?index=1 duration="562.125µs"
2023-08-18T11:44:48.366-0700 [DEBUG] http: request complete: method=GET path=/v1/job/test3/evaluations?index=1 duration="386.667µs"
2023-08-18T11:44:48.403-0700 [DEBUG] http: request complete: method=GET path=/v1/job/test3/summary?index=11 duration="354.125µs"
2023-08-18T11:44:53.544-0700 [DEBUG] http: request complete: method=GET path=/v1/job/test3/summary?index=16 duration=3.168572375s
2023-08-18T11:44:53.544-0700 [DEBUG] http: request complete: method=GET path=/v1/job/test3/evaluations?index=15 duration=3.169228833s
2023-08-18T11:44:53.544-0700 [DEBUG] http: request complete: method=POST path=/v1/job/test3/dispatch duration=2.420333ms
2023-08-18T11:44:53.545-0700 [DEBUG] http: request complete: method=GET path="/v1/job/test3%2Fdispatch-1692384285-f4eb9b34/evaluations?index=15" duration=7.771761792s
2023-08-18T11:44:53.546-0700 [DEBUG] worker: dequeued evaluation: worker_id=25a4cef5-9edd-8b51-8d2f-6c988acd4044 eval_id=cdc3f7cf-bc7d-48fb-53fc-9248e46466fa type=batch namespace=default job_id=test3/dispatch-1692384293-de95ae2b node_id="" triggered_by=job-register
2023-08-18T11:44:53.548-0700 [DEBUG] worker.batch_sched: reconciled current state with desired state: eval_id=cdc3f7cf-bc7d-48fb-53fc-9248e46466fa job_id=test3/dispatch-1692384293-de95ae2b namespace=default worker_id=25a4cef5-9edd-8b51-8d2f-6c988acd4044

results=
| Total changes: (place 1) (destructive 0) (inplace 0) (stop 0) (disconnect 0) (reconnect 0)
| Desired Changes for "test": (place 1) (inplace 0) (destructive 0) (stop 0) (migrate 0) (ignore 0) (canary 0)

2023-08-18T11:44:53.554-0700 [DEBUG] http: request complete: method=GET path="/v1/job/test3%2Fdispatch-1692384293-de95ae2b" duration="814.459µs"
2023-08-18T11:44:53.556-0700 [DEBUG] worker: submitted plan for evaluation: worker_id=25a4cef5-9edd-8b51-8d2f-6c988acd4044 eval_id=cdc3f7cf-bc7d-48fb-53fc-9248e46466fa
2023-08-18T11:44:53.556-0700 [DEBUG] worker.batch_sched: setting eval status: eval_id=cdc3f7cf-bc7d-48fb-53fc-9248e46466fa job_id=test3/dispatch-1692384293-de95ae2b namespace=default worker_id=25a4cef5-9edd-8b51-8d2f-6c988acd4044 status=complete
2023-08-18T11:44:53.556-0700 [DEBUG] worker: updated evaluation: worker_id=25a4cef5-9edd-8b51-8d2f-6c988acd4044 eval="<Eval \"cdc3f7cf-bc7d-48fb-53fc-9248e46466fa\" JobID: \"test3/dispatch-1692384293-de95ae2b\" Namespace: \"default\">"
2023-08-18T11:44:53.556-0700 [DEBUG] http: request complete: method=GET path=/v1/job/test3/allocations?index=16 duration=3.18063175s
2023-08-18T11:44:53.556-0700 [DEBUG] http: request complete: method=GET path="/v1/job/test3%2Fdispatch-1692384285-f4eb9b34/allocations?index=16" duration=5.774504708s
2023-08-18T11:44:53.556-0700 [DEBUG] worker: ack evaluation: worker_id=25a4cef5-9edd-8b51-8d2f-6c988acd4044 eval_id=cdc3f7cf-bc7d-48fb-53fc-9248e46466fa type=batch namespace=default job_id=test3/dispatch-1692384293-de95ae2b node_id="" triggered_by=job-register
2023-08-18T11:44:53.556-0700 [DEBUG] client: updated allocations: index=19 total=2 pulled=1 filtered=1
2023-08-18T11:44:53.556-0700 [DEBUG] client: allocation updates: added=1 removed=0 updated=0 ignored=1
2023-08-18T11:44:53.556-0700 [INFO]  client.gc: garbage collecting allocation: alloc_id=a856154a-5163-471e-46f7-ceb0c8ae74ac reason="new allocations and over max (1)"
2023-08-18T11:44:53.556-0700 [INFO]  client.alloc_runner.task_runner: Task event: alloc_id=a856154a-5163-471e-46f7-ceb0c8ae74ac task=test type=Killing msg="Sent interrupt. Waiting 5s before force killing" failed=false
2023-08-18T11:44:53.556-0700 [INFO]  client.gc: marking allocation for GC: alloc_id=a856154a-5163-471e-46f7-ceb0c8ae74ac
2023-08-18T11:44:53.560-0700 [DEBUG] client.gc: alloc garbage collected: alloc_id=a856154a-5163-471e-46f7-ceb0c8ae74ac
2023-08-18T11:44:53.560-0700 [INFO]  client.alloc_runner.task_runner: Task event: alloc_id=49ad1fa3-6a9c-4eb3-11f4-125be40f63d5 task=test type=Received msg="Task received by client" failed=false
2023-08-18T11:44:53.560-0700 [DEBUG] client: allocation updates applied: added=1 removed=0 updated=0 ignored=1 errors=0
2023-08-18T11:44:53.561-0700 [DEBUG] client.alloc_runner.task_runner: lifecycle start condition has been met, proceeding: alloc_id=49ad1fa3-6a9c-4eb3-11f4-125be40f63d5 task=test
2023-08-18T11:44:53.561-0700 [INFO]  client.alloc_runner.task_runner: Task event: alloc_id=49ad1fa3-6a9c-4eb3-11f4-125be40f63d5 task=test type="Task Setup" msg="Building Task Directory" failed=false
2023-08-18T11:44:53.562-0700 [DEBUG] client.alloc_runner.task_runner.task_hook.logmon: starting plugin: alloc_id=49ad1fa3-6a9c-4eb3-11f4-125be40f63d5 task=test path=/Users/johnspencer/bowery/nomad/bin/nomad args=["/Users/johnspencer/bowery/nomad/bin/nomad", "logmon"]
2023-08-18T11:44:53.564-0700 [DEBUG] client.alloc_runner.task_runner.task_hook.logmon: plugin started: alloc_id=49ad1fa3-6a9c-4eb3-11f4-125be40f63d5 task=test path=/Users/johnspencer/bowery/nomad/bin/nomad pid=5881
2023-08-18T11:44:53.565-0700 [DEBUG] client.alloc_runner.task_runner.task_hook.logmon: waiting for RPC address: alloc_id=49ad1fa3-6a9c-4eb3-11f4-125be40f63d5 task=test path=/Users/johnspencer/bowery/nomad/bin/nomad
2023-08-18T11:44:53.565-0700 [DEBUG] http: request complete: method=GET path=/v1/namespaces duration=1.518ms
2023-08-18T11:44:53.565-0700 [DEBUG] http: request complete: method=GET path="/v1/job/test3%2Fdispatch-1692384293-de95ae2b/allocations" duration=1.733333ms
2023-08-18T11:44:53.565-0700 [DEBUG] http: request complete: method=GET path="/v1/job/test3%2Fdispatch-1692384293-de95ae2b/evaluations" duration=1.675333ms
2023-08-18T11:44:53.566-0700 [DEBUG] http: request complete: method=GET path="/v1/jobs?meta=true&namespace=default" duration=2.377542ms
2023-08-18T11:44:53.593-0700 [DEBUG] client.alloc_runner.task_runner.task_hook.logmon: using plugin: alloc_id=49ad1fa3-6a9c-4eb3-11f4-125be40f63d5 task=test version=2
2023-08-18T11:44:53.593-0700 [DEBUG] client.alloc_runner.task_runner.task_hook.logmon.nomad: plugin address: alloc_id=49ad1fa3-6a9c-4eb3-11f4-125be40f63d5 task=test @module=logmon address=/tmp/plugin127725230 network=unix timestamp=2023-08-18T11:44:53.593-0700
2023-08-18T11:44:53.594-0700 [DEBUG] client.alloc_runner.task_runner.task_hook.logmon.nomad: opening fifo: alloc_id=49ad1fa3-6a9c-4eb3-11f4-125be40f63d5 task=test @module=logmon path=/private/tmp/NomadClient2111891431/49ad1fa3-6a9c-4eb3-11f4-125be40f63d5/alloc/logs/.test.stdout.fifo timestamp=2023-08-18T11:44:53.594-0700
2023-08-18T11:44:53.595-0700 [DEBUG] client.alloc_runner.task_runner.task_hook.logmon.nomad: opening fifo: alloc_id=49ad1fa3-6a9c-4eb3-11f4-125be40f63d5 task=test path=/private/tmp/NomadClient2111891431/49ad1fa3-6a9c-4eb3-11f4-125be40f63d5/alloc/logs/.test.stderr.fifo @module=logmon timestamp=2023-08-18T11:44:53.595-0700
2023-08-18T11:44:53.595-0700 [INFO]  client.driver_mgr.raw_exec: starting task: driver=raw_exec driver_cfg="{Command:echo Args:[hello world]}"
2023-08-18T11:44:53.595-0700 [DEBUG] client.driver_mgr.raw_exec.executor: starting plugin: alloc_id=49ad1fa3-6a9c-4eb3-11f4-125be40f63d5 driver=raw_exec task_name=test path=/Users/johnspencer/bowery/nomad/bin/nomad args=["/Users/johnspencer/bowery/nomad/bin/nomad", "executor", "{\"LogFile\":\"/private/tmp/NomadClient2111891431/49ad1fa3-6a9c-4eb3-11f4-125be40f63d5/test/executor.out\",\"LogLevel\":\"debug\",\"FSIsolation\":false}"]
2023-08-18T11:44:53.597-0700 [DEBUG] client.driver_mgr.raw_exec.executor: plugin started: alloc_id=49ad1fa3-6a9c-4eb3-11f4-125be40f63d5 driver=raw_exec task_name=test path=/Users/johnspencer/bowery/nomad/bin/nomad pid=5882
2023-08-18T11:44:53.597-0700 [DEBUG] client.driver_mgr.raw_exec.executor: waiting for RPC address: alloc_id=49ad1fa3-6a9c-4eb3-11f4-125be40f63d5 driver=raw_exec task_name=test path=/Users/johnspencer/bowery/nomad/bin/nomad
2023-08-18T11:44:53.610-0700 [DEBUG] http: request complete: method=GET path="/v1/job/test3%2Fdispatch-1692384293-de95ae2b?index=17" duration="369.042µs"
2023-08-18T11:44:53.611-0700 [DEBUG] http: request complete: method=GET path="/v1/job/test3%2Fdispatch-1692384293-de95ae2b/summary?index=1" duration="253.5µs"
2023-08-18T11:44:53.612-0700 [DEBUG] http: request complete: method=GET path="/v1/vars?prefix=nomad%2Fjobs%2Ftest3" duration="217.5µs"
2023-08-18T11:44:53.613-0700 [DEBUG] http: request complete: method=GET path="/v1/job/test3%2Fdispatch-1692384293-de95ae2b/deployment" duration="181.125µs"
2023-08-18T11:44:53.622-0700 [DEBUG] client.driver_mgr.raw_exec.executor: using plugin: alloc_id=49ad1fa3-6a9c-4eb3-11f4-125be40f63d5 driver=raw_exec task_name=test version=2
2023-08-18T11:44:53.622-0700 [DEBUG] client: updated allocations: index=21 total=2 pulled=0 filtered=2
2023-08-18T11:44:53.622-0700 [DEBUG] client: allocation updates: added=0 removed=0 updated=0 ignored=2
2023-08-18T11:44:53.622-0700 [DEBUG] client: allocation updates applied: added=0 removed=0 updated=0 ignored=2 errors=0
2023-08-18T11:44:53.622-0700 [DEBUG] http: request complete: method=GET path="/v1/job/test3%2Fdispatch-1692384293-de95ae2b/allocations?index=19" duration=50.51625ms
2023-08-18T11:44:53.625-0700 [INFO]  client.alloc_runner.task_runner: Task event: alloc_id=49ad1fa3-6a9c-4eb3-11f4-125be40f63d5 task=test type=Started msg="Task started by client" failed=false
2023-08-18T11:44:53.628-0700 [INFO]  client.alloc_runner.task_runner: Task event: alloc_id=49ad1fa3-6a9c-4eb3-11f4-125be40f63d5 task=test type=Terminated msg="Exit Code: 0" failed=false
2023-08-18T11:44:53.629-0700 [DEBUG] client.driver_mgr.raw_exec.executor.stdio: received EOF, stopping recv loop: alloc_id=49ad1fa3-6a9c-4eb3-11f4-125be40f63d5 driver=raw_exec task_name=test err="rpc error: code = Unavailable desc = error reading from server: EOF"
2023-08-18T11:44:53.630-0700 [INFO]  client.driver_mgr.raw_exec.executor: plugin process exited: alloc_id=49ad1fa3-6a9c-4eb3-11f4-125be40f63d5 driver=raw_exec task_name=test path=/Users/johnspencer/bowery/nomad/bin/nomad pid=5882
2023-08-18T11:44:53.630-0700 [DEBUG] client.driver_mgr.raw_exec.executor: plugin exited: alloc_id=49ad1fa3-6a9c-4eb3-11f4-125be40f63d5 driver=raw_exec task_name=test
2023-08-18T11:44:53.630-0700 [INFO]  client.alloc_runner.task_runner: not restarting task: alloc_id=49ad1fa3-6a9c-4eb3-11f4-125be40f63d5 task=test reason="Restart unnecessary as task terminated successfully"
2023-08-18T11:44:53.631-0700 [DEBUG] client.alloc_runner.task_runner.task_hook.logmon.stdio: received EOF, stopping recv loop: alloc_id=49ad1fa3-6a9c-4eb3-11f4-125be40f63d5 task=test err="rpc error: code = Unavailable desc = error reading from server: EOF"
2023-08-18T11:44:53.632-0700 [INFO]  client.alloc_runner.task_runner.task_hook.logmon: plugin process exited: alloc_id=49ad1fa3-6a9c-4eb3-11f4-125be40f63d5 task=test path=/Users/johnspencer/bowery/nomad/bin/nomad pid=5881
2023-08-18T11:44:53.632-0700 [DEBUG] client.alloc_runner.task_runner.task_hook.logmon: plugin exited: alloc_id=49ad1fa3-6a9c-4eb3-11f4-125be40f63d5 task=test
2023-08-18T11:44:53.632-0700 [DEBUG] client.alloc_runner.task_runner: task run loop exiting: alloc_id=49ad1fa3-6a9c-4eb3-11f4-125be40f63d5 task=test
2023-08-18T11:44:53.633-0700 [INFO]  client.gc: marking allocation for GC: alloc_id=49ad1fa3-6a9c-4eb3-11f4-125be40f63d5
2023-08-18T11:44:53.876-0700 [DEBUG] client: updated allocations: index=22 total=2 pulled=0 filtered=2
2023-08-18T11:44:53.876-0700 [DEBUG] client: allocation updates: added=0 removed=0 updated=0 ignored=2
2023-08-18T11:44:53.876-0700 [DEBUG] client: allocation updates applied: added=0 removed=0 updated=0 ignored=2 errors=0
2023-08-18T11:44:55.584-0700 [DEBUG] http: request complete: method=GET path="/v1/job/test3%2Fdispatch-1692384293-de95ae2b/summary?index=19" duration="561.167µs"
2023-08-18T11:44:55.584-0700 [DEBUG] http: request complete: method=GET path="/v1/job/test3%2Fdispatch-1692384293-de95ae2b?index=19" duration="816.083µs"
2023-08-18T11:44:55.586-0700 [DEBUG] http: request complete: method=GET path="/v1/job/test3%2Fdispatch-1692384293-de95ae2b/allocations?index=21" duration=1.478333ms
2023-08-18T11:45:47.670-0700 [DEBUG] http: request complete: method=GET path=/v1/allocations?prefix=a856154a-5163-471e-46f7-ceb0c8ae74ac duration="297.875µs"
2023-08-18T11:45:47.671-0700 [DEBUG] http: request complete: method=GET path=/v1/allocation/a856154a-5163-471e-46f7-ceb0c8ae74ac?namespace=default duration="359.083µs"
2023-08-18T11:45:47.673-0700 [DEBUG] http: request complete: method=GET path=/v1/node/0f9e5bf8-8bc5-f999-6742-4c5980a1545c duration="367.167µs"
2023-08-18T11:49:23.382-0700 [DEBUG] worker: dequeued evaluation: worker_id=bc05e9b5-162f-71f6-a7fe-398e21b7c3f2 eval_id=53238c8d-c3d9-449d-ef20-8c90335bd973 type=_core namespace=- job_id=eval-gc node_id="" triggered_by=scheduled
2023-08-18T11:49:23.382-0700 [DEBUG] core.sched: eval GC scanning before cutoff index: index=0 eval_gc_threshold=1h0m0s
2023-08-18T11:49:23.382-0700 [DEBUG] core.sched: eval GC scanning before cutoff index: index=0 batch_eval_gc_threshold=24h0m0s
2023-08-18T11:49:23.383-0700 [DEBUG] worker: ack evaluation: worker_id=bc05e9b5-162f-71f6-a7fe-398e21b7c3f2 eval_id=53238c8d-c3d9-449d-ef20-8c90335bd973 type=_core namespace=- job_id=eval-gc node_id="" triggered_by=scheduled
2023-08-18T11:49:23.383-0700 [DEBUG] worker: dequeued evaluation: worker_id=bc05e9b5-162f-71f6-a7fe-398e21b7c3f2 eval_id=c2c60460-7bfe-bd7f-562f-2b1af046c6f3 type=_core namespace=- job_id=deployment-gc node_id="" triggered_by=scheduled
2023-08-18T11:49:23.383-0700 [DEBUG] core.sched: deployment GC scanning before cutoff index: index=0 deployment_gc_threshold=1h0m0s
2023-08-18T11:49:23.383-0700 [DEBUG] worker: ack evaluation: worker_id=bc05e9b5-162f-71f6-a7fe-398e21b7c3f2 eval_id=c2c60460-7bfe-bd7f-562f-2b1af046c6f3 type=_core namespace=- job_id=deployment-gc node_id="" triggered_by=scheduled
2023-08-18T11:49:23.383-0700 [DEBUG] worker: dequeued evaluation: worker_id=bc05e9b5-162f-71f6-a7fe-398e21b7c3f2 eval_id=3d806f8a-3e20-686f-692a-5d6e07effc9c type=_core namespace=- job_id=local-token-expired-gc node_id="" triggered_by=scheduled
2023-08-18T11:49:23.383-0700 [DEBUG] worker: ack evaluation: worker_id=bc05e9b5-162f-71f6-a7fe-398e21b7c3f2 eval_id=3d806f8a-3e20-686f-692a-5d6e07effc9c type=_core namespace=- job_id=local-token-expired-gc node_id="" triggered_by=scheduled
2023-08-18T11:49:23.383-0700 [DEBUG] worker: dequeued evaluation: worker_id=bc05e9b5-162f-71f6-a7fe-398e21b7c3f2 eval_id=fbcfdee1-5f99-36df-496b-e2985a699e7f type=_core namespace=- job_id=job-gc node_id="" triggered_by=scheduled
2023-08-18T11:49:23.384-0700 [DEBUG] core.sched: job GC scanning before cutoff index: index=0 job_gc_threshold=4h0m0s
2023-08-18T11:49:23.384-0700 [DEBUG] worker: ack evaluation: worker_id=bc05e9b5-162f-71f6-a7fe-398e21b7c3f2 eval_id=fbcfdee1-5f99-36df-496b-e2985a699e7f type=_core namespace=- job_id=job-gc node_id="" triggered_by=scheduled
2023-08-18T11:49:23.384-0700 [DEBUG] worker: dequeued evaluation: worker_id=bc05e9b5-162f-71f6-a7fe-398e21b7c3f2 eval_id=b29dcac7-4f52-3be9-e173-c13c69f2a55a type=_core namespace=- job_id=csi-plugin-gc node_id="" triggered_by=scheduled
2023-08-18T11:49:23.384-0700 [DEBUG] core.sched: CSI plugin GC scanning before cutoff index: index=0 csi_plugin_gc_threshold=1h0m0s
2023-08-18T11:49:23.384-0700 [DEBUG] worker: ack evaluation: worker_id=bc05e9b5-162f-71f6-a7fe-398e21b7c3f2 eval_id=b29dcac7-4f52-3be9-e173-c13c69f2a55a type=_core namespace=- job_id=csi-plugin-gc node_id="" triggered_by=scheduled
2023-08-18T11:49:23.383-0700 [DEBUG] worker: dequeued evaluation: worker_id=feea49b4-59e0-c538-be99-61d11c553099 eval_id=59170d7f-d8c4-c135-4248-ff0a5a37f54d type=_core namespace=- job_id=csi-volume-claim-gc node_id="" triggered_by=scheduled
2023-08-18T11:49:23.386-0700 [DEBUG] core.sched: CSI volume claim GC scanning before cutoff index: index=4 csi_volume_claim_gc_threshold=5m0s
2023-08-18T11:49:23.386-0700 [DEBUG] worker: ack evaluation: worker_id=feea49b4-59e0-c538-be99-61d11c553099 eval_id=59170d7f-d8c4-c135-4248-ff0a5a37f54d type=_core namespace=- job_id=csi-volume-claim-gc node_id="" triggered_by=scheduled
2023-08-18T11:49:23.383-0700 [DEBUG] worker: dequeued evaluation: worker_id=25a4cef5-9edd-8b51-8d2f-6c988acd4044 eval_id=a3d8c56d-2fde-ec18-ed42-ebcce5113331 type=_core namespace=- job_id=node-gc node_id="" triggered_by=scheduled
2023-08-18T11:49:23.386-0700 [DEBUG] core.sched: node GC scanning before cutoff index: index=0 node_gc_threshold=24h0m0s
2023-08-18T11:49:23.386-0700 [DEBUG] worker: ack evaluation: worker_id=25a4cef5-9edd-8b51-8d2f-6c988acd4044 eval_id=a3d8c56d-2fde-ec18-ed42-ebcce5113331 type=_core namespace=- job_id=node-gc node_id="" triggered_by=scheduled
2023-08-18T11:49:31.988-0700 [DEBUG] http: request complete: method=GET path=/v1/namespaces?index=1 duration=5m7.632822875s
2023-08-18T11:49:46.298-0700 [DEBUG] http: request failed: method=GET path="/v1/client/fs/logs/a856154a-5163-471e-46f7-ceb0c8ae74ac?follow=false&offset=0&origin=start&region=global&task=test&type=stdout" error="state for allocation a856154a-5163-471e-46f7-ceb0c8ae74ac not found on client" code=404
2023-08-18T11:49:46.298-0700 [DEBUG] http: request complete: method=GET path="/v1/client/fs/logs/a856154a-5163-471e-46f7-ceb0c8ae74ac?follow=false&offset=0&origin=start&region=global&task=test&type=stdout" duration=3m58.624511417s
2023-08-18T11:49:50.464-0700 [DEBUG] http: request failed: method=GET path="/v1/client/fs/logs/a856154a-5163-471e-46f7-ceb0c8ae74ac?task=test&type=stdout" error="state for allocation a856154a-5163-471e-46f7-ceb0c8ae74ac not found on client" code=404
2023-08-18T11:49:50.464-0700 [DEBUG] http: request complete: method=GET path="/v1/client/fs/logs/a856154a-5163-471e-46f7-ceb0c8ae74ac?task=test&type=stdout" duration=3m56.472775958s
2023-08-18T11:49:54.254-0700 [DEBUG] http: request complete: method=GET path=/v1/job/test3?index=11 duration=5m12.212170958s
2023-08-18T11:49:54.317-0700 [DEBUG] http: request complete: method=GET path="/v1/job/test3%2Fdispatch-1692384293-de95ae2b/evaluations?index=20" duration=5m0.746057292s
2023-08-18T11:49:56.814-0700 [DEBUG] client: updated allocations: index=22 total=2 pulled=0 filtered=2
2023-08-18T11:49:56.814-0700 [DEBUG] client: allocation updates: added=0 removed=0 updated=0 ignored=2
2023-08-18T11:49:56.814-0700 [DEBUG] client: allocation updates applied: added=0 removed=0 updated=0 ignored=2 errors=0
2023-08-18T11:50:01.291-0700 [DEBUG] http: request complete: method=GET path=/v1/job/test3?index=11 duration=5m17.980739625s
2023-08-18T11:50:02.624-0700 [DEBUG] http: request complete: method=GET path="/v1/job/test3%2Fdispatch-1692384293-de95ae2b?index=22" duration=5m5.031354041s
2023-08-18T11:50:02.656-0700 [DEBUG] http: request complete: method=GET path="/v1/job/test3%2Fdispatch-1692384285-f4eb9b34?index=16" duration=5m12.870507667s
2023-08-18T11:50:08.959-0700 [DEBUG] http: request complete: method=GET path="/v1/job/test3%2Fdispatch-1692384293-de95ae2b/summary?index=22" duration=5m11.3660445s
2023-08-18T11:50:14.984-0700 [DEBUG] http: request complete: method=GET path="/v1/job/test3%2Fdispatch-1692384293-de95ae2b/allocations?index=22" duration=5m17.391273833s
2023-08-18T11:51:54.933-0700 [DEBUG] http: request complete: method=GET path="/v1/job/test3%2Fdispatch-1692384293-de95ae2b/summary?index=22" duration=5m5.942634166s
2023-08-18T11:51:56.419-0700 [DEBUG] http: request complete: method=GET path="/v1/job/test3%2Fdispatch-1692384293-de95ae2b?index=22" duration=5m7.429240834s
2023-08-18T11:52:06.410-0700 [DEBUG] http: request complete: method=GET path="/v1/job/test3%2Fdispatch-1692384293-de95ae2b/allocations?index=22" duration=5m17.41954175s
2023-08-18T11:52:07.378-0700 [DEBUG] http: request complete: method=GET path="/v1/job/test3%2Fdispatch-1692384293-de95ae2b/evaluations?index=20" duration=5m18.388005459s
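
Side note, for anyone hitting this on a release without the fix: below is a minimal probe sketch in Go, assuming a local dev agent on 127.0.0.1:4646 and reusing the alloc ID and task name from the logs above. It treats the expected 404 and the pre-fix 500 the same way, since in practice both mean the allocation's log directory is gone.

package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Alloc ID and task name are taken from the reproduction logs above.
	allocID := "a856154a-5163-471e-46f7-ceb0c8ae74ac"
	url := fmt.Sprintf("http://127.0.0.1:4646/v1/client/fs/logs/%s?task=test&type=stdout", allocID)

	resp, err := http.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)

	switch resp.StatusCode {
	case http.StatusOK:
		fmt.Printf("logs:\n%s\n", body)
	case http.StatusNotFound:
		// Fixed behavior: "state for allocation ... not found on client"
		fmt.Println("alloc garbage collected:", string(body))
	case http.StatusInternalServerError:
		// Pre-fix behavior this issue is about: "failed to list entries: ... no such file or directory"
		fmt.Println("treating 500 as a garbage-collected alloc:", string(body))
	default:
		fmt.Println("unexpected status:", resp.Status)
	}
}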

@lgfa29
Contributor

lgfa29 commented Aug 18, 2023

Oof nice catch! And nice job getting the custom binary running 😄

I had missed the early returns on the error path, so the HTTP handler was left stuck waiting for log data that never arrived. c14d770 fixes this problem, and the branch should be working now.
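
For context, the failure mode looks roughly like this (illustrative Go only, not the actual Nomad handler code): when the setup error fires, the handler needs to return immediately, otherwise it keeps selecting on a frame channel that will never deliver anything and the request hangs.

package sketch

import (
	"net/http"
	"os"
)

// streamLogs is a toy version of a streaming log handler. The early
// return on the error channel is the part that was missing: without
// it, the loop blocks on frames forever once setup has failed.
func streamLogs(w http.ResponseWriter, frames <-chan []byte, errCh <-chan error) {
	for {
		select {
		case err := <-errCh:
			if os.IsNotExist(err) {
				http.Error(w, err.Error(), http.StatusNotFound)
			} else {
				http.Error(w, err.Error(), http.StatusInternalServerError)
			}
			return // the missing early return
		case frame, ok := <-frames:
			if !ok {
				return // stream ended normally
			}
			w.Write(frame)
			if f, ok := w.(http.Flusher); ok {
				f.Flush()
			}
		}
	}
}

That's roughly what c14d770 restores on the error path.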

Thanks for giving it a try!

@johnnyplaydrums
Author

Awesome, I just tested the most recent commit (c14d770) and it works as expected. Thank you!
