
Nomad kills off healthy allocations on deploy when new allocations fail #6864

Closed
kaspergrubbe opened this issue Dec 16, 2019 · 47 comments · Fixed by #6975

Comments

@kaspergrubbe

Nomad version

$ nomad -v
Nomad v0.10.2 (0d2d6e3dc5a171c21f8f31fa117c8a765eb4fc02)

Operating system and Environment details

Amazon Linux 2 running on AWS.

Issue

When we deploy using the blue/green canary deployment model, we have 4 healthy allocations running and Nomad boots up 4 new allocations. When those new allocations fail, Nomad retries them, and somehow it also kills off the healthy, running allocations, leaving us with no healthy allocations running at all:

screenie_1576449035_369936

Reproduction steps

Deploy a healthy job, then deploy a failing version of the same job, and observe what happens.
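
Roughly, as a sketch (the file name and the exact commands here are illustrative, not the precise ones we ran):

$ nomad job run web.hcl                       # known-good image tag; wait until all 4 allocations are healthy
# change the task image in web.hcl to a build that fails its health check
$ nomad job run web.hcl                       # the 4 canaries fail, and the previously healthy allocations get killed too
$ nomad job status billetto-production-web    # no healthy allocations are left running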

Job file

job "billetto-production-web" {
  datacenters = ["dc1"]
  type = "service"

  update {
    max_parallel = 4
    min_healthy_time = "10s"
    healthy_deadline = "3m"
    progress_deadline = "10m"
    auto_revert = false
    auto_promote = false
    canary = 4
  }

  migrate {
    max_parallel = 3
    health_check = "checks"
    min_healthy_time = "10s"
    healthy_deadline = "5m"
  }

  group "group" {
    count = 4

    restart {
      attempts = 0
      interval = "30m"
      delay = "15s"
      mode = "fail"
    }

    ephemeral_disk {
      size = 300
    }

    task "nginx" {
      driver = "docker"

      config {
        image = "billetto/billetto-nginx:42.42.42"

        port_map {
          nginx = 80
        }

        dns_servers = ["172.17.0.1"]
      }

      resources {
        cpu    = 200
        memory = 170
        network {
          mbits = 50
          port "nginx" {}
        }
      }

      service {
        name = "billetto-production-nginx"
        tags = []
        port = "nginx"

        check {
          name     = "billetto-production-web nginx healthcheck"
          type     = "http"
          protocol = "http"
          path     = "/debug/healthcheck"
          interval = "5s"
          timeout  = "3s"
        }
      }
    }

    task "rails" {
      driver = "docker"

      config {
        image = "billetto/billetto:42.42.42"

        command = "bin/bundle"
        args = ["exec", "unicorn", "-c", "config/unicorn.rb", "--no-default-middleware"]

        port_map {
          web = 3000
        }

        dns_servers = ["172.17.0.1"]
      }

      resources {
        cpu    = 750
        memory = 1650
        network {
          mbits = 50
          port "web" {}
        }
      }

      service {
        name = "billetto-production-web"
        tags = []
        port = "web"

        check {
          name     = "billetto-production-web rails healthcheck"
          type     = "http"
          protocol = "http"
          path     = "/debug/healthcheck"
          interval = "5s"
          timeout  = "3s"
        }
      }
    }
  }
}
@drewbailey
Contributor

Hi @kaspergrubbe, thanks for taking the time to file this issue. I'm trying to reproduce locally and am having some difficulties. Do you mind sharing any relevant logs or allocation details for the healthy allocations that got killed? Are you able to share the relevant parts of your nginx config, showing how it relates to the rails task and the health checks?

@kaspergrubbe
Author

@drewbailey This happened in our production environment. I'm currently setting up a local version on my Linux machine to try to replicate the behaviour here; give me a day to see if it happens in another environment as well.

@drewbailey drewbailey added this to Needs Triage in Nomad - Community Issues Triage via automation Dec 18, 2019
@drewbailey
Contributor

drewbailey commented Dec 18, 2019

@kaspergrubbe thanks! Here is the job file I was using to try to reproduce. I have a very simple server whose fail Docker tag returns a 500 on /healthcheck:

job "web" {
  datacenters = ["dc1"]
  type        = "service"

  update {
    max_parallel      = 4
    min_healthy_time  = "10s"
    healthy_deadline  = "3m"
    progress_deadline = "10m"
    auto_revert       = false
    auto_promote      = false
    canary            = 4
  }

  migrate {
    max_parallel     = 3
    health_check     = "checks"
    min_healthy_time = "10s"
    healthy_deadline = "5m"
  }

  group "group" {
    count = 4

    restart {
      attempts = 0
      interval = "30m"
      delay    = "15s"
      mode     = "fail"
    }

    ephemeral_disk {
      size = 300
    }

    task "nginx" {
      driver = "docker"

      config {
        image = "nginx"

        port_map {
          http = 80
        }

        volumes = [
          "local:/etc/nginx/conf.d",
        ]
      }

      template {
        data = <<EOF
upstream backend {
{{ range service "demo-webapp" }}
  server {{ .Address }}:{{ .Port }};
{{ else }}server 127.0.0.1:65535; # force a 502
{{ end }}
}

server {
   listen {{ env "NOMAD_PORT_nginx" }};

   location /nginx_health {
      return 200 'nginx health';
   }
   

   location / {
      proxy_pass http://backend;
   }
}
EOF

        destination   = "local/load-balancer.conf"
        change_mode   = "signal"
        change_signal = "SIGHUP"
      }

      resources {
        network {
          mbits = 10
          mode  = "host"

          port "nginx" {
            # static = 8080
          }
        }
      }

      service {
        name = "nginx"
        tags = []
        port = "nginx"

        check {
          name     = "nginx healthcheck"
          type     = "http"
          protocol = "http"
          path     = "/healthcheck"
          interval = "5s"
          timeout  = "3s"
        }
      }
    }

    task "server" {
      env {
        HTTP_PORT = "${NOMAD_PORT_http}"
      }

      driver = "docker"

      config {
        image = "drewbailey/simple-server:1"
      }

      resources {
        network {
          mbits = 10
          port  "http"{}
        }
      }

      service {
        name = "demo-webapp"
        port = "http"

        check {
          type     = "http"
          path     = "/healthcheck"
          interval = "2s"
          timeout  = "2s"
        }
      }
    }
  }
}

For my deployment I was changing the image name to:

      config {
-        image = "drewbailey/simple-server:1"
+        image = "drewbailey/simple-server:fail"
      }

@kaspergrubbe
Author

@drewbailey Ok, you win the first round! I'm simply unable to replicate the issue locally either, both with the nomad CLI and with our deployment script that uses the API, so my sanity is restored.

I did however manage to recreate some of it on production:

screenie_1576725196_236203

As you can see in my first post we specify a count of 4; however, something has happened and 2 of the healthy containers have been killed off.

And when I check the logs it seems like Nomad did ask them to shut down:

screenie_1576725345_0563579

Nomad alloc status says "alloc not needed due to job update"; maybe that's a clue?

$ nomad alloc status 6d91d6f4
ID                  = 6d91d6f4-b2cc-f606-dad4-d39bff351615
Eval ID             = 1253353a
Name                = billetto-production-web.group[2]
Node ID             = 0301a622
Node Name           = ip-10-1-3-7.eu-west-1.compute.internal
Job ID              = billetto-production-web
Job Version         = 221
Client Status       = complete
Client Description  = All tasks have completed
Desired Status      = stop
Desired Description = alloc not needed due to job update
Created             = 6h5m ago
Modified            = 10m52s ago
Deployment ID       = d3bc02d7
Deployment Health   = healthy

Can you tell me how to debug further and where to look?

@drewbailey
Contributor

@kaspergrubbe Do you mind sharing the full output of nomad alloc status -verbose 6d91d6f4 as well as any other allocs that were healthy but ended up being killed as well? If you are able to dig up any related server logs that would also be helpful.

@kaspergrubbe
Author

@drewbailey I will try to dig into the log files as well; in the meantime, here's the full output of the command:

$ nomad alloc status -verbose 6d91d6f4
ID                  = 6d91d6f4-b2cc-f606-dad4-d39bff351615
Eval ID             = 1253353a-097c-1e16-757e-c8fb5e2c3254
Name                = billetto-production-web.group[2]
Node ID             = 0301a622-d0d8-b039-da78-b897001dd3a9
Node Name           = ip-10-1-3-7.eu-west-1.compute.internal
Job ID              = billetto-production-web
Job Version         = 221
Client Status       = complete
Client Description  = All tasks have completed
Desired Status      = stop
Desired Description = alloc not needed due to job update
Created             = 2019-12-18T21:12:15Z
Modified            = 2019-12-19T03:06:24Z
Deployment ID       = d3bc02d7-d610-fca8-70d5-89b39c89223d
Deployment Health   = healthy
Evaluated Nodes     = 6
Filtered Nodes      = 0
Exhausted Nodes     = 4
Allocation Time     = 90.608µs
Failures            = 0

Task "logging-nginx" is "dead"
Task Resources
CPU        Memory          Disk     Addresses
0/200 MHz  492 KiB/64 MiB  300 MiB

Task Events:
Started At     = 2019-12-18T21:12:16Z
Finished At    = 2019-12-19T03:06:24Z
Total Restarts = 0
Last Restart   = N/A

Recent Events:
Time                  Type        Description
2019-12-19T03:06:24Z  Killed      Task successfully killed
2019-12-19T03:06:24Z  Terminated  Exit Code: 0
2019-12-19T03:06:23Z  Killing     Sent interrupt. Waiting 5s before force killing
2019-12-18T21:12:16Z  Started     Task started by client
2019-12-18T21:12:15Z  Task Setup  Building Task Directory
2019-12-18T21:12:15Z  Received    Task received by client

Task "logging-rails" is "dead"
Task Resources
CPU        Memory          Disk     Addresses
0/200 MHz  496 KiB/64 MiB  300 MiB

Task Events:
Started At     = 2019-12-18T21:12:16Z
Finished At    = 2019-12-19T03:06:24Z
Total Restarts = 0
Last Restart   = N/A

Recent Events:
Time                  Type        Description
2019-12-19T03:06:24Z  Killed      Task successfully killed
2019-12-19T03:06:24Z  Terminated  Exit Code: 0
2019-12-19T03:06:23Z  Killing     Sent interrupt. Waiting 5s before force killing
2019-12-18T21:12:16Z  Started     Task started by client
2019-12-18T21:12:15Z  Task Setup  Building Task Directory
2019-12-18T21:12:15Z  Received    Task received by client

Task "nginx" is "dead"
Task Resources
CPU        Memory           Disk     Addresses
0/200 MHz  2.0 MiB/170 MiB  300 MiB  nginx: 10.1.3.7:23203

Task Events:
Started At     = 2019-12-18T21:12:30Z
Finished At    = 2019-12-19T03:06:24Z
Total Restarts = 0
Last Restart   = N/A

Recent Events:
Time                  Type        Description
2019-12-19T03:06:24Z  Killed      Task successfully killed
2019-12-19T03:06:24Z  Terminated  Exit Code: 0
2019-12-19T03:06:23Z  Killing     Sent interrupt. Waiting 5s before force killing
2019-12-18T21:12:30Z  Started     Task started by client
2019-12-18T21:12:15Z  Driver      Downloading image
2019-12-18T21:12:15Z  Task Setup  Building Task Directory
2019-12-18T21:12:15Z  Received    Task received by client

Task "rails" is "dead"
Task Resources
CPU         Memory           Disk     Addresses
11/750 MHz  1.1 GiB/1.6 GiB  300 MiB  web: 10.1.3.7:28107

Task Events:
Started At     = 2019-12-18T21:12:44Z
Finished At    = 2019-12-19T03:06:24Z
Total Restarts = 0
Last Restart   = N/A

Recent Events:
Time                  Type        Description
2019-12-19T03:06:24Z  Killed      Task successfully killed
2019-12-19T03:06:24Z  Terminated  Exit Code: 0
2019-12-19T03:06:23Z  Killing     Sent interrupt. Waiting 5s before force killing
2019-12-18T21:12:44Z  Started     Task started by client
2019-12-18T21:12:15Z  Driver      Downloading image
2019-12-18T21:12:15Z  Task Setup  Building Task Directory
2019-12-18T21:12:15Z  Received    Task received by client

Placement Metrics
  * Resources exhausted on 4 nodes
  * Dimension "memory" exhausted on 3 nodes
  * Dimension "network: bandwidth exceeded" exhausted on 1 nodes
Node                                  binpack  job-anti-affinity  node-affinity  node-reschedule-penalty  final score
0301a622-d0d8-b039-da78-b897001dd3a9  0.939    -0.75              0              0                        0.0947
aef43eaf-137f-659f-dfa8-2a7a18aac3fe  0.913    -0.75              0              0                        0.0817

I've just now realized that my nodes are resource-exhausted; could this be causing what I am seeing?

@drewbailey
Contributor

Hey @kaspergrubbe, still looking into this. I don't think that's the cause; those placement metrics just show why, and how many, nodes were ruled out as viable placements.

@kaspergrubbe
Author

@drewbailey Ok, I will try to replicate it again and fetch the logs this time. Do I need to run it with DEBUG logging enabled for a better trace? In that case I might need to redeploy some things.

@drewbailey
Contributor

@kaspergrubbe DEBUG or TRACE would be ideal. If you are on 0.10.2, you can monitor instead of redeploying by running nomad monitor -server-id=leader -log-level=debug. The verbose output of the job and allocs would be helpful too: nomad job status -verbose billetto-production-web.
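
For example, to capture everything in one go during a failing deploy, something along these lines should work (the output file names are just examples):

$ nomad monitor -server-id=leader -log-level=debug > leader-debug.log   # leave running during the deploy, Ctrl-C afterwards
$ nomad job status -verbose billetto-production-web > job-status.txt
$ nomad alloc status -verbose <alloc-id> > alloc-status.txt             # for each healthy alloc that got killed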

Sorry that I'm unable to reproduce this; I'll be slow to respond over the next two weeks.

@kaspergrubbe
Author

Hi @drewbailey, I've finally had some time to test this out. It was really useful to know about the nomad monitor command; this is my output:

$ nomad run -verbose nomad_deploy/test.hcl
==> Monitoring evaluation "3bf0190d-400d-7c3b-8ff4-15381dcf354c"
    Evaluation triggered by job "billetto-production-web"
    Evaluation within deployment: "c726d965-8e63-74af-f94b-77e994c46e4b"
    Allocation "1ec9dcb0-3206-8a52-f505-f66f0a0769ca" created: node "aef43eaf-137f-659f-dfa8-2a7a18aac3fe", group "group"
    Allocation "1f5b47c0-e95c-34e6-5f3d-4de402e75857" created: node "d66f862c-ea3f-acdb-262a-2f4c56b1953d", group "group"
    Allocation "b8e696bd-135c-1124-c851-a0fd7f8e8f36" created: node "0301a622-d0d8-b039-da78-b897001dd3a9", group "group"
    Allocation "f24e5a8f-1e7d-8715-b80b-41eb10339dd1" created: node "dffcab97-4335-ae80-ac61-efbb24edea2c", group "group"
    Evaluation status changed: "pending" -> "complete"
==> Evaluation "3bf0190d-400d-7c3b-8ff4-15381dcf354c" finished with status "complete"

This time we ended up with only one healthy allocation:

screenie_1577722689_1405718

I've attached the full log from nomad monitor -server-id=leader -log-level=debug as well.
leader.log

Here is the job file with our secrets removed:

job "billetto-production-web" {
  datacenters = ["eu-west-1"]
  type = "service"

  update {
    max_parallel = 4
    min_healthy_time = "10s"
    healthy_deadline = "3m"
    progress_deadline = "10m"
    auto_revert = false
    auto_promote = false
    canary = 4
  }

  migrate {
    max_parallel = 3
    health_check = "checks"
    min_healthy_time = "10s"
    healthy_deadline = "5m"
  }

  group "group" {
    count = 4

    restart {
      attempts = 0
      interval = "30m"
      delay = "15s"
      mode = "fail"
    }

    ephemeral_disk {
      size = 300
    }

    task "nginx" {
      driver = "docker"

      template {
        data = <<EOF
I_AM_FAILING

upstream unicorn {
  # fail_timeout=0 means we always retry an upstream even if it failed
  # to return a good HTTP response (in case the unicorn master nukes a
  # single worker for timing out).

  # for UNIX domain socket setups:
  server unix:{{ env "NOMAD_ALLOC_DIR" }}/unicorn.sock fail_timeout=0;
}

server {
  listen       80;

  proxy_set_header Host $http_host;
  proxy_redirect off;

  client_max_body_size 20m;

  error_page 500 502 503 504 /500.html;

  root /app;

  gzip on;               # enable gzip
  gzip_http_version 1.1; # turn on gzip for http 1.1 and higher
  gzip_disable "msie6";  # IE 6 had issues with gzip
  gzip_comp_level 5;     # inc compresion level, and CPU usage
  gzip_min_length 100;   # minimal weight to gzip file
  gzip_proxied any;      # enable gzip for proxied requests (e.g. CDN)
  gzip_buffers 16 8k;    # compression buffers (if we exceed this value, disk will be used instead of RAM)
  gzip_vary on;          # add header Vary Accept-Encoding (more on that in Caching section)

  # define files which should be compressed
  gzip_types text/plain;
  gzip_types text/css;
  gzip_types text/xml;
  gzip_types application/xhtml+xml;
  gzip_types application/xml;
  gzip_types text/javascript;
  gzip_types application/x-javascript;
  gzip_types application/javascript;
  gzip_types application/json;
  gzip_types image/svg+xml;
  gzip_types image/x-icon;
  gzip_types application/rss+xml;

  gzip_types application/vnd.ms-fontobject;
  gzip_types font/opentype;
  gzip_types application/x-font;
  gzip_types application/x-font-opentype;
  gzip_types application/x-font-otf;
  gzip_types application/x-font-truetype;
  gzip_types application/x-font-ttf;
  gzip_types font/opentype;
  gzip_types font/otf;
  gzip_types font/ttf;

  location / {
    server_tokens off;
    try_files $uri @app;
  }

  location @app {
    proxy_set_header X-Real-IP         $remote_addr;
    proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
    proxy_set_header Host              $http_host;
    proxy_redirect off;

    proxy_pass http://unicorn;

    proxy_connect_timeout 30;
    proxy_read_timeout 30;
  }
}
EOF
        destination = "local/rails.conf"
        change_mode   = "signal"
        change_signal = "SIGHUP"
      }

      config {
        image = "billetto/billetto-rails:4.2.415-nginx"

        volumes = [
          "local/rails.conf:/etc/nginx/conf.d/rails.conf",
        ]

        port_map {
          nginx = 80
        }

        dns_servers = ["172.17.0.1"]
      }

      resources {
        cpu    = 200
        memory = 100
        network {
          mbits = 50
          port "nginx" {}
        }
      }

      service {
        name = "billetto-production-nginx"
        tags = []
        port = "nginx"

        check {
          name     = "billetto-production-web nginx healthcheck"
          type     = "http"
          protocol = "http"
          path     = "/debug/healthcheck"
          interval = "5s"
          timeout  = "3s"
        }
      }
    }

    task "logging-nginx" {
      driver = "docker"

      logs {
        max_files     = 1
        max_file_size = 1
      }

      config {
        image = "billetto/datadog-rsyslog:0.0.16"
      }

      env {
        DATADOG_TOKEN = ""
        LOGGED_TASK_NAME = "nginx"
        SYSLOG_SERVICE_NAME = "${NOMAD_JOB_NAME}-${NOMAD_GROUP_NAME}-nginx"
        SYSLOG_HOST_NAME = "${attr.unique.hostname}"
        SYSLOG_SOURCE_NAME = "nginx"
        SYSLOG_TAGS = "allocid:${NOMAD_ALLOC_ID}"
      }

      resources {
        cpu = 100
        memory = 32
      }
    }

    task "rails" {
      driver = "docker"

      env {}

      config {
        image = "billetto/billetto-rails:4.2.415"

        command = "/usr/local/bin/thpoff"
        args = ["bin/bundle", "exec", "unicorn", "-c", "config/unicorn.rb", "--no-default-middleware"]

        port_map {
          web = 3000
        }

        dns_servers = ["172.17.0.1"]
      }

      resources {
        cpu    = 750 # 500 MHz
        memory = 1650 # MB
        network {
          mbits = 50
          port "web" {}
        }
      }

      service {
        name = "billetto-production-web"
        tags = []
        port = "web"

        check {
          name     = "billetto-production-web rails healthcheck"
          type     = "http"
          protocol = "http"
          path     = "/debug/healthcheck"
          interval = "5s"
          timeout  = "3s"
        }
      }
    }

    task "logging-rails" {
      driver = "docker"

      logs {
        max_files     = 1
        max_file_size = 1
      }

      config {
        image = "billetto/datadog-rsyslog:0.0.16"
      }

      env {
        DATADOG_TOKEN = ""
        LOGGED_TASK_NAME = "nginx"
        SYSLOG_SERVICE_NAME = "${NOMAD_JOB_NAME}-${NOMAD_GROUP_NAME}-rails"
        SYSLOG_HOST_NAME = "${attr.unique.hostname}"
        SYSLOG_SOURCE_NAME = "rails"
        SYSLOG_TAGS = "allocid:${NOMAD_ALLOC_ID}"
      }

      resources {
        cpu = 100
        memory = 32
      }
    }
  }
}

@kaspergrubbe
Author

I hope you had a nice holiday season and a fantastic New Year's party!

I've tried (and failed) yet again to replicate this on my one-node Nomad cluster, but I can make it happen on our production cluster every time now. I've had a look at the log and I don't see the culprit there; I guess I'm not well-versed enough in the Nomad universe.

@drewbailey Let me know if I can do anything to help you debug this further. We could perhaps do a screen-share session if that would be easier for you; I'm in the UTC timezone, but I am flexible.

@drewbailey
Contributor

@kaspergrubbe thanks for the additional info. Unfortunately I didn't see anything in the logs; curiously, I also didn't see eval 3bf0190d-400d-7c3b-8ff4-15381dcf354c in there either. Was the timing of the monitor command and the deploy the same?

Are you able to monitor Consul at all during the deploy? Are the checks ever passing and then failing, or always failing?

Just to try, could you change the nginx healthcheck to something different from its upstream rails endpoint? Something like what I have below, to rule out any weirdness from the two tasks using the same check:

   location /nginx_health {
      return 200 'nginx health';
   }

@drewbailey
Contributor

@kaspergrubbe if you'd be willing, we have an unofficial debugging tool which may be helpful here. If you are open to trying it, it should automatically collect some useful output related to the job.

The files are located here: https://gist.github.com/drewbailey/cb1421fb4aed50fbea15554df7c7fa48. It has dependencies on bash, curl, jq, and graphviz.

From the output above, if the info is still there, nomad eval status -verbose 3bf01 and nomad deployment status c726d would be helpful.

@drewbailey drewbailey moved this from Needs Triage to Triaged in Nomad - Community Issues Triage Jan 7, 2020
@kaspergrubbe
Author

kaspergrubbe commented Jan 8, 2020

Just to try, could you change the nginx healthcheck to something different than its upstream rails endpoint? something like what I have below to rule out any sort of weirdness with the two tasks using the same check

Good catch; however, that didn't seem to solve the issue. I've changed it to that now.

I just noticed something funky: it's also possible for our Nomad servers to schedule too many containers. We have count = 4, canary = 4, and max_parallel = 4, yet this just happened:

screenie_1578499503_702942
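
One way to see exactly how many allocations exist per job version at a moment like this is to group the output of the allocations API with jq; a rough sketch (the agent address and job name are just examples, adjust to your setup):

$ curl -s http://127.0.0.1:4646/v1/job/billetto-production-web/allocations \
    | jq 'group_by(.JobVersion) | map({version: .[0].JobVersion, count: length})'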

@kaspergrubbe
Author

It seems like the service becomes deregistered in Consul during a failing deploy:

screenie_1578500752_46644

I'm now looking into the Consul logs.

@kaspergrubbe
Author

kaspergrubbe commented Jan 8, 2020

I checked the logs on a Consul server during a failing deploy, but I don't see anything suspicious (not even the deregistration). Is there a smarter way to log the state of a Consul cluster than checking each Consul server and client individually?

Here's the Consul log:
consul.log

The services that are being deregistered are called billetto-production-web and billetto-production-nginx.
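
One lightweight way to watch this from a single place, rather than tailing logs on every agent, is to poll the Consul health API for the service while the deploy runs; a rough sketch (the agent address and interval are just examples):

$ while true; do
    echo -n "$(date +%T) passing instances: "
    curl -s "http://127.0.0.1:8500/v1/health/service/billetto-production-web?passing" | jq length
    sleep 2
  done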

@kaspergrubbe
Author

https://download.gnome.org is having issues, so I can't obtain Graphviz through Homebrew; I will try again later this evening.

@kaspergrubbe
Author

kaspergrubbe commented Jan 8, 2020

I've updated to Consul 1.6.2 (from 1.6.0) in the hope that there was a fix there, but no luck.

@drewbailey I've run the script, but the graphviz tool says there are errors in the file:

+ dot /var/folders/l8/rhc3mz1j4dgdd4vdzkrb22k40000gn/T/tmp.ntoICplr/graph.dot -Tsvg -o /var/folders/l8/rhc3mz1j4dgdd4vdzkrb22k40000gn/T/tmp.ntoICplr/graph.svg
Error: /var/folders/l8/rhc3mz1j4dgdd4vdzkrb22k40000gn/T/tmp.ntoICplr/graph.dot: syntax error in line 1688 near '-'

The file is here:
graph.txt

@kaspergrubbe
Author

Here's a new freshly failing deploy:

Healthy allocations running before failing deploy

  • e8410fc6-0ff9-b52c-50ec-20b7de942ad5 (we will focus on this allocation throughout).
  • 800da2bb-2f3f-2827-055d-d25cf8a7d096
  • 6d75950c-f69e-fcf2-1a7e-70d83b01816f
  • 455726d6-1c7a-3e1c-543d-d1c0841d891b

Job run

$ nomad job run test-fail.hcl
==> Monitoring evaluation "b43983ce"
    Evaluation triggered by job "billetto-production-webtest"
    Allocation "68832070" created: node "3db7ac14", group "group"
    Allocation "7df38246" created: node "38adf12b", group "group"
    Allocation "7fbd1cc4" created: node "38adf12b", group "group"
    Allocation "ddfa5ad1" created: node "3db7ac14", group "group"
    Evaluation within deployment: "482e62f2"
    Allocation "68832070" status changed: "pending" -> "running" (Tasks are running)
    Allocation "7df38246" status changed: "pending" -> "running" (Tasks are running)
    Allocation "7fbd1cc4" status changed: "pending" -> "running" (Tasks are running)
    Allocation "ddfa5ad1" status changed: "pending" -> "running" (Tasks are running)
    Evaluation status changed: "pending" -> "complete"
==> Evaluation "b43983ce" finished with status "complete"

Evaluation status

$ nomad eval status -verbose b43983ce
ID                 = b43983ce-f145-a7b9-6033-72603e2f869f
Create Time        = 2020-01-09T14:00:08Z
Modify Time        = 2020-01-09T14:00:09Z
Status             = complete
Status Description = complete
Type               = service
TriggeredBy        = job-register
Job ID             = billetto-production-webtest
Priority           = 50
Placement Failures = false
Previous Eval      = <none>
Next Eval          = <none>
Blocked Eval       = <none>

Deployment status

$ nomad deployment status 482e62f2
ID          = 482e62f2
Job ID      = billetto-production-webtest
Job Version = 33
Status      = failed
Description = Deployment marked as failed

Deployed
Task Group  Promoted  Desired  Canaries  Placed  Healthy  Unhealthy  Progress Deadline
group       false     4        4         12      0        12         2020-01-09T14:10:09Z

Healthy allocation that was removed

$  nomad alloc status -verbose e8410fc6-0ff9-b52c-50ec-20b7de942ad5
ID                  = e8410fc6-0ff9-b52c-50ec-20b7de942ad5
Eval ID             = 3c0715c4-161b-bfa2-4aab-0ca630e0d60b
Name                = billetto-production-webtest.group[1]
Node ID             = 38adf12b-1404-b538-1c9e-7c8a39027b36
Node Name           = ip-10-1-2-118.eu-west-1.compute.internal
Job ID              = billetto-production-webtest
Job Version         = 32
Client Status       = complete
Client Description  = All tasks have completed
Desired Status      = stop
Desired Description = alloc not needed due to job update
Created             = 2020-01-09T05:33:11Z
Modified            = 2020-01-09T14:00:42Z
Deployment ID       = 82e00fca-df2b-5c1f-c63f-ade120ea1f4e
Deployment Health   = healthy
Evaluated Nodes     = 5
Filtered Nodes      = 0
Exhausted Nodes     = 2
Allocation Time     = 91.69µs
Failures            = 0

Task "rails" is "dead"
Task Resources
CPU        Memory           Disk     Addresses
0/750 MHz  999 MiB/1.6 GiB  300 MiB  web: 10.1.2.118:22726

Task Events:
Started At     = 2020-01-09T05:33:11Z
Finished At    = 2020-01-09T14:00:42Z
Total Restarts = 0
Last Restart   = N/A

Recent Events:
Time                  Type        Description
2020-01-09T14:00:42Z  Killed      Task successfully killed
2020-01-09T14:00:42Z  Terminated  Exit Code: 0
2020-01-09T14:00:42Z  Killing     Sent interrupt. Waiting 5s before force killing
2020-01-09T05:33:11Z  Started     Task started by client
2020-01-09T05:33:11Z  Task Setup  Building Task Directory
2020-01-09T05:33:11Z  Received    Task received by client

Placement Metrics
  * Resources exhausted on 2 nodes
  * Dimension "memory" exhausted on 2 nodes
Node                                  binpack  job-anti-affinity  node-affinity  node-reschedule-penalty  final score
38adf12b-1404-b538-1c9e-7c8a39027b36  0.835    0                  0              0                        0.835
639210c1-987f-a8b7-7a1e-d7e1db587f27  0.431    0                  0              0                        0.431
3db7ac14-275a-3601-ae4d-0603fd064b06  0.967    -0.5               0              0                        0.233

Info from one of the failed allocations:

$ nomad alloc status -verbose 2f29c457-0041-61e6-a0b9-bb9f36657c26
ID                  = 2f29c457-0041-61e6-a0b9-bb9f36657c26
Eval ID             = 31678a73-c55f-d15d-cd92-29dd16fc378b
Name                = billetto-production-webtest.group[1]
Node ID             = fa5e43ae-fb1d-be84-d41a-4705b108c681
Node Name           = ip-10-1-1-238.eu-west-1.compute.internal
Job ID              = billetto-production-webtest
Job Version         = 33
Client Status       = failed
Client Description  = Failed tasks
Desired Status      = run
Desired Description = <none>
Created             = 2020-01-09T14:01:41Z
Modified            = 2020-01-09T14:01:44Z
Deployment ID       = 482e62f2-a1d5-b602-42eb-501c54962a8e
Deployment Health   = unhealthy
Canary              = true
Evaluated Nodes     = 7
Filtered Nodes      = 0
Exhausted Nodes     = 3
Allocation Time     = 84.708µs
Failures            = 0

Task "rails" is "dead"
Task Resources
CPU      Memory   Disk     Addresses
750 MHz  1.6 GiB  300 MiB  web: 10.1.1.238:28735

Task Events:
Started At     = 2020-01-09T14:01:42Z
Finished At    = 2020-01-09T14:01:42Z
Total Restarts = 0
Last Restart   = N/A

Recent Events:
Time                  Type             Description
2020-01-09T14:01:42Z  Alloc Unhealthy  Unhealthy because of failed task
2020-01-09T14:01:42Z  Not Restarting   Policy allows no restarts
2020-01-09T14:01:42Z  Terminated       Exit Code: 1, Exit Message: "Docker container exited with non-zero exit code: 1"
2020-01-09T14:01:42Z  Started          Task started by client
2020-01-09T14:01:41Z  Task Setup       Building Task Directory
2020-01-09T14:01:41Z  Received         Task received by client

Placement Metrics
  * Resources exhausted on 3 nodes
  * Dimension "memory" exhausted on 3 nodes
Node                                  binpack  job-anti-affinity  node-affinity  node-reschedule-penalty  final score
fa5e43ae-fb1d-be84-d41a-4705b108c681  0.644    -0.5               0              0                        0.0721
639210c1-987f-a8b7-7a1e-d7e1db587f27  0.644    -0.5               0              0                        0.0721
38adf12b-1404-b538-1c9e-7c8a39027b36  0.784    0                  0              -1                       -0.108
3db7ac14-275a-3601-ae4d-0603fd064b06  0.759    0                  0              -1                       -0.12

@kaspergrubbe
Author

@drewbailey Is this what you need?

nomad eval status -verbose 3c0715c4-161b-bfa2-4aab-0ca630e0d60b
ID                 = 3c0715c4-161b-bfa2-4aab-0ca630e0d60b
Create Time        = 2020-01-09T05:33:10Z
Modify Time        = 2020-01-09T05:33:11Z
Status             = complete
Status Description = complete
Type               = service
TriggeredBy        = job-register
Job ID             = billetto-production-webtest
Priority           = 50
Placement Failures = false
Previous Eval      = <none>
Next Eval          = <none>
Blocked Eval       = <none>

@drewbailey
Contributor

@kaspergrubbe thanks. Do you have info on the previous deployment, 82e00fca-df2b-5c1f-c63f-ade120ea1f4e? Was it ever promoted? If you could provide the deployment output for a healthy allocation that was killed, that would be helpful, as well as nomad job status -verbose for the job.

Also, if you don't mind uploading the nomad job history output for a failed deploy that matches up with the jobs/deploys pasted above, that'd be great. Thanks in advance; we have a few ideas that we are trying to reproduce with now.
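
For reference, something like the following should collect all of that in one go (the deployment ID is taken from the output above; file names are just examples):

$ nomad deployment status -verbose 82e00fca-df2b-5c1f-c63f-ade120ea1f4e > previous-deployment.txt
$ nomad job status -verbose billetto-production-webtest > job-status.txt
$ nomad job history billetto-production-webtest > job-history.txt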

@kaspergrubbe
Author

Hi @drewbailey, thank you so much for getting back to me; it is very much appreciated, as we have halted our Nomad rollout because of this issue!

Thanks in advance; we have a few ideas that we are trying to reproduce with now.

This sounds promising. I can't replicate it locally on my own one-node cluster, but I can replicate it every time on our production cluster, so tell me if there is anything else you want me to test.

This is the flow (newest first), so the one you asked for is further down:

screenie_1578675738_3716018

4507daa1-9030-4150-1d86-5a00688fdc26

$ nomad deployment status -verbose 4507daa1
ID          = 4507daa1-9030-4150-1d86-5a00688fdc26
Job ID      = billetto-production-webtest
Job Version = 35
Status      = successful
Description = Deployment completed successfully

Deployed
Task Group  Desired  Placed  Healthy  Unhealthy  Progress Deadline
group       4        4       4        0          2020-01-10T11:47:18Z

ad47ca9a-715b-e120-6b2a-2c31efb87c9d

$ nomad deployment status -verbose ad47ca9a
ID          = ad47ca9a-715b-e120-6b2a-2c31efb87c9d
Job ID      = billetto-production-webtest
Job Version = 34
Status      = cancelled
Description = Cancelled due to newer version of job

Deployed
Task Group  Desired  Placed  Healthy  Unhealthy  Progress Deadline
group       4        4       0        0          2020-01-10T11:46:13Z

482e62f2-a1d5-b602-42eb-501c54962a8e

$ nomad deployment status -verbose 482e62f2
ID          = 482e62f2-a1d5-b602-42eb-501c54962a8e
Job ID      = billetto-production-webtest
Job Version = 33
Status      = failed
Description = Deployment marked as failed

Deployed
Task Group  Promoted  Desired  Canaries  Placed  Healthy  Unhealthy  Progress Deadline
group       false     4        4         12      0        12         2020-01-09T14:10:09Z

82e00fca-df2b-5c1f-c63f-ade120ea1f4e

$ nomad deployment status -verbose 82e00fca-df2b-5c1f-c63f-ade120ea1f4e
ID          = 82e00fca-df2b-5c1f-c63f-ade120ea1f4e
Job ID      = billetto-production-webtest
Job Version = 32
Status      = successful
Description = Deployment completed successfully

Deployed
Task Group  Promoted  Desired  Canaries  Placed  Healthy  Unhealthy  Progress Deadline
group       true      4        4         8       8        0          2020-01-09T05:43:48Z

(It is a bit strange that it says Placed = 8 and Healthy = 8 when the job file specifies a count of 4.)

fd05cdee-9b0b-f2dc-8d2d-db49dab57b5d

$ nomad deployment status -verbose fd05cdee
ID          = fd05cdee-9b0b-f2dc-8d2d-db49dab57b5d
Job ID      = billetto-production-webtest
Job Version = 26
Status      = successful
Description = Deployment completed successfully

Deployed
Task Group  Promoted  Desired  Canaries  Placed  Healthy  Unhealthy  Progress Deadline
group       true      4        4         8       5        3          2020-01-09T05:31:29Z

f97ecb5e-4441-a9de-6bde-a1b0f49416d9

$ nomad deployment status -verbose f97ecb5e
ID          = f97ecb5e-4441-a9de-6bde-a1b0f49416d9
Job ID      = billetto-production-webtest
Job Version = 20
Status      = failed
Description = Deployment marked as failed

Deployed
Task Group  Promoted  Desired  Canaries  Placed  Healthy  Unhealthy  Progress Deadline
group       false     4        4         8       0        8          2020-01-09T02:36:16Z

f5911776-dead-6371-794b-b50bdd9ed7c2

ID          = f5911776-dead-6371-794b-b50bdd9ed7c2
Job ID      = billetto-production-webtest
Job Version = 9
Status      = successful
Description = Deployment completed successfully

Deployed
Task Group  Promoted  Desired  Canaries  Placed  Healthy  Unhealthy  Progress Deadline
group       true      4        4         8       8        0          2020-01-09T02:17:37Z

@kaspergrubbe
Author

Here's the output of nomad job status -verbose; it's a bit noisy as I've been testing a lot:

$ nomad job status -verbose billetto-production-webtest
ID            = billetto-production-webtest
Name          = billetto-production-webtest
Submit Date   = 2020-01-10T11:39:00Z
Type          = service
Priority      = 50
Datacenters   = eu-west-1
Status        = running
Periodic      = false
Parameterized = false

Summary
Task Group  Queued  Starting  Running  Failed  Complete  Lost
group       0       0         4        148     97        0

Evaluations
ID                                    Priority  Triggered By        Status    Placement Failures
3f2ee7dc-e14e-9132-c298-cfe3c07eabc1  50        deployment-watcher  complete  false
aa39bc9f-b3eb-2b8d-ad76-c544d8bbd01c  50        deployment-watcher  complete  false
15295d13-6bbe-93ce-fbd2-6d5ac276c1a1  50        job-register        complete  false
6a2c7e59-81fa-7fec-b78e-cfda09a60bc1  50        alloc-failure       complete  false
d71ee4d4-12a2-4f5c-fc0f-0ea7c37e3652  50        deployment-watcher  complete  false
79306f76-5cf4-a769-67ee-62968fdd1089  50        alloc-failure       complete  false
0622569b-95c3-d518-42d9-cd273b16a2da  50        alloc-failure       complete  false
537651a8-a032-da14-de98-243ab1d17cf4  50        alloc-failure       complete  false
1b4ae233-8a8e-01f2-045e-c652c3666e48  50        alloc-failure       complete  false
d68f2b53-d0f7-3389-eda0-b9cc7f5d1e1b  50        alloc-failure       complete  false
c5d131b7-0233-9e75-5443-de120ae31330  50        deployment-watcher  complete  false
dc1c4240-0d52-acf1-4893-6b224dc7aaf2  50        alloc-failure       complete  false
3b0635df-deec-9664-8c8b-527217935f0c  50        alloc-failure       complete  false
91f2611f-7476-1b1b-889f-349c332c4c98  50        alloc-failure       complete  false
0db6553d-3f62-1ff3-d337-796c84965b5a  50        job-register        complete  false
94950763-0792-1f44-edb8-84a03bf40bd1  50        deployment-watcher  complete  false
458dba96-e1b4-61fd-d202-72b243245a40  50        deployment-watcher  complete  false
cf7e6f8c-e549-6d47-25ec-a9410b40d84c  50        job-register        complete  false
030f7155-cd21-7653-691a-5895676c02b9  50        job-register        complete  false
7949408a-031e-cec4-4fbf-6babed7f2493  50        node-update         complete  false
bfae3d73-b216-1c94-4f2b-15efd04cf474  50        node-update         complete  false
fc89dfec-657f-a758-2351-e2e605094be8  50        node-update         complete  false
190f6f8f-0b41-d07f-1679-7a610aeb2839  50        node-update         complete  false
6373b1f0-c2ea-2514-8e19-5ef1eb50890e  50        deployment-watcher  complete  false
be33f947-c73b-4a3a-b4f5-32dae7a84a69  50        alloc-failure       complete  false
8d1f1ef0-8d43-6769-379e-2f241b59df16  50        deployment-watcher  complete  false
f4c9a45f-30b4-133a-2766-d504ebacf7f3  50        alloc-failure       complete  false
271b58d8-77ef-9ad8-a1ba-6e8aa46cc236  50        alloc-failure       complete  false
572a60e3-2f63-9ef7-127b-765489ece84f  50        alloc-failure       complete  false
31678a73-c55f-d15d-cd92-29dd16fc378b  50        alloc-failure       complete  false
4898c198-be84-fe70-dd78-bee0d40f4569  50        deployment-watcher  complete  false
f6d99953-0249-9705-427c-dd9f15087586  50        alloc-failure       complete  false
c6c0f6fe-48ad-b582-90d5-48e8eef001fc  50        alloc-failure       complete  false
eb7c7ab5-e3bf-aa5d-ddb0-b940138c5f79  50        alloc-failure       complete  false
dd9b46e2-5264-ea1c-7d42-c3107f1d90c0  50        alloc-failure       complete  false
8db09d66-53ca-8b52-a211-bd83e8af70b9  50        deployment-watcher  complete  false
5180af67-135b-3478-5b22-5925a6866ce9  50        alloc-failure       complete  false
c6ab467c-392a-8113-bf3a-5dc7a0296fe3  50        alloc-failure       complete  false
ea6f4e72-ea0f-09a6-a5d7-bee08a219214  50        alloc-failure       complete  false
b43983ce-f145-a7b9-6033-72603e2f869f  50        job-register        complete  false
6b32979d-d277-75f1-aedd-67367952a7ad  50        deployment-watcher  complete  false
1ae73637-5d16-6c35-ec05-3267ae49e2e8  50        deployment-watcher  complete  false
e44f764a-96ab-3733-6f1a-afb1403ba0d6  50        deployment-watcher  complete  false
73ea934f-f35c-de91-0c4a-571637b4250d  50        deployment-watcher  complete  false
90ab8590-ffea-2d12-bdeb-22614b5670eb  50        deployment-watcher  complete  false
3c0715c4-161b-bfa2-4aab-0ca630e0d60b  50        job-register        complete  false
7d36ecec-5652-7e1c-1835-a6909ae11810  50        alloc-failure       complete  false
4a845341-4107-5cee-0978-5b7609fddf7f  50        deployment-watcher  complete  false
72efbb5a-9403-fe13-639a-9c2d5954a6e5  50        alloc-failure       complete  false
71615552-6c1f-8ed2-c8c5-0d269704a579  50        alloc-failure       complete  false
8bbff2e7-ead6-1bb3-f3a5-8dbbc1251362  50        alloc-failure       complete  false
1f9aae5f-89a9-4c75-8d55-8c2b9a80c429  50        alloc-failure       complete  false
f1136801-007c-5f82-a042-8d41f9d16905  50        alloc-failure       complete  false
47e3cba7-accd-8ff2-d5e5-d620dc18edf2  50        deployment-watcher  complete  false
5a540281-1d4f-d2ba-711d-0b9a73a6a85a  50        alloc-failure       complete  false
7d71c523-ec33-c820-b330-36a42a450d28  50        alloc-failure       complete  false
9590da5f-b3d5-6251-0fb9-56cc01fbdaad  50        alloc-failure       complete  false
ed2cd7d5-2047-7422-50f6-cac1dbfc9880  50        alloc-failure       complete  false
ccbd5187-ec75-5024-0582-244f394361df  50        job-register        complete  false
cc47c17e-d92c-2fbc-1df2-78d089a7d9c2  50        deployment-watcher  complete  false
678bd502-8635-87b8-a229-25d4c40db043  50        deployment-watcher  complete  false
a7f230ae-b0fc-e138-33e8-91b532be5e9a  50        deployment-watcher  complete  false
72463553-af46-b0cc-0198-1a952a722323  50        deployment-watcher  complete  false
3230b366-3212-b0ca-bf42-399d4f285297  50        deployment-watcher  complete  false
57c9971a-fc0b-a6be-f0cf-2000a92d488c  50        job-register        complete  false
4a2881c6-c394-2124-b590-574fe6b92fd0  50        alloc-failure       complete  false
89d71d46-b097-d950-2233-6f1d30843f70  50        deployment-watcher  complete  false
f02f36c7-aa65-60fd-533a-4abd3a345f2c  50        alloc-failure       complete  false
222417bd-abb2-3f18-f021-37fce3125e0a  50        alloc-failure       complete  false
22a6167f-47e2-473c-360b-3cacdfca3ffb  50        alloc-failure       complete  false
e91aff86-c1bf-f0fa-bc97-395909f261a2  50        alloc-failure       complete  false
88ffe121-11e5-711d-df81-38c763059b3b  50        alloc-failure       complete  false
c390eaaf-8022-bd0f-ad13-f294e73802da  50        deployment-watcher  complete  false
4bd9bfd8-eb08-a307-0bfc-13b397896691  50        alloc-failure       complete  false
b70da68d-8c1d-5415-a07e-b88cb7c1788c  50        alloc-failure       complete  false
8f4693b2-21bd-b91b-485e-7c5a0409aeec  50        alloc-failure       complete  false
9f0a6d40-fdba-7e7e-0182-fb15039db454  50        alloc-failure       complete  false
632f8b27-75c0-1ff2-7b8d-5b922759cfc1  50        job-register        complete  false
0c36433f-a95e-1ce8-2ee0-1956e813e8b1  50        deployment-watcher  complete  false
dbcde6ce-1a23-4f80-dbe7-5f4399efa782  50        deployment-watcher  complete  false
0130afcc-6740-096d-ad28-0bb7798628d8  50        job-register        complete  false
b8d5725e-25a6-e1f6-f96b-e7c63a9ce7e5  50        alloc-failure       complete  false
c292e84a-afb9-3326-9d30-aecbd49378c2  50        deployment-watcher  complete  false
62ed6c0a-bd48-b26a-8c31-af01d64ea107  50        alloc-failure       complete  false
327284c5-eae7-b1c0-49b9-e29ce6ca86d7  50        alloc-failure       complete  false
1445b906-f508-3775-155c-4a12f73edb75  50        alloc-failure       complete  false
711203c3-4264-751c-6e5f-f8003a4045cb  50        alloc-failure       complete  false
224ffb16-7d99-df4e-3ab7-9c8c62a664c6  50        alloc-failure       complete  false
b713b8d1-784b-62bc-cb27-416ac2db8391  50        deployment-watcher  complete  false
f944674b-9a29-0f69-ecb5-f7b186501d6e  50        alloc-failure       complete  false
35b6b141-047b-29db-472c-6e93de11a129  50        alloc-failure       complete  false
4084e8eb-a930-bd5f-8240-7b98f313f92e  50        alloc-failure       complete  false
ca5960ea-453a-12b8-ecd3-7d4764cb79f7  50        alloc-failure       complete  false
7472f791-41f4-0215-9536-ba541b7ce0ae  50        job-register        complete  false
74427462-3511-19a9-59e1-1d2b94a939fb  50        deployment-watcher  complete  false
4b1c9268-a92f-a0eb-4fb1-6ed7ed2c3cae  50        deployment-watcher  complete  false
cc5dbeab-bee6-45ab-50b0-50e9a6824149  50        deployment-watcher  complete  false
c08626f3-f1c7-3500-1390-20731c399a08  50        deployment-watcher  complete  false
137db530-d7f6-890b-0272-7840d81bb226  50        deployment-watcher  complete  false
bfbf6f74-5845-b137-5a71-7b79e53c5143  50        job-register        complete  false
988e45f8-b68d-b10b-862a-e5d16331c3be  50        alloc-failure       complete  false
9edf1cb6-1b30-3b72-0a61-b90bd5a59f54  50        deployment-watcher  complete  false
7f0ce757-de43-c36f-4d08-0c9db19d6891  50        alloc-failure       complete  false
ef566144-73d8-7b39-c848-ad69bf6eb105  50        alloc-failure       complete  false
315c12a5-3840-f085-b2d9-550274e59f72  50        alloc-failure       complete  false
786e6468-9b8d-e868-de39-4bfd2cab8181  50        alloc-failure       complete  false
aad0e407-5bac-632a-ee37-46de071303f0  50        alloc-failure       complete  false
927bdc01-bc08-9282-8079-ba76097f3bb0  50        alloc-failure       complete  false
923e4580-5a3f-f175-a854-f4e07ab62505  50        deployment-watcher  complete  false
ee5e91ee-f727-16dc-26f3-eba73416a92e  50        alloc-failure       complete  false
a6f93ca6-a06e-d28c-09b1-7f433e201a6e  50        alloc-failure       complete  false
038b0640-b765-e524-6d64-ea53f0477280  50        alloc-failure       complete  false
d710b23a-7919-eac6-9ede-8d00d1802219  50        alloc-failure       complete  false
769e952c-e1c5-0cae-3489-6f8eeea86f66  50        job-register        complete  false
5fb37edd-5a93-ba3c-7eea-df0045e75fa2  50        deployment-watcher  complete  false
15baae9b-367a-9638-dc59-ba722a0214e7  50        deployment-watcher  complete  false
f8460cee-cff1-5bd9-a89d-b14e3def7deb  50        deployment-watcher  complete  false
cff5b63f-69e1-165b-5863-f68ff0d435f9  50        job-register        complete  false
00d2c6a2-98b1-fa1a-3dda-672109b75a66  50        node-update         complete  false
7c53e7e8-f279-78be-ed9c-c00a2f116485  50        job-deregister      complete  false
c2593a5b-6413-eb95-9b9c-b545e3d926c4  50        node-update         complete  false
fd1da907-a2be-67a2-6146-a6075feb32c2  50        node-update         complete  false
ca75c211-e147-1a7d-5f2f-dae0ac7ccb65  50        node-update         complete  false
37503d97-713e-613f-2cba-4a492cf0b9e7  50        alloc-failure       complete  false
01725594-ffb6-3c4e-4cba-bb29567d52c4  50        alloc-failure       complete  false
105fe72f-efc4-ad13-69b0-f7c52e1ce195  50        alloc-failure       complete  false
35f90066-a97b-ebf6-5a39-dd64228c8f33  50        deployment-watcher  complete  false
6552ba36-7720-7348-b68c-50e7d2e002cd  50        alloc-failure       complete  false
11def8a4-24d5-7f7a-1c1e-e8fe60ae9547  50        deployment-watcher  complete  false
327638ff-195e-9a2a-afe9-44d833e0abd8  50        alloc-failure       complete  false
9873cc74-d196-9907-8257-a910e1fabd48  50        alloc-failure       complete  false
40275f6c-abd5-e14d-6362-6836528ff620  50        deployment-watcher  complete  false
a03999e9-255c-8c25-51e8-16edb56f216d  50        alloc-failure       complete  false
1e3f2c14-9a93-86d3-0a6c-541915d19aa1  50        deployment-watcher  complete  false
d18b1cd5-cd25-d8a9-f777-f001c247c22d  50        alloc-failure       complete  false
0a576548-fe09-7e54-8a06-a6c45f6e2c06  50        alloc-failure       complete  false
4a0199e9-ce83-9a56-5d32-b9302e5502b9  50        alloc-failure       complete  false
e2ed2cae-5d13-5bc9-8b4f-b7bf7f53da36  50        deployment-watcher  complete  false
ed76f360-dd87-9901-e617-d90e8f79398f  50        alloc-failure       complete  false
381de1bb-019f-90c7-f546-4d0830203bbe  50        alloc-failure       complete  false
6413a9d3-1844-c5e6-74eb-baa9a6fea3b1  50        deployment-watcher  complete  false
80aa6bc6-7c5c-7b02-cc3d-d6ebd22f67c3  50        alloc-failure       complete  false
86b29d10-8cee-3717-fbee-0f02124d4604  50        deployment-watcher  complete  false
f0aa1b45-57c0-5b0f-6a31-7dfa76338bc5  50        alloc-failure       complete  false
1eca2bd6-1dff-f823-a273-77eaa8c81c71  50        alloc-failure       complete  false
9845fa10-0ee6-196c-c2fc-d44f1bcc7248  50        alloc-failure       complete  false
f46ab54f-f738-620f-5456-f69a8b979d70  50        deployment-watcher  complete  false
fea2dd89-f10b-1fc6-9982-2edf66198f9f  50        alloc-failure       complete  false
bb66a5fa-44e3-360e-0645-61b54eae5314  50        alloc-failure       complete  false
2bdb364d-6f79-4b0e-0a8f-533e7770cb0c  50        deployment-watcher  complete  false
1a487d3b-395a-9f28-955b-547b696c1ec9  50        alloc-failure       complete  false
41e882f0-3f32-aa78-c81b-1099b56f91d8  50        alloc-failure       complete  false
7270cf6a-d3a8-8369-3c24-a8961a120f00  50        alloc-failure       complete  false
fd5ec5da-8707-6689-bcb3-dda270a2daeb  50        deployment-watcher  complete  false
ce11f79a-507d-867b-edab-29889fde9cdc  50        alloc-failure       complete  false
ebe1d974-286f-7cea-0970-81ecbba5a68f  50        alloc-failure       complete  false
cc62b774-c23d-941a-63b1-2af9a12e995a  50        deployment-watcher  complete  false
bb005fb5-30c6-1451-dbef-691d9a9fc03d  50        alloc-failure       complete  false
1ed72268-fe5b-e854-198d-05e1fb447f37  50        alloc-failure       complete  false
5d7c2d72-dcf5-9d53-e0de-4939291c1eae  50        alloc-failure       complete  false
e230c8b1-b2e4-5299-464d-81fe485a700a  50        job-register        complete  false
052cbef3-abe7-e907-6edf-562613a97110  50        deployment-watcher  complete  false
9812073d-376a-d393-0f26-2272b7fa3681  50        deployment-watcher  complete  false
f9e099fb-710c-eaae-dbe3-113e4173b2d2  50        job-register        complete  false
39a6895a-723d-2fc2-858a-79aa0af23053  50        deployment-watcher  complete  false
19202e1b-e487-8fd3-06ec-6a9c0acad563  50        alloc-failure       complete  false
2477a950-0c2e-c81b-57cb-3e606c22b05f  50        deployment-watcher  complete  false
f20981c8-0004-05a4-bd97-4f503e007475  50        alloc-failure       complete  false
e7e6814d-9beb-f2c9-ed49-4aa9db69dc34  50        alloc-failure       complete  false
cad61a05-5ab3-5705-45df-46f8c347f51f  50        alloc-failure       complete  false
4db6a469-8829-1a99-8086-4e52c2185c5d  50        alloc-failure       complete  false
8d3637b4-7f8d-5a82-1830-1ef335ddf73e  50        deployment-watcher  complete  false
2cbb6344-9b57-6fa0-a0dc-7f9e921e2ab4  50        alloc-failure       complete  false
56fa3aa4-4180-11be-d65d-f01705fffbb1  50        alloc-failure       complete  false
50a095dd-d05e-a51f-778b-31438ac8a397  50        alloc-failure       complete  false
8ec023b6-22fa-af55-01b5-5306b23ab7d7  50        job-register        complete  false
84b61e6c-ccc1-e9a1-947e-b9e6907aa932  50        deployment-watcher  complete  false
47a934b0-c91a-aa87-10ed-bac8b488cd0f  50        deployment-watcher  complete  false
e657f20c-645a-c19f-3a0e-0e714104a744  50        deployment-watcher  complete  false
eeafce8f-5fc1-8d14-42b7-da9caa9f371b  50        deployment-watcher  complete  false
14b394a6-8eb0-aec1-c91b-12c1b63a3ace  50        job-register        complete  false
7416621c-b22b-ec7d-36a7-14a8403b6044  50        deployment-watcher  complete  false
8150c4ca-0858-edcf-7e99-b1d0dbe5aa20  50        deployment-watcher  complete  false
593c79e9-d1eb-9f0c-f3af-9122219b54f3  50        deployment-watcher  complete  false
32a3b2c3-d0e9-658f-ee61-6b495f594e18  50        deployment-watcher  complete  false
9895f7ff-81a8-2504-5516-cfa0484984bf  50        deployment-watcher  complete  false
fc1d811b-8626-7aae-6508-116dceb911ad  50        deployment-watcher  complete  false
58849f1d-11ed-2e85-b29c-e5d3bd0f36c0  50        deployment-watcher  complete  false
7bdc008b-178e-5f9c-aaaf-0ac35c4e0c75  50        job-register        complete  false
7130377b-e6ff-4664-211c-e0a428fbbd0f  50        alloc-failure       complete  false
2d2683bc-64be-502d-6641-37dcb9a724e6  50        deployment-watcher  complete  false
e5d403a4-cdf0-af19-7057-e6760cdf613b  50        alloc-failure       complete  false
3dbbaf92-f5ab-3d40-35ad-4001adb06222  50        deployment-watcher  complete  false
34bc7fb5-6f9f-2326-2947-fb483d6ad746  50        alloc-failure       complete  false
011d1e5b-ec1f-db06-5e07-b67e532b8498  50        alloc-failure       complete  false
3ef2ceda-f80e-dc24-95e7-67cda85880c5  50        alloc-failure       complete  false
739c6574-1dfd-7514-ce9d-c1341a4cb858  50        alloc-failure       complete  false
fe5e8805-d879-3e6f-76a5-43b481520722  50        deployment-watcher  complete  false
399b76aa-14a9-afd0-4c96-23ea495186c4  50        alloc-failure       complete  false
d00ec84c-8917-2e91-47d7-fa508d97dbc6  50        alloc-failure       complete  false
0f9744f3-2e36-00f4-d426-32e0db336345  50        alloc-failure       complete  false
400ad86f-844b-92ac-9a3d-86d2fa26a307  50        alloc-failure       complete  false
2c948c05-c863-f0ea-225f-ff074580d0b3  50        job-register        complete  false
03af3b5f-db18-e20f-7e76-b5e9d9cb2d7a  50        deployment-watcher  complete  false
c42b889c-821d-8abb-d6fc-83731fc5acb7  50        job-register        complete  false
1404df49-a890-76cd-cbad-f7c125eab526  50        deployment-watcher  complete  false
9e76e947-e4e9-9370-b903-8608ff57a29e  50        deployment-watcher  complete  false
e79c0299-3e84-66cd-779c-033187ad0ad3  50        deployment-watcher  complete  false
bd5205a4-dffe-fd53-38cc-bbd5c95f6e08  50        deployment-watcher  complete  false
fa747019-2124-cf99-4861-10befc79ebcd  50        deployment-watcher  complete  false
63c82d49-c682-7aba-9cfb-94156292a452  50        deployment-watcher  complete  false
3c8c7804-1f8d-1d9c-c588-af024b272ae6  50        deployment-watcher  complete  false
9fb5989f-9d70-8f17-1f2c-09deb04bbe21  50        deployment-watcher  complete  false
0731ef0b-a44e-6d78-9394-64355c584f57  50        job-register        complete  false
3f47e881-0b5e-57f9-fc8b-ff82d30aec27  50        alloc-failure       complete  false
81591fcd-512c-faef-30e8-b51925c408fc  50        deployment-watcher  complete  false
6928e259-4ade-1341-e673-34eb464d8e8e  50        alloc-failure       complete  false
4d3b3d41-e193-0bfd-bab9-784dfb3086f6  50        alloc-failure       complete  false
c1d32ef9-5bfc-e460-38aa-415cafb6b0b3  50        alloc-failure       complete  false
4b05fa28-03fe-2e99-1ce6-7720128427f8  50        alloc-failure       complete  false
45066d4a-9517-5fc4-f5ae-165e4de7bbcc  50        deployment-watcher  complete  false
b7cfe082-97aa-e568-4fdf-0c46bd68e3b8  50        alloc-failure       complete  false
62ec7dae-9be0-7d85-d2e9-9e6c06bb615f  50        alloc-failure       complete  false
61829e0b-35a9-b61b-f166-2146c416133f  50        alloc-failure       complete  false
f1f787ae-c9c5-87a5-9d35-f9f0235dc4d4  50        alloc-failure       complete  false
ad7219c3-0ee8-41bf-b7b3-7e1235acc4f8  50        alloc-failure       complete  false
59d2be4c-eff5-9de1-99b9-936495789739  50        deployment-watcher  complete  false
8dcb0bed-63b9-db70-d16e-58ffe9fd174d  50        alloc-failure       complete  false
87021e72-27ba-acd6-b5a2-8b4f6b989e76  50        alloc-failure       complete  false
4a014dd7-eff2-4a9d-620c-b15dc04190f5  50        alloc-failure       complete  false
23964552-d20b-eab3-a6d0-abf70a26ca0d  50        alloc-failure       complete  false
64070c33-b5e8-8bdc-ccd3-28eb77ec0efb  50        job-register        complete  false
57bee34b-bc2b-ba48-d83f-61b1935baf39  50        deployment-watcher  complete  false
c40b0a32-4945-e60c-9deb-ea0bbc4ed52d  50        deployment-watcher  complete  false
ad01b9e4-fc21-af03-162d-73c189dd4e38  50        deployment-watcher  complete  false
e4c92146-55f7-b516-999d-3de29b582b71  50        job-register        complete  false
bd8c8f8d-8070-a6c3-e8a9-b949985f03b8  50        deployment-watcher  complete  false
ae9af091-dcac-ee1e-c35b-12cd9b9f1b8e  50        deployment-watcher  complete  false
dc32a720-cc73-ca61-760a-e488e1606604  50        deployment-watcher  complete  false
00ebe648-c7ff-f170-77e4-c4192b27e913  50        deployment-watcher  complete  false
5a2a814b-7143-0ae8-6d03-fdff8a721e50  50        deployment-watcher  complete  false
95fc979b-3289-c5d1-35f2-52401b87936f  50        job-register        complete  false
ebbf0022-11c1-0595-cb45-f5d9fd53d39d  50        alloc-failure       complete  false
7455c05f-2357-1448-76e3-6a49a530b37b  50        deployment-watcher  complete  false
40df6bda-5cb4-abee-1bc8-08c95f812fe4  50        alloc-failure       complete  false
7b00b687-b20c-7bae-41c8-102a7c3f609f  50        alloc-failure       complete  false
23733203-a605-5a8a-a754-2e5cbd7f239e  50        alloc-failure       complete  false
c455cfea-2758-33af-34bb-e4743c086f73  50        alloc-failure       complete  false
6f14aee9-4481-84bb-a4db-45f68a2126da  50        alloc-failure       complete  false
c823369e-5936-a746-b91b-5cbcfcffaab0  50        deployment-watcher  complete  false
c29aa89f-4449-a6df-81ca-23fff806f2b4  50        alloc-failure       complete  false
74cef206-c556-75bf-9c8f-2f73b36f9d5a  50        alloc-failure       complete  false
f6d4c6a0-b3f1-a07e-efce-c23f9cf409b5  50        alloc-failure       complete  false
c816c8b9-7f54-7f72-7453-c8a8111947d5  50        alloc-failure       complete  false
31fad478-cfe1-fa6f-51f4-50dadbb2e3c2  50        job-register        complete  false
36a7c00e-3c00-477c-9b17-042995350ab3  50        deployment-watcher  complete  false
740dc520-56cd-42c4-3a69-46b4f5ffd464  50        deployment-watcher  complete  false
5bf68fe2-3768-4354-15e9-d9a79e6f91c5  50        deployment-watcher  complete  false
1c85980b-d5a7-8e57-501e-6e9c4333147f  50        deployment-watcher  complete  false
ae526ee5-d8f3-61a6-b29a-b494004f707d  50        job-register        complete  false
3a005506-3ce4-03b5-ded1-5b1008de44f4  50        deployment-watcher  complete  false
3f19fa88-04d1-1c6f-3e3b-616fb5b6033a  50        deployment-watcher  complete  false
f67c3046-2992-4139-6430-13e737fda570  50        deployment-watcher  complete  false
6198d842-53f4-3b1d-fa74-b61b0390af91  50        deployment-watcher  complete  false
04651eb3-3825-d9b7-1224-e03f43fb3dfb  50        deployment-watcher  complete  false
51a53ec6-86d0-c012-bb5a-db4220e3950a  50        deployment-watcher  complete  false
8a448fa0-251b-980d-caa1-11f3e630b4ed  50        job-register        complete  false
817171a6-7c89-b7ea-67f0-98e71ac2ad29  50        alloc-failure       complete  false
f5d705e8-1490-d190-80ea-e93c7c7ba402  50        deployment-watcher  complete  false
7caa51e2-f53f-93f4-6b15-578bda0275a5  50        alloc-failure       complete  false
fbf7b612-25d7-8573-2d48-211eaa821bd7  50        alloc-failure       complete  false
22d4d549-adca-ca50-42a6-b44f58aeeb1b  50        deployment-watcher  complete  false
08f76b3b-163b-2ce3-3c7e-e833b0611ab9  50        alloc-failure       complete  false
f5380797-072c-d7cd-9316-df6979682cfc  50        alloc-failure       complete  false
66b50c3c-d16b-db32-5041-5c3a0e848a5a  50        deployment-watcher  complete  false
fbf80e55-b0d8-6a2e-bdf4-9e4fe13b3a85  50        alloc-failure       complete  false
c6c5256d-0588-5f4f-4594-e759fe120f61  50        alloc-failure       complete  false
c9ddce72-b790-95b2-23f3-81f97c63d09e  50        deployment-watcher  complete  false
fd388073-ec14-9532-06f6-e318862638e2  50        alloc-failure       complete  false
8f255f58-9f36-5caa-a5f8-c9ac9ebc278f  50        alloc-failure       complete  false
2a047a0c-c5f0-9fbc-43eb-395d0e7e52b1  50        deployment-watcher  complete  false
8b733a32-6c68-88e8-c726-3c9824b5d709  50        alloc-failure       complete  false
97761a9d-c23d-97b8-0502-d1d99ae6cb23  50        alloc-failure       complete  false
0bb550ee-2a66-58ff-8e08-2b203d5a948a  50        deployment-watcher  complete  false
9adb1340-9b1d-72a4-3d53-0892ed487c94  50        alloc-failure       complete  false
80c8bd78-a6a6-1a95-bc44-6f01aa05c7fa  50        deployment-watcher  complete  false
5163cafa-1c6f-7e8e-13d2-8fcaf52cf326  50        alloc-failure       complete  false
af41bf3c-3281-0900-0333-2442e7bcf03b  50        deployment-watcher  complete  false
4f94e78a-6c23-2912-b25c-bed144a75411  50        alloc-failure       complete  false
85101abc-5fc0-9dfe-ae33-6631db74aa18  50        job-register        complete  false
a8473a8f-4b78-4b26-a174-dcd26035617b  50        deployment-watcher  complete  false
3525327c-400a-ca95-b4a4-f1d9efd5e59c  50        deployment-watcher  complete  false
a3f5ac19-3aea-8f85-fd37-a1a01a0bde2b  50        deployment-watcher  complete  false
39beba42-3f4f-09f7-9722-fa1fad0c0fee  50        deployment-watcher  complete  false
03c20def-f9dc-f526-496e-38f83c30ab58  50        deployment-watcher  complete  false
265f5ff3-3dda-b1a8-2c69-f721e68ba625  50        deployment-watcher  complete  false
eb0b76a3-e77c-b48c-513e-e5778735f784  50        job-register        complete  false
2305f92f-cbcf-4f94-2322-816fb2a98827  50        alloc-failure       complete  false
3f29c3e6-6b1c-1ef2-ebae-69c5af1a0b78  50        deployment-watcher  complete  false
338d3efc-a014-88d1-6dcf-b3112c6d1fba  50        alloc-failure       complete  false
d0d2f23b-fe80-5e95-12df-5d05d570fb68  50        alloc-failure       complete  false
b970a9a4-86e4-8e01-0f52-269dcc9b591d  50        alloc-failure       complete  false
94db325b-e90e-e602-7fca-cc3875398b71  50        deployment-watcher  complete  false
0fa1d9a1-75df-197a-872a-e48e120d3695  50        alloc-failure       complete  false
56ce4d97-5818-9641-f7c2-650e4ed23b47  50        alloc-failure       complete  false
2966f427-8487-5dd1-48b3-61abb31b0865  50        alloc-failure       complete  false
f543c90a-b3bc-4158-1eac-55dc1a117732  50        deployment-watcher  complete  false
0aa52f9c-c660-a55a-b596-b6be087c1125  50        alloc-failure       complete  false
e7c4e7cc-6143-43e5-4e95-83337084d54f  50        alloc-failure       complete  false
c439f0eb-3732-4f3e-d84d-2793890c741f  50        deployment-watcher  complete  false
460c534c-8627-03f1-ccff-fc7a47ce22c4  50        alloc-failure       complete  false
6337315b-0a26-4c8b-8a8d-858a66f1a347  50        alloc-failure       complete  false
7ac89ff0-3b60-9bd9-d6ed-aae14ab3791b  50        alloc-failure       complete  false
64cc88db-517f-0935-e6f9-c4cb43e025e3  50        job-register        complete  false
5fa0e8b7-17b5-0d1c-d41b-ea69599aad2b  50        job-register        complete  false
f1c78432-a52a-69a7-bd97-10621c54b98c  50        deployment-watcher  complete  false
5b1965b7-b162-cb9f-87a1-a31907e8742d  50        deployment-watcher  complete  false
1af1f6cf-5ae7-4694-da05-d43cdeeb1271  50        deployment-watcher  complete  false
f9542a26-3ff2-44b4-d81e-3e663709d046  50        deployment-watcher  complete  false
737118d9-8315-1fe4-e728-90e83f360233  50        job-register        complete  false
da20deec-0ff7-1e81-5616-473374b27d89  50        alloc-failure       complete  false
763fd593-9854-ac38-49cc-aa442333d12c  50        deployment-watcher  complete  false
828e6d09-eec0-dfc4-6c38-9940be1c9853  50        alloc-failure       complete  false
9e19c27b-91e1-89e9-6f60-dcc2caa4b55f  50        alloc-failure       complete  false
8ec1e086-61ce-0983-a018-8a59583ab200  50        alloc-failure       complete  false
5cb57298-9382-835c-e189-2ec148a577bf  50        alloc-failure       complete  false
aee92998-93ae-5437-6880-2edb234147f2  50        deployment-watcher  complete  false
da10106e-e5e0-c2be-1926-6847dd9bacee  50        alloc-failure       complete  false
1ae921fd-2dde-2834-13e7-df0d067bde8c  50        job-register        complete  false
0c667cc8-53be-e75d-8f10-0abfa5615ca6  50        alloc-failure       complete  false
d7ba068a-2597-c2f0-0b06-9eed5f023a0f  50        deployment-watcher  complete  false
e1dddb81-c857-4615-80ad-39f5d0f49542  50        alloc-failure       complete  false
524f8c3a-f237-631c-e858-e0ad8ad8312b  50        alloc-failure       complete  false
197375ba-2eb0-6fb1-7ae7-117884c7770a  50        deployment-watcher  complete  false
f8b64216-d568-1646-8853-a59be1640dbe  50        alloc-failure       complete  false
4af301cf-47d4-cc57-a69c-86d9ad37afe5  50        alloc-failure       complete  false
8c7047a9-d1b9-b383-b786-313f5910932e  50        deployment-watcher  complete  false
56cb37a1-4c2c-75d1-55fa-b252f085f919  50        alloc-failure       complete  false
468c7ddf-b4b9-6f7a-48c3-2f8025ef4cee  50        alloc-failure       complete  false
f34085e4-8efc-9ad5-1830-08ebbffee016  50        deployment-watcher  complete  false
5804809f-82a9-7405-a641-32a75677a8c0  50        alloc-failure       complete  false
e5c32abf-2b55-ea44-b50c-b6a10317c981  50        deployment-watcher  complete  false
3838e66d-7716-f71f-b977-4d1ce2460308  50        alloc-failure       complete  false
72ac2ba9-4f2f-6ce0-a41a-6da34f57667e  50        alloc-failure       complete  false
6b042123-1f5d-4cda-883c-b8085df2fd26  50        deployment-watcher  complete  false
7bb0dcb7-f140-10e8-ac1d-f633bd753d7f  50        alloc-failure       complete  false
3289c65d-b8cc-eafb-758a-a21a19b3dcb1  50        deployment-watcher  complete  false
3107ec48-d6c4-48ef-ea8a-3128365f9747  50        alloc-failure       complete  false
b6686bea-07f9-efec-1c3e-35d654b7b06d  50        alloc-failure       complete  false
571e60a0-4c28-b69d-4271-222f94e2c8aa  50        deployment-watcher  complete  false
7ed51c56-3b95-fbaf-a527-631d9e7775d8  50        alloc-failure       complete  false
7c2a93b1-9c6b-07a2-6fbe-e14eea02e3d9  50        alloc-failure       complete  false
9b08e584-ad9d-5bb9-5c38-f6b48f4db2bc  50        deployment-watcher  complete  false
23f2ce15-b675-9fc6-777f-49c4b4263297  50        alloc-failure       complete  false
6f2aa14f-ed4d-c71a-2027-00dba8946282  50        alloc-failure       complete  false
b00df8cf-49b5-5288-fb35-9778a69768f3  50        alloc-failure       complete  false
09bba4c6-ab48-f8c5-cd25-7516080df487  50        deployment-watcher  complete  false
29376d3c-29be-aff9-e7f3-fca67ae6985b  50        alloc-failure       complete  false
a49a0b02-7152-5ca1-993e-659cf602b7ac  50        alloc-failure       complete  false
3fa16907-8335-b063-13f9-ceea5e9d80c6  50        job-register        complete  false
b6a2b0d3-0310-086e-5ba1-be662f7d4ac3  50        deployment-watcher  complete  false
d071322b-55f6-7fd9-7e30-57f37743cdc0  50        deployment-watcher  complete  false
e209eb03-d1c8-fd6a-70ca-e708fa605c44  50        deployment-watcher  complete  false
c91c38df-f91e-6273-ef5e-ab98401ee909  50        job-register        complete  false
13ef2668-036f-1ff4-d539-360cd0cb0742  50        deployment-watcher  complete  false
de956e9b-8f21-862a-ad60-fada2c5caabe  50        alloc-failure       complete  false
7234715f-65ef-8510-5e1c-34c76d5b064c  50        deployment-watcher  complete  false
183a9757-cc35-5f2b-afc8-815fcd4bb986  50        alloc-failure       complete  false
7bac4701-93d2-866e-3891-0c73a4cfabab  50        alloc-failure       complete  false
6d84551a-927e-c651-276e-70a6b3bacb9d  50        deployment-watcher  complete  false
2b19872d-f5b5-db5b-ec64-c78daf1224e2  50        alloc-failure       complete  false
60823701-243a-914a-5cc8-8028ee0930cd  50        alloc-failure       complete  false
1fab0781-a7db-fe6f-0d4b-561ba695d854  50        deployment-watcher  complete  false
f3405e2c-005f-47e0-1136-1d509a07bb93  50        alloc-failure       complete  false
453560af-e641-ef20-6577-c2e9a7821cf2  50        alloc-failure       complete  false
e494a889-e7ef-c6f3-4671-bb7209778f31  50        deployment-watcher  complete  false
5dd3e64f-75e5-cada-52bc-30214f1babb0  50        alloc-failure       complete  false
a6d4a7aa-1028-7d68-4082-277bcea40a35  50        alloc-failure       complete  false
2121f647-ad6c-ca8c-9536-3244a2c39c77  50        alloc-failure       complete  false
722d8336-1bb6-65d0-08a7-8b05486b87aa  50        deployment-watcher  complete  false
bcc2cc05-96e0-4326-94c7-6b0b7f15c8cd  50        alloc-failure       complete  false
19b93751-1652-5e36-f12e-2384b7de1c67  50        alloc-failure       complete  false
1f3ab4f4-f183-b475-45d4-24d41018f82a  50        deployment-watcher  complete  false
b83a01f8-99bf-fa69-f243-95716da017d9  50        alloc-failure       complete  false
1de392be-d69f-00cd-edab-d78befd2f22f  50        alloc-failure       complete  false
efd5aae6-0108-13f1-1277-40bdbbd6a7a0  50        deployment-watcher  complete  false
8b85f0b3-91e3-e217-fe91-532318e6e2f1  50        alloc-failure       complete  false
18412578-fdb0-01d7-c342-2dc45f246384  50        alloc-failure       complete  false
91e2d50c-938c-8b4f-5c83-3a8da27eceb9  50        deployment-watcher  complete  false
032db58c-de0b-69a0-932c-6eb4908f2ef6  50        alloc-failure       complete  false
119d1b89-d5f9-84ce-377b-59344f7b4592  50        deployment-watcher  complete  false
d6a0b550-8866-3cf9-6ced-090a479b47a2  50        alloc-failure       complete  false
707b439c-450b-44b6-b5cb-3a0a5b6c9346  50        alloc-failure       complete  false
a7c96883-1112-4fb7-ad60-980e56ec26bb  50        alloc-failure       complete  false
2919ac8d-71e7-76b3-7ed0-50c852a9d412  50        deployment-watcher  complete  false
500e240d-fde2-df4c-4abb-aff08d860f58  50        alloc-failure       complete  false
52b88cbc-0fa4-6fc7-bb60-26d7e1e06865  50        alloc-failure       complete  false
61c48190-72cc-1d3a-a6e5-64999041048b  50        deployment-watcher  complete  false
79372a1e-b2e8-6b1d-2a6b-5dcee5b22c5a  50        alloc-failure       complete  false
95bdb723-65ca-3396-3b2d-395a9bfab76e  50        deployment-watcher  complete  false
63f39508-d646-fcfa-33fa-92b9ac7991e7  50        alloc-failure       complete  false
c5fc27f9-d570-3fcb-6489-dc6e720dd84f  50        alloc-failure       complete  false
2b31dbd1-bbfc-c94f-26ce-01c637ace640  50        deployment-watcher  complete  false
7df44d2f-2573-aff7-da9f-8346b432e013  50        alloc-failure       complete  false
d5d2447c-af49-8a49-c5dd-eac44be1b4b3  50        job-register        complete  false
c30d36ee-71cb-f8b8-42f5-ce14c021bca9  50        deployment-watcher  complete  false
a9847c23-c10c-7661-c680-24ed3b98b34e  50        deployment-watcher  complete  false
b39e9729-8c52-6036-2177-94f0e8a5bcda  50        deployment-watcher  complete  false
fd4d9f15-0189-8b85-d814-0c87dcd78fd0  50        deployment-watcher  complete  false
41fd8cad-c4a4-fa59-4c1f-59b99b487f74  50        job-register        complete  false

Latest Deployment
ID          = 7ca7b6f4-5b69-21cb-26bf-d7a16cdc5600
Status      = successful
Description = Deployment completed successfully

Deployed
Task Group  Desired  Placed  Healthy  Unhealthy  Progress Deadline
group       4        4       4        0          2020-01-10T11:49:16Z

Allocations
ID                                    Eval ID                               Node ID                               Node Name                                 Task Group  Version  Desired  Status    Created               Modified
cc5cb504-d1c4-b5be-f9dc-35cbae3acb54  15295d13-6bbe-93ce-fbd2-6d5ac276c1a1  38adf12b-1404-b538-1c9e-7c8a39027b36  ip-10-1-2-118.eu-west-1.compute.internal  group       37       run      running   2020-01-10T11:39:00Z  2020-01-10T11:39:12Z
8eafac41-5fba-17c5-0c7e-821d3f78c673  15295d13-6bbe-93ce-fbd2-6d5ac276c1a1  38adf12b-1404-b538-1c9e-7c8a39027b36  ip-10-1-2-118.eu-west-1.compute.internal  group       37       run      running   2020-01-10T11:39:00Z  2020-01-10T11:39:16Z
2ea6774a-8e37-c11c-a87b-f0e1496921b0  15295d13-6bbe-93ce-fbd2-6d5ac276c1a1  fa5e43ae-fb1d-be84-d41a-4705b108c681  ip-10-1-1-238.eu-west-1.compute.internal  group       37       run      running   2020-01-10T11:39:00Z  2020-01-10T11:39:16Z
79d8641b-0625-101c-0ace-846090f52d08  d68f2b53-d0f7-3389-eda0-b9cc7f5d1e1b  38adf12b-1404-b538-1c9e-7c8a39027b36  ip-10-1-2-118.eu-west-1.compute.internal  group       36       stop     failed    2020-01-10T11:38:23Z  2020-01-10T11:39:00Z
de0c3ed5-2781-f582-15fc-6ebce2c0fe23  d68f2b53-d0f7-3389-eda0-b9cc7f5d1e1b  3db7ac14-275a-3601-ae4d-0603fd064b06  ip-10-1-3-174.eu-west-1.compute.internal  group       36       stop     failed    2020-01-10T11:38:23Z  2020-01-10T11:39:00Z
3afc5b15-0f4c-183d-7e28-547d6e645f2f  d68f2b53-d0f7-3389-eda0-b9cc7f5d1e1b  3db7ac14-275a-3601-ae4d-0603fd064b06  ip-10-1-3-174.eu-west-1.compute.internal  group       36       stop     failed    2020-01-10T11:38:23Z  2020-01-10T11:39:00Z
e129c594-6ebf-667d-85ac-27e9869abc3e  d68f2b53-d0f7-3389-eda0-b9cc7f5d1e1b  639210c1-987f-a8b7-7a1e-d7e1db587f27  ip-10-1-1-7.eu-west-1.compute.internal    group       36       stop     failed    2020-01-10T11:38:23Z  2020-01-10T11:39:00Z
54a1ec87-404f-4f73-1311-fad32c65e33b  0db6553d-3f62-1ff3-d337-796c84965b5a  38adf12b-1404-b538-1c9e-7c8a39027b36  ip-10-1-2-118.eu-west-1.compute.internal  group       36       stop     failed    2020-01-10T11:37:52Z  2020-01-10T11:39:00Z
32c6e44c-e474-6d87-96c3-c498f19b400a  0db6553d-3f62-1ff3-d337-796c84965b5a  3db7ac14-275a-3601-ae4d-0603fd064b06  ip-10-1-3-174.eu-west-1.compute.internal  group       36       stop     failed    2020-01-10T11:37:52Z  2020-01-10T11:39:00Z
6be78b4f-9695-2096-7e4f-740cf3437793  0db6553d-3f62-1ff3-d337-796c84965b5a  3db7ac14-275a-3601-ae4d-0603fd064b06  ip-10-1-3-174.eu-west-1.compute.internal  group       36       stop     failed    2020-01-10T11:37:52Z  2020-01-10T11:39:00Z
0254d5ab-fd43-eb66-b71a-757623590f5e  0db6553d-3f62-1ff3-d337-796c84965b5a  639210c1-987f-a8b7-7a1e-d7e1db587f27  ip-10-1-1-7.eu-west-1.compute.internal    group       36       stop     failed    2020-01-10T11:37:52Z  2020-01-10T11:39:00Z
f3bafee0-f995-7fd5-c817-2111df5c0a79  15295d13-6bbe-93ce-fbd2-6d5ac276c1a1  3db7ac14-275a-3601-ae4d-0603fd064b06  ip-10-1-3-174.eu-west-1.compute.internal  group       37       run      running   2020-01-10T11:36:13Z  2020-01-10T11:39:15Z
5d8609fa-81ed-f236-bf99-2d54617f3c3f  cf7e6f8c-e549-6d47-25ec-a9410b40d84c  38adf12b-1404-b538-1c9e-7c8a39027b36  ip-10-1-2-118.eu-west-1.compute.internal  group       35       stop     complete  2020-01-10T11:36:13Z  2020-01-10T11:38:25Z
cb6a552d-3467-2e8e-a1ab-5cf0b875e6f0  cf7e6f8c-e549-6d47-25ec-a9410b40d84c  38adf12b-1404-b538-1c9e-7c8a39027b36  ip-10-1-2-118.eu-west-1.compute.internal  group       35       stop     complete  2020-01-10T11:36:13Z  2020-01-10T11:38:25Z
33209fec-3a33-949f-236d-ab26ff6e3d82  cf7e6f8c-e549-6d47-25ec-a9410b40d84c  fa5e43ae-fb1d-be84-d41a-4705b108c681  ip-10-1-1-238.eu-west-1.compute.internal  group       35       stop     complete  2020-01-10T11:36:13Z  2020-01-10T11:38:25Z
2f29c457-0041-61e6-a0b9-bb9f36657c26  31678a73-c55f-d15d-cd92-29dd16fc378b  fa5e43ae-fb1d-be84-d41a-4705b108c681  ip-10-1-1-238.eu-west-1.compute.internal  group       33       stop     failed    2020-01-09T14:01:41Z  2020-01-10T11:36:13Z
321bb29d-8b92-9d5c-f9dd-031bcd60925e  31678a73-c55f-d15d-cd92-29dd16fc378b  639210c1-987f-a8b7-7a1e-d7e1db587f27  ip-10-1-1-7.eu-west-1.compute.internal    group       33       stop     failed    2020-01-09T14:01:41Z  2020-01-10T11:36:13Z
d754d04a-3057-b191-8805-4d8d84bda59a  31678a73-c55f-d15d-cd92-29dd16fc378b  639210c1-987f-a8b7-7a1e-d7e1db587f27  ip-10-1-1-7.eu-west-1.compute.internal    group       33       stop     failed    2020-01-09T14:01:41Z  2020-01-10T11:36:13Z
4fa98b7f-7bec-ea13-c911-823eca453f57  31678a73-c55f-d15d-cd92-29dd16fc378b  fa5e43ae-fb1d-be84-d41a-4705b108c681  ip-10-1-1-238.eu-west-1.compute.internal  group       33       stop     failed    2020-01-09T14:01:41Z  2020-01-10T11:36:13Z
33ec5567-d89d-5f87-c188-f7cd62dc4c51  dd9b46e2-5264-ea1c-7d42-c3107f1d90c0  38adf12b-1404-b538-1c9e-7c8a39027b36  ip-10-1-2-118.eu-west-1.compute.internal  group       33       stop     failed    2020-01-09T14:00:40Z  2020-01-10T11:36:13Z
e10861d7-ae51-1dbb-3d72-84452f473e11  dd9b46e2-5264-ea1c-7d42-c3107f1d90c0  3db7ac14-275a-3601-ae4d-0603fd064b06  ip-10-1-3-174.eu-west-1.compute.internal  group       33       stop     failed    2020-01-09T14:00:40Z  2020-01-10T11:36:13Z
7933a0bc-4171-1c4b-c7a9-3e25c998841e  dd9b46e2-5264-ea1c-7d42-c3107f1d90c0  3db7ac14-275a-3601-ae4d-0603fd064b06  ip-10-1-3-174.eu-west-1.compute.internal  group       33       stop     failed    2020-01-09T14:00:40Z  2020-01-10T11:36:13Z
a9d542f1-968b-6bf1-db20-a4867c2a845a  dd9b46e2-5264-ea1c-7d42-c3107f1d90c0  38adf12b-1404-b538-1c9e-7c8a39027b36  ip-10-1-2-118.eu-west-1.compute.internal  group       33       stop     failed    2020-01-09T14:00:40Z  2020-01-10T11:36:13Z
7fbd1cc4-865d-864f-bc68-76ebd7a88127  b43983ce-f145-a7b9-6033-72603e2f869f  38adf12b-1404-b538-1c9e-7c8a39027b36  ip-10-1-2-118.eu-west-1.compute.internal  group       33       stop     failed    2020-01-09T14:00:09Z  2020-01-10T11:36:13Z
ddfa5ad1-84d1-8035-5c9f-ef27d935834b  b43983ce-f145-a7b9-6033-72603e2f869f  3db7ac14-275a-3601-ae4d-0603fd064b06  ip-10-1-3-174.eu-west-1.compute.internal  group       33       stop     failed    2020-01-09T14:00:09Z  2020-01-10T11:36:13Z
68832070-967b-1cf3-2303-aa51a543eb92  b43983ce-f145-a7b9-6033-72603e2f869f  3db7ac14-275a-3601-ae4d-0603fd064b06  ip-10-1-3-174.eu-west-1.compute.internal  group       33       stop     failed    2020-01-09T14:00:09Z  2020-01-10T11:36:13Z
7df38246-f5b3-9f33-f87d-53f0f278bbf9  b43983ce-f145-a7b9-6033-72603e2f869f  38adf12b-1404-b538-1c9e-7c8a39027b36  ip-10-1-2-118.eu-west-1.compute.internal  group       33       stop     failed    2020-01-09T14:00:09Z  2020-01-10T11:36:13Z
6d75950c-f69e-fcf2-1a7e-70d83b01816f  7d36ecec-5652-7e1c-1835-a6909ae11810  639210c1-987f-a8b7-7a1e-d7e1db587f27  ip-10-1-1-7.eu-west-1.compute.internal    group       32       stop     complete  2020-01-09T05:33:13Z  2020-01-09T14:00:42Z
8260d437-9fd5-18b5-1866-bc96ddc37d49  7d36ecec-5652-7e1c-1835-a6909ae11810  fa5e43ae-fb1d-be84-d41a-4705b108c681  ip-10-1-1-238.eu-west-1.compute.internal  group       32       stop     complete  2020-01-09T05:33:13Z  2020-01-09T05:34:19Z
022be486-b841-4451-cbc9-dcbf8e6babbc  7d36ecec-5652-7e1c-1835-a6909ae11810  3db7ac14-275a-3601-ae4d-0603fd064b06  ip-10-1-3-174.eu-west-1.compute.internal  group       32       stop     complete  2020-01-09T05:33:13Z  2020-01-09T05:34:19Z
3fca2c19-d9a4-e8a3-8241-7da68f250cc7  7d36ecec-5652-7e1c-1835-a6909ae11810  38adf12b-1404-b538-1c9e-7c8a39027b36  ip-10-1-2-118.eu-west-1.compute.internal  group       32       stop     complete  2020-01-09T05:33:13Z  2020-01-09T05:34:19Z
6eee24a7-36f3-19d5-5c46-60b56dbeb70b  3c0715c4-161b-bfa2-4aab-0ca630e0d60b  639210c1-987f-a8b7-7a1e-d7e1db587f27  ip-10-1-1-7.eu-west-1.compute.internal    group       32       stop     complete  2020-01-09T05:33:11Z  2020-01-09T05:34:19Z
455726d6-1c7a-3e1c-543d-d1c0841d891b  3c0715c4-161b-bfa2-4aab-0ca630e0d60b  fa5e43ae-fb1d-be84-d41a-4705b108c681  ip-10-1-1-238.eu-west-1.compute.internal  group       32       stop     complete  2020-01-09T05:33:11Z  2020-01-09T14:00:43Z
e8410fc6-0ff9-b52c-50ec-20b7de942ad5  3c0715c4-161b-bfa2-4aab-0ca630e0d60b  38adf12b-1404-b538-1c9e-7c8a39027b36  ip-10-1-2-118.eu-west-1.compute.internal  group       32       stop     complete  2020-01-09T05:33:11Z  2020-01-09T14:00:42Z
800da2bb-2f3f-2827-055d-d25cf8a7d096  3c0715c4-161b-bfa2-4aab-0ca630e0d60b  3db7ac14-275a-3601-ae4d-0603fd064b06  ip-10-1-3-174.eu-west-1.compute.internal  group       32       stop     complete  2020-01-09T05:33:11Z  2020-01-09T14:00:42Z
da4d2af7-c7b4-8f1c-fc8b-0e3c94951f3b  f1136801-007c-5f82-a042-8d41f9d16905  639210c1-987f-a8b7-7a1e-d7e1db587f27  ip-10-1-1-7.eu-west-1.compute.internal    group       31       stop     failed    2020-01-09T05:32:13Z  2020-01-09T05:33:13Z
10e82c11-4efe-5a38-c960-4de34ce3a273  f1136801-007c-5f82-a042-8d41f9d16905  38adf12b-1404-b538-1c9e-7c8a39027b36  ip-10-1-2-118.eu-west-1.compute.internal  group       31       stop     failed    2020-01-09T05:32:13Z  2020-01-09T05:33:13Z
909bc6a5-ef0b-630c-f660-70a72e0924bc  f1136801-007c-5f82-a042-8d41f9d16905  fa5e43ae-fb1d-be84-d41a-4705b108c681  ip-10-1-1-238.eu-west-1.compute.internal  group       31       stop     failed    2020-01-09T05:32:13Z  2020-01-09T05:33:13Z
a7623d99-7848-7378-9c29-9798fb77e0e9  f1136801-007c-5f82-a042-8d41f9d16905  3db7ac14-275a-3601-ae4d-0603fd064b06  ip-10-1-3-174.eu-west-1.compute.internal  group       31       stop     failed    2020-01-09T05:32:13Z  2020-01-09T05:33:13Z
d19bafbb-2151-03cb-dceb-89cfe6b1193a  ccbd5187-ec75-5024-0582-244f394361df  3db7ac14-275a-3601-ae4d-0603fd064b06  ip-10-1-3-174.eu-west-1.compute.internal  group       31       stop     failed    2020-01-09T05:31:42Z  2020-01-09T05:33:11Z
971e052c-c175-ac39-8dfc-f3eaad310605  ccbd5187-ec75-5024-0582-244f394361df  fa5e43ae-fb1d-be84-d41a-4705b108c681  ip-10-1-1-238.eu-west-1.compute.internal  group       31       stop     failed    2020-01-09T05:31:42Z  2020-01-09T05:33:11Z
098ee505-b60d-9937-6e5b-dab56cdc23f4  ccbd5187-ec75-5024-0582-244f394361df  38adf12b-1404-b538-1c9e-7c8a39027b36  ip-10-1-2-118.eu-west-1.compute.internal  group       31       stop     failed    2020-01-09T05:31:42Z  2020-01-09T05:33:11Z
5a33a23a-ce4b-85be-78a8-68ea36daffcd  ccbd5187-ec75-5024-0582-244f394361df  639210c1-987f-a8b7-7a1e-d7e1db587f27  ip-10-1-1-7.eu-west-1.compute.internal    group       31       stop     failed    2020-01-09T05:31:42Z  2020-01-09T05:33:11Z
0dffa9d5-6c0a-2c9e-da0b-85648c7e8a88  4a2881c6-c394-2124-b590-574fe6b92fd0  639210c1-987f-a8b7-7a1e-d7e1db587f27  ip-10-1-1-7.eu-west-1.compute.internal    group       30       stop     complete  2020-01-09T05:29:21Z  2020-01-09T05:32:15Z
72167a38-92f1-3c6f-b668-995fd2e69bf2  4a2881c6-c394-2124-b590-574fe6b92fd0  3db7ac14-275a-3601-ae4d-0603fd064b06  ip-10-1-3-174.eu-west-1.compute.internal  group       30       stop     complete  2020-01-09T05:29:21Z  2020-01-09T05:29:41Z
2df9024a-409d-5ef8-2d95-afb13af03a9e  4a2881c6-c394-2124-b590-574fe6b92fd0  fa5e43ae-fb1d-be84-d41a-4705b108c681  ip-10-1-1-238.eu-west-1.compute.internal  group       30       stop     complete  2020-01-09T05:29:21Z  2020-01-09T05:29:41Z
0129bb93-8443-6e77-ab9e-b42de9987922  57c9971a-fc0b-a6be-f0cf-2000a92d488c  639210c1-987f-a8b7-7a1e-d7e1db587f27  ip-10-1-1-7.eu-west-1.compute.internal    group       30       stop     complete  2020-01-09T05:28:33Z  2020-01-09T05:29:36Z
f9ea7906-4734-8a56-df4a-d4f838a1daa3  57c9971a-fc0b-a6be-f0cf-2000a92d488c  3db7ac14-275a-3601-ae4d-0603fd064b06  ip-10-1-3-174.eu-west-1.compute.internal  group       30       stop     complete  2020-01-09T05:28:33Z  2020-01-09T05:32:15Z
1307fa7c-cd97-4fad-71d4-0e93df14127d  57c9971a-fc0b-a6be-f0cf-2000a92d488c  fa5e43ae-fb1d-be84-d41a-4705b108c681  ip-10-1-1-238.eu-west-1.compute.internal  group       30       stop     complete  2020-01-09T05:28:33Z  2020-01-09T05:32:15Z
4069eb08-3864-2fcf-e772-8897b0aa67c6  57c9971a-fc0b-a6be-f0cf-2000a92d488c  38adf12b-1404-b538-1c9e-7c8a39027b36  ip-10-1-2-118.eu-west-1.compute.internal  group       30       stop     complete  2020-01-09T05:28:33Z  2020-01-09T05:32:15Z
278cf14c-878d-888e-504f-4676e25fd900  88ffe121-11e5-711d-df81-38c763059b3b  fa5e43ae-fb1d-be84-d41a-4705b108c681  ip-10-1-1-238.eu-west-1.compute.internal  group       29       stop     failed    2020-01-09T05:28:20Z  2020-01-09T05:29:21Z
b71460cc-1dd8-37da-b89b-eff9a264c88e  88ffe121-11e5-711d-df81-38c763059b3b  38adf12b-1404-b538-1c9e-7c8a39027b36  ip-10-1-2-118.eu-west-1.compute.internal  group       29       stop     failed    2020-01-09T05:28:20Z  2020-01-09T05:29:21Z
a498bad6-4ae3-4b7b-0c81-6081ddc8bd4f  88ffe121-11e5-711d-df81-38c763059b3b  fa5e43ae-fb1d-be84-d41a-4705b108c681  ip-10-1-1-238.eu-west-1.compute.internal  group       29       stop     failed    2020-01-09T05:28:20Z  2020-01-09T05:28:21Z
4bddb8bd-5dcd-4519-3b51-d8a749a68147  88ffe121-11e5-711d-df81-38c763059b3b  639210c1-987f-a8b7-7a1e-d7e1db587f27  ip-10-1-1-7.eu-west-1.compute.internal    group       29       stop     failed    2020-01-09T05:28:20Z  2020-01-09T05:29:21Z
373b9000-855d-bc3d-1649-2b30a5705158  632f8b27-75c0-1ff2-7b8d-5b922759cfc1  38adf12b-1404-b538-1c9e-7c8a39027b36  ip-10-1-2-118.eu-west-1.compute.internal  group       29       stop     failed    2020-01-09T05:27:49Z  2020-01-09T05:28:33Z
8739fb13-b4b2-96cb-c866-98ddc0704505  632f8b27-75c0-1ff2-7b8d-5b922759cfc1  fa5e43ae-fb1d-be84-d41a-4705b108c681  ip-10-1-1-238.eu-west-1.compute.internal  group       29       stop     failed    2020-01-09T05:27:49Z  2020-01-09T05:28:33Z
8c8137a7-a520-610a-7da0-c57f3baed95a  632f8b27-75c0-1ff2-7b8d-5b922759cfc1  fa5e43ae-fb1d-be84-d41a-4705b108c681  ip-10-1-1-238.eu-west-1.compute.internal  group       29       stop     failed    2020-01-09T05:27:49Z  2020-01-09T05:28:33Z
1e2c4dc5-5405-184d-065f-5745b228bad2  632f8b27-75c0-1ff2-7b8d-5b922759cfc1  639210c1-987f-a8b7-7a1e-d7e1db587f27  ip-10-1-1-7.eu-west-1.compute.internal    group       29       stop     failed    2020-01-09T05:27:49Z  2020-01-09T05:28:33Z
f4d47a9e-1fb3-43cb-f8c0-838733099c7a  57c9971a-fc0b-a6be-f0cf-2000a92d488c  38adf12b-1404-b538-1c9e-7c8a39027b36  ip-10-1-2-118.eu-west-1.compute.internal  group       30       stop     complete  2020-01-09T05:24:53Z  2020-01-09T05:29:36Z
f07afefe-6ee8-d99f-6c73-d471ac97e2c5  0130afcc-6740-096d-ad28-0bb7798628d8  3db7ac14-275a-3601-ae4d-0603fd064b06  ip-10-1-3-174.eu-west-1.compute.internal  group       28       stop     complete  2020-01-09T05:24:53Z  2020-01-09T05:28:22Z
388191cd-28e6-e1cc-a505-247948bbc669  0130afcc-6740-096d-ad28-0bb7798628d8  3db7ac14-275a-3601-ae4d-0603fd064b06  ip-10-1-3-174.eu-west-1.compute.internal  group       28       stop     complete  2020-01-09T05:24:53Z  2020-01-09T05:28:22Z
6b71e289-95d1-8ed8-a1f0-3ed458d68942  0130afcc-6740-096d-ad28-0bb7798628d8  639210c1-987f-a8b7-7a1e-d7e1db587f27  ip-10-1-1-7.eu-west-1.compute.internal    group       28       stop     complete  2020-01-09T05:24:53Z  2020-01-09T05:28:22Z
720f6d58-f3d0-abc1-7da5-30c2840abe93  224ffb16-7d99-df4e-3ab7-9c8c62a664c6  3db7ac14-275a-3601-ae4d-0603fd064b06  ip-10-1-3-174.eu-west-1.compute.internal  group       27       stop     failed    2020-01-09T05:24:08Z  2020-01-09T05:24:53Z
f8f5ae74-ce0e-b6e4-97b4-3e41b0e5f14d  224ffb16-7d99-df4e-3ab7-9c8c62a664c6  fa5e43ae-fb1d-be84-d41a-4705b108c681  ip-10-1-1-238.eu-west-1.compute.internal  group       27       stop     failed    2020-01-09T05:24:08Z  2020-01-09T05:24:53Z
0b29b192-0fb2-975b-1d24-18e42b6c2887  224ffb16-7d99-df4e-3ab7-9c8c62a664c6  38adf12b-1404-b538-1c9e-7c8a39027b36  ip-10-1-2-118.eu-west-1.compute.internal  group       27       stop     failed    2020-01-09T05:24:08Z  2020-01-09T05:24:53Z
18e10b34-b173-a586-c1be-8a03d8e73704  224ffb16-7d99-df4e-3ab7-9c8c62a664c6  639210c1-987f-a8b7-7a1e-d7e1db587f27  ip-10-1-1-7.eu-west-1.compute.internal    group       27       stop     failed    2020-01-09T05:24:08Z  2020-01-09T05:24:53Z
2f8002ff-e99d-8d5f-bb3e-3cef40c9f353  7472f791-41f4-0215-9536-ba541b7ce0ae  fa5e43ae-fb1d-be84-d41a-4705b108c681  ip-10-1-1-238.eu-west-1.compute.internal  group       27       stop     failed    2020-01-09T05:23:37Z  2020-01-09T05:24:53Z
59b1024f-7adb-c48f-0aa9-daf6073d9b1c  7472f791-41f4-0215-9536-ba541b7ce0ae  639210c1-987f-a8b7-7a1e-d7e1db587f27  ip-10-1-1-7.eu-west-1.compute.internal    group       27       stop     failed    2020-01-09T05:23:37Z  2020-01-09T05:24:53Z
bed55d06-17f4-e918-bb1c-bef770d8700d  7472f791-41f4-0215-9536-ba541b7ce0ae  3db7ac14-275a-3601-ae4d-0603fd064b06  ip-10-1-3-174.eu-west-1.compute.internal  group       27       stop     failed    2020-01-09T05:23:37Z  2020-01-09T05:24:53Z
9d07e352-1a3b-9237-d5ef-e7a5707e9ab4  7472f791-41f4-0215-9536-ba541b7ce0ae  38adf12b-1404-b538-1c9e-7c8a39027b36  ip-10-1-2-118.eu-west-1.compute.internal  group       27       stop     failed    2020-01-09T05:23:37Z  2020-01-09T05:24:53Z
a7dee78b-fb41-106e-8c52-ec8a8e4bb9e4  988e45f8-b68d-b10b-862a-e5d16331c3be  3db7ac14-275a-3601-ae4d-0603fd064b06  ip-10-1-3-174.eu-west-1.compute.internal  group       26       stop     complete  2020-01-09T05:21:14Z  2020-01-09T05:21:39Z
7c44e990-a2ad-92dc-aea4-bc43c854a880  988e45f8-b68d-b10b-862a-e5d16331c3be  639210c1-987f-a8b7-7a1e-d7e1db587f27  ip-10-1-1-7.eu-west-1.compute.internal    group       26       stop     complete  2020-01-09T05:21:14Z  2020-01-09T05:21:43Z
437bc548-4330-be9e-3e7a-263fa4812a60  988e45f8-b68d-b10b-862a-e5d16331c3be  639210c1-987f-a8b7-7a1e-d7e1db587f27  ip-10-1-1-7.eu-west-1.compute.internal    group       26       stop     complete  2020-01-09T05:21:14Z  2020-01-09T05:21:44Z
2ad7c923-451a-63d6-9aaf-c9af7c012150  bfbf6f74-5845-b137-5a71-7b79e53c5143  38adf12b-1404-b538-1c9e-7c8a39027b36  ip-10-1-2-118.eu-west-1.compute.internal  group       26       stop     complete  2020-01-09T05:20:56Z  2020-01-09T05:21:39Z
871e57e3-3f67-60cd-569a-92615477fc73  bfbf6f74-5845-b137-5a71-7b79e53c5143  3db7ac14-275a-3601-ae4d-0603fd064b06  ip-10-1-3-174.eu-west-1.compute.internal  group       26       stop     complete  2020-01-09T05:20:56Z  2020-01-09T05:24:10Z
ea61cdae-9e6f-961e-46a4-70b9da5e61e2  bfbf6f74-5845-b137-5a71-7b79e53c5143  38adf12b-1404-b538-1c9e-7c8a39027b36  ip-10-1-2-118.eu-west-1.compute.internal  group       26       stop     complete  2020-01-09T05:20:56Z  2020-01-09T05:24:10Z
b58006da-9057-a430-6a71-a6cc09dd9c15  bfbf6f74-5845-b137-5a71-7b79e53c5143  639210c1-987f-a8b7-7a1e-d7e1db587f27  ip-10-1-1-7.eu-west-1.compute.internal    group       26       stop     complete  2020-01-09T05:20:56Z  2020-01-09T05:24:10Z
f5818c55-3956-c352-ca65-1a3ef889f499  927bdc01-bc08-9282-8079-ba76097f3bb0  38adf12b-1404-b538-1c9e-7c8a39027b36  ip-10-1-2-118.eu-west-1.compute.internal  group       25       stop     failed    2020-01-09T05:19:55Z  2020-01-09T05:21:14Z
13377d5f-286c-a937-0109-01062c5ac0d1  927bdc01-bc08-9282-8079-ba76097f3bb0  3db7ac14-275a-3601-ae4d-0603fd064b06  ip-10-1-3-174.eu-west-1.compute.internal  group       25       stop     failed    2020-01-09T05:19:55Z  2020-01-09T05:20:14Z
df3929ee-84ce-2b60-0ebe-e747959de31a  927bdc01-bc08-9282-8079-ba76097f3bb0  639210c1-987f-a8b7-7a1e-d7e1db587f27  ip-10-1-1-7.eu-west-1.compute.internal    group       25       stop     failed    2020-01-09T05:19:55Z  2020-01-09T05:21:14Z
6b6991f1-33e6-acfe-e467-4c27419f0430  927bdc01-bc08-9282-8079-ba76097f3bb0  fa5e43ae-fb1d-be84-d41a-4705b108c681  ip-10-1-1-238.eu-west-1.compute.internal  group       25       stop     failed    2020-01-09T05:19:55Z  2020-01-09T05:21:14Z
d7f653cc-0305-c004-68ce-537284403a27  769e952c-e1c5-0cae-3489-6f8eeea86f66  3db7ac14-275a-3601-ae4d-0603fd064b06  ip-10-1-3-174.eu-west-1.compute.internal  group       25       stop     failed    2020-01-09T05:19:05Z  2020-01-09T05:20:56Z
a1bc7448-ff3c-595a-383b-6a2cdd13f2fd  769e952c-e1c5-0cae-3489-6f8eeea86f66  38adf12b-1404-b538-1c9e-7c8a39027b36  ip-10-1-2-118.eu-west-1.compute.internal  group       25       stop     failed    2020-01-09T05:19:05Z  2020-01-09T05:20:56Z
650ad33b-77a9-5887-64aa-2f7e36185bf0  769e952c-e1c5-0cae-3489-6f8eeea86f66  639210c1-987f-a8b7-7a1e-d7e1db587f27  ip-10-1-1-7.eu-west-1.compute.internal    group       25       stop     failed    2020-01-09T05:19:05Z  2020-01-09T05:20:56Z
98d20de7-453d-efe9-5c18-981ce281b2fa  769e952c-e1c5-0cae-3489-6f8eeea86f66  fa5e43ae-fb1d-be84-d41a-4705b108c681  ip-10-1-1-238.eu-west-1.compute.internal  group       25       stop     failed    2020-01-09T05:19:05Z  2020-01-09T05:20:56Z
244f81d0-44fb-35e7-3d04-0bbfbf611630  cff5b63f-69e1-165b-5863-f68ff0d435f9  3db7ac14-275a-3601-ae4d-0603fd064b06  ip-10-1-3-174.eu-west-1.compute.internal  group       24       stop     complete  2020-01-09T05:17:42Z  2020-01-09T05:20:15Z
34ccaeca-0da7-1f0a-2ef1-539a625e685d  bfbf6f74-5845-b137-5a71-7b79e53c5143  fa5e43ae-fb1d-be84-d41a-4705b108c681  ip-10-1-1-238.eu-west-1.compute.internal  group       26       stop     complete  2020-01-09T05:17:42Z  2020-01-09T05:24:10Z
53bacbb4-5dc7-7f52-b37c-4e5a3f49fb5e  cff5b63f-69e1-165b-5863-f68ff0d435f9  38adf12b-1404-b538-1c9e-7c8a39027b36  ip-10-1-2-118.eu-west-1.compute.internal  group       24       stop     complete  2020-01-09T05:17:42Z  2020-01-09T05:20:15Z
b27a7f63-a9be-6214-f778-2c905954a32c  cff5b63f-69e1-165b-5863-f68ff0d435f9  639210c1-987f-a8b7-7a1e-d7e1db587f27  ip-10-1-1-7.eu-west-1.compute.internal    group       24       stop     complete  2020-01-09T05:17:42Z  2020-01-09T05:20:15Z
c991ab6a-f88d-a28e-c466-d9debaaa8e98  9873cc74-d196-9907-8257-a910e1fabd48  d3f3c3e1-e9c2-f22d-4bd7-534cf5092c43  ip-10-1-1-150.eu-west-1.compute.internal  group       22       stop     failed    2020-01-09T02:41:12Z  2020-01-09T05:17:42Z
b8be2dce-dd97-0c62-a775-2de8f97487d1  a03999e9-255c-8c25-51e8-16edb56f216d  cbd4f21c-b93b-6397-88f2-85112a2af263  ip-10-1-2-197.eu-west-1.compute.internal  group       22       stop     failed    2020-01-09T02:41:10Z  2020-01-09T05:17:42Z
7873b061-e9e8-26c6-5b9f-542ff14ffd87  4a0199e9-ce83-9a56-5d32-b9302e5502b9  0a1dd321-a2e4-dea2-a670-b32a9cfc6a24  ip-10-1-1-39.eu-west-1.compute.internal   group       22       stop     failed    2020-01-09T02:36:37Z  2020-01-09T04:42:38Z
c9c94700-e3b9-3427-6e47-cd2821361774  80aa6bc6-7c5c-7b02-cc3d-d6ebd22f67c3  d3f3c3e1-e9c2-f22d-4bd7-534cf5092c43  ip-10-1-1-150.eu-west-1.compute.internal  group       22       stop     failed    2020-01-09T02:36:33Z  2020-01-09T04:42:40Z
d89dfe82-fb16-f2b2-5a9a-8beb8bca15a7  80aa6bc6-7c5c-7b02-cc3d-d6ebd22f67c3  0a1dd321-a2e4-dea2-a670-b32a9cfc6a24  ip-10-1-1-39.eu-west-1.compute.internal   group       22       stop     failed    2020-01-09T02:36:33Z  2020-01-09T04:42:40Z
2650fae7-b5b0-c7c6-4a73-9e757151c0de  9845fa10-0ee6-196c-c2fc-d44f1bcc7248  2ef9d87a-eb1e-6015-ee2a-b2529f392f1a  ip-10-1-3-245.eu-west-1.compute.internal  group       22       stop     failed    2020-01-09T02:34:00Z  2020-01-09T04:42:40Z
5c535f5d-35ba-d303-1e41-c77cbe3664a8  bb66a5fa-44e3-360e-0645-61b54eae5314  0a1dd321-a2e4-dea2-a670-b32a9cfc6a24  ip-10-1-1-39.eu-west-1.compute.internal   group       22       stop     failed    2020-01-09T02:33:56Z  2020-01-09T05:17:42Z
424aa9a7-9c56-4224-c4b6-6c4b1e50e14d  bb66a5fa-44e3-360e-0645-61b54eae5314  cbd4f21c-b93b-6397-88f2-85112a2af263  ip-10-1-2-197.eu-west-1.compute.internal  group       22       stop     failed    2020-01-09T02:33:56Z  2020-01-09T05:17:42Z
46e468f8-b7e7-529a-95ad-a8bea59efcf9  7270cf6a-d3a8-8369-3c24-a8961a120f00  cbd4f21c-b93b-6397-88f2-85112a2af263  ip-10-1-2-197.eu-west-1.compute.internal  group       22       stop     complete  2020-01-09T02:32:23Z  2020-01-09T04:42:40Z
78908d2a-5a7b-1946-215e-7387b19e0fb8  7270cf6a-d3a8-8369-3c24-a8961a120f00  cbd4f21c-b93b-6397-88f2-85112a2af263  ip-10-1-2-197.eu-west-1.compute.internal  group       22       stop     failed    2020-01-09T02:32:23Z  2020-01-09T04:42:40Z
1d7e8f6e-0b19-826b-1201-211f53d9d2b1  ce11f79a-507d-867b-edab-29889fde9cdc  d3f3c3e1-e9c2-f22d-4bd7-534cf5092c43  ip-10-1-1-150.eu-west-1.compute.internal  group       22       stop     failed    2020-01-09T02:32:22Z  2020-01-09T04:42:40Z
ba930d4b-533e-8b48-d7ad-594fcce5ca09  ce11f79a-507d-867b-edab-29889fde9cdc  2ef9d87a-eb1e-6015-ee2a-b2529f392f1a  ip-10-1-3-245.eu-west-1.compute.internal  group       22       stop     failed    2020-01-09T02:32:22Z  2020-01-09T04:42:40Z
d3d4cad6-9fe6-6138-914f-bdd2e3fb9d9c  e230c8b1-b2e4-5299-464d-81fe485a700a  d3f3c3e1-e9c2-f22d-4bd7-534cf5092c43  ip-10-1-1-150.eu-west-1.compute.internal  group       22       stop     failed    2020-01-09T02:31:14Z  2020-01-09T05:17:42Z
58b1e6a3-ff39-681f-cebe-0cf6027833cd  e230c8b1-b2e4-5299-464d-81fe485a700a  d3f3c3e1-e9c2-f22d-4bd7-534cf5092c43  ip-10-1-1-150.eu-west-1.compute.internal  group       22       stop     failed    2020-01-09T02:31:14Z  2020-01-09T05:17:42Z
b027d10e-f475-7218-0694-05ce36c41cae  e230c8b1-b2e4-5299-464d-81fe485a700a  cbd4f21c-b93b-6397-88f2-85112a2af263  ip-10-1-2-197.eu-west-1.compute.internal  group       22       stop     failed    2020-01-09T02:31:14Z  2020-01-09T05:17:42Z
c358e900-db45-75e7-d983-a6ab0d7e1e74  e230c8b1-b2e4-5299-464d-81fe485a700a  2ef9d87a-eb1e-6015-ee2a-b2529f392f1a  ip-10-1-3-245.eu-west-1.compute.internal  group       22       stop     failed    2020-01-09T02:31:14Z  2020-01-09T05:17:42Z
cc161972-5a7b-6b6e-5eff-747b28abd2c3  f9e099fb-710c-eaae-dbe3-113e4173b2d2  2ef9d87a-eb1e-6015-ee2a-b2529f392f1a  ip-10-1-3-245.eu-west-1.compute.internal  group       21       stop     complete  2020-01-09T02:30:17Z  2020-01-09T02:32:58Z
ad9b553c-3645-5b45-43ba-c181e9af6e2b  f9e099fb-710c-eaae-dbe3-113e4173b2d2  cbd4f21c-b93b-6397-88f2-85112a2af263  ip-10-1-2-197.eu-west-1.compute.internal  group       21       stop     complete  2020-01-09T02:30:17Z  2020-01-09T02:32:58Z
b762b7e1-3be2-79b3-f406-bed54b7b9412  4db6a469-8829-1a99-8086-4e52c2185c5d  d3f3c3e1-e9c2-f22d-4bd7-534cf5092c43  ip-10-1-1-150.eu-west-1.compute.internal  group       20       stop     failed    2020-01-09T02:26:47Z  2020-01-09T02:30:17Z
c53c0355-f86d-d7d7-74a7-980cd373e355  4db6a469-8829-1a99-8086-4e52c2185c5d  2ef9d87a-eb1e-6015-ee2a-b2529f392f1a  ip-10-1-3-245.eu-west-1.compute.internal  group       20       stop     failed    2020-01-09T02:26:47Z  2020-01-09T02:30:17Z
af41a47d-e849-c989-015f-46fe30e4c14f  4db6a469-8829-1a99-8086-4e52c2185c5d  cbd4f21c-b93b-6397-88f2-85112a2af263  ip-10-1-2-197.eu-west-1.compute.internal  group       20       stop     failed    2020-01-09T02:26:47Z  2020-01-09T02:30:17Z
df9da756-ea74-e297-aad0-ac318ceaab5d  4db6a469-8829-1a99-8086-4e52c2185c5d  cbd4f21c-b93b-6397-88f2-85112a2af263  ip-10-1-2-197.eu-west-1.compute.internal  group       20       stop     failed    2020-01-09T02:26:47Z  2020-01-09T02:30:17Z
f0ebe3b9-bab3-a4f2-2005-fc6bf35ec917  8ec023b6-22fa-af55-01b5-5306b23ab7d7  cbd4f21c-b93b-6397-88f2-85112a2af263  ip-10-1-2-197.eu-west-1.compute.internal  group       20       stop     failed    2020-01-09T02:26:16Z  2020-01-09T02:30:17Z
e0acef61-b8d2-dd8c-2e55-51ab2cf256d5  8ec023b6-22fa-af55-01b5-5306b23ab7d7  d3f3c3e1-e9c2-f22d-4bd7-534cf5092c43  ip-10-1-1-150.eu-west-1.compute.internal  group       20       stop     failed    2020-01-09T02:26:16Z  2020-01-09T02:30:17Z
e89b0b0f-2b64-fefc-bd3f-973f8fdba84d  8ec023b6-22fa-af55-01b5-5306b23ab7d7  cbd4f21c-b93b-6397-88f2-85112a2af263  ip-10-1-2-197.eu-west-1.compute.internal  group       20       stop     failed    2020-01-09T02:26:16Z  2020-01-09T02:30:17Z
eb6f1cd0-912e-3cb8-55f2-5c12e3b0e021  8ec023b6-22fa-af55-01b5-5306b23ab7d7  2ef9d87a-eb1e-6015-ee2a-b2529f392f1a  ip-10-1-3-245.eu-west-1.compute.internal  group       20       stop     failed    2020-01-09T02:26:16Z  2020-01-09T02:30:17Z
0acf137e-6482-4f00-16f9-3b4f499dc796  f9e099fb-710c-eaae-dbe3-113e4173b2d2  0a1dd321-a2e4-dea2-a670-b32a9cfc6a24  ip-10-1-1-39.eu-west-1.compute.internal   group       21       stop     complete  2020-01-09T02:24:30Z  2020-01-09T02:32:58Z
7c2e2a41-c94a-1446-5daa-aa8fc928fa78  14b394a6-8eb0-aec1-c91b-12c1b63a3ace  d3f3c3e1-e9c2-f22d-4bd7-534cf5092c43  ip-10-1-1-150.eu-west-1.compute.internal  group       19       stop     complete  2020-01-09T02:24:30Z  2020-01-09T02:26:49Z
9d639e56-8fa3-7b33-3697-0243e4216ad1  f9e099fb-710c-eaae-dbe3-113e4173b2d2  d3f3c3e1-e9c2-f22d-4bd7-534cf5092c43  ip-10-1-1-150.eu-west-1.compute.internal  group       21       stop     complete  2020-01-09T02:24:30Z  2020-01-09T04:47:16Z
c65f372d-bfdd-fee6-d599-6bb10428df24  14b394a6-8eb0-aec1-c91b-12c1b63a3ace  2ef9d87a-eb1e-6015-ee2a-b2529f392f1a  ip-10-1-3-245.eu-west-1.compute.internal  group       19       stop     complete  2020-01-09T02:24:30Z  2020-01-09T02:26:49Z
a3150dc8-ae52-8a49-a248-3b4eaf814467  e5d403a4-cdf0-af19-7057-e6760cdf613b  2ef9d87a-eb1e-6015-ee2a-b2529f392f1a  ip-10-1-3-245.eu-west-1.compute.internal  group       18       stop     complete  2020-01-09T02:22:54Z  2020-01-09T02:25:42Z
b897cbe8-7adb-b4bd-53ab-e100cd776fa3  7bdc008b-178e-5f9c-aaaf-0ac35c4e0c75  cbd4f21c-b93b-6397-88f2-85112a2af263  ip-10-1-2-197.eu-west-1.compute.internal  group       18       stop     complete  2020-01-09T02:22:54Z  2020-01-09T02:25:42Z
1f4b159a-39ed-97bc-49d2-693067176719  7bdc008b-178e-5f9c-aaaf-0ac35c4e0c75  cbd4f21c-b93b-6397-88f2-85112a2af263  ip-10-1-2-197.eu-west-1.compute.internal  group       18       stop     complete  2020-01-09T02:22:54Z  2020-01-09T02:25:42Z
42bf3e89-a16c-b43f-085b-45af48e8ec63  7bdc008b-178e-5f9c-aaaf-0ac35c4e0c75  d3f3c3e1-e9c2-f22d-4bd7-534cf5092c43  ip-10-1-1-150.eu-west-1.compute.internal  group       18       stop     complete  2020-01-09T02:22:54Z  2020-01-09T02:24:13Z
06dfa327-9961-022c-bba9-062c8bb98ff8  7bdc008b-178e-5f9c-aaaf-0ac35c4e0c75  d3f3c3e1-e9c2-f22d-4bd7-534cf5092c43  ip-10-1-1-150.eu-west-1.compute.internal  group       18       stop     complete  2020-01-09T02:22:54Z  2020-01-09T02:25:42Z
4a57b061-300c-1b76-cb7b-de1b9bb33fc6  7bdc008b-178e-5f9c-aaaf-0ac35c4e0c75  cbd4f21c-b93b-6397-88f2-85112a2af263  ip-10-1-2-197.eu-west-1.compute.internal  group       18       stop     complete  2020-01-09T02:22:54Z  2020-01-09T02:24:13Z
eaa61509-1c4f-66a8-39b2-477d24a69e73  739c6574-1dfd-7514-ce9d-c1341a4cb858  d3f3c3e1-e9c2-f22d-4bd7-534cf5092c43  ip-10-1-1-150.eu-west-1.compute.internal  group       17       stop     failed    2020-01-09T02:21:37Z  2020-01-09T02:22:54Z
8f332019-5c3b-f469-5779-8f9bf42a960b  739c6574-1dfd-7514-ce9d-c1341a4cb858  d3f3c3e1-e9c2-f22d-4bd7-534cf5092c43  ip-10-1-1-150.eu-west-1.compute.internal  group       17       stop     complete  2020-01-09T02:21:37Z  2020-01-09T02:21:56Z
0c9063fd-a83d-d697-3b8e-478b59b6a59d  739c6574-1dfd-7514-ce9d-c1341a4cb858  cbd4f21c-b93b-6397-88f2-85112a2af263  ip-10-1-2-197.eu-west-1.compute.internal  group       17       stop     failed    2020-01-09T02:21:37Z  2020-01-09T02:21:55Z
94882ff6-bda6-57af-6983-9147de624af0  739c6574-1dfd-7514-ce9d-c1341a4cb858  2ef9d87a-eb1e-6015-ee2a-b2529f392f1a  ip-10-1-3-245.eu-west-1.compute.internal  group       17       stop     failed    2020-01-09T02:21:37Z  2020-01-09T02:22:54Z
10db701f-acad-b013-1be5-10925ff9c8e6  2c948c05-c863-f0ea-225f-ff074580d0b3  2ef9d87a-eb1e-6015-ee2a-b2529f392f1a  ip-10-1-3-245.eu-west-1.compute.internal  group       17       stop     failed    2020-01-09T02:20:47Z  2020-01-09T02:22:54Z
897391eb-a763-c217-fa4d-e8eab50f458d  2c948c05-c863-f0ea-225f-ff074580d0b3  d3f3c3e1-e9c2-f22d-4bd7-534cf5092c43  ip-10-1-1-150.eu-west-1.compute.internal  group       17       stop     failed    2020-01-09T02:20:47Z  2020-01-09T02:22:54Z
64f01c31-fe40-5f95-89ff-3e3bad9eb62d  2c948c05-c863-f0ea-225f-ff074580d0b3  cbd4f21c-b93b-6397-88f2-85112a2af263  ip-10-1-2-197.eu-west-1.compute.internal  group       17       stop     failed    2020-01-09T02:20:47Z  2020-01-09T02:22:54Z
5bc279c9-0246-6826-cf2d-eb0792d3b78d  2c948c05-c863-f0ea-225f-ff074580d0b3  d3f3c3e1-e9c2-f22d-4bd7-534cf5092c43  ip-10-1-1-150.eu-west-1.compute.internal  group       17       stop     failed    2020-01-09T02:20:47Z  2020-01-09T02:22:54Z
2422a577-22a6-5548-7980-6a8ee67b19a7  3f47e881-0b5e-57f9-fc8b-ff82d30aec27  0a1dd321-a2e4-dea2-a670-b32a9cfc6a24  ip-10-1-1-39.eu-west-1.compute.internal   group       15       stop     complete  2020-01-09T02:17:08Z  2020-01-09T02:17:54Z
ef2cc642-910b-2512-1a49-e03cbea25e40  3f47e881-0b5e-57f9-fc8b-ff82d30aec27  2ef9d87a-eb1e-6015-ee2a-b2529f392f1a  ip-10-1-3-245.eu-west-1.compute.internal  group       15       stop     complete  2020-01-09T02:17:08Z  2020-01-09T02:17:54Z
c6dcfb5d-2aee-55d6-7cad-de2db096efde  3f47e881-0b5e-57f9-fc8b-ff82d30aec27  0a1dd321-a2e4-dea2-a670-b32a9cfc6a24  ip-10-1-1-39.eu-west-1.compute.internal   group       15       stop     complete  2020-01-09T02:17:08Z  2020-01-09T02:17:54Z
1cee59ce-d82b-dd97-69c1-b882e660533f  c42b889c-821d-8abb-d6fc-83731fc5acb7  d3f3c3e1-e9c2-f22d-4bd7-534cf5092c43  ip-10-1-1-150.eu-west-1.compute.internal  group       16       stop     complete  2020-01-09T02:15:47Z  2020-01-09T02:21:56Z
0a7d848a-966c-3cd6-0d0b-a6953a521585  0731ef0b-a44e-6d78-9394-64355c584f57  d3f3c3e1-e9c2-f22d-4bd7-534cf5092c43  ip-10-1-1-150.eu-west-1.compute.internal  group       15       stop     complete  2020-01-09T02:15:47Z  2020-01-09T02:17:54Z
3e56ed30-d311-875b-baf3-06763b74bf14  c42b889c-821d-8abb-d6fc-83731fc5acb7  cbd4f21c-b93b-6397-88f2-85112a2af263  ip-10-1-2-197.eu-west-1.compute.internal  group       16       stop     complete  2020-01-09T02:15:47Z  2020-01-09T02:21:55Z
972e2957-9bc2-22b9-7e48-2b39aeb5b8f0  7bdc008b-178e-5f9c-aaaf-0ac35c4e0c75  0a1dd321-a2e4-dea2-a670-b32a9cfc6a24  ip-10-1-1-39.eu-west-1.compute.internal   group       18       stop     complete  2020-01-09T02:15:47Z  2020-01-09T02:24:13Z
4835aa47-9627-7328-06a4-8bacba8df755  4b05fa28-03fe-2e99-1ce6-7720128427f8  cbd4f21c-b93b-6397-88f2-85112a2af263  ip-10-1-2-197.eu-west-1.compute.internal  group       14       stop     failed    2020-01-09T02:15:07Z  2020-01-09T02:17:08Z
76c5bc4c-e426-3c3f-1ccf-ad97b29173ef  4b05fa28-03fe-2e99-1ce6-7720128427f8  2ef9d87a-eb1e-6015-ee2a-b2529f392f1a  ip-10-1-3-245.eu-west-1.compute.internal  group       14       stop     failed    2020-01-09T02:15:07Z  2020-01-09T02:17:08Z
948127fa-f829-8513-66bb-eb1e76b1dd50  4b05fa28-03fe-2e99-1ce6-7720128427f8  0a1dd321-a2e4-dea2-a670-b32a9cfc6a24  ip-10-1-1-39.eu-west-1.compute.internal   group       14       stop     failed    2020-01-09T02:15:07Z  2020-01-09T02:17:08Z
8ea4757b-b43b-3a6c-2726-44a5c1b90eb2  ad7219c3-0ee8-41bf-b7b3-7e1235acc4f8  d3f3c3e1-e9c2-f22d-4bd7-534cf5092c43  ip-10-1-1-150.eu-west-1.compute.internal  group       14       stop     failed    2020-01-09T02:14:06Z  2020-01-09T02:15:47Z
5bc455db-89cf-e0ed-1ed4-1f5efbf83b07  ad7219c3-0ee8-41bf-b7b3-7e1235acc4f8  cbd4f21c-b93b-6397-88f2-85112a2af263  ip-10-1-2-197.eu-west-1.compute.internal  group       14       stop     failed    2020-01-09T02:14:06Z  2020-01-09T02:15:47Z
6663c83d-e5d1-283f-87c2-4e09aa699508  ad7219c3-0ee8-41bf-b7b3-7e1235acc4f8  d3f3c3e1-e9c2-f22d-4bd7-534cf5092c43  ip-10-1-1-150.eu-west-1.compute.internal  group       14       stop     failed    2020-01-09T02:14:06Z  2020-01-09T02:15:47Z
04c7c68c-14c3-c3c6-3b90-5370e37103c8  ad7219c3-0ee8-41bf-b7b3-7e1235acc4f8  2ef9d87a-eb1e-6015-ee2a-b2529f392f1a  ip-10-1-3-245.eu-west-1.compute.internal  group       14       stop     failed    2020-01-09T02:14:06Z  2020-01-09T02:15:47Z
77e4ce4b-1906-6f16-030c-ebf2a7964a9e  64070c33-b5e8-8bdc-ccd3-28eb77ec0efb  d3f3c3e1-e9c2-f22d-4bd7-534cf5092c43  ip-10-1-1-150.eu-west-1.compute.internal  group       14       stop     failed    2020-01-09T02:13:35Z  2020-01-09T02:15:47Z
a916e83d-3be4-6832-b676-9ea18c012cf5  64070c33-b5e8-8bdc-ccd3-28eb77ec0efb  d3f3c3e1-e9c2-f22d-4bd7-534cf5092c43  ip-10-1-1-150.eu-west-1.compute.internal  group       14       stop     failed    2020-01-09T02:13:35Z  2020-01-09T02:15:47Z
fe268bc1-ac60-a3c4-3719-485727113545  64070c33-b5e8-8bdc-ccd3-28eb77ec0efb  cbd4f21c-b93b-6397-88f2-85112a2af263  ip-10-1-2-197.eu-west-1.compute.internal  group       14       stop     failed    2020-01-09T02:13:35Z  2020-01-09T02:15:47Z
c533336f-181b-9ade-5cae-70cf07770035  64070c33-b5e8-8bdc-ccd3-28eb77ec0efb  2ef9d87a-eb1e-6015-ee2a-b2529f392f1a  ip-10-1-3-245.eu-west-1.compute.internal  group       14       stop     failed    2020-01-09T02:13:35Z  2020-01-09T02:15:47Z
8ce72939-410f-b4f0-37c8-2a3dbe832454  7bdc008b-178e-5f9c-aaaf-0ac35c4e0c75  2ef9d87a-eb1e-6015-ee2a-b2529f392f1a  ip-10-1-3-245.eu-west-1.compute.internal  group       18       stop     complete  2020-01-09T02:11:49Z  2020-01-09T02:24:13Z
1e8fa192-4bc9-3e39-def9-f23ee77171f7  e4c92146-55f7-b516-999d-3de29b582b71  d3f3c3e1-e9c2-f22d-4bd7-534cf5092c43  ip-10-1-1-150.eu-west-1.compute.internal  group       13       stop     complete  2020-01-09T02:11:49Z  2020-01-09T02:14:08Z
2fed186a-7a19-fb71-8477-945f2c7c6912  e4c92146-55f7-b516-999d-3de29b582b71  cbd4f21c-b93b-6397-88f2-85112a2af263  ip-10-1-2-197.eu-west-1.compute.internal  group       13       stop     complete  2020-01-09T02:11:49Z  2020-01-09T02:14:08Z
4ef9cb6f-683f-ac82-7319-6d6d24d6f2ea  e4c92146-55f7-b516-999d-3de29b582b71  0a1dd321-a2e4-dea2-a670-b32a9cfc6a24  ip-10-1-1-39.eu-west-1.compute.internal   group       13       stop     complete  2020-01-09T02:11:49Z  2020-01-09T02:14:08Z
14d4ea8b-829c-8bbc-c53e-fab8adc5e45e  ebbf0022-11c1-0595-cb45-f5d9fd53d39d  cbd4f21c-b93b-6397-88f2-85112a2af263  ip-10-1-2-197.eu-west-1.compute.internal  group       12       stop     complete  2020-01-09T02:10:40Z  2020-01-09T02:11:03Z
1f015182-87e7-3340-1353-cfd1e9948ea6  ebbf0022-11c1-0595-cb45-f5d9fd53d39d  2ef9d87a-eb1e-6015-ee2a-b2529f392f1a  ip-10-1-3-245.eu-west-1.compute.internal  group       12       stop     complete  2020-01-09T02:10:40Z  2020-01-09T02:11:00Z
f863f449-13b1-bf1e-cc5d-18a2568779ab  ebbf0022-11c1-0595-cb45-f5d9fd53d39d  cbd4f21c-b93b-6397-88f2-85112a2af263  ip-10-1-2-197.eu-west-1.compute.internal  group       12       stop     complete  2020-01-09T02:10:40Z  2020-01-09T02:11:03Z
2cde123e-c79a-b205-4f8f-35e8f37890d4  ebbf0022-11c1-0595-cb45-f5d9fd53d39d  0a1dd321-a2e4-dea2-a670-b32a9cfc6a24  ip-10-1-1-39.eu-west-1.compute.internal   group       12       stop     complete  2020-01-09T02:10:40Z  2020-01-09T02:11:00Z
20b8ad8e-b2cf-921b-daa4-12b2015b9e13  95fc979b-3289-c5d1-35f2-52401b87936f  d3f3c3e1-e9c2-f22d-4bd7-534cf5092c43  ip-10-1-1-150.eu-west-1.compute.internal  group       12       stop     complete  2020-01-09T02:10:09Z  2020-01-09T02:12:52Z
1e56f1cc-39f7-5876-1f5b-b23c81a6dcce  95fc979b-3289-c5d1-35f2-52401b87936f  2ef9d87a-eb1e-6015-ee2a-b2529f392f1a  ip-10-1-3-245.eu-west-1.compute.internal  group       12       stop     complete  2020-01-09T02:10:09Z  2020-01-09T02:12:51Z
069bb54d-b69f-9d84-605b-35082fb44067  95fc979b-3289-c5d1-35f2-52401b87936f  d3f3c3e1-e9c2-f22d-4bd7-534cf5092c43  ip-10-1-1-150.eu-west-1.compute.internal  group       12       stop     complete  2020-01-09T02:10:09Z  2020-01-09T02:12:52Z
9e746d91-e65e-27fc-6b93-3e5accbb699e  95fc979b-3289-c5d1-35f2-52401b87936f  cbd4f21c-b93b-6397-88f2-85112a2af263  ip-10-1-2-197.eu-west-1.compute.internal  group       12       stop     complete  2020-01-09T02:10:09Z  2020-01-09T02:12:51Z
32054c85-ded3-b861-4fb9-100c3c000021  6f14aee9-4481-84bb-a4db-45f68a2126da  0a1dd321-a2e4-dea2-a670-b32a9cfc6a24  ip-10-1-1-39.eu-west-1.compute.internal   group       11       stop     failed    2020-01-09T02:09:39Z  2020-01-09T02:10:40Z
81325c0e-726f-c7ab-f79a-c1981139793c  6f14aee9-4481-84bb-a4db-45f68a2126da  cbd4f21c-b93b-6397-88f2-85112a2af263  ip-10-1-2-197.eu-west-1.compute.internal  group       11       stop     failed    2020-01-09T02:09:39Z  2020-01-09T02:10:40Z
5c4ba750-92a7-2022-e31c-3a3b21117c5a  6f14aee9-4481-84bb-a4db-45f68a2126da  2ef9d87a-eb1e-6015-ee2a-b2529f392f1a  ip-10-1-3-245.eu-west-1.compute.internal  group       11       stop     failed    2020-01-09T02:09:39Z  2020-01-09T02:10:40Z
60c1723b-c482-1fe1-d10f-2c3e06c2f5f7  6f14aee9-4481-84bb-a4db-45f68a2126da  d3f3c3e1-e9c2-f22d-4bd7-534cf5092c43  ip-10-1-1-150.eu-west-1.compute.internal  group       11       stop     failed    2020-01-09T02:09:39Z  2020-01-09T02:10:40Z
70ea9058-d09f-e089-56c3-95d95dbac7ef  31fad478-cfe1-fa6f-51f4-50dadbb2e3c2  cbd4f21c-b93b-6397-88f2-85112a2af263  ip-10-1-2-197.eu-west-1.compute.internal  group       11       stop     failed    2020-01-09T02:09:07Z  2020-01-09T02:10:09Z
23e1ab73-5dfc-472e-1f73-7a9fdc947fda  31fad478-cfe1-fa6f-51f4-50dadbb2e3c2  d3f3c3e1-e9c2-f22d-4bd7-534cf5092c43  ip-10-1-1-150.eu-west-1.compute.internal  group       11       stop     failed    2020-01-09T02:09:07Z  2020-01-09T02:10:09Z
339e75ae-3e47-138a-27c7-a465cf924866  31fad478-cfe1-fa6f-51f4-50dadbb2e3c2  2ef9d87a-eb1e-6015-ee2a-b2529f392f1a  ip-10-1-3-245.eu-west-1.compute.internal  group       11       stop     failed    2020-01-09T02:09:07Z  2020-01-09T02:10:09Z
d085a72c-80bb-0116-5909-10a540634ea5  31fad478-cfe1-fa6f-51f4-50dadbb2e3c2  0a1dd321-a2e4-dea2-a670-b32a9cfc6a24  ip-10-1-1-39.eu-west-1.compute.internal   group       11       stop     failed    2020-01-09T02:09:07Z  2020-01-09T02:10:09Z
2f9bbb08-8ad8-a0f0-3357-92d3c9e1ccc6  ae526ee5-d8f3-61a6-b29a-b494004f707d  2ef9d87a-eb1e-6015-ee2a-b2529f392f1a  ip-10-1-3-245.eu-west-1.compute.internal  group       10       stop     complete  2020-01-09T02:07:47Z  2020-01-09T02:09:42Z
fc380148-d41e-53dd-3aa9-426e94fe748a  ae526ee5-d8f3-61a6-b29a-b494004f707d  cbd4f21c-b93b-6397-88f2-85112a2af263  ip-10-1-2-197.eu-west-1.compute.internal  group       10       stop     complete  2020-01-09T02:07:47Z  2020-01-09T02:09:42Z
fe5b0c56-e6b8-82f2-6510-c151901c63ee  ae526ee5-d8f3-61a6-b29a-b494004f707d  d3f3c3e1-e9c2-f22d-4bd7-534cf5092c43  ip-10-1-1-150.eu-west-1.compute.internal  group       10       stop     complete  2020-01-09T02:07:47Z  2020-01-09T02:09:42Z
1cc264b7-0307-931c-daa2-771246930e26  ae526ee5-d8f3-61a6-b29a-b494004f707d  0a1dd321-a2e4-dea2-a670-b32a9cfc6a24  ip-10-1-1-39.eu-west-1.compute.internal   group       10       stop     complete  2020-01-09T02:07:47Z  2020-01-09T02:09:42Z
a1ce087a-5b0f-0b03-ab82-e27c3d258516  fbf7b612-25d7-8573-2d48-211eaa821bd7  cbd4f21c-b93b-6397-88f2-85112a2af263  ip-10-1-2-197.eu-west-1.compute.internal  group       9        stop     complete  2020-01-09T02:06:48Z  2020-01-09T02:07:41Z
14a78060-1f84-24d9-806a-edceb6ffcf24  f5380797-072c-d7cd-9316-df6979682cfc  0a1dd321-a2e4-dea2-a670-b32a9cfc6a24  ip-10-1-1-39.eu-west-1.compute.internal   group       9        stop     complete  2020-01-09T02:06:46Z  2020-01-09T02:07:42Z
57eb7d27-0716-1cb3-824a-7b21aef94ced  8a448fa0-251b-980d-caa1-11f3e630b4ed  cbd4f21c-b93b-6397-88f2-85112a2af263  ip-10-1-2-197.eu-west-1.compute.internal  group       9        stop     complete  2020-01-09T02:06:45Z  2020-01-09T02:08:44Z
fd8cca4b-8d48-0b20-ad93-07e17d086354  8a448fa0-251b-980d-caa1-11f3e630b4ed  2ef9d87a-eb1e-6015-ee2a-b2529f392f1a  ip-10-1-3-245.eu-west-1.compute.internal  group       9        stop     complete  2020-01-09T02:06:45Z  2020-01-09T02:08:44Z
e052e732-969a-9c07-04bc-a118e62979d9  8a448fa0-251b-980d-caa1-11f3e630b4ed  d3f3c3e1-e9c2-f22d-4bd7-534cf5092c43  ip-10-1-1-150.eu-west-1.compute.internal  group       9        stop     complete  2020-01-09T02:06:45Z  2020-01-09T02:07:41Z
c6e8ca98-4a44-358f-40ba-543523e6f375  8a448fa0-251b-980d-caa1-11f3e630b4ed  cbd4f21c-b93b-6397-88f2-85112a2af263  ip-10-1-2-197.eu-west-1.compute.internal  group       9        stop     complete  2020-01-09T02:06:45Z  2020-01-09T02:08:45Z
e212b0ab-7242-3743-e643-32c62d19872a  66b50c3c-d16b-db32-5041-5c3a0e848a5a  cbd4f21c-b93b-6397-88f2-85112a2af263  ip-10-1-2-197.eu-west-1.compute.internal  group       8        stop     failed    2020-01-09T02:05:48Z  2020-01-09T02:06:45Z
19b1b7ea-d04c-aa7e-6159-e7e92aeb0453  8f255f58-9f36-5caa-a5f8-c9ac9ebc278f  cbd4f21c-b93b-6397-88f2-85112a2af263  ip-10-1-2-197.eu-west-1.compute.internal  group       8        stop     complete  2020-01-09T02:05:46Z  2020-01-09T02:05:50Z
18f7e3b7-bc3c-40bc-82f6-61088f55b6e0  97761a9d-c23d-97b8-0502-d1d99ae6cb23  d3f3c3e1-e9c2-f22d-4bd7-534cf5092c43  ip-10-1-1-150.eu-west-1.compute.internal  group       8        stop     failed    2020-01-09T02:05:44Z  2020-01-09T02:06:46Z
c9897f05-5b3c-9d3a-44f5-0ecacc7655dc  5163cafa-1c6f-7e8e-13d2-8fcaf52cf326  2ef9d87a-eb1e-6015-ee2a-b2529f392f1a  ip-10-1-3-245.eu-west-1.compute.internal  group       8        stop     failed    2020-01-09T02:05:41Z  2020-01-09T02:06:48Z
03e0ac19-7912-453b-02b0-5863be34e08d  85101abc-5fc0-9dfe-ae33-6631db74aa18  2ef9d87a-eb1e-6015-ee2a-b2529f392f1a  ip-10-1-3-245.eu-west-1.compute.internal  group       8        stop     failed    2020-01-09T02:05:08Z  2020-01-09T02:06:45Z
72f66c5c-b55b-8cf8-2237-65047f8af1a2  85101abc-5fc0-9dfe-ae33-6631db74aa18  2ef9d87a-eb1e-6015-ee2a-b2529f392f1a  ip-10-1-3-245.eu-west-1.compute.internal  group       8        stop     failed    2020-01-09T02:05:08Z  2020-01-09T02:06:45Z
6ffb6e7a-3c98-2524-f534-825a14c20ea5  85101abc-5fc0-9dfe-ae33-6631db74aa18  0a1dd321-a2e4-dea2-a670-b32a9cfc6a24  ip-10-1-1-39.eu-west-1.compute.internal   group       8        stop     failed    2020-01-09T02:05:08Z  2020-01-09T02:06:45Z
cae78ee9-174d-bdbb-46b5-90afc5e15b85  85101abc-5fc0-9dfe-ae33-6631db74aa18  d3f3c3e1-e9c2-f22d-4bd7-534cf5092c43  ip-10-1-1-150.eu-west-1.compute.internal  group       8        stop     failed    2020-01-09T02:05:08Z  2020-01-09T02:06:45Z
949193ed-85c0-1496-c79e-d183fd0982d4  b970a9a4-86e4-8e01-0f52-269dcc9b591d  2ef9d87a-eb1e-6015-ee2a-b2529f392f1a  ip-10-1-3-245.eu-west-1.compute.internal  group       7        stop     complete  2020-01-09T02:02:21Z  2020-01-09T02:04:27Z
395588cb-4033-f090-7515-891d81b08171  b970a9a4-86e4-8e01-0f52-269dcc9b591d  0a1dd321-a2e4-dea2-a670-b32a9cfc6a24  ip-10-1-1-39.eu-west-1.compute.internal   group       7        stop     complete  2020-01-09T02:02:21Z  2020-01-09T02:04:26Z
e2b1f187-8159-6cc6-d6c9-d40059bb41cd  eb0b76a3-e77c-b48c-513e-e5778735f784  cbd4f21c-b93b-6397-88f2-85112a2af263  ip-10-1-2-197.eu-west-1.compute.internal  group       7        stop     complete  2020-01-09T02:02:19Z  2020-01-09T02:05:49Z
57b4aac4-f25f-b912-d402-59e0e79dd040  8a448fa0-251b-980d-caa1-11f3e630b4ed  d3f3c3e1-e9c2-f22d-4bd7-534cf5092c43  ip-10-1-1-150.eu-west-1.compute.internal  group       9        stop     complete  2020-01-09T02:02:19Z  2020-01-09T02:08:44Z
a7b4a986-bce3-717a-bd8f-70050b706821  8a448fa0-251b-980d-caa1-11f3e630b4ed  0a1dd321-a2e4-dea2-a670-b32a9cfc6a24  ip-10-1-1-39.eu-west-1.compute.internal   group       9        stop     complete  2020-01-09T02:02:19Z  2020-01-09T02:07:42Z
ca751e45-16f3-5cbf-c6a0-ce8ea047bbf5  eb0b76a3-e77c-b48c-513e-e5778735f784  cbd4f21c-b93b-6397-88f2-85112a2af263  ip-10-1-2-197.eu-west-1.compute.internal  group       7        stop     complete  2020-01-09T02:02:19Z  2020-01-09T02:05:45Z
f4b1e34b-a25c-731c-6aeb-60e6a9e2eb6e  2966f427-8487-5dd1-48b3-61abb31b0865  cbd4f21c-b93b-6397-88f2-85112a2af263  ip-10-1-2-197.eu-west-1.compute.internal  group       6        stop     failed    2020-01-09T02:01:24Z  2020-01-09T02:02:19Z
9633f1be-cb7e-abf6-faa2-ab8105d50e14  e7c4e7cc-6143-43e5-4e95-83337084d54f  2ef9d87a-eb1e-6015-ee2a-b2529f392f1a  ip-10-1-3-245.eu-west-1.compute.internal  group       6        stop     failed    2020-01-09T02:01:18Z  2020-01-09T02:01:26Z
3e2792ca-99d4-c79c-a564-3de7bb84186f  e7c4e7cc-6143-43e5-4e95-83337084d54f  cbd4f21c-b93b-6397-88f2-85112a2af263  ip-10-1-2-197.eu-west-1.compute.internal  group       6        stop     failed    2020-01-09T02:01:18Z  2020-01-09T02:02:21Z
b2944dc3-9298-916f-6b01-acccf68a5980  e7c4e7cc-6143-43e5-4e95-83337084d54f  d3f3c3e1-e9c2-f22d-4bd7-534cf5092c43  ip-10-1-1-150.eu-west-1.compute.internal  group       6        stop     failed    2020-01-09T02:01:18Z  2020-01-09T02:02:21Z
9696e499-bea8-96dc-2bf5-a5ab3a08efde  64cc88db-517f-0935-e6f9-c4cb43e025e3  d3f3c3e1-e9c2-f22d-4bd7-534cf5092c43  ip-10-1-1-150.eu-west-1.compute.internal  group       6        stop     failed    2020-01-09T02:00:46Z  2020-01-09T02:02:19Z
6c1efd3b-61b0-a474-1455-5cb774d9c5c2  64cc88db-517f-0935-e6f9-c4cb43e025e3  2ef9d87a-eb1e-6015-ee2a-b2529f392f1a  ip-10-1-3-245.eu-west-1.compute.internal  group       6        stop     failed    2020-01-09T02:00:46Z  2020-01-09T02:02:19Z
3278022d-6c6f-e632-4201-036093e20ccc  64cc88db-517f-0935-e6f9-c4cb43e025e3  cbd4f21c-b93b-6397-88f2-85112a2af263  ip-10-1-2-197.eu-west-1.compute.internal  group       6        stop     failed    2020-01-09T02:00:46Z  2020-01-09T02:02:19Z
a140e046-7b56-8389-3095-4ffc2a08e0ff  64cc88db-517f-0935-e6f9-c4cb43e025e3  0a1dd321-a2e4-dea2-a670-b32a9cfc6a24  ip-10-1-1-39.eu-west-1.compute.internal   group       6        stop     failed    2020-01-09T02:00:46Z  2020-01-09T02:02:19Z
aa88d61b-2c4d-899b-b77b-f37be8f5c68c  f1c78432-a52a-69a7-bd97-10621c54b98c  cbd4f21c-b93b-6397-88f2-85112a2af263  ip-10-1-2-197.eu-west-1.compute.internal  group       5        stop     complete  2020-01-09T02:00:00Z  2020-01-09T02:01:23Z
c9078aa5-7a08-f599-1e96-0a4a32c1fe48  f1c78432-a52a-69a7-bd97-10621c54b98c  d3f3c3e1-e9c2-f22d-4bd7-534cf5092c43  ip-10-1-1-150.eu-west-1.compute.internal  group       5        stop     complete  2020-01-09T02:00:00Z  2020-01-09T02:01:28Z
6dbe1cd2-a071-f24a-4a3f-fa1102200d06  737118d9-8315-1fe4-e728-90e83f360233  d3f3c3e1-e9c2-f22d-4bd7-534cf5092c43  ip-10-1-1-150.eu-west-1.compute.internal  group       5        stop     complete  2020-01-09T01:59:23Z  2020-01-09T02:00:04Z
61a0664d-90e4-3de1-9459-27ea2369ad06  737118d9-8315-1fe4-e728-90e83f360233  cbd4f21c-b93b-6397-88f2-85112a2af263  ip-10-1-2-197.eu-west-1.compute.internal  group       5        stop     complete  2020-01-09T01:59:23Z  2020-01-09T02:00:03Z
8debdac9-e110-d2dd-cf7b-ecc77bc0d793  737118d9-8315-1fe4-e728-90e83f360233  0a1dd321-a2e4-dea2-a670-b32a9cfc6a24  ip-10-1-1-39.eu-west-1.compute.internal   group       5        stop     complete  2020-01-09T01:59:23Z  2020-01-09T02:01:22Z
c03aaaf6-9231-33d6-aa8f-1e7c27741a66  eb0b76a3-e77c-b48c-513e-e5778735f784  2ef9d87a-eb1e-6015-ee2a-b2529f392f1a  ip-10-1-3-245.eu-west-1.compute.internal  group       7        stop     complete  2020-01-09T01:59:23Z  2020-01-09T02:04:27Z
c7f631e1-8241-41eb-78c5-e03f310fb588  1ae921fd-2dde-2834-13e7-df0d067bde8c  cbd4f21c-b93b-6397-88f2-85112a2af263  ip-10-1-2-197.eu-west-1.compute.internal  group       4        stop     failed    2020-01-09T01:58:55Z  2020-01-09T01:59:23Z
0e1dfe10-a8d8-e6a2-c1e3-9e17aec95e3d  1ae921fd-2dde-2834-13e7-df0d067bde8c  2ef9d87a-eb1e-6015-ee2a-b2529f392f1a  ip-10-1-3-245.eu-west-1.compute.internal  group       4        stop     failed    2020-01-09T01:58:55Z  2020-01-09T01:59:23Z
2fd91cc2-0b8c-45d1-6437-430905ee06d2  1ae921fd-2dde-2834-13e7-df0d067bde8c  0a1dd321-a2e4-dea2-a670-b32a9cfc6a24  ip-10-1-1-39.eu-west-1.compute.internal   group       4        stop     failed    2020-01-09T01:58:55Z  2020-01-09T01:59:23Z
6b69b409-f89f-1305-94d7-9e1725d8fca4  1ae921fd-2dde-2834-13e7-df0d067bde8c  d3f3c3e1-e9c2-f22d-4bd7-534cf5092c43  ip-10-1-1-150.eu-west-1.compute.internal  group       4        stop     failed    2020-01-09T01:58:55Z  2020-01-09T01:59:23Z
dc7c936b-f572-9031-5dd9-3a6422dd5876  197375ba-2eb0-6fb1-7ae7-117884c7770a  d3f3c3e1-e9c2-f22d-4bd7-534cf5092c43  ip-10-1-1-150.eu-west-1.compute.internal  group       3        stop     failed    2020-01-09T01:58:09Z  2020-01-09T02:00:00Z
8688fa8a-b1bc-feef-5ee2-7e6c0010561f  468c7ddf-b4b9-6f7a-48c3-2f8025ef4cee  0a1dd321-a2e4-dea2-a670-b32a9cfc6a24  ip-10-1-1-39.eu-west-1.compute.internal   group       3        stop     failed    2020-01-09T01:57:59Z  2020-01-09T02:00:00Z
3b2dee6b-bf51-c3cb-98d9-6c07fe4aec9e  72ac2ba9-4f2f-6ce0-a41a-6da34f57667e  0a1dd321-a2e4-dea2-a670-b32a9cfc6a24  ip-10-1-1-39.eu-west-1.compute.internal   group       3        stop     failed    2020-01-09T01:56:06Z  2020-01-09T01:58:55Z
174e7343-e1ec-363e-a435-98456170e7d3  b6686bea-07f9-efec-1c3e-35d654b7b06d  2ef9d87a-eb1e-6015-ee2a-b2529f392f1a  ip-10-1-3-245.eu-west-1.compute.internal  group       3        stop     failed    2020-01-09T01:55:57Z  2020-01-09T01:57:59Z
92d9d73c-76b1-021e-3777-3213ae82fe73  571e60a0-4c28-b69d-4271-222f94e2c8aa  2ef9d87a-eb1e-6015-ee2a-b2529f392f1a  ip-10-1-3-245.eu-west-1.compute.internal  group       3        stop     failed    2020-01-09T01:54:59Z  2020-01-09T01:58:55Z
0244c177-adc6-c2c4-bf87-a2de53372b43  571e60a0-4c28-b69d-4271-222f94e2c8aa  d3f3c3e1-e9c2-f22d-4bd7-534cf5092c43  ip-10-1-1-150.eu-west-1.compute.internal  group       3        stop     failed    2020-01-09T01:54:59Z  2020-01-09T01:58:55Z
3c5520ef-b003-0ca4-21e4-37b6cb3179a8  b00df8cf-49b5-5288-fb35-9778a69768f3  d3f3c3e1-e9c2-f22d-4bd7-534cf5092c43  ip-10-1-1-150.eu-west-1.compute.internal  group       3        stop     failed    2020-01-09T01:54:54Z  2020-01-09T01:55:03Z
d15ac80d-85b3-1b0d-83f8-c6cd9fc17e3a  b00df8cf-49b5-5288-fb35-9778a69768f3  cbd4f21c-b93b-6397-88f2-85112a2af263  ip-10-1-2-197.eu-west-1.compute.internal  group       3        stop     failed    2020-01-09T01:54:54Z  2020-01-09T01:55:57Z
2fd20f1d-dc3e-a448-5058-25e276a0dfa3  3fa16907-8335-b063-13f9-ceea5e9d80c6  2ef9d87a-eb1e-6015-ee2a-b2529f392f1a  ip-10-1-3-245.eu-west-1.compute.internal  group       3        stop     failed    2020-01-09T01:54:22Z  2020-01-09T01:58:55Z
a3677ab8-286c-3d2c-d5e2-177421faae5c  3fa16907-8335-b063-13f9-ceea5e9d80c6  cbd4f21c-b93b-6397-88f2-85112a2af263  ip-10-1-2-197.eu-west-1.compute.internal  group       3        stop     failed    2020-01-09T01:54:22Z  2020-01-09T01:58:55Z
80f9bcbf-069a-76a7-e059-af4ed1f56636  3fa16907-8335-b063-13f9-ceea5e9d80c6  0a1dd321-a2e4-dea2-a670-b32a9cfc6a24  ip-10-1-1-39.eu-west-1.compute.internal   group       3        stop     failed    2020-01-09T01:54:22Z  2020-01-09T01:58:55Z
86e1a1c6-cfb5-a182-55a8-ddcd3aae35cb  3fa16907-8335-b063-13f9-ceea5e9d80c6  d3f3c3e1-e9c2-f22d-4bd7-534cf5092c43  ip-10-1-1-150.eu-west-1.compute.internal  group       3        stop     failed    2020-01-09T01:54:22Z  2020-01-09T01:58:55Z
c05893a7-f33e-d002-a1bf-59c9576d87bf  c91c38df-f91e-6273-ef5e-ab98401ee909  2ef9d87a-eb1e-6015-ee2a-b2529f392f1a  ip-10-1-3-245.eu-west-1.compute.internal  group       2        stop     complete  2020-01-09T01:53:17Z  2020-01-09T01:54:59Z
2f99515e-65dc-a9b1-5d02-856f603769d3  737118d9-8315-1fe4-e728-90e83f360233  d3f3c3e1-e9c2-f22d-4bd7-534cf5092c43  ip-10-1-1-150.eu-west-1.compute.internal  group       5        stop     complete  2020-01-09T01:53:17Z  2020-01-09T02:00:03Z
ace78e6d-d452-60b9-25d8-021ca57f0abc  737118d9-8315-1fe4-e728-90e83f360233  cbd4f21c-b93b-6397-88f2-85112a2af263  ip-10-1-2-197.eu-west-1.compute.internal  group       5        stop     complete  2020-01-09T01:53:17Z  2020-01-09T02:00:03Z
c9933654-7118-9609-421a-0a4ac65dabf6  c91c38df-f91e-6273-ef5e-ab98401ee909  0a1dd321-a2e4-dea2-a670-b32a9cfc6a24  ip-10-1-1-39.eu-west-1.compute.internal   group       2        stop     complete  2020-01-09T01:53:17Z  2020-01-09T01:55:05Z
dabaad0b-e78e-8b35-7a40-c87ee5e4269b  453560af-e641-ef20-6577-c2e9a7821cf2  cbd4f21c-b93b-6397-88f2-85112a2af263  ip-10-1-2-197.eu-west-1.compute.internal  group       1        stop     failed    2020-01-09T01:30:28Z  2020-01-09T01:34:35Z
9320be98-2b09-4c17-4f6f-70e1d4b6c80f  453560af-e641-ef20-6577-c2e9a7821cf2  2ef9d87a-eb1e-6015-ee2a-b2529f392f1a  ip-10-1-3-245.eu-west-1.compute.internal  group       1        stop     failed    2020-01-09T01:30:28Z  2020-01-09T01:34:35Z
6be0527e-41e2-5de8-c139-e1adc952d326  2121f647-ad6c-ca8c-9536-3244a2c39c77  0a1dd321-a2e4-dea2-a670-b32a9cfc6a24  ip-10-1-1-39.eu-west-1.compute.internal   group       1        stop     failed    2020-01-09T01:30:02Z  2020-01-09T01:34:10Z
8f6ac300-fd37-376b-a2dd-efc3d5e10232  19b93751-1652-5e36-f12e-2384b7de1c67  d3f3c3e1-e9c2-f22d-4bd7-534cf5092c43  ip-10-1-1-150.eu-west-1.compute.internal  group       1        stop     failed    2020-01-09T01:29:58Z  2020-01-09T01:34:35Z
76968e94-7639-78b8-93dc-5f065a3a9779  1de392be-d69f-00cd-edab-d78befd2f22f  cbd4f21c-b93b-6397-88f2-85112a2af263  ip-10-1-2-197.eu-west-1.compute.internal  group       1        stop     failed    2020-01-09T01:28:20Z  2020-01-09T01:34:10Z
f59761f8-426f-f4b3-76db-0aa9b427c087  1de392be-d69f-00cd-edab-d78befd2f22f  2ef9d87a-eb1e-6015-ee2a-b2529f392f1a  ip-10-1-3-245.eu-west-1.compute.internal  group       1        stop     failed    2020-01-09T01:28:20Z  2020-01-09T01:34:10Z
4416a5b8-4829-1ea1-a150-d773af0f4123  18412578-fdb0-01d7-c342-2dc45f246384  cbd4f21c-b93b-6397-88f2-85112a2af263  ip-10-1-2-197.eu-west-1.compute.internal  group       1        stop     failed    2020-01-09T01:27:56Z  2020-01-09T01:34:35Z
f93eb6ad-01aa-0f6c-ce9c-dc7f088246af  032db58c-de0b-69a0-932c-6eb4908f2ef6  d3f3c3e1-e9c2-f22d-4bd7-534cf5092c43  ip-10-1-1-150.eu-west-1.compute.internal  group       1        stop     failed    2020-01-09T01:27:55Z  2020-01-09T01:53:17Z
e8ec6b85-0d41-e028-25e1-8c915fece744  a7c96883-1112-4fb7-ad60-980e56ec26bb  d3f3c3e1-e9c2-f22d-4bd7-534cf5092c43  ip-10-1-1-150.eu-west-1.compute.internal  group       1        stop     failed    2020-01-09T01:27:16Z  2020-01-09T02:41:12Z
7f3fc9ad-7f5c-9781-2e2b-d97ef00d4653  a7c96883-1112-4fb7-ad60-980e56ec26bb  d3f3c3e1-e9c2-f22d-4bd7-534cf5092c43  ip-10-1-1-150.eu-west-1.compute.internal  group       1        stop     failed    2020-01-09T01:27:16Z  2020-01-09T01:34:10Z
9c52b953-ac91-0141-8a1e-ad4b7aaf4bbb  79372a1e-b2e8-6b1d-2a6b-5dcee5b22c5a  0a1dd321-a2e4-dea2-a670-b32a9cfc6a24  ip-10-1-1-39.eu-west-1.compute.internal   group       1        stop     failed    2020-01-09T01:26:53Z  2020-01-09T01:34:35Z
96be6b89-9533-0766-ffa1-704a7185794b  2919ac8d-71e7-76b3-7ed0-50c852a9d412  2ef9d87a-eb1e-6015-ee2a-b2529f392f1a  ip-10-1-3-245.eu-west-1.compute.internal  group       1        stop     failed    2020-01-09T01:26:47Z  2020-01-09T01:34:10Z
491a78dc-7a5d-4314-cce8-b70b3930fa1a  d5d2447c-af49-8a49-c5dd-eac44be1b4b3  cbd4f21c-b93b-6397-88f2-85112a2af263  ip-10-1-2-197.eu-west-1.compute.internal  group       1        stop     failed    2020-01-09T01:26:16Z  2020-01-09T02:41:11Z
a8d395e2-9d5e-9962-bf5c-61101a77df6c  d5d2447c-af49-8a49-c5dd-eac44be1b4b3  0a1dd321-a2e4-dea2-a670-b32a9cfc6a24  ip-10-1-1-39.eu-west-1.compute.internal   group       1        stop     failed    2020-01-09T01:26:16Z  2020-01-09T01:53:17Z
8e2fc502-b951-e6ff-f0e9-6bdee65ae9d6  d5d2447c-af49-8a49-c5dd-eac44be1b4b3  0a1dd321-a2e4-dea2-a670-b32a9cfc6a24  ip-10-1-1-39.eu-west-1.compute.internal   group       1        stop     failed    2020-01-09T01:26:16Z  2020-01-09T01:53:17Z
3ccfbe02-b965-5409-462d-8d2a582a6366  d5d2447c-af49-8a49-c5dd-eac44be1b4b3  2ef9d87a-eb1e-6015-ee2a-b2529f392f1a  ip-10-1-3-245.eu-west-1.compute.internal  group       1        stop     failed    2020-01-09T01:26:16Z  2020-01-09T01:53:17Z
4c41a4a1-4174-655a-5c1b-2c56fab2cfc9  41fd8cad-c4a4-fa59-4c1f-59b99b487f74  2ef9d87a-eb1e-6015-ee2a-b2529f392f1a  ip-10-1-3-245.eu-west-1.compute.internal  group       0        stop     complete  2020-01-09T01:24:51Z  2020-01-09T01:27:22Z
e84335d7-1bfd-0201-48b2-f95d8526ed43  41fd8cad-c4a4-fa59-4c1f-59b99b487f74  d3f3c3e1-e9c2-f22d-4bd7-534cf5092c43  ip-10-1-1-150.eu-west-1.compute.internal  group       0        stop     complete  2020-01-09T01:24:51Z  2020-01-09T02:36:34Z
216e6f6c-0c92-1172-200f-fb6faba804b3  41fd8cad-c4a4-fa59-4c1f-59b99b487f74  d3f3c3e1-e9c2-f22d-4bd7-534cf5092c43  ip-10-1-1-150.eu-west-1.compute.internal  group       0        stop     complete  2020-01-09T01:24:51Z  2020-01-09T02:32:22Z
32942140-55b9-008f-daca-193b0c17d2ef  41fd8cad-c4a4-fa59-4c1f-59b99b487f74  cbd4f21c-b93b-6397-88f2-85112a2af263  ip-10-1-2-197.eu-west-1.compute.internal  group       0        stop     complete  2020-01-09T01:24:51Z  2020-01-09T01:27:22Z

@drewbailey
Copy link
Contributor

drewbailey commented Jan 10, 2020

@kaspergrubbe is 4507daa1 still the latest deploy? It shows as not currently promoted; can you promote the deploy, then deploy the bad version, and let me know what happens?

If a canary deployment has auto_promote = false and is never promoted, a subsequent deploy will replace the running allocations of the non-promoted canary.
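For example (using the shortened deployment ID from your output), promoting it first would look like:

$ nomad deployment promote 4507daa1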

If you want to coordinate a time to chat more synchronously, I've set up a Gitter channel: https://gitter.im/hashicorp-nomad/issues-6864#

@kaspergrubbe
Copy link
Author

is 4507daa1 still the latest deploy?

No, there are newer deploys, as I did more testing. I can reset the test and focus on that.

If a canary deployment has auto_promote = false and is never promoted, a subsequent deploy will replace the running allocations of the non-promoted canary.

Oh, that does make sense, but not all of these deploys have been promoted, because it looked like the system took care of that by itself:

screenie_1578681359_8616

So you're saying even though the interface says N/A, it still needs a promotion?

@drewbailey
Copy link
Contributor

There are a few scenarios in which a canary deployment won't (and can't) be promoted: the initial deploy, and any healthy deploy after a previously failed deploy, won't require promotion. Does a failing job kill the allocations from deploy 4507daa1?
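For reference, whether a deployment still needs promotion shows up in the Promoted column of nomad deployment status (false means promotion is still pending). And if manual promotion is easy to miss, the update stanza can also promote healthy canaries automatically; a minimal sketch of just the relevant lines, keeping the rest of the stanza as in your job file:

update {
  canary       = 4
  auto_promote = true
}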

@kaspergrubbe
Copy link
Author

kaspergrubbe commented Jan 13, 2020

Hi @drewbailey, sorry I can't answer that, but I did a full test again, and I think I've got all the information that you need:

0. Intro

Our cluster is located in AWS eu-west-1 and runs Nomad 0.10.2 with Consul 1.6.2. The cluster has 3 dedicated Nomad servers and 3 dedicated Consul servers. Every Nomad client runs its own Consul client, and all *.consul DNS requests are sent to the local Consul client.

Consul-servers:

  • 3 * t3.micro (One in each AZ).

Nomad-servers:

  • 3 * t3.micro (One in each AZ).

Nomad-clients:

  • 4 * m5.large (Spread out in different AZs)
  • 3 * t3.micro (One in each AZ).

Healthy job (test.hcl):

job "failing-nomad-test01" {
  datacenters = ["eu-west-1"]
  type = "service"

  update {
    health_check = "checks"
    max_parallel = 4
    min_healthy_time = "10s"
    healthy_deadline = "3m"
    progress_deadline = "10m"
    auto_revert = false
    auto_promote = false
    canary = 4
  }

  migrate {
    max_parallel = 3
    health_check = "checks"
    min_healthy_time = "10s"
    healthy_deadline = "5m"
  }

  group "group" {
    count = 4

    restart {
      attempts = 0
      interval = "30m"
      delay = "15s"
      mode = "fail"
    }

    ephemeral_disk {
      size = 300
    }

    task "rails" {
      driver = "docker"

      env {
        RAILS_ENV = "production"
        RAILS_SERVE_STATIC_FILES = "1"
        RAILS_LOG_TO_STDOUT = "1"
        TEST = "2020-01-09 01:23:18 +0000"
      }

      config {
        image = "kaspergrubbe/diceapp:0.0.6"

        command = "bundle"
        args = ["exec", "unicorn", "-c", "/app/config/unicorn.rb"]

        port_map {
          web = 8080
        }

        dns_servers = ["172.17.0.1"]
      }

      resources {
        cpu    = 750
        memory = 250
        network {
          mbits = 50
          port "web" {}
        }
      }

      service {
        name = "failing-nomad-test00"
        tags = []
        port = "web"

        check {
          name     = "failing-nomad-test00 healthcheck"
          type     = "http"
          protocol = "http"
          path     = "/"
          interval = "5s"
          timeout  = "3s"
        }
      }
    }
  }
}

Failing job (test-fail.hcl):

job "failing-nomad-test01" {
  datacenters = ["eu-west-1"]
  type = "service"

  update {
    health_check = "checks"
    max_parallel = 4
    min_healthy_time = "10s"
    healthy_deadline = "3m"
    progress_deadline = "10m"
    auto_revert = false
    auto_promote = false
    canary = 4
  }

  migrate {
    max_parallel = 3
    health_check = "checks"
    min_healthy_time = "10s"
    healthy_deadline = "5m"
  }

  group "group" {
    count = 4

    restart {
      attempts = 0
      interval = "30m"
      delay = "15s"
      mode = "fail"
    }

    ephemeral_disk {
      size = 300
    }

    task "rails" {
      driver = "docker"

      env {
        RAILS_ENV = "production"
        RAILS_SERVE_STATIC_FILES = "1"
        RAILS_LOG_TO_STDOUT = "1"
        TEST = "2020-01-09 01:23:18 +0000"
      }

      config {
        image = "kaspergrubbe/diceapp:0.0.6"

        command = "false"

        port_map {
          web = 8080
        }

        dns_servers = ["172.17.0.1"]
      }

      resources {
        cpu    = 750
        memory = 250
        network {
          mbits = 50
          port "web" {}
        }
      }

      service {
        name = "failing-nomad-test00"
        tags = []
        port = "web"

        check {
          name     = "failing-nomad-test00 healthcheck"
          type     = "http"
          protocol = "http"
          path     = "/"
          interval = "5s"
          timeout  = "3s"
        }
      }
    }
  }
}

Job diff

$ diff test.hcl test-fail.hcl
<         command = "bundle"
<         args = ["exec", "unicorn", "-c", "/app/config/unicorn.rb"]
---
>         command = "false"

1. Start cluster logging

# Leader:
nomad monitor -server-id=leader -log-level=debug > leader.log

# Nodes:
nomad monitor -log-level=debug -node-id=e9cb946f > node-e9cb946f.log
nomad monitor -log-level=debug -node-id=c19cc863 > node-c19cc863.log
nomad monitor -log-level=debug -node-id=7f9a100f > node-7f9a100f.log
nomad monitor -log-level=debug -node-id=38adf12b > node-38adf12b.log
nomad monitor -log-level=debug -node-id=3db7ac14 > node-3db7ac14.log
nomad monitor -log-level=debug -node-id=639210c1 > node-639210c1.log
nomad monitor -log-level=debug -node-id=fa5e43ae > node-fa5e43ae.log
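
For completeness, the short node IDs used above can be listed with:

$ nomad node status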

@kaspergrubbe
Copy link
Author

kaspergrubbe commented Jan 13, 2020

2. Deploy first healthy job

$ nomad job run -verbose test.hcl
==> Monitoring evaluation "536dd256-e1c4-9090-113f-c4a6ced42f2a"
    Evaluation triggered by job "failing-nomad-test01"
    Allocation "0056be34-cec2-4b20-dfd0-91d65a9f4310" created: node "38adf12b-1404-b538-1c9e-7c8a39027b36", group "group"
    Allocation "24d6277f-dd1a-8935-d842-eb9431255d81" created: node "c19cc863-1a5a-f13b-ec58-d9c7bbb736e7", group "group"
    Allocation "d74a3b70-f071-2755-d9fa-d366874ea12e" created: node "3db7ac14-275a-3601-ae4d-0603fd064b06", group "group"
    Allocation "e3a2470a-d88a-c730-59b6-edef0e787556" created: node "7f9a100f-02ef-0782-3b88-8402e5355fbe", group "group"
    Evaluation within deployment: "731d8446-c70e-ad5e-fede-9ba8194c1eb1"
    Allocation "e3a2470a-d88a-c730-59b6-edef0e787556" status changed: "pending" -> "running" (Tasks are running)
    Allocation "0056be34-cec2-4b20-dfd0-91d65a9f4310" status changed: "pending" -> "running" (Tasks are running)
    Allocation "24d6277f-dd1a-8935-d842-eb9431255d81" status changed: "pending" -> "running" (Tasks are running)
    Allocation "d74a3b70-f071-2755-d9fa-d366874ea12e" status changed: "pending" -> "running" (Tasks are running)
    Evaluation status changed: "pending" -> "complete"
==> Evaluation "536dd256-e1c4-9090-113f-c4a6ced42f2a" finished with status "complete"

2.1 Get deployment status after self-promotion

$ nomad deployment status -verbose 731d8446-c70e-ad5e-fede-9ba8194c1eb1
ID          = 731d8446-c70e-ad5e-fede-9ba8194c1eb1
Job ID      = failing-nomad-test01
Job Version = 0
Status      = successful
Description = Deployment completed successfully

Deployed
Task Group  Desired  Placed  Healthy  Unhealthy  Progress Deadline
group       4        4       4        0          2020-01-13T03:18:49Z

2.2 Note down allocation IDs

24d6277f-dd1a-8935-d842-eb9431255d81
d74a3b70-f071-2755-d9fa-d366874ea12e
e3a2470a-d88a-c730-59b6-edef0e787556
0056be34-cec2-4b20-dfd0-91d65a9f4310

2.3 Get evaluation status

$ nomad eval status -verbose 536dd256-e1c4-9090-113f-c4a6ced42f2a
ID                 = 536dd256-e1c4-9090-113f-c4a6ced42f2a
Create Time        = 2020-01-13T03:08:31Z
Modify Time        = 2020-01-13T03:08:32Z
Status             = complete
Status Description = complete
Type               = service
TriggeredBy        = job-register
Job ID             = failing-nomad-test01
Priority           = 50
Placement Failures = false
Previous Eval      = <none>
Next Eval          = <none>
Blocked Eval       = <none>

@kaspergrubbe
Copy link
Author

kaspergrubbe commented Jan 13, 2020

3. Deploy failing job

$ nomad job run -verbose test-fail.hcl
==> Monitoring evaluation "377eecea-a038-6ad4-29f6-05ceb0f51858"
    Evaluation triggered by job "failing-nomad-test01"
    Allocation "2387bd9c-c05f-8ccf-e873-438992645cfe" created: node "e9cb946f-e830-0818-e705-e6406c501583", group "group"
    Allocation "b98dec9b-332f-fb59-04d6-5848505202f3" created: node "639210c1-987f-a8b7-7a1e-d7e1db587f27", group "group"
    Allocation "d9d10e6e-8253-64fc-1bf6-619bd32a6827" created: node "7f9a100f-02ef-0782-3b88-8402e5355fbe", group "group"
    Allocation "e9cc157e-4f56-0b39-62e0-ec8ffc805859" created: node "fa5e43ae-fb1d-be84-d41a-4705b108c681", group "group"
    Evaluation within deployment: "f8ef8532-75c3-1685-21b8-bde47d87beee"
    Allocation "2387bd9c-c05f-8ccf-e873-438992645cfe" status changed: "pending" -> "failed" (Failed tasks)
    Allocation "b98dec9b-332f-fb59-04d6-5848505202f3" status changed: "pending" -> "failed" (Failed tasks)
    Allocation "d9d10e6e-8253-64fc-1bf6-619bd32a6827" status changed: "pending" -> "running" (Tasks are running)
    Allocation "e9cc157e-4f56-0b39-62e0-ec8ffc805859" status changed: "pending" -> "failed" (Failed tasks)
    Evaluation status changed: "pending" -> "complete"
==> Evaluation "377eecea-a038-6ad4-29f6-05ceb0f51858" finished with status "complete"

3.1 Fail deployment

$ nomad deployment fail f8ef8532-75c3-1685-21b8-bde47d87beee
Deployment "f8ef8532-75c3-1685-21b8-bde47d87beee" failed

==> Monitoring evaluation "f64b8405"
    Evaluation triggered by job "failing-nomad-test01"
    Evaluation within deployment: "f8ef8532"
    Evaluation status changed: "pending" -> "complete"
==> Evaluation "f64b8405" finished with status "complete"

3.2 Note down allocation IDs

0a43fa24-61e8-d261-5301-d24f792a5f1d
1169e48d-3cf1-acff-f094-e54a0d566e4f
1a6c612d-8aa4-09f5-a228-de4907d9fd6b
2387bd9c-c05f-8ccf-e873-438992645cfe
3abfc083-c586-fb55-e490-0b9622aa1005
8b90ca29-810b-3181-ab2b-b54307faf998
b98dec9b-332f-fb59-04d6-5848505202f3
c2ade919-0f84-2795-dc76-110794f75fa4
d2657214-7019-c7e1-28fb-96f52ea1bef8
d9d10e6e-8253-64fc-1bf6-619bd32a6827
e9cc157e-4f56-0b39-62e0-ec8ffc805859

3.3 Get evaluation status

$ nomad eval status -verbose 377eecea-a038-6ad4-29f6-05ceb0f51858
ID                 = 377eecea-a038-6ad4-29f6-05ceb0f51858
Create Time        = 2020-01-13T03:10:57Z
Modify Time        = 2020-01-13T03:10:58Z
Status             = complete
Status Description = complete
Type               = service
TriggeredBy        = job-register
Job ID             = failing-nomad-test01
Priority           = 50
Placement Failures = false
Previous Eval      = <none>
Next Eval          = <none>
Blocked Eval       = <none>

3.4 Get deployment status

$ nomad deployment status f8ef8532-75c3-1685-21b8-bde47d87beee
ID          = f8ef8532
Job ID      = failing-nomad-test01
Job Version = 1
Status      = failed
Description = Deployment marked as failed

Deployed
Task Group  Promoted  Desired  Canaries  Placed  Healthy  Unhealthy  Progress Deadline
group       false     4        4         11      0        11         2020-01-13T03:20:58Z

3.5 Notes

At this point there is still one healthy allocation running from the first deploy:

24d6277f-dd1a-8935-d842-eb9431255d81

I also find it strange that Nomad starts 11 allocations when the job specifies restart { attempts = 0, mode = "fail" }.
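
If I read the docs right, those extra placements come from rescheduling rather than restarting: the restart stanza only governs in-place restarts on the same node, while replacement allocations are controlled by the reschedule stanza, which defaults to unlimited for service jobs. A sketch of what should disable it (not verified on this cluster):

reschedule {
  attempts  = 0
  unlimited = false
}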

@kaspergrubbe
Copy link
Author

4. Re-deploy healthy job

$ nomad job run -verbose test.hcl
==> Monitoring evaluation "81de9158-44f1-f2b9-5a93-1dd1d0972b75"
    Evaluation triggered by job "failing-nomad-test01"
    Evaluation within deployment: "a7641e73-922b-9149-7fe9-f21e56c4b8d8"
    Allocation "5ce27208-d182-cb21-12e5-bf4755812aee" created: node "7f9a100f-02ef-0782-3b88-8402e5355fbe", group "group"
    Allocation "e8f15755-9a49-94bb-69b4-e9d9546a09a4" created: node "3db7ac14-275a-3601-ae4d-0603fd064b06", group "group"
    Allocation "24d6277f-dd1a-8935-d842-eb9431255d81" modified: node "c19cc863-1a5a-f13b-ec58-d9c7bbb736e7", group "group"
    Allocation "4c448781-6d59-22fc-cc54-f62d5278b8cd" created: node "e9cb946f-e830-0818-e705-e6406c501583", group "group"
    Evaluation status changed: "pending" -> "complete"
==> Evaluation "81de9158-44f1-f2b9-5a93-1dd1d0972b75" finished with status "complete"

4.1 Note down allocation IDs

24d6277f-dd1a-8935-d842-eb9431255d81
4c448781-6d59-22fc-cc54-f62d5278b8cd
5ce27208-d182-cb21-12e5-bf4755812aee
e8f15755-9a49-94bb-69b4-e9d9546a09a4

4.2 Get evaluation status

$ nomad eval status -verbose 81de9158-44f1-f2b9-5a93-1dd1d0972b75
ID                 = 81de9158-44f1-f2b9-5a93-1dd1d0972b75
Create Time        = 2020-01-13T03:19:14Z
Modify Time        = 2020-01-13T03:19:14Z
Status             = complete
Status Description = complete
Type               = service
TriggeredBy        = job-register
Job ID             = failing-nomad-test01
Priority           = 50
Placement Failures = false
Previous Eval      = <none>
Next Eval          = <none>
Blocked Eval       = <none>

4.3 Get deployment status

$ nomad deployment status a7641e73-922b-9149-7fe9-f21e56c4b8d8
ID          = a7641e73
Job ID      = failing-nomad-test01
Job Version = 2
Status      = successful
Description = Deployment completed successfully

Deployed
Task Group  Desired  Placed  Healthy  Unhealthy  Progress Deadline
group       4        4       4        0          2020-01-13T03:29:31Z

@kaspergrubbe
Copy link
Author

5. Aftermath

5.1 Print job status

$ nomad job status -verbose failing-nomad-test01
ID            = failing-nomad-test01
Name          = failing-nomad-test01
Submit Date   = 2020-01-13T03:19:14Z
Type          = service
Priority      = 50
Datacenters   = eu-west-1
Status        = running
Periodic      = false
Parameterized = false

Summary
Task Group  Queued  Starting  Running  Failed  Complete  Lost
group       0       0         4        11      3         0

Evaluations
ID                                    Priority  Triggered By        Status    Placement Failures
cca065ba-6c4e-dda1-2708-9727f47703c1  50        deployment-watcher  complete  false
e43c7c77-2b19-6bbb-8612-259090dc56df  50        deployment-watcher  complete  false
de21dc78-0059-1b91-d656-f9a5f26bc8c2  50        deployment-watcher  complete  false
81de9158-44f1-f2b9-5a93-1dd1d0972b75  50        job-register        complete  false
f64b8405-a6c4-c585-9f31-3ff10edee611  50        deployment-watcher  complete  false
0f30d2e2-7969-b951-a5c6-08e17620dbf8  50        alloc-failure       complete  false
e01c0c3e-54b1-148f-9a72-b830f004eb45  50        deployment-watcher  complete  false
d0c3749f-a4ea-d896-2215-0fde674f3155  50        alloc-failure       complete  false
56bff8ce-1689-529c-b046-f8e15aca4e8a  50        alloc-failure       complete  false
b420148e-cd5e-8dfd-b2bf-250279f79042  50        alloc-failure       complete  false
15ef5f36-d1ba-46c0-e4cc-be31403912f8  50        alloc-failure       complete  false
0845a82b-31a4-2412-5dd7-77d4af4a2aec  50        deployment-watcher  complete  false
c254fdfa-8f16-e0f5-cf57-73a580050126  50        alloc-failure       complete  false
e4e11c59-d934-5a4d-2825-22f39791e182  50        alloc-failure       complete  false
56347c30-7a7e-166f-3626-3418ce79bea7  50        alloc-failure       complete  false
d4afd6be-2f55-16ae-23bc-4cb5547829b6  50        alloc-failure       complete  false
b4292141-e322-0854-5d8e-f3b0ad9e92f2  50        alloc-failure       complete  false
5731bcff-8600-ce6b-9d90-c3af0ab6213a  50        deployment-watcher  complete  false
5376c4f3-0b62-7700-126e-fa71a9be819b  50        alloc-failure       complete  false
aacb2734-bc6a-adcc-b2eb-9af99d1a0752  50        alloc-failure       complete  false
9074ea5d-f08a-4b26-a882-65e32d95cb0d  50        alloc-failure       complete  false
bfc9b801-b547-b6e7-688d-25fd113953fe  50        alloc-failure       complete  false
377eecea-a038-6ad4-29f6-05ceb0f51858  50        job-register        complete  false
8bb1cc43-56a8-e5e4-26fc-4cb9317d25b0  50        deployment-watcher  complete  false
de86cb7f-0893-f108-3109-fd9bb5fa8e0d  50        deployment-watcher  complete  false
4d10fe98-6c13-feb3-510f-2573018c06ea  50        deployment-watcher  complete  false
536dd256-e1c4-9090-113f-c4a6ced42f2a  50        job-register        complete  false

Latest Deployment
ID          = a7641e73-922b-9149-7fe9-f21e56c4b8d8
Status      = successful
Description = Deployment completed successfully

Deployed
Task Group  Desired  Placed  Healthy  Unhealthy  Progress Deadline
group       4        4       4        0          2020-01-13T03:29:31Z

Allocations
ID                                    Eval ID                               Node ID                               Node Name                                 Task Group  Version  Desired  Status    Created               Modified
4c448781-6d59-22fc-cc54-f62d5278b8cd  81de9158-44f1-f2b9-5a93-1dd1d0972b75  e9cb946f-e830-0818-e705-e6406c501583  ip-10-1-1-93.eu-west-1.compute.internal   group       2        run      running   2020-01-13T03:19:14Z  2020-01-13T03:19:31Z
5ce27208-d182-cb21-12e5-bf4755812aee  81de9158-44f1-f2b9-5a93-1dd1d0972b75  7f9a100f-02ef-0782-3b88-8402e5355fbe  ip-10-1-3-23.eu-west-1.compute.internal   group       2        run      running   2020-01-13T03:19:14Z  2020-01-13T03:19:31Z
e8f15755-9a49-94bb-69b4-e9d9546a09a4  81de9158-44f1-f2b9-5a93-1dd1d0972b75  3db7ac14-275a-3601-ae4d-0603fd064b06  ip-10-1-3-174.eu-west-1.compute.internal  group       2        run      running   2020-01-13T03:19:14Z  2020-01-13T03:19:27Z
c2ade919-0f84-2795-dc76-110794f75fa4  15ef5f36-d1ba-46c0-e4cc-be31403912f8  7f9a100f-02ef-0782-3b88-8402e5355fbe  ip-10-1-3-23.eu-west-1.compute.internal   group       1        stop     failed    2020-01-13T03:12:30Z  2020-01-13T03:14:30Z
1a6c612d-8aa4-09f5-a228-de4907d9fd6b  15ef5f36-d1ba-46c0-e4cc-be31403912f8  3db7ac14-275a-3601-ae4d-0603fd064b06  ip-10-1-3-174.eu-west-1.compute.internal  group       1        stop     failed    2020-01-13T03:12:30Z  2020-01-13T03:14:30Z
d2657214-7019-c7e1-28fb-96f52ea1bef8  15ef5f36-d1ba-46c0-e4cc-be31403912f8  e9cb946f-e830-0818-e705-e6406c501583  ip-10-1-1-93.eu-west-1.compute.internal   group       1        stop     failed    2020-01-13T03:12:30Z  2020-01-13T03:14:30Z
3abfc083-c586-fb55-e490-0b9622aa1005  b4292141-e322-0854-5d8e-f3b0ad9e92f2  7f9a100f-02ef-0782-3b88-8402e5355fbe  ip-10-1-3-23.eu-west-1.compute.internal   group       1        stop     failed    2020-01-13T03:11:29Z  2020-01-13T03:14:30Z
0a43fa24-61e8-d261-5301-d24f792a5f1d  b4292141-e322-0854-5d8e-f3b0ad9e92f2  e9cb946f-e830-0818-e705-e6406c501583  ip-10-1-1-93.eu-west-1.compute.internal   group       1        stop     failed    2020-01-13T03:11:29Z  2020-01-13T03:14:30Z
1169e48d-3cf1-acff-f094-e54a0d566e4f  b4292141-e322-0854-5d8e-f3b0ad9e92f2  fa5e43ae-fb1d-be84-d41a-4705b108c681  ip-10-1-1-238.eu-west-1.compute.internal  group       1        stop     failed    2020-01-13T03:11:29Z  2020-01-13T03:14:30Z
8b90ca29-810b-3181-ab2b-b54307faf998  b4292141-e322-0854-5d8e-f3b0ad9e92f2  639210c1-987f-a8b7-7a1e-d7e1db587f27  ip-10-1-1-7.eu-west-1.compute.internal    group       1        stop     failed    2020-01-13T03:11:29Z  2020-01-13T03:14:30Z
e9cc157e-4f56-0b39-62e0-ec8ffc805859  377eecea-a038-6ad4-29f6-05ceb0f51858  fa5e43ae-fb1d-be84-d41a-4705b108c681  ip-10-1-1-238.eu-west-1.compute.internal  group       1        stop     failed    2020-01-13T03:10:58Z  2020-01-13T03:19:14Z
b98dec9b-332f-fb59-04d6-5848505202f3  377eecea-a038-6ad4-29f6-05ceb0f51858  639210c1-987f-a8b7-7a1e-d7e1db587f27  ip-10-1-1-7.eu-west-1.compute.internal    group       1        stop     failed    2020-01-13T03:10:58Z  2020-01-13T03:19:14Z
2387bd9c-c05f-8ccf-e873-438992645cfe  377eecea-a038-6ad4-29f6-05ceb0f51858  e9cb946f-e830-0818-e705-e6406c501583  ip-10-1-1-93.eu-west-1.compute.internal   group       1        stop     failed    2020-01-13T03:10:58Z  2020-01-13T03:19:14Z
d9d10e6e-8253-64fc-1bf6-619bd32a6827  377eecea-a038-6ad4-29f6-05ceb0f51858  7f9a100f-02ef-0782-3b88-8402e5355fbe  ip-10-1-3-23.eu-west-1.compute.internal   group       1        stop     failed    2020-01-13T03:10:58Z  2020-01-13T03:19:14Z
d74a3b70-f071-2755-d9fa-d366874ea12e  536dd256-e1c4-9090-113f-c4a6ced42f2a  3db7ac14-275a-3601-ae4d-0603fd064b06  ip-10-1-3-174.eu-west-1.compute.internal  group       0        stop     complete  2020-01-13T03:08:32Z  2020-01-13T03:11:31Z
e3a2470a-d88a-c730-59b6-edef0e787556  536dd256-e1c4-9090-113f-c4a6ced42f2a  7f9a100f-02ef-0782-3b88-8402e5355fbe  ip-10-1-3-23.eu-west-1.compute.internal   group       0        stop     complete  2020-01-13T03:08:32Z  2020-01-13T03:11:31Z
24d6277f-dd1a-8935-d842-eb9431255d81  81de9158-44f1-f2b9-5a93-1dd1d0972b75  c19cc863-1a5a-f13b-ec58-d9c7bbb736e7  ip-10-1-2-32.eu-west-1.compute.internal   group       2        run      running   2020-01-13T03:08:32Z  2020-01-13T03:19:25Z
0056be34-cec2-4b20-dfd0-91d65a9f4310  536dd256-e1c4-9090-113f-c4a6ced42f2a  38adf12b-1404-b538-1c9e-7c8a39027b36  ip-10-1-2-118.eu-west-1.compute.internal  group       0        stop     complete  2020-01-13T03:08:32Z  2020-01-13T03:11:31Z

@kaspergrubbe
Copy link
Author

5.2 Print all allocation statuses

24d6277f-dd1a-8935-d842-eb9431255d81

ID                  = 24d6277f-dd1a-8935-d842-eb9431255d81
Eval ID             = 81de9158-44f1-f2b9-5a93-1dd1d0972b75
Name                = failing-nomad-test01.group[1]
Node ID             = c19cc863-1a5a-f13b-ec58-d9c7bbb736e7
Node Name           = ip-10-1-2-32.eu-west-1.compute.internal
Job ID              = failing-nomad-test01
Job Version         = 2
Client Status       = running
Client Description  = Tasks are running
Desired Status      = run
Desired Description = <none>
Created             = 2020-01-13T03:08:32Z
Modified            = 2020-01-13T03:19:25Z
Deployment ID       = a7641e73-922b-9149-7fe9-f21e56c4b8d8
Deployment Health   = healthy
Evaluated Nodes     = 3
Filtered Nodes      = 0
Exhausted Nodes     = 0
Allocation Time     = 67.372µs
Failures            = 0

Task "rails" is "running"
Task Resources
CPU        Memory           Disk     Addresses
0/750 MHz  186 MiB/250 MiB  300 MiB  web: 10.1.2.32:25121

Task Events:
Started At     = 2020-01-13T03:08:33Z
Finished At    = N/A
Total Restarts = 0
Last Restart   = N/A

Recent Events:
Time                  Type        Description
2020-01-13T03:08:33Z  Started     Task started by client
2020-01-13T03:08:32Z  Task Setup  Building Task Directory
2020-01-13T03:08:32Z  Received    Task received by client

Placement Metrics
Node                                  binpack  job-anti-affinity  node-affinity  node-reschedule-penalty  final score
c19cc863-1a5a-f13b-ec58-d9c7bbb736e7  0.877    0                  0              0                        0.877
7f9a100f-02ef-0782-3b88-8402e5355fbe  0.808    0                  0              0                        0.808
e9cb946f-e830-0818-e705-e6406c501583  0.64     0                  0              0                        0.64

d74a3b70-f071-2755-d9fa-d366874ea12e

ID                  = d74a3b70-f071-2755-d9fa-d366874ea12e
Eval ID             = 536dd256-e1c4-9090-113f-c4a6ced42f2a
Name                = failing-nomad-test01.group[0]
Node ID             = 3db7ac14-275a-3601-ae4d-0603fd064b06
Node Name           = ip-10-1-3-174.eu-west-1.compute.internal
Job ID              = failing-nomad-test01
Job Version         = 0
Client Status       = complete
Client Description  = All tasks have completed
Desired Status      = stop
Desired Description = alloc not needed due to job update
Created             = 2020-01-13T03:08:32Z
Modified            = 2020-01-13T03:11:31Z
Deployment ID       = 731d8446-c70e-ad5e-fede-9ba8194c1eb1
Deployment Health   = healthy
Evaluated Nodes     = 3
Filtered Nodes      = 0
Exhausted Nodes     = 0
Allocation Time     = 141.202µs
Failures            = 0

Task "rails" is "dead"
Task Resources
CPU         Memory           Disk     Addresses
10/750 MHz  115 MiB/250 MiB  300 MiB  web: 10.1.3.174:25061

Task Events:
Started At     = 2020-01-13T03:08:32Z
Finished At    = 2020-01-13T03:11:30Z
Total Restarts = 0
Last Restart   = N/A

Recent Events:
Time                  Type        Description
2020-01-13T03:11:30Z  Killed      Task successfully killed
2020-01-13T03:11:30Z  Terminated  Exit Code: 0
2020-01-13T03:11:30Z  Killing     Sent interrupt. Waiting 5s before force killing
2020-01-13T03:08:32Z  Started     Task started by client
2020-01-13T03:08:32Z  Task Setup  Building Task Directory
2020-01-13T03:08:32Z  Received    Task received by client

Placement Metrics
Node                                  binpack  job-anti-affinity  node-affinity  node-reschedule-penalty  final score
3db7ac14-275a-3601-ae4d-0603fd064b06  0.869    0                  0              0                        0.869
38adf12b-1404-b538-1c9e-7c8a39027b36  0.424    0                  0              0                        0.424
639210c1-987f-a8b7-7a1e-d7e1db587f27  0.382    0                  0              0                        0.382

e3a2470a-d88a-c730-59b6-edef0e787556

ID                  = e3a2470a-d88a-c730-59b6-edef0e787556
Eval ID             = 536dd256-e1c4-9090-113f-c4a6ced42f2a
Name                = failing-nomad-test01.group[3]
Node ID             = 7f9a100f-02ef-0782-3b88-8402e5355fbe
Node Name           = ip-10-1-3-23.eu-west-1.compute.internal
Job ID              = failing-nomad-test01
Job Version         = 0
Client Status       = complete
Client Description  = All tasks have completed
Desired Status      = stop
Desired Description = alloc not needed due to job update
Created             = 2020-01-13T03:08:32Z
Modified            = 2020-01-13T03:11:31Z
Deployment ID       = 731d8446-c70e-ad5e-fede-9ba8194c1eb1
Deployment Health   = healthy
Evaluated Nodes     = 4
Filtered Nodes      = 0
Exhausted Nodes     = 1
Allocation Time     = 60.026µs
Failures            = 0

Task "rails" is "dead"
Task Resources
CPU        Memory           Disk     Addresses
0/750 MHz  123 MiB/250 MiB  300 MiB  web: 10.1.3.23:28025

Task Events:
Started At     = 2020-01-13T03:08:32Z
Finished At    = 2020-01-13T03:11:31Z
Total Restarts = 0
Last Restart   = N/A

Recent Events:
Time                  Type        Description
2020-01-13T03:11:31Z  Killed      Task successfully killed
2020-01-13T03:11:30Z  Terminated  Exit Code: 0
2020-01-13T03:11:30Z  Killing     Sent interrupt. Waiting 5s before force killing
2020-01-13T03:08:32Z  Started     Task started by client
2020-01-13T03:08:32Z  Task Setup  Building Task Directory
2020-01-13T03:08:32Z  Received    Task received by client

Placement Metrics
  * Resources exhausted on 1 nodes
  * Dimension "memory" exhausted on 1 nodes
Node                                  binpack  job-anti-affinity  node-affinity  node-reschedule-penalty  final score
7f9a100f-02ef-0782-3b88-8402e5355fbe  0.808    0                  0              0                        0.808
e9cb946f-e830-0818-e705-e6406c501583  0.64     0                  0              0                        0.64
fa5e43ae-fb1d-be84-d41a-4705b108c681  0.261    0                  0              0                        0.261

0056be34-cec2-4b20-dfd0-91d65a9f4310

ID                  = 0056be34-cec2-4b20-dfd0-91d65a9f4310
Eval ID             = 536dd256-e1c4-9090-113f-c4a6ced42f2a
Name                = failing-nomad-test01.group[2]
Node ID             = 38adf12b-1404-b538-1c9e-7c8a39027b36
Node Name           = ip-10-1-2-118.eu-west-1.compute.internal
Job ID              = failing-nomad-test01
Job Version         = 0
Client Status       = complete
Client Description  = All tasks have completed
Desired Status      = stop
Desired Description = alloc not needed due to job update
Created             = 2020-01-13T03:08:32Z
Modified            = 2020-01-13T03:11:31Z
Deployment ID       = 731d8446-c70e-ad5e-fede-9ba8194c1eb1
Deployment Health   = healthy
Evaluated Nodes     = 4
Filtered Nodes      = 0
Exhausted Nodes     = 1
Allocation Time     = 59.969µs
Failures            = 0

Task "rails" is "dead"
Task Resources
CPU        Memory           Disk     Addresses
0/750 MHz  112 MiB/250 MiB  300 MiB  web: 10.1.2.118:27656

Task Events:
Started At     = 2020-01-13T03:08:32Z
Finished At    = 2020-01-13T03:11:30Z
Total Restarts = 0
Last Restart   = N/A

Recent Events:
Time                  Type        Description
2020-01-13T03:11:30Z  Killed      Task successfully killed
2020-01-13T03:11:30Z  Terminated  Exit Code: 0
2020-01-13T03:11:30Z  Killing     Sent interrupt. Waiting 5s before force killing
2020-01-13T03:08:32Z  Started     Task started by client
2020-01-13T03:08:32Z  Task Setup  Building Task Directory
2020-01-13T03:08:32Z  Received    Task received by client

Placement Metrics
  * Resources exhausted on 1 nodes
  * Dimension "cpu" exhausted on 1 nodes
Node                                  binpack  job-anti-affinity  node-affinity  node-reschedule-penalty  final score
38adf12b-1404-b538-1c9e-7c8a39027b36  0.424    0                  0              0                        0.424
639210c1-987f-a8b7-7a1e-d7e1db587f27  0.382    0                  0              0                        0.382
fa5e43ae-fb1d-be84-d41a-4705b108c681  0.261    0                  0              0                        0.261

0a43fa24-61e8-d261-5301-d24f792a5f1d

ID                   = 0a43fa24-61e8-d261-5301-d24f792a5f1d
Eval ID              = b4292141-e322-0854-5d8e-f3b0ad9e92f2
Name                 = failing-nomad-test01.group[2]
Node ID              = e9cb946f-e830-0818-e705-e6406c501583
Node Name            = ip-10-1-1-93.eu-west-1.compute.internal
Job ID               = failing-nomad-test01
Job Version          = 1
Client Status        = failed
Client Description   = Failed tasks
Desired Status       = stop
Desired Description  = alloc not needed due to job update
Created              = 2020-01-13T03:11:29Z
Modified             = 2020-01-13T03:14:30Z
Deployment ID        = f8ef8532-75c3-1685-21b8-bde47d87beee
Deployment Health    = unhealthy
Canary               = true
Replacement Alloc ID = c2ade919-0f84-2795-dc76-110794f75fa4
Evaluated Nodes      = 7
Filtered Nodes       = 0
Exhausted Nodes      = 2
Allocation Time      = 123.303µs
Failures             = 0

Task "rails" is "dead"
Task Resources
CPU      Memory   Disk     Addresses
750 MHz  250 MiB  300 MiB  web: 10.1.1.93:25325

Task Events:
Started At     = 2020-01-13T03:11:29Z
Finished At    = 2020-01-13T03:11:30Z
Total Restarts = 0
Last Restart   = N/A

Recent Events:
Time                  Type             Description
2020-01-13T03:11:30Z  Alloc Unhealthy  Unhealthy because of failed task
2020-01-13T03:11:30Z  Not Restarting   Policy allows no restarts
2020-01-13T03:11:29Z  Terminated       Exit Code: 1, Exit Message: "Docker container exited with non-zero exit code: 1"
2020-01-13T03:11:29Z  Started          Task started by client
2020-01-13T03:11:29Z  Task Setup       Building Task Directory
2020-01-13T03:11:29Z  Received         Task received by client

Placement Metrics
  * Resources exhausted on 2 nodes
  * Dimension "cpu" exhausted on 1 nodes
  * Dimension "memory" exhausted on 1 nodes
Node                                  binpack  job-anti-affinity  node-affinity  node-reschedule-penalty  final score
e9cb946f-e830-0818-e705-e6406c501583  0.64     0                  0              0                        0.64
7f9a100f-02ef-0782-3b88-8402e5355fbe  0.914    -0.5               0              0                        0.207
38adf12b-1404-b538-1c9e-7c8a39027b36  0.515    -0.5               0              0                        0.00761
639210c1-987f-a8b7-7a1e-d7e1db587f27  0.479    -0.5               0              0                        -0.0103
fa5e43ae-fb1d-be84-d41a-4705b108c681  0.261    0                  0              -1                       -0.369

1169e48d-3cf1-acff-f094-e54a0d566e4f

ID                   = 1169e48d-3cf1-acff-f094-e54a0d566e4f
Eval ID              = b4292141-e322-0854-5d8e-f3b0ad9e92f2
Name                 = failing-nomad-test01.group[0]
Node ID              = fa5e43ae-fb1d-be84-d41a-4705b108c681
Node Name            = ip-10-1-1-238.eu-west-1.compute.internal
Job ID               = failing-nomad-test01
Job Version          = 1
Client Status        = failed
Client Description   = Failed tasks
Desired Status       = stop
Desired Description  = alloc not needed due to job update
Created              = 2020-01-13T03:11:29Z
Modified             = 2020-01-13T03:14:30Z
Deployment ID        = f8ef8532-75c3-1685-21b8-bde47d87beee
Deployment Health    = unhealthy
Canary               = true
Replacement Alloc ID = 1a6c612d-8aa4-09f5-a228-de4907d9fd6b
Evaluated Nodes      = 7
Filtered Nodes       = 0
Exhausted Nodes      = 2
Allocation Time      = 251.445µs
Failures             = 0

Task "rails" is "dead"
Task Resources
CPU      Memory   Disk     Addresses
750 MHz  250 MiB  300 MiB  web: 10.1.1.238:24884

Task Events:
Started At     = 2020-01-13T03:11:29Z
Finished At    = 2020-01-13T03:11:30Z
Total Restarts = 0
Last Restart   = N/A

Recent Events:
Time                  Type             Description
2020-01-13T03:11:30Z  Alloc Unhealthy  Unhealthy because of failed task
2020-01-13T03:11:30Z  Not Restarting   Policy allows no restarts
2020-01-13T03:11:30Z  Terminated       Exit Code: 1, Exit Message: "Docker container exited with non-zero exit code: 1"
2020-01-13T03:11:29Z  Started          Task started by client
2020-01-13T03:11:29Z  Task Setup       Building Task Directory
2020-01-13T03:11:29Z  Received         Task received by client

Placement Metrics
  * Resources exhausted on 2 nodes
  * Dimension "memory" exhausted on 1 nodes
  * Dimension "cpu" exhausted on 1 nodes
Node                                  binpack  job-anti-affinity  node-affinity  node-reschedule-penalty  final score
fa5e43ae-fb1d-be84-d41a-4705b108c681  0.261    0                  0              0                        0.261
7f9a100f-02ef-0782-3b88-8402e5355fbe  0.914    -0.5               0              0                        0.207
38adf12b-1404-b538-1c9e-7c8a39027b36  0.515    -0.5               0              0                        0.00761
639210c1-987f-a8b7-7a1e-d7e1db587f27  0.479    -0.5               0              0                        -0.0103
e9cb946f-e830-0818-e705-e6406c501583  0.808    -0.5               0              -1                       -0.231

1a6c612d-8aa4-09f5-a228-de4907d9fd6b

ID                  = 1a6c612d-8aa4-09f5-a228-de4907d9fd6b
Eval ID             = 15ef5f36-d1ba-46c0-e4cc-be31403912f8
Name                = failing-nomad-test01.group[0]
Node ID             = 3db7ac14-275a-3601-ae4d-0603fd064b06
Node Name           = ip-10-1-3-174.eu-west-1.compute.internal
Job ID              = failing-nomad-test01
Job Version         = 1
Client Status       = failed
Client Description  = Failed tasks
Desired Status      = stop
Desired Description = alloc not needed due to job update
Created             = 2020-01-13T03:12:30Z
Modified            = 2020-01-13T03:14:30Z
Deployment ID       = f8ef8532-75c3-1685-21b8-bde47d87beee
Deployment Health   = unhealthy
Canary              = true
Evaluated Nodes     = 5
Filtered Nodes      = 0
Exhausted Nodes     = 0
Allocation Time     = 309.943µs
Failures            = 0

Task "rails" is "dead"
Task Resources
CPU        Memory       Disk     Addresses
0/750 MHz  0 B/250 MiB  300 MiB  web: 10.1.3.174:27358

Task Events:
Started At     = 2020-01-13T03:12:30Z
Finished At    = 2020-01-13T03:12:30Z
Total Restarts = 0
Last Restart   = N/A

Recent Events:
Time                  Type             Description
2020-01-13T03:12:30Z  Alloc Unhealthy  Unhealthy because of failed task
2020-01-13T03:12:30Z  Not Restarting   Policy allows no restarts
2020-01-13T03:12:30Z  Terminated       Exit Code: 1, Exit Message: "Docker container exited with non-zero exit code: 1"
2020-01-13T03:12:30Z  Started          Task started by client
2020-01-13T03:12:30Z  Task Setup       Building Task Directory
2020-01-13T03:12:30Z  Received         Task received by client

Placement Metrics
Node                                  binpack  job-anti-affinity  node-affinity  node-reschedule-penalty  final score
3db7ac14-275a-3601-ae4d-0603fd064b06  0.869    0                  0              0                        0.869
7f9a100f-02ef-0782-3b88-8402e5355fbe  0.808    0                  0              0                        0.808
639210c1-987f-a8b7-7a1e-d7e1db587f27  0.382    0                  0              0                        0.382
e9cb946f-e830-0818-e705-e6406c501583  0.64     0                  0              -1                       -0.18
fa5e43ae-fb1d-be84-d41a-4705b108c681  0.261    0                  0              -1                       -0.369

2387bd9c-c05f-8ccf-e873-438992645cfe

ID                   = 2387bd9c-c05f-8ccf-e873-438992645cfe
Eval ID              = 377eecea-a038-6ad4-29f6-05ceb0f51858
Name                 = failing-nomad-test01.group[0]
Node ID              = e9cb946f-e830-0818-e705-e6406c501583
Node Name            = ip-10-1-1-93.eu-west-1.compute.internal
Job ID               = failing-nomad-test01
Job Version          = 1
Client Status        = failed
Client Description   = Failed tasks
Desired Status       = stop
Desired Description  = alloc not needed due to job update
Created              = 2020-01-13T03:10:58Z
Modified             = 2020-01-13T03:19:14Z
Deployment ID        = f8ef8532-75c3-1685-21b8-bde47d87beee
Deployment Health    = unhealthy
Canary               = true
Replacement Alloc ID = 1169e48d-3cf1-acff-f094-e54a0d566e4f
Evaluated Nodes      = 4
Filtered Nodes       = 0
Exhausted Nodes      = 1
Allocation Time      = 151.917µs
Failures             = 0

Task "rails" is "dead"
Task Resources
CPU      Memory   Disk     Addresses
750 MHz  250 MiB  300 MiB  web: 10.1.1.93:22747

Task Events:
Started At     = 2020-01-13T03:10:58Z
Finished At    = 2020-01-13T03:10:59Z
Total Restarts = 0
Last Restart   = N/A

Recent Events:
Time                  Type             Description
2020-01-13T03:10:59Z  Alloc Unhealthy  Unhealthy because of failed task
2020-01-13T03:10:59Z  Not Restarting   Policy allows no restarts
2020-01-13T03:10:59Z  Terminated       Exit Code: 1, Exit Message: "Docker container exited with non-zero exit code: 1"
2020-01-13T03:10:58Z  Started          Task started by client
2020-01-13T03:10:58Z  Task Setup       Building Task Directory
2020-01-13T03:10:58Z  Received         Task received by client

Placement Metrics
  * Resources exhausted on 1 nodes
  * Dimension "memory" exhausted on 1 nodes
Node                                  binpack  job-anti-affinity  node-affinity  node-reschedule-penalty  final score
e9cb946f-e830-0818-e705-e6406c501583  0.64     0                  0              0                        0.64
fa5e43ae-fb1d-be84-d41a-4705b108c681  0.261    0                  0              0                        0.261
7f9a100f-02ef-0782-3b88-8402e5355fbe  0.914    -0.5               0              0                        0.207

3abfc083-c586-fb55-e490-0b9622aa1005

ID                  = 3abfc083-c586-fb55-e490-0b9622aa1005
Eval ID             = b4292141-e322-0854-5d8e-f3b0ad9e92f2
Name                = failing-nomad-test01.group[1]
Node ID             = 7f9a100f-02ef-0782-3b88-8402e5355fbe
Node Name           = ip-10-1-3-23.eu-west-1.compute.internal
Job ID              = failing-nomad-test01
Job Version         = 1
Client Status       = failed
Client Description  = Failed tasks
Desired Status      = stop
Desired Description = alloc not needed due to job update
Created             = 2020-01-13T03:11:29Z
Modified            = 2020-01-13T03:14:30Z
Deployment ID       = f8ef8532-75c3-1685-21b8-bde47d87beee
Deployment Health   = unhealthy
Canary              = true
Evaluated Nodes     = 7
Filtered Nodes      = 0
Exhausted Nodes     = 2
Allocation Time     = 101.021µs
Failures            = 0

Task "rails" is "dead"
Task Resources
CPU      Memory   Disk     Addresses
750 MHz  250 MiB  300 MiB  web: 10.1.3.23:21286

Task Events:
Started At     = 2020-01-13T03:11:29Z
Finished At    = 2020-01-13T03:11:30Z
Total Restarts = 0
Last Restart   = N/A

Recent Events:
Time                  Type             Description
2020-01-13T03:11:30Z  Alloc Unhealthy  Unhealthy because of failed task
2020-01-13T03:11:30Z  Not Restarting   Policy allows no restarts
2020-01-13T03:11:30Z  Terminated       Exit Code: 1, Exit Message: "Docker container exited with non-zero exit code: 1"
2020-01-13T03:11:29Z  Started          Task started by client
2020-01-13T03:11:29Z  Task Setup       Building Task Directory
2020-01-13T03:11:29Z  Received         Task received by client

Placement Metrics
  * Resources exhausted on 2 nodes
  * Dimension "memory" exhausted on 1 nodes
  * Dimension "cpu" exhausted on 1 nodes
Node                                  binpack  job-anti-affinity  node-affinity  node-reschedule-penalty  final score
7f9a100f-02ef-0782-3b88-8402e5355fbe  0.914    -0.5               0              0                        0.207
e9cb946f-e830-0818-e705-e6406c501583  0.808    -0.5               0              0                        0.154
38adf12b-1404-b538-1c9e-7c8a39027b36  0.515    -0.5               0              0                        0.00761
fa5e43ae-fb1d-be84-d41a-4705b108c681  0.382    -0.5               0              0                        -0.0591
639210c1-987f-a8b7-7a1e-d7e1db587f27  0.479    -0.5               0              -1                       -0.34

8b90ca29-810b-3181-ab2b-b54307faf998

ID                   = 8b90ca29-810b-3181-ab2b-b54307faf998
Eval ID              = b4292141-e322-0854-5d8e-f3b0ad9e92f2
Name                 = failing-nomad-test01.group[3]
Node ID              = 639210c1-987f-a8b7-7a1e-d7e1db587f27
Node Name            = ip-10-1-1-7.eu-west-1.compute.internal
Job ID               = failing-nomad-test01
Job Version          = 1
Client Status        = failed
Client Description   = Failed tasks
Desired Status       = stop
Desired Description  = alloc not needed due to job update
Created              = 2020-01-13T03:11:29Z
Modified             = 2020-01-13T03:14:30Z
Deployment ID        = f8ef8532-75c3-1685-21b8-bde47d87beee
Deployment Health    = unhealthy
Canary               = true
Replacement Alloc ID = d2657214-7019-c7e1-28fb-96f52ea1bef8
Evaluated Nodes      = 4
Filtered Nodes       = 0
Exhausted Nodes      = 1
Allocation Time      = 162.945µs
Failures             = 0

Task "rails" is "dead"
Task Resources
CPU        Memory       Disk     Addresses
0/750 MHz  0 B/250 MiB  300 MiB  web: 10.1.1.7:23948

Task Events:
Started At     = 2020-01-13T03:11:29Z
Finished At    = 2020-01-13T03:11:30Z
Total Restarts = 0
Last Restart   = N/A

Recent Events:
Time                  Type            Description
2020-01-13T03:11:30Z  Not Restarting  Policy allows no restarts
2020-01-13T03:11:30Z  Terminated      Exit Code: 1, Exit Message: "Docker container exited with non-zero exit code: 1"
2020-01-13T03:11:29Z  Started         Task started by client
2020-01-13T03:11:29Z  Task Setup      Building Task Directory
2020-01-13T03:11:29Z  Received        Task received by client

Placement Metrics
  * Resources exhausted on 1 nodes
  * Dimension "cpu" exhausted on 1 nodes
Node                                  binpack  job-anti-affinity  node-affinity  node-reschedule-penalty  final score
639210c1-987f-a8b7-7a1e-d7e1db587f27  0.382    0                  0              0                        0.382
fa5e43ae-fb1d-be84-d41a-4705b108c681  0.261    0                  0              0                        0.261
38adf12b-1404-b538-1c9e-7c8a39027b36  0.515    -0.5               0              0                        0.00761

b98dec9b-332f-fb59-04d6-5848505202f3

ID                   = b98dec9b-332f-fb59-04d6-5848505202f3
Eval ID              = 377eecea-a038-6ad4-29f6-05ceb0f51858
Name                 = failing-nomad-test01.group[1]
Node ID              = 639210c1-987f-a8b7-7a1e-d7e1db587f27
Node Name            = ip-10-1-1-7.eu-west-1.compute.internal
Job ID               = failing-nomad-test01
Job Version          = 1
Client Status        = failed
Client Description   = Failed tasks
Desired Status       = stop
Desired Description  = alloc not needed due to job update
Created              = 2020-01-13T03:10:58Z
Modified             = 2020-01-13T03:19:14Z
Deployment ID        = f8ef8532-75c3-1685-21b8-bde47d87beee
Deployment Health    = unhealthy
Canary               = true
Replacement Alloc ID = 3abfc083-c586-fb55-e490-0b9622aa1005
Evaluated Nodes      = 4
Filtered Nodes       = 0
Exhausted Nodes      = 1
Allocation Time      = 90.271µs
Failures             = 0

Task "rails" is "dead"
Task Resources
CPU      Memory   Disk     Addresses
750 MHz  250 MiB  300 MiB  web: 10.1.1.7:30636

Task Events:
Started At     = 2020-01-13T03:10:58Z
Finished At    = 2020-01-13T03:10:59Z
Total Restarts = 0
Last Restart   = N/A

Recent Events:
Time                  Type             Description
2020-01-13T03:10:59Z  Alloc Unhealthy  Unhealthy because of failed task
2020-01-13T03:10:59Z  Not Restarting   Policy allows no restarts
2020-01-13T03:10:59Z  Terminated       Exit Code: 1, Exit Message: "Docker container exited with non-zero exit code: 1"
2020-01-13T03:10:58Z  Started          Task started by client
2020-01-13T03:10:58Z  Task Setup       Building Task Directory
2020-01-13T03:10:58Z  Received         Task received by client

Placement Metrics
  * Resources exhausted on 1 nodes
  * Dimension "cpu" exhausted on 1 nodes
Node                                  binpack  job-anti-affinity  node-affinity  node-reschedule-penalty  final score
639210c1-987f-a8b7-7a1e-d7e1db587f27  0.382    0                  0              0                        0.382
7f9a100f-02ef-0782-3b88-8402e5355fbe  0.914    -0.5               0              0                        0.207
38adf12b-1404-b538-1c9e-7c8a39027b36  0.515    -0.5               0              0                        0.00761

c2ade919-0f84-2795-dc76-110794f75fa4

ID                  = c2ade919-0f84-2795-dc76-110794f75fa4
Eval ID             = 15ef5f36-d1ba-46c0-e4cc-be31403912f8
Name                = failing-nomad-test01.group[2]
Node ID             = 7f9a100f-02ef-0782-3b88-8402e5355fbe
Node Name           = ip-10-1-3-23.eu-west-1.compute.internal
Job ID              = failing-nomad-test01
Job Version         = 1
Client Status       = failed
Client Description  = Failed tasks
Desired Status      = stop
Desired Description = alloc not needed due to job update
Created             = 2020-01-13T03:12:30Z
Modified            = 2020-01-13T03:14:30Z
Deployment ID       = f8ef8532-75c3-1685-21b8-bde47d87beee
Deployment Health   = unhealthy
Canary              = true
Evaluated Nodes     = 6
Filtered Nodes      = 0
Exhausted Nodes     = 2
Allocation Time     = 85.059µs
Failures            = 0

Task "rails" is "dead"
Task Resources
CPU      Memory   Disk     Addresses
750 MHz  250 MiB  300 MiB  web: 10.1.3.23:31392

Task Events:
Started At     = 2020-01-13T03:12:30Z
Finished At    = 2020-01-13T03:12:31Z
Total Restarts = 0
Last Restart   = N/A

Recent Events:
Time                  Type             Description
2020-01-13T03:12:31Z  Alloc Unhealthy  Unhealthy because of failed task
2020-01-13T03:12:31Z  Not Restarting   Policy allows no restarts
2020-01-13T03:12:31Z  Terminated       Exit Code: 1, Exit Message: "Docker container exited with non-zero exit code: 1"
2020-01-13T03:12:30Z  Started          Task started by client
2020-01-13T03:12:30Z  Task Setup       Building Task Directory
2020-01-13T03:12:30Z  Received         Task received by client

Placement Metrics
  * Resources exhausted on 2 nodes
  * Dimension "cpu" exhausted on 1 nodes
  * Dimension "memory" exhausted on 1 nodes
Node                                  binpack  job-anti-affinity  node-affinity  node-reschedule-penalty  final score
7f9a100f-02ef-0782-3b88-8402e5355fbe  0.808    0                  0              0                        0.808
38adf12b-1404-b538-1c9e-7c8a39027b36  0.424    0                  0              0                        0.424
639210c1-987f-a8b7-7a1e-d7e1db587f27  0.382    0                  0              0                        0.382
fa5e43ae-fb1d-be84-d41a-4705b108c681  0.261    0                  0              -1                       -0.369

d2657214-7019-c7e1-28fb-96f52ea1bef8

ID                  = d2657214-7019-c7e1-28fb-96f52ea1bef8
Eval ID             = 15ef5f36-d1ba-46c0-e4cc-be31403912f8
Name                = failing-nomad-test01.group[3]
Node ID             = e9cb946f-e830-0818-e705-e6406c501583
Node Name           = ip-10-1-1-93.eu-west-1.compute.internal
Job ID              = failing-nomad-test01
Job Version         = 1
Client Status       = failed
Client Description  = Failed tasks
Desired Status      = stop
Desired Description = alloc not needed due to job update
Created             = 2020-01-13T03:12:30Z
Modified            = 2020-01-13T03:14:30Z
Deployment ID       = f8ef8532-75c3-1685-21b8-bde47d87beee
Deployment Health   = unhealthy
Canary              = true
Evaluated Nodes     = 6
Filtered Nodes      = 0
Exhausted Nodes     = 2
Allocation Time     = 108.256µs
Failures            = 0

Task "rails" is "dead"
Task Resources
CPU      Memory   Disk     Addresses
750 MHz  250 MiB  300 MiB  web: 10.1.1.93:27342

Task Events:
Started At     = 2020-01-13T03:12:30Z
Finished At    = 2020-01-13T03:12:30Z
Total Restarts = 0
Last Restart   = N/A

Recent Events:
Time                  Type             Description
2020-01-13T03:12:30Z  Alloc Unhealthy  Unhealthy because of failed task
2020-01-13T03:12:30Z  Not Restarting   Policy allows no restarts
2020-01-13T03:12:30Z  Terminated       Exit Code: 1, Exit Message: "Docker container exited with non-zero exit code: 1"
2020-01-13T03:12:30Z  Started          Task started by client
2020-01-13T03:12:30Z  Task Setup       Building Task Directory
2020-01-13T03:12:30Z  Received         Task received by client

Placement Metrics
  * Resources exhausted on 2 nodes
  * Dimension "memory" exhausted on 1 nodes
  * Dimension "cpu" exhausted on 1 nodes
Node                                  binpack  job-anti-affinity  node-affinity  node-reschedule-penalty  final score
e9cb946f-e830-0818-e705-e6406c501583  0.64     0                  0              0                        0.64
38adf12b-1404-b538-1c9e-7c8a39027b36  0.424    0                  0              0                        0.424
fa5e43ae-fb1d-be84-d41a-4705b108c681  0.261    0                  0              0                        0.261
639210c1-987f-a8b7-7a1e-d7e1db587f27  0.382    0                  0              -1                       -0.309

d9d10e6e-8253-64fc-1bf6-619bd32a6827

ID                   = d9d10e6e-8253-64fc-1bf6-619bd32a6827
Eval ID              = 377eecea-a038-6ad4-29f6-05ceb0f51858
Name                 = failing-nomad-test01.group[3]
Node ID              = 7f9a100f-02ef-0782-3b88-8402e5355fbe
Node Name            = ip-10-1-3-23.eu-west-1.compute.internal
Job ID               = failing-nomad-test01
Job Version          = 1
Client Status        = failed
Client Description   = Failed tasks
Desired Status       = stop
Desired Description  = alloc not needed due to job update
Created              = 2020-01-13T03:10:58Z
Modified             = 2020-01-13T03:19:14Z
Deployment ID        = f8ef8532-75c3-1685-21b8-bde47d87beee
Deployment Health    = unhealthy
Canary               = true
Replacement Alloc ID = 8b90ca29-810b-3181-ab2b-b54307faf998
Evaluated Nodes      = 7
Filtered Nodes       = 0
Exhausted Nodes      = 2
Allocation Time      = 88.892µs
Failures             = 0

Task "rails" is "dead"
Task Resources
CPU      Memory   Disk     Addresses
750 MHz  250 MiB  300 MiB  web: 10.1.3.23:26688

Task Events:
Started At     = 2020-01-13T03:10:59Z
Finished At    = 2020-01-13T03:10:59Z
Total Restarts = 0
Last Restart   = N/A

Recent Events:
Time                  Type             Description
2020-01-13T03:10:59Z  Alloc Unhealthy  Unhealthy because of failed task
2020-01-13T03:10:59Z  Not Restarting   Policy allows no restarts
2020-01-13T03:10:59Z  Terminated       Exit Code: 1, Exit Message: "Docker container exited with non-zero exit code: 1"
2020-01-13T03:10:59Z  Started          Task started by client
2020-01-13T03:10:58Z  Task Setup       Building Task Directory
2020-01-13T03:10:58Z  Received         Task received by client

Placement Metrics
  * Resources exhausted on 2 nodes
  * Dimension "memory" exhausted on 1 nodes
  * Dimension "cpu" exhausted on 1 nodes
Node                                  binpack  job-anti-affinity  node-affinity  node-reschedule-penalty  final score
7f9a100f-02ef-0782-3b88-8402e5355fbe  0.914    -0.5               0              0                        0.207
e9cb946f-e830-0818-e705-e6406c501583  0.808    -0.5               0              0                        0.154
38adf12b-1404-b538-1c9e-7c8a39027b36  0.515    -0.5               0              0                        0.00761
639210c1-987f-a8b7-7a1e-d7e1db587f27  0.479    -0.5               0              0                        -0.0103
fa5e43ae-fb1d-be84-d41a-4705b108c681  0.382    -0.5               0              0                        -0.0591

e9cc157e-4f56-0b39-62e0-ec8ffc805859

ID                   = e9cc157e-4f56-0b39-62e0-ec8ffc805859
Eval ID              = 377eecea-a038-6ad4-29f6-05ceb0f51858
Name                 = failing-nomad-test01.group[2]
Node ID              = fa5e43ae-fb1d-be84-d41a-4705b108c681
Node Name            = ip-10-1-1-238.eu-west-1.compute.internal
Job ID               = failing-nomad-test01
Job Version          = 1
Client Status        = failed
Client Description   = Failed tasks
Desired Status       = stop
Desired Description  = alloc not needed due to job update
Created              = 2020-01-13T03:10:58Z
Modified             = 2020-01-13T03:19:14Z
Deployment ID        = f8ef8532-75c3-1685-21b8-bde47d87beee
Deployment Health    = unhealthy
Canary               = true
Replacement Alloc ID = 0a43fa24-61e8-d261-5301-d24f792a5f1d
Evaluated Nodes      = 6
Filtered Nodes       = 0
Exhausted Nodes      = 2
Allocation Time      = 76.962µs
Failures             = 0

Task "rails" is "dead"
Task Resources
CPU        Memory       Disk     Addresses
0/750 MHz  0 B/250 MiB  300 MiB  web: 10.1.1.238:28181

Task Events:
Started At     = 2020-01-13T03:10:58Z
Finished At    = 2020-01-13T03:10:59Z
Total Restarts = 0
Last Restart   = N/A

Recent Events:
Time                  Type             Description
2020-01-13T03:10:59Z  Alloc Unhealthy  Unhealthy because of failed task
2020-01-13T03:10:59Z  Not Restarting   Policy allows no restarts
2020-01-13T03:10:59Z  Terminated       Exit Code: 1, Exit Message: "Docker container exited with non-zero exit code: 1"
2020-01-13T03:10:58Z  Started          Task started by client
2020-01-13T03:10:58Z  Task Setup       Building Task Directory
2020-01-13T03:10:58Z  Received         Task received by client

Placement Metrics
  * Resources exhausted on 2 nodes
  * Dimension "memory" exhausted on 1 nodes
  * Dimension "cpu" exhausted on 1 nodes
Node                                  binpack  job-anti-affinity  node-affinity  node-reschedule-penalty  final score
fa5e43ae-fb1d-be84-d41a-4705b108c681  0.261    0                  0              0                        0.261
e9cb946f-e830-0818-e705-e6406c501583  0.808    -0.5               0              0                        0.154
38adf12b-1404-b538-1c9e-7c8a39027b36  0.515    -0.5               0              0                        0.00761
639210c1-987f-a8b7-7a1e-d7e1db587f27  0.479    -0.5               0              0                        -0.0103

24d6277f-dd1a-8935-d842-eb9431255d81

ID                  = 24d6277f-dd1a-8935-d842-eb9431255d81
Eval ID             = 81de9158-44f1-f2b9-5a93-1dd1d0972b75
Name                = failing-nomad-test01.group[1]
Node ID             = c19cc863-1a5a-f13b-ec58-d9c7bbb736e7
Node Name           = ip-10-1-2-32.eu-west-1.compute.internal
Job ID              = failing-nomad-test01
Job Version         = 2
Client Status       = running
Client Description  = Tasks are running
Desired Status      = run
Desired Description = <none>
Created             = 2020-01-13T03:08:32Z
Modified            = 2020-01-13T03:19:25Z
Deployment ID       = a7641e73-922b-9149-7fe9-f21e56c4b8d8
Deployment Health   = healthy
Evaluated Nodes     = 3
Filtered Nodes      = 0
Exhausted Nodes     = 0
Allocation Time     = 67.372µs
Failures            = 0

Task "rails" is "running"
Task Resources
CPU        Memory           Disk     Addresses
0/750 MHz  187 MiB/250 MiB  300 MiB  web: 10.1.2.32:25121

Task Events:
Started At     = 2020-01-13T03:08:33Z
Finished At    = N/A
Total Restarts = 0
Last Restart   = N/A

Recent Events:
Time                  Type        Description
2020-01-13T03:08:33Z  Started     Task started by client
2020-01-13T03:08:32Z  Task Setup  Building Task Directory
2020-01-13T03:08:32Z  Received    Task received by client

Placement Metrics
Node                                  binpack  job-anti-affinity  node-affinity  node-reschedule-penalty  final score
c19cc863-1a5a-f13b-ec58-d9c7bbb736e7  0.877    0                  0              0                        0.877
7f9a100f-02ef-0782-3b88-8402e5355fbe  0.808    0                  0              0                        0.808
e9cb946f-e830-0818-e705-e6406c501583  0.64     0                  0              0                        0.64

4c448781-6d59-22fc-cc54-f62d5278b8cd

ID                  = 4c448781-6d59-22fc-cc54-f62d5278b8cd
Eval ID             = 81de9158-44f1-f2b9-5a93-1dd1d0972b75
Name                = failing-nomad-test01.group[3]
Node ID             = e9cb946f-e830-0818-e705-e6406c501583
Node Name           = ip-10-1-1-93.eu-west-1.compute.internal
Job ID              = failing-nomad-test01
Job Version         = 2
Client Status       = running
Client Description  = Tasks are running
Desired Status      = run
Desired Description = <none>
Created             = 2020-01-13T03:19:14Z
Modified            = 2020-01-13T03:19:31Z
Deployment ID       = a7641e73-922b-9149-7fe9-f21e56c4b8d8
Deployment Health   = healthy
Evaluated Nodes     = 5
Filtered Nodes      = 0
Exhausted Nodes     = 2
Allocation Time     = 66.305µs
Failures            = 0

Task "rails" is "running"
Task Resources
CPU        Memory           Disk     Addresses
0/750 MHz  160 MiB/250 MiB  300 MiB  web: 10.1.1.93:20793

Task Events:
Started At     = 2020-01-13T03:19:15Z
Finished At    = N/A
Total Restarts = 0
Last Restart   = N/A

Recent Events:
Time                  Type        Description
2020-01-13T03:19:15Z  Started     Task started by client
2020-01-13T03:19:14Z  Task Setup  Building Task Directory
2020-01-13T03:19:14Z  Received    Task received by client

Placement Metrics
  * Resources exhausted on 2 nodes
  * Dimension "memory" exhausted on 1 nodes
  * Dimension "cpu" exhausted on 1 nodes
Node                                  binpack  job-anti-affinity  node-affinity  node-reschedule-penalty  final score
e9cb946f-e830-0818-e705-e6406c501583  0.64     0                  0              0                        0.64
639210c1-987f-a8b7-7a1e-d7e1db587f27  0.382    0                  0              0                        0.382
fa5e43ae-fb1d-be84-d41a-4705b108c681  0.261    0                  0              0                        0.261

5ce27208-d182-cb21-12e5-bf4755812aee

ID                  = 5ce27208-d182-cb21-12e5-bf4755812aee
Eval ID             = 81de9158-44f1-f2b9-5a93-1dd1d0972b75
Name                = failing-nomad-test01.group[2]
Node ID             = 7f9a100f-02ef-0782-3b88-8402e5355fbe
Node Name           = ip-10-1-3-23.eu-west-1.compute.internal
Job ID              = failing-nomad-test01
Job Version         = 2
Client Status       = running
Client Description  = Tasks are running
Desired Status      = run
Desired Description = <none>
Created             = 2020-01-13T03:19:14Z
Modified            = 2020-01-13T03:19:31Z
Deployment ID       = a7641e73-922b-9149-7fe9-f21e56c4b8d8
Deployment Health   = healthy
Evaluated Nodes     = 3
Filtered Nodes      = 0
Exhausted Nodes     = 0
Allocation Time     = 60.078µs
Failures            = 0

Task "rails" is "running"
Task Resources
CPU        Memory           Disk     Addresses
0/750 MHz  166 MiB/250 MiB  300 MiB  web: 10.1.3.23:22839

Task Events:
Started At     = 2020-01-13T03:19:15Z
Finished At    = N/A
Total Restarts = 0
Last Restart   = N/A

Recent Events:
Time                  Type        Description
2020-01-13T03:19:15Z  Started     Task started by client
2020-01-13T03:19:14Z  Task Setup  Building Task Directory
2020-01-13T03:19:14Z  Received    Task received by client

Placement Metrics
Node                                  binpack  job-anti-affinity  node-affinity  node-reschedule-penalty  final score
7f9a100f-02ef-0782-3b88-8402e5355fbe  0.808    0                  0              0                        0.808
38adf12b-1404-b538-1c9e-7c8a39027b36  0.424    0                  0              0                        0.424
639210c1-987f-a8b7-7a1e-d7e1db587f27  0.382    0                  0              0                        0.382

e8f15755-9a49-94bb-69b4-e9d9546a09a4

ID                  = e8f15755-9a49-94bb-69b4-e9d9546a09a4
Eval ID             = 81de9158-44f1-f2b9-5a93-1dd1d0972b75
Name                = failing-nomad-test01.group[0]
Node ID             = 3db7ac14-275a-3601-ae4d-0603fd064b06
Node Name           = ip-10-1-3-174.eu-west-1.compute.internal
Job ID              = failing-nomad-test01
Job Version         = 2
Client Status       = running
Client Description  = Tasks are running
Desired Status      = run
Desired Description = <none>
Created             = 2020-01-13T03:19:14Z
Modified            = 2020-01-13T03:19:27Z
Deployment ID       = a7641e73-922b-9149-7fe9-f21e56c4b8d8
Deployment Health   = healthy
Evaluated Nodes     = 4
Filtered Nodes      = 0
Exhausted Nodes     = 1
Allocation Time     = 113.661µs
Failures            = 0

Task "rails" is "running"
Task Resources
CPU        Memory           Disk     Addresses
0/750 MHz  165 MiB/250 MiB  300 MiB  web: 10.1.3.174:29669

Task Events:
Started At     = 2020-01-13T03:19:15Z
Finished At    = N/A
Total Restarts = 0
Last Restart   = N/A

Recent Events:
Time                  Type        Description
2020-01-13T03:19:15Z  Started     Task started by client
2020-01-13T03:19:14Z  Task Setup  Building Task Directory
2020-01-13T03:19:14Z  Received    Task received by client

Placement Metrics
  * Resources exhausted on 1 nodes
  * Dimension "memory" exhausted on 1 nodes
Node                                  binpack  job-anti-affinity  node-affinity  node-reschedule-penalty  final score
3db7ac14-275a-3601-ae4d-0603fd064b06  0.869    0                  0              0                        0.869
e9cb946f-e830-0818-e705-e6406c501583  0.64     0                  0              0                        0.64
fa5e43ae-fb1d-be84-d41a-4705b108c681  0.261    0                  0              0                        0.261

@kaspergrubbe
Copy link
Author

And hey! Look! The job-history script worked (I added a .txt extension because GitHub didn't like .svg):

graph.svg.txt

@drewbailey
Copy link
Contributor

drewbailey commented Jan 13, 2020

Thanks for all of this @kaspergrubbe, it's very helpful. I'm digging into it now. For your question about the 11 allocs: the restart stanza you have dictates the group's behavior on task failure. With 0 attempts and mode "fail", the allocation itself will not restart, but it will be rescheduled, which you can read more about here: https://www.nomadproject.io/docs/job-specification/reschedule.html
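
For reference, this is roughly what that combination amounts to at the group level; the reschedule values below are the documented defaults that service jobs pick up implicitly when no reschedule block is written (a sketch for illustration, not copied from the job file):

group "group" {
  # No local restarts: a single task failure fails the whole allocation.
  restart {
    attempts = 0
    mode     = "fail"
  }

  # Implicit service-job defaults when no reschedule stanza is present:
  # failed allocations are replaced (rescheduled) on another node with
  # exponential backoff, an unlimited number of times.
  reschedule {
    delay          = "30s"
    delay_function = "exponential"
    max_delay      = "1h"
    unlimited      = true
  }
}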

Something interesting that I'll be digging into is alloc 24d6277f: it was initially job version 0, then somehow got in-place updated to version 2.

@drewbailey
Copy link
Contributor

@kaspergrubbe Alright, I've finally got this reproduced on a deployed cluster in AWS; it will be interesting to dig in and see why it wasn't happening locally. Now that I have it reproduced, I'll hopefully have something for you soon.

Thanks for your patience and debugging help on this!

@kaspergrubbe
Copy link
Author

@drewbailey I'm really happy that you can reproduce it! That means it (hopefully) wasn't me who misconfigured something.
Let me know if you need anything from me, and thank you so much for looking at this!

@drewbailey
Copy link
Contributor

Will do. I think I should be OK now that I can reproduce it locally. In the meantime, depending on how you want to configure your deploys, disabling rescheduling will fix the immediate issue, though if you end up with a failed task the deploy will halt and essentially fail immediately.
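
For anyone who wants that workaround spelled out: rescheduling can be turned off per group so a failed canary stays failed instead of being replaced. A minimal sketch, using the options from the reschedule documentation:

# Disables rescheduling for the group: failed allocations are not
# placed again elsewhere.
reschedule {
  attempts  = 0
  unlimited = false
}

With this in place, a failing deployment stops at the first failed allocation rather than cycling through replacement allocs.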

@drewbailey
Copy link
Contributor

@kaspergrubbe I created #6936 with reproduction steps and will be working on the issue from there. Thanks again for the reproduction help!

@kaspergrubbe
Copy link
Author

@drewbailey Thanks Drew, I will follow that other issue. Do you know how far back this has been broken? I am currently weighing my options: either downgrading or disabling rescheduling.

@drewbailey
Copy link
Contributor

It looks like it could potentially be as far back as 0.8.4: https://github.com/hashicorp/nomad/blob/master/CHANGELOG.md#084-june-11-2018

@kaspergrubbe
Copy link
Author

@drewbailey Scary that it hasn't been noticed earlier, but maybe other users just have very stable jobs and haven't hit it. I will be disabling rescheduling to mitigate the issue for now. Thanks again! :)

@drewbailey drewbailey moved this from Triaged to In Review in Nomad - Community Issues Triage Jan 28, 2020
Nomad - Community Issues Triage automation moved this from In Review to Done Feb 3, 2020
@github-actions
Copy link

I'm going to lock this issue because it has been closed for 120 days ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@github-actions github-actions bot locked as resolved and limited conversation to collaborators Nov 13, 2022