Nomad v1.2.4 Unable to place allocation with host_network #11906

Closed
nahsi opened this issue Jan 23, 2022 · 3 comments
nahsi commented Jan 23, 2022

Nomad version

Nomad v1.2.4 (9f21b72)

Operating system and Environment details

Gentoo
Linux smyrna 5.10.79 #1 SMP Sat Nov 20 20:01:24 MSK 2021 x86_64 Intel(R) Celeron(R) J4125 CPU @ 2.00GHz GenuineIntel GNU/Linux

Issue

Nomad is unable to create an allocation for a job that binds to a host_network when the client version is 1.2.4.
In the screenshot below you can see that Nomad places allocations on hosts running version 1.2.3 but emits a "constraint missing" error on the host running version 1.2.4.
[Screenshot: allocations placed on the 1.2.3 nodes; "constraint missing network" failure on the 1.2.4 node]

> nomad node status -verbose
ID                                    DC      Name       Class   Address         Version  Drain  Eligibility  Status
0b6ede59-f5fc-4ede-10a7-52b9edb090ec  syria   tyros      <none>  192.168.130.1   1.2.3    false  eligible     ready
6d79a7df-de75-fd47-b6e7-c7ee0d5a33e7  syria   antiochia  <none>  192.168.130.10  1.2.3    false  eligible     ready
cffc3930-56f1-db51-e24d-cf7a45f31f47  pontus  heraclea   <none>  192.168.175.83  1.2.3    false  eligible     ready
ea5a3609-c47d-67c2-e324-7460caf8245e  syria   palmyra    <none>  192.168.130.20  1.2.3    false  eligible     ready
5b66604b-e63f-93ba-814d-875279ec9cc8  asia    pergamon   <none>  192.168.230.10  1.2.3    false  eligible     ready
fc9d9ec1-7d6b-8f48-f4c4-4673f2ad6b7b  asia    smyrna     <none>  192.168.230.1   1.2.4    false  eligible     ready

host_network is configured like this:

{
    "client": {
        "host_network": [
            {
                "public": {
                    "interface": "ppp0",
                    "reserved_ports": "80,443,25,465,993"
                }
            }
        ]
    }
}

The network is detected by the Nomad client:

> curl -s nomad.service.consul:4646/v1/node/fc9d9ec1-7d6b-8f48-f4c4-4673f2ad6b7b | jq .HostNetworks
{
  "public": {
    "Name": "public",
    "CIDR": "",
    "Interface": "ppp0",
    "ReservedPorts": "80,443,25,465,993"
  }
}

Reproduction steps

  1. Add host_network to client configuration
  2. Run job that binds to added host_network

Expected Result

Allocation is placed

Actual Result

Nomad is unable to place the allocation and fails with the error: constraint missing network <host_network> for port <reserved_port>...

Job file (if appropriate)

variables {
  versions = {
    traefik  = "2.5.6"
    promtail = "2.4.2"
  }
}

job "ingress" {
  datacenters = [
    "syria",
    "asia",
    "pontus"
  ]

  namespace = "infra"
  type      = "service"

  update {
    max_parallel = 1
    stagger      = "1m"
  }

  constraint {
    distinct_property = "${node.datacenter}"
  }

  group "traefik" {
    count = 3
    network {
      port "traefik" {
        to = 8080
      }

      port "http" {
        static       = 80
        to           = 80
        host_network = "public"
      }

      port "https" {
        static       = 443
        to           = 443
        host_network = "public"
      }

      port "smtp" {
        static       = 465
        to           = 465
        host_network = "public"
      }

      port "smtp-relay" {
        static       = 25
        to           = 25
        host_network = "public"
      }

      port "imap" {
        static       = 993
        to           = 993
        host_network = "public"
      }

      port "promtail" {
        to = 3000
      }
    }

    task "traefik" {
      driver       = "docker"
      kill_timeout = "30s"

      vault {
        policies = ["public-cert"]
      }

      resources {
        cpu        = 50
        memory     = 32
        memory_max = 64
      }

      service {
        name = "ingress"
        port = "traefik"

        check {
          type     = "http"
          protocol = "http"
          path     = "/ping"
          port     = "traefik"
          interval = "20s"
          timeout  = "2s"
        }
      }

      config {
        image = "traefik:${var.versions.traefik}"

        extra_hosts = [
          "host.docker.internal:host-gateway"
        ]

        ports = [
          "traefik",
          "http",
          "https",
          "smtp",
          "smtp-relay",
          "imap"
        ]

        args = [
          "--configFile=local/traefik.yml"
        ]
      }

      template {
        data        = file("traefik.yml")
        destination = "local/traefik.yml"
      }

      template {
        data        = file("file.yml")
        destination = "local/traefik/file.yml"
      }

      template {
        data = <<-EOH
        {{- with secret "secret/certificate" -}}
        {{ .Data.data.ca_bundle }}{{ end }}
        EOH

        destination = "secrets/cert.pem"
        change_mode = "restart"
        splay       = "1m"
      }

      template {
        data = <<-EOH
        {{- with secret "secret/certificate" -}}
        {{ .Data.data.key }}{{ end }}
        EOH

        destination = "secrets/key.pem"
        change_mode = "restart"
        splay       = "1m"
      }
    }

    task "promtail" {
      driver = "docker"

      lifecycle {
        hook    = "poststart"
        sidecar = true
      }

      resources {
        cpu    = 50
        memory = 32
      }

      service {
        name = "promtail"
        port = "promtail"

        meta {
          sidecar_to = "traefik"
        }

        check {
          type     = "http"
          path     = "/ready"
          interval = "20s"
          timeout  = "2s"
        }
      }

      config {
        image = "grafana/promtail:${var.versions.promtail}"

        args = [
          "-config.file=local/promtail.yml"
        ]

        ports = [
          "promtail"
        ]
      }

      template {
        data        = file("promtail.yml")
        destination = "local/promtail.yml"
      }
    }
  }
}
nahsi added the type/bug label Jan 23, 2022
jrasell self-assigned this Jan 24, 2022

jrasell commented Jan 24, 2022

Hi @nahsi and thanks for raising this issue. I believe this is working as expected: the reserved_ports client configuration option is designed to list ports which the Nomad client is not allowed to allocate. This functionality was broken until #11728 fixed the behaviour. That change shipped as part of the 1.2.4 release, which is why you are now seeing the expected behaviour where you may not have previously.
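
For reference, the likely fix here is to drop reserved_ports from the host_network block, since reserved_ports withholds those ports from the scheduler entirely while the job requests them as static ports. A minimal sketch of the corrected client configuration, shown in Nomad's HCL form rather than the JSON above so the reasoning can sit inline as comments:

client {
  # reserved_ports is omitted: listing 80,443,25,465,993 here tells the
  # scheduler it may NOT allocate those ports, which is exactly what blocked
  # the static ports requested by the "ingress" job on the 1.2.4 client.
  host_network "public" {
    interface = "ppp0"
  }
}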

nahsi commented Jan 24, 2022

@jrasell I see, I thought reserved_ports meant "reserved for allocations that use this network" :) Very silly of me.

Thank you for the answer and have a good day!

nahsi closed this as completed Jan 24, 2022
github-actions bot commented

I'm going to lock this issue because it has been closed for 120 days ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

github-actions bot locked this issue as resolved and limited conversation to collaborators Oct 12, 2022