
Port allocation doesn't work for system jobs with docker driver #8934

Closed
ku1ik opened this issue Sep 20, 2020 · 22 comments

Comments


ku1ik commented Sep 20, 2020

Nomad version

Nomad v0.12.5 (514b0d6)

Operating system and Environment details

Ubuntu 20.04

Issue

Deploying a system job with the Docker driver and a port specified in the network stanza results in a Port "http" not found, check network stanza error reported by the Docker driver.

Reproduction steps

Run the following job file and observe that the allocations fail with the above error.

Job file (if appropriate)

job "echo-system" {
  datacenters = ["dc1"]
  type = "system"

  group "web" {
    network {
      port "http" {
        to = 5678
      }
    }

    task "server" {
      driver = "docker"

      config {
        image = "hashicorp/http-echo"
        args  = ["-text", "hello world"]
        ports = ["http"]
      }
    }
  }
}

I also tried the deprecated syntax, with no luck either:

job "echo-system2" {
  datacenters = ["dc1"]
  type = "system"

  group "web" {
    task "server" {
      driver = "docker"

      config {
        image = "hashicorp/http-echo"
        args  = ["-text", "hello world"]
        ports = ["http"]
      }


      resources {
        network {
          port "http" {
            to = 5678
          }
        }
      }
    }
  }
}

Nomad Client logs (if appropriate)

Failed to create container configuration for image "hashicorp/http-echo" ("sha256:a6838e9a6ff6ab3624720a7bd36152dda540ce3987714398003e14780e61478a"): Port "http" not found, check network stanza
ku1ik changed the title from "Port allocations doesn't work for system jobs with docker driver" to "Port allocation doesn't work for system jobs with docker driver" on Sep 20, 2020

ku1ik commented Sep 20, 2020

When I try a similar job with type = "service" it does work correctly.
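For reference, a minimal sketch of that working variant, changing only the job type (the echo-service job name is hypothetical; everything else matches the repro job above):

job "echo-service" {
  datacenters = ["dc1"]
  type = "service"   # same group and task as the repro, but scheduled as a service job

  group "web" {
    network {
      port "http" {
        to = 5678
      }
    }

    task "server" {
      driver = "docker"

      config {
        image = "hashicorp/http-echo"
        args  = ["-text", "hello world"]
        ports = ["http"]
      }
    }
  }
}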

@jonathanrcross

Glad I found this issue; I thought I was going crazy trying different configuration options (the network port stanza and the docker config ports) and have been hitting this same issue with system type jobs.

Nomad Version: Nomad v0.12.5
Docker Version: 19.03.13
OS: CentOS Linux release 7.8.2003


tgross commented Oct 12, 2020

Hi folks! I suspect this was fixed by #8822, which hasn't made it into the changelog for the upcoming 0.13.0 yet. Trying this exact same jobspec on current master:

$ nomad job run ./example.nomad
==> Monitoring evaluation "65b08a9f"
    Evaluation triggered by job "echo-system"
    Allocation "e7e30a88" created: node "9721555e", group "web"
    Evaluation status changed: "pending" -> "complete"
==> Evaluation "65b08a9f" finished with status "complete"

$ nomad alloc status e7e
ID                  = e7e30a88-70a5-13f4-2eae-10dd013848ae
Eval ID             = 65b08a9f
Name                = echo-system.web[0]
Node ID             = 9721555e
Node Name           = devmode
Job ID              = echo-system
Job Version         = 0
Client Status       = running
Client Description  = Tasks are running
Desired Status      = run
Desired Description = <none>
Created             = 8s ago
Modified            = 6s ago

Allocation Addresses
Label  Dynamic  Address
*http  yes      10.0.2.15:23132 -> 5678

Task "server" is "running"
Task Resources
CPU        Memory           Disk     Addresses
0/100 MHz  836 KiB/300 MiB  300 MiB

Task Events:
Started At     = 2020-10-12T17:52:29Z
Finished At    = N/A
Total Restarts = 0
Last Restart   = N/A

Recent Events:
Time                       Type        Description
2020-10-12T13:52:29-04:00  Started     Task started by client
2020-10-12T13:52:28-04:00  Driver      Downloading image
2020-10-12T13:52:28-04:00  Task Setup  Building Task Directory
2020-10-12T13:52:28-04:00  Received    Task received by client

$ curl 10.0.2.15:23132
hello world

@the-maldridge

@tgross I don't think #8822 is the complete fix. I patched that into a build and I continue to get the errors described above:

Recent Events:
Time                       Type            Description
2020-10-26T23:32:45-07:00  Killing         Sent interrupt. Waiting 5s before force killing
2020-10-26T23:32:45-07:00  Not Restarting  Error was unrecoverable
2020-10-26T23:32:45-07:00  Driver Failure  Failed to create container configuration for image "hashicorp/http-echo" ("sha256:a6838e9a6ff6ab3624720a7bd36152dda540ce3987714398003e14780e61478a"): Port "http" not found, check network stanza
2020-10-26T23:32:41-07:00  Driver          Downloading image
2020-10-26T23:32:41-07:00  Task Setup      Building Task Directory
2020-10-26T23:32:41-07:00  Received        Task received by client

Is there a known good workaround short of reverting the nomad version in the cluster to one with known functional system networking?


tgross commented Oct 27, 2020

I don't think #8822 is the complete fix. I patched that into a build and I continue to get the errors described above:

Ok, that's interesting. I've just re-verified it on current master (as of this morning, which is shipping as 1.0-beta2 today).

This wasn't an area I worked on, so I'll admit I'm not sure which patch landed between 0.12.5 and master, and after a bit of digging I'm not coming up with anything obvious. Let's tag in @nickethier and @shoenig, who've been doing a bunch of that networking work. They might be able to point you to the specific patch(es) you'll need.

@the-maldridge

Sure. I also tested 0.12.6 with the patch, so it's somewhere between 0.12.6 and master. As an aside, it would be really useful if the downloads page included a listing of known defects. Had I known this was an issue, I would not have updated the packages in Void.

@the-maldridge

@tgross any update from your end? I've bashed my head against enough other issues in 0.12.x that I'm pretty close to rolling back, but if I can provide any more debugging information before I do I'm happy to send logs your way. Looks like this is reasonably well understood, just bafflingly not backported into a released non-beta version.

@nickethier

Hey @the-maldridge, I just took a look and confirmed that the 0.12.7 release contains this bug. I patched it with #8822 locally, retested, and could not reproduce. #8822 didn't make it into the 0.12 line, but I will check with the team about whether we backport this one.

I know you said you tried applying that patch and still got this error. Could you recheck that for me once more, just to be sure I didn't do something wrong on my end?

@the-maldridge

Sure, I'll give it another try. To be clear I will apply the patch as generated by github (https://patch-diff.githubusercontent.com/raw/hashicorp/nomad/pull/8822.patch) to the release tarball for 0.12.7 as retrieved from github. If there are other steps you think I should be taking to make sure my build matches yours, let me know.

@nickethier

Looks the same to me; here's what my workspace looks like:

$>git diff
diff --git a/scheduler/system_sched.go b/scheduler/system_sched.go
index f8088b02a..038a74188 100644
--- a/scheduler/system_sched.go
+++ b/scheduler/system_sched.go
@@ -349,6 +349,7 @@ func (s *SystemScheduler) computePlacements(place []allocTuple) error {
 
                if option.AllocResources != nil {
                        resources.Shared.Networks = option.AllocResources.Networks
+                       resources.Shared.Ports = option.AllocResources.Ports
                }
 
                // Create an allocation for this

@the-maldridge

No dice. I can send configs, sample jobs, log entries, pretty much whatever you want; this is Void Linux's cluster which is entirely on github save for the encryption keys.

@the-maldridge

Actually the job file is short enough I can just paste it:

job "node_exporter" {
  datacenters = [
    "VOID",
    "VOID-CONTROL",
    "VOID-PROXY",
  ]
  type = "system"
  group "monitoring" {
    network {
      port "metrics" {}
    }
    task "node_exporter" {
      driver = "docker"
      config {
        image = "prom/node-exporter:v1.0.1"
        ports = ["metrics"]
      }
      resources {
        cpu    = 500
        memory = 64
      }
    }
  }
}

@the-maldridge

For completeness, here's a copy of the built binary. The only change between this binary and the one in prod is that this one was stripped and UPX'd to fit within the GitHub upload limits.

nomad.zip

tgross removed this from the 1.0 milestone on Dec 7, 2020

tgross commented Dec 7, 2020

@the-maldridge just doing some follow-up here:

  • I took the patch @nickethier provided in #8934 (comment) and applied it to the v0.12.6 tag, built in my dev environment, and ran the job you provided. It works.
  • I took the patch @nickethier provided in #8934 (comment) and applied it to the v0.12.7 tag, built in my dev environment, and ran the job you provided. It works.
  • You mentioned this is for the Void Linux cluster... I don't know much about the library environment there but I'm wondering if you're hitting some kind of fingerprinting issue that's not being correctly surfaced, and it just looks like the bug we fixed. Do you hit it on any of the 1.0 beta/rc builds?
  • The binary you've uploaded exits 127 for me, even for trivial operations like nomad version. I suspect there are some library path issues there unless I'm running on Void myself. Do you know if there's a reasonably current Vagrant box we could pull to match your environment?
# statically linked... but that's probably because it's upx'd?
$ ldd ./pkg/nomad
 not a dynamic executable

# blows up
$ sudo strace ./pkg/nomad version
execve("./pkg/nomad", ["./pkg/nomad", "version"], [/* 18 vars */]) = 0
open("/proc/self/exe", O_RDONLY)        = 3
mmap(NULL, 21102734, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7fe77e6b2000
mmap(0x7fe77e6b2000, 21102336, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED, 3, 0) = 0x7fe77e6b2000
mprotect(0x7fe77fad1000, 4238, PROT_READ|PROT_EXEC) = 0
readlink("/proc/self/exe", "/opt/gopath/src/github.com/hashi"..., 4095) = 52
mmap(0x400000, 72216576, PROT_NONE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x400000
mmap(0x400000, 15048, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x400000
mprotect(0x400000, 15048, PROT_READ)    = 0
mmap(0x404000, 33536141, PROT_READ|PROT_WRITE|PROT_EXEC, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0x4000) = 0x404000
mprotect(0x404000, 33536141, PROT_READ|PROT_EXEC) = 0
mmap(0x2400000, 36807552, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0x2000000) = 0x2400000
mprotect(0x2400000, 36807552, PROT_READ) = 0
mmap(0x471b000, 1575833, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0x431a000) = 0x471b000
mprotect(0x471b000, 1575833, PROT_READ|PROT_WRITE) = 0
mmap(0x489c000, 274152, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x489c000
open("/lib/ld-linux-x86-64.so.2", O_RDONLY) = -1 ENOENT (No such file or directory)
exit(127)                               = ?
+++ exited with 127 +++

@the-maldridge

Very spooky. I have not tried any of the 1.0 builds as this is a production cluster and the official guidance was that 1.0 was not production stable at the time I was hitting this bug. I have since removed my need for this feature by deploying the task that required it (node_exporter) outside of nomad since it would have been very fiddly to get all the host volumes working correctly across the fleet.

The build for Nomad uses this file. The build command should ultimately boil down to go build -tags 'ui release' -ldflags "-X github.com/hashicorp/nomad/version.GitCommit=${_git_commit}" github.com/hashicorp/nomad. I do not believe we have yet switched over to Nomad's module-based builds, but I have seen that those are starting to be checked in.

Void's go binaries are not PIE, not UPX'd and not stripped (Go is such a fun language to try and run standard packaging processes across). The output of ldd is as follows:

$ ldd /usr/bin/nomad
	linux-vdso.so.1 (0x00007ffda1974000)
	libpthread.so.0 => /usr/lib/libpthread.so.0 (0x00007f7eebed5000)
	libdl.so.2 => /usr/lib/libdl.so.2 (0x00007f7eebed0000)
	libc.so.6 => /usr/lib/libc.so.6 (0x00007f7eebd0d000)
	/lib/ld-linux-x86-64.so.2 (0x00007f7eebf04000)

I'll bet that whatever distro you run has the linker at /usr/lib/, whereas due to path precedence reasons Void has it at /lib/.

Void does maintain vagrant boxen at https://app.vagrantup.com/voidlinux/. If you want it to be truly up to date, run xbps-install -Su twice after it boots to update the package manager and subsequently all packages, but the base image is sufficiently up to date to support testing.

Vagrant.configure("2") do |config|
  config.vm.box = "voidlinux/glibc64"
end


tgross commented Dec 7, 2020

Thanks @the-maldridge. We're in the midst of prepping for 1.0 GA and Nick is on leave, but I'll see if I can get a repro working for you in the next week or so.

@nickethier

Hey @the-maldridge, happy new year. I was discussing this today with the team and @drewbailey pointed me to #9736, which fixes a case where ports aren't persisted correctly on job updates. Were you experiencing this with an initial job deployment (what Tim and I tested), or when you're updating an existing job in the cluster?

@the-maldridge

Happy new year to you as well @nickethier! This was happening with jobs that were brand new to the cluster. I'm afraid I don't have a good way to test this anymore; since it wasn't working, I abandoned docker networking entirely and now use CNI plugins, which I found to be a more reliable path.

@nickethier

I'm sorry we weren't able to reproduce on our end @the-maldridge, but I'm glad you've found a solution that's working for you.

@sickill are you still seeing this with the latest release? I believe the above fixes should have solved your original problem, and I'm inclined to close this issue as we're no longer able to reproduce on our end.

@the-maldridge

I would also be fine with closing; it seems pretty clear that driver-level networking is no longer a well-trodden path, and that the intended mechanism is CNI with group-level networking.
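
As a rough illustration of that direction, here is a minimal sketch of the node_exporter group from earlier rewritten to use group-level bridge networking (which Nomad implements via CNI plugins). The mode = "bridge" line and the trimmed-down body are assumptions added for illustration, not something tested in this thread:

group "monitoring" {
  network {
    mode = "bridge"   # group-level networking backed by the CNI bridge plugins (assumption)
    port "metrics" {}
  }

  task "node_exporter" {
    driver = "docker"

    config {
      image = "prom/node-exporter:v1.0.1"
      ports = ["metrics"]
    }
  }
}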


ku1ik commented Jan 13, 2021

I tested with the latest release and I confirm it works properly now, so we can close it 👍 Thx!

@github-actions

I'm going to lock this issue because it has been closed for 120 days ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

github-actions bot locked as resolved and limited conversation to collaborators on Oct 25, 2022