
failing to allocate job in busybox environment #6721

Closed
turbo-cafe opened this issue Nov 19, 2019 · 17 comments

@turbo-cafe

In a busybox environment, allocating a job fails with "unknown user nobody" or, similarly, with user root when whitelisted. This makes me think the message in the log file is misleading and the real reason lies in the environment. On a guess I installed coreutils over the busybox applets, but that didn't help, and I couldn't find any reference document describing what Nomad specifically expects from the environment, e.g. wget to download jars or bash to run jobs.

The issue is reproducible with the raw_exec driver, as well as with exec or java. You might want to skip the job attached below and use any hello-world job with raw_exec, since otherwise you'd have to compile a custom kernel for the OpenWrt image.
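
For illustration, a minimal hello-world job with raw_exec could look something like this (a sketch, not the attached job):

job "hello" {
  datacenters = ["dc1"]
  type        = "batch"

  group "hello" {
    task "hello" {
      driver = "raw_exec"
      config {
        # busybox provides echo as an applet
        command = "/bin/echo"
        args    = ["hello world"]
      }
    }
  }
}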

Nomad version

v0.10.1 built with nonvidia tag

Operating system and Environment details

busybox 1.30.1 openwrt 19.07 with custom kernel 4.14.151

Issue

[ERROR] client.alloc_runner: prerun failed: alloc_id=28223f0d-a174-ae8b-272a-64a10780aa72 error="pre-run hook "alloc_dir" failed: user: unknown user nobody"

Logs

https://hastebin.com/dupavitido.bash

Job

https://hastebin.com/jukavazesi.bash

Reproduction steps

@tgross
Member

tgross commented Nov 19, 2019

Pulling some interesting bits here so that if this third-party pastebin disappears we don't lose context for the issue. Otherwise marking for investigation.

Jobspec

job "metabase" {
  region = "global"
  datacenters = ["dc1"]
  type = "service"

  group "metabase" {
    count = 1

    task "metabase" {
      driver = "java"
      user   = "nobody"
      config {
        jar_path = "metabase.jar"
        jvm_options = ["-Xmx256m","-Xms256m"]
      }

      artifact {
          source= "http://downloads.metabase.com/v0.33.5.1/metabase.jar"
      }

      service {
        name = "metabase"
        port = "http"

        tags = ["java","metabase"]

        check {
          type = "http"
          port = "http"
          path = "/"
          interval = "10s"
          timeout = "2s"
        }
      }

      resources {
        cpu = 100
        memory = 256

        network {
          mbits = 5

          port "http" {
            static = 3000
          }
        }
      }
    }
  }
}

Interesting bits of the logs

==> Nomad agent started! Log data will stream in below:

    2019-11-19T09:00:50.861Z [DEBUG] agent.plugin_loader.docker: using client connection initialized from environment: plugin_dir=
    2019-11-19T09:00:50.861Z [DEBUG] agent.plugin_loader.docker: using client connection initialized from environment: plugin_dir=
    2019-11-19T09:00:50.862Z [INFO ] agent: detected plugin: name=qemu type=driver plugin_version=0.1.0
    2019-11-19T09:00:50.862Z [INFO ] agent: detected plugin: name=java type=driver plugin_version=0.1.0
    2019-11-19T09:00:50.862Z [INFO ] agent: detected plugin: name=docker type=driver plugin_version=0.1.0
    2019-11-19T09:00:50.862Z [INFO ] agent: detected plugin: name=rkt type=driver plugin_version=0.1.0
    2019-11-19T09:00:50.862Z [INFO ] agent: detected plugin: name=raw_exec type=driver plugin_version=0.1.0
    2019-11-19T09:00:50.862Z [INFO ] agent: detected plugin: name=exec type=driver plugin_version=0.1.0
    2019-11-19T09:00:50.874Z [INFO ] nomad: raft: Initial configuration (index=1): [{Suffrage:Voter ID:127.0.0.1:4647 Address:127.0.0.1:4647}]
    2019-11-19T09:00:50.876Z [INFO ] nomad: raft: Node at 127.0.0.1:4647 [Follower] entering Follower state (Leader: "")
    2019-11-19T09:00:50.877Z [INFO ] nomad: serf: EventMemberJoin: slim.global 127.0.0.1
    2019-11-19T09:00:50.877Z [INFO ] nomad: starting scheduling worker(s): num_workers=2 schedulers="[service batch system _core]"
    2019-11-19T09:00:50.879Z [INFO ] client: using state directory: state_dir=/tmp/NomadClient871155963
    2019-11-19T09:00:50.879Z [INFO ] client: using alloc directory: alloc_dir=/tmp/NomadClient538243102
    2019-11-19T09:00:50.881Z [INFO ] nomad: adding server: server="slim.global (Addr: 127.0.0.1:4647) (DC: dc1)"
    2019-11-19T09:00:50.882Z [DEBUG] nomad: lost contact with Nomad quorum, falling back to Consul for server list
    2019-11-19T09:00:50.882Z [ERROR] nomad: error looking up Nomad servers in Consul: error="server.nomad: unable to query Consul datacenters: Get http://127.0.0.1:8500/v1/catalog/datacenters: dial tcp 127.0.0.1:8500: connect: connection refused"
    2019-11-19T09:00:50.883Z [DEBUG] client.fingerprint_mgr: built-in fingerprints: fingerprinters="[arch cgroup consul cpu host memory network nomad signal storage vault env_aws env_gce]"
    2019-11-19T09:00:50.883Z [INFO ] client.fingerprint_mgr.cgroup: cgroups are available
    2019-11-19T09:00:50.883Z [DEBUG] client.fingerprint_mgr: fingerprinting periodically: fingerprinter=cgroup period=15s
    2019-11-19T09:00:50.884Z [DEBUG] client.fingerprint_mgr: fingerprinting periodically: fingerprinter=consul period=15s
    2019-11-19T09:00:50.884Z [DEBUG] client.fingerprint_mgr.cpu: detected cpu frequency: MHz=2495
    2019-11-19T09:00:50.884Z [DEBUG] client.fingerprint_mgr.cpu: detected core count: cores=2
    2019-11-19T09:00:50.885Z [DEBUG] client.fingerprint_mgr.network: unable to read link speed: path=/sys/class/net/lo/speed
    2019-11-19T09:00:50.885Z [DEBUG] client.fingerprint_mgr.network: link speed could not be detected and no speed specified by user, falling back to default speed: mbits=1000
    2019-11-19T09:00:50.885Z [DEBUG] client.fingerprint_mgr.network: detected interface IP: interface=lo IP=127.0.0.1
    2019-11-19T09:00:50.885Z [DEBUG] client.fingerprint_mgr.network: detected interface IP: interface=lo IP=::1
    2019-11-19T09:00:50.886Z [DEBUG] client.fingerprint_mgr: fingerprinting periodically: fingerprinter=vault period=15s
    2019-11-19T09:00:51.887Z [DEBUG] consul.sync: sync complete: registered_services=3 deregistered_services=0 registered_checks=3 deregistered_checks=0
    2019-11-19T09:00:52.145Z [WARN ] nomad: raft: Heartbeat timeout from "" reached, starting election
    2019-11-19T09:00:52.145Z [INFO ] nomad: raft: Node at 127.0.0.1:4647 [Candidate] entering Candidate state in term 2
    2019-11-19T09:00:52.145Z [DEBUG] nomad: raft: Votes needed: 1
    2019-11-19T09:00:52.145Z [DEBUG] nomad: raft: Vote granted from 127.0.0.1:4647 in term 2. Tally: 1
    2019-11-19T09:00:52.145Z [INFO ] nomad: raft: Election won. Tally: 1
    2019-11-19T09:00:52.145Z [INFO ] nomad: raft: Node at 127.0.0.1:4647 [Leader] entering Leader state
    2019-11-19T09:00:52.146Z [INFO ] nomad: cluster leadership acquired
    2019-11-19T09:00:52.886Z [DEBUG] client.fingerprint_mgr.env_aws: error querying AWS Metadata URL, skipping
    2019-11-19T09:00:54.887Z [DEBUG] client.fingerprint_mgr.env_gce: could not read value for attribute: attribute=machine-type error="Get http://169.254.169.254/computeMetadata/v1/instance/machine-type: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
    2019-11-19T09:00:54.887Z [DEBUG] client.fingerprint_mgr.env_gce: error querying GCE Metadata URL, skipping
    2019-11-19T09:00:54.887Z [DEBUG] client.fingerprint_mgr: detected fingerprints: node_attrs="[arch cgroup cpu host network nomad signal storage]"
    2019-11-19T09:00:54.887Z [INFO ] client.plugin: starting plugin manager: plugin-type=driver
    2019-11-19T09:00:54.888Z [INFO ] client.plugin: starting plugin manager: plugin-type=device
    2019-11-19T09:00:54.888Z [DEBUG] client.device_mgr: exiting since there are no device plugins
    2019-11-19T09:00:54.894Z [DEBUG] client.driver_mgr: initial driver fingerprint: driver=exec health=healthy description=Healthy
    2019-11-19T09:00:54.898Z [DEBUG] client.plugin: waiting on plugin manager initial fingerprint: plugin-type=driver
    2019-11-19T09:00:54.898Z [DEBUG] client.plugin: waiting on plugin manager initial fingerprint: plugin-type=device
    2019-11-19T09:00:54.898Z [DEBUG] client.plugin: finished plugin manager initial fingerprint: plugin-type=device
    2019-11-19T09:00:54.899Z [DEBUG] client.driver_mgr: initial driver fingerprint: driver=raw_exec health=healthy description=Healthy
    2019-11-19T09:00:54.900Z [DEBUG] client.driver_mgr: initial driver fingerprint: driver=qemu health=undetected description=
    2019-11-19T09:00:54.901Z [DEBUG] client.server_mgr: new server list: new_servers=127.0.0.1:4647,127.0.0.1:4647 old_servers=
    2019-11-19T09:00:54.903Z [DEBUG] client.driver_mgr.docker: could not connect to docker daemon: driver=docker endpoint=unix:///var/run/docker.sock error="Get http://unix.sock/version: dial unix /var/run/docker.sock: connect: no such file or directory"
    2019-11-19T09:00:54.903Z [DEBUG] client.driver_mgr: initial driver fingerprint: driver=docker health=undetected description="Failed to connect to docker daemon"
    2019-11-19T09:00:54.903Z [DEBUG] client.driver_mgr: initial driver fingerprint: driver=rkt health=undetected description="Failed to execute rkt version: exec: "rkt": executable file not found in $PATH"
    2019-11-19T09:00:55.258Z [DEBUG] nomad: memberlist: Stream connection from=127.0.0.1:49502
    2019-11-19T09:00:55.298Z [DEBUG] client.driver_mgr: initial driver fingerprint: driver=java health=healthy description=Healthy
    2019-11-19T09:00:55.298Z [DEBUG] client.driver_mgr: detected drivers: drivers="map[healthy:[exec raw_exec java] undetected:[qemu docker rkt]]"
    2019-11-19T09:00:55.298Z [DEBUG] client.plugin: finished plugin manager initial fingerprint: plugin-type=driver
    2019-11-19T09:00:55.298Z [INFO ] client: started client: node_id=ea2ac41a-8509-cf0c-38cc-3b408bfb7896
    2019-11-19T09:00:55.303Z [DEBUG] client: updated allocations: index=1 total=0 pulled=0 filtered=0
    2019-11-19T09:00:55.303Z [DEBUG] client: allocation updates: added=0 removed=0 updated=0 ignored=0
    2019-11-19T09:00:55.303Z [DEBUG] client: allocation updates applied: added=0 removed=0 updated=0 ignored=0 errors=0
    2019-11-19T09:00:55.304Z [INFO ] client: node registration complete
    2019-11-19T09:00:55.305Z [DEBUG] client: state updated: node_status=ready
    2019-11-19T09:00:55.305Z [DEBUG] client.server_mgr: new server list: new_servers=127.0.0.1:4647 old_servers=127.0.0.1:4647,127.0.0.1:4647
    2019-11-19T09:00:55.305Z [DEBUG] consul.sync: sync complete: registered_services=1 deregistered_services=0 registered_checks=1 deregistered_checks=0
    2019-11-19T09:00:56.305Z [DEBUG] client: state changed, updating node and re-registering
    2019-11-19T09:00:56.307Z [INFO ] client: node registration complete

...

    2019-11-19T09:01:02.586Z [DEBUG] worker: dequeued evaluation: eval_id=c4d4240f-582d-0857-3479-f1714d19ab9c
    2019-11-19T09:01:02.587Z [DEBUG] http: request complete: method=PUT path=/v1/jobs?region=global duration=7.065819ms
    2019-11-19T09:01:02.588Z [DEBUG] worker.service_sched: reconciled current state with desired state: eval_id=c4d4240f-582d-0857-3479-f1714d19ab9c job_id=metabase namespace=default results="Total changes: (place 1) (destructive 0) (inplace 0) (stop 0)
Created Deployment: "fd87f7aa-233a-5569-7d92-b2342c6e79a2"
Desired Changes for "metabase": (place 1) (inplace 0) (destructive 0) (stop 0) (migrate 0) (ignore 0) (canary 0)"
    2019-11-19T09:01:02.589Z [DEBUG] http: request complete: method=GET path=/v1/evaluation/c4d4240f-582d-0857-3479-f1714d19ab9c?region=global duration=738.273µs
    2019-11-19T09:01:02.591Z [DEBUG] worker: submitted plan for evaluation: eval_id=c4d4240f-582d-0857-3479-f1714d19ab9c
    2019-11-19T09:01:02.591Z [DEBUG] worker.service_sched: setting eval status: eval_id=c4d4240f-582d-0857-3479-f1714d19ab9c job_id=metabase namespace=default status=complete
    2019-11-19T09:01:02.591Z [DEBUG] worker: updated evaluation: eval="<Eval "c4d4240f-582d-0857-3479-f1714d19ab9c" JobID: "metabase" Namespace: "default">"
    2019-11-19T09:01:02.591Z [DEBUG] worker: ack evaluation: eval_id=c4d4240f-582d-0857-3479-f1714d19ab9c
    2019-11-19T09:01:02.592Z [DEBUG] http: request complete: method=GET path=/v1/evaluation/c4d4240f-582d-0857-3479-f1714d19ab9c/allocations?region=global duration=191.423µs
    2019-11-19T09:01:02.593Z [DEBUG] client: updated allocations: index=11 total=1 pulled=1 filtered=0
    2019-11-19T09:01:02.593Z [DEBUG] client: allocation updates: added=1 removed=0 updated=0 ignored=0
    2019-11-19T09:01:02.595Z [DEBUG] client: allocation updates applied: added=1 removed=0 updated=0 ignored=0 errors=0
    2019-11-19T09:01:02.596Z [ERROR] client.alloc_runner: prerun failed: alloc_id=c9ebe220-83c8-40c5-ad7c-699aeb5db67e error="pre-run hook "alloc_dir" failed: user: lookup username nobody: no such file or directory"
    2019-11-19T09:01:02.596Z [INFO ] client.gc: marking allocation for GC: alloc_id=c9ebe220-83c8-40c5-ad7c-699aeb5db67e
    2019-11-19T09:01:02.750Z [DEBUG] nomad.client: adding evaluations for rescheduling failed allocations: num_evals=1
    2019-11-19T09:01:02.755Z [DEBUG] worker: dequeued evaluation: eval_id=3484ff27-8fb4-72fa-a566-b7c14b12fc1d

start script:

opkg update && opkg install haproxy
/etc/init.d/haproxy start

haproxy config:

defaults
   mode http

frontend stats
   bind *:1936
   stats uri /
   stats show-legends
   no log

frontend http_front
   bind *:8080
   default_backend http_back

backend http_back
    balance roundrobin
    server nomad 127.0.0.1:4646 source 192.168.1.1

@turbo-cafe
Author

6721.tar.gz
Here's a turnkey solution to reproduce the issue. You'll need Linux, build-essentials, golang, qemu, 5G of disk space, and 10 minutes while it builds.

make will download and build Nomad and pack it, along with openjdk, metabase.jar, metabase.job, and consul, into an OpenWrt image.

make run will use qemu to start the image with no password.

Use the tty to submit the job:
su - nobody
nomad job run metabase.job

Observe the issue either in /tmp/log/nomad.log or by visiting http://192.168.1.1:8080 from your desktop.

make clean removes downloaded and generated files.

Here I used the raw_exec driver and Nomad in dev mode to avoid rebuilding the OpenWrt kernel to include cgroups support. If you're wondering how the Nomad agent is started, look into /etc/rc.local.

@tgross
Member

tgross commented Nov 19, 2019

Is there something unusual about that tarball, @turbo-cafe? I get the following when unpacking it in a virtual machine:

$ tar -xf 6721.tar.gz
tar: .: Cannot utime: Operation not permitted
tar: .: Cannot change mode to rwxrwxr-t: Operation not permitted
tar: Exiting with failure status due to previous errors

If you could provide a gist with the minimal set of files we need to reproduce, rather than an opaque tarball, that would help us out a lot.

@turbo-cafe
Author

you'll need to gunzip it first, or tar zxf 6721.tar.gz to tell tar to gunzip it for you

@tgross
Member

tgross commented Nov 19, 2019

you'll need to gunzip it first, or tar zxf 6721.tar.gz to tell tar to gunzip it for you

Yes, I'm aware. GNU tar does the right thing when it encounters a .tar.gz, and gunzipping it first results in the same error anyways.

here's gist of included files https://gist.github.com/turbo-cafe/60ad80343688555bf385ed9000fc7f07

Can you provide a link to where this WRT image comes from? I really can't go running arbitrary virtual machines someone is sending us.

@turbo-cafe
Author

turbo-cafe commented Nov 19, 2019

You were right, tar xf works fine; I just unpacked it without problems and now I'm confused what on earth might be the trouble with the file. Just for reference, sha256sum 6721.tar.gz gives:
9c5ee005099b6255c0a58813fb4b573f7511fea067734e2c7f67e95db8e43ab8 6721.tar.gz

Makefile:25 uses wget to download imagebuilder for v19.07-rc1 from downloads.openwrt.org;

@turbo-cafe
Author

turbo-cafe commented Nov 19, 2019

The downloaded file's hash matched the original; my only guess is that my UTC+3 timezone might be the reason tar prints a warning.

To get my peace of mind back, I checked the file with https://www.virustotal.com/gui/file/9c5ee005099b6255c0a58813fb4b573f7511fea067734e2c7f67e95db8e43ab8/detection

@tgross
Member

tgross commented Nov 19, 2019

Makefile:25 uses wget to download imagebuilder for v19.07-rc1 from downloads.openwrt.org;

Ah, gotcha. Thanks. We'll circle back here once we've had a chance to repro.

@turbo-cafe
Author

An update about where I am on this issue: I can start metabase.job on mainstream Linux (glibc, bash, coreutils); similarly, I can java -jar on the busybox system just fine, and that rules out all moving parts except Nomad and busybox. I tend to think of this as a configuration issue, and it would be just fine if there were any kind of reference documentation listing the requirements Nomad has of its environment. Knowing how to fill the gap between what Nomad expects and what busybox has to offer would let me move forward preparing a working environment.

@tgross tgross self-assigned this Nov 22, 2019
@tgross
Member

tgross commented Nov 22, 2019

Ok, I was able to replicate the problem with the Makefile (with some tweaks, see below). I was originally thinking this was a problem with the chroot environment, but looking at the code that's not the case. This is happening when Nomad sets up the alloc directory so that we can later chroot into it. So that operation is happening as the Nomad user in the context of the host.

The error message, again:

2019-11-22T21:03:51Z  Setup Failure  failed to setup alloc: pre-run hook "alloc_dir" failed: user: lookup username nobody: no such file or directory

But /etc/passwd does exist!

root@OpenWrt:/# ls -lah /etc/passwd
-rw-r--r--    1 root     root         242 Nov 22 21:02 /etc/passwd

The error bubbles up from client/allocdir/fs_unix.go#L42, which calls into the Go stdlib's own os/user/lookup.go. And there we can see the error occurs because it can't find /etc/passwd.

I'm not quite sure what's going on there yet. But there are different code paths in os/user for cgo vs non-cgo. I did notice that you're statically linking Nomad, rather than letting it be linked to glibc. There's an open issue #5643 where the whole static linking problem is being discussed at length, but the tl;dr is that we'd love some help on making that work but we're not there yet.
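
As an illustration of the two lookup paths (a minimal sketch, not code from Nomad itself): built with CGO_ENABLED=0, the snippet below uses Go's pure-Go os/user implementation, which parses /etc/passwd directly, while a cgo build goes through the system libc and its NSS machinery, which is what can break when glibc is linked statically.

// userlookup.go (hypothetical file name): reproduce the lookup the alloc_dir hook performs.
// Build with CGO_ENABLED=0 and CGO_ENABLED=1 and compare the results on the busybox host.
package main

import (
	"fmt"
	"os"
	"os/user"
)

func main() {
	u, err := user.Lookup("nobody")
	if err != nil {
		fmt.Fprintf(os.Stderr, "lookup failed: %v\n", err)
		os.Exit(1)
	}
	fmt.Printf("uid=%s gid=%s home=%s\n", u.Uid, u.Gid, u.HomeDir)
}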

So my recommendation is to try this with a glibc-linked build and see if that gets this working for you (or at least new errors! 😀 ). And if you'd like to contribute your expertise in #5643 I'm sure it'd be welcome!


re: the Makefile

Just a heads up: the build also required libncurses, and the Makefile blows up during the Nomad build target because the -o flag was incorrect. I also had to make some adjustments to make run to work on a headless machine where the user didn't have permission on /dev/kvm without sudo:

41c41,42
<       GOPATH=$(CURDIR)/target/go go build -x -o target -tags "nonvidia release ui" --ldflags '-linkmode external -extldflags "-static"' github.com/hashicorp/nomad
---
>       GOPATH=$(CURDIR)/target/go go build -x -o target/nomad -tags "nonvidia release ui" --ldflags '-linkmode external -extldflags "-static"' github.com/hashicorp/nomad
>
46c47,54
<       qemu-system-x86_64 -enable-kvm -cpu host -smp cores=2 -m 4096 -drive file=target/openwrt-19.07.0-rc1-x86-64-combined-ext4.img,id=d0,if=none -device ide-hd,drive=d0,bus=ide.0 -soundhw ac97 -device virtio-net-pci,netdev=lan -netdev tap,id=lan,ifname=tap0,script=no,downscript=no -device virtio-net-pci,netdev=wan -netdev user,id=wan
---
>       sudo qemu-system-x86_64 -enable-kvm \
>               -cpu host -smp cores=1 -m 4096 -nographic \
>               -drive file=target/openwrt-19.07.0-rc1-x86-64-combined-ext4.img,id=d0,if=none,format=raw \
>               -device ide-hd,drive=d0,bus=ide.0 \
>               -device virtio-net-pci,netdev=lan \
>               -netdev tap,id=lan,ifname=tap0,script=no,downscript=no \
>               -device virtio-net-pci,netdev=wan \
>               -netdev user,id=wan

@turbo-cafe
Author

The key here is that the Nomad binary was linked to glibc; that's the main reason I was able to start metabase.job on my desktop, but not on the busybox system. The solution to my problem was to link Nomad against musl, and I was then able to start metabase.job. I never bothered with static linking, since OpenWrt provides musl libc.
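
For reference, a musl-linked build along those lines might look roughly like this (sketch only; the exact commands weren't posted in the thread, and a musl toolchain such as musl-gcc is assumed to be available):

# dynamic link against musl instead of a static glibc link
CC=musl-gcc CGO_ENABLED=1 \
  go build -o target/nomad -tags "nonvidia release ui" \
  --ldflags '-linkmode external' \
  github.com/hashicorp/nomad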

@tgross
Member

tgross commented Nov 25, 2019

Great to hear that worked out! I'd love to hear more about what you're doing -- installing Nomad on OpenWRT is definitely a use case we haven't seen before!

@the-maldridge

It's really interesting that you were even able to get this far with a glibc binary on a musl system. I'd also be very curious what you were doing, since that shouldn't have worked.

@jboero
Contributor

jboero commented Nov 17, 2020

I've been experimenting with this as well, building a minimal Nomad env. I'm not sure if this is a post by the OP, but it's very handy on linking musl. I managed to get it working with glibc, minimal nsswitch libs, and an /etc/nsswitch.conf specifying only files, but it might be nice to have an alternative user-lookup option without dependencies on libnss. @tgross @the-maldridge?

https://brimstone.github.io/#!talks/musl-go.md
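
For reference, an /etc/nsswitch.conf restricted to the files database, along the lines described above, would look something like this (illustrative sketch, not the exact config used):

# /etc/nsswitch.conf - resolve users and groups from local files only
passwd: files
group:  files
shadow: files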

@the-maldridge

Unless you actually need nsswitch functionality, linking fully statically and just using the files database should be enough.
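
In Go terms that usually means building with cgo disabled, so the pure-Go os/user implementation reads /etc/passwd directly (a rough sketch; whether every Nomad driver works in such a build is part of what #5643 tracks):

CGO_ENABLED=0 go build -o target/nomad -tags "nonvidia release ui" github.com/hashicorp/nomad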

@github-actions

I'm going to lock this issue because it has been closed for 120 days ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@github-actions github-actions bot locked as resolved and limited conversation to collaborators Oct 28, 2022