
simple exec (no cgroups, docker or resource isolation) #213

Closed · jippi opened this issue Oct 4, 2015 · 9 comments
@jippi (Contributor) commented Oct 4, 2015

hi,

a nice feature would be a very simple exec driver that just runs a command

no fancy docker, cgroups or resource isolation - just running a command

this would allow me to replace supervisord with nomad, and get a lot of the features I'm missing from supervisord baked right into nomad

@jippi jippi changed the title simple exec simple exec (no cgroups, docker or resource allocation) Oct 4, 2015
@ghost commented Oct 4, 2015

There is one: https://nomadproject.io/docs/drivers/exec.html

There are even Qemu and JVM drivers.

@jippi (Contributor, Author) commented Oct 4, 2015

Every time I execute a job with exec, my mount points get messed up - some go missing and it remounts the data dir as well. Currently it doesn't seem like the job is actually executed either, as nomad.out is nowhere to be found.

Example:

job "example" {
    # region = "global"
    datacenters = ["online"]
    type = "batch"
    priority = 50

    update {
        # Stagger updates every 10 seconds
        stagger = "10s"

        # Update a single task at a time
        max_parallel = 1
    }

    group "demo" {
        # Control the number of instances of this groups.
        # Defaults to 1
        # count = 1

        # Define a task to run
        task "date" {
            driver = "exec"

            config {
                command = "/bin/date > nomad.out"
            }

            resources {
                cpu = 500 # 500 MHz
            }
        }
    }
}

gives

==> Monitoring evaluation "c8587fb6-8043-d59c-db4e-32b9cce42997"
    Evaluation triggered by job "example"
    Allocation "b91276d6-cdd4-815b-48b6-d10d6377fe02" modified: node "ee0bd4c4-c1df-2ca1-c01d-7ac4c2138769", group "demo"
    Evaluation status changed: "pending" -> "complete"
==> Evaluation "c8587fb6-8043-d59c-db4e-32b9cce42997" finished with status "complete"

and makes a new mount point on the box

/dev/disk/by-uuid/2778df7a-f8b9-4dc0-b2ba-71d76aea0261   12G  7.5G  3.2G  71% /opt/nomad/data/alloc/b91276d6-cdd4-815b-48b6-d10d6377fe02/date/alloc

and unmounts devtmpfs and proc

before exec

-> df -h
Filesystem                                              Size  Used Avail Use% Mounted on
udev                                                     10M     0   10M   0% /dev
tmpfs                                                   3.0G  216K  3.0G   1% /run
/dev/disk/by-uuid/2778df7a-f8b9-4dc0-b2ba-71d76aea0261   12G  7.4G  3.2G  70% /
tmpfs                                                   5.0M     0  5.0M   0% /run/lock
tmpfs                                                   6.1G     0  6.1G   0% /run/shm

after exec

-> df -h
df: `devtmpfs': No such file or directory
df: `proc': No such file or directory
Filesystem                                              Size  Used Avail Use% Mounted on
udev                                                     10M     0   10M   0% /dev
tmpfs                                                   3.0G  220K  3.0G   1% /run
/dev/disk/by-uuid/2778df7a-f8b9-4dc0-b2ba-71d76aea0261   12G  7.5G  3.2G  71% /
tmpfs                                                   5.0M     0  5.0M   0% /run/lock
tmpfs                                                   6.1G     0  6.1G   0% /run/shm
/dev/disk/by-uuid/2778df7a-f8b9-4dc0-b2ba-71d76aea0261   12G  7.5G  3.2G  71% /opt/nomad/data/alloc/b91276d6-cdd4-815b-48b6-d10d6377fe02/date/alloc
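
Note: one likely reason nomad.out never appears is that the exec driver presumably execs the command directly rather than through a shell, so "> nomad.out" would be passed to /bin/date as a literal argument. A sketch of a config that would make the redirection work, assuming (hypothetically) the driver accepts an args list alongside command:

config {
    # run through a shell so the "> nomad.out" redirection is interpreted
    command = "/bin/sh"
    args    = ["-c", "/bin/date > nomad.out"]
}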

@jippi jippi changed the title simple exec (no cgroups, docker or resource allocation) simple exec (no cgroups, docker or resource isolation) Oct 4, 2015
@dadgar (Contributor) commented Oct 4, 2015

Hey,

Would you mind describing your environment? The mounts are a bit odd, because they get cleaned up when the allocation is destroyed. That happens hours after the job finishes, which in a production environment gives you time to look at the logs or ship them, but when running locally it is undesirable. We will work on that.

BTW, the stdout and stderr logs can be found in the task's local directory, at /nomad_alloc_dir/alloc_id/task_name/local/task_name.stdout (or .stderr).
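
With the job above and the alloc dir visible in the mount output, that would be (a sketch, substituting this thread's alloc ID and task name):

/opt/nomad/data/alloc/b91276d6-cdd4-815b-48b6-d10d6377fe02/date/local/date.stdout
/opt/nomad/data/alloc/b91276d6-cdd4-815b-48b6-d10d6377fe02/date/local/date.stderr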

@jippi (Contributor, Author) commented Oct 5, 2015

Hi,

I'm running 5 nomad servers and 7 nomad clients - with v0.1.0 - each on its own dedicated VM.

Both the nomad servers and the nomad clients run inside QEMU/KVM instances on Debian 7.8 with a custom 4.0.6 kernel.

Is it possible to make an executor that won't touch anything, but simply execs a command without any server modifications? :) I'm not looking to nomad for resource isolation, but simply for a distributed executor to replace supervisord - which has plenty of pain points for us currently.

example fstab

UUID=2778df7a-f8b9-4dc0-b2ba-71d76aea0261 /                 ext4        rw,noatime,nodiratime,discard,nouser_xattr,barrier=0,data=ordered,errors=remount-ro     0       1
UUID=71570a26-77ff-4b86-a26d-9531dd0b4f35 none              swap        sw                                                      0       0
UUID=a779cc49-20be-4f23-bb5c-72d9f6713f54 /var/spool/postfix/   ext4            rw,noatime,nodiratime,discard,nouser_xattr,barrier=0,data=ordered,errors=remount-ro             0       0
tmpfs                                     /tmp                  tmpfs           rw,noatime,nodiratime,noatime,size=5g

@aldergren (Contributor)

From my understanding of the code, the exec driver uses the executor to run your task within a per-task chroot. It makes some mounts available within this chroot, but most won't be, and which mounts are accessible is currently not configurable. I have the same requirement as you, and I'd be willing to attempt to write the code if there's consensus on a design that's likely to be accepted.

@wagnersza

+1 simple exec (no cgroups, docker or resource isolation)

for instance, to run the "puppet apply" command on the hosts.

@dadgar (Contributor) commented Oct 6, 2015

Just as an update, this is something we plan to support.

@dadgar (Contributor) commented Oct 9, 2015

We now have a raw_exec driver!
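
For reference, a minimal sketch of the original task rewritten for raw_exec - assuming it accepts the same command/args config keys as exec, and wrapping the command in a shell so the output redirection is interpreted rather than passed as a literal argument:

task "date" {
    driver = "raw_exec"

    config {
        # raw_exec runs directly on the host, with no chroot or resource isolation
        command = "/bin/sh"
        args    = ["-c", "/bin/date > nomad.out"]
    }
}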

@dadgar dadgar closed this as completed Oct 9, 2015
benbuzbee pushed a commit to benbuzbee/nomad that referenced this issue Jul 21, 2022
Ensure InstallSnapshot always consumes the snapshot from the stream.
@github-actions bot

I'm going to lock this issue because it has been closed for 120 days ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@github-actions github-actions bot locked as resolved and limited conversation to collaborators Dec 29, 2022