
Restrict task drivers to namespaces #11807

Closed
apollo13 opened this issue Jan 8, 2022 · 6 comments
Labels: stage/accepted (Confirmed, and intend to work on. No timeline commitment though.), theme/auth, type/enhancement

apollo13 commented Jan 8, 2022

Proposal

It would be great if access to task drivers were guarded by an ACL.

Use-cases

Nomad is a great task scheduler, but with great power comes great responsibility. For instance, in some cases a Nomad admin might want to execute raw_exec tasks on nodes, but raw_exec shouldn't be available to "normal" users. Maybe it doesn't even need ACLs but solely a config option that ties the availability of task drivers to certain namespaces.

WDYT? Any other creative ideas -- or did I miss something?

Attempted Solutions

I don't see an easy workaround :)

@the-maldridge

In a discussion on this topic I proposed an idea that I'd like to reiterate here. My idea was to make drivers behave like vault modules, where you can mount them to backends. This would tie in well with the concept of drivers that are external plugins and may be extremely special purpose. I also think this would provide a neat way to resolve the concerns raised in #9258 by allowing the docker driver to be remounted with different options in different namespaces.

tgross commented Jan 10, 2022

Dropping some open-ended thoughts here...

Maybe it doesn't even need ACLs but solely a config option that ties the availability of task drivers to certain namespaces.

Something that makes this a little challenging is that task drivers are fingerprinted on the clients and they're "opaque" to the server. You can see this in the specialized behavior we have for the alloc-node-exec ACL capability, which we have to check on the client and not on the server. We can hard-code some ACL capabilities for the built-in drivers, but then we'd be leaving out the ability to constrain custom drivers. (E.g., suppose I have a mycompany-root-fork-exec driver that I want to constrain to being run by cluster administrators, similar to what you're proposing for raw_exec.)

My idea was to make drivers behave like vault modules, where you can mount them to backends.

The difference from a semantics perspective is that Vault modules are global to the cluster whereas task drivers are specific to clients. The situation we have now is that they can be "loaded/unloaded" via the configuration file on a per client basis. But dynamically loadable client configuration is a cool idea for sure (and something I think we'll want for node metadata and some storage ideas I've been kicking around).

Some other stuff to consider...

If we did this as ACLs, what does the ACL policy configuration look like? Do we make extended capabilities for submit-job (and dispatch, etc.) where we append the driver name? The capabilities are an allow list, so would we need some kind of implicit submit-job-* for backwards compatibility?

namespace "prod" {
  policy = "read"
  capabilities = [
    "submit-job",
    "submit-job-raw_exec",
    "submit-job-docker",
    ]
}

Could we do this as a client-side plugin configuration?

plugin "raw_exec" {
  config {
    enabled = true
    namespaces = ["system-admin", "dev"]
  }
}

That'd have to be implemented for each of the plugins, which is easy for internal plugins but scattered for community plugins. In any case, we could conceivably do something like this as a first step and get it all wired up to the scheduler, and then have dynamic client configuration as a separate body of work.

@apollo13

Hi @tgross, thank you for your thoughtful response (as usual :)).

If we did this as ACLs, what does the ACL policy configuration look like?

That is a good question, and I purposely didn't think too much about it in the issue post since I wanted to gauge interest in it first, i.e. whether this is something you'd be willing to support at all…

But to answer your question after a bit more thinking: something like submit-job-docker looks ugly and I wonder if ACLs are the correct place for it anyway. For me, the allowed drivers are not necessarily something to guard via ACLs but an intrinsic property of the namespace.

So maybe it would make more sense as an enabled_drivers: [] field on the namespace creation API call (https://www.nomadproject.io/api/namespaces#create-or-update-namespace). This way it would be on the servers to check; once the server has verified that the namespace allows those drivers, it can select the appropriate clients based on fingerprinting.
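For illustration, a request body for that endpoint might then look roughly like this (the EnabledDrivers field is purely hypothetical at this point; Name and Description are existing fields):

{
  "Name": "prod",
  "Description": "Production workloads",
  "EnabledDrivers": ["docker", "exec"]
}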

That'd have to be implemented for each of the plugins, which is easy for internal plugins but a scattered for community plugins.

I guess that would be okay; built-in plugins that do not configure namespaces would simply be available in all namespaces?

In any case, we could conceivably do something like this as a first step and get it all wired up to the scheduler, and then have dynamic client configuration as a separate body of work.

Sounds good to me. If we can get a commitment from the Nomad team I might be able to play with it [after we figure out which approach we wanna take] :)

tgross commented Jan 10, 2022

Hey @apollo13, I think that approach sounds great! I floated the idea internally and that kicked off a very quick Request For Comments doc that's being circulated among interested HashiCorp folks. I'm sharing the doc below for your comments as well:


[RFC] Restrict Task Drivers by Namespace

Background

Specific Nomad task drivers can currently be disabled on individual client nodes with the plugin.config.enabled field. Several internal users and members of the Nomad community have asked for the ability to instead restrict task drivers to particular namespaces.

This would allow operators to enable “less secure”, loosely constrained task drivers (e.g. raw_exec) only for namespaces that are accessible by cluster operators and not by general users. For example, a cluster operator could use a sysbatch raw_exec job to update packages on every client node in the cluster, while developers are limited to docker jobs.

Implementation

Add a new NamespaceCapabilities object to the existing Namespace object. This object will contain a field EnabledTaskDrivers.

diff --git a/nomad/structs/structs.go b/nomad/structs/structs.go
index f19656fce..cb098b413 100644
--- a/nomad/structs/structs.go
+++ b/nomad/structs/structs.go
@@ -4948,11 +4948,20 @@ type Namespace struct {
        // cross-regions.
        Hash []byte

+       // Capabilities is the set of capabilities allowed for this namespace
+       Capabilities *NamespaceCapabilities
+
        // Raft Indexes
        CreateIndex uint64
        ModifyIndex uint64
 }

+// NamespaceCapabilities represents a set of capabilities allowed for this
+// namespace, to be checked at job submission time.
+type NamespaceCapabilities struct {
+       EnabledTaskDrivers []string
+}
+
 func (n *Namespace) Validate() error {
        var mErr multierror.Error

On job submission, the admission controller hook will check whether the job's namespace has a Capabilities block and whether that block has a non-empty EnabledTaskDrivers field. If so, the task driver for each task in the job spec will be validated against that list.
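As a rough sketch of what that admission check could look like, with stand-in types reduced to the essentials (the names below are illustrative, not the actual Nomad internals):

package main

import "fmt"

// Minimal stand-ins for the relevant structs; the real types live in
// nomad/structs and carry many more fields.
type NamespaceCapabilities struct {
    EnabledTaskDrivers []string
}

type Namespace struct {
    Name         string
    Capabilities *NamespaceCapabilities
}

type Task struct {
    Name   string
    Driver string
}

// validateTaskDrivers mirrors what a job admission hook could do: if the
// namespace restricts task drivers, every task's driver must be on the list.
func validateTaskDrivers(ns *Namespace, tasks []Task) error {
    if ns.Capabilities == nil || len(ns.Capabilities.EnabledTaskDrivers) == 0 {
        return nil // no restriction configured for this namespace
    }
    allowed := make(map[string]bool, len(ns.Capabilities.EnabledTaskDrivers))
    for _, d := range ns.Capabilities.EnabledTaskDrivers {
        allowed[d] = true
    }
    for _, t := range tasks {
        if !allowed[t.Driver] {
            return fmt.Errorf("task %q uses driver %q, which is not enabled for namespace %q",
                t.Name, t.Driver, ns.Name)
        }
    }
    return nil
}

func main() {
    ns := &Namespace{
        Name:         "prod",
        Capabilities: &NamespaceCapabilities{EnabledTaskDrivers: []string{"docker"}},
    }
    tasks := []Task{{Name: "update-packages", Driver: "raw_exec"}}
    if err := validateTaskDrivers(ns, tasks); err != nil {
        fmt.Println(err) // the raw_exec task is rejected at submission time
    }
}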

The existing namespace Register API only needs to have the new field added in order to add this feature. But the existing nomad namespace apply command does not accept a specification file in the way that we do for quotas. For the initial implementation, we'll add a new command line argument -enabled-task-drivers that accepts a comma-separated list of task drivers. In future work as we expand namespace capabilities (see below), we'll probably want to create a namespace specification similar to what we have for quotas, ACL policies, and CSI volumes.
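A hypothetical invocation under that proposal (the exact flag shape may well change):

nomad namespace apply -enabled-task-drivers "docker,exec" prod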

An operator may update a namespace after a job that uses that namespace has been submitted. Nomad does not currently reconcile the state of running allocations against configuration changes except for jobs and node updates that force an allocation to be rescheduled.

The Namespace.UpsertNamespace RPC triggered by the nomad namespace apply command should check whether the enabled task drivers configuration has changed from the previous value. If so, it should check all jobs running in that namespace and return an error if any existing job would no longer be valid. This behavior is similar to nomad namespace delete, which returns the error namespace "example" has non-terminal jobs in regions: [us-east-1, us-west-1] if a user tries to delete a namespace that's in use.
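A minimal sketch of that pre-apply check, again with illustrative names rather than the real RPC types:

package main

import (
    "fmt"
    "strings"
)

// jobDrivers maps a running job's ID to the task drivers it uses.
type jobDrivers map[string][]string

// checkNamespaceUpdate returns an error if shrinking the enabled-driver list
// would invalidate a job currently running in the namespace. An empty
// newDrivers list means "no restriction", so nothing can become invalid.
func checkNamespaceUpdate(newDrivers []string, running jobDrivers) error {
    if len(newDrivers) == 0 {
        return nil
    }
    allowed := make(map[string]bool, len(newDrivers))
    for _, d := range newDrivers {
        allowed[d] = true
    }
    var conflicts []string
    for jobID, drivers := range running {
        for _, d := range drivers {
            if !allowed[d] {
                conflicts = append(conflicts, fmt.Sprintf("%s (driver %q)", jobID, d))
                break
            }
        }
    }
    if len(conflicts) > 0 {
        return fmt.Errorf("namespace has non-terminal jobs using disallowed task drivers: %s",
            strings.Join(conflicts, ", "))
    }
    return nil
}

func main() {
    running := jobDrivers{"patch-all-nodes": {"raw_exec"}, "web": {"docker"}}
    // Restricting the namespace to docker only would strand patch-all-nodes.
    fmt.Println(checkNamespaceUpdate([]string{"docker"}, running))
}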

Abandoned Ideas

  • Add an ACL capability: instead of adding the capability to a namespace, we could add the capability to an ACL policy. For example, we could add a capability submit-job-raw_exec. This is awkward because it would require parsing the task driver portion out of the name for all ACL checks for that capability. We would not be able to ensure the task driver portion is correct at policy submission time, because task drivers can be added by clients at any time. And we would need to add a new set of these capabilities for every new base capability, creating a combinatorial explosion of capabilities.

Future Work

  • Expand namespace capabilities: the NamespaceCapabilities field leaves room to implement other capabilities that have been requested on namespaces, such as limiting namespaces to particular node classes (ref: Provide a way to tie namespaces to certain client nodes, #9342)
  • Reconciliation: Nomad currently does not reconcile running jobs against configuration changes outside of job submission and rescheduling events. For example, if a user who submitted a job has their ACL policy changed such that they cannot submit jobs to that namespace anymore, their existing jobs are left running. Likewise, if a client is reconfigured so that a plugin is disabled, existing tasks using that plugin are left running. Reconciling these behaviors is a large project that spans across the server and client. This has been left out of scope for this proposal.

apollo13 added a commit to apollo13/nomad that referenced this issue Jan 10, 2022
@tgross tgross changed the title from "ACL for task drivers" to "Restrict task drivers to namespaces" Jan 13, 2022
@tgross tgross added the stage/accepted label and removed the stage/needs-discussion label Jan 13, 2022
@tgross tgross self-assigned this Feb 8, 2022
tgross commented Feb 24, 2022

Closed by #11813, which will ship in Nomad 1.3.0!
