Initial support for Host Volumes #5923

Merged: 10 commits from dani/rfc-host-volumes into f-host-volumes on Aug 9, 2019

Conversation

@endocrimes endocrimes commented Jul 4, 2019

This is the minimum viable work for #5377.

Subsequent PRs to the f-host-volumes branch will include support for features like getting volume status from the API, further improvements to driver support, validating volumes on client start, etc.

Example Client Config

client {
  host_volume "tmp-dir" {
    path = "/tmp"
  }
}

Example Jobspec

job "example" {
  datacenters = ["dc1"]

  type = "service"

  group "cache" {
    count = 1

    volume "tmp" {
      type = "host"

      config {
        source = "tmp-dir"
      }
    }

    task "redis" {
      driver = "docker"

      config {
        image = "redis"
      }

      volume_mount {
        volume           = "tmp"
        destination      = "/tmp"
      }

      resources {
        cpu    = 500 # 500 MHz
        memory = 256 # 256MB

        network {
          mbits = 10
          port  "db"  {}
        }
      }
    }
  }
}
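
For orientation, below is a minimal Go sketch of the kinds of structures the two HCL blocks above decode into. The type names come from identifiers quoted later in this review (structs.ClientHostVolumeConfig, structs.VolumeRequest, structs.VolumeMount); the field sets are assumptions inferred only from the HCL keys shown here, not the actual definitions in nomad/structs.

package sketch

// ClientHostVolumeConfig: a client-side host_volume "name" { path = ... } block.
type ClientHostVolumeConfig struct {
	Name string // block label, e.g. "tmp-dir"
	Path string // host path to expose, e.g. "/tmp"
}

// VolumeRequest: a group-level volume "name" { type = "host" config { ... } } block.
type VolumeRequest struct {
	Name   string                 // block label, e.g. "tmp"
	Type   string                 // "host" is the only type in this PR
	Config map[string]interface{} // e.g. {"source": "tmp-dir"}
}

// VolumeMount: a task-level volume_mount block.
type VolumeMount struct {
	Volume      string // which group volume to mount, e.g. "tmp"
	Destination string // path inside the task filesystem, e.g. "/tmp"
}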

@endocrimes endocrimes force-pushed the dani/rfc-host-volumes branch 6 times, most recently from 25eae45 to 39037d7 on July 23, 2019 at 13:07
@endocrimes endocrimes changed the base branch from master to f-host-volumes July 23, 2019 13:12
@endocrimes endocrimes changed the title [WIP] Initial support for Host Volumes Initial support for Host Volumes Jul 23, 2019
@endocrimes endocrimes force-pushed the dani/rfc-host-volumes branch 4 times, most recently from 5935e7a to 4a1d99c on July 25, 2019 at 14:49
@endocrimes endocrimes added this to the 0.10.0 milestone Jul 25, 2019

@notnoop notnoop left a comment


I did a quick skim of the PR and made a few comments. I can roughly see how the pieces fit together and see the relevance of the changes, but having comments on the newly added structs, and stating the convention for how volume vs. mount vs. host volume is used, would help a lot in following the code.

I'll review again in the morning! Thanks!

return result
}

func (h *volumeHook) hostVolumeMountConfigurations(vmounts []*structs.VolumeMount, volumes map[string]*structs.VolumeRequest, client map[string]*structs.ClientHostVolumeConfig) ([]*drivers.MountConfig, error) {

I'm getting lost tracking the distinction between volume mount, volume config, mount config, and volume mount config :). I'd suggest adding a comment somewhere to explain what each type is meant to represent.

Also, I'd love to clarify the parameter names here, e.g. client. Maybe naming each parameter after the source of its value would be useful, since they are all volumes of some kind.


OK - so I have a better handle now. Let me phrase these as best I understand them, and you can confirm:

  • volume is the filesystem resource that is the unit of scheduling assigned to an alloc.
  • host volume is the client-side configuration of a volume. HostVolume fields and variables in this PR refer to client config values of type ClientHostVolumeConfig (or derived from it), never to the volume instances themselves.
  • volume mount refers to the task-level configuration for how the volume is mounted into the task filesystem.
  • mount config refers to the concrete OS bind specification that executors use to actually perform the bind mount?

Is this roughly correct? It would be nice to call this out in the struct documentation (a sketch of how the pieces chain together follows below).
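
To make the chain described in that list concrete, here is a hedged sketch of the resolution step, reusing the sketch types defined after the jobspec example above plus a stand-in MountConfig (drivers.MountConfig in the PR). This is illustrative only, not the PR's actual hostVolumeMountConfigurations implementation, and the field names are assumptions.

package sketch

import "fmt"

// MountConfig stands in for the driver-level bind specification; fields are assumed.
type MountConfig struct {
	HostPath string // path on the host, from the client's host_volume config
	TaskPath string // destination inside the task, from volume_mount
}

// resolveMount walks the chain: task volume_mount -> group volume request ->
// client host volume -> concrete bind mount. Illustrative only.
func resolveMount(
	vm *VolumeMount,
	requests map[string]*VolumeRequest,
	hostVolumes map[string]*ClientHostVolumeConfig,
) (*MountConfig, error) {
	req, ok := requests[vm.Volume]
	if !ok {
		return nil, fmt.Errorf("no volume request named %q", vm.Volume)
	}
	source, _ := req.Config["source"].(string)
	hv, ok := hostVolumes[source]
	if !ok {
		return nil, fmt.Errorf("client has no host volume %q", source)
	}
	return &MountConfig{HostPath: hv.Path, TaskPath: vm.Destination}, nil
}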

@langmartin langmartin left a comment


This is just a first pass. I have an additional question: what actually causes the volume to be mounted in the task directory?
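
On that question, the bullets earlier in the thread suggest the hook itself does not perform any mount: it resolves volume_mount entries into driver mount configurations, and the executor/driver (docker in the example job) performs the actual bind mount when it launches the task. A rough sketch of that flow, reusing the sketch types above and inventing a taskSketch holder purely for illustration:

package sketch

// taskSketch is a hypothetical holder for the pieces the volume hook needs;
// the real taskrunner plumbing in the PR is different.
type taskSketch struct {
	VolumeMounts []*VolumeMount            // from the task's volume_mount blocks
	GroupVolumes map[string]*VolumeRequest // from the group's volume blocks
	DriverMounts []*MountConfig            // handed to the driver at task start
}

// prestartVolumes resolves every requested mount before the task starts.
// The driver, not this hook, performs the bind mount using DriverMounts.
func prestartVolumes(task *taskSketch, hostVolumes map[string]*ClientHostVolumeConfig) error {
	for _, vm := range task.VolumeMounts {
		mc, err := resolveMount(vm, task.GroupVolumes, hostVolumes)
		if err != nil {
			return err
		}
		task.DriverMounts = append(task.DriverMounts, mc)
	}
	return nil
}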

@notnoop notnoop left a comment

I did one more round of review. I have a high-level question about the feasibility logic and the distinction between Volume and VolumeRequest.

I think I have a better handle on the names now, but I believe this PR would benefit greatly from better documentation for the new types and methods, especially the ones in the structs package.

I think I'll need one more review cycle before I feel I grok this PR well.

Also, I don't expect this to be addressed in this review, but I'm wondering how the feasibility logic will evolve when we support non-host volumes that may have different feasibility logic. In this iteration, the scheduler is aware of the volume type's requirements, but I guess it will not be with pluggable volume types?

  // Create the feasibility wrapper which wraps all feasibility checks in
  // which feasibility checking can be skipped if the computed node class has
  // previously been marked as eligible or ineligible. Generally this will be
  // checks that only needs to examine the single node to determine feasibility.
  jobs := []FeasibilityChecker{s.jobConstraint}
- tgs := []FeasibilityChecker{s.taskGroupDrivers, s.taskGroupConstraint, s.taskGroupDevices}
+ tgs := []FeasibilityChecker{s.taskGroupDrivers, s.taskGroupConstraint, s.taskGroupHostVolumes, s.taskGroupDevices}
  s.wrappedChecks = NewFeasibilityWrapper(ctx, s.quota, jobs, tgs)
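
The diff above wires a new s.taskGroupHostVolumes checker into the task-group feasibility stack. As a rough, illustrative sketch only (the real checker in scheduler/feasible.go likely differs, and it reuses the sketch types from earlier blocks), the core check is: a node is feasible for a group only if its client config declares every host volume the group requests.

package sketch

// hostVolumesFeasible is a hypothetical distillation of the feasibility check:
// every "host" volume request must be backed by a host volume on the node.
func hostVolumesFeasible(
	nodeHostVolumes map[string]*ClientHostVolumeConfig, // from the node's client config
	requests map[string]*VolumeRequest, // from the task group being placed
) bool {
	for _, req := range requests {
		if req.Type != "host" {
			continue // non-host volume types would need their own logic
		}
		source, _ := req.Config["source"].(string)
		if _, ok := nodeHostVolumes[source]; !ok {
			return false
		}
	}
	return true
}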

Noting that this needs to happen in nomad enterprise as well.


Is this worth adding an e2e test for (in a separate PR)?

@endocrimes endocrimes commented Aug 3, 2019


Maybe? I doubt it, though, tbh - there's very little new behaviour overall.

@endocrimes endocrimes merged commit a087e8e into f-host-volumes Aug 9, 2019

github-actions bot commented Feb 5, 2023

I'm going to lock this pull request because it has been closed for 120 days ⏳. This helps our maintainers find and focus on the active contributions.
If you have found a problem that seems related to this change, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@github-actions github-actions bot locked as resolved and limited conversation to collaborators Feb 5, 2023