
Add proposal for mount options #321

Merged · 1 commit · Feb 23, 2017

Conversation

@gnufied (Member) commented Feb 1, 2017

Adding initial proposal for mount options.

CC @saad-ali @matchstick @rootfs

@k8s-ci-robot added the cncf-cla: yes label (indicates the PR's author has signed the CNCF CLA) on Feb 1, 2017

To match allowed options, we will split the mount options in the PV by ',' and then verify that each key does not exist in the blacklist, which will prevent mounting volumes with those options.

One upside of handling these options in the kubelet is that it will also work for external provisioners. One major downside is that we will allow creation of a PV or SC which can be blocked later by a blacklisted option, i.e. a PV or SC with an invalid mount option will not be rejected immediately.
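A minimal sketch, in Go, of the blacklist check described above (names and structure are illustrative, not from the actual implementation):

```go
package main

import (
	"fmt"
	"strings"
)

// Example blacklist entries taken from this review; the real list would be
// per volume plugin.
var nfsBlacklist = map[string]bool{"hard": true, "noacl": true}

// checkMountOptions splits the PV's mount option string on ',' and rejects
// any option whose key appears in the blacklist.
func checkMountOptions(mountOptions string, blacklist map[string]bool) error {
	for _, opt := range strings.Split(mountOptions, ",") {
		key := strings.SplitN(opt, "=", 2)[0] // "nfsvers=3" -> "nfsvers"
		if blacklist[strings.TrimSpace(key)] {
			return fmt.Errorf("mount option %q is not allowed", key)
		}
	}
	return nil
}

func main() {
	// "hard" is blacklisted here, so this PV would be rejected at mount time.
	fmt.Println(checkMountOptions("hard,nolock,nfsvers=3", nfsBlacklist))
}
```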
@liggitt (Member) commented Feb 1, 2017

letting potentially dangerous data into the API, then ignoring/failing on it at the kubelet level, is a change from the direction of PodSecurityPolicy, etc.

enforcing it this way would not let you set advanced options in PVs (which require higher permissions to create) but forbid them in pod specs (which are accessible to normal namespace-constrained users)

it also would not let you allow privileged users (e.g. cluster admins setting up mounts in kube-system, etc) to make use of mount options, while limiting other users' access

@gnufied (Member Author) replied:

Agreed on all accounts, but in the use cases we have seen so far there hasn't been one where we need some PVs from the same plugin configurable with wider mount options and some with narrower ones.

The other options we have considered have similar problems. But this is still very much WIP, so we are considering all options.

@gnufied (Member Author) replied:

Added more options for avoiding a blacklist while still keeping a saner set of mount options.

@gnufied (Member Author) commented Feb 1, 2017

cc @thockin

Mount-time options that are operationally important and have no security implications should be supported. Examples are NFS's TCP mode, versions, lock mode, caching mode; Glusterfs's caching mode; SMB's version, locking, id mapping; and more.

## Design

A Member commented:

cc @kubernetes/api-reviewers @kubernetes/sig-storage-api-reviews @kubernetes/sig-auth-api-reviews

To prevent users from using options which are not safe, the Kubernetes admin can configure volume options and specify a blacklist of mount option parameters. For example:

```
~> kubelet --volume-options <volume-config-file>
```
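A plausible shape for that config file, reconstructed from the per-plugin blacklist fragment quoted later in this thread (the first plugin's name is elided there, so `<plugin-name>` is a placeholder):

```json
{
  "<plugin-name>": {
    "mount_option_blacklist": ["transport"]
  },
  "nfs": {
    "mount_option_blacklist": ["hard", "noacl"]
  }
}
```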
A Member commented:

this requires deploying a config file to all nodes, which is hard to maintain.

A Contributor commented:

+1 please don't add more config files.

A Member commented:

+1 let's not make the white/black list cluster-admin configurable. Just hard-code it in the plugins for now. The addition of the mountOptions field to the volume specs for some plugins should be the only API change.

My thought process is that we should minimize changes to the API while making sure we don't paint ourselves into a corner. By implementing a minimally viable solution (a hardcoded white/black list), we reduce complexity while leaving open the option of adding a cluster-admin-configurable list in the future if the need arises.

A Contributor commented:

didn't @derekwaynecarr add some crazy config resource type? Not that I think this is definitely the way to go...

```yaml
nfs:
  path: /tmp
  server: 172.17.0.2
  mountOption: "hard,nolock,nfsvers=3"
```
A Member commented:

Can you elaborate on what the alpha/beta phase would look like? Annotations? Or are we aiming directly at a stable API?

A Contributor commented:

Also, I forget how these things look internally in the kernel, etc. But would it make sense to make this a list instead of an opaque CSV string? I see there is a concept around fields, but I agree that making a field for every option is going to be untenable to keep up with.

A Member commented:

+1, a list seems like it would be more robust than a CSV string? Thoughts @gnufied?

@gnufied (Member Author) replied:

I think it is more natural to have this as a string, since that is what admins are used to anyway. Using a string doesn't prevent us from validating the options (split by ',' and treat them as a list internally).

A Contributor commented:

In general, if we (in our validation) are going to deal with each of these individually, we should deal with them as a list. I'm pretty cautious any time we accept a string that we parse where we don't own the underlying syntax. So if we want to parse and validate, a list is preferable, especially if we have other controls (like pod security policy?) that need to say which mount options are allowed.

@gnufied (Member Author) replied:

hmm, okay - it is a purely aesthetic thing and I could be wrong. If we use a list, mount options will look like:

mountOption: ["hard", "nolock", "nfsvers=3"]

Or, when there is only one option:

mountOption: ["volume-name=foobar"]

So even if we use a list here, we still have to do some string processing, such as splitting by '=' for key/values.
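A small Go illustration of that point (illustrative only, not proposal code): either representation reduces to the same key/value form after splitting on '='.

```go
package main

import (
	"fmt"
	"strings"
)

// parseOption splits a single option such as "nfsvers=3" into key and value;
// flag-style options like "hard" have an empty value.
func parseOption(opt string) (key, value string) {
	parts := strings.SplitN(opt, "=", 2)
	if len(parts) == 2 {
		return parts[0], parts[1]
	}
	return parts[0], ""
}

func main() {
	fromCSV := strings.Split("hard,nolock,nfsvers=3", ",") // string form
	fromList := []string{"hard", "nolock", "nfsvers=3"}    // list form

	for _, opts := range [][]string{fromCSV, fromList} {
		for _, o := range opts {
			key, value := parseOption(o)
			fmt.Printf("key=%q value=%q\n", key, value)
		}
	}
}
```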

This still has the problem that each time a new option has to be supported, users will have to wait for a new release.
A Member commented:

I think each time a new option appears it has to be reviewed anyway (and blacklisted by default). In fact, whitelisting the options would look like a safer approach to me.

A Contributor commented:

+1 to whitelisting and making the whitelist a node property.

A Member commented:

should it be whitelisted by the node or by the API server? If it's at the node level, that implies you can have non-homogeneous nodes. Should the scheduler take into account which nodes allow which options?

A Member commented:

+1 to whitelisting. I propose the whitelist be implemented per volume plugin (the set of options allowed will differ between NFS and GCE PD, for example). The whitelist will be checked at mount time: when the mount call happens, unsupported mount options will be dropped (with a warning event generated indicating that the specified options were ignored since they are not in the whitelist).
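A sketch of that mount-time filtering (hypothetical names; in the kubelet the warning would typically go through an event recorder, e.g. something like recorder.Eventf):

```go
package main

import "fmt"

// Example whitelist for one plugin; the real set would differ per plugin.
var nfsWhitelist = map[string]bool{"hard": true, "nolock": true, "nfsvers": true}

// filterMountOptions keeps whitelisted option keys and collects the rest so
// a warning event can be emitted for them.
func filterMountOptions(keys []string, whitelist map[string]bool) (allowed, dropped []string) {
	for _, k := range keys {
		if whitelist[k] {
			allowed = append(allowed, k)
		} else {
			dropped = append(dropped, k)
		}
	}
	return allowed, dropped
}

func main() {
	allowed, dropped := filterMountOptions([]string{"hard", "noacl"}, nfsWhitelist)
	fmt.Println("mounting with:", allowed)
	if len(dropped) > 0 {
		// stand-in for the warning event described above
		fmt.Println("warning: ignoring non-whitelisted options:", dropped)
	}
}
```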

@philips (Contributor) commented Feb 2, 2017

cc @lpabon


"mount_option_blacklist": ["transport"]
},
"nfs": {
"mount_option_blacklist": ["hard", "noacl"]
A Contributor commented:

Hrm, it is a list here. Why not above?

A Member commented:

are keys enough? I don't know enough about mount options to know if you'd need to whitelist specific values

@gnufied (Member Author) replied:

It would be a tall order to validate the values; the keys themselves are hard enough IMO. cc @rootfs

@smarterclayton (Contributor) commented Feb 10, 2017

Blacklists are bad. We should whitelist if we want to be secure.

EDIT: saw the conversation at the end.


@childsb (Contributor) commented Feb 6, 2017

I don't think we want the 'blacklist options' support section. The user will never see mount options, only the admin.

@saad-ali (Member) left a comment:

Sync'd offline with @gnufied. We both agree that a hardcoded whitelist per volume plugin sounds like the best approach.


@childsb (Contributor) commented Feb 8, 2017

> Sync'd offline with @gnufied. We both agree that a hardcoded whitelist per volume plugin sounds like the best approach.

@saad-ali Can you help me understand why a whitelist is needed? PVs are 'admin owned' objects, and we've treated Volumes as 'admin owned' in the past also. What are the use cases where untrusted users use Volumes that we'd want to limit access to?

@liggitt (Member) commented Feb 8, 2017

> PVs are 'admin owned' objects, and we've treated Volumes as 'admin owned' in the past also. What are the use cases where untrusted users use Volumes that we'd want to limit access to?

The proposed API exposes the options in pod spec Volume definitions as well.

@childsb (Contributor) commented Feb 8, 2017

> The proposed API exposes the options in pod spec Volume definitions as well.

Where's the security boundary here? If I'm a user with a Volume specified in a pod, can't I make that volume a hostPath pointed at /etc and do all sorts of things?

I thought the assumption was that only Admin/System users are allowed to use Volumes directly.

@liggitt (Member) commented Feb 8, 2017

> Where's the security boundary here? If I'm a user with a Volume specified in a pod, can't I make that volume a hostPath pointed at /etc and do all sorts of things?
>
> I thought the assumption was that only Admin/System users are allowed to use Volumes directly.

PodSecurityPolicy allows gating which types of volumes are allowed in pod specs. It does not have controls in place for details of particular volume types (particular host paths, mount options, etc)

@childsb (Contributor) commented Feb 8, 2017

> PodSecurityPolicy allows gating which types of volumes are allowed. It does not have controls in place for details of particular volume types (particular host paths, mount options, etc)

OK, that makes sense. So enhancing PSP to include a whitelist of Volume properties seems like a good spot for this function instead of the CLI.

@eparis (Contributor) left a comment:

What is the security model here? Why are we going to this trouble? Why are we blocking a storage expert from setting the things they need?

If there is actually a security issue here, I think it is better fixed by differentiating users and storage admins. If the person who bought the gigantic, expensive storage array isn't allowed to use it the way they need to, I think we have failed.

My opinion would be to just reject any pod definition with a mountOption by default. Allow any mountOption on any PV. If we later need to add a flag to allow pod definitions with mountOptions, so be it.

But to think that we could possibly understand all of the potential mount options, all of their implications, and keep up with the ever-changing state of the world is a bit bold. My laptop, with no fancy enterprise storage of any kind, has 50 unique mount options right now.

I'm convinced of the value of mountOptions. I'm not convinced of the value of the proposed restrictions.


## Goal

Enable Kubernetes users to specify mount options with mountable volumes
A Contributor commented:

To me, this should not be the goal at all. We should be allowing kubernetes storage admins to specify mount options. I actually object to users having anything to do with it.

@gnufied (Member Author) replied:

yeah, I will change the wording there.


We currently support network filesystems: NFS, Glusterfs, Ceph FS, SMB (Azure file), Quobyte, and local filesystems such as ext[3|4] and XFS.

Mount-time options that are operationally important and have no security implications should be supported. Examples are NFS's TCP mode, versions, lock mode, caching mode; Glusterfs's caching mode; SMB's version, locking, id mapping; and more.
A Contributor commented:

What is the security model we are working with when determining whether something has security implications? Who are we trying to protect from what?

A commenter wrote:

I think the definition of 'security implications' is going to be context/site dependent. This makes implementing this in a way that is safe for everybody difficult. Certainly I'd like to see the threat models being considered.




2. As suggested by @saad change mount options to key/value pairs such as:
A Contributor commented:

What about things like noatime or nosuid that are not key/value pairs?

@gnufied (Member Author) replied:

We are not going to do key/value pairs, @eparis. That was briefly considered and rejected. As I update the design, I will delete it.

@eparis (Contributor) commented Feb 8, 2017

I'm also not sure how any of these take into account mount options that aren't static. What if I need to mount with timeout=0 or timeout=10? Which of these proposals can handle those?

@liggitt (Member) commented Feb 8, 2017

> My opinion would be to just reject any pod definition with a mountOption by default. Allow any mountOption on any PV.

Starting with only allowing the options in PVs (blocking in pod specs for now via validation or something) and not bothering with restricting what can be specified in a PV makes sense to me.

@jheiss commented Feb 8, 2017

> PodSecurityPolicy allows gating which types of volumes are allowed in pod specs. It does not have controls in place for details of particular volume types (particular host paths, mount options, etc)

Is an enhancement along those lines tracked somewhere that you know of? We're interested in being able to allow a user to bind-mount certain host paths but reject everything else. Wikimedia has an admission controller, but it is a bit of a blunt instrument. Integrating this with PSPs would be great.

@gnufied (Member Author) commented Feb 8, 2017

@jheiss nope, AFAIK there is no such enhancement in progress.

@eparis whichever way we choose to validate, we weren't planning on validating values. The plan is to just validate the keys, so if the timeout key is allowed, the k8s admin can configure any value for it.

@gnufied (Member Author) commented Feb 8, 2017

@liggitt @eparis I am fine with not allowing mount options in pod volumes. We can trigger an event if a user uses that option, and in that case the pod will be left in ContainerCreating state or something similar.

@liggitt (Member) commented Feb 8, 2017

> I am fine with not allowing mount options in pod volumes. We can trigger an event if a user uses that option

Let's prevent it in the API in the first place; we don't want latent data persisted in the API that could spring to life in later releases.

@saad-ali (Member) commented Feb 8, 2017

> My opinion would be to just reject any pod definition with a mountOption by default. Allow any mountOption on any PV.
>
> Starting with only allowing the options in PVs (blocking in pod specs for now via validation or something) and not bothering with restricting what can be specified in a PV makes sense to me.

The security concern is exposing powerful options directly to users via the pod spec. Like you said, we could simply not allow mountOptions in the pod spec (only in PV objects). My hesitation is that this could result in a poor user experience: PersistentVolumeSource and VolumeSource have so far been identical in behavior (and share API components). If we suddenly say they behave differently in this case, that could cause confusion.

> I am fine with not allowing mount options in pod volumes. We can trigger an event if a user uses that option
>
> Let's prevent it in the API in the first place; we don't want latent data persisted in the API that could spring to life in later releases.

PersistentVolumeSource and VolumeSource share the volume source types where mountOptions would be defined, so I don't think it would be possible to block in the API definition.

@eparis (Contributor) commented Feb 9, 2017

For discussion's sake, can we explain a mount option that we consider to be 'unsafe' for a user to set, even in a pod spec?

@liggitt (Member) commented Feb 9, 2017

> PersistentVolumeSource and VolumeSource share the volume source types where mountOptions would be defined, so I don't think it would be possible to block in the API definition.

It could be done via validation (validateVolumeSource does pod-spec-level validation, ValidatePersistentVolume does PV-level validation). I agree it's not terrific, but I'd prefer it to prematurely adding a lot of complexity around policy for mount options at the pod level if we're not even sure we'll need them.
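A sketch of that validation split, using the function names mentioned above (the bodies here are illustrative, not the real kubernetes/kubernetes implementations):

```go
package main

import "fmt"

// Simplified stand-in for the shared volume source type.
type VolumeSource struct {
	MountOptions []string
}

// validateVolumeSource does pod-spec-level validation: reject mount options.
func validateVolumeSource(src VolumeSource) error {
	if len(src.MountOptions) > 0 {
		return fmt.Errorf("mountOptions are not allowed in pod spec volumes")
	}
	return nil
}

// ValidatePersistentVolume does PV-level validation: mount options are
// permitted, since PVs are cluster-admin owned.
func ValidatePersistentVolume(src VolumeSource) error {
	return nil
}

func main() {
	v := VolumeSource{MountOptions: []string{"nolock"}}
	fmt.Println(validateVolumeSource(v))     // rejected in a pod spec
	fmt.Println(ValidatePersistentVolume(v)) // allowed in a PV
}
```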

@lpabon commented Feb 9, 2017

@gnufied I like the mountOptions being set at the StorageClass (and possibly the PV) because these settings are done by the administrator, but I am concerned about the settings being allowed in the Pod Spec. Is there a use case where the user would want to set their own mount options and override what the administrator had set up?

@eparis (Contributor) commented Feb 9, 2017

@lpabon if we are talking about the pod spec, we are not talking about 'overriding' anything an administrator set up. The pod spec has the option to inline anything which could be in a PV; no PV needed. For example, entirely inside the pod spec you can define an NFS volume, setting the export and the server and everything.

While I completely understand why we control the use of inline volume specs, I'm still fuzzy on why we are trying to put controls on the mount options altogether. Without understanding the risks of mount options, it's hard to see how to properly handle the PV vs. inline pod definition. If I let a user set the NFS server and export arbitrarily, what is the extra risk of allowing them to set mountOptions?

How are mountOptions dangerous? Can we find some specifics?

@wongma7 (Contributor) commented Feb 9, 2017

If it's decided mount options can be dangerous and that admin control over pod spec mount options is needed, I agree it should be done via PSPs, not another new security whitelist mechanism that admins will have to figure out. As already suggested, we could start by disabling pod spec mount options altogether via validation, then implement PSP whitelists later if the need arises.

Even if it's just temporary, a hardcoded blacklist/whitelist would block not only pod-spec mount options but also admin-created PVs, which I think would be too annoying for admins who know what they're doing.

@eparis (Contributor) commented Feb 9, 2017

@matchstick I know you have a strong Linux storage background. Can you come up with an example of a mount option that we could consider 'dangerous'? I'm having difficulty finding the $problem that we are solving with all this white/black list stuff. I want something concrete to discuss.

@liggitt (Member) commented Feb 16, 2017

cc @pwittrock @ymqytw for adding another field to a union type (name + mountOptions + one-of(volume sources))

@gnufied (Member Author) commented Feb 16, 2017

@eparis @saad-ali I have updated the proposal with the final(?) conclusions so far:

  1. Use PSP for enabling/disabling mount options in inline volume specs.
  2. Allow plugins to define whether they want to support mount options.
  3. Change mountOption from a string to a list of strings.

PTAL

@thockin (Member) commented Feb 16, 2017 via email

@gnufied (Member Author) commented Feb 16, 2017

@thockin Initialization of the VolumePlugin interface (https://github.com/kubernetes/kubernetes/blob/master/pkg/volume/plugins.go#L65) via ProbeVolumePlugins doesn't involve any remote API call for the plugins we ship with Kubernetes. Do you think this could change in the future?

What other options do we have? The alternative is that we simply ignore mount options on volume plugins that don't support them. Maybe there is some other option that I am missing...
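A sketch of the per-plugin capability approach being discussed (simplified; the real VolumePlugin interface is much larger, and while Kubernetes later did add a SupportsMountOption() method, this stand-in is not that code):

```go
package main

import "fmt"

// Simplified stand-in for the volume plugin interface.
type VolumePlugin interface {
	GetPluginName() string
	SupportsMountOption() bool
}

type nfsPlugin struct{}

func (p *nfsPlugin) GetPluginName() string     { return "kubernetes.io/nfs" }
func (p *nfsPlugin) SupportsMountOption() bool { return true }

type hostPathPlugin struct{}

func (p *hostPathPlugin) GetPluginName() string     { return "kubernetes.io/host-path" }
func (p *hostPathPlugin) SupportsMountOption() bool { return false }

func main() {
	for _, p := range []VolumePlugin{&nfsPlugin{}, &hostPathPlugin{}} {
		fmt.Printf("%s supports mount options: %v\n", p.GetPluginName(), p.SupportsMountOption())
	}
}
```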

@gnufied (Member Author) commented Feb 21, 2017

I have talked to @eparis, @saad-ali, and @childsb, and we all agreed that after moving mountOptions one level up in the updated proposal, it is no longer necessary to support mountOptions in inline volume specs, since persistent volume specs and inline volume specs are distinct API resources.

This removes the need for PSPs etc., and I have updated the design proposal to reflect that.

```yaml
  restuser: "admin"
  secretNamespace: "default"
  secretName: "heketi-secret"
  mountOption: ["volume-name=foobar"]
```
A Member commented:

parameters is map[string]string, not map[string][]string

A Member commented:

This is the first thing that takes a StorageClass parameter and uses it to fill in a field in the PV, right? That's different from all other parameters, which are arbitrary and volume-plugin-defined. If we want fields in the StorageClass that map to fields in the PV, that seems like it should be in a different field than parameters.

@gnufied (Member Author) replied:

Good catch about map[string]string vs. map[string][]string. I think this will become a non-issue if we move this field to its own place.

There are other PRs, such as kubernetes/kubernetes#40805, which do something similar.

I am likewise somewhat hesitant to piggyback on the parameters field, since it hinders discoverability of the API (it's just a bag of key/value pairs). cc @childsb
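For context, the type constraint behind this exchange (a simplified stand-in; the real StorageClass type lives in the storage API group): parameters is a flat string map, so a list-valued option could only be stored there encoded as a string.

```go
package main

import (
	"fmt"
	"strings"
)

// Simplified shape of the StorageClass object under discussion.
type StorageClass struct {
	Provisioner string
	Parameters  map[string]string // flat string-to-string map
}

func main() {
	sc := StorageClass{
		Provisioner: "kubernetes.io/glusterfs",
		Parameters: map[string]string{
			// a list flattened to CSV, since map[string][]string is not an option
			"mountOption": strings.Join([]string{"volume-name=foobar"}, ","),
		},
	}
	fmt.Println(sc.Parameters["mountOption"])
}
```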

A Member commented:

Agree - this feels like an abuse of parameters, which are promised to be opaque.

A Member commented:

+1 for separate attribute

@gnufied (Member Author) commented Feb 23, 2017

The mountOptions in SCs are going to be phase 2 of this implementation, until we decide where the field should appear. I am going to update the design to reflect this decision.

```yaml
accessModes:
  - ReadWriteOnce
persistentVolumeReclaimPolicy: Recycle
mountOption: ["hard","nolock","nfsvers=3"]
```
A Member commented:

mountOptions?

A Member replied:

yes

A Member commented:

Shouldn't it be an alpha annotation first?

@childsb (Contributor) commented Feb 22, 2017

Talking with @gnufied about this offline yesterday, I don't think the use cases for mountOptions in StorageClass are large enough to warrant adding the function or a field now, and we should nix it from the proposal. It would be easier to solve mount options in StorageClass when someone needs it and has a good reason.

@saad-ali (Member) commented:
So what we've agreed on:

  1. mountOptions are limited to PVs (that means we don't have to do any validation of options, since the PV type should only be accessible to cluster admins, and we trust the cluster admin).
  2. This feature is targeting beta for 1.6, so instead of adding a first-class field, we will start with an annotation on the PV object.
  3. Error handling should be explicit so that users are aware when we are ignoring their request (no silent ignoring).

Will LGTM and merge once @gnufied gets a chance to update the proposal to reflect that.

Implementation in progress here: kubernetes/kubernetes#41906
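For reference, the agreed beta-annotation form on a PV might look like the following (a sketch: the annotation key shown follows the volume.beta.kubernetes.io convention and is an assumption here, and the field values are taken from examples in this thread):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0001
  annotations:
    # beta annotation carrying the mount options, comma-separated as in the
    # examples above (key name assumed from the beta naming convention)
    volume.beta.kubernetes.io/mount-options: "hard,nolock,nfsvers=3"
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  nfs:
    path: /tmp
    server: 172.17.0.2
```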

@gnufied (Member Author) commented Feb 23, 2017

@saad-ali I updated the PR to reflect all of the latest changes. I have also dropped StorageClasses for now. We can amend the design when we need it.

@saad-ali self-assigned this on Feb 23, 2017
@saad-ali (Member):
/lgtm
/approve

@k8s-ci-robot added the lgtm label ("Looks good to me", indicates that a PR is ready to be merged) on Feb 23, 2017
@saad-ali (Member):
Merging this; if there are any concerns we can amend the design.

@saad-ali merged commit cb54588 into kubernetes:master on Feb 23, 2017
ruebenramirez pushed a commit to ruebenramirez/community that referenced this pull request Apr 22, 2017
shyamjvs pushed a commit to shyamjvs/community that referenced this pull request Sep 22, 2017
MadhavJivrajani pushed a commit to MadhavJivrajani/community that referenced this pull request Nov 30, 2021
danehans pushed a commit to danehans/community that referenced this pull request Jul 18, 2023
Labels
cncf-cla: yes (indicates the PR's author has signed the CNCF CLA) · lgtm ("Looks good to me", indicates that a PR is ready to be merged)