
Pass namespace annotations to driver when --extra-create-metadata is enabled #714

Closed
rdpsin opened this issue Mar 16, 2022 · 16 comments
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

rdpsin commented Mar 16, 2022

Currently, external-provisioner passes the PVC name, PVC namespace, and PV name to the CSI driver in CreateVolumeRequest.Parameters when --extra-create-metadata is enabled.

It would be useful to pass namespace annotations to the CSI driver as well. This can be very helpful to cluster admins, since a common design is to set boundaries/controls at the namespace level. For example, this would allow the CSI driver to attach namespace annotations as tags to the volume for cost allocation, backups, IAM policies, et cetera.

Design:

A new parameter is created, csi.storage.k8s.io/pvc/namespace/annotations, whose value is a string of comma-separated key=value pairs:

csi.storage.k8s.io/pvc/namespace/annotations: "foo=bar,baz=test"
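
For illustration, here is a minimal Go sketch of how a CSI driver could read the proposed parameter back into a map in its CreateVolume handler. The parameter key is the one proposed above; the helper function is hypothetical, and the simple split assumes annotation values contain no commas or equals signs:

    package main

    import (
    	"fmt"
    	"strings"
    )

    // Parameter key proposed in this issue (not an existing API).
    const namespaceAnnotationsParam = "csi.storage.k8s.io/pvc/namespace/annotations"

    // parseNamespaceAnnotations is a hypothetical helper that splits the
    // comma-separated key=value string the external-provisioner would put
    // into CreateVolumeRequest.Parameters under this proposal.
    func parseNamespaceAnnotations(params map[string]string) (map[string]string, error) {
    	raw, ok := params[namespaceAnnotationsParam]
    	if !ok || raw == "" {
    		return nil, nil
    	}
    	annotations := make(map[string]string)
    	for _, pair := range strings.Split(raw, ",") {
    		kv := strings.SplitN(pair, "=", 2)
    		if len(kv) != 2 {
    			return nil, fmt.Errorf("malformed annotation pair %q", pair)
    		}
    		annotations[kv[0]] = kv[1]
    	}
    	return annotations, nil
    }

    func main() {
    	// Keys external-provisioner passes today with --extra-create-metadata,
    	// plus the key proposed in this issue (values are examples).
    	params := map[string]string{
    		"csi.storage.k8s.io/pvc/name":      "my-claim",
    		"csi.storage.k8s.io/pvc/namespace": "tenant-a",
    		"csi.storage.k8s.io/pv/name":       "pvc-0123",
    		namespaceAnnotationsParam:          "foo=bar,baz=test",
    	}
    	annotations, err := parseNamespaceAnnotations(params)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(annotations) // map[baz:test foo:bar]
    }

A driver could then attach these key-value pairs as cloud tags on the created volume.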

Note that similar proposals have been put forward, but they involve passing PVC annotations to the CSI driver (see #86).

rdpsin (Author) commented Mar 16, 2022

cc: @xing-yang

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jun 14, 2022
@jingxu97 jingxu97 moved this to Need Attention in 1.25 Release Jun 20, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jul 14, 2022
rdpsin (Author) commented Jul 18, 2022

/remove-lifecycle rotten

@k8s-ci-robot k8s-ci-robot removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label Jul 18, 2022
@paolosalvatori

I fully support this request. Today you cannot specify metadata, and in particular tags, in a PVC so that they are passed down to a CSI driver. Tagging resources is helpful in a multitenant environment for policy tracking and cost charging. Today, the only way to tag managed resources dynamically created by a PVC on a cloud platform is the following:

  • Create a custom storage class and specify tags in the parameters:

        apiVersion: storage.k8s.io/v1
        kind: StorageClass
        metadata:
          name: contoso-managed-csi-premium
        provisioner: disk.csi.azure.com
        parameters:
          skuname: Premium_LRS
          tags: costcenter=1234,tenant=Contoso
        reclaimPolicy: Delete
        volumeBindingMode: WaitForFirstConsumer
        allowVolumeExpansion: true

  • Create a PVC based on the custom storage class:

        apiVersion: v1
        kind: PersistentVolumeClaim
        metadata:
          name: contoso-managed-csi-premium
        spec:
          accessModes:
          - ReadWriteOnce
          storageClassName: contoso-managed-csi-premium
          resources:
            requests:
              storage: 50Gi

  • Use the PVC in your deployment definition.

In a multi-tenant environment, this approach can lead to two kinds of problems:

  • Proliferation of custom storage classes, one for each tenant. If you share an AKS cluster across hundreds of tenants that each use PVs, and you want to use this technique to tag managed disks, you can quickly end up with hundreds of custom storage classes.
  • Storage classes are not namespaced resources, so this can lead to issues when trying to segregate or restrict access using Kubernetes RBAC or Azure AD integrated RBAC.

You can use a trick to avoid the proliferation of storage classes:

  • Create a custom storage class with the proper tags for a tenant.
  • Based on the custom storage class, create one or more PVCs, which dynamically create the related managed resources on the cloud platform.
  • Delete the custom storage class.

This solution is doable, but it's not very clean.

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Oct 25, 2022
@andyzhangx (Member)

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Oct 25, 2022
humblec (Contributor) commented Nov 10, 2022

@rdpsin imo, fetching namespace annotations shouldn't be in scope for the provisioner sidecar. The API objects the provisioner really cares about are the PVC and its bindings. Fetching namespace annotations and passing them as part of the CreateVolume request looks out of scope for the provisioner sidecar.

cc @jsafrane @msau42 @xing-yang

@andyzhangx (Member)

> @rdpsin imo, fetching namespace annotations shouldn't be in scope for the provisioner sidecar. The API objects the provisioner really cares about are the PVC and its bindings. Fetching namespace annotations and passing them as part of the CreateVolume request looks out of scope for the provisioner sidecar.
>
> cc @jsafrane @msau42 @xing-yang

csi-provisioner already supports resolving the values listed below; this time we want to provide more, e.g. pvc.annotations. This kind of templating is already supported for the csi.storage.k8s.io/provisioner-secret-name field by csi-provisioner; we just want to make this mechanism more generic (a simplified sketch of the templating follows the list).

  • ${pvc.metadata.name}
  • ${pvc.metadata.namespace}
  • ${pv.metadata.name}
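
For context, here is a simplified Go sketch of that kind of template resolution. It is illustrative only, not the actual csi-provisioner code, and the function and example names are assumptions:

    package main

    import (
    	"fmt"
    	"strings"
    )

    // resolveTemplate expands ${...} placeholders in the spirit of how
    // csi-provisioner resolves keys such as ${pvc.metadata.name} in
    // provisioner-secret-name. Simplified sketch: the real implementation
    // also validates the template and rejects unknown keys.
    func resolveTemplate(template, pvcName, pvcNamespace, pvName string) string {
    	return strings.NewReplacer(
    		"${pvc.metadata.name}", pvcName,
    		"${pvc.metadata.namespace}", pvcNamespace,
    		"${pv.metadata.name}", pvName,
    	).Replace(template)
    }

    func main() {
    	// E.g. a StorageClass parameter might set:
    	//   csi.storage.k8s.io/provisioner-secret-name: ${pvc.metadata.namespace}-${pvc.metadata.name}
    	fmt.Println(resolveTemplate("${pvc.metadata.namespace}-${pvc.metadata.name}",
    		"my-claim", "tenant-a", "pvc-0123"))
    	// Output: tenant-a-my-claim
    }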

humblec (Contributor) commented Nov 10, 2022

@andyzhangx looks like there is some confusion here: the issue description says "namespace" annotations are to be passed with CreateVolume:

> It would be useful to pass namespace annotations to the CSI driver as well. This can be very helpful to cluster admins, since a common design is to set boundaries/controls at the namespace level.

whereas, if I understand correctly, your comment here talks about PVC annotations: #714 (comment). Can you please clarify?

@andyzhangx (Member)

@humblec I meant PVC annotations, related to #86; getting the namespace's annotations should be out of scope since they are not directly related to the PVC.

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all PRs.

This bot triages PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:

  • Mark this PR as fresh with /remove-lifecycle stale
  • Close this PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Feb 8, 2023
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Mar 10, 2023
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

@k8s-ci-robot k8s-ci-robot closed this as not planned (Won't fix, can't repro, duplicate, stale) Apr 9, 2023
@k8s-ci-robot (Contributor)

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
