
OCPBUGS-37617: STOR-1593: Rebase to upstream v5.0.2 for 4.17 #104

Merged: 9 commits, Aug 7, 2024
Changes from all commits
18 changes: 0 additions & 18 deletions CHANGELOG/CHANGELOG-4.0.md
@@ -1,21 +1,3 @@
-# Release notes for v4.0.1
-
-[Documentation](https://kubernetes-csi.github.io)
-
-
-
-## Dependencies
-
-### Added
-_Nothing has changed._
-
-### Changed
-- github.com/golang/protobuf: [v1.5.3 → v1.5.4](https://github.com/golang/protobuf/compare/v1.5.3...v1.5.4)
-- google.golang.org/protobuf: v1.31.0 → v1.33.0
-
-### Removed
-_Nothing has changed._
-
# Release notes for v4.0.0

[Documentation](https://kubernetes-csi.github.io)
333 changes: 333 additions & 0 deletions CHANGELOG/CHANGELOG-5.0.md

Large diffs are not rendered by default.

12 changes: 9 additions & 3 deletions README.md
@@ -3,7 +3,8 @@
The external-provisioner is a sidecar container that dynamically provisions volumes by calling `CreateVolume` and `DeleteVolume` functions of CSI drivers. It is necessary because internal persistent volume controller running in Kubernetes controller-manager does not have any direct interfaces to CSI drivers.

## Overview
-The external-provisioner is an external controller that monitors `PersistentVolumeClaim` objects created by user and creates/deletes volumes for them. Full design can be found at Kubernetes proposal at [container-storage-interface.md](https://github.com/kubernetes/design-proposals-archive/blob/main/storage/container-storage-interface.md)
+The external-provisioner is an external controller that monitors `PersistentVolumeClaim` objects created by user and creates/deletes volumes for them.
+The [Kubernetes Container Storage Interface (CSI) Documentation](https://kubernetes-csi.github.io/docs/) explains how to develop, deploy, and test a Container Storage Interface (CSI) driver on Kubernetes.

## Compatibility

@@ -26,7 +27,7 @@ Following table reflects the head of this branch.
| CSIStorageCapacity | GA | On | Publish [capacity information](https://kubernetes.io/docs/concepts/storage/volumes/#storage-capacity) for the Kubernetes scheduler. | No |
| ReadWriteOncePod | Beta | On | [Single pod access mode for PersistentVolumes](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes). | No |
| CSINodeExpandSecret | Beta | On | [CSI Node expansion secret](https://github.com/kubernetes/enhancements/tree/master/keps/sig-storage/3107-csi-nodeexpandsecret) | No |
-| HonorPVReclaimPolicy| Alpha |Off | [Honor the PV reclaim policy](https://github.com/kubernetes/enhancements/tree/master/keps/sig-storage/2644-honor-pv-reclaim-policy) | No |
+| HonorPVReclaimPolicy| Beta | On | [Honor the PV reclaim policy](https://github.com/kubernetes/enhancements/tree/master/keps/sig-storage/2644-honor-pv-reclaim-policy) | No |
| PreventVolumeModeConversion | Beta |On | [Prevent unauthorized conversion of source volume mode](https://github.com/kubernetes/enhancements/tree/master/keps/sig-storage/3141-prevent-volume-mode-conversion) | `--prevent-volume-mode-conversion` (No in-tree feature gate) |
| CrossNamespaceVolumeDataSource | Alpha |Off | [Cross-namespace volume data source](https://github.com/kubernetes/enhancements/tree/master/keps/sig-storage/3294-provision-volumes-from-cross-namespace-snapshots) | `--feature-gates=CrossNamespaceVolumeDataSource=true` |
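Features in the last column that are gated by sidecar flags rather than in-tree feature gates are switched on the sidecar's command line. A minimal sketch, assuming a typical sidecar pod spec (container name, image tag, and `ADDRESS` wiring are placeholders, not taken from this PR):

```yaml
# Hypothetical excerpt of a CSI driver Deployment; only the two flags from
# the table above are the point here, everything else is placeholder.
containers:
  - name: csi-provisioner
    image: registry.k8s.io/sig-storage/csi-provisioner:v5.0.2
    args:
      - --csi-address=$(ADDRESS)
      - --prevent-volume-mode-conversion=true                # PreventVolumeModeConversion
      - --feature-gates=CrossNamespaceVolumeDataSource=true  # alpha, off by default
```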

@@ -138,7 +139,7 @@ protocol](https://github.com/kubernetes/design-proposals-archive/blob/main/stora
The [design document](./doc/design.md) explains this in more detail.

### Topology support
-When `Topology` feature is enabled and the driver specifies `VOLUME_ACCESSIBILITY_CONSTRAINTS` in its plugin capabilities, external-provisioner prepares `CreateVolumeRequest.AccessibilityRequirements` while calling `Controller.CreateVolume`. The driver has to consider these topology constraints while creating the volume. Below table shows how these `AccessibilityRequirements` are prepared:
+When `Topology` feature is enabled* and the driver specifies `VOLUME_ACCESSIBILITY_CONSTRAINTS` in its plugin capabilities, external-provisioner prepares `CreateVolumeRequest.AccessibilityRequirements` while calling `Controller.CreateVolume`. The driver has to consider these topology constraints while creating the volume. Below table shows how these `AccessibilityRequirements` are prepared:

[Delayed binding](https://kubernetes.io/docs/concepts/storage/storage-classes/#volume-binding-mode) | Strict topology | [Allowed topologies](https://kubernetes.io/docs/concepts/storage/storage-classes/#allowed-topologies) | Immediate Topology | [Resulting accessibility requirements](https://github.com/container-storage-interface/spec/blob/master/spec.md#createvolume)
:---: |:---:|:---:|:---:|:---|
@@ -149,6 +150,11 @@ No | Irrelevant | Yes | Irrelevant | `Requisite` = Allowed topologies<br>`Prefer
No | Irrelevant | No | Yes | `Requisite` = Aggregated cluster topology<br>`Preferred` = `Requisite` with randomly selected node topology as first element
No | Irrelevant | No | No | `Requisite` and `Preferred` both nil

+*) `Topology` feature gate is enabled by default since v5.0.
+<!-- TODO: remove the feature gate in the next release - remove the whole column in the table above. -->
+
+When enabling topology support in a CSI driver that had it disabled, please make sure the topology is first enabled in the driver's node DaemonSet and topology labels are populated on all nodes. The topology can be then updated in the driver's Deployment and its external-provisioner sidecar.
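The "allowed topologies" rows of the table above can be sketched in plain Go. This is a simplified illustration of the ordering rule only, not the actual external-provisioner code; the `Topology` type and both function names are invented for the example:

```go
package main

import "fmt"

// Topology is a simplified stand-in for the CSI Topology message
// (segments such as topology.kubernetes.io/zone).
type Topology struct {
	Segments map[string]string
}

// equalTopo reports whether two topologies have identical segments.
func equalTopo(a, b Topology) bool {
	if len(a.Segments) != len(b.Segments) {
		return false
	}
	for k, v := range a.Segments {
		if b.Segments[k] != v {
			return false
		}
	}
	return true
}

// prepareRequirements sketches the "allowed topologies" rows above:
// Requisite = allowed topologies, Preferred = Requisite with the selected
// node's topology moved to the front (delayed binding case).
func prepareRequirements(allowed []Topology, selectedNode Topology) (requisite, preferred []Topology) {
	requisite = allowed
	for i, t := range allowed {
		if equalTopo(t, selectedNode) {
			preferred = append(preferred, allowed[i])
			preferred = append(preferred, allowed[:i]...)
			preferred = append(preferred, allowed[i+1:]...)
			return requisite, preferred
		}
	}
	// Selected node's topology not in the allowed list: keep order as-is.
	return requisite, allowed
}

func main() {
	zoneA := Topology{Segments: map[string]string{"topology.kubernetes.io/zone": "zone-a"}}
	zoneB := Topology{Segments: map[string]string{"topology.kubernetes.io/zone": "zone-b"}}
	_, preferred := prepareRequirements([]Topology{zoneA, zoneB}, zoneB)
	fmt.Println(preferred[0].Segments["topology.kubernetes.io/zone"]) // zone-b
}
```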

### Capacity support

The external-provisioner can be used to create CSIStorageCapacity
Expand Down
65 changes: 34 additions & 31 deletions cmd/csi-provisioner/csi-provisioner.go
@@ -52,9 +52,9 @@ import (
_ "k8s.io/component-base/metrics/prometheus/clientgo/leaderelection" // register leader election in the default legacy registry
_ "k8s.io/component-base/metrics/prometheus/workqueue" // register work queues in the default legacy registry
csitrans "k8s.io/csi-translation-lib"
-"k8s.io/klog/v2"
-"sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller"
-libmetrics "sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/metrics"
+klog "k8s.io/klog/v2"
+"sigs.k8s.io/sig-storage-lib-external-provisioner/v10/controller"
+libmetrics "sigs.k8s.io/sig-storage-lib-external-provisioner/v10/controller/metrics"

"github.com/kubernetes-csi/csi-lib-utils/leaderelection"
"github.com/kubernetes-csi/csi-lib-utils/metrics"
@@ -210,13 +210,13 @@ func main() {
metrics.WithSubsystem(metrics.SubsystemSidecar),
)

-grpcClient, err := ctrl.Connect(*csiEndpoint, metricsManager)
+grpcClient, err := ctrl.Connect(ctx, *csiEndpoint, metricsManager)
if err != nil {
klog.Error(err.Error())
os.Exit(1)
}

-err = ctrl.Probe(grpcClient, *operationTimeout)
+err = ctrl.Probe(ctx, grpcClient, *operationTimeout)
if err != nil {
klog.Error(err.Error())
os.Exit(1)
@@ -244,15 +244,15 @@ func main() {
// Will be provided via default gatherer.
metrics.WithProcessStartTime(false),
metrics.WithMigration())
-migratedGrpcClient, err := ctrl.Connect(*csiEndpoint, metricsManager)
+migratedGrpcClient, err := ctrl.Connect(ctx, *csiEndpoint, metricsManager)
if err != nil {
klog.Error(err.Error())
os.Exit(1)
}
grpcClient.Close()
grpcClient = migratedGrpcClient

-err = ctrl.Probe(grpcClient, *operationTimeout)
+err = ctrl.Probe(ctx, grpcClient, *operationTimeout)
if err != nil {
klog.Error(err.Error())
os.Exit(1)
@@ -553,34 +553,20 @@ func main() {
csiProvisioner = capacity.NewProvisionWrapper(csiProvisioner, capacityController)
}

-provisionController = controller.NewProvisionController(
-clientset,
-provisionerName,
-csiProvisioner,
-provisionerOptions...,
-)
-
-csiClaimController := ctrl.NewCloningProtectionController(
-clientset,
-claimLister,
-claimInformer,
-claimQueue,
-controllerCapabilities,
-)

-// Start HTTP server, regardless whether we are the leader or not.
if addr != "" {
// To collect metrics data from the metric handler itself, we
// let it register itself and then collect from that registry.
+// Start HTTP server, regardless whether we are the leader or not.
+// Register provisioner metrics manually to be able to add multiplexer in front of it
+m := libmetrics.New("controller")
reg := prometheus.NewRegistry()
reg.MustRegister([]prometheus.Collector{
-libmetrics.PersistentVolumeClaimProvisionTotal,
-libmetrics.PersistentVolumeClaimProvisionFailedTotal,
-libmetrics.PersistentVolumeClaimProvisionDurationSeconds,
-libmetrics.PersistentVolumeDeleteTotal,
-libmetrics.PersistentVolumeDeleteFailedTotal,
-libmetrics.PersistentVolumeDeleteDurationSeconds,
+m.PersistentVolumeClaimProvisionTotal,
+m.PersistentVolumeClaimProvisionFailedTotal,
+m.PersistentVolumeClaimProvisionDurationSeconds,
+m.PersistentVolumeDeleteTotal,
+m.PersistentVolumeDeleteFailedTotal,
+m.PersistentVolumeDeleteDurationSeconds,
}...)
+provisionerOptions = append(provisionerOptions, controller.MetricsInstance(m))
gatherers = append(gatherers, reg)

// This is similar to k8s.io/component-base/metrics HandlerWithReset
@@ -611,6 +597,23 @@
}()
}

+logger := klog.FromContext(ctx)
+provisionController = controller.NewProvisionController(
+logger,
+clientset,
+provisionerName,
+csiProvisioner,
+provisionerOptions...,
+)
+
+csiClaimController := ctrl.NewCloningProtectionController(
+clientset,
+claimLister,
+claimInformer,
+claimQueue,
+controllerCapabilities,
+)

run := func(ctx context.Context) {
factory.Start(ctx.Done())
if factoryForNamespace != nil {
2 changes: 1 addition & 1 deletion deploy/kubernetes/rbac.yaml
@@ -28,7 +28,7 @@ rules:
# verbs: ["get", "list"]
- apiGroups: [""]
resources: ["persistentvolumes"]
-verbs: ["get", "list", "watch", "create", "delete"]
+verbs: ["get", "list", "watch", "create", "patch", "delete"]
- apiGroups: [""]
resources: ["persistentvolumeclaims"]
verbs: ["get", "list", "watch", "update"]
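The added `patch` verb lines up with HonorPVReclaimPolicy moving to beta/on in this rebase: the provisioner patches a deletion-protection finalizer onto PersistentVolumes it manages, so its ClusterRole must allow `patch` on that resource. A sketch of the resulting rule (rule content is from the diff above; the explanatory comment is my reading, not text from this PR):

```yaml
# PVs: `patch` is needed so external-provisioner can add/remove the
# HonorPVReclaimPolicy finalizer, in addition to creating and deleting PVs.
- apiGroups: [""]
  resources: ["persistentvolumes"]
  verbs: ["get", "list", "watch", "create", "patch", "delete"]
```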