
use schedule instead of VolumeReplicationClass #182

Conversation

@raghavendrabhat (Collaborator):

The VolumeReplicationClass field in VolumeReplicationGroup is
nothing but a string. Instead of that, use a schedule, which the
VolumeReplicationGroup then combines with the storage driver of
each PVC (obtained from its StorageClass) to determine the
appropriate VolumeReplicationClass to use.

HUB/DRPC (Previously AVR):

  • Modify DRPolicy to include a schedule along with the DR prepared
    clusters.
  • Make DRPC (previously AVR) refer to the DRPolicy schedule while
    creating the VolumeReplicationGroup, and use that as the
    VolumeReplicationGroup's schedule.

Managed Cluster:

  • While reconciling the VolumeReplicationGroup, for each PVC
    obtained via label selection, do the following (a Go sketch of
    this selection appears after the description below):
    • storagedriver = StorageDriver(StorageClass(pvc))
    • Get all the VolumeReplicationClasses in the cluster
    • For each VolumeReplicationClass (chosen via label selector):
      if (replicationdriver == storagedriver and
          replicationSchedule == vrg.Spec.Schedule)
        chosenReplicationClass = VolumeReplicationClass

Signed-off-by: Raghavendra M <raghavendra@redhat.com>
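A minimal Go sketch of the managed-cluster selection described above, written as a standalone helper rather than the controller's actual method. The function name, the "schedule" parameter key on the VolumeReplicationClass, and the csi-addons volrep import path are assumptions for illustration; the VRG controller would call something like this once per label-selected PVC, passing vrg.Spec.Schedule.

```go
package controllers

import (
	"context"
	"fmt"

	volrep "github.com/csi-addons/volume-replication-operator/api/v1alpha1"
	corev1 "k8s.io/api/core/v1"
	storagev1 "k8s.io/api/storage/v1"
	"k8s.io/apimachinery/pkg/types"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// selectVolumeReplicationClass sketches the selection for a single PVC:
// resolve the PVC's storage driver from its StorageClass, list the
// VolumeReplicationClasses, and pick one whose provisioner and schedule match.
func selectVolumeReplicationClass(ctx context.Context, c client.Client,
	pvc *corev1.PersistentVolumeClaim, schedule string,
) (*volrep.VolumeReplicationClass, error) {
	if pvc.Spec.StorageClassName == nil {
		return nil, fmt.Errorf("PVC %s has no StorageClass", pvc.GetName())
	}

	// storagedriver = StorageDriver(StorageClass(pvc))
	sc := &storagev1.StorageClass{}
	if err := c.Get(ctx, types.NamespacedName{Name: *pvc.Spec.StorageClassName}, sc); err != nil {
		return nil, fmt.Errorf("failed to get StorageClass for PVC %s: %w", pvc.GetName(), err)
	}

	// Get all the VolumeReplicationClasses in the cluster (a label selector
	// could be passed as a client.ListOption here, as the description notes).
	vrcList := &volrep.VolumeReplicationClassList{}
	if err := c.List(ctx, vrcList); err != nil {
		return nil, fmt.Errorf("failed to list VolumeReplicationClasses: %w", err)
	}

	// Choose the class whose provisioner matches the PVC's storage driver and
	// whose schedule parameter matches the VRG's schedule.
	for idx := range vrcList.Items {
		vrc := &vrcList.Items[idx]
		if vrc.Spec.Provisioner == sc.Provisioner &&
			vrc.Spec.Parameters["schedule"] == schedule {
			return vrc, nil
		}
	}

	return nil, fmt.Errorf("no VolumeReplicationClass matches driver %q and schedule %q",
		sc.Provisioner, schedule)
}
```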

@raghavendrabhat force-pushed the main-drpc-schedule-replication-class-9 branch 5 times, most recently from 1801080 to fe2d571 on August 5, 2021 at 15:49
@ShyamsundarR (Member) left a comment:

Other than the schedule discussion, these are mostly nits. If we can settle the schedule format and update the rook setup scripts for VolRepClass, we can merge this.

api/v1alpha1/volumereplicationgroup_types.go (outdated review thread, resolved)
config/samples/ramendr_v1alpha1_drpolicy.yaml (outdated review thread, resolved)
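The two resolved threads above touch the API types and the DRPolicy sample. As a rough illustration of the schedule plumbing this change describes, a hedged sketch of the Go types follows; the field names, JSON tags, and the cluster-list shape are assumptions, and the final schedule format was still being settled in this review.

```go
package v1alpha1

import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

// DRPolicySpec (sketch): the DR prepared clusters plus the replication
// schedule that DRPC copies into each VolumeReplicationGroup it creates.
type DRPolicySpec struct {
	// DRClusterSet lists the DR prepared clusters (shape assumed here).
	DRClusterSet []string `json:"drClusterSet,omitempty"`

	// Schedule is the replication schedule; its exact format (cron vs. an
	// interval such as "1m") was still under discussion in this review.
	Schedule string `json:"schedule,omitempty"`
}

// VolumeReplicationGroupSpec (sketch): the VolumeReplicationClass string is
// replaced by the schedule, which the controller uses together with each
// PVC's storage driver to pick a VolumeReplicationClass.
type VolumeReplicationGroupSpec struct {
	// PVCSelector selects the PVCs protected by this VRG.
	PVCSelector metav1.LabelSelector `json:"pvcSelector"`

	// Schedule is copied from the DRPolicy by DRPC on the hub.
	Schedule string `json:"schedule,omitempty"`
}
```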
@@ -243,6 +244,18 @@ func (r *DRPlacementControlReconciler) reconcileDRPCInstance(ctx context.Context
return ctrl.Result{}, err
}

// Currently validation of schedule in DRPolicy is done here. When
Member:

Ack! @hatfieldbrian tagging you here for consideration once this and the initial DRPolicy reconciler are merged.

Collaborator:

The DRPolicy reconciler is available in pull request #200. Issue #201 tracks moving schedule validation to the DRPolicy reconciler.
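Since issue #201 tracks moving the validation, here is a minimal sketch of what the DRPC-side check might look like, assuming purely for illustration an interval-style schedule such as "1m", "2h", or "3d"; the function name and format are not from the merged code.

```go
package controllers

import (
	"fmt"
	"regexp"
)

// scheduleFormat assumes an interval-style schedule: a positive integer
// followed by a unit of minutes, hours, or days (for example "5m" or "1d").
var scheduleFormat = regexp.MustCompile(`^\d+[mhd]$`)

// validateSchedule sketches the DRPolicy schedule validation currently done
// in the DRPC reconciler, pending its move to the DRPolicy reconciler.
func validateSchedule(schedule string) error {
	if schedule == "" {
		return fmt.Errorf("DRPolicy schedule is empty")
	}

	if !scheduleFormat.MatchString(schedule) {
		return fmt.Errorf("DRPolicy schedule %q does not match <number><m|h|d>", schedule)
	}

	return nil
}
```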

scName := pvc.Spec.StorageClassName

storageClass := &storagev1.StorageClass{}
if err := v.reconciler.Get(v.ctx, types.NamespacedName{Name: *scName}, storageClass); err != nil {
Member:

(nit) A minor optimization is possible here: most often all PVCs would point to the same SC, so if we stash away the SCs we fetched, we can avoid fetching the same one again for the next PVC, saving a call to the API server. (For later, just noting it for now.)

Collaborator (Author):

ACK.
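A sketch of the stash-the-StorageClass idea noted above, assuming a per-reconcile map keyed by StorageClass name; the helper name and cache placement are illustrative.

```go
package controllers

import (
	"context"
	"fmt"

	storagev1 "k8s.io/api/storage/v1"
	"k8s.io/apimachinery/pkg/types"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// getStorageClass returns a cached StorageClass when a previous PVC in the
// same reconcile already referenced it, avoiding a repeat API server call.
func getStorageClass(ctx context.Context, c client.Client,
	cache map[string]*storagev1.StorageClass, name string,
) (*storagev1.StorageClass, error) {
	if sc, ok := cache[name]; ok {
		return sc, nil
	}

	sc := &storagev1.StorageClass{}
	if err := c.Get(ctx, types.NamespacedName{Name: name}, sc); err != nil {
		return nil, fmt.Errorf("failed to get StorageClass %s: %w", name, err)
	}

	cache[name] = sc

	return sc, nil
}
```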

controllers/volumereplicationgroup_controller.go (outdated review thread, resolved)
@raghavendrabhat force-pushed the main-drpc-schedule-replication-class-9 branch 3 times, most recently from b0a9027 to 9e43d82 on August 6, 2021 at 22:49
}
}

if pvc == nil {
Member:

This statement should never be true. A possible fix would be to change line 1280 to:
var pvc *corev1.PersistentVolumeClaim
or change line 1293 to if *pvc == (corev1.PersistentVolumeClaim{}) {

Collaborator (Author):

ACK
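A self-contained illustration of the first suggested fix; the loop and variable names are reconstructed from the snippet above rather than copied from the controller.

```go
package controllers

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// findPVC declares the pointer without initializing it, so nil genuinely
// means "not found" and the `if pvc == nil` check can fire.
func findPVC(pvcList *corev1.PersistentVolumeClaimList, name string) (*corev1.PersistentVolumeClaim, error) {
	var pvc *corev1.PersistentVolumeClaim

	for idx := range pvcList.Items {
		if pvcList.Items[idx].Name == name {
			pvc = &pvcList.Items[idx]

			break
		}
	}

	if pvc == nil {
		return nil, fmt.Errorf("PVC %s not found", name)
	}

	return pvc, nil
}
```

The alternative zero-value comparison mentioned in the review would need extra care, since PersistentVolumeClaim embeds maps and slices that make the struct awkward to compare directly; the nil-pointer form keeps the check simple.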

@raghavendrabhat force-pushed the main-drpc-schedule-replication-class-9 branch from 9e43d82 to 46cd07e on August 10, 2021 at 17:33
api/v1alpha1/drpolicy_types.go (outdated review thread, resolved)
hack/ocm-minikube-ramen.sh (outdated review thread, resolved)
@raghavendrabhat force-pushed the main-drpc-schedule-replication-class-9 branch from 46cd07e to 0aa1fc0 on August 10, 2021 at 19:32