diff --git a/content/docs/replication/_index.md b/content/docs/replication/_index.md
index 2c45ef2807..d478396ea5 100644
--- a/content/docs/replication/_index.md
+++ b/content/docs/replication/_index.md
@@ -27,7 +27,7 @@ CSM for Replication provides the following capabilities:
| Active-Active (Metro) file volume replication | no | no | no | no | no |
| Create `PersistentVolume` objects in the cluster representing the replicated volume | yes | yes | yes | no | no |
| Create `DellCSIReplicationGroup` objects in the cluster | yes | yes | yes | no | no |
-| Failover & Reprotect applications using the replicated volumes | yes | yes | no | no | no |
+| Failover & Reprotect applications using the replicated volumes | yes | yes | yes | no | no |
| Online Volume Expansion for replicated volumes | yes | no | no | no | no |
| Provides a command line utility - [repctl](tools) for configuring & managing replication related resources across multiple clusters | yes | yes | yes | no | no |
{{}}
diff --git a/content/docs/replication/deployment/installation.md b/content/docs/replication/deployment/installation.md
index 6bbabeee29..005637fac7 100644
--- a/content/docs/replication/deployment/installation.md
+++ b/content/docs/replication/deployment/installation.md
@@ -75,9 +75,8 @@ The following CSI drivers support replication:
1. CSI driver for PowerMax
2. CSI driver for PowerStore
3. CSI driver for PowerScale
-4. CSI driver for Unity XT

-Please follow the steps outlined in [PowerMax](../powermax), [PowerStore](../powerstore), [PowerScale](../powerscale) or [Unity](../unity) pages during the driver installation.
+Please follow the steps outlined in the [PowerMax](../powermax), [PowerStore](../powerstore), or [PowerScale](../powerscale) pages during driver installation.

>Note: Please ensure that replication CRDs are installed in the clusters where you are installing the CSI drivers. These CRDs are generally installed as part of the CSM Replication controller installation process.

diff --git a/content/docs/replication/deployment/powerscale.md b/content/docs/replication/deployment/powerscale.md
index 1d8c61c44f..dda3c15b2e 100644
--- a/content/docs/replication/deployment/powerscale.md
+++ b/content/docs/replication/deployment/powerscale.md
@@ -130,7 +130,7 @@ Let's go through each parameter and what it means:
* `replication.storage.dell.com/rpo` is an acceptable amount of data, which is measured in units of time, that may be lost due to a failure.
> NOTE: Available RPO values "Five_Minutes", "Fifteen_Minutes", "Thirty_Minutes", "One_Hour", "Six_Hours", "Twelve_Hours", "One_Day"
* `replication.storage.dell.com/ignoreNamespaces`, if set to `true` PowerScale driver, it will ignore in what namespace volumes are created and put every volume created using this storage class into a single volume group.
-* `replication.storage.dell.com/volumeGroupPrefix` represents what string would be appended to the volume group name to differentiate them.
+* `replication.storage.dell.com/volumeGroupPrefix` represents what string would be appended to the volume group name to differentiate them. It is important not to use the same prefix for different Kubernetes clusters; otherwise, any action on a replication group in one cluster will impact the other.
> NOTE: To configure the VolumeGroupPrefix, the name format of \'\-\-\-\\' cannot be more than 63 characters.
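+As a quick sanity check, the prefixes can be compared across clusters before enabling replication. This is a minimal sketch only, assuming a hypothetical storage class named `isilon-replication` and kubeconfig contexts named `cluster-1` and `cluster-2`:
+
+```shell
+# The two outputs should differ; a shared prefix would let one cluster's
+# replication-group actions affect the other cluster's volume groups.
+kubectl --context cluster-1 get storageclass isilon-replication -o yaml | grep volumeGroupPrefix
+kubectl --context cluster-2 get storageclass isilon-replication -o yaml | grep volumeGroupPrefix
+```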
@@ -198,52 +198,63 @@ The CSI PowerScale driver will create a volume on the array, add it to a VolumeGroup and configure replication
using the parameters provided in the replication enabled Storage Class.

### SyncIQ Policy Architecture
-When creating `DellCSIReplicationGroup` (RG) objects on the Kubernetes cluster(s) used for replication, matching SyncIQ policies are created on *both* the source and target PowerScale storage arrays.
+When creating `DellCSIReplicationGroup` (RG) objects on the Kubernetes cluster(s) used for replication, a SyncIQ policy to facilitate this replication is created *only* on the source PowerScale storage array.

-This is done so that the RG objects can communicate with a relative 'local' and 'remote' set of policies to query for current synchronization status and perform replication actions; on the *source* Kubernetes cluster's RG, the *source* PowerScale array is seen as 'local' and the *target* PowerScale array is seen as remote. The inverse relationship exists on the *target* Kubernetes cluster's RG, which sees the *target* PowerScale array as 'local' and the *source* PowerScale array as 'remote'.
+This single SyncIQ policy on the source storage array, together with its matching Local Target policy on the target storage array, provides the information the RGs use to determine their status. Upon creation, the SyncIQ policy is set to a schedule of `When source is modified`, and it is `Enabled` when the RG is created. The directory being replicated is *read-write accessible* on the source storage array and is restricted to *read-only* on the target.

-Upon creation, both SyncIQ policies (source and target) are set to a schedule of `When source is modified`. The source PowerScale array's SyncIQ policy is `Enabled` when the RG is created, and the target array's policy is `Disabled`. Similarly, the directory that is being replicated is *read-write accessible* on the source storage array, and is restricted to *read-only* on the target.
+### Performing Failover/Failback/Reprotect on PowerScale

-### Performing Failover on PowerScale
+PowerScale does not natively support one-step Failover, Failback, or Reprotect operations, so CSM Replication performs them as a series of steps. When any of these operations is triggered, either through `repctl` or by editing the RG, the steps below are performed on the PowerScale storage arrays. Minimal sketches of requesting these operations from the Kubernetes side appear after the Failback and Reprotect steps.
+
+#### Failover - Halt Replication and Allow Writes on Target

Steps for performing Failover can be found in the Tools page under [Executing Actions.](https://dell.github.io/csm-docs/docs/replication/tools/#executing-actions) There are some PowerScale-specific considerations to keep in mind:
-- Failover on PowerScale does NOT halt writes on the source side. It is recommended that the storage administrator or end user manually stop writes to ensure no data is lost on the source side in the event of future failback.
-- In the case of unplanned failover, the source-side SyncIQ policy will be left enabled and set to its previously defined `When source is modified` sync schedule. It is recommended for storage admins to manually disable the source-side SyncIQ policy when bringing the failed-over source array back online.
+- Failover on PowerScale does NOT halt writes on the source side. It is recommended that the storage administrator or end user manually **stop writes** to ensure no data is lost on the source side in the event of a future failback.
+- In the case of unplanned failover, the SyncIQ policy on the source PowerScale array will be left enabled and set to its previously defined `When source is modified` sync schedule. Storage admins **must** manually disable this SyncIQ policy when bringing the failed-over source array back online, or unexpected behavior may occur.
+
+CSM Replication performs the following steps during a failover:

-### Performing Failback on PowerScale
+1. Syncing data from source to target one final time before transition. *(planned failover only)*
+2. Disabling the SyncIQ policy on the source PowerScale storage array. *(planned failover only)*
+3. Enabling writes on the target PowerScale array's Local Target policy.

-Failback operations are not presently supported for PowerScale. In the event of a failover, failback can be performed manually using the below methodologies.
#### Failback - Discard Target
-Performing failback and discarding changes made to the target is to simply resume synchronization from the source. The steps to perform this operation are as follows:
-1. Log in to the source PowerScale array. Navigate to the `Data Protection > SyncIQ` page and select the `Policies` tab.
-2. Edit the source-side SyncIQ policy's schedule from `When source is modified` to `Manual`.
-3. Log in to the target PowerScale array. Navigate to the `Data Protection > SyncIQ` page and select the `Local targets` tab.
-4. Perform `Actions > Disallow writes` on the target-side Local Target policy that matches the SyncIQ policy undergoing failback.
-5. Return to the source array. Enable the source-side SyncIQ policy. Edit its schedule from `Manual` to `When source is modified`. Set the time delay for synchronization as appropriate.
+Failing back and discarding changes made to the target simply resumes synchronization from the source. CSM Replication performs the following steps for this operation:
+
+1. Editing the schedule of the SyncIQ policy on the source PowerScale array from `When source is modified` to `Manual`.
+2. Performing `Actions > Disallow writes` on the target PowerScale array's Local Target policy that matches the SyncIQ policy undergoing failback.
+3. Editing the SyncIQ policy's schedule from `Manual` back to `When source is modified` and setting the time delay for synchronization as appropriate.
+4. Enabling the SyncIQ policy on the source PowerScale array.
+
#### Failback - Discard Source
-Information on the methodology for performing a failback while taking changes made to the original target can be found in relevant PowerScale SyncIQ documentation. The detailed steps are as follows:
-
-1. Log in to the source PowerScale array. Navigate to the `Data Protection > SyncIQ` page and select the `Policies` tab.
-2. Edit the source-side SyncIQ policy's schedule from `When source is modified` to `Manual`.
-3. Log in to the target PowerScale array. Navigate to the `Data Protection > SyncIQ` page and select the `Policies` tab.
-4. Delete the target-side SyncIQ policy that has a name matching the SyncIQ policy undergoing failback. This is necessary to prevent conflicts when running resync-prep in the next step.
-5. On the source PowerScale array, enable the SyncIQ policy that is undergoing failback. On this policy, perform `Actions > Resync-prep`. This will create a new SyncIQ policy on the target PowerScale array, matching the original SyncIQ policy with an appended *_mirror* to its name. Wait until the policy being acted on is disabled by the resync-prep operation before continuing.
-6. On the target PowerScale array's `Policies` tab, perform `Actions > Start job` on the *_mirror* policy. Wait for this synchronization to complete.
-7. On the source PowerScale array, switch from the `Policies` tab to the `Local targets` tab. Find the local target policy that matches the SyncIQ policy undergoing failback and perform `Actions > Allow writes`.
-8. On the target PowerScale array, perform `Actions > Resync-prep` on the *_mirror* policy. Wait until the policy on the source side is re-enabled by the resync-prep operation before continuing.
-9. On the target PowerScale array, delete the *_mirror* SyncIQ policy.
-10. On the target PowerScale array, manually recreate the original SyncIQ policy that was deleted in step 4. This will require filepaths, RPO, and other details that can be obtained from the source-side SyncIQ policy. Its name **must** match the source-side SyncIQ policy. Its source directory will be the source-side policy's *target* directory, and vice-versa. Its target host will be the source PowerScale array endpoint.
-11. Ensure that the target-side SyncIQ policy that was just created is **Enabled.** This will create a Local Target policy on the source side. If it was not created as Enabled, enable it now.
-12. On the source PowerScale array, select the `Local targets` tab. Perform `Actions > Allow writes` on the source-side Local Target policy that matches the SyncIQ policy undergoing failback.
-13. Disable the target-side SyncIQ policy.
-14. On the source PowerScale array, edit the SyncIQ policy's schedule from `Manual` to `When source is modified`. Set the time delay for synchronization as appropriate.
+Information on the methodology for performing a failback while keeping the changes made to the original target can be found in the relevant PowerScale SyncIQ documentation. CSM Replication performs the following steps for this operation:
+
+1. Editing the schedule of the SyncIQ policy on the source PowerScale array from `When source is modified` to `Manual`.
+2. Enabling the SyncIQ policy that is undergoing failback, if it isn't already enabled.
+3. Performing the `Resync-prep` action on the SyncIQ policy. This creates a new SyncIQ policy on the target PowerScale array that matches the original policy, with *_mirror* appended to its name.
+4. Starting a synchronization job on the target PowerScale array's newly created *_mirror* policy.
+5. Running the `Allow writes` operation on the Local Target policy on the source PowerScale array that was created by the *_mirror* policy.
+6. Performing the `Resync-prep` action on the target PowerScale array's *_mirror* policy.
+7. Deleting the *_mirror* SyncIQ policy.
+8. Editing the schedule of the SyncIQ policy on the source PowerScale array from `Manual` back to `When source is modified` and setting the time delay for synchronization as appropriate.
+
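+For orientation, here is a minimal sketch of requesting these operations from the Kubernetes side instead of through `repctl`. The RG name `rg-example` is hypothetical, the `rg` short name is assumed to resolve to `DellCSIReplicationGroup`, and the sketch assumes the RG's `spec.action` field drives these operations, as described under [Executing Actions.](https://dell.github.io/csm-docs/docs/replication/tools/#executing-actions)
+
+```shell
+# Planned failover: FAILOVER_REMOTE requests a failover to the RG's remote site.
+kubectl patch rg rg-example --type merge -p '{"spec":{"action":"FAILOVER_REMOTE"}}'
+
+# Failback: see the Executing Actions page for which of FAILBACK_LOCAL and
+# ACTION_FAILBACK_DISCARD_CHANGES_LOCAL maps to the discard-target and
+# discard-source flows described above.
+kubectl patch rg rg-example --type merge -p '{"spec":{"action":"FAILBACK_LOCAL"}}'
+```
+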
+#### Reprotect - Set Original Target as New Source
+
+A reprotect operation essentially does away with the original source-target relationship and establishes a new one in the reverse direction. It is performed **only after** failover to the original target array is complete and the original source array is back up and ready to be made into the new replication destination. To accomplish this, CSM Replication performs the following steps:
+
+1. Deleting the SyncIQ policy on the original source PowerScale array.
+2. Creating a new SyncIQ policy on the original target PowerScale array. This policy establishes the original target as the new *source* and sets its replication destination to the original source (which can be considered the new *target*).
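+
+Continuing the sketch above (same hypothetical RG name and assumptions), a reprotect request would look like:
+
+```shell
+# Run only after failover is complete and the original source array is healthy again.
+kubectl patch rg rg-example --type merge -p '{"spec":{"action":"REPROTECT_LOCAL"}}'
+```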

### Supported Replication Actions
The CSI PowerScale driver supports the following list of replication actions:
- FAILOVER_REMOTE
- UNPLANNED_FAILOVER_LOCAL
+- FAILBACK_LOCAL
+- ACTION_FAILBACK_DISCARD_CHANGES_LOCAL
+- REPROTECT_LOCAL
- SUSPEND
- RESUME
- SYNC
diff --git a/content/docs/replication/deployment/powerstore.md b/content/docs/replication/deployment/powerstore.md
index dfde098928..412efd9720 100644
--- a/content/docs/replication/deployment/powerstore.md
+++ b/content/docs/replication/deployment/powerstore.md
@@ -113,8 +113,7 @@ Let's go through each parameter and what it means:
* `replication.storage.dell.com/rpo` is an acceptable amount of data, which is measured in units of time, that may be lost due to a failure.
* `replication.storage.dell.com/ignoreNamespaces`, if set to `true` PowerStore driver, it will ignore in what namespace volumes are created and put every volume created using this storage class into a single volume group.
-* `replication.storage.dell.com/volumeGroupPrefix` represents what string would be appended to the volume group name
-  to differentiate them.
+* `replication.storage.dell.com/volumeGroupPrefix` represents what string would be appended to the volume group name to differentiate them. It is important not to use the same prefix for different Kubernetes clusters; otherwise, any action on a replication group in one cluster will impact the other.

>NOTE: To configure the VolumeGroupPrefix, the name format of \'\-\-\-\' cannot be more than 63 characters.

diff --git a/content/docs/replication/deployment/storageclasses.md b/content/docs/replication/deployment/storageclasses.md
index 042d351d72..df85a44833 100644
--- a/content/docs/replication/deployment/storageclasses.md
+++ b/content/docs/replication/deployment/storageclasses.md
@@ -29,7 +29,7 @@ This should contain the name of the storage class on the remote cluster which is
>Note: You still need to create a pair of storage classes even while using a single stretched cluster

### Driver specific parameters
-Please refer to the driver specific sections for [PowerMax](../powermax/#creating-storage-classes), [PowerStore](../powerstore/#creating-storage-classes), [PowerScale](../powerscale/#creating-storage-classes) or [Unity](../unity/#creating-storage-classes) for a detailed list of parameters.
+Please refer to the driver-specific sections for [PowerMax](../powermax/#creating-storage-classes), [PowerStore](../powerstore/#creating-storage-classes), or [PowerScale](../powerscale/#creating-storage-classes) for a detailed list of parameters.

### PV sync Deletion

diff --git a/content/docs/replication/deployment/unity.md b/content/docs/replication/deployment/unity.md
deleted file mode 100644
index 84bc358ff4..0000000000
--- a/content/docs/replication/deployment/unity.md
+++ /dev/null
@@ -1,179 +0,0 @@
----
-title: Unity
-linktitle: Unity
-weight: 7
-description: >
-  Enabling Replication feature for CSI Unity
----
-## Enabling Replication in CSI Unity
-
-Container Storage Modules (CSM) Replication sidecar is a helper container that is installed alongside a CSI driver to facilitate replication functionality. Such CSI drivers must implement `dell-csi-extensions` calls.
-
-CSI driver for Dell Unity supports necessary extension calls from `dell-csi-extensions`.
To be able to provision replicated volumes you would need to do the steps described in these sections. - -### Before Installation - -#### On Storage Array -Be sure to configure replication between multiple Unity instances using instructions provided by -Unity storage. - - -#### In Kubernetes -Ensure you installed CRDs and replication controller in your clusters. - -To verify you have everything in order you can execute these commands: - -* Check controller pods - ```shell - kubectl get pods -n dell-replication-controller - ``` - Pods should be `READY` and `RUNNING` -* Check that controller config map is properly populated - ```shell - kubectl get cm -n dell-replication-controller dell-replication-controller-config -o yaml - ``` - `data` field should be properly populated with cluster-id of your choosing and, if using multi-cluster - installation, your `targets:` parameter should be populated by a list of target clusters IDs. - - -If you don't have something installed or something is out-of-place, please refer to installation instructions in [installation-repctl](../install-repctl) or [installation](../installation). - -### Installing Driver With Replication Module - -To install the driver with replication enabled, you need to ensure you have set -helm parameter `controller.replication.enabled` in your copy of example `values.yaml` file -(usually called `my-unity-settings.yaml`, `myvalues.yaml` etc.). - -Here is an example of what that would look like: -```yaml -... -# controller: configure controller specific parameters -controller: - ... - # replication: allows to configure replication - replication: - enabled: true - image: dellemc/dell-csi-replicator:v1.2.0 - replicationContextPrefix: "unity" - replicationPrefix: "replication.storage.dell.com" -... -``` -You can leave other parameters like `image`, `replicationContextPrefix`, and `replicationPrefix` as they are. - -After enabling the replication module, you can continue to install the CSI driver for Unity following the usual installation procedure. Just ensure you've added the necessary array connection information to secret. - -> **_NOTE:_** you need to install your driver on ALL clusters where you want to use replication. Both arrays must be accessible from each cluster. - - -### Creating Storage Classes - -To provision replicated volumes, you need to create adequately configured storage classes on both the source and target clusters. - -A pair of storage classes on the source, and target clusters would be essentially `mirrored` copies of one another. -You can create them manually or with the help of `repctl`. - -#### Manual Storage Class Creation - -You can find a sample replication enabled storage class in the driver repository [here](https://github.com/dell/csi-unity/blob/main/samples/storageclass/unity-replication.yaml). 
- -It will look like this: -```yaml -apiVersion: storage.k8s.io/v1 -kind: StorageClass -metadata: - name: unity-replication -provisioner: csi-unity.dellemc.com -reclaimPolicy: Delete -volumeBindingMode: Immediate -parameters: - replication.storage.dell.com/isReplicationEnabled: "true" - replication.storage.dell.com/remoteStorageClassName: "unity-replication" - replication.storage.dell.com/remoteClusterID: "target" - replication.storage.dell.com/remoteSystem: "APM000000002" - replication.storage.dell.com/rpo: "5" - replication.storage.dell.com/ignoreNamespaces: "false" - replication.storage.dell.com/volumeGroupPrefix: "csi" - replication.storage.dell.com/remoteStoragePool: pool_002 - replication.storage.dell.com/remoteNasServer: nas_124 - arrayId: "APM000000001" - protocol: "NFS" - storagePool: pool_001 - nasServer: nas_123 -``` - -Let's go through each parameter and what it means: -* `replication.storage.dell.com/isReplicationEnabled` if set to `true`, will mark this storage class as replication enabled, - just leave it as `true`. -* `replication.storage.dell.com/remoteStorageClassName` points to the name of the remote storage class. If you are using replication with the multi-cluster configuration you can make it the same as the current storage class name. -* `replication.storage.dell.com/remoteClusterID` represents the ID of a remote cluster. It is the same id you put in the replication controller config map. -* `replication.storage.dell.com/remoteSystem` is the name of the remote system that should match whatever `clusterName` you called it in `unity-creds` secret. -* `replication.storage.dell.com/rpo` is an acceptable amount of data, which is measured in units of time, that may be lost due to a failure. -* `replication.storage.dell.com/ignoreNamespaces`, if set to `true` Unity driver, it will ignore in what namespace volumes are created and put every volume created using this storage class into a single volume group. -* `replication.storage.dell.com/volumeGroupPrefix` represents what string would be appended to the volume group name to differentiate them. ->NOTE: To configure the VolumeGroupPrefix, the name format of \'\-\-\-\' cannot be more than 63 characters. -* `arrayId` is a unique identifier of the storage array you specified in array connection secret. -* `nasServer` id of the Nas server of local array to which the allocated volume will belong. -* `storagePool` is the storage pool of the local array. - -After figuring out how storage classes would look, you just need to go and apply them to your Kubernetes clusters with `kubectl`. - -#### Storage Class creation with `repctl` - -`repctl` can simplify storage class creation by creating a pair of mirrored storage classes in both clusters -(using a single storage class configuration) in one command. - -To create storage classes with `repctl` you need to fill up the config with necessary information. -You can find an example [here](https://github.com/dell/csm-replication/blob/main/repctl/examples/unity_example_values.yaml), copy it, and modify it to your needs. - -If you open this example you can see a lot of similar fields and parameters you can modify in the storage class. 
- -Let's use the same example from manual installation and see what config would look like: -```yaml -targetClusterID: "cluster-2" -sourceClusterID: "cluster-1" -name: "unity-replication" -driver: "unity" -reclaimPolicy: "Retain" -replicationPrefix: "replication.storage.dell.com" -remoteRetentionPolicy: - RG: "Retain" - PV: "Retain" -parameters: - arrayId: - source: "APM000000001" - target: "APM000000002" - storagePool: - source: pool_123 - target: pool_124 - rpo: "0" - ignoreNamespaces: "false" - volumeGroupPrefix: "prefix" - protocol: "NFS" - nasServer: - source: nas_123 - target: nas_123 -``` - -After preparing the config, you can apply it to both clusters with `repctl`. Before you do this, ensure you've added your clusters to `repctl` via the `add` command. - -To create storage classes just run `./repctl create sc --from-config ` and storage classes would be applied to both clusters. - -After creating storage classes you can make sure they are in place by using `./repctl get storageclasses` command. - -### Provisioning Replicated Volumes - -After installing the driver and creating storage classes, you are good to create volumes using newly -created storage classes. - -On your source cluster, create a PersistentVolumeClaim using one of the replication-enabled Storage Classes. -The CSI Unity driver will create a volume on the array, add it to a VolumeGroup and configure replication -using the parameters provided in the replication enabled Storage Class. - -### Supported Replication Actions -The CSI Unity driver supports the following list of replication actions: -- FAILOVER_REMOTE -- UNPLANNED_FAILOVER_LOCAL -- REPROTECT_LOCAL -- SUSPEND -- RESUME -- SYNC diff --git a/content/docs/replication/release/_index.md b/content/docs/replication/release/_index.md index 33d56c7cf5..f7701f43ad 100644 --- a/content/docs/replication/release/_index.md +++ b/content/docs/replication/release/_index.md @@ -6,16 +6,19 @@ Description: > Dell Container Storage Modules (CSM) release notes for replication --- -## Release Notes - CSM Replication 1.3.1 +## Release Notes - CSM Replication 1.4.0 ### New Features/Changes -There are no new features in this release. + + - [PowerScale - Implement Failback functionality](https://github.com/dell/csm/issues/558) + - [PowerScale - Implement Reprotect functionality](https://github.com/dell/csm/issues/532) + - [PowerScale - SyncIQ policy improvements](https://github.com/dell/csm/issues/573) ### Fixed Issues -- [PowerScale Replication - Replicated PV has the wrong AzServiceIP](https://github.com/dell/csm/issues/514) -- ["repctl cluster inject --use-sa" doesn't work for Kubernetes 1.24 and above](https://github.com/dell/csm/issues/463) + +There are no fixed issues at this time. ### Known Issues -| Github ID | Description | -| --------------------------------------------- | --------------------------------------------------------------------------------------- | -| [523](https://github.com/dell/csm/issues/523) | **PowerScale:** Artifacts are not properly cleaned after deletion. | +| Github ID | Description | +| --------------------------------------------- | ------------------------------------------------------------------ | +| [523](https://github.com/dell/csm/issues/523) | **PowerScale:** Artifacts are not properly cleaned after deletion. 
diff --git a/content/docs/replication/replication-actions.md b/content/docs/replication/replication-actions.md
index 00a31ab560..5bc89fccac 100644
--- a/content/docs/replication/replication-actions.md
+++ b/content/docs/replication/replication-actions.md
@@ -34,11 +34,11 @@ For e.g. -
The following table lists details of what actions should be used in different Disaster Recovery workflows & the equivalent operation done on the storage array:
{{}}
-| Workflow | Actions | PowerMax | PowerStore | PowerScale | Unity |
-| ------------------- | ----------------------------------------------------- | ---------------------- | -------------------------------------- | -------------------------------------------- | -------------------------------------- |
-| Planned Migration | FAILOVER_LOCAL<br>FAILOVER_REMOTE | symrdf failover -swap | FAILOVER (no REPROTECT after FAILOVER) | allow_writes on target, disable local policy | FAILOVER (no REPROTECT after FAILOVER) |
-| Reprotect | REPROTECT_LOCAL<br>REPROTECT_REMOTE | symrdf resume/est | REPROTECT | Not supported | REPROTECT |
-| Unplanned Migration | UNPLANNED_FAILOVER_LOCAL<br>UNPLANNED_FAILOVER_REMOTE | symrdf failover -force | FAILOVER (at target site) | allow_writes on target | FAILOVER (at target site) |
+| Workflow | Actions | PowerMax | PowerStore | PowerScale |
+| ------------------- | ----------------------------------------------------- | ---------------------- | -------------------------------------- | ------------------------------------------------ |
+| Planned Migration | FAILOVER_LOCAL<br>FAILOVER_REMOTE | symrdf failover -swap | FAILOVER (no REPROTECT after FAILOVER) | allow_writes on target, disable local policy |
+| Reprotect | REPROTECT_LOCAL<br>REPROTECT_REMOTE | symrdf resume/est | REPROTECT | Delete policy on source, create policy on target |
+| Unplanned Migration | UNPLANNED_FAILOVER_LOCAL<br>UNPLANNED_FAILOVER_REMOTE | symrdf failover -force | FAILOVER (at target site) | allow_writes on target |
{{}}

### Maintenance Actions
@@ -46,11 +46,11 @@ These actions can be run at any site and are used to change the replication link
The following table lists the supported maintenance actions and the equivalent operation done on the storage arrays
{{}}
-| Action | Description | PowerMax | PowerStore | PowerScale | Unity |
-| ------- | -------------------------------------------------- | ---------------- | --------------- | -------------------- | ------ |
-| SUSPEND | Temporarily suspend<br>replication | symrdf suspend | PAUSE | disable local policy | PAUSE |
-| RESUME | Resume replication | symrdf resume | RESUME | enable local policy | RESUME |
-| SYNC | Synchronize all changes<br>from source to target | symrdf establish | SYNCHRONIZE NOW | start syncIQ job | SYNC |
+| Action | Description | PowerMax | PowerStore | PowerScale |
+| ------- | -------------------------------------------------- | ---------------- | --------------- | -------------------- |
+| SUSPEND | Temporarily suspend<br>replication | symrdf suspend | PAUSE | disable local policy |
+| RESUME | Resume replication | symrdf resume | RESUME | enable local policy |
+| SYNC | Synchronize all changes<br>from source to target | symrdf establish | SYNCHRONIZE NOW | start syncIQ job |
{{}}

### How to perform actions
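+The maintenance actions above can be requested in the same way as the disaster-recovery actions: with `repctl`, or by setting the RG's `spec.action` field. A minimal sketch, assuming a hypothetical RG named `rg-example`:
+
+```shell
+# Trigger an immediate synchronization, then watch the RG until it settles.
+kubectl patch rg rg-example --type merge -p '{"spec":{"action":"SYNC"}}'
+kubectl get rg rg-example -w
+```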