
cephfs: RO Restore fails when storageclass has pool parameter #3820

Closed
Rakshith-R opened this issue May 15, 2023 · 5 comments · Fixed by #4047
Assignees: Rakshith-R
Labels: bug (Something isn't working), component/cephfs (Issues related to CephFS)

Comments

@Rakshith-R
Contributor

Describe the bug

RO Restore is not possible when the cephfs storageclass has the pool parameter set.

I0515 10:27:41.247279       1 utils.go:195] ID: 35 Req-ID: pvc-03876232-45cc-40f2-94d6-5455cac30b6c GRPC call: /csi.v1.Controller/CreateVolume
I0515 10:27:41.247448       1 utils.go:206] ID: 35 Req-ID: pvc-03876232-45cc-40f2-94d6-5455cac30b6c GRPC request: {"capacity_range":{"required_bytes":1073741824},"name":"pvc-03876232-45cc-40f2-94d6-5455cac30b6c","parameters":{"clusterID":"rook-ceph","csi.storage.k8s.io/pv/name":"pvc-03876232-45cc-40f2-94d6-5455cac30b6c","csi.storage.k8s.io/pvc/name":"cephfs-pvc-restore","csi.storage.k8s.io/pvc/namespace":"rook-ceph","fsName":"myfs","pool":"myfs-replicated"},"secrets":"***stripped***","volume_capabilities":[{"AccessType":{"Mount":{}},"access_mode":{"mode":3}}],"volume_content_source":{"Type":{"Snapshot":{"snapshot_id":"0001-0009-rook-ceph-0000000000000001-de98cf4b-1db4-4b36-8504-539763f55e9f"}}}}
I0515 10:27:41.250612       1 omap.go:88] ID: 35 Req-ID: pvc-03876232-45cc-40f2-94d6-5455cac30b6c got omap values: (pool="myfs-metadata", namespace="csi", name="csi.snap.de98cf4b-1db4-4b36-8504-539763f55e9f"): map[csi.imagename:csi-snap-de98cf4b-1db4-4b36-8504-539763f55e9f csi.snapname:snapshot-ee3d59fc-5121-4d58-b17e-92660d917d11 csi.source:csi-vol-f047da06-344c-4362-a312-9a07c64fe070]
E0515 10:27:41.277713       1 controllerserver.go:277] ID: 35 Req-ID: pvc-03876232-45cc-40f2-94d6-5455cac30b6c validation and extraction of volume options failed: cannot set pool for snapshot-backed volume
E0515 10:27:41.277826       1 utils.go:210] ID: 35 Req-ID: pvc-03876232-45cc-40f2-94d6-5455cac30b6c GRPC error: rpc error: code = InvalidArgument desc = cannot set pool for snapshot-backed volume
I0515 10:29:05.698175       1 utils.go:195] ID: 36 Req-ID: pvc-03876232-45cc-40f2-94d6-5455cac30b6c GRPC call: /csi.v1.Controller/CreateVolume
I0515 10:29:05.698909       1 utils.go:206] ID: 36 Req-ID: pvc-03876232-45cc-40f2-94d6-5455cac30b6c GRPC request: {"capacity_range":{"required_bytes":1073741824},"name":"pvc-03876232-45cc-40f2-94d6-5455cac30b6c","parameters":{"clusterID":"rook-ceph","csi.storage.k8s.io/pv/name":"pvc-03876232-45cc-40f2-94d6-5455cac30b6c","csi.storage.k8s.io/pvc/name":"cephfs-pvc-restore","csi.storage.k8s.io/pvc/namespace":"rook-ceph","fsName":"myfs","pool":"myfs-replicated"},"secrets":"***stripped***","volume_capabilities":[{"AccessType":{"Mount":{}},"access_mode":{"mode":3}}],"volume_content_source":{"Type":{"Snapshot":{"snapshot_id":"0001-0009-rook-ceph-0000000000000001-de98cf4b-1db4-4b36-8504-539763f55e9f"}}}}
I0515 10:29:05.709888       1 omap.go:88] ID: 36 Req-ID: pvc-03876232-45cc-40f2-94d6-5455cac30b6c got omap values: (pool="myfs-metadata", namespace="csi", name="csi.snap.de98cf4b-1db4-4b36-8504-539763f55e9f"): map[csi.imagename:csi-snap-de98cf4b-1db4-4b36-8504-539763f55e9f csi.snapname:snapshot-ee3d59fc-5121-4d58-b17e-92660d917d11 csi.source:csi-vol-f047da06-344c-4362-a312-9a07c64fe070]
E0515 10:29:05.725906       1 controllerserver.go:277] ID: 36 Req-ID: pvc-03876232-45cc-40f2-94d6-5455cac30b6c validation and extraction of volume options failed: cannot set pool for snapshot-backed volume
E0515 10:29:05.725992       1 utils.go:210] ID: 36 Req-ID: pvc-03876232-45cc-40f2-94d6-5455cac30b6c GRPC error: rpc error: code = InvalidArgument desc = cannot set pool for snapshot-backed volume
I0515 10:31:06.136888       1 utils.go:195] ID: 37 Req-ID: pvc-03876232-45cc-40f2-94d6-5455cac30b6c GRPC call: /csi.v1.Controller/CreateVolume
I0515 10:31:06.137169       1 utils.go:206] ID: 37 Req-ID: pvc-03876232-45cc-40f2-94d6-5455cac30b6c GRPC request: {"capacity_range":{"required_bytes":1073741824},"name":"pvc-03876232-45cc-40f2-94d6-5455cac30b6c","parameters":{"clusterID":"rook-ceph","csi.storage.k8s.io/pv/name":"pvc-03876232-45cc-40f2-94d6-5455cac30b6c","csi.storage.k8s.io/pvc/name":"cephfs-pvc-restore","csi.storage.k8s.io/pvc/namespace":"rook-ceph","fsName":"myfs","pool":"myfs-replicated"},"secrets":"***stripped***","volume_capabilities":[{"AccessType":{"Mount":{}},"access_mode":{"mode":3}}],"volume_content_source":{"Type":{"Snapshot":{"snapshot_id":"0001-0009-rook-ceph-0000000000000001-de98cf4b-1db4-4b36-8504-539763f55e9f"}}}}
I0515 10:31:06.143371       1 omap.go:88] ID: 37 Req-ID: pvc-03876232-45cc-40f2-94d6-5455cac30b6c got omap values: (pool="myfs-metadata", namespace="csi", name="csi.snap.de98cf4b-1db4-4b36-8504-539763f55e9f"): map[csi.imagename:csi-snap-de98cf4b-1db4-4b36-8504-539763f55e9f csi.snapname:snapshot-ee3d59fc-5121-4d58-b17e-92660d917d11 csi.source:csi-vol-f047da06-344c-4362-a312-9a07c64fe070]
E0515 10:31:06.175414       1 controllerserver.go:277] ID: 37 Req-ID: pvc-03876232-45cc-40f2-94d6-5455cac30b6c validation and extraction of volume options failed: cannot set pool for snapshot-backed volume
E0515 10:31:06.175536       1 utils.go:210] ID: 37 Req-ID: pvc-03876232-45cc-40f2-94d6-5455cac30b6c GRPC error: rpc error: code = InvalidArgument desc = cannot set pool for snapshot-backed volume

if vo.Pool != "" {
return errors.New("cannot set pool for snapshot-backed volume")
}
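
For illustration, a minimal, self-contained Go sketch of the failing path. The struct and function names here are hypothetical stand-ins, not the actual ceph-csi identifiers; only the rejected-pool check is taken from the snippet above.

package main

import (
	"errors"
	"fmt"
)

// volumeOptions is a simplified, hypothetical stand-in for the struct that
// ceph-csi builds from the StorageClass parameters during CreateVolume.
type volumeOptions struct {
	FsName string
	Pool   string // populated from the StorageClass "pool" parameter
}

// newSnapshotBackedVolumeOptions mirrors the validation seen in the logs:
// a volume restored from a snapshot (a shallow, snapshot-backed volume)
// rejects any non-empty Pool outright.
func newSnapshotBackedVolumeOptions(params map[string]string) (*volumeOptions, error) {
	vo := &volumeOptions{
		FsName: params["fsName"],
		Pool:   params["pool"],
	}
	if vo.Pool != "" {
		return nil, errors.New("cannot set pool for snapshot-backed volume")
	}
	return vo, nil
}

func main() {
	// Parameters as they appear in the GRPC request above.
	params := map[string]string{"fsName": "myfs", "pool": "myfs-replicated"}
	if _, err := newSnapshotBackedVolumeOptions(params); err != nil {
		// Corresponds to: GRPC error: rpc error: code = InvalidArgument desc = ...
		fmt.Println("InvalidArgument:", err)
	}
}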

@nixpanic nixpanic added the component/cephfs Issues related to CephFS label May 19, 2023
@nixpanic
Member

What do you expect, or suggest as behavior?

@nixpanic nixpanic added the bug Something isn't working label May 22, 2023
@Rakshith-R
Contributor Author

What do you expect,

We've defaulted RO restore PVCs to be shallow clones. Since the StorageClasses for these
PVCs predate this change, we need to stay compatible.
I expect no change in behaviour from before the concept of a shallow clone existed.
Therefore, a RO Restore PVC should go to the Bound state without any error.

or suggest as behavior?

We should remove this check, since we just reference this RO restore PVC back to the snapshot and don't use the pool parameter anywhere (AFAIK):

if vo.Pool != "" {
return errors.New("cannot set pool for snapshot-backed volume")
}
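
A minimal sketch of the suggested behaviour, reusing the hypothetical volumeOptions type from the sketch above: ignore the pool parameter for snapshot-backed volumes instead of rejecting it, so StorageClasses created before the shallow-clone change keep working.

// newSnapshotBackedVolumeOptionsCompat is a hypothetical variant of the
// helper sketched earlier. The data pool is irrelevant for a shallow,
// snapshot-backed volume (it only points back at the snapshot), so the
// StorageClass "pool" parameter is dropped rather than treated as an error.
func newSnapshotBackedVolumeOptionsCompat(params map[string]string) (*volumeOptions, error) {
	return &volumeOptions{
		FsName: params["fsName"],
		// Deliberately ignore params["pool"] instead of returning
		// InvalidArgument, restoring the pre-shallow-clone behaviour.
		Pool: "",
	}, nil
}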

@github-actions

This issue has been automatically marked as stale because it has not had recent activity. It will be closed in a week if no further activity occurs. Thank you for your contributions.

@github-actions github-actions bot added the wontfix This will not be worked on label Jun 21, 2023
@Rakshith-R Rakshith-R removed the wontfix This will not be worked on label Jun 22, 2023
@Rakshith-R Rakshith-R added this to the release-v3.9.1 milestone Jun 22, 2023
@github-actions

This issue has been automatically marked as stale because it has not had recent activity. It will be closed in a week if no further activity occurs. Thank you for your contributions.

@github-actions github-actions bot added the wontfix This will not be worked on label Jul 22, 2023
@github-actions

This issue has been automatically closed due to inactivity. Please re-open if this still requires investigation.

@github-actions github-actions bot closed this as not planned Jul 30, 2023
@Rakshith-R Rakshith-R reopened this Aug 1, 2023
@Rakshith-R Rakshith-R removed the wontfix This will not be worked on label Aug 1, 2023
@Rakshith-R Rakshith-R self-assigned this Aug 2, 2023
@mergify mergify bot closed this as completed in #4047 Aug 17, 2023