
cephfs: Fix Removal of IPs from blocklist #4815

Merged
merged 1 commit into ceph:devel from fix-cephfs-unfence on Sep 9, 2024

Conversation

black-dragon74
Member

@black-dragon74 black-dragon74 commented Aug 30, 2024

While dealing with CephFS fencing, we evict the
clients and individually blocklist the IPs in the
CIDR range that do not have any active clients.

While unfencing, the removal is attempted via the
CIDR range, which fails to remove the individually
blocklisted IPs from Ceph's blocklist.

This PR fetches the blocklist from Ceph and
removes the blocklisted IPs that lie inside the
CIDR range, along with their unique nonces.
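The fix described above boils down to filtering `ceph osd blocklist ls` output down to the entries that fall inside the fenced CIDR. A minimal sketch of that filtering step follows; `blocklistEntry` and `parseBlocklistForCIDR` are illustrative names, not ceph-csi's actual API, and the parser assumes IPv4 entries of the form `<ip>:<port>/<nonce> <expiry>`, one per line:

```go
package main

import (
	"fmt"
	"net"
	"strings"
)

// blocklistEntry is an illustrative representation of one entry from
// `ceph osd blocklist ls` output, e.g. "100.64.0.3:0/0 <expiry>".
type blocklistEntry struct {
	IP    string
	Nonce string
}

// parseBlocklistForCIDR keeps only the blocklist entries whose IP falls
// inside the given CIDR. Lines that are not "<ip>:<port>/<nonce> <expiry>"
// (such as the trailing "listed N entries") are skipped.
func parseBlocklistForCIDR(output, cidr string) ([]blocklistEntry, error) {
	_, network, err := net.ParseCIDR(cidr)
	if err != nil {
		return nil, err
	}
	var entries []blocklistEntry
	for _, line := range strings.Split(strings.TrimSpace(output), "\n") {
		fields := strings.Fields(line)
		if len(fields) < 2 {
			continue
		}
		addr := fields[0] // "<ip>:<port>/<nonce>"
		slash := strings.LastIndex(addr, "/")
		if slash < 0 {
			continue
		}
		colon := strings.LastIndex(addr[:slash], ":")
		if colon < 0 {
			continue
		}
		ip := net.ParseIP(addr[:colon])
		if ip != nil && network.Contains(ip) {
			entries = append(entries, blocklistEntry{IP: addr[:colon], Nonce: addr[slash+1:]})
		}
	}
	return entries, nil
}

func main() {
	out := "100.64.0.3:0/0 2024-09-02T16:39:14.528296+0000\n" +
		"192.168.1.5:0/0 2024-09-02T16:39:09.325046+0000\n" +
		"listed 2 entries"
	entries, err := parseBlocklistForCIDR(out, "100.64.0.0/30")
	if err != nil {
		panic(err)
	}
	fmt.Println(entries) // only 100.64.0.3 lies inside 100.64.0.0/30
}
```

Each surviving entry can then be removed individually with its own nonce, which is what the logs later in this thread show.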

@black-dragon74 black-dragon74 self-assigned this Aug 30, 2024
@mergify mergify bot added the component/cephfs Issues related to CephFS label Aug 30, 2024
@black-dragon74
Member Author

black-dragon74 commented Aug 30, 2024

❯ oc apply -f /tmp/a.yml
networkfence.csiaddons.openshift.io/fence-test-1 created

❯ oc exec -it rook-ceph-tools-67bf494bc8-tq69s -- ceph osd blocklist ls
100.64.0.3:0/0 2024-09-02T16:39:14.528296+0000
100.64.0.2:0/0 2024-09-02T16:39:09.325046+0000
100.64.0.0:0/0 2024-09-02T16:39:04.855016+0000
cidr:100.64.0.0:0/30 2029-09-02T20:44:33.360855+0000
listed 4 entries

❯ nvim /tmp/a.yml
❯ oc apply -f /tmp/a.yml
networkfence.csiaddons.openshift.io/fence-test-1 configured

❯ oc exec -it rook-ceph-tools-67bf494bc8-tq69s -- ceph osd blocklist ls
listed 0 entries

Logs:

I0902 15:38:32.883931       1 utils.go:240] ID: 25 GRPC call: /fence.FenceController/FenceClusterNetwork
I0902 15:38:32.884117       1 utils.go:241] ID: 25 GRPC request: {"cidrs":[{"cidr":"100.64.0.0/30"}],"parameters":{"clusterID":"openshift-storage"},"secrets":"***stripped***"}
I0902 15:38:33.040084       1 cephcmds.go:159] ID: 25 command succeeded: ceph [tell mds.0 client ls --id csi-cephfs-provisioner --keyfile=***stripped*** -m 172.30.248.221:3300,172.30.217.51:3300,172.30.161.182:3300]
I0902 15:38:33.767875       1 cephcmds.go:105] ID: 25 command succeeded: ceph [osd blocklist range add 100.64.0.0/30 157784760 --id csi-cephfs-provisioner --keyfile=***stripped*** -m 172.30.248.221:3300,172.30.217.51:3300,172.30.161.182:3300]
I0902 15:38:33.767916       1 fencing.go:115] ID: 25 blocklisted IP "100.64.0.0/30" successfully
I0902 15:38:33.768008       1 utils.go:247] ID: 25 GRPC response: {}


I0902 15:40:44.821312       1 utils.go:240] ID: 26 GRPC call: /fence.FenceController/UnfenceClusterNetwork
I0902 15:40:44.821407       1 utils.go:241] ID: 26 GRPC request: {"cidrs":[{"cidr":"100.64.0.0/30"}],"parameters":{"clusterID":"openshift-storage"},"secrets":"***stripped***"}
I0902 15:40:46.014550       1 cephcmds.go:105] ID: 26 command succeeded: ceph [osd blocklist range rm 100.64.0.0/30 --id csi-cephfs-provisioner --keyfile=***stripped*** -m 172.30.248.221:3300,172.30.217.51:3300,172.30.161.182:3300]
I0902 15:40:46.014589       1 fencing.go:376] ID: 26 unblocked IP "100.64.0.0/30" successfully
I0902 15:40:46.336832       1 cephcmds.go:159] ID: 26 command succeeded: ceph [osd blocklist ls --id csi-cephfs-provisioner --keyfile=***stripped*** -m 172.30.248.221:3300,172.30.217.51:3300,172.30.161.182:3300]
I0902 15:40:46.336872       1 fencing.go:432] ID: 26 fetched blocklist: 100.64.0.3:0/0 2024-09-02T16:39:14.528296+0000 100.64.0.2:0/0 2024-09-02T16:39:09.325046+0000 100.64.0.0:0/0 2024-09-02T16:39:04.855016+0000
I0902 15:40:46.336904       1 fencing.go:441] ID: 26 parsed blocklist for CIDR 100.64.0.0/30: [{IP:100.64.0.3 Nonce:0} {IP:100.64.0.2 Nonce:0} {IP:100.64.0.0 Nonce:0}]
I0902 15:40:47.043587       1 cephcmds.go:105] ID: 26 command succeeded: ceph [osd blocklist rm 100.64.0.3:0/0 --id csi-cephfs-provisioner --keyfile=***stripped*** -m 172.30.248.221:3300,172.30.217.51:3300,172.30.161.182:3300]
I0902 15:40:47.043617       1 fencing.go:376] ID: 26 unblocked IP "100.64.0.3" successfully
I0902 15:40:48.065347       1 cephcmds.go:105] ID: 26 command succeeded: ceph [osd blocklist rm 100.64.0.2:0/0 --id csi-cephfs-provisioner --keyfile=***stripped*** -m 172.30.248.221:3300,172.30.217.51:3300,172.30.161.182:3300]
I0902 15:40:48.065383       1 fencing.go:376] ID: 26 unblocked IP "100.64.0.2" successfully
I0902 15:40:49.094710       1 cephcmds.go:105] ID: 26 command succeeded: ceph [osd blocklist rm 100.64.0.0:0/0 --id csi-cephfs-provisioner --keyfile=***stripped*** -m 172.30.248.221:3300,172.30.217.51:3300,172.30.161.182:3300]
I0902 15:40:49.094739       1 fencing.go:376] ID: 26 unblocked IP "100.64.0.0" successfully
I0902 15:40:49.094820       1 utils.go:247] ID: 26 GRPC response: {}

@Madhu-1
Collaborator

Madhu-1 commented Aug 30, 2024

@black-dragon74 can you please also make sure you have a CephFS PVC mounted on the node you are blocklisting, restart the csi-addons manager pod multiple times (to check whether cephcsi CephFS fencing is idempotent), and provide the logs here?

@black-dragon74 black-dragon74 force-pushed the fix-cephfs-unfence branch 4 times, most recently from dcc9406 to 23802c9 on September 2, 2024 16:20
Collaborator

@Madhu-1 Madhu-1 left a comment


can you please test the csi-addons restart case and paste the logs here?

internal/csi-addons/networkfence/fencing.go (review thread, outdated, resolved)
internal/csi-addons/networkfence/fencing.go (review thread, resolved)
@black-dragon74 black-dragon74 force-pushed the fix-cephfs-unfence branch 2 times, most recently from 13e170d to 4e6a907 on September 3, 2024 10:07
Madhu-1
Madhu-1 previously approved these changes Sep 3, 2024
@@ -412,7 +413,37 @@ func (nf *NetworkFence) RemoveNetworkFence(ctx context.Context) error {
}
// remove ceph blocklist for each IP in the range mentioned by the CIDR
for _, host := range hosts {
-		err := nf.removeCephBlocklist(ctx, host, false)
+		err := nf.removeCephBlocklist(ctx, host, "0", false)
Contributor


@black-dragon74 Why is "0" used as nonce here ?

Member Author


This is to remove just the blocklist entry without specifying any extra details such as port and nonce. If you do not specify the port and nonce explicitly, Ceph uses the default of 0/0 for the port and nonce respectively.

Ex: ceph osd blocklist rm x.x.x.x, IP = x.x.x.x:0/0
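The defaulting described above can be sketched as follows; `blocklistAddr` is an illustrative helper showing how Ceph interprets an address with omitted parts, not Ceph's or ceph-csi's code:

```go
package main

import "fmt"

// blocklistAddr renders a blocklist address the way Ceph interprets it when
// parts are omitted: a missing port and a missing nonce both default to 0,
// so a bare IP is equivalent to "<ip>:0/0". Illustrative only.
func blocklistAddr(ip, port, nonce string) string {
	if port == "" {
		port = "0"
	}
	if nonce == "" {
		nonce = "0"
	}
	return fmt.Sprintf("%s:%s/%s", ip, port, nonce)
}

func main() {
	fmt.Println(blocklistAddr("100.64.0.3", "", "")) // 100.64.0.3:0/0
}
```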

Contributor


> This is to remove just the blocklist entry without specifying any extra details such as port and nonce. If you do not specify the port and nonce explicitly, Ceph uses the default of 0/0 for the port and nonce respectively.
>
> Ex: ceph osd blocklist rm x.x.x.x, IP = x.x.x.x:0/0

Can you add it as a comment just above this line?
It's a bit confusing why "0" pops up as the nonce all of a sudden.

Contributor

@Rakshith-R Rakshith-R left a comment


Thanks

@Madhu-1
Collaborator

Madhu-1 commented Sep 9, 2024

@Mergifyio queue

Contributor

mergify bot commented Sep 9, 2024

> queue

✅ The pull request has been merged automatically

The pull request has been merged automatically at 6c704bc

Signed-off-by: Niraj Yadav <niryadav@redhat.com>
@mergify mergify bot added the ok-to-test Label to trigger E2E tests label Sep 9, 2024
@ceph-csi-bot
Collaborator

/test ci/centos/upgrade-tests-cephfs

@ceph-csi-bot
Collaborator

/test ci/centos/upgrade-tests-rbd

@ceph-csi-bot
Collaborator

/test ci/centos/k8s-e2e-external-storage/1.30

@ceph-csi-bot
Collaborator

/test ci/centos/mini-e2e-helm/k8s-1.30

@ceph-csi-bot
Collaborator

/test ci/centos/k8s-e2e-external-storage/1.29

@ceph-csi-bot
Collaborator

/test ci/centos/mini-e2e/k8s-1.30

@ceph-csi-bot
Collaborator

/test ci/centos/k8s-e2e-external-storage/1.31

@ceph-csi-bot
Collaborator

/test ci/centos/mini-e2e-helm/k8s-1.29

@ceph-csi-bot
Collaborator

/test ci/centos/mini-e2e-helm/k8s-1.31

@ceph-csi-bot
Collaborator

/test ci/centos/mini-e2e/k8s-1.29

@ceph-csi-bot
Collaborator

/test ci/centos/mini-e2e/k8s-1.31

@ceph-csi-bot ceph-csi-bot removed the ok-to-test Label to trigger E2E tests label Sep 9, 2024
@mergify mergify bot merged commit 6c704bc into ceph:devel Sep 9, 2024
37 checks passed