Implement multi Rook/Ceph env e2e #27

Merged
merged 1 commit into main from implement-e2e-multi-cluster on Jul 8, 2024

Conversation

llamerada-jp
Contributor

No description provided.

@llamerada-jp force-pushed the implement-e2e-multi-cluster branch 17 times, most recently from d237b0b to 19433d7 on June 28, 2024 06:07
@llamerada-jp changed the title from "Implement e2e multi cluster" to "Implement multi Rook/Ceph env e2e" on Jun 28, 2024
@llamerada-jp marked this pull request as ready for review on June 28, 2024 06:42
test/e2e/backup_test.go (outdated, resolved)
test/e2e/multi_rook_ceph_test.go (outdated, resolved)
test/e2e/multi_rook_ceph_test.go (outdated, resolved)
test/e2e/backup_test.go (outdated, resolved)
_, stderr, err := kubectl("delete", "sc", sc)
Expect(err).NotTo(HaveOccurred(), string(stderr))
})
}
Contributor
How about deleting the RBD images in the second cluster as the old code did? This is the teardown logic, and it should clean up the environment as much as possible.

test/e2e/multi_rook_ceph_test.go (outdated, resolved)
test/e2e/util.go (outdated, resolved)
test/e2e/multi_rook_ceph_test.go (outdated, resolved)
@llamerada-jp force-pushed the implement-e2e-multi-cluster branch 5 times, most recently from d41cd5f to d5729d3 on July 5, 2024 08:53
test/e2e/util.go (outdated)
images := []string{}
clones := []string{}
snaps := []string{}
// remove RBD snapshots first
Contributor
It's better to mention RBD clone images.

Suggested change
- // remove RBD snapshots first
+ // remove RBD clone images of targets, RBD snapshots of targets, and then the target images,
+ // in that order, so that they can be removed cleanly.

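To make the intended ordering concrete, here is a minimal sketch of how a cleanup helper along these lines could work: clone images first, then snapshots, then the images themselves. The listRBDImages, hasParent, listRBDSnapshots, removeRBDSnapshot, and removeRBDImage helpers are hypothetical stand-ins for thin wrappers around the rbd CLI; they are not functions from this repository.

// Sketch only: the helpers below are hypothetical wrappers around
// `rbd ls`, `rbd info`, `rbd snap ls`, `rbd snap rm`, and `rbd rm`.
func removeAllRBDImageAndSnapSketch(namespace, pool string) error {
	all, err := listRBDImages(namespace, pool)
	if err != nil {
		return err
	}

	// Split the images into clone images and ordinary images. A clone has a
	// parent snapshot, so it must be removed before that snapshot can be.
	images := []string{}
	clones := []string{}
	for _, img := range all {
		if hasParent(namespace, pool, img) {
			clones = append(clones, img)
		} else {
			images = append(images, img)
		}
	}

	// 1. Remove clone images first so their parent snapshots become removable.
	for _, img := range clones {
		if err := removeRBDImage(namespace, pool, img); err != nil {
			return err
		}
	}

	// 2. Remove snapshots of the remaining images; an image that still has
	//    snapshots cannot be deleted. Protected snapshots would additionally
	//    need `rbd snap unprotect` before removal.
	for _, img := range images {
		snaps, err := listRBDSnapshots(namespace, pool, img)
		if err != nil {
			return err
		}
		for _, snap := range snaps {
			if err := removeRBDSnapshot(namespace, pool, img, snap); err != nil {
				return err
			}
		}
	}

	// 3. Finally remove the images themselves.
	for _, img := range images {
		if err := removeRBDImage(namespace, pool, img); err != nil {
			return err
		}
	}
	return nil
}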
Comment on lines +63 to +89
It("delete resources in the namespace: "+ns, func() {
err := deleteNamespacedResource(ns, "mantlerestore")
Expect(err).NotTo(HaveOccurred())
err = deleteNamespacedResource(ns, "mantlebackup")
Expect(err).NotTo(HaveOccurred())
err = deleteNamespacedResource(ns, "pvc")
Expect(err).NotTo(HaveOccurred())
})

It("delete the namespace: "+ns, func() {
_, stderr, err := kubectl("delete", "namespace", ns)
Expect(err).NotTo(HaveOccurred(), string(stderr))
})
}

It("clean up the SCs and RBD pools", func() {
for _, sc := range []string{test.storageClassName1, test.storageClassName2} {
_, stderr, err := kubectl("delete", "sc", sc)
Expect(err).NotTo(HaveOccurred(), string(stderr))
}

for _, ns := range []string{cephCluster1Namespace, cephCluster2Namespace} {
err := removeAllRBDImageAndSnap(ns, test.poolName)
Expect(err).NotTo(HaveOccurred())

_, _, err = kubectl("delete", "-n", ns, "cephblockpool", test.poolName, "--wait=false")
Expect(err).NotTo(HaveOccurred())
Contributor
It's better to continue the teardown even if some deletions fail, so that the environment is restored as cleanly as possible.

Suggested change
-	It("delete resources in the namespace: "+ns, func() {
-		err := deleteNamespacedResource(ns, "mantlerestore")
-		Expect(err).NotTo(HaveOccurred())
-		err = deleteNamespacedResource(ns, "mantlebackup")
-		Expect(err).NotTo(HaveOccurred())
-		err = deleteNamespacedResource(ns, "pvc")
-		Expect(err).NotTo(HaveOccurred())
-	})
-	It("delete the namespace: "+ns, func() {
-		_, stderr, err := kubectl("delete", "namespace", ns)
-		Expect(err).NotTo(HaveOccurred(), string(stderr))
-	})
-}
-It("clean up the SCs and RBD pools", func() {
-	for _, sc := range []string{test.storageClassName1, test.storageClassName2} {
-		_, stderr, err := kubectl("delete", "sc", sc)
-		Expect(err).NotTo(HaveOccurred(), string(stderr))
-	}
-	for _, ns := range []string{cephCluster1Namespace, cephCluster2Namespace} {
-		err := removeAllRBDImageAndSnap(ns, test.poolName)
-		Expect(err).NotTo(HaveOccurred())
-		_, _, err = kubectl("delete", "-n", ns, "cephblockpool", test.poolName, "--wait=false")
-		Expect(err).NotTo(HaveOccurred())
+	It("delete resources in the namespace: "+ns, func() {
+		err := deleteNamespacedResource(ns, "mantlerestore")
+		err = deleteNamespacedResource(ns, "mantlebackup")
+		err = deleteNamespacedResource(ns, "pvc")
+	})
+	It("delete the namespace: "+ns, func() {
+		_, stderr, err := kubectl("delete", "namespace", ns)
+	})
+}
+It("clean up the SCs and RBD pools", func() {
+	for _, sc := range []string{test.storageClassName1, test.storageClassName2} {
+		_, stderr, err := kubectl("delete", "sc", sc)
+	}
+	for _, ns := range []string{cephCluster1Namespace, cephCluster2Namespace} {
+		err := removeAllRBDImageAndSnap(ns, test.poolName)
+		_, _, err = kubectl("delete", "-n", ns, "cephblockpool", test.poolName, "--wait=false")

Contributor Author
I would like to leave these as they are, because the errors raised by tearDown have sometimes made me notice cases that I had not noticed. Of course, if we already know a deletion will fail, keeping the assertion is a different matter.

Contributor
OK, I understood.
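For reference, one way to implement the suggestion above while still surfacing failures would be to collect the errors and assert once at the end of the teardown, rather than dropping the assertions entirely (which, as written, would also leave unused err and stderr variables that the Go compiler rejects). A minimal sketch, not code from this PR, assuming the surrounding test file imports fmt:

It("clean up the SCs and RBD pools", func() {
	// Record failures instead of stopping at the first one, so the rest of
	// the environment is still cleaned up; fail the spec once at the end.
	var errs []error

	for _, sc := range []string{test.storageClassName1, test.storageClassName2} {
		if _, stderr, err := kubectl("delete", "sc", sc); err != nil {
			errs = append(errs, fmt.Errorf("delete sc %s: %w: %s", sc, err, string(stderr)))
		}
	}

	for _, ns := range []string{cephCluster1Namespace, cephCluster2Namespace} {
		if err := removeAllRBDImageAndSnap(ns, test.poolName); err != nil {
			errs = append(errs, fmt.Errorf("clean up RBD images in %s: %w", ns, err))
		}
		if _, stderr, err := kubectl("delete", "-n", ns, "cephblockpool", test.poolName, "--wait=false"); err != nil {
			errs = append(errs, fmt.Errorf("delete cephblockpool in %s: %w: %s", ns, err, string(stderr)))
		}
	}

	Expect(errs).To(BeEmpty())
})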

Comment on lines +79 to +92
It("delete namespace: "+test.tenantNamespace, func() {
_, stderr, err := kubectl("delete", "namespace", test.tenantNamespace)
Expect(err).NotTo(HaveOccurred(), string(stderr))
})

It("clean up the SCs and RBD pools", func() {
_, stderr, err := kubectl("delete", "sc", test.storageClassName)
Expect(err).NotTo(HaveOccurred(), string(stderr))

err = removeAllRBDImageAndSnap(cephCluster1Namespace, test.poolName)
Expect(err).NotTo(HaveOccurred())

_, _, err = kubectl("delete", "-n", cephCluster1Namespace, "cephblockpool", test.poolName, "--wait=false")
Expect(err).NotTo(HaveOccurred())
Contributor
Same as multi_rook_ceph_test.go#tearDownEnv().

Contributor
It's also OK to leave this one as is.
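As an aside, the deleteNamespacedResource helper called in the teardown snippets quoted above is not shown in this conversation. Below is a minimal sketch of one possible implementation, assuming the kubectl test wrapper used throughout these snippets and an fmt import; the actual helper in test/e2e/util.go may differ.

// Hypothetical sketch: delete every resource of the given kind in a namespace.
func deleteNamespacedResource(namespace, kind string) error {
	_, stderr, err := kubectl("delete", kind, "--all", "-n", namespace)
	if err != nil {
		return fmt.Errorf("failed to delete %s in %s: %w: %s", kind, namespace, err, string(stderr))
	}
	return nil
}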

@satoru-takeuchi
Contributor

@llamerada-jp This PR will be merged after my last minor change requests are addressed.

Signed-off-by: Yuji Ito <llamerada.jp@gmail.com>
@satoru-takeuchi merged commit 2f9f4ef into main on Jul 8, 2024
2 checks passed
@satoru-takeuchi deleted the implement-e2e-multi-cluster branch on July 8, 2024 02:28