Add ownerReference resilience test
Signed-off-by: killianmuldoon <kmuldoon@vmware.com>
killianmuldoon committed Aug 15, 2023
1 parent 957d695 commit d288e6d
Showing 5 changed files with 246 additions and 57 deletions.
42 changes: 21 additions & 21 deletions test/e2e/README.md
@@ -14,36 +14,36 @@ In order to run the e2e tests the following requirements must be met:
* The testing must occur on a host that can access the VMs deployed to vSphere via the network
* Ginkgo ([download](https://onsi.github.io/ginkgo/#getting-ginkgo))
* Docker ([download](https://www.docker.com/get-started))
- * Kind v0.7.0+ ([download](https://kind.sigs.k8s.io))
+ * Kind v0.20.0+ ([download](https://kind.sigs.k8s.io))

### Environment variables

The first step to running the e2e tests is setting up the required environment variables:

| Environment variable | Description | Example |
|------------------------------|-------------------------------------------------------------------------------------|----------------------------------------------------------------------------------|
| `VSPHERE_SERVER` | The IP address or FQDN of a vCenter 6.7u3 server | `my.vcenter.com` |
| `VSPHERE_USERNAME` | The username used to access the vSphere server | `my-username` |
| `VSPHERE_PASSWORD` | The password used to access the vSphere server | `my-password` |
| `VSPHERE_DATACENTER` | The unique name or inventory path of the datacenter in which VMs will be created | `my-datacenter` or `/my-datacenter` |
| `VSPHERE_FOLDER` | The unique name or inventory path of the folder in which VMs will be created | `my-folder` or `/my-datacenter/vm/my-folder` |
| `VSPHERE_RESOURCE_POOL` | The unique name or inventory path of the resource pool in which VMs will be created | `my-resource-pool` or `/my-datacenter/host/Cluster-1/Resources/my-resource-pool` |
| `VSPHERE_DATASTORE`          | The unique name or inventory path of the datastore in which VMs will be created     | `my-datastore` or `/my-datacenter/datastore/my-datastore`                         |
| `VSPHERE_NETWORK` | The unique name or inventory path of the network to which VMs will be connected | `my-network` or `/my-datacenter/network/my-network` |
| `VSPHERE_SSH_PRIVATE_KEY` | The file path of the private key used to ssh into the CAPV VMs | `/home/foo/bar-ssh.key` |
| `VSPHERE_SSH_AUTHORIZED_KEY` | The public key that is added to the CAPV VMs | `ssh-rsa ABCDEF...XYZ=` |
| `VSPHERE_TLS_THUMBPRINT` | The TLS thumbprint of the vSphere server's certificate which should be trusted | `2A:3F:BC:CA:C0:96:35:D4:B7:A2:AA:3C:C1:33:D9:D7:BE:EC:31:55` |
| `CONTROL_PLANE_ENDPOINT_IP` | The IP that kube-vip should use as a control plane endpoint | `10.10.123.100` |
| `VSPHERE_STORAGE_POLICY` | The name of an existing vSphere storage policy to be assigned to created VMs | `my-test-sp` |

### Flags

| Flag | Description | Default Value |
|-------------------------|----------------------------------------------------------------------------------------------------------|---------------|
| `SKIP_RESOURCE_CLEANUP` | This flag skips cleanup of the resources created during the tests, as well as the kind/bootstrap cluster | `false`       |
| `USE_EXISTING_CLUSTER`  | This flag enables the use of an existing Kubernetes cluster as the management cluster to run tests against. | `false`       |
| `GINKGO_TEST_TIMEOUT` | This sets the timeout for the E2E test suite. | `2h` |
| `GINKGO_FOCUS` | This populates the `-focus` flag of the `ginkgo` run command. | `""` |

### Running the e2e tests

36 changes: 0 additions & 36 deletions test/e2e/capv_clusterclass_quickstart_test.go

This file was deleted.

39 changes: 39 additions & 0 deletions test/e2e/capv_quick_start_test.go
@@ -18,7 +18,11 @@ package e2e

import (
. "github.com/onsi/ginkgo/v2"
"k8s.io/utils/pointer"
capi_e2e "sigs.k8s.io/cluster-api/test/e2e"
"sigs.k8s.io/cluster-api/test/framework"

"sigs.k8s.io/cluster-api-provider-vsphere/test/helpers"
)

var _ = Describe("Cluster Creation using Cluster API quick-start test [PR-Blocking]", func() {
@@ -29,6 +33,41 @@ var _ = Describe("Cluster Creation using Cluster API quick-start test [PR-Blocki
BootstrapClusterProxy: bootstrapClusterProxy,
ArtifactFolder: artifactFolder,
SkipCleanup: skipCleanup,
PostMachinesProvisioned: func(proxy framework.ClusterProxy, namespace, clusterName string) {
// This check ensures that owner references are resilient - i.e. correctly re-reconciled - when removed.
framework.ValidateOwnerReferencesResilience(ctx, proxy, namespace, clusterName,
framework.CoreOwnerReferenceAssertion,
framework.KubeadmBootstrapOwnerReferenceAssertions,
framework.KubeadmControlPlaneOwnerReferenceAssertions,
helpers.VSphereKubernetesReferenceAssertions,
helpers.VSphereExpOwnerReferenceAssertions,
helpers.VSphereReferenceAssertions,
)
},
}
})
})

var _ = Describe("ClusterClass Creation using Cluster API quick-start test [PR-Blocking] [ClusterClass]", func() {
capi_e2e.QuickStartSpec(ctx, func() capi_e2e.QuickStartSpecInput {
return capi_e2e.QuickStartSpecInput{
E2EConfig: e2eConfig,
ClusterctlConfigPath: clusterctlConfigPath,
BootstrapClusterProxy: bootstrapClusterProxy,
ArtifactFolder: artifactFolder,
SkipCleanup: skipCleanup,
Flavor: pointer.String("topology"),
PostMachinesProvisioned: func(proxy framework.ClusterProxy, namespace, clusterName string) {
// This check ensures that owner references are resilient - i.e. correctly re-reconciled - when removed.
framework.ValidateOwnerReferencesResilience(ctx, proxy, namespace, clusterName,
framework.CoreOwnerReferenceAssertion,
framework.KubeadmBootstrapOwnerReferenceAssertions,
framework.KubeadmControlPlaneOwnerReferenceAssertions,
helpers.VSphereKubernetesReferenceAssertions,
helpers.VSphereExpOwnerReferenceAssertions,
helpers.VSphereReferenceAssertions,
)
},
}
})
})
1 change: 1 addition & 0 deletions test/e2e/govmomi_test.go
@@ -55,6 +55,7 @@ func initVSphereSession() {
By("parsing vSphere server URL")
serverURL, err := soap.ParseURL(vsphereServer)
Expect(err).ShouldNot(HaveOccurred())
Expect(serverURL).ToNot(BeNil())

var vimClient *vim25.Client

185 changes: 185 additions & 0 deletions test/helpers/ownerreference_helpers.go
@@ -0,0 +1,185 @@
package helpers

import (
"fmt"
"reflect"
"sort"

"github.com/pkg/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/runtime/schema"
kerrors "k8s.io/apimachinery/pkg/util/errors"
clusterv1 "sigs.k8s.io/cluster-api/api/v1beta1"
bootstrapv1 "sigs.k8s.io/cluster-api/bootstrap/kubeadm/api/v1beta1"
controlplanev1 "sigs.k8s.io/cluster-api/controlplane/kubeadm/api/v1beta1"
addonsv1 "sigs.k8s.io/cluster-api/exp/addons/api/v1beta1"

infrav1 "sigs.k8s.io/cluster-api-provider-vsphere/apis/v1beta1"
)

// TODO: Two owners on vsphereMachines.
// ClusterResourceSetBinding has multiple owners.
// Missing the following types as they are not created as part of this test:
// VSphereClusterIdentityKind = "VSphereClusterIdentity"
// VSphereFailureDomainsKind = "VSphereFailureDomains"
// vSphereDeploymentZonesKind = "vSphereDeploymentZones"
// vSphereClusterTemplateKind = "VSphereClusterTemplate"

// VSphereKubernetesReferenceAssertions maps Kubernetes types to functions which return an error if the passed
// OwnerReferences aren't as expected.
var (
VSphereKubernetesReferenceAssertions = map[string]func([]metav1.OwnerReference) error{

secretKind: func(owners []metav1.OwnerReference) error {
// Secrets must be owned by exactly one of: a KubeadmControlPlane (cluster certificates), a KubeadmConfig (bootstrap data), a ClusterResourceSet, or a VSphereCluster.
return HasOneOfExactOwnersByGVK(owners,
[]schema.GroupVersionKind{kubeadmControlPlaneGVK},
[]schema.GroupVersionKind{kubeadmConfigGVK},
[]schema.GroupVersionKind{clusterResourceSetGVK},
[]schema.GroupVersionKind{vSphereClusterGVK})
},
configMapKind: func(owners []metav1.OwnerReference) error {
// The only configMaps considered here are those owned by a ClusterResourceSet.
return HasOneOfExactOwnersByGVK(owners, []schema.GroupVersionKind{clusterResourceSetGVK})
},
}

VSphereExpOwnerReferenceAssertions = map[string]func([]metav1.OwnerReference) error{
clusterResourceSetKind: func(owners []metav1.OwnerReference) error {
// ClusterResourceSet doesn't have ownerReferences (it is a clusterctl move-hierarchy root).
return HasExactOwnersByGVK(owners, []schema.GroupVersionKind{})
},
// ClusterResourceSetBinding is owned by one or more of the ClusterResourceSets applied to the Cluster.
clusterResourceSetBindingKind: func(owners []metav1.OwnerReference) error {
return HasOneOfExactOwnersByGVK(owners, []schema.GroupVersionKind{clusterResourceSetGVK}, []schema.GroupVersionKind{clusterResourceSetGVK, clusterResourceSetGVK})
},
machinePoolKind: func(owners []metav1.OwnerReference) error {
// MachinePools must be owned by a Cluster.
return HasExactOwnersByGVK(owners, []schema.GroupVersionKind{clusterGVK})
},
}
)

var (
VSphereClusterIdentityKind = "VSphereClusterIdentity"
VSphereFailureDomainsKind = "VSphereFailureDomains"
vSphereDeploymentZonesKind = "vSphereDeploymentZones"

vSphereClusterKind = "VSphereCluster"
vSphereClusterTemplateKind = "VSphereClusterTemplate"

vSphereMachineKind = "VSphereMachine"
vSphereMachineTemplateKind = "VSphereMachineTemplate"
vSphereVMKind = "VSphereVM"

vSphereMachineGVK = infrav1.GroupVersion.WithKind(vSphereMachineKind)

vSphereClusterGVK = infrav1.GroupVersion.WithKind(vSphereClusterKind)
VSphereReferenceAssertions = map[string]func([]metav1.OwnerReference) error{
vSphereClusterKind: func(owners []metav1.OwnerReference) error {
// VSphereCluster must be owned by a Cluster.
return HasOneOfExactOwnersByGVK(owners, []schema.GroupVersionKind{clusterGVK})
},
vSphereClusterTemplateKind: func(owners []metav1.OwnerReference) error {
return HasOneOfExactOwnersByGVK(owners, []schema.GroupVersionKind{clusterClassGVK})
},
vSphereMachineKind: func(owners []metav1.OwnerReference) error {
// VSphereMachine must be owned by both a VSphereCluster and a Machine.
return HasOneOfExactOwnersByGVK(owners, []schema.GroupVersionKind{vSphereClusterGVK, machineGVK})
},
vSphereMachineTemplateKind: func(owners []metav1.OwnerReference) error {
// VSphereMachineTemplate must be owned by a Cluster, or by a ClusterClass when used in a topology.
return HasOneOfExactOwnersByGVK(owners, []schema.GroupVersionKind{clusterGVK}, []schema.GroupVersionKind{clusterClassGVK})
},
vSphereVMKind: func(owners []metav1.OwnerReference) error {
// VSphereVM must be owned by a VSphereMachine.
return HasOneOfExactOwnersByGVK(owners, []schema.GroupVersionKind{vSphereMachineGVK})
},
VSphereClusterIdentityKind: func(owners []metav1.OwnerReference) error { return errors.New("IMPLEMENT ME") },
VSphereFailureDomainsKind: func(owners []metav1.OwnerReference) error { return errors.New("IMPLEMENT ME") },
vSphereDeploymentZonesKind: func(owners []metav1.OwnerReference) error { return errors.New("IMPLEMENT ME") },
}
)

// HasExactOwnersByGVK returns an error if the owner references of an object do not match the passed
// set of GroupVersionKinds exactly, regardless of order.
func HasExactOwnersByGVK(refList []metav1.OwnerReference, wantGVKs []schema.GroupVersionKind) error {
refGVKs := []schema.GroupVersionKind{}
for _, ref := range refList {
refGVK, err := ownerRefGVK(ref)
if err != nil {
return err
}
refGVKs = append(refGVKs, refGVK)
}
sort.SliceStable(refGVKs, func(i int, j int) bool {
return refGVKs[i].String() > refGVKs[j].String()
})
sort.SliceStable(wantGVKs, func(i int, j int) bool {
return wantGVKs[i].String() > wantGVKs[j].String()
})
if !reflect.DeepEqual(wantGVKs, refGVKs) {
return fmt.Errorf("wanted %v, actual %v", wantGVKs, refGVKs)
}
return nil
}

// HasOneOfExactOwnersByGVK returns nil if the owner references match exactly one of the passed sets of
// GroupVersionKinds. It is a convenience approach for checking owner references on objects that
// can have different owner references depending on the cluster topology.
// In a follow-up iteration we can make improvements to check owner references according to the specific use cases vs checking generically "oneOf".
func HasOneOfExactOwnersByGVK(refList []metav1.OwnerReference, possibleGVKS ...[]schema.GroupVersionKind) error {
var allErrs []error
for _, wantGVK := range possibleGVKS {
err := HasExactOwnersByGVK(refList, wantGVK)
if err != nil {
allErrs = append(allErrs, err)
continue
}
return nil
}
return kerrors.NewAggregate(allErrs)
}

func ownerRefGVK(ref metav1.OwnerReference) (schema.GroupVersionKind, error) {
refGV, err := schema.ParseGroupVersion(ref.APIVersion)
if err != nil {
return schema.GroupVersionKind{}, err
}
return schema.GroupVersionKind{Version: refGV.Version, Group: refGV.Group, Kind: ref.Kind}, nil
}

var (
clusterKind = "Cluster"
clusterClassKind = "ClusterClass"

machineKind = "Machine"

clusterGVK = clusterv1.GroupVersion.WithKind(clusterKind)
clusterClassGVK = clusterv1.GroupVersion.WithKind(clusterClassKind)

machineGVK = clusterv1.GroupVersion.WithKind(machineKind)
)

var (
clusterResourceSetKind = "ClusterResourceSet"
clusterResourceSetBindingKind = "ClusterResourceSetBinding"
machinePoolKind = "MachinePool"

clusterResourceSetGVK = addonsv1.GroupVersion.WithKind(clusterResourceSetKind)
)

var (
configMapKind = "ConfigMap"
secretKind = "Secret"
)

var (
kubeadmControlPlaneKind = "KubeadmControlPlane"

kubeadmControlPlaneGVK = controlplanev1.GroupVersion.WithKind(kubeadmControlPlaneKind)
)

var (
kubeadmConfigKind = "KubeadmConfig"

kubeadmConfigGVK = bootstrapv1.GroupVersion.WithKind(kubeadmConfigKind)
)
