
Changes to make use of ginkgo v2 #105

Merged · deepakm-ntnx merged 6 commits into nutanix-cloud-native:main on Aug 29, 2022

Conversation

@deepakm-ntnx (Contributor) commented Aug 10, 2022

What this PR does / why we need it:
Ref: kubernetes-sigs/cluster-api#6906
This change upgrades the test code to Ginkgo v2 and to the corresponding cluster-api test framework version.
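
One user-visible piece of the v2 migration is spec labels: the run below filters specs with --label-filter="prblocker", and the quick-start spec reports the labels [prblocker, slow, network]. As a minimal sketch of how labels are declared in Ginkgo v2 (the Describe/It text matches the suite output below, but the body is a placeholder, not this repository's actual test code):

package e2e

import (
	. "github.com/onsi/ginkgo/v2" // v2 moves the import path under /v2
)

// Label() decorators replace Ginkgo v1's convention of encoding tags in
// spec names and selecting them with -focus; `ginkgo --label-filter=...`
// matches against these labels.
var _ = Describe("When following the Cluster API quick-start", Label("prblocker", "slow", "network"), func() {
	It("Should create a workload cluster", func() {
		// placeholder body; the actual test delegates to the shared
		// cluster-api quick-start spec (see quick_start.go in the log below)
	})
})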

Which issue(s) this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close the issue(s) when PR gets merged):
Fixes #

How Has This Been Tested?:

Please describe the tests that you ran to verify your changes. Provide instructions so we can reproduce. Please also list any relevant details for your test configuration and test output.

 ~ make test-e2e                     
KO_DOCKER_REPO=ko.local /Users/deepak.muley/go/src/github.com/deepakm-ntnx/cluster-api-provider-nutanix/hack/tools/bin/ko-v0.11.2 build -B --platform=linux/amd64 -t e2e -L .
2022/08/23 13:39:43 Using base gcr.io/distroless/static:nonroot@sha256:1f580b0a1922c3e54ae15b0758b5747b260bd99d39d40c2edb3e7f6e2452298b for github.com/nutanix-cloud-native/cluster-api-provider-nutanix
2022/08/23 13:39:44 Building github.com/nutanix-cloud-native/cluster-api-provider-nutanix for linux/amd64
2022/08/23 13:39:49 Loading ko.local/cluster-api-provider-nutanix:db72b6c0bad974c3bfedb3ad066399029c299b8658f592351ab8d2316b167293
2022/08/23 13:39:50 Loaded ko.local/cluster-api-provider-nutanix:db72b6c0bad974c3bfedb3ad066399029c299b8658f592351ab8d2316b167293
2022/08/23 13:39:50 Adding tag e2e
2022/08/23 13:39:50 Added tag e2e
ko.local/cluster-api-provider-nutanix:db72b6c0bad974c3bfedb3ad066399029c299b8658f592351ab8d2316b167293
docker tag ko.local/cluster-api-provider-nutanix:e2e ghcr.io/nutanix-cloud-native/cluster-api-provider-nutanix/controller:e2e
/Users/deepak.muley/go/src/github.com/deepakm-ntnx/cluster-api-provider-nutanix/hack/tools/bin/kustomize-v4.5.4 build /Users/deepak.muley/go/src/github.com/deepakm-ntnx/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template --load-restrictor LoadRestrictionsNone > /Users/deepak.muley/go/src/github.com/deepakm-ntnx/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template.yaml
/Users/deepak.muley/go/src/github.com/deepakm-ntnx/cluster-api-provider-nutanix/hack/tools/bin/kustomize-v4.5.4 build /Users/deepak.muley/go/src/github.com/deepakm-ntnx/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-no-secret --load-restrictor LoadRestrictionsNone > /Users/deepak.muley/go/src/github.com/deepakm-ntnx/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-no-secret.yaml
/Users/deepak.muley/go/src/github.com/deepakm-ntnx/cluster-api-provider-nutanix/hack/tools/bin/kustomize-v4.5.4 build /Users/deepak.muley/go/src/github.com/deepakm-ntnx/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-no-credential-ref --load-restrictor LoadRestrictionsNone > /Users/deepak.muley/go/src/github.com/deepakm-ntnx/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-no-credential-ref.yaml
/Users/deepak.muley/go/src/github.com/deepakm-ntnx/cluster-api-provider-nutanix/hack/tools/bin/kustomize-v4.5.4 build /Users/deepak.muley/go/src/github.com/deepakm-ntnx/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-additional-categories --load-restrictor LoadRestrictionsNone > /Users/deepak.muley/go/src/github.com/deepakm-ntnx/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-additional-categories.yaml
/Users/deepak.muley/go/src/github.com/deepakm-ntnx/cluster-api-provider-nutanix/hack/tools/bin/kustomize-v4.5.4 build /Users/deepak.muley/go/src/github.com/deepakm-ntnx/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-no-nmt --load-restrictor LoadRestrictionsNone > /Users/deepak.muley/go/src/github.com/deepakm-ntnx/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-no-nmt.yaml
/Users/deepak.muley/go/src/github.com/deepakm-ntnx/cluster-api-provider-nutanix/hack/tools/bin/kustomize-v4.5.4 build /Users/deepak.muley/go/src/github.com/deepakm-ntnx/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-project --load-restrictor LoadRestrictionsNone > /Users/deepak.muley/go/src/github.com/deepakm-ntnx/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-project.yaml
/Users/deepak.muley/go/src/github.com/deepakm-ntnx/cluster-api-provider-nutanix/hack/tools/bin/kustomize-v4.5.4 build /Users/deepak.muley/go/src/github.com/deepakm-ntnx/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-ccm --load-restrictor LoadRestrictionsNone > /Users/deepak.muley/go/src/github.com/deepakm-ntnx/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-ccm.yaml
/Users/deepak.muley/go/src/github.com/deepakm-ntnx/cluster-api-provider-nutanix/hack/tools/bin/kustomize-v4.5.4 build /Users/deepak.muley/go/src/github.com/deepakm-ntnx/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1alpha4/cluster-template --load-restrictor LoadRestrictionsNone > /Users/deepak.muley/go/src/github.com/deepakm-ntnx/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1alpha4/cluster-template.yaml
/Users/deepak.muley/go/src/github.com/deepakm-ntnx/cluster-api-provider-nutanix/hack/tools/bin/kustomize-v4.5.4 build templates/base > templates/cluster-template.yaml
/Users/deepak.muley/go/src/github.com/deepakm-ntnx/cluster-api-provider-nutanix/hack/tools/bin/kustomize-v4.5.4 build templates/csi > templates/cluster-template-csi.yaml
/Users/deepak.muley/go/src/github.com/deepakm-ntnx/cluster-api-provider-nutanix/hack/tools/bin/kustomize-v4.5.4 build templates/ccm > templates/cluster-template-ccm.yaml
mkdir -p /Users/deepak.muley/go/src/github.com/deepakm-ntnx/cluster-api-provider-nutanix/_artifacts
/Users/deepak.muley/go/src/github.com/deepakm-ntnx/cluster-api-provider-nutanix/hack/tools/bin/ginkgo-v2.1.4 -v --trace --tags=e2e --label-filter="prblocker" --fail-fast --nodes=1 \
	    --no-color=false --output-dir="/Users/deepak.muley/go/src/github.com/deepakm-ntnx/cluster-api-provider-nutanix/_artifacts" --junit-report="junit.e2e_suite.1.xml" \
	     ./test/e2e -- \
	    -e2e.artifacts-folder="/Users/deepak.muley/go/src/github.com/deepakm-ntnx/cluster-api-provider-nutanix/_artifacts" \
	    -e2e.config="/Users/deepak.muley/go/src/github.com/deepakm-ntnx/cluster-api-provider-nutanix/test/e2e/config/nutanix.yaml" \
	    -e2e.skip-resource-cleanup=false -e2e.use-existing-cluster=false
Running Suite: capx-e2e - /Users/deepak.muley/go/src/github.com/deepakm-ntnx/cluster-api-provider-nutanix/test/e2e
==================================================================================================================
Random Seed: 1661287198

Will run 1 of 21 specs
------------------------------
[SynchronizedBeforeSuite] 
/Users/deepak.muley/go/src/github.com/deepakm-ntnx/cluster-api-provider-nutanix/test/e2e/e2e_suite_test.go:118
STEP: Initializing a runtime.Scheme with all the GVK relevant for this test 08/23/22 13:40:07.54
STEP: Loading the e2e test configuration from "/Users/deepak.muley/go/src/github.com/deepakm-ntnx/cluster-api-provider-nutanix/test/e2e/config/nutanix.yaml" 08/23/22 13:40:07.541
STEP: Creating a clusterctl local repository into "/Users/deepak.muley/go/src/github.com/deepakm-ntnx/cluster-api-provider-nutanix/_artifacts" 08/23/22 13:40:07.542
STEP: Reading the ClusterResourceSet manifest ./data/cni/kindnet/kindnet.yaml 08/23/22 13:40:07.542
STEP: Setting up the bootstrap cluster 08/23/22 13:40:11.124
STEP: Creating the bootstrap cluster 08/23/22 13:40:11.124
INFO: Creating a kind cluster with name "test-rdy4w0"
Creating cluster "test-rdy4w0" ...
 • Ensuring node image (kindest/node:v1.21.10) 🖼  ...
 ✓ Ensuring node image (kindest/node:v1.21.10) 🖼
 • Preparing nodes 📦   ...
 ✓ Preparing nodes 📦 
 • Writing configuration 📜  ...
 ✓ Writing configuration 📜
 • Starting control-plane 🕹️  ...
 ✓ Starting control-plane 🕹️
 • Installing CNI 🔌  ...
 ✓ Installing CNI 🔌
 • Installing StorageClass 💾  ...
 ✓ Installing StorageClass 💾
INFO: The kubeconfig file for the kind cluster is /var/folders/l3/8dxq0z892n992ffkmb33003r0000gn/T/e2e-kind3920008627
INFO: Loading image: "ghcr.io/nutanix-cloud-native/cluster-api-provider-nutanix/controller:e2e"
INFO: Image ghcr.io/nutanix-cloud-native/cluster-api-provider-nutanix/controller:e2e is present in local container image cache
INFO: Loading image: "registry.k8s.io/cluster-api/cluster-api-controller:v1.1.4"
INFO: Image registry.k8s.io/cluster-api/cluster-api-controller:v1.1.4 is present in local container image cache
INFO: Loading image: "registry.k8s.io/cluster-api/kubeadm-bootstrap-controller:v1.1.4"
INFO: Image registry.k8s.io/cluster-api/kubeadm-bootstrap-controller:v1.1.4 is present in local container image cache
INFO: Loading image: "registry.k8s.io/cluster-api/kubeadm-control-plane-controller:v1.1.4"
INFO: Image registry.k8s.io/cluster-api/kubeadm-control-plane-controller:v1.1.4 is present in local container image cache
STEP: Initializing the bootstrap cluster 08/23/22 13:41:07.676
INFO: clusterctl init --core cluster-api --bootstrap kubeadm --control-plane kubeadm --infrastructure nutanix --config /Users/deepak.muley/go/src/github.com/deepakm-ntnx/cluster-api-provider-nutanix/_artifacts/repository/clusterctl-config.yaml --kubeconfig /var/folders/l3/8dxq0z892n992ffkmb33003r0000gn/T/e2e-kind3920008627
INFO: Waiting for provider controllers to be running
STEP: Waiting for deployment capi-kubeadm-bootstrap-system/capi-kubeadm-bootstrap-controller-manager to be available 08/23/22 13:42:12.402
INFO: Creating log watcher for controller capi-kubeadm-bootstrap-system/capi-kubeadm-bootstrap-controller-manager, pod capi-kubeadm-bootstrap-controller-manager-647446bd74-qrp5c, container manager
STEP: Waiting for deployment capi-kubeadm-control-plane-system/capi-kubeadm-control-plane-controller-manager to be available 08/23/22 13:42:12.424
INFO: Creating log watcher for controller capi-kubeadm-control-plane-system/capi-kubeadm-control-plane-controller-manager, pod capi-kubeadm-control-plane-controller-manager-cb8bf9c5d-2ghsd, container manager
STEP: Waiting for deployment capi-system/capi-controller-manager to be available 08/23/22 13:42:12.47
INFO: Creating log watcher for controller capi-system/capi-controller-manager, pod capi-controller-manager-865c9d9754-qksrt, container manager
STEP: Waiting for deployment capx-system/capx-controller-manager to be available 08/23/22 13:42:12.507
INFO: Creating log watcher for controller capx-system/capx-controller-manager, pod capx-controller-manager-5b4dffd4f7-8t9rq, container kube-rbac-proxy
INFO: Creating log watcher for controller capx-system/capx-controller-manager, pod capx-controller-manager-5b4dffd4f7-8t9rq, container manager
------------------------------
[SynchronizedBeforeSuite] PASSED [125.273 seconds]
[SynchronizedBeforeSuite] 
/Users/deepak.muley/go/src/github.com/deepakm-ntnx/cluster-api-provider-nutanix/test/e2e/e2e_suite_test.go:118

  Begin Captured GinkgoWriter Output >>
    STEP: Initializing a runtime.Scheme with all the GVK relevant for this test 08/23/22 13:40:07.54
    STEP: Loading the e2e test configuration from "/Users/deepak.muley/go/src/github.com/deepakm-ntnx/cluster-api-provider-nutanix/test/e2e/config/nutanix.yaml" 08/23/22 13:40:07.541
    STEP: Creating a clusterctl local repository into "/Users/deepak.muley/go/src/github.com/deepakm-ntnx/cluster-api-provider-nutanix/_artifacts" 08/23/22 13:40:07.542
    STEP: Reading the ClusterResourceSet manifest ./data/cni/kindnet/kindnet.yaml 08/23/22 13:40:07.542
    STEP: Setting up the bootstrap cluster 08/23/22 13:40:11.124
    STEP: Creating the bootstrap cluster 08/23/22 13:40:11.124
    INFO: Creating a kind cluster with name "test-rdy4w0"
    INFO: The kubeconfig file for the kind cluster is /var/folders/l3/8dxq0z892n992ffkmb33003r0000gn/T/e2e-kind3920008627
    INFO: Loading image: "ghcr.io/nutanix-cloud-native/cluster-api-provider-nutanix/controller:e2e"
    INFO: Image ghcr.io/nutanix-cloud-native/cluster-api-provider-nutanix/controller:e2e is present in local container image cache
    INFO: Loading image: "registry.k8s.io/cluster-api/cluster-api-controller:v1.1.4"
    INFO: Image registry.k8s.io/cluster-api/cluster-api-controller:v1.1.4 is present in local container image cache
    INFO: Loading image: "registry.k8s.io/cluster-api/kubeadm-bootstrap-controller:v1.1.4"
    INFO: Image registry.k8s.io/cluster-api/kubeadm-bootstrap-controller:v1.1.4 is present in local container image cache
    INFO: Loading image: "registry.k8s.io/cluster-api/kubeadm-control-plane-controller:v1.1.4"
    INFO: Image registry.k8s.io/cluster-api/kubeadm-control-plane-controller:v1.1.4 is present in local container image cache
    STEP: Initializing the bootstrap cluster 08/23/22 13:41:07.676
    INFO: clusterctl init --core cluster-api --bootstrap kubeadm --control-plane kubeadm --infrastructure nutanix --config /Users/deepak.muley/go/src/github.com/deepakm-ntnx/cluster-api-provider-nutanix/_artifacts/repository/clusterctl-config.yaml --kubeconfig /var/folders/l3/8dxq0z892n992ffkmb33003r0000gn/T/e2e-kind3920008627
    INFO: Waiting for provider controllers to be running
    STEP: Waiting for deployment capi-kubeadm-bootstrap-system/capi-kubeadm-bootstrap-controller-manager to be available 08/23/22 13:42:12.402
    INFO: Creating log watcher for controller capi-kubeadm-bootstrap-system/capi-kubeadm-bootstrap-controller-manager, pod capi-kubeadm-bootstrap-controller-manager-647446bd74-qrp5c, container manager
    STEP: Waiting for deployment capi-kubeadm-control-plane-system/capi-kubeadm-control-plane-controller-manager to be available 08/23/22 13:42:12.424
    INFO: Creating log watcher for controller capi-kubeadm-control-plane-system/capi-kubeadm-control-plane-controller-manager, pod capi-kubeadm-control-plane-controller-manager-cb8bf9c5d-2ghsd, container manager
    STEP: Waiting for deployment capi-system/capi-controller-manager to be available 08/23/22 13:42:12.47
    INFO: Creating log watcher for controller capi-system/capi-controller-manager, pod capi-controller-manager-865c9d9754-qksrt, container manager
    STEP: Waiting for deployment capx-system/capx-controller-manager to be available 08/23/22 13:42:12.507
    INFO: Creating log watcher for controller capx-system/capx-controller-manager, pod capx-controller-manager-5b4dffd4f7-8t9rq, container kube-rbac-proxy
    INFO: Creating log watcher for controller capx-system/capx-controller-manager, pod capx-controller-manager-5b4dffd4f7-8t9rq, container manager
  << End Captured GinkgoWriter Output
------------------------------
SSSSSSSSSSSSSSS
------------------------------
When following the Cluster API quick-start
  Should create a workload cluster [prblocker, slow, network]
  /Users/deepak.muley/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.2.0-beta.0.0.20220823125924-6e30f66cc3df/e2e/quick_start.go:78
STEP: Creating a namespace for hosting the "quick-start" test spec 08/23/22 13:42:12.816
INFO: Creating namespace quick-start-ka7whx
INFO: Creating event watcher for namespace "quick-start-ka7whx"
STEP: Creating a workload cluster 08/23/22 13:42:12.854
INFO: Creating the workload cluster with name "quick-start-j49m9h" using the "(default)" template (Kubernetes v1.21.10, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
INFO: clusterctl config cluster quick-start-j49m9h --infrastructure (default) --kubernetes-version v1.21.10 --control-plane-machine-count 1 --worker-machine-count 1 --flavor (default)
INFO: Applying the cluster template yaml to the cluster
Error from server (InternalError): error when creating "STDIN": Internal error occurred: failed calling webhook "default.kubeadmcontrolplane.controlplane.cluster.x-k8s.io": Post "https://capi-kubeadm-control-plane-webhook-service.capi-kubeadm-control-plane-system.svc:443/mutate-controlplane-cluster-x-k8s-io-v1beta1-kubeadmcontrolplane?timeout=10s": dial tcp 10.96.178.61:443: connect: connection refused

Error from server (InternalError): error when creating "STDIN": Internal error occurred: failed calling webhook "default.kubeadmcontrolplane.controlplane.cluster.x-k8s.io": Post "https://capi-kubeadm-control-plane-webhook-service.capi-kubeadm-control-plane-system.svc:443/mutate-controlplane-cluster-x-k8s-io-v1beta1-kubeadmcontrolplane?timeout=10s": dial tcp 10.96.178.61:443: connect: connection refused

Error from server (InternalError): error when creating "STDIN": Internal error occurred: failed calling webhook "default.kubeadmcontrolplane.controlplane.cluster.x-k8s.io": Post "https://capi-kubeadm-control-plane-webhook-service.capi-kubeadm-control-plane-system.svc:443/mutate-controlplane-cluster-x-k8s-io-v1beta1-kubeadmcontrolplane?timeout=10s": dial tcp 10.96.178.61:443: connect: connection refused

Error from server (InternalError): error when creating "STDIN": Internal error occurred: failed calling webhook "default.kubeadmcontrolplane.controlplane.cluster.x-k8s.io": Post "https://capi-kubeadm-control-plane-webhook-service.capi-kubeadm-control-plane-system.svc:443/mutate-controlplane-cluster-x-k8s-io-v1beta1-kubeadmcontrolplane?timeout=10s": dial tcp 10.96.178.61:443: connect: connection refused

Error from server (InternalError): error when creating "STDIN": Internal error occurred: failed calling webhook "default.kubeadmcontrolplane.controlplane.cluster.x-k8s.io": Post "https://capi-kubeadm-control-plane-webhook-service.capi-kubeadm-control-plane-system.svc:443/mutate-controlplane-cluster-x-k8s-io-v1beta1-kubeadmcontrolplane?timeout=10s": dial tcp 10.96.178.61:443: connect: connection refused

Error from server (InternalError): error when creating "STDIN": Internal error occurred: failed calling webhook "default.kubeadmcontrolplane.controlplane.cluster.x-k8s.io": Post "https://capi-kubeadm-control-plane-webhook-service.capi-kubeadm-control-plane-system.svc:443/mutate-controlplane-cluster-x-k8s-io-v1beta1-kubeadmcontrolplane?timeout=10s": dial tcp 10.96.178.61:443: connect: connection refused

Error from server (InternalError): error when creating "STDIN": Internal error occurred: failed calling webhook "default.kubeadmcontrolplane.controlplane.cluster.x-k8s.io": Post "https://capi-kubeadm-control-plane-webhook-service.capi-kubeadm-control-plane-system.svc:443/mutate-controlplane-cluster-x-k8s-io-v1beta1-kubeadmcontrolplane?timeout=10s": EOF

configmap/cni-quick-start-j49m9h-crs-cni unchanged
secret/quick-start-j49m9h configured
clusterresourceset.addons.cluster.x-k8s.io/quick-start-j49m9h-crs-cni unchanged
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/quick-start-j49m9h-kcfg-0 unchanged
cluster.cluster.x-k8s.io/quick-start-j49m9h unchanged
machinedeployment.cluster.x-k8s.io/quick-start-j49m9h-wmd unchanged
machinehealthcheck.cluster.x-k8s.io/quick-start-j49m9h-mhc configured
kubeadmcontrolplane.controlplane.cluster.x-k8s.io/quick-start-j49m9h-kcp created
nutanixcluster.infrastructure.cluster.x-k8s.io/quick-start-j49m9h unchanged
nutanixmachinetemplate.infrastructure.cluster.x-k8s.io/quick-start-j49m9h-mt-0 unchanged

INFO: Waiting for the cluster infrastructure to be provisioned
STEP: Waiting for cluster to enter the provisioned phase 08/23/22 13:42:21.516
INFO: Waiting for control plane to be initialized
INFO: Waiting for the first control plane machine managed by quick-start-ka7whx/quick-start-j49m9h-kcp to be provisioned
STEP: Waiting for one control plane node to exist 08/23/22 13:42:31.571
INFO: Waiting for control plane to be ready
INFO: Waiting for control plane quick-start-ka7whx/quick-start-j49m9h-kcp to be ready (implies underlying nodes to be ready as well)
STEP: Waiting for the control plane to be ready 08/23/22 13:43:21.656
STEP: Checking all the the control plane machines are in the expected failure domains 08/23/22 13:43:51.687
INFO: Waiting for the machine deployments to be provisioned
STEP: Waiting for the workload nodes to exist 08/23/22 13:43:51.738
STEP: Checking all the machines controlled by quick-start-j49m9h-wmd are in the "" failure domain 08/23/22 13:44:11.783
INFO: Waiting for the machine pools to be provisioned
STEP: PASSED! 08/23/22 13:44:11.864
STEP: Dumping logs from the "quick-start-j49m9h" workload cluster 08/23/22 13:44:11.864
Failed to get logs for Machine quick-start-j49m9h-kcp-7vvw2, Cluster quick-start-ka7whx/quick-start-j49m9h: error creating container exec: Error response from daemon: No such container: quick-start-j49m9h-kcp-7vvw2
Failed to get logs for Machine quick-start-j49m9h-wmd-9758fd5c-5frjx, Cluster quick-start-ka7whx/quick-start-j49m9h: error creating container exec: Error response from daemon: No such container: quick-start-j49m9h-wmd-9758fd5c-5frjx
STEP: Dumping all the Cluster API resources in the "quick-start-ka7whx" namespace 08/23/22 13:44:12.056
STEP: Deleting cluster quick-start-ka7whx/quick-start-j49m9h 08/23/22 13:44:12.318
STEP: Deleting cluster quick-start-j49m9h 08/23/22 13:44:12.379
INFO: Waiting for the Cluster quick-start-ka7whx/quick-start-j49m9h to be deleted
STEP: Waiting for cluster quick-start-j49m9h to be deleted 08/23/22 13:44:12.405
STEP: Deleting namespace used for hosting the "quick-start" test spec 08/23/22 13:44:42.438
INFO: Deleting namespace quick-start-ka7whx
------------------------------
• [SLOW TEST] [149.657 seconds]
When following the Cluster API quick-start [prblocker, slow, network]
/Users/deepak.muley/go/src/github.com/deepakm-ntnx/cluster-api-provider-nutanix/test/e2e/capx_quick_start_test.go:28
  Should create a workload cluster
  /Users/deepak.muley/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.2.0-beta.0.0.20220823125924-6e30f66cc3df/e2e/quick_start.go:78

  Begin Captured GinkgoWriter Output >>
    STEP: Creating a namespace for hosting the "quick-start" test spec 08/23/22 13:42:12.816
    INFO: Creating namespace quick-start-ka7whx
    INFO: Creating event watcher for namespace "quick-start-ka7whx"
    STEP: Creating a workload cluster 08/23/22 13:42:12.854
    INFO: Creating the workload cluster with name "quick-start-j49m9h" using the "(default)" template (Kubernetes v1.21.10, 1 control-plane machines, 1 worker machines)
    INFO: Getting the cluster template yaml
    INFO: clusterctl config cluster quick-start-j49m9h --infrastructure (default) --kubernetes-version v1.21.10 --control-plane-machine-count 1 --worker-machine-count 1 --flavor (default)
    INFO: Applying the cluster template yaml to the cluster
    INFO: Waiting for the cluster infrastructure to be provisioned
    STEP: Waiting for cluster to enter the provisioned phase 08/23/22 13:42:21.516
    INFO: Waiting for control plane to be initialized
    INFO: Waiting for the first control plane machine managed by quick-start-ka7whx/quick-start-j49m9h-kcp to be provisioned
    STEP: Waiting for one control plane node to exist 08/23/22 13:42:31.571
    INFO: Waiting for control plane to be ready
    INFO: Waiting for control plane quick-start-ka7whx/quick-start-j49m9h-kcp to be ready (implies underlying nodes to be ready as well)
    STEP: Waiting for the control plane to be ready 08/23/22 13:43:21.656
    STEP: Checking all the the control plane machines are in the expected failure domains 08/23/22 13:43:51.687
    INFO: Waiting for the machine deployments to be provisioned
    STEP: Waiting for the workload nodes to exist 08/23/22 13:43:51.738
    STEP: Checking all the machines controlled by quick-start-j49m9h-wmd are in the "" failure domain 08/23/22 13:44:11.783
    INFO: Waiting for the machine pools to be provisioned
    STEP: PASSED! 08/23/22 13:44:11.864
    STEP: Dumping logs from the "quick-start-j49m9h" workload cluster 08/23/22 13:44:11.864
    STEP: Dumping all the Cluster API resources in the "quick-start-ka7whx" namespace 08/23/22 13:44:12.056
    STEP: Deleting cluster quick-start-ka7whx/quick-start-j49m9h 08/23/22 13:44:12.318
    STEP: Deleting cluster quick-start-j49m9h 08/23/22 13:44:12.379
    INFO: Waiting for the Cluster quick-start-ka7whx/quick-start-j49m9h to be deleted
    STEP: Waiting for cluster quick-start-j49m9h to be deleted 08/23/22 13:44:12.405
    STEP: Deleting namespace used for hosting the "quick-start" test spec 08/23/22 13:44:42.438
    INFO: Deleting namespace quick-start-ka7whx
  << End Captured GinkgoWriter Output
------------------------------
SSSSS
------------------------------
[SynchronizedAfterSuite] 
/Users/deepak.muley/go/src/github.com/deepakm-ntnx/cluster-api-provider-nutanix/test/e2e/e2e_suite_test.go:170
STEP: Dumping logs from the bootstrap cluster 08/23/22 13:44:42.476
Failed to get logs for the bootstrap cluster node test-rdy4w0-control-plane: exit status 1
STEP: Tearing down the management cluster 08/23/22 13:44:43.137
------------------------------
[SynchronizedAfterSuite] PASSED [3.809 seconds]
[SynchronizedAfterSuite] 
/Users/deepak.muley/go/src/github.com/deepakm-ntnx/cluster-api-provider-nutanix/test/e2e/e2e_suite_test.go:170

  Begin Captured GinkgoWriter Output >>
    STEP: Dumping logs from the bootstrap cluster 08/23/22 13:44:42.476
    STEP: Tearing down the management cluster 08/23/22 13:44:43.137
  << End Captured GinkgoWriter Output
------------------------------
[ReportAfterSuite] Autogenerated ReportAfterSuite for --junit-report
autogenerated by Ginkgo
------------------------------
[ReportAfterSuite] PASSED [0.001 seconds]
[ReportAfterSuite] Autogenerated ReportAfterSuite for --junit-report
autogenerated by Ginkgo
------------------------------

Ran 1 of 21 Specs in 278.740 seconds
SUCCESS! -- 1 Passed | 0 Failed | 0 Pending | 20 Skipped
PASS

Ginkgo ran 1 suite in 4m47.948802684s
Test Suite Passed

Special notes for your reviewer:

Please confirm that if this PR changes any image versions, then that's the sole change this PR makes.

Release note:


@deepakm-ntnx changed the title from "[WIP] Changes to make use of ginkgo v2" to "Changes to make use of ginkgo v2" on Aug 23, 2022
@@ -323,7 +322,9 @@ test-kubectl-workload: ## Run kubectl queries to get all capx workload related o
 .PHONY: test-e2e
 test-e2e: docker-build-e2e $(GINKGO_BIN) cluster-e2e-templates cluster-templates ## Run the end-to-end tests
 	mkdir -p $(ARTIFACTS)
-	$(GINKGO) -v -trace -tags=e2e -focus="$(GINKGO_FOCUS)" $(_SKIP_ARGS) -nodes=$(GINKGO_NODES) --noColor=$(GINKGO_NOCOLOR) $(GINKGO_ARGS) ./test/e2e -- \
+	$(GINKGO) -v --trace --tags=e2e --label-filter="$(LABEL_FILTERS)" --fail-fast $(_SKIP_ARGS) --nodes=$(GINKGO_NODES) \
+		--no-color=$(GINKGO_NOCOLOR) --output-dir="$(ARTIFACTS)" --junit-report="junit.e2e_suite.1.xml" \
@thunderboltsid (Contributor) commented Aug 24, 2022:

consider setting the junit-report via a variable, similar to the other args

Contributor:

@deepakm-ntnx Please don't resolve without addressing the issue :)

@deepakm-ntnx (Contributor, Author):

@thunderboltsid I don't think this needs to be changed at this time; it would add unnecessary variables

Contributor:

idk; a customizable report name usually helps keep the CI code cleaner, since you don't have to make assumptions about magic strings present in Makefiles.

Contributor:

My main point there is that magic strings tend to replicate across the codebase 🙂 and sometimes across codebases. That is why these things are usually parameterized: the calling entity gets control while a sane default is kept. With that said, we can do that in a separate PR.
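
For illustration, the suggested parameterization might look like the following Makefile fragment; JUNIT_REPORT is a hypothetical variable name, not something this PR adds:

# Sketch only: keep the current file name as the default, let CI override it.
JUNIT_REPORT ?= junit.e2e_suite.1.xml

.PHONY: test-e2e
test-e2e: docker-build-e2e $(GINKGO_BIN) cluster-e2e-templates cluster-templates ## Run the end-to-end tests
	mkdir -p $(ARTIFACTS)
	$(GINKGO) -v --trace --tags=e2e --label-filter="$(LABEL_FILTERS)" --fail-fast $(_SKIP_ARGS) --nodes=$(GINKGO_NODES) \
		--no-color=$(GINKGO_NOCOLOR) --output-dir="$(ARTIFACTS)" --junit-report="$(JUNIT_REPORT)" \
		./test/e2e # trailing -e2e.* flags unchanged from the diff above

A caller could then run, for example, make test-e2e JUNIT_REPORT=junit.pr105.xml without touching the Makefile.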

@deepakm-ntnx merged commit 4176c1e into nutanix-cloud-native:main on Aug 29, 2022
deepakm-ntnx added a commit to deepakm-ntnx/cluster-api-provider-nutanix that referenced this pull request Aug 29, 2022
* [WIP] Changes to make use of ginkgo v2

Ref: kubernetes-sigs/cluster-api#6906

* KRBN-5429 added validation for machine template parameters

* fixes for supporting ginkgo v2

* fixed panic by calling setupsinglehandler only once

* updated package with cve fixes
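
The "fixed panic by calling setupsinglehandler only once" commit above is terse; assuming it refers to controller-runtime's SetupSignalHandler, which panics if invoked a second time in the same process, the usual guard is sync.Once. A sketch of that pattern, not the repository's actual code:

package e2e

import (
	"context"
	"sync"

	ctrl "sigs.k8s.io/controller-runtime"
)

var (
	signalOnce sync.Once
	signalCtx  context.Context
)

// signalHandlerContext is a hypothetical helper: SetupSignalHandler may
// only be called once per process, so the sync.Once lets every caller
// share one context instead of triggering the panic.
func signalHandlerContext() context.Context {
	signalOnce.Do(func() {
		signalCtx = ctrl.SetupSignalHandler()
	})
	return signalCtx
}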
@deepakm-ntnx deleted the ginkgo_v2 branch August 29, 2022 19:52