
alibaba: implement cluster destroy #5348

Merged

Conversation

bd233 (Contributor) commented Nov 2, 2021

Adds code for the openshift-install destroy cluster command for clusters using the Alibaba platform.

openshift-ci bot commented Nov 2, 2021

Hi @bd233. Thanks for your PR.

I'm waiting for an openshift member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@openshift-ci openshift-ci bot added the needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. label Nov 2, 2021
@bd233 bd233 force-pushed the alibabacloud-supplementary branch from 3fce36d to e118273 Compare November 2, 2021 13:05
kwoodson commented Nov 3, 2021

@bd233 Please rebase this now that #5333 has merged.

I rely on the destroy code to clean up my clusters. Thanks!

@openshift-ci openshift-ci bot added the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Nov 3, 2021
@bd233 bd233 force-pushed the alibabacloud-supplementary branch from e118273 to 13e0ac8 Compare November 4, 2021 07:14
@openshift-ci openshift-ci bot removed the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Nov 4, 2021
kwoodson commented Nov 4, 2021

@patrickdillon I have been testing this destroy code and it does clean up the cluster. I'll go through and verify again.

bd233 (Contributor Author) commented Nov 8, 2021

We are awaiting a fix in github.com//pull/5348 which places the resourceGroupID on the security groups.

@kwoodson I have fixed it. Please verify.

bd233 (Contributor Author) commented Nov 8, 2021

@patrickdillon Please review whether this is the desired validation: Alibaba: fix: add metadata server IP validation

bd233 (Contributor Author) commented Nov 8, 2021

@kwoodson I have a question: what should the number of worker nodes in the default configuration (if I do not modify the install-config file) depend on? Is it the number of all available zones?

kwoodson commented Nov 8, 2021

@bd233
It appears the replicas are set by default by the number of zones+1:
https://github.com/openshift/installer/blob/master/pkg/asset/machines/alibabacloud/machinesets.go#L25-L38

The zones are determined here https://github.com/openshift/installer/blob/master/pkg/asset/machines/worker.go#L239-L249

As you know, this can be determined by any method you desire. I think the current behavior is 1 worker per zone, but the standard node count is generally 3 worker nodes. That provides sufficient space to run the ingress with HA as well as the other cluster resources (registry, console, etc.).

It appears that other providers do something similar. AWS looks at subnets or the number of AZs. I think you will always get the number of availability zones + 1, due to this code:

		if int64(idx) < total%numOfAZs {
			replicas++
		}
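
For illustration, here is a minimal, self-contained sketch of how that distribution plays out; the helper name and structure are illustrative, not the installer's code:

package main

import "fmt"

// distributeReplicas spreads a total replica count across availability zones:
// each zone gets total/numOfAZs replicas, and the first total%numOfAZs zones
// get one extra, mirroring the snippet above. Illustrative only.
func distributeReplicas(total int64, zones []string) map[string]int64 {
	numOfAZs := int64(len(zones))
	out := map[string]int64{}
	for idx, zone := range zones {
		replicas := total / numOfAZs
		if int64(idx) < total%numOfAZs {
			replicas++
		}
		out[zone] = replicas
	}
	return out
}

func main() {
	// With the default of 3 replicas and 4 zones, one zone ends up with 0 workers.
	fmt.Println(distributeReplicas(3, []string{"a", "b", "c", "d"})) // map[a:1 b:1 c:1 d:0]
}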

patrickdillon (Contributor) commented:

Default cluster should always have 3 compute nodes.

kwoodson commented Nov 8, 2021

@bd233 I've noticed that the SLB that is created by the ingress operator has DeletionProtection enabled. Therefore the ingress controller's SLB does not get removed.

It also appears that the OSS bucket created for the bootstrap node does not get deleted.

patrickdillon (Contributor) left a comment:

I just reviewed the metadata server validation. This stands alone as its own PR so I would suggest moving it to a new one.

@@ -172,6 +173,7 @@ func validatePrivateZoneID(client *Client, ic *types.InstallConfig, path *field.
// ValidateForProvisioning validates if the install config is valid for provisioning the cluster.
func ValidateForProvisioning(client *Client, ic *types.InstallConfig, metadata *Metadata) error {
	allErrs := field.ErrorList{}
	allErrs = append(allErrs, validateMetadataServerIPNotInMachineCIDR(ic.Networking)...)
Review comment (Contributor):

Because these functions are static validation--that is, they do not require connecting to the alibaba api--they should be moved to pkg/types. Specifically here: https://github.com/openshift/installer/blob/master/pkg/types/validation/installconfig.go#L465

the validatePlatform function will need to pass in the networking struct, similar to how OpenStack does.
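
As a rough sketch of that wiring (a fragment assuming the file's existing imports; the actual validatePlatform signature in installconfig.go may differ):

// Sketch of passing the networking struct into the platform validation in
// pkg/types/validation/installconfig.go; signature and field names here are
// assumptions for illustration, not the installer's exact code.
func validatePlatform(platform *types.Platform, network *types.Networking, fldPath *field.Path) field.ErrorList {
	allErrs := field.ErrorList{}
	if platform.AlibabaCloud != nil {
		// Static checks that need no API client, such as the metadata-server
		// IP validation from this PR, can run here.
		allErrs = append(allErrs, validateMetadataServerIPNotInMachineCIDR(network)...)
	}
	return allErrs
}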

func validateMetadataServerIPNotInMachineCIDR(n *types.Networking) field.ErrorList {
	allErrs := field.ErrorList{}
	fldPath := field.NewPath("networking").Child("machineNetwork")
	matedataServerIP := "100.100.100.200"
Review comment (Contributor):

matedataServerIP -> metadataServerIP

}

func validateIPNotinMachineCIDR(ip string, n *types.Networking) error {
	for _, network := range n.MachineNetwork {
Review comment (Contributor):

we should probably check the cluster network and service network as well

func validateIPNotinMachineCIDR(ip string, n *types.Networking) error {
	for _, network := range n.MachineNetwork {
		if network.CIDR.Contains(net.ParseIP(ip)) {
			return fmt.Errorf("the IP must not be in one of the machine networks")
Review comment (Contributor):

"the IP must not be in one of the machine networks" -> "the machine network contains 100.100.100.200 which is reserved for the metadata service"

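Putting the three suggestions together (rename, broader network checks, clearer message), a possible shape of the validation could be the following sketch; the function name and exact error wording are illustrative, not the code merged in the PR:

package validation

import (
	"fmt"
	"net"

	"k8s.io/apimachinery/pkg/util/validation/field"

	"github.com/openshift/installer/pkg/types"
)

// metadataServerIP is the Alibaba Cloud metadata service address that must not
// fall inside any of the cluster's networks.
const metadataServerIP = "100.100.100.200"

// validateMetadataServerIPNotInNetworks folds in the review suggestions above:
// renamed variable, machine/cluster/service networks all checked, and an error
// message that names the reserved IP. Sketch only.
func validateMetadataServerIPNotInNetworks(n *types.Networking) field.ErrorList {
	allErrs := field.ErrorList{}
	ip := net.ParseIP(metadataServerIP)
	fldPath := field.NewPath("networking")
	detail := fmt.Sprintf("contains %s which is reserved for the metadata service", metadataServerIP)

	for _, network := range n.MachineNetwork {
		if network.CIDR.Contains(ip) {
			allErrs = append(allErrs, field.Invalid(fldPath.Child("machineNetwork"), network.CIDR.String(), detail))
		}
	}
	for _, network := range n.ClusterNetwork {
		if network.CIDR.Contains(ip) {
			allErrs = append(allErrs, field.Invalid(fldPath.Child("clusterNetwork"), network.CIDR.String(), detail))
		}
	}
	for _, serviceNet := range n.ServiceNetwork {
		if serviceNet.Contains(ip) {
			allErrs = append(allErrs, field.Invalid(fldPath.Child("serviceNetwork"), serviceNet.String(), detail))
		}
	}
	return allErrs
}
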
bd233 (Contributor Author) commented Nov 9, 2021

@bd233 It appears the replicas are set by default by the number of zones+1: https://github.com/openshift/installer/blob/master/pkg/asset/machines/alibabacloud/machinesets.go#L25-L38

Thanks, I think I understand the logic here: total is the total number of nodes (from pool.Replicas, the default is 3), and replicas is the number of nodes in each availability zone. If total is less than the number of availability zones, replicas is 0 for some availability zones.

bd233 (Contributor Author) commented Nov 9, 2021

@bd233 I've noticed that the SLB that is created by the ingress operator has DeletionProtection enabled. Therefore the ingress controller's SLB does not get removed.

@kwoodson I think this has nothing to do with deletion protection. If you run the installer destroy cluster command directly, the ECS instances are released immediately and CCM has no chance to delete the SLB, so the installer should be responsible for deleting this SLB.

In addition, as you know, the installer creates two SLB instances. The installer searches for those two SLB instances through the tag kubernetes.io/cluster/<cluster_id>: owned, turns off their deletion protection, and deletes them.
I will use the tag ack.aliyun.com: <cluster_id> to find the SLB created by the ingress operator and delete it. (I need your help to verify that this is feasible; I can't get a cluster to that point.)

It also appears that the OSS bucket created for the bootstrap node does not get deleted.

I created a cluster and deleted it, but could not reproduce the problem. Is there any more detailed information to help me reproduce it?
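
To make the intended flow concrete, a minimal sketch of the tag-based SLB cleanup described above, with the SDK hidden behind a hypothetical interface (the slbAPI method names are assumptions, not the Alibaba Cloud SDK's actual calls):

package destroy

// slbAPI hides the handful of SLB operations the cleanup needs; the real code
// uses the Alibaba Cloud SDK, and these method names are hypothetical.
type slbAPI interface {
	ListLoadBalancerIDsByTag(key, value string) ([]string, error)
	SetDeleteProtection(id string, enabled bool) error
	DeleteLoadBalancer(id string) error
}

// destroySLBs sketches the flow described above: collect the SLBs created by
// the installer (kubernetes.io/cluster/<cluster_id>: owned) and by the ingress
// operator (ack.aliyun.com: <cluster_id>), turn off deletion protection, then
// delete them.
func destroySLBs(api slbAPI, clusterID string) error {
	installerSLBs, err := api.ListLoadBalancerIDsByTag("kubernetes.io/cluster/"+clusterID, "owned")
	if err != nil {
		return err
	}
	ingressSLBs, err := api.ListLoadBalancerIDsByTag("ack.aliyun.com", clusterID)
	if err != nil {
		return err
	}
	for _, id := range append(installerSLBs, ingressSLBs...) {
		if err := api.SetDeleteProtection(id, false); err != nil {
			return err
		}
		if err := api.DeleteLoadBalancer(id); err != nil {
			return err
		}
	}
	return nil
}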

@bd233 bd233 force-pushed the alibabacloud-supplementary branch from a0f6d80 to dfb149e Compare November 9, 2021 10:21
kwoodson commented Nov 9, 2021

@bd233 Let me test the latest code and verify if the SLB gets removed. I'll also look into the OSS buckets.

patrickdillon (Contributor) left a comment:

I think it would be easier to review if you broke this into multiple PRs. One for destroy, one for existing VPC, etc.

Also please squash where possible. I know there is at least one destroy commit that could be squashed.

	for _, arn := range tagResources {
		notDeletedResources = append(notDeletedResources, arn.ResourceARN)
	}
	return errors.New(fmt.Sprintf("There are undeleted cloud resources %q", notDeletedResources))
Review comment (Contributor):

As staebler pointed out in the previous PR, the destroy code should never stop running if it knows that there are outstanding resources to be deleted. The destroyer should loop, attempting to destroy any resources as long as they exist.

Review comment (Contributor Author):

Yes, it is inappropriate to raise an error here.
But I think waitComplete here is redundant: the delete function for each cloud resource already contains the logic to wait for the deletion to complete, so I should remove waitComplete.
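
For reference, a small sketch of the retry-until-gone pattern being discussed, using wait.PollImmediateInfinite as the later revisions of this PR do (the deleteFunc type and runUntilGone helper are illustrative, not PR code):

package destroy

import (
	"log"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// deleteFunc stands in for one resource type's delete routine; it should
// return an error while resources of that type still exist. Hypothetical type.
type deleteFunc func() error

// runUntilGone keeps retrying a delete function until it succeeds instead of
// bailing out with an "undeleted resources" error, which is the behavior the
// reviewers ask for above.
func runUntilGone(name string, f deleteFunc) error {
	return wait.PollImmediateInfinite(10*time.Second, func() (bool, error) {
		if err := f(); err != nil {
			log.Printf("%s: %v, retrying", name, err)
			return false, nil // not done yet; poll again
		}
		return true, nil // resources of this type are gone
	})
}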

@bd233 bd233 force-pushed the alibabacloud-supplementary branch from 67d0892 to 4ddbe60 Compare November 11, 2021 13:18
staebler (Contributor) commented:

/retitle alibaba: implement cluster destroy

@openshift-ci openshift-ci bot changed the title Alibabacloud supplementary alibaba: implement cluster destroy Nov 16, 2021
patrickdillon (Contributor) commented:

/ok-to-test

@openshift-ci openshift-ci bot added ok-to-test Indicates a non-member PR verified by an org member that is safe to test. and removed needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. labels Nov 17, 2021
patrickdillon (Contributor) commented:

/approve

openshift-ci bot commented Nov 17, 2021

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: patrickdillon

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-ci openshift-ci bot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Nov 17, 2021
patrickdillon (Contributor) left a comment:

Can you fix this one small error to pass the test, and we should be ready to merge?

	return tagResources, nil
}

func (o *ClusterUninstaller) ListTagResources(tags map[string]string) (tagResources []tag.TagResource, err error) {
Review comment (Contributor):

This is causing the go-lint test to fail:

 /go/src/github.com/openshift/installer/pkg/destroy/alibabacloud/alibabacloud.go:987:1: exported method ClusterUninstaller.ListTagResources should have comment or be unexported
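
The fix is the standard golint doc comment on the exported method, something along these lines (exact wording in the PR may differ):

// ListTagResources returns the tagged cloud resources that match the given
// tags for this cluster.
func (o *ClusterUninstaller) ListTagResources(tags map[string]string) (tagResources []tag.TagResource, err error) {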

Review comment (Contributor Author):

Done, please continue.

@bd233 bd233 force-pushed the alibabacloud-supplementary branch from 4ddbe60 to 4a6ee69 Compare November 18, 2021 00:48
kwoodson commented:

@bd233 I attempted to run this destroy code today, but it kept running without completing. I ran it with --log-level debug and was able to see that it was attempting to delete the DNS records.

DEBUG Start to search DNS records                  
DEBUG DNS records: SDK.ServerError                 
DEBUG ErrorCode: InvalidDomainName.NoExist         
DEBUG Recommend: https://error-center.aliyun.com/status/search?Keyword=InvalidDomainName.NoExist&source=PopGw 
DEBUG RequestId: 470B1762-645B-53AA-A907-A5796994BD4D 
DEBUG Message: The specified domain name does not exist. Refresh the page and try again. 
...

If InvalidDomainName.NoExist is returned, I would assume that the deletion occurred already. Is it possible to treat NoExist as a successful delete?

cc @patrickdillon

err := wait.PollImmediateInfinite(
	time.Second*10,
	func() (bool, error) {
		ferr := f.execute()
kwoodson commented Nov 18, 2021

I do not love this solution and I am not proposing it, but this addition was able to skip the resources that return an error message like DEBUG Message: The specified domain name does not exist. Refresh the page and try again.:

			if ferr != nil {
				if strings.Contains(ferr.Error(), "not exist") {
					return true, nil
				}
				o.Logger.Debugf("%s: %v", f.name, ferr)
				return false, nil
			}

I recommend checking the ferr message for objects that do not exist and return true to continue the deletion.

@bd233 @patrickdillon WDYT?

Review comment (Contributor):

Is the deleteDNSRecords function incorrectly returning an error when it can't find the domain?

Review comment:

The deleteDNSRecords is returning an error that the DNS record does not exist. I think this behavior is expected from the Alibaba API. We need to gracefully handle NoExist errors so that deletion can complete successfully.

@bd233 Can you confirm this behavior?

Review comment (Contributor):

Another option is to check to see if it exists before trying to delete. That is what I thought this code was doing: https://github.com/openshift/installer/pull/5348/files#diff-5d31e7ab73dc252cc09331a0790fcc8b6cd3add977402effbe19a1b1b24fba39R1249-R1255

Review comment (Contributor Author):

@kwoodson @patrickdillon Sorry to reply so late.
I think this is an abnormal scenario. I see the Start to search DNS records log but not Start to delete DNS records, so the error should come from records, err := o.listrecord(basedomain). But this should not happen; if the domain name does not exist, it should return here:

if len(domains) == 0 {
	return
}

I am trying to reproduce and fix this problem as soon as possible.

I recommend checking the ferr message for objects that do not exist and return true to continue the deletion.
@bd233 @patrickdillon WDYT?

I don't think 'ferr' should be handled; this error should be resolved in 'deleteDnsRecords'. That solution may cause some resources not to be deleted.

@kwoodson Were there any manual deletion operations while the installer was running?

Review comment:

@bd233 Generally speaking there shouldn't be any, but that doesn't stop customers from updating or modifying the cluster objects. The reason this DNS item was modified is that we set up valid DNS and removed the original before removing the cluster. That caused the missing DNS record. I think it is not unreasonable to have components modified or deleted outside of the cluster. If objects return NoExist they should be considered deleted and the destroy code can continue.

Review comment (Contributor Author):

@kwoodson @patrickdillon
I think I understand this problem. Querying domainName is a fuzzy query: if baseDomain is openshift.com, then bd.openshift.com may appear in the result, and len(domains) == 0 is false, so the early return is skipped even though the base domain itself no longer exists.

I'll fix it as soon as possible.
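
A tiny illustration of the mismatch: with a fuzzy search, the result set can be non-empty even when the base domain itself is absent, so an exact-match filter (or an exact-mode query, as done in the fix) is needed before the emptiness check. Illustrative helper, not PR code:

// filterExactDomain keeps only entries equal to baseDomain, so a fuzzy
// DescribeDomains result such as ["bd.openshift.com"] no longer defeats the
// len(domains) == 0 check for baseDomain "openshift.com".
func filterExactDomain(domains []string, baseDomain string) []string {
	var exact []string
	for _, d := range domains {
		if d == baseDomain {
			exact = append(exact, d)
		}
	}
	return exact
}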

bd233 (Contributor Author) commented Nov 19, 2021

I have updated this PR.
I think ListPrivateZones and ListDNSDomain in pkg/asset/installconfig/alibabacloud/client.go also have this problem. I will create a PR to fix it.

@bd233 bd233 force-pushed the alibabacloud-supplementary branch from 4a6ee69 to 9553bdf Compare November 19, 2021 14:19
patrickdillon (Contributor) left a comment:

Can you explain (briefly) the fix for the DNS issue? I am not sure I understand how it is being fixed.

Also, it looks like an Azure Stack commit snuck in during a rebase.

func (o *ClusterUninstaller) deleteDNSRecords() (err error) {
	o.Logger.Debug("Start to search DNS records")

	baseDomain := strings.Join(strings.Split(o.ClusterDomain, ".")[1:], ".")
Review comment (Contributor):

Is this the fix for the fuzzy domain search error? If so, can you explain how this solves the problem? It would also be nice to have a comment in the code explaining why we need the string manipulation.

Review comment (Contributor Author):

Aha, the azure stack commit is a rebase mistake; I will update it later.
LIKE is the default search mode of the DescribeDomains API. In this mode, searching for openshift.com can match xx.openshift.com and openshift.com.xx, but EXACT mode only returns openshift.com.

Also it would be nice to have a comment in the code for why we need the string manipulation

I need the base domain to search for the domain and delete its records, but I cannot get the base domain directly, so I split the cluster domain by "." (the format of the cluster domain is <cluster name>.<base domain>).

I will add a comment here.
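
A small sketch of the string manipulation plus the comment being asked for; the helper name is illustrative, and the real code lives inside deleteDNSRecords:

package main

import (
	"fmt"
	"strings"
)

// baseDomainOf strips the leading cluster-name label from the cluster domain,
// whose format is <cluster name>.<base domain>. The result is then used as the
// KeyWord of an EXACT-mode DescribeDomains search instead of the default LIKE mode.
func baseDomainOf(clusterDomain string) string {
	return strings.Join(strings.Split(clusterDomain, ".")[1:], ".")
}

func main() {
	fmt.Println(baseDomainOf("mycluster.example.com")) // example.com
}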

Adds destroy code for the Alibaba platform.

Alibaba: fix: turn off SLB and ECS protection

Before deleting ECS and SLB, turn off the protection state

Alibaba: fix: update asynchronous destroy mode

Refer to IBMCloud to optimize the part of destroying the cluster

Alibaba: fix: destroy SLB created by ingress operator

Query the SLB created by the ingress operator through its tag, and destroy it.
This commit was produced by running, and all modules verified
@bd233 bd233 force-pushed the alibabacloud-supplementary branch from 9553bdf to 6a08c59 Compare November 20, 2021 17:00
openshift-ci bot commented Nov 20, 2021

@bd233: The following tests failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

Test name Commit Details Required Rerun command
ci/prow/e2e-libvirt 6a08c59 link false /test e2e-libvirt
ci/prow/e2e-aws-single-node 6a08c59 link false /test e2e-aws-single-node
ci/prow/e2e-crc 6a08c59 link false /test e2e-crc
ci/prow/e2e-aws-workers-rhel7 6a08c59 link false /test e2e-aws-workers-rhel7

Full PR test history. Your PR dashboard.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.

kwoodson commented:

I tested this yesterday:

~/go/src/github.com/openshift/installer/bin/openshift-install destroy cluster --dir cluster --log-level debug
DEBUG OpenShift Installer unreleased-master-5214-gc80668d4b12c8a2d89f2914b24877d6d7a43c8f3 
DEBUG Built from commit c80668d4b12c8a2d89f2914b24877d6d7a43c8f3 
DEBUG Retrieving cloud resources by tag {"kubernetes.io/cluster/test-p79w9":"owned"} 
DEBUG Retrieving cloud resources by tag {"ack.aliyun.com":"test-p79w9"} 
DEBUG Start to search DNS records                  
DEBUG Start to search RAM policy "test-p79w9-policy-bootstrap" attachments 
DEBUG Start to check and turn off ECS instances ["i-0xi32ol31nc3j9j5akfz" "i-0xicg6qlyjeopjflu4y1" "i-0xig89d4vg0r8mwf6ov9" "i-0xibcqegh9hceeerbb8h" "i-0xie1u8s8lhavztcj64a" "i-0xifkuk3z82eyof7b6kp" "i-0xi5hd6brb1uccksrzc5"] deletion protection 
DEBUG Start to delete ECS instances ["i-0xi32ol31nc3j9j5akfz" "i-0xicg6qlyjeopjflu4y1" "i-0xig89d4vg0r8mwf6ov9" "i-0xibcqegh9hceeerbb8h" "i-0xie1u8s8lhavztcj64a" "i-0xifkuk3z82eyof7b6kp" "i-0xi5hd6brb1uccksrzc5"] 
DEBUG Start to delete buckets "test-p79w9-bootstrap" 
DEBUG Untag cloud resources &["arn:acs:oss:us-east-1:*:bucket/test-p79w9-bootstrap"] with tags &["kubernetes.io/cluster/test-p79w9"] 
DEBUG Start to detach RAM policy "test-p79w9-policy-bootstrap" 
DEBUG Start to detach RAM policy "test-p79w9-policy-bootstrap" with "test-p79w9-role-bootstrap@role.5807403157081522.onaliyunservice.com" 
DEBUG Start to delete DNS records                  
DEBUG Start to delete DNS record "731226929327065088" 
DEBUG Start to delete objects of buckets test-p79w9-bootstrap 
DEBUG Start to search and delete RAM policy "test-p79w9-policy-bootstrap" 
DEBUG Start to search and delete RAM role "test-p79w9-role-bootstrap" 
DEBUG Start to search RAM policy "test-p79w9-policy-master" attachments 
DEBUG Start to detach RAM policy "test-p79w9-policy-master" 
DEBUG Start to detach RAM policy "test-p79w9-policy-master" with "test-p79w9-role-master@role.5807403157081522.onaliyunservice.com" 
DEBUG Start to search and delete RAM policy "test-p79w9-policy-master" 
DEBUG Start to search and delete RAM role "test-p79w9-role-master" 
DEBUG Start to search RAM policy "test-p79w9-policy-worker" attachments 
DEBUG Start to detach RAM policy "test-p79w9-policy-worker" 
DEBUG Start to detach RAM policy "test-p79w9-policy-worker" with "test-p79w9-role-worker@role.5807403157081522.onaliyunservice.com" 
DEBUG Start to search and delete RAM policy "test-p79w9-policy-worker" 
DEBUG Start to search and delete RAM role "test-p79w9-role-worker" 
DEBUG Start to search private zones                
DEBUG Start to delete security groups ["sg-0xig7un0qyij3hguy35x" "sg-0xibkdteueprzcms7h4a" "sg-0xi3kcj0cd4may8umsb5"] 
DEBUG Start to delete SLBs ["lb-7goby9h2trqyoobuw6a8f" "lb-7go0jssnzhyycbh5qv7tl" "lb-7gop47wfz838slf1ak7s5" "lb-7gokgvsp68kukoroxmb4a"] 
DEBUG Start to delete NAT gateways ["ngw-0xihgz0tfk9thxvwzc1nv"] 
DEBUG Start to delete security group "sg-0xig7un0qyij3hguy35x" rules  
DEBUG Start to delete NAT gateway "ngw-0xihgz0tfk9thxvwzc1nv" 
DEBUG Start to unbind/bind private zone "dfefb8bf19aa992a7d68432f1216a33a" with vpc 
DEBUG Start to delete SLB "lb-7goby9h2trqyoobuw6a8f" 
DEBUG Start to delete private zones                
DEBUG Start to delete private zone "dfefb8bf19aa992a7d68432f1216a33a" 
DEBUG Start to delete SLB "lb-7go0jssnzhyycbh5qv7tl" 
DEBUG Start to delete SLB "lb-7gop47wfz838slf1ak7s5" 
DEBUG Start to delete SLB "lb-7gokgvsp68kukoroxmb4a" 
DEBUG Start to delete security group "sg-0xibkdteueprzcms7h4a" rules  
DEBUG Start to delete security group "sg-0xi3kcj0cd4may8umsb5" rules  
DEBUG Start to delete security group "sg-0xig7un0qyij3hguy35x" 
DEBUG Start to delete security group "sg-0xibkdteueprzcms7h4a" 
DEBUG Start to delete security group "sg-0xi3kcj0cd4may8umsb5" 
DEBUG Start to delete EIPs ["eip-0xi8mxqtfpfzwqfpm93yf"] 
DEBUG Start to delete EIP "eip-0xi8mxqtfpfzwqfpm93yf" 
DEBUG Start to delete VSwitchs ["vsw-0xihwtxkkqqkbj8nflbpy" "vsw-0xi87gk3s7vhkyh6xguyb" "vsw-0xivf3rei4llqzvz48y6j" "vsw-0xiomujawrayubh04nz04"] 
DEBUG Start to delete VSwitch "vsw-0xihwtxkkqqkbj8nflbpy" 
DEBUG Start to delete VSwitch "vsw-0xi87gk3s7vhkyh6xguyb" 
DEBUG Start to delete VSwitch "vsw-0xivf3rei4llqzvz48y6j" 
DEBUG Start to delete VSwitch "vsw-0xiomujawrayubh04nz04" 
DEBUG Start to delete VPCs ["vpc-0xive4iz8few58zdak5oq"] 
DEBUG Start to delete VPC "vpc-0xive4iz8few58zdak5oq" 
DEBUG Purging asset "Metadata" from disk           
DEBUG Purging asset "Master Ignition Customization Check" from disk 
DEBUG Purging asset "Worker Ignition Customization Check" from disk 
DEBUG Purging asset "Terraform Variables" from disk 
DEBUG Purging asset "Kubeconfig Admin Client" from disk 
DEBUG Purging asset "Kubeadmin Password" from disk 
DEBUG Purging asset "Certificate (journal-gatewayd)" from disk 
DEBUG Purging asset "Cluster" from disk            
INFO Time elapsed: 1m48s                         

kwoodson commented:

/lgtm

@openshift-ci openshift-ci bot added the lgtm Indicates that a PR is ready to be merged. label Nov 22, 2021
@openshift-merge-robot openshift-merge-robot merged commit 7fd3584 into openshift:master Nov 22, 2021