
ci-operator/templates/cluster-launch-installer-e2e*: Destroy with --continue-on-error #1542


@wking (Member) commented Sep 18, 2018:

Taking advantage of openshift/installer#252 so we can reap at least most of our resources even if the cluster doesn't come up far enough for the machine API operator to be able to destroy workers. Considering the various stages of cluster health:

  1. Cluster never comes up at all.
  2. Cluster healthy enough to create workers.
  3. Cluster healthy enough to destroy workers.

the only leakage we're worried about is in the window between 2 and 3. Hopefully that window is effectively empty, but without this commit we're currently leaking resources from stage 1 as well.

The two-part destroy attempts are originally from #928, although there's not much to motivate them there. With --continue-on-error destruction, we're already trying pretty hard to clean everything up. So excepting brief network hiccups and such, I think a single pass is sufficient. And we'll want a better backstop to catch any resources that leak through (e.g. orphaned workers), so I'm dropping the retry here.
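Concretely, that boils the template's teardown down to a single permissive pass. A minimal sketch (the function-and-trap wrapper below is only an assumed illustration of how a teardown hook might invoke it, not the template's exact wiring):

teardown() {
    # Single destroy pass: --continue-on-error keeps reaping whatever it can
    # reach instead of aborting on the first resource that fails to delete.
    tectonic destroy --dir=. --log-level=debug --continue-on-error
}
# Run the teardown whenever the job exits, whether the e2e suite passed or failed.
trap teardown EXIT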

CC @smarterclayton, @eparis.

@openshift-ci-robot added the size/XS label Sep 18, 2018
@smarterclayton (Contributor) commented:

We were actually retrying because we were seeing things not fully deleted even when Terraform successfully ran to completion. That looked like EC2 errors leaking out to us. Do we retry internally now even on error?

@wking (Member, Author) commented Sep 18, 2018:

We were actually retrying because we were seeing things not fully deleted even when Terraform successfully ran to completion. That looked like EC2 errors leaking out to us.

Do you have examples? That sounds like a Terraform bug (at least in the pre-machine-API world we were living in when #928 landed).

Do we retry internally now even on error?

Nope. I'd rather file fixes for any Terraform bugs we hit, and have a CI cleanup backstop (@crawford has been talking up... some tool whose name I forget ;).
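
For illustration only, such a backstop could start with a periodic sweep for tagged leftovers (a sketch, not a description of any existing tooling; the tag key below is an assumption, and a real backstop would also need to cover non-EC2 resources):

# Hypothetical leak sweep: list instances still carrying an installer cluster tag
# (tag key assumed). Anything older than the CI job timeout is a candidate for reaping.
aws ec2 describe-instances \
    --filters "Name=tag-key,Values=tectonicClusterID" \
    --query "Reservations[].Instances[].InstanceId" \
    --output text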

As I say in the initial message, subsequent tectonic destroy attempts may fail if an initial run with --continue-on-error borks the state too badly. But if you want to preserve the two-attempt approach here, I'm fine rerolling to use:

tectonic destroy --dir=. --log-level=debug ||
tectonic destroy --dir=. --log-level=debug --continue-on-error
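
(With that form the strict pass runs first, and the permissive --continue-on-error pass only runs as a fallback when the strict pass exits non-zero, so a healthy teardown never pays for the second attempt.)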

@crawford (Contributor) commented:

some tool whose name I forget

Cloud Custodian

@stevekuznetsov (Contributor) commented:

/lgtm

@openshift-ci-robot added the lgtm label Sep 21, 2018
@openshift-ci-robot (Contributor) commented:

[APPROVALNOTIFIER] This PR is APPROVED

This pull request has been approved by: stevekuznetsov, wking

The full list of commands accepted by this bot can be found here.

The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-ci-robot added the approved label Sep 21, 2018
@openshift-merge-robot merged commit 2869b73 into openshift:master Sep 21, 2018
@openshift-ci-robot (Contributor) commented:

@wking: Updated the following 2 configmaps:

  • prow-job-cluster-launch-installer-e2e-smoke configmap using the following files:
    • key cluster-launch-installer-e2e-smoke.yaml using file ci-operator/templates/cluster-launch-installer-e2e-smoke.yaml
  • prow-job-cluster-launch-installer-e2e configmap using the following files:
    • key cluster-launch-installer-e2e.yaml using file ci-operator/templates/cluster-launch-installer-e2e.yaml

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@wking deleted the installer-destroy-continue-on-error branch September 23, 2018 07:06
derekhiggins pushed a commit to derekhiggins/release that referenced this pull request Oct 24, 2023