gke node pool: add 404 response code check on delete call #4747
Conversation
Hello! I am a robot who works on Magic Modules PRs. I have detected that you are a community contributor, so your PR will be assigned to someone with a commit-bit on this repo for initial review. Thanks for your contribution! A human will be with you soon. @ScottSuarez, please review this PR or find an appropriate assignee.
I have triggered VCR tests based on this PR's diffs. See the results here: https://ci-oss.hashicorp.engineering/viewQueued.html?itemId=184892
@@ -482,6 +482,10 @@ func resourceContainerNodePoolDelete(d *schema.ResourceData, meta interface{}) error {
 	//Check cluster is in running state
 	_, err = containerClusterAwaitRestingState(config, nodePoolInfo.project, nodePoolInfo.location, nodePoolInfo.cluster, userAgent, d.Timeout(schema.TimeoutCreate))
 	if err != nil {
+		if isGoogleApiErrorWithCode(err, 404) {
+			log.Printf("[INFO] GKE node pool %s doesn't exist to delete", d.Id())
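For readers following along, here is a minimal, self-contained sketch of the pattern this diff relies on: treat a 404 from the cluster state check as "already gone" rather than a failure. The names `apiErrorWithCode`, `deleteNodePool`, and `awaitClusterResting` are placeholders that only mirror the provider's `isGoogleApiErrorWithCode` and `resourceContainerNodePoolDelete` in spirit; the real provider code differs.

```go
package main

import (
	"log"

	"google.golang.org/api/googleapi"
)

// apiErrorWithCode reports whether err is a *googleapi.Error carrying the given
// HTTP status code. It only mirrors the provider's isGoogleApiErrorWithCode
// helper in spirit; the real helper may unwrap errors differently.
func apiErrorWithCode(err error, code int) bool {
	gerr, ok := err.(*googleapi.Error)
	return ok && gerr.Code == code
}

// deleteNodePool sketches the delete-path idea under discussion: if the API
// reports 404 while waiting for the cluster, treat the node pool as already
// gone instead of failing the delete.
func deleteNodePool(awaitClusterResting func() error, id string) error {
	if err := awaitClusterResting(); err != nil {
		if apiErrorWithCode(err, 404) {
			log.Printf("[INFO] GKE node pool %s doesn't exist to delete", id)
			return nil // assumption: the real branch returns without error
		}
		return err
	}
	// ...the real resource would go on to delete the node pool here...
	return nil
}

func main() {
	notFound := func() error { return &googleapi.Error{Code: 404, Message: "cluster not found"} }
	if err := deleteNodePool(notFound, "my-pool"); err != nil {
		log.Fatal(err)
	}
}
```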
I noticed that I made a mistake when sending this PR.
cl/371172872 was correct, but by the time I sent this OSS PR, the change had already been made in OSS (just a few lines below). Since the code change in this PR wasn't what I intended to make, we may just revert it.
On the other hand, unlike cl/371172872, this OSS code has an extra containerClusterAwaitRestingState call before containerNodePoolAwaitRestingState. I wonder if we would introduce a similar bug if we simply revert this PR: when deleting a node pool, should we raise an error if the cluster doesn't exist, or should we just print a log? If it's the latter, then we may want to fix this log message.
@rileykarson @ScottSuarez WDYT?
If the cluster doesn't exist, we can assume the node pool doesn't either. It's not immediately obvious to me what's wrong here.
Sounds good. Maybe I shouldn't have said "fix this log message". Since we assume the node pool doesn't exist because the cluster doesn't, we could improve the log message to "[INFO] GKE cluster %s doesn't exist, skipping node pool %s deletion", nodePoolInfo.cluster, d.Id().
It's not that critical; the code just wasn't doing what I thought it would do, and I think it might be confusing to others as well.
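For illustration, a rough sketch of how the revised branch could read inside resourceContainerNodePoolDelete; the return statements are assumptions about how the branch completes, since the diff above cuts off after the log call.

```go
// Hypothetical revision per the suggestion above; nodePoolInfo and d come from
// the surrounding resourceContainerNodePoolDelete, and the returns are
// assumptions about how the 404 branch completes.
if isGoogleApiErrorWithCode(err, 404) {
	log.Printf("[INFO] GKE cluster %s doesn't exist, skipping node pool %s deletion", nodePoolInfo.cluster, d.Id())
	return nil
}
return err
```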
Good point, we could make it clearer! If you're interested in changing it, feel free, but I don't feel it's required.
Will do. Thanks!
If this PR is for Terraform, I acknowledge that I have:
- Run make test and make lint to ensure it passes unit and linter tests.

Release Note Template for Downstream PRs (will be copied)
Similar to #4512. Reproduced the bug and verified the fix worked locally: cl/371048390, b/186679604
fixes hashicorp/terraform-provider-google#9023
cc @rileykarson