
add support for upgrade_settings in both node config and NAP config #5676

Closed
wants to merge 1 commit

Conversation

davidquarles
Contributor

No description provided.

@ghost ghost added the size/m label Feb 13, 2020
@ghost ghost requested a review from nat-henderson February 13, 2020 20:32
@ghost ghost added the documentation label Feb 13, 2020
@davidquarles
Contributor Author

This fixes #5580.

@ghost ghost requested a review from slevenick February 13, 2020 21:28
@davidquarles
Contributor Author

davidquarles commented Feb 14, 2020

@rileykarson @danawillow I'm still finishing up writing tests here, but had two quick questions:

  1. Is it actually preferable to factor the shared upgrade settings config out into its own file, as was done with e.g. google/node_config.go?
  2. Should we do any sort of validation here of the upstream constraint that max_surge + max_unavailable <= 20, or does it make more sense to just let the failure occur and propagate down? If we prefer the former, any guidance is appreciated, as I'm not sure how/where to do that sort of composite validation.

EDIT: cc @slevenick @ndmckinley

@davidquarles
Contributor Author

  1. Should we do any sort of validation here of the upstream constraint that max_surge + max_unavailable <= 20, or does it make more sense to just let the failure occur and propagate down? If we prefer the former, any guidance is appreciated, as I'm not sure how/where to do that sort of composite validation.

Additional context:
After testing, it appears the Google API will allow you to violate this constraint at both the node pool and NAP layers, and it fails silently in subtly different ways. NAP config applies successfully (I used maxUnavailable=1, maxSurge=20), but NAP autoscaling is subsequently broken. I tried this same config at the node pool layer and Terraform emitted no errors, but the node pool itself didn't update and nothing changed in Terraform state.
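For reference, a minimal sketch of the kind of node pool config described above, with the values that violate the constraint (20 + 1 = 21 > 20). Resource and block names follow what this PR proposes; treat the exact shape as an assumption, not the final schema:

```hcl
# Hypothetical sketch: upgrade_settings at the node pool layer with
# max_surge + max_unavailable exceeding the upstream limit of 20.
resource "google_container_node_pool" "example" {
  name    = "example-pool"
  cluster = google_container_cluster.example.name

  upgrade_settings {
    max_surge       = 20
    max_unavailable = 1
  }
}
```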

@nat-henderson
Contributor

This looks great.

  1. Is it actually preferable to factor the shared upgrade settings config out into its own file, as was done with e.g. google/node_config.go?

In my opinion, no - I'd prefer it if you put it in container_cluster.

  1. Should we do any sort of validation here of the upstream constraint that max_surge + max_unavailable <= 20, or does it make more sense to just let the failure occur and propagate down? If we prefer the former, any guidance is appreciated, as I'm not sure how/where to do that sort of composite validation.

I think it's probably best to add this validation - our general approach is that validation is desirable unless it would impose an undue maintenance burden. In this case, where the server side can break if the validation isn't added, it seems extremely useful to add.
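A minimal sketch of the composite check being discussed, as a plain Go function. The function name and the way it would be wired into the provider (e.g. via schema validation or `CustomizeDiff`) are assumptions for illustration, not the provider's actual API:

```go
package main

import "fmt"

// validateUpgradeSettings enforces the upstream GKE constraint discussed
// above: max_surge + max_unavailable must not exceed 20. This is an
// illustrative sketch; the real provider would attach an equivalent check
// to the resource schema so the error surfaces at plan time rather than
// failing silently server-side.
func validateUpgradeSettings(maxSurge, maxUnavailable int) error {
	if sum := maxSurge + maxUnavailable; sum > 20 {
		return fmt.Errorf(
			"max_surge (%d) + max_unavailable (%d) must be <= 20, got %d",
			maxSurge, maxUnavailable, sum)
	}
	return nil
}

func main() {
	// The values from the comment above: 20 + 1 = 21, rejected.
	fmt.Println(validateUpgradeSettings(20, 1))
	// A valid combination passes.
	fmt.Println(validateUpgradeSettings(1, 19))
}
```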

modular-magician added a commit to modular-magician/terraform-provider-google that referenced this pull request Feb 10, 2022
Signed-off-by: Modular Magician <magic-modules@google.com>
modular-magician added a commit that referenced this pull request Feb 10, 2022
Signed-off-by: Modular Magician <magic-modules@google.com>
@rileykarson
Collaborator

Closing PR as stale


I'm going to lock this pull request because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems related to this change, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@github-actions github-actions bot locked as resolved and limited conversation to collaborators Jul 14, 2024