
Shorten GSA account_id if necessary #666

Merged (5 commits) Sep 12, 2020

Conversation

@jaketf (Contributor) commented Sep 10, 2020

No description provided.

comment-bot-dev commented Sep 10, 2020

Thanks for the PR! 🚀
✅ Lint checks have passed.

@bharathkkb (Member) left a comment

Thanks @jaketf!
Can we also add something to the description of this var that indicates this behavior?

@jaketf (Contributor, Author) commented Sep 11, 2020

Actually, please hold off on merging this.
CI failed on the first commit and I'm having some issues when testing locally (seemingly unrelated to this change).
I'll add the -xeo pipefail options to the kubectl wrapper for debugging. Thoughts on keeping these options so the script fails sooner and more transparently when there are issues?
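Concretely, something like this at the top of the wrapper script (a sketch, not the exact diff):

```bash
#!/bin/bash
# Fail fast and loudly:
#   -x           print each command before it runs (handy in CI logs)
#   -e           exit immediately when any command returns non-zero
#   -o pipefail  a pipeline fails if any stage fails, not just the last
set -xeo pipefail
```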

@morgante (Contributor)

> I'll add the -xeo pipefail options to the kubectl wrapper for debugging. Thoughts on keeping these options so the script fails sooner and more transparently when there are issues?

That seems like a good idea.

@jaketf (Contributor, Author) commented Sep 11, 2020

Is there a reason we have a kubectl wrapper locally in this module when we actually use the one in the gcloud module?

https://github.com/terraform-google-modules/terraform-google-gcloud/blob/v1.3.0/modules/kubectl-wrapper/scripts/kubectl_wrapper.sh

@morgante (Contributor)

> Is there a reason we have a kubectl wrapper locally in this module when we actually use the one in the gcloud module?

I think that's just a legacy artifact we forgot to remove. /cc @bharathkkb if I'm wrong.

@jaketf (Contributor, Author) commented Sep 11, 2020

Hmm, this got more complicated to test. :)
I'm sure you've run into these cross-repo changes. Do you have suggestions for a good workflow for testing changes across these repos?
I almost feel like making a monster CFT monorepo that subrepos each module, but I figured you might have a simpler solution.
Apologies if this is obvious; I don't contribute to CFT very much.

@morgante (Contributor)

> I'm sure you've run into these cross-repo changes. Do you have suggestions for a good workflow for testing changes across these repos?

Unfortunately it's non-trivial. Our main approach:

  1. Open a PR on the upstream (gcloud) repo with your desired change.
  2. Open a PR on this repo which changes the module to point at your PR/fork from step 1 (see the sketch after this list).
  3. Iterate on both until they're passing and working as expected.
  4. We merge the PR from step 1.
  5. Update your PR on this repo to point back at the released version from step 4, and we merge this PR.
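For step 2, that usually means temporarily rewriting the module source to a git ref, roughly like this, where the fork owner and branch name are placeholders:

```hcl
module "kubectl_wrapper" {
  # Temporarily point at the fork/branch from step 1 instead of the
  # released module; "youruser" and "my-fix" are placeholders.
  source = "github.com/youruser/terraform-google-gcloud//modules/kubectl-wrapper?ref=my-fix"

  # When iterating purely locally, a relative path avoids pushing upstream:
  # source = "../terraform-google-gcloud/modules/kubectl-wrapper"

  # ...the module's usual inputs stay unchanged...
}
```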

@bharathkkb (Member) commented Sep 11, 2020

@jaketf the integration failures here seem related to b/159766801. The lint failures are due to docs; you can run make generate_docs to regenerate them. Did you encounter any other errors locally?

Regarding kubectl_wrapper.sh: yes, we never got around to getting rid of it, and it can be removed. I believe the script in the gcloud repo does fail fast. Unfortunately, the cross-module workflow is as @morgante suggested. I sometimes just use local references to the modules I clone when testing locally, so I don't have to keep pushing upstream.

@jaketf (Contributor, Author) commented Sep 11, 2020

My other local error was in fact user error on my part (and indeed unrelated to this change).
If you omit cluster_name / location when using an existing KSA, we pass along empty strings. This should be caught when we check the number of args in the kubectl wrapper; however, we never updated the arg count check from 3 to 5 when we updated the required args for that script.
This results in a very confusing error to the end user: annotate: command not found.
I've opened terraform-google-modules/terraform-google-gcloud#79 upstream to improve this so the script warns the user when they haven't specified enough args; the guard looks roughly like the sketch below.
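A sketch of the kind of guard that fix adds (the positional-arg names here are placeholders, not the script's actual usage):

```bash
# Refuse to run with missing positional args instead of failing later
# with a confusing "annotate: command not found" error.
# Five args are now required (the old check only demanded three).
if [ "$#" -lt 5 ]; then
  echo "Usage: $0 <cluster_name> <cluster_location> <project_id> <ksa_name> <namespace>" >&2
  echo "Error: got $# argument(s); 5 are required." >&2
  exit 1
fi
```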

Is there a way we can assert "if use_existing_ksa then {cluster_name, location} not empty" in this module?

@bharathkkb (Member)

@jaketf

> Is there a way we can assert "if use_existing_ksa then {cluster_name, location} not empty" in this module?

Unfortunately, I don't think so. Currently the docs do say Required if using existing KSA. Terraform 0.13.x has introduced custom validation, but that is currently scoped to just the variable it is associated with.
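For illustration, a 0.13-style validation block can only reference its own variable, which is what rules out the cross-variable assertion (a sketch with illustrative names and limits):

```hcl
variable "cluster_name" {
  description = "Cluster name. Required if using an existing KSA."
  type        = string
  default     = ""

  validation {
    # The condition may only reference var.cluster_name itself; it cannot
    # see var.use_existing_ksa, so "non-empty when use_existing_ksa is true"
    # is not expressible here in 0.13.
    condition     = length(var.cluster_name) <= 63
    error_message = "The cluster_name value must be at most 63 characters."
  }
}
```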

Thanks for the fix in gcloud; we will need to release a new version there and bump the versions here for it to take effect, similar to #651.

@jaketf (Contributor, Author) commented Sep 11, 2020

> Thanks for the fix in gcloud; we will need to release a new version there and bump the versions here for it to take effect, similar to #651.
Makes sense.
Bumping the version to include the gcloud PR only slightly improves the developer experience in the event someone is as boneheaded as me and doesn't read that docs note :), so it's not very urgent.

However, this PR (which is actually an orthogonal change) is a little higher priority, as it affects the usability of the module in a way the end user cannot easily fix, because it has to do with the module's internals.
I ran into this because the airflow helm chart defines all KSAs like {{ .Release.Name }}-<actual_role>-serviceaccount, which eats up many valuable characters of the 30-character GSA name limit. I think this is a silly naming convention, but the chart is under apache community control. The fix is roughly the truncation sketched below.
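A rough sketch of the idea, where var.name stands in for whatever input derives the GSA name (not necessarily the PR's exact code):

```hcl
locals {
  # google_service_account account_id must be 6-30 characters, so trim
  # overly long KSA-derived names instead of failing at apply time.
  gsa_account_id = substr(var.name, 0, 30)
}
```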

Anyway, sorry for so much back and forth late on a Friday, and thanks to you both for the help! Cheers to the weekend!

@morgante (Contributor)

/gcbrun

@morgante merged commit 0225458 into terraform-google-modules:master on Sep 12, 2020.