azurerm provider should define kube_config_raw as sensitive #1220
Comments
Awesome, thanks a lot!

Hey @subesokun, just a heads up that we have released v1.5.0 of the provider, which includes this fix 🙂

@katbyte thanks for the heads up! Unfortunately I've noticed today that also in the […]

Yep! Thanks for letting us know 🙂

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉, please reach out to my human friends 👉 hashibot-feedback@hashicorp.com. Thanks!
This issue was originally opened by @subesokun as hashicorp/terraform#18013. It was migrated here as a result of the provider split. The original body of the issue is below.
Terraform Version
Debug Output
https://gist.github.com/subesokun/ae9893a093c4ce7fcdebf2cb5cc95c0d
Expected Behavior
If a Kubernetes cluster gets re-deployed, the kube_config_raw content shouldn't be visible in the console log and should be marked as <sensitive> instead.
Actual Behavior
The kube_config_raw content gets printed in clear text into the console log.

Steps to Reproduce
1. terraform init
2. terraform apply
3. Change linux_profile.ssh_key so that the cluster gets re-deployed (see the configuration sketch below)
4. terraform apply
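For reference, a minimal sketch of the kind of configuration involved, assuming the 2018-era azurerm_kubernetes_cluster schema; every name and value below is a placeholder, not taken from the original report:

```hcl
# Hypothetical minimal AKS cluster; all names and values are placeholders.
# Replacing linux_profile.ssh_key forces a re-deployment, after which the
# next `terraform apply` printed kube_config_raw in clear text.
resource "azurerm_kubernetes_cluster" "example" {
  name                = "example-aks"
  location            = "westeurope"
  resource_group_name = "example-rg"
  dns_prefix          = "exampleaks"

  linux_profile {
    admin_username = "azureuser"

    ssh_key {
      key_data = "ssh-rsa AAAA..." # change this value to trigger the re-deployment
    }
  }

  agent_pool_profile {
    name    = "default"
    count   = 1
    vm_size = "Standard_D1_v2"
  }

  service_principal {
    client_id     = "00000000-0000-0000-0000-000000000000"
    client_secret = "REDACTED"
  }
}
```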
Additional Context
We're deploying our Kubernetes clusters as part of our CI/CD pipelines, and usually every developer on the project has access to the deployment logs of those pipelines, so we need to keep the console logs clean of any sensitive data. As the kube_config_raw content gets logged in clear text to the console, this is very critical for us: everybody with access to the logs could access the cluster and compromise it, which we have to prevent by all means.
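As a sketch of a user-side mitigation (not the provider fix this issue asks for): if the value is re-exposed through an output, Terraform's built-in sensitive flag redacts it when outputs are rendered. Note that this only affects output rendering; before the provider marked kube_config_raw as sensitive, the attribute still appeared in clear text in the resource diff during a re-deployment.

```hcl
# Hypothetical output; `sensitive = true` redacts the value when Terraform
# renders outputs, but it does not change how the resource diff itself is
# printed during a re-deployment.
output "kube_config" {
  sensitive = true
  value     = azurerm_kubernetes_cluster.example.kube_config_raw
}
```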