Fix issue 276: Double encoded KubernetesCluster connection secrets #407

Merged
merged 1 commit into crossplane-contrib:main from the issue276 branch on Jul 20, 2023

Conversation

@svscheg (Contributor) commented Mar 13, 2023

Description of your changes

Fixes the first part of issue #276.

I have:

  • Run make reviewable test to ensure this PR is ready for review.

How has this code been tested

Manually
Uptest: https://github.com/upbound/provider-azure/actions/runs/5268824377

Secret before the changes:
before_fix_changes.txt

Secret after the changes:
after_changed.txt

@@ -26,25 +25,6 @@ import (
// Configure configures kubernetes group
func Configure(p *config.Provider) {
	p.AddResourceConfigurator("azurerm_kubernetes_cluster", func(r *config.Resource) {
		// Note(ezgidemirel): Following fields are not marked as "sensitive" in Terraform cli schema output.
		// We need to configure them explicitly to store in connectionDetails secret.
		r.TerraformResource.Schema["kube_admin_config"].Elem.(*schema.Resource).
@ulucinar (Collaborator) commented Mar 16, 2023


Looks like kube_admin_config[].client_certificate already has Sensitive set to true, but when you remove the configuration override here, a corresponding field gets generated in the status. I was not expecting this, i.e., I was thinking that the configuration override we have here was unnecessary.

> Following fields are not marked as "sensitive" in Terraform cli schema output.

We will need to check further why we need this override. The comment here says that these are not marked as sensitive in the JSON schema. But in the native schema, this is already marked as sensitive. So, I believe for some reason, we are losing this important information by not consuming the native schema.
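
For reference, the override under discussion follows the pattern sketched below: it marks nested kubeconfig attributes as Sensitive directly on the Terraform schema so that the code generator stores them in the connection details secret. This is a sketch of the pattern only, not the exact removed code; the import paths and the attribute names beyond the block visible in the diff are illustrative.

```go
package kubernetes

import (
	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"

	"github.com/upbound/upjet/pkg/config" // import path may differ between upjet versions
)

// markNestedKubeconfigSensitive sketches the kind of override the diff removes:
// it flags nested kube_admin_config attributes as Sensitive so that they are
// collected into the connectionDetails secret instead of being generated in the status.
func markNestedKubeconfigSensitive(r *config.Resource) {
	nested := r.TerraformResource.Schema["kube_admin_config"].Elem.(*schema.Resource)
	// Attribute names below are illustrative; the diff only shows the block name.
	for _, attr := range []string{"client_certificate", "client_key", "cluster_ca_certificate"} {
		nested.Schema[attr].Sensitive = true
	}
}
```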

@turkenh (Contributor) left a comment


@svscheg, as I stated in the issue, I would suggest using the AdditionalConnectionDetailsFn configuration to end up with exactly the connection secret we want, i.e., encoded only once.
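
A minimal sketch of that suggestion, assuming upjet's AdditionalConnectionDetailsFn hook and the azurerm_kubernetes_cluster attribute kube_config_raw (the helper name and exact wiring are illustrative, not necessarily the code merged in this PR):

```go
package kubernetes

import "github.com/upbound/upjet/pkg/config" // import path may differ between upjet versions

// configureKubeconfigConnectionDetail sketches turkenh's suggestion: instead of
// marking nested schema fields sensitive (which resulted in the kubeconfig being
// base64-encoded twice), hand the raw kubeconfig string to the connection secret
// directly so that Kubernetes encodes it only once.
func configureKubeconfigConnectionDetail(r *config.Resource) {
	r.Sensitive.AdditionalConnectionDetailsFn = func(attr map[string]interface{}) (map[string][]byte, error) {
		conn := map[string][]byte{}
		// kube_config_raw is assumed to hold the ready-to-use kubeconfig in the Terraform state.
		if kc, ok := attr["kube_config_raw"].(string); ok && kc != "" {
			conn["kubeconfig"] = []byte(kc)
		}
		return conn, nil
	}
}
```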


@svscheg force-pushed the issue276 branch 3 times, most recently from a1a01b2 to aeb42ff, on June 14, 2023 11:50
@svscheg (Contributor, Author) commented Jun 14, 2023

/test-examples="examples/containerservice/kubernetescluster.yaml"

@svscheg changed the title from "Deleting the explicit sensitive configuration for the azurerm_kubernetes_cluster" to "Fix issue 276: Double encoded KubernetesCluster connection secrets" on Jun 23, 2023
@turkenh (Contributor) left a comment


@svscheg thanks!

It would be good to check the content of the connection secret after the resource is ready. Could you update the "How has this code been tested" section with the content of that secret (it should be OK to share even the sensitive parts, given it is a test cluster that will be deleted)?
I am also wondering whether you could use that kubeconfig to connect to the cluster with kubectl.

@svscheg (Contributor, Author) commented Jun 23, 2023

Hi @turkenh, @turkenf, the PR is ready for review. Could you please take a look at it?

@svscheg (Contributor, Author) commented Jun 23, 2023

> @svscheg thanks!
>
> It would be good to check the content of the connection secret after the resource is ready. Could you update the "How has this code been tested" section with the content of that secret (it should be OK to share even the sensitive parts, given it is a test cluster that will be deleted)? I am also wondering whether you could use that kubeconfig to connect to the cluster with kubectl.

Hi @turkenh! I added files with the secret before the changes and after the changes.
I also have a question: how can I check whether I can use that kubeconfig to connect to the cluster with kubectl?

@turkenh (Contributor) commented Jun 23, 2023

> I also have a question: how can I check whether I can use that kubeconfig to connect to the cluster with kubectl?

  1. Copy the value of the kubeconfig key in the secret.
  2. Decode it with:
     base64 -D <<< `paste-the-kubeconfig-here` > /tmp/k
  3. Export KUBECONFIG with:
     export KUBECONFIG=/tmp/k
  4. Try a kubectl command:
     kubectl get nodes

If the kubectl command works, then we're good.

@svscheg (Contributor, Author) commented Jun 26, 2023

> > I also have a question: how can I check whether I can use that kubeconfig to connect to the cluster with kubectl?
>
>   1. Copy the value of the kubeconfig key in the secret.
>   2. Decode it with:
>      base64 -D <<< `paste-the-kubeconfig-here` > /tmp/k
>   3. Export KUBECONFIG with:
>      export KUBECONFIG=/tmp/k
>   4. Try a kubectl command:
>      kubectl get nodes
>
> If the kubectl command works, then we're good.

Hi @turkenh!
If I use the value from the kubeconfig attribute and repeat all the steps, I see the following after running kubectl get nodes:
kubectl get nodes
E0623 19:36:07.366367 8815 memcache.go:238] couldn't get current server API group list: Get "https://exampleaks1-fsl39vxh.hcp.westeurope.azmk8s.io:443/api?timeout=32s": dial tcp 20.103.162.20:443: i/o timeout
E0623 19:36:37.368250 8815 memcache.go:238] couldn't get current server API group list: Get "https://exampleaks1-fsl39vxh.hcp.westeurope.azmk8s.io:443/api?timeout=32s": dial tcp 20.103.162.20:443: i/o timeout
E0623 19:37:07.392539 8815 memcache.go:238] couldn't get current server API group list: Get "https://exampleaks1-fsl39vxh.hcp.westeurope.azmk8s.io:443/api?timeout=32s": dial tcp 20.103.162.20:443: i/o timeout
E0623 19:37:37.395415 8815 memcache.go:238] couldn't get current server API group list: Get "https://exampleaks1-fsl39vxh.hcp.westeurope.azmk8s.io:443/api?timeout=32s": dial tcp 20.103.162.20:443: i/o timeout
E0623 19:38:07.396277 8815 memcache.go:238] couldn't get current server API group list: Get "https://exampleaks1-fsl39vxh.hcp.westeurope.azmk8s.io:443/api?timeout=32s": dial tcp 20.103.162.20:443: i/o timeout
Unable to connect to the server: dial tcp 20.103.162.20:443: i/o timeout

Is this correct, or is something not OK?

@svscheg (Contributor, Author) commented Jun 26, 2023

@turkenh, maybe we have some way to re-check the attributes kubeconfig.clustercacertificate, kubeconfig.clientcertificate, and kubeconfig.clientkey?

@turkenh (Contributor) commented Jul 19, 2023

> Is this correct, or is something not OK?

@svscheg, it looks like it is a valid kubeconfig, but your cluster was not ready yet or there were some connectivity problems. Ideally, I would like to see a successful output with a list of nodes.

> @turkenh, maybe we have some way to re-check the attributes kubeconfig.clustercacertificate, kubeconfig.clientcertificate, and kubeconfig.clientkey?

The above procedure validates them all together without checking them individually.

@turkenh (Contributor) left a comment


This looks good to me.

As I commented above, seeing a successful kubectl get nodes with the generated config would be good before merging. But I am fine with approving, as the code itself looks good and @svscheg's previous test partially validated that the generated kubeconfig is a valid one.

@turkenf (Collaborator) commented Jul 20, 2023

Tested manually; the kubectl get nodes command works properly:

> kubectl get nodes

NAME                              STATUS   ROLES   AGE   VERSION
aks-default-47539582-vmss000000   Ready    agent   19m   v1.25.6
aks-example-21932893-vmss000000   Ready    agent   11m   v1.25.6

@turkenf turkenf merged commit b05d78c into crossplane-contrib:main Jul 20, 2023