
Make 'kubeconfig' field sensitive #60

Merged
merged 1 commit into civo:master on Aug 17, 2021

Conversation

zulh-civo
Member

Closes #57


tf show output before this PR

# civo_kubernetes_cluster.my-cluster:
resource "civo_kubernetes_cluster" "my-cluster" {
  api_endpoint           = "https://212.2.247.178:6443"
  created_at             = "2021-08-13 04:30:40 +0000 UTC"
  dns_entry              = "ce0e3a6a-a4e7-4af0-8f7c-37edffc8705e.k8s.civo.com"
  id                     = "ce0e3a6a-a4e7-4af0-8f7c-37edffc8705e"
  installed_applications = []
  instances              = [
      {
          cpu_cores = 1
          disk_gb   = 15
          hostname  = "k3s-my-cluster-a4eed495-node-pool-1531"
          ram_mb    = 2048
          size      = ""
          status    = "ACTIVE"
          tags      = []
      },
      {
          cpu_cores = 1
          disk_gb   = 15
          hostname  = "k3s-my-cluster-a4eed495-node-pool-edae"
          ram_mb    = 2048
          size      = ""
          status    = "ACTIVE"
          tags      = []
      },
      {
          cpu_cores = 1
          disk_gb   = 15
          hostname  = "k3s-my-cluster-a4eed495-node-pool-f313"
          ram_mb    = 2048
          size      = ""
          status    = "ACTIVE"
          tags      = []
      },
  ]
  kubeconfig             = <<-EOT
      apiVersion: v1
      clusters:
      - cluster:
          certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJlRENDQVIyZ0F3SUJBZ0lCQURBS0JnZ3Foa2pPUFFRREFqQWpNU0V3SHdZRFZRUUREQmhyTTNNdGMyVnkKZG1WeUxXTmhRREUyTWpnNE1qa3hOREV3SGhjTk1qRXdPREV6TURRek1qSXhXaGNOTXpFd09ERXhNRFF6TWpJeApXakFqTVNFd0h3WURWUVFEREJock0zTXRjMlZ5ZG1WeUxXTmhRREUyTWpnNE1qa3hOREV3V1RBVEJnY3Foa2pPClBRSUJCZ2dxaGtqT1BRTUJCd05DQUFTNXhEaU5HYVRnWjVPYnlFdEg5dmlBbFhrR1k1RW9WMUUxcWpnVzhmSHoKWlE2eVZKUDh4SnA5NU0vQ3JQTkdORzRxOGE1bFBoUWxSUW1URzBIYkJvQ2RvMEl3UURBT0JnTlZIUThCQWY4RQpCQU1DQXFRd0R3WURWUjBUQVFIL0JBVXdBd0VCL3pBZEJnTlZIUTRFRmdRVW9oOGhraWtZK3E4MGZQMEhOa214CkFPYkZBK1F3Q2dZSUtvWkl6ajBFQXdJRFNRQXdSZ0loQUlzd0F1YnRoR2R2ak90aDRUcytCbTM2bWFkNjh3SC8KenBnUnVZamZhRnpOQWlFQXlLUTZjRGtBM3NNbTdBNE9xUmtCUU9YblVpdFRqbVF2RldldFYzbHhqL009Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
          server: https://212.2.247.178:6443
        name: my-cluster
      contexts:
      - context:
          cluster: my-cluster
          user: my-cluster
        name: my-cluster
      current-context: my-cluster
      kind: Config
      preferences: {}
      users:
      - name: my-cluster
        user:
          client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJrakNDQVRlZ0F3SUJBZ0lJUXJyY0V4VDVKVmt3Q2dZSUtvWkl6ajBFQXdJd0l6RWhNQjhHQTFVRUF3d1kKYXpOekxXTnNhV1Z1ZEMxallVQXhOakk0T0RJNU1UUXhNQjRYRFRJeE1EZ3hNekEwTXpJeU1Wb1hEVEl5TURneApNekEwTXpJeU1Wb3dNREVYTUJVR0ExVUVDaE1PYzNsemRHVnRPbTFoYzNSbGNuTXhGVEFUQmdOVkJBTVRESE41CmMzUmxiVHBoWkcxcGJqQlpNQk1HQnlxR1NNNDlBZ0VHQ0NxR1NNNDlBd0VIQTBJQUJCNDhiVk1aTzRCZVZjdWgKMW9KcDVDVnAyYzVaN1l5YXlPalFPS2Z4QmxlazltSFlkNnlQNEo3aVh0c0IyaEM1M2d3QkhUREpFd0tEeHNHLwo0WEhYMDNlalNEQkdNQTRHQTFVZER3RUIvd1FFQXdJRm9EQVRCZ05WSFNVRUREQUtCZ2dyQmdFRkJRY0RBakFmCkJnTlZIU01FR0RBV2dCUnFTQ3BNQ3VlZmxOSTJEYjBFZVNVRExJb1FvekFLQmdncWhrak9QUVFEQWdOSkFEQkcKQWlFQWdwaUFMRDZuSlQ0QTdtMXNHTDRibitMc0g5V0pjbW1hMlQvNnJPanprMUFDSVFDQSsrY1dtWDZHNlJVTApjV0xaZVB4cjJRaU9nR2d5Qkg1MzNBZXVZRU54b3c9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCi0tLS0tQkVHSU4gQ0VSVElGSUNBVEUtLS0tLQpNSUlCZURDQ0FSMmdBd0lCQWdJQkFEQUtCZ2dxaGtqT1BRUURBakFqTVNFd0h3WURWUVFEREJock0zTXRZMnhwClpXNTBMV05oUURFMk1qZzRNamt4TkRFd0hoY05NakV3T0RFek1EUXpNakl4V2hjTk16RXdPREV4TURRek1qSXgKV2pBak1TRXdId1lEVlFRRERCaHJNM010WTJ4cFpXNTBMV05oUURFMk1qZzRNamt4TkRFd1dUQVRCZ2NxaGtqTwpQUUlCQmdncWhrak9QUU1CQndOQ0FBVEo1eE84dlZySXdzMUl1aC9FazFxQ0Z6WWZMb05aZVcva3FOS1pxTWdLCkl3S1E0UUVRajRPZG1vMGJUenFkYVBicjBlT1dNbVJFUzFOaHdVTFo1YWVKbzBJd1FEQU9CZ05WSFE4QkFmOEUKQkFNQ0FxUXdEd1lEVlIwVEFRSC9CQVV3QXdFQi96QWRCZ05WSFE0RUZnUVVha2dxVEFybm41VFNOZzI5QkhrbApBeXlLRUtNd0NnWUlLb1pJemowRUF3SURTUUF3UmdJaEFOdThmSUpyclN3RXY2N00zemdwSytsMENVOFVDby9GCjhvMHR4cHJVanJud0FpRUF5a1RYM0g0dkg5US9PZUxXKyt1WEJ2VFlzUEdvYXRRMTQrVVhDaFAvTTlRPQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
          client-key-data: LS0tLS1CRUdJTiBFQyBQUklWQVRFIEtFWS0tLS0tCk1IY0NBUUVFSUNTZXpnTWN1YWYrTms3K3JiOVM1MDdzQ0Znd3R5VU96Y1hBT1A5K25wMkVvQW9HQ0NxR1NNNDkKQXdFSG9VUURRZ0FFSGp4dFV4azdnRjVWeTZIV2dtbmtKV25aemxudGpKckk2TkE0cC9FR1Y2VDJZZGgzckkvZwpudUplMndIYUVMbmVEQUVkTU1rVEFvUEd3Yi9oY2RmVGR3PT0KLS0tLS1FTkQgRUMgUFJJVkFURSBLRVktLS0tLQo=
  EOT
  kubernetes_version     = "1.20.0-k3s1"
  master_ip              = "212.2.247.178"
  name                   = "my-cluster"
  network_id             = "fb3cdfff-66ba-4048-9f37-8625237cbe67"
  num_target_nodes       = 3
  pools                  = [
      {
          count          = 3
          id             = "2f5a47de-79b2-4bac-92eb-fea7c498f91e"
          instance_names = [
              "k3s-my-cluster-a4eed495-node-pool-1531",
              "k3s-my-cluster-a4eed495-node-pool-edae",
              "k3s-my-cluster-a4eed495-node-pool-f313",
          ]
          instances      = [
              {
                  cpu_cores = 1
                  disk_gb   = 15
                  hostname  = "k3s-my-cluster-a4eed495-node-pool-f313"
                  ram_mb    = 2048
                  size      = "g3.k3s.small"
                  status    = "ACTIVE"
                  tags      = []
              },
              {
                  cpu_cores = 1
                  disk_gb   = 15
                  hostname  = "k3s-my-cluster-a4eed495-node-pool-edae"
                  ram_mb    = 2048
                  size      = "g3.k3s.small"
                  status    = "ACTIVE"
                  tags      = []
              },
              {
                  cpu_cores = 1
                  disk_gb   = 15
                  hostname  = "k3s-my-cluster-a4eed495-node-pool-1531"
                  ram_mb    = 2048
                  size      = "g3.k3s.small"
                  status    = "ACTIVE"
                  tags      = []
              },
          ]
          size           = "g3.k3s.small"
      },
  ]
  ready                  = true
  status                 = "ACTIVE"
  target_nodes_size      = "g3.k3s.small"
}

tf show output after this PR

# civo_kubernetes_cluster.my-cluster:
resource "civo_kubernetes_cluster" "my-cluster" {
  api_endpoint           = "https://212.2.240.138:6443"
  created_at             = "2021-08-13 04:39:14 +0000 UTC"
  dns_entry              = "7b152db9-1ff1-47e0-805a-d425a7630135.k8s.civo.com"
  id                     = "7b152db9-1ff1-47e0-805a-d425a7630135"
  installed_applications = []
  instances              = [
      {
          cpu_cores = 1
          disk_gb   = 15
          hostname  = "k3s-my-cluster-5eb3422b-node-pool-0cdd"
          ram_mb    = 2048
          size      = ""
          status    = "ACTIVE"
          tags      = []
      },
      {
          cpu_cores = 1
          disk_gb   = 15
          hostname  = "k3s-my-cluster-5eb3422b-node-pool-334d"
          ram_mb    = 2048
          size      = ""
          status    = "ACTIVE"
          tags      = []
      },
      {
          cpu_cores = 1
          disk_gb   = 15
          hostname  = "k3s-my-cluster-5eb3422b-node-pool-e5ba"
          ram_mb    = 2048
          size      = ""
          status    = "ACTIVE"
          tags      = []
      },
  ]
  kubeconfig             = (sensitive value)
  kubernetes_version     = "1.20.0-k3s1"
  master_ip              = "212.2.240.138"
  name                   = "my-cluster"
  network_id             = "fb3cdfff-66ba-4048-9f37-8625237cbe67"
  num_target_nodes       = 3
  pools                  = [
      {
          count          = 3
          id             = "a985f994-d71e-436d-8a2a-1033aee71b46"
          instance_names = [
              "k3s-my-cluster-5eb3422b-node-pool-0cdd",
              "k3s-my-cluster-5eb3422b-node-pool-334d",
              "k3s-my-cluster-5eb3422b-node-pool-e5ba",
          ]
          instances      = [
              {
                  cpu_cores = 1
                  disk_gb   = 15
                  hostname  = "k3s-my-cluster-5eb3422b-node-pool-e5ba"
                  ram_mb    = 2048
                  size      = "g3.k3s.small"
                  status    = "ACTIVE"
                  tags      = []
              },
              {
                  cpu_cores = 1
                  disk_gb   = 15
                  hostname  = "k3s-my-cluster-5eb3422b-node-pool-334d"
                  ram_mb    = 2048
                  size      = "g3.k3s.small"
                  status    = "ACTIVE"
                  tags      = []
              },
              {
                  cpu_cores = 1
                  disk_gb   = 15
                  hostname  = "k3s-my-cluster-5eb3422b-node-pool-0cdd"
                  ram_mb    = 2048
                  size      = "g3.k3s.small"
                  status    = "ACTIVE"
                  tags      = []
              },
          ]
          size           = "g3.k3s.small"
      },
  ]
  ready                  = true
  status                 = "ACTIVE"
  target_nodes_size      = "g3.k3s.small"
}

Note: the before and after outputs are from two different clusters, which is why the cluster IDs and IP addresses differ.
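
For reference, the underlying change is a one-line flag on the resource schema. The snippet below is a minimal sketch assuming the provider is built on terraform-plugin-sdk/v2; the clusterAttributes name and the surrounding map are illustrative, and only the Sensitive: true entry reflects what this PR sets on the kubeconfig attribute.

package civo

import (
	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
)

// Illustrative excerpt of the civo_kubernetes_cluster resource schema;
// only the "kubeconfig" entry is the point here. With Sensitive set,
// Terraform prints "(sensitive value)" instead of the raw kubeconfig in
// `terraform plan` and `terraform show` output.
var clusterAttributes = map[string]*schema.Schema{
	"kubeconfig": {
		Type:      schema.TypeString,
		Computed:  true, // populated from the Civo API, not user-supplied
		Sensitive: true, // the flag this PR adds
	},
}

Note that Sensitive only controls redaction in CLI output; the kubeconfig is still stored in plain text in the Terraform state file.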

@zulh-civo self-assigned this Aug 13, 2021
@zulh-civo
Member Author

FYI @alejandrojnm / @saiyam1814

@saiyam1814
Contributor

LGTM

@saiyam1814 merged commit 858b0d3 into civo:master on Aug 17, 2021

Successfully merging this pull request may close these issues.

Kubeconfig in Cluster resource should be marked sensitive