Hub-spoke example extended to cross account access between accounts #58

danielloader opened this issue Mar 26, 2024 · 3 comments

Context: https://github.com/gitops-bridge-dev/gitops-bridge/tree/main/argocd/iac/terraform/examples/eks/multi-cluster/hub-spoke

Thanks for updating the multi-cluster examples to use EKS Pod Identity associations; it's been a great simplification and improvement.

I'm currently working my way through bending this example into a cross-account setup internally for a tech demo, whereas OIDC was a little more forgiving cross-account because it didn't require role chaining.

This is less a request and more an issue to track whether anyone else is doing this, and to open up some discussion on implementation details, hopefully with a view to contributing an example back to this repository.


danielloader commented Mar 26, 2024

Notes so far:

This seems to work mostly out of the box: ArgoCD assumes a role to get access, so it does the chaining for you rather than relying on the role provided by the Pod Identity controller for the argocd-server pod.

Things I needed to change in the hub:

  • Add a data source for the org:
    data "aws_organizations_organization" "current" {}
  • Add a trust policy document for the spoke cluster secret injection:
    data "aws_iam_policy_document" "spoke_cluster_secrets" {
      statement {
        effect = "Allow"
        principals {
          type = "AWS"
          identifiers = ["*"]
        }
        actions = ["sts:AssumeRole","sts:TagSession"]
        condition {
          test = "StringEquals"
          variable = "aws:PrincipalOrgId"
          values = [
            data.aws_organizations_organization.current.id
          ]
        }
      }
    }
  • A role using the trust document:
    resource "aws_iam_role" "spoke_cluster_secrets" {
      name = "argocd-hub-spoke-access"
      assume_role_policy = data.aws_iam_policy_document.spoke_cluster_secrets.json
    }
  • Add an access entry in EKS so this role can be assumed and used to add secrets to the argocd namespace:
    module "eks" {
      source  = "terraform-aws-modules/eks/aws"
      version = "~> 20.5"
      #... omit the rest for brevity
    
      access_entries = {
        spokes = {
          kubernetes_groups = []
          principal_arn = aws_iam_role.spoke_cluster_secrets.arn
    
          policy_associations = {
            argocd = {
              policy_arn = "arn:aws:eks::aws:cluster-access-policy/AmazonEKSAdminPolicy"
              access_scope = {
                namespaces = ["argocd"]
                type = "namespace"
              }
            }
          }
        }
      }
    }
  • Output for the hub state:
    output "spoke_cluster_secrets_arn" {
      description = "IAM Role ARN used by spokes to add secrets to the hub ArgoCD"
      value = aws_iam_role.spoke_cluster_secrets.arn
    }

Things I had to change in the spokes:

  • Update the Kubernetes Access for Hub Cluster provider block, using the role ARN exposed by the hub output above (a sketch of what gets created through this provider follows the list).
    provider "kubernetes" {
      host                   = data.terraform_remote_state.cluster_hub.outputs.cluster_endpoint
      cluster_ca_certificate = base64decode(data.terraform_remote_state.cluster_hub.outputs.cluster_certificate_authority_data)
    
      exec {
        api_version = "client.authentication.k8s.io/v1beta1"
        command     = "aws"
        # This requires the awscli to be installed locally where Terraform is executed
    args = ["eks", "get-token", "--cluster-name", data.terraform_remote_state.cluster_hub.outputs.cluster_name, "--region", data.terraform_remote_state.cluster_hub.outputs.cluster_region, "--role-arn", data.terraform_remote_state.cluster_hub.outputs.spoke_cluster_secrets_arn]
      }
      alias = "hub"
    }
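
To make the chaining concrete, what each spoke ends up registering through this hub-aliased provider is an ArgoCD cluster secret whose awsAuthConfig carries the role ARN ArgoCD should assume for that spoke. A rough sketch is below; aws_iam_role.argocd_spoke is a hypothetical spoke-side role the hub's ArgoCD role is allowed to chain into, and in the upstream example the gitops-bridge module creates an equivalent secret for you:

# Sketch only: a spoke registering itself in the hub's ArgoCD via the hub-aliased provider
resource "kubernetes_secret" "spoke_cluster" {
  provider = kubernetes.hub

  metadata {
    name      = module.eks.cluster_name
    namespace = "argocd"
    labels = {
      "argocd.argoproj.io/secret-type" = "cluster"
    }
  }

  data = {
    name   = module.eks.cluster_name
    server = module.eks.cluster_endpoint
    config = jsonencode({
      awsAuthConfig = {
        clusterName = module.eks.cluster_name
        roleARN     = aws_iam_role.argocd_spoke.arn # hypothetical spoke-side role
      }
      tlsClientConfig = {
        caData = module.eks.cluster_certificate_authority_data # already base64 encoded
      }
    })
  }
}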

As far as I can tell, this avoids needing an "uber" Terraform user that can jump into each account: the hub and spoke clusters are deployed from different accounts with different credentials, and the state sharing is what glues it together (it relies on the hub state living in a bucket that the spoke's planning/applying user can read).
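
For reference, the "glue" on the spoke side is just a plain terraform_remote_state data source pointing at the hub's state; the bucket and key below are hypothetical and only need to be readable by the spoke's planning/applying credentials:

data "terraform_remote_state" "cluster_hub" {
  backend = "s3"

  config = {
    bucket = "my-shared-terraform-state" # hypothetical shared state bucket
    key    = "hub-spoke/hub/terraform.tfstate"
    region = "eu-west-1"
  }
}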


danielloader commented Mar 27, 2024

Okay, so I've taken a stab at removing the "shared state" between Terraform stacks, and the best native AWS solution I can come up with is using Parameter Store. Thanks @agjmills for the idea.

Important

You need to enable resource sharing within an organisation before doing any of this.
https://docs.aws.amazon.com/ram/latest/userguide/getting-started-sharing.html#getting-started-sharing-orgs
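
If you want that switch under Terraform as well, a minimal sketch is below; it has to be applied with credentials for the organisation's management account, and the CLI equivalent is aws ram enable-sharing-with-aws-organization:

# One-off, management account only: allow RAM to share resources within the organisation
resource "aws_ram_sharing_with_organization" "this" {}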

In the hub stack:

resource "aws_ram_resource_share" "hub" {
  name                      = "hub"
  allow_external_principals = false
}

resource "aws_ram_principal_association" "hub" {
  principal          = data.aws_organizations_organization.current.arn
  resource_share_arn = aws_ram_resource_share.hub.arn
}

resource "aws_ram_resource_association" "hub" {
  resource_arn       = aws_ssm_parameter.hub.arn
  resource_share_arn = aws_ram_resource_share.hub.arn
}

resource "aws_ssm_parameter" "hub" {
  name = "hub"
  type = "String"
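  # The Advanced tier is required to share an SSM parameter through AWS RAM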
  tier = "Advanced"
  value = jsonencode(
    {
      "cluster_name" : module.eks.cluster_name,
      "cluster_endpoint" : module.eks.cluster_endpoint
      "cluster_certificate_authority_data" : module.eks.cluster_certificate_authority_data,
      "cluster_region" : local.region,
      "spoke_cluster_secrets_arn" : aws_iam_role.spoke_cluster_secrets.arn,
      "argocd_iam_role_arn" : aws_iam_role.argocd_hub.arn
    }
  )
}
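
To close the loop with the variable the spokes consume below, the hub stack can also output the parameter ARN (the output name here is only a suggestion):

output "hub_parameter_arn" {
  description = "ARN of the shared SSM parameter; pass to each spoke stack as var.hub_parameter_arn"
  value       = aws_ssm_parameter.hub.arn
}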

In the spoke stacks:

################################################################################
# Kubernetes Access for Hub Cluster
################################################################################

provider "aws" {
  alias = "hub"
  region = data.aws_arn.hub_parameter.region
}
data "aws_arn" "hub_parameter" {
  arn = var.hub_parameter_arn
}
data "aws_ssm_parameter" "hub" {
  name = var.hub_parameter_arn

  provider = aws.hub
}

locals {
  hub = jsondecode(data.aws_ssm_parameter.hub.value)
}
provider "kubernetes" {
  host                   = local.hub.cluster_endpoint
  cluster_ca_certificate = base64decode(local.hub.cluster_certificate_authority_data)

  exec {
    api_version = "client.authentication.k8s.io/v1beta1"
    command     = "aws"
    # This requires the awscli to be installed locally where Terraform is executed
    args = ["eks", "get-token", "--cluster-name", local.hub.cluster_name, "--region", local.hub.cluster_region, "--role-arn", local.hub.spoke_cluster_secrets_arn]
  }
  alias = "hub"
}

It's not perfect; there's still the last-mile issue of outputting the parameter ARN itself and supplying it as a variable, but in practice it's quite a static string and therefore somewhat predictable - you just need to know the account ID of the hub cluster.

That could be stored in a CI pipeline variable, for example, so the spoke stacks just "know" where it is on successive runs, for any number of future spoke clusters.
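
Concretely, each spoke stack only needs a single input for this; a minimal declaration might look like the following (the ARN in the description is illustrative only):

variable "hub_parameter_arn" {
  description = "ARN of the hub's shared SSM parameter, e.g. arn:aws:ssm:eu-west-1:111111111111:parameter/hub"
  type        = string
}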

(Attachment: diagram.drawio)

@csantanapr
Member

This is great @danielloader 🎉
Could you contribute the example to eks/multi-cluster/hub-spoke-cross-accounts/?

Do you require the user to use Organizations? What if the user doesn't have Organizations? Regardless, I think adding your example using Organizations and an SSM parameter to share the values with the spokes, plus the role to write secrets, is great!
