
FSS Mount Target Created Dynamically Doesn't Get FSS Security Group Attached (4.5.9 -> 5.0.2) #871

KingMichaelPark opened this issue Nov 15, 2023 · 4 comments

@KingMichaelPark

Hello,

I have deployed the OKE module as:

module "oke" {
  #checkov:skip=CKV_TF_1:Ignore warning, can't parse this
  source  = "oracle-terraform-modules/oke/oci"
  version = "5.0.2"

  # Core
  compartment_id = local.env_compartment_id
  tenancy_id     = var.tenancy_ocid

  # Tags
  tag_namespace = "oke"
  # use_defined_tags = true // true/*false
  vcn_id = module.network.oci_core_vcn_id

  load_balancers          = "public"
  preferred_load_balancer = "public"

  create_vcn = false
  subnets = {
    bastion  = { newbits = 13 }
    operator = { newbits = 13 }
    cp       = { newbits = 13 }
    int_lb   = { newbits = 11 }
    pub_lb   = { newbits = 11 }
    workers  = { newbits = 2 }
    pods     = { newbits = 2 }
    fss      = { newbits = 4 }
  }

  nsgs = {
    bastion  = {}
    operator = {}
    cp       = {}
    int_lb   = {}
    pub_lb   = {}
    workers  = {}
    pods     = {}
    fss      = {}
  }
  allow_rules_public_lb = {
    "Allow TCP ingress from public load balancers to workers nodes for NodePort traffic" : {
      protocol = 6, port_min = 80, port_max = 443, source = "0.0.0.0/0", source_type = "CIDR_BLOCK",
    }
  }

  # Cluster
  cluster_name       = "oke"
  cluster_type       = "enhanced" // *basic/enhanced
  image_signing_keys = []
  kubernetes_version = var.kubernetes_version

  worker_pool_mode = "node-pool"
  worker_pool_size = 1

  worker_pools = {
    autoscaler_pool = {
      autoscale          = true,
      boot_volume_size   = 2000,
      cluster_type       = "enhanced"
      dashboard_enabled  = true
      description        = "Node pool managed by cluster autoscaler",
      kubernetes_version = var.kubernetes_version
      max_size           = var.autoscaler_pool_max_nodes,
      memory             = var.autoscaler_pool_node_memory,
      min_size           = 1,
      node_pool_size     = var.autoscaler_pool_initial_nodes,
      shape              = var.autoscaler_pool_node_shape
      ocpus              = var.autoscaler_pool_node_ocpus,
    }

    oke-vm-standard = {
      allow_autoscaler     = true
      boot_volume_size     = var.worker_pool_boot_volume_size,
      cluster_type         = "enhanced"
      create               = var.worker_pool_create,
      dashboard_enabled    = true
      description          = "Node pool with cluster autoscaler scheduling allowed",
      kubernetes_version   = var.kubernetes_version
      memory               = var.worker_pool_memory,
      node_cycling_enabled = true
      ocpus                = var.worker_pool_ocpus,
      shape                = var.worker_pool_shape,
      size                 = var.worker_pool_size,
    }
  }

  home_region = var.region
  region      = var.region

  # Bastion
  create_bastion           = true
  bastion_shape            = var.bastion_shape
  bastion_image_os_version = var.bastion_os_version
  bastion_upgrade          = false

  # Operator
  create_operator           = true
  operator_shape            = var.operator_shape
  operator_image_os_version = var.operator_os_version
  operator_upgrade          = false

  # Autoscaler
  cluster_autoscaler_install           = true
  cluster_autoscaler_namespace         = "kube-system"
  cluster_autoscaler_helm_version      = "9.29.4"
  cluster_autoscaler_helm_values       = {}
  cluster_autoscaler_helm_values_files = []

  metrics_server_install = true

  ig_route_table_id  = module.network.oci_core_vcn_public_route_table_id
  nat_route_table_id = module.network.oci_core_vcn_private_route_table_id

  ssh_private_key = var.ssh_private_key
  ssh_public_key  = local.authorized_keys

  control_plane_is_public = false

  providers = {
    oci.home = oci.home
  }
}

The FSS-related security group rules are created fine (and their relationships to the workers). However, when the CSI StorageClass below is created in Kubernetes to dynamically provision a new file system:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: fss-dynamic-storage
provisioner: fss.csi.oraclecloud.com
reclaimPolicy: Delete
allowVolumeExpansion: true
parameters:
  availabilityDomain: "{{ .Values.oci.availabilityDomain }}"
  mountTargetSubnetOcid: "{{ .Values.oci.mountTargetSubnetOcid }}" # This is the module.oke.fss_subnet_id
  compartmentOcid: "{{ .Values.oci.compartmentOcid }}"
  exportPath: "{{ .Values.oci.exportPath }}"
  exportOptions: "[{\"source\":\"0.0.0.0/0\",\"requirePrivilegedSourcePort\":false,\"access\":\"READ_WRITE\",\"identitySquash\":\"NONE\"}]"
  encryptInTransit: "{{ .Values.oci.encryptInTransit }}"

It gets created in the mount targets area (along with its file system).
[screenshot: the dynamically created mount target and file system in the OCI console]

However, the security group it needs in order to communicate with the workers (the FSS security group) does not get attached to the mount target. (The attachment you see in the following image had to be assigned manually.)

[screenshot: the FSS NSG manually attached to the mount target]

I know I am probably doing something daft, but the 4.5.9 module did this automatically and I am just trying to recreate that behaviour in 5.0.2. Could you advise how I can attach this security group to the mount targets automatically via Terraform or Kubernetes?

Thank you in advance!

@KingMichaelPark
Author

I have also added annotations for the network security groups on the persistent volume claim, but these don't seem to be applied to the mount target:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nextflow-fss-claim
  annotations:
    oci.oraclecloud.com/oci-network-security-groups: "{{ .Values.oci.fss_nsg_ocid }},{{ .Values.oci.workers_nsg_ocid }}"
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: fss-dynamic-storage
  resources:
    requests:
      storage: 1000Gi

@gman-k8s

We are also missing the FSS creation in oke-5.x. Did you find a workaround for it?

@KingMichaelPark
Author

KingMichaelPark commented Jan 19, 2024

I think I had to manually attach two NSGs, including the worker NSG to the load balancer. I couldn't automate that step no matter what I tried.

@robo-cap
Member

robo-cap commented Apr 5, 2024

Oracle CCM doesn't support the attachment of an NSG to the Mount Target.
https://github.com/oracle/oci-cloud-controller-manager/blob/cd3c0b68028dc3f3dd84d358b711704c431e33c5/pkg/csi/driver/fss_controller.go#L589-L600

The workaround at this moment is to create a Mount Target in Terraform using the outputs of the OKE TF module and attach the FSS NSG to it, as in the sketch below.
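
A minimal sketch of that approach, reusing the module's fss_subnet_id output (already referenced in the StorageClass above); module.oke.fss_nsg_id and var.availability_domain are illustrative names, not confirmed module outputs, so check what your module version actually exposes:

resource "oci_file_storage_mount_target" "fss" {
  # Hypothetical names: var.availability_domain and module.oke.fss_nsg_id are
  # placeholders; verify them against the outputs of your oke module version.
  availability_domain = var.availability_domain
  compartment_id      = local.env_compartment_id

  # Place the mount target in the FSS subnet created by the OKE module.
  subnet_id = module.oke.fss_subnet_id

  # Attach the FSS NSG so the workers can reach the mount target.
  nsg_ids = [module.oke.fss_nsg_id]

  display_name = "oke-fss-mount-target"
}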

In OKE you can then use mountTargetOcid instead of mountTargetSubnetOcid in the StorageClass parameters.
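
For example, the StorageClass from earlier in the thread would point at the pre-created mount target; "{{ .Values.oci.mountTargetOcid }}" is a placeholder Helm value for the OCID of the Terraform-managed mount target above:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: fss-dynamic-storage
provisioner: fss.csi.oraclecloud.com
reclaimPolicy: Delete
allowVolumeExpansion: true
parameters:
  availabilityDomain: "{{ .Values.oci.availabilityDomain }}"
  # Reuse the pre-created mount target (FSS NSG already attached) instead of
  # letting the CSI driver provision one from mountTargetSubnetOcid.
  mountTargetOcid: "{{ .Values.oci.mountTargetOcid }}"
  compartmentOcid: "{{ .Values.oci.compartmentOcid }}"
  exportPath: "{{ .Values.oci.exportPath }}"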

New issue created here.
