diff --git a/docs/configuration-reference/backend/local.md b/docs/configuration-reference/backend/local.md index 12388de53..84a2ffaa7 100644 --- a/docs/configuration-reference/backend/local.md +++ b/docs/configuration-reference/backend/local.md @@ -36,8 +36,9 @@ backend "local" { Default backend is local. -| Argument | Description | Default | Required | -|-----------------------------|--------------------------------------------------------------|:-------:|:--------:| -| `backend.local` | Local backend configuration block. | - | false | -| `backend.local.path` | Location where Lokomotive stores the cluster state. | - | false | +| Argument | Description | Default | Type | Required | +|----------------------|-----------------------------------------------------|:-------:|:------:|:--------:| +| `backend.local` | Local backend configuration block. | - | object | false | +| `backend.local.path` | Location where Lokomotive stores the cluster state. | - | string | false | + diff --git a/docs/configuration-reference/backend/s3.md b/docs/configuration-reference/backend/s3.md index 8ddc426bb..0b8fe44c0 100644 --- a/docs/configuration-reference/backend/s3.md +++ b/docs/configuration-reference/backend/s3.md @@ -41,14 +41,15 @@ backend "s3" { ## Attribute reference -| Argument | Description | Default | Required | -|-----------------------------|--------------------------------------------------------------------------------------------------------------|:-------:|:--------:| -| `backend.s3` | AWS S3 backend configuration block. | - | false | -| `backend.s3.bucket` | Name of the S3 bucket where Lokomotive stores cluster state. | - | true | -| `backend.s3.key` | Path in the S3 bucket to store the cluster state. | - | true | -| `backend.s3.region` | AWS Region of the S3 bucket. | - | false | -| `backend.s3.aws_creds_path` | Path to the AWS credentials file. | - | false | -| `backend.s3.dynamodb_table` | Name of the DynamoDB table for locking the cluster state. The table must have a primary key named LockID. | - | false | +| Argument | Description | Default | Type | Required | +|-----------------------------|-----------------------------------------------------------------------------------------------------------|:-------:|:------:|:--------:| +| `backend.s3` | AWS S3 backend configuration block. | - | object | false | +| `backend.s3.bucket` | Name of the S3 bucket where Lokomotive stores cluster state. | - | string | true | +| `backend.s3.key` | Path in the S3 bucket to store the cluster state. | - | string | true | +| `backend.s3.region` | AWS Region of the S3 bucket. | - | string | false | +| `backend.s3.aws_creds_path` | Path to the AWS credentials file. | - | string | false | +| `backend.s3.dynamodb_table` | Name of the DynamoDB table for locking the cluster state. The table must have a primary key named LockID. | - | string | false | + >NOTE: In order for the installer to configure the credentials for S3 backend either pass them as environment variables or in the config above. diff --git a/docs/configuration-reference/components/aws-ebs-csi-driver.md b/docs/configuration-reference/components/aws-ebs-csi-driver.md index 2e9eefa34..50c3867b7 100644 --- a/docs/configuration-reference/components/aws-ebs-csi-driver.md +++ b/docs/configuration-reference/components/aws-ebs-csi-driver.md @@ -41,9 +41,10 @@ Table of all the arguments accepted by the component. 
Example: -| Argument | Description | Default | Required | -|--------------------------------|------------------------------------------------------------------------------|:------------:|:--------:| -| `enable_default_storage_class` | Use the storage class provided by the component as the default storage class | true | false | +| Argument | Description | Default | Type | Required | +|--------------------------------|--------------------------------------------------------------------------------|:-------:|:----:|:--------:| +| `enable_default_storage_class` | Use the storage class provided by the component as the default storage class. | true | bool | false | + ## Applying @@ -64,4 +65,4 @@ lokoctl component delete aws-ebs-csi-driver **WARNING: Before destroying a cluster or deleting the component, EBS volumes must be cleaned up manually.** Failing to do so would result in EBS volumes -being left behind. \ No newline at end of file +being left behind. diff --git a/docs/configuration-reference/components/cert-manager.md b/docs/configuration-reference/components/cert-manager.md index 17e7a8ab4..48c6fb102 100644 --- a/docs/configuration-reference/components/cert-manager.md +++ b/docs/configuration-reference/components/cert-manager.md @@ -39,11 +39,13 @@ Table of all the arguments accepted by the component. Example: -| Argument | Description | Default | Required | -|-------------|--------------------------------------------------------------|:------------:|:--------:| -| `email` | Email used for certificates to receive expiry notifications. | - | true | -| `namespace` | Namespace to deploy the cert-manager into. | cert-manager | false | -| `webhooks` | Controls if webhooks should be deployed. | true | false | +| Argument | Description | Default | Type | Required | +|-------------------|------------------------------------------------------------------------|:------------:|:------:|:--------:| +| `email` | Email used for certificates to receive expiry notifications. | - | string | true | +| `namespace` | Namespace to deploy the cert-manager into. | cert-manager | string | false | +| `webhooks` | Controls if webhooks should be deployed. | true | bool | false | +| `service_monitor` | Create ServiceMonitor for Prometheus to scrape cert-manager metrics. | false | bool | false | + ## Applying diff --git a/docs/configuration-reference/components/cluster-autoscaler.md b/docs/configuration-reference/components/cluster-autoscaler.md index 17f6c1021..599c7e9a7 100644 --- a/docs/configuration-reference/components/cluster-autoscaler.md +++ b/docs/configuration-reference/components/cluster-autoscaler.md @@ -70,21 +70,23 @@ Table of all the arguments accepted by the component. Example: -| Argument | Description | Default | Required | -|------------------------------|------------------------------------------------------------------------------------------|:---------------|:--------:| -| `cluster_name` | Name of the cluster. | - | true | -| `worker_pool` | Name of the worker pool. | - | true | -| `namespace` | Namespace where the Cluster Autoscaler will be installed. | "kube-system" | false | -| `min_workers` | Minimum number of workers in the worker pool. | 1 | false | -| `max_workers` | Maximum number of workers in the worker pool. | 4 | false | -| `scale_down_unneeded_time` | How long a node should be unneeded before it is eligible for scale down. | "10m" | false | -| `scale_down_delay_after_add` | How long scale down should wait after a scale up.
| "10m" | false | -| `scale_down_unready_time` | How long an unready node should be unneeded before it is eligible for scale down. | "20m" | false | -| `provider` | Supported provider, currently Packet. | "packet" | false | -| `packet.project_id` | Packet Project ID where the cluster is running. | - | true | -| `packet.facility` | Packet Facility where the cluster is running. | - | true | -| `packet.worker_type` | Machine type for workers spawned by the Cluster Autoscaler. | "t1.small.x86" | false | -| `packet_worker_channel` | Flatcar Container Linux channel to be used in workers spawned by the Cluster Autoscaler. | "stable" | false | +| Argument | Description | Default | Type | Required | +|------------------------------|------------------------------------------------------------------------------------------|:---------------|:------:|:--------:| +| `cluster_name` | Name of the cluster. | - | string | true | +| `worker_pool` | Name of the worker pool. | - | string | true | +| `namespace` | Namespace where the Cluster Autoscaler will be installed. | "kube-system" | string | false | +| `min_workers` | Minimum number of workers in the worker pool. | 1 | number | false | +| `max_workers` | Maximum number of workers in the worker pool. | 4 | number | false | +| `scale_down_unneeded_time` | How long a node should be unneeded before it is eligible for scale down. | "10m" | string | false | +| `scale_down_delay_after_add` | How long scale down should wait after a scale up. | "10m" | string | false | +| `scale_down_unready_time` | How long an unready node should be unneeded before it is eligible for scale down. | "20m" | string | false | +| `provider` | Supported provider, currently Packet. | "packet" | string | false | +| `service_monitor` | Specifies how metrics can be retrieved from a set of services. | false | bool | false | +| `packet.project_id` | Packet Project ID where the cluster is running. | - | string | true | +| `packet.facility` | Packet Facility where the cluster is running. | - | string | true | +| `packet.worker_type` | Machine type for workers spawned by the Cluster Autoscaler. | "t1.small.x86" | string | false | +| `packet_worker_channel` | Flatcar Container Linux channel to be used in workers spawned by the Cluster Autoscaler. | "stable" | string | false | + ## Applying diff --git a/docs/configuration-reference/components/contour.md b/docs/configuration-reference/components/contour.md index a4b03b974..ae008da70 100644 --- a/docs/configuration-reference/components/contour.md +++ b/docs/configuration-reference/components/contour.md @@ -67,13 +67,14 @@ Table of all the arguments accepted by the component. Example: -| Argument | Description | Default | Required | -|---------------------|---------------------------------------------------------------------------------------------------------|:--------------:|:--------:| -| `enable_monitoring` | Create Prometheus Operator configs to scrape Contour and Envoy metrics. Also deploys Grafana Dashboard. | false | false | -| `ingress_hosts` | [ExternalDNS component](external-dns.md) creates DNS entries from the values provided. | "" | false | -| `node_affinity` | Node affinity for deploying the operator pod and envoy daemonset. | - | false | -| `service_type` | The type of Kubernetes service used to expose Envoy. | "LoadBalancer" | false | -| `toleration` | Tolerations that the operator and envoy pods will tolerate. 
| - | false | +| Argument | Description | Default | Type | Required | +|---------------------|---------------------------------------------------------------------------------------------------------|:--------------:|:---------------------------------------------------------------------------------------------------------------|:--------:| +| `enable_monitoring` | Create Prometheus Operator configs to scrape Contour and Envoy metrics. Also deploys Grafana Dashboard. | false | bool | false | +| `ingress_hosts` | [ExternalDNS component](external-dns.md) creates DNS entries from the values provided. | "" | list(string) | false | +| `node_affinity` | Node affinity for deploying the operator pod and envoy daemonset. | - | list(object({key = string, operator = string, values = list(string)})) | false | +| `service_type` | The type of Kubernetes service used to expose Envoy. | "LoadBalancer" | string | false | +| `toleration` | Tolerations that the operator and envoy pods will tolerate. | - | list(object({key = string, effect = string, operator = string, value = string, toleration_seconds = string })) | false | + ## Applying diff --git a/docs/configuration-reference/components/dex.md b/docs/configuration-reference/components/dex.md index a1954d0f3..bfde34117 100644 --- a/docs/configuration-reference/components/dex.md +++ b/docs/configuration-reference/components/dex.md @@ -153,31 +153,32 @@ Table of all the arguments accepted by the component. Example: -| Argument | Description | Default | Required | -|-----------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-------:|:--------:| -| `ingress_host` | Used as the `hosts` domain in the ingress resource for dex that is automatically created. | - | true | -| `issuer_host` | Dex's issuer URL. | - | true | -| `certmanager_cluster_issuer` | `ClusterIssuer` to be used by cert-manager while issuing TLS certificates. Supported values: `letsencrypt-production`, `letsencrypt-staging`. | `letsencrypt-production` | false | -| `connector` | Dex implements connectors that target OpenID Connect and specific platforms such as GitHub, Google etc. Currently only GitHub and OIDC (Google) are supported from lokoctl. | - | true | -| `connector.id` | ID of the connector. | - | true | -| `connector.name` | Name of the connector. | - | true | -| `connector.config` | Configuration for the chosen connector. | - | true | -| `connector.config.client_id` | OAuth app client id. | - | true | -| `connector.config.client_secret` | OAuth app client secret. | - | true | -| `connector.config.issuer` | The OIDC issuer endpoint. For `oidc` connector only. | - | true | -| `connector.config.redirect_uri` | The authorization callback URL. | - | true | -| `connector.config.team_name` | Can be 'name', 'slug' or 'both', see https://github.com/dexidp/dex/blob/master/Documentation/connectors/github.md. For `github` connector only. | - | true | -| `connector.config.admin_email` | The email of a GSuite super user. For `google` connector only. | - | false | -| `connector.config.hosted_domains` | If this field is nonempty, only users from a listed domain will be allowed to log in. For `oidc` and `google` connectors only. | - | false | -| `connector.config.org` | Define one or more organizations and teams. For `github` connector only. | - | true | -| `connector.config.org.name` | Name of the GitHub organization. 
| - | true | -| `connector.config.org.teams` | Name of the team in the provided GitHub organization. | - | true | -| `gsuite_json_config_path` | Path to the Gsuite Service Account JSON file. For `google` connector only. | - | true | -| `connector.static_client` | Configure one or more static clients, i.e. apps that use dex. Example: gangway | - | true | -| `connector.static_client.id` | Client ID used to identify the static client. | - | true | -| `connector.static_client.secret` | Client secret used to identify the static client. | - | true | -| `connector.static_client.name` | Name used when displaying this client to the end user. | - | true | -| `connector.static_client.redirect_uris` | A registered set of redirect URIs. When redirecting from dex to the client, the URI requested to redirect to MUST match one of these values. | - | true | +| Argument | Description | Default | Type | Required | +|-----------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:------------------------:|:------------:|:--------:| +| `ingress_host` | Used as the `hosts` domain in the ingress resource for dex that is automatically created. | - | string | true | +| `issuer_host` | Dex's issuer URL. | - | string | true | +| `certmanager_cluster_issuer` | `ClusterIssuer` to be used by cert-manager while issuing TLS certificates. Supported values: `letsencrypt-production`, `letsencrypt-staging`. | `letsencrypt-production` | string | false | +| `connector` | Dex implements connectors that target OpenID Connect and specific platforms such as GitHub, Google etc. Currently only GitHub and OIDC (Google) are supported from lokoctl. | - | list(object) | true | +| `connector.id` | ID of the connector. | - | string | true | +| `connector.name` | Name of the connector. | - | string | true | +| `connector.config` | Configuration for the chosen connector. | - | object | true | +| `connector.config.client_id` | OAuth app client id. | - | string | true | +| `connector.config.client_secret` | OAuth app client secret. | - | string | true | +| `connector.config.issuer` | The OIDC issuer endpoint. For `oidc` connector only. | - | string | true | +| `connector.config.redirect_uri` | The authorization callback URL. | - | string | true | +| `connector.config.team_name` | Can be 'name', 'slug' or 'both', see https://github.com/dexidp/dex/blob/master/Documentation/connectors/github.md. For `github` connector only. | - | string | true | +| `connector.config.admin_email` | The email of a GSuite super user. For `google` connector only. | - | string | false | +| `connector.config.hosted_domains` | If this field is nonempty, only users from a listed domain will be allowed to log in. For `oidc` and `google` connectors only. | - | list(string) | false | +| `connector.config.org` | Define one or more organizations and teams. For `github` connector only. | - | list(object) | true | +| `connector.config.org.name` | Name of the GitHub organization. | - | string | true | +| `connector.config.org.teams` | Name of the team in the provided GitHub organization. | - | list(string) | true | +| `gsuite_json_config_path` | Path to the Gsuite Service Account JSON file. For `google` connector only. | - | string | false | +| `static_client` | Configure one or more static clients, i.e. apps that use dex. 
Example: gangway | - | list(object) | true | +| `static_client.id` | Client ID used to identify the static client. | - | string | true | +| `static_client.secret` | Client secret used to identify the static client. | - | string | true | +| `static_client.name` | Name used when displaying this client to the end user. | - | string | true | +| `static_client.redirect_uris` | A registered set of redirect URIs. When redirecting from dex to the client, the URI requested to redirect to MUST match one of these values. | - | list(string) | true | + ## Applying diff --git a/docs/configuration-reference/components/external-dns.md b/docs/configuration-reference/components/external-dns.md index 6dcd64bee..843ba68e4 100644 --- a/docs/configuration-reference/components/external-dns.md +++ b/docs/configuration-reference/components/external-dns.md @@ -59,18 +59,19 @@ Table of all the arguments accepted by the component. Example: -| Argument | Description | Default | Required | -|-----------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------|:--------------:|:--------:| -| `sources` | Kubernetes resources type to be observed for new DNS entries by ExternalDNS. | ["service"] | false | -| `namespace` | Namespace to install ExternalDNS. | "external-dns" | false | -| `policy` | Modify how DNS records are sychronized between sources and providers (options: sync, upsert-only). | "upsert-only" | false | -| `metrics` | Enable metrics collection by Prometheus. Needs [Prometheus Operator component](prometheus-operator.md) installed. | false | false | -| `owner_id` | A name that identifies this instance of ExternalDNS. Set it to a unique value across the DNS zone that doesn't change for the lifetime of the cluster. | - | true | -| `aws` | Configuration block for AWS Route53 DNS provider. | - | true | -| `aws.zone_type` | Filter for zones of this type (options: public, private). | "public" | false | -| `aws.zone_id` | ID of the DNS zone. | - | true | -| `aws.aws_access_key_id` | AWS access key ID for AWS credentials. Use environment variable AWS_ACCESS_KEY_ID instead. | - | false | -| `aws.aws_secret_access_key` | AWS secret access key for AWS credentials. Use environment variable AWS_SECRET_ACCESS_KEY instead. | - | false | +| Argument | Description | Default | Type | Required | +|-----------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------|:--------------:|:------------:|:--------:| +| `sources` | Kubernetes resource types to be observed for new DNS entries by ExternalDNS. | ["service"] | list(string) | false | +| `namespace` | Namespace to install ExternalDNS. | "external-dns" | string | false | +| `policy` | Modify how DNS records are synchronized between sources and providers (options: sync, upsert-only). | "upsert-only" | string | false | +| `metrics` | Enable metrics collection by Prometheus. Needs [Prometheus Operator component](prometheus-operator.md) installed. | false | bool | false | +| `owner_id` | A name that identifies this instance of ExternalDNS. Set it to a unique value across the DNS zone that doesn't change for the lifetime of the cluster. | - | string | true | +| `aws` | Configuration block for AWS Route53 DNS provider. | - | object | true | +| `aws.zone_type` | Filter for zones of this type (options: public, private).
| "public" | string | false | +| `aws.zone_id` | ID of the DNS zone. | - | string | true | +| `aws.aws_access_key_id` | AWS access key ID for AWS credentials. Use environment variable AWS_ACCESS_KEY_ID instead. | - | string | false | +| `aws.aws_secret_access_key` | AWS secret access key for AWS credentials. Use environment variable AWS_SECRET_ACCESS_KEY instead. | - | string | false | + ## Applying diff --git a/docs/configuration-reference/components/gangway.md b/docs/configuration-reference/components/gangway.md index 445c53431..b991ba89c 100644 --- a/docs/configuration-reference/components/gangway.md +++ b/docs/configuration-reference/components/gangway.md @@ -76,18 +76,19 @@ Table of all the arguments accepted by the component. Example: -| Argument | Description | Default | Required | -|------------------|-----------------------------------------------------------------------------------------------|:-------:|:--------:| -| `cluster_name` | The name of the cluster. | - | true | -| `ingress_host` | Used as the `hosts` domain in the ingress resource for gangway that is automatically created. | - | true | -| `certmanager_cluster_issuer` | `ClusterIssuer` to be used by cert-manager while issuing TLS certificates. Supported values: `letsencrypt-production`, `letsencrypt-staging`. | `letsencrypt-production` | false | -| `sesion_key` | Gangway session key. | - | true | -| `api_server_url` | URL of Kubernetes API server. | - | true | -| `authorize_url` | Auth endpoint of Dex. | - | true | -| `token_url` | Token endpoint of Dex. | - | true | -| `client_id` | Static client ID. | - | true | -| `client_secret` | Static client secret. | - | true | -| `redirect_url` | Gangway's redirect URL, i.e. OIDC callback endpoint. | - | true | +| Argument | Description | Default | Type | Required | +|------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------|:------------------------:|:------:|:--------:| +| `cluster_name` | The name of the cluster. | - | string | true | +| `ingress_host` | Used as the `hosts` domain in the ingress resource for gangway that is automatically created. | - | string | true | +| `certmanager_cluster_issuer` | `ClusterIssuer` to be used by cert-manager while issuing TLS certificates. Supported values: `letsencrypt-production`, `letsencrypt-staging`. | `letsencrypt-production` | string | false | +| `sesion_key` | Gangway session key. | - | string | true | +| `api_server_url` | URL of Kubernetes API server. | - | string | true | +| `authorize_url` | Auth endpoint of Dex. | - | string | true | +| `token_url` | Token endpoint of Dex. | - | string | true | +| `client_id` | Static client ID. | - | string | true | +| `client_secret` | Static client secret. | - | string | true | +| `redirect_url` | Gangway's redirect URL, i.e. OIDC callback endpoint. | - | string | true | + ## Applying diff --git a/docs/configuration-reference/components/httpbin.md b/docs/configuration-reference/components/httpbin.md index 8ceaccc58..a0951f9dc 100644 --- a/docs/configuration-reference/components/httpbin.md +++ b/docs/configuration-reference/components/httpbin.md @@ -34,10 +34,11 @@ Table of all the arguments accepted by the component. 
Example: -| Argument | Description | Default | Required | -|------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------|:------------------------:|:--------:| -| `ingress_host` | Used as the `hosts` domain in the ingress resource for httpbin that is automatically created. | - | true | -| `certmanager_cluster_issuer` | `ClusterIssuer` to be used by cert-manager while issuing TLS certificates. Supported values: `letsencrypt-production`, `letsencrypt-staging`. | `letsencrypt-production` | false | +| Argument | Description | Default | Type | Required | +|------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------|:------------------------:|:------:|:--------:| +| `ingress_host` | Used as the `hosts` domain in the ingress resource for httpbin that is automatically created. | - | string | true | +| `certmanager_cluster_issuer` | `ClusterIssuer` to be used by cert-manager while issuing TLS certificates. Supported values: `letsencrypt-production`, `letsencrypt-staging`. | `letsencrypt-production` | string | false | + ## Applying diff --git a/docs/configuration-reference/components/metallb.md b/docs/configuration-reference/components/metallb.md index f6b161a98..3116e7f18 100644 --- a/docs/configuration-reference/components/metallb.md +++ b/docs/configuration-reference/components/metallb.md @@ -49,7 +49,6 @@ MetalLB component configuration example: ```tf component "metallb" { - # Optional arguments. address_pools = { default = ["147.63.8.20/32"] special_addresses = ["147.85.47.16/29", "147.85.47.24/29"] @@ -84,14 +83,15 @@ Table of all the arguments accepted by the component. Example: -| Argument | Description | Default | Required | -|-----------------------------|--------------------------------------------------------------------------------------------|:-------:|:--------:| -| `address_pools` | A map which allows specifying one or more CIDRs which MetalLB can use to expose services. | - | true | -| `controller_node_selectors` | A map with specific labels to run MetalLB controller pods selectively on a group of nodes. | - | false | -| `speaker_node_selectors` | A map with specific labels to run MetalLB speaker pods selectively on a group of nodes. | - | false | -| `controller_toleration` | Specify one or more tolerations for controller pods. | - | false | -| `speaker_toleration` | Specify one or more tolerations for speaker pods. | - | false | -| `service_monitor` | Create ServiceMonitor for Prometheus to scrape MetalLB metrics. | false | false | +| Argument | Description | Default | Type | Required | +|-----------------------------|--------------------------------------------------------------------------------------------|:-------:|:---------------------------------------------------------------------------------------------------------------|:--------:| +| `address_pools` | A map which allows specifying one or more CIDRs which MetalLB can use to expose services. | - | object({default = list(string), special_addresses = list(string)}) | true | +| `controller_node_selectors` | A map with specific labels to run MetalLB controller pods selectively on a group of nodes. | - | map(string) | false | +| `speaker_node_selectors` | A map with specific labels to run MetalLB speaker pods selectively on a group of nodes. 
| - | map(string) | false | +| `controller_toleration` | Specify one or more tolerations for controller pods. | - | list(object({key = string, effect = string, operator = string, value = string, toleration_seconds = string })) | false | +| `speaker_toleration` | Specify one or more tolerations for speaker pods. | - | list(object({key = string, effect = string, operator = string, value = string, toleration_seconds = string })) | false | +| `service_monitor` | Create ServiceMonitor for Prometheus to scrape MetalLB metrics. | false | bool | false | + ## Applying diff --git a/docs/configuration-reference/components/openebs-operator.md b/docs/configuration-reference/components/openebs-operator.md index aececd9a5..348b5f010 100644 --- a/docs/configuration-reference/components/openebs-operator.md +++ b/docs/configuration-reference/components/openebs-operator.md @@ -59,10 +59,11 @@ Table of all the arguments accepted by the component. Example: -| Argument | Description | Default | Required | -|----------------------|------------------------- |:-------:|:--------:| -| `ndm_selector_label` | Name of the node label. | - | false | -| `ndm_selector_value` | Value of the node label | - | false | +| Argument | Description | Default | Type | Required | +|----------------------|-------------------------|:-------:|:------:|:--------:| +| `ndm_selector_label` | Name of the node label. | - | string | false | +| `ndm_selector_value` | Value of the node label | - | string | false | + ## Applying diff --git a/docs/configuration-reference/components/openebs-storage-class.md b/docs/configuration-reference/components/openebs-storage-class.md index 7c4fcfdff..9a91d2870 100644 --- a/docs/configuration-reference/components/openebs-storage-class.md +++ b/docs/configuration-reference/components/openebs-storage-class.md @@ -62,11 +62,12 @@ Table of all the arguments accepted by the component. Example: -| Argument | Description | Default | Required | -|-----------------|-------------------------------------------------------------------------------------------------------------------------------|:-------:|:--------:| -| `replica_count` | Defines the number of cStor volume replicas. | 3 | false | -| `default` | Indicates whether the storage class is default or not. | false | false | -| `disks` | List of selected unclaimed BlockDevice CRs which are unmounted and do not contain a filesystem in each participating node. | - | false | +| Argument | Description | Default | Type | Required | +|-----------------|----------------------------------------------------------------------------------------------------------------------------|:-------:|:------------:|:--------:| +| `replica_count` | Defines the number of cStor volume replicas. | 3 | number | false | +| `default` | Indicates whether the storage class is default or not. | false | bool | false | +| `disks` | List of selected unclaimed BlockDevice CRs which are unmounted and do not contain a filesystem in each participating node. | - | list(string) | false | + ## Applying diff --git a/docs/configuration-reference/components/prometheus-operator.md b/docs/configuration-reference/components/prometheus-operator.md index 74dfe2e94..cbbeb5e5f 100644 --- a/docs/configuration-reference/components/prometheus-operator.md +++ b/docs/configuration-reference/components/prometheus-operator.md @@ -114,37 +114,37 @@ EOF Table of all the arguments accepted by the component. 
Example: +| Argument | Description | Default | Type | Required | +|----------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|:-----------:|:--------:| +| `namespace` | Namespace to deploy the Prometheus Operator. | `monitoring` | string | false | +| `grafana.admin_password` | Password for `admin` user in Grafana. If not provided it is auto generated and stored in secret `prometheus-operator-grafana`. | - | string | false | +| `grafana.secret_env` | Sensitive environment variables passed to Grafana pod and stored as secret. Read more on manipulating `grafana.ini` using env var [here](https://grafana.com/docs/grafana/latest/installation/configuration/#configure-with-environment-variables). | - | map(string) | false | +| `grafana.ingress.host` | Ingress URL host to expose Grafana over the internet. **NOTE:** When running on Packet, a DNS entry pointing at the ingress controller needs to be created. | - | string | true | +| `grafana.ingress.class` | Ingress class to use for Grafana ingress. | `contour` | string | false | +| `grafana.ingress.certmanager_cluster_issuer` | `ClusterIssuer` to be used by cert-manager while issuing TLS certificates. Supported values: `letsencrypt-production`, `letsencrypt-staging`. | `letsencrypt-production` | string | false | +| `prometheus_operator_node_selector` | Node selector to specify nodes where the Prometheus Operator pods should be deployed. | {} | map(string) | false | +| `prometheus_metrics_retention` | Time duration Prometheus shall retain data for. Must match the regular expression `[0-9]+(ms\|s\|m\|h\|d\|w\|y)` (milliseconds, seconds, minutes, hours, days, weeks and years). | `10d` | string | false | +| `prometheus_external_url` | The external URL Prometheus instances will be available under. This is necessary to generate correct URLs. This is necessary if Prometheus is not served from root of a DNS name. | "" | string | false | +| `prometheus_node_selector` | Node selector to specify nodes where the Prometheus pods should be deployed. | {} | map(string) | false | +| `prometheus_storage_size` | Storage capacity for the Prometheus in bytes. You can express storage as a fixed-point integer using one of these suffixes: E, P, T, G, M, K. You can also use the power-of-two equivalents: Ei, Pi, Ti, Gi, Mi, Ki. | "50Gi" | string | false | +| `watch_labeled_service_monitors` | By default prometheus operator watches only the ServiceMonitor objects in the cluster that are labeled `release: prometheus-operator`. If set to `false` then all the ServiceMonitors will be watched. | `true` | bool | false | +| `watch_labeled_prometheus_rules` | By default prometheus operator watches only the PrometheusRule objects in the cluster that are labeled `release: prometheus-operator` and `app: prometheus-operator`. If set to `false` then all the PrometheusRule will be watched. | `true` | bool | false | +| `alertmanager_retention` | Time duration Alertmanager shall retain data for. Must match the regular expression `[0-9]+(ms\|s\|m\|h)` (milliseconds, seconds, minutes and hours). 
| `120h` | string | false | +| `alertmanager_external_url` | The external URL the Alertmanager instances will be available under. This is necessary to generate correct URLs. This is necessary if Alertmanager is not served from root of a DNS name. | "" | string | false | +| `alertmanager_config` | Provide YAML file path to configure Alertmanager. See [https://prometheus.io/docs/alerting/configuration/#configuration-file](https://prometheus.io/docs/alerting/configuration/#configuration-file). | `{"global":{"resolve_timeout":"5m"},"route":{"group_by":["job"],"group_wait":"30s","group_interval":"5m","repeat_interval":"12h","receiver":"null","routes":[{"match":{"alertname":"Watchdog"},"receiver":"null"}]},"receivers":[{"name":"null"}]}` | string | false | +| `alertmanager_node_selector` | Node selector to specify nodes where the AlertManager pods should be deployed. | {} | map(string) | false | +| `alertmanager_storage_size` | Storage capacity for the Alertmanager in bytes. You can express storage as a fixed-point integer using one of these suffixes: E, P, T, G, M, K. You can also use the power-of-two equivalents: Ei, Pi, Ti, Gi, Mi, Ki. | "50Gi" | string | false | +| `disable_webhooks` | Disables validation and mutation webhooks. This might be required on older versions of Kubernetes to install successfully. | false | bool | false | +| `monitor` | Block, which allows to disable scraping of individual Kubernetes components. | - | object | false | +| `monitor.etcd` | Controls if the default Prometheus instance should scrape etcd metrics. | true | bool | false | +| `monitor.kube_controller_manager` | Controls if the default Prometheus instance should scrape kube-controller-manager metrics. | true | bool | false | +| `monitor.kube_scheduler` | Controls if the default Prometheus instance should scrape kube-scheduler metrics. | true | bool | false | +| `monitor.kube_proxy` | Controls if the default Prometheus instance should scrape kube-proxy metrics. | true | bool | false | +| `monitor.kubelet` | Controls if the default Prometheus instance should scrape kubelet metrics. | true | bool | false | +| `coredns` | Block, which allows to customize, how CoreDNS is scraped. | - | object | false | +| `coredns.selector` | Defines, how CoreDNS pods should be selected for scraping. | {"k8s-app":"coredns","tier":"control-plane"} | map(string) | false | +| `storage_class` | Storage Class to use for the storage allowed for Prometheus and Alertmanager. | - | string | false | -| Argument | Description | Default | Required | -|-------- |--------------|:-------:|:--------:| -| `namespace` | Namespace to deploy the Prometheus Operator. | `monitoring` | false | -| `grafana.admin_password` | Password for `admin` user in Grafana. If not provided it is auto generated and stored in secret `prometheus-operator-grafana`. | - | false | -| `grafana.secret_env` | Sensitive environment variables passed to Grafana pod and stored as secret. Read more on manipulating `grafana.ini` using env var [here](https://grafana.com/docs/grafana/latest/installation/configuration/#configure-with-environment-variables). | - | false | -| `grafana.ingress.host` | Ingress URL host to expose Grafana over the internet. **NOTE:** When running on Packet, a DNS entry pointing at the ingress controller needs to be created. | - | true | -| `grafana.ingress.class` | Ingress class to use for Grafana ingress. | `contour` | false | -| `grafana.ingress.certmanager_cluster_issuer` | `ClusterIssuer` to be used by cert-manager while issuing TLS certificates. 
Supported values: `letsencrypt-production`, `letsencrypt-staging`. | `letsencrypt-production` | false | -| `prometheus_operator_node_selector` | Node selector to specify nodes where the Prometheus Operator pods should be deployed. | {} | false | -| `prometheus_metrics_retention` | Time duration Prometheus shall retain data for. Must match the regular expression `[0-9]+(ms\|s\|m\|h\|d\|w\|y)` (milliseconds, seconds, minutes, hours, days, weeks and years). | `10d` | false | -| `prometheus_external_url` | The external URL Prometheus instances will be available under. This is necessary to generate correct URLs. This is necessary if Prometheus is not served from root of a DNS name. | "" | false | -| `prometheus_node_selector` | Node selector to specify nodes where the Prometheus pods should be deployed. | {} | false | -| `prometheus_storage_size` | Storage capacity for the Prometheus in bytes. You can express storage as a fixed-point integer using one of these suffixes: E, P, T, G, M, K. You can also use the power-of-two equivalents: Ei, Pi, Ti, Gi, Mi, Ki. | "50Gi" | false | -| `watch_labeled_service_monitors` | By default prometheus operator watches only the ServiceMonitor objects in the cluster that are labeled `release: prometheus-operator`. If set to `false` then all the ServiceMonitors will be watched. | `true` | false | -| `watch_labeled_prometheus_rules` | By default prometheus operator watches only the PrometheusRule objects in the cluster that are labeled `release: prometheus-operator` and `app: prometheus-operator`. If set to `false` then all the PrometheusRule will be watched. | `true` | false | -| `alertmanager_retention` | Time duration Alertmanager shall retain data for. Must match the regular expression `[0-9]+(ms\|s\|m\|h)` (milliseconds, seconds, minutes and hours). | `120h` | false | -| `alertmanager_external_url` | The external URL the Alertmanager instances will be available under. This is necessary to generate correct URLs. This is necessary if Alertmanager is not served from root of a DNS name. | "" | false | -| `alertmanager_config` | Provide YAML file path to configure Alertmanager. See [https://prometheus.io/docs/alerting/configuration/#configuration-file](https://prometheus.io/docs/alerting/configuration/#configuration-file). | `{"global":{"resolve_timeout":"5m"},"route":{"group_by":["job"],"group_wait":"30s","group_interval":"5m","repeat_interval":"12h","receiver":"null","routes":[{"match":{"alertname":"Watchdog"},"receiver":"null"}]},"receivers":[{"name":"null"}]}` | false | -| `alertmanager_node_selector` | Node selector to specify nodes where the AlertManager pods should be deployed. | {} | false | -| `alertmanager_storage_size` | Storage capacity for the Alertmanager in bytes. You can express storage as a fixed-point integer using one of these suffixes: E, P, T, G, M, K. You can also use the power-of-two equivalents: Ei, Pi, Ti, Gi, Mi, Ki. | "50Gi" | false | -| `disable_webhooks` | Disables validation and mutation webhooks. This might be required on older versions of Kubernetes to install successfully. | false | false | -| `monitor` | Block, which allows to disable scraping of individual Kubernetes components. | - | false | -| `monitor.etcd` | Controls if the default Prometheus instance should scrape etcd metrics. | true | false | -| `monitor.kube_controller_manager` | Controls if the default Prometheus instance should scrape kube-controller-manager metrics. 
| true | false | -| `monitor.kube_scheduler` | Controls if the default Prometheus instance should scrape kube-scheduler metrics. | true | false | -| `monitor.kube_proxy` | Controls if the default Prometheus instance should scrape kube-proxy metrics. | true | false | -| `monitor.kubelet` | Controls if the default Prometheus instance should scrape kubelet metrics. | true | false | -| `coredns` | Block, which allows to customize, how CoreDNS is scraped. | - | false | -| `coredns.selector` | Defines, how CoreDNS pods should be selected for scraping. | {"k8s-app":"coredns","tier":"control-plane"} | false | -| `storage_class` | Storage Class to use for the storage allowed for Prometheus and Alertmanager. | - | false | ## Applying diff --git a/docs/configuration-reference/components/rook-ceph.md b/docs/configuration-reference/components/rook-ceph.md index 9f0aa34a6..b116e38ea 100644 --- a/docs/configuration-reference/components/rook-ceph.md +++ b/docs/configuration-reference/components/rook-ceph.md @@ -67,15 +67,17 @@ Table of all the arguments accepted by the component. Example: -| Argument | Description | Default | Required | -|-------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------|:-------:|:--------:| -| `namespace` | Namespace to deploy the Ceph cluster into. Must be the same as the rook operator. | rook | false | -| `monitor_count` | Number of Ceph monitors to deploy. An odd number like 3 or 5 is recommended which should also be sufficient for most cases. | 1 | false | -| `node_affinity` | Node affinity for deploying the Ceph cluster pods. | - | false | -| `toleration` | Tolerations that the Ceph cluster pods will tolerate. | - | false | -| `metadata_device` | Name of the device to store the metadata on each storage machine. **Note**: Provide just the name of the device and skip prefixing with `/dev/`. | - | false | -| `storage_class.enable` | Install Storage Class config. | false | false | -| `storage_class.default` | Make this Storage Class as a default one. | false | false | + +| Argument | Description | Default | Type | Required | +|-------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------|:-------:|:---------------------------------------------------------------------------------------------------------------|:--------:| +| `namespace` | Namespace to deploy the Ceph cluster into. Must be the same as the rook operator. | "rook" | string | false | +| `monitor_count` | Number of Ceph monitors to deploy. An odd number like 3 or 5 is recommended which should also be sufficient for most cases. | 1 | number | false | +| `node_affinity` | Node affinity for deploying the Ceph cluster pods. | - | list(object({key = string, operator = string, values = list(string)})) | false | +| `toleration` | Tolerations that the Ceph cluster pods will tolerate. | - | list(object({key = string, effect = string, operator = string, value = string, toleration_seconds = string })) | false | +| `metadata_device` | Name of the device to store the metadata on each storage machine. **Note**: Provide just the name of the device and skip prefixing with `/dev/`. | - | string | false | +| `storage_class.enable` | Install Storage Class config. | false | bool | false | +| `storage_class.default` | Make this Storage Class as a default one. 
| false | bool | false | + ## Applying diff --git a/docs/configuration-reference/components/rook.md b/docs/configuration-reference/components/rook.md index f9d744253..66d67ac0f 100644 --- a/docs/configuration-reference/components/rook.md +++ b/docs/configuration-reference/components/rook.md @@ -57,16 +57,17 @@ Table of all the arguments accepted by the component. Example: -| Argument | Description | Default | Required | -|------------------------------|----------------------------------------------------------------------------------------------------------|:-------:|:--------:| -| `namespace` | Namespace to deploy the rook operator into. | rook | false | -| `node_selector` | A map with specific labels to run Rook pods selectively on a group of nodes. | - | false | -| `toleration` | Tolerations that the operator's pods will tolerate. | - | false | -| `agent_toleration_key` | Toleration key for the rook agent pods. | - | false | -| `agent_toleration_effect` | Toleration effect for the rook agent pods. Needs to be specified if `agent_toleration_key` is set. | - | false | -| `discover_toleration_key` | Toleration key for the rook discover pods. | - | false | -| `discover_toleration_effect` | Toleration effect for the rook discover pods. Needs to be specified if `discover_toleration_key` is set. | - | false | -| `enable_monitoring` | Enable Monitoring for the Rook sub-systems. Make sure that the Prometheus Operator is installed. | false | false | +| Argument | Description | Default | Type | Required | +|------------------------------|----------------------------------------------------------------------------------------------------------|:-------:|:---------------------------------------------------------------------------------------------------------------|:--------:| +| `namespace` | Namespace to deploy the rook operator into. | "rook" | string | false | +| `node_selector` | A map with specific labels to run Rook pods selectively on a group of nodes. | - | map(string) | false | +| `toleration` | Tolerations that the operator's pods will tolerate. | - | list(object({key = string, effect = string, operator = string, value = string, toleration_seconds = string })) | false | +| `agent_toleration_key` | Toleration key for the rook agent pods. | - | string | false | +| `agent_toleration_effect` | Toleration effect for the rook agent pods. Needs to be specified if `agent_toleration_key` is set. | - | string | false | +| `discover_toleration_key` | Toleration key for the rook discover pods. | - | string | false | +| `discover_toleration_effect` | Toleration effect for the rook discover pods. Needs to be specified if `discover_toleration_key` is set. | - | string | false | +| `enable_monitoring` | Enable Monitoring for the Rook sub-systems. Make sure that the Prometheus Operator is installed. | false | bool | false | + ## Applying diff --git a/docs/configuration-reference/components/velero.md b/docs/configuration-reference/components/velero.md index 2171aa854..151c4d724 100644 --- a/docs/configuration-reference/components/velero.md +++ b/docs/configuration-reference/components/velero.md @@ -75,26 +75,27 @@ Table of all the arguments accepted by the component. Example: -| Argument | Description | Default | Required | -|-------------------------------------------------|----------------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------:|:--------:| -| `provider` | Supported provider name. 
Only `azure` is supported for now. | "azure" | false | -| `namespace` | Namespace to install Velero. | "velero" | false | -| `metrics` | Configure Prometheus to scrape Velero metrics. Needs the [Prometheus Operator component](prometheus-operator.md) installed.| - | false | -| `metrics.enabled` | Adds Prometheus annotations to Velero deployment if enabled. | false | false | -| `metrics.service_monitor` | Adds ServiceMonitor resource for Prometheus. Requires `metrics.enabled` as true. | false | false | -| `azure` | Configure Azure provider for Velero. | - | true | -| `azure.subscription_id` | Azure Subscription ID where client application is created. Can be obtained with `az account list`. | - | true | -| `azure.tenant_id` | Azure Tenant ID where your subscription is created. Can be obtained with `az account list`. | - | true | -| `azure.client_id` | Azure Application Client ID to perform Azure operations. | - | true | -| `azure.client_secret` | Azure Application Client secret. | - | true | -| `azure.resource_group` | Azure resource group, where PVC Disks are created. If this argument is wrong, Velero will fail to create PVC snapshots. | - | true | -| `azure.backup_storage_location` | Configure backup storage location and metadata. | - | true | -| `azure.backup_storage_location.resource_group` | Name of the resource group containing the storage account for this backup storage location. | - | true | -| `azure.backup_storage_location.storage_account` | Name of the storage account for this backup storage location. | - | true | -| `azure.backup_storage_location.bucket` | Name of the storage container to store backups. | - | true | -| `azure.volume_snapshot_location` | Configure PVC snapshot location. | - | false | -| `azure.volume_snapshot_location.resource_group` | Azure Resource Group where snapshots will be stored. | Stored in the same resource group as the cluster. | false | -| `azure.volume_snapshot_location.api_timeout` | Azure API timeout. | "10m" | false | +| Argument | Description | Default | Type | Required | +|-------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------:|:------:|:--------:| +| `provider` | Supported provider name. Only `azure` is supported for now. | "azure" | string | false | +| `namespace` | Namespace to install Velero. | "velero" | string | false | +| `metrics` | Configure Prometheus to scrape Velero metrics. Needs the [Prometheus Operator component](prometheus-operator.md) installed. | - | object | false | +| `metrics.enabled` | Adds Prometheus annotations to Velero deployment if enabled. | false | bool | false | +| `metrics.service_monitor` | Adds ServiceMonitor resource for Prometheus. Requires `metrics.enabled` as true. | false | bool | false | +| `azure` | Configure Azure provider for Velero. | - | object | true | +| `azure.subscription_id` | Azure Subscription ID where client application is created. Can be obtained with `az account list`. | - | string | true | +| `azure.tenant_id` | Azure Tenant ID where your subscription is created. Can be obtained with `az account list`. | - | string | true | +| `azure.client_id` | Azure Application Client ID to perform Azure operations. | - | string | true | +| `azure.client_secret` | Azure Application Client secret. | - | string | true | +| `azure.resource_group` | Azure resource group, where PVC Disks are created. 
If this argument is wrong, Velero will fail to create PVC snapshots. | - | string | true | +| `azure.backup_storage_location` | Configure backup storage location and metadata. | - | object | true | +| `azure.backup_storage_location.resource_group` | Name of the resource group containing the storage account for this backup storage location. | - | string | true | +| `azure.backup_storage_location.storage_account` | Name of the storage account for this backup storage location. | - | string | true | +| `azure.backup_storage_location.bucket` | Name of the storage container to store backups. | - | string | true | +| `azure.volume_snapshot_location` | Configure PVC snapshot location. | - | object | false | +| `azure.volume_snapshot_location.resource_group` | Azure Resource Group where snapshots will be stored. | Stored in the same resource group as the cluster. | string | false | +| `azure.volume_snapshot_location.api_timeout` | Azure API timeout. | "10m" | string | false | + ## Applying diff --git a/docs/configuration-reference/platforms/aks.md b/docs/configuration-reference/platforms/aks.md index 4542105b5..df0475726 100644 --- a/docs/configuration-reference/platforms/aks.md +++ b/docs/configuration-reference/platforms/aks.md @@ -93,24 +93,25 @@ block in the cluster configuration. ## Attribute reference -| Argument | Description | Default | Required | -| ----------------------- | ------------------------------------------------------------ | :-----------: | :------: | -| `asset_dir` | Location where Lokomotive stores cluster assets. | - | true | -| `cluster_name` | Name of the cluster. **NOTE**: It must be unique per resource group. | - | true | -| `tenant_id` | Azure Tenant ID. Can also be provided using the `LOKOMOTIVE_AKS_TENANT_ID` environment variable. | - | true | -| `subscription_id` | Azure Subscription ID. Can also be provided using the `LOKOMOTIVE_AKS_SUBSCRIPTION_ID` environment variable. | - | true | -| `resource_group_name` | Name of the resource group, where AKS cluster object will be created. Please note, that AKS will also create a separate resource group for workers and other required objects, like load balancers, disks etc. If `manage_resource_group` parameter is set to `false`, this resource group must be manually created before cluster creation. | - | true | -| `client_id` | Azure service principal ID used for running the AKS cluster. Can also be provided using the `LOKOMOTIVE_AKS_CLIENT_ID`. This parameter is mutually exclusive with `application_name` parameter. | - | false | -| `client_secret` | Azure service principal secret used for running the AKS cluster. Can also be provided using the `LOKOMOTIVE_AKS_CLIENT_SECRET`. This parameter is mutually exclusive with `application_name` parameter. | - | false | -| `tags` | Additional tags for Azure resources. | - | false | -| `location` | Azure location where resources will be created. Valid values can be obtained using the following command from Azure CLI: `az account list-locations -o table`. | "West Europe" | false | -| `application_name` | Azure AD application name. If specified, a new Application will be created in Azure AD together with a service principal, which will be used to run the AKS cluster on behalf of the user to provide full cluster creation automation. Please note that this requires [permissions to create applications in Azure AD](https://docs.microsoft.com/en-us/azure/active-directory/users-groups-roles/roles-delegate-app-roles). This parameter is mutually exclusive with `client_id` and `client_secret`. 
| - | false | -| `manage_resource_group` | If `true`, a resource group for the AKS object will be created on behalf of the user. | true | false | -| `worker_pool` | Configuration block for worker pools. At least one worker pool must be defined. | - | true | -| `worker_pool.count` | Number of workers in the worker pool. Can be changed afterwards to add or delete workers. | - | true | -| `worker_pool.vm_size` | Azure VM size for worker nodes. | - | true | -| `worker_pool.labels` | Map of Kubernetes Node object labels. | - | false | -| `worker_pool.taints` | List of Kubernetes Node taints. | - | false | +| Argument | Description | Default | Type | Required | +|-------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-------------:|:------------:|:--------:| +| `asset_dir` | Location where Lokomotive stores cluster assets. | - | string | true | +| `cluster_name` | Name of the cluster. **NOTE**: It must be unique per resource group. | - | string | true | +| `tenant_id` | Azure Tenant ID. Can also be provided using the `LOKOMOTIVE_AKS_TENANT_ID` environment variable. | - | string | true | +| `subscription_id` | Azure Subscription ID. Can also be provided using the `LOKOMOTIVE_AKS_SUBSCRIPTION_ID` environment variable. | - | string | true | +| `resource_group_name` | Name of the resource group, where AKS cluster object will be created. Please note, that AKS will also create a separate resource group for workers and other required objects, like load balancers, disks etc. If `manage_resource_group` parameter is set to `false`, this resource group must be manually created before cluster creation. | - | string | true | +| `client_id` | Azure service principal ID used for running the AKS cluster. Can also be provided using the `LOKOMOTIVE_AKS_CLIENT_ID`. This parameter is mutually exclusive with `application_name` parameter. | - | string | false | +| `client_secret` | Azure service principal secret used for running the AKS cluster. Can also be provided using the `LOKOMOTIVE_AKS_CLIENT_SECRET`. This parameter is mutually exclusive with `application_name` parameter. | - | string | false | +| `tags` | Additional tags for Azure resources. | - | map(string) | false | +| `location` | Azure location where resources will be created. Valid values can be obtained using the following command from Azure CLI: `az account list-locations -o table`. | "West Europe" | string | false | +| `application_name` | Azure AD application name. If specified, a new Application will be created in Azure AD together with a service principal, which will be used to run the AKS cluster on behalf of the user to provide full cluster creation automation. Please note that this requires [permissions to create applications in Azure AD](https://docs.microsoft.com/en-us/azure/active-directory/users-groups-roles/roles-delegate-app-roles). This parameter is mutually exclusive with `client_id` and `client_secret`. | - | string | false | +| `manage_resource_group` | If `true`, a resource group for the AKS object will be created on behalf of the user. 
| true | bool | false | +| `worker_pool` | Configuration block for worker pools. At least one worker pool must be defined. | - | list(object) | true | +| `worker_pool.count` | Number of workers in the worker pool. Can be changed afterwards to add or delete workers. | - | number | true | +| `worker_pool.vm_size` | Azure VM size for worker nodes. | - | string | true | +| `worker_pool.labels` | Map of Kubernetes Node object labels. | - | map(string) | false | +| `worker_pool.taints` | List of Kubernetes Node taints. | - | list(string) | false | + ## Applying diff --git a/docs/configuration-reference/platforms/aws.md b/docs/configuration-reference/platforms/aws.md index 877a00b4a..d460afad8 100644 --- a/docs/configuration-reference/platforms/aws.md +++ b/docs/configuration-reference/platforms/aws.md @@ -190,53 +190,54 @@ worker_pool "my-worker-pool" { ## Attribute reference -| Argument | Description | Default | Required | -|-------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------:|:--------:| -| `asset_dir` | Location where Lokomotive stores cluster assets. | - | true | -| `cluster_name` | Name of the cluster. **NOTE**: It must be unique per DNS Zone and region. | - | true | -| `tags` | Optional details to tag on AWS resources. | - | false | -| `os_channel` | Flatcar Container Linux AMI channel to install from (stable, beta, alpha, edge). | "stable" | false | -| `os_version` | Flatcar Container Linux version to install. Version such as "2303.3.1" or "current". | "current" | false | -| `dns_zone` | Route 53 DNS Zone. | - | true | -| `dns_zone_id` | Route 53 DNS Zone ID. | - | true | -| `oidc` | OIDC configuration block. | - | false | -| `oidc.issuer_url` | URL of the provider which allows the API server to discover public signing keys. Only URLs which use the https:// scheme are accepted. | - | false | -| `oidc.client_id` | A client id that all tokens must be issued for. | "gangway" | false | -| `oidc.username_claim` | JWT claim to use as the user name. | "email" | false | -| `oidc.groups_claim` | JWT claim to use as the user’s group. | "groups" | false | -| `enable_csi` | Set up IAM role needed for dynamic volumes provisioning to work on AWS | false | false | -| `expose_nodeports` | Expose node ports `30000-32767` in the security group, if set to `true`. | false | false | -| `ssh_pubkeys` | List of SSH public keys for user `core`. Each element must be specified in a valid OpenSSH public key format, as defined in RFC 4253 Section 6.6, e.g. "ssh-rsa AAAAB3N...". | - | true | -| `controller_count` | Number of controller nodes. | 1 | false | -| `controller_type` | AWS instance type for controllers. | "t3.small" | false | -| `controller_clc_snippets` | Controller Flatcar Container Linux Config snippets. | [] | false | -| `region` | AWS region to use for deploying the cluster. | "eu-central-1" | false | -| `enable_aggregation` | Enable the Kubernetes Aggregation Layer. | true | false | -| `disk_size` | Size of the EBS volume in GB. | 40 | false | -| `disk_type` | Type of the EBS volume (e.g. standard, gp2, io1). | "gp2" | false | -| `disk_iops` | IOPS of the EBS volume (e.g 100). | 0 | false | -| `network_mtu` | CNI interface MTU. Use 8981 if using instances types with Jumbo frames. | 1480 | false | -| `host_cidr` | CIDR IPv4 range to assign to EC2 nodes. 
| "10.0.0.0/16" | false | -| `pod_cidr` | CIDR IPv4 range to assign Kubernetes pods. | "10.2.0.0/16" | false | -| `service_cidr` | CIDR IPv4 range to assign Kubernetes services. | "10.3.0.0/16" | false | -| `cluster_domain_suffix` | Cluster's DNS domain. | "cluster.local" | false | -| `enable_reporting` | Enables usage or analytics reporting to upstream. | false | false | -| `certs_validity_period_hours` | Validity of all the certificates in hours. | 8760 | false | -| `worker_pool` | Configuration block for worker pools. There can be more than one. **NOTE**: worker pool name must be unique per DNS zone and region. | - | true | -| `worker_pool.count` | Number of workers in the worker pool. Can be changed afterwards to add or delete workers. | - | true | -| `worker_pool.instance_type` | AWS instance type for worker nodes. | "t3.small" | false | -| `worker_pool.ssh_pubkeys` | List of SSH public keys for user `core`. Each element must be specified in a valid OpenSSH public key format, as defined in RFC 4253 Section 6.6, e.g. "ssh-rsa AAAAB3N...". | - | true | -| `worker_pool.os_channel` | Flatcar Container Linux channel to install from (stable, beta, alpha, edge). | "stable" | false | -| `worker_pool.os_version` | Flatcar Container Linux version to install. Version such as "2303.3.1" or "current". | "current" | false | -| `worker_pool.disk_size` | Size of the EBS volume in GB. | 40 | false | -| `worker_pool.disk_type` | Type of the EBS volume (e.g. standard, gp2, io1). | "gp2" | false | -| `worker_pool.disk_iops` | IOPS of the EBS volume (e.g 100). | 0 | false | -| `worker_pool.spot_price` | Spot price in USD for autoscaling group spot instances. Leave as empty string for autoscaling group to use on-demand instances. Switching in-place from spot to on-demand is not possible. | "" | false | -| `worker_pool.target_groups` | Additional target group ARNs to which worker instances should be added. | [] | false | -| `worker_pool.lb_http_port` | Port the load balancer should listen on for HTTP connections. | 80 | false | -| `worker_pool.lb_https_port` | Port the load balancer should listen on for HTTPS connections. | 443 | false | -| `worker_pool.clc_snippets` | CWorker Flatcar Container Linux Config snippets. | [] | false | -| `worker_pool.tags` | Optional details to tag on AWS resources. | - | false | +| Argument | Description | Default | Type | Required | +|-------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------:|:------------:|:--------:| +| `asset_dir` | Location where Lokomotive stores cluster assets. | - | string | true | +| `cluster_name` | Name of the cluster. **NOTE**: It must be unique per DNS Zone and region. | - | string | true | +| `tags` | Optional details to tag on AWS resources. | - | map(string) | false | +| `os_channel` | Flatcar Container Linux AMI channel to install from (stable, beta, alpha, edge). | "stable" | string | false | +| `os_version` | Flatcar Container Linux version to install. Version such as "2303.3.1" or "current". | "current" | string | false | +| `dns_zone` | Route 53 DNS Zone. | - | string | true | +| `dns_zone_id` | Route 53 DNS Zone ID. | - | string | true | +| `oidc` | OIDC configuration block. | - | object | false | +| `oidc.issuer_url` | URL of the provider which allows the API server to discover public signing keys. Only URLs which use the https:// scheme are accepted. 
| - | string | false |
+| `oidc.client_id` | A client id that all tokens must be issued for. | "gangway" | string | false |
+| `oidc.username_claim` | JWT claim to use as the user name. | "email" | string | false |
+| `oidc.groups_claim` | JWT claim to use as the user’s group. | "groups" | string | false |
+| `enable_csi` | Set up the IAM role needed for dynamic volume provisioning to work on AWS. | false | bool | false |
+| `expose_nodeports` | Expose node ports `30000-32767` in the security group, if set to `true`. | false | bool | false |
+| `ssh_pubkeys` | List of SSH public keys for user `core`. Each element must be specified in a valid OpenSSH public key format, as defined in RFC 4253 Section 6.6, e.g. "ssh-rsa AAAAB3N...". | - | list(string) | true |
+| `controller_count` | Number of controller nodes. | 1 | number | false |
+| `controller_type` | AWS instance type for controllers. | "t3.small" | string | false |
+| `controller_clc_snippets` | Controller Flatcar Container Linux Config snippets. | [] | list(string) | false |
+| `region` | AWS region to use for deploying the cluster. | "eu-central-1" | string | false |
+| `enable_aggregation` | Enable the Kubernetes Aggregation Layer. | true | bool | false |
+| `disk_size` | Size of the EBS volume in GB. | 40 | number | false |
+| `disk_type` | Type of the EBS volume (e.g. standard, gp2, io1). | "gp2" | string | false |
+| `disk_iops` | IOPS of the EBS volume (e.g. 100). | 0 | number | false |
+| `network_mtu` | CNI interface MTU. Use 8981 if using instance types with Jumbo frames. | 1480 | number | false |
+| `host_cidr` | CIDR IPv4 range to assign to EC2 nodes. | "10.0.0.0/16" | string | false |
+| `pod_cidr` | CIDR IPv4 range to assign Kubernetes pods. | "10.2.0.0/16" | string | false |
+| `service_cidr` | CIDR IPv4 range to assign Kubernetes services. | "10.3.0.0/16" | string | false |
+| `cluster_domain_suffix` | Cluster's DNS domain. | "cluster.local" | string | false |
+| `enable_reporting` | Enables usage or analytics reporting to upstream. | false | bool | false |
+| `certs_validity_period_hours` | Validity of all the certificates in hours. | 8760 | number | false |
+| `worker_pool` | Configuration block for worker pools. There can be more than one. **NOTE**: worker pool name must be unique per DNS zone and region. | - | list(object) | true |
+| `worker_pool.count` | Number of workers in the worker pool. Can be changed afterwards to add or delete workers. | - | number | true |
+| `worker_pool.instance_type` | AWS instance type for worker nodes. | "t3.small" | string | false |
+| `worker_pool.ssh_pubkeys` | List of SSH public keys for user `core`. Each element must be specified in a valid OpenSSH public key format, as defined in RFC 4253 Section 6.6, e.g. "ssh-rsa AAAAB3N...". | - | list(string) | true |
+| `worker_pool.os_channel` | Flatcar Container Linux channel to install from (stable, beta, alpha, edge). | "stable" | string | false |
+| `worker_pool.os_version` | Flatcar Container Linux version to install. Version such as "2303.3.1" or "current". | "current" | string | false |
+| `worker_pool.disk_size` | Size of the EBS volume in GB. | 40 | number | false |
+| `worker_pool.disk_type` | Type of the EBS volume (e.g. standard, gp2, io1). | "gp2" | string | false |
+| `worker_pool.disk_iops` | IOPS of the EBS volume (e.g. 100). | 0 | number | false |
+| `worker_pool.spot_price` | Spot price in USD for autoscaling group spot instances. Leave as empty string for autoscaling group to use on-demand instances. Switching in-place from spot to on-demand is not possible. | "" | string | false |
+| `worker_pool.target_groups` | Additional target group ARNs to which worker instances should be added. | [] | list(string) | false |
+| `worker_pool.lb_http_port` | Port the load balancer should listen on for HTTP connections. | 80 | number | false |
+| `worker_pool.lb_https_port` | Port the load balancer should listen on for HTTPS connections. | 443 | number | false |
+| `worker_pool.clc_snippets` | Worker Flatcar Container Linux Config snippets. | [] | list(string) | false |
+| `worker_pool.tags` | Optional details to tag on AWS resources. | - | map(string) | false |
+
 ## Applying
diff --git a/docs/configuration-reference/platforms/baremetal.md b/docs/configuration-reference/platforms/baremetal.md
index eee9ce141..6194a7e11 100644
--- a/docs/configuration-reference/platforms/baremetal.md
+++ b/docs/configuration-reference/platforms/baremetal.md
@@ -126,32 +126,33 @@ os_version = var.custom_default_os_version
 ## Attribute reference
-| Argument | Description | Default | Required |
-|-----------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:----------------:|:--------:|
-| `asset_dir` | Location where Lokomotive stores cluster assets. | - | true |
-| `cached_install` | Whether the operating system should PXE boot and install from matchbox /assets cache. Note that the admin must have downloaded the `os_version` into matchbox assets. | "false" | false |
-| `cluster_name` | Name of the cluster. | - | true |
-| `controller_domains` | Ordered list of controller FQDNs. Example: ["node1.example.com"] | - | true |
-| `controller_macs` | Ordered list of controller identifying MAC addresses. Example: ["52:54:00:a1:9c:ae"] | - | true |
-| `controller_names` | Ordered list of controller names. Example: ["node1"] | - | true |
-| `k8s_domain_name` | Controller DNS name which resolves to a controller instance. Workers and kubeconfig's will communicate with this endpoint. Example: "cluster.example.com" | - | true |
-| `labels` | Map of extra Kubernetes Node labels for worker nodes. | - | false |
-| `matchbox_ca_path` | Path to the CA to verify and authenticate client certificates. | - | true |
-| `matchbox_client_cert_path` | Path to the server TLS certificate file. | - | true |
-| `matchbox_client_key_path` | Path to the server TLS key file. | - | true |
-| `matchbox_endpoint` | Matchbox API endpoint. | - | true |
-| `matchbox_http_endpoint` | Matchbox HTTP read-only endpoint. Example: "http://matchbox.example.com:8080" | - | true |
-| `worker_names` | Ordered list of worker names. Example: ["node2", "node3"] | - | true |
-| `worker_macs` | Ordered list of worker identifying MAC addresses. Example ["52:54:00:b2:2f:86", "52:54:00:c3:61:77"] | - | true |
-| `worker_domains` | Ordered list of worker FQDNs. Example ["node2.example.com", "node3.example.com"] | - | true |
-| `ssh_pubkeys` | List of SSH public keys for user `core`. Each element must be specified in a valid OpenSSH public key format, as defined in RFC 4253 Section 6.6, e.g. "ssh-rsa AAAAB3N...". | - | true |
-| `os_version` | Flatcar Container Linux version to install. Version such as "2303.3.1" or "current". | "current" | false |
-| `os_channel` | Flatcar Container Linux channel to install from ("flatcar-stable", "flatcar-beta", "flatcar-alpha", "flatcar-edge"). | "flatcar-stable" | false |
-| `oidc` | OIDC configuration block. | - | false |
-| `oidc.issuer_url` | URL of the provider which allows the API server to discover public signing keys. Only URLs which use the https:// scheme are accepted. | - | false |
-| `oidc.client_id` | A client id that all tokens must be issued for. | "gangway" | false |
-| `oidc.username_claim` | JWT claim to use as the user name. | "email" | false |
-| `oidc.groups_claim` | JWT claim to use as the user’s group. | "groups" | false |
+| Argument | Description | Default | Type | Required |
+|-----------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:----------------:|:------------:|:--------:|
+| `asset_dir` | Location where Lokomotive stores cluster assets. | - | string | true |
+| `cached_install` | Whether the operating system should PXE boot and install from matchbox /assets cache. Note that the admin must have downloaded the `os_version` into matchbox assets. | "false" | string | false |
+| `cluster_name` | Name of the cluster. | - | string | true |
+| `controller_domains` | Ordered list of controller FQDNs. Example: ["node1.example.com"] | - | list(string) | true |
+| `controller_macs` | Ordered list of controller identifying MAC addresses. Example: ["52:54:00:a1:9c:ae"] | - | list(string) | true |
+| `controller_names` | Ordered list of controller names. Example: ["node1"] | - | list(string) | true |
+| `k8s_domain_name` | Controller DNS name which resolves to a controller instance. Workers and kubeconfigs will communicate with this endpoint. Example: "cluster.example.com" | - | string | true |
+| `labels` | Map of extra Kubernetes Node labels for worker nodes. | - | map(string) | false |
+| `matchbox_ca_path` | Path to the CA to verify and authenticate client certificates. | - | string | true |
+| `matchbox_client_cert_path` | Path to the server TLS certificate file. | - | string | true |
+| `matchbox_client_key_path` | Path to the server TLS key file. | - | string | true |
+| `matchbox_endpoint` | Matchbox API endpoint. | - | string | true |
+| `matchbox_http_endpoint` | Matchbox HTTP read-only endpoint. Example: "http://matchbox.example.com:8080" | - | string | true |
+| `worker_names` | Ordered list of worker names. Example: ["node2", "node3"] | - | list(string) | true |
+| `worker_macs` | Ordered list of worker identifying MAC addresses. Example: ["52:54:00:b2:2f:86", "52:54:00:c3:61:77"] | - | list(string) | true |
+| `worker_domains` | Ordered list of worker FQDNs. Example: ["node2.example.com", "node3.example.com"] | - | list(string) | true |
+| `ssh_pubkeys` | List of SSH public keys for user `core`. Each element must be specified in a valid OpenSSH public key format, as defined in RFC 4253 Section 6.6, e.g. "ssh-rsa AAAAB3N...". | - | list(string) | true |
+| `os_version` | Flatcar Container Linux version to install. Version such as "2303.3.1" or "current". | "current" | string | false |
+| `os_channel` | Flatcar Container Linux channel to install from ("flatcar-stable", "flatcar-beta", "flatcar-alpha", "flatcar-edge"). | "flatcar-stable" | string | false |
+| `oidc` | OIDC configuration block. | - | object | false |
+| `oidc.issuer_url` | URL of the provider which allows the API server to discover public signing keys. Only URLs which use the https:// scheme are accepted.
| - | string | false | +| `oidc.client_id` | A client id that all tokens must be issued for. | "gangway" | string | false | +| `oidc.username_claim` | JWT claim to use as the user name. | "email" | string | false | +| `oidc.groups_claim` | JWT claim to use as the user’s group. | "groups" | string | false | + ## Applying diff --git a/docs/configuration-reference/platforms/packet.md b/docs/configuration-reference/platforms/packet.md index 399b3891d..d2b3e3324 100644 --- a/docs/configuration-reference/platforms/packet.md +++ b/docs/configuration-reference/platforms/packet.md @@ -192,59 +192,60 @@ node_type = var.custom_default_worker_type ## Attribute reference -| Argument | Description | Default | Required | -|---------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------:|:--------:| -| `auth_token` | Packet Auth token. Use the `PACKET_AUTH_TOKEN` environment variable instead. | - | false | -| `asset_dir` | Location where Lokomotive stores cluster assets. | - | true | -| `cluster_name` | Name of the cluster. | - | true | -| `tags` | List of tags that will be propagated to master nodes. | - | false | -| `controller_count` | Number of controller nodes. | 1 | false | -| `controller_type` | Packet instance type for controllers. | "baremetal_0" | false | -| `controller_clc_snippets` | Controller Flatcar Container Linux Config snippets. | [] | false | -| `dns` | DNS configuration block. | - | true | -| `dns.zone` | A DNS zone to use for the cluster. The following format is used for cluster-related DNS records: `..` | - | true | -| `dns.provider` | DNS provider to use for the cluster. Valid values: `cloudflare`, `route53`, `manual`. | - | true | -| `oidc` | OIDC configuration block. | - | false | -| `oidc.issuer_url` | URL of the provider which allows the API server to discover public signing keys. Only URLs which use the https:// scheme are accepted. | - | false | -| `oidc.client_id` | A client id that all tokens must be issued for. | "gangway" | false | -| `oidc.username_claim` | JWT claim to use as the user name. | "email" | false | -| `oidc.groups_claim` | JWT claim to use as the user’s group. | "groups" | false | -| `facility` | Packet facility to use for deploying the cluster. | - | false | -| `project_id` | Packet project ID. | - | true | -| `ssh_pubkeys` | List of SSH public keys for user `core`. Each element must be specified in a valid OpenSSH public key format, as defined in RFC 4253 Section 6.6, e.g. "ssh-rsa AAAAB3N...". | - | true | -| `os_arch` | Flatcar Container Linux architecture to install (amd64, arm64). | "amd64" | false | -| `os_channel` | Flatcar Container Linux channel to install from (stable, beta, alpha, edge). | "stable" | false | -| `os_version` | Flatcar Container Linux version to install. Version such as "2303.3.1" or "current". | "current" | false | -| `ipxe_script_url` | Boot via iPXE. Required for arm64. | - | false | -| `management_cidrs` | List of IPv4 CIDRs authorized to access or manage the cluster. Example ["0.0.0.0/0"] to allow all. | - | true | -| `node_private_cidr` | Private IPv4 CIDR of the nodes used to allow inter-node traffic. Example "10.0.0.0/8" | - | true | -| `enable_aggregation` | Enable the Kubernetes Aggregation Layer. | true | false | -| `network_mtu` | CNI interface MTU | 1480 | false | -| `pod_cidr` | CIDR IPv4 range to assign Kubernetes pods. 
| "10.2.0.0/16" | false | -| `service_cidr` | CIDR IPv4 range to assign Kubernetes services. | "10.3.0.0/16" | false | -| `cluster_domain_suffix` | Cluster's DNS domain. | "cluster.local" | false | -| `enable_reporting` | Enables usage or analytics reporting to upstream. | false | false | -| `reservation_ids` | Block with Packet hardware reservation IDs for controller nodes. Each key must have the format `controller-${index}` and the value is the reservation UUID. Can't be combined with `reservation_ids_default`. Example: `reservation_ids = { controller-0 = "" }`. | - | false | -| `reservation_ids_default` | Default reservation ID for controllers. The value`next-available` will choose any reservation that matches the pool's device type and facility. Can't be combined with `reservation_ids` | - | false | -| `certs_validity_period_hours` | Validity of all the certificates in hours. | 8760 | false | -| `worker_pool` | Configuration block for worker pools. There can be more than one. | - | true | -| `worker_pool.count` | Number of workers in the worker pool. Can be changed afterwards to add or delete workers. | 1 | true | -| `worker_pool.clc_snippets` | Flatcar Container Linux Config snippets for nodes in the worker pool. | [] | false | -| `worker_pool.tags` | List of tags that will be propagated to nodes in the worker pool. | - | false | -| `worker_pool.disable_bgp` | Disable BGP on nodes. Nodes won't be able to connect to Packet BGP peers. | false | false | -| `worker_pool.ipxe_script_url` | Boot via iPXE. Required for arm64. | - | false | -| `worker_pool.os_arch` | Flatcar Container Linux architecture to install (amd64, arm64). | "amd64" | false | -| `worker_pool.os_channel` | Flatcar Container Linux channel to install from (stable, beta, alpha, edge). | "stable" | false | -| `worker_pool.os_version` | Flatcar Container Linux version to install. Version such as "2303.3.1" or "current". | "current" | false | -| `worker_pool.node_type` | Packet instance type for worker nodes. | "baremetal_0" | false | -| `worker_pool.labels` | Custom labels to assign to worker nodes. | - | false | -| `worker_pool.taints` | Taints to assign to worker nodes. | - | false | -| `worker_pool.reservation_ids` | Block with Packet hardware reservation IDs for worker nodes. Each key must have the format `worker-${index}` and the value is the reservation UUID. Can't be combined with `reservation_ids_default`. Example: `reservation_ids = { worker-0 = "" }`. | - | false | -| `worker_pool.reservation_ids_default` | Default reservation ID for workers. The value`next-available` will choose any reservation that matches the pool's device type and facility. Can't be combined with `reservation_ids`. | - | false | -| `worker_pool.setup_raid` | Attempt to create a RAID 0 from extra disks to be used for persistent container storage. Can't be used with `setup_raid_hdd` nor `setup_raid_sdd`. | false | false | -| `worker_pool.setup_raid_hdd` | Attempt to create a RAID 0 from extra Hard Disk drives only, to be used for persistent container storage. Can't be used with `setup_raid` nor `setup_raid_sdd`. | false | false | -| `worker_pool.setup_raid_ssd` | Attempt to create a RAID 0 from extra Solid State Drives only, to be used for persistent container storage. Can't be used with `setup_raid` nor `setup_raid_hdd`. | false | false | -| `worker_pool.setup_raid_ssd_fs` | When set to `true` file system will be created on SSD RAID device and will be mounted on `/mnt/node-local-ssd-storage`. To use the raw device set it to `false`. 
| false | false | +| Argument | Description | Default | Type | Required | +|---------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------:|:------------:|:--------:| +| `auth_token` | Packet Auth token. Use the `PACKET_AUTH_TOKEN` environment variable instead. | - | string | false | +| `asset_dir` | Location where Lokomotive stores cluster assets. | - | string | true | +| `cluster_name` | Name of the cluster. | - | string | true | +| `tags` | List of tags that will be propagated to master nodes. | - | map(string) | false | +| `controller_count` | Number of controller nodes. | 1 | number | false | +| `controller_type` | Packet instance type for controllers. | "baremetal_0" | string | false | +| `controller_clc_snippets` | Controller Flatcar Container Linux Config snippets. | [] | list(string) | false | +| `dns` | DNS configuration block. | - | object | true | +| `dns.zone` | A DNS zone to use for the cluster. The following format is used for cluster-related DNS records: `..` | - | string | true | +| `dns.provider` | DNS provider to use for the cluster. Valid values: `cloudflare`, `route53`, `manual`. | - | string | true | +| `oidc` | OIDC configuration block. | - | object | false | +| `oidc.issuer_url` | URL of the provider which allows the API server to discover public signing keys. Only URLs which use the https:// scheme are accepted. | - | string | false | +| `oidc.client_id` | A client id that all tokens must be issued for. | "gangway" | string | false | +| `oidc.username_claim` | JWT claim to use as the user name. | "email" | string | false | +| `oidc.groups_claim` | JWT claim to use as the user’s group. | "groups" | string | false | +| `facility` | Packet facility to use for deploying the cluster. | - | string | false | +| `project_id` | Packet project ID. | - | string | true | +| `ssh_pubkeys` | List of SSH public keys for user `core`. Each element must be specified in a valid OpenSSH public key format, as defined in RFC 4253 Section 6.6, e.g. "ssh-rsa AAAAB3N...". | - | list(string) | true | +| `os_arch` | Flatcar Container Linux architecture to install (amd64, arm64). | "amd64" | string | false | +| `os_channel` | Flatcar Container Linux channel to install from (stable, beta, alpha, edge). | "stable" | string | false | +| `os_version` | Flatcar Container Linux version to install. Version such as "2303.3.1" or "current". | "current" | string | false | +| `ipxe_script_url` | Boot via iPXE. Required for arm64. | - | string | false | +| `management_cidrs` | List of IPv4 CIDRs authorized to access or manage the cluster. Example ["0.0.0.0/0"] to allow all. | - | list(string) | true | +| `node_private_cidr` | Private IPv4 CIDR of the nodes used to allow inter-node traffic. Example "10.0.0.0/8" | - | string | true | +| `enable_aggregation` | Enable the Kubernetes Aggregation Layer. | true | bool | false | +| `network_mtu` | CNI interface MTU | 1480 | number | false | +| `pod_cidr` | CIDR IPv4 range to assign Kubernetes pods. | "10.2.0.0/16" | string | false | +| `service_cidr` | CIDR IPv4 range to assign Kubernetes services. | "10.3.0.0/16" | string | false | +| `cluster_domain_suffix` | Cluster's DNS domain. | "cluster.local" | string | false | +| `enable_reporting` | Enables usage or analytics reporting to upstream. 
| false | bool | false |
+| `reservation_ids` | Block with Packet hardware reservation IDs for controller nodes. Each key must have the format `controller-${index}` and the value is the reservation UUID. Can't be combined with `reservation_ids_default`. Example: `reservation_ids = { controller-0 = "" }`. | - | map(string) | false |
+| `reservation_ids_default` | Default reservation ID for controllers. The value `next-available` will choose any reservation that matches the pool's device type and facility. Can't be combined with `reservation_ids`. | - | string | false |
+| `certs_validity_period_hours` | Validity of all the certificates in hours. | 8760 | number | false |
+| `worker_pool` | Configuration block for worker pools. There can be more than one. | - | list(object) | true |
+| `worker_pool.count` | Number of workers in the worker pool. Can be changed afterwards to add or delete workers. | 1 | number | true |
+| `worker_pool.clc_snippets` | Flatcar Container Linux Config snippets for nodes in the worker pool. | [] | list(string) | false |
+| `worker_pool.tags` | List of tags that will be propagated to nodes in the worker pool. | - | map(string) | false |
+| `worker_pool.disable_bgp` | Disable BGP on nodes. Nodes won't be able to connect to Packet BGP peers. | false | bool | false |
+| `worker_pool.ipxe_script_url` | Boot via iPXE. Required for arm64. | - | string | false |
+| `worker_pool.os_arch` | Flatcar Container Linux architecture to install (amd64, arm64). | "amd64" | string | false |
+| `worker_pool.os_channel` | Flatcar Container Linux channel to install from (stable, beta, alpha, edge). | "stable" | string | false |
+| `worker_pool.os_version` | Flatcar Container Linux version to install. Version such as "2303.3.1" or "current". | "current" | string | false |
+| `worker_pool.node_type` | Packet instance type for worker nodes. | "baremetal_0" | string | false |
+| `worker_pool.labels` | Custom labels to assign to worker nodes. | - | string | false |
+| `worker_pool.taints` | Taints to assign to worker nodes. | - | string | false |
+| `worker_pool.reservation_ids` | Block with Packet hardware reservation IDs for worker nodes. Each key must have the format `worker-${index}` and the value is the reservation UUID. Can't be combined with `reservation_ids_default`. Example: `reservation_ids = { worker-0 = "" }`. | - | map(string) | false |
+| `worker_pool.reservation_ids_default` | Default reservation ID for workers. The value `next-available` will choose any reservation that matches the pool's device type and facility. Can't be combined with `reservation_ids`. | - | string | false |
+| `worker_pool.setup_raid` | Attempt to create a RAID 0 from extra disks to be used for persistent container storage. Can't be used with `setup_raid_hdd` nor `setup_raid_ssd`. | false | bool | false |
+| `worker_pool.setup_raid_hdd` | Attempt to create a RAID 0 from extra Hard Disk drives only, to be used for persistent container storage. Can't be used with `setup_raid` nor `setup_raid_ssd`. | false | bool | false |
+| `worker_pool.setup_raid_ssd` | Attempt to create a RAID 0 from extra Solid State Drives only, to be used for persistent container storage. Can't be used with `setup_raid` nor `setup_raid_hdd`. | false | bool | false |
+| `worker_pool.setup_raid_ssd_fs` | When set to `true`, a file system will be created on the SSD RAID device and mounted on `/mnt/node-local-ssd-storage`. To use the raw device, set it to `false`. | false | bool | false |
+
 ## Applying
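Before applying, it can help to see how the per-pool hardware reservation and RAID arguments from the table above combine. The following is a minimal sketch only, not an example taken from this document: the pool name, worker count and reservation UUIDs are placeholders, and `node_type` simply reuses the documented default.

```hcl
# Hypothetical worker pool fragment (all values are placeholders) showing how the
# reservation and RAID arguments documented above fit together.
worker_pool "storage-pool" {
  count     = 2
  node_type = "baremetal_0"

  # Pin each worker to a hardware reservation; keys follow the `worker-${index}` format.
  reservation_ids = {
    worker-0 = "11111111-2222-3333-4444-555555555555"
    worker-1 = "66666666-7777-8888-9999-aaaaaaaaaaaa"
  }

  # Build a RAID 0 array from the extra SSDs and mount a file system on it
  # at /mnt/node-local-ssd-storage.
  setup_raid_ssd    = true
  setup_raid_ssd_fs = true
}
```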