diff --git a/website/docs/cdktf/python/d/connect_instance_storage_config.html.markdown b/website/docs/cdktf/python/d/connect_instance_storage_config.html.markdown index 17ff7516ff32..cba955fdc3f4 100644 --- a/website/docs/cdktf/python/d/connect_instance_storage_config.html.markdown +++ b/website/docs/cdktf/python/d/connect_instance_storage_config.html.markdown @@ -93,4 +93,4 @@ The `encryption_config` configuration block supports the following arguments: * `encryption_type` - The type of encryption. Valid Values: `KMS`. * `key_id` - The full ARN of the encryption key. Be sure to provide the full ARN of the encryption key, not just the ID. - \ No newline at end of file + \ No newline at end of file diff --git a/website/docs/cdktf/python/d/fsx_ontap_file_system.html.markdown b/website/docs/cdktf/python/d/fsx_ontap_file_system.html.markdown new file mode 100644 index 000000000000..2cdd1621b1f2 --- /dev/null +++ b/website/docs/cdktf/python/d/fsx_ontap_file_system.html.markdown @@ -0,0 +1,83 @@ +--- +subcategory: "FSx" +layout: "aws" +page_title: "AWS: aws_fsx_ontap_file_system" +description: |- + Retrieve information on FSx ONTAP File System. +--- + + + +# Data Source: aws_fsx_ontap_file_system + +Retrieve information on FSx ONTAP File System. + +## Example Usage + +### Basic Usage + +```python +# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug +from constructs import Construct +from cdktf import TerraformStack +# +# Provider bindings are generated by running `cdktf get`. +# See https://cdk.tf/provider-generation for more details. +# +from imports.aws.data_aws_fsx_ontap_file_system import DataAwsFsxOntapFileSystem +class MyConvertedCode(TerraformStack): + def __init__(self, scope, name): + super().__init__(scope, name) + DataAwsFsxOntapFileSystem(self, "example", + id="fs-12345678" + ) +``` + +## Argument Reference + +The following arguments are required: + +* `id` - (Required) Identifier of the file system (e.g. `fs-12345678`). + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `arn` - Amazon Resource Name of the file system. +* `automatic_backup_retention_days` - The number of days to retain automatic backups. +* `daily_automatic_backup_start_time` - The preferred time (in `HH:MM` format) to take daily automatic backups, in the UTC time zone. +* `deployment_type` - The file system deployment type. +* `disk_iops_configuration` - The SSD IOPS configuration for the Amazon FSx for NetApp ONTAP file system, specifying the number of provisioned IOPS and the provision mode. See [Disk IOPS](#disk-iops) Below. +* `dns_name` - DNS name for the file system (e.g. `fs-12345678.corp.example.com`). +* `endpoint_ip_address_range` - (Multi-AZ only) Specifies the IP address range in which the endpoints to access your file system exist. +* `endpoints` - The Management and Intercluster FileSystemEndpoints that are used to access data or to manage the file system using the NetApp ONTAP CLI, REST API, or NetApp SnapMirror. See [FileSystemEndpoints](#file-system-endpoints) below. +* `id` - Identifier of the file system (e.g. `fs-12345678`). +* `kms_key_id` - ARN for the KMS Key to encrypt the file system at rest. +* `network_interface_ids` - The IDs of the elastic network interfaces from which a specific file system is accessible. +* `owner_id` - AWS account identifier that created the file system. +* `preferred_subnet_id` - Specifies the subnet in which you want the preferred file server to be located. 
+* `route_table_ids` - (Multi-AZ only) The VPC route tables in which your file system's endpoints exist. +* `storage_capacity` - The storage capacity of the file system in gibibytes (GiB). +* `storage_type` - The type of storage the file system is using. If set to `SSD`, the file system uses solid state drive storage. If set to `HDD`, the file system uses hard disk drive storage. +* `subnet_ids` - Specifies the IDs of the subnets that the file system is accessible from. For the MULTI_AZ_1 file system deployment type, there are two subnet IDs, one for the preferred file server and one for the standby file server. The preferred file server subnet identified in the `preferred_subnet_id` property. +* `tags` - The tags associated with the file system. +* `throughput_capacity` - The sustained throughput of an Amazon FSx file system in Megabytes per second (MBps). +* `vpc_id` - The ID of the primary virtual private cloud (VPC) for the file system. +* `weekly_maintenance_start_time` - The preferred start time (in `D:HH:MM` format) to perform weekly maintenance, in the UTC time zone. + +### Disk IOPS + +* `iops` - The total number of SSD IOPS provisioned for the file system. +* `mode` - Specifies whether the file system is using the `AUTOMATIC` setting of SSD IOPS of 3 IOPS per GB of storage capacity, or if it using a `USER_PROVISIONED` value. + +### File System Endpoints + +* `intercluster` - A FileSystemEndpoint for managing your file system by setting up NetApp SnapMirror with other ONTAP systems. See [FileSystemEndpoint](#file-system-endpoint) below. +* `management` - A FileSystemEndpoint for managing your file system using the NetApp ONTAP CLI and NetApp ONTAP API. See [FileSystemEndpoint](#file-system-endpoint) below. + +### File System Endpoint + +* `DNSName` - The file system's DNS name. You can mount your file system using its DNS name. +* `IpAddresses` - IP addresses of the file system endpoint. + + \ No newline at end of file diff --git a/website/docs/cdktf/python/d/fsx_ontap_storage_virtual_machine.html.markdown b/website/docs/cdktf/python/d/fsx_ontap_storage_virtual_machine.html.markdown new file mode 100644 index 000000000000..95bbf1411e7a --- /dev/null +++ b/website/docs/cdktf/python/d/fsx_ontap_storage_virtual_machine.html.markdown @@ -0,0 +1,124 @@ +--- +subcategory: "FSx" +layout: "aws" +page_title: "AWS: aws_fsx_ontap_storage_virtual_machine" +description: |- + Retrieve information on FSx ONTAP Storage Virtual Machine (SVM). +--- + + + +# Data Source: aws_fsx_ontap_storage_virtual_machine + +Retrieve information on FSx ONTAP Storage Virtual Machine (SVM). + +## Example Usage + +### Basic Usage + +```python +# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug +from constructs import Construct +from cdktf import TerraformStack +# +# Provider bindings are generated by running `cdktf get`. +# See https://cdk.tf/provider-generation for more details. +# +from imports.aws.data_aws_fsx_ontap_storage_virtual_machine import DataAwsFsxOntapStorageVirtualMachine +class MyConvertedCode(TerraformStack): + def __init__(self, scope, name): + super().__init__(scope, name) + DataAwsFsxOntapStorageVirtualMachine(self, "example", + id="svm-12345678" + ) +``` + +### Filter Example + +```python +# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug +from constructs import Construct +from cdktf import TerraformStack +# +# Provider bindings are generated by running `cdktf get`. 
+# See https://cdk.tf/provider-generation for more details. +# +from imports.aws.data_aws_fsx_ontap_storage_virtual_machine import DataAwsFsxOntapStorageVirtualMachine +class MyConvertedCode(TerraformStack): + def __init__(self, scope, name): + super().__init__(scope, name) + DataAwsFsxOntapStorageVirtualMachine(self, "example", + filter=[DataAwsFsxOntapStorageVirtualMachineFilter( + name="file-system-id", + values=["fs-12345678"] + ) + ] + ) +``` + +## Argument Reference + +The arguments of this data source act as filters for querying the available ONTAP Storage Virtual Machines in the current region. The given filters must match exactly one Storage Virtual Machine whose data will be exported as attributes. + +The following arguments are optional: + +* `filter` - (Optional) Configuration block. Detailed below. +* `id` - (Optional) Identifier of the storage virtual machine (e.g. `svm-12345678`). + +### filter + +This block allows for complex filters. + +The following arguments are required: + +* `name` - (Required) Name of the field to filter by, as defined by [the underlying AWS API](https://docs.aws.amazon.com/fsx/latest/APIReference/API_StorageVirtualMachineFilter.html). +* `values` - (Required) Set of values that are accepted for the given field. An SVM will be selected if any one of the given values matches. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `arn` - Amazon Resource Name of the SVM. +* `active_directory_configuration` - The Microsoft Active Directory configuration to which the SVM is joined, if applicable. See [Active Directory Configuration](#active-directory-configuration) below. +* `creation_time` - The time that the SVM was created. +* `endpoints` - The endpoints that are used to access data or to manage the SVM using the NetApp ONTAP CLI, REST API, or NetApp CloudManager. They are the Iscsi, Management, Nfs, and Smb endpoints. See [SVM Endpoints](#svm-endpoints) below. +* `file_system_id` - Identifier of the file system (e.g. `fs-12345678`). +* `id` - The SVM's system generated unique ID. +* `lifecycle_status` - The SVM's lifecycle status. +* `lifecycle_transition_reason` - Describes why the SVM lifecycle state changed. See [Lifecycle Transition Reason](#lifecycle-transition-reason) below. +* `name` - The name of the SVM, if provisioned. +* `subtype` - The SVM's subtype. +* `uuid` - The SVM's UUID. + +### Active Directory Configuration + +The following arguments are supported for `active_directory_configuration` configuration block: + +* `netbios_name` - The NetBIOS name of the AD computer object to which the SVM is joined. +* `self_managed_active_directory` - The configuration of the self-managed Microsoft Active Directory (AD) directory to which the Windows File Server or ONTAP storage virtual machine (SVM) instance is joined. See [Self Managed Active Directory](#self-managed-active-directory) below. + +### Self Managed Active Directory + +* `dns_ips` - A list of up to three IP addresses of DNS servers or domain controllers in the self-managed AD directory. +* `domain_name` - The fully qualified domain name of the self-managed AD directory. +* `file_system_administrators_group` - The name of the domain group whose members have administrative privileges for the FSx file system. 
+* `organizational_unit_distinguished_name` - The fully qualified distinguished name of the organizational unit within the self-managed AD directory to which the Windows File Server or ONTAP storage virtual machine (SVM) instance is joined. +* `username` - The user name for the service account on your self-managed AD domain that FSx uses to join to your AD domain. + +### Lifecycle Transition Reason + +* `message` - A detailed message. + +### SVM Endpoints + +* `Iscsi` - An endpoint for connecting using the Internet Small Computer Systems Interface (iSCSI) protocol. See [SVM Endpoint](#svm-endpoint) below. +* `management` - An endpoint for managing SVMs using the NetApp ONTAP CLI, NetApp ONTAP API, or NetApp CloudManager. See [SVM Endpoint](#svm-endpoint) below. +* `nfs` - An endpoint for connecting using the Network File System (NFS) protocol. See [SVM Endpoint](#svm-endpoint) below. +* `smb` - An endpoint for connecting using the Server Message Block (SMB) protocol. See [SVM Endpoint](#svm-endpoint) below. + +### SVM Endpoint + +* `DNSName` - The file system's DNS name. You can mount your file system using its DNS name. +* `IpAddresses` - The SVM endpoint's IP addresses. + + \ No newline at end of file diff --git a/website/docs/cdktf/python/d/fsx_ontap_storage_virtual_machines.html.markdown b/website/docs/cdktf/python/d/fsx_ontap_storage_virtual_machines.html.markdown new file mode 100644 index 000000000000..c5418f984206 --- /dev/null +++ b/website/docs/cdktf/python/d/fsx_ontap_storage_virtual_machines.html.markdown @@ -0,0 +1,57 @@ +--- +subcategory: "FSx" +layout: "aws" +page_title: "AWS: aws_fsx_ontap_storage_virtual_machines" +description: |- + This resource can be useful for getting back a set of FSx ONTAP Storage Virtual Machine (SVM) IDs. +--- + + + +# Data Source: aws_fsx_ontap_storage_virtual_machines + +This resource can be useful for getting back a set of FSx ONTAP Storage Virtual Machine (SVM) IDs. + +## Example Usage + +The following shows outputting all SVM IDs for a given FSx ONTAP File System. + +```python +# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug +from constructs import Construct +from cdktf import TerraformStack +# +# Provider bindings are generated by running `cdktf get`. +# See https://cdk.tf/provider-generation for more details. +# +from imports.aws.data_aws_fsx_ontap_storage_virtual_machines import DataAwsFsxOntapStorageVirtualMachines +class MyConvertedCode(TerraformStack): + def __init__(self, scope, name): + super().__init__(scope, name) + DataAwsFsxOntapStorageVirtualMachines(self, "example", + filter=[DataAwsFsxOntapStorageVirtualMachinesFilter( + name="file-system-id", + values=["fs-12345678"] + ) + ] + ) +``` + +## Argument Reference + +* `filter` - (Optional) Configuration block. Detailed below. + +### filter + +This block allows for complex filters. + +The following arguments are required: + +* `name` - (Required) Name of the field to filter by, as defined by [the underlying AWS API](https://docs.aws.amazon.com/fsx/latest/APIReference/API_StorageVirtualMachineFilter.html). +* `values` - (Required) Set of values that are accepted for the given field. An SVM will be selected if any one of the given values matches. + +## Attributes Reference + +* `ids` - List of all SVM IDs found. 
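
The `ids` attribute can then be referenced like any other data source attribute. Below is a minimal hand-written sketch (not `cdktf convert` output) that surfaces the matching SVM IDs as a stack output; the generated binding and filter class names are assumed to match the example above.

```python
# Hand-written sketch, not 'cdktf convert' output. The binding and filter class
# names are assumed to follow the generated provider bindings shown above.
from constructs import Construct
from cdktf import TerraformOutput, TerraformStack
from imports.aws.data_aws_fsx_ontap_storage_virtual_machines import (
    DataAwsFsxOntapStorageVirtualMachines,
    DataAwsFsxOntapStorageVirtualMachinesFilter,
)

class MyConvertedCode(TerraformStack):
    def __init__(self, scope, name):
        super().__init__(scope, name)
        svms = DataAwsFsxOntapStorageVirtualMachines(self, "example",
            filter=[DataAwsFsxOntapStorageVirtualMachinesFilter(
                name="file-system-id",
                values=["fs-12345678"]
            )]
        )
        # `ids` resolves at apply time to the list of matching SVM IDs.
        TerraformOutput(self, "svm_ids", value=svms.ids)
```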
+ + \ No newline at end of file diff --git a/website/docs/cdktf/python/d/guardduty_detector.html.markdown b/website/docs/cdktf/python/d/guardduty_detector.html.markdown index 12aeef24a04b..d17a3c7ac0f3 100644 --- a/website/docs/cdktf/python/d/guardduty_detector.html.markdown +++ b/website/docs/cdktf/python/d/guardduty_detector.html.markdown @@ -37,8 +37,14 @@ class MyConvertedCode(TerraformStack): This data source exports the following attributes in addition to the arguments above: +* `features` - Current configuration of the detector features. + * `additional_configuration` - Additional feature configuration. + * `name` - The name of the additional configuration. + * `status` - The status of the additional configuration. + * `name` - The name of the detector feature. + * `status` - The status of the detector feature. * `finding_publishing_frequency` - The frequency of notifications sent about subsequent finding occurrences. * `service_role_arn` - Service-linked role that grants GuardDuty access to the resources in the AWS account. * `status` - Current status of the detector. - \ No newline at end of file + \ No newline at end of file diff --git a/website/docs/cdktf/python/d/organizations_organizational_unit.html.markdown b/website/docs/cdktf/python/d/organizations_organizational_unit.html.markdown new file mode 100644 index 000000000000..3e246e75a782 --- /dev/null +++ b/website/docs/cdktf/python/d/organizations_organizational_unit.html.markdown @@ -0,0 +1,55 @@ +--- +subcategory: "Organizations" +layout: "aws" +page_title: "AWS: aws_organizations_organizational_unit" +description: |- + Terraform data source for getting an AWS Organizations Organizational Unit. +--- + + + +# Data Source: aws_organizations_organizational_unit + +Terraform data source for getting an AWS Organizations Organizational Unit. + +## Example Usage + +### Basic Usage + +```python +# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug +from constructs import Construct +from cdktf import Fn, Token, TerraformStack +# +# Provider bindings are generated by running `cdktf get`. +# See https://cdk.tf/provider-generation for more details. +# +from imports.aws.data_aws_organizations_organization import DataAwsOrganizationsOrganization +from imports.aws.data_aws_organizations_organizational_unit import DataAwsOrganizationsOrganizationalUnit +class MyConvertedCode(TerraformStack): + def __init__(self, scope, name): + super().__init__(scope, name) + org = DataAwsOrganizationsOrganization(self, "org") + DataAwsOrganizationsOrganizationalUnit(self, "ou", + name="dev", + parent_id=Token.as_string(Fn.lookup_nested(org.roots, ["0", "id"])) + ) +``` + +## Argument Reference + +The following arguments are required: + +* `parent_id` - (Required) Parent ID of the organizational unit. 
+ +* `name` - (Required) Name of the organizational unit + +## Attribute Reference + +This data source exports the following attributes in addition to the arguments above: + +* `arn` - ARN of the organizational unit + +* `id` - ID of the organizational unit + + \ No newline at end of file diff --git a/website/docs/cdktf/python/d/s3_bucket_object.html.markdown b/website/docs/cdktf/python/d/s3_bucket_object.html.markdown index 096457c3500e..17d14ac2a8db 100644 --- a/website/docs/cdktf/python/d/s3_bucket_object.html.markdown +++ b/website/docs/cdktf/python/d/s3_bucket_object.html.markdown @@ -103,7 +103,7 @@ This data source exports the following attributes in addition to the arguments a * `expiration` - If the object expiration is configured (see [object lifecycle management](http://docs.aws.amazon.com/AmazonS3/latest/dev/object-lifecycle-mgmt.html)), the field includes this header. It includes the expiry-date and rule-id key value pairs providing object expiration information. The value of the rule-id is URL encoded. * `expires` - Date and time at which the object is no longer cacheable. * `last_modified` - Last modified date of the object in RFC1123 format (e.g., `Mon, 02 Jan 2006 15:04:05 MST`) -* `metadata` - Map of metadata stored with the object in S3 +* `metadata` - Map of metadata stored with the object in S3. [Keys](https://developer.hashicorp.com/terraform/language/expressions/types#maps-objects) are always returned in lowercase. * `object_lock_legal_hold_status` - Indicates whether this object has an active [legal hold](https://docs.aws.amazon.com/AmazonS3/latest/dev/object-lock-overview.html#object-lock-legal-holds). This field is only returned if you have permission to view an object's legal hold status. * `object_lock_mode` - Object lock [retention mode](https://docs.aws.amazon.com/AmazonS3/latest/dev/object-lock-overview.html#object-lock-retention-modes) currently in place for this object. * `object_lock_retain_until_date` - The date and time when this object's object lock will expire. @@ -116,4 +116,4 @@ This data source exports the following attributes in addition to the arguments a -> **Note:** Terraform ignores all leading `/`s in the object's `key` and treats multiple `/`s in the rest of the object's `key` as a single `/`, so values of `/index.html` and `index.html` correspond to the same S3 object as do `first//second///third//` and `first/second/third/`. - \ No newline at end of file + \ No newline at end of file diff --git a/website/docs/cdktf/python/d/s3_object.html.markdown b/website/docs/cdktf/python/d/s3_object.html.markdown index aa3516e1804b..1f090bb52fa0 100644 --- a/website/docs/cdktf/python/d/s3_object.html.markdown +++ b/website/docs/cdktf/python/d/s3_object.html.markdown @@ -106,7 +106,7 @@ This data source exports the following attributes in addition to the arguments a * `expiration` - If the object expiration is configured (see [object lifecycle management](http://docs.aws.amazon.com/AmazonS3/latest/dev/object-lifecycle-mgmt.html)), the field includes this header. It includes the expiry-date and rule-id key value pairs providing object expiration information. The value of the rule-id is URL encoded. * `expires` - Date and time at which the object is no longer cacheable. * `last_modified` - Last modified date of the object in RFC1123 format (e.g., `Mon, 02 Jan 2006 15:04:05 MST`) -* `metadata` - Map of metadata stored with the object in S3 +* `metadata` - Map of metadata stored with the object in S3. 
[Keys](https://developer.hashicorp.com/terraform/language/expressions/types#maps-objects) are always returned in lowercase. * `object_lock_legal_hold_status` - Indicates whether this object has an active [legal hold](https://docs.aws.amazon.com/AmazonS3/latest/dev/object-lock-overview.html#object-lock-legal-holds). This field is only returned if you have permission to view an object's legal hold status. * `object_lock_mode` - Object lock [retention mode](https://docs.aws.amazon.com/AmazonS3/latest/dev/object-lock-overview.html#object-lock-retention-modes) currently in place for this object. * `object_lock_retain_until_date` - The date and time when this object's object lock will expire. @@ -119,4 +119,4 @@ This data source exports the following attributes in addition to the arguments a -> **Note:** Terraform ignores all leading `/`s in the object's `key` and treats multiple `/`s in the rest of the object's `key` as a single `/`, so values of `/index.html` and `index.html` correspond to the same S3 object as do `first//second///third//` and `first/second/third/`. - \ No newline at end of file + \ No newline at end of file diff --git a/website/docs/cdktf/python/d/vpclattice_service.html.markdown b/website/docs/cdktf/python/d/vpclattice_service.html.markdown index 950f192455d9..d78783832e11 100644 --- a/website/docs/cdktf/python/d/vpclattice_service.html.markdown +++ b/website/docs/cdktf/python/d/vpclattice_service.html.markdown @@ -39,7 +39,7 @@ The arguments of this data source act as filters for querying the available VPC The given filters must match exactly one VPC lattice service whose data will be exported as attributes. * `name` - (Optional) Service name. -* `service_identifier` - (Optional) ID or Amazon Resource Name (ARN) of the service network. +* `service_identifier` - (Optional) ID or Amazon Resource Name (ARN) of the service. ## Attribute Reference @@ -54,4 +54,4 @@ This data source exports the following attributes in addition to the arguments a * `status` - Status of the service. * `tags` - List of tags associated with the service. - \ No newline at end of file + \ No newline at end of file diff --git a/website/docs/cdktf/python/d/vpclattice_service_network.html.markdown b/website/docs/cdktf/python/d/vpclattice_service_network.html.markdown index 4091ae7c5f37..74e9b483cd37 100644 --- a/website/docs/cdktf/python/d/vpclattice_service_network.html.markdown +++ b/website/docs/cdktf/python/d/vpclattice_service_network.html.markdown @@ -29,7 +29,7 @@ class MyConvertedCode(TerraformStack): def __init__(self, scope, name): super().__init__(scope, name) DataAwsVpclatticeServiceNetwork(self, "example", - service_network_identifier="" + service_network_identifier="snsa-01112223334445556" ) ``` @@ -37,7 +37,7 @@ class MyConvertedCode(TerraformStack): The following arguments are required: -* `service_network_identifier` - (Required) Identifier of the network service. +* `service_network_identifier` - (Required) Identifier of the service network. ## Attribute Reference @@ -52,4 +52,4 @@ This data source exports the following attributes in addition to the arguments a * `number_of_associated_services` - Number of services associated with this service network. * `number_of_associated_vpcs` - Number of VPCs associated with this service network. 
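
These attributes can be consumed elsewhere in a stack. The following is a short hand-written sketch (not `cdktf convert` output) that exposes the service network's ARN as a stack output; the import path is an assumption based on the generated-binding naming pattern used above.

```python
# Hand-written sketch, not 'cdktf convert' output; the import path for the
# generated binding is an assumption based on the naming pattern above.
from constructs import Construct
from cdktf import TerraformOutput, TerraformStack
from imports.aws.data_aws_vpclattice_service_network import DataAwsVpclatticeServiceNetwork

class MyConvertedCode(TerraformStack):
    def __init__(self, scope, name):
        super().__init__(scope, name)
        example = DataAwsVpclatticeServiceNetwork(self, "example",
            service_network_identifier="snsa-01112223334445556"
        )
        # Exported attributes are available as properties on the data source object.
        TerraformOutput(self, "service_network_arn", value=example.arn)
```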
- \ No newline at end of file + \ No newline at end of file diff --git a/website/docs/cdktf/python/index.html.markdown b/website/docs/cdktf/python/index.html.markdown index f8d86c080349..a68bad34e8c9 100644 --- a/website/docs/cdktf/python/index.html.markdown +++ b/website/docs/cdktf/python/index.html.markdown @@ -13,7 +13,7 @@ Use the Amazon Web Services (AWS) provider to interact with the many resources supported by AWS. You must configure the provider with the proper credentials before you can use it. -Use the navigation to the left to read about the available resources. There are currently 1251 resources and 514 data sources available in the provider. +Use the navigation to the left to read about the available resources. There are currently 1255 resources and 518 data sources available in the provider. To learn the basics of Terraform using this provider, follow the hands-on [get started tutorials](https://learn.hashicorp.com/tutorials/terraform/infrastructure-as-code?in=terraform/aws-get-started&utm_source=WEBSITE&utm_medium=WEB_IO&utm_offer=ARTICLE_PAGE&utm_content=DOCS). Interact with AWS services, @@ -787,4 +787,4 @@ Approaches differ per authentication providers: There used to be no better way to get account ID out of the API when using the federated account until `sts:GetCallerIdentity` was introduced. - \ No newline at end of file + \ No newline at end of file diff --git a/website/docs/cdktf/python/r/ami.html.markdown b/website/docs/cdktf/python/r/ami.html.markdown index 940f98d81852..91623aa7733e 100644 --- a/website/docs/cdktf/python/r/ami.html.markdown +++ b/website/docs/cdktf/python/r/ami.html.markdown @@ -122,7 +122,6 @@ This resource exports the following attributes in addition to the arguments abov * `image_owner_alias` - AWS account alias (for example, amazon, self) or the AWS account ID of the AMI owner. * `image_type` - Type of image. * `hypervisor` - Hypervisor type of the image. -* `owner_id` - AWS account ID of the image owner. * `platform` - This value is set to windows for Windows AMIs; otherwise, it is blank. * `public` - Whether the image has public launch permissions. * `tags_all` - Map of tags assigned to the resource, including those inherited from the provider [`default_tags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block). @@ -154,4 +153,4 @@ Using `terraform import`, import `aws_ami` using the ID of the AMI. For example: % terraform import aws_ami.example ami-12345678 ``` - \ No newline at end of file + \ No newline at end of file diff --git a/website/docs/cdktf/python/r/batch_job_queue.html.markdown b/website/docs/cdktf/python/r/batch_job_queue.html.markdown index 3130bab1ab4b..b99560af292b 100644 --- a/website/docs/cdktf/python/r/batch_job_queue.html.markdown +++ b/website/docs/cdktf/python/r/batch_job_queue.html.markdown @@ -79,9 +79,8 @@ class MyConvertedCode(TerraformStack): This resource supports the following arguments: * `name` - (Required) Specifies the name of the job queue. -* `compute_environments` - (Required) Specifies the set of compute environments - mapped to a job queue and their order. The position of the compute environments - in the list will dictate the order. +* `compute_environments` - (Required) List of compute environment ARNs mapped to a job queue. + The position of the compute environments in the list will dictate the order. * `priority` - (Required) The priority of the job queue. 
Job queues with a higher priority are evaluated first when associated with the same compute environment. * `scheduling_policy_arn` - (Optional) The ARN of the fair share scheduling policy. If this parameter is specified, the job queue uses a fair share scheduling policy. If this parameter isn't specified, the job queue uses a first in, first out (FIFO) scheduling policy. After a job queue is created, you can replace but can't remove the fair share scheduling policy. @@ -122,4 +121,4 @@ Using `terraform import`, import Batch Job Queue using the `arn`. For example: % terraform import aws_batch_job_queue.test_queue arn:aws:batch:us-east-1:123456789012:job-queue/sample ``` - \ No newline at end of file + \ No newline at end of file diff --git a/website/docs/cdktf/python/r/cleanrooms_configured_table.html.markdown b/website/docs/cdktf/python/r/cleanrooms_configured_table.html.markdown new file mode 100644 index 000000000000..56a82550c364 --- /dev/null +++ b/website/docs/cdktf/python/r/cleanrooms_configured_table.html.markdown @@ -0,0 +1,95 @@ +--- +subcategory: "Clean Rooms" +layout: "aws" +page_title: "AWS: aws_cleanrooms_configured_table" +description: |- + Provides a Clean Rooms Configured Table. +--- + + + +# Resource: aws_cleanrooms_configured_table + +Provides a AWS Clean Rooms configured table. Configured tables are used to represent references to existing tables in the AWS Glue Data Catalog. + +## Example Usage + +### Configured table with tags + +```python +# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug +from constructs import Construct +from cdktf import TerraformStack +# +# Provider bindings are generated by running `cdktf get`. +# See https://cdk.tf/provider-generation for more details. +# +from imports.aws.cleanrooms_configured_table import CleanroomsConfiguredTable +class MyConvertedCode(TerraformStack): + def __init__(self, scope, name): + super().__init__(scope, name) + CleanroomsConfiguredTable(self, "test_configured_table", + allowed_columns=["column1", "column2", "column3"], + analysis_method="DIRECT_QUERY", + description="I made this table with terraform!", + name="terraform-example-table", + table_reference=CleanroomsConfiguredTableTableReference( + database_name="example_database", + table_name="example_table" + ), + tags={ + "Project": "Terraform" + } + ) +``` + +## Argument Reference + +This resource supports the following arguments: + +* `name` - (Required) - The name of the configured table. +* `description` - (Optional) - A description for the configured table. +* `analysis_method` - (Required) - The analysis method for the configured table. The only valid value is currently `DIRECT_QUERY`. +* `allowed_columns` - (Required - Forces new resource) - The columns of the references table which will be included in the configured table. +* `table_reference` - (Required - Forces new resource) - A reference to the AWS Glue table which will be used to create the configured table. +* `table_reference.database_name` - (Required - Forces new resource) - The name of the AWS Glue database which contains the table. +* `table_reference.table_name` - (Required - Forces new resource) - The name of the AWS Glue table which will be used to create the configured table. +* `tags` - (Optional) - Key value pairs which tag the configured table. + +## Attribute Reference + +This resource exports the following attributes in addition to the arguments above: + +* `arn` - The ARN of the configured table. +* `id` - The ID of the configured table. 
+* `create_time` - The date and time the configured table was created. +* `update_time` - The date and time the configured table was last updated. + +## Timeouts + +[Configuration options](https://developer.hashicorp.com/terraform/language/resources/syntax#operation-timeouts): + +- `create` - (Default `1m`) +- `update` - (Default `1m`) +- `delete` - (Default `1m`) + +## Import + +In Terraform v1.5.0 and later, use an [`import` block](https://developer.hashicorp.com/terraform/language/import) to import `aws_cleanrooms_configured_table` using the `id`. For example: + +```python +# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug +from constructs import Construct +from cdktf import TerraformStack +class MyConvertedCode(TerraformStack): + def __init__(self, scope, name): + super().__init__(scope, name) +``` + +Using `terraform import`, import `aws_cleanrooms_configured_table` using the `id`. For example: + +```console +% terraform import aws_cleanrooms_configured_table.table 1234abcd-12ab-34cd-56ef-1234567890ab +``` + + \ No newline at end of file diff --git a/website/docs/cdktf/python/r/cloud9_environment_ec2.html.markdown b/website/docs/cdktf/python/r/cloud9_environment_ec2.html.markdown index 0ef2467b8c2b..1e705d2b08fa 100644 --- a/website/docs/cdktf/python/r/cloud9_environment_ec2.html.markdown +++ b/website/docs/cdktf/python/r/cloud9_environment_ec2.html.markdown @@ -114,9 +114,12 @@ This resource supports the following arguments: * `amazonlinux-1-x86_64` * `amazonlinux-2-x86_64` * `ubuntu-18.04-x86_64` + * `ubuntu-22.04-x86_64` * `resolve:ssm:/aws/service/cloud9/amis/amazonlinux-1-x86_64` * `resolve:ssm:/aws/service/cloud9/amis/amazonlinux-2-x86_64` * `resolve:ssm:/aws/service/cloud9/amis/ubuntu-18.04-x86_64` + * `resolve:ssm:/aws/service/cloud9/amis/ubuntu-22.04-x86_64` + * `owner_arn` - (Optional) The ARN of the environment owner. This can be ARN of any AWS IAM principal. Defaults to the environment's creator. * `subnet_id` - (Optional) The ID of the subnet in Amazon VPC that AWS Cloud9 will use to communicate with the Amazon EC2 instance. * `tags` - (Optional) Key-value map of resource tags. If configured with a provider [`default_tags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block) present, tags with matching keys will overwrite those defined at the provider-level. @@ -130,4 +133,4 @@ This resource exports the following attributes in addition to the arguments abov * `tags_all` - A map of tags assigned to the resource, including those inherited from the provider [`default_tags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block). * `type` - The type of the environment (e.g., `ssh` or `ec2`) - \ No newline at end of file + \ No newline at end of file diff --git a/website/docs/cdktf/python/r/cloudfront_distribution.html.markdown b/website/docs/cdktf/python/r/cloudfront_distribution.html.markdown index 00139203f647..f6c3a110b3ac 100644 --- a/website/docs/cdktf/python/r/cloudfront_distribution.html.markdown +++ b/website/docs/cdktf/python/r/cloudfront_distribution.html.markdown @@ -422,8 +422,8 @@ argument should not be specified. * `origin_access_control_id` (Optional) - Unique identifier of a [CloudFront origin access control][8] for this origin. * `origin_id` (Required) - Unique identifier for the origin. 
* `origin_path` (Optional) - Optional element that causes CloudFront to request your content from a directory in your Amazon S3 bucket or your custom origin. -* `origin_shield` - The [CloudFront Origin Shield](#origin-shield-arguments) configuration information. Using Origin Shield can help reduce the load on your origin. For more information, see [Using Origin Shield](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/origin-shield.html) in the Amazon CloudFront Developer Guide. -* `s3_origin_config` - The [CloudFront S3 origin](#s3-origin-config-arguments) configuration information. If a custom origin is required, use `custom_origin_config` instead. +* `origin_shield` - (Optional) [CloudFront Origin Shield](#origin-shield-arguments) configuration information. Using Origin Shield can help reduce the load on your origin. For more information, see [Using Origin Shield](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/origin-shield.html) in the Amazon CloudFront Developer Guide. +* `s3_origin_config` - (Optional) [CloudFront S3 origin](#s3-origin-config-arguments) configuration information. If a custom origin is required, use `custom_origin_config` instead. ##### Custom Origin Config Arguments @@ -437,7 +437,7 @@ argument should not be specified. ##### Origin Shield Arguments * `enabled` (Required) - Whether Origin Shield is enabled. -* `origin_shield_region` (Required) - AWS Region for Origin Shield. To specify a region, use the region code, not the region name. For example, specify the US East (Ohio) region as us-east-2. +* `origin_shield_region` (Optional) - AWS Region for Origin Shield. To specify a region, use the region code, not the region name. For example, specify the US East (Ohio) region as `us-east-2`. ##### S3 Origin Config Arguments @@ -527,4 +527,4 @@ Using `terraform import`, import CloudFront Distributions using the `id`. For ex % terraform import aws_cloudfront_distribution.distribution E74FTE3EXAMPLE ``` - \ No newline at end of file + \ No newline at end of file diff --git a/website/docs/cdktf/python/r/connect_instance_storage_config.html.markdown b/website/docs/cdktf/python/r/connect_instance_storage_config.html.markdown index 3aa7c381b1fb..378b6329b8ca 100644 --- a/website/docs/cdktf/python/r/connect_instance_storage_config.html.markdown +++ b/website/docs/cdktf/python/r/connect_instance_storage_config.html.markdown @@ -235,4 +235,4 @@ Using `terraform import`, import Amazon Connect Instance Storage Configs using t % terraform import aws_connect_instance_storage_config.example f1288a1f-6193-445a-b47e-af739b2:c1d4e5f6-1b3c-1b3c-1b3c-c1d4e5f6c1d4e5:CHAT_TRANSCRIPTS ``` - \ No newline at end of file + \ No newline at end of file diff --git a/website/docs/cdktf/python/r/db_option_group.html.markdown b/website/docs/cdktf/python/r/db_option_group.html.markdown index a59778ee3b8c..78d025dc3c9d 100644 --- a/website/docs/cdktf/python/r/db_option_group.html.markdown +++ b/website/docs/cdktf/python/r/db_option_group.html.markdown @@ -71,35 +71,35 @@ More information about this can be found [here](https://docs.aws.amazon.com/Amaz This resource supports the following arguments: -* `name` - (Optional, Forces new resource) The name of the option group. If omitted, Terraform will assign a random, unique name. Must be lowercase, to match as it is stored in AWS. +* `name` - (Optional, Forces new resource) Name of the option group. If omitted, Terraform will assign a random, unique name. Must be lowercase, to match as it is stored in AWS. 
* `name_prefix` - (Optional, Forces new resource) Creates a unique name beginning with the specified prefix. Conflicts with `name`. Must be lowercase, to match as it is stored in AWS. -* `option_group_description` - (Optional) The description of the option group. Defaults to "Managed by Terraform". +* `option_group_description` - (Optional) Description of the option group. Defaults to "Managed by Terraform". * `engine_name` - (Required) Specifies the name of the engine that this option group should be associated with. * `major_engine_version` - (Required) Specifies the major version of the engine that this option group should be associated with. -* `option` - (Optional) A list of Options to apply. -* `tags` - (Optional) A map of tags to assign to the resource. If configured with a provider [`default_tags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block) present, tags with matching keys will overwrite those defined at the provider-level. +* `option` - (Optional) List of options to apply. +* `tags` - (Optional) Map of tags to assign to the resource. If configured with a provider [`default_tags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block) present, tags with matching keys will overwrite those defined at the provider-level. -Option blocks support the following: +`option` blocks support the following: -* `option_name` - (Required) The Name of the Option (e.g., MEMCACHED). -* `option_settings` - (Optional) A list of option settings to apply. -* `port` - (Optional) The Port number when connecting to the Option (e.g., 11211). -* `version` - (Optional) The version of the option (e.g., 13.1.0.0). -* `db_security_group_memberships` - (Optional) A list of DB Security Groups for which the option is enabled. -* `vpc_security_group_memberships` - (Optional) A list of VPC Security Groups for which the option is enabled. +* `option_name` - (Required) Name of the option (e.g., MEMCACHED). +* `option_settings` - (Optional) List of option settings to apply. +* `port` - (Optional) Port number when connecting to the option (e.g., 11211). Leaving out or removing `port` from your configuration does not remove or clear a port from the option in AWS. AWS may assign a default port. Not including `port` in your configuration means that the AWS provider will ignore a previously set value, a value set by AWS, and any port changes. +* `version` - (Optional) Version of the option (e.g., 13.1.0.0). Leaving out or removing `version` from your configuration does not remove or clear a version from the option in AWS. AWS may assign a default version. Not including `version` in your configuration means that the AWS provider will ignore a previously set value, a value set by AWS, and any version changes. +* `db_security_group_memberships` - (Optional) List of DB Security Groups for which the option is enabled. +* `vpc_security_group_memberships` - (Optional) List of VPC Security Groups for which the option is enabled. -Option Settings blocks support the following: +`option_settings` blocks support the following: -* `name` - (Optional) The Name of the setting. -* `value` - (Optional) The Value of the setting. +* `name` - (Optional) Name of the setting. +* `value` - (Optional) Value of the setting. ## Attribute Reference This resource exports the following attributes in addition to the arguments above: -* `id` - The db option group name. -* `arn` - The ARN of the db option group. 
-* `tags_all` - A map of tags assigned to the resource, including those inherited from the provider [`default_tags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block). +* `id` - DB option group name. +* `arn` - ARN of the DB option group. +* `tags_all` - Map of tags assigned to the resource, including those inherited from the provider [`default_tags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block). ## Timeouts @@ -109,7 +109,7 @@ This resource exports the following attributes in addition to the arguments abov ## Import -In Terraform v1.5.0 and later, use an [`import` block](https://developer.hashicorp.com/terraform/language/import) to import DB Option groups using the `name`. For example: +In Terraform v1.5.0 and later, use an [`import` block](https://developer.hashicorp.com/terraform/language/import) to import DB option groups using the `name`. For example: ```python # DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug @@ -120,10 +120,10 @@ class MyConvertedCode(TerraformStack): super().__init__(scope, name) ``` -Using `terraform import`, import DB Option groups using the `name`. For example: +Using `terraform import`, import DB option groups using the `name`. For example: ```console % terraform import aws_db_option_group.example mysql-option-group ``` - \ No newline at end of file + \ No newline at end of file diff --git a/website/docs/cdktf/python/r/dms_replication_config.html.markdown b/website/docs/cdktf/python/r/dms_replication_config.html.markdown new file mode 100644 index 000000000000..b38079d65678 --- /dev/null +++ b/website/docs/cdktf/python/r/dms_replication_config.html.markdown @@ -0,0 +1,116 @@ +--- +subcategory: "DMS (Database Migration)" +layout: "aws" +page_title: "AWS: aws_dms_replication_config" +description: |- + Provides a DMS Serverless replication config resource. +--- + + + +# Resource: aws_dms_replication_config + +Provides a DMS Serverless replication config resource. + +~> **NOTE:** Changing most arguments will stop the replication if it is running. You can set `start_replication` to resume the replication afterwards. + +## Example Usage + +```python +# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug +from constructs import Construct +from cdktf import Token, TerraformStack +# +# Provider bindings are generated by running `cdktf get`. +# See https://cdk.tf/provider-generation for more details. 
+# +from imports.aws.dms_replication_config import DmsReplicationConfig +class MyConvertedCode(TerraformStack): + def __init__(self, scope, name): + super().__init__(scope, name) + DmsReplicationConfig(self, "name", + compute_config=DmsReplicationConfigComputeConfig( + max_capacity_units=Token.as_number("64"), + min_capacity_units=Token.as_number("2"), + preferred_maintenance_window="sun:23:45-mon:00:30", + replication_subnet_group_id=default_var.replication_subnet_group_id + ), + replication_config_identifier="test-dms-serverless-replication-tf", + replication_type="cdc", + resource_identifier="test-dms-serverless-replication-tf", + source_endpoint_arn=source.endpoint_arn, + start_replication=True, + table_mappings=" {\n \"rules\":[{\"rule-type\":\"selection\",\"rule-id\":\"1\",\"rule-name\":\"1\",\"object-locator\":{\"schema-name\":\"%%\",\"table-name\":\"%%\", \"rule-action\":\"include\"}]\n }\n\n", + target_endpoint_arn=target.endpoint_arn + ) +``` + +## Argument Reference + +This resource supports the following arguments: + +* `compute_config` - (Required) Configuration block for provisioning an DMS Serverless replication. +* `start_replication` - (Optional) Whether to run or stop the serverless replication, default is false. +* `replication_config_identifier` - (Required) Unique identifier that you want to use to create the config. +* `replication_type` - (Required) The migration type. Can be one of `full-load | cdc | full-load-and-cdc`. +* `source_endpoint_arn` - (Required) The Amazon Resource Name (ARN) string that uniquely identifies the source endpoint. +* `table_mappings` - (Required) An escaped JSON string that contains the table mappings. For information on table mapping see [Using Table Mapping with an AWS Database Migration Service Task to Select and Filter Data](http://docs.aws.amazon.com/dms/latest/userguide/CHAP_Tasks.CustomizingTasks.TableMapping.html) +* `target_endpoint_arn` - (Required) The Amazon Resource Name (ARN) string that uniquely identifies the target endpoint. +* `replication_settings` - (Optional) An escaped JSON string that are used to provision this replication configuration. For example, [Change processing tuning settings](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Tasks.CustomizingTasks.TaskSettings.ChangeProcessingTuning.html) +* `resource_identifier` - (Optional) Unique value or name that you set for a given resource that can be used to construct an Amazon Resource Name (ARN) for that resource. For more information, see [Fine-grained access control using resource names and tags](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Security.html#CHAP_Security.FineGrainedAccess) +* `supplemental_settings` - (Optional) JSON settings for specifying supplemental data. For more information see [Specifying supplemental data for task settings](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Tasks.TaskData.html) +* `tags` - (Optional) A map of tags to assign to the resource. If configured with a provider [`default_tags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block) present, tags with matching keys will overwrite those defined at the provider-level. + +`compute_config` block support the following: + +* `availability_zone` - (Optional) The Availability Zone where the DMS Serverless replication using this configuration will run. The default value is a random. 
+* `dns_name_servers` - (Optional) A list of custom DNS name servers supported for the DMS Serverless replication to access your source or target database. +* `kms_key_id` - (Optional) An Key Management Service (KMS) key Amazon Resource Name (ARN) that is used to encrypt the data during DMS Serverless replication. If you don't specify a value for the KmsKeyId parameter, DMS uses your default encryption key. +* `max_capacity_units` - (Required) Specifies the maximum value of the DMS capacity units (DCUs) for which a given DMS Serverless replication can be provisioned. A single DCU is 2GB of RAM, with 2 DCUs as the minimum value allowed. The list of valid DCU values includes 2, 4, 8, 16, 32, 64, 128, 192, 256, and 384. +* `min_capacity_units` - (Optional) Specifies the minimum value of the DMS capacity units (DCUs) for which a given DMS Serverless replication can be provisioned. The list of valid DCU values includes 2, 4, 8, 16, 32, 64, 128, 192, 256, and 384. If this value isn't set DMS scans the current activity of available source tables to identify an optimum setting for this parameter. +* `multi_az` - (Optional) Specifies if the replication instance is a multi-az deployment. You cannot set the `availability_zone` parameter if the `multi_az` parameter is set to `true`. +* `preferred_maintenance_window` - (Optional) The weekly time range during which system maintenance can occur, in Universal Coordinated Time (UTC). + + - Default: A 30-minute window selected at random from an 8-hour block of time per region, occurring on a random day of the week. + - Format: `ddd:hh24:mi-ddd:hh24:mi` + - Valid Days: `mon, tue, wed, thu, fri, sat, sun` + - Constraints: Minimum 30-minute window. + +* `replication_subnet_group_id` - (Optional) Specifies a subnet group identifier to associate with the DMS Serverless replication. +* `vpc_security_group_ids` - (Optional) Specifies the virtual private cloud (VPC) security group to use with the DMS Serverless replication. The VPC security group must work with the VPC containing the replication. + +## Attribute Reference + +This resource exports the following attributes in addition to the arguments above: + +* `arn` - The Amazon Resource Name (ARN) for the serverless replication config. +* `tags_all` - A map of tags assigned to the resource, including those inherited from the provider [`default_tags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block). + +## Timeouts + +[Configuration options](https://developer.hashicorp.com/terraform/language/resources/syntax#operation-timeouts): + +* `create` - (Default `60m`) +* `update` - (Default `60m`) +* `delete` - (Default `60m`) + +## Import + +In Terraform v1.5.0 and later, use an [`import` block](https://developer.hashicorp.com/terraform/language/import) to import replication configs using the `arn`. For example: + +```python +# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug +from constructs import Construct +from cdktf import TerraformStack +class MyConvertedCode(TerraformStack): + def __init__(self, scope, name): + super().__init__(scope, name) +``` + +Using `terraform import`, import a replication config using the `arn`. 
For example: + +```console +% terraform import aws_dms_replication_config.example arn:aws:dms:us-east-1:123456789012:replication-config:UX6OL6MHMMJKFFOXE3H7LLJCMEKBDUG4ZV7DRSI +``` + + \ No newline at end of file diff --git a/website/docs/cdktf/python/r/ec2_network_insights_path.html.markdown b/website/docs/cdktf/python/r/ec2_network_insights_path.html.markdown index f143e1b8061b..a686c6eae74b 100644 --- a/website/docs/cdktf/python/r/ec2_network_insights_path.html.markdown +++ b/website/docs/cdktf/python/r/ec2_network_insights_path.html.markdown @@ -38,7 +38,7 @@ class MyConvertedCode(TerraformStack): The following arguments are required: * `source` - (Required) ID or ARN of the resource which is the source of the path. Can be an Instance, Internet Gateway, Network Interface, Transit Gateway, VPC Endpoint, VPC Peering Connection or VPN Gateway. If the resource is in another account, you must specify an ARN. -* `destination` - (Required) ID or ARN of the resource which is the source of the path. Can be an Instance, Internet Gateway, Network Interface, Transit Gateway, VPC Endpoint, VPC Peering Connection or VPN Gateway. If the resource is in another account, you must specify an ARN. +* `destination` - (Required) ID or ARN of the resource which is the destination of the path. Can be an Instance, Internet Gateway, Network Interface, Transit Gateway, VPC Endpoint, VPC Peering Connection or VPN Gateway. If the resource is in another account, you must specify an ARN. * `protocol` - (Required) Protocol to use for analysis. Valid options are `tcp` or `udp`. The following arguments are optional: @@ -77,4 +77,4 @@ Using `terraform import`, import Network Insights Paths using the `id`. For exam % terraform import aws_ec2_network_insights_path.test nip-00edfba169923aefd ``` - \ No newline at end of file + \ No newline at end of file diff --git a/website/docs/cdktf/python/r/fsx_ontap_file_system.html.markdown b/website/docs/cdktf/python/r/fsx_ontap_file_system.html.markdown index 8a2cc4c0dd1a..e7f4145cbd7b 100644 --- a/website/docs/cdktf/python/r/fsx_ontap_file_system.html.markdown +++ b/website/docs/cdktf/python/r/fsx_ontap_file_system.html.markdown @@ -49,7 +49,7 @@ This resource supports the following arguments: * `kms_key_id` - (Optional) ARN for the KMS Key to encrypt the file system at rest, Defaults to an AWS managed KMS Key. * `automatic_backup_retention_days` - (Optional) The number of days to retain automatic backups. Setting this to 0 disables automatic backups. You can retain automatic backups for a maximum of 90 days. * `daily_automatic_backup_start_time` - (Optional) A recurring daily time, in the format HH:MM. HH is the zero-padded hour of the day (0-23), and MM is the zero-padded minute of the hour. For example, 05:00 specifies 5 AM daily. Requires `automatic_backup_retention_days` to be set. -* `disk_iops_configuration` - (Optional) The SSD IOPS configuration for the Amazon FSx for NetApp ONTAP file system. See [Disk Iops Configuration](#disk-iops-configuration) Below. +* `disk_iops_configuration` - (Optional) The SSD IOPS configuration for the Amazon FSx for NetApp ONTAP file system. See [Disk Iops Configuration](#disk-iops-configuration) below. * `endpoint_ip_address_range` - (Optional) Specifies the IP address range in which the endpoints to access your file system will be created. By default, Amazon FSx selects an unused IP address range for you from the 198.19.* range. * `storage_type` - (Optional) - The filesystem storage type. defaults to `SSD`. 
* `fsx_admin_password` - (Optional) The ONTAP administrative password for the fsxadmin user that you can use to administer your file system using the ONTAP CLI and REST API. @@ -139,4 +139,4 @@ class MyConvertedCode(TerraformStack): ) ``` - \ No newline at end of file + \ No newline at end of file diff --git a/website/docs/cdktf/python/r/fsx_ontap_volume.html.markdown b/website/docs/cdktf/python/r/fsx_ontap_volume.html.markdown index c8dcf1f31965..3ea3493869cc 100644 --- a/website/docs/cdktf/python/r/fsx_ontap_volume.html.markdown +++ b/website/docs/cdktf/python/r/fsx_ontap_volume.html.markdown @@ -72,18 +72,46 @@ class MyConvertedCode(TerraformStack): This resource supports the following arguments: * `name` - (Required) The name of the Volume. You can use a maximum of 203 alphanumeric characters, plus the underscore (_) special character. +* `bypass_snaplock_enterprise_retention` - (Optional) Setting this to `true` allows a SnapLock administrator to delete an FSx for ONTAP SnapLock Enterprise volume with unexpired write once, read many (WORM) files. This configuration must be applied separately before attempting to delete the resource to have the desired behavior. Defaults to `false`. +* `copy_tags_to_backups` - (Optional) A boolean flag indicating whether tags for the volume should be copied to backups. This value defaults to `false`. * `junction_path` - (Optional) Specifies the location in the storage virtual machine's namespace where the volume is mounted. The junction_path must have a leading forward slash, such as `/vol3` * `ontap_volume_type` - (Optional) Specifies the type of volume, valid values are `RW`, `DP`. Default value is `RW`. These can be set by the ONTAP CLI or API. This setting is used as part of migration and replication [Migrating to Amazon FSx for NetApp ONTAP](https://docs.aws.amazon.com/fsx/latest/ONTAPGuide/migrating-fsx-ontap.html) * `security_style` - (Optional) Specifies the volume security style, Valid values are `UNIX`, `NTFS`, and `MIXED`. * `size_in_megabytes` - (Required) Specifies the size of the volume, in megabytes (MB), that you are creating. * `skip_final_backup` - (Optional) When enabled, will skip the default final backup taken when the volume is deleted. This configuration must be applied separately before attempting to delete the resource to have the desired behavior. Defaults to `false`. +* `snaplock_configuration` - (Optional) The SnapLock configuration for an FSx for ONTAP volume. See [SnapLock Configuration](#snaplock-configuration) below. +* `snapshot_policy` - (Optional) Specifies the snapshot policy for the volume. See [snapshot policies](https://docs.aws.amazon.com/fsx/latest/ONTAPGuide/snapshots-ontap.html#snapshot-policies) in the Amazon FSx ONTAP User Guide * `storage_efficiency_enabled` - (Optional) Set to true to enable deduplication, compression, and compaction storage efficiency features on the volume. * `storage_virtual_machine_id` - (Required) Specifies the storage virtual machine in which to create the volume. * `tags` - (Optional) A map of tags to assign to the volume. If configured with a provider [`default_tags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block) present, tags with matching keys will overwrite those defined at the provider-level. +* `tiering_policy` - (Optional) The data tiering policy for an FSx for ONTAP volume. See [Tiering Policy](#tiering-policy) below. 
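
The nested `snaplock_configuration` and `tiering_policy` blocks are described in the sections that follow. As a bridge, here is a minimal hand-written sketch of a SnapLock-enabled volume (not `cdktf convert` output; the nested configuration class names and the placeholder SVM ID are assumptions).

```python
# Hand-written sketch, not 'cdktf convert' output. FsxOntapVolumeSnaplockConfiguration
# and FsxOntapVolumeTieringPolicy are assumed generated-binding names, and the
# storage virtual machine ID is a placeholder.
from constructs import Construct
from cdktf import TerraformStack
from imports.aws.fsx_ontap_volume import (
    FsxOntapVolume,
    FsxOntapVolumeSnaplockConfiguration,
    FsxOntapVolumeTieringPolicy,
)

class MyConvertedCode(TerraformStack):
    def __init__(self, scope, name):
        super().__init__(scope, name)
        FsxOntapVolume(self, "example",
            name="vol_snaplock",
            junction_path="/vol_snaplock",
            size_in_megabytes=1024,
            storage_virtual_machine_id="svm-12345678",
            # snaplock_type cannot be changed after the volume is created.
            snaplock_configuration=FsxOntapVolumeSnaplockConfiguration(
                snaplock_type="ENTERPRISE"
            ),
            tiering_policy=FsxOntapVolumeTieringPolicy(
                name="AUTO",
                cooling_period=31
            )
        )
```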
-### tiering_policy
+### SnapLock Configuration
 
-The `tiering_policy` configuration block supports the following arguments:
+* `audit_log_volume` - (Optional) Enables or disables the audit log volume for an FSx for ONTAP SnapLock volume. The default value is `false`.
+* `autocommit_period` - (Optional) The configuration object for setting the autocommit period of files in an FSx for ONTAP SnapLock volume. See [Autocommit Period](#autocommit-period) below.
+* `privileged_delete` - (Optional) Enables, disables, or permanently disables privileged delete on an FSx for ONTAP SnapLock Enterprise volume. Valid values: `DISABLED`, `ENABLED`, `PERMANENTLY_DISABLED`. The default value is `DISABLED`.
+* `retention_period` - (Optional) The retention period of an FSx for ONTAP SnapLock volume. See [SnapLock Retention Period](#snaplock-retention-period) below.
+* `snaplock_type` - (Required) Specifies the retention mode of an FSx for ONTAP SnapLock volume. After it is set, it can't be changed. Valid values: `COMPLIANCE`, `ENTERPRISE`.
+* `volume_append_mode_enabled` - (Optional) Enables or disables volume-append mode on an FSx for ONTAP SnapLock volume. The default value is `false`.
+
+### Autocommit Period
+
+* `type` - (Required) The type of time for the autocommit period of a file in an FSx for ONTAP SnapLock volume. Setting this value to `NONE` disables autocommit. Valid values: `MINUTES`, `HOURS`, `DAYS`, `MONTHS`, `YEARS`, `NONE`.
+* `value` - (Optional) The amount of time for the autocommit period of a file in an FSx for ONTAP SnapLock volume.
+
+### SnapLock Retention Period
+
+* `default_retention` - (Required) The retention period assigned to a write once, read many (WORM) file by default if an explicit retention period is not set for an FSx for ONTAP SnapLock volume. The default retention period must be greater than or equal to the minimum retention period and less than or equal to the maximum retention period. See [Retention Period](#retention-period) below.
+* `maximum_retention` - (Required) The longest retention period that can be assigned to a WORM file on an FSx for ONTAP SnapLock volume. See [Retention Period](#retention-period) below.
+* `minimum_retention` - (Required) The shortest retention period that can be assigned to a WORM file on an FSx for ONTAP SnapLock volume. See [Retention Period](#retention-period) below.
+
+### Retention Period
+
+* `type` - (Required) The type of time for the retention period of an FSx for ONTAP SnapLock volume. Set it to one of the valid types. If you set it to `INFINITE`, the files are retained forever. If you set it to `UNSPECIFIED`, the files are retained until you set an explicit retention period. Valid values: `SECONDS`, `MINUTES`, `HOURS`, `DAYS`, `MONTHS`, `YEARS`, `INFINITE`, `UNSPECIFIED`.
+* `value` - (Optional) The amount of time for the retention period of an FSx for ONTAP SnapLock volume.
+
+### Tiering Policy
 
* `name` - (Required) Specifies the tiering policy for the ONTAP volume for moving data to the capacity pool storage. Valid values are `SNAPSHOT_ONLY`, `AUTO`, `ALL`, `NONE`. Default value is `SNAPSHOT_ONLY`.
* `cooling_period` - (Optional) Specifies the number of days that user data in a volume must remain inactive before it is considered "cold" and moved to the capacity pool. Used with `AUTO` and `SNAPSHOT_ONLY` tiering policies only. Valid values are whole numbers between 2 and 183. Default values are 31 days for `AUTO` and 2 days for `SNAPSHOT_ONLY`.
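+
+As a rough illustration of how the SnapLock and tiering policy blocks above fit together, the following hand-written sketch (not `cdktf convert` output) creates a SnapLock Enterprise volume with a 30-day default WORM retention and an `AUTO` tiering policy. The import path and the storage virtual machine ID are assumptions, and plain dict literals stand in for the generated struct classes.
+
+```python
+# A hand-written sketch, not produced by 'cdktf convert'. Dict literals are
+# used in place of the generated struct classes and all IDs are placeholders.
+from constructs import Construct
+from cdktf import TerraformStack
+from imports.aws.fsx_ontap_volume import FsxOntapVolume
+
+
+class SnaplockVolumeSketch(TerraformStack):
+    def __init__(self, scope: Construct, name: str):
+        super().__init__(scope, name)
+        FsxOntapVolume(self, "example",
+            name="snaplock_vol",
+            size_in_megabytes=1024,
+            storage_virtual_machine_id="svm-12345678",  # placeholder SVM ID
+            # SnapLock Enterprise volume with a 30-day default WORM retention.
+            snaplock_configuration={
+                "snaplock_type": "ENTERPRISE",
+                "retention_period": {
+                    "default_retention": {"type": "DAYS", "value": 30},
+                    "minimum_retention": {"type": "DAYS", "value": 1},
+                    "maximum_retention": {"type": "YEARS", "value": 1}
+                }
+            },
+            # Move cold data to the capacity pool after 31 days of inactivity.
+            tiering_policy={
+                "name": "AUTO",
+                "cooling_period": 31
+            }
+        )
+```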
@@ -127,4 +155,4 @@ Using `terraform import`, import FSx ONTAP volume using the `id`. For example: % terraform import aws_fsx_ontap_volume.example fsvol-12345678abcdef123 ``` - \ No newline at end of file + \ No newline at end of file diff --git a/website/docs/cdktf/python/r/fsx_openzfs_volume.html.markdown b/website/docs/cdktf/python/r/fsx_openzfs_volume.html.markdown index cd72c84bb9ac..14e73ba28c41 100644 --- a/website/docs/cdktf/python/r/fsx_openzfs_volume.html.markdown +++ b/website/docs/cdktf/python/r/fsx_openzfs_volume.html.markdown @@ -42,6 +42,7 @@ This resource supports the following arguments: * `origin_snapshot` - (Optional) The ARN of the source snapshot to create the volume from. * `copy_tags_to_snapshots` - (Optional) A boolean flag indicating whether tags for the file system should be copied to snapshots. The default value is false. * `data_compression_type` - (Optional) Method used to compress the data on the volume. Valid values are `NONE` or `ZSTD`. Child volumes that don't specify compression option will inherit from parent volume. This option on file system applies to the root volume. +* `delete_volume_options` - (Optional) Whether to delete all child volumes and snapshots. Valid values: `DELETE_CHILD_VOLUMES_AND_SNAPSHOTS`. This configuration must be applied separately before attempting to delete the resource to have the desired behavior.. * `nfs_exports` - (Optional) NFS export configuration for the root volume. Exactly 1 item. See [NFS Exports](#nfs-exports) Below. * `read_only` - (Optional) specifies whether the volume is read-only. Default is false. * `record_size_kib` - (Optional) The record size of an OpenZFS volume, in kibibytes (KiB). Valid values are `4`, `8`, `16`, `32`, `64`, `128`, `256`, `512`, or `1024` KiB. The default is `128` KiB. @@ -100,4 +101,4 @@ Using `terraform import`, import FSx Volumes using the `id`. For example: % terraform import aws_fsx_openzfs_volume.example fsvol-543ab12b1ca672f33 ``` - \ No newline at end of file + \ No newline at end of file diff --git a/website/docs/cdktf/python/r/guardduty_detector.html.markdown b/website/docs/cdktf/python/r/guardduty_detector.html.markdown index 9acc14f0a870..23c5a886b310 100644 --- a/website/docs/cdktf/python/r/guardduty_detector.html.markdown +++ b/website/docs/cdktf/python/r/guardduty_detector.html.markdown @@ -3,14 +3,14 @@ subcategory: "GuardDuty" layout: "aws" page_title: "AWS: aws_guardduty_detector" description: |- - Provides a resource to manage a GuardDuty detector + Provides a resource to manage an Amazon GuardDuty detector --- # Resource: aws_guardduty_detector -Provides a resource to manage a GuardDuty detector. +Provides a resource to manage an Amazon GuardDuty detector. ~> **NOTE:** Deleting this resource is equivalent to "disabling" GuardDuty for an AWS region, which removes all existing findings. You can set the `enable` attribute to `false` to instead "suspend" monitoring and feedback reporting while keeping existing data. See the [Suspending or Disabling Amazon GuardDuty documentation](https://docs.aws.amazon.com/guardduty/latest/ug/guardduty_suspend-disable.html) for more information. @@ -56,7 +56,7 @@ This resource supports the following arguments: * `enable` - (Optional) Enable monitoring and feedback reporting. Setting to `false` is equivalent to "suspending" GuardDuty. Defaults to `true`. * `finding_publishing_frequency` - (Optional) Specifies the frequency of notifications sent for subsequent finding occurrences. 
If the detector is a GuardDuty member account, the value is determined by the GuardDuty primary account and cannot be modified, otherwise defaults to `SIX_HOURS`. For standalone and GuardDuty primary accounts, it must be configured in Terraform to enable drift detection. Valid values for standalone and primary accounts: `FIFTEEN_MINUTES`, `ONE_HOUR`, `SIX_HOURS`. See [AWS Documentation](https://docs.aws.amazon.com/guardduty/latest/ug/guardduty_findings_cloudwatch.html#guardduty_findings_cloudwatch_notification_frequency) for more information.
-* `datasources` - (Optional) Describes which data sources will be enabled for the detector. See [Data Sources](#data-sources) below for more details.
+* `datasources` - (Optional) Describes which data sources will be enabled for the detector. See [Data Sources](#data-sources) below for more details. [Deprecated](https://docs.aws.amazon.com/guardduty/latest/ug/guardduty-feature-object-api-changes-march2023.html) in favor of [`aws_guardduty_detector_feature` resources](guardduty_detector_feature.html).
* `tags` - (Optional) Key-value map of resource tags. If configured with a provider [`default_tags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block) present, tags with matching keys will overwrite those defined at the provider-level.

### Data Sources

@@ -70,6 +70,8 @@ The `datasources` block supports the following:

* `malware_protection` - (Optional) Configures [Malware Protection](https://docs.aws.amazon.com/guardduty/latest/ug/malware-protection.html). See [Malware Protection](#malware-protection), [Scan EC2 instance with findings](#scan-ec2-instance-with-findings) and [EBS volumes](#ebs-volumes) below for more details.

+The `datasources` block has been deprecated since March 2023. Use the `features` block instead and [map each `datasources` block to the corresponding `features` block](https://docs.aws.amazon.com/guardduty/latest/ug/guardduty-feature-object-api-changes-march2023.html#guardduty-feature-enablement-datasource-relation).
+
### S3 Logs

The `s3_logs` block supports the following:
@@ -142,4 +144,4 @@ Using `terraform import`, import GuardDuty detectors using the detector ID. For

The ID of the detector can be retrieved via the [AWS CLI](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/guardduty/list-detectors.html) using `aws guardduty list-detectors`.

- \ No newline at end of file
+ \ No newline at end of file
diff --git a/website/docs/cdktf/python/r/guardduty_detector_feature.html.markdown b/website/docs/cdktf/python/r/guardduty_detector_feature.html.markdown
new file mode 100644
index 000000000000..48c35d08eab6
--- /dev/null
+++ b/website/docs/cdktf/python/r/guardduty_detector_feature.html.markdown
@@ -0,0 +1,67 @@
+---
+subcategory: "GuardDuty"
+layout: "aws"
+page_title: "AWS: aws_guardduty_detector_feature"
+description: |-
+  Provides a resource to manage an Amazon GuardDuty detector feature
+---
+
+
+
+# Resource: aws_guardduty_detector_feature
+
+Provides a resource to manage a single Amazon GuardDuty [detector feature](https://docs.aws.amazon.com/guardduty/latest/ug/guardduty-features-activation-model.html#guardduty-features).
+
+~> **NOTE:** Deleting this resource does not disable the detector feature; the resource is simply removed from state instead.
+
+## Example Usage
+
+```python
+# DO NOT EDIT.
Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug
+from constructs import Construct
+from cdktf import TerraformStack
+#
+# Provider bindings are generated by running `cdktf get`.
+# See https://cdk.tf/provider-generation for more details.
+#
+from imports.aws.guardduty_detector_feature import GuarddutyDetectorFeature
+from imports.aws.guardduty_detector import GuarddutyDetector
+class MyConvertedCode(TerraformStack):
+    def __init__(self, scope, name):
+        super().__init__(scope, name)
+        example = GuarddutyDetector(self, "example",
+            enable=True
+        )
+        GuarddutyDetectorFeature(self, "eks_runtime_monitoring",
+            additional_configuration=[{
+                "name": "EKS_ADDON_MANAGEMENT",
+                "status": "ENABLED"
+            }
+            ],
+            detector_id=example.id,
+            name="EKS_RUNTIME_MONITORING",
+            status="ENABLED"
+        )
+```
+
+## Argument Reference
+
+This resource supports the following arguments:
+
+* `detector_id` - (Required) Amazon GuardDuty detector ID.
+* `name` - (Required) The name of the detector feature. Valid values: `S3_DATA_EVENTS`, `EKS_AUDIT_LOGS`, `EBS_MALWARE_PROTECTION`, `RDS_LOGIN_EVENTS`, `EKS_RUNTIME_MONITORING`, `LAMBDA_NETWORK_LOGS`.
+* `status` - (Required) The status of the detector feature. Valid values: `ENABLED`, `DISABLED`.
+* `additional_configuration` - (Optional) Additional feature configuration block. See [below](#additional-configuration).
+
+### Additional Configuration
+
+The `additional_configuration` block supports the following:
+
+* `name` - (Required) The name of the additional configuration. Valid values: `EKS_ADDON_MANAGEMENT`.
+* `status` - (Required) The status of the additional configuration. Valid values: `ENABLED`, `DISABLED`.
+
+## Attribute Reference
+
+This resource exports no additional attributes.
+
+ \ No newline at end of file
diff --git a/website/docs/cdktf/python/r/lexv2models_bot.html.markdown b/website/docs/cdktf/python/r/lexv2models_bot.html.markdown
new file mode 100644
index 000000000000..7c0965b8568e
--- /dev/null
+++ b/website/docs/cdktf/python/r/lexv2models_bot.html.markdown
@@ -0,0 +1,104 @@
+---
+subcategory: "Lex V2 Models"
+layout: "aws"
+page_title: "AWS: aws_lexv2models_bot"
+description: |-
+  Terraform resource for managing an AWS Lex V2 Models Bot.
+---
+
+
+
+# Resource: aws_lexv2models_bot
+
+Terraform resource for managing an AWS Lex V2 Models Bot.
+
+## Example Usage
+
+### Basic Usage
+
+```python
+# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug
+from constructs import Construct
+from cdktf import Token, TerraformStack
+#
+# Provider bindings are generated by running `cdktf get`.
+# See https://cdk.tf/provider-generation for more details.
+#
+from imports.aws.lexv2_models_bot import Lexv2ModelsBot
+class MyConvertedCode(TerraformStack):
+    def __init__(self, scope, name):
+        super().__init__(scope, name)
+        Lexv2ModelsBot(self, "example",
+            data_privacy=[{
+                "child_directed": Token.as_boolean("boolean")
+            }
+            ],
+            idle_session_ttl_in_seconds=10,
+            name="example",
+            role_arn="bot_example_arn"
+        )
+```
+
+## Argument Reference
+
+The following arguments are required:
+
+* `name` - Name of the bot. The bot name must be unique in the account that creates the bot. Type String. Length Constraints: Minimum length of 1. Maximum length of 100.
+* `data_privacy` - Provides information on additional privacy protections Amazon Lex should use with the bot's data.
See [`data_privacy`](#data-privacy) +* `idle_session_ttl_in_seconds` - Time, in seconds, that Amazon Lex should keep information about a user's conversation with the bot. You can specify between 60 (1 minute) and 86,400 (24 hours) seconds. +* `role_arn` - ARN of an IAM role that has permission to access the bot. + +The following arguments are optional: + +* `members` - List of bot members in a network to be created. See [`bot_members`](#bot-members). +* `bot_tags` - List of tags to add to the bot. You can only add tags when you create a bot. +* `bot_type` - Type of a bot to create. +* `description` - Description of the bot. It appears in lists to help you identify a particular bot. +* `test_bot_alias_tags` - List of tags to add to the test alias for a bot. You can only add tags when you create a bot. + +## Attribute Reference + +This resource exports the following attributes in addition to the arguments above: + +* `id` - Unique identifier for a particular bot. + +### Data Privacy + +* `child_directed` (Required) - For each Amazon Lex bot created with the Amazon Lex Model Building Service, you must specify whether your use of Amazon Lex is related to a website, program, or other application that is directed or targeted, in whole or in part, to children under age 13 and subject to the Children's Online Privacy Protection Act (COPPA) by specifying true or false in the childDirected field. + +### Bot Members + +* `alias_id` (Required) - Alias ID of a bot that is a member of this network of bots. +* `alias_name` (Required) - Alias name of a bot that is a member of this network of bots. +* `id` (Required) - Unique ID of a bot that is a member of this network of bots. +* `name` (Required) - Unique name of a bot that is a member of this network of bots. +* `version` (Required) - Version of a bot that is a member of this network of bots. + +## Timeouts + +[Configuration options](https://developer.hashicorp.com/terraform/language/resources/syntax#operation-timeouts): + +* `create` - (Default `30m`) +* `update` - (Default `30m`) +* `delete` - (Default `30m`) + +## Import + +In Terraform v1.5.0 and later, use an [`import` block](https://developer.hashicorp.com/terraform/language/import) to import Lex V2 Models Bot using the `example_id_arg`. For example: + +```python +# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug +from constructs import Construct +from cdktf import TerraformStack +class MyConvertedCode(TerraformStack): + def __init__(self, scope, name): + super().__init__(scope, name) +``` + +Using `terraform import`, import Lex V2 Models Bot using the `example_id_arg`. For example: + +```console +% terraform import aws_lexv2models_bot.example bot-id-12345678 +``` + + \ No newline at end of file diff --git a/website/docs/cdktf/python/r/lightsail_bucket.html.markdown b/website/docs/cdktf/python/r/lightsail_bucket.html.markdown index b190a8d4083a..76ab18867d81 100644 --- a/website/docs/cdktf/python/r/lightsail_bucket.html.markdown +++ b/website/docs/cdktf/python/r/lightsail_bucket.html.markdown @@ -38,6 +38,7 @@ This resource supports the following arguments: * `name` - (Required) The name for the bucket. * `bundle_id` - (Required) - The ID of the bundle to use for the bucket. A bucket bundle specifies the monthly cost, storage space, and data transfer quota for a bucket. Use the [get-bucket-bundles](https://docs.aws.amazon.com/cli/latest/reference/lightsail/get-bucket-bundles.html) cli command to get a list of bundle IDs that you can specify. 
+* `force_delete` - (Optional) - Force Delete non-empty buckets using `terraform destroy`. AWS by default will not delete an s3 bucket which is not empty, to prevent losing bucket data and affecting other resources in lightsail. If `force_delete` is set to `true` the bucket will be deleted even when not empty. * `tags` - (Optional) A map of tags to assign to the resource. To create a key-only tag, use an empty string as the value. If configured with a provider `default_tags` configuration block present, tags with matching keys will overwrite those defined at the provider-level. ## Attribute Reference @@ -71,4 +72,4 @@ Using `terraform import`, import `aws_lightsail_bucket` using the `name` attribu % terraform import aws_lightsail_bucket.test example-bucket ``` - \ No newline at end of file + \ No newline at end of file diff --git a/website/docs/cdktf/python/r/mq_broker.html.markdown b/website/docs/cdktf/python/r/mq_broker.html.markdown index afe1b7a7c2a7..069eeaa2f585 100644 --- a/website/docs/cdktf/python/r/mq_broker.html.markdown +++ b/website/docs/cdktf/python/r/mq_broker.html.markdown @@ -14,7 +14,7 @@ Provides an Amazon MQ broker resource. This resources also manages users for the -> For more information on Amazon MQ, see [Amazon MQ documentation](https://docs.aws.amazon.com/amazon-mq/latest/developer-guide/welcome.html). -~> **NOTE:** Amazon MQ currently places limits on **RabbitMQ** brokers. For example, a RabbitMQ broker cannot have: instances with an associated IP address of an ENI attached to the broker, an associated LDAP server to authenticate and authorize broker connections, storage type `EFS`, audit logging, or `configuration` blocks. Although this resource allows you to create RabbitMQ users, RabbitMQ users cannot have console access or groups. Also, Amazon MQ does not return information about RabbitMQ users so drift detection is not possible. +~> **NOTE:** Amazon MQ currently places limits on **RabbitMQ** brokers. For example, a RabbitMQ broker cannot have: instances with an associated IP address of an ENI attached to the broker, an associated LDAP server to authenticate and authorize broker connections, storage type `EFS`, or audit logging. Although this resource allows you to create RabbitMQ users, RabbitMQ users cannot have console access or groups. Also, Amazon MQ does not return information about RabbitMQ users so drift detection is not possible. ~> **NOTE:** Changes to an MQ Broker can occur when you change a parameter, such as `configuration` or `user`, and are reflected in the next maintenance window. Because of this, Terraform may report a difference in its planning phase because a modification has not yet taken place. You can use the `apply_immediately` flag to instruct the service to apply the change immediately (see documentation below). Using `apply_immediately` can result in a brief downtime as the broker reboots. @@ -104,7 +104,7 @@ The following arguments are optional: * `apply_immediately` - (Optional) Specifies whether any broker modifications are applied immediately, or during the next maintenance window. Default is `false`. * `authentication_strategy` - (Optional) Authentication strategy used to secure the broker. Valid values are `simple` and `ldap`. `ldap` is not supported for `engine_type` `RabbitMQ`. * `auto_minor_version_upgrade` - (Optional) Whether to automatically upgrade to new minor versions of brokers as Amazon MQ makes releases available. -* `configuration` - (Optional) Configuration block for broker configuration. 
Applies to `engine_type` of `ActiveMQ` only. Detailed below. +* `configuration` - (Optional) Configuration block for broker configuration. Applies to `engine_type` of `ActiveMQ` and `RabbitMQ` only. Detailed below. * `deployment_mode` - (Optional) Deployment mode of the broker. Valid values are `SINGLE_INSTANCE`, `ACTIVE_STANDBY_MULTI_AZ`, and `CLUSTER_MULTI_AZ`. Default is `SINGLE_INSTANCE`. * `encryption_options` - (Optional) Configuration block containing encryption options. Detailed below. * `ldap_server_metadata` - (Optional) Configuration block for the LDAP server used to authenticate and authorize connections to the broker. Not supported for `engine_type` `RabbitMQ`. Detailed below. (Currently, AWS may not process changes to LDAP server metadata.) @@ -218,4 +218,4 @@ Using `terraform import`, import MQ Brokers using their broker id. For example: % terraform import aws_mq_broker.example a1b2c3d4-d5f6-7777-8888-9999aaaabbbbcccc ``` - \ No newline at end of file + \ No newline at end of file diff --git a/website/docs/cdktf/python/r/mq_configuration.html.markdown b/website/docs/cdktf/python/r/mq_configuration.html.markdown index 774f1a2ee3fe..1d6f10b99b11 100644 --- a/website/docs/cdktf/python/r/mq_configuration.html.markdown +++ b/website/docs/cdktf/python/r/mq_configuration.html.markdown @@ -16,6 +16,8 @@ For more information on Amazon MQ, see [Amazon MQ documentation](https://docs.aw ## Example Usage +### ActiveMQ + ```python # DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug from constructs import Construct @@ -37,11 +39,34 @@ class MyConvertedCode(TerraformStack): ) ``` +### RabbitMQ + +```python +# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug +from constructs import Construct +from cdktf import TerraformStack +# +# Provider bindings are generated by running `cdktf get`. +# See https://cdk.tf/provider-generation for more details. +# +from imports.aws.mq_configuration import MqConfiguration +class MyConvertedCode(TerraformStack): + def __init__(self, scope, name): + super().__init__(scope, name) + MqConfiguration(self, "example", + data="# Default RabbitMQ delivery acknowledgement timeout is 30 minutes in milliseconds\nconsumer_timeout = 1800000\n\n", + description="Example Configuration", + engine_type="RabbitMQ", + engine_version="3.11.16", + name="example" + ) +``` + ## Argument Reference The following arguments are required: -* `data` - (Required) Broker configuration in XML format. See [official docs](https://docs.aws.amazon.com/amazon-mq/latest/developer-guide/amazon-mq-broker-configuration-parameters.html) for supported parameters and format of the XML. +* `data` - (Required) Broker configuration in XML format for `ActiveMQ` or [Cuttlefish](https://github.com/Kyorai/cuttlefish) format for `RabbitMQ`. See [official docs](https://docs.aws.amazon.com/amazon-mq/latest/developer-guide/amazon-mq-broker-configuration-parameters.html) for supported parameters and format of the XML. * `engine_type` - (Required) Type of broker engine. Valid values are `ActiveMQ` and `RabbitMQ`. * `engine_version` - (Required) Version of the broker engine. * `name` - (Required) Name of the configuration. @@ -80,4 +105,4 @@ Using `terraform import`, import MQ Configurations using the configuration ID. 
F % terraform import aws_mq_configuration.example c-0187d1eb-88c8-475a-9b79-16ef5a10c94f ``` - \ No newline at end of file + \ No newline at end of file diff --git a/website/docs/cdktf/python/r/opensearch_inbound_connection_accepter.html.markdown b/website/docs/cdktf/python/r/opensearch_inbound_connection_accepter.html.markdown index 12d9066b86cc..1e64c3ec5d3a 100644 --- a/website/docs/cdktf/python/r/opensearch_inbound_connection_accepter.html.markdown +++ b/website/docs/cdktf/python/r/opensearch_inbound_connection_accepter.html.markdown @@ -69,6 +69,13 @@ This resource exports the following attributes in addition to the arguments abov * `id` - The Id of the connection to accept. * `connection_status` - Status of the connection request. +## Timeouts + +[Configuration options](https://developer.hashicorp.com/terraform/language/resources/syntax#operation-timeouts): + +* `create` - (Default `5m`) +* `delete` - (Default `5m`) + ## Import In Terraform v1.5.0 and later, use an [`import` block](https://developer.hashicorp.com/terraform/language/import) to import AWS Opensearch Inbound Connection Accepters using the Inbound Connection ID. For example: @@ -88,4 +95,4 @@ Using `terraform import`, import AWS Opensearch Inbound Connection Accepters usi % terraform import aws_opensearch_inbound_connection_accepter.foo connection-id ``` - \ No newline at end of file + \ No newline at end of file diff --git a/website/docs/cdktf/python/r/opensearch_outbound_connection.html.markdown b/website/docs/cdktf/python/r/opensearch_outbound_connection.html.markdown index 0aca0b44a078..ba628689e9f0 100644 --- a/website/docs/cdktf/python/r/opensearch_outbound_connection.html.markdown +++ b/website/docs/cdktf/python/r/opensearch_outbound_connection.html.markdown @@ -36,6 +36,7 @@ class MyConvertedCode(TerraformStack): data_aws_region_current.override_logical_id("current") OpensearchOutboundConnection(self, "foo", connection_alias="outbound_connection", + connection_mode="DIRECT", local_domain_info=OpensearchOutboundConnectionLocalDomainInfo( domain_name=local_domain.domain_name, owner_id=Token.as_string(current.account_id), @@ -54,9 +55,20 @@ class MyConvertedCode(TerraformStack): This resource supports the following arguments: * `connection_alias` - (Required, Forces new resource) Specifies the connection alias that will be used by the customer for this connection. +* `connection_mode` - (Required, Forces new resource) Specifies the connection mode. Accepted values are `DIRECT` or `VPC_ENDPOINT`. +* `accept_connection` - (Optional, Forces new resource) Accepts the connection. +* `connection_properties` - (Optional, Forces new resource) Configuration block for the outbound connection. * `local_domain_info` - (Required, Forces new resource) Configuration block for the local Opensearch domain. * `remote_domain_info` - (Required, Forces new resource) Configuration block for the remote Opensearch domain. +### connection_properties + +* `cross_cluster_search` - (Optional, Forces new resource) Configuration block for cross cluster search. + +### cross_cluster_search + +* `skip_unavailable` - (Optional, Forces new resource) Skips unavailable clusters and can only be used for cross-cluster searches. Accepted values are `ENABLED` or `DISABLED`. + ### local_domain_info * `owner_id` - (Required, Forces new resource) The Account ID of the owner of the local domain. @@ -76,6 +88,17 @@ This resource exports the following attributes in addition to the arguments abov * `id` - The Id of the connection. 
* `connection_status` - Status of the connection request. +`connection_properties` block exports the following: + +* `endpoint` - The endpoint of the remote domain, is only set when `connection_mode` is `VPC_ENDPOINT` and `accept_connection` is `TRUE`. + +## Timeouts + +[Configuration options](https://developer.hashicorp.com/terraform/language/resources/syntax#operation-timeouts): + +* `create` - (Default `5m`) +* `delete` - (Default `5m`) + ## Import In Terraform v1.5.0 and later, use an [`import` block](https://developer.hashicorp.com/terraform/language/import) to import AWS Opensearch Outbound Connections using the Outbound Connection ID. For example: @@ -95,4 +118,4 @@ Using `terraform import`, import AWS Opensearch Outbound Connections using the O % terraform import aws_opensearch_outbound_connection.foo connection-id ``` - \ No newline at end of file + \ No newline at end of file diff --git a/website/docs/cdktf/python/r/opensearch_package.html.markdown b/website/docs/cdktf/python/r/opensearch_package.html.markdown new file mode 100644 index 000000000000..2b201baf3dc6 --- /dev/null +++ b/website/docs/cdktf/python/r/opensearch_package.html.markdown @@ -0,0 +1,94 @@ +--- +subcategory: "OpenSearch" +layout: "aws" +page_title: "AWS: aws_opensearch_package" +description: |- + Terraform resource for managing an AWS OpenSearch package. +--- + + + +# Resource: aws_opensearch_package + +Manages an AWS Opensearch Package. + +## Example Usage + +### Basic Usage + +```python +# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug +from constructs import Construct +from cdktf import Fn, Token, TerraformStack +# +# Provider bindings are generated by running `cdktf get`. +# See https://cdk.tf/provider-generation for more details. +# +from imports.aws.opensearch_package import OpensearchPackage +from imports.aws.s3_bucket import S3Bucket +from imports.aws.s3_object import S3Object +class MyConvertedCode(TerraformStack): + def __init__(self, scope, name): + super().__init__(scope, name) + my_opensearch_packages = S3Bucket(self, "my_opensearch_packages", + bucket="my-opensearch-packages" + ) + example = S3Object(self, "example", + bucket=my_opensearch_packages.bucket, + etag=Token.as_string(Fn.filemd5("./example.txt")), + key="example.txt", + source="./example.txt" + ) + aws_opensearch_package_example = OpensearchPackage(self, "example_2", + package_name="example-txt", + package_source=OpensearchPackagePackageSource( + s3_bucket_name=my_opensearch_packages.bucket, + s3_key=example.key + ), + package_type="TXT-DICTIONARY" + ) + # This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match. + aws_opensearch_package_example.override_logical_id("example") +``` + +## Argument Reference + +This resource supports the following arguments: + +* `package_name` - (Required, Forces new resource) Unique name for the package. +* `package_type` - (Required, Forces new resource) The type of package. +* `package_source` - (Required, Forces new resource) Configuration block for the package source options. +* `package_description` - (Optional, Forces new resource) Description of the package. + +### package_source + +* `s3_bucket_name` - (Required, Forces new resource) The name of the Amazon S3 bucket containing the package. +* `s3_key` - (Required, Forces new resource) Key (file name) of the package. 
+ +## Attribute Reference + +This resource exports the following attributes in addition to the arguments above: + +* `id` - The Id of the package. +* `available_package_version` - The current version of the package. + +## Import + +In Terraform v1.5.0 and later, use an [`import` block](https://developer.hashicorp.com/terraform/language/import) to import AWS Opensearch Packages using the Package ID. For example: + +```python +# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug +from constructs import Construct +from cdktf import TerraformStack +class MyConvertedCode(TerraformStack): + def __init__(self, scope, name): + super().__init__(scope, name) +``` + +Using `terraform import`, import AWS Opensearch Packages using the Package ID. For example: + +```console +% terraform import aws_opensearch_package.example package-id +``` + + \ No newline at end of file diff --git a/website/docs/cdktf/python/r/opensearch_package_association.html.markdown b/website/docs/cdktf/python/r/opensearch_package_association.html.markdown new file mode 100644 index 000000000000..c6a6e3993479 --- /dev/null +++ b/website/docs/cdktf/python/r/opensearch_package_association.html.markdown @@ -0,0 +1,77 @@ +--- +subcategory: "OpenSearch" +layout: "aws" +page_title: "AWS: aws_opensearch_package_association" +description: |- + Terraform resource for managing an AWS OpenSearch package association. +--- + + + +# Resource: aws_opensearch_package_association + +Manages an AWS Opensearch Package Association. + +## Example Usage + +### Basic Usage + +```python +# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug +from constructs import Construct +from cdktf import Token, TerraformStack +# +# Provider bindings are generated by running `cdktf get`. +# See https://cdk.tf/provider-generation for more details. +# +from imports.aws.opensearch_domain import OpensearchDomain +from imports.aws.opensearch_package import OpensearchPackage +from imports.aws.opensearch_package_association import OpensearchPackageAssociation +class MyConvertedCode(TerraformStack): + def __init__(self, scope, name): + super().__init__(scope, name) + my_domain = OpensearchDomain(self, "my_domain", + cluster_config=OpensearchDomainClusterConfig( + instance_type="r4.large.search" + ), + domain_name="my-opensearch-domain", + engine_version="Elasticsearch_7.10" + ) + example = OpensearchPackage(self, "example", + package_name="example-txt", + package_source=OpensearchPackagePackageSource( + s3_bucket_name=my_opensearch_packages.bucket, + s3_key=Token.as_string(aws_s3_object_example.key) + ), + package_type="TXT-DICTIONARY" + ) + aws_opensearch_package_association_example = + OpensearchPackageAssociation(self, "example_2", + domain_name=my_domain.domain_name, + package_id=example.id + ) + # This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match. + aws_opensearch_package_association_example.override_logical_id("example") +``` + +## Argument Reference + +This resource supports the following arguments: + +* `package_id` - (Required, Forces new resource) Internal ID of the package to associate with a domain. +* `domain_name` - (Required, Forces new resource) Name of the domain to associate the package with. + +## Attribute Reference + +This resource exports the following attributes in addition to the arguments above: + +* `id` - The Id of the package association. 
+ +## Timeouts + +[Configuration options](https://developer.hashicorp.com/terraform/language/resources/syntax#operation-timeouts): + +* `create` - (Default `10m`) +* `delete` - (Default `10m`) + + \ No newline at end of file diff --git a/website/docs/cdktf/python/r/rds_custom_db_engine_version.markdown b/website/docs/cdktf/python/r/rds_custom_db_engine_version.markdown new file mode 100644 index 000000000000..5342c3590236 --- /dev/null +++ b/website/docs/cdktf/python/r/rds_custom_db_engine_version.markdown @@ -0,0 +1,191 @@ +--- +subcategory: "RDS (Relational Database)" +layout: "aws" +page_title: "AWS: aws_rds_custom_db_engine_version" +description: |- + Provides an custom engine version (CEV) resource for Amazon RDS Custom. +--- + + + +# Resource: aws_rds_custom_db_engine_version + +Provides an custom engine version (CEV) resource for Amazon RDS Custom. For additional information, see [Working with CEVs for RDS Custom for Oracle](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/custom-cev.html) and [Working with CEVs for RDS Custom for SQL Server](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/custom-cev-sqlserver.html) in the the [RDS User Guide](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Welcome.html). + +## Example Usage + +### RDS Custom for Oracle Usage + +```python +# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug +from constructs import Construct +from cdktf import TerraformStack +# +# Provider bindings are generated by running `cdktf get`. +# See https://cdk.tf/provider-generation for more details. +# +from imports.aws.kms_key import KmsKey +from imports.aws.rds_custom_db_engine_version import RdsCustomDbEngineVersion +class MyConvertedCode(TerraformStack): + def __init__(self, scope, name): + super().__init__(scope, name) + example = KmsKey(self, "example", + description="KMS symmetric key for RDS Custom for Oracle" + ) + aws_rds_custom_db_engine_version_example = RdsCustomDbEngineVersion(self, "example_1", + database_installation_files_s3_bucket_name="DOC-EXAMPLE-BUCKET", + database_installation_files_s3_prefix="1915_GI/", + engine="custom-oracle-ee-cdb", + engine_version="19.cdb_cev1", + kms_key_id=example.arn, + manifest=" {\n\t\"databaseInstallationFileNames\":[\"V982063-01.zip\"]\n }\n\n", + tags={ + "Key": "value", + "Name": "example" + } + ) + # This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match. + aws_rds_custom_db_engine_version_example.override_logical_id("example") +``` + +### RDS Custom for Oracle External Manifest Usage + +```python +# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug +from constructs import Construct +from cdktf import Fn, Token, TerraformStack +# +# Provider bindings are generated by running `cdktf get`. +# See https://cdk.tf/provider-generation for more details. 
+# +from imports.aws.kms_key import KmsKey +from imports.aws.rds_custom_db_engine_version import RdsCustomDbEngineVersion +class MyConvertedCode(TerraformStack): + def __init__(self, scope, name): + super().__init__(scope, name) + example = KmsKey(self, "example", + description="KMS symmetric key for RDS Custom for Oracle" + ) + aws_rds_custom_db_engine_version_example = RdsCustomDbEngineVersion(self, "example_1", + database_installation_files_s3_bucket_name="DOC-EXAMPLE-BUCKET", + database_installation_files_s3_prefix="1915_GI/", + engine="custom-oracle-ee-cdb", + engine_version="19.cdb_cev1", + filename="manifest_1915_GI.json", + kms_key_id=example.arn, + manifest_hash=Token.as_string(Fn.filebase64sha256(json)), + tags={ + "Key": "value", + "Name": "example" + } + ) + # This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match. + aws_rds_custom_db_engine_version_example.override_logical_id("example") +``` + +### RDS Custom for SQL Server Usage + +```python +# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug +from constructs import Construct +from cdktf import TerraformStack +# +# Provider bindings are generated by running `cdktf get`. +# See https://cdk.tf/provider-generation for more details. +# +from imports.aws.rds_custom_db_engine_version import RdsCustomDbEngineVersion +class MyConvertedCode(TerraformStack): + def __init__(self, scope, name): + super().__init__(scope, name) + RdsCustomDbEngineVersion(self, "test", + engine="custom-sqlserver-se", + engine_version="15.00.4249.2.cev-1", + source_image_id="ami-0aa12345678a12ab1" + ) +``` + +### RDS Custom for SQL Server Usage with AMI from another region + +```python +# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug +from constructs import Construct +from cdktf import TerraformStack +# +# Provider bindings are generated by running `cdktf get`. +# See https://cdk.tf/provider-generation for more details. +# +from imports.aws.ami_copy import AmiCopy +from imports.aws.rds_custom_db_engine_version import RdsCustomDbEngineVersion +class MyConvertedCode(TerraformStack): + def __init__(self, scope, name): + super().__init__(scope, name) + example = AmiCopy(self, "example", + description="A copy of ami-xxxxxxxx", + name="sqlserver-se-2019-15.00.4249.2", + source_ami_id="ami-xxxxxxxx", + source_ami_region="us-east-1" + ) + RdsCustomDbEngineVersion(self, "test", + engine="custom-sqlserver-se", + engine_version="15.00.4249.2.cev-1", + source_image_id=example.id + ) +``` + +## Argument Reference + +This resource supports the following arguments: + +* `database_installation_files_s3_bucket_name` - (Required) The name of the Amazon S3 bucket that contains the database installation files. +* `database_installation_files_s3_prefix` - (Required) The prefix for the Amazon S3 bucket that contains the database installation files. +* `description` - (Optional) The description of the CEV. +* `engine` - (Required) The name of the database engine. Valid values are `custom-oracle*`, `custom-sqlserver*`. +* `engine_version` - (Required) The version of the database engine. +* `filename` - (Optional) The name of the manifest file within the local filesystem. Conflicts with `manifest`. +* `kms_key_id` - (Optional) The ARN of the AWS KMS key that is used to encrypt the database installation files. Required for RDS Custom for Oracle. 
+* `manifest` - (Optional) The manifest file, in JSON format, that contains the list of database installation files. Conflicts with `filename`. +* `manifest_hash` - (Optional) Used to trigger updates. Must be set to a base64-encoded SHA256 hash of the manifest source specified with `filename`. The usual way to set this is filebase64sha256("manifest.json") where "manifest.json" is the local filename of the manifest source. +* `status` - (Optional) The status of the CEV. Valid values are `available`, `inactive`, `inactive-except-restore`. +* `source_image_id` - (Optional) The ID of the AMI to create the CEV from. Required for RDS Custom for SQL Server. For RDS Custom for Oracle, you can specify an AMI ID that was used in a different Oracle CEV. +* `tags` - (Optional) A mapping of tags to assign to the resource. If configured with a provider [`default_tags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block) present, tags with matching keys will overwrite those defined at the provider-level. + +## Attribute Reference + +This resource exports the following attributes in addition to the arguments above: + +* `arn` - The Amazon Resource Name (ARN) for the custom engine version. +* `create_time` - The date and time that the CEV was created. +* `db_parameter_group_family` - The name of the DB parameter group family for the CEV. +* `image_id` - The ID of the AMI that was created with the CEV. +* `major_engine_version` - The major version of the database engine. +* `manifest_computed` - The returned manifest file, in JSON format, service generated and often different from input `manifest`. +* `tags_all` - A map of tags assigned to the resource, including those inherited from the provider [`default_tags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block). + +## Timeouts + +[Configuration options](https://developer.hashicorp.com/terraform/language/resources/syntax#operation-timeouts): + +- `create` - (Default `240m`) +- `update` - (Default `10m`) +- `delete` - (Default `60m`) + +## Import + +In Terraform v1.5.0 and later, use an [`import` block](https://developer.hashicorp.com/terraform/language/import) to import custom engine versions for Amazon RDS custom using the `engine` and `engine_version` separated by a colon (`:`). For example: + +```python +# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug +from constructs import Construct +from cdktf import TerraformStack +class MyConvertedCode(TerraformStack): + def __init__(self, scope, name): + super().__init__(scope, name) +``` + +Using `terraform import`, import custom engine versions for Amazon RDS custom using the `engine` and `engine_version` separated by a colon (`:`). For example: + +```console +% terraform import aws_rds_custom_db_engine_version.example custom-oracle-ee-cdb:19.cdb_cev1 +``` + + \ No newline at end of file diff --git a/website/docs/cdktf/python/r/route53_hosted_zone_dnssec.html.markdown b/website/docs/cdktf/python/r/route53_hosted_zone_dnssec.html.markdown index 924717f18045..33fed47ddb2e 100644 --- a/website/docs/cdktf/python/r/route53_hosted_zone_dnssec.html.markdown +++ b/website/docs/cdktf/python/r/route53_hosted_zone_dnssec.html.markdown @@ -14,6 +14,8 @@ Manages Route 53 Hosted Zone Domain Name System Security Extensions (DNSSEC). 
Fo !> **WARNING:** If you disable DNSSEC signing for your hosted zone before the DNS changes have propagated, your domain could become unavailable on the internet. When you remove the DS records, you must wait until the longest TTL for the DS records that you remove has expired before you complete the step to disable DNSSEC signing. Please refer to the [Route 53 Developer Guide - Disable DNSSEC](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/dns-configuring-dnssec-disable.html) for a detailed breakdown on the steps required to disable DNSSEC safely for a hosted zone. +~> **Note:** Route53 hosted zones are global resources, and as such any `aws_kms_key` that you use as part of a signing key needs to be located in the `us-east-1` region. In the example below, the main AWS provider declaration is for `us-east-1`, however if you are provisioning your AWS resources in a different region, you will need to specify a provider alias and use that attached to the `aws_kms_key` resource as described in the [provider alias documentation](https://developer.hashicorp.com/terraform/language/providers/configuration#alias-multiple-provider-configurations). + ## Example Usage ```python @@ -120,4 +122,4 @@ Using `terraform import`, import `aws_route53_hosted_zone_dnssec` resources usin % terraform import aws_route53_hosted_zone_dnssec.example Z1D633PJN98FT9 ``` - \ No newline at end of file + \ No newline at end of file diff --git a/website/docs/cdktf/python/r/sfn_alias.html.markdown b/website/docs/cdktf/python/r/sfn_alias.html.markdown index 001a0c9716dd..edf19d6b39e7 100644 --- a/website/docs/cdktf/python/r/sfn_alias.html.markdown +++ b/website/docs/cdktf/python/r/sfn_alias.html.markdown @@ -51,7 +51,7 @@ class MyConvertedCode(TerraformStack): ## Argument Reference -The following arguments are required: +This resource supports the following arguments: * `name` - (Required) Name for the alias you are creating. * `description` - (Optional) Description of the alias. @@ -59,13 +59,9 @@ The following arguments are required: `routing_configuration` supports the following arguments: -* `state_machine_version_arn` - (Required) A version of the state machine. +* `state_machine_version_arn` - (Required) The Amazon Resource Name (ARN) of the state machine version. * `weight` - (Required) Percentage of traffic routed to the state machine version. -The following arguments are optional: - -* `optional_arg` - (Optional) Concise argument description. Do not begin the description with "An", "The", "Defines", "Indicates", or "Specifies," as these are verbose. In other words, "Indicates the amount of storage," can be rewritten as "Amount of storage," without losing any information. - ## Attribute Reference This resource exports the following attributes in addition to the arguments above: @@ -92,4 +88,4 @@ Using `terraform import`, import SFN (Step Functions) Alias using the `arn`. 
For % terraform import aws_sfn_alias.foo arn:aws:states:us-east-1:123456789098:stateMachine:myStateMachine:foo ``` - \ No newline at end of file + \ No newline at end of file diff --git a/website/docs/cdktf/python/r/vpclattice_listener_rule.html.markdown b/website/docs/cdktf/python/r/vpclattice_listener_rule.html.markdown index 7e667a5f0490..0de19f26a90b 100644 --- a/website/docs/cdktf/python/r/vpclattice_listener_rule.html.markdown +++ b/website/docs/cdktf/python/r/vpclattice_listener_rule.html.markdown @@ -107,7 +107,7 @@ The following arguments are required: * `service_identifier` - (Required) The ID or Amazon Resource Identifier (ARN) of the service. * `listener_identifier` - (Required) The ID or Amazon Resource Name (ARN) of the listener. -* `action` - (Required) The action for the default rule. +* `action` - (Required) The action for the listener rule. * `match` - (Required) The rule match. * `name` - (Required) The name of the rule. The name must be unique within the listener. The valid characters are a-z, 0-9, and hyphens (-). You can't use a hyphen as the first or last character, or immediately after another hyphen. * `priority` - (Required) The priority assigned to the rule. Each rule for a specific listener must have a unique priority. The lower the priority number the higher the priority. @@ -167,8 +167,8 @@ path match match (`match`) supports the following: This resource exports the following attributes in addition to the arguments above: -* `arn` - ARN of the target group. -* `rule_id` - Unique identifier for the target group. +* `arn` - The ARN for the listener rule. +* `rule_id` - Unique identifier for the listener rule. * `tags_all` - Map of tags assigned to the resource, including those inherited from the provider [`default_tags` configuration block](/docs/providers/aws/index.html#default_tags-configuration-block). ## Timeouts @@ -198,4 +198,4 @@ Using `terraform import`, import VPC Lattice Listener Rule using the `example_id % terraform import aws_vpclattice_listener_rule.example rft-8012925589 ``` - \ No newline at end of file + \ No newline at end of file diff --git a/website/docs/cdktf/python/r/vpclattice_target_group.html.markdown b/website/docs/cdktf/python/r/vpclattice_target_group.html.markdown index 653bf94f7b89..d71e933fa060 100644 --- a/website/docs/cdktf/python/r/vpclattice_target_group.html.markdown +++ b/website/docs/cdktf/python/r/vpclattice_target_group.html.markdown @@ -69,6 +69,35 @@ class MyConvertedCode(TerraformStack): protocol_version="HTTP1", unhealthy_threshold_count=3 ), + ip_address_type="IPV4", + port=443, + protocol="HTTPS", + protocol_version="HTTP1", + vpc_identifier=Token.as_string(aws_vpc_example.id) + ), + name="example", + type="IP" + ) +``` + +### ALB + +If the type is ALB, `health_check` block is not supported. + +```python +# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug +from constructs import Construct +from cdktf import Token, TerraformStack +# +# Provider bindings are generated by running `cdktf get`. +# See https://cdk.tf/provider-generation for more details. +# +from imports.aws.vpclattice_target_group import VpclatticeTargetGroup +class MyConvertedCode(TerraformStack): + def __init__(self, scope, name): + super().__init__(scope, name) + VpclatticeTargetGroup(self, "example", + config=VpclatticeTargetGroupConfigA( port=443, protocol="HTTPS", protocol_version="HTTP1", @@ -171,4 +200,4 @@ Using `terraform import`, import VPC Lattice Target Group using the `id`. 
For ex % terraform import aws_vpclattice_target_group.example tg-0c11d4dc16ed96bdb ``` - \ No newline at end of file + \ No newline at end of file diff --git a/website/docs/cdktf/python/r/vpclattice_target_group_attachment.html.markdown b/website/docs/cdktf/python/r/vpclattice_target_group_attachment.html.markdown index 12ea26b3dfd9..0ade38be1540 100644 --- a/website/docs/cdktf/python/r/vpclattice_target_group_attachment.html.markdown +++ b/website/docs/cdktf/python/r/vpclattice_target_group_attachment.html.markdown @@ -47,10 +47,10 @@ The following arguments are required: `target` supports the following: - `id` - (Required) The ID of the target. If the target type of the target group is INSTANCE, this is an instance ID. If the target type is IP , this is an IP address. If the target type is LAMBDA, this is the ARN of the Lambda function. If the target type is ALB, this is the ARN of the Application Load Balancer. -- `port` - (Optional) The port on which the target is listening. For HTTP, the default is 80. For HTTPS, the default is 443. +- `port` - (Optional) This port is used for routing traffic to the target, and defaults to the target group port. However, you can override the default and specify a custom port. ## Attribute Reference This resource exports no additional attributes. - \ No newline at end of file + \ No newline at end of file diff --git a/website/docs/cdktf/python/r/wafv2_rule_group.html.markdown b/website/docs/cdktf/python/r/wafv2_rule_group.html.markdown index dedc3f9d82d9..03170c058ae9 100644 --- a/website/docs/cdktf/python/r/wafv2_rule_group.html.markdown +++ b/website/docs/cdktf/python/r/wafv2_rule_group.html.markdown @@ -476,7 +476,8 @@ You can't nest a `rate_based_statement`, for example for use inside a `not_state The `rate_based_statement` block supports the following arguments: -* `aggregate_key_type` - (Optional) Setting that indicates how to aggregate the request counts. Valid values include: `CONSTANT`, `FORWARDED_IP` or `IP`. Default: `IP`. +* `aggregate_key_type` - (Optional) Setting that indicates how to aggregate the request counts. Valid values include: `CONSTANT`, `CUSTOM_KEYS`, `FORWARDED_IP` or `IP`. Default: `IP`. +* `custom_key` - (Optional) Aggregate the request counts using one or more web request components as the aggregate keys. See [`custom_key`](#custom_key-block) below for details. * `forwarded_ip_config` - (Optional) The configuration for inspecting IP addresses in an HTTP header that you specify, instead of using the IP address that's reported by the web request origin. If `aggregate_key_type` is set to `FORWARDED_IP`, this block is required. See [Forwarded IP Config](#forwarded-ip-config) below for details. * `limit` - (Required) The limit on requests per 5-minute period for a single originating IP address. * `scope_down_statement` - (Optional) An optional nested statement that narrows the scope of the rate-based statement to matching web requests. This can be any nestable statement, and you can nest statements at any level below this scope-down statement. See [Statement](#statement) above for details. If `aggregate_key_type` is set to `CONSTANT`, this block is required. @@ -666,6 +667,91 @@ This resource exports the following attributes in addition to the arguments abov * `arn` - The ARN of the WAF rule group. 
* `tags_all` - A map of tags assigned to the resource, including those inherited from the provider [`default_tags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block). +### `custom_key` Block + +Aggregate the request counts using one or more web request components as the aggregate keys. With this option, you must specify the aggregate keys in the `custom_keys` block. To aggregate on only the IP address or only the forwarded IP address, don't use custom keys. Instead, set the `aggregate_key_type` to `IP` or `FORWARDED_IP`. + +The `custom_key` block supports the following arguments: + +* `cookie` - (Optional) Use the value of a cookie in the request as an aggregate key. See [RateLimit `cookie`](#ratelimit-cookie-block) below for details. +* `forwarded_ip` - (Optional) Use the first IP address in an HTTP header as an aggregate key. See [`forwarded_ip`](#ratelimit-forwarded_ip-block) below for details. +* `http_method` - (Optional) Use the request's HTTP method as an aggregate key. See [RateLimit `http_method`](#ratelimit-http_method-block) below for details. +* `header` - (Optional) Use the value of a header in the request as an aggregate key. See [RateLimit `header`](#ratelimit-header-block) below for details. +* `ip` - (Optional) Use the request's originating IP address as an aggregate key. See [`RateLimit ip`](#ratelimit-ip-block) below for details. +* `label_namespace` - (Optional) Use the specified label namespace as an aggregate key. See [RateLimit `label_namespace`](#ratelimit-label_namespace-block) below for details. +* `query_argument` - (Optional) Use the specified query argument as an aggregate key. See [RateLimit `query_argument`](#ratelimit-query_argument-block) below for details. +* `query_string` - (Optional) Use the request's query string as an aggregate key. See [RateLimit `query_string`](#ratelimit-query_string-block) below for details. +* `uri_path` - (Optional) Use the request's URI path as an aggregate key. See [RateLimit `uri_path`](#ratelimit-uri_path-block) below for details. + +### RateLimit `cookie` Block + +Use the value of a cookie in the request as an aggregate key. Each distinct value in the cookie contributes to the aggregation instance. If you use a single cookie as your custom key, then each value fully defines an aggregation instance. + +The `cookie` block supports the following arguments: + +* `name`: The name of the cookie to use. +* `text_transformation`: Text transformations eliminate some of the unusual formatting that attackers use in web requests in an effort to bypass detection. They are used in rate-based rule statements, to transform request components before using them as custom aggregation keys. Atleast one transformation is required. See [Text Transformation](#text-transformation) above for details. + +### RateLimit `forwarded_ip` Block + +Use the first IP address in an HTTP header as an aggregate key. Each distinct forwarded IP address contributes to the aggregation instance. When you specify an IP or forwarded IP in the custom key settings, you must also specify at least one other key to use. You can aggregate on only the forwarded IP address by specifying `FORWARDED_IP` in your rate-based statement's `aggregate_key_type`. With this option, you must specify the header to use in the rate-based rule's [Forwarded IP Config](#forwarded-ip-config) block. + +The `forwarded_ip` block is configured as an empty block `{}`. 
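+
+As a rough illustration of how these custom keys are combined in the cdktf Python bindings, the hand-written sketch below (not `cdktf convert` output) rate limits on each distinct pair of cookie value and forwarded client IP. Dict literals stand in for the generated struct classes, and the rule group name, capacity, and limit are placeholder assumptions.
+
+```python
+# A hand-written sketch, not produced by 'cdktf convert'. Dict literals are
+# used in place of the generated struct classes; names and limits are placeholders.
+from constructs import Construct
+from cdktf import TerraformStack
+from imports.aws.wafv2_rule_group import Wafv2RuleGroup
+
+
+class RateLimitCustomKeysSketch(TerraformStack):
+    def __init__(self, scope: Construct, name: str):
+        super().__init__(scope, name)
+        Wafv2RuleGroup(self, "example",
+            name="rate-by-cookie-and-forwarded-ip",
+            scope="REGIONAL",
+            capacity=10,
+            rule=[{
+                "name": "rate-limit",
+                "priority": 1,
+                "action": {"block": {}},
+                "statement": {
+                    "rate_based_statement": {
+                        "limit": 1000,
+                        "aggregate_key_type": "CUSTOM_KEYS",
+                        # Header that carries the original client IP.
+                        "forwarded_ip_config": {
+                            "header_name": "X-Forwarded-For",
+                            "fallback_behavior": "MATCH"
+                        },
+                        # Each distinct (cookie value, forwarded IP) pair is
+                        # counted and rate limited separately.
+                        "custom_key": [{
+                            "cookie": {
+                                "name": "session-id",
+                                "text_transformation": [{"priority": 0, "type": "NONE"}]
+                            }
+                        }, {
+                            "forwarded_ip": {}
+                        }]
+                    }
+                },
+                "visibility_config": {
+                    "cloudwatch_metrics_enabled": False,
+                    "metric_name": "rate-limit",
+                    "sampled_requests_enabled": False
+                }
+            }],
+            visibility_config={
+                "cloudwatch_metrics_enabled": False,
+                "metric_name": "example",
+                "sampled_requests_enabled": False
+            }
+        )
+```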
+ +### RateLimit `http_method` Block + +Use the request's HTTP method as an aggregate key. Each distinct HTTP method contributes to the aggregation instance. If you use just the HTTP method as your custom key, then each method fully defines an aggregation instance. + +The `http_method` block is configured as an empty block `{}`. + +### RateLimit `header` Block + +Use the value of a header in the request as an aggregate key. Each distinct value in the header contributes to the aggregation instance. If you use a single header as your custom key, then each value fully defines an aggregation instance. + +The `header` block supports the following arguments: + +* `name`: The name of the header to use. +* `text_transformation`: Text transformations eliminate some of the unusual formatting that attackers use in web requests in an effort to bypass detection. They are used in rate-based rule statements to transform request components before using them as custom aggregation keys. At least one transformation is required. See [Text Transformation](#text-transformation) above for details. + +### RateLimit `ip` Block + +Use the request's originating IP address as an aggregate key. Each distinct IP address contributes to the aggregation instance. When you specify an IP or forwarded IP in the custom key settings, you must also specify at least one other key to use. You can aggregate on only the IP address by specifying `IP` in your rate-based statement's `aggregate_key_type`. + +The `ip` block is configured as an empty block `{}`. + +### RateLimit `label_namespace` Block + +Use the specified label namespace as an aggregate key. Each distinct fully qualified label name that has the specified label namespace contributes to the aggregation instance. If you use just one label namespace as your custom key, then each label name fully defines an aggregation instance. This uses only labels that have been added to the request by rules that are evaluated before this rate-based rule in the web ACL. For information about label namespaces and names, see [Label syntax and naming requirements](https://docs.aws.amazon.com/waf/latest/developerguide/waf-rule-label-requirements.html) in the WAF Developer Guide. + +The `label_namespace` block supports the following arguments: + +* `namespace`: The namespace to use for aggregation. + +### RateLimit `query_argument` Block + +Use the specified query argument as an aggregate key. Each distinct value for the named query argument contributes to the aggregation instance. If you use a single query argument as your custom key, then each value fully defines an aggregation instance. + +The `query_argument` block supports the following arguments: + +* `name`: The name of the query argument to use. +* `text_transformation`: Text transformations eliminate some of the unusual formatting that attackers use in web requests in an effort to bypass detection. They are used in rate-based rule statements to transform request components before using them as custom aggregation keys. At least one transformation is required. See [Text Transformation](#text-transformation) above for details. + +### RateLimit `query_string` Block + +Use the request's query string as an aggregate key. Each distinct string contributes to the aggregation instance. If you use just the query string as your custom key, then each string fully defines an aggregation instance.
+ +The `query_string` block supports the following arguments: + +* `text_transformation`: Text transformations eliminate some of the unusual formatting that attackers use in web requests in an effort to bypass detection. They are used in rate-based rule statements to transform request components before using them as custom aggregation keys. At least one transformation is required. See [Text Transformation](#text-transformation) above for details. + +### RateLimit `uri_path` Block + +Use the request's URI path as an aggregate key. Each distinct URI path contributes to the aggregation instance. If you use just the URI path as your custom key, then each URI path fully defines an aggregation instance. + +The `uri_path` block supports the following arguments: + +* `text_transformation`: Text transformations eliminate some of the unusual formatting that attackers use in web requests in an effort to bypass detection. They are used in rate-based rule statements to transform request components before using them as custom aggregation keys. At least one transformation is required. See [Text Transformation](#text-transformation) above for details. + ## Import In Terraform v1.5.0 and later, use an [`import` block](https://developer.hashicorp.com/terraform/language/import) to import WAFv2 Rule Group using `ID/name/scope`. For example: @@ -685,4 +771,4 @@ Using `terraform import`, import WAFv2 Rule Group using `ID/name/scope`. For exa % terraform import aws_wafv2_rule_group.example a1b2c3d4-d5f6-7777-8888-9999aaaabbbbcccc/example/REGIONAL ``` - \ No newline at end of file + \ No newline at end of file diff --git a/website/docs/cdktf/python/r/wafv2_web_acl.html.markdown b/website/docs/cdktf/python/r/wafv2_web_acl.html.markdown index 25098aba5b59..90bd6aede165 100644 --- a/website/docs/cdktf/python/r/wafv2_web_acl.html.markdown +++ b/website/docs/cdktf/python/r/wafv2_web_acl.html.markdown @@ -582,7 +582,8 @@ You can't nest a `rate_based_statement`, for example for use inside a `not_state The `rate_based_statement` block supports the following arguments: -* `aggregate_key_type` - (Optional) Setting that indicates how to aggregate the request counts. Valid values include: `CONSTANT`, `FORWARDED_IP` or `IP`. Default: `IP`. +* `aggregate_key_type` - (Optional) Setting that indicates how to aggregate the request counts. Valid values include: `CONSTANT`, `CUSTOM_KEYS`, `FORWARDED_IP`, or `IP`. Default: `IP`. +* `custom_key` - (Optional) Aggregate the request counts using one or more web request components as the aggregate keys. See [`custom_key`](#custom_key-block) below for details. * `forwarded_ip_config` - (Optional) Configuration for inspecting IP addresses in an HTTP header that you specify, instead of using the IP address that's reported by the web request origin. If `aggregate_key_type` is set to `FORWARDED_IP`, this block is required. See [`forwarded_ip_config`](#forwarded_ip_config-block) below for details. * `limit` - (Required) Limit on requests per 5-minute period for a single originating IP address. * `scope_down_statement` - (Optional) Optional nested statement that narrows the scope of the rate-based statement to matching web requests. This can be any nestable statement, and you can nest statements at any level below this scope-down statement. See [`statement`](#statement-block) above for details. If `aggregate_key_type` is set to `CONSTANT`, this block is required.
@@ -851,6 +852,91 @@ The `cloudfront` block supports the following arguments: * `default_size_inspection_limit` - (Required) Specifies the maximum size of the web request body component that an associated CloudFront distribution should send to AWS WAF for inspection. This applies to statements in the web ACL that inspect the body or JSON body. Valid values are `KB_16`, `KB_32`, `KB_48` and `KB_64`. +### `custom_key` Block + +Aggregate the request counts using one or more web request components as the aggregate keys. With this option, you must specify the aggregate keys in the `custom_key` block. To aggregate on only the IP address or only the forwarded IP address, don't use custom keys. Instead, set the `aggregate_key_type` to `IP` or `FORWARDED_IP`. + +The `custom_key` block supports the following arguments: + +* `cookie` - (Optional) Use the value of a cookie in the request as an aggregate key. See [RateLimit `cookie`](#ratelimit-cookie-block) below for details. +* `forwarded_ip` - (Optional) Use the first IP address in an HTTP header as an aggregate key. See [`forwarded_ip`](#ratelimit-forwarded_ip-block) below for details. +* `http_method` - (Optional) Use the request's HTTP method as an aggregate key. See [RateLimit `http_method`](#ratelimit-http_method-block) below for details. +* `header` - (Optional) Use the value of a header in the request as an aggregate key. See [RateLimit `header`](#ratelimit-header-block) below for details. +* `ip` - (Optional) Use the request's originating IP address as an aggregate key. See [RateLimit `ip`](#ratelimit-ip-block) below for details. +* `label_namespace` - (Optional) Use the specified label namespace as an aggregate key. See [RateLimit `label_namespace`](#ratelimit-label_namespace-block) below for details. +* `query_argument` - (Optional) Use the specified query argument as an aggregate key. See [RateLimit `query_argument`](#ratelimit-query_argument-block) below for details. +* `query_string` - (Optional) Use the request's query string as an aggregate key. See [RateLimit `query_string`](#ratelimit-query_string-block) below for details. +* `uri_path` - (Optional) Use the request's URI path as an aggregate key. See [RateLimit `uri_path`](#ratelimit-uri_path-block) below for details. + +### RateLimit `cookie` Block + +Use the value of a cookie in the request as an aggregate key. Each distinct value in the cookie contributes to the aggregation instance. If you use a single cookie as your custom key, then each value fully defines an aggregation instance. + +The `cookie` block supports the following arguments: + +* `name`: The name of the cookie to use. +* `text_transformation`: Text transformations eliminate some of the unusual formatting that attackers use in web requests in an effort to bypass detection. They are used in rate-based rule statements to transform request components before using them as custom aggregation keys. At least one transformation is required. See [`text_transformation`](#text_transformation-block) above for details. + +### RateLimit `forwarded_ip` Block + +Use the first IP address in an HTTP header as an aggregate key. Each distinct forwarded IP address contributes to the aggregation instance. When you specify an IP or forwarded IP in the custom key settings, you must also specify at least one other key to use. You can aggregate on only the forwarded IP address by specifying `FORWARDED_IP` in your rate-based statement's `aggregate_key_type`.
With this option, you must specify the header to use in the rate-based rule's [`forwarded_ip_config`](#forwarded_ip_config-block) block. + +The `forwarded_ip` block is configured as an empty block `{}`. + +### RateLimit `http_method` Block + +Use the request's HTTP method as an aggregate key. Each distinct HTTP method contributes to the aggregation instance. If you use just the HTTP method as your custom key, then each method fully defines an aggregation instance. + +The `http_method` block is configured as an empty block `{}`. + +### RateLimit `header` Block + +Use the value of a header in the request as an aggregate key. Each distinct value in the header contributes to the aggregation instance. If you use a single header as your custom key, then each value fully defines an aggregation instance. + +The `header` block supports the following arguments: + +* `name`: The name of the header to use. +* `text_transformation`: Text transformations eliminate some of the unusual formatting that attackers use in web requests in an effort to bypass detection. They are used in rate-based rule statements to transform request components before using them as custom aggregation keys. At least one transformation is required. See [`text_transformation`](#text_transformation-block) above for details. + +### RateLimit `ip` Block + +Use the request's originating IP address as an aggregate key. Each distinct IP address contributes to the aggregation instance. When you specify an IP or forwarded IP in the custom key settings, you must also specify at least one other key to use. You can aggregate on only the IP address by specifying `IP` in your rate-based statement's `aggregate_key_type`. + +The `ip` block is configured as an empty block `{}`. + +### RateLimit `label_namespace` Block + +Use the specified label namespace as an aggregate key. Each distinct fully qualified label name that has the specified label namespace contributes to the aggregation instance. If you use just one label namespace as your custom key, then each label name fully defines an aggregation instance. This uses only labels that have been added to the request by rules that are evaluated before this rate-based rule in the web ACL. For information about label namespaces and names, see [Label syntax and naming requirements](https://docs.aws.amazon.com/waf/latest/developerguide/waf-rule-label-requirements.html) in the WAF Developer Guide. + +The `label_namespace` block supports the following arguments: + +* `namespace`: The namespace to use for aggregation. + +### RateLimit `query_argument` Block + +Use the specified query argument as an aggregate key. Each distinct value for the named query argument contributes to the aggregation instance. If you use a single query argument as your custom key, then each value fully defines an aggregation instance. + +The `query_argument` block supports the following arguments: + +* `name`: The name of the query argument to use. +* `text_transformation`: Text transformations eliminate some of the unusual formatting that attackers use in web requests in an effort to bypass detection. They are used in rate-based rule statements to transform request components before using them as custom aggregation keys. At least one transformation is required. See [`text_transformation`](#text_transformation-block) above for details. + +### RateLimit `query_string` Block + +Use the request's query string as an aggregate key. Each distinct string contributes to the aggregation instance.
If you use just the query string as your custom key, then each string fully defines an aggregation instance. + +The `query_string` block supports the following arguments: + +* `text_transformation`: Text transformations eliminate some of the unusual formatting that attackers use in web requests in an effort to bypass detection. They are used in rate-based rule statements to transform request components before using them as custom aggregation keys. At least one transformation is required. See [`text_transformation`](#text_transformation-block) above for details. + +### RateLimit `uri_path` Block + +Use the request's URI path as an aggregate key. Each distinct URI path contributes to the aggregation instance. If you use just the URI path as your custom key, then each URI path fully defines an aggregation instance. + +The `uri_path` block supports the following arguments: + +* `text_transformation`: Text transformations eliminate some of the unusual formatting that attackers use in web requests in an effort to bypass detection. They are used in rate-based rule statements to transform request components before using them as custom aggregation keys. At least one transformation is required. See [`text_transformation`](#text_transformation-block) above for details. + ## Attribute Reference This resource exports the following attributes in addition to the arguments above: @@ -879,4 +965,4 @@ Using `terraform import`, import WAFv2 Web ACLs using `ID/Name/Scope`. For examp % terraform import aws_wafv2_web_acl.example a1b2c3d4-d5f6-7777-8888-9999aaaabbbbcccc/example/REGIONAL ``` - \ No newline at end of file + \ No newline at end of file diff --git a/website/docs/cdktf/python/r/wafv2_web_acl_association.html.markdown b/website/docs/cdktf/python/r/wafv2_web_acl_association.html.markdown index 09cf593906d3..490c58190d85 100644 --- a/website/docs/cdktf/python/r/wafv2_web_acl_association.html.markdown +++ b/website/docs/cdktf/python/r/wafv2_web_acl_association.html.markdown @@ -87,7 +87,7 @@ resource "aws_wafv2_web_acl_association" "example" { This resource supports the following arguments: -* `resource_arn` - (Required) The Amazon Resource Name (ARN) of the resource to associate with the web ACL. This must be an ARN of an Application Load Balancer, an Amazon API Gateway stage, or an Amazon Cognito User Pool. +* `resource_arn` - (Required) The Amazon Resource Name (ARN) of the resource to associate with the web ACL. This must be an ARN of an Application Load Balancer, an Amazon API Gateway stage, an Amazon Cognito User Pool, an Amazon AppSync GraphQL API, an Amazon App Runner service, or an Amazon Verified Access instance. * `web_acl_arn` - (Required) The Amazon Resource Name (ARN) of the Web ACL that you want to associate with the resource.
## Attribute Reference @@ -119,4 +119,4 @@ Using `terraform import`, import WAFv2 Web ACL Association using `WEB_ACL_ARN,RE % terraform import aws_wafv2_web_acl_association.example arn:aws:wafv2:...7ce849ea,arn:aws:apigateway:...ages/name ``` - \ No newline at end of file + \ No newline at end of file diff --git a/website/docs/cdktf/typescript/d/connect_instance_storage_config.html.markdown b/website/docs/cdktf/typescript/d/connect_instance_storage_config.html.markdown index e162c3c396ba..8cca3280da2b 100644 --- a/website/docs/cdktf/typescript/d/connect_instance_storage_config.html.markdown +++ b/website/docs/cdktf/typescript/d/connect_instance_storage_config.html.markdown @@ -43,7 +43,7 @@ This data source supports the following arguments: * `associationId` - (Required) The existing association identifier that uniquely identifies the resource type and storage config for the given instance ID. * `instanceId` - (Required) Reference to the hosting Amazon Connect Instance -* `resourceType` - (Required) A valid resource type. Valid Values: `chatTranscripts` | `callRecordings` | `scheduledReports` | `mediaStreams` | `contactTraceRecords` | `agentEvents` | `realTimeContactAnalysisSegments`. +* `resourceType` - (Required) A valid resource type. Valid Values: `agentEvents` | `attachments` | `callRecordings` | `chatTranscripts` | `contactEvaluations` | `contactTraceRecords` | `mediaStreams` | `realTimeContactAnalysisSegments` | `scheduledReports` | `screenRecordings`. ## Attribute Reference @@ -97,4 +97,4 @@ The `encryptionConfig` configuration block supports the following arguments: * `encryptionType` - The type of encryption. Valid Values: `kms`. * `keyId` - The full ARN of the encryption key. Be sure to provide the full ARN of the encryption key, not just the ID. - \ No newline at end of file + \ No newline at end of file diff --git a/website/docs/cdktf/typescript/d/fsx_ontap_file_system.html.markdown b/website/docs/cdktf/typescript/d/fsx_ontap_file_system.html.markdown new file mode 100644 index 000000000000..48fc8c492efb --- /dev/null +++ b/website/docs/cdktf/typescript/d/fsx_ontap_file_system.html.markdown @@ -0,0 +1,86 @@ +--- +subcategory: "FSx" +layout: "aws" +page_title: "AWS: aws_fsx_ontap_file_system" +description: |- + Retrieve information on FSx ONTAP File System. +--- + + + +# Data Source: aws_fsx_ontap_file_system + +Retrieve information on FSx ONTAP File System. + +## Example Usage + +### Basic Usage + +```typescript +// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug +import { Construct } from "constructs"; +import { TerraformStack } from "cdktf"; +/* + * Provider bindings are generated by running `cdktf get`. + * See https://cdk.tf/provider-generation for more details. + */ +import { DataAwsFsxOntapFileSystem } from "./.gen/providers/aws/data-aws-fsx-ontap-file-system"; +class MyConvertedCode extends TerraformStack { + constructor(scope: Construct, name: string) { + super(scope, name); + new DataAwsFsxOntapFileSystem(this, "example", { + id: "fs-12345678", + }); + } +} + +``` + +## Argument Reference + +The following arguments are required: + +* `id` - (Required) Identifier of the file system (e.g. `fs12345678`). + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `arn` - Amazon Resource Name of the file system. +* `automaticBackupRetentionDays` - The number of days to retain automatic backups. 
+* `dailyAutomaticBackupStartTime` - The preferred time (in `hh:mm` format) to take daily automatic backups, in the UTC time zone. +* `deploymentType` - The file system deployment type. +* `diskIopsConfiguration` - The SSD IOPS configuration for the Amazon FSx for NetApp ONTAP file system, specifying the number of provisioned IOPS and the provision mode. See [Disk IOPS](#disk-iops) below. +* `dnsName` - DNS name for the file system (e.g. `fs12345678CorpExampleCom`). +* `endpointIpAddressRange` - (Multi-AZ only) Specifies the IP address range in which the endpoints to access your file system exist. +* `endpoints` - The Management and Intercluster FileSystemEndpoints that are used to access data or to manage the file system using the NetApp ONTAP CLI, REST API, or NetApp SnapMirror. See [FileSystemEndpoints](#file-system-endpoints) below. +* `id` - Identifier of the file system (e.g. `fs12345678`). +* `kmsKeyId` - ARN for the KMS Key to encrypt the file system at rest. +* `networkInterfaceIds` - The IDs of the elastic network interfaces from which a specific file system is accessible. +* `ownerId` - AWS account identifier that created the file system. +* `preferredSubnetId` - Specifies the subnet in which you want the preferred file server to be located. +* `routeTableIds` - (Multi-AZ only) The VPC route tables in which your file system's endpoints exist. +* `storageCapacity` - The storage capacity of the file system in gibibytes (GiB). +* `storageType` - The type of storage the file system is using. If set to `ssd`, the file system uses solid state drive storage. If set to `hdd`, the file system uses hard disk drive storage. +* `subnetIds` - Specifies the IDs of the subnets that the file system is accessible from. For the MULTI_AZ_1 file system deployment type, there are two subnet IDs, one for the preferred file server and one for the standby file server. The preferred file server subnet is identified in the `preferredSubnetId` property. +* `tags` - The tags associated with the file system. +* `throughputCapacity` - The sustained throughput of an Amazon FSx file system in Megabytes per second (MBps). +* `vpcId` - The ID of the primary virtual private cloud (VPC) for the file system. +* `weeklyMaintenanceStartTime` - The preferred start time (in `d:hh:mm` format) to perform weekly maintenance, in the UTC time zone. + +### Disk IOPS + +* `iops` - The total number of SSD IOPS provisioned for the file system. +* `mode` - Specifies whether the file system is using the `automatic` setting of SSD IOPS of 3 IOPS per GB of storage capacity, or if it is using a `userProvisioned` value. + +### File System Endpoints + +* `intercluster` - A FileSystemEndpoint for managing your file system by setting up NetApp SnapMirror with other ONTAP systems. See [FileSystemEndpoint](#file-system-endpoint) below. +* `management` - A FileSystemEndpoint for managing your file system using the NetApp ONTAP CLI and NetApp ONTAP API. See [FileSystemEndpoint](#file-system-endpoint) below. + +### File System Endpoint + +* `dnsName` - The file system's DNS name. You can mount your file system using its DNS name. +* `ipAddresses` - IP addresses of the file system endpoint.
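As a rough illustration of consuming these attributes, the minimal cdktf (TypeScript) sketch below exposes two of them as stack outputs; the file system ID, stack class name, and output names are placeholders rather than part of the generated example above.

```typescript
// Sketch only: "fs-12345678" is a placeholder file system ID.
import { Construct } from "constructs";
import { TerraformOutput, TerraformStack } from "cdktf";
import { DataAwsFsxOntapFileSystem } from "./.gen/providers/aws/data-aws-fsx-ontap-file-system";

class FsxOntapFileSystemOutputs extends TerraformStack {
  constructor(scope: Construct, name: string) {
    super(scope, name);
    const fs = new DataAwsFsxOntapFileSystem(this, "example", {
      id: "fs-12345678",
    });
    // Surface the DNS name and storage capacity documented above.
    new TerraformOutput(this, "dns_name", { value: fs.dnsName });
    new TerraformOutput(this, "storage_capacity_gib", { value: fs.storageCapacity });
  }
}
```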
+ + \ No newline at end of file diff --git a/website/docs/cdktf/typescript/d/fsx_ontap_storage_virtual_machine.html.markdown b/website/docs/cdktf/typescript/d/fsx_ontap_storage_virtual_machine.html.markdown new file mode 100644 index 000000000000..8fe806f5d6b1 --- /dev/null +++ b/website/docs/cdktf/typescript/d/fsx_ontap_storage_virtual_machine.html.markdown @@ -0,0 +1,131 @@ +--- +subcategory: "FSx" +layout: "aws" +page_title: "AWS: aws_fsx_ontap_storage_virtual_machine" +description: |- + Retrieve information on FSx ONTAP Storage Virtual Machine (SVM). +--- + + + +# Data Source: aws_fsx_ontap_storage_virtual_machine + +Retrieve information on FSx ONTAP Storage Virtual Machine (SVM). + +## Example Usage + +### Basic Usage + +```typescript +// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug +import { Construct } from "constructs"; +import { TerraformStack } from "cdktf"; +/* + * Provider bindings are generated by running `cdktf get`. + * See https://cdk.tf/provider-generation for more details. + */ +import { DataAwsFsxOntapStorageVirtualMachine } from "./.gen/providers/aws/data-aws-fsx-ontap-storage-virtual-machine"; +class MyConvertedCode extends TerraformStack { + constructor(scope: Construct, name: string) { + super(scope, name); + new DataAwsFsxOntapStorageVirtualMachine(this, "example", { + id: "svm-12345678", + }); + } +} + +``` + +### Filter Example + +```typescript +// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug +import { Construct } from "constructs"; +import { TerraformStack } from "cdktf"; +/* + * Provider bindings are generated by running `cdktf get`. + * See https://cdk.tf/provider-generation for more details. + */ +import { DataAwsFsxOntapStorageVirtualMachine } from "./.gen/providers/aws/data-aws-fsx-ontap-storage-virtual-machine"; +class MyConvertedCode extends TerraformStack { + constructor(scope: Construct, name: string) { + super(scope, name); + new DataAwsFsxOntapStorageVirtualMachine(this, "example", { + filter: [ + { + name: "file-system-id", + values: ["fs-12345678"], + }, + ], + }); + } +} + +``` + +## Argument Reference + +The arguments of this data source act as filters for querying the available ONTAP Storage Virtual Machines in the current region. The given filters must match exactly one Storage Virtual Machine whose data will be exported as attributes. + +The following arguments are optional: + +* `filter` - (Optional) Configuration block. Detailed below. +* `id` - (Optional) Identifier of the storage virtual machine (e.g. `svm12345678`). + +### filter + +This block allows for complex filters. + +The following arguments are required: + +* `name` - (Required) Name of the field to filter by, as defined by [the underlying AWS API](https://docs.aws.amazon.com/fsx/latest/APIReference/API_StorageVirtualMachineFilter.html). +* `values` - (Required) Set of values that are accepted for the given field. An SVM will be selected if any one of the given values matches. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `arn` - Amazon Resource Name of the SVM. +* `activeDirectoryConfiguration` - The Microsoft Active Directory configuration to which the SVM is joined, if applicable. See [Active Directory Configuration](#active-directory-configuration) below. +* `creationTime` - The time that the SVM was created. 
+* `endpoints` - The endpoints that are used to access data or to manage the SVM using the NetApp ONTAP CLI, REST API, or NetApp CloudManager. They are the Iscsi, Management, Nfs, and Smb endpoints. See [SVM Endpoints](#svm-endpoints) below. +* `fileSystemId` - Identifier of the file system (e.g. `fs12345678`). +* `id` - The SVM's system generated unique ID. +* `lifecycleStatus` - The SVM's lifecycle status. +* `lifecycleTransitionReason` - Describes why the SVM lifecycle state changed. See [Lifecycle Transition Reason](#lifecycle-transition-reason) below. +* `name` - The name of the SVM, if provisioned. +* `subtype` - The SVM's subtype. +* `uuid` - The SVM's UUID. + +### Active Directory Configuration + +The following arguments are supported for `activeDirectoryConfiguration` configuration block: + +* `netbiosName` - The NetBIOS name of the AD computer object to which the SVM is joined. +* `selfManagedActiveDirectory` - The configuration of the self-managed Microsoft Active Directory (AD) directory to which the Windows File Server or ONTAP storage virtual machine (SVM) instance is joined. See [Self Managed Active Directory](#self-managed-active-directory) below. + +### Self Managed Active Directory + +* `dnsIps` - A list of up to three IP addresses of DNS servers or domain controllers in the self-managed AD directory. +* `domainName` - The fully qualified domain name of the self-managed AD directory. +* `fileSystemAdministratorsGroup` - The name of the domain group whose members have administrative privileges for the FSx file system. +* `organizationalUnitDistinguishedName` - The fully qualified distinguished name of the organizational unit within the self-managed AD directory to which the Windows File Server or ONTAP storage virtual machine (SVM) instance is joined. +* `username` - The user name for the service account on your self-managed AD domain that FSx uses to join to your AD domain. + +### Lifecycle Transition Reason + +* `message` - A detailed message. + +### SVM Endpoints + +* `iscsi` - An endpoint for connecting using the Internet Small Computer Systems Interface (iSCSI) protocol. See [SVM Endpoint](#svm-endpoint) below. +* `management` - An endpoint for managing SVMs using the NetApp ONTAP CLI, NetApp ONTAP API, or NetApp CloudManager. See [SVM Endpoint](#svm-endpoint) below. +* `nfs` - An endpoint for connecting using the Network File System (NFS) protocol. See [SVM Endpoint](#svm-endpoint) below. +* `smb` - An endpoint for connecting using the Server Message Block (SMB) protocol. See [SVM Endpoint](#svm-endpoint) below. + +### SVM Endpoint + +* `dnsName` - The file system's DNS name. You can mount your file system using its DNS name. +* `ipAddresses` - The SVM endpoint's IP addresses. + + \ No newline at end of file diff --git a/website/docs/cdktf/typescript/d/fsx_ontap_storage_virtual_machines.html.markdown b/website/docs/cdktf/typescript/d/fsx_ontap_storage_virtual_machines.html.markdown new file mode 100644 index 000000000000..d613cfbd1217 --- /dev/null +++ b/website/docs/cdktf/typescript/d/fsx_ontap_storage_virtual_machines.html.markdown @@ -0,0 +1,61 @@ +--- +subcategory: "FSx" +layout: "aws" +page_title: "AWS: aws_fsx_ontap_storage_virtual_machines" +description: |- + This resource can be useful for getting back a set of FSx ONTAP Storage Virtual Machine (SVM) IDs. +--- + + + +# Data Source: aws_fsx_ontap_storage_virtual_machines + +This resource can be useful for getting back a set of FSx ONTAP Storage Virtual Machine (SVM) IDs. 
+ +## Example Usage + +The following shows outputting all SVM IDs for a given FSx ONTAP File System. + +```typescript +// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug +import { Construct } from "constructs"; +import { TerraformStack } from "cdktf"; +/* + * Provider bindings are generated by running `cdktf get`. + * See https://cdk.tf/provider-generation for more details. + */ +import { DataAwsFsxOntapStorageVirtualMachines } from "./.gen/providers/aws/data-aws-fsx-ontap-storage-virtual-machines"; +class MyConvertedCode extends TerraformStack { + constructor(scope: Construct, name: string) { + super(scope, name); + new DataAwsFsxOntapStorageVirtualMachines(this, "example", { + filter: [ + { + name: "file-system-id", + values: ["fs-12345678"], + }, + ], + }); + } +} + +``` + +## Argument Reference + +* `filter` - (Optional) Configuration block. Detailed below. + +### filter + +This block allows for complex filters. + +The following arguments are required: + +* `name` - (Required) Name of the field to filter by, as defined by [the underlying AWS API](https://docs.aws.amazon.com/fsx/latest/APIReference/API_StorageVirtualMachineFilter.html). +* `values` - (Required) Set of values that are accepted for the given field. An SVM will be selected if any one of the given values matches. + +## Attributes Reference + +* `ids` - List of all SVM IDs found. + + \ No newline at end of file diff --git a/website/docs/cdktf/typescript/d/guardduty_detector.html.markdown b/website/docs/cdktf/typescript/d/guardduty_detector.html.markdown index 3f6d8fde4aa3..622c5b9eb561 100644 --- a/website/docs/cdktf/typescript/d/guardduty_detector.html.markdown +++ b/website/docs/cdktf/typescript/d/guardduty_detector.html.markdown @@ -40,8 +40,14 @@ class MyConvertedCode extends TerraformStack { This data source exports the following attributes in addition to the arguments above: +* `features` - Current configuration of the detector features. + * `additionalConfiguration` - Additional feature configuration. + * `name` - The name of the additional configuration. + * `status` - The status of the additional configuration. + * `name` - The name of the detector feature. + * `status` - The status of the detector feature. * `findingPublishingFrequency` - The frequency of notifications sent about subsequent finding occurrences. * `serviceRoleArn` - Service-linked role that grants GuardDuty access to the resources in the AWS account. * `status` - Current status of the detector. - \ No newline at end of file + \ No newline at end of file diff --git a/website/docs/cdktf/typescript/d/organizations_organizational_unit.html.markdown b/website/docs/cdktf/typescript/d/organizations_organizational_unit.html.markdown new file mode 100644 index 000000000000..e9da8ddecd07 --- /dev/null +++ b/website/docs/cdktf/typescript/d/organizations_organizational_unit.html.markdown @@ -0,0 +1,58 @@ +--- +subcategory: "Organizations" +layout: "aws" +page_title: "AWS: aws_organizations_organizational_unit" +description: |- + Terraform data source for getting an AWS Organizations Organizational Unit. +--- + + + +# Data Source: aws_organizations_organizational_unit + +Terraform data source for getting an AWS Organizations Organizational Unit. + +## Example Usage + +### Basic Usage + +```typescript +// DO NOT EDIT. 
Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug +import { Construct } from "constructs"; +import { Fn, Token, TerraformStack } from "cdktf"; +/* + * Provider bindings are generated by running `cdktf get`. + * See https://cdk.tf/provider-generation for more details. + */ +import { DataAwsOrganizationsOrganization } from "./.gen/providers/aws/data-aws-organizations-organization"; +import { DataAwsOrganizationsOrganizationalUnit } from "./.gen/providers/aws/data-aws-organizations-organizational-unit"; +class MyConvertedCode extends TerraformStack { + constructor(scope: Construct, name: string) { + super(scope, name); + const org = new DataAwsOrganizationsOrganization(this, "org", {}); + new DataAwsOrganizationsOrganizationalUnit(this, "ou", { + name: "dev", + parentId: Token.asString(Fn.lookupNested(org.roots, ["0", "id"])), + }); + } +} + +``` + +## Argument Reference + +The following arguments are required: + +* `parentId` - (Required) Parent ID of the organizational unit. + +* `name` - (Required) Name of the organizational unit + +## Attribute Reference + +This data source exports the following attributes in addition to the arguments above: + +* `arn` - ARN of the organizational unit + +* `id` - ID of the organizational unit + + \ No newline at end of file diff --git a/website/docs/cdktf/typescript/d/s3_bucket_object.html.markdown b/website/docs/cdktf/typescript/d/s3_bucket_object.html.markdown index 4ce7c1b3487a..3ef3abc9dc39 100644 --- a/website/docs/cdktf/typescript/d/s3_bucket_object.html.markdown +++ b/website/docs/cdktf/typescript/d/s3_bucket_object.html.markdown @@ -113,7 +113,7 @@ This data source exports the following attributes in addition to the arguments a * `expiration` - If the object expiration is configured (see [object lifecycle management](http://docs.aws.amazon.com/AmazonS3/latest/dev/object-lifecycle-mgmt.html)), the field includes this header. It includes the expiry-date and rule-id key value pairs providing object expiration information. The value of the rule-id is URL encoded. * `expires` - Date and time at which the object is no longer cacheable. * `lastModified` - Last modified date of the object in RFC1123 format (e.g., `Mon, 02 Jan 2006 15:04:05 MST`) -* `metadata` - Map of metadata stored with the object in S3 +* `metadata` - Map of metadata stored with the object in S3. [Keys](https://developer.hashicorp.com/terraform/language/expressions/types#maps-objects) are always returned in lowercase. * `objectLockLegalHoldStatus` - Indicates whether this object has an active [legal hold](https://docs.aws.amazon.com/AmazonS3/latest/dev/object-lock-overview.html#object-lock-legal-holds). This field is only returned if you have permission to view an object's legal hold status. * `objectLockMode` - Object lock [retention mode](https://docs.aws.amazon.com/AmazonS3/latest/dev/object-lock-overview.html#object-lock-retention-modes) currently in place for this object. * `objectLockRetainUntilDate` - The date and time when this object's object lock will expire. @@ -126,4 +126,4 @@ This data source exports the following attributes in addition to the arguments a -> **Note:** Terraform ignores all leading `/`s in the object's `key` and treats multiple `/`s in the rest of the object's `key` as a single `/`, so values of `/indexHtml` and `indexHtml` correspond to the same S3 object as do `first//second///third//` and `first/second/third/`. 
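To make the lowercase-key behavior of `metadata` concrete, here is a small sketch that looks up a single entry with `Fn.lookup`; the bucket, key, and the `owner` metadata entry are invented for illustration and are not part of the documentation above.

```typescript
// Sketch only: bucket, key, and the "owner" metadata entry are placeholders.
import { Construct } from "constructs";
import { Fn, TerraformOutput, TerraformStack } from "cdktf";
import { DataAwsS3BucketObject } from "./.gen/providers/aws/data-aws-s3-bucket-object";

class S3ObjectMetadataStack extends TerraformStack {
  constructor(scope: Construct, name: string) {
    super(scope, name);
    const example = new DataAwsS3BucketObject(this, "example", {
      bucket: "example-bucket",
      key: "reports/latest.csv",
    });
    // Metadata keys come back lowercased, so look up "owner" even if the
    // object was uploaded with an "Owner" metadata entry.
    new TerraformOutput(this, "object_owner", {
      value: Fn.lookup(example.metadata, "owner", "unknown"),
    });
  }
}
```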
- \ No newline at end of file + \ No newline at end of file diff --git a/website/docs/cdktf/typescript/d/s3_object.html.markdown b/website/docs/cdktf/typescript/d/s3_object.html.markdown index d573e31af58d..63ad9e7b1eeb 100644 --- a/website/docs/cdktf/typescript/d/s3_object.html.markdown +++ b/website/docs/cdktf/typescript/d/s3_object.html.markdown @@ -112,7 +112,7 @@ This data source exports the following attributes in addition to the arguments a * `expiration` - If the object expiration is configured (see [object lifecycle management](http://docs.aws.amazon.com/AmazonS3/latest/dev/object-lifecycle-mgmt.html)), the field includes this header. It includes the expiry-date and rule-id key value pairs providing object expiration information. The value of the rule-id is URL encoded. * `expires` - Date and time at which the object is no longer cacheable. * `lastModified` - Last modified date of the object in RFC1123 format (e.g., `Mon, 02 Jan 2006 15:04:05 MST`) -* `metadata` - Map of metadata stored with the object in S3 +* `metadata` - Map of metadata stored with the object in S3. [Keys](https://developer.hashicorp.com/terraform/language/expressions/types#maps-objects) are always returned in lowercase. * `objectLockLegalHoldStatus` - Indicates whether this object has an active [legal hold](https://docs.aws.amazon.com/AmazonS3/latest/dev/object-lock-overview.html#object-lock-legal-holds). This field is only returned if you have permission to view an object's legal hold status. * `objectLockMode` - Object lock [retention mode](https://docs.aws.amazon.com/AmazonS3/latest/dev/object-lock-overview.html#object-lock-retention-modes) currently in place for this object. * `objectLockRetainUntilDate` - The date and time when this object's object lock will expire. @@ -125,4 +125,4 @@ This data source exports the following attributes in addition to the arguments a -> **Note:** Terraform ignores all leading `/`s in the object's `key` and treats multiple `/`s in the rest of the object's `key` as a single `/`, so values of `/indexHtml` and `indexHtml` correspond to the same S3 object as do `first//second///third//` and `first/second/third/`. - \ No newline at end of file + \ No newline at end of file diff --git a/website/docs/cdktf/typescript/d/vpclattice_service.html.markdown b/website/docs/cdktf/typescript/d/vpclattice_service.html.markdown index 4c7772f43211..8eb5349116d9 100644 --- a/website/docs/cdktf/typescript/d/vpclattice_service.html.markdown +++ b/website/docs/cdktf/typescript/d/vpclattice_service.html.markdown @@ -42,7 +42,7 @@ The arguments of this data source act as filters for querying the available VPC The given filters must match exactly one VPC lattice service whose data will be exported as attributes. * `name` - (Optional) Service name. -* `serviceIdentifier` - (Optional) ID or Amazon Resource Name (ARN) of the service network. +* `serviceIdentifier` - (Optional) ID or Amazon Resource Name (ARN) of the service. ## Attribute Reference @@ -57,4 +57,4 @@ This data source exports the following attributes in addition to the arguments a * `status` - Status of the service. * `tags` - List of tags associated with the service. 
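For example, a brief sketch of looking a service up by `name` rather than by `serviceIdentifier` and exporting its ARN; the service name and output name used here are placeholders.

```typescript
// Sketch only: "example-service" is a placeholder VPC Lattice service name.
import { Construct } from "constructs";
import { TerraformOutput, TerraformStack } from "cdktf";
import { DataAwsVpclatticeService } from "./.gen/providers/aws/data-aws-vpclattice-service";

class VpcLatticeServiceLookup extends TerraformStack {
  constructor(scope: Construct, name: string) {
    super(scope, name);
    const svc = new DataAwsVpclatticeService(this, "example", {
      name: "example-service",
    });
    new TerraformOutput(this, "service_arn", { value: svc.arn });
  }
}
```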
- \ No newline at end of file + \ No newline at end of file diff --git a/website/docs/cdktf/typescript/d/vpclattice_service_network.html.markdown b/website/docs/cdktf/typescript/d/vpclattice_service_network.html.markdown index d7a92cd24781..3c3fc22f1644 100644 --- a/website/docs/cdktf/typescript/d/vpclattice_service_network.html.markdown +++ b/website/docs/cdktf/typescript/d/vpclattice_service_network.html.markdown @@ -29,7 +29,7 @@ class MyConvertedCode extends TerraformStack { constructor(scope: Construct, name: string) { super(scope, name); new DataAwsVpclatticeServiceNetwork(this, "example", { - serviceNetworkIdentifier: "", + serviceNetworkIdentifier: "snsa-01112223334445556", }); } } @@ -40,7 +40,7 @@ class MyConvertedCode extends TerraformStack { The following arguments are required: -* `serviceNetworkIdentifier` - (Required) Identifier of the network service. +* `serviceNetworkIdentifier` - (Required) Identifier of the service network. ## Attribute Reference @@ -55,4 +55,4 @@ This data source exports the following attributes in addition to the arguments a * `numberOfAssociatedServices` - Number of services associated with this service network. * `numberOfAssociatedVpcs` - Number of VPCs associated with this service network. - \ No newline at end of file + \ No newline at end of file diff --git a/website/docs/cdktf/typescript/index.html.markdown b/website/docs/cdktf/typescript/index.html.markdown index 1c7a51654d50..8e2dc8da73c3 100644 --- a/website/docs/cdktf/typescript/index.html.markdown +++ b/website/docs/cdktf/typescript/index.html.markdown @@ -13,7 +13,7 @@ Use the Amazon Web Services (AWS) provider to interact with the many resources supported by AWS. You must configure the provider with the proper credentials before you can use it. -Use the navigation to the left to read about the available resources. There are currently 1251 resources and 514 data sources available in the provider. +Use the navigation to the left to read about the available resources. There are currently 1255 resources and 518 data sources available in the provider. To learn the basics of Terraform using this provider, follow the hands-on [get started tutorials](https://learn.hashicorp.com/tutorials/terraform/infrastructure-as-code?in=terraform/aws-get-started&utm_source=WEBSITE&utm_medium=WEB_IO&utm_offer=ARTICLE_PAGE&utm_content=DOCS). Interact with AWS services, @@ -829,4 +829,4 @@ Approaches differ per authentication providers: There used to be no better way to get account ID out of the API when using the federated account until `sts:getCallerIdentity` was introduced. - \ No newline at end of file + \ No newline at end of file diff --git a/website/docs/cdktf/typescript/r/ami.html.markdown b/website/docs/cdktf/typescript/r/ami.html.markdown index 19e8a66bde7f..a696d274a0e9 100644 --- a/website/docs/cdktf/typescript/r/ami.html.markdown +++ b/website/docs/cdktf/typescript/r/ami.html.markdown @@ -126,7 +126,6 @@ This resource exports the following attributes in addition to the arguments abov * `imageOwnerAlias` - AWS account alias (for example, amazon, self) or the AWS account ID of the AMI owner. * `imageType` - Type of image. * `hypervisor` - Hypervisor type of the image. -* `ownerId` - AWS account ID of the image owner. * `platform` - This value is set to windows for Windows AMIs; otherwise, it is blank. * `public` - Whether the image has public launch permissions. 
* `tagsAll` - Map of tags assigned to the resource, including those inherited from the provider [`defaultTags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block). @@ -161,4 +160,4 @@ Using `terraform import`, import `awsAmi` using the ID of the AMI. For example: % terraform import aws_ami.example ami-12345678 ``` - \ No newline at end of file + \ No newline at end of file diff --git a/website/docs/cdktf/typescript/r/batch_job_queue.html.markdown b/website/docs/cdktf/typescript/r/batch_job_queue.html.markdown index f0e573855e3c..7dbd2503660e 100644 --- a/website/docs/cdktf/typescript/r/batch_job_queue.html.markdown +++ b/website/docs/cdktf/typescript/r/batch_job_queue.html.markdown @@ -86,9 +86,8 @@ class MyConvertedCode extends TerraformStack { This resource supports the following arguments: * `name` - (Required) Specifies the name of the job queue. -* `computeEnvironments` - (Required) Specifies the set of compute environments - mapped to a job queue and their order. The position of the compute environments - in the list will dictate the order. +* `computeEnvironments` - (Required) List of compute environment ARNs mapped to a job queue. + The position of the compute environments in the list will dictate the order. * `priority` - (Required) The priority of the job queue. Job queues with a higher priority are evaluated first when associated with the same compute environment. * `schedulingPolicyArn` - (Optional) The ARN of the fair share scheduling policy. If this parameter is specified, the job queue uses a fair share scheduling policy. If this parameter isn't specified, the job queue uses a first in, first out (FIFO) scheduling policy. After a job queue is created, you can replace but can't remove the fair share scheduling policy. @@ -132,4 +131,4 @@ Using `terraform import`, import Batch Job Queue using the `arn`. For example: % terraform import aws_batch_job_queue.test_queue arn:aws:batch:us-east-1:123456789012:job-queue/sample ``` - \ No newline at end of file + \ No newline at end of file diff --git a/website/docs/cdktf/typescript/r/cleanrooms_configured_table.html.markdown b/website/docs/cdktf/typescript/r/cleanrooms_configured_table.html.markdown new file mode 100644 index 000000000000..56381b4bd473 --- /dev/null +++ b/website/docs/cdktf/typescript/r/cleanrooms_configured_table.html.markdown @@ -0,0 +1,101 @@ +--- +subcategory: "Clean Rooms" +layout: "aws" +page_title: "AWS: aws_cleanrooms_configured_table" +description: |- + Provides a Clean Rooms Configured Table. +--- + + + +# Resource: aws_cleanrooms_configured_table + +Provides a AWS Clean Rooms configured table. Configured tables are used to represent references to existing tables in the AWS Glue Data Catalog. + +## Example Usage + +### Configured table with tags + +```typescript +// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug +import { Construct } from "constructs"; +import { TerraformStack } from "cdktf"; +/* + * Provider bindings are generated by running `cdktf get`. + * See https://cdk.tf/provider-generation for more details. 
+ */ +import { CleanroomsConfiguredTable } from "./.gen/providers/aws/cleanrooms-configured-table"; +class MyConvertedCode extends TerraformStack { + constructor(scope: Construct, name: string) { + super(scope, name); + new CleanroomsConfiguredTable(this, "test_configured_table", { + allowedColumns: ["column1", "column2", "column3"], + analysisMethod: "DIRECT_QUERY", + description: "I made this table with terraform!", + name: "terraform-example-table", + tableReference: { + databaseName: "example_database", + tableName: "example_table", + }, + tags: { + Project: "Terraform", + }, + }); + } +} + +``` + +## Argument Reference + +This resource supports the following arguments: + +* `name` - (Required) - The name of the configured table. +* `description` - (Optional) - A description for the configured table. +* `analysisMethod` - (Required) - The analysis method for the configured table. The only valid value is currently `directQuery`. +* `allowedColumns` - (Required - Forces new resource) - The columns of the references table which will be included in the configured table. +* `tableReference` - (Required - Forces new resource) - A reference to the AWS Glue table which will be used to create the configured table. +* `tableReferenceDatabaseName` - (Required - Forces new resource) - The name of the AWS Glue database which contains the table. +* `tableReferenceTableName` - (Required - Forces new resource) - The name of the AWS Glue table which will be used to create the configured table. +* `tags` - (Optional) - Key value pairs which tag the configured table. + +## Attribute Reference + +This resource exports the following attributes in addition to the arguments above: + +* `arn` - The ARN of the configured table. +* `id` - The ID of the configured table. +* `createTime` - The date and time the configured table was created. +* `updateTime` - The date and time the configured table was last updated. + +## Timeouts + +[Configuration options](https://developer.hashicorp.com/terraform/language/resources/syntax#operation-timeouts): + +- `create` - (Default `1M`) +- `update` - (Default `1M`) +- `delete` - (Default `1M`) + +## Import + +In Terraform v1.5.0 and later, use an [`import` block](https://developer.hashicorp.com/terraform/language/import) to import `awsCleanroomsConfiguredTable` using the `id`. For example: + +```typescript +// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug +import { Construct } from "constructs"; +import { TerraformStack } from "cdktf"; +class MyConvertedCode extends TerraformStack { + constructor(scope: Construct, name: string) { + super(scope, name); + } +} + +``` + +Using `terraform import`, import `awsCleanroomsConfiguredTable` using the `id`. 
For example: + +```console +% terraform import aws_cleanrooms_configured_table.table 1234abcd-12ab-34cd-56ef-1234567890ab +``` + + \ No newline at end of file diff --git a/website/docs/cdktf/typescript/r/cloud9_environment_ec2.html.markdown b/website/docs/cdktf/typescript/r/cloud9_environment_ec2.html.markdown index 7cee10d21e2b..1ff2467f2f03 100644 --- a/website/docs/cdktf/typescript/r/cloud9_environment_ec2.html.markdown +++ b/website/docs/cdktf/typescript/r/cloud9_environment_ec2.html.markdown @@ -136,9 +136,12 @@ This resource supports the following arguments: * `amazonlinux1X8664` * `amazonlinux2X8664` * `ubuntu1804X8664` + * `ubuntu2204X8664` * `resolve:ssm:/aws/service/cloud9/amis/amazonlinux1X8664` * `resolve:ssm:/aws/service/cloud9/amis/amazonlinux2X8664` * `resolve:ssm:/aws/service/cloud9/amis/ubuntu1804X8664` + * `resolve:ssm:/aws/service/cloud9/amis/ubuntu2204X8664` + * `ownerArn` - (Optional) The ARN of the environment owner. This can be ARN of any AWS IAM principal. Defaults to the environment's creator. * `subnetId` - (Optional) The ID of the subnet in Amazon VPC that AWS Cloud9 will use to communicate with the Amazon EC2 instance. * `tags` - (Optional) Key-value map of resource tags. If configured with a provider [`defaultTags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block) present, tags with matching keys will overwrite those defined at the provider-level. @@ -152,4 +155,4 @@ This resource exports the following attributes in addition to the arguments abov * `tagsAll` - A map of tags assigned to the resource, including those inherited from the provider [`defaultTags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block). * `type` - The type of the environment (e.g., `ssh` or `ec2`) - \ No newline at end of file + \ No newline at end of file diff --git a/website/docs/cdktf/typescript/r/cloudfront_distribution.html.markdown b/website/docs/cdktf/typescript/r/cloudfront_distribution.html.markdown index 055f8f378848..dc28b3813bc8 100644 --- a/website/docs/cdktf/typescript/r/cloudfront_distribution.html.markdown +++ b/website/docs/cdktf/typescript/r/cloudfront_distribution.html.markdown @@ -493,8 +493,8 @@ argument should not be specified. * `originAccessControlId` (Optional) - Unique identifier of a [CloudFront origin access control][8] for this origin. * `originId` (Required) - Unique identifier for the origin. * `originPath` (Optional) - Optional element that causes CloudFront to request your content from a directory in your Amazon S3 bucket or your custom origin. -* `originShield` - The [CloudFront Origin Shield](#origin-shield-arguments) configuration information. Using Origin Shield can help reduce the load on your origin. For more information, see [Using Origin Shield](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/origin-shield.html) in the Amazon CloudFront Developer Guide. -* `s3OriginConfig` - The [CloudFront S3 origin](#s3-origin-config-arguments) configuration information. If a custom origin is required, use `customOriginConfig` instead. +* `originShield` - (Optional) [CloudFront Origin Shield](#origin-shield-arguments) configuration information. Using Origin Shield can help reduce the load on your origin. For more information, see [Using Origin Shield](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/origin-shield.html) in the Amazon CloudFront Developer Guide. 
+* `s3OriginConfig` - (Optional) [CloudFront S3 origin](#s3-origin-config-arguments) configuration information. If a custom origin is required, use `customOriginConfig` instead. ##### Custom Origin Config Arguments @@ -508,7 +508,7 @@ argument should not be specified. ##### Origin Shield Arguments * `enabled` (Required) - Whether Origin Shield is enabled. -* `originShieldRegion` (Required) - AWS Region for Origin Shield. To specify a region, use the region code, not the region name. For example, specify the US East (Ohio) region as us-east-2. +* `originShieldRegion` (Optional) - AWS Region for Origin Shield. To specify a region, use the region code, not the region name. For example, specify the US East (Ohio) region as `usEast2`. ##### S3 Origin Config Arguments @@ -601,4 +601,4 @@ Using `terraform import`, import CloudFront Distributions using the `id`. For ex % terraform import aws_cloudfront_distribution.distribution E74FTE3EXAMPLE ``` - \ No newline at end of file + \ No newline at end of file diff --git a/website/docs/cdktf/typescript/r/connect_instance_storage_config.html.markdown b/website/docs/cdktf/typescript/r/connect_instance_storage_config.html.markdown index 4f07ef354b96..e4bcd21a7434 100644 --- a/website/docs/cdktf/typescript/r/connect_instance_storage_config.html.markdown +++ b/website/docs/cdktf/typescript/r/connect_instance_storage_config.html.markdown @@ -178,7 +178,7 @@ class MyConvertedCode extends TerraformStack { This resource supports the following arguments: * `instanceId` - (Required) Specifies the identifier of the hosting Amazon Connect Instance. -* `resourceType` - (Required) A valid resource type. Valid Values: `agentEvents` | `attachments` | `callRecordings` | `chatTranscripts` | `contactEvaluations` | `contactTraceRecords` | `mediaStreams` | `realTimeContactAnalysisSegments` | `scheduledReports`. +* `resourceType` - (Required) A valid resource type. Valid Values: `agentEvents` | `attachments` | `callRecordings` | `chatTranscripts` | `contactEvaluations` | `contactTraceRecords` | `mediaStreams` | `realTimeContactAnalysisSegments` | `scheduledReports` | `screenRecordings`. * `storageConfig` - (Required) Specifies the storage configuration options for the Connect Instance. [Documented below](#storage_config). ### `storageConfig` @@ -255,4 +255,4 @@ Using `terraform import`, import Amazon Connect Instance Storage Configs using t % terraform import aws_connect_instance_storage_config.example f1288a1f-6193-445a-b47e-af739b2:c1d4e5f6-1b3c-1b3c-1b3c-c1d4e5f6c1d4e5:CHAT_TRANSCRIPTS ``` - \ No newline at end of file + \ No newline at end of file diff --git a/website/docs/cdktf/typescript/r/db_option_group.html.markdown b/website/docs/cdktf/typescript/r/db_option_group.html.markdown index eca8494fc244..bd69e9b810f1 100644 --- a/website/docs/cdktf/typescript/r/db_option_group.html.markdown +++ b/website/docs/cdktf/typescript/r/db_option_group.html.markdown @@ -79,35 +79,35 @@ More information about this can be found [here](https://docs.aws.amazon.com/Amaz This resource supports the following arguments: -* `name` - (Optional, Forces new resource) The name of the option group. If omitted, Terraform will assign a random, unique name. Must be lowercase, to match as it is stored in AWS. +* `name` - (Optional, Forces new resource) Name of the option group. If omitted, Terraform will assign a random, unique name. Must be lowercase, to match as it is stored in AWS. * `namePrefix` - (Optional, Forces new resource) Creates a unique name beginning with the specified prefix. 
Conflicts with `name`. Must be lowercase, to match as it is stored in AWS. -* `optionGroupDescription` - (Optional) The description of the option group. Defaults to "Managed by Terraform". +* `optionGroupDescription` - (Optional) Description of the option group. Defaults to "Managed by Terraform". * `engineName` - (Required) Specifies the name of the engine that this option group should be associated with. * `majorEngineVersion` - (Required) Specifies the major version of the engine that this option group should be associated with. -* `option` - (Optional) A list of Options to apply. -* `tags` - (Optional) A map of tags to assign to the resource. If configured with a provider [`defaultTags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block) present, tags with matching keys will overwrite those defined at the provider-level. +* `option` - (Optional) List of options to apply. +* `tags` - (Optional) Map of tags to assign to the resource. If configured with a provider [`defaultTags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block) present, tags with matching keys will overwrite those defined at the provider-level. -Option blocks support the following: +`option` blocks support the following: -* `optionName` - (Required) The Name of the Option (e.g., MEMCACHED). -* `optionSettings` - (Optional) A list of option settings to apply. -* `port` - (Optional) The Port number when connecting to the Option (e.g., 11211). -* `version` - (Optional) The version of the option (e.g., 13.1.0.0). -* `dbSecurityGroupMemberships` - (Optional) A list of DB Security Groups for which the option is enabled. -* `vpcSecurityGroupMemberships` - (Optional) A list of VPC Security Groups for which the option is enabled. +* `optionName` - (Required) Name of the option (e.g., MEMCACHED). +* `optionSettings` - (Optional) List of option settings to apply. +* `port` - (Optional) Port number when connecting to the option (e.g., 11211). Leaving out or removing `port` from your configuration does not remove or clear a port from the option in AWS. AWS may assign a default port. Not including `port` in your configuration means that the AWS provider will ignore a previously set value, a value set by AWS, and any port changes. +* `version` - (Optional) Version of the option (e.g., 13.1.0.0). Leaving out or removing `version` from your configuration does not remove or clear a version from the option in AWS. AWS may assign a default version. Not including `version` in your configuration means that the AWS provider will ignore a previously set value, a value set by AWS, and any version changes. +* `dbSecurityGroupMemberships` - (Optional) List of DB Security Groups for which the option is enabled. +* `vpcSecurityGroupMemberships` - (Optional) List of VPC Security Groups for which the option is enabled. -Option Settings blocks support the following: +`optionSettings` blocks support the following: -* `name` - (Optional) The Name of the setting. -* `value` - (Optional) The Value of the setting. +* `name` - (Optional) Name of the setting. +* `value` - (Optional) Value of the setting. ## Attribute Reference This resource exports the following attributes in addition to the arguments above: -* `id` - The db option group name. -* `arn` - The ARN of the db option group. 
-* `tagsAll` - A map of tags assigned to the resource, including those inherited from the provider [`defaultTags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block). +* `id` - DB option group name. +* `arn` - ARN of the DB option group. +* `tagsAll` - Map of tags assigned to the resource, including those inherited from the provider [`defaultTags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block). ## Timeouts @@ -117,7 +117,7 @@ This resource exports the following attributes in addition to the arguments abov ## Import -In Terraform v1.5.0 and later, use an [`import` block](https://developer.hashicorp.com/terraform/language/import) to import DB Option groups using the `name`. For example: +In Terraform v1.5.0 and later, use an [`import` block](https://developer.hashicorp.com/terraform/language/import) to import DB option groups using the `name`. For example: ```typescript // DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug @@ -131,10 +131,10 @@ class MyConvertedCode extends TerraformStack { ``` -Using `terraform import`, import DB Option groups using the `name`. For example: +Using `terraform import`, import DB option groups using the `name`. For example: ```console % terraform import aws_db_option_group.example mysql-option-group ``` - \ No newline at end of file + \ No newline at end of file diff --git a/website/docs/cdktf/typescript/r/dms_replication_config.html.markdown b/website/docs/cdktf/typescript/r/dms_replication_config.html.markdown new file mode 100644 index 000000000000..07b3b3ce0edf --- /dev/null +++ b/website/docs/cdktf/typescript/r/dms_replication_config.html.markdown @@ -0,0 +1,123 @@ +--- +subcategory: "DMS (Database Migration)" +layout: "aws" +page_title: "AWS: aws_dms_replication_config" +description: |- + Provides a DMS Serverless replication config resource. +--- + + + +# Resource: aws_dms_replication_config + +Provides a DMS Serverless replication config resource. + +~> **NOTE:** Changing most arguments will stop the replication if it is running. You can set `startReplication` to resume the replication afterwards. + +## Example Usage + +```typescript +// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug +import { Construct } from "constructs"; +import { Token, TerraformStack } from "cdktf"; +/* + * Provider bindings are generated by running `cdktf get`. + * See https://cdk.tf/provider-generation for more details. 
+ */
+import { DmsReplicationConfig } from "./.gen/providers/aws/dms-replication-config";
+class MyConvertedCode extends TerraformStack {
+  constructor(scope: Construct, name: string) {
+    super(scope, name);
+    new DmsReplicationConfig(this, "name", {
+      computeConfig: {
+        maxCapacityUnits: Token.asNumber("64"),
+        minCapacityUnits: Token.asNumber("2"),
+        preferredMaintenanceWindow: "sun:23:45-mon:00:30",
+        replicationSubnetGroupId: defaultVar.replicationSubnetGroupId,
+      },
+      replicationConfigIdentifier: "test-dms-serverless-replication-tf",
+      replicationType: "cdc",
+      resourceIdentifier: "test-dms-serverless-replication-tf",
+      sourceEndpointArn: source.endpointArn,
+      startReplication: true,
+      tableMappings:
+        ' {\n "rules":[{"rule-type":"selection","rule-id":"1","rule-name":"1","object-locator":{"schema-name":"%%","table-name":"%%"},"rule-action":"include"}]\n }\n\n',
+      targetEndpointArn: target.endpointArn,
+    });
+  }
+}
+
+```
+
+## Argument Reference
+
+This resource supports the following arguments:
+
+* `computeConfig` - (Required) Configuration block for provisioning a DMS Serverless replication.
+* `startReplication` - (Optional) Whether to run or stop the serverless replication. Default is `false`.
+* `replicationConfigIdentifier` - (Required) Unique identifier that you want to use to create the config.
+* `replicationType` - (Required) The migration type. Can be one of `full-load | cdc | full-load-and-cdc`.
+* `sourceEndpointArn` - (Required) The Amazon Resource Name (ARN) string that uniquely identifies the source endpoint.
+* `tableMappings` - (Required) An escaped JSON string that contains the table mappings. For information on table mapping see [Using Table Mapping with an AWS Database Migration Service Task to Select and Filter Data](http://docs.aws.amazon.com/dms/latest/userguide/CHAP_Tasks.CustomizingTasks.TableMapping.html)
+* `targetEndpointArn` - (Required) The Amazon Resource Name (ARN) string that uniquely identifies the target endpoint.
+* `replicationSettings` - (Optional) An escaped JSON string that is used to provision this replication configuration. For example, [Change processing tuning settings](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Tasks.CustomizingTasks.TaskSettings.ChangeProcessingTuning.html)
+* `resourceIdentifier` - (Optional) Unique value or name that you set for a given resource that can be used to construct an Amazon Resource Name (ARN) for that resource. For more information, see [Fine-grained access control using resource names and tags](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Security.html#CHAP_Security.FineGrainedAccess)
+* `supplementalSettings` - (Optional) JSON settings for specifying supplemental data. For more information see [Specifying supplemental data for task settings](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Tasks.TaskData.html)
+* `tags` - (Optional) A map of tags to assign to the resource. If configured with a provider [`defaultTags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block) present, tags with matching keys will overwrite those defined at the provider-level.
+
+The `computeConfig` block supports the following:
+
+* `availabilityZone` - (Optional) The Availability Zone where the DMS Serverless replication using this configuration will run. The default value is a random Availability Zone chosen by DMS.
+* `dnsNameServers` - (Optional) A list of custom DNS name servers supported for the DMS Serverless replication to access your source or target database.
+* `kmsKeyId` - (Optional) A Key Management Service (KMS) key Amazon Resource Name (ARN) that is used to encrypt the data during DMS Serverless replication. If you don't specify a value for the KmsKeyId parameter, DMS uses your default encryption key.
+* `maxCapacityUnits` - (Required) Specifies the maximum value of the DMS capacity units (DCUs) for which a given DMS Serverless replication can be provisioned. A single DCU is 2GB of RAM, with 2 DCUs as the minimum value allowed. The list of valid DCU values includes 2, 4, 8, 16, 32, 64, 128, 192, 256, and 384.
+* `minCapacityUnits` - (Optional) Specifies the minimum value of the DMS capacity units (DCUs) for which a given DMS Serverless replication can be provisioned. The list of valid DCU values includes 2, 4, 8, 16, 32, 64, 128, 192, 256, and 384. If this value isn't set, DMS scans the current activity of available source tables to identify an optimum setting for this parameter.
+* `multiAz` - (Optional) Specifies whether the replication instance is a Multi-AZ deployment. You cannot set the `availabilityZone` parameter if the `multiAz` parameter is set to `true`.
+* `preferredMaintenanceWindow` - (Optional) The weekly time range during which system maintenance can occur, in Universal Coordinated Time (UTC).
+
+  - Default: A 30-minute window selected at random from an 8-hour block of time per region, occurring on a random day of the week.
+  - Format: `ddd:hh24:mi-ddd:hh24:mi`
+  - Valid Days: `mon, tue, wed, thu, fri, sat, sun`
+  - Constraints: Minimum 30-minute window.
+
+* `replicationSubnetGroupId` - (Optional) Specifies a subnet group identifier to associate with the DMS Serverless replication.
+* `vpcSecurityGroupIds` - (Optional) Specifies the virtual private cloud (VPC) security group to use with the DMS Serverless replication. The VPC security group must work with the VPC containing the replication.
+
+## Attribute Reference
+
+This resource exports the following attributes in addition to the arguments above:
+
+* `arn` - The Amazon Resource Name (ARN) for the serverless replication config.
+* `tagsAll` - A map of tags assigned to the resource, including those inherited from the provider [`defaultTags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block).
+
+## Timeouts
+
+[Configuration options](https://developer.hashicorp.com/terraform/language/resources/syntax#operation-timeouts):
+
+* `create` - (Default `60M`)
+* `update` - (Default `60M`)
+* `delete` - (Default `60M`)
+
+## Import
+
+In Terraform v1.5.0 and later, use an [`import` block](https://developer.hashicorp.com/terraform/language/import) to import replication configs using the `arn`. For example:
+
+```typescript
+// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug
+import { Construct } from "constructs";
+import { TerraformStack } from "cdktf";
+class MyConvertedCode extends TerraformStack {
+  constructor(scope: Construct, name: string) {
+    super(scope, name);
+  }
+}
+
+```
+
+Using `terraform import`, import a replication config using the `arn`.
For example: + +```console +% terraform import aws_dms_replication_config.example arn:aws:dms:us-east-1:123456789012:replication-config:UX6OL6MHMMJKFFOXE3H7LLJCMEKBDUG4ZV7DRSI +``` + + \ No newline at end of file diff --git a/website/docs/cdktf/typescript/r/ec2_network_insights_path.html.markdown b/website/docs/cdktf/typescript/r/ec2_network_insights_path.html.markdown index b648b17c2d0b..acca58fae170 100644 --- a/website/docs/cdktf/typescript/r/ec2_network_insights_path.html.markdown +++ b/website/docs/cdktf/typescript/r/ec2_network_insights_path.html.markdown @@ -41,7 +41,7 @@ class MyConvertedCode extends TerraformStack { The following arguments are required: * `source` - (Required) ID or ARN of the resource which is the source of the path. Can be an Instance, Internet Gateway, Network Interface, Transit Gateway, VPC Endpoint, VPC Peering Connection or VPN Gateway. If the resource is in another account, you must specify an ARN. -* `destination` - (Required) ID or ARN of the resource which is the source of the path. Can be an Instance, Internet Gateway, Network Interface, Transit Gateway, VPC Endpoint, VPC Peering Connection or VPN Gateway. If the resource is in another account, you must specify an ARN. +* `destination` - (Required) ID or ARN of the resource which is the destination of the path. Can be an Instance, Internet Gateway, Network Interface, Transit Gateway, VPC Endpoint, VPC Peering Connection or VPN Gateway. If the resource is in another account, you must specify an ARN. * `protocol` - (Required) Protocol to use for analysis. Valid options are `tcp` or `udp`. The following arguments are optional: @@ -83,4 +83,4 @@ Using `terraform import`, import Network Insights Paths using the `id`. For exam % terraform import aws_ec2_network_insights_path.test nip-00edfba169923aefd ``` - \ No newline at end of file + \ No newline at end of file diff --git a/website/docs/cdktf/typescript/r/fsx_ontap_file_system.html.markdown b/website/docs/cdktf/typescript/r/fsx_ontap_file_system.html.markdown index 642508cc3ed9..35c4294d9cd5 100644 --- a/website/docs/cdktf/typescript/r/fsx_ontap_file_system.html.markdown +++ b/website/docs/cdktf/typescript/r/fsx_ontap_file_system.html.markdown @@ -52,7 +52,7 @@ This resource supports the following arguments: * `kmsKeyId` - (Optional) ARN for the KMS Key to encrypt the file system at rest, Defaults to an AWS managed KMS Key. * `automaticBackupRetentionDays` - (Optional) The number of days to retain automatic backups. Setting this to 0 disables automatic backups. You can retain automatic backups for a maximum of 90 days. * `dailyAutomaticBackupStartTime` - (Optional) A recurring daily time, in the format HH:MM. HH is the zero-padded hour of the day (0-23), and MM is the zero-padded minute of the hour. For example, 05:00 specifies 5 AM daily. Requires `automaticBackupRetentionDays` to be set. -* `diskIopsConfiguration` - (Optional) The SSD IOPS configuration for the Amazon FSx for NetApp ONTAP file system. See [Disk Iops Configuration](#disk-iops-configuration) Below. +* `diskIopsConfiguration` - (Optional) The SSD IOPS configuration for the Amazon FSx for NetApp ONTAP file system. See [Disk Iops Configuration](#disk-iops-configuration) below. * `endpointIpAddressRange` - (Optional) Specifies the IP address range in which the endpoints to access your file system will be created. By default, Amazon FSx selects an unused IP address range for you from the 198.19.* range. * `storageType` - (Optional) - The filesystem storage type. defaults to `ssd`. 
* `fsxAdminPassword` - (Optional) The ONTAP administrative password for the fsxadmin user that you can use to administer your file system using the ONTAP CLI and REST API. @@ -153,4 +153,4 @@ class MyConvertedCode extends TerraformStack { ``` - \ No newline at end of file + \ No newline at end of file diff --git a/website/docs/cdktf/typescript/r/fsx_ontap_volume.html.markdown b/website/docs/cdktf/typescript/r/fsx_ontap_volume.html.markdown index dd32cb449872..79cf1f5d5429 100644 --- a/website/docs/cdktf/typescript/r/fsx_ontap_volume.html.markdown +++ b/website/docs/cdktf/typescript/r/fsx_ontap_volume.html.markdown @@ -82,18 +82,46 @@ class MyConvertedCode extends TerraformStack { This resource supports the following arguments: * `name` - (Required) The name of the Volume. You can use a maximum of 203 alphanumeric characters, plus the underscore (_) special character. +* `bypassSnaplockEnterpriseRetention` - (Optional) Setting this to `true` allows a SnapLock administrator to delete an FSx for ONTAP SnapLock Enterprise volume with unexpired write once, read many (WORM) files. This configuration must be applied separately before attempting to delete the resource to have the desired behavior. Defaults to `false`. +* `copyTagsToBackups` - (Optional) A boolean flag indicating whether tags for the volume should be copied to backups. This value defaults to `false`. * `junctionPath` - (Optional) Specifies the location in the storage virtual machine's namespace where the volume is mounted. The junction_path must have a leading forward slash, such as `/vol3` * `ontapVolumeType` - (Optional) Specifies the type of volume, valid values are `rw`, `dp`. Default value is `rw`. These can be set by the ONTAP CLI or API. This setting is used as part of migration and replication [Migrating to Amazon FSx for NetApp ONTAP](https://docs.aws.amazon.com/fsx/latest/ONTAPGuide/migrating-fsx-ontap.html) * `securityStyle` - (Optional) Specifies the volume security style, Valid values are `unix`, `ntfs`, and `mixed`. * `sizeInMegabytes` - (Required) Specifies the size of the volume, in megabytes (MB), that you are creating. * `skipFinalBackup` - (Optional) When enabled, will skip the default final backup taken when the volume is deleted. This configuration must be applied separately before attempting to delete the resource to have the desired behavior. Defaults to `false`. +* `snaplockConfiguration` - (Optional) The SnapLock configuration for an FSx for ONTAP volume. See [SnapLock Configuration](#snaplock-configuration) below. +* `snapshotPolicy` - (Optional) Specifies the snapshot policy for the volume. See [snapshot policies](https://docs.aws.amazon.com/fsx/latest/ONTAPGuide/snapshots-ontap.html#snapshot-policies) in the Amazon FSx ONTAP User Guide * `storageEfficiencyEnabled` - (Optional) Set to true to enable deduplication, compression, and compaction storage efficiency features on the volume. * `storageVirtualMachineId` - (Required) Specifies the storage virtual machine in which to create the volume. * `tags` - (Optional) A map of tags to assign to the volume. If configured with a provider [`defaultTags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block) present, tags with matching keys will overwrite those defined at the provider-level. +* `tieringPolicy` - (Optional) The data tiering policy for an FSx for ONTAP volume. See [Tiering Policy](#tiering-policy) below. 
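+
+~> **Note:** The following example is an illustrative sketch only and was not generated by `cdktf convert`; the storage virtual machine ID, SnapLock type, and retention values are placeholder assumptions showing how the `snaplockConfiguration` argument and the blocks documented below might fit together.
+
+```typescript
+import { Construct } from "constructs";
+import { TerraformStack } from "cdktf";
+import { FsxOntapVolume } from "./.gen/providers/aws/fsx-ontap-volume";
+class SnaplockVolumeSketch extends TerraformStack {
+  constructor(scope: Construct, name: string) {
+    super(scope, name);
+    new FsxOntapVolume(this, "snaplock_example", {
+      name: "snaplock_example",
+      junctionPath: "/snaplock_example",
+      sizeInMegabytes: 1024,
+      storageEfficiencyEnabled: true,
+      // Placeholder SVM ID; reference an aws_fsx_ontap_storage_virtual_machine resource in real configurations.
+      storageVirtualMachineId: "svm-12345678",
+      snaplockConfiguration: {
+        snaplockType: "ENTERPRISE",
+        retentionPeriod: {
+          // Placeholder retention values for illustration only.
+          defaultRetention: {
+            type: "DAYS",
+            value: 30,
+          },
+          maximumRetention: {
+            type: "YEARS",
+            value: 5,
+          },
+          minimumRetention: {
+            type: "DAYS",
+            value: 1,
+          },
+        },
+      },
+    });
+  }
+}
+
+```
+
+Because `snaplockType` cannot be changed after it is set, decide between `compliance` and `enterprise` before the volume is created.
+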
-### tiering_policy +### SnapLock Configuration -The `tieringPolicy` configuration block supports the following arguments: +* `auditLogVolume` - (Optional) Enables or disables the audit log volume for an FSx for ONTAP SnapLock volume. The default value is `false`. +* `autocommitPeriod` - (Optional) The configuration object for setting the autocommit period of files in an FSx for ONTAP SnapLock volume. See [Autocommit Period](#autocommit-period) below. +* `privilegedDelete` - (Optional) Enables, disables, or permanently disables privileged delete on an FSx for ONTAP SnapLock Enterprise volume. Valid values: `disabled`, `enabled`, `permanentlyDisabled`. The default value is `disabled`. +* `retentionPeriod` - (Optional) The retention period of an FSx for ONTAP SnapLock volume. See [SnapLock Retention Period](#snaplock-retention-period) below. +* `snaplockType` - (Required) Specifies the retention mode of an FSx for ONTAP SnapLock volume. After it is set, it can't be changed. Valid values: `compliance`, `enterprise`. +* `volumeAppendModeEnabled` - (Optional) Enables or disables volume-append mode on an FSx for ONTAP SnapLock volume. The default value is `false`. + +### Autocommit Period + +* `type` - (Required) The type of time for the autocommit period of a file in an FSx for ONTAP SnapLock volume. Setting this value to `none` disables autocommit. Valid values: `minutes`, `hours`, `days`, `months`, `years`, `none`. +* `value` - (Optional) The amount of time for the autocommit period of a file in an FSx for ONTAP SnapLock volume. + +### SnapLock Retention Period + +* `defaultRetention` - (Required) The retention period assigned to a write once, read many (WORM) file by default if an explicit retention period is not set for an FSx for ONTAP SnapLock volume. The default retention period must be greater than or equal to the minimum retention period and less than or equal to the maximum retention period. See [Retention Period](#retention-period) below. +* `maximumRetention` - (Required) The longest retention period that can be assigned to a WORM file on an FSx for ONTAP SnapLock volume. See [Retention Period](#retention-period) below. +* `minimumRetention` - (Required) The shortest retention period that can be assigned to a WORM file on an FSx for ONTAP SnapLock volume. See [Retention Period](#retention-period) below. + +### Retention Period + +* `type` - (Required) The type of time for the retention period of an FSx for ONTAP SnapLock volume. Set it to one of the valid types. If you set it to `infinite`, the files are retained forever. If you set it to `unspecified`, the files are retained until you set an explicit retention period. Valid values: `seconds`, `minutes`, `hours`, `days`, `months`, `years`, `infinite`, `unspecified`. +* `value` - (Optional) The amount of time for the autocommit period of a file in an FSx for ONTAP SnapLock volume. + +### Tiering Policy * `name` - (Required) Specifies the tiering policy for the ONTAP volume for moving data to the capacity pool storage. Valid values are `snapshotOnly`, `auto`, `all`, `none`. Default value is `snapshotOnly`. * `coolingPeriod` - (Optional) Specifies the number of days that user data in a volume must remain inactive before it is considered "cold" and moved to the capacity pool. Used with `auto` and `snapshotOnly` tiering policies only. Valid values are whole numbers between 2 and 183. Default values are 31 days for `auto` and 2 days for `snapshotOnly`. @@ -140,4 +168,4 @@ Using `terraform import`, import FSx ONTAP volume using the `id`. 
For example: % terraform import aws_fsx_ontap_volume.example fsvol-12345678abcdef123 ``` - \ No newline at end of file + \ No newline at end of file diff --git a/website/docs/cdktf/typescript/r/fsx_openzfs_volume.html.markdown b/website/docs/cdktf/typescript/r/fsx_openzfs_volume.html.markdown index 9b8074154192..209716e629e7 100644 --- a/website/docs/cdktf/typescript/r/fsx_openzfs_volume.html.markdown +++ b/website/docs/cdktf/typescript/r/fsx_openzfs_volume.html.markdown @@ -45,6 +45,7 @@ This resource supports the following arguments: * `originSnapshot` - (Optional) The ARN of the source snapshot to create the volume from. * `copyTagsToSnapshots` - (Optional) A boolean flag indicating whether tags for the file system should be copied to snapshots. The default value is false. * `dataCompressionType` - (Optional) Method used to compress the data on the volume. Valid values are `none` or `zstd`. Child volumes that don't specify compression option will inherit from parent volume. This option on file system applies to the root volume. +* `deleteVolumeOptions` - (Optional) Whether to delete all child volumes and snapshots. Valid values: `deleteChildVolumesAndSnapshots`. This configuration must be applied separately before attempting to delete the resource to have the desired behavior.. * `nfsExports` - (Optional) NFS export configuration for the root volume. Exactly 1 item. See [NFS Exports](#nfs-exports) Below. * `readOnly` - (Optional) specifies whether the volume is read-only. Default is false. * `recordSizeKib` - (Optional) The record size of an OpenZFS volume, in kibibytes (KiB). Valid values are `4`, `8`, `16`, `32`, `64`, `128`, `256`, `512`, or `1024` KiB. The default is `128` KiB. @@ -106,4 +107,4 @@ Using `terraform import`, import FSx Volumes using the `id`. For example: % terraform import aws_fsx_openzfs_volume.example fsvol-543ab12b1ca672f33 ``` - \ No newline at end of file + \ No newline at end of file diff --git a/website/docs/cdktf/typescript/r/guardduty_detector.html.markdown b/website/docs/cdktf/typescript/r/guardduty_detector.html.markdown index 8ead8744e9c3..97f9f29e5daa 100644 --- a/website/docs/cdktf/typescript/r/guardduty_detector.html.markdown +++ b/website/docs/cdktf/typescript/r/guardduty_detector.html.markdown @@ -3,14 +3,14 @@ subcategory: "GuardDuty" layout: "aws" page_title: "AWS: aws_guardduty_detector" description: |- - Provides a resource to manage a GuardDuty detector + Provides a resource to manage an Amazon GuardDuty detector --- # Resource: aws_guardduty_detector -Provides a resource to manage a GuardDuty detector. +Provides a resource to manage an Amazon GuardDuty detector. ~> **NOTE:** Deleting this resource is equivalent to "disabling" GuardDuty for an AWS region, which removes all existing findings. You can set the `enable` attribute to `false` to instead "suspend" monitoring and feedback reporting while keeping existing data. See the [Suspending or Disabling Amazon GuardDuty documentation](https://docs.aws.amazon.com/guardduty/latest/ug/guardduty_suspend-disable.html) for more information. @@ -59,7 +59,7 @@ This resource supports the following arguments: * `enable` - (Optional) Enable monitoring and feedback reporting. Setting to `false` is equivalent to "suspending" GuardDuty. Defaults to `true`. * `findingPublishingFrequency` - (Optional) Specifies the frequency of notifications sent for subsequent finding occurrences. 
If the detector is a GuardDuty member account, the value is determined by the GuardDuty primary account and cannot be modified, otherwise defaults to `sixHours`. For standalone and GuardDuty primary accounts, it must be configured in Terraform to enable drift detection. Valid values for standalone and primary accounts: `fifteenMinutes`, `oneHour`, `sixHours`. See [AWS Documentation](https://docs.aws.amazon.com/guardduty/latest/ug/guardduty_findings_cloudwatch.html#guardduty_findings_cloudwatch_notification_frequency) for more information. -* `datasources` - (Optional) Describes which data sources will be enabled for the detector. See [Data Sources](#data-sources) below for more details. +* `datasources` - (Optional) Describes which data sources will be enabled for the detector. See [Data Sources](#data-sources) below for more details. [Deprecated](https://docs.aws.amazon.com/guardduty/latest/ug/guardduty-feature-object-api-changes-march2023.html) in favor of [`awsGuarddutyDetectorFeature` resources](guardduty_detector_feature.html). * `tags` - (Optional) Key-value map of resource tags. If configured with a provider [`defaultTags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block) present, tags with matching keys will overwrite those defined at the provider-level. ### Data Sources @@ -73,6 +73,8 @@ The `datasources` block supports the following: * `malwareProtection` - (Optional) Configures [Malware Protection](https://docs.aws.amazon.com/guardduty/latest/ug/malware-protection.html). See [Malware Protection](#malware-protection), [Scan EC2 instance with findings](#scan-ec2-instance-with-findings) and [EBS volumes](#ebs-volumes) below for more details. +The `datasources` block is deprecated since March 2023. Use the `features` block instead and [map each `datasources` block to the corresponding `features` block](https://docs.aws.amazon.com/guardduty/latest/ug/guardduty-feature-object-api-changes-march2023.html#guardduty-feature-enablement-datasource-relation). + ### S3 Logs The `s3Logs` block supports the following: @@ -148,4 +150,4 @@ Using `terraform import`, import GuardDuty detectors using the detector ID. For The ID of the detector can be retrieved via the [AWS CLI](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/guardduty/list-detectors.html) using `aws guardduty list-detectors`. - \ No newline at end of file + \ No newline at end of file diff --git a/website/docs/cdktf/typescript/r/guardduty_detector_feature.html.markdown b/website/docs/cdktf/typescript/r/guardduty_detector_feature.html.markdown new file mode 100644 index 000000000000..05334852acc7 --- /dev/null +++ b/website/docs/cdktf/typescript/r/guardduty_detector_feature.html.markdown @@ -0,0 +1,71 @@ +--- +subcategory: "GuardDuty" +layout: "aws" +page_title: "AWS: aws_guardduty_detector_feature" +description: |- + Provides a resource to manage an Amazon GuardDuty detector feature +--- + + + +# Resource: aws_guardduty_detector_feature + +Provides a resource to manage a single Amazon GuardDuty [detector feature](https://docs.aws.amazon.com/guardduty/latest/ug/guardduty-features-activation-model.html#guardduty-features). + +~> **NOTE:** Deleting this resource does not disable the detector feature, the resource in simply removed from state instead. + +## Example Usage + +```typescript +// DO NOT EDIT. 
Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug +import { Construct } from "constructs"; +import { TerraformStack } from "cdktf"; +/* + * Provider bindings are generated by running `cdktf get`. + * See https://cdk.tf/provider-generation for more details. + */ +import { GuarddutyDetectorFeature } from "./.gen/providers/aws/"; +import { GuarddutyDetector } from "./.gen/providers/aws/guardduty-detector"; +class MyConvertedCode extends TerraformStack { + constructor(scope: Construct, name: string) { + super(scope, name); + const example = new GuarddutyDetector(this, "example", { + enable: true, + }); + new GuarddutyDetectorFeature(this, "eks_runtime_monitoring", { + additional_configuration: [ + { + name: "EKS_ADDON_MANAGEMENT", + status: "ENABLED", + }, + ], + detector_id: example.id, + name: "EKS_RUNTIME_MONITORING", + status: "ENABLED", + }); + } +} + +``` + +## Argument Reference + +This resource supports the following arguments: + +* `detectorId` - (Required) Amazon GuardDuty detector ID. +* `name` - (Required) The name of the detector feature. Valid values: `s3DataEvents`, `eksAuditLogs`, `ebsMalwareProtection`, `rdsLoginEvents`, `eksRuntimeMonitoring`, `lambdaNetworkLogs`. +* `status` - (Required) The status of the detector feature. Valid values: `enabled`, `disabled`. +* `additionalConfiguration` - (Optional) Additional feature configuration block. See [below](#additional-configuration). + +### Additional Configuration + +The `additionalConfiguration` block supports the following: + +* `name` - (Required) The name of the additional configuration. Valid values: `eksAddonManagement`. +* `status` - (Required) The status of the additional configuration. Valid values: `enabled`, `disabled`. + +## Attribute Reference + +This resource exports no additional attributes. + + \ No newline at end of file diff --git a/website/docs/cdktf/typescript/r/lexv2models_bot.html.markdown b/website/docs/cdktf/typescript/r/lexv2models_bot.html.markdown new file mode 100644 index 000000000000..ead633a4a3f9 --- /dev/null +++ b/website/docs/cdktf/typescript/r/lexv2models_bot.html.markdown @@ -0,0 +1,111 @@ +--- +subcategory: "Lex V2 Models" +layout: "aws" +page_title: "AWS: aws_lexv2models_bot" +description: |- + Terraform resource for managing an AWS Lex V2 Models Bot. +--- + + + +# Resource: aws_lexv2models_bot + +Terraform resource for managing an AWS Lex V2 Models Bot. + +## Example Usage + +### Basic Usage + +```typescript +// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug +import { Construct } from "constructs"; +import { Token, TerraformStack } from "cdktf"; +/* + * Provider bindings are generated by running `cdktf get`. + * See https://cdk.tf/provider-generation for more details. + */ +import { Lexv2ModelsBot } from "./.gen/providers/aws/lexv2-models-bot"; +class MyConvertedCode extends TerraformStack { + constructor(scope: Construct, name: string) { + super(scope, name); + new Lexv2ModelsBot(this, "example", { + dataPrivacy: [ + { + childDirected: Token.asBoolean("boolean"), + }, + ], + idleSessionTtlInSeconds: 10, + name: "example", + roleArn: "bot_example_arn", + }); + } +} + +``` + +## Argument Reference + +The following arguments are required: + +* `name` - Name of the bot. The bot name must be unique in the account that creates the bot. Type String. Length Constraints: Minimum length of 1. Maximum length of 100. +* `dataPrivacy` - Provides information on additional privacy protections Amazon Lex should use with the bot's data. 
See [`dataPrivacy`](#data-privacy) +* `idleSessionTtlInSeconds` - Time, in seconds, that Amazon Lex should keep information about a user's conversation with the bot. You can specify between 60 (1 minute) and 86,400 (24 hours) seconds. +* `roleArn` - ARN of an IAM role that has permission to access the bot. + +The following arguments are optional: + +* `members` - List of bot members in a network to be created. See [`botMembers`](#bot-members). +* `botTags` - List of tags to add to the bot. You can only add tags when you create a bot. +* `botType` - Type of a bot to create. +* `description` - Description of the bot. It appears in lists to help you identify a particular bot. +* `testBotAliasTags` - List of tags to add to the test alias for a bot. You can only add tags when you create a bot. + +## Attribute Reference + +This resource exports the following attributes in addition to the arguments above: + +* `id` - Unique identifier for a particular bot. + +### Data Privacy + +* `childDirected` (Required) - For each Amazon Lex bot created with the Amazon Lex Model Building Service, you must specify whether your use of Amazon Lex is related to a website, program, or other application that is directed or targeted, in whole or in part, to children under age 13 and subject to the Children's Online Privacy Protection Act (COPPA) by specifying true or false in the childDirected field. + +### Bot Members + +* `aliasId` (Required) - Alias ID of a bot that is a member of this network of bots. +* `aliasName` (Required) - Alias name of a bot that is a member of this network of bots. +* `id` (Required) - Unique ID of a bot that is a member of this network of bots. +* `name` (Required) - Unique name of a bot that is a member of this network of bots. +* `version` (Required) - Version of a bot that is a member of this network of bots. + +## Timeouts + +[Configuration options](https://developer.hashicorp.com/terraform/language/resources/syntax#operation-timeouts): + +* `create` - (Default `30M`) +* `update` - (Default `30M`) +* `delete` - (Default `30M`) + +## Import + +In Terraform v1.5.0 and later, use an [`import` block](https://developer.hashicorp.com/terraform/language/import) to import Lex V2 Models Bot using the `exampleIdArg`. For example: + +```typescript +// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug +import { Construct } from "constructs"; +import { TerraformStack } from "cdktf"; +class MyConvertedCode extends TerraformStack { + constructor(scope: Construct, name: string) { + super(scope, name); + } +} + +``` + +Using `terraform import`, import Lex V2 Models Bot using the `exampleIdArg`. For example: + +```console +% terraform import aws_lexv2models_bot.example bot-id-12345678 +``` + + \ No newline at end of file diff --git a/website/docs/cdktf/typescript/r/lightsail_bucket.html.markdown b/website/docs/cdktf/typescript/r/lightsail_bucket.html.markdown index 50f49d21bd0a..ed0ec1a59e22 100644 --- a/website/docs/cdktf/typescript/r/lightsail_bucket.html.markdown +++ b/website/docs/cdktf/typescript/r/lightsail_bucket.html.markdown @@ -41,6 +41,7 @@ This resource supports the following arguments: * `name` - (Required) The name for the bucket. * `bundleId` - (Required) - The ID of the bundle to use for the bucket. A bucket bundle specifies the monthly cost, storage space, and data transfer quota for a bucket. 
Use the [get-bucket-bundles](https://docs.aws.amazon.com/cli/latest/reference/lightsail/get-bucket-bundles.html) cli command to get a list of bundle IDs that you can specify. +* `forceDelete` - (Optional) - Force Delete non-empty buckets using `terraform destroy`. AWS by default will not delete an s3 bucket which is not empty, to prevent losing bucket data and affecting other resources in lightsail. If `forceDelete` is set to `true` the bucket will be deleted even when not empty. * `tags` - (Optional) A map of tags to assign to the resource. To create a key-only tag, use an empty string as the value. If configured with a provider `defaultTags` configuration block present, tags with matching keys will overwrite those defined at the provider-level. ## Attribute Reference @@ -77,4 +78,4 @@ Using `terraform import`, import `awsLightsailBucket` using the `name` attribute % terraform import aws_lightsail_bucket.test example-bucket ``` - \ No newline at end of file + \ No newline at end of file diff --git a/website/docs/cdktf/typescript/r/mq_broker.html.markdown b/website/docs/cdktf/typescript/r/mq_broker.html.markdown index 095a26bdb9f4..c50f92b1265b 100644 --- a/website/docs/cdktf/typescript/r/mq_broker.html.markdown +++ b/website/docs/cdktf/typescript/r/mq_broker.html.markdown @@ -14,7 +14,7 @@ Provides an Amazon MQ broker resource. This resources also manages users for the -> For more information on Amazon MQ, see [Amazon MQ documentation](https://docs.aws.amazon.com/amazon-mq/latest/developer-guide/welcome.html). -~> **NOTE:** Amazon MQ currently places limits on **RabbitMQ** brokers. For example, a RabbitMQ broker cannot have: instances with an associated IP address of an ENI attached to the broker, an associated LDAP server to authenticate and authorize broker connections, storage type `efs`, audit logging, or `configuration` blocks. Although this resource allows you to create RabbitMQ users, RabbitMQ users cannot have console access or groups. Also, Amazon MQ does not return information about RabbitMQ users so drift detection is not possible. +~> **NOTE:** Amazon MQ currently places limits on **RabbitMQ** brokers. For example, a RabbitMQ broker cannot have: instances with an associated IP address of an ENI attached to the broker, an associated LDAP server to authenticate and authorize broker connections, storage type `efs`, or audit logging. Although this resource allows you to create RabbitMQ users, RabbitMQ users cannot have console access or groups. Also, Amazon MQ does not return information about RabbitMQ users so drift detection is not possible. ~> **NOTE:** Changes to an MQ Broker can occur when you change a parameter, such as `configuration` or `user`, and are reflected in the next maintenance window. Because of this, Terraform may report a difference in its planning phase because a modification has not yet taken place. You can use the `applyImmediately` flag to instruct the service to apply the change immediately (see documentation below). Using `applyImmediately` can result in a brief downtime as the broker reboots. @@ -112,7 +112,7 @@ The following arguments are optional: * `applyImmediately` - (Optional) Specifies whether any broker modifications are applied immediately, or during the next maintenance window. Default is `false`. * `authenticationStrategy` - (Optional) Authentication strategy used to secure the broker. Valid values are `simple` and `ldap`. `ldap` is not supported for `engineType` `rabbitMq`. 
* `autoMinorVersionUpgrade` - (Optional) Whether to automatically upgrade to new minor versions of brokers as Amazon MQ makes releases available. -* `configuration` - (Optional) Configuration block for broker configuration. Applies to `engineType` of `activeMq` only. Detailed below. +* `configuration` - (Optional) Configuration block for broker configuration. Applies to `engineType` of `activeMq` and `rabbitMq` only. Detailed below. * `deploymentMode` - (Optional) Deployment mode of the broker. Valid values are `singleInstance`, `activeStandbyMultiAz`, and `clusterMultiAz`. Default is `singleInstance`. * `encryptionOptions` - (Optional) Configuration block containing encryption options. Detailed below. * `ldapServerMetadata` - (Optional) Configuration block for the LDAP server used to authenticate and authorize connections to the broker. Not supported for `engineType` `rabbitMq`. Detailed below. (Currently, AWS may not process changes to LDAP server metadata.) @@ -229,4 +229,4 @@ Using `terraform import`, import MQ Brokers using their broker id. For example: % terraform import aws_mq_broker.example a1b2c3d4-d5f6-7777-8888-9999aaaabbbbcccc ``` - \ No newline at end of file + \ No newline at end of file diff --git a/website/docs/cdktf/typescript/r/mq_configuration.html.markdown b/website/docs/cdktf/typescript/r/mq_configuration.html.markdown index 3b273e87dc46..113a81645121 100644 --- a/website/docs/cdktf/typescript/r/mq_configuration.html.markdown +++ b/website/docs/cdktf/typescript/r/mq_configuration.html.markdown @@ -16,6 +16,8 @@ For more information on Amazon MQ, see [Amazon MQ documentation](https://docs.aw ## Example Usage +### ActiveMQ + ```typescript // DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug import { Construct } from "constructs"; @@ -40,11 +42,37 @@ class MyConvertedCode extends TerraformStack { ``` +### RabbitMQ + +```typescript +// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug +import { Construct } from "constructs"; +import { TerraformStack } from "cdktf"; +/* + * Provider bindings are generated by running `cdktf get`. + * See https://cdk.tf/provider-generation for more details. + */ +import { MqConfiguration } from "./.gen/providers/aws/mq-configuration"; +class MyConvertedCode extends TerraformStack { + constructor(scope: Construct, name: string) { + super(scope, name); + new MqConfiguration(this, "example", { + data: "# Default RabbitMQ delivery acknowledgement timeout is 30 minutes in milliseconds\nconsumer_timeout = 1800000\n\n", + description: "Example Configuration", + engineType: "RabbitMQ", + engineVersion: "3.11.16", + name: "example", + }); + } +} + +``` + ## Argument Reference The following arguments are required: -* `data` - (Required) Broker configuration in XML format. See [official docs](https://docs.aws.amazon.com/amazon-mq/latest/developer-guide/amazon-mq-broker-configuration-parameters.html) for supported parameters and format of the XML. +* `data` - (Required) Broker configuration in XML format for `activeMq` or [Cuttlefish](https://github.com/Kyorai/cuttlefish) format for `rabbitMq`. See [official docs](https://docs.aws.amazon.com/amazon-mq/latest/developer-guide/amazon-mq-broker-configuration-parameters.html) for supported parameters and format of the XML. * `engineType` - (Required) Type of broker engine. Valid values are `activeMq` and `rabbitMq`. * `engineVersion` - (Required) Version of the broker engine. * `name` - (Required) Name of the configuration. 
@@ -86,4 +114,4 @@ Using `terraform import`, import MQ Configurations using the configuration ID. F % terraform import aws_mq_configuration.example c-0187d1eb-88c8-475a-9b79-16ef5a10c94f ``` - \ No newline at end of file + \ No newline at end of file diff --git a/website/docs/cdktf/typescript/r/opensearch_inbound_connection_accepter.html.markdown b/website/docs/cdktf/typescript/r/opensearch_inbound_connection_accepter.html.markdown index 49c775533435..8cd1e11dd744 100644 --- a/website/docs/cdktf/typescript/r/opensearch_inbound_connection_accepter.html.markdown +++ b/website/docs/cdktf/typescript/r/opensearch_inbound_connection_accepter.html.markdown @@ -72,6 +72,13 @@ This resource exports the following attributes in addition to the arguments abov * `id` - The Id of the connection to accept. * `connectionStatus` - Status of the connection request. +## Timeouts + +[Configuration options](https://developer.hashicorp.com/terraform/language/resources/syntax#operation-timeouts): + +* `create` - (Default `5M`) +* `delete` - (Default `5M`) + ## Import In Terraform v1.5.0 and later, use an [`import` block](https://developer.hashicorp.com/terraform/language/import) to import AWS Opensearch Inbound Connection Accepters using the Inbound Connection ID. For example: @@ -94,4 +101,4 @@ Using `terraform import`, import AWS Opensearch Inbound Connection Accepters usi % terraform import aws_opensearch_inbound_connection_accepter.foo connection-id ``` - \ No newline at end of file + \ No newline at end of file diff --git a/website/docs/cdktf/typescript/r/opensearch_outbound_connection.html.markdown b/website/docs/cdktf/typescript/r/opensearch_outbound_connection.html.markdown index 335270f9c1d1..390f091f81e5 100644 --- a/website/docs/cdktf/typescript/r/opensearch_outbound_connection.html.markdown +++ b/website/docs/cdktf/typescript/r/opensearch_outbound_connection.html.markdown @@ -36,6 +36,7 @@ class MyConvertedCode extends TerraformStack { dataAwsRegionCurrent.overrideLogicalId("current"); new OpensearchOutboundConnection(this, "foo", { connectionAlias: "outbound_connection", + connectionMode: "DIRECT", localDomainInfo: { domainName: localDomain.domainName, ownerId: Token.asString(current.accountId), @@ -57,9 +58,20 @@ class MyConvertedCode extends TerraformStack { This resource supports the following arguments: * `connectionAlias` - (Required, Forces new resource) Specifies the connection alias that will be used by the customer for this connection. +* `connectionMode` - (Required, Forces new resource) Specifies the connection mode. Accepted values are `direct` or `vpcEndpoint`. +* `acceptConnection` - (Optional, Forces new resource) Accepts the connection. +* `connectionProperties` - (Optional, Forces new resource) Configuration block for the outbound connection. * `localDomainInfo` - (Required, Forces new resource) Configuration block for the local Opensearch domain. * `remoteDomainInfo` - (Required, Forces new resource) Configuration block for the remote Opensearch domain. +### connection_properties + +* `crossClusterSearch` - (Optional, Forces new resource) Configuration block for cross cluster search. + +### cross_cluster_search + +* `skipUnavailable` - (Optional, Forces new resource) Skips unavailable clusters and can only be used for cross-cluster searches. Accepted values are `enabled` or `disabled`. + ### local_domain_info * `ownerId` - (Required, Forces new resource) The Account ID of the owner of the local domain. 
@@ -79,6 +91,17 @@ This resource exports the following attributes in addition to the arguments abov * `id` - The Id of the connection. * `connectionStatus` - Status of the connection request. +`connectionProperties` block exports the following: + +* `endpoint` - The endpoint of the remote domain, is only set when `connectionMode` is `vpcEndpoint` and `acceptConnection` is `true`. + +## Timeouts + +[Configuration options](https://developer.hashicorp.com/terraform/language/resources/syntax#operation-timeouts): + +* `create` - (Default `5M`) +* `delete` - (Default `5M`) + ## Import In Terraform v1.5.0 and later, use an [`import` block](https://developer.hashicorp.com/terraform/language/import) to import AWS Opensearch Outbound Connections using the Outbound Connection ID. For example: @@ -101,4 +124,4 @@ Using `terraform import`, import AWS Opensearch Outbound Connections using the O % terraform import aws_opensearch_outbound_connection.foo connection-id ``` - \ No newline at end of file + \ No newline at end of file diff --git a/website/docs/cdktf/typescript/r/opensearch_package.html.markdown b/website/docs/cdktf/typescript/r/opensearch_package.html.markdown new file mode 100644 index 000000000000..b7c74b3b0943 --- /dev/null +++ b/website/docs/cdktf/typescript/r/opensearch_package.html.markdown @@ -0,0 +1,104 @@ +--- +subcategory: "OpenSearch" +layout: "aws" +page_title: "AWS: aws_opensearch_package" +description: |- + Terraform resource for managing an AWS OpenSearch package. +--- + + + +# Resource: aws_opensearch_package + +Manages an AWS Opensearch Package. + +## Example Usage + +### Basic Usage + +```typescript +// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug +import { Construct } from "constructs"; +import { Fn, Token, TerraformStack } from "cdktf"; +/* + * Provider bindings are generated by running `cdktf get`. + * See https://cdk.tf/provider-generation for more details. + */ +import { OpensearchPackage } from "./.gen/providers/aws/opensearch-package"; +import { S3Bucket } from "./.gen/providers/aws/s3-bucket"; +import { S3Object } from "./.gen/providers/aws/s3-object"; +class MyConvertedCode extends TerraformStack { + constructor(scope: Construct, name: string) { + super(scope, name); + const myOpensearchPackages = new S3Bucket(this, "my_opensearch_packages", { + bucket: "my-opensearch-packages", + }); + const example = new S3Object(this, "example", { + bucket: myOpensearchPackages.bucket, + etag: Token.asString(Fn.filemd5("./example.txt")), + key: "example.txt", + source: "./example.txt", + }); + const awsOpensearchPackageExample = new OpensearchPackage( + this, + "example_2", + { + packageName: "example-txt", + packageSource: { + s3BucketName: myOpensearchPackages.bucket, + s3Key: example.key, + }, + packageType: "TXT-DICTIONARY", + } + ); + /*This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match.*/ + awsOpensearchPackageExample.overrideLogicalId("example"); + } +} + +``` + +## Argument Reference + +This resource supports the following arguments: + +* `packageName` - (Required, Forces new resource) Unique name for the package. +* `packageType` - (Required, Forces new resource) The type of package. +* `packageSource` - (Required, Forces new resource) Configuration block for the package source options. +* `packageDescription` - (Optional, Forces new resource) Description of the package. 
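+
+~> **Note:** The following is an illustrative sketch only and was not generated by `cdktf convert`; the bucket name, object key, and domain name are placeholder assumptions. It shows how the exported package `id` might be passed to an `aws_opensearch_package_association` resource (documented separately) to attach the package to a domain.
+
+```typescript
+import { Construct } from "constructs";
+import { TerraformStack } from "cdktf";
+import { OpensearchPackage } from "./.gen/providers/aws/opensearch-package";
+import { OpensearchPackageAssociation } from "./.gen/providers/aws/opensearch-package-association";
+class PackageAssociationSketch extends TerraformStack {
+  constructor(scope: Construct, name: string) {
+    super(scope, name);
+    const example = new OpensearchPackage(this, "example", {
+      packageName: "example-txt",
+      packageSource: {
+        // Placeholder bucket and key; upload the dictionary file with aws_s3_object in real configurations.
+        s3BucketName: "my-opensearch-packages",
+        s3Key: "example.txt",
+      },
+      packageType: "TXT-DICTIONARY",
+    });
+    new OpensearchPackageAssociation(this, "example_association", {
+      // Placeholder domain name; reference an aws_opensearch_domain resource in real configurations.
+      domainName: "my-opensearch-domain",
+      packageId: example.id,
+    });
+  }
+}
+
+```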
+ +### package_source + +* `s3BucketName` - (Required, Forces new resource) The name of the Amazon S3 bucket containing the package. +* `s3Key` - (Required, Forces new resource) Key (file name) of the package. + +## Attribute Reference + +This resource exports the following attributes in addition to the arguments above: + +* `id` - The Id of the package. +* `availablePackageVersion` - The current version of the package. + +## Import + +In Terraform v1.5.0 and later, use an [`import` block](https://developer.hashicorp.com/terraform/language/import) to import AWS Opensearch Packages using the Package ID. For example: + +```typescript +// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug +import { Construct } from "constructs"; +import { TerraformStack } from "cdktf"; +class MyConvertedCode extends TerraformStack { + constructor(scope: Construct, name: string) { + super(scope, name); + } +} + +``` + +Using `terraform import`, import AWS Opensearch Packages using the Package ID. For example: + +```console +% terraform import aws_opensearch_package.example package-id +``` + + \ No newline at end of file diff --git a/website/docs/cdktf/typescript/r/opensearch_package_association.html.markdown b/website/docs/cdktf/typescript/r/opensearch_package_association.html.markdown new file mode 100644 index 000000000000..45f5f1bfc07d --- /dev/null +++ b/website/docs/cdktf/typescript/r/opensearch_package_association.html.markdown @@ -0,0 +1,80 @@ +--- +subcategory: "OpenSearch" +layout: "aws" +page_title: "AWS: aws_opensearch_package_association" +description: |- + Terraform resource for managing an AWS OpenSearch package association. +--- + + + +# Resource: aws_opensearch_package_association + +Manages an AWS Opensearch Package Association. + +## Example Usage + +### Basic Usage + +```typescript +// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug +import { Construct } from "constructs"; +import { Token, TerraformStack } from "cdktf"; +/* + * Provider bindings are generated by running `cdktf get`. + * See https://cdk.tf/provider-generation for more details. + */ +import { OpensearchDomain } from "./.gen/providers/aws/opensearch-domain"; +import { OpensearchPackage } from "./.gen/providers/aws/opensearch-package"; +import { OpensearchPackageAssociation } from "./.gen/providers/aws/opensearch-package-association"; +class MyConvertedCode extends TerraformStack { + constructor(scope: Construct, name: string) { + super(scope, name); + const myDomain = new OpensearchDomain(this, "my_domain", { + clusterConfig: { + instanceType: "r4.large.search", + }, + domainName: "my-opensearch-domain", + engineVersion: "Elasticsearch_7.10", + }); + const example = new OpensearchPackage(this, "example", { + packageName: "example-txt", + packageSource: { + s3BucketName: myOpensearchPackages.bucket, + s3Key: Token.asString(awsS3ObjectExample.key), + }, + packageType: "TXT-DICTIONARY", + }); + const awsOpensearchPackageAssociationExample = + new OpensearchPackageAssociation(this, "example_2", { + domainName: myDomain.domainName, + packageId: example.id, + }); + /*This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match.*/ + awsOpensearchPackageAssociationExample.overrideLogicalId("example"); + } +} + +``` + +## Argument Reference + +This resource supports the following arguments: + +* `packageId` - (Required, Forces new resource) Internal ID of the package to associate with a domain. 
+* `domainName` - (Required, Forces new resource) Name of the domain to associate the package with. + +## Attribute Reference + +This resource exports the following attributes in addition to the arguments above: + +* `id` - The Id of the package association. + +## Timeouts + +[Configuration options](https://developer.hashicorp.com/terraform/language/resources/syntax#operation-timeouts): + +* `create` - (Default `10M`) +* `delete` - (Default `10M`) + + \ No newline at end of file diff --git a/website/docs/cdktf/typescript/r/rds_custom_db_engine_version.markdown b/website/docs/cdktf/typescript/r/rds_custom_db_engine_version.markdown new file mode 100644 index 000000000000..d0a975226b89 --- /dev/null +++ b/website/docs/cdktf/typescript/r/rds_custom_db_engine_version.markdown @@ -0,0 +1,215 @@ +--- +subcategory: "RDS (Relational Database)" +layout: "aws" +page_title: "AWS: aws_rds_custom_db_engine_version" +description: |- + Provides an custom engine version (CEV) resource for Amazon RDS Custom. +--- + + + +# Resource: aws_rds_custom_db_engine_version + +Provides an custom engine version (CEV) resource for Amazon RDS Custom. For additional information, see [Working with CEVs for RDS Custom for Oracle](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/custom-cev.html) and [Working with CEVs for RDS Custom for SQL Server](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/custom-cev-sqlserver.html) in the the [RDS User Guide](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Welcome.html). + +## Example Usage + +### RDS Custom for Oracle Usage + +```typescript +// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug +import { Construct } from "constructs"; +import { TerraformStack } from "cdktf"; +/* + * Provider bindings are generated by running `cdktf get`. + * See https://cdk.tf/provider-generation for more details. + */ +import { KmsKey } from "./.gen/providers/aws/kms-key"; +import { RdsCustomDbEngineVersion } from "./.gen/providers/aws/rds-custom-db-engine-version"; +class MyConvertedCode extends TerraformStack { + constructor(scope: Construct, name: string) { + super(scope, name); + const example = new KmsKey(this, "example", { + description: "KMS symmetric key for RDS Custom for Oracle", + }); + const awsRdsCustomDbEngineVersionExample = new RdsCustomDbEngineVersion( + this, + "example_1", + { + databaseInstallationFilesS3BucketName: "DOC-EXAMPLE-BUCKET", + databaseInstallationFilesS3Prefix: "1915_GI/", + engine: "custom-oracle-ee-cdb", + engineVersion: "19.cdb_cev1", + kmsKeyId: example.arn, + manifest: + ' {\n\t"databaseInstallationFileNames":["V982063-01.zip"]\n }\n\n', + tags: { + Key: "value", + Name: "example", + }, + } + ); + /*This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match.*/ + awsRdsCustomDbEngineVersionExample.overrideLogicalId("example"); + } +} + +``` + +### RDS Custom for Oracle External Manifest Usage + +```typescript +// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug +import { Construct } from "constructs"; +import { Fn, Token, TerraformStack } from "cdktf"; +/* + * Provider bindings are generated by running `cdktf get`. + * See https://cdk.tf/provider-generation for more details. 
+ */ +import { KmsKey } from "./.gen/providers/aws/kms-key"; +import { RdsCustomDbEngineVersion } from "./.gen/providers/aws/rds-custom-db-engine-version"; +class MyConvertedCode extends TerraformStack { + constructor(scope: Construct, name: string) { + super(scope, name); + const example = new KmsKey(this, "example", { + description: "KMS symmetric key for RDS Custom for Oracle", + }); + const awsRdsCustomDbEngineVersionExample = new RdsCustomDbEngineVersion( + this, + "example_1", + { + databaseInstallationFilesS3BucketName: "DOC-EXAMPLE-BUCKET", + databaseInstallationFilesS3Prefix: "1915_GI/", + engine: "custom-oracle-ee-cdb", + engineVersion: "19.cdb_cev1", + filename: "manifest_1915_GI.json", + kmsKeyId: example.arn, + manifestHash: Token.asString(Fn.filebase64sha256(json)), + tags: { + Key: "value", + Name: "example", + }, + } + ); + /*This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match.*/ + awsRdsCustomDbEngineVersionExample.overrideLogicalId("example"); + } +} + +``` + +### RDS Custom for SQL Server Usage + +```typescript +// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug +import { Construct } from "constructs"; +import { TerraformStack } from "cdktf"; +/* + * Provider bindings are generated by running `cdktf get`. + * See https://cdk.tf/provider-generation for more details. + */ +import { RdsCustomDbEngineVersion } from "./.gen/providers/aws/rds-custom-db-engine-version"; +class MyConvertedCode extends TerraformStack { + constructor(scope: Construct, name: string) { + super(scope, name); + new RdsCustomDbEngineVersion(this, "test", { + engine: "custom-sqlserver-se", + engineVersion: "15.00.4249.2.cev-1", + sourceImageId: "ami-0aa12345678a12ab1", + }); + } +} + +``` + +### RDS Custom for SQL Server Usage with AMI from another region + +```typescript +// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug +import { Construct } from "constructs"; +import { TerraformStack } from "cdktf"; +/* + * Provider bindings are generated by running `cdktf get`. + * See https://cdk.tf/provider-generation for more details. + */ +import { AmiCopy } from "./.gen/providers/aws/ami-copy"; +import { RdsCustomDbEngineVersion } from "./.gen/providers/aws/rds-custom-db-engine-version"; +class MyConvertedCode extends TerraformStack { + constructor(scope: Construct, name: string) { + super(scope, name); + const example = new AmiCopy(this, "example", { + description: "A copy of ami-xxxxxxxx", + name: "sqlserver-se-2019-15.00.4249.2", + sourceAmiId: "ami-xxxxxxxx", + sourceAmiRegion: "us-east-1", + }); + new RdsCustomDbEngineVersion(this, "test", { + engine: "custom-sqlserver-se", + engineVersion: "15.00.4249.2.cev-1", + sourceImageId: example.id, + }); + } +} + +``` + +## Argument Reference + +This resource supports the following arguments: + +* `databaseInstallationFilesS3BucketName` - (Required) The name of the Amazon S3 bucket that contains the database installation files. +* `databaseInstallationFilesS3Prefix` - (Required) The prefix for the Amazon S3 bucket that contains the database installation files. +* `description` - (Optional) The description of the CEV. +* `engine` - (Required) The name of the database engine. Valid values are `customOracle*`, `customSqlserver*`. +* `engineVersion` - (Required) The version of the database engine. +* `filename` - (Optional) The name of the manifest file within the local filesystem. Conflicts with `manifest`. 
+* `kmsKeyId` - (Optional) The ARN of the AWS KMS key that is used to encrypt the database installation files. Required for RDS Custom for Oracle.
+* `manifest` - (Optional) The manifest file, in JSON format, that contains the list of database installation files. Conflicts with `filename`.
+* `manifestHash` - (Optional) Used to trigger updates. Must be set to a base64-encoded SHA256 hash of the manifest source specified with `filename`. The usual way to set this is `filebase64sha256("manifest.json")` (`Fn.filebase64sha256` in CDKTF, as shown in the external manifest example above), where `manifest.json` is the local filename of the manifest source.
+* `status` - (Optional) The status of the CEV. Valid values are `available`, `inactive`, `inactive-except-restore`.
+* `sourceImageId` - (Optional) The ID of the AMI to create the CEV from. Required for RDS Custom for SQL Server. For RDS Custom for Oracle, you can specify an AMI ID that was used in a different Oracle CEV.
+* `tags` - (Optional) A mapping of tags to assign to the resource. If configured with a provider [`defaultTags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block) present, tags with matching keys will overwrite those defined at the provider-level.
+
+## Attribute Reference
+
+This resource exports the following attributes in addition to the arguments above:
+
+* `arn` - The Amazon Resource Name (ARN) for the custom engine version.
+* `createTime` - The date and time that the CEV was created.
+* `dbParameterGroupFamily` - The name of the DB parameter group family for the CEV.
+* `imageId` - The ID of the AMI that was created with the CEV.
+* `majorEngineVersion` - The major version of the database engine.
+* `manifestComputed` - The manifest file, in JSON format, as returned by the service. It is generated by the service and may differ from the input `manifest`.
+* `tagsAll` - A map of tags assigned to the resource, including those inherited from the provider [`defaultTags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block).
+
+## Timeouts
+
+[Configuration options](https://developer.hashicorp.com/terraform/language/resources/syntax#operation-timeouts):
+
+- `create` - (Default `240M`)
+- `update` - (Default `10M`)
+- `delete` - (Default `60M`)
+
+## Import
+
+In Terraform v1.5.0 and later, use an [`import` block](https://developer.hashicorp.com/terraform/language/import) to import custom engine versions for Amazon RDS Custom using the `engine` and `engineVersion` separated by a colon (`:`). For example:
+
+```typescript
+// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug
+import { Construct } from "constructs";
+import { TerraformStack } from "cdktf";
+class MyConvertedCode extends TerraformStack {
+  constructor(scope: Construct, name: string) {
+    super(scope, name);
+  }
+}
+
+```
+
+Using `terraform import`, import custom engine versions for Amazon RDS Custom using the `engine` and `engineVersion` separated by a colon (`:`). 
For example:
+
+```console
+% terraform import aws_rds_custom_db_engine_version.example custom-oracle-ee-cdb:19.cdb_cev1
+```
+
+ 
\ No newline at end of file
diff --git a/website/docs/cdktf/typescript/r/route53_hosted_zone_dnssec.html.markdown b/website/docs/cdktf/typescript/r/route53_hosted_zone_dnssec.html.markdown
index fd761838bb8d..cb7f7447d6b7 100644
--- a/website/docs/cdktf/typescript/r/route53_hosted_zone_dnssec.html.markdown
+++ b/website/docs/cdktf/typescript/r/route53_hosted_zone_dnssec.html.markdown
@@ -14,6 +14,8 @@ Manages Route 53 Hosted Zone Domain Name System Security Extensions (DNSSEC). Fo
 
 !> **WARNING:** If you disable DNSSEC signing for your hosted zone before the DNS changes have propagated, your domain could become unavailable on the internet. When you remove the DS records, you must wait until the longest TTL for the DS records that you remove has expired before you complete the step to disable DNSSEC signing. Please refer to the [Route 53 Developer Guide - Disable DNSSEC](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/dns-configuring-dnssec-disable.html) for a detailed breakdown on the steps required to disable DNSSEC safely for a hosted zone.
 
+~> **Note:** Route 53 hosted zones are global resources, and as such any `awsKmsKey` that you use as part of a signing key needs to be located in the `us-east-1` region. In the example below, the main AWS provider declaration is for `us-east-1`; however, if you are provisioning your AWS resources in a different region, you will need to specify a provider alias and attach it to the `awsKmsKey` resource, as described in the [provider alias documentation](https://developer.hashicorp.com/terraform/language/providers/configuration#alias-multiple-provider-configurations).
+
 ## Example Usage
 
 ```typescript
@@ -143,4 +145,4 @@ Using `terraform import`, import `awsRoute53HostedZoneDnssec` resources using th
 % terraform import aws_route53_hosted_zone_dnssec.example Z1D633PJN98FT9
 ```
 
- 
\ No newline at end of file
+ 
\ No newline at end of file
diff --git a/website/docs/cdktf/typescript/r/sfn_alias.html.markdown b/website/docs/cdktf/typescript/r/sfn_alias.html.markdown
index 58fe5ea6ebbb..2a20bbeffee5 100644
--- a/website/docs/cdktf/typescript/r/sfn_alias.html.markdown
+++ b/website/docs/cdktf/typescript/r/sfn_alias.html.markdown
@@ -59,7 +59,7 @@ class MyConvertedCode extends TerraformStack {
 
 ## Argument Reference
 
-The following arguments are required:
+This resource supports the following arguments:
 
 * `name` - (Required) Name for the alias you are creating.
 * `description` - (Optional) Description of the alias.
@@ -67,13 +67,9 @@ The following arguments are required:
 
 `routingConfiguration` supports the following arguments:
 
-* `stateMachineVersionArn` - (Required) A version of the state machine.
+* `stateMachineVersionArn` - (Required) The Amazon Resource Name (ARN) of the state machine version.
 * `weight` - (Required) Percentage of traffic routed to the state machine version.
 
-The following arguments are optional:
-
-* `optionalArg` - (Optional) Concise argument description. Do not begin the description with "An", "The", "Defines", "Indicates", or "Specifies," as these are verbose. In other words, "Indicates the amount of storage," can be rewritten as "Amount of storage," without losing any information.
-
 ## Attribute Reference
 
 This resource exports the following attributes in addition to the arguments above:
@@ -103,4 +99,4 @@ Using `terraform import`, import SFN (Step Functions) Alias using the `arn`. 
For % terraform import aws_sfn_alias.foo arn:aws:states:us-east-1:123456789098:stateMachine:myStateMachine:foo ``` - \ No newline at end of file + \ No newline at end of file diff --git a/website/docs/cdktf/typescript/r/vpclattice_listener_rule.html.markdown b/website/docs/cdktf/typescript/r/vpclattice_listener_rule.html.markdown index c0b6a88951f6..88c826c195a0 100644 --- a/website/docs/cdktf/typescript/r/vpclattice_listener_rule.html.markdown +++ b/website/docs/cdktf/typescript/r/vpclattice_listener_rule.html.markdown @@ -118,7 +118,7 @@ The following arguments are required: * `serviceIdentifier` - (Required) The ID or Amazon Resource Identifier (ARN) of the service. * `listenerIdentifier` - (Required) The ID or Amazon Resource Name (ARN) of the listener. -* `action` - (Required) The action for the default rule. +* `action` - (Required) The action for the listener rule. * `match` - (Required) The rule match. * `name` - (Required) The name of the rule. The name must be unique within the listener. The valid characters are a-z, 0-9, and hyphens (-). You can't use a hyphen as the first or last character, or immediately after another hyphen. * `priority` - (Required) The priority assigned to the rule. Each rule for a specific listener must have a unique priority. The lower the priority number the higher the priority. @@ -178,8 +178,8 @@ path match match (`match`) supports the following: This resource exports the following attributes in addition to the arguments above: -* `arn` - ARN of the target group. -* `ruleId` - Unique identifier for the target group. +* `arn` - The ARN for the listener rule. +* `ruleId` - Unique identifier for the listener rule. * `tagsAll` - Map of tags assigned to the resource, including those inherited from the provider [`defaultTags` configuration block](/docs/providers/aws/index.html#default_tags-configuration-block). ## Timeouts @@ -212,4 +212,4 @@ Using `terraform import`, import VPC Lattice Listener Rule using the `exampleIdA % terraform import aws_vpclattice_listener_rule.example rft-8012925589 ``` - \ No newline at end of file + \ No newline at end of file diff --git a/website/docs/cdktf/typescript/r/vpclattice_target_group.html.markdown b/website/docs/cdktf/typescript/r/vpclattice_target_group.html.markdown index 5458d822f897..07448d912f80 100644 --- a/website/docs/cdktf/typescript/r/vpclattice_target_group.html.markdown +++ b/website/docs/cdktf/typescript/r/vpclattice_target_group.html.markdown @@ -72,6 +72,38 @@ class MyConvertedCode extends TerraformStack { protocolVersion: "HTTP1", unhealthyThresholdCount: 3, }, + ipAddressType: "IPV4", + port: 443, + protocol: "HTTPS", + protocolVersion: "HTTP1", + vpcIdentifier: Token.asString(awsVpcExample.id), + }, + name: "example", + type: "IP", + }); + } +} + +``` + +### ALB + +If the type is ALB, `healthCheck` block is not supported. + +```typescript +// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug +import { Construct } from "constructs"; +import { Token, TerraformStack } from "cdktf"; +/* + * Provider bindings are generated by running `cdktf get`. + * See https://cdk.tf/provider-generation for more details. 
+ */ +import { VpclatticeTargetGroup } from "./.gen/providers/aws/vpclattice-target-group"; +class MyConvertedCode extends TerraformStack { + constructor(scope: Construct, name: string) { + super(scope, name); + new VpclatticeTargetGroup(this, "example", { + config: { port: 443, protocol: "HTTPS", protocolVersion: "HTTP1", @@ -183,4 +215,4 @@ Using `terraform import`, import VPC Lattice Target Group using the `id`. For ex % terraform import aws_vpclattice_target_group.example tg-0c11d4dc16ed96bdb ``` - \ No newline at end of file + \ No newline at end of file diff --git a/website/docs/cdktf/typescript/r/vpclattice_target_group_attachment.html.markdown b/website/docs/cdktf/typescript/r/vpclattice_target_group_attachment.html.markdown index 304b001623d1..b1a690dbab26 100644 --- a/website/docs/cdktf/typescript/r/vpclattice_target_group_attachment.html.markdown +++ b/website/docs/cdktf/typescript/r/vpclattice_target_group_attachment.html.markdown @@ -50,10 +50,10 @@ The following arguments are required: `target` supports the following: - `id` - (Required) The ID of the target. If the target type of the target group is INSTANCE, this is an instance ID. If the target type is IP , this is an IP address. If the target type is LAMBDA, this is the ARN of the Lambda function. If the target type is ALB, this is the ARN of the Application Load Balancer. -- `port` - (Optional) The port on which the target is listening. For HTTP, the default is 80. For HTTPS, the default is 443. +- `port` - (Optional) This port is used for routing traffic to the target, and defaults to the target group port. However, you can override the default and specify a custom port. ## Attribute Reference This resource exports no additional attributes. - \ No newline at end of file + \ No newline at end of file diff --git a/website/docs/cdktf/typescript/r/wafv2_rule_group.html.markdown b/website/docs/cdktf/typescript/r/wafv2_rule_group.html.markdown index 5d9818f6cc16..a988c86bed6b 100644 --- a/website/docs/cdktf/typescript/r/wafv2_rule_group.html.markdown +++ b/website/docs/cdktf/typescript/r/wafv2_rule_group.html.markdown @@ -511,7 +511,8 @@ You can't nest a `rateBasedStatement`, for example for use inside a `notStatemen The `rateBasedStatement` block supports the following arguments: -* `aggregateKeyType` - (Optional) Setting that indicates how to aggregate the request counts. Valid values include: `constant`, `forwardedIp` or `ip`. Default: `ip`. +* `aggregateKeyType` - (Optional) Setting that indicates how to aggregate the request counts. Valid values include: `constant`, `customKeys`, `forwardedIp` or `ip`. Default: `ip`. +* `customKey` - (Optional) Aggregate the request counts using one or more web request components as the aggregate keys. See [`customKey`](#custom_key-block) below for details. * `forwardedIpConfig` - (Optional) The configuration for inspecting IP addresses in an HTTP header that you specify, instead of using the IP address that's reported by the web request origin. If `aggregateKeyType` is set to `forwardedIp`, this block is required. See [Forwarded IP Config](#forwarded-ip-config) below for details. * `limit` - (Required) The limit on requests per 5-minute period for a single originating IP address. * `scopeDownStatement` - (Optional) An optional nested statement that narrows the scope of the rate-based statement to matching web requests. This can be any nestable statement, and you can nest statements at any level below this scope-down statement. See [Statement](#statement) above for details. 
If `aggregateKeyType` is set to `constant`, this block is required.
@@ -701,6 +702,91 @@ This resource exports the following attributes in addition to the arguments abov
 * `arn` - The ARN of the WAF rule group.
 * `tagsAll` - A map of tags assigned to the resource, including those inherited from the provider [`defaultTags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block).
 
+### `customKey` Block
+
+Aggregate the request counts using one or more web request components as the aggregate keys. With this option, you must set `aggregateKeyType` to `customKeys` and specify the aggregate keys in the `customKey` block. To aggregate on only the IP address or only the forwarded IP address, don't use custom keys. Instead, set the `aggregateKeyType` to `ip` or `forwardedIp`.
+
+The `customKey` block supports the following arguments (a configuration sketch showing how they combine follows this list):
+
+* `cookie` - (Optional) Use the value of a cookie in the request as an aggregate key. See [RateLimit `cookie`](#ratelimit-cookie-block) below for details.
+* `forwardedIp` - (Optional) Use the first IP address in an HTTP header as an aggregate key. See [RateLimit `forwardedIp`](#ratelimit-forwarded_ip-block) below for details.
+* `httpMethod` - (Optional) Use the request's HTTP method as an aggregate key. See [RateLimit `httpMethod`](#ratelimit-http_method-block) below for details.
+* `header` - (Optional) Use the value of a header in the request as an aggregate key. See [RateLimit `header`](#ratelimit-header-block) below for details.
+* `ip` - (Optional) Use the request's originating IP address as an aggregate key. See [RateLimit `ip`](#ratelimit-ip-block) below for details.
+* `labelNamespace` - (Optional) Use the specified label namespace as an aggregate key. See [RateLimit `labelNamespace`](#ratelimit-label_namespace-block) below for details.
+* `queryArgument` - (Optional) Use the specified query argument as an aggregate key. See [RateLimit `queryArgument`](#ratelimit-query_argument-block) below for details.
+* `queryString` - (Optional) Use the request's query string as an aggregate key. See [RateLimit `queryString`](#ratelimit-query_string-block) below for details.
+* `uriPath` - (Optional) Use the request's URI path as an aggregate key. See [RateLimit `uriPath`](#ratelimit-uri_path-block) below for details.
+
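+As an illustration of how these keys combine, the following is a minimal sketch of a rate-based statement that aggregates on a cookie value plus the request's HTTP method. It is written as a plain TypeScript object using the camelCase block and argument names documented on this page; the exact nesting expected by the generated `Wafv2RuleGroup` bindings (single objects versus arrays) is an assumption here, so treat it as a starting point rather than a verified configuration.
+
+```typescript
+// Hedged sketch only: shape of a rate-based statement that uses custom
+// aggregation keys. "session-id" and the limit are placeholders; "CUSTOM_KEYS"
+// and "NONE" are the underlying AWS WAFv2 API values for the aggregate key
+// type and the text transformation type.
+const rateBasedStatement = {
+  limit: 1000,
+  aggregateKeyType: "CUSTOM_KEYS",
+  customKey: [
+    {
+      cookie: {
+        name: "session-id",
+        textTransformation: [{ priority: 0, type: "NONE" }],
+      },
+    },
+    // An empty block selects the request's HTTP method as an additional key.
+    { httpMethod: {} },
+  ],
+};
+```
+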
+### RateLimit `cookie` Block
+
+Use the value of a cookie in the request as an aggregate key. Each distinct value in the cookie contributes to the aggregation instance. If you use a single cookie as your custom key, then each value fully defines an aggregation instance.
+
+The `cookie` block supports the following arguments:
+
+* `name`: The name of the cookie to use.
+* `textTransformation`: Text transformations eliminate some of the unusual formatting that attackers use in web requests in an effort to bypass detection. They are used in rate-based rule statements to transform request components before using them as custom aggregation keys. At least one transformation is required. See [Text Transformation](#text-transformation) above for details.
+
+### RateLimit `forwardedIp` Block
+
+Use the first IP address in an HTTP header as an aggregate key. Each distinct forwarded IP address contributes to the aggregation instance. When you specify an IP or forwarded IP in the custom key settings, you must also specify at least one other key to use. You can aggregate on only the forwarded IP address by specifying `forwardedIp` in your rate-based statement's `aggregateKeyType`. With this option, you must specify the header to use in the rate-based rule's [Forwarded IP Config](#forwarded-ip-config) block.
+
+The `forwardedIp` block is configured as an empty block `{}`.
+
+### RateLimit `httpMethod` Block
+
+Use the request's HTTP method as an aggregate key. Each distinct HTTP method contributes to the aggregation instance. If you use just the HTTP method as your custom key, then each method fully defines an aggregation instance.
+
+The `httpMethod` block is configured as an empty block `{}`.
+
+### RateLimit `header` Block
+
+Use the value of a header in the request as an aggregate key. Each distinct value in the header contributes to the aggregation instance. If you use a single header as your custom key, then each value fully defines an aggregation instance.
+
+The `header` block supports the following arguments:
+
+* `name`: The name of the header to use.
+* `textTransformation`: Text transformations eliminate some of the unusual formatting that attackers use in web requests in an effort to bypass detection. They are used in rate-based rule statements to transform request components before using them as custom aggregation keys. At least one transformation is required. See [Text Transformation](#text-transformation) above for details.
+
+### RateLimit `ip` Block
+
+Use the request's originating IP address as an aggregate key. Each distinct IP address contributes to the aggregation instance. When you specify an IP or forwarded IP in the custom key settings, you must also specify at least one other key to use. You can aggregate on only the IP address by specifying `ip` in your rate-based statement's `aggregateKeyType`.
+
+The `ip` block is configured as an empty block `{}`.
+
+### RateLimit `labelNamespace` Block
+
+Use the specified label namespace as an aggregate key. Each distinct fully qualified label name that has the specified label namespace contributes to the aggregation instance. If you use just one label namespace as your custom key, then each label name fully defines an aggregation instance. This uses only labels that have been added to the request by rules that are evaluated before this rate-based rule in the web ACL. For information about label namespaces and names, see [Label syntax and naming requirements](https://docs.aws.amazon.com/waf/latest/developerguide/waf-rule-label-requirements.html) in the WAF Developer Guide.
+
+The `labelNamespace` block supports the following arguments:
+
+* `namespace`: The namespace to use for aggregation.
+
+### RateLimit `queryArgument` Block
+
+Use the specified query argument as an aggregate key. Each distinct value for the named query argument contributes to the aggregation instance. If you use a single query argument as your custom key, then each value fully defines an aggregation instance.
+
+The `queryArgument` block supports the following arguments:
+
+* `name`: The name of the query argument to use.
+* `textTransformation`: Text transformations eliminate some of the unusual formatting that attackers use in web requests in an effort to bypass detection. They are used in rate-based rule statements to transform request components before using them as custom aggregation keys. At least one transformation is required. See [Text Transformation](#text-transformation) above for details.
+
+### RateLimit `queryString` Block
+
+Use the request's query string as an aggregate key. Each distinct string contributes to the aggregation instance. 
If you use just the query string as your custom key, then each string fully defines an aggregation instance.
+
+The `queryString` block supports the following arguments:
+
+* `textTransformation`: Text transformations eliminate some of the unusual formatting that attackers use in web requests in an effort to bypass detection. They are used in rate-based rule statements to transform request components before using them as custom aggregation keys. At least one transformation is required. See [Text Transformation](#text-transformation) above for details.
+
+### RateLimit `uriPath` Block
+
+Use the request's URI path as an aggregate key. Each distinct URI path contributes to the aggregation instance. If you use just the URI path as your custom key, then each URI path fully defines an aggregation instance.
+
+The `uriPath` block supports the following arguments:
+
+* `textTransformation`: Text transformations eliminate some of the unusual formatting that attackers use in web requests in an effort to bypass detection. They are used in rate-based rule statements to transform request components before using them as custom aggregation keys. At least one transformation is required. See [Text Transformation](#text-transformation) above for details.
+
 ## Import
 
 In Terraform v1.5.0 and later, use an [`import` block](https://developer.hashicorp.com/terraform/language/import) to import WAFv2 Rule Group using `id/name/scope`. For example:
@@ -723,4 +809,4 @@ Using `terraform import`, import WAFv2 Rule Group using `id/name/scope`. For exa
 % terraform import aws_wafv2_rule_group.example a1b2c3d4-d5f6-7777-8888-9999aaaabbbbcccc/example/REGIONAL
 ```
 
- 
\ No newline at end of file
+ 
\ No newline at end of file
diff --git a/website/docs/cdktf/typescript/r/wafv2_web_acl.html.markdown b/website/docs/cdktf/typescript/r/wafv2_web_acl.html.markdown
index b316098bbe86..9193b5a10527 100644
--- a/website/docs/cdktf/typescript/r/wafv2_web_acl.html.markdown
+++ b/website/docs/cdktf/typescript/r/wafv2_web_acl.html.markdown
@@ -606,7 +606,8 @@ You can't nest a `rateBasedStatement`, for example for use inside a `notStatemen
 
 The `rateBasedStatement` block supports the following arguments:
 
-* `aggregateKeyType` - (Optional) Setting that indicates how to aggregate the request counts. Valid values include: `constant`, `forwardedIp` or `ip`. Default: `ip`.
+* `aggregateKeyType` - (Optional) Setting that indicates how to aggregate the request counts. Valid values include: `constant`, `customKeys`, `forwardedIp`, or `ip`. Default: `ip`.
+* `customKey` - (Optional) Aggregate the request counts using one or more web request components as the aggregate keys. See [`customKey`](#custom_key-block) below for details.
 * `forwardedIpConfig` - (Optional) Configuration for inspecting IP addresses in an HTTP header that you specify, instead of using the IP address that's reported by the web request origin. If `aggregateKeyType` is set to `forwardedIp`, this block is required. See [`forwardedIpConfig`](#forwarded_ip_config-block) below for details.
 * `limit` - (Required) Limit on requests per 5-minute period for a single originating IP address.
 * `scopeDownStatement` - (Optional) Optional nested statement that narrows the scope of the rate-based statement to matching web requests. This can be any nestable statement, and you can nest statements at any level below this scope-down statement. See [`statement`](#statement-block) above for details. If `aggregateKeyType` is set to `constant`, this block is required. 
@@ -875,6 +876,91 @@ The `cloudfront` block supports the following arguments:
 
 * `defaultSizeInspectionLimit` - (Required) Specifies the maximum size of the web request body component that an associated CloudFront distribution should send to AWS WAF for inspection. This applies to statements in the web ACL that inspect the body or JSON body. Valid values are `kb16`, `kb32`, `kb48` and `kb64`.
 
+### `customKey` Block
+
+Aggregate the request counts using one or more web request components as the aggregate keys. With this option, you must set `aggregateKeyType` to `customKeys` and specify the aggregate keys in the `customKey` block. To aggregate on only the IP address or only the forwarded IP address, don't use custom keys. Instead, set the `aggregateKeyType` to `ip` or `forwardedIp`.
+
+The `customKey` block supports the following arguments (a configuration sketch showing how they combine follows this list):
+
+* `cookie` - (Optional) Use the value of a cookie in the request as an aggregate key. See [RateLimit `cookie`](#ratelimit-cookie-block) below for details.
+* `forwardedIp` - (Optional) Use the first IP address in an HTTP header as an aggregate key. See [RateLimit `forwardedIp`](#ratelimit-forwarded_ip-block) below for details.
+* `httpMethod` - (Optional) Use the request's HTTP method as an aggregate key. See [RateLimit `httpMethod`](#ratelimit-http_method-block) below for details.
+* `header` - (Optional) Use the value of a header in the request as an aggregate key. See [RateLimit `header`](#ratelimit-header-block) below for details.
+* `ip` - (Optional) Use the request's originating IP address as an aggregate key. See [RateLimit `ip`](#ratelimit-ip-block) below for details.
+* `labelNamespace` - (Optional) Use the specified label namespace as an aggregate key. See [RateLimit `labelNamespace`](#ratelimit-label_namespace-block) below for details.
+* `queryArgument` - (Optional) Use the specified query argument as an aggregate key. See [RateLimit `queryArgument`](#ratelimit-query_argument-block) below for details.
+* `queryString` - (Optional) Use the request's query string as an aggregate key. See [RateLimit `queryString`](#ratelimit-query_string-block) below for details.
+* `uriPath` - (Optional) Use the request's URI path as an aggregate key. See [RateLimit `uriPath`](#ratelimit-uri_path-block) below for details.
+
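+To make the composition concrete, here is a minimal, unverified sketch of a rate-based statement that pairs the originating IP address with the request's URI path as aggregation keys (as noted in the RateLimit `ip` block below, `ip` must be combined with at least one other key). It uses the camelCase block and argument names documented on this page as a plain TypeScript object; the exact nesting expected by the generated `Wafv2WebAcl` bindings is an assumption.
+
+```typescript
+// Hedged sketch only: `ip` is documented as requiring at least one other
+// custom key, so it is paired here with `uriPath`. "CUSTOM_KEYS" and "NONE"
+// are the underlying AWS WAFv2 API values; the limit is a placeholder.
+const rateBasedStatement = {
+  limit: 500,
+  aggregateKeyType: "CUSTOM_KEYS",
+  customKey: [
+    // Empty block: aggregate on the request's originating IP address.
+    { ip: {} },
+    {
+      uriPath: {
+        textTransformation: [{ priority: 0, type: "NONE" }],
+      },
+    },
+  ],
+};
+```
+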
+### RateLimit `cookie` Block
+
+Use the value of a cookie in the request as an aggregate key. Each distinct value in the cookie contributes to the aggregation instance. If you use a single cookie as your custom key, then each value fully defines an aggregation instance.
+
+The `cookie` block supports the following arguments:
+
+* `name`: The name of the cookie to use.
+* `textTransformation`: Text transformations eliminate some of the unusual formatting that attackers use in web requests in an effort to bypass detection. They are used in rate-based rule statements to transform request components before using them as custom aggregation keys. At least one transformation is required. See [`textTransformation`](#text_transformation-block) above for details.
+
+### RateLimit `forwardedIp` Block
+
+Use the first IP address in an HTTP header as an aggregate key. Each distinct forwarded IP address contributes to the aggregation instance. When you specify an IP or forwarded IP in the custom key settings, you must also specify at least one other key to use. You can aggregate on only the forwarded IP address by specifying `forwardedIp` in your rate-based statement's `aggregateKeyType`. With this option, you must specify the header to use in the rate-based rule's [`forwardedIpConfig`](#forwarded_ip_config-block) block.
+
+The `forwardedIp` block is configured as an empty block `{}`.
+
+### RateLimit `httpMethod` Block
+
+Use the request's HTTP method as an aggregate key. Each distinct HTTP method contributes to the aggregation instance. If you use just the HTTP method as your custom key, then each method fully defines an aggregation instance.
+
+The `httpMethod` block is configured as an empty block `{}`.
+
+### RateLimit `header` Block
+
+Use the value of a header in the request as an aggregate key. Each distinct value in the header contributes to the aggregation instance. If you use a single header as your custom key, then each value fully defines an aggregation instance.
+
+The `header` block supports the following arguments:
+
+* `name`: The name of the header to use.
+* `textTransformation`: Text transformations eliminate some of the unusual formatting that attackers use in web requests in an effort to bypass detection. They are used in rate-based rule statements to transform request components before using them as custom aggregation keys. At least one transformation is required. See [`textTransformation`](#text_transformation-block) above for details.
+
+### RateLimit `ip` Block
+
+Use the request's originating IP address as an aggregate key. Each distinct IP address contributes to the aggregation instance. When you specify an IP or forwarded IP in the custom key settings, you must also specify at least one other key to use. You can aggregate on only the IP address by specifying `ip` in your rate-based statement's `aggregateKeyType`.
+
+The `ip` block is configured as an empty block `{}`.
+
+### RateLimit `labelNamespace` Block
+
+Use the specified label namespace as an aggregate key. Each distinct fully qualified label name that has the specified label namespace contributes to the aggregation instance. If you use just one label namespace as your custom key, then each label name fully defines an aggregation instance. This uses only labels that have been added to the request by rules that are evaluated before this rate-based rule in the web ACL. For information about label namespaces and names, see [Label syntax and naming requirements](https://docs.aws.amazon.com/waf/latest/developerguide/waf-rule-label-requirements.html) in the WAF Developer Guide.
+
+The `labelNamespace` block supports the following arguments:
+
+* `namespace`: The namespace to use for aggregation.
+
+### RateLimit `queryArgument` Block
+
+Use the specified query argument as an aggregate key. Each distinct value for the named query argument contributes to the aggregation instance. If you use a single query argument as your custom key, then each value fully defines an aggregation instance.
+
+The `queryArgument` block supports the following arguments:
+
+* `name`: The name of the query argument to use.
+* `textTransformation`: Text transformations eliminate some of the unusual formatting that attackers use in web requests in an effort to bypass detection. They are used in rate-based rule statements to transform request components before using them as custom aggregation keys. At least one transformation is required. See [`textTransformation`](#text_transformation-block) above for details.
+
+### RateLimit `queryString` Block
+
+Use the request's query string as an aggregate key. Each distinct string contributes to the aggregation instance. 
If you use just the query string as your custom key, then each string fully defines an aggregation instance.
+
+The `queryString` block supports the following arguments:
+
+* `textTransformation`: Text transformations eliminate some of the unusual formatting that attackers use in web requests in an effort to bypass detection. They are used in rate-based rule statements to transform request components before using them as custom aggregation keys. At least one transformation is required. See [`textTransformation`](#text_transformation-block) above for details.
+
+### RateLimit `uriPath` Block
+
+Use the request's URI path as an aggregate key. Each distinct URI path contributes to the aggregation instance. If you use just the URI path as your custom key, then each URI path fully defines an aggregation instance.
+
+The `uriPath` block supports the following arguments:
+
+* `textTransformation`: Text transformations eliminate some of the unusual formatting that attackers use in web requests in an effort to bypass detection. They are used in rate-based rule statements to transform request components before using them as custom aggregation keys. At least one transformation is required. See [`textTransformation`](#text_transformation-block) above for details.
+
 ## Attribute Reference
 
 This resource exports the following attributes in addition to the arguments above:
@@ -906,4 +992,4 @@ Using `terraform import`, import WAFv2 Web ACLs using `id/name/scope`. For examp
 % terraform import aws_wafv2_web_acl.example a1b2c3d4-d5f6-7777-8888-9999aaaabbbbcccc/example/REGIONAL
 ```
 
- 
\ No newline at end of file
+ 
\ No newline at end of file
diff --git a/website/docs/cdktf/typescript/r/wafv2_web_acl_association.html.markdown b/website/docs/cdktf/typescript/r/wafv2_web_acl_association.html.markdown
index 92d359c930bd..a72390294211 100644
--- a/website/docs/cdktf/typescript/r/wafv2_web_acl_association.html.markdown
+++ b/website/docs/cdktf/typescript/r/wafv2_web_acl_association.html.markdown
@@ -87,7 +87,7 @@ resource "aws_wafv2_web_acl_association" "example" {
 
 This resource supports the following arguments:
 
-* `resourceArn` - (Required) The Amazon Resource Name (ARN) of the resource to associate with the web ACL. This must be an ARN of an Application Load Balancer, an Amazon API Gateway stage, or an Amazon Cognito User Pool.
+* `resourceArn` - (Required) The Amazon Resource Name (ARN) of the resource to associate with the web ACL. This must be an ARN of an Application Load Balancer, an Amazon API Gateway stage, an Amazon Cognito User Pool, an Amazon AppSync GraphQL API, an Amazon App Runner service, or an Amazon Verified Access instance.
 * `webAclArn` - (Required) The Amazon Resource Name (ARN) of the Web ACL that you want to associate with the resource.
 
 ## Attribute Reference
@@ -122,4 +122,4 @@ Using `terraform import`, import WAFv2 Web ACL Association using `webAclArn,reso
 % terraform import aws_wafv2_web_acl_association.example arn:aws:wafv2:...7ce849ea,arn:aws:apigateway:...ages/name
 ```
 
- 
\ No newline at end of file
+ 
\ No newline at end of file