From f0cc92791002546d47890c34f2ab2058aa18674c Mon Sep 17 00:00:00 2001 From: AWS CDK Automation <43080478+aws-cdk-automation@users.noreply.github.com> Date: Wed, 7 Dec 2022 01:49:00 -0800 Subject: [PATCH] docs(cfnspec): update CloudFormation documentation (#23258) --- .../spec-source/cfn-docs/cfn-docs.json | 539 +++++++++++++++++- 1 file changed, 524 insertions(+), 15 deletions(-) diff --git a/packages/@aws-cdk/cfnspec/spec-source/cfn-docs/cfn-docs.json b/packages/@aws-cdk/cfnspec/spec-source/cfn-docs/cfn-docs.json index 7e4b035660a91..0bdedf298e502 100644 --- a/packages/@aws-cdk/cfnspec/spec-source/cfn-docs/cfn-docs.json +++ b/packages/@aws-cdk/cfnspec/spec-source/cfn-docs/cfn-docs.json @@ -4200,7 +4200,7 @@ }, "AWS::AppRunner::Service": { "attributes": { - "Ref": "", + "Ref": "When the logical ID of this resource is provided to the `Ref` intrinsic function, `Ref` returns the ARN of the App Runner service.", "ServiceArn": "The Amazon Resource Name (ARN) of this service.", "ServiceId": "An ID that App Runner generated for this service. It's unique within the AWS Region .", "ServiceUrl": "A subdomain URL that App Runner generated for this service. You can use this URL to access your service web application.", @@ -8394,7 +8394,7 @@ }, "description": "Creates a new event data store.", "properties": { - "AdvancedEventSelectors": "The advanced event selectors to use to select the events for the data store.\n\nFor more information about how to use advanced event selectors to log CloudTrail events, see [Log events by using advanced event selectors](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/logging-data-events-with-cloudtrail.html#creating-data-event-selectors-advanced) in the CloudTrail User Guide.\n\nFor more information about how to use advanced event selectors to include AWS Config configuration items in your event data store, see [Create an event data store for AWS Config configuration items](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/lake-cli-create-eds-config.html) in the CloudTrail User Guide.", + "AdvancedEventSelectors": "The advanced event selectors to use to select the events for the data store. You can configure up to five advanced event selectors for each event data store.\n\nFor more information about how to use advanced event selectors to log CloudTrail events, see [Log events by using advanced event selectors](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/logging-data-events-with-cloudtrail.html#creating-data-event-selectors-advanced) in the CloudTrail User Guide.\n\nFor more information about how to use advanced event selectors to include AWS Config configuration items in your event data store, see [Create an event data store for AWS Config configuration items](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/lake-cli-create-eds-config.html) in the CloudTrail User Guide.", "KmsKeyId": "Specifies the AWS KMS key ID to use to encrypt the events delivered by CloudTrail. The value can be an alias name prefixed by `alias/` , a fully specified ARN to an alias, a fully specified ARN to a key, or a globally unique identifier.\n\n> Disabling or deleting the KMS key, or removing CloudTrail permissions on the key, prevents CloudTrail from logging events to the event data store, and prevents users from querying the data in the event data store that was encrypted with the key. After you associate an event data store with a KMS key, the KMS key cannot be removed or changed. 
Before you disable or delete a KMS key that you are using with an event data store, delete or back up your event data store. \n\nCloudTrail also supports AWS KMS multi-Region keys. For more information about multi-Region keys, see [Using multi-Region keys](https://docs.aws.amazon.com/kms/latest/developerguide/multi-region-keys-overview.html) in the *AWS Key Management Service Developer Guide* .\n\nExamples:\n\n- `alias/MyAliasName`\n- `arn:aws:kms:us-east-2:123456789012:alias/MyAliasName`\n- `arn:aws:kms:us-east-2:123456789012:key/12345678-1234-1234-1234-123456789012`\n- `12345678-1234-1234-1234-123456789012`", "MultiRegionEnabled": "Specifies whether the event data store includes events from all regions, or only from the region in which the event data store is created.", "Name": "The name of the event data store.", @@ -8462,7 +8462,7 @@ "attributes": {}, "description": "Use event selectors to further specify the management and data event settings for your trail. By default, trails created without specific event selectors will be configured to log all read and write management events, and no data events. When an event occurs in your account, CloudTrail evaluates the event selector for all trails. For each trail, if the event matches any event selector, the trail processes and logs the event. If the event doesn't match any event selector, the trail doesn't log the event.\n\nYou can configure up to five event selectors for a trail.\n\nYou cannot apply both event selectors and advanced event selectors to a trail.", "properties": { - "DataResources": "In AWS CloudFormation , CloudTrail supports data event logging for Amazon S3 objects, Amazon DynamoDB tables, and AWS Lambda functions. Currently, advanced event selectors for data events are not supported in AWS CloudFormation templates. You can specify up to 250 resources for an individual event selector, but the total number of data resources cannot exceed 250 across all event selectors in a trail. This limit does not apply if you configure resource logging for all data events.\n\nFor more information, see [Data Events](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/logging-management-and-data-events-with-cloudtrail.html#logging-data-events) and [Limits in AWS CloudTrail](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/WhatIsCloudTrail-Limits.html) in the *AWS CloudTrail User Guide* .", + "DataResources": "In AWS CloudFormation , CloudTrail supports data event logging for Amazon S3 objects, Amazon DynamoDB tables, and AWS Lambda functions. Currently, advanced event selectors for data events are not supported in AWS CloudFormation templates. You can specify up to 250 resources for an individual event selector, but the total number of data resources cannot exceed 250 across all event selectors in a trail. This limit does not apply if you configure resource logging for all data events.\n\nFor more information, see [Logging data events for trails](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/logging-data-events-with-cloudtrail.html) and [Limits in AWS CloudTrail](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/WhatIsCloudTrail-Limits.html) in the *AWS CloudTrail User Guide* .", "ExcludeManagementEventSources": "An optional list of service event sources from which you do not want management events to be logged on your trail. 
In this release, the list can be empty (disables the filter), or it can filter out AWS Key Management Service or Amazon RDS Data API events by containing `kms.amazonaws.com` or `rdsdata.amazonaws.com` . By default, `ExcludeManagementEventSources` is empty, and AWS KMS and Amazon RDS Data API events are logged to your trail. You can exclude management event sources only in regions that support the event source.", "IncludeManagementEvents": "Specify if you want your event selector to include management events for your trail.\n\nFor more information, see [Management Events](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/logging-management-events-with-cloudtrail.html) in the *AWS CloudTrail User Guide* .\n\nBy default, the value is `true` .\n\nThe first copy of management events is free. You are charged for additional copies of management events that you are logging on any subsequent trail in the same region. For more information about CloudTrail pricing, see [AWS CloudTrail Pricing](https://docs.aws.amazon.com/cloudtrail/pricing/) .", "ReadWriteType": "Specify if you want your trail to log read-only events, write-only events, or all. For example, the EC2 `GetConsoleOutput` is a read-only API operation and `RunInstances` is a write-only API operation.\n\nBy default, the value is `All` ." @@ -13454,7 +13454,7 @@ "description": "Represents the settings used to enable server-side encryption.", "properties": { "KMSMasterKeyId": "The AWS KMS key that should be used for the AWS KMS encryption. To specify a key, use its key ID, Amazon Resource Name (ARN), alias name, or alias ARN. Note that you should only provide this parameter if the key is different from the default DynamoDB key `alias/aws/dynamodb` .", - "SSEEnabled": "Indicates whether server-side encryption is done using an AWS managed key or an AWS owned key. If enabled (true), server-side encryption type is set to `KMS` and an AWS managed key is used ( AWS KMS charges apply). If disabled (false) or not specified, server-side encryption is set to AWS owned key.", + "SSEEnabled": "Indicates whether server-side encryption is done using an AWS managed key or an AWS owned key. If enabled (true), server-side encryption type is set to `KMS` and an AWS managed key or customer managed key is used ( AWS KMS charges apply). If disabled (false) or not specified, server-side encryption is set to AWS owned key.", "SSEType": "Server-side encryption type. The only supported value is:\n\n- `KMS` - Server-side encryption that uses AWS Key Management Service . The key is stored in your account and is managed by AWS KMS ( AWS KMS charges apply)." } }, @@ -18265,7 +18265,7 @@ "CacheSecurityGroupNames": "A list of cache security group names to associate with this replication group.", "CacheSubnetGroupName": "The name of the cache subnet group to be used for the replication group.\n\n> If you're going to launch your cluster in an Amazon VPC, you need to create a subnet group before you start creating a cluster. For more information, see [AWS::ElastiCache::SubnetGroup](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-elasticache-subnetgroup.html) .", "DataTieringEnabled": "Enables data tiering. Data tiering is only supported for replication groups using the r6gd node type. This parameter must be set to true when using r6gd nodes. 
For more information, see [Data tiering](https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/data-tiering.html) .", - "Engine": "The name of the cache engine to be used for the clusters in this replication group. Must be Redis.", + "Engine": "The name of the cache engine to be used for the clusters in this replication group. The value must be set to `Redis` .", "EngineVersion": "The version number of the cache engine to be used for the clusters in this replication group. To view the supported cache engine versions, use the `DescribeCacheEngineVersions` operation.\n\n*Important:* You can upgrade to a newer engine version (see [Selecting a Cache Engine and Version](https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/SelectEngine.html#VersionManagement) ) in the *ElastiCache User Guide* , but you cannot downgrade to an earlier engine version. If you want to use an earlier engine version, you must delete the existing cluster or replication group and create it anew with the earlier engine version.", "GlobalReplicationGroupId": "The name of the Global datastore", "IpDiscovery": "The network type you choose when creating a replication group, either `ipv4` | `ipv6` . IPv6 is supported for workloads using Redis engine version 6.2 onward or Memcached engine version 1.6.6 on all instances built on the [Nitro system](https://docs.aws.amazon.com/https://aws.amazon.com/ec2/nitro/) .", @@ -20083,7 +20083,7 @@ "Ref": "", "ResourceARN": "" }, - "description": "The configuration of a data repository association that links an Amazon FSx for Lustre file system to an Amazon S3 bucket or an Amazon File Cache resource to an Amazon S3 bucket or an NFS file system. The data repository association configuration object is returned in the response of the following operations:\n\n- `CreateDataRepositoryAssociation`\n- `UpdateDataRepositoryAssociation`\n- `DescribeDataRepositoryAssociations`\n\nData repository associations are supported only for an Amazon FSx for Lustre file system with the `Persistent_2` deployment type and for an Amazon File Cache resource.", + "description": "Creates an Amazon FSx for Lustre data repository association (DRA). A data repository association is a link between a directory on the file system and an Amazon S3 bucket or prefix. You can have a maximum of 8 data repository associations on a file system. Data repository associations are supported only for file systems with the `Persistent_2` deployment type.\n\nEach data repository association must have a unique Amazon FSx file system directory and a unique S3 bucket or prefix associated with it. You can configure a data repository association for automatic import only, for automatic export only, or for both. To learn more about linking a data repository to your file system, see [Linking your file system to an S3 bucket](https://docs.aws.amazon.com/fsx/latest/LustreGuide/create-dra-linked-data-repo.html) .\n\n> `CreateDataRepositoryAssociation` isn't supported on Amazon File Cache resources. To create a DRA on Amazon File Cache, use the `CreateFileCache` operation.", "properties": { "BatchImportMetaDataOnCreate": "A boolean flag indicating whether an import data repository task to import metadata should run after the data repository association is created. 
The task runs if this flag is set to `true` .\n\n> `BatchImportMetaDataOnCreate` is not supported for data repositories linked to an Amazon File Cache resource.", "DataRepositoryPath": "The path to the data repository that will be linked to the cache or file system.\n\n- For Amazon File Cache, the path can be an NFS data repository that will be linked to the cache. The path can be in one of two formats:\n\n- If you are not using the `DataRepositorySubdirectories` parameter, the path is to an NFS Export directory (or one of its subdirectories) in the format `nsf://nfs-domain-name/exportpath` . You can therefore link a single NFS Export to a single data repository association.\n- If you are using the `DataRepositorySubdirectories` parameter, the path is the domain name of the NFS file system in the format `nfs://filer-domain-name` , which indicates the root of the subdirectories specified with the `DataRepositorySubdirectories` parameter.\n- For Amazon File Cache, the path can be an S3 bucket or prefix in the format `s3://myBucket/myPrefix/` .\n- For Amazon FSx for Lustre, the path can be an S3 bucket or prefix in the format `s3://myBucket/myPrefix/` .", @@ -29767,7 +29767,7 @@ "ImageConfig": "Configuration values that override the container image Dockerfile settings. For more information, see [Container image settings](https://docs.aws.amazon.com/lambda/latest/dg/images-create.html#images-parms) .", "KmsKeyArn": "The ARN of the AWS Key Management Service ( AWS KMS ) key that's used to encrypt your function's environment variables. If it's not provided, Lambda uses a default service key.", "Layers": "A list of [function layers](https://docs.aws.amazon.com/lambda/latest/dg/configuration-layers.html) to add to the function's execution environment. Specify each layer by its ARN, including the version.", - "LoggingConfig": "The function's [Amazon CloudWatch Logs setting](https://docs.aws.amazon.com/lambda/latest/dg/monitoring-cloudwatchlogs.html) . By default, `BuiltInLogger` is `Enabled` . To turn off logging to CloudWatch , set `BuiltInLogger` to `Disabled` .", + "LoggingConfig": "", "MemorySize": "The amount of [memory available to the function](https://docs.aws.amazon.com/lambda/latest/dg/configuration-function-common.html#configuration-memory-console) at runtime. Increasing the function memory also increases its CPU allocation. The default value is 128 MB. The value can be any multiple of 1 MB.", "PackageType": "The type of deployment package. Set to `Image` for container image and set `Zip` for .zip file archive.", "ReservedConcurrentExecutions": "The number of simultaneous executions to reserve for the function.", @@ -29831,9 +29831,9 @@ }, "AWS::Lambda::Function.LoggingConfig": { "attributes": {}, - "description": "The function's [Amazon CloudWatch Logs setting](https://docs.aws.amazon.com/lambda/latest/dg/monitoring-cloudwatchlogs.html) . By default, `BuiltInLogger` is `Enabled` . To turn off logging to CloudWatch , set `BuiltInLogger` to `Disabled` .", + "description": "", "properties": { - "BuiltInLogger": "The default is `Enabled` . To turn off logging to Amazon CloudWatch , set to `Disabled` ." + "BuiltInLogger": "" } }, "AWS::Lambda::Function.SnapStart": { @@ -37337,6 +37337,515 @@ "Value": "The optional part of a key-value pair that defines a tag. The maximum length of a tag value is 256 characters. The minimum length is 0 characters. If you don\u2019t want a resource to have a specific tag value, don\u2019t specify a value for this parameter. 
Amazon Pinpoint will set the value to an empty string." } }, + "AWS::Pipes::Pipe": { + "attributes": { + "Arn": "The ARN of the pipe.", + "CreationTime": "The time the pipe was created.", + "CurrentState": "The state the pipe is in.", + "LastModifiedTime": "When the pipe was last updated, in [ISO-8601 format](https://docs.aws.amazon.com/https://www.w3.org/TR/NOTE-datetime) (YYYY-MM-DDThh:mm:ss.sTZD).", + "Ref": "`Ref` returns the name of the pipe that was created by the request.", + "StateReason": "The reason the pipe is in its current state." + }, + "description": "Create a pipe. Amazon EventBridge Pipes connect event sources to targets and reduces the need for specialized knowledge and integration code.", + "properties": { + "Description": "A description of the pipe.", + "DesiredState": "The state the pipe should be in.", + "Enrichment": "The ARN of the enrichment resource.", + "EnrichmentParameters": "The parameters required to set up enrichment on your pipe.", + "Name": "The name of the pipe.", + "RoleArn": "The ARN of the role that allows the pipe to send data to the target.", + "Source": "The ARN of the source resource.", + "SourceParameters": "The parameters required to set up a source for your pipe.", + "Tags": "The list of key-value pairs to associate with the pipe.", + "Target": "The ARN of the target resource.", + "TargetParameters": "The parameters required to set up a target for your pipe." + } + }, + "AWS::Pipes::Pipe.AwsVpcConfiguration": { + "attributes": {}, + "description": "This structure specifies the VPC subnets and security groups for the task, and whether a public IP address is to be used. This structure is relevant only for ECS tasks that use the `awsvpc` network mode.", + "properties": { + "AssignPublicIp": "Specifies whether the task's elastic network interface receives a public IP address. You can specify `ENABLED` only when `LaunchType` in `EcsParameters` is set to `FARGATE` .", + "SecurityGroups": "Specifies the security groups associated with the task. These security groups must all be in the same VPC. You can specify as many as five security groups. If you do not specify a security group, the default security group for the VPC is used.", + "Subnets": "Specifies the subnets associated with the task. These subnets must all be in the same VPC. You can specify as many as 16 subnets." + } + }, + "AWS::Pipes::Pipe.BatchArrayProperties": { + "attributes": {}, + "description": "The array properties for the submitted job, such as the size of the array. The array size can be between 2 and 10,000. If you specify array properties for a job, it becomes an array job. This parameter is used only if the target is an AWS Batch job.", + "properties": { + "Size": "The size of the array, if this is an array batch job." + } + }, + "AWS::Pipes::Pipe.BatchContainerOverrides": { + "attributes": {}, + "description": "The overrides that are sent to a container.", + "properties": { + "Command": "The command to send to the container that overrides the default command from the Docker image or the task definition.", + "Environment": "The environment variables to send to the container. You can add new environment variables, which are added to the container at launch, or you can override the existing environment variables from the Docker image or the task definition.\n\n> Environment variables cannot start with \" `AWS Batch` \". 
This naming convention is reserved for variables that AWS Batch sets.", + "InstanceType": "The instance type to use for a multi-node parallel job.\n\n> This parameter isn't applicable to single-node container jobs or jobs that run on Fargate resources, and shouldn't be provided.", + "ResourceRequirements": "The type and amount of resources to assign to a container. This overrides the settings in the job definition. The supported resources include `GPU` , `MEMORY` , and `VCPU` ." + } + }, + "AWS::Pipes::Pipe.BatchEnvironmentVariable": { + "attributes": {}, + "description": "The environment variables to send to the container. You can add new environment variables, which are added to the container at launch, or you can override the existing environment variables from the Docker image or the task definition.\n\n> Environment variables cannot start with \" `AWS Batch` \". This naming convention is reserved for variables that AWS Batch sets.", + "properties": { + "Name": "The name of the key-value pair. For environment variables, this is the name of the environment variable.", + "Value": "The value of the key-value pair. For environment variables, this is the value of the environment variable." + } + }, + "AWS::Pipes::Pipe.BatchJobDependency": { + "attributes": {}, + "description": "An object that represents an AWS Batch job dependency.", + "properties": { + "JobId": "The job ID of the AWS Batch job that's associated with this dependency.", + "Type": "The type of the job dependency." + } + }, + "AWS::Pipes::Pipe.BatchParametersMap": { + "attributes": {}, + "description": "Additional parameters passed to the job that replace parameter substitution placeholders that are set in the job definition. Parameters are specified as a key and value pair mapping. Parameters included here override any corresponding parameter defaults from the job definition.", + "properties": { + "Key": "The key of the key-value pair.", + "Value": "The value of the key-value pair." + } + }, + "AWS::Pipes::Pipe.BatchResourceRequirement": { + "attributes": {}, + "description": "The type and amount of a resource to assign to a container. The supported resources include `GPU` , `MEMORY` , and `VCPU` .", + "properties": { + "Type": "The type of resource to assign to a container. The supported resources include `GPU` , `MEMORY` , and `VCPU` .", + "Value": "The quantity of the specified resource to reserve for the container. The values vary based on the `type` specified.\n\n- **type=\"GPU\"** - The number of physical GPUs to reserve for the container. Make sure that the number of GPUs reserved for all containers in a job doesn't exceed the number of available GPUs on the compute resource that the job is launched on.\n\n> GPUs aren't available for jobs that are running on Fargate resources.\n- **type=\"MEMORY\"** - The memory hard limit (in MiB) present to the container. This parameter is supported for jobs that are running on EC2 resources. If your container attempts to exceed the memory specified, the container is terminated. This parameter maps to `Memory` in the [Create a container](https://docs.aws.amazon.com/https://docs.docker.com/engine/api/v1.23/#create-a-container) section of the [Docker Remote API](https://docs.aws.amazon.com/https://docs.docker.com/engine/api/v1.23/) and the `--memory` option to [docker run](https://docs.aws.amazon.com/https://docs.docker.com/engine/reference/run/) . You must specify at least 4 MiB of memory for a job. 
This is required but can be specified in several places for multi-node parallel (MNP) jobs. It must be specified for each node at least once. This parameter maps to `Memory` in the [Create a container](https://docs.aws.amazon.com/https://docs.docker.com/engine/api/v1.23/#create-a-container) section of the [Docker Remote API](https://docs.aws.amazon.com/https://docs.docker.com/engine/api/v1.23/) and the `--memory` option to [docker run](https://docs.aws.amazon.com/https://docs.docker.com/engine/reference/run/) .\n\n> If you're trying to maximize your resource utilization by providing your jobs as much memory as possible for a particular instance type, see [Memory management](https://docs.aws.amazon.com/batch/latest/userguide/memory-management.html) in the *AWS Batch User Guide* . \n\nFor jobs that are running on Fargate resources, then `value` is the hard limit (in MiB), and must match one of the supported values and the `VCPU` values must be one of the values supported for that memory value.\n\n- **value = 512** - `VCPU` = 0.25\n- **value = 1024** - `VCPU` = 0.25 or 0.5\n- **value = 2048** - `VCPU` = 0.25, 0.5, or 1\n- **value = 3072** - `VCPU` = 0.5, or 1\n- **value = 4096** - `VCPU` = 0.5, 1, or 2\n- **value = 5120, 6144, or 7168** - `VCPU` = 1 or 2\n- **value = 8192** - `VCPU` = 1, 2, 4, or 8\n- **value = 9216, 10240, 11264, 12288, 13312, 14336, or 15360** - `VCPU` = 2 or 4\n- **value = 16384** - `VCPU` = 2, 4, or 8\n- **value = 17408, 18432, 19456, 21504, 22528, 23552, 25600, 26624, 27648, 29696, or 30720** - `VCPU` = 4\n- **value = 20480, 24576, or 28672** - `VCPU` = 4 or 8\n- **value = 36864, 45056, 53248, or 61440** - `VCPU` = 8\n- **value = 32768, 40960, 49152, or 57344** - `VCPU` = 8 or 16\n- **value = 65536, 73728, 81920, 90112, 98304, 106496, 114688, or 122880** - `VCPU` = 16\n- **type=\"VCPU\"** - The number of vCPUs reserved for the container. This parameter maps to `CpuShares` in the [Create a container](https://docs.aws.amazon.com/https://docs.docker.com/engine/api/v1.23/#create-a-container) section of the [Docker Remote API](https://docs.aws.amazon.com/https://docs.docker.com/engine/api/v1.23/) and the `--cpu-shares` option to [docker run](https://docs.aws.amazon.com/https://docs.docker.com/engine/reference/run/) . Each vCPU is equivalent to 1,024 CPU shares. For EC2 resources, you must specify at least one vCPU. This is required but can be specified in several places; it must be specified for each node at least once.\n\nThe default for the Fargate On-Demand vCPU resource count quota is 6 vCPUs. For more information about Fargate quotas, see [AWS Fargate quotas](https://docs.aws.amazon.com/general/latest/gr/ecs-service.html#service-quotas-fargate) in the *AWS General Reference* .\n\nFor jobs that are running on Fargate resources, then `value` must match one of the supported values and the `MEMORY` values must be one of the values supported for that `VCPU` value. 
The supported values are 0.25, 0.5, 1, 2, 4, 8, and 16\n\n- **value = 0.25** - `MEMORY` = 512, 1024, or 2048\n- **value = 0.5** - `MEMORY` = 1024, 2048, 3072, or 4096\n- **value = 1** - `MEMORY` = 2048, 3072, 4096, 5120, 6144, 7168, or 8192\n- **value = 2** - `MEMORY` = 4096, 5120, 6144, 7168, 8192, 9216, 10240, 11264, 12288, 13312, 14336, 15360, or 16384\n- **value = 4** - `MEMORY` = 8192, 9216, 10240, 11264, 12288, 13312, 14336, 15360, 16384, 17408, 18432, 19456, 20480, 21504, 22528, 23552, 24576, 25600, 26624, 27648, 28672, 29696, or 30720\n- **value = 8** - `MEMORY` = 16384, 20480, 24576, 28672, 32768, 36864, 40960, 45056, 49152, 53248, 57344, or 61440\n- **value = 16** - `MEMORY` = 32768, 40960, 49152, 57344, 65536, 73728, 81920, 90112, 98304, 106496, 114688, or 122880" + } + }, + "AWS::Pipes::Pipe.BatchRetryStrategy": { + "attributes": {}, + "description": "The retry strategy that's associated with a job. For more information, see [Automated job retries](https://docs.aws.amazon.com/batch/latest/userguide/job_retries.html) in the *AWS Batch User Guide* .", + "properties": { + "Attempts": "The number of times to move a job to the `RUNNABLE` status. If the value of `attempts` is greater than one, the job is retried on failure the same number of attempts as the value." + } + }, + "AWS::Pipes::Pipe.CapacityProviderStrategyItem": { + "attributes": {}, + "description": "The details of a capacity provider strategy. To learn more, see [CapacityProviderStrategyItem](https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_CapacityProviderStrategyItem.html) in the Amazon ECS API Reference.", + "properties": { + "Base": "The base value designates how many tasks, at a minimum, to run on the specified capacity provider. Only one capacity provider in a capacity provider strategy can have a base defined. If no value is specified, the default value of 0 is used.", + "CapacityProvider": "The short name of the capacity provider.", + "Weight": "The weight value designates the relative percentage of the total number of tasks launched that should use the specified capacity provider. The weight value is taken into consideration after the base value, if defined, is satisfied." + } + }, + "AWS::Pipes::Pipe.DeadLetterConfig": { + "attributes": {}, + "description": "A `DeadLetterConfig` object that contains information about a dead-letter queue configuration.", + "properties": { + "Arn": "The ARN of the Amazon SQS queue specified as the target for the dead-letter queue." + } + }, + "AWS::Pipes::Pipe.EcsContainerOverride": { + "attributes": {}, + "description": "The overrides that are sent to a container. An empty container override can be passed in. An example of an empty container override is `{\"containerOverrides\": [ ] }` . If a non-empty container override is specified, the `name` parameter must be included.", + "properties": { + "Command": "The command to send to the container that overrides the default command from the Docker image or the task definition. You must also specify a container name.", + "Cpu": "The number of `cpu` units reserved for the container, instead of the default value from the task definition. You must also specify a container name.", + "Environment": "The environment variables to send to the container. You can add new environment variables, which are added to the container at launch, or you can override the existing environment variables from the Docker image or the task definition. 
You must also specify a container name.", + "EnvironmentFiles": "A list of files containing the environment variables to pass to a container, instead of the value from the container definition.", + "Memory": "The hard limit (in MiB) of memory to present to the container, instead of the default value from the task definition. If your container attempts to exceed the memory specified here, the container is killed. You must also specify a container name.", + "MemoryReservation": "The soft limit (in MiB) of memory to reserve for the container, instead of the default value from the task definition. You must also specify a container name.", + "Name": "The name of the container that receives the override. This parameter is required if any override is specified.", + "ResourceRequirements": "The type and amount of a resource to assign to a container, instead of the default value from the task definition. The only supported resource is a GPU." + } + }, + "AWS::Pipes::Pipe.EcsEnvironmentFile": { + "attributes": {}, + "description": "A list of files containing the environment variables to pass to a container. You can specify up to ten environment files. The file must have a `.env` file extension. Each line in an environment file should contain an environment variable in `VARIABLE=VALUE` format. Lines beginning with `#` are treated as comments and are ignored. For more information about the environment variable file syntax, see [Declare default environment variables in file](https://docs.aws.amazon.com/https://docs.docker.com/compose/env-file/) .\n\nIf there are environment variables specified using the `environment` parameter in a container definition, they take precedence over the variables contained within an environment file. If multiple environment files are specified that contain the same variable, they're processed from the top down. We recommend that you use unique variable names. For more information, see [Specifying environment variables](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/taskdef-envfiles.html) in the *Amazon Elastic Container Service Developer Guide* .\n\nThis parameter is only supported for tasks hosted on Fargate using the following platform versions:\n\n- Linux platform version `1.4.0` or later.\n- Windows platform version `1.0.0` or later.", + "properties": { + "Type": "The file type to use. The only supported value is `s3` .", + "Value": "The Amazon Resource Name (ARN) of the Amazon S3 object containing the environment variable file." + } + }, + "AWS::Pipes::Pipe.EcsEnvironmentVariable": { + "attributes": {}, + "description": "The environment variables to send to the container. You can add new environment variables, which are added to the container at launch, or you can override the existing environment variables from the Docker image or the task definition. You must also specify a container name.", + "properties": { + "Name": "The name of the key-value pair. For environment variables, this is the name of the environment variable.", + "Value": "The value of the key-value pair. For environment variables, this is the value of the environment variable." + } + }, + "AWS::Pipes::Pipe.EcsEphemeralStorage": { + "attributes": {}, + "description": "The amount of ephemeral storage to allocate for the task. This parameter is used to expand the total amount of ephemeral storage available, beyond the default amount, for tasks hosted on Fargate . 
For more information, see [Fargate task storage](https://docs.aws.amazon.com/AmazonECS/latest/userguide/using_data_volumes.html) in the *Amazon ECS User Guide for Fargate* .\n\n> This parameter is only supported for tasks hosted on Fargate using Linux platform version `1.4.0` or later. This parameter is not supported for Windows containers on Fargate .", + "properties": { + "SizeInGiB": "The total amount, in GiB, of ephemeral storage to set for the task. The minimum supported value is `21` GiB and the maximum supported value is `200` GiB." + } + }, + "AWS::Pipes::Pipe.EcsInferenceAcceleratorOverride": { + "attributes": {}, + "description": "Details on an Elastic Inference accelerator task override. This parameter is used to override the Elastic Inference accelerator specified in the task definition. For more information, see [Working with Amazon Elastic Inference on Amazon ECS](https://docs.aws.amazon.com/AmazonECS/latest/userguide/ecs-inference.html) in the *Amazon Elastic Container Service Developer Guide* .", + "properties": { + "DeviceName": "The Elastic Inference accelerator device name to override for the task. This parameter must match a `deviceName` specified in the task definition.", + "DeviceType": "The Elastic Inference accelerator type to use." + } + }, + "AWS::Pipes::Pipe.EcsResourceRequirement": { + "attributes": {}, + "description": "The type and amount of a resource to assign to a container. The supported resource types are GPUs and Elastic Inference accelerators. For more information, see [Working with GPUs on Amazon ECS](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-gpu.html) or [Working with Amazon Elastic Inference on Amazon ECS](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-inference.html) in the *Amazon Elastic Container Service Developer Guide*", + "properties": { + "Type": "The type of resource to assign to a container. The supported values are `GPU` or `InferenceAccelerator` .", + "Value": "The value for the specified resource type.\n\nIf the `GPU` type is used, the value is the number of physical `GPUs` the Amazon ECS container agent reserves for the container. The number of GPUs that's reserved for all containers in a task can't exceed the number of available GPUs on the container instance that the task is launched on.\n\nIf the `InferenceAccelerator` type is used, the `value` matches the `deviceName` for an InferenceAccelerator specified in a task definition." + } + }, + "AWS::Pipes::Pipe.EcsTaskOverride": { + "attributes": {}, + "description": "The overrides that are associated with a task.", + "properties": { + "ContainerOverrides": "One or more container overrides that are sent to a task.", + "Cpu": "The cpu override for the task.", + "EphemeralStorage": "The ephemeral storage setting override for the task.\n\n> This parameter is only supported for tasks hosted on Fargate that use the following platform versions:\n> \n> - Linux platform version `1.4.0` or later.\n> - Windows platform version `1.0.0` or later.", + "ExecutionRoleArn": "The Amazon Resource Name (ARN) of the task execution IAM role override for the task. 
For more information, see [Amazon ECS task execution IAM role](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_execution_IAM_role.html) in the *Amazon Elastic Container Service Developer Guide* .", + "InferenceAcceleratorOverrides": "The Elastic Inference accelerator override for the task.", + "Memory": "The memory override for the task.", + "TaskRoleArn": "The Amazon Resource Name (ARN) of the IAM role that containers in this task can assume. All containers in this task are granted the permissions that are specified in this role. For more information, see [IAM Role for Tasks](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-iam-roles.html) in the *Amazon Elastic Container Service Developer Guide* ." + } + }, + "AWS::Pipes::Pipe.Filter": { + "attributes": {}, + "description": "Filter events using an event pattern. For more information, see [Events and Event Patterns](https://docs.aws.amazon.com/eventbridge/latest/userguide/eventbridge-and-event-patterns.html) in the *Amazon EventBridge User Guide* .", + "properties": { + "Pattern": "The event pattern." + } + }, + "AWS::Pipes::Pipe.FilterCriteria": { + "attributes": {}, + "description": "The collection of event patterns used to filter events. For more information, see [Events and Event Patterns](https://docs.aws.amazon.com/eventbridge/latest/userguide/eventbridge-and-event-patterns.html) in the *Amazon EventBridge User Guide* .", + "properties": { + "Filters": "The event patterns." + } + }, + "AWS::Pipes::Pipe.HeaderParametersMap": { + "attributes": {}, + "description": "The headers that need to be sent as part of request invoking the API Gateway REST API or EventBridge ApiDestination.", + "properties": { + "Key": "The key of the key-value pair.", + "Value": "The value of the key-value pair." + } + }, + "AWS::Pipes::Pipe.MQBrokerAccessCredentials": { + "attributes": {}, + "description": "The AWS Secrets Manager secret that stores your broker credentials.", + "properties": { + "BasicAuth": "The ARN of the Secrets Manager secret." + } + }, + "AWS::Pipes::Pipe.MSKAccessCredentials": { + "attributes": {}, + "description": "The AWS Secrets Manager secret that stores your stream credentials.", + "properties": { + "ClientCertificateTlsAuth": "The ARN of the Secrets Manager secret.", + "SaslScram512Auth": "The ARN of the Secrets Manager secret." + } + }, + "AWS::Pipes::Pipe.NetworkConfiguration": { + "attributes": {}, + "description": "This structure specifies the network configuration for an Amazon ECS task.", + "properties": { + "AwsvpcConfiguration": "Use this structure to specify the VPC subnets and security groups for the task, and whether a public IP address is to be used. This structure is relevant only for ECS tasks that use the `awsvpc` network mode." + } + }, + "AWS::Pipes::Pipe.PipeEnrichmentHttpParameters": { + "attributes": {}, + "description": "These are custom parameter to be used when the target is an API Gateway REST APIs or EventBridge ApiDestinations. 
In the latter case, these are merged with any InvocationParameters specified on the Connection, with any values from the Connection taking precedence.", + "properties": { + "HeaderParameters": "The headers that need to be sent as part of request invoking the API Gateway REST API or EventBridge ApiDestination.", + "PathParameterValues": "The path parameter values to be used to populate API Gateway REST API or EventBridge ApiDestination path wildcards (\"*\").", + "QueryStringParameters": "The query string keys/values that need to be sent as part of request invoking the API Gateway REST API or EventBridge ApiDestination." + } + }, + "AWS::Pipes::Pipe.PipeEnrichmentParameters": { + "attributes": {}, + "description": "The parameters required to set up enrichment on your pipe.", + "properties": { + "HttpParameters": "Contains the HTTP parameters to use when the target is a API Gateway REST endpoint or EventBridge ApiDestination.\n\nIf you specify an API Gateway REST API or EventBridge ApiDestination as a target, you can use this parameter to specify headers, path parameters, and query string keys/values as part of your target invoking request. If you're using ApiDestinations, the corresponding Connection can also have these values configured. In case of any conflicting keys, values from the Connection take precedence.", + "InputTemplate": "Valid JSON text passed to the enrichment. In this case, nothing from the event itself is passed to the enrichment. For more information, see [The JavaScript Object Notation (JSON) Data Interchange Format](https://docs.aws.amazon.com/http://www.rfc-editor.org/rfc/rfc7159.txt) ." + } + }, + "AWS::Pipes::Pipe.PipeSourceActiveMQBrokerParameters": { + "attributes": {}, + "description": "The parameters for using an Active MQ broker as a source.", + "properties": { + "BatchSize": "The maximum number of records to include in each batch.", + "Credentials": "The credentials needed to access the resource.", + "MaximumBatchingWindowInSeconds": "The maximum length of a time to wait for events.", + "QueueName": "The name of the destination queue to consume." + } + }, + "AWS::Pipes::Pipe.PipeSourceDynamoDBStreamParameters": { + "attributes": {}, + "description": "The parameters for using a DynamoDB stream as a source.", + "properties": { + "BatchSize": "The maximum number of records to include in each batch.", + "DeadLetterConfig": "Define the target queue to send dead-letter queue events to.", + "MaximumBatchingWindowInSeconds": "The maximum length of a time to wait for events.", + "MaximumRecordAgeInSeconds": "(Streams only) Discard records older than the specified age. The default value is -1, which sets the maximum age to infinite. When the value is set to infinite, EventBridge never discards old records.", + "MaximumRetryAttempts": "(Streams only) Discard records after the specified number of retries. The default value is -1, which sets the maximum number of retries to infinite. When MaximumRetryAttempts is infinite, EventBridge retries failed records until the record expires in the event source.", + "OnPartialBatchItemFailure": "(Streams only) Define how to handle item process failures. `AUTOMATIC_BISECT` halves each batch and retry each half until all the records are processed or there is one failed message left in the batch.", + "ParallelizationFactor": "(Streams only) The number of batches to process concurrently from each shard. The default value is 1.", + "StartingPosition": "(Streams only) The position in a stream from which to start reading." 
+ } + }, + "AWS::Pipes::Pipe.PipeSourceKinesisStreamParameters": { + "attributes": {}, + "description": "The parameters for using a Kinesis stream as a source.", + "properties": { + "BatchSize": "The maximum number of records to include in each batch.", + "DeadLetterConfig": "Define the target queue to send dead-letter queue events to.", + "MaximumBatchingWindowInSeconds": "The maximum length of a time to wait for events.", + "MaximumRecordAgeInSeconds": "(Streams only) Discard records older than the specified age. The default value is -1, which sets the maximum age to infinite. When the value is set to infinite, EventBridge never discards old records.", + "MaximumRetryAttempts": "(Streams only) Discard records after the specified number of retries. The default value is -1, which sets the maximum number of retries to infinite. When MaximumRetryAttempts is infinite, EventBridge retries failed records until the record expires in the event source.", + "OnPartialBatchItemFailure": "(Streams only) Define how to handle item process failures. `AUTOMATIC_BISECT` halves each batch and retry each half until all the records are processed or there is one failed message left in the batch.", + "ParallelizationFactor": "(Streams only) The number of batches to process concurrently from each shard. The default value is 1.", + "StartingPosition": "(Streams only) The position in a stream from which to start reading.", + "StartingPositionTimestamp": "With `StartingPosition` set to `AT_TIMESTAMP` , the time from which to start reading, in Unix time seconds." + } + }, + "AWS::Pipes::Pipe.PipeSourceManagedStreamingKafkaParameters": { + "attributes": {}, + "description": "The parameters for using an MSK stream as a source.", + "properties": { + "BatchSize": "The maximum number of records to include in each batch.", + "ConsumerGroupID": "The name of the destination queue to consume.", + "Credentials": "The credentials needed to access the resource.", + "MaximumBatchingWindowInSeconds": "The maximum length of a time to wait for events.", + "StartingPosition": "(Streams only) The position in a stream from which to start reading.", + "TopicName": "The name of the topic that the pipe will read from." + } + }, + "AWS::Pipes::Pipe.PipeSourceParameters": { + "attributes": {}, + "description": "The parameters required to set up a source for your pipe.", + "properties": { + "ActiveMQBrokerParameters": "The parameters for using an Active MQ broker as a source.", + "DynamoDBStreamParameters": "The parameters for using a DynamoDB stream as a source.", + "FilterCriteria": "The collection of event patterns used to filter events. For more information, see [Events and Event Patterns](https://docs.aws.amazon.com/eventbridge/latest/userguide/eventbridge-and-event-patterns.html) in the *Amazon EventBridge User Guide* .", + "KinesisStreamParameters": "The parameters for using a Kinesis stream as a source.", + "ManagedStreamingKafkaParameters": "The parameters for using an MSK stream as a source.", + "RabbitMQBrokerParameters": "The parameters for using a Rabbit MQ broker as a source.", + "SelfManagedKafkaParameters": "The parameters for using a self-managed Apache Kafka stream as a source.", + "SqsQueueParameters": "The parameters for using a Amazon SQS stream as a source." 
+ } + }, + "AWS::Pipes::Pipe.PipeSourceRabbitMQBrokerParameters": { + "attributes": {}, + "description": "The parameters for using a Rabbit MQ broker as a source.", + "properties": { + "BatchSize": "The maximum number of records to include in each batch.", + "Credentials": "The credentials needed to access the resource.", + "MaximumBatchingWindowInSeconds": "The maximum length of a time to wait for events.", + "QueueName": "The name of the destination queue to consume.", + "VirtualHost": "The name of the virtual host associated with the source broker." + } + }, + "AWS::Pipes::Pipe.PipeSourceSelfManagedKafkaParameters": { + "attributes": {}, + "description": "The parameters for using a self-managed Apache Kafka stream as a source.", + "properties": { + "AdditionalBootstrapServers": "An array of server URLs.", + "BatchSize": "The maximum number of records to include in each batch.", + "ConsumerGroupID": "The name of the destination queue to consume.", + "Credentials": "The credentials needed to access the resource.", + "MaximumBatchingWindowInSeconds": "The maximum length of a time to wait for events.", + "ServerRootCaCertificate": "The ARN of the Secrets Manager secret used for certification.", + "StartingPosition": "(Streams only) The position in a stream from which to start reading.", + "TopicName": "The name of the topic that the pipe will read from.", + "Vpc": "This structure specifies the VPC subnets and security groups for the stream, and whether a public IP address is to be used." + } + }, + "AWS::Pipes::Pipe.PipeSourceSqsQueueParameters": { + "attributes": {}, + "description": "The parameters for using a Amazon SQS stream as a source.", + "properties": { + "BatchSize": "The maximum number of records to include in each batch.", + "MaximumBatchingWindowInSeconds": "The maximum length of a time to wait for events." + } + }, + "AWS::Pipes::Pipe.PipeTargetBatchJobParameters": { + "attributes": {}, + "description": "The parameters for using an AWS Batch job as a target.", + "properties": { + "ArrayProperties": "The array properties for the submitted job, such as the size of the array. The array size can be between 2 and 10,000. If you specify array properties for a job, it becomes an array job. This parameter is used only if the target is an AWS Batch job.", + "ContainerOverrides": "The overrides that are sent to a container.", + "DependsOn": "A list of dependencies for the job. A job can depend upon a maximum of 20 jobs. You can specify a `SEQUENTIAL` type dependency without specifying a job ID for array jobs so that each child array job completes sequentially, starting at index 0. You can also specify an `N_TO_N` type dependency with a job ID for array jobs. In that case, each index child of this job must wait for the corresponding index child of each dependency to complete before it can begin.", + "JobDefinition": "The job definition used by this job. This value can be one of `name` , `name:revision` , or the Amazon Resource Name (ARN) for the job definition. If name is specified without a revision then the latest active revision is used.", + "JobName": "The name of the job. It can be up to 128 letters long. The first character must be alphanumeric, can contain uppercase and lowercase letters, numbers, hyphens (-), and underscores (_).", + "Parameters": "Additional parameters passed to the job that replace parameter substitution placeholders that are set in the job definition. Parameters are specified as a key and value pair mapping. 
Parameters included here override any corresponding parameter defaults from the job definition.", + "RetryStrategy": "The retry strategy to use for failed jobs. When a retry strategy is specified here, it overrides the retry strategy defined in the job definition." + } + }, + "AWS::Pipes::Pipe.PipeTargetCloudWatchLogsParameters": { + "attributes": {}, + "description": "The parameters for using an CloudWatch Logs log stream as a target.", + "properties": { + "LogStreamName": "The name of the log stream.", + "Timestamp": "The time the event occurred, expressed as the number of milliseconds after Jan 1, 1970 00:00:00 UTC." + } + }, + "AWS::Pipes::Pipe.PipeTargetEcsTaskParameters": { + "attributes": {}, + "description": "The parameters for using an Amazon ECS task as a target.", + "properties": { + "CapacityProviderStrategy": "The capacity provider strategy to use for the task.\n\nIf a `capacityProviderStrategy` is specified, the `launchType` parameter must be omitted. If no `capacityProviderStrategy` or launchType is specified, the `defaultCapacityProviderStrategy` for the cluster is used.", + "EnableECSManagedTags": "Specifies whether to enable Amazon ECS managed tags for the task. For more information, see [Tagging Your Amazon ECS Resources](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-using-tags.html) in the Amazon Elastic Container Service Developer Guide.", + "EnableExecuteCommand": "Whether or not to enable the execute command functionality for the containers in this task. If true, this enables execute command functionality on all containers in the task.", + "Group": "Specifies an Amazon ECS task group for the task. The maximum length is 255 characters.", + "LaunchType": "Specifies the launch type on which your task is running. The launch type that you specify here must match one of the launch type (compatibilities) of the target task. The `FARGATE` value is supported only in the Regions where AWS Fargate with Amazon ECS is supported. For more information, see [AWS Fargate on Amazon ECS](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/AWS-Fargate.html) in the *Amazon Elastic Container Service Developer Guide* .", + "NetworkConfiguration": "Use this structure if the Amazon ECS task uses the `awsvpc` network mode. This structure specifies the VPC subnets and security groups associated with the task, and whether a public IP address is to be used. This structure is required if `LaunchType` is `FARGATE` because the `awsvpc` mode is required for Fargate tasks.\n\nIf you specify `NetworkConfiguration` when the target ECS task does not use the `awsvpc` network mode, the task fails.", + "Overrides": "The overrides that are associated with a task.", + "PlacementConstraints": "An array of placement constraint objects to use for the task. You can specify up to 10 constraints per task (including constraints in the task definition and those specified at runtime).", + "PlacementStrategy": "The placement strategy objects to use for the task. You can specify a maximum of five strategy rules per task.", + "PlatformVersion": "Specifies the platform version for the task. Specify only the numeric portion of the platform version, such as `1.1.0` .\n\nThis structure is used only if `LaunchType` is `FARGATE` . 
For more information about valid platform versions, see [AWS Fargate Platform Versions](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/platform_versions.html) in the *Amazon Elastic Container Service Developer Guide* .", + "PropagateTags": "Specifies whether to propagate the tags from the task definition to the task. If no value is specified, the tags are not propagated. Tags can only be propagated to the task during task creation. To add tags to a task after task creation, use the `TagResource` API action.", + "ReferenceId": "The reference ID to use for the task.", + "Tags": "The metadata that you apply to the task to help you categorize and organize them. Each tag consists of a key and an optional value, both of which you define. To learn more, see [RunTask](https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_RunTask.html#ECS-RunTask-request-tags) in the Amazon ECS API Reference.", + "TaskCount": "The number of tasks to create based on `TaskDefinition` . The default is 1.", + "TaskDefinitionArn": "The ARN of the task definition to use if the event target is an Amazon ECS task." + } + }, + "AWS::Pipes::Pipe.PipeTargetEventBridgeEventBusParameters": { + "attributes": {}, + "description": "The parameters for using an EventBridge event bus as a target.", + "properties": { + "DetailType": "A free-form string, with a maximum of 128 characters, used to decide what fields to expect in the event detail.", + "EndpointId": "The URL subdomain of the endpoint. For example, if the URL for Endpoint is https://abcde.veo.endpoints.event.amazonaws.com, then the EndpointId is `abcde.veo` .\n\n> When using Java, you must include `auth-crt` on the class path.", + "Resources": "AWS resources, identified by Amazon Resource Name (ARN), which the event primarily concerns. Any number, including zero, may be present.", + "Source": "The source of the event.", + "Time": "The time stamp of the event, per [RFC3339](https://docs.aws.amazon.com/https://www.rfc-editor.org/rfc/rfc3339.txt) . If no time stamp is provided, the time stamp of the [PutEvents](https://docs.aws.amazon.com/eventbridge/latest/APIReference/API_PutEvents.html) call is used." + } + }, + "AWS::Pipes::Pipe.PipeTargetHttpParameters": { + "attributes": {}, + "description": "These are custom parameter to be used when the target is an API Gateway REST APIs or EventBridge ApiDestinations.", + "properties": { + "HeaderParameters": "The headers that need to be sent as part of request invoking the API Gateway REST API or EventBridge ApiDestination.", + "PathParameterValues": "The path parameter values to be used to populate API Gateway REST API or EventBridge ApiDestination path wildcards (\"*\").", + "QueryStringParameters": "The query string keys/values that need to be sent as part of request invoking the API Gateway REST API or EventBridge ApiDestination." + } + }, + "AWS::Pipes::Pipe.PipeTargetKinesisStreamParameters": { + "attributes": {}, + "description": "The parameters for using a Kinesis stream as a source.", + "properties": { + "PartitionKey": "Determines which shard in the stream the data record is assigned to. Partition keys are Unicode strings with a maximum length limit of 256 characters for each key. Amazon Kinesis Data Streams uses the partition key as input to a hash function that maps the partition key and associated data to a specific shard. Specifically, an MD5 hash function is used to map partition keys to 128-bit integer values and to map associated data records to shards. 
As a result of this hashing mechanism, all data records with the same partition key map to the same shard within the stream." + } + }, + "AWS::Pipes::Pipe.PipeTargetLambdaFunctionParameters": { + "attributes": {}, + "description": "The parameters for using a Lambda function as a target.", + "properties": { + "InvocationType": "Choose from the following options.\n\n- `RequestResponse` (default) - Invoke the function synchronously. Keep the connection open until the function returns a response or times out. The API response includes the function response and additional data.\n- `Event` - Invoke the function asynchronously. Send events that fail multiple times to the function's dead-letter queue (if it's configured). The API response only includes a status code.\n- `DryRun` - Validate parameter values and verify that the user or role has permission to invoke the function." + } + }, + "AWS::Pipes::Pipe.PipeTargetParameters": { + "attributes": {}, + "description": "The parameters required to set up a target for your pipe.", + "properties": { + "BatchJobParameters": "The parameters for using an AWS Batch job as a target.", + "CloudWatchLogsParameters": "The parameters for using a CloudWatch Logs log stream as a target.", + "EcsTaskParameters": "The parameters for using an Amazon ECS task as a target.", + "EventBridgeEventBusParameters": "The parameters for using an EventBridge event bus as a target.", + "HttpParameters": "These are custom parameters to be used when the target is an API Gateway REST API or EventBridge ApiDestination.", + "InputTemplate": "Valid JSON text passed to the target. In this case, nothing from the event itself is passed to the target. For more information, see [The JavaScript Object Notation (JSON) Data Interchange Format](https://www.rfc-editor.org/rfc/rfc7159.txt) .", + "KinesisStreamParameters": "The parameters for using a Kinesis stream as a target.", + "LambdaFunctionParameters": "The parameters for using a Lambda function as a target.", + "RedshiftDataParameters": "These are custom parameters to be used when the target is an Amazon Redshift cluster to invoke the Amazon Redshift Data API ExecuteStatement.", + "SageMakerPipelineParameters": "The parameters for using a SageMaker pipeline as a target.", + "SqsQueueParameters": "The parameters for using an Amazon SQS queue as a target.", + "StepFunctionStateMachineParameters": "The parameters for using a Step Functions state machine as a target." + } + }, + "AWS::Pipes::Pipe.PipeTargetRedshiftDataParameters": { + "attributes": {}, + "description": "These are custom parameters to be used when the target is an Amazon Redshift cluster to invoke the Amazon Redshift Data API ExecuteStatement.", + "properties": { + "Database": "The name of the database. Required when authenticating using temporary credentials.", + "DbUser": "The database user name. Required when authenticating using temporary credentials.", + "SecretManagerArn": "The name or ARN of the secret that enables access to the database. Required when authenticating using AWS Secrets Manager .", + "Sqls": "The SQL statement text to run.", + "StatementName": "The name of the SQL statement. You can name the SQL statement when you create it to identify the query.", + "WithEvent": "Indicates whether to send an event back to EventBridge after the SQL statement runs."
+ } + }, + "AWS::Pipes::Pipe.PipeTargetSageMakerPipelineParameters": { + "attributes": {}, + "description": "The parameters for using a SageMaker pipeline as a target.", + "properties": { + "PipelineParameterList": "List of parameter names and values for SageMaker Model Building Pipeline execution." + } + }, + "AWS::Pipes::Pipe.PipeTargetSqsQueueParameters": { + "attributes": {}, + "description": "The parameters for using an Amazon SQS queue as a target.", + "properties": { + "MessageDeduplicationId": "This parameter applies only to FIFO (first-in-first-out) queues.\n\nThe token used for deduplication of sent messages.", + "MessageGroupId": "The FIFO message group ID to use as the target." + } + }, + "AWS::Pipes::Pipe.PipeTargetStateMachineParameters": { + "attributes": {}, + "description": "The parameters for using a Step Functions state machine as a target.", + "properties": { + "InvocationType": "Specify whether to wait for the state machine to finish or not." + } + }, + "AWS::Pipes::Pipe.PlacementConstraint": { + "attributes": {}, + "description": "An object representing a constraint on task placement. To learn more, see [Task Placement Constraints](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-placement-constraints.html) in the Amazon Elastic Container Service Developer Guide.", + "properties": { + "Expression": "A cluster query language expression to apply to the constraint. You cannot specify an expression if the constraint type is `distinctInstance` . To learn more, see [Cluster Query Language](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/cluster-query-language.html) in the Amazon Elastic Container Service Developer Guide.", + "Type": "The type of constraint. Use `distinctInstance` to ensure that each task in a particular group is running on a different container instance. Use `memberOf` to restrict the selection to a group of valid candidates." + } + }, + "AWS::Pipes::Pipe.PlacementStrategy": { + "attributes": {}, + "description": "The task placement strategy for a task or service. To learn more, see [Task Placement Strategies](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-placement-strategies.html) in the Amazon Elastic Container Service Developer Guide.", + "properties": { + "Field": "The field to apply the placement strategy against. For the spread placement strategy, valid values are instanceId (or host, which has the same effect), or any platform or custom attribute that is applied to a container instance, such as attribute:ecs.availability-zone. For the binpack placement strategy, valid values are cpu and memory. For the random placement strategy, this field is not used.", + "Type": "The type of placement strategy. The random placement strategy randomly places tasks on available candidates. The spread placement strategy spreads placement across available candidates evenly based on the field parameter. The binpack strategy places tasks on available candidates that have the least available amount of the resource that is specified with the field parameter. For example, if you binpack on memory, a task is placed on the instance with the least amount of remaining memory (but still enough to run the task)."
+ } + }, + "AWS::Pipes::Pipe.QueryStringParametersMap": { + "attributes": {}, + "description": "The query string keys/values that need to be sent as part of request invoking the API Gateway REST API or EventBridge ApiDestination.", + "properties": { + "Key": "The key of the key-value pair.", + "Value": "The value of the key-value pair." + } + }, + "AWS::Pipes::Pipe.SageMakerPipelineParameter": { + "attributes": {}, + "description": "Name/Value pair of a parameter to start execution of a SageMaker Model Building Pipeline.", + "properties": { + "Name": "Name of parameter to start execution of a SageMaker Model Building Pipeline.", + "Value": "Value of parameter to start execution of a SageMaker Model Building Pipeline." + } + }, + "AWS::Pipes::Pipe.SelfManagedKafkaAccessConfigurationCredentials": { + "attributes": {}, + "description": "The AWS Secrets Manager secret that stores your stream credentials.", + "properties": { + "BasicAuth": "The ARN of the Secrets Manager secret.", + "ClientCertificateTlsAuth": "The ARN of the Secrets Manager secret.", + "SaslScram256Auth": "The ARN of the Secrets Manager secret.", + "SaslScram512Auth": "The ARN of the Secrets Manager secret." + } + }, + "AWS::Pipes::Pipe.SelfManagedKafkaAccessConfigurationVpc": { + "attributes": {}, + "description": "This structure specifies the VPC subnets and security groups for the stream, and whether a public IP address is to be used.", + "properties": { + "SecurityGroup": "Specifies the security groups associated with the stream. These security groups must all be in the same VPC. You can specify as many as five security groups. If you do not specify a security group, the default security group for the VPC is used.", + "Subnets": "Specifies the subnets associated with the stream. These subnets must all be in the same VPC. You can specify as many as 16 subnets." + } + }, "AWS::QLDB::Ledger": { "attributes": { "Ref": "`Ref` returns the resource name. For example:\n\n`{ \"Ref\": \" myQLDBLedger \" }`\n\nFor the resource with the logical ID `myQLDBLedger` , `Ref` returns the Amazon QLDB ledger name." @@ -38358,7 +38867,7 @@ "EnableCloudwatchLogsExports": "The list of log types that need to be enabled for exporting to CloudWatch Logs. The values in the list depend on the DB engine being used. For more information, see [Publishing Database Logs to Amazon CloudWatch Logs](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/USER_LogAccess.html#USER_LogAccess.Procedural.UploadtoCloudWatch) in the *Amazon Aurora User Guide* .\n\n*Aurora MySQL*\n\nValid values: `audit` , `error` , `general` , `slowquery`\n\n*Aurora PostgreSQL*\n\nValid values: `postgresql`\n\nValid for: Aurora DB clusters and Multi-AZ DB clusters", "EnableHttpEndpoint": "A value that indicates whether to enable the HTTP endpoint for an Aurora Serverless DB cluster. By default, the HTTP endpoint is disabled.\n\nWhen enabled, the HTTP endpoint provides a connectionless web service API for running SQL queries on the Aurora Serverless DB cluster. You can also query your database from inside the RDS console with the query editor.\n\nFor more information, see [Using the Data API for Aurora Serverless](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/data-api.html) in the *Amazon Aurora User Guide* .\n\nValid for: Aurora DB clusters only", "EnableIAMDatabaseAuthentication": "A value that indicates whether to enable mapping of AWS Identity and Access Management (IAM) accounts to database accounts. 
By default, mapping is disabled.\n\nFor more information, see [IAM Database Authentication](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/UsingWithRDS.IAMDBAuth.html) in the *Amazon Aurora User Guide.*\n\nValid for: Aurora DB clusters only", - "Engine": "The name of the database engine to be used for this DB cluster.\n\nValid Values: `aurora` (for MySQL 5.6-compatible Aurora), `aurora-mysql` (for MySQL 5.7-compatible Aurora), and `aurora-postgresql`\n\nValid for: Aurora DB clusters and Multi-AZ DB clusters", + "Engine": "The name of the database engine to be used for this DB cluster.\n\nValid Values:\n\n- `aurora` (for MySQL 5.6-compatible Aurora)\n- `aurora-mysql` (for MySQL 5.7-compatible Aurora)\n- `aurora-postgresql`\n- `mysql`\n- `postgres`\n\nValid for: Aurora DB clusters and Multi-AZ DB clusters", "EngineMode": "The DB engine mode of the DB cluster, either `provisioned` , `serverless` , `parallelquery` , `global` , or `multimaster` .\n\nThe `parallelquery` engine mode isn't required for Aurora MySQL version 1.23 and higher 1.x versions, and version 2.09 and higher 2.x versions.\n\nThe `global` engine mode isn't required for Aurora MySQL version 1.22 and higher 1.x versions, and `global` engine mode isn't required for any 2.x versions.\n\nThe `multimaster` engine mode only applies for DB clusters created with Aurora MySQL version 5.6.10a.\n\nFor Aurora PostgreSQL, the `global` engine mode isn't required, and both the `parallelquery` and the `multimaster` engine modes currently aren't supported.\n\nLimitations and requirements apply to some DB engine modes. For more information, see the following sections in the *Amazon Aurora User Guide* :\n\n- [Limitations of Aurora Serverless](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-serverless.html#aurora-serverless.limitations)\n- [Limitations of Parallel Query](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-mysql-parallel-query.html#aurora-mysql-parallel-query-limitations)\n- [Limitations of Aurora Global Databases](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-global-database.html#aurora-global-database.limitations)\n- [Limitations of Multi-Master Clusters](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-multi-master.html#aurora-multi-master-limitations)\n\nValid for: Aurora DB clusters only", "EngineVersion": "The version number of the database engine to use.\n\nTo list all of the available engine versions for `aurora` (for MySQL 5.6-compatible Aurora), use the following command:\n\n`aws rds describe-db-engine-versions --engine aurora --query \"DBEngineVersions[].EngineVersion\"`\n\nTo list all of the available engine versions for `aurora-mysql` (for MySQL 5.7-compatible Aurora), use the following command:\n\n`aws rds describe-db-engine-versions --engine aurora-mysql --query \"DBEngineVersions[].EngineVersion\"`\n\nTo list all of the available engine versions for `aurora-postgresql` , use the following command:\n\n`aws rds describe-db-engine-versions --engine aurora-postgresql --query \"DBEngineVersions[].EngineVersion\"`\n\nValid for: Aurora DB clusters and Multi-AZ DB clusters", "GlobalClusterIdentifier": "If you are configuring an Aurora global database cluster and want your Aurora DB cluster to be a secondary member in the global database cluster, specify the global cluster ID of the global database cluster. 
To define the primary database cluster of the global cluster, use the [AWS::RDS::GlobalCluster](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-rds-globalcluster.html) resource.\n\nIf you aren't configuring a global database cluster, don't specify this property.\n\n> To remove the DB cluster from a global database cluster, specify an empty value for the `GlobalClusterIdentifier` property. \n\nFor information about Aurora global databases, see [Working with Amazon Aurora Global Databases](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-global-database.html) in the *Amazon Aurora User Guide* .\n\nValid for: Aurora DB clusters only", @@ -44594,11 +45103,11 @@ "description": "Creates a new secret. A *secret* can be a password, a set of credentials such as a user name and password, an OAuth token, or other secret information that you store in an encrypted form in Secrets Manager.\n\nTo retrieve a secret in a CloudFormation template, use a *dynamic reference* . For more information, see [Retrieve a secret in an AWS CloudFormation resource](https://docs.aws.amazon.com/secretsmanager/latest/userguide/cfn-example_reference-secret.html) .\n\nA common scenario is to first create a secret with `GenerateSecretString` , which generates a password, and then use a dynamic reference to retrieve the username and password from the secret to use as credentials for a new database. Follow these steps, as shown in the examples below:\n\n- Define the secret without referencing the service or database. You can't reference the service or database because it doesn't exist yet. The secret must contain a username and password.\n- Next, define the service or database. Include the reference to the secret to use stored credentials to define the database admin user and password.\n- Finally, define a `SecretTargetAttachment` resource type to finish configuring the secret with the required database engine type and the connection details of the service or database. The rotation function requires the details, if you attach one later by defining a [AWS::SecretsManager::RotationSchedule](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-secretsmanager-rotationschedule.html) resource type.\n\nFor information about creating a secret in the console, see [Create a secret](https://docs.aws.amazon.com/secretsmanager/latest/userguide/manage_create-basic-secret.html) . For information about creating a secret using the CLI or SDK, see [CreateSecret](https://docs.aws.amazon.com/secretsmanager/latest/apireference/API_CreateSecret.html) .\n\nFor information about retrieving a secret in code, see [Retrieve secrets from Secrets Manager](https://docs.aws.amazon.com/secretsmanager/latest/userguide/retrieving-secrets.html) .\n\n> Do not create a dynamic reference using a backslash `(\\)` as the final value. AWS CloudFormation cannot resolve those references, which causes a resource failure.", "properties": { "Description": "The description of the secret.", - "GenerateSecretString": "A structure that specifies how to generate a password to encrypt and store in the secret.\n\nEither `GenerateSecretString` or `SecretString` must have a value, but not both. They cannot both be empty.\n\nWe recommend that you specify the maximum length and include every character type that the system you are generating a password for can support.", + "GenerateSecretString": "A structure that specifies how to generate a password to encrypt and store in the secret. 
To include a specific string in the secret, use `SecretString` instead. If you omit both `GenerateSecretString` and `SecretString` , you create an empty secret.\n\nWe recommend that you specify the maximum length and include every character type that the system you are generating a password for can support.", "KmsKeyId": "The ARN, key ID, or alias of the AWS KMS key that Secrets Manager uses to encrypt the secret value in the secret. An alias is always prefixed by `alias/` , for example `alias/aws/secretsmanager` . For more information, see [About aliases](https://docs.aws.amazon.com/kms/latest/developerguide/alias-about.html) .\n\nTo use a AWS KMS key in a different account, use the key ARN or the alias ARN.\n\nIf you don't specify this value, then Secrets Manager uses the key `aws/secretsmanager` . If that key doesn't yet exist, then Secrets Manager creates it for you automatically the first time it encrypts the secret value.\n\nIf the secret is in a different AWS account from the credentials calling the API, then you can't use `aws/secretsmanager` to encrypt the secret, and you must create and use a customer managed AWS KMS key.", "Name": "The name of the new secret.\n\nThe secret name can contain ASCII letters, numbers, and the following characters: /_+=.@-\n\nDo not end your secret name with a hyphen followed by six characters. If you do so, you risk confusion and unexpected results when searching for a secret by partial ARN. Secrets Manager automatically adds a hyphen and six random characters after the secret name at the end of the ARN.", "ReplicaRegions": "A custom type that specifies a `Region` and the `KmsKeyId` for a replica secret.", - "SecretString": "The text to encrypt and store in the secret. We recommend you use a JSON structure of key/value pairs for your secret value.\n\nEither `GenerateSecretString` or `SecretString` must have a value, but not both. They cannot both be empty. We recommend that you use the `GenerateSecretString` property to generate a random password.", + "SecretString": "The text to encrypt and store in the secret. We recommend you use a JSON structure of key/value pairs for your secret value. To generate a random password, use `GenerateSecretString` instead. If you omit both `GenerateSecretString` and `SecretString` , you create an empty secret.", "Tags": "A list of tags to attach to the secret. Each tag is a key and value pair of strings in a JSON text string, for example:\n\n`[{\"Key\":\"CostCenter\",\"Value\":\"12345\"},{\"Key\":\"environment\",\"Value\":\"production\"}]`\n\nSecrets Manager tag key names are case sensitive. A tag with the key \"ABC\" is a different tag from one with key \"abc\".\n\nIf you check tags in permissions policies as part of your security strategy, then adding or removing a tag can change permissions. If the completion of this operation would result in you losing your permissions for this secret, then Secrets Manager blocks the operation and returns an `Access Denied` error. 
For more information, see [Control access to secrets using tags](https://docs.aws.amazon.com/secretsmanager/latest/userguide/auth-and-access_examples.html#tag-secrets-abac) and [Limit access to identities with tags that match secrets' tags](https://docs.aws.amazon.com/secretsmanager/latest/userguide/auth-and-access_examples.html#auth-and-access_tags2) .\n\nFor information about how to format a JSON parameter for the various command line tool environments, see [Using JSON for Parameters](https://docs.aws.amazon.com/cli/latest/userguide/cli-using-param.html#cli-using-param-json) . If your command-line tool or SDK requires quotation marks around the parameter, you should use single quotes to avoid confusion with the double quotes required in the JSON text.\n\nThe following restrictions apply to tags:\n\n- Maximum number of tags per secret: 50\n- Maximum key length: 127 Unicode characters in UTF-8\n- Maximum value length: 255 Unicode characters in UTF-8\n- Tag keys and values are case sensitive.\n- Do not use the `aws:` prefix in your tag names or values because AWS reserves it for AWS use. You can't edit or delete tag names or values with this prefix. Tags with this prefix do not count against your tags per secret limit.\n- If you use your tagging schema across multiple services and resources, other services might have restrictions on allowed characters. Generally allowed characters: letters, spaces, and numbers representable in UTF-8, plus the following special characters: + - = . _ : / @." } }, @@ -45240,7 +45749,7 @@ }, "AWS::SupportApp::SlackChannelConfiguration": { "attributes": {}, - "description": "You can use the `AWS::SupportApp::SlackChannelConfiguration` resource to specify your AWS account when you configure the AWS Support App . This resource includes the following information:\n\n- The Slack channel name and ID\n- The team ID in Slack\n- The Amazon Resource Name (ARN) of the AWS Identity and Access Management ( IAM ) role\n- Whether you want the AWS Support App to notify you when your support cases are created, updated, resolved, or reopened\n- The case severity that you want to get notified for\n\nFor more information, see [AWS Support App in Slack](https://docs.aws.amazon.com/awssupport/latest/user/aws-support-app-for-slack.html) in the *AWS Support User Guide* .", + "description": "You can use the `AWS::SupportApp::SlackChannelConfiguration` resource to specify your AWS account when you configure the AWS Support App . This resource includes the following information:\n\n- The Slack channel name and ID\n- The team ID in Slack\n- The Amazon Resource Name (ARN) of the AWS Identity and Access Management ( IAM ) role\n- Whether you want the AWS Support App to notify you when your support cases are created, updated, resolved, or reopened\n- The case severity that you want to get notified for\n\nFor more information, see the following topics in the *AWS Support User Guide* :\n\n- [AWS Support App in Slack](https://docs.aws.amazon.com/awssupport/latest/user/aws-support-app-for-slack.html)\n- [Creating AWS Support App in Slack resources with AWS CloudFormation](https://docs.aws.amazon.com/awssupport/latest/user/creating-resources-with-cloudformation.html)", "properties": { "ChannelId": "The channel ID in Slack. This ID identifies a channel within a Slack workspace.", "ChannelName": "The channel name in Slack. 
This is the channel where you invite the AWS Support App .", @@ -45256,7 +45765,7 @@ "attributes": { "Ref": "`Ref` returns the ID of the Slack workspace, such as `T012ABCDEFG` .\n\nFor the AWS Support App Slack workspace configuration, `Ref` returns the value of the Slack workspace ID." }, - "description": "You can use the `AWS::SupportApp::SlackWorkspaceConfiguration` resource to specify your Slack workspace configuration. This resource configures your AWS account so that you can use the specified Slack workspace in the AWS Support App . This resource includes the following information:\n\n- The team ID for the Slack workspace\n- The version ID of the resource to use with AWS CloudFormation", + "description": "You can use the `AWS::SupportApp::SlackWorkspaceConfiguration` resource to specify your Slack workspace configuration. This resource configures your AWS account so that you can use the specified Slack workspace in the AWS Support App . This resource includes the following information:\n\n- The team ID for the Slack workspace\n- The version ID of the resource to use with AWS CloudFormation\n\nFor more information, see the following topics in the *AWS Support User Guide* :\n\n- [AWS Support App in Slack](https://docs.aws.amazon.com/awssupport/latest/user/aws-support-app-for-slack.html)\n- [Creating AWS Support App in Slack resources with AWS CloudFormation](https://docs.aws.amazon.com/awssupport/latest/user/creating-resources-with-cloudformation.html)", "properties": { "TeamId": "The team ID in Slack. This ID uniquely identifies a Slack workspace, such as `T012ABCDEFG` .", "VersionId": "An identifier used to update an existing Slack workspace configuration in AWS CloudFormation , such as `100` ." @@ -46182,7 +46691,7 @@ "ManagedByFirewallManager": "Indicates whether the logging configuration was created by AWS Firewall Manager , as part of an AWS WAF policy configuration. If true, only Firewall Manager can modify or delete the configuration.", "Ref": "`Ref` returns the Amazon Resource Name (ARN) of the web ACL." }, - "description": "Defines an association between logging destinations and a web ACL resource, for logging from AWS WAF . As part of the association, you can specify parts of the standard logging fields to keep out of the logs and you can specify filters so that you log only a subset of the logging records.\n\n> You can define one logging destination per web ACL. \n\nYou can access information about the traffic that AWS WAF inspects using the following steps:\n\n- Create your logging destination. You can use an Amazon CloudWatch Logs log group, an Amazon Simple Storage Service (Amazon S3) bucket, or an Amazon Kinesis Data Firehose. For information about configuring logging destinations and the permissions that are required for each, see [Logging web ACL traffic information](https://docs.aws.amazon.com/waf/latest/developerguide/logging.html) in the *AWS WAF Developer Guide* .\n- Associate your logging destination to your web ACL using a `PutLoggingConfiguration` request.\n\nWhen you successfully enable logging using a `PutLoggingConfiguration` request, AWS WAF creates an additional role or policy that is required to write logs to the logging destination. For an Amazon CloudWatch Logs log group, AWS WAF creates a resource policy on the log group. For an Amazon S3 bucket, AWS WAF creates a bucket policy. 
For an Amazon Kinesis Data Firehose, AWS WAF creates a service-linked role.\n\nFor additional information about web ACL logging, see [Logging web ACL traffic information](https://docs.aws.amazon.com/waf/latest/developerguide/logging.html) in the *AWS WAF Developer Guide* .", + "description": "Defines an association between logging destinations and a web ACL resource, for logging from AWS WAF . As part of the association, you can specify parts of the standard logging fields to keep out of the logs and you can specify filters so that you log only a subset of the logging records.\n\n> You can define one logging destination per web ACL. \n\nYou can access information about the traffic that AWS WAF inspects using the following steps:\n\n- Create your logging destination. You can use an Amazon CloudWatch Logs log group, an Amazon Simple Storage Service (Amazon S3) bucket, or an Amazon Kinesis Data Firehose.\n\nThe name that you give the destination must start with `aws-waf-logs-` . Depending on the type of destination, you might need to configure additional settings or permissions.\n\nFor configuration requirements and pricing information for each destination type, see [Logging web ACL traffic](https://docs.aws.amazon.com/waf/latest/developerguide/logging.html) in the *AWS WAF Developer Guide* .\n- Associate your logging destination to your web ACL using a `PutLoggingConfiguration` request.\n\nWhen you successfully enable logging using a `PutLoggingConfiguration` request, AWS WAF creates an additional role or policy that is required to write logs to the logging destination. For an Amazon CloudWatch Logs log group, AWS WAF creates a resource policy on the log group. For an Amazon S3 bucket, AWS WAF creates a bucket policy. For an Amazon Kinesis Data Firehose, AWS WAF creates a service-linked role.\n\nFor additional information about web ACL logging, see [Logging web ACL traffic information](https://docs.aws.amazon.com/waf/latest/developerguide/logging.html) in the *AWS WAF Developer Guide* .", "properties": { "LogDestinationConfigs": "The logging destination configuration that you want to associate with the web ACL.\n\n> You can associate one logging destination to a web ACL.", "LoggingFilter": "Filtering that specifies which web requests are kept in the logs and which are dropped. You can filter on the rule action and on the web request labels that were applied by matching rules during web ACL evaluation.",
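A minimal sketch of how the `aws-waf-logs-` naming requirement and the `LogDestinationConfigs` property described above could be wired up in CDK TypeScript, using the generic `CfnResource` escape hatch from `aws-cdk-lib`. The `ResourceArn` property (the web ACL ARN) and the example ARN value are assumptions rather than text from this hunk, so treat the snippet as illustrative only.

    import { App, Stack, CfnResource } from 'aws-cdk-lib';
    import * as logs from 'aws-cdk-lib/aws-logs';

    const app = new App();
    const stack = new Stack(app, 'WafLoggingSketch');

    // Per the description above, the logging destination name must start with `aws-waf-logs-`.
    const wafLogGroup = new logs.LogGroup(stack, 'WafLogGroup', {
      logGroupName: 'aws-waf-logs-example', // assumed name; only the prefix is required
    });

    // Generic escape hatch; the property names mirror the cfn-docs entries in this hunk,
    // except `ResourceArn` (the web ACL ARN), which is assumed here.
    new CfnResource(stack, 'WebAclLogging', {
      type: 'AWS::WAFv2::LoggingConfiguration',
      properties: {
        ResourceArn: 'arn:aws:wafv2:us-east-1:123456789012:regional/webacl/example/EXAMPLE-ID',
        LogDestinationConfigs: [wafLogGroup.logGroupArn],
      },
    });

Running `cdk synth` on such a stack would emit the same `AWS::WAFv2::LoggingConfiguration` property names that the documentation strings above describe.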