From ea0d9a5ed2ac573377c7c8ebd87fe5f63827278d Mon Sep 17 00:00:00 2001 From: github-actions Date: Tue, 2 Jul 2024 16:04:43 +0000 Subject: [PATCH] chore(schema): update --- samtranslator/schema/schema.json | 86 ++++---- schema_source/cloudformation-docs.json | 255 +++++++++++++++++++---- schema_source/cloudformation.schema.json | 86 ++++---- 3 files changed, 315 insertions(+), 112 deletions(-) diff --git a/samtranslator/schema/schema.json b/samtranslator/schema/schema.json index 16c7fa5fb..aac239859 100644 --- a/samtranslator/schema/schema.json +++ b/samtranslator/schema/schema.json @@ -20633,7 +20633,7 @@ "type": "number" }, "ResourceId": { - "markdownDescription": "The identifier of the resource associated with the scalable target. This string consists of the resource type and unique identifier.\n\n- ECS service - The resource type is `service` and the unique identifier is the cluster name and service name. Example: `service/my-cluster/my-service` .\n- Spot Fleet - The resource type is `spot-fleet-request` and the unique identifier is the Spot Fleet request ID. Example: `spot-fleet-request/sfr-73fbd2ce-aa30-494c-8788-1cee4EXAMPLE` .\n- EMR cluster - The resource type is `instancegroup` and the unique identifier is the cluster ID and instance group ID. Example: `instancegroup/j-2EEZNYKUA1NTV/ig-1791Y4E1L8YI0` .\n- AppStream 2.0 fleet - The resource type is `fleet` and the unique identifier is the fleet name. Example: `fleet/sample-fleet` .\n- DynamoDB table - The resource type is `table` and the unique identifier is the table name. Example: `table/my-table` .\n- DynamoDB global secondary index - The resource type is `index` and the unique identifier is the index name. Example: `table/my-table/index/my-table-index` .\n- Aurora DB cluster - The resource type is `cluster` and the unique identifier is the cluster name. Example: `cluster:my-db-cluster` .\n- SageMaker endpoint variant - The resource type is `variant` and the unique identifier is the resource ID. Example: `endpoint/my-end-point/variant/KMeansClustering` .\n- Custom resources are not supported with a resource type. This parameter must specify the `OutputValue` from the CloudFormation template stack used to access the resources. The unique identifier is defined by the service provider. More information is available in our [GitHub repository](https://docs.aws.amazon.com/https://github.com/aws/aws-auto-scaling-custom-resource) .\n- Amazon Comprehend document classification endpoint - The resource type and unique identifier are specified using the endpoint ARN. Example: `arn:aws:comprehend:us-west-2:123456789012:document-classifier-endpoint/EXAMPLE` .\n- Amazon Comprehend entity recognizer endpoint - The resource type and unique identifier are specified using the endpoint ARN. Example: `arn:aws:comprehend:us-west-2:123456789012:entity-recognizer-endpoint/EXAMPLE` .\n- Lambda provisioned concurrency - The resource type is `function` and the unique identifier is the function name with a function version or alias name suffix that is not `$LATEST` . Example: `function:my-function:prod` or `function:my-function:1` .\n- Amazon Keyspaces table - The resource type is `table` and the unique identifier is the table name. Example: `keyspace/mykeyspace/table/mytable` .\n- Amazon MSK cluster - The resource type and unique identifier are specified using the cluster ARN. 
Example: `arn:aws:kafka:us-east-1:123456789012:cluster/demo-cluster-1/6357e0b2-0e6a-4b86-a0b4-70df934c2e31-5` .\n- Amazon ElastiCache replication group - The resource type is `replication-group` and the unique identifier is the replication group name. Example: `replication-group/mycluster` .\n- Neptune cluster - The resource type is `cluster` and the unique identifier is the cluster name. Example: `cluster:mycluster` .\n- SageMaker serverless endpoint - The resource type is `variant` and the unique identifier is the resource ID. Example: `endpoint/my-end-point/variant/KMeansClustering` .\n- SageMaker inference component - The resource type is `inference-component` and the unique identifier is the resource ID. Example: `inference-component/my-inference-component` .", + "markdownDescription": "The identifier of the resource associated with the scalable target. This string consists of the resource type and unique identifier.\n\n- ECS service - The resource type is `service` and the unique identifier is the cluster name and service name. Example: `service/my-cluster/my-service` .\n- Spot Fleet - The resource type is `spot-fleet-request` and the unique identifier is the Spot Fleet request ID. Example: `spot-fleet-request/sfr-73fbd2ce-aa30-494c-8788-1cee4EXAMPLE` .\n- EMR cluster - The resource type is `instancegroup` and the unique identifier is the cluster ID and instance group ID. Example: `instancegroup/j-2EEZNYKUA1NTV/ig-1791Y4E1L8YI0` .\n- AppStream 2.0 fleet - The resource type is `fleet` and the unique identifier is the fleet name. Example: `fleet/sample-fleet` .\n- DynamoDB table - The resource type is `table` and the unique identifier is the table name. Example: `table/my-table` .\n- DynamoDB global secondary index - The resource type is `index` and the unique identifier is the index name. Example: `table/my-table/index/my-table-index` .\n- Aurora DB cluster - The resource type is `cluster` and the unique identifier is the cluster name. Example: `cluster:my-db-cluster` .\n- SageMaker endpoint variant - The resource type is `variant` and the unique identifier is the resource ID. Example: `endpoint/my-end-point/variant/KMeansClustering` .\n- Custom resources are not supported with a resource type. This parameter must specify the `OutputValue` from the CloudFormation template stack used to access the resources. The unique identifier is defined by the service provider. More information is available in our [GitHub repository](https://docs.aws.amazon.com/https://github.com/aws/aws-auto-scaling-custom-resource) .\n- Amazon Comprehend document classification endpoint - The resource type and unique identifier are specified using the endpoint ARN. Example: `arn:aws:comprehend:us-west-2:123456789012:document-classifier-endpoint/EXAMPLE` .\n- Amazon Comprehend entity recognizer endpoint - The resource type and unique identifier are specified using the endpoint ARN. Example: `arn:aws:comprehend:us-west-2:123456789012:entity-recognizer-endpoint/EXAMPLE` .\n- Lambda provisioned concurrency - The resource type is `function` and the unique identifier is the function name with a function version or alias name suffix that is not `$LATEST` . Example: `function:my-function:prod` or `function:my-function:1` .\n- Amazon Keyspaces table - The resource type is `table` and the unique identifier is the table name. Example: `keyspace/mykeyspace/table/mytable` .\n- Amazon MSK cluster - The resource type and unique identifier are specified using the cluster ARN. 
Example: `arn:aws:kafka:us-east-1:123456789012:cluster/demo-cluster-1/6357e0b2-0e6a-4b86-a0b4-70df934c2e31-5` .\n- Amazon ElastiCache replication group - The resource type is `replication-group` and the unique identifier is the replication group name. Example: `replication-group/mycluster` .\n- Neptune cluster - The resource type is `cluster` and the unique identifier is the cluster name. Example: `cluster:mycluster` .\n- SageMaker serverless endpoint - The resource type is `variant` and the unique identifier is the resource ID. Example: `endpoint/my-end-point/variant/KMeansClustering` .\n- SageMaker inference component - The resource type is `inference-component` and the unique identifier is the resource ID. Example: `inference-component/my-inference-component` .\n- Pool of WorkSpaces - The resource type is `workspacespool` and the unique identifier is the pool ID. Example: `workspacespool/wspool-123456` .", "title": "ResourceId", "type": "string" }, @@ -20643,7 +20643,7 @@ "type": "string" }, "ScalableDimension": { - "markdownDescription": "The scalable dimension associated with the scalable target. This string consists of the service namespace, resource type, and scaling property.\n\n- `ecs:service:DesiredCount` - The desired task count of an ECS service.\n- `elasticmapreduce:instancegroup:InstanceCount` - The instance count of an EMR Instance Group.\n- `ec2:spot-fleet-request:TargetCapacity` - The target capacity of a Spot Fleet.\n- `appstream:fleet:DesiredCapacity` - The desired capacity of an AppStream 2.0 fleet.\n- `dynamodb:table:ReadCapacityUnits` - The provisioned read capacity for a DynamoDB table.\n- `dynamodb:table:WriteCapacityUnits` - The provisioned write capacity for a DynamoDB table.\n- `dynamodb:index:ReadCapacityUnits` - The provisioned read capacity for a DynamoDB global secondary index.\n- `dynamodb:index:WriteCapacityUnits` - The provisioned write capacity for a DynamoDB global secondary index.\n- `rds:cluster:ReadReplicaCount` - The count of Aurora Replicas in an Aurora DB cluster. 
Available for Aurora MySQL-compatible edition and Aurora PostgreSQL-compatible edition.\n- `sagemaker:variant:DesiredInstanceCount` - The number of EC2 instances for a SageMaker model endpoint variant.\n- `custom-resource:ResourceType:Property` - The scalable dimension for a custom resource provided by your own application or service.\n- `comprehend:document-classifier-endpoint:DesiredInferenceUnits` - The number of inference units for an Amazon Comprehend document classification endpoint.\n- `comprehend:entity-recognizer-endpoint:DesiredInferenceUnits` - The number of inference units for an Amazon Comprehend entity recognizer endpoint.\n- `lambda:function:ProvisionedConcurrency` - The provisioned concurrency for a Lambda function.\n- `cassandra:table:ReadCapacityUnits` - The provisioned read capacity for an Amazon Keyspaces table.\n- `cassandra:table:WriteCapacityUnits` - The provisioned write capacity for an Amazon Keyspaces table.\n- `kafka:broker-storage:VolumeSize` - The provisioned volume size (in GiB) for brokers in an Amazon MSK cluster.\n- `elasticache:replication-group:NodeGroups` - The number of node groups for an Amazon ElastiCache replication group.\n- `elasticache:replication-group:Replicas` - The number of replicas per node group for an Amazon ElastiCache replication group.\n- `neptune:cluster:ReadReplicaCount` - The count of read replicas in an Amazon Neptune DB cluster.\n- `sagemaker:variant:DesiredProvisionedConcurrency` - The provisioned concurrency for a SageMaker serverless endpoint.\n- `sagemaker:inference-component:DesiredCopyCount` - The number of copies across an endpoint for a SageMaker inference component.", + "markdownDescription": "The scalable dimension associated with the scalable target. This string consists of the service namespace, resource type, and scaling property.\n\n- `ecs:service:DesiredCount` - The task count of an ECS service.\n- `elasticmapreduce:instancegroup:InstanceCount` - The instance count of an EMR Instance Group.\n- `ec2:spot-fleet-request:TargetCapacity` - The target capacity of a Spot Fleet.\n- `appstream:fleet:DesiredCapacity` - The capacity of an AppStream 2.0 fleet.\n- `dynamodb:table:ReadCapacityUnits` - The provisioned read capacity for a DynamoDB table.\n- `dynamodb:table:WriteCapacityUnits` - The provisioned write capacity for a DynamoDB table.\n- `dynamodb:index:ReadCapacityUnits` - The provisioned read capacity for a DynamoDB global secondary index.\n- `dynamodb:index:WriteCapacityUnits` - The provisioned write capacity for a DynamoDB global secondary index.\n- `rds:cluster:ReadReplicaCount` - The count of Aurora Replicas in an Aurora DB cluster. 
Available for Aurora MySQL-compatible edition and Aurora PostgreSQL-compatible edition.\n- `sagemaker:variant:DesiredInstanceCount` - The number of EC2 instances for a SageMaker model endpoint variant.\n- `custom-resource:ResourceType:Property` - The scalable dimension for a custom resource provided by your own application or service.\n- `comprehend:document-classifier-endpoint:DesiredInferenceUnits` - The number of inference units for an Amazon Comprehend document classification endpoint.\n- `comprehend:entity-recognizer-endpoint:DesiredInferenceUnits` - The number of inference units for an Amazon Comprehend entity recognizer endpoint.\n- `lambda:function:ProvisionedConcurrency` - The provisioned concurrency for a Lambda function.\n- `cassandra:table:ReadCapacityUnits` - The provisioned read capacity for an Amazon Keyspaces table.\n- `cassandra:table:WriteCapacityUnits` - The provisioned write capacity for an Amazon Keyspaces table.\n- `kafka:broker-storage:VolumeSize` - The provisioned volume size (in GiB) for brokers in an Amazon MSK cluster.\n- `elasticache:replication-group:NodeGroups` - The number of node groups for an Amazon ElastiCache replication group.\n- `elasticache:replication-group:Replicas` - The number of replicas per node group for an Amazon ElastiCache replication group.\n- `neptune:cluster:ReadReplicaCount` - The count of read replicas in an Amazon Neptune DB cluster.\n- `sagemaker:variant:DesiredProvisionedConcurrency` - The provisioned concurrency for a SageMaker serverless endpoint.\n- `sagemaker:inference-component:DesiredCopyCount` - The number of copies across an endpoint for a SageMaker inference component.\n- `workspaces:workspacespool:DesiredUserSessions` - The number of user sessions for the WorkSpaces in the pool.", "title": "ScalableDimension", "type": "string" }, @@ -20819,12 +20819,12 @@ "type": "string" }, "ResourceId": { - "markdownDescription": "The identifier of the resource associated with the scaling policy. This string consists of the resource type and unique identifier.\n\n- ECS service - The resource type is `service` and the unique identifier is the cluster name and service name. Example: `service/my-cluster/my-service` .\n- Spot Fleet - The resource type is `spot-fleet-request` and the unique identifier is the Spot Fleet request ID. Example: `spot-fleet-request/sfr-73fbd2ce-aa30-494c-8788-1cee4EXAMPLE` .\n- EMR cluster - The resource type is `instancegroup` and the unique identifier is the cluster ID and instance group ID. Example: `instancegroup/j-2EEZNYKUA1NTV/ig-1791Y4E1L8YI0` .\n- AppStream 2.0 fleet - The resource type is `fleet` and the unique identifier is the fleet name. Example: `fleet/sample-fleet` .\n- DynamoDB table - The resource type is `table` and the unique identifier is the table name. Example: `table/my-table` .\n- DynamoDB global secondary index - The resource type is `index` and the unique identifier is the index name. Example: `table/my-table/index/my-table-index` .\n- Aurora DB cluster - The resource type is `cluster` and the unique identifier is the cluster name. Example: `cluster:my-db-cluster` .\n- SageMaker endpoint variant - The resource type is `variant` and the unique identifier is the resource ID. Example: `endpoint/my-end-point/variant/KMeansClustering` .\n- Custom resources are not supported with a resource type. This parameter must specify the `OutputValue` from the CloudFormation template stack used to access the resources. The unique identifier is defined by the service provider. 
More information is available in our [GitHub repository](https://docs.aws.amazon.com/https://github.com/aws/aws-auto-scaling-custom-resource) .\n- Amazon Comprehend document classification endpoint - The resource type and unique identifier are specified using the endpoint ARN. Example: `arn:aws:comprehend:us-west-2:123456789012:document-classifier-endpoint/EXAMPLE` .\n- Amazon Comprehend entity recognizer endpoint - The resource type and unique identifier are specified using the endpoint ARN. Example: `arn:aws:comprehend:us-west-2:123456789012:entity-recognizer-endpoint/EXAMPLE` .\n- Lambda provisioned concurrency - The resource type is `function` and the unique identifier is the function name with a function version or alias name suffix that is not `$LATEST` . Example: `function:my-function:prod` or `function:my-function:1` .\n- Amazon Keyspaces table - The resource type is `table` and the unique identifier is the table name. Example: `keyspace/mykeyspace/table/mytable` .\n- Amazon MSK cluster - The resource type and unique identifier are specified using the cluster ARN. Example: `arn:aws:kafka:us-east-1:123456789012:cluster/demo-cluster-1/6357e0b2-0e6a-4b86-a0b4-70df934c2e31-5` .\n- Amazon ElastiCache replication group - The resource type is `replication-group` and the unique identifier is the replication group name. Example: `replication-group/mycluster` .\n- Neptune cluster - The resource type is `cluster` and the unique identifier is the cluster name. Example: `cluster:mycluster` .\n- SageMaker serverless endpoint - The resource type is `variant` and the unique identifier is the resource ID. Example: `endpoint/my-end-point/variant/KMeansClustering` .\n- SageMaker inference component - The resource type is `inference-component` and the unique identifier is the resource ID. Example: `inference-component/my-inference-component` .", + "markdownDescription": "The identifier of the resource associated with the scaling policy. This string consists of the resource type and unique identifier.\n\n- ECS service - The resource type is `service` and the unique identifier is the cluster name and service name. Example: `service/my-cluster/my-service` .\n- Spot Fleet - The resource type is `spot-fleet-request` and the unique identifier is the Spot Fleet request ID. Example: `spot-fleet-request/sfr-73fbd2ce-aa30-494c-8788-1cee4EXAMPLE` .\n- EMR cluster - The resource type is `instancegroup` and the unique identifier is the cluster ID and instance group ID. Example: `instancegroup/j-2EEZNYKUA1NTV/ig-1791Y4E1L8YI0` .\n- AppStream 2.0 fleet - The resource type is `fleet` and the unique identifier is the fleet name. Example: `fleet/sample-fleet` .\n- DynamoDB table - The resource type is `table` and the unique identifier is the table name. Example: `table/my-table` .\n- DynamoDB global secondary index - The resource type is `index` and the unique identifier is the index name. Example: `table/my-table/index/my-table-index` .\n- Aurora DB cluster - The resource type is `cluster` and the unique identifier is the cluster name. Example: `cluster:my-db-cluster` .\n- SageMaker endpoint variant - The resource type is `variant` and the unique identifier is the resource ID. Example: `endpoint/my-end-point/variant/KMeansClustering` .\n- Custom resources are not supported with a resource type. This parameter must specify the `OutputValue` from the CloudFormation template stack used to access the resources. The unique identifier is defined by the service provider. 
More information is available in our [GitHub repository](https://docs.aws.amazon.com/https://github.com/aws/aws-auto-scaling-custom-resource) .\n- Amazon Comprehend document classification endpoint - The resource type and unique identifier are specified using the endpoint ARN. Example: `arn:aws:comprehend:us-west-2:123456789012:document-classifier-endpoint/EXAMPLE` .\n- Amazon Comprehend entity recognizer endpoint - The resource type and unique identifier are specified using the endpoint ARN. Example: `arn:aws:comprehend:us-west-2:123456789012:entity-recognizer-endpoint/EXAMPLE` .\n- Lambda provisioned concurrency - The resource type is `function` and the unique identifier is the function name with a function version or alias name suffix that is not `$LATEST` . Example: `function:my-function:prod` or `function:my-function:1` .\n- Amazon Keyspaces table - The resource type is `table` and the unique identifier is the table name. Example: `keyspace/mykeyspace/table/mytable` .\n- Amazon MSK cluster - The resource type and unique identifier are specified using the cluster ARN. Example: `arn:aws:kafka:us-east-1:123456789012:cluster/demo-cluster-1/6357e0b2-0e6a-4b86-a0b4-70df934c2e31-5` .\n- Amazon ElastiCache replication group - The resource type is `replication-group` and the unique identifier is the replication group name. Example: `replication-group/mycluster` .\n- Neptune cluster - The resource type is `cluster` and the unique identifier is the cluster name. Example: `cluster:mycluster` .\n- SageMaker serverless endpoint - The resource type is `variant` and the unique identifier is the resource ID. Example: `endpoint/my-end-point/variant/KMeansClustering` .\n- SageMaker inference component - The resource type is `inference-component` and the unique identifier is the resource ID. Example: `inference-component/my-inference-component` .\n- Pool of WorkSpaces - The resource type is `workspacespool` and the unique identifier is the pool ID. Example: `workspacespool/wspool-123456` .", "title": "ResourceId", "type": "string" }, "ScalableDimension": { - "markdownDescription": "The scalable dimension. This string consists of the service namespace, resource type, and scaling property.\n\n- `ecs:service:DesiredCount` - The desired task count of an ECS service.\n- `elasticmapreduce:instancegroup:InstanceCount` - The instance count of an EMR Instance Group.\n- `ec2:spot-fleet-request:TargetCapacity` - The target capacity of a Spot Fleet.\n- `appstream:fleet:DesiredCapacity` - The desired capacity of an AppStream 2.0 fleet.\n- `dynamodb:table:ReadCapacityUnits` - The provisioned read capacity for a DynamoDB table.\n- `dynamodb:table:WriteCapacityUnits` - The provisioned write capacity for a DynamoDB table.\n- `dynamodb:index:ReadCapacityUnits` - The provisioned read capacity for a DynamoDB global secondary index.\n- `dynamodb:index:WriteCapacityUnits` - The provisioned write capacity for a DynamoDB global secondary index.\n- `rds:cluster:ReadReplicaCount` - The count of Aurora Replicas in an Aurora DB cluster. 
Available for Aurora MySQL-compatible edition and Aurora PostgreSQL-compatible edition.\n- `sagemaker:variant:DesiredInstanceCount` - The number of EC2 instances for a SageMaker model endpoint variant.\n- `custom-resource:ResourceType:Property` - The scalable dimension for a custom resource provided by your own application or service.\n- `comprehend:document-classifier-endpoint:DesiredInferenceUnits` - The number of inference units for an Amazon Comprehend document classification endpoint.\n- `comprehend:entity-recognizer-endpoint:DesiredInferenceUnits` - The number of inference units for an Amazon Comprehend entity recognizer endpoint.\n- `lambda:function:ProvisionedConcurrency` - The provisioned concurrency for a Lambda function.\n- `cassandra:table:ReadCapacityUnits` - The provisioned read capacity for an Amazon Keyspaces table.\n- `cassandra:table:WriteCapacityUnits` - The provisioned write capacity for an Amazon Keyspaces table.\n- `kafka:broker-storage:VolumeSize` - The provisioned volume size (in GiB) for brokers in an Amazon MSK cluster.\n- `elasticache:replication-group:NodeGroups` - The number of node groups for an Amazon ElastiCache replication group.\n- `elasticache:replication-group:Replicas` - The number of replicas per node group for an Amazon ElastiCache replication group.\n- `neptune:cluster:ReadReplicaCount` - The count of read replicas in an Amazon Neptune DB cluster.\n- `sagemaker:variant:DesiredProvisionedConcurrency` - The provisioned concurrency for a SageMaker serverless endpoint.\n- `sagemaker:inference-component:DesiredCopyCount` - The number of copies across an endpoint for a SageMaker inference component.", + "markdownDescription": "The scalable dimension. This string consists of the service namespace, resource type, and scaling property.\n\n- `ecs:service:DesiredCount` - The task count of an ECS service.\n- `elasticmapreduce:instancegroup:InstanceCount` - The instance count of an EMR Instance Group.\n- `ec2:spot-fleet-request:TargetCapacity` - The target capacity of a Spot Fleet.\n- `appstream:fleet:DesiredCapacity` - The capacity of an AppStream 2.0 fleet.\n- `dynamodb:table:ReadCapacityUnits` - The provisioned read capacity for a DynamoDB table.\n- `dynamodb:table:WriteCapacityUnits` - The provisioned write capacity for a DynamoDB table.\n- `dynamodb:index:ReadCapacityUnits` - The provisioned read capacity for a DynamoDB global secondary index.\n- `dynamodb:index:WriteCapacityUnits` - The provisioned write capacity for a DynamoDB global secondary index.\n- `rds:cluster:ReadReplicaCount` - The count of Aurora Replicas in an Aurora DB cluster. 
Available for Aurora MySQL-compatible edition and Aurora PostgreSQL-compatible edition.\n- `sagemaker:variant:DesiredInstanceCount` - The number of EC2 instances for a SageMaker model endpoint variant.\n- `custom-resource:ResourceType:Property` - The scalable dimension for a custom resource provided by your own application or service.\n- `comprehend:document-classifier-endpoint:DesiredInferenceUnits` - The number of inference units for an Amazon Comprehend document classification endpoint.\n- `comprehend:entity-recognizer-endpoint:DesiredInferenceUnits` - The number of inference units for an Amazon Comprehend entity recognizer endpoint.\n- `lambda:function:ProvisionedConcurrency` - The provisioned concurrency for a Lambda function.\n- `cassandra:table:ReadCapacityUnits` - The provisioned read capacity for an Amazon Keyspaces table.\n- `cassandra:table:WriteCapacityUnits` - The provisioned write capacity for an Amazon Keyspaces table.\n- `kafka:broker-storage:VolumeSize` - The provisioned volume size (in GiB) for brokers in an Amazon MSK cluster.\n- `elasticache:replication-group:NodeGroups` - The number of node groups for an Amazon ElastiCache replication group.\n- `elasticache:replication-group:Replicas` - The number of replicas per node group for an Amazon ElastiCache replication group.\n- `neptune:cluster:ReadReplicaCount` - The count of read replicas in an Amazon Neptune DB cluster.\n- `sagemaker:variant:DesiredProvisionedConcurrency` - The provisioned concurrency for a SageMaker serverless endpoint.\n- `sagemaker:inference-component:DesiredCopyCount` - The number of copies across an endpoint for a SageMaker inference component.\n- `workspaces:workspacespool:DesiredUserSessions` - The number of user sessions for the WorkSpaces in the pool.", "title": "ScalableDimension", "type": "string" }, @@ -26184,7 +26184,7 @@ "type": "object" }, "BackupVaultName": { - "markdownDescription": "The name of a logical container where backups are stored. Backup vaults are identified by names that are unique to the account used to create them and the AWS Region where they are created.", + "markdownDescription": "The name of a logical container where backups are stored. Backup vaults are identified by names that are unique to the account used to create them and the AWS Region where they are created. They consist of lowercase letters, numbers, and hyphens.", "title": "BackupVaultName", "type": "string" }, @@ -39682,7 +39682,7 @@ "items": { "type": "string" }, - "markdownDescription": "An array of Amazon Resource Name (ARN) strings or partial ARN strings for the specified resource type.\n\n- To log data events for all objects in all S3 buckets in your AWS account , specify the prefix as `arn:aws:s3` .\n\n> This also enables logging of data event activity performed by any user or role in your AWS account , even if that activity is performed on a bucket that belongs to another AWS account .\n- To log data events for all objects in an S3 bucket, specify the bucket and an empty object prefix such as `arn:aws:s3:::bucket-1/` . The trail logs data events for all objects in this S3 bucket.\n- To log data events for specific objects, specify the S3 bucket and object prefix such as `arn:aws:s3:::bucket-1/example-images` . 
The trail logs data events for objects in this S3 bucket that match the prefix.\n- To log data events for all Lambda functions in your AWS account , specify the prefix as `arn:aws:lambda` .\n\n> This also enables logging of `Invoke` activity performed by any user or role in your AWS account , even if that activity is performed on a function that belongs to another AWS account .\n- To log data events for a specific Lambda function, specify the function ARN.\n\n> Lambda function ARNs are exact. For example, if you specify a function ARN *arn:aws:lambda:us-west-2:111111111111:function:helloworld* , data events will only be logged for *arn:aws:lambda:us-west-2:111111111111:function:helloworld* . They will not be logged for *arn:aws:lambda:us-west-2:111111111111:function:helloworld2* .\n- To log data events for all DynamoDB tables in your AWS account , specify the prefix as `arn:aws:dynamodb` .", + "markdownDescription": "An array of Amazon Resource Name (ARN) strings or partial ARN strings for the specified resource type.\n\n- To log data events for all objects in all S3 buckets in your AWS account , specify the prefix as `arn:aws:s3` .\n\n> This also enables logging of data event activity performed by any user or role in your AWS account , even if that activity is performed on a bucket that belongs to another AWS account .\n- To log data events for all objects in an S3 bucket, specify the bucket and an empty object prefix such as `arn:aws:s3:::DOC-EXAMPLE-BUCKET1/` . The trail logs data events for all objects in this S3 bucket.\n- To log data events for specific objects, specify the S3 bucket and object prefix such as `arn:aws:s3:::DOC-EXAMPLE-BUCKET1/example-images` . The trail logs data events for objects in this S3 bucket that match the prefix.\n- To log data events for all Lambda functions in your AWS account , specify the prefix as `arn:aws:lambda` .\n\n> This also enables logging of `Invoke` activity performed by any user or role in your AWS account , even if that activity is performed on a function that belongs to another AWS account .\n- To log data events for a specific Lambda function, specify the function ARN.\n\n> Lambda function ARNs are exact. For example, if you specify a function ARN *arn:aws:lambda:us-west-2:111111111111:function:helloworld* , data events will only be logged for *arn:aws:lambda:us-west-2:111111111111:function:helloworld* . They will not be logged for *arn:aws:lambda:us-west-2:111111111111:function:helloworld2* .\n- To log data events for all DynamoDB tables in your AWS account , specify the prefix as `arn:aws:dynamodb` .", "title": "Values", "type": "array" } @@ -40892,6 +40892,8 @@ "type": "string" }, "EncryptionKey": { + "markdownDescription": "The key used to encrypt the domain.", + "title": "EncryptionKey", "type": "string" }, "PermissionsPolicyDocument": { @@ -41138,6 +41140,8 @@ "type": "string" }, "DomainOwner": { + "markdownDescription": "The 12-digit account number of the AWS account that owns the domain that contains the repository. It does not include dashes or spaces.", + "title": "DomainOwner", "type": "string" }, "ExternalConnections": { @@ -57437,7 +57441,7 @@ "type": "number" }, "ArchivedLogsOnly": { - "markdownDescription": "When this field is set to `Y` , AWS DMS only accesses the archived redo logs. 
If the archived redo logs are stored on Automatic Storage Management (ASM) only, the AWS DMS user account needs to be granted ASM privileges.", + "markdownDescription": "When this field is set to `True` , AWS DMS only accesses the archived redo logs. If the archived redo logs are stored on Automatic Storage Management (ASM) only, the AWS DMS user account needs to be granted ASM privileges.", "title": "ArchivedLogsOnly", "type": "boolean" }, @@ -57570,17 +57574,17 @@ "type": "boolean" }, "UseBFile": { - "markdownDescription": "Set this attribute to Y to capture change data using the Binary Reader utility. Set `UseLogminerReader` to N to set this attribute to Y. To use Binary Reader with Amazon RDS for Oracle as the source, you set additional attributes. For more information about using this setting with Oracle Automatic Storage Management (ASM), see [Using Oracle LogMiner or AWS DMS Binary Reader for CDC](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.Oracle.html#CHAP_Source.Oracle.CDC) .", + "markdownDescription": "Set this attribute to True to capture change data using the Binary Reader utility. Set `UseLogminerReader` to False to set this attribute to True. To use Binary Reader with Amazon RDS for Oracle as the source, you set additional attributes. For more information about using this setting with Oracle Automatic Storage Management (ASM), see [Using Oracle LogMiner or AWS DMS Binary Reader for CDC](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.Oracle.html#CHAP_Source.Oracle.CDC) .", "title": "UseBFile", "type": "boolean" }, "UseDirectPathFullLoad": { - "markdownDescription": "Set this attribute to Y to have AWS DMS use a direct path full load. Specify this value to use the direct path protocol in the Oracle Call Interface (OCI). By using this OCI protocol, you can bulk-load Oracle target tables during a full load.", + "markdownDescription": "Set this attribute to True to have AWS DMS use a direct path full load. Specify this value to use the direct path protocol in the Oracle Call Interface (OCI). By using this OCI protocol, you can bulk-load Oracle target tables during a full load.", "title": "UseDirectPathFullLoad", "type": "boolean" }, "UseLogminerReader": { - "markdownDescription": "Set this attribute to Y to capture change data using the Oracle LogMiner utility (the default). Set this attribute to N if you want to access the redo logs as a binary file. When you set `UseLogminerReader` to N, also set `UseBfile` to Y. For more information on this setting and using Oracle ASM, see [Using Oracle LogMiner or AWS DMS Binary Reader for CDC](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.Oracle.html#CHAP_Source.Oracle.CDC) in the *AWS DMS User Guide* .", + "markdownDescription": "Set this attribute to True to capture change data using the Oracle LogMiner utility (the default). Set this attribute to False if you want to access the redo logs as a binary file. When you set `UseLogminerReader` to False, also set `UseBfile` to True. 
For more information on this setting and using Oracle ASM, see [Using Oracle LogMiner or AWS DMS Binary Reader for CDC](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.Oracle.html#CHAP_Source.Oracle.CDC) in the *AWS DMS User Guide* .", "title": "UseLogminerReader", "type": "boolean" }, @@ -58512,8 +58516,6 @@ "title": "ComputeConfig" }, "ReplicationConfigArn": { - "markdownDescription": "The Amazon Resource Name (ARN) of this AWS DMS Serverless replication configuration.", - "title": "ReplicationConfigArn", "type": "string" }, "ReplicationConfigIdentifier": { @@ -61507,12 +61509,12 @@ "additionalProperties": false, "properties": { "ActivationKey": { - "markdownDescription": "Specifies your DataSync agent's activation key. If you don't have an activation key, see [Activate your agent](https://docs.aws.amazon.com/datasync/latest/userguide/activate-agent.html) .", + "markdownDescription": "Specifies your DataSync agent's activation key. If you don't have an activation key, see [Activating your agent](https://docs.aws.amazon.com/datasync/latest/userguide/activate-agent.html) .", "title": "ActivationKey", "type": "string" }, "AgentName": { - "markdownDescription": "Specifies a name for your agent. You can see this name in the DataSync console.", + "markdownDescription": "Specifies a name for your agent. We recommend specifying a name that you can remember.", "title": "AgentName", "type": "string" }, @@ -61528,7 +61530,7 @@ "items": { "type": "string" }, - "markdownDescription": "Specifies the ARN of the subnet where you want to run your DataSync task when using a VPC endpoint. This is the subnet where DataSync creates and manages the [network interfaces](https://docs.aws.amazon.com/datasync/latest/userguide/datasync-network.html#required-network-interfaces) for your transfer. You can only specify one ARN.", + "markdownDescription": "Specifies the ARN of the subnet where your VPC service endpoint is located. You can only specify one ARN.", "title": "SubnetArns", "type": "array" }, @@ -83916,7 +83918,7 @@ "additionalProperties": false, "properties": { "LogDriver": { - "markdownDescription": "The log driver to use for the container.\n\nFor tasks on AWS Fargate , the supported log drivers are `awslogs` , `splunk` , and `awsfirelens` .\n\nFor tasks hosted on Amazon EC2 instances, the supported log drivers are `awslogs` , `fluentd` , `gelf` , `json-file` , `journald` , `logentries` , `syslog` , `splunk` , and `awsfirelens` .\n\nFor more information about using the `awslogs` log driver, see [Using the awslogs log driver](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/using_awslogs.html) in the *Amazon Elastic Container Service Developer Guide* .\n\nFor more information about using the `awsfirelens` log driver, see [Custom log routing](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/using_firelens.html) in the *Amazon Elastic Container Service Developer Guide* .\n\n> If you have a custom driver that isn't listed, you can fork the Amazon ECS container agent project that's [available on GitHub](https://docs.aws.amazon.com/https://github.com/aws/amazon-ecs-agent) and customize it to work with that driver. We encourage you to submit pull requests for changes that you would like to have included. 
However, we don't currently provide support for running modified copies of this software.", + "markdownDescription": "The log driver to use for the container.\n\nFor tasks on AWS Fargate , the supported log drivers are `awslogs` , `splunk` , and `awsfirelens` .\n\nFor tasks hosted on Amazon EC2 instances, the supported log drivers are `awslogs` , `fluentd` , `gelf` , `json-file` , `journald` , `logentries` , `syslog` , `splunk` , and `awsfirelens` .\n\nFor more information about using the `awslogs` log driver, see [Send Amazon ECS logs to CloudWatch](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/using_awslogs.html) in the *Amazon Elastic Container Service Developer Guide* .\n\nFor more information about using the `awsfirelens` log driver, see [Send Amazon ECS logs to an AWS service or AWS Partner](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/using_firelens.html) .\n\n> If you have a custom driver that isn't listed, you can fork the Amazon ECS container agent project that's [available on GitHub](https://docs.aws.amazon.com/https://github.com/aws/amazon-ecs-agent) and customize it to work with that driver. We encourage you to submit pull requests for changes that you would like to have included. However, we don't currently provide support for running modified copies of this software.", "title": "LogDriver", "type": "string" }, @@ -84040,7 +84042,7 @@ }, "LogConfiguration": { "$ref": "#/definitions/AWS::ECS::Service.LogConfiguration", - "markdownDescription": "The log configuration for the container. This parameter maps to `LogConfig` in the [Create a container](https://docs.aws.amazon.com/https://docs.docker.com/engine/api/v1.35/#operation/ContainerCreate) section of the [Docker Remote API](https://docs.aws.amazon.com/https://docs.docker.com/engine/api/v1.35/) and the `--log-driver` option to [`docker run`](https://docs.aws.amazon.com/https://docs.docker.com/engine/reference/commandline/run/) .\n\nBy default, containers use the same logging driver that the Docker daemon uses. However, the container might use a different logging driver than the Docker daemon by specifying a log driver configuration in the container definition. For more information about the options for different supported log drivers, see [Configure logging drivers](https://docs.aws.amazon.com/https://docs.docker.com/engine/admin/logging/overview/) in the Docker documentation.\n\nUnderstand the following when specifying a log configuration for your containers.\n\n- Amazon ECS currently supports a subset of the logging drivers available to the Docker daemon. Additional log drivers may be available in future releases of the Amazon ECS container agent.\n\nFor tasks on AWS Fargate , the supported log drivers are `awslogs` , `splunk` , and `awsfirelens` .\n\nFor tasks hosted on Amazon EC2 instances, the supported log drivers are `awslogs` , `fluentd` , `gelf` , `json-file` , `journald` , `logentries` , `syslog` , `splunk` , and `awsfirelens` .\n- This parameter requires version 1.18 of the Docker Remote API or greater on your container instance.\n- For tasks that are hosted on Amazon EC2 instances, the Amazon ECS container agent must register the available logging drivers with the `ECS_AVAILABLE_LOGGING_DRIVERS` environment variable before containers placed on that instance can use these log configuration options. 
For more information, see [Amazon ECS container agent configuration](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-agent-config.html) in the *Amazon Elastic Container Service Developer Guide* .\n- For tasks that are on AWS Fargate , because you don't have access to the underlying infrastructure your tasks are hosted on, any additional software needed must be installed outside of the task. For example, the Fluentd output aggregators or a remote host running Logstash to send Gelf logs to.", + "markdownDescription": "The log configuration for the container. This parameter maps to `LogConfig` in the [Create a container](https://docs.aws.amazon.com/https://docs.docker.com/engine/api/v1.35/#operation/ContainerCreate) section of the [Docker Remote API](https://docs.aws.amazon.com/https://docs.docker.com/engine/api/v1.35/) and the `--log-driver` option to [`docker run`](https://docs.aws.amazon.com/https://docs.docker.com/engine/reference/commandline/run/) .\n\nBy default, containers use the same logging driver that the Docker daemon uses. However, the container might use a different logging driver than the Docker daemon by specifying a log driver configuration in the container definition. For more information about the options for different supported log drivers, see [Configure logging drivers](https://docs.aws.amazon.com/https://docs.docker.com/engine/admin/logging/overview/) in the Docker documentation.\n\nUnderstand the following when specifying a log configuration for your containers.\n\n- Amazon ECS currently supports a subset of the logging drivers available to the Docker daemon. Additional log drivers may be available in future releases of the Amazon ECS container agent.\n\nFor tasks on AWS Fargate , the supported log drivers are `awslogs` , `splunk` , and `awsfirelens` .\n\nFor tasks hosted on Amazon EC2 instances, the supported log drivers are `awslogs` , `fluentd` , `gelf` , `json-file` , `journald` , `syslog` , `splunk` , and `awsfirelens` .\n- This parameter requires version 1.18 of the Docker Remote API or greater on your container instance.\n- For tasks that are hosted on Amazon EC2 instances, the Amazon ECS container agent must register the available logging drivers with the `ECS_AVAILABLE_LOGGING_DRIVERS` environment variable before containers placed on that instance can use these log configuration options. For more information, see [Amazon ECS container agent configuration](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-agent-config.html) in the *Amazon Elastic Container Service Developer Guide* .\n- For tasks that are on AWS Fargate , because you don't have access to the underlying infrastructure your tasks are hosted on, any additional software needed must be installed outside of the task. For example, the Fluentd output aggregators or a remote host running Logstash to send Gelf logs to.", "title": "LogConfiguration" }, "Namespace": { @@ -84451,7 +84453,7 @@ "type": "array" }, "Cpu": { - "markdownDescription": "The number of `cpu` units reserved for the container. 
This parameter maps to `CpuShares` in the [Create a container](https://docs.aws.amazon.com/https://docs.docker.com/engine/api/v1.35/#operation/ContainerCreate) section of the [Docker Remote API](https://docs.aws.amazon.com/https://docs.docker.com/engine/api/v1.35/) and the `--cpu-shares` option to [docker run](https://docs.aws.amazon.com/https://docs.docker.com/engine/reference/run/#security-configuration) .\n\nThis field is optional for tasks using the Fargate launch type, and the only requirement is that the total amount of CPU reserved for all containers within a task be lower than the task-level `cpu` value.\n\n> You can determine the number of CPU units that are available per EC2 instance type by multiplying the vCPUs listed for that instance type on the [Amazon EC2 Instances](https://docs.aws.amazon.com/ec2/instance-types/) detail page by 1,024. \n\nLinux containers share unallocated CPU units with other containers on the container instance with the same ratio as their allocated amount. For example, if you run a single-container task on a single-core instance type with 512 CPU units specified for that container, and that's the only task running on the container instance, that container could use the full 1,024 CPU unit share at any given time. However, if you launched another copy of the same task on that container instance, each task is guaranteed a minimum of 512 CPU units when needed. Moreover, each container could float to higher CPU usage if the other container was not using it. If both tasks were 100% active all of the time, they would be limited to 512 CPU units.\n\nOn Linux container instances, the Docker daemon on the container instance uses the CPU value to calculate the relative CPU share ratios for running containers. For more information, see [CPU share constraint](https://docs.aws.amazon.com/https://docs.docker.com/engine/reference/run/#cpu-share-constraint) in the Docker documentation. The minimum valid CPU share value that the Linux kernel allows is 2. However, the CPU parameter isn't required, and you can use CPU values below 2 in your container definitions. For CPU values below 2 (including null), the behavior varies based on your Amazon ECS container agent version:\n\n- *Agent versions less than or equal to 1.1.0:* Null and zero CPU values are passed to Docker as 0, which Docker then converts to 1,024 CPU shares. CPU values of 1 are passed to Docker as 1, which the Linux kernel converts to two CPU shares.\n- *Agent versions greater than or equal to 1.2.0:* Null, zero, and CPU values of 1 are passed to Docker as 2.\n\nOn Windows container instances, the CPU limit is enforced as an absolute limit, or a quota. Windows containers only have access to the specified amount of CPU that's described in the task definition. A null or zero CPU value is passed to Docker as `0` , which Windows interprets as 1% of one CPU.", + "markdownDescription": "The number of `cpu` units reserved for the container. 
This parameter maps to `CpuShares` in the [Create a container](https://docs.aws.amazon.com/https://docs.docker.com/engine/api/v1.35/#operation/ContainerCreate) section of the [Docker Remote API](https://docs.aws.amazon.com/https://docs.docker.com/engine/api/v1.35/) and the `--cpu-shares` option to [docker run](https://docs.aws.amazon.com/https://docs.docker.com/engine/reference/run/#security-configuration) .\n\nThis field is optional for tasks using the Fargate launch type, and the only requirement is that the total amount of CPU reserved for all containers within a task be lower than the task-level `cpu` value.\n\n> You can determine the number of CPU units that are available per EC2 instance type by multiplying the vCPUs listed for that instance type on the [Amazon EC2 Instances](https://docs.aws.amazon.com/ec2/instance-types/) detail page by 1,024. \n\nLinux containers share unallocated CPU units with other containers on the container instance with the same ratio as their allocated amount. For example, if you run a single-container task on a single-core instance type with 512 CPU units specified for that container, and that's the only task running on the container instance, that container could use the full 1,024 CPU unit share at any given time. However, if you launched another copy of the same task on that container instance, each task is guaranteed a minimum of 512 CPU units when needed. Moreover, each container could float to higher CPU usage if the other container was not using it. If both tasks were 100% active all of the time, they would be limited to 512 CPU units.\n\nOn Linux container instances, the Docker daemon on the container instance uses the CPU value to calculate the relative CPU share ratios for running containers. For more information, see [CPU share constraint](https://docs.aws.amazon.com/https://docs.docker.com/engine/reference/run/#cpu-share-constraint) in the Docker documentation. The minimum valid CPU share value that the Linux kernel allows is 2, and the maximum valid CPU share value that the Linux kernel allows is 262144. However, the CPU parameter isn't required, and you can use CPU values below 2 or above 262144 in your container definitions. For CPU values below 2 (including null) or above 262144, the behavior varies based on your Amazon ECS container agent version:\n\n- *Agent versions less than or equal to 1.1.0:* Null and zero CPU values are passed to Docker as 0, which Docker then converts to 1,024 CPU shares. CPU values of 1 are passed to Docker as 1, which the Linux kernel converts to two CPU shares.\n- *Agent versions greater than or equal to 1.2.0:* Null, zero, and CPU values of 1 are passed to Docker as 2.\n- *Agent versions greater than or equal to 1.84.0:* CPU values greater than 256 vCPU are passed to Docker as 256, which is equivalent to 262144 CPU shares.\n\nOn Windows container instances, the CPU limit is enforced as an absolute limit, or a quota. Windows containers only have access to the specified amount of CPU that's described in the task definition. 
A null or zero CPU value is passed to Docker as `0` , which Windows interprets as 1% of one CPU.", "title": "Cpu", "type": "number" }, @@ -85086,7 +85088,7 @@ "additionalProperties": false, "properties": { "LogDriver": { - "markdownDescription": "The log driver to use for the container.\n\nFor tasks on AWS Fargate , the supported log drivers are `awslogs` , `splunk` , and `awsfirelens` .\n\nFor tasks hosted on Amazon EC2 instances, the supported log drivers are `awslogs` , `fluentd` , `gelf` , `json-file` , `journald` , `logentries` , `syslog` , `splunk` , and `awsfirelens` .\n\nFor more information about using the `awslogs` log driver, see [Using the awslogs log driver](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/using_awslogs.html) in the *Amazon Elastic Container Service Developer Guide* .\n\nFor more information about using the `awsfirelens` log driver, see [Custom log routing](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/using_firelens.html) in the *Amazon Elastic Container Service Developer Guide* .\n\n> If you have a custom driver that isn't listed, you can fork the Amazon ECS container agent project that's [available on GitHub](https://docs.aws.amazon.com/https://github.com/aws/amazon-ecs-agent) and customize it to work with that driver. We encourage you to submit pull requests for changes that you would like to have included. However, we don't currently provide support for running modified copies of this software.", + "markdownDescription": "The log driver to use for the container.\n\nFor tasks on AWS Fargate , the supported log drivers are `awslogs` , `splunk` , and `awsfirelens` .\n\nFor tasks hosted on Amazon EC2 instances, the supported log drivers are `awslogs` , `fluentd` , `gelf` , `json-file` , `journald` , `logentries` , `syslog` , `splunk` , and `awsfirelens` .\n\nFor more information about using the `awslogs` log driver, see [Send Amazon ECS logs to CloudWatch](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/using_awslogs.html) in the *Amazon Elastic Container Service Developer Guide* .\n\nFor more information about using the `awsfirelens` log driver, see [Send Amazon ECS logs to an AWS service or AWS Partner](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/using_firelens.html) .\n\n> If you have a custom driver that isn't listed, you can fork the Amazon ECS container agent project that's [available on GitHub](https://docs.aws.amazon.com/https://github.com/aws/amazon-ecs-agent) and customize it to work with that driver. We encourage you to submit pull requests for changes that you would like to have included. However, we don't currently provide support for running modified copies of this software.", "title": "LogDriver", "type": "string" }, @@ -91078,6 +91080,8 @@ "type": "string" }, "ReplicationGroupId": { + "markdownDescription": "The replication group identifier. This parameter is stored as a lowercase string.\n\nConstraints:\n\n- A name must contain from 1 to 40 alphanumeric characters or hyphens.\n- The first character must be a letter.\n- A name cannot end with a hyphen or contain two consecutive hyphens.", + "title": "ReplicationGroupId", "type": "string" }, "SecurityGroupIds": { @@ -102873,7 +102877,7 @@ "type": "string" }, "OperatingSystem": { - "markdownDescription": "The operating system that your game server binaries run on. This value determines the type of fleet resources that you use for this build. If your game build contains multiple executables, they all must run on the same operating system. 
You must specify a valid operating system in this request. There is no default value. You can't change a build's operating system later.\n\n> If you have active fleets using the Windows Server 2012 operating system, you can continue to create new builds using this OS until October 10, 2023, when Microsoft ends its support. All others must use Windows Server 2016 when creating new Windows-based builds.", + "markdownDescription": "The operating system that your game server binaries run on. This value determines the type of fleet resources that you use for this build. If your game build contains multiple executables, they all must run on the same operating system. You must specify a valid operating system in this request. There is no default value. You can't change a build's operating system later.\n\n> Amazon Linux 2 (AL2) will reach end of support on 6/30/2025. See more details in the [Amazon Linux 2 FAQs](https://docs.aws.amazon.com/https://aws.amazon.com/amazon-linux-2/faqs/) . For game servers that are hosted on AL2 and use Amazon GameLift server SDK 4.x., first update the game server build to server SDK 5.x, and then deploy to AL2023 instances. See [Migrate to Amazon GameLift server SDK version 5.](https://docs.aws.amazon.com/gamelift/latest/developerguide/reference-serversdk5-migration.html)", "title": "OperatingSystem", "type": "string" }, @@ -102995,7 +102999,7 @@ "type": "string" }, "OperatingSystem": { - "markdownDescription": "The platform required for all containers in the container group definition.", + "markdownDescription": "The platform required for all containers in the container group definition.\n\n> Amazon Linux 2 (AL2) will reach end of support on 6/30/2025. See more details in the [Amazon Linux 2 FAQs](https://docs.aws.amazon.com/https://aws.amazon.com/amazon-linux-2/faqs/) . For game servers that are hosted on AL2 and use Amazon GameLift server SDK 4.x., first update the game server build to server SDK 5.x, and then deploy to AL2023 instances. 
See [Migrate to Amazon GameLift server SDK version 5.](https://docs.aws.amazon.com/gamelift/latest/developerguide/reference-serversdk5-migration.html)", "title": "OperatingSystem", "type": "string" }, @@ -168341,7 +168345,7 @@ "items": { "$ref": "#/definitions/Tag" }, - "markdownDescription": "", + "markdownDescription": "The tags associated with the Connect attachment.", "title": "Tags", "type": "array" }, @@ -169398,7 +169402,7 @@ "items": { "$ref": "#/definitions/Tag" }, - "markdownDescription": "", + "markdownDescription": "The tags associated with the Site-to-Site VPN attachment.", "title": "Tags", "type": "array" }, @@ -225652,7 +225656,7 @@ "type": "string" }, "Value": { - "markdownDescription": "The value of a processor feature name.", + "markdownDescription": "The value of a processor feature.", "title": "Value", "type": "string" } @@ -231490,17 +231494,17 @@ "additionalProperties": false, "properties": { "CrlData": { - "markdownDescription": "", + "markdownDescription": "The x509 v3 specified certificate revocation list (CRL).", "title": "CrlData", "type": "string" }, "Enabled": { - "markdownDescription": "", + "markdownDescription": "Specifies whether the certificate revocation list (CRL) is enabled.", "title": "Enabled", "type": "boolean" }, "Name": { - "markdownDescription": "", + "markdownDescription": "The name of the certificate revocation list (CRL).", "title": "Name", "type": "string" }, @@ -231508,7 +231512,7 @@ "items": { "$ref": "#/definitions/Tag" }, - "markdownDescription": "", + "markdownDescription": "A list of tags to attach to the certificate revocation list (CRL).", "title": "Tags", "type": "array" }, @@ -242598,6 +242602,8 @@ "type": "string" }, "SyncName": { + "markdownDescription": "A name for the resource data sync.", + "title": "SyncName", "type": "string" }, "SyncSource": { @@ -254527,12 +254533,18 @@ "additionalProperties": false, "properties": { "BlockPublicPolicy": { + "markdownDescription": "Specifies whether to block resource-based policies that allow broad access to the secret. By default, Secrets Manager blocks policies that allow broad access, for example those that use a wildcard for the principal.", + "title": "BlockPublicPolicy", "type": "boolean" }, "ResourcePolicy": { + "markdownDescription": "A JSON-formatted string for an AWS resource-based policy. For example policies, see [Permissions policy examples](https://docs.aws.amazon.com/secretsmanager/latest/userguide/auth-and-access_examples.html) .", + "title": "ResourcePolicy", "type": "object" }, "SecretId": { + "markdownDescription": "The ARN or name of the secret to attach the resource-based policy.\n\nFor an ARN, we recommend that you specify a complete ARN rather than a partial ARN.", + "title": "SecretId", "type": "string" } }, @@ -259515,6 +259527,8 @@ "type": "object" }, "InstanceId": { + "markdownDescription": "An identifier that you want to associate with the instance. Note the following:\n\n- If the service that's specified by `ServiceId` includes settings for an `SRV` record, the value of `InstanceId` is automatically included as part of the value for the `SRV` record. 
For more information, see [DnsRecord > Type](https://docs.aws.amazon.com/cloud-map/latest/api/API_DnsRecord.html#cloudmap-Type-DnsRecord-Type) .\n- You can use this value to update an existing instance.\n- To register a new instance, you must specify a value that's unique among instances that you register by using the same service.\n- If you specify an existing `InstanceId` and `ServiceId` , AWS Cloud Map updates the existing DNS records, if any. If there's also an existing health check, AWS Cloud Map deletes the old health check and creates a new one.\n\n> The health check isn't deleted immediately, so it will still appear for a while if you submit a `ListHealthChecks` request, for example.\n\n> Do not include sensitive information in `InstanceId` if the namespace is discoverable by public DNS queries and any `Type` member of `DnsRecord` for the service contains `SRV` because the `InstanceId` is discoverable by public DNS queries.", + "title": "InstanceId", "type": "string" }, "ServiceId": { @@ -264165,7 +264179,7 @@ "properties": { "Configuration": { "$ref": "#/definitions/AWS::VerifiedPermissions::IdentitySource.IdentitySourceConfiguration", - "markdownDescription": "Contains configuration information about an identity source.", + "markdownDescription": "Contains configuration information used when creating a new identity source.", "title": "Configuration" }, "PolicyStoreId": { @@ -264494,7 +264508,7 @@ "additionalProperties": false, "properties": { "CedarJson": { - "markdownDescription": "A JSON string representation of the schema supported by applications that use this policy store. For more information, see [Policy store schema](https://docs.aws.amazon.com/verifiedpermissions/latest/userguide/schema.html) in the *Amazon Verified Permissions User Guide* .", + "markdownDescription": "A JSON string representation of the schema supported by applications that use this policy store. For more information, see [Policy store schema](https://docs.aws.amazon.com/verifiedpermissions/latest/userguide/schema.html) in the AVP User Guide.", "title": "CedarJson", "type": "string" } @@ -269007,7 +269021,7 @@ "additionalProperties": false, "properties": { "InvalidFallbackBehavior": { - "markdownDescription": "What AWS WAF should do if it fails to completely parse the JSON body. The options are the following:\n\n- `EVALUATE_AS_STRING` - Inspect the body as plain text. AWS WAF applies the text transformations and inspection criteria that you defined for the JSON inspection to the body text string.\n- `MATCH` - Treat the web request as matching the rule statement. AWS WAF applies the rule action to the request.\n- `NO_MATCH` - Treat the web request as not matching the rule statement.\n\nIf you don't provide this setting, AWS WAF parses and evaluates the content only up to the first parsing failure that it encounters.\n\nAWS WAF does its best to parse the entire JSON body, but might be forced to stop for reasons such as invalid characters, duplicate keys, truncation, and any content whose root node isn't an object or an array.\n\nAWS WAF parses the JSON in the following examples as two valid key, value pairs:\n\n- Missing comma: `{\"key1\":\"value1\"\"key2\":\"value2\"}`\n- Missing colon: `{\"key1\":\"value1\",\"key2\"\"value2\"}`\n- Extra colons: `{\"key1\"::\"value1\",\"key2\"\"value2\"}`", + "markdownDescription": "What AWS WAF should do if it fails to completely parse the JSON body. The options are the following:\n\n- `EVALUATE_AS_STRING` - Inspect the body as plain text. 
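A minimal `AWS::ServiceDiscovery::Instance` sketch illustrating the `InstanceId` rules described above; the service reference, instance ID, address, and port are placeholders.

```yaml
ServiceInstance:
  Type: AWS::ServiceDiscovery::Instance
  Properties:
    ServiceId: !Ref DiscoveryService   # assumes an AWS::ServiceDiscovery::Service elsewhere
    InstanceId: web-01                 # unique within the service; avoid sensitive values
                                       # if the namespace is publicly discoverable
    InstanceAttributes:
      AWS_INSTANCE_IPV4: 10.0.0.12     # placeholder address
      AWS_INSTANCE_PORT: "8080"        # placeholder port
```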
AWS WAF applies the text transformations and inspection criteria that you defined for the JSON inspection to the body text string.\n- `MATCH` - Treat the web request as matching the rule statement. AWS WAF applies the rule action to the request.\n- `NO_MATCH` - Treat the web request as not matching the rule statement.\n\nIf you don't provide this setting, AWS WAF parses and evaluates the content only up to the first parsing failure that it encounters.\n\n> AWS WAF parsing doesn't fully validate the input JSON string, so parsing can succeed even for invalid JSON. When parsing succeeds, AWS WAF doesn't apply the fallback behavior. For more information, see [JSON body](https://docs.aws.amazon.com/waf/latest/developerguide/waf-rule-statement-fields-list.html#waf-rule-statement-request-component-json-body) in the *AWS WAF Developer Guide* .", "title": "InvalidFallbackBehavior", "type": "string" }, @@ -270509,7 +270523,7 @@ "additionalProperties": false, "properties": { "InvalidFallbackBehavior": { - "markdownDescription": "What AWS WAF should do if it fails to completely parse the JSON body. The options are the following:\n\n- `EVALUATE_AS_STRING` - Inspect the body as plain text. AWS WAF applies the text transformations and inspection criteria that you defined for the JSON inspection to the body text string.\n- `MATCH` - Treat the web request as matching the rule statement. AWS WAF applies the rule action to the request.\n- `NO_MATCH` - Treat the web request as not matching the rule statement.\n\nIf you don't provide this setting, AWS WAF parses and evaluates the content only up to the first parsing failure that it encounters.\n\nAWS WAF does its best to parse the entire JSON body, but might be forced to stop for reasons such as invalid characters, duplicate keys, truncation, and any content whose root node isn't an object or an array.\n\nAWS WAF parses the JSON in the following examples as two valid key, value pairs:\n\n- Missing comma: `{\"key1\":\"value1\"\"key2\":\"value2\"}`\n- Missing colon: `{\"key1\":\"value1\",\"key2\"\"value2\"}`\n- Extra colons: `{\"key1\"::\"value1\",\"key2\"\"value2\"}`", + "markdownDescription": "What AWS WAF should do if it fails to completely parse the JSON body. The options are the following:\n\n- `EVALUATE_AS_STRING` - Inspect the body as plain text. AWS WAF applies the text transformations and inspection criteria that you defined for the JSON inspection to the body text string.\n- `MATCH` - Treat the web request as matching the rule statement. AWS WAF applies the rule action to the request.\n- `NO_MATCH` - Treat the web request as not matching the rule statement.\n\nIf you don't provide this setting, AWS WAF parses and evaluates the content only up to the first parsing failure that it encounters.\n\n> AWS WAF parsing doesn't fully validate the input JSON string, so parsing can succeed even for invalid JSON. When parsing succeeds, AWS WAF doesn't apply the fallback behavior. For more information, see [JSON body](https://docs.aws.amazon.com/waf/latest/developerguide/waf-rule-statement-fields-list.html#waf-rule-statement-request-component-json-body) in the *AWS WAF Developer Guide* .", "title": "InvalidFallbackBehavior", "type": "string" }, @@ -272203,7 +272217,7 @@ "type": "array" }, "UserName": { - "markdownDescription": "The user name of the user for the WorkSpace. 
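As a sketch of where `InvalidFallbackBehavior` sits, here is a hypothetical rule-statement fragment for a web ACL; the search string and match settings are illustrative only.

```yaml
ByteMatchStatement:
  FieldToMatch:
    JsonBody:
      MatchPattern:
        All: {}                                    # inspect the entire JSON body
      MatchScope: ALL                              # keys and values
      InvalidFallbackBehavior: EVALUATE_AS_STRING  # fall back to plain-text inspection
  PositionalConstraint: CONTAINS
  SearchString: example-token                      # placeholder
  TextTransformations:
    - Priority: 0
      Type: NONE
```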
This user name must exist in the AWS Directory Service directory for the WorkSpace.\n\nThe reserved keyword, `[UNDEFINED]` , is used when creating user-decoupled WorkSpaces.", + "markdownDescription": "The user name of the user for the WorkSpace. This user name must exist in the AWS Directory Service directory for the WorkSpace.", "title": "UserName", "type": "string" }, @@ -272213,7 +272227,7 @@ "type": "boolean" }, "VolumeEncryptionKey": { - "markdownDescription": "The ARN of the symmetric AWS KMS key used to encrypt data stored on your WorkSpace. Amazon WorkSpaces does not support asymmetric KMS keys.", + "markdownDescription": "The symmetric AWS KMS key used to encrypt data stored on your WorkSpace. Amazon WorkSpaces does not support asymmetric KMS keys.", "title": "VolumeEncryptionKey", "type": "string" }, @@ -272265,7 +272279,7 @@ "type": "number" }, "RunningMode": { - "markdownDescription": "The running mode. For more information, see [Manage the WorkSpace Running Mode](https://docs.aws.amazon.com/workspaces/latest/adminguide/running-mode.html) .\n\n> The `MANUAL` value is only supported by Amazon WorkSpaces Core. Contact your account team to be allow-listed to use this value. For more information, see [Amazon WorkSpaces Core](https://docs.aws.amazon.com/workspaces/core/) .", + "markdownDescription": "The running mode. For more information, see [Manage the WorkSpace Running Mode](https://docs.aws.amazon.com/workspaces/latest/adminguide/running-mode.html) .", "title": "RunningMode", "type": "string" }, @@ -272689,7 +272703,7 @@ "items": { "$ref": "#/definitions/Tag" }, - "markdownDescription": "The tags to add to the browser settings resource. A tag is a key-value pair.", + "markdownDescription": "The tags to add to the IP access settings resource. A tag is a key-value pair.", "title": "Tags", "type": "array" } diff --git a/schema_source/cloudformation-docs.json b/schema_source/cloudformation-docs.json index 4731847ec..fc28d6748 100644 --- a/schema_source/cloudformation-docs.json +++ b/schema_source/cloudformation-docs.json @@ -3245,12 +3245,120 @@ "AWS::AppSync::SourceApiAssociation SourceApiAssociationConfig": { "MergeType": "The property that indicates which merging option is enabled in the source API association.\n\nValid merge types are `MANUAL_MERGE` (default) and `AUTO_MERGE` . Manual merges are the default behavior and require the user to trigger any changes from the source APIs to the merged API manually. Auto merges subscribe the merged API to the changes performed on the source APIs so that any change in the source APIs are also made to the merged API. Auto merges use `MergedApiExecutionRoleArn` to perform merge operations.\n\nThe following values are valid:\n\n`MANUAL_MERGE | AUTO_MERGE`" }, + "AWS::AppTest::TestCase": { + "Description": "The description of the test case.", + "Name": "The name of the test case.", + "Steps": "The steps in the test case.", + "Tags": "The specified tags of the test case." + }, + "AWS::AppTest::TestCase Batch": { + "BatchJobName": "The job name of the batch.", + "BatchJobParameters": "The batch job parameters of the batch.", + "ExportDataSetNames": "The export data set names of the batch." + }, + "AWS::AppTest::TestCase CloudFormationAction": { + "ActionType": "The action type of the CloudFormation action.", + "Resource": "The resource of the CloudFormation action." + }, + "AWS::AppTest::TestCase CompareAction": { + "Input": "The input of the compare action.", + "Output": "The output of the compare action." 
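The WorkSpace properties above combine as in this hypothetical sketch; the bundle, directory, user, and KMS key values are placeholders.

```yaml
MyWorkspace:
  Type: AWS::WorkSpaces::Workspace
  Properties:
    BundleId: wsb-0123456789abcdef0   # placeholder bundle
    DirectoryId: d-1234567890a        # placeholder AWS Directory Service directory
    UserName: jdoe                    # must exist in the directory
    UserVolumeEncryptionEnabled: true
    VolumeEncryptionKey: arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab  # symmetric key only
    WorkspaceProperties:
      RunningMode: AUTO_STOP
      RunningModeAutoStopTimeoutInMinutes: 60
```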
+ },
+ "AWS::AppTest::TestCase DataSet": {
+ "Ccsid": "The CCSID of the data set.",
+ "Format": "The format of the data set.",
+ "Length": "The length of the data set.",
+ "Name": "The name of the data set.",
+ "Type": "The type of the data set."
+ },
+ "AWS::AppTest::TestCase DatabaseCDC": {
+ "SourceMetadata": "The source metadata of the database CDC.",
+ "TargetMetadata": "The target metadata of the database CDC."
+ },
+ "AWS::AppTest::TestCase FileMetadata": {
+ "DataSets": "The data sets of the file metadata.",
+ "DatabaseCDC": "The database CDC of the file metadata."
+ },
+ "AWS::AppTest::TestCase Input": {
+ "File": "The file in the input."
+ },
+ "AWS::AppTest::TestCase InputFile": {
+ "FileMetadata": "The file metadata of the input file.",
+ "SourceLocation": "The source location of the input file.",
+ "TargetLocation": "The target location of the input file."
+ },
+ "AWS::AppTest::TestCase M2ManagedActionProperties": {
+ "ForceStop": "Specifies whether to force stop the AWS Mainframe Modernization managed application.",
+ "ImportDataSetLocation": "The import data set location of the AWS Mainframe Modernization managed action properties."
+ },
+ "AWS::AppTest::TestCase M2ManagedApplicationAction": {
+ "ActionType": "The action type of the AWS Mainframe Modernization managed application action.",
+ "Properties": "The properties of the AWS Mainframe Modernization managed application action.",
+ "Resource": "The resource of the AWS Mainframe Modernization managed application action."
+ },
+ "AWS::AppTest::TestCase M2NonManagedApplicationAction": {
+ "ActionType": "The action type of the AWS Mainframe Modernization non-managed application action.",
+ "Resource": "The resource of the AWS Mainframe Modernization non-managed application action."
+ },
+ "AWS::AppTest::TestCase MainframeAction": {
+ "ActionType": "The action type of the mainframe action.",
+ "Properties": "The properties of the mainframe action.",
+ "Resource": "The resource of the mainframe action."
+ },
+ "AWS::AppTest::TestCase MainframeActionProperties": {
+ "DmsTaskArn": "The DMS task ARN of the mainframe action properties."
+ },
+ "AWS::AppTest::TestCase MainframeActionType": {
+ "Batch": "The batch of the mainframe action type.",
+ "Tn3270": "The TN3270 protocol of the mainframe action type."
+ },
+ "AWS::AppTest::TestCase Output": {
+ "File": "The file of the output."
+ },
+ "AWS::AppTest::TestCase OutputFile": {
+ "FileLocation": "The file location of the output file."
+ },
+ "AWS::AppTest::TestCase ResourceAction": {
+ "CloudFormationAction": "The CloudFormation action of the resource action.",
+ "M2ManagedApplicationAction": "The AWS Mainframe Modernization managed application action of the resource action.",
+ "M2NonManagedApplicationAction": "The AWS Mainframe Modernization non-managed application action of the resource action."
+ },
+ "AWS::AppTest::TestCase Script": {
+ "ScriptLocation": "The location of the script.",
+ "Type": "The type of the script."
+ },
+ "AWS::AppTest::TestCase SourceDatabaseMetadata": {
+ "CaptureTool": "The capture tool of the source database metadata.",
+ "Type": "The type of the source database metadata."
+ },
+ "AWS::AppTest::TestCase Step": {
+ "Action": "The action of the step.",
+ "Description": "The description of the step.",
+ "Name": "The name of the step." 
+ }, + "AWS::AppTest::TestCase StepAction": { + "CompareAction": "The compare action of the step action.", + "MainframeAction": "The mainframe action of the step action.", + "ResourceAction": "The resource action of the step action." + }, + "AWS::AppTest::TestCase TN3270": { + "ExportDataSetNames": "The data set names of the TN3270 protocol.", + "Script": "The script of the TN3270 protocol." + }, + "AWS::AppTest::TestCase TargetDatabaseMetadata": { + "CaptureTool": "The capture tool of the target database metadata.", + "Type": "The type of the target database metadata." + }, + "AWS::AppTest::TestCase TestCaseLatestVersion": { + "Status": "The status of the test case latest version.", + "Version": "The version of the test case latest version." + }, "AWS::ApplicationAutoScaling::ScalableTarget": { "MaxCapacity": "The maximum value that you plan to scale out to. When a scaling policy is in effect, Application Auto Scaling can scale out (expand) as needed to the maximum capacity limit in response to changing demand.", "MinCapacity": "The minimum value that you plan to scale in to. When a scaling policy is in effect, Application Auto Scaling can scale in (contract) as needed to the minimum capacity limit in response to changing demand.", - "ResourceId": "The identifier of the resource associated with the scalable target. This string consists of the resource type and unique identifier.\n\n- ECS service - The resource type is `service` and the unique identifier is the cluster name and service name. Example: `service/my-cluster/my-service` .\n- Spot Fleet - The resource type is `spot-fleet-request` and the unique identifier is the Spot Fleet request ID. Example: `spot-fleet-request/sfr-73fbd2ce-aa30-494c-8788-1cee4EXAMPLE` .\n- EMR cluster - The resource type is `instancegroup` and the unique identifier is the cluster ID and instance group ID. Example: `instancegroup/j-2EEZNYKUA1NTV/ig-1791Y4E1L8YI0` .\n- AppStream 2.0 fleet - The resource type is `fleet` and the unique identifier is the fleet name. Example: `fleet/sample-fleet` .\n- DynamoDB table - The resource type is `table` and the unique identifier is the table name. Example: `table/my-table` .\n- DynamoDB global secondary index - The resource type is `index` and the unique identifier is the index name. Example: `table/my-table/index/my-table-index` .\n- Aurora DB cluster - The resource type is `cluster` and the unique identifier is the cluster name. Example: `cluster:my-db-cluster` .\n- SageMaker endpoint variant - The resource type is `variant` and the unique identifier is the resource ID. Example: `endpoint/my-end-point/variant/KMeansClustering` .\n- Custom resources are not supported with a resource type. This parameter must specify the `OutputValue` from the CloudFormation template stack used to access the resources. The unique identifier is defined by the service provider. More information is available in our [GitHub repository](https://docs.aws.amazon.com/https://github.com/aws/aws-auto-scaling-custom-resource) .\n- Amazon Comprehend document classification endpoint - The resource type and unique identifier are specified using the endpoint ARN. Example: `arn:aws:comprehend:us-west-2:123456789012:document-classifier-endpoint/EXAMPLE` .\n- Amazon Comprehend entity recognizer endpoint - The resource type and unique identifier are specified using the endpoint ARN. 
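Piecing the new `AWS::AppTest::TestCase` property tables together, here is a hypothetical minimal test case with a single batch mainframe step; the nesting is inferred from the tables above, and every name and resource value is a placeholder.

```yaml
MyTestCase:
  Type: AWS::AppTest::TestCase
  Properties:
    Name: payroll-regression            # placeholder
    Description: Runs the nightly batch job for comparison.
    Steps:
      - Name: run-batch                 # placeholder step
        Action:
          MainframeAction:
            Resource: MyM2Application   # placeholder resource reference
            ActionType:
              Batch:
                BatchJobName: PAYROLL1  # placeholder job name
```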
Example: `arn:aws:comprehend:us-west-2:123456789012:entity-recognizer-endpoint/EXAMPLE` .\n- Lambda provisioned concurrency - The resource type is `function` and the unique identifier is the function name with a function version or alias name suffix that is not `$LATEST` . Example: `function:my-function:prod` or `function:my-function:1` .\n- Amazon Keyspaces table - The resource type is `table` and the unique identifier is the table name. Example: `keyspace/mykeyspace/table/mytable` .\n- Amazon MSK cluster - The resource type and unique identifier are specified using the cluster ARN. Example: `arn:aws:kafka:us-east-1:123456789012:cluster/demo-cluster-1/6357e0b2-0e6a-4b86-a0b4-70df934c2e31-5` .\n- Amazon ElastiCache replication group - The resource type is `replication-group` and the unique identifier is the replication group name. Example: `replication-group/mycluster` .\n- Neptune cluster - The resource type is `cluster` and the unique identifier is the cluster name. Example: `cluster:mycluster` .\n- SageMaker serverless endpoint - The resource type is `variant` and the unique identifier is the resource ID. Example: `endpoint/my-end-point/variant/KMeansClustering` .\n- SageMaker inference component - The resource type is `inference-component` and the unique identifier is the resource ID. Example: `inference-component/my-inference-component` .", + "ResourceId": "The identifier of the resource associated with the scalable target. This string consists of the resource type and unique identifier.\n\n- ECS service - The resource type is `service` and the unique identifier is the cluster name and service name. Example: `service/my-cluster/my-service` .\n- Spot Fleet - The resource type is `spot-fleet-request` and the unique identifier is the Spot Fleet request ID. Example: `spot-fleet-request/sfr-73fbd2ce-aa30-494c-8788-1cee4EXAMPLE` .\n- EMR cluster - The resource type is `instancegroup` and the unique identifier is the cluster ID and instance group ID. Example: `instancegroup/j-2EEZNYKUA1NTV/ig-1791Y4E1L8YI0` .\n- AppStream 2.0 fleet - The resource type is `fleet` and the unique identifier is the fleet name. Example: `fleet/sample-fleet` .\n- DynamoDB table - The resource type is `table` and the unique identifier is the table name. Example: `table/my-table` .\n- DynamoDB global secondary index - The resource type is `index` and the unique identifier is the index name. Example: `table/my-table/index/my-table-index` .\n- Aurora DB cluster - The resource type is `cluster` and the unique identifier is the cluster name. Example: `cluster:my-db-cluster` .\n- SageMaker endpoint variant - The resource type is `variant` and the unique identifier is the resource ID. Example: `endpoint/my-end-point/variant/KMeansClustering` .\n- Custom resources are not supported with a resource type. This parameter must specify the `OutputValue` from the CloudFormation template stack used to access the resources. The unique identifier is defined by the service provider. More information is available in our [GitHub repository](https://docs.aws.amazon.com/https://github.com/aws/aws-auto-scaling-custom-resource) .\n- Amazon Comprehend document classification endpoint - The resource type and unique identifier are specified using the endpoint ARN. Example: `arn:aws:comprehend:us-west-2:123456789012:document-classifier-endpoint/EXAMPLE` .\n- Amazon Comprehend entity recognizer endpoint - The resource type and unique identifier are specified using the endpoint ARN. 
Example: `arn:aws:comprehend:us-west-2:123456789012:entity-recognizer-endpoint/EXAMPLE` .\n- Lambda provisioned concurrency - The resource type is `function` and the unique identifier is the function name with a function version or alias name suffix that is not `$LATEST` . Example: `function:my-function:prod` or `function:my-function:1` .\n- Amazon Keyspaces table - The resource type is `table` and the unique identifier is the table name. Example: `keyspace/mykeyspace/table/mytable` .\n- Amazon MSK cluster - The resource type and unique identifier are specified using the cluster ARN. Example: `arn:aws:kafka:us-east-1:123456789012:cluster/demo-cluster-1/6357e0b2-0e6a-4b86-a0b4-70df934c2e31-5` .\n- Amazon ElastiCache replication group - The resource type is `replication-group` and the unique identifier is the replication group name. Example: `replication-group/mycluster` .\n- Neptune cluster - The resource type is `cluster` and the unique identifier is the cluster name. Example: `cluster:mycluster` .\n- SageMaker serverless endpoint - The resource type is `variant` and the unique identifier is the resource ID. Example: `endpoint/my-end-point/variant/KMeansClustering` .\n- SageMaker inference component - The resource type is `inference-component` and the unique identifier is the resource ID. Example: `inference-component/my-inference-component` .\n- Pool of WorkSpaces - The resource type is `workspacespool` and the unique identifier is the pool ID. Example: `workspacespool/wspool-123456` .", "RoleARN": "Specify the Amazon Resource Name (ARN) of an Identity and Access Management (IAM) role that allows Application Auto Scaling to modify the scalable target on your behalf. This can be either an IAM service role that Application Auto Scaling can assume to make calls to other AWS resources on your behalf, or a service-linked role for the specified service. For more information, see [How Application Auto Scaling works with IAM](https://docs.aws.amazon.com/autoscaling/application/userguide/security_iam_service-with-iam.html) in the *Application Auto Scaling User Guide* .\n\nTo automatically create a service-linked role (recommended), specify the full ARN of the service-linked role in your stack template. To find the exact ARN of the service-linked role for your AWS or custom resource, see the [Service-linked roles](https://docs.aws.amazon.com/autoscaling/application/userguide/application-auto-scaling-service-linked-roles.html) topic in the *Application Auto Scaling User Guide* . Look for the ARN in the table at the bottom of the page.", - "ScalableDimension": "The scalable dimension associated with the scalable target. 
This string consists of the service namespace, resource type, and scaling property.\n\n- `ecs:service:DesiredCount` - The desired task count of an ECS service.\n- `elasticmapreduce:instancegroup:InstanceCount` - The instance count of an EMR Instance Group.\n- `ec2:spot-fleet-request:TargetCapacity` - The target capacity of a Spot Fleet.\n- `appstream:fleet:DesiredCapacity` - The desired capacity of an AppStream 2.0 fleet.\n- `dynamodb:table:ReadCapacityUnits` - The provisioned read capacity for a DynamoDB table.\n- `dynamodb:table:WriteCapacityUnits` - The provisioned write capacity for a DynamoDB table.\n- `dynamodb:index:ReadCapacityUnits` - The provisioned read capacity for a DynamoDB global secondary index.\n- `dynamodb:index:WriteCapacityUnits` - The provisioned write capacity for a DynamoDB global secondary index.\n- `rds:cluster:ReadReplicaCount` - The count of Aurora Replicas in an Aurora DB cluster. Available for Aurora MySQL-compatible edition and Aurora PostgreSQL-compatible edition.\n- `sagemaker:variant:DesiredInstanceCount` - The number of EC2 instances for a SageMaker model endpoint variant.\n- `custom-resource:ResourceType:Property` - The scalable dimension for a custom resource provided by your own application or service.\n- `comprehend:document-classifier-endpoint:DesiredInferenceUnits` - The number of inference units for an Amazon Comprehend document classification endpoint.\n- `comprehend:entity-recognizer-endpoint:DesiredInferenceUnits` - The number of inference units for an Amazon Comprehend entity recognizer endpoint.\n- `lambda:function:ProvisionedConcurrency` - The provisioned concurrency for a Lambda function.\n- `cassandra:table:ReadCapacityUnits` - The provisioned read capacity for an Amazon Keyspaces table.\n- `cassandra:table:WriteCapacityUnits` - The provisioned write capacity for an Amazon Keyspaces table.\n- `kafka:broker-storage:VolumeSize` - The provisioned volume size (in GiB) for brokers in an Amazon MSK cluster.\n- `elasticache:replication-group:NodeGroups` - The number of node groups for an Amazon ElastiCache replication group.\n- `elasticache:replication-group:Replicas` - The number of replicas per node group for an Amazon ElastiCache replication group.\n- `neptune:cluster:ReadReplicaCount` - The count of read replicas in an Amazon Neptune DB cluster.\n- `sagemaker:variant:DesiredProvisionedConcurrency` - The provisioned concurrency for a SageMaker serverless endpoint.\n- `sagemaker:inference-component:DesiredCopyCount` - The number of copies across an endpoint for a SageMaker inference component.", + "ScalableDimension": "The scalable dimension associated with the scalable target. 
This string consists of the service namespace, resource type, and scaling property.\n\n- `ecs:service:DesiredCount` - The task count of an ECS service.\n- `elasticmapreduce:instancegroup:InstanceCount` - The instance count of an EMR Instance Group.\n- `ec2:spot-fleet-request:TargetCapacity` - The target capacity of a Spot Fleet.\n- `appstream:fleet:DesiredCapacity` - The capacity of an AppStream 2.0 fleet.\n- `dynamodb:table:ReadCapacityUnits` - The provisioned read capacity for a DynamoDB table.\n- `dynamodb:table:WriteCapacityUnits` - The provisioned write capacity for a DynamoDB table.\n- `dynamodb:index:ReadCapacityUnits` - The provisioned read capacity for a DynamoDB global secondary index.\n- `dynamodb:index:WriteCapacityUnits` - The provisioned write capacity for a DynamoDB global secondary index.\n- `rds:cluster:ReadReplicaCount` - The count of Aurora Replicas in an Aurora DB cluster. Available for Aurora MySQL-compatible edition and Aurora PostgreSQL-compatible edition.\n- `sagemaker:variant:DesiredInstanceCount` - The number of EC2 instances for a SageMaker model endpoint variant.\n- `custom-resource:ResourceType:Property` - The scalable dimension for a custom resource provided by your own application or service.\n- `comprehend:document-classifier-endpoint:DesiredInferenceUnits` - The number of inference units for an Amazon Comprehend document classification endpoint.\n- `comprehend:entity-recognizer-endpoint:DesiredInferenceUnits` - The number of inference units for an Amazon Comprehend entity recognizer endpoint.\n- `lambda:function:ProvisionedConcurrency` - The provisioned concurrency for a Lambda function.\n- `cassandra:table:ReadCapacityUnits` - The provisioned read capacity for an Amazon Keyspaces table.\n- `cassandra:table:WriteCapacityUnits` - The provisioned write capacity for an Amazon Keyspaces table.\n- `kafka:broker-storage:VolumeSize` - The provisioned volume size (in GiB) for brokers in an Amazon MSK cluster.\n- `elasticache:replication-group:NodeGroups` - The number of node groups for an Amazon ElastiCache replication group.\n- `elasticache:replication-group:Replicas` - The number of replicas per node group for an Amazon ElastiCache replication group.\n- `neptune:cluster:ReadReplicaCount` - The count of read replicas in an Amazon Neptune DB cluster.\n- `sagemaker:variant:DesiredProvisionedConcurrency` - The provisioned concurrency for a SageMaker serverless endpoint.\n- `sagemaker:inference-component:DesiredCopyCount` - The number of copies across an endpoint for a SageMaker inference component.\n- `workspaces:workspacespool:DesiredUserSessions` - The number of user sessions for the WorkSpaces in the pool.", "ScheduledActions": "The scheduled actions for the scalable target. Duplicates aren't allowed.", "ServiceNamespace": "The namespace of the AWS service that provides the resource, or a `custom-resource` .", "SuspendedState": "An embedded object that contains attributes and attribute values that are used to suspend and resume automatic scaling. Setting the value of an attribute to `true` suspends the specified scaling activities. 
Setting it to `false` (default) resumes the specified scaling activities.\n\n*Suspension Outcomes*\n\n- For `DynamicScalingInSuspended` , while a suspension is in effect, all scale-in activities that are triggered by a scaling policy are suspended.\n- For `DynamicScalingOutSuspended` , while a suspension is in effect, all scale-out activities that are triggered by a scaling policy are suspended.\n- For `ScheduledScalingSuspended` , while a suspension is in effect, all scaling activities that involve scheduled actions are suspended." @@ -3275,8 +3383,8 @@ "AWS::ApplicationAutoScaling::ScalingPolicy": { "PolicyName": "The name of the scaling policy.\n\nUpdates to the name of a target tracking scaling policy are not supported, unless you also update the metric used for scaling. To change only a target tracking scaling policy's name, first delete the policy by removing the existing `AWS::ApplicationAutoScaling::ScalingPolicy` resource from the template and updating the stack. Then, recreate the resource with the same settings and a different name.", "PolicyType": "The scaling policy type.\n\nThe following policy types are supported:\n\n`TargetTrackingScaling` \u2014Not supported for Amazon EMR\n\n`StepScaling` \u2014Not supported for DynamoDB, Amazon Comprehend, Lambda, Amazon Keyspaces, Amazon MSK, Amazon ElastiCache, or Neptune.", - "ResourceId": "The identifier of the resource associated with the scaling policy. This string consists of the resource type and unique identifier.\n\n- ECS service - The resource type is `service` and the unique identifier is the cluster name and service name. Example: `service/my-cluster/my-service` .\n- Spot Fleet - The resource type is `spot-fleet-request` and the unique identifier is the Spot Fleet request ID. Example: `spot-fleet-request/sfr-73fbd2ce-aa30-494c-8788-1cee4EXAMPLE` .\n- EMR cluster - The resource type is `instancegroup` and the unique identifier is the cluster ID and instance group ID. Example: `instancegroup/j-2EEZNYKUA1NTV/ig-1791Y4E1L8YI0` .\n- AppStream 2.0 fleet - The resource type is `fleet` and the unique identifier is the fleet name. Example: `fleet/sample-fleet` .\n- DynamoDB table - The resource type is `table` and the unique identifier is the table name. Example: `table/my-table` .\n- DynamoDB global secondary index - The resource type is `index` and the unique identifier is the index name. Example: `table/my-table/index/my-table-index` .\n- Aurora DB cluster - The resource type is `cluster` and the unique identifier is the cluster name. Example: `cluster:my-db-cluster` .\n- SageMaker endpoint variant - The resource type is `variant` and the unique identifier is the resource ID. Example: `endpoint/my-end-point/variant/KMeansClustering` .\n- Custom resources are not supported with a resource type. This parameter must specify the `OutputValue` from the CloudFormation template stack used to access the resources. The unique identifier is defined by the service provider. More information is available in our [GitHub repository](https://docs.aws.amazon.com/https://github.com/aws/aws-auto-scaling-custom-resource) .\n- Amazon Comprehend document classification endpoint - The resource type and unique identifier are specified using the endpoint ARN. Example: `arn:aws:comprehend:us-west-2:123456789012:document-classifier-endpoint/EXAMPLE` .\n- Amazon Comprehend entity recognizer endpoint - The resource type and unique identifier are specified using the endpoint ARN. 
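The new WorkSpaces pool entries above combine as in this sketch of a scalable target; the pool ID reuses the example from the description, and the capacity bounds are arbitrary.

```yaml
PoolScalingTarget:
  Type: AWS::ApplicationAutoScaling::ScalableTarget
  Properties:
    ServiceNamespace: workspaces
    ResourceId: workspacespool/wspool-123456   # pool ID from the example above
    ScalableDimension: workspaces:workspacespool:DesiredUserSessions
    MinCapacity: 1                             # arbitrary bounds
    MaxCapacity: 10
```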
Example: `arn:aws:comprehend:us-west-2:123456789012:entity-recognizer-endpoint/EXAMPLE` .\n- Lambda provisioned concurrency - The resource type is `function` and the unique identifier is the function name with a function version or alias name suffix that is not `$LATEST` . Example: `function:my-function:prod` or `function:my-function:1` .\n- Amazon Keyspaces table - The resource type is `table` and the unique identifier is the table name. Example: `keyspace/mykeyspace/table/mytable` .\n- Amazon MSK cluster - The resource type and unique identifier are specified using the cluster ARN. Example: `arn:aws:kafka:us-east-1:123456789012:cluster/demo-cluster-1/6357e0b2-0e6a-4b86-a0b4-70df934c2e31-5` .\n- Amazon ElastiCache replication group - The resource type is `replication-group` and the unique identifier is the replication group name. Example: `replication-group/mycluster` .\n- Neptune cluster - The resource type is `cluster` and the unique identifier is the cluster name. Example: `cluster:mycluster` .\n- SageMaker serverless endpoint - The resource type is `variant` and the unique identifier is the resource ID. Example: `endpoint/my-end-point/variant/KMeansClustering` .\n- SageMaker inference component - The resource type is `inference-component` and the unique identifier is the resource ID. Example: `inference-component/my-inference-component` .", - "ScalableDimension": "The scalable dimension. This string consists of the service namespace, resource type, and scaling property.\n\n- `ecs:service:DesiredCount` - The desired task count of an ECS service.\n- `elasticmapreduce:instancegroup:InstanceCount` - The instance count of an EMR Instance Group.\n- `ec2:spot-fleet-request:TargetCapacity` - The target capacity of a Spot Fleet.\n- `appstream:fleet:DesiredCapacity` - The desired capacity of an AppStream 2.0 fleet.\n- `dynamodb:table:ReadCapacityUnits` - The provisioned read capacity for a DynamoDB table.\n- `dynamodb:table:WriteCapacityUnits` - The provisioned write capacity for a DynamoDB table.\n- `dynamodb:index:ReadCapacityUnits` - The provisioned read capacity for a DynamoDB global secondary index.\n- `dynamodb:index:WriteCapacityUnits` - The provisioned write capacity for a DynamoDB global secondary index.\n- `rds:cluster:ReadReplicaCount` - The count of Aurora Replicas in an Aurora DB cluster. 
Available for Aurora MySQL-compatible edition and Aurora PostgreSQL-compatible edition.\n- `sagemaker:variant:DesiredInstanceCount` - The number of EC2 instances for a SageMaker model endpoint variant.\n- `custom-resource:ResourceType:Property` - The scalable dimension for a custom resource provided by your own application or service.\n- `comprehend:document-classifier-endpoint:DesiredInferenceUnits` - The number of inference units for an Amazon Comprehend document classification endpoint.\n- `comprehend:entity-recognizer-endpoint:DesiredInferenceUnits` - The number of inference units for an Amazon Comprehend entity recognizer endpoint.\n- `lambda:function:ProvisionedConcurrency` - The provisioned concurrency for a Lambda function.\n- `cassandra:table:ReadCapacityUnits` - The provisioned read capacity for an Amazon Keyspaces table.\n- `cassandra:table:WriteCapacityUnits` - The provisioned write capacity for an Amazon Keyspaces table.\n- `kafka:broker-storage:VolumeSize` - The provisioned volume size (in GiB) for brokers in an Amazon MSK cluster.\n- `elasticache:replication-group:NodeGroups` - The number of node groups for an Amazon ElastiCache replication group.\n- `elasticache:replication-group:Replicas` - The number of replicas per node group for an Amazon ElastiCache replication group.\n- `neptune:cluster:ReadReplicaCount` - The count of read replicas in an Amazon Neptune DB cluster.\n- `sagemaker:variant:DesiredProvisionedConcurrency` - The provisioned concurrency for a SageMaker serverless endpoint.\n- `sagemaker:inference-component:DesiredCopyCount` - The number of copies across an endpoint for a SageMaker inference component.", + "ResourceId": "The identifier of the resource associated with the scaling policy. This string consists of the resource type and unique identifier.\n\n- ECS service - The resource type is `service` and the unique identifier is the cluster name and service name. Example: `service/my-cluster/my-service` .\n- Spot Fleet - The resource type is `spot-fleet-request` and the unique identifier is the Spot Fleet request ID. Example: `spot-fleet-request/sfr-73fbd2ce-aa30-494c-8788-1cee4EXAMPLE` .\n- EMR cluster - The resource type is `instancegroup` and the unique identifier is the cluster ID and instance group ID. Example: `instancegroup/j-2EEZNYKUA1NTV/ig-1791Y4E1L8YI0` .\n- AppStream 2.0 fleet - The resource type is `fleet` and the unique identifier is the fleet name. Example: `fleet/sample-fleet` .\n- DynamoDB table - The resource type is `table` and the unique identifier is the table name. Example: `table/my-table` .\n- DynamoDB global secondary index - The resource type is `index` and the unique identifier is the index name. Example: `table/my-table/index/my-table-index` .\n- Aurora DB cluster - The resource type is `cluster` and the unique identifier is the cluster name. Example: `cluster:my-db-cluster` .\n- SageMaker endpoint variant - The resource type is `variant` and the unique identifier is the resource ID. Example: `endpoint/my-end-point/variant/KMeansClustering` .\n- Custom resources are not supported with a resource type. This parameter must specify the `OutputValue` from the CloudFormation template stack used to access the resources. The unique identifier is defined by the service provider. 
More information is available in our [GitHub repository](https://docs.aws.amazon.com/https://github.com/aws/aws-auto-scaling-custom-resource) .\n- Amazon Comprehend document classification endpoint - The resource type and unique identifier are specified using the endpoint ARN. Example: `arn:aws:comprehend:us-west-2:123456789012:document-classifier-endpoint/EXAMPLE` .\n- Amazon Comprehend entity recognizer endpoint - The resource type and unique identifier are specified using the endpoint ARN. Example: `arn:aws:comprehend:us-west-2:123456789012:entity-recognizer-endpoint/EXAMPLE` .\n- Lambda provisioned concurrency - The resource type is `function` and the unique identifier is the function name with a function version or alias name suffix that is not `$LATEST` . Example: `function:my-function:prod` or `function:my-function:1` .\n- Amazon Keyspaces table - The resource type is `table` and the unique identifier is the table name. Example: `keyspace/mykeyspace/table/mytable` .\n- Amazon MSK cluster - The resource type and unique identifier are specified using the cluster ARN. Example: `arn:aws:kafka:us-east-1:123456789012:cluster/demo-cluster-1/6357e0b2-0e6a-4b86-a0b4-70df934c2e31-5` .\n- Amazon ElastiCache replication group - The resource type is `replication-group` and the unique identifier is the replication group name. Example: `replication-group/mycluster` .\n- Neptune cluster - The resource type is `cluster` and the unique identifier is the cluster name. Example: `cluster:mycluster` .\n- SageMaker serverless endpoint - The resource type is `variant` and the unique identifier is the resource ID. Example: `endpoint/my-end-point/variant/KMeansClustering` .\n- SageMaker inference component - The resource type is `inference-component` and the unique identifier is the resource ID. Example: `inference-component/my-inference-component` .\n- Pool of WorkSpaces - The resource type is `workspacespool` and the unique identifier is the pool ID. Example: `workspacespool/wspool-123456` .", + "ScalableDimension": "The scalable dimension. This string consists of the service namespace, resource type, and scaling property.\n\n- `ecs:service:DesiredCount` - The task count of an ECS service.\n- `elasticmapreduce:instancegroup:InstanceCount` - The instance count of an EMR Instance Group.\n- `ec2:spot-fleet-request:TargetCapacity` - The target capacity of a Spot Fleet.\n- `appstream:fleet:DesiredCapacity` - The capacity of an AppStream 2.0 fleet.\n- `dynamodb:table:ReadCapacityUnits` - The provisioned read capacity for a DynamoDB table.\n- `dynamodb:table:WriteCapacityUnits` - The provisioned write capacity for a DynamoDB table.\n- `dynamodb:index:ReadCapacityUnits` - The provisioned read capacity for a DynamoDB global secondary index.\n- `dynamodb:index:WriteCapacityUnits` - The provisioned write capacity for a DynamoDB global secondary index.\n- `rds:cluster:ReadReplicaCount` - The count of Aurora Replicas in an Aurora DB cluster. 
Available for Aurora MySQL-compatible edition and Aurora PostgreSQL-compatible edition.\n- `sagemaker:variant:DesiredInstanceCount` - The number of EC2 instances for a SageMaker model endpoint variant.\n- `custom-resource:ResourceType:Property` - The scalable dimension for a custom resource provided by your own application or service.\n- `comprehend:document-classifier-endpoint:DesiredInferenceUnits` - The number of inference units for an Amazon Comprehend document classification endpoint.\n- `comprehend:entity-recognizer-endpoint:DesiredInferenceUnits` - The number of inference units for an Amazon Comprehend entity recognizer endpoint.\n- `lambda:function:ProvisionedConcurrency` - The provisioned concurrency for a Lambda function.\n- `cassandra:table:ReadCapacityUnits` - The provisioned read capacity for an Amazon Keyspaces table.\n- `cassandra:table:WriteCapacityUnits` - The provisioned write capacity for an Amazon Keyspaces table.\n- `kafka:broker-storage:VolumeSize` - The provisioned volume size (in GiB) for brokers in an Amazon MSK cluster.\n- `elasticache:replication-group:NodeGroups` - The number of node groups for an Amazon ElastiCache replication group.\n- `elasticache:replication-group:Replicas` - The number of replicas per node group for an Amazon ElastiCache replication group.\n- `neptune:cluster:ReadReplicaCount` - The count of read replicas in an Amazon Neptune DB cluster.\n- `sagemaker:variant:DesiredProvisionedConcurrency` - The provisioned concurrency for a SageMaker serverless endpoint.\n- `sagemaker:inference-component:DesiredCopyCount` - The number of copies across an endpoint for a SageMaker inference component.\n- `workspaces:workspacespool:DesiredUserSessions` - The number of user sessions for the WorkSpaces in the pool.", "ScalingTargetId": "The CloudFormation-generated ID of an Application Auto Scaling scalable target. For more information about the ID, see the Return Value section of the `AWS::ApplicationAutoScaling::ScalableTarget` resource.\n\n> You must specify either the `ScalingTargetId` property, or the `ResourceId` , `ScalableDimension` , and `ServiceNamespace` properties, but not both.", "ServiceNamespace": "The namespace of the AWS service that provides the resource, or a `custom-resource` .", "StepScalingPolicyConfiguration": "A step scaling policy.", @@ -4219,7 +4327,7 @@ }, "AWS::Backup::BackupVault": { "AccessPolicy": "A resource-based policy that is used to manage access permissions on the target backup vault.", - "BackupVaultName": "The name of a logical container where backups are stored. Backup vaults are identified by names that are unique to the account used to create them and the AWS Region where they are created.", + "BackupVaultName": "The name of a logical container where backups are stored. Backup vaults are identified by names that are unique to the account used to create them and the AWS Region where they are created. They consist of lowercase letters, numbers, and hyphens.", "BackupVaultTags": "The tags to assign to the backup vault.", "EncryptionKeyArn": "A server-side encryption key you can specify to encrypt your backups from services that support full AWS Backup management; for example, `arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab` . If you specify a key, you must specify its ARN, not its alias. 
If you do not specify a key, AWS Backup creates a KMS key for you by default.\n\nTo learn which AWS Backup services support full AWS Backup management and how AWS Backup handles encryption for backups from services that do not yet support full AWS Backup , see [Encryption for backups in AWS Backup](https://docs.aws.amazon.com/aws-backup/latest/devguide/encryption.html)", "LockConfiguration": "Configuration for [AWS Backup Vault Lock](https://docs.aws.amazon.com/aws-backup/latest/devguide/vault-lock.html) .", @@ -6243,7 +6351,7 @@ }, "AWS::CloudTrail::Trail DataResource": { "Type": "The resource type in which you want to log data events. You can specify the following *basic* event selector resource types:\n\n- `AWS::DynamoDB::Table`\n- `AWS::Lambda::Function`\n- `AWS::S3::Object`\n\nAdditional resource types are available through *advanced* event selectors. For more information about these additional resource types, see [AdvancedFieldSelector](https://docs.aws.amazon.com/awscloudtrail/latest/APIReference/API_AdvancedFieldSelector.html) .", - "Values": "An array of Amazon Resource Name (ARN) strings or partial ARN strings for the specified resource type.\n\n- To log data events for all objects in all S3 buckets in your AWS account , specify the prefix as `arn:aws:s3` .\n\n> This also enables logging of data event activity performed by any user or role in your AWS account , even if that activity is performed on a bucket that belongs to another AWS account .\n- To log data events for all objects in an S3 bucket, specify the bucket and an empty object prefix such as `arn:aws:s3:::bucket-1/` . The trail logs data events for all objects in this S3 bucket.\n- To log data events for specific objects, specify the S3 bucket and object prefix such as `arn:aws:s3:::bucket-1/example-images` . The trail logs data events for objects in this S3 bucket that match the prefix.\n- To log data events for all Lambda functions in your AWS account , specify the prefix as `arn:aws:lambda` .\n\n> This also enables logging of `Invoke` activity performed by any user or role in your AWS account , even if that activity is performed on a function that belongs to another AWS account .\n- To log data events for a specific Lambda function, specify the function ARN.\n\n> Lambda function ARNs are exact. For example, if you specify a function ARN *arn:aws:lambda:us-west-2:111111111111:function:helloworld* , data events will only be logged for *arn:aws:lambda:us-west-2:111111111111:function:helloworld* . They will not be logged for *arn:aws:lambda:us-west-2:111111111111:function:helloworld2* .\n- To log data events for all DynamoDB tables in your AWS account , specify the prefix as `arn:aws:dynamodb` ." + "Values": "An array of Amazon Resource Name (ARN) strings or partial ARN strings for the specified resource type.\n\n- To log data events for all objects in all S3 buckets in your AWS account , specify the prefix as `arn:aws:s3` .\n\n> This also enables logging of data event activity performed by any user or role in your AWS account , even if that activity is performed on a bucket that belongs to another AWS account .\n- To log data events for all objects in an S3 bucket, specify the bucket and an empty object prefix such as `arn:aws:s3:::DOC-EXAMPLE-BUCKET1/` . The trail logs data events for all objects in this S3 bucket.\n- To log data events for specific objects, specify the S3 bucket and object prefix such as `arn:aws:s3:::DOC-EXAMPLE-BUCKET1/example-images` . 
The trail logs data events for objects in this S3 bucket that match the prefix.\n- To log data events for all Lambda functions in your AWS account , specify the prefix as `arn:aws:lambda` .\n\n> This also enables logging of `Invoke` activity performed by any user or role in your AWS account , even if that activity is performed on a function that belongs to another AWS account .\n- To log data events for a specific Lambda function, specify the function ARN.\n\n> Lambda function ARNs are exact. For example, if you specify a function ARN *arn:aws:lambda:us-west-2:111111111111:function:helloworld* , data events will only be logged for *arn:aws:lambda:us-west-2:111111111111:function:helloworld* . They will not be logged for *arn:aws:lambda:us-west-2:111111111111:function:helloworld2* .\n- To log data events for all DynamoDB tables in your AWS account , specify the prefix as `arn:aws:dynamodb` ." }, "AWS::CloudTrail::Trail EventSelector": { "DataResources": "CloudTrail supports data event logging for Amazon S3 objects, AWS Lambda functions, and Amazon DynamoDB tables with basic event selectors. You can specify up to 250 resources for an individual event selector, but the total number of data resources cannot exceed 250 across all event selectors in a trail. This limit does not apply if you configure resource logging for all data events.\n\nFor more information, see [Data Events](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/logging-data-events-with-cloudtrail.html) and [Limits in AWS CloudTrail](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/WhatIsCloudTrail-Limits.html) in the *AWS CloudTrail User Guide* .", @@ -6421,6 +6529,7 @@ }, "AWS::CodeArtifact::Domain": { "DomainName": "A string that specifies the name of the requested domain.", + "EncryptionKey": "The key used to encrypt the domain.", "PermissionsPolicyDocument": "The document that defines the resource policy that is set on a domain.", "Tags": "A list of tags to be applied to the domain." }, @@ -6456,6 +6565,7 @@ "AWS::CodeArtifact::Repository": { "Description": "A text description of the repository.", "DomainName": "The name of the domain that contains the repository.", + "DomainOwner": "The 12-digit account number of the AWS account that owns the domain that contains the repository. It does not include dashes or spaces.", "ExternalConnections": "An array of external connections associated with the repository. For more information, see [Supported external connection repositories](https://docs.aws.amazon.com/codeartifact/latest/ug/external-connection.html#supported-public-repositories) in the *CodeArtifact user guide* .", "PermissionsPolicyDocument": "The document that defines the resource policy that is set on a repository.", "RepositoryName": "The name of an upstream repository.", @@ -6587,6 +6697,7 @@ "AWS::CodeBuild::Project ProjectTriggers": { "BuildType": "Specifies the type of build this webhook will trigger. Allowed values are:\n\n- **BUILD** - A single build\n- **BUILD_BATCH** - A batch build", "FilterGroups": "A list of lists of `WebhookFilter` objects used to determine which webhook events are triggered. At least one `WebhookFilter` in the array must specify `EVENT` as its type.", + "ScopeConfiguration": "Contains configuration information about the scope for a webhook.", "Webhook": "Specifies whether or not to begin automatically rebuilding the source code every time a code change is pushed to the repository." 
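An event-selector fragment for `AWS::CloudTrail::Trail` showing the `DataResource` shapes described above; the bucket name reuses the placeholder from the description.

```yaml
EventSelectors:
  - ReadWriteType: All
    IncludeManagementEvents: true
    DataResources:
      - Type: AWS::S3::Object
        Values:
          - arn:aws:s3:::DOC-EXAMPLE-BUCKET1/   # all objects in this bucket
      - Type: AWS::Lambda::Function
        Values:
          - arn:aws:lambda                      # all Lambda functions in the account
```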
}, "AWS::CodeBuild::Project RegistryCredential": { @@ -6598,6 +6709,9 @@ "Location": "The ARN of an S3 bucket and the path prefix for S3 logs. If your Amazon S3 bucket name is `my-bucket` , and your path prefix is `build-log` , then acceptable formats are `my-bucket/build-log` or `arn:aws:s3:::my-bucket/build-log` .", "Status": "The current status of the S3 build logs. Valid values are:\n\n- `ENABLED` : S3 build logs are enabled for this build project.\n- `DISABLED` : S3 build logs are not enabled for this build project." }, + "AWS::CodeBuild::Project ScopeConfiguration": { + "Name": "The name of either the enterprise or organization that will send webhook events to CodeBuild , depending on if the webhook is a global or organization webhook respectively." + }, "AWS::CodeBuild::Project Source": { "Auth": "Information about the authorization settings for AWS CodeBuild to access the source code to be built.\n\nThis information is for the AWS CodeBuild console's use only. Your code should not get or set `Auth` directly.", "BuildSpec": "The build specification for the project. If this value is not provided, then the source code must contain a buildspec file named `buildspec.yml` at the root level. If this value is provided, it can be either a single string containing the entire build specification, or the path to an alternate buildspec file relative to the value of the built-in environment variable `CODEBUILD_SRC_DIR` . The alternate buildspec file can have a name other than `buildspec.yml` , for example `myspec.yml` or `build_spec_qa.yml` or similar. For more information, see the [Build Spec Reference](https://docs.aws.amazon.com/codebuild/latest/userguide/build-spec-ref.html#build-spec-ref-example) in the *AWS CodeBuild User Guide* .", @@ -8978,7 +9092,7 @@ "AdditionalArchivedLogDestId": "Set this attribute with `ArchivedLogDestId` in a primary/ standby setup. This attribute is useful in the case of a switchover. In this case, AWS DMS needs to know which destination to get archive redo logs from to read changes. This need arises because the previous primary instance is now a standby instance after switchover.\n\nAlthough AWS DMS supports the use of the Oracle `RESETLOGS` option to open the database, never use `RESETLOGS` unless necessary. For additional information about `RESETLOGS` , see [RMAN Data Repair Concepts](https://docs.aws.amazon.com/https://docs.oracle.com/en/database/oracle/oracle-database/19/bradv/rman-data-repair-concepts.html#GUID-1805CCF7-4AF2-482D-B65A-998192F89C2B) in the *Oracle Database Backup and Recovery User's Guide* .", "AllowSelectNestedTables": "Set this attribute to `true` to enable replication of Oracle tables containing columns that are nested tables or defined types.", "ArchivedLogDestId": "Specifies the ID of the destination for the archived redo logs. This value should be the same as a number in the dest_id column of the v$archived_log view. If you work with an additional redo log destination, use the `AdditionalArchivedLogDestId` option to specify the additional destination ID. Doing this improves performance by ensuring that the correct logs are accessed from the outset.", - "ArchivedLogsOnly": "When this field is set to `Y` , AWS DMS only accesses the archived redo logs. If the archived redo logs are stored on Automatic Storage Management (ASM) only, the AWS DMS user account needs to be granted ASM privileges.", + "ArchivedLogsOnly": "When this field is set to `True` , AWS DMS only accesses the archived redo logs. 
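A hypothetical `Triggers` fragment for `AWS::CodeBuild::Project` showing where the new `ScopeConfiguration` sits; the organization name and event pattern are assumptions, not values from the source.

```yaml
Triggers:
  Webhook: true
  BuildType: BUILD
  ScopeConfiguration:
    Name: my-github-org              # placeholder organization (or enterprise) name
  FilterGroups:
    - - Type: EVENT
        Pattern: WORKFLOW_JOB_QUEUED # illustrative filter for an organization webhook
```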
If the archived redo logs are stored on Automatic Storage Management (ASM) only, the AWS DMS user account needs to be granted ASM privileges.", "AsmPassword": "For an Oracle source endpoint, your Oracle Automatic Storage Management (ASM) password. You can set this value from the `*asm_user_password*` value. You set this value as part of the comma-separated value that you set to the `Password` request parameter when you create the endpoint to access transaction logs using Binary Reader. For more information, see [Configuration for change data capture (CDC) on an Oracle source database](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.Oracle.html#dms/latest/userguide/CHAP_Source.Oracle.html#CHAP_Source.Oracle.CDC.Configuration) .", "AsmServer": "For an Oracle source endpoint, your ASM server address. You can set this value from the `asm_server` value. You set `asm_server` as part of the extra connection attribute string to access an Oracle server with Binary Reader that uses ASM. For more information, see [Configuration for change data capture (CDC) on an Oracle source database](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.Oracle.html#dms/latest/userguide/CHAP_Source.Oracle.html#CHAP_Source.Oracle.CDC.Configuration) .", "AsmUser": "For an Oracle source endpoint, your ASM user name. You can set this value from the `asm_user` value. You set `asm_user` as part of the extra connection attribute string to access an Oracle server with Binary Reader that uses ASM. For more information, see [Configuration for change data capture (CDC) on an Oracle source database](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.Oracle.html#dms/latest/userguide/CHAP_Source.Oracle.html#CHAP_Source.Oracle.CDC.Configuration) .", @@ -9004,9 +9118,9 @@ "SpatialDataOptionToGeoJsonFunctionName": "Use this attribute to convert `SDO_GEOMETRY` to `GEOJSON` format. By default, DMS calls the `SDO2GEOJSON` custom function if present and accessible. Or you can create your own custom function that mimics the operation of `SDOGEOJSON` and set `SpatialDataOptionToGeoJsonFunctionName` to call it instead.", "StandbyDelayTime": "Use this attribute to specify a time in minutes for the delay in standby sync. If the source is an Oracle Active Data Guard standby database, use this attribute to specify the time lag between primary and standby databases.\n\nIn AWS DMS , you can create an Oracle CDC task that uses an Active Data Guard standby instance as a source for replicating ongoing changes. Doing this eliminates the need to connect to an active database that might be in production.", "UseAlternateFolderForOnline": "Set this attribute to `true` in order to use the Binary Reader to capture change data for an Amazon RDS for Oracle as the source. This tells the DMS instance to use any specified prefix replacement to access all online redo logs.", - "UseBFile": "Set this attribute to Y to capture change data using the Binary Reader utility. Set `UseLogminerReader` to N to set this attribute to Y. To use Binary Reader with Amazon RDS for Oracle as the source, you set additional attributes. For more information about using this setting with Oracle Automatic Storage Management (ASM), see [Using Oracle LogMiner or AWS DMS Binary Reader for CDC](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.Oracle.html#CHAP_Source.Oracle.CDC) .", - "UseDirectPathFullLoad": "Set this attribute to Y to have AWS DMS use a direct path full load. 
Specify this value to use the direct path protocol in the Oracle Call Interface (OCI). By using this OCI protocol, you can bulk-load Oracle target tables during a full load.", - "UseLogminerReader": "Set this attribute to Y to capture change data using the Oracle LogMiner utility (the default). Set this attribute to N if you want to access the redo logs as a binary file. When you set `UseLogminerReader` to N, also set `UseBfile` to Y. For more information on this setting and using Oracle ASM, see [Using Oracle LogMiner or AWS DMS Binary Reader for CDC](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.Oracle.html#CHAP_Source.Oracle.CDC) in the *AWS DMS User Guide* .", + "UseBFile": "Set this attribute to True to capture change data using the Binary Reader utility. To set this attribute to True, you must also set `UseLogminerReader` to False. To use Binary Reader with Amazon RDS for Oracle as the source, you set additional attributes. For more information about using this setting with Oracle Automatic Storage Management (ASM), see [Using Oracle LogMiner or AWS DMS Binary Reader for CDC](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.Oracle.html#CHAP_Source.Oracle.CDC) .", + "UseDirectPathFullLoad": "Set this attribute to True to have AWS DMS use a direct path full load. Specify this value to use the direct path protocol in the Oracle Call Interface (OCI). By using this OCI protocol, you can bulk-load Oracle target tables during a full load.", + "UseLogminerReader": "Set this attribute to True to capture change data using the Oracle LogMiner utility (the default). Set this attribute to False if you want to access the redo logs as a binary file. When you set `UseLogminerReader` to False, also set `UseBFile` to True. For more information on this setting and using Oracle ASM, see [Using Oracle LogMiner or AWS DMS Binary Reader for CDC](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.Oracle.html#CHAP_Source.Oracle.CDC) in the *AWS DMS User Guide* .", "UsePathPrefix": "Set this string attribute to the required value in order to use the Binary Reader to capture change data for an Amazon RDS for Oracle as the source. This value specifies the path prefix used to replace the default Oracle root to access the redo logs." }, "AWS::DMS::Endpoint PostgreSqlSettings": { @@ -9174,7 +9288,6 @@ }, "AWS::DMS::ReplicationConfig": { "ComputeConfig": "Configuration parameters for provisioning an AWS DMS Serverless replication.", - "ReplicationConfigArn": "The Amazon Resource Name (ARN) of this AWS DMS Serverless replication configuration.", "ReplicationConfigIdentifier": "A unique identifier that you want to use to create a `ReplicationConfigArn` that is returned as part of the output from this action. You can then pass this output `ReplicationConfigArn` as the value of the `ReplicationConfigArn` option for other actions to identify both AWS DMS Serverless replications and replication configurations that you want those actions to operate on. For some actions, you can also use either this unique identifier or a corresponding ARN in action filters to identify the specific replication and replication configuration to operate on.", "ReplicationSettings": "Optional JSON settings for AWS DMS Serverless replications that are provisioned using this replication configuration.
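For illustration, the Oracle CDC switches documented above might be combined in a template like the following minimal sketch (the resource name, host, and Secrets Manager reference are hypothetical placeholders, not values taken from this schema):

```yaml
OracleSourceEndpoint:
  Type: AWS::DMS::Endpoint
  Properties:
    EndpointType: source
    EngineName: oracle
    ServerName: oracle.example.internal   # placeholder host
    Port: 1521
    DatabaseName: ORCL
    Username: dms_user
    Password: '{{resolve:secretsmanager:oracle-dms-secret:SecretString:password}}'
    OracleSettings:
      UseLogminerReader: false   # turn off LogMiner ...
      UseBFile: true             # ... and read redo logs with Binary Reader instead
      ArchivedLogsOnly: true     # read only the archived redo logs
```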
For example, see [Change processing tuning settings](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Tasks.CustomizingTasks.TaskSettings.ChangeProcessingTuning.html) .", "ReplicationType": "The type of AWS DMS Serverless replication to provision using this replication configuration.\n\nPossible values:\n\n- `\"full-load\"`\n- `\"cdc\"`\n- `\"full-load-and-cdc\"`", @@ -9691,10 +9804,10 @@ "Value": "The value to associate with the key name." }, "AWS::DataSync::Agent": { - "ActivationKey": "Specifies your DataSync agent's activation key. If you don't have an activation key, see [Activate your agent](https://docs.aws.amazon.com/datasync/latest/userguide/activate-agent.html) .", - "AgentName": "Specifies a name for your agent. You can see this name in the DataSync console.", + "ActivationKey": "Specifies your DataSync agent's activation key. If you don't have an activation key, see [Activating your agent](https://docs.aws.amazon.com/datasync/latest/userguide/activate-agent.html) .", + "AgentName": "Specifies a name for your agent. We recommend specifying a name that you can remember.", "SecurityGroupArns": "The Amazon Resource Names (ARNs) of the security groups used to protect your data transfer task subnets. See [SecurityGroupArns](https://docs.aws.amazon.com/datasync/latest/userguide/API_Ec2Config.html#DataSync-Type-Ec2Config-SecurityGroupArns) .\n\n*Pattern* : `^arn:(aws|aws-cn|aws-us-gov|aws-iso|aws-iso-b):ec2:[a-z\\-0-9]*:[0-9]{12}:security-group/.*$`", - "SubnetArns": "Specifies the ARN of the subnet where you want to run your DataSync task when using a VPC endpoint. This is the subnet where DataSync creates and manages the [network interfaces](https://docs.aws.amazon.com/datasync/latest/userguide/datasync-network.html#required-network-interfaces) for your transfer. You can only specify one ARN.", + "SubnetArns": "Specifies the ARN of the subnet where your VPC service endpoint is located. You can only specify one ARN.", "Tags": "Specifies labels that help you categorize, filter, and search for your AWS resources. We recommend creating at least one tag for your agent.", "VpcEndpointId": "The ID of the virtual private cloud (VPC) endpoint that the agent has access to. This is the client-side VPC endpoint, powered by AWS PrivateLink . If you don't have an AWS PrivateLink VPC endpoint, see [AWS PrivateLink and VPC endpoints](https://docs.aws.amazon.com//vpc/latest/userguide/endpoint-services-overview.html) in the *Amazon VPC User Guide* .\n\nFor more information about activating your agent in a private network based on a VPC, see [Using AWS DataSync in a Virtual Private Cloud](https://docs.aws.amazon.com/datasync/latest/userguide/datasync-in-vpc.html) in the *AWS DataSync User Guide.*\n\nA VPC endpoint ID looks like this: `vpce-01234d5aff67890e1` ." }, @@ -13094,7 +13207,7 @@ "TargetGroupArn": "The full Amazon Resource Name (ARN) of the Elastic Load Balancing target group or groups associated with a service or task set.\n\nA target group ARN is only specified when using an Application Load Balancer or Network Load Balancer.\n\nFor services using the `ECS` deployment controller, you can specify one or multiple target groups. For more information, see [Registering multiple target groups with a service](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/register-multiple-targetgroups.html) in the *Amazon Elastic Container Service Developer Guide* .\n\nFor services using the `CODE_DEPLOY` deployment controller, you're required to define two target groups for the load balancer. 
For more information, see [Blue/green deployment with CodeDeploy](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/deployment-type-bluegreen.html) in the *Amazon Elastic Container Service Developer Guide* .\n\n> If your service's task definition uses the `awsvpc` network mode, you must choose `ip` as the target type, not `instance` . Do this when creating your target groups because tasks that use the `awsvpc` network mode are associated with an elastic network interface, not an Amazon EC2 instance. This network mode is required for the Fargate launch type." }, "AWS::ECS::Service LogConfiguration": { - "LogDriver": "The log driver to use for the container.\n\nFor tasks on AWS Fargate , the supported log drivers are `awslogs` , `splunk` , and `awsfirelens` .\n\nFor tasks hosted on Amazon EC2 instances, the supported log drivers are `awslogs` , `fluentd` , `gelf` , `json-file` , `journald` , `logentries` , `syslog` , `splunk` , and `awsfirelens` .\n\nFor more information about using the `awslogs` log driver, see [Using the awslogs log driver](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/using_awslogs.html) in the *Amazon Elastic Container Service Developer Guide* .\n\nFor more information about using the `awsfirelens` log driver, see [Custom log routing](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/using_firelens.html) in the *Amazon Elastic Container Service Developer Guide* .\n\n> If you have a custom driver that isn't listed, you can fork the Amazon ECS container agent project that's [available on GitHub](https://docs.aws.amazon.com/https://github.com/aws/amazon-ecs-agent) and customize it to work with that driver. We encourage you to submit pull requests for changes that you would like to have included. However, we don't currently provide support for running modified copies of this software.", + "LogDriver": "The log driver to use for the container.\n\nFor tasks on AWS Fargate , the supported log drivers are `awslogs` , `splunk` , and `awsfirelens` .\n\nFor tasks hosted on Amazon EC2 instances, the supported log drivers are `awslogs` , `fluentd` , `gelf` , `json-file` , `journald` , `logentries` , `syslog` , `splunk` , and `awsfirelens` .\n\nFor more information about using the `awslogs` log driver, see [Send Amazon ECS logs to CloudWatch](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/using_awslogs.html) in the *Amazon Elastic Container Service Developer Guide* .\n\nFor more information about using the `awsfirelens` log driver, see [Send Amazon ECS logs to an AWS service or AWS Partner](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/using_firelens.html) .\n\n> If you have a custom driver that isn't listed, you can fork the Amazon ECS container agent project that's [available on GitHub](https://docs.aws.amazon.com/https://github.com/aws/amazon-ecs-agent) and customize it to work with that driver. We encourage you to submit pull requests for changes that you would like to have included. However, we don't currently provide support for running modified copies of this software.", "Options": "The configuration options to send to the log driver. This parameter requires version 1.19 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: `sudo docker version --format '{{.Server.APIVersion}}'`", "SecretOptions": "The secrets to pass to the log configuration. 
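To make the `LogDriver` and `Options` pairing above concrete, here is a minimal sketch of an `awslogs` log configuration fragment (the log group name and prefix are hypothetical):

```yaml
LogConfiguration:
  LogDriver: awslogs
  Options:
    awslogs-group: /ecs/sample-service   # placeholder log group
    awslogs-region: us-east-1
    awslogs-stream-prefix: web           # streams are named <prefix>/<container>/<task-id>
```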
For more information, see [Specifying sensitive data](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/specifying-sensitive-data.html) in the *Amazon Elastic Container Service Developer Guide* ." }, @@ -13119,7 +13232,7 @@ }, "AWS::ECS::Service ServiceConnectConfiguration": { "Enabled": "Specifies whether to use Service Connect with this service.", - "LogConfiguration": "The log configuration for the container. This parameter maps to `LogConfig` in the [Create a container](https://docs.aws.amazon.com/https://docs.docker.com/engine/api/v1.35/#operation/ContainerCreate) section of the [Docker Remote API](https://docs.aws.amazon.com/https://docs.docker.com/engine/api/v1.35/) and the `--log-driver` option to [`docker run`](https://docs.aws.amazon.com/https://docs.docker.com/engine/reference/commandline/run/) .\n\nBy default, containers use the same logging driver that the Docker daemon uses. However, the container might use a different logging driver than the Docker daemon by specifying a log driver configuration in the container definition. For more information about the options for different supported log drivers, see [Configure logging drivers](https://docs.aws.amazon.com/https://docs.docker.com/engine/admin/logging/overview/) in the Docker documentation.\n\nUnderstand the following when specifying a log configuration for your containers.\n\n- Amazon ECS currently supports a subset of the logging drivers available to the Docker daemon. Additional log drivers may be available in future releases of the Amazon ECS container agent.\n\nFor tasks on AWS Fargate , the supported log drivers are `awslogs` , `splunk` , and `awsfirelens` .\n\nFor tasks hosted on Amazon EC2 instances, the supported log drivers are `awslogs` , `fluentd` , `gelf` , `json-file` , `journald` , `logentries` , `syslog` , `splunk` , and `awsfirelens` .\n- This parameter requires version 1.18 of the Docker Remote API or greater on your container instance.\n- For tasks that are hosted on Amazon EC2 instances, the Amazon ECS container agent must register the available logging drivers with the `ECS_AVAILABLE_LOGGING_DRIVERS` environment variable before containers placed on that instance can use these log configuration options. For more information, see [Amazon ECS container agent configuration](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-agent-config.html) in the *Amazon Elastic Container Service Developer Guide* .\n- For tasks that are on AWS Fargate , because you don't have access to the underlying infrastructure your tasks are hosted on, any additional software needed must be installed outside of the task. For example, the Fluentd output aggregators or a remote host running Logstash to send Gelf logs to.", + "LogConfiguration": "The log configuration for the container. This parameter maps to `LogConfig` in the [Create a container](https://docs.aws.amazon.com/https://docs.docker.com/engine/api/v1.35/#operation/ContainerCreate) section of the [Docker Remote API](https://docs.aws.amazon.com/https://docs.docker.com/engine/api/v1.35/) and the `--log-driver` option to [`docker run`](https://docs.aws.amazon.com/https://docs.docker.com/engine/reference/commandline/run/) .\n\nBy default, containers use the same logging driver that the Docker daemon uses. However, the container might use a different logging driver than the Docker daemon by specifying a log driver configuration in the container definition. 
For more information about the options for different supported log drivers, see [Configure logging drivers](https://docs.aws.amazon.com/https://docs.docker.com/engine/admin/logging/overview/) in the Docker documentation.\n\nUnderstand the following when specifying a log configuration for your containers.\n\n- Amazon ECS currently supports a subset of the logging drivers available to the Docker daemon. Additional log drivers may be available in future releases of the Amazon ECS container agent.\n\nFor tasks on AWS Fargate , the supported log drivers are `awslogs` , `splunk` , and `awsfirelens` .\n\nFor tasks hosted on Amazon EC2 instances, the supported log drivers are `awslogs` , `fluentd` , `gelf` , `json-file` , `journald` , `syslog` , `splunk` , and `awsfirelens` .\n- This parameter requires version 1.18 of the Docker Remote API or greater on your container instance.\n- For tasks that are hosted on Amazon EC2 instances, the Amazon ECS container agent must register the available logging drivers with the `ECS_AVAILABLE_LOGGING_DRIVERS` environment variable before containers placed on that instance can use these log configuration options. For more information, see [Amazon ECS container agent configuration](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-agent-config.html) in the *Amazon Elastic Container Service Developer Guide* .\n- For tasks that are on AWS Fargate , because you don't have access to the underlying infrastructure your tasks are hosted on, any additional software needed must be installed outside of the task. Examples include Fluentd output aggregators or a remote host running Logstash that you send Gelf logs to.", "Namespace": "The namespace name or full Amazon Resource Name (ARN) of the AWS Cloud Map namespace for use with Service Connect. The namespace must be in the same AWS Region as the Amazon ECS service and cluster. The type of namespace doesn't affect Service Connect. For more information about AWS Cloud Map , see [Working with Services](https://docs.aws.amazon.com/cloud-map/latest/dg/working-with-services.html) in the *AWS Cloud Map Developer Guide* .", "Services": "The list of Service Connect service objects. These are names and aliases (also known as endpoints) that are used by other Amazon ECS services to connect to this service.\n\nThis field is not required for a \"client\" Amazon ECS service that's a member of a namespace only to connect to other services within the namespace. An example of this would be a frontend application that accepts incoming requests from either a load balancer that's attached to the service or by other means.\n\nAn object selects a port from the task definition, assigns a name for the AWS Cloud Map service, and a list of aliases (endpoints) and ports for client applications to refer to this service." }, @@ -13194,7 +13307,7 @@ }, "AWS::ECS::TaskDefinition ContainerDefinition": { "Command": "The command that's passed to the container. This parameter maps to `Cmd` in the [Create a container](https://docs.aws.amazon.com/https://docs.docker.com/engine/api/v1.35/#operation/ContainerCreate) section of the [Docker Remote API](https://docs.aws.amazon.com/https://docs.docker.com/engine/api/v1.35/) and the `COMMAND` parameter to [docker run](https://docs.aws.amazon.com/https://docs.docker.com/engine/reference/run/#security-configuration) . For more information, see [https://docs.docker.com/engine/reference/builder/#cmd](https://docs.aws.amazon.com/https://docs.docker.com/engine/reference/builder/#cmd) .
If there are multiple arguments, each argument is a separate string in the array.", - "Cpu": "The number of `cpu` units reserved for the container. This parameter maps to `CpuShares` in the [Create a container](https://docs.aws.amazon.com/https://docs.docker.com/engine/api/v1.35/#operation/ContainerCreate) section of the [Docker Remote API](https://docs.aws.amazon.com/https://docs.docker.com/engine/api/v1.35/) and the `--cpu-shares` option to [docker run](https://docs.aws.amazon.com/https://docs.docker.com/engine/reference/run/#security-configuration) .\n\nThis field is optional for tasks using the Fargate launch type, and the only requirement is that the total amount of CPU reserved for all containers within a task be lower than the task-level `cpu` value.\n\n> You can determine the number of CPU units that are available per EC2 instance type by multiplying the vCPUs listed for that instance type on the [Amazon EC2 Instances](https://docs.aws.amazon.com/ec2/instance-types/) detail page by 1,024. \n\nLinux containers share unallocated CPU units with other containers on the container instance with the same ratio as their allocated amount. For example, if you run a single-container task on a single-core instance type with 512 CPU units specified for that container, and that's the only task running on the container instance, that container could use the full 1,024 CPU unit share at any given time. However, if you launched another copy of the same task on that container instance, each task is guaranteed a minimum of 512 CPU units when needed. Moreover, each container could float to higher CPU usage if the other container was not using it. If both tasks were 100% active all of the time, they would be limited to 512 CPU units.\n\nOn Linux container instances, the Docker daemon on the container instance uses the CPU value to calculate the relative CPU share ratios for running containers. For more information, see [CPU share constraint](https://docs.aws.amazon.com/https://docs.docker.com/engine/reference/run/#cpu-share-constraint) in the Docker documentation. The minimum valid CPU share value that the Linux kernel allows is 2. However, the CPU parameter isn't required, and you can use CPU values below 2 in your container definitions. For CPU values below 2 (including null), the behavior varies based on your Amazon ECS container agent version:\n\n- *Agent versions less than or equal to 1.1.0:* Null and zero CPU values are passed to Docker as 0, which Docker then converts to 1,024 CPU shares. CPU values of 1 are passed to Docker as 1, which the Linux kernel converts to two CPU shares.\n- *Agent versions greater than or equal to 1.2.0:* Null, zero, and CPU values of 1 are passed to Docker as 2.\n\nOn Windows container instances, the CPU limit is enforced as an absolute limit, or a quota. Windows containers only have access to the specified amount of CPU that's described in the task definition. A null or zero CPU value is passed to Docker as `0` , which Windows interprets as 1% of one CPU.", + "Cpu": "The number of `cpu` units reserved for the container.
This parameter maps to `CpuShares` in the [Create a container](https://docs.aws.amazon.com/https://docs.docker.com/engine/api/v1.35/#operation/ContainerCreate) section of the [Docker Remote API](https://docs.aws.amazon.com/https://docs.docker.com/engine/api/v1.35/) and the `--cpu-shares` option to [docker run](https://docs.aws.amazon.com/https://docs.docker.com/engine/reference/run/#security-configuration) .\n\nThis field is optional for tasks using the Fargate launch type, and the only requirement is that the total amount of CPU reserved for all containers within a task be lower than the task-level `cpu` value.\n\n> You can determine the number of CPU units that are available per EC2 instance type by multiplying the vCPUs listed for that instance type on the [Amazon EC2 Instances](https://docs.aws.amazon.com/ec2/instance-types/) detail page by 1,024. \n\nLinux containers share unallocated CPU units with other containers on the container instance with the same ratio as their allocated amount. For example, if you run a single-container task on a single-core instance type with 512 CPU units specified for that container, and that's the only task running on the container instance, that container could use the full 1,024 CPU unit share at any given time. However, if you launched another copy of the same task on that container instance, each task is guaranteed a minimum of 512 CPU units when needed. Moreover, each container could float to higher CPU usage if the other container was not using it. If both tasks were 100% active all of the time, they would be limited to 512 CPU units.\n\nOn Linux container instances, the Docker daemon on the container instance uses the CPU value to calculate the relative CPU share ratios for running containers. For more information, see [CPU share constraint](https://docs.aws.amazon.com/https://docs.docker.com/engine/reference/run/#cpu-share-constraint) in the Docker documentation. The minimum valid CPU share value that the Linux kernel allows is 2, and the maximum valid CPU share value that the Linux kernel allows is 262144. However, the CPU parameter isn't required, and you can use CPU values below 2 or above 262144 in your container definitions. For CPU values below 2 (including null) or above 262144, the behavior varies based on your Amazon ECS container agent version:\n\n- *Agent versions less than or equal to 1.1.0:* Null and zero CPU values are passed to Docker as 0, which Docker then converts to 1,024 CPU shares. CPU values of 1 are passed to Docker as 1, which the Linux kernel converts to two CPU shares.\n- *Agent versions greater than or equal to 1.2.0:* Null, zero, and CPU values of 1 are passed to Docker as 2.\n- *Agent versions greater than or equal to 1.84.0:* CPU values greater than 256 vCPU are passed to Docker as 256, which is equivalent to 262144 CPU shares.\n\nOn Windows container instances, the CPU limit is enforced as an absolute limit, or a quota. Windows containers only have access to the specified amount of CPU that's described in the task definition. A null or zero CPU value is passed to Docker as `0` , which Windows interprets as 1% of one CPU.", "CredentialSpecs": "A list of ARNs in SSM or Amazon S3 to a credential spec ( `CredSpec` ) file that configures the container for Active Directory authentication. We recommend that you use this parameter instead of the `dockerSecurityOptions` . 
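As a worked example of the CPU-unit arithmetic above (1 vCPU = 1,024 units, and container reservations must not exceed the task-level value), here is a minimal hypothetical task definition sketch; the family and image names are placeholders:

```yaml
CpuSharesDemoTask:
  Type: AWS::ECS::TaskDefinition
  Properties:
    Family: cpu-shares-demo
    RequiresCompatibilities: [FARGATE]
    NetworkMode: awsvpc
    Cpu: '1024'      # task level: 1 vCPU = 1,024 CPU units
    Memory: '2048'
    ContainerDefinitions:
      - Name: app
        Image: public.ecr.aws/docker/library/nginx:latest
        Cpu: 512     # guaranteed at least 512 units under contention
        Essential: true
      - Name: sidecar
        Image: public.ecr.aws/docker/library/busybox:latest
        Cpu: 256     # 512 + 256 stays below the task-level 1,024
        Essential: false
```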
The maximum number of ARNs is 1.\n\nThere are two formats for each ARN.\n\n- **credentialspecdomainless:MyARN** - You use `credentialspecdomainless:MyARN` to provide a `CredSpec` with an additional section for a secret in AWS Secrets Manager . You provide the login credentials to the domain in the secret.\n\nEach task that runs on any container instance can join different domains.\n\nYou can use this format without joining the container instance to a domain.\n- **credentialspec:MyARN** - You use `credentialspec:MyARN` to provide a `CredSpec` for a single domain.\n\nYou must join the container instance to the domain before you start any tasks that use this task definition.\n\nIn both formats, replace `MyARN` with the ARN in SSM or Amazon S3.\n\nIf you provide a `credentialspecdomainless:MyARN` , the `credspec` must provide an ARN in AWS Secrets Manager for a secret containing the username, password, and the domain to connect to. For better security, the instance isn't joined to the domain for domainless authentication. Other applications on the instance can't use the domainless credentials. You can use this parameter to run tasks on the same instance, even if the tasks need to join different domains. For more information, see [Using gMSAs for Windows Containers](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/windows-gmsa.html) and [Using gMSAs for Linux Containers](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/linux-gmsa.html) .", "DependsOn": "The dependencies defined for container startup and shutdown. A container can contain multiple dependencies. When a dependency is defined for container startup, for container shutdown it is reversed.\n\nFor tasks using the EC2 launch type, the container instances require at least version 1.26.0 of the container agent to turn on container dependencies. However, we recommend using the latest container agent version. For information about checking your agent version and updating to the latest version, see [Updating the Amazon ECS Container Agent](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-agent-update.html) in the *Amazon Elastic Container Service Developer Guide* . If you're using an Amazon ECS-optimized Linux AMI, your instance needs at least version 1.26.0-1 of the `ecs-init` package. If your container instances are launched from version `20190301` or later, then they contain the required versions of the container agent and `ecs-init` . For more information, see [Amazon ECS-optimized Linux AMI](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-optimized_AMI.html) in the *Amazon Elastic Container Service Developer Guide* .\n\nFor tasks using the Fargate launch type, the task or service requires the following platforms:\n\n- Linux platform version `1.3.0` or later.\n- Windows platform version `1.0.0` or later.\n\nIf the task definition is used in a blue/green deployment that uses [AWS::CodeDeploy::DeploymentGroup BlueGreenDeploymentConfiguration](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-codedeploy-deploymentgroup-bluegreendeploymentconfiguration.html) , the `dependsOn` parameter is not supported. For more information, see [Issue #680](https://docs.aws.amazon.com/https://github.com/aws-cloudformation/cloudformation-coverage-roadmap/issues/680) on the GitHub website.", "DisableNetworking": "When this parameter is true, networking is off within the container.
This parameter maps to `NetworkDisabled` in the [Create a container](https://docs.aws.amazon.com/https://docs.docker.com/engine/api/v1.35/#operation/ContainerCreate) section of the [Docker Remote API](https://docs.aws.amazon.com/https://docs.docker.com/engine/api/v1.35/) .\n\n> This parameter is not supported for Windows containers.", @@ -13313,7 +13426,7 @@ "Tmpfs": "The container path, mount options, and size (in MiB) of the tmpfs mount. This parameter maps to the `--tmpfs` option to [docker run](https://docs.aws.amazon.com/https://docs.docker.com/engine/reference/run/#security-configuration) .\n\n> If you're using tasks that use the Fargate launch type, the `tmpfs` parameter isn't supported." }, "AWS::ECS::TaskDefinition LogConfiguration": { - "LogDriver": "The log driver to use for the container.\n\nFor tasks on AWS Fargate , the supported log drivers are `awslogs` , `splunk` , and `awsfirelens` .\n\nFor tasks hosted on Amazon EC2 instances, the supported log drivers are `awslogs` , `fluentd` , `gelf` , `json-file` , `journald` , `logentries` , `syslog` , `splunk` , and `awsfirelens` .\n\nFor more information about using the `awslogs` log driver, see [Using the awslogs log driver](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/using_awslogs.html) in the *Amazon Elastic Container Service Developer Guide* .\n\nFor more information about using the `awsfirelens` log driver, see [Custom log routing](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/using_firelens.html) in the *Amazon Elastic Container Service Developer Guide* .\n\n> If you have a custom driver that isn't listed, you can fork the Amazon ECS container agent project that's [available on GitHub](https://docs.aws.amazon.com/https://github.com/aws/amazon-ecs-agent) and customize it to work with that driver. We encourage you to submit pull requests for changes that you would like to have included. However, we don't currently provide support for running modified copies of this software.", + "LogDriver": "The log driver to use for the container.\n\nFor tasks on AWS Fargate , the supported log drivers are `awslogs` , `splunk` , and `awsfirelens` .\n\nFor tasks hosted on Amazon EC2 instances, the supported log drivers are `awslogs` , `fluentd` , `gelf` , `json-file` , `journald` , `logentries` , `syslog` , `splunk` , and `awsfirelens` .\n\nFor more information about using the `awslogs` log driver, see [Send Amazon ECS logs to CloudWatch](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/using_awslogs.html) in the *Amazon Elastic Container Service Developer Guide* .\n\nFor more information about using the `awsfirelens` log driver, see [Send Amazon ECS logs to an AWS service or AWS Partner](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/using_firelens.html) .\n\n> If you have a custom driver that isn't listed, you can fork the Amazon ECS container agent project that's [available on GitHub](https://docs.aws.amazon.com/https://github.com/aws/amazon-ecs-agent) and customize it to work with that driver. We encourage you to submit pull requests for changes that you would like to have included. However, we don't currently provide support for running modified copies of this software.", "Options": "The configuration options to send to the log driver. This parameter requires version 1.19 of the Docker Remote API or greater on your container instance. 
To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: `sudo docker version --format '{{.Server.APIVersion}}'`", "SecretOptions": "The secrets to pass to the log configuration. For more information, see [Specifying sensitive data](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/specifying-sensitive-data.html) in the *Amazon Elastic Container Service Developer Guide* ." }, @@ -13536,6 +13649,7 @@ }, "AWS::EKS::Cluster": { "AccessConfig": "The access configuration for the cluster.", + "BootstrapSelfManagedAddons": "If you set this value to `False` when creating a cluster, the default networking add-ons will not be installed.\n\nThe default networking add-ons include vpc-cni, coredns, and kube-proxy.\n\nUse this option when you plan to install third-party alternative add-ons or self-manage the default networking add-ons.", "EncryptionConfig": "The encryption configuration for the cluster.", "KubernetesNetworkConfig": "The Kubernetes network configuration for the cluster.", "Logging": "The logging configuration for your cluster.", @@ -14095,8 +14209,8 @@ "Namespace": "The namespaces of the EKS cluster.\n\n*Minimum* : 1\n\n*Maximum* : 63\n\n*Pattern* : `[a-z0-9]([-a-z0-9]*[a-z0-9])?`" }, "AWS::EMRContainers::VirtualCluster Tag": { - "Key": "", - "Value": "" + "Key": "The key to use in the tag.", + "Value": "The value of the tag." }, "AWS::EMRServerless::Application": { "Architecture": "The CPU architecture of an application.", @@ -14294,6 +14408,7 @@ "PrimaryClusterId": "The identifier of the cluster that serves as the primary for this replication group. This cluster must already exist and have a status of `available` .\n\nThis parameter is not required if `NumCacheClusters` , `NumNodeGroups` , or `ReplicasPerNodeGroup` is specified.", "ReplicasPerNodeGroup": "An optional parameter that specifies the number of replica nodes in each node group (shard). Valid values are 0 to 5.", "ReplicationGroupDescription": "A user-created description for the replication group.", + "ReplicationGroupId": "The replication group identifier. This parameter is stored as a lowercase string.\n\nConstraints:\n\n- A name must contain from 1 to 40 alphanumeric characters or hyphens.\n- The first character must be a letter.\n- A name cannot end with a hyphen or contain two consecutive hyphens.", "SecurityGroupIds": "One or more Amazon VPC security groups associated with this replication group.\n\nUse this parameter only when you are creating a replication group in an Amazon Virtual Private Cloud (Amazon VPC).", "SnapshotArns": "A list of Amazon Resource Names (ARN) that uniquely identify the Redis RDB snapshot files stored in Amazon S3. The snapshot files are used to populate the new replication group. The Amazon S3 object name in the ARN cannot contain any commas. The new replication group will have the number of node groups (console: shards) specified by the parameter *NumNodeGroups* or the number of node groups configured by *NodeGroupConfiguration* regardless of the number of ARNs specified here.\n\nExample of an Amazon S3 ARN: `arn:aws:s3:::my_bucket/snapshot1.rdb`", "SnapshotName": "The name of a snapshot from which to restore data into the new replication group. The snapshot status changes to `restoring` while the new replication group is being created.", @@ -16140,7 +16255,7 @@ }, "AWS::GameLift::Build": { "Name": "A descriptive label that is associated with a build.
Build names do not need to be unique.", - "OperatingSystem": "The operating system that your game server binaries run on. This value determines the type of fleet resources that you use for this build. If your game build contains multiple executables, they all must run on the same operating system. You must specify a valid operating system in this request. There is no default value. You can't change a build's operating system later.\n\n> If you have active fleets using the Windows Server 2012 operating system, you can continue to create new builds using this OS until October 10, 2023, when Microsoft ends its support. All others must use Windows Server 2016 when creating new Windows-based builds.", + "OperatingSystem": "The operating system that your game server binaries run on. This value determines the type of fleet resources that you use for this build. If your game build contains multiple executables, they all must run on the same operating system. You must specify a valid operating system in this request. There is no default value. You can't change a build's operating system later.\n\n> Amazon Linux 2 (AL2) will reach end of support on 6/30/2025. See more details in the [Amazon Linux 2 FAQs](https://docs.aws.amazon.com/https://aws.amazon.com/amazon-linux-2/faqs/) . For game servers that are hosted on AL2 and use Amazon GameLift server SDK 4.x, first update the game server build to server SDK 5.x, and then deploy to AL2023 instances. See [Migrate to Amazon GameLift server SDK version 5.](https://docs.aws.amazon.com/gamelift/latest/developerguide/reference-serversdk5-migration.html)", "ServerSdkVersion": "A server SDK version you used when integrating your game server build with Amazon GameLift. For more information, see [Integrate games with custom game servers](https://docs.aws.amazon.com/gamelift/latest/developerguide/integration-custom-intro.html) . By default Amazon GameLift sets this value to `4.0.2` .", "StorageLocation": "Information indicating where your game build files are stored. Use this parameter only when creating a build with files stored in an Amazon S3 bucket that you own. The storage location must specify an Amazon S3 bucket name and key. The location must also specify a role ARN that you set up to allow Amazon GameLift to access your Amazon S3 bucket. The S3 bucket and your new build must be in the same Region.\n\nIf a `StorageLocation` is specified, the size of your file can be found in your Amazon S3 bucket. Amazon GameLift will report a `SizeOnDisk` of 0.", "Version": "Version information that is associated with this build. Version strings do not need to be unique." @@ -16154,7 +16269,7 @@ "AWS::GameLift::ContainerGroupDefinition": { "ContainerDefinitions": "The set of container definitions that are included in the container group.", "Name": "A descriptive identifier for the container group definition. The name value is unique in an AWS Region.", - "OperatingSystem": "The platform required for all containers in the container group definition.", + "OperatingSystem": "The platform required for all containers in the container group definition.\n\n> Amazon Linux 2 (AL2) will reach end of support on 6/30/2025. See more details in the [Amazon Linux 2 FAQs](https://docs.aws.amazon.com/https://aws.amazon.com/amazon-linux-2/faqs/) . For game servers that are hosted on AL2 and use Amazon GameLift server SDK 4.x, first update the game server build to server SDK 5.x, and then deploy to AL2023 instances.
See [Migrate to Amazon GameLift server SDK version 5.](https://docs.aws.amazon.com/gamelift/latest/developerguide/reference-serversdk5-migration.html)", "SchedulingStrategy": "The method for deploying the container group across fleet instances. A replica container group might have multiple copies on each fleet instance. A daemon container group maintains only one copy per fleet instance.", "Tags": "", "TotalCpuLimit": "The amount of CPU units on a fleet instance to allocate for the container group. All containers in the group share these resources. This property is an integer value in CPU units (1 vCPU is equal to 1024 CPU units).\n\nYou can set additional limits for each `ContainerDefinition` in the group. If individual containers have limits, this value must be equal to or greater than the sum of all container-specific CPU limits in the group.\n\nFor more details on memory allocation, see the [Container fleet design guide](https://docs.aws.amazon.com/gamelift/latest/developerguide/containers-design-fleet) .", @@ -21527,7 +21642,7 @@ "AWS::KinesisAnalyticsV2::Application ApplicationConfiguration": { "ApplicationCodeConfiguration": "The code location and type parameters for a Managed Service for Apache Flink application.", "ApplicationSnapshotConfiguration": "Describes whether snapshots are enabled for a Managed Service for Apache Flink application.", - "ApplicationSystemRollbackConfiguration": "", + "ApplicationSystemRollbackConfiguration": "Describes whether system rollbacks are enabled for a Managed Service for Apache Flink application.", "EnvironmentProperties": "Describes execution properties for a Managed Service for Apache Flink application.", "FlinkApplicationConfiguration": "The creation and update parameters for a Managed Service for Apache Flink application.", "SqlApplicationConfiguration": "The creation and update parameters for a SQL-based Kinesis Data Analytics application.", @@ -21545,7 +21660,7 @@ "SnapshotsEnabled": "Describes whether snapshots are enabled for a Managed Service for Apache Flink application." }, "AWS::KinesisAnalyticsV2::Application ApplicationSystemRollbackConfiguration": { - "RollbackEnabled": "" + "RollbackEnabled": "Describes whether system rollbacks are enabled for a Managed Service for Apache Flink application." }, "AWS::KinesisAnalyticsV2::Application CSVMappingParameters": { "RecordColumnDelimiter": "The column delimiter. For example, in a CSV format, a comma (\",\") is the typical column delimiter.", @@ -26637,7 +26752,7 @@ "EdgeLocation": "The Region where the edge is located.", "Options": "Options for connecting an attachment.", "ProposedSegmentChange": "Describes a proposed segment change. In some cases, the segment change must first be evaluated and accepted.", - "Tags": "", + "Tags": "The tags associated with the Connect attachment.", "TransportAttachmentId": "The ID of the transport attachment." }, "AWS::NetworkManager::ConnectAttachment ConnectAttachmentOptions": { @@ -26782,7 +26897,7 @@ "AWS::NetworkManager::SiteToSiteVpnAttachment": { "CoreNetworkId": "", "ProposedSegmentChange": "Describes a proposed segment change. In some cases, the segment change must first be evaluated and accepted.", - "Tags": "", + "Tags": "The tags associated with the Site-to-Site VPN attachment.", "VpnConnectionArn": "The ARN of the site-to-site VPN attachment." 
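For reference, a tagged Site-to-Site VPN attachment of the kind described above might look like this minimal sketch (the core network ID and VPN connection ARN are placeholders):

```yaml
VpnAttachment:
  Type: AWS::NetworkManager::SiteToSiteVpnAttachment
  Properties:
    CoreNetworkId: core-network-0123456789abcdef0
    VpnConnectionArn: arn:aws:ec2:us-east-1:111122223333:vpn-connection/vpn-0123456789abcdef0
    Tags:
      - Key: environment   # tags associated with the attachment
        Value: test
```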
}, "AWS::NetworkManager::SiteToSiteVpnAttachment ProposedSegmentChange": { @@ -27509,6 +27624,7 @@ "PreferredBackupWindow": "The start time for a one-hour period during which AWS OpsWorks CM backs up application-level data on your server if automated backups are enabled. Valid values must be specified in one of the following formats:\n\n- `HH:MM` for daily backups\n- `DDD:HH:MM` for weekly backups\n\n`MM` must be specified as `00` . The specified time is in coordinated universal time (UTC). The default value is a random, daily start time.\n\n*Example:* `08:00` , which represents a daily start time of 08:00 UTC.\n\n*Example:* `Mon:08:00` , which represents a start time of every Monday at 08:00 UTC. (8:00 a.m.)", "PreferredMaintenanceWindow": "The start time for a one-hour period each week during which AWS OpsWorks CM performs maintenance on the instance. Valid values must be specified in the following format: `DDD:HH:MM` . `MM` must be specified as `00` . The specified time is in coordinated universal time (UTC). The default value is a random one-hour period on Tuesday, Wednesday, or Friday. See `TimeWindowDefinition` for more information.\n\n*Example:* `Mon:08:00` , which represents a start time of every Monday at 08:00 UTC. (8:00 a.m.)", "SecurityGroupIds": "A list of security group IDs to attach to the Amazon EC2 instance. If you add this parameter, the specified security groups must be within the VPC that is specified by `SubnetIds` .\n\nIf you do not specify this parameter, AWS OpsWorks CM creates one new security group that uses TCP ports 22 and 443, open to 0.0.0.0/0 (everyone).", + "ServerName": "The name of the server. The server name must be unique within your AWS account, within each region. Server names must start with a letter; then letters, numbers, or hyphens (-) are allowed, up to a maximum of 40 characters.", "ServiceRoleArn": "The service role that the AWS OpsWorks CM service backend uses to work with your account. Although the AWS OpsWorks management console typically creates the service role for you, if you are using the AWS CLI or API commands, run the service-role-creation.yaml AWS CloudFormation template, located at https://s3.amazonaws.com/opsworks-cm-us-east-1-prod-default-assets/misc/opsworks-cm-roles.yaml. This template creates a CloudFormation stack that includes the service role and instance profile that you need.", "SubnetIds": "The IDs of subnets in which to launch the server EC2 instance.\n\nAmazon EC2-Classic customers: This field is required. All servers must run within a VPC. The VPC must have \"Auto Assign Public IP\" enabled.\n\nEC2-VPC customers: This field is optional. If you do not specify subnet IDs, your EC2 instances are created in a default subnet that is selected by Amazon EC2. If you specify subnet IDs, the VPC must have \"Auto Assign Public IP\" enabled.\n\nFor more information about supported Amazon EC2 platforms, see [Supported Platforms](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-supported-platforms.html) .", "Tags": "A map that contains tag keys and tag values to attach to an AWS OpsWorks for Chef Automate or OpsWorks for Puppet Enterprise server.\n\n- The key cannot be empty.\n- The key can be a maximum of 127 characters, and can contain only Unicode letters, numbers, or separators, or the following special characters: `+ - = . _ : / @`\n- The value can be a maximum 255 characters, and contain only Unicode letters, numbers, or separators, or the following special characters: `+ - = . 
_ : / @`\n- Leading and trailing spaces are trimmed from both the key and value.\n- A maximum of 50 user-applied tags is allowed for any AWS OpsWorks CM server." @@ -37632,6 +37748,7 @@ "EnableGlobalWriteForwarding": "Specifies whether to enable this DB cluster to forward write operations to the primary cluster of a global cluster (Aurora global database). By default, write operations are not allowed on Aurora DB clusters that are secondary clusters in an Aurora global database.\n\nYou can set this value only on Aurora DB clusters that are members of an Aurora global database. With this parameter enabled, a secondary cluster can forward writes to the current primary cluster, and the resulting changes are replicated back to this cluster. For the primary DB cluster of an Aurora global database, this value is used immediately if the primary is demoted by a global cluster API operation, but it does nothing until then.\n\nValid for Cluster Type: Aurora DB clusters only", "EnableHttpEndpoint": "Specifies whether to enable the HTTP endpoint for the DB cluster. By default, the HTTP endpoint isn't enabled.\n\nWhen enabled, the HTTP endpoint provides a connectionless web service API (RDS Data API) for running SQL queries on the DB cluster. You can also query your database from inside the RDS console with the RDS query editor.\n\nRDS Data API is supported with the following DB clusters:\n\n- Aurora PostgreSQL Serverless v2 and provisioned\n- Aurora PostgreSQL and Aurora MySQL Serverless v1\n\nFor more information, see [Using RDS Data API](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/data-api.html) in the *Amazon Aurora User Guide* .\n\nValid for Cluster Type: Aurora DB clusters only", "EnableIAMDatabaseAuthentication": "A value that indicates whether to enable mapping of AWS Identity and Access Management (IAM) accounts to database accounts. By default, mapping is disabled.\n\nFor more information, see [IAM Database Authentication](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/UsingWithRDS.IAMDBAuth.html) in the *Amazon Aurora User Guide.*\n\nValid for: Aurora DB clusters only", + "EnableLocalWriteForwarding": "Specifies whether read replicas can forward write operations to the writer DB instance in the DB cluster. By default, write operations aren't allowed on reader DB instances.\n\nValid for: Aurora DB clusters only", "Engine": "The name of the database engine to be used for this DB cluster.\n\nValid Values:\n\n- `aurora-mysql`\n- `aurora-postgresql`\n- `mysql`\n- `postgres`\n\nValid for: Aurora DB clusters and Multi-AZ DB clusters", "EngineLifecycleSupport": "The life cycle type for this DB cluster.\n\n> By default, this value is set to `open-source-rds-extended-support` , which enrolls your DB cluster into Amazon RDS Extended Support. At the end of standard support, you can avoid charges for Extended Support by setting the value to `open-source-rds-extended-support-disabled` . In this case, creating the DB cluster will fail if the DB major version is past its end of standard support date. \n\nYou can use this setting to enroll your DB cluster into Amazon RDS Extended Support. With RDS Extended Support, you can run the selected major engine version on your DB cluster past the end of standard support for that engine version. 
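A minimal sketch showing how the new `EnableLocalWriteForwarding` and `EngineLifecycleSupport` cluster settings could appear together (the engine version is a hypothetical Aurora MySQL version assumed to support local write forwarding):

```yaml
AuroraCluster:
  Type: AWS::RDS::DBCluster
  Properties:
    Engine: aurora-mysql
    EngineVersion: 8.0.mysql_aurora.3.04.0   # hypothetical version
    MasterUsername: admin_user
    ManageMasterUserPassword: true           # store the master password in Secrets Manager
    EnableLocalWriteForwarding: true         # readers forward writes to the writer
    EngineLifecycleSupport: open-source-rds-extended-support-disabled   # opt out of Extended Support
```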
For more information, see the following sections:\n\n- Amazon Aurora (PostgreSQL only) - [Using Amazon RDS Extended Support](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/extended-support.html) in the *Amazon Aurora User Guide*\n- Amazon RDS - [Using Amazon RDS Extended Support](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/extended-support.html) in the *Amazon RDS User Guide*\n\nValid for Cluster Type: Aurora DB clusters and Multi-AZ DB clusters\n\nValid Values: `open-source-rds-extended-support | open-source-rds-extended-support-disabled`\n\nDefault: `open-source-rds-extended-support`", "EngineMode": "The DB engine mode of the DB cluster, either `provisioned` or `serverless` .\n\nThe `serverless` engine mode only applies for Aurora Serverless v1 DB clusters. Aurora Serverless v2 DB clusters use the `provisioned` engine mode.\n\nFor information about limitations and requirements for Serverless DB clusters, see the following sections in the *Amazon Aurora User Guide* :\n\n- [Limitations of Aurora Serverless v1](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-serverless.html#aurora-serverless.limitations)\n- [Requirements for Aurora Serverless v2](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-serverless-v2.requirements.html)\n\nValid for Cluster Type: Aurora DB clusters only", @@ -37808,7 +37925,7 @@ }, "AWS::RDS::DBInstance ProcessorFeature": { "Name": "The name of the processor feature. Valid names are `coreCount` and `threadsPerCore` .", - "Value": "The value of a processor feature name." + "Value": "The value of a processor feature." }, "AWS::RDS::DBInstance Tag": { "Key": "A key is the required name of the tag. The string value can be from 1 to 128 Unicode characters in length and can't be prefixed with `aws:` or `rds:` . The string can contain only the set of Unicode letters, digits, white-space, '_', '.', ':', '/', '=', '+', '-', '@' (Java regex: \"^([\\\\p{L}\\\\p{Z}\\\\p{N}_.:/=+\\\\-@]*)$\").", @@ -37923,6 +38040,7 @@ "AWS::RDS::GlobalCluster": { "DeletionProtection": "Specifies whether to enable deletion protection for the new global database cluster. The global database can't be deleted when deletion protection is enabled.", "Engine": "The database engine to use for this global database cluster.\n\nValid Values: `aurora-mysql | aurora-postgresql`\n\nConstraints:\n\n- Can't be specified if `SourceDBClusterIdentifier` is specified. In this case, Amazon Aurora uses the engine of the source DB cluster.", + "EngineLifecycleSupport": "The life cycle type for this global database cluster.\n\n> By default, this value is set to `open-source-rds-extended-support` , which enrolls your global cluster into Amazon RDS Extended Support. At the end of standard support, you can avoid charges for Extended Support by setting the value to `open-source-rds-extended-support-disabled` . In this case, creating the global cluster will fail if the DB major version is past its end of standard support date. \n\nThis setting only applies to Aurora PostgreSQL-based global databases.\n\nYou can use this setting to enroll your global cluster into Amazon RDS Extended Support. With RDS Extended Support, you can run the selected major engine version on your global cluster past the end of standard support for that engine version.
For more information, see [Using Amazon RDS Extended Support](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/extended-support.html) in the *Amazon Aurora User Guide* .\n\nValid Values: `open-source-rds-extended-support | open-source-rds-extended-support-disabled`\n\nDefault: `open-source-rds-extended-support`", "EngineVersion": "The engine version to use for this global database cluster.\n\nConstraints:\n\n- Can't be specified if `SourceDBClusterIdentifier` is specified. In this case, Amazon Aurora uses the engine version of the source DB cluster.", "GlobalClusterIdentifier": "The cluster identifier for this global database cluster. This parameter is stored as a lowercase string.", "SourceDBClusterIdentifier": "The Amazon Resource Name (ARN) to use as the primary cluster of the global database.\n\nIf you provide a value for this parameter, don't specify values for the following settings because Amazon Aurora uses the values from the specified source DB cluster:\n\n- `DatabaseName`\n- `Engine`\n- `EngineVersion`\n- `StorageEncrypted`", @@ -38581,10 +38699,10 @@ "CurrentRevisionId": "The current revision ID for the simulation application. If you provide a value and it matches the latest revision ID, a new version will be created." }, "AWS::RolesAnywhere::CRL": { - "CrlData": "", - "Enabled": "", - "Name": "", - "Tags": "", + "CrlData": "The x509 v3 specified certificate revocation list (CRL).", + "Enabled": "Specifies whether the certificate revocation list (CRL) is enabled.", + "Name": "The name of the certificate revocation list (CRL).", + "Tags": "A list of tags to attach to the certificate revocation list (CRL).", "TrustAnchorArn": "The ARN of the TrustAnchor the certificate revocation list (CRL) will provide revocation for." }, "AWS::RolesAnywhere::CRL Tag": { @@ -40160,6 +40278,7 @@ "KMSKeyArn": "The Amazon Resource Name (ARN) of an encryption key for a destination in Amazon S3 . You can use a KMS key to encrypt inventory data in Amazon S3 . You must specify a key that exists in the same AWS Region as the destination Amazon S3 bucket.", "S3Destination": "Configuration information for the target S3 bucket.", "SyncFormat": "A supported sync format. The following format is currently supported: JsonSerDe", + "SyncName": "A name for the resource data sync.", "SyncSource": "Information about the source where the data was synchronized.", "SyncType": "The type of resource data sync. If `SyncType` is `SyncToDestination` , then the resource data sync synchronizes data to an S3 bucket. If the `SyncType` is `SyncFromSource` , then the resource data sync synchronizes data from AWS Organizations or from multiple AWS Regions ." }, @@ -42262,6 +42381,11 @@ "Key": "The key for the tag.", "Value": "The value for the tag." }, + "AWS::SecretsManager::ResourcePolicy": { + "BlockPublicPolicy": "Specifies whether to block resource-based policies that allow broad access to the secret. By default, Secrets Manager blocks policies that allow broad access, for example those that use a wildcard for the principal.", + "ResourcePolicy": "A JSON-formatted string for an AWS resource-based policy. For example policies, see [Permissions policy examples](https://docs.aws.amazon.com/secretsmanager/latest/userguide/auth-and-access_examples.html) .", + "SecretId": "The ARN or name of the secret to attach the resource-based policy to.\n\nFor an ARN, we recommend that you specify a complete ARN rather than a partial ARN."
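A minimal sketch of the new `AWS::SecretsManager::ResourcePolicy` resource documented above (the secret reference and principal ARN are hypothetical):

```yaml
SecretResourcePolicy:
  Type: AWS::SecretsManager::ResourcePolicy
  Properties:
    SecretId: !Ref AppSecret   # assumes an AWS::SecretsManager::Secret named AppSecret
    BlockPublicPolicy: true    # reject policies that grant broad access
    ResourcePolicy:
      Version: '2012-10-17'
      Statement:
        - Effect: Allow
          Principal:
            AWS: arn:aws:iam::111122223333:role/app-role   # placeholder principal
          Action: secretsmanager:GetSecretValue
          Resource: '*'
```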
+ }, "AWS::SecretsManager::RotationSchedule": { "HostedRotationLambda": "Creates a new Lambda rotation function based on one of the [Secrets Manager rotation function templates](https://docs.aws.amazon.com/secretsmanager/latest/userguide/reference_available-rotation-templates.html) . To use a rotation function that already exists, specify `RotationLambdaARN` instead.\n\nFor Amazon RDS master user credentials, see [AWS::RDS::DBCluster MasterUserSecret](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-rds-dbcluster-masterusersecret.html) .", "RotateImmediatelyOnUpdate": "Specifies whether to rotate the secret immediately or wait until the next scheduled rotation window. The rotation schedule is defined in `RotationRules` .\n\nIf you don't immediately rotate the secret, Secrets Manager tests the rotation configuration by running the [`testSecret` step](https://docs.aws.amazon.com/secretsmanager/latest/userguide/rotate-secrets_how.html) of the Lambda rotation function. The test creates an `AWSPENDING` version of the secret and then removes it.\n\nIf you don't specify this value, then by default, Secrets Manager rotates the secret immediately.\n\nRotation is an asynchronous process. For more information, see [How rotation works](https://docs.aws.amazon.com/secretsmanager/latest/userguide/rotate-secrets_how.html) .", @@ -42934,6 +43058,7 @@ }, "AWS::ServiceDiscovery::Instance": { "InstanceAttributes": "A string map that contains the following information for the service that you specify in `ServiceId` :\n\n- The attributes that apply to the records that are defined in the service.\n- For each attribute, the applicable value.\n\nSupported attribute keys include the following:\n\n- **AWS_ALIAS_DNS_NAME** - If you want AWS Cloud Map to create a Route\u00a053 alias record that routes traffic to an Elastic Load Balancing load balancer, specify the DNS name that is associated with the load balancer. For information about how to get the DNS name, see [AliasTarget->DNSName](https://docs.aws.amazon.com/Route53/latest/APIReference/API_AliasTarget.html#Route53-Type-AliasTarget-DNSName) in the *Route\u00a053 API Reference* .\n\nNote the following:\n\n- The configuration for the service that is specified by `ServiceId` must include settings for an `A` record, an `AAAA` record, or both.\n- In the service that is specified by `ServiceId` , the value of `RoutingPolicy` must be `WEIGHTED` .\n- If the service that is specified by `ServiceId` includes `HealthCheckConfig` settings, AWS Cloud Map will create the health check, but it won't associate the health check with the alias record.\n- Auto naming currently doesn't support creating alias records that route traffic to AWS resources other than ELB load balancers.\n- If you specify a value for `AWS_ALIAS_DNS_NAME` , don't specify values for any of the `AWS_INSTANCE` attributes.\n- **AWS_EC2_INSTANCE_ID** - *HTTP namespaces only.* The Amazon EC2 instance ID for the instance. The `AWS_INSTANCE_IPV4` attribute contains the primary private IPv4 address. When creating resources with a type of [AWS::ServiceDiscovery::Instance](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-servicediscovery-instance.html) , if the `AWS_EC2_INSTANCE_ID` attribute is specified, the only other attribute that can be specified is `AWS_INIT_HEALTH_STATUS` . 
After the resource has been created, the `AWS_INSTANCE_IPV4` attribute contains the primary private IPv4 address.\n- **AWS_INIT_HEALTH_STATUS** - If the service configuration includes `HealthCheckCustomConfig` , when creating resources with a type of [AWS::ServiceDiscovery::Instance](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-servicediscovery-instance.html) you can optionally use `AWS_INIT_HEALTH_STATUS` to specify the initial status of the custom health check, `HEALTHY` or `UNHEALTHY` . If you don't specify a value for `AWS_INIT_HEALTH_STATUS` , the initial status is `HEALTHY` . This attribute can only be used when creating resources and will not be seen on existing resources.\n- **AWS_INSTANCE_CNAME** - If the service configuration includes a `CNAME` record, the domain name that you want Route\u00a053 to return in response to DNS queries, for example, `example.com` .\n\nThis value is required if the service specified by `ServiceId` includes settings for a `CNAME` record.\n- **AWS_INSTANCE_IPV4** - If the service configuration includes an `A` record, the IPv4 address that you want Route\u00a053 to return in response to DNS queries, for example, `192.0.2.44` .\n\nThis value is required if the service specified by `ServiceId` includes settings for an `A` record. If the service includes settings for an `SRV` record, you must specify a value for `AWS_INSTANCE_IPV4` , `AWS_INSTANCE_IPV6` , or both.\n- **AWS_INSTANCE_IPV6** - If the service configuration includes an `AAAA` record, the IPv6 address that you want Route\u00a053 to return in response to DNS queries, for example, `2001:0db8:85a3:0000:0000:abcd:0001:2345` .\n\nThis value is required if the service specified by `ServiceId` includes settings for an `AAAA` record. If the service includes settings for an `SRV` record, you must specify a value for `AWS_INSTANCE_IPV4` , `AWS_INSTANCE_IPV6` , or both.\n- **AWS_INSTANCE_PORT** - If the service includes an `SRV` record, the value that you want Route\u00a053 to return for the port.\n\nIf the service includes `HealthCheckConfig` , the port on the endpoint that you want Route\u00a053 to send requests to.\n\nThis value is required if you specified settings for an `SRV` record or a Route\u00a053 health check when you created the service.", + "InstanceId": "An identifier that you want to associate with the instance. Note the following:\n\n- If the service that's specified by `ServiceId` includes settings for an `SRV` record, the value of `InstanceId` is automatically included as part of the value for the `SRV` record. For more information, see [DnsRecord > Type](https://docs.aws.amazon.com/cloud-map/latest/api/API_DnsRecord.html#cloudmap-Type-DnsRecord-Type) .\n- You can use this value to update an existing instance.\n- To register a new instance, you must specify a value that's unique among instances that you register by using the same service.\n- If you specify an existing `InstanceId` and `ServiceId` , AWS Cloud Map updates the existing DNS records, if any. 
If there's also an existing health check, AWS Cloud Map deletes the old health check and creates a new one.\n\n> The health check isn't deleted immediately, so it will still appear for a while if you submit a `ListHealthChecks` request, for example.\n\n> Do not include sensitive information in `InstanceId` if the namespace is discoverable by public DNS queries and any `Type` member of `DnsRecord` for the service contains `SRV` , because in that case the `InstanceId` is discoverable by public DNS queries.", "ServiceId": "The ID of the service whose settings you want to use for the instance." }, "AWS::ServiceDiscovery::PrivateDnsNamespace": { @@ -43059,6 +43184,7 @@ }, "AWS::Signer::SigningProfile": { "PlatformId": "The ID of a platform that is available for use by a signing profile.", + "ProfileName": "The name of the signing profile.", "SignatureValidityPeriod": "The validity period override for any signature generated using this signing profile. If unspecified, the default is 135 months.", "Tags": "A list of tags associated with the signing profile." }, @@ -43598,7 +43724,7 @@ "Type": "Currently, the following step types are supported.\n\n- *`COPY`* - Copy the file to another location.\n- *`CUSTOM`* - Perform a custom step with an AWS Lambda function target.\n- *`DECRYPT`* - Decrypt a file that was encrypted before it was uploaded.\n- *`DELETE`* - Delete the file.\n- *`TAG`* - Add a tag to the file." }, "AWS::VerifiedPermissions::IdentitySource": { - "Configuration": "Contains configuration information about an identity source.", + "Configuration": "Contains configuration information used when creating a new identity source.", "PolicyStoreId": "Specifies the ID of the policy store in which you want to store this identity source. Only policies and requests made using this policy store can reference identities from the identity provider configured in the new identity source.", "PrincipalEntityType": "Specifies the namespace and data type of the principals generated for identities authenticated by the new identity source." }, @@ -43611,7 +43737,30 @@ "UserPoolArn": "The [Amazon Resource Name (ARN)](https://docs.aws.amazon.com//general/latest/gr/aws-arns-and-namespaces.html) of the Amazon Cognito user pool that contains the identities to be authorized." }, "AWS::VerifiedPermissions::IdentitySource IdentitySourceConfiguration": { - "CognitoUserPoolConfiguration": "A structure that contains configuration information used when creating or updating an identity source that represents a connection to an Amazon Cognito user pool used as an identity provider for Verified Permissions ." + "CognitoUserPoolConfiguration": "A structure that contains configuration information used when creating or updating an identity source that represents a connection to an Amazon Cognito user pool used as an identity provider for Verified Permissions .", + "OpenIdConnectConfiguration": "A structure that contains configuration information used when creating or updating an identity source that represents a connection to an OpenID Connect (OIDC) identity provider." + }, + "AWS::VerifiedPermissions::IdentitySource OpenIdConnectAccessTokenConfiguration": { + "Audiences": "The access token `aud` claim values that you want to accept in your policy store. For example, `https://myapp.example.com, https://myapp2.example.com` .", + "PrincipalIdClaim": "The claim that determines the principal in OIDC access tokens. For example, `sub` ." + }, + "AWS::VerifiedPermissions::IdentitySource OpenIdConnectConfiguration": { + "EntityIdPrefix": "A descriptive string that you want to prefix to user entities from your OIDC identity provider. 
For example, if you set an `entityIdPrefix` of `MyOIDCProvider` , you can reference principals in your policies in the format `MyCorp::User::MyOIDCProvider|Carlos` .", + "GroupConfiguration": "The claim in OIDC identity provider tokens that indicates a user's group membership, and the entity type that you want to map it to. For example, this object can map the contents of a `groups` claim to `MyCorp::UserGroup` .", + "Issuer": "The issuer URL of an OIDC identity provider. This URL must have an OIDC discovery endpoint at the path `.well-known/openid-configuration` .", + "TokenSelection": "The token type that you want to process from your OIDC identity provider. Your policy store can process either identity (ID) or access tokens from a given OIDC identity source." + }, + "AWS::VerifiedPermissions::IdentitySource OpenIdConnectGroupConfiguration": { + "GroupClaim": "The token claim that you want Verified Permissions to interpret as group membership. For example, `groups` .", + "GroupEntityType": "The policy store entity type that you want to map your users' group claim to. For example, `MyCorp::UserGroup` . A group entity type is an entity that can have a user entity type as a member." + }, + "AWS::VerifiedPermissions::IdentitySource OpenIdConnectIdentityTokenConfiguration": { + "ClientIds": "The ID token audience, or client ID, claim values that you want to accept in your policy store from an OIDC identity provider. For example, `1example23456789, 2example10111213` .", + "PrincipalIdClaim": "The claim that determines the principal in OIDC identity (ID) tokens. For example, `sub` ." + }, + "AWS::VerifiedPermissions::IdentitySource OpenIdConnectTokenSelection": { + "AccessTokenOnly": "The OIDC configuration for processing access tokens. Contains allowed audience claims, for example `https://auth.example.com` , and the claim that you want to map to the principal, for example `sub` .", + "IdentityTokenOnly": "The OIDC configuration for processing identity (ID) tokens. Contains allowed client ID claims, for example `1example23456789` , and the claim that you want to map to the principal, for example `sub` ." }, "AWS::VerifiedPermissions::Policy": { "Definition": "Specifies the policy type and content to use for the new or updated policy. The definition structure must include either a `Static` or a `TemplateLinked` element.", @@ -43640,7 +43789,7 @@ "ValidationSettings": "Specifies the validation setting for this policy store.\n\nCurrently, the only valid and required value is `Mode` .\n\n> We recommend that you turn on `STRICT` mode only after you define a schema. If a schema doesn't exist, then `STRICT` mode causes any policy to fail validation, and Verified Permissions rejects the policy. You can turn off validation by using the [UpdatePolicyStore](https://docs.aws.amazon.com/verifiedpermissions/latest/apireference/API_UpdatePolicyStore) operation. Then, when you have a schema defined, use [UpdatePolicyStore](https://docs.aws.amazon.com/verifiedpermissions/latest/apireference/API_UpdatePolicyStore) again to turn validation back on." }, "AWS::VerifiedPermissions::PolicyStore SchemaDefinition": { - "CedarJson": "A JSON string representation of the schema supported by applications that use this policy store. 
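A hedged sketch of how the OIDC identity source structures documented above nest in a template; the policy store reference, issuer, prefix, entity types, and audience are invented placeholders:

```yaml
Resources:
  OidcIdentitySource:
    Type: AWS::VerifiedPermissions::IdentitySource
    Properties:
      # ExamplePolicyStore is assumed to be defined elsewhere in the template.
      PolicyStoreId: !GetAtt ExamplePolicyStore.PolicyStoreId
      PrincipalEntityType: MyCorp::User
      Configuration:
        OpenIdConnectConfiguration:
          Issuer: https://auth.example.com  # must serve .well-known/openid-configuration
          EntityIdPrefix: MyOIDCProvider    # principals look like MyCorp::User::"MyOIDCProvider|Carlos"
          GroupConfiguration:
            GroupClaim: groups              # token claim carrying group membership
            GroupEntityType: MyCorp::UserGroup
          TokenSelection:
            AccessTokenOnly:                # or IdentityTokenOnly, not both
              Audiences:
                - https://myapp.example.com
              PrincipalIdClaim: sub         # claim mapped to the principal
```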
For more information, see [Policy store schema](https://docs.aws.amazon.com/verifiedpermissions/latest/userguide/schema.html) in the *Amazon Verified Permissions User Guide* ." }, "AWS::VerifiedPermissions::PolicyStore ValidationSettings": { "Mode": "The validation mode currently configured for this policy store. The valid values are:\n\n- *OFF* \u2013 Neither Verified Permissions nor Cedar perform any validation on policies. No validation errors are reported by either service.\n- *STRICT* \u2013 Requires a schema to be present in the policy store. Cedar performs validation on all submitted new or updated static policies and policy templates. Any that fail validation are rejected and Cedar doesn't store them in the policy store.\n\n> If `Mode=STRICT` and the policy store doesn't contain a schema, Verified Permissions rejects all static policies and policy templates because there is no schema to validate against.\n> \n> To submit a static policy or policy template without a schema, you must turn off validation." @@ -44221,7 +44370,7 @@ "FallbackBehavior": "The match status to assign to the web request if the request doesn't have a JA3 fingerprint.\n\nYou can specify the following fallback behaviors:\n\n- `MATCH` - Treat the web request as matching the rule statement. AWS WAF applies the rule action to the request.\n- `NO_MATCH` - Treat the web request as not matching the rule statement." }, "AWS::WAFv2::RuleGroup JsonBody": { - "InvalidFallbackBehavior": "What AWS WAF should do if it fails to completely parse the JSON body. The options are the following:\n\n- `EVALUATE_AS_STRING` - Inspect the body as plain text. AWS WAF applies the text transformations and inspection criteria that you defined for the JSON inspection to the body text string.\n- `MATCH` - Treat the web request as matching the rule statement. AWS WAF applies the rule action to the request.\n- `NO_MATCH` - Treat the web request as not matching the rule statement.\n\nIf you don't provide this setting, AWS WAF parses and evaluates the content only up to the first parsing failure that it encounters.\n\nAWS WAF does its best to parse the entire JSON body, but might be forced to stop for reasons such as invalid characters, duplicate keys, truncation, and any content whose root node isn't an object or an array.\n\nAWS WAF parses the JSON in the following examples as two valid key, value pairs:\n\n- Missing comma: `{\"key1\":\"value1\"\"key2\":\"value2\"}`\n- Missing colon: `{\"key1\":\"value1\",\"key2\"\"value2\"}`\n- Extra colons: `{\"key1\"::\"value1\",\"key2\"\"value2\"}`", + "InvalidFallbackBehavior": "What AWS WAF should do if it fails to completely parse the JSON body. The options are the following:\n\n- `EVALUATE_AS_STRING` - Inspect the body as plain text. AWS WAF applies the text transformations and inspection criteria that you defined for the JSON inspection to the body text string.\n- `MATCH` - Treat the web request as matching the rule statement. AWS WAF applies the rule action to the request.\n- `NO_MATCH` - Treat the web request as not matching the rule statement.\n\nIf you don't provide this setting, AWS WAF parses and evaluates the content only up to the first parsing failure that it encounters.\n\n> AWS WAF parsing doesn't fully validate the input JSON string, so parsing can succeed even for invalid JSON. When parsing succeeds, AWS WAF doesn't apply the fallback behavior. 
For more information, see [JSON body](https://docs.aws.amazon.com/waf/latest/developerguide/waf-rule-statement-fields-list.html#waf-rule-statement-request-component-json-body) in the *AWS WAF Developer Guide* .", "MatchPattern": "The patterns to look for in the JSON body. AWS WAF inspects the results of these pattern matches against the rule inspection criteria.", "MatchScope": "The parts of the JSON to match against using the `MatchPattern` . If you specify `ALL` , AWS WAF matches against keys and values.\n\n`All` does not require a match to be found in the keys and a match to be found in the values. It requires a match to be found in the keys or the values or both. To require a match in the keys and in the values, use a logical `AND` statement to combine two match rules, one that inspects the keys and another that inspects the values.", "OversizeHandling": "What AWS WAF should do if the body is larger than AWS WAF can inspect.\n\nAWS WAF does not support inspecting the entire contents of the web request body if the body exceeds the limit for the resource type. When a web request body is larger than the limit, the underlying host service only forwards the contents that are within the limit to AWS WAF for inspection.\n\n- For Application Load Balancer and AWS AppSync , the limit is fixed at 8 KB (8,192 bytes).\n- For CloudFront, API Gateway, Amazon Cognito, App Runner, and Verified Access, the default limit is 16 KB (16,384 bytes), and you can increase the limit for each resource type in the web ACL `AssociationConfig` , for additional processing fees.\n\nThe options for oversize handling are the following:\n\n- `CONTINUE` - Inspect the available body contents normally, according to the rule inspection criteria.\n- `MATCH` - Treat the web request as matching the rule statement. AWS WAF applies the rule action to the request.\n- `NO_MATCH` - Treat the web request as not matching the rule statement.\n\nYou can combine the `MATCH` or `NO_MATCH` settings for oversize handling with your rule and web ACL action settings, so that you block any request whose body is over the limit.\n\nDefault: `CONTINUE`" @@ -44513,7 +44662,7 @@ "FallbackBehavior": "The match status to assign to the web request if the request doesn't have a JA3 fingerprint.\n\nYou can specify the following fallback behaviors:\n\n- `MATCH` - Treat the web request as matching the rule statement. AWS WAF applies the rule action to the request.\n- `NO_MATCH` - Treat the web request as not matching the rule statement." }, "AWS::WAFv2::WebACL JsonBody": { - "InvalidFallbackBehavior": "What AWS WAF should do if it fails to completely parse the JSON body. The options are the following:\n\n- `EVALUATE_AS_STRING` - Inspect the body as plain text. AWS WAF applies the text transformations and inspection criteria that you defined for the JSON inspection to the body text string.\n- `MATCH` - Treat the web request as matching the rule statement. 
AWS WAF applies the rule action to the request.\n- `NO_MATCH` - Treat the web request as not matching the rule statement.\n\nIf you don't provide this setting, AWS WAF parses and evaluates the content only up to the first parsing failure that it encounters.\n\nAWS WAF does its best to parse the entire JSON body, but might be forced to stop for reasons such as invalid characters, duplicate keys, truncation, and any content whose root node isn't an object or an array.\n\nAWS WAF parses the JSON in the following examples as two valid key, value pairs:\n\n- Missing comma: `{\"key1\":\"value1\"\"key2\":\"value2\"}`\n- Missing colon: `{\"key1\":\"value1\",\"key2\"\"value2\"}`\n- Extra colons: `{\"key1\"::\"value1\",\"key2\"\"value2\"}`", + "InvalidFallbackBehavior": "What AWS WAF should do if it fails to completely parse the JSON body. The options are the following:\n\n- `EVALUATE_AS_STRING` - Inspect the body as plain text. AWS WAF applies the text transformations and inspection criteria that you defined for the JSON inspection to the body text string.\n- `MATCH` - Treat the web request as matching the rule statement. AWS WAF applies the rule action to the request.\n- `NO_MATCH` - Treat the web request as not matching the rule statement.\n\nIf you don't provide this setting, AWS WAF parses and evaluates the content only up to the first parsing failure that it encounters.\n\n> AWS WAF parsing doesn't fully validate the input JSON string, so parsing can succeed even for invalid JSON. When parsing succeeds, AWS WAF doesn't apply the fallback behavior. For more information, see [JSON body](https://docs.aws.amazon.com/waf/latest/developerguide/waf-rule-statement-fields-list.html#waf-rule-statement-request-component-json-body) in the *AWS WAF Developer Guide* .", "MatchPattern": "The patterns to look for in the JSON body. AWS WAF inspects the results of these pattern matches against the rule inspection criteria.", "MatchScope": "The parts of the JSON to match against using the `MatchPattern` . If you specify `ALL` , AWS WAF matches against keys and values.\n\n`All` does not require a match to be found in the keys and a match to be found in the values. It requires a match to be found in the keys or the values or both. To require a match in the keys and in the values, use a logical `AND` statement to combine two match rules, one that inspects the keys and another that inspects the values.", "OversizeHandling": "What AWS WAF should do if the body is larger than AWS WAF can inspect.\n\nAWS WAF does not support inspecting the entire contents of the web request body if the body exceeds the limit for the resource type. When a web request body is larger than the limit, the underlying host service only forwards the contents that are within the limit to AWS WAF for inspection.\n\n- For Application Load Balancer and AWS AppSync , the limit is fixed at 8 KB (8,192 bytes).\n- For CloudFront, API Gateway, Amazon Cognito, App Runner, and Verified Access, the default limit is 16 KB (16,384 bytes), and you can increase the limit for each resource type in the web ACL `AssociationConfig` , for additional processing fees.\n\nThe options for oversize handling are the following:\n\n- `CONTINUE` - Inspect the available body contents normally, according to the rule inspection criteria.\n- `MATCH` - Treat the web request as matching the rule statement. 
AWS WAF applies the rule action to the request.\n- `NO_MATCH` - Treat the web request as not matching the rule statement.\n\nYou can combine the `MATCH` or `NO_MATCH` settings for oversize handling with your rule and web ACL action settings, so that you block any request whose body is over the limit.\n\nDefault: `CONTINUE`" @@ -44801,9 +44950,9 @@ "DirectoryId": "The identifier of the AWS Directory Service directory for the WorkSpace.", "RootVolumeEncryptionEnabled": "Indicates whether the data stored on the root volume is encrypted.", "Tags": "The tags for the WorkSpace.", - "UserName": "The user name of the user for the WorkSpace. This user name must exist in the AWS Directory Service directory for the WorkSpace.\n\nThe reserved keyword, `[UNDEFINED]` , is used when creating user-decoupled WorkSpaces.", + "UserName": "The user name of the user for the WorkSpace. This user name must exist in the AWS Directory Service directory for the WorkSpace.", "UserVolumeEncryptionEnabled": "Indicates whether the data stored on the user volume is encrypted.", - "VolumeEncryptionKey": "The ARN of the symmetric AWS KMS key used to encrypt data stored on your WorkSpace. Amazon WorkSpaces does not support asymmetric KMS keys.", + "VolumeEncryptionKey": "The symmetric AWS KMS key used to encrypt data stored on your WorkSpace. Amazon WorkSpaces does not support asymmetric KMS keys.", "WorkspaceProperties": "The WorkSpace properties." }, "AWS::WorkSpaces::Workspace Tag": { @@ -44813,10 +44962,36 @@ "AWS::WorkSpaces::Workspace WorkspaceProperties": { "ComputeTypeName": "The compute type. For more information, see [Amazon WorkSpaces Bundles](https://docs.aws.amazon.com/workspaces/details/#Amazon_WorkSpaces_Bundles) .", "RootVolumeSizeGib": "The size of the root volume. For important information about how to modify the size of the root and user volumes, see [Modify a WorkSpace](https://docs.aws.amazon.com/workspaces/latest/adminguide/modify-workspaces.html) .", - "RunningMode": "The running mode. For more information, see [Manage the WorkSpace Running Mode](https://docs.aws.amazon.com/workspaces/latest/adminguide/running-mode.html) .\n\n> The `MANUAL` value is only supported by Amazon WorkSpaces Core. Contact your account team to be allow-listed to use this value. For more information, see [Amazon WorkSpaces Core](https://docs.aws.amazon.com/workspaces/core/) .", + "RunningMode": "The running mode. For more information, see [Manage the WorkSpace Running Mode](https://docs.aws.amazon.com/workspaces/latest/adminguide/running-mode.html) .", "RunningModeAutoStopTimeoutInMinutes": "The time after a user logs off when WorkSpaces are automatically stopped. Configured in 60-minute intervals.", "UserVolumeSizeGib": "The size of the user storage. For important information about how to modify the size of the root and user volumes, see [Modify a WorkSpace](https://docs.aws.amazon.com/workspaces/latest/adminguide/modify-workspaces.html) ." }, + "AWS::WorkSpaces::WorkspacesPool": { + "ApplicationSettings": "The persistent application settings for users of the pool.", + "BundleId": "The identifier of the bundle used by the pool.", + "Capacity": "Describes the user capacity for the pool.", + "Description": "The description of the pool.", + "DirectoryId": "The identifier of the directory used by the pool.", + "PoolName": "The name of the pool.", + "Tags": "The tags for the pool.", + "TimeoutSettings": "The amount of time that a pool session remains active after users disconnect. 
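A hedged sketch of how the new `AWS::WorkSpaces::WorkspacesPool` properties (including the sub-structures documented just below) might be combined; every identifier and timeout value here is a placeholder, not a documented default:

```yaml
Resources:
  ExampleWorkspacesPool:
    Type: AWS::WorkSpaces::WorkspacesPool
    Properties:
      PoolName: example-pool
      Description: Illustrative pool definition
      BundleId: wsb-1a2b3c4d5    # placeholder bundle identifier
      DirectoryId: wsd-1a2b3c4d5 # placeholder directory identifier
      Capacity:
        DesiredUserSessions: 5   # target number of concurrent user sessions
      ApplicationSettings:
        Status: ENABLED          # persist users' application settings across sessions
        SettingsGroup: example-group
      TimeoutSettings:
        DisconnectTimeoutInSeconds: 900     # reconnect window after a disconnect
        IdleDisconnectTimeoutInSeconds: 600 # disconnect after this much idle time
        MaxUserDurationInSeconds: 57600     # upper bound on a session's length
```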
If they try to reconnect to the pool session after a disconnection or network interruption within this time interval, they are connected to their previous session. Otherwise, they are connected to a new session with a new pool instance." + }, + "AWS::WorkSpaces::WorkspacesPool ApplicationSettings": { + "SettingsGroup": "The path prefix for the S3 bucket where users\u2019 persistent application settings are stored.", + "Status": "Enables or disables persistent application settings for users during their pool sessions." + }, + "AWS::WorkSpaces::WorkspacesPool Capacity": { + "DesiredUserSessions": "The desired number of user sessions for the WorkSpaces in the pool." + }, + "AWS::WorkSpaces::WorkspacesPool Tag": { + "Key": "The key of the tag.", + "Value": "The value of the tag." + }, + "AWS::WorkSpaces::WorkspacesPool TimeoutSettings": { + "DisconnectTimeoutInSeconds": "Specifies the amount of time, in seconds, that a streaming session remains active after users disconnect. If users try to reconnect to the streaming session after a disconnection or network interruption within the time set, they are connected to their previous session. Otherwise, they are connected to a new session with a new streaming instance.", + "IdleDisconnectTimeoutInSeconds": "The amount of time in seconds a connection will stay active while idle.", + "MaxUserDurationInSeconds": "Specifies the maximum amount of time, in seconds, that a streaming session can remain active. If users are still connected to a streaming instance five minutes before this limit is reached, they are prompted to save any open documents before being disconnected. After this time elapses, the instance is terminated and replaced by a new instance." + }, "AWS::WorkSpacesThinClient::Environment": { "DesiredSoftwareSetId": "The ID of the software set to apply.", "DesktopArn": "The Amazon Resource Name (ARN) of the desktop to stream from Amazon WorkSpaces, WorkSpaces Web, or AppStream 2.0.", @@ -44863,7 +45038,7 @@ "Description": "The description of the IP access settings.", "DisplayName": "The display name of the IP access settings.", "IpRules": "The IP rules of the IP access settings.", - "Tags": "The tags to add to the browser settings resource. A tag is a key-value pair." + "Tags": "The tags to add to the IP access settings resource. A tag is a key-value pair." }, "AWS::WorkSpacesWeb::IpAccessSettings IpRule": { "Description": "The description of the IP rule.", diff --git a/schema_source/cloudformation.schema.json b/schema_source/cloudformation.schema.json index 5ff6f9dcc..abdbffec3 100644 --- a/schema_source/cloudformation.schema.json +++ b/schema_source/cloudformation.schema.json @@ -20605,7 +20605,7 @@ "type": "number" }, "ResourceId": { - "markdownDescription": "The identifier of the resource associated with the scalable target. This string consists of the resource type and unique identifier.\n\n- ECS service - The resource type is `service` and the unique identifier is the cluster name and service name. Example: `service/my-cluster/my-service` .\n- Spot Fleet - The resource type is `spot-fleet-request` and the unique identifier is the Spot Fleet request ID. Example: `spot-fleet-request/sfr-73fbd2ce-aa30-494c-8788-1cee4EXAMPLE` .\n- EMR cluster - The resource type is `instancegroup` and the unique identifier is the cluster ID and instance group ID. Example: `instancegroup/j-2EEZNYKUA1NTV/ig-1791Y4E1L8YI0` .\n- AppStream 2.0 fleet - The resource type is `fleet` and the unique identifier is the fleet name. 
Example: `fleet/sample-fleet` .\n- DynamoDB table - The resource type is `table` and the unique identifier is the table name. Example: `table/my-table` .\n- DynamoDB global secondary index - The resource type is `index` and the unique identifier is the index name. Example: `table/my-table/index/my-table-index` .\n- Aurora DB cluster - The resource type is `cluster` and the unique identifier is the cluster name. Example: `cluster:my-db-cluster` .\n- SageMaker endpoint variant - The resource type is `variant` and the unique identifier is the resource ID. Example: `endpoint/my-end-point/variant/KMeansClustering` .\n- Custom resources are not supported with a resource type. This parameter must specify the `OutputValue` from the CloudFormation template stack used to access the resources. The unique identifier is defined by the service provider. More information is available in our [GitHub repository](https://docs.aws.amazon.com/https://github.com/aws/aws-auto-scaling-custom-resource) .\n- Amazon Comprehend document classification endpoint - The resource type and unique identifier are specified using the endpoint ARN. Example: `arn:aws:comprehend:us-west-2:123456789012:document-classifier-endpoint/EXAMPLE` .\n- Amazon Comprehend entity recognizer endpoint - The resource type and unique identifier are specified using the endpoint ARN. Example: `arn:aws:comprehend:us-west-2:123456789012:entity-recognizer-endpoint/EXAMPLE` .\n- Lambda provisioned concurrency - The resource type is `function` and the unique identifier is the function name with a function version or alias name suffix that is not `$LATEST` . Example: `function:my-function:prod` or `function:my-function:1` .\n- Amazon Keyspaces table - The resource type is `table` and the unique identifier is the table name. Example: `keyspace/mykeyspace/table/mytable` .\n- Amazon MSK cluster - The resource type and unique identifier are specified using the cluster ARN. Example: `arn:aws:kafka:us-east-1:123456789012:cluster/demo-cluster-1/6357e0b2-0e6a-4b86-a0b4-70df934c2e31-5` .\n- Amazon ElastiCache replication group - The resource type is `replication-group` and the unique identifier is the replication group name. Example: `replication-group/mycluster` .\n- Neptune cluster - The resource type is `cluster` and the unique identifier is the cluster name. Example: `cluster:mycluster` .\n- SageMaker serverless endpoint - The resource type is `variant` and the unique identifier is the resource ID. Example: `endpoint/my-end-point/variant/KMeansClustering` .\n- SageMaker inference component - The resource type is `inference-component` and the unique identifier is the resource ID. Example: `inference-component/my-inference-component` .", + "markdownDescription": "The identifier of the resource associated with the scalable target. This string consists of the resource type and unique identifier.\n\n- ECS service - The resource type is `service` and the unique identifier is the cluster name and service name. Example: `service/my-cluster/my-service` .\n- Spot Fleet - The resource type is `spot-fleet-request` and the unique identifier is the Spot Fleet request ID. Example: `spot-fleet-request/sfr-73fbd2ce-aa30-494c-8788-1cee4EXAMPLE` .\n- EMR cluster - The resource type is `instancegroup` and the unique identifier is the cluster ID and instance group ID. Example: `instancegroup/j-2EEZNYKUA1NTV/ig-1791Y4E1L8YI0` .\n- AppStream 2.0 fleet - The resource type is `fleet` and the unique identifier is the fleet name. 
Example: `fleet/sample-fleet` .\n- DynamoDB table - The resource type is `table` and the unique identifier is the table name. Example: `table/my-table` .\n- DynamoDB global secondary index - The resource type is `index` and the unique identifier is the index name. Example: `table/my-table/index/my-table-index` .\n- Aurora DB cluster - The resource type is `cluster` and the unique identifier is the cluster name. Example: `cluster:my-db-cluster` .\n- SageMaker endpoint variant - The resource type is `variant` and the unique identifier is the resource ID. Example: `endpoint/my-end-point/variant/KMeansClustering` .\n- Custom resources are not supported with a resource type. This parameter must specify the `OutputValue` from the CloudFormation template stack used to access the resources. The unique identifier is defined by the service provider. More information is available in our [GitHub repository](https://docs.aws.amazon.com/https://github.com/aws/aws-auto-scaling-custom-resource) .\n- Amazon Comprehend document classification endpoint - The resource type and unique identifier are specified using the endpoint ARN. Example: `arn:aws:comprehend:us-west-2:123456789012:document-classifier-endpoint/EXAMPLE` .\n- Amazon Comprehend entity recognizer endpoint - The resource type and unique identifier are specified using the endpoint ARN. Example: `arn:aws:comprehend:us-west-2:123456789012:entity-recognizer-endpoint/EXAMPLE` .\n- Lambda provisioned concurrency - The resource type is `function` and the unique identifier is the function name with a function version or alias name suffix that is not `$LATEST` . Example: `function:my-function:prod` or `function:my-function:1` .\n- Amazon Keyspaces table - The resource type is `table` and the unique identifier is the table name. Example: `keyspace/mykeyspace/table/mytable` .\n- Amazon MSK cluster - The resource type and unique identifier are specified using the cluster ARN. Example: `arn:aws:kafka:us-east-1:123456789012:cluster/demo-cluster-1/6357e0b2-0e6a-4b86-a0b4-70df934c2e31-5` .\n- Amazon ElastiCache replication group - The resource type is `replication-group` and the unique identifier is the replication group name. Example: `replication-group/mycluster` .\n- Neptune cluster - The resource type is `cluster` and the unique identifier is the cluster name. Example: `cluster:mycluster` .\n- SageMaker serverless endpoint - The resource type is `variant` and the unique identifier is the resource ID. Example: `endpoint/my-end-point/variant/KMeansClustering` .\n- SageMaker inference component - The resource type is `inference-component` and the unique identifier is the resource ID. Example: `inference-component/my-inference-component` .\n- Pool of WorkSpaces - The resource type is `workspacespool` and the unique identifier is the pool ID. Example: `workspacespool/wspool-123456` .", "title": "ResourceId", "type": "string" }, @@ -20615,7 +20615,7 @@ "type": "string" }, "ScalableDimension": { - "markdownDescription": "The scalable dimension associated with the scalable target. 
This string consists of the service namespace, resource type, and scaling property.\n\n- `ecs:service:DesiredCount` - The desired task count of an ECS service.\n- `elasticmapreduce:instancegroup:InstanceCount` - The instance count of an EMR Instance Group.\n- `ec2:spot-fleet-request:TargetCapacity` - The target capacity of a Spot Fleet.\n- `appstream:fleet:DesiredCapacity` - The desired capacity of an AppStream 2.0 fleet.\n- `dynamodb:table:ReadCapacityUnits` - The provisioned read capacity for a DynamoDB table.\n- `dynamodb:table:WriteCapacityUnits` - The provisioned write capacity for a DynamoDB table.\n- `dynamodb:index:ReadCapacityUnits` - The provisioned read capacity for a DynamoDB global secondary index.\n- `dynamodb:index:WriteCapacityUnits` - The provisioned write capacity for a DynamoDB global secondary index.\n- `rds:cluster:ReadReplicaCount` - The count of Aurora Replicas in an Aurora DB cluster. Available for Aurora MySQL-compatible edition and Aurora PostgreSQL-compatible edition.\n- `sagemaker:variant:DesiredInstanceCount` - The number of EC2 instances for a SageMaker model endpoint variant.\n- `custom-resource:ResourceType:Property` - The scalable dimension for a custom resource provided by your own application or service.\n- `comprehend:document-classifier-endpoint:DesiredInferenceUnits` - The number of inference units for an Amazon Comprehend document classification endpoint.\n- `comprehend:entity-recognizer-endpoint:DesiredInferenceUnits` - The number of inference units for an Amazon Comprehend entity recognizer endpoint.\n- `lambda:function:ProvisionedConcurrency` - The provisioned concurrency for a Lambda function.\n- `cassandra:table:ReadCapacityUnits` - The provisioned read capacity for an Amazon Keyspaces table.\n- `cassandra:table:WriteCapacityUnits` - The provisioned write capacity for an Amazon Keyspaces table.\n- `kafka:broker-storage:VolumeSize` - The provisioned volume size (in GiB) for brokers in an Amazon MSK cluster.\n- `elasticache:replication-group:NodeGroups` - The number of node groups for an Amazon ElastiCache replication group.\n- `elasticache:replication-group:Replicas` - The number of replicas per node group for an Amazon ElastiCache replication group.\n- `neptune:cluster:ReadReplicaCount` - The count of read replicas in an Amazon Neptune DB cluster.\n- `sagemaker:variant:DesiredProvisionedConcurrency` - The provisioned concurrency for a SageMaker serverless endpoint.\n- `sagemaker:inference-component:DesiredCopyCount` - The number of copies across an endpoint for a SageMaker inference component.", + "markdownDescription": "The scalable dimension associated with the scalable target. 
This string consists of the service namespace, resource type, and scaling property.\n\n- `ecs:service:DesiredCount` - The task count of an ECS service.\n- `elasticmapreduce:instancegroup:InstanceCount` - The instance count of an EMR Instance Group.\n- `ec2:spot-fleet-request:TargetCapacity` - The target capacity of a Spot Fleet.\n- `appstream:fleet:DesiredCapacity` - The capacity of an AppStream 2.0 fleet.\n- `dynamodb:table:ReadCapacityUnits` - The provisioned read capacity for a DynamoDB table.\n- `dynamodb:table:WriteCapacityUnits` - The provisioned write capacity for a DynamoDB table.\n- `dynamodb:index:ReadCapacityUnits` - The provisioned read capacity for a DynamoDB global secondary index.\n- `dynamodb:index:WriteCapacityUnits` - The provisioned write capacity for a DynamoDB global secondary index.\n- `rds:cluster:ReadReplicaCount` - The count of Aurora Replicas in an Aurora DB cluster. Available for Aurora MySQL-compatible edition and Aurora PostgreSQL-compatible edition.\n- `sagemaker:variant:DesiredInstanceCount` - The number of EC2 instances for a SageMaker model endpoint variant.\n- `custom-resource:ResourceType:Property` - The scalable dimension for a custom resource provided by your own application or service.\n- `comprehend:document-classifier-endpoint:DesiredInferenceUnits` - The number of inference units for an Amazon Comprehend document classification endpoint.\n- `comprehend:entity-recognizer-endpoint:DesiredInferenceUnits` - The number of inference units for an Amazon Comprehend entity recognizer endpoint.\n- `lambda:function:ProvisionedConcurrency` - The provisioned concurrency for a Lambda function.\n- `cassandra:table:ReadCapacityUnits` - The provisioned read capacity for an Amazon Keyspaces table.\n- `cassandra:table:WriteCapacityUnits` - The provisioned write capacity for an Amazon Keyspaces table.\n- `kafka:broker-storage:VolumeSize` - The provisioned volume size (in GiB) for brokers in an Amazon MSK cluster.\n- `elasticache:replication-group:NodeGroups` - The number of node groups for an Amazon ElastiCache replication group.\n- `elasticache:replication-group:Replicas` - The number of replicas per node group for an Amazon ElastiCache replication group.\n- `neptune:cluster:ReadReplicaCount` - The count of read replicas in an Amazon Neptune DB cluster.\n- `sagemaker:variant:DesiredProvisionedConcurrency` - The provisioned concurrency for a SageMaker serverless endpoint.\n- `sagemaker:inference-component:DesiredCopyCount` - The number of copies across an endpoint for a SageMaker inference component.\n- `workspaces:workspacespool:DesiredUserSessions` - The number of user sessions for the WorkSpaces in the pool.", "title": "ScalableDimension", "type": "string" }, @@ -20791,12 +20791,12 @@ "type": "string" }, "ResourceId": { - "markdownDescription": "The identifier of the resource associated with the scaling policy. This string consists of the resource type and unique identifier.\n\n- ECS service - The resource type is `service` and the unique identifier is the cluster name and service name. Example: `service/my-cluster/my-service` .\n- Spot Fleet - The resource type is `spot-fleet-request` and the unique identifier is the Spot Fleet request ID. Example: `spot-fleet-request/sfr-73fbd2ce-aa30-494c-8788-1cee4EXAMPLE` .\n- EMR cluster - The resource type is `instancegroup` and the unique identifier is the cluster ID and instance group ID. 
Example: `instancegroup/j-2EEZNYKUA1NTV/ig-1791Y4E1L8YI0` .\n- AppStream 2.0 fleet - The resource type is `fleet` and the unique identifier is the fleet name. Example: `fleet/sample-fleet` .\n- DynamoDB table - The resource type is `table` and the unique identifier is the table name. Example: `table/my-table` .\n- DynamoDB global secondary index - The resource type is `index` and the unique identifier is the index name. Example: `table/my-table/index/my-table-index` .\n- Aurora DB cluster - The resource type is `cluster` and the unique identifier is the cluster name. Example: `cluster:my-db-cluster` .\n- SageMaker endpoint variant - The resource type is `variant` and the unique identifier is the resource ID. Example: `endpoint/my-end-point/variant/KMeansClustering` .\n- Custom resources are not supported with a resource type. This parameter must specify the `OutputValue` from the CloudFormation template stack used to access the resources. The unique identifier is defined by the service provider. More information is available in our [GitHub repository](https://docs.aws.amazon.com/https://github.com/aws/aws-auto-scaling-custom-resource) .\n- Amazon Comprehend document classification endpoint - The resource type and unique identifier are specified using the endpoint ARN. Example: `arn:aws:comprehend:us-west-2:123456789012:document-classifier-endpoint/EXAMPLE` .\n- Amazon Comprehend entity recognizer endpoint - The resource type and unique identifier are specified using the endpoint ARN. Example: `arn:aws:comprehend:us-west-2:123456789012:entity-recognizer-endpoint/EXAMPLE` .\n- Lambda provisioned concurrency - The resource type is `function` and the unique identifier is the function name with a function version or alias name suffix that is not `$LATEST` . Example: `function:my-function:prod` or `function:my-function:1` .\n- Amazon Keyspaces table - The resource type is `table` and the unique identifier is the table name. Example: `keyspace/mykeyspace/table/mytable` .\n- Amazon MSK cluster - The resource type and unique identifier are specified using the cluster ARN. Example: `arn:aws:kafka:us-east-1:123456789012:cluster/demo-cluster-1/6357e0b2-0e6a-4b86-a0b4-70df934c2e31-5` .\n- Amazon ElastiCache replication group - The resource type is `replication-group` and the unique identifier is the replication group name. Example: `replication-group/mycluster` .\n- Neptune cluster - The resource type is `cluster` and the unique identifier is the cluster name. Example: `cluster:mycluster` .\n- SageMaker serverless endpoint - The resource type is `variant` and the unique identifier is the resource ID. Example: `endpoint/my-end-point/variant/KMeansClustering` .\n- SageMaker inference component - The resource type is `inference-component` and the unique identifier is the resource ID. Example: `inference-component/my-inference-component` .", + "markdownDescription": "The identifier of the resource associated with the scaling policy. This string consists of the resource type and unique identifier.\n\n- ECS service - The resource type is `service` and the unique identifier is the cluster name and service name. Example: `service/my-cluster/my-service` .\n- Spot Fleet - The resource type is `spot-fleet-request` and the unique identifier is the Spot Fleet request ID. Example: `spot-fleet-request/sfr-73fbd2ce-aa30-494c-8788-1cee4EXAMPLE` .\n- EMR cluster - The resource type is `instancegroup` and the unique identifier is the cluster ID and instance group ID. 
Example: `instancegroup/j-2EEZNYKUA1NTV/ig-1791Y4E1L8YI0` .\n- AppStream 2.0 fleet - The resource type is `fleet` and the unique identifier is the fleet name. Example: `fleet/sample-fleet` .\n- DynamoDB table - The resource type is `table` and the unique identifier is the table name. Example: `table/my-table` .\n- DynamoDB global secondary index - The resource type is `index` and the unique identifier is the index name. Example: `table/my-table/index/my-table-index` .\n- Aurora DB cluster - The resource type is `cluster` and the unique identifier is the cluster name. Example: `cluster:my-db-cluster` .\n- SageMaker endpoint variant - The resource type is `variant` and the unique identifier is the resource ID. Example: `endpoint/my-end-point/variant/KMeansClustering` .\n- Custom resources are not supported with a resource type. This parameter must specify the `OutputValue` from the CloudFormation template stack used to access the resources. The unique identifier is defined by the service provider. More information is available in our [GitHub repository](https://docs.aws.amazon.com/https://github.com/aws/aws-auto-scaling-custom-resource) .\n- Amazon Comprehend document classification endpoint - The resource type and unique identifier are specified using the endpoint ARN. Example: `arn:aws:comprehend:us-west-2:123456789012:document-classifier-endpoint/EXAMPLE` .\n- Amazon Comprehend entity recognizer endpoint - The resource type and unique identifier are specified using the endpoint ARN. Example: `arn:aws:comprehend:us-west-2:123456789012:entity-recognizer-endpoint/EXAMPLE` .\n- Lambda provisioned concurrency - The resource type is `function` and the unique identifier is the function name with a function version or alias name suffix that is not `$LATEST` . Example: `function:my-function:prod` or `function:my-function:1` .\n- Amazon Keyspaces table - The resource type is `table` and the unique identifier is the table name. Example: `keyspace/mykeyspace/table/mytable` .\n- Amazon MSK cluster - The resource type and unique identifier are specified using the cluster ARN. Example: `arn:aws:kafka:us-east-1:123456789012:cluster/demo-cluster-1/6357e0b2-0e6a-4b86-a0b4-70df934c2e31-5` .\n- Amazon ElastiCache replication group - The resource type is `replication-group` and the unique identifier is the replication group name. Example: `replication-group/mycluster` .\n- Neptune cluster - The resource type is `cluster` and the unique identifier is the cluster name. Example: `cluster:mycluster` .\n- SageMaker serverless endpoint - The resource type is `variant` and the unique identifier is the resource ID. Example: `endpoint/my-end-point/variant/KMeansClustering` .\n- SageMaker inference component - The resource type is `inference-component` and the unique identifier is the resource ID. Example: `inference-component/my-inference-component` .\n- Pool of WorkSpaces - The resource type is `workspacespool` and the unique identifier is the pool ID. Example: `workspacespool/wspool-123456` .", "title": "ResourceId", "type": "string" }, "ScalableDimension": { - "markdownDescription": "The scalable dimension. 
This string consists of the service namespace, resource type, and scaling property.\n\n- `ecs:service:DesiredCount` - The desired task count of an ECS service.\n- `elasticmapreduce:instancegroup:InstanceCount` - The instance count of an EMR Instance Group.\n- `ec2:spot-fleet-request:TargetCapacity` - The target capacity of a Spot Fleet.\n- `appstream:fleet:DesiredCapacity` - The desired capacity of an AppStream 2.0 fleet.\n- `dynamodb:table:ReadCapacityUnits` - The provisioned read capacity for a DynamoDB table.\n- `dynamodb:table:WriteCapacityUnits` - The provisioned write capacity for a DynamoDB table.\n- `dynamodb:index:ReadCapacityUnits` - The provisioned read capacity for a DynamoDB global secondary index.\n- `dynamodb:index:WriteCapacityUnits` - The provisioned write capacity for a DynamoDB global secondary index.\n- `rds:cluster:ReadReplicaCount` - The count of Aurora Replicas in an Aurora DB cluster. Available for Aurora MySQL-compatible edition and Aurora PostgreSQL-compatible edition.\n- `sagemaker:variant:DesiredInstanceCount` - The number of EC2 instances for a SageMaker model endpoint variant.\n- `custom-resource:ResourceType:Property` - The scalable dimension for a custom resource provided by your own application or service.\n- `comprehend:document-classifier-endpoint:DesiredInferenceUnits` - The number of inference units for an Amazon Comprehend document classification endpoint.\n- `comprehend:entity-recognizer-endpoint:DesiredInferenceUnits` - The number of inference units for an Amazon Comprehend entity recognizer endpoint.\n- `lambda:function:ProvisionedConcurrency` - The provisioned concurrency for a Lambda function.\n- `cassandra:table:ReadCapacityUnits` - The provisioned read capacity for an Amazon Keyspaces table.\n- `cassandra:table:WriteCapacityUnits` - The provisioned write capacity for an Amazon Keyspaces table.\n- `kafka:broker-storage:VolumeSize` - The provisioned volume size (in GiB) for brokers in an Amazon MSK cluster.\n- `elasticache:replication-group:NodeGroups` - The number of node groups for an Amazon ElastiCache replication group.\n- `elasticache:replication-group:Replicas` - The number of replicas per node group for an Amazon ElastiCache replication group.\n- `neptune:cluster:ReadReplicaCount` - The count of read replicas in an Amazon Neptune DB cluster.\n- `sagemaker:variant:DesiredProvisionedConcurrency` - The provisioned concurrency for a SageMaker serverless endpoint.\n- `sagemaker:inference-component:DesiredCopyCount` - The number of copies across an endpoint for a SageMaker inference component.", + "markdownDescription": "The scalable dimension. This string consists of the service namespace, resource type, and scaling property.\n\n- `ecs:service:DesiredCount` - The task count of an ECS service.\n- `elasticmapreduce:instancegroup:InstanceCount` - The instance count of an EMR Instance Group.\n- `ec2:spot-fleet-request:TargetCapacity` - The target capacity of a Spot Fleet.\n- `appstream:fleet:DesiredCapacity` - The capacity of an AppStream 2.0 fleet.\n- `dynamodb:table:ReadCapacityUnits` - The provisioned read capacity for a DynamoDB table.\n- `dynamodb:table:WriteCapacityUnits` - The provisioned write capacity for a DynamoDB table.\n- `dynamodb:index:ReadCapacityUnits` - The provisioned read capacity for a DynamoDB global secondary index.\n- `dynamodb:index:WriteCapacityUnits` - The provisioned write capacity for a DynamoDB global secondary index.\n- `rds:cluster:ReadReplicaCount` - The count of Aurora Replicas in an Aurora DB cluster. 
Available for Aurora MySQL-compatible edition and Aurora PostgreSQL-compatible edition.\n- `sagemaker:variant:DesiredInstanceCount` - The number of EC2 instances for a SageMaker model endpoint variant.\n- `custom-resource:ResourceType:Property` - The scalable dimension for a custom resource provided by your own application or service.\n- `comprehend:document-classifier-endpoint:DesiredInferenceUnits` - The number of inference units for an Amazon Comprehend document classification endpoint.\n- `comprehend:entity-recognizer-endpoint:DesiredInferenceUnits` - The number of inference units for an Amazon Comprehend entity recognizer endpoint.\n- `lambda:function:ProvisionedConcurrency` - The provisioned concurrency for a Lambda function.\n- `cassandra:table:ReadCapacityUnits` - The provisioned read capacity for an Amazon Keyspaces table.\n- `cassandra:table:WriteCapacityUnits` - The provisioned write capacity for an Amazon Keyspaces table.\n- `kafka:broker-storage:VolumeSize` - The provisioned volume size (in GiB) for brokers in an Amazon MSK cluster.\n- `elasticache:replication-group:NodeGroups` - The number of node groups for an Amazon ElastiCache replication group.\n- `elasticache:replication-group:Replicas` - The number of replicas per node group for an Amazon ElastiCache replication group.\n- `neptune:cluster:ReadReplicaCount` - The count of read replicas in an Amazon Neptune DB cluster.\n- `sagemaker:variant:DesiredProvisionedConcurrency` - The provisioned concurrency for a SageMaker serverless endpoint.\n- `sagemaker:inference-component:DesiredCopyCount` - The number of copies across an endpoint for a SageMaker inference component.\n- `workspaces:workspacespool:DesiredUserSessions` - The number of user sessions for the WorkSpaces in the pool.", "title": "ScalableDimension", "type": "string" }, @@ -26156,7 +26156,7 @@ "type": "object" }, "BackupVaultName": { - "markdownDescription": "The name of a logical container where backups are stored. Backup vaults are identified by names that are unique to the account used to create them and the AWS Region where they are created.", + "markdownDescription": "The name of a logical container where backups are stored. Backup vaults are identified by names that are unique to the account used to create them and the AWS Region where they are created. They consist of lowercase letters, numbers, and hyphens.", "title": "BackupVaultName", "type": "string" }, @@ -39654,7 +39654,7 @@ "items": { "type": "string" }, - "markdownDescription": "An array of Amazon Resource Name (ARN) strings or partial ARN strings for the specified resource type.\n\n- To log data events for all objects in all S3 buckets in your AWS account , specify the prefix as `arn:aws:s3` .\n\n> This also enables logging of data event activity performed by any user or role in your AWS account , even if that activity is performed on a bucket that belongs to another AWS account .\n- To log data events for all objects in an S3 bucket, specify the bucket and an empty object prefix such as `arn:aws:s3:::bucket-1/` . The trail logs data events for all objects in this S3 bucket.\n- To log data events for specific objects, specify the S3 bucket and object prefix such as `arn:aws:s3:::bucket-1/example-images` . 
The trail logs data events for objects in this S3 bucket that match the prefix.\n- To log data events for all Lambda functions in your AWS account , specify the prefix as `arn:aws:lambda` .\n\n> This also enables logging of `Invoke` activity performed by any user or role in your AWS account , even if that activity is performed on a function that belongs to another AWS account .\n- To log data events for a specific Lambda function, specify the function ARN.\n\n> Lambda function ARNs are exact. For example, if you specify a function ARN *arn:aws:lambda:us-west-2:111111111111:function:helloworld* , data events will only be logged for *arn:aws:lambda:us-west-2:111111111111:function:helloworld* . They will not be logged for *arn:aws:lambda:us-west-2:111111111111:function:helloworld2* .\n- To log data events for all DynamoDB tables in your AWS account , specify the prefix as `arn:aws:dynamodb` .", + "markdownDescription": "An array of Amazon Resource Name (ARN) strings or partial ARN strings for the specified resource type.\n\n- To log data events for all objects in all S3 buckets in your AWS account , specify the prefix as `arn:aws:s3` .\n\n> This also enables logging of data event activity performed by any user or role in your AWS account , even if that activity is performed on a bucket that belongs to another AWS account .\n- To log data events for all objects in an S3 bucket, specify the bucket and an empty object prefix such as `arn:aws:s3:::DOC-EXAMPLE-BUCKET1/` . The trail logs data events for all objects in this S3 bucket.\n- To log data events for specific objects, specify the S3 bucket and object prefix such as `arn:aws:s3:::DOC-EXAMPLE-BUCKET1/example-images` . The trail logs data events for objects in this S3 bucket that match the prefix.\n- To log data events for all Lambda functions in your AWS account , specify the prefix as `arn:aws:lambda` .\n\n> This also enables logging of `Invoke` activity performed by any user or role in your AWS account , even if that activity is performed on a function that belongs to another AWS account .\n- To log data events for a specific Lambda function, specify the function ARN.\n\n> Lambda function ARNs are exact. For example, if you specify a function ARN *arn:aws:lambda:us-west-2:111111111111:function:helloworld* , data events will only be logged for *arn:aws:lambda:us-west-2:111111111111:function:helloworld* . They will not be logged for *arn:aws:lambda:us-west-2:111111111111:function:helloworld2* .\n- To log data events for all DynamoDB tables in your AWS account , specify the prefix as `arn:aws:dynamodb` .", "title": "Values", "type": "array" } @@ -40864,6 +40864,8 @@ "type": "string" }, "EncryptionKey": { + "markdownDescription": "The key used to encrypt the domain.", + "title": "EncryptionKey", "type": "string" }, "PermissionsPolicyDocument": { @@ -41110,6 +41112,8 @@ "type": "string" }, "DomainOwner": { + "markdownDescription": "The 12-digit account number of the AWS account that owns the domain that contains the repository. It does not include dashes or spaces.", + "title": "DomainOwner", "type": "string" }, "ExternalConnections": { @@ -57409,7 +57413,7 @@ "type": "number" }, "ArchivedLogsOnly": { - "markdownDescription": "When this field is set to `Y` , AWS DMS only accesses the archived redo logs. 
If the archived redo logs are stored on Automatic Storage Management (ASM) only, the AWS DMS user account needs to be granted ASM privileges.", "title": "ArchivedLogsOnly", "type": "boolean" }, @@ -57542,17 +57546,17 @@ "type": "boolean" }, "UseBFile": { - "markdownDescription": "Set this attribute to Y to capture change data using the Binary Reader utility. Set `UseLogminerReader` to N to set this attribute to Y. To use Binary Reader with Amazon RDS for Oracle as the source, you set additional attributes. For more information about using this setting with Oracle Automatic Storage Management (ASM), see [Using Oracle LogMiner or AWS DMS Binary Reader for CDC](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.Oracle.html#CHAP_Source.Oracle.CDC) .", + "markdownDescription": "Set this attribute to True to capture change data using the Binary Reader utility. You must set `UseLogminerReader` to False before you can set this attribute to True. To use Binary Reader with Amazon RDS for Oracle as the source, you set additional attributes. For more information about using this setting with Oracle Automatic Storage Management (ASM), see [Using Oracle LogMiner or AWS DMS Binary Reader for CDC](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.Oracle.html#CHAP_Source.Oracle.CDC) .", "title": "UseBFile", "type": "boolean" }, "UseDirectPathFullLoad": { - "markdownDescription": "Set this attribute to Y to have AWS DMS use a direct path full load. Specify this value to use the direct path protocol in the Oracle Call Interface (OCI). By using this OCI protocol, you can bulk-load Oracle target tables during a full load.", + "markdownDescription": "Set this attribute to True to have AWS DMS use a direct path full load. Specify this value to use the direct path protocol in the Oracle Call Interface (OCI). By using this OCI protocol, you can bulk-load Oracle target tables during a full load.", "title": "UseDirectPathFullLoad", "type": "boolean" }, "UseLogminerReader": { - "markdownDescription": "Set this attribute to Y to capture change data using the Oracle LogMiner utility (the default). Set this attribute to N if you want to access the redo logs as a binary file. When you set `UseLogminerReader` to N, also set `UseBfile` to Y. For more information on this setting and using Oracle ASM, see [Using Oracle LogMiner or AWS DMS Binary Reader for CDC](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.Oracle.html#CHAP_Source.Oracle.CDC) in the *AWS DMS User Guide* .", + "markdownDescription": "Set this attribute to True to capture change data using the Oracle LogMiner utility (the default). Set this attribute to False if you want to access the redo logs as a binary file. When you set `UseLogminerReader` to False, also set `UseBFile` to True. 
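To make the LogMiner/Binary Reader interplay described here concrete, a hedged sketch of an Oracle source endpoint; the host, database, and credentials are placeholders (in practice, prefer Secrets Manager over literal credentials):

```yaml
Resources:
  ExampleOracleSource:
    Type: AWS::DMS::Endpoint
    Properties:
      EndpointType: source
      EngineName: oracle
      ServerName: oracle.example.internal # placeholder host name
      Port: 1521
      DatabaseName: ORCL                  # placeholder database name
      Username: dms_user                  # placeholder credentials only
      Password: example-placeholder
      OracleSettings:
        UseLogminerReader: false # turn LogMiner off ...
        UseBFile: true           # ... and capture changes with Binary Reader instead
        ArchivedLogsOnly: true   # read archived redo logs only (ASM may require privileges)
```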
For more information on this setting and using Oracle ASM, see [Using Oracle LogMiner or AWS DMS Binary Reader for CDC](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.Oracle.html#CHAP_Source.Oracle.CDC) in the *AWS DMS User Guide* .", "title": "UseLogminerReader", "type": "boolean" }, @@ -58484,8 +58488,6 @@ "title": "ComputeConfig" }, "ReplicationConfigArn": { - "markdownDescription": "The Amazon Resource Name (ARN) of this AWS DMS Serverless replication configuration.", - "title": "ReplicationConfigArn", "type": "string" }, "ReplicationConfigIdentifier": { @@ -61479,12 +61481,12 @@ "additionalProperties": false, "properties": { "ActivationKey": { - "markdownDescription": "Specifies your DataSync agent's activation key. If you don't have an activation key, see [Activate your agent](https://docs.aws.amazon.com/datasync/latest/userguide/activate-agent.html) .", + "markdownDescription": "Specifies your DataSync agent's activation key. If you don't have an activation key, see [Activating your agent](https://docs.aws.amazon.com/datasync/latest/userguide/activate-agent.html) .", "title": "ActivationKey", "type": "string" }, "AgentName": { - "markdownDescription": "Specifies a name for your agent. You can see this name in the DataSync console.", + "markdownDescription": "Specifies a name for your agent. We recommend specifying a name that you can remember.", "title": "AgentName", "type": "string" }, @@ -61500,7 +61502,7 @@ "items": { "type": "string" }, - "markdownDescription": "Specifies the ARN of the subnet where you want to run your DataSync task when using a VPC endpoint. This is the subnet where DataSync creates and manages the [network interfaces](https://docs.aws.amazon.com/datasync/latest/userguide/datasync-network.html#required-network-interfaces) for your transfer. You can only specify one ARN.", + "markdownDescription": "Specifies the ARN of the subnet where your VPC service endpoint is located. You can only specify one ARN.", "title": "SubnetArns", "type": "array" }, @@ -83881,7 +83883,7 @@ "additionalProperties": false, "properties": { "LogDriver": { - "markdownDescription": "The log driver to use for the container.\n\nFor tasks on AWS Fargate , the supported log drivers are `awslogs` , `splunk` , and `awsfirelens` .\n\nFor tasks hosted on Amazon EC2 instances, the supported log drivers are `awslogs` , `fluentd` , `gelf` , `json-file` , `journald` , `logentries` , `syslog` , `splunk` , and `awsfirelens` .\n\nFor more information about using the `awslogs` log driver, see [Using the awslogs log driver](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/using_awslogs.html) in the *Amazon Elastic Container Service Developer Guide* .\n\nFor more information about using the `awsfirelens` log driver, see [Custom log routing](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/using_firelens.html) in the *Amazon Elastic Container Service Developer Guide* .\n\n> If you have a custom driver that isn't listed, you can fork the Amazon ECS container agent project that's [available on GitHub](https://docs.aws.amazon.com/https://github.com/aws/amazon-ecs-agent) and customize it to work with that driver. We encourage you to submit pull requests for changes that you would like to have included. 
However, we don't currently provide support for running modified copies of this software.", + "markdownDescription": "The log driver to use for the container.\n\nFor tasks on AWS Fargate , the supported log drivers are `awslogs` , `splunk` , and `awsfirelens` .\n\nFor tasks hosted on Amazon EC2 instances, the supported log drivers are `awslogs` , `fluentd` , `gelf` , `json-file` , `journald` , `logentries` , `syslog` , `splunk` , and `awsfirelens` .\n\nFor more information about using the `awslogs` log driver, see [Send Amazon ECS logs to CloudWatch](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/using_awslogs.html) in the *Amazon Elastic Container Service Developer Guide* .\n\nFor more information about using the `awsfirelens` log driver, see [Send Amazon ECS logs to an AWS service or AWS Partner](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/using_firelens.html) .\n\n> If you have a custom driver that isn't listed, you can fork the Amazon ECS container agent project that's [available on GitHub](https://docs.aws.amazon.com/https://github.com/aws/amazon-ecs-agent) and customize it to work with that driver. We encourage you to submit pull requests for changes that you would like to have included. However, we don't currently provide support for running modified copies of this software.", "title": "LogDriver", "type": "string" }, @@ -84005,7 +84007,7 @@ }, "LogConfiguration": { "$ref": "#/definitions/AWS::ECS::Service.LogConfiguration", - "markdownDescription": "The log configuration for the container. This parameter maps to `LogConfig` in the [Create a container](https://docs.aws.amazon.com/https://docs.docker.com/engine/api/v1.35/#operation/ContainerCreate) section of the [Docker Remote API](https://docs.aws.amazon.com/https://docs.docker.com/engine/api/v1.35/) and the `--log-driver` option to [`docker run`](https://docs.aws.amazon.com/https://docs.docker.com/engine/reference/commandline/run/) .\n\nBy default, containers use the same logging driver that the Docker daemon uses. However, the container might use a different logging driver than the Docker daemon by specifying a log driver configuration in the container definition. For more information about the options for different supported log drivers, see [Configure logging drivers](https://docs.aws.amazon.com/https://docs.docker.com/engine/admin/logging/overview/) in the Docker documentation.\n\nUnderstand the following when specifying a log configuration for your containers.\n\n- Amazon ECS currently supports a subset of the logging drivers available to the Docker daemon. Additional log drivers may be available in future releases of the Amazon ECS container agent.\n\nFor tasks on AWS Fargate , the supported log drivers are `awslogs` , `splunk` , and `awsfirelens` .\n\nFor tasks hosted on Amazon EC2 instances, the supported log drivers are `awslogs` , `fluentd` , `gelf` , `json-file` , `journald` , `logentries` , `syslog` , `splunk` , and `awsfirelens` .\n- This parameter requires version 1.18 of the Docker Remote API or greater on your container instance.\n- For tasks that are hosted on Amazon EC2 instances, the Amazon ECS container agent must register the available logging drivers with the `ECS_AVAILABLE_LOGGING_DRIVERS` environment variable before containers placed on that instance can use these log configuration options. 
For more information, see [Amazon ECS container agent configuration](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-agent-config.html) in the *Amazon Elastic Container Service Developer Guide* .\n- For tasks that are on AWS Fargate , because you don't have access to the underlying infrastructure your tasks are hosted on, any additional software needed must be installed outside of the task. For example, the Fluentd output aggregators or a remote host running Logstash to send Gelf logs to.", + "markdownDescription": "The log configuration for the container. This parameter maps to `LogConfig` in the [Create a container](https://docs.aws.amazon.com/https://docs.docker.com/engine/api/v1.35/#operation/ContainerCreate) section of the [Docker Remote API](https://docs.aws.amazon.com/https://docs.docker.com/engine/api/v1.35/) and the `--log-driver` option to [`docker run`](https://docs.aws.amazon.com/https://docs.docker.com/engine/reference/commandline/run/) .\n\nBy default, containers use the same logging driver that the Docker daemon uses. However, the container might use a different logging driver than the Docker daemon by specifying a log driver configuration in the container definition. For more information about the options for different supported log drivers, see [Configure logging drivers](https://docs.aws.amazon.com/https://docs.docker.com/engine/admin/logging/overview/) in the Docker documentation.\n\nUnderstand the following when specifying a log configuration for your containers.\n\n- Amazon ECS currently supports a subset of the logging drivers available to the Docker daemon. Additional log drivers may be available in future releases of the Amazon ECS container agent.\n\nFor tasks on AWS Fargate , the supported log drivers are `awslogs` , `splunk` , and `awsfirelens` .\n\nFor tasks hosted on Amazon EC2 instances, the supported log drivers are `awslogs` , `fluentd` , `gelf` , `json-file` , `journald` , `syslog` , `splunk` , and `awsfirelens` .\n- This parameter requires version 1.18 of the Docker Remote API or greater on your container instance.\n- For tasks that are hosted on Amazon EC2 instances, the Amazon ECS container agent must register the available logging drivers with the `ECS_AVAILABLE_LOGGING_DRIVERS` environment variable before containers placed on that instance can use these log configuration options. For more information, see [Amazon ECS container agent configuration](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-agent-config.html) in the *Amazon Elastic Container Service Developer Guide* .\n- For tasks that are on AWS Fargate , because you don't have access to the underlying infrastructure your tasks are hosted on, any additional software needed must be installed outside of the task. For example, the Fluentd output aggregators or a remote host running Logstash to send Gelf logs to.", "title": "LogConfiguration" }, "Namespace": { @@ -84416,7 +84418,7 @@ "type": "array" }, "Cpu": { - "markdownDescription": "The number of `cpu` units reserved for the container. 
This parameter maps to `CpuShares` in the [Create a container](https://docs.aws.amazon.com/https://docs.docker.com/engine/api/v1.35/#operation/ContainerCreate) section of the [Docker Remote API](https://docs.aws.amazon.com/https://docs.docker.com/engine/api/v1.35/) and the `--cpu-shares` option to [docker run](https://docs.aws.amazon.com/https://docs.docker.com/engine/reference/run/#security-configuration) .\n\nThis field is optional for tasks using the Fargate launch type, and the only requirement is that the total amount of CPU reserved for all containers within a task be lower than the task-level `cpu` value.\n\n> You can determine the number of CPU units that are available per EC2 instance type by multiplying the vCPUs listed for that instance type on the [Amazon EC2 Instances](https://docs.aws.amazon.com/ec2/instance-types/) detail page by 1,024. \n\nLinux containers share unallocated CPU units with other containers on the container instance with the same ratio as their allocated amount. For example, if you run a single-container task on a single-core instance type with 512 CPU units specified for that container, and that's the only task running on the container instance, that container could use the full 1,024 CPU unit share at any given time. However, if you launched another copy of the same task on that container instance, each task is guaranteed a minimum of 512 CPU units when needed. Moreover, each container could float to higher CPU usage if the other container was not using it. If both tasks were 100% active all of the time, they would be limited to 512 CPU units.\n\nOn Linux container instances, the Docker daemon on the container instance uses the CPU value to calculate the relative CPU share ratios for running containers. For more information, see [CPU share constraint](https://docs.aws.amazon.com/https://docs.docker.com/engine/reference/run/#cpu-share-constraint) in the Docker documentation. The minimum valid CPU share value that the Linux kernel allows is 2. However, the CPU parameter isn't required, and you can use CPU values below 2 in your container definitions. For CPU values below 2 (including null), the behavior varies based on your Amazon ECS container agent version:\n\n- *Agent versions less than or equal to 1.1.0:* Null and zero CPU values are passed to Docker as 0, which Docker then converts to 1,024 CPU shares. CPU values of 1 are passed to Docker as 1, which the Linux kernel converts to two CPU shares.\n- *Agent versions greater than or equal to 1.2.0:* Null, zero, and CPU values of 1 are passed to Docker as 2.\n\nOn Windows container instances, the CPU limit is enforced as an absolute limit, or a quota. Windows containers only have access to the specified amount of CPU that's described in the task definition. A null or zero CPU value is passed to Docker as `0` , which Windows interprets as 1% of one CPU.", + "markdownDescription": "The number of `cpu` units reserved for the container. 
This parameter maps to `CpuShares` in the [Create a container](https://docs.aws.amazon.com/https://docs.docker.com/engine/api/v1.35/#operation/ContainerCreate) section of the [Docker Remote API](https://docs.aws.amazon.com/https://docs.docker.com/engine/api/v1.35/) and the `--cpu-shares` option to [docker run](https://docs.aws.amazon.com/https://docs.docker.com/engine/reference/run/#security-configuration) .\n\nThis field is optional for tasks using the Fargate launch type, and the only requirement is that the total amount of CPU reserved for all containers within a task be lower than the task-level `cpu` value.\n\n> You can determine the number of CPU units that are available per EC2 instance type by multiplying the vCPUs listed for that instance type on the [Amazon EC2 Instances](https://docs.aws.amazon.com/ec2/instance-types/) detail page by 1,024. \n\nLinux containers share unallocated CPU units with other containers on the container instance with the same ratio as their allocated amount. For example, if you run a single-container task on a single-core instance type with 512 CPU units specified for that container, and that's the only task running on the container instance, that container could use the full 1,024 CPU unit share at any given time. However, if you launched another copy of the same task on that container instance, each task is guaranteed a minimum of 512 CPU units when needed. Moreover, each container could float to higher CPU usage if the other container was not using it. If both tasks were 100% active all of the time, they would be limited to 512 CPU units.\n\nOn Linux container instances, the Docker daemon on the container instance uses the CPU value to calculate the relative CPU share ratios for running containers. For more information, see [CPU share constraint](https://docs.aws.amazon.com/https://docs.docker.com/engine/reference/run/#cpu-share-constraint) in the Docker documentation. The minimum valid CPU share value that the Linux kernel allows is 2, and the maximum valid CPU share value that the Linux kernel allows is 262144. However, the CPU parameter isn't required, and you can use CPU values below 2 or above 262144 in your container definitions. For CPU values below 2 (including null) or above 262144, the behavior varies based on your Amazon ECS container agent version:\n\n- *Agent versions less than or equal to 1.1.0:* Null and zero CPU values are passed to Docker as 0, which Docker then converts to 1,024 CPU shares. CPU values of 1 are passed to Docker as 1, which the Linux kernel converts to two CPU shares.\n- *Agent versions greater than or equal to 1.2.0:* Null, zero, and CPU values of 1 are passed to Docker as 2.\n- *Agent versions greater than or equal to 1.84.0:* CPU values greater than 256 vCPU are passed to Docker as 256, which is equivalent to 262144 CPU shares.\n\nOn Windows container instances, the CPU limit is enforced as an absolute limit, or a quota. Windows containers only have access to the specified amount of CPU that's described in the task definition. 
A null or zero CPU value is passed to Docker as `0` , which Windows interprets as 1% of one CPU.", "title": "Cpu", "type": "number" }, @@ -85051,7 +85053,7 @@ "additionalProperties": false, "properties": { "LogDriver": { - "markdownDescription": "The log driver to use for the container.\n\nFor tasks on AWS Fargate , the supported log drivers are `awslogs` , `splunk` , and `awsfirelens` .\n\nFor tasks hosted on Amazon EC2 instances, the supported log drivers are `awslogs` , `fluentd` , `gelf` , `json-file` , `journald` , `logentries` , `syslog` , `splunk` , and `awsfirelens` .\n\nFor more information about using the `awslogs` log driver, see [Using the awslogs log driver](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/using_awslogs.html) in the *Amazon Elastic Container Service Developer Guide* .\n\nFor more information about using the `awsfirelens` log driver, see [Custom log routing](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/using_firelens.html) in the *Amazon Elastic Container Service Developer Guide* .\n\n> If you have a custom driver that isn't listed, you can fork the Amazon ECS container agent project that's [available on GitHub](https://docs.aws.amazon.com/https://github.com/aws/amazon-ecs-agent) and customize it to work with that driver. We encourage you to submit pull requests for changes that you would like to have included. However, we don't currently provide support for running modified copies of this software.", + "markdownDescription": "The log driver to use for the container.\n\nFor tasks on AWS Fargate , the supported log drivers are `awslogs` , `splunk` , and `awsfirelens` .\n\nFor tasks hosted on Amazon EC2 instances, the supported log drivers are `awslogs` , `fluentd` , `gelf` , `json-file` , `journald` , `logentries` , `syslog` , `splunk` , and `awsfirelens` .\n\nFor more information about using the `awslogs` log driver, see [Send Amazon ECS logs to CloudWatch](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/using_awslogs.html) in the *Amazon Elastic Container Service Developer Guide* .\n\nFor more information about using the `awsfirelens` log driver, see [Send Amazon ECS logs to an AWS service or AWS Partner](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/using_firelens.html) .\n\n> If you have a custom driver that isn't listed, you can fork the Amazon ECS container agent project that's [available on GitHub](https://docs.aws.amazon.com/https://github.com/aws/amazon-ecs-agent) and customize it to work with that driver. We encourage you to submit pull requests for changes that you would like to have included. However, we don't currently provide support for running modified copies of this software.", "title": "LogDriver", "type": "string" }, @@ -91043,6 +91045,8 @@ "type": "string" }, "ReplicationGroupId": { + "markdownDescription": "The replication group identifier. This parameter is stored as a lowercase string.\n\nConstraints:\n\n- A name must contain from 1 to 40 alphanumeric characters or hyphens.\n- The first character must be a letter.\n- A name cannot end with a hyphen or contain two consecutive hyphens.", + "title": "ReplicationGroupId", "type": "string" }, "SecurityGroupIds": { @@ -102831,7 +102835,7 @@ "type": "string" }, "OperatingSystem": { - "markdownDescription": "The operating system that your game server binaries run on. This value determines the type of fleet resources that you use for this build. If your game build contains multiple executables, they all must run on the same operating system. 
You must specify a valid operating system in this request. There is no default value. You can't change a build's operating system later.\n\n> If you have active fleets using the Windows Server 2012 operating system, you can continue to create new builds using this OS until October 10, 2023, when Microsoft ends its support. All others must use Windows Server 2016 when creating new Windows-based builds.", + "markdownDescription": "The operating system that your game server binaries run on. This value determines the type of fleet resources that you use for this build. If your game build contains multiple executables, they all must run on the same operating system. You must specify a valid operating system in this request. There is no default value. You can't change a build's operating system later.\n\n> Amazon Linux 2 (AL2) will reach end of support on 6/30/2025. See more details in the [Amazon Linux 2 FAQs](https://docs.aws.amazon.com/https://aws.amazon.com/amazon-linux-2/faqs/) . For game servers that are hosted on AL2 and use Amazon GameLift server SDK 4.x, first update the game server build to server SDK 5.x, and then deploy to AL2023 instances. See [Migrate to Amazon GameLift server SDK version 5.](https://docs.aws.amazon.com/gamelift/latest/developerguide/reference-serversdk5-migration.html)", "title": "OperatingSystem", "type": "string" }, @@ -102953,7 +102957,7 @@ "type": "string" }, "OperatingSystem": { - "markdownDescription": "The platform required for all containers in the container group definition.", + "markdownDescription": "The platform required for all containers in the container group definition.\n\n> Amazon Linux 2 (AL2) will reach end of support on 6/30/2025. See more details in the [Amazon Linux 2 FAQs](https://docs.aws.amazon.com/https://aws.amazon.com/amazon-linux-2/faqs/) . For game servers that are hosted on AL2 and use Amazon GameLift server SDK 4.x, first update the game server build to server SDK 5.x, and then deploy to AL2023 instances. 
See [Migrate to Amazon GameLift server SDK version 5.](https://docs.aws.amazon.com/gamelift/latest/developerguide/reference-serversdk5-migration.html)", "title": "OperatingSystem", "type": "string" }, @@ -168292,7 +168296,7 @@ "items": { "$ref": "#/definitions/Tag" }, - "markdownDescription": "", + "markdownDescription": "The tags associated with the Connect attachment.", "title": "Tags", "type": "array" }, @@ -169349,7 +169353,7 @@ "items": { "$ref": "#/definitions/Tag" }, - "markdownDescription": "", + "markdownDescription": "The tags associated with the Site-to-Site VPN attachment.", "title": "Tags", "type": "array" }, @@ -225603,7 +225607,7 @@ "type": "string" }, "Value": { - "markdownDescription": "The value of a processor feature name.", + "markdownDescription": "The value of a processor feature.", "title": "Value", "type": "string" } @@ -231441,17 +231445,17 @@ "additionalProperties": false, "properties": { "CrlData": { - "markdownDescription": "", + "markdownDescription": "The x509 v3 certificate revocation list (CRL).", "title": "CrlData", "type": "string" }, "Enabled": { - "markdownDescription": "", + "markdownDescription": "Specifies whether the certificate revocation list (CRL) is enabled.", "title": "Enabled", "type": "boolean" }, "Name": { - "markdownDescription": "", + "markdownDescription": "The name of the certificate revocation list (CRL).", "title": "Name", "type": "string" }, @@ -231459,7 +231463,7 @@ "items": { "$ref": "#/definitions/Tag" }, - "markdownDescription": "", + "markdownDescription": "A list of tags to attach to the certificate revocation list (CRL).", "title": "Tags", "type": "array" }, @@ -242528,6 +242532,8 @@ "type": "string" }, "SyncName": { + "markdownDescription": "A name for the resource data sync.", + "title": "SyncName", "type": "string" }, "SyncSource": { @@ -254457,12 +254463,18 @@ "additionalProperties": false, "properties": { "BlockPublicPolicy": { + "markdownDescription": "Specifies whether to block resource-based policies that allow broad access to the secret. By default, Secrets Manager blocks policies that allow broad access, for example those that use a wildcard for the principal.", + "title": "BlockPublicPolicy", "type": "boolean" }, "ResourcePolicy": { + "markdownDescription": "A JSON-formatted string for an AWS resource-based policy. For example policies, see [Permissions policy examples](https://docs.aws.amazon.com/secretsmanager/latest/userguide/auth-and-access_examples.html) .", + "title": "ResourcePolicy", "type": "object" }, "SecretId": { + "markdownDescription": "The ARN or name of the secret to attach the resource-based policy to.\n\nFor an ARN, we recommend that you specify a complete ARN rather than a partial ARN.", + "title": "SecretId", "type": "string" } }, @@ -259445,6 +259457,8 @@ "type": "object" }, "InstanceId": { + "markdownDescription": "An identifier that you want to associate with the instance. Note the following:\n\n- If the service that's specified by `ServiceId` includes settings for an `SRV` record, the value of `InstanceId` is automatically included as part of the value for the `SRV` record. 
For more information, see [DnsRecord > Type](https://docs.aws.amazon.com/cloud-map/latest/api/API_DnsRecord.html#cloudmap-Type-DnsRecord-Type) .\n- You can use this value to update an existing instance.\n- To register a new instance, you must specify a value that's unique among instances that you register by using the same service.\n- If you specify an existing `InstanceId` and `ServiceId` , AWS Cloud Map updates the existing DNS records, if any. If there's also an existing health check, AWS Cloud Map deletes the old health check and creates a new one.\n\n> The health check isn't deleted immediately, so it will still appear for a while if you submit a `ListHealthChecks` request, for example.\n\n> Do not include sensitive information in `InstanceId` if the namespace is discoverable by public DNS queries and any `Type` member of `DnsRecord` for the service contains `SRV` because the `InstanceId` is discoverable by public DNS queries.", + "title": "InstanceId", "type": "string" }, "ServiceId": { @@ -264088,7 +264102,7 @@ "properties": { "Configuration": { "$ref": "#/definitions/AWS::VerifiedPermissions::IdentitySource.IdentitySourceConfiguration", - "markdownDescription": "Contains configuration information about an identity source.", + "markdownDescription": "Contains configuration information used when creating a new identity source.", "title": "Configuration" }, "PolicyStoreId": { @@ -264417,7 +264431,7 @@ "additionalProperties": false, "properties": { "CedarJson": { - "markdownDescription": "A JSON string representation of the schema supported by applications that use this policy store. For more information, see [Policy store schema](https://docs.aws.amazon.com/verifiedpermissions/latest/userguide/schema.html) in the *Amazon Verified Permissions User Guide* .", + "markdownDescription": "A JSON string representation of the schema supported by applications that use this policy store. For more information, see [Policy store schema](https://docs.aws.amazon.com/verifiedpermissions/latest/userguide/schema.html) in the *Amazon Verified Permissions User Guide* .", "title": "CedarJson", "type": "string" } @@ -268930,7 +268944,7 @@ "additionalProperties": false, "properties": { "InvalidFallbackBehavior": { - "markdownDescription": "What AWS WAF should do if it fails to completely parse the JSON body. The options are the following:\n\n- `EVALUATE_AS_STRING` - Inspect the body as plain text. 
AWS WAF applies the text transformations and inspection criteria that you defined for the JSON inspection to the body text string.\n- `MATCH` - Treat the web request as matching the rule statement. AWS WAF applies the rule action to the request.\n- `NO_MATCH` - Treat the web request as not matching the rule statement.\n\nIf you don't provide this setting, AWS WAF parses and evaluates the content only up to the first parsing failure that it encounters.\n\n> AWS WAF parsing doesn't fully validate the input JSON string, so parsing can succeed even for invalid JSON. When parsing succeeds, AWS WAF doesn't apply the fallback behavior. For more information, see [JSON body](https://docs.aws.amazon.com/waf/latest/developerguide/waf-rule-statement-fields-list.html#waf-rule-statement-request-component-json-body) in the *AWS WAF Developer Guide* .", "title": "InvalidFallbackBehavior", "type": "string" }, @@ -270432,7 +270446,7 @@ "additionalProperties": false, "properties": { "InvalidFallbackBehavior": { - "markdownDescription": "What AWS WAF should do if it fails to completely parse the JSON body. The options are the following:\n\n- `EVALUATE_AS_STRING` - Inspect the body as plain text. AWS WAF applies the text transformations and inspection criteria that you defined for the JSON inspection to the body text string.\n- `MATCH` - Treat the web request as matching the rule statement. AWS WAF applies the rule action to the request.\n- `NO_MATCH` - Treat the web request as not matching the rule statement.\n\nIf you don't provide this setting, AWS WAF parses and evaluates the content only up to the first parsing failure that it encounters.\n\nAWS WAF does its best to parse the entire JSON body, but might be forced to stop for reasons such as invalid characters, duplicate keys, truncation, and any content whose root node isn't an object or an array.\n\nAWS WAF parses the JSON in the following examples as two valid key, value pairs:\n\n- Missing comma: `{\"key1\":\"value1\"\"key2\":\"value2\"}`\n- Missing colon: `{\"key1\":\"value1\",\"key2\"\"value2\"}`\n- Extra colons: `{\"key1\"::\"value1\",\"key2\"\"value2\"}`", + "markdownDescription": "What AWS WAF should do if it fails to completely parse the JSON body. The options are the following:\n\n- `EVALUATE_AS_STRING` - Inspect the body as plain text. AWS WAF applies the text transformations and inspection criteria that you defined for the JSON inspection to the body text string.\n- `MATCH` - Treat the web request as matching the rule statement. AWS WAF applies the rule action to the request.\n- `NO_MATCH` - Treat the web request as not matching the rule statement.\n\nIf you don't provide this setting, AWS WAF parses and evaluates the content only up to the first parsing failure that it encounters.\n\n> AWS WAF parsing doesn't fully validate the input JSON string, so parsing can succeed even for invalid JSON. When parsing succeeds, AWS WAF doesn't apply the fallback behavior. For more information, see [JSON body](https://docs.aws.amazon.com/waf/latest/developerguide/waf-rule-statement-fields-list.html#waf-rule-statement-request-component-json-body) in the *AWS WAF Developer Guide* .", "title": "InvalidFallbackBehavior", "type": "string" }, @@ -272126,7 +272140,7 @@ "type": "array" }, "UserName": { - "markdownDescription": "The user name of the user for the WorkSpace. 
This user name must exist in the AWS Directory Service directory for the WorkSpace.\n\nThe reserved keyword, `[UNDEFINED]` , is used when creating user-decoupled WorkSpaces.", + "markdownDescription": "The user name of the user for the WorkSpace. This user name must exist in the AWS Directory Service directory for the WorkSpace.", "title": "UserName", "type": "string" }, @@ -272136,7 +272150,7 @@ "type": "boolean" }, "VolumeEncryptionKey": { - "markdownDescription": "The ARN of the symmetric AWS KMS key used to encrypt data stored on your WorkSpace. Amazon WorkSpaces does not support asymmetric KMS keys.", + "markdownDescription": "The symmetric AWS KMS key used to encrypt data stored on your WorkSpace. Amazon WorkSpaces does not support asymmetric KMS keys.", "title": "VolumeEncryptionKey", "type": "string" }, @@ -272188,7 +272202,7 @@ "type": "number" }, "RunningMode": { - "markdownDescription": "The running mode. For more information, see [Manage the WorkSpace Running Mode](https://docs.aws.amazon.com/workspaces/latest/adminguide/running-mode.html) .\n\n> The `MANUAL` value is only supported by Amazon WorkSpaces Core. Contact your account team to be allow-listed to use this value. For more information, see [Amazon WorkSpaces Core](https://docs.aws.amazon.com/workspaces/core/) .", + "markdownDescription": "The running mode. For more information, see [Manage the WorkSpace Running Mode](https://docs.aws.amazon.com/workspaces/latest/adminguide/running-mode.html) .", "title": "RunningMode", "type": "string" }, @@ -272612,7 +272626,7 @@ "items": { "$ref": "#/definitions/Tag" }, - "markdownDescription": "The tags to add to the browser settings resource. A tag is a key-value pair.", + "markdownDescription": "The tags to add to the IP access settings resource. A tag is a key-value pair.", "title": "Tags", "type": "array" }