diff --git a/clients/client-ecs/src/commands/CreateServiceCommand.ts b/clients/client-ecs/src/commands/CreateServiceCommand.ts index d81f510f33cf..9c7d8fc6870c 100644 --- a/clients/client-ecs/src/commands/CreateServiceCommand.ts +++ b/clients/client-ecs/src/commands/CreateServiceCommand.ts @@ -35,6 +35,9 @@ export interface CreateServiceCommandOutput extends CreateServiceResponse, __Met * *

On March 21, 2024, a change was made to resolve the task definition revision before authorization. When a task definition revision is not specified, authorization will occur using the latest revision of a task definition.

*
+ * + *

Amazon Elastic Inference (EI) is no longer available to customers.

+ *
*

In addition to maintaining the desired count of tasks in your service, you can * optionally run your service behind one or more load balancers. The load balancers * distribute traffic across the tasks that are associated with the service. For more @@ -112,7 +115,6 @@ export interface CreateServiceCommandOutput extends CreateServiceResponse, __Met * information about task placement and task placement strategies, see Amazon ECS * task placement in the Amazon Elastic Container Service Developer Guide *

- *

Starting April 15, 2023, Amazon Web Services will not onboard new customers to Amazon Elastic Inference (EI), and will help current customers migrate their workloads to options that offer better price and performance. After April 15, 2023, new customers will not be able to launch instances with Amazon EI accelerators in Amazon SageMaker, Amazon ECS, or Amazon EC2. However, customers who have used Amazon EI at least once during the past 30-day period are considered current customers and will be able to continue using the service.

* @example * Use a bare-bones client and the command you need to make an API call. * ```javascript diff --git a/clients/client-ecs/src/commands/RunTaskCommand.ts b/clients/client-ecs/src/commands/RunTaskCommand.ts index e2a649287bed..7e02f444a1d8 100644 --- a/clients/client-ecs/src/commands/RunTaskCommand.ts +++ b/clients/client-ecs/src/commands/RunTaskCommand.ts @@ -32,12 +32,14 @@ export interface RunTaskCommandOutput extends RunTaskResponse, __MetadataBearer * *

On March 21, 2024, a change was made to resolve the task definition revision before authorization. When a task definition revision is not specified, authorization will occur using the latest revision of a task definition.

*
+ * + *

Amazon Elastic Inference (EI) is no longer available to customers.

+ *
*

You can allow Amazon ECS to place tasks for you, or you can customize how Amazon ECS places * tasks using placement constraints and placement strategies. For more information, see * Scheduling Tasks in the Amazon Elastic Container Service Developer Guide.
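The placement controls mentioned above can be sketched as plain RunTask input fragments. This is a hedged sketch: the field names follow the ECS API shape, but the particular strategy ordering and the constraint expression are illustrative assumptions, not recommendations.

```typescript
// Hedged sketch of RunTask placement settings; values are placeholder assumptions.
const placementStrategy = [
  { type: "spread", field: "attribute:ecs.availability-zone" }, // spread tasks across AZs first
  { type: "binpack", field: "memory" },                         // then pack by remaining memory
];

const placementConstraints = [
  // Cluster Query Language expression; the instance-type filter is a placeholder.
  { type: "memberOf", expression: "attribute:ecs.instance-type =~ t3.*" },
];
```

These fragments would be passed in the RunTask request alongside the cluster and task definition.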

*

Alternatively, you can use StartTask to use your own scheduler or * place tasks manually on specific container instances.

- *

Starting April 15, 2023, Amazon Web Services will not onboard new customers to Amazon Elastic Inference (EI), and will help current customers migrate their workloads to options that offer better price and performance. After April 15, 2023, new customers will not be able to launch instances with Amazon EI accelerators in Amazon SageMaker, Amazon ECS, or Amazon EC2. However, customers who have used Amazon EI at least once during the past 30-day period are considered current customers and will be able to continue using the service.

*

You can attach Amazon EBS volumes to Amazon ECS tasks by configuring the volume when creating or * updating a service. For more information, see Amazon EBS volumes in the Amazon Elastic Container Service Developer Guide.
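The EBS attachment described above can be sketched as a `volumeConfigurations` fragment. The field names follow the RunTask API shape; the volume size, type, and role ARN are placeholder assumptions.

```typescript
// Hedged sketch of attaching an EBS volume at launch via volumeConfigurations.
const volumeConfigurations = [
  {
    name: "data", // must match a volume name declared in the task definition
    managedEBSVolume: {
      sizeInGiB: 20,       // placeholder size
      volumeType: "gp3",   // placeholder volume type
      roleArn: "arn:aws:iam::111122223333:role/ecsInfrastructureRole", // placeholder ARN
    },
  },
];
```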

*

The Amazon ECS API follows an eventual consistency model. This is because of the diff --git a/clients/client-ecs/src/commands/StartTaskCommand.ts b/clients/client-ecs/src/commands/StartTaskCommand.ts index 756fc575d3e3..fa59b14ff758 100644 --- a/clients/client-ecs/src/commands/StartTaskCommand.ts +++ b/clients/client-ecs/src/commands/StartTaskCommand.ts @@ -33,7 +33,9 @@ export interface StartTaskCommandOutput extends StartTaskResponse, __MetadataBea * *

On March 21, 2024, a change was made to resolve the task definition revision before authorization. When a task definition revision is not specified, authorization will occur using the latest revision of a task definition.

* - *

Starting April 15, 2023, Amazon Web Services will not onboard new customers to Amazon Elastic Inference (EI), and will help current customers migrate their workloads to options that offer better price and performance. After April 15, 2023, new customers will not be able to launch instances with Amazon EI accelerators in Amazon SageMaker, Amazon ECS, or Amazon EC2. However, customers who have used Amazon EI at least once during the past 30-day period are considered current customers and will be able to continue using the service.

+ * + *

Amazon Elastic Inference (EI) is no longer available to customers.

+ *
*

Alternatively, you can use RunTask to place tasks for you. For more * information, see Scheduling Tasks in the Amazon Elastic Container Service Developer Guide.

*

You can attach Amazon EBS volumes to Amazon ECS tasks by configuring the volume when creating or diff --git a/clients/client-ecs/src/models/models_0.ts b/clients/client-ecs/src/models/models_0.ts index 5b9a4a45aafe..a51d106ecfe3 100644 --- a/clients/client-ecs/src/models/models_0.ts +++ b/clients/client-ecs/src/models/models_0.ts @@ -716,11 +716,13 @@ export interface ClusterConfiguration { * FARGATE_SPOT capacity providers. The Fargate capacity providers are * available to all accounts and only need to be associated with a cluster to be used in a * capacity provider strategy.

- *

With FARGATE_SPOT, you can run interruption tolerant tasks at a rate - * that's discounted compared to the FARGATE price. FARGATE_SPOT - * runs tasks on spare compute capacity. When Amazon Web Services needs the capacity back, your tasks are - * interrupted with a two-minute warning. FARGATE_SPOT only supports Linux - * tasks with the X86_64 architecture on platform version 1.3.0 or later.

+ *

With FARGATE_SPOT, you can run interruption tolerant tasks at a rate that's + * discounted compared to the FARGATE price. FARGATE_SPOT runs + * tasks on spare compute capacity. When Amazon Web Services needs the capacity back, your tasks are + * interrupted with a two-minute warning. FARGATE_SPOT supports Linux tasks + * with the X86_64 architecture on platform version 1.3.0 or later. + * FARGATE_SPOT supports Linux tasks with the ARM64 architecture on + * platform version 1.4.0 or later.

*

A capacity provider strategy may contain a maximum of 6 capacity providers.
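The Fargate capacity providers described above can be sketched as a strategy array. This is a hedged sketch using the API's field names; the `base` and `weight` values are illustrative assumptions, not recommendations.

```typescript
// Hedged sketch of a capacity provider strategy mixing FARGATE and FARGATE_SPOT.
const capacityProviderStrategy = [
  { capacityProvider: "FARGATE", base: 1, weight: 1 },      // keep at least one task on on-demand capacity
  { capacityProvider: "FARGATE_SPOT", base: 0, weight: 4 }, // favor Spot capacity for the rest
];
// Per the documentation above, a strategy may contain at most 6 capacity providers.
```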

* @public */ @@ -1964,7 +1966,203 @@ export interface LogConfiguration { logDriver: LogDriver | undefined; /** - *

The configuration options to send to the log driver. This parameter requires version 1.19 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: sudo docker version --format '\{\{.Server.APIVersion\}\}' + *

The configuration options to send to the log driver.

+ *

The options you can specify depend on the log driver. Some + * of the options you can specify when you use the awslogs log driver to route logs to + * Amazon CloudWatch include the following:

+ *
+ *
awslogs-create-group
+ *
+ *

Required: No

+ *

Specify whether you want the log group to be + * created automatically. If this option isn't + * specified, it defaults to + * false.

+ * + *

Your IAM policy must include the + * logs:CreateLogGroup permission before + * you attempt to use + * awslogs-create-group.

+ *
+ *
+ *
awslogs-region
+ *
+ *

Required: Yes

+ *

Specify the Amazon Web Services Region that the + * awslogs log driver is to send your + * Docker logs to. You can choose to send all of your + * logs from clusters in different Regions to a + * single region in CloudWatch Logs. This is so that they're + * all visible in one location. Otherwise, you can + * separate them by Region for more granularity. Make + * sure that the specified log group exists in the + * Region that you specify with this option.

+ *
+ *
awslogs-group
+ *
+ *

Required: Yes

+ *

Make sure to specify a log group that the + * awslogs log driver sends its log + * streams to.

+ *
+ *
awslogs-stream-prefix
+ *
+ *

Required: Yes, when + * using the Fargate launch type. + * Optional for the EC2 launch type.

+ *

Use the awslogs-stream-prefix + * option to associate a log stream with the + * specified prefix, the container name, and the ID + * of the Amazon ECS task that the container belongs to. + * If you specify a prefix with this option, then the + * log stream takes the format prefix-name/container-name/ecs-task-id.

+ *

If you don't specify a prefix + * with this option, then the log stream is named + * after the container ID that's assigned by the + * Docker daemon on the container instance. Because + * it's difficult to trace logs back to the container + * that sent them with just the Docker container ID + * (which is only available on the container + * instance), we recommend that you specify a prefix + * with this option.

+ *

For Amazon ECS services, you can use the service + * name as the prefix. Doing so, you can trace log + * streams to the service that the container belongs + * to, the name of the container that sent them, and + * the ID of the task that the container belongs + * to.

+ *

You must specify a + * stream-prefix for your logs to have your logs + * appear in the Log pane when using the Amazon ECS + * console.

+ *
+ *
awslogs-datetime-format
+ *
+ *

Required: No

+ *

This option defines a multiline start pattern + * in Python strftime format. A log + * message consists of a line that matches the + * pattern and any following lines that don’t match + * the pattern. The matched line is the delimiter + * between log messages.

+ *

One example of a use case for using this + * format is for parsing output such as a stack dump, + * which might otherwise be logged in multiple + * entries. The correct pattern allows it to be + * captured in a single entry.

+ *

For more information, see awslogs-datetime-format.

+ *

You cannot configure both the + * awslogs-datetime-format and + * awslogs-multiline-pattern + * options.

+ * + *

Multiline logging performs regular + * expression parsing and matching of all log + * messages. This might have a negative impact on + * logging performance.

+ *
+ *
+ *
awslogs-multiline-pattern
+ *
+ *

Required: No

+ *

This option defines a multiline start pattern + * that uses a regular expression. A log message + * consists of a line that matches the pattern and + * any following lines that don’t match the pattern. + * The matched line is the delimiter between log + * messages.

+ *

For more information, see awslogs-multiline-pattern.

+ *

This option is ignored if + * awslogs-datetime-format is also + * configured.

+ *

You cannot configure both the + * awslogs-datetime-format and + * awslogs-multiline-pattern + * options.

+ * + *

Multiline logging performs regular + * expression parsing and matching of all log + * messages. This might have a negative impact on + * logging performance.

+ *
+ *
+ *
mode
+ *
+ *

Required: No

+ *

Valid values: non-blocking | + * blocking + *

+ *

This option defines the delivery mode of log + * messages from the container to CloudWatch Logs. The delivery + * mode you choose affects application availability + * when the flow of logs from container to CloudWatch is + * interrupted.

+ *

If you use the blocking + * mode and the flow of logs to CloudWatch is interrupted, + * calls from container code to write to the + * stdout and stderr + * streams will block. The logging thread of the + * application will block as a result. This may cause + * the application to become unresponsive and lead to + * container healthcheck failure.

+ *

If you use the non-blocking mode, + * the container's logs are instead stored in an + * in-memory intermediate buffer configured with the + * max-buffer-size option. This prevents + * the application from becoming unresponsive when + * logs cannot be sent to CloudWatch. We recommend using this mode if you want to + * ensure service availability and are okay with some + * log loss. For more information, see Preventing log loss with non-blocking mode in the awslogs container log driver.

+ *
+ *
max-buffer-size
+ *
+ *

Required: No

+ *

Default value: 1m + *

+ *

When non-blocking mode is used, + * the max-buffer-size log option + * controls the size of the buffer that's used for + * intermediate message storage. Make sure to specify + * an adequate buffer size based on your application. + * When the buffer fills up, further logs cannot be + * stored. Logs that cannot be stored are lost. + *

+ *
+ *
+ *
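The `awslogs` options listed above can be sketched as a container definition `logConfiguration` fragment. Only option names from the list above are used; the log group, Region, prefix, and buffer size are placeholder assumptions.

```typescript
// Hedged sketch of a logConfiguration using the awslogs driver.
const logConfiguration = {
  logDriver: "awslogs",
  options: {
    "awslogs-group": "/ecs/my-service",    // must exist unless awslogs-create-group is "true"
    "awslogs-region": "us-east-1",         // Region the driver sends logs to
    "awslogs-stream-prefix": "my-service", // required for the Fargate launch type
    "mode": "non-blocking",                // buffer logs instead of blocking stdout/stderr writes
    "max-buffer-size": "4m",               // in-memory buffer used with non-blocking mode
  },
};
```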

To route logs using the splunk log router, you need to specify a + * splunk-token and a + * splunk-url.

+ *

When you use the awsfirelens log router to route logs to an Amazon Web Services Service or + * Amazon Web Services Partner Network destination for log storage and analytics, you can + * set the log-driver-buffer-limit option to limit + * the number of events that are buffered in memory, before + * being sent to the log router container. It can help to + * resolve potential log loss issues because high throughput + * might result in memory running out for the buffer inside of + * Docker.

+ *

Other options you can specify when using awsfirelens to route + * logs depend on the destination. When you export logs to + * Amazon Data Firehose, you can specify the Amazon Web Services Region with + * region and a name for the log stream with + * delivery_stream.

+ *

When you export logs to + * Amazon Kinesis Data Streams, you can specify an Amazon Web Services Region with + * region and a data stream name with + * stream.

+ *

When you export logs to Amazon OpenSearch Service, + * you can specify options like Name, + * Host (OpenSearch Service endpoint without protocol), Port, + * Index, Type, + * Aws_auth, Aws_region, Suppress_Type_Name, and + * tls.

+ *

When you export logs to Amazon S3, you can + * specify the bucket using the bucket option. You can also specify region, + * total_file_size, upload_timeout, + * and use_put_object as options.
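The Amazon S3 destination described above can be sketched as an `awsfirelens` fragment using only the option names mentioned in the preceding paragraph; the bucket name, Region, and timing values are placeholder assumptions.

```typescript
// Hedged sketch: awsfirelens options for exporting logs to Amazon S3.
const firelensLogConfiguration = {
  logDriver: "awsfirelens",
  options: {
    Name: "s3",
    region: "us-east-1",      // placeholder Region
    bucket: "my-log-bucket",  // placeholder bucket name
    total_file_size: "1M",    // placeholder rollover size
    upload_timeout: "1m",     // placeholder upload timeout
    use_put_object: "On",
  },
};
```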

+ *

This parameter requires version 1.19 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: sudo docker version --format '\{\{.Server.APIVersion\}\}' *

* @public */ diff --git a/codegen/sdk-codegen/aws-models/ecs.json b/codegen/sdk-codegen/aws-models/ecs.json index 7fcb2a81ee02..3d0034cdf6b6 100644 --- a/codegen/sdk-codegen/aws-models/ecs.json +++ b/codegen/sdk-codegen/aws-models/ecs.json @@ -1681,7 +1681,7 @@ } }, "traits": { - "smithy.api#documentation": "

The details of a capacity provider strategy. A capacity provider strategy can be set\n\t\t\twhen using the RunTaskor CreateCluster APIs or as\n\t\t\tthe default capacity provider strategy for a cluster with the CreateCluster API.

\n

Only capacity providers that are already associated with a cluster and have an\n\t\t\t\tACTIVE or UPDATING status can be used in a capacity\n\t\t\tprovider strategy. The PutClusterCapacityProviders API is used to\n\t\t\tassociate a capacity provider with a cluster.

\n

If specifying a capacity provider that uses an Auto Scaling group, the capacity\n\t\t\tprovider must already be created. New Auto Scaling group capacity providers can be\n\t\t\tcreated with the CreateClusterCapacityProvider API operation.

\n

To use a Fargate capacity provider, specify either the FARGATE or\n\t\t\t\tFARGATE_SPOT capacity providers. The Fargate capacity providers are\n\t\t\tavailable to all accounts and only need to be associated with a cluster to be used in a\n\t\t\tcapacity provider strategy.

\n

With FARGATE_SPOT, you can run interruption tolerant tasks at a rate\n\t\t\tthat's discounted compared to the FARGATE price. FARGATE_SPOT\n\t\t\truns tasks on spare compute capacity. When Amazon Web Services needs the capacity back, your tasks are\n\t\t\tinterrupted with a two-minute warning. FARGATE_SPOT only supports Linux\n\t\t\ttasks with the X86_64 architecture on platform version 1.3.0 or later.

\n

A capacity provider strategy may contain a maximum of 6 capacity providers.

" + "smithy.api#documentation": "

The details of a capacity provider strategy. A capacity provider strategy can be set\n\t\t\twhen using the RunTask or CreateCluster APIs or as\n\t\t\tthe default capacity provider strategy for a cluster with the CreateCluster API.

\n

Only capacity providers that are already associated with a cluster and have an\n\t\t\t\tACTIVE or UPDATING status can be used in a capacity\n\t\t\tprovider strategy. The PutClusterCapacityProviders API is used to\n\t\t\tassociate a capacity provider with a cluster.

\n

If specifying a capacity provider that uses an Auto Scaling group, the capacity\n\t\t\tprovider must already be created. New Auto Scaling group capacity providers can be\n\t\t\tcreated with the CreateClusterCapacityProvider API operation.

\n

To use a Fargate capacity provider, specify either the FARGATE or\n\t\t\t\tFARGATE_SPOT capacity providers. The Fargate capacity providers are\n\t\t\tavailable to all accounts and only need to be associated with a cluster to be used in a\n\t\t\tcapacity provider strategy.

\n

With FARGATE_SPOT, you can run interruption tolerant tasks at a rate that's\n\t\t\tdiscounted compared to the FARGATE price. FARGATE_SPOT runs\n\t\t\ttasks on spare compute capacity. When Amazon Web Services needs the capacity back, your tasks are\n\t\t\tinterrupted with a two-minute warning. FARGATE_SPOT supports Linux tasks\n\t\t\twith the X86_64 architecture on platform version 1.3.0 or later.\n\t\t\t\tFARGATE_SPOT supports Linux tasks with the ARM64 architecture on\n\t\t\tplatform version 1.4.0 or later.

\n

A capacity provider strategy may contain a maximum of 6 capacity providers.

" } }, "com.amazonaws.ecs#CapacityProviderStrategyItemBase": { @@ -3136,7 +3136,7 @@ } ], "traits": { - "smithy.api#documentation": "

Runs and maintains your desired number of tasks from a specified task definition. If\n\t\t\tthe number of tasks running in a service drops below the desiredCount,\n\t\t\tAmazon ECS runs another copy of the task in the specified cluster. To update an existing\n\t\t\tservice, use UpdateService.

\n \n

On March 21, 2024, a change was made to resolve the task definition revision before authorization. When a task definition revision is not specified, authorization will occur using the latest revision of a task definition.

\n
\n

In addition to maintaining the desired count of tasks in your service, you can\n\t\t\toptionally run your service behind one or more load balancers. The load balancers\n\t\t\tdistribute traffic across the tasks that are associated with the service. For more\n\t\t\tinformation, see Service load balancing in the Amazon Elastic Container Service Developer Guide.

\n

You can attach Amazon EBS volumes to Amazon ECS tasks by configuring the volume when creating or\n\t\t\tupdating a service. volumeConfigurations is only supported for REPLICA\n\t\t\tservice and not DAEMON service. For more infomation, see Amazon EBS volumes in the Amazon Elastic Container Service Developer Guide.

\n

Tasks for services that don't use a load balancer are considered healthy if they're in\n\t\t\tthe RUNNING state. Tasks for services that use a load balancer are\n\t\t\tconsidered healthy if they're in the RUNNING state and are reported as\n\t\t\thealthy by the load balancer.

\n

There are two service scheduler strategies available:

\n \n

You can optionally specify a deployment configuration for your service. The deployment\n\t\t\tis initiated by changing properties. For example, the deployment might be initiated by\n\t\t\tthe task definition or by your desired count of a service. You can use UpdateService. The default value for a replica service for\n\t\t\t\tminimumHealthyPercent is 100%. The default value for a daemon service\n\t\t\tfor minimumHealthyPercent is 0%.

\n

If a service uses the ECS deployment controller, the minimum healthy\n\t\t\tpercent represents a lower limit on the number of tasks in a service that must remain in\n\t\t\tthe RUNNING state during a deployment. Specifically, it represents it as a\n\t\t\tpercentage of your desired number of tasks (rounded up to the nearest integer). This\n\t\t\thappens when any of your container instances are in the DRAINING state if\n\t\t\tthe service contains tasks using the EC2 launch type. Using this\n\t\t\tparameter, you can deploy without using additional cluster capacity. For example, if you\n\t\t\tset your service to have desired number of four tasks and a minimum healthy percent of\n\t\t\t50%, the scheduler might stop two existing tasks to free up cluster capacity before\n\t\t\tstarting two new tasks. If they're in the RUNNING state, tasks for services\n\t\t\tthat don't use a load balancer are considered healthy . If they're in the\n\t\t\t\tRUNNING state and reported as healthy by the load balancer, tasks for\n\t\t\tservices that do use a load balancer are considered healthy . The\n\t\t\tdefault value for minimum healthy percent is 100%.

\n

If a service uses the ECS deployment controller, the maximum percent parameter represents an upper limit on the\n\t\t\tnumber of tasks in a service that are allowed in the RUNNING or\n\t\t\t\tPENDING state during a deployment. Specifically, it represents it as a\n\t\t\tpercentage of the desired number of tasks (rounded down to the nearest integer). This\n\t\t\thappens when any of your container instances are in the DRAINING state if\n\t\t\tthe service contains tasks using the EC2 launch type. Using this\n\t\t\tparameter, you can define the deployment batch size. For example, if your service has a\n\t\t\tdesired number of four tasks and a maximum percent value of 200%, the scheduler may\n\t\t\tstart four new tasks before stopping the four older tasks (provided that the cluster\n\t\t\tresources required to do this are available). The default value for maximum percent is\n\t\t\t200%.

\n

If a service uses either the CODE_DEPLOY or EXTERNAL\n\t\t\tdeployment controller types and tasks that use the EC2 launch type, the\n\t\t\t\tminimum healthy percent and maximum percent values are used only to define the lower and upper limit\n\t\t\ton the number of the tasks in the service that remain in the RUNNING state.\n\t\t\tThis is while the container instances are in the DRAINING state. If the\n\t\t\ttasks in the service use the Fargate launch type, the minimum healthy\n\t\t\tpercent and maximum percent values aren't used. This is the case even if they're\n\t\t\tcurrently visible when describing your service.

\n

When creating a service that uses the EXTERNAL deployment controller, you\n\t\t\tcan specify only parameters that aren't controlled at the task set level. The only\n\t\t\trequired parameter is the service name. You control your services using the CreateTaskSet. For more information, see Amazon ECS deployment types in the Amazon Elastic Container Service Developer Guide.

\n

When the service scheduler launches new tasks, it determines task placement. For\n\t\t\tinformation about task placement and task placement strategies, see Amazon ECS\n\t\t\t\ttask placement in the Amazon Elastic Container Service Developer Guide\n

\n

Starting April 15, 2023, Amazon Web Services will not onboard new customers to Amazon Elastic Inference (EI), and will help current customers migrate their workloads to options that offer better price and performance. After April 15, 2023, new customers will not be able to launch instances with Amazon EI accelerators in Amazon SageMaker, Amazon ECS, or Amazon EC2. However, customers who have used Amazon EI at least once during the past 30-day period are considered current customers and will be able to continue using the service.

", + "smithy.api#documentation": "

Runs and maintains your desired number of tasks from a specified task definition. If\n\t\t\tthe number of tasks running in a service drops below the desiredCount,\n\t\t\tAmazon ECS runs another copy of the task in the specified cluster. To update an existing\n\t\t\tservice, use UpdateService.

\n \n

On March 21, 2024, a change was made to resolve the task definition revision before authorization. When a task definition revision is not specified, authorization will occur using the latest revision of a task definition.

\n
\n \n

Amazon Elastic Inference (EI) is no longer available to customers.

\n
\n

In addition to maintaining the desired count of tasks in your service, you can\n\t\t\toptionally run your service behind one or more load balancers. The load balancers\n\t\t\tdistribute traffic across the tasks that are associated with the service. For more\n\t\t\tinformation, see Service load balancing in the Amazon Elastic Container Service Developer Guide.

\n

You can attach Amazon EBS volumes to Amazon ECS tasks by configuring the volume when creating or\n\t\t\tupdating a service. volumeConfigurations is only supported for REPLICA\n\t\t\tservice and not DAEMON service. For more information, see Amazon EBS volumes in the Amazon Elastic Container Service Developer Guide.

\n

Tasks for services that don't use a load balancer are considered healthy if they're in\n\t\t\tthe RUNNING state. Tasks for services that use a load balancer are\n\t\t\tconsidered healthy if they're in the RUNNING state and are reported as\n\t\t\thealthy by the load balancer.

\n

There are two service scheduler strategies available:

\n \n

You can optionally specify a deployment configuration for your service. The deployment\n\t\t\tis initiated by changing properties. For example, the deployment might be initiated by\n\t\t\tthe task definition or by your desired count of a service. You can use UpdateService. The default value for a replica service for\n\t\t\t\tminimumHealthyPercent is 100%. The default value for a daemon service\n\t\t\tfor minimumHealthyPercent is 0%.

\n

If a service uses the ECS deployment controller, the minimum healthy\n\t\t\tpercent represents a lower limit on the number of tasks in a service that must remain in\n\t\t\tthe RUNNING state during a deployment. Specifically, it represents it as a\n\t\t\tpercentage of your desired number of tasks (rounded up to the nearest integer). This\n\t\t\thappens when any of your container instances are in the DRAINING state if\n\t\t\tthe service contains tasks using the EC2 launch type. Using this\n\t\t\tparameter, you can deploy without using additional cluster capacity. For example, if you\n\t\t\tset your service to have a desired number of four tasks and a minimum healthy percent of\n\t\t\t50%, the scheduler might stop two existing tasks to free up cluster capacity before\n\t\t\tstarting two new tasks. If they're in the RUNNING state, tasks for services\n\t\t\tthat don't use a load balancer are considered healthy. If they're in the\n\t\t\t\tRUNNING state and reported as healthy by the load balancer, tasks for\n\t\t\tservices that do use a load balancer are considered healthy. The\n\t\t\tdefault value for minimum healthy percent is 100%.

\n

If a service uses the ECS deployment controller, the maximum percent parameter represents an upper limit on the\n\t\t\tnumber of tasks in a service that are allowed in the RUNNING or\n\t\t\t\tPENDING state during a deployment. Specifically, it represents it as a\n\t\t\tpercentage of the desired number of tasks (rounded down to the nearest integer). This\n\t\t\thappens when any of your container instances are in the DRAINING state if\n\t\t\tthe service contains tasks using the EC2 launch type. Using this\n\t\t\tparameter, you can define the deployment batch size. For example, if your service has a\n\t\t\tdesired number of four tasks and a maximum percent value of 200%, the scheduler may\n\t\t\tstart four new tasks before stopping the four older tasks (provided that the cluster\n\t\t\tresources required to do this are available). The default value for maximum percent is\n\t\t\t200%.
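The two deployment limits can be checked with the worked example above (four desired tasks, 50% minimum healthy percent, 200% maximum percent). This is illustrative arithmetic, not an API call.

```typescript
// Arithmetic behind the ECS deployment controller limits described above.
const desiredCount = 4;
const minimumHealthyPercent = 50; // lower bound, rounded UP to the nearest integer
const maximumPercent = 200;       // upper bound, rounded DOWN to the nearest integer

const minRunning = Math.ceil((desiredCount * minimumHealthyPercent) / 100);  // 2
const maxRunning = Math.floor((desiredCount * maximumPercent) / 100);        // 8
// The scheduler may stop two existing tasks, and may run up to eight tasks
// during the deployment.
```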

\n

If a service uses either the CODE_DEPLOY or EXTERNAL\n\t\t\tdeployment controller types and tasks that use the EC2 launch type, the\n\t\t\t\tminimum healthy percent and maximum percent values are used only to define the lower and upper limit\n\t\t\ton the number of the tasks in the service that remain in the RUNNING state.\n\t\t\tThis is while the container instances are in the DRAINING state. If the\n\t\t\ttasks in the service use the Fargate launch type, the minimum healthy\n\t\t\tpercent and maximum percent values aren't used. This is the case even if they're\n\t\t\tcurrently visible when describing your service.

\n

When creating a service that uses the EXTERNAL deployment controller, you\n\t\t\tcan specify only parameters that aren't controlled at the task set level. The only\n\t\t\trequired parameter is the service name. You control your services using the CreateTaskSet. For more information, see Amazon ECS deployment types in the Amazon Elastic Container Service Developer Guide.

\n

When the service scheduler launches new tasks, it determines task placement. For\n\t\t\tinformation about task placement and task placement strategies, see Amazon ECS\n\t\t\t\ttask placement in the Amazon Elastic Container Service Developer Guide\n

", "smithy.api#examples": [ { "title": "To create a new service", @@ -7774,7 +7774,7 @@ "options": { "target": "com.amazonaws.ecs#LogConfigurationOptionsMap", "traits": { - "smithy.api#documentation": "

The configuration options to send to the log driver. This parameter requires version 1.19 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: sudo docker version --format '{{.Server.APIVersion}}'\n

" + "smithy.api#documentation": "

The configuration options to send to the log driver.

\n

The options you can specify depend on the log driver. Some\n\t\t\t\tof the options you can specify when you use the awslogs log driver to route logs to\n\t\t\t\tAmazon CloudWatch include the following:

\n
\n
awslogs-create-group
\n
\n

Required: No

\n

Specify whether you want the log group to be\n\t\t\t\t\t\t\tcreated automatically. If this option isn't\n\t\t\t\t\t\t\tspecified, it defaults to\n\t\t\t\t\t\t\tfalse.

\n \n

Your IAM policy must include the\n\t\t\t\t\t\t\t\tlogs:CreateLogGroup permission before\n\t\t\t\t\t\t\t\tyou attempt to use\n\t\t\t\t\t\t\t\tawslogs-create-group.

\n
\n
\n
awslogs-region
\n
\n

Required: Yes

\n

Specify the Amazon Web Services Region that the\n\t\t\t\t\t\t\tawslogs log driver is to send your\n\t\t\t\t\t\t\tDocker logs to. You can choose to send all of your\n\t\t\t\t\t\t\tlogs from clusters in different Regions to a\n\t\t\t\t\t\t\tsingle region in CloudWatch Logs. This is so that they're\n\t\t\t\t\t\t\tall visible in one location. Otherwise, you can\n\t\t\t\t\t\t\tseparate them by Region for more granularity. Make\n\t\t\t\t\t\t\tsure that the specified log group exists in the\n\t\t\t\t\t\t\tRegion that you specify with this option.

\n
\n
awslogs-group
\n
\n

Required: Yes

\n

Make sure to specify a log group that the\n\t\t\t\t\t\t\tawslogs log driver sends its log\n\t\t\t\t\t\t\tstreams to.

\n
\n
awslogs-stream-prefix
\n
\n

Required: Yes, when\n\t\t\t\t\t\t\tusing the Fargate launch type.\n\t\t\t\t\t\t\tOptional for the EC2 launch type.

\n

Use the awslogs-stream-prefix\n\t\t\t\t\t\t\toption to associate a log stream with the\n\t\t\t\t\t\t\tspecified prefix, the container name, and the ID\n\t\t\t\t\t\t\tof the Amazon ECS task that the container belongs to.\n\t\t\t\t\t\t\tIf you specify a prefix with this option, then the\n\t\t\t\t\t\t\tlog stream takes the format prefix-name/container-name/ecs-task-id.

\n

If you don't specify a prefix\n\t\t\t\t\t\t\twith this option, then the log stream is named\n\t\t\t\t\t\t\tafter the container ID that's assigned by the\n\t\t\t\t\t\t\tDocker daemon on the container instance. Because\n\t\t\t\t\t\t\tit's difficult to trace logs back to the container\n\t\t\t\t\t\t\tthat sent them with just the Docker container ID\n\t\t\t\t\t\t\t(which is only available on the container\n\t\t\t\t\t\t\tinstance), we recommend that you specify a prefix\n\t\t\t\t\t\t\twith this option.

\n

For Amazon ECS services, you can use the service\n\t\t\t\t\t\t\tname as the prefix. Doing so, you can trace log\n\t\t\t\t\t\t\tstreams to the service that the container belongs\n\t\t\t\t\t\t\tto, the name of the container that sent them, and\n\t\t\t\t\t\t\tthe ID of the task that the container belongs\n\t\t\t\t\t\t\tto.

\n

You must specify a\n\t\t\t\t\t\t\tstream-prefix for your logs to have your logs\n\t\t\t\t\t\t\tappear in the Log pane when using the Amazon ECS\n\t\t\t\t\t\t\tconsole.

\n
\n
awslogs-datetime-format
\n
\n

Required: No

\n

This option defines a multiline start pattern\n\t\t\t\t\t\t\tin Python strftime format. A log\n\t\t\t\t\t\t\tmessage consists of a line that matches the\n\t\t\t\t\t\t\tpattern and any following lines that don’t match\n\t\t\t\t\t\t\tthe pattern. The matched line is the delimiter\n\t\t\t\t\t\t\tbetween log messages.

\n

One example of a use case for using this\n\t\t\t\t\t\t\tformat is for parsing output such as a stack dump,\n\t\t\t\t\t\t\twhich might otherwise be logged in multiple\n\t\t\t\t\t\t\tentries. The correct pattern allows it to be\n\t\t\t\t\t\t\tcaptured in a single entry.

\n

For more information, see awslogs-datetime-format.

\n

You cannot configure both the\n\t\t\t\t\t\t\tawslogs-datetime-format and\n\t\t\t\t\t\t\tawslogs-multiline-pattern\n\t\t\t\t\t\t\toptions.

\n \n

Multiline logging performs regular\n\t\t\t\t\t\t\t\texpression parsing and matching of all log\n\t\t\t\t\t\t\t\tmessages. This might have a negative impact on\n\t\t\t\t\t\t\t\tlogging performance.

\n
\n
\n
awslogs-multiline-pattern
\n
\n

Required: No

\n

This option defines a multiline start pattern\n\t\t\t\t\t\t\tthat uses a regular expression. A log message\n\t\t\t\t\t\t\tconsists of a line that matches the pattern and\n\t\t\t\t\t\t\tany following lines that don’t match the pattern.\n\t\t\t\t\t\t\tThe matched line is the delimiter between log\n\t\t\t\t\t\t\tmessages.

\n

For more information, see awslogs-multiline-pattern.

\n

This option is ignored if\n\t\t\t\t\t\t\tawslogs-datetime-format is also\n\t\t\t\t\t\t\tconfigured.

\n

You cannot configure both the\n\t\t\t\t\t\t\tawslogs-datetime-format and\n\t\t\t\t\t\t\tawslogs-multiline-pattern\n\t\t\t\t\t\t\toptions.

\n \n

Multiline logging performs regular\n\t\t\t\t\t\t\t\texpression parsing and matching of all log\n\t\t\t\t\t\t\t\tmessages. This might have a negative impact on\n\t\t\t\t\t\t\t\tlogging performance.

\n
\n
\n
mode
\n
\n

Required: No

\n

Valid values: non-blocking |\n\t\t\t\t\t\t\tblocking\n

\n

This option defines the delivery mode of log\n\t\t\t\t\t\t\tmessages from the container to CloudWatch Logs. The delivery\n\t\t\t\t\t\t\tmode you choose affects application availability\n\t\t\t\t\t\t\twhen the flow of logs from container to CloudWatch is\n\t\t\t\t\t\t\tinterrupted.

\n

If you use the blocking\n\t\t\t\t\t\t\tmode and the flow of logs to CloudWatch is interrupted,\n\t\t\t\t\t\t\tcalls from container code to write to the\n\t\t\t\t\t\t\tstdout and stderr\n\t\t\t\t\t\t\tstreams will block. The logging thread of the\n\t\t\t\t\t\t\tapplication will block as a result. This may cause\n\t\t\t\t\t\t\tthe application to become unresponsive and lead to\n\t\t\t\t\t\t\tcontainer health check failure.

\n

If you use the non-blocking mode,\n\t\t\t\t\t\t\tthe container's logs are instead stored in an\n\t\t\t\t\t\t\tin-memory intermediate buffer configured with the\n\t\t\t\t\t\t\tmax-buffer-size option. This prevents\n\t\t\t\t\t\t\tthe application from becoming unresponsive when\n\t\t\t\t\t\t\tlogs cannot be sent to CloudWatch. We recommend using this mode if you want to\n\t\t\t\t\t\t\tensure service availability and are okay with some\n\t\t\t\t\t\t\tlog loss. For more information, see Preventing log loss with non-blocking mode in the awslogs container log driver.

\n
\n
max-buffer-size
\n
\n

Required: No

\n

Default value: 1m\n

\n

When non-blocking mode is used,\n\t\t\t\t\t\t\tthe max-buffer-size log option\n\t\t\t\t\t\t\tcontrols the size of the buffer that's used for\n\t\t\t\t\t\t\tintermediate message storage. Make sure to specify\n\t\t\t\t\t\t\tan adequate buffer size based on your application.\n\t\t\t\t\t\t\tWhen the buffer fills up, further logs cannot be\n\t\t\t\t\t\t\tstored. Logs that cannot be stored are lost.\n\t\t\t\t\t\t

\n
\n
\n
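Assembled from the options above, a `logConfiguration` for a container definition might look like the following sketch. The log group name, Region, prefix, and buffer size are illustrative placeholders, not values from this document; substitute your own.

```typescript
// A possible awslogs configuration for an ECS container definition.
// All option values are strings, per the log driver options contract.
const awslogsConfig = {
  logDriver: "awslogs",
  options: {
    // Create the log group automatically (requires logs:CreateLogGroup).
    "awslogs-create-group": "true",
    // Required: the Region the awslogs driver sends logs to.
    "awslogs-region": "us-east-1",
    // Required: the destination log group (placeholder name).
    "awslogs-group": "/ecs/my-service",
    // Required on Fargate; names streams as prefix/container-name/ecs-task-id.
    "awslogs-stream-prefix": "my-service",
    // Buffer logs in memory instead of blocking stdout/stderr writes.
    "mode": "non-blocking",
    // Size of the in-memory buffer used with non-blocking mode.
    "max-buffer-size": "25m",
  },
};
```

With `mode` set to `non-blocking`, log entries that overflow the 25m buffer are dropped rather than stalling the application's logging thread.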

To route logs using the splunk log router, you need to specify a\n\t\t\t\tsplunk-token and a\n\t\t\t\tsplunk-url.
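For comparison, a minimal `splunk` configuration needs only the two required options named above; the token and URL below are placeholders, not real credentials.

```typescript
// Minimal splunk log router configuration: splunk-token and splunk-url
// are the two required options (both values here are placeholders).
const splunkConfig = {
  logDriver: "splunk",
  options: {
    "splunk-token": "00000000-0000-0000-0000-000000000000",
    "splunk-url": "https://splunk.example.com:8088",
  },
};
```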

\n

When you use the awsfirelens log router to route logs to an Amazon Web Services Service or\n\t\t\t\tAmazon Web Services Partner Network destination for log storage and analytics, you can\n\t\t\t\tset the log-driver-buffer-limit option to limit\n\t\t\t\tthe number of events that are buffered in memory, before\n\t\t\t\tbeing sent to the log router container. It can help to\n\t\t\t\tresolve potential log loss issues because high throughput\n\t\t\t\tmight result in memory running out for the buffer inside of\n\t\t\t\tDocker.

\n

Other options you can specify when using awsfirelens to route\n\t\t\t\tlogs depend on the destination. When you export logs to\n\t\t\t\tAmazon Data Firehose, you can specify the Amazon Web Services Region with\n\t\t\t\tregion and a name for the log stream with\n\t\t\t\tdelivery_stream.

\n

When you export logs to\n\t\t\t\tAmazon Kinesis Data Streams, you can specify an Amazon Web Services Region with\n\t\t\t\tregion and a data stream name with\n\t\t\t\tstream.

\n

When you export logs to Amazon OpenSearch Service,\n\t\t\t\tyou can specify options like Name,\n\t\t\t\tHost (OpenSearch Service endpoint without protocol), Port,\n\t\t\t\tIndex, Type,\n\t\t\t\tAws_auth, Aws_region, Suppress_Type_Name, and\n\t\t\t\ttls.

\n

When you export logs to Amazon S3, you can\n\t\t\t\t\tspecify the bucket using the bucket option. You can also specify region,\n\t\t\t\t\ttotal_file_size, upload_timeout,\n\t\t\t\t\tand use_put_object as options.
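As a sketch of one of the destinations above, an `awsfirelens` configuration targeting Amazon Data Firehose might look like the following. The delivery stream name is a placeholder, and the `Name` option (which selects the Fluent Bit output plugin) is an assumption drawn from common FireLens examples rather than from this document.

```typescript
// Routing logs through FireLens to Amazon Data Firehose.
const firelensConfig = {
  logDriver: "awsfirelens",
  options: {
    // Assumed Fluent Bit output plugin selector for Firehose.
    "Name": "firehose",
    // Region and delivery stream name, as described for Firehose above.
    "region": "us-east-1",
    "delivery_stream": "my-delivery-stream",
    // Cap on events buffered in memory before reaching the log router.
    "log-driver-buffer-limit": "2097152",
  },
};
```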

\n

This parameter requires version 1.19 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: sudo docker version --format '{{.Server.APIVersion}}'\n

" } }, "secretOptions": { @@ -9477,7 +9477,7 @@ } ], "traits": { - "smithy.api#documentation": "

Starts a new task using the specified task definition.

\n \n

On March 21, 2024, a change was made to resolve the task definition revision before authorization. When a task definition revision is not specified, authorization will occur using the latest revision of a task definition.

\n
\n

You can allow Amazon ECS to place tasks for you, or you can customize how Amazon ECS places\n\t\t\ttasks using placement constraints and placement strategies. For more information, see\n\t\t\t\tScheduling Tasks in the Amazon Elastic Container Service Developer Guide.

\n

Alternatively, you can use StartTask to use your own scheduler or\n\t\t\tplace tasks manually on specific container instances.

\n

Starting April 15, 2023, Amazon Web Services will not onboard new customers to Amazon Elastic Inference (EI), and will help current customers migrate their workloads to options that offer better price and performance. After April 15, 2023, new customers will not be able to launch instances with Amazon EI accelerators in Amazon SageMaker, Amazon ECS, or Amazon EC2. However, customers who have used Amazon EI at least once during the past 30-day period are considered current customers and will be able to continue using the service.

\n

You can attach Amazon EBS volumes to Amazon ECS tasks by configuring the volume when creating or\n\t\t\tupdating a service. For more information, see Amazon EBS volumes in the Amazon Elastic Container Service Developer Guide.

\n

The Amazon ECS API follows an eventual consistency model. This is because of the\n\t\t\tdistributed nature of the system supporting the API. This means that the result of an\n\t\t\tAPI command you run that affects your Amazon ECS resources might not be immediately visible\n\t\t\tto all subsequent commands you run. Keep this in mind when you carry out an API command\n\t\t\tthat immediately follows a previous API command.

\n

To manage eventual consistency, you can do the following:

\n ", + "smithy.api#documentation": "

Starts a new task using the specified task definition.

\n \n

On March 21, 2024, a change was made to resolve the task definition revision before authorization. When a task definition revision is not specified, authorization will occur using the latest revision of a task definition.

\n
\n \n

Amazon Elastic Inference (EI) is no longer available to customers.

\n
\n

You can allow Amazon ECS to place tasks for you, or you can customize how Amazon ECS places\n\t\t\ttasks using placement constraints and placement strategies. For more information, see\n\t\t\t\tScheduling Tasks in the Amazon Elastic Container Service Developer Guide.

\n

Alternatively, you can use StartTask to use your own scheduler or\n\t\t\tplace tasks manually on specific container instances.

\n

You can attach Amazon EBS volumes to Amazon ECS tasks by configuring the volume when creating or\n\t\t\tupdating a service. For more information, see Amazon EBS volumes in the Amazon Elastic Container Service Developer Guide.

\n

The Amazon ECS API follows an eventual consistency model. This is because of the\n\t\t\tdistributed nature of the system supporting the API. This means that the result of an\n\t\t\tAPI command you run that affects your Amazon ECS resources might not be immediately visible\n\t\t\tto all subsequent commands you run. Keep this in mind when you carry out an API command\n\t\t\tthat immediately follows a previous API command.

\n

To manage eventual consistency, you can do the following:

\n ", "smithy.api#examples": [ { "title": "To run a task on your default cluster", @@ -10591,7 +10591,7 @@ } ], "traits": { - "smithy.api#documentation": "

Starts a new task from the specified task definition on the specified container\n\t\t\tinstance or instances.

\n \n

On March 21, 2024, a change was made to resolve the task definition revision before authorization. When a task definition revision is not specified, authorization will occur using the latest revision of a task definition.

\n
\n

Starting April 15, 2023, Amazon Web Services will not onboard new customers to Amazon Elastic Inference (EI), and will help current customers migrate their workloads to options that offer better price and performance. After April 15, 2023, new customers will not be able to launch instances with Amazon EI accelerators in Amazon SageMaker, Amazon ECS, or Amazon EC2. However, customers who have used Amazon EI at least once during the past 30-day period are considered current customers and will be able to continue using the service.

\n

Alternatively, you can use RunTask to place tasks for you. For more\n\t\t\tinformation, see Scheduling Tasks in the Amazon Elastic Container Service Developer Guide.

\n

You can attach Amazon EBS volumes to Amazon ECS tasks by configuring the volume when creating or\n\t\t\tupdating a service. For more information, see Amazon EBS volumes in the Amazon Elastic Container Service Developer Guide.

" + "smithy.api#documentation": "

Starts a new task from the specified task definition on the specified container\n\t\t\tinstance or instances.

\n \n

On March 21, 2024, a change was made to resolve the task definition revision before authorization. When a task definition revision is not specified, authorization will occur using the latest revision of a task definition.

\n
\n \n

Amazon Elastic Inference (EI) is no longer available to customers.

\n
\n

Alternatively, you can use RunTask to place tasks for you. For more\n\t\t\tinformation, see Scheduling Tasks in the Amazon Elastic Container Service Developer Guide.

\n

You can attach Amazon EBS volumes to Amazon ECS tasks by configuring the volume when creating or\n\t\t\tupdating a service. For more information, see Amazon EBS volumes in the Amazon Elastic Container Service Developer Guide.

" } }, "com.amazonaws.ecs#StartTaskRequest": {