From 56ac0f2c024d4556528825fc395108182e3e4f11 Mon Sep 17 00:00:00 2001
From: awstools
Date: Tue, 26 Mar 2024 18:22:15 +0000
Subject: [PATCH] docs(client-ecs): This is a documentation update for Amazon ECS.

---
 .../client-ecs/src/commands/CreateServiceCommand.ts |  2 +-
 .../client-ecs/src/commands/CreateTaskSetCommand.ts |  2 +-
 clients/client-ecs/src/commands/RunTaskCommand.ts   |  2 +-
 clients/client-ecs/src/commands/StartTaskCommand.ts |  2 +-
 .../client-ecs/src/commands/UpdateServiceCommand.ts |  2 +-
 codegen/sdk-codegen/aws-models/ecs.json             | 10 +++++-----
 6 files changed, 10 insertions(+), 10 deletions(-)

diff --git a/clients/client-ecs/src/commands/CreateServiceCommand.ts b/clients/client-ecs/src/commands/CreateServiceCommand.ts
index 42c3235ff5b7..fe5f199d6377 100644
--- a/clients/client-ecs/src/commands/CreateServiceCommand.ts
+++ b/clients/client-ecs/src/commands/CreateServiceCommand.ts
@@ -32,7 +32,7 @@ export interface CreateServiceCommandOutput extends CreateServiceResponse, __Met
 * Amazon ECS runs another copy of the task in the specified cluster. To update an existing
 * service, see the UpdateService action.

* - *

The following change began on March 21, 2024. When the task definition revision is not specified, Amazon ECS resolves the task definition revision before it authorizes the task definition.

+ *

On March 21, 2024, a change was made to resolve the task definition revision before authorization. When a task definition revision is not specified, authorization will occur using the latest revision of a task definition.

*
*

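For context on the reworded note, here is a minimal sketch of the behavior it describes, using the v3 ECS client. The cluster and task definition names are made-up placeholders, and the snippet is an illustration for reviewers rather than part of the diff itself:

```ts
import { ECSClient, RunTaskCommand } from "@aws-sdk/client-ecs";

const client = new ECSClient({ region: "us-east-1" });

async function main() {
  // Family only (no ":<revision>" suffix): Amazon ECS resolves this to the
  // latest ACTIVE revision of the family before the request is authorized,
  // which is the behavior the rewritten note describes.
  await client.send(
    new RunTaskCommand({
      cluster: "example-cluster",       // hypothetical cluster name
      taskDefinition: "example-webapp", // latest revision is resolved and used
      count: 1,
    })
  );

  // Pinning "family:revision" skips the resolution step entirely.
  await client.send(
    new RunTaskCommand({
      cluster: "example-cluster",
      taskDefinition: "example-webapp:12",
      count: 1,
    })
  );
}

main().catch(console.error);
```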
 * In addition to maintaining the desired count of tasks in your service, you can
 * optionally run your service behind one or more load balancers. The load balancers
diff --git a/clients/client-ecs/src/commands/CreateTaskSetCommand.ts b/clients/client-ecs/src/commands/CreateTaskSetCommand.ts
index 6d7551ac8921..6b15d0143ae1 100644
--- a/clients/client-ecs/src/commands/CreateTaskSetCommand.ts
+++ b/clients/client-ecs/src/commands/CreateTaskSetCommand.ts
@@ -32,7 +32,7 @@ export interface CreateTaskSetCommandOutput extends CreateTaskSetResponse, __Met
 * Amazon ECS deployment
 * types in the Amazon Elastic Container Service Developer Guide.

* - *

The following change began on March 21, 2024. When the task definition revision is not specified, Amazon ECS resolves the task definition revision before it authorizes the task definition.

+ *

On March 21, 2024, a change was made to resolve the task definition revision before authorization. When a task definition revision is not specified, authorization will occur using the latest revision of a task definition.

*
*

 * For information about the maximum number of task sets and other quotas, see Amazon ECS
 * service quotas in the Amazon Elastic Container Service Developer Guide.
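To show the operation this hunk documents in use, here is a sketch of creating a task set for a service that uses the EXTERNAL deployment controller with the v3 SDK. The cluster, service, task definition, subnet, and security group identifiers are placeholders; this is illustration only, not part of the patch:

```ts
import { ECSClient, CreateTaskSetCommand } from "@aws-sdk/client-ecs";

const client = new ECSClient({ region: "us-east-1" });

async function createExternalTaskSet() {
  // CreateTaskSet only applies to services created with the EXTERNAL
  // deployment controller; the task set carries the task definition,
  // launch type, network settings, and scale.
  const response = await client.send(
    new CreateTaskSetCommand({
      cluster: "example-cluster",
      service: "example-service",
      taskDefinition: "example-webapp:12",
      launchType: "FARGATE",
      networkConfiguration: {
        awsvpcConfiguration: {
          subnets: ["subnet-0123456789abcdef0"],
          securityGroups: ["sg-0123456789abcdef0"],
          assignPublicIp: "DISABLED",
        },
      },
      scale: { unit: "PERCENT", value: 100 },
    })
  );
  console.log(response.taskSet?.id);
}

createExternalTaskSet().catch(console.error);
```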

diff --git a/clients/client-ecs/src/commands/RunTaskCommand.ts b/clients/client-ecs/src/commands/RunTaskCommand.ts
index 8b7e9ba82e0e..fef621e19f20 100644
--- a/clients/client-ecs/src/commands/RunTaskCommand.ts
+++ b/clients/client-ecs/src/commands/RunTaskCommand.ts
@@ -29,7 +29,7 @@ export interface RunTaskCommandOutput extends RunTaskResponse, __MetadataBearer
 /**
  *

Starts a new task using the specified task definition.

* - *

The following change began on March 21, 2024. When the task definition revision is not specified, Amazon ECS resolves the task definition revision before it authorizes the task definition.

+ *

On March 21, 2024, a change was made to resolve the task definition revision before authorization. When a task definition revision is not specified, authorization will occur using the latest revision of a task definition.

*
*

 * You can allow Amazon ECS to place tasks for you, or you can customize how Amazon ECS places
 * tasks using placement constraints and placement strategies. For more information, see
diff --git a/clients/client-ecs/src/commands/StartTaskCommand.ts b/clients/client-ecs/src/commands/StartTaskCommand.ts
index 8ebb09b750c9..cf02d2e07476 100644
--- a/clients/client-ecs/src/commands/StartTaskCommand.ts
+++ b/clients/client-ecs/src/commands/StartTaskCommand.ts
@@ -30,7 +30,7 @@ export interface StartTaskCommandOutput extends StartTaskResponse, __MetadataBea
 *

 * Starts a new task from the specified task definition on the specified container
 * instance or instances.

* - *

The following change began on March 21, 2024. When the task definition revision is not specified, Amazon ECS resolves the task definition revision before it authorizes the task definition.

+ *

On March 21, 2024, a change was made to resolve the task definition revision before authorization. When a task definition revision is not specified, authorization will occur using the latest revision of a task definition.

*
*

Starting April 15, 2023, Amazon Web Services will not onboard new customers to Amazon Elastic Inference (EI), and will help current customers migrate their workloads to options that offer better price and performance. After April 15, 2023, new customers will not be able to launch instances with Amazon EI accelerators in Amazon SageMaker, Amazon ECS, or Amazon EC2. However, customers who have used Amazon EI at least once during the past 30-day period are considered current customers and will be able to continue using the service.

*

 * Alternatively, you can use RunTask to place tasks for you. For more
diff --git a/clients/client-ecs/src/commands/UpdateServiceCommand.ts b/clients/client-ecs/src/commands/UpdateServiceCommand.ts
index 1158bcfd2794..4fc75d83ca12 100644
--- a/clients/client-ecs/src/commands/UpdateServiceCommand.ts
+++ b/clients/client-ecs/src/commands/UpdateServiceCommand.ts
@@ -29,7 +29,7 @@ export interface UpdateServiceCommandOutput extends UpdateServiceResponse, __Met
 /**
  *

Modifies the parameters of a service.

* - *

The following change began on March 21, 2024. When the task definition revision is not specified, Amazon ECS resolves the task definition revision before it authorizes the task definition.

+ *

On March 21, 2024, a change was made to resolve the task definition revision before authorization. When a task definition revision is not specified, authorization will occur using the latest revision of a task definition.

*
*

 * For services using the rolling update (ECS) you can update the desired
 * count, deployment configuration, network configuration, load balancers, service
diff --git a/codegen/sdk-codegen/aws-models/ecs.json b/codegen/sdk-codegen/aws-models/ecs.json
index db117a87ac8b..803c856b4b9f 100644
--- a/codegen/sdk-codegen/aws-models/ecs.json
+++ b/codegen/sdk-codegen/aws-models/ecs.json
@@ -3097,7 +3097,7 @@
         }
       ],
       "traits": {
-        "smithy.api#documentation": "

Runs and maintains your desired number of tasks from a specified task definition. If\n\t\t\tthe number of tasks running in a service drops below the desiredCount,\n\t\t\tAmazon ECS runs another copy of the task in the specified cluster. To update an existing\n\t\t\tservice, see the UpdateService action.

\n \n

The following change began on March 21, 2024. When the task definition revision is not specified, Amazon ECS resolves the task definition revision before it authorizes the task definition.

\n
\n

In addition to maintaining the desired count of tasks in your service, you can\n\t\t\toptionally run your service behind one or more load balancers. The load balancers\n\t\t\tdistribute traffic across the tasks that are associated with the service. For more\n\t\t\tinformation, see Service load balancing in the Amazon Elastic Container Service Developer Guide.

\n

You can attach Amazon EBS volumes to Amazon ECS tasks by configuring the volume when creating or\n\t\t\tupdating a service. volumeConfigurations is only supported for REPLICA\n\t\t\tservice and not DAEMON service. For more infomation, see Amazon EBS volumes in the Amazon Elastic Container Service Developer Guide.

\n

Tasks for services that don't use a load balancer are considered healthy if they're in\n\t\t\tthe RUNNING state. Tasks for services that use a load balancer are\n\t\t\tconsidered healthy if they're in the RUNNING state and are reported as\n\t\t\thealthy by the load balancer.

\n

There are two service scheduler strategies available:

\n \n

You can optionally specify a deployment configuration for your service. The deployment\n\t\t\tis initiated by changing properties. For example, the deployment might be initiated by\n\t\t\tthe task definition or by your desired count of a service. This is done with an UpdateService operation. The default value for a replica service for\n\t\t\t\tminimumHealthyPercent is 100%. The default value for a daemon service\n\t\t\tfor minimumHealthyPercent is 0%.

\n

If a service uses the ECS deployment controller, the minimum healthy\n\t\t\tpercent represents a lower limit on the number of tasks in a service that must remain in\n\t\t\tthe RUNNING state during a deployment. Specifically, it represents it as a\n\t\t\tpercentage of your desired number of tasks (rounded up to the nearest integer). This\n\t\t\thappens when any of your container instances are in the DRAINING state if\n\t\t\tthe service contains tasks using the EC2 launch type. Using this\n\t\t\tparameter, you can deploy without using additional cluster capacity. For example, if you\n\t\t\tset your service to have desired number of four tasks and a minimum healthy percent of\n\t\t\t50%, the scheduler might stop two existing tasks to free up cluster capacity before\n\t\t\tstarting two new tasks. If they're in the RUNNING state, tasks for services\n\t\t\tthat don't use a load balancer are considered healthy . If they're in the\n\t\t\t\tRUNNING state and reported as healthy by the load balancer, tasks for\n\t\t\tservices that do use a load balancer are considered healthy . The\n\t\t\tdefault value for minimum healthy percent is 100%.

\n

If a service uses the ECS deployment controller, the maximum percent parameter represents an upper limit on the\n\t\t\tnumber of tasks in a service that are allowed in the RUNNING or\n\t\t\t\tPENDING state during a deployment. Specifically, it represents it as a\n\t\t\tpercentage of the desired number of tasks (rounded down to the nearest integer). This\n\t\t\thappens when any of your container instances are in the DRAINING state if\n\t\t\tthe service contains tasks using the EC2 launch type. Using this\n\t\t\tparameter, you can define the deployment batch size. For example, if your service has a\n\t\t\tdesired number of four tasks and a maximum percent value of 200%, the scheduler may\n\t\t\tstart four new tasks before stopping the four older tasks (provided that the cluster\n\t\t\tresources required to do this are available). The default value for maximum percent is\n\t\t\t200%.

\n

If a service uses either the CODE_DEPLOY or EXTERNAL\n\t\t\tdeployment controller types and tasks that use the EC2 launch type, the\n\t\t\t\tminimum healthy percent and maximum percent values are used only to define the lower and upper limit\n\t\t\ton the number of the tasks in the service that remain in the RUNNING state.\n\t\t\tThis is while the container instances are in the DRAINING state. If the\n\t\t\ttasks in the service use the Fargate launch type, the minimum healthy\n\t\t\tpercent and maximum percent values aren't used. This is the case even if they're\n\t\t\tcurrently visible when describing your service.

\n

When creating a service that uses the EXTERNAL deployment controller, you\n\t\t\tcan specify only parameters that aren't controlled at the task set level. The only\n\t\t\trequired parameter is the service name. You control your services using the CreateTaskSet operation. For more information, see Amazon ECS deployment types in the Amazon Elastic Container Service Developer Guide.

\n

When the service scheduler launches new tasks, it determines task placement. For information\n\t\t\tabout task placement and task placement strategies, see Amazon ECS\n\t\t\t\ttask placement in the Amazon Elastic Container Service Developer Guide\n

\n

Starting April 15, 2023, Amazon Web Services will not onboard new customers to Amazon Elastic Inference (EI), and will help current customers migrate their workloads to options that offer better price and performance. After April 15, 2023, new customers will not be able to launch instances with Amazon EI accelerators in Amazon SageMaker, Amazon ECS, or Amazon EC2. However, customers who have used Amazon EI at least once during the past 30-day period are considered current customers and will be able to continue using the service.

", + "smithy.api#documentation": "

Runs and maintains your desired number of tasks from a specified task definition. If\n\t\t\tthe number of tasks running in a service drops below the desiredCount,\n\t\t\tAmazon ECS runs another copy of the task in the specified cluster. To update an existing\n\t\t\tservice, see the UpdateService action.

\n \n

On March 21, 2024, a change was made to resolve the task definition revision before authorization. When a task definition revision is not specified, authorization will occur using the latest revision of a task definition.

\n
\n

In addition to maintaining the desired count of tasks in your service, you can\n\t\t\toptionally run your service behind one or more load balancers. The load balancers\n\t\t\tdistribute traffic across the tasks that are associated with the service. For more\n\t\t\tinformation, see Service load balancing in the Amazon Elastic Container Service Developer Guide.

\n

You can attach Amazon EBS volumes to Amazon ECS tasks by configuring the volume when creating or\n\t\t\tupdating a service. volumeConfigurations is only supported for REPLICA\n\t\t\tservice and not DAEMON service. For more information, see Amazon EBS volumes in the Amazon Elastic Container Service Developer Guide.

\n

Tasks for services that don't use a load balancer are considered healthy if they're in\n\t\t\tthe RUNNING state. Tasks for services that use a load balancer are\n\t\t\tconsidered healthy if they're in the RUNNING state and are reported as\n\t\t\thealthy by the load balancer.

\n

There are two service scheduler strategies available:

\n \n

You can optionally specify a deployment configuration for your service. The deployment\n\t\t\tis initiated by changing properties. For example, the deployment might be initiated by\n\t\t\tthe task definition or by your desired count of a service. This is done with an UpdateService operation. The default value for a replica service for\n\t\t\t\tminimumHealthyPercent is 100%. The default value for a daemon service\n\t\t\tfor minimumHealthyPercent is 0%.

\n

If a service uses the ECS deployment controller, the minimum healthy\n\t\t\tpercent represents a lower limit on the number of tasks in a service that must remain in\n\t\t\tthe RUNNING state during a deployment. Specifically, it represents it as a\n\t\t\tpercentage of your desired number of tasks (rounded up to the nearest integer). This\n\t\t\thappens when any of your container instances are in the DRAINING state if\n\t\t\tthe service contains tasks using the EC2 launch type. Using this\n\t\t\tparameter, you can deploy without using additional cluster capacity. For example, if you\n\t\t\tset your service to have a desired number of four tasks and a minimum healthy percent of\n\t\t\t50%, the scheduler might stop two existing tasks to free up cluster capacity before\n\t\t\tstarting two new tasks. If they're in the RUNNING state, tasks for services\n\t\t\tthat don't use a load balancer are considered healthy. If they're in the\n\t\t\t\tRUNNING state and reported as healthy by the load balancer, tasks for\n\t\t\tservices that do use a load balancer are considered healthy. The\n\t\t\tdefault value for minimum healthy percent is 100%.

\n

If a service uses the ECS deployment controller, the maximum percent parameter represents an upper limit on the\n\t\t\tnumber of tasks in a service that are allowed in the RUNNING or\n\t\t\t\tPENDING state during a deployment. Specifically, it represents it as a\n\t\t\tpercentage of the desired number of tasks (rounded down to the nearest integer). This\n\t\t\thappens when any of your container instances are in the DRAINING state if\n\t\t\tthe service contains tasks using the EC2 launch type. Using this\n\t\t\tparameter, you can define the deployment batch size. For example, if your service has a\n\t\t\tdesired number of four tasks and a maximum percent value of 200%, the scheduler may\n\t\t\tstart four new tasks before stopping the four older tasks (provided that the cluster\n\t\t\tresources required to do this are available). The default value for maximum percent is\n\t\t\t200%.

\n

If a service uses either the CODE_DEPLOY or EXTERNAL\n\t\t\tdeployment controller types and tasks that use the EC2 launch type, the\n\t\t\t\tminimum healthy percent and maximum percent values are used only to define the lower and upper limit\n\t\t\ton the number of the tasks in the service that remain in the RUNNING state.\n\t\t\tThis is while the container instances are in the DRAINING state. If the\n\t\t\ttasks in the service use the Fargate launch type, the minimum healthy\n\t\t\tpercent and maximum percent values aren't used. This is the case even if they're\n\t\t\tcurrently visible when describing your service.

\n

When creating a service that uses the EXTERNAL deployment controller, you\n\t\t\tcan specify only parameters that aren't controlled at the task set level. The only\n\t\t\trequired parameter is the service name. You control your services using the CreateTaskSet operation. For more information, see Amazon ECS deployment types in the Amazon Elastic Container Service Developer Guide.

\n

When the service scheduler launches new tasks, it determines task placement. For information\n\t\t\tabout task placement and task placement strategies, see Amazon ECS\n\t\t\t\ttask placement in the Amazon Elastic Container Service Developer Guide\n

\n

Starting April 15, 2023, Amazon Web Services will not onboard new customers to Amazon Elastic Inference (EI), and will help current customers migrate their workloads to options that offer better price and performance. After April 15, 2023, new customers will not be able to launch instances with Amazon EI accelerators in Amazon SageMaker, Amazon ECS, or Amazon EC2. However, customers who have used Amazon EI at least once during the past 30-day period are considered current customers and will be able to continue using the service.

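To ground the paragraphs above (desired count, load balancing, scheduler strategy, and the minimumHealthyPercent/maximumPercent deployment settings), here is a sketch of a CreateService call with the v3 SDK. All names and ARNs are placeholders, and the example is reviewer-facing illustration rather than part of this patch:

```ts
import { ECSClient, CreateServiceCommand } from "@aws-sdk/client-ecs";

const client = new ECSClient({ region: "us-east-1" });

async function createWebService() {
  return client.send(
    new CreateServiceCommand({
      cluster: "example-cluster",
      serviceName: "example-webapp",
      taskDefinition: "example-webapp:12",
      desiredCount: 4,
      launchType: "FARGATE",
      // REPLICA is the default scheduler strategy; DAEMON places one task per
      // container instance and ignores desiredCount.
      schedulingStrategy: "REPLICA",
      // Tasks are registered into this target group, so they only count as
      // healthy once the load balancer reports them healthy.
      loadBalancers: [
        {
          targetGroupArn:
            "arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/example/0123456789abcdef",
          containerName: "web",
          containerPort: 80,
        },
      ],
      networkConfiguration: {
        awsvpcConfiguration: {
          subnets: ["subnet-0123456789abcdef0"],
          securityGroups: ["sg-0123456789abcdef0"],
        },
      },
      // With desiredCount = 4: at least 2 tasks (50%) must stay RUNNING during
      // a deployment, and at most 8 tasks (200%) may run while new tasks start.
      deploymentConfiguration: {
        minimumHealthyPercent: 50,
        maximumPercent: 200,
      },
    })
  );
}

createWebService().then((res) => console.log(res.service?.serviceArn));
```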
", "smithy.api#examples": [ { "title": "To create a new service", @@ -3420,7 +3420,7 @@ } ], "traits": { - "smithy.api#documentation": "

Create a task set in the specified cluster and service. This is used when a service\n\t\t\tuses the EXTERNAL deployment controller type. For more information, see\n\t\t\t\tAmazon ECS deployment\n\t\t\t\ttypes in the Amazon Elastic Container Service Developer Guide.

\n \n

The following change began on March 21, 2024. When the task definition revision is not specified, Amazon ECS resolves the task definition revision before it authorizes the task definition.

\n
\n

For information about the maximum number of task sets and otther quotas, see Amazon ECS\n\t\t\tservice quotas in the Amazon Elastic Container Service Developer Guide.

" + "smithy.api#documentation": "

Create a task set in the specified cluster and service. This is used when a service\n\t\t\tuses the EXTERNAL deployment controller type. For more information, see\n\t\t\t\tAmazon ECS deployment\n\t\t\t\ttypes in the Amazon Elastic Container Service Developer Guide.

\n \n

On March 21, 2024, a change was made to resolve the task definition revision before authorization. When a task definition revision is not specified, authorization will occur using the latest revision of a task definition.

\n
\n

For information about the maximum number of task sets and other quotas, see Amazon ECS\n\t\t\tservice quotas in the Amazon Elastic Container Service Developer Guide.

" } }, "com.amazonaws.ecs#CreateTaskSetRequest": { @@ -9379,7 +9379,7 @@ } ], "traits": { - "smithy.api#documentation": "

Starts a new task using the specified task definition.

\n \n

The following change began on March 21, 2024. When the task definition revision is not specified, Amazon ECS resolves the task definition revision before it authorizes the task definition.

\n
\n

You can allow Amazon ECS to place tasks for you, or you can customize how Amazon ECS places\n\t\t\ttasks using placement constraints and placement strategies. For more information, see\n\t\t\t\tScheduling Tasks in the Amazon Elastic Container Service Developer Guide.

\n

Alternatively, you can use StartTask to use your own scheduler or\n\t\t\tplace tasks manually on specific container instances.

\n

Starting April 15, 2023, Amazon Web Services will not onboard new customers to Amazon Elastic Inference (EI), and will help current customers migrate their workloads to options that offer better price and performance. After April 15, 2023, new customers will not be able to launch instances with Amazon EI accelerators in Amazon SageMaker, Amazon ECS, or Amazon EC2. However, customers who have used Amazon EI at least once during the past 30-day period are considered current customers and will be able to continue using the service.

\n

You can attach Amazon EBS volumes to Amazon ECS tasks by configuring the volume when creating or\n\t\t\tupdating a service. For more infomation, see Amazon EBS volumes in the Amazon Elastic Container Service Developer Guide.

\n

The Amazon ECS API follows an eventual consistency model. This is because of the\n\t\t\tdistributed nature of the system supporting the API. This means that the result of an\n\t\t\tAPI command you run that affects your Amazon ECS resources might not be immediately visible\n\t\t\tto all subsequent commands you run. Keep this in mind when you carry out an API command\n\t\t\tthat immediately follows a previous API command.

\n

To manage eventual consistency, you can do the following:

\n ", + "smithy.api#documentation": "

Starts a new task using the specified task definition.

\n \n

On March 21, 2024, a change was made to resolve the task definition revision before authorization. When a task definition revision is not specified, authorization will occur using the latest revision of a task definition.

\n
\n

You can allow Amazon ECS to place tasks for you, or you can customize how Amazon ECS places\n\t\t\ttasks using placement constraints and placement strategies. For more information, see\n\t\t\t\tScheduling Tasks in the Amazon Elastic Container Service Developer Guide.

\n

Alternatively, you can use StartTask to use your own scheduler or\n\t\t\tplace tasks manually on specific container instances.

\n

Starting April 15, 2023, Amazon Web Services will not onboard new customers to Amazon Elastic Inference (EI), and will help current customers migrate their workloads to options that offer better price and performance. After April 15, 2023, new customers will not be able to launch instances with Amazon EI accelerators in Amazon SageMaker, Amazon ECS, or Amazon EC2. However, customers who have used Amazon EI at least once during the past 30-day period are considered current customers and will be able to continue using the service.

\n

You can attach Amazon EBS volumes to Amazon ECS tasks by configuring the volume when creating or\n\t\t\tupdating a service. For more information, see Amazon EBS volumes in the Amazon Elastic Container Service Developer Guide.

\n

The Amazon ECS API follows an eventual consistency model. This is because of the\n\t\t\tdistributed nature of the system supporting the API. This means that the result of an\n\t\t\tAPI command you run that affects your Amazon ECS resources might not be immediately visible\n\t\t\tto all subsequent commands you run. Keep this in mind when you carry out an API command\n\t\t\tthat immediately follows a previous API command.

\n

To manage eventual consistency, you can do the following:

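To make the placement and eventual-consistency guidance above concrete, here is a sketch that runs tasks on the EC2 launch type with a placement strategy and then waits with the SDK's built-in waiter instead of assuming the new tasks are immediately visible. The cluster, task definition, and strategy choices are hypothetical; this is illustration, not part of the diff:

```ts
import {
  ECSClient,
  RunTaskCommand,
  waitUntilTasksRunning,
} from "@aws-sdk/client-ecs";

const client = new ECSClient({ region: "us-east-1" });

async function runSpreadTasks() {
  const { tasks = [] } = await client.send(
    new RunTaskCommand({
      cluster: "example-cluster",
      taskDefinition: "example-batch", // family only; latest revision resolved
      count: 2,
      launchType: "EC2",
      // Spread across Availability Zones first, then binpack on memory.
      placementStrategy: [
        { type: "spread", field: "attribute:ecs.availability-zone" },
        { type: "binpack", field: "memory" },
      ],
      // Constraints further restrict the candidate container instances.
      placementConstraints: [{ type: "distinctInstance" }],
    })
  );

  // The ECS API is eventually consistent: poll with the waiter rather than
  // describing the tasks in the very next call and expecting them to exist.
  const taskArns = tasks.map((t) => t.taskArn!).filter(Boolean);
  await waitUntilTasksRunning(
    { client, maxWaitTime: 300 },
    { cluster: "example-cluster", tasks: taskArns }
  );
}

runSpreadTasks().catch(console.error);
```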
\n ", "smithy.api#examples": [ { "title": "To run a task on your default cluster", @@ -10493,7 +10493,7 @@ } ], "traits": { - "smithy.api#documentation": "

Starts a new task from the specified task definition on the specified container\n\t\t\tinstance or instances.

\n \n

The following change began on March 21, 2024. When the task definition revision is not specified, Amazon ECS resolves the task definition revision before it authorizes the task definition.

\n
\n

Starting April 15, 2023, Amazon Web Services will not onboard new customers to Amazon Elastic Inference (EI), and will help current customers migrate their workloads to options that offer better price and performance. After April 15, 2023, new customers will not be able to launch instances with Amazon EI accelerators in Amazon SageMaker, Amazon ECS, or Amazon EC2. However, customers who have used Amazon EI at least once during the past 30-day period are considered current customers and will be able to continue using the service.

\n

Alternatively, you can use RunTask to place tasks for you. For more\n\t\t\tinformation, see Scheduling Tasks in the Amazon Elastic Container Service Developer Guide.

\n

You can attach Amazon EBS volumes to Amazon ECS tasks by configuring the volume when creating or\n\t\t\tupdating a service. For more infomation, see Amazon EBS volumes in the Amazon Elastic Container Service Developer Guide.

" + "smithy.api#documentation": "

Starts a new task from the specified task definition on the specified container\n\t\t\tinstance or instances.

\n \n

On March 21, 2024, a change was made to resolve the task definition revision before authorization. When a task definition revision is not specified, authorization will occur using the latest revision of a task definition.

\n
\n

Starting April 15, 2023, Amazon Web Services will not onboard new customers to Amazon Elastic Inference (EI), and will help current customers migrate their workloads to options that offer better price and performance. After April 15, 2023, new customers will not be able to launch instances with Amazon EI accelerators in Amazon SageMaker, Amazon ECS, or Amazon EC2. However, customers who have used Amazon EI at least once during the past 30-day period are considered current customers and will be able to continue using the service.

\n

Alternatively, you can use RunTask to place tasks for you. For more\n\t\t\tinformation, see Scheduling Tasks in the Amazon Elastic Container Service Developer Guide.

\n

You can attach Amazon EBS volumes to Amazon ECS tasks by configuring the volume when creating or\n\t\t\tupdating a service. For more information, see Amazon EBS volumes in the Amazon Elastic Container Service Developer Guide.

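As a companion to the StartTask documentation above, here is a sketch of placing a task yourself on specific container instances (the bring-your-own-scheduler scenario). The cluster, task definition, and container instance ARN are placeholders; the snippet is not part of the patch:

```ts
import { ECSClient, StartTaskCommand } from "@aws-sdk/client-ecs";

const client = new ECSClient({ region: "us-east-1" });

async function startOnChosenInstances() {
  // Unlike RunTask, StartTask skips the ECS placement logic: the task is
  // started on exactly the container instances listed here (up to 10).
  const response = await client.send(
    new StartTaskCommand({
      cluster: "example-cluster",
      taskDefinition: "example-agent:7",
      containerInstances: [
        "arn:aws:ecs:us-east-1:111122223333:container-instance/example-cluster/0123456789abcdef0",
      ],
    })
  );
  console.log(response.tasks?.map((t) => t.lastStatus));
}

startOnChosenInstances().catch(console.error);
```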
" } }, "com.amazonaws.ecs#StartTaskRequest": { @@ -12781,7 +12781,7 @@ } ], "traits": { - "smithy.api#documentation": "

Modifies the parameters of a service.

\n \n

The following change began on March 21, 2024. When the task definition revision is not specified, Amazon ECS resolves the task definition revision before it authorizes the task definition.

\n
\n

For services using the rolling update (ECS) you can update the desired\n\t\t\tcount, deployment configuration, network configuration, load balancers, service\n\t\t\tregistries, enable ECS managed tags option, propagate tags option, task placement\n\t\t\tconstraints and strategies, and task definition. When you update any of these\n\t\t\tparameters, Amazon ECS starts new tasks with the new configuration.

\n

You can attach Amazon EBS volumes to Amazon ECS tasks by configuring the volume when starting or\n\t\t\trunning a task, or when creating or updating a service. For more infomation, see Amazon EBS volumes in the Amazon Elastic Container Service Developer Guide. You can update\n\t\t\tyour volume configurations and trigger a new deployment.\n\t\t\t\tvolumeConfigurations is only supported for REPLICA service and not\n\t\t\tDAEMON service. If you leave volumeConfigurations\n null, it doesn't trigger a new deployment. For more infomation on volumes,\n\t\t\tsee Amazon EBS volumes in the Amazon Elastic Container Service Developer Guide.

\n

For services using the blue/green (CODE_DEPLOY) deployment controller,\n\t\t\tonly the desired count, deployment configuration, health check grace period, task\n\t\t\tplacement constraints and strategies, enable ECS managed tags option, and propagate tags\n\t\t\tcan be updated using this API. If the network configuration, platform version, task\n\t\t\tdefinition, or load balancer need to be updated, create a new CodeDeploy deployment. For more\n\t\t\tinformation, see CreateDeployment in the CodeDeploy API Reference.

\n

For services using an external deployment controller, you can update only the desired\n\t\t\tcount, task placement constraints and strategies, health check grace period, enable ECS\n\t\t\tmanaged tags option, and propagate tags option, using this API. If the launch type, load\n\t\t\tbalancer, network configuration, platform version, or task definition need to be\n\t\t\tupdated, create a new task set For more information, see CreateTaskSet.

\n

You can add to or subtract from the number of instantiations of a task definition in a\n\t\t\tservice by specifying the cluster that the service is running in and a new\n\t\t\t\tdesiredCount parameter.

\n

You can attach Amazon EBS volumes to Amazon ECS tasks by configuring the volume when starting or\n\t\t\trunning a task, or when creating or updating a service. For more infomation, see Amazon EBS volumes in the Amazon Elastic Container Service Developer Guide.

\n

If you have updated the container image of your application, you can create a new task\n\t\t\tdefinition with that image and deploy it to your service. The service scheduler uses the\n\t\t\tminimum healthy percent and maximum percent parameters (in the service's deployment\n\t\t\tconfiguration) to determine the deployment strategy.

\n \n

If your updated Docker image uses the same tag as what is in the existing task\n\t\t\t\tdefinition for your service (for example, my_image:latest), you don't\n\t\t\t\tneed to create a new revision of your task definition. You can update the service\n\t\t\t\tusing the forceNewDeployment option. The new tasks launched by the\n\t\t\t\tdeployment pull the current image/tag combination from your repository when they\n\t\t\t\tstart.

\n
\n

You can also update the deployment configuration of a service. When a deployment is\n\t\t\ttriggered by updating the task definition of a service, the service scheduler uses the\n\t\t\tdeployment configuration parameters, minimumHealthyPercent and\n\t\t\t\tmaximumPercent, to determine the deployment strategy.

\n \n

When UpdateService stops a task during a deployment, the equivalent\n\t\t\tof docker stop is issued to the containers running in the task. This\n\t\t\tresults in a SIGTERM and a 30-second timeout. After this,\n\t\t\t\tSIGKILL is sent and the containers are forcibly stopped. If the\n\t\t\tcontainer handles the SIGTERM gracefully and exits within 30 seconds from\n\t\t\treceiving it, no SIGKILL is sent.

\n

When the service scheduler launches new tasks, it determines task placement in your\n\t\t\tcluster with the following logic.

\n \n

When the service scheduler stops running tasks, it attempts to maintain balance across\n\t\t\tthe Availability Zones in your cluster using the following logic:

\n \n \n

You must have a service-linked role when you update any of the following service\n\t\t\t\tproperties:

\n \n

For more information about the role see the CreateService request\n\t\t\t\tparameter \n role\n .

\n
", + "smithy.api#documentation": "

Modifies the parameters of a service.

\n \n

On March 21, 2024, a change was made to resolve the task definition revision before authorization. When a task definition revision is not specified, authorization will occur using the latest revision of a task definition.

\n
\n

For services using the rolling update (ECS), you can update the desired\n\t\t\tcount, deployment configuration, network configuration, load balancers, service\n\t\t\tregistries, enable ECS managed tags option, propagate tags option, task placement\n\t\t\tconstraints and strategies, and task definition. When you update any of these\n\t\t\tparameters, Amazon ECS starts new tasks with the new configuration.

\n

You can attach Amazon EBS volumes to Amazon ECS tasks by configuring the volume when starting or\n\t\t\trunning a task, or when creating or updating a service. For more information, see Amazon EBS volumes in the Amazon Elastic Container Service Developer Guide. You can update\n\t\t\tyour volume configurations and trigger a new deployment.\n\t\t\t\tvolumeConfigurations is only supported for REPLICA service and not\n\t\t\tDAEMON service. If you leave volumeConfigurations\n null, it doesn't trigger a new deployment. For more information on volumes,\n\t\t\tsee Amazon EBS volumes in the Amazon Elastic Container Service Developer Guide.

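The volumeConfigurations paragraph above is easier to follow with the shape of the parameter in view, so here is a sketch of updating a REPLICA service with a managed EBS volume. The volume name must match a volume declared in the task definition; the role ARN, size, and names are placeholders, and the field shapes reflect my reading of the current API rather than anything this patch defines:

```ts
import { ECSClient, UpdateServiceCommand } from "@aws-sdk/client-ecs";

const client = new ECSClient({ region: "us-east-1" });

async function updateServiceVolumes() {
  return client.send(
    new UpdateServiceCommand({
      cluster: "example-cluster",
      service: "example-webapp",
      // Supplying volumeConfigurations triggers a new deployment; omitting it
      // (leaving it null) does not trigger one.
      volumeConfigurations: [
        {
          // Must match a volume name configured in the task definition.
          name: "data",
          managedEBSVolume: {
            roleArn: "arn:aws:iam::111122223333:role/ecsInfrastructureRole",
            sizeInGiB: 50,
            volumeType: "gp3",
            filesystemType: "ext4",
          },
        },
      ],
    })
  );
}

updateServiceVolumes().then((res) => console.log(res.service?.status));
```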
\n

For services using the blue/green (CODE_DEPLOY) deployment controller,\n\t\t\tonly the desired count, deployment configuration, health check grace period, task\n\t\t\tplacement constraints and strategies, enable ECS managed tags option, and propagate tags\n\t\t\tcan be updated using this API. If the network configuration, platform version, task\n\t\t\tdefinition, or load balancer need to be updated, create a new CodeDeploy deployment. For more\n\t\t\tinformation, see CreateDeployment in the CodeDeploy API Reference.

\n

For services using an external deployment controller, you can update only the desired\n\t\t\tcount, task placement constraints and strategies, health check grace period, enable ECS\n\t\t\tmanaged tags option, and propagate tags option, using this API. If the launch type, load\n\t\t\tbalancer, network configuration, platform version, or task definition need to be\n\t\t\tupdated, create a new task set. For more information, see CreateTaskSet.

\n

You can add to or subtract from the number of instantiations of a task definition in a\n\t\t\tservice by specifying the cluster that the service is running in and a new\n\t\t\t\tdesiredCount parameter.

\n

You can attach Amazon EBS volumes to Amazon ECS tasks by configuring the volume when starting or\n\t\t\trunning a task, or when creating or updating a service. For more information, see Amazon EBS volumes in the Amazon Elastic Container Service Developer Guide.

\n

If you have updated the container image of your application, you can create a new task\n\t\t\tdefinition with that image and deploy it to your service. The service scheduler uses the\n\t\t\tminimum healthy percent and maximum percent parameters (in the service's deployment\n\t\t\tconfiguration) to determine the deployment strategy.

\n \n

If your updated Docker image uses the same tag as what is in the existing task\n\t\t\t\tdefinition for your service (for example, my_image:latest), you don't\n\t\t\t\tneed to create a new revision of your task definition. You can update the service\n\t\t\t\tusing the forceNewDeployment option. The new tasks launched by the\n\t\t\t\tdeployment pull the current image/tag combination from your repository when they\n\t\t\t\tstart.

\n
\n

You can also update the deployment configuration of a service. When a deployment is\n\t\t\ttriggered by updating the task definition of a service, the service scheduler uses the\n\t\t\tdeployment configuration parameters, minimumHealthyPercent and\n\t\t\t\tmaximumPercent, to determine the deployment strategy.

\n \n

When UpdateService stops a task during a deployment, the equivalent\n\t\t\tof docker stop is issued to the containers running in the task. This\n\t\t\tresults in a SIGTERM and a 30-second timeout. After this,\n\t\t\t\tSIGKILL is sent and the containers are forcibly stopped. If the\n\t\t\tcontainer handles the SIGTERM gracefully and exits within 30 seconds from\n\t\t\treceiving it, no SIGKILL is sent.

\n

When the service scheduler launches new tasks, it determines task placement in your\n\t\t\tcluster with the following logic.

\n \n

When the service scheduler stops running tasks, it attempts to maintain balance across\n\t\t\tthe Availability Zones in your cluster using the following logic:

\n \n \n

You must have a service-linked role when you update any of the following service\n\t\t\t\tproperties:

\n \n

For more information about the role, see the CreateService request\n\t\t\t\tparameter \n role\n .

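Finally, a sketch of the most common UpdateService flows the text above describes: scaling the desired count, rolling out a new task definition revision, and forcing a new deployment when the image tag is unchanged. All names and revision numbers are placeholders, and the snippet is illustrative rather than part of the patch:

```ts
import { ECSClient, UpdateServiceCommand } from "@aws-sdk/client-ecs";

const client = new ECSClient({ region: "us-east-1" });

async function scaleOut() {
  // Scale the service by changing only desiredCount.
  await client.send(
    new UpdateServiceCommand({
      cluster: "example-cluster",
      service: "example-webapp",
      desiredCount: 6,
    })
  );
}

async function rollOutNewRevision() {
  // Point the service at a new task definition revision; the scheduler paces
  // the rollout with minimumHealthyPercent / maximumPercent.
  await client.send(
    new UpdateServiceCommand({
      cluster: "example-cluster",
      service: "example-webapp",
      taskDefinition: "example-webapp:13",
      deploymentConfiguration: { minimumHealthyPercent: 100, maximumPercent: 200 },
    })
  );
}

async function redeploySameTag() {
  // Same image tag (for example ":latest") in the same task definition:
  // forceNewDeployment makes new tasks pull the current image again.
  await client.send(
    new UpdateServiceCommand({
      cluster: "example-cluster",
      service: "example-webapp",
      forceNewDeployment: true,
    })
  );
}

// Chained here only so the sketch executes end to end; in practice you would
// run whichever update applies.
scaleOut().then(rollOutNewRevision).then(redeploySameTag).catch(console.error);
```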
\n
", "smithy.api#examples": [ { "title": "To change the task definition used in a service",