Commit 1ec3fb8

docs(client-ecs): This is a documentation-only release that updates the documentation to let customers know that Amazon Elastic Inference is no longer available.
awstools committed Oct 10, 2024
1 parent 9695325 commit 1ec3fb8
Showing 5 changed files with 218 additions and 14 deletions.
4 changes: 3 additions & 1 deletion clients/client-ecs/src/commands/CreateServiceCommand.ts
@@ -35,6 +35,9 @@ export interface CreateServiceCommandOutput extends CreateServiceResponse, __MetadataBearer
* <note>
* <p>On March 21, 2024, a change was made to resolve the task definition revision before authorization. When a task definition revision is not specified, authorization will occur using the latest revision of a task definition.</p>
* </note>
* <note>
* <p>Amazon Elastic Inference (EI) is no longer available to customers.</p>
* </note>
* <p>In addition to maintaining the desired count of tasks in your service, you can
* optionally run your service behind one or more load balancers. The load balancers
* distribute traffic across the tasks that are associated with the service. For more
@@ -112,7 +115,6 @@ export interface CreateServiceCommandOutput extends CreateServiceResponse, __MetadataBearer
* information about task placement and task placement strategies, see <a href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-placement.html">Amazon ECS
* task placement</a> in the <i>Amazon Elastic Container Service Developer Guide</i>
* </p>
* <p>Starting April 15, 2023, Amazon Web Services will not onboard new customers to Amazon Elastic Inference (EI), and will help current customers migrate their workloads to options that offer better price and performance. After April 15, 2023, new customers will not be able to launch instances with Amazon EI accelerators in Amazon SageMaker, Amazon ECS, or Amazon EC2. However, customers who have used Amazon EI at least once during the past 30-day period are considered current customers and will be able to continue using the service. </p>
* @example
* Use a bare-bones client and the command you need to make an API call.
* ```javascript
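The usage example in this JSDoc is cut off by the diff view above, so it is not reproduced here. As a separate, minimal sketch of the bare-bones client pattern it refers to — creating a service behind a load balancer — the snippet below uses only hypothetical names: the cluster, service, task definition, target group ARN, and subnet are all placeholders.

```ts
import { ECSClient, CreateServiceCommand } from "@aws-sdk/client-ecs";

const client = new ECSClient({ region: "us-east-1" });

async function createWebService(): Promise<void> {
  const response = await client.send(
    new CreateServiceCommand({
      cluster: "my-cluster",      // hypothetical cluster name
      serviceName: "web-service", // hypothetical service name
      taskDefinition: "web-task", // no revision given, so the latest revision is resolved
      desiredCount: 2,
      launchType: "FARGATE",
      loadBalancers: [
        {
          // Hypothetical target group; traffic is distributed across the service's tasks.
          targetGroupArn:
            "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/web/0123456789abcdef",
          containerName: "web",
          containerPort: 80,
        },
      ],
      networkConfiguration: {
        awsvpcConfiguration: {
          subnets: ["subnet-0123456789abcdef0"], // hypothetical subnet
          assignPublicIp: "ENABLED",
        },
      },
    })
  );
  console.log(response.service?.serviceArn);
}

createWebService().catch(console.error);
```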
4 changes: 3 additions & 1 deletion clients/client-ecs/src/commands/RunTaskCommand.ts
@@ -32,12 +32,14 @@ export interface RunTaskCommandOutput extends RunTaskResponse, __MetadataBearer
* <note>
* <p>On March 21, 2024, a change was made to resolve the task definition revision before authorization. When a task definition revision is not specified, authorization will occur using the latest revision of a task definition.</p>
* </note>
* <note>
* <p>Amazon Elastic Inference (EI) is no longer available to customers.</p>
* </note>
* <p>You can allow Amazon ECS to place tasks for you, or you can customize how Amazon ECS places
* tasks using placement constraints and placement strategies. For more information, see
* <a href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/scheduling_tasks.html">Scheduling Tasks</a> in the <i>Amazon Elastic Container Service Developer Guide</i>.</p>
* <p>Alternatively, you can use <code>StartTask</code> to use your own scheduler or
* place tasks manually on specific container instances.</p>
* <p>Starting April 15, 2023, Amazon Web Services will not onboard new customers to Amazon Elastic Inference (EI), and will help current customers migrate their workloads to options that offer better price and performance. After April 15, 2023, new customers will not be able to launch instances with Amazon EI accelerators in Amazon SageMaker, Amazon ECS, or Amazon EC2. However, customers who have used Amazon EI at least once during the past 30-day period are considered current customers and will be able to continue using the service. </p>
* <p>You can attach Amazon EBS volumes to Amazon ECS tasks by configuring the volume when creating or
* updating a service. For more information, see <a href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ebs-volumes.html#ebs-volume-types">Amazon EBS volumes</a> in the <i>Amazon Elastic Container Service Developer Guide</i>.</p>
* <p>The Amazon ECS API follows an eventual consistency model. This is because of the
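As a minimal sketch of letting Amazon ECS place tasks with placement strategies, per the doc text above: the cluster and task definition names below are hypothetical, and the strategy spreads tasks across Availability Zones before binpacking on memory.

```ts
import { ECSClient, RunTaskCommand } from "@aws-sdk/client-ecs";

const client = new ECSClient({ region: "us-east-1" });

async function runBatchTasks(): Promise<void> {
  const { tasks, failures } = await client.send(
    new RunTaskCommand({
      cluster: "my-cluster",        // hypothetical cluster
      taskDefinition: "batch-task", // latest revision is resolved when none is specified
      count: 3,
      launchType: "EC2",
      // Let Amazon ECS place the tasks: spread across Availability Zones,
      // then binpack on memory within each zone.
      placementStrategy: [
        { type: "spread", field: "attribute:ecs.availability-zone" },
        { type: "binpack", field: "memory" },
      ],
    })
  );
  console.log(tasks?.length, failures?.length);
}

runBatchTasks().catch(console.error);
```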
4 changes: 3 additions & 1 deletion clients/client-ecs/src/commands/StartTaskCommand.ts
@@ -33,7 +33,9 @@ export interface StartTaskCommandOutput extends StartTaskResponse, __MetadataBearer
* <note>
* <p>On March 21, 2024, a change was made to resolve the task definition revision before authorization. When a task definition revision is not specified, authorization will occur using the latest revision of a task definition.</p>
* </note>
* <p>Starting April 15, 2023, Amazon Web Services will not onboard new customers to Amazon Elastic Inference (EI), and will help current customers migrate their workloads to options that offer better price and performance. After April 15, 2023, new customers will not be able to launch instances with Amazon EI accelerators in Amazon SageMaker, Amazon ECS, or Amazon EC2. However, customers who have used Amazon EI at least once during the past 30-day period are considered current customers and will be able to continue using the service. </p>
* <note>
* <p>Amazon Elastic Inference (EI) is no longer available to customers.</p>
* </note>
* <p>Alternatively, you can use <code>RunTask</code> to place tasks for you. For more
* information, see <a href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/scheduling_tasks.html">Scheduling Tasks</a> in the <i>Amazon Elastic Container Service Developer Guide</i>.</p>
* <p>You can attach Amazon EBS volumes to Amazon ECS tasks by configuring the volume when creating or
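A minimal sketch of the `StartTask` pattern described above, where your own scheduler picks the container instance instead of Amazon ECS; the cluster, task definition, and container instance ARN are hypothetical.

```ts
import { ECSClient, StartTaskCommand } from "@aws-sdk/client-ecs";

const client = new ECSClient({ region: "us-east-1" });

async function startOnChosenInstance(): Promise<void> {
  const { tasks, failures } = await client.send(
    new StartTaskCommand({
      cluster: "my-cluster",           // hypothetical cluster
      taskDefinition: "worker-task:7", // hypothetical family:revision
      // Your own scheduler selects the container instance(s); StartTask
      // places the task exactly there instead of letting Amazon ECS decide.
      containerInstances: [
        "arn:aws:ecs:us-east-1:123456789012:container-instance/my-cluster/0123456789abcdef0",
      ],
    })
  );
  console.log(tasks?.[0]?.taskArn, failures);
}

startOnChosenInstance().catch(console.error);
```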
210 changes: 204 additions & 6 deletions clients/client-ecs/src/models/models_0.ts
@@ -716,11 +716,13 @@ export interface ClusterConfiguration {
* <code>FARGATE_SPOT</code> capacity providers. The Fargate capacity providers are
* available to all accounts and only need to be associated with a cluster to be used in a
* capacity provider strategy.</p>
* <p>With <code>FARGATE_SPOT</code>, you can run interruption tolerant tasks at a rate
* that's discounted compared to the <code>FARGATE</code> price. <code>FARGATE_SPOT</code>
* runs tasks on spare compute capacity. When Amazon Web Services needs the capacity back, your tasks are
* interrupted with a two-minute warning. <code>FARGATE_SPOT</code> only supports Linux
* tasks with the X86_64 architecture on platform version 1.3.0 or later.</p>
* <p>With <code>FARGATE_SPOT</code>, you can run interruption tolerant tasks at a rate that's
* discounted compared to the <code>FARGATE</code> price. <code>FARGATE_SPOT</code> runs
* tasks on spare compute capacity. When Amazon Web Services needs the capacity back, your tasks are
* interrupted with a two-minute warning. <code>FARGATE_SPOT</code> supports Linux tasks
* with the X86_64 architecture on platform version 1.3.0 or later.
* <code>FARGATE_SPOT</code> supports Linux tasks with the ARM64 architecture on
* platform version 1.4.0 or later.</p>
* <p>A capacity provider strategy may contain a maximum of 6 capacity providers.</p>
* @public
*/
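As a hedged sketch of the capacity provider strategy described in the doc comment above: the snippet runs interruption-tolerant tasks mostly on `FARGATE_SPOT` while keeping a guaranteed baseline on `FARGATE` (`base` is the minimum number of tasks on a provider; `weight` sets the relative ratio beyond the base). The cluster and task definition names are hypothetical.

```ts
import { ECSClient, RunTaskCommand } from "@aws-sdk/client-ecs";

const client = new ECSClient({ region: "us-east-1" });

async function runOnSpot(): Promise<void> {
  await client.send(
    new RunTaskCommand({
      cluster: "my-cluster",                        // hypothetical cluster
      taskDefinition: "interruption-tolerant-task", // hypothetical task definition
      count: 4,
      // launchType must be omitted when a capacity provider strategy is used.
      capacityProviderStrategy: [
        { capacityProvider: "FARGATE", base: 1, weight: 1 }, // guaranteed baseline
        { capacityProvider: "FARGATE_SPOT", weight: 3 },     // discounted spare capacity
      ],
    })
  );
}

runOnSpot().catch(console.error);
```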
@@ -1964,7 +1966,203 @@ export interface LogConfiguration {
logDriver: LogDriver | undefined;

/**
* <p>The configuration options to send to the log driver. This parameter requires version 1.19 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: <code>sudo docker version --format '\{\{.Server.APIVersion\}\}'</code>
* <p>The configuration options to send to the log driver.</p>
* <p>The options you can specify depend on the log driver. Some
* of the options you can specify when you use the <code>awslogs</code> log driver to route logs to
* Amazon CloudWatch include the following:</p>
* <dl>
* <dt>awslogs-create-group</dt>
* <dd>
* <p>Required: No</p>
* <p>Specify whether you want the log group to be
* created automatically. If this option isn't
* specified, it defaults to
* <code>false</code>.</p>
* <note>
* <p>Your IAM policy must include the
* <code>logs:CreateLogGroup</code> permission before
* you attempt to use
* <code>awslogs-create-group</code>.</p>
* </note>
* </dd>
* <dt>awslogs-region</dt>
* <dd>
* <p>Required: Yes</p>
* <p>Specify the Amazon Web Services Region that the
* <code>awslogs</code> log driver is to send your
* Docker logs to. You can choose to send all of your
* logs from clusters in different Regions to a
* single Region in CloudWatch Logs so that they're
* all visible in one location. Otherwise, you can
* separate them by Region for more granularity. Make
* sure that the specified log group exists in the
* Region that you specify with this option.</p>
* </dd>
* <dt>awslogs-group</dt>
* <dd>
* <p>Required: Yes</p>
* <p>Make sure to specify a log group that the
* <code>awslogs</code> log driver sends its log
* streams to.</p>
* </dd>
* <dt>awslogs-stream-prefix</dt>
* <dd>
* <p>Required: Yes, when using the Fargate launch
* type. Optional for the EC2 launch
* type.</p>
* <p>Use the <code>awslogs-stream-prefix</code>
* option to associate a log stream with the
* specified prefix, the container name, and the ID
* of the Amazon ECS task that the container belongs to.
* If you specify a prefix with this option, then the
* log stream takes the format <code>prefix-name/container-name/ecs-task-id</code>.</p>
* <p>If you don't specify a prefix
* with this option, then the log stream is named
* after the container ID that's assigned by the
* Docker daemon on the container instance. Because
* it's difficult to trace logs back to the container
* that sent them with just the Docker container ID
* (which is only available on the container
* instance), we recommend that you specify a prefix
* with this option.</p>
* <p>For Amazon ECS services, you can use the service
* name as the prefix. Doing so allows you to trace log
* streams back to the service that the container belongs
* to, the name of the container that sent them, and
* the ID of the task that the container belongs
* to.</p>
* <p>You must specify a
* stream-prefix for your logs to appear
* in the Log pane when using the Amazon ECS
* console.</p>
* </dd>
* <dt>awslogs-datetime-format</dt>
* <dd>
* <p>Required: No</p>
* <p>This option defines a multiline start pattern
* in Python <code>strftime</code> format. A log
* message consists of a line that matches the
* pattern and any following lines that don’t match
* the pattern. The matched line is the delimiter
* between log messages.</p>
* <p>One example of a use case for using this
* format is for parsing output such as a stack dump,
* which might otherwise be logged in multiple
* entries. The correct pattern allows it to be
* captured in a single entry.</p>
* <p>For more information, see <a href="https://docs.docker.com/config/containers/logging/awslogs/#awslogs-datetime-format">awslogs-datetime-format</a>.</p>
* <p>You cannot configure both the
* <code>awslogs-datetime-format</code> and
* <code>awslogs-multiline-pattern</code>
* options.</p>
* <note>
* <p>Multiline logging performs regular
* expression parsing and matching of all log
* messages. This might have a negative impact on
* logging performance.</p>
* </note>
* </dd>
* <dt>awslogs-multiline-pattern</dt>
* <dd>
* <p>Required: No</p>
* <p>This option defines a multiline start pattern
* that uses a regular expression. A log message
* consists of a line that matches the pattern and
* any following lines that don’t match the pattern.
* The matched line is the delimiter between log
* messages.</p>
* <p>For more information, see <a href="https://docs.docker.com/config/containers/logging/awslogs/#awslogs-multiline-pattern">awslogs-multiline-pattern</a>.</p>
* <p>This option is ignored if
* <code>awslogs-datetime-format</code> is also
* configured.</p>
* <p>You cannot configure both the
* <code>awslogs-datetime-format</code> and
* <code>awslogs-multiline-pattern</code>
* options.</p>
* <note>
* <p>Multiline logging performs regular
* expression parsing and matching of all log
* messages. This might have a negative impact on
* logging performance.</p>
* </note>
* </dd>
* <dt>mode</dt>
* <dd>
* <p>Required: No</p>
* <p>Valid values: <code>non-blocking</code> |
* <code>blocking</code>
* </p>
* <p>This option defines the delivery mode of log
* messages from the container to CloudWatch Logs. The delivery
* mode you choose affects application availability
* when the flow of logs from container to CloudWatch is
* interrupted.</p>
* <p>If you use the <code>blocking</code>
* mode and the flow of logs to CloudWatch is interrupted,
* calls from container code to write to the
* <code>stdout</code> and <code>stderr</code>
* streams will block. The logging thread of the
* application will block as a result. This may cause
* the application to become unresponsive and lead to
* container health check failure.</p>
* <p>If you use the <code>non-blocking</code> mode,
* the container's logs are instead stored in an
* in-memory intermediate buffer configured with the
* <code>max-buffer-size</code> option. This prevents
* the application from becoming unresponsive when
* logs cannot be sent to CloudWatch. We recommend using this mode if you want to
* ensure service availability and are okay with some
* log loss. For more information, see <a href="http://aws.amazon.com/blogs/containers/preventing-log-loss-with-non-blocking-mode-in-the-awslogs-container-log-driver/">Preventing log loss with non-blocking mode in the <code>awslogs</code> container log driver</a>.</p>
* </dd>
* <dt>max-buffer-size</dt>
* <dd>
* <p>Required: No</p>
* <p>Default value: <code>1m</code>
* </p>
* <p>When <code>non-blocking</code> mode is used,
* the <code>max-buffer-size</code> log option
* controls the size of the buffer that's used for
* intermediate message storage. Make sure to specify
* an adequate buffer size based on your application.
* When the buffer fills up, further logs cannot be
* stored. Logs that cannot be stored are lost.
* </p>
* </dd>
* </dl>
* <p>To route logs using the <code>splunk</code> log router, you need to specify a
* <code>splunk-token</code> and a
* <code>splunk-url</code>.</p>
* <p>When you use the <code>awsfirelens</code> log router to route logs to an Amazon Web Services service or
* Amazon Web Services Partner Network destination for log storage and analytics, you can
* set the <code>log-driver-buffer-limit</code> option to limit
* the number of events that are buffered in memory before
* being sent to the log router container. This can help to
* resolve potential log loss issues, because high throughput
* might otherwise exhaust the memory available for the buffer
* inside Docker.</p>
* <p>Other options you can specify when using <code>awsfirelens</code> to route
* logs depend on the destination. When you export logs to
* Amazon Data Firehose, you can specify the Amazon Web Services Region with
* <code>region</code> and a name for the log stream with
* <code>delivery_stream</code>.</p>
* <p>When you export logs to
* Amazon Kinesis Data Streams, you can specify an Amazon Web Services Region with
* <code>region</code> and a data stream name with
* <code>stream</code>.</p>
* <p> When you export logs to Amazon OpenSearch Service,
* you can specify options like <code>Name</code>,
* <code>Host</code> (OpenSearch Service endpoint without protocol), <code>Port</code>,
* <code>Index</code>, <code>Type</code>,
* <code>Aws_auth</code>, <code>Aws_region</code>, <code>Suppress_Type_Name</code>, and
* <code>tls</code>.</p>
* <p>When you export logs to Amazon S3, you can
* specify the bucket using the <code>bucket</code> option. You can also specify <code>region</code>,
* <code>total_file_size</code>, <code>upload_timeout</code>,
* and <code>use_put_object</code> as options.</p>
* <p>This parameter requires version 1.19 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: <code>sudo docker version --format '\{\{.Server.APIVersion\}\}'</code>
* </p>
* @public
*/
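To make the `awslogs` options in the doc comment above concrete, here is a minimal, hypothetical `RegisterTaskDefinitionCommand` sketch that wires them into a container definition; the family, log group, image, and execution role names are all placeholders.

```ts
import { ECSClient, RegisterTaskDefinitionCommand } from "@aws-sdk/client-ecs";

const client = new ECSClient({ region: "us-east-1" });

async function registerWithAwslogs(): Promise<void> {
  await client.send(
    new RegisterTaskDefinitionCommand({
      family: "web-task", // hypothetical family
      requiresCompatibilities: ["FARGATE"],
      networkMode: "awsvpc",
      cpu: "256",
      memory: "512",
      // Hypothetical execution role; it must allow writing to CloudWatch Logs,
      // plus logs:CreateLogGroup when awslogs-create-group is "true".
      executionRoleArn: "arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
      containerDefinitions: [
        {
          name: "web",
          image: "public.ecr.aws/nginx/nginx:latest",
          logConfiguration: {
            logDriver: "awslogs",
            options: {
              "awslogs-group": "/ecs/web-task", // log group the streams are sent to
              "awslogs-region": "us-east-1",    // required: destination Region
              "awslogs-stream-prefix": "web",   // required on Fargate; stream is prefix/container-name/task-id
              "awslogs-create-group": "true",   // create the group if it is missing
              mode: "non-blocking",             // favor availability over guaranteed delivery
              "max-buffer-size": "4m",          // in-memory buffer used in non-blocking mode
            },
          },
        },
      ],
    })
  );
}

registerWithAwslogs().catch(console.error);
```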
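And a similar hypothetical sketch for the `awsfirelens` path described above, exporting to Amazon Data Firehose with `region` and `delivery_stream` plus a `log-driver-buffer-limit`; the delivery stream name and images are placeholders, and a Fluent Bit log-router sidecar is assumed.

```ts
import { ECSClient, RegisterTaskDefinitionCommand } from "@aws-sdk/client-ecs";

const client = new ECSClient({ region: "us-east-1" });

async function registerWithFirelens(): Promise<void> {
  await client.send(
    new RegisterTaskDefinitionCommand({
      family: "firelens-task", // hypothetical family
      containerDefinitions: [
        {
          name: "app",
          image: "public.ecr.aws/docker/library/busybox:latest",
          logConfiguration: {
            logDriver: "awsfirelens",
            options: {
              Name: "firehose",                     // FireLens output destination
              region: "us-east-1",                  // destination Region
              delivery_stream: "my-log-stream",     // hypothetical Firehose delivery stream
              "log-driver-buffer-limit": "2097152", // bound in-memory buffering before the log router
            },
          },
        },
        {
          // Log-router sidecar that FireLens manages for the task.
          name: "log_router",
          image: "public.ecr.aws/aws-observability/aws-for-fluent-bit:stable",
          essential: true,
          firelensConfiguration: { type: "fluentbit" },
        },
      ],
    })
  );
}

registerWithFirelens().catch(console.error);
```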
