diff --git a/clients/client-cloudwatch-logs/src/CloudWatchLogs.ts b/clients/client-cloudwatch-logs/src/CloudWatchLogs.ts index 7c83b750b89c..34d77e4a57d3 100644 --- a/clients/client-cloudwatch-logs/src/CloudWatchLogs.ts +++ b/clients/client-cloudwatch-logs/src/CloudWatchLogs.ts @@ -360,7 +360,6 @@ export class CloudWatchLogs extends CloudWatchLogsClient { *
You can export logs from multiple log groups or multiple time ranges to the same S3 * bucket. To separate log data for each export task, specify a prefix to be used as the Amazon * S3 key prefix for all exported objects.
- * *Time-based sorting on chunks of log data inside an exported file is not guaranteed. You can * sort the exported log field data by using Linux utilities.
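For reviewers, a minimal sketch of the prefix guidance above as seen from the v3 client; the region, log group, and bucket names are placeholders, not part of this change:

```ts
import {
  CloudWatchLogsClient,
  CreateExportTaskCommand,
} from "@aws-sdk/client-cloudwatch-logs";

const client = new CloudWatchLogsClient({ region: "us-east-1" });

// Export the last 24 hours of a log group, with a task-specific S3 key
// prefix so objects from other export tasks in the same bucket stay separate.
const { taskId } = await client.send(
  new CreateExportTaskCommand({
    logGroupName: "/my-app/production",     // placeholder log group
    from: Date.now() - 24 * 60 * 60 * 1000, // milliseconds since epoch
    to: Date.now(),
    destination: "my-log-archive-bucket",   // placeholder S3 bucket
    destinationPrefix: "exports/my-app",    // used as the S3 key prefix
  })
);
console.log("started export task", taskId);
```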
@@ -1156,7 +1155,6 @@ export class CloudWatchLogs extends CloudWatchLogsClient { *Lists log events from the specified log group. You can list all the log events or filter the results * using a filter pattern, a time range, and the name of the log stream.
*You must have the logs:FilterLogEvents
permission to perform this operation.
By default, this operation returns as many log events as can fit in 1 MB (up to 10,000 * log events) or all the events found within the specified time range. If the results include a * token, that means there are more log events available. You can get additional results by @@ -1232,11 +1230,9 @@ export class CloudWatchLogs extends CloudWatchLogsClient { /** *
Lists log events from the specified log stream. You can list all of the log events or * filter using a time range.
- * *By default, this operation returns as many log events as can fit in a response size of 1MB (up to 10,000 log events). * You can get additional log events by specifying one of the tokens in a subsequent call. * This operation can return empty results while there are more log events available through the token.
- * *If you are using CloudWatch cross-account observability, you can use this operation in a monitoring account and * view data from the linked source accounts. For more information, see * CloudWatch cross-account observability.
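A sketch of the token loop this paragraph describes, assuming a hypothetical group and stream; the service signals the end of the stream by returning the same forward token you passed in, so an empty page alone does not end the loop:

```ts
import {
  CloudWatchLogsClient,
  GetLogEventsCommand,
} from "@aws-sdk/client-cloudwatch-logs";

const client = new CloudWatchLogsClient({ region: "us-east-1" });

let token: string | undefined;
do {
  const page = await client.send(
    new GetLogEventsCommand({
      logGroupName: "/my-app/production", // placeholder names
      logStreamName: "instance-1",
      startFromHead: true,                // read oldest events first
      nextToken: token,
    })
  );
  for (const event of page.events ?? []) {
    console.log(event.timestamp, event.message);
  }
  // A page can be empty while more events remain; stop only when the
  // returned token matches the one we sent.
  if (page.nextForwardToken === token) break;
  token = page.nextForwardToken;
} while (token);
```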
@@ -1548,9 +1544,6 @@ export class CloudWatchLogs extends CloudWatchLogsClient { *Creates or updates an access policy associated with an existing * destination. An access policy is an IAM policy document that is used * to authorize claims to register a subscription filter against a given destination.
- *If multiple Amazon Web Services accounts are sending logs to this destination, each sender account must be
- * listed separately in the policy. The policy does not support specifying *
- * as the Principal or the use of the aws:PrincipalOrgId
global key.
Uploads a batch of log events to the specified log stream.
- *You must include the sequence token obtained from the response of the previous call. An
- * upload in a newly created log stream does not require a sequence token. You can also get the
- * sequence token in the expectedSequenceToken
field from
- * InvalidSequenceTokenException
. If you call PutLogEvents
twice
- * within a narrow time period using the same value for sequenceToken
, both calls
- * might be successful or one might be rejected.
The sequence token is now ignored in PutLogEvents
+ * actions. PutLogEvents
+ * actions are always accepted and never return InvalidSequenceTokenException
or
+ * DataAlreadyAcceptedException
even if the sequence token is not valid. You can use
+ * parallel PutLogEvents
actions on the same log stream.
The batch of events must satisfy the following constraints:
*The maximum number of log events in a batch is 10,000.
*There is a quota of five requests per second per log stream. Additional requests - * are throttled. This quota can't be changed.
+ *The quota of five requests per second per log stream
+ * has been removed. Instead, PutLogEvents
actions are throttled based on a
+ * per-second per-account quota. You can request an increase to the per-second throttling
+ * quota by using the Service Quotas service.
If a call to PutLogEvents
returns "UnrecognizedClientException" the most
@@ -1707,7 +1705,6 @@ export class CloudWatchLogs extends CloudWatchLogsClient {
/**
*
Creates or updates a query definition for CloudWatch Logs Insights. For * more information, see Analyzing Log Data with CloudWatch Logs Insights.
- * *To update a query definition, specify its queryDefinitionId
in your request.
* The values of name
, queryString
, and logGroupNames
are
* changed to the values that you specify in your update operation. No current values are
@@ -1890,16 +1887,13 @@ export class CloudWatchLogs extends CloudWatchLogsClient {
*
Schedules a query of a log group using CloudWatch Logs Insights. You specify the log group * and time range to query and the query string to use.
*For more information, see CloudWatch Logs Insights Query Syntax.
- * *Queries time out after 15 minutes of runtime. If your queries are timing out, reduce the * time range being searched or partition your query into a number of queries.
- * *If you are using CloudWatch cross-account observability, you can use this operation in a
* monitoring account to start a query in a linked source account. For more information, see
* CloudWatch
* cross-account observability. For a cross-account StartQuery
operation,
* the query definition must be defined in the monitoring account.
You can have up to 20 concurrent CloudWatch Logs insights queries, including queries * that have been added to dashboards.
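A sketch of the start-then-poll flow described above, against a placeholder log group; StartQuery takes epoch seconds, and the loop simply reads the returned status, relying on the documented 15-minute server-side timeout:

```ts
import {
  CloudWatchLogsClient,
  StartQueryCommand,
  GetQueryResultsCommand,
} from "@aws-sdk/client-cloudwatch-logs";

const client = new CloudWatchLogsClient({ region: "us-east-1" });
const now = Math.floor(Date.now() / 1000); // StartQuery uses epoch seconds

const { queryId } = await client.send(
  new StartQueryCommand({
    logGroupName: "/my-app/production", // placeholder log group
    startTime: now - 3600,              // last hour
    endTime: now,
    queryString:
      "fields @timestamp, @message | sort @timestamp desc | limit 20",
  })
);

// Poll until the query leaves the Scheduled/Running states.
let reply;
do {
  await new Promise((resolve) => setTimeout(resolve, 2000));
  reply = await client.send(new GetQueryResultsCommand({ queryId }));
} while (reply.status === "Scheduled" || reply.status === "Running");

console.log(reply.status, reply.results);
```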
*/ diff --git a/clients/client-cloudwatch-logs/src/commands/CreateExportTaskCommand.ts b/clients/client-cloudwatch-logs/src/commands/CreateExportTaskCommand.ts index 047cc988a845..9c16721aedc6 100644 --- a/clients/client-cloudwatch-logs/src/commands/CreateExportTaskCommand.ts +++ b/clients/client-cloudwatch-logs/src/commands/CreateExportTaskCommand.ts @@ -45,7 +45,6 @@ export interface CreateExportTaskCommandOutput extends CreateExportTaskResponse, *You can export logs from multiple log groups or multiple time ranges to the same S3 * bucket. To separate log data for each export task, specify a prefix to be used as the Amazon * S3 key prefix for all exported objects.
- * *Time-based sorting on chunks of log data inside an exported file is not guaranteed. You can * sort the exported log field data by using Linux utilities.
diff --git a/clients/client-cloudwatch-logs/src/commands/FilterLogEventsCommand.ts b/clients/client-cloudwatch-logs/src/commands/FilterLogEventsCommand.ts index cb37333bf158..f9a0c1959a0b 100644 --- a/clients/client-cloudwatch-logs/src/commands/FilterLogEventsCommand.ts +++ b/clients/client-cloudwatch-logs/src/commands/FilterLogEventsCommand.ts @@ -32,7 +32,6 @@ export interface FilterLogEventsCommandOutput extends FilterLogEventsResponse, _ *Lists log events from the specified log group. You can list all the log events or filter the results * using a filter pattern, a time range, and the name of the log stream.
*You must have the logs:FilterLogEvents
permission to perform this operation.
By default, this operation returns as many log events as can fit in 1 MB (up to 10,000 * log events) or all the events found within the specified time range. If the results include a * token, that means there are more log events available. You can get additional results by diff --git a/clients/client-cloudwatch-logs/src/commands/GetLogEventsCommand.ts b/clients/client-cloudwatch-logs/src/commands/GetLogEventsCommand.ts index bbffbbc5b2e9..b5a6b64556f3 100644 --- a/clients/client-cloudwatch-logs/src/commands/GetLogEventsCommand.ts +++ b/clients/client-cloudwatch-logs/src/commands/GetLogEventsCommand.ts @@ -31,11 +31,9 @@ export interface GetLogEventsCommandOutput extends GetLogEventsResponse, __Metad /** *
Lists log events from the specified log stream. You can list all of the log events or * filter using a time range.
- * *By default, this operation returns as many log events as can fit in a response size of 1MB (up to 10,000 log events). * You can get additional log events by specifying one of the tokens in a subsequent call. * This operation can return empty results while there are more log events available through the token.
- * *If you are using CloudWatch cross-account observability, you can use this operation in a monitoring account and * view data from the linked source accounts. For more information, see * CloudWatch cross-account observability.
diff --git a/clients/client-cloudwatch-logs/src/commands/PutDestinationPolicyCommand.ts b/clients/client-cloudwatch-logs/src/commands/PutDestinationPolicyCommand.ts index cf5dc321dc9f..a7aaca985b7e 100644 --- a/clients/client-cloudwatch-logs/src/commands/PutDestinationPolicyCommand.ts +++ b/clients/client-cloudwatch-logs/src/commands/PutDestinationPolicyCommand.ts @@ -27,9 +27,6 @@ export interface PutDestinationPolicyCommandOutput extends __MetadataBearer {} *Creates or updates an access policy associated with an existing * destination. An access policy is an IAM policy document that is used * to authorize claims to register a subscription filter against a given destination.
- *If multiple Amazon Web Services accounts are sending logs to this destination, each sender account must be
- * listed separately in the policy. The policy does not support specifying *
- * as the Principal or the use of the aws:PrincipalOrgId
global key.
Uploads a batch of log events to the specified log stream.
- *You must include the sequence token obtained from the response of the previous call. An
- * upload in a newly created log stream does not require a sequence token. You can also get the
- * sequence token in the expectedSequenceToken
field from
- * InvalidSequenceTokenException
. If you call PutLogEvents
twice
- * within a narrow time period using the same value for sequenceToken
, both calls
- * might be successful or one might be rejected.
The sequence token is now ignored in PutLogEvents
+ * actions. PutLogEvents
+ * actions are always accepted and never return InvalidSequenceTokenException
or
+ * DataAlreadyAcceptedException
even if the sequence token is not valid. You can use
+ * parallel PutLogEvents
actions on the same log stream.
The batch of events must satisfy the following constraints:
*The maximum number of log events in a batch is 10,000.
*There is a quota of five requests per second per log stream. Additional requests - * are throttled. This quota can't be changed.
+ *The quota of five requests per second per log stream
+ * has been removed. Instead, PutLogEvents
actions are throttled based on a
+ * per-second per-account quota. You can request an increase to the per-second throttling
+ * quota by using the Service Quotas service.
If a call to PutLogEvents
returns "UnrecognizedClientException" the most
diff --git a/clients/client-cloudwatch-logs/src/commands/PutQueryDefinitionCommand.ts b/clients/client-cloudwatch-logs/src/commands/PutQueryDefinitionCommand.ts
index 342ff0cbe11e..6dfa4003683a 100644
--- a/clients/client-cloudwatch-logs/src/commands/PutQueryDefinitionCommand.ts
+++ b/clients/client-cloudwatch-logs/src/commands/PutQueryDefinitionCommand.ts
@@ -31,7 +31,6 @@ export interface PutQueryDefinitionCommandOutput extends PutQueryDefinitionRespo
/**
*
Creates or updates a query definition for CloudWatch Logs Insights. For * more information, see Analyzing Log Data with CloudWatch Logs Insights.
- * *To update a query definition, specify its queryDefinitionId
in your request.
* The values of name
, queryString
, and logGroupNames
are
* changed to the values that you specify in your update operation. No current values are
diff --git a/clients/client-cloudwatch-logs/src/commands/StartQueryCommand.ts b/clients/client-cloudwatch-logs/src/commands/StartQueryCommand.ts
index 61d2891bd4a8..6d4b6ceca986 100644
--- a/clients/client-cloudwatch-logs/src/commands/StartQueryCommand.ts
+++ b/clients/client-cloudwatch-logs/src/commands/StartQueryCommand.ts
@@ -32,16 +32,13 @@ export interface StartQueryCommandOutput extends StartQueryResponse, __MetadataB
*
Schedules a query of a log group using CloudWatch Logs Insights. You specify the log group * and time range to query and the query string to use.
*For more information, see CloudWatch Logs Insights Query Syntax.
- * *Queries time out after 15 minutes of runtime. If your queries are timing out, reduce the * time range being searched or partition your query into a number of queries.
- * *If you are using CloudWatch cross-account observability, you can use this operation in a
* monitoring account to start a query in a linked source account. For more information, see
* CloudWatch
* cross-account observability. For a cross-account StartQuery
operation,
* the query definition must be defined in the monitoring account.
You can have up to 20 concurrent CloudWatch Logs insights queries, including queries * that have been added to dashboards.
* @example diff --git a/clients/client-cloudwatch-logs/src/models/models_0.ts b/clients/client-cloudwatch-logs/src/models/models_0.ts index 20b6b8fb2fa1..673e465c419d 100644 --- a/clients/client-cloudwatch-logs/src/models/models_0.ts +++ b/clients/client-cloudwatch-logs/src/models/models_0.ts @@ -248,6 +248,13 @@ export interface CreateLogStreamRequest { /** *The event was already logged.
+ *
+ * PutLogEvents
+ * actions are now always accepted and never return
+ * DataAlreadyAcceptedException
regardless of whether a given batch of log events
+ * has already been accepted.
Foo
, log groups
* named FooBar
, aws/Foo
, and GroupFoo
would match, but foo
,
* F/o/o
and Froo
would not match.
- *
- *
- *
*
* logGroupNamePattern
and logGroupNamePrefix
are mutually exclusive.
@@ -608,8 +612,6 @@ export interface DescribeLogGroupsRequest {
*
If you are using a monitoring account, set this to True
to have the operation
* return log groups in
* the accounts listed in accountIdentifiers
.
If this parameter is set to true
and accountIdentifiers
*
* contains a null value, the operation returns all log groups in the monitoring account
@@ -785,12 +787,19 @@ export interface LogStream {
/**
*
The ingestion time, expressed as the number of milliseconds after Jan 1, 1970
- * 00:00:00 UTC
.
lastIngestionTime
value updates on an eventual consistency basis. It
+ * typically updates in less than an hour after ingestion, but in rare situations might take longer.
*/
lastIngestionTime?: number;
/**
* The sequence token.
+ *The sequence token is now ignored in
+ * PutLogEvents
+ * actions. PutLogEvents
actions are always accepted regardless of receiving an invalid sequence token.
+ * You don't need to obtain uploadSequenceToken
to use a PutLogEvents
action.
The sequence token is not valid. You can get the correct sequence token in
* the expectedSequenceToken
field in the InvalidSequenceTokenException
* message.
+ * PutLogEvents
+ * actions are now always accepted and never return
+ * InvalidSequenceTokenException
regardless of receiving an invalid sequence token.
The sequence token obtained from the response of the previous PutLogEvents
- * call. An upload in a newly created log stream does not require a sequence token. You can also
- * get the sequence token using DescribeLogStreams. If you call PutLogEvents
twice within a narrow
- * time period using the same value for sequenceToken
, both calls might be
- * successful or one might be rejected.
The sequenceToken
parameter is now ignored in PutLogEvents
+ * actions. PutLogEvents
+ * actions are now accepted and never return InvalidSequenceTokenException
or
+ * DataAlreadyAcceptedException
even if the sequence token is not valid.
The next sequence token.
+ *This field has been deprecated.
+ *The sequence token is now ignored in PutLogEvents
+ * actions. PutLogEvents
+ * actions are always accepted even if the sequence token is not valid. You can use
+ * parallel PutLogEvents
actions on the same log stream and you do not need
+ * to wait for the response of a previous PutLogEvents
action to obtain
+ * the nextSequenceToken
value.
Associates the specified KMS key with the specified log\n group.
\nAssociating a KMS key with a log group overrides any existing\n associations between the log group and a KMS key. After a KMS key is associated with a log group, all newly ingested data for the log group is encrypted\n using the KMS key. This association is stored as long as the data encrypted\n with the KMS keyis still within CloudWatch Logs. This enables CloudWatch Logs to decrypt this data whenever it is requested.
\nCloudWatch Logs supports only symmetric KMS keys. Do not use an associate\n an asymmetric KMS key with your log group. For more information, see Using\n Symmetric and Asymmetric Keys.
\nIt can take up to 5 minutes for this operation to take effect.
\nIf you attempt to associate a KMS key with a log group but the KMS key does not exist or the KMS key is disabled, you receive an\n InvalidParameterException
error.
Associates the specified KMS key with the specified log\n group.
\nAssociating a KMS key with a log group overrides any existing\n associations between the log group and a KMS key. After a KMS key is associated with a log group, all newly ingested data for the log group is encrypted\n using the KMS key. This association is stored as long as the data encrypted\n with the KMS key is still within CloudWatch Logs. This enables CloudWatch Logs to decrypt this data whenever it is requested.
\nCloudWatch Logs supports only symmetric KMS keys. Do not associate\n an asymmetric KMS key with your log group. For more information, see Using\n Symmetric and Asymmetric Keys.
\nIt can take up to 5 minutes for this operation to take effect.
\nIf you attempt to associate a KMS key with a log group but the KMS key does not exist or the KMS key is disabled, you receive an\n InvalidParameterException
error.
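A sketch of the call this documentation covers; the log group and key ARN are placeholders, and the key must be a symmetric KMS key referenced by its full ARN:

```ts
import {
  CloudWatchLogsClient,
  AssociateKmsKeyCommand,
} from "@aws-sdk/client-cloudwatch-logs";

const client = new CloudWatchLogsClient({ region: "us-east-1" });

// Associate a symmetric KMS key with a log group by ARN.
await client.send(
  new AssociateKmsKeyCommand({
    logGroupName: "/my-app/production", // placeholder log group
    kmsKeyId:
      "arn:aws:kms:us-east-1:123456789012:key/11111111-2222-3333-4444-555555555555",
  })
);
// Allow up to 5 minutes before expecting newly ingested data to be encrypted.
```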
Cancels the specified export task.
\nThe task must be in the PENDING
or RUNNING
state.
Cancels the specified export task.
\nThe task must be in the PENDING
or RUNNING
state.
Creates an export task so that you can efficiently export data from a log group to an\n Amazon S3 bucket. When you perform a CreateExportTask
operation, you must use\n credentials that have permission to write to the S3 bucket that you specify as the\n destination.
Exporting log data to S3 buckets that are encrypted by KMS is supported.\n Exporting log data to Amazon S3 buckets that have S3 Object Lock enabled with a\n retention period is also supported.
\nExporting to S3 buckets that are encrypted with AES-256 is supported.
\nThis is an asynchronous call. If all the required information is provided, this \n operation initiates an export task and responds with the ID of the task. After the task has started,\n you can use DescribeExportTasks to get the status of the export task. Each account can\n only have one active (RUNNING
or PENDING
) export task at a time.\n To cancel an export task, use CancelExportTask.
You can export logs from multiple log groups or multiple time ranges to the same S3\n bucket. To separate log data for each export task, specify a prefix to be used as the Amazon\n S3 key prefix for all exported objects.
\n \nTime-based sorting on chunks of log data inside an exported file is not guaranteed. You can\n sort the exported log field data by using Linux utilities.
\nCreates an export task so that you can efficiently export data from a log group to an\n Amazon S3 bucket. When you perform a CreateExportTask
operation, you must use\n credentials that have permission to write to the S3 bucket that you specify as the\n destination.
Exporting log data to S3 buckets that are encrypted by KMS is supported.\n Exporting log data to Amazon S3 buckets that have S3 Object Lock enabled with a\n retention period is also supported.
\nExporting to S3 buckets that are encrypted with AES-256 is supported.
\nThis is an asynchronous call. If all the required information is provided, this \n operation initiates an export task and responds with the ID of the task. After the task has started,\n you can use DescribeExportTasks to get the status of the export task. Each account can\n only have one active (RUNNING
or PENDING
) export task at a time.\n To cancel an export task, use CancelExportTask.
You can export logs from multiple log groups or multiple time ranges to the same S3\n bucket. To separate log data for each export task, specify a prefix to be used as the Amazon\n S3 key prefix for all exported objects.
\nTime-based sorting on chunks of log data inside an exported file is not guaranteed. You can\n sort the exported log field data by using Linux utilities.
\nCreates a log stream for the specified log group. A log stream is a sequence of log events\n that originate from a single source, such as an application instance or a resource that is \n being monitored.
\nThere is no limit on the number of log streams that you can create for a log group. There is a limit \n of 50 TPS on CreateLogStream
operations, after which transactions are throttled.
You must use the following guidelines when naming a log stream:
\nLog stream names must be unique within the log group.
\nLog stream names can be between 1 and 512 characters long.
\nDon't use ':' (colon) or '*' (asterisk) characters.
\nCreates a log stream for the specified log group. A log stream is a sequence of log events\n that originate from a single source, such as an application instance or a resource that is \n being monitored.
\nThere is no limit on the number of log streams that you can create for a log group. There is a limit \n of 50 TPS on CreateLogStream
operations, after which transactions are throttled.
You must use the following guidelines when naming a log stream:
\nLog stream names must be unique within the log group.
\nLog stream names can be between 1 and 512 characters long.
\nDon't use ':' (colon) or '*' (asterisk) characters.
\nThe event was already logged.
", + "smithy.api#documentation": "The event was already logged.
\n\n PutLogEvents
\n actions are now always accepted and never return\n DataAlreadyAcceptedException
regardless of whether a given batch of log events\n has already been accepted.
Deletes the specified retention policy.
\nLog events do not expire if they belong to log groups without a retention policy.
" + "smithy.api#documentation": "Deletes the specified retention policy.
\nLog events do not expire if they belong to log groups without a retention policy.
" } }, "com.amazonaws.cloudwatchlogs#DeleteRetentionPolicyRequest": { @@ -953,7 +953,7 @@ "logGroupNamePattern": { "target": "com.amazonaws.cloudwatchlogs#LogGroupNamePattern", "traits": { - "smithy.api#documentation": "If you specify a string for this parameter, the operation returns only log groups that have names\nthat match the string based on a case-sensitive substring search. For example, if you specify Foo
, log groups\nnamed FooBar
, aws/Foo
, and GroupFoo
would match, but foo
, \nF/o/o
and Froo
would not match.
\n logGroupNamePattern
and logGroupNamePrefix
are mutually exclusive. \n Only one \n of these parameters can be passed.\n
If you specify a string for this parameter, the operation returns only log groups that have names\nthat match the string based on a case-sensitive substring search. For example, if you specify Foo
, log groups\nnamed FooBar
, aws/Foo
, and GroupFoo
would match, but foo
, \nF/o/o
and Froo
would not match.
\n logGroupNamePattern
and logGroupNamePrefix
are mutually exclusive. \n Only one \n of these parameters can be passed.\n
If you are using a monitoring account, set this to True
to have the operation\n return log groups in \n the accounts listed in accountIdentifiers
.
If this parameter is set to true
and accountIdentifiers
\n\n contains a null value, the operation returns all log groups in the monitoring account\n and all log groups in all source accounts that are linked to the monitoring account.
If you specify includeLinkedAccounts
in your request, then\n metricFilterCount
, retentionInDays
, and storedBytes
\n are not included in the response.
If you are using a monitoring account, set this to True
to have the operation\n return log groups in \n the accounts listed in accountIdentifiers
.
If this parameter is set to true
and accountIdentifiers
\n\n contains a null value, the operation returns all log groups in the monitoring account\n and all log groups in all source accounts that are linked to the monitoring account.
If you specify includeLinkedAccounts
in your request, then\n metricFilterCount
, retentionInDays
, and storedBytes
\n are not included in the response.
The prefix to match.
\nIf orderBy
is LastEventTime
, you cannot specify this\n parameter.
The prefix to match.
\nIf orderBy
is LastEventTime
, you cannot specify this\n parameter.
Disassociates the associated KMS key from the specified log\n group.
\nAfter the KMS key is disassociated from the log group, CloudWatch Logs stops encrypting newly ingested data for the log group. All previously ingested data\n remains encrypted, and CloudWatch Logs requires permissions for the KMS key\n whenever the encrypted data is requested.
\nNote that it can take up to 5 minutes for this operation to take effect.
" + "smithy.api#documentation": "Disassociates the associated KMS key from the specified log\n group.
\nAfter the KMS key is disassociated from the log group, CloudWatch Logs stops encrypting newly ingested data for the log group. All previously ingested data\n remains encrypted, and CloudWatch Logs requires permissions for the KMS key\n whenever the encrypted data is requested.
\nNote that it can take up to 5 minutes for this operation to take effect.
" } }, "com.amazonaws.cloudwatchlogs#DisassociateKmsKeyRequest": { @@ -1808,7 +1808,7 @@ } ], "traits": { - "smithy.api#documentation": "Lists log events from the specified log group. You can list all the log events or filter the results\n using a filter pattern, a time range, and the name of the log stream.
\nYou must have the logs;FilterLogEvents
permission to perform this operation.
By default, this operation returns as many log events as can fit in 1 MB (up to 10,000\n log events) or all the events found within the specified time range. If the results include a\n token, that means there are more log events available. You can get additional results by\n specifying the token in a subsequent call. This operation can return empty results while there\n are more log events available through the token.
\nThe returned log events are sorted by event timestamp, the timestamp when the event was ingested\n by CloudWatch Logs, and the ID of the PutLogEvents
request.
If you are using CloudWatch cross-account observability, you can use this operation in a monitoring account and \n view data from the linked source accounts. For more information, see \n CloudWatch cross-account observability.
", + "smithy.api#documentation": "Lists log events from the specified log group. You can list all the log events or filter the results\n using a filter pattern, a time range, and the name of the log stream.
\nYou must have the logs;FilterLogEvents
permission to perform this operation.
By default, this operation returns as many log events as can fit in 1 MB (up to 10,000\n log events) or all the events found within the specified time range. If the results include a\n token, that means there are more log events available. You can get additional results by\n specifying the token in a subsequent call. This operation can return empty results while there\n are more log events available through the token.
\nThe returned log events are sorted by event timestamp, the timestamp when the event was ingested\n by CloudWatch Logs, and the ID of the PutLogEvents
request.
If you are using CloudWatch cross-account observability, you can use this operation in a monitoring account and \n view data from the linked source accounts. For more information, see \n CloudWatch cross-account observability.
", "smithy.api#paginated": { "inputToken": "nextToken", "outputToken": "nextToken", @@ -2063,7 +2063,7 @@ } ], "traits": { - "smithy.api#documentation": "Lists log events from the specified log stream. You can list all of the log events or\n filter using a time range.
\n\nBy default, this operation returns as many log events as can fit in a response size of 1MB (up to 10,000 log events). \n You can get additional log events by specifying one of the tokens in a subsequent call.\n This operation can return empty results while there are more log events available through the token.
\n \nIf you are using CloudWatch cross-account observability, you can use this operation in a monitoring account and \n view data from the linked source accounts. For more information, see \n CloudWatch cross-account observability.
", + "smithy.api#documentation": "Lists log events from the specified log stream. You can list all of the log events or\n filter using a time range.
\nBy default, this operation returns as many log events as can fit in a response size of 1MB (up to 10,000 log events). \n You can get additional log events by specifying one of the tokens in a subsequent call.\n This operation can return empty results while there are more log events available through the token.
\nIf you are using CloudWatch cross-account observability, you can use this operation in a monitoring account and \n view data from the linked source accounts. For more information, see \n CloudWatch cross-account observability.
", "smithy.api#paginated": { "inputToken": "nextToken", "outputToken": "nextForwardToken", @@ -2419,7 +2419,7 @@ } }, "traits": { - "smithy.api#documentation": "The sequence token is not valid. You can get the correct sequence token in \n the expectedSequenceToken
field in the InvalidSequenceTokenException
\n message.
The sequence token is not valid. You can get the correct sequence token in \n the expectedSequenceToken
field in the InvalidSequenceTokenException
\n message.
\n PutLogEvents
\n actions are now always accepted and never return\n InvalidSequenceTokenException
regardless of receiving an invalid sequence token.
The ingestion time, expressed as the number of milliseconds after Jan 1, 1970\n 00:00:00 UTC
.
The ingestion time, expressed as the number of milliseconds after Jan 1, 1970\n 00:00:00 UTC
The lastIngestionTime
value updates on an eventual consistency basis. It \n typically updates in less than an hour after ingestion, but in rare situations might take longer.
The sequence token.
" + "smithy.api#documentation": "The sequence token.
\nThe sequence token is now ignored in \n PutLogEvents
\n actions. PutLogEvents
actions are always accepted regardless of receiving an invalid sequence token. \n You don't need to obtain uploadSequenceToken
to use a PutLogEvents
action.
You can use Amazon CloudWatch Logs to monitor, store, and access your log files from\n EC2 instances, CloudTrail, and other sources. You can then retrieve the associated\n log data from CloudWatch Logs using the CloudWatch console. Alternatively, you can use\n CloudWatch Logs commands in the Amazon Web Services CLI, CloudWatch Logs API, or CloudWatch\n Logs SDK.
\nYou can use CloudWatch Logs to:
\n\n Monitor logs from EC2 instances in real time: You\n can use CloudWatch Logs to monitor applications and systems using log data. For example,\n CloudWatch Logs can track the number of errors that occur in your application logs. Then,\n it can send you a notification whenever the rate of errors exceeds a threshold that you\n specify. CloudWatch Logs uses your log data for monitoring so no code changes are\n required. For example, you can monitor application logs for specific literal terms (such\n as \"NullReferenceException\"). You can also count the number of occurrences of a literal\n term at a particular position in log data (such as \"404\" status codes in an Apache access\n log). When the term you are searching for is found, CloudWatch Logs reports the data to a\n CloudWatch metric that you specify.
\n\n Monitor CloudTrail logged events: You can\n create alarms in CloudWatch and receive notifications of particular API activity as\n captured by CloudTrail. You can use the notification to perform troubleshooting.
\n\n Archive log data: You can use CloudWatch Logs to\n store your log data in highly durable storage. You can change the log retention setting so\n that any log events earlier than this setting are automatically deleted. The CloudWatch\n Logs agent helps to quickly send both rotated and non-rotated log data off of a host and\n into the log service. You can then access the raw log data when you need it.
\nYou can use Amazon CloudWatch Logs to monitor, store, and access your log files from\n EC2 instances, CloudTrail, and other sources. You can then retrieve the associated\n log data from CloudWatch Logs using the CloudWatch console. Alternatively, you can use\n CloudWatch Logs commands in the Amazon Web Services CLI, CloudWatch Logs API, or CloudWatch\n Logs SDK.
\nYou can use CloudWatch Logs to:
\n\n Monitor logs from EC2 instances in real time: You\n can use CloudWatch Logs to monitor applications and systems using log data. For example,\n CloudWatch Logs can track the number of errors that occur in your application logs. Then,\n it can send you a notification whenever the rate of errors exceeds a threshold that you\n specify. CloudWatch Logs uses your log data for monitoring so no code changes are\n required. For example, you can monitor application logs for specific literal terms (such\n as \"NullReferenceException\"). You can also count the number of occurrences of a literal\n term at a particular position in log data (such as \"404\" status codes in an Apache access\n log). When the term you are searching for is found, CloudWatch Logs reports the data to a\n CloudWatch metric that you specify.
\n\n Monitor CloudTrail logged events: You can\n create alarms in CloudWatch and receive notifications of particular API activity as\n captured by CloudTrail. You can use the notification to perform troubleshooting.
\n\n Archive log data: You can use CloudWatch Logs to\n store your log data in highly durable storage. You can change the log retention setting so\n that any log events earlier than this setting are automatically deleted. The CloudWatch\n Logs agent helps to quickly send both rotated and non-rotated log data off of a host and\n into the log service. You can then access the raw log data when you need it.
\nCreates or updates an access policy associated with an existing\n destination. An access policy is an IAM policy document that is used\n to authorize claims to register a subscription filter against a given destination.
\nIf multiple Amazon Web Services accounts are sending logs to this destination, each sender account must be \n listed separately in the policy. The policy does not support specifying *
\n as the Principal or the use of the aws:PrincipalOrgId
global key.
Creates or updates an access policy associated with an existing\n destination. An access policy is an IAM policy document that is used\n to authorize claims to register a subscription filter against a given destination.
" } }, "com.amazonaws.cloudwatchlogs#PutDestinationPolicyRequest": { @@ -5566,7 +5566,7 @@ } ], "traits": { - "smithy.api#documentation": "Uploads a batch of log events to the specified log stream.
\nYou must include the sequence token obtained from the response of the previous call. An\n upload in a newly created log stream does not require a sequence token. You can also get the\n sequence token in the expectedSequenceToken
field from\n InvalidSequenceTokenException
. If you call PutLogEvents
twice\n within a narrow time period using the same value for sequenceToken
, both calls\n might be successful or one might be rejected.
The batch of events must satisfy the following constraints:
\nThe maximum batch size is 1,048,576 bytes. This size is calculated as the sum of\n all event messages in UTF-8, plus 26 bytes for each log event.
\nNone of the log events in the batch can be more than 2 hours in the future.
\nNone of the log events in the batch can be more than 14 days in the past. Also,\n none of the log events can be from earlier than the retention period of the log\n group.
\nThe log events in the batch must be in chronological order by their timestamp. The\n timestamp is the time that the event occurred, expressed as the number of milliseconds\n after Jan 1, 1970 00:00:00 UTC
. (In Amazon Web Services Tools for PowerShell\n and the Amazon Web Services SDK for .NET, the timestamp is specified in .NET format:\n yyyy-mm-ddThh:mm:ss
. For example, 2017-09-15T13:45:30
.)
A batch of log events in a single request cannot span more than 24 hours. Otherwise, the operation fails.
\nThe maximum number of log events in a batch is 10,000.
\nThere is a quota of five requests per second per log stream. Additional requests\n are throttled. This quota can't be changed.
\nIf a call to PutLogEvents
returns \"UnrecognizedClientException\" the most\n likely cause is a non-valid Amazon Web Services access key ID or secret key.
Uploads a batch of log events to the specified log stream.
\nThe sequence token is now ignored in PutLogEvents
\n actions. PutLogEvents
\n actions are always accepted and never return InvalidSequenceTokenException
or\n DataAlreadyAcceptedException
even if the sequence token is not valid. You can use\n parallel PutLogEvents
actions on the same log stream.
The batch of events must satisfy the following constraints:
\nThe maximum batch size is 1,048,576 bytes. This size is calculated as the sum of\n all event messages in UTF-8, plus 26 bytes for each log event.
\nNone of the log events in the batch can be more than 2 hours in the future.
\nNone of the log events in the batch can be more than 14 days in the past. Also,\n none of the log events can be from earlier than the retention period of the log\n group.
\nThe log events in the batch must be in chronological order by their timestamp. The\n timestamp is the time that the event occurred, expressed as the number of milliseconds\n after Jan 1, 1970 00:00:00 UTC
. (In Amazon Web Services Tools for PowerShell\n and the Amazon Web Services SDK for .NET, the timestamp is specified in .NET format:\n yyyy-mm-ddThh:mm:ss
. For example, 2017-09-15T13:45:30
.)
A batch of log events in a single request cannot span more than 24 hours. Otherwise, the operation fails.
\nThe maximum number of log events in a batch is 10,000.
\nThe quota of five requests per second per log stream\n has been removed. Instead, PutLogEvents
actions are throttled based on a \n per-second per-account quota. You can request an increase to the per-second throttling\n quota by using the Service Quotas service.
If a call to PutLogEvents
returns \"UnrecognizedClientException\" the most\n likely cause is a non-valid Amazon Web Services access key ID or secret key.
The sequence token obtained from the response of the previous PutLogEvents
\n call. An upload in a newly created log stream does not require a sequence token. You can also\n get the sequence token using DescribeLogStreams. If you call PutLogEvents
twice within a narrow\n time period using the same value for sequenceToken
, both calls might be\n successful or one might be rejected.
The sequence token obtained from the response of the previous PutLogEvents
\n call.
The sequenceToken
parameter is now ignored in PutLogEvents
\n actions. PutLogEvents
\n actions are now accepted and never return InvalidSequenceTokenException
or\n DataAlreadyAcceptedException
even if the sequence token is not valid.
The next sequence token.
" + "smithy.api#documentation": "The next sequence token.
\nThis field has been deprecated.
\nThe sequence token is now ignored in PutLogEvents
\n actions. PutLogEvents
\n actions are always accepted even if the sequence token is not valid. You can use\n parallel PutLogEvents
actions on the same log stream and you do not need\n to wait for the response of a previous PutLogEvents
action to obtain \n the nextSequenceToken
value.
Creates or updates a query definition for CloudWatch Logs Insights. For \n more information, see Analyzing Log Data with CloudWatch Logs Insights.
\n \nTo update a query definition, specify its queryDefinitionId
in your request.\n The values of name
, queryString
, and logGroupNames
are\n changed to the values that you specify in your update operation. No current values are\n retained from the current query definition. For example, imagine updating a current query\n definition that includes log groups. If you don't specify the logGroupNames
\n parameter in your update operation, the query definition changes to contain no log\n groups.
You must have the logs:PutQueryDefinition
permission to be able to perform\n this operation.
Creates or updates a query definition for CloudWatch Logs Insights. For \n more information, see Analyzing Log Data with CloudWatch Logs Insights.
\nTo update a query definition, specify its queryDefinitionId
in your request.\n The values of name
, queryString
, and logGroupNames
are\n changed to the values that you specify in your update operation. No current values are\n retained from the current query definition. For example, imagine updating a current query\n definition that includes log groups. If you don't specify the logGroupNames
\n parameter in your update operation, the query definition changes to contain no log\n groups.
You must have the logs:PutQueryDefinition
permission to be able to perform\n this operation.
Details of the new policy, including the identity of the principal that is enabled to put logs to this account. This is formatted as a JSON string.\n This parameter is required.
\nThe following example creates a resource policy enabling the Route 53 service to put\n DNS query logs in to the specified log group. Replace \"logArn\"
with the ARN of \n your CloudWatch Logs resource, such as a log group or log stream.
CloudWatch Logs also supports aws:SourceArn\n and aws:SourceAccount\ncondition context keys.
\nIn the example resource policy, you would replace the value of SourceArn
with\n the resource making the call from RouteĀ 53 to CloudWatch Logs. You would also\n replace the value of SourceAccount
with the Amazon Web Services account ID making\n that call.
\n {\n \"Version\": \"2012-10-17\",\n \"Statement\": [\n {\n \"Sid\": \"Route53LogsToCloudWatchLogs\",\n \"Effect\": \"Allow\",\n \"Principal\": {\n \"Service\": [\n \"route53.amazonaws.com\"\n ]\n },\n \"Action\": \"logs:PutLogEvents\",\n \"Resource\": \"logArn\",\n \"Condition\": {\n \"ArnLike\": {\n \"aws:SourceArn\": \"myRoute53ResourceArn\"\n },\n \"StringEquals\": {\n \"aws:SourceAccount\": \"myAwsAccountId\"\n }\n }\n }\n ]\n}
\n \n
Details of the new policy, including the identity of the principal that is enabled to put logs to this account. This is formatted as a JSON string.\n This parameter is required.
\nThe following example creates a resource policy enabling the Route 53 service to put\n DNS query logs in to the specified log group. Replace \"logArn\"
with the ARN of \n your CloudWatch Logs resource, such as a log group or log stream.
CloudWatch Logs also supports aws:SourceArn\n and aws:SourceAccount\ncondition context keys.
\nIn the example resource policy, you would replace the value of SourceArn
with\n the resource making the call from RouteĀ 53 to CloudWatch Logs. You would also\n replace the value of SourceAccount
with the Amazon Web Services account ID making\n that call.
\n {\n \"Version\": \"2012-10-17\",\n \"Statement\": [\n {\n \"Sid\": \"Route53LogsToCloudWatchLogs\",\n \"Effect\": \"Allow\",\n \"Principal\": {\n \"Service\": [\n \"route53.amazonaws.com\"\n ]\n },\n \"Action\": \"logs:PutLogEvents\",\n \"Resource\": \"logArn\",\n \"Condition\": {\n \"ArnLike\": {\n \"aws:SourceArn\": \"myRoute53ResourceArn\"\n },\n \"StringEquals\": {\n \"aws:SourceAccount\": \"myAwsAccountId\"\n }\n }\n }\n ]\n}
\n
Creates or updates a subscription filter and associates it with the specified log\n group. With subscription filters, you can subscribe to a real-time stream of log events\n ingested through PutLogEvents\n and have them delivered to a specific destination. When log events are sent to the receiving\n service, they are Base64 encoded and compressed with the GZIP format.
\nThe following destinations are supported for subscription filters:
\nAn Amazon Kinesis data stream belonging to the same account as the subscription\n filter, for same-account delivery.
\nA logical destination that belongs to a different account, for cross-account delivery.
\nAn Amazon Kinesis Data Firehose delivery stream that belongs to the same account as\n the subscription filter, for same-account delivery.
\nAn Lambda function that belongs to the same account as the\n subscription filter, for same-account delivery.
\nEach log group can have up to two subscription filters associated with it. If you are\n updating an existing filter, you must specify the correct name in filterName
.\n
To perform a PutSubscriptionFilter
operation, you must also have the \n iam:PassRole
permission.
Creates or updates a subscription filter and associates it with the specified log\n group. With subscription filters, you can subscribe to a real-time stream of log events\n ingested through PutLogEvents\n and have them delivered to a specific destination. When log events are sent to the receiving\n service, they are Base64 encoded and compressed with the GZIP format.
\nThe following destinations are supported for subscription filters:
\nAn Amazon Kinesis data stream belonging to the same account as the subscription\n filter, for same-account delivery.
\nA logical destination that belongs to a different account, for cross-account delivery.
\nAn Amazon Kinesis Data Firehose delivery stream that belongs to the same account as\n the subscription filter, for same-account delivery.
\nAn Lambda function that belongs to the same account as the\n subscription filter, for same-account delivery.
\nEach log group can have up to two subscription filters associated with it. If you are\n updating an existing filter, you must specify the correct name in filterName
.\n
To perform a PutSubscriptionFilter
operation, you must also have the \n iam:PassRole
permission.
Schedules a query of a log group using CloudWatch Logs Insights. You specify the log group\n and time range to query and the query string to use.
\nFor more information, see CloudWatch Logs Insights Query Syntax.
\n \nQueries time out after 15 minutes of runtime. If your queries are timing out, reduce the\n time range being searched or partition your query into a number of queries.
\n \nIf you are using CloudWatch cross-account observability, you can use this operation in a\n monitoring account to start a query in a linked source account. For more information, see\n CloudWatch\n cross-account observability. For a cross-account StartQuery
operation,\n the query definition must be defined in the monitoring account.
You can have up to 20 concurrent CloudWatch Logs insights queries, including queries\n that have been added to dashboards.
" + "smithy.api#documentation": "Schedules a query of a log group using CloudWatch Logs Insights. You specify the log group\n and time range to query and the query string to use.
\nFor more information, see CloudWatch Logs Insights Query Syntax.
\nQueries time out after 15 minutes of runtime. If your queries are timing out, reduce the\n time range being searched or partition your query into a number of queries.
\nIf you are using CloudWatch cross-account observability, you can use this operation in a\n monitoring account to start a query in a linked source account. For more information, see\n CloudWatch\n cross-account observability. For a cross-account StartQuery
operation,\n the query definition must be defined in the monitoring account.
You can have up to 20 concurrent CloudWatch Logs insights queries, including queries\n that have been added to dashboards.
" } }, "com.amazonaws.cloudwatchlogs#StartQueryRequest": {