diff --git a/clients/client-s3/src/commands/CompleteMultipartUploadCommand.ts b/clients/client-s3/src/commands/CompleteMultipartUploadCommand.ts
index 8c7f4e16c1421..b87bb8230e57a 100644
--- a/clients/client-s3/src/commands/CompleteMultipartUploadCommand.ts
+++ b/clients/client-s3/src/commands/CompleteMultipartUploadCommand.ts
@@ -46,12 +46,12 @@ export interface CompleteMultipartUploadCommandOutput extends CompleteMultipartUploadOutput, __MetadataBearer {}
  * Completes a multipart upload by assembling previously uploaded parts.
  * You first initiate the multipart upload and then upload all parts using the UploadPart
  * operation. After successfully uploading all relevant parts of an upload, you call this
- * action to complete the upload. Upon receiving this request, Amazon S3 concatenates all the
- * parts in ascending order by part number to create a new object. In the Complete Multipart
- * Upload request, you must provide the parts list. You must ensure that the parts list is
- * complete. This action concatenates the parts that you provide in the list. For each part in
- * the list, you must provide the part number and the <code>ETag</code> value, returned after
- * that part was uploaded.
+ * <code>ETag</code> value, returned after that part
+ * was uploaded.
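For context on the behavior this doc block describes, a minimal sketch of completing a multipart upload with the v3 client follows; the bucket, key, upload ID, and ETag values are placeholders, not values from this changeset.

```ts
import { S3Client, CompleteMultipartUploadCommand } from "@aws-sdk/client-s3";

const client = new S3Client({ region: "us-east-1" });

// The parts list must be complete and ordered by ascending PartNumber,
// using the ETag values returned by the corresponding UploadPart calls.
const result = await client.send(
  new CompleteMultipartUploadCommand({
    Bucket: "example-bucket",
    Key: "example-object",
    UploadId: "example-upload-id",
    MultipartUpload: {
      Parts: [
        { PartNumber: 1, ETag: '"etag-of-part-1"' },
        { PartNumber: 2, ETag: '"etag-of-part-2"' },
      ],
    },
  })
);
console.log(result.Location, result.ETag);
```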
  * Processing of a Complete Multipart Upload request could take several minutes to
  * complete. After Amazon S3 begins processing the request, it sends an HTTP response header that
  * specifies a 200 OK response. While processing is in progress, Amazon S3 periodically sends white
diff --git a/clients/client-s3/src/commands/CopyObjectCommand.ts b/clients/client-s3/src/commands/CopyObjectCommand.ts
index ba9c3ec8db3a2..d8853be3fbd2e 100644
--- a/clients/client-s3/src/commands/CopyObjectCommand.ts
+++ b/clients/client-s3/src/commands/CopyObjectCommand.ts
@@ -87,29 +87,31 @@ export interface CopyObjectCommandOutput extends CopyObjectOutput, __MetadataBearer {}
  *
- * When copying an object, you can preserve all metadata (the default) or specify new metadata.
- * However, the access control list (ACL) is not preserved and is set to private for the user making the request. To
- * override the default ACL setting, specify a new ACL when generating a copy request. For
- * more information, see Using ACLs.
- * To specify whether you want the object metadata copied from the source object or
- * replaced with metadata provided in the request, you can optionally add the
- * <code>x-amz-metadata-directive</code> header. When you grant permissions, you can use
- * the <code>s3:x-amz-metadata-directive</code> condition key to enforce certain metadata
- * behavior when objects are uploaded. For more information, see Specifying Conditions in a
- * Policy in the Amazon S3 User Guide. For a complete list of
- * Amazon S3-specific condition keys, see Actions, Resources, and Condition Keys for
- * Amazon S3.
+ * When copying an object, you can preserve all metadata (the default) or specify
+ * new metadata. However, the access control list (ACL) is not preserved and is set
+ * to private for the user making the request. To override the default ACL setting,
+ * specify a new ACL when generating a copy request. For more information, see Using
+ * ACLs.
+ * To specify whether you want the object metadata copied from the source object
+ * or replaced with metadata provided in the request, you can optionally add the
+ * <code>x-amz-metadata-directive</code> header. When you grant permissions, you
+ * can use the <code>s3:x-amz-metadata-directive</code> condition key to enforce
+ * certain metadata behavior when objects are uploaded. For more information, see
+ * Specifying Conditions in a
+ * Policy in the Amazon S3 User Guide. For a complete list
+ * of Amazon S3-specific condition keys, see Actions, Resources, and Condition
+ * Keys for Amazon S3.
- * <code>x-amz-website-redirect-location</code> is unique to each object and must be
- * specified in the request headers to copy the value.
+ * <code>x-amz-website-redirect-location</code> is unique to each object and
+ * must be specified in the request headers to copy the value.
- * To only copy an object under certain conditions, such as whether the <code>Etag</code>
- * matches or whether the object was modified before or after a specified date, use the
- * following request parameters:
+ * To only copy an object under certain conditions, such as whether the
+ * <code>Etag</code> matches or whether the object was modified before or after a
+ * specified date, use the following request parameters:
@@ -133,12 +135,14 @@ export interface CopyObjectCommandOutput extends CopyObjectOutput, __MetadataBearer {}
  * If both the <code>x-amz-copy-source-if-match</code> and
- * <code>x-amz-copy-source-if-unmodified-since</code> headers are present in the request
- * and evaluate as follows, Amazon S3 returns <code>200 OK</code> and copies the data:
+ * <code>x-amz-copy-source-if-unmodified-since</code> headers are present in the
+ * request and evaluate as follows, Amazon S3 returns <code>200 OK</code> and copies the
+ * data:
  *
- * <code>x-amz-copy-source-if-match</code> condition evaluates to true
+ * <code>x-amz-copy-source-if-match</code> condition evaluates to
+ * true
  *
@@ -147,13 +151,14 @@ export interface CopyObjectCommandOutput extends CopyObjectOutput, __MetadataBearer {}
  *
  * If both the <code>x-amz-copy-source-if-none-match</code> and
- * <code>x-amz-copy-source-if-modified-since</code> headers are present in the request and
- * evaluate as follows, Amazon S3 returns the <code>412 Precondition Failed</code> response
- * code:
+ * <code>x-amz-copy-source-if-modified-since</code> headers are present in the
+ * request and evaluate as follows, Amazon S3 returns the <code>412 Precondition
+ * Failed</code> response code:
  *
- * <code>x-amz-copy-source-if-none-match</code> condition evaluates to false
+ * <code>x-amz-copy-source-if-none-match</code> condition evaluates to
+ * false
  *
@@ -163,13 +168,13 @@ export interface CopyObjectCommandOutput extends CopyObjectOutput, __MetadataBearer {}
  *
  * All headers with the <code>x-amz-</code> prefix, including
- * <code>x-amz-copy-source</code>, must be signed.
+ * <code>x-amz-copy-source</code>, must be signed.
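As a rough illustration of the conditional-copy headers discussed above, here is a hedged sketch using the v3 client; the bucket names, keys, and ETag are invented for the example. The `x-amz-copy-source-if-*` headers map onto the `CopySourceIf*` input fields shown here.

```ts
import { S3Client, CopyObjectCommand } from "@aws-sdk/client-s3";

const client = new S3Client({ region: "us-east-1" });

// Copy only if the source object's ETag still matches; otherwise Amazon S3
// responds with 412 Precondition Failed and nothing is copied.
const copy = await client.send(
  new CopyObjectCommand({
    Bucket: "destination-bucket",
    Key: "reports/2023-copy.csv",
    CopySource: "source-bucket/reports/2023.csv",
    CopySourceIfMatch: '"expected-source-etag"',
    MetadataDirective: "COPY", // keep the source metadata (the default)
  })
);
console.log(copy.CopyObjectResult?.ETag);
```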
- * Amazon S3 automatically encrypts all new objects that are copied to an S3 bucket. When
- * copying an object, if you don't specify encryption information in your copy
+ * Amazon S3 automatically encrypts all new objects that are copied to an S3 bucket.
+ * When copying an object, if you don't specify encryption information in your copy
  * request, the encryption setting of the target object is set to the default
  * encryption configuration of the destination bucket. By default, all buckets have a
  * base level of encryption configuration that uses server-side encryption with Amazon S3
@@ -179,71 +184,80 @@ export interface CopyObjectCommandOutput extends CopyObjectOutput, __MetadataBearer {}
  * server-side encryption with customer-provided encryption keys (SSE-C), Amazon S3 uses
  * the corresponding KMS key, or a customer-provided key to encrypt the target
  * object copy.
- * When you perform a <code>CopyObject</code> operation, if you want to use a different type
- * of encryption setting for the target object, you can use other appropriate
- * encryption-related headers to encrypt the target object with a KMS key, an Amazon S3 managed
- * key, or a customer-provided key. With server-side encryption, Amazon S3 encrypts your data as it
- * writes your data to disks in its data centers and decrypts the data when you access it. If the
- * encryption setting in your request is different from the default encryption configuration
- * of the destination bucket, the encryption setting in your request takes precedence. If the
- * source object for the copy is stored in Amazon S3 using SSE-C, you must provide the necessary
- * encryption information in your request so that Amazon S3 can decrypt the object for copying. For
- * more information about server-side encryption, see Using Server-Side
- * Encryption.
+ * When you perform a <code>CopyObject</code> operation, if you want to use a
+ * different type of encryption setting for the target object, you can use other
+ * appropriate encryption-related headers to encrypt the target object with a
+ * KMS key, an Amazon S3 managed key, or a customer-provided key. With server-side
+ * encryption, Amazon S3 encrypts your data as it writes your data to disks in its data
+ * centers and decrypts the data when you access it. If the encryption setting in
+ * your request is different from the default encryption configuration of the
+ * destination bucket, the encryption setting in your request takes precedence. If
+ * the source object for the copy is stored in Amazon S3 using SSE-C, you must provide the
+ * necessary encryption information in your request so that Amazon S3 can decrypt the
+ * object for copying. For more information about server-side encryption, see Using
+ * Server-Side Encryption.
  * If a target object uses SSE-KMS, you can enable an S3 Bucket Key for the
  * object. For more information, see Amazon S3 Bucket Keys in the
  * Amazon S3 User Guide.
- * When copying an object, you can optionally use headers to grant ACL-based permissions.
- * By default, all objects are private. Only the owner has full access control. When adding a
- * new object, you can grant permissions to individual Amazon Web Services accounts or to predefined groups
- * that are defined by Amazon S3. These permissions are then added to the ACL on the object. For more
- * information, see Access Control List (ACL) Overview and Managing ACLs Using the REST
- * API.
- * If the bucket that you're copying objects to uses the bucket owner enforced setting for
- * S3 Object Ownership, ACLs are disabled and no longer affect permissions. Buckets that use
- * this setting only accept <code>PUT</code> requests that don't specify an ACL or <code>PUT</code> requests that
- * specify bucket owner full control ACLs, such as the <code>bucket-owner-full-control</code>
- * canned ACL or an equivalent form of this ACL expressed in the XML format.
- * For more information, see Controlling ownership of
- * objects and disabling ACLs in the Amazon S3 User Guide.
+ * When copying an object, you can optionally use headers to grant ACL-based
+ * permissions. By default, all objects are private. Only the owner has full access
+ * control. When adding a new object, you can grant permissions to individual
+ * Amazon Web Services accounts or to predefined groups that are defined by Amazon S3. These permissions
+ * are then added to the ACL on the object. For more information, see Access Control
+ * List (ACL) Overview and Managing ACLs Using the REST
+ * API.
+ * If the bucket that you're copying objects to uses the bucket owner enforced
+ * setting for S3 Object Ownership, ACLs are disabled and no longer affect
+ * permissions. Buckets that use this setting only accept <code>PUT</code> requests
+ * that don't specify an ACL or <code>PUT</code> requests that specify bucket owner
+ * full control ACLs, such as the <code>bucket-owner-full-control</code> canned ACL
+ * or an equivalent form of this ACL expressed in the XML format.
+ * For more information, see Controlling
+ * ownership of objects and disabling ACLs in the
+ * Amazon S3 User Guide.
- * If your bucket uses the bucket owner enforced setting for Object Ownership, all
- * objects written to the bucket by any account will be owned by the bucket owner.
+ * If your bucket uses the bucket owner enforced setting for Object Ownership,
+ * all objects written to the bucket by any account will be owned by the bucket
+ * owner.
- * When copying an object, if it has a checksum, that checksum will be copied to the new
- * object by default. When you copy the object over, you can optionally specify a different
- * checksum algorithm to use with the <code>x-amz-checksum-algorithm</code> header.
+ * When copying an object, if it has a checksum, that checksum will be copied to
+ * the new object by default. When you copy the object over, you can optionally
+ * specify a different checksum algorithm to use with the
+ * <code>x-amz-checksum-algorithm</code> header.
You can use the CopyObject
action to change the storage class of an object
- * that is already stored in Amazon S3 by using the StorageClass
parameter. For more
- * information, see Storage Classes in the
- * Amazon S3 User Guide.
- * If the source object's storage class is GLACIER, you must restore a copy of
- * this object before you can use it as a source object for the copy operation. For
- * more information, see RestoreObject. For
- * more information, see Copying
- * Objects.
+ *You can use the CopyObject
action to change the storage class of
+ * an object that is already stored in Amazon S3 by using the StorageClass
+ * parameter. For more information, see Storage Classes in
+ * the Amazon S3 User Guide.
+ * If the source object's storage class is GLACIER or
+ * DEEP_ARCHIVE, or the object's storage class is
+ * INTELLIGENT_TIERING and its S3 Intelligent-Tiering access tier is
+ * Archive Access or Deep Archive Access, you must restore a copy of this object
+ * before you can use it as a source object for the copy operation. For more
+ * information, see RestoreObject. For
+ * more information, see Copying
+ * Objects.
*By default, x-amz-copy-source
header identifies the current version of an object
- * to copy. If the current version is a delete marker, Amazon S3 behaves as if the object was
- * deleted. To copy a different version, use the versionId
subresource.
If you enable versioning on the target bucket, Amazon S3 generates a unique version ID for
- * the object being copied. This version ID is different from the version ID of the source
- * object. Amazon S3 returns the version ID of the copied object in the
- * x-amz-version-id
response header in the response.
If you do not enable versioning or suspend it on the target bucket, the version ID that - * Amazon S3 generates is always null.
+ *By default, x-amz-copy-source
header identifies the current
+ * version of an object to copy. If the current version is a delete marker, Amazon S3
+ * behaves as if the object was deleted. To copy a different version, use the
+ * versionId
subresource.
If you enable versioning on the target bucket, Amazon S3 generates a unique version
+ * ID for the object being copied. This version ID is different from the version ID
+ * of the source object. Amazon S3 returns the version ID of the copied object in the
+ * x-amz-version-id
response header in the response.
If you do not enable versioning or suspend it on the target bucket, the version + * ID that Amazon S3 generates is always null.
*The following operations are related to CopyObject
:
If you want to create an Amazon S3 on Outposts bucket, see Create Bucket.
*By default, the bucket is created in the US East (N. Virginia) Region. You can
- * optionally specify a Region in the request body. You might choose a Region to optimize
- * latency, minimize costs, or address regulatory requirements. For example, if you reside in
- * Europe, you will probably find it advantageous to create buckets in the Europe (Ireland)
- * Region. For more information, see Accessing a
+ * optionally specify a Region in the request body. To constrain the bucket creation to a
+ * specific Region, you can use the
+ * <code>LocationConstraint</code>
+ * condition key. You might choose a Region to
+ * optimize latency, minimize costs, or address regulatory requirements. For example, if you
+ * reside in Europe, you will probably find it advantageous to create buckets in the Europe
+ * (Ireland) Region. For more information, see Accessing a
* bucket.
If you send your create bucket request to the s3.amazonaws.com
endpoint,
- * the request goes to the us-east-1
Region. Accordingly, the signature calculations in
- * Signature Version 4 must use us-east-1
as the Region, even if the location constraint in
- * the request specifies another Region where the bucket is to be created. If you create a
- * bucket in a Region other than US East (N. Virginia), your application must be able to
- * handle 307 redirect. For more information, see Virtual hosting of
- * buckets.
us-east-1
Region. Accordingly, the signature
+ * calculations in Signature Version 4 must use us-east-1
as the Region, even
+ * if the location constraint in the request specifies another Region where the bucket is
+ * to be created. If you create a bucket in a Region other than US East (N. Virginia), your
+ * application must be able to handle 307 redirect. For more information, see Virtual hosting of
+ * buckets.
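To make the Region discussion concrete, a small sketch of creating a bucket outside us-east-1 with the v3 client; the bucket name and Region are illustrative, and LocationConstraint is omitted when the target Region is us-east-1.

```ts
import { S3Client, CreateBucketCommand } from "@aws-sdk/client-s3";

// The client Region and the LocationConstraint should agree.
const client = new S3Client({ region: "eu-west-1" });

const { Location } = await client.send(
  new CreateBucketCommand({
    Bucket: "example-bucket-eu",
    CreateBucketConfiguration: { LocationConstraint: "eu-west-1" },
  })
);
console.log(Location);
```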
* In addition to s3:CreateBucket
, the following permissions are required when
- * your CreateBucket
request includes specific headers:
In addition to s3:CreateBucket
, the following permissions are
+ * required when your CreateBucket
request includes specific
+ * headers:
- * Access control lists (ACLs) - If your CreateBucket
request
- * specifies access control list (ACL) permissions and the ACL is public-read, public-read-write,
- * authenticated-read, or if you specify access permissions explicitly through any other
- * ACL, both s3:CreateBucket
and s3:PutBucketAcl
permissions
- * are needed. If the ACL for the CreateBucket
request is private or if the request doesn't
- * specify any ACLs, only s3:CreateBucket
permission is needed.
CreateBucket
request specifies access control list (ACL)
+ * permissions and the ACL is public-read, public-read-write,
+ * authenticated-read, or if you specify access permissions explicitly through
+ * any other ACL, both s3:CreateBucket
and
+ * s3:PutBucketAcl
permissions are needed. If the ACL for the
+ * CreateBucket
request is private or if the request doesn't
+ * specify any ACLs, only s3:CreateBucket
permission is needed.
+ *
*
- * Object Lock - If ObjectLockEnabledForBucket
is set to true in your
- * CreateBucket
request,
- * s3:PutBucketObjectLockConfiguration
and
- * s3:PutBucketVersioning
permissions are required.
ObjectLockEnabledForBucket
is set to true in your
+ * CreateBucket
request,
+ * s3:PutBucketObjectLockConfiguration
and
+ * s3:PutBucketVersioning
permissions are required.
*
- * S3 Object Ownership - If your CreateBucket
request includes the x-amz-object-ownership
header, then the
- * s3:PutBucketOwnershipControls
permission is required. By default, ObjectOwnership
is set to BucketOWnerEnforced
and ACLs are disabled. We recommend keeping
- * ACLs disabled, except in uncommon use cases where you must control access for each object individually. If you want to change the ObjectOwnership
setting, you can use the
- * x-amz-object-ownership
header in your CreateBucket
request to set the ObjectOwnership
setting of your choice.
- * For more information about S3 Object Ownership, see Controlling object
- * ownership in the Amazon S3 User Guide.
CreateBucket
request includes the
+ * x-amz-object-ownership
header, then the
+ * s3:PutBucketOwnershipControls
permission is required. By
+ * default, ObjectOwnership
is set to
+ * BucketOWnerEnforced
and ACLs are disabled. We recommend
+ * keeping ACLs disabled, except in uncommon use cases where you must control
+ * access for each object individually. If you want to change the
+ * ObjectOwnership
setting, you can use the
+ * x-amz-object-ownership
header in your
+ * CreateBucket
request to set the ObjectOwnership
+ * setting of your choice. For more information about S3 Object Ownership, see
+ * Controlling
+ * object ownership in the
+ * Amazon S3 User Guide.
*
- * S3 Block Public Access - If your specific use case requires granting public access to your S3 resources, you can disable Block Public Access. You can create a new bucket with Block Public Access enabled, then separately call the
+ * S3 Block Public Access - If your
+ * specific use case requires granting public access to your S3 resources, you
+ * can disable Block Public Access. You can create a new bucket with Block
+ * Public Access enabled, then separately call the
* DeletePublicAccessBlock
* API. To use this operation, you must have the
- * s3:PutBucketPublicAccessBlock
permission. By default, all Block
- * Public Access settings are enabled for new buckets. To avoid inadvertent exposure of
- * your resources, we recommend keeping the S3 Block Public Access settings enabled. For more information about S3 Block Public Access, see Blocking public
- * access to your Amazon S3 storage in the Amazon S3 User Guide.
s3:PutBucketPublicAccessBlock
permission. By default, all
+ * Block Public Access settings are enabled for new buckets. To avoid
+ * inadvertent exposure of your resources, we recommend keeping the S3 Block
+ * Public Access settings enabled. For more information about S3 Block Public
+ * Access, see Blocking
+ * public access to your Amazon S3 storage in the
+ * Amazon S3 User Guide.
* If your CreateBucket
request sets BucketOwnerEnforced
for Amazon S3 Object Ownership
- * and specifies a bucket ACL that provides access to an external Amazon Web Services account, your request fails with a 400
error and returns the InvalidBucketAcLWithObjectOwnership
error code. For more information,
- * see Setting Object
- * Ownership on an existing bucket in the Amazon S3 User Guide.
If your CreateBucket
request sets BucketOwnerEnforced
for
+ * Amazon S3 Object Ownership and specifies a bucket ACL that provides access to an external
+ * Amazon Web Services account, your request fails with a 400
error and returns the
+ * InvalidBucketAclWithObjectOwnership
error code. For more information,
+ * see Setting Object
+ * Ownership on an existing bucket in the Amazon S3 User Guide.
+ *
The following operations are related to CreateBucket
:
If you have configured a lifecycle rule to abort incomplete multipart uploads, the * upload must complete within the number of days specified in the bucket lifecycle * configuration. Otherwise, the incomplete multipart upload becomes eligible for an abort - * action and Amazon S3 aborts the multipart upload. For more information, see Aborting Incomplete Multipart Uploads Using a Bucket Lifecycle Configuration.
+ * action and Amazon S3 aborts the multipart upload. For more information, see Aborting Incomplete Multipart Uploads Using a Bucket Lifecycle + * Configuration. *For information about the permissions required to use the multipart upload API, see * Multipart * Upload and Permissions.
@@ -126,19 +127,19 @@ export interface CreateMultipartUploadCommandOutput extends CreateMultipartUploa * *Amazon S3 encrypts data - * by using server-side encryption with an Amazon S3 managed key (SSE-S3) by default. Server-side encryption is for data encryption at rest. Amazon S3 encrypts - * your data as it writes it to disks in its data centers and decrypts it when you - * access it. You can request that Amazon S3 encrypts - * data at rest by using server-side encryption with other key options. The option you use depends on + *
Amazon S3 encrypts data by using server-side encryption with an Amazon S3 managed key + * (SSE-S3) by default. Server-side encryption is for data encryption at rest. Amazon S3 + * encrypts your data as it writes it to disks in its data centers and decrypts it + * when you access it. You can request that Amazon S3 encrypts data at rest by using + * server-side encryption with other key options. The option you use depends on * whether you want to use KMS keys (SSE-KMS) or provide your own encryption keys * (SSE-C).
*Use KMS keys (SSE-KMS) that include the Amazon Web Services managed key
- * (aws/s3
) and KMS customer managed keys stored in Key Management Service (KMS) – If you
- * want Amazon Web Services to manage the keys used to encrypt data, specify the following
- * headers in the request.
aws/s3
) and KMS customer managed keys stored in Key Management Service (KMS) –
+ * If you want Amazon Web Services to manage the keys used to encrypt data, specify the
+ * following headers in the request.
* @@ -163,9 +164,10 @@ export interface CreateMultipartUploadCommandOutput extends CreateMultipartUploa * protect the data.
* *All GET
and PUT
requests for an object protected
- * by KMS fail if you don't make them by using Secure Sockets Layer (SSL),
- * Transport Layer Security (TLS), or Signature Version 4.
All GET
and PUT
requests for an object
+ * protected by KMS fail if you don't make them by using Secure Sockets
+ * Layer (SSL), Transport Layer Security (TLS), or Signature Version
+ * 4.
For more information about server-side encryption with KMS keys
* (SSE-KMS), see Protecting Data
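A hedged sketch of starting a multipart upload that requests SSE-KMS, matching the header discussion above; the bucket, key, and KMS key ARN are placeholders.

```ts
import { S3Client, CreateMultipartUploadCommand } from "@aws-sdk/client-s3";

const client = new S3Client({ region: "us-east-1" });

// ServerSideEncryption and SSEKMSKeyId correspond to the
// x-amz-server-side-encryption* request headers described above.
const { UploadId } = await client.send(
  new CreateMultipartUploadCommand({
    Bucket: "example-bucket",
    Key: "large-object.bin",
    ServerSideEncryption: "aws:kms",
    SSEKMSKeyId: "arn:aws:kms:us-east-1:111122223333:key/example-key-id",
  })
);
console.log(UploadId);
```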
diff --git a/clients/client-s3/src/commands/DeleteBucketEncryptionCommand.ts b/clients/client-s3/src/commands/DeleteBucketEncryptionCommand.ts
index 58b85e63700e8..602fb80d9ef3a 100644
--- a/clients/client-s3/src/commands/DeleteBucketEncryptionCommand.ts
+++ b/clients/client-s3/src/commands/DeleteBucketEncryptionCommand.ts
@@ -36,9 +36,9 @@ export interface DeleteBucketEncryptionCommandOutput extends __MetadataBearer {}
/**
* @public
- * This implementation of the DELETE action resets the default encryption for the
- * bucket as server-side encryption with Amazon S3 managed keys (SSE-S3). For information about the
- * bucket default encryption feature, see Amazon S3 Bucket Default Encryption
+ * This implementation of the DELETE action resets the default encryption for the bucket as
+ * server-side encryption with Amazon S3 managed keys (SSE-S3). For information about the bucket
+ * default encryption feature, see Amazon S3 Bucket Default Encryption
* in the Amazon S3 User Guide. To use this operation, you must have permissions to perform the
* s3:PutEncryptionConfiguration
action. The bucket owner has this permission
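For reference, resetting a bucket's default encryption back to SSE-S3 is a single call in the v3 client; the bucket name below is illustrative.

```ts
import { S3Client, DeleteBucketEncryptionCommand } from "@aws-sdk/client-s3";

const client = new S3Client({ region: "us-east-1" });

// Requires the s3:PutEncryptionConfiguration permission on the bucket.
await client.send(
  new DeleteBucketEncryptionCommand({ Bucket: "example-bucket" })
);
```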
diff --git a/clients/client-s3/src/commands/DeleteBucketPolicyCommand.ts b/clients/client-s3/src/commands/DeleteBucketPolicyCommand.ts
index b990f01d43e4b..33939ccaff43a 100644
--- a/clients/client-s3/src/commands/DeleteBucketPolicyCommand.ts
+++ b/clients/client-s3/src/commands/DeleteBucketPolicyCommand.ts
@@ -50,8 +50,9 @@ export interface DeleteBucketPolicyCommandOutput extends __MetadataBearer {}
* buckets, the root principal in a bucket owner's Amazon Web Services account can perform the
* GetBucketPolicy
, PutBucketPolicy
, and
* DeleteBucketPolicy
API actions, even if their bucket policy explicitly
- * denies the root principal's access. Bucket owner root principals can only be blocked from performing
- * these API actions by VPC endpoint policies and Amazon Web Services Organizations policies.
For more information about bucket policies, see Using Bucket Policies and * UserPolicies.
diff --git a/clients/client-s3/src/commands/DeleteObjectsCommand.ts b/clients/client-s3/src/commands/DeleteObjectsCommand.ts index 1557cd4336722..33df1555e4b1b 100644 --- a/clients/client-s3/src/commands/DeleteObjectsCommand.ts +++ b/clients/client-s3/src/commands/DeleteObjectsCommand.ts @@ -58,7 +58,8 @@ export interface DeleteObjectsCommandOutput extends DeleteObjectsOutput, __Metad * provide an invalid token, whether there are versioned keys in the request or not, the * entire Multi-Object Delete request will fail. For information about MFA Delete, see MFA * Delete. - *Finally, the Content-MD5 header is required for all Multi-Object Delete requests. Amazon S3 uses the header value to ensure that your request body has not been altered in + *
Finally, the Content-MD5 header is required for all Multi-Object Delete requests. Amazon S3 + * uses the header value to ensure that your request body has not been altered in * transit.
*The following operations are related to DeleteObjects
:
For more information about transfer acceleration, see Transfer Acceleration in * the Amazon S3 User Guide.
- *The following operations are related to GetBucketAccelerateConfiguration
:
The following operations are related to
+ * GetBucketAccelerateConfiguration
:
diff --git a/clients/client-s3/src/commands/GetBucketAnalyticsConfigurationCommand.ts b/clients/client-s3/src/commands/GetBucketAnalyticsConfigurationCommand.ts index f7406794603e0..7d4269457a583 100644 --- a/clients/client-s3/src/commands/GetBucketAnalyticsConfigurationCommand.ts +++ b/clients/client-s3/src/commands/GetBucketAnalyticsConfigurationCommand.ts @@ -51,7 +51,8 @@ export interface GetBucketAnalyticsConfigurationCommandOutput * Amazon S3 User Guide.
*For information about Amazon S3 analytics feature, see Amazon S3 Analytics – Storage Class * Analysis in the Amazon S3 User Guide.
- *The following operations are related to GetBucketAnalyticsConfiguration
:
The following operations are related to
+ * GetBucketAnalyticsConfiguration
:
diff --git a/clients/client-s3/src/commands/GetBucketPolicyCommand.ts b/clients/client-s3/src/commands/GetBucketPolicyCommand.ts
index b833b3597b5c8..d1bd4e8aaaeff 100644
--- a/clients/client-s3/src/commands/GetBucketPolicyCommand.ts
+++ b/clients/client-s3/src/commands/GetBucketPolicyCommand.ts
@@ -47,10 +47,11 @@ export interface GetBucketPolicyCommandOutput extends GetBucketPolicyOutput, __M
* To ensure that bucket owners don't inadvertently lock themselves out of their own
* buckets, the root principal in a bucket owner's Amazon Web Services account can perform the
- * GetBucketPolicy
, PutBucketPolicy
, and
- * DeleteBucketPolicy
API actions, even if their bucket policy explicitly
- * denies the root principal's access. Bucket owner root principals can only be blocked from performing
- * these API actions by VPC endpoint policies and Amazon Web Services Organizations policies.GetBucketPolicy
, PutBucketPolicy
, and
+ * DeleteBucketPolicy
API actions, even if their bucket policy explicitly
+ * denies the root principal's access. Bucket owner root principals can only be blocked
+ * from performing these API actions by VPC endpoint policies and Amazon Web Services Organizations
+ * policies.
To use this API operation against an access point, provide the alias of the access point in place of the bucket name.
*To use this API operation against an Object Lambda access point, provide the alias of the Object Lambda access point in place of the bucket name. diff --git a/clients/client-s3/src/commands/GetObjectAttributesCommand.ts b/clients/client-s3/src/commands/GetObjectAttributesCommand.ts index 4e62067e9af2f..d22814720e262 100644 --- a/clients/client-s3/src/commands/GetObjectAttributesCommand.ts +++ b/clients/client-s3/src/commands/GetObjectAttributesCommand.ts @@ -125,22 +125,25 @@ export interface GetObjectAttributesCommandOutput extends GetObjectAttributesOut *
The permissions that you need to use this operation depend on whether the bucket is
- * versioned. If the bucket is versioned, you need both the s3:GetObjectVersion
- * and s3:GetObjectVersionAttributes
permissions for this operation. If the
- * bucket is not versioned, you need the s3:GetObject
and
- * s3:GetObjectAttributes
permissions. For more information, see Specifying
- * Permissions in a Policy in the Amazon S3 User Guide. If the
- * object that you request does not exist, the error Amazon S3 returns depends on whether you also
- * have the s3:ListBucket
permission.
The permissions that you need to use this operation depend on whether the
+ * bucket is versioned. If the bucket is versioned, you need both the
+ * s3:GetObjectVersion
and s3:GetObjectVersionAttributes
+ * permissions for this operation. If the bucket is not versioned, you need the
+ * s3:GetObject
and s3:GetObjectAttributes
permissions.
+ * For more information, see Specifying Permissions in
+ * a Policy in the Amazon S3 User Guide. If the object
+ * that you request does not exist, the error Amazon S3 returns depends on whether you
+ * also have the s3:ListBucket
permission.
If you have the s3:ListBucket
permission on the bucket, Amazon S3 returns
- * an HTTP status code 404 Not Found
("no such key") error.
If you have the s3:ListBucket
permission on the bucket, Amazon S3
+ * returns an HTTP status code 404 Not Found
("no such key")
+ * error.
If you don't have the s3:ListBucket
permission, Amazon S3 returns an HTTP
- * status code 403 Forbidden
("access denied") error.
If you don't have the s3:ListBucket
permission, Amazon S3 returns
+ * an HTTP status code 403 Forbidden
("access denied")
+ * error.
You need the relevant read object (or version) permission for this operation. For more
- * information, see Specifying Permissions in a
- * Policy. If the object that you request doesn’t exist, the error that Amazon S3 returns depends
- * on whether you also have the s3:ListBucket
permission.
You need the relevant read object (or version) permission for this operation.
+ * For more information, see Specifying Permissions in
+ * a Policy. If the object that you request doesn’t exist, the error that
+ * Amazon S3 returns depends on whether you also have the s3:ListBucket
+ * permission.
If you have the s3:ListBucket
permission on the bucket, Amazon S3
- * returns an HTTP status code 404 (Not Found) error.
If you don’t have the s3:ListBucket
permission, Amazon S3 returns an
- * HTTP status code 403 ("access denied") error.
By default, the GET
action returns the current version of an object. To return a
- * different version, use the versionId
subresource.
By default, the GET
action returns the current version of an
+ * object. To return a different version, use the versionId
+ * subresource.
If you supply a versionId
, you need the
- * s3:GetObjectVersion
permission to access a specific version of an
- * object. If you request a specific version, you do not need to have the
- * s3:GetObject
permission. If you request the current version
- * without a specific version ID, only s3:GetObject
permission is
- * required. s3:GetObjectVersion
permission won't be required.
s3:GetObjectVersion
permission to access a specific
+ * version of an object. If you request a specific version, you do not need
+ * to have the s3:GetObject
permission. If you request the
+ * current version without a specific version ID, only
+ * s3:GetObject
permission is required.
+ * s3:GetObjectVersion
permission won't be required.
* If the current version of the object is a delete marker, Amazon S3 behaves as if the
- * object was deleted and includes x-amz-delete-marker: true
in the
- * response.
If the current version of the object is a delete marker, Amazon S3 behaves
+ * as if the object was deleted and includes x-amz-delete-marker:
+ * true
in the response.
There are times when you want to override certain response header values in a GET
- * response. For example, you might override the Content-Disposition
response
- * header value in your GET
request.
There are times when you want to override certain response header values in a
+ * GET
response. For example, you might override the
+ * Content-Disposition
response header value in your GET
+ * request.
You can override values for a set of response headers using the following query
- * parameters. These response header values are sent only on a successful request, that is,
- * when status code 200 OK is returned. The set of headers you can override using these
- * parameters is a subset of the headers that Amazon S3 accepts when you create an object. The
- * response headers that you can override for the GET
response are Content-Type
,
- * Content-Language
, Expires
, Cache-Control
,
- * Content-Disposition
, and Content-Encoding
. To override these
- * header values in the GET
response, you use the following request parameters.
GET
response are Content-Type
,
+ * Content-Language
, Expires
,
+ * Cache-Control
, Content-Disposition
, and
+ * Content-Encoding
. To override these header values in the
+ * GET
response, you use the following request parameters.
* You must sign the request, either using an Authorization header or a presigned URL, - * when using these parameters. They cannot be used with an unsigned (anonymous) - * request.
+ *You must sign the request, either using an Authorization header or a + * presigned URL, when using these parameters. They cannot be used with an + * unsigned (anonymous) request.
*If both of the If-Match
and If-Unmodified-Since
headers are
- * present in the request as follows: If-Match
condition evaluates to
- * true
, and; If-Unmodified-Since
condition evaluates to
- * false
; then, S3 returns 200 OK and the data requested.
If both of the If-None-Match
and If-Modified-Since
headers are
- * present in the request as follows: If-None-Match
condition evaluates to
- * false
, and; If-Modified-Since
condition evaluates to
- * true
; then, S3 returns 304 Not Modified response code.
If both of the If-Match
and If-Unmodified-Since
+ * headers are present in the request as follows: If-Match
condition
+ * evaluates to true
, and; If-Unmodified-Since
condition
+ * evaluates to false
; then, S3 returns 200 OK and the data requested.
If both of the If-None-Match
and If-Modified-Since
+ * headers are present in the request as follows: If-None-Match
+ * condition evaluates to false
, and; If-Modified-Since
+ * condition evaluates to true
; then, S3 returns 304 Not Modified
+ * response code.
For more information about conditional requests, see RFC 7232.
*s3:ListBucket
action. The bucket owner has this permission by default and
* can grant this permission to others. For more information about permissions, see Permissions Related to Bucket Subresource Operations and Managing
* Access Permissions to Your Amazon S3 Resources.
- * To use this API operation against an access point, you must provide the alias of the access point in place of the - * bucket name or specify the access point ARN. When using the access point ARN, you must direct requests to - * the access point hostname. The access point hostname takes the form + *
To use this API operation against an access point, you must provide the alias of the access point in + * place of the bucket name or specify the access point ARN. When using the access point ARN, you must direct + * requests to the access point hostname. The access point hostname takes the form * AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. * When using the Amazon Web Services SDKs, you provide the ARN in place of the bucket name. For more * information, see Using access points.
diff --git a/clients/client-s3/src/commands/HeadObjectCommand.ts b/clients/client-s3/src/commands/HeadObjectCommand.ts index 5122784d92df6..592ccb7afa3a3 100644 --- a/clients/client-s3/src/commands/HeadObjectCommand.ts +++ b/clients/client-s3/src/commands/HeadObjectCommand.ts @@ -42,9 +42,9 @@ export interface HeadObjectCommandOutput extends HeadObjectOutput, __MetadataBea /** * @public - *The HEAD
action retrieves metadata from an object without returning the object itself.
- * This action is useful if you're only interested in an object's metadata. To use HEAD
, you
- * must have READ access to the object.
The HEAD
action retrieves metadata from an object without returning the
+ * object itself. This action is useful if you're only interested in an object's metadata. To
+ * use HEAD
, you must have READ access to the object.
A HEAD
request has the same options as a GET
action on an
* object. The response is identical to the GET
response except that there is no
* response body. Because of this, if the HEAD
request generates an error, it
@@ -133,18 +133,18 @@ export interface HeadObjectCommandOutput extends HeadObjectOutput, __MetadataBea
*
You need the relevant read object (or version) permission for this operation. For more - * information, see Actions, resources, and condition keys for Amazon S3. - * If the object you request doesn't exist, the error that Amazon S3 returns depends - * on whether you also have the s3:ListBucket permission.
+ *You need the relevant read object (or version) permission for this operation. + * For more information, see Actions, resources, and condition + * keys for Amazon S3. If the object you request doesn't exist, the error that + * Amazon S3 returns depends on whether you also have the s3:ListBucket permission.
*If you have the s3:ListBucket
permission on the bucket, Amazon S3 returns
- * an HTTP status code 404 error.
If you have the s3:ListBucket
permission on the bucket, Amazon S3
+ * returns an HTTP status code 404 error.
If you don’t have the s3:ListBucket
permission, Amazon S3 returns an HTTP
- * status code 403 error.
If you don’t have the s3:ListBucket
permission, Amazon S3 returns
+ * an HTTP status code 403 error.
You can set access permissions by using one of the following methods:
*Specify a canned ACL with the x-amz-acl
request header. Amazon S3 supports
- * a set of predefined ACLs, known as canned ACLs. Each canned ACL
- * has a predefined set of grantees and permissions. Specify the canned ACL name as the
- * value of x-amz-acl
. If you use this header, you cannot use other access
- * control-specific headers in your request. For more information, see Canned
- * ACL.
Specify a canned ACL with the x-amz-acl
request header. Amazon S3
+ * supports a set of predefined ACLs, known as canned
+ * ACLs. Each canned ACL has a predefined set of grantees and
+ * permissions. Specify the canned ACL name as the value of
+ * x-amz-acl
. If you use this header, you cannot use other
+ * access control-specific headers in your request. For more information, see
+ * Canned
+ * ACL.
Specify access permissions explicitly with the x-amz-grant-read
,
- * x-amz-grant-read-acp
, x-amz-grant-write-acp
, and
- * x-amz-grant-full-control
headers. When using these headers, you
- * specify explicit access permissions and grantees (Amazon Web Services accounts or Amazon S3 groups) who
- * will receive the permission. If you use these ACL-specific headers, you cannot use
- * the x-amz-acl
header to set a canned ACL. These parameters map to the
- * set of permissions that Amazon S3 supports in an ACL. For more information, see Access Control
- * List (ACL) Overview.
You specify each grantee as a type=value pair, where the type is one of the - * following:
+ *Specify access permissions explicitly with the
+ * x-amz-grant-read
, x-amz-grant-read-acp
,
+ * x-amz-grant-write-acp
, and
+ * x-amz-grant-full-control
headers. When using these headers,
+ * you specify explicit access permissions and grantees (Amazon Web Services accounts or Amazon S3
+ * groups) who will receive the permission. If you use these ACL-specific
+ * headers, you cannot use the x-amz-acl
header to set a canned
+ * ACL. These parameters map to the set of permissions that Amazon S3 supports in an
+ * ACL. For more information, see Access Control List (ACL)
+ * Overview.
You specify each grantee as a type=value pair, where the type is one of + * the following:
*
- * id
– if the value specified is the canonical user ID of an
- * Amazon Web Services account
id
– if the value specified is the canonical user ID
+ * of an Amazon Web Services account
* @@ -102,8 +106,8 @@ export interface PutBucketAclCommandOutput extends __MetadataBearer {} *
- * emailAddress
– if the value specified is the email address of
- * an Amazon Web Services account
emailAddress
– if the value specified is the email
+ * address of an Amazon Web Services account
* Using email addresses to specify a grantee is only supported in the following Amazon Web Services Regions:
*For example, the following x-amz-grant-write
header grants create,
- * overwrite, and delete objects permission to LogDelivery group predefined by Amazon S3 and
- * two Amazon Web Services accounts identified by their email addresses.
For example, the following x-amz-grant-write
header grants
+ * create, overwrite, and delete objects permission to LogDelivery group
+ * predefined by Amazon S3 and two Amazon Web Services accounts identified by their email
+ * addresses.
- * x-amz-grant-write: uri="http://acs.amazonaws.com/groups/s3/LogDelivery",
- * id="111122223333", id="555566667777"
+ * x-amz-grant-write:
+ * uri="http://acs.amazonaws.com/groups/s3/LogDelivery", id="111122223333",
+ * id="555566667777"
*
You can use either a canned ACL or specify access permissions explicitly. You cannot do - * both.
+ *You can use either a canned ACL or specify access permissions explicitly. You + * cannot do both.
*You can specify the person (grantee) to whom you're assigning access rights (using - * request elements) in the following ways:
+ *You can specify the person (grantee) to whom you're assigning access rights + * (using request elements) in the following ways:
*By the person's ID:
*
*
+ * xsi:type="CanonicalUser">
DisplayName is optional and ignored in the request
*By URI:
*
*
+ * xsi:type="Group">
By Email address:
*
*
+ * xsi:type="AmazonCustomerByEmail">
The grantee is resolved to the CanonicalUser and, in a response to a GET Object - * acl request, appears as the CanonicalUser.
+ *The grantee is resolved to the CanonicalUser and, in a response to a GET + * Object acl request, appears as the CanonicalUser.
*Using email addresses to specify a grantee is only supported in the following Amazon Web Services Regions:
*The following operations are related to PutBucketAnalyticsConfiguration
:
The following operations are related to
+ * PutBucketAnalyticsConfiguration
:
diff --git a/clients/client-s3/src/commands/PutBucketEncryptionCommand.ts b/clients/client-s3/src/commands/PutBucketEncryptionCommand.ts index 79d41d5e7f779..78cb19b267962 100644 --- a/clients/client-s3/src/commands/PutBucketEncryptionCommand.ts +++ b/clients/client-s3/src/commands/PutBucketEncryptionCommand.ts @@ -41,14 +41,10 @@ export interface PutBucketEncryptionCommandOutput extends __MetadataBearer {} * and Amazon S3 Bucket Keys for an existing bucket.
*By default, all buckets have a default encryption configuration that uses server-side * encryption with Amazon S3 managed keys (SSE-S3). You can optionally configure default encryption - * for a bucket by using server-side encryption with Key Management Service (KMS) keys (SSE-KMS), - * dual-layer server-side encryption with Amazon Web Services KMS keys (DSSE-KMS), or server-side - * encryption with customer-provided keys (SSE-C). If you specify default encryption by using - * SSE-KMS, you can also configure Amazon S3 Bucket Keys. For information about bucket default - * encryption, see Amazon S3 bucket default encryption - * in the Amazon S3 User Guide. For more information about S3 Bucket Keys, see - * Amazon S3 Bucket - * Keys in the Amazon S3 User Guide.
+ * for a bucket by using server-side encryption with Key Management Service (KMS) keys (SSE-KMS) or + * dual-layer server-side encryption with Amazon Web Services KMS keys (DSSE-KMS). If you specify default encryption by using + * SSE-KMS, you can also configure Amazon S3 Bucket + * Keys. If you use PutBucketEncryption to set your default bucket encryption to SSE-KMS, you should verify that your KMS key ID is correct. Amazon S3 does not validate the KMS key ID provided in PutBucketEncryption requests. *This action requires Amazon Web Services Signature Version 4. For more information, see * Authenticating Requests (Amazon Web Services Signature Version 4).
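A hedged sketch of setting an SSE-KMS default with a Bucket Key via the v3 client; the bucket name and KMS key ARN are placeholders, and, as the note above says, S3 will not validate the key ID for you.

```ts
import { S3Client, PutBucketEncryptionCommand } from "@aws-sdk/client-s3";

const client = new S3Client({ region: "us-east-1" });

await client.send(
  new PutBucketEncryptionCommand({
    Bucket: "example-bucket",
    ServerSideEncryptionConfiguration: {
      Rules: [
        {
          ApplyServerSideEncryptionByDefault: {
            SSEAlgorithm: "aws:kms",
            // Double-check this ID: S3 does not validate it in this request.
            KMSMasterKeyID: "arn:aws:kms:us-east-1:111122223333:key/example-key-id",
          },
          BucketKeyEnabled: true,
        },
      ],
    },
  })
);
```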
diff --git a/clients/client-s3/src/commands/PutBucketIntelligentTieringConfigurationCommand.ts b/clients/client-s3/src/commands/PutBucketIntelligentTieringConfigurationCommand.ts index 64c5b21139da1..10fb9d79ac785 100644 --- a/clients/client-s3/src/commands/PutBucketIntelligentTieringConfigurationCommand.ts +++ b/clients/client-s3/src/commands/PutBucketIntelligentTieringConfigurationCommand.ts @@ -69,7 +69,8 @@ export interface PutBucketIntelligentTieringConfigurationCommandOutput extends _ * or Deep Archive Access tier. * *
- * PutBucketIntelligentTieringConfiguration
has the following special errors:
PutBucketIntelligentTieringConfiguration
has the following special
+ * errors:
*
- * Cause: You are not the owner of the specified bucket,
- * or you do not have the s3:PutIntelligentTieringConfiguration
- * bucket permission to set the configuration on the bucket.
s3:PutIntelligentTieringConfiguration
bucket
+ * permission to set the configuration on the bucket.
* To use this operation, you must have permission to perform the
- * s3:PutInventoryConfiguration
action. The bucket owner has this permission
- * by default and can grant this permission to others.
The s3:PutInventoryConfiguration
permission allows a user to create an
- * S3
- * Inventory report that includes all object metadata fields available and to
- * specify the destination bucket to store the inventory. A user with read access to objects
- * in the destination bucket can also access all object metadata fields that are available in
- * the inventory report.
s3:PutInventoryConfiguration
action. The bucket owner has this
+ * permission by default and can grant this permission to others.
+ * The s3:PutInventoryConfiguration
permission allows a user to
+ * create an S3 Inventory
+ * report that includes all object metadata fields available and to specify the
+ * destination bucket to store the inventory. A user with read access to objects in
+ * the destination bucket can also access all object metadata fields that are
+ * available in the inventory report.
To restrict access to an inventory report, see Restricting access to an Amazon S3 Inventory report in the - * Amazon S3 User Guide. For more information about the metadata fields - * available in S3 Inventory, see Amazon S3 - * Inventory lists in the Amazon S3 User Guide. For more - * information about permissions, see Permissions related to bucket subresource operations and Identity and - * access management in Amazon S3 in the Amazon S3 User Guide.
+ * Amazon S3 User Guide. For more information about the metadata + * fields available in S3 Inventory, see Amazon S3 Inventory lists in the Amazon S3 User Guide. For + * more information about permissions, see Permissions related to bucket subresource operations and Identity and access management in Amazon S3 in the + * Amazon S3 User Guide. *@@ -98,17 +97,18 @@ export interface PutBucketInventoryConfigurationCommandOutput extends __Metadata * Code: TooManyConfigurations
** Cause: You are attempting to create a new configuration - * but have already reached the 1,000-configuration limit.
+ * but have already reached the 1,000-configuration limit. *
- * Cause: You are not the owner of the specified bucket,
- * or you do not have the s3:PutInventoryConfiguration
bucket
- * permission to set the configuration on the bucket.
s3:PutInventoryConfiguration
bucket permission to
+ * set the configuration on the bucket.
* The following operations are related to PutBucketInventoryConfiguration
:
The following operations are related to
+ * PutBucketInventoryConfiguration
:
diff --git a/clients/client-s3/src/commands/PutBucketLifecycleConfigurationCommand.ts b/clients/client-s3/src/commands/PutBucketLifecycleConfigurationCommand.ts index 00b68bc5b398f..e056638c3f2e4 100644 --- a/clients/client-s3/src/commands/PutBucketLifecycleConfigurationCommand.ts +++ b/clients/client-s3/src/commands/PutBucketLifecycleConfigurationCommand.ts @@ -56,40 +56,43 @@ export interface PutBucketLifecycleConfigurationCommandOutput extends __Metadata *
You specify the lifecycle configuration in your request body. The lifecycle - * configuration is specified as XML consisting of one or more rules. An Amazon S3 Lifecycle - * configuration can have up to 1,000 rules. This limit is not adjustable. Each rule consists - * of the following:
+ * configuration is specified as XML consisting of one or more rules. An Amazon S3 + * Lifecycle configuration can have up to 1,000 rules. This limit is not adjustable. + * Each rule consists of the following: *A filter identifying a subset of objects to which the rule applies. The filter can - * be based on a key name prefix, object tags, or a combination of both.
+ *A filter identifying a subset of objects to which the rule applies. The + * filter can be based on a key name prefix, object tags, or a combination of + * both.
*A status indicating whether the rule is in effect.
*One or more lifecycle transition and expiration actions that you want Amazon S3 to - * perform on the objects identified by the filter. If the state of your bucket is - * versioning-enabled or versioning-suspended, you can have many versions of the same - * object (one current version and zero or more noncurrent versions). Amazon S3 provides - * predefined actions that you can specify for current and noncurrent object - * versions.
+ *One or more lifecycle transition and expiration actions that you want + * Amazon S3 to perform on the objects identified by the filter. If the state of + * your bucket is versioning-enabled or versioning-suspended, you can have many + * versions of the same object (one current version and zero or more noncurrent + * versions). Amazon S3 provides predefined actions that you can specify for current + * and noncurrent object versions.
*For more information, see Object Lifecycle Management - * and Lifecycle Configuration Elements.
+ *For more information, see Object Lifecycle + * Management and Lifecycle Configuration + * Elements.
*By default, all Amazon S3 resources are private, including buckets, objects, and related
- * subresources (for example, lifecycle configuration and website configuration). Only the
- * resource owner (that is, the Amazon Web Services account that created it) can access the resource. The
- * resource owner can optionally grant access permissions to others by writing an access
- * policy. For this operation, a user must get the s3:PutLifecycleConfiguration
- * permission.
You can also explicitly deny permissions. An explicit deny also supersedes any other - * permissions. If you want to block users or accounts from removing or deleting objects from - * your bucket, you must deny them permissions for the following actions:
+ *By default, all Amazon S3 resources are private, including buckets, objects, and
+ * related subresources (for example, lifecycle configuration and website
+ * configuration). Only the resource owner (that is, the Amazon Web Services account that created
+ * it) can access the resource. The resource owner can optionally grant access
+ * permissions to others by writing an access policy. For this operation, a user must
+ * get the s3:PutLifecycleConfiguration
permission.
You can also explicitly deny permissions. An explicit deny also supersedes any + * other permissions. If you want to block users or accounts from removing or + * deleting objects from your bucket, you must deny them permissions for the + * following actions:
*@@ -107,11 +110,12 @@ export interface PutBucketLifecycleConfigurationCommandOutput extends __Metadata *
*For more information about permissions, see Managing Access Permissions to - * Your Amazon S3 Resources.
+ *For more information about permissions, see Managing Access + * Permissions to Your Amazon S3 Resources.
*The following operations are related to PutBucketLifecycleConfiguration
:
The following operations are related to
+ * PutBucketLifecycleConfiguration
:
diff --git a/clients/client-s3/src/commands/PutBucketLoggingCommand.ts b/clients/client-s3/src/commands/PutBucketLoggingCommand.ts index bea85083d8e23..7abcd64e66858 100644 --- a/clients/client-s3/src/commands/PutBucketLoggingCommand.ts +++ b/clients/client-s3/src/commands/PutBucketLoggingCommand.ts @@ -55,15 +55,15 @@ export interface PutBucketLoggingCommandOutput extends __MetadataBearer {} *
You can specify the person (grantee) to whom you're assigning access rights (by using - * request elements) in the following ways:
+ *You can specify the person (grantee) to whom you're assigning access rights (by + * using request elements) in the following ways:
*By the person's ID:
*
*
+ * xsi:type="CanonicalUser">
* DisplayName
is optional and ignored in the request.
By Email address:
*
*
+ * xsi:type="AmazonCustomerByEmail">
The grantee is resolved to the CanonicalUser
and, in a response to a GETObjectAcl
- * request, appears as the CanonicalUser.
The grantee is resolved to the CanonicalUser
and, in a
+ * response to a GETObjectAcl
request, appears as the
+ * CanonicalUser.
By URI:
*
*
+ * xsi:type="Group">
To enable logging, you use LoggingEnabled
and its children request elements. To disable
- * logging, you use an empty BucketLoggingStatus
request element:
To enable logging, you use LoggingEnabled
and its children request
+ * elements. To disable logging, you use an empty BucketLoggingStatus
request
+ * element:
            diff --git a/clients/client-s3/src/commands/PutBucketPolicyCommand.ts b/clients/client-s3/src/commands/PutBucketPolicyCommand.ts
            index e741abfc75a03..3eba7d66c5e33 100644
            --- a/clients/client-s3/src/commands/PutBucketPolicyCommand.ts
            +++ b/clients/client-s3/src/commands/PutBucketPolicyCommand.ts
            @@ -48,10 +48,11 @@ export interface PutBucketPolicyCommandOutput extends __MetadataBearer {}
             * To ensure that bucket owners don't inadvertently lock themselves out of their own
             * buckets, the root principal in a bucket owner's Amazon Web Services account can perform the
            - * GetBucketPolicy, PutBucketPolicy, and
            - * DeleteBucketPolicy API actions, even if their bucket policy explicitly
            - * denies the root principal's access. Bucket owner root principals can only be blocked from performing
            - * these API actions by VPC endpoint policies and Amazon Web Services Organizations policies.
            + * GetBucketPolicy, PutBucketPolicy, and
            + * DeleteBucketPolicy API actions, even if their bucket policy explicitly
            + * denies the root principal's access. Bucket owner root principals can only be blocked
            + * from performing these API actions by VPC endpoint policies and Amazon Web Services Organizations
            + * policies.
            
For more information, see Bucket policy * examples.
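            For orientation, a minimal sketch of attaching a policy with the v3 client follows; the policy document is passed as a JSON string. The bucket name, account ID, and statement are placeholders, not values from this diff.

            ```ts
            import { S3Client, PutBucketPolicyCommand } from "@aws-sdk/client-s3";

            const client = new S3Client({});

            // Placeholder policy allowing another account to read objects.
            const policy = {
              Version: "2012-10-17",
              Statement: [
                {
                  Sid: "AllowReadFromAccount",
                  Effect: "Allow",
                  Principal: { AWS: "arn:aws:iam::111122223333:root" },
                  Action: "s3:GetObject",
                  Resource: "arn:aws:s3:::example-bucket/*",
                },
              ],
            };

            await client.send(
              new PutBucketPolicyCommand({
                Bucket: "example-bucket",
                Policy: JSON.stringify(policy),
              })
            );
            ```
            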
            diff --git a/clients/client-s3/src/commands/PutBucketReplicationCommand.ts b/clients/client-s3/src/commands/PutBucketReplicationCommand.ts
            index 552630ac0a779..da1759eede9da 100644
            --- a/clients/client-s3/src/commands/PutBucketReplicationCommand.ts
            +++ b/clients/client-s3/src/commands/PutBucketReplicationCommand.ts
            @@ -42,7 +42,11 @@ export interface PutBucketReplicationCommandOutput extends __MetadataBearer {}
             *Specify the replication configuration in the request body. In the replication
             * configuration, you provide the name of the destination bucket or buckets where you want
             * Amazon S3 to replicate objects, the IAM role that Amazon S3 can assume to replicate objects on your
            - * behalf, and other relevant information.
            + * behalf, and other relevant information. You can invoke this request for a specific
            + * Amazon Web Services Region by using the
            + * 
            + * aws:RequestedRegion
            + * 
            condition key.
            
* A replication configuration must include at least one rule, and can contain a maximum of * 1,000. Each rule identifies a subset of objects to replicate by filtering the objects in * the source bucket. To choose additional subsets of objects to replicate, add a rule for @@ -61,31 +65,33 @@ export interface PutBucketReplicationCommandOutput extends __MetadataBearer {} *
            - *By default, Amazon S3 doesn't replicate objects that are stored at rest using server-side
            - * encryption with KMS keys. To replicate Amazon Web Services KMS-encrypted objects, add the following:
            - * SourceSelectionCriteria, SseKmsEncryptedObjects,
            - * Status, EncryptionConfiguration, and
            - * ReplicaKmsKeyID. For information about replication configuration, see
            - * Replicating Objects
            - * Created with SSE Using KMS keys.
            + *By default, Amazon S3 doesn't replicate objects that are stored at rest using
            + * server-side encryption with KMS keys. To replicate Amazon Web Services KMS-encrypted objects,
            + * add the following: SourceSelectionCriteria,
            + * SseKmsEncryptedObjects, Status,
            + * EncryptionConfiguration, and ReplicaKmsKeyID. For
            + * information about replication configuration, see Replicating
            + * Objects Created with SSE Using KMS keys.
            
             *For information on PutBucketReplication errors, see List of
            - * replication-related error codes
            + * replication-related error codes
             *
             *To create a PutBucketReplication request, you must have
            - * s3:PutReplicationConfiguration permissions for the bucket.
            + * s3:PutReplicationConfiguration permissions for the bucket.
            
*
*
            - *By default, a resource owner, in this case the Amazon Web Services account that created the bucket,
            - * can perform this operation. The resource owner can also grant others permissions to perform
            - * the operation. For more information about permissions, see Specifying Permissions in a
            - * Policy and Managing Access Permissions to
            - * Your Amazon S3 Resources.
            + *By default, a resource owner, in this case the Amazon Web Services account that created the
            + * bucket, can perform this operation. The resource owner can also grant others
            + * permissions to perform the operation. For more information about permissions, see
            + * Specifying Permissions in
            + * a Policy and Managing Access
            + * Permissions to Your Amazon S3 Resources.
            - *To perform this operation, the user or role performing the action must have the
            - * iam:PassRole permission.
            + *To perform this operation, the user or role performing the action must have
            + * the iam:PassRole
            + * permission.
            
*
- * PutBucketTagging
has the following special errors:
PutBucketTagging
has the following special errors. For more Amazon S3 errors
+ * see, Error
+ * Responses.
* Error code: InvalidTagError
- *
Description: The tag provided was not a valid tag. This error can occur if - * the tag did not pass input validation. For information about tag restrictions, - * see User-Defined Tag Restrictions and Amazon Web Services-Generated Cost Allocation Tag Restrictions.
- *
+ * InvalidTag
- The tag provided was not a valid tag. This error
+ * can occur if the tag did not pass input validation. For more information, see Using
+ * Cost Allocation in Amazon S3 Bucket Tags.
Error code: MalformedXMLError
- *
Description: The XML provided does not match the schema.
- *
+ * MalformedXML
- The XML provided does not match the
+ * schema.
Error code: OperationAbortedError
- *
Description: A conflicting conditional action is currently in progress - * against this resource. Please try again.
- *
+ * OperationAborted
- A conflicting conditional action is
+ * currently in progress against this resource. Please try again.
Error code: InternalError
- *
Description: The service was unable to apply the provided tag to the - * bucket.
- *
+ * InternalError
- The service was unable to apply the provided
+ * tag to the bucket.
The following operations are related to PutBucketTagging
:
MfaDelete
request elements in a request to set the versioning state of the
* bucket.
            - *If you have an object expiration lifecycle configuration in your non-versioned bucket and
            - * you want to maintain the same permanent delete behavior when you enable versioning, you
            - * must add a noncurrent expiration policy. The noncurrent expiration lifecycle configuration will
            - * manage the deletes of the noncurrent object versions in the version-enabled bucket. (A
            - * version-enabled bucket maintains one current and zero or more noncurrent object
            - * versions.) For more information, see Lifecycle and Versioning.
            + *If you have an object expiration lifecycle configuration in your non-versioned bucket
            + * and you want to maintain the same permanent delete behavior when you enable versioning,
            + * you must add a noncurrent expiration policy. The noncurrent expiration lifecycle
            + * configuration will manage the deletes of the noncurrent object versions in the
            + * version-enabled bucket. (A version-enabled bucket maintains one current and zero or more
            + * noncurrent object versions.) For more information, see Lifecycle and Versioning.
            
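            For reference, enabling versioning itself is a single call; the bucket name is a placeholder, and this sketch assumes MFA delete is not in use.

            ```ts
            import { S3Client, PutBucketVersioningCommand } from "@aws-sdk/client-s3";

            const client = new S3Client({});

            // Enable versioning; pair this with a noncurrent-version expiration lifecycle
            // rule (see the note above) if you want old versions cleaned up automatically.
            await client.send(
              new PutBucketVersioningCommand({
                Bucket: "example-bucket", // placeholder
                VersioningConfiguration: { Status: "Enabled" },
              })
            );
            ```
            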
*The following operations are related to PutBucketVersioning
:
             *Amazon S3 has a limitation of 50 routing rules per website configuration. If you require more
             * than 50 routing rules, you can use object redirect. For more information, see Configuring an
             * Object Redirect in the Amazon S3 User Guide.
            
+ *The maximum request length is limited to 128 KB.
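            A small website configuration with one routing rule looks like the sketch below; the bucket name, documents, and prefixes are placeholders. The 50-rule and 128 KB limits mentioned above apply to the whole configuration document.

            ```ts
            import { S3Client, PutBucketWebsiteCommand } from "@aws-sdk/client-s3";

            const client = new S3Client({});

            await client.send(
              new PutBucketWebsiteCommand({
                Bucket: "example-bucket", // placeholder
                WebsiteConfiguration: {
                  IndexDocument: { Suffix: "index.html" },
                  ErrorDocument: { Key: "error.html" },
                  RoutingRules: [
                    {
                      Condition: { KeyPrefixEquals: "docs/" },
                      Redirect: { ReplaceKeyPrefixWith: "documents/" },
                    },
                  ],
                },
              })
            );
            ```
            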
* @example * Use a bare-bones client and the command you need to make an API call. * ```javascript diff --git a/clients/client-s3/src/commands/PutObjectAclCommand.ts b/clients/client-s3/src/commands/PutObjectAclCommand.ts index 2875ef64d0d36..5e597622f3b59 100644 --- a/clients/client-s3/src/commands/PutObjectAclCommand.ts +++ b/clients/client-s3/src/commands/PutObjectAclCommand.ts @@ -61,29 +61,32 @@ export interface PutObjectAclCommandOutput extends PutObjectAclOutput, __Metadat *You can set access permissions using one of the following methods:
*Specify a canned ACL with the x-amz-acl
request header. Amazon S3 supports
- * a set of predefined ACLs, known as canned ACLs. Each canned ACL has a predefined set
- * of grantees and permissions. Specify the canned ACL name as the value of
- * x-amz-ac
l. If you use this header, you cannot use other access
- * control-specific headers in your request. For more information, see Canned
- * ACL.
Specify a canned ACL with the x-amz-acl
request header. Amazon S3
+ * supports a set of predefined ACLs, known as canned ACLs. Each canned ACL has
+ * a predefined set of grantees and permissions. Specify the canned ACL name as
+ * the value of x-amz-ac
l. If you use this header, you cannot use
+ * other access control-specific headers in your request. For more information,
+ * see Canned
+ * ACL.
Specify access permissions explicitly with the x-amz-grant-read
,
- * x-amz-grant-read-acp
, x-amz-grant-write-acp
, and
- * x-amz-grant-full-control
headers. When using these headers, you
- * specify explicit access permissions and grantees (Amazon Web Services accounts or Amazon S3 groups) who
- * will receive the permission. If you use these ACL-specific headers, you cannot use
- * x-amz-acl
header to set a canned ACL. These parameters map to the set
- * of permissions that Amazon S3 supports in an ACL. For more information, see Access Control
- * List (ACL) Overview.
You specify each grantee as a type=value pair, where the type is one of the - * following:
+ *Specify access permissions explicitly with the
+ * x-amz-grant-read
, x-amz-grant-read-acp
,
+ * x-amz-grant-write-acp
, and
+ * x-amz-grant-full-control
headers. When using these headers,
+ * you specify explicit access permissions and grantees (Amazon Web Services accounts or Amazon S3
+ * groups) who will receive the permission. If you use these ACL-specific
+ * headers, you cannot use x-amz-acl
header to set a canned ACL.
+ * These parameters map to the set of permissions that Amazon S3 supports in an ACL.
+ * For more information, see Access Control List (ACL)
+ * Overview.
You specify each grantee as a type=value pair, where the type is one of + * the following:
*
- * id
– if the value specified is the canonical user ID of an
- * Amazon Web Services account
id
– if the value specified is the canonical user ID
+ * of an Amazon Web Services account
* @@ -92,8 +95,8 @@ export interface PutObjectAclCommandOutput extends PutObjectAclOutput, __Metadat *
- * emailAddress
– if the value specified is the email address of
- * an Amazon Web Services account
emailAddress
– if the value specified is the email
+ * address of an Amazon Web Services account
* Using email addresses to specify a grantee is only supported in the following Amazon Web Services Regions:
*For example, the following x-amz-grant-read
header grants list
- * objects permission to the two Amazon Web Services accounts identified by their email
+ *
For example, the following x-amz-grant-read
header grants
+ * list objects permission to the two Amazon Web Services accounts identified by their email
* addresses.
* x-amz-grant-read: emailAddress="xyz@amazon.com",
- * emailAddress="abc@amazon.com"
+ * emailAddress="abc@amazon.com"
*
You can use either a canned ACL or specify access permissions explicitly. You cannot do - * both.
            - *You can use either a canned ACL or specify access permissions explicitly. You cannot do
            - * both.
            + *You can use either a canned ACL or specify access permissions explicitly. You
            + * cannot do both.
            
+ *You can specify the person (grantee) to whom you're assigning access rights + * (using request elements) in the following ways:
*By the person's ID:
*
*
+ * xsi:type="CanonicalUser">
DisplayName is optional and ignored in the request.
*By URI:
*
*
+ * xsi:type="Group">
By Email address:
*
*
+ * xsi:type="AmazonCustomerByEmail">
            - *The grantee is resolved to the CanonicalUser and, in a response to a GET Object
            - * acl request, appears as the CanonicalUser.
            + *The grantee is resolved to the CanonicalUser and, in a response to a GET
            + * Object acl request, appears as the CanonicalUser.
            
*Using email addresses to specify a grantee is only supported in the following Amazon Web Services Regions:
            - *The ACL of an object is set at the object version level. By default, PUT sets the ACL of
            - * the current version of an object. To set the ACL of a different version, use the
            - * versionId subresource.
            + *The ACL of an object is set at the object version level. By default, PUT sets
            + * the ACL of the current version of an object. To set the ACL of a different
            + * version, use the versionId subresource.
            
The following operations are related to PutObjectAcl
:
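            The two access-permission methods described above (canned ACL versus explicit grant headers) map to mutually exclusive parameters on the command. The sketch below shows both forms with the v3 client; bucket, key, version ID, and the grantee email addresses are placeholders (the email addresses reuse the ones from the doc text).

            ```ts
            import { S3Client, PutObjectAclCommand } from "@aws-sdk/client-s3";

            const client = new S3Client({});

            // Option 1: a canned ACL (sent as the x-amz-acl header).
            await client.send(
              new PutObjectAclCommand({
                Bucket: "example-bucket",  // placeholder
                Key: "example-object.txt", // placeholder
                ACL: "public-read",
              })
            );

            // Option 2: explicit grants; cannot be combined with the ACL parameter above.
            await client.send(
              new PutObjectAclCommand({
                Bucket: "example-bucket",
                Key: "example-object.txt",
                GrantRead: 'emailAddress="xyz@amazon.com", emailAddress="abc@amazon.com"',
                VersionId: "example-version-id", // optional; sets the ACL of a specific version
              })
            );
            ```
            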
Sets the supplied tag-set to an object that already exists in a bucket.
            - *A tag is a key-value pair. You can associate tags with an object by sending a PUT
            - * request against the tagging subresource that is associated with the object. You can
            - * retrieve tags by sending a GET request. For more information, see GetObjectTagging.
            + *Sets the supplied tag-set to an object that already exists in a bucket. A tag is a
            + * key-value pair. For more information, see Object Tagging.
            + *You can associate tags with an object by sending a PUT request against the tagging
            + * subresource that is associated with the object. You can retrieve tags by sending a GET
            + * request. For more information, see GetObjectTagging.
            
*For tagging-related restrictions related to characters and encodings, see Tag * Restrictions. Note that Amazon S3 limits the maximum number of tags to 10 tags per * object.
@@ -49,69 +50,31 @@ export interface PutObjectTaggingCommandOutput extends PutObjectTaggingOutput, _ * permission and can grant this permission to others. *To put tags of any other version, use the versionId
query parameter. You
* also need permission for the s3:PutObjectVersionTagging
action.
For information about the Amazon S3 object tagging feature, see Object Tagging.
*
- * PutObjectTagging
has the following special errors:
PutObjectTagging
has the following special errors. For more Amazon S3 errors
+ * see, Error
+ * Responses.
* - * Code: InvalidTagError - *
- *- * Cause: The tag provided was not a valid tag. This error can occur - * if the tag did not pass input validation. For more information, see Object - * Tagging. - *
- *
+ * InvalidTag
- The tag provided was not a valid tag. This error
+ * can occur if the tag did not pass input validation. For more information, see Object
+ * Tagging.
- * Code: MalformedXMLError - *
- *- * Cause: The XML provided does not match the schema. - *
- *
+ * MalformedXML
- The XML provided does not match the
+ * schema.
- * Code: OperationAbortedError - *
- *- * Cause: A conflicting conditional action is currently in progress - * against this resource. Please try again. - *
- *
+ * OperationAborted
- A conflicting conditional action is
+ * currently in progress against this resource. Please try again.
- * Code: InternalError - *
- *- * Cause: The service was unable to apply the provided tag to the - * object. - *
- *
+ * InternalError
- The service was unable to apply the provided
+ * tag to the object.
The following operations are related to PutObjectTagging
:
PublicAccessBlock
configuration for both the
* bucket (or the bucket that contains the object) and the bucket owner's account. If the
* PublicAccessBlock
configurations are different between the bucket and
- * the account, Amazon S3 uses the most restrictive combination of the bucket-level and
+ * the account, S3 uses the most restrictive combination of the bucket-level and
* account-level settings.
*
* For more information about when Amazon S3 considers a bucket or an object public, see The Meaning of "Public".
diff --git a/clients/client-s3/src/commands/RestoreObjectCommand.ts b/clients/client-s3/src/commands/RestoreObjectCommand.ts index 49783a71f7af3..19b86aa5ff1ed 100644 --- a/clients/client-s3/src/commands/RestoreObjectCommand.ts +++ b/clients/client-s3/src/commands/RestoreObjectCommand.ts @@ -61,41 +61,39 @@ export interface RestoreObjectCommandOutput extends RestoreObjectOutput, __Metad ** Managing Access with ACLs in the - * Amazon S3 User Guide + * Amazon S3 User Guide *
*- * Protecting Data Using - * Server-Side Encryption in the - * Amazon S3 User Guide + * Protecting Data Using Server-Side Encryption in the + * Amazon S3 User Guide *
*Define the SQL expression for the SELECT
type of restoration for your
- * query in the request body's SelectParameters
structure. You can use
- * expressions like the following examples.
Define the SQL expression for the SELECT
type of restoration for your query
+ * in the request body's SelectParameters
structure. You can use expressions like
+ * the following examples.
The following expression returns all records from the specified - * object.
+ *The following expression returns all records from the specified object.
*
* SELECT * FROM Object
*
Assuming that you are not using any headers for data stored in the object, - * you can specify columns with positional headers.
+ *Assuming that you are not using any headers for data stored in the object, you can + * specify columns with positional headers.
*
* SELECT s._1, s._2 FROM Object s WHERE s._3 > 100
*
If you have headers and you set the fileHeaderInfo
in the
- * CSV
structure in the request body to USE
, you can
- * specify headers in the query. (If you set the fileHeaderInfo
field
- * to IGNORE
, the first row is skipped for the query.) You cannot mix
- * ordinal positions with header column names.
CSV
structure in the request body to USE
, you can
+ * specify headers in the query. (If you set the fileHeaderInfo
field to
+ * IGNORE
, the first row is skipped for the query.) You cannot mix
+ * ordinal positions with header column names.
*
* SELECT s.Id, s.FirstName, s.SSN FROM S3Object s
*
To use this operation, you must have permissions to perform the
- * s3:RestoreObject
action. The bucket owner has this permission by default
- * and can grant this permission to others. For more information about permissions, see Permissions Related to Bucket Subresource Operations and Managing
- * Access Permissions to Your Amazon S3 Resources in the
- * Amazon S3 User Guide.
s3:RestoreObject
action. The bucket owner has this permission by
+ * default and can grant this permission to others. For more information about
+ * permissions, see Permissions Related to Bucket Subresource Operations and Managing Access Permissions to Your Amazon S3 Resources in the
+ * Amazon S3 User Guide.
* Objects that you archive to the S3 Glacier Flexible Retrieval Flexible Retrieval or - * S3 Glacier Deep Archive storage class, and S3 Intelligent-Tiering Archive or + *
Objects that you archive to the S3 Glacier Flexible Retrieval Flexible Retrieval + * or S3 Glacier Deep Archive storage class, and S3 Intelligent-Tiering Archive or * S3 Intelligent-Tiering Deep Archive tiers, are not accessible in real time. For objects in the - * S3 Glacier Flexible Retrieval Flexible Retrieval or S3 Glacier Deep Archive storage - * classes, you must first initiate a restore request, and then wait until a temporary copy of - * the object is available. If you want a permanent copy of the object, create a copy of it in - * the Amazon S3 Standard storage class in your S3 bucket. To access an archived object, you must - * restore the object for the duration (number of days) that you specify. For objects in the - * Archive Access or Deep Archive Access tiers of S3 Intelligent-Tiering, you must first - * initiate a restore request, and then wait until the object is moved into the Frequent - * Access tier.
- *To restore a specific object version, you can provide a version ID. If you don't provide - * a version ID, Amazon S3 restores the current version.
- *When restoring an archived object, you can specify one of the following data access tier
- * options in the Tier
element of the request body:
To restore a specific object version, you can provide a version ID. If you + * don't provide a version ID, Amazon S3 restores the current version.
+ *When restoring an archived object, you can specify one of the following data
+ * access tier options in the Tier
element of the request body:
- * Expedited
- Expedited retrievals allow you to quickly access your
- * data stored in the S3 Glacier Flexible Retrieval Flexible Retrieval storage class or
- * S3 Intelligent-Tiering Archive tier when occasional urgent requests for restoring archives
- * are required. For all but the largest archived objects (250 MB+), data accessed using
- * Expedited retrievals is typically made available within 1–5 minutes. Provisioned
- * capacity ensures that retrieval capacity for Expedited retrievals is available when
- * you need it. Expedited retrievals and provisioned capacity are not available for
- * objects stored in the S3 Glacier Deep Archive storage class or
+ * Expedited
- Expedited retrievals allow you to quickly access
+ * your data stored in the S3 Glacier Flexible Retrieval Flexible Retrieval
+ * storage class or S3 Intelligent-Tiering Archive tier when occasional urgent requests
+ * for restoring archives are required. For all but the largest archived
+ * objects (250 MB+), data accessed using Expedited retrievals is typically
+ * made available within 1–5 minutes. Provisioned capacity ensures that
+ * retrieval capacity for Expedited retrievals is available when you need it.
+ * Expedited retrievals and provisioned capacity are not available for objects
+ * stored in the S3 Glacier Deep Archive storage class or
* S3 Intelligent-Tiering Deep Archive tier.
- * Standard
- Standard retrievals allow you to access any of your
- * archived objects within several hours. This is the default option for retrieval
- * requests that do not specify the retrieval option. Standard retrievals typically
- * finish within 3–5 hours for objects stored in the S3 Glacier Flexible Retrieval Flexible
- * Retrieval storage class or S3 Intelligent-Tiering Archive tier. They typically finish within
- * 12 hours for objects stored in the S3 Glacier Deep Archive storage class or
- * S3 Intelligent-Tiering Deep Archive tier. Standard retrievals are free for objects stored in
- * S3 Intelligent-Tiering.
Standard
- Standard retrievals allow you to access any of
+ * your archived objects within several hours. This is the default option for
+ * retrieval requests that do not specify the retrieval option. Standard
+ * retrievals typically finish within 3–5 hours for objects stored in the
+ * S3 Glacier Flexible Retrieval Flexible Retrieval storage class or
+ * S3 Intelligent-Tiering Archive tier. They typically finish within 12 hours for
+ * objects stored in the S3 Glacier Deep Archive storage class or
+ * S3 Intelligent-Tiering Deep Archive tier. Standard retrievals are free for objects stored
+ * in S3 Intelligent-Tiering.
*
- * Bulk
- Bulk retrievals free for objects stored in the S3 Glacier
- * Flexible Retrieval and S3 Intelligent-Tiering storage classes, enabling you to
- * retrieve large amounts, even petabytes, of data at no cost. Bulk retrievals typically
- * finish within 5–12 hours for objects stored in the S3 Glacier Flexible Retrieval
- * Flexible Retrieval storage class or S3 Intelligent-Tiering Archive tier. Bulk retrievals are
- * also the lowest-cost retrieval option when restoring objects from
- * S3 Glacier Deep Archive. They typically finish within 48 hours for objects
- * stored in the S3 Glacier Deep Archive storage class or S3 Intelligent-Tiering Deep Archive
- * tier.
Bulk
- Bulk retrievals free for objects stored in the
+ * S3 Glacier Flexible Retrieval and S3 Intelligent-Tiering storage classes,
+ * enabling you to retrieve large amounts, even petabytes, of data at no cost.
+ * Bulk retrievals typically finish within 5–12 hours for objects stored in the
+ * S3 Glacier Flexible Retrieval Flexible Retrieval storage class or
+ * S3 Intelligent-Tiering Archive tier. Bulk retrievals are also the lowest-cost
+ * retrieval option when restoring objects from
+ * S3 Glacier Deep Archive. They typically finish within 48 hours for
+ * objects stored in the S3 Glacier Deep Archive storage class or
+ * S3 Intelligent-Tiering Deep Archive tier.
* For more information about archive retrieval options and provisioned capacity for
- * Expedited
data access, see Restoring Archived Objects in
- * the Amazon S3 User Guide.
You can use Amazon S3 restore speed upgrade to change the restore speed to a faster speed - * while it is in progress. For more information, see Upgrading the speed of an in-progress restore in the - * Amazon S3 User Guide.
- *To get the status of object restoration, you can send a HEAD
request.
- * Operations return the x-amz-restore
header, which provides information about
- * the restoration status, in the response. You can use Amazon S3 event notifications to notify you
- * when a restore is initiated or completed. For more information, see Configuring Amazon S3
- * Event Notifications in the Amazon S3 User Guide.
After restoring an archived object, you can update the restoration period by reissuing - * the request with a new period. Amazon S3 updates the restoration period relative to the current - * time and charges only for the request-there are no data transfer charges. You cannot - * update the restoration period when Amazon S3 is actively processing your current restore request - * for the object.
- *If your bucket has a lifecycle configuration with a rule that includes an expiration - * action, the object expiration overrides the life span that you specify in a restore - * request. For example, if you restore an object copy for 10 days, but the object is - * scheduled to expire in 3 days, Amazon S3 deletes the object in 3 days. For more information - * about lifecycle configuration, see PutBucketLifecycleConfiguration and Object Lifecycle Management - * in Amazon S3 User Guide.
+ *For more information about archive retrieval options and provisioned capacity
+ * for Expedited
data access, see Restoring Archived
+ * Objects in the Amazon S3 User Guide.
You can use Amazon S3 restore speed upgrade to change the restore speed to a faster + * speed while it is in progress. For more information, see Upgrading the speed of an in-progress restore in the + * Amazon S3 User Guide.
+ *To get the status of object restoration, you can send a HEAD
+ * request. Operations return the x-amz-restore
header, which provides
+ * information about the restoration status, in the response. You can use Amazon S3 event
+ * notifications to notify you when a restore is initiated or completed. For more
+ * information, see Configuring Amazon S3 Event
+ * Notifications in the Amazon S3 User Guide.
            + *After restoring an archived object, you can update the restoration period by
            + * reissuing the request with a new period. Amazon S3 updates the restoration period
            + * relative to the current time and charges only for the request; there are no
            + * data transfer charges. You cannot update the restoration period when Amazon S3 is
            + * actively processing your current restore request for the object.
            
+ *If your bucket has a lifecycle configuration with a rule that includes an + * expiration action, the object expiration overrides the life span that you specify + * in a restore request. For example, if you restore an object copy for 10 days, but + * the object is scheduled to expire in 3 days, Amazon S3 deletes the object in 3 days. + * For more information about lifecycle configuration, see PutBucketLifecycleConfiguration and Object Lifecycle + * Management in Amazon S3 User Guide.
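            Before the response-code notes that follow, here is a minimal sketch of a restore request with the v3 client using the Standard retrieval tier described above. Bucket, key, and version ID are placeholders; omitting VersionId restores the current version.

            ```ts
            import { S3Client, RestoreObjectCommand } from "@aws-sdk/client-s3";

            const client = new S3Client({});

            // Restore an archived object for 10 days using the Standard tier.
            await client.send(
              new RestoreObjectCommand({
                Bucket: "example-bucket",          // placeholder
                Key: "archive/report.csv",         // placeholder
                VersionId: "example-version-id",   // optional placeholder
                RestoreRequest: {
                  Days: 10,
                  GlacierJobParameters: { Tier: "Standard" }, // or "Expedited" / "Bulk"
                },
              })
            );
            ```
            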
*A successful action returns either the 200 OK
or 202 Accepted
- * status code.
A successful action returns either the 200 OK
or 202
+ * Accepted
status code.
If the object is not previously restored, then Amazon S3 returns 202
- * Accepted
in the response.
If the object is previously restored, Amazon S3 returns 200 OK
in the
- * response.
If the object is previously restored, Amazon S3 returns 200 OK
in
+ * the response.
- * Cause: Object restore is already in progress. (This error does not - * apply to SELECT type requests.) + * Cause: Object restore is already in progress. (This error + * does not apply to SELECT type requests.) *
*- * Cause: expedited retrievals are currently not available. Try again - * later. (Returned if there is insufficient capacity to process the Expedited - * request. This error applies only to Expedited retrievals and not to - * S3 Standard or Bulk retrievals.) + * Cause: expedited retrievals are currently not available. + * Try again later. (Returned if there is insufficient capacity to + * process the Expedited request. This error applies only to Expedited + * retrievals and not to S3 Standard or Bulk retrievals.) *
*You must have s3:GetObject
permission for this operation. Amazon S3 Select does
- * not support anonymous access. For more information about permissions, see Specifying
- * Permissions in a Policy in the Amazon S3 User Guide.
You must have s3:GetObject
permission for this operation. Amazon S3
+ * Select does not support anonymous access. For more information about permissions,
+ * see Specifying Permissions in
+ * a Policy in the Amazon S3 User Guide.
- * CSV, JSON, and Parquet - Objects must be in CSV, JSON, or - * Parquet format.
+ * CSV, JSON, and Parquet - Objects must be in CSV, + * JSON, or Parquet format. *@@ -78,63 +79,68 @@ export interface SelectObjectContentCommandOutput extends SelectObjectContentOut *
- * GZIP or BZIP2 - CSV and JSON files can be compressed using - * GZIP or BZIP2. GZIP and BZIP2 are the only compression formats that Amazon S3 Select - * supports for CSV and JSON files. Amazon S3 Select supports columnar compression for - * Parquet using GZIP or Snappy. Amazon S3 Select does not support whole-object compression - * for Parquet objects.
+ * GZIP or BZIP2 - CSV and JSON files can be compressed + * using GZIP or BZIP2. GZIP and BZIP2 are the only compression formats that + * Amazon S3 Select supports for CSV and JSON files. Amazon S3 Select supports columnar + * compression for Parquet using GZIP or Snappy. Amazon S3 Select does not support + * whole-object compression for Parquet objects. *- * Server-side encryption - Amazon S3 Select supports querying - * objects that are protected with server-side encryption.
- *For objects that are encrypted with customer-provided encryption keys (SSE-C), you - * must use HTTPS, and you must use the headers that are documented in the GetObject. For more information about SSE-C, see Server-Side - * Encryption (Using Customer-Provided Encryption Keys) in the - * Amazon S3 User Guide.
- *For objects that are encrypted with Amazon S3 managed keys (SSE-S3) and Amazon Web Services KMS keys - * (SSE-KMS), server-side encryption is handled transparently, so you don't need to - * specify anything. For more information about server-side encryption, including SSE-S3 - * and SSE-KMS, see Protecting Data Using - * Server-Side Encryption in the Amazon S3 User Guide.
+ * Server-side encryption - Amazon S3 Select supports + * querying objects that are protected with server-side encryption. + *For objects that are encrypted with customer-provided encryption keys + * (SSE-C), you must use HTTPS, and you must use the headers that are + * documented in the GetObject. For more + * information about SSE-C, see Server-Side Encryption (Using Customer-Provided Encryption Keys) + * in the Amazon S3 User Guide.
+ *For objects that are encrypted with Amazon S3 managed keys (SSE-S3) and + * Amazon Web Services KMS keys (SSE-KMS), server-side encryption is handled transparently, + * so you don't need to specify anything. For more information about + * server-side encryption, including SSE-S3 and SSE-KMS, see Protecting Data Using Server-Side Encryption in the + * Amazon S3 User Guide.
*Given the response size is unknown, Amazon S3 Select streams the response as a series of
- * messages and includes a Transfer-Encoding
header with chunked
as
- * its value in the response. For more information, see Appendix: SelectObjectContent
- * Response.
Given the response size is unknown, Amazon S3 Select streams the response as a
+ * series of messages and includes a Transfer-Encoding
header with
+ * chunked
as its value in the response. For more information, see
+ * Appendix:
+ * SelectObjectContent
+ * Response.
The SelectObjectContent
action does not support the following
- * GetObject
functionality. For more information, see GetObject.
GetObject
functionality. For more information, see GetObject.
*
- * Range
: Although you can specify a scan range for an Amazon S3 Select request
- * (see SelectObjectContentRequest - ScanRange in the request parameters),
- * you cannot specify the range of bytes of an object to return.
Range
: Although you can specify a scan range for an Amazon S3 Select
+ * request (see SelectObjectContentRequest - ScanRange in the request
+ * parameters), you cannot specify the range of bytes of an object to return.
+ *
* The GLACIER
, DEEP_ARCHIVE
, and REDUCED_REDUNDANCY
storage classes, or the ARCHIVE_ACCESS
and
- * DEEP_ARCHIVE_ACCESS
access tiers of
- * the INTELLIGENT_TIERING
storage class: You cannot query objects in
- * the GLACIER
, DEEP_ARCHIVE
, or REDUCED_REDUNDANCY
storage classes, nor objects in the
- * ARCHIVE_ACCESS
or
- * DEEP_ARCHIVE_ACCESS
access tiers of
- * the INTELLIGENT_TIERING
storage class. For
- * more information about storage classes, see Using Amazon S3 storage
- * classes in the Amazon S3 User Guide.
The GLACIER
, DEEP_ARCHIVE
, and
+ * REDUCED_REDUNDANCY
storage classes, or the
+ * ARCHIVE_ACCESS
and DEEP_ARCHIVE_ACCESS
access
+ * tiers of the INTELLIGENT_TIERING
storage class: You cannot
+ * query objects in the GLACIER
, DEEP_ARCHIVE
, or
+ * REDUCED_REDUNDANCY
storage classes, nor objects in the
+ * ARCHIVE_ACCESS
or DEEP_ARCHIVE_ACCESS
access
+ * tiers of the INTELLIGENT_TIERING
storage class. For more
+ * information about storage classes, see Using Amazon S3
+ * storage classes in the
+ * Amazon S3 User Guide.
For a list of special errors for this operation, see List of - * SELECT Object Content Error Codes + *
For a list of special errors for this operation, see List of SELECT Object Content Error Codes *
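            The request and response shape described above (SQL expression, input/output serialization, chunked event stream) is easiest to see in a short sketch. The bucket, key, and query below are placeholders; the sketch assumes a CSV object with a header row.

            ```ts
            import { S3Client, SelectObjectContentCommand } from "@aws-sdk/client-s3";

            const client = new S3Client({});

            const { Payload } = await client.send(
              new SelectObjectContentCommand({
                Bucket: "example-bucket", // placeholder
                Key: "people.csv",        // placeholder
                Expression: "SELECT s.Id, s.FirstName FROM S3Object s",
                ExpressionType: "SQL",
                InputSerialization: { CSV: { FileHeaderInfo: "USE" }, CompressionType: "NONE" },
                OutputSerialization: { CSV: {} },
              })
            );

            // The response is streamed as a series of messages; Records events carry the data.
            const decoder = new TextDecoder();
            for await (const event of Payload ?? []) {
              if (event.Records?.Payload) {
                process.stdout.write(decoder.decode(event.Records.Payload));
              }
            }
            ```
            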
*If your bucket has versioning enabled, you could have multiple versions of the same
- * object. By default, x-amz-copy-source
identifies the current version of the
- * object to copy. If the current version is a delete marker and you don't specify a versionId
- * in the x-amz-copy-source
, Amazon S3 returns a 404 error, because the object does
- * not exist. If you specify versionId in the x-amz-copy-source
and the versionId
- * is a delete marker, Amazon S3 returns an HTTP 400 error, because you are not allowed to specify
- * a delete marker as a version for the x-amz-copy-source
.
You can optionally specify a specific version of the source object to copy by adding the
- * versionId
subresource as shown in the following example:
If your bucket has versioning enabled, you could have multiple versions of the
+ * same object. By default, x-amz-copy-source
identifies the current
+ * version of the object to copy. If the current version is a delete marker and you
+ * don't specify a versionId in the x-amz-copy-source
, Amazon S3 returns a
+ * 404 error, because the object does not exist. If you specify versionId in the
+ * x-amz-copy-source
and the versionId is a delete marker, Amazon S3
+ * returns an HTTP 400 error, because you are not allowed to specify a delete marker
+ * as a version for the x-amz-copy-source
.
You can optionally specify a specific version of the source object to copy by
+ * adding the versionId
subresource as shown in the following
+ * example:
* x-amz-copy-source: /bucket/object?versionId=version id
*
- * Cause: The specified multipart upload does not exist. The upload - * ID might be invalid, or the multipart upload might have been aborted or - * completed. + * Cause: The specified multipart upload does not exist. The + * upload ID might be invalid, or the multipart upload might have been + * aborted or completed. *
*- * Cause: The specified copy source is not supported as a byte-range - * copy source. + * Cause: The specified copy source is not supported as a + * byte-range copy source. *
*Specifies the days since the initiation of an incomplete multipart upload that Amazon S3 will * wait before permanently removing all parts of the upload. For more information, see - * Aborting Incomplete Multipart Uploads Using a Bucket Lifecycle Configuration in the - * Amazon S3 User Guide.
+ * Aborting Incomplete Multipart Uploads Using a Bucket Lifecycle Configuration in + * the Amazon S3 User Guide. */ export interface AbortIncompleteMultipartUpload { /** @@ -88,8 +88,10 @@ export interface AbortMultipartUploadRequest { /** * @public *Confirms that the requester knows that they will be charged for the request. Bucket - * owners need not specify this parameter in their requests. For information about downloading - * objects from Requester Pays buckets, see Downloading Objects in + * owners need not specify this parameter in their requests. If either the source or + * destination Amazon S3 bucket has Requester Pays enabled, the requester will pay for + * corresponding charges to copy the object. For information about downloading objects from + * Requester Pays buckets, see Downloading Objects in * Requester Pays Buckets in the Amazon S3 User Guide.
*/ RequestPayer?: RequestPayer | string; @@ -628,8 +630,10 @@ export interface CompleteMultipartUploadRequest { /** * @public *Confirms that the requester knows that they will be charged for the request. Bucket - * owners need not specify this parameter in their requests. For information about downloading - * objects from Requester Pays buckets, see Downloading Objects in + * owners need not specify this parameter in their requests. If either the source or + * destination Amazon S3 bucket has Requester Pays enabled, the requester will pay for + * corresponding charges to copy the object. For information about downloading objects from + * Requester Pays buckets, see Downloading Objects in * Requester Pays Buckets in the Amazon S3 User Guide.
*/ RequestPayer?: RequestPayer | string; @@ -1099,11 +1103,12 @@ export interface CopyObjectRequest { /** * @public - *By default, Amazon S3 uses the STANDARD Storage Class to store newly created objects. The - * STANDARD storage class provides high durability and high availability. Depending on - * performance needs, you can specify a different Storage Class. Amazon S3 on Outposts only uses - * the OUTPOSTS Storage Class. For more information, see Storage Classes in the - * Amazon S3 User Guide.
+ *If the x-amz-storage-class
header is not used, the copied object will be stored in the
+ * STANDARD Storage Class by default. The STANDARD storage class provides high durability and
+ * high availability. Depending on performance needs, you can specify a different Storage
+ * Class. Amazon S3 on Outposts only uses the OUTPOSTS Storage Class. For more information, see
+ * Storage
+ * Classes in the Amazon S3 User Guide.
Specifies the KMS key ID to use for object encryption. All GET and PUT requests for an + *
Specifies the KMS ID (Key ID, Key ARN, or Key Alias) to use for object encryption. All GET and PUT requests for an
* object protected by KMS will fail if they're not made via SSL or using SigV4. For
* information about configuring any of the officially supported Amazon Web Services SDKs and Amazon Web Services CLI, see
* Specifying the
@@ -1197,8 +1202,10 @@ export interface CopyObjectRequest {
/**
* @public
* Confirms that the requester knows that they will be charged for the request. Bucket
- * owners need not specify this parameter in their requests. For information about downloading
- * objects from Requester Pays buckets, see Downloading Objects in
+ * owners need not specify this parameter in their requests. If either the source or
+ * destination Amazon S3 bucket has Requester Pays enabled, the requester will pay for
+ * corresponding charges to copy the object. For information about downloading objects from
+ * Requester Pays buckets, see Downloading Objects in
* Requester Pays Buckets in the Amazon S3 User Guide.
The response also includes the x-amz-abort-rule-id
header that provides the
* ID of the lifecycle configuration rule that defines this action.
Specifies the ID of the symmetric encryption customer managed key to use for object encryption. + *
Specifies the ID (Key ID, Key ARN, or Key Alias) of the symmetric encryption customer managed key to use for object encryption. * All GET and PUT requests for an object protected by KMS will fail if they're not made via * SSL or using SigV4. For information about configuring any of the officially supported Amazon Web Services * SDKs and Amazon Web Services CLI, see Specifying the Signature Version in Request Authentication @@ -1765,8 +1773,10 @@ export interface CreateMultipartUploadRequest { /** * @public *
Confirms that the requester knows that they will be charged for the request. Bucket - * owners need not specify this parameter in their requests. For information about downloading - * objects from Requester Pays buckets, see Downloading Objects in + * owners need not specify this parameter in their requests. If either the source or + * destination Amazon S3 bucket has Requester Pays enabled, the requester will pay for + * corresponding charges to copy the object. For information about downloading objects from + * Requester Pays buckets, see Downloading Objects in * Requester Pays Buckets in the Amazon S3 User Guide.
*/ RequestPayer?: RequestPayer | string; @@ -2081,8 +2091,9 @@ export interface DeleteBucketWebsiteRequest { export interface DeleteObjectOutput { /** * @public - *Specifies whether the versioned object that was permanently deleted was (true) or was - * not (false) a delete marker.
+ *Indicates whether the specified object version that was permanently deleted was (true) or was + * not (false) a delete marker before deletion. In a simple DELETE, this header indicates whether (true) or + * not (false) the current version of the object is a delete marker.
*/ DeleteMarker?: boolean; @@ -2139,8 +2150,10 @@ export interface DeleteObjectRequest { /** * @public *Confirms that the requester knows that they will be charged for the request. Bucket - * owners need not specify this parameter in their requests. For information about downloading - * objects from Requester Pays buckets, see Downloading Objects in + * owners need not specify this parameter in their requests. If either the source or + * destination Amazon S3 bucket has Requester Pays enabled, the requester will pay for + * corresponding charges to copy the object. For information about downloading objects from + * Requester Pays buckets, see Downloading Objects in * Requester Pays Buckets in the Amazon S3 User Guide.
*/ RequestPayer?: RequestPayer | string; @@ -2179,9 +2192,9 @@ export interface DeletedObject { /** * @public - *Specifies whether the versioned object that was permanently deleted was (true) or was - * not (false) a delete marker. In a simple DELETE, this header indicates whether (true) or - * not (false) a delete marker was created.
+ *Indicates whether the specified object version that was permanently deleted was (true) or was + * not (false) a delete marker before deletion. In a simple DELETE, this header indicates whether (true) or + * not (false) the current version of the object is a delete marker.
*/ DeleteMarker?: boolean; @@ -4191,8 +4204,10 @@ export interface DeleteObjectsRequest { /** * @public *Confirms that the requester knows that they will be charged for the request. Bucket - * owners need not specify this parameter in their requests. For information about downloading - * objects from Requester Pays buckets, see Downloading Objects in + * owners need not specify this parameter in their requests. If either the source or + * destination Amazon S3 bucket has Requester Pays enabled, the requester will pay for + * corresponding charges to copy the object. For information about downloading objects from + * Requester Pays buckets, see Downloading Objects in * Requester Pays Buckets in the Amazon S3 User Guide.
*/ RequestPayer?: RequestPayer | string; @@ -4330,8 +4345,10 @@ export interface GetBucketAccelerateConfigurationRequest { /** * @public *Confirms that the requester knows that they will be charged for the request. Bucket - * owners need not specify this parameter in their requests. For information about downloading - * objects from Requester Pays buckets, see Downloading Objects in + * owners need not specify this parameter in their requests. If either the source or + * destination Amazon S3 bucket has Requester Pays enabled, the requester will pay for + * corresponding charges to copy the object. For information about downloading objects from + * Requester Pays buckets, see Downloading Objects in * Requester Pays Buckets in the Amazon S3 User Guide.
*/ RequestPayer?: RequestPayer | string; @@ -4771,22 +4788,26 @@ export interface ServerSideEncryptionByDefault { *Amazon Web Services Key Management Service (KMS) customer Amazon Web Services KMS key ID to use for the default
* encryption. This parameter is allowed if and only if SSEAlgorithm
is set to
* aws:kms
.
You can specify the key ID or the Amazon Resource Name (ARN) of the KMS key. If you use - * a key ID, you can run into a LogDestination undeliverable error when creating a VPC flow - * log.
- *If you are using encryption with cross-account or Amazon Web Services service operations you must use - * a fully qualified KMS key ARN. For more information, see Using encryption for cross-account operations.
+ *You can specify the key ID, key alias, or the Amazon Resource Name (ARN) of the KMS + * key.
*Key ID: 1234abcd-12ab-34cd-56ef-1234567890ab
*
Key ARN:
- * arn:aws:kms:us-east-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab
+ *
Key ARN: arn:aws:kms:us-east-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab
+ *
Key Alias: alias/alias-name
*
If you use a key ID, you can run into a LogDestination undeliverable error when creating + * a VPC flow log.
+ *If you are using encryption with cross-account or Amazon Web Services service operations you must use + * a fully qualified KMS key ARN. For more information, see Using encryption for cross-account operations.
*Amazon S3 only supports symmetric encryption KMS keys. For more information, see Asymmetric keys in Amazon Web Services KMS in the Amazon Web Services Key Management Service * Developer Guide.
@@ -5328,8 +5349,8 @@ export interface GetBucketInventoryConfigurationRequest { export interface LifecycleExpiration { /** * @public - *Indicates at what date the object is to be moved or deleted. The date value must conform to the ISO 8601 format. - * The time is always midnight UTC.
+ *Indicates at what date the object is to be moved or deleted. The date value must conform + * to the ISO 8601 format. The time is always midnight UTC.
*/ Date?: Date; @@ -5707,8 +5728,8 @@ export interface LifecycleRule { * @public *Specifies the days since the initiation of an incomplete multipart upload that Amazon S3 will * wait before permanently removing all parts of the upload. For more information, see - * Aborting Incomplete Multipart Uploads Using a Bucket Lifecycle Configuration in the - * Amazon S3 User Guide.
+ * Aborting Incomplete Multipart Uploads Using a Bucket Lifecycle Configuration in + * the Amazon S3 User Guide. */ AbortIncompleteMultipartUpload?: AbortIncompleteMultipartUpload; } @@ -6187,7 +6208,9 @@ export interface S3KeyFilter { /** * @public *Specifies object key name filtering rules. For information about key name filtering, see - * Configuring event notifications using object key name filtering in the Amazon S3 User Guide.
+ * Configuring event + * notifications using object key name filtering in the + * Amazon S3 User Guide. */ export interface NotificationConfigurationFilter { /** @@ -6227,7 +6250,9 @@ export interface LambdaFunctionConfiguration { /** * @public *Specifies object key name filtering rules. For information about key name filtering, see - * Configuring event notifications using object key name filtering in the Amazon S3 User Guide.
+ * Configuring event + * notifications using object key name filtering in the + * Amazon S3 User Guide. */ Filter?: NotificationConfigurationFilter; } @@ -6261,7 +6286,9 @@ export interface QueueConfiguration { /** * @public *Specifies object key name filtering rules. For information about key name filtering, see - * Configuring event notifications using object key name filtering in the Amazon S3 User Guide.
+ * Configuring event + * notifications using object key name filtering in the + * Amazon S3 User Guide. */ Filter?: NotificationConfigurationFilter; } @@ -6297,7 +6324,9 @@ export interface TopicConfiguration { /** * @public *Specifies object key name filtering rules. For information about key name filtering, see - * Configuring event notifications using object key name filtering in the Amazon S3 User Guide.
+ * Configuring event + * notifications using object key name filtering in the + * Amazon S3 User Guide. */ Filter?: NotificationConfigurationFilter; } @@ -7895,8 +7924,10 @@ export interface GetObjectRequest { /** * @public *Confirms that the requester knows that they will be charged for the request. Bucket - * owners need not specify this parameter in their requests. For information about downloading - * objects from Requester Pays buckets, see Downloading Objects in + * owners need not specify this parameter in their requests. If either the source or + * destination Amazon S3 bucket has Requester Pays enabled, the requester will pay for + * corresponding charges to copy the object. For information about downloading objects from + * Requester Pays buckets, see Downloading Objects in * Requester Pays Buckets in the Amazon S3 User Guide.
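            Since the same key name filtering note is repeated for the Lambda, queue, and topic configurations above, one sketch covers all of them; it wires an SQS queue to object-created events for keys matching a prefix and suffix. The bucket name and queue ARN are placeholders.

            ```ts
            import {
              S3Client,
              PutBucketNotificationConfigurationCommand,
            } from "@aws-sdk/client-s3";

            const client = new S3Client({});

            await client.send(
              new PutBucketNotificationConfigurationCommand({
                Bucket: "example-bucket", // placeholder
                NotificationConfiguration: {
                  QueueConfigurations: [
                    {
                      QueueArn: "arn:aws:sqs:us-east-2:111122223333:example-queue", // placeholder
                      Events: ["s3:ObjectCreated:*"],
                      Filter: {
                        Key: {
                          FilterRules: [
                            { Name: "prefix", Value: "images/" },
                            { Name: "suffix", Value: ".jpg" },
                          ],
                        },
                      },
                    },
                  ],
                },
              })
            );
            ```
            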
*/ RequestPayer?: RequestPayer | string; @@ -8018,8 +8049,10 @@ export interface GetObjectAclRequest { /** * @public *Confirms that the requester knows that they will be charged for the request. Bucket - * owners need not specify this parameter in their requests. For information about downloading - * objects from Requester Pays buckets, see Downloading Objects in + * owners need not specify this parameter in their requests. If either the source or + * destination Amazon S3 bucket has Requester Pays enabled, the requester will pay for + * corresponding charges to copy the object. For information about downloading objects from + * Requester Pays buckets, see Downloading Objects in * Requester Pays Buckets in the Amazon S3 User Guide.
*/ RequestPayer?: RequestPayer | string; @@ -8323,8 +8356,10 @@ export interface GetObjectAttributesRequest { /** * @public *Confirms that the requester knows that they will be charged for the request. Bucket - * owners need not specify this parameter in their requests. For information about downloading - * objects from Requester Pays buckets, see Downloading Objects in + * owners need not specify this parameter in their requests. If either the source or + * destination Amazon S3 bucket has Requester Pays enabled, the requester will pay for + * corresponding charges to copy the object. For information about downloading objects from + * Requester Pays buckets, see Downloading Objects in * Requester Pays Buckets in the Amazon S3 User Guide.
*/ RequestPayer?: RequestPayer | string; @@ -8337,8 +8372,8 @@ export interface GetObjectAttributesRequest { /** * @public - *Specifies the fields at the root level that you want returned in the - * response. Fields that you do not specify are not returned.
+ *Specifies the fields at the root level that you want returned in the response. Fields + * that you do not specify are not returned.
*/ ObjectAttributes: (ObjectAttributes | string)[] | undefined; } @@ -8394,8 +8429,10 @@ export interface GetObjectLegalHoldRequest { /** * @public *Confirms that the requester knows that they will be charged for the request. Bucket - * owners need not specify this parameter in their requests. For information about downloading - * objects from Requester Pays buckets, see Downloading Objects in + * owners need not specify this parameter in their requests. If either the source or + * destination Amazon S3 bucket has Requester Pays enabled, the requester will pay for + * corresponding charges to copy the object. For information about downloading objects from + * Requester Pays buckets, see Downloading Objects in * Requester Pays Buckets in the Amazon S3 User Guide.
*/ RequestPayer?: RequestPayer | string; @@ -8602,8 +8639,10 @@ export interface GetObjectRetentionRequest { /** * @public *Confirms that the requester knows that they will be charged for the request. Bucket - * owners need not specify this parameter in their requests. For information about downloading - * objects from Requester Pays buckets, see Downloading Objects in + * owners need not specify this parameter in their requests. If either the source or + * destination Amazon S3 bucket has Requester Pays enabled, the requester will pay for + * corresponding charges to copy the object. For information about downloading objects from + * Requester Pays buckets, see Downloading Objects in * Requester Pays Buckets in the Amazon S3 User Guide.
*/ RequestPayer?: RequestPayer | string; @@ -8668,8 +8707,10 @@ export interface GetObjectTaggingRequest { /** * @public *Confirms that the requester knows that they will be charged for the request. Bucket - * owners need not specify this parameter in their requests. For information about downloading - * objects from Requester Pays buckets, see Downloading Objects in + * owners need not specify this parameter in their requests. If either the source or + * destination Amazon S3 bucket has Requester Pays enabled, the requester will pay for + * corresponding charges to copy the object. For information about downloading objects from + * Requester Pays buckets, see Downloading Objects in * Requester Pays Buckets in the Amazon S3 User Guide.
*/ RequestPayer?: RequestPayer | string; @@ -8714,8 +8755,10 @@ export interface GetObjectTorrentRequest { /** * @public *Confirms that the requester knows that they will be charged for the request. Bucket - * owners need not specify this parameter in their requests. For information about downloading - * objects from Requester Pays buckets, see Downloading Objects in + * owners need not specify this parameter in their requests. If either the source or + * destination Amazon S3 bucket has Requester Pays enabled, the requester will pay for + * corresponding charges to copy the object. For information about downloading objects from + * Requester Pays buckets, see Downloading Objects in * Requester Pays Buckets in the Amazon S3 User Guide.
*/ RequestPayer?: RequestPayer | string; @@ -8826,10 +8869,11 @@ export interface HeadBucketRequest { * @public *The bucket name.
*When using this action with an access point, you must direct requests to the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. When using this action with an access point through the Amazon Web Services SDKs, you provide the access point ARN in place of the bucket name. For more information about access point ARNs, see Using access points in the Amazon S3 User Guide.
- *When you use this action with an Object Lambda access point, provide the alias of the Object Lambda access point in place of the bucket name.
- * If the Object Lambda access point alias in a request is not valid, the error code InvalidAccessPointAliasError
is returned.
- * For more information about InvalidAccessPointAliasError
, see List of
- * Error Codes.
When you use this action with an Object Lambda access point, provide the alias of the Object Lambda access point in place of the
+ * bucket name. If the Object Lambda access point alias in a request is not valid, the error code
+ * InvalidAccessPointAliasError
is returned. For more information about
+ * InvalidAccessPointAliasError
, see List of Error
+ * Codes.
When you use this action with Amazon S3 on Outposts, you must direct requests to the S3 on Outposts hostname. The S3 on Outposts hostname takes the form
* AccessPointName-AccountId.outpostID.s3-outposts.Region.amazonaws.com
. When you use this action with S3 on Outposts through the Amazon Web Services SDKs, you provide the Outposts access point ARN in place of the bucket name. For more information about S3 on Outposts ARNs, see What is S3 on Outposts? in the Amazon S3 User Guide.
Note: To supply the Multi-region Access Point (MRAP) to Bucket, you need to install the "@aws-sdk/signature-v4-crt" package to your project dependencies. @@ -9266,8 +9310,10 @@ export interface HeadObjectRequest { /** * @public *
Confirms that the requester knows that they will be charged for the request. Bucket - * owners need not specify this parameter in their requests. For information about downloading - * objects from Requester Pays buckets, see Downloading Objects in + * owners need not specify this parameter in their requests. If either the source or + * destination Amazon S3 bucket has Requester Pays enabled, the requester will pay for + * corresponding charges to copy the object. For information about downloading objects from + * Requester Pays buckets, see Downloading Objects in * Requester Pays Buckets in the Amazon S3 User Guide.
*/ RequestPayer?: RequestPayer | string; @@ -9855,8 +9901,10 @@ export interface ListMultipartUploadsRequest { /** * @public *Confirms that the requester knows that they will be charged for the request. Bucket - * owners need not specify this parameter in their requests. For information about downloading - * objects from Requester Pays buckets, see Downloading Objects in + * owners need not specify this parameter in their requests. If either the source or + * destination Amazon S3 bucket has Requester Pays enabled, the requester will pay for + * corresponding charges to copy the object. For information about downloading objects from + * Requester Pays buckets, see Downloading Objects in * Requester Pays Buckets in the Amazon S3 User Guide.
*/ RequestPayer?: RequestPayer | string; @@ -9864,10 +9912,10 @@ export interface ListMultipartUploadsRequest { /** * @public - *Specifies the restoration status of an object. Objects in certain storage classes must be restored - * before they can be retrieved. For more information about these storage classes and how to work with - * archived objects, see - * Working with archived objects in the Amazon S3 User Guide.
+ *Specifies the restoration status of an object. Objects in certain storage classes must + * be restored before they can be retrieved. For more information about these storage classes + * and how to work with archived objects, see Working with archived + * objects in the Amazon S3 User Guide.
*/ export interface RestoreStatus { /** @@ -9877,9 +9925,11 @@ export interface RestoreStatus { *
* x-amz-optional-object-attributes: IsRestoreInProgress="true"
*
If the object restoration has completed, the header returns the value FALSE
. For example:
If the object restoration has completed, the header returns the value
+ * FALSE
. For example:
- * x-amz-optional-object-attributes: IsRestoreInProgress="false", RestoreExpiryDate="2012-12-21T00:00:00.000Z"
+ * x-amz-optional-object-attributes: IsRestoreInProgress="false",
+ * RestoreExpiryDate="2012-12-21T00:00:00.000Z"
*
If the object hasn't been restored, there is no header response.
*/ @@ -9890,7 +9940,8 @@ export interface RestoreStatus { *Indicates when the restored copy will expire. This value is populated only if the object * has already been restored. For example:
*
- * x-amz-optional-object-attributes: IsRestoreInProgress="false", RestoreExpiryDate="2012-12-21T00:00:00.000Z"
+ * x-amz-optional-object-attributes: IsRestoreInProgress="false",
+ * RestoreExpiryDate="2012-12-21T00:00:00.000Z"
*
Specifies the restoration status of an object. Objects in certain storage classes must be restored - * before they can be retrieved. For more information about these storage classes and how to work with - * archived objects, see - * Working with archived objects in the Amazon S3 User Guide.
+ *Specifies the restoration status of an object. Objects in certain storage classes must + * be restored before they can be retrieved. For more information about these storage classes + * and how to work with archived objects, see Working with archived + * objects in the Amazon S3 User Guide.
*/ RestoreStatus?: RestoreStatus; } @@ -10145,15 +10196,16 @@ export interface ListObjectsRequest { /** * @public - *Marker is where you want Amazon S3 to start listing from. Amazon S3 starts listing after - * this specified key. Marker can be any key in the bucket.
+ *Marker is where you want Amazon S3 to start listing from. Amazon S3 starts listing after this + * specified key. Marker can be any key in the bucket.
*/ Marker?: string; /** * @public *Sets the maximum number of keys returned in the response. By default, the action returns - * up to 1,000 key names. The response might contain fewer keys but will never contain more.
+ * up to 1,000 key names. The response might contain fewer keys but will never contain more. + * */ MaxKeys?: number; @@ -10178,8 +10230,8 @@ export interface ListObjectsRequest { /** * @public - *Specifies the optional fields that you want returned in the response. - * Fields that you do not specify are not returned.
+ *Specifies the optional fields that you want returned in the response. Fields that you do + * not specify are not returned.
*/ OptionalObjectAttributes?: (OptionalObjectAttributes | string)[]; } @@ -10408,8 +10460,8 @@ export interface ListObjectsV2Request { /** * @public - *Specifies the optional fields that you want returned in the response. - * Fields that you do not specify are not returned.
+ *Specifies the optional fields that you want returned in the response. Fields that you do + * not specify are not returned.
*/ OptionalObjectAttributes?: (OptionalObjectAttributes | string)[]; } @@ -10526,10 +10578,10 @@ export interface ObjectVersion { /** * @public - *Specifies the restoration status of an object. Objects in certain storage classes must be restored - * before they can be retrieved. For more information about these storage classes and how to work with - * archived objects, see - * Working with archived objects in the Amazon S3 User Guide.
+ *Specifies the restoration status of an object. Objects in certain storage classes must + * be restored before they can be retrieved. For more information about these storage classes + * and how to work with archived objects, see Working with archived + * objects in the Amazon S3 User Guide.
*/ RestoreStatus?: RestoreStatus; } @@ -10718,16 +10770,18 @@ export interface ListObjectVersionsRequest { /** * @public *Confirms that the requester knows that they will be charged for the request. Bucket - * owners need not specify this parameter in their requests. For information about downloading - * objects from Requester Pays buckets, see Downloading Objects in + * owners need not specify this parameter in their requests. If either the source or + * destination Amazon S3 bucket has Requester Pays enabled, the requester will pay for + * corresponding charges to copy the object. For information about downloading objects from + * Requester Pays buckets, see Downloading Objects in * Requester Pays Buckets in the Amazon S3 User Guide.
*/ RequestPayer?: RequestPayer | string; /** * @public - *Specifies the optional fields that you want returned in the response. - * Fields that you do not specify are not returned.
+ *Specifies the optional fields that you want returned in the response. Fields that you do + * not specify are not returned.
*/ OptionalObjectAttributes?: (OptionalObjectAttributes | string)[]; } @@ -10808,7 +10862,8 @@ export interface ListPartsOutput { *If the bucket has a lifecycle rule configured with an action to abort incomplete * multipart uploads and the prefix in the lifecycle rule matches the object name in the * request, then the response includes this header indicating when the initiated multipart - * upload will become eligible for abort operation. For more information, see Aborting Incomplete Multipart Uploads Using a Bucket Lifecycle Configuration.
+ * upload will become eligible for abort operation. For more information, see Aborting Incomplete Multipart Uploads Using a Bucket Lifecycle + * Configuration. *The response will also include the x-amz-abort-rule-id
header that will
* provide the ID of the lifecycle configuration rule that defines this action.
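The two headers above surface on the ListParts output as AbortDate and AbortRuleId; a brief sketch of reading them, with placeholder names and an invented upload ID.

import { S3Client, ListPartsCommand } from "@aws-sdk/client-s3";

const client = new S3Client({ region: "us-east-1" });

async function showAbortWindow() {
  const parts = await client.send(
    new ListPartsCommand({
      Bucket: "example-bucket",       // placeholder
      Key: "videos/large-upload.mp4", // placeholder
      UploadId: "EXAMPLE-UPLOAD-ID",  // placeholder
    })
  );
  // Only populated when a lifecycle rule that aborts incomplete uploads matches this key.
  console.log(parts.AbortDate, parts.AbortRuleId);
}

showAbortWindow().catch(console.error);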
Confirms that the requester knows that they will be charged for the request. Bucket - * owners need not specify this parameter in their requests. For information about downloading - * objects from Requester Pays buckets, see Downloading Objects in + * owners need not specify this parameter in their requests. If either the source or + * destination Amazon S3 bucket has Requester Pays enabled, the requester will pay for + * corresponding charges to copy the object. For information about downloading objects from + * Requester Pays buckets, see Downloading Objects in * Requester Pays Buckets in the Amazon S3 User Guide.
*/ RequestPayer?: RequestPayer | string; @@ -12018,9 +12075,9 @@ export interface PutObjectOutput { * @public *If present, specifies the Amazon Web Services KMS Encryption Context to use for object encryption. The
* value of this header is a base64-encoded UTF-8 string holding JSON with the encryption
- * context key-value pairs. This value is stored as object metadata and automatically gets passed
- * on to Amazon Web Services KMS for future GetObject
or CopyObject
operations on
- * this object.
GetObject
or CopyObject
+ * operations on this object.
*/
SSEKMSEncryptionContext?: string;
@@ -12279,7 +12336,7 @@ export interface PutObjectRequest {
/**
* @public
* If Specifies the Amazon Web Services KMS Encryption Context to use for object encryption. The value of
* this header is a base64-encoded UTF-8 string holding JSON with the encryption context
- * key-value pairs. This value is stored as object metadata and automatically gets passed on to
- * Amazon Web Services KMS for future x-amz-server-side-encryption
has a valid value of aws:kms
- * or aws:kms:dsse
, this header specifies the ID of the Key Management Service (KMS)
+ * or aws:kms:dsse
, this header specifies the ID (Key ID, Key ARN, or Key Alias) of the Key Management Service (KMS)
* symmetric encryption customer managed key that was used for the object. If you specify
* x-amz-server-side-encryption:aws:kms
or
* x-amz-server-side-encryption:aws:kms:dsse
, but do not provide
@@ -12293,9 +12350,9 @@ export interface PutObjectRequest {
* @public
*
GetObject
or CopyObject
operations on this
- * object.GetObject
or CopyObject
operations on
+ * this object.
Confirms that the requester knows that they will be charged for the request. Bucket - * owners need not specify this parameter in their requests. For information about downloading - * objects from Requester Pays buckets, see Downloading Objects in + * owners need not specify this parameter in their requests. If either the source or + * destination Amazon S3 bucket has Requester Pays enabled, the requester will pay for + * corresponding charges to copy the object. For information about downloading objects from + * Requester Pays buckets, see Downloading Objects in * Requester Pays Buckets in the Amazon S3 User Guide.
*/ RequestPayer?: RequestPayer | string; @@ -12463,8 +12522,10 @@ export interface PutObjectAclRequest { /** * @public *Confirms that the requester knows that they will be charged for the request. Bucket - * owners need not specify this parameter in their requests. For information about downloading - * objects from Requester Pays buckets, see Downloading Objects in + * owners need not specify this parameter in their requests. If either the source or + * destination Amazon S3 bucket has Requester Pays enabled, the requester will pay for + * corresponding charges to copy the object. For information about downloading objects from + * Requester Pays buckets, see Downloading Objects in * Requester Pays Buckets in the Amazon S3 User Guide.
*/ RequestPayer?: RequestPayer | string; @@ -12523,8 +12584,10 @@ export interface PutObjectLegalHoldRequest { /** * @public *Confirms that the requester knows that they will be charged for the request. Bucket - * owners need not specify this parameter in their requests. For information about downloading - * objects from Requester Pays buckets, see Downloading Objects in + * owners need not specify this parameter in their requests. If either the source or + * destination Amazon S3 bucket has Requester Pays enabled, the requester will pay for + * corresponding charges to copy the object. For information about downloading objects from + * Requester Pays buckets, see Downloading Objects in * Requester Pays Buckets in the Amazon S3 User Guide.
*/ RequestPayer?: RequestPayer | string; diff --git a/clients/client-s3/src/models/models_1.ts b/clients/client-s3/src/models/models_1.ts index 4fb84de3656b4..514d508a6c662 100644 --- a/clients/client-s3/src/models/models_1.ts +++ b/clients/client-s3/src/models/models_1.ts @@ -53,8 +53,10 @@ export interface PutObjectLockConfigurationRequest { /** * @public *Confirms that the requester knows that they will be charged for the request. Bucket - * owners need not specify this parameter in their requests. For information about downloading - * objects from Requester Pays buckets, see Downloading Objects in + * owners need not specify this parameter in their requests. If either the source or + * destination Amazon S3 bucket has Requester Pays enabled, the requester will pay for + * corresponding charges to copy the object. For information about downloading objects from + * Requester Pays buckets, see Downloading Objects in * Requester Pays Buckets in the Amazon S3 User Guide.
*/ RequestPayer?: RequestPayer | string; @@ -133,8 +135,10 @@ export interface PutObjectRetentionRequest { /** * @public *Confirms that the requester knows that they will be charged for the request. Bucket - * owners need not specify this parameter in their requests. For information about downloading - * objects from Requester Pays buckets, see Downloading Objects in + * owners need not specify this parameter in their requests. If either the source or + * destination Amazon S3 bucket has Requester Pays enabled, the requester will pay for + * corresponding charges to copy the object. For information about downloading objects from + * Requester Pays buckets, see Downloading Objects in * Requester Pays Buckets in the Amazon S3 User Guide.
*/ RequestPayer?: RequestPayer | string; @@ -250,8 +254,10 @@ export interface PutObjectTaggingRequest { /** * @public *Confirms that the requester knows that they will be charged for the request. Bucket - * owners need not specify this parameter in their requests. For information about downloading - * objects from Requester Pays buckets, see Downloading Objects in + * owners need not specify this parameter in their requests. If either the source or + * destination Amazon S3 bucket has Requester Pays enabled, the requester will pay for + * corresponding charges to copy the object. For information about downloading objects from + * Requester Pays buckets, see Downloading Objects in * Requester Pays Buckets in the Amazon S3 User Guide.
*/ RequestPayer?: RequestPayer | string; @@ -900,8 +906,10 @@ export interface RestoreObjectRequest { /** * @public *Confirms that the requester knows that they will be charged for the request. Bucket - * owners need not specify this parameter in their requests. For information about downloading - * objects from Requester Pays buckets, see Downloading Objects in + * owners need not specify this parameter in their requests. If either the source or + * destination Amazon S3 bucket has Requester Pays enabled, the requester will pay for + * corresponding charges to copy the object. For information about downloading objects from + * Requester Pays buckets, see Downloading Objects in * Requester Pays Buckets in the Amazon S3 User Guide.
*/ RequestPayer?: RequestPayer | string; @@ -1530,8 +1538,10 @@ export interface UploadPartRequest { /** * @public *Confirms that the requester knows that they will be charged for the request. Bucket - * owners need not specify this parameter in their requests. For information about downloading - * objects from Requester Pays buckets, see Downloading Objects in + * owners need not specify this parameter in their requests. If either the source or + * destination Amazon S3 bucket has Requester Pays enabled, the requester will pay for + * corresponding charges to copy the object. For information about downloading objects from + * Requester Pays buckets, see Downloading Objects in * Requester Pays Buckets in the Amazon S3 User Guide.
*/ RequestPayer?: RequestPayer | string; @@ -1804,8 +1814,10 @@ export interface UploadPartCopyRequest { /** * @public *Confirms that the requester knows that they will be charged for the request. Bucket - * owners need not specify this parameter in their requests. For information about downloading - * objects from Requester Pays buckets, see Downloading Objects in + * owners need not specify this parameter in their requests. If either the source or + * destination Amazon S3 bucket has Requester Pays enabled, the requester will pay for + * corresponding charges to copy the object. For information about downloading objects from + * Requester Pays buckets, see Downloading Objects in * Requester Pays Buckets in the Amazon S3 User Guide.
*/ RequestPayer?: RequestPayer | string; @@ -2163,7 +2175,7 @@ export interface WriteGetObjectResponseRequest { /** * @public - *If present, specifies the ID of the Amazon Web Services Key Management Service (Amazon Web Services KMS) symmetric + *
If present, specifies the ID (Key ID, Key ARN, or Key Alias) of the Amazon Web Services Key Management Service (Amazon Web Services KMS) symmetric * encryption customer managed key that was used for the object stored in Amazon S3.</p>
*/ SSEKMSKeyId?: string; diff --git a/codegen/sdk-codegen/aws-models/s3.json b/codegen/sdk-codegen/aws-models/s3.json index a211a1a2f0e1a..6a6a04809fbd8 100644 --- a/codegen/sdk-codegen/aws-models/s3.json +++ b/codegen/sdk-codegen/aws-models/s3.json @@ -44,7 +44,7 @@ } }, "traits": { - "smithy.api#documentation": "Specifies the days since the initiation of an incomplete multipart upload that Amazon S3 will\n wait before permanently removing all parts of the upload. For more information, see \n Aborting Incomplete Multipart Uploads Using a Bucket Lifecycle Configuration in the\n Amazon S3 User Guide.
" + "smithy.api#documentation": "Specifies the days since the initiation of an incomplete multipart upload that Amazon S3 will\n wait before permanently removing all parts of the upload. For more information, see \n Aborting Incomplete Multipart Uploads Using a Bucket Lifecycle Configuration in\n the Amazon S3 User Guide.
" } }, "com.amazonaws.s3#AbortMultipartUpload": { @@ -4768,9 +4768,7 @@ "disableDoubleEncoding": true, "name": "sigv4a", "signingName": "s3", - "signingRegionSet": [ - "*" - ] + "signingRegionSet": ["*"] } ] }, @@ -8240,9 +8238,7 @@ "authSchemes": [ { "name": "sigv4a", - "signingRegionSet": [ - "*" - ], + "signingRegionSet": ["*"], "signingName": "s3", "disableDoubleEncoding": true } @@ -16357,7 +16353,7 @@ "target": "com.amazonaws.s3#CompleteMultipartUploadOutput" }, "traits": { - "smithy.api#documentation": "Completes a multipart upload by assembling previously uploaded parts.
\nYou first initiate the multipart upload and then upload all parts using the UploadPart\n operation. After successfully uploading all relevant parts of an upload, you call this\n action to complete the upload. Upon receiving this request, Amazon S3 concatenates all the\n parts in ascending order by part number to create a new object. In the Complete Multipart\n Upload request, you must provide the parts list. You must ensure that the parts list is\n complete. This action concatenates the parts that you provide in the list. For each part in\n the list, you must provide the part number and the ETag
value, returned after\n that part was uploaded.
Processing of a Complete Multipart Upload request could take several minutes to\n complete. After Amazon S3 begins processing the request, it sends an HTTP response header that\n specifies a 200 OK response. While processing is in progress, Amazon S3 periodically sends white\n space characters to keep the connection from timing out. A request could fail after the\n initial 200 OK response has been sent. This means that a 200 OK
response can\n contain either a success or an error. If you call the S3 API directly, make sure to design\n your application to parse the contents of the response and handle it appropriately. If you\n use Amazon Web Services SDKs, SDKs handle this condition. The SDKs detect the embedded error and apply\n error handling per your configuration settings (including automatically retrying the\n request as appropriate). If the condition persists, the SDKs throws an exception (or, for\n the SDKs that don't use exceptions, they return the error).
Note that if CompleteMultipartUpload
fails, applications should be prepared\n to retry the failed requests. For more information, see Amazon S3 Error Best\n Practices.
You cannot use Content-Type: application/x-www-form-urlencoded
with\n Complete Multipart Upload requests. Also, if you do not provide a\n Content-Type
header, CompleteMultipartUpload
returns a 200\n OK response.
For more information about multipart uploads, see Uploading Objects Using Multipart\n Upload.
\nFor information about permissions required to use the multipart upload API, see Multipart Upload\n and Permissions.
\n\n CompleteMultipartUpload
has the following special errors:
Error code: EntityTooSmall
\n
Description: Your proposed upload is smaller than the minimum allowed object\n size. Each part must be at least 5 MB in size, except the last part.
\n400 Bad Request
\nError code: InvalidPart
\n
Description: One or more of the specified parts could not be found. The part\n might not have been uploaded, or the specified entity tag might not have\n matched the part's entity tag.
\n400 Bad Request
\nError code: InvalidPartOrder
\n
Description: The list of parts was not in ascending order. The parts list\n must be specified in order by part number.
\n400 Bad Request
\nError code: NoSuchUpload
\n
Description: The specified multipart upload does not exist. The upload ID\n might be invalid, or the multipart upload might have been aborted or\n completed.
\n404 Not Found
\nThe following operations are related to CompleteMultipartUpload
:
\n UploadPart\n
\n\n AbortMultipartUpload\n
\n\n ListParts\n
\n\n ListMultipartUploads\n
\nCompletes a multipart upload by assembling previously uploaded parts.
\nYou first initiate the multipart upload and then upload all parts using the UploadPart\n operation. After successfully uploading all relevant parts of an upload, you call this\n action to complete the upload. Upon receiving this request, Amazon S3 concatenates all the parts\n in ascending order by part number to create a new object. In the Complete Multipart Upload\n request, you must provide the parts list. You must ensure that the parts list is complete.\n This action concatenates the parts that you provide in the list. For each part in the list,\n you must provide the part number and the ETag
value, returned after that part\n was uploaded.
Processing of a Complete Multipart Upload request could take several minutes to\n complete. After Amazon S3 begins processing the request, it sends an HTTP response header that\n specifies a 200 OK response. While processing is in progress, Amazon S3 periodically sends white\n space characters to keep the connection from timing out. A request could fail after the\n initial 200 OK response has been sent. This means that a 200 OK
response can\n contain either a success or an error. If you call the S3 API directly, make sure to design\n your application to parse the contents of the response and handle it appropriately. If you\n use Amazon Web Services SDKs, SDKs handle this condition. The SDKs detect the embedded error and apply\n error handling per your configuration settings (including automatically retrying the\n request as appropriate). If the condition persists, the SDKs throws an exception (or, for\n the SDKs that don't use exceptions, they return the error).
Note that if CompleteMultipartUpload
fails, applications should be prepared\n to retry the failed requests. For more information, see Amazon S3 Error Best\n Practices.
You cannot use Content-Type: application/x-www-form-urlencoded
with\n Complete Multipart Upload requests. Also, if you do not provide a\n Content-Type
header, CompleteMultipartUpload
returns a 200\n OK response.
For more information about multipart uploads, see Uploading Objects Using Multipart\n Upload.
\nFor information about permissions required to use the multipart upload API, see Multipart Upload\n and Permissions.
\n\n CompleteMultipartUpload
has the following special errors:
Error code: EntityTooSmall
\n
Description: Your proposed upload is smaller than the minimum allowed object\n size. Each part must be at least 5 MB in size, except the last part.
\n400 Bad Request
\nError code: InvalidPart
\n
Description: One or more of the specified parts could not be found. The part\n might not have been uploaded, or the specified entity tag might not have\n matched the part's entity tag.
\n400 Bad Request
\nError code: InvalidPartOrder
\n
Description: The list of parts was not in ascending order. The parts list\n must be specified in order by part number.
\n400 Bad Request
\nError code: NoSuchUpload
\n
Description: The specified multipart upload does not exist. The upload ID\n might be invalid, or the multipart upload might have been aborted or\n completed.
\n404 Not Found
\nThe following operations are related to CompleteMultipartUpload
:
\n UploadPart\n
\n\n AbortMultipartUpload\n
\n\n ListParts\n
\n\n ListMultipartUploads\n
\nCreates a copy of an object that is already stored in Amazon S3.
\nYou can store individual objects of up to 5 TB in Amazon S3. You create a copy of your\n object up to 5 GB in size in a single atomic action using this API. However, to copy an\n object greater than 5 GB, you must use the multipart upload Upload Part - Copy\n (UploadPartCopy) API. For more information, see Copy Object Using the\n REST Multipart Upload API.
\nAll copy requests must be authenticated. Additionally, you must have\n read access to the source object and write\n access to the destination bucket. For more information, see REST Authentication. Both the\n Region that you want to copy the object from and the Region that you want to copy the\n object to must be enabled for your account.
\nA copy request might return an error when Amazon S3 receives the copy request or while Amazon S3\n is copying the files. If the error occurs before the copy action starts, you receive a\n standard Amazon S3 error. If the error occurs during the copy operation, the error response is\n embedded in the 200 OK
response. This means that a 200 OK
\n response can contain either a success or an error. If you call the S3 API directly, make\n sure to design your application to parse the contents of the response and handle it\n appropriately. If you use Amazon Web Services SDKs, SDKs handle this condition. The SDKs detect the\n embedded error and apply error handling per your configuration settings (including\n automatically retrying the request as appropriate). If the condition persists, the SDKs\n throws an exception (or, for the SDKs that don't use exceptions, they return the\n error).
If the copy is successful, you receive a response with information about the copied\n object.
\nIf the request is an HTTP 1.1 request, the response is chunk encoded. If it were not,\n it would not contain the content-length, and you would need to read the entire\n body.
\nThe copy request charge is based on the storage class and Region that you specify for\n the destination object. The request can also result in a data retrieval charge for the\n source if the source storage class bills for data retrieval. For pricing information, see\n Amazon S3 pricing.
\nAmazon S3 transfer acceleration does not support cross-Region copies. If you request a\n cross-Region copy using a transfer acceleration endpoint, you get a 400 Bad\n Request
error. For more information, see Transfer\n Acceleration.
When copying an object, you can preserve all metadata (the default) or specify new metadata.\n However, the access control list (ACL) is not preserved and is set to private for the user making the request. To\n override the default ACL setting, specify a new ACL when generating a copy request. For\n more information, see Using ACLs.
\nTo specify whether you want the object metadata copied from the source object or\n replaced with metadata provided in the request, you can optionally add the\n x-amz-metadata-directive
header. When you grant permissions, you can use\n the s3:x-amz-metadata-directive
condition key to enforce certain metadata\n behavior when objects are uploaded. For more information, see Specifying Conditions in a\n Policy in the Amazon S3 User Guide. For a complete list of\n Amazon S3-specific condition keys, see Actions, Resources, and Condition Keys for\n Amazon S3.
\n x-amz-website-redirect-location
is unique to each object and must be\n specified in the request headers to copy the value.
To only copy an object under certain conditions, such as whether the Etag
\n matches or whether the object was modified before or after a specified date, use the\n following request parameters:
\n x-amz-copy-source-if-match
\n
\n x-amz-copy-source-if-none-match
\n
\n x-amz-copy-source-if-unmodified-since
\n
\n x-amz-copy-source-if-modified-since
\n
If both the x-amz-copy-source-if-match
and\n x-amz-copy-source-if-unmodified-since
headers are present in the request\n and evaluate as follows, Amazon S3 returns 200 OK
and copies the data:
\n x-amz-copy-source-if-match
condition evaluates to true
\n x-amz-copy-source-if-unmodified-since
condition evaluates to\n false
If both the x-amz-copy-source-if-none-match
and\n x-amz-copy-source-if-modified-since
headers are present in the request and\n evaluate as follows, Amazon S3 returns the 412 Precondition Failed
response\n code:
\n x-amz-copy-source-if-none-match
condition evaluates to false
\n x-amz-copy-source-if-modified-since
condition evaluates to\n true
All headers with the x-amz-
prefix, including\n x-amz-copy-source
, must be signed.
Amazon S3 automatically encrypts all new objects that are copied to an S3 bucket. When\n copying an object, if you don't specify encryption information in your copy\n request, the encryption setting of the target object is set to the default\n encryption configuration of the destination bucket. By default, all buckets have a\n base level of encryption configuration that uses server-side encryption with Amazon S3\n managed keys (SSE-S3). If the destination bucket has a default encryption\n configuration that uses server-side encryption with Key Management Service (KMS) keys\n (SSE-KMS), dual-layer server-side encryption with Amazon Web Services KMS keys (DSSE-KMS), or\n server-side encryption with customer-provided encryption keys (SSE-C), Amazon S3 uses\n the corresponding KMS key, or a customer-provided key to encrypt the target\n object copy.
\nWhen you perform a CopyObject
operation, if you want to use a different type\n of encryption setting for the target object, you can use other appropriate\n encryption-related headers to encrypt the target object with a KMS key, an Amazon S3 managed\n key, or a customer-provided key. With server-side encryption, Amazon S3 encrypts your data as it\n writes your data to disks in its data centers and decrypts the data when you access it. If the\n encryption setting in your request is different from the default encryption configuration\n of the destination bucket, the encryption setting in your request takes precedence. If the\n source object for the copy is stored in Amazon S3 using SSE-C, you must provide the necessary\n encryption information in your request so that Amazon S3 can decrypt the object for copying. For\n more information about server-side encryption, see Using Server-Side\n Encryption.
If a target object uses SSE-KMS, you can enable an S3 Bucket Key for the\n object. For more information, see Amazon S3 Bucket Keys in the\n Amazon S3 User Guide.
\nWhen copying an object, you can optionally use headers to grant ACL-based permissions.\n By default, all objects are private. Only the owner has full access control. When adding a\n new object, you can grant permissions to individual Amazon Web Services accounts or to predefined groups\n that are defined by Amazon S3. These permissions are then added to the ACL on the object. For more\n information, see Access Control List (ACL) Overview and Managing ACLs Using the REST\n API.
\nIf the bucket that you're copying objects to uses the bucket owner enforced setting for\n S3 Object Ownership, ACLs are disabled and no longer affect permissions. Buckets that use\n this setting only accept PUT
requests that don't specify an ACL or PUT
requests that\n specify bucket owner full control ACLs, such as the bucket-owner-full-control
\n canned ACL or an equivalent form of this ACL expressed in the XML format.
For more information, see Controlling ownership of\n objects and disabling ACLs in the Amazon S3 User Guide.
\nIf your bucket uses the bucket owner enforced setting for Object Ownership, all\n objects written to the bucket by any account will be owned by the bucket owner.
\nWhen copying an object, if it has a checksum, that checksum will be copied to the new\n object by default. When you copy the object over, you can optionally specify a different\n checksum algorithm to use with the x-amz-checksum-algorithm
header.
You can use the CopyObject
action to change the storage class of an object\n that is already stored in Amazon S3 by using the StorageClass
parameter. For more\n information, see Storage Classes in the\n Amazon S3 User Guide.
If the source object's storage class is GLACIER, you must restore a copy of\n this object before you can use it as a source object for the copy operation. For\n more information, see RestoreObject. For\n more information, see Copying\n Objects.
\nBy default, x-amz-copy-source
header identifies the current version of an object\n to copy. If the current version is a delete marker, Amazon S3 behaves as if the object was\n deleted. To copy a different version, use the versionId
subresource.
If you enable versioning on the target bucket, Amazon S3 generates a unique version ID for\n the object being copied. This version ID is different from the version ID of the source\n object. Amazon S3 returns the version ID of the copied object in the\n x-amz-version-id
response header in the response.
If you do not enable versioning or suspend it on the target bucket, the version ID that\n Amazon S3 generates is always null.
\nThe following operations are related to CopyObject
:
Creates a copy of an object that is already stored in Amazon S3.
\nYou can store individual objects of up to 5 TB in Amazon S3. You create a copy of your\n object up to 5 GB in size in a single atomic action using this API. However, to copy an\n object greater than 5 GB, you must use the multipart upload Upload Part - Copy\n (UploadPartCopy) API. For more information, see Copy Object Using the\n REST Multipart Upload API.
\nAll copy requests must be authenticated. Additionally, you must have\n read access to the source object and write\n access to the destination bucket. For more information, see REST Authentication. Both the\n Region that you want to copy the object from and the Region that you want to copy the\n object to must be enabled for your account.
\nA copy request might return an error when Amazon S3 receives the copy request or while Amazon S3\n is copying the files. If the error occurs before the copy action starts, you receive a\n standard Amazon S3 error. If the error occurs during the copy operation, the error response is\n embedded in the 200 OK
response. This means that a 200 OK
\n response can contain either a success or an error. If you call the S3 API directly, make\n sure to design your application to parse the contents of the response and handle it\n appropriately. If you use Amazon Web Services SDKs, SDKs handle this condition. The SDKs detect the\n embedded error and apply error handling per your configuration settings (including\n automatically retrying the request as appropriate). If the condition persists, the SDKs\n throws an exception (or, for the SDKs that don't use exceptions, they return the\n error).
If the copy is successful, you receive a response with information about the copied\n object.
\nIf the request is an HTTP 1.1 request, the response is chunk encoded. If it were not,\n it would not contain the content-length, and you would need to read the entire\n body.
\nThe copy request charge is based on the storage class and Region that you specify for\n the destination object. The request can also result in a data retrieval charge for the\n source if the source storage class bills for data retrieval. For pricing information, see\n Amazon S3 pricing.
\nAmazon S3 transfer acceleration does not support cross-Region copies. If you request a\n cross-Region copy using a transfer acceleration endpoint, you get a 400 Bad\n Request
error. For more information, see Transfer\n Acceleration.
When copying an object, you can preserve all metadata (the default) or specify\n new metadata. However, the access control list (ACL) is not preserved and is set\n to private for the user making the request. To override the default ACL setting,\n specify a new ACL when generating a copy request. For more information, see Using\n ACLs.
\nTo specify whether you want the object metadata copied from the source object\n or replaced with metadata provided in the request, you can optionally add the\n x-amz-metadata-directive
header. When you grant permissions, you\n can use the s3:x-amz-metadata-directive
condition key to enforce\n certain metadata behavior when objects are uploaded. For more information, see\n Specifying Conditions in a\n Policy in the Amazon S3 User Guide. For a complete list\n of Amazon S3-specific condition keys, see Actions, Resources, and Condition\n Keys for Amazon S3.
\n x-amz-website-redirect-location
is unique to each object and\n must be specified in the request headers to copy the value.
To only copy an object under certain conditions, such as whether the\n Etag
matches or whether the object was modified before or after a\n specified date, use the following request parameters:
\n x-amz-copy-source-if-match
\n
\n x-amz-copy-source-if-none-match
\n
\n x-amz-copy-source-if-unmodified-since
\n
\n x-amz-copy-source-if-modified-since
\n
If both the x-amz-copy-source-if-match
and\n x-amz-copy-source-if-unmodified-since
headers are present in the\n request and evaluate as follows, Amazon S3 returns 200 OK
and copies the\n data:
\n x-amz-copy-source-if-match
condition evaluates to\n true
\n x-amz-copy-source-if-unmodified-since
condition evaluates to\n false
If both the x-amz-copy-source-if-none-match
and\n x-amz-copy-source-if-modified-since
headers are present in the\n request and evaluate as follows, Amazon S3 returns the 412 Precondition\n Failed
response code:
\n x-amz-copy-source-if-none-match
condition evaluates to\n false
\n x-amz-copy-source-if-modified-since
condition evaluates to\n true
All headers with the x-amz-
prefix, including\n x-amz-copy-source
, must be signed.
Amazon S3 automatically encrypts all new objects that are copied to an S3 bucket.\n When copying an object, if you don't specify encryption information in your copy\n request, the encryption setting of the target object is set to the default\n encryption configuration of the destination bucket. By default, all buckets have a\n base level of encryption configuration that uses server-side encryption with Amazon S3\n managed keys (SSE-S3). If the destination bucket has a default encryption\n configuration that uses server-side encryption with Key Management Service (KMS) keys\n (SSE-KMS), dual-layer server-side encryption with Amazon Web Services KMS keys (DSSE-KMS), or\n server-side encryption with customer-provided encryption keys (SSE-C), Amazon S3 uses\n the corresponding KMS key, or a customer-provided key to encrypt the target\n object copy.
\nWhen you perform a CopyObject
operation, if you want to use a\n different type of encryption setting for the target object, you can use other\n appropriate encryption-related headers to encrypt the target object with a\n KMS key, an Amazon S3 managed key, or a customer-provided key. With server-side\n encryption, Amazon S3 encrypts your data as it writes your data to disks in its data\n centers and decrypts the data when you access it. If the encryption setting in\n your request is different from the default encryption configuration of the\n destination bucket, the encryption setting in your request takes precedence. If\n the source object for the copy is stored in Amazon S3 using SSE-C, you must provide the\n necessary encryption information in your request so that Amazon S3 can decrypt the\n object for copying. For more information about server-side encryption, see Using\n Server-Side Encryption.
If a target object uses SSE-KMS, you can enable an S3 Bucket Key for the\n object. For more information, see Amazon S3 Bucket Keys in the\n Amazon S3 User Guide.
\nWhen copying an object, you can optionally use headers to grant ACL-based\n permissions. By default, all objects are private. Only the owner has full access\n control. When adding a new object, you can grant permissions to individual\n Amazon Web Services accounts or to predefined groups that are defined by Amazon S3. These permissions\n are then added to the ACL on the object. For more information, see Access Control\n List (ACL) Overview and Managing ACLs Using the REST\n API.
\nIf the bucket that you're copying objects to uses the bucket owner enforced\n setting for S3 Object Ownership, ACLs are disabled and no longer affect\n permissions. Buckets that use this setting only accept PUT
requests\n that don't specify an ACL or PUT
requests that specify bucket owner\n full control ACLs, such as the bucket-owner-full-control
canned ACL\n or an equivalent form of this ACL expressed in the XML format.
For more information, see Controlling\n ownership of objects and disabling ACLs in the\n Amazon S3 User Guide.
\nIf your bucket uses the bucket owner enforced setting for Object Ownership,\n all objects written to the bucket by any account will be owned by the bucket\n owner.
\nWhen copying an object, if it has a checksum, that checksum will be copied to\n the new object by default. When you copy the object over, you can optionally\n specify a different checksum algorithm to use with the\n x-amz-checksum-algorithm
header.
You can use the CopyObject
action to change the storage class of\n an object that is already stored in Amazon S3 by using the StorageClass
\n parameter. For more information, see Storage Classes in\n the Amazon S3 User Guide.
If the source object's storage class is GLACIER or\n DEEP_ARCHIVE, or the object's storage class is\n INTELLIGENT_TIERING and it's S3 Intelligent-Tiering access tier is\n Archive Access or Deep Archive Access, you must restore a copy of this object\n before you can use it as a source object for the copy operation. For more\n information, see RestoreObject. For\n more information, see Copying\n Objects.
\nBy default, x-amz-copy-source
header identifies the current\n version of an object to copy. If the current version is a delete marker, Amazon S3\n behaves as if the object was deleted. To copy a different version, use the\n versionId
subresource.
If you enable versioning on the target bucket, Amazon S3 generates a unique version\n ID for the object being copied. This version ID is different from the version ID\n of the source object. Amazon S3 returns the version ID of the copied object in the\n x-amz-version-id
response header in the response.
If you do not enable versioning or suspend it on the target bucket, the version\n ID that Amazon S3 generates is always null.
\nThe following operations are related to CopyObject
:
By default, Amazon S3 uses the STANDARD Storage Class to store newly created objects. The\n STANDARD storage class provides high durability and high availability. Depending on\n performance needs, you can specify a different Storage Class. Amazon S3 on Outposts only uses\n the OUTPOSTS Storage Class. For more information, see Storage Classes in the\n Amazon S3 User Guide.
", + "smithy.api#documentation": "If the x-amz-storage-class
header is not used, the copied object will be stored in the\n STANDARD Storage Class by default. The STANDARD storage class provides high durability and\n high availability. Depending on performance needs, you can specify a different Storage\n Class. Amazon S3 on Outposts only uses the OUTPOSTS Storage Class. For more information, see\n Storage\n Classes in the Amazon S3 User Guide.
Specifies the KMS key ID to use for object encryption. All GET and PUT requests for an\n object protected by KMS will fail if they're not made via SSL or using SigV4. For\n information about configuring any of the officially supported Amazon Web Services SDKs and Amazon Web Services CLI, see\n Specifying the\n Signature Version in Request Authentication in the\n Amazon S3 User Guide.
", + "smithy.api#documentation": "Specifies the KMS ID (Key ID, Key ARN, or Key Alias) to use for object encryption. All GET and PUT requests for an\n object protected by KMS will fail if they're not made via SSL or using SigV4. For\n information about configuring any of the officially supported Amazon Web Services SDKs and Amazon Web Services CLI, see\n Specifying the\n Signature Version in Request Authentication in the\n Amazon S3 User Guide.
", "smithy.api#httpHeader": "x-amz-server-side-encryption-aws-kms-key-id" } }, @@ -17282,16 +17278,19 @@ } ], "traits": { - "smithy.api#documentation": "Creates a new S3 bucket. To create a bucket, you must register with Amazon S3 and have a\n valid Amazon Web Services Access Key ID to authenticate requests. Anonymous requests are never allowed to\n create buckets. By creating the bucket, you become the bucket owner.
\nNot every string is an acceptable bucket name. For information about bucket naming\n restrictions, see Bucket naming\n rules.
\nIf you want to create an Amazon S3 on Outposts bucket, see Create Bucket.
\nBy default, the bucket is created in the US East (N. Virginia) Region. You can\n optionally specify a Region in the request body. You might choose a Region to optimize\n latency, minimize costs, or address regulatory requirements. For example, if you reside in\n Europe, you will probably find it advantageous to create buckets in the Europe (Ireland)\n Region. For more information, see Accessing a\n bucket.
\nIf you send your create bucket request to the s3.amazonaws.com
endpoint,\n the request goes to the us-east-1
Region. Accordingly, the signature calculations in\n Signature Version 4 must use us-east-1
as the Region, even if the location constraint in\n the request specifies another Region where the bucket is to be created. If you create a\n bucket in a Region other than US East (N. Virginia), your application must be able to\n handle 307 redirect. For more information, see Virtual hosting of\n buckets.
In addition to s3:CreateBucket
, the following permissions are required when\n your CreateBucket
request includes specific headers:
\n Access control lists (ACLs) - If your CreateBucket
request\n specifies access control list (ACL) permissions and the ACL is public-read, public-read-write,\n authenticated-read, or if you specify access permissions explicitly through any other\n ACL, both s3:CreateBucket
and s3:PutBucketAcl
permissions\n are needed. If the ACL for the CreateBucket
request is private or if the request doesn't\n specify any ACLs, only s3:CreateBucket
permission is needed.
\n Object Lock - If ObjectLockEnabledForBucket
is set to true in your\n CreateBucket
request,\n s3:PutBucketObjectLockConfiguration
and\n s3:PutBucketVersioning
permissions are required.
\n S3 Object Ownership - If your CreateBucket
request includes the x-amz-object-ownership
header, then the\n s3:PutBucketOwnershipControls
permission is required. By default, ObjectOwnership
is set to BucketOWnerEnforced
and ACLs are disabled. We recommend keeping\n ACLs disabled, except in uncommon use cases where you must control access for each object individually. If you want to change the ObjectOwnership
setting, you can use the \n x-amz-object-ownership
header in your CreateBucket
request to set the ObjectOwnership
setting of your choice.\n For more information about S3 Object Ownership, see Controlling object\n ownership in the Amazon S3 User Guide.
\n S3 Block Public Access - If your specific use case requires granting public access to your S3 resources, you can disable Block Public Access. You can create a new bucket with Block Public Access enabled, then separately call the \n DeletePublicAccessBlock
\n API. To use this operation, you must have the\n s3:PutBucketPublicAccessBlock
permission. By default, all Block\n Public Access settings are enabled for new buckets. To avoid inadvertent exposure of\n your resources, we recommend keeping the S3 Block Public Access settings enabled. For more information about S3 Block Public Access, see Blocking public\n access to your Amazon S3 storage in the Amazon S3 User Guide.
If your CreateBucket
request sets BucketOwnerEnforced
for Amazon S3 Object Ownership\n and specifies a bucket ACL that provides access to an external Amazon Web Services account, your request fails with a 400
error and returns the InvalidBucketAcLWithObjectOwnership
error code. For more information,\n see Setting Object\n Ownership on an existing bucket in the Amazon S3 User Guide.
The following operations are related to CreateBucket
:
\n PutObject\n
\n\n DeleteBucket\n
\nCreates a new S3 bucket. To create a bucket, you must register with Amazon S3 and have a\n valid Amazon Web Services Access Key ID to authenticate requests. Anonymous requests are never allowed to\n create buckets. By creating the bucket, you become the bucket owner.
\nNot every string is an acceptable bucket name. For information about bucket naming\n restrictions, see Bucket naming\n rules.
\nIf you want to create an Amazon S3 on Outposts bucket, see Create Bucket.
\nBy default, the bucket is created in the US East (N. Virginia) Region. You can\n optionally specify a Region in the request body. To constrain the bucket creation to a\n specific Region, you can use \n LocationConstraint
\n condition key. You might choose a Region to\n optimize latency, minimize costs, or address regulatory requirements. For example, if you\n reside in Europe, you will probably find it advantageous to create buckets in the Europe\n (Ireland) Region. For more information, see Accessing a\n bucket.
If you send your create bucket request to the s3.amazonaws.com
endpoint,\n the request goes to the us-east-1
Region. Accordingly, the signature\n calculations in Signature Version 4 must use us-east-1
as the Region, even\n if the location constraint in the request specifies another Region where the bucket is\n to be created. If you create a bucket in a Region other than US East (N. Virginia), your\n application must be able to handle 307 redirect. For more information, see Virtual hosting of\n buckets.
In addition to s3:CreateBucket
, the following permissions are\n required when your CreateBucket
request includes specific\n headers:
\n Access control lists (ACLs) - If your\n CreateBucket
request specifies access control list (ACL)\n permissions and the ACL is public-read, public-read-write,\n authenticated-read, or if you specify access permissions explicitly through\n any other ACL, both s3:CreateBucket
and\n s3:PutBucketAcl
permissions are needed. If the ACL for the\n CreateBucket
request is private or if the request doesn't\n specify any ACLs, only s3:CreateBucket
permission is needed.\n
\n Object Lock - If\n ObjectLockEnabledForBucket
is set to true in your\n CreateBucket
request,\n s3:PutBucketObjectLockConfiguration
and\n s3:PutBucketVersioning
permissions are required.
\n S3 Object Ownership - If your\n CreateBucket
request includes the\n x-amz-object-ownership
header, then the\n s3:PutBucketOwnershipControls
permission is required. By\n default, ObjectOwnership
is set to\n <code>BucketOwnerEnforced</code>
and ACLs are disabled. We recommend\n keeping ACLs disabled, except in uncommon use cases where you must control\n access for each object individually. If you want to change the\n ObjectOwnership
setting, you can use the\n x-amz-object-ownership
header in your\n CreateBucket
request to set the ObjectOwnership
\n setting of your choice. For more information about S3 Object Ownership, see\n Controlling\n object ownership in the\n Amazon S3 User Guide.
\n S3 Block Public Access - If your\n specific use case requires granting public access to your S3 resources, you\n can disable Block Public Access. You can create a new bucket with Block\n Public Access enabled, then separately call the \n DeletePublicAccessBlock
\n API. To use this operation, you must have the\n s3:PutBucketPublicAccessBlock
permission. By default, all\n Block Public Access settings are enabled for new buckets. To avoid\n inadvertent exposure of your resources, we recommend keeping the S3 Block\n Public Access settings enabled. For more information about S3 Block Public\n Access, see Blocking\n public access to your Amazon S3 storage in the\n Amazon S3 User Guide.
If your CreateBucket
request sets BucketOwnerEnforced
for\n Amazon S3 Object Ownership and specifies a bucket ACL that provides access to an external\n Amazon Web Services account, your request fails with a 400
error and returns the\n InvalidBucketAcLWithObjectOwnership
error code. For more information,\n see Setting Object\n Ownership on an existing bucket in the Amazon S3 User Guide.\n
The following operations are related to CreateBucket
:
\n PutObject\n
\n\n DeleteBucket\n
\nThis action initiates a multipart upload and returns an upload ID. This upload ID is\n used to associate all of the parts in the specific multipart upload. You specify this\n upload ID in each of your subsequent upload part requests (see UploadPart). You also include this\n upload ID in the final request to either complete or abort the multipart upload\n request.
\nFor more information about multipart uploads, see Multipart Upload Overview.
\nIf you have configured a lifecycle rule to abort incomplete multipart uploads, the\n upload must complete within the number of days specified in the bucket lifecycle\n configuration. Otherwise, the incomplete multipart upload becomes eligible for an abort\n action and Amazon S3 aborts the multipart upload. For more information, see Aborting Incomplete Multipart Uploads Using a Bucket Lifecycle Configuration.
\nFor information about the permissions required to use the multipart upload API, see\n Multipart\n Upload and Permissions.
\nFor request signing, multipart upload is just a series of regular requests. You initiate\n a multipart upload, send one or more requests to upload parts, and then complete the\n multipart upload process. You sign each request individually. There is nothing special\n about signing multipart upload requests. For more information about signing, see Authenticating Requests (Amazon Web Services Signature Version 4).
\nAfter you initiate a multipart upload and upload one or more parts, to stop being\n charged for storing the uploaded parts, you must either complete or abort the multipart\n upload. Amazon S3 frees up the space used to store the parts and stop charging you for\n storing them only after you either complete or abort a multipart upload.
\nServer-side encryption is for data encryption at rest. Amazon S3 encrypts your data as it\n writes it to disks in its data centers and decrypts it when you access it. Amazon S3\n automatically encrypts all new objects that are uploaded to an S3 bucket. When doing a\n multipart upload, if you don't specify encryption information in your request, the\n encryption setting of the uploaded parts is set to the default encryption configuration of\n the destination bucket. By default, all buckets have a base level of encryption\n configuration that uses server-side encryption with Amazon S3 managed keys (SSE-S3). If the\n destination bucket has a default encryption configuration that uses server-side encryption\n with an Key Management Service (KMS) key (SSE-KMS), or a customer-provided encryption key (SSE-C),\n Amazon S3 uses the corresponding KMS key, or a customer-provided key to encrypt the uploaded\n parts. When you perform a CreateMultipartUpload operation, if you want to use a different\n type of encryption setting for the uploaded parts, you can request that Amazon S3 encrypts the\n object with a KMS key, an Amazon S3 managed key, or a customer-provided key. If the encryption\n setting in your request is different from the default encryption configuration of the\n destination bucket, the encryption setting in your request takes precedence. If you choose\n to provide your own encryption key, the request headers you provide in UploadPart\n and UploadPartCopy requests must match the headers you used in the request to\n initiate the upload by using CreateMultipartUpload
. You can request that Amazon S3\n save the uploaded parts encrypted with server-side encryption with an Amazon S3 managed key\n (SSE-S3), an Key Management Service (KMS) key (SSE-KMS), or a customer-provided encryption key\n (SSE-C).
To perform a multipart upload with encryption by using an Amazon Web Services KMS key, the requester\n must have permission to the kms:Decrypt
and kms:GenerateDataKey*
\n actions on the key. These permissions are required because Amazon S3 must decrypt and read data\n from the encrypted file parts before it completes the multipart upload. For more\n information, see Multipart upload API\n and permissions and Protecting data using\n server-side encryption with Amazon Web Services KMS in the\n Amazon S3 User Guide.
If your Identity and Access Management (IAM) user or role is in the same Amazon Web Services account as the KMS key,\n then you must have these permissions on the key policy. If your IAM user or role belongs\n to a different account than the key, then you must have the permissions on both the key\n policy and your IAM user or role.
\nFor more information, see Protecting Data Using Server-Side\n Encryption.
\nWhen initiating a multipart upload, you can optionally specify the accounts or groups that\n should be granted specific permissions on the new object. There are two ways to\n grant the permissions using the request headers:
\nSpecify a canned ACL with the x-amz-acl
request header. For\n more information, see Canned\n ACL.
Specify access permissions explicitly with the\n x-amz-grant-read
, x-amz-grant-read-acp
,\n x-amz-grant-write-acp
, and\n x-amz-grant-full-control
headers. These parameters map to\n the set of permissions that Amazon S3 supports in an ACL. For more information,\n see Access Control List (ACL) Overview.
You can use either a canned ACL or specify access permissions explicitly. You\n cannot do both.
\nAmazon S3 encrypts data\n by using server-side encryption with an Amazon S3 managed key (SSE-S3) by default. Server-side encryption is for data encryption at rest. Amazon S3 encrypts\n your data as it writes it to disks in its data centers and decrypts it when you\n access it. You can request that Amazon S3 encrypts\n data at rest by using server-side encryption with other key options. The option you use depends on\n whether you want to use KMS keys (SSE-KMS) or provide your own encryption keys\n (SSE-C).
\nUse KMS keys (SSE-KMS) that include the Amazon Web Services managed key\n (aws/s3
) and KMS customer managed keys stored in Key Management Service (KMS) – If you\n want Amazon Web Services to manage the keys used to encrypt data, specify the following\n headers in the request.
\n x-amz-server-side-encryption
\n
\n x-amz-server-side-encryption-aws-kms-key-id
\n
\n x-amz-server-side-encryption-context
\n
If you specify x-amz-server-side-encryption:aws:kms
, but\n don't provide x-amz-server-side-encryption-aws-kms-key-id
,\n Amazon S3 uses the Amazon Web Services managed key (aws/s3
key) in KMS to\n protect the data.
All GET
and PUT
requests for an object protected\n by KMS fail if you don't make them by using Secure Sockets Layer (SSL),\n Transport Layer Security (TLS), or Signature Version 4.
For more information about server-side encryption with KMS keys\n (SSE-KMS), see Protecting Data\n Using Server-Side Encryption with KMS keys.
\nUse customer-provided encryption keys (SSE-C) – If you want to manage\n your own encryption keys, provide all the following headers in the\n request.
\n\n x-amz-server-side-encryption-customer-algorithm
\n
\n x-amz-server-side-encryption-customer-key
\n
\n x-amz-server-side-encryption-customer-key-MD5
\n
For more information about server-side encryption with customer-provided\n encryption keys (SSE-C), see \n Protecting data using server-side encryption with customer-provided\n encryption keys (SSE-C).
\nYou also can use the following access control–related headers with this\n operation. By default, all objects are private. Only the owner has full access\n control. When adding a new object, you can grant permissions to individual\n Amazon Web Services accounts or to predefined groups defined by Amazon S3. These permissions are then\n added to the access control list (ACL) on the object. For more information, see\n Using ACLs. With this operation, you can grant access permissions\n using one of the following two methods:
\nSpecify a canned ACL (x-amz-acl
) — Amazon S3 supports a set of\n predefined ACLs, known as canned ACLs. Each canned ACL\n has a predefined set of grantees and permissions. For more information, see\n Canned\n ACL.
Specify access permissions explicitly — To explicitly grant access\n permissions to specific Amazon Web Services accounts or groups, use the following headers.\n Each header maps to specific permissions that Amazon S3 supports in an ACL. For\n more information, see Access Control List (ACL)\n Overview. In the header, you specify a list of grantees who get\n the specific permission. To grant permissions explicitly, use:
\n\n x-amz-grant-read
\n
\n x-amz-grant-write
\n
\n x-amz-grant-read-acp
\n
\n x-amz-grant-write-acp
\n
\n x-amz-grant-full-control
\n
You specify each grantee as a type=value pair, where the type is one of\n the following:
\n\n id
– if the value specified is the canonical user ID\n of an Amazon Web Services account
\n uri
– if you are granting permissions to a predefined\n group
\n emailAddress
– if the value specified is the email\n address of an Amazon Web Services account
Using email addresses to specify a grantee is only supported in the following Amazon Web Services Regions:
\nUS East (N. Virginia)
\nUS West (N. California)
\nUS West (Oregon)
\nAsia Pacific (Singapore)
\nAsia Pacific (Sydney)
\nAsia Pacific (Tokyo)
\nEurope (Ireland)
\nSouth America (São Paulo)
\nFor a list of all the Amazon S3 supported Regions and endpoints, see Regions and Endpoints in the Amazon Web Services General Reference.
\nFor example, the following x-amz-grant-read
header grants the Amazon Web Services accounts identified by account IDs permissions to read object data and its metadata:
\n x-amz-grant-read: id=\"11112222333\", id=\"444455556666\"
\n
The following operations are related to CreateMultipartUpload
:
\n UploadPart\n
\n\n AbortMultipartUpload\n
\n\n ListParts\n
\n\n ListMultipartUploads\n
\nThis action initiates a multipart upload and returns an upload ID. This upload ID is\n used to associate all of the parts in the specific multipart upload. You specify this\n upload ID in each of your subsequent upload part requests (see UploadPart). You also include this\n upload ID in the final request to either complete or abort the multipart upload\n request.
\nFor more information about multipart uploads, see Multipart Upload Overview.
\nIf you have configured a lifecycle rule to abort incomplete multipart uploads, the\n upload must complete within the number of days specified in the bucket lifecycle\n configuration. Otherwise, the incomplete multipart upload becomes eligible for an abort\n action and Amazon S3 aborts the multipart upload. For more information, see Aborting Incomplete Multipart Uploads Using a Bucket Lifecycle\n Configuration.
\nFor information about the permissions required to use the multipart upload API, see\n Multipart\n Upload and Permissions.
\nFor request signing, multipart upload is just a series of regular requests. You initiate\n a multipart upload, send one or more requests to upload parts, and then complete the\n multipart upload process. You sign each request individually. There is nothing special\n about signing multipart upload requests. For more information about signing, see Authenticating Requests (Amazon Web Services Signature Version 4).
\nAfter you initiate a multipart upload and upload one or more parts, to stop being\n charged for storing the uploaded parts, you must either complete or abort the multipart\n upload. Amazon S3 frees up the space used to store the parts and stops charging you for\n storing them only after you either complete or abort a multipart upload.
\nServer-side encryption is for data encryption at rest. Amazon S3 encrypts your data as it\n writes it to disks in its data centers and decrypts it when you access it. Amazon S3\n automatically encrypts all new objects that are uploaded to an S3 bucket. When doing a\n multipart upload, if you don't specify encryption information in your request, the\n encryption setting of the uploaded parts is set to the default encryption configuration of\n the destination bucket. By default, all buckets have a base level of encryption\n configuration that uses server-side encryption with Amazon S3 managed keys (SSE-S3). If the\n destination bucket has a default encryption configuration that uses server-side encryption\n with a Key Management Service (KMS) key (SSE-KMS), or a customer-provided encryption key (SSE-C),\n Amazon S3 uses the corresponding KMS key or a customer-provided key to encrypt the uploaded\n parts. When you perform a CreateMultipartUpload operation, if you want to use a different\n type of encryption setting for the uploaded parts, you can request that Amazon S3 encrypts the\n object with a KMS key, an Amazon S3 managed key, or a customer-provided key. If the encryption\n setting in your request is different from the default encryption configuration of the\n destination bucket, the encryption setting in your request takes precedence. If you choose\n to provide your own encryption key, the request headers you provide in UploadPart\n and UploadPartCopy requests must match the headers you used in the request to\n initiate the upload by using CreateMultipartUpload
. You can request that Amazon S3\n save the uploaded parts encrypted with server-side encryption with an Amazon S3 managed key\n (SSE-S3), a Key Management Service (KMS) key (SSE-KMS), or a customer-provided encryption key\n (SSE-C).
To perform a multipart upload with encryption by using an Amazon Web Services KMS key, the requester\n must have permission to the kms:Decrypt
and kms:GenerateDataKey*
\n actions on the key. These permissions are required because Amazon S3 must decrypt and read data\n from the encrypted file parts before it completes the multipart upload. For more\n information, see Multipart upload API\n and permissions and Protecting data using\n server-side encryption with Amazon Web Services KMS in the\n Amazon S3 User Guide.
If your Identity and Access Management (IAM) user or role is in the same Amazon Web Services account as the KMS key,\n then you must have these permissions on the key policy. If your IAM user or role belongs\n to a different account than the key, then you must have the permissions on both the key\n policy and your IAM user or role.
\nFor more information, see Protecting Data Using Server-Side\n Encryption.
\nWhen initiating a multipart upload, you can optionally specify the accounts or groups that\n should be granted specific permissions on the new object. There are two ways to\n grant the permissions using the request headers:
\nSpecify a canned ACL with the x-amz-acl
request header. For\n more information, see Canned\n ACL.
Specify access permissions explicitly with the\n x-amz-grant-read
, x-amz-grant-read-acp
,\n x-amz-grant-write-acp
, and\n x-amz-grant-full-control
headers. These parameters map to\n the set of permissions that Amazon S3 supports in an ACL. For more information,\n see Access Control List (ACL) Overview.
You can use either a canned ACL or specify access permissions explicitly. You\n cannot do both.
\nAmazon S3 encrypts data by using server-side encryption with an Amazon S3 managed key\n (SSE-S3) by default. Server-side encryption is for data encryption at rest. Amazon S3\n encrypts your data as it writes it to disks in its data centers and decrypts it\n when you access it. You can request that Amazon S3 encrypts data at rest by using\n server-side encryption with other key options. The option you use depends on\n whether you want to use KMS keys (SSE-KMS) or provide your own encryption keys\n (SSE-C).
\nUse KMS keys (SSE-KMS) that include the Amazon Web Services managed key\n (aws/s3
) and KMS customer managed keys stored in Key Management Service (KMS) –\n If you want Amazon Web Services to manage the keys used to encrypt data, specify the\n following headers in the request.
\n x-amz-server-side-encryption
\n
\n x-amz-server-side-encryption-aws-kms-key-id
\n
\n x-amz-server-side-encryption-context
\n
If you specify x-amz-server-side-encryption:aws:kms
, but\n don't provide x-amz-server-side-encryption-aws-kms-key-id
,\n Amazon S3 uses the Amazon Web Services managed key (aws/s3
key) in KMS to\n protect the data.
All GET
and PUT
requests for an object\n protected by KMS fail if you don't make them by using Secure Sockets\n Layer (SSL), Transport Layer Security (TLS), or Signature Version\n 4.
For more information about server-side encryption with KMS keys\n (SSE-KMS), see Protecting Data\n Using Server-Side Encryption with KMS keys.
\nUse customer-provided encryption keys (SSE-C) – If you want to manage\n your own encryption keys, provide all the following headers in the\n request.
\n\n x-amz-server-side-encryption-customer-algorithm
\n
\n x-amz-server-side-encryption-customer-key
\n
\n x-amz-server-side-encryption-customer-key-MD5
\n
For more information about server-side encryption with customer-provided\n encryption keys (SSE-C), see \n Protecting data using server-side encryption with customer-provided\n encryption keys (SSE-C).
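A minimal, non-authoritative sketch of how the SSE-KMS headers listed above surface as `CreateMultipartUploadCommand` inputs in this client. The bucket, key, and KMS key ID are placeholder values, and the SSE-C fields are only noted as a commented alternative.

```ts
import { S3Client, CreateMultipartUploadCommand } from "@aws-sdk/client-s3";

const client = new S3Client({});

// SSE-KMS: ServerSideEncryption maps to x-amz-server-side-encryption and
// SSEKMSKeyId maps to x-amz-server-side-encryption-aws-kms-key-id.
// For SSE-C you would instead set SSECustomerAlgorithm, SSECustomerKey,
// and SSECustomerKeyMD5 (the x-amz-server-side-encryption-customer-* headers).
const { UploadId } = await client.send(
  new CreateMultipartUploadCommand({
    Bucket: "amzn-s3-demo-bucket",                        // placeholder bucket
    Key: "reports/archive.zip",                           // placeholder key
    ServerSideEncryption: "aws:kms",
    SSEKMSKeyId: "1234abcd-12ab-34cd-56ef-1234567890ab",  // placeholder key ID
  })
);
console.log(UploadId);
```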
\nYou also can use the following access control–related headers with this\n operation. By default, all objects are private. Only the owner has full access\n control. When adding a new object, you can grant permissions to individual\n Amazon Web Services accounts or to predefined groups defined by Amazon S3. These permissions are then\n added to the access control list (ACL) on the object. For more information, see\n Using ACLs. With this operation, you can grant access permissions\n using one of the following two methods:
\nSpecify a canned ACL (x-amz-acl
) — Amazon S3 supports a set of\n predefined ACLs, known as canned ACLs. Each canned ACL\n has a predefined set of grantees and permissions. For more information, see\n Canned\n ACL.
Specify access permissions explicitly — To explicitly grant access\n permissions to specific Amazon Web Services accounts or groups, use the following headers.\n Each header maps to specific permissions that Amazon S3 supports in an ACL. For\n more information, see Access Control List (ACL)\n Overview. In the header, you specify a list of grantees who get\n the specific permission. To grant permissions explicitly, use:
\n\n x-amz-grant-read
\n
\n x-amz-grant-write
\n
\n x-amz-grant-read-acp
\n
\n x-amz-grant-write-acp
\n
\n x-amz-grant-full-control
\n
You specify each grantee as a type=value pair, where the type is one of\n the following:
\n\n id
– if the value specified is the canonical user ID\n of an Amazon Web Services account
\n uri
– if you are granting permissions to a predefined\n group
\n emailAddress
– if the value specified is the email\n address of an Amazon Web Services account
Using email addresses to specify a grantee is only supported in the following Amazon Web Services Regions:
\nUS East (N. Virginia)
\nUS West (N. California)
\nUS West (Oregon)
\nAsia Pacific (Singapore)
\nAsia Pacific (Sydney)
\nAsia Pacific (Tokyo)
\nEurope (Ireland)
\nSouth America (São Paulo)
\nFor a list of all the Amazon S3 supported Regions and endpoints, see Regions and Endpoints in the Amazon Web Services General Reference.
\nFor example, the following x-amz-grant-read
header grants the Amazon Web Services accounts identified by account IDs permissions to read object data and its metadata:
\n x-amz-grant-read: id=\"11112222333\", id=\"444455556666\"
\n
The following operations are related to CreateMultipartUpload
:
\n UploadPart\n
\n\n AbortMultipartUpload\n
\n\n ListParts\n
\n\n ListMultipartUploads\n
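The following sketch walks through the initiate / upload-parts / complete-or-abort flow described above using this package's commands. It is illustrative only: the bucket and key names are placeholders, every part except the last is assumed to meet the minimum part size, and error handling is reduced to the abort path so stored parts do not keep accruing charges.

```ts
import {
  S3Client,
  CreateMultipartUploadCommand,
  UploadPartCommand,
  CompleteMultipartUploadCommand,
  AbortMultipartUploadCommand,
} from "@aws-sdk/client-s3";

const client = new S3Client({});

export async function multipartUploadExample(parts: Uint8Array[]) {
  const Bucket = "amzn-s3-demo-bucket"; // placeholder
  const Key = "backups/data.bin";       // placeholder

  // Initiate the upload and keep the upload ID for all subsequent requests.
  const { UploadId } = await client.send(
    new CreateMultipartUploadCommand({ Bucket, Key })
  );

  try {
    // Upload each part and remember the ETag that S3 returns for it.
    const completed: { ETag: string | undefined; PartNumber: number }[] = [];
    for (let i = 0; i < parts.length; i++) {
      const { ETag } = await client.send(
        new UploadPartCommand({
          Bucket,
          Key,
          UploadId,
          PartNumber: i + 1,
          Body: parts[i],
        })
      );
      completed.push({ ETag, PartNumber: i + 1 });
    }

    // Complete the upload with the full parts list in ascending part order.
    await client.send(
      new CompleteMultipartUploadCommand({
        Bucket,
        Key,
        UploadId,
        MultipartUpload: { Parts: completed },
      })
    );
  } catch (err) {
    // Abort so S3 frees the stored parts and stops charging for them.
    await client.send(new AbortMultipartUploadCommand({ Bucket, Key, UploadId }));
    throw err;
  }
}
```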
\nIf the bucket has a lifecycle rule configured with an action to abort incomplete\n multipart uploads and the prefix in the lifecycle rule matches the object name in the\n request, the response includes this header. The header indicates when the initiated\n multipart upload becomes eligible for an abort operation. For more information, see \n Aborting Incomplete Multipart Uploads Using a Bucket Lifecycle Configuration.
\nThe response also includes the x-amz-abort-rule-id
header that provides the\n ID of the lifecycle configuration rule that defines this action.
If the bucket has a lifecycle rule configured with an action to abort incomplete\n multipart uploads and the prefix in the lifecycle rule matches the object name in the\n request, the response includes this header. The header indicates when the initiated\n multipart upload becomes eligible for an abort operation. For more information, see \n Aborting Incomplete Multipart Uploads Using a Bucket Lifecycle\n Configuration.
\nThe response also includes the x-amz-abort-rule-id
header that provides the\n ID of the lifecycle configuration rule that defines this action.
Specifies the ID of the symmetric encryption customer managed key to use for object encryption.\n All GET and PUT requests for an object protected by KMS will fail if they're not made via\n SSL or using SigV4. For information about configuring any of the officially supported Amazon Web Services\n SDKs and Amazon Web Services CLI, see Specifying the Signature Version in Request Authentication\n in the Amazon S3 User Guide.
", + "smithy.api#documentation": "Specifies the ID (Key ID, Key ARN, or Key Alias) of the symmetric encryption customer managed key to use for object encryption.\n All GET and PUT requests for an object protected by KMS will fail if they're not made via\n SSL or using SigV4. For information about configuring any of the officially supported Amazon Web Services\n SDKs and Amazon Web Services CLI, see Specifying the Signature Version in Request Authentication\n in the Amazon S3 User Guide.
", "smithy.api#httpHeader": "x-amz-server-side-encryption-aws-kms-key-id" } }, @@ -17983,7 +17982,7 @@ "target": "smithy.api#Unit" }, "traits": { - "smithy.api#documentation": "This implementation of the DELETE action resets the default encryption for the\n bucket as server-side encryption with Amazon S3 managed keys (SSE-S3). For information about the\n bucket default encryption feature, see Amazon S3 Bucket Default Encryption\n in the Amazon S3 User Guide.
\nTo use this operation, you must have permissions to perform the\n s3:PutEncryptionConfiguration
action. The bucket owner has this permission\n by default. The bucket owner can grant this permission to others. For more information\n about permissions, see Permissions Related to Bucket Subresource Operations and Managing\n Access Permissions to your Amazon S3 Resources in the\n Amazon S3 User Guide.
The following operations are related to DeleteBucketEncryption
:
\n PutBucketEncryption\n
\n\n GetBucketEncryption\n
\nThis implementation of the DELETE action resets the default encryption for the bucket as\n server-side encryption with Amazon S3 managed keys (SSE-S3). For information about the bucket\n default encryption feature, see Amazon S3 Bucket Default Encryption\n in the Amazon S3 User Guide.
\nTo use this operation, you must have permissions to perform the\n s3:PutEncryptionConfiguration
action. The bucket owner has this permission\n by default. The bucket owner can grant this permission to others. For more information\n about permissions, see Permissions Related to Bucket Subresource Operations and Managing\n Access Permissions to your Amazon S3 Resources in the\n Amazon S3 User Guide.
The following operations are related to DeleteBucketEncryption
:
\n PutBucketEncryption\n
\n\n GetBucketEncryption\n
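A short sketch of the operation documented above; the bucket name is a placeholder and the snippet assumes an ES module context where top-level await is available.

```ts
import { S3Client, DeleteBucketEncryptionCommand } from "@aws-sdk/client-s3";

const client = new S3Client({});

// Resets the bucket's default encryption back to SSE-S3.
// The caller needs the s3:PutEncryptionConfiguration permission.
await client.send(
  new DeleteBucketEncryptionCommand({ Bucket: "amzn-s3-demo-bucket" })
);
```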
\nThis implementation of the DELETE action uses the policy subresource to delete the\n policy of a specified bucket. If you are using an identity other than the root user of the\n Amazon Web Services account that owns the bucket, the calling identity must have the\n DeleteBucketPolicy
permissions on the specified bucket and belong to the\n bucket owner's account to use this operation.
If you don't have DeleteBucketPolicy
permissions, Amazon S3 returns a 403\n Access Denied
error. If you have the correct permissions, but you're not using an\n identity that belongs to the bucket owner's account, Amazon S3 returns a 405 Method Not\n Allowed
error.
To ensure that bucket owners don't inadvertently lock themselves out of their own\n buckets, the root principal in a bucket owner's Amazon Web Services account can perform the\n GetBucketPolicy
, PutBucketPolicy
, and\n DeleteBucketPolicy
API actions, even if their bucket policy explicitly\n denies the root principal's access. Bucket owner root principals can only be blocked from performing \n these API actions by VPC endpoint policies and Amazon Web Services Organizations policies.
For more information about bucket policies, see Using Bucket Policies and\n UserPolicies.
\nThe following operations are related to DeleteBucketPolicy
\n
\n CreateBucket\n
\n\n DeleteObject\n
\nThis implementation of the DELETE action uses the policy subresource to delete the\n policy of a specified bucket. If you are using an identity other than the root user of the\n Amazon Web Services account that owns the bucket, the calling identity must have the\n DeleteBucketPolicy
permissions on the specified bucket and belong to the\n bucket owner's account to use this operation.
If you don't have DeleteBucketPolicy
permissions, Amazon S3 returns a 403\n Access Denied
error. If you have the correct permissions, but you're not using an\n identity that belongs to the bucket owner's account, Amazon S3 returns a 405 Method Not\n Allowed
error.
To ensure that bucket owners don't inadvertently lock themselves out of their own\n buckets, the root principal in a bucket owner's Amazon Web Services account can perform the\n GetBucketPolicy
, PutBucketPolicy
, and\n DeleteBucketPolicy
API actions, even if their bucket policy explicitly\n denies the root principal's access. Bucket owner root principals can only be blocked\n from performing these API actions by VPC endpoint policies and Amazon Web Services Organizations\n policies.
For more information about bucket policies, see Using Bucket Policies and\n UserPolicies.
\nThe following operations are related to DeleteBucketPolicy
\n
\n CreateBucket\n
\n\n DeleteObject\n
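A hedged sketch of calling the operation above and distinguishing the two error cases it documents; the bucket name is a placeholder.

```ts
import { S3Client, DeleteBucketPolicyCommand } from "@aws-sdk/client-s3";

const client = new S3Client({});

try {
  await client.send(
    new DeleteBucketPolicyCommand({ Bucket: "amzn-s3-demo-bucket" }) // placeholder
  );
} catch (err: any) {
  // 403 Access Denied: the identity lacks DeleteBucketPolicy permission.
  // 405 Method Not Allowed: correct permission, but the identity does not
  // belong to the bucket owner's account.
  console.error(err.name, err.$metadata?.httpStatusCode);
  throw err;
}
```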
\nRemoves the null version (if there is one) of an object and inserts a delete marker,\n which becomes the latest version of the object. If there isn't a null version, Amazon S3 does\n not remove any objects but will still respond that the command was successful.
\nTo remove a specific version, you must use the version Id subresource. Using this\n subresource permanently deletes the version. If the object deleted is a delete marker, Amazon S3\n sets the response header, x-amz-delete-marker
, to true.
If the object you want to delete is in a bucket where the bucket versioning\n configuration is MFA Delete enabled, you must include the x-amz-mfa
request\n header in the DELETE versionId
request. Requests that include\n x-amz-mfa
must use HTTPS.
For more information about MFA Delete, see Using MFA Delete. To see sample\n requests that use versioning, see Sample\n Request.
\nYou can delete objects by explicitly calling DELETE Object or configure its lifecycle\n (PutBucketLifecycle) to enable Amazon S3 to remove them for you. If you want to block\n users or accounts from removing or deleting objects from your bucket, you must deny them\n the s3:DeleteObject
, s3:DeleteObjectVersion
, and\n s3:PutLifeCycleConfiguration
actions.
The following action is related to DeleteObject
:
\n PutObject\n
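A sketch of the two DELETE shapes described above, assuming an ES module with top-level await; bucket, key, version ID, and MFA values are placeholders.

```ts
import { S3Client, DeleteObjectCommand } from "@aws-sdk/client-s3";

const client = new S3Client({});

// Simple DELETE: in a versioned bucket this inserts a delete marker
// (or removes the null version if there is one).
const simple = await client.send(
  new DeleteObjectCommand({
    Bucket: "amzn-s3-demo-bucket",
    Key: "photos/2006/February/sample.jpg",
  })
);
console.log(simple.DeleteMarker, simple.VersionId);

// Permanent delete of one specific version. If MFA Delete is enabled on the
// bucket, the request must also carry the x-amz-mfa value and use HTTPS.
await client.send(
  new DeleteObjectCommand({
    Bucket: "amzn-s3-demo-bucket",
    Key: "photos/2006/February/sample.jpg",
    VersionId: "3HL4kqCxf3vjVBH40Nrjfkd",                 // placeholder
    MFA: "arn:aws:iam::111122223333:mfa/user 123456",     // "serial passcode", placeholder
  })
);
```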
\nSpecifies whether the versioned object that was permanently deleted was (true) or was\n not (false) a delete marker.
", + "smithy.api#documentation": "Indicates whether the specified object version that was permanently deleted was (true) or was\n not (false) a delete marker before deletion. In a simple DELETE, this header indicates whether (true) or\n not (false) the current version of the object is a delete marker.
", "smithy.api#httpHeader": "x-amz-delete-marker" } }, @@ -18708,15 +18706,14 @@ "smithy.api#documentation": "Removes the entire tag set from the specified object. For more information about\n managing object tags, see Object Tagging.
\nTo use this operation, you must have permission to perform the\n s3:DeleteObjectTagging
action.
To delete tags of a specific object version, add the versionId
query\n parameter in the request. You will need permission for the\n s3:DeleteObjectVersionTagging
action.
The following operations are related to DeleteObjectTagging
:
\n PutObjectTagging\n
\n\n GetObjectTagging\n
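A brief sketch of the tag-removal call described above; the bucket, key, and version ID are placeholders, and omitting `VersionId` targets the current version instead.

```ts
import { S3Client, DeleteObjectTaggingCommand } from "@aws-sdk/client-s3";

const client = new S3Client({});

// Removes the tag set from one specific object version
// (requires s3:DeleteObjectVersionTagging when VersionId is supplied).
const { VersionId } = await client.send(
  new DeleteObjectTaggingCommand({
    Bucket: "amzn-s3-demo-bucket",
    Key: "reports/q1.csv",
    VersionId: "ydlaNkwWm0SfKJR.T1b1fIdPRbldTYRI", // placeholder
  })
);
console.log(VersionId);
```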
\nThis action enables you to delete multiple objects from a bucket using a single HTTP\n request. If you know the object keys that you want to delete, then this action provides a\n suitable alternative to sending individual delete requests, reducing per-request\n overhead.
\nThe request contains a list of up to 1000 keys that you want to delete. In the XML, you\n provide the object key names, and optionally, version IDs if you want to delete a specific\n version of the object from a versioning-enabled bucket. For each key, Amazon S3 performs a\n delete action and returns the result of that delete, success, or failure, in the response.\n Note that if the object specified in the request is not found, Amazon S3 returns the result as\n deleted.
\nThe action supports two modes for the response: verbose and quiet. By default, the\n action uses verbose mode in which the response includes the result of deletion of each key\n in your request. In quiet mode the response includes only keys where the delete action\n encountered an error. For a successful deletion, the action does not return any information\n about the delete in the response body.
\nWhen performing this action on an MFA Delete enabled bucket with a request that attempts to delete any\n versioned objects, you must include an MFA token. If you do not provide one, the entire\n request will fail, even if there are non-versioned objects you are trying to delete. If you\n provide an invalid token, whether there are versioned keys in the request or not, the\n entire Multi-Object Delete request will fail. For information about MFA Delete, see MFA\n Delete.
\nFinally, the Content-MD5 header is required for all Multi-Object Delete requests. Amazon S3 uses the header value to ensure that your request body has not been altered in\n transit.
\nThe following operations are related to DeleteObjects
:
\n UploadPart\n
\n\n ListParts\n
\n\n AbortMultipartUpload\n
\nThis action enables you to delete multiple objects from a bucket using a single HTTP\n request. If you know the object keys that you want to delete, then this action provides a\n suitable alternative to sending individual delete requests, reducing per-request\n overhead.
\nThe request contains a list of up to 1000 keys that you want to delete. In the XML, you\n provide the object key names, and optionally, version IDs if you want to delete a specific\n version of the object from a versioning-enabled bucket. For each key, Amazon S3 performs a\n delete action and returns the result of that delete, success, or failure, in the response.\n Note that if the object specified in the request is not found, Amazon S3 returns the result as\n deleted.
\nThe action supports two modes for the response: verbose and quiet. By default, the\n action uses verbose mode in which the response includes the result of deletion of each key\n in your request. In quiet mode the response includes only keys where the delete action\n encountered an error. For a successful deletion, the action does not return any information\n about the delete in the response body.
\nWhen performing this action on an MFA Delete enabled bucket with a request that attempts to delete any\n versioned objects, you must include an MFA token. If you do not provide one, the entire\n request will fail, even if there are non-versioned objects you are trying to delete. If you\n provide an invalid token, whether there are versioned keys in the request or not, the\n entire Multi-Object Delete request will fail. For information about MFA Delete, see MFA\n Delete.
\nFinally, the Content-MD5 header is required for all Multi-Object Delete requests. Amazon S3\n uses the header value to ensure that your request body has not been altered in\n transit.
\nThe following operations are related to DeleteObjects
:
\n UploadPart\n
\n\n ListParts\n
\n\n AbortMultipartUpload\n
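A sketch of the multi-object delete described above, in verbose mode, with the per-key results read back; all keys and the version ID are placeholders. Recent versions of this SDK normally add the required body checksum (Content-MD5 or an equivalent) for you.

```ts
import { S3Client, DeleteObjectsCommand } from "@aws-sdk/client-s3";

const client = new S3Client({});

// Delete several keys in one request (up to 1000 per call).
const result = await client.send(
  new DeleteObjectsCommand({
    Bucket: "amzn-s3-demo-bucket", // placeholder
    Delete: {
      Objects: [
        { Key: "logs/2023-01-01.log" },
        { Key: "logs/2023-01-02.log", VersionId: "UIORUnfnd93jfdkjFJjk" }, // placeholder
      ],
      Quiet: false, // verbose mode: report every key; true reports only errors
    },
  })
);

for (const deleted of result.Deleted ?? []) {
  // DeleteMarker indicates whether the deleted version was, or the new
  // current version is, a delete marker.
  console.log(deleted.Key, deleted.VersionId, deleted.DeleteMarker);
}
for (const failure of result.Errors ?? []) {
  console.error(failure.Key, failure.Code, failure.Message);
}
```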
\nSpecifies whether the versioned object that was permanently deleted was (true) or was\n not (false) a delete marker. In a simple DELETE, this header indicates whether (true) or\n not (false) a delete marker was created.
" + "smithy.api#documentation": "Indicates whether the specified object version that was permanently deleted was (true) or was\n not (false) a delete marker before deletion. In a simple DELETE, this header indicates whether (true) or\n not (false) the current version of the object is a delete marker.
" } }, "DeleteMarkerVersionId": { @@ -19535,7 +19566,7 @@ "target": "com.amazonaws.s3#GetBucketAccelerateConfigurationOutput" }, "traits": { - "smithy.api#documentation": "This implementation of the GET action uses the accelerate
subresource to\n return the Transfer Acceleration state of a bucket, which is either Enabled
or\n Suspended
. Amazon S3 Transfer Acceleration is a bucket-level feature that\n enables you to perform faster data transfers to and from Amazon S3.
To use this operation, you must have permission to perform the\n s3:GetAccelerateConfiguration
action. The bucket owner has this permission\n by default. The bucket owner can grant this permission to others. For more information\n about permissions, see Permissions Related to Bucket Subresource Operations and Managing\n Access Permissions to your Amazon S3 Resources in the\n Amazon S3 User Guide.
You set the Transfer Acceleration state of an existing bucket to Enabled
or\n Suspended
by using the PutBucketAccelerateConfiguration operation.
A GET accelerate
request does not return a state value for a bucket that\n has no transfer acceleration state. A bucket has no Transfer Acceleration state if a state\n has never been set on the bucket.
For more information about transfer acceleration, see Transfer Acceleration in\n the Amazon S3 User Guide.
\nThe following operations are related to GetBucketAccelerateConfiguration
:
This implementation of the GET action uses the accelerate
subresource to\n return the Transfer Acceleration state of a bucket, which is either Enabled
or\n Suspended
. Amazon S3 Transfer Acceleration is a bucket-level feature that\n enables you to perform faster data transfers to and from Amazon S3.
To use this operation, you must have permission to perform the\n s3:GetAccelerateConfiguration
action. The bucket owner has this permission\n by default. The bucket owner can grant this permission to others. For more information\n about permissions, see Permissions Related to Bucket Subresource Operations and Managing\n Access Permissions to your Amazon S3 Resources in the\n Amazon S3 User Guide.
You set the Transfer Acceleration state of an existing bucket to Enabled
or\n Suspended
by using the PutBucketAccelerateConfiguration operation.
A GET accelerate
request does not return a state value for a bucket that\n has no transfer acceleration state. A bucket has no Transfer Acceleration state if a state\n has never been set on the bucket.
For more information about transfer acceleration, see Transfer Acceleration in\n the Amazon S3 User Guide.
\nThe following operations are related to\n GetBucketAccelerateConfiguration
:
This implementation of the GET action returns an analytics configuration (identified by\n the analytics configuration ID) from the bucket.
\nTo use this operation, you must have permissions to perform the\n s3:GetAnalyticsConfiguration
action. The bucket owner has this permission\n by default. The bucket owner can grant this permission to others. For more information\n about permissions, see Permissions Related to Bucket Subresource Operations and Managing\n Access Permissions to Your Amazon S3 Resources in the\n Amazon S3 User Guide.
For information about Amazon S3 analytics feature, see Amazon S3 Analytics – Storage Class\n Analysis in the Amazon S3 User Guide.
\nThe following operations are related to GetBucketAnalyticsConfiguration
:
This implementation of the GET action returns an analytics configuration (identified by\n the analytics configuration ID) from the bucket.
\nTo use this operation, you must have permissions to perform the\n s3:GetAnalyticsConfiguration
action. The bucket owner has this permission\n by default. The bucket owner can grant this permission to others. For more information\n about permissions, see Permissions Related to Bucket Subresource Operations and Managing\n Access Permissions to Your Amazon S3 Resources in the\n Amazon S3 User Guide.
For information about Amazon S3 analytics feature, see Amazon S3 Analytics – Storage Class\n Analysis in the Amazon S3 User Guide.
\nThe following operations are related to\n GetBucketAnalyticsConfiguration
:
Returns the policy of a specified bucket. If you are using an identity other than the\n root user of the Amazon Web Services account that owns the bucket, the calling identity must have the\n GetBucketPolicy
permissions on the specified bucket and belong to the\n bucket owner's account in order to use this operation.
If you don't have GetBucketPolicy
permissions, Amazon S3 returns a 403\n Access Denied
error. If you have the correct permissions, but you're not using an\n identity that belongs to the bucket owner's account, Amazon S3 returns a 405 Method Not\n Allowed
error.
To ensure that bucket owners don't inadvertently lock themselves out of their own\n buckets, the root principal in a bucket owner's Amazon Web Services account can perform the\n GetBucketPolicy
, PutBucketPolicy
, and\n DeleteBucketPolicy
API actions, even if their bucket policy explicitly\n denies the root principal's access. Bucket owner root principals can only be blocked from performing \n these API actions by VPC endpoint policies and Amazon Web Services Organizations policies.
To use this API operation against an access point, provide the alias of the access point in place of the bucket name.
\nTo use this API operation against an Object Lambda access point, provide the alias of the Object Lambda access point in place of the bucket name. \nIf the Object Lambda access point alias in a request is not valid, the error code InvalidAccessPointAliasError
is returned. \nFor more information about InvalidAccessPointAliasError
, see List of\n Error Codes.
For more information about bucket policies, see Using Bucket Policies and User\n Policies.
\nThe following action is related to GetBucketPolicy
:
\n GetObject\n
\nReturns the policy of a specified bucket. If you are using an identity other than the\n root user of the Amazon Web Services account that owns the bucket, the calling identity must have the\n GetBucketPolicy
permissions on the specified bucket and belong to the\n bucket owner's account in order to use this operation.
If you don't have GetBucketPolicy
permissions, Amazon S3 returns a 403\n Access Denied
error. If you have the correct permissions, but you're not using an\n identity that belongs to the bucket owner's account, Amazon S3 returns a 405 Method Not\n Allowed
error.
To ensure that bucket owners don't inadvertently lock themselves out of their own\n buckets, the root principal in a bucket owner's Amazon Web Services account can perform the\n GetBucketPolicy
, PutBucketPolicy
, and\n DeleteBucketPolicy
API actions, even if their bucket policy explicitly\n denies the root principal's access. Bucket owner root principals can only be blocked\n from performing these API actions by VPC endpoint policies and Amazon Web Services Organizations\n policies.
To use this API operation against an access point, provide the alias of the access point in place of the bucket name.
\nTo use this API operation against an Object Lambda access point, provide the alias of the Object Lambda access point in place of the bucket name. \nIf the Object Lambda access point alias in a request is not valid, the error code InvalidAccessPointAliasError
is returned. \nFor more information about InvalidAccessPointAliasError
, see List of\n Error Codes.
For more information about bucket policies, see Using Bucket Policies and User\n Policies.
\nThe following action is related to GetBucketPolicy
:
\n GetObject\n
\nRetrieves objects from Amazon S3. To use GET
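A short sketch of retrieving and parsing the bucket policy described above; the bucket name (or access point alias) is a placeholder, and the policy document is returned as a JSON string.

```ts
import { S3Client, GetBucketPolicyCommand } from "@aws-sdk/client-s3";

const client = new S3Client({});

const { Policy } = await client.send(
  new GetBucketPolicyCommand({ Bucket: "amzn-s3-demo-bucket" }) // placeholder
);

// Policy is a JSON string; parse it to inspect the statements.
const statements = Policy ? JSON.parse(Policy).Statement : [];
console.log(statements);
```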
, you must have READ
\n access to the object. If you grant READ
access to the anonymous user, you can\n return the object without using an authorization header.
An Amazon S3 bucket has no directory hierarchy such as you would find in a typical computer\n file system. You can, however, create a logical hierarchy by using object key names that\n imply a folder structure. For example, instead of naming an object sample.jpg
,\n you can name it photos/2006/February/sample.jpg
.
To get an object from such a logical hierarchy, specify the full key name for the object\n in the GET
operation. For a virtual hosted-style request example, if you have\n the object photos/2006/February/sample.jpg
, specify the resource as\n /photos/2006/February/sample.jpg
. For a path-style request example, if you\n have the object photos/2006/February/sample.jpg
in the bucket named\n examplebucket
, specify the resource as\n /examplebucket/photos/2006/February/sample.jpg
. For more information about\n request types, see HTTP Host\n Header Bucket Specification.
For more information about returning the ACL of an object, see GetObjectAcl.
\nIf the object you are retrieving is stored in the S3 Glacier Flexible Retrieval or\n S3 Glacier Deep Archive storage class, or S3 Intelligent-Tiering Archive or\n S3 Intelligent-Tiering Deep Archive tiers, before you can retrieve the object you must first restore a\n copy using RestoreObject. Otherwise, this action returns an\n InvalidObjectState
error. For information about restoring archived objects,\n see Restoring\n Archived Objects.
Encryption request headers, like x-amz-server-side-encryption
, should not\n be sent for GET requests if your object uses server-side encryption with Key Management Service (KMS)\n keys (SSE-KMS), dual-layer server-side encryption with Amazon Web Services KMS keys (DSSE-KMS), or\n server-side encryption with Amazon S3 managed encryption keys (SSE-S3). If your object does use\n these types of keys, you’ll get an HTTP 400 Bad Request error.
If you encrypt an object by using server-side encryption with customer-provided\n encryption keys (SSE-C) when you store the object in Amazon S3, then when you GET the object,\n you must use the following headers:
\n\n x-amz-server-side-encryption-customer-algorithm
\n
\n x-amz-server-side-encryption-customer-key
\n
\n x-amz-server-side-encryption-customer-key-MD5
\n
For more information about SSE-C, see Server-Side Encryption\n (Using Customer-Provided Encryption Keys).
\nAssuming you have the relevant permission to read object tags, the response also returns\n the x-amz-tagging-count
header that provides the count of number of tags\n associated with the object. You can use GetObjectTagging to retrieve\n the tag set associated with an object.
You need the relevant read object (or version) permission for this operation. For more\n information, see Specifying Permissions in a\n Policy. If the object that you request doesn’t exist, the error that Amazon S3 returns depends\n on whether you also have the s3:ListBucket
permission.
If you have the s3:ListBucket
permission on the bucket, Amazon S3\n returns an HTTP status code 404 (Not Found) error.
If you don’t have the s3:ListBucket
permission, Amazon S3 returns an\n HTTP status code 403 (\"access denied\") error.
By default, the GET
action returns the current version of an object. To return a\n different version, use the versionId
subresource.
If you supply a versionId
, you need the\n s3:GetObjectVersion
permission to access a specific version of an\n object. If you request a specific version, you do not need to have the\n s3:GetObject
permission. If you request the current version\n without a specific version ID, only s3:GetObject
permission is\n required. s3:GetObjectVersion
permission won't be required.
If the current version of the object is a delete marker, Amazon S3 behaves as if the\n object was deleted and includes x-amz-delete-marker: true
in the\n response.
For more information about versioning, see PutBucketVersioning.
\nThere are times when you want to override certain response header values in a GET
\n response. For example, you might override the Content-Disposition
response\n header value in your GET
request.
You can override values for a set of response headers using the following query\n parameters. These response header values are sent only on a successful request, that is,\n when status code 200 OK is returned. The set of headers you can override using these\n parameters is a subset of the headers that Amazon S3 accepts when you create an object. The\n response headers that you can override for the GET
response are Content-Type
,\n Content-Language
, Expires
, Cache-Control
,\n Content-Disposition
, and Content-Encoding
. To override these\n header values in the GET
response, you use the following request parameters.
You must sign the request, either using an Authorization header or a presigned URL,\n when using these parameters. They cannot be used with an unsigned (anonymous)\n request.
\n\n response-content-type
\n
\n response-content-language
\n
\n response-expires
\n
\n response-cache-control
\n
\n response-content-disposition
\n
\n response-content-encoding
\n
If both of the If-Match
and If-Unmodified-Since
headers are\n present in the request as follows: If-Match
condition evaluates to\n true
, and; If-Unmodified-Since
condition evaluates to\n false
; then, S3 returns 200 OK and the data requested.
If both of the If-None-Match
and If-Modified-Since
headers are\n present in the request as follows: If-None-Match
condition evaluates to\n false
, and; If-Modified-Since
condition evaluates to\n true
; then, S3 returns 304 Not Modified response code.
For more information about conditional requests, see RFC 7232.
\nThe following operations are related to GetObject
:
\n ListBuckets\n
\n\n GetObjectAcl\n
\nRetrieves objects from Amazon S3. To use GET
, you must have READ
\n access to the object. If you grant READ
access to the anonymous user, you can\n return the object without using an authorization header.
An Amazon S3 bucket has no directory hierarchy such as you would find in a typical computer\n file system. You can, however, create a logical hierarchy by using object key names that\n imply a folder structure. For example, instead of naming an object sample.jpg
,\n you can name it photos/2006/February/sample.jpg
.
To get an object from such a logical hierarchy, specify the full key name for the object\n in the GET
operation. For a virtual hosted-style request example, if you have\n the object photos/2006/February/sample.jpg
, specify the resource as\n /photos/2006/February/sample.jpg
. For a path-style request example, if you\n have the object photos/2006/February/sample.jpg
in the bucket named\n examplebucket
, specify the resource as\n /examplebucket/photos/2006/February/sample.jpg
. For more information about\n request types, see HTTP Host\n Header Bucket Specification.
For more information about returning the ACL of an object, see GetObjectAcl.
\nIf the object you are retrieving is stored in the S3 Glacier Flexible Retrieval or\n S3 Glacier Deep Archive storage class, or S3 Intelligent-Tiering Archive or\n S3 Intelligent-Tiering Deep Archive tiers, before you can retrieve the object you must first restore a\n copy using RestoreObject. Otherwise, this action returns an\n InvalidObjectState
error. For information about restoring archived objects,\n see Restoring\n Archived Objects.
Encryption request headers, like x-amz-server-side-encryption
, should not\n be sent for GET requests if your object uses server-side encryption with Key Management Service (KMS)\n keys (SSE-KMS), dual-layer server-side encryption with Amazon Web Services KMS keys (DSSE-KMS), or\n server-side encryption with Amazon S3 managed encryption keys (SSE-S3). If your object does use\n these types of keys, you’ll get an HTTP 400 Bad Request error.
If you encrypt an object by using server-side encryption with customer-provided\n encryption keys (SSE-C) when you store the object in Amazon S3, then when you GET the object,\n you must use the following headers:
\n\n x-amz-server-side-encryption-customer-algorithm
\n
\n x-amz-server-side-encryption-customer-key
\n
\n x-amz-server-side-encryption-customer-key-MD5
\n
For more information about SSE-C, see Server-Side Encryption\n (Using Customer-Provided Encryption Keys).
\nAssuming you have the relevant permission to read object tags, the response also returns\n the x-amz-tagging-count
header that provides the count of number of tags\n associated with the object. You can use GetObjectTagging to retrieve\n the tag set associated with an object.
You need the relevant read object (or version) permission for this operation.\n For more information, see Specifying Permissions in\n a Policy. If the object that you request doesn’t exist, the error that\n Amazon S3 returns depends on whether you also have the s3:ListBucket
\n permission.
If you have the s3:ListBucket
permission on the bucket, Amazon S3\n returns an HTTP status code 404 (Not Found) error.
If you don’t have the s3:ListBucket
permission, Amazon S3 returns an\n HTTP status code 403 (\"access denied\") error.
By default, the GET
action returns the current version of an\n object. To return a different version, use the versionId
\n subresource.
If you supply a versionId
, you need the\n s3:GetObjectVersion
permission to access a specific\n version of an object. If you request a specific version, you do not need\n to have the s3:GetObject
permission. If you request the\n current version without a specific version ID, only\n s3:GetObject
permission is required.\n s3:GetObjectVersion
permission won't be required.
If the current version of the object is a delete marker, Amazon S3 behaves\n as if the object was deleted and includes x-amz-delete-marker:\n true
in the response.
For more information about versioning, see PutBucketVersioning.
\nThere are times when you want to override certain response header values in a\n GET
response. For example, you might override the\n Content-Disposition
response header value in your GET
\n request.
You can override values for a set of response headers using the following query\n parameters. These response header values are sent only on a successful request,\n that is, when status code 200 OK is returned. The set of headers you can override\n using these parameters is a subset of the headers that Amazon S3 accepts when you\n create an object. The response headers that you can override for the\n GET
response are Content-Type
,\n Content-Language
, Expires
,\n Cache-Control
, Content-Disposition
, and\n Content-Encoding
. To override these header values in the\n GET
response, you use the following request parameters.
You must sign the request, either using an Authorization header or a\n presigned URL, when using these parameters. They cannot be used with an\n unsigned (anonymous) request.
\n\n response-content-type
\n
\n response-content-language
\n
\n response-expires
\n
\n response-cache-control
\n
\n response-content-disposition
\n
\n response-content-encoding
\n
If both of the If-Match
and If-Unmodified-Since
\n headers are present in the request as follows: If-Match
condition\n evaluates to true
, and; If-Unmodified-Since
condition\n evaluates to false
; then, S3 returns 200 OK and the data requested.
If both of the If-None-Match
and If-Modified-Since
\n headers are present in the request as follows: If-None-Match
\n condition evaluates to false
, and; If-Modified-Since
\n condition evaluates to true
; then, S3 returns 304 Not Modified\n response code.
For more information about conditional requests, see RFC 7232.
\nThe following operations are related to GetObject
:
\n ListBuckets\n
\n\n GetObjectAcl\n
\nRetrieves all the metadata from an object without returning the object itself. This\n action is useful if you're interested only in an object's metadata. To use\n GetObjectAttributes
, you must have READ access to the object.
\n GetObjectAttributes
combines the functionality of HeadObject
\n and ListParts
. All of the data returned with each of those individual calls\n can be returned with a single call to GetObjectAttributes
.
If you encrypt an object by using server-side encryption with customer-provided\n encryption keys (SSE-C) when you store the object in Amazon S3, then when you retrieve the\n metadata from the object, you must use the following headers:
\n\n x-amz-server-side-encryption-customer-algorithm
\n
\n x-amz-server-side-encryption-customer-key
\n
\n x-amz-server-side-encryption-customer-key-MD5
\n
For more information about SSE-C, see Server-Side Encryption\n (Using Customer-Provided Encryption Keys) in the\n Amazon S3 User Guide.
\nEncryption request headers, such as x-amz-server-side-encryption
,\n should not be sent for GET requests if your object uses server-side encryption\n with Amazon Web Services KMS keys stored in Amazon Web Services Key Management Service (SSE-KMS) or\n server-side encryption with Amazon S3 managed keys (SSE-S3). If your object does use\n these types of keys, you'll get an HTTP 400 Bad Request
error.
The last modified property in this case is the creation date of the\n object.
\nConsider the following when using request headers:
\n If both of the If-Match
and If-Unmodified-Since
headers\n are present in the request as follows, then Amazon S3 returns the HTTP status code\n 200 OK
and the data requested:
\n If-Match
condition evaluates to true
.
\n If-Unmodified-Since
condition evaluates to\n false
.
If both of the If-None-Match
and If-Modified-Since
\n headers are present in the request as follows, then Amazon S3 returns the HTTP status code\n 304 Not Modified
:
\n If-None-Match
condition evaluates to false
.
\n If-Modified-Since
condition evaluates to\n true
.
For more information about conditional requests, see RFC 7232.
\nThe permissions that you need to use this operation depend on whether the bucket is\n versioned. If the bucket is versioned, you need both the s3:GetObjectVersion
\n and s3:GetObjectVersionAttributes
permissions for this operation. If the\n bucket is not versioned, you need the s3:GetObject
and\n s3:GetObjectAttributes
permissions. For more information, see Specifying\n Permissions in a Policy in the Amazon S3 User Guide. If the\n object that you request does not exist, the error Amazon S3 returns depends on whether you also\n have the s3:ListBucket
permission.
If you have the s3:ListBucket
permission on the bucket, Amazon S3 returns\n an HTTP status code 404 Not Found
(\"no such key\") error.
If you don't have the s3:ListBucket
permission, Amazon S3 returns an HTTP\n status code 403 Forbidden
(\"access denied\") error.
The following actions are related to GetObjectAttributes
:
\n GetObject\n
\n\n GetObjectAcl\n
\n\n GetObjectLegalHold\n
\n\n GetObjectRetention\n
\n\n GetObjectTagging\n
\n\n HeadObject\n
\n\n ListParts\n
\nRetrieves all the metadata from an object without returning the object itself. This\n action is useful if you're interested only in an object's metadata. To use\n GetObjectAttributes
, you must have READ access to the object.
\n GetObjectAttributes
combines the functionality of HeadObject
\n and ListParts
. All of the data returned with each of those individual calls\n can be returned with a single call to GetObjectAttributes
.
If you encrypt an object by using server-side encryption with customer-provided\n encryption keys (SSE-C) when you store the object in Amazon S3, then when you retrieve the\n metadata from the object, you must use the following headers:
\n\n x-amz-server-side-encryption-customer-algorithm
\n
\n x-amz-server-side-encryption-customer-key
\n
\n x-amz-server-side-encryption-customer-key-MD5
\n
For more information about SSE-C, see Server-Side Encryption\n (Using Customer-Provided Encryption Keys) in the\n Amazon S3 User Guide.
\nEncryption request headers, such as x-amz-server-side-encryption
,\n should not be sent for GET requests if your object uses server-side encryption\n with Amazon Web Services KMS keys stored in Amazon Web Services Key Management Service (SSE-KMS) or\n server-side encryption with Amazon S3 managed keys (SSE-S3). If your object does use\n these types of keys, you'll get an HTTP 400 Bad Request
error.
The last modified property in this case is the creation date of the\n object.
\nConsider the following when using request headers:
\n If both of the If-Match
and If-Unmodified-Since
headers\n are present in the request as follows, then Amazon S3 returns the HTTP status code\n 200 OK
and the data requested:
\n If-Match
condition evaluates to true
.
\n If-Unmodified-Since
condition evaluates to\n false
.
If both of the If-None-Match
and If-Modified-Since
\n headers are present in the request as follows, then Amazon S3 returns the HTTP status code\n 304 Not Modified
:
\n If-None-Match
condition evaluates to false
.
\n If-Modified-Since
condition evaluates to\n true
.
For more information about conditional requests, see RFC 7232.
\nThe permissions that you need to use this operation depend on whether the\n bucket is versioned. If the bucket is versioned, you need both the\n s3:GetObjectVersion
and s3:GetObjectVersionAttributes
\n permissions for this operation. If the bucket is not versioned, you need the\n s3:GetObject
and s3:GetObjectAttributes
permissions.\n For more information, see Specifying Permissions in\n a Policy in the Amazon S3 User Guide. If the object\n that you request does not exist, the error Amazon S3 returns depends on whether you\n also have the s3:ListBucket
permission.
If you have the s3:ListBucket
permission on the bucket, Amazon S3\n returns an HTTP status code 404 Not Found
(\"no such key\")\n error.
If you don't have the s3:ListBucket
permission, Amazon S3 returns\n an HTTP status code 403 Forbidden
(\"access denied\")\n error.
The following actions are related to GetObjectAttributes
:
\n GetObject\n
\n\n GetObjectAcl\n
\n\n GetObjectLegalHold\n
\n\n GetObjectRetention\n
\n\n GetObjectTagging\n
\n\n HeadObject\n
\n\n ListParts\n
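A minimal sketch of the single call described above that replaces separate `HeadObject` and `ListParts` requests; the bucket and key are placeholders.

```ts
import { S3Client, GetObjectAttributesCommand } from "@aws-sdk/client-s3";

const client = new S3Client({});

// ObjectAttributes selects which root-level fields are returned
// (it maps to the x-amz-object-attributes header).
const attrs = await client.send(
  new GetObjectAttributesCommand({
    Bucket: "amzn-s3-demo-bucket", // placeholder
    Key: "backups/data.bin",       // placeholder
    ObjectAttributes: ["ETag", "Checksum", "ObjectParts", "StorageClass", "ObjectSize"],
  })
);
console.log(
  attrs.ETag,
  attrs.ObjectSize,
  attrs.StorageClass,
  attrs.ObjectParts?.TotalPartsCount
);
```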
\nSpecifies the fields at the root level that you want returned in the\n response. Fields that you do not specify are not returned.
", + "smithy.api#documentation": "Specifies the fields at the root level that you want returned in the response. Fields\n that you do not specify are not returned.
", "smithy.api#httpHeader": "x-amz-object-attributes", "smithy.api#required": {} } @@ -21972,19 +21992,22 @@ "smithy.api#documentation": "Returns the tag-set of an object. You send the GET request against the tagging\n subresource associated with the object.
\nTo use this operation, you must have permission to perform the\n s3:GetObjectTagging
action. By default, the GET action returns information\n about current version of an object. For a versioned bucket, you can have multiple versions\n of an object in your bucket. To retrieve tags of any other version, use the versionId query\n parameter. You also need permission for the s3:GetObjectVersionTagging
\n action.
By default, the bucket owner has this permission and can grant this permission to\n others.
\nFor information about the Amazon S3 object tagging feature, see Object Tagging.
\nThe following actions are related to GetObjectTagging
:
\n DeleteObjectTagging\n
\n\n GetObjectAttributes\n
\n\n PutObjectTagging\n
\nThis action is useful to determine if a bucket exists and you have permission to access\n it. The action returns a 200 OK
if the bucket exists and you have permission\n to access it.
If the bucket does not exist or you do not have permission to access it, the\n HEAD
request returns a generic 400 Bad Request
, 403\n Forbidden
or 404 Not Found
code. A message body is not included, so\n you cannot determine the exception beyond these error codes.
To use this operation, you must have permissions to perform the\n s3:ListBucket
action. The bucket owner has this permission by default and\n can grant this permission to others. For more information about permissions, see Permissions Related to Bucket Subresource Operations and Managing\n Access Permissions to Your Amazon S3 Resources.
To use this API operation against an access point, you must provide the alias of the access point in place of the\n bucket name or specify the access point ARN. When using the access point ARN, you must direct requests to\n the access point hostname. The access point hostname takes the form\n AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com.\n When using the Amazon Web Services SDKs, you provide the ARN in place of the bucket name. For more\n information, see Using access points.
\nTo use this API operation against an Object Lambda access point, provide the alias of the Object Lambda access point in place of the bucket name. \nIf the Object Lambda access point alias in a request is not valid, the error code InvalidAccessPointAliasError
is returned. \nFor more information about InvalidAccessPointAliasError
, see List of\n Error Codes.
This action is useful to determine if a bucket exists and you have permission to access\n it. The action returns a 200 OK
if the bucket exists and you have permission\n to access it.
If the bucket does not exist or you do not have permission to access it, the\n HEAD
request returns a generic 400 Bad Request
, 403\n Forbidden
or 404 Not Found
code. A message body is not included, so\n you cannot determine the exception beyond these error codes.
To use this operation, you must have permissions to perform the\n s3:ListBucket
action. The bucket owner has this permission by default and\n can grant this permission to others. For more information about permissions, see Permissions Related to Bucket Subresource Operations and Managing\n Access Permissions to Your Amazon S3 Resources.
To use this API operation against an access point, you must provide the alias of the access point in\n place of the bucket name or specify the access point ARN. When using the access point ARN, you must direct\n requests to the access point hostname. The access point hostname takes the form\n AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com.\n When using the Amazon Web Services SDKs, you provide the ARN in place of the bucket name. For more\n information, see Using access points.
\nTo use this API operation against an Object Lambda access point, provide the alias of the Object Lambda access point in place of the bucket name. \nIf the Object Lambda access point alias in a request is not valid, the error code InvalidAccessPointAliasError
is returned. \nFor more information about InvalidAccessPointAliasError
, see List of\n Error Codes.
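    A hedged sketch of probing a bucket with the v3 client follows; the bucket name is a placeholder, and because `HEAD` responses carry no body, only the HTTP status code distinguishes the failure modes described above:

    ```ts
    import { S3Client, HeadBucketCommand } from "@aws-sdk/client-s3";

    const client = new S3Client({});

    try {
      await client.send(new HeadBucketCommand({ Bucket: "example-bucket" })); // placeholder name
      console.log("Bucket exists and is accessible");
    } catch (err) {
      // No message body is returned for HEAD errors, so only the status code
      // (400, 403, or 404) is available for diagnosis.
      const status = (err as { $metadata?: { httpStatusCode?: number } }).$metadata?.httpStatusCode;
      console.log("HeadBucket failed with HTTP status", status);
    }
    ```
    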
The bucket name.
\nWhen using this action with an access point, you must direct requests to the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. When using this action with an access point through the Amazon Web Services SDKs, you provide the access point ARN in place of the bucket name. For more information about access point ARNs, see Using access points in the Amazon S3 User Guide.
\nWhen you use this action with an Object Lambda access point, provide the alias of the Object Lambda access point in place of the bucket name. \n If the Object Lambda access point alias in a request is not valid, the error code InvalidAccessPointAliasError
is returned. \n For more information about InvalidAccessPointAliasError
, see List of\n Error Codes.
When you use this action with Amazon S3 on Outposts, you must direct requests to the S3 on Outposts hostname. The S3 on Outposts hostname takes the form \n AccessPointName-AccountId.outpostID.s3-outposts.Region.amazonaws.com
. When you use this action with S3 on Outposts through the Amazon Web Services SDKs, you provide the Outposts access point ARN in place of the bucket name. For more information about S3 on Outposts ARNs, see What is S3 on Outposts? in the Amazon S3 User Guide.
The bucket name.
\nWhen using this action with an access point, you must direct requests to the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. When using this action with an access point through the Amazon Web Services SDKs, you provide the access point ARN in place of the bucket name. For more information about access point ARNs, see Using access points in the Amazon S3 User Guide.
\nWhen you use this action with an Object Lambda access point, provide the alias of the Object Lambda access point in place of the\n bucket name. If the Object Lambda access point alias in a request is not valid, the error code\n InvalidAccessPointAliasError
is returned. For more information about\n InvalidAccessPointAliasError
, see List of Error\n Codes.
When you use this action with Amazon S3 on Outposts, you must direct requests to the S3 on Outposts hostname. The S3 on Outposts hostname takes the form \n AccessPointName-AccountId.outpostID.s3-outposts.Region.amazonaws.com
. When you use this action with S3 on Outposts through the Amazon Web Services SDKs, you provide the Outposts access point ARN in place of the bucket name. For more information about S3 on Outposts ARNs, see What is S3 on Outposts? in the Amazon S3 User Guide.
The HEAD
action retrieves metadata from an object without returning the object itself.\n This action is useful if you're only interested in an object's metadata. To use HEAD
, you\n must have READ access to the object.
A HEAD
request has the same options as a GET
action on an\n object. The response is identical to the GET
response except that there is no\n response body. Because of this, if the HEAD
request generates an error, it\n returns a generic 400 Bad Request
, 403 Forbidden
or 404 Not\n Found
code. It is not possible to retrieve the exact exception beyond these error\n codes.
If you encrypt an object by using server-side encryption with customer-provided\n encryption keys (SSE-C) when you store the object in Amazon S3, then when you retrieve the\n metadata from the object, you must use the following headers:
\n\n x-amz-server-side-encryption-customer-algorithm
\n
\n x-amz-server-side-encryption-customer-key
\n
\n x-amz-server-side-encryption-customer-key-MD5
\n
For more information about SSE-C, see Server-Side Encryption\n (Using Customer-Provided Encryption Keys).
\nEncryption request headers, like x-amz-server-side-encryption
,\n should not be sent for GET
requests if your object uses server-side\n encryption with Key Management Service (KMS) keys (SSE-KMS), dual-layer server-side\n encryption with Amazon Web Services KMS keys (DSSE-KMS), or server-side encryption with Amazon S3\n managed encryption keys (SSE-S3). If your object does use these types of keys,\n you’ll get an HTTP 400 Bad Request error.
The last modified property in this case is the creation date of the\n object.
\nRequest headers are limited to 8 KB in size. For more information, see Common\n Request Headers.
\nConsider the following when using request headers:
\n Consideration 1 – If both of the If-Match
and\n If-Unmodified-Since
headers are present in the request as\n follows:
\n If-Match
condition evaluates to true
, and;
\n If-Unmodified-Since
condition evaluates to\n false
;
Then Amazon S3 returns 200 OK
and the data requested.
Consideration 2 – If both of the If-None-Match
and\n If-Modified-Since
headers are present in the request as\n follows:
\n If-None-Match
condition evaluates to false
,\n and;
\n If-Modified-Since
condition evaluates to\n true
;
Then Amazon S3 returns the 304 Not Modified
response code.
For more information about conditional requests, see RFC 7232.
\nYou need the relevant read object (or version) permission for this operation. For more\n information, see Actions, resources, and condition keys for Amazon S3. \n If the object you request doesn't exist, the error that Amazon S3 returns depends\n on whether you also have the s3:ListBucket permission.
\nIf you have the s3:ListBucket
permission on the bucket, Amazon S3 returns\n an HTTP status code 404 error.
If you don’t have the s3:ListBucket
permission, Amazon S3 returns an HTTP\n status code 403 error.
The following actions are related to HeadObject
:
\n GetObject\n
\n\n GetObjectAttributes\n
\nThe HEAD
action retrieves metadata from an object without returning the\n object itself. This action is useful if you're only interested in an object's metadata. To\n use HEAD
, you must have READ access to the object.
A HEAD
request has the same options as a GET
action on an\n object. The response is identical to the GET
response except that there is no\n response body. Because of this, if the HEAD
request generates an error, it\n returns a generic 400 Bad Request
, 403 Forbidden
or 404 Not\n Found
code. It is not possible to retrieve the exact exception beyond these error\n codes.
If you encrypt an object by using server-side encryption with customer-provided\n encryption keys (SSE-C) when you store the object in Amazon S3, then when you retrieve the\n metadata from the object, you must use the following headers:
\n\n x-amz-server-side-encryption-customer-algorithm
\n
\n x-amz-server-side-encryption-customer-key
\n
\n x-amz-server-side-encryption-customer-key-MD5
\n
For more information about SSE-C, see Server-Side Encryption\n (Using Customer-Provided Encryption Keys).
\nEncryption request headers, like x-amz-server-side-encryption
,\n should not be sent for GET
requests if your object uses server-side\n encryption with Key Management Service (KMS) keys (SSE-KMS), dual-layer server-side\n encryption with Amazon Web Services KMS keys (DSSE-KMS), or server-side encryption with Amazon S3\n managed encryption keys (SSE-S3). If your object does use these types of keys,\n you’ll get an HTTP 400 Bad Request error.
The last modified property in this case is the creation date of the\n object.
\nRequest headers are limited to 8 KB in size. For more information, see Common\n Request Headers.
\nConsider the following when using request headers:
\n Consideration 1 – If both of the If-Match
and\n If-Unmodified-Since
headers are present in the request as\n follows:
\n If-Match
condition evaluates to true
, and;
\n If-Unmodified-Since
condition evaluates to\n false
;
Then Amazon S3 returns 200 OK
and the data requested.
Consideration 2 – If both of the If-None-Match
and\n If-Modified-Since
headers are present in the request as\n follows:
\n If-None-Match
condition evaluates to false
,\n and;
\n If-Modified-Since
condition evaluates to\n true
;
Then Amazon S3 returns the 304 Not Modified
response code.
For more information about conditional requests, see RFC 7232.
\nYou need the relevant read object (or version) permission for this operation.\n For more information, see Actions, resources, and condition\n keys for Amazon S3. If the object you request doesn't exist, the error that\n Amazon S3 returns depends on whether you also have the s3:ListBucket permission.
\nIf you have the s3:ListBucket
permission on the bucket, Amazon S3\n returns an HTTP status code 404 error.
If you don’t have the s3:ListBucket
permission, Amazon S3 returns\n an HTTP status code 403 error.
The following actions are related to HeadObject
:
\n GetObject\n
\n\n GetObjectAttributes\n
\nIndicates at what date the object is to be moved or deleted. The date value must conform to the ISO 8601 format. \n The time is always midnight UTC.
" + "smithy.api#documentation": "Indicates at what date the object is to be moved or deleted. The date value must conform\n to the ISO 8601 format. The time is always midnight UTC.
" } }, "Days": { @@ -24511,7 +24534,7 @@ "OptionalObjectAttributes": { "target": "com.amazonaws.s3#OptionalObjectAttributesList", "traits": { - "smithy.api#documentation": "Specifies the optional fields that you want returned in the response.\n Fields that you do not specify are not returned.
", + "smithy.api#documentation": "Specifies the optional fields that you want returned in the response. Fields that you do\n not specify are not returned.
", "smithy.api#httpHeader": "x-amz-optional-object-attributes" } } @@ -24651,7 +24674,7 @@ "Marker": { "target": "com.amazonaws.s3#Marker", "traits": { - "smithy.api#documentation": "Marker is where you want Amazon S3 to start listing from. Amazon S3 starts listing after\n this specified key. Marker can be any key in the bucket.
", + "smithy.api#documentation": "Marker is where you want Amazon S3 to start listing from. Amazon S3 starts listing after this\n specified key. Marker can be any key in the bucket.
", "smithy.api#httpQuery": "marker" } }, @@ -24659,7 +24682,7 @@ "target": "com.amazonaws.s3#MaxKeys", "traits": { "smithy.api#default": 0, - "smithy.api#documentation": "Sets the maximum number of keys returned in the response. By default, the action returns\n up to 1,000 key names. The response might contain fewer keys but will never contain more.
", + "smithy.api#documentation": "Sets the maximum number of keys returned in the response. By default, the action returns\n up to 1,000 key names. The response might contain fewer keys but will never contain more.\n
", "smithy.api#httpQuery": "max-keys" } }, @@ -24687,7 +24710,7 @@ "OptionalObjectAttributes": { "target": "com.amazonaws.s3#OptionalObjectAttributesList", "traits": { - "smithy.api#documentation": "Specifies the optional fields that you want returned in the response.\n Fields that you do not specify are not returned.
", + "smithy.api#documentation": "Specifies the optional fields that you want returned in the response. Fields that you do\n not specify are not returned.
", "smithy.api#httpHeader": "x-amz-optional-object-attributes" } } @@ -24897,7 +24920,7 @@ "OptionalObjectAttributes": { "target": "com.amazonaws.s3#OptionalObjectAttributesList", "traits": { - "smithy.api#documentation": "Specifies the optional fields that you want returned in the response.\n Fields that you do not specify are not returned.
", + "smithy.api#documentation": "Specifies the optional fields that you want returned in the response. Fields that you do\n not specify are not returned.
", "smithy.api#httpHeader": "x-amz-optional-object-attributes" } } @@ -24935,7 +24958,7 @@ "AbortDate": { "target": "com.amazonaws.s3#AbortDate", "traits": { - "smithy.api#documentation": "If the bucket has a lifecycle rule configured with an action to abort incomplete\n multipart uploads and the prefix in the lifecycle rule matches the object name in the\n request, then the response includes this header indicating when the initiated multipart\n upload will become eligible for abort operation. For more information, see Aborting Incomplete Multipart Uploads Using a Bucket Lifecycle Configuration.
\nThe response will also include the x-amz-abort-rule-id
header that will\n provide the ID of the lifecycle configuration rule that defines this action.
If the bucket has a lifecycle rule configured with an action to abort incomplete\n multipart uploads and the prefix in the lifecycle rule matches the object name in the\n request, then the response includes this header indicating when the initiated multipart\n upload will become eligible for abort operation. For more information, see Aborting Incomplete Multipart Uploads Using a Bucket Lifecycle\n Configuration.
\nThe response will also include the x-amz-abort-rule-id
header that will\n provide the ID of the lifecycle configuration rule that defines this action.
Specifies object key name filtering rules. For information about key name filtering, see\n Configuring event notifications using object key name filtering in the Amazon S3 User Guide.
" + "smithy.api#documentation": "Specifies object key name filtering rules. For information about key name filtering, see\n Configuring event\n notifications using object key name filtering in the\n Amazon S3 User Guide.
" } }, "com.amazonaws.s3#NotificationId": { @@ -25684,7 +25707,7 @@ "RestoreStatus": { "target": "com.amazonaws.s3#RestoreStatus", "traits": { - "smithy.api#documentation": "Specifies the restoration status of an object. Objects in certain storage classes must be restored\n before they can be retrieved. For more information about these storage classes and how to work with\n archived objects, see \n Working with archived objects in the Amazon S3 User Guide.
" + "smithy.api#documentation": "Specifies the restoration status of an object. Objects in certain storage classes must\n be restored before they can be retrieved. For more information about these storage classes\n and how to work with archived objects, see Working with archived\n objects in the Amazon S3 User Guide.
" } } }, @@ -26200,7 +26223,7 @@ "RestoreStatus": { "target": "com.amazonaws.s3#RestoreStatus", "traits": { - "smithy.api#documentation": "Specifies the restoration status of an object. Objects in certain storage classes must be restored\n before they can be retrieved. For more information about these storage classes and how to work with\n archived objects, see \n Working with archived objects in the Amazon S3 User Guide.
" + "smithy.api#documentation": "Specifies the restoration status of an object. Objects in certain storage classes must\n be restored before they can be retrieved. For more information about these storage classes\n and how to work with archived objects, see Working with archived\n objects in the Amazon S3 User Guide.
" } } }, @@ -26695,7 +26718,7 @@ "requestAlgorithmMember": "ChecksumAlgorithm", "requestChecksumRequired": true }, - "smithy.api#documentation": "Sets the permissions on an existing bucket using access control lists (ACL). For more\n information, see Using ACLs. To set the ACL of a\n bucket, you must have WRITE_ACP
permission.
You can use one of the following two ways to set a bucket's permissions:
\nSpecify the ACL in the request body
\nSpecify permissions using request headers
\nYou cannot specify access permission using both the body and the request\n headers.
\nDepending on your application needs, you may choose to set the ACL on a bucket using\n either the request body or the headers. For example, if you have an existing application\n that updates a bucket ACL using the request body, then you can continue to use that\n approach.
\nIf your bucket uses the bucket owner enforced setting for S3 Object Ownership, ACLs\n are disabled and no longer affect permissions. You must use policies to grant access to\n your bucket and the objects in it. Requests to set ACLs or update ACLs fail and return\n the AccessControlListNotSupported
error code. Requests to read ACLs are\n still supported. For more information, see Controlling object\n ownership in the Amazon S3 User Guide.
You can set access permissions by using one of the following methods:
\nSpecify a canned ACL with the x-amz-acl
request header. Amazon S3 supports\n a set of predefined ACLs, known as canned ACLs. Each canned ACL\n has a predefined set of grantees and permissions. Specify the canned ACL name as the\n value of x-amz-acl
. If you use this header, you cannot use other access\n control-specific headers in your request. For more information, see Canned\n ACL.
Specify access permissions explicitly with the x-amz-grant-read
,\n x-amz-grant-read-acp
, x-amz-grant-write-acp
, and\n x-amz-grant-full-control
headers. When using these headers, you\n specify explicit access permissions and grantees (Amazon Web Services accounts or Amazon S3 groups) who\n will receive the permission. If you use these ACL-specific headers, you cannot use\n the x-amz-acl
header to set a canned ACL. These parameters map to the\n set of permissions that Amazon S3 supports in an ACL. For more information, see Access Control\n List (ACL) Overview.
You specify each grantee as a type=value pair, where the type is one of the\n following:
\n\n id
– if the value specified is the canonical user ID of an\n Amazon Web Services account
\n uri
– if you are granting permissions to a predefined\n group
\n emailAddress
– if the value specified is the email address of\n an Amazon Web Services account
Using email addresses to specify a grantee is only supported in the following Amazon Web Services Regions:
\nUS East (N. Virginia)
\nUS West (N. California)
\nUS West (Oregon)
\nAsia Pacific (Singapore)
\nAsia Pacific (Sydney)
\nAsia Pacific (Tokyo)
\nEurope (Ireland)
\nSouth America (São Paulo)
\nFor a list of all the Amazon S3 supported Regions and endpoints, see Regions and Endpoints in the Amazon Web Services General Reference.
\nFor example, the following x-amz-grant-write
header grants create,\n overwrite, and delete objects permission to LogDelivery group predefined by Amazon S3 and\n two Amazon Web Services accounts identified by their email addresses.
\n x-amz-grant-write: uri=\"http://acs.amazonaws.com/groups/s3/LogDelivery\",\n id=\"111122223333\", id=\"555566667777\"
\n
You can use either a canned ACL or specify access permissions explicitly. You cannot do\n both.
\nYou can specify the person (grantee) to whom you're assigning access rights (using\n request elements) in the following ways:
\nBy the person's ID:
\n\n
\n
DisplayName is optional and ignored in the request
\nBy URI:
\n\n
\n
By Email address:
\n\n
\n
The grantee is resolved to the CanonicalUser and, in a response to a GET Object\n acl request, appears as the CanonicalUser.
\nUsing email addresses to specify a grantee is only supported in the following Amazon Web Services Regions:
\nUS East (N. Virginia)
\nUS West (N. California)
\nUS West (Oregon)
\nAsia Pacific (Singapore)
\nAsia Pacific (Sydney)
\nAsia Pacific (Tokyo)
\nEurope (Ireland)
\nSouth America (São Paulo)
\nFor a list of all the Amazon S3 supported Regions and endpoints, see Regions and Endpoints in the Amazon Web Services General Reference.
\nThe following operations are related to PutBucketAcl
:
\n CreateBucket\n
\n\n DeleteBucket\n
\n\n GetObjectAcl\n
\nSets the permissions on an existing bucket using access control lists (ACL). For more\n information, see Using ACLs. To set the ACL of a\n bucket, you must have WRITE_ACP
permission.
You can use one of the following two ways to set a bucket's permissions:
\nSpecify the ACL in the request body
\nSpecify permissions using request headers
\nYou cannot specify access permission using both the body and the request\n headers.
\nDepending on your application needs, you may choose to set the ACL on a bucket using\n either the request body or the headers. For example, if you have an existing application\n that updates a bucket ACL using the request body, then you can continue to use that\n approach.
\nIf your bucket uses the bucket owner enforced setting for S3 Object Ownership, ACLs\n are disabled and no longer affect permissions. You must use policies to grant access to\n your bucket and the objects in it. Requests to set ACLs or update ACLs fail and return\n the AccessControlListNotSupported
error code. Requests to read ACLs are\n still supported. For more information, see Controlling object\n ownership in the Amazon S3 User Guide.
You can set access permissions by using one of the following methods:
\nSpecify a canned ACL with the x-amz-acl
request header. Amazon S3\n supports a set of predefined ACLs, known as canned\n ACLs. Each canned ACL has a predefined set of grantees and\n permissions. Specify the canned ACL name as the value of\n x-amz-acl
. If you use this header, you cannot use other\n access control-specific headers in your request. For more information, see\n Canned\n ACL.
Specify access permissions explicitly with the\n x-amz-grant-read
, x-amz-grant-read-acp
,\n x-amz-grant-write-acp
, and\n x-amz-grant-full-control
headers. When using these headers,\n you specify explicit access permissions and grantees (Amazon Web Services accounts or Amazon S3\n groups) who will receive the permission. If you use these ACL-specific\n headers, you cannot use the x-amz-acl
header to set a canned\n ACL. These parameters map to the set of permissions that Amazon S3 supports in an\n ACL. For more information, see Access Control List (ACL)\n Overview.
You specify each grantee as a type=value pair, where the type is one of\n the following:
\n\n id
– if the value specified is the canonical user ID\n of an Amazon Web Services account
\n uri
– if you are granting permissions to a predefined\n group
\n emailAddress
– if the value specified is the email\n address of an Amazon Web Services account
Using email addresses to specify a grantee is only supported in the following Amazon Web Services Regions:
\nUS East (N. Virginia)
\nUS West (N. California)
\nUS West (Oregon)
\nAsia Pacific (Singapore)
\nAsia Pacific (Sydney)
\nAsia Pacific (Tokyo)
\nEurope (Ireland)
\nSouth America (São Paulo)
\nFor a list of all the Amazon S3 supported Regions and endpoints, see Regions and Endpoints in the Amazon Web Services General Reference.
\nFor example, the following x-amz-grant-write
header grants\n create, overwrite, and delete objects permission to LogDelivery group\n predefined by Amazon S3 and two Amazon Web Services accounts identified by their email\n addresses.
\n x-amz-grant-write:\n uri=\"http://acs.amazonaws.com/groups/s3/LogDelivery\", id=\"111122223333\",\n id=\"555566667777\"
\n
You can use either a canned ACL or specify access permissions explicitly. You\n cannot do both.
\nYou can specify the person (grantee) to whom you're assigning access rights\n (using request elements) in the following ways:
\nBy the person's ID:
\n\n
\n
DisplayName is optional and ignored in the request
\nBy URI:
\n\n
\n
By Email address:
\n\n
\n
The grantee is resolved to the CanonicalUser and, in a response to a GET\n Object acl request, appears as the CanonicalUser.
\nUsing email addresses to specify a grantee is only supported in the following Amazon Web Services Regions:
\nUS East (N. Virginia)
\nUS West (N. California)
\nUS West (Oregon)
\nAsia Pacific (Singapore)
\nAsia Pacific (Sydney)
\nAsia Pacific (Tokyo)
\nEurope (Ireland)
\nSouth America (São Paulo)
\nFor a list of all the Amazon S3 supported Regions and endpoints, see Regions and Endpoints in the Amazon Web Services General Reference.
\nThe following operations are related to PutBucketAcl
:
\n CreateBucket\n
\n\n DeleteBucket\n
\n\n GetObjectAcl\n
\nSets an analytics configuration for the bucket (specified by the analytics configuration\n ID). You can have up to 1,000 analytics configurations per bucket.
\nYou can choose to have storage class analysis export analysis reports sent to a\n comma-separated values (CSV) flat file. See the DataExport
request element.\n Reports are updated daily and are based on the object filters that you configure. When\n selecting data export, you specify a destination bucket and an optional destination prefix\n where the file is written. You can export the data to a destination bucket in a different\n account. However, the destination bucket must be in the same Region as the bucket that you\n are making the PUT analytics configuration to. For more information, see Amazon S3\n Analytics – Storage Class Analysis.
You must create a bucket policy on the destination bucket where the exported file is\n written to grant permissions to Amazon S3 to write objects to the bucket. For an example\n policy, see Granting Permissions for Amazon S3 Inventory and Storage Class Analysis.
\nTo use this operation, you must have permissions to perform the\n s3:PutAnalyticsConfiguration
action. The bucket owner has this permission\n by default. The bucket owner can grant this permission to others. For more information\n about permissions, see Permissions Related to Bucket Subresource Operations and Managing\n Access Permissions to Your Amazon S3 Resources.
\n PutBucketAnalyticsConfiguration
has the following special errors:
\n HTTP Error: HTTP 400 Bad Request\n
\n\n Code: InvalidArgument\n
\n\n Cause: Invalid argument.\n
\n\n HTTP Error: HTTP 400 Bad Request\n
\n\n Code: TooManyConfigurations\n
\n\n Cause: You are attempting to create a new configuration but have\n already reached the 1,000-configuration limit.\n
\n\n HTTP Error: HTTP 403 Forbidden\n
\n\n Code: AccessDenied\n
\n\n Cause: You are not the owner of the specified bucket, or you do\n not have the s3:PutAnalyticsConfiguration bucket permission to set the\n configuration on the bucket.\n
\nThe following operations are related to PutBucketAnalyticsConfiguration
:
Sets an analytics configuration for the bucket (specified by the analytics configuration\n ID). You can have up to 1,000 analytics configurations per bucket.
\nYou can choose to have storage class analysis export analysis reports sent to a\n comma-separated values (CSV) flat file. See the DataExport
request element.\n Reports are updated daily and are based on the object filters that you configure. When\n selecting data export, you specify a destination bucket and an optional destination prefix\n where the file is written. You can export the data to a destination bucket in a different\n account. However, the destination bucket must be in the same Region as the bucket that you\n are making the PUT analytics configuration to. For more information, see Amazon S3\n Analytics – Storage Class Analysis.
You must create a bucket policy on the destination bucket where the exported file is\n written to grant permissions to Amazon S3 to write objects to the bucket. For an example\n policy, see Granting Permissions for Amazon S3 Inventory and Storage Class Analysis.
\nTo use this operation, you must have permissions to perform the\n s3:PutAnalyticsConfiguration
action. The bucket owner has this permission\n by default. The bucket owner can grant this permission to others. For more information\n about permissions, see Permissions Related to Bucket Subresource Operations and Managing\n Access Permissions to Your Amazon S3 Resources.
\n PutBucketAnalyticsConfiguration
has the following special errors:
\n HTTP Error: HTTP 400 Bad Request\n
\n\n Code: InvalidArgument\n
\n\n Cause: Invalid argument.\n
\n\n HTTP Error: HTTP 400 Bad Request\n
\n\n Code: TooManyConfigurations\n
\n\n Cause: You are attempting to create a new configuration but have\n already reached the 1,000-configuration limit.\n
\n\n HTTP Error: HTTP 403 Forbidden\n
\n\n Code: AccessDenied\n
\n\n Cause: You are not the owner of the specified bucket, or you do\n not have the s3:PutAnalyticsConfiguration bucket permission to set the\n configuration on the bucket.\n
\nThe following operations are related to\n PutBucketAnalyticsConfiguration
:
This action uses the encryption
subresource to configure default encryption\n and Amazon S3 Bucket Keys for an existing bucket.
By default, all buckets have a default encryption configuration that uses server-side\n encryption with Amazon S3 managed keys (SSE-S3). You can optionally configure default encryption\n for a bucket by using server-side encryption with Key Management Service (KMS) keys (SSE-KMS),\n dual-layer server-side encryption with Amazon Web Services KMS keys (DSSE-KMS), or server-side\n encryption with customer-provided keys (SSE-C). If you specify default encryption by using\n SSE-KMS, you can also configure Amazon S3 Bucket Keys. For information about bucket default\n encryption, see Amazon S3 bucket default encryption\n in the Amazon S3 User Guide. For more information about S3 Bucket Keys, see\n Amazon S3 Bucket\n Keys in the Amazon S3 User Guide.
\nThis action requires Amazon Web Services Signature Version 4. For more information, see \n Authenticating Requests (Amazon Web Services Signature Version 4).
\nTo use this operation, you must have permission to perform the\n s3:PutEncryptionConfiguration
action. The bucket owner has this permission\n by default. The bucket owner can grant this permission to others. For more information\n about permissions, see Permissions Related to Bucket Subresource Operations and Managing\n Access Permissions to Your Amazon S3 Resources in the\n Amazon S3 User Guide.
The following operations are related to PutBucketEncryption
:
\n GetBucketEncryption\n
\nThis action uses the encryption
subresource to configure default encryption\n and Amazon S3 Bucket Keys for an existing bucket.
By default, all buckets have a default encryption configuration that uses server-side\n encryption with Amazon S3 managed keys (SSE-S3). You can optionally configure default encryption\n for a bucket by using server-side encryption with Key Management Service (KMS) keys (SSE-KMS) or\n dual-layer server-side encryption with Amazon Web Services KMS keys (DSSE-KMS). If you specify default encryption by using\n SSE-KMS, you can also configure Amazon S3 Bucket\n Keys. If you use PutBucketEncryption to set your default bucket encryption to SSE-KMS, you should verify that your KMS key ID is correct. Amazon S3 does not validate the KMS key ID provided in PutBucketEncryption requests.
\nThis action requires Amazon Web Services Signature Version 4. For more information, see \n Authenticating Requests (Amazon Web Services Signature Version 4).
\nTo use this operation, you must have permission to perform the\n s3:PutEncryptionConfiguration
action. The bucket owner has this permission\n by default. The bucket owner can grant this permission to others. For more information\n about permissions, see Permissions Related to Bucket Subresource Operations and Managing\n Access Permissions to Your Amazon S3 Resources in the\n Amazon S3 User Guide.
The following operations are related to PutBucketEncryption
:
\n GetBucketEncryption\n
\nPuts a S3 Intelligent-Tiering configuration to the specified bucket. You can have up to\n 1,000 S3 Intelligent-Tiering configurations per bucket.
\nThe S3 Intelligent-Tiering storage class is designed to optimize storage costs by automatically moving data to the most cost-effective storage access tier, without performance impact or operational overhead. S3 Intelligent-Tiering delivers automatic cost savings in three low latency and high throughput access tiers. To get the lowest storage cost on data that can be accessed in minutes to hours, you can choose to activate additional archiving capabilities.
\nThe S3 Intelligent-Tiering storage class is the ideal storage class for data with unknown, changing, or unpredictable access patterns, independent of object size or retention period. If the size of an object is less than 128 KB, it is not monitored and not eligible for auto-tiering. Smaller objects can be stored, but they are always charged at the Frequent Access tier rates in the S3 Intelligent-Tiering storage class.
\nFor more information, see Storage class for automatically optimizing frequently and infrequently accessed objects.
\nOperations related to PutBucketIntelligentTieringConfiguration
include:
You only need S3 Intelligent-Tiering enabled on a bucket if you want to automatically\n move objects stored in the S3 Intelligent-Tiering storage class to the Archive Access\n or Deep Archive Access tier.
\n\n PutBucketIntelligentTieringConfiguration
has the following special errors:
\n Code: InvalidArgument
\n\n Cause: Invalid Argument
\n\n Code: TooManyConfigurations
\n\n Cause: You are attempting to create a new configuration\n but have already reached the 1,000-configuration limit.
\n\n Cause: You are not the owner of the specified bucket,\n or you do not have the s3:PutIntelligentTieringConfiguration
\n bucket permission to set the configuration on the bucket.
Puts a S3 Intelligent-Tiering configuration to the specified bucket. You can have up to\n 1,000 S3 Intelligent-Tiering configurations per bucket.
\nThe S3 Intelligent-Tiering storage class is designed to optimize storage costs by automatically moving data to the most cost-effective storage access tier, without performance impact or operational overhead. S3 Intelligent-Tiering delivers automatic cost savings in three low latency and high throughput access tiers. To get the lowest storage cost on data that can be accessed in minutes to hours, you can choose to activate additional archiving capabilities.
\nThe S3 Intelligent-Tiering storage class is the ideal storage class for data with unknown, changing, or unpredictable access patterns, independent of object size or retention period. If the size of an object is less than 128 KB, it is not monitored and not eligible for auto-tiering. Smaller objects can be stored, but they are always charged at the Frequent Access tier rates in the S3 Intelligent-Tiering storage class.
\nFor more information, see Storage class for automatically optimizing frequently and infrequently accessed objects.
\nOperations related to PutBucketIntelligentTieringConfiguration
include:
You only need S3 Intelligent-Tiering enabled on a bucket if you want to automatically\n move objects stored in the S3 Intelligent-Tiering storage class to the Archive Access\n or Deep Archive Access tier.
\n\n PutBucketIntelligentTieringConfiguration
has the following special\n errors:
\n Code: InvalidArgument
\n\n Cause: Invalid Argument
\n\n Code: TooManyConfigurations
\n\n Cause: You are attempting to create a new configuration\n but have already reached the 1,000-configuration limit.
\n\n Cause: You are not the owner of the specified bucket, or\n you do not have the s3:PutIntelligentTieringConfiguration
bucket\n permission to set the configuration on the bucket.
This implementation of the PUT
action adds an inventory configuration\n (identified by the inventory ID) to the bucket. You can have up to 1,000 inventory\n configurations per bucket.
Amazon S3 inventory generates inventories of the objects in the bucket on a daily or weekly\n basis, and the results are published to a flat file. The bucket that is inventoried is\n called the source bucket, and the bucket where the inventory flat file\n is stored is called the destination bucket. The\n destination bucket must be in the same Amazon Web Services Region as the\n source bucket.
\nWhen you configure an inventory for a source bucket, you specify\n the destination bucket where you want the inventory to be stored, and\n whether to generate the inventory daily or weekly. You can also configure what object\n metadata to include and whether to inventory all object versions or only current versions.\n For more information, see Amazon S3 Inventory in the\n Amazon S3 User Guide.
\nYou must create a bucket policy on the destination bucket to\n grant permissions to Amazon S3 to write objects to the bucket in the defined location. For an\n example policy, see Granting Permissions for Amazon S3 Inventory and Storage Class Analysis.
\nTo use this operation, you must have permission to perform the\n s3:PutInventoryConfiguration
action. The bucket owner has this permission\n by default and can grant this permission to others.
The s3:PutInventoryConfiguration
permission allows a user to create an\n S3\n Inventory report that includes all object metadata fields available and to\n specify the destination bucket to store the inventory. A user with read access to objects\n in the destination bucket can also access all object metadata fields that are available in\n the inventory report.
To restrict access to an inventory report, see Restricting access to an Amazon S3 Inventory report in the\n Amazon S3 User Guide. For more information about the metadata fields\n available in S3 Inventory, see Amazon S3\n Inventory lists in the Amazon S3 User Guide. For more\n information about permissions, see Permissions related to bucket subresource operations and Identity and\n access management in Amazon S3 in the Amazon S3 User Guide.
\n\n PutBucketInventoryConfiguration
has the following special errors:
\n Code: InvalidArgument
\n\n Cause: Invalid Argument
\n\n Code: TooManyConfigurations
\n\n Cause: You are attempting to create a new configuration\n but have already reached the 1,000-configuration limit.
\n\n Cause: You are not the owner of the specified bucket,\n or you do not have the s3:PutInventoryConfiguration
bucket\n permission to set the configuration on the bucket.
The following operations are related to PutBucketInventoryConfiguration
:
This implementation of the PUT
action adds an inventory configuration\n (identified by the inventory ID) to the bucket. You can have up to 1,000 inventory\n configurations per bucket.
Amazon S3 inventory generates inventories of the objects in the bucket on a daily or weekly\n basis, and the results are published to a flat file. The bucket that is inventoried is\n called the source bucket, and the bucket where the inventory flat file\n is stored is called the destination bucket. The\n destination bucket must be in the same Amazon Web Services Region as the\n source bucket.
\nWhen you configure an inventory for a source bucket, you specify\n the destination bucket where you want the inventory to be stored, and\n whether to generate the inventory daily or weekly. You can also configure what object\n metadata to include and whether to inventory all object versions or only current versions.\n For more information, see Amazon S3 Inventory in the\n Amazon S3 User Guide.
\nYou must create a bucket policy on the destination bucket to\n grant permissions to Amazon S3 to write objects to the bucket in the defined location. For an\n example policy, see Granting Permissions for Amazon S3 Inventory and Storage Class Analysis.
\nTo use this operation, you must have permission to perform the\n s3:PutInventoryConfiguration
action. The bucket owner has this\n permission by default and can grant this permission to others.
The s3:PutInventoryConfiguration
permission allows a user to\n create an S3 Inventory\n report that includes all object metadata fields available and to specify the\n destination bucket to store the inventory. A user with read access to objects in\n the destination bucket can also access all object metadata fields that are\n available in the inventory report.
To restrict access to an inventory report, see Restricting access to an Amazon S3 Inventory report in the\n Amazon S3 User Guide. For more information about the metadata\n fields available in S3 Inventory, see Amazon S3 Inventory lists in the Amazon S3 User Guide. For\n more information about permissions, see Permissions related to bucket subresource operations and Identity and access management in Amazon S3 in the\n Amazon S3 User Guide.
\n\n PutBucketInventoryConfiguration
has the following special errors:
\n Code: InvalidArgument
\n\n Cause: Invalid Argument
\n\n Code: TooManyConfigurations
\n\n Cause: You are attempting to create a new configuration\n but have already reached the 1,000-configuration limit.
\n\n Cause: You are not the owner of the specified bucket, or\n you do not have the s3:PutInventoryConfiguration
bucket permission to\n set the configuration on the bucket.
The following operations are related to\n PutBucketInventoryConfiguration
:
Creates a new lifecycle configuration for the bucket or replaces an existing lifecycle\n configuration. Keep in mind that this will overwrite an existing lifecycle configuration,\n so if you want to retain any configuration details, they must be included in the new\n lifecycle configuration. For information about lifecycle configuration, see Managing\n your storage lifecycle.
\nBucket lifecycle configuration now supports specifying a lifecycle rule using an\n object key name prefix, one or more object tags, or a combination of both. Accordingly,\n this section describes the latest API. The previous version of the API supported\n filtering based only on an object key name prefix, which is supported for backward\n compatibility. For the related API description, see PutBucketLifecycle.
\nYou specify the lifecycle configuration in your request body. The lifecycle\n configuration is specified as XML consisting of one or more rules. An Amazon S3 Lifecycle\n configuration can have up to 1,000 rules. This limit is not adjustable. Each rule consists\n of the following:
\nA filter identifying a subset of objects to which the rule applies. The filter can\n be based on a key name prefix, object tags, or a combination of both.
\nA status indicating whether the rule is in effect.
\nOne or more lifecycle transition and expiration actions that you want Amazon S3 to\n perform on the objects identified by the filter. If the state of your bucket is\n versioning-enabled or versioning-suspended, you can have many versions of the same\n object (one current version and zero or more noncurrent versions). Amazon S3 provides\n predefined actions that you can specify for current and noncurrent object\n versions.
\nFor more information, see Object Lifecycle Management\n and Lifecycle Configuration Elements.
\nBy default, all Amazon S3 resources are private, including buckets, objects, and related\n subresources (for example, lifecycle configuration and website configuration). Only the\n resource owner (that is, the Amazon Web Services account that created it) can access the resource. The\n resource owner can optionally grant access permissions to others by writing an access\n policy. For this operation, a user must get the s3:PutLifecycleConfiguration
\n permission.
You can also explicitly deny permissions. An explicit deny also supersedes any other\n permissions. If you want to block users or accounts from removing or deleting objects from\n your bucket, you must deny them permissions for the following actions:
\n\n s3:DeleteObject
\n
\n s3:DeleteObjectVersion
\n
\n s3:PutLifecycleConfiguration
\n
For more information about permissions, see Managing Access Permissions to\n Your Amazon S3 Resources.
\nThe following operations are related to PutBucketLifecycleConfiguration
:
Creates a new lifecycle configuration for the bucket or replaces an existing lifecycle\n configuration. Keep in mind that this will overwrite an existing lifecycle configuration,\n so if you want to retain any configuration details, they must be included in the new\n lifecycle configuration. For information about lifecycle configuration, see Managing\n your storage lifecycle.
\nBucket lifecycle configuration now supports specifying a lifecycle rule using an\n object key name prefix, one or more object tags, or a combination of both. Accordingly,\n this section describes the latest API. The previous version of the API supported\n filtering based only on an object key name prefix, which is supported for backward\n compatibility. For the related API description, see PutBucketLifecycle.
\nYou specify the lifecycle configuration in your request body. The lifecycle\n configuration is specified as XML consisting of one or more rules. An Amazon S3\n Lifecycle configuration can have up to 1,000 rules. This limit is not adjustable.\n Each rule consists of the following:
\nA filter identifying a subset of objects to which the rule applies. The\n filter can be based on a key name prefix, object tags, or a combination of\n both.
\nA status indicating whether the rule is in effect.
\nOne or more lifecycle transition and expiration actions that you want\n Amazon S3 to perform on the objects identified by the filter. If the state of\n your bucket is versioning-enabled or versioning-suspended, you can have many\n versions of the same object (one current version and zero or more noncurrent\n versions). Amazon S3 provides predefined actions that you can specify for current\n and noncurrent object versions.
\nFor more information, see Object Lifecycle\n Management and Lifecycle Configuration\n Elements.
\nBy default, all Amazon S3 resources are private, including buckets, objects, and\n related subresources (for example, lifecycle configuration and website\n configuration). Only the resource owner (that is, the Amazon Web Services account that created\n it) can access the resource. The resource owner can optionally grant access\n permissions to others by writing an access policy. For this operation, a user must\n get the s3:PutLifecycleConfiguration
permission.
You can also explicitly deny permissions. An explicit deny also supersedes any\n other permissions. If you want to block users or accounts from removing or\n deleting objects from your bucket, you must deny them permissions for the\n following actions:
\n\n s3:DeleteObject
\n
\n s3:DeleteObjectVersion
\n
\n s3:PutLifecycleConfiguration
\n
For more information about permissions, see Managing Access\n Permissions to Your Amazon S3 Resources.
\nThe following operations are related to\n PutBucketLifecycleConfiguration
:
Set the logging parameters for a bucket and to specify permissions for who can view and\n modify the logging parameters. All logs are saved to buckets in the same Amazon Web Services Region as\n the source bucket. To set the logging status of a bucket, you must be the bucket\n owner.
\nThe bucket owner is automatically granted FULL_CONTROL to all logs. You use the\n Grantee
request element to grant access to other people. The\n Permissions
request element specifies the kind of access the grantee has to\n the logs.
If the target bucket for log delivery uses the bucket owner enforced setting for S3\n Object Ownership, you can't use the Grantee
request element to grant access\n to others. Permissions can only be granted using policies. For more information, see\n Permissions for server access log delivery in the\n Amazon S3 User Guide.
You can specify the person (grantee) to whom you're assigning access rights (by using\n request elements) in the following ways:
\nBy the person's ID:
\n\n
\n
\n DisplayName
is optional and ignored in the request.
By Email address:
\n\n
\n
The grantee is resolved to the CanonicalUser
and, in a response to a GETObjectAcl
\n request, appears as the CanonicalUser.
By URI:
\n\n
\n
To enable logging, you use LoggingEnabled
and its children request elements. To disable\n logging, you use an empty BucketLoggingStatus
request element:
\n
\n
For more information about server access logging, see Server Access Logging in the\n Amazon S3 User Guide.
\nFor more information about creating a bucket, see CreateBucket. For more\n information about returning the logging status of a bucket, see GetBucketLogging.
\nThe following operations are related to PutBucketLogging
:
\n PutObject\n
\n\n DeleteBucket\n
\n\n CreateBucket\n
\n\n GetBucketLogging\n
\nSet the logging parameters for a bucket and to specify permissions for who can view and\n modify the logging parameters. All logs are saved to buckets in the same Amazon Web Services Region as\n the source bucket. To set the logging status of a bucket, you must be the bucket\n owner.
\nThe bucket owner is automatically granted FULL_CONTROL to all logs. You use the\n Grantee
request element to grant access to other people. The\n Permissions
request element specifies the kind of access the grantee has to\n the logs.
If the target bucket for log delivery uses the bucket owner enforced setting for S3\n Object Ownership, you can't use the Grantee
request element to grant access\n to others. Permissions can only be granted using policies. For more information, see\n Permissions for server access log delivery in the\n Amazon S3 User Guide.
You can specify the person (grantee) to whom you're assigning access rights (by\n using request elements) in the following ways:
\nBy the person's ID:
\n\n
\n
\n DisplayName
is optional and ignored in the request.
By Email address:
\n\n
\n
The grantee is resolved to the CanonicalUser
and, in a\n response to a GETObjectAcl
request, appears as the\n CanonicalUser.
By URI:
\n\n
\n
To enable logging, you use LoggingEnabled
and its children request\n elements. To disable logging, you use an empty BucketLoggingStatus
request\n element:
\n
\n
For more information about server access logging, see Server Access Logging in the\n Amazon S3 User Guide.
\nFor more information about creating a bucket, see CreateBucket. For more\n information about returning the logging status of a bucket, see GetBucketLogging.
\nThe following operations are related to PutBucketLogging
:
\n PutObject\n
\n\n DeleteBucket\n
\n\n CreateBucket\n
\n\n GetBucketLogging\n
\nApplies an Amazon S3 bucket policy to an Amazon S3 bucket. If you are using an identity other than\n the root user of the Amazon Web Services account that owns the bucket, the calling identity must have the\n PutBucketPolicy
permissions on the specified bucket and belong to the\n bucket owner's account in order to use this operation.
If you don't have PutBucketPolicy
permissions, Amazon S3 returns a 403\n Access Denied
error. If you have the correct permissions, but you're not using an\n identity that belongs to the bucket owner's account, Amazon S3 returns a 405 Method Not\n Allowed
error.
To ensure that bucket owners don't inadvertently lock themselves out of their own\n buckets, the root principal in a bucket owner's Amazon Web Services account can perform the\n GetBucketPolicy
, PutBucketPolicy
, and\n DeleteBucketPolicy
API actions, even if their bucket policy explicitly\n denies the root principal's access. Bucket owner root principals can only be blocked from performing \n these API actions by VPC endpoint policies and Amazon Web Services Organizations policies.
For more information, see Bucket policy\n examples.
\nThe following operations are related to PutBucketPolicy
:
\n CreateBucket\n
\n\n DeleteBucket\n
\nApplies an Amazon S3 bucket policy to an Amazon S3 bucket. If you are using an identity other than\n the root user of the Amazon Web Services account that owns the bucket, the calling identity must have the\n PutBucketPolicy
permissions on the specified bucket and belong to the\n bucket owner's account in order to use this operation.
If you don't have PutBucketPolicy
permissions, Amazon S3 returns a 403\n Access Denied
error. If you have the correct permissions, but you're not using an\n identity that belongs to the bucket owner's account, Amazon S3 returns a 405 Method Not\n Allowed
error.
To ensure that bucket owners don't inadvertently lock themselves out of their own\n buckets, the root principal in a bucket owner's Amazon Web Services account can perform the\n GetBucketPolicy
, PutBucketPolicy
, and\n DeleteBucketPolicy
API actions, even if their bucket policy explicitly\n denies the root principal's access. Bucket owner root principals can only be blocked\n from performing these API actions by VPC endpoint policies and Amazon Web Services Organizations\n policies.
For more information, see Bucket policy\n examples.
\nThe following operations are related to PutBucketPolicy
:
\n CreateBucket\n
\n\n DeleteBucket\n
\nCreates a replication configuration or replaces an existing one. For more information,\n see Replication in the Amazon S3 User Guide.
\nSpecify the replication configuration in the request body. In the replication\n configuration, you provide the name of the destination bucket or buckets where you want\n Amazon S3 to replicate objects, the IAM role that Amazon S3 can assume to replicate objects on your\n behalf, and other relevant information.
\nA replication configuration must include at least one rule, and can contain a maximum of\n 1,000. Each rule identifies a subset of objects to replicate by filtering the objects in\n the source bucket. To choose additional subsets of objects to replicate, add a rule for\n each subset.
\nTo specify a subset of the objects in the source bucket to apply a replication rule to,\n add the Filter element as a child of the Rule element. You can filter objects based on an\n object key prefix, one or more object tags, or both. When you add the Filter element in the\n configuration, you must also add the following elements:\n DeleteMarkerReplication
, Status
, and\n Priority
.
If you are using an earlier version of the replication configuration, Amazon S3 handles\n replication of delete markers differently. For more information, see Backward Compatibility.
\nFor information about enabling versioning on a bucket, see Using Versioning.
\nBy default, Amazon S3 doesn't replicate objects that are stored at rest using server-side\n encryption with KMS keys. To replicate Amazon Web Services KMS-encrypted objects, add the following:\n SourceSelectionCriteria
, SseKmsEncryptedObjects
,\n Status
, EncryptionConfiguration
, and\n ReplicaKmsKeyID
. For information about replication configuration, see\n Replicating Objects\n Created with SSE Using KMS keys.
For information on PutBucketReplication
errors, see List of\n replication-related error codes\n
To create a PutBucketReplication
request, you must have\n s3:PutReplicationConfiguration
permissions for the bucket.\n \n
By default, a resource owner, in this case the Amazon Web Services account that created the bucket,\n can perform this operation. The resource owner can also grant others permissions to perform\n the operation. For more information about permissions, see Specifying Permissions in a\n Policy and Managing Access Permissions to\n Your Amazon S3 Resources.
\nTo perform this operation, the user or role performing the action must have the\n iam:PassRole permission.
\nThe following operations are related to PutBucketReplication
:
\n GetBucketReplication\n
\nCreates a replication configuration or replaces an existing one. For more information,\n see Replication in the Amazon S3 User Guide.
\nSpecify the replication configuration in the request body. In the replication\n configuration, you provide the name of the destination bucket or buckets where you want\n Amazon S3 to replicate objects, the IAM role that Amazon S3 can assume to replicate objects on your\n behalf, and other relevant information. You can invoke this request for a specific\n Amazon Web Services Region by using the \n \n aws:RequestedRegion
\n condition key.
A replication configuration must include at least one rule, and can contain a maximum of\n 1,000. Each rule identifies a subset of objects to replicate by filtering the objects in\n the source bucket. To choose additional subsets of objects to replicate, add a rule for\n each subset.
\nTo specify a subset of the objects in the source bucket to apply a replication rule to,\n add the Filter element as a child of the Rule element. You can filter objects based on an\n object key prefix, one or more object tags, or both. When you add the Filter element in the\n configuration, you must also add the following elements:\n DeleteMarkerReplication
, Status
, and\n Priority
.
If you are using an earlier version of the replication configuration, Amazon S3 handles\n replication of delete markers differently. For more information, see Backward Compatibility.
\nFor information about enabling versioning on a bucket, see Using Versioning.
\nBy default, Amazon S3 doesn't replicate objects that are stored at rest using\n server-side encryption with KMS keys. To replicate Amazon Web Services KMS-encrypted objects,\n add the following: SourceSelectionCriteria
,\n SseKmsEncryptedObjects
, Status
,\n EncryptionConfiguration
, and ReplicaKmsKeyID
. For\n information about replication configuration, see Replicating\n Objects Created with SSE Using KMS keys.
For information on PutBucketReplication
errors, see List of\n replication-related error codes\n
To create a PutBucketReplication
request, you must have\n s3:PutReplicationConfiguration
permissions for the bucket.\n \n
By default, a resource owner, in this case the Amazon Web Services account that created the\n bucket, can perform this operation. The resource owner can also grant others\n permissions to perform the operation. For more information about permissions, see\n Specifying Permissions in\n a Policy and Managing Access\n Permissions to Your Amazon S3 Resources.
\nTo perform this operation, the user or role performing the action must have\n the iam:PassRole\n permission.
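As a rough sketch of the request body described above (the bucket names, rule ID, and IAM role ARN are placeholders, not values from this model), a minimal PutBucketReplication call with the v3 client could look like this:

import { S3Client, PutBucketReplicationCommand } from "@aws-sdk/client-s3";

const client = new S3Client({ region: "us-east-1" });

// One rule that replicates objects under the "logs/" prefix to a destination bucket.
// Versioning must already be enabled on both buckets, and the role must be assumable by S3.
await client.send(
  new PutBucketReplicationCommand({
    Bucket: "example-source-bucket", // placeholder
    ReplicationConfiguration: {
      Role: "arn:aws:iam::123456789012:role/example-replication-role", // placeholder
      Rules: [
        {
          ID: "replicate-logs", // placeholder
          Status: "Enabled",
          Priority: 1,
          Filter: { Prefix: "logs/" },
          DeleteMarkerReplication: { Status: "Disabled" },
          Destination: { Bucket: "arn:aws:s3:::example-destination-bucket" }, // placeholder
        },
      ],
    },
  })
);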
\nThe following operations are related to PutBucketReplication
:
\n GetBucketReplication\n
\nSets the tags for a bucket.
\nUse tags to organize your Amazon Web Services bill to reflect your own cost structure. To do this,\n sign up to get your Amazon Web Services account bill with tag key values included. Then, to see the cost\n of combined resources, organize your billing information according to resources with the\n same tag key values. For example, you can tag several resources with a specific application\n name, and then organize your billing information to see the total cost of that application\n across several services. For more information, see Cost Allocation and\n Tagging and Using Cost Allocation in Amazon S3 Bucket\n Tags.
\nWhen this operation sets the tags for a bucket, it will overwrite any current tags\n the bucket already has. You cannot use this operation to add tags to an existing list of\n tags.
\nTo use this operation, you must have permissions to perform the\n s3:PutBucketTagging
action. The bucket owner has this permission by default\n and can grant this permission to others. For more information about permissions, see Permissions Related to Bucket Subresource Operations and Managing\n Access Permissions to Your Amazon S3 Resources.
\n PutBucketTagging
has the following special errors:
Error code: InvalidTagError
\n
Description: The tag provided was not a valid tag. This error can occur if\n the tag did not pass input validation. For information about tag restrictions,\n see User-Defined Tag Restrictions and Amazon Web Services-Generated Cost Allocation Tag Restrictions.
\nError code: MalformedXMLError
\n
Description: The XML provided does not match the schema.
\nError code: OperationAbortedError
\n
Description: A conflicting conditional action is currently in progress\n against this resource. Please try again.
\nError code: InternalError
\n
Description: The service was unable to apply the provided tag to the\n bucket.
\nThe following operations are related to PutBucketTagging
:
\n GetBucketTagging\n
\n\n DeleteBucketTagging\n
\nSets the tags for a bucket.
\nUse tags to organize your Amazon Web Services bill to reflect your own cost structure. To do this,\n sign up to get your Amazon Web Services account bill with tag key values included. Then, to see the cost\n of combined resources, organize your billing information according to resources with the\n same tag key values. For example, you can tag several resources with a specific application\n name, and then organize your billing information to see the total cost of that application\n across several services. For more information, see Cost Allocation and\n Tagging and Using Cost Allocation in Amazon S3\n Bucket Tags.
\nWhen this operation sets the tags for a bucket, it will overwrite any current tags\n the bucket already has. You cannot use this operation to add tags to an existing list of\n tags.
\nTo use this operation, you must have permissions to perform the\n s3:PutBucketTagging
action. The bucket owner has this permission by default\n and can grant this permission to others. For more information about permissions, see Permissions Related to Bucket Subresource Operations and Managing\n Access Permissions to Your Amazon S3 Resources.
\n PutBucketTagging
has the following special errors. For more Amazon S3 errors,\n see Error\n Responses.
\n InvalidTag
- The tag provided was not a valid tag. This error\n can occur if the tag did not pass input validation. For more information, see Using\n Cost Allocation in Amazon S3 Bucket Tags.
\n MalformedXML
- The XML provided does not match the\n schema.
\n OperationAborted
- A conflicting conditional action is\n currently in progress against this resource. Please try again.
\n InternalError
- The service was unable to apply the provided\n tag to the bucket.
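A hedged sketch of the call shape (bucket name and tag values are placeholders); because this operation replaces the whole tag set, include every tag you want to keep:

import { S3Client, PutBucketTaggingCommand } from "@aws-sdk/client-s3";

const client = new S3Client({ region: "us-east-1" });

// Overwrites any existing bucket tags with exactly this tag set.
await client.send(
  new PutBucketTaggingCommand({
    Bucket: "example-bucket", // placeholder
    Tagging: {
      TagSet: [
        { Key: "project", Value: "billing-report" }, // placeholder tags
        { Key: "team", Value: "analytics" },
      ],
    },
  })
);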
The following operations are related to PutBucketTagging
:
\n GetBucketTagging\n
\n\n DeleteBucketTagging\n
\nSets the versioning state of an existing bucket.
\nYou can set the versioning state with one of the following values:
\n\n Enabled—Enables versioning for the objects in the\n bucket. All objects added to the bucket receive a unique version ID.
\n\n Suspended—Disables versioning for the objects in the\n bucket. All objects added to the bucket receive the version ID null.
\nIf the versioning state has never been set on a bucket, it has no versioning state; a\n GetBucketVersioning request does not return a versioning state value.
\nIn order to enable MFA Delete, you must be the bucket owner. If you are the bucket owner\n and want to enable MFA Delete in the bucket versioning configuration, you must include the\n x-amz-mfa request
header and the Status
and the\n MfaDelete
request elements in a request to set the versioning state of the\n bucket.
If you have an object expiration lifecycle configuration in your non-versioned bucket and\n you want to maintain the same permanent delete behavior when you enable versioning, you\n must add a noncurrent expiration policy. The noncurrent expiration lifecycle configuration will\n manage the deletes of the noncurrent object versions in the version-enabled bucket. (A\n version-enabled bucket maintains one current and zero or more noncurrent object\n versions.) For more information, see Lifecycle and Versioning.
\nThe following operations are related to PutBucketVersioning
:
\n CreateBucket\n
\n\n DeleteBucket\n
\n\n GetBucketVersioning\n
\nSets the versioning state of an existing bucket.
\nYou can set the versioning state with one of the following values:
\n\n Enabled—Enables versioning for the objects in the\n bucket. All objects added to the bucket receive a unique version ID.
\n\n Suspended—Disables versioning for the objects in the\n bucket. All objects added to the bucket receive the version ID null.
\nIf the versioning state has never been set on a bucket, it has no versioning state; a\n GetBucketVersioning request does not return a versioning state value.
\nIn order to enable MFA Delete, you must be the bucket owner. If you are the bucket owner\n and want to enable MFA Delete in the bucket versioning configuration, you must include the\n x-amz-mfa request
header and the Status
and the\n MfaDelete
request elements in a request to set the versioning state of the\n bucket.
If you have an object expiration lifecycle configuration in your non-versioned bucket\n and you want to maintain the same permanent delete behavior when you enable versioning,\n you must add a noncurrent expiration policy. The noncurrent expiration lifecycle\n configuration will manage the deletes of the noncurrent object versions in the\n version-enabled bucket. (A version-enabled bucket maintains one current and zero or more\n noncurrent object versions.) For more information, see Lifecycle and Versioning.
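As a minimal sketch (the bucket name is a placeholder; the MFA and MfaDelete fields are omitted here and are only needed when enabling MFA Delete):

import { S3Client, PutBucketVersioningCommand } from "@aws-sdk/client-s3";

const client = new S3Client({ region: "us-east-1" });

// Sets the bucket's versioning state to Enabled; use Status: "Suspended" to suspend it.
await client.send(
  new PutBucketVersioningCommand({
    Bucket: "example-bucket", // placeholder
    VersioningConfiguration: { Status: "Enabled" },
  })
);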
\nThe following operations are related to PutBucketVersioning
:
\n CreateBucket\n
\n\n DeleteBucket\n
\n\n GetBucketVersioning\n
\nSets the configuration of the website that is specified in the website
\n subresource. To configure a bucket as a website, you can add this subresource on the bucket\n with website configuration information such as the file name of the index document and any\n redirect rules. For more information, see Hosting Websites on Amazon S3.
This PUT action requires the S3:PutBucketWebsite
permission. By default,\n only the bucket owner can configure the website attached to a bucket; however, bucket\n owners can allow other users to set the website configuration by writing a bucket policy\n that grants them the S3:PutBucketWebsite
permission.
To redirect all website requests sent to the bucket's website endpoint, you add a\n website configuration with the following elements. Because all requests are sent to another\n website, you don't need to provide an index document name for the bucket.
\n\n WebsiteConfiguration
\n
\n RedirectAllRequestsTo
\n
\n HostName
\n
\n Protocol
\n
If you want granular control over redirects, you can use the following elements to add\n routing rules that describe conditions for redirecting requests and information about the\n redirect destination. In this case, the website configuration must provide an index\n document for the bucket, because some requests might not be redirected.
\n\n WebsiteConfiguration
\n
\n IndexDocument
\n
\n Suffix
\n
\n ErrorDocument
\n
\n Key
\n
\n RoutingRules
\n
\n RoutingRule
\n
\n Condition
\n
\n HttpErrorCodeReturnedEquals
\n
\n KeyPrefixEquals
\n
\n Redirect
\n
\n Protocol
\n
\n HostName
\n
\n ReplaceKeyPrefixWith
\n
\n ReplaceKeyWith
\n
\n HttpRedirectCode
\n
Amazon S3 has a limitation of 50 routing rules per website configuration. If you require more\n than 50 routing rules, you can use object redirect. For more information, see Configuring an\n Object Redirect in the Amazon S3 User Guide.
", + "smithy.api#documentation": "Sets the configuration of the website that is specified in the website
\n subresource. To configure a bucket as a website, you can add this subresource on the bucket\n with website configuration information such as the file name of the index document and any\n redirect rules. For more information, see Hosting Websites on Amazon S3.
This PUT action requires the S3:PutBucketWebsite
permission. By default,\n only the bucket owner can configure the website attached to a bucket; however, bucket\n owners can allow other users to set the website configuration by writing a bucket policy\n that grants them the S3:PutBucketWebsite
permission.
To redirect all website requests sent to the bucket's website endpoint, you add a\n website configuration with the following elements. Because all requests are sent to another\n website, you don't need to provide an index document name for the bucket.
\n\n WebsiteConfiguration
\n
\n RedirectAllRequestsTo
\n
\n HostName
\n
\n Protocol
\n
If you want granular control over redirects, you can use the following elements to add\n routing rules that describe conditions for redirecting requests and information about the\n redirect destination. In this case, the website configuration must provide an index\n document for the bucket, because some requests might not be redirected.
\n\n WebsiteConfiguration
\n
\n IndexDocument
\n
\n Suffix
\n
\n ErrorDocument
\n
\n Key
\n
\n RoutingRules
\n
\n RoutingRule
\n
\n Condition
\n
\n HttpErrorCodeReturnedEquals
\n
\n KeyPrefixEquals
\n
\n Redirect
\n
\n Protocol
\n
\n HostName
\n
\n ReplaceKeyPrefixWith
\n
\n ReplaceKeyWith
\n
\n HttpRedirectCode
\n
Amazon S3 has a limitation of 50 routing rules per website configuration. If you require more\n than 50 routing rules, you can use object redirect. For more information, see Configuring an\n Object Redirect in the Amazon S3 User Guide.
\nThe maximum request length is limited to 128 KB.
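A hedged sketch of a website configuration with an index document, an error document, and one routing rule (the bucket name, document keys, and prefix values are placeholders):

import { S3Client, PutBucketWebsiteCommand } from "@aws-sdk/client-s3";

const client = new S3Client({ region: "us-east-1" });

// Requests under "docs/" are redirected to "documents/"; everything else is served
// using the index and error documents.
await client.send(
  new PutBucketWebsiteCommand({
    Bucket: "example-bucket", // placeholder
    WebsiteConfiguration: {
      IndexDocument: { Suffix: "index.html" },
      ErrorDocument: { Key: "error.html" },
      RoutingRules: [
        {
          Condition: { KeyPrefixEquals: "docs/" },
          Redirect: { ReplaceKeyPrefixWith: "documents/" },
        },
      ],
    },
  })
);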
", "smithy.api#examples": [ { "title": "Set website configuration on a bucket", @@ -28095,19 +28100,16 @@ "smithy.api#documentation": "Adds an object to a bucket. You must have WRITE permissions on a bucket to add an object\n to it.
\nAmazon S3 never adds partial objects; if you receive a success response, Amazon S3 added the\n entire object to the bucket. You cannot use PutObject
to only update a\n single piece of metadata for an existing object. You must put the entire object with\n updated metadata if you want to update some values.
Amazon S3 is a distributed system. If it receives multiple write requests for the same object\n simultaneously, it overwrites all but the last object written. To prevent objects from\n being deleted or overwritten, you can use Amazon S3 Object\n Lock.
\nTo ensure that data is not corrupted traversing the network, use the\n Content-MD5
header. When you use this header, Amazon S3 checks the object\n against the provided MD5 value and, if they do not match, returns an error. Additionally,\n you can calculate the MD5 while putting an object to Amazon S3 and compare the returned ETag to\n the calculated MD5 value.
To successfully complete the PutObject
request, you must have the\n s3:PutObject
in your IAM permissions.
To successfully change the objects acl of your PutObject
request,\n you must have the s3:PutObjectAcl
in your IAM permissions.
To successfully set the tag-set with your PutObject
request, you\n must have the s3:PutObjectTagging
in your IAM permissions.
The Content-MD5
header is required for any request to upload an\n object with a retention period configured using Amazon S3 Object Lock. For more\n information about Amazon S3 Object Lock, see Amazon S3 Object Lock\n Overview in the Amazon S3 User Guide.
You have four mutually exclusive options to protect data using server-side encryption in\n Amazon S3, depending on how you choose to manage the encryption keys. Specifically, the\n encryption key options are Amazon S3 managed keys (SSE-S3), Amazon Web Services KMS keys (SSE-KMS or\n DSSE-KMS), and customer-provided keys (SSE-C). Amazon S3 encrypts data with server-side\n encryption by using Amazon S3 managed keys (SSE-S3) by default. You can optionally tell Amazon S3 to\n encrypt data at rest by using server-side encryption with other key options. For more\n information, see Using Server-Side\n Encryption.
\nWhen adding a new object, you can use headers to grant ACL-based permissions to\n individual Amazon Web Services accounts or to predefined groups defined by Amazon S3. These permissions are\n then added to the ACL on the object. By default, all objects are private. Only the owner\n has full access control. For more information, see Access Control List (ACL) Overview\n and Managing\n ACLs Using the REST API.
\nIf the bucket that you're uploading objects to uses the bucket owner enforced setting\n for S3 Object Ownership, ACLs are disabled and no longer affect permissions. Buckets that\n use this setting only accept PUT requests that don't specify an ACL or PUT requests that\n specify bucket owner full control ACLs, such as the bucket-owner-full-control
\n canned ACL or an equivalent form of this ACL expressed in the XML format. PUT requests that\n contain other ACLs (for example, custom grants to certain Amazon Web Services accounts) fail and return a\n 400
error with the error code AccessControlListNotSupported
.\n For more information, see Controlling ownership of\n objects and disabling ACLs in the Amazon S3 User Guide.
If your bucket uses the bucket owner enforced setting for Object Ownership, all\n objects written to the bucket by any account will be owned by the bucket owner.
\nBy default, Amazon S3 uses the STANDARD Storage Class to store newly created objects. The\n STANDARD storage class provides high durability and high availability. Depending on\n performance needs, you can specify a different Storage Class. Amazon S3 on Outposts only uses\n the OUTPOSTS Storage Class. For more information, see Storage Classes in the\n Amazon S3 User Guide.
\nIf you enable versioning for a bucket, Amazon S3 automatically generates a unique version ID\n for the object being stored. Amazon S3 returns this ID in the response. When you enable\n versioning for a bucket, if Amazon S3 receives multiple write requests for the same object\n simultaneously, it stores all of the objects. For more information about versioning, see\n Adding Objects to\n Versioning-Enabled Buckets. For information about returning the versioning state\n of a bucket, see GetBucketVersioning.
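A minimal sketch of an upload (bucket name, key, and body are placeholders); the ETag, and on versioning-enabled buckets the VersionId, come back on the response:

import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";

const client = new S3Client({ region: "us-east-1" });

// Uploads a small text object and logs the ETag and (if versioning is enabled) the version ID.
const response = await client.send(
  new PutObjectCommand({
    Bucket: "example-bucket", // placeholder
    Key: "notes/hello.txt",   // placeholder
    Body: "Hello, S3!",
    ContentType: "text/plain",
  })
);
console.log(response.ETag, response.VersionId);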
\nFor more information about related Amazon S3 APIs, see the following:
\n\n CopyObject\n
\n\n DeleteObject\n
\nUses the acl
subresource to set the access control list (ACL) permissions\n for a new or existing object in an S3 bucket. You must have WRITE_ACP
\n permission to set the ACL of an object. For more information, see What\n permissions can I grant? in the Amazon S3 User Guide.
This action is not supported by Amazon S3 on Outposts.
\nDepending on your application needs, you can choose to set the ACL on an object using\n either the request body or the headers. For example, if you have an existing application\n that updates a bucket ACL using the request body, you can continue to use that approach.\n For more information, see Access Control List (ACL) Overview\n in the Amazon S3 User Guide.
\nIf your bucket uses the bucket owner enforced setting for S3 Object Ownership, ACLs\n are disabled and no longer affect permissions. You must use policies to grant access to\n your bucket and the objects in it. Requests to set ACLs or update ACLs fail and return\n the AccessControlListNotSupported
error code. Requests to read ACLs are\n still supported. For more information, see Controlling object\n ownership in the Amazon S3 User Guide.
You can set access permissions using one of the following methods:
\nSpecify a canned ACL with the x-amz-acl
request header. Amazon S3 supports\n a set of predefined ACLs, known as canned ACLs. Each canned ACL has a predefined set\n of grantees and permissions. Specify the canned ACL name as the value of\n x-amz-ac
l. If you use this header, you cannot use other access\n control-specific headers in your request. For more information, see Canned\n ACL.
Specify access permissions explicitly with the x-amz-grant-read
,\n x-amz-grant-read-acp
, x-amz-grant-write-acp
, and\n x-amz-grant-full-control
headers. When using these headers, you\n specify explicit access permissions and grantees (Amazon Web Services accounts or Amazon S3 groups) who\n will receive the permission. If you use these ACL-specific headers, you cannot use\n x-amz-acl
header to set a canned ACL. These parameters map to the set\n of permissions that Amazon S3 supports in an ACL. For more information, see Access Control\n List (ACL) Overview.
You specify each grantee as a type=value pair, where the type is one of the\n following:
\n\n id
– if the value specified is the canonical user ID of an\n Amazon Web Services account
\n uri
– if you are granting permissions to a predefined\n group
\n emailAddress
– if the value specified is the email address of\n an Amazon Web Services account
Using email addresses to specify a grantee is only supported in the following Amazon Web Services Regions:
\nUS East (N. Virginia)
\nUS West (N. California)
\nUS West (Oregon)
\nAsia Pacific (Singapore)
\nAsia Pacific (Sydney)
\nAsia Pacific (Tokyo)
\nEurope (Ireland)
\nSouth America (São Paulo)
\nFor a list of all the Amazon S3 supported Regions and endpoints, see Regions and Endpoints in the Amazon Web Services General Reference.
\nFor example, the following x-amz-grant-read
header grants list\n objects permission to the two Amazon Web Services accounts identified by their email\n addresses.
\n x-amz-grant-read: emailAddress=\"xyz@amazon.com\",\n emailAddress=\"abc@amazon.com\"
\n
You can use either a canned ACL or specify access permissions explicitly. You cannot do\n both.
\nYou can specify the person (grantee) to whom you're assigning access rights (using\n request elements) in the following ways:
\nBy the person's ID:
\n\n
\n
DisplayName is optional and ignored in the request.
\nBy URI:
\n\n
\n
By Email address:
\n\n
\n
The grantee is resolved to the CanonicalUser and, in a response to a GET Object\n acl request, appears as the CanonicalUser.
\nUsing email addresses to specify a grantee is only supported in the following Amazon Web Services Regions:
\nUS East (N. Virginia)
\nUS West (N. California)
\nUS West (Oregon)
\nAsia Pacific (Singapore)
\nAsia Pacific (Sydney)
\nAsia Pacific (Tokyo)
\nEurope (Ireland)
\nSouth America (São Paulo)
\nFor a list of all the Amazon S3 supported Regions and endpoints, see Regions and Endpoints in the Amazon Web Services General Reference.
\nThe ACL of an object is set at the object version level. By default, PUT sets the ACL of\n the current version of an object. To set the ACL of a different version, use the\n versionId
subresource.
The following operations are related to PutObjectAcl
:
\n CopyObject\n
\n\n GetObject\n
\nUses the acl
subresource to set the access control list (ACL) permissions\n for a new or existing object in an S3 bucket. You must have WRITE_ACP
\n permission to set the ACL of an object. For more information, see What\n permissions can I grant? in the Amazon S3 User Guide.
This action is not supported by Amazon S3 on Outposts.
\nDepending on your application needs, you can choose to set the ACL on an object using\n either the request body or the headers. For example, if you have an existing application\n that updates a bucket ACL using the request body, you can continue to use that approach.\n For more information, see Access Control List (ACL) Overview\n in the Amazon S3 User Guide.
\nIf your bucket uses the bucket owner enforced setting for S3 Object Ownership, ACLs\n are disabled and no longer affect permissions. You must use policies to grant access to\n your bucket and the objects in it. Requests to set ACLs or update ACLs fail and return\n the AccessControlListNotSupported
error code. Requests to read ACLs are\n still supported. For more information, see Controlling object\n ownership in the Amazon S3 User Guide.
You can set access permissions using one of the following methods:
\nSpecify a canned ACL with the x-amz-acl
request header. Amazon S3\n supports a set of predefined ACLs, known as canned ACLs. Each canned ACL has\n a predefined set of grantees and permissions. Specify the canned ACL name as\n the value of x-amz-acl. If you use this header, you cannot use\n other access control-specific headers in your request. For more information,\n see Canned\n ACL.
Specify access permissions explicitly with the\n x-amz-grant-read
, x-amz-grant-read-acp
,\n x-amz-grant-write-acp
, and\n x-amz-grant-full-control
headers. When using these headers,\n you specify explicit access permissions and grantees (Amazon Web Services accounts or Amazon S3\n groups) who will receive the permission. If you use these ACL-specific\n headers, you cannot use x-amz-acl
header to set a canned ACL.\n These parameters map to the set of permissions that Amazon S3 supports in an ACL.\n For more information, see Access Control List (ACL)\n Overview.
You specify each grantee as a type=value pair, where the type is one of\n the following:
\n\n id
– if the value specified is the canonical user ID\n of an Amazon Web Services account
\n uri
– if you are granting permissions to a predefined\n group
\n emailAddress
– if the value specified is the email\n address of an Amazon Web Services account
Using email addresses to specify a grantee is only supported in the following Amazon Web Services Regions:
\nUS East (N. Virginia)
\nUS West (N. California)
\nUS West (Oregon)
\nAsia Pacific (Singapore)
\nAsia Pacific (Sydney)
\nAsia Pacific (Tokyo)
\nEurope (Ireland)
\nSouth America (São Paulo)
\nFor a list of all the Amazon S3 supported Regions and endpoints, see Regions and Endpoints in the Amazon Web Services General Reference.
\nFor example, the following x-amz-grant-read
header grants\n list objects permission to the two Amazon Web Services accounts identified by their email\n addresses.
\n x-amz-grant-read: emailAddress=\"xyz@amazon.com\",\n emailAddress=\"abc@amazon.com\"
\n
You can use either a canned ACL or specify access permissions explicitly. You\n cannot do both.
\nYou can specify the person (grantee) to whom you're assigning access rights\n (using request elements) in the following ways:
\nBy the person's ID:
\n\n
\n
DisplayName is optional and ignored in the request.
\nBy URI:
\n\n
\n
By Email address:
\n\n
\n
The grantee is resolved to the CanonicalUser and, in a response to a GET\n Object acl request, appears as the CanonicalUser.
\nUsing email addresses to specify a grantee is only supported in the following Amazon Web Services Regions:
\nUS East (N. Virginia)
\nUS West (N. California)
\nUS West (Oregon)
\nAsia Pacific (Singapore)
\nAsia Pacific (Sydney)
\nAsia Pacific (Tokyo)
\nEurope (Ireland)
\nSouth America (São Paulo)
\nFor a list of all the Amazon S3 supported Regions and endpoints, see Regions and Endpoints in the Amazon Web Services General Reference.
\nThe ACL of an object is set at the object version level. By default, PUT sets\n the ACL of the current version of an object. To set the ACL of a different\n version, use the versionId
subresource.
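A minimal sketch using a canned ACL (bucket and key are placeholders); this only succeeds on buckets where ACLs are still enabled, as noted above:

import { S3Client, PutObjectAclCommand } from "@aws-sdk/client-s3";

const client = new S3Client({ region: "us-east-1" });

// Applies the public-read canned ACL to the current version of the object.
// Add VersionId to target a specific object version instead.
await client.send(
  new PutObjectAclCommand({
    Bucket: "example-bucket", // placeholder
    Key: "reports/2023.csv",  // placeholder
    ACL: "public-read",
  })
);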
The following operations are related to PutObjectAcl
:
\n CopyObject\n
\n\n GetObject\n
\nIf present, specifies the Amazon Web Services KMS Encryption Context to use for object encryption. The\n value of this header is a base64-encoded UTF-8 string holding JSON with the encryption\n context key-value pairs. This value is stored as object metadata and automatically gets passed\n on to Amazon Web Services KMS for future GetObject
or CopyObject
operations on\n this object.
If present, specifies the Amazon Web Services KMS Encryption Context to use for object encryption. The\n value of this header is a base64-encoded UTF-8 string holding JSON with the encryption\n context key-value pairs. This value is stored as object metadata and automatically gets\n passed on to Amazon Web Services KMS for future GetObject
or CopyObject
\n operations on this object.
If x-amz-server-side-encryption
has a valid value of aws:kms
\n or aws:kms:dsse
, this header specifies the ID of the Key Management Service (KMS)\n symmetric encryption customer managed key that was used for the object. If you specify\n x-amz-server-side-encryption:aws:kms
or\n x-amz-server-side-encryption:aws:kms:dsse
, but do not provide\n x-amz-server-side-encryption-aws-kms-key-id
, Amazon S3 uses the Amazon Web Services managed key\n (aws/s3
) to protect the data. If the KMS key does not exist in the same\n account that's issuing the command, you must use the full ARN and not just the ID.
If x-amz-server-side-encryption
has a valid value of aws:kms
\n or aws:kms:dsse
, this header specifies the ID (Key ID, Key ARN, or Key Alias) of the Key Management Service (KMS)\n symmetric encryption customer managed key that was used for the object. If you specify\n x-amz-server-side-encryption:aws:kms
or\n x-amz-server-side-encryption:aws:kms:dsse
, but do not provide\n x-amz-server-side-encryption-aws-kms-key-id
, Amazon S3 uses the Amazon Web Services managed key\n (aws/s3
) to protect the data. If the KMS key does not exist in the same\n account that's issuing the command, you must use the full ARN and not just the ID.
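In the v3 client these headers surface as the ServerSideEncryption, SSEKMSKeyId, and SSEKMSEncryptionContext parameters. A hedged sketch (bucket, key, KMS key ARN, and context values are placeholders):

import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";

const client = new S3Client({ region: "us-east-1" });

// SSE-KMS with a customer managed key; the encryption context is base64-encoded JSON,
// matching the header format described above.
await client.send(
  new PutObjectCommand({
    Bucket: "example-bucket",   // placeholder
    Key: "secrets/config.json", // placeholder
    Body: JSON.stringify({ example: true }),
    ServerSideEncryption: "aws:kms",
    SSEKMSKeyId: "arn:aws:kms:us-east-1:123456789012:key/11111111-2222-3333-4444-555555555555", // placeholder
    SSEKMSEncryptionContext: Buffer.from(JSON.stringify({ department: "finance" })).toString("base64"),
  })
);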
Specifies the Amazon Web Services KMS Encryption Context to use for object encryption. The value of\n this header is a base64-encoded UTF-8 string holding JSON with the encryption context\n key-value pairs. This value is stored as object metadata and automatically gets passed on to\n Amazon Web Services KMS for future GetObject
or CopyObject
operations on this\n object.
Specifies the Amazon Web Services KMS Encryption Context to use for object encryption. The value of\n this header is a base64-encoded UTF-8 string holding JSON with the encryption context\n key-value pairs. This value is stored as object metadata and automatically gets passed on\n to Amazon Web Services KMS for future GetObject
or CopyObject
operations on\n this object.
Sets the supplied tag-set to an object that already exists in a bucket.
\nA tag is a key-value pair. You can associate tags with an object by sending a PUT\n request against the tagging subresource that is associated with the object. You can\n retrieve tags by sending a GET request. For more information, see GetObjectTagging.
\nFor tagging-related restrictions related to characters and encodings, see Tag\n Restrictions. Note that Amazon S3 limits the maximum number of tags to 10 tags per\n object.
\nTo use this operation, you must have permission to perform the\n s3:PutObjectTagging
action. By default, the bucket owner has this\n permission and can grant this permission to others.
To put tags of any other version, use the versionId
query parameter. You\n also need permission for the s3:PutObjectVersionTagging
action.
For information about the Amazon S3 object tagging feature, see Object Tagging.
\n\n PutObjectTagging
has the following special errors:
\n Code: InvalidTagError \n
\n\n Cause: The tag provided was not a valid tag. This error can occur\n if the tag did not pass input validation. For more information, see Object\n Tagging.\n
\n\n Code: MalformedXMLError \n
\n\n Cause: The XML provided does not match the schema.\n
\n\n Code: OperationAbortedError \n
\n\n Cause: A conflicting conditional action is currently in progress\n against this resource. Please try again.\n
\n\n Code: InternalError\n
\n\n Cause: The service was unable to apply the provided tag to the\n object.\n
\nThe following operations are related to PutObjectTagging
:
\n GetObjectTagging\n
\n\n DeleteObjectTagging\n
\nSets the supplied tag-set to an object that already exists in a bucket. A tag is a\n key-value pair. For more information, see Object Tagging.
\nYou can associate tags with an object by sending a PUT request against the tagging\n subresource that is associated with the object. You can retrieve tags by sending a GET\n request. For more information, see GetObjectTagging.
\nFor tagging-related restrictions related to characters and encodings, see Tag\n Restrictions. Note that Amazon S3 limits the maximum number of tags to 10 tags per\n object.
\nTo use this operation, you must have permission to perform the\n s3:PutObjectTagging
action. By default, the bucket owner has this\n permission and can grant this permission to others.
To put tags of any other version, use the versionId
query parameter. You\n also need permission for the s3:PutObjectVersionTagging
action.
\n PutObjectTagging
has the following special errors. For more Amazon S3 errors,\n see Error\n Responses.
\n InvalidTag
- The tag provided was not a valid tag. This error\n can occur if the tag did not pass input validation. For more information, see Object\n Tagging.
\n MalformedXML
- The XML provided does not match the\n schema.
\n OperationAborted
- A conflicting conditional action is\n currently in progress against this resource. Please try again.
\n InternalError
- The service was unable to apply the provided\n tag to the object.
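A minimal sketch (bucket, key, and tag values are placeholders); pass VersionId to tag a specific version instead, which additionally requires s3:PutObjectVersionTagging:

import { S3Client, PutObjectTaggingCommand } from "@aws-sdk/client-s3";

const client = new S3Client({ region: "us-east-1" });

// Replaces the tag set of the current version of the object.
await client.send(
  new PutObjectTaggingCommand({
    Bucket: "example-bucket", // placeholder
    Key: "images/logo.png",   // placeholder
    Tagging: { TagSet: [{ Key: "classification", Value: "public" }] },
  })
);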
The following operations are related to PutObjectTagging
:
\n GetObjectTagging\n
\n\n DeleteObjectTagging\n
\nCreates or modifies the PublicAccessBlock
configuration for an Amazon S3 bucket.\n To use this operation, you must have the s3:PutBucketPublicAccessBlock
\n permission. For more information about Amazon S3 permissions, see Specifying Permissions in a\n Policy.
When Amazon S3 evaluates the PublicAccessBlock
configuration for a bucket or\n an object, it checks the PublicAccessBlock
configuration for both the\n bucket (or the bucket that contains the object) and the bucket owner's account. If the\n PublicAccessBlock
configurations are different between the bucket and\n the account, Amazon S3 uses the most restrictive combination of the bucket-level and\n account-level settings.
For more information about when Amazon S3 considers a bucket or an object public, see The Meaning of \"Public\".
\nThe following operations are related to PutPublicAccessBlock
:
\n GetPublicAccessBlock\n
\nCreates or modifies the PublicAccessBlock
configuration for an Amazon S3 bucket.\n To use this operation, you must have the s3:PutBucketPublicAccessBlock
\n permission. For more information about Amazon S3 permissions, see Specifying Permissions in a\n Policy.
When Amazon S3 evaluates the PublicAccessBlock
configuration for a bucket or\n an object, it checks the PublicAccessBlock
configuration for both the\n bucket (or the bucket that contains the object) and the bucket owner's account. If the\n PublicAccessBlock
configurations are different between the bucket and\n the account, S3 uses the most restrictive combination of the bucket-level and\n account-level settings.
For more information about when Amazon S3 considers a bucket or an object public, see The Meaning of \"Public\".
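A minimal sketch that turns on all four block-public-access settings for one bucket (the bucket name is a placeholder):

import { S3Client, PutPublicAccessBlockCommand } from "@aws-sdk/client-s3";

const client = new S3Client({ region: "us-east-1" });

// The effective setting is the most restrictive combination of this bucket-level
// configuration and the account-level configuration.
await client.send(
  new PutPublicAccessBlockCommand({
    Bucket: "example-bucket", // placeholder
    PublicAccessBlockConfiguration: {
      BlockPublicAcls: true,
      IgnorePublicAcls: true,
      BlockPublicPolicy: true,
      RestrictPublicBuckets: true,
    },
  })
);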
\nThe following operations are related to PutPublicAccessBlock
:
\n GetPublicAccessBlock\n
\nConfirms that the requester knows that they will be charged for the request. Bucket\n owners need not specify this parameter in their requests. For information about downloading\n objects from Requester Pays buckets, see Downloading Objects in\n Requester Pays Buckets in the Amazon S3 User Guide.
" + "smithy.api#documentation": "Confirms that the requester knows that they will be charged for the request. Bucket\n owners need not specify this parameter in their requests. If either the source or\n destination Amazon S3 bucket has Requester Pays enabled, the requester will pay for\n corresponding charges to copy the object. For information about downloading objects from\n Requester Pays buckets, see Downloading Objects in\n Requester Pays Buckets in the Amazon S3 User Guide.
" } }, "com.amazonaws.s3#RequestPaymentConfiguration": { @@ -29712,7 +29714,7 @@ "aws.protocols#httpChecksum": { "requestAlgorithmMember": "ChecksumAlgorithm" }, - "smithy.api#documentation": "Restores an archived copy of an object back into Amazon S3
\nThis action is not supported by Amazon S3 on Outposts.
\nThis action performs the following types of requests:
\n\n select
- Perform a select query on an archived object
\n restore an archive
- Restore an archived object
For more information about the S3
structure in the request body, see the\n following:
\n PutObject\n
\n\n Managing Access with ACLs in the\n Amazon S3 User Guide\n
\n\n Protecting Data Using\n Server-Side Encryption in the\n Amazon S3 User Guide\n
\nDefine the SQL expression for the SELECT
type of restoration for your\n query in the request body's SelectParameters
structure. You can use\n expressions like the following examples.
The following expression returns all records from the specified\n object.
\n\n SELECT * FROM Object
\n
Assuming that you are not using any headers for data stored in the object,\n you can specify columns with positional headers.
\n\n SELECT s._1, s._2 FROM Object s WHERE s._3 > 100
\n
If you have headers and you set the fileHeaderInfo
in the\n CSV
structure in the request body to USE
, you can\n specify headers in the query. (If you set the fileHeaderInfo
field\n to IGNORE
, the first row is skipped for the query.) You cannot mix\n ordinal positions with header column names.
\n SELECT s.Id, s.FirstName, s.SSN FROM S3Object s
\n
When making a select request, you can also do the following:
\nTo expedite your queries, specify the Expedited
tier. For more\n information about tiers, see \"Restoring Archives,\" later in this topic.
Specify details about the data serialization format of both the input object that\n is being queried and the serialization of the CSV-encoded query results.
\nThe following are additional important facts about the select feature:
\nThe output results are new Amazon S3 objects. Unlike archive retrievals, they are\n stored until explicitly deleted-manually or through a lifecycle configuration.
\nYou can issue more than one select request on the same Amazon S3 object. Amazon S3 doesn't\n duplicate requests, so avoid issuing duplicate requests.
\n Amazon S3 accepts a select request even if the object has already been restored. A\n select request doesn’t return error response 409
.
To use this operation, you must have permissions to perform the\n s3:RestoreObject
action. The bucket owner has this permission by default\n and can grant this permission to others. For more information about permissions, see Permissions Related to Bucket Subresource Operations and Managing\n Access Permissions to Your Amazon S3 Resources in the\n Amazon S3 User Guide.
Objects that you archive to the S3 Glacier Flexible Retrieval Flexible Retrieval or\n S3 Glacier Deep Archive storage class, and S3 Intelligent-Tiering Archive or\n S3 Intelligent-Tiering Deep Archive tiers, are not accessible in real time. For objects in the\n S3 Glacier Flexible Retrieval Flexible Retrieval or S3 Glacier Deep Archive storage\n classes, you must first initiate a restore request, and then wait until a temporary copy of\n the object is available. If you want a permanent copy of the object, create a copy of it in\n the Amazon S3 Standard storage class in your S3 bucket. To access an archived object, you must\n restore the object for the duration (number of days) that you specify. For objects in the\n Archive Access or Deep Archive Access tiers of S3 Intelligent-Tiering, you must first\n initiate a restore request, and then wait until the object is moved into the Frequent\n Access tier.
\nTo restore a specific object version, you can provide a version ID. If you don't provide\n a version ID, Amazon S3 restores the current version.
\nWhen restoring an archived object, you can specify one of the following data access tier\n options in the Tier
element of the request body:
\n Expedited
- Expedited retrievals allow you to quickly access your\n data stored in the S3 Glacier Flexible Retrieval Flexible Retrieval storage class or\n S3 Intelligent-Tiering Archive tier when occasional urgent requests for restoring archives\n are required. For all but the largest archived objects (250 MB+), data accessed using\n Expedited retrievals is typically made available within 1–5 minutes. Provisioned\n capacity ensures that retrieval capacity for Expedited retrievals is available when\n you need it. Expedited retrievals and provisioned capacity are not available for\n objects stored in the S3 Glacier Deep Archive storage class or\n S3 Intelligent-Tiering Deep Archive tier.
\n Standard
- Standard retrievals allow you to access any of your\n archived objects within several hours. This is the default option for retrieval\n requests that do not specify the retrieval option. Standard retrievals typically\n finish within 3–5 hours for objects stored in the S3 Glacier Flexible Retrieval Flexible\n Retrieval storage class or S3 Intelligent-Tiering Archive tier. They typically finish within\n 12 hours for objects stored in the S3 Glacier Deep Archive storage class or\n S3 Intelligent-Tiering Deep Archive tier. Standard retrievals are free for objects stored in\n S3 Intelligent-Tiering.
\n Bulk
- Bulk retrievals free for objects stored in the S3 Glacier\n Flexible Retrieval and S3 Intelligent-Tiering storage classes, enabling you to\n retrieve large amounts, even petabytes, of data at no cost. Bulk retrievals typically\n finish within 5–12 hours for objects stored in the S3 Glacier Flexible Retrieval\n Flexible Retrieval storage class or S3 Intelligent-Tiering Archive tier. Bulk retrievals are\n also the lowest-cost retrieval option when restoring objects from\n S3 Glacier Deep Archive. They typically finish within 48 hours for objects\n stored in the S3 Glacier Deep Archive storage class or S3 Intelligent-Tiering Deep Archive\n tier.
For more information about archive retrieval options and provisioned capacity for\n Expedited
data access, see Restoring Archived Objects in\n the Amazon S3 User Guide.
You can use Amazon S3 restore speed upgrade to change the restore speed to a faster speed\n while it is in progress. For more information, see Upgrading the speed of an in-progress restore in the\n Amazon S3 User Guide.
\nTo get the status of object restoration, you can send a HEAD
request.\n Operations return the x-amz-restore
header, which provides information about\n the restoration status, in the response. You can use Amazon S3 event notifications to notify you\n when a restore is initiated or completed. For more information, see Configuring Amazon S3\n Event Notifications in the Amazon S3 User Guide.
After restoring an archived object, you can update the restoration period by reissuing\n the request with a new period. Amazon S3 updates the restoration period relative to the current\n time and charges only for the request-there are no data transfer charges. You cannot\n update the restoration period when Amazon S3 is actively processing your current restore request\n for the object.
\nIf your bucket has a lifecycle configuration with a rule that includes an expiration\n action, the object expiration overrides the life span that you specify in a restore\n request. For example, if you restore an object copy for 10 days, but the object is\n scheduled to expire in 3 days, Amazon S3 deletes the object in 3 days. For more information\n about lifecycle configuration, see PutBucketLifecycleConfiguration and Object Lifecycle Management\n in Amazon S3 User Guide.
\nA successful action returns either the 200 OK
or 202 Accepted
\n status code.
If the object is not previously restored, then Amazon S3 returns 202\n Accepted
in the response.
If the object is previously restored, Amazon S3 returns 200 OK
in the\n response.
Special errors:
\n\n Code: RestoreAlreadyInProgress\n
\n\n Cause: Object restore is already in progress. (This error does not\n apply to SELECT type requests.)\n
\n\n HTTP Status Code: 409 Conflict\n
\n\n SOAP Fault Code Prefix: Client\n
\n\n Code: GlacierExpeditedRetrievalNotAvailable\n
\n\n Cause: expedited retrievals are currently not available. Try again\n later. (Returned if there is insufficient capacity to process the Expedited\n request. This error applies only to Expedited retrievals and not to\n S3 Standard or Bulk retrievals.)\n
\n\n HTTP Status Code: 503\n
\n\n SOAP Fault Code Prefix: N/A\n
\nThe following operations are related to RestoreObject
:
Restores an archived copy of an object back into Amazon S3
\nThis action is not supported by Amazon S3 on Outposts.
\nThis action performs the following types of requests:
\n\n select
- Perform a select query on an archived object
\n restore an archive
- Restore an archived object
For more information about the S3
structure in the request body, see the\n following:
\n PutObject\n
\n\n Managing Access with ACLs in the\n Amazon S3 User Guide\n
\n\n Protecting Data Using Server-Side Encryption in the\n Amazon S3 User Guide\n
\nDefine the SQL expression for the SELECT
type of restoration for your query\n in the request body's SelectParameters
structure. You can use expressions like\n the following examples.
The following expression returns all records from the specified object.
\n\n SELECT * FROM Object
\n
Assuming that you are not using any headers for data stored in the object, you can\n specify columns with positional headers.
\n\n SELECT s._1, s._2 FROM Object s WHERE s._3 > 100
\n
If you have headers and you set the fileHeaderInfo
in the\n CSV
structure in the request body to USE
, you can\n specify headers in the query. (If you set the fileHeaderInfo
field to\n IGNORE
, the first row is skipped for the query.) You cannot mix\n ordinal positions with header column names.
\n SELECT s.Id, s.FirstName, s.SSN FROM S3Object s
\n
When making a select request, you can also do the following:
\nTo expedite your queries, specify the Expedited
tier. For more\n information about tiers, see \"Restoring Archives,\" later in this topic.
Specify details about the data serialization format of both the input object that\n is being queried and the serialization of the CSV-encoded query results.
\nThe following are additional important facts about the select feature:
\nThe output results are new Amazon S3 objects. Unlike archive retrievals, they are\n stored until explicitly deleted-manually or through a lifecycle configuration.
\nYou can issue more than one select request on the same Amazon S3 object. Amazon S3 doesn't\n duplicate requests, so avoid issuing duplicate requests.
\n Amazon S3 accepts a select request even if the object has already been restored. A\n select request doesn’t return error response 409
.
To use this operation, you must have permissions to perform the\n s3:RestoreObject
action. The bucket owner has this permission by\n default and can grant this permission to others. For more information about\n permissions, see Permissions Related to Bucket Subresource Operations and Managing Access Permissions to Your Amazon S3 Resources in the\n Amazon S3 User Guide.
Objects that you archive to the S3 Glacier Flexible Retrieval Flexible Retrieval\n or S3 Glacier Deep Archive storage class, and S3 Intelligent-Tiering Archive or\n S3 Intelligent-Tiering Deep Archive tiers, are not accessible in real time. For objects in the\n S3 Glacier Flexible Retrieval Flexible Retrieval or S3 Glacier Deep Archive\n storage classes, you must first initiate a restore request, and then wait until a\n temporary copy of the object is available. If you want a permanent copy of the\n object, create a copy of it in the Amazon S3 Standard storage class in your S3 bucket.\n To access an archived object, you must restore the object for the duration (number\n of days) that you specify. For objects in the Archive Access or Deep Archive\n Access tiers of S3 Intelligent-Tiering, you must first initiate a restore request,\n and then wait until the object is moved into the Frequent Access tier.
\nTo restore a specific object version, you can provide a version ID. If you\n don't provide a version ID, Amazon S3 restores the current version.
\nWhen restoring an archived object, you can specify one of the following data\n access tier options in the Tier
element of the request body:
\n Expedited
- Expedited retrievals allow you to quickly access\n your data stored in the S3 Glacier Flexible Retrieval Flexible Retrieval\n storage class or S3 Intelligent-Tiering Archive tier when occasional urgent requests\n for restoring archives are required. For all but the largest archived\n objects (250 MB+), data accessed using Expedited retrievals is typically\n made available within 1–5 minutes. Provisioned capacity ensures that\n retrieval capacity for Expedited retrievals is available when you need it.\n Expedited retrievals and provisioned capacity are not available for objects\n stored in the S3 Glacier Deep Archive storage class or\n S3 Intelligent-Tiering Deep Archive tier.
\n Standard
- Standard retrievals allow you to access any of\n your archived objects within several hours. This is the default option for\n retrieval requests that do not specify the retrieval option. Standard\n retrievals typically finish within 3–5 hours for objects stored in the\n S3 Glacier Flexible Retrieval Flexible Retrieval storage class or\n S3 Intelligent-Tiering Archive tier. They typically finish within 12 hours for\n objects stored in the S3 Glacier Deep Archive storage class or\n S3 Intelligent-Tiering Deep Archive tier. Standard retrievals are free for objects stored\n in S3 Intelligent-Tiering.
\n Bulk
- Bulk retrievals free for objects stored in the\n S3 Glacier Flexible Retrieval and S3 Intelligent-Tiering storage classes,\n enabling you to retrieve large amounts, even petabytes, of data at no cost.\n Bulk retrievals typically finish within 5–12 hours for objects stored in the\n S3 Glacier Flexible Retrieval Flexible Retrieval storage class or\n S3 Intelligent-Tiering Archive tier. Bulk retrievals are also the lowest-cost\n retrieval option when restoring objects from\n S3 Glacier Deep Archive. They typically finish within 48 hours for\n objects stored in the S3 Glacier Deep Archive storage class or\n S3 Intelligent-Tiering Deep Archive tier.
For more information about archive retrieval options and provisioned capacity\n for Expedited
data access, see Restoring Archived\n Objects in the Amazon S3 User Guide.
You can use Amazon S3 restore speed upgrade to change the restore speed to a faster\n speed while it is in progress. For more information, see Upgrading the speed of an in-progress restore in the\n Amazon S3 User Guide.
\nTo get the status of object restoration, you can send a HEAD
\n request. Operations return the x-amz-restore
header, which provides\n information about the restoration status, in the response. You can use Amazon S3 event\n notifications to notify you when a restore is initiated or completed. For more\n information, see Configuring Amazon S3 Event\n Notifications in the Amazon S3 User Guide.
After restoring an archived object, you can update the restoration period by\n reissuing the request with a new period. Amazon S3 updates the restoration period\n relative to the current time and charges only for the request-there are no\n data transfer charges. You cannot update the restoration period when Amazon S3 is\n actively processing your current restore request for the object.
\nIf your bucket has a lifecycle configuration with a rule that includes an\n expiration action, the object expiration overrides the life span that you specify\n in a restore request. For example, if you restore an object copy for 10 days, but\n the object is scheduled to expire in 3 days, Amazon S3 deletes the object in 3 days.\n For more information about lifecycle configuration, see PutBucketLifecycleConfiguration and Object Lifecycle\n Management in Amazon S3 User Guide.
\nA successful action returns either the 200 OK
or 202\n Accepted
status code.
If the object is not previously restored, then Amazon S3 returns 202\n Accepted
in the response.
If the object is previously restored, Amazon S3 returns 200 OK
in\n the response.
Special errors:
\n\n Code: RestoreAlreadyInProgress\n
\n\n Cause: Object restore is already in progress. (This error\n does not apply to SELECT type requests.)\n
\n\n HTTP Status Code: 409 Conflict\n
\n\n SOAP Fault Code Prefix: Client\n
\n\n Code: GlacierExpeditedRetrievalNotAvailable\n
\n\n Cause: expedited retrievals are currently not available.\n Try again later. (Returned if there is insufficient capacity to\n process the Expedited request. This error applies only to Expedited\n retrievals and not to S3 Standard or Bulk retrievals.)\n
\n\n HTTP Status Code: 503\n
\n\n SOAP Fault Code Prefix: N/A\n
\nThe following operations are related to RestoreObject
:
Specifies whether the object is currently being restored. If the object restoration is\n in progress, the header returns the value TRUE
. For example:
\n x-amz-optional-object-attributes: IsRestoreInProgress=\"true\"
\n
If the object restoration has completed, the header returns the value FALSE
. For example:
\n x-amz-optional-object-attributes: IsRestoreInProgress=\"false\", RestoreExpiryDate=\"2012-12-21T00:00:00.000Z\"
\n
If the object hasn't been restored, there is no header response.
" + "smithy.api#documentation": "Specifies whether the object is currently being restored. If the object restoration is\n in progress, the header returns the value TRUE
. For example:
\n x-amz-optional-object-attributes: IsRestoreInProgress=\"true\"
\n
If the object restoration has completed, the header returns the value\n FALSE
. For example:
\n x-amz-optional-object-attributes: IsRestoreInProgress=\"false\",\n RestoreExpiryDate=\"2012-12-21T00:00:00.000Z\"
\n
If the object hasn't been restored, there is no header response.
" } }, "RestoreExpiryDate": { "target": "com.amazonaws.s3#RestoreExpiryDate", "traits": { - "smithy.api#documentation": "Indicates when the restored copy will expire. This value is populated only if the object\n has already been restored. For example:
\n\n x-amz-optional-object-attributes: IsRestoreInProgress=\"false\", RestoreExpiryDate=\"2012-12-21T00:00:00.000Z\"
\n
Indicates when the restored copy will expire. This value is populated only if the object\n has already been restored. For example:
\n\n x-amz-optional-object-attributes: IsRestoreInProgress=\"false\",\n RestoreExpiryDate=\"2012-12-21T00:00:00.000Z\"
\n
Specifies the restoration status of an object. Objects in certain storage classes must be restored\n before they can be retrieved. For more information about these storage classes and how to work with\n archived objects, see \n Working with archived objects in the Amazon S3 User Guide.
" + "smithy.api#documentation": "Specifies the restoration status of an object. Objects in certain storage classes must\n be restored before they can be retrieved. For more information about these storage classes\n and how to work with archived objects, see Working with archived\n objects in the Amazon S3 User Guide.
" } }, "com.amazonaws.s3#Role": { @@ -30087,7 +30089,7 @@ "target": "com.amazonaws.s3#SelectObjectContentOutput" }, "traits": { - "smithy.api#documentation": "This action filters the contents of an Amazon S3 object based on a simple structured query\n language (SQL) statement. In the request, along with the SQL expression, you must also\n specify a data serialization format (JSON, CSV, or Apache Parquet) of the object. Amazon S3 uses\n this format to parse object data into records, and returns only records that match the\n specified SQL expression. You must also specify the data serialization format for the\n response.
\nThis action is not supported by Amazon S3 on Outposts.
\nFor more information about Amazon S3 Select, see Selecting Content from\n Objects and SELECT\n Command in the Amazon S3 User Guide.
\n \nYou must have s3:GetObject
permission for this operation. Amazon S3 Select does\n not support anonymous access. For more information about permissions, see Specifying\n Permissions in a Policy in the Amazon S3 User Guide.
You can use Amazon S3 Select to query objects that have the following format\n properties:
\n\n CSV, JSON, and Parquet - Objects must be in CSV, JSON, or\n Parquet format.
\n\n UTF-8 - UTF-8 is the only encoding type Amazon S3 Select\n supports.
\n\n GZIP or BZIP2 - CSV and JSON files can be compressed using\n GZIP or BZIP2. GZIP and BZIP2 are the only compression formats that Amazon S3 Select\n supports for CSV and JSON files. Amazon S3 Select supports columnar compression for\n Parquet using GZIP or Snappy. Amazon S3 Select does not support whole-object compression\n for Parquet objects.
\n\n Server-side encryption - Amazon S3 Select supports querying\n objects that are protected with server-side encryption.
\nFor objects that are encrypted with customer-provided encryption keys (SSE-C), you\n must use HTTPS, and you must use the headers that are documented in the GetObject. For more information about SSE-C, see Server-Side\n Encryption (Using Customer-Provided Encryption Keys) in the\n Amazon S3 User Guide.
\nFor objects that are encrypted with Amazon S3 managed keys (SSE-S3) and Amazon Web Services KMS keys\n (SSE-KMS), server-side encryption is handled transparently, so you don't need to\n specify anything. For more information about server-side encryption, including SSE-S3\n and SSE-KMS, see Protecting Data Using\n Server-Side Encryption in the Amazon S3 User Guide.
\nGiven the response size is unknown, Amazon S3 Select streams the response as a series of\n messages and includes a Transfer-Encoding
header with chunked
as\n its value in the response. For more information, see Appendix: SelectObjectContent\n Response.
The SelectObjectContent
action does not support the following\n GetObject
functionality. For more information, see GetObject.
\n Range
: Although you can specify a scan range for an Amazon S3 Select request\n (see SelectObjectContentRequest - ScanRange in the request parameters),\n you cannot specify the range of bytes of an object to return.
The GLACIER
, DEEP_ARCHIVE
, and REDUCED_REDUNDANCY
storage classes, or the ARCHIVE_ACCESS
and \n DEEP_ARCHIVE_ACCESS
access tiers of \n the INTELLIGENT_TIERING
storage class: You cannot query objects in \n the GLACIER
, DEEP_ARCHIVE
, or REDUCED_REDUNDANCY
storage classes, nor objects in the \n ARCHIVE_ACCESS
or \n DEEP_ARCHIVE_ACCESS
access tiers of \n the INTELLIGENT_TIERING
storage class. For\n more information about storage classes, see Using Amazon S3 storage\n classes in the Amazon S3 User Guide.
For a list of special errors for this operation, see List of\n SELECT Object Content Error Codes\n
\nThe following operations are related to SelectObjectContent
:
\n GetObject\n
\nThis action filters the contents of an Amazon S3 object based on a simple structured query\n language (SQL) statement. In the request, along with the SQL expression, you must also\n specify a data serialization format (JSON, CSV, or Apache Parquet) of the object. Amazon S3 uses\n this format to parse object data into records, and returns only records that match the\n specified SQL expression. You must also specify the data serialization format for the\n response.
\nThis action is not supported by Amazon S3 on Outposts.
\nFor more information about Amazon S3 Select, see Selecting Content from\n Objects and SELECT\n Command in the Amazon S3 User Guide.
\n \nYou must have s3:GetObject
permission for this operation. Amazon S3\n Select does not support anonymous access. For more information about permissions,\n see Specifying Permissions in\n a Policy in the Amazon S3 User Guide.
You can use Amazon S3 Select to query objects that have the following format\n properties:
\n\n CSV, JSON, and Parquet - Objects must be in CSV,\n JSON, or Parquet format.
\n\n UTF-8 - UTF-8 is the only encoding type Amazon S3 Select\n supports.
\n\n GZIP or BZIP2 - CSV and JSON files can be compressed\n using GZIP or BZIP2. GZIP and BZIP2 are the only compression formats that\n Amazon S3 Select supports for CSV and JSON files. Amazon S3 Select supports columnar\n compression for Parquet using GZIP or Snappy. Amazon S3 Select does not support\n whole-object compression for Parquet objects.
\n\n Server-side encryption - Amazon S3 Select supports\n querying objects that are protected with server-side encryption.
\nFor objects that are encrypted with customer-provided encryption keys\n (SSE-C), you must use HTTPS, and you must use the headers that are\n documented in the GetObject. For more\n information about SSE-C, see Server-Side Encryption (Using Customer-Provided Encryption Keys)\n in the Amazon S3 User Guide.
\nFor objects that are encrypted with Amazon S3 managed keys (SSE-S3) and\n Amazon Web Services KMS keys (SSE-KMS), server-side encryption is handled transparently,\n so you don't need to specify anything. For more information about\n server-side encryption, including SSE-S3 and SSE-KMS, see Protecting Data Using Server-Side Encryption in the\n Amazon S3 User Guide.
\nGiven the response size is unknown, Amazon S3 Select streams the response as a\n series of messages and includes a Transfer-Encoding
header with\n chunked
as its value in the response. For more information, see\n Appendix:\n SelectObjectContent\n Response.
The SelectObjectContent
action does not support the following\n GetObject
functionality. For more information, see GetObject.
\n Range
: Although you can specify a scan range for an Amazon S3 Select\n request (see SelectObjectContentRequest - ScanRange in the request\n parameters), you cannot specify the range of bytes of an object to return.\n
The GLACIER
, DEEP_ARCHIVE
, and\n REDUCED_REDUNDANCY
storage classes, or the\n ARCHIVE_ACCESS
and DEEP_ARCHIVE_ACCESS
access\n tiers of the INTELLIGENT_TIERING
storage class: You cannot\n query objects in the GLACIER
, DEEP_ARCHIVE
, or\n REDUCED_REDUNDANCY
storage classes, nor objects in the\n ARCHIVE_ACCESS
or DEEP_ARCHIVE_ACCESS
access\n tiers of the INTELLIGENT_TIERING
storage class. For more\n information about storage classes, see Using Amazon S3\n storage classes in the\n Amazon S3 User Guide.
For a list of special errors for this operation, see List of SELECT Object Content Error Codes\n
\nThe following operations are related to SelectObjectContent
:
\n GetObject\n
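As a rough, hypothetical sketch of how the SelectObjectContent documentation above surfaces through this package's generated TypeScript client: the call below runs an S3 Select SQL query against a GZIP-compressed CSV object and streams the result events. The bucket, key, SQL expression, and CSV layout are placeholders, and Node 18+ ESM is assumed for top-level await; this is an illustration, not part of the model change.

```ts
import { S3Client, SelectObjectContentCommand } from "@aws-sdk/client-s3";

const client = new S3Client({ region: "us-east-1" });

// Hypothetical bucket/key; the object is assumed to be a GZIP-compressed CSV with a header row.
const { Payload } = await client.send(
  new SelectObjectContentCommand({
    Bucket: "amzn-s3-demo-bucket",
    Key: "reports/sales.csv.gz",
    ExpressionType: "SQL",
    Expression: "SELECT s.country, s.total FROM S3Object s WHERE CAST(s.total AS INT) > 100",
    InputSerialization: {
      CSV: { FileHeaderInfo: "USE" },
      CompressionType: "GZIP", // GZIP/BZIP2 apply to CSV and JSON input only
    },
    OutputSerialization: { JSON: { RecordDelimiter: "\n" } },
  })
);

// The response body is an event stream (Records, Stats, End messages), not a single object.
if (Payload) {
  for await (const event of Payload) {
    if (event.Records?.Payload) {
      process.stdout.write(Buffer.from(event.Records.Payload));
    }
  }
}
```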
\nAmazon Web Services Key Management Service (KMS) customer Amazon Web Services KMS key ID to use for the default\n encryption. This parameter is allowed if and only if SSEAlgorithm
is set to\n aws:kms
.
You can specify the key ID or the Amazon Resource Name (ARN) of the KMS key. If you use\n a key ID, you can run into a LogDestination undeliverable error when creating a VPC flow\n log.
\nIf you are using encryption with cross-account or Amazon Web Services service operations you must use\n a fully qualified KMS key ARN. For more information, see Using encryption for cross-account operations.
\nKey ID: 1234abcd-12ab-34cd-56ef-1234567890ab
\n
Key ARN:\n arn:aws:kms:us-east-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab
\n
Amazon S3 only supports symmetric encryption KMS keys. For more information, see Asymmetric keys in Amazon Web Services KMS in the Amazon Web Services Key Management Service\n Developer Guide.
\nAmazon Web Services Key Management Service (KMS) customer Amazon Web Services KMS key ID to use for the default\n encryption. This parameter is allowed if and only if SSEAlgorithm
is set to\n aws:kms
.
You can specify the key ID, key alias, or the Amazon Resource Name (ARN) of the KMS\n key.
\nKey ID: 1234abcd-12ab-34cd-56ef-1234567890ab
\n
Key ARN: arn:aws:kms:us-east-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab
\n
Key Alias: alias/alias-name
\n
If you use a key ID, you can run into a LogDestination undeliverable error when creating\n a VPC flow log.
\nIf you are using encryption with cross-account or Amazon Web Services service operations, you must use\n a fully qualified KMS key ARN. For more information, see Using encryption for cross-account operations.
\nAmazon S3 only supports symmetric encryption KMS keys. For more information, see Asymmetric keys in Amazon Web Services KMS in the Amazon Web Services Key Management Service\n Developer Guide.
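The three documented key formats map onto the KMSMasterKeyID field of the generated PutBucketEncryption request. A minimal, hypothetical sketch (the bucket name and key identifiers are placeholders, not values from this diff):

```ts
import { S3Client, PutBucketEncryptionCommand } from "@aws-sdk/client-s3";

const client = new S3Client({ region: "us-east-2" });

await client.send(
  new PutBucketEncryptionCommand({
    Bucket: "amzn-s3-demo-bucket", // placeholder bucket
    ServerSideEncryptionConfiguration: {
      Rules: [
        {
          ApplyServerSideEncryptionByDefault: {
            SSEAlgorithm: "aws:kms",
            // Any of the documented formats is accepted here; a fully qualified
            // key ARN is required for cross-account use:
            //   "1234abcd-12ab-34cd-56ef-1234567890ab"                                        (key ID)
            //   "arn:aws:kms:us-east-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab" (key ARN)
            //   "alias/alias-name"                                                            (key alias)
            KMSMasterKeyID:
              "arn:aws:kms:us-east-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab",
          },
          BucketKeyEnabled: true,
        },
      ],
    },
  })
);
```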
\nUploads a part by copying data from an existing object as data source. You specify the\n data source by adding the request header x-amz-copy-source
in your request and\n a byte range by adding the request header x-amz-copy-source-range
in your\n request.
For information about maximum and minimum part sizes and other multipart upload\n specifications, see Multipart upload limits in the Amazon S3 User Guide.
\nInstead of using an existing object as part data, you might use the UploadPart\n action and provide data in your request.
\nYou must initiate a multipart upload before you can upload any part. In response to your\n initiate request. Amazon S3 returns a unique identifier, the upload ID, that you must include in\n your upload part request.
\nFor more information about using the UploadPartCopy
operation, see the\n following:
For conceptual information about multipart uploads, see Uploading\n Objects Using Multipart Upload in the\n Amazon S3 User Guide.
\nFor information about permissions required to use the multipart upload API, see\n Multipart Upload and Permissions in the\n Amazon S3 User Guide.
\nFor information about copying objects using a single atomic action vs. a multipart\n upload, see Operations on Objects in\n the Amazon S3 User Guide.
\nFor information about using server-side encryption with customer-provided\n encryption keys with the UploadPartCopy
operation, see CopyObject and UploadPart.
Note the following additional considerations about the request headers\n x-amz-copy-source-if-match
, x-amz-copy-source-if-none-match
,\n x-amz-copy-source-if-unmodified-since
, and\n x-amz-copy-source-if-modified-since
:
\n
\n Consideration 1 - If both of the\n x-amz-copy-source-if-match
and\n x-amz-copy-source-if-unmodified-since
headers are present in the\n request as follows:
\n x-amz-copy-source-if-match
condition evaluates to true
,\n and;
\n x-amz-copy-source-if-unmodified-since
condition evaluates to\n false
;
Amazon S3 returns 200 OK
and copies the data.\n
\n Consideration 2 - If both of the\n x-amz-copy-source-if-none-match
and\n x-amz-copy-source-if-modified-since
headers are present in the\n request as follows:
\n x-amz-copy-source-if-none-match
condition evaluates to\n false
, and;
\n x-amz-copy-source-if-modified-since
condition evaluates to\n true
;
Amazon S3 returns 412 Precondition Failed
response code.\n
If your bucket has versioning enabled, you could have multiple versions of the same\n object. By default, x-amz-copy-source
identifies the current version of the\n object to copy. If the current version is a delete marker and you don't specify a versionId\n in the x-amz-copy-source
, Amazon S3 returns a 404 error, because the object does\n not exist. If you specify versionId in the x-amz-copy-source
and the versionId\n is a delete marker, Amazon S3 returns an HTTP 400 error, because you are not allowed to specify\n a delete marker as a version for the x-amz-copy-source
.
You can optionally specify a specific version of the source object to copy by adding the\n versionId
subresource as shown in the following example:
\n x-amz-copy-source: /bucket/object?versionId=version id
\n
\n Code: NoSuchUpload\n
\n\n Cause: The specified multipart upload does not exist. The upload\n ID might be invalid, or the multipart upload might have been aborted or\n completed.\n
\n\n HTTP Status Code: 404 Not Found\n
\n\n Code: InvalidRequest\n
\n\n Cause: The specified copy source is not supported as a byte-range\n copy source.\n
\n\n HTTP Status Code: 400 Bad Request\n
\nThe following operations are related to UploadPartCopy
:
\n UploadPart\n
\n\n AbortMultipartUpload\n
\n\n ListParts\n
\n\n ListMultipartUploads\n
\nUploads a part by copying data from an existing object as data source. You specify the\n data source by adding the request header x-amz-copy-source
in your request and\n a byte range by adding the request header x-amz-copy-source-range
in your\n request.
For information about maximum and minimum part sizes and other multipart upload\n specifications, see Multipart upload limits in the Amazon S3 User Guide.
\nInstead of using an existing object as part data, you might use the UploadPart\n action and provide data in your request.
\nYou must initiate a multipart upload before you can upload any part. In response to your\n initiate request, Amazon S3 returns a unique identifier, the upload ID, that you must include in\n your upload part request.
\nFor more information about using the UploadPartCopy
operation, see the\n following:
For conceptual information about multipart uploads, see Uploading\n Objects Using Multipart Upload in the\n Amazon S3 User Guide.
\nFor information about permissions required to use the multipart upload API, see\n Multipart Upload and Permissions in the\n Amazon S3 User Guide.
\nFor information about copying objects using a single atomic action vs. a multipart\n upload, see Operations on Objects in\n the Amazon S3 User Guide.
\nFor information about using server-side encryption with customer-provided\n encryption keys with the UploadPartCopy
operation, see CopyObject and UploadPart.
Note the following additional considerations about the request headers\n x-amz-copy-source-if-match
, x-amz-copy-source-if-none-match
,\n x-amz-copy-source-if-unmodified-since
, and\n x-amz-copy-source-if-modified-since
:
\n
\n Consideration 1 - If both of the\n x-amz-copy-source-if-match
and\n x-amz-copy-source-if-unmodified-since
headers are present in the\n request as follows:
\n x-amz-copy-source-if-match
condition evaluates to true
,\n and;
\n x-amz-copy-source-if-unmodified-since
condition evaluates to\n false
;
Amazon S3 returns 200 OK
and copies the data.\n
\n Consideration 2 - If both of the\n x-amz-copy-source-if-none-match
and\n x-amz-copy-source-if-modified-since
headers are present in the\n request as follows:
\n x-amz-copy-source-if-none-match
condition evaluates to\n false
, and;
\n x-amz-copy-source-if-modified-since
condition evaluates to\n true
;
Amazon S3 returns 412 Precondition Failed
response code.\n
If your bucket has versioning enabled, you could have multiple versions of the\n same object. By default, x-amz-copy-source
identifies the current\n version of the object to copy. If the current version is a delete marker and you\n don't specify a versionId in the x-amz-copy-source
, Amazon S3 returns a\n 404 error, because the object does not exist. If you specify versionId in the\n x-amz-copy-source
and the versionId is a delete marker, Amazon S3\n returns an HTTP 400 error, because you are not allowed to specify a delete marker\n as a version for the x-amz-copy-source
.
You can optionally specify a specific version of the source object to copy by\n adding the versionId
subresource as shown in the following\n example:
\n x-amz-copy-source: /bucket/object?versionId=version id
\n
\n Code: NoSuchUpload\n
\n\n Cause: The specified multipart upload does not exist. The\n upload ID might be invalid, or the multipart upload might have been\n aborted or completed.\n
\n\n HTTP Status Code: 404 Not Found\n
\n\n Code: InvalidRequest\n
\n\n Cause: The specified copy source is not supported as a\n byte-range copy source.\n
\n\n HTTP Status Code: 400 Bad Request\n
\nThe following operations are related to UploadPartCopy
:
\n UploadPart\n
\n\n AbortMultipartUpload\n
\n\n ListParts\n
\n\n ListMultipartUploads\n
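To make the header-level description above concrete, here is a hypothetical UploadPartCopy call through the generated client; the bucket names, key, upload ID, ETag, and byte range are placeholders, and the upload ID is assumed to come from an earlier CreateMultipartUpload call.

```ts
import { S3Client, UploadPartCopyCommand } from "@aws-sdk/client-s3";

const client = new S3Client({ region: "us-east-1" });

const { CopyPartResult } = await client.send(
  new UploadPartCopyCommand({
    Bucket: "amzn-s3-demo-dest-bucket", // destination bucket (placeholder)
    Key: "assembled-object",            // destination key (placeholder)
    UploadId: "EXAMPLE-UPLOAD-ID",      // returned by CreateMultipartUpload
    PartNumber: 1,
    // x-amz-copy-source; append ?versionId=... to copy a specific source version
    CopySource: "amzn-s3-demo-source-bucket/source-object",
    // x-amz-copy-source-range; copy only the first 5 MiB of the source
    CopySourceRange: "bytes=0-5242879",
    // x-amz-copy-source-if-match; S3 returns 412 Precondition Failed if the ETag no longer matches
    CopySourceIfMatch: '"b54357faf0632cce46e942fa68356b38"',
  })
);

// The returned ETag for this part is what CompleteMultipartUpload expects later.
console.log(CopyPartResult?.ETag);
```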
\nPasses transformed objects to a GetObject
operation when using Object Lambda access points. For\n information about Object Lambda access points, see Transforming objects with\n Object Lambda access points in the Amazon S3 User Guide.
This operation supports metadata that can be returned by GetObject, in addition to\n RequestRoute
, RequestToken
, StatusCode
,\n ErrorCode
, and ErrorMessage
. The GetObject
\n response metadata is supported so that the WriteGetObjectResponse
caller,\n typically a Lambda function, can provide the same metadata when it internally invokes\n GetObject
. When WriteGetObjectResponse
is called by a\n customer-owned Lambda function, the metadata returned to the end user\n GetObject
call might differ from what Amazon S3 would normally return.
You can include any number of metadata headers. When including a metadata header, it\n should be prefaced with x-amz-meta
. For example,\n x-amz-meta-my-custom-header: MyCustomValue
. The primary use case for this\n is to forward GetObject
metadata.
Amazon Web Services provides some prebuilt Lambda functions that you can use with S3 Object Lambda to\n detect and redact personally identifiable information (PII) and decompress S3 objects.\n These Lambda functions are available in the Amazon Web Services Serverless Application Repository, and\n can be selected through the Amazon Web Services Management Console when you create your Object Lambda access point.
\nExample 1: PII Access Control - This Lambda function uses Amazon Comprehend, a\n natural language processing (NLP) service using machine learning to find insights and\n relationships in text. It automatically detects personally identifiable information (PII)\n such as names, addresses, dates, credit card numbers, and social security numbers from\n documents in your Amazon S3 bucket.
\nExample 2: PII Redaction - This Lambda function uses Amazon Comprehend, a natural\n language processing (NLP) service using machine learning to find insights and relationships\n in text. It automatically redacts personally identifiable information (PII) such as names,\n addresses, dates, credit card numbers, and social security numbers from documents in your\n Amazon S3 bucket.
\nExample 3: Decompression - The Lambda function S3ObjectLambdaDecompression, is\n equipped to decompress objects stored in S3 in one of six compressed file formats including\n bzip2, gzip, snappy, zlib, zstandard and ZIP.
\nFor information on how to view and use these functions, see Using Amazon Web Services built Lambda\n functions in the Amazon S3 User Guide.
", "smithy.api#endpoint": { "hostPrefix": "{RequestRoute}." @@ -31751,7 +31751,7 @@ "SSEKMSKeyId": { "target": "com.amazonaws.s3#SSEKMSKeyId", "traits": { - "smithy.api#documentation": "If present, specifies the ID of the Amazon Web Services Key Management Service (Amazon Web Services KMS) symmetric\n encryption customer managed key that was used for stored in Amazon S3 object.
", + "smithy.api#documentation": "If present, specifies the ID (Key ID, Key ARN, or Key Alias) of the Amazon Web Services Key Management Service (Amazon Web Services KMS) symmetric\n encryption customer managed key that was used for stored in Amazon S3 object.
", "smithy.api#httpHeader": "x-amz-fwd-header-x-amz-server-side-encryption-aws-kms-key-id" } }, @@ -31804,4 +31804,4 @@ } } } -} \ No newline at end of file +}