Bump S3 syntax to AWS provider 4.x #37
Conversation
Bridgecrew has found infrastructure configuration errors in this PR ⬇️
@@ -4,34 +4,36 @@ resource "aws_s3_bucket" "default" {
  #bridgecrew:skip=CKV_AWS_52:Skipping `Ensure S3 bucket has MFA delete enabled` due to issue in terraform (https://github.com/hashicorp/terraform-provider-aws/issues/629).
  bucket        = module.this.id
  force_destroy = true
  tags          = module.this.tags
Ensure S3 bucket has cross-region replication enabled
Resource: aws_s3_bucket.default | ID: BC_AWS_GENERAL_72
How to Fix
resource "aws_s3_bucket" "test" {
...
+ replication_configuration {
+ role = aws_iam_role.replication.arn
+ rules {
+ id = "foobar"
+ prefix = "foo"
+ status = "Enabled"
+
+ destination {
+ bucket = aws_s3_bucket.destination.arn
+ storage_class = "STANDARD"
+ }
+ }
+ }
}
Description
Cross-region replication enables automatic, asynchronous copying of objects across S3 buckets. By default, replication supports copying new S3 objects after it is enabled. It is also possible to use replication to copy existing objects and clone them to a different bucket, but in order to do so, you must contact AWS Support.
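Note that under AWS provider 4.x, which this PR targets, the inline replication_configuration block shown above is no longer supported; replication is configured with the standalone aws_s3_bucket_replication_configuration resource instead. A minimal sketch, with the IAM role and destination bucket names assumed for illustration:

resource "aws_s3_bucket_replication_configuration" "default" {
  # Replication requires versioning to be enabled on the source bucket first.
  bucket = aws_s3_bucket.default.id
  role   = aws_iam_role.replication.arn

  rule {
    id     = "foobar"
    status = "Enabled"

    filter {
      prefix = "foo"
    }

    # Required by the replication API when a filter is used.
    delete_marker_replication {
      status = "Disabled"
    }

    destination {
      bucket        = aws_s3_bucket.destination.arn
      storage_class = "STANDARD"
    }
  }
}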
Ensure S3 Bucket has public access blocks
Resource: aws_s3_bucket.default | ID: BC_AWS_NETWORKING_52
How to Fix
resource "aws_s3_bucket" "bucket_good_1" {
bucket = "bucket_good"
}
resource "aws_s3_bucket_public_access_block" "access_good_1" {
bucket = aws_s3_bucket.bucket_good_1.id
block_public_acls = true
block_public_policy = true
}
Description
When you create an S3 bucket, it is good practice to also set the additional resource **aws_s3_bucket_public_access_block** to ensure the bucket is never accidentally made public. We recommend ensuring the S3 bucket has public access blocks; if the public access block is not attached, it defaults to False.
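The aws_s3_bucket_public_access_block resource is unchanged in provider 4.x, so the fix above carries over to this PR as-is. For a fully locked-down bucket, all four settings are typically enabled; a minimal sketch against this module's bucket (resource name assumed):

resource "aws_s3_bucket_public_access_block" "default" {
  bucket = aws_s3_bucket.default.id

  # Block both new and existing public ACLs and public bucket policies.
  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}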
Suggested change (adds default encryption inside the aws_s3_bucket.default resource, using the pre-4.x inline syntax):

  server_side_encryption_configuration {
    rule {
      apply_server_side_encryption_by_default {
        sse_algorithm = "aws:kms"
      }
    }
  }
}
Ensure S3 buckets are encrypted with KMS by default
Resource: aws_s3_bucket.default | ID: BC_AWS_GENERAL_56
Description
TBA
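Since this PR moves to AWS provider 4.x, the inline server_side_encryption_configuration block suggested above is itself deprecated there; the 4.x equivalent is the standalone aws_s3_bucket_server_side_encryption_configuration resource. A minimal sketch, with the KMS key name assumed:

resource "aws_s3_bucket_server_side_encryption_configuration" "default" {
  bucket = aws_s3_bucket.default.id

  rule {
    apply_server_side_encryption_by_default {
      # aws_kms_key.default is an assumed name for a customer-managed key.
      kms_master_key_id = aws_kms_key.default.arn
      sse_algorithm     = "aws:kms"
    }
  }
}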
Ensure S3 bucket has cross-region replication enabled
Resource: module.ses_lambda_forwarder.aws_s3_bucket.default | ID: BC_AWS_GENERAL_72
How to Fix
resource "aws_s3_bucket" "test" {
...
+ replication_configuration {
+ role = aws_iam_role.replication.arn
+ rules {
+ id = "foobar"
+ prefix = "foo"
+ status = "Enabled"
+
+ destination {
+ bucket = aws_s3_bucket.destination.arn
+ storage_class = "STANDARD"
+ }
+ }
+ }
}
Description
Cross-region replication enables automatic, asynchronous copying of objects across S3 buckets. By default, replication supports copying new S3 objects after it is enabled. It is also possible to use replication to copy existing objects and clone them to a different bucket, but in order to do so, you must contact AWS Support.
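If replication is added here, note that S3 replication also requires versioning to be enabled on both the source and destination buckets. With provider 4.x, versioning is likewise managed through the standalone aws_s3_bucket_versioning resource rather than an inline block; a minimal sketch for the source bucket (resource name assumed):

resource "aws_s3_bucket_versioning" "default" {
  bucket = aws_s3_bucket.default.id

  versioning_configuration {
    status = "Enabled"
  }
}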
Ensure S3 buckets are encrypted with KMS by default
Resource: module.ses_lambda_forwarder.aws_s3_bucket.default | ID: BC_AWS_GENERAL_56
How to Fix
resource "aws_s3_bucket" "mybucket" {
...
server_side_encryption_configuration {
rule {
apply_server_side_encryption_by_default {
kms_master_key_id = aws_kms_key.mykey.arn
+ sse_algorithm = "aws:kms"
}
}
}
}
Description
TBA
Our roadmap calls for S3 bucket resources to be provisioned with our s3-bucket module, with its inputs passed through where relevant, rather than piecemeal updates to the S3 bucket resources everywhere. Therefore these changes are not acceptable. You can view cloudposse/terraform-aws-s3-log-storage#71 as an example.
Once the module has been updated to the current s3-bucket module and fully supports AWS provider v4, we will need to make an internal decision about version numbering; we may want to release this as v1.0.0 or v2.0.0.
Understood, thanks for the explanation.
Using the S3 properties versioning, logging and server_side_encryption_configuration is no longer possible with AWS provider 4.x. This PR fixes that by migrating to the newer standalone resources.
See: hashicorp/terraform-provider-aws#23125
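For reference, the inline logging block follows the same migration pattern as versioning and encryption: under provider 4.x it becomes the standalone aws_s3_bucket_logging resource. A minimal sketch, with the access-log bucket name assumed:

resource "aws_s3_bucket_logging" "default" {
  bucket = aws_s3_bucket.default.id

  # aws_s3_bucket.logs is an assumed name for the access-log bucket.
  target_bucket = aws_s3_bucket.logs.id
  target_prefix = "logs/"
}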