# terraform-google-storage-bucket

A Terraform module to create a Google Cloud Storage (GCS) bucket on Google Cloud Platform (GCP).

**This module supports Terraform version 1 and is compatible with the Terraform Google Provider version >= 5.10.**
This module is part of our Infrastructure as Code (IaC) framework that enables our users and customers to easily deploy and manage reusable, secure, and production-grade cloud infrastructure.
- Module Features
- Getting Started
- Module Argument Reference
- Module Outputs
- External Documentation
- Module Versioning
- About Mineiros
- Reporting Issues
- Contributing
- Makefile Targets
- License
## Module Features

This module implements the following Terraform resources:

- `google_storage_bucket`

and supports additional features of the following modules:

- `mineiros-io/storage-bucket-iam/google`
## Getting Started

Most basic usage just setting required arguments:

```hcl
module "terraform-google-storage-bucket" {
  source = "github.com/mineiros-io/terraform-google-storage-bucket?ref=v0.1.0"

  name = "my-bucket"
}
```

See `variables.tf` and `examples/` for details and use-cases.

## Module Argument Reference
- **`name`**: *(Required `string`)*

  Name of the bucket.

- **`force_destroy`**: *(Optional `bool`)*

  When set to `true`, all contained objects will be deleted when the bucket is destroyed. If you try to delete a bucket that contains objects while this is `false`, Terraform will fail the run.

  Default is `false`.
- **`location`**: *(Optional `string`)*

  The GCS location.

  Default is `"US"`.

- **`project`**: *(Optional `string`)*

  The ID of the project in which the resource belongs. If it is not provided, the provider project is used.

- **`storage_class`**: *(Optional `string`)*

  The Storage Class of the new bucket. Supported values include: `STANDARD`, `MULTI_REGIONAL`, `REGIONAL`, `NEARLINE`, `COLDLINE`, `ARCHIVE`.

  Default is `"STANDARD"`.
- **`lifecycle_rules`**: *(Optional `list(lifecycle_rule)`)*

  A list of Lifecycle Rules to configure for the bucket.

  Example:

  ```hcl
  lifecycle_rules = [{
    action = {
      type          = "SetStorageClass"
      storage_class = "NEARLINE"
    }
    condition = {
      age                        = 60
      no_age                     = false
      created_before             = "2018-08-20"
      with_state                 = "LIVE"
      matches_storage_class      = ["REGIONAL"]
      matches_prefix             = ["bucket"]
      matches_suffix             = []
      num_newer_versions         = 10
      custom_time_before         = "1970-01-01"
      days_since_custom_time     = 1
      days_since_noncurrent_time = 1
      noncurrent_time_before     = "1970-01-01"
    }
  }]
  ```

  Each `lifecycle_rule` object in the list accepts the following attributes:

  - **`action`**: *(Required `list(action)`)*

    The Lifecycle Rule's action configuration.

    Each `action` object in the list accepts the following attributes:

    - **`type`**: *(Optional `string`)*

      The type of the action of this Lifecycle Rule. Supported values include: `Delete` and `SetStorageClass`.

    - **`storage_class`**: *(Optional `string`)*

      The target Storage Class of objects affected by this Lifecycle Rule. Supported values include: `STANDARD`, `MULTI_REGIONAL`, `REGIONAL`, `NEARLINE`, `COLDLINE`, `ARCHIVE`.

  - **`condition`**: *(Required `list(condition)`)*

    The Lifecycle Rule's condition configuration.

    Each `condition` object in the list accepts the following attributes:

    - **`age`**: *(Optional `number`)*

      Minimum age of an object in days to satisfy this condition.

    - **`no_age`**: *(Optional `bool`)*

      When set to `true`, the `age` value is omitted. Must be set to `true` when `age` is unset in the config file.

    - **`created_before`**: *(Optional `string`)*

      A date in the RFC 3339 format `YYYY-MM-DD`. This condition is satisfied when an object is created before midnight of the specified date in UTC.

    - **`with_state`**: *(Optional `string`)*

      Match to live and/or archived objects. Unversioned buckets have only live objects. Supported values include: `LIVE`, `ARCHIVED`, `ANY`.

    - **`matches_storage_class`**: *(Optional `list(string)`)*

      Storage Classes of objects to satisfy this condition. Supported values include: `STANDARD`, `MULTI_REGIONAL`, `REGIONAL`, `NEARLINE`, `COLDLINE`, `ARCHIVE`, `DURABLE_REDUCED_AVAILABILITY`.

    - **`matches_prefix`**: *(Optional `list(string)`)*

      One or more matching name prefixes to satisfy this condition.

    - **`matches_suffix`**: *(Optional `list(string)`)*

      One or more matching name suffixes to satisfy this condition.

    - **`num_newer_versions`**: *(Optional `number`)*

      Relevant only for versioned objects. The number of newer versions of an object to satisfy this condition.

    - **`custom_time_before`**: *(Optional `string`)*

      A date in the RFC 3339 format `YYYY-MM-DD`. This condition is satisfied when the `customTime` metadata for the object is set to an earlier date than the date used in this lifecycle condition.

    - **`days_since_custom_time`**: *(Optional `number`)*

      Days since the date set in the `customTime` metadata for the object. This condition is satisfied when the current date and time is at least the specified number of days after the `customTime`.

    - **`days_since_noncurrent_time`**: *(Optional `number`)*

      Relevant only for versioned objects. Number of days elapsed since the noncurrent timestamp of an object.

    - **`noncurrent_time_before`**: *(Optional `string`)*

      Relevant only for versioned objects. The date in RFC 3339 format (e.g. `2017-06-13`) when the object became noncurrent.
- **`versioning_enabled`**: *(Optional `bool`)*

  Whether versioning should be enabled.

  Default is `false`.
- **`website`**: *(Optional `object(website)`)*

  Configuration if the bucket acts as a website.

  Example:

  ```hcl
  website = {
    main_page_suffix = "index.html"
    not_found_page   = "404.html"
  }
  ```

  The `website` object accepts the following attributes:

  - **`main_page_suffix`**: *(Optional `string`)*

    Behaves as the bucket's directory index where missing objects are treated as potential directories.

  - **`not_found_page`**: *(Optional `string`)*

    The custom object to return when a requested resource is not found.
- **`autoclass`**: *(Optional `object(autoclass)`)*

  The bucket's Autoclass configuration.

  Example:

  ```hcl
  autoclass = {
    enabled                = true
    terminal_storage_class = "NEARLINE"
  }
  ```

  The `autoclass` object accepts the following attributes:

  - **`enabled`**: *(Required `bool`)*

    When set to `true`, Autoclass automatically transitions objects in your bucket to appropriate storage classes based on each object's access pattern.

  - **`terminal_storage_class`**: *(Optional `string`)*

    The storage class that objects in the bucket eventually transition to if they are not read for a certain length of time. Supported values include: `NEARLINE`, `ARCHIVE`.
- **`cors`**: *(Optional `list(cors)`)*

  The bucket's Cross-Origin Resource Sharing (CORS) configuration.

  Example:

  ```hcl
  cors = [{
    origin          = ["http://image-store.com"]
    method          = ["GET", "HEAD", "PUT", "POST", "DELETE"]
    response_header = ["*"]
    max_age_seconds = 3600
  }]
  ```

  Each `cors` object in the list accepts the following attributes:

  - **`origin`**: *(Optional `set(string)`)*

    The list of Origins eligible to receive CORS response headers. Note: `"*"` is permitted in the list of origins, and means "any Origin".

  - **`method`**: *(Optional `set(string)`)*

    The list of HTTP methods on which to include CORS response headers (`GET`, `OPTIONS`, `POST`, etc.). Note: `"*"` is permitted in the list of methods, and means "any method".

  - **`response_header`**: *(Optional `set(string)`)*

    The list of HTTP headers other than the simple response headers to give permission for the user-agent to share across domains.

  - **`max_age_seconds`**: *(Optional `number`)*

    The value, in seconds, to return in the `Access-Control-Max-Age` header used in preflight responses.
- **`encryption_default_kms_key_name`**: *(Optional `string`)*

  The ID of a Cloud KMS key that will be used to encrypt objects inserted into this bucket if no encryption method is specified. Note that the crypto key must be available in the location this bucket is created in.
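  As an illustration, the key ID follows the Cloud KMS resource-name format; the project, key ring, and key names below are placeholders:

  ```hcl
  encryption_default_kms_key_name = "projects/my-project/locations/us/keyRings/my-keyring/cryptoKeys/my-key"
  ```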
- **`custom_placement_config`**: *(Optional `object(custom_placement_config)`)*

  The bucket's custom location configuration, which specifies the individual regions that comprise a dual-region bucket. If the bucket is designated a single or multi-region, the parameters are empty.

  The `custom_placement_config` object accepts the following attributes:

  - **`data_locations`**: *(Required `list(string)`)*

    The list of individual regions that comprise a dual-region bucket. If any of the `data_locations` changes, the bucket will be recreated.
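  For example, a dual-region bucket spanning two regions could be configured as follows (the region pair shown is illustrative; keep in mind that changing `data_locations` recreates the bucket):

  ```hcl
  custom_placement_config = {
    data_locations = ["US-EAST1", "US-WEST1"]
  }
  ```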
- **`logging`**: *(Optional `object(logging)`)*

  The bucket's Access & Storage Logs configuration.

  Example:

  ```hcl
  logging = {
    log_bucket        = "example-log-bucket"
    log_object_prefix = "gcs-log"
  }
  ```

  The `logging` object accepts the following attributes:

  - **`log_bucket`**: *(Required `string`)*

    The bucket that will receive log objects.

  - **`log_object_prefix`**: *(Optional `string`)*

    The object prefix for log objects. If it's not provided, GCS defaults this to this bucket's name.
- **`retention_policy`**: *(Optional `object(retention_policy)`)*

  Configuration of the bucket's data retention policy, defining how long objects in the bucket should be retained.

  Example:

  ```hcl
  retention_policy = {
    is_locked        = false
    retention_period = 200000
  }
  ```

  The `retention_policy` object accepts the following attributes:

  - **`is_locked`**: *(Optional `bool`)*

    If set to `true`, the bucket will be locked and edits to the bucket's retention policy permanently restricted. Caution: Locking a bucket is an irreversible action.

    Default is `false`.

  - **`retention_period`**: *(Required `number`)*

    The period of time, in seconds, that objects in the bucket must be retained and cannot be deleted, overwritten, or archived. The value must be less than `2,147,483,647` seconds.
- **`labels`**: *(Optional `map(string)`)*

  A map of key/value label pairs to assign to the bucket.

- **`requester_pays`**: *(Optional `bool`)*

  Enables Requester Pays on a storage bucket.

  Default is `false`.

- **`uniform_bucket_level_access`**: *(Optional `bool`)*

  Enables Uniform bucket-level access to a bucket.

  Default is `true`.

- **`default_event_based_hold`**: *(Optional `bool`)*

  Whether to automatically apply an `eventBasedHold` to new objects added to the bucket.

  Default is `false`.
- **`enable_object_retention`**: *(Optional `bool`)*

  Enables object retention on a storage bucket.

  Default is `false`.

- **`public_access_prevention`**: *(Optional `string`)*

  Prevents public access to a bucket. Acceptable values are `"inherited"` or `"enforced"`. If `"inherited"`, the bucket uses public access prevention only if the bucket is subject to the public access prevention organization policy constraint.

  Default is `"inherited"`.

- **`rpo`**: *(Optional `string`)*

  The recovery point objective for cross-region replication of the bucket. Applicable only for dual- and multi-region buckets. `"DEFAULT"` sets default replication. `"ASYNC_TURBO"` enables turbo replication, valid for dual-region buckets only. If `rpo` is not specified at bucket creation, it defaults to `"DEFAULT"` for dual- and multi-region buckets. Note: if used with a single-region bucket, it will throw an error.

  Default is `null`.
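  As an illustrative sketch, turbo replication on a dual-region bucket combined with enforced public access prevention could be set as:

  ```hcl
  public_access_prevention = "enforced"
  rpo                      = "ASYNC_TURBO" # valid for dual-region buckets only
  ```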
- **`object_creators`**: *(Optional `set(string)`)*

  A set of identities that will be able to create objects inside the bucket.

  Default is `[]`.

- **`object_viewers`**: *(Optional `set(string)`)*

  A set of identities that will be able to view objects inside the bucket.

  Default is `[]`.

- **`legacy_readers`**: *(Optional `set(string)`)*

  A set of identities that get the legacy bucket and object reader role assigned.

  Default is `[]`.

- **`legacy_writers`**: *(Optional `set(string)`)*

  A set of identities that get the legacy bucket and object writer role assigned.

  Default is `[]`.

- **`object_admins`**: *(Optional `set(string)`)*

  A set of identities that will be able to administrate objects inside the bucket.

  Default is `[]`.
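  The identities use the standard IAM member syntax (see the `members` attribute below); the addresses in this sketch are placeholders:

  ```hcl
  object_viewers = ["group:readers@example.com"]
  object_admins  = ["serviceAccount:app@my-project.iam.gserviceaccount.com"]
  ```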
- **`iam`**: *(Optional `list(iam)`)*

  A list of IAM access configurations.

  Example:

  ```hcl
  iam = [{
    role          = "roles/storage.admin"
    members       = ["user:member@example.com"]
    authoritative = false
  }]
  ```

  Each `iam` object in the list accepts the following attributes:

  - **`members`**: *(Optional `set(string)`)*

    Identities that will be granted the privilege in `role`. Each entry can have one of the following values:

    - `allUsers`: A special identifier that represents anyone who is on the internet; with or without a Google account.
    - `allAuthenticatedUsers`: A special identifier that represents anyone who is authenticated with a Google account or a service account.
    - `user:{emailid}`: An email address that represents a specific Google account. For example, `alice@gmail.com` or `joe@example.com`.
    - `serviceAccount:{emailid}`: An email address that represents a service account. For example, `my-other-app@appspot.gserviceaccount.com`.
    - `group:{emailid}`: An email address that represents a Google group. For example, `admins@example.com`.
    - `domain:{domain}`: A G Suite domain (primary, instead of alias) name that represents all the users of that domain. For example, `google.com` or `example.com`.
    - `projectOwner:projectid`: Owners of the given project. For example, `projectOwner:my-example-project`.
    - `projectEditor:projectid`: Editors of the given project. For example, `projectEditor:my-example-project`.
    - `projectViewer:projectid`: Viewers of the given project. For example, `projectViewer:my-example-project`.
    - `computed:{identifier}`: An existing key from `var.computed_members_map`.

    Default is `[]`.

  - **`role`**: *(Optional `string`)*

    The role that should be applied. Note that custom roles must be of the format `[projects|organizations]/{parent-name}/roles/{role-name}`.

  - **`roles`**: *(Optional `list(string)`)*

    The set of roles that should be applied. Note that custom roles must be of the format `[projects|organizations]/{parent-name}/roles/{role-name}`.

  - **`authoritative`**: *(Optional `bool`)*

    Whether to exclusively set (authoritative mode) or add (non-authoritative/additive mode) members to the role.

    Default is `true`.
- **`computed_members_map`**: *(Optional `map(string)`)*

  A map of members to replace in `members` of various IAM settings to handle Terraform computed values.

  Default is `{}`.
- **`policy_bindings`**: *(Optional `list(policy_binding)`)*

  A list of IAM policy bindings.

  Example:

  ```hcl
  policy_bindings = [{
    role    = "roles/storage.admin"
    members = ["user:member@example.com"]
    condition = {
      title       = "expires_after_2021_12_31"
      description = "Expiring at midnight of 2021-12-31"
      expression  = "request.time < timestamp(\"2022-01-01T00:00:00Z\")"
    }
  }]
  ```

  Each `policy_binding` object in the list accepts the following attributes:

  - **`role`**: *(Required `string`)*

    The role that should be applied.

  - **`members`**: *(Optional `set(string)`)*

    Identities that will be granted the privilege in `role`.

    Default is `var.members`.

  - **`condition`**: *(Optional `object(condition)`)*

    An IAM Condition for a given binding.

    Example:

    ```hcl
    condition = {
      expression = "request.time < timestamp(\"2022-01-01T00:00:00Z\")"
      title      = "expires_after_2021_12_31"
    }
    ```

    The `condition` object accepts the following attributes:

    - **`expression`**: *(Required `string`)*

      Textual representation of an expression in Common Expression Language syntax.

    - **`title`**: *(Required `string`)*

      A title for the expression, i.e. a short string describing its purpose.

    - **`description`**: *(Optional `string`)*

      An optional description of the expression. This is a longer text which describes the expression, e.g. when hovered over it in a UI.
- **`module_enabled`**: *(Optional `bool`)*

  Specifies whether resources in the module will be created.

  Default is `true`.

- **`module_depends_on`**: *(Optional `list(dependency)`)*

  A list of dependencies. Any object can be assigned to this list to define a hidden external dependency.

  Example:

  ```hcl
  module_depends_on = [
    google_network.network
  ]
  ```
## Module Outputs

The following attributes are exported in the outputs of the module:

- **`bucket`**: *(`object(bucket)`)*

  All attributes of the created `google_storage_bucket` resource.

- **`iam`**: *(`list(iam)`)*

  The `iam` resource objects that define the access to the GCS bucket.

- **`policy_binding`**: *(`object(policy_binding)`)*

  All attributes of the `policy_bindings` created by the `mineiros-io/storage-bucket-iam/google` module.
## Module Versioning

This Module follows the principles of Semantic Versioning (SemVer).

Given a version number `MAJOR.MINOR.PATCH`, we increment the:

1. `MAJOR` version when we make incompatible changes,
2. `MINOR` version when we add functionality in a backwards compatible manner, and
3. `PATCH` version when we make backwards compatible bug fixes.

- Backwards compatibility in versions `0.0.z` is not guaranteed when `z` is increased. (Initial development)
- Backwards compatibility in versions `0.y.z` is not guaranteed when `y` is increased. (Pre-release)
## About Mineiros

Mineiros is a remote-first company headquartered in Berlin, Germany that solves development, automation and security challenges in cloud infrastructure.
Our vision is to massively reduce time and overhead for teams to manage and deploy production-grade and secure cloud infrastructure.
We offer commercial support for all of our modules and encourage you to reach out if you have any questions or need help. Feel free to email us at hello@mineiros.io or join our Community Slack channel.
## Reporting Issues

We use GitHub Issues to track community reported issues and missing features.
## Contributing

Contributions are always encouraged and welcome! For the process of accepting changes, we use Pull Requests. If you'd like more information, please see our Contribution Guidelines.
## Makefile Targets

This repository comes with a handy Makefile.
Run `make help` to see details on each available target.
## License

This module is licensed under the Apache License Version 2.0, January 2004. Please see LICENSE for full details.

Copyright © 2020-2022 Mineiros GmbH