This module implements IBM Event Streams for IBM Cloud, including topics, partitions, throughput, storage size, cleanup policy, retention time, retention size, segment size, and schemas.
The Event Streams service supports payload data encryption that uses a root key CRN of a key management service, such as Key Protect or Hyper Protect Crypto Services. You specify the root key CRN with the `kms_key_crn` input. For more information, see Managing encryption in Event Streams.
Before you run the module, configure an authorization policy to allow the Event Streams service to access the key management service instance with the reader role. For more information, see Using authorizations to grant access between services.
You can't manage the policy in the same Terraform state file as the Event Streams service instance. When you issue a `terraform destroy` command, the instance is only soft deleted: it remains as a reclamation resource for a while to support recovery. The authorization policy must still exist when the instance is hard deleted or reclaimed, or the unregistration of the instance from the root key fails on the backend. If the policy doesn't exist at that point, the only way to unregister the instance, which is a requirement for deleting the root key, is to open a support case. For more information, see Using a customer-managed key.
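As a sketch of such an authorization policy, managed in a separate Terraform configuration from the instance itself (the variable name is illustrative, and the `target_service_name` depends on your key management service):

```hcl
# Hedged example: grant the Event Streams service (service name "messagehub")
# Reader access to the key management service instance that holds the root key.
# Keep this policy in a separate state file from the Event Streams instance.
resource "ibm_iam_authorization_policy" "es_to_kms" {
  source_service_name         = "messagehub"
  target_service_name         = "kms"                   # use "hs-crypto" for Hyper Protect Crypto Services
  target_resource_instance_id = var.kms_instance_guid   # illustrative variable: GUID of the KMS instance
  roles                       = ["Reader"]
}
```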
```hcl
module "event_streams" {
  source            = "terraform-ibm-modules/event-streams/ibm"
  version           = "latest" # Replace "latest" with a release version to lock into a specific release
  es_name           = "my-event-streams"
  resource_group_id = "<your-resource-group-id>"
  plan              = "standard"
  topics = [
    {
      name       = "topic-1"
      partitions = 1
      config = {
        "cleanup.policy"  = "delete"
        "retention.ms"    = "86400000"
        "retention.bytes" = "10485760"
        "segment.bytes"   = "10485760"
      }
    },
    {
      name       = "topic-2"
      partitions = 1
      config = {
        "cleanup.policy"  = "compact,delete"
        "retention.ms"    = "86400000"
        "retention.bytes" = "1073741824"
        "segment.bytes"   = "536870912"
      }
    }
  ]
  schemas = [
    {
      schema_id = "my-es-schema_1"
      schema = {
        type = "string"
        name = "name_1"
      }
    },
    {
      schema_id = "my-es-schema_2"
      schema = {
        type = "string"
        name = "name_2"
      }
    },
    {
      schema_id = "my-es-schema_3"
      schema = {
        type = "string"
        name = "name_3"
      }
    }
  ]
}
```
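To enable payload data encryption, you can also wire in the `kms_key_crn` input. A hedged sketch (the key reference is illustrative, and assumes the authorization policy described above already exists):

```hcl
module "event_streams_encrypted" {
  source            = "terraform-ibm-modules/event-streams/ibm"
  version           = "latest" # Replace "latest" with a release version to lock into a specific release
  es_name           = "my-encrypted-event-streams"     # illustrative name
  resource_group_id = "<your-resource-group-id>"
  plan              = "enterprise-3nodes-2tb"
  # Root key CRN of a Key Protect or Hyper Protect Crypto Services key;
  # here referenced from a hypothetical ibm_kms_key resource.
  kms_key_crn       = ibm_kms_key.root_key.crn
}
```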
You need the following permissions to run this module.
- Account Management
    - **Resource Group** service
        - `Viewer` platform access
- IAM Services
    - **Event Streams** service
        - `Editor` platform access
        - `Manager` service access
| Name | Version |
|------|---------|
| terraform | >= 1.3.0 |
| ibm | >= 1.65.0, <2.0.0 |
| Name | Source | Version |
|------|--------|---------|
| cbr_rule | terraform-ibm-modules/cbr/ibm//modules/cbr-rule-module | 1.27.0 |
| Name | Type |
|------|------|
| ibm_event_streams_schema.es_schema | resource |
| ibm_event_streams_topic.es_topic | resource |
| ibm_resource_instance.es_instance | resource |
| Name | Description | Type | Default | Required |
|------|-------------|------|---------|:--------:|
| cbr_rules | The list of context-based restriction rules to create. | `list(object({...}))` | `[]` | no |
| create_timeout | The timeout value for creating an Event Streams instance. Specify `3h` for an Enterprise plan instance. Add 1 hour for each level of non-default throughput. Add 30 minutes for each level of non-default storage size. | `string` | `"3h"` | no |
| delete_timeout | The timeout value for deleting an Event Streams instance. | `string` | `"15m"` | no |
| es_name | The name to give the Event Streams instance created by this module. | `string` | n/a | yes |
| kms_key_crn | The root key CRN of the key management service (Key Protect or Hyper Protect Crypto Services) to use to encrypt the payload data. Learn more about integrating Hyper Protect Crypto Services with Event Streams. Configure an authorization policy to allow the Event Streams service to access the key management service instance with the reader role (Learn more). You can't manage the policy in the same Terraform state file as the Event Streams service instance (Learn more). | `string` | `null` | no |
| plan | The plan for the Event Streams instance. Possible values: `lite`, `standard`, `enterprise-3nodes-2tb`. | `string` | `"standard"` | no |
| region | The region where the Event Streams instance is created. | `string` | `"us-south"` | no |
| resource_group_id | The resource group ID where the Event Streams instance is created. | `string` | n/a | yes |
| schemas | The list of schema objects. Include the `schema_id` and the `type` and `name` of the schema in the `schema` object. | `list(object({...}))` | `[]` | no |
| service_endpoints | The type of service endpoints. Possible values: `public`, `private`, `public-and-private`. | `string` | `"public"` | no |
| storage_size | Storage size of the Event Streams instance in GB. Applies only to Enterprise plan instances. Possible values: `2048`, `4096`, `6144`, `8192`, `10240`, `12288`. Storage capacity cannot be reduced after the instance is created. When the `throughput` input variable is set to `300`, storage size starts at `4096`. When `throughput` is `450`, storage size starts at `6144`. | `number` | `2048` | no |
| tags | The list of tags associated with the Event Streams instance. | `list(string)` | `[]` | no |
| throughput | Throughput capacity in MB per second. Applies only to Enterprise plan instances. Possible values: `150`, `300`, `450`. | `number` | `150` | no |
| topics | The list of topics to apply to resources. Only one topic is allowed for Lite plan instances. | `list(object({...}))` | `[]` | no |
| update_timeout | The timeout value for updating an Event Streams instance. Specify `1h` for an Enterprise plan instance. Add 1 hour for each level of non-default throughput. Add 30 minutes for each level of non-default storage size. | `string` | `"1h"` | no |
| Name | Description |
|------|-------------|
| crn | Event Streams CRN |
| guid | Event Streams GUID |
| id | Event Streams instance ID |
| kafka_broker_version | The Kafka version |
| kafka_brokers_sasl | (Array of strings) Kafka brokers to use for interacting with the Kafka native API |
| kafka_http_url | The API endpoint for interacting with the Event Streams REST API |
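As a small sketch of consuming these outputs (assuming a module block named `event_streams`, as in the example above), you might re-export the broker list and REST endpoint from your root module:

```hcl
# Illustrative root-module outputs that surface values from the module.
output "event_streams_crn" {
  description = "CRN of the Event Streams instance"
  value       = module.event_streams.crn
}

output "kafka_brokers_sasl" {
  description = "Kafka brokers for the Kafka native API"
  value       = module.event_streams.kafka_brokers_sasl
}

output "kafka_http_url" {
  description = "Endpoint for the Event Streams REST API"
  value       = module.event_streams.kafka_http_url
}
```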
To report an issue or request a feature for this module, open a GitHub issue in the module repo. See Report an issue or request a feature.
To set up your local development environment, see Local development setup in the project documentation.