Merge pull request #74 from delphix-integrations/develop
HUBS-2153 | Release Version 3.1.0 | Develop -> Main
ankit-patil-hubs authored Nov 16, 2023
2 parents 0196c20 + 6e8b119 commit 0f6c290
Showing 14 changed files with 1,250 additions and 42 deletions.
2 changes: 1 addition & 1 deletion .goreleaser.yml
@@ -1,7 +1,7 @@
# Visit https://goreleaser.com for documentation on how to customize this
# behavior.
env:
- PROVIDER_VERSION=3.0.0
- PROVIDER_VERSION=3.1.0
before:
hooks:
# this is just an example and not a requirement for provider building/publishing
2 changes: 1 addition & 1 deletion GNUmakefile
@@ -3,7 +3,7 @@ HOSTNAME=delphix.com
NAMESPACE=dct
NAME=delphix
BINARY=terraform-provider-${NAME}
VERSION=3.0.0
VERSION=3.1.0
OS_ARCH=darwin_amd64

default: install
4 changes: 2 additions & 2 deletions docs/resources/appdata_dsource.md
@@ -3,12 +3,12 @@
In Delphix terminology, a dSource is a database that the Delphix Continuous Data Engine uses to create and update virtual copies of your database.
A dSource is created and managed by the Delphix Continuous Data Engine.

The dSource resource allows Terraform to create and delete Delphix dSources. This specifically enables the apply and destroy Terraform commands. Modification of existing dSource resources via the apply command is not supported. All supported parameters are listed below.
The Appdata dSource resource allows Terraform to create and delete AppData dSources. This specifically enables the apply and destroy Terraform commands. Modification of existing dSource resources via the apply command is not supported. All supported parameters are listed below.

## System Requirements

* Data Control Tower v10.0.1+ is required for dSource management. Lower versions are not supported.
* The dSource Resource does not support Oracle, SQL Server, or SAP ASE. The below examples are shown from the PostgreSQL context. The parameters values can be updated for other connectors (i.e. AppData), such as SAP HANA, IBM Db2, etc.
* This AppData dSource resource only supports AppData-based data sources, such as PostgreSQL, SAP HANA, IBM Db2, etc. The examples below are shown in the PostgreSQL context. See the Oracle dSource resource for Oracle support. The Delphix Provider does not support SQL Server or SAP ASE.

## Example Usage

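A minimal sketch of the linking pattern is shown below. The attributes used are assumed to be the common linking arguments shared with the Oracle dSource resource, and all values are placeholders; consult the full argument reference for connector-specific parameters.

```hcl
# Hypothetical minimal PostgreSQL dSource link; names and ids are placeholders.
resource "delphix_appdata_dsource" "example_postgres_dsource" {
  source_value               = "postgres-source" # Id or name of the source to link (placeholder)
  group_id                   = "1-GROUP-1"       # Dataset group for the dSource (placeholder)
  log_sync_enabled           = false
  make_current_account_owner = true
}
```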
197 changes: 197 additions & 0 deletions docs/resources/oracle_dsource.md
@@ -0,0 +1,197 @@
# Resource: delphix_oracle_dsource

In Delphix terminology, a dSource is a database that the Delphix Continuous Data Engine uses to create and update virtual copies of your database.
A dSource is created and managed by the Delphix Continuous Data Engine.

The Oracle dSource resource allows Terraform to create and delete Oracle dSources. This specifically enables the apply and destroy Terraform commands. Modification of existing dSource resources via the apply command is not supported. All supported parameters are listed below.

## System Requirements

* Data Control Tower v10.0.1+ is required for dSource management. Lower versions are not supported.
* This Oracle dSource Resource only supports Oracle. See the AppData dSource Resource for the support of other connectors (i.e. AppData), such as PostgreSQL and SAP HANA. The Delphix Provider does not support SQL Server or SAP ASE.

## Example Usage

* A dSource can be linked via direct ingestion, as shown in the example below.

```hcl
# Link Oracle dSource
resource "delphix_oracle_dsource" "test_oracle_dsource" {
name = "test2"
source_value = "DBOMSRB331B3"
group_id = "3-GROUP-1"
log_sync_enabled = false
make_current_account_owner = true
environment_user_id = "HOST_USER-1"
rman_channels = 2
files_per_set = 5
check_logical = false
encrypted_linking_enabled = false
compressed_linking_enabled = true
bandwidth_limit = 0
number_of_connections = 1
diagnose_no_logging_faults = true
pre_provisioning_enabled = false
link_now = true
force_full_backup = false
double_sync = false
skip_space_check = false
do_not_resume = false
files_for_full_backup = []
log_sync_mode = "UNDEFINED"
log_sync_interval = 5
}
```

## Argument Reference

* `source_value` - (Required) Id or Name of the source to link.

* `group_id` - (Required) Id of the dataset group to which this dSource should belong.

* `log_sync_enabled` - (Required) True if LogSync should run for this database.

* `make_current_account_owner` - (Required) Whether the account creating this dSource must be configured as its owner.

* `description` - (Optional) The notes/description for the dSource.

* `external_file_path` - (Optional) External file path.

* `environment_user_id` - (Optional) Id of the environment user to use for linking.

* `backup_level_enabled` - (Optional) Boolean value indicating whether LEVEL-based incremental backups can be used on the source database.

* `rman_channels` - (Optional) Number of parallel channels to use.

* `files_per_set` - (Optional) Number of data files to include in each RMAN backup set.

* `check_logical` - (Optional) True if extended block checking should be used for this linked database.

* `encrypted_linking_enabled` - (Optional) True if SnapSync data from the source should be retrieved through an encrypted connection. Enabling this feature can decrease the performance of SnapSync from the source but has no impact on the performance of VDBs created from the retrieved data.

* `compressed_linking_enabled` - (Optional) True if SnapSync data from the source should be compressed over the network. Enabling this feature will reduce network bandwidth consumption and may significantly improve throughput, especially over a slow network.

* `bandwidth_limit` - (Optional) Bandwidth limit (MB/s) for SnapSync and LogSync network traffic. A value of 0 means no limit.

* `number_of_connections` - (Optional) Total number of transport connections to use during SnapSync.

* `diagnose_no_logging_faults` - (Optional) If true, NOLOGGING operations on this container are treated as faults and cannot be resolved manually.

* `pre_provisioning_enabled` - (Optional) If true, pre-provisioning will be performed after every sync.

* `link_now` - (Optional) True if initial load should be done immediately.

* `force_full_backup` - (Optional) Whether or not to take another full backup of the source database.

* `double_sync` - (Optional) True if two SnapSyncs should be performed in immediate succession to reduce the number of logs required to provision the snapshot. This may significantly reduce the time necessary to provision from a snapshot.

* `skip_space_check` - (Optional) Skip check that tests if there is enough space available to store the database in the Delphix Engine. The Delphix Engine estimates how much space a database will occupy after compression and prevents SnapSync if insufficient space is available. This safeguard can be overridden using this option. This may be useful when linking highly compressible databases.

* `do_not_resume` - (Optional) Indicates whether a fresh SnapSync must be started regardless of whether it was possible to resume the current SnapSync. If true, previous progress is ignored and all datafiles are backed up again, even those already completed in a previous failed SnapSync. This does not force a full backup; if an incremental was in progress, this will start a new incremental snapshot.

* `files_for_full_backup` - (Optional) List of datafiles to take a full backup of. This would be useful in situations where certain datafiles could not be backed up during previous SnapSync due to corruption or because they went offline.

* `log_sync_mode` - (Optional) LogSync operation mode for this database [ ARCHIVE_ONLY_MODE, ARCHIVE_REDO_MODE, UNDEFINED ].

* `log_sync_interval` - (Optional) Interval between LogSync requests, in seconds.

* `non_sys_password` - (Optional) Password for non-SYS user authentication (Single tenant only).

* `non_sys_username` - (Optional) Non-SYS database user to access this database. Only required for username-password auth (Single tenant only).

* `non_sys_vault` - (Optional) The name or reference of the vault from which to read the database credentials (Single tenant only).

* `non_sys_hashicorp_vault_engine` - (Optional) Vault engine name where the credential is stored (Single tenant only).

* `non_sys_hashicorp_vault_secret_path` - (Optional) Path in the vault engine where the credential is stored (Single tenant only).

* `non_sys_hashicorp_vault_username_key` - (Optional) Hashicorp vault key for the username in the key-value store (Single tenant only).

* `non_sys_hashicorp_vault_secret_key` - (Optional) Hashicorp vault key for the password in the key-value store (Single tenant only).

* `non_sys_azure_vault_name` - (Optional) Azure key vault name (Single tenant only).

* `non_sys_azure_vault_username_key` - (Optional) Azure vault key for the username in the key-value store (Single tenant only).

* `non_sys_azure_vault_secret_key` - (Optional) Azure vault key for the password in the key-value store (Single tenant only).

* `non_sys_cyberark_vault_query_string` - (Optional) Query to find a credential in the CyberArk vault (Single tenant only).

* `fallback_username` - (Optional) The database fallback username. Optional if bequeath connections are enabled (to be used in case of bequeath connection failures). Only required for username-password auth.

* `fallback_password` - (Optional) Password for fallback username.

* `fallback_vault` - (Optional) The name or reference of the vault from which to read the database credentials.

* `fallback_hashicorp_vault_engine` - (Optional) Vault engine name where the credential is stored.

* `fallback_hashicorp_vault_secret_path` - (Optional) Path in the vault engine where the credential is stored.

* `fallback_hashicorp_vault_username_key` - (Optional) Hashicorp vault key for the username in the key-value store.

* `fallback_hashicorp_vault_secret_key` - (Optional) Hashicorp vault key for the password in the key-value store.

* `fallback_azure_vault_name` - (Optional) Azure key vault name.

* `fallback_azure_vault_username_key` - (Optional) Azure vault key for the username in the key-value store.

* `fallback_azure_vault_secret_key` - (Optional) Azure vault key for the password in the key-value store.

* `fallback_cyberark_vault_query_string` - (Optional) Query to find a credential in the CyberArk vault.

* `tags` - (Optional) Tags to be created for the dSource. Each tag is a block of two parameters:
* `key` - (Required) Key of the tag
* `value` - (Required) Value of the tag
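Assuming `tags` is a repeatable block (as suggested by the key/value parameters above), tagging a dSource might be sketched as follows; the tag keys and values are illustrative:

```hcl
resource "delphix_oracle_dsource" "tagged_dsource" {
  # Required arguments as in the example above.
  name                       = "test2"
  source_value               = "DBOMSRB331B3"
  group_id                   = "3-GROUP-1"
  log_sync_enabled           = false
  make_current_account_owner = true

  # Each tags block carries one key/value pair (illustrative values).
  tags {
    key   = "environment"
    value = "dev"
  }
  tags {
    key   = "owner"
    value = "data-team"
  }
}
```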

* `ops_pre_log_sync` - (Optional) Operations to perform after syncing a created dSource and before running the LogSync.
* `name` - Name of the hook
* `command` - Command to be executed
* `shell` - Type of shell. Valid values are `[bash, shell, expect, ps, psd]`
* `credentials_env_vars` - List of environment variables that will contain credentials for this operation
* `base_var_name` - Base name of the environment variables. Variables are named by appending '_USER', '_PASSWORD', '_PUBKEY' and '_PRIVKEY' to this base name, respectively. Variables whose values are not entered or are not present in the type of credential or vault selected will not be set.
* `password` - Password to assign to the environment variables.
* `vault` - The name or reference of the vault to assign to the environment variables.
* `hashicorp_vault_engine` - Vault engine name where the credential is stored.
* `hashicorp_vault_secret_path` - Path in the vault engine where the credential is stored.
* `hashicorp_vault_username_key` - Hashicorp vault key for the username in the key-value store.
* `hashicorp_vault_secret_key` - Hashicorp vault key for the password in the key-value store.
* `azure_vault_name` - Azure key vault name.
* `azure_vault_username_key` - Azure vault key for the username in the key-value store.
* `azure_vault_secret_key` - Azure vault key for the password in the key-value store.
* `cyberark_vault_query_string` - Query to find a credential in the CyberArk vault.

* `ops_pre_sync` - (Optional) Operations to perform before syncing the created dSource. These operations can quiesce any data prior to syncing.
* `name` - Name of the hook
* `command` - Command to be executed
* `shell` - Type of shell. Valid values are `[bash, shell, expect, ps, psd]`
* `credentials_env_vars` - List of environment variables that will contain credentials for this operation
* `base_var_name` - Base name of the environment variables. Variables are named by appending '_USER', '_PASSWORD', '_PUBKEY' and '_PRIVKEY' to this base name, respectively. Variables whose values are not entered or are not present in the type of credential or vault selected will not be set.
* `password` - Password to assign to the environment variables.
* `vault` - The name or reference of the vault to assign to the environment variables.
* `hashicorp_vault_engine` - Vault engine name where the credential is stored.
* `hashicorp_vault_secret_path` - Path in the vault engine where the credential is stored.
* `hashicorp_vault_username_key` - Hashicorp vault key for the username in the key-value store.
* `hashicorp_vault_secret_key` - Hashicorp vault key for the password in the key-value store.
* `azure_vault_name` - Azure key vault name.
* `azure_vault_username_key` - Azure vault key for the username in the key-value store.
* `azure_vault_secret_key` - Azure vault key for the password in the key-value store.
* `cyberark_vault_query_string` - Query to find a credential in the CyberArk vault.

* `ops_post_sync` - (Optional) Operations to perform after syncing a created dSource.
* `name` - Name of the hook
* `command` - Command to be executed
* `shell` - Type of shell. Valid values are `[bash, shell, expect, ps, psd]`
* `credentials_env_vars` - List of environment variables that will contain credentials for this operation
* `base_var_name` - Base name of the environment variables. Variables are named by appending '_USER', '_PASSWORD', '_PUBKEY' and '_PRIVKEY' to this base name, respectively. Variables whose values are not entered or are not present in the type of credential or vault selected will not be set.
* `password` - Password to assign to the environment variables.
* `vault` - The name or reference of the vault to assign to the environment variables.
* `hashicorp_vault_engine` - Vault engine name where the credential is stored.
* `hashicorp_vault_secret_path` - Path in the vault engine where the credential is stored.
* `hashicorp_vault_username_key` - Hashicorp vault key for the username in the key-value store.
* `hashicorp_vault_secret_key` - Hashicorp vault key for the password in the key-value store.
* `azure_vault_name` - Azure key vault name.
* `azure_vault_username_key` - Azure vault key for the username in the key-value store.
* `azure_vault_secret_key` - Azure vault key for the password in the key-value store.
* `cyberark_vault_query_string` - Query to find a credential in the CyberArk vault.
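All three hook blocks (`ops_pre_sync`, `ops_pre_log_sync`, `ops_post_sync`) share the same shape. As a sketch, a pre-sync hook that pulls its credentials from a HashiCorp vault instead of an inline password might look like the following; the hook name, script path, vault references, and keys are all placeholders:

```hcl
resource "delphix_oracle_dsource" "hooked_dsource" {
  # Required arguments as in the example above.
  name                       = "test2"
  source_value               = "DBOMSRB331B3"
  group_id                   = "3-GROUP-1"
  log_sync_enabled           = false
  make_current_account_owner = true

  ops_pre_sync {
    name    = "quiesce-app"             # hypothetical hook name
    command = "/opt/scripts/quiesce.sh" # hypothetical script
    shell   = "bash"
    credentials_env_vars {
      base_var_name                = "APP"         # expands to APP_USER / APP_PASSWORD
      vault                        = "my-vault"    # placeholder vault reference
      hashicorp_vault_engine       = "kv"          # placeholder engine name
      hashicorp_vault_secret_path  = "delphix/app" # placeholder secret path
      hashicorp_vault_username_key = "username"
      hashicorp_vault_secret_key   = "password"
    }
  }
}
```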

File renamed without changes.
75 changes: 75 additions & 0 deletions examples/oracle_dsource/main.tf
@@ -0,0 +1,75 @@
/**
* Summary: This template showcases the properties available when creating an Oracle dSource.
*/

terraform {
required_providers {
delphix = {
version = "VERSION"
source = "delphix-integrations/delphix"
}
}
}

provider "delphix" {
tls_insecure_skip = true
key = "1.XXXX"
host = "HOSTNAME"
}



resource "delphix_oracle_dsource" "test_oracle_dsource" {
name = "test2"
source_value = "DBOMSRB331B3"
group_id = "4-GROUP-1"
log_sync_enabled = false
make_current_account_owner = true
environment_user_id = "HOST_USER-1"
rman_channels = 2
files_per_set = 5
check_logical = false
encrypted_linking_enabled = false
compressed_linking_enabled = true
bandwidth_limit = 0
number_of_connections = 1
diagnose_no_logging_faults = true
pre_provisioning_enabled = false
link_now = true
force_full_backup = false
double_sync = false
skip_space_check = false
do_not_resume = false
files_for_full_backup = []
log_sync_mode = "UNDEFINED"
log_sync_interval = 5
ops_pre_sync {
name = "key-1"
command = "echo \"hello world\""
shell = "shell"
credentials_env_vars {
base_var_name = "XXXX"
password = "XXXX"
}
}
ops_post_sync {
name = "key-2"
command = "echo \"hello world\""
shell = "shell"
credentials_env_vars {
base_var_name = "XXXX"
password = "XXXX"
}
}
ops_pre_log_sync {
name = "key-2"
command = "echo \"hello world\""
shell = "shell"
credentials_env_vars {
base_var_name = "XXXX"
password = "XXXX"
}
}
}


2 changes: 1 addition & 1 deletion examples/simple-provision/versions.tf
@@ -2,7 +2,7 @@ terraform {
required_providers {
delphix = {
source = "delphix-integrations/delphix"
version = "3.0.0"
version = "3.1.0"
}
}
}
43 changes: 43 additions & 0 deletions examples/vdb_group/main.tf
@@ -0,0 +1,43 @@
/**
* Summary: This template showcases the properties available when creating a VDB Group from a set of VDBs.
*/

terraform {
required_providers {
delphix = {
version = "VERSION"
source = "delphix-integrations/delphix"
}
}
}

provider "delphix" {
tls_insecure_skip = true
key = "1.XXXX"
host = "HOSTNAME"
}


locals {
vdbs = {
"vdb4" = { snapshot_id = "6-ORACLE_DB_CONTAINER-21", name = "us4" },
"vdb5" = { snapshot_id = "6-ORACLE_DB_CONTAINER-23", name = "us5" },
"vdb1" = { snapshot_id = "6-ORACLE_DB_CONTAINER-7", name = "us1" },
"vdb2" = { snapshot_id = "6-ORACLE_DB_CONTAINER-1", name = "us2" },
"vdb3" = { snapshot_id = "6-ORACLE_DB_CONTAINER-5", name = "us3" }
}
}

resource "delphix_vdb" "example" {
for_each = try(local.vdbs, {})
name = each.value.name
source_data_id = each.value.snapshot_id
auto_select_repository = true

}

# sort helps to maintain the order of the VDB ids to avoid erroneous drift
resource "delphix_vdb_group" "this" {
name = "random"
vdb_ids = sort(flatten([for vdb in delphix_vdb.example : vdb.id]))
}
20 changes: 10 additions & 10 deletions go.mod
@@ -13,7 +13,7 @@ require (
github.com/apparentlymart/go-textseg/v13 v13.0.0 // indirect
github.com/davecgh/go-spew v1.1.1 // indirect
github.com/fatih/color v1.7.0 // indirect
github.com/google/go-cmp v0.5.8 // indirect
github.com/google/go-cmp v0.5.9 // indirect
github.com/hashicorp/errwrap v1.0.0 // indirect
github.com/hashicorp/go-checkpoint v0.5.0 // indirect
github.com/hashicorp/go-cleanhttp v0.5.2 // indirect
@@ -45,17 +45,17 @@ require (
github.com/vmihailenco/msgpack/v4 v4.3.12 // indirect
github.com/vmihailenco/tagparser v0.1.1 // indirect
github.com/zclconf/go-cty v1.10.0 // indirect
golang.org/x/crypto v0.1.0 // indirect
golang.org/x/sys v0.5.0 // indirect
golang.org/x/text v0.7.0 // indirect
google.golang.org/genproto v0.0.0-20200904004341-0bd0a958aa1d // indirect
google.golang.org/grpc v1.45.0 // indirect
golang.org/x/crypto v0.14.0 // indirect
golang.org/x/sys v0.13.0 // indirect
golang.org/x/text v0.13.0 // indirect
google.golang.org/genproto v0.0.0-20230410155749-daa745c078e1 // indirect
google.golang.org/grpc v1.56.3 // indirect
)

require (
github.com/golang/protobuf v1.5.2 // indirect
github.com/golang/protobuf v1.5.3 // indirect
github.com/hashicorp/terraform-plugin-sdk/v2 v2.12.0
golang.org/x/net v0.7.0 // indirect
google.golang.org/appengine v1.6.6 // indirect
google.golang.org/protobuf v1.27.1 // indirect
golang.org/x/net v0.17.0 // indirect
google.golang.org/appengine v1.6.7 // indirect
google.golang.org/protobuf v1.30.0 // indirect
)
