EDSF-290 Create full DSF installation example
lindanasredin committed Jul 6, 2023
1 parent af2de8d commit 0d7351d
Showing 17 changed files with 1,270 additions and 72 deletions.
162 changes: 162 additions & 0 deletions examples/installation/dsf_single_account_deployment/README.md
@@ -0,0 +1,162 @@
# DSF Single Account Deployment example
[![GitHub tag](https://img.shields.io/github/v/tag/imperva/dsfkit.svg)](https://github.com/imperva/dsfkit/tags)

This example provides a full DSF (Data Security Fabric) deployment with the DSF Hub, Agentless Gateways, DAM (Database Activity Monitoring), and DRA (Data Risk Analytics),
deployed in a single AWS account across two regions.

This deployment consists of:

1. Primary and secondary DSF Hub in region X
2. Primary and secondary Agentless Gateways in region Y
3. DAM MX in region X
4. DAM Agent Gateway in region Y
5. DRA Admin in region X
6. DRA Analytics server in region Y
7. DSF Hub HADR setup
8. Agentless Gateway HADR setup
9. Federation of both the primary and secondary DSF Hubs with all primary and secondary Agentless Gateways
10. Integration from MX to DSF Hub (Audit from Agent source and Security Issues)

This example is intended for Professional Services and for customers who want to bring their own networking, security groups, etc.<br/>
The following inputs are mandatory for this example:
1. The AWS profile of the DSF nodes' AWS account
2. The AWS regions of the DSF nodes
3. The subnets in which to deploy the DSF nodes; the nodes can share a subnet or use different ones

It is not mandatory to provide the security group IDs of the DSF nodes, but if you do provide them, add the relevant CIDRs and ports to those security groups before running the deployment.<br/>
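For example, here is a minimal sketch of adding one such ingress rule to an existing security group with Terraform (you may equally use the AWS console or CLI). The port, CIDR, and security group ID below are placeholders, not values from this example; the actual ports depend on the DSF components behind each security group.

```tf
# Hypothetical sketch - port, CIDR and security group ID are placeholders.
resource "aws_security_group_rule" "hub_extra_ingress" {
  type              = "ingress"
  from_port         = 8443
  to_port           = 8443
  protocol          = "tcp"
  cidr_blocks       = ["10.0.0.0/24"]
  security_group_id = "sg-xxxxxxxxxxxxxxxx11"
}
```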


## Modularity
The deployment is modular and allows users to deploy one or more of the following modules:

1. Sonar
- DSF Hub
- DSF Hub secondary HADR (High Availability Disaster Recovery) node
- Agentless Gateways
- Agentless Gateways secondary HADR (High Availability Disaster Recovery) nodes
2. DAM
- MX
- Agent Gateways
3. DRA
- Admin server
- Analytic servers

### Deploying Specific Modules

To deploy specific modules, you can customize the deployment by setting the corresponding variables in your Terraform configuration. Here are the instructions to deploy the following specific modules:

#### 1. DAM Only Deployment

To deploy only the DAM module, set the following variables in your Terraform configuration:
```
enable_dam = true
enable_dsf_hub = false
enable_dra = false
```

This configuration will enable the DAM module while disabling the DSF Hub and DRA modules.

#### 2. DRA Only Deployment

To deploy only the DRA module, set the following variables in your Terraform configuration:
```
enable_dam = false
enable_dsf_hub = false
enable_dra = true
```

This configuration will enable the DRA module while disabling the DSF Hub and DAM modules.

#### 3. Sonar Only Deployment

To deploy only the Sonar module, set the following variables in your Terraform configuration:
```
enable_dam = false
enable_dsf_hub = true
enable_dra = false
```

This configuration will enable the Sonar module, including the DSF Hub, while disabling the DAM and DRA modules.

Feel free to customize your deployment by setting the appropriate variables based on your requirements.


## Variables
Several variables in the `variables.tf` file are important for configuring the deployment. The following variables determine what is deployed and warrant particular attention:

### Products
- `enable_dsf_hub`: Enable DSF Hub module
- `enable_dam`: Enable DAM module
- `enable_dra`: Enable DRA module

### Server Count
- `dra_analytics_server_count`: Number of DRA analytic servers
- `agentless_gw_count`: Number of Agentless Gateways
- `agent_gw_count`: Number of Agent Gateways

### High Availability (HADR)
- `hub_hadr`: Enable DSF Hub High Availability Disaster Recovery (HADR)
- `agentless_gw_hadr`: Enable Agentless Gateway High Availability Disaster Recovery (HADR)

### Networking
- `subnet_ids`: IDs of the subnets for the deployment
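
For example, a sketch of how these variables might be set together in a `terraform.tfvars` file (the values are illustrative only; networking and security group settings are shown in the Customizing Variables section below):

```tf
# Illustrative values only - adjust to your requirements
enable_dsf_hub = true
enable_dam     = true
enable_dra     = true

agentless_gw_count         = 2
agent_gw_count             = 2
dra_analytics_server_count = 1

hub_hadr          = true
agentless_gw_hadr = true
```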


For a full list of this example's customization options which don't require code changes, refer to the [variables.tf](./variables.tf) file.

### Customizing Variables

There are various ways to customize variables in Terraform. In this example, it is recommended to create a `terraform.tfvars`
file in the example's directory and add the customized variables to it.

For example:

```tf
aws_profile = "myProfile"
aws_region_x = "us-east-1"
aws_region_y = "us-east-2"
subnet_ids= {
hub_primary_subnet_id = "subnet-xxxxxxxxxxxxxxxx1"
hub_secondary_subnet_id = "subnet-xxxxxxxxxxxxxxxx2"
agentless_gw_primary_subnet_id = "subnet-xxxxxxxxxxxxxxxx3"
agentless_gw_secondary_subnet_id = "subnet-xxxxxxxxxxxxxxxx4"
mx_subnet_id = "subnet-xxxxxxxxxxxxxxxx5"
agent_gw_subnet_id = "subnet-xxxxxxxxxxxxxxxx6"
dra_admin_subnet_id = "subnet-xxxxxxxxxxxxxxxx7"
dra_analytics_subnet_id = "subnet-xxxxxxxxxxxxxxxx8"
}
security_group_ids_hub = ["sg-xxxxxxxxxxxxxxxx11", "sg-xxxxxxxxxxxxxxxx12"]
security_group_ids_agentless_gw = ["sg-xxxxxxxxxxxxxxxx21", "sg-xxxxxxxxxxxxxxxx22"]
security_group_ids_mx = ["sg-xxxxxxxxxxxxxxxx31", "sg-xxxxxxxxxxxxxxxx32"]
security_group_ids_agent_gw = ["sg-xxxxxxxxxxxxxxxx41", "sg-xxxxxxxxxxxxxxxx42"]
security_group_ids_dra_admin = ["sg-xxxxxxxxxxxxxxxx51", "sg-xxxxxxxxxxxxxxxx52"]
security_group_ids_dra_analytics = ["sg-xxxxxxxxxxxxxxxx61", "sg-xxxxxxxxxxxxxxxx62"]
tarball_location = {
s3_bucket = "bucket_name"
s3_region = "us-east-1"
s3_key = "tarball_name"
}
workstation_cidr = ["10.0.0.0/24"]
```

Then run the deployment as usual with the following command:
```bash
terraform apply
```

## Storing the Terraform State in an S3 Bucket

To store the Terraform state in an S3 bucket instead of locally, uncomment the contents of the [backend.tf](./backend.tf) file and fill in the necessary details.
Make sure that the user running the deployment has read and write access to this bucket. You can find the required permissions [here](https://developer.hashicorp.com/terraform/language/settings/backends/s3#s3-bucket-permissions).
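
For reference, an uncommented backend configuration might look like the following sketch; the bucket, key, and profile values are placeholders to replace with your own:

```tf
terraform {
  backend "s3" {
    bucket  = "myBucket"          # placeholder - your state bucket
    key     = "terraform.tfstate"
    region  = "us-east-1"
    profile = "myProfile"         # placeholder - your AWS profile
  }
}
```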

## Deploying DSF Nodes without Outbound Internet Access

Follow these steps to deploy a DSF node (Hub, Agentless Gateway, MX, Agent Gateway, DRA Admin or DRA Analytics server) in an environment without outbound internet access.
1. Provide a custom AMI with the following dependencies: AWS CLI, unzip, lvm2 and jq.
You can create a custom AMI with these dependencies installed by launching an Amazon EC2 instance, installing the dependencies, and creating an AMI from the instance.
You can then use this custom AMI when launching the DSF Hub and/or Agentless Gateway instances.
2. Update the _ami_ variable in your Terraform example with the details of the custom AMI you created.
3. Create an S3 VPC endpoint to allow the instances to access S3 without going over the internet. You can create an S3 VPC endpoint using the Amazon VPC console, AWS CLI, or an AWS SDK.
4. Create a Secrets Manager VPC endpoint to allow the instances to access Secrets Manager without going over the internet. You can create a Secrets Manager VPC endpoint using the Amazon VPC console, AWS CLI, or an AWS SDK (see the sketch below).
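
As an illustration of steps 3 and 4, here is a minimal Terraform sketch of the two VPC endpoints. The VPC, route table, subnet, and security group IDs, as well as the region in the service names, are placeholder assumptions and are not inputs of this example:

```tf
# Hypothetical sketch - all IDs and the region are placeholders.
resource "aws_vpc_endpoint" "s3" {
  vpc_id            = "vpc-xxxxxxxxxxxxxxxx"
  service_name      = "com.amazonaws.us-east-1.s3"
  vpc_endpoint_type = "Gateway"
  route_table_ids   = ["rtb-xxxxxxxxxxxxxxxx"]
}

resource "aws_vpc_endpoint" "secretsmanager" {
  vpc_id              = "vpc-xxxxxxxxxxxxxxxx"
  service_name        = "com.amazonaws.us-east-1.secretsmanager"
  vpc_endpoint_type   = "Interface"
  subnet_ids          = ["subnet-xxxxxxxxxxxxxxxx1"]
  security_group_ids  = ["sg-xxxxxxxxxxxxxxxx11"]
  private_dns_enabled = true
}
```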
@@ -0,0 +1,31 @@
locals {
  # TODO why is creating a db_with_agent conditioned by the creation of a cluster?
  # change to local.agent_gw_count
  db_types_for_agent = local.create_agent_gw_cluster > 0 ? var.simulation_db_types_for_agent : []
}

module "db_with_agent" {
  source  = "imperva/dsf-db-with-agent/aws"
  version = "1.5.0" # latest release tag
  count   = length(local.db_types_for_agent)

  friendly_name = join("-", [local.deployment_name_salted, "db", "with", "agent", count.index])

  os_type = var.agent_source_os
  db_type = local.db_types_for_agent[count.index]

  subnet_id         = local.agent_gw_subnet_id
  key_pair          = module.key_pair.key_pair.key_pair_name
  allowed_ssh_cidrs = [format("%s/32", module.mx[0].private_ip)]

  registration_params = {
    agent_gateway_host = module.agent_gw[0].private_ip
    secure_password    = local.password
    server_group       = module.mx[0].configuration.default_server_group
    site               = module.mx[0].configuration.default_site
  }
  tags = local.tags
  depends_on = [
    module.agent_gw_cluster_setup
  ]
}
@@ -0,0 +1,61 @@
locals {
  db_types_for_agentless = local.agentless_gw_count > 0 ? var.simulation_db_types_for_agentless : []
}

module "rds_mysql" {
  source  = "imperva/dsf-poc-db-onboarder/aws//modules/rds-mysql-db"
  version = "1.5.0" # latest release tag
  count   = contains(local.db_types_for_agentless, "RDS MySQL") ? 1 : 0

  rds_subnet_ids               = local.db_subnet_ids
  security_group_ingress_cidrs = local.workstation_cidr
  tags                         = local.tags
}

module "rds_mssql" {
  source  = "imperva/dsf-poc-db-onboarder/aws//modules/rds-mssql-db"
  version = "1.5.0" # latest release tag
  count   = contains(local.db_types_for_agentless, "RDS MsSQL") ? 1 : 0

  rds_subnet_ids               = local.db_subnet_ids
  security_group_ingress_cidrs = local.workstation_cidr

  tags = local.tags
  providers = {
    aws                       = aws,
    aws.poc_scripts_s3_region = aws.poc_scripts_s3_region
  }
}

module "db_onboarding" {
  source   = "imperva/dsf-poc-db-onboarder/aws"
  version  = "1.5.0" # latest release tag
  for_each = { for idx, val in concat(module.rds_mysql, module.rds_mssql) : idx => val }

  sonar_version    = module.globals.tarball_location.version
  usc_access_token = module.hub[0].access_tokens.usc.token
  hub_info = {
    hub_ip_address           = module.hub[0].public_ip
    hub_private_ssh_key_path = module.key_pair.private_key_file_path
    hub_ssh_user             = module.hub[0].ssh_user
  }

  assignee_gw   = module.agentless_gw[0].jsonar_uid
  assignee_role = module.agentless_gw[0].iam_role
  database_details = {
    db_username   = each.value.db_username
    db_password   = each.value.db_password
    db_arn        = each.value.db_arn
    db_port       = each.value.db_port
    db_identifier = each.value.db_identifier
    db_address    = each.value.db_address
    db_engine     = each.value.db_engine
    db_name       = try(each.value.db_name, null)
  }
  tags = local.tags
  depends_on = [
    module.federation,
    module.rds_mysql,
    module.rds_mssql
  ]
}
@@ -0,0 +1,9 @@
#terraform {
#  backend "s3" {
#    # Fill in your bucket details
#    bucket  = "myBucket"
#    key     = "terraform.tfstate"
#    region  = "us-east-1"
#    profile = "myProfile"
#  }
#}
83 changes: 83 additions & 0 deletions examples/installation/dsf_single_account_deployment/dam.tf
@@ -0,0 +1,83 @@
locals {
  agent_gw_count          = var.enable_dam ? var.agent_gw_count : 0
  gateway_group_name      = "temporaryGatewayGroup"
  create_agent_gw_cluster = local.agent_gw_count >= 2 ? 1 : 0

  agent_gw_cidr_list = [data.aws_subnet.agent_gw.cidr_block]
}

module "mx" {
  source  = "imperva/dsf-mx/aws"
  version = "1.5.0" # latest release tag
  count   = var.enable_dam ? 1 : 0

  friendly_name   = join("-", [local.deployment_name_salted, "mx"])
  dam_version     = var.dam_version
  subnet_id       = local.mx_subnet_id
  license_file    = var.license_file
  key_pair        = module.key_pair.key_pair.key_pair_name
  secure_password = local.password
  mx_password     = local.password