
Unable to break the replication source and promote the cross region aws rds cluster from replica to standalone #1770

Closed
dev-usa opened this issue Sep 28, 2017 · 4 comments
Labels
bug Addresses a defect in current functionality. service/rds Issues and PRs that pertain to the rds service.

Comments

@dev-usa

dev-usa commented Sep 28, 2017

Hi there,

I am able to successfully create an RDS cluster in Region 1 using the following script -

Terraform Version

Terraform v0.10.6

Affected Resource(s)

  • aws_rds_cluster

Terraform Configuration Files

    resource "aws_rds_cluster" "aurora_cluster" {
    
        cluster_identifier              = "${var.environment_name}-aurora-cluster"
        database_name                   = "mydb"
        master_username                 = "${var.rds_master_username}"
        master_password                 = "${var.rds_master_password}"
        backup_retention_period         = 14
        final_snapshot_identifier       = "${var.environment_name}AuroraCluster"
    
        apply_immediately               = true
        db_cluster_parameter_group_name = "${aws_rds_cluster_parameter_group.default.name}"
    
        tags {
            Name         = "${var.environment_name}-Aurora-DB-Cluster"
            ManagedBy    = "terraform"
            Environment  = "${var.environment_name}"
        }
    
        lifecycle {
            create_before_destroy = true
        }
    }
    
    resource "aws_rds_cluster_instance" "aurora_cluster_instance" {
    
        count                 = "${length(split(",", var.multi_azs))}"
    
        identifier            = "${var.environment_name}-aurora-instance-${count.index}"
        cluster_identifier    = "${aws_rds_cluster.aurora_cluster.id}"
        instance_class        = "db.t2.small"
        publicly_accessible   = true
        apply_immediately     = true
    
        tags {
            Name         = "${var.environment_name}-Aurora-DB-Instance-${count.index}"
            ManagedBy    = "terraform"
            Environment  = "${var.environment_name}"
        }
    
        lifecycle {
            create_before_destroy = true
        }
    }
    
    output "db_primary_cluster_arn" {
     rds_cluster.aurora_cluster.cluster_identifier}"
      value = "${"${format("arn:aws:rds:%s:%s:cluster:%s", "${var.db_region}", "${data.aws_caller_identity.current.account_id}", "${aws_rds_cluster.aurora_cluster.cluster_identifier}")}"}"
    }

and create a Cross Region Replica in Region 2 using the below -

    resource "aws_rds_cluster" "aurora_crr_cluster" {
    
        cluster_identifier            = "${var.environment_name}-aurora-crr-cluster"
        database_name                 = "mydb"
        master_username               = "${var.rds_master_username}"
        master_password               = "${var.rds_master_password}"
        backup_retention_period       = 14
        final_snapshot_identifier     = "${var.environment_name}AuroraCRRCluster"
        apply_immediately             = true

        # Reference to the primary region's cluster
        replication_source_identifier = "${var.db_primary_cluster_arn}"
    
        tags {
            Name         = "${var.environment_name}-Aurora-DB-CRR-Cluster"
            ManagedBy    = "terraform"
            Environment  = "${var.environment_name}"
        }
    
        lifecycle {
            create_before_destroy = true
        }
    
    }
    
    resource "aws_rds_cluster_instance" "aurora_crr_cluster_instance" {
    
        count                 = "${length(split(",", var.multi_azs))}"
    
        identifier            = "${var.environment_name}-aurora-crr-instance-${count.index}"
        cluster_identifier    = "${aws_rds_cluster.aurora_crr_cluster.id}"
        instance_class        = "db.t2.small"
        publicly_accessible   = true
        apply_immediately     = true
    
        tags {
            Name         = "${var.environment_name}-Aurora-DB-Instance-${count.index}"
            ManagedBy    = "terraform"
            Environment  = "${var.environment_name}"
        }
    
        lifecycle {
            create_before_destroy = true
        }
    
    }

by running the below command -

terraform apply -var "aws_profile=my-sandbox-profile" -var "source_db_cluster_arn=arn:aws:rds:us-east-2:account-id:cluster:dev-aurora-cluster"

All works well so far and the Cross Region Replica cluster is created as expected.

However, when I want to promote the Cross Region Replica created in Region 2 to a Standalone cluster, I run -

terraform apply -var "aws_profile=my-sandbox-profile"

which basically unsets the value of "replication_source_identifier" on the "aws_rds_cluster" resource.
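
For clarity, roughly how this is wired (a simplified sketch; the variable name is taken from the apply commands in this issue, so the real module may differ slightly):

    # Sketch of the assumed wiring: the source ARN comes from a variable that
    # defaults to an empty string, so running apply without the -var flag
    # writes "" to replication_source_identifier.
    variable "source_db_cluster_arn" {
        default = ""
    }

    resource "aws_rds_cluster" "aurora_crr_cluster" {
        # ... other arguments as shown above ...
        replication_source_identifier = "${var.source_db_cluster_arn}"
    }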

I see that the output from Terraform says -

module.db_replica.aws_rds_cluster.aurora_crr_cluster: Modifying... (ID: dev-aurora-crr-cluster)
  replication_source_identifier: "arn:aws:rds:us-east-2:account_nbr:cluster:dev-aurora-cluster" => ""
module.db_replica.aws_rds_cluster.aurora_crr_cluster: Modifications complete after 1s (ID: dev-aurora-crr-cluster)

Apply complete! Resources: 0 added, 1 changed, 0 destroyed.

But I see NO CHANGE happening to the cross region cluster in the AWS console. The replication source is still present and unchanged, and the cross region cluster is NOT promoted to a standalone cluster in AWS.

If I try to do the same thing via the AWS CLI -

aws rds promote-read-replica-db-cluster --db-cluster-identifier="dev-aurora-crr-cluster" --region="us-west-1"

I see that the change is triggered immediately and the Cross Region Replica is promoted to a standalone cluster. Does anyone know where I may be going wrong, or does Terraform not support promoting cross region replicas to standalone clusters? Please advise.
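
As a rough sketch of one possible interim workaround (not a documented provider feature, and the resource name here is made up), the CLI call above could be wrapped in a null_resource with a local-exec provisioner:

    # Hypothetical workaround sketch: shell out to the AWS CLI once the replica
    # cluster exists, instead of relying on the provider to perform the promotion.
    resource "null_resource" "promote_crr_cluster" {
        provisioner "local-exec" {
            command = "aws rds promote-read-replica-db-cluster --db-cluster-identifier=${aws_rds_cluster.aurora_crr_cluster.cluster_identifier} --region=us-west-1"
        }
    }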

StackOverflow link - https://stackoverflow.com/q/46473349/829542

Debug Output

GitHub Gist containing the complete debug output -
https://gist.github.com/dev-usa/fc0d66b21952aa80760f0ffb75cb3efb

Expected Behavior

The Cross Region RDS Aurora Replica should have been promoted to a standalone cluster

Actual Behavior

No change to the infrastructure although the change is acknowledged

Steps to Reproduce

  1. terraform apply
@dev-usa dev-usa changed the title Unable to break the replication source for a cross region aws rds cluster Unable to break the replication source and promote the cross region aws rds cluster from replica to standalone Sep 28, 2017
@Ninir Ninir added the bug Addresses a defect in current functionality. label Oct 11, 2017
@jude-pieries

@dev-usa , was your cross region replica encrypted?
If yes, were both the master and replica clusters created in a single Terraform module? How did you manage the multiple providers?

Below is the TF I used when attempting to create the read replica -
resource "aws_rds_cluster" "aurora_mysql_rr" {
provider = "aws.dr"
count = "${1 - var.cluster_rr_count}"
cluster_identifier = "${var.cluster_identifier}-rr"
database_name = "${var.dbname}"
master_username = "${var.username}"
master_password = "XXXXXXXX"
backup_retention_period = 14
final_snapshot_identifier = "${var.cluster_identifier}-rr-snapshot"
apply_immediately = true

kms_key_id             = "${lookup(var.server-side-kms,var.aws-accountdr)}"
vpc_security_group_ids = ["${aws_security_group.db_security_group_rr.id}"]
db_subnet_group_name 	= "${aws_db_subnet_group.db_subnet_group_rr.name}"
db_cluster_parameter_group_name ="${var.cluster_parameter_group}"

# Referencing to the primary region's cluster
replication_source_identifier =  "${local.aws_primary_cluster_arn}"

tags {
	Name         = "${var.cluster_identifier}-rr"
	ManagedBy    = "terraform"
	Environment  = "${var.aws-accountdr}"
}

lifecycle {
	create_before_destroy = false
}

}

I get the following error -
aws_rds_cluster.aurora_mysql_rr: InvalidParameterCombination: Source cluster arn:aws:rds:us-west-2:XXXXXXXXX:cluster:mysql-payable-rr is encrypted; pre-signed URL has to be specified
[14:57:34] status code: 400, request id: cec9041e-b3a7-11e7-a121-1b87426fec06

@dev-usa
Author

dev-usa commented Oct 24, 2017

@jude-pieries

Sorry for the delayed response. Here are my thoughts.

Was your cross region replica encrypted? NO and YES. See below.

NO. Initially I did not have the databases encrypted. At that time, I created separate Terraform scripts for the primary and then spun off the cross region replica; both used unencrypted storage. If you want to achieve the same with a single script you could use provider aliasing, but in my opinion that does not result in a very maintainable script. Hence, separate scripts for the primary and the cross region replica.
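
(A rough sketch of that provider aliasing approach, with region values taken from the ARNs and CLI commands earlier in this thread; the alias name is arbitrary:)

    provider "aws" {
        region = "us-east-2"    # primary region
    }

    provider "aws" {
        alias  = "replica"
        region = "us-west-1"    # cross region replica
    }

    resource "aws_rds_cluster" "aurora_crr_cluster" {
        provider                      = "aws.replica"
        # ... other arguments as in the configs above ...
        replication_source_identifier = "${var.db_primary_cluster_arn}"
    }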

YES. Later I realized that I had missed encryption and started applying KMS keys. I found that it is not possible to create encrypted cross region replicas using Terraform scripts. Hence, my primary database gets created using Terraform as encrypted storage, and the cross region replica is created using the shell / AWS CLI script below.

Thanks.

#!/bin/bash

# Script from
# https://github.com/terraform-providers/terraform-provider-aws/issues/630
#
# Execution Sample - sh CreateEncryptedReplica.sh "dev-aurora-cluster" "us-west-1" "us-west-2" "us-west-2a us-west-2b us-west-2c" "target_region_kms_key"

db_identifier=$1
source_region=$2
backup_region=$3
backup_region_azs=$4
backup_kms_key=$5
source_db_identifier=$6

account_id=$(aws sts get-caller-identity --output text --query 'Account') || account_id=$(aws sts get-caller-identity --output text --query 'Account')

echo "Checking to see if cluster exists"
cluster=`aws rds describe-db-clusters --region ${backup_region} --db-cluster-identifier ${db_identifier} || echo 'UNDEFINED'`

if [ "${cluster}" == "UNDEFINED" ]; then

  echo "Creating replica cluster"
  # create the replica cluster if it does not already exist
  aws rds create-db-cluster \
	  --region ${backup_region} \
	  --db-cluster-identifier ${db_identifier} \
	  --replication-source-identifier arn:aws:rds:${source_region}:${account_id}:cluster:${source_db_identifier} \
	  --kms-key-id ${backup_kms_key} \
	  --storage-encrypted \
	  --source-region ${source_region} \
    --availability-zones ${backup_region_azs} \
    --engine aurora

    # WORKING COMMAND FOR CREATING THE CLUSTER
    # aws rds create-db-cluster   --region ${backup_region}   --db-cluster-identifier ${db_identifier}   --replication-source-identifier arn:aws:rds:${source_region}:${account_id}:cluster:${source_db_identifier}   --kms-key-id ${backup_kms_key}   --storage-encrypted   --source-region ${source_region}     --availability-zones ${backup_region_azs}  --engine aurora

fi

echo "Waiting for replica cluster to become available"
cluster_status=`aws rds describe-db-clusters --region ${backup_region} --db-cluster-identifier ${db_identifier} --query 'DBClusters[*].Status' | grep \" | sed 's/.*"\(.*\)".*/\1/g'`
count=0
while [ "${cluster_status}" != "available" ]
do
	echo "Cluster Status: ${cluster_status}"
	echo "sleeping for 1 minute..."
	sleep 60
	cluster_status=`aws rds describe-db-clusters --region ${backup_region} --db-cluster-identifier ${db_identifier} --query 'DBClusters[*].Status' | grep \" | sed 's/.*"\(.*\)".*/\1/g'`
	# wait at most 4 hours for cluster to be available
	((count++)) && ((count>=24)) && break
done

if [ "${cluster_status}" != "available" ]; then
  echo "Replica cluster never became available"
  exit 1
fi

@radeksimko radeksimko added the service/rds Issues and PRs that pertain to the rds service. label Jan 28, 2018
@nywilken
Contributor

Hi @dev-usa, thanks for opening this issue. In order to help us best track it, I am going to close it and link to it from the related issue #6749. Any additional comments or feedback should be added to the referenced issue.

@ghost

ghost commented Mar 30, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. Thanks!

@ghost ghost locked and limited conversation to collaborators Mar 30, 2020