
Error: Get <some exchange> unsupported protocol scheme "" #15

Closed
porcospino opened this issue Jul 2, 2021 · 2 comments

Comments

@porcospino

I've written a mini-module that wraps this provider and the hashicorp/aws aws_mq_broker resource. I can stand up a broker and declare exchanges, queues, bindings, etc., but if I try to change the deployment_mode or host_instance_type, I get an error when planning:

│ Error: Get "/api/exchanges/%2F/myOtherExchange": unsupported protocol scheme ""
│ 
│   with module.rabbitmq.rabbitmq_exchange.exchanges["myOtherExchange"],
│   on modules/rabbitmq.tf line 137, in resource "rabbitmq_exchange" "exchanges":
│  137: resource "rabbitmq_exchange" "exchanges" {

I know that the error is triggered when I make a change to the configuration of the aws_mq_broker, but it appears to come from this provider.

Terraform Version

Terraform v1.0.1
on linux_amd64
+ provider registry.terraform.io/cyrilgdn/rabbitmq v1.5.1
+ provider registry.terraform.io/hashicorp/aws v3.47.0
+ provider registry.terraform.io/hashicorp/random v3.1.0

Affected Resource(s)

  • rabbitmq_exchange
  • rabbitmq_queue

Terraform Configuration Files

This is the module:

terraform {

  required_version = "~> 1.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.0"
    }
    rabbitmq = {
      source  = "cyrilgdn/rabbitmq"
      version = "1.5.1"
    }
  }
}

variable "environment" {
  description = "The environment of the RabbitMQ broker, e.g. 'dev', 'staging', 'production'"
  type    = string
}

variable "region" {
  description = "AWS Region"
  default = "eu-west-1"
}

variable "host_instance_type" {
  description = "Broker instance type"
  default     = "mq.t3.micro"

  validation {
    condition     = contains(["mq.t3.micro", "mq.m5.large", "mq.m5.xlarge", "mq.m5.2xlarge", "mq.m5.4xlarge"], var.host_instance_type)
    error_message = "The 'host_instance_type' variable must be one of mq.t3.micro, mq.m5.large, mq.m5.xlarge, mq.m5.2xlarge, mq.m5.4xlarge."
  }
}

variable "deployment_mode" {
  description = "SINGLE_INSTANCE for testing, or CLUSTER_MULTI_AZ for HA. Note that mq.t3.micro cannot be used in CLUSTER_MULTI_AZ"
  default     = "SINGLE_INSTANCE"
}

provider "aws" {
  region = var.region
}

locals {

  rabbitmq_engine_version = "3.8.11"
  publicly_accessible     = true

  exchanges = toset([
    "myExchange",
    "myOtherExchange"
  ])

  queues = {
    "myqueue" = {
      exchange    = "myExchange"
      routing_key = "myRoutingKey"
    },
    "myotherqueue" = {
      exchange    = "myOtherExchange"
      routing_key = "myOtherRoutingKey"
    }
  }
}

resource "random_string" "admin_password" {
  length  = 32
  special = false
}

resource "random_string" "rabbit_password" {
  length  = 32
  special = false
}

resource "aws_ssm_parameter" "admin_password" {
  name        = "/rabbit/admin_password"
  description = "Administrator password for RabbitMQ"
  type        = "SecureString"
  value       = random_string.admin_password.result
  overwrite   = false
}

resource "aws_ssm_parameter" "rabbit_password" {
  name        = "/rabbit/user_password"
  description = "User password for RabbitMQ"
  type        = "SecureString"
  value       = random_string.rabbit_password.result
  overwrite   = false
}

resource "aws_mq_broker" "rabbitmq" {
  broker_name                = "rabbitmq-${var.environment}"
  engine_type                = "RabbitMQ"
  engine_version             = local.rabbitmq_engine_version
  host_instance_type         = var.host_instance_type
  deployment_mode            = var.host_instance_type == "mq.t3.micro" ? "SINGLE_INSTANCE" : var.deployment_mode
  publicly_accessible        = local.publicly_accessible
  auto_minor_version_upgrade = true

  user {
    username = "admin"
    password = aws_ssm_parameter.admin_password.value
  }

  maintenance_window_start_time {
    day_of_week = "TUESDAY"
    time_of_day = "07:00"
    time_zone   = "Europe/London"
  }

  logs {
    general = true
  }

  tags = {
    Environment = var.environment
    ManagedBy   = "Terraform"
  }

  lifecycle {
    ignore_changes = [
      # This is a managed service, upgraded during maintenance windows
      engine_version,
    ]
  }
}

provider "rabbitmq" {
  endpoint = aws_mq_broker.rabbitmq.instances.0.console_url
  username = "admin"
  password = aws_ssm_parameter.admin_password.value
}

resource "rabbitmq_exchange" "exchanges" {
  for_each = local.exchanges
  name     = each.value
  vhost    = "/"

  settings {

    type        = "direct"
    durable     = true
    auto_delete = false

    arguments = {
      internal = false
    }
  }
}

resource "rabbitmq_queue" "queues" {
  for_each = local.queues
  name     = each.key
  vhost    = "/"

  settings {
    auto_delete = false
    durable     = true
  }
}

resource "rabbitmq_binding" "bindings" {
  for_each         = rabbitmq_queue.queues
  source           = local.queues[each.key].exchange
  destination      = each.key
  routing_key      = local.queues[each.key].routing_key
  destination_type = "queue"
  vhost            = "/"
}

resource "rabbitmq_user" "rabbit" {
  name     = "rabbit"
  password = aws_ssm_parameter.rabbit_password.value
  tags     = ["management"]
}

resource "rabbitmq_permissions" "user_permissions" {
  user  = rabbitmq_user.rabbit.name
  vhost = "/"

  permissions {
    configure = "^$"
    write     = ".*"
    read      = ".*"
  }
}

This is the configuration that calls it:

module "rabbitmq" {
  # assuming you put the above module in "./modules"
  source             = "./modules"
  environment        = "test"
  host_instance_type = "mq.t3.small"
  deployment_mode    = "SINGLE_INSTANCE"
}

This works just fine. It stands up a broker and configures the relevant exchanges, queues, bindings. The problem comes when changing the configuration, e.g.

module "rabbitmq" {
  source             = "./modules"
  environment        = "test"
  host_instance_type = "mq.m5.large"
  deployment_mode    = "CLUSTER_MULTI_AZ"
}

Debug Output

There's way too much sensitive info in the debug output, but you can certainly reproduce it with:

$ TF_LOG=DEBUG terraform plan 

Expected Behavior

I would expect a plan that either reconfigures the cluster or replaces it.

Actual Behavior

│ Error: Get "/api/exchanges/%2F/myOtherExchange": unsupported protocol scheme ""
│ 
│   with module.rabbitmq.rabbitmq_exchange.exchanges["myOtherExchange"],
│   on modules/rabbitmq.tf line 137, in resource "rabbitmq_exchange" "exchanges":
│  137: resource "rabbitmq_exchange" "exchanges" {
│ 
╵
╷
│ Error: Get "/api/exchanges/%2F/myExchange": unsupported protocol scheme ""
│ 
│   with module.rabbitmq.rabbitmq_exchange.exchanges["myExchange"],
│   on modules/rabbitmq.tf line 137, in resource "rabbitmq_exchange" "exchanges":
│  137: resource "rabbitmq_exchange" "exchanges" {
│ 
╵
╷
│ Error: Get "/api/queues/%2F/myqueue": unsupported protocol scheme ""
│ 
│   with module.rabbitmq.rabbitmq_queue.queues["myqueue"],
│   on modules/rabbitmq.tf line 154, in resource "rabbitmq_queue" "queues":
│  154: resource "rabbitmq_queue" "queues" {
│ 
╵
╷
│ Error: Get "/api/queues/%2F/myotherqueue": unsupported protocol scheme ""
│ 
│   with module.rabbitmq.rabbitmq_queue.queues["myotherqueue"],
│   on modules/rabbitmq.tf line 154, in resource "rabbitmq_queue" "queues":
│  154: resource "rabbitmq_queue" "queues" {
│ 
╵
╷
│ Error: Get "/api/users/rabbit": unsupported protocol scheme ""
│ 
│   with module.rabbitmq.rabbitmq_user.rabbit,
│   on modules/rabbitmq.tf line 174, in resource "rabbitmq_user" "rabbit":
│  174: resource "rabbitmq_user" "rabbit" {

@cyrilgdn
Owner

cyrilgdn commented Jul 4, 2021

Hi @porcospino,

Thanks for opening this issue. I tested your code and was able to reproduce it, but unfortunately I don't think it's linked to this provider. It's more likely related to this Terraform issue: hashicorp/terraform#4149

What happens: changing the deployment mode or instance type of the broker forces it to be recreated, so during the plan Terraform knows that the instances URL will change but doesn't know the new value yet. If you try it without any rabbitmq_xxx resources, you'll see something like this in the plan:

      ~ instances                  = [
          - {
              - console_url = "https://XXX.mq.eu-west-1.amazonaws.com"
              - endpoints   = [
                  - "amqps://XXX.mq.eu-west-1.amazonaws.com:5671",
                ]
              - ip_address  = ""
            },
        ] -> (known after apply)

So, with rabbitmq_xxx resources defined, the RabbitMQ provider receives an empty string for the endpoint setting during that same plan, and is therefore not able to connect to the cluster.

Where I work, we usually create the clusters and the resources on those clusters in multiple steps, with remote state used to retrieve the output values of the previous steps, along the lines of the sketch below.
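
As an illustration only, such a split could look roughly like this (the S3 backend, bucket name, and output name here are placeholders, not taken from the configuration above):

# Step 1 (broker stack): expose the console URL as an output
output "rabbitmq_console_url" {
  value = aws_mq_broker.rabbitmq.instances[0].console_url
}

# Step 2 (RabbitMQ resources stack): read the broker stack's outputs via remote state
data "terraform_remote_state" "broker" {
  backend = "s3"
  config = {
    bucket = "my-terraform-state" # placeholder bucket
    key    = "rabbitmq/broker.tfstate"
    region = "eu-west-1"
  }
}

# Look up the admin password that the broker stack stored in SSM
data "aws_ssm_parameter" "admin_password" {
  name = "/rabbit/admin_password"
}

provider "rabbitmq" {
  endpoint = data.terraform_remote_state.broker.outputs.rabbitmq_console_url
  username = "admin"
  password = data.aws_ssm_parameter.admin_password.value
}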

You can also terraform apply -target the broker first and then apply everything, or terraform state rm all the rabbitmq_xxx resources before the apply that will destroy the broker; see the example below. But I don't think it's possible to handle this in a single plan/apply (unless there's a solution in a recent Terraform version that I'm not aware of).
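
For example, with the module call above (the resource addresses come from that configuration; adjust them to your own layout):

# Option 1: apply the broker change on its own first, then everything else
$ terraform apply -target=module.rabbitmq.aws_mq_broker.rabbitmq
$ terraform apply

# Option 2: drop the RabbitMQ resources from state before the apply that replaces the broker
$ terraform state rm module.rabbitmq.rabbitmq_permissions.user_permissions
$ terraform state rm module.rabbitmq.rabbitmq_user.rabbit
$ terraform state rm module.rabbitmq.rabbitmq_binding.bindings
$ terraform state rm module.rabbitmq.rabbitmq_queue.queues
$ terraform state rm module.rabbitmq.rabbitmq_exchange.exchanges
$ terraform apply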

In any case, this is not an issue with this provider. The error message could be clearer, though, and I need to check why the validation function didn't catch the empty endpoint, but to be sure I added a

	if endpoint == "" {
		panic("no endpoint")
	}

in my locally built provider and it panics during the plan.

I'm going to close this issue, but feel free to comment on it or reopen it if needed or if I'm wrong.

cyrilgdn closed this as completed Jul 4, 2021

@porcospino
Author

Hi @cyrilgdn,

Thanks for taking the time to provide such a comprehensive answer. The terraform state rm workaround worked for me. It's a pity AWS replaces the entire cluster, including the network load balancer, when you make this change.
