Manually deleted workspace with secret scope results in error on plan #92

Closed
EliiseS opened this issue Jun 10, 2020 · 13 comments
Labels: azure (Occurring on Azure cloud), Bug (The issue is a bug.)
Milestone: v0.2.4

@EliiseS (Contributor) commented Jun 10, 2020

Creating a workspace with a secret scope, cluster, or possibly other references, and then manually deleting the workspace, results in an error on terraform plan/apply.

Terraform Version

0.12.24

Affected Resource(s)

  • databricks_workspace
  • databricks_job

Terraform Configuration Files

See the Configuration section below.

Panic Output

Error: parse :///api/2.0/secrets/scopes/list?: missing protocol scheme
Error: parse :///api/2.0/clusters/get?cluster_id=0610-100720-loss540: missing protocol scheme

Expected Behavior

Deleting an existing workspace previously created by Terraform should cause the provider to wait for a new workspace to be created before querying for secret scopes, clusters, etc.

Actual Behavior

Deleting an existing workspace previously created by Terraform results in an error on terraform plan/apply.

Steps to Reproduce

  1. Create a workspace with a secret scope, cluster, etc. in Terraform
  2. Manually delete the workspace (on Azure in this case)
  3. Run terraform plan

Important Factoids

  • Ran on Azure
  • SP authentication

Other panic output

Error output of a workspace with a cluster, secrets and an ADLS Gen2 mount that was manually deleted:
[screenshot of error output]

Configuration

variable "user" {
  type        = string
}

variable "password" {
  type        = string
}

variable "client_id" {
  type        = string
}

variable "client_secret" {
  type        = string
}

variable "tenant_id" {
  type        = string
}

variable "subscription_id" {
  type        = string
}

provider "azurerm" {
    version = "~> 2.10"
    client_id         = var.client_id
    client_secret     = var.client_secret
    tenant_id         = var.tenant_id
    subscription_id   = var.subscription_id
    features {}
    skip_provider_registration = true

}

resource "azurerm_resource_group" "db" {
  name     = "db-labs-resources"
  location = "West Europe"
}

resource "azurerm_databricks_workspace" "module" {
  name                        = "db-labs-worspace"
  resource_group_name         = azurerm_resource_group.db.name
  location                    = azurerm_resource_group.db.location
  sku                         = "premium"
}

data "azurerm_client_config" "current" {}

provider "databricks" {
  version = "~> 0.1"

  azure_auth = {
    managed_resource_group = azurerm_databricks_workspace.module.managed_resource_group_name
    azure_region           = azurerm_databricks_workspace.module.location
    workspace_name         = azurerm_databricks_workspace.module.name
    resource_group         = azurerm_databricks_workspace.module.resource_group_name
    client_id               = var.client_id
    client_secret           = var.client_secret
    tenant_id               = var.tenant_id
    subscription_id         = var.subscription_id
}

resource "databricks_secret_scope" "sandbox_storage" {
  name                     = "sandbox-storage"
  initial_manage_principal = "users"
}

resource "databricks_secret" "secret" {
  key          = "secret"
  string_value = "I am a secret"
  scope        = databricks_secret_scope.sandbox_storage.name
}
@stikkireddy (Contributor):

@EliiseS thanks for raising this, let me take a quick look. Is the auth SP-based or manual?

Is the expectation that during the plan phase we recognize that the workspace itself is missing, so that when the workspace is not found we attempt to recreate all the resources?

@stikkireddy (Contributor):

More background on why the error is thrown: it happens during providerConfigure, where we use the AAD OAuth token to create a Databricks token to provision all resources. The Databricks token creation fails because the URL of the Databricks workspace is no longer valid.

So is the expected behavior that we propagate the knowledge that the workspace is not found to all the resources, so they are identified for recreation?

@EliiseS (Contributor, Author) commented Jun 15, 2020

Hey! Thanks for getting back to me so quickly, and sorry for the delay on my part.
I've updated the issue with an example config to reproduce the issue. I discovered this issue with SP auth, which I think is the only scenario where this makes sense, since the workspace is being created in the same Terraform configuration. Correct me if I'm wrong in that assumption.

So is the expected behavior that we propagate the knowledge that the workspace is not found to all the resources to identify them to be recreated?

I'd say so, since you'd want to be able to reprovision a workspace, with everything it originally had, if it's accidentally deleted.

@EliiseS (Contributor, Author) commented Jun 16, 2020

I'd like to pick this up and try to work out a solution, if you don't mind. I'd love to hear if you have ideas for a solution too!

@lawrencegripper (Contributor):

This may help, as the workspace URL would change when the workspace is recreated: #34

@stikkireddy (Contributor):

@lawrencegripper @EliiseS If my understanding is correct, both URLs, the old pattern and the new pattern, should work, but yes, I agree with Lawrence that with the new changes to the workspace in azurerm we should probably migrate to the new URL scheme.

@EliiseS (Contributor, Author) commented Jun 18, 2020

After testing with the new workspace_url field from #114, we've discovered a new issue that originates from an unknown change between the hash of the latest release and the master branch.

tf plan result:

2020/06/18 13:46:40 [INFO] backend/local: plan operation completed
Error: failed to get credentials from config file; error msg: Authentication is not configured for provider. Please configure it
through one of the following options:
1. DATABRICKS_HOST + DATABRICKS_TOKEN environment variables.
2. host + token provider arguments.
3. Run `databricks configure --token` that will create /root/.databrickscfg file.

Please check https://docs.databricks.com/dev-tools/cli/index.html#set-up-authentication for details

  on test.tf line 51, in provider "databricks":
  51: provider "databricks" {

@stikkireddy, do you have any insight into this?
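
For reference, option 2 in that error message, the host + token provider arguments, would look roughly like this (a minimal sketch; the host and token values below are placeholders, not taken from this issue):

variable "databricks_token" {
  type = string
}

provider "databricks" {
  # host + token arguments, per option 2 in the error message above.
  # Both values are placeholders for illustration only.
  host  = "https://westeurope.azuredatabricks.net"
  token = var.databricks_token
}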

@stikkireddy (Contributor):

Can you please paste your provider HCL code, @EliiseS?

@EliiseS (Contributor, Author) commented Jun 18, 2020

@stikkireddy We've managed to fix the issue here: #119

stikkireddy removed their assignment Jun 25, 2020
nfx added this to the v0.2.0 milestone Jun 25, 2020
nfx added the "Bug" label Jul 2, 2020
@EliiseS (Contributor, Author) commented Jul 8, 2020

Waiting on #158 to see if it fixes the issue. Otherwise, the approach recommended by Terraform for initializing providers with output from resources can be found here: hashicorp/terraform#25314
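
A common way to initialize a provider from resource outputs, in the spirit of what's discussed there, is to split the setup into two separately applied configurations, so the databricks provider is only configured from values that already exist when it runs. A rough sketch of that pattern, assuming a stage-1 state creates the workspace and exports its URL as an output (all names and backend settings below are illustrative):

variable "databricks_token" {
  type = string
}

# Stage 2: read the already-applied stage-1 state that created the workspace.
data "terraform_remote_state" "workspace" {
  backend = "azurerm"
  config = {
    resource_group_name  = "tfstate-rg"        # illustrative backend settings
    storage_account_name = "tfstatestorage"
    container_name       = "tfstate"
    key                  = "workspace.tfstate"
  }
}

provider "databricks" {
  # The provider is configured only from values known before this config runs.
  host  = data.terraform_remote_state.workspace.outputs.workspace_url
  token = var.databricks_token
}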

@nfx (Contributor) commented Jul 9, 2020

@EliiseS From the looks of it, the errors are in the older format; APIError now returns a nice struct without any HTML. A fresh make install should work as well.

In #158 I'll change the map to a list (or set) as well.
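
In HCL terms, moving azure_auth from a map to a list or set would presumably change the configuration from an attribute assignment to a nested block; a rough before/after sketch (not the final provider schema):

# Before: azure_auth as a map attribute (note the "=").
provider "databricks" {
  azure_auth = {
    workspace_name = azurerm_databricks_workspace.module.name
    resource_group = azurerm_databricks_workspace.module.resource_group_name
  }
}

# After: azure_auth as a nested block backed by a single-element list/set.
provider "databricks" {
  azure_auth {
    workspace_name = azurerm_databricks_workspace.module.name
    resource_group = azurerm_databricks_workspace.module.resource_group_name
  }
}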

@EliiseS (Contributor, Author) commented Jul 9, 2020

@nfx yeah, I'm waiting for #158 to be ready so I can see if that solves the problem.

@nfx (Contributor) commented Aug 25, 2020

@EliiseS can you confirm that the current master branch fixes the issue?

nfx closed this as completed Aug 25, 2020
nfx modified the milestones: v0.2.0, v0.2.4 Aug 25, 2020
nfx added the "azure" label Feb 23, 2022