Manually deleted workspace with secret scope results in error on plan #92
Comments
@EliiseS thanks for raising this; let me take a quick look. Is the auth SP-based or manual? Is the expectation that during the plan phase we recognize that the workspace itself is missing, so that when the workspace is not found we attempt to recreate all the resources?
More background on why the error is thrown: it occurs during providerConfigure, as we use the AAD OAuth token to create a Databricks token to provision all resources. The Databricks token creation fails because the URL for the Databricks workspace is no longer valid. So is the expected behavior that we propagate the knowledge that the workspace is not found to all the resources, so they are identified to be recreated?
Hey! Thanks for getting back to me so quickly; sorry for the delay on my part. "So is the expected behavior that we propagate the knowledge that the workspace is not found to all the resources to identify them to be recreated?" I'd say so, since you'd want to be able to reprovision an accidentally deleted workspace with everything it originally had.
I'd like to pick this up and try to work out a solution, if you don't mind. I'd love to hear if you have ideas for a solution too!
This may help, as the workspace URL would change when the workspace was recreated: #34
@lawrencegripper @EliiseS if my understanding is correct, both URLs, the old pattern and the new pattern, should work, but yes, I agree with Lawrence that with the new changes to the workspace in azurerm we should probably migrate to the new URL scheme.
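For context, the two URL patterns under discussion look roughly like this; the region and workspace ID below are placeholders, not values from this issue:

```hcl
# Old regional pattern (shared per Azure region):
#   https://westeurope.azuredatabricks.net
# New per-workspace pattern:
#   https://adb-<workspace-id>.<n>.azuredatabricks.net

provider "databricks" {
  host = "https://adb-1234567890123456.7.azuredatabricks.net" # placeholder per-workspace URL
}
```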
After testing with the new URL scheme:
2020/06/18 13:46:40 [INFO] backend/local: plan operation completed
Error: failed to get credentials from config file; error msg: Authentication is not configured for provider. Please configure it
through one of the following options:
1. DATABRICKS_HOST + DATABRICKS_TOKEN environment variables.
2. host + token provider arguments.
3. Run `databricks configure --token` that will create /root/.databrickscfg file.
Please check https://docs.databricks.com/dev-tools/cli/index.html#set-up-authentication for details
on test.tf line 51, in provider "databricks":
51: provider "databricks" {

@stikkireddy, do you have any insight into this?
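For reference, option 2 from the error message above (host + token provider arguments) corresponds to a provider block along these lines; this is a sketch with placeholder values, not the configuration used in this issue:

```hcl
variable "databricks_token" {
  type = string
}

provider "databricks" {
  host  = "https://adb-1234567890123456.7.azuredatabricks.net" # placeholder workspace URL
  token = var.databricks_token                                 # Databricks personal access token
}
```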
Can you please paste your provider HCL code, @EliiseS?
@stikkireddy We've managed to fix the issue here: #119
Waiting on #158 to see if it fixes the issues. Otherwise, the approach recommended by Terraform for initializing providers with output from resources can be found here: hashicorp/terraform#25314
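A sketch of the pattern from hashicorp/terraform#25314 as it might apply here, assuming the azurerm provider's workspace_url export; this is illustrative, not the fix adopted in this thread:

```hcl
resource "azurerm_databricks_workspace" "this" {
  name                = "example-workspace" # placeholder
  resource_group_name = "example-rg"        # placeholder
  location            = "westeurope"
  sku                 = "standard"
}

provider "databricks" {
  # Derive the host from the workspace resource, so a recreated workspace
  # feeds its fresh per-workspace URL into the provider. Note that deriving
  # provider configuration from resource attributes has known limitations,
  # which is exactly what hashicorp/terraform#25314 discusses.
  host = "https://${azurerm_databricks_workspace.this.workspace_url}"
}
```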
@EliiseS can you confirm the current master branch fixes the issue?
Creating a workspace with a secret scope, cluster, or possibly other references, and then manually deleting the workspace, results in an error on terraform plan/apply.
Terraform Version
0.12.24
Affected Resource(s)
databricks_secret_scope
databricks_cluster
databricks_azure_adls_gen2_mount
Terraform Configuration Files
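The original configuration was not captured in this thread. A minimal sketch matching the shape described above (all names and values are placeholders) could look like:

```hcl
provider "azurerm" {
  features {}
}

resource "azurerm_resource_group" "example" {
  name     = "example-rg"
  location = "westeurope"
}

resource "azurerm_databricks_workspace" "example" {
  name                = "example-workspace"
  resource_group_name = azurerm_resource_group.example.name
  location            = azurerm_resource_group.example.location
  sku                 = "standard"
}

# A workspace-scoped resource; this is what errors on plan once the
# workspace underneath it has been deleted out of band.
resource "databricks_secret_scope" "example" {
  name = "example-scope"
}
```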
Panic Output
Expected Behavior
Deleting an existing workspace, previously created by Terraform, waits for a new workspace to be created before querying for secret scopes, clusters, etc.
Actual Behavior
Deleting an existing workspace, previously created by Terraform, results in an error on terraform plan/apply.
Steps to Reproduce
1. terraform apply a configuration containing a workspace with a secret scope, cluster, or similar workspace-scoped resource
2. Manually delete the workspace (e.g. via the Azure portal)
3. terraform plan
Important Factoids
Other panic output
Error output of a workspace with a cluster, secrets, and an ADLS Gen2 mount that was manually deleted:
Configuration