Fix #110 cannot change storage account type #117

Closed

Conversation

cchildress
Contributor

#110

Forces a new resource if the account_type changes.

@nbering

nbering commented Jun 21, 2017

Have you tested this? I usually have a lot of trouble destroying a storage account and then creating a new one with the same name. There's often a delay of several hours (sometimes up to 24) before the name is freed for reuse.

This also seems like a somewhat naive approach: some transitions between storage account types are valid in place, while others cannot be made at all.

I suppose there is precedent for this kind of aggressive delete-and-create-new (for example, deleting VMs to change their instance size), but that is because there is no "stop the VM to make the change" functionality in Terraform.

Does anyone know of a better solution?

@nbering

nbering commented Jun 21, 2017

I should add, the "failure" I get in creating a new storage account with the same name is not an error. The request just never finishes. Terraform will sit for hours waiting for the storage account to finish creating, and it never does. This is extremely annoying when that storage account is a dependency for the VHD on a VM. I added a "storage account prefix" property to all my VM+Storage modules to solve this problem when destroying and creating new setups rapidly for development.

@nbering

nbering commented Jun 21, 2017

For reference, my view coincides somewhat with hashicorp/terraform#8769 and hashicorp/terraform#113, though those are completely speculative changes still at the proposal or idea stage.

@cchildress
Contributor Author

Hi @nbering
Yup, I tested it in East Australia last night and this morning. Last night I couldn't even test the storage account functionality with 0.9.6 as a starting point because Azure couldn't create a new storage account with a name I'd never used before in under an hour (I got tired of waiting on it). This morning I was able to destroy the old account and create a new one in under 5 minutes.

Storage accounts definitely seem to be Azure's weak spot. Sometimes it takes 24 hours to handle the resource, sometimes 5 minutes. I'm not really sure how Terraform should best handle that, though I don't think the logic belongs in the storage account code. The best way I can think of to handle this is to have a flag that marks certain resources as eventually consistent and have Terraform just kick off the request and mark the resource as requested in the state file. On the next run it could look for resources marked as requested and validate that they were created. Any children that depend on the resource would just have to be skipped over, though. It would take away your ability to just run terraform apply and go from nothing to a running VM, so I guess you'd win some / lose some there...

With Terraform and Azure as they stand today, this is the only way to reliably transition between storage account types. If we want to be able to transition between different replication options without forcing a recreate (which Azure does allow), maybe the answer here is to break account_type into account_tier (forces new) and replication_type (can transition in place).

To expand the idea beyond just storage accounts and into virtual machines as well, you're right that VM resizing can be kind of clunky. The problem I've seen with Azure is that shut-down-to-resize is valid only for certain intersections of sizes. It's perfectly valid to go from a Standard_A2 to a Standard_D2, but you can't go to a Standard_F2. Some size transitions do force a new resource. As far as I know the Azure API doesn't return anything to let you (or your automation) know which size transitions are valid, or even which sizes are available at all (you can't run GPU nodes in North Europe, for example). Another good example is PR-14688: the sanity logic on the Terraform side was set to allow 4TB disks in all regions even though they note it only works in one location. Until there's a way to know what is valid in which region, I think we're just going to have to live with Azure API failures...

@nbering

nbering commented Jun 21, 2017

I agree that forcing a new resource might be the only way to handle this with the current Terraform API. I looked over the schema and I don't see any opportunity at this point for a resource to provide custom logic on whether to force a new resource. This is unfortunate, really, as that sort of fine-grained control would be very helpful for handling validation against Azure's rat's nest of conditions and limitations.
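
Just to sketch what I mean by fine-grained control, something like the following would be ideal. The ForceNewFunc hook here is made up - nothing like it exists in helper/schema today - it only illustrates the kind of per-change decision that would help:

"account_type": {
  Type:         schema.TypeString,
  Required:     true,
  ValidateFunc: validateArmStorageAccountType,
  // Hypothetical hook: decide per change whether a replacement is needed,
  // instead of the all-or-nothing ForceNew flag.
  ForceNewFunc: func(old, new string) bool {
    // A tier change (Standard <-> Premium) needs a new account, while a
    // replication change within the same tier could be updated in place.
    return strings.SplitN(old, "_", 2)[0] != strings.SplitN(new, "_", 2)[0]
  },
},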

The biggest concern I have over this change is that if someone was already accustomed to changing the account type between the types that can transition in place, they could end up accidentally losing data after the update. I'll grant that it's a little careless to run apply without plan after doing an upgrade.

Is there anything about project lifecycle that can be used to help mitigate the impact of behaviour changes that increase the likelihood of resource destruction? It's probably especially critical now that the provider is separate, if Terraform core auto-updates the plugins (not sure if it does). At least previously the changelog was all in one place. This might be more of a meta-issue that deserves its own issue.

@cchildress
Contributor Author

Yeah, a note in the changelog to bring attention to this would be a good idea (similar to what's in the -target notes for 0.10).

I'm thinking that breaking account_type up would give us the fine-grained control you're looking for. I might have a look at what that would entail and see if it's low enough fruit to go for.
Maybe go from

"account_type": {
  Type:             schema.TypeString,
  Required:         true,
  ForceNew:         true,
  ValidateFunc:     validateArmStorageAccountType,
  DiffSuppressFunc: ignoreCaseDiffSuppressFunc,
},

to

"account_tier" {
  Type:             schema.TypeString,
  Required:         true,
  ForceNew:         true,
  ValidateFunc:     validateArmStorageAccountTier,
  DiffSuppressFunc: ignoreCaseDiffSuppressFunc,
}
"account_replication" {
  Type:             schema.TypeString,
  Required:         true,
  ValidateFunc:     validateArmStorageAccountReplication,
  DiffSuppressFunc: ignoreCaseDiffSuppressFunc,
}

Of course this brings about a new backwards compatibility problem too...
I'm open to suggestions here.
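
One option (just a sketch, I haven't tried it) would be to keep account_type around as an optional, deprecated alias so existing configs keep working while people migrate to the new fields:

"account_type": {
  Type:             schema.TypeString,
  Optional:         true,
  Deprecated:       "use account_tier and account_replication instead",
  // For this to work the new fields would also need to be Optional,
  // otherwise the conflict below could never be satisfied.
  ConflictsWith:    []string{"account_tier", "account_replication"},
  ValidateFunc:     validateArmStorageAccountType,
  DiffSuppressFunc: ignoreCaseDiffSuppressFunc,
},

The create/update code would then need to prefer the new fields and only fall back to account_type when that's what was set.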

In running dev builds of 0.10 I've seen notices about pinning the plugin versions. Hopefully that helps to address your concern about the core pulling the latest version of the plugin.

@cchildress
Contributor Author

@nbering
I'm not opening a pull request with this yet because it's nowhere near ready, but I did a first pass at breaking account_type up if you want to play around with it some:

diff --git a/azurerm/resource_arm_storage_account.go b/azurerm/resource_arm_storage_account.go
index ac6f4cf..c6e5085 100644
--- a/azurerm/resource_arm_storage_account.go
+++ b/azurerm/resource_arm_storage_account.go
@@ -58,11 +58,18 @@ func resourceArmStorageAccount() *schema.Resource {
 				Default: string(storage.Storage),
 			},
 
-			"account_type": {
+			"account_replication": {
+				Type:             schema.TypeString,
+				Required:         true,
+				ValidateFunc:     validateArmStorageAccountReplication,
+				DiffSuppressFunc: ignoreCaseDiffSuppressFunc,
+			},
+
+			"account_tier": {
 				Type:             schema.TypeString,
 				Required:         true,
 				ForceNew:         true,
-				ValidateFunc:     validateArmStorageAccountType,
+				ValidateFunc:     validateArmStorageAccountTier,
 				DiffSuppressFunc: ignoreCaseDiffSuppressFunc,
 			},
 
@@ -150,7 +157,9 @@ func resourceArmStorageAccountCreate(d *schema.ResourceData, meta interface{}) e
 	resourceGroupName := d.Get("resource_group_name").(string)
 	storageAccountName := d.Get("name").(string)
 	accountKind := d.Get("account_kind").(string)
-	accountType := d.Get("account_type").(string)
+	accountTier := d.Get("account_tier").(string)
+	accountReplication := d.Get("account_replication").(string)
+	accountType := fmt.Sprintf("%s_%s", accountTier, accountReplication)
 
 	location := d.Get("location").(string)
 	tags := d.Get("tags").(map[string]interface{})
@@ -249,8 +258,10 @@ func resourceArmStorageAccountUpdate(d *schema.ResourceData, meta interface{}) e
 
 	d.Partial(true)
 
-	if d.HasChange("account_type") {
-		accountType := d.Get("account_type").(string)
+	if d.HasChange("account_tier") || d.HasChange("account_replication") {
+		accountTier := d.Get("account_tier").(string)
+		accountReplication := d.Get("account_replication").(string)
+		accountType := fmt.Sprintf("%s_%s", accountTier, accountReplication)
 
 		sku := storage.Sku{
 			Name: storage.SkuName(accountType),
@@ -264,7 +275,8 @@ func resourceArmStorageAccountUpdate(d *schema.ResourceData, meta interface{}) e
 			return fmt.Errorf("Error updating Azure Storage Account type %q: %s", storageAccountName, err)
 		}
 
-		d.SetPartial("account_type")
+		d.SetPartial("account_tier")
+		d.SetPartial("account_replication")
 	}
 
 	if d.HasChange("access_tier") {
@@ -354,7 +366,8 @@ func resourceArmStorageAccountRead(d *schema.ResourceData, meta interface{}) err
 	d.Set("secondary_access_key", accessKeys[1].Value)
 	d.Set("location", resp.Location)
 	d.Set("account_kind", resp.Kind)
-	d.Set("account_type", resp.Sku.Name)
+	d.Set("account_tier", resp.Sku.Tier)
+	d.Set("account_replication", strings.Split(fmt.Sprintf("%v", resp.Sku.Name), "_")[1])
 	d.Set("primary_location", resp.AccountProperties.PrimaryLocation)
 	d.Set("secondary_location", resp.AccountProperties.SecondaryLocation)
 
@@ -428,6 +441,37 @@ func validateArmStorageAccountName(v interface{}, k string) (ws []string, es []e
 	return
 }
 
+func validateArmStorageAccountTier(v interface{}, k string) (ws []string, es []error) {
+	validAccountTypes := []string{"standard", "premium"}
+
+	input := strings.ToLower(v.(string))
+
+	for _, valid := range validAccountTypes {
+		if valid == input {
+			return
+		}
+	}
+
+	es = append(es, fmt.Errorf("Invalid storage account tier %q", input))
+	return
+}
+
+func validateArmStorageAccountReplication(v interface{}, k string) (ws []string, es []error) {
+	validAccountTypes := []string{"lrs", "zrs", "grs", "ragrs"}
+
+	input := strings.ToLower(v.(string))
+
+	for _, valid := range validAccountTypes {
+		if valid == input {
+			return
+		}
+	}
+
+	es = append(es, fmt.Errorf("Invalid storage account replication mode %q", input))
+	return
+}
+
+// TODO: get this to work outside the schema validation (maybe somewhere in the create / update functions?)
 func validateArmStorageAccountType(v interface{}, k string) (ws []string, es []error) {
 	validAccountTypes := []string{"standard_lrs", "standard_zrs",
 		"standard_grs", "standard_ragrs", "premium_lrs"}

I was able to run it a few times and even changed the redundancy mode on a storage account back and forth and it didn't immediately crash...
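
One thing I still want to double-check is casing: the validators accept any case, but the SKU name that gets rebuilt and sent to Azure presumably has to match the exact form the API expects (Standard_RAGRS, Premium_LRS, etc.). A small helper along these lines (the name is made up) might be needed:

// Hypothetical helper: normalize whatever casing the user wrote into the
// SKU name format Azure expects, e.g. "standard" + "ragrs" -> "Standard_RAGRS".
func buildArmStorageAccountType(tier, replication string) string {
  tier = strings.Title(strings.ToLower(tier))
  replication = strings.ToUpper(replication)
  return fmt.Sprintf("%s_%s", tier, replication)
}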

tombuildsstuff added a commit that referenced this pull request Sep 28, 2017
- Support for File Encryption. Fixes #80
- Adding Import tests
- Support for Custom Domains on Storage Accounts. Fixes #15
- Splitting the storage account Tier and Replication out into separate fields. Incorporating #117 & Fixing #110
@tombuildsstuff
Contributor

Hey @cchildress

Firstly - apologies for the really delayed review on this - I saw the comments and thought we'd reviewed/replied to this!

I really like the idea of splitting the Storage Tier and the Replication Type into separate fields so that each can be ForceNew or updatable as appropriate. I've been adding functionality to the azurerm_storage_account resource today and ended up implementing this in PR #363 - which in my testing appears to work well - and as such I'd like to close this PR in favour of the split-out fields (as you specced above), which have been implemented in the other PR, if you don't mind?

Given the AzureRM Provider is split out from Terraform Core, we can bump the version accordingly to highlight that there are breaking changes to the schema (and that users will need to update to the new fields).

Thanks!

@cchildress
Contributor Author

Hi @tombuildsstuff
Sounds good to me. Let's close this PR and ship that one.
Bumping the major version to signal the breaking schema change does sound like a good idea.

Thanks for carrying the idea to completion.

@tombuildsstuff
Contributor

@cchildress 👍 thanks

@cchildress cchildress deleted the storage_account_recreate branch September 29, 2017 19:08
@ghost

ghost commented Apr 1, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 hashibot-feedback@hashicorp.com. Thanks!

@ghost ghost locked and limited conversation to collaborators Apr 1, 2020