
VMSS: Enhance Upgrade Policy with Automatic OS upgrades and Rolling Upgrade Policy support #922

Merged: 22 commits, Oct 26, 2018

Commits (22)
83b20f2
vmss: add automatic os upgrades, network profile health probe, and ro…
agolomoodysaada Mar 2, 2018
979f86a
VMSS: updates to fix PR comments
katbyte Jun 6, 2018
1efd35b
Merge remote-tracking branch 'origin/master' into upgrade_policy_rolling
katbyte Jun 7, 2018
3df1cbf
Cleanup after merge
katbyte Jun 7, 2018
1cf7591
Merge branch 'master' into upgrade_policy_rolling
katbyte Jun 12, 2018
eae3c87
Fixed vmss example varible defaults
katbyte Jun 13, 2018
5c961ee
Add test for rolling autoupdate
julienstroheker Jun 15, 2018
e113177
Merge pull request #1 from julienstroheker/upgrade_policy_rolling
agolomoodysaada Jun 18, 2018
054a76b
Introduce Action Group resource of Azure Monitor (#1419)
Aug 1, 2018
ca40c07
Update CHANGELOG.md (#1703)
WodansSon Aug 1, 2018
f1a390b
Resolve conflict with upstream master
Aug 1, 2018
d3ba7f3
Resolve conflict with master branch, again
Aug 3, 2018
e90823c
Merge branch 'master' into upgrade_policy_rolling
Aug 3, 2018
990cb6f
Remove wrong updates due to the hard reset on master
Aug 4, 2018
c5186a2
Add validations to the new properties.
Aug 4, 2018
5c2e8dc
Regroup the schema related to each other.
Aug 4, 2018
648b96c
Fix destruction issue in the test case
Aug 7, 2018
017aadd
Add nil check to fix all test cases
Aug 10, 2018
7743a77
Fix code according to the feedback
Aug 11, 2018
01406cd
Add test case to update rolling upgrade policy
Sep 28, 2018
8489acb
Resolve conflict with upstream branch
Sep 28, 2018
477b881
Fix rolling_upgrade_policy diff issue when mode is not rolling
Oct 1, 2018
109 changes: 105 additions & 4 deletions azurerm/resource_arm_virtual_machine_scale_set.go
@@ -11,6 +11,7 @@ import (
"github.com/hashicorp/terraform/helper/schema"
"github.com/hashicorp/terraform/helper/structure"
"github.com/hashicorp/terraform/helper/validation"
"github.com/terraform-providers/terraform-provider-azurerm/azurerm/helpers/azure"
"github.com/terraform-providers/terraform-provider-azurerm/azurerm/utils"
)

@@ -109,6 +110,61 @@ func resourceArmVirtualMachineScaleSet() *schema.Resource {
"upgrade_policy_mode": {
Type: schema.TypeString,
Required: true,
ValidateFunc: validation.StringInSlice([]string{
string(compute.Automatic),
string(compute.Manual),
string(compute.Rolling),
}, true),
DiffSuppressFunc: ignoreCaseDiffSuppressFunc,
},

"health_probe_id": {
Type: schema.TypeString,
Optional: true,
ValidateFunc: azure.ValidateResourceID,
},

"automatic_os_upgrade": {
Type: schema.TypeBool,
Optional: true,
Default: false,
},

"rolling_upgrade_policy": {
Type: schema.TypeList,
Optional: true,
MaxItems: 1,
Elem: &schema.Resource{
Schema: map[string]*schema.Schema{
"max_batch_instance_percent": {
Type: schema.TypeInt,
Optional: true,
Default: 20,
ValidateFunc: validation.IntBetween(5, 100),
},

"max_unhealthy_instance_percent": {
Type: schema.TypeInt,
Optional: true,
Default: 20,
ValidateFunc: validation.IntBetween(5, 100),
},

"max_unhealthy_upgraded_instance_percent": {
Type: schema.TypeInt,
Optional: true,
Default: 20,
ValidateFunc: validation.IntBetween(5, 100),
},

"pause_time_between_batches": {
Type: schema.TypeString,
Optional: true,
Default: "PT0S",
ValidateFunc: validateIso8601Duration(),
},
},
},
},
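The `pause_time_between_batches` field above is validated with `validateIso8601Duration()` and defaults to the ISO-8601 duration `PT0S` (zero seconds). A minimal sketch of how such a validator could work; the regexp-based check below is an illustrative assumption, not the provider's actual implementation:

```go
package main

import (
	"fmt"
	"regexp"
)

// iso8601Duration loosely matches durations such as "PT0S", "PT5M", or
// "P1DT2H". This pattern is a simplified sketch, not an exhaustive parser.
var iso8601Duration = regexp.MustCompile(`^P(\d+Y)?(\d+M)?(\d+W)?(\d+D)?(T(\d+H)?(\d+M)?(\d+(\.\d+)?S)?)?$`)

// validateISO8601Duration returns its results in the style of a Terraform
// schema.SchemaValidateFunc: a slice of warnings and a slice of errors.
func validateISO8601Duration(v interface{}, k string) (warnings []string, errors []error) {
	s, ok := v.(string)
	if !ok {
		errors = append(errors, fmt.Errorf("expected %q to be a string", k))
		return
	}
	if !iso8601Duration.MatchString(s) {
		errors = append(errors, fmt.Errorf("%q is not a valid ISO-8601 duration: %q", k, s))
	}
	return
}

func main() {
	_, errs := validateISO8601Duration("PT0S", "pause_time_between_batches")
	fmt.Println(len(errs)) // 0
	_, errs = validateISO8601Duration("30s", "pause_time_between_batches")
	fmt.Println(len(errs)) // 1
}
```

The default `PT0S` means rolling batches proceed with no pause unless the user overrides it.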

"overprovision": {
@@ -689,14 +745,17 @@ func resourceArmVirtualMachineScaleSetCreate(d *schema.ResourceData, meta interf
return err
}

updatePolicy := d.Get("upgrade_policy_mode").(string)
upgradePolicy := d.Get("upgrade_policy_mode").(string)
automaticOsUpgrade := d.Get("automatic_os_upgrade").(bool)
overprovision := d.Get("overprovision").(bool)
singlePlacementGroup := d.Get("single_placement_group").(bool)
priority := d.Get("priority").(string)

scaleSetProps := compute.VirtualMachineScaleSetProperties{
UpgradePolicy: &compute.UpgradePolicy{
Mode: compute.UpgradeMode(updatePolicy),
Mode: compute.UpgradeMode(upgradePolicy),
AutomaticOSUpgrade: &automaticOsUpgrade,
RollingUpgradePolicy: expandAzureRmRollingUpgradePolicy(d),
},
VirtualMachineProfile: &compute.VirtualMachineScaleSetVMProfile{
NetworkProfile: expandAzureRmVirtualMachineScaleSetNetworkProfile(d),
@@ -714,6 +773,12 @@ func resourceArmVirtualMachineScaleSetCreate(d *schema.ResourceData, meta interf
scaleSetProps.VirtualMachineProfile.DiagnosticsProfile = &diagnosticProfile
}

if v, ok := d.GetOk("health_probe_id"); ok {
scaleSetProps.VirtualMachineProfile.NetworkProfile.HealthProbe = &compute.APIEntityReference{
ID: utils.String(v.(string)),
}
}

properties := compute.VirtualMachineScaleSet{
Name: &name,
Location: &location,
@@ -801,9 +866,13 @@ func resourceArmVirtualMachineScaleSetRead(d *schema.ResourceData, meta interfac
}

if properties := resp.VirtualMachineScaleSetProperties; properties != nil {

if upgradePolicy := properties.UpgradePolicy; upgradePolicy != nil {
d.Set("upgrade_policy_mode", upgradePolicy.Mode)
d.Set("automatic_os_upgrade", upgradePolicy.AutomaticOSUpgrade)

if err := d.Set("rolling_upgrade_policy", flattenAzureRmVirtualMachineScaleSetRollingUpgradePolicy(upgradePolicy.RollingUpgradePolicy)); err != nil {
return fmt.Errorf("[DEBUG] Error setting `rolling_upgrade_policy`: %#v", err)
}
}
d.Set("overprovision", properties.Overprovision)
d.Set("single_placement_group", properties.SinglePlacementGroup)
@@ -830,7 +899,6 @@ func resourceArmVirtualMachineScaleSetRead(d *schema.ResourceData, meta interfac
if err := d.Set("os_profile_secrets", flattenedSecrets); err != nil {
return fmt.Errorf("[DEBUG] Error setting `os_profile_secrets`: %#v", err)
}

}

if windowsConfiguration := osProfile.WindowsConfiguration; windowsConfiguration != nil {
@@ -852,6 +920,12 @@ func resourceArmVirtualMachineScaleSetRead(d *schema.ResourceData, meta interfac
}

if networkProfile := profile.NetworkProfile; networkProfile != nil {
if hp := networkProfile.HealthProbe; hp != nil {
if id := hp.ID; id != nil {
d.Set("health_probe_id", id)
}
}

flattenedNetworkProfile := flattenAzureRmVirtualMachineScaleSetNetworkProfile(networkProfile)
if err := d.Set("network_profile", flattenedNetworkProfile); err != nil {
return fmt.Errorf("[DEBUG] Error setting `network_profile`: %#v", err)
@@ -891,6 +965,7 @@ func resourceArmVirtualMachineScaleSetRead(d *schema.ResourceData, meta interfac
}
}
}

}

if plan := resp.Plan; plan != nil {
@@ -1059,6 +1134,17 @@ func flattenAzureRmVirtualMachineScaleSetBootDiagnostics(bootDiagnostic *compute
return []interface{}{b}
}

func flattenAzureRmVirtualMachineScaleSetRollingUpgradePolicy(rollingUpgradePolicy *compute.RollingUpgradePolicy) []interface{} {
b := map[string]interface{}{
"max_batch_instance_percent": *rollingUpgradePolicy.MaxBatchInstancePercent,
[Review comment from @katbyte (Collaborator), Aug 8, 2018: These all need to be nil-checked. #Resolved]

"max_unhealthy_instance_percent": *rollingUpgradePolicy.MaxUnhealthyInstancePercent,
"max_unhealthy_upgraded_instance_percent": *rollingUpgradePolicy.MaxUnhealthyUpgradedInstancePercent,
"pause_time_between_batches": *rollingUpgradePolicy.PauseTimeBetweenBatches,
}

return []interface{}{b}
}
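The review note above applies because the SDK models each of these fields as a pointer (`*int32` / `*string`), any of which can be nil in an API response, so the unguarded dereferences would panic. A hedged sketch of a nil-safe flatten, using plain Go types rather than the actual `compute.RollingUpgradePolicy` struct:

```go
package main

import "fmt"

// rollingUpgradePolicy mirrors the pointer-field shape of the SDK struct for
// illustration only; the real type lives in the Azure SDK for Go.
type rollingUpgradePolicy struct {
	MaxBatchInstancePercent *int32
	PauseTimeBetweenBatches *string
}

// flattenRollingUpgradePolicy dereferences each pointer only after a nil
// check, so a partial API response cannot cause a panic.
func flattenRollingUpgradePolicy(p *rollingUpgradePolicy) []interface{} {
	if p == nil {
		return []interface{}{}
	}
	b := map[string]interface{}{}
	if v := p.MaxBatchInstancePercent; v != nil {
		b["max_batch_instance_percent"] = *v
	}
	if v := p.PauseTimeBetweenBatches; v != nil {
		b["pause_time_between_batches"] = *v
	}
	return []interface{}{b}
}

func main() {
	// A nil policy flattens to an empty list instead of panicking.
	fmt.Println(len(flattenRollingUpgradePolicy(nil))) // 0

	pct := int32(20)
	out := flattenRollingUpgradePolicy(&rollingUpgradePolicy{MaxBatchInstancePercent: &pct})
	fmt.Println(out[0].(map[string]interface{})["max_batch_instance_percent"]) // 20
}
```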

func flattenAzureRmVirtualMachineScaleSetNetworkProfile(profile *compute.VirtualMachineScaleSetNetworkProfile) []map[string]interface{} {
networkConfigurations := profile.NetworkInterfaceConfigurations
result := make([]map[string]interface{}, 0, len(*networkConfigurations))
Expand Down Expand Up @@ -1417,6 +1503,21 @@ func expandVirtualMachineScaleSetSku(d *schema.ResourceData) (*compute.Sku, erro
return sku, nil
}

func expandAzureRmRollingUpgradePolicy(d *schema.ResourceData) *compute.RollingUpgradePolicy {
if configs, ok := d.Get("rolling_upgrade_policy").([]interface{}); ok && len(configs) > 0 {
config := configs[0].(map[string]interface{})

return &compute.RollingUpgradePolicy{
MaxBatchInstancePercent: utils.Int32(int32(config["max_batch_instance_percent"].(int))),
MaxUnhealthyInstancePercent: utils.Int32(int32(config["max_unhealthy_instance_percent"].(int))),
MaxUnhealthyUpgradedInstancePercent: utils.Int32(int32(config["max_unhealthy_upgraded_instance_percent"].(int))),
PauseTimeBetweenBatches: utils.String(config["pause_time_between_batches"].(string)),
}
} else {
[Review comment from @katbyte (Collaborator), Aug 8, 2018: this else is redundant; the if statement above returns a value. #Resolved]

[Reply from a contributor: @katbyte is right, see https://golang.org/ref/spec#Return_statements. Return values are initialized to their zero values on function entry, so the explicit `return nil` here could be omitted.]

return nil
}
}

func expandAzureRmVirtualMachineScaleSetNetworkProfile(d *schema.ResourceData) *compute.VirtualMachineScaleSetNetworkProfile {
scaleSetNetworkProfileConfigs := d.Get("network_profile").(*schema.Set).List()
networkProfileConfig := make([]compute.VirtualMachineScaleSetNetworkConfiguration, 0, len(scaleSetNetworkProfileConfigs))
152 changes: 152 additions & 0 deletions azurerm/resource_arm_virtual_machine_scale_set_test.go
@@ -690,6 +690,25 @@ func TestAccAzureRMVirtualMachineScaleSet_multipleNetworkProfiles(t *testing.T)
})
}

func TestAccAzureRMVirtualMachineScaleSet_AutoUpdates(t *testing.T) {
resourceName := "azurerm_virtual_machine_scale_set.test"
ri := acctest.RandInt()
config := testAccAzureRMVirtualMachineScaleSet_RollingAutoUpdates(ri, testLocation())
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
Providers: testAccProviders,
CheckDestroy: testCheckAzureRMVirtualMachineScaleSetDestroy,
Steps: []resource.TestStep{
{
Config: config,
Check: resource.ComposeTestCheckFunc(
testCheckAzureRMVirtualMachineScaleSetExists(resourceName),
),
},
},
})
}

func testGetAzureRMVirtualMachineScaleSet(s *terraform.State, resourceName string) (result *compute.VirtualMachineScaleSet, err error) {
// Ensure we have enough information in state to look up in API
rs, ok := s.RootModule().Resources[resourceName]
@@ -3983,3 +4002,136 @@ resource "azurerm_virtual_machine_scale_set" "test" {
}
`, rInt, location)
}

func testAccAzureRMVirtualMachineScaleSet_RollingAutoUpdates(rInt int, location string) string {
return fmt.Sprintf(`
resource "azurerm_resource_group" "test" {
name = "acctestrg-%[1]d"
location = "%[2]s"
}

resource "azurerm_virtual_network" "test" {
name = "acctvn-%[1]d"
address_space = ["10.0.0.0/8"]
location = "${azurerm_resource_group.test.location}"
resource_group_name = "${azurerm_resource_group.test.name}"
}

resource "azurerm_subnet" "test" {
name = "acctsub-%[1]d"
resource_group_name = "${azurerm_resource_group.test.name}"
virtual_network_name = "${azurerm_virtual_network.test.name}"
address_prefix = "10.0.0.0/16"
}

resource "azurerm_public_ip" "test" {
name = "PublicIPForLB"
[Review comment from @tombuildsstuff (Contributor), Aug 8, 2018: this should be acctestpip%d to get caught by the sweepers. #Resolved]

location = "${azurerm_resource_group.test.location}"
resource_group_name = "${azurerm_resource_group.test.name}"
public_ip_address_allocation = "Dynamic"
idle_timeout_in_minutes = 4
}

resource "azurerm_lb" "test" {
name = "acctestlb-%[1]d"
location = "${azurerm_resource_group.test.location}"
resource_group_name = "${azurerm_resource_group.test.name}"

frontend_ip_configuration {
name = "PublicIPAddress"
public_ip_address_id = "${azurerm_public_ip.test.id}"
}
}

resource "azurerm_lb_rule" "test" {
resource_group_name = "${azurerm_resource_group.test.name}"
loadbalancer_id = "${azurerm_lb.test.id}"
name = "LBRule"
protocol = "Tcp"
frontend_port = 22
backend_port = 22
frontend_ip_configuration_name = "PublicIPAddress"
probe_id = "${azurerm_lb_probe.test.id}"
backend_address_pool_id = "${azurerm_lb_backend_address_pool.test.id}"
}

resource "azurerm_lb_probe" "test" {
resource_group_name = "${azurerm_resource_group.test.name}"
loadbalancer_id = "${azurerm_lb.test.id}"
name = "ssh-running-probe"
port = 22
protocol = "Tcp"
}

resource "azurerm_lb_backend_address_pool" "test" {
name = "test"
resource_group_name = "${azurerm_resource_group.test.name}"
loadbalancer_id = "${azurerm_lb.test.id}"
}

resource "azurerm_virtual_machine_scale_set" "test" {
name = "acctvmss-%[1]d"
location = "${azurerm_resource_group.test.location}"
resource_group_name = "${azurerm_resource_group.test.name}"

upgrade_policy_mode = "Rolling"

automatic_os_upgrade = true

rolling_upgrade_policy {
max_batch_instance_percent = 20
max_unhealthy_instance_percent = 20
max_unhealthy_upgraded_instance_percent = 20
pause_time_between_batches = "PT0S"
}

health_probe_id = "${azurerm_lb_probe.test.id}"

sku {
name = "Standard_A0"
tier = "Standard"
capacity = 1
}

os_profile {
computer_name_prefix = "testvm-%[1]d"
admin_username = "myadmin"
admin_password = "Passwword1234"
}

network_profile {
name = "TestNetworkProfile"
primary = true

ip_configuration {
name = "TestIPConfiguration"
subnet_id = "${azurerm_subnet.test.id}"
load_balancer_backend_address_pool_ids = ["${azurerm_lb_backend_address_pool.test.id}"]
primary = true
}
}

storage_profile_os_disk {
name = ""
caching = "ReadWrite"
create_option = "FromImage"
managed_disk_type = "Standard_LRS"
}

storage_profile_data_disk {
lun = 0
caching = "ReadWrite"
create_option = "Empty"
disk_size_gb = 10
managed_disk_type = "Standard_LRS"
}

storage_profile_image_reference {
publisher = "Canonical"
offer = "UbuntuServer"
sku = "16.04-LTS"
version = "latest"
}
}
`, rInt, location)
}
22 changes: 22 additions & 0 deletions examples/vmss-automatic-rolling-updates/README.md
@@ -0,0 +1,22 @@
# Linux VM Scale Set with Automatic OS upgrades and Rolling Upgrade Policy

This template deploys a Linux (Ubuntu) VM Scale Set. Once the VMSS is deployed, the user can deploy an application inside each of the VMs (either by directly logging into the VMs or via a [`remote-exec` provisioner](https://www.terraform.io/docs/provisioners/remote-exec.html)).

Please review the official documentation first: [Azure virtual machine scale set automatic OS upgrades](https://docs.microsoft.com/en-us/azure/virtual-machine-scale-sets/virtual-machine-scale-sets-automatic-upgrade).

## main.tf
The `main.tf` file contains the actual resources that will be deployed. It also contains the Azure Resource Group definition and any defined variables.

## outputs.tf
This data is output when `terraform apply` is called, and can be queried using the `terraform output` command.

You may leave the provider block in the `main.tf`, as it is in this template, or you can create a file called `provider.tf` and add it to your `.gitignore` file.

Azure requires that an application is added to Azure Active Directory to generate the `client_id`, `client_secret`, and `tenant_id` needed by Terraform (`subscription_id` can be recovered from your Azure account details). Please go [here](https://www.terraform.io/docs/providers/azurerm/) for full instructions on how to create this to populate your `provider.tf` file.

## terraform.tfvars
If a `terraform.tfvars` or any `.auto.tfvars` files are present in the current directory, Terraform automatically loads them to populate variables. We don't recommend saving usernames and passwords to version control, but you can create a local secret variables file and use the `-var-file` flag or the `.auto.tfvars` extension to load it.

## variables.tf
The `variables.tf` file contains all of the input parameters that the user can specify when deploying this Terraform template.
