diff --git a/_partials/_create-upload-ssh-key.mdx b/_partials/_create-upload-ssh-key.mdx
index 95dd2a8020..4bd7834101 100644
--- a/_partials/_create-upload-ssh-key.mdx
+++ b/_partials/_create-upload-ssh-key.mdx
@@ -3,8 +3,6 @@ partial_category: palette-setup
partial_name: generate-ssh-key
---
-Follow these steps to create an SSH key using the terminal and upload it to Palette:
-
1. Open the terminal on your computer.
2. Check for existing SSH keys by invoking the following command.
diff --git a/docs/docs-content/getting-started/aws/deploy-manage-k8s-cluster-tf.md b/docs/docs-content/getting-started/aws/deploy-manage-k8s-cluster-tf.md
index e484b73aa9..ad927c9901 100644
--- a/docs/docs-content/getting-started/aws/deploy-manage-k8s-cluster-tf.md
+++ b/docs/docs-content/getting-started/aws/deploy-manage-k8s-cluster-tf.md
@@ -21,7 +21,6 @@ with the new cluster profile version, and then perform a rollback.
To complete this tutorial, you will need the following items in place:
-- Tenant admin access to Palette.
- Follow the steps described in the [Set up Palette with AWS](./setup.md) guide to authenticate Palette for use with
your AWS cloud account and create a Palette API key.
- [Docker Desktop](https://www.docker.com/products/docker-desktop/) or [Podman](https://podman.io/docs/installation)
@@ -248,6 +247,7 @@ resource "spectrocloud_cluster_profile" "aws-profile" {
tag = data.spectrocloud_pack.aws_ubuntu.version
uid = data.spectrocloud_pack.aws_ubuntu.id
values = data.spectrocloud_pack.aws_ubuntu.values
+ type = "spectro"
}
pack {
@@ -255,6 +255,7 @@ resource "spectrocloud_cluster_profile" "aws-profile" {
tag = data.spectrocloud_pack.aws_k8s.version
uid = data.spectrocloud_pack.aws_k8s.id
values = data.spectrocloud_pack.aws_k8s.values
+ type = "spectro"
}
pack {
@@ -262,6 +263,7 @@ resource "spectrocloud_cluster_profile" "aws-profile" {
tag = data.spectrocloud_pack.aws_cni.version
uid = data.spectrocloud_pack.aws_cni.id
values = data.spectrocloud_pack.aws_cni.values
+ type = "spectro"
}
pack {
@@ -269,6 +271,7 @@ resource "spectrocloud_cluster_profile" "aws-profile" {
tag = data.spectrocloud_pack.aws_csi.version
uid = data.spectrocloud_pack.aws_csi.id
values = data.spectrocloud_pack.aws_csi.values
+ type = "spectro"
}
pack {
@@ -282,6 +285,7 @@ resource "spectrocloud_cluster_profile" "aws-profile" {
db_password = base64encode(var.db_password),
auth_token = base64encode(var.auth_token)
})
+ type = "oci"
}
}
```
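For reference, the `type` attribute added to each `pack {}` block in the hunks above identifies the pack's source: `"spectro"` for packs served from the default Palette registry, and `"oci"` for the Helm-based application pack pulled from an OCI registry. A minimal sketch of one such block, assuming the same `spectrocloud_pack` data sources used elsewhere in the tutorial code:

```hcl
pack {
  name   = data.spectrocloud_pack.aws_ubuntu.name
  tag    = data.spectrocloud_pack.aws_ubuntu.version
  uid    = data.spectrocloud_pack.aws_ubuntu.id
  values = data.spectrocloud_pack.aws_ubuntu.values
  # "spectro" marks packs from the default Palette registry;
  # the application pack above sets "oci" instead.
  type   = "spectro"
}
```

Setting the attribute explicitly on every pack keeps the profile definition unambiguous when packs come from mixed registries.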
@@ -474,6 +478,13 @@ terraform init
Terraform has been successfully initialized!
```
+:::warning
+
+Before deploying the resources, ensure that there are no active clusters named `aws-cluster` or cluster profiles named
+`tf-aws-profile` in your Palette project.
+
+:::
+
Issue the `plan` command to preview the resources that Terraform will create.
```shell
@@ -533,9 +544,9 @@ to click on the logo to increase the counter and for a fun image change.
## Version Cluster Profiles
-As previously mentioned, Palette supports the creation of multiple cluster profile versions using the same profile name.
-This provides you with better change visibility and control over the layers in your host clusters. Profile versions are
-commonly used for adding or removing layers and pack configuration updates.
+Palette supports the creation of multiple cluster profile versions using the same profile name. This provides you with
+better change visibility and control over the layers in your host clusters. Profile versions are commonly used for
+adding or removing layers and pack configuration updates.
The version number of a given profile must be unique and use the semantic versioning format `major.minor.patch`. In this
tutorial, you used Terraform to deploy two versions of an AWS cluster profile. The snippet below displays a segment of
@@ -611,10 +622,8 @@ Once the changes have been completed, Palette marks the cluster layers with a gr
![Image that shows the cluster with Kubecost](/getting-started/aws/getting-started_deploy-manage-k8s-cluster_profile-with-kubecost.webp)
-Next, download the
-[kubeconfig](https://deploy-preview-3173--docs-spectrocloud.netlify.app/clusters/cluster-management/kubeconfig/) file
-for your cluster from the Palette UI. This file enables you and other users to issue `kubectl` commands against the host
-cluster.
+Next, download the [kubeconfig](../../clusters/cluster-management/kubeconfig.md) file for your cluster from the Palette
+UI. This file enables you and other users to issue `kubectl` commands against the host cluster.
![Image that shows the cluster's kubeconfig file location](/getting-started/aws/getting-started_deploy-manage-k8s-cluster_kubeconfig.webp)
@@ -685,7 +694,7 @@ the resources you created through Terraform.
terraform destroy --auto-approve
```
-Output:
+A successful execution of `terraform destroy` will output the following.
```shell
Destroy complete! Resources: 3 destroyed.
diff --git a/docs/docs-content/getting-started/aws/setup.md b/docs/docs-content/getting-started/aws/setup.md
index b543404b23..1ac1347917 100644
--- a/docs/docs-content/getting-started/aws/setup.md
+++ b/docs/docs-content/getting-started/aws/setup.md
@@ -13,16 +13,11 @@ order to authenticate Palette and allow it to deploy host clusters.
## Prerequisites
-The prerequisite steps to getting started with Palette on AWS are as follows.
-
-- Sign up to [Palette](https://www.spectrocloud.com/get-started).
-
- - Your Palette account role must have the `clusterProfile.create` permission to create a cluster profile. Refer to the
- [Roles and Permissions](../../user-management/palette-rbac/project-scope-roles-permissions.md#cluster-profile-admin)
- documentation for more information.
+- A Palette account with [tenant admin](../../tenant-settings/tenant-settings.md) access.
- Sign up to a public cloud account from
- [AWS](https://aws.amazon.com/premiumsupport/knowledge-center/create-and-activate-aws-account).
+ [AWS](https://aws.amazon.com/premiumsupport/knowledge-center/create-and-activate-aws-account). The AWS cloud account
+ must have the required [IAM policies](../../clusters/public-cloud/aws/required-iam-policies.md).
- An SSH key pair available in the region where you want to deploy the cluster. Check out the
[Create EC2 SSH Key Pair](https://docs.aws.amazon.com/ground-station/latest/ug/create-ec2-ssh-key-pair.html) for
diff --git a/docs/docs-content/getting-started/aws/update-k8s-cluster.md b/docs/docs-content/getting-started/aws/update-k8s-cluster.md
index 1e886c019e..cda86eba65 100644
--- a/docs/docs-content/getting-started/aws/update-k8s-cluster.md
+++ b/docs/docs-content/getting-started/aws/update-k8s-cluster.md
@@ -284,5 +284,5 @@ Cluster profiles provide consistency during the cluster creation process, as wel
They can be versioned to keep a record of previously working cluster states, giving you visibility when updating or
rolling back workloads across your environments.
-We recommend that you continue to the [Deploy a Cluster with Terraform](./deploy-manage-k8s-cluster-tf.md) page to learn
-about how you can use Palette with Terraform.
+We recommend that you continue to the [Cluster Management with Terraform](./deploy-manage-k8s-cluster-tf.md) page to
+learn about how you can use Palette with Terraform.
diff --git a/docs/docs-content/getting-started/azure/azure.md b/docs/docs-content/getting-started/azure/azure.md
index a67fef3766..0f64c5bf3c 100644
--- a/docs/docs-content/getting-started/azure/azure.md
+++ b/docs/docs-content/getting-started/azure/azure.md
@@ -43,10 +43,10 @@ your cluster is deployed, you can update it using cluster profile updates.
relativeURL: "./update-k8s-cluster",
},
{
- title: "Deploy a Cluster with Terraform",
- description: "Deploy a Palette host cluster with Terraform.",
+ title: "Cluster Management with Terraform",
+ description: "Deploy and update a Palette host cluster with Terraform.",
buttonText: "Learn more",
- relativeURL: "./deploy-k8s-cluster-tf",
+ relativeURL: "./deploy-manage-k8s-cluster-tf",
},
]}
/>
diff --git a/docs/docs-content/getting-started/azure/deploy-k8s-cluster-tf.md b/docs/docs-content/getting-started/azure/deploy-k8s-cluster-tf.md
deleted file mode 100644
index 41dde42eb9..0000000000
--- a/docs/docs-content/getting-started/azure/deploy-k8s-cluster-tf.md
+++ /dev/null
@@ -1,564 +0,0 @@
----
-sidebar_label: "Deploy a Cluster with Terraform"
-title: "Deploy a Cluster with Terraform"
-description: "Learn to deploy a Palette host cluster with Terraform."
-icon: ""
-hide_table_of_contents: false
-sidebar_position: 50
-tags: ["getting-started", "azure"]
----
-
-The [Spectro Cloud Terraform](https://registry.terraform.io/providers/spectrocloud/spectrocloud/latest/docs) provider
-enables you to create and manage Palette resources in a codified manner by leveraging Infrastructure as Code (IaC). Some
-notable reasons why you would want to utilize IaC are:
-
-- The ability to automate infrastructure.
-
-- Improved collaboration in making infrastructure changes.
-
-- Self-documentation of infrastructure through code.
-
-- Allows tracking all infrastructure in a single source of truth.
-
-If want to become more familiar with Terraform, we recommend you check out the
-[Terraform](https://developer.hashicorp.com/terraform/intro) learning resources from HashiCorp.
-
-This tutorial will teach you how to deploy a host cluster with Terraform using Amazon Web Services (AWS), Microsoft
-Azure, or Google Cloud Platform (GCP) cloud providers. You will learn about _Cluster Mode_ and _Cluster Profiles_ and
-how these components enable you to deploy customized applications to Kubernetes with minimal effort using the
-[Spectro Cloud Terraform](https://registry.terraform.io/providers/spectrocloud/spectrocloud/latest/docs) provider.
-
-## Prerequisites
-
-To complete this tutorial, you will need the following items
-
-- Basic knowledge of containers.
-- [Docker Desktop](https://www.docker.com/products/docker-desktop/), [Podman](https://podman.io/docs/installation) or
- another container management tool.
-
-- Follow the steps described in the [Set up Palette with Azure](./setup.md) guide to authenticate Palette for use with
- your Azure cloud account.
-
-## Set Up Local Environment
-
-You can clone the tutorials repository locally or follow along by downloading a Docker image that contains the tutorial
-code and all dependencies.
-
-
-
-:::warning
-
-If you choose to clone the repository instead of using the tutorial container make sure you have Terraform v1.4.0 or
-greater installed.
-
-:::
-
-
-
-
-
-
-
-Ensure Docker Desktop on your local machine is available. Use the following command and ensure you receive an output
-displaying the version number.
-
-```bash
-docker version
-```
-
-Download the tutorial image to your local machine.
-
-```bash
-docker pull ghcr.io/spectrocloud/tutorials:1.1.3
-```
-
-Next, start the container, and open a bash session into it.
-
-```shell
-docker run --name tutorialContainer --interactive --tty ghcr.io/spectrocloud/tutorials:1.1.3 bash
-```
-
-Navigate to the tutorial code.
-
-```shell
-cd /terraform/iaas-cluster-deployment-tf
-```
-
-
-
-
-
-If you are not running a Linux operating system, create and start the Podman Machine in your local environment.
-Otherwise, skip this step.
-
-```bash
-podman machine init
-podman machine start
-```
-
-Use the following command and ensure you receive an output displaying the installation information.
-
-```bash
-podman info
-```
-
-Download the tutorial image to your local machine.
-
-```bash
-podman pull ghcr.io/spectrocloud/tutorials:1.1.3
-```
-
-Next, start the container, and open a bash session into it.
-
-```shell
-podman run --name tutorialContainer --interactive --tty ghcr.io/spectrocloud/tutorials:1.1.3 bash
-```
-
-Navigate to the tutorial code.
-
-```shell
-cd /terraform/iaas-cluster-deployment-tf
-```
-
-
-
-
-
-Open a terminal window and download the tutorial code from GitHub.
-
-```shell
-git@github.com:spectrocloud/tutorials.git
-```
-
-Change the directory to the tutorial folder.
-
-```shell
-cd tutorials/
-```
-
-Check out the following git tag.
-
-```shell
-git checkout v1.1.3
-```
-
-Change the directory to the tutorial code.
-
-```shell
-cd terraform/iaas-cluster-deployment-tf/
-```
-
-
-
-
-
-## Create an API Key
-
-Before you can get started with the Terraform code, you need a Spectro Cloud API key.
-
-To create an API key, log in to [Palette](https://console.spectrocloud.com) and click on the user **User Menu** and
-select **My API Keys**.
-
-![Image that points to the user drop-down Menu and points to the API key link](/tutorials/deploy-clusters/clusters_public-cloud_deploy-k8s-cluster_create_api_key.webp)
-
-Next, click on **Add New API Key**. Fill out the required input field, **API Key Name**, and the **Expiration Date**.
-Click on **Confirm** to create the API key. Copy the key value to your clipboard, as you will use it shortly.
-
-
-
-In your terminal session, issue the following command to export the API key as an environment variable.
-
-
-
-```shell
-export SPECTROCLOUD_APIKEY=YourAPIKeyHere
-```
-
-The [Spectro Cloud Terraform](https://registry.terraform.io/providers/spectrocloud/spectrocloud/latest/docs) provider
-requires credentials to interact with the Palette API. The Spectro Cloud Terraform provider will use the environment
-variable to authenticate with the Spectro Cloud API endpoint.
-
-## Resources Review
-
-To help you get started with Terraform, the tutorial code is structured to support deploying a cluster to either Azure,
-GCP, or AWS. Before you deploy a host cluster to your target provider, take a few moments to review the following files
-in the folder structure.
-
-- **provider.tf** - This file contains the Terraform providers that are used to support the deployment of the cluster.
-
-- **inputs.tf** - This file contains all the Terraform variables for the deployment logic.
-
-- **data.tf** - This file contains all the query resources that perform read actions.
-
-- **cluster_profiles.tf** - This file contains the cluster profile definitions for each cloud provider.
-
-- **clusters.tf** - This file has all the required cluster configurations to deploy a host cluster to one of the cloud
- providers.
-
-- **terraform.tfvars** - Use this file to customize the deployment and target a specific cloud provider. This is the
- primary file you will modify.
-
-- **outputs.tf** - This file contains content that will be output in the terminal session upon a successful Terraform
- `apply` action.
-
-The following section allows you to review the core Terraform resources more closely.
-
-#### Provider
-
-The **provider.tf** file contains the Terraform providers and their respective versions. The tutorial uses two
-providers - the Spectro Cloud Terraform provider and the TLS Terraform provider. Note how the project name is specified
-in the `provider "spectrocloud" {}` block. You can change the target project by changing the value specified in the
-`project_name` parameter.
-
-```hcl
-terraform {
- required_providers {
- spectrocloud = {
- version = ">= 0.13.1"
- source = "spectrocloud/spectrocloud"
- }
- tls = {
- source = "hashicorp/tls"
- version = "4.0.4"
- }
- }
-}
-
-provider "spectrocloud" {
- project_name = "Default"
-}
-```
-
-The next file you should become familiar with is the **cluster-profiles.tf** file.
-
-The Spectro Cloud Terraform provider has several resources available for use. When creating a cluster profile, use
-`spectrocloud_cluster_profile`. This resource can be used to customize all layers of a cluster profile. You can specify
-all the different packs and versions to use and add a manifest or Helm chart.
-
-In the **cluster-profiles.tf** file, the cluster profile resource is declared three times. Each instance of the resource
-is for a specific cloud provider. Using the Azure cluster profile as an example, note how the **cluster-profiles.tf**
-file uses `pack {}` blocks to specify each layer of the profile. The order in which you arrange contents of the
-`pack {}` blocks plays an important role, as each layer maps to the core infrastructure in a cluster profile.
-
-The first listed `pack {}` block must be the OS, followed by Kubernetes, the container network interface, and the
-container storage interface. The first `pack {}` block in the list equates to the bottom layer of the cluster profile.
-Ensure you define the bottom layer of the cluster profile - the OS layer - first in the list of `pack {}` blocks.
-
-```hcl
-resource "spectrocloud_cluster_profile" "azure-profile" {
- count = var.deploy-azure ? 1 : 0
-
- name = "tf-azure-profile"
- description = "A basic cluster profile for Azure"
- tags = concat(var.tags, ["env:azure"])
- cloud = "azure"
- type = "cluster"
-
- pack {
- name = data.spectrocloud_pack.azure_ubuntu.name
- tag = data.spectrocloud_pack.azure_ubuntu.version
- uid = data.spectrocloud_pack.azure_ubuntu.id
- values = data.spectrocloud_pack.azure_ubuntu.values
- }
-
- pack {
- name = data.spectrocloud_pack.azure_k8s.name
- tag = data.spectrocloud_pack.azure_k8s.version
- uid = data.spectrocloud_pack.azure_k8s.id
- values = data.spectrocloud_pack.azure_k8s.values
- }
-
- pack {
- name = data.spectrocloud_pack.azure_cni.name
- tag = data.spectrocloud_pack.azure_cni.version
- uid = data.spectrocloud_pack.azure_cni.id
- values = data.spectrocloud_pack.azure_cni.values
- }
-
- pack {
- name = data.spectrocloud_pack.azure_csi.name
- tag = data.spectrocloud_pack.azure_csi.version
- uid = data.spectrocloud_pack.azure_csi.id
- values = data.spectrocloud_pack.azure_csi.values
- }
-
- pack {
- name = "hello-universe"
- type = "manifest"
- tag = "1.0.0"
- values = ""
- manifest {
- name = "hello-universe"
- content = file("manifests/hello-universe.yaml")
- }
- }
-}
-```
-
-The last `pack {}` block contains a manifest file with all the Kubernetes configurations for the
-[Hello Universe](https://github.com/spectrocloud/hello-universe) application. Including the application in the profile
-ensures the application is installed during cluster deployment. If you wonder what all the data resources are for, head
-to the next section to review them.
-
-You may have noticed that each `pack {}` block contains references to a data resource.
-
-```hcl
- pack {
- name = data.spectrocloud_pack.azure_csi.name
- tag = data.spectrocloud_pack.azure_csi.version
- uid = data.spectrocloud_pack.azure_csi.id
- values = data.spectrocloud_pack.azure_csi.values
- }
-```
-
-[Data resources](https://developer.hashicorp.com/terraform/language/data-sources) are used to perform read actions in
-Terraform. The Spectro Cloud Terraform provider exposes several data resources to help you make your Terraform code more
-dynamic. The data resource used in the cluster profile is `spectrocloud_pack`. This resource enables you to query
-Palette for information about a specific pack. You can get information about the pack using the data resource such as
-unique ID, registry ID, available versions, and the pack's YAML values.
-
-Below is the data resource used to query Palette for information about the Kubernetes pack for version `1.27.5`.
-
-```hcl
-data "spectrocloud_pack" "azure_k8s" {
- name = "kubernetes"
- version = "1.27.5"
- registry_uid = data.spectrocloud_registry.public_registry.id
-}
-```
-
-Using the data resource, you avoid manually typing in the parameter values required by the cluster profile's `pack {}`
-block.
-
-The **clusters.tf** file contains the definitions for deploying a host cluster to one of the cloud providers. To create
-a host cluster, you must use a cluster resource for the cloud provider you are targeting. The following Terraform
-cluster resources are defined in this file.
-
-| Terraform Resource | Platform |
-| ------------------------------------------------------------------------------------------------------------------------------------- | -------- |
-| [`spectrocloud_cluster_aws`](https://registry.terraform.io/providers/spectrocloud/spectrocloud/latest/docs/resources/cluster_aws) | AWS |
-| [`spectrocloud_cluster_azure`](https://registry.terraform.io/providers/spectrocloud/spectrocloud/latest/docs/resources/cluster_azure) | Azure |
-| [`spectrocloud_cluster_gcp`](https://registry.terraform.io/providers/spectrocloud/spectrocloud/latest/docs/resources/cluster_gcp) | GCP |
-
-Using the `spectrocloud_cluster_azure` resource in this tutorial as an example, note how the resource accepts a set of
-parameters. When deploying a cluster, you can change the same parameters in the Palette user interface (UI). You can
-learn more about each parameter by reviewing the resource documentation page hosted in the Terraform registry.
-
-```hcl
-resource "spectrocloud_cluster_azure" "cluster" {
- name = "azure-cluster"
- tags = concat(var.tags, ["env:azure"])
- cloud_account_id = data.spectrocloud_cloudaccount_azure.account[0].id
-
- cloud_config {
- subscription_id = var.azure_subscription_id
- resource_group = var.azure_resource_group
- region = var.azure-region
- ssh_key = tls_private_key.tutorial_ssh_key[0].public_key_openssh
- }
-
- cluster_profile {
- id = spectrocloud_cluster_profile.azure-profile[0].id
- }
-
- machine_pool {
- control_plane = true
- control_plane_as_worker = true
- name = "control-plane-pool"
- count = var.azure_control_plane_nodes.count
- instance_type = var.azure_control_plane_nodes.instance_type
- azs = var.azure_control_plane_nodes.azs
- is_system_node_pool = var.azure_control_plane_nodes.is_system_node_pool
- disk {
- size_gb = var.azure_control_plane_nodes.disk_size_gb
- type = "Standard_LRS"
- }
- }
-
- machine_pool {
- name = "worker-basic"
- count = var.azure_worker_nodes.count
- instance_type = var.azure_worker_nodes.instance_type
- azs = var.azure_worker_nodes.azs
- is_system_node_pool = var.azure_worker_nodes.is_system_node_pool
- }
-
- timeouts {
- create = "30m"
- delete = "15m"
- }
-}
-```
-
-To deploy a cluster using Terraform, you must first modify the **terraform.tfvars** file. Open the **terraform.tfvars**
-file in the editor of your choice, and locate the cloud provider you will use to deploy a host cluster.
-
-To simplify the process, we added a toggle variable in the Terraform template, that you can use to select the deployment
-environment. Each cloud provider has a section in the template that contains all the variables you must populate.
-Variables to populate are identified with `REPLACE_ME`.
-
-In the example Azure section below, you would change `deploy-azure = false` to `deploy-azure = true` to deploy to Azure.
-Additionally, you would replace all the variables with a value `REPLACE_ME`. You can also update the values for nodes in
-the control plane pool or worker pool.
-
-```hcl
-###########################
-# Azure Deployment Settings
-############################
-deploy-azure = false # Set to true to deploy to Azure
-azure-use-azs = true # Set to false when you deploy to a region without AZs
-
-azure-cloud-account-name = "REPLACE_ME"
-azure-region = "REPLACE_ME"
-azure_subscription_id = "REPLACE_ME"
-azure_resource_group = "REPLACE_ME"
-
-
-azure_master_nodes = {
- count = "1"
- control_plane = true
- instance_type = "Standard_A8_v2"
- disk_size_gb = "60"
- azs = ["1"] # If you want to deploy to multiple AZs, add them here.
- is_system_node_pool = false
-}
-
-azure_worker_nodes = {
- count = "1"
- control_plane = false
- instance_type = "Standard_A8_v2"
- disk_size_gb = "60"
- azs = ["1"] # If you want to deploy to multiple AZs, add them here.
- is_system_node_pool = false
-}
-```
-
-When you are done making the required changes, issue the following command to initialize Terraform.
-
-```shell
-terraform init
-```
-
-Next, issue the `plan` command to preview the changes.
-
-```shell
-terraform plan
-```
-
-Output:
-
-```shell
-Plan: 2 to add, 0 to change, 0 to destroy.
-```
-
-If you change the desired cloud provider's toggle variable to `true,` you will receive an output message that two new
-resources will be created. The two resources are your cluster profile and the host cluster.
-
-To deploy all the resources, use the `apply` command.
-
-```shell
-terraform apply -auto-approve
-```
-
-To check out the cluster profile creation in Palette, log in to [Palette](https://console.spectrocloud.com), and from
-the left **Main Menu** click on **Profiles**. Locate the cluster profile with the name `tf-azure-profile`. Click on the
-cluster profile to review its details, such as layers, packs, and versions.
-
-![A view of the cluster profile](/getting-started/azure/getting-started_deploy-k8s-cluster-tf_profile_review.webp)
-
-You can also check the cluster creation process by navigating to the left **Main Menu** and selecting **Clusters**.
-
-![Update the cluster](/getting-started/azure/getting-started_deploy-k8s-cluster_create_cluster.webp)
-
-Select your cluster to review its details page, which contains the status, cluster profile, event logs, and more.
-
-The cluster deployment may take several minutes depending on the cloud provider, node count, node sizes used, and the
-cluster profile. You can learn more about the deployment progress by reviewing the event log. Click on the **Events**
-tab to check the event log.
-
-![Update the cluster](/getting-started/azure/getting-started_deploy-k8s-cluster_event_log.webp)
-
-## Verify the Application
-
-When the cluster deploys, you can access the Hello Universe application. From the cluster's **Overview** page, click on
-the URL for port **:8080** next to the **hello-universe-service** in the **Services** row. This URL will take you to the
-application landing page.
-
-:::warning
-
-It can take up to three minutes for DNS to properly resolve the public load balancer URL. We recommend waiting a few
-moments before clicking on the service URL to prevent the browser from caching an unresolved DNS request.
-
-:::
-
-![Deployed application](/getting-started/getting-started_deploy-k8s-cluster_hello-universe-without-api.webp)
-
-Welcome to Hello Universe, a demo application to help you learn more about Palette and its features. Feel free to click
-on the logo to increase the counter and for a fun image change.
-
-You have deployed your first application to a cluster managed by Palette through Terraform. Your first application is a
-single container application with no upstream dependencies.
-
-## Cleanup
-
-Use the following steps to clean up the resources you created for the tutorial. Use the `destroy` command to remove all
-the resources you created through Terraform.
-
-```shell
-terraform destroy --auto-approve
-```
-
-Output:
-
-```shell
-Destroy complete! Resources: 2 destroyed.
-```
-
-:::info
-
-If a cluster remains in the delete phase for over 15 minutes, it becomes eligible for force delete. To trigger a force
-delete, navigate to the cluster’s details page and click on **Settings**. Click on **Force Delete Cluster** to delete
-the cluster. Palette automatically removes clusters stuck in the cluster deletion phase for over 24 hours.
-
-:::
-
-If you are using the tutorial container and want to exit the container, type `exit` in your terminal session and press
-the **Enter** key. Next, issue the following command to stop the container.
-
-
-
-
-
-```shell
-docker stop tutorialContainer && \
-docker rmi --force ghcr.io/spectrocloud/tutorials:1.1.3
-```
-
-
-
-
-
-```shell
-podman stop tutorialContainer && \
-podman rmi --force ghcr.io/spectrocloud/tutorials:1.1.3
-```
-
-
-
-
-
-## Wrap-Up
-
-In this tutorial, you created a cluster profile, which is a template that contains the core layers required to deploy a
-host cluster. You then deployed a host cluster onto your preferred cloud service provider using Terraform.
-
-We encourage you to check out the [Deploy an Application using Palette Dev Engine](../../devx/apps/deploy-app.md)
-tutorial to learn more about Palette. Palette Dev Engine can help you deploy applications more quickly through the usage
-of [virtual clusters](../../glossary-all.md#palette-virtual-cluster). Feel free to check out the reference links below
-to learn more about Palette.
-
-- [Palette Modes](../../introduction/palette-modes.md)
-
-- [Palette Clusters](../../clusters/clusters.md)
-
-- [Hello Universe GitHub repository](https://github.com/spectrocloud/hello-universe)
diff --git a/docs/docs-content/getting-started/azure/deploy-manage-k8s-cluster-tf.md b/docs/docs-content/getting-started/azure/deploy-manage-k8s-cluster-tf.md
new file mode 100644
index 0000000000..b6a4fea5c0
--- /dev/null
+++ b/docs/docs-content/getting-started/azure/deploy-manage-k8s-cluster-tf.md
@@ -0,0 +1,748 @@
+---
+sidebar_label: "Cluster Management with Terraform"
+title: "Cluster Management with Terraform"
+description: "Learn how to deploy and update a Palette host cluster to Azure with Terraform."
+icon: ""
+hide_table_of_contents: false
+sidebar_position: 50
+toc_max_heading_level: 2
+tags: ["getting-started", "azure", "terraform"]
+---
+
+The [Spectro Cloud Terraform](https://registry.terraform.io/providers/spectrocloud/spectrocloud/latest/docs) provider
+allows you to create and manage Palette resources using Infrastructure as Code (IaC). With IaC, you can automate the
+provisioning of resources, collaborate on changes, and maintain a single source of truth for your infrastructure.
+
+This tutorial will teach you how to use Terraform to deploy and update an Azure host cluster. You will learn how to
+create two versions of a cluster profile with different demo applications, update the deployed cluster with the new
+cluster profile version, and then perform a rollback.
+
+## Prerequisites
+
+To complete this tutorial, you will need the following items in place:
+
+- Follow the steps described in the [Set up Palette with Azure](./setup.md) guide to authenticate Palette for use with
+ your Azure cloud account and create a Palette API key.
+- [Docker Desktop](https://www.docker.com/products/docker-desktop/) or [Podman](https://podman.io/docs/installation)
+ installed if you choose to follow along using the tutorial container.
+- If you choose to clone the repository instead of using the tutorial container, make sure you have the following
+ software installed:
+ - [Terraform](https://developer.hashicorp.com/terraform/tutorials/aws-get-started/install-cli) v1.9.0 or greater
+ - [Git](https://git-scm.com/downloads)
+ - [Kubectl](https://kubernetes.io/docs/tasks/tools/)
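+
+If you clone the repository, you can verify the local tooling before continuing. These commands are a quick sanity
+check, assuming the binaries are already available on your `PATH`; each prints version information when the tool is
+installed correctly.
+
+```shell
+terraform version
+git --version
+kubectl version --client
+```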
+
+## Set Up Local Environment
+
+You can clone the [Tutorials](https://github.com/spectrocloud/tutorials) repository locally or follow along by
+downloading a container image that includes the tutorial code and all dependencies.
+
+
+
+
+
+Start Docker Desktop and ensure that the Docker daemon is available by issuing the following command.
+
+```bash
+docker ps
+```
+
+Next, download the tutorial image, start the container, and open a bash session into it.
+
+```shell
+docker run --name tutorialContainer --interactive --tty ghcr.io/spectrocloud/tutorials:1.1.7 bash
+```
+
+Navigate to the folder that contains the tutorial code.
+
+```shell
+cd /terraform/getting-started-deployment-tf
+```
+
+:::warning
+
+Do not exit the container until the tutorial is complete. Otherwise, you may lose your progress.
+
+:::
+
+
+
+
+
+If you are not using a Linux operating system, create and start the Podman Machine in your local environment. Otherwise,
+skip this step.
+
+```bash
+podman machine init
+podman machine start
+```
+
+Use the following command and ensure you receive an output displaying the installation information.
+
+```bash
+podman info
+```
+
+Next, download the tutorial image, start the container, and open a bash session into it.
+
+```shell
+podman run --name tutorialContainer --interactive --tty ghcr.io/spectrocloud/tutorials:1.1.7 bash
+```
+
+Navigate to the folder that contains the tutorial code.
+
+```shell
+cd /terraform/getting-started-deployment-tf
+```
+
+:::warning
+
+Do not exit the container until the tutorial is complete. Otherwise, you may lose your progress.
+
+:::
+
+
+
+
+
+Open a terminal window and download the tutorial code from GitHub.
+
+```shell
+git clone https://github.com/spectrocloud/tutorials.git
+```
+
+Change the directory to the tutorial folder.
+
+```shell
+cd tutorials/
+```
+
+Check out the following git tag.
+
+```shell
+git checkout v1.1.7
+```
+
+Navigate to the folder that contains the tutorial code.
+
+```shell
+cd /terraform/getting-started-deployment-tf
+```
+
+
+
+
+
+## Resources Review
+
+To help you get started with Terraform, the tutorial code is structured to support deploying a cluster to either AWS,
+Azure, GCP, or VMware vSphere. Before you deploy a host cluster to Azure, review the following files in the folder
+structure.
+
+| **File** | **Description** |
+| ----------------------- | ---------------------------------------------------------------------------------------------------------------------- |
+| **provider.tf** | This file contains the Terraform providers that are used to support the deployment of the cluster. |
+| **inputs.tf** | This file contains all the Terraform variables required for the deployment logic. |
+| **data.tf** | This file contains all the query resources that perform read actions. |
+| **cluster_profiles.tf** | This file contains the cluster profile definitions for each cloud provider. |
+| **clusters.tf** | This file has the cluster configurations required to deploy a host cluster to one of the cloud providers. |
+| **terraform.tfvars** | Use this file to target a specific cloud provider and customize the deployment. This is the only file you must modify. |
+| **ippool.tf** | This file contains the configuration required for VMware deployments that use static IP placement. |
+| **ssh-key.tf** | This file has the SSH key resource definition required for Azure and VMware deployments. |
+| **outputs.tf** | This file contains the content that will be displayed in the terminal after a successful Terraform `apply` action. |
+
+The following section reviews the core Terraform resources more closely.
+
+#### Provider
+
+The **provider.tf** file contains the Terraform providers used in the tutorial and their respective versions. This
+tutorial uses four providers:
+
+- [Spectro Cloud](https://registry.terraform.io/providers/spectrocloud/spectrocloud/latest/docs)
+- [TLS](https://registry.terraform.io/providers/hashicorp/tls/latest)
+- [vSphere](https://registry.terraform.io/providers/hashicorp/vsphere/latest)
+- [Local](https://registry.terraform.io/providers/hashicorp/local/latest)
+
+Note how the project name is specified in the `provider "spectrocloud" {}` block. You can change the target project by
+modifying the value of the `palette-project` variable in the **terraform.tfvars** file.
+
+```hcl
+terraform {
+ required_providers {
+ spectrocloud = {
+ version = ">= 0.20.6"
+ source = "spectrocloud/spectrocloud"
+ }
+
+ tls = {
+ source = "hashicorp/tls"
+ version = "4.0.4"
+ }
+
+ vsphere = {
+ source = "hashicorp/vsphere"
+ version = ">= 2.6.1"
+ }
+
+ local = {
+ source = "hashicorp/local"
+ version = "2.4.1"
+ }
+ }
+
+ required_version = ">= 1.9"
+}
+
+provider "spectrocloud" {
+ project_name = var.palette-project
+}
+```
+
+#### Cluster Profile
+
+The next file you should become familiar with is the **cluster_profiles.tf** file. The `spectrocloud_cluster_profile`
+resource allows you to create a cluster profile and customize its layers. You can specify the packs and versions to use
+or add a manifest or Helm chart.
+
+The cluster profile resource is declared eight times in the **cluster_profiles.tf** file, with each pair of resources
+being designated for a specific provider. In this tutorial, two versions of the Azure cluster profile are deployed:
+version `1.0.0` deploys the [Hello Universe](https://github.com/spectrocloud/hello-universe) pack, while version `1.1.0`
+deploys the [Kubecost](https://www.kubecost.com/) pack along with the
+[Hello Universe](https://github.com/spectrocloud/hello-universe) application.
+
+The cluster profiles include layers for the Operating System (OS), Kubernetes, the Container Network Interface (CNI),
+and the Container Storage Interface (CSI). The first `pack {}` block in the list corresponds to the bottom layer of the
+cluster profile. Because the order of the `pack {}` blocks determines the layer order during cluster profile creation,
+ensure you define the bottom layer, the OS layer, first in the list.
+The table below displays the packs deployed in each version of the cluster profile.
+
+| **Pack Type** | **Pack Name** | **Version** | **Cluster Profile v1.0.0** | **Cluster Profile v1.1.0** |
+| ------------- | ------------------ | ----------- | -------------------------- | -------------------------- |
+| OS | `ubuntu-azure` | `22.04` | :white_check_mark: | :white_check_mark: |
+| Kubernetes | `kubernetes` | `1.27.5` | :white_check_mark: | :white_check_mark: |
+| Network | `cni-calico-azure` | `3.26.1` | :white_check_mark: | :white_check_mark: |
+| Storage | `csi-azure` | `1.28.3` | :white_check_mark: | :white_check_mark: |
+| App Services | `hellouniverse` | `1.1.2` | :white_check_mark: | :white_check_mark: |
+| App Services | `cost-analyzer` | `1.103.3` | :x: | :white_check_mark: |
+
+The Hello Universe pack has two configured [presets](../../glossary-all.md#presets). The first preset deploys a
+standalone frontend application, while the second one deploys a three-tier application with a frontend, API server, and
+Postgres database. This tutorial deploys the three-tier version of the
+[Hello Universe](https://github.com/spectrocloud/hello-universe) pack. The preset selection in the Terraform code is
+specified within the Hello Universe pack block with the `values` field and by using the **values-3tier.yaml** file.
+Below is an example of version `1.0.0` of the Azure cluster profile Terraform resource.
+
+```hcl
+resource "spectrocloud_cluster_profile" "azure-profile" {
+ count = var.deploy-azure ? 1 : 0
+
+ name = "tf-azure-profile"
+ description = "A basic cluster profile for Azure"
+ tags = concat(var.tags, ["env:azure"])
+ cloud = "azure"
+ type = "cluster"
+ version = "1.0.0"
+
+ pack {
+ name = data.spectrocloud_pack.azure_ubuntu.name
+ tag = data.spectrocloud_pack.azure_ubuntu.version
+ uid = data.spectrocloud_pack.azure_ubuntu.id
+ values = data.spectrocloud_pack.azure_ubuntu.values
+ type = "spectro"
+ }
+
+ pack {
+ name = data.spectrocloud_pack.azure_k8s.name
+ tag = data.spectrocloud_pack.azure_k8s.version
+ uid = data.spectrocloud_pack.azure_k8s.id
+ values = data.spectrocloud_pack.azure_k8s.values
+ type = "spectro"
+ }
+
+ pack {
+ name = data.spectrocloud_pack.azure_cni.name
+ tag = data.spectrocloud_pack.azure_cni.version
+ uid = data.spectrocloud_pack.azure_cni.id
+ values = data.spectrocloud_pack.azure_cni.values
+ type = "spectro"
+ }
+
+ pack {
+ name = data.spectrocloud_pack.azure_csi.name
+ tag = data.spectrocloud_pack.azure_csi.version
+ uid = data.spectrocloud_pack.azure_csi.id
+ values = data.spectrocloud_pack.azure_csi.values
+ type = "spectro"
+ }
+
+ pack {
+ name = data.spectrocloud_pack.hellouniverse.name
+ tag = data.spectrocloud_pack.hellouniverse.version
+ uid = data.spectrocloud_pack.hellouniverse.id
+ values = templatefile("manifests/values-3tier.yaml", {
+ namespace = var.app_namespace,
+ port = var.app_port,
+ replicas = var.replicas_number
+ db_password = base64encode(var.db_password),
+ auth_token = base64encode(var.auth_token)
+ })
+ type = "oci"
+ }
+}
+```
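+Note that the database password and authentication token are passed through Terraform's `base64encode()` function,
+likely because the manifest consumes them as Kubernetes Secret data, which must be base64 encoded. As an illustration,
+the following shell command produces the same encoding that `base64encode("password")` would.
+
+```shell
+# Base64 encode the example database password. The -n flag prevents echo
+# from appending a newline, which would change the encoded output.
+echo -n "password" | base64
+```
+
+The command outputs `cGFzc3dvcmQ=`.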
+
+#### Data Resources
+
+Each `pack {}` block contains references to a data resource.
+[Data resources](https://developer.hashicorp.com/terraform/language/data-sources) are used to perform read actions in
+Terraform. The Spectro Cloud Terraform provider exposes several data resources to help you make your Terraform code more
+dynamic. The data resource used in the cluster profile is `spectrocloud_pack`. This resource enables you to query
+Palette for information about a specific pack, such as its unique ID, registry ID, available versions, and YAML values.
+
+Below is the data resource used to query Palette for information about the Kubernetes pack for version `1.27.5`.
+
+```hcl
+data "spectrocloud_pack" "azure_k8s" {
+ name = "kubernetes"
+ version = "1.27.5"
+ registry_uid = data.spectrocloud_registry.public_registry.id
+}
+```
+
+Using the data resource helps you avoid manually entering the parameter values required by the cluster profile's
+`pack {}` block.
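+For example, the `registry_uid` argument in the snippet above references a `spectrocloud_registry` data resource.
+Assuming the packs come from the default public Palette registry, its definition in **data.tf** might look similar to
+the following sketch.
+
+```hcl
+# Query Palette for the public pack registry. The registry name is an
+# assumption; check data.tf for the exact value used in the tutorial.
+data "spectrocloud_registry" "public_registry" {
+  name = "Public Repo"
+}
+```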
+
+#### Cluster
+
+The **clusters.tf** file contains the definitions required for deploying a host cluster to one of the infrastructure
+providers. To create an Azure host cluster, you must set the `deploy-azure` variable in the **terraform.tfvars** file
+to `true`.
+
+When deploying a cluster using Terraform, you must provide the same parameters as those available in the Palette UI for
+the cluster deployment step, such as the instance size and number of nodes. You can learn more about each parameter by
+reviewing the
+[Azure cluster resource](https://registry.terraform.io/providers/spectrocloud/spectrocloud/latest/docs/resources/cluster_azure)
+documentation.
+
+```hcl
+resource "spectrocloud_cluster_azure" "azure-cluster" {
+ count = var.deploy-azure ? 1 : 0
+
+ name = "azure-cluster"
+ tags = concat(var.tags, ["env:azure"])
+ cloud_account_id = data.spectrocloud_cloudaccount_azure.account[0].id
+
+ cloud_config {
+ subscription_id = var.azure_subscription_id
+ resource_group = var.azure_resource_group
+ region = var.azure-region
+ ssh_key = tls_private_key.tutorial_ssh_key_azure[0].public_key_openssh
+ }
+
+ cluster_profile {
+ id = var.deploy-azure && var.deploy-azure-kubecost ? resource.spectrocloud_cluster_profile.azure-profile-kubecost[0].id : resource.spectrocloud_cluster_profile.azure-profile[0].id
+ }
+
+ machine_pool {
+ control_plane = true
+ control_plane_as_worker = true
+ name = "control-plane-pool"
+ count = var.azure_control_plane_nodes.count
+ instance_type = var.azure_control_plane_nodes.instance_type
+ azs = var.azure-use-azs ? var.azure_control_plane_nodes.azs : [""]
+ is_system_node_pool = var.azure_control_plane_nodes.is_system_node_pool
+ disk {
+ size_gb = var.azure_control_plane_nodes.disk_size_gb
+ type = "Standard_LRS"
+ }
+ }
+
+ machine_pool {
+ name = "worker-basic"
+ count = var.azure_worker_nodes.count
+ instance_type = var.azure_worker_nodes.instance_type
+ azs = var.azure-use-azs ? var.azure_worker_nodes.azs : [""]
+ is_system_node_pool = var.azure_worker_nodes.is_system_node_pool
+ }
+
+ timeouts {
+ create = "30m"
+ delete = "15m"
+ }
+}
+```
+
+## Terraform Tests
+
+Before starting the cluster deployment, test the Terraform code to ensure the resources will be provisioned correctly.
+Issue the following command in your terminal.
+
+```bash
+terraform test
+```
+
+A successful test execution will output the following.
+
+```text hideClipboard
+Success! 16 passed, 0 failed.
+```
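+Terraform's built-in test framework discovers tests in files ending in `.tftest.hcl`. As a hypothetical illustration of
+the format (the tutorial's actual test files may assert different conditions), a test might look like the following.
+
+```hcl
+# Hypothetical example of a *.tftest.hcl file. Each run block executes a
+# plan or apply operation and evaluates its assertions.
+run "validate_profile_name" {
+  command = plan
+
+  assert {
+    condition     = spectrocloud_cluster_profile.azure-profile[0].name == "tf-azure-profile"
+    error_message = "The Azure cluster profile name does not match the expected value."
+  }
+}
+```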
+
+## Input Variables
+
+To deploy a cluster using Terraform, you must first modify the **terraform.tfvars** file. Open it in the editor of your
+choice. The tutorial container includes the editor [Nano](https://www.nano-editor.org).
+
+The file is structured in sections. Each provider has a section with variables that need to be filled in, identified
+by the placeholder `REPLACE ME`. Additionally, each provider has a toggle variable named `deploy-<provider>`, which you
+can use to select the deployment environment.
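+As an illustration, a boolean toggle such as `deploy-azure` is declared in **inputs.tf** along the lines of the
+following sketch; the exact description and default may differ.
+
+```hcl
+# Toggle that controls whether the Azure cluster profile and host
+# cluster resources are created (used in their count expressions).
+variable "deploy-azure" {
+  type        = bool
+  description = "Set to true to deploy the Azure host cluster."
+  default     = false
+}
+```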
+
+In the **Palette Settings** section, modify the value of the `palette-project` variable if you wish to deploy to a
+Palette project different from the default one.
+
+```hcl {4}
+#####################
+# Palette Settings
+#####################
+palette-project = "Default" # The name of your project in Palette.
+```
+
+Next, in the **Hello Universe Configuration** section, provide values for the database password and authentication token
+for the Hello Universe pack. For example, you can use the value `password` for the database password and the default
+token provided in the
+[Hello Universe](https://github.com/spectrocloud/hello-universe/tree/main?tab=readme-ov-file#reverse-proxy-with-kubernetes)
+repository for the authentication token.
+
+```hcl {7-8}
+##############################
+# Hello Universe Configuration
+##############################
+app_namespace = "hello-universe" # The namespace in which the application will be deployed.
+app_port = 8080 # The cluster port number on which the service will listen for incoming traffic.
+replicas_number = 1 # The number of pods to be created.
+db_password = "REPLACE ME" # The database password to connect to the API database.
+auth_token = "REPLACE ME" # The auth token for the API connection.
+```
+
+Locate the Azure provider section and change `deploy-azure = false` to `deploy-azure = true`. Additionally, replace all
+occurrences of `REPLACE ME` with their corresponding values, such as those for the `azure-cloud-account-name`,
+`azure-region`, `azure_subscription_id`, and `azure_resource_group` variables. You can also update the values for the
+nodes in the control plane or worker node pools as needed.
+
+```hcl {4,8-11}
+###########################
+# Azure Deployment Settings
+############################
+deploy-azure = false # Set to true to deploy to Azure.
+deploy-azure-kubecost = false # Set to true to deploy to Azure and include Kubecost in your cluster profile.
+azure-use-azs = true # Set to false when you deploy to a region without AZs.
+
+azure-cloud-account-name = "REPLACE ME"
+azure-region = "REPLACE ME"
+azure_subscription_id = "REPLACE ME"
+azure_resource_group = "REPLACE ME"
+
+
+azure_control_plane_nodes = {
+ count = "1"
+ control_plane = true
+ instance_type = "Standard_A8_v2"
+ disk_size_gb = "60"
+ azs = ["1"] # If you want to deploy to multiple AZs, add them here.
+ is_system_node_pool = false
+}
+
+azure_worker_nodes = {
+ count = "1"
+ control_plane = false
+ instance_type = "Standard_A8_v2"
+ disk_size_gb = "60"
+ azs = ["1"] # If you want to deploy to multiple AZs, add them here.
+ is_system_node_pool = false
+}
+```
+
+When you are done making the required changes, save the file.
+
+## Deploy the Cluster
+
+Before starting the cluster provisioning, export your [Palette API key](./setup.md#create-a-palette-api-key) as an
+environment variable. This step allows the Terraform code to authenticate with the Palette API.
+
+```bash
+export SPECTROCLOUD_APIKEY=
+```
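+You can confirm that the variable is set in your current shell session before proceeding.
+
+```shell
+# Prints a confirmation if SPECTROCLOUD_APIKEY is set to a non-empty
+# value, and a warning otherwise.
+if [ -n "$SPECTROCLOUD_APIKEY" ]; then echo "API key is set"; else echo "API key is NOT set"; fi
+```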
+
+Next, issue the following command to initialize Terraform. The `init` command initializes the working directory that
+contains the Terraform files.
+
+```shell
+terraform init
+```
+
+```text hideClipboard
+Terraform has been successfully initialized!
+```
+
+:::warning
+
+Before deploying the resources, ensure that there are no active clusters named `azure-cluster` or cluster profiles named
+`tf-azure-profile` in your Palette project.
+
+:::
+
+Issue the `plan` command to preview the resources that Terraform will create.
+
+```shell
+terraform plan
+```
+
+The output indicates that four new resources will be created: two versions of the Azure cluster profile, the host
+cluster, and an SSH key pair. The host cluster will use version `1.0.0` of the cluster profile.
+
+```text hideClipboard
+Plan: 4 to add, 0 to change, 0 to destroy.
+```
+
+To deploy the resources, use the `apply` command.
+
+```shell
+terraform apply -auto-approve
+```
+
+To check that the cluster profile was created correctly, log in to [Palette](https://console.spectrocloud.com), and
+click **Profiles** from the left **Main Menu**. Locate the cluster profile named `tf-azure-profile`. Click on the
+cluster profile to review its layers and versions.
+
+![A view of the cluster profile](/getting-started/azure/getting-started_deploy-manage-k8s-cluster-tf_profile_review.webp)
+
+You can also check the cluster creation process by selecting **Clusters** from the left **Main Menu**.
+
+![View of the cluster being created](/getting-started/azure/getting-started_deploy-manage-k8s-cluster-tf_create_cluster.webp)
+
+Select your cluster to review its details page, which contains the status, cluster profile, event logs, and more.
+
+The cluster deployment may take 15 to 30 minutes depending on the cloud provider, cluster profile, cluster size, and the
+node pool configurations provided. You can learn more about the deployment progress by reviewing the event log. Click on
+the **Events** tab to check the log.
+
+![View of the cluster event log](/getting-started/azure/getting-started_deploy-manage-k8s-cluster-tf_event_log.webp)
+
+### Verify the Application
+
+In Palette, navigate to the left **Main Menu** and select **Clusters**.
+
+Select your cluster to view its **Overview** tab. When the application is deployed and ready for network traffic,
+indicated in the **Services** field, Palette exposes the service URL. Click on the URL for port **:8080** to access the
+Hello Universe application.
+
+:::warning
+
+It can take up to three minutes for DNS to properly resolve the public load balancer URL. We recommend waiting a few
+moments before clicking on the service URL to prevent the browser from caching an unresolved DNS request.
+
+:::
+
+![Deployed application](/getting-started/azure/getting-started_deploy-manage-k8s-cluster_hello-universe-w-api.webp)
+
+Welcome to Hello Universe, a demo application developed to help you learn more about Palette and its features. Feel free
+to click on the logo to increase the counter and for a fun image change.
+
+## Version Cluster Profiles
+
+Palette supports the creation of multiple cluster profile versions using the same profile name. This provides you with
+better change visibility and control over the layers in your host clusters. Profile versions are commonly used for
+adding or removing layers and pack configuration updates.
+
+The version number of a given profile must be unique and use the semantic versioning format `major.minor.patch`. In this
+tutorial, you used Terraform to deploy two versions of an Azure cluster profile. The snippet below displays a segment of
+the Terraform cluster profile resource version `1.0.0` that was deployed.
+
+```hcl {4,9}
+resource "spectrocloud_cluster_profile" "azure-profile" {
+ count = var.deploy-azure ? 1 : 0
+
+ name = "tf-azure-profile"
+ description = "A basic cluster profile for Azure"
+ tags = concat(var.tags, ["env:azure"])
+ cloud = "azure"
+ type = "cluster"
+ version = "1.0.0"
+```
+
+Open the **terraform.tfvars** file, set the `deploy-azure-kubecost` variable to `true`, and save the file. Once applied,
+the host cluster will use version `1.1.0` of the cluster profile with the Kubecost pack.
+
+The snippet below displays the segment of the Terraform resource that creates the cluster profile version `1.1.0`. Note
+how the name `tf-azure-profile` is the same as in the first cluster profile resource, but the version is different.
+
+```hcl {4,9}
+resource "spectrocloud_cluster_profile" "azure-profile-kubecost" {
+ count = var.deploy-azure ? 1 : 0
+
+ name = "tf-azure-profile"
+ description = "A basic cluster profile for Azure with Kubecost"
+ tags = concat(var.tags, ["env:azure"])
+ cloud = "azure"
+ type = "cluster"
+ version = "1.1.0"
+```
+
+In the terminal window, issue the following command to plan the changes.
+
+```bash
+terraform plan
+```
+
+The output states that one resource will be modified. The deployed cluster will now use version `1.1.0` of the cluster
+profile.
+
+```text hideClipboard
+Plan: 0 to add, 1 to change, 0 to destroy.
+```
+
+Issue the `apply` command to deploy the changes.
+
+```bash
+terraform apply -auto-approve
+```
+
+Palette will now reconcile the current state of your workloads with the desired state specified by the new cluster
+profile version.
+
+To visualize the reconciliation behavior, log in to [Palette](https://console.spectrocloud.com), and click **Clusters**
+from the left **Main Menu**.
+
+Select the cluster named `azure-cluster`. Click on the **Events** tab. Note how a cluster reconciliation action was
+triggered due to cluster profile changes.
+
+![Image that shows the cluster profile reconciliation behavior](/getting-started/azure/getting-started_deploy-manage-k8s-cluster_reconciliation.webp)
+
+Next, click on the **Profile** tab. Observe that the cluster is now using version `1.1.0` of the `tf-azure-profile`
+cluster profile.
+
+![Image that shows the new cluster profile version with Kubecost](/getting-started/azure/getting-started_deploy-manage-k8s-cluster_profile-with-cluster.webp)
+
+Once the changes have been completed, Palette marks the cluster layers with a green status indicator. Click the
+**Overview** tab to verify that the Kubecost pack was successfully deployed.
+
+![Image that shows the cluster with Kubecost](/getting-started/azure/getting-started_deploy-manage-k8s-cluster_profile-with-kubecost.webp)
+
+Next, download the [kubeconfig](../../clusters/cluster-management/kubeconfig.md) file for your cluster from the Palette
+UI. This file enables you and other users to issue `kubectl` commands against the host cluster.
+
+![Image that shows the cluster's kubeconfig file location](/getting-started/azure/getting-started_deploy-manage-k8s-cluster_kubeconfig.webp)
+
+Open a new terminal window and set the environment variable `KUBECONFIG` to point to the kubeconfig file you downloaded.
+
+```bash
+export KUBECONFIG=~/Downloads/admin.azure-cluster.kubeconfig
+```
+
+Forward the Kubecost UI to your local network. The Kubecost dashboard is not exposed externally by default, so the
+command below will allow you to access it locally on port **9090**. If port 9090 is already taken, you can choose a
+different one.
+
+```bash
+kubectl port-forward --namespace kubecost deployment/cost-analyzer-cost-analyzer 9090
+```
+
+Open your browser window and navigate to `http://localhost:9090`. The Kubecost UI provides you with a variety of cost
+information about your cluster. Read more about
+[Navigating the Kubecost UI](https://docs.kubecost.com/using-kubecost/navigating-the-kubecost-ui) to make the most of
+the cost analyzer pack.
+
+![Image that shows the Kubecost UI](/getting-started/azure/getting-started_deploy-manage-k8s-cluster_kubecost.webp)
+
+Once you are done exploring the Kubecost dashboard, stop the `kubectl port-forward` command by closing the terminal
+window it is executing from.
+
+## Roll Back Cluster Profiles
+
+One of the key advantages of using cluster profile versions is that they make it possible to maintain a copy of
+previously known working states. The ability to roll back to a previously working cluster profile in one action shortens
+the time to recovery in the event of an incident.
+
+The process of rolling back to a previous version using Terraform is similar to the process of applying a new version.
+
+Open the **terraform.tfvars** file, set the `deploy-azure-kubecost` variable to `false`, and save the file. Once
+applied, this action will make the active cluster use version `1.0.0` of the cluster profile again.
+
+In the terminal window, issue the following command to plan the changes.
+
+```bash
+terraform plan
+```
+
+The output states that the deployed cluster will now use version `1.0.0` of the cluster profile.
+
+```text hideClipboard
+Plan: 0 to add, 1 to change, 0 to destroy.
+```
+
+Issue the `apply` command to deploy the changes.
+
+```bash
+terraform apply -auto-approve
+```
+
+Palette now makes the changes required for the cluster to return to the state specified in version `1.0.0` of your
+cluster profile. Once the changes are complete, Palette marks your layers with the green status indicator.
+
+![Image that shows the cluster using version 1.0.0 of the cluster profile](/getting-started/azure/getting-started_deploy-manage-k8s-cluster_profile-without-kubecost.webp)
+
+## Cleanup
+
+Use the following steps to clean up the resources you created for the tutorial. The `destroy` command removes all the
+resources you created through Terraform.
+
+```shell
+terraform destroy --auto-approve
+```
+
+A successful execution of `terraform destroy` will output the following.
+
+```text hideClipboard
+Destroy complete! Resources: 4 destroyed.
+```
+
+:::info
+
+If a cluster remains in the delete phase for over 15 minutes, it becomes eligible for force delete. To trigger a force
+delete action, navigate to the cluster’s details page and click on **Settings**. Click on **Force Delete Cluster** to
+delete the cluster. Palette automatically removes clusters stuck in the cluster deletion phase for over 24 hours.
+
+:::
+
+If you are using the tutorial container, type `exit` in your terminal session and press the **Enter** key. Next, issue
+the following command to stop and remove the container.
+
+
+
+
+
+```shell
+docker stop tutorialContainer && \
+docker rmi --force ghcr.io/spectrocloud/tutorials:1.1.7
+```
+
+
+
+
+
+```shell
+podman stop tutorialContainer && \
+podman rmi --force ghcr.io/spectrocloud/tutorials:1.1.7
+```
+
+
+
+
+
+## Wrap-Up
+
+In this tutorial, you learned how to create different versions of a cluster profile using Terraform. You deployed a host
+Azure cluster and then updated it to use a different version of a cluster profile. Finally, you learned how to perform
+cluster profile rollbacks.
+
+We encourage you to check out the
+[Spectro Cloud Terraform](https://registry.terraform.io/providers/spectrocloud/spectrocloud/latest) provider page to
+learn more about the Palette resources you can deploy using Terraform.
diff --git a/docs/docs-content/getting-started/azure/setup.md b/docs/docs-content/getting-started/azure/setup.md
index 17c9b8bf9d..22cca48e83 100644
--- a/docs/docs-content/getting-started/azure/setup.md
+++ b/docs/docs-content/getting-started/azure/setup.md
@@ -13,16 +13,11 @@ order to authenticate Palette and allow it to deploy host clusters.
## Prerequisites
-The prerequisite steps to getting started with Palette on AWS are as follows.
-
-- Sign up to [Palette](https://www.spectrocloud.com/get-started).
-
- - Your Palette account role must have the `clusterProfile.create` permission to create a cluster profile. Refer to the
- [Roles and Permissions](../../user-management/palette-rbac/project-scope-roles-permissions.md#cluster-profile-admin)
- documentation for more information.
+- A Palette account with [tenant admin](../../tenant-settings/tenant-settings.md) access.
- Sign up to a public cloud account from
- [Azure](https://learn.microsoft.com/en-us/training/modules/create-an-azure-account).
+ [Azure](https://learn.microsoft.com/en-us/training/modules/create-an-azure-account). The Azure cloud account must have
+ the [required permissions](../../clusters/public-cloud/azure/required-permissions.md).
- Access to a terminal window.
@@ -38,8 +33,18 @@ Palette needs access to your Azure cloud account in order to create and manage A
### Create and Upload an SSH Key
+Follow the steps below to create an SSH key using the terminal and upload it to Palette. This step is not required for
+the [Cluster Management with Terraform](./deploy-manage-k8s-cluster-tf.md) tutorial.
+
+### Create a Palette API Key
+
+Follow the steps below to create a Palette API key. This is required for the
+[Cluster Management with Terraform](./deploy-manage-k8s-cluster-tf.md) tutorial.
+
+
+
## Validate
You can verify your account is added.
diff --git a/docs/docs-content/getting-started/azure/update-k8s-cluster.md b/docs/docs-content/getting-started/azure/update-k8s-cluster.md
index ccf3fd5b10..f35cd02690 100644
--- a/docs/docs-content/getting-started/azure/update-k8s-cluster.md
+++ b/docs/docs-content/getting-started/azure/update-k8s-cluster.md
@@ -286,5 +286,5 @@ Cluster profiles provide consistency during the cluster creation process, as wel
They can be versioned to keep a record of previously working cluster states, giving you visibility when updating or
rolling back workloads across your environments.
-We recommend that you continue to the [Deploy a Cluster with Terraform](./deploy-k8s-cluster-tf.md) page to learn about
-how you can use Palette with Terraform.
+We recommend that you continue to the [Cluster Management with Terraform](./deploy-manage-k8s-cluster-tf.md) page to
+learn about how you can use Palette with Terraform.
diff --git a/docs/docs-content/getting-started/gcp/deploy-k8s-cluster-tf.md b/docs/docs-content/getting-started/gcp/deploy-k8s-cluster-tf.md
deleted file mode 100644
index 91cb37031d..0000000000
--- a/docs/docs-content/getting-started/gcp/deploy-k8s-cluster-tf.md
+++ /dev/null
@@ -1,555 +0,0 @@
----
-sidebar_label: "Deploy a Cluster with Terraform"
-title: "Deploy a Cluster with Terraform"
-description: "Learn to deploy a Palette host cluster with Terraform."
-icon: ""
-hide_table_of_contents: false
-sidebar_position: 50
-tags: ["getting-started", "gcp"]
----
-
-The [Spectro Cloud Terraform](https://registry.terraform.io/providers/spectrocloud/spectrocloud/latest/docs) provider
-enables you to create and manage Palette resources in a codified manner by leveraging Infrastructure as Code (IaC). Some
-notable reasons why you would want to utilize IaC are:
-
-- The ability to automate infrastructure.
-
-- Improved collaboration in making infrastructure changes.
-
-- Self-documentation of infrastructure through code.
-
-- Allows tracking all infrastructure in a single source of truth.
-
-If want to become more familiar with Terraform, we recommend you check out the
-[Terraform](https://developer.hashicorp.com/terraform/intro) learning resources from HashiCorp.
-
-This tutorial will teach you how to deploy a host cluster with Terraform using Google Cloud Platform (GCP). You will
-learn about _Cluster Mode_ and _Cluster Profiles_ and how these components enable you to deploy customized applications
-to Kubernetes with minimal effort using the
-[Spectro Cloud Terraform](https://registry.terraform.io/providers/spectrocloud/spectrocloud/latest/docs) provider.
-
-## Prerequisites
-
-To complete this tutorial, you will need the following items
-
-- Basic knowledge of containers.
-- [Docker Desktop](https://www.docker.com/products/docker-desktop/), [Podman](https://podman.io/docs/installation) or
- another container management tool.
-
-- Follow the steps described in the [Set up Palette with GCP](./setup.md) guide to authenticate Palette for use with
- your GCP cloud account.
-
-## Set Up Local Environment
-
-You can clone the tutorials repository locally or follow along by downloading a Docker image that contains the tutorial
-code and all dependencies.
-
-
-
-:::warning
-
-If you choose to clone the repository instead of using the tutorial container make sure you have Terraform v1.4.0 or
-greater installed.
-
-:::
-
-
-
-
-
-
-
-Ensure Docker Desktop on your local machine is available. Use the following command and ensure you receive an output
-displaying the version number.
-
-```bash
-docker version
-```
-
-Download the tutorial image to your local machine.
-
-```bash
-docker pull ghcr.io/spectrocloud/tutorials:1.1.3
-```
-
-Next, start the container, and open a bash session into it.
-
-```shell
-docker run --name tutorialContainer --interactive --tty ghcr.io/spectrocloud/tutorials:1.1.3 bash
-```
-
-Navigate to the tutorial code.
-
-```shell
-cd /terraform/iaas-cluster-deployment-tf
-```
-
-
-
-
-
-If you are not running a Linux operating system, create and start the Podman Machine in your local environment.
-Otherwise, skip this step.
-
-```bash
-podman machine init
-podman machine start
-```
-
-Use the following command and ensure you receive an output displaying the installation information.
-
-```bash
-podman info
-```
-
-Download the tutorial image to your local machine.
-
-```bash
-podman pull ghcr.io/spectrocloud/tutorials:1.1.3
-```
-
-Next, start the container, and open a bash session into it.
-
-```shell
-podman run --name tutorialContainer --interactive --tty ghcr.io/spectrocloud/tutorials:1.1.3 bash
-```
-
-Navigate to the tutorial code.
-
-```shell
-cd /terraform/iaas-cluster-deployment-tf
-```
-
-
-
-
-
-Open a terminal window and download the tutorial code from GitHub.
-
-```shell
-git@github.com:spectrocloud/tutorials.git
-```
-
-Change the directory to the tutorial folder.
-
-```shell
-cd tutorials/
-```
-
-Check out the following git tag.
-
-```shell
-git checkout v1.1.3
-```
-
-Change the directory to the tutorial code.
-
-```shell
-cd terraform/iaas-cluster-deployment-tf/
-```
-
-
-
-
-
-## Create an API Key
-
-Before you can get started with the Terraform code, you need a Spectro Cloud API key.
-
-To create an API key, log in to [Palette](https://console.spectrocloud.com) and click on the user **User Menu** and
-select **My API Keys**.
-
-![Image that points to the user drop-down Menu and points to the API key link](/tutorials/deploy-clusters/clusters_public-cloud_deploy-k8s-cluster_create_api_key.webp)
-
-Next, click on **Add New API Key**. Fill out the required input field, **API Key Name**, and the **Expiration Date**.
-Click on **Confirm** to create the API key. Copy the key value to your clipboard, as you will use it shortly.
-
-
-
-In your terminal session, issue the following command to export the API key as an environment variable.
-
-
-
-```shell
-export SPECTROCLOUD_APIKEY=YourAPIKeyHere
-```
-
-The [Spectro Cloud Terraform](https://registry.terraform.io/providers/spectrocloud/spectrocloud/latest/docs) provider
-requires credentials to interact with the Palette API. The Spectro Cloud Terraform provider will use the environment
-variable to authenticate with the Spectro Cloud API endpoint.
-
-## Resources Review
-
-To help you get started with Terraform, the tutorial code is structured to support deploying a cluster to either Azure,
-GCP, or AWS. Before you deploy a host cluster to your target provider, take a few moments to review the following files
-in the folder structure.
-
-- **provider.tf** - This file contains the Terraform providers that are used to support the deployment of the cluster.
-
-- **inputs.tf** - This file contains all the Terraform variables for the deployment logic.
-
-- **data.tf** - This file contains all the query resources that perform read actions.
-
-- **cluster_profiles.tf** - This file contains the cluster profile definitions for each cloud provider.
-
-- **clusters.tf** - This file has all the required cluster configurations to deploy a host cluster to one of the cloud
- providers.
-
-- **terraform.tfvars** - Use this file to customize the deployment and target a specific cloud provider. This is the
- primary file you will modify.
-
-- **outputs.tf** - This file contains content that will be output in the terminal session upon a successful Terraform
- `apply` action.
-
-The following section allows you to review the core Terraform resources more closely.
-
-#### Provider
-
-The **provider.tf** file contains the Terraform providers and their respective versions. The tutorial uses two
-providers - the Spectro Cloud Terraform provider and the TLS Terraform provider. Note how the project name is specified
-in the `provider "spectrocloud" {}` block. You can change the target project by changing the value specified in the
-`project_name` parameter.
-
-```hcl
-terraform {
- required_providers {
- spectrocloud = {
- version = ">= 0.13.1"
- source = "spectrocloud/spectrocloud"
- }
- tls = {
- source = "hashicorp/tls"
- version = "4.0.4"
- }
- }
-}
-
-provider "spectrocloud" {
- project_name = "Default"
-}
-```
-
-The next file you should become familiar with is the **cluster-profiles.tf** file.
-
-The Spectro Cloud Terraform provider has several resources available for use. When creating a cluster profile, use
-`spectrocloud_cluster_profile`. This resource can be used to customize all layers of a cluster profile. You can specify
-all the different packs and versions to use and add a manifest or Helm chart.
-
-In the **cluster-profiles.tf** file, the cluster profile resource is declared three times. Each instance of the resource
-is for a specific cloud provider. Using the AWS cluster profile as an example, note how the **cluster-profiles.tf** file
-uses `pack {}` blocks to specify each layer of the profile. The order in which you arrange contents of the `pack {}`
-blocks plays an important role, as each layer maps to the core infrastructure in a cluster profile.
-
-The first listed `pack {}` block must be the OS, followed by Kubernetes, the container network interface, and the
-container storage interface. The first `pack {}` block in the list equates to the bottom layer of the cluster profile.
-Ensure you define the bottom layer of the cluster profile - the OS layer - first in the list of `pack {}` blocks.
-
-```hcl
-resource "spectrocloud_cluster_profile" "gcp-profile" {
- count = var.deploy-gcp ? 1 : 0
-
- name = "tf-gcp-profile"
- description = "A basic cluster profile for GCP"
- tags = concat(var.tags, ["env:azure"])
- cloud = "gcp"
- type = "cluster"
-
- pack {
- name = data.spectrocloud_pack.gcp_ubuntu.name
- tag = data.spectrocloud_pack.gcp_ubuntu.version
- uid = data.spectrocloud_pack.gcp_ubuntu.id
- values = data.spectrocloud_pack.gcp_ubuntu.values
- }
-
- pack {
- name = data.spectrocloud_pack.gcp_k8s.name
- tag = data.spectrocloud_pack.gcp_k8s.version
- uid = data.spectrocloud_pack.gcp_k8s.id
- values = data.spectrocloud_pack.gcp_k8s.values
- }
-
- pack {
- name = data.spectrocloud_pack.gcp_cni.name
- tag = data.spectrocloud_pack.gcp_cni.version
- uid = data.spectrocloud_pack.gcp_cni.id
- values = data.spectrocloud_pack.gcp_cni.values
- }
-
- pack {
- name = data.spectrocloud_pack.gcp_csi.name
- tag = data.spectrocloud_pack.gcp_csi.version
- uid = data.spectrocloud_pack.gcp_csi.id
- values = data.spectrocloud_pack.gcp_csi.values
- }
-
- pack {
- name = "hello-universe"
- type = "manifest"
- tag = "1.0.0"
- values = ""
- manifest {
- name = "hello-universe"
- content = file("manifests/hello-universe.yaml")
- }
- }
-}
-```
-
-The last `pack {}` block contains a manifest file with all the Kubernetes configurations for the
-[Hello Universe](https://github.com/spectrocloud/hello-universe) application. Including the application in the profile
-ensures the application is installed during cluster deployment. If you wonder what all the data resources are for, head
-to the next section to review them.
-
-You may have noticed that each `pack {}` block contains references to a data resource.
-
-```hcl
- pack {
- name = data.spectrocloud_pack.gcp_csi.name
- tag = data.spectrocloud_pack.gcp_csi.version
- uid = data.spectrocloud_pack.gcp_csi.id
- values = data.spectrocloud_pack.gcp_csi.values
- }
-```
-
-[Data resources](https://developer.hashicorp.com/terraform/language/data-sources) are used to perform read actions in
-Terraform. The Spectro Cloud Terraform provider exposes several data resources to help you make your Terraform code more
-dynamic. The data resource used in the cluster profile is `spectrocloud_pack`. This resource enables you to query
-Palette for information about a specific pack. You can get information about the pack using the data resource such as
-unique ID, registry ID, available versions, and the pack's YAML values.
-
-Below is the data resource used to query Palette for information about the Kubernetes pack for version `1.27.5`.
-
-```hcl
-data "spectrocloud_pack" "gcp_k8s" {
- name = "kubernetes"
- version = "1.27.5"
- registry_uid = data.spectrocloud_registry.public_registry.id
-}
-```
-
-Using the data resource, you avoid manually typing in the parameter values required by the cluster profile's `pack {}`
-block.
-
-The **clusters.tf** file contains the definitions for deploying a host cluster to one of the cloud providers. To create
-a host cluster, you must use a cluster resource for the cloud provider you are targeting.
-
-In this tutorial, the following Terraform cluster resources are used.
-
-| Terraform Resource | Platform |
-| ------------------------------------------------------------------------------------------------------------------------------------- | -------- |
-| [`spectrocloud_cluster_aws`](https://registry.terraform.io/providers/spectrocloud/spectrocloud/latest/docs/resources/cluster_aws) | AWS |
-| [`spectrocloud_cluster_azure`](https://registry.terraform.io/providers/spectrocloud/spectrocloud/latest/docs/resources/cluster_azure) | Azure |
-| [`spectrocloud_cluster_gcp`](https://registry.terraform.io/providers/spectrocloud/spectrocloud/latest/docs/resources/cluster_gcp) | GCP |
-
-Using the `spectrocloud_cluster_gcp` resource in this tutorial as an example, note how the resource accepts a set of
-parameters. When deploying a cluster, you can change the same parameters in the Palette user interface (UI). You can
-learn more about each parameter by reviewing the resource documentation page hosted in the Terraform registry.
-
-```hcl
-resource "spectrocloud_cluster_gcp" "gcp-cluster" {
- count = var.deploy-gcp ? 1 : 0
-
- name = "gcp-cluster"
- tags = concat(var.tags, ["env:gcp"])
- cloud_account_id = data.spectrocloud_cloudaccount_gcp.account[0].id
-
- cloud_config {
- project = var.gcp_project_name
- region = var.gcp-region
- }
-
- cluster_profile {
- id = spectrocloud_cluster_profile.gcp-profile[0].id
- }
-
- machine_pool {
- control_plane = true
- control_plane_as_worker = true
- name = "master-pool"
- count = var.gcp_master_nodes.count
- instance_type = var.gcp_master_nodes.instance_type
- disk_size_gb = var.gcp_master_nodes.disk_size_gb
- azs = var.gcp_master_nodes.availability_zones
- }
-
- machine_pool {
- name = "worker-pool"
- count = var.gcp_worker_nodes.count
- instance_type = var.gcp_worker_nodes.instance_type
- disk_size_gb = var.gcp_worker_nodes.disk_size_gb
- azs = var.gcp_worker_nodes.availability_zones
- }
-
- timeouts {
- create = "30m"
- delete = "15m"
- }
-}
-```
-
-To deploy a cluster using Terraform, you must first modify the **terraform.tfvars** file. Open the **terraform.tfvars**
-file in the editor of your choice, and locate the cloud provider you will use to deploy a host cluster.
-
-To simplify the process, we added a toggle variable in the Terraform template, that you can use to select the deployment
-environment. Each cloud provider has a section in the template that contains all the variables you must populate.
-Variables to populate are identified with `REPLACE_ME`.
-
-In the example GCP section below, you would change `deploy-gcp = false` to `deploy-gcp = true` to deploy to GCP.
-Additionally, you would replace all the variables with a value `REPLACE_ME`. You can also update the values for nodes in
-the control plane pool or worker pool.
-
-```hcl
-###########################
-# GCP Deployment Settings
-############################
-deploy-gcp = false # Set to true to deploy to GCP
-
-gcp-cloud-account-name = "REPLACE_ME"
-gcp-region = "REPLACE_ME"
-gcp_project_name = "REPLACE_ME"
-gcp_master_nodes = {
- count = "1"
- control_plane = true
- instance_type = "n1-standard-4"
- disk_size_gb = "60"
- availability_zones = ["REPLACE_ME"] # If you want to deploy to multiple AZs, add them here. Example: ["us-central1-a", "us-central1-b"]
-}
-
-gcp_worker_nodes = {
- count = "1"
- control_plane = false
- instance_type = "n1-standard-4"
- disk_size_gb = "60"
- availability_zones = ["REPLACE_ME"] # If you want to deploy to multiple AZs, add them here. Example: ["us-central1-a", "us-central1-b"]
-}
-```
-
-When you are done making the required changes, issue the following command to initialize Terraform.
-
-```shell
-terraform init
-```
-
-Next, issue the `plan` command to preview the changes.
-
-```shell
-terraform plan
-```
-
-Output:
-
-```shell
-Plan: 2 to add, 0 to change, 0 to destroy.
-```
-
-If you change the desired cloud provider's toggle variable to `true,` you will receive an output message that two new
-resources will be created. The two resources are your cluster profile and the host cluster.
-
-To deploy all the resources, use the `apply` command.
-
-```shell
-terraform apply -auto-approve
-```
-
-To check out the cluster profile creation in Palette, log in to [Palette](https://console.spectrocloud.com), and from
-the left **Main Menu** click on **Profiles**. Locate the cluster profile with the name `tf-gcp-profile`. Click on the
-cluster profile to review its details, such as layers, packs, and versions.
-
-![A view of the cluster profile](/getting-started/gcp/getting-started_deploy-k8s-cluster-tf_profile_review.webp)
-
-You can also check the cluster creation process by navigating to the left **Main Menu** and selecting **Clusters**.
-
-![Update the cluster](/getting-started/gcp/getting-started_deploy-k8s-cluster_new_cluster.webp)
-
-Select your cluster to review its details page, which contains the status, cluster profile, event logs, and more.
-
-The cluster deployment may take several minutes depending on the cloud provider, node count, node sizes used, and the
-cluster profile. You can learn more about the deployment progress by reviewing the event log. Click on the **Events**
-tab to check the event log.
-
-![Update the cluster](/getting-started/gcp/getting-started_deploy-k8s-cluster_event_log.webp)
-
-## Verify the Application
-
-When the cluster deploys, you can access the Hello Universe application. From the cluster's **Overview** page, click on
-the URL for port **:8080** next to the **hello-universe-service** in the **Services** row. This URL will take you to the
-application landing page.
-
-:::warning
-
-It can take up to three minutes for DNS to properly resolve the public load balancer URL. We recommend waiting a few
-moments before clicking on the service URL to prevent the browser from caching an unresolved DNS request.
-
-:::
-
-![Deployed application](/getting-started/getting-started_deploy-k8s-cluster_hello-universe-without-api.webp)
-
-Welcome to Hello Universe, a demo application to help you learn more about Palette and its features. Feel free to click
-on the logo to increase the counter and for a fun image change.
-
-You have deployed your first application to a cluster managed by Palette through Terraform. Your first application is a
-single container application with no upstream dependencies.
-
-## Cleanup
-
-Use the following steps to clean up the resources you created for the tutorial. Use the `destroy` command to remove all
-the resources you created through Terraform.
-
-```shell
-terraform destroy --auto-approve
-```
-
-Output:
-
-```shell
-Destroy complete! Resources: 2 destroyed.
-```
-
-:::info
-
-If a cluster remains in the delete phase for over 15 minutes, it becomes eligible for force delete. To trigger a force
-delete, navigate to the cluster’s details page and click on **Settings**. Click on **Force Delete Cluster** to delete
-the cluster. Palette automatically removes clusters stuck in the cluster deletion phase for over 24 hours.
-
-:::
-
-If you are using the tutorial container and want to exit the container, type `exit` in your terminal session and press
-the **Enter** key. Next, issue the following command to stop the container.
-
-
-
-
-
-```shell
-docker stop tutorialContainer && \
-docker rmi --force ghcr.io/spectrocloud/tutorials:1.1.3
-```
-
-
-
-
-
-```shell
-podman stop tutorialContainer && \
-podman rmi --force ghcr.io/spectrocloud/tutorials:1.1.3
-```
-
-
-
-
-
-## Wrap-Up
-
-In this tutorial, you created a cluster profile, which is a template that contains the core layers required to deploy a
-host cluster. You then deployed a host cluster to GCP using Terraform.
-
-We encourage you to check out the [Deploy an Application using Palette Dev Engine](../../devx/apps/deploy-app.md)
-tutorial to learn more about Palette. Palette Dev Engine can help you deploy applications more quickly through the usage
-of [virtual clusters](../../glossary-all.md#palette-virtual-cluster). Feel free to check out the reference links below
-to learn more about Palette.
-
-- [Palette Modes](../../introduction/palette-modes.md)
-
-- [Palette Clusters](../../clusters/clusters.md)
-
-- [Hello Universe GitHub repository](https://github.com/spectrocloud/hello-universe)
diff --git a/docs/docs-content/getting-started/gcp/deploy-k8s-cluster.md b/docs/docs-content/getting-started/gcp/deploy-k8s-cluster.md
index cb5930c49c..0f15e87ea9 100644
--- a/docs/docs-content/getting-started/gcp/deploy-k8s-cluster.md
+++ b/docs/docs-content/getting-started/gcp/deploy-k8s-cluster.md
@@ -55,11 +55,9 @@ The **Cluster Profile** section displays all the layers in the cluster profile.
Each layer has a pack manifest file with the deploy configurations. The pack manifest file is in a YAML format. Each
pack contains a set of default values. You can change the manifest values if needed. Click on **Next** to proceed.
-The **Cluster Config** section allows you to select the **Project**, **Region**, and **SSH Key** to apply to the host
-cluster. All clusters require you to assign an SSH key. Refer to the [SSH Keys](/clusters/cluster-management/ssh-keys)
-guide for information about uploading an SSH key.
+The **Cluster Config** section allows you to select the **Project** and **Region** to apply to the host cluster.
-After selecting a **Project**, **Region**, and **SSH Key**, click on **Next**.
+After selecting a **Project** and a **Region**, click on **Next**.
The **Nodes Config** section allows you to configure the nodes that make up the control plane and worker nodes of the
host cluster.
diff --git a/docs/docs-content/getting-started/gcp/deploy-manage-k8s-cluster-tf.md b/docs/docs-content/getting-started/gcp/deploy-manage-k8s-cluster-tf.md
new file mode 100644
index 0000000000..324001e5e8
--- /dev/null
+++ b/docs/docs-content/getting-started/gcp/deploy-manage-k8s-cluster-tf.md
@@ -0,0 +1,737 @@
+---
+sidebar_label: "Cluster Management with Terraform"
+title: "Cluster Management with Terraform"
+description: "Learn how to deploy and update a Palette host cluster to GCP with Terraform."
+icon: ""
+hide_table_of_contents: false
+sidebar_position: 50
+toc_max_heading_level: 2
+tags: ["getting-started", "gcp", "terraform"]
+---
+
+The [Spectro Cloud Terraform](https://registry.terraform.io/providers/spectrocloud/spectrocloud/latest/docs) provider
+allows you to create and manage Palette resources using Infrastructure as Code (IaC). With IaC, you can automate the
+provisioning of resources, collaborate on changes, and maintain a single source of truth for your infrastructure.
+
+This tutorial will teach you how to use Terraform to deploy and update a Google Cloud Platform (GCP) host cluster. You
+will learn how to create two versions of a cluster profile with different demo applications, update the deployed cluster
+with the new cluster profile version, and then perform a rollback.
+
+## Prerequisites
+
+To complete this tutorial, you will need the following items in place:
+
+- Follow the steps described in the [Set up Palette with GCP](./setup.md) guide to authenticate Palette for use with
+ your GCP cloud account and create a Palette API key.
+- [Docker Desktop](https://www.docker.com/products/docker-desktop/) or [Podman](https://podman.io/docs/installation)
+ installed if you choose to follow along using the tutorial container.
+- If you choose to clone the repository instead of using the tutorial container, make sure you have the following
+ software installed:
+ - [Terraform](https://developer.hashicorp.com/terraform/tutorials/aws-get-started/install-cli) v1.9.0 or greater
+ - [Git](https://git-scm.com/downloads)
+ - [Kubectl](https://kubernetes.io/docs/tasks/tools/)
+
+## Set Up Local Environment
+
+You can clone the [Tutorials](https://github.com/spectrocloud/tutorials) repository locally or follow along by
+downloading a container image that includes the tutorial code and all dependencies.
+
+
+
+
+
+Start Docker Desktop and ensure that the Docker daemon is available by issuing the following command.
+
+```bash
+docker ps
+```
+
+Next, download the tutorial image, start the container, and open a bash session into it.
+
+```shell
+docker run --name tutorialContainer --interactive --tty ghcr.io/spectrocloud/tutorials:1.1.7 bash
+```
+
+Navigate to the folder that contains the tutorial code.
+
+```shell
+cd /terraform/getting-started-deployment-tf
+```
+
+:::warning
+
+Do not exit the container until the tutorial is complete. Otherwise, you may lose your progress.
+
+:::
+
+
+
+
+
+If you are not using a Linux operating system, create and start the Podman Machine in your local environment. Otherwise,
+skip this step.
+
+```bash
+podman machine init
+podman machine start
+```
+
+Use the following command and ensure you receive an output displaying the installation information.
+
+```bash
+podman info
+```
+
+Next, download the tutorial image, start the container, and open a bash session into it.
+
+```shell
+podman run --name tutorialContainer --interactive --tty ghcr.io/spectrocloud/tutorials:1.1.7 bash
+```
+
+Navigate to the folder that contains the tutorial code.
+
+```shell
+cd /terraform/getting-started-deployment-tf
+```
+
+:::warning
+
+Do not exit the container until the tutorial is complete. Otherwise, you may lose your progress.
+
+:::
+
+
+
+
+
+Open a terminal window and download the tutorial code from GitHub.
+
+```shell
+git clone https://github.com/spectrocloud/tutorials.git
+```
+
+Change the directory to the tutorial folder.
+
+```shell
+cd tutorials/
+```
+
+Check out the following git tag.
+
+```shell
+git checkout v1.1.7
+```
+
+Navigate to the folder that contains the tutorial code.
+
+```shell
+cd terraform/getting-started-deployment-tf
+```
+
+
+
+
+
+## Resources Review
+
+To help you get started with Terraform, the tutorial code is structured to support deploying a cluster to AWS, Azure,
+GCP, or VMware vSphere. Before you deploy a host cluster to GCP, review the following files in the folder structure.
+
+| **File** | **Description** |
+| ----------------------- | ---------------------------------------------------------------------------------------------------------------------- |
+| **provider.tf** | This file contains the Terraform providers that are used to support the deployment of the cluster. |
+| **inputs.tf** | This file contains all the Terraform variables required for the deployment logic. |
+| **data.tf** | This file contains all the query resources that perform read actions. |
+| **cluster_profiles.tf** | This file contains the cluster profile definitions for each cloud provider. |
+| **clusters.tf** | This file has the cluster configurations required to deploy a host cluster to one of the cloud providers. |
+| **terraform.tfvars** | Use this file to target a specific cloud provider and customize the deployment. This is the only file you must modify. |
+| **ippool.tf** | This file contains the configuration required for VMware deployments that use static IP placement. |
+| **ssh-key.tf** | This file has the SSH key resource definition required for Azure and VMware deployments. |
+| **outputs.tf** | This file contains the content that will be displayed in the terminal after a successful Terraform `apply` action. |
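+
+As an illustration of the role of **outputs.tf**, a minimal sketch of an output that exposes the kubeconfig of the
+deployed GCP cluster is shown below. The output name and referenced attribute are assumptions for illustration; check
+the **outputs.tf** file in the repository for the actual outputs it defines.
+
+```hcl
+# Hypothetical output; the tutorial repository may expose different values.
+output "gcp_cluster_kubeconfig" {
+  description = "Kubeconfig for the deployed GCP cluster."
+  value       = var.deploy-gcp ? spectrocloud_cluster_gcp.gcp-cluster[0].kubeconfig : null
+  sensitive   = true
+}
+```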
+
+The following section reviews the core Terraform resources more closely.
+
+#### Provider
+
+The **provider.tf** file contains the Terraform providers used in the tutorial and their respective versions. This
+tutorial uses four providers:
+
+- [Spectro Cloud](https://registry.terraform.io/providers/spectrocloud/spectrocloud/latest/docs)
+- [TLS](https://registry.terraform.io/providers/hashicorp/tls/latest)
+- [vSphere](https://registry.terraform.io/providers/hashicorp/vsphere/latest)
+- [Local](https://registry.terraform.io/providers/hashicorp/local/latest)
+
+Note how the project name is specified in the `provider "spectrocloud" {}` block. You can change the target project by
+modifying the value of the `palette-project` variable in the **terraform.tfvars** file.
+
+```hcl
+terraform {
+ required_providers {
+ spectrocloud = {
+ version = ">= 0.20.6"
+ source = "spectrocloud/spectrocloud"
+ }
+
+ tls = {
+ source = "hashicorp/tls"
+ version = "4.0.4"
+ }
+
+ vsphere = {
+ source = "hashicorp/vsphere"
+ version = ">= 2.6.1"
+ }
+
+ local = {
+ source = "hashicorp/local"
+ version = "2.4.1"
+ }
+ }
+
+ required_version = ">= 1.9"
+}
+
+provider "spectrocloud" {
+ project_name = var.palette-project
+}
+```
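+
+The `palette-project` variable itself is declared in **inputs.tf**. A minimal sketch of such a declaration is shown
+below; the exact description and default value in the repository may differ.
+
+```hcl
+# Hypothetical declaration; refer to inputs.tf for the actual definition.
+variable "palette-project" {
+  type        = string
+  description = "The name of the target Palette project."
+  default     = "Default"
+}
+```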
+
+#### Cluster Profile
+
+The next file you should become familiar with is the **cluster_profiles.tf** file. The `spectrocloud_cluster_profile`
+resource allows you to create a cluster profile and customize its layers. You can specify the packs and versions to use
+or add a manifest or Helm chart.
+
+The cluster profile resource is declared eight times in the **cluster_profiles.tf** file, with each pair of resources
+being designated for a specific provider. In this tutorial, two versions of the GCP cluster profile are deployed:
+version `1.0.0` deploys the [Hello Universe](https://github.com/spectrocloud/hello-universe) pack, while version `1.1.0`
+deploys the [Kubecost](https://www.kubecost.com/) pack along with the
+[Hello Universe](https://github.com/spectrocloud/hello-universe) application.
+
+The cluster profiles include layers for the Operating System (OS), Kubernetes, the container network interface (CNI),
+and the container storage interface (CSI). The order of the `pack {}` blocks matters: the first block in the list
+equates to the bottom layer of the cluster profile, so ensure you define the OS layer first. The table below displays
+the packs deployed in each version of the cluster profile.
+
+| **Pack Type** | **Pack Name** | **Version** | **Cluster Profile v1.0.0** | **Cluster Profile v1.1.0** |
+| ------------- | ---------------- | ----------- | -------------------------- | -------------------------- |
+| OS | `ubuntu-gcp` | `22.04` | :white_check_mark: | :white_check_mark: |
+| Kubernetes | `kubernetes` | `1.28.3` | :white_check_mark: | :white_check_mark: |
+| Network | `cni-calico` | `3.27.0` | :white_check_mark: | :white_check_mark: |
+| Storage | `csi-gcp-driver` | `1.12.4` | :white_check_mark: | :white_check_mark: |
+| App Services | `hellouniverse` | `1.1.2` | :white_check_mark: | :white_check_mark: |
+| App Services | `cost-analyzer` | `1.103.3` | :x: | :white_check_mark: |
+
+The Hello Universe pack has two configured [presets](../../glossary-all.md#presets). The first preset deploys a
+standalone frontend application, while the second one deploys a three-tier application with a frontend, API server, and
+Postgres database. This tutorial deploys the three-tier version of the
+[Hello Universe](https://github.com/spectrocloud/hello-universe) pack. The preset selection in the Terraform code is
+specified within the Hello Universe pack block with the `values` field and by using the **values-3tier.yaml** file.
+Below is an example of version `1.0.0` of the GCP cluster profile Terraform resource.
+
+```hcl
+resource "spectrocloud_cluster_profile" "gcp-profile" {
+ count = var.deploy-gcp ? 1 : 0
+
+ name = "tf-gcp-profile"
+ description = "A basic cluster profile for GCP"
+ tags = concat(var.tags, ["env:GCP"])
+ cloud = "gcp"
+ type = "cluster"
+ version = "1.0.0"
+
+ pack {
+ name = data.spectrocloud_pack.gcp_ubuntu.name
+ tag = data.spectrocloud_pack.gcp_ubuntu.version
+ uid = data.spectrocloud_pack.gcp_ubuntu.id
+ values = data.spectrocloud_pack.gcp_ubuntu.values
+ type = "spectro"
+ }
+
+ pack {
+ name = data.spectrocloud_pack.gcp_k8s.name
+ tag = data.spectrocloud_pack.gcp_k8s.version
+ uid = data.spectrocloud_pack.gcp_k8s.id
+ values = data.spectrocloud_pack.gcp_k8s.values
+ type = "spectro"
+ }
+
+ pack {
+ name = data.spectrocloud_pack.gcp_cni.name
+ tag = data.spectrocloud_pack.gcp_cni.version
+ uid = data.spectrocloud_pack.gcp_cni.id
+ values = data.spectrocloud_pack.gcp_cni.values
+ type = "spectro"
+ }
+
+ pack {
+ name = data.spectrocloud_pack.gcp_csi.name
+ tag = data.spectrocloud_pack.gcp_csi.version
+ uid = data.spectrocloud_pack.gcp_csi.id
+ values = data.spectrocloud_pack.gcp_csi.values
+ type = "spectro"
+ }
+
+ pack {
+ name = data.spectrocloud_pack.hellouniverse.name
+ tag = data.spectrocloud_pack.hellouniverse.version
+ uid = data.spectrocloud_pack.hellouniverse.id
+ values = templatefile("manifests/values-3tier.yaml", {
+ namespace = var.app_namespace,
+ port = var.app_port,
+ replicas = var.replicas_number
+ db_password = base64encode(var.db_password),
+ auth_token = base64encode(var.auth_token)
+ })
+ type = "oci"
+ }
+}
+```
+
+#### Data Resources
+
+Each `pack {}` block contains references to a data resource.
+[Data resources](https://developer.hashicorp.com/terraform/language/data-sources) are used to perform read actions in
+Terraform. The Spectro Cloud Terraform provider exposes several data resources to help you make your Terraform code more
+dynamic. The data resource used in the cluster profile is `spectrocloud_pack`. This resource enables you to query
+Palette for information about a specific pack, such as its unique ID, registry ID, available versions, and YAML values.
+
+Below is the data resource used to query Palette for information about the Kubernetes pack for version `1.28.3`.
+
+```hcl
+data "spectrocloud_pack" "gcp_k8s" {
+ name = "kubernetes"
+ version = "1.28.3"
+ registry_uid = data.spectrocloud_registry.public_registry.id
+}
+```
+
+Using the data resource helps you avoid manually entering the parameter values required by the cluster profile's
+`pack {}` block.
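+
+The `registry_uid` argument in the query above references another data resource, `spectrocloud_registry`, which looks
+up a pack registry by name. A minimal sketch is shown below; the registry name used here is an assumption, so check
+**data.tf** for the value the tutorial actually uses.
+
+```hcl
+# Hypothetical registry lookup; the registry name may differ in data.tf.
+data "spectrocloud_registry" "public_registry" {
+  name = "Public Repo"
+}
+```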
+
+#### Cluster
+
+The **clusters.tf** file contains the definitions required for deploying a host cluster to one of the infrastructure
+providers. To create a GCP host cluster, you must set the `deploy-gcp` variable in the **terraform.tfvars** file to
+`true`.
+
+When deploying a cluster using Terraform, you must provide the same parameters as those available in the Palette UI for
+the cluster deployment step, such as the instance size and number of nodes. You can learn more about each parameter by
+reviewing the
+[GCP cluster resource](https://registry.terraform.io/providers/spectrocloud/spectrocloud/latest/docs/resources/cluster_gcp)
+documentation.
+
+```hcl
+resource "spectrocloud_cluster_gcp" "gcp-cluster" {
+ count = var.deploy-gcp ? 1 : 0
+
+ name = "gcp-cluster"
+ tags = concat(var.tags, ["env:gcp"])
+ cloud_account_id = data.spectrocloud_cloudaccount_gcp.account[0].id
+
+ cloud_config {
+ project = var.gcp_project_name
+ region = var.gcp-region
+ }
+
+ cluster_profile {
+ id = var.deploy-gcp && var.deploy-gcp-kubecost ? resource.spectrocloud_cluster_profile.gcp-profile-kubecost[0].id : resource.spectrocloud_cluster_profile.gcp-profile[0].id
+ }
+
+ machine_pool {
+ control_plane = true
+ control_plane_as_worker = true
+ name = "control-plane-pool"
+ count = var.gcp_control_plane_nodes.count
+ instance_type = var.gcp_control_plane_nodes.instance_type
+ disk_size_gb = var.gcp_control_plane_nodes.disk_size_gb
+ azs = var.gcp_control_plane_nodes.availability_zones
+ }
+
+ machine_pool {
+ name = "worker-pool"
+ count = var.gcp_worker_nodes.count
+ instance_type = var.gcp_worker_nodes.instance_type
+ disk_size_gb = var.gcp_worker_nodes.disk_size_gb
+ azs = var.gcp_worker_nodes.availability_zones
+ }
+
+ timeouts {
+ create = "30m"
+ delete = "15m"
+ }
+}
+```
+
+## Terraform Tests
+
+Before starting the cluster deployment, test the Terraform code to ensure the resources will be provisioned correctly.
+Issue the following command in your terminal.
+
+```bash
+terraform test
+```
+
+A successful test execution will output the following.
+
+```text hideClipboard
+Success! 16 passed, 0 failed.
+```
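+
+Terraform tests are defined in files with the `.tftest.hcl` extension, each containing one or more `run` blocks with
+assertions. The sketch below illustrates the general shape of such a test; it is a hypothetical example, and the
+assertions shipped with the tutorial may differ.
+
+```hcl
+# Hypothetical test file; the tutorial's actual tests may assert other conditions.
+run "gcp_profile_version_check" {
+  command = plan
+
+  variables {
+    deploy-gcp = true
+  }
+
+  assert {
+    condition     = spectrocloud_cluster_profile.gcp-profile[0].version == "1.0.0"
+    error_message = "Expected the GCP cluster profile version to be 1.0.0."
+  }
+}
+```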
+
+## Input Variables
+
+To deploy a cluster using Terraform, you must first modify the **terraform.tfvars** file. Open it in the editor of your
+choice. The tutorial container includes the editor [Nano](https://www.nano-editor.org).
+
+The file is structured with different sections. Each provider has a section with variables that need to be filled in,
+identified by the placeholder `REPLACE_ME`. Additionally, there is a toggle variable named `deploy-<provider>`
+available for each provider, which you can use to select the deployment environment.
+
+In the **Palette Settings** section, modify the name of the `palette-project` variable if you wish to deploy to a
+Palette project different from the default one.
+
+```hcl {4}
+#####################
+# Palette Settings
+#####################
+palette-project = "Default" # The name of your project in Palette.
+```
+
+Next, in the **Hello Universe Configuration** section, provide values for the database password and authentication token
+for the Hello Universe pack. For example, you can use the value `password` for the database password and the default
+token provided in the
+[Hello Universe](https://github.com/spectrocloud/hello-universe/tree/main?tab=readme-ov-file#reverse-proxy-with-kubernetes)
+repository for the authentication token.
+
+```hcl {7-8}
+##############################
+# Hello Universe Configuration
+##############################
+app_namespace = "hello-universe" # The namespace in which the application will be deployed.
+app_port = 8080 # The cluster port number on which the service will listen for incoming traffic.
+replicas_number = 1 # The number of pods to be created.
+db_password = "REPLACE ME" # The database password to connect to the API database.
+auth_token = "REPLACE ME" # The auth token for the API connection.
+```
+
+Locate the GCP provider section and change `deploy-gcp = false` to `deploy-gcp = true`. Additionally, replace all
+occurrences of `REPLACE_ME` with their corresponding values, such as those for the `gcp-cloud-account-name`,
+`gcp-region`, `gcp_project_name`, and `availability_zones` variables. You can also update the values for the nodes in
+the control plane or worker node pools as needed.
+
+```hcl {4,7-9,16,24}
+###########################
+# GCP Deployment Settings
+############################
+deploy-gcp = false # Set to true to deploy to GCP.
+deploy-gcp-kubecost = false # Set to true to deploy to GCP and include Kubecost to your cluster profile.
+
+gcp-cloud-account-name = "REPLACE ME"
+gcp-region = "REPLACE ME"
+gcp_project_name = "REPLACE ME"
+
+gcp_control_plane_nodes = {
+ count = "1"
+ control_plane = true
+ instance_type = "n1-standard-4"
+ disk_size_gb = "60"
+ availability_zones = ["REPLACE ME"] # If you want to deploy to multiple AZs, add them here. Example: ["us-central1-a", "us-central1-b"].
+}
+
+gcp_worker_nodes = {
+ count = "1"
+ control_plane = false
+ instance_type = "n1-standard-4"
+ disk_size_gb = "60"
+ availability_zones = ["REPLACE ME"] # If you want to deploy to multiple AZs, add them here. Example: ["us-central1-a", "us-central1-b"].
+}
+```
+
+When you are done making the required changes, save the file.
+
+## Deploy the Cluster
+
+Before starting the cluster provisioning, export your [Palette API key](./setup.md#create-a-palette-api-key) as an
+environment variable. This step allows the Terraform code to authenticate with the Palette API.
+
+```bash
+export SPECTROCLOUD_APIKEY=
+```
+
+Next, issue the following command to initialize Terraform. The `init` command initializes the working directory that
+contains the Terraform files.
+
+```shell
+terraform init
+```
+
+```text hideClipboard
+Terraform has been successfully initialized!
+```
+
+:::warning
+
+Before deploying the resources, ensure that there are no active clusters named `gcp-cluster` or cluster profiles named
+`tf-gcp-profile` in your Palette project.
+
+:::
+
+Issue the `plan` command to preview the resources that Terraform will create.
+
+```shell
+terraform plan
+```
+
+The output indicates that three new resources will be created: two versions of the GCP cluster profile and the host
+cluster. The host cluster will use version `1.0.0` of the cluster profile.
+
+```text hideClipboard
+Plan: 3 to add, 0 to change, 0 to destroy.
+```
+
+To deploy the resources, use the `apply` command.
+
+```shell
+terraform apply -auto-approve
+```
+
+To check that the cluster profile was created correctly, log in to [Palette](https://console.spectrocloud.com), and
+click **Profiles** from the left **Main Menu**. Locate the cluster profile named `tf-gcp-profile`. Click on the cluster
+profile to review its layers and versions.
+
+![A view of the cluster profile](/getting-started/gcp/getting-started_deploy-manage-k8s-cluster-tf_profile_review.webp)
+
+You can also check the cluster creation process by selecting **Clusters** from the left **Main Menu**.
+
+![View of the cluster creation process](/getting-started/gcp/getting-started_deploy-manage-k8s-cluster-tf_create_cluster.webp)
+
+Select your cluster to review its details page, which contains the status, cluster profile, event logs, and more.
+
+The cluster deployment may take 15 to 30 minutes depending on the cloud provider, cluster profile, cluster size, and the
+node pool configurations provided. You can learn more about the deployment progress by reviewing the event log. Click on
+the **Events** tab to check the log.
+
+![View of the cluster event log](/getting-started/gcp/getting-started_deploy-manage-k8s-cluster-tf_event_log.webp)
+
+### Verify the Application
+
+In Palette, navigate to the left **Main Menu** and select **Clusters**.
+
+Select your cluster to view its **Overview** tab. When the application is deployed and ready for network traffic,
+indicated in the **Services** field, Palette exposes the service URL. Click on the URL for port **:8080** to access the
+Hello Universe application.
+
+:::warning
+
+It can take up to three minutes for DNS to properly resolve the public load balancer URL. We recommend waiting a few
+moments before clicking on the service URL to prevent the browser from caching an unresolved DNS request.
+
+:::
+
+![Deployed application](/getting-started/gcp/getting-started_deploy-manage-k8s-cluster_hello-universe-w-api.webp)
+
+Welcome to Hello Universe, a demo application developed to help you learn more about Palette and its features. Feel free
+to click on the logo to increase the counter and for a fun image change.
+
+## Version Cluster Profiles
+
+Palette supports the creation of multiple cluster profile versions using the same profile name. This provides you with
+better change visibility and control over the layers in your host clusters. Profile versions are commonly used for
+adding or removing layers and pack configuration updates.
+
+The version number of a given profile must be unique and use the semantic versioning format `major.minor.patch`. In this
+tutorial, you used Terraform to deploy two versions of a GCP cluster profile. The snippet below displays a segment of
+the Terraform cluster profile resource version `1.0.0` that was deployed.
+
+```hcl {4,9}
+resource "spectrocloud_cluster_profile" "gcp-profile" {
+ count = var.deploy-gcp ? 1 : 0
+
+ name = "tf-gcp-profile"
+ description = "A basic cluster profile for GCP"
+ tags = concat(var.tags, ["env:GCP"])
+ cloud = "gcp"
+ type = "cluster"
+ version = "1.0.0"
+```
+
+Open the **terraform.tfvars** file, set the `deploy-gcp-kubecost` variable to true, and save the file. Once applied, the
+host cluster will use version `1.1.0` of the cluster profile with the Kubecost pack.
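+
+In **terraform.tfvars**, the toggle section should now read as follows.
+
+```hcl
+deploy-gcp          = true # Deploy to GCP.
+deploy-gcp-kubecost = true # Include Kubecost in the cluster profile.
+```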
+
+The snippet below displays the segment of the Terraform resource that creates the cluster profile version `1.1.0`. Note
+how the name `tf-gcp-profile` is the same as in the first cluster profile resource, but the version is different.
+
+```hcl {4,9}
+resource "spectrocloud_cluster_profile" "gcp-profile-kubecost" {
+ count = var.deploy-gcp ? 1 : 0
+
+ name = "tf-gcp-profile"
+ description = "A basic cluster profile for GCP with Kubecost"
+ tags = concat(var.tags, ["env:GCP"])
+ cloud = "gcp"
+ type = "cluster"
+ version = "1.1.0"
+```
+
+In the terminal window, issue the following command to plan the changes.
+
+```bash
+terraform plan
+```
+
+The output states that one resource will be modified. The deployed cluster will now use version `1.1.0` of the cluster
+profile.
+
+```text hideClipboard
+Plan: 0 to add, 1 to change, 0 to destroy.
+```
+
+Issue the `apply` command to deploy the changes.
+
+```bash
+terraform apply -auto-approve
+```
+
+Palette will now reconcile the current state of your workloads with the desired state specified by the new cluster
+profile version.
+
+To visualize the reconciliation behavior, log in to [Palette](https://console.spectrocloud.com), and click **Clusters**
+from the left **Main Menu**.
+
+Select the cluster named `gcp-cluster`. Click on the **Events** tab. Note how a cluster reconciliation action was
+triggered due to cluster profile changes.
+
+![Image that shows the cluster profile reconciliation behavior](/getting-started/gcp/getting-started_deploy-manage-k8s-cluster_reconciliation.webp)
+
+Next, click on the **Profile** tab. Observe that the cluster is now using version `1.1.0` of the `tf-gcp-profile`
+cluster profile.
+
+![Image that shows the new cluster profile version with Kubecost](/getting-started/gcp/getting-started_deploy-manage-k8s-cluster_profile-with-cluster.webp)
+
+Once the changes have been completed, Palette marks the cluster layers with a green status indicator. Click the
+**Overview** tab to verify that the Kubecost pack was successfully deployed.
+
+![Image that shows the cluster with Kubecost](/getting-started/gcp/getting-started_deploy-manage-k8s-cluster_profile-with-kubecost.webp)
+
+Next, download the [kubeconfig](../../clusters/cluster-management/kubeconfig.md) file for your cluster from the Palette
+UI. This file enables you and other users to issue `kubectl` commands against the host cluster.
+
+![Image that shows the cluster's kubeconfig file location](/getting-started/gcp/getting-started_deploy-manage-k8s-cluster_kubeconfig.webp)
+
+Open a new terminal window and set the environment variable `KUBECONFIG` to point to the kubeconfig file you downloaded.
+
+```bash
+export KUBECONFIG=~/Downloads/admin.gcp-cluster.kubeconfig
+```
+
+Forward the Kubecost UI to your local network. The Kubecost dashboard is not exposed externally by default, so the
+command below will allow you to access it locally on port **9090**. If port 9090 is already taken, you can choose a
+different one.
+
+```bash
+kubectl port-forward --namespace kubecost deployment/cost-analyzer-cost-analyzer 9090
+```
+
+Open your browser window and navigate to `http://localhost:9090`. The Kubecost UI provides you with a variety of cost
+information about your cluster. Read more about
+[Navigating the Kubecost UI](https://docs.kubecost.com/using-kubecost/navigating-the-kubecost-ui) to make the most of
+the cost analyzer pack.
+
+![Image that shows the Kubecost UI](/getting-started/gcp/getting-started_deploy-manage-k8s-cluster_kubecost.webp)
+
+Once you are done exploring the Kubecost dashboard, stop the `kubectl port-forward` command by closing the terminal
+window it is running in.
+
+## Roll Back Cluster Profiles
+
+One of the key advantages of using cluster profile versions is that they make it possible to maintain a copy of
+previously known working states. The ability to roll back to a previously working cluster profile in one action shortens
+the time to recovery in the event of an incident.
+
+The process of rolling back to a previous version using Terraform is similar to the process of applying a new version.
+
+Open the **terraform.tfvars** file, set the `deploy-gcp-kubecost` variable to false, and save the file. Once applied,
+this action will make the active cluster use version **1.0.0** of the cluster profile again.
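+
+The toggle section of **terraform.tfvars** should now read as follows.
+
+```hcl
+deploy-gcp          = true  # Keep the GCP deployment active.
+deploy-gcp-kubecost = false # Roll back to version 1.0.0 of the cluster profile.
+```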
+
+In the terminal window, issue the following command to plan the changes.
+
+```bash
+terraform plan
+```
+
+The output states that the deployed cluster will now use version `1.0.0` of the cluster profile.
+
+```text hideClipboard
+Plan: 0 to add, 1 to change, 0 to destroy.
+```
+
+Issue the `apply` command to deploy the changes.
+
+```bash
+terraform apply -auto-approve
+```
+
+Palette now makes the changes required for the cluster to return to the state specified in version `1.0.0` of your
+cluster profile. Once your changes have completed, Palette marks your layers with the green status indicator.
+
+![Image that shows the cluster using version 1.0.0 of the cluster profile](/getting-started/gcp/getting-started_deploy-manage-k8s-cluster_profile-without-kubecost.webp)
+
+## Cleanup
+
+Use the following steps to clean up the resources you created for the tutorial. The `destroy` command removes all the
+resources you created through Terraform.
+
+```shell
+terraform destroy -auto-approve
+```
+
+A successful execution of `terraform destroy` will output the following.
+
+```text hideClipboard
+Destroy complete! Resources: 3 destroyed.
+```
+
+:::info
+
+If a cluster remains in the delete phase for over 15 minutes, it becomes eligible for force delete. To trigger a force
+delete action, navigate to the cluster’s details page and click on **Settings**. Click on **Force Delete Cluster** to
+delete the cluster. Palette automatically removes clusters stuck in the cluster deletion phase for over 24 hours.
+
+:::
+
+If you are using the tutorial container, type `exit` in your terminal session and press the **Enter** key. Next, issue
+the following command to stop and remove the container.
+
+
+
+
+
+```shell
+docker stop tutorialContainer && \
+docker rmi --force ghcr.io/spectrocloud/tutorials:1.1.7
+```
+
+
+
+
+
+```shell
+podman stop tutorialContainer && \
+podman rmi --force ghcr.io/spectrocloud/tutorials:1.1.7
+```
+
+
+
+
+
+## Wrap-Up
+
+In this tutorial, you learned how to create different versions of a cluster profile using Terraform. You deployed a host
+GCP cluster and then updated it to use a different version of a cluster profile. Finally, you learned how to perform
+cluster profile rollbacks.
+
+We encourage you to check out the
+[Spectro Cloud Terraform](https://registry.terraform.io/providers/spectrocloud/spectrocloud/latest) provider page to
+learn more about the Palette resources you can deploy using Terraform.
diff --git a/docs/docs-content/getting-started/gcp/gcp.md b/docs/docs-content/getting-started/gcp/gcp.md
index 1313eb8fc4..496138a886 100644
--- a/docs/docs-content/getting-started/gcp/gcp.md
+++ b/docs/docs-content/getting-started/gcp/gcp.md
@@ -43,10 +43,10 @@ your cluster is deployed, you can update it using cluster profile updates.
relativeURL: "./update-k8s-cluster",
},
{
- title: "Deploy a Cluster with Terraform",
- description: "Deploy a Palette host cluster with Terraform.",
+ title: "Cluster Management with Terraform",
+ description: "Deploy and update a Palette host cluster with Terraform.",
buttonText: "Learn more",
- relativeURL: "./deploy-k8s-cluster-tf",
+ relativeURL: "./deploy-manage-k8s-cluster-tf",
},
]}
/>
diff --git a/docs/docs-content/getting-started/gcp/setup.md b/docs/docs-content/getting-started/gcp/setup.md
index 3c9e3578ad..c32e42e3ff 100644
--- a/docs/docs-content/getting-started/gcp/setup.md
+++ b/docs/docs-content/getting-started/gcp/setup.md
@@ -8,24 +8,15 @@ sidebar_position: 10
tags: ["getting-started", "gcp"]
---
-In this guide, you will learn how to set up Palette for use with your GCP cloud account. These steps are required in
-order to authenticate Palette and allow it to deploy host clusters.
+In this guide, you will learn how to set up Palette for use with your Google Cloud Platform (GCP) cloud account. These
+steps are required in order to authenticate Palette and allow it to deploy host clusters.
## Prerequisites
-The prerequisite steps to getting started with Palette on GCP are as follows.
+- A Palette account with [tenant admin](../../tenant-settings/tenant-settings.md) access.
-- Sign up to [Palette](https://www.spectrocloud.com/get-started).
-
- - Your Palette account role must have the `clusterProfile.create` permission to create a cluster profile. Refer to the
- [Roles and Permissions](../../user-management/palette-rbac/project-scope-roles-permissions.md#cluster-profile-admin)
- documentation for more information.
-
-- Sign up to a public cloud account from [GCP](https://cloud.google.com/docs/get-started).
-
-- Access to a terminal window.
-
-- The utility `ssh-keygen` or similar SSH key generator software.
+- Sign up to a service account from [GCP](https://cloud.google.com/docs/get-started). The GCP account must have the
+ required [IAM permissions](../../clusters/public-cloud/gcp/required-permissions.md).
## Enablement
@@ -35,9 +26,12 @@ Palette needs access to your GCP cloud account in order to create and manage GCP
-### Create and Upload an SSH Key
+### Create a Palette API Key
+
+Follow the steps below to create a Palette API key. This is required for the
+[Cluster Management with Terraform](./deploy-manage-k8s-cluster-tf.md) tutorial.
-
+
## Validate
diff --git a/docs/docs-content/getting-started/gcp/update-k8s-cluster.md b/docs/docs-content/getting-started/gcp/update-k8s-cluster.md
index c32885620c..7ba316adc7 100644
--- a/docs/docs-content/getting-started/gcp/update-k8s-cluster.md
+++ b/docs/docs-content/getting-started/gcp/update-k8s-cluster.md
@@ -285,5 +285,5 @@ Cluster profiles provide consistency during the cluster creation process, as wel
They can be versioned to keep a record of previously working cluster states, giving you visibility when updating or
rolling back workloads across your environments.
-We recommend that you continue to the [Deploy a Cluster with Terraform](./deploy-k8s-cluster-tf.md) page to learn about
-how you can use Palette with Terraform.
+We recommend that you continue to the [Cluster Management with Terraform](./deploy-manage-k8s-cluster-tf.md) page to
+learn about how you can use Palette with Terraform.
diff --git a/docs/docs-content/getting-started/vmware/deploy-manage-k8s-cluster-tf.md b/docs/docs-content/getting-started/vmware/deploy-manage-k8s-cluster-tf.md
new file mode 100644
index 0000000000..219bf02a44
--- /dev/null
+++ b/docs/docs-content/getting-started/vmware/deploy-manage-k8s-cluster-tf.md
@@ -0,0 +1,797 @@
+---
+sidebar_label: "Cluster Management with Terraform"
+title: "Cluster Management with Terraform"
+description: "Learn how to deploy and update a Palette host cluster to VMware vSphere with Terraform."
+icon: ""
+hide_table_of_contents: false
+sidebar_position: 50
+toc_max_heading_level: 2
+tags: ["getting-started", "vmware", "terraform"]
+---
+
+The [Spectro Cloud Terraform](https://registry.terraform.io/providers/spectrocloud/spectrocloud/latest/docs) provider
+allows you to create and manage Palette resources using Infrastructure as Code (IaC). With IaC, you can automate the
+provisioning of resources, collaborate on changes, and maintain a single source of truth for your infrastructure.
+
+This tutorial will teach you how to use Terraform to deploy and update a VMware vSphere host cluster. You will learn how
+to create two versions of a cluster profile with different demo applications, update the deployed cluster with the new
+cluster profile version, and then perform a rollback.
+
+## Prerequisites
+
+To complete this tutorial, you will need the following items in place:
+
+- Follow the steps described in the [Set up Palette with VMware](./setup.md) guide to authenticate Palette for use with
+ your VMware vSphere account.
+- Follow the steps described in the [Deploy a PCG](./deploy-pcg.md) tutorial to deploy a VMware vSphere Private Cloud
+ Gateway (PCG).
+- [Docker Desktop](https://www.docker.com/products/docker-desktop/) or [Podman](https://podman.io/docs/installation)
+ installed if you choose to follow along using the tutorial container.
+- If you choose to clone the repository instead of using the tutorial container, make sure you have the following
+ software installed:
+ - [Terraform](https://developer.hashicorp.com/terraform/tutorials/aws-get-started/install-cli) v1.9.0 or greater
+ - [Git](https://git-scm.com/downloads)
+ - [Kubectl](https://kubernetes.io/docs/tasks/tools/)
+
+## Set Up Local Environment
+
+You can clone the [Tutorials](https://github.com/spectrocloud/tutorials) repository locally or follow along by
+downloading a container image that includes the tutorial code and all dependencies.
+
+
+
+
+
+Start Docker Desktop and ensure that the Docker daemon is available by issuing the following command.
+
+```bash
+docker ps
+```
+
+Next, download the tutorial image, start the container, and open a bash session into it.
+
+```shell
+docker run --name tutorialContainer --interactive --tty ghcr.io/spectrocloud/tutorials:1.1.7 bash
+```
+
+Navigate to the folder that contains the tutorial code.
+
+```shell
+cd /terraform/getting-started-deployment-tf
+```
+
+:::warning
+
+Do not exit the container until the tutorial is complete. Otherwise, you may lose your progress.
+
+:::
+
+
+
+
+
+If you are not using a Linux operating system, create and start the Podman Machine in your local environment. Otherwise,
+skip this step.
+
+```bash
+podman machine init
+podman machine start
+```
+
+Use the following command and ensure you receive an output displaying the installation information.
+
+```bash
+podman info
+```
+
+Next, download the tutorial image, start the container, and open a bash session into it.
+
+```shell
+podman run --name tutorialContainer --interactive --tty ghcr.io/spectrocloud/tutorials:1.1.7 bash
+```
+
+Navigate to the folder that contains the tutorial code.
+
+```shell
+cd /terraform/getting-started-deployment-tf
+```
+
+:::warning
+
+Do not exit the container until the tutorial is complete. Otherwise, you may lose your progress.
+
+:::
+
+
+
+
+
+Open a terminal window and download the tutorial code from GitHub.
+
+```shell
+git clone https://github.com/spectrocloud/tutorials.git
+```
+
+Change the directory to the tutorial folder.
+
+```shell
+cd tutorials/
+```
+
+Check out the following git tag.
+
+```shell
+git checkout v1.1.7
+```
+
+Navigate to the folder that contains the tutorial code.
+
+```shell
+cd terraform/getting-started-deployment-tf
+```
+
+
+
+
+
+## Resources Review
+
+To help you get started with Terraform, the tutorial code is structured to support deploying a cluster to AWS, Azure,
+GCP, or VMware vSphere. Before you deploy a host cluster to VMware vSphere, review the following files in the folder
+structure.
+
+| **File** | **Description** |
+| ----------------------- | ---------------------------------------------------------------------------------------------------------------------- |
+| **provider.tf** | This file contains the Terraform providers that are used to support the deployment of the cluster. |
+| **inputs.tf** | This file contains all the Terraform variables required for the deployment logic. |
+| **data.tf** | This file contains all the query resources that perform read actions. |
+| **cluster_profiles.tf** | This file contains the cluster profile definitions for each cloud provider. |
+| **clusters.tf** | This file has the cluster configurations required to deploy a host cluster to one of the cloud providers. |
+| **terraform.tfvars** | Use this file to target a specific cloud provider and customize the deployment. This is the only file you must modify. |
+| **ippool.tf** | This file contains the configuration required for VMware deployments that use static IP placement. |
+| **ssh-key.tf** | This file has the SSH key resource definition required for Azure and VMware deployments. |
+| **outputs.tf** | This file contains the content that will be displayed in the terminal after a successful Terraform `apply` action. |
+
+The following section reviews the core Terraform resources more closely.
+
+#### Provider
+
+The **provider.tf** file contains the Terraform providers used in the tutorial and their respective versions. This
+tutorial uses four providers:
+
+- [Spectro Cloud](https://registry.terraform.io/providers/spectrocloud/spectrocloud/latest/docs)
+- [TLS](https://registry.terraform.io/providers/hashicorp/tls/latest)
+- [vSphere](https://registry.terraform.io/providers/hashicorp/vsphere/latest)
+- [Local](https://registry.terraform.io/providers/hashicorp/local/latest)
+
+Note how the project name is specified in the `provider "spectrocloud" {}` block. You can change the target project by
+modifying the value of the `palette-project` variable in the **terraform.tfvars** file.
+
+```hcl
+terraform {
+ required_providers {
+ spectrocloud = {
+ version = ">= 0.20.6"
+ source = "spectrocloud/spectrocloud"
+ }
+
+ tls = {
+ source = "hashicorp/tls"
+ version = "4.0.4"
+ }
+
+ vsphere = {
+ source = "hashicorp/vsphere"
+ version = ">= 2.6.1"
+ }
+
+ local = {
+ source = "hashicorp/local"
+ version = "2.4.1"
+ }
+ }
+
+ required_version = ">= 1.9"
+}
+
+provider "spectrocloud" {
+ project_name = var.palette-project
+}
+```
+
+#### Cluster Profile
+
+The next file you should become familiar with is the **cluster_profiles.tf** file. The `spectrocloud_cluster_profile`
+resource allows you to create a cluster profile and customize its layers. You can specify the packs and versions to use
+or add a manifest or Helm chart.
+
+The cluster profile resource is declared eight times in the **cluster_profiles.tf** file, with each pair of resources
+being designated for a specific provider. In this tutorial, two versions of the VMware vSphere cluster profile are
+deployed: version `1.0.0` deploys the [Hello Universe](https://github.com/spectrocloud/hello-universe) pack, while
+version `1.1.0` deploys the [Kubecost](https://www.kubecost.com/) pack along with the
+[Hello Universe](https://github.com/spectrocloud/hello-universe) application.
+
+The cluster profiles also include layers for the Operating System (OS), Kubernetes, the container network interface,
+the container storage interface, and the load balancer implementation for bare-metal clusters. The order of the
+`pack {}` blocks is important during cluster profile creation: the first `pack {}` block in the list corresponds to
+the bottom layer of the cluster profile, so ensure you define the OS layer first. The table below displays the packs
+deployed in each version of the cluster profile.
+
+| **Pack Type** | **Pack Name** | **Version** | **Cluster Profile v1.0.0** | **Cluster Profile v1.1.0** |
+| ------------- | ----------------- | ----------- | -------------------------- | -------------------------- |
+| OS | `ubuntu-vsphere` | `22.04` | :white_check_mark: | :white_check_mark: |
+| Kubernetes | `kubernetes` | `1.28.3` | :white_check_mark: | :white_check_mark: |
+| Network | `cni-calico` | `3.26.3` | :white_check_mark: | :white_check_mark: |
+| Storage | `csi-vsphere-csi` | `3.0.2` | :white_check_mark: | :white_check_mark: |
+| Load Balancer | `lb-metallb-helm` | `0.13.11` | :white_check_mark: | :white_check_mark: |
+| App Services | `hellouniverse` | `1.1.2` | :white_check_mark: | :white_check_mark: |
+| App Services | `cost-analyzer` | `1.103.3` | :x: | :white_check_mark: |
+
+The Hello Universe pack has two configured [presets](../../glossary-all.md#presets). The first preset deploys a
+standalone frontend application, while the second one deploys a three-tier application with a frontend, API server, and
+Postgres database. This tutorial deploys the three-tier version of the
+[Hello Universe](https://github.com/spectrocloud/hello-universe) pack. The preset selection in the Terraform code is
+specified within the Hello Universe pack block with the `values` field and by using the **values-3tier.yaml** file.
+Below is an example of version `1.0.0` of the VMware vSphere cluster profile Terraform resource.
+
+```hcl
+resource "spectrocloud_cluster_profile" "vmware-profile" {
+ count = var.deploy-vmware ? 1 : 0
+
+ name = "tf-vmware-profile"
+ description = "A basic cluster profile for VMware"
+ tags = concat(var.tags, ["env:VMware"])
+ cloud = "vsphere"
+ type = "cluster"
+ version = "1.0.0"
+
+ pack {
+ name = data.spectrocloud_pack.vmware_ubuntu.name
+ tag = data.spectrocloud_pack.vmware_ubuntu.version
+ uid = data.spectrocloud_pack.vmware_ubuntu.id
+ values = data.spectrocloud_pack.vmware_ubuntu.values
+ type = "spectro"
+ }
+
+ pack {
+ name = data.spectrocloud_pack.vmware_k8s.name
+ tag = data.spectrocloud_pack.vmware_k8s.version
+ uid = data.spectrocloud_pack.vmware_k8s.id
+ values = data.spectrocloud_pack.vmware_k8s.values
+ type = "spectro"
+ }
+
+ pack {
+ name = data.spectrocloud_pack.vmware_cni.name
+ tag = data.spectrocloud_pack.vmware_cni.version
+ uid = data.spectrocloud_pack.vmware_cni.id
+ values = data.spectrocloud_pack.vmware_cni.values
+ type = "spectro"
+ }
+
+ pack {
+ name = data.spectrocloud_pack.vmware_csi.name
+ tag = data.spectrocloud_pack.vmware_csi.version
+ uid = data.spectrocloud_pack.vmware_csi.id
+ values = data.spectrocloud_pack.vmware_csi.values
+ type = "spectro"
+ }
+
+ pack {
+ name = data.spectrocloud_pack.vmware_metallb.name
+ tag = data.spectrocloud_pack.vmware_metallb.version
+ uid = data.spectrocloud_pack.vmware_metallb.id
+ values = replace(data.spectrocloud_pack.vmware_metallb.values, "192.168.10.0/24", var.metallb_ip)
+ type = "oci"
+ }
+
+ pack {
+ name = data.spectrocloud_pack.hellouniverse.name
+ tag = data.spectrocloud_pack.hellouniverse.version
+ uid = data.spectrocloud_pack.hellouniverse.id
+ values = templatefile("manifests/values-3tier.yaml", {
+ namespace = var.app_namespace,
+ port = var.app_port,
+ replicas = var.replicas_number,
+ db_password = base64encode(var.db_password),
+ auth_token = base64encode(var.auth_token)
+ })
+ type = "oci"
+ }
+}
+```
+
+#### Data Resources
+
+Each `pack {}` block contains references to a data resource.
+[Data resources](https://developer.hashicorp.com/terraform/language/data-sources) are used to perform read actions in
+Terraform. The Spectro Cloud Terraform provider exposes several data resources to help you make your Terraform code more
+dynamic. The data resource used in the cluster profile is `spectrocloud_pack`. This resource enables you to query
+Palette for information about a specific pack, such as its unique ID, registry ID, available versions, and YAML values.
+
+Below is the data resource used to query Palette for information about the Kubernetes pack for version `1.28.3`.
+
+```hcl
+data "spectrocloud_pack" "vmware_k8s" {
+ name = "kubernetes"
+ version = "1.28.3"
+ registry_uid = data.spectrocloud_registry.public_registry.id
+}
+```
+
+Using the data resource helps you avoid manually entering the parameter values required by the cluster profile's
+`pack {}` block.
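+
+The `registry_uid` parameter in the example above references another data resource, `spectrocloud_registry`, which
+looks up a pack registry in Palette. A minimal sketch of its declaration is shown below; the registry name is an
+assumption based on Palette's default public pack registry.
+
+```hcl
+data "spectrocloud_registry" "public_registry" {
+  name = "Public Repo"
+}
+```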
+
+#### Cluster
+
+The **clusters.tf** file contains the definitions required for deploying a host cluster to one of the infrastructure
+providers. To create a VMware vSphere host cluster, you must set the `deploy-vmware` variable in the
+**terraform.tfvars** file to true.
+
+When deploying a cluster using Terraform, you must provide the same parameters as those available in the Palette UI for
+the cluster deployment step, such as the instance size and number of nodes. You can learn more about each parameter by
+reviewing the
+[VMware vSphere cluster resource](https://registry.terraform.io/providers/spectrocloud/spectrocloud/latest/docs/resources/cluster_vsphere)
+documentation.
+
+```hcl
+resource "spectrocloud_cluster_vsphere" "vmware-cluster" {
+ count = var.deploy-vmware ? 1 : 0
+
+ name = "vmware-cluster"
+ tags = concat(var.tags, ["env:vmware"])
+ cloud_account_id = data.spectrocloud_cloudaccount_vsphere.account[0].id
+
+ cloud_config {
+ ssh_keys = [local.ssh_public_key]
+ datacenter = var.datacenter_name
+ folder = var.folder_name
+ static_ip = var.deploy-vmware-static # If true, the cluster will use static IP placement. If false, the cluster will use DDNS.
+ network_search_domain = var.search_domain
+ }
+
+ cluster_profile {
+ id = var.deploy-vmware && var.deploy-vmware-kubecost ? resource.spectrocloud_cluster_profile.vmware-profile-kubecost[0].id : resource.spectrocloud_cluster_profile.vmware-profile[0].id
+ }
+
+ scan_policy {
+ configuration_scan_schedule = "0 0 * * SUN"
+ penetration_scan_schedule = "0 0 * * SUN"
+ conformance_scan_schedule = "0 0 1 * *"
+ }
+
+ machine_pool {
+ name = "control-plane-pool"
+ count = 1
+ control_plane = true
+ control_plane_as_worker = true
+
+ instance_type {
+ cpu = 4
+ disk_size_gb = 60
+ memory_mb = 8000
+ }
+
+ placement {
+ cluster = var.vsphere_cluster
+ datastore = var.datastore_name
+ network = var.network_name
+ resource_pool = var.resource_pool_name
+ # Required for static IP placement.
+ static_ip_pool_id = var.deploy-vmware-static ? resource.spectrocloud_privatecloudgateway_ippool.ippool[0].id : null
+ }
+
+ }
+
+ machine_pool {
+ name = "worker-pool"
+ count = 1
+ control_plane = false
+
+ instance_type {
+ cpu = 4
+ disk_size_gb = 60
+ memory_mb = 8000
+ }
+
+ placement {
+ cluster = var.vsphere_cluster
+ datastore = var.datastore_name
+ network = var.network_name
+ resource_pool = var.resource_pool_name
+ # Required for static IP placement.
+ static_ip_pool_id = var.deploy-vmware-static ? resource.spectrocloud_privatecloudgateway_ippool.ippool[0].id : null
+ }
+ }
+}
+```
+
+## Terraform Tests
+
+Before starting the cluster deployment, test the Terraform code to ensure the resources will be provisioned correctly.
+Issue the following command in your terminal.
+
+```bash
+terraform test
+```
+
+A successful test execution will output the following.
+
+```text hideClipboard
+Success! 16 passed, 0 failed.
+```
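+
+The tutorial's actual test files are not reproduced here. As a hypothetical sketch of the format that `terraform test`
+consumes, a test file (for example, `example.tftest.hcl`, an assumed name) defines `run` blocks that execute a plan or
+apply and evaluate assertions against it.
+
+```hcl
+# Hypothetical test file for illustration only.
+run "hello_universe_port" {
+  command = plan
+
+  assert {
+    condition     = var.app_port == 8080
+    error_message = "Expected the Hello Universe service port to be 8080."
+  }
+}
+```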
+
+## Input Variables
+
+To deploy a cluster using Terraform, you must first modify the **terraform.tfvars** file. Open it in the editor of your
+choice. The tutorial container includes the editor [Nano](https://www.nano-editor.org).
+
+The file is structured in sections. Each provider has a section with variables that need to be filled in, identified by
+the placeholder `REPLACE ME`. Additionally, each provider section includes a toggle variable prefixed with `deploy-`,
+which you can use to select the deployment environment.
+
+In the **Palette Settings** section, modify the name of the `palette-project` variable if you wish to deploy to a
+Palette project different from the default one.
+
+```hcl {4}
+#####################
+# Palette Settings
+#####################
+palette-project = "Default" # The name of your project in Palette.
+```
+
+Next, in the **Hello Universe Configuration** section, provide values for the database password and authentication token
+for the Hello Universe pack. For example, you can use the value `password` for the database password and the default
+token provided in the
+[Hello Universe](https://github.com/spectrocloud/hello-universe/tree/main?tab=readme-ov-file#reverse-proxy-with-kubernetes)
+repository for the authentication token.
+
+```hcl {7-8}
+##############################
+# Hello Universe Configuration
+##############################
+app_namespace = "hello-universe" # The namespace in which the application will be deployed.
+app_port = 8080 # The cluster port number on which the service will listen for incoming traffic.
+replicas_number = 1 # The number of pods to be created.
+db_password = "REPLACE ME" # The database password to connect to the API database.
+auth_token = "REPLACE ME" # The auth token for the API connection.
+```
+
+Locate the VMware vSphere provider section and change `deploy-vmware = false` to `deploy-vmware = true`. Additionally,
+replace all occurrences of `REPLACE ME` with the required variable values.
+
+- **metallb_ip** - Range of IP addresses for your MetalLB load balancer. If using static IP placement, this range must
+ be included in the PCG's static IP pool range.
+- **pcg_name** - Name of the PCG that will be used to deploy the Palette cluster.
+- **datacenter_name** - Name of the data center in vSphere.
+- **folder_name** - Name of the folder in vSphere.
+- **search_domain** - Name of the network search domain.
+- **vsphere_cluster** - Name of the cluster as it appears in vSphere.
+- **datastore_name** - Name of the datastore as it appears in vSphere.
+- **network_name** - Name of the network as it appears in vSphere.
+- **resource_pool_name** - Name of the resource pool as it appears in vSphere.
+- **ssh_key** - Path to a public SSH key. If not provided, a new key pair will be created.
+- **ssh_key_private** - Path to a private SSH key. If not provided, a new key pair will be created.
+
+```hcl {4,7-15}
+############################
+# VMware Deployment Settings
+############################
+deploy-vmware = false # Set to true to deploy to VMware.
+deploy-vmware-kubecost = false # Set to true to deploy to VMware and include Kubecost to your cluster profile.
+
+metallb_ip = "REPLACE ME"
+pcg_name = "REPLACE ME"
+datacenter_name = "REPLACE ME"
+folder_name = "REPLACE ME"
+search_domain = "REPLACE ME"
+vsphere_cluster = "REPLACE ME"
+datastore_name = "REPLACE ME"
+network_name = "REPLACE ME"
+resource_pool_name = "REPLACE ME"
+ssh_key = ""
+ssh_key_private = ""
+```
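+
+For reference, a filled-in version of this section might look like the following. All values are hypothetical and must
+be replaced with the details of your own vSphere environment and PCG.
+
+```hcl
+deploy-vmware          = true
+deploy-vmware-kubecost = false
+
+metallb_ip         = "10.10.100.20-10.10.100.30" # Hypothetical MetalLB address range.
+pcg_name           = "my-pcg"                    # Hypothetical PCG name.
+datacenter_name    = "Datacenter"
+folder_name        = "internal/clusters"
+search_domain      = "example.internal"
+vsphere_cluster    = "Cluster-1"
+datastore_name     = "datastore1"
+network_name       = "VM Network"
+resource_pool_name = "Resources"
+ssh_key            = ""
+ssh_key_private    = ""
+```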
+
+:::info
+
+If you deployed the PCG using static IP placement, you must create an
+[IPAM pool](../../clusters/pcg/manage-pcg/create-manage-node-pool.md) before deploying clusters. Set the
+`deploy-vmware-static` variable to `true` and provide the required values for the variables under the **Static IP Pool
+Variables** section.
+
+:::
+
+When you are done making the required changes, save the file.
+
+## Deploy the Cluster
+
+Before starting the cluster provisioning, export your [Palette API key](./setup.md#create-a-palette-api-key) as an
+environment variable. This step allows the Terraform code to authenticate with the Palette API.
+
+```bash
+export SPECTROCLOUD_APIKEY=
+```
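+
+The Spectro Cloud Terraform provider reads this environment variable when no API key is set in the provider
+configuration. As a hedged sketch, the alternative is to set the `api_key` argument directly in the provider block;
+leaving it unset and relying on the environment variable keeps the credential out of version-controlled files.
+
+```hcl
+# Sketch of the provider block; api_key is shown commented out because the
+# SPECTROCLOUD_APIKEY environment variable is the preferred way to supply it.
+provider "spectrocloud" {
+  project_name = "Default" # Hypothetical; typically supplied via a variable.
+  # api_key    = "..."     # Prefer the environment variable instead.
+}
+```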
+
+Next, issue the following command to initialize Terraform. The `init` command initializes the working directory that
+contains the Terraform files.
+
+```shell
+terraform init
+```
+
+```text hideClipboard
+Terraform has been successfully initialized!
+```
+
+:::warning
+
+Before deploying the resources, ensure that there are no active clusters named `vmware-cluster` or cluster profiles
+named `tf-vmware-profile` in your Palette project.
+
+:::
+
+Issue the `plan` command to preview the resources that Terraform will create.
+
+```shell
+terraform plan
+```
+
+The output indicates that six new resources will be created: two versions of the VMware vSphere cluster profile, the
+host cluster, and the files associated with the SSH key pair if you have not provided one. The host cluster will use
+version `1.0.0` of the cluster profile.
+
+```text hideClipboard
+Plan: 6 to add, 0 to change, 0 to destroy.
+```
+
+To deploy the resources, use the `apply` command.
+
+```shell
+terraform apply -auto-approve
+```
+
+To check that the cluster profile was created correctly, log in to [Palette](https://console.spectrocloud.com), and
+click **Profiles** from the left **Main Menu**. Locate the cluster profile named `tf-vmware-profile`. Click on the
+cluster profile to review its layers and versions.
+
+![A view of the cluster profile](/getting-started/vmware/getting-started_deploy-manage-k8s-cluster-tf_profile_review.webp)
+
+You can also check the cluster creation process by selecting **Clusters** from the left **Main Menu**.
+
+![View of the cluster creation process](/getting-started/vmware/getting-started_deploy-manage-k8s-cluster-tf_create_cluster.webp)
+
+Select your cluster to review its details page, which contains the status, cluster profile, event logs, and more.
+
+The cluster deployment may take 15 to 30 minutes depending on the cloud provider, cluster profile, cluster size, and the
+node pool configurations provided. You can learn more about the deployment progress by reviewing the event log. Click on
+the **Events** tab to check the log.
+
+![View of the cluster event log](/getting-started/vmware/getting-started_deploy-manage-k8s-cluster-tf_event_log.webp)
+
+### Verify the Application
+
+In Palette, navigate to the left **Main Menu** and select **Clusters**.
+
+Select your cluster to view its **Overview** tab. When the application is deployed and ready for network traffic,
+indicated in the **Services** field, Palette exposes the service URL. Click on the URL for port **:8080** to access the
+Hello Universe application.
+
+:::warning
+
+It can take up to three minutes for DNS to properly resolve the public load balancer URL. We recommend waiting a few
+moments before clicking on the service URL to prevent the browser from caching an unresolved DNS request.
+
+:::
+
+![Deployed application](/getting-started/vmware/getting-started_deploy-manage-k8s-cluster_hello-universe-w-api.webp)
+
+Welcome to Hello Universe, a demo application developed to help you learn more about Palette and its features. Feel free
+to click on the logo to increase the counter and for a fun image change.
+
+## Version Cluster Profiles
+
+Palette supports the creation of multiple cluster profile versions using the same profile name. This provides you with
+better change visibility and control over the layers in your host clusters. Profile versions are commonly used for
+adding or removing layers and pack configuration updates.
+
+The version number of a given profile must be unique and use the semantic versioning format `major.minor.patch`. In this
+tutorial, you used Terraform to deploy two versions of a VMware vSphere cluster profile. The snippet below displays a
+segment of the Terraform cluster profile resource version `1.0.0` that was deployed.
+
+```hcl {4,9}
+resource "spectrocloud_cluster_profile" "vmware-profile" {
+ count = var.deploy-vmware ? 1 : 0
+
+ name = "tf-vmware-profile"
+ description = "A basic cluster profile for VMware"
+ tags = concat(var.tags, ["env:VMware"])
+ cloud = "vsphere"
+ type = "cluster"
+ version = "1.0.0"
+```
+
+Open the **terraform.tfvars** file, set the `deploy-vmware-kubecost` variable to `true`, and save the file. Once
+applied, the host cluster will use version `1.1.0` of the cluster profile with the Kubecost pack.
+
+The snippet below displays the segment of the Terraform resource that creates the cluster profile version `1.1.0`. Note
+how the name `tf-vmware-profile` is the same as in the first cluster profile resource, but the version is different.
+
+```hcl {4,9}
+resource "spectrocloud_cluster_profile" "vmware-profile-kubecost" {
+ count = var.deploy-vmware ? 1 : 0
+
+ name = "tf-vmware-profile"
+ description = "A basic cluster profile for VMware with Kubecost"
+ tags = concat(var.tags, ["env:VMware"])
+ cloud = "vsphere"
+ type = "cluster"
+ version = "1.1.0"
+```
+
+In the terminal window, issue the following command to plan the changes.
+
+```bash
+terraform plan
+```
+
+The output states that one resource will be modified. The deployed cluster will now use version `1.1.0` of the cluster
+profile.
+
+```text hideClipboard
+Plan: 0 to add, 1 to change, 0 to destroy.
+```
+
+Issue the `apply` command to deploy the changes.
+
+```bash
+terraform apply -auto-approve
+```
+
+Palette will now reconcile the current state of your workloads with the desired state specified by the new cluster
+profile version.
+
+To visualize the reconciliation behavior, log in to [Palette](https://console.spectrocloud.com), and click **Clusters**
+from the left **Main Menu**.
+
+Select the cluster named `vmware-cluster`. Click on the **Events** tab. Note how a cluster reconciliation action was
+triggered due to cluster profile changes.
+
+![Image that shows the cluster profile reconciliation behavior](/getting-started/vmware/getting-started_deploy-manage-k8s-cluster_reconciliation.webp)
+
+Next, click on the **Profile** tab. Observe that the cluster is now using version `1.1.0` of the `tf-vmware-profile`
+cluster profile.
+
+![Image that shows the new cluster profile version with Kubecost](/getting-started/vmware/getting-started_deploy-manage-k8s-cluster_profile-with-cluster.webp)
+
+Once the changes have been completed, Palette marks the cluster layers with a green status indicator. Click the
+**Overview** tab to verify that the Kubecost pack was successfully deployed.
+
+![Image that shows the cluster with Kubecost](/getting-started/vmware/getting-started_deploy-manage-k8s-cluster_profile-with-kubecost.webp)
+
+Next, download the [kubeconfig](../../clusters/cluster-management/kubeconfig.md) file for your cluster from the Palette
+UI. This file enables you and other users to issue `kubectl` commands against the host cluster.
+
+![Image that shows the cluster's kubeconfig file location](/getting-started/vmware/getting-started_deploy-manage-k8s-cluster_kubeconfig.webp)
+
+Open a new terminal window and set the environment variable `KUBECONFIG` to point to the kubeconfig file you downloaded.
+
+```bash
+export KUBECONFIG=~/Downloads/admin.vmware-cluster.kubeconfig
+```
+
+Forward the Kubecost UI to your local network. The Kubecost dashboard is not exposed externally by default, so the
+command below allows you to access it locally on port **9090**. If port **9090** is already in use, specify a different
+local port in the `local-port:9090` format when issuing the command.
+
+```bash
+kubectl port-forward --namespace kubecost deployment/cost-analyzer-cost-analyzer 9090
+```
+
+Open your browser window and navigate to `http://localhost:9090`. The Kubecost UI provides you with a variety of cost
+information about your cluster.
+
+To use Kubecost in VMware vSphere clusters, you must enable the
+[custom pricing](https://docs.kubecost.com/architecture/pricing-sources-matrix#cloud-provider-on-demand-api) option in
+the Kubecost UI and manually set the monthly cluster costs.
+
+Read more about [Navigating the Kubecost UI](https://docs.kubecost.com/using-kubecost/navigating-the-kubecost-ui) to
+make the most of the cost analyzer pack.
+
+![Image that shows the Kubecost UI](/getting-started/vmware/getting-started_deploy-manage-k8s-cluster_kubecost.webp)
+
+Once you are done exploring the Kubecost dashboard, stop the `kubectl port-forward` command by closing the terminal
+window it is executing from.
+
+## Roll Back Cluster Profiles
+
+One of the key advantages of using cluster profile versions is that they make it possible to maintain a copy of
+previously known working states. The ability to roll back to a previously working cluster profile in one action shortens
+the time to recovery in the event of an incident.
+
+The process of rolling back to a previous version using Terraform is similar to the process of applying a new version.
+
+Open the **terraform.tfvars** file, set the `deploy-vmware-kubecost` variable to `false`, and save the file. Once
+applied, the active cluster will use version `1.0.0` of the cluster profile again.
+
+In the terminal window, issue the following command to plan the changes.
+
+```bash
+terraform plan
+```
+
+The output states that the deployed cluster will now use version `1.0.0` of the cluster profile.
+
+```text hideClipboard
+Plan: 0 to add, 1 to change, 0 to destroy.
+```
+
+Issue the `apply` command to deploy the changes.
+
+```bash
+terraform apply -auto-approve
+```
+
+Palette now makes the changes required for the cluster to return to the state specified in version `1.0.0` of your
+cluster profile. Once your changes have completed, Palette marks your layers with the green status indicator.
+
+![Image that shows the cluster using version 1.0.0 of the cluster profile](/getting-started/vmware/getting-started_deploy-manage-k8s-cluster_profile-without-kubecost.webp)
+
+## Cleanup
+
+Follow the steps below to clean up the resources you created for the tutorial. First, use the `destroy` command to
+remove all the resources you created through Terraform.
+
+```shell
+terraform destroy -auto-approve
+```
+
+A successful execution of `terraform destroy` will output the following.
+
+```text hideClipboard
+Destroy complete! Resources: 6 destroyed.
+```
+
+:::info
+
+If a cluster remains in the delete phase for over 15 minutes, it becomes eligible for force delete. To trigger a force
+delete action, navigate to the cluster’s details page and click on **Settings**. Click on **Force Delete Cluster** to
+delete the cluster. Palette automatically removes clusters stuck in the cluster deletion phase for over 24 hours.
+
+:::
+
+If you are using the tutorial container, type `exit` in your terminal session and press the **Enter** key. Next, issue
+the following command to stop and remove the container.
+
+For Docker:
+
+```shell
+docker stop tutorialContainer && \
+docker rmi --force ghcr.io/spectrocloud/tutorials:1.1.7
+```
+
+For Podman:
+
+```shell
+podman stop tutorialContainer && \
+podman rmi --force ghcr.io/spectrocloud/tutorials:1.1.7
+```
+
+## Wrap-Up
+
+In this tutorial, you learned how to create different versions of a cluster profile using Terraform. You deployed a
+VMware vSphere host cluster and then updated it to use a different version of its cluster profile. Finally, you learned
+how to perform cluster profile rollbacks.
+
+We encourage you to check out the
+[Spectro Cloud Terraform](https://registry.terraform.io/providers/spectrocloud/spectrocloud/latest) provider page to
+learn more about the Palette resources you can deploy using Terraform.
diff --git a/docs/docs-content/getting-started/vmware/setup.md b/docs/docs-content/getting-started/vmware/setup.md
index 15ab5687ac..c51e22ef55 100644
--- a/docs/docs-content/getting-started/vmware/setup.md
+++ b/docs/docs-content/getting-started/vmware/setup.md
@@ -13,9 +13,8 @@ order to authenticate Palette and allow it to deploy host clusters.
## Prerequisites
-The prerequisite steps to getting started with Palette on VMware are as follows.
-
- A Palette account with [tenant admin](../../tenant-settings/tenant-settings.md) access.
+
- A [VMware vSphere](https://docs.vmware.com/en/VMware-vSphere/index.html) user account with the
[required permissions](../../clusters/data-center/vmware/permissions.md).
@@ -29,6 +28,9 @@ Palette needs access to your VMware user account in order to create and manage V
### Create and Upload an SSH Key
+Follow the steps below to create an SSH key using the terminal and upload it to Palette. This step is optional for the
+[Cluster Management with Terraform](./deploy-manage-k8s-cluster-tf.md) tutorial.
+
## Validate
@@ -48,7 +50,7 @@ You can verify your Palette API key is added.
## Next Steps
Now that you set up Palette for use with VMware vSphere, you can start deploying a Private Cloud Gateway (PCG), which is
-the bridge between Palette and you private cloud environment.
+the bridge between Palette and your private infrastructure environment.
To learn how to get started with deploying Kubernetes clusters to VMware virtual machines, we recommend that you
continue to the [Deploy a PCG with Palette CLI](./deploy-pcg.md) tutorial.
diff --git a/docs/docs-content/getting-started/vmware/update-k8s-cluster.md b/docs/docs-content/getting-started/vmware/update-k8s-cluster.md
index 8aea51c677..b28337a51f 100644
--- a/docs/docs-content/getting-started/vmware/update-k8s-cluster.md
+++ b/docs/docs-content/getting-started/vmware/update-k8s-cluster.md
@@ -291,3 +291,6 @@ three-tier application with a REST API backend server.
Cluster profiles provide consistency during the cluster creation process, as well as when maintaining your clusters.
They can be versioned to keep a record of previously working cluster states, giving you visibility when updating or
rolling back workloads across your environments.
+
+We recommend that you continue to the [Cluster Management with Terraform](./deploy-manage-k8s-cluster-tf.md) page to
+learn about how you can use Palette with Terraform.
diff --git a/docs/docs-content/getting-started/vmware/vmware.md b/docs/docs-content/getting-started/vmware/vmware.md
index 5a716edbe3..5dd67d6dbe 100644
--- a/docs/docs-content/getting-started/vmware/vmware.md
+++ b/docs/docs-content/getting-started/vmware/vmware.md
@@ -43,10 +43,10 @@ Once your cluster is deployed, you can update it using cluster profile updates.
relativeURL: "./deploy-k8s-cluster",
},
{
- title: "Deploy Cluster Profile Updates",
- description: "Update your deployed clusters using Palette Cluster Profiles.",
+ title: "Cluster Management with Terraform",
+ description: "Deploy and update a Palette host cluster with Terraform.",
buttonText: "Learn more",
- relativeURL: "./update-k8s-cluster",
+ relativeURL: "./deploy-manage-k8s-cluster-tf",
},
]}
/>
diff --git a/static/assets/docs/images/getting-started/azure/getting-started_deploy-manage-k8s-cluster-tf_create_cluster.webp b/static/assets/docs/images/getting-started/azure/getting-started_deploy-manage-k8s-cluster-tf_create_cluster.webp
new file mode 100644
index 0000000000..70bddcba46
Binary files /dev/null and b/static/assets/docs/images/getting-started/azure/getting-started_deploy-manage-k8s-cluster-tf_create_cluster.webp differ
diff --git a/static/assets/docs/images/getting-started/azure/getting-started_deploy-manage-k8s-cluster-tf_event_log.webp b/static/assets/docs/images/getting-started/azure/getting-started_deploy-manage-k8s-cluster-tf_event_log.webp
new file mode 100644
index 0000000000..f5d7912522
Binary files /dev/null and b/static/assets/docs/images/getting-started/azure/getting-started_deploy-manage-k8s-cluster-tf_event_log.webp differ
diff --git a/static/assets/docs/images/getting-started/azure/getting-started_deploy-manage-k8s-cluster-tf_profile_review.webp b/static/assets/docs/images/getting-started/azure/getting-started_deploy-manage-k8s-cluster-tf_profile_review.webp
new file mode 100644
index 0000000000..10db37ed05
Binary files /dev/null and b/static/assets/docs/images/getting-started/azure/getting-started_deploy-manage-k8s-cluster-tf_profile_review.webp differ
diff --git a/static/assets/docs/images/getting-started/azure/getting-started_deploy-manage-k8s-cluster_hello-universe-w-api.webp b/static/assets/docs/images/getting-started/azure/getting-started_deploy-manage-k8s-cluster_hello-universe-w-api.webp
new file mode 100644
index 0000000000..76381ca2f7
Binary files /dev/null and b/static/assets/docs/images/getting-started/azure/getting-started_deploy-manage-k8s-cluster_hello-universe-w-api.webp differ
diff --git a/static/assets/docs/images/getting-started/azure/getting-started_deploy-manage-k8s-cluster_kubeconfig.webp b/static/assets/docs/images/getting-started/azure/getting-started_deploy-manage-k8s-cluster_kubeconfig.webp
new file mode 100644
index 0000000000..95bbb31f1a
Binary files /dev/null and b/static/assets/docs/images/getting-started/azure/getting-started_deploy-manage-k8s-cluster_kubeconfig.webp differ
diff --git a/static/assets/docs/images/getting-started/azure/getting-started_deploy-manage-k8s-cluster_kubecost.webp b/static/assets/docs/images/getting-started/azure/getting-started_deploy-manage-k8s-cluster_kubecost.webp
new file mode 100644
index 0000000000..4955e70692
Binary files /dev/null and b/static/assets/docs/images/getting-started/azure/getting-started_deploy-manage-k8s-cluster_kubecost.webp differ
diff --git a/static/assets/docs/images/getting-started/azure/getting-started_deploy-manage-k8s-cluster_profile-with-cluster.webp b/static/assets/docs/images/getting-started/azure/getting-started_deploy-manage-k8s-cluster_profile-with-cluster.webp
new file mode 100644
index 0000000000..5dedb53ecc
Binary files /dev/null and b/static/assets/docs/images/getting-started/azure/getting-started_deploy-manage-k8s-cluster_profile-with-cluster.webp differ
diff --git a/static/assets/docs/images/getting-started/azure/getting-started_deploy-manage-k8s-cluster_profile-with-kubecost.webp b/static/assets/docs/images/getting-started/azure/getting-started_deploy-manage-k8s-cluster_profile-with-kubecost.webp
new file mode 100644
index 0000000000..fc03c92ed4
Binary files /dev/null and b/static/assets/docs/images/getting-started/azure/getting-started_deploy-manage-k8s-cluster_profile-with-kubecost.webp differ
diff --git a/static/assets/docs/images/getting-started/azure/getting-started_deploy-manage-k8s-cluster_profile-without-kubecost.webp b/static/assets/docs/images/getting-started/azure/getting-started_deploy-manage-k8s-cluster_profile-without-kubecost.webp
new file mode 100644
index 0000000000..086877387e
Binary files /dev/null and b/static/assets/docs/images/getting-started/azure/getting-started_deploy-manage-k8s-cluster_profile-without-kubecost.webp differ
diff --git a/static/assets/docs/images/getting-started/azure/getting-started_deploy-manage-k8s-cluster_reconciliation.webp b/static/assets/docs/images/getting-started/azure/getting-started_deploy-manage-k8s-cluster_reconciliation.webp
new file mode 100644
index 0000000000..36d6fa3506
Binary files /dev/null and b/static/assets/docs/images/getting-started/azure/getting-started_deploy-manage-k8s-cluster_reconciliation.webp differ
diff --git a/static/assets/docs/images/getting-started/gcp/getting-started_deploy-manage-k8s-cluster-tf_create_cluster.webp b/static/assets/docs/images/getting-started/gcp/getting-started_deploy-manage-k8s-cluster-tf_create_cluster.webp
new file mode 100644
index 0000000000..ed094c1606
Binary files /dev/null and b/static/assets/docs/images/getting-started/gcp/getting-started_deploy-manage-k8s-cluster-tf_create_cluster.webp differ
diff --git a/static/assets/docs/images/getting-started/gcp/getting-started_deploy-manage-k8s-cluster-tf_event_log.webp b/static/assets/docs/images/getting-started/gcp/getting-started_deploy-manage-k8s-cluster-tf_event_log.webp
new file mode 100644
index 0000000000..8ee2af297e
Binary files /dev/null and b/static/assets/docs/images/getting-started/gcp/getting-started_deploy-manage-k8s-cluster-tf_event_log.webp differ
diff --git a/static/assets/docs/images/getting-started/gcp/getting-started_deploy-manage-k8s-cluster-tf_profile_review.webp b/static/assets/docs/images/getting-started/gcp/getting-started_deploy-manage-k8s-cluster-tf_profile_review.webp
new file mode 100644
index 0000000000..5fde92b5e9
Binary files /dev/null and b/static/assets/docs/images/getting-started/gcp/getting-started_deploy-manage-k8s-cluster-tf_profile_review.webp differ
diff --git a/static/assets/docs/images/getting-started/gcp/getting-started_deploy-manage-k8s-cluster_hello-universe-w-api.webp b/static/assets/docs/images/getting-started/gcp/getting-started_deploy-manage-k8s-cluster_hello-universe-w-api.webp
new file mode 100644
index 0000000000..76381ca2f7
Binary files /dev/null and b/static/assets/docs/images/getting-started/gcp/getting-started_deploy-manage-k8s-cluster_hello-universe-w-api.webp differ
diff --git a/static/assets/docs/images/getting-started/gcp/getting-started_deploy-manage-k8s-cluster_kubeconfig.webp b/static/assets/docs/images/getting-started/gcp/getting-started_deploy-manage-k8s-cluster_kubeconfig.webp
new file mode 100644
index 0000000000..7af3ae8aba
Binary files /dev/null and b/static/assets/docs/images/getting-started/gcp/getting-started_deploy-manage-k8s-cluster_kubeconfig.webp differ
diff --git a/static/assets/docs/images/getting-started/gcp/getting-started_deploy-manage-k8s-cluster_kubecost.webp b/static/assets/docs/images/getting-started/gcp/getting-started_deploy-manage-k8s-cluster_kubecost.webp
new file mode 100644
index 0000000000..86d1378bdd
Binary files /dev/null and b/static/assets/docs/images/getting-started/gcp/getting-started_deploy-manage-k8s-cluster_kubecost.webp differ
diff --git a/static/assets/docs/images/getting-started/gcp/getting-started_deploy-manage-k8s-cluster_profile-with-cluster.webp b/static/assets/docs/images/getting-started/gcp/getting-started_deploy-manage-k8s-cluster_profile-with-cluster.webp
new file mode 100644
index 0000000000..31852355f3
Binary files /dev/null and b/static/assets/docs/images/getting-started/gcp/getting-started_deploy-manage-k8s-cluster_profile-with-cluster.webp differ
diff --git a/static/assets/docs/images/getting-started/gcp/getting-started_deploy-manage-k8s-cluster_profile-with-kubecost.webp b/static/assets/docs/images/getting-started/gcp/getting-started_deploy-manage-k8s-cluster_profile-with-kubecost.webp
new file mode 100644
index 0000000000..ffe842ad0a
Binary files /dev/null and b/static/assets/docs/images/getting-started/gcp/getting-started_deploy-manage-k8s-cluster_profile-with-kubecost.webp differ
diff --git a/static/assets/docs/images/getting-started/gcp/getting-started_deploy-manage-k8s-cluster_profile-without-kubecost.webp b/static/assets/docs/images/getting-started/gcp/getting-started_deploy-manage-k8s-cluster_profile-without-kubecost.webp
new file mode 100644
index 0000000000..b87603d8b5
Binary files /dev/null and b/static/assets/docs/images/getting-started/gcp/getting-started_deploy-manage-k8s-cluster_profile-without-kubecost.webp differ
diff --git a/static/assets/docs/images/getting-started/gcp/getting-started_deploy-manage-k8s-cluster_reconciliation.webp b/static/assets/docs/images/getting-started/gcp/getting-started_deploy-manage-k8s-cluster_reconciliation.webp
new file mode 100644
index 0000000000..41d674121d
Binary files /dev/null and b/static/assets/docs/images/getting-started/gcp/getting-started_deploy-manage-k8s-cluster_reconciliation.webp differ
diff --git a/static/assets/docs/images/getting-started/vmware/getting-started_deploy-manage-k8s-cluster-tf_create_cluster.webp b/static/assets/docs/images/getting-started/vmware/getting-started_deploy-manage-k8s-cluster-tf_create_cluster.webp
new file mode 100644
index 0000000000..18e42a37a5
Binary files /dev/null and b/static/assets/docs/images/getting-started/vmware/getting-started_deploy-manage-k8s-cluster-tf_create_cluster.webp differ
diff --git a/static/assets/docs/images/getting-started/vmware/getting-started_deploy-manage-k8s-cluster-tf_event_log.webp b/static/assets/docs/images/getting-started/vmware/getting-started_deploy-manage-k8s-cluster-tf_event_log.webp
new file mode 100644
index 0000000000..1af5809d39
Binary files /dev/null and b/static/assets/docs/images/getting-started/vmware/getting-started_deploy-manage-k8s-cluster-tf_event_log.webp differ
diff --git a/static/assets/docs/images/getting-started/vmware/getting-started_deploy-manage-k8s-cluster-tf_profile_review.webp b/static/assets/docs/images/getting-started/vmware/getting-started_deploy-manage-k8s-cluster-tf_profile_review.webp
new file mode 100644
index 0000000000..d9d32fc8d1
Binary files /dev/null and b/static/assets/docs/images/getting-started/vmware/getting-started_deploy-manage-k8s-cluster-tf_profile_review.webp differ
diff --git a/static/assets/docs/images/getting-started/vmware/getting-started_deploy-manage-k8s-cluster_hello-universe-w-api.webp b/static/assets/docs/images/getting-started/vmware/getting-started_deploy-manage-k8s-cluster_hello-universe-w-api.webp
new file mode 100644
index 0000000000..76381ca2f7
Binary files /dev/null and b/static/assets/docs/images/getting-started/vmware/getting-started_deploy-manage-k8s-cluster_hello-universe-w-api.webp differ
diff --git a/static/assets/docs/images/getting-started/vmware/getting-started_deploy-manage-k8s-cluster_kubeconfig.webp b/static/assets/docs/images/getting-started/vmware/getting-started_deploy-manage-k8s-cluster_kubeconfig.webp
new file mode 100644
index 0000000000..0fba0ab22b
Binary files /dev/null and b/static/assets/docs/images/getting-started/vmware/getting-started_deploy-manage-k8s-cluster_kubeconfig.webp differ
diff --git a/static/assets/docs/images/getting-started/vmware/getting-started_deploy-manage-k8s-cluster_kubecost.webp b/static/assets/docs/images/getting-started/vmware/getting-started_deploy-manage-k8s-cluster_kubecost.webp
new file mode 100644
index 0000000000..d9226d7950
Binary files /dev/null and b/static/assets/docs/images/getting-started/vmware/getting-started_deploy-manage-k8s-cluster_kubecost.webp differ
diff --git a/static/assets/docs/images/getting-started/vmware/getting-started_deploy-manage-k8s-cluster_profile-with-cluster.webp b/static/assets/docs/images/getting-started/vmware/getting-started_deploy-manage-k8s-cluster_profile-with-cluster.webp
new file mode 100644
index 0000000000..6d1ec023b3
Binary files /dev/null and b/static/assets/docs/images/getting-started/vmware/getting-started_deploy-manage-k8s-cluster_profile-with-cluster.webp differ
diff --git a/static/assets/docs/images/getting-started/vmware/getting-started_deploy-manage-k8s-cluster_profile-with-kubecost.webp b/static/assets/docs/images/getting-started/vmware/getting-started_deploy-manage-k8s-cluster_profile-with-kubecost.webp
new file mode 100644
index 0000000000..4c6748c11c
Binary files /dev/null and b/static/assets/docs/images/getting-started/vmware/getting-started_deploy-manage-k8s-cluster_profile-with-kubecost.webp differ
diff --git a/static/assets/docs/images/getting-started/vmware/getting-started_deploy-manage-k8s-cluster_profile-without-kubecost.webp b/static/assets/docs/images/getting-started/vmware/getting-started_deploy-manage-k8s-cluster_profile-without-kubecost.webp
new file mode 100644
index 0000000000..253497fcc6
Binary files /dev/null and b/static/assets/docs/images/getting-started/vmware/getting-started_deploy-manage-k8s-cluster_profile-without-kubecost.webp differ
diff --git a/static/assets/docs/images/getting-started/vmware/getting-started_deploy-manage-k8s-cluster_reconciliation.webp b/static/assets/docs/images/getting-started/vmware/getting-started_deploy-manage-k8s-cluster_reconciliation.webp
new file mode 100644
index 0000000000..5be0326d61
Binary files /dev/null and b/static/assets/docs/images/getting-started/vmware/getting-started_deploy-manage-k8s-cluster_reconciliation.webp differ