
add vpc cluster and workerpool to the dhost example #3878

Merged 2 commits from z0za:dedicatedhost_example into IBM-Cloud:master on Jun 30, 2022

Conversation

z0za (Contributor) commented on Jun 27, 2022

Community Note

  • Please vote on this pull request by adding a 👍 reaction to the original pull request comment to help the community and maintainers prioritize this request
  • Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for pull request followers and do not help prioritize the request

Relates OR Closes #0000
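
For reference, a rough sketch of the kind of configuration this example adds is shown below. The resource names match the plan output that follows; every argument name and value here is an illustrative assumption rather than the exact contents of the merged example, so treat the example files in the repository as authoritative.

resource "ibm_container_dedicated_host_pool" "dhostpool" {
  name         = "example-dhostpool"     # assumed name
  flavor_class = "bx2d"                  # assumed flavor class
  metro        = "dal"                   # assumed metro
}

resource "ibm_container_dedicated_host" "dhost" {
  flavor       = "bx2d.host.152x608"     # assumed dedicated host flavor
  host_pool_id = ibm_container_dedicated_host_pool.dhostpool.id
  zone         = "dal10"                 # assumed zone
}

resource "ibm_container_vpc_cluster" "dhost_vpc_cluster" {
  name         = "example-dhost-cluster" # assumed name
  vpc_id       = var.vpc_id              # hypothetical variable
  flavor       = "bx2d.4x16"             # assumed worker flavor
  worker_count = 1
  host_pool_id = ibm_container_dedicated_host_pool.dhostpool.id

  zones {
    subnet_id = var.subnet_id            # hypothetical variable
    name      = "dal10"
  }
}

resource "ibm_container_vpc_worker_pool" "dhost_vpc_worker_pool" {
  cluster          = ibm_container_vpc_cluster.dhost_vpc_cluster.id
  worker_pool_name = "dhost-worker-pool" # assumed name
  flavor           = "bx2d.4x16"
  vpc_id           = var.vpc_id
  worker_count     = 1
  host_pool_id     = ibm_container_dedicated_host_pool.dhostpool.id

  zones {
    subnet_id = var.subnet_id
    name      = "dal10"
  }
}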

Output from acceptance testing:

$ terraform apply

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # ibm_container_dedicated_host.dhost will be created
  + resource "ibm_container_dedicated_host" "dhost" {
...
    }

  # ibm_container_dedicated_host_pool.dhostpool will be created
  + resource "ibm_container_dedicated_host_pool" "dhostpool" {
...
    }

  # ibm_container_vpc_cluster.dhost_vpc_cluster will be created
  + resource "ibm_container_vpc_cluster" "dhost_vpc_cluster" {
...
    }

  # ibm_container_vpc_worker_pool.dhost_vpc_worker_pool will be created
  + resource "ibm_container_vpc_worker_pool" "dhost_vpc_worker_pool" {
...
    }

Plan: 4 to add, 0 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

ibm_container_dedicated_host_pool.dhostpool: Creating...
ibm_container_dedicated_host_pool.dhostpool: Still creating... [10s elapsed]
ibm_container_dedicated_host_pool.dhostpool: Creation complete after 12s [id=...]
ibm_container_dedicated_host.dhost: Creating...
...
ibm_container_dedicated_host.dhost: Creation complete after 1m52s [id=...]
ibm_container_vpc_cluster.dhost_vpc_cluster: Creating...
...
ibm_container_vpc_cluster.dhost_vpc_cluster: Creation complete after 17m56s [id=...]
ibm_container_vpc_worker_pool.dhost_vpc_worker_pool: Creating...
...
ibm_container_vpc_worker_pool.dhost_vpc_worker_pool: Creation complete after 6m29s [id=...]

Apply complete! Resources: 4 added, 0 changed, 0 destroyed.

$ terraform destroy

ibm_container_dedicated_host_pool.dhostpool: Refreshing state... [id=...]
ibm_container_dedicated_host.dhost: Refreshing state... [id=...]
ibm_container_vpc_cluster.dhost_vpc_cluster: Refreshing state... [id=...]
ibm_container_vpc_worker_pool.dhost_vpc_worker_pool: Refreshing state... [id=...]

Note: Objects have changed outside of Terraform

Terraform detected the following changes made outside of Terraform since the last "terraform apply":

  # ibm_container_dedicated_host.dhost has been changed
  ~ resource "ibm_container_dedicated_host" "dhost" {
...
    }
  # ibm_container_dedicated_host_pool.dhostpool has been changed
  ~ resource "ibm_container_dedicated_host_pool" "dhostpool" {
...
    }
  # ibm_container_vpc_cluster.dhost_vpc_cluster has been changed
  ~ resource "ibm_container_vpc_cluster" "dhost_vpc_cluster" {
...
    }

...

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  - destroy

Terraform will perform the following actions:

  # ibm_container_dedicated_host.dhost will be destroyed
  - resource "ibm_container_dedicated_host" "dhost" {
...
    }

  # ibm_container_vpc_cluster.dhost_vpc_cluster will be destroyed
  - resource "ibm_container_vpc_cluster" "dhost_vpc_cluster" {
...
    }

  # ibm_container_vpc_worker_pool.dhost_vpc_worker_pool will be destroyed
  - resource "ibm_container_vpc_worker_pool" "dhost_vpc_worker_pool" {
...
    }

Plan: 0 to add, 0 to change, 4 to destroy.

Do you really want to destroy all resources?
  Terraform will destroy all your managed infrastructure, as shown above.
  There is no undo. Only 'yes' will be accepted to confirm.
    Enter a value: yes

ibm_container_vpc_worker_pool.dhost_vpc_worker_pool: Destroying... [id=...]
...
ibm_container_vpc_worker_pool.dhost_vpc_worker_pool: Destruction complete after 4m19s
ibm_container_vpc_cluster.dhost_vpc_cluster: Destroying... [id=...]
...
ibm_container_vpc_cluster.dhost_vpc_cluster: Destruction complete after 5m5s
ibm_container_dedicated_host.dhost: Destroying... [id=...]
...
ibm_container_dedicated_host.dhost: Destruction complete after 2m8s
ibm_container_dedicated_host_pool.dhostpool: Destroying... [id=...]
...
ibm_container_dedicated_host_pool.dhostpool: Destruction complete after 1m55s

Destroy complete! Resources: 4 destroyed.

...

z0za marked this pull request as ready for review on June 28, 2022 at 09:26
hkantare merged commit 4e301be into IBM-Cloud:master on Jun 30, 2022
hkantare pushed a commit that referenced this pull request Jul 1, 2022
* add vpc cluster and workerpool to the dhost example

* use subnet,vpc datasource instead ids

Co-authored-by: Zoltan Illes <zoltan.illes@ibm.com>
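
The second commit ("use subnet,vpc datasource instead ids") suggests the example was switched from hard-coded VPC and subnet IDs to data-source lookups. A minimal sketch of that pattern, with the variable names as assumptions:

data "ibm_is_vpc" "vpc" {
  name = var.vpc_name        # hypothetical variable holding the VPC name
}

data "ibm_is_subnet" "subnet" {
  name = var.subnet_name     # hypothetical variable holding the subnet name
}

The cluster and worker pool can then reference data.ibm_is_vpc.vpc.id and data.ibm_is_subnet.subnet.id wherever an ID was previously hard-coded.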
z0za deleted the dedicatedhost_example branch on July 4, 2022 at 07:05
SunithaGudisagarIBM pushed a commit to ibm-vpc/terraform-provider-ibm that referenced this pull request Sep 14, 2022
* add vpc cluster and workerpool to the dhost example

* use subnet,vpc datasource instead ids

Co-authored-by: Zoltan Illes <zoltan.illes@ibm.com>