Introduce lifecycle configuration to allow proper resource recreation #14

Merged

Conversation

szilveszter (Member) commented Nov 8, 2017

Unfortunately the current setup still has the issue that prevents us from replacing the API-GW resources when the Swagger file changes.

Fortunately someone else has chimed in with their solution, and this seemed to work during my tests:
hashicorp/terraform#6613 (comment)

Edit: After talking to @ryandub, and doing some more testing, it seems that some lifecycle configuration solves the original issue, and it looks cleaner than the intermediate stage approach.

Also changed the output of the module to reflect custom domains.

(My test case was only about changing the description variable of the API Gateway module, which affects the Swagger file.)
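A rough sketch of the output change for custom domains might look like the following; the resource and output names here are assumptions for illustration, not the module's actual code:

```hcl
# Hypothetical sketch only; the module's real resource and output names may differ.
# Exposes the custom domain and its CloudFront target so consumers can create DNS records.
output "custom_domain_name" {
  value = "${aws_api_gateway_domain_name.custom_domain.domain_name}"
}

output "custom_domain_cloudfront_domain_name" {
  value = "${aws_api_gateway_domain_name.custom_domain.cloudfront_domain_name}"
}
```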

Let's use this instead of the intermediate step; it seems much cleaner.
@szilveszter szilveszter changed the title Introduce intermediary stage to allow proper resource recreation Introduce lifecycle configuration to allow proper resource recreation Nov 9, 2017
ryandub (Contributor) commented Nov 9, 2017

For further context: looking at @szilveszter's error, it appeared that Terraform was "destroying" the deployment resource and then creating a new one. From the provider code, that process disassociates the stage from the deployment and then deletes the deployment (before creating the new one). However, @szilveszter was occasionally getting a 400 when the stage had not fully disassociated before the attempt to delete the deployment, hence the error message "BadRequestException: Active stages pointing to this deployment must be moved or deleted". Adding create_before_destroy here should cause a new deployment to be created and the stage assigned to that new deployment; the delete on the old deployment resource should then not have an issue.
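A minimal sketch of that lifecycle configuration on the deployment resource, assuming typical resource and variable names rather than the exact module code:

```hcl
# Sketch only; resource and variable names are assumptions.
# With create_before_destroy, Terraform creates the replacement deployment and
# points the stage at it before deleting the old deployment, avoiding the
# "Active stages pointing to this deployment must be moved or deleted" error.
resource "aws_api_gateway_deployment" "deployment" {
  rest_api_id = "${aws_api_gateway_rest_api.api.id}"
  stage_name  = "${var.stage_name}"

  lifecycle {
    create_before_destroy = true
  }
}
```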

@szilveszter szilveszter merged commit eebfc4a into rackerlabs:master Nov 9, 2017