
enable habitat supervisor service before start #17225

Closed
wants to merge 57 commits

Conversation

smartb-pair

Fixes #17224

Enable Habitat Supervisor before starting the service. We think this should be the default behavior.

Signed-off-by: Arash Zandi <arash.zandi@smartb.eu>
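The proposed ordering can be sketched as a small helper (a hypothetical sketch, not the provisioner's actual code; `hab-supervisor` is the provisioner's default unit name):

```go
package main

import "fmt"

// supervisorCommands returns the remote commands in the proposed order:
// enable the unit first so the supervisor survives a reboot, then start it.
func supervisorCommands(unit string) []string {
	return []string{
		fmt.Sprintf("systemctl enable %s", unit),
		fmt.Sprintf("systemctl start %s", unit),
	}
}

func main() {
	for _, c := range supervisorCommands("hab-supervisor") {
		fmt.Println(c)
	}
}
```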
@nsdavidson
Contributor

👍 This makes total sense to me, and it was an oversight on my part that this was not in the initial implementation.

@rwc
Contributor

rwc commented Jan 30, 2018

Fixes #17224

@smartb-pair
Author

As this is the first time we've looked at Terraform code, we'd appreciate some guidance here on properly handling tests. @nsdavidson, perhaps you can help?

bflad and others added 24 commits February 16, 2018 14:33
Destroy-time provisioners require us to re-evaluate during destroy.

Rather than destroying local values, which doesn't do much since they
aren't persisted to state, we always evaluate them regardless of the
type of apply. Since the destroy-time local node is no longer a
"destroy" operation, the order of evaluation needs to be reversed. Take
the existing DestroyValueReferenceTransformer and change it to reverse
the outgoing edges, rather than the incoming edges. This makes it so that
any dependencies of a local or output node are destroyed after
evaluation.

Having locals evaluated during destroy failed one other test, but that
was the odd case where we need `id` to exist as an attribute as well as
a field.
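The edge-reversal idea can be illustrated on a simplified graph (the `edge` type below is a stand-in, not Terraform's actual `dag` package):

```go
package main

import "fmt"

// edge is a simplified stand-in for an edge in Terraform's dependency graph.
type edge struct{ src, dst string }

// reverseOutgoing flips every edge leaving node, so that anything the node
// pointed at is ordered after it during the walk; this mirrors the change
// described above for DestroyValueReferenceTransformer.
func reverseOutgoing(edges []edge, node string) []edge {
	out := make([]edge, 0, len(edges))
	for _, e := range edges {
		if e.src == node {
			out = append(out, edge{src: e.dst, dst: e.src})
		} else {
			out = append(out, e)
		}
	}
	return out
}

func main() {
	g := []edge{{"local.x", "aws_instance.a"}, {"aws_instance.a", "aws_vpc.b"}}
	fmt.Println(reverseOutgoing(g, "local.x"))
}
```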
Add a complex destroy provisioner testcase using locals, outputs and
variables.

Add that pesky "id" attribute to the instance states for interpolation.
Always evaluate outputs during destroy, just like we did for locals.
This breaks existing tests, which we will handle separately.

Don't reverse output/local node evaluation order during destroy, as they
are both being evaluated.
Now that outputs are always evaluated, we still need a way to remove
them from state when they are destroyed.

Previously, outputs were removed during destroy from the same
"Applyable" node type that evaluates them. Now that we may need to
both evaluate and remove an output during an apply, we add a new node,
NodeDestroyableOutput.

This new node is added to the graph by the DestroyOutputTransformer,
which makes the new destroy node depend on all descendants of the output
node. This ensures that the output remains in the state as long as
everything which may interpolate the output still exists.
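Collecting "all descendants of the output node" is a plain graph traversal; a minimal sketch with an adjacency list (Terraform's real graph lives in its `dag` package, so names here are illustrative):

```go
package main

import "fmt"

// descendants returns every node reachable from root in the adjacency
// list. A destroy node that depends on all of these is ordered after
// them, which is the property DestroyOutputTransformer needs: the
// output stays in state while anything that might interpolate it exists.
func descendants(adj map[string][]string, root string) []string {
	seen := map[string]bool{}
	var out []string
	queue := []string{root}
	for len(queue) > 0 {
		n := queue[0]
		queue = queue[1:]
		for _, d := range adj[n] {
			if !seen[d] {
				seen[d] = true
				out = append(out, d)
				queue = append(queue, d)
			}
		}
	}
	return out
}

func main() {
	adj := map[string][]string{
		"output.ip":      {"aws_instance.a"},
		"aws_instance.a": {"aws_vpc.b"},
	}
	fmt.Println(descendants(adj, "output.ip"))
}
```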
Using destroy provisioners again for edge cases during destroy.
Since outputs and local nodes are always evaluated, if they reference a
resource from the configuration that isn't in the state, the
interpolation could fail.

Prune any local or output values that have no references in the graph.
The id attribute can be missing during the destroy operation.
While the new destroy-time ordering of outputs and locals should prevent
resources from having their id attributes set to an empty string,
there's no reason to error out if we have the canonical ID field
available.

This still interrogates the attributes map first to retain any previous
behavior, but in the future we should settle on a single ID location.
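The fallback described above amounts to something like the following (hypothetical helper name; the real lookup sits in the interpolation path):

```go
package main

import "fmt"

// resourceID prefers the "id" entry in the attributes map, to retain the
// previous behavior, and falls back to the canonical ID field when the
// attribute is missing or empty (as it can be during destroy).
func resourceID(attrs map[string]string, canonicalID string) string {
	if id, ok := attrs["id"]; ok && id != "" {
		return id
	}
	return canonicalID
}

func main() {
	fmt.Println(resourceID(map[string]string{}, "i-abc123"))
}
```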
github.com/joyent/triton-go replaced a bunch of other dependencies quite
some time ago, but the replaced dependencies were never removed. This
commit removes them from the vendor manifest and the vendor/ directory.
* Website: add PANOS links

* fix typo

* edit
Similar to NodeApplyableOutput, NodeDestroyableOutput nodes also need to
stay in the graph if any ancestor nodes remain.

Use the same GraphNodeTargetDownstream method to keep them from being
pruned, since they are dependent on the output node and all its
descendants.
Better section linking within Module Sources page, and centralize the documentation on Terraform Registry sources.
The plan shutdown test often fails on slow CI hosts, because the plan
completes before the main thread can cancel it. Since attempting to make
the MockProvider concurrent proved too invasive for now, just slow the
test down a bit to help ensure Stop gets called.
* restructure community providers list

* add vRA

* add Gandi provider

* re-organize
Michael Mell and others added 24 commits February 16, 2018 14:33
This change allows the Habitat supervisor service name to be
configurable. Currently it is hard coded to `hab-supervisor`.

Signed-off-by: Nolan Davidson <ndavidson@chef.io>
Currently the provisioner will fail if the `hab` user already exists on
the target system.

This adds a check to see if we need to create the user before trying to
add it.

Fixes hashicorp#17159

Signed-off-by: Nolan Davidson <ndavidson@chef.io>
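The guard boils down to creating the user only when it is absent; a sketch of the remote one-liner (hypothetical; the provisioner's actual script differs, and `hab` is its default user):

```go
package main

import "fmt"

// userAddCommand returns a shell command that creates the user only if
// it does not already exist, avoiding the failure described above when
// the hab user is already present on the target system.
func userAddCommand(user string) string {
	return fmt.Sprintf("id -u %s >/dev/null 2>&1 || useradd %s", user, user)
}

func main() {
	fmt.Println(userAddCommand("hab"))
}
```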
Add `host_key` and `bastion_host_key` fields to the ssh communicator
config for strict host key checking.

Both fields expect the contents of an OpenSSH-formatted public key. This
key can either be the remote host's public key, or the public key of the
CA which signed the remote host certificate.

Support for signed certificates is limited, because the provisioner
usually connects to a remote host by IP address rather than hostname, so
the certificate would need to be signed appropriately. Connecting via
a hostname currently needs to be done through a secondary provisioner,
like one attached to a null_resource.
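As a usage sketch, assuming Terraform 0.11-era interpolation syntax (the host address and key path are placeholders, and the field names come from the description above):

```hcl
resource "null_resource" "example" {
  connection {
    type     = "ssh"
    host     = "203.0.113.10"
    user     = "root"
    host_key = "${file("known_host_key.pub")}"
  }

  provisioner "remote-exec" {
    inline = ["echo connected"]
  }
}
```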
This tests basic known_hosts validation for the ssh communicator.
This checks that we can verify host certificates signed by a CA
Every provisioner that uses communicator implements its own retryFunc.
Take the remote-exec implementation (since it's the most complete) and
put it in the communicator package for each provisioner to use.

Add a public interface `communicator.Fatal`, which can wrap an error to
indicate a fatal error that should not be retried.
It's now in the communicator package
It's now in the communicator package
It's now in the communicator package
It's now in the communicator package
Fix a bug where the last error was not retrieved from errVal.Load
due to an incorrect type assertion.
This will let the retry loop abort when there are errors which aren't
going to ever be corrected.
There's no reason to retry around the execution of remote scripts. We've
already established a connection, so the only thing that could happen here
is to continually retry uploading or executing a script that can't succeed.

This also simplifies the streaming output from the command, which
doesn't need such explicit synchronization. Closing the output pipes is
sufficient to stop the copyOutput functions, and they don't close over
any values that are accessed again after the command executes.
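The pipe-close property the change relies on can be demonstrated in isolation (a self-contained sketch, not the provisioner's code):

```go
package main

import (
	"bytes"
	"fmt"
	"io"
)

// drainPipe writes msg into a pipe and closes it; a goroutine copies the
// read side into a buffer. Closing the write side delivers EOF to
// io.Copy, so the copier exits without any extra synchronization.
func drainPipe(msg string) string {
	pr, pw := io.Pipe()
	var buf bytes.Buffer
	done := make(chan struct{})
	go func() {
		io.Copy(&buf, pr) // returns once pw is closed
		close(done)
	}()
	pw.Write([]byte(msg))
	pw.Close()
	<-done
	return buf.String()
}

func main() {
	fmt.Print(drainPipe("remote output\n"))
}
```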
Add missing link to etcdv3.

Update etcd v2 link to the current v2 README, which highlights the pending
deprecation.
Signed-off-by: Arash Zandi <arash.zandi@smartb.eu>
@jbardin
Member

jbardin commented Apr 4, 2018

Hi @smartb-pair,

I'm not sure what happened to this PR, but I moved the commit over to #17781.
Can you verify if that is all that is required?
Thanks!

@jbardin jbardin closed this Apr 4, 2018
@rwc
Contributor

rwc commented Apr 4, 2018

Hey @jbardin - Looks like some rebase magic happened. #17781 looks good!

@ghost

ghost commented Apr 3, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@ghost ghost locked and limited conversation to collaborators Apr 3, 2020