
Refactor hosts files #313

Merged
merged 1 commit into roots:master from the hosts branch on Dec 28, 2015

Conversation

@fullyint
Contributor

Commands

  • Replace -i hosts/<environment> with -e env=<environment> (see the before/after sketch below this list)
  • Dev can just be ansible-playbook dev.yml. You'd need the Vagrant VM in your SSH config until Add ansible-ssh role to generate ssh config #314 is merged.
  • Unchanged: ./deploy.sh <environment> <site>
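
For concreteness, a before/after sketch of the provisioning command (staging is just an example environment; inventory = hosts is the new ansible.cfg default this PR introduces):

# before: the inventory file implies the environment
ansible-playbook server.yml -i hosts/staging

# after: a single hosts inventory, with the environment passed as a variable
ansible-playbook server.yml -e env=staging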

I kept hosts files separate by environment. Alternatively, a single file could look like this:

# Each host machine should appear in the "Environments" section and in the "Types" section.
# Each host must only be listed once per [group], even if it will host multiple sites.


# Environments

[development]
127.0.0.1

[staging]
192.168.50.6

[production]
192.168.50.7


# Types

[web]
127.0.0.1
192.168.50.6
192.168.50.7

[db]

@austinpray
Contributor

Going to try this out tonight

@swalkinshaw
Member

Think I'm onboard with this 👍

@swalkinshaw
Member

@fullyint can you do a PR for doc updates on https://github.com/roots/docs?

We should merge at the same time.

@fullyint
Contributor Author

docs in roots/docs#4

After this PR, commands will use -e env=<environment> instead of -i hosts/<environment>. This PR starts each playbook with a task to "Ensure environment is defined," which gives a helpful message when it isn't. Otherwise, omitting the new -e env would produce the message skipping: no hosts matched, which is less clear. We can remove this task in the future once most users have transitioned to the new command.

- fail:
    msg: "Environment missing. Use `-e` to define `env`:\nansible-playbook server.yml -e env=<environment>"
  when: env is not defined

Member

Could this be separated into its own playbook and included so it's DRY?

@fullyint
Contributor Author

Great idea.

I added a commit to include the new variable-check.yml playbook. It's DRY but also a bit more complex to look at. The messages needed to differ slightly per playbook.

You'll see that I went with yaml:

- include: variable-check.yml
  vars:
    playbook: server.yml

even though it could have been more succinct:

- include: variable-check.yml playbook=server.yml
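
For anyone following along, here's a minimal sketch of what an included variable-check.yml could look like. This is extrapolated from the fail task above, not copied from the PR; the playbook variable is supplied by the including playbook:

---
- hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - name: Ensure environment is defined
      fail:
        msg: "Environment missing. Use `-e` to define `env`:\nansible-playbook {{ playbook }} -e env=<environment>"
      when: env is not defined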

@swalkinshaw
Member

@fullyint this will need an update to windows.sh unfortunately

@fullyint
Contributor Author

@swalkinshaw Re: updating windows.sh, I figure the ansible-playbook command is the only candidate for consideration.

Inventory. This PR removes the -i option everywhere except in windows.sh. When Ansible runs dev.yml within a Vagrant VM on Windows, I think it must use the chmod -x ${TEMP_HOSTS}/* copy made by windows.sh. So I think windows.sh must retain -i to override ansible.cfg's new default of inventory = hosts.
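
A rough sketch of that mechanism, with everything other than ${TEMP_HOSTS} assumed rather than copied from the actual script:

# Vagrant shared folders on Windows mark files executable, and Ansible
# treats executable inventory files as dynamic inventory scripts, so
# windows.sh works from a non-executable copy of the hosts directory.
cp -r hosts "${TEMP_HOSTS}"
chmod -x "${TEMP_HOSTS}"/*

# -i stays here to override ansible.cfg's new inventory = hosts default
ansible-playbook dev.yml -i "${TEMP_HOSTS}"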

env variable. windows.sh runs the dev.yml playbook, which is actually the only playbook that doesn't use the env variable introduced in this PR. Instead, dev.yml specifies the environment group directly:

hosts: web:&development

So, I don't think windows.sh needs to add the -e option for defining env.
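
(For anyone unfamiliar with that pattern: :& is Ansible's group-intersection operator, so the play targets only hosts that appear in both groups. A self-contained illustration, with the play name and task invented for this example:)

- name: Demonstrate group intersection   # name assumed; not from dev.yml
  hosts: web:&development                # only hosts in both [web] and [development]
  tasks:
    - debug:
        msg: "this host is in both the web and development groups"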

I don't notice anything else that might need adjustment in windows.sh. Let me know if I missed anything.

@rposborne

+1

@swalkinshaw
Member

@fullyint squash + changelog entry? Ready to merge after that 👍

@fullyint
Contributor Author

Added changelog entry and squashed

swalkinshaw added a commit that referenced this pull request Dec 28, 2015
@swalkinshaw merged commit 790265f into roots:master on Dec 28, 2015
@fullyint deleted the hosts branch on December 29, 2015 at 19:29
@scherii commented Jan 27, 2016

Please feel free to tell me that I should open a new issue if this doesn't belong here:

With this change we now have to enter the domain name instead of the server's IP.

./deploy.sh alpha alpha.domain.tld

TASK: [Ensure site is valid] **************************************************
failed: [XXX.XXX.XXX.XXX] => {"failed": true}
msg: Site `alpha.domain.tld` is not valid. Available sites to deploy: beta.domain.tld

(Our staging environments are named alpha and beta.)
So our hosts file looks like this:

[beta]
beta.domain.tld

[web]
beta.domain.tld

After replacing the IP with the domain name it works again.

@fullyint
Contributor Author

@scherii

Setting the stage

./deploy.sh <group> <site>

By specifying the group in your command, you select a subset of all the hosts in your hosts directory: specifically, the hosts that are in both the group (or env) you specified and the web group.

The Ensure site is valid task checks that the site in your command is listed in the wordpress_sites applicable to the group you specified.

For example, the default command ./deploy.sh staging example.com checks that example.com is in the staging group's wordpress_sites (i.e., in group_vars/staging/wordpress_sites.yml).
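
For illustration, a stripped-down group_vars/staging/wordpress_sites.yml of the shape that check reads (a real Trellis site entry carries more settings than shown here):

wordpress_sites:
  example.com:
    site_hosts:
      - example.com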

The error you saw

If you ran the command ./deploy.sh alpha alpha.domain.tld and it gave the message Site `alpha.domain.tld` is not valid, that indicates that the alpha group's wordpress_sites didn't include the site name alpha.domain.tld. To try to help, it reports the sites it can find for the alpha group: Available sites to deploy: beta.domain.tld.

(sounds like you've already solved things, but...)
To fix this, there are several ways you could associate a list of wordpress_sites with the alpha group (to be sure alpha.domain.tld is in the list). You could define them in group_vars/alpha/wordpress_sites.yml. Alternatively, you could leave them defined in group_vars/staging/wordpress_sites.yml but adjust your hosts/staging like this:

[alpha]
111.xxx.xxx.xxx

[beta]
122.xxx.xxx.xxx

[web]
111.xxx.xxx.xxx
122.xxx.xxx.xxx

# a group of groups needs the :children suffix
[staging:children]
alpha
beta

There are probably other possible approaches too. Check out hosts and groups in the Ansible docs.

Using IP vs domain

As for IP vs domain, I think you're saying that you used to be able to use IPs in hosts/staging but now it appears you must use domain names.

I think you can still use IPs in your hosts file. Given the example hosts file above, ./deploy.sh alpha example.com would deploy the example.com site to the host 111.xxx.xxx.xxx, and ./deploy.sh beta example.com would deploy to 122.xxx.xxx.xxx.

You might also be able to use those same commands with this simplified hosts file (untested):

alpha ansible_ssh_host=111.xxx.xxx.xxx
beta ansible_ssh_host=122.xxx.xxx.xxx

[staging]
alpha
beta

[web]
alpha
beta

I hope that addresses your point. If you feel there is a bug or a fix is needed, feel free to respond here. If you have questions or would like help troubleshooting, check out the Roots Discourse.
