
Wrapping ansible, abstracting configuration, and making things even more useful #139

umeboshi2 opened this issue Dec 1, 2015 · 7 comments


@umeboshi2

This issue is intended to continue the discussion here: https://github.com/debops/ansible-dnsmasq/pull/20#issuecomment-160771596

I read the wiki page and liked the idea

What I really liked about the DebOps approach is the wrapping of Ansible: with only a few system requirements, it can be used in a virtualenv, with everything needed to configure an inventory contained entirely in one directory. The scripts seem too tied to the default DebOps playbooks and roles, though, and I wanted to take the wrapping idea further and abstract how the playbooks and roles are cloned. I started a project called Demosthenes to wrap Ansible in a similar way using a demos command. So far I have mostly been rewriting parts of DebOps and ripping out Apple and Windows controller support, since I have no need for it. I'm in a bit of a hurry, and need to convert all my salt states and formulae into roles and playbooks, using sensible roles already in the Galaxy where appropriate.

Ansible really solves a couple of long-standing problems I have experienced with salt. However, with salt, since the minions were continuously running root processes subscribed to a service, the issue of privilege escalation never had to be confronted. I am very uncomfortable with lines like these in /etc/sudoers.d:

root@pokey:~# cat /etc/sudoers.d/admins 
Defaults: %admins env_check += "SSH_CLIENT"
%admins ALL = (ALL:ALL) NOPASSWD: SETENV: ALL

I would rather have a script in /usr/local/bin that adds the above file at the beginning of a playbook run, then removes it on completion or error. Alternatively, using the --ask-sudo-pass option (I might be dyslexic here) in a local config would be very useful. I don't have the same user password on every machine, and being able to log in remotely and become root without a password is not something I really want.
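A minimal sketch of that first idea (the drop-in path, the %admins group, and the wrapper layout are my assumptions, not existing DebOps or Demosthenes code):

```shell
#!/bin/sh
# Hypothetical wrapper: install a passwordless-sudo drop-in for %admins,
# run the wrapped command (e.g. ansible-playbook), then remove the
# drop-in again, whether the command succeeds or fails.
SUDOERS_FILE="${SUDOERS_FILE:-/etc/sudoers.d/ansible-run}"

grant_sudo() {
    printf '%%admins ALL = (ALL:ALL) NOPASSWD: SETENV: ALL\n' > "$SUDOERS_FILE"
    chmod 0440 "$SUDOERS_FILE"   # sudo refuses world-writable drop-ins
}

revoke_sudo() {
    rm -f "$SUDOERS_FILE"
}

with_temp_sudo() {
    # the trap makes cleanup run even when the wrapped command errors out
    trap revoke_sudo EXIT INT TERM
    grant_sudo
    "$@"
}
```

Usage would be something like with_temp_sudo ansible-playbook site.yml, run with root privileges on the target.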

When designing my salt states, I also followed the principle that there was basically no state after the bootstrap. If a machine was already bootstrapped, it could be included in the inventory and nothing would happen to it until variables were set that matched the machine. The default state was a meta-state from which you could include just pieces, while still being able to configure other services and states. I couldn't find a way to separate (common/core) (I don't remember which) into pieces.

I ramble on, but many years ago I created, and have kept going, a fully automated network install system, paella. The original code is on SourceForge, but I moved to BerliOS since they provided Subversion support early. I have been doing a lot of netboot installs over the years, and I have a Pyramid webserver that serves the preseeds from a Mako template. The preseed bootstraps salt, and an initscript starts a state run on reboot, then removes the boot script. I made a video that may be a bit boring, but it will give you an idea of what I've been working with.

Anyway, I've decided to use Ansible instead of salt, and the DebOps way of wrapping Ansible and using a local configuration is really nice. I also think that a separate Ansible wrapper that is a bit more agnostic about how things are laid out would be really great. If there is an easy way I'm missing to do some of this with how things currently exist in DebOps, please let me know. Also, I named Demosthenes after the famous orator from Greece, by way of the fictional Valentine Wiggin, who actually used an ansible.

@umeboshi2
Author

Also, the scripts are modules in the scripts directory of the demosthenes package, using setuptools entry_points configuration to generate the scripts on build/install. I never import *. I am mostly PEP 8 compliant, but not when it compromises the ability to read or edit (whichever takes precedence). If you like the idea of abstracting the wrapper and using a config file (which can be generated with the defaults already in use), please take charge and do it. I will happily delete my repo and get involved.

@drybjed
Member

drybjed commented Dec 1, 2015

First of all, welcome. :-) I'm all for the idea of making the DebOps scripts more modular and more independent from the default playbooks, so perhaps joining forces will be a better idea in the long run. Your demosthenes project might be an interesting take on this. I'm not a Python developer, so I'll defer to my colleagues for opinions on how to integrate your ideas into the project.

Honestly, I have never once used virtualenv, rvm, or other such wrappers. I consider myself an old-school sysadmin, so I prefer my tools in $PATH at all times. The idea of running additional commands to enable an environment never really appealed to me. I get that it's kind of like chroot, but I don't use that either. I guess it's more of a developer thing, for trying stuff in different environments easily, and not suited for production use. But if it works for you, it's fine I suppose.

As for the Mac OS X and Windows support, it looks like it will have to stay. :-) I want the DebOps scripts to be usable anywhere Ansible is, and there are users who prefer their Ansible Controller machines to be MacBooks. But perhaps it could be modularized somehow and enabled per platform as needed?

I don't really use roles from Ansible Galaxy, and from the tales of those who do and did, it seems that roles from Galaxy can rarely be used together without tweaking them one way or another. This was and still is one of the reasons for the DebOps roles: they are designed to work together from the start, and on the flip side it's rare that a role from the project will work fine without some of the others. It's a blessing and a curse, I suppose. There's a balance between modularity, extensibility, and the amount of control you can get over a system like this.

Salt has an advantage over Ansible when it comes to authorization: it already runs on the remote host with privileged rights, so that's a non-issue as long as you work as root and don't need to switch privileges to a particular user for some reason, say, to use that user's environment. Ansible, on the other hand, sends Python scripts to remote hosts, getting access from the outside. Of course, there's nothing stopping you from using SSH to log in to the root account directly, bypassing the need for sudo. In fact, DebOps sets up the Ansible Controller SSH keys on both the root and the admin account, so you are free to use either one if you want. However, going through an admin account gives you at least some form of accountability in the logs, where you can check which user account ran a given change through sudo.

At the moment DebOps doesn't really give you good sudo management capabilities, I'm afraid. There's no specific role to manage that (well, you can check debops.auth, but it's limited right now), and besides, the issue strongly depends on the particular environment DebOps is used in.

The current NOPASSWD: ALL status quo has two sides. One is allowing a given admin account to run all commands through sudo, and with Ansible that cannot be avoided: privileged commands are executed not directly but through Python scripts, so instead of sudo ls / you get something like sudo /home/user/.ansible/tmp/ansible-python-script-xxxx-yyyy, which makes reliably restricting the allowed commands pointless. Perhaps that will change in the future.

The other aspect is NOPASSWD itself, i.e. the need to authorize said privileged access. Entering your password is a classic choice on standalone hosts, but we are talking about the management of hundreds or thousands of machines here. Right now Ansible lets you specify a sudo password on the command line, and you can certainly use that with DebOps if you wish, but the password needs to be the same on all machines in a given Ansible run, which might be a security issue in itself.

Centralized password management using LDAP might be a solution, with configuration that locks an account after a set number of failed login attempts, but that might be prone to denial of service attacks, where someone triggers the account lock on purpose. Then again, we are talking about remote authorization here, so perhaps solutions like Kerberos are a better option for this. I want to add Kerberos support at some point, when other needed services like DNS are present. Soon(TM).

A script that adds and removes sudo access around Ansible runs is not a solution either. Think about it: how do you trigger that access? Probably in /etc/sudoers you have something like:

%admins ALL = (ALL:ALL) NOPASSWD: /usr/local/sbin/open-sudo-for-me

This script would add/remove full sudo access for the admins group, as you suggest. But when someone gains access to the admin account and runs sudo -l, they will easily see the command you specified, run it, and gain full sudo access. Ah, so require a password for the script, you say? Then we are back to square one, with either a shared password on all machines or a painful Ansible operation. So it's not really a solution.

As for the bootstrapping idea: at the moment, when you use the default DebOps playbooks, hosts need to be added to the [debops_all_hosts] Ansible group in the inventory to be managed by the common playbooks, and each non-common service has its own Ansible group. This solution isn't really elegant, but it has worked quite nicely so far. Of course, if you use your own playbooks, you are free to design the environment as you wish.
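For illustration, an inventory following that convention might look like this (the hostnames and the service group name are made-up examples, not DebOps defaults):

```ini
# Hosts listed here are managed by the DebOps common playbooks.
[debops_all_hosts]
web1.example.org
db1.example.org

# Each non-common service gets its own group; only its members run it.
[debops_service_nginx]
web1.example.org
```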

As for network installs, I just went with a DHCP server directing newly booted hosts to a PXE server, which boots the Debian Installer. The debops.ipxe role has a custom menu that lets you specify a Debian preseed script to use with the D-I to easily configure a base system, which can then be configured entirely through the Ansible/DebOps combo.

@umeboshi2
Author

Virtual environments are also good for production. The main benefit is that they can be reproduced and run on systems that have only a basic Python install. They are also great for testing different combinations of requirements. If you have never used virtualenvs, you should apt-get install virtualenvwrapper and try it; it's a really great tool. Once you get used to it, you'll probably want to create a devpi-server role to hold a local PyPI cache.
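A minimal sketch of that workflow (paths are examples, and this uses the stock venv module rather than virtualenvwrapper):

```shell
# Create an isolated environment on a machine that has only basic Python,
# then install Ansible into it without touching system packages.
python3 -m venv "$HOME/.venvs/ansible"
. "$HOME/.venvs/ansible/bin/activate"
pip install ansible      # installs into the venv only
ansible --version        # runs the venv's copy while activated
```

Deactivating (or opening a new shell) returns you to the system Python untouched, which is what makes the environment reproducible elsewhere.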

I don't have a problem with Apple and Windows support; it was just extra stuff in my way at the moment. I've been building the scripts a little differently, including them in the package instead of a bin directory. I just wanted to stick with the wrapping and local config idea and get something running quickly. It honestly looked easier to do that and add things on, rather than dig through all of the common roles and figure out how to selectively disable/tweak them.

As for sudo, perhaps I'll use a solution like this for some of my machines.

I use pxelinux for my network boot environment. For preseeded installs, there is a special config at $tftpd/pxelinux.cfg/$uuid/bootmenu (or similar), which is created/removed by an API call to the webserver. The preseed URL is also an API call, like /api/preseed/{uuid}, which is basically a template. If you are interested in using the preseeds on bare metal, you may want to look into how I've been handling the disk recipes. I use a prep function so I can edit a recipe in a regular text editor (I've been using Ace for this). Adding support for comments would be straightforward, but I haven't bothered to do that yet.

Interestingly, I only started using the official Debian Installer and preseeds for netboot installs very recently. For many years before that, I used a live system that ran a debootstrap and chroot install, which could do many things that can't be done with the Debian Installer. Today, I have to wait for a reboot before doing the real configuration.

@drybjed
Member

drybjed commented Dec 2, 2015

@umeboshi2 As long as we're talking, can you tell me more about why you decided to switch from SaltStack to Ansible? I haven't used SaltStack much, but I have a general idea how it's structured, and I wonder what would motivate a switch to Ansible from it. Would you see yourself using SaltStack for some specific things along with Ansible?

@umeboshi2
Author

It's interesting. When I went looking, I found that this issue just got recent attention: saltstack/salt#19924. (I can't seem to get GitHub to tell me which issues I am subscribed to.) There is a big problem with the inability of pillar variables to reference other pillar variables, and the workarounds are ugly. The issue finally seems close to being closed, but the ugly Jinja workaround isn't really that great of a solution. There is also the issue of storing files in the pillar, which would be an easier way to handle keys and certs without having to inline them in YAML. That is a really old issue; here is a newer version: saltstack/salt#9569.

I also ran into a situation where I found I couldn't use the yaml_mako renderer to render the .sls files and was forced to use the yaml_jinja renderer: saltstack/salt#18775 (comment). That bug is still open, and has a lot to do with how the salt master manages files with the gitfs backend.

I also have a friend who wanted me to try out Ansible, back when there was no ZeroMQ transport (which I still haven't tested, but I have some machines behind firewalls) and no support for managing Windows machines. I used to work with him a long time ago: we had a network of diskless machines with TV cards, camcorders, barcode printers, and scanners, which were used to sell ladies' clothing on eBay. Our system was efficient enough to make us the top-volume seller in the world, since we could put auctions together very quickly. Anyway, he moved away, and he now works for Ansible, so I figured, why not?

Salt development is also moving in a different direction. I have already refused to implement the formulae as separate git repos, since file lookup during a state run can take very, very long; I found things worked better by sticking all the formulae in one repo. There is also a workflow benefit that I am just now recognizing, due to the decentralized nature of the controller. Rendering on the controller means that I don't need to worry about "minion" access to the configuration. I may need to get data from the "minion" (what is the Ansible term for a targeted host?) before completing a playbook, but there doesn't have to be a very strict state tree/pillar tree separation. With salt, EVERY minion has full and complete access to all of the files in the state tree, including unrendered templates, etc. I don't have to worry about this with salt. I just send out what is necessary.

@ypid
Member

ypid commented Aug 27, 2016

@umeboshi2 Just to clarify your last statement:

With salt, EVERY minion has full and complete access to all of the files in the state tree, including unrendered templates, etc. I don't have to worry about this with salt. I just send out what is necessary.

I guess you mean "I don't have to worry about this with Ansible."

@umeboshi2
Author

Yes, that's what it's supposed to say. Sorry about that.
