PARKED - switch from add_host to dynamic inventory #804
Conversation
My favorite person ever @liquidat
windows, network, rhel_90 and rhel are all working with dynamic inventory now
also tagging ansible/ansible#69227 in here
make sure workshop_inventory directory is present, failing zuul tests
This will break workshops when VMs fail to tag because of AWS rate limiting (with add_host they still get set up properly). It will also break the ability to provision multiple workshops at once: the second workshop will override "{{playbook_dir}}/workshop_inventory/aws_ec2.yml", which stops you from being able to tear down the first workshop, since teardown relies on that inventory.
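One way to picture the collision: a per-workshop inventory file could be scoped with a filter so two provisioned workshops do not fight over the same aws_ec2.yml. This is only a sketch; the "Workshop" tag name and the ec2_name_prefix / ec2_region variables are illustrative assumptions, not taken from this PR.

# sketch of a templated aws_ec2.yml scoped to a single workshop (assumed tag/variable names)
plugin: aws_ec2
regions:
  - "{{ ec2_region }}"
filters:
  "tag:Workshop": "{{ ec2_name_prefix }}"
hostnames:
  - tag:Name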
this makes sense to me, I have been thinking about how to solve this with Sloane.
the instances get tagged with ec2_tag, not with the inventory. I am confused here.
the time savings from dynamic inventory, from initial testing with 2 students, is 3 minutes and 5 seconds
making provisioner more stable
{% for n in range(1, student_total + 1) %}
student{{ n }}: "'student{{ n }}' == tags.Student"
{% endfor %}
Suggested change: replace the templated per-student loop with a keyed_groups entry:

keyed_groups:
  - key: tags.Student
    prefix: ''
    separator: ''
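With an empty prefix and separator, the aws_ec2 plugin names each group after the bare value of tags.Student, so the groups are addressable in a play just like the ones the Jinja loop generated. A hedged sketch with illustrative play contents, not taken from this repo:

- hosts: student1
  gather_facts: false
  tasks:
    - name: show which tag-derived groups this host landed in
      debug:
        msg: "{{ inventory_hostname }} is in groups {{ group_names }}"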
Currently, the ec2 tagging can fail because of rate limiting. It's a known problem that doesn't affect the running of the workshop (the add_host ensures the hosts are still in the running inventory). They really need to fix the module, as some of the others do, to detect rate limiting and retry. With the move to using a dynamic inventory, though, the playbook won't know about these hosts, since they no longer match the filter. Thus they don't get added to the running inventory and don't get set up. The only people who will notice are the students, when they go to use their lab environment and cannot connect to the nodes. The playbook run itself will pass, because it will set up every host that it "knows" about.
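Until the module handles throttling itself, one hedged workaround is a task-level retry around the tagging call. A sketch only; instance_id, student_number and the retry counts are illustrative, not from this PR:

- name: tag a workshop instance, retrying when AWS throttles the API
  ec2_tag:
    region: "{{ ec2_region }}"
    resource: "{{ instance_id }}"
    tags:
      Student: "student{{ student_number }}"
  register: tag_result
  retries: 5
  delay: 10
  until: tag_result is succeeded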
fix f5 verification
robustness for ansible-galaxy
but we use tags to use the add_host method ->
are you saying the
also removing old templates
fix windows workshop, accidentally removed this template
We do an ec2 call to create the instances (but we apply no tags at creation, not even the same tag we use for count_tag... we should change that!) and then we run those through a loop and apply the ec2_tags. The instances I see that are untagged have zero tags applied to them, so yes, they wouldn't even be picked up by our "ec2_instance_info", so it seems they aren't being set up even today. I wish we could reproduce this reliably; then we could check the hosts and see exactly what is going on. For reference, in Skylight we don't use the ec2_instance_info call at all; we already have all the information we need about the hosts from the ec2 variables, so why do more API calls to get the same data?
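Applying the identifying tag at creation time, as suggested above, might look roughly like this with the ec2 module; the AMI, instance type, key name and tag values here are illustrative assumptions, not copied from the provisioner:

- name: create instances with their identifying tag already applied
  ec2:
    region: "{{ ec2_region }}"
    image: "{{ node_ami }}"
    instance_type: t2.medium
    key_name: "{{ ec2_name_prefix }}-key"
    wait: true
    exact_count: "{{ student_total }}"
    count_tag:
      Workshop: "{{ ec2_name_prefix }}"
    instance_tags:
      Workshop: "{{ ec2_name_prefix }}"
  register: created_instances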
ah, this makes way more sense to me now. I need to noodle on this... we are on the same wavelength now
I will point out that you use the info call because you use the variable in places such as the inventory template, and you use the tags from the variable too, which aren't present in the ec2 variable, since you tag later.
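To illustrate that point about tags: an ec2_instance_info lookup returns each instance with its tags attached, which is what makes the registered variable usable in the inventory template. A sketch only, with an assumed Workshop tag filter:

- name: gather facts about the tagged workshop instances
  ec2_instance_info:
    region: "{{ ec2_region }}"
    filters:
      "tag:Workshop": "{{ ec2_name_prefix }}"
      instance-state-name: running
  register: workshop_nodes

- name: tags are present on each returned instance
  debug:
    msg: "{{ item.tags }}"
  loop: "{{ workshop_nodes.instances }}"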
fixing username with suggestion from @liquidat
this PR is parked; it will wait until we have an import_inventory at least
SUMMARY
this switches from the add_host method to using the aws_ec2 dynamic inventory method
PROBLEM STATEMENT WITH ADD_HOST
there may be an issue with this method, as outlined in this GitHub issue: ansible/ansible#69227
I am going to work on a workaround
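For reference, the add_host pattern this PR moves away from pushes each created node into the in-memory inventory after tagging. A hedged sketch, with illustrative variable and group names rather than the repo's actual ones:

- name: add each provisioned node to the running inventory
  add_host:
    name: "{{ item.public_dns_name }}"
    ansible_host: "{{ item.public_ip_address }}"
    groups:
      - lab_hosts
      - "{{ item.tags.Student | default('untagged') }}"
  loop: "{{ workshop_nodes.instances }}"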
ISSUE TYPE
COMPONENT NAME
ADDITIONAL INFORMATION