I have a Vagrantfile that provisions three AlmaLinux 8 boxes via libvirt and uses the Ansible provisioner to apply this role.
I have no agent nodes, so I'm not tainting the server nodes. Provisioning a single-node cluster worked without issue, but I run into problems when specifying multiple server nodes in my Ansible inventory.
In High Availability mode:
I run into the following error on the task Create keepalived config file:

An exception occurred during task execution. To see the full traceback, use -vvv.
The error was: ansible.errors.AnsibleUndefinedVariable: 'ansible.vars.hostvars.HostVarsVars object' has no attribute 'ansible_default_ipv4'

I was able to get past this issue by changing line 48 of ansible-role-rke2/templates/keepalived.conf.j2 (as of commit dc6d426) to {{ hostvars[host].ansible_host }}.
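That line is where the template looks up each server's address from hostvars. As a rough sketch of the kind of change involved, assuming the template builds keepalived's unicast_peer list by looping over the server group (the masters group name and the surrounding block are illustrative here, not the role's exact template):

    # before: relies on a gathered fact that may not exist for the other hosts
    # {{ hostvars[host].ansible_default_ipv4.address }}

    unicast_peer {
    {% for host in groups['masters'] if host != inventory_hostname %}
        {{ hostvars[host].ansible_host }}
    {% endfor %}
    }

The difference is that ansible_host comes straight from the inventory, so it is available even when facts for the other nodes have not been gathered (for example when Vagrant's Ansible provisioner runs against one machine at a time), whereas ansible_default_ipv4 only exists after facts have been collected from that host.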
I know this is an ooooold issue, but in case anyone else ends up here: I had this same problem.
In my case it came down to the etcd instances binding to the NAT network interface on my guests, which meant the etcd nodes could not reach each other on that network.
I tried playing with the etcd configuration settings (https://etcd.io/docs/v3.4/op-guide/configuration/), such as --listen-peer-urls, but never found a combination that really worked. I ended up giving up on Vagrant for multi-control-plane testing and went with Multipass instead, since I was using Ubuntu servers anyway.
Even on Windows with Hyper-V, that worked really well.
Take a look at the etcd configuration and the logs from the etcd pods/containers, and check whether any of the etcd URLs in use are 10.0.2.15, which seems to be the default NAT eth0 interface address on all the Vagrant guests I've been working with. If that is the case, then this is likely your issue.
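As a rough illustration of that check and one commonly suggested remedy, using the default RKE2 paths (the file locations assume an unmodified install and the 192.168.x.x address is only an example; node-ip is a regular RKE2 server option, not something specific to this role):

    # on a server node: look for 10.0.2.15 in the generated etcd config and the service logs
    sudo grep -i url /var/lib/rancher/rke2/server/db/etcd/config
    sudo journalctl -u rke2-server | grep -i etcd | grep '10.0.2.15'

    # possible remedy: pin the node to the private (non-NAT) interface address
    # in /etc/rancher/rke2/config.yaml before starting rke2-server
    node-ip: 192.168.56.11

Pinning node-ip to the host-only/private network address is the usual way to get the node, and with it the embedded etcd, to advertise that address instead of the shared NAT one.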
Issue Type
Bug Report
Ansible Version
Steps to Reproduce
Have libvirt set up and the vagrant-libvirt plugin installed, along with Vagrant, Ansible, and this role.
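For reference, the plugin and the role can usually be pulled in like this (the Galaxy name assumes the role is installed from Ansible Galaxy rather than vendored; check the role's README):

    vagrant plugin install vagrant-libvirt
    ansible-galaxy install lablabs.rke2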
Below are the three files needed when running vagrant up (an illustrative sketch of the inventory and playbook follows this list):
Vagrantfile
playbooks/provision.yml
inventory/hosts.ini
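The reporter's actual files are not shown here; purely as an illustration of the setup described (three AlmaLinux 8 server nodes, no agents), an inventory and playbook might look roughly like this. The host names, addresses, group name, role name, and HA variables are assumptions to be checked against the role's README, not values taken from the report:

    # inventory/hosts.ini (illustrative)
    [masters]
    server-0 ansible_host=192.168.124.11
    server-1 ansible_host=192.168.124.12
    server-2 ansible_host=192.168.124.13

    # playbooks/provision.yml (illustrative)
    - hosts: all
      become: true
      gather_facts: true
      vars:
        rke2_ha_mode: true            # enable HA handling in the role (name per the role's README)
        rke2_api_ip: 192.168.124.10   # virtual IP for the API server, illustrative
      roles:
        - role: lablabs.rke2          # name depends on how the role was installed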
Expected Results
For three server nodes to be provisioned after running vagrant up.
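Once provisioning succeeds, this can be confirmed from any of the servers using the default RKE2 paths (assuming an unmodified install):

    sudo /var/lib/rancher/rke2/bin/kubectl --kubeconfig /etc/rancher/rke2/rke2.yaml get nodes
    # should list all three servers with control-plane/etcd roles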
Actual Results
All servers fail to provision with the rke2 Ansible role.