Conversation
recheck |
Vagrant Cloud is broken at the moment, these CI failures are not because of your changes: hashicorp/vagrant#12390 |
@sio oh... Thanks for the info. I will refrain from sending new PRs or rechecks then. |
I didn't mean to dissuade you from submitting new PRs, I was just letting you know that CI will not be functional for some time due to upstream service provider outage :) It took me quite some time this morning to understand that the problem is not on my end and I wanted to save you the same trouble. |
recheck |
hm. weird. Probably not related to this PR but ...
So the prepare.yml did install python but
Given that the scenario is the default configuration, aka:
I'm not convinced that setting the box to use the test one is a good idea.... |
I've been testing this PR in my local dev workflow.
Without it, I was about to dump molecule-vagrant; it had become too complicated.
With it, it works like a charm. VMs spin up faster. If I need to debug anything or connect to one of them, I just cd to the scenario ephemeral directory and run vagrant ssh server0.
Non-deep code review looks good also.
Thanks! This is a must have. 😃
hm. |
I didn't know it existed 😅 It's working great! (I mean: without the PR) |
In any case, I still think this different approach is a good enhancement. It is more similar to what one would do if not using molecule, and it is faster. One problem I noticed is that, when using this PR, machines were created without a network interface. When using a4bd8b1, they were created with networking. My platforms definition is:

```yaml
platforms:
  - &server
    box: generic/ubuntu2004
    groups:
      - k8s_server
      - k8s_node
    name: server0
  - <<: *server
    name: server1
  - <<: *server
    name: server2
```
|
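As an aside, the `&server` anchor and `<<: *server` merge keys in the platforms list above mean that server1 and server2 inherit box and groups from server0, with only `name` overridden. A minimal sketch of how that expands, assuming PyYAML (which resolves merge keys in `safe_load`):

```python
# Expand the YAML anchors/merge keys from the platforms block above.
# Assumes PyYAML is available (pip install pyyaml).
import yaml

doc = """
platforms:
  - &server
    box: generic/ubuntu2004
    groups:
      - k8s_server
      - k8s_node
    name: server0
  - <<: *server
    name: server1
  - <<: *server
    name: server2
"""

platforms = yaml.safe_load(doc)["platforms"]
for p in platforms:
    # Each entry carries the merged box/groups; `name` is overridden per entry.
    print(p["name"], p["box"], p["groups"])
```

All three entries end up with the same box and groups, which is why only the missing network interfaces distinguish the two behaviors being compared here.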
@yajo Networking should work. Can you share your molecule.yml so that I can try to reproduce and fix it over the week? It would be bad to see this PR merged if there's a known bug... |
Here it is:

```yaml
dependency:
  # name: galaxy
  # options:
  #   role-file: requirements.yaml
  #   requirements-file: requirements.yaml
  name: shell
  command: ansible-galaxy install -r requirements.yaml
driver:
  name: vagrant
  provider:
    name: libvirt
platforms:
  - &server
    box: generic/ubuntu2004
    groups:
      - k8s_server
      - k8s_node
    name: server0
  - <<: *server
    name: server1
  - <<: *server
    name: server2
provisioner:
  name: ansible
  connection_options:
    ansible_become: true
  config_options:
    defaults:
      vault_password_file: ${MOLECULE_PROJECT_DIRECTORY}/.vault_password.txt
  inventory:
    links:
      group_vars: ../../group_vars
verifier:
  name: ansible
```
|
@yajo tried quickly your setup and I can use |
It seems related to using |
The issues I'm experiencing are definitely lower in the stack. Check vagrant-libvirt/vagrant-libvirt#1342 if you're interested. Nothing related to this PR, though. |
Current code only handles one vagrant VM. Adding an 'instances' parameter and a loop in the jinja template is mostly enough. Unfortunately, some extra work was needed.

New options:
- An option called 'cachier' has been added to control vagrant-cachier. From what I understand, it's not possible to configure it at the instance level, so the cachier option has been removed from the instance definition.
- An option called 'default_box' has been added, since it can't be specified when using the instances list directly from molecule.yml.
- An option called 'parallel' has been added to allow setting the VAGRANT_NO_PARALLEL environment variable, since people may want to disable parallel VM creation with multiple instances.

Moreover, some care has been taken to avoid breaking current playbooks. This lets people update theirs; the ones from molecule-vagrant will be updated in a later commit.

Signed-off-by: Arnaud Patard <apatard@hupstream.com>
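Based on this commit message, the three new driver-level options might appear in molecule.yml roughly like this. The option names come from the message above; the values and placement are illustrative assumptions, not verified syntax:

```yaml
driver:
  name: vagrant
  # controls vagrant-cachier globally; no longer settable per instance
  cachier: machine
  # box used for any instance that does not specify its own
  default_box: generic/alpine310
  # false sets VAGRANT_NO_PARALLEL, disabling parallel VM creation
  parallel: false
```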
Now that the vagrant module can take the list of instances directly from molecule, update the playbooks accordingly. This should solve a bunch of issues where the module was overwriting the Vagrantfile while bringing up each VM. Moreover, this should make things easier to debug and more robust. Since the provision option is not VM-specific, the templates now expect it in the molecule.yml driver: dict. Signed-off-by: Arnaud Patard <apatard@hupstream.com>
…ule.yml: fix provision usage The provision parameter should now be specified outside the instances list, so update the molecule configuration accordingly. Signed-off-by: Arnaud Patard <apatard@hupstream.com>
… options The default scenario should really be minimal, so remove extra options. Signed-off-by: Arnaud Patard <apatard@hupstream.com>
Since the vagrant module is, for now, supposed to remain compatible with the old create/destroy playbooks, add a test scenario for that. Signed-off-by: Arnaud Patard <apatard@hupstream.com>
…enario This scenario had some problems:
- the prepare.yml playbook was useless
- there was no verify.yml
- the converge.yml tasks were not very useful, since better testing is provided by verify.yml
- fix the molecule.yml network definition to be provider-agnostic and to not use a .1 IP, since that's usually the IP of the hypervisor with libvirt
- add provider options for libvirt to not use the session connection, since we're not setting the network configuration needed to make such a setup work
- reduce memory usage to 256 MB per box, to avoid needing too much memory

Signed-off-by: Arnaud Patard <apatard@hupstream.com>
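A network definition along the lines this commit describes could look like the sketch below. The values are illustrative assumptions (a .10 host address rather than .1, which libvirt usually assigns to the hypervisor), and the `interfaces` key follows the molecule-vagrant platform format as I understand it:

```yaml
platforms:
  - name: instance
    box: generic/alpine310
    memory: 256
    interfaces:
      # avoid the .1 address, which is typically the hypervisor's with libvirt
      - network_name: private_network
        ip: 192.168.123.10
```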
- add the default and default-compat scenarios
- add the multi-node scenario and ensure that the generated Vagrantfile has both instances in it

Signed-off-by: Arnaud Patard <apatard@hupstream.com>
Update README.rst molecule.yml section according to the changes done in the create.yml/destroy.yml playbooks. Signed-off-by: Arnaud Patard <apatard@hupstream.com>
Force-pushed from 0afb169 to a007b3a.
for more information, see https://pre-commit.ci
… network Avoid using DHCP, as there's a high chance it would use a network forbidden by the default VirtualBox configuration in GitHub Actions. Signed-off-by: Arnaud Patard <apatard@hupstream.com>
Force-pushed from 3afba4b to e6e7024.
This patchset fixes multiplatform support. Until now, when multiple instances are declared, the create.yml playbook loops over the instance list and calls the vagrant module for each one. This leads to one Vagrantfile per instance, overwritten at each step of the loop, which makes debugging harder and, in some cases, may confuse vagrant.
This patchset allows passing the molecule.yml instances list directly to the vagrant module, which then produces a single, proper Vagrantfile.
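To illustrate the result, a Vagrantfile generated for the three-instance platforms list discussed in this PR might look roughly like this. This is a sketch of the shape of the output, not the exact template the module renders:

```ruby
# One Vagrantfile covering all molecule instances, instead of one
# per-instance file that gets overwritten on every loop iteration.
Vagrant.configure("2") do |config|
  ["server0", "server1", "server2"].each do |name|
    config.vm.define name do |node|
      node.vm.box = "generic/ubuntu2004"
      node.vm.hostname = name
    end
  end
end
```

With everything in a single file, commands like `vagrant status` and `vagrant ssh server0` work directly from the scenario's ephemeral directory, which is what makes the debugging workflow described above possible.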