I recently upgraded Packer and Ansible to the latest versions and am having an SSH connection issue after the upgrade. When using the remote ansible provisioner in AWS, Packer will create the temporary EC2 instance and the temporary SSH key, but then attempt to connect to 127.0.0.1 instead of the newly created EC2 instance. This was working before the Ansible and Packer upgrade. I believe the problem may be with Ansible, since this works if I leave Packer at the latest version and downgrade Ansible to 6.7.0 (or core 2.13.13; their versioning has been confusing lately).
I've also tried specifying a `host_alias` in the Packer build file and using that for `hosts` in the Ansible playbook, but that resulted in the same error. I also tried setting `use_proxy` to `false`; while that did change the error message to show the correct IP instead of 127.0.0.1, it still failed with the same error.
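For reference, the relevant provisioner settings look roughly like this (a minimal sketch; the source name, playbook path, and alias are placeholders, not taken from my actual build file):

```hcl
# Hypothetical excerpt from the Packer build file; names are placeholders.
build {
  sources = ["source.amazon-ebs.example"]

  provisioner "ansible" {
    playbook_file = "./playbook.yml"

    # Alias the target so the playbook's `hosts` can match it.
    host_alias = "packer-target"

    # Disabling the SSH proxy makes Ansible dial the instance's IP
    # directly instead of Packer's local proxy on 127.0.0.1.
    use_proxy = false
  }
}
```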
Reproduction Steps

1. Ensure you're using the latest Ansible, 9.1.0 (core 2.16.2)
2. Run Packer using the ansible provisioner (not ansible-local)
3. Ansible will attempt a connection to 127.0.0.1 instead of the remote host IP

This works properly when using Ansible 6.7.0 (core 2.13.13)
After adding the working log output from Ansible 6.7.0, it looks like the connection to 127.0.0.1 may not be the issue after all, since both log outputs show the same localhost connection being made. So I'm not sure what's going on here; all I know is that it works with an older version of Ansible but not the latest.
Yeah, this was definitely related to the use of `-oHostKeyAlgorithms=+ssh-rsa` and `-oPubkeyAcceptedKeyTypes=+ssh-rsa`. These were added to work around an issue with Ansible and Ubuntu/SSH, but that looks to have been changed or fixed in the latest version of Ansible, so the options are no longer needed and were causing the error I was seeing. I'll close this since it's not an actual bug, but hopefully it helps anyone else who stumbles into this issue.
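For anyone hitting the same thing: the offending configuration looked roughly like the sketch below (hypothetical; the exact argument list and playbook path in the real build file may differ). Removing the two `ssh-rsa` options fixed it.

```hcl
provisioner "ansible" {
  playbook_file = "./playbook.yml"

  # These flags were originally a workaround for older Ansible/OpenSSH
  # combinations that rejected ssh-rsa keys. On Ansible 9.1.0
  # (core 2.16.2) they break the connection -- delete them to fix it.
  extra_arguments = [
    "--ssh-extra-args",
    "-oHostKeyAlgorithms=+ssh-rsa -oPubkeyAcceptedKeyTypes=+ssh-rsa",
  ]
}
```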
Plugin and Packer version
Simplified Packer Buildfile
Packer build file
Ansible configuration file
Simplified Ansible playbook
Operating system and Environment details
Ubuntu Jammy 22.04 amd64
Log Fragments and crash.log files
Broken Packer and Ansible 9.1.0 log output
Working Packer and Ansible 6.7.0 log output