I'm trying to build CentOS templates in CloudStack using Packer.
Below is a config.json that works for Ubuntu using temporary keys:
"builders": [{
  "type": "cloudstack",
  "communicator": "ssh",
  "ssh_username": "{{user `ssh_username`}}",
  "ssh_handshake_attempts": 2,
  "ssh_password": "",
  "api_url": "{{user `api_url`}}",
  "api_key": "{{user `api_key`}}",
  "network": "{{user `network`}}",
  "secret_key": "{{user `secret_key`}}",
  "service_offering": "{{user `service_offering`}}",
  "source_template": "{{user `source_template`}}",
  "template_os": "{{user `template_os`}}",
  "zone": "{{user `zone`}}",
  "expunge": true,
  "public_ip_address": "{{user `public_ip`}}",
  "template_name": "{{user `template_name`}}-{{isotime \"020106-0304\"}}",
  "template_password_enabled": true,
  "template_scalable": true
}],

The part that is failing is SSHing on to the newly built VM:
"handshake error: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain"
I am able to log in to the machine with the password generated by CloudStack (passwordenabled = true), but not with the generated key. When I log in with the password there is no .ssh/ directory at all, so the temporary public key was never written to an authorized_keys file. That explains why SSH fails: Packer attempts public-key authentication, but the key isn't on the VM.
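As I understand it, the builder's temporary key can only work if something on the guest ends up doing the equivalent of the following (the username and key material below are placeholders, not values from my setup):

```shell
# Sketch of what a working flow has to do on the guest before key auth can succeed.
# USER_HOME and the key string are placeholders.
USER_HOME="${USER_HOME:-/home/centos}"

# Create the .ssh directory with the permissions sshd requires
install -d -m 700 "$USER_HOME/.ssh"

# Append the Packer-generated temporary public key
printf '%s\n' "ssh-rsa AAAA...temporary-packer-key" >> "$USER_HOME/.ssh/authorized_keys"
chmod 600 "$USER_HOME/.ssh/authorized_keys"
```

None of this appears to have happened on the CentOS guest, which matches the missing .ssh/ directory I see after logging in with the password.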
However, when using the winrm communicator with Windows VMs, leaving winrm_password blank or omitting it entirely makes it default to the generated password. Is this not possible with the SSH communicator?
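For comparison, this is the kind of winrm fragment I mean; the values are illustrative, and winrm_password is deliberately omitted so Packer falls back to the CloudStack-generated password:

```json
"builders": [{
  "type": "cloudstack",
  "communicator": "winrm",
  "winrm_username": "Administrator",
  "template_password_enabled": true
}]
```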
Or is there a reason the public key isn't being put on the VM in the first place?