I have terraform provisioning a private VPC in AWS.
This has a front bastion server which acts as the SSH gateway to the private instances.
The actual creation of the instances works fine, but we run into problems when trying to run provisioners. Because the SSH port on the private instances is not open publicly, we currently upload shell scripts to the bastion and then run the provisioner there (it SSHs to the relevant private instance and runs some commands).
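As a rough sketch, the workaround described above looks something like the following (resource names, the script path, and the connection details are placeholders, not our exact code):

```
# Hypothetical sketch of the two-hop provisioning workaround.
resource "null_resource" "provision_private" {
  # The bastion is the only host reachable over SSH from outside.
  connection {
    host        = "${aws_instance.bastion.public_ip}"
    user        = "ubuntu"
    private_key = "${file("~/.ssh/ec2.pem")}"
  }

  # 1. Upload the script that should ultimately run on the private instance.
  provisioner "file" {
    source      = "provision.sh"
    destination = "/tmp/provision.sh"
  }

  # 2. From the bastion, hop on to the private instance and run it.
  provisioner "remote-exec" {
    inline = [
      "scp /tmp/provision.sh ubuntu@${aws_instance.private.private_ip}:/tmp/",
      "ssh ubuntu@${aws_instance.private.private_ip} 'bash /tmp/provision.sh'",
    ]
  }
}
```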
This is similar to what is described in these posts -
Additionally, we cannot use things such as the user_data field (unless I am doing something wrong) to provision with cloud-init scripts. The actual provisioning itself is also a bit cumbersome (upload scripts to the bastion, then run a provisioner that SSHs to the bastion, which runs a script that SSHs again to the private instance and runs some commands), and it is not particularly clear when you read the Terraform code itself.
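For reference, this is the kind of thing we would like to do directly instead (the AMI, subnet reference, and cloud-init file name are placeholders):

```
resource "aws_instance" "private" {
  ami           = "ami-xxxxxxxx"
  instance_type = "t2.micro"
  subnet_id     = "${aws_subnet.private.id}"

  # Ideally cloud-init would handle provisioning at boot,
  # with no SSH hops through the bastion at all.
  user_data = "${file("cloud-init.yml")}"
}
```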
I'm wondering if anyone else is dealing with these issues in a nicer way than this?
If we were using pure SSH here, we could set up a ProxyCommand in our SSH config file, e.g.:
Host bastion
    User ubuntu
    IdentityFile ~/.ssh/ec2.pem

Host private_instance
    User ubuntu
    ProxyCommand ssh bastion nc %h %p
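With that config in place, the standard OpenSSH tools would tunnel through the bastion transparently (assuming the host aliases above resolve on the bastion):

```
# One hop from the workstation, despite the instance being private:
ssh private_instance

# scp works the same way, which would make script upload trivial:
scp provision.sh private_instance:/tmp/
```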
This would mean we could access the private instance directly from our local machine via the bastion (via ssh private_instance), though this is not currently supported by Terraform. Anyone have any other ideas how to make this a bit easier?