Hi All,
We run an OpenStack Grizzly environment that uses a config drive, presented to the instance as /dev/vdb, for metadata.
This setup works perfectly for us with all of the major OSes, including Windows.
When we deploy the CoreOS OpenStack image, whether the beta or alpha version, the CoreOS instance does 'see' the config drive and mounts /dev/vdb to /media/configdrive; however, it appears that CoreOS does not apply the user_data presented to it.
For example, the SSH key is not copied to /home/core/.ssh/authorized_keys, the hostname is not set, and so on, so we cannot SSH to our newly created instances.
From another thread I learned how to interrupt the kernel boot and force CoreOS to autologin, so I have been able to observe the following:
# df -h
Filesystem Size Used Avail Use% Mounted on
rootfs 28G 135M 28G 1% /
devtmpfs 237M 0 237M 0% /dev
tmpfs 248M 0 248M 0% /dev/shm
tmpfs 248M 244K 247M 1% /run
tmpfs 248M 0 248M 0% /sys/fs/cgroup
/dev/vda9 28G 135M 28G 1% /
/dev/vda3 1008M 288M 670M 31% /usr
tmpfs 248M 0 248M 0% /media
tmpfs 248M 0 248M 0% /tmp
/dev/vda6 108M 76K 99M 1% /usr/share/oem
/dev/vdb 410K 410K 0 100% /media/configdrive
# ls /media/configdrive/
ec2 openstack
# cat /media/configdrive/openstack/latest/user_data
#cloud-config

coreos:
  etcd:
    # multi-region and multi-cloud deployments need to use $public_ipv4
    addr: $private_ipv4:4001
    peer-addr: $private_ipv4:7001
  units:
    - name: etcd.service
      command: start
    - name: fleet.service
      command: start
ssh_authorized_keys:
  # include one or more SSH public keys
  - ssh-rsa AAAAB3N........
# ls /home/core/.ssh/
# cat /home/core/.ssh/authorized_keys
cat: /home/core/.ssh/authorized_keys: No such file or directory
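In case it helps with debugging, here is what I was planning to try next from the autologin console: applying the user_data by hand. This is only a sketch, assuming coreos-cloudinit on these images supports the --from-file and --from-configdrive flags and that the automatic config drive detection keys off the "config-2" filesystem label.

```shell
# Check whether /dev/vdb carries the "config-2" label that config drive
# detection is (I assume) looking for
blkid /dev/vdb

# Try feeding the user_data file to coreos-cloudinit directly
sudo coreos-cloudinit --from-file=/media/configdrive/openstack/latest/user_data

# Or point it at the whole mounted config drive
sudo coreos-cloudinit --from-configdrive=/media/configdrive
```

If the manual invocation does write /home/core/.ssh/authorized_keys, that would suggest the user_data itself is fine and it is only the automatic detection at boot that is failing.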
Is there something I am doing wrong, or is there anything else I need to be doing?
This is a production cloud and we are really keen to start offering our clients the ability to deploy CoreOS on top of it.
Thanks in advance.
Gavin