Hi Navi,
thanks for your reply.
oVirt is a KVM-based hypervisor; it's the upstream version of
Red Hat Virtualization.
I'm trying to deploy the hyperconverged solution (two how-tos here:
https://www.ovirt.org/blog/2017/04/up-and-running-with-ovirt-4.1-and-gluster-storage/ and
https://access.redhat.com/documentation/en-us/red_hat_hyperconverged_infrastructure/1.0/html-single/deploying_red_hat_hyperconverged_infrastructure/index ), which needs three nodes to create a high-availability cluster. The Engine (software that plays the same role as VMware's vCenter) controls all the hosts and must run as a VM that can move between the hosts without losing connectivity. To achieve this, oVirt creates a network bridge on each host's management NIC, so the vNIC of the VM running the Engine can move between hosts while keeping the same IP address. During setup I can choose the MAC address of the Engine's vNIC; I tried both a random one and the next free MAC after the ones generated by GCE.
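Roughly, what oVirt (vdsm) sets up on each host is equivalent to the sketch below. This is only illustrative: vdsm does all of this itself during deployment, and the interface names (ovirtmgmt, vnet0) and host1's addresses are from my setup:

```shell
# Illustrative only: vdsm creates the management bridge itself.
# Needs root; eth0 / 172.18.1.210 are host1's management NIC and IP.
ip link add name ovirtmgmt type bridge
ip link set eth0 master ovirtmgmt          # enslave the physical NIC
ip addr add 172.18.1.210/24 dev ovirtmgmt  # the host IP moves to the bridge
ip link set ovirtmgmt up
# When the Engine VM starts on this host, libvirt attaches its tap
# device (e.g. vnet0) to the same bridge, so the VM keeps its own
# MAC/IP no matter which host it currently runs on.
```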
My issue is that the Engine VM's IP is pingable only from the host running the VM, not from the other hosts on the same network. I hit the very same issue the first time I tried this setup in a bare-metal nested environment and, thanks to the oVirt community, I was able to resolve it by enabling MAC spoofing on all the hosts' NICs used for the oVirt management network. That is why I think I need to be able to enable MAC spoofing on the NICs generated by GCE.
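For reference, on the nested bare-metal setup the fix boiled down to removing the anti-MAC-spoofing filter from each outer VM's interface definition in libvirt. A rough sketch (the filter name below is libvirt's common 'clean-traffic' default; the filter actually in use on your outer hypervisor may differ):

```shell
# On the outer hypervisor, edit each nested host's domain XML ...
virsh edit host1
# ... and drop the anti-spoofing filter from its <interface> section:
#   <filterref filter='clean-traffic'/>
# so frames carrying the Engine VM's MAC are no longer dropped.
```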
For hosted-engine to work, it needs a pingable gateway, all the hosts and the Engine VM itself must have resolvable FQDNs, and the Engine VM's IP has to be on the same subnet as the hosts.
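These prerequisites are easy to check up front; a minimal pre-flight script I run on each host (the gateway address and hostnames are from my setup, and the Engine FQDN is a placeholder):

```shell
#!/bin/sh
# Pre-flight check for hosted-engine: gateway reachable, FQDNs resolvable.
GW=172.18.1.2                   # my NAT instance acting as gateway
ENGINE_FQDN=engine.example.lan  # placeholder: use your real Engine FQDN
ping -c1 -W2 "$GW" >/dev/null 2>&1 && echo "gateway ok" || echo "gateway UNREACHABLE"
for h in host1 host2 host3 "$ENGINE_FQDN"; do
    getent hosts "$h" >/dev/null && echo "$h resolves" || echo "$h DOES NOT resolve"
done
```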
Going into the details, this is my environment on GCE:
- management subnet: 172.18.1.0/24
- storage subnet: 172.18.2.0/24
- gateway: an instance working as a NAT with LAN address 172.18.1.2 (because GCE's gateways are not pingable)
- host 1: an instance created with "--guest-os-features MULTI_IP_SUBNET" and two NICs: eth0 172.18.1.210 for management and eth1 172.18.2.210 for storage
- host 2: same as host 1 but with eth0 172.18.1.220 and eth1 172.18.2.220
- host 3: eth0 172.18.1.230 and eth1 172.18.2.230
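If it helps, the hosts were created more or less like this (the image, disk, and subnet names below are placeholders for my actual GCE resources; MULTI_IP_SUBNET is set on the image, and the two NICs on the instance):

```shell
# Placeholders: "ovirt-node-image", "base-disk", "mgmt", "storage".
gcloud compute images create ovirt-node-image \
    --source-disk=base-disk \
    --guest-os-features=MULTI_IP_SUBNET
gcloud compute instances create host1 \
    --image=ovirt-node-image \
    --network-interface subnet=mgmt,private-network-ip=172.18.1.210 \
    --network-interface subnet=storage,private-network-ip=172.18.2.210
```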
You can find attached two files named "before" and "after" with the network-interface details and ifconfig output of host 1 before and after the deployment.
I hope my English was good enough to explain myself.
Thanks for your time,
Matteo