Fixing the weird netmasks on Google Compute Engine


Péter Szilágyi

Nov 2, 2014, 3:33:29 PM
to projec...@googlegroups.com
Hi all,

  [This is an excerpt from an off-list conversation that I'm including here to report a problem on Google Compute Engine and the fix just released on the development branch on GitHub.]

  Some time ago Google decided to swap out the real netmasks on GCE instances for /32 ones. I do not know the exact reason, but supposedly it allows them to ensure that all guest OSes use proper network routing paths, since that is the only possible one. Of course, Iris completely blew up with a netmask of /32, since it had no address space left to probe for peers.
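
  To illustrate why a /32 is fatal for peer probing, here is a tiny Go sketch (just an illustration, not Iris code) counting how many other addresses a scan could cover for a given CIDR:

    package main

    import (
    	"fmt"
    	"net"
    )

    // addressesToProbe returns the number of other host addresses a peer
    // scan could cover for a given interface address in CIDR form.
    func addressesToProbe(cidr string) (int, error) {
    	_, ipnet, err := net.ParseCIDR(cidr)
    	if err != nil {
    		return 0, err
    	}
    	ones, bits := ipnet.Mask.Size()
    	total := 1 << uint(bits-ones) // all addresses in the block
    	if total <= 1 {
    		return 0, nil // a /32 contains only our own address
    	}
    	return total - 1, nil // exclude our own address
    }

    func main() {
    	for _, cidr := range []string{"10.240.0.2/16", "10.240.0.2/32"} {
    		n, _ := addressesToProbe(cidr)
    		fmt.Printf("%s -> %d peer addresses to probe\n", cidr, n)
    	}
    }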

  I've uploaded a fix to GitHub which does a handful of things. First, it tries to detect whether it is running on top of GCE by contacting Google's internal metadata service. If not, everything works as before. If it is on GCE, it fetches its own instance/network ID from the metadata server and then issues an authenticated service request to the GCE API to fetch the real network IP range (the metadata server only reports the network name, no details). Up to this point everything's nice and dandy.
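
  For the curious, the detection and network-name lookup amount to roughly the following (a simplified Go sketch, not the exact code on GitHub; the paths are the standard GCE metadata endpoints, error handling is trimmed and the authenticated Compute API call for the IP range is omitted):

    package main

    import (
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    const metadataHost = "http://metadata.google.internal/computeMetadata/v1"

    // onGCE reports whether the process appears to run on Google Compute
    // Engine by probing the internal metadata service, which only resolves
    // and answers from inside GCE.
    func onGCE() bool {
    	client := &http.Client{Timeout: time.Second}
    	req, err := http.NewRequest("GET", metadataHost+"/", nil)
    	if err != nil {
    		return false
    	}
    	req.Header.Set("Metadata-Flavor", "Google") // required by the metadata server
    	res, err := client.Do(req)
    	if err != nil {
    		return false
    	}
    	res.Body.Close()
    	return res.Header.Get("Metadata-Flavor") == "Google"
    }

    // networkName fetches the name of the network the first interface is
    // attached to; the metadata server only reports the name, not the range.
    func networkName() (string, error) {
    	client := &http.Client{Timeout: time.Second}
    	req, _ := http.NewRequest("GET", metadataHost+"/instance/network-interfaces/0/network", nil)
    	req.Header.Set("Metadata-Flavor", "Google")
    	res, err := client.Do(req)
    	if err != nil {
    		return "", err
    	}
    	defer res.Body.Close()
    	name, err := io.ReadAll(res.Body)
    	return string(name), err
    }

    func main() {
    	if !onGCE() {
    		fmt.Println("not on GCE, using the old code path")
    		return
    	}
    	name, err := networkName()
    	if err != nil {
    		panic(err)
    	}
    	// Looking up the actual IP range needs an authenticated GCE API call.
    	fmt.Println("attached to network:", name)
    }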

  By default, a VM launched on GCE does not have permission to read the cloud configuration, in which case the retrieval fails with an "insufficient permissions" error. To salvage the situation even without proper permissions, Iris will in such cases fall back to the default network mask associated with its IP address. This could lead to a huge address space to probe, but given the CoreOS addition, that is no problem. In the better case, when the user grants the instance at least read permissions (compute read) during its launch, the API access succeeds and Iris gets the actual, real netmask and uses that.
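
  The fallback is essentially the classful default mask that Go's standard library already provides; something along these lines (again a sketch, the address is only an example):

    package main

    import (
    	"fmt"
    	"net"
    )

    // fallbackNetwork derives a probe network for an address when the GCE API
    // is not accessible, by falling back to the classful default mask of the IP.
    func fallbackNetwork(addr string) (*net.IPNet, error) {
    	ip := net.ParseIP(addr)
    	if ip == nil {
    		return nil, fmt.Errorf("invalid IP address: %s", addr)
    	}
    	mask := ip.DefaultMask() // classful default: /8, /16 or /24
    	if mask == nil {
    		return nil, fmt.Errorf("no default mask for %s", addr)
    	}
    	return &net.IPNet{IP: ip.Mask(mask), Mask: mask}, nil
    }

    func main() {
    	// A GCE internal address like 10.240.0.2 falls into class A, so the
    	// fallback network is 10.0.0.0/8: huge, but still scannable.
    	ipnet, err := fallbackNetwork("10.240.0.2")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("fallback network:", ipnet)
    }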

I still need to track down another issue reported by James and finish up the bootstrap upgrade, but I'll try to release these fixes as soon as I can.

Cheers,
  Peter 