Kubernetes, GCP, and IP Aliases


Mark Petrovic

Sep 15, 2017, 7:13:53 PM
to Kubernetes user discussion and Q&A
Hello.

I would have made this shorter if I could.  Sorry.  My context is
Kubernetes, but my immediate questions are around clusters I configure on
Google Compute Engine (GCE).  Someone out there is bound to be in my situation, so I feel
comfortable coming here, having been here a few times over the years.

I am in pre-production research mode for running Kubernetes clusters
on regular GCE VMs.  I know about Google Container Engine, but I'm not ready to take that step.

My work history: I have a couple of years of working with Kubernetes
in various ways, but I'm still far from an expert.

The very recent past:

I've successfully set up a k8s cluster on GCE where the control
plane VMs (master, scheduler, controller, kubelet, kube-proxy)
resided on a GCE custom VPC network 10.0.0.0/24 (I'm avoiding the
regional default networks because I'm in learning mode and I want
that control and to learn from it).  In this k8s cluster, I created
a second VPC "podVPC" 172.16.0.0/16 from which pod IPs are
allocated.  On each node's kubelet, I configure a /24 from the
podVPC for pods.  I know the controller-manager *can* be involved
in pod CIDR management, but I have chosen that it not be: I tell
the kubelet which pod CIDR it can use, via the kubelet param
--pod_cidr, not the controller.  I followed what I call the "cbr0"
model in crafting the cluster config.  The guide I followed is
dated, but I pieced it together.
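
For one node, the hand-assigned range might look something like
this (a minimal sketch; the /24 is illustrative and every other
kubelet flag is omitted):

    # Node 1 gets 172.16.1.0/24 carved out of the 172.16.0.0/16
    # podVPC range; all other kubelet flags omitted here.
    kubelet --pod_cidr=172.16.1.0/24 ...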

In this model, to make pod IPs routable within the cluster you have
to create GCE VPC routes that send each pod /24 through its
respective node.  Did that, and it works fine.  You also need GCE
firewall rules so the control plane members on net-10 can talk to
each other; did that, works fine.
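
Concretely, the plumbing for one node looks roughly like this (a
sketch only; names, zone, and the exact firewall policy are
hypothetical, and the node VM must be created with --can-ip-forward
for the route to work):

    # Route node 1's pod /24 through the node so its pod IPs are
    # reachable cluster-wide.
    gcloud compute routes create pods-node-1 \
        --network=k8s-net \
        --destination-range=172.16.1.0/24 \
        --next-hop-instance=k8s-node-1 \
        --next-hop-instance-zone=us-central1-a

    # Let the net-10 control plane members (and the pod range)
    # reach each other.
    gcloud compute firewall-rules create k8s-allow-internal \
        --network=k8s-net \
        --allow=tcp,udp,icmp \
        --source-ranges=10.0.0.0/24,172.16.0.0/16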

This cluster works as intended.

Now, the problem with this network approach is that if you want to
route pod IPs across a VPN to your corp network via, say, BGP +
Cloud Router, this design won't work, because GCE just won't do
that routing yet.

So, enter GCE IP Aliases: https://cloud.google.com/compute/docs/alias-ip/

The present:

I need those pod IPs routed to my corp network, so I need to evolve my design.

Keep the cluster configuration the same as the cluster above.
Meaning, no changes to the kubelet or controller manager.

However, the GCE VM configs *do* change.  Now you create VMs with
GCE secondary subnet ranges, aka IP aliases.  Out of these per-VM
secondary ranges, you allocate pod IPs.  This means you do not
create a second podVPC as above and manually route pod CIDRs to
their respective nodes.  When you define a secondary range on a
subnet, GCE will set up those routes for you, and announce those
routes over VPN to your corp network.
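
In gcloud terms, that shape is roughly the following (a sketch;
names and ranges are illustrative):

    # Give the subnet a named secondary range for pods, then carve
    # a per-node /24 out of it as an alias range at VM-creation time.
    gcloud compute networks subnets create k8s-subnet \
        --network=k8s-net --region=us-central1 \
        --range=10.0.0.0/24 \
        --secondary-range=pods=172.16.0.0/16

    gcloud compute instances create k8s-node-1 \
        --zone=us-central1-a \
        --network-interface=subnet=k8s-subnet,aliases=pods:172.16.1.0/24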

My first problem: if I bring up a couple of nodes with IP alias
ranges defined on them, without any pods running at all, I can
already ping the addresses where the pods will be allocated.  This
makes me think one of two things: 1) I've read the IP alias docs
carefully but screwed up my VM config anyway, or 2) my node VM
config is correct and nodes are supposed to masquerade as
secondary-range workloads.  And if 2 obtains, when a real pod does
come up, how do I tell the GCE fabric (via some k8s control plane
flag?) to stop masquerading as the pod?

Thanks for reading this far.

Tim Hockin

Sep 18, 2017, 1:29:05 PM
to Kubernetes user discussion and Q&A
I am not clear what doesn't work for you. As far as I know GCP routes
work with *almost* everything else GCP offers (Peering being an
exception, for now). I am pretty convinced that Pods + VPN works.

> So, enter GCE IP Aliases: https://cloud.google.com/compute/docs/alias-ip/
>
> The present:
>
> I need those pod IPs routed to my corp network, so I need to evolve my
> design.
>
> Keep the cluster configuration the same as the cluster above.
> Meaning, no changes to the kubelet or controller manager.
>
> However, the GCE VM configs *do* change.  Now you create VMs with
> GCE secondary subnet ranges, aka IP aliases.  Out of these per-VM
> secondary ranges, you allocate pod IPs.  This means you do not
> create a second podVPC as above and manually route pod CIDRs to
> their respective nodes.  When you define a secondary range on a
> subnet, GCE will set up those routes for you, and announce those
> routes over VPN to your corp network.
>
> My first problem: if I bring up a couple of nodes with IP alias
> ranges defined on them, without any pods running at all, I can
> already ping the addresses where the pods will be allocated.  This
> makes me think one of two things: 1) I've read the IP alias docs
> carefully but screwed up my VM config anyway, or 2) my node VM
> config is correct and nodes are supposed to masquerade as
> secondary-range workloads.  And if 2 obtains, when a real pod does
> come up, how do I tell the GCE fabric (via some k8s control plane
> flag?) to stop masquerading as the pod?

The default GCP VM images assign IP alias ranges to the local vNIC.
You need to turn that off in one of the google daemons, or you can
run our COS image, which already turns it off.
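
One way to see this on a default image is to look at the local
routing table, where the guest daemon installs the alias range
(output is illustrative):

    $ ip route show table local | grep 172.16
    local 172.16.1.0/24 dev eth0 proto 66 scope host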

Mark Petrovic

Sep 18, 2017, 4:21:17 PM
to kubernet...@googlegroups.com
I could not find a way to articulate this in the GCP web UI.  To route the control plane VM VPC and the Pod VPC across VPN, I felt like I was being forced into creating *two* VPNs: one for the control plane and one for the pods, since a VPN on the GCP side can only source one VPC.  
This is new magic to me, but based on your comment I was able to suppress what I call the masquerading by setting ip_forwarding_daemon = false in /etc/default/instance_configs.cfg on the guest (the GCP CentOS 7 image).  Such a host no longer responds to pings to its IP aliases.  Just curious: if this forwarding were left enabled, and a real workload were listening on an alias IP, would the workload respond to a prospective TCP connection, or would the host?  And if the workload responds, how does the host know not to masquerade, given that it seems not to know when I ping a 'workload'?
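
For reference, the stanza in question would look something like this
(assuming the legacy Linux guest environment, where the per-daemon
toggles live under a [Daemons] section):

    # /etc/default/instance_configs.cfg
    [Daemons]
    ip_forwarding_daemon = false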


 

--
Mark

Mark Petrovic

Sep 18, 2017, 5:34:43 PM
to kubernet...@googlegroups.com

Success!

By setting ip_forwarding_daemon = false on my GCP CentOS 7 VMs that host the control plane, I have proved out the entire cluster config, with connectivity where I need it inside the cluster, as well as VM and pod routes announced across the VPN to a corp-like environment so that dev-workstation-like hosts can consume pods.
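
For anyone following along, one way to sanity-check the
advertisement side is to inspect the Cloud Router's status (router
name and region are hypothetical):

    # Shows the BGP sessions and the routes being advertised across
    # the VPN, which should include the pod secondary ranges.
    gcloud compute routers get-status corp-vpn-router \
        --region=us-central1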
 

 

--
Mark




divij.s...@gmail.com

Jan 8, 2018, 5:49:38 AM
to Kubernetes user discussion and Q&A
Are there any problems if I try running a Kubernetes cluster distributed across a number of VMs that are part of a VPN?

If yes, what problems might be headed my way and how can I avoid them?

Although this is a Kubernetes group, if someone has experience doing something similar with Docker in Swarm mode (running containers on VMs in a VPN instead of a Kubernetes cluster), kindly share your two cents.  I am new to Kubernetes and Docker Swarm mode, but I have worked with Docker in the past.