Thanks for the feedback. I see I didn't quite understand k8s networking
properly (and had my cluster misconfigured as a result).
I now have it configured as:
--cluster-cidr=10.240.0.0/12
--service-cluster-ip-range=10.128.0.0/16
And I'm deducing that the /12 in the cluster-cidr is what would then
allow this cluster to go beyond 256 nodes.
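If I'm doing the math right (assuming the default /24 pod subnet per
node, which I believe is both flannel's and kube-controller-manager's
default), the capacity works out roughly like this:

# pod CIDR 10.240.0.0/12, one /24 per node => 2^(24-12) node subnets
$ echo $(( 2 ** (24 - 12) ))
4096
# service range 10.128.0.0/16 => 2^(32-16) - 2 usable ClusterIPs
$ echo $(( 2 ** (32 - 16) - 2 ))
65534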
One other point about the networking that I'm a little confused about
and would like to clarify: it seems that IPs in the cluster-cidr range
(i.e., the pod/endpoint IPs) are reachable from any host that is on the
flannel network, while IPs in the service-cluster-ip-range (i.e., the
services' ClusterIPs) are only reachable from the worker nodes in the
cluster.
So, for example, I have a k8s setup with 4 machines: a master, 2 worker
nodes, and a "driver" machine. All 4 machines are on the flannel
network. I have an nginx service defined like so:
$ kubectl get svc nginx; kubectl get ep nginx
NAME      CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
nginx     10.128.105.78   <nodes>       80:30207/TCP   2d
NAME      ENDPOINTS                       AGE
nginx     10.240.14.5:80,10.240.27.2:80   2d
Now "curl 10.128.105.78" only succeeds on the 2 worker node machines,
while "curl 10.240.14.5" succeeds on all 4.
I'm guessing this is expected / makes sense, since 10.240.0.0/12
addresses are real pod IPs that are routable from any machine on the
flannel network, whereas 10.128.0.0/16 addresses are virtual IPs that
can only be reached via iptables rules - i.e., only accessible on
machines running kube-proxy, a.k.a. the worker nodes.
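One way I think I can sanity-check that (assuming kube-proxy is running
in its iptables mode) is to compare the routing table and the nat table
on a worker node vs. on the driver machine:

# the flannel route for the /12 should exist on all 4 machines
$ ip route | grep 10.240
# the service VIP should only show up in kube-proxy's NAT rules,
# i.e. only on the worker nodes; on the driver machine I'd expect
# the KUBE-SERVICES chain not to exist at all
$ sudo iptables -t nat -S KUBE-SERVICES | grep 10.128.105.78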
Again, I guess this makes sense in retrospect. But I was a bit
surprised when I first saw it, as I had thought that services' cluster
IPs would be reachable from all machines. (Or at least from the master
too.)
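(If I did want them reachable from the master, I'm assuming the fix
would simply be to run kube-proxy there as well, pointed at the
apiserver - something like the below, where the kubeconfig path is just
a placeholder:)

# hypothetical - run kube-proxy on the master so it programs the same
# service NAT rules there; kubeconfig path is a placeholder
$ kube-proxy --kubeconfig=/var/lib/kube-proxy/kubeconfig --proxy-mode=iptables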
Perhaps you could confirm that I'm understanding this all correctly.
(And have my cluster configured correctly?)
Thanks,
DR
On 2017-08-11 11:26 am, Matthias Rampke wrote:
> Oh hold on. the _service cluster IP range_ is not for pod IPs at all.
> It's for the ClusterIP of services, so you can have up to 64k services
> in a cluster at the default setting. The range for pods is the
> --cluster-cidr flag on kube-controller-manager.
>
> On Fri, Aug 11, 2017 at 3:05 PM David Rosenstrauch <dar...@darose.net>
> wrote:
>
>> Actually, that begs another question. The docs also specify that k8s
>> can support up to 5000 nodes. But I'm not clear on how the networking
>> can support that.
>>
>> So let's go back to that service-cluster-ip-range with the /16 CIDR.
>> That only supports a maximum of 256 nodes.
>>
>> Now the maximum size for the service-cluster-ip-range appears to be
>> /12 - e.g., --service-cluster-ip-range=10.240.0.0/12 (Beyond that
>> 10.254.0.0/16)