On Thu, 2016-02-25 at 10:50 -0800, Mike Spreitzer wrote:
> Dan, regarding the Calico case: remember I used the Calico *libnetwork*
> plugin with my simple CNI plugin; I did not use the Calico CNI plugin.
> It is my CNI plugin that gets the CNI_IFNAME parameter, but the CNI
> plugin can do nothing with that parameter --- `docker network connect`
> does not take an interface name parameter.
Yeah, well, that's a mismatch between the two APIs and you're SOL :(
However, as I suggested, there is room to update/change the CNI spec,
and I think that should probably be done. I'll take an action item to
push that forward.
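
To make the mismatch concrete: the runtime hands a CNI plugin the
interface name through the CNI_IFNAME environment variable, while
`docker network connect NETWORK CONTAINER` only takes options like
--ip and --alias, so a shim that delegates to docker has nowhere to put
it. A rough Go sketch, illustrative only (the "mynet" network name is a
placeholder, not how your plugin is actually written):

    // Illustrative only: a CNI shim that delegates the ADD to the docker CLI.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        ifname := os.Getenv("CNI_IFNAME")           // e.g. "eth0", chosen by the runtime
        containerID := os.Getenv("CNI_CONTAINERID") // container to attach

        // `docker network connect` has no flag that corresponds to
        // CNI_IFNAME, so the requested interface name is dropped here.
        _ = ifname

        // "mynet" stands in for whatever docker network the shim targets.
        if err := exec.Command("docker", "network", "connect", "mynet", containerID).Run(); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }
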
> Regarding network configuration: I think there is still a problematic
> poverty of interface. The CNI plugin needs sufficient parameters to
> decide how to connect the container. Right now the k8s API user (e.g.,
> composer of a pod spec) can convey very little information down to the
> CNI plugin.
Yes. But the problem is that plugins are different and they may need
many different types of information. Most of us here will be creating
fairly complex networking backends that will need a lot of information.
I don't think kubelet should pass down whole stacks of API objects to
the plugins, since there's no end to that and it would be completely
Kubernetes-specific.
Instead, your plugin probably needs to get the information it needs out-
of-band from the apiserver itself, which you can do by including some
of the same client code that kubelet uses and grabbing the objects that
way. We discussed in the last meeting that plugins will likely need
long-running processes to interface with anyway; those processes can
keep this information and mutate it into a form that the backend can
consume internally.
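
For what it's worth, here is a minimal sketch of such a long-running
process, assuming it reuses kubelet's kubeconfig and something like
today's k8s.io/client-go; both the path and the library choice are
assumptions on my part, not anything kubelet hands to plugins today:

    // Sketch of a plugin daemon that watches the apiserver out-of-band
    // and caches what the networking backend needs.
    package main

    import (
        "context"
        "log"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumed path: reuse the same credentials/apiserver address kubelet uses.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/kubelet/kubeconfig")
        if err != nil {
            log.Fatal(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }

        // Watch pods in all namespaces and mutate the events into whatever
        // internal form the backend consumes (not shown here).
        w, err := client.CoreV1().Pods(metav1.NamespaceAll).Watch(context.TODO(), metav1.ListOptions{})
        if err != nil {
            log.Fatal(err)
        }
        for ev := range w.ResultChan() {
            log.Printf("pod event: %s", ev.Type)
        }
    }
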
That said, kubelet could hugely help by passing references to the
authentication methods and apiserver addresses it's using. It seems
pointless to have to specify certificates/addresses in two places
(first in kubelet and second in some plugin-specific configuration).
I honestly don't think this is any different in libnetwork-land. CNI
and libnetwork are just simple ways to configure container networking,
but they don't have anything to do with how the logical network is set
up. For a libnetwork plugin you *still* need to map the docker network
to some construct that Neutron knows about, and that would be the same
thing in Kubernetes with CNI. What you're missing in Kubernetes is a
convenient "network" object, but as explained earlier I'm not sure
that's really appropriate for everyone.
> My next step will probably be to use a distinct Docker network for each
> k8s Namespace. My current main interest is using Neutron tenant network
> = Docker network via the Kuryr libnetwork plugin. Equating Neutron
> network with k8s Namespace has one mildly obscure benefit. In Neutron,
> security groups can isolate IP layer traffic but not other ethernet
> traffic. But I am currently focused on the case where each Neutron
> network is a distinct virtual ethernet, so there will be no non-IP
> traffic between Neutron networks ... and thus Neutron security groups
> will suffice to isolate between k8s Namespaces.
Yes, that's one approach. Unfortunately, with OpenShift we found that
Namespace == Network was too limiting and something customers needed
flexibility on, so we moved away from that model. If it works for you,
that's great though :)
Dan