deprecating kubernetes-cni / clarifying kube/plugin CNI requirements


Dan Winship

Sep 5, 2019, 11:07:03 AM9/5/19
to kubernetes-...@googlegroups.com, kubernetes-...@googlegroups.com, kubernetes-sig-c...@googlegroups.com
TL;DR - sig-release would like to stop building/shipping kubernetes-cni.
We think this can be done without causing problems for third-party
network plugins. kubeadm and other people depending on our rpms/debs
may need to take action; this affects how quickly we can deprecate the
package.


There was a discussion on Slack yesterday
(https://kubernetes.slack.com/archives/C09QYUH5W/p1567615752063400)
about whether Kubernetes itself and/or network plugins are responsible
for shipping the default set of CNI plugins.

Currently we build a kubernetes-cni package that contains all the
default CNI plugins (https://github.com/containernetworking/plugins/),
and our kubeadm package depends on this.

Kubelet itself (and/or the container runtime) depends on the existence
of the CNI "loopback" plugin to correctly configure the "lo" interface
in containers. The docs
(https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/#cni)
note that:

In addition to the CNI plugin specified by the configuration
file, Kubernetes requires the standard CNI lo plugin, at minimum
version 0.2.0.

But it's not clear who this is imposing a requirement on. In practice,
at least some Kubernetes distributions have shipped with no CNI plugins
out of the box, and so it is common for network plugin implementations
to build and install their own copies of any plugins they need (eg, the
"host-local" IPAM plugin) plus the plugins they expect Kubernetes itself
to need ("loopback"). But it was pointed out that this is kind of weird;
if Kubernetes itself needs "loopback", then it shouldn't be relying on
someone else to provide it.
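For concreteness, here is a rough sketch of what that dependency looks like at the node level: the runtime executes the loopback binary via the CNI protocol, passing CNI_* environment variables and a minimal JSON network config on stdin. Paths and the netns name below are illustrative, not anything Kubernetes mandates:

```shell
# Illustrative only: how a runtime exercises the "loopback" plugin via
# the CNI protocol. Assumes the binary is installed in the conventional
# /opt/cni/bin and that /var/run/netns/example exists.
CNI_COMMAND=ADD \
CNI_CONTAINERID=example \
CNI_NETNS=/var/run/netns/example \
CNI_IFNAME=lo \
CNI_PATH=/opt/cni/bin \
/opt/cni/bin/loopback <<'EOF'
{ "cniVersion": "0.2.0", "name": "lo", "type": "loopback" }
EOF
```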

sig-release would like to stop building kubernetes-cni because it's a
huge pain. (https://github.com/kubernetes/kubernetes/pull/78819,
https://github.com/kubernetes/release/pull/731,
https://github.com/kubernetes/kubernetes/issues/75485,
https://github.com/kubernetes/sig-release/issues/245).

OTOH, having the full set of standard CNI plugins available out of the
box may be useful to kubeadm users? It's not clear to what extent any of
the documented kubeadm network plugin options
(https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/#pod-network)
depend on the existence of kubernetes-cni. Some people in the Slack
thread thought that some plugins might be assuming the existence of
"portmap" (for HostPort handling) without installing it themselves.
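(A quick way to see what a given node actually provides, rather than assuming — a sketch using the conventional /opt/cni/bin location, which distributions are free to change:)

```shell
# Sketch: list which standard CNI plugins are already present on a node
# instead of assuming portmap/host-local exist. /opt/cni/bin is the
# conventional location, not something Kubernetes guarantees.
CNI_BIN_DIR=/opt/cni/bin
for p in loopback portmap host-local bridge; do
  if [ -x "$CNI_BIN_DIR/$p" ]; then
    echo "$p: present"
  else
    echo "$p: MISSING"
  fi
done
```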

Although CNI upstream does not provide deb/rpm packages, they do provide
tarballs containing compiled binaries
(https://github.com/containernetworking/plugins/releases), so even if we
don't build kubernetes-cni, we could still pretty easily have our
kubeadm package provide all of the CNI plugins by just pulling them from
there. (And likewise, it's not difficult for other network plugins to
get compiled CNI plugin binaries if they need them.)
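(Roughly, a deployer or packager could pull those upstream binaries like this — the version and architecture here are just examples, and the tarball naming follows the upstream release convention:)

```shell
# Sketch: install an upstream CNI plugin release tarball into the
# conventional /opt/cni/bin. CNI_VERSION and ARCH are illustrative;
# pick an actual release from the upstream releases page.
CNI_VERSION="v0.8.2"
ARCH="amd64"
URL="https://github.com/containernetworking/plugins/releases/download/${CNI_VERSION}/cni-plugins-linux-${ARCH}-${CNI_VERSION}.tgz"
sudo mkdir -p /opt/cni/bin
curl -sSL "$URL" | sudo tar -xz -C /opt/cni/bin
```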


Discussion in the Slack thread eventually came to the conclusion that:

- We need to clarify who is responsible for providing "loopback" for
kubelet, and maybe we should vendor
github.com/containernetworking/plugins and build it ourselves.

- We should clarify in the documentation that other CNI plugins are
not guaranteed to be installed in any particular kubernetes
installation, and so network plugins that depend on standard CNI
plugins (portmap, host-local, etc) need to either install their
own copies, or else they need to require their users to install them
for them. (This is already true in practice, but it's not
documented, and some plugins may not work in environments that don't
provide them with all of the standard plugins.)

- We should stop building kubernetes-cni. Maybe with a GA-like
deprecation policy? So maybe, announce the deprecation in 1.16 but
don't do anything else. Then in 1.17 make kubeadm no longer depend
on kubernetes-cni (but possibly install its own copies of the
plugins, if that's needed). And in 1.18 kill kubernetes-cni.
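(For the "install their own copies" option above, the usual pattern is a node agent or init container that copies the plugin's bundled binaries onto the host. A sketch — all paths here are illustrative, not part of any plugin's actual layout:)

```shell
# Sketch of a network plugin installing its own CNI binaries: copy the
# binaries bundled in the plugin's image into the host plugin directory
# (mounted into the container). All paths are illustrative.
SRC=/usr/local/lib/bundled-cni
DEST=/opt/cni/bin
for p in loopback portmap host-local; do
  install -m 0755 "$SRC/$p" "$DEST/$p"
done
```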

Stephen Augustus

Sep 26, 2019, 5:08:05 PM9/26/19
to Dan Winship, release-...@kubernetes.io, kubernetes-...@googlegroups.com, kubernetes-sig-release, kubernetes-sig-cluster-lifecycle, Kubernetes Release Team

We missed the boat for 1.16, as it was already late in the cycle, but I'd like to run full speed at this for 1.17.

As for course of action, I think we should:

  • (SIG Network) Update user-facing documentation as Dan mentioned
  • other CNI plugins are not guaranteed to be installed in any particular kubernetes installation, and so network plugins that depend on standard CNI plugins (portmap, host-local, etc) need to either install their own copies, or else they need to require their users to install them

  • (SIG Network/Release) Announce the deprecation and start the clock in Kubernetes 1.17
  • (SIG Release) Move the CNI plugins into the kubelet debs/rpms and remove kubeadm's package dependency on kubernetes-cni, starting in Kubernetes 1.17
    • This way we don't break users of kubernetes-cni in future versions
    • Choosing to move it to the kubelet instead of the kubeadm package since we hit a wider swath of consumers this way
  • (SIG Release) Continue also publishing kubernetes-cni debs/rpms until Kubernetes 1.19

Several of the instructions I've [speed]read through (Calico, Weave, Cilium) suggest you BYO or use kubeadm to ensure /opt/cni/bin is configured, so I think this plan is fine.

I've opened an issue for this[1] and will capture the decision there once we make one.

This will also require some minor refactoring of our package building tools.
I've started a PR[2] that's at a good stage for a first pass review if people have bandwidth.

Do we think this is a good plan forward?

-- Stephen

[1] https://github.com/kubernetes/release/issues/885 
[2] https://github.com/kubernetes/release/pull/884


--
You received this message because you are subscribed to the Google Groups "kubernetes-sig-cluster-lifecycle" group.
To unsubscribe from this group and stop receiving emails from it, send an email to kubernetes-sig-cluster...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/kubernetes-sig-cluster-lifecycle/a068d26c-a302-212f-86ed-32f16efcec00%40redhat.com.

Lubomir I. Ivanov

Sep 26, 2019, 5:45:36 PM9/26/19
to Dan Winship, kubernetes-...@googlegroups.com, kubernetes-sig-release, kubernetes-sig-cluster-lifecycle, coo...@vmware.com, Tim St. Clair
adding sig-cluster-lifecycle and some other folks on CC.

On Thu, 5 Sep 2019 at 17:00, Dan Winship <dwin...@redhat.com> wrote:
>
> TL;DR - sig-release would like to stop building/shipping kubernetes-cni.
> We think this can be done without causing problems for third-party
> network plugins. kubeadm and other people depending our our rpms/debs
> may need to take action; this affects how quickly we can deprecate the
> package.
>

the deprecation period should be GA (1 year) at minimum.
this package is likely used in production and we have no statistics of
its usage.

>
> Currently we build a kubernetes-cni package that contains all the
> default CNI plugins (https://github.com/containernetworking/plugins/),
> and our kubeadm package depends on this.
>

that is very true. yet we don't have statistics of how many users it
has outside of kubeadm.

> Kubelet itself (and/or the container runtime) depends on the existence
> of the CNI "loopback" plugin to correctly configure the "lo" interface
> in containers. The docs
> (https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/#cni)
> note that:
>
> In addition to the CNI plugin specified by the configuration
> file, Kubernetes requires the standard CNI lo plugin, at minimum
> version 0.2.0.
>
> But it's not clear who this is imposing a requirement on. In practice,
> at least some Kubernetes distributions have shipped with no CNI plugins
> out of the box, and so it is common for network plugin implementations
> to build and install their own copies of any plugins they need (eg, the
> "host-local" IPAM plugin) plus the plugins they expect Kubernetes itself
> to need ("loopback"). But it was pointed out that this is kind of weird;
> if Kubernetes itself needs "loopback", then it shouldn't be relying on
> someone else to provide it.
>

loopback is a weird decoupling between kubelet and CNI.
also not all popular pod network addons install the CNI plugins they
need, e.g. WeaveNet, which according to the kubeadm survey was the 2nd
most popular pod network plugin last year.

> sig-release would like to stop building kubernetes-cni because it's a
> huge pain. (https://github.com/kubernetes/kubernetes/pull/78819,
> https://github.com/kubernetes/release/pull/731,
> https://github.com/kubernetes/kubernetes/issues/75485,
> https://github.com/kubernetes/sig-release/issues/245).
>
> OTOH, having the full set of standard CNI plugins available out of the
> box may be useful to kubeadm users? It's not clear to what extent any of
> the documented kubeadm network plugin options
> (https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/#pod-network)
> depend on the existence of kubernetes-cni. Some people in the Slack
> thread thought that some plugins might be assuming the existence of
> "portmap" (for HostPort handling) without installing it themselves.
>

reminding again that this is probably not only a kubeadm problem, but
a generic deployer problem.
we have to assume that pod network addons do not install all the CNI
plugins they need.
documenting such artifacts is out of scope for
https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/#pod-network
(not to mention that sig-docs recently proposed to remove this section
completely).

> Although CNI upstream does not provide deb/rpm packages, they do provide
> tarballs containing compiled binaries
> (https://github.com/containernetworking/plugins/releases), so even if we
> don't build kubernetes-cni, we could still pretty easily have our
> kubeadm package provide all of the CNI plugins by just pulling them from
> there. (And likewise, it's not difficult for other network plugins to
> get compiled CNI plugin binaries if they need them.)
>

existing kubernetes-cni users will have to transition to using the
upstream tarballs.
we need to make sure everyone understands that.

>
> Discussion in the Slack thread eventually came to the conclusion that:
>
> - We need to clarify who is responsible for providing "loopback" for
> kubelet, and maybe we should vendor
> github.com/containernetworking/plugins and built it ourselves.
>

i personally think the loopback plugin should be part of the kubelet deb/rpm.

> - We should clarify in the documentation that other CNI plugins are
> not guaranteed to be installed in any particular kubernetes
> installation, and so network plugins that depend on standard CNI
> plugins (portmap, host-local, etc) need to either install their
> own copies, or else they need to require their users to install them
> for them. (This is already true in practice, but it's not
> documented, and some plugins may not work in environments that don't
> provide them with all of the standard plugins.)

the purpose of the kubernetes-cni package was to install all the
plugins and allow users of deployers (e.g. kubeadm) to transparently
install any Pod network plugin without the need to understand lower
level details such as CNI plugins.

>
> - We should stop building kubernetes-cni. Maybe with a GA-like
> deprecation policy? So maybe, announce the deprecation in 1.16 but
> don't do anything else. Then in 1.17 make kubeadm no longer depend
> on kubernetes-cni (but possibly install its own copies of the
> plugins, if that's needed). And in 1.18 kill kubernetes-cni.
>

if the deprecation takes action, a GA period (1 year) is the only
option that makes sense.

building all the plugin binaries as part of the kubeadm package is a
solution for kubeadm users.
directing non-kubeadm users to the upstream CNI tarballs can work, but
this has to be documented and announced.

lubomir
--

Lubomir I. Ivanov

Sep 26, 2019, 5:52:54 PM9/26/19
to Stephen Augustus, Dan Winship, release-...@kubernetes.io, kubernetes-...@googlegroups.com, kubernetes-sig-release, kubernetes-sig-cluster-lifecycle, Kubernetes Release Team
On Fri, 27 Sep 2019 at 00:08, Stephen Augustus <Ste...@agst.us> wrote:
>
> We missed the boat for 1.16, as it was already late in the cycle, but I'd like to run full speed at this for 1.17.
>
> As for course of action, I think we should:
>
> (SIG Network) Update user-facing documentation as Dan mentioned
>
> other CNI plugins are not guaranteed to be installed in any particular kubernetes installation, and so network plugins that depend on standard CNI plugins (portmap, host-local, etc) need to either install their own copies, or else they need to require their users to install them
>
> (SIG Network/Release) Announce the deprecation and start the clock in Kubernetes 1.17
> (SIG Release) Move the CNI plugins into the kubelet debs/rpms and remove kubeadm's package dependency on kubernetes-cni, starting in Kubernetes 1.17
>
> This way we don't break users of kubernetes-cni in future versions
> Choosing to move it to the kubelet instead of the kubeadm package since we hit a wider swath of consumers this way
>

indeed, moving them into the kubelet vs the kubeadm package is an
interesting topic.
the kubelet package is a better location, as CNI is really a
Kubernetes Node dependency and not a deployer (kubeadm) dependency.

> (SIG Release) Continue also publishing kubernetes-cni debs/rpms until Kubernetes 1.19
>

3 releases vs 1 year is up to SIG Release, i guess.

> Several of the instructions I've [speed]read through (Calico, Weave, Cilium) suggest you BYO or use kubeadm to ensure /opt/cni/bin is configured, so I think this plan is fine.
>

like i've explained in the reply to Dan, kubeadm is not the only
deployer in the ecosystem.
Pod network addon documentation instructing users to first install
kubeadm to be able to use their solution is not a great practice.
ideally all of them now have to change to recommend installing the
CNI plugin tarballs.

lubomir
--

Stephen Augustus

Sep 27, 2019, 5:28:33 PM9/27/19
to Lubomir I. Ivanov, Dan Winship, release-...@kubernetes.io, kubernetes-...@googlegroups.com, kubernetes-sig-release, kubernetes-sig-cluster-lifecycle, Kubernetes Release Team
Responses inline.

On Thu, Sep 26, 2019 at 5:52 PM Lubomir I. Ivanov <neol...@gmail.com> wrote:
On Fri, 27 Sep 2019 at 00:08, Stephen Augustus <Ste...@agst.us> wrote:
>
> We missed the boat for 1.16, as it was already late in the cycle, but I'd like to run full speed at this for 1.17.
>
> As for course of action, I think we should:
>
> (SIG Network) Update user-facing documentation as Dan mentioned
>
> other CNI plugins are not guaranteed to be installed in any particular kubernetes installation, and so network plugins that depend on standard CNI plugins (portmap, host-local, etc) need to either install their own copies, or else they need to require their users to install them
>
> (SIG Network/Release) Announce the deprecation and start the clock in Kubernetes 1.17
> (SIG Release) Move the CNI plugins into the kubelet debs/rpms and remove kubeadm's package dependency on kubernetes-cni, starting in Kubernetes 1.17
>
> This way we don't break users of kubernetes-cni in future versions
> Choosing to move it to the kubelet instead of the kubeadm package since we hit a wider swath of consumers this way
>

indeed, moving them into the kubelet vs the kubeadm package is an
interesting topic.
the kubelet package is a better location, as CNI is really a
Kubernetes Node dependency and not a deployer (kubeadm) dependency.

Yep, that's exactly what I was thinking.
 
> (SIG Release) Continue also publishing kubernetes-cni debs/rpms until Kubernetes 1.19
>

3 releases vs 1 year is up to SIG Release, i guess.

Either would be okay, honestly. I'm thinking that we can do 3 releases instead since we started the discussion before 1.16 was cut and people seemed to be in agreement that we could move towards deprecation.
Quoting the deprecation policy: "This does not imply that all changes to the system are governed by this policy. This applies only to significant, user-visible behaviors which impact the correctness of applications running on Kubernetes or that impact the administration of Kubernetes clusters, and which are being removed entirely."

Since we're just relocating the contents of the package, while continuing to publish it for older versions, we should be fine to call kubernetes-cni officially deprecated in Kubernetes 1.19.
It's low-cost to make this longer if people aren't okay w/ 1.19.

> Several of the instructions I've [speed]read through (Calico, Weave, Cilium) suggest you BYO or use kubeadm to ensure /opt/cni/bin is configured, so I think this plan is fine.
>

like i've explained in the reply to Dan, kubeadm is not the only
deployer in the ecosystem.
Pod network addon documentation instructing users to first install
kubeadm to be able to use their solution is not a great practice.
ideally all of them now have to change to recommend installing the
CNI plugin tarballs.

I think I might've not been as clear as I could've been here.
Some examples of the language:
So I feel we'd be in the clear as long as our docs are up-to-date.

-- Stephen




Derek Carr

Sep 27, 2019, 6:27:17 PM9/27/19
to Stephen Augustus, Dan Winship, Kubernetes Release Team, Lubomir I. Ivanov, kubernetes-sig-cluster-lifecycle, kubernetes-...@googlegroups.com, kubernetes-sig-release, release-...@kubernetes.io
Isn’t the CNI selection similar to the container runtime choice?  Both are host level prerequisites to the kubelet that a deployer satisfies?

Lubomir I. Ivanov

Sep 27, 2019, 6:38:48 PM9/27/19
to Derek Carr, Stephen Augustus, Dan Winship, Kubernetes Release Team, kubernetes-sig-cluster-lifecycle, kubernetes-...@googlegroups.com, kubernetes-sig-release, release-...@kubernetes.io
On Sat, 28 Sep 2019 at 01:13, Derek Carr <dec...@redhat.com> wrote:
>
> Isn’t the CNI selection similar to the container runtime choice? Both are host level prerequisites to the kubelet that a deployer satisfies?
>

the container runtime can also be a pre-requisite to the kubelet and
the deployer, in case the deployer does not install the container
runtime.
for the pod network addon, some deployers install one by default, and
some again leave it to the user, in which case the user or deployer
has to install a set of CNI plugins for the pod network to work.

our dilemma here is who installs the CNI plugins (such as "loopback");
we are settling on bundling them in the official Kubernetes "kubelet"
deb/rpm, unless there are objections.

lubomir
--

Casey Callendrello

Sep 30, 2019, 10:36:57 AM9/30/19
to Lubomir I. Ivanov, Derek Carr, Stephen Augustus, Dan Winship, Kubernetes Release Team, kubernetes-sig-cluster-lifecycle, kubernetes-...@googlegroups.com, kubernetes-sig-release, release-...@kubernetes.io
This may have some implications for testing as well. AIUI the scaffolding still uses the Kubenet plugin for the dockershim, which wraps (and thus depends on) the CNI bridge plugin being present on the node. So, we'll have to solve this somehow.

--cdc


Benjamin Elder

Sep 30, 2019, 10:47:11 AM9/30/19
to Casey Callendrello, Lubormir Ivanov, Derek Carr, Stephen Augustus, Dan Winship, Kubernetes Release Team, kubernetes-sig-cluster-lifecycle, kubernetes-...@googlegroups.com, kubernetes-sig-release, release-...@kubernetes.io
Which scaffolding?

Some of the Kubernetes CI does not use kubenet, and to my knowledge none of the Kubernetes project CI uses the debian / rpm packages (which are built separately as part of releasing).


Stephen Augustus

Oct 9, 2019, 4:57:24 PM10/9/19
to Benjamin Elder, Casey Callendrello, Lubomir I. Ivanov, Derek Carr, Dan Winship, Kubernetes Release Team, kubernetes-sig-cluster-lifecycle, kubernetes-...@googlegroups.com, kubernetes-sig-release, release-...@kubernetes.io
Hey everyone,

Do we think this is a reasonable plan to move forward?
As Ben mentioned, this will not affect CI as we don't leverage the debs/rpms in non-k/release tests.

-- Stephen

Lubomir I. Ivanov

Oct 9, 2019, 5:01:32 PM10/9/19
to Stephen Augustus, Benjamin Elder, Casey Callendrello, Derek Carr, Dan Winship, Kubernetes Release Team, kubernetes-sig-cluster-lifecycle, kubernetes-...@googlegroups.com, kubernetes-sig-release, release-...@kubernetes.io
+1 on moving the plugins to the kubelet package.

lubomir
--