Migrating away from Hyperkube Kubelet on Container Linux-like distros


Marko Mudrinić

Jul 22, 2020, 7:59:21 AM
to kubernete...@googlegroups.com, kubernetes-sig-release
Hello,

It has been announced that Hyperkube has been deprecated and that it will not be present in the upcoming v1.19 release. The following email to the k-dev mailing list has some more details: [1]. 

For almost all control plane components (kube-apiserver, kube-controller-manager, and more), the migration path is clear: use the upstream images (e.g. k8s.gcr.io/kube-apiserver).

However, there is no upstream image for Kubelet, and it remains unclear how users should run it. One of the most popular approaches was to use Hyperkube to run Kubelet inside a container. This was especially suitable for Container Linux-based distros, such as CoreOS and Flatcar, because all the needed dependencies were included in the image. At this point, there is no guideline or recommendation on how users of Container Linux-like distros should run Kubelet.

Compared to other distros, running it directly as a binary is not suitable because it's hard to install all the needed dependencies. In the Hyperkube image, all the dependencies are already installed, including conntrack (AFAIK required by both kubeadm and kubelet), glusterfs-client, nfs-common, and ceph-common (required by Kubelet if users rely on the respective features). Those and many other dependencies are not installed by default on many Container Linux-like distros.

On the other hand, it's unclear how to run Kubelet as a container without Hyperkube. As there is no official image, users have only two options:

* Build their own Kubelet images. However, there is no recommendation on what such an image should look like, what base image should be used, or what packages should be included
* Use third-party images such as [2]. The problem with this approach is that users fully depend on third parties to maintain and keep those images up-to-date, working, and secure.
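
For illustration, here is roughly what a self-built image (the first option) could look like. The base image, package list, and file layout below are only my guesses, which is exactly the problem:

```dockerfile
# Illustrative sketch only -- not an official or recommended image.
# Base image and package names are assumptions.
FROM debian:buster-slim
RUN apt-get update && apt-get install -y --no-install-recommends \
        conntrack iptables ethtool socat util-linux \
        glusterfs-client nfs-common ceph-common \
    && rm -rf /var/lib/apt/lists/*
COPY kubelet /usr/local/bin/kubelet
ENTRYPOINT ["/usr/local/bin/kubelet"]
```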

Neither option offers a good user experience, so I asked SIG-Release what the recommended way to run Kubelet on Container Linux-like distros is, and whether it's possible to get an official Kubelet image. The discussion has been ongoing in the #sig-release Slack channel; see the following thread for more details: [3].

In short, I was recommended to reach out to SIG-Node, as you're responsible for Kubelet, but I was also told that running a containerized kubelet is deprecated/removed, with a reference to the following thread: [4]. However, it's unclear what removing containerized kubelet support means. Does it mean that running it in a container is no longer supported at all, or that the way to run it in a container has changed?

The main questions are:

* What is the recommended and supported way for running Kubelet on Container Linux-like distros (e.g. Flatcar and more)?
* If Kubelet can be run in a container, can we get an official image, or some recommendation on what image should be used instead?
* If it can't be run in a container, can we get recommendations/docs on what should be done about the Kubelet dependencies?

I think this is very important from the user-experience side, so a more official response/recommendation from SIG-Node would be very useful.

Thank you!

Josh Berkus

Jul 27, 2020, 7:23:03 PM
to Marko Mudrinić, kubernete...@googlegroups.com, kubernetes-sig-release
On 7/22/20 4:59 AM, Marko Mudrinić wrote:
>
> * Build their own Kubelet images, however, there is no any
> recommendation how this image should look like, what image base should
> be used, and what packages should be included
> * Use third-party images such as [2]. The problem with this approach is
> that users fully depend on third parties to maintain and keep those
> images up-to-date, working, and secure.

You're missing one:

* Take over maintenance of Hyperkube so that it can continue to publish
images.

Note that the difficulties with making containerized kubelets work
properly won't go away. But having some dedicated maintainers would
make hyperkube a lot more viable.

--
Josh Berkus
Kubernetes Community
Red Hat OSPO

Niko Penteridis

Jul 28, 2020, 3:04:45 AM
to Josh Berkus, Marko Mudrinić, kubernete...@googlegroups.com, kubernetes-sig-release
From my experience encountering hyperkube in production some years ago, seeing it deprecated is quite welcome... the containerized kubelet is hopelessly buggy, with the kind of bugs that are really not worth the investment, which I suppose is why it's deprecated, along with serious security concerns.

Are there any intrinsic benefits of running kubelet in a container at all?
> --
> You received this message because you are subscribed to the Google Groups
> "kubernetes-sig-release" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to kubernetes-sig-re...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/kubernetes-sig-release/88af72e5-55c8-47e8-3847-e70f3cf9cbd8%40redhat.com.
>

Iacopo Rozzo

Jul 28, 2020, 4:27:30 AM
to kubernetes-sig-node
You get the intrinsic benefits of containers. The most relevant one here is that the kubelet is packaged with all of its required dependencies. As Marko pointed out, this is even more valuable when using CoreOS-derived distros that do not come with a package manager.

Rodrigo Campos

Jul 28, 2020, 1:59:15 PM
to Josh Berkus, Marko Mudrinić, kubernete...@googlegroups.com, kubernetes-sig-release
Hi!

My name is Rodrigo and I work at Kinvolk, the company behind Flatcar. I can offer to help maintain an image for the kubelet (I'm part of the Kubernetes org, if that helps).

Something worth mentioning is that Flatcar Kubernetes users usually use the kubelet-wrapper script shipped with Flatcar. That script uses an rkt stage1 where you specify a container image and launch it as a container, but it actually runs as a process on the host with only chroot isolation. So, while a container image is used, the kubelet is not running as a regular container.

The idea is to deprecate kubelet-wrapper, though, as it uses rkt, which is now an unmaintained project. I don't know what the deprecation window might be, as there are several users and we want to play nice. We think it might be possible to replace the use of rkt with docker (not sure if it will be baked into Flatcar), just using a combination of flags.

But we definitely see value in a kubelet container image. In fact, we also have a Kubernetes distribution, and there we run the kubelet as a daemonset (with a bootstrap kubelet on the node to start the daemonset). That has been working well for us, AFAIK.
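
Roughly, that daemonset looks something like the manifest below; names, image, and mounts are simplified illustrations, not our exact setup:

```yaml
# Sketch of a kubelet DaemonSet along the lines described above.
# Image name and mounts are illustrative assumptions.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kubelet
  namespace: kube-system
spec:
  selector:
    matchLabels: {app: kubelet}
  template:
    metadata:
      labels: {app: kubelet}
    spec:
      hostNetwork: true
      hostPID: true
      containers:
      - name: kubelet
        image: example.invalid/kubelet:v1.19.0
        securityContext:
          privileged: true
        volumeMounts:
        - name: var-lib-kubelet
          mountPath: /var/lib/kubelet
          mountPropagation: Bidirectional
      volumes:
      - name: var-lib-kubelet
        hostPath: {path: /var/lib/kubelet}
```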

If SIG-node is okay distributing the kubelet container image, I
volunteer to help to make that happen and maintain it :)

If people see value in hyperkube, I can help with that too. From our usage, though, it's much the same whether we use hyperkube or the component-specific container image. If people want to go down this route, I think that replacing the hyperkube bash script with a small Go program could help a lot: it would be a single static binary, so the image wouldn't need a shell just for hyperkube itself.




Best,
Rodrigo



--
Rodrigo Campos
---
Kinvolk GmbH | Adalbertstr.6a, 10999 Berlin | tel: +491755589364
Geschäftsführer/Directors: Alban Crequy, Chris Kühl, Iago López Galeiras
Registergericht/Court of registration: Amtsgericht Charlottenburg
Registernummer/Registration number: HRB 171414 B
Ust-ID-Nummer/VAT ID number: DE302207000

Davanum Srinivas

Jul 28, 2020, 3:06:59 PM
to Rodrigo Campos, Josh Berkus, Marko Mudrinić, kubernete...@googlegroups.com, kubernetes-sig-release
Rodrigo,

Thanks for the insight/options for Flatcar. Hopefully one of them clicks for Marko.

Usual community rules apply, get some proposal going, get folks behind it, shop it to various SIGs and see what happens.

Thanks,
Dims




--
Davanum Srinivas :: https://twitter.com/dims

Rodrigo Campos

Jul 28, 2020, 3:12:38 PM
to Davanum Srinivas, Josh Berkus, Marko Mudrinić, kubernete...@googlegroups.com, kubernetes-sig-release
On Tue, Jul 28, 2020 at 4:06 PM Davanum Srinivas <dav...@gmail.com> wrote:
>
> Rodrigo,
>
> Thanks for the insight/options for flatcar. Hopefully one of them clicks for Marko.

Thank you!

>
> Usual community rules apply, get some proposal going, get folks behind it, shop it to various SIGs and see what happens.

Oh, okay. Marko (or anyone else interested), do you want to work with me on a KEP (as the KEP template says, add a summary, goals, and non-goals, and open the PR early, so probably just that) and discuss this at the SIG-Node meetings? (I might not be available for next Tuesday's SIG-Node meeting, but the week after that works.)


Best,
Rodrigo

Tim Allclair

Jul 28, 2020, 3:28:17 PM
to Rodrigo Campos, Davanum Srinivas, Josh Berkus, Marko Mudrinić, kubernetes-sig-node, kubernetes-sig-release
Speaking as a member of the product security committee, one of the challenges we've had with Hyperkube is keeping the image patched. Since it has so many dependencies, it frequently has CVEs that need to be patched. I'd prefer that Kubernetes weren't in the business of maintaining binary distributions of third-party dependencies. We've mostly chosen to rely on Debian to provide the dependencies, but IMO we haven't done a sufficient job of keeping the images up-to-date with the latest patches.

If we do go the route of shipping a Kubelet image packaged with its dependencies, I'd like to see a plan for automatically patching the image with every release. Challenges to this approach include:
- regression testing with updated dependencies
- build time and flakiness from pulling dependencies from third-party servers


Rodrigo Campos

Jul 28, 2020, 3:36:01 PM
to Tim Allclair, Davanum Srinivas, Josh Berkus, Marko Mudrinić, kubernetes-sig-node, kubernetes-sig-release
On Tue, Jul 28, 2020 at 4:28 PM Tim Allclair <tall...@google.com> wrote:
>
> Speaking as a member of the product security committee, one of the challenges we've had behind Hyperkube is keeping the image patched. Since it has so many dependencies, it frequently has CVEs that need to be patched. I'd prefer if Kubernetes wasn't in the business of maintaining binary distributions of third party dependencies. We've mostly chosen to rely on Debian Linux to provide the dependencies, but IMO we haven't done a sufficient job in keeping the images up-to-date with the latest patches.

Thanks for the input :)

> If we do go the route of shipping a Kubelet image packaged with the dependencies, I'd like to see a plan for automatically patching the image with every release. Challenges to this approach include:
> - regression testing with updated dependencies
> - build time & flakiness from pulling the dependencies from third party servers

Makes sense. I'll make sure to include ways to address those concerns if we create such a KEP :)