Migrating away from Hyperkube Kubelet on Container Linux-like distros


Marko Mudrinić

Jul 22, 2020, 7:59:20 AM
to kubernete...@googlegroups.com, kubernetes-sig-release
Hello,

It has been announced that Hyperkube has been deprecated and that it will not be present in the upcoming v1.19 release. The following email to the k-dev mailing list has some more details: [1]. 

For almost all control plane components (kube-apiserver, kube-controller-manager, and more), the migration path is clear: use the upstream images (e.g. k8s.gcr.io/kube-apiserver).

However, there is no upstream image for Kubelet, and it remains unclear how users should run it. One of the most popular approaches was to use Hyperkube to run Kubelet inside a container. This was very suitable for Container Linux-based and similar distros, such as CoreOS, Flatcar, and others, because all the needed dependencies were included in the image. At this point, there is no guideline or recommendation on how users of Container Linux-like distros should run Kubelet.

Unlike on other distros, running Kubelet directly as a binary is not practical because it's hard to install all the needed dependencies. In the Hyperkube image, all the dependencies are already installed, including conntrack (AFAIK required by both kubeadm and Kubelet), glusterfs-client, nfs-common, and ceph-common (required by Kubelet if users are using the respective features). Those and many other dependencies are not installed by default on many Container Linux-like distros.
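For illustration, here is a minimal sketch of what a self-built Kubelet image bundling those dependencies could look like. The Debian base, Kubelet version, and download URL are my assumptions, not an official recommendation:

```dockerfile
# Illustrative only: a hypothetical Kubelet image with the
# dependencies mentioned above. Base image, version, and URL
# are assumptions, not a supported configuration.
FROM debian:buster-slim

ARG KUBELET_VERSION=v1.18.6

# Runtime dependencies Kubelet/kubeadm may need
RUN apt-get update && apt-get install -y --no-install-recommends \
        conntrack \
        glusterfs-client \
        nfs-common \
        ceph-common \
    && rm -rf /var/lib/apt/lists/*

ADD https://storage.googleapis.com/kubernetes-release/release/${KUBELET_VERSION}/bin/linux/amd64/kubelet /usr/local/bin/kubelet
RUN chmod +x /usr/local/bin/kubelet

ENTRYPOINT ["/usr/local/bin/kubelet"]
```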

On the other side, it's unclear how to run Kubelet as a container without Hyperkube. As there is no official image, users have only two options:

* Build their own Kubelet images. However, there is no recommendation on how such an image should look, what base image should be used, or what packages should be included
* Use third-party images such as [2]. The problem with this approach is that users fully depend on third parties to maintain and keep those images up-to-date, working, and secure.

Neither of those options offers a good user experience, so I asked SIG-Release what the recommended way to run Kubelet on Container Linux-like distros is, and whether it is possible to get an official Kubelet image. The discussion has been ongoing on the #sig-release Slack channel and you can check the following thread for more details: [3].

In short, I got a recommendation to reach out to SIG-Node, as you're responsible for Kubelet, but I've also been told that running a containerized kubelet is deprecated/removed, with a reference to the following thread [4]. However, it's unclear what removing containerized kubelet support means. Does it mean that running Kubelet in a container is no longer supported at all, or has the way to run it in a container changed?

The main questions are:

* What is the recommended and supported way to run Kubelet on Container Linux-like distros (e.g. Flatcar)?
* If Kubelet can be run in a container, can we get an official image, or some recommendation on what image should be used instead?
* If it can't be run in a container, can we get recommendations/docs on what should be done about the Kubelet dependencies?

I think this is very important from the user-experience side, so a more official response/recommendation from SIG-Node would be very useful.

Thank you!

Josh Berkus

Jul 27, 2020, 7:23:03 PM
to Marko Mudrinić, kubernete...@googlegroups.com, kubernetes-sig-release
On 7/22/20 4:59 AM, Marko Mudrinić wrote:
>
> * Build their own Kubelet images, however, there is no any
> recommendation how this image should look like, what image base should
> be used, and what packages should be included
> * Use third-party images such as [2]. The problem with this approach is
> that users fully depend on third parties to maintain and keep those
> images up-to-date, working, and secure.

You're missing one:

* Take over maintenance of Hyperkube so that it can continue to publish
images.

Note that the difficulties with making containerized kubelets work
properly won't go away. But having some dedicated maintainers would
make hyperkube a lot more viable.

--
--
Josh Berkus
Kubernetes Community
Red Hat OSPO

Niko Penteridis

Jul 28, 2020, 3:04:44 AM
to Josh Berkus, Marko Mudrinić, kubernete...@googlegroups.com, kubernetes-sig-release
From my experience encountering hyperkube in production some
years ago, seeing hyperkube deprecated is quite welcome...
the containerized kubelet is hopelessly buggy, with the kind of
bugs that are really not worth the investment, which I suppose is
why it was deprecated, along with strong security concerns.

Are there any intrinsic benefits of running kubelet in a container at all?
> --
> You received this message because you are subscribed to the Google Groups
> "kubernetes-sig-release" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to kubernetes-sig-re...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/kubernetes-sig-release/88af72e5-55c8-47e8-3847-e70f3cf9cbd8%40redhat.com.
>

Marko Mudrinić

Jul 28, 2020, 5:34:59 AM
to kubernetes-sig-release
> You're missing one: 

> * Take over maintenance of Hyperkube so that it can continue to publish 
> images. 

There is a big difference between Hyperkube (and maintaining Hyperkube) and providing a Kubelet image (or some other solution).
Hyperkube includes all the other control plane components as well, including the API server, controller manager, and more.
Maintaining such a solution is definitely a hard task and I understand the intentions behind deprecating Hyperkube.

However, as I mentioned in the initial email, all the other components in Hyperkube (besides Kubelet) have an alternative: there is an official image that users can use to run the component.
The workflow is a little bit different (you have multiple images instead of just a single Hyperkube image), but in the end, you get the same result: a component running in a container.

On the other hand, if you want to run Kubelet in a container, you either have to use one of the community-maintained images or build your own image.

While this is not a bad alternative, I thought that there would be much more demand for a Kubelet image and that having one would improve the user experience (some of the problematic points are recapped in the initial email), so that's why I proposed that we consider building an official image, just like for the other components.

> Note that the difficulties with making containerized kubelets work 
> properly won't go away.  But having some dedicated maintainers would 
> make hyperkube a lot more viable. 

I'd just like to make sure that we are talking about the same thing:

* the containerized kubelet, i.e. the --containerized flag, a feature that was deprecated and removed in 1.16
* running Kubelet in a container without the --containerized flag (I assume that Hyperkube also does this, at least for 1.16+)

To my understanding, the containerized kubelet was meant for some very special use cases, where the host's root (`/`) was mounted at `/rootfs` inside the container. The flag/feature was used to make Kubelet aware of this and chroot into it later.

That was definitely a source for many issues and it's a good thing that it got removed, but I believe that running Kubelet in a container is a totally different thing compared to that.
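To illustrate the distinction, the removed mode looked roughly like this. This is a hypothetical sketch for contrast, not a supported invocation; the image name and mount list are assumptions:

```shell
# Sketch of the removed --containerized mode (deprecated and
# removed in 1.16): the host root was bind-mounted at /rootfs
# and the kubelet chrooted into it for host operations.
docker run --privileged --net=host --pid=host \
  -v /:/rootfs \
  -v /var/lib/kubelet:/var/lib/kubelet:shared \
  some/kubelet-image \
  kubelet --containerized ...
```

Running Kubelet in a container without that flag instead mounts only the specific host paths the kubelet needs, with no chroot involved.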

> Are there any intrinsic benefits of running kubelet in a container at all? 

To recap the initial email, there can be many benefits in containerized environments/operating systems. It's much easier and more natural to handle all the needed dependencies and run a component as a container than to run it as a plain binary. I've covered more details in the initial email, so I won't repeat myself, but if there's anything unclear, please let me know.

Davanum Srinivas

Jul 28, 2020, 5:58:00 AM
to Marko Mudrinić, kubernetes-sig-release
Marko,

We begged and pleaded for folks to show up to do the work multiple times over the years, and no one bothered to stick around to do the work needed.

Anyone who is interested in this topic and is willing to resurrect hyperkube, please feel free to start a KEP and shop around for SIG(s) to sponsor them, etc. (follow the usual process). There are two variations we have tried so far.


If you really want support for Flatcar, please bring them to the table. Folks like Vincent Batts and Dongsu Park are working on how to support, for example, image-builder for cluster-api with Flatcar:

If anyone wants to do the work, then a KEP first and then a kubernetes-sigs repo etc. are a possibility. We will NOT be adding this back to k/k for sure; that's water under the bridge.

Sorry and thanks.
Dims



--
Davanum Srinivas :: https://twitter.com/dims

Marko Mudrinić

Jul 28, 2020, 6:15:54 AM
to kubernetes-sig-release
Hello Dims,

Thank you so much for reaching out and making it clear what the possibilities are!

I'll take a look at the links you posted. For now, I'll look into the alternatives and test them a bit more in depth.

Thanks to all of you!
Marko


Rodrigo Campos

Jul 28, 2020, 2:00:27 PM
to Josh Berkus, Marko Mudrinić, kubernete...@googlegroups.com, kubernetes-sig-release
Hi!

My name is Rodrigo and I work at Kinvolk; we are behind Flatcar.
I can offer my help to maintain an image for the kubelet (I'm part of
the Kubernetes org, if that helps).

Something worth mentioning is that Flatcar Kubernetes users usually
use the kubelet-wrapper script shipped with Flatcar. That script uses
rkt stage1, that is: you specify a container image and launch it as a
container, but it is actually a process on the host with only chroot
isolation. So, while a container image is used, it is not running as a
regular container.

The idea is to deprecate kubelet-wrapper, though, as it uses rkt,
which is now an unmaintained project. I don't know what the
deprecation window might be, as there are several users and we want to
play nice. We think it might be possible to replace the usage of
rkt with docker (not sure if it would be baked into Flatcar), just
using a combination of flags.
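As a sketch of that idea, something like the following could stand in for kubelet-wrapper. The image name, mount list, and flags here are all guesses that would need to be worked out, not a working replacement:

```shell
# Hypothetical kubelet-wrapper replacement using docker instead
# of rkt. Image name and host mounts are illustrative assumptions.
exec docker run --rm --name kubelet \
  --privileged --net=host --pid=host \
  -v /dev:/dev \
  -v /sys:/sys:ro \
  -v /etc/kubernetes:/etc/kubernetes:ro \
  -v /var/lib/kubelet:/var/lib/kubelet:rshared \
  -v /var/log:/var/log \
  "${KUBELET_IMAGE}" \
  kubelet "$@"
```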

But we definitely see value in a kubelet container image. In fact, we
also have a Kubernetes distribution, and we are also running the kubelet
as a daemonset (we have a bootstrap kubelet on the node to start the
daemonset). That has been working well for us, AFAIK.
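A heavily trimmed sketch of that pattern, assuming a hypothetical kubelet image; a real manifest would need many more host mounts and flags:

```yaml
# Illustrative only: kubelet-as-daemonset pattern. The image name
# is hypothetical and the mount list is far from complete.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kubelet
  namespace: kube-system
spec:
  selector:
    matchLabels: {app: kubelet}
  template:
    metadata:
      labels: {app: kubelet}
    spec:
      hostNetwork: true
      hostPID: true
      containers:
      - name: kubelet
        image: example.org/kubelet:v1.18.6   # hypothetical image
        securityContext: {privileged: true}
        command: ["/usr/local/bin/kubelet"]
        volumeMounts:
        - name: var-lib-kubelet
          mountPath: /var/lib/kubelet
          mountPropagation: Bidirectional
      volumes:
      - name: var-lib-kubelet
        hostPath: {path: /var/lib/kubelet}
```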

If SIG-node is okay distributing the kubelet container image, I
volunteer to help to make that happen and maintain it :)

If people see value in hyperkube, I can help with that too. From our
usage, though, it is much the same whether we use hyperkube or the
specific container image. If people want to go down this route, I
think that maybe using a small Go program to replace the hyperkube bash
script could help a lot. The image would then contain only a static
binary, so there would be no need for a shell just for hyperkube itself.

Best,
Rodrigo



--
Rodrigo Campos
---
Kinvolk GmbH | Adalbertstr.6a, 10999 Berlin | tel: +491755589364
Geschäftsführer/Directors: Alban Crequy, Chris Kühl, Iago López Galeiras
Registergericht/Court of registration: Amtsgericht Charlottenburg
Registernummer/Registration number: HRB 171414 B
Ust-ID-Nummer/VAT ID number: DE302207000

Davanum Srinivas

Jul 28, 2020, 3:06:58 PM
to Rodrigo Campos, Josh Berkus, Marko Mudrinić, kubernete...@googlegroups.com, kubernetes-sig-release
Rodrigo,

Thanks for the insight/options for Flatcar. Hopefully one of them clicks for Marko.

Usual community rules apply, get some proposal going, get folks behind it, shop it to various SIGs and see what happens.

Thanks,
Dims



Tim Allclair

Jul 28, 2020, 3:28:16 PM
to Rodrigo Campos, Davanum Srinivas, Josh Berkus, Marko Mudrinić, kubernetes-sig-node, kubernetes-sig-release
Speaking as a member of the Product Security Committee, one of the challenges we've had with Hyperkube is keeping the image patched. Since it has so many dependencies, it frequently has CVEs that need to be patched. I'd prefer it if Kubernetes weren't in the business of maintaining binary distributions of third-party dependencies. We've mostly chosen to rely on Debian Linux to provide the dependencies, but IMO we haven't done a sufficient job of keeping the images up to date with the latest patches.

If we do go the route of shipping a Kubelet image packaged with the dependencies, I'd like to see a plan for automatically patching the image with every release. Challenges to this approach include:
- regression testing with updated dependencies
- build time & flakiness from pulling the dependencies from third party servers

On Tue, Jul 28, 2020 at 12:12 PM Rodrigo Campos <rod...@kinvolk.io> wrote:
On Tue, Jul 28, 2020 at 4:06 PM Davanum Srinivas <dav...@gmail.com> wrote:
>
> Rodrigo,
>
> Thanks for the insight/options for flatcar. Hopefully one of them clicks for Marko.

Thank you!


>
> Usual community rules apply, get some proposal going, get folks behind it, shop it to various SIGs and see what happens.

Oh, okay. Marko (or anyone else interested) do you want to work with
me on a KEP (as the KEP template says, add summary, goals, non-goals
and open the PR early, so probably just that) and discuss this on
SIG-node meetings? (I might not be available next Tuesday for sig-node
meeting, but the week after that)


Best,
Rodrigo




Rodrigo Campos

Jul 28, 2020, 4:55:10 PM
to Tim Allclair, Davanum Srinivas, Josh Berkus, Marko Mudrinić, kubernetes-sig-node, kubernetes-sig-release
On Tue, Jul 28, 2020 at 4:28 PM Tim Allclair <tall...@google.com> wrote:
>
> Speaking as a member of the Product Security Committee, one of the challenges we've had with Hyperkube is keeping the image patched. Since it has so many dependencies, it frequently has CVEs that need to be patched. I'd prefer it if Kubernetes weren't in the business of maintaining binary distributions of third-party dependencies. We've mostly chosen to rely on Debian Linux to provide the dependencies, but IMO we haven't done a sufficient job of keeping the images up to date with the latest patches.

Thanks for the input :)

> If we do go the route of shipping a Kubelet image packaged with the dependencies, I'd like to see a plan for automatically patching the image with every release. Challenges to this approach include:
> - regression testing with updated dependencies
> - build time & flakiness from pulling the dependencies from third party servers

Makes sense. I'll make sure to include ways to address those concerns
if we create such a KEP :)