[URGENT] Proposal to Stop Building and Releasing Artifacts for mips64le, ppc64le, and s390x


Jeremy Rickard

Oct 4, 2024, 1:47:24 PM
to d...@kubernetes.io, kubernete...@googlegroups.com, kubernetes-sig-release

Hello kubefolx!


We are reaching out to discuss supported platforms and our Go base images! During our work to bump to Go 1.22.8 and Go 1.23.2, we have discovered that the Go team doesn't provide Debian bullseye-based Go images for certain architectures (mips64le, ppc64le, s390x), as bullseye has transitioned into “LTS” support and is no longer maintained by the Debian team[1][2]. However, these architectures appear to be supported on Debian bookworm[3].

The Kubernetes images are already based on Debian bookworm; however, we’re building all Kubernetes binaries using Debian bullseye. This is done to maintain glibc compatibility, as Debian bookworm ships with glibc 2.36[4], which is not yet widely adopted.
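As a quick way to see what this means in practice, the glibc symbol versions that a dynamically linked binary (such as kubelet) requires can be read from its ELF dynamic symbol table. The sketch below is a minimal, unofficial Go example (not a SIG Release tool, and the program name and output format are just illustrative): it prints the GLIBC_* versions a given binary imports, so a bullseye-built binary (glibc 2.31) can be compared against what a bookworm-built one (glibc 2.36) would require.

```go
// Unofficial sketch: print the GLIBC_* symbol versions a dynamically linked
// ELF binary (for example a downloaded kubelet) imports, to gauge the minimum
// glibc it needs on the target host.
package main

import (
	"debug/elf"
	"fmt"
	"os"
	"strings"
)

func main() {
	if len(os.Args) != 2 {
		fmt.Fprintln(os.Stderr, "usage: glibc-check <path-to-binary>")
		os.Exit(1)
	}
	f, err := elf.Open(os.Args[1])
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	syms, err := f.ImportedSymbols()
	if err != nil {
		// Statically linked binaries import no symbols and carry no glibc requirement.
		fmt.Println("no imported symbols:", err)
		return
	}

	seen := map[string]bool{}
	for _, s := range syms {
		if strings.HasPrefix(s.Version, "GLIBC_") && !seen[s.Version] {
			seen[s.Version] = true
			fmt.Println(s.Version)
		}
	}
}
```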


This presents a challenge for our current effort to bump Go versions on both the master branch and the supported release branches. We are currently working on mitigating this for the v1.32.0-alpha.1 release (which we plan to release using Go 1.23.0); however, this will also block all upcoming releases.


There are two options that we can pursue here; however, it’s very important to highlight that whichever we choose will apply to both the master branch and the supported release branches (v1.28 to v1.31):


  • We can migrate to bookworm, and as a result increase the required glibc version to 2.36 or newer (for all supported releases) 

  • We can stop creating release artifacts for mips64le, ppc64le, and s390x, and continue using bullseye to build the Kubernetes artifacts, keeping the glibc requirement unchanged


If we don’t apply this change to the actively supported release branches as well, it will hinder our ability to bump Go versions on those branches. This carries significant implications and security risks if a serious Go vulnerability is identified.


We’re fully aware that whatever decision we make, we’ll introduce a breaking change for some users. At the moment, the SIG Release leads believe that no longer creating release artifacts for the mips64le, ppc64le, and s390x architectures poses less risk and affects fewer users than increasing the minimum glibc version.


First and foremost, we kindly ask for your understanding. We also want to hear your feedback on the preferred path forward, so please reply to this email or reach out to SIG Release via our communication channels. Because this is time critical, we would like all feedback by Friday, October 11th, and will treat that date as a lazy consensus deadline.


Our existing policy[5] states the following:


Please note that actively supported release branches are not affected by the removal. This ensures compatibility with existing artifact consumers.


Any option we choose will break this policy in some way. If we drop the mentioned architectures, we’re directly breaking this policy; however, switching to bookworm would significantly change the supported operating systems (and their versions). We want to discuss the best path forward here.


To reduce user confusion, it’s important to clarify that we’re not in favor of combining those two options. Whatever option we decide on will be applied to both the master branch and the release branches. We want to avoid situations where we arbitrarily start and stop creating release artifacts for different branches and architectures.


Thanks,


Jeremy Rickard and Marko Mudrinić  // on behalf of SIG Release



[1] https://github.com/docker-library/official-images/pull/17640/files#diff-262b5154873802fd4abff07283ae9bd83663325957229799a17e8262a5268b27

[2] https://github.com/docker-library/golang/issues/536

[3] https://github.com/docker-library/official-images/blob/a22705bb8eb8e84123c08a62de343a5e8a98ab61/library/buildpack-deps#L17

[4] https://sources.debian.org/src/glibc/

[5] https://github.com/kubernetes/sig-release/blob/master/release-engineering/platforms/guide.md#deprecating-and-removing-supported-platforms



Tim Hockin

Oct 9, 2024, 4:06:32 PM
to benth...@google.com, dav...@gmail.com, thiru...@gmail.com, dev, Jeremy Rickard, kubernetes-sig-release
I think the line to take is something like:
* we ACTIVELY SUPPORT platforms A, B, C -- we regularly test them and build releases for them
* we PASSIVELY SUPPORT platforms D, E, F -- we will consume and publish 3rd party test results for them, but we do not test them or build releases for them
* we OPPORTUNISTICALLY SUPPORT other platforms -- we will take PRs to make Kubernetes build/run on them, but we do not test them or build releases for them

On Wed, Oct 9, 2024 at 11:24 AM 'Benjamin Elder' via dev <d...@kubernetes.io> wrote:
>
> > Can I just confirm what is meant by artifacts here? Would it mean both executables such as kubelet, and images containing the api-server etc, that are used by kubeadm to create a basic cluster?
>
> The files published in a release by the project. That includes binaries and container images and anything else advertised with a release.
>
> > Would there still be scripts and Dockerfiles etc maintained by the community that could be used to build binaries and images for these architectures?
>
> To some extent. We would not be actively building these.
>
> > Would kubeadm built downstream need to be pointed at a different image repository other than registry.k8s.io in order to be used in future?
>
> Yes, unless the downstream distro chose to patch it. The registry used is configurable in kubeadm with a single config field already for use cases like airgapping and custom distros.
>
> For prior art see: https://kubernetes.io/blog/2020/01/15/kubernetes-on-mips/
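To make the "to some extent" above concrete: even if the project stops publishing artifacts for these architectures, the stock Go toolchain can still cross-compile Kubernetes components for them. The sketch below is rough and unofficial; it assumes a kubernetes/kubernetes checkout as the working directory and skips the official build's kube-cross container and version-stamping ldflags, so the resulting binaries are unstamped development builds rather than anything equivalent to a release artifact.

```go
// Unofficial sketch: cross-compile kubelet for the architectures discussed here
// using only the stock Go toolchain. Run from the root of a kubernetes/kubernetes
// checkout. The official release pipeline instead uses build/run.sh, the
// kube-cross image, and ldflags for version stamping; none of that happens here.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	for _, arch := range []string{"s390x", "ppc64le", "mips64le"} {
		out := fmt.Sprintf("_output/kubelet-linux-%s", arch)
		cmd := exec.Command("go", "build", "-o", out, "./cmd/kubelet")
		// CGO_ENABLED=0 keeps the cross-compile self-contained; optional kubelet
		// features that rely on cgo are dropped in such a build.
		cmd.Env = append(os.Environ(), "GOOS=linux", "GOARCH="+arch, "CGO_ENABLED=0")
		cmd.Stdout = os.Stdout
		cmd.Stderr = os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintf(os.Stderr, "build for %s failed: %v\n", arch, err)
			os.Exit(1)
		}
		fmt.Println("built", out)
	}
}
```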
>
> On Wed, Oct 9, 2024 at 10:07 AM Davanum Srinivas <dav...@gmail.com> wrote:
>>
>> Manoj,
>>
>> Thank you for the feedback. Can you help identify the specific items from the Kubernetes release that you use? For example, from the latest 1.31.1 release, here's the set we released:
>> https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.31.md#downloads-for-v1311
>>
>> How exactly are you building a kubernetes cluster with these artifacts? Which installer are you using?
>>
>> thanks,
>> Dims
>>
>> PS: it looks like you are an IBMer as well. please confirm.
>>
>> On Wed, Oct 9, 2024 at 12:57 PM Manoj Pattabhiraman <thiru...@gmail.com> wrote:
>>>
>>> Hi
>>>
>>> In the ASEAN region, we predominantly use s390x-based K8s artifacts and other vanilla OSS for some of the major startups (fintechs, stock trading, and other enterprise clients). Platforms like s390x in particular are widely used in the financial and government sectors in this region for their performance and inherent security.
>>>
>>> Thanks .. Manoj.
>>
>>
>>
>> --
>> Davanum Srinivas :: https://twitter.com/dims
>>

Jeremy Rickard

Oct 14, 2024, 2:29:03 PM
to dev, Manoj Pattabhiraman, Davanum Srinivas, dev, Jeremy Rickard, kubernetes-sig-release
Thanks everyone for all the feedback on this topic. 

We have sent an update out in a separate thread to ensure it's seen: https://groups.google.com/a/kubernetes.io/g/dev/c/12uRwQIi51U/m/QLEk7V_SAQAJ



On Monday, October 14, 2024 at 6:18:41 AM UTC-6 Manoj Pattabhiraman wrote:
Hi, 

The current deployment for a stock trading organisation is happening with K8s 1.31.1, and we are using Velero and Calico for storage and networking. Yes, I work for IBM, but we leave the deployment products and tools to be decided by our customers.


Tim Hockin

Dec 6, 2024, 2:38:04 AM
to Benjamin Elder, Davanum Srinivas, kaydiam, dev, Jeremy Rickard, Manoj Pattabhiraman, kubernetes-sig-release
Expanding on my comment in the doc.

I cannot make the meeting, but IMO that usage is way too low to justify our community carrying a significant burden (blocking builds, etc.).

As discussed elsewhere, I think it's reasonable to accept patches for multi-platform support, but if the burden of building and releasing for those platforms is non-trivial then I don't think it's resources well spent.

The reality is that we are all in a resource crunch. Anywhere we can squeeze out some efficiency is a win. I'm sure there are other efforts with bigger impact that could use the attention.

I recognize that this is unpleasant for the users of those platforms. But without the ability to actively test and get real user feedback on a regular basis, we're sort of blind.

Tim

On Tue, Dec 3, 2024, 12:48 AM 'Benjamin Elder' via dev <d...@kubernetes.io> wrote:
> In my opinion, it’s worth the team reviewing the usage statistics for all supported architectures across all supported releases. This would help gauge how many users might be surprised by the decision to remove an architecture. While the data may not be 100% accurate, it's better than having none at all.

Based on this thread and recent discussions I have compiled usage data here: https://docs.google.com/document/d/1sXNiqtypt2BBt-4SnvkX1hCp1_4cJYy_hjqtDqcmGqg/edit?usp=sharing

(you must be a member of d...@kubernetes.io or kubernetes-sig-release to view)

TLDR: for registry.k8s.io/kube-proxy:v1.30.x images (querying more would get slow and expensive, this should be a reasonable estimate)

AMD64: 93.11%

Arm64: 06.13% 

PPC64LE: 00.34%

S390x: 00.43%

So <1% combined for these platforms.


On Mon, Oct 14, 2024 at 12:41 PM Davanum Srinivas <dav...@gmail.com> wrote:
Kay,

Please open an issue in image-builder repository and let's deal with it there. The changes we are talking about here are not yet landed and not for right now...

On Mon, Oct 14, 2024 at 3:30 PM kaydiam <kay....@gmail.com> wrote:
Hi colleagues,

I'm not sure if this is related, but our pipeline fails to create ppc64le images: https://prow.k8s.io/view/gs/kubernetes-ci-logs/logs/cloud-provider-openstack-push-images/1845898968966369280
I haven't found any references to ppc64le in our code, but it seems to be related to the recent announcement.
Could you please clarify whether we need to update certain dependencies? Thanks!

Regards,
