[URGENT] Proposal to Stop Building and Releasing Artifacts for mips64le, ppc64le, and s390x

Jeremy Rickard

Oct 4, 2024, 1:47:10 PM
to d...@kubernetes.io, kubernete...@googlegroups.com, kubernetes-sig-release

Hello kubefolx!


We are reaching out to discuss supported platforms and our Go base images! During our work to bump to Go 1.22.8 and Go 1.23.2, we have discovered that the Go team doesn't provide Debian bullseye-based Go images for certain architectures (mips64le, ppc64le, s390x), as bullseye has transitioned into “LTS” support and is no longer maintained by the Debian team[1][2]. However, these architectures appear to be supported on Debian bookworm[3].

The Kubernetes images are already based on Debian bookworm; however, we're building all Kubernetes binaries using Debian bullseye. This is done to maintain glibc compatibility: Debian bookworm ships with glibc 2.36[4], which is not yet widely adopted.
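For readers who want to check the constraint concretely: the glibc symbol versions a dynamically linked binary (such as kubelet) requires are recorded in its ELF import table, and a binary built against glibc 2.36 will refuse to start on hosts with an older glibc. A minimal sketch using Go's standard debug/elf package, illustrative only and not part of our release tooling:

```go
package main

import (
	"debug/elf"
	"fmt"
	"log"
	"os"
	"sort"
	"strings"
)

// Prints the GLIBC_* symbol versions a dynamically linked ELF binary
// requires, e.g. "go run . /usr/bin/kubelet". A binary built on bookworm
// may require versions that bullseye-era hosts cannot satisfy.
func main() {
	if len(os.Args) != 2 {
		log.Fatalf("usage: %s <elf-binary>", os.Args[0])
	}
	f, err := elf.Open(os.Args[1])
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	syms, err := f.ImportedSymbols()
	if err != nil {
		log.Fatal(err) // fails for statically linked binaries, which import nothing
	}

	seen := map[string]bool{}
	for _, s := range syms {
		if strings.HasPrefix(s.Version, "GLIBC_") {
			seen[s.Version] = true
		}
	}
	versions := make([]string, 0, len(seen))
	for v := range seen {
		versions = append(versions, v)
	}
	sort.Strings(versions)
	for _, v := range versions {
		fmt.Println(v)
	}
}
```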


This presents a challenge for our current effort to bump Go versions on both the master branch and the supported release branches. We are currently working on mitigating this for the v1.32.0-alpha.1 release (which we plan to release using Go 1.23.0); however, this will also block all upcoming releases.


There are two options that we can pursue here; however, it's very important to highlight that whichever we choose will apply to both the master branch and the supported release branches (v1.28 to v1.31):


  • We can migrate to bookworm, and as a result increase the required glibc version to 2.36 or newer (for all supported releases) 

  • We can stop creating release artifacts for mips64le, ppc64le, and s390x, and continue using bullseye to build the Kubernetes artifacts, keeping the glibc requirement unchanged


If we don't apply this change to the actively supported release branches, it will hinder our ability to bump Go versions on those branches. That carries significant implications and security risks if a serious Go vulnerability is identified.


We’re fully aware that whatever decision we make, we’ll introduce a breaking change for some users. At the moment, the SIG Release leads believe that stopping the creation of release artifacts for the mips64le, ppc64le, and s390x architectures poses less risk and affects fewer users than increasing the minimum glibc version.


Because of that, we first and foremost ask for your understanding. We also want to hear your feedback on the preferred path forward, so please reply to this email or reach out to SIG Release via our communication channels. Because this is time critical, we would like all feedback by Friday, October 11th, and will treat that date as a lazy consensus deadline.


Our existing policy[5] states the following:


Please note that actively supported release branches are not affected by the removal. This ensures compatibility with existing artifact consumers.


Either option breaks this policy in some way: dropping the mentioned architectures breaks it directly, while switching to bookworm significantly changes the set of supported operating systems (and their versions). We want to discuss the best path forward here.


To reduce user confusion, it’s important to clarify that we’re not in favor of combining these two options. Whatever option we decide on will be applied to both the master branch and the release branches; we want to avoid situations where we arbitrarily start and stop creating release artifacts for different branches and architectures.


Thanks,


Jeremy Rickard and Marko Mudrinić  // on behalf of SIG Release



[1] https://github.com/docker-library/official-images/pull/17640/files#diff-262b5154873802fd4abff07283ae9bd83663325957229799a17e8262a5268b27

[2] https://github.com/docker-library/golang/issues/536

[3] https://github.com/docker-library/official-images/blob/a22705bb8eb8e84123c08a62de343a5e8a98ab61/library/buildpack-deps#L17

[4] https://sources.debian.org/src/glibc/

[5] https://github.com/kubernetes/sig-release/blob/master/release-engineering/platforms/guide.md#deprecating-and-removing-supported-platforms



abdul...@gmail.com

Oct 4, 2024, 7:51:28 PM
to kubernetes-ann...@googlegroups.com, d...@kubernetes.io, kubernete...@googlegroups.com, kubernetes-sig-release
I find s390x an important architecture to support, as many customers are now returning to the mainframe for various reasons, such as the environmentally friendly LinuxONE boxes.

Regards
Abdul


Manjunath Kumatagi

Oct 5, 2024, 10:38:09 AM
to dev, abdul...@gmail.com, d...@kubernetes.io, kubernete...@googlegroups.com, kubernetes-sig-release, kubernetes-ann...@googlegroups.com
Same here: I consider ppc64le to be a crucial architecture and a strong alternative for running cloud-native workloads.

Thanks,
Manjunath Kumatagi.

Antonio Ojea

Oct 6, 2024, 11:27:43 AM
to manjun...@gmail.com, dev, abdul...@gmail.com, kubernete...@googlegroups.com, kubernetes-sig-release, kubernetes-ann...@googlegroups.com
I think it is worth clarifying something: this is about building artifacts, not about the project supporting those architectures. AFAIK there is no automated testing on anything that is not amd64 (some jobs were added recently for arm64, IIRC); whatever testing the other architectures get happens outside the project. If some of these architectures break, we don't know about it, and despite that, I doubt we'd block a release or development because of it.

On the other hand, libc compatibility is a real problem that impacts all existing users; we already suffered from it in the project:
https://github.com/kubernetes/test-infra/pull/31447

We need to put this in proportion. We want to provide as much support as we can, but if trade-offs are needed, we should favor the majority and the less protected. A libc change will impact all platforms and types of users; mips64le, ppc64le, and s390x, on the other hand, seem to impact enterprise users only, who can afford to rebuild the binaries since most probably have support contracts ...

Jay Pipes

Oct 6, 2024, 12:12:22 PM
to Antonio Ojea, manjun...@gmail.com, dev, abdul...@gmail.com, kubernete...@googlegroups.com, kubernetes-sig-release, kubernetes-ann...@googlegroups.com
Well said, Antonio.

We should make decisions based on impact to the largest proportion of users.

Best,
-jay


Manjunath Kumatagi

Oct 6, 2024, 1:08:26 PM
to Antonio Ojea, dev, abdul...@gmail.com, kubernete...@googlegroups.com, kubernetes-sig-release, kubernetes-ann...@googlegroups.com
As far as I know, the artifacts are cross-built, and the issue surfaced recently while building the kube-cross image [1] as part of the Golang update. If I’m correct, we don’t need the kube-cross image to be a multi-arch fat manifest beyond amd64 (the architecture most build systems use) in order to generate the artifacts. I believe we can safely exclude the problematic architectures when building this image, unblocking the process and generating the artifacts smoothly. I’ve made that change here [2], and the image built fine.
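For context on why the cross-build mostly works regardless of per-arch base-image support: pure-Go binaries need only GOOS/GOARCH to cross-compile, and the per-architecture toolchains in kube-cross matter chiefly for the cgo-dependent pieces such as kubelet. A hedged sketch, assuming a hypothetical local package ./hello:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// Cross-compiles a (hypothetical) pure-Go package for each architecture
// under discussion. With CGO_ENABLED=0, no per-arch sysroot or C
// toolchain is required; it's the cgo builds that need kube-cross.
func main() {
	for _, arch := range []string{"amd64", "arm64", "mips64le", "ppc64le", "s390x"} {
		cmd := exec.Command("go", "build", "-o", "hello_linux_"+arch, "./hello")
		cmd.Env = append(os.Environ(), "GOOS=linux", "GOARCH="+arch, "CGO_ENABLED=0")
		if out, err := cmd.CombinedOutput(); err != nil {
			fmt.Printf("%s: build failed: %v\n%s", arch, err, out)
			continue
		}
		fmt.Println("built hello_linux_" + arch)
	}
}
```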

I understand your concern about the libc issue, which is a general problem that can happen on any platform and could arise later when we eventually move to bookworm, potentially causing the build artifacts to not work properly on distros with very long support cycles, such as Ubuntu. We need another way to handle this situation, such as building the kubelet binary statically (all other binaries are already static except this one) or building against an older libc version (though I’m not sure how easy that would be).
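As a side note, whether a given binary was built with cgo (and therefore links glibc dynamically) can be read back from the build settings Go embeds since 1.18, also visible via `go version -m <binary>`. A minimal sketch, not tied to any existing Kubernetes tooling:

```go
package main

import (
	"fmt"
	"runtime/debug"
)

// Reports whether this binary was built with cgo. CGO_ENABLED=0 yields
// a pure-Go, statically linked binary with no glibc dependency; kubelet
// is currently the one Kubernetes binary that still needs cgo.
func main() {
	info, ok := debug.ReadBuildInfo()
	if !ok {
		fmt.Println("no build info embedded")
		return
	}
	for _, s := range info.Settings {
		if s.Key == "CGO_ENABLED" {
			fmt.Println("CGO_ENABLED =", s.Value)
			return
		}
	}
	fmt.Println("CGO_ENABLED setting not recorded")
}
```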

Lastly, for the ppc64le architecture, we regularly run tests (unit, conformance, and node) offline, post the results to Testgrid[3], and collaborate with the community to address any issues found. Of course, there is room for improvement, both in coverage and in how these tests are run, and we will work on closing these gaps.


Thanks,
Manjunath Kumatagi

Benjamin Elder

Oct 6, 2024, 3:28:04 PM
to Manjunath Kumatagi, Antonio Ojea, dev, abdul...@gmail.com, kubernete...@googlegroups.com, kubernetes-sig-release, kubernetes-ann...@googlegroups.com
> As far as I know, the artifacts are cross-built, and the issue surfaced recently while building the kube-cross image [1] as part of the Golang update. If I’m correct, we don’t need the kube-cross image to be a multi-arch fat manifest beyond amd64 (the architecture most build systems use) in order to generate the artifacts. I believe we can safely exclude the problematic architectures when building this image, unblocking the process and generating the artifacts smoothly. I’ve made that change here [2], and the image built fine.

If we can't even get support for the base build image, why are we shipping these? The ecosystem support is just not there.

> I understand your concern about the libc issue, which is a general problem that can happen on any platform and could arise later when we eventually move to bookworm, potentially causing the build artifacts to not work properly on distros with very long support cycles, such as Ubuntu. We need another way to handle this situation, such as building the kubelet binary statically (all other binaries are already static except this one) or building against an older libc version (though I’m not sure how easy that would be).

We already build from Debian stable (we're intrinsically linked to Debian packages in other places, and releng is on the distributor list), which is unlikely to require glibc symbols unavailable on supported distros. Users on even older distros, or on glibc-less distros, can either use distro-supplied packages or build their own binaries.

Other than for CI testing / quality assurance, the default-built and hosted binaries are a convenience, not a necessity.

I don't think we should statically link kubelet just for this; we're not actually encountering this issue otherwise, and static linking is a breaking change (e.g. to DNS resolution behavior).
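To illustrate the DNS point: a CGO_ENABLED=0 binary is limited to Go's built-in resolver, which parses /etc/resolv.conf and /etc/nsswitch.conf itself and cannot load glibc NSS plugins, so lookups can behave differently than in a dynamically linked build that defers to getaddrinfo. A minimal sketch that forces that resolver explicitly (the host name is arbitrary):

```go
package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	// PreferGo forces Go's built-in resolver, the same one a statically
	// linked (CGO_ENABLED=0) binary is limited to. It reads
	// /etc/resolv.conf and a subset of /etc/nsswitch.conf itself and
	// cannot use glibc NSS plugins (mdns, sss, ...), so results can
	// differ from a dynamically linked build.
	r := &net.Resolver{PreferGo: true}
	ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
	defer cancel()

	addrs, err := r.LookupHost(ctx, "kubernetes.io")
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	fmt.Println("pure-Go resolver answered:", addrs)
}
```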

> Lastly, for the ppc64le architecture, we regularly run tests (unit, conformance, and node) offline, post the results to Testgrid[3], and collaborate with the community to address any issues found. Of course, there is room for improvement, both in coverage and in how these tests are run, and we will work on closing these gaps.

Yes ... all of which appears to run on IBM's vendor-controlled downstream infrastructure, which could also be running and hosting builds, and which to this day is not eligible for release-blocking status (https://git.k8s.io/sig-release/release-blocking-jobs.md), because it's still just posting unverified downstream results the community has no ability to fix or debug.

------

> We need to put this in proportion. We want to provide as much support as we can,
> but if trade-offs are needed, we should favor the majority and the less protected.
> A libc change will impact all platforms and types of users; mips64le, ppc64le,
> and s390x, on the other hand, seem to impact enterprise users only, who can
> afford to rebuild the binaries since most probably have support contracts ...

Exactly.

If you came to us today with a new architecture like RISC-V in the same state, I guarantee the project would refuse to add it to the default builds. In fact, we have already done this, more than once.

So the question is how long do we continue to ship untested and relatively unsupported architectures, with a limited user base?

I think it's past time to focus on widely used architectures in the default build.
It costs us a fair bit of time and resources to cross-compile, store/host, and maintain the builds for all of these.
We spend >1h just cross-compiling on ~7 cores and ~50GB in CI, most of which goes to architectures with no CI coverage.

We can still permit building for architectures that we do not build and ship by default.


Dan Millwood

Oct 8, 2024, 4:42:26 AM
to dev, Benjamin Elder, Antonio Ojea, dev, abdul...@gmail.com, kubernete...@googlegroups.com, kubernetes-sig-release, kubernetes-ann...@googlegroups.com, Manjunath Kumatagi
Hello,

I have a few questions about the proposal:

Can I just confirm what is meant by "artifacts" here? Would it include both executables such as kubelet and the images containing the API server etc. that kubeadm uses to create a basic cluster?
Would the community still maintain scripts, Dockerfiles, etc. that could be used to build binaries and images for these architectures?
Would kubeadm built downstream need to point at an image repository other than registry.k8s.io in order to be used in the future?

Thanks, Dan

Sebastien Leger

Oct 9, 2024, 3:27:15 AM
to dev, Dan Millwood, Benjamin Elder, Antonio Ojea, dev, abdul...@gmail.com, kubernete...@googlegroups.com, kubernetes-sig-release, kubernetes-ann...@googlegroups.com, Manjunath Kumatagi
Hello,

We've been working with IBM teams and clients (in the finance industry) using s390x. We're looking at ways to improve automated CI testing for different architectures, including linux/s390x. CI runners spinning up temporary s390x kind clusters is one of the alternatives we're considering.

Davanum Srinivas

Oct 9, 2024, 7:05:01 AM
to sle...@ripple.com, dev, Dan Millwood, Benjamin Elder, Antonio Ojea, abdul...@gmail.com, kubernete...@googlegroups.com, kubernetes-sig-release, kubernetes-ann...@googlegroups.com, Manjunath Kumatagi
Sebastien,

We are not looking to increase our exposure to these architectures beyond what we already have. We are looking for folks who use the current artifacts we ship as part of the Kubernetes release in their production workloads.

-- Dims
