[URGENT] Proposal to Stop Building and Releasing Artifacts for mips64le, ppc64le, and s390x


Jeremy Rickard

Oct 4, 2024, 1:47:24 PM
to d...@kubernetes.io, kubernete...@googlegroups.com, kubernetes-sig-release

Hello kubefolx!


We are reaching out to discuss supported platforms and our Go base images! During our work to bump to Go 1.22.8 and Go 1.23.2, we have discovered that the Go team doesn't provide Debian bullseye-based Go images for certain architectures (mips64le, ppc64le, s390x), as bullseye has transitioned into “LTS” support and is no longer maintained by the Debian team[1][2]. However, these architectures appear to be supported on Debian bookworm[3].

The Kubernetes images are already based on Debian bookworm; however, we’re building all Kubernetes binaries using Debian bullseye. This is done to maintain glibc compatibility, as Debian bookworm ships glibc 2.36[4], which is not yet widely adopted.
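As an illustration of what glibc compatibility means in practice, here is a rough sketch (not part of the release tooling; the program is purely illustrative) that lists the GLIBC_x.y symbol versions a dynamically linked binary such as kubelet requires, using only Go's debug/elf package:

package main

import (
	"debug/elf"
	"fmt"
	"log"
	"os"
	"strings"
)

func main() {
	if len(os.Args) != 2 {
		log.Fatalf("usage: %s <path-to-binary>", os.Args[0])
	}
	f, err := elf.Open(os.Args[1])
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	// ImportedSymbols returns the dynamically linked symbols together with
	// the version tag they resolve against, e.g. "GLIBC_2.34".
	syms, err := f.ImportedSymbols()
	if err != nil {
		log.Fatal(err)
	}
	versions := map[string]bool{}
	for _, s := range syms {
		if strings.HasPrefix(s.Version, "GLIBC_") {
			versions[s.Version] = true
		}
	}
	for v := range versions {
		fmt.Println(v)
	}
}

A binary built on bullseye can only reference symbol versions up to glibc 2.31, while a bookworm build may reference versions up to 2.36 that older distros cannot satisfy.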


This presents a challenge for our current effort to bump Go versions on both the master branch and the supported release branches. We are currently working on mitigating this for the v1.32.0-alpha.1 release (which we plan to release using Go 1.23.0); however, this issue will also block all upcoming releases.


There are two options that we can pursue here. However, it’s very important to highlight that either one will apply to both the master branch and the supported release branches (v1.28 to v1.31):


  • We can migrate to bookworm, and as a result increase the required glibc version to 2.36 or newer (for all supported releases) 

  • We can stop creating release artifacts for mips64le, ppc64le, and s390x, and continue using bullseye to build the Kubernetes artifacts, keeping the glibc requirement unchanged


If we don’t apply the chosen change to the actively supported release branches, we will be unable to bump Go versions on those branches. That carries significant implications and security risks if a serious Go vulnerability is identified.


We’re fully aware that whichever decision we make, we’ll introduce a breaking change for some users. At the moment, the SIG Release leads think that stopping the creation of release artifacts for the mips64le, ppc64le, and s390x architectures poses the least risk and affects the fewest users, compared to increasing the minimum glibc version.


Because of that, we first and foremost kindly ask for your understanding. We also want to hear your feedback on the preferred path forward, so please reply to this email or reach out to SIG Release via our communication channels. Because this is time critical, we would like all feedback by Friday, October 11th, and will treat October 11th as a lazy consensus deadline.


Our existing policy[5] states the following:


Please note that actively supported release branches are not affected by the removal. This ensures compatibility with existing artifact consumers.


Any option that we choose will break this policy in some way. If we drop the mentioned architectures, we’re directly breaking this policy; however, switching to bookworm will significantly change the supported operating systems (and their versions). We want to discuss the best path forward here.


It’s important to clarify that we’re not in favor of combining those two options to reduce user confusion. Whatever option we decide on will be applied to both the master branch and the release branches. We want to avoid situations where we randomly start and stop creating release artifacts for different branches and architectures.


Thanks,


Jeremy Rickard and Marko Mudrinić  // on behalf of SIG Release



[1] https://github.com/docker-library/official-images/pull/17640/files#diff-262b5154873802fd4abff07283ae9bd83663325957229799a17e8262a5268b27

[2] https://github.com/docker-library/golang/issues/536

[3] https://github.com/docker-library/official-images/blob/a22705bb8eb8e84123c08a62de343a5e8a98ab61/library/buildpack-deps#L17

[4] https://sources.debian.org/src/glibc/

[5] https://github.com/kubernetes/sig-release/blob/master/release-engineering/platforms/guide.md#deprecating-and-removing-supported-platforms



abdul...@gmail.com

Oct 4, 2024, 7:51:52 PM
to kubernetes-ann...@googlegroups.com, d...@kubernetes.io, kubernete...@googlegroups.com, kubernetes-sig-release
I find s390x an important architecture to support, as many customers are now returning to the mainframe for various reasons, such as the environmentally friendly LinuxONE boxes.

Regards
Abdul


Manjunath Kumatagi

Oct 5, 2024, 4:20:55 AM
to dev, abdul...@gmail.com, d...@kubernetes.io, kubernete...@googlegroups.com, kubernetes-sig-release, kubernetes-ann...@googlegroups.com
Same here, I consider ppc64le to be a crucial architecture and a strong alternative for running cloud-native workloads. 

Thanks,
Manjunath Kumatagi.

Antonio Ojea

Oct 6, 2024, 11:27:53 AM
to manjun...@gmail.com, dev, abdul...@gmail.com, kubernete...@googlegroups.com, kubernetes-sig-release, kubernetes-ann...@googlegroups.com
I think it is worth clarifying something: this is about supporting the building of artifacts, not about the project supporting those architectures. AFAIK there is no automated testing on anything that is not amd64; some jobs were added recently for arm64, IIRC, but the other architectures, if they are tested at all, are tested outside the project. If some of these architectures fail, we don't know about it, and despite that, I doubt we would block a release or development because of it.

On the other hand, libc compatibility is a real problem that impacts all existing users; we have already suffered from it in the project:
https://github.com/kubernetes/test-infra/pull/31447

We need to put this in proportion. We want to provide as much support as we can, but if trade-offs have to be made, we need to favor the majority and the less protected. libc will impact all platforms and types of users; on the other hand, mips64le, ppc64le, and s390x seem to impact only enterprise users, who can afford to rebuild the binaries since they most probably have support contracts ...

Jay Pipes

Oct 6, 2024, 12:12:28 PM
to Antonio Ojea, manjun...@gmail.com, dev, abdul...@gmail.com, kubernete...@googlegroups.com, kubernetes-sig-release, kubernetes-ann...@googlegroups.com
Well said Antonio.

We should make decisions based on impact to the largest proportion of users.

Best,
-jay


Manjunath Kumatagi

Oct 6, 2024, 1:08:36 PM
to Antonio Ojea, dev, abdul...@gmail.com, kubernete...@googlegroups.com, kubernetes-sig-release, kubernetes-ann...@googlegroups.com
As far as I know, the artifacts are cross-built, and the issue mentioned surfaced recently during the build of the kube-cross image [1] as part of the Golang update. If I’m correct, the kube-cross image doesn’t need to be a multi-arch fat manifest beyond amd64 (the architecture most build systems use) in order to generate the artifacts. I believe we can safely exclude the problematic architectures when building this image to unblock the process and generate the artifacts smoothly. I’ve made that change here [2], and the image built fine.

I understand your concern about the libc issue, which is a general issue that can happen on any platform and could arise later when we eventually move to bookworm, potentially causing the built artifacts to not work properly on distros like Ubuntu, which have very long extended support cycles. We need to find another way to handle this situation, such as building the kubelet binary statically (all other binaries are already static except this one) or building against the older libc version (though I’m not sure how easy that would be).

Lastly, for the ppc64le architecture, we regularly run tests (unit, conformance, and node) offline, post the results to testgrid[3], and collaborate with the community to address any issues found. Of course, there is room for improvement, both in terms of coverage and how these tests are run, and we will work on closing these gaps in the future.


Thanks,
Manjunath Kumatagi

Benjamin Elder

Oct 6, 2024, 3:28:23 PM
to Manjunath Kumatagi, Antonio Ojea, dev, abdul...@gmail.com, kubernete...@googlegroups.com, kubernetes-sig-release, kubernetes-ann...@googlegroups.com
> As far as I know, the artifacts were cross-built, and the issue mentioned surfaced recently during the build of the kube-cross image [1] as part of the Golang update. If I’m correct, we don’t need this kube-cross image to be a multi-arch fat manifest except for amd64 (which is the architecture most build systems use) to generate the artifacts. I believe we can safely exclude the problematic architectures while building this image to unblock the process and generate the artifacts smoothly. I’ve made that change here [2], and the image built fine.

If we can't even get support for the base build image, why are we shipping these? The ecosystem support is just not there.

> I understand your concern about the libc issue, which is a general issue that can happen on any platform, and could arise later when we eventually move to bookworm, potentially causing the build artifacts to not work properly on distros like Ubuntu, where they have very long extended support cycles. We need to find another way to handle this situation, such as building the kubelet binary statically (all other binaries are already static except this one) or building with the older libc version (though I’m not sure how easy that would be).

We already build from Debian stable (we're already intrinsically linked to Debian packages in other places, and releng is on the distributor list), which is unlikely to have glibc symbols that aren't available on supported distros. Users on even older distros, or glibc-less distros, can either use distro-supplied packages or build their own binaries.

Other than for CI testing / quality assurance, the default-built and hosted binaries are a convenience, not a necessity.

I don't think we should statically link kubelet just for this; we're not actually encountering this issue otherwise, and statically linking is a breaking change (e.g. to DNS resolution behavior).
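To make the DNS point concrete, here is a minimal sketch (illustrative only, not kubelet code) of the two resolver paths that differ between a dynamically linked cgo build and a static CGO_ENABLED=0 build:

package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
	defer cancel()

	// In a cgo build, the default resolver may call the system's
	// getaddrinfo, so glibc NSS modules configured in /etc/nsswitch.conf
	// are honoured.
	addrs, err := net.DefaultResolver.LookupHost(ctx, "example.com")
	fmt.Println("default resolver:", addrs, err)

	// A statically linked (CGO_ENABLED=0) binary always behaves like the
	// pure Go resolver below: DNS via /etc/resolv.conf plus /etc/hosts,
	// with no ability to load NSS plugins.
	pure := &net.Resolver{PreferGo: true}
	addrs, err = pure.LookupHost(ctx, "example.com")
	fmt.Println("pure Go resolver:", addrs, err)
}

That behavioral difference is exactly the kind of breaking change a switch to static linking would introduce for hosts that rely on NSS-based name resolution.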

> Lastly, for ppc64le architectures, we are regularly running tests (unit, conformance, and node) offline, posting the results in the test grid[3], and collaborating with the community to address any issues found. Of course, there is room for improvement, both in terms of coverage and how these tests are run, and we will work on closing these gaps in the future.

Yes ... all of which appears to be on IBM's vendor-controlled downstream infrastructure, which could also be running and hosting builds, and which to this day is not eligible for release-blocking status (https://git.k8s.io/sig-release/release-blocking-jobs.md), because it's still just posting unverified downstream results that the community has no ability to fix or debug.

------

> We need to put this in proportion, we want to provide as much support as we can, but if there is the need to make trade offs, we need to favor the majority and the less protected, libc will impact all platforms and types of users, on the other hand, mips64le, ppc64le, s390x seems to impact enterprise users only, that can afford to rebuild the binaries since most probably have support contracts ...

Exactly.

If you came to us today with a new architecture like RISCV, and it was in the same state, I guarantee the project would refuse to add it to the default builds. In fact, we have done this already, more than once.

So the question is how long do we continue to ship untested and relatively unsupported architectures, with a limited user base?

I think it's past time to focus on widely used architectures in the default build.
It costs us a fair bit of time and resources to cross compile, store/host, and maintain the builds for all of these.
We spend >1h just cross compiling on ~7 cores and ~50GB in CI, most of which is for architectures with no CI coverage.

We can still permit building for architectures that we do not build and ship by default.
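For a sense of where that cross-compile time goes, here is a toy sketch of the per-architecture fan-out; the real release builds go through the project's build scripts and kube-cross rather than anything like this, and the target list below is only an example:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

func main() {
	// Example matrix only; the project's actual platform list lives in the
	// build scripts, not here.
	arches := []string{"amd64", "arm64", "ppc64le", "s390x", "mips64le"}

	for _, arch := range arches {
		out := fmt.Sprintf("_output/kubectl-linux-%s", arch)
		cmd := exec.Command("go", "build", "-o", out, "./cmd/kubectl")
		cmd.Env = append(os.Environ(), "GOOS=linux", "GOARCH="+arch, "CGO_ENABLED=0")
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr

		start := time.Now()
		err := cmd.Run()
		fmt.Printf("%s: took %s (err=%v)\n", out, time.Since(start).Round(time.Second), err)
	}
}

Every extra GOARCH entry is another full compile of every binary, which is where the hours of CI time and the storage for the resulting artifacts come from.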


Dan Millwood

Oct 8, 2024, 4:23:15 AM
to dev, Benjamin Elder, Antonio Ojea, dev, abdul...@gmail.com, kubernete...@googlegroups.com, kubernetes-sig-release, kubernetes-ann...@googlegroups.com, Manjunath Kumatagi
Hello,

I have a few questions about the proposal:

Can I just confirm what is meant by artifacts here? Would it mean both executables such as kubelet, and images containing the api-server etc. that are used by kubeadm to create a basic cluster?
Would there still be scripts, Dockerfiles, etc. maintained by the community that could be used to build binaries and images for these architectures?
Would kubeadm built downstream need to be pointed at an image repository other than registry.k8s.io in order to be used in the future?

Thanks, Dan

Sebastien Leger

Oct 9, 2024, 3:10:47 AM
to dev, Dan Millwood, Benjamin Elder, Antonio Ojea, dev, abdul...@gmail.com, kubernete...@googlegroups.com, kubernetes-sig-release, kubernetes-ann...@googlegroups.com, Manjunath Kumatagi
Hello,

We've been working with IBM teams and clients (finance industry) using s390x. We're looking at ways to improve automated CI testing for different architectures, including linux/s390x. CI runners spinning up temporary s390x kind clusters is one of the alternatives.

Davanum Srinivas

Oct 9, 2024, 7:05:07 AM
to sle...@ripple.com, dev, Dan Millwood, Benjamin Elder, Antonio Ojea, abdul...@gmail.com, kubernete...@googlegroups.com, kubernetes-sig-release, kubernetes-ann...@googlegroups.com, Manjunath Kumatagi
Sebastien,

We are not looking to increase our exposure to these architectures more than we already have. We are looking for folks who use the current artifacts we ship as part of the kubernetes release in their production workloads.

-- Dims

Manoj Pattabhiraman

Oct 9, 2024, 12:57:40 PM
to dev, Jeremy Rickard, kubernete...@googlegroups.com, kubernetes-sig-release
Hi 

In the ASEAN region, we predominantly use s390x-based K8s artifacts and other vanilla OSS for some of the major startups (fintechs, stock trading, and other enterprise clients). Platforms like s390x are widely used in the financial and government sectors in this region for their performance and inherent security.

Thanks .. Manoj.  

Davanum Srinivas

Oct 9, 2024, 1:07:47 PM
to thiru...@gmail.com, dev, Jeremy Rickard, kubernetes-sig-release
Manoj,

Thank you for the feedback. Can you help identify the specific items from the Kubernetes release that you use? For example, from the latest 1.31.1 release, here's the set we released:
https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.31.md#downloads-for-v1311

How exactly are you building a kubernetes cluster with these artifacts? Which installer are you using?

thanks,
Dims

PS: it looks like you are an IBMer as well. please confirm.


Benjamin Elder

Oct 9, 2024, 2:24:52 PM
to dav...@gmail.com, thiru...@gmail.com, dev, Jeremy Rickard, kubernetes-sig-release
> Can I just confirm what is meant by artifacts here?  Would it mean both executables such as kubelet, and images containing the api-server etc, that are used by kubeadm to create a basic cluster?  

The files published in a release by the project. That includes binaries and container images and anything else advertised with a release.


> Would there still be scripts and Dockerfiles etc maintained by the community that could be used to build binaries and images for these architectures?

To some extent. We would not be actively building these.

> Would kubeadm built downstream need to be pointed at a different image repository other than registry.k8s.io   in order to be used in future?

Yes, unless the downstream distro chose to patch it. The registry used is already configurable in kubeadm with a single config field, for use cases like airgapping and custom distros.

For prior art see: https://kubernetes.io/blog/2020/01/15/kubernetes-on-mips/

Tim Hockin

Oct 9, 2024, 4:06:31 PM
to benth...@google.com, dav...@gmail.com, thiru...@gmail.com, dev, Jeremy Rickard, kubernetes-sig-release
I think the line to take is something like:
* we ACTIVELY SUPPORT platforms A, B, C -- we regularly test them and build releases for them
* we PASSIVELY SUPPORT platforms D, E, F -- we will consume and publish 3rd party test results for them, but we do not test them or build releases for them
* we OPPORTUNISTICALLY SUPPORT other platforms -- we will take PRs to make Kubernetes build/run on them, but we do not test them or build releases for them

Manjunath Kumatagi

Oct 9, 2024, 10:39:11 PM
to dev, Jeremy Rickard, kubernete...@googlegroups.com, kubernetes-sig-release
After attending the SIG Release biweekly call on Tuesday, here are my key takeaways:

- The mentioned issue is no longer a blocker for the release, given the workaround mentioned here[1]
- The main concern raised was about the usage of these published artifacts, and it was acknowledged that there is no standard way to track the statistics for released items like packages or binaries, though it is possible for container images.

SIG Release team,
In my opinion, it’s worth the team reviewing the usage statistics for all supported architectures across all supported releases. This would help gauge how many users might be surprised by the decision to remove an architecture. While the data may not be 100% accurate, it's better than having none at all.


Thanks,
Manjunath Kumatagi

Manjunath Kumatagi

Oct 11, 2024, 3:16:39 AM
to dev, Jeremy Rickard, kubernete...@googlegroups.com, kubernetes-sig-release

Some direct consequences of implementing this proposal are that it will break several open source projects that consume these artefacts directly, such as Calico[1], kcp[2], etc. All other repositories[3] that currently consume the published k8s artefacts will also break immediately. As seen with the Calico and kcp examples above, there could be many other affected projects (we would have to try all combinations of regexes to get the full list), which is hard to quantify. The net result would be the creation of a parallel ecosystem for the affected architectures, taking the foundation out of a structure that has taken years to build.

Attempts have been made in the past to enable CI - see [4] and [5] - however, they couldn't reach the desired result.

Instead of turning off these architectures, the preference would be to continue building them using this interim solution (https://github.com/kubernetes/release/pull/3779#pullrequestreview-2350571526).

In the medium term, IBM management is willing to explore picking up the infrastructure costs of performing cross-builds for ppc64le and s390x architectures. Our intent would be to supplement the sig-release team with members to get these architectures to be release-informing, with the hope that it would be a stepping stone to getting them to be release-blocking.

References: 

[1] https://github.com/search?q=org%3Aprojectcalico+%2Fdl%5C.k8s%5C.io%2F&type=code

[2] https://github.com/search?q=org%3Akcp-dev+%2Fdl%5C.k8s%5C.io.*%28ppc64le%7C390x%29%2F+NOT+language%3AMarkdown+&type=code
[3] https://github.com/search?q=%2Fdl%5C.k8s%5C.io.*%28ppc64le%7C390x%29%2F+NOT+language%3AMarkdown++NOT+language%3AHTML+&type=code

[4] Attempt 1 (2016): On reaching out to the community to enable Power in CI, we realised that it was running only in a Google-controlled GCE/GKE CI environment (https://github.com/kubernetes/kubernetes/issues/25730#issuecomment-220414656); we were asked to run CI in our own IBM environment (https://github.com/kubernetes/kubernetes/issues/25730#issuecomment-444147677), and that is what is being done right now (https://prow.ppc64le-cloud.cis.ibm.net/), reporting the results back to the k8s testgrid (https://testgrid.k8s.io/ibm).

[5] Attempt 2 (2022): When the community began working on migrating away from Google-controlled CI (https://github.com/kubernetes/k8s.io/issues/1469), we approached them again to offer our CI support. In response, we received feedback suggesting the donation of infrastructure ownership to the CNCF. This is when we paused our efforts. Additionally, there was a comment in the issue noting that the community had not yet fully transitioned away from Google's infrastructure (https://github.com/kubernetes/test-infra/issues/26717#issuecomment-1208718372).

Thanks,

Manjunath Kumatagi



Benjamin Elder

Oct 11, 2024, 12:51:07 PM
to manjun...@gmail.com, dev, Jeremy Rickard, kubernetes-sig-release
> In my opinion, it’s worth the team reviewing the usage statistics for all supported architectures across all supported releases. This would help gauge how many users might be surprised by the decision to remove an architecture. While the data may not be 100% accurate, it's better than having none at all.

As already mentioned in the SIG Release meeting: we have previously analyzed traffic to k8s.gcr.io in depth and discussed it at length in SIG K8s Infra over the past two years, and the majority of traffic (in the literal sense of the word, think 86+%) goes to AWS + GCP (mostly AWS). Those platforms only offer AMD64 and ARM64. We know that other architectures are already long-tail.

When we did that analysis it was simpler, because everything was on just a few backends; it's more involved today.

However, I really don't expect to see a significant difference looking today, the delta between the top images and ASNs versus the rest is staggering.

I don't think this data is really going to help us decide.
We can prove conclusively that it's a small portion of traffic, but how does that change the picture? There are a lot of ways to frame that.

On the other hand: significantly increased total build time, difficulty with the builds (qemu flakes...), lack of support from downstream dependencies, lack of release engineering bandwidth (much as I appreciate your contributions to the project), not meeting the platform expectations (https://github.com/kubernetes/sig-release/blob/master/release-engineering/platforms/guide.md), and limited user demand ... seem more important.

Even if there were plenty of users consuming these, those aspects would still be problematic, and previously built images are not being deleted.
For new images, users could pull them from a downstream build (like the blog post linked above); it wouldn't be the first time in the past year or so that we required some or all users to migrate where they download builds from.

I agree in principle with what Tim is suggesting.
We should have some platforms that are passively supported, that we actively accept bug fixes for, but upstream untested and not part of the default build.

When we dropped ARM32, we did not pull this data.
It was breaking the build, we didn't have testing or resources for it, and you do not _have_ to use project-provided builds.


> Some direct consequences of implementing this proposal are that it will break several open source projects that consume these artefacts directly such as Calico[1], kcp[2], etc., All other repositories[3] that currently consume the k8s published artefacts will also break immediately. As seen with the Calico, kcp exceptions above there could be many other projects (will have to get all combinations of regex to get the full list) that will get affected, that is hard to quantify. The net result would be to create a parallel ecosystem for the affected architectures, taking the foundation out of a structure that has taken years to build.

That's really not a hard requirement; they will just have to consume builds published by someone else, or make their own builds.


Frank Heimes

Oct 11, 2024, 12:51:07 PM
to dev, Jeremy Rickard, kubernete...@googlegroups.com, kubernetes-sig-release
I think it would be a great loss to no longer see new ppc64el or s390x releases, since these are indirectly but continuously tested (as part of MicroK8s) on both platforms and are known to be pretty stable. In this packaging format they are also the easiest entry into the Kubernetes world.

I personally would vote for keeping future ppc64el and s390x builds alive - especially since I'm noticing that the original issue can be overcome by moving to bookworm, and especially because IBM offers support, including in terms of build resources.

BR, Frank

Fox, Kevin M

Oct 11, 2024, 2:26:19 PM
to manjun...@gmail.com, benth...@google.com, dev, Jeremy Rickard, kubernetes-sig-release
Are there metrics on the package variants too? How many users are using rpms versus the alternative arches? The proposed change could potentially affect all rpm-based users.

Thanks,
Kevin


Davanum Srinivas

Oct 11, 2024, 4:29:18 PM
to frank....@canonical.com, dev, Jeremy Rickard, kubernetes-sig-release
Frank,

Does microk8s use unmodified binaries from kubernetes release artifacts?

thanks,
Dims


Frank Heimes

Oct 13, 2024, 3:02:09 PM
to Davanum Srinivas, dev
Hi Davanum, 
No, MicroK8s does not use unmodified (pre-built) binaries from upstream. MicroK8s is built from source and even includes some patches, which are listed here:
https://github.com/canonical/microk8s/tree/master/build-scripts/components/kubernetes/patches

BR, Frank

Davanum Srinivas

Oct 13, 2024, 5:44:58 PM
to frank....@canonical.com, dev
Frank,

Thank you! So it looks like nothing will change for MicroK8s. You will still be able to build and ship on the above-mentioned platforms.

As an aside, some of your listed patches may be interesting upstream as well, so please evaluate and open PRs for those you wish.

thanks,
Dims

Davanum Srinivas

Oct 14, 2024, 8:18:05 AM
to Manoj Pattabhiraman, dev, Jeremy Rickard, kubernetes-sig-release
Manoj,

Please also let us know the answer to the question we asked: which specific items from the Kubernetes release (or which Kubernetes distro) are you using?

-- Dims

On Sun, Oct 13, 2024 at 9:58 PM Manoj Pattabhiraman <thiru...@gmail.com> wrote:
Hi, 

The current deployment for a stock trading organisation is happening with K8s 1.31.1, and we are using Velero and Calico for storage and networking. Yes, I work for IBM - but we leave the deployment products and tools to be decided by our customers.


Jeremy Rickard

Oct 14, 2024, 2:28:54 PM
to dev, Manoj Pattabhiraman, Davanum Srinivas, dev, Jeremy Rickard, kubernetes-sig-release
Thanks everyone for all the feedback on this topic. 

We have sent an update out in a separate thread to ensure it's seen: https://groups.google.com/a/kubernetes.io/g/dev/c/12uRwQIi51U/m/QLEk7V_SAQAJ

Davanum Srinivas

Oct 14, 2024, 3:41:07 PM
to kaydiam, dev, Jeremy Rickard, Manoj Pattabhiraman, kubernetes-sig-release
Kay,

Please open an issue in the image-builder repository and let's deal with it there. The changes we are talking about here have not landed yet and are not happening right now...

On Mon, Oct 14, 2024 at 3:30 PM kaydiam <kay....@gmail.com> wrote:
Hi colleagues,

I'm not sure if this is related, but our pipeline fails to create ppc64le images: https://prow.k8s.io/view/gs/kubernetes-ci-logs/logs/cloud-provider-openstack-push-images/1845898968966369280
I haven't found any references to ppc64le in our code, but it seems to be related to the recent announcement.
Could you please clarify whether we need to update certain dependencies? Thanks!

Regards,

Davanum Srinivas

Oct 14, 2024, 3:42:55 PM
to Timothy Pearson, dev, Jeremy Rickard, Manoj Pattabhiraman, kubernetes-sig-release
Timothy,

We would love for you all to come and learn how these are built, etc. There is a lot of existing documentation; the best course of action is to join the Kubernetes Slack and the channels relevant to this conversation, like #release-management, #sig-testing, etc.

thanks,
Dims

On Mon, Oct 14, 2024 at 3:34 PM Timothy Pearson <tpearso...@gmail.com> wrote:
Just saw the official announcement.  That is disappointing.

We will likely be setting up build infrastructure at Solid Silicon to continue to produce the artifacts, as we have the ability to recompile the affected software on Bullseye.  Is it possible to retain the scripting needed to simplify this process on our end, and if so are there any official instructions for generating the artifacts that I can pass to the IT team?

Thanks!

Timothy Pearson

Oct 14, 2024, 3:43:14 PM
to dev, Jeremy Rickard, Manoj Pattabhiraman, Davanum Srinivas, dev, kubernetes-sig-release
Just to chime in here, both Raptor and Solid Silicon have been evaluating a large deployment of Kubernetes on POWER systems (POWER9+, not POWER8 or below), and I am aware of at least one other hosting provider that is even further along but is not yet publicly active.  Solid Silicon was planning to be online in the next few months, and Raptor was originally planning to follow in the middle of 2025.  However, this thread has caused significant concern and we may need to reevaluate those plans.

How can Raptor and Solid Silicon help keep these builds in-tree?  We don't want to see more official builds disappear for this platform, especially considering the product launch we have planned next year that would dovetail very nicely with Kubernetes and massive scale-out deployment.

Thank you!

