Hello kubefolx!
The Kubernetes images are already based on Debian bookworm; however, we're still building all Kubernetes binaries using Debian bullseye. This is done to maintain glibc compatibility, as Debian bookworm ships with glibc 2.36[4], which is not yet widely adopted.
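For context on what the glibc requirement means in practice: every dynamically linked binary records the glibc symbol versions it was linked against, and a binary built on bookworm may reference symbols up to GLIBC_2.36, which older hosts cannot satisfy. Below is a minimal sketch (not part of our release tooling; the file name glibcreq.go is made up for illustration) that uses Go's standard debug/elf package to list those requirements for an ELF binary:

// glibcreq.go: a hypothetical helper, not release tooling. It lists the
// distinct GLIBC_x.y symbol versions an ELF binary imports, using only
// the Go standard library.
package main

import (
	"debug/elf"
	"fmt"
	"log"
	"os"
	"sort"
	"strings"
)

func main() {
	if len(os.Args) != 2 {
		log.Fatalf("usage: %s <elf-binary>", os.Args[0])
	}
	f, err := elf.Open(os.Args[1])
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	// Imported (undefined) dynamic symbols carry the glibc version they
	// were linked against, e.g. "GLIBC_2.34". A fully static binary has
	// none, in which case ImportedSymbols returns an error.
	syms, err := f.ImportedSymbols()
	if err != nil {
		log.Fatal(err)
	}

	seen := map[string]bool{}
	for _, s := range syms {
		if strings.HasPrefix(s.Version, "GLIBC_") {
			seen[s.Version] = true
		}
	}
	versions := make([]string, 0, len(seen))
	for v := range seen {
		versions = append(versions, v)
	}
	sort.Strings(versions)
	for _, v := range versions {
		fmt.Println(v)
	}
}

Running it as "go run glibcreq.go /path/to/binary" against a bullseye-built dynamically linked binary should print nothing newer than GLIBC_2.31 (the glibc shipped in bullseye), while a bookworm-built one may print versions up to GLIBC_2.36.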
This presents a challenge for our current effort to bump Go versions on both the master branch and the supported release branches. We are currently working on mitigating this for the v1.32.0-alpha.1 release (which we plan to build with Go 1.23.0); however, this issue will also block all upcoming releases.
There are two options we can pursue here; however, it's very important to highlight that either one will apply to both the master branch and the supported release branches (v1.28 to v1.31):
We can migrate to bookworm, and as a result increase the required glibc version to 2.36 or newer (for all supported releases)
We can stop creating release artifacts for mips64le, ppc64le, and s390x, and continue using bullseye to build the Kubernetes artifacts, keeping the glibc requirement unchanged
If we don't apply this change to the actively supported release branches, our ability to bump Go versions on those branches will be hindered. That carries significant implications and security risks should a serious Go vulnerability be identified.
We're fully aware that whichever decision we make will introduce a breaking change for some users. At the moment, the SIG Release leads think that stopping the publication of release artifacts for the mips64le, ppc64le, and s390x architectures poses the least risk and affects the fewest users, compared to increasing the minimum glibc version.
First and foremost, we kindly ask for your understanding. We also want to hear your feedback on the preferred path forward, so please reply to this email or reach out to SIG Release via our communication channels. Because this is time-critical, we would like all feedback by Friday, October 11th, and will treat that date as a lazy-consensus deadline.
Our existing policy[5] states the following:
Please note that actively supported release branches are not affected by the removal. This ensures compatibility with existing artifact consumers.
Whichever option we choose will break this policy in some way. If we drop the mentioned architectures, we're breaking it directly; if we switch to bookworm, we significantly change the set of supported operating systems (and their versions). We want to discuss the best path forward here.
To reduce user confusion, it's important to clarify that we're not in favor of combining these two options. Whichever option we decide on will be applied to both the master branch and the release branches; we want to avoid situations where we seemingly at random start and stop creating release artifacts for different branches and architectures.
Thanks,
Jeremy Rickard and Marko Mudrinić // on behalf of SIG Release
Hi,

The current deployment for a stock trading organisation is running on K8s 1.31.1, and we are using Velero and Calico for storage and networking. Yes, I work for IBM, but we leave the choice of deployment products and tools to our customers.
On Thursday, October 10, 2024 at 1:07:47 AM UTC+8 Davanum Srinivas wrote:
Manoj,

Thank you for the feedback. Can you help identify the specific items from the Kubernetes release that you use? For example, from the latest 1.31.1 release, here's the set we released:

https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.31.md#downloads-for-v1311

How exactly are you building a Kubernetes cluster with these artifacts? Which installer are you using?

Thanks,
Dims

PS: It looks like you are an IBMer as well. Please confirm.

On Wed, Oct 9, 2024 at 12:57 PM Manoj Pattabhiraman <thiru...@gmail.com> wrote:
Hi,

In the ASEAN region, we predominantly use s390x-based K8s artifacts and other vanilla OSS for some major startups (fintechs, stock trading, and other enterprise clients). Platforms like s390x are widely used in the financial and government sectors in this region for their performance and inherent security.

Thanks,
Manoj
On Saturday, October 5, 2024 at 1:47:24 AM UTC+8 Jeremy Rickard wrote:
> In my opinion, it’s worth the team reviewing the usage statistics for all supported architectures across all supported releases. This would help gauge how many users might be surprised by the decision to remove an architecture. While the data may not be 100% accurate, it's better than having none at all.

Based on this thread and recent discussions, I have compiled usage data here: https://docs.google.com/document/d/1sXNiqtypt2BBt-4SnvkX1hCp1_4cJYy_hjqtDqcmGqg/edit?usp=sharing
(you must be a member of d...@kubernetes.io or kubernetes-sig-release to view)
TL;DR: for registry.k8s.io/kube-proxy:v1.30.x images (querying more would get slow and expensive; this should be a reasonable estimate):

AMD64: 93.11%
Arm64: 6.13%
PPC64LE: 0.34%
S390x: 0.43%
So <1% combined for these platforms.

On Mon, Oct 14, 2024 at 12:41 PM Davanum Srinivas <dav...@gmail.com> wrote:

Kay,

Please open an issue in the image-builder repository and let's deal with it there. The changes we are talking about here have not yet landed and are not happening right now...

On Mon, Oct 14, 2024 at 3:30 PM kaydiam <kay....@gmail.com> wrote:

Hi colleagues,

I'm not sure if this is related, but our pipeline fails to create ppc64le images: https://prow.k8s.io/view/gs/kubernetes-ci-logs/logs/cloud-provider-openstack-push-images/1845898968966369280

I haven't found any references to ppc64le in our code, but it seems to be related to the recent announcement. Could you please clarify whether we need to update certain dependencies? Thanks!

Regards,