Hello kubefolx!
The Kubernetes images are already based on Debian bookworm; however, we're still building all Kubernetes binaries on Debian bullseye. This is done to maintain glibc compatibility: Debian bookworm ships with glibc 2.36[4], which is not yet widely adopted.
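For readers who want to check this on their own binaries: the glibc versions a dynamically linked ELF binary requires can be read from its dynamic symbol table. Below is a minimal sketch (not official tooling; the usage line and file path are illustrative) using Go's standard debug/elf package. A binary built on bullseye should top out around GLIBC_2.31, while a bookworm build may require symbols up to GLIBC_2.36:

```go
package main

// glibc-req prints the GLIBC symbol versions a dynamically linked ELF
// binary requires, highest first.
//
// Usage (illustrative): go run glibcreq.go /usr/bin/kubelet

import (
	"debug/elf"
	"fmt"
	"log"
	"os"
	"sort"
	"strconv"
	"strings"
)

func main() {
	if len(os.Args) != 2 {
		log.Fatalf("usage: %s <elf-binary>", os.Args[0])
	}
	f, err := elf.Open(os.Args[1])
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	syms, err := f.ImportedSymbols()
	if err != nil {
		// Fully static binaries have no dynamic symbol table and
		// therefore no glibc requirement at all.
		log.Fatalf("no dynamic symbols (static binary?): %v", err)
	}

	seen := map[string]bool{}
	var versions []string
	for _, s := range syms {
		if strings.HasPrefix(s.Version, "GLIBC_") && !seen[s.Version] {
			seen[s.Version] = true
			versions = append(versions, s.Version)
		}
	}

	// Sort numerically by major.minor so GLIBC_2.9 sorts below GLIBC_2.17.
	sort.Slice(versions, func(i, j int) bool {
		return verKey(versions[i]) > verKey(versions[j])
	})
	for _, v := range versions {
		fmt.Println(v)
	}
}

// verKey turns "GLIBC_2.36" into a comparable number such as 2036.
func verKey(v string) int {
	parts := strings.SplitN(strings.TrimPrefix(v, "GLIBC_"), ".", 3)
	major, _ := strconv.Atoi(parts[0])
	minor := 0
	if len(parts) > 1 {
		minor, _ = strconv.Atoi(parts[1])
	}
	return major*1000 + minor
}
```

Note that fully static binaries carry no glibc requirement at all; the compatibility concern applies to the dynamically linked artifacts, such as kubelet.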
This presents a challenge for our current effort to bump Go versions on both the master branch and the supported release branches. We are currently working on a mitigation for the v1.32.0-alpha.1 release (which we plan to build with Go 1.23.0), but this issue will also block all upcoming releases.
There are two options we can pursue here; it's important to highlight that either one will apply to both the master branch and the supported release branches (v1.28 to v1.31):
We can migrate to bookworm, and as a result increase the required glibc version to 2.36 or newer (for all supported releases)
We can stop creating release artifacts for mips64le, ppc64le, and s390x, and continue using bullseye to build the Kubernetes artifacts, keeping the glibc requirement unchanged
If we don't apply this change to the actively supported release branches, our ability to bump Go versions on those branches will be hindered. That carries significant security risk should a serious Go vulnerability be identified.
We're fully aware that whatever decision we make, we'll introduce a breaking change for some users. At the moment, the SIG Release leads think that ceasing to publish release artifacts for the mips64le, ppc64le, and s390x architectures poses the least risk and affects the fewest users, compared to raising the minimum glibc version.
First and foremost, we kindly ask for your understanding. We also want to hear your feedback on the preferred path forward, so please reply to this email or reach out to SIG Release via our communication channels. Because this is time critical, we would like all feedback by Friday, October 11th, which we will treat as a lazy consensus deadline.
Our existing policy[5] states the following:
Please note that actively supported release branches are not affected by the removal. This ensures compatibility with existing artifact consumers.
Either option breaks this policy in some way. If we drop the mentioned architectures, we break it directly; switching to bookworm, on the other hand, significantly changes the set of supported operating systems (and versions). We want to discuss the best path forward here.
To reduce user confusion, it's important to clarify that we're not in favor of combining these two options: whichever option we decide on will be applied to both the master branch and the release branches. We want to avoid situations where we start and stop publishing release artifacts for different branches and architectures seemingly at random.
Thanks,
Jeremy Rickard and Marko Mudrinić // on behalf of SIG Release
[2] https://github.com/docker-library/golang/issues/536
A direct consequence of implementing this proposal is that it will break several open source projects that consume these artifacts directly, such as Calico[1] and kcp[2]. Other repositories[3] that consume the published Kubernetes artifacts will also break immediately. As the Calico and kcp examples show, there could be many more affected projects that are hard to quantify (enumerating them would require trying every combination of search regexes to get the full list). The net result would be a parallel ecosystem for the affected architectures, pulling the foundation out from under a structure that has taken years to build.
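As an aside for downstream consumers weighing their exposure: a quick way to check whether a release still publishes a binary for a given architecture is to probe the well-known dl.k8s.io URL layout. The sketch below is illustrative only; the version string is an example, and the exact set of published paths varies by release and artifact:

```go
package main

// Probes dl.k8s.io for per-architecture kubectl binaries using the
// dl.k8s.io/release/<version>/bin/linux/<arch>/<binary> layout.
// A 404 for an architecture means no artifact is published there.

import (
	"fmt"
	"net/http"
)

func main() {
	version := "v1.31.1" // example release; substitute the one you consume
	for _, arch := range []string{"amd64", "arm64", "ppc64le", "s390x", "mips64le"} {
		url := fmt.Sprintf("https://dl.k8s.io/release/%s/bin/linux/%s/kubectl", version, arch)
		resp, err := http.Head(url) // follows the CDN redirect automatically
		if err != nil {
			fmt.Printf("%-9s error: %v\n", arch, err)
			continue
		}
		resp.Body.Close()
		fmt.Printf("%-9s %s\n", arch, resp.Status)
	}
}
```

Running this against a future release would immediately show which architectures start returning 404 under option two.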
Attempts have been made in the past to enable CI for these architectures (see [4] and [5]), but they did not reach the desired result.
Instead of turning these architectures off, our preference would be to continue building them under an interim solution.
In the medium term, IBM management is willing to explore picking up the infrastructure costs of performing cross-builds for the ppc64le and s390x architectures. Our intent would be to supplement the SIG Release team with members to get these architectures to release-informing status, with the hope that this would be a stepping stone to making them release-blocking.
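For context on why cross-builds are feasible without target hardware: the Go toolchain compiles for ppc64le and s390x from any host simply by setting GOOS/GOARCH. A hedged sketch follows (the package path and output layout are made up for illustration):

```go
package main

// Cross-compiles an example package for ppc64le and s390x from any
// host by setting GOOS/GOARCH, which is what makes cross-builds
// feasible on commodity x86_64 CI machines.

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	targets := []struct{ goos, goarch string }{
		{"linux", "ppc64le"},
		{"linux", "s390x"},
	}
	for _, t := range targets {
		out := fmt.Sprintf("bin/%s-%s/app", t.goos, t.goarch)
		cmd := exec.Command("go", "build", "-o", out, "./cmd/app")
		cmd.Env = append(os.Environ(),
			"GOOS="+t.goos,
			"GOARCH="+t.goarch,
			"CGO_ENABLED=0", // pure-Go builds sidestep the glibc question entirely
		)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintf(os.Stderr, "%s/%s failed: %v\n", t.goos, t.goarch, err)
			os.Exit(1)
		}
		fmt.Println("built", out)
	}
}
```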
References:
[1] https://github.com/search?q=org%3Aprojectcalico+%2Fdl%5C.k8s%5C.io%2F&type=code
[2] https://github.com/search?q=org%3Akcp-dev+%2Fdl%5C.k8s%5C.io.*%28ppc64le%7C390x%29%2F+NOT+language%3AMarkdown+&type=code
[3] https://github.com/search?q=%2Fdl%5C.k8s%5C.io.*%28ppc64le%7C390x%29%2F+NOT+language%3AMarkdown++NOT+language%3AHTML+&type=code
[4] Attempt 1 (2016): On reaching out to the community to enable Power in CI, we realised that CI was running only in the Google-controlled GCE/GKE environment. We were asked to run CI in our own IBM environment, which is what is being done right now, with results reported back to the Kubernetes Testgrid.
[5] Attempt 2 (2022): When the community began working on migrating away from Google-controlled CI, we approached them again to offer our CI support. In response, we received feedback suggesting we donate infrastructure ownership to the CNCF, at which point we paused our efforts. Additionally, a comment in the issue noted that the community had not yet fully transitioned away from Google's infrastructure.
Thanks,
Manjunath Kumatagi
Hi,
Our current deployment for a stock trading organisation runs Kubernetes 1.31.1, using Velero and Calico for storage and networking. Yes, I work for IBM, but we leave the choice of deployment products and tools to our customers.
Hi colleagues,
I'm not sure if this is related, but our pipeline fails to create ppc64le images: https://prow.k8s.io/view/gs/kubernetes-ci-logs/logs/cloud-provider-openstack-push-images/1845898968966369280
I haven't found any references to ppc64le in our code, but it seems to be related to the recent announcement. Could you please clarify whether we need to update certain dependencies? Thanks!
Regards,
Just saw the official announcement. That is disappointing.
We will likely be setting up build infrastructure at Solid Silicon to continue producing the artifacts, as we have the ability to recompile the affected software on Bullseye. Is it possible to retain the scripting needed to simplify this process on our end, and if so, are there any official instructions for generating the artifacts that I can pass to the IT team?
Thanks!