Kubernetes deprecation policy

John Gardiner Myers

Jan 3, 2022, 3:41:49 PM
to kubernetes-si...@googlegroups.com
I would like to bring up a problem which I consider to be a direct
consequence of the Kubernetes deprecation policy.

Many users are effectively prevented from upgrading to Kubernetes 1.22
or later. This is because they depend on
kubernetes-sigs/aws-load-balancer-controller (AWS LBC) to manage their
cloud load balancers, as it supports features that they depend on and
which are not provided by the deprecated in-tree load balancer controller.

AWS LBC does not support Kubernetes 1.22 or later because it uses the
v1beta1 version of the Ingress API. Per the July 14 comment on
kubernetes-sigs/aws-load-balancer-controller#2050, the maintainers have
not migrated the controller to the v1 version of the Ingress API because
the v1 version is not supported by Kubernetes 1.18.

AWS has made a commitment to support their Kubernetes 1.18-derived
hosted service (EKS 1.18) past the time that the Kubernetes project
drops support for 1.18. This is a reasonable thing for them to do. If
they were to migrate the AWS LBC to the v1 Ingress API, it would need to
drop support for Kubernetes 1.18, as having a controller support
multiple API versions is impractical with the current apimachinery. As
the AWS LBC approvers are Amazon employees and dropping support for
Kubernetes 1.18 would have a negative impact on AWS’s ability to meet
their commitment to support EKS 1.18, it is rational for the AWS LBC
maintainers to not do so.

The end result is that a large portion of Kubernetes users are unable to
take advantage of the improvements in Kubernetes versions 1.22 and 1.23.

This situation is of the Kubernetes project’s own making. It is a direct
result of the removal of the deprecated v1beta1 Ingress API.

It is entirely reasonable for commercial entities to provide support for
their Kubernetes distributions and/or hosted services past the time when
the Kubernetes project drops support for their corresponding releases,
as long as they are willing to provide the necessary support for
security fixes.

As the Kubernetes support period has been extended to 14 months, the
deprecation policy allows beta and stable APIs to be removed while
Kubernetes versions that do not support the replacement are still under
the Kubernetes project’s support. This is untenable for controllers.

The API support times for beta and stable APIs in Rule #4a of the
Kubernetes Deprecation Policy are too short.  The minimum durations for
both should be extended to at least the 14 month support cadence. To
allow time for controllers to ship updates, this should be more like 18
months.

Tim Hockin

Jan 3, 2022, 4:23:38 PM
to John Gardiner Myers, kubernetes-si...@googlegroups.com
I am open to discussing extending deprecation windows, but I disagree
with your message:

> it would need to drop support for Kubernetes 1.18, as having a controller support multiple API versions is impractical with the current apimachinery.

I have seen (and written) programs that support multiple API versions.
Yes, it requires some in-out conversion, but I don't think it rises to
the level of "can't do it". And if I am wrong for some reason I have
not encountered yet, let's fix _that_?
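
To make that concrete, here is the rough shape of the in-out conversion I
mean, purely as a sketch: it assumes a client-go / k8s.io/api release that
still ships the networking.k8s.io/v1beta1 Ingress types, and the package
and function names are made up for illustration, not taken from any real
controller.

    package ingressconvert // illustrative name only

    import (
        netv1 "k8s.io/api/networking/v1"
        netv1beta1 "k8s.io/api/networking/v1beta1"
        "k8s.io/apimachinery/pkg/util/intstr"
    )

    // backendToV1 maps a v1beta1 backend (serviceName/servicePort) onto the
    // v1 shape (service name plus a named or numbered port).
    func backendToV1(in *netv1beta1.IngressBackend) *netv1.IngressBackend {
        if in == nil {
            return nil
        }
        out := &netv1.IngressBackend{
            Service: &netv1.IngressServiceBackend{Name: in.ServiceName},
        }
        if in.ServicePort.Type == intstr.Int {
            out.Service.Port.Number = in.ServicePort.IntVal
        } else {
            out.Service.Port.Name = in.ServicePort.StrVal
        }
        return out
    }

    // specToV1 converts the parts of an Ingress spec a typical controller
    // reads. v1 requires PathType, so paths that relied on the v1beta1
    // default are pinned to ImplementationSpecific; TLS and resource
    // backends are omitted for brevity.
    func specToV1(in netv1beta1.IngressSpec) netv1.IngressSpec {
        out := netv1.IngressSpec{
            IngressClassName: in.IngressClassName,
            DefaultBackend:   backendToV1(in.Backend),
        }
        for _, r := range in.Rules {
            rule := netv1.IngressRule{Host: r.Host}
            if r.HTTP != nil {
                http := &netv1.HTTPIngressRuleValue{}
                for _, p := range r.HTTP.Paths {
                    pathType := netv1.PathTypeImplementationSpecific
                    if p.PathType != nil {
                        pathType = netv1.PathType(*p.PathType)
                    }
                    http.Paths = append(http.Paths, netv1.HTTPIngressPath{
                        Path:     p.Path,
                        PathType: &pathType,
                        Backend:  *backendToV1(&p.Backend),
                    })
                }
                rule.HTTP = http
            }
            out.Rules = append(out.Rules, rule)
        }
        return out
    }

The write path is the same exercise in the other direction. It is not
nothing, but it is a few dozen lines of mechanical mapping, not a rewrite
of the controller.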


Monis Khan

Jan 3, 2022, 5:20:45 PM
to Tim Hockin, John Gardiner Myers, kubernetes-sig-architecture
AWS LBC / any controller should be able to support the use of either ingress API version with a small amount of wrapper code.

Two concrete examples of SIG Auth code that dynamically handles API version / availability skew: CSR v1 and v1beta1 APIs [1] and Token Request API [2].

[1] https://github.com/kubernetes/kubernetes/blob/3bce0502aac87f9907af0ef19df5935632ceafdf/staging/src/k8s.io/client-go/util/certificate/csr/csr.go#L127-L163
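
For Ingress specifically, the probe half of that pattern could look roughly
like the sketch below; this is illustrative only (made-up package and
function names), mirroring the "try v1, fall back on NotFound" check the
CSR helper uses.

    package ingressshim // illustrative name only

    import (
        "context"

        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // supportsIngressV1 reports whether the API server serves
    // networking.k8s.io/v1 Ingress. On a cluster that does not serve that
    // group/version (e.g. 1.18), the request comes back as a 404, which
    // surfaces here as a NotFound error.
    func supportsIngressV1(ctx context.Context, cs kubernetes.Interface) (bool, error) {
        _, err := cs.NetworkingV1().Ingresses(metav1.NamespaceAll).List(ctx, metav1.ListOptions{Limit: 1})
        switch {
        case err == nil:
            return true, nil
        case apierrors.IsNotFound(err):
            return false, nil
        default:
            return false, err
        }
    }

Once the controller knows the answer, it can wire up the matching
informer/client and do whatever client-side conversion it needs.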

John Gardiner Myers

Jan 3, 2022, 5:35:49 PM
to kubernetes-si...@googlegroups.com
On 1/3/22 1:23 PM, 'Tim Hockin' via kubernetes-sig-architecture wrote:
> I am open to discussing extending deprecation windows, but I disagree
> with your message:
>
> > it would need to drop support for Kubernetes 1.18, as having a controller support multiple API versions is impractical with the current apimachinery.
>
> I have seen (and written) programs that support multiple API versions.
> Yes, it requires some in-out conversion, but I don't think it rises to
> the level of "can't do it". And if I am wrong for some reason I have
> not encountered yet, let's fix _that_?

That defeats the entire point of designing the server to do API version conversions.



Clayton

Jan 3, 2022, 5:41:39 PM
to John Gardiner Myers, kubernetes-si...@googlegroups.com



Due to our decision to eventually deprecate and remove pre-v1 APIs, some clients do have to deal with servers serving any of the permutations {(v1beta1), (v1beta1, v1), (v1)}. Also, creation defaulting is not guaranteed to be equivalent between versions, so it will matter to some clients which version you submit.

Mo’s example is just one variation, different clients may need to handle skew differently.





Brendan Burns

Jan 3, 2022, 6:05:29 PM
to Clayton, John Gardiner Myers, kubernetes-si...@googlegroups.com
I'm wondering why AWS can't just release different binaries for different kubernetes versions.

This is a standard practice for handling this sort of situation. It's unlikely that there are major changes needed in the support of 1.18 clusters for load balancing, so if they cut a release branch, cherry-pick extreme bug fixes as necessary, and move forward with development that only supports 1.19+, it's hard for me to imagine why there is a problem.

From their releases page (https://github.com/kubernetes/cloud-provider-aws/releases), it seems that they are cutting a release per Kubernetes release cycle anyway.

It seems like there are two decent options for AWS here, either do version conversion in some handler, or just cut a final version that supports 1.18 and move on.

I don't think changing the Kubernetes deprecation policy will actually help here; it may make the windows longer, but someone will eventually hit the same problem when they decide to support a version of Kubernetes for longer than the community plans to.

--brendan


Andrew Kim

Jan 3, 2022, 8:13:09 PM
to Brendan Burns, Clayton, John Gardiner Myers, kubernetes-si...@googlegroups.com, n...@amazon.com, yy...@amazon.com
Adding to what others have already said, there is a KEP [1] on how external providers should be versioned w.r.t Kubernetes versions.

AWS LB controller and cloud-provider-aws are distinct projects but I think there's enough overlap to warrant a similar versioning policy for the AWS LB controller project.

+ nckturner M00nF1sh (maintainers of AWS LB controller) 

Andrew Sy  


Sandor Szuecs

Jan 4, 2022, 6:47:47 AM
to John Gardiner Myers, kubernetes-si...@googlegroups.com
Hi!

TBH, it's not Kubernetes' fault that AWS is not providing a solution yet.
You can also just switch to https://github.com/zalando-incubator/kube-ingress-aws-controller; we provide a flag to switch API versions.
I am just migrating our 150 clusters to v1, and it works great.

Best, sandor
--
Sandor Szücs | 418 I'm a teapot

Jordan Liggitt

Jan 4, 2022, 9:54:34 AM
to Brendan Burns, Clayton, John Gardiner Myers, kubernetes-si...@googlegroups.com
On Mon, Jan 3, 2022 at 6:05 PM 'Brendan Burns' via kubernetes-sig-architecture <kubernetes-si...@googlegroups.com> wrote:
> I'm wondering why AWS can't just release different binaries for different kubernetes versions.
>
> This is a standard practice for handling this sort of situation. It's unlikely that there are major changes needed in the support of 1.18 clusters for load balancing, so if they cut a release branch, cherry-pick extreme bug fixes as necessary, and move forward with development that only supports 1.19+, it's hard for me to imagine why there is a problem.

I agree, something along those lines is what I would expect/recommend if a component wanted to continue supporting pre-1.19 versions.

Kishor Joshi

Jan 4, 2022, 4:19:21 PM
to kubernetes-sig-architecture
I'm one of the maintainers of kubernetes-sigs/aws-load-balancer-controller, and I'm aware of this issue. It is a top priority for the upcoming v2.4.0 release. Thanks, John, for PR #2433; we are reviewing it and will merge it to the main branch.

Here is our current strategy:
  • Support networking.k8s.io/v1 starting in the v2.4.0 release. This will require k8s v1.19 or later
  • For k8s 1.18 and earlier versions, we will continue to maintain the v2.3.x releases for critical fixes until EKS 1.18 reaches EOL
Moving forward, we plan to be more proactive about meeting the requirements of users running self-managed Kubernetes on AWS by releasing versions of the load balancer controller more closely aligned with upstream Kubernetes releases. As with v2.4, future versions of the load balancer controller will also drop support for older versions of Kubernetes, and EKS customers will have to upgrade their Kubernetes version to use the new load balancer controller version. We plan to publish a version matrix (similar to the EBS CSI driver matrix) to make it easy for users to understand version compatibility.

If there are concerns, feel free to create or comment on kubernetes-sigs/aws-load-balancer-controller issues.

John Gardiner Myers

Jan 4, 2022, 6:25:49 PM
to kubernetes-si...@googlegroups.com
The issue is not just about
kubernetes-sigs/aws-load-balancer-controller. That project is just a
leading indicator of a wider problem.

And it's not just about clients that want to support Kubernetes versions
for longer than the Kubernetes project itself does. Kubernetes 1.19 and
later have a year of support per KEP-1498, yet beta APIs can be removed
after 9 months.

Implementing version conversion in a client and maintaining multiple
branches of a client are costs that the Kubernetes project is imposing
on clients (and, indirectly, on users). What benefit is being gained to
offset these costs?


Jordan Liggitt

Jan 4, 2022, 6:36:07 PM
to John Gardiner Myers, kubernetes-si...@googlegroups.com
The deprecation policy for beta APIs is 9 months or 3 releases, whichever is longer. Ingress v1beta1 was supported after being deprecated for 3 releases (1.19-1.21), and was removed in 1.22, ~1 year later.

Moving beta APIs through their lifecycle more rapidly helps avoid accumulating usage by clients and components that have long-term support expectations, and spurs progress towards GA APIs (which have longer-term stability guarantees).




John Gardiner Myers

Jan 4, 2022, 6:44:43 PM
to kubernetes-si...@googlegroups.com
On 1/4/22 3:35 PM, Jordan Liggitt wrote:
> The deprecation policy for beta APIs is 9 months or 3 releases, whichever is longer. Ingress v1beta1 was supported after being deprecated for 3 releases (1.19-1.21), and was removed in 1.22, ~1 year later.

Ingress v1beta1 was supported for 11 months and 9 days, which is shorter than 1 year.


> Moving beta APIs through their lifecycle more rapidly helps avoid accumulating usage by clients and components that have long-term support expectations, and spurs progress towards GA APIs (which have longer-term stability guarantees).

Releasing stable versions of APIs promptly helps avoid accumulating beta usage, but removing the beta versions rapidly thereafter does absolutely nothing to spur progress towards GA APIs.

Before an API is stable, clients have a Hobson's choice. It does not seem appropriate to punish them for it.


Davanum Srinivas

Jan 5, 2022, 6:45:21 AM
to Kishor Joshi, kubernetes-sig-architecture
Thanks Kishor!


Tim Bannister

Jan 5, 2022, 10:40:02 AM
to kubernetes-sig-architecture
We also blogged about the reasons for the policy change around moving on from beta APIs (deprecating and then removing them): https://kubernetes.io/blog/2020/08/21/moving-forward-from-beta/

It's a general policy and covers more than just Ingress (which graduated to stable just after we published that article).


Of course we'd prefer to have in-project add-ons, such as the AWS load balancer controller, to switch to stable APIs as soon as those graduate. Should we delay API removals because either in-project or third party code has not yet caught up? I think if we did, we'd end up never able to remove any API.

Tim

Eric Tune

Jan 5, 2022, 12:36:58 PM
to Tim Bannister, kubernetes-sig-architecture
John Gardiner Myers wrote:
> That defeats the entire point of designing the server to do API version conversions.

When multi-version support was designed for Kubernetes built-ins, very early in the project, the main concern was to reduce boilerplate within core K8s code while supporting multiple API versions within a single Kubernetes release. This just allowed controllers that are part of the same release bundle to be upgraded progressively and without downtime.

It was not designed to solve, and does not solve, the problem you have pointed out: that separately released controllers wanting to work with k different minor versions of Kubernetes need to either keep k small, support multiple API versions at runtime, or maintain multiple branches.


Justin Santa Barbara

Jan 5, 2022, 9:04:39 PM
to kubernetes-sig-architecture
> And if I am wrong for some reason I have not encountered yet, let's fix _that_?

As I understand the discussion, we already have the fix: we have written all this cross-version code in the apiserver. The question is why we stop using that code, which is what happens when we stop serving API versions. We know that it has a high cost: it breaks workflows for clients (both humans and systems). The value of doing so is unclear: it seems to be predominantly that it encourages usage of newer API versions.

A few alternatives:

1) Don't stop serving API versions (as quickly).  As I understand it, this costs us ~ nothing; we can't remove the old version code anyway because of the stored version in etcd.
2) Enhance client-go to do the conversion logic client-side - we continue to ship the deprecated API versions in client-go, we have the version conversion logic, so this could be a one-off enhancement to copy the conversion logic into the client and (somehow) use it there.
3) Backport newer API versions to older kubernetes versions.

I have no objection to us marking an API as deprecated, and I think the idea of returning warnings through kubectl is a nice approach to encourage migration. I suggest, though, that if working controllers are broken by our "encouragement", we need a better answer. I think any answer that isn't the Kubernetes API versioning system is a loss for the ecosystem.

My 2c is that continuing to serve the API versions is the best option, because it is the most user-friendly option.

Justin

Tim Bannister

Jan 6, 2022, 5:03:39 PM
to kubernetes-sig-architecture
Ingress might be a bit of a special case. No other API was in beta for as long a time; the ecosystem around (beta) Ingress was considerable well before its graduation to stable triggered the beta API's deprecation.

Here's a risk: if we promise long-term support for beta versions of stabilised APIs, then we effectively raise the bar for graduation from alpha to beta. I worry that if we might have to live with a beta API for a long time, it could make people more cautious about moving features to beta.

In retrospect, maybe the deprecation period for Ingress could have been longer? My gut feeling is that it should have been. Even at the end of that longer window, perhaps we could have left the beta API in for a few more versions, disabled and behind an off-by-default feature gate?
If that choice would have served cluster operators better, let's learn from the outcome. The policies we have in place set a minimum period for retaining deprecated APIs; nothing stops us as a project from extending deprecation periods beyond that minimum.

There's another thing we can do, as a project: provide better guidance, including sample code in multiple languages, for making a client of the API that does handle multiple versions and auto-detects the right one to use.
That's especially relevant for APIs where people might want to write custom controllers (Service and NetworkPolicy would be other good examples). If it's too hard to move from v1 to v2 of an API, people won't.
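
To sketch what I mean by auto-detection (Go here, but other languages would
follow the same shape; the names are made up for illustration), a client
could ask the discovery API which version of Ingress the cluster actually
serves, preferring v1:

    package ingressdetect // illustrative name only

    import (
        "fmt"

        apierrors "k8s.io/apimachinery/pkg/api/errors"
        "k8s.io/client-go/discovery"
    )

    // preferredIngressVersion returns the newest served group/version of
    // Ingress on this cluster. A controller can call it once at startup
    // (clientset.Discovery() satisfies discovery.DiscoveryInterface) and
    // wire up the matching typed client or informer.
    func preferredIngressVersion(dc discovery.DiscoveryInterface) (string, error) {
        for _, gv := range []string{"networking.k8s.io/v1", "networking.k8s.io/v1beta1"} {
            resources, err := dc.ServerResourcesForGroupVersion(gv)
            if err != nil {
                if apierrors.IsNotFound(err) {
                    continue // this group/version is not served by this cluster
                }
                return "", err
            }
            for _, r := range resources.APIResources {
                if r.Name == "ingresses" {
                    return gv, nil
                }
            }
        }
        return "", fmt.Errorf("no served version of ingresses.networking.k8s.io found")
    }

Pairing something like that with worked examples of client-side conversion
for the same API would go a long way for people writing custom controllers.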

With all that said, I'm conscious that it's the end and not the start of the deprecation period that has galvanised action. A third viewpoint is that the missing detail was copious, clear communication about the removal of any API that has reached beta, along with a call to action so that stakeholders can make plans from the moment the deprecation is finalized, rather than from the release announcement for the version that removes it.

(I'm a technical lead for SIG Docs; if you've got views on how we communicate changes, feel free to comment in our Slack channel or join in our next virtual SIG meeting).

Tim