@verb Hello - I’m the Enhancements lead for 1.14 and I’m checking in on this issue to see what work (if any) is being planned for the 1.14 release. Enhancements freeze is Jan 29th, and I want to remind you that all enhancements must have a KEP.
—
You are receiving this because you are on a team that was mentioned.
Reply to this email directly, view it on GitHub, or mute the thread.
The goal is to merge the design during the 1.14 timeframe and implement in 1.15.
/milestone v1.15
I've got a PR I made for 1.13. Depending on how much effort it takes to get it back into shape against @vladimirvivien's patch set, there's a small chance it can get in as alpha for 1.14?
I'd like to get this contributed too, so I have an in org ephemeral driver to test against.
kubernetes-csi/drivers#133
It works using the design documented in the pod inline KEP.
Can someone help me get a git repo for it?
Doh. This is ephemeral containers, not ephemeral volumes. Please ignore me. :)
Updated with link to KEP
Hey, @verb 👋 I'm the v1.15 docs Lead.
Does this enhancement require any new docs (or modifications)?
Just a friendly reminder we're looking for a PR against k/website (branch dev-1.15) due by Thursday, May 30th. It would be great if it's the start of the full documentation, but even a placeholder PR is acceptable. Let me know if you have any questions
Is this properly labelled as sig/auth?
@MAKOSCAFEE My goal for 1.15 is to merge the API change. We won't need any separate docs for k/website until there's an end-user UI which I don't expect to land in 1.15.
@verb are these the current PRs being tracked for this issue? Which ones are we monitoring for being merged by Thursday for Code Freeze?
kubernetes/kubernetes#10834
kubernetes/kubernetes#27140
kubernetes/kubernetes#59416
kubernetes/kubernetes#59484
@kacole2 the API change kubernetes/kubernetes#59416, but I have low confidence anything will merge by freeze.
/milestone clear
/milestone 1.16
@verb: You must be a member of the kubernetes/milestone-maintainers GitHub team to set the milestone. If you believe you should be able to issue the /milestone command, please contact your SIG lead and have them propose you as an additional delegate for this responsibility.
In response to this:
/milestone 1.16
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
@kacole2 @dchen1107 can we add this to the 1.16 milestone?
/milestone v1.16
Hey, @verb I'm the v1.16 docs release lead.
Does this enhancement (or the work planned for v1.16) require any new docs (or modifications)?
Just a friendly reminder we're looking for a PR against k/website (branch dev-1.16) due by Friday, August 23rd. It would be great if it's the start of the full documentation, but even a placeholder PR is acceptable. Let me know if you have any questions!
Hi @simplytunde, I could use your help figuring that out. Right now it looks like the feature might be available in the API without being part of kubectl. My expectation is that we'll want to include this in the release notes, but not document it on the website until there's an official utility.
Let me know if you disagree, and I'll let you know if kubectl support materializes sooner than expected. Thanks!
@verb is it possible to use, say, `kubectl edit` to add an ephemeral container easily?
@dims no, the only way to add an ephemeral container is by using the new `ephemeralcontainers` subresource of pod once the pod has been created.
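For illustration, hitting that subresource directly looks roughly like this. The pod name (`my-pod`) and container name (`debugger`) are hypothetical, the cluster needs the alpha feature gate enabled, and the exact kubectl invocation may vary by version — treat this as a sketch, not an official workflow:

```shell
# Write an EphemeralContainers body directly to the pods/ephemeralcontainers
# subresource. `kubectl replace --raw` bypasses the normal resource mapping
# and sends the body to the given API path.
kubectl replace --raw /api/v1/namespaces/default/pods/my-pod/ephemeralcontainers -f - <<'EOF'
{
  "apiVersion": "v1",
  "kind": "EphemeralContainers",
  "metadata": {"name": "my-pod"},
  "ephemeralContainers": [{
    "name": "debugger",
    "image": "busybox",
    "command": ["sh"],
    "stdin": true,
    "tty": true,
    "terminationMessagePolicy": "File"
  }]
}
EOF
```

After the container starts, you can attach to it with `kubectl attach -it my-pod -c debugger`.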
Update: API and kubelet PRs have merged. Ephemeral containers will be available as an alpha, API-only feature in 1.16. I've opened a KEP to sig-cli to begin a discussion about how debugging should be supported in kubectl.
I've published a proof-of-concept kubectl plugin to https://github.com/verb/kubectl-debug to gather feedback on the feature and kubectl implementation.
@verb code freeze for 1.16 is on Thursday 8/29. Are there any outstanding k/k PRs that still need to be merged for this to go Alpha? Looks like kubernetes/kubernetes#59416 is merged.
@kacole2 Only kubernetes/kubernetes#80847 and kubernetes/kubernetes#80644 are left to make this work as expected. I have a meeting tomorrow with @seans3 to go over these.
We should try to include the documentation-only kubernetes/kubernetes#79614 (cc @smarterclayton)
kubernetes/kubernetes#81936 and kubernetes/kubernetes#81678 would be nice to have, but won't prevent anyone from trying the feature.
Hey there @verb -- 1.17 Enhancements lead here. I wanted to check in and see if you think this Enhancement will be graduating to beta in 1.17?
The current release schedule is:
If you do, please list all relevant k/k PRs in this issue so they can be tracked properly. 👍
/milestone clear
hello @verb, I am wondering: if the resource limits of an ephemeral container are not set, what happens when the target pod reaches its resource (memory/CPU) limits? Would it be evicted by the eviction manager, since the stability of the node is more vital than the pod?
@markzhang0928 That's correct. Others are working on vertical pod scaling, so we may be able to support resources for ephemeral containers some day, but right now it's not possible.
@mrbobbytables This will remain alpha in 1.17, but it will see additional work. I will link k/k PRs here once I've scoped the work. Thanks!
It seems that the default capabilities for ephemeral containers are `CAP_SYS_ADMIN` and `CAP_SYS_PTRACE`, and the security policy for ephemeral containers is not configurable. How could I set up an ephemeral container with `CAP_NET_ADMIN`?
That's an interesting issue... I'd like to enable the feature on my multitenant clusters, but would be highly reluctant to give them CAP_SYS_ADMIN. That needs some restriction somehow.
@verb Are there any future plans for ephemeral containers currently? For example, what features will be added in k8s 1.17, 1.18...? Thanks!
@YangKeao It's not possible currently, but I'd like to support configurable securityContext in the future. I'll open an issue to discuss it in the near future.
@kfox1111 The current default is an empty securityContext, so `CAP_SYS_ADMIN` is disabled. I'd advise against enabling this feature on production clusters quite yet, though.
@shuiqing05 My top priorities are API correctness, testability and container namespace targeting. Don't expect deleting containers in this release, but it's a high priority. There's a rough roadmap in the description of this issue, but it needs to be expanded. Perhaps I'll split it off into separate issues to allow people to vote on them.
Hi @verb, for killing ephemeral containers, do you plan to add it in the next release? What concerns (security or otherwise) are raised about deleting the container?
@shuiqing05 Could you open an enhancement request describing your use case and assign it to me?
Hi, I've described my use case at kubernetes/kubernetes#84764 (comment).
Thanks!
Say my current ephemeral container terminated and the next day I want to run it again on that pod. Are these my only two options, or is there another way?
@dguendisch Currently these are the only options. It's something I would like to address soon, either by allowing deletes or restarts, but I need more use cases. Could you add your use case to (and maybe vote for) kubernetes/kubernetes#84764? Thanks!
Hey there @verb -- 1.18 Enhancements shadow here. I wanted to check in and see if you think this Enhancement will be graduating to beta in 1.18?
The current release schedule is:
Monday, January 6th - Release Cycle Begins
Tuesday, January 28th EOD PST - Enhancements Freeze
Thursday, March 5th, EOD PST - Code Freeze
Monday, March 16th - Docs must be completed and reviewed
Tuesday, March 24th - Kubernetes 1.18.0 Released
If you do, once coding begins please list all relevant k/k PRs in this issue so they can be tracked properly. 👍
@jeremyrickard Thanks for checking. We're not planning to graduate in 1.18.
To let everyone know what's going on for 1.18, we'll be focusing on:
- `kubectl debug`, to allow adding ephemeral containers to pods using standard tools.
- Namespace targeting, so an ephemeral container can join a target container's namespaces even when `shareProcessNamespace` is false.

There's still lots to do. Interested in helping out? Check out the list of linked issues in the description and let us know if you want to pitch in!
#88276 should go in before GA.
@tedyu Noted, but I don't think that will be a concern. After the 1.18 window closes we can begin talking about beta, but even that would take a while. I want to get some feedback and resolve some outstanding items before beta, in particular:
- Feedback from `kubectl alpha debug` users

> It seems that the default capabilities for ephemeral containers are `CAP_SYS_ADMIN` and `CAP_SYS_PTRACE`, and the security policy for ephemeral containers is not configurable. How could I set up an ephemeral container with `CAP_NET_ADMIN`?

> That's an interesting issue... I'd like to enable the feature on my multitenant clusters, but would be highly reluctant to give them CAP_SYS_ADMIN. That needs some restriction somehow.
^ needs fixing before beta, if not already. That may be one of those things you mentioned as setting security policy, but I wanted to make sure it's covered.
@kfox1111 yes, that's the security policy issue I mentioned as an issue to resolve prior to beta, but also, to be clear, ephemeral containers do not currently grant `CAP_SYS_ADMIN`.
Hey there @verb -- 1.19 Enhancements shadow here. I wanted to check in and see if you think this Enhancement will be graduating in 1.19?
In order to have this be part of the release:
The current release schedule is:
If you do, I'll add it to the 1.19 tracking sheet (http://bit.ly/k8s-1-19-enhancements). Once coding begins please list all relevant k/k PRs in this issue so they can be tracked properly. 👍
Thanks!
Hi @msedzins, no graduation in 1.19 scheduled, thanks.
Thank you @verb for letting me know.
Hi @verb do you need a hand on this?
I have some open PRs related to this issue:
#277 (comment)
I may have some cycles if help with other subtasks is needed.
@matthyx @tedyu Yes! Help would be great. I'm focusing right now on these broad areas:
- `kubectl debug` (#1441). SIG CLI has been very supportive and we're making good progress.

You're welcome to dive into these bigger issues and help make quicker progress, or if you prefer smaller scoped items, a few are listed in the issue description and I'm happy to discuss them further on Slack.
@verb can you clarify which components exactly need the feature gate to enabled for this to work?
@amrmahdi off the top of my head I think it's just kubelet and apiserver
Thanks! It would be helpful if the docs were updated to mention the gates; the TTL Controller docs are a good example: https://kubernetes.io/docs/concepts/workloads/controllers/ttlafterfinished/
I asked about which components care about the "EphemeralContainers" feature gate today in the "kubernetes-users" Slack channel.
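For reference, a rough sketch of what enabling the gate on those two components looks like when the control plane is configured via flags. Where the flags go differs by installer (kubeadm, systemd units, managed offerings), so treat this as illustrative:

```shell
kube-apiserver --feature-gates=EphemeralContainers=true ...
kubelet --feature-gates=EphemeralContainers=true ...
```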
@verb A quick question about providing strace capabilities in debugging: does the current `kubectl debug` feature allow a Kubernetes user to run strace against a pod they own without giving them full cluster admin capabilities?
We have a use case where developers have access to one or more namespaces where they are free to run their applications and they want to be able to run strace against their applications easily.
@garo It's not currently supported, but it's a planned improvement, kubernetes/kubernetes#53188.
@garo I've had a bit of success running `strace` by just installing it into the container's /tmp folder.
`./oc-inject --oc-command=kubectl -it <pod-id> -- strace -p <process-id>`
https://developers.redhat.com/blog/2020/01/15/installing-debugging-tools-into-a-red-hat-openshift-container-with-oc-inject/
This works with the default settings on OpenShift which allow ptrace for non-admin. It should also work with any Kubernetes container that has ptrace capability (if you can enable that for your users in a way that doesn't amount to 'cluster admin' capabilities).
Hi @kikisdeliveryservice, I plan on working on some of the linked FRs for 1.20, but no graduation. Thanks!
Thanks @verb !
related PR: #2029
@verb Sorry, I have the following problem when using ephemeral containers, and I'd like to ask you about it.
My k8s version is 1.16.9. I patched an ephemeral container into my pod with `kubectl replace`. The command executed successfully, and I can also see `ephemeralContainers` in my pod's YAML. But I can't see the ephemeral container running on my node, so I can't access the newly created ephemeral container through `kubectl exec`.
My ephemeral container is simple, like this:
```json
{
  "apiVersion": "v1",
  "kind": "EphemeralContainers",
  "metadata": {
    "name": "tomcat-6f4c4bdfd-nflgf"
  },
  "ephemeralContainers": [{
    "targetContainerName": "tomcat",
    "name": "tomcat-debug",
    "image": "busybox",
    "command": ["sh"],
    "resources": {},
    "terminationMessagePolicy": "File",
    "imagePullPolicy": "IfNotPresent",
    "stdin": true,
    "tty": true
  }]
}
```
Hi @zhhray, does `tomcat-debug` show up in `pod.status.ephemeralContainerStatuses`? (This is also visible in `kubectl describe pod` if it's not empty.) If it's accepted by the API server but a status doesn't appear in `ephemeralContainerStatuses`, then it could mean something is going wrong on the kubelet. Here are some things to check:

- Is the `EphemeralContainers` feature gate enabled on the kubelet?
- Do you see the kubelet attempting to start the container in the pod's event log? (visible in `kubectl describe pod`)
- Are you using a container runtime that supports `targetContainerName`?

FYI, starting with kubectl version 1.18 you can use `kubectl alpha debug` to create ephemeral containers, even if the cluster is older, and there's a kubectl plugin (also available in krew) that will work with older versions of kubectl.
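A quick way to run those checks from the command line (the pod name is hypothetical; substitute your own):

```shell
# Does the kubelet report a status for the ephemeral container?
kubectl get pod my-pod -o jsonpath='{.status.ephemeralContainerStatuses[*].name}'

# The Events section at the bottom shows whether the kubelet is
# attempting to start the container.
kubectl describe pod my-pod
```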
Thank you very much. I'm sorry for my mistake. After I enabled the feature gate on the kubelet, everything was OK.
@verb Sorry, I have the following problem when using ephemeral containers, and I'd like to ask you about it.
```
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.9", GitCommit:"a17149e1a189050796ced469dbd78d380f2ed5ef", GitTreeState:"clean", BuildDate:"2020-04-16T11:44:51Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.9", GitCommit:"a17149e1a189050796ced469dbd78d380f2ed5ef", GitTreeState:"clean", BuildDate:"2020-04-16T11:36:15Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
```
```
$ docker version
Client: Docker Engine - Community
 Version:           19.03.9
 API version:       1.40
 Go version:        go1.13.10
 Git commit:        9d988398e7
 Built:             Fri May 15 00:22:47 2020
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          19.03.9
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.13.10
  Git commit:       9d988398e7
  Built:            Fri May 15 00:28:17 2020
  OS/Arch:          linux/amd64
  Experimental:     true
 containerd:
  Version:          v1.2.13
  GitCommit:        7ad184331fa3e55e52b890ea95e65ba581ae3429
 runc:
  Version:          1.0.0-rc10
  GitCommit:        dc9208a3303feef5b3839f4323d9beb36df0a9dd
 docker-init:
  Version:          0.18.0
  GitCommit:        fec3683
```
I deployed an ephemeral container, and it worked.
```yaml
spec:
  containers:
  - env:
    - name: TOMCAT_USERNAME
      value: user
    - name: TOMCAT_PASSWORD
      valueFrom:
        secretKeyRef:
          key: tomcat-password
          name: tomcat
    - name: TOMCAT_ALLOW_REMOTE_MANAGEMENT
      value: "0"
    image: harbor.alauda.cn/3rdparty/bitnami-tomcat:9.0.37-debian-10-r31
    imagePullPolicy: IfNotPresent
    name: tomcat
    ports:
    - containerPort: 8080
      protocol: TCP
    resources:
      limits:
        cpu: 200m
        memory: 500Mi
      requests:
        cpu: 200m
        memory: 500Mi
    securityContext:
      allowPrivilegeEscalation: true
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /bitnami/tomcat
      name: data
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-6kjsp
      readOnly: true
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  ephemeralContainers:
  - command:
    - sh
    image: busybox
    imagePullPolicy: IfNotPresent
    name: tomcat-debug
    resources: {}
    stdin: true
    targetContainerName: tomcat
    terminationMessagePolicy: File
    tty: true
```
```yaml
ephemeralContainerStatuses:
- containerID: docker://ba9e18116eac13418f271e6907d2c0d70b46b3a0906f9663f32bbf8a74db6902
  image: busybox:latest
  imageID: docker-pullable://busybox@sha256:bde48e1751173b709090c2539fdf12d6ba64e88ec7a4301591227ce925f3c678
  lastState: {}
  name: tomcat-debug
  ready: false
  restartCount: 0
  state:
    running:
      startedAt: "2020-12-10T11:35:38Z"
```
I explicitly specified `targetContainerName`, but when I exec into the debug container, `ps -ef` can't see the processes of the target container. Why?
The problem above is on a k8s 1.16.9 cluster; the same operation in minikube (k8s 1.19.2) on my Mac works fine.
Hi @zhhray,
Support for container namespace targeting landed in 1.18, so it won't be available in a 1.16.9 cluster. Now that you mention it, I should double check that it works with containerd (it requires support from the CRI) now that dockershim is being retired.
It doesn't look like the separate minimum version for namespace targeting is called out explicitly on the kubernetes website. If you let me know which pages have been guiding you I'll make sure they're updated. Was it https://kubernetes.io/docs/tasks/debug-application-cluster/debug-running-pod?
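For anyone following along, namespace targeting on a 1.18+ cluster looks roughly like this. The pod and container names are hypothetical, and this sketch assumes your container runtime supports targeting:

```shell
# --target puts the debug container in the target container's process
# namespace, so the target's processes are visible from the debugger.
kubectl alpha debug -it my-pod --image=busybox --target=tomcat
```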
@dchen1107 @derekwaynecarr FYI, targeting beta for 1.21.
Yeah, it is. I was sad to see your reply, because I can see that the ephemeral container shares the network namespace, UTS namespace, and volumes with the target container; only the IPC namespace is not shared.
This seems like something we could fix. Could you open a feature request at https://issues.k8s.io?
You mean fix it in version 1.16? That version is a bit old, and this is indeed supported after 1.18. You have given me the answer: 1.16 does not support this feature, and that is enough. Thank you very much.
/milestone v1.21
This is Joseph, v1.21 enhancement shadow. For the enhancement to be included in the 1.21 milestone, it must meet the following criteria:
- The KEP must be merged in an implementable state (Done)
- The KEP must have test plans (Done)
- The KEP must have graduation criteria (Done)
- The KEP must have a production readiness review
Also starting 1.21, all KEPs must include a production readiness review. Please make sure to take a look at the instructions and update the KEP. There are a few steps needing completion.
Thank you!
Thanks for the update @verb!
Enhancements Freeze is 2 days away, Feb 9th EOD PST
The enhancements team is aware that a KEP update is currently in progress (PR #2244). Please make sure the PR merges before the freeze. For PRR-related questions or to boost the PR for PRR review, please reach out in slack #prod-readiness.
Any enhancements that do not complete the following requirements by the freeze will require an exception.
[DONE] The KEP must be merged in an implementable state
[DONE] The KEP must have test plans
[DONE] The KEP must have graduation criteria
[IN PROGRESS] The KEP must have a production readiness review
Greetings @verb ,
Since your Enhancement is scheduled to be in 1.21, please keep in mind the important upcoming dates:
Greetings @verb,
A friendly reminder that Code freeze is 5 days away, March 9th EOD PST
Any enhancements that are NOT code complete by the freeze will be removed from the milestone and will require an exception to be added back.
Please also keep in mind that if this enhancement requires new docs or modification to existing docs, you'll need to follow the steps in the Open a placeholder PR doc to open a PR against k/website repo by March 16th EOD PST
Thanks!
Hi @jrsapi,
sig-auth has requested a large API change, so we won't be graduating to beta in 1.21. Could you remove this from tracking for 1.21?
kubernetes/kubernetes#101034 merged, changing the API for ephemeral containers for the 1.22 release. We will allow the new API to soak for a release and pursue beta again in 1.23. In the meantime I plan to work on the outstanding feature requests and beta requirements.
I'll start planning the work in the coming weeks and start a slack thread with everyone who offered to contribute. Many thanks to everyone who reached out to volunteer! 👍
I would like to work on it. @verb
We have some scenarios that will use this feature.
I would like to help out with the efforts for 1.23 too @verb!
Thanks @pacoxu and @MadhavJivrajani! I've added you to a slack thread in #sig-node that we can use to coordinate.
The description of this issue has a list of issues in the "Future Work" section. These are all available, so feel free to take on one that looks interesting and assign it to yourself.
#1441 (comment) has open kubectl debug FRs. Most of these are owned and have open PRs but may be helped by collaboration.
Does the current implementation of ephemeral containers still require cluster-admin? It would be a shame if regular namespace admins can't debug their own pods.
There is no built-in role that grants it by default other than cluster-admin. A particular cluster can aggregate this permission into namespaced user roles like admin or edit if desired.
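As a sketch of what that aggregation could look like (the role name is hypothetical, and the verbs may need adjusting for your cluster's policy):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: ephemeral-containers-edit   # hypothetical name
  labels:
    # Aggregate this permission into the built-in "admin" and "edit" roles.
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
rules:
- apiGroups: [""]
  resources: ["pods/ephemeralcontainers"]
  verbs: ["get", "patch", "update"]
```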
Is it safe to give an untrusted namespace admin the permissions though, or does it give them too much privilege?
It entirely depends on whether the pod protection mechanisms you have in place guard the ephemeralContainers field and the pods/ephemeralcontainers subresource. The PodSecurity and PodSecurityPolicy admission plug-ins will guard those before the feature graduates from alpha, but there's no guarantee custom admission webhooks pay attention to those
Hmm... Not what I intended to ask. Restricting access to the api can be done lots of different ways. (opa, gatekeeper, custom webhook, etc). Let me try asking with some more detail:
Is the ephemeral container feature safe enough to give to a namespace admin who is not a cluster-admin, so they can debug their own workloads but can't use the functionality to break out of their workloads into other namespaces' workloads? At one point in the past, it required enough privilege that it was unsafe for non-cluster-admins to have access to it.
Hi, this enhancement has been granted an exception for v1.22. As per Savitha's message, please ensure that your PRs are merged by 12th July, end of day Pacific time. For this we are tracking the following PR:
/milestone v1.22
@kfox1111 There's been some confusion around this in the past. Do you recall which privilege made it unsafe for non-cluster-admins?
Ephemeral containers have never granted any privileges beyond what is described in the fields of `v1.EphemeralContainer`. The kubelet converts the `v1.EphemeralContainer` into a `v1.Container` and creates it using the same methods as the rest of the containers in the pod.
It's basically equivalent to having included the ephemeral container spec in the original `pod.Spec.Containers`, except that it doesn't affect pod-level conditions or resources, and you're allowed to share the PID namespace of another container in the same pod.
Whether it's safe depends on what's guarding the fields of ephemeral containers and what they allow. I expect many of the custom admission webhooks are ignoring these fields currently.
No. I'm just half-remembering the possibility of something from a long time ago. Or maybe it was an initial implementation restriction only allowing cluster-admins to use the feature, or something. I could be totally misremembering too.
Just wondering in general if there are any known gotchas in the implementation, since it's been so long since I've looked at it; I care about multitenant issues and want to prepare my clusters for the feature. :)
Hello @verb 👋, 1.22 Docs release lead here.
This enhancement is marked as ‘Needs Docs’ for 1.22 release.
Please follow the steps detailed in the documentation to open a PR against the dev-1.22 branch in the k/website repo. This PR can be just a placeholder and must be created by EOD today; the docs placeholder PR deadline was the 9th of July.
Also, take a look at Documenting for a release to familiarize yourself with the docs requirement for the release.
Hi @PI-Victor, docs were updated for this release in kubernetes/website#27988.
@verb my bad, I did remember your PR but didn't realize it was for this. Thank you!
/milestone v1.23
Hi @verb! 1.23 Enhancements team here. Just checking in as we approach enhancements freeze on Thursday 09/09. Here's where this enhancement currently stands:
KEP status: `implementable`
Starting with 1.23, we have implemented a soft freeze on production readiness reviews beginning on Thursday 09/02. If your enhancement needs a PRR, please make sure to try and complete it by that date!
For this enhancement it looks like we would need a PRR for the beta release, as well as an update to the kep.yaml for the correct latest release and stage.
Thanks!
Hi @salaxander, just to make sure I understand the PRR process: the KEP should be ready for PRR by tomorrow so that the PRR reviewers have time to review it before enhancement freeze, right? You're not expecting PRR review to be finished by tomorrow? Thanks!
Hi @verb - sorry for the confusion. The soft PRR freeze is something new we're trying, and we'll definitely iterate on it. We are hoping to have PRRs completed and reviewed by midnight PST tomorrow.
That said, this is a soft freeze so as long as things get as far as possible before then, we should be fine. If you need any help moving things along feel free to post in the #release-enhancements slack channel on K8s slack
@salaxander Whoops, I misunderstood then. I think #2892 is close, and we've been approved for beta once before, so maybe we'll be ok. I'll ping the PRR reviewer.
Hi @verb! 1.23 Docs team here.
This enhancement issue is listed as 'None required' for docs in the tracking sheet. Though docs are complete, I believe we need a small PR to update the feature gate to 'beta'. If I'm mistaken, let me know!
Otherwise, please follow the steps detailed in the documentation to open a PR against the dev-1.23 branch in the k/website repo. This PR can be just a placeholder at this time and must be created before Thu November 18, 11:59 PM PDT.
Thanks!
Hi. Can we have this feature as Beta in 1.23?
Yes, it's targeted for beta in 1.23 :)