@pragyamehta: Reiterating the mentions to trigger a notification:
@kubernetes/sig-cli-bugs
In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
/remove-lifecycle stale
/lifecycle frozen
This flag in combination with this command needs clarification.
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
Closed #62821.
@fejta-bot: Closing this issue.
/remove-lifecycle rotten
@MHBauer Clarification would definitely be helpful.
Documentation says:
$ kubectl help port-forward
...
Options:
...
--pod-running-timeout=1m0s: The length of time (like 5s, 2m, or 3h, higher than zero) to wait until at least one pod is running
...
But when I try to use it with a pending pod:
$ kubectl get all
NAME READY STATUS RESTARTS AGE
pod/mytest 0/1 Pending 0 104s
$ time kubectl port-forward --pod-running-timeout=10m mytest 1234:1234
error: unable to forward port because pod is not running. Current status=Pending
real 0m0,374s
No waiting is done.
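The fail-fast behavior can be sketched in Python (an illustration only, not kubectl's actual code; the function name is made up, and the message mirrors the error above minus the `error:` prefix that kubectl prepends):

```python
def check_pod_running(phase: str):
    """Mimic kubectl port-forward's early exit: return an error message
    as soon as the selected pod exists but is not in the Running phase,
    without consulting --pod-running-timeout at all."""
    if phase != "Running":
        return f"unable to forward port because pod is not running. Current status={phase}"
    return None

# A Pending pod trips the check immediately, so no waiting happens:
print(check_pod_running("Pending"))
```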
@MHBauer This flag is used by kubectl as a timeout when there is no pod at all. It calls a function that attaches to a pod given an object (here the pod name, I guess), but if the pod doesn't exist there is a timeout with a default value:
So if a pod exists, the attachment works fine; otherwise it waits for a pod until the timeout. Then you hit the early exit when the pod is in the Pending state:
https://github.com/kubernetes/kubernetes/blob/f06baf9f36a5d973ad84779bbca44f2fb4a93483/staging/src/k8s.io/kubectl/pkg/cmd/portforward/portforward.go#L394-L396
But the option --pod-running-timeout is not linked to that state and works as expected:
Here is a service without a running pod:
apiVersion: v1
kind: Service
metadata:
  name: test
  namespace: default
spec:
  clusterIP: 10.96.141.131
  clusterIPs:
  - 10.96.141.131
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - name: http
    port: 3000
    protocol: TCP
    targetPort: 3000
  selector:
    app.kubernetes.io/instance: test
    app.kubernetes.io/name: test
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
You will have:
/usr/bin/time -p kubectl -n default port-forward svc/test 8001:3000 --pod-running-timeout=5s
error: timed out waiting for the condition
Command exited with non-zero status 1
real 5.61
user 0.17
sys 0.08
All that to say, it would be nice to have a new flag for the Pending state, similar to --pod-running-timeout :)
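To make the distinction concrete, here is a small Python sketch (an illustration only, not kubectl code; `wait_for_pod`, the `pending_timeout` parameter, and the polling details are all assumptions) contrasting the current two-phase logic with a hypothetical flag that would also wait through the Pending phase:

```python
import time

def wait_for_pod(get_phase, pod_timeout: float, pending_timeout: float = 0.0,
                 poll: float = 0.05):
    """Sketch of port-forward's pod selection.

    get_phase() returns the pod phase ("" meaning no pod, "Pending", "Running").
    pod_timeout models --pod-running-timeout: it only bounds the wait for a
    pod to *exist*. pending_timeout models a hypothetical flag that would
    additionally wait while the pod is Pending.
    """
    deadline = time.monotonic() + pod_timeout
    while time.monotonic() < deadline:      # phase 1: wait for a pod to exist
        if get_phase():
            break
        time.sleep(poll)
    else:
        return "timed out waiting for the condition"

    pending_deadline = time.monotonic() + pending_timeout
    while get_phase() == "Pending":         # phase 2 (hypothetical): wait out Pending
        if time.monotonic() >= pending_deadline:
            return "unable to forward port because pod is not running. Current status=Pending"
        time.sleep(poll)
    return "forwarding"

# Current behavior: the pod exists but is Pending -> immediate error, no waiting.
print(wait_for_pod(lambda: "Pending", pod_timeout=1.0))

# With the hypothetical pending timeout, a pod that reaches Running in time succeeds.
phases = iter(["Pending", "Pending", "Running"])
def get_phase(last={"p": "Pending"}):
    last["p"] = next(phases, last["p"])
    return last["p"]
print(wait_for_pod(get_phase, pod_timeout=1.0, pending_timeout=1.0))
```

With `pending_timeout=0.0` (the default, matching today's behavior), the Pending check fires immediately; with a positive value, phase 2 keeps polling until the pod leaves Pending or the extra deadline passes.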