Re: [kubernetes/kubernetes] kubectl port-forward service/<service-name> with --pod-running-timeout option doesn't work as expected (#62821)


k8s-ci-robot

Apr 19, 2018, 4:08:12 PM
to kubernetes/kubernetes, k8s-mirror-cli-bugs, Team mention

@pragyamehta: Reiterating the mentions to trigger a notification:
@kubernetes/sig-cli-bugs

In response to this:

@kubernetes/sig-cli-bugs

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.


You are receiving this because you are on a team that was mentioned.
Reply to this email directly, view it on GitHub, or mute the thread.

fejta-bot

Jul 18, 2018, 4:15:12 PM

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

Morgan Bauer

Aug 7, 2018, 12:48:59 AM

/remove-lifecycle stale
/lifecycle frozen
This flag in combination with this command needs clarification.

fejta-bot

Nov 5, 2018, 12:46:43 AM

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

fejta-bot

Dec 5, 2018, 1:33:21 AM

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.

/lifecycle rotten

fejta-bot

Jan 4, 2019, 2:17:56 AM

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.


Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Kubernetes Prow Robot

Jan 4, 2019, 2:17:57 AM

Closed #62821.

Kubernetes Prow Robot

Jan 4, 2019, 2:17:57 AM

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

Sean Sullivan

Jan 6, 2019, 10:52:07 PM

/remove-lifecycle rotten

Stefan Hacker

Apr 28, 2020, 11:23:25 AM

@MHBauer Clarification would definitely be helpful.

Documentation says:

$ kubectl help port-forward
...
Options:
    ...
     --pod-running-timeout=1m0s: The length of time (like 5s, 2m, or 3h, higher than zero) to wait until at least one pod is running
...

But when I try to use it with a pending pod:

$ kubectl get all
NAME           READY   STATUS    RESTARTS   AGE
pod/mytest   0/1     Pending   0          104s
$ time kubectl port-forward --pod-running-timeout=10m mytest 1234:1234
error: unable to forward port because pod is not running. Current status=Pending

real	0m0,374s

No waiting is done.
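One workaround (not from the thread; a hedged sketch reusing the pod name mytest and ports from the example above) is to block on pod readiness separately with kubectl wait, which does poll pod status, and only then start the forward:

```shell
# Wait up to 10 minutes for the pod to become Ready, then forward.
# "mytest" and the ports are taken from the transcript above.
kubectl wait --for=condition=Ready pod/mytest --timeout=10m \
  && kubectl port-forward mytest 1234:1234
```

kubectl wait exits non-zero on timeout, so the && keeps port-forward from running against a pod that never came up.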



nneram

Mar 16, 2022, 9:23:16 AM

@MHBauer This flag is used by kubectl as the timeout when there is no pod at all. It calls a function that looks up an attachable pod for a given object (here the pod name, I guess), but if the pod doesn't exist it waits until a timeout with a default value:

https://github.com/kubernetes/kubernetes/blob/f06baf9f36a5d973ad84779bbca44f2fb4a93483/staging/src/k8s.io/kubectl/pkg/cmd/portforward/portforward.go#L97

https://github.com/kubernetes/kubernetes/blob/f06baf9f36a5d973ad84779bbca44f2fb4a93483/staging/src/k8s.io/kubectl/pkg/cmd/portforward/portforward.go#L321

https://github.com/kubernetes/kubernetes/blob/f06baf9f36a5d973ad84779bbca44f2fb4a93483/staging/src/k8s.io/kubectl/pkg/polymorphichelpers/attachablepodforobject.go#L32

So if a pod exists, the attachment works fine; otherwise it waits for a pod until the timeout expires. Then there is the early exit when the pod is in the Pending state:
https://github.com/kubernetes/kubernetes/blob/f06baf9f36a5d973ad84779bbca44f2fb4a93483/staging/src/k8s.io/kubectl/pkg/cmd/portforward/portforward.go#L394-L396

But the --pod-running-timeout option is not linked to that state, and it works as expected. Here is a service without a running pod:

apiVersion: v1
kind: Service
metadata:
  name: test
  namespace: default
spec:
  clusterIP: 10.96.141.131
  clusterIPs:
  - 10.96.141.131
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - name: http
    port: 3000
    protocol: TCP
    targetPort: 3000
  selector:
    app.kubernetes.io/instance: test
    app.kubernetes.io/name: test
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}

You will have:

/usr/bin/time -p kubectl -n default port-forward svc/test 8001:3000 --pod-running-timeout=5s
error: timed out waiting for the condition
Command exited with non-zero status 1
real 5.61
user 0.17
sys 0.08
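To confirm that the timeout path above is the no-backing-pod case, one can check (a small sketch reusing the selector labels from the Service YAML above) whether any pod matches the service's selector:

```shell
# An empty list here means port-forward has no pod to attach to and
# will poll until --pod-running-timeout expires.
kubectl -n default get pods \
  -l app.kubernetes.io/instance=test,app.kubernetes.io/name=test
```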

All that to say, it would be nice to have a new flag for the Pending state, analogous to --pod-running-timeout :)


