Re: [kubernetes/kubernetes] Improve default kubectl behavior when it doesn't know what cluster to talk to (#24420)


Brian Grant

Apr 12, 2017, 1:33:11 AM

cc @kubernetes/sig-cli-feature-requests



fejta-bot

Dec 22, 2017, 10:55:26 PM

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

Prevent issues from auto-closing with an /lifecycle frozen comment.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale

fejta-bot

Jan 21, 2018, 11:43:11 PM

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.

/lifecycle rotten
/remove-lifecycle stale

Brian Grant

Jan 22, 2018, 12:07:44 PM

/remove-lifecycle rotten

fejta-bot

Apr 22, 2018, 2:05:31 PM

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

Ryan McGinnis

Apr 23, 2018, 11:28:15 AM

/remove-lifecycle stale

fejta-bot

Jul 22, 2018, 11:44:37 AM

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.

/lifecycle stale

Nikhita Raghunath

Jul 23, 2018, 1:29:41 PM

/remove-lifecycle stale

Sean Suchter

Feb 12, 2020, 11:31:19 AM

This issue is now hitting downstream projects (e.g. Istio). See this Istio discuss thread. The user didn't have a ~/.kube/config and got this inscrutable message:

Failed to wait for resources ready: Get http://localhost:8080/api/v1/namespaces/istio-system: dial tcp [::1]:8080: connect: connection refused

Fortunately the user noticed the localhost:8080 in there and was able to proceed.

I have the perception that localhost connections are very likely to succeed the first time they are queried (as opposed to off-host connections).

I'd propose that when kubectl (or perhaps other tools as well) specifically detects that:

  • it is talking to localhost,
  • it is doing so only because that is the default, due to a non-existent ~/.kube/config, and
  • it cannot connect,

it should output an error or warning along the lines of:

Unable to connect to Kubernetes at localhost:8080. It's likely that you are trying to reach a different cluster; please configure it in a ~/.kube/config file.
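As a rough illustration of how those three checks might be combined (this is a sketch in Go, not actual kubectl code; the hard-coded localhost:8080 default, the kubeconfig path handling, and the message wording are assumptions for the example only):

    // Illustrative sketch only: detect the "default localhost" case and print a
    // friendlier error. None of this is kubectl's real code; names are made up.
    package main

    import (
        "fmt"
        "net"
        "os"
        "path/filepath"
    )

    // defaultHost mirrors kubectl's historical insecure default of localhost:8080.
    const defaultHost = "localhost:8080"

    func main() {
        kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
        _, statErr := os.Stat(kubeconfig)

        conn, dialErr := net.Dial("tcp", defaultHost)
        if dialErr != nil && os.IsNotExist(statErr) {
            // All three conditions hold: talking to localhost, only because no
            // ~/.kube/config exists, and the connection failed.
            fmt.Fprintf(os.Stderr,
                "Unable to connect to Kubernetes at %s. It's likely that you are trying to reach a different cluster; please configure it in %s.\n",
                defaultHost, kubeconfig)
            os.Exit(1)
        }
        if conn != nil {
            conn.Close()
        }
    }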



Daniel Smith

Feb 12, 2020, 3:43:56 PM

Maybe it should just always print a warning when there's no .kube/config file:

Warning: You have no ~/.kube/config; assuming a local cluster in insecure mode (localhost:8080). Please make a ~/.kube/config file to ensure you're talking to the cluster you intend to talk to.

Personally I think it'd be better to refuse to connect without an explicit destination, but almost certainly that'd be "fixing" a load-bearing bug.
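As a rough sketch of that "always warn" variant (again not actual kubectl code; the function name, path handling, and wording are assumptions), the check happens up front, before any request is made, rather than only after a failed connection:

    // Illustrative sketch only: warn whenever no kubeconfig is found, regardless
    // of whether the later connection to localhost:8080 happens to succeed.
    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func warnIfNoKubeconfig() {
        path := filepath.Join(os.Getenv("HOME"), ".kube", "config")
        if _, err := os.Stat(path); os.IsNotExist(err) {
            fmt.Fprintf(os.Stderr,
                "Warning: You have no %s; assuming a local cluster in insecure mode (localhost:8080). "+
                    "Please make a %s file to ensure you're talking to the cluster you intend to talk to.\n",
                path, path)
        }
    }

    func main() {
        warnIfNoKubeconfig()
        // ...normal command dispatch would follow here...
    }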

penkong

May 27, 2020, 3:08:42 AM

Did anybody find a solution? I'm running on Fedora. Could somebody provide a sample ~/.kube/config file, please?
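For reference, a minimal ~/.kube/config has roughly the shape below; the server address, the cluster/user/context names, and the credential paths are placeholders to replace with your own cluster's values (or copy the kubeconfig your installer generated, e.g. /etc/kubernetes/admin.conf on kubeadm clusters):

    apiVersion: v1
    kind: Config
    clusters:
    - name: my-cluster
      cluster:
        # Replace with your API server address; certificate-authority-data with
        # an inline base64 CA can be used instead of a file path.
        server: https://203.0.113.10:6443
        certificate-authority: /path/to/ca.crt
    users:
    - name: my-user
      user:
        client-certificate: /path/to/client.crt
        client-key: /path/to/client.key
    contexts:
    - name: my-context
      context:
        cluster: my-cluster
        user: my-user
    current-context: my-context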

Tim Bannister

Dec 21, 2022, 8:27:49 AM

This is a kubectl issue.
/transfer kubectl


