cc @kubernetes/sig-cli-feature-requests
Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close. Prevent issues from auto-closing with a /lifecycle frozen comment. If this issue is safe to close now, please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity. Mark the issue as fresh with /remove-lifecycle rotten. Rotten issues close after an additional 30d of inactivity. If this issue is safe to close now, please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle rotten
/remove-lifecycle stale
/remove-lifecycle rotten
This issue is now hitting downstream projects (e.g. Istio). See this Istio discuss thread. The user didn't have a ~/.kube/config and got this inscrutable message:
```
Failed to wait for resources ready: Get http://localhost:8080/api/v1/namespaces/istio-system: dial tcp [::1]:8080: connect: connection refused
```
Fortunately the user noticed the localhost:8080 in there and was able to proceed.
I have the perception that localhost connections are very likely to succeed the first time they are queried (as opposed to off-host connections).
I'd propose that when kubectl (or other tools, perhaps?) specifically detects that there is no kubeconfig and the connection to the default localhost:8080 has been refused, it should output an error or warning saying something like:
Unable to connect to Kubernetes at localhost:8080. It's likely that you are trying to reach a different cluster; please configure it in a ~/.kube/config file.
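To make the proposal concrete, here's a minimal sketch in Go of the kind of check being suggested. `warnIfImplicitLocalhost` and `defaultHost` are hypothetical names for illustration only, not kubectl's actual implementation:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// defaultHost is the historical insecure fallback destination used when
// no kubeconfig is present (hypothetical constant for this sketch).
const defaultHost = "localhost:8080"

// warnIfImplicitLocalhost returns a warning message (and true) when no
// kubeconfig file exists and the client would silently fall back to the
// insecure default destination; otherwise it returns ("", false).
func warnIfImplicitLocalhost(kubeconfigPath, host string) (string, bool) {
	if _, err := os.Stat(kubeconfigPath); err == nil {
		return "", false // an explicit config exists; nothing to warn about
	}
	if host != defaultHost {
		return "", false // destination was chosen explicitly (flag/env)
	}
	msg := fmt.Sprintf("Unable to connect to Kubernetes at %s. "+
		"It's likely that you are trying to reach a different cluster; "+
		"please configure it in a ~/.kube/config file.", host)
	return msg, true
}

func main() {
	home, _ := os.UserHomeDir()
	path := filepath.Join(home, ".kube", "config")
	if msg, warn := warnIfImplicitLocalhost(path, defaultHost); warn {
		fmt.Fprintln(os.Stderr, "Warning:", msg)
	}
}
```

The check is deliberately conservative: it only fires when both conditions hold, so an explicitly configured localhost cluster would never trigger it.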
Maybe it should just always print a warning when there's no .kube/config file:
Warning: You have no ~/.kube/config; assuming a local cluster in insecure mode (localhost:8080). Please make a ~/.kube/config file to ensure you're talking to the cluster you intend to talk to.
Personally I think it'd be better to refuse to connect without an explicit destination, but almost certainly that'd be "fixing" a load-bearing bug.
Did anybody find a solution? I'm running on Fedora. Could somebody provide a sample ~/.kube/config file, please?
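For reference, a minimal ~/.kube/config has roughly this shape; the server address, file paths, and the cluster/user/context names below are placeholders you'd replace with your own cluster's details:

```yaml
apiVersion: v1
kind: Config
clusters:
- name: my-cluster
  cluster:
    server: https://203.0.113.10:6443        # your API server address
    certificate-authority: /path/to/ca.crt   # cluster CA certificate
users:
- name: my-user
  user:
    client-certificate: /path/to/client.crt  # client credentials
    client-key: /path/to/client.key
contexts:
- name: my-context
  context:
    cluster: my-cluster
    user: my-user
current-context: my-context
```

With a file like this in place, kubectl no longer falls back to the insecure localhost:8080 default.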
This is a kubectl issue
/transfer kubectl