cc @kubernetes/sig-cli-feature-requests
cc @vfreex you might be interested
It would be nice to have this feature. However, as @witrin said, it is hard to implement.
In order to capture the traffic targeting the remote port, a program running in the Pod is required to listen on that port. In the case of ssh -R, the sshd running on the remote side opens that port. But in the case of a Pod, I don't think it is a good idea to put an agent like sshd in there to do so.
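For reference, the ssh -R behavior being compared to looks like this; the host name and ports below are just placeholders:

```sh
# Remote (reverse) forwarding with OpenSSH: sshd on remote-host starts
# listening on port 8080 and tunnels every connection it accepts back to
# localhost:3000 on the machine that ran this command.
ssh -R 8080:localhost:3000 user@remote-host
```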
Any ideas?
I would think this would just be modifying the arguments passed to socat in the kubelet/cri. It should be possible to do.
You're doing this with socat? 😁 That's funny, because I thought about building a container running socat to get this done, so that I don't have to touch the configuration of the host's SSH daemon. If this is really as cheap to implement as @ncdc estimates, it would be such a helpful feature.
Yes, kubectl port-forward uses socat in the kubelet/cri. This would be a matter of modifying the protocol to be able to specify remote vs. local ports, and then adding support for the other direction.
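Roughly speaking, the two directions look like this; these are illustrative commands, not the literal kubelet code, and $POD_PID stands in for the pod sandbox's PID:

```sh
# Forward direction (what port-forward does today): the kubelet copies the
# kubectl stream onto a new connection made *to* the port inside the pod netns.
nsenter -t "$POD_PID" -n socat - TCP4:localhost:8080

# Reverse direction (what this issue asks for): *listen* on the port inside
# the pod netns and copy each accepted connection back onto the kubectl
# stream, so traffic can flow from the pod to the developer's machine.
nsenter -t "$POD_PID" -n socat TCP4-LISTEN:8080,fork,reuseaddr -
```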
@ncdc That would be very interesting. I would like to take a look.
/label sig/cli sig/node
/sig node
@vfreex How could I test this when I'm using GKE?
@witrin, do you mean testing my PR? If so, I will update my PR in a few days, after New Year's Day. The functionality is almost complete.
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
I see there is a PR; is there a workaround available to do this with a socat sidecar container?
Any progress on this? Would be awesome 🚀
Hi @bbzg,
This PR is actually a PoC, which is already working. However, the Kubernetes community refuses to add new features until SPDY is deprecated and migrated to WebSocket or HTTP/2.
@dims Thank you. Sorry for the late update; I have been busy recently. I've rebased the PR so that it can be tried with the latest version.
Since it involves many SIGs, it may not be easy to get approved. I am planning to open a KEP to track this issue. Do you have any suggestions, or anyone I can ask for help?
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
/remove-lifecycle stale
Any update or workaround? I would really like this feature.
Hi, I really need this feature. Is there any update?
Bumping to prevent stale. Is there any update here?
Any updates?
Not really a solution, but I've temporarily solved my problem using ngrok.io,
a third-party service that exposes your local ports to the public internet. It's not ideal from many perspectives, including latency and security (do you trust a third party?).
Many won't be able to accept it as a production solution, but it satisfies my need to debug things in a remote environment (which I rarely do anyway).
I still hope there will be an official solution that addresses this issue natively.
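In case it helps anyone else, the workaround boils down to something like this; the port number is just an example:

```sh
# Expose a local port (e.g. a debugger or dev server on 5005) through a
# public TCP tunnel. ngrok prints an endpoint such as tcp://0.tcp.ngrok.io:12345;
# point the in-cluster application at that host and port instead of localhost.
ngrok tcp 5005
```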
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Bump
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle rotten
@vfreex Has SPDY been deprecated and migrated to WebSocket or HTTP/2 yet?
@tomekit (#20227 (comment)): can you post your ngrok example details here?
Any update?
We're encountering this situation when trying to debug with PyCharm. Since PyCharm expects the app being debugged to connect to it, we would need to find a way to forward traffic from the container to the local machine. Is there any update on this?
I would also really appreciate this feature
Bump!
@nathan-beneke @guusvw Have you tried Telepresence? https://www.telepresence.io/
While Telepresence technically does the job, it doesn't really integrate with many tools, like Skaffold and PyCharm.
Telepresence does this job terribly.
More precisely, I want a solution that intercepts connection initiation to a specific service/deployment/pod port and forwards it to my local tool.
If kubectl loses its connection to the cluster, incoming connections to that port should go where they did previously (without any changes in the cluster).
Telepresence does not address these issues.
Tech-debt: (Sig-Architecture)
Category: Usability
Reason: Improve debuggability for editors, help attach debugger to containers
I wrote an article on how to do this with only FOSS tools: https://layerci.com/blog/container-tcp-tunnel/
TL;DR:
nc 127.0.0.1 8000 | kubectl exec -i web-pod tcpserver 127.0.0.1 8080 cat
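For readers skimming, here is the same one-liner with comments on what each piece does; this is just a reading of the command, and as the replies below note, it has limitations:

```sh
# Laptop side: nc opens a connection to the local service on 127.0.0.1:8000,
# and its stdio is piped into "kubectl exec -i", which streams it to the pod.
# Pod side: tcpserver (from the ucspi-tcp package) listens on 127.0.0.1:8080
# inside web-pod and runs "cat" with each accepted connection as its stdio.
nc 127.0.0.1 8000 | kubectl exec -i web-pod tcpserver 127.0.0.1 8080 cat
```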
Your solution does not work with https://github.com/GoogleContainerTools/distroless containers, or when the nc tool is missing inside the container.
A generic solution that works for any possible Docker image is only possible at the Kubernetes level.
Yes, but it's possible to build a statically linked "tcpserver" and copy it in if necessary - which should work for almost all use cases
Of course it's possible; you have a number of options.
In my company, we first wrote a Telepresence equivalent; now our nginx-based container returns redirects, which we process on our side.
But this ticket is not about workarounds; it is about a built-in, universal solution for reverse port-forwarding.
@ColinChartier I found the solution you proposed to really only work in a single-shot fashion, for sending back a single response. It doesn't work well for a PyCharm/PHPStorm/XDebug scenario where: 1) client and server need to communicate back and forth during a single connection, and 2) the remote end needs to be listening across multiple remote requests.
Instead, here's a solution that involves spinning up a sidecar SSH container + service in the cluster, and having SSH forward a remote port back locally:
https://github.com/GuyPaddock/inveniem-nextcloud-azure/blob/develop/launch_xdebug_proxy.sh
@ikogan Let me know if this type of solution would work for your PyCharm scenario. I wrote the script for PHP XDebug but there's nothing protocol-specific in the script.
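For anyone who doesn't want to read the script, the general shape of this kind of SSH-sidecar approach is roughly the following. This is a sketch, not the actual script; the deployment name, user, and ports are placeholders, and the sidecar's sshd needs GatewayPorts enabled if other containers must reach the forwarded port:

```sh
# 1. Reach the SSH sidecar's sshd through a normal local port-forward.
kubectl port-forward deployment/xdebug-ssh-sidecar 2222:22 &

# 2. Open a remote (reverse) forward through that tunnel: connections made to
#    port 9000 on the sidecar are carried back to 127.0.0.1:9000 locally.
ssh -p 2222 -N -R 9000:127.0.0.1:9000 tunnel@127.0.0.1
```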
Prior to my SSH-based approach, I tried crafting a solution with socat running remotely that communicates with a socat instance running locally, but it ended up really convoluted because:
1. It required a socat instance running on the developer machine.
2. The local end needed two connections: one to the remote socat instance (over a kubectl port-forward), and one to connect to the IDE.
3. Without forking, socat would quit after a single connection. But forking on the local end resulted in the equivalent of a busy-poll fork bomb, because it would just keep trying to connect to the remote end.
For those curious, here's how this bad idea looked:
https://gist.github.com/GuyPaddock/783957b1b6d7d89751d1796a2448ce5d
Hey, until there's a native solution out there, here's ktunnel - https://github.com/omrikiei/ktunnel
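Basic usage is something like the following; the service name and ports are illustrative, so check the ktunnel README for the current syntax:

```sh
# Create a service named "myservice" in the cluster and tunnel connections
# to its port 80 back to port 8000 on this machine.
ktunnel expose myservice 80:8000
```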
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
/remove-lifecycle stale
Still looking for an update.
I released an open source kubectl plugin to proxy Kubernetes Services on Thursday: https://github.com/soluble-ai/kubetap
I am anxiously awaiting movement on this issue, or in @vfreex's yet unmerged PR which has been on hold for more than a year:
@mikedanese and I have been talking about security- and safety-related changes to how kubelet-to-apiserver communication works, in light of last month's security vulnerability. Until we have some time to discuss how that might evolve, we want to limit the number of changes made to the related protocols.
For that reason I'm going to put a hold on this; we may decide to remove SPDY altogether, and this needs to be a part of that use case.
Is this still a reason not to pursue this extremely useful feature? I am concerned that by not merging this after four years, we are encouraging people (like me) to jump through hoops... that are on fire... over lava.
This feature would have been extremely useful for all sorts of local development scenarios. I really hope this gets implemented.
+1
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
+1
/remove-lifecycle stale
/assign
This project of mine supports container-to-local port forwarding as well as local-to-container and container-to-container: https://github.com/norouter/norouter
Hi :)
Does the kubectl debug command (https://kubernetes.io/docs/tasks/debug-application-cluster/debug-running-pod/#ephemeral-container) with ephemeral containers solve this?
If so, this issue can be closed as the feature is already in Alpha, right?
cc @verb
One additional thing: I understand the idea is to open a local port that forwards traffic to a sidecar container, for example, so I actually don't know whether you can create an ephemeral container and then port-forward into it.
Let me know if there's something else I'm missing here :)
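For reference, an ephemeral debug container is attached like this (pod, container, and image names are placeholders); it gives you an interactive process inside the pod, not a listener that forwards traffic back to your machine:

```sh
# Start an ephemeral container in the running pod and attach to it.
kubectl debug -it my-pod --image=busybox:1.28 --target=my-container
```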
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
Does the kubectl debug command (https://kubernetes.io/docs/tasks/debug-application-cluster/debug-running-pod/#ephemeral-container) with ephemeral containers solve this?
No.
/remove-lifecycle stale
+1 to this. We have Java processes backed by Kube pods with sidecars that pipe stdio to the original calling process. This works fine if the calling process is in Kube. This does not work if the calling process isn't in Kube space. Having reverse port forward would allow us to test these Kube-backed processes as part of the standard Java test suite, instead of jumping through manual hoops.
Having a kubectl native method to support remote port forwarding would be great
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
After 90d of inactivity, lifecycle/stale is applied
After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
Mark this issue or PR as fresh with /remove-lifecycle stale
Mark this issue or PR as rotten with /lifecycle rotten
Close this issue or PR with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
There's a kubectl ssh plugin that I think can be used with -R (though I haven't tried it): https://github.com/jordanwilson230/kubectl-plugins
Out of curiosity, has anyone used this project for reverse tunneling? https://github.com/omrikiei/ktunnel
@tomekit well, I obviously did :)
my work has been using it for a while with tilt.dev to add remote debugging to pycharm...
feel free to reach out to me if you have any questions
This tool seems to solve the issue for me as well:
https://github.com/omrikiei/ktunnel
🎉
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
After 90d of inactivity, lifecycle/stale is applied
After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
Mark this issue or PR as fresh with /remove-lifecycle stale
Mark this issue or PR as rotten with /lifecycle rotten
Close this issue or PR with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
would be good to have this reverse proxy setup for local development and debugging
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
After 90d of inactivity, lifecycle/stale is applied
After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
Mark this issue or PR as fresh with /remove-lifecycle stale
Mark this issue or PR as rotten with /lifecycle rotten
Close this issue or PR with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
After 90d of inactivity, lifecycle/stale is applied
After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
Mark this issue or PR as fresh with /remove-lifecycle stale
Mark this issue or PR as rotten with /lifecycle rotten
Close this issue or PR with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
After 90d of inactivity, lifecycle/stale is applied
After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
Mark this issue as fresh with /remove-lifecycle stale
Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
Ktunnel rescued my life! I won't fight with Telepresence or similar tools; please add the ktunnel functionality to kubectl.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
After 90d of inactivity, lifecycle/stale is applied
After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
Mark this issue as fresh with /remove-lifecycle stale
Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
After 90d of inactivity, lifecycle/stale is applied
After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
Mark this issue as fresh with /remove-lifecycle stale
Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
After 90d of inactivity, lifecycle/stale is applied
After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
Mark this issue as fresh with /remove-lifecycle stale
Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
mdaniel opened this issue on Jan 27, 2016 · 86 comments
In less than a year and a half, I look forward to celebrating this feature request having been open for a decade.
I've long since left k8s due to a lack of basic tooling like this, but I've grown an odd attachment to this thread. Seeing the consistent bumps via the "/remove-lifecycle stale" posts always leaves me with a feeling of nostalgia. Every so often when I see the bump, I wonder how @mdaniel and @vfreex must feel, if they haven't muted the thread long ago.
At one time I wanted k8s to get this feature, but after so many years I think I'd miss seeing this thread rise to the top of my inbox every so often.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
After 90d of inactivity, lifecycle/stale is applied
After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
Mark this issue as fresh with /remove-lifecycle stale
Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
After 90d of inactivity, lifecycle/stale is applied
After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
Mark this issue as fresh with /remove-lifecycle stale
Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
After 90d of inactivity, lifecycle/stale is applied
After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
Mark this issue as fresh with /remove-lifecycle stale
Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
IMHO we should keep this issue open. It would be a great improvement allowing many new use cases.
/remove-lifecycle stale
Almost 10 years
Please