
Re: Transferring file to local machine when SSHing into a foreign box


Ángel González

May 12, 2012, 6:45:10 PM
On 12/05/12 19:45, Dotan Cohen wrote:
> I imagine something like this: The user would run a command such as
> the following: remoteServer$ cp2local someFile.c The SSH server on the
> remote host would then push the file to the SSH client running locally
> just as if scp had been used, but it would reuse the existing
> connection. The local SSH client would then write the file just as it
> would have had scp been used.
The big problem with that approach is that you're trusting your
credentials to the remote side.
If I ssh from A to B, and B is compromised, it shouldn't be able to
compromise A.
Can you provide an alternative usage without that hole?

It may be an issue to be solved in a client, which allowed you to switch
between console and file view (sftp).

>> You could reconfigure your current connection adding a tunnel, and then
>> use that for transferring the files, but you'd still need a local daemon
>> (eg. ftpd) where to drop them.
> I am sure that you recognise the added complexity for the user by way
> of the workaround that you mention. From a technical point of view,
> OpenSSH already has the components necessary to make this a simple
> procedure.
Sure. I was throwing out some ideas which could, perhaps, turn out to be
useful (eg. for doing it in a script).
Regards

_______________________________________________
openssh-unix-dev mailing list
openssh-...@mindrot.org
https://lists.mindrot.org/mailman/listinfo/openssh-unix-dev

Dotan Cohen

May 13, 2012, 3:52:59 AM
On Sun, May 13, 2012 at 1:45 AM, Ángel González <kei...@gmail.com> wrote:
> The big problem with that approach is that you're trusting your
> credentials to the remote side.
> If I ssh from A to B, and B is compromised, it shouldn't be able to
> compromise A.
> Can you provide an alternative usage without that hole?
>

Sure: just reuse the existing connection. Just like how sftp works.
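For context, OpenSSH's existing connection multiplexing already lets a second, client-initiated scp or sftp ride on an established session without re-authenticating. A sketch of the client-side configuration (the host name is a placeholder):

```
# ~/.ssh/config -- connection sharing: a second scp/sftp started on the
# local machine reuses the master connection instead of opening a new one.
Host remote.example.com
    ControlMaster auto
    ControlPath ~/.ssh/cm-%r@%h:%p
    ControlPersist 10m
```

With this in place, `ssh remote.example.com` opens the master, and a later `scp remote.example.com:someFile.c .` run locally reuses it, so the transfer stays client-initiated.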


> It may be an issue to be solved in a client, which allowed you to switch
> between console and file view (sftp).
>

That would be nice.


--
Dotan Cohen

http://gibberish.co.il
http://what-is-what.com

Dotan Cohen

May 13, 2012, 3:49:52 AM
On Sun, May 13, 2012 at 12:05 AM, Bert Wesarg
<bert....@googlemail.com> wrote:
>> I am sure that you recognise the added complexity for the user by way
>> of the workaround that you mention. From a technical point of view,
>> OpenSSH already has the components necessary to make this a simple
>> procedure.
>
> It's not. SSH does not monitor what commands you execute in the
> remote shell, and the shell does not know that it's a remote shell.
>

I am not suggesting that the local SSH client monitor what is being
typed into the shell. Rather, the SSH server on the remote host would
have a command cpLocal that would wrap scp with all the needed
connection information. If the user is not connected via SSH then
cpLocal could throw an error.

Ángel González

May 13, 2012, 6:06:20 AM
On 13/05/12 09:52, Dotan Cohen wrote:
> On Sun, May 13, 2012 at 1:45 AM, Ángel González <kei...@gmail.com> wrote:
>> The big problem with that approach is that you're trusting your
>> credentials to the remote side.
>> If I ssh from A to B, and B is compromised, it shouldn't be able to
>> compromise A.
>> Can you provide an alternative usage without that hole?
> Sure: just reuse the existing connection. Just like how sftp works.
???
If a command such as the proposed cp2local is able to write arbitrary
files in the local end*, it allows such compromise.

* For instance, a profile file run by your shell each time you log in, see
CVE-2010-2252.

Gert Doering

May 13, 2012, 7:06:30 AM
Hi,

On Sun, May 13, 2012 at 01:41:31PM +0300, Dotan Cohen wrote:
> I counter that the proposed cp2Local is no more of a security risk
> than scp because it _also_ requires the use of a username/password or
> keypair to explicitly express intent (establishing the initial SSH
> connection). Assuming the worst-case scenario that this feature is
> enabled and the user SSHes into a compromised box, the user will be
> only downloading unwanted, malicious files to his local machine, he
> will not be executing them automatically. This is no different than
> visiting a webpage. In fact, this is safer: web browsers _can_ run
> arbitrary code in the form of Javascript.

"unwanted, malicious files" could be .ssh/authorized_keys, .shosts,
.profile / .bashrc, etc. - which might not be executed right away, but
will give the attacker interesting options to attack the original client
machine.

[..]
> In short, I recognise the problem of allowing the remote machine
> access to write to your local machine. However, this has been a
> problem with many other technologies (www, email, ftp, etc.) and it is
> a solved issue in the general sense. That is, best practices and
> damage-mitigation strategies have already been established.

Actually, none of these technologies allow downloading arbitrary files
to the client machine, using server-controlled file names, just by
logging into a malicious server.

gert
--
USENET is *not* the non-clickable part of WWW!
//www.muc.de/~gert/
Gert Doering - Munich, Germany ge...@greenie.muc.de
fax: +49-89-35655025 ge...@net.informatik.tu-muenchen.de

Dotan Cohen

May 13, 2012, 9:59:29 AM
On Sun, May 13, 2012 at 2:06 PM, Gert Doering <ge...@greenie.muc.de> wrote:
> "unwanted, malicious files" could be .ssh/authorized_keys, .shosts,
> .profile / .bashrc, etc. - which might not be executed right away, but
> will give the attacker interesting options to attack the original client
> machine.
>

Let's assume that a compromised machine pushes a malicious file called
authorized_keys. It gets put in the user's Downloads directory, or, if
the download location is misconfigured, in $HOME. Now what? The user
would have to explicitly place that file in another location for it to
do any harm.


>> In short, I recognise the problem of allowing the remote machine
>> access to write to your local machine. However, this has been a
>> problem with many other technologies (www, email, ftp, etc.) and it is
>> a solved issue in the general sense. That is, best practices and
>> damage-mitigation strategies have already been established.
>
> Actually, none of these technologies allow downloading arbitrary files
> to the client machine, using server-controlled file names, just by
> logging into a malicious server.
>

I see the point about the file names. Actually, web browsers _do_
allow arbitrary file names by using an unrecognised (by the browser)
MIME type, though by default in that case the user must accept the
download. If the problem is the server-specified filename, then
perhaps a client-side confirmation is appropriate. How do you propose
that work, from a UI perspective?

John Olsson M

May 14, 2012, 3:02:01 AM
Hi,


> I imagine something like this:
> The user would run a command such as the following:
> remoteServer$ cp2local someFile.c
> The SSH server on the remote host would then push the file to the
> SSH client running locally just as if scp had been used, but it
> would reuse the existing connection. The local SSH client would
> then write the file just as it would have had scp been used.

You also need to consider the case where the user is *not* running a normal (like TCSH, Bash, ZSH, ...) shell on the server and where the file system is exposed as a virtual filesystem via SFTP (which might run in another chrooted directory than the SSH subsystem).

What would a path to a local file look like in this context?

I see this as a security hole since you suddenly get access to files via SSH which you do not get access to via SFTP (since it is chrooted)...


/John

Dotan Cohen

May 14, 2012, 5:55:57 AM
On Mon, May 14, 2012 at 10:02 AM, John Olsson M
<john.m...@ericsson.com> wrote:
> You also need to consider the case where the user is *not* running a normal (like TCSH, Bash, ZSH, ...) shell on
> the server and where the file system is exposed as a virtual filesystem via SFTP (which might run in another
> chrooted directory than the SSH subsystem).
>
> What would a path to a local file look like in this context?
>

The feature would obviously not be available in the SFTP context. For
one thing, the feature requires a remote server script / command
cpLocal which initiates the transfer and in SFTP there is no access to
scripts / commands.


> I see this as a security hole since you suddenly get access to files
> via SSH which you do not get access to via SFTP (since it is chrooted)...
>

If the user has access to read a file in a Bash shell then what is to
prevent him from copying the text of that file right from his
terminal? In fact, that is exactly what I have been doing, and it is
precisely the reason for suggesting the download feature.

Steffen Daode Nurpmeso

May 14, 2012, 6:23:30 AM
John Olsson M <john.m...@ericsson.com> wrote:

| > I imagine something like this:
| > The user would run a command such as the following:
| > remoteServer$ cp2local someFile.c
| > The SSH server on the remote host would then push the file to the
| > SSH client running locally just as if scp had been used, but it
| > would reuse the existing connection. The local SSH client would
| > then write the file just as it would have had scp been used.
|
| You also need to consider the case where the user is *not* running a normal (like TCSH, Bash, ZSH, ...) shell on the server and where the file system is exposed as a virtual filesystem via SFTP (which might run in another chrooted directory than the SSH subsystem).
|
| What would a path to a local file look like in this context?
|
| I see this as a security hole since you suddenly get access to files via SSH which you do not get access to via SFTP (since it is chrooted)...

As I understood him (unfortunately I dropped the mail after
I got the impression this will not make it anyway, sorry!) he
thought about something like

myself@local-host$ ssh myself@host-over-ssh
myself@host-over-ssh$ ~Copy_file path_on_local-host path(_on_host-over-ssh)

Why should this open a security hole, given that
myself@host-over-ssh has proper permissions for
path_on_host-over-ssh? E.g., the session can do

myself@host-over-ssh$ echo $(date) > path(_on_host-over-ssh)

The problem I see, however, is that there will be no filename
completion for at least path_on_local-host.

| /John

--steffen
Forza Figa!

John Olsson M

May 14, 2012, 6:33:42 AM
> If the user has access to read a file in a BASH shell then
> what is to prevent him from copying the text of that file
> right from his terminal? In fact, that is exactly what I
> have been doing and is quite the reason for suggesting the
> download feature.

You are missing my point. I'm talking about a node/computer/machine/... that offers a CLI interface via SSH on port 22 that is *not* a generic Bash-like shell. Instead it is a text-based management interface of some equipment (for instance a switch or a router). This interface does not operate on files; instead it offers configuration commands.

This node also offers an SFTP interface where a file system is exposed (some kind of virtual filesystem) where files can be uploaded and downloaded. Files in this virtual filesystem can of course be referenced from the SSH CLI interface (e.g. configuration data is read from a file etc.).

The SFTP service might run in a chrooted environment, whereas the SSH CLI interface cannot, because it must be able to access (behind the scenes) all of the physical filesystem.


If you now enable support so that you could transfer /etc/passwd via a built-in SSH command from a node that does not expose a filesystem in the shell, I see this as a security problem. That is, since the SSH CLI process can access a larger/different part of the filesystem, the proposed built-in SSH CLI file transfer command could then expose any file that the process can access, right?

I'm just raising this issue, since not all nodes that offer SSH access look and behave the same way. Not everything is a Bash shell. :)


/John


Gert Doering

May 14, 2012, 7:23:29 AM
Hi,

On Mon, May 14, 2012 at 12:23:30PM +0200, Steffen Daode Nurpmeso wrote:
> myself@local-host$ ssh myself@host-over-ssh
> myself@host-over-ssh$ ~Copy_file path_on_local-host path(_on_host-over-ssh)
>
> Why should this open a security hole, given that
> myself@host-over-ssh has proper permissions for
> path_on_host-over-ssh?

If you're just talking about from-local-to-remote, one thing that comes
to mind is "an evil remote host stealing your local files without your
doing".

So while I can understand the convenience factor of this, making it
properly secure (like "only operate out of a well-defined quarantine
folder on local-host, and do not permit absolute or relative path names
with '..' in them") is likely going to make this inconvenient enough
to then not use it...
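The quarantine constraints Gert describes can be sketched as a tiny path filter. This is only an illustration with a hypothetical helper name, not anything OpenSSH provides: accept nothing absolute and nothing containing a `..` component, so writes stay confined to a quarantine folder.

```shell
# Hypothetical helper: succeeds only for plain relative paths, so a
# transfer target could be confined to a quarantine folder on local-host.
safe_target() {
    case "$1" in
        '')                  return 1 ;;  # reject empty names
        /*)                  return 1 ;;  # reject absolute paths
        ..|../*|*/..|*/../*) return 1 ;;  # reject parent-dir components
        *)                   return 0 ;;
    esac
}
```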

gert

Peter Stuge

May 14, 2012, 9:02:52 AM
Dotan Cohen wrote:
> I understand that you feel that allowing the remote server to write
> (not execute) arbitrary files to the local machine is a security risk.

Correct. It's completely unacceptable in the general case.


> I also assume that you do not feel that scp being able to write
> arbitrary files to the local machine is not a security risk because it
> requires the explicit typing of a username and password, or better yet
> of a keypair. Please confirm or deny if my assumption is correct.

Incorrect. What you clearly do not understand is that scp being
invoked is an explicit action taken on the client, whereas something
happening automatically on the client in response to something being
invoked on the server is quite different.


//Peter

Dotan Cohen

May 14, 2012, 9:38:47 AM
On Mon, May 14, 2012 at 2:23 PM, Gert Doering <ge...@greenie.muc.de> wrote:
> If you're just talking about from-local-to-remote, one thing that comes
> to mind is "an evil remote host stealing your local files without your
> doing".
>
> So while I can understand the convenience factor of this, making it
> properly secure (like "only operate out of a well-defined quarantine
> folder on local-host, and do not permit absolute or relative path names
> with '..' in them") is likely going to make this inconvenient enough
> to then not use it...
>

Actually, I personally am most interested in the remote-to-local bit
and that is all that has been discussed so far.

I agree that the local-to-remote feature would be nice as well, but
given that the remote side would be initiating, it could be
problematic. How about initiating the transfer on the remote side, but
having the local side decide which file? Like this:

me@local:~$ ls
file1 file2
me@local:~$ ssh anotherMe@remote
anotherMe@remote:~$ ls
document1 document2
anotherMe@remote:~$ cpFromLocal
------------------------------------ <-- Here opens a "VIM-window" (see previous message)
| me@local: Please browse to file |
| $ ls |
| file1 file2 |
| $ send file1 |

anotherMe@remote:~$ ls
document1 document2 file1
anotherMe@remote:~$ exit
me@local:~$ ls
file1 file2
me@local:~$

Peter Stuge

May 14, 2012, 9:26:00 AM
Dotan Cohen wrote:
> As we have described it, the cpLocal feature would work something
> like this:
>
> me@local:~$ ls
> file1 file2
> me@local:~$ ssh anotherMe@remote
> anotherMe@remote:~$ ls
> document1 document2
> anotherMe@remote:~$ cpLocal document1
>
> ---------------------------------- <-- Here opens a "VIM-window" (see above)
> | me@local: Download document1? |
> | [y/N/r/l]? | <-- 'r' is Rename, 'l' is Choose Location
>
> anotherMe@remote:~$ exit
> me@local:~$ ls
> document1 file1 file2
> me@local:~$

Thanks for answering my question about user interface. Unfortunately
it seems that you are not so aware of how the systems you use
actually work, and your proposal further does not address the very
important concerns about a remote system being able to control a
local client.

What you call the "VIM-popup" can not be created by the local SSH
client because it is not operating the terminal in a windowed mode.

You would have to study a bit of systems programming, with particular
focus on how applications can interact with their controlling
terminal, to have a good background for finding a good yet viable
solution to this user interface problem.

And as I mentioned the above has a rather severe security problem.
The above can be abused by a remote server to make the logged-in
session unusable. Re-using your analogy with web browsers, think of
having a prompt enabled about approving cookies while navigating to
yahoo.com or cnn.com or some other site with lots of cookies and
banners. The client effectively becomes useless due to all the
prompts.

Now, the ~C command line was mentioned. This can be used to realize a
feature similar to what you ask for, but with some differences, and a
few technical problems to solve:

* difference: file transfer requests must always be initiated manually
on the client in the ~C command line.
* problem: the client command line does not know the cwd of the
remote shell, meaning that relative paths cannot be used, which is
rather the whole point of this proposal.

A further variant on this is where on the remote system a file
transfer is prepared with the semantics of the proposed cp2local
command, but no transfer begins until an explicit ~C command (or
perhaps ~D for download) is entered on the client to actually perform
the transfer. There will be no notification from the client that a
transfer is pending, because in fact nothing is pending, the transfer
has only been prepared on the remote side, and will still be
initiated only by the client, just that on the remote server there is
now the marker running which identifies what should be transfered.

A technical problem still remains; how all this should work in terms
of the SSH protocol and what exactly the marker command (cp2local)
does.
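The "prepare, then pull" scheme Peter outlines could look like this on the server side. All names here are hypothetical sketches of the marker mechanism only; the protocol work he mentions is not addressed:

```shell
# Hypothetical cp2local: records what should be transferred, pushes
# nothing. The client later initiates the actual copy itself.
cp2local() {
    marker="${CP2LOCAL_MARKER:-$HOME/.cp2local-pending}"
    for f in "$@"; do
        # store absolute paths so the client-side puller needs no
        # knowledge of the remote shell's cwd
        printf '%s\n' "$(realpath "$f")" >> "$marker"
    done
}
```

Storing absolute paths also sidesteps the cwd problem raised above: the client-initiated puller never needs to know the remote shell's working directory.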


//Peter

Dotan Cohen

May 14, 2012, 9:30:42 AM
On Mon, May 14, 2012 at 1:33 PM, John Olsson M
<john.m...@ericsson.com> wrote:
>> If the user has access to read a file in a BASH shell then
>> what is to prevent him from copying the text of that file
>> right from his terminal? In fact, that is exactly what I
>> have been doing and is quite the reason for suggesting the
>> download feature.
>
> You are missing my point. I'm talking about a node/computer/machine/... that offers a CLI interface via SSH on port 22 that is *not* a generic Bash-like shell. Instead it is a text-based management interface of some equipment (for instance a switch or a router). This interface does not operate on files; instead it offers configuration commands.
>

So the feature does not have to be exposed under those connection
conditions. Only expose the feature when connecting to a shell
instance.


> This node also offers an SFTP interface where a file system is exposed (some kind of virtual filesystem) where files can be uploaded and downloaded. Files in this virtual filesystem can of course be referenced from the SSH CLI interface (e.g. configuration data is read from a file etc.).
>

So this interface doesn't even need the feature. Terrific. I don't
understand what the problem is.


> The SFTP service might run in a chrooted environment, whereas the SSH CLI interface cannot, because it must be able to access (behind the scenes) all of the physical filesystem.
>

If the administrator of the machine provides the user with read access
to a file via Bash or any other shell in SSH, then the user can
provide himself with the contents of the file. There is currently no
straightforward method, but painstaking methods exist, and if that
(copy-pasta from the console) is not considered a security hole, then
the proposed method is not a security hole either. In other words, the
proposed method exposes no information that is not already exposed.


> If you now enable support so that you could transfer /etc/passwd via a built-in SSH command from a node that does not expose a filesystem in the shell, I see this as a security problem. That is, since the SSH CLI process can access a larger/different part of the filesystem, the proposed built-in SSH CLI file transfer command could then expose any file that the process can access, right?
>

How is this any different than the user typing "vim /etc/passwd" in
a shell via SSH and then copying the contents of the file from his
terminal?


> I'm just raising this issue, since not all nodes that offer SSH access look and behave the same way. Not everything is a Bash shell. :)
>

I appreciate that you raise the issue! I expect that there will be
contingencies that I do not think of, and it is better to work them
out right now. I thank you for providing your input, _especially_ when
it mentions things that I have not thought of.


Also, presumably there will be configuration options to disable this
feature, something like "PermitCpLocal".
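To be explicit about hedging: "PermitCpLocal" is only a proposed name in this thread, not a real OpenSSH option. One possible shape such an sshd_config knob might take, written opt-in as later suggested in the thread:

```
# Hypothetical sshd_config fragment -- PermitCpLocal does not exist
# in OpenSSH; this only sketches an opt-in configuration shape.
PermitCpLocal no          # off by default
Match User alice
    PermitCpLocal yes     # enabled only where explicitly wanted
```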

John Olsson M

May 14, 2012, 9:50:20 AM
> So the feature does not have to be exposed under those
> connection conditions. Only expose the feature when
> connecting to a shell instance.

How does the SSH server know this? Shall it be a configuration option per subsystem?


> So this interface doesn't even need the feature. Terrific.
> I don't understand what the problem is.

I do not want a situation where you suddenly out-of-the-box get file transfer capabilities for a node which does not intend to offer file transfer capabilities. Or for that matter gives access to another set of files compared to what is exposed via SFTP.

I really hope I have misunderstood what it is you want to do. :)


> If the administrator of the machine provides the user with read
> access to a file via BASH or any other shell in SSH, then the
> user can provide himself with the contents of the file. Their
> currently is no straightforward method, but there exist
> painstaking methods and if that is not considered a security
> hole (copy-pasta from console) then the proposed method is
> neither a security hole. In other words, the proposed method
> exposes no information that is not already exposed.

I'm just considering the case where the "shell" is not an ordinary Bash-like shell that offers a filesystem view. Thus I do not want any mechanism that allows for escaping out of the sandboxed environment offered by the "shell". For instance being able to ferret out files that are not possible to see via the "shell". Or, for that matter, place files in the filesystem.

Please also note that the logon procedure might not be based on uid and gid. The "shell" might have its own security model, based on custom LDAP attributes, which restricts what the user is capable of doing, and if the user is able to escape from the "shell", everything is done with the rights of the "shell" process.


> How is this any different than the user typing "vim /etc/password"
> in a shell via SSH and then copying the contents of the file from
> his terminal?

If it is possible to use an SSH built-in feature to escape from the "shell" and gain file access to the node's filesystem, transferring files in and out, that is a huge difference.


> I appreciate that you raise the issue! I expect that there will
> be contingencies that I do not think of, and it is better to
> work them out right now. I thank you for providing your input,
> _especially_ when it mentions things that I have not thought of.


Excellent! That is my intention. People are doing weird things, and all the computers out there are not just "Linux boxes with Bash"... ;)


> Also, presumably there will be configuration options to disable
> this feature, something like "PermitCpLocal".

I would definitely prefer it the other way around: opt-in instead of opt-out. That is, you must explicitly ask for the feature to enable it; by default it should be turned off.


/John


Ángel González

May 14, 2012, 10:43:52 AM
On 14/05/12 09:02, John Olsson M wrote:
>> I imagine something like this:
>> The user would run a command such as the following:
>> remoteServer$ cp2local someFile.c
>> The SSH server on the remote host would then push the file to the
>> SSH client running locally just as if scp had been used, but it
>> would reuse the existing connection. The local SSH client would
>> then write the file just as it would have had scp been used.
> You also need to consider the case where the user is *not* running a normal (like TCSH, Bash, ZSH, ...) shell on the server and where the file system is exposed as a virtual filesystem via SFTP (which might run in another chrooted directory than the SSH subsystem).
>
> What would a path to a local file look like in this context?
>
> I see this as a security hole since you suddenly get access to files via SSH which you do not get access to via SFTP (since it is chrooted)...
>
> /John
If you have a shell on the server, and are able to run the cp2local
command, you could presumably also run cat <file> and copy files that
way. So not really a security hole.
But you raise a good point in that opening a sftp connection in the same
ssh session may not be equivalent to the view through the shell.
Maybe cp2local should simply pass the descriptor to a unix socket (or
equivalent; the cp2local connection would obviously be implementation
defined).

Dotan Cohen

May 14, 2012, 10:40:29 AM
On Mon, May 14, 2012 at 4:50 PM, John Olsson M
<john.m...@ericsson.com> wrote:
>> So the feature does not have to be exposed under those
>> connection conditions. Only expose the feature when
>> connecting to a shell instance.
>
> How does the SSH server know this? Shall it be a configuration option per subsystem?
>

The feature will be initiated by a CLI command on the server (remote)
side. So if there is no shell to run commands, then the feature is not
exposed to use by the user. See the prior mail on the subject, dated
Sun, 13 May 2012 17:32:58 +0300 by me. Later I'll summarise the issue
so that we won't have to keep referring back to eclectic emails.


>> So this interface doesn't even need the feature. Terrific.
>> I don't understand what the problem is.
>
> I do not want a situation where you suddenly out-of-the-box get file transfer capabilities for a node which does not intend to offer file transfer capabilities. Or for that matter gives access to another set of files compared to what is exposed via SFTP.
>

Then a config setting such as PermitCpLocal would be prudent.


> I really hope I have misunderstood what it is you want to do. :)
>

I hope not! I would rather that you point out the flaws to help hone
it into something feasible _and_ safe.


>> If the administrator of the machine provides the user with read
>> access to a file via Bash or any other shell in SSH, then the
>> user can provide himself with the contents of the file. There
>> is currently no straightforward method, but painstaking methods
>> exist, and if that (copy-pasta from the console) is not
>> considered a security hole, then the proposed method is not a
>> security hole either. In other words, the proposed method
>> exposes no information that is not already exposed.
>
> I'm just considering the case where the "shell" is not an ordinary Bash-like shell that offers a filesystem view. Thus I do not want any mechanism that allows for escaping out of the sandboxed environment offered by the "shell". For instance being able to ferret out files that are not possible to see via the "shell". Or, for that matter, place files in the filesystem.
>

That is not an issue with the current idea of the implementation. Only
files that the user has read access to in his SSH session are
available to transfer, and if the user already has read access then he
can already just copy the (text) files out of his terminal.


> Please also note that the logon procedure might not be based on uid and gid. The "shell" might have its own security model based on custom LDAP attributes which restricts what the user is capable of doing and if the user is able to escape from the "shell" everything is done with the rights of the "shell" process.
>

There is no escaping from the shell. Only those files for which the
user has explicit read access are available.


>> How is this any different than the user typing "vim /etc/passwd"
>> in a shell via SSH and then copying the contents of the file from
>> his terminal?
>
> If it is possible to use an SSH built-in feature to escape from the "shell" and gain file access to the node's filesystem, transferring files in and out, that is a huge difference.
>

Huge difference? There is no escaping from the shell; I don't know why
you insist on using that term. If the user can open a file in VIM,
then he can download the file. It is the exact same file access which
UNIX has been providing for decades, and SELinux is refining.

Imagine the script running like this behind the scenes:
cat someFile > scp

Only if the user can cat the file can he transfer it.
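This equivalence can be shown with nothing but the shell. The client-initiated form available today would be `ssh anotherMe@remote 'cat someFile.c' > someFile.c` (host and file names are placeholders); the underlying point, that read access already implies transfer ability, demonstrated locally:

```shell
# Read access implies transfer ability: cat plus redirection is a copy.
src="$(mktemp)"; dst="$(mktemp)"
printf 'example contents\n' > "$src"   # a file the user can read...
cat "$src" > "$dst"                    # ...is a file the user can copy
cmp -s "$src" "$dst" && echo "identical"   # prints "identical"
```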


>> I appreciate that you raise the issue! I expect that there will
>> be contingencies that I do not think of, and it is better to
>> work them out right now. I thank you for providing your input,
>> _especially_ when it mentions things that I have not thought of.
>
>
> Excellent! That is my intention. People are doing weird things, and all the computers out there are not just "Linux boxes with Bash"... ;)
>
>
>> Also, presumably there will be configuration options to disable
>> this feature, something like "PermitCpLocal".
>
> I would definitely prefer it the other way around; Opt-In instead of Opt-out. That is you must explicitly ask for the feature to enable it; default it should be turned off.
>

That might be a good idea to start with. New access features should
always be opt-in in my opinion.

Peter Stuge

unread,
May 14, 2012, 10:43:03 AM
to
John Olsson M wrote:
> If it is possible to access /etc/passwd from an SSH built-in
> feature to escape from the "shell" to be able to get file access
> of the nodes filesystem to transfer files in and out it is a huge
> difference.

Indeed. I think the sane way to implement this may be in sftp-server.
The problem is of course the marker IPC from the user's shell over to
the not-yet-running sftp-server. :)


> I would definitely prefer it the other way around; Opt-In instead
> of Opt-out. That is you must explicitly ask for the feature to
> enable it; default it should be turned off.

Yes absolutely.


//Peter

Ángel González

unread,
May 14, 2012, 10:59:56 AM
to
I have been considering a variant of this, where you use a ~command.
~C is already taken, but it could be e.g. ~F (for transferring *f*iles).

So when you type ~F, the client opens an sftp channel over the same
connection and shows you a tree view of files and folders to
browse/download.
If you were on Windows, it could be equivalent to being in a PuTTY session
and, on that action, getting a WinSCP spawned (reusing the connection).

Steffen Daode Nurpmeso

unread,
May 14, 2012, 12:06:37 PM
to
Hallo,

Gert Doering <ge...@greenie.muc.de> wrote:

| Hi,
|
| On Mon, May 14, 2012 at 12:23:30PM +0200, Steffen Daode Nurpmeso wrote:
| > myself@local-host$ ssh myself@host-over-ssh
| > myself@host-over-ssh$ ~Copy_file path_on_local-host path(_on_host-over-ssh)
| >
| > Why should this open a security hole, given that
| > myself@host-over-ssh has proper permissions for
| > path_on_host-over-ssh?
|
| If you're just talking about from-local-to-remote, one thing that comes
| to mind is "an evil remote host stealing your local files without your
| doing".

I don't think this would be possible, since this should all end up
in process_escapes() (talking about command setup and such).
I.e., it should all be filtered by the local client which drives
the interactive terminal session, before any data is sent over the
connection at all.

| So while I can understand the convenience factor of this, making it
| properly secure (like "only operate out of a well-defined quarantine
| folder on local-host, and do not permit absolute or relative path names
| with '..' in them") is likely going to make this inconvenient enough
| to then not-use it...

It's not the convenience, it's just sitting in front of the
computer and using the keyboard and having that schizophrenic
situation best described as

All i want to do is '$ cp LOCAL/.vimrc ~/.vimrc', the
connection is established and i could use '$ cat > ~/.vimrc' and
copy+paste and it would do exactly that!

Grrrrmmpf!

Instead i need to switch consoles and use an explicit scp or
whatever, which does *so many things* before that simple operation
is actually performed.
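A terminal-safe variant of the copy+paste trick described above is to base64-encode the file so even binary data survives the terminal. In real use you would run the encode step locally and paste its output into the decode step on the remote shell; the paths below are made up for the sketch, and both halves run locally:

```shell
# Sketch of the copy+paste workaround, with base64 so binary-safe data
# survives the terminal. Both steps run locally here for illustration.
printf 'set nocompatible\n' > /tmp/demo.vimrc    # stand-in for LOCAL/.vimrc
base64 < /tmp/demo.vimrc > /tmp/demo.b64         # local side: copy this text
base64 -d < /tmp/demo.b64 > /tmp/demo.out        # remote side: paste into this
cmp /tmp/demo.vimrc /tmp/demo.out && echo "round trip ok"
```

It works, but it is exactly the kind of multi-step dance the original poster wants to avoid.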

I'm with the original poster - i know these feelings as of my own
experience.

However i'm not familiar with the actual protocol/RFCs, and thus the
question of how this could be implemented on the client/server
interaction side is beyond my knowledge for the foreseeable future.
And one of the previous answers doesn't give much hope in this
respect.

--steffen
Forza Figa!

Peter Stuge

unread,
May 14, 2012, 12:21:38 PM
to
Steffen Daode Nurpmeso wrote:
> However i'm not familiar with the actual protocol/RFCs and thus
> the question how this could be implemented on the client/server
> interaction side beyond my knowledge for a foreseeable period of
> time.

So take the time to study the protocol and what an actual sshd must
do. It's not very difficult.


> And one of the previous answers doesn't give that much hope
> in respect to this.

It's less about the protocol and more about sending messages to a
process which does not yet exist.


//Peter

Ángel González

unread,
May 16, 2012, 5:35:30 PM
to
On 14/05/12 18:01, Peter Stuge wrote:
> There are lots of
> variations, ~G(et) ~P(ut), ~S(end) ~R(eceive), and so on.
Sure, the actual option is not that important.

>> So when you typed ~F the client opens a sftp channel over the same
>> connection, and shows you a tree view of files and folders to
>> browse/download.
> Now you are reinventing an SFTP user interface.
I think it's what they need.

> I think this may be going too far for OpenSSH. I think it's bad enough that the SFTP
> protocol has to be added to the ssh client..
Yes, it seems odd for the command-line client provided by OpenSSH. It
may be possible to simply spawn the sftp binary instead of implementing
sftp in ssh, though.

> I agree strongly that a filexfer channel is what must be used to
> actually perform the transfer.
>
> On Linux it's easy to get the shell's cwd: /proc/childpid/cwd but
> what is the situation like on other systems? Unless there is a
> portable solution this feature can't really be taken seriously
> IMO.
It seems to be platform-dependent. Not all UNIX systems have a /proc
fs, and even those that have one might not expose cwd. For
instance, Solaris seems to have a cwd symlink in /proc/pid, but it
apparently points nowhere :S

Also, even on Linux, take into account that we are not communicating
with the shell; /proc/childpid/cwd would have to be read and sent to the
local side by sshd. sshd may also have a different view of the fs tree;
I'm not sure whether it's under the same chroot as the shell.
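On Linux the lookup itself is a couple of lines (Linux-only, per the portability caveat above; on Solaris or the BSDs the symlink may be absent or dangling). Here the shell inspects its own pid as a stand-in for the shell child that sshd would inspect:

```shell
# Linux-only sketch: read a process's working directory from /proc
# without talking to the process itself. $$ is this shell's own pid,
# standing in for the pid of the user's shell under sshd.
cd /tmp
cwd=$(readlink "/proc/$$/cwd")
echo "$cwd"
```

Any portable implementation would need a fallback for systems where this symlink does not exist, which is exactly the objection raised above.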


> Also, this is a very different user interface than what was
> originally requested, and which I think is what makes the most sense.
> The question of how exactly the cp2local command will IPC to
> sftp-server what file to transfer remains to be solved.
Well, the interface was "you write something in the shell and the file
gets copied";
I just replaced cp2local with ~F so that it would be trapped by the local
client :)

>> If you were on Windows, it could be equivalent to being on a PuTTY
>> session, and on that action getting a WinSCP spawned (reusing the
>> connection).
> IMO having an sftp client reuse a connection is a different feature
> than requested feature, whose purpose is to save time by being able
> to transfer one or more files from remote shell to local filesystem
> with very little user interaction.
Not at all. Half of the advantage of doing it from the open shell is
that they skip all the connection and authentication steps (especially
if they're using keyboard-interactive).