cmd < cmd.sh p1 p2
Unfortunately p1 and p2 are not interpreted as parameters to cmd.sh. I
tried various combinations of quoting, such as
cmd < "cmd.sh p1 p2"
and
cmd < cmd.sh "p1 p2"
as well as using here-strings:
cmd <<< "cmd.sh p1 p2"
and
cmd <<< cmd.sh "p1 p2"
none of these worked.
Substituting the real command, script, and parameters I am working
with looks like this with some redaction for privacy (the purpose of
all this is to deploy a Java war file to a remote Tomcat server).
ssh oabbott@stageserver < deploy.sh /var/www sitename ../build/www.sitename.com.war
deploy.sh requires three parameters.
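To see why the first form can't work: the shell removes the redirection `< cmd.sh` from the command line before running `cmd`, so `p1` and `p2` become arguments of `cmd` itself, never of the script. A minimal local sketch, with `printf` standing in for `cmd` and a throwaway `/tmp/cmd.sh`:

```shell
# Create a throwaway script file so the redirection has something to read.
echo hello > /tmp/cmd.sh

# The redirection is stripped by the shell; p1 and p2 are passed to
# printf (our stand-in for cmd), which never looks at its stdin.
printf 'arg:%s\n' p1 p2 < /tmp/cmd.sh
# prints:
# arg:p1
# arg:p2
```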
> I want to send a parameterized bash script to a cmd, something like
> this:
>
> cmd < cmd.sh p1 p2
>
> Unfortunately p1 and p2 are not interpreted as parameters to cmd.sh. I
> tried various combinations of quoting, such as
>
> cmd < "cmd.sh p1 p2"
>
> and
>
> cmd < cmd.sh "p1 p2"
>
> as well as using here-strings:
>
> cmd <<< "cmd.sh p1 p2"
>
> and
>
> cmd <<< cmd.sh "p1 p2"
>
> none of these worked.
Of course. You have to supply a file and, in the latter two cases, a string.
Never a program.
But why can't you do it the standard way, i.e.
cmd.sh p1 p2 | cmd
or, since you're using bash, if you absolutely want to use <,
cmd < <(cmd.sh p1 p2)
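Note that `< <(...)` feeds `cmd` the *output* of the command inside the process substitution, not the script file itself. A one-liner to see it, with `cat` playing the role of `cmd` (bash is required for process substitution):

```shell
# cat receives on stdin whatever the inner command prints.
bash -c 'cat < <(echo "p1 p2")'
# prints: p1 p2
```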
If I understood correctly I should try pipes instead of redirection.
This doesn't seem to work. If I have a file that contains "echo
remote" called cmd.sh and I do
ssh oabbott@stageserver < cmd.sh
it works properly.
cmd.sh | ssh oabbott@stageserver
does not.
I also tried
ssh oabbott@BOSEXTSTAGE51 < < (deploy.sh)
this gave an error.
ssh oabbott@BOSEXTSTAGE51 << (deploy.sh)
of course prompted for a here document.
Your statement "You have to supply a file and, in the latter two
cases, a string. Never a program." has me thinking. A bash script
(program) without parameters works properly via redirection to ssh. I
guess it's up to the receiving program whether to interpret the file
redirected to stdin as a program but bash will not recognize the
parameters as we have seen.
Surely there's some way to make this work though.
In the first instance, you are reading from the file and
sending its contents as commands to ssh; in the second you are
executing the command and sending its output to ssh. To do the
same as the first, but using a pipe, it should be:
cat cmd.sh | ssh oabbott@stageserver
> I also tried
>
> ssh oabbott@BOSEXTSTAGE51 < < (deploy.sh)
>
> this gave an error.
Why wouldn't it?
> ssh oabbott@BOSEXTSTAGE51 << (deploy.sh)
>
> of course prompted for a here document.
As it should.
> Your statement "You have to supply a file and, in the latter two
> cases, a string. Never a program." has me thinking. A bash script
> (program) without parameters works properly via redirection to ssh. I
> guess it's up to the receiving program whether to interpret the file
> redirected to stdin as a program but bash will not recognize the
> parameters as we have seen.
The difference is in what you want to do with the file.
> Surely there's some way to make this work though.
To make what work?
--
Chris F.A. Johnson, author <http://cfaj.freeshell.org/shell/>
Shell Scripting Recipes: A Problem-Solution Approach (2005, Apress)
===== My code in this post, if any, assumes the POSIX locale
===== and is released under the GNU General Public Licence
> Thanks for your response Dave.
>
> If I understood correctly I should try pipes instead of redirection.
> This doesn't seem to work. If I have a file that contains "echo
> remote" called cmd.sh and I do
>
> ssh oabbott@stageserver < cmd.sh
>
> it works properly.
>
> cmd.sh | ssh oabbott@stageserver
>
> does not.
Ok, ssh might seem a bit special in that regard, but it's actually logical.
ssh takes what it reads from stdin in the local machine and sends it to the
remote command's stdin, so that you can do things like
tar -cvf - mydir | ssh user@host "tar -xvf -"
(the above line may contain some errors, but the idea should be clear). If
you don't specify a command to run on the remote host, a shell is run by
default. But if there is input available on stdin, that shell will read that
input, assume it contains commands, and try to execute those commands. It's
the same principle that makes possible things like
echo "ls; cat file; time" | bash
So, that's why
ssh oabbott@stageserver < cmd.sh
works and
cmd.sh | ssh oabbott@stageserver
does not.
But if you understand why the first form works, then you should also realize
that this
cat cmd.sh | ssh oabbott@stageserver
will work instead.
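The distinction can be reproduced entirely locally, with a local `bash` standing in for the shell that ssh would start on the remote side (`/tmp/cmd.sh` is a throwaway demo script):

```shell
# A one-line demo script, like the "echo remote" example above.
printf 'echo remote\n' > /tmp/cmd.sh

bash < /tmp/cmd.sh       # feeds the script *text*: prints "remote"
cat /tmp/cmd.sh | bash   # same text via a pipe: also prints "remote"

# Feeding the script's *output* instead sends the word "remote" to bash
# as a command, which fails with "remote: command not found":
sh /tmp/cmd.sh | bash 2>/dev/null || true
```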
> I also tried
>
> ssh oabbott@BOSEXTSTAGE51 < < (deploy.sh)
>
> this gave an error.
>
> ssh oabbott@BOSEXTSTAGE51 << (deploy.sh)
No. It should be
ssh oabbott@BOSEXTSTAGE51 < <(deploy.sh)
(spaces *exactly* as shown).
But note that, for the reasons discussed above, the *output* of deploy.sh
should consist of shell commands, which are executed by the remote shell.
> Your statement "You have to supply a file and, in the latter two
> cases, a string. Never a program." has me thinking. A bash script
> (program) without parameters works properly via redirection to ssh. I
> guess it's up to the receiving program whether to interpret the file
> redirected to stdin as a program but bash will not recognize the
> parameters as we have seen.
See above.
> Surely there's some way to make this work though.
Sure.
Thanks for replying. I see that I have not been completely clear on
what I'm doing.
Sending the output of cmd.sh to ssh is definitely not what I want to
do. What I'm trying to achieve is to send a local bash script, along
with its parameters, to ssh for remote execution. The local bash
script looks like this
DIRECTORY=$1
SITENAME=$2
WARFILE=$3
cd $DIRECTORY
rm -rf $SITENAME
unzip -d $SITENAME $WARFILE
What I hope to see is for bash to evaluate
cmd.sh aDir aSite aWar
making the variable substitutions and then send this (which I've been
trying with redirection) to ssh for remote execution
cd aDir
rm -rf aSite
unzip -d aSite aWar
Does this make sense?
1) the local machine simply sends a script, along with its
parameters, to the remote machine for execution, and the remote
machine performs the parameter evaluation and execution
2) the local machine performs the parameter evaluation and sends the
transformed script to the remote machine for execution
By "parameter evaluation" I mean the process of reading the parameters
and substituting them for the appropriate variables in the script as I
described in the 06/24/08 15:30 post.
> Rereading my post I realized I contradicted myself. There are two ways I
> can see this working:
>
> 1) the local machine simply sends a script, along with its
> parameters, to the remote machine for execution, and the remote
> machine performs the parameter evaluation and execution
>
> 2) the local machine performs the parameter evaluation and sends the
> transformed script to the remote machine for execution
But can't you simply copy the script to the remote host? Then you could just do
ssh user@host "cmd.sh $arg1 $arg2 $arg3"
and be done with it.
I could, but I have many machines and scripts to support and I wanted
to do it the way described previously for maximum flexibility. This
isn't a one-shot deal. I expect to have dozens of scripts like this
and keeping everything synchronized would be a nightmare. Changing a
script would require copying it to dozens of machines. I would rather
have the code in one place and execute it remotely.
remote_execute() {
    script=$1
    machine=$2
    shift 2
    scp "$script" "$machine:/tmp/remote.script"
    ssh "$machine" /tmp/remote.script "$@"
}

remote_execute cmd.sh host "$arg1" "$arg2" "$arg3"
--
Barry Margolin, bar...@alum.mit.edu
Arlington, MA
*** PLEASE post questions in newsgroups, not directly to me ***
*** PLEASE don't copy me on replies, I'll read them in the group ***
If you don't quote the posts you are replying to, no one is going
to know what you are talking about.
> Sending the output of cmd.sh to ssh is definitely not what I want to
> do.
Then don't do it.
> What I'm trying to achieve is to send a local bash script, along
> with its parameters, to ssh for remote execution.
That's not what you want. You want to send a script, but _not_ the
local script.
> The local bash script looks like this
>
> DIRECTORY=$1
> SITENAME=$2
> WARFILE=$3
> cd $DIRECTORY
> rm -rf $SITENAME
> unzip -d $SITENAME $WARFILE
>
> What I hope to see is for bash to evaluate
>
> cmd.sh aDir aSite aWar
>
> making the variable substitutions and then send this (which I've been
> trying with redirection) to ssh for remote execution
>
> cd aDir
> rm -rf aSite
> unzip -d aSite aWar
>
> Does this make sense?
Yes. Rewrite your script so that it prints a script that can be fed
to ssh:
DIRECTORY=$1
SITENAME=$2
WARFILE=$3
echo "cd \"$DIRECTORY\""
echo "rm -rf \"$SITENAME\""
echo "unzip -d \"$SITENAME\" \"$WARFILE\""
cmd.sh aDir aSite aWar | ssh ...
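A runnable local sketch of that generate-then-feed idea, with double quotes on the outside so the parameter values are substituted at generation time (`/tmp/gen.sh` is a hypothetical path, and a plain `sh` stands in for the `ssh ...` stage):

```shell
# The generator script: prints a script with the parameters baked in.
cat > /tmp/gen.sh <<'EOF'
DIRECTORY=$1
SITENAME=$2
WARFILE=$3
echo "cd \"$DIRECTORY\""
echo "rm -rf \"$SITENAME\""
echo "unzip -d \"$SITENAME\" \"$WARFILE\""
EOF

# The generated text is what would be piped into ssh.
sh /tmp/gen.sh aDir aSite aWar
# prints:
# cd "aDir"
# rm -rf "aSite"
# unzip -d "aSite" "aWar"
```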
You can generate locally a script with the replaced parameters, and send
that to the remote host. See Chris' other reply.
> oli...@gmail.com wrote:
>
>> ssh oabbott@stageserver < cmd.sh
[...]
> Ok, ssh might seem a bit special in that regard, but it's actually
> logical. ssh takes what it reads from stdin in the local machine and
> sends it to the remote command's stdin, [...]
Perhaps the OP is looking for:
ssh oabbott@stagesserver \
"sh /dev/stdin arg1 arg2 arg3" <cmd.sh
Or would that make sh read the "wrong" stdin?
Regards,
Marcel
Well, sort of:
$ ssh -t dave@kermit "sh /dev/stdin foo bar" < <(echo 'ls "$@"')
Pseudo-terminal will not be allocated because stdin is not a terminal.
bash: /dev/stdin: No such device or address
Since the man page says that multiple -t options force pseudo-tty allocation
(although I expected a single -t to be enough), I did
$ ssh -tt dave@kermit "sh /dev/stdin foo bar" < <(echo 'ls "$@"')
tcgetattr: Invalid argument
ls: foo: No such file or directory
ls: bar: No such file or directory
It seems to work, but, since a tty has been allocated, it hangs. So I did
$ ssh -tt dave@kermit "sh /dev/stdin foo bar" < <(echo 'ls "$@";exit')
tcgetattr: Invalid argument
ls: foo: No such file or directory
ls: bar: No such file or directory
Connection to kermit closed.
$
Now I'm not able to tell where the tcgetattr message comes from and whether
it's a symptom of something going wrong. I'd be happy to know more about that.
That's a Linux-only limitation, as you can't open
/dev/fd/<x> when fd <x> is a socket there. On Solaris, it
would be OK, since opening /dev/fd/<x> is more like a dup(<x>)
there.
You could work around that limitation by doing:
echo 'ls "$@"' | ssh host 'cat | sh /dev/stdin foo bar'
which turns stdin from a socket to a pipe (which both Linux and
Solaris can open).
You can also do
echo 'ls "$@"' | ssh host 'set foo bar; eval "$(cat)"'
(assuming your login shell on "host" can interpret the provided
script).
Beware that some shells like bash do read the .bashrc when
called over ssh, which can break things in your script if, for
instance, the .bashrc turns on some options or redefines some
commands as functions.
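The `eval "$(cat)"` trick can be tried entirely locally, with `sh -c` standing in for the remote login shell:

```shell
# The script text arrives on stdin; "set foo bar" fixes the positional
# parameters, $(cat) slurps the script, and eval runs it with those
# parameters in scope.
echo 'printf "%s\n" "$@"' | sh -c 'set foo bar; eval "$(cat)"'
# prints:
# foo
# bar
```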
> Since the man page says that multiple -t options force pseudo-tty allocation
> (although I expected a single -t to be enough), I did
>
> $ ssh -tt dave@kermit "sh /dev/stdin foo bar" < <(echo 'ls "$@"')
> tcgetattr: Invalid argument
> ls: foo: No such file or directory
> ls: bar: No such file or directory
>
> It seems to work, but, since a tty has been allocated, it hangs. So I did
>
> $ ssh -tt dave@kermit "sh /dev/stdin foo bar" < <(echo 'ls "$@";exit')
> tcgetattr: Invalid argument
> ls: foo: No such file or directory
> ls: bar: No such file or directory
> Connection to kermit closed.
> $
>
> Now I'm not able to tell where the tcgetattr message comes from and whether
> it's a symptom of something going wrong. I'd be happy to know more about that.
ssh -t is meant to be used for interactive use, so when the
input is from a terminal. Here, ssh stdin is not a terminal, so
tcgetattr fails on it.
--
Stéphane
> $ ssh -t dave@kermit "sh /dev/stdin foo bar" < <(echo 'ls "$@"')
> Pseudo-terminal will not be allocated because stdin is not a terminal.
> bash: /dev/stdin: No such device or address
>
> Since the man page says that multiple -t options force pseudo-tty
> allocation (although I expected a single -t to be enough), I did
>
> $ ssh -tt dave@kermit "sh /dev/stdin foo bar" < <(echo 'ls "$@"')
> tcgetattr: Invalid argument
> ls: foo: No such file or directory
> ls: bar: No such file or directory
>
> It seems to work, but, since a tty has been allocated, it hangs. So I
> did
>
> $ ssh -tt dave@kermit "sh /dev/stdin foo bar" < <(echo 'ls "$@";exit')
> tcgetattr: Invalid argument
> ls: foo: No such file or directory
> ls: bar: No such file or directory
> Connection to kermit closed.
> $
>
> Now I'm not able to tell where the tcgetattr message comes from and
> whether it's a symptom of something going wrong. I'd be happy to know
> more about that.
I get a slightly different error message from ssh:
tcgetattr: Inappropriate ioctl for device
The "tcgetattr: Invalid argument" is generated by ssh, when
the tcgetattr(3) function call fails with errno==EINVAL.
The failure is self-evident, as the redirected stdin is not a
terminal (ssh: "Pseudo-terminal will not be allocated because
stdin is not a terminal."). As long as the termios functionality
is not used by the shell script, no harm will be done.
Regards,
Marcel
>> $ ssh -t dave@kermit "sh /dev/stdin foo bar" < <(echo 'ls "$@"')
>> Pseudo-terminal will not be allocated because stdin is not a terminal.
>> bash: /dev/stdin: No such device or address
>
> That's a Linux only limitation. As you can't open
> /dev/fd/<x> when fd <x> is a socket there. On Solaris, it
> would be OK as opening /dev/fd/<x> is more like a dup(<x>)
> there.
>
> You could work around that limitation by doing:
>
> echo 'ls "$@"' | ssh host 'cat | sh /dev/stdin foo bar'
>
> which turns stdin from a socket to a pipe (which both Linux and
> Solaris can open).
Yes, this works perfectly, and thus may be a solution for the OP too.
The key is running cat on the remote host instead of sh directly, so that
/dev/stdin is forced to spring into existence to create the pipe, IIUC.
Am I correct in my understanding that, in this case, reading from /dev/stdin
effectively "consumes" the same stdin that is coming from the pipe? If the
script contains commands that read from stdin, those commands should get
that stdin from the pipe, but that's the same stdin that sh reads in
/dev/stdin, no?
Doing some tests, it seems that if the script contains commands that read
from stdin, those commands read nothing, eg
$ echo 'echo "foobar"; cat; # this is a comment' | sh /dev/stdin
foobar
the above does not print "# this is a comment", since it seems that the whole
stdin is "stolen" in advance by sh which takes it as the script to execute
(as if a regular script file had been specified). So, is it correct that the
cat | sh construct effectively ignores input coming from cat, but has the
side effect of creating /dev/stdin which provides the same input?
> You can also do
>
> echo 'ls "$@"' | ssh host 'set foo bar; eval "$(cat)"'
>
> (assuming your login shell on "host" can interpret the provided
> script).
Nice solution too.
>> $ ssh -tt dave@kermit "sh /dev/stdin foo bar" < <(echo 'ls "$@";exit')
>> tcgetattr: Invalid argument
>> ls: foo: No such file or directory
>> ls: bar: No such file or directory
>> Connection to kermit closed.
>> $
>>
>> Now I'm not able to tell where the tcgetattr message comes from and whether
>> it's a symptom of something going wrong. I'd be happy to know more about that.
>
> ssh -t is meant to be used for interactive use, so when the
> input is from a terminal. Here, ssh stdin is not a terminal, so
> tcgetattr fails on it.
Ah yes, that makes perfect sense now that you tell me.
>> Now I'm not able to tell where the tcgetattr message comes from and
>> whether it's a symptom of something going wrong. I'd be happy to know
>> more about that.
>
> I get a slightly different error message from ssh:
> tcgetattr: Inappropriate ioctl for device
>
> The "tcgetattr: Invalid argument" is generated by ssh, when
> the tcgetattr(3) function call fails with errno==EINVAL.
> The failure is self-evident, as the redirected stdin is not a
> terminal (ssh: "Pseudo-terminal will not be allocated because
> stdin is not a terminal."). As long as the termios functionality
> is not used by the shell script, no harm will be done.
Yes, thanks. I made a bit of confusion.
Doing an open("/dev/stdin") is meant to be the same as doing a
dup(0). That's more true on Solaris than on Linux. But when the fd
is referring to a pipe, at least functionally in this case, it's
true on both Linux and Solaris (we saw it was not true when the
fd is a socket on Linux).
So, sh opening /dev/stdin and doing a read on that new fd is
as if it was reading directly from its stdin.
Its stdin is the reading end of a pipe. The other end of the
pipe is being written to by cat. The cat command writes to
its stdout what it reads from its stdin. cat's stdin is a Unix
domain socket (probably created by socketpair(2) by sshd); sshd
reads the encrypted data it receives from ssh over a TCP socket
and writes the decrypted data to the other end of that Unix
socket.
The ssh command reads its stdin and writes it encrypted to
the TCP socket to sshd.
ssh's stdin is a pipe whose other end is written to by the
echo command.
So basically, that command line starts all of
local[echo, ssh] and remote[sshd, <user's-shell>, cat, sh] at the same
time, and they are interconnected by pipes, TCP sockets, and Unix
domain sockets. Every time the "echo" command writes something,
that "something" is passed along all those pipes and sockets
(possibly undergoing encryption/decryption).
>
> Doing some tests, it seems that if the script contains commands that read
> from stdin, those commands read nothing, eg
>
> $ echo 'echo "foobar"; cat; # this is a comment' | sh /dev/stdin
> foobar
[...]
The best way to understand how it works is to type:
echo "foobar"; cat; # this is a comment
at the prompt of a shell.
The shell reads one line at a time.
$ printf '%s\n' "echo foo" "cat" "test" | sh
foo
test
You'd have gotten the same had you typed "echo foo", "cat" and
"test" and (<Ctrl-D> for EOF) at the prompt.
> the above does not print "#this is a comment", since it seems that the whole
> stdin is "stolen" in advance by sh which takes it as the script to execute
> (as if a regular script file had been specified). So, is it correct that the
> cat | sh construct effectively ignores input coming from cat, but has the
> side effect of creating /dev/stdin which provides the same input?
No,
cat | sh
makes sh's stdin the output of cat (the reading end of a pipe
whose other end is being written by cat). The thing is that sh's
fd 0 is now a pipe, not a unix domain socket, so that Linux
doesn't have an issue with opening /dev/fd/0 (aka /dev/stdin).
--
Stéphane
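That workaround can be verified without any remote host, again with an outer `sh -c` playing the part of the remote shell (it works on Linux precisely because sh's fd 0 is now a pipe rather than a socket):

```shell
# cat copies the script text into a pipe; the inner sh opens /dev/stdin
# (that pipe) as its script file and gets "foo bar" as $1 and $2.
echo 'printf "arg:%s\n" "$@"' | sh -c 'cat | sh /dev/stdin foo bar'
# prints:
# arg:foo
# arg:bar
```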
> The thing is that sh's fd 0 is now a pipe, not a unix domain
> socket, so that Linux doesn't have an issue with opening
> /dev/fd/0 (aka /dev/stdin).
Would you happen to know if this behaviour (refusal to
open the socket) is "just another Linux kernel bug" or a
conscious choice?
Thanks for joining the discussion.
I must admit that things got over my head on the last few postings.
Was a solution obtained? I'm under the impression that I need to adapt
Stephane's command
echo 'ls "$@"' | ssh host 'set foo bar; eval "$(cat)"'
to my script but I'm not positive.
I don't know. But sockets support a different API from fds
created by open. That could be ground for a "conscious choice".
You can't open(2) a Unix domain socket, you have to connect to
it.
On Linux, /proc/xxx/fd/yyy are some sort of special symlinks,
while on Solaris /dev/fd/xxx are devices.
Linux allows you to open some other processes's fd, Solaris
doesn't, it's 2 different APIs.
On Linux:
~$ seq 3 > a
~$ exec 3< a
~$ read <&3
~$ cat /dev/fd/3
1
2
3
/dev/fd/3 is actually a symlink to the "a" file.
On Solaris:
~$ seq 3 > a
~$ exec 3< a
~$ read <&3
~$ cat /dev/fd/3
2
3
--
Stéphane
You could make it:
echo "set foo bar" | cat - cmd.sh | ssh host sh
--
Stéphane
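Tried locally, with a plain `sh` in place of `ssh host sh` and a throwaway `/tmp/cmd.sh`:

```shell
# The script to run remotely: just print its positional parameters.
echo 'printf "%s\n" "$@"' > /tmp/cmd.sh

# cat concatenates a leading "set foo bar" line with the script, so the
# receiving shell first sets the parameters, then runs the script body.
echo 'set foo bar' | cat - /tmp/cmd.sh | sh
# prints:
# foo
# bar
```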
> The shell reads one line at a time.
>[snip]
Ok, I think I've defined (to myself first of all) what was escaping me
before, thanks to your reply. My problems were not about the various
pipe/socket connections, which I know, but rather in trying to visually
picture what goes on when the shell reads from standard input; it turns out
that it was just convoluted in my mind, while it's actually simple.
The point is, in a pipeline like cat | sh, where stdin must serve both for
reading commands and for "stdin" in the usual sense, the behavior of
commands that read from stdin can vary depending upon how the lines of the
script are formatted.
If I do
$ echo 'echo foo;cat;ls' | sh
foo
file1 file2
The shell consumes all stdin when reading the first line of the script, so
"cat" gets EOF (or something equivalent).
On the other hand, if I do
$ echo -e 'echo foo;cat\nls' | sh
foo
ls
the shell reads "echo foo;cat" in one go, and the rest of stdin is consumed
by cat, however long it may be. If we had another (less greedy) program
instead of cat, the data left in stdin after the command has read its input
would again have been interpreted by the shell as commands:
$ echo -e 'echo foo;read -n 1 a\nls' | sh
foo
sh: line 2: s: command not found
(pardon the bashisms, but they don't affect the discussion here)
and so on. And, of course, all this is exactly what happens with an
interactive shell, which reads from stdin by definition.
Using "cat | sh /dev/stdin" instead does not really change anything, since
the shell is really reading from the same descriptor in both places, with
the added benefit that /dev/stdin is syntactic sugar that lets you add
parameters on the command line.
Thanks for your clarifications, they helped a lot.
> Was a solution obtained? I'm under the impression that I need to adapt
> Stephane's command
>
> echo 'ls "$@"' | ssh host 'set foo bar; eval "$(cat)"'
>
> to my script but I'm not positive.
ssh user@host 'cat | sh /dev/stdin argument1 argument2...' <cmd.sh
The first positional parameter ($0) will be '/dev/stdin', not
'cmd.sh', the other positional parameters ($1, $2 etc.) will
be as expected.
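Both points can be checked locally, using `sh -c` in place of `ssh user@host` (note `$0` coming out as /dev/stdin while $1 and $2 arrive intact):

```shell
# A throwaway script that reports its $0 and its arguments.
echo 'echo "$0 got: $*"' > /tmp/cmd.sh

sh -c 'cat | sh /dev/stdin a1 a2' < /tmp/cmd.sh
# prints: /dev/stdin got: a1 a2
```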
Yes, you can do something like
cat cmd.sh | ssh host "cat | sh /dev/stdin $arg1 $arg2 $arg3"
or, if you want to use eval
cat cmd.sh | ssh host "set $arg1 $arg2 $arg3; eval \"$(cat)\""
where cmd.sh is your original command.
For both forms, you can remove the UUOC "cat cmd.sh" from the beginning and
put it at the end using redirection ( < cmd.sh)
That works!
> or, if you want to use eval
> cat cmd.sh | ssh host "set $arg1 $arg2 $arg3; eval \"$(cat)\""
> where cmd.sh is your original command.
That one did not, but I may have mistyped something. Of course, I
substituted the appropriate strings (host, arg, etc.) rather than using
it literally.
cat cmd.sh | ssh host "cat | sh /dev/stdin $arg1 $arg2 $arg3" (Dave's
solution)
and
ssh host "cat | sh /dev/stdin $arg1 $arg2 $arg3" < cmd.sh (Marcel's
solution)
(providing proper credit, Stephane contributed greatly to finding a
solution)
both work, the difference being piping the script at the beginning
or redirecting it at the end. I'll have to study how this works. I'm
thinking of writing a mini howto on this since I'm surely not the only
person who needs this. Alternatively, if anyone could provide an
article-like explanation that would be great. As I wrote before, I got
lost in the discussion.
> $ echo -e 'echo foo;read -n 1 a\nls' | sh
> foo
> sh: line 2: s: command not found
To make things really weird (OS = Linux):
$ cat zzz
echo foo
read -n 1 a </dev/stdin; echo $a
ls
$ cat zzz | sh
foo
l
sh: line 3: s: command not found
$ sh <zzz
foo
e
zzz
Without Stéphane's explanations the 'e'-line would
still be a mystery to me.
Regards,
Marcel
>> Would you happen to know if this behaviour (refusal to
>> open the socket) is "just another Linux kernel bug" or a
>> conscious choice?
>
> I don't know. But sockets support a different API from fds
> created by open. That could be ground for a "conscious choice".
>
> You can't open(2) a Unix domain socket, you have to connect to
> it.
Sure, but when connected you can read(2)/write(2) like a pipe.
> On Linux, /proc/xxx/fd/yyy are some sort of special symlinks,
> while on Solaris /dev/fd/xxx are devices.
Yes, indeed, I noticed that apart from the “self-explanatory”
symlinks to regular files or character devices there are many
symlinks to either “pipe:[NNN]” or “socket:[NNN]”. Since
those names do not exist, I assume the Linux kernel does
some sort of “auto-magical” translation on them. This
would be a “violation” of the general Unix transparency
rule. Furthermore it's by no means clear to me, why an
open(2) on “pipe:[NNN]” should succeed, while a similar
open on “socket:[NNN]” fails. So, perhaps, one could
classify the Linux implementation as a “conscious bug” :-)
> On Linux:
>
> ~$ seq 3 > a
> ~$ exec 3< a
> ~$ read <&3
> ~$ cat /dev/fd/3
> 1
> 2
> 3
>
> /dev/fd/3 is actually a symlink to the "a" file.
>
> On Solaris:
>
> ~$ seq 3 > a
> ~$ exec 3< a
> ~$ read <&3
> ~$ cat /dev/fd/3
> 2
> 3
The latter (Solaris) is what one would expect of a dup(2):
shared file offset (and status flags). The Linux behaviour
isn't very convincing, and definitely not what I would
expect. IMO this implementation is at least
“questionable”.
The Linux result is predictable, as /proc/<pid>/fd/3 is a common
symlink to /some/place/a, but somehow it is not what I would
expect from open("/dev/fd/3",...). Of course, in practice,
there's not much point in opening an existing descriptor
through the non-portable /dev/fd interface, instead of
using dup/dup2 (or, horrors, fcntl :-) ), but that's (IMO)
not a good reason to “invent” a questionable (or at least
not very convincing) implementation.
Regards,
Marcel
> To make things really weird (OS = Linux):
>
> $ cat zzz
> echo foo
> read -n 1 a </dev/stdin; echo $a
> ls
> $ cat zzz | sh
> foo
> l
> sh: line 3: s: command not found
> $ sh <zzz
> foo
> e
> zzz
Yes, that's because, with < redirection, /dev/stdin points to zzz, and the
file can actually be reopened independently and re-read from the start.
With |, on the other hand, /dev/stdin points to a pipe created on the fly
and reading from that consumes input. That can be seen if you modify the
script like this:
$ cat zzz
ls -l /dev/fd/0
$ cat zzz | sh
lr-x------ 1 dave users 64 2008-06-25 21:01 /dev/fd/0 -> pipe:[4635272]
$ sh < zzz
lr-x------ 1 dave users 64 2008-06-25 21:01 /dev/fd/0 -> /home/dave/tmp/zzz
> Without Stéphane's explanations the 'e'-line would
> still be a mystery to me.
To me also. This last example of yours is also quite enlightening. Thanks!
> I want to send a parameterized bash script to a cmd, something like
> this:
>
> cmd < cmd.sh p1 p2
>
> Unfortunately p1 and p2 are not interpreted as parameters to cmd.sh. I
> tried various combinations of quoting, such as
>
> cmd < "cmd.sh p1 p2"
>
> and
>
> cmd < cmd.sh "p1 p2"
>
> as well as using here-strings:
>
> cmd <<< "cmd.sh p1 p2"
>
> and
>
> cmd <<< cmd.sh "p1 p2"
>
> none of these worked.
>
> Substituting the real command, script, and parameters I am working with
> looks like this with some redaction for privacy (the purpose of all this
> is to deploy a Java war file to a remote Tomcat server).
>
> ssh oabbott@stageserver < deploy.sh /var/www sitename ../build/www.sitename.com.war
>
> deploy.sh requires three parameters.
I believe you want what lisp and python people call "lambda", and I don't
think POSIX shell or bash support it directly.
You'll probably have to do one of the following:
1) scp your script and run it with parameters on the remote host
2) sed your parameters into your script, producing a new file on the
local host, and then scp that and run it without parameters
3) sed your parameters into your script, write the result to stdout,
and pipe that into your ssh command
I'm guessing #3 is closest to what you're hoping for.
When I say "sed your script", I mean putting some sort of unique string
into your script for each parameter, and then replacing the right unique
string with the value you want it to have.
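Option 3 might look something like this sketch; the `@@DIR@@`/`@@SITE@@` placeholder convention and the paths are made up for illustration, and a local `sh` stands in for the ssh stage:

```shell
# A template script with unique placeholder strings for each parameter.
cat > /tmp/tmpl.sh <<'EOF'
echo "dir=@@DIR@@ site=@@SITE@@"
EOF

# sed replaces each placeholder with its value; the result is a
# parameter-free script that can be piped straight into ssh.
sed -e 's|@@DIR@@|/var/www|' -e 's|@@SITE@@|mysite|' /tmp/tmpl.sh | sh
# prints: dir=/var/www site=mysite
```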
Thanks Dan. The solution was demonstrated by Dave, Marcel, and
Stephane earlier, and it works as I hoped it would.