
piping stdout and stderr to different processes?


Neil Cherry

Jul 24, 2007, 4:58:52 PM
I've found named pipes (fifos) but I'm confused about using them
properly. What I want to do is to take the stdout of a process and
send it to another process to be filtered. I also want to take the
stderr of the first process and send it to another process to
also be filtered. Any examples?

Thanks (and no, it's not for school).

--
Linux Home Automation Neil Cherry nch...@linuxha.com
http://www.linuxha.com/ Main site
http://linuxha.blogspot.com/ My HA Blog
Author of: Linux Smart Homes For Dummies

jellybean stonerfish

Jul 24, 2007, 8:01:40 PM
On Tue, 24 Jul 2007 15:58:52 -0500, Neil Cherry wrote:

> I've found name pipes (fifos) but I am confused on using them
> properly. What I want to do is to take the stdout of a process and
> send it to another process to be filtered. I also want to take the
> stderr of the first process and send it to another process to
> also be filtered. Any examples?
>
> Thanks (an no it's not for school).
>

Maybe this simple example will help. I will create two named pipes.
Then, in the background, I will cat these pipes to files. Next I will
create a file named "afile". To make it all happen, I do a directory
listing of afile and notafile (notafile does not exist and should give an
error) and redirect the output to the named pipes. Finally I will cat out
the files with the data that the background cats read from the fifos.
First comes a cut-and-paste of my terminal session; a breakdown of what
happens follows.

$ mkfifo stderrpipe
$ mkfifo stdoutpipe
$ cat stderrpipe > errorlog &
[1] 5860
$ cat stdoutpipe > outlog &
[2] 5863
$ touch afile
$ ls afile notafile > stdoutpipe 2> stderrpipe
[1]- Done cat stderrpipe > errorlog
[2]+ Done cat stdoutpipe > outlog
$ cat errorlog
ls: notafile: No such file or directory
$ cat outlog
afile


DEEPER EXPLANATIONS FOLLOW

CREATE PIPE FOR ERROR
$ mkfifo stderrpipe

CREATE PIPE FOR STDOUT
$ mkfifo stdoutpipe

CAT THE ERROR PIPE TO A FILE IN BACKGROUND
$ cat stderrpipe > errorlog &
BACKGROUND PROCESS #1
[1] 5860

CAT THE STDOUT PIPE TO A FILE IN BACKGROUND
$ cat stdoutpipe > outlog &
BACKGROUND PROCESS #2
[2] 5863

CREATE A FILE "afile"
$ touch afile

REDIRECT THE OUTPUT OF ls TO THE PIPES
$ ls afile notafile > stdoutpipe 2> stderrpipe

BOTH BACKGROUND PROCESSES FINISH WHEN THEIR
PIPES GET TO END OF FILE. THAT IS NOW, BECAUSE
THE LS COMMAND IS FINISHED GIVING THEM DATA

[1]- Done cat stderrpipe > errorlog
[2]+ Done cat stdoutpipe > outlog

OUTPUT THE FILES CREATED BY THE ABOVE cats
$ cat errorlog
ls: notafile: No such file or directory
$ cat outlog
afile

I hope that helps.

stonerfish

Stephane CHAZELAS

Jul 25, 2007, 5:27:33 AM
2007-07-24, 15:58(-05), Neil Cherry:

> I've found name pipes (fifos) but I am confused on using them
> properly. What I want to do is to take the stdout of a process and
> send it to another process to be filtered. I also want to take the
> stderr of the first process and send it to another process to
> also be filtered. Any examples?
[...]

No need for named pipes here.

{
{
cmd1 3>&- |
cmd2 2>&3 3>&-
} 2>&1 >&4 4>&- |
cmd3 3>&- 4>&-
} 3>&2 4>&1
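[Editor's note: a runnable sketch of the same fd shuffle with stand-in commands, since the construct is dense. The `echo` group, the two `sed` filters, and the output files are placeholders of mine, not from the thread; any POSIX shell should accept it.]

```shell
# Same shuffle, concretely: "cmd1" is a group writing one line to stdout
# and one to stderr; "cmd2"/"cmd3" are sed filters tagging each stream.
# fd 4 carries the real stdout past the pipes, fd 3 the real stderr;
# here both ends land in files so the split is visible.
{
  {
    { echo out; echo err >&2; } 3>&- |
    sed 's/^/stdout-filtered: /' 2>&3 3>&-
  } 2>&1 >&4 4>&- |
  sed 's/^/stderr-filtered: /' 3>&- 4>&-
} 3>&2 4>out.filtered > err.filtered
```

After this runs, out.filtered holds only the filtered stdout line and err.filtered only the filtered stderr line.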


--
Stéphane

Neil Cherry

Jul 26, 2007, 9:41:16 PM

Thanks! That helps big time.

Neil Cherry

Jul 26, 2007, 9:44:50 PM

Thanks.

OK now I see why I didn't get it to work. I didn't try that far.
But I have to say I'm not quite sure what I'm reading just yet.
I'll have to hit the man pages as I'm not used to the >&-
syntax.

Janis

Jul 27, 2007, 4:12:47 AM
On 27 Jul., 03:44, Neil Cherry <n...@cookie.uucp> wrote:
> On Wed, 25 Jul 2007 09:27:33 GMT, Stephane CHAZELAS wrote:
> > 2007-07-24, 15:58(-05), Neil Cherry:
> >> I've found name pipes (fifos) but I am confused on using them
> >> properly. What I want to do is to take the stdout of a process and
> >> send it to another process to be filtered. I also want to take the
> >> stderr of the first process and send it to another process to
> >> also be filtered. Any examples?
> > [...]
>
> > No need for named pipes here.
>
> > {
> > {
> > cmd1 3>&- |
> > cmd2 2>&3 3>&-
> > } 2>&1 >&4 4>&- |
> > cmd3 3>&- 4>&-
> > } 3>&2 4>&1
>
> Thanks.
>
> OK now I see why I didn't get it to work. I didn't try that far.
> But I have to say I'm not quite sure what I'm reading just yet.
> I'll have to hit the man pages as I'm not used to the >&-
> syntax.

N>&- closes the file descriptor with number N.
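[Editor's note: a quick illustration of the N>&- syntax, not from the thread; the filenames are placeholders. Should work in any POSIX shell.]

```shell
# Open fd 3 on a file, write through it, then close it with 3>&-.
exec 3> fd3demo.txt        # open descriptor 3 for writing
echo "via fd 3" >&3        # this line lands in fd3demo.txt
exec 3>&-                  # 3>&- : close descriptor 3
# A write to the now-closed descriptor fails (error text suppressed):
if echo again 2>/dev/null >&3; then
  echo "fd 3 still open" > fd3state.txt
else
  echo "fd 3 closed" > fd3state.txt
fi
```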

Janis

>
> --
> Linux Home Automation Neil Cherry nche...@linuxha.com
> http://www.linuxha.com/ Main site
> http://linuxha.blogspot.com/ My HA Blog

Stephane CHAZELAS

Jul 27, 2007, 5:08:23 AM
2007-07-26, 20:44(-05), Neil Cherry:
[...]

>> {
>> {
>> cmd1 3>&- |
>> cmd2 2>&3 3>&-
>> } 2>&1 >&4 4>&- |
>> cmd3 3>&- 4>&-
>> } 3>&2 4>&1
>
> Thanks.
>
> OK now I see why I didn't get it to work. I didn't try that far.
> But I have to say I'm not quite sure what I'm reading just yet.
> I'll have to hit the man pages as I'm not used to the >&-
> syntax.

3>&- is for closing fd 3. It's not strictly necessary, but it tidies
things up. None of the commands will ever try (nor should they) to
access fds 3 and 4, so it's best to close them before
executing those commands so that they can use those fds for
something else.

{
{
cmd1 |
cmd2 2>&3
} 2>&1 >&4 |
cmd3
} 3>&2 4>&1

is functionally equivalent.

If cmd2 doesn't output anything on either its stdout or its stderr, it can
even be simplified to:
{ cmd1 | cmd2; } 2>&1 | cmd3

Or if you want to be sure:

{ cmd1 | cmd2 > /dev/null 2>&1; } 2>&1 | cmd3
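[Editor's note: a concrete check of the simplified form with stand-in commands of mine; cmd2 here writes its result to a file, so it stays quiet on stdout and the precondition holds.]

```shell
# "cmd1" emits one line on each stream. "cmd2" (a sed filter) consumes
# cmd1's stdout and saves its result to a file, so nothing from it
# reaches the pipe; cmd1's stderr falls through 2>&1 into "cmd3".
{ { echo out; echo err >&2; } | sed 's/^/O:/' > out.log; } 2>&1 |
  sed 's/^/E:/' > err.log
```

out.log ends up with the filtered stdout line, err.log with the filtered stderr line, without any fd juggling.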

--
Stéphane

Neil Cherry

Jul 27, 2007, 4:49:41 PM

Thanks that helps. :-)

Mike Wu

Mar 8, 2015, 10:41:39 AM
I have a somewhat different challenge. I want to direct the stdout of TWO processes into a third process. Here is something close to working:

{ { echo $key|base64 -d; } 1>&3 | { dd if=./big.tar.enc | openssl enc -d -out big.tar -pass fd:3; } }

but somehow I have to hit the ENTER key twice to get the command to execute fully. Any suggestions would be greatly appreciated.

Mike

Kaz Kylheku

Mar 8, 2015, 10:56:00 AM
On 2015-03-08, Mike Wu <qin...@gmail.com> wrote:
> I have a somewhat different challenge. I want to direct the stdout of TWO
> processes into a third process.

$ ( echo proces 1 & echo proces 2 ) | sed -e s/s/ss/
process 1
process 2

The two echos and cat all run in separate processes; the spelling fix
done by cat shows the output has passed through it.

Next challenge?

Aragorn

Mar 8, 2015, 11:19:33 AM
On Sunday 08 March 2015 15:55, Kaz Kylheku conveyed the following to
comp.unix.shell...
Must have been a LOLcat then, because I didn't see one. :p ITYM "sed",
rather than "cat"? ;-)

--
= Aragorn =

http://www.linuxcounter.net - registrant #223157

Mike Wu

Mar 8, 2015, 11:52:48 AM
On Sunday, March 8, 2015 at 10:56:00 AM UTC-4, Kaz Kylheku wrote:
> On 2015-03-08, Mike Wu wrote:
> > I have a somewhat different challenge. I want to direct the stdout of TWO
> > processes into a third process.
>
> $ ( echo proces 1 & echo proces 2 ) | sed -e s/s/ss/
> process 1
> process 2
>
> The two echos and cat all run in separate processes; the spelling fix
> done by cat shows the output has passed through it.
>
> Next challenge?

I'm afraid this is not what I was looking for. The third process needs to read the two incoming I/O streams on separate channels: one for the encrypted data, another for the key. So it kind of boils down to this: if I use the pipe process1 | process2, which connects the stdout of process1 with the stdin of process2, is there a way I can change that so that process2 will read the input from a different fd, say 3?

Janis Papanagnou

Mar 8, 2015, 12:05:55 PM
Am 08.03.2015 um 16:52 schrieb Mike Wu:
>
> I'm afraid this is not what I was looking for. the third process
> need to read the two incoming I/O streams on separate channels. one
> for the (encrypted data), another for the key. So it kind of boil
> down to if I use the pipe process1|process2, which connects the
> stdout of process1 with stdin of the process2. Is there a way, I can
> change that, so that process2 will read the input from a different
> fd, say 3?

The pipe connects stdout of the left process to stdin of the right
process; you cannot simply pass multiple "logical" channels through
a single physical channel.

But what you posted originally gave the impression that you don't
even need to do that.

Your code

{ echo $key|base64 -d; } 1>&3 | { ... | openssl enc -d -out big.tar -pass fd:3; }

seems to just need a command line argument for openssl for the key.

... | openssl ... -pass $( base64 -d <<< "$key" )


Janis

Mike Wu

Mar 8, 2015, 12:37:56 PM
Thanks, Janis. So close, but unfortunately no. The key is a binary value, and the only two ways to pass a binary value as the key are
-pass file:filename
-pass fd:filedescriptor

I didn't want to store the key in a file, so I read the key material into the $key variable. Now I need a way to pass it to openssl.

Any number of things could solve my problem, but I couldn't get any of them working:
1. Is there such a thing as |>&2, like a pipe, but one that connects p1's stdout to p2's stderr instead of p2's stdin?

2. Is there a way to do something like
exec 3< keyfile, but instead: exec 3< $(base64 -d <<< $key)

3. Is there a way to redirect the I/O in the way I attempted below?
{ echo $key|base64 -d; } 1>&2 | { dd if=./big.tar.enc 2>/dev/null | openssl enc -d -out big.tar -pass fd:2; } 0>&2

Janis Papanagnou

Mar 8, 2015, 1:00:40 PM
Am 08.03.2015 um 17:37 schrieb Mike Wu:
> On Sunday, March 8, 2015 at 12:05:55 PM UTC-4, Janis Papanagnou wrote:
>> Am 08.03.2015 um 16:52 schrieb Mike Wu:
>>>
>>> I'm afraid this is not what I was looking for. the third process
>>> need to read the two incoming I/O streams on separate channels. one
>>> for the (encrypted data), another for the key. So it kind of boil
>>> down to if I use the pipe process1|process2, which connects the
>>> stdout of process1 with stdin of the process2. Is there a way, I can
>>> change that, so that process2 will read the input from a different
>>> fd, say 3?
>>
>> The pipe connects stdout of the left process to stdin of the right
>> process; you cannot simply pass multiple "locical" channels through
>> a single physical channel.
>>
>> But what you posted originally gave the impression that you don't
>> even need to do that.
>>
>> Your code
>>
>> { echo $key|base64 -d; } 1>&3 | { ... | openssl enc -d -out big.tar
>> -pass fd:3; }
>>
>> seems to just need a command line argument for openssl for the key.
>>
>> ... | openssl ... -pass $( base64 -d <<< "$key" )
>>
>>
>> Janis
>
> thanks. Janis. so close. but unfortunately no. They key is a binary value. the only two ways to pass a binary value as key are
> -pass file:filename
> -pass fd:filedescriptor

There's the shell option of process substitution:

<( process_or_pipeline_creating_the_key )

which accesses a pathname of the form /dev/fd/N that the program
can open and read from. Try that with

... -pass file:<( ... )
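[Editor's note: a minimal demonstration of process substitution handing a program a readable pathname. The <( ) form is bash/ksh/zsh, not plain POSIX sh, so bash is invoked explicitly here; cat and the sample text stand in for openssl reading the decoded key.]

```shell
# The command inside <( ) runs with its output available through a
# pathname; cat opens that pathname like any ordinary file.
bash -c 'cat <(printf "binary-ish key material")' > key.out
```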


Janis

Janis Papanagnou

Mar 8, 2015, 1:05:18 PM
Am 08.03.2015 um 18:00 schrieb Janis Papanagnou:
> Am 08.03.2015 um 17:37 schrieb Mike Wu:
>>
>> [...] the only two ways to pass a binary value as key are
>> -pass file:filename
>> -pass fd:filedescriptor
>
> There's the shell option of process substitution:
>
> <( process_or_pipeline_creating_the_key )
>
> which accesses a pathname of the form /dev/fd/N that the program
> can open and read from. Try that with
>
> ... -pass file:<( ... )

A space may be required by the shell interpreter

... -pass file: <( ... )

(Hope that the openssl "-pass file:" syntax doesn't mind.)

>
> Janis
[...]

Mike Wu

Mar 8, 2015, 1:28:21 PM
absolutely amazing!!!!!

dd if=big.tar.enc | openssl enc -d -out big.tar -pass file:<(echo $key|base64 -d)

works. I wasn't sure before I tried. I thought the file:filename had to be the filename in the form of a string like "keyfile". Thank you so much, Janis!

Kaz Kylheku

Mar 8, 2015, 1:39:53 PM
On 2015-03-08, Mike Wu <qin...@gmail.com> wrote:
> absolutely amazing!!!!!
>
> dd if=big.tar.enc | openssl enc -d -out big.tar -pass file:<(echo $key|base64 -d)
>
> works. I wasn't sure before I tried. I thought the file:filename had to be
> the filename in the form of a string like "keyfile". Thank you so much!
> Janis.

But that is true: the openssl utility requires a piece of text denoting the
name of a file.

It works because the <(...) syntax, in fact, expands to a string which the
utility can treat as a filename that can be opened to get to the data.
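[Editor's note: you can see the expansion directly by handing it to echo, which receives the string itself rather than opening it. A one-line sketch, run under bash since <( ) is not POSIX sh; the output file is a placeholder of mine.]

```shell
# echo prints the generated pathname (e.g. /dev/fd/63 on Linux)
# instead of reading through it.
bash -c 'echo <(true)' > psname.out
```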

Mike Wu

Mar 8, 2015, 1:40:17 PM
Apparently <(echo $key|base64 -d) translates into /dev/fd/63 on my machine, as you stated.

Barry Margolin

Mar 8, 2015, 1:41:36 PM
In article <5c3e7b7d-6c94-402f...@googlegroups.com>,
It is. <(command) is automatically replaced with a filename that
contains the output of the command. You can think of it as being like:

TEMP=filename
echo $key | base64 -d > $TEMP
... -pass file:$TEMP
rm $TEMP

However, the shell uses named pipes or /dev/fd/N internally to hide
this, and since it's a pipe it allows the processes to run concurrently.
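[Editor's note: the same effect spelled out by hand with an explicit named pipe, tying this back to the fifo example at the top of the thread. A sketch with placeholder names, not Barry's code.]

```shell
# What file:<( ... ) arranges under the hood: a background producer
# feeds a fifo while the consumer opens it by name, so both run
# concurrently and no key ever sits in a regular file.
mkfifo keyfifo
printf 'decoded key\n' > keyfifo &   # producer, like the <( ) pipeline
cat keyfifo > fromfifo.out           # consumer opens the fifo as a file
wait                                 # reap the background writer
rm keyfifo
```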

--
Barry Margolin, bar...@alum.mit.edu
Arlington, MA
*** PLEASE post questions in newsgroups, not directly to me ***

Kenny McCormack

Mar 8, 2015, 1:43:17 PM
...
>absolutely amazing!!!!!
>
>dd if=big.tar.enc | openssl enc -d -out big.tar -pass file:<(echo $key|base64 -d)
>
>works. I wasn't sure before I tried. I thought the file:filename had to be the
>filename in the form of a string like "keyfile". Thank you so much! Janis.

Well, I'm glad you got it working. I was about to suggest another general
approach, but that seems irrelevant now.

But, you do seem to have a "useless use of 'dd'" in there. Note that 'dd'
is often unnecessary, in that a simple 'cat' would suffice. But in this
case, you don't even need that. This should suffice:

openssl enc -d -out big.tar -pass file:<(echo $key|base64 -d) < big.tar.enc

--
Religion is what keeps the poor from murdering the rich.

- Napoleon Bonaparte -

Mike Wu

Mar 8, 2015, 1:43:23 PM
On Sunday, March 8, 2015 at 1:39:53 PM UTC-4, Kaz Kylheku wrote:
Kaz,
If you're up for a challenge, I'd love to see an I/O-redirect-based solution from you.
Mike

Mike Wu

Mar 8, 2015, 1:48:42 PM
On Sunday, March 8, 2015 at 1:43:17 PM UTC-4, Kenny McCormack wrote:
> In article <5c3e7b7d-6c94-402f...@googlegroups.com>,
or: openssl enc -d -in big.tar.enc -out big.tar -pass file:<(echo $key|base64 -d)

There is a reason that I had to read the encrypted file using dd. I would still love to see someone post an I/O-redirect-based solution. Seeing that some of the previous postings date back to 2007, I find this thread quite fascinating.