
Setting a socket to unbuffered?


Dom Fulton

Jan 17, 2008, 5:50:36 PM
Does anyone know how I can do this? I need a setvbuf equivalent for
file descriptor I/O. I guess that I could use fdopen to get a stream,
but it doesn't feel like the right thing to do.

The background is that I have a client which is talking over a socket
to a server. The server forks a process which has to have
stdout/stderr (and possibly stdin) redirected back to the client over
the socket. This works, but there appears to be a buffering problem.
The client produces its own output to stdout, which is mixed with the
server's redirected output. I hadn't expected there to be any
confusion, because the client and server operate at different times,
in sequence, but the two sets of output are getting mixed anyway. My
reasoning is that if I can set the socket to unbuffered I should be
able to fix this.

I guess that I could also try setvbuf on the client's stdout, but it
seems to me that this may be too late to fix the problem.

Thanks -

Dom

David Schwartz

Jan 17, 2008, 6:48:27 PM
On Jan 17, 2:50 pm, Dom Fulton <wes...@yahoo.com> wrote:
> Does anyone know how I can do this? I need a setvbuf equivalent for
> file descriptor I/O. I guess that I could use fdopen to get a stream,
> but it doesn't feel like the right thing to do.

A socket does not have this kind of buffering, so there is nothing to
shut off.

> The background is that I have a client which is talking over a socket
> to a server. The server forks a process which has to have
> stdout/stderr (and possibly stdin) redirected back to the client over
> the socket. This works, but there appears to be a buffering problem.

Right, but it's not in the socket. It's probably in the programs.

> The client produces its own output to stdout, which is mixed with the
> server's redirected output. I hadn't expected there to be any
> confusion, because the client and server operate at different times,
> in sequence, but the two sets of output are getting mixed anyway. My
> reasoning is that if I can set the socket to unbuffered I should be
> able to fix this.

You need to set the applications to not buffer and to write to the
socket immediately. The socket will not hold the data, but the
applications might.

> I guess that I could also try setvbuf on the client's stdout, but it
> seems to me that this may be too late to fix the problem.

Normally, 'stdout' is line-buffered.
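[For reference, switching a stdio stream to unbuffered is a single setvbuf() call. A small self-contained demo, assuming POSIX; the function name is illustrative, not from the thread:]

```c
#include <stdio.h>
#include <string.h>
#include <unistd.h>   /* pread */

/* Demonstrate setvbuf(_IONBF): once buffering is off, bytes written
 * with fputs are visible on the underlying descriptor immediately,
 * with no fflush.  Returns 1 if that held, else 0. */
int unbuffered_demo(void)
{
    FILE *f = tmpfile();
    if (!f)
        return 0;
    setvbuf(f, NULL, _IONBF, 0);   /* _IONBF = unbuffered */
    fputs("hi", f);                /* note: no fflush */
    char b[3] = {0};
    if (pread(fileno(f), b, 2, 0) != 2)
        return 0;
    fclose(f);
    return strcmp(b, "hi") == 0;
}
```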

It's hard to know what your issue is because you provide almost no
details. How much of this is under your control? When you say they run
at different times, does the program producing the problem output stop
existing? Or does it just stop sending output?

DS

Logan Shaw

Jan 17, 2008, 10:24:00 PM
David Schwartz wrote:
> On Jan 17, 2:50 pm, Dom Fulton <wes...@yahoo.com> wrote:

>> The background is that I have a client which is talking over a socket
>> to a server. The server forks a process which has to have
>> stdout/stderr (and possibly stdin) redirected back to the client over
>> the socket. This works, but there appears to be a buffering problem.

> Right, but it's not in the socket. It's probably in the programs.

In which case, the solution is to make the programs not buffer their
output.

The obvious thing is to ask the program to do that via some kind of
parameter or config file. Or change the source code.

But if you cannot do that, programs often have the behavior that
they will not buffer when writing to a terminal. If that's the case,
one possible solution is to open a pty and run the program within it.
Then the program is, as far as it knows, running in a terminal, and
it will behave as if there is someone watching it who wants to see
the data immediately.
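[A rough sketch of this pty approach using posix_openpt(), assuming Linux/POSIX; the helper name and the minimal error handling are illustrative only:]

```c
#define _XOPEN_SOURCE 600   /* posix_openpt, grantpt, unlockpt, ptsname */
#include <fcntl.h>
#include <stdlib.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

/* Run cmd with stdin/stdout/stderr attached to a pty slave, collect
 * its output on the master side, and return its exit status (or -1 on
 * error).  Sketch only: error handling and fd hygiene are minimal. */
int run_in_pty(const char *cmd, char *const argv[], char *buf, size_t buflen)
{
    int master = posix_openpt(O_RDWR | O_NOCTTY);
    if (master < 0 || grantpt(master) < 0 || unlockpt(master) < 0)
        return -1;

    pid_t pid = fork();
    if (pid < 0)
        return -1;

    if (pid == 0) {                    /* child */
        setsid();                      /* new session, so the slave can */
        int slave = open(ptsname(master), O_RDWR);  /* become its ctty */
        close(master);
        dup2(slave, STDIN_FILENO);
        dup2(slave, STDOUT_FILENO);
        dup2(slave, STDERR_FILENO);
        if (slave > STDERR_FILENO)
            close(slave);
        execvp(cmd, argv);
        _exit(127);
    }

    /* parent: read until EOF/EIO, which happens once the child has
     * exited and every fd on the slave side has been closed */
    size_t used = 0;
    ssize_t n;
    while (used + 1 < buflen &&
           (n = read(master, buf + used, buflen - used - 1)) > 0)
        used += (size_t)n;
    buf[used] = '\0';
    close(master);

    int status;
    if (waitpid(pid, &status, 0) != pid)
        return -1;
    return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
}
```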

- Logan

George Peter Staplin

Jan 17, 2008, 10:47:38 PM
Dom Fulton wrote:
> Does anyone know how I can do this? I need a setvbuf equivalent for
> file descriptor I/O. I guess that I could use fdopen to get a stream,
> but it doesn't feel like the right thing to do.


Use the FIONBIO (non-blocking I/O knob):

int on = 1;
if (ioctl(sockfd, FIONBIO, &on)) {
    perror("ioctl");
    ...
}

> The background is that I have a client which is talking over a socket
> to a server. The server forks a process which has to have
> stdout/stderr (and possibly stdin) redirected back to the client over
> the socket. This works, but there appears to be a buffering problem.
> The client produces its own output to stdout, which is mixed with the
> server's redirected output. I hadn't expected there to be any
> confusion, because the client and server operate at different times,
> in sequence, but the two sets of output are getting mixed anyway. My
> reasoning is that if I can set the socket to unbuffered I should be
> able to fix this.

I don't understand your stated problem.


George

David Schwartz

Jan 18, 2008, 12:13:47 AM
On Jan 17, 7:47 pm, George Peter Staplin
<georgepsSPAMME...@xmission.com> wrote:

> Dom Fulton wrote:

> > Does anyone know how I can do this? I need a setvbuf equivalent for
> > file descriptor I/O. I guess that I could use fdopen to get a stream,
> > but it doesn't feel like the right thing to do.

> Use the FIONBIO (non-blocking I/O knob)
>
> int on = 1;
> if (ioctl(sockfd, FIONBIO, &on)) {
> perror ("ioctl");
> ...

Whoa, no!

1) You can't pass a program a non-blocking stdin/stdout/stderr.

2) Even if you could, how would that help solve his problem?

DS

Dom Fulton

Jan 18, 2008, 4:58:51 AM
Thanks for the replies. I can't actually control or modify the program
running on the server - it's an arbitrary program that the user at the
client has requested the server to run. However, the program has
actually *terminated* when my problem occurs, and I would have
expected the exit to flush stdout/stderr, but it seems that it doesn't.

I guess this is getting pretty confusing - this is exactly what I'm
doing, in more detail:

 --------                                 --------
| Client |----- Socket 1 (commands) -----| Server |
|        |----- Socket 2 (stdout/err) ---|        |
 --------                                 --------

Socket 1 carries a command protocol. The client issues a request to
the server, and the server fulfills the request. Very simple, and
unidirectional. There's one exception, though, which is that the
client can also issue an 'execute program' (EXEC) request to the
server, and the client needs to see the stdout from that program, as
well as the program's exit code. I couldn't think of a way to do this
on one socket, which is why I'm using two (both on the same server
port).

The client issues EXEC together with the required program name and
args. The server forks, and exec's the requested program. Before
calling 'exec', however, the server dup's FDs such that stdout/err for
the requested program are redirected onto Socket 2. The server then
waits for the requested program to exit. When 'waitpid' returns, the
server sends the program's exit status back to the client on Socket 1.
This is the only message that goes back from the server to the client
on Socket 1.

At the client, after issuing EXEC, the client code calls 'select' and
waits for incoming data on both of Socket 1 and Socket 2. The theory
is that there will be program output arriving on Socket 2, which will
eventually stop when the EXEC program terminates. So, in principle, as
soon as I have ready data on Socket 1 (ie. the EXEC program's exit
code) I can exit my select loop, respond to the exit status, and carry
on normal unidirectional client processing, ignoring Socket 2.

Here's the problem. In the client's select loop, I display the Socket
2 data a single character at a time, using a read on the socket,
followed by a write to STDOUT_FILENO. I stop doing this when the exit
code arrives on Socket 1, *but* I haven't yet received all the stdout
data from the exec'ed program. So, the program has terminated, and the
exit code has arrived on Socket 1, *before* the program's stdout
output has been flushed onto Socket 2, and that output has been lost.
It works most of the time, but there can be a lot of lost output -
it's not just a single line.

So, given what you're saying, it seems that I have to arrange either
(a) for the server system to ensure that stdout/err are flushed during
exit (but surely that just happens anyway?), or (b) I have to try to
set up unbuffered stdout for the exec'ed program. Is (b) even
possible? Doesn't exec re-open stdout anyway, and would any unbuffered
status be inherited by the program?

Oh - and this needs to work on Solaris, Linux, and Cygwin... :)
I haven't got around to trying Solaris yet; I'm seeing this behaviour
on Linux.

Many thanks -

Dom

Dom Fulton

Jan 18, 2008, 5:07:53 AM
On Fri, 18 Jan 2008 09:58:51 +0000, Dom Fulton <wes...@yahoo.com>
wrote:

>So, given what you're saying, it seems that I have to arrange either
>(a) for the server system to ensure that stdout/err are flushed during
>exit (but surely that just happens anyway?), or (b) I have to try to
>set up unbuffered stdout for the exec'ed program.

I forgot to say - my original thought was that the problem was
actually:

(c) the 'exit' of the exec'ed program *did* actually flush stdout, but
the stdout data was still buffered in Socket 2 when the exit status
arrived on Socket 1, and so was lost when I left the select loop after
receiving the Socket 1 message.

- Dom

Message has been deleted

Dom Fulton

Jan 18, 2008, 8:37:22 AM
On Fri, 18 Jan 2008 04:03:41 -0800 (PST), David Schwartz
<dav...@webmaster.com> wrote:

>On Jan 18, 2:07 am, Dom Fulton <wes...@yahoo.com> wrote:
>
>> (c) the 'exit' of the exec'ed program *did* actually flush stdout, but
>> the stdout data was still buffered in Socket 2 when the exit status
>> arrived on Socket 1, and so was lost when I left the select loop after
>> receiving the Socket 1 message.
>

>Exactly. Your problem has nothing to do with buffers or flushing. It
>has to do with the fact that you have two things that have no
>synchronization whatsoever and you are expecting them to complete in a
>particular order.

Hmm. Yes, it does sound pretty dumb when you put it like that.

>Using a single socket would solve your problem. Closing the second
>socket at the end of output (and checking for that rather than a
>status on the first) would solve the problem.

I don't think I can do these without putting a pipe between the server
and the program anyway, so that I get a chance to intercept the
program output and package it up as a message for the single socket.

>There are a lot of other reasonable solutions. Probably the best is to
>open a pty and pass *that* to the program you start. Then the server
>proxies between the pty and the client, giving it complete control
>over data flow and timing.

I like the sound of this, but I don't quite understand it. Doesn't it
just move the buffering problem closer to the program? If I still have
to wait for waitpid to find out when the program has completed, and to
get its exit code, surely I still have no guarantee that all the
program output has arrived on the pty? Or can I somehow get this
status from the pty itself?

Thanks -

Dom

George Peter Staplin

Jan 18, 2008, 2:22:57 PM
["Followup-To:" header set to comp.unix.programmer.]

David Schwartz wrote:
> On Jan 17, 7:47 pm, George Peter Staplin
><georgepsSPAMME...@xmission.com> wrote:
>
>> Dom Fulton wrote:
>
>> > Does anyone know how I can do this? I need a setvbuf equivalent for
>> > file descriptor I/O. I guess that I could use fdopen to get a stream,
>> > but it doesn't feel like the right thing to do.
>
>> Use the FIONBIO (non-blocking I/O knob)
>>
>> int on = 1;
>> if (ioctl(sockfd, FIONBIO, &on)) {
>> perror ("ioctl");
>> ...
>
> Whoa, no!

Whoa, yes :-)

>
> 1) You can't pass a program a non-blocking stdin/stdout/stderr.
>
> 2) Even if you could, how would that help solve his problem?
>
> DS

Re-read my message; I was answering a question. I stated later on that
I didn't understand the rest of the problem. It seems the rest of you
don't either. I think the OP is probably confused, but I answered at
least one of the questions.

Also, I fail to see why you couldn't pass a program a non-blocking
stdout or stdin, if the program is written to assume that it's in
non-blocking mode.


George

Alex Fraser

Jan 18, 2008, 3:34:54 PM
"Dom Fulton" <wes...@yahoo.com> wrote in message
news:au91p39um1iepgni3...@4ax.com...

> On Fri, 18 Jan 2008 04:03:41 -0800 (PST), David Schwartz
> <dav...@webmaster.com> wrote:
>>On Jan 18, 2:07 am, Dom Fulton <wes...@yahoo.com> wrote:
>>> (c) the 'exit' of the exec'ed program *did* actually flush stdout, but
>>> the stdout data was still buffered in Socket 2 when the exit status
>>> arrived on Socket 1, and so was lost when I left the select loop after
>>> receiving the Socket 1 message.
>>
>>Exactly. Your problem has nothing to do with buffers or flushing. It
>>has to do with the fact that you have two things that have no
>>synchronization whatsoever and you are expecting them to complete in a
>>particular order.
>
> Hmm. Yes, it does sound pretty dumb when you put it like that.
>
>>Using a single socket would solve your problem. Closing the second
>>socket at the end of output (and checking for that rather than a
>>status on the first) would solve the problem.
>
> I don't think I can do these without putting a pipe between the server
> and the program anyway, so that I get a chance to intercept the
> program output and package it up as a message for the single socket.

David's first suggestion would require a pipe between the server (parent)
and exec'd program (child).

The second (simpler) suggestion could result in a protocol something like
FTP. After the EXEC command, you would establish a second connection to read
from the child. The parent would close the connection after fork(), so that
the client can detect the end of output from the child. Only after that has
happened would the client go back to the command connection to read the exit
status.

>>There are a lot of other reasonable solutions. Probably the best is to
>>open a pty and pass *that* to the program you start. Then the server
>>proxies between the pty and the client, giving it complete control
>>over data flow and timing.
>
> I like the sound of this, but I don't quite understand it. Doesn't it
> just move the buffering problem closer to the program?

As far as I can see, all you gain by using a pty is the possibility that the
child's output will be read (and hence received by the client) sooner. In
other respects it is the same as the pipe idea.

> If I still have to wait for waitpid to find out when the program has
> completed, and to get its exit code, surely I still have no guarantee
> that all the program output has arrived on the pty? Or can I somehow get
> this status from the pty itself?

To be neat, the parent should call waitpid() as soon as the child exits (ie
in response to SIGCHLD), and send the status after the last bit of child
output. However, the time between these two events will typically be small
so you could simply take no action on SIGCHLD and call waitpid() after the
last bit of output is written.

Alex


Dom Fulton

Jan 18, 2008, 4:29:47 PM
On Fri, 18 Jan 2008 20:34:54 -0000, "Alex Fraser" <m...@privacy.net>
wrote:

>The second (simpler) suggestion could result in a protocol something like
>FTP. After the EXEC command, you would establish a second connection to read
>from the child. The parent would close the connection after fork(), so that
>the client can detect the end of output from the child.

> ...


>To be neat, the parent should call waitpid() as soon as the child exits (ie
>in response to SIGCHLD), and send the status after the last bit of child
>output.

Ok, sorry to be dumb - the bit I don't understand is: how does the
parent know when the child has finished output? The only indication I
get is either waitpid returning, or SIGCHLD arriving, but neither of
these guarantee that all the output has arrived from the child.

I've got a really nasty hack at the moment, which is that I sleep(1)
after waitpid returns and before sending the status back. This is long
enough to get things to work in my current test.

Thanks -

Dom

Dom Fulton

Jan 18, 2008, 5:07:50 PM
Current code attached below -

- Dom
------------------------------------------------------------
// server pseudo-code
server() {
    ...
    pid = fork();
    if (pid == 0) {
        dup2(fildesB, STDIN_FILENO);
        dup2(fildesB, STDOUT_FILENO);
        dup2(fildesB, STDERR_FILENO);

        // progname is an arbitrary client-requested program;
        // we need to capture its stdout on fildesB and return
        // it to the client
        execvp(progname, child_argv);  // shouldn't return...
        _exit(EXIT_FAILURE);           // ...but in case it does
    }

    if (pid < 0) {
        doLog(ERROR, "'fork' failed (%s)", strerror(errno));
        return -1;
    }

    // wait till the child terminates
    errno = 0;
    if (waitpid(pid, &status, 0) != pid) {
        // ... log error message
        status = -1;
    }

    // nasty hack: without this sleep, we lose some output on
    // fildesB. 1s is long enough to get all the fildesB data
    // back to the client before we return the status on fildesA
    sleep(1);

    // send 'status' back to the client on fildesA
    ...
    write(fildesA, statusbuff, etc);
}

-----------------------------------------------------------------

// client pseudo-code. loop displaying the incoming data on fildesB
// until we get the exit status from fildesA; fildesB can be ignored
// after this point
client() {
    ...
    while (1) {
        ...
        select(...);
        for (fd = 0; fd < FD_SETSIZE; ++fd) {
            if (FD_ISSET(fd, &read_fd_set)) {
                if (fd == fildesA)
                    return getCommandReturnCode(fildesA);
                if (fd == fildesB)
                    output_character(fd);
            }
        }
    }
}
-----------------------------------------------------------------

Frank Cusack

Jan 18, 2008, 7:52:19 PM
On Fri, 18 Jan 2008 20:34:54 -0000 "Alex Fraser" <m...@privacy.net> wrote:
> As far as I can see, all you gain by using a pty is the possibility that the
> child's output will be read (and hence received by the client) sooner. In
> other respects it is the same as the pipe idea.

I think the idea is that the exec'd program thinks it is writing to a
terminal, instead of something else, and changes its behavior accordingly.
eg, 'ls' vs 'ls|cat'.

-frank

Barry Margolin

Jan 18, 2008, 10:42:11 PM
In article <k162p3l952f5fpvvk...@4ax.com>,
Dom Fulton <wes...@yahoo.com> wrote:

> On Fri, 18 Jan 2008 20:34:54 -0000, "Alex Fraser" <m...@privacy.net>
> wrote:
>
> >The second (simpler) suggestion could result in a protocol something like
> >FTP. After the EXEC command, you would establish a second connection to read
> >from the child. The parent would close the connection after fork(), so that
> >the client can detect the end of output from the child.
> > ...
> >To be neat, the parent should call waitpid() as soon as the child exits (ie
> >in response to SIGCHLD), and send the status after the last bit of child
> >output.
>
> Ok, sorry to be dumb - the bit I don't understand is: how does the
> parent know when the child has finished output? The only indication I
> get is either waitpid returning, or SIGCHLD arriving, but neither of
> these guarantee that all the output has arrived from the child.

Whoever is reading from the child (either the parent if you set up a
pipe, or the client if you use a second socket) will read EOF when the
child has finished its output and exits.
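[In code, "read until EOF" is a plain loop like the following sketch; the name drain is illustrative, and a real version would also handle EINTR:]

```c
#include <unistd.h>

/* Copy everything from one descriptor to another until EOF.  A read()
 * that returns 0 means the writer closed its end -- in the two-socket
 * scheme, that the child has exited and all of its output has been
 * received.  Returns the total bytes copied, or -1 on error. */
ssize_t drain(int from, int to)
{
    char buf[4096];
    ssize_t n, total = 0;
    while ((n = read(from, buf, sizeof buf)) > 0) {
        if (write(to, buf, (size_t)n) != n)
            return -1;
        total += n;
    }
    return (n < 0) ? -1 : total;
}
```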

--
Barry Margolin, bar...@alum.mit.edu
Arlington, MA
*** PLEASE post questions in newsgroups, not directly to me ***
*** PLEASE don't copy me on replies, I'll read them in the group ***

Barry Margolin

Jan 18, 2008, 10:44:12 PM
In article
<e773d2af-e455-4b54...@e6g2000prf.googlegroups.com>,
David Schwartz <dav...@webmaster.com> wrote:

> On Jan 18, 2:07 am, Dom Fulton <wes...@yahoo.com> wrote:
>

> > (c) the 'exit' of the exec'ed program *did* actually flush stdout, but
> > the stdout data was still buffered in Socket 2 when the exit status
> > arrived on Socket 1, and so was lost when I left the select loop after
> > receiving the Socket 1 message.
>

> Exactly. Your problem has nothing to do with buffers or flushing. It
> has to do with the fact that you have two things that have no
> synchronization whatsoever and you are expecting them to complete in a
> particular order.
>

> Suppose the output of the program is very large. Nothing ensures that
> all that data will have been actually received by the other end before
> the server detects the termination of the program.


>
> Using a single socket would solve your problem. Closing the second
> socket at the end of output (and checking for that rather than a
> status on the first) would solve the problem.

I disagree with the suggestion to use a single socket. How will the
client be able to tell when the program output has completed and the
next output is the exit code?

Casper H.S. Dik

Jan 19, 2008, 5:36:53 AM
Dom Fulton <wes...@yahoo.com> writes:

>At the client, after issuing EXEC, the client code calls 'select' and
>waits for incoming data on both of Socket 1 and Socket 2. The theory
>is that there will be program output arriving on Socket 2, which will
>eventually stop when the EXEC program terminates. So, in principle, as
>soon as I have ready data on Socket 1 (ie. the EXEC program's exit
>code) I can exit my select loop, respond to the exit status, and carry
>on normal unidirectional client processing, ignoring Socket 2.

Even if Socket 2 carries all the output and error data, there is NO
guarantee that the exit code arrives *after* all the data has been
sent. Data is transferred asynchronously.

Without an end marker for the data, such as closing the second socket,
you can't reliably determine whether all the data has been sent.

You could try setting TCP_NODELAY to disable Nagle (which is something
you need to do with TCP if all you do is send data in one direction,
as it will otherwise wait for some time before sending the last
partial segment).
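[Setting the option is a one-line setsockopt() call; a sketch, with an illustrative helper name:]

```c
#include <netinet/in.h>   /* IPPROTO_TCP */
#include <netinet/tcp.h>  /* TCP_NODELAY */
#include <sys/socket.h>   /* setsockopt */

/* Disable the Nagle algorithm on a TCP socket so that small writes go
 * out immediately instead of waiting to be coalesced with later data.
 * Returns 0 on success, -1 on error (errno set by setsockopt). */
int set_tcp_nodelay(int fd)
{
    int one = 1;
    return setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &one, sizeof one);
}
```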

Casper
--
Expressed in this posting are my opinions. They are in no way related
to opinions held by my employer, Sun Microsystems.
Statements on Sun products included here are not gospel and may
be fiction rather than truth.

Alex Fraser

Jan 19, 2008, 6:20:28 AM
"Dom Fulton" <wes...@yahoo.com> wrote in message
news:k162p3l952f5fpvvk...@4ax.com...

> On Fri, 18 Jan 2008 20:34:54 -0000, "Alex Fraser" <m...@privacy.net>
> wrote:
>>The second (simpler) suggestion could result in a protocol something like
>>FTP. After the EXEC command, you would establish a second connection to
>>read from the child. The parent would close the connection after fork(),
>>so that the client can detect the end of output from the child.
>> ...
>>To be neat, the parent should call waitpid() as soon as the child exits
>>(ie in response to SIGCHLD), and send the status after the last bit of
>>child output.
>
> Ok, sorry to be dumb - the bit I don't understand is: how does the
> parent know when the child has finished output? The only indication I
> get is either waitpid returning, or SIGCHLD arriving, but neither of
> these guarantee that all the output has arrived from the child.

Using the second connection idea you could use a protocol which goes
something like this:

C1: EXEC prog...
S1: OK, connect to <sockaddr>
C2: connect
S2: <output from command>
S2: EOF
S1: exit status: ...

In the server, in response to the EXEC command you would do something along
these lines:

cmd_listen = socket(...);
bind(cmd_listen, ...);
listen(cmd_listen, ...);
/* send address to client */
if (!fork()) {
    close(client);
    cmd_sock = accept(cmd_listen, ...);
    close(cmd_listen);
    dup2(cmd_sock, 0);
    dup2(cmd_sock, 1);
    dup2(cmd_sock, 2);
    close(cmd_sock);
    execvp(...);
    _exit(1);
}
close(cmd_listen);
waitpid(...);
/* send status to client */

And in the client:

/* send EXEC command */
/* wait for address */
cmd_sock = socket(...);
connect(cmd_sock, ...);
/* read until EOF on cmd_sock */
close(cmd_sock);
/* wait for status */

The logic here is essentially the same as a passive FTP file transfer
(substituting the command's output for the file content).

The "neatness" aspect I mentioned applies if you have the server passing the
data between the exec'd program (child) and the client. There may be a delay
between when the child exits and when you have written the last of its
output to the client. If you wait until you have written all the output
before calling waitpid(), there will be a zombie during this delay, which
will be long if the child produces a lot of output and the client<->server
connection is relatively slow.

Alex


David Schwartz

Jan 19, 2008, 11:36:21 PM
On Jan 18, 5:37 am, Dom Fulton <wes...@yahoo.com> wrote:
> I like the sound of this, but I don't quite understand it. Doesn't it
> just move the buffering problem closer to the program? If I still have
> to wait for waitpid to find out when the program has completed, and to
> get its exit code, surely I still have no guarantee that all the
> program output has arrived on the pty? Or can I somehow get this
> status from the pty itself?

How can the program write to the pty after it has terminated?

DS

David Schwartz

Jan 19, 2008, 11:38:56 PM
On Jan 18, 7:44 pm, Barry Margolin <bar...@alum.mit.edu> wrote:

> I disagree with the suggestion to use a single socket. How will the
> client be able to tell when the program output has completed and the
> next output is the exit code?

Any way you want. For example, when the server receives the request to
start the command, it can return with a random 160-bit sequence that
will mark the end of the program's output. Alternatively, the data can
be broken into chunks with each chunk prefixed by its length.
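[A sketch of the length-prefix framing, with illustrative names; note that a production version must loop, since read() and write() on a socket can transfer fewer bytes than requested:]

```c
#include <arpa/inet.h>   /* htonl, ntohl */
#include <stdint.h>
#include <string.h>
#include <unistd.h>

/* Length-prefixed framing: each chunk is a 4-byte big-endian length
 * followed by that many bytes of payload; a zero-length chunk marks
 * end of output. */
int write_chunk(int fd, const void *data, uint32_t len)
{
    uint32_t n = htonl(len);
    if (write(fd, &n, 4) != 4)
        return -1;
    if (len && write(fd, data, len) != (ssize_t)len)
        return -1;
    return 0;
}

/* Returns the chunk length (0 = end-of-output marker), or -1 on error
 * or if the chunk does not fit in buf. */
ssize_t read_chunk(int fd, void *buf, uint32_t buflen)
{
    uint32_t n;
    if (read(fd, &n, 4) != 4)
        return -1;
    n = ntohl(n);
    if (n > buflen)
        return -1;
    if (n && read(fd, buf, n) != (ssize_t)n)
        return -1;
    return (ssize_t)n;
}
```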

If the data is known to be text, a single line consisting of a '.' can
be used. If the program actually produces a line that starts with a
'.', the server can double it. The client software can then eliminate
leading dots until it sees a line with just a dot.
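[This is the same dot escaping SMTP and NNTP use for message bodies; a line-at-a-time sketch, with illustrative function names:]

```c
#include <stdio.h>
#include <string.h>

/* Dot-stuffing: a line that begins with '.' gets an extra '.'
 * prepended on the wire, and a bare "." line marks the end of the
 * output stream. */
void stuff_line(const char *line, char *out, size_t outlen)
{
    if (line[0] == '.')
        snprintf(out, outlen, ".%s", line);
    else
        snprintf(out, outlen, "%s", line);
}

/* Receiver side: NULL means "end of output"; otherwise the returned
 * pointer is the line with any stuffed leading dot skipped. */
const char *unstuff_line(const char *line)
{
    if (strcmp(line, ".") == 0)
        return NULL;
    return (line[0] == '.') ? line + 1 : line;
}
```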

The point is, once you solve the synchronization problem in the
server, this can be communicated to the client any way you want.

DS

Barry Margolin

Jan 20, 2008, 6:03:56 PM
In article
<0d225da1-e683-4d18...@q77g2000hsh.googlegroups.com>,
David Schwartz <dav...@webmaster.com> wrote:

> On Jan 18, 7:44 pm, Barry Margolin <bar...@alum.mit.edu> wrote:
>
> > I disagree with the suggestion to use a single socket. How will the
> > client be able to tell when the program output has completed and the
> > next output is the exit code?
>
> Any way you want. For example, when the server receives the request to
> start the command, it can return with a random 160-bit sequence that
> will mark the end of the program's output. Alternatively, the data can
> be broken into chunks with each chunk prefixed by its length.
>
> If the data is known to be text, a single line consisting of a '.' can
> be used. If the program actually produces a line that starts with a
> '.', the server can double it. The client software can then eliminate
> leading dots until it sees a line with just a dot.

This requires the server to interpose itself, using a pipe, so that it
can check the output to make sure it doesn't contain the delimiter.
Similarly with the chunked output. You didn't mention this earlier, so
I thought you were talking about duping the socket to the child
process's stdout.

Basically, you're describing how web servers invoke CGI programs and
return the output to the client. They either buffer up the entire
result, so they can put the length in the Content-Length header, use the
chunked approach, or close the connection when the child process exits.
