
concatenated-stream - which component is being read from?


Sam Steingold

Mar 29, 2000

Is there a way to figure out which component of a concatenated stream is
being read from?

--
Sam Steingold (http://www.podval.org/~sds)
Micros**t is not the answer. Micros**t is a question, and the answer is Linux,
(http://www.linux.org) the choice of the GNU (http://www.gnu.org) generation.
cogito cogito ergo cogito sum

Erik Naggum

Mar 29, 2000

* Sam Steingold <s...@gnu.org>

| Is there a way to figure out which component of a concatenated stream is
| being read from?

from the definition of concatenated-stream-streams:

Returns a list of input streams that constitute the ordered set of streams
the concatenated-stream still has to read from, starting with the current
one it is reading from. The list may be empty if no more streams remain to
be read.
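
in other words, the car of that list is the component currently being
read from. something like this, just for illustration:

  (let ((s (make-concatenated-stream
            (make-string-input-stream "ab")
            (make-string-input-stream "cd"))))
    (read-char s)                             ; => #\a
    (first (concatenated-stream-streams s)))  ; => the "ab" stream, still current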

#:Erik

Sam Steingold

Mar 29, 2000

>>>> In message <31633447...@naggum.no>
>>>> On the subject of "Re: concatenated-stream - which component is being read from?"
>>>> Sent on 29 Mar 2000 18:52:17 +0000

thanks, Erik! (dunno how I missed this phrase!)

Unfortunately, only Allegro CL does this; both CLISP and CMUCL return
the whole list at all times.
I just fixed CLISP to do the right thing, though.

Another issue is closing of the streams.

I understand that `close' doesn't close the constituent streams:

The effect of close on a constructed stream is to close the argument
stream only. There is no effect on the constituents of composite
streams.

does this mean that I will have to keep the list of all the streams
myself and close them one by one?

ouch!

[I am writing an XML parser, and I define a Gray stream whose main slot
is a concatenated-stream, and I keep adding constituents as I encounter
system entities].
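
roughly something like this (just a sketch; the class and accessor
names are made up, and the Gray stream classes live in an
implementation-dependent package):

  (defclass xml-input-stream (fundamental-character-input-stream)
    ;; the main slot: a concatenated-stream of the document and any
    ;; system entities encountered so far
    ((in :initarg :in :accessor xml-in)))

  (defmethod stream-read-char ((s xml-input-stream))
    (read-char (xml-in s) nil :eof))

  (defun add-entity (s entity-stream)
    ;; rebuild the concatenated-stream with the new entity's stream in
    ;; front of whatever remains to be read
    (setf (xml-in s)
          (apply #'make-concatenated-stream
                 entity-stream
                 (concatenated-stream-streams (xml-in s)))))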

Thanks.

--
Sam Steingold (http://www.podval.org/~sds)
Micros**t is not the answer. Micros**t is a question, and the answer is Linux,
(http://www.linux.org) the choice of the GNU (http://www.gnu.org) generation.

The only thing worse than X Windows: (X Windows) - X

Barry Margolin

Mar 29, 2000

In article <usnx93...@ksp.com>, Sam Steingold <s...@gnu.org> wrote:
>Another issue is closing of the streams.
>
>I understand that `close' doesn't close the constituent streams:
>
>The effect of close on a constructed stream is to close the argument
>stream only. There is no effect on the constituents of composite
>streams.
>
>does this mean that I will have to keep the list of all the streams
>myself and close them one by one?

Yes. If you opened them using WITH-OPEN-FILE or WITH-OPEN-STREAM, this
will happen automatically. The general idea is that whoever opens a stream
has the responsibility to make sure it's closed. Also, you might stop
using the concatenated stream before you use up all the constituent
streams, and it would be inappropriate to close the rest prematurely.
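
For example (just a sketch; PROCESS-XML and the pathnames are
placeholders):

  (let ((constituents (list (open "doc.xml") (open "entity.xml"))))
    (unwind-protect
         (process-xml (apply #'make-concatenated-stream constituents))
      (mapc #'close constituents)))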

--
Barry Margolin, bar...@bbnplanet.com
GTE Internetworking, Powered by BBN, Burlington, MA
*** DON'T SEND TECHNICAL QUESTIONS DIRECTLY TO ME, post them to newsgroups.
Please DON'T copy followups to me -- I'll assume it wasn't posted to the group.

Joerg-Cyril Hoehle

Apr 4, 2000

Hi,

Barry Margolin <bar...@bbnplanet.com> writes:
> >does this mean that I will have to keep the list of all the streams
> >myself and close them one by one?
>
> Yes. [...] The general idea is that whoever opens a stream
> has the responsibility to make sure it's closed. Also, you might stop
> using the concatenated stream before you use up all the constituent
> streams, and it would be inappropriate to close the rest prematurely.

It's indeed best for a general-purpose object (such as a concatenated
stream) and its manipulating methods *not* to close any constituents
that it received as arguments, for the reason you mention or because
other parts of an application may still wish to write to some of them.

I'd like to show here that there's a need for the other way round too,
following the principle of freeing resources as early as possible. But
I don't know whether that defines a need for application-specific
subclasses which should propagate closing of constituents, or whether
this is an architectural pattern that should be dealt with in a more
general way.

Only the application (the architecture), not a generic module has the
overview and knows when to shut down resources.

I wonder whether the above separation of object manipulation into a)
creation, b) processing and c) destruction, where a) and c) should be
held tightly together, is restricted to OO and procedural programming
and not as general as it may seem. Especially functional programming
and data-flow-oriented programming may have other needs.

I believe there's a need for *particular* objects which *will* close
constituents when they have finished with them.

Reasons: - "GC leakage": the creating agent (which by the above is
also the destructing agent) may only be invoked CPU years after the
resource has been used for the last time.
I: . open file
. process data -> do years of computations on what was read
. close file
+ If the object is bound to an external resource, you may get too
many open files in between, yet strictly speaking you don't really
need so many open files.
+ Or you just need much more memory than necessary to hold all these
objects, since the creator must hold a reference to them to be able to
destroy them at the end.

I: is different from

II:  . open file
     . build representation of contents
     . close file
     . do years of computations based on representation

This has the benefit of not holding external resources forever, but
needs temporary memory resources and may not be suitable for
interactive I/O. It's not CPS-like.

It seems like the best of all worlds would be obtained by having

III: . open file
     . process data
     -> close file as soon as all was read, while
        doing years of computations

Yet how does that work with a general purpose processing function? I
don't want to change a general module and add CLOSE here and there.

How could I possibly "redirect" (or tell) some generic module that
*internally* uses the functionality of a concatenated stream to
instead use a concatenating-and-closing stream for one particular
task defined by my application?

It seems to me like the architecture, which defines everything
(I. Jacobson :-) and thus has all knowledge and magnifying glasses,
needs the ability to modify fine-grained parts of the behaviour of the
generic modules and packages which ideally constitute the application -- how?

So what's needed?
- Application-specific glue code?
- General "design" patterns for building a whole system out of components?
- Protocols with added functionality "close-enhanced"?
- Streams that auto-close on EOF on top of each normal one?
- whole program analysis?

Regards,
Jorg Hohle
Telekom Research Center -- SW-Reliability

PS: Reminds me of the "there cannot be one COPY suitable for all
purposes" discussion...

Barry Margolin

Apr 4, 2000

In article <qkpvh1y...@tzd.dont.telekom.spam.de.me>,
Joerg-Cyril Hoehle <hoehle...@tzd.dont.telekom.spam.de.me> wrote:
>How could I possibly "redirect" (or tell) some generic module that
>*internally* uses the functionality of a concatenated stream to
>instead use an concatenating-and-closing stream for one particular
>task defined by my application?

If the function does years of processing between reading EOF and returning
to the caller, perhaps you could pass in a function to call when EOF is
reached. This function can be a closure that closes the composite stream
and all its constituent streams.
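
Something like this, sketchily (PARSE-XML and its :ON-EOF argument are
made-up names):

  (let* ((constituents (list (open "doc.xml") (open "entity.xml")))
         (s (apply #'make-concatenated-stream constituents)))
    (parse-xml s
               :on-eof (lambda ()
                         (close s)
                         (mapc #'close constituents))))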

>So what's needed?
>- Application-specific glue code?
>- General "design" patterns for building a whole system out of components?
>- Protocols with added functionality "close-enhanced"?
>- Streams that auto-close on EOF on top of each normal one?
>- whole program analysis?

The auto-close feature also seems reasonable to me. A :CLOSE-ON-EOF option
to OPEN would make sense. The functions that create composite streams
could also take a :CLOSE-CONSTITUENTS option.

Erik Naggum

Apr 4, 2000

* hoehle...@tzd.dont.telekom.spam.de.me (Joerg-Cyril Hoehle)

| I believe there's a need for *particular* objects which *will* close
| constituents when they have finished with them.

you _could_ define an :after method on read-char and read-byte on that
stream that would close it once it got exhausted, using the widely
available Gray streams, or something better if and when it comes along
with such features available. (hi, Duane! hint for the taking! :)
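
a rough sketch (the class name is made up, and the package the Gray
stream classes live in depends on your implementation):

  (defclass auto-closing-input-stream (fundamental-character-input-stream)
    ((in :initarg :in :reader acs-in)))

  (defmethod stream-read-char ((s auto-closing-input-stream))
    (let ((in (acs-in s)))
      (if (open-stream-p in)
          (read-char in nil :eof)
          :eof)))

  (defmethod stream-read-char :after ((s auto-closing-input-stream))
    ;; close the wrapped stream the moment it is exhausted
    (let ((in (acs-in s)))
      (when (and (open-stream-p in)
                 (null (peek-char nil in nil nil)))
        (close in))))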

in an application that needed a little simpler life than it could get out
of the box, I added code to the socket layer this way to automatically
shut down the input side, forcing an EOF that consequently shut down the
output side gracefully as well when they ran into trouble of any kind, as
the socket error handling in most Unices is a disgraceful mess of special
cases that neither match nor attempt to match the TCP or IP
specifications.

#:Erik
