
ZWEI (Re: emacs rules and vi sucks)


James A. Crippen

Sep 21, 2001, 8:31:37 PM
Ole Aamot <o...@ping.uio.no> writes:

> * ja...@unlambda.com (James A. Crippen)
> | I wonder if Saint IGNUcius approves of using ZWEI...
>
> I used to believe that there could only be EIN Emacs.
> But I do not believe that ZWEI was eine initially.

And TECO Emacs begat EINE, and EINE begat ZWEI, and ZWEI begat Zmacs.
Thus was the ancestry of the Lisp Machine editor begun. And the
bastard child of EINE was Hemlock, which did abide in the land of
Spice Lisp, whose secrets were held by the Priests of the Carnegie
Mellon. And Hemlock did prosper in the land of Spice Lisp, and did
grow beyond the narrow borders of PERQ. For the land of Spice Lisp
became bound by marriage to the Common Lisp, and Hemlock did forsake
the language of MacLisp and its father EINE, and did sow new children
upon the ground of the Common Lisp of the Priests of Carnegie Mellon,
where its progeny would spread across Unix systems everywhere. But
the Great Emacs of Saint IGNUcius would overtake the spread of
Hemlock, for Saint IGNUcius spread a gospel far sweeter to the ears of
the masses than that of the authors of the Common Lisp Standard. Yea,
Hemlock, though it continue on even to this day amongst the
worshippers of the Priests of Carnegie Mellon, Hemlock did become
marginalized as it required a particular Common Lisp, whereas the
Great Emacs of Saint IGNUcius did spread its gospel in the language of
C, spoken by teeming masses of Unix hackers and other heathen.

'james

--
James A. Crippen <ja...@unlambda.com> ,-./-. Anchorage, Alaska,
Lambda Unlimited: Recursion 'R' Us | |/ | USA, 61.2069 N, 149.766 W,
Y = \f.(\x.f(xx)) (\x.f(xx)) | |\ | Earth, Sol System,
Y(F) = F(Y(F)) \_,-_/ Milky Way.

Jym Dyer

Sep 21, 2001, 9:10:07 PM
> [Lineagectomy]

=v= Where do FINE ("FINE Is Not EMACS") and BRIEF ("BRIEF Really
Isn't Even FINE") it into your list of begats?
<_Jym_>

Jason Trenouth

Sep 24, 2001, 6:43:26 AM
On 21 Sep 2001 16:31:37 -0800, ja...@unlambda.com (James A. Crippen) wrote:

> Ole Aamot <o...@ping.uio.no> writes:
>
> > * ja...@unlambda.com (James A. Crippen)
> > | I wonder if Saint IGNUcius approves of using ZWEI...
> >
> > I used to believe that there could only be EIN Emacs.
> > But I do not believe that ZWEI was eine initially.
>
> And TECO Emacs begat EINE, and EINE begat ZWEI, and ZWEI begat Zmacs.
> Thus was the ancestry of the Lisp Machine editor begun. And the

> bastard child of EINE was Hemlock, ...

And yet in the darkness Hemlock did beget the LispWorks Editor.

__Jason

Scott McKay

Sep 25, 2001, 8:52:54 AM

"Jason Trenouth" <ja...@harlequin.com> wrote in message
news:gb3uqtgqo43o9v8ag...@4ax.com...

Here's a little more historical detail, for anyone interested.

gnuemacs is quite different from the Eine/Zwei family of
editors, in that it uses the "bigline" structure to model the
contents of its buffers. Hemlock and the LW editor also
use this representation. Buffer pointers (BPs) are then simply
integers that point into the bigline. This can be a very space-
efficient structure, but the downside is that it is very hard to
have any sort of polymorphic "line" object. This makes it
much tougher to do things like graphics; a friend from Lucid
told me that Jamie Zawinski, a formidable hacker, spent about
a year wrestling with gnuemacs before he could make
it general enough to do the sorts of things he got Xemacs to do.
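
In case the bigline model is unfamiliar, here is a minimal Common Lisp
sketch of the idea as described above -- one flat character store per
buffer, with a BP being nothing but an integer index. The names are
invented for illustration and are not taken from gnuemacs, Hemlock, or
the LW editor:

(defstruct (flat-buffer (:conc-name fb-))
  ;; The whole buffer is one adjustable string; a buffer pointer (BP)
  ;; is just an integer index into it.
  (text (make-array 0 :element-type 'character
                      :adjustable t :fill-pointer t)))

(defun fb-insert-char (buffer bp char)
  "Insert CHAR at integer BP, shifting everything after it to the right."
  (let ((text (fb-text buffer)))
    (vector-push-extend #\Space text)              ; grow by one slot
    (replace text text :start1 (1+ bp) :start2 bp) ; shift the tail
    (setf (aref text bp) char)
    buffer))

;; (let ((b (make-flat-buffer))) (fb-insert-char b 0 #\a) (fb-text b)) => "a"

Compact and space-efficient, as noted above, but every "line" is implicit,
which is what makes polymorphic line objects awkward in this model.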

Zwei models buffers as linked lists of line objects, and BPs
are a pair {line,index}. This makes it easier to do some
clever stuff in Zwei, but IIRC lines in Zwei are structures,
not classes, so it turned out that we had to wrestle quite a
bit with Zwei to get display of multiple fonts and graphics
to work (on the order of many weeks).
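
For contrast, a similarly hedged sketch of the line-list model just
described: a doubly-linked list of line structures, with a BP being a
{line, index} pair (again invented names, not actual Zwei code):

(defstruct line
  (chars "" :type string)  ; the text of this single line
  prev next)               ; doubly-linked list of lines

(defstruct bp
  line                     ; which line object
  (index 0))               ; character position within that line

(defun bp-char (bp)
  "The character at buffer pointer BP."
  (char (line-chars (bp-line bp)) (bp-index bp)))

(defun insert-line-after (line new-chars)
  "Splice a fresh line containing NEW-CHARS in after LINE."
  (let ((new (make-line :chars new-chars
                        :prev line :next (line-next line))))
    (when (line-next line)
      (setf (line-prev (line-next line)) new))
    (setf (line-next line) new)
    new))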

The editor for FunO's Dylan product -- Deuce -- is the
next generation of Zwei in many ways. It has first class
polymorphic lines, first class BPs, and introduces the idea
first class "source containers" and "source sections". A
buffer is then dynamically composed of "section nodes".
This extra generality costs in space (it takes about 2 bytes of
storage for every byte in a source file, whereas gnuemacs
and the LW editor take about 1 byte), and it costs a little
in performance, but in return it's much easier to build some
cool features:
- Multiple fonts and colors fall right out (it took me about
1 day to get this working, and most of the work for fonts
was because FunO Dylan doesn't have built-in support for
"rich characters", so I had to roll my own).
- Graphics display falls right out (e.g., the display of a buffer
can show lines that separate sections, and there is a column
of icons that show where breakpoints are set, where there
are compiler warnings, etc. Doing both these things took
less than 1 day, but a comparable feature in Zwei took a
week. I wonder how long it took to do the icons in Lucid's
C/C++ environment, whose name I can't recall.)
- "Composite buffers" (buffers built by generating functions
such as "callers of 'foo'" or "subclasses of 'window') fall right
out of this design, and again, it took less than a day to do this.
It took a very talented hacker more than a month to build a
comparable (but non-extensible) version in Zwei for an in-house
VC system, and it never really worked right.
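
A hedged CLOS sketch of what "first class polymorphic lines" of the kind
described for Deuce might look like -- every class and generic-function
name below is invented here, none of it is Deuce's own code:

(defclass basic-line () ())

(defclass text-line (basic-line)
  ((chars :initarg :chars :accessor line-chars)))

(defclass section-divider-line (basic-line) ())

(defclass icon-line (basic-line)
  ((icon :initarg :icon :accessor line-icon)))  ; e.g. a breakpoint marker

(defgeneric display-line (line stream)
  (:documentation "Each kind of line knows how to draw itself."))

(defmethod display-line ((line text-line) stream)
  (write-line (line-chars line) stream))

(defmethod display-line ((line section-divider-line) stream)
  (write-line "------------------------------------------" stream))

With structure-based lines, by contrast, every consumer of a line has to
know all the cases itself, which is roughly the wrestling described above.
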
Of course, the Deuce design was driven by knowing about the
sorts of things that gnuemacs and Zwei didn't get right (*). It's so
much easier to stand on other people's shoulders...

(*) By "didn't get right" I really mean that gnuemacs and Zwei had
design goals different from Deuce, and in fact, they both had initial
design goals that were different from where they ended up.

Barry Margolin

Sep 25, 2001, 12:18:40 PM
In article <G6%r7.26054$vq.54...@typhoon.ne.mediaone.net>,

Scott McKay <s...@mediaone.net> wrote:
>Zwei models buffers as linked lists of line objects, and BPs
>are a pair {line,index}. This makes it easier to do some
>clever stuff in Zwei, but IIRC lines in Zwei are structures,
>not classes, so it turned out that we had to wrestle quite a
>bit with Zwei to get display of multiple fonts and graphics
>to work (on the order of many weeks).

Of course, the most likely reason for this, I think, is that ZWEI was first
implemented *before* Flavors, so there was no class system available. Most
of the higher-level data structures (e.g. buffers and windows) were later
Flavorized, but I guess no one felt that it was critical enough to redesign
the low-level lines and buffer pointers. Perhaps because these objects are
used in so many inner loops there might have been a worry about the
performance impact (we all remember what things were like when Dynamic
Windows first came out and suddenly the whole UI became a conglomeration of
instances with mouse sensitivity).

>The editor for FunO's Dylan product -- Deuce -- is the
>next generation of Zwei in many ways.

I'm just curious: is Deuce a self-referential acronym, and if so what does
it stand for?

--
Barry Margolin, bar...@genuity.net
Genuity, Woburn, MA
*** DON'T SEND TECHNICAL QUESTIONS DIRECTLY TO ME, post them to newsgroups.
Please DON'T copy followups to me -- I'll assume it wasn't posted to the group.

Tim Moore

Sep 25, 2001, 1:08:20 PM
In article <G6%r7.26054$vq.54...@typhoon.ne.mediaone.net>, "Scott McKay"
<s...@mediaone.net> wrote:
> Here's a little more historical detail, for anyone interested. gnuemacs
> is quite different from the Eine/Zwei family of editors, in that it uses
> the "bigline" structure to model the contents of its buffers. Hemlock
> and the LW editor also use this representation. Buffer pointers (BPs)
> are then simply integers that point into the bigline.
FWIW, Hemlock now seems to use the linked list of lines scheme.

> - Graphics display falls right out (e.g., the display of a buffer
> can show lines that separate sections, and there is a column of icons
> that show where breakpoints are set, where there are compiler
> warnings, etc. Doing both these things took less than 1 day, but a
> comparable feature in Zwei took a week. I wonder how long it took to
> do the icons in Lucid's C/C++ environment, whose name I can't
> recall.)

Cadillac.

Tim

Kent M Pitman

Sep 25, 2001, 3:33:56 PM
[ replying to comp.lang.lisp only
http://world.std.com/~pitman/pfaq/cross-posting.html ]

Barry Margolin <bar...@genuity.net> writes:

> In article <G6%r7.26054$vq.54...@typhoon.ne.mediaone.net>,
> Scott McKay <s...@mediaone.net> wrote:
> >Zwei models buffers as linked lists of line objects, and BPs
> >are a pair {line,index}. This makes it easier to do some
> >clever stuff in Zwei, but IIRC lines in Zwei are structures,
> >not classes, so it turned out that we had to wrestle quite a
> >bit with Zwei to get display of multiple fonts and graphics
> >to work (on the order of many weeks).
>
> Of course, the most likely reason for this, I think, is that ZWEI was first
> implemented *before* Flavors, so there was no class system available. Most
> of the higher-level data structures (e.g. buffers and windows) were later
> Flavorized, but I guess no one felt that it was critical enough to redesign
> the low-level lines and buffer pointers.

In my last days at Symbolics, worried that the hardware would go away, I
ported Zmacs to Symbolics Common Lisp (it still used the TV windows and
needed a CLIM port, but the data structures all ran in CL data structures;
I got rid of array leaders and special instance variables, something Moon
had previously claimed was too hard to do--I just love a challenge). The
port is on some Symbolics backup tape somewhere, I suppose. It had a few
glitches but basically worked; I had it on Select Epsilon so that I could
switch back and forth between it and regular Zmacs. It wasn't part
of my tasked activity--just a little hack I was doing in my free time as a
"backup plan" for Symbolics because I didn't believe the company was on track
for survival and I wanted the tools to survive. But I was laid off thenabouts
and the port went nowhere.

The ported code is called TRES (third in the series that begins with eine
and zwei, but since I'm a Spanish speaker not a German speaker, I switched
languages). TRES stands for "TRES Replaces Eine's Successor".

Raymond Toy

Sep 25, 2001, 4:16:53 PM
>>>>> "Barry" == Barry Margolin <bar...@genuity.net> writes:

Barry> performance impact (we all remember what things were like when Dynamic
Barry> Windows first came out and suddenly the whole UI became a conglomeration of
Barry> instances with mouse sensitivity).

Not me. Before my (Lisp) time. What was it like?

Ray

James A. Crippen

Sep 25, 2001, 4:43:05 PM
Kent M Pitman <pit...@world.std.com> writes:

> In my last days at Symbolics, worried that the hardware would go
> away, I ported Zmacs to Symbolics Common Lisp (it still used the TV
> windows and needed a CLIM port, but the data structures all ran in
> CL data structures; I got rid of array leaders and special instance
> variables, something Moon had previously claimed was too hard to
> do--I just love a challenge). The port is on some Symbolics backup
> tape somewhere, I suppose. It had a few glitches but basically
> worked; I had it on Select Epsilon so that I could switch back and
> forth between it and regular Zmacs. It wasn't part of my tasked
> activity--just a little hack I was doing in my free time as a
> "backup plan" for Symbolics because I didn't believe the company was
> on track for survival and I wanted the tools to survive. But I was
> laid off thenabouts and the port went nowhere.
>
> The ported code is called TRES (third in the series that begins with
> eine and zwei, but since I'm a Spanish speaker not a German speaker,
> I switched languages). TRES stands for "TRES Replaces Eine's
> Successor".

Damn, I'd love to get a copy of that. Does Kalman or Dave Schmidt
read c.l.l? Perhaps you should mention this on SLUG and see if they
remember what tape it might have ended up on.

Heck, some aspiring Lispm hacker could pick up from where you left off
and keep hacking it out until TRES was fully portable using CLIM.
Then you could run it in Allegro, for instance. It would completely
kick ass over using [X]Emacs and an inferior Lisp process.

Honestly, if they finished the job Symbolics could even sell that.
And I wouldn't be surprised if people snapped it up. Even with my
novice-level experience ZWEI seems a more powerful editor for Lisp
hacking because it's integrated with the running Lisp system. But
then, to sell it they'd have to finish the rewrite. And it's not like
Kalman doesn't have enough to do already.

(I think Symbolics needs more people. But let's not start another
flamewar over that...)

Actually, that sounds like from what you describe even in its current
state it could be superior to ZWEI, at least in terms of
maintainability/debuggability. I looked at the sources to ZWEI not
too long ago and ran away frightened. The compiler looked easier to
understand... :-P

Barry Margolin

Sep 25, 2001, 4:48:20 PM
In article <4nadzjt...@rtp.ericsson.se>,

Slow as molasses. Moving the mouse over a Lisp Listener causes the CPU to
spin heavily.

Kent M Pitman

Sep 25, 2001, 8:47:28 PM
ja...@unlambda.com (James A. Crippen) writes:

> Kent M Pitman <pit...@world.std.com> writes:
>
> > In my last days at Symbolics, worried that the hardware would go
> > away, I ported Zmacs to Symbolics Common Lisp (it still used the TV
> > windows and needed a CLIM port, but the data structures all ran in
> > CL data structures; I got rid of array leaders and special instance
> > variables, something Moon had previously claimed was too hard to
> > do--I just love a challenge). The port is on some Symbolics backup
> > tape somewhere, I suppose. It had a few glitches but basically
> > worked; I had it on Select Epsilon so that I could switch back and
> > forth between it and regular Zmacs. It wasn't part of my tasked
> > activity--just a little hack I was doing in my free time as a
> > "backup plan" for Symbolics because I didn't believe the company was
> > on track for survival and I wanted the tools to survive. But I was
> > laid off thenabouts and the port went nowhere.
> >
> > The ported code is called TRES (third in the series that begins with
> > eine and zwei, but since I'm a Spanish speaker not a German speaker,
> > I switched languages). TRES stands for "TRES Replaces Eine's
> > Successor".
>
> Damn, I'd love to get a copy of that. Does Kalman or Dave Schmidt
> read c.l.l? Perhaps you should mention this on SLUG and see if they
> remember what tape it might have ended up on.

I've told Kalman it's there. It was probably in my personal dir. That
may have gone to a different backup tape than the "valuable assets".

Btw, recall that I mean "Symbolics Common Lisp" and not "ANSI Common Lisp".
So there would be a little more porting to do beyond that. But I think
the remaining stuff was largely straightforward, and mostly had to do with
language extensions like all the extra search functions, etc.

> Heck, some aspiring Lispm hacker could pick up from where you left off
> and keep hacking it out until TRES was fully portable using CLIM.
> Then you could run it in Allegro, for instance. It would completely
> kick ass over using [X]Emacs and an inferior Lisp process.

I agree. That's why I did it in the first place. It was my personal
backup plan in case the VLM (er, Open Genera) didn't fly.

> Honestly, if they finished the job Symbolics could even sell that.
> And I wouldn't be surprised if people snapped it up. Even with my
> novice-level experience ZWEI seems a more powerful editor for Lisp
> hacking because it's integrated with the running Lisp system.

Yeah, I really miss the ability to stack up possibilities buffers and
to use Tags Multiple Query Replace From Buffer.

> But then, to sell it they'd have to finish the rewrite. And it's not like
> Kalman doesn't have enough to do already.

I'm sure some consultant would do it in trade for a share of the revenues.

> (I think Symbolics needs more people. But let's not start another
> flamewar over that...)
>
> Actually, that sounds like from what you describe even in its current
> state it could be superior to ZWEI, at least in terms of
> maintainability/debuggability. I looked at the sources to ZWEI not
> too long ago and ran away frightened. The compiler looked easier to
> understand... :-P

I can't speak to that. I don't think it got any worse in the process but
I can't remember if it got better. That wasn't a goal. Getting it to run,
and getting past those "impossible" things was my goal. I hate it when people
say things are impossible. Especially on a Lisp Machine.

Sashank Varma

Sep 25, 2001, 10:39:19 PM
In article <sfwsnda...@world.std.com>, Kent M Pitman
<pit...@world.std.com> wrote:

>> Actually, that sounds like from what you describe even in its current
>> state it could be superior to ZWEI, at least in terms of
>> maintainability/debuggability. I looked at the sources to ZWEI not
>> too long ago and ran away frightened. The compiler looked easier to
>> understand... :-P
>
>I can't speak to that. I don't think it got any worse in the process but
>I can't remember if it got better. That wasn't a goal. Getting it to run,
>and getting past those "impossible" things was my goal. I hate it when people
>say things are impossible. Especially on a Lisp Machine.

You guys sound like astronauts in the NASA of the 1960s.

Sashank.

PS: That was a compliment.

Scott McKay

Sep 25, 2001, 10:44:34 PM

"Barry Margolin" <bar...@genuity.net> wrote in message
news:A72s7.2$5w4.728@burlma1-snr2...

> In article <G6%r7.26054$vq.54...@typhoon.ne.mediaone.net>,
> Scott McKay <s...@mediaone.net> wrote:
> >Zwei models buffers as linked lists of line objects, and BPs
> >are a pair {line,index}. This makes it easier to do some
> >clever stuff in Zwei, but IIRC lines in Zwei are structures,
> >not classes, so it turned out that we had to wrestle quite a
> >bit with Zwei to get display of multiple fonts and graphics
> >to work (on the order of many weeks).
>
> Of course, the most likely reason for this, I think, is that ZWEI was first
> implemented *before* Flavors, so there was no class system available. Most
> of the higher-level data structures (e.g. buffers and windows) were later
> Flavorized, but I guess no one felt that it was critical enough to redesign
> the low-level lines and buffer pointers. Perhaps because these objects are
> used in so many inner loops there might have been a worry about the
> performance impact (we all remember what things were like when Dynamic
> Windows first came out and suddenly the whole UI became a conglomeration of
> instances with mouse sensitivity).

Yes, definitely. Remember ":ordered-instance-variables", which
was a hack to be able to access Flavor instance variables at
roughly the same speed as structure slots?

> >The editor for FunO's Dylan product -- Deuce -- is the
> >next generation of Zwei in many ways.
>
> I'm just curious: is Deuce a self-referential acronym, and if so what does
> it stand for?
>

Actually, I called it Deuce as a conscious homage to Zwei, then
force-fit an acronym: Dylan Environment Universal Code Editor.
"Universal" was an adjective that I and some other high-school
hacker friends always seemed to self-importantly apply to our
(in retrospect) silly programs, so it seemed like a good way to
poke fun at myself.

cbbr...@acm.org

Sep 26, 2001, 12:36:12 AM

[Unfortunately, these days, NASA getting something into the sky
requires that the weight of the stack documentation exceeds the weight
of the fuel required to put the device into orbit. And working on it
requires reading the documentation first...]
--
(reverse (concatenate 'string "ac.notelrac.teneerf@" "454aa"))
http://www.cbbrowne.com/info/wp.html
Who needs fault-tolerant computers when there's obviously an ample
market of fault-tolerant users?

James A. Crippen

Sep 26, 2001, 3:20:11 AM
Kent M Pitman <pit...@world.std.com> writes:

> ja...@unlambda.com (James A. Crippen) writes:
>
> > Damn, I'd love to get a copy of that. Does Kalman or Dave Schmidt
> > read c.l.l? Perhaps you should mention this on SLUG and see if they
> > remember what tape it might have ended up on.
>
> I've told Kalman it's there. It was probably in my personal dir. That
> may have gone to a different backup tape than the "valuable assets".

Well, if I were in your shoes I'd offer to come over and visit one
weekend to go hunting through the tapes myself. But that's me.

> Btw, recall that I mean "Symbolics Common Lisp" and not "ANSI Common Lisp".
> So there would be a little more porting to do beyond that. But I think
> the remaining stuff was largely straightforward, and mostly had to do with
> language extensions like all the extra search functions, etc.

Ahh, just replace all the greek and math symbols and strip out the
font stuff and it should work fine, as long as you didn't use any of
that new-fangled CLOS stuff. ;-)



> > Heck, some aspiring Lispm hacker could pick up from where you left off
> > and keep hacking it out until TRES was fully portable using CLIM.
> > Then you could run it in Allegro, for instance. It would completely
> > kick ass over using [X]Emacs and an inferior Lisp process.
>
> I agree. That's why I did it in the first place. It was my personal
> backup plan in case the VLM (er, Open Genera) didn't fly.

I was telling someone about the VLM today... About how many lines of Alpha
assembler is it, do you remember? A lot, I know that. I guess it did
fly, after a fashion. But it didn't fix the bad management. Oh well.

> > Honestly, if they finished the job Symbolics could even sell that.
> > And I wouldn't be surprised if people snapped it up. Even with my
> > novice-level experience ZWEI seems a more powerful editor for Lisp
> > hacking because it's integrated with the running Lisp system.
>
> Yeah, I really miss the ability to stack up possibilities buffers and
> to use Tags Multiple Query Replace From Buffer.

I miss how M-. worked on *anything*. And DocEx. I *really* *really*
*really* miss having DocEx. Sigh.

> > But then, to sell it they'd have to finish the rewrite. And it's not like
> > Kalman doesn't have enough to do already.
>
> I'm sure some consultant would do it in trade for a share of the revenues.

Oh yeah, certainly. I can hear people volunteering already... :-)

> > Actually, that sounds like from what you describe even in its current
> > state it could be superior to ZWEI, at least in terms of
> > maintainability/debuggability. I looked at the sources to ZWEI not
> > too long ago and ran away frightened. The compiler looked easier to
> > understand... :-P
>
> I can't speak to that. I don't think it got any worse in the
> process but I can't remember if it got better. That wasn't a goal.

Well, it's not in C, so that's a big step in the right direction
anyway. If it was in CL then the process of making it easier to
maintain could be done gradually.

> Getting it to run, and getting past those "impossible" things was my
> goal. I hate it when people say things are impossible. Especially
> on a Lisp Machine.

Nothing is impossible on a Lisp Machine. Except maybe ... uhh
... getting the damned tape drive to work. They love eating tapes!

Kent M Pitman

Sep 26, 2001, 4:05:24 AM
ja...@unlambda.com (James A. Crippen) writes:

> > Yeah, I really miss the ability to stack up possibilities buffers and
> > to use Tags Multiple Query Replace From Buffer.
>
> I miss how M-. worked on *anything*. And DocEx. I *really* *really*
> *really* miss having DocEx. Sigh.

Remember though that when you make a standalone Zmacs that is an application
running on another operating system, it won't suddenly be able to M-. the
sources to the Windows operating system. ;-)

Raymond Wiker

Sep 26, 2001, 4:26:37 AM
Kent M Pitman <pit...@world.std.com> writes:

That's a feature, not a bug.

--
Raymond Wiker
Raymon...@fast.no

Antonio Leitao

Sep 26, 2001, 4:47:00 AM
Kent M Pitman <pit...@world.std.com> writes:

> Yeah, I really miss the ability to stack up possibilities buffers and
> to use Tags Multiple Query Replace From Buffer.

Can you explain the purpose of the command you mention?

I implemented (sort of) in Emacs some of the Explorer Lisp Machine
commands, including Tags Query Search/Replace. If it's not too hard,
I might be tempted to add that one.

António Leitão.

Tim Bradshaw

Sep 26, 2001, 4:50:10 AM
* Tim Moore wrote:

> Cadillac.

I think that was the internal name, the external one was Energize.

--tim

Kent M Pitman

Sep 26, 2001, 7:54:22 AM
Antonio Leitao <a...@gia.ist.utl.pt> writes:

Multiple Query Replace does parallel substitution (kinda like the purpose
of PSETQ). It's the only way to implement a swap of strings. Multiple
Query Replace of foo for bar and bar for foo can't be done as two sequential
replacements. (Multiple Query Replace just keeps prompting string1 replacement1
string2 replacement2 etc. until a stringN is null.)
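
A toy Common Lisp illustration of why the replacement must be done in
parallel: each position of the text is rewritten at most once, so swapping
two strings works, which two sequential replaces cannot do. This is only a
sketch of the principle, not the Zmacs command:

(defun parallel-replace (string pairs)
  "PAIRS is a list of (FROM . TO) strings.  Scan STRING left to right,
replacing at each position the first FROM that matches there."
  (with-output-to-string (out)
    (let ((i 0) (n (length string)))
      (loop while (< i n) do
        (let ((hit (find-if (lambda (pair)
                              (let ((from (car pair)))
                                (and (<= (+ i (length from)) n)
                                     (string= string from
                                              :start1 i
                                              :end1 (+ i (length from))))))
                            pairs)))
          (cond (hit
                 (write-string (cdr hit) out)
                 (incf i (length (car hit))))
                (t
                 (write-char (char string i) out)
                 (incf i))))))))

;; (parallel-replace "foo bar foo" '(("foo" . "bar") ("bar" . "foo")))
;; => "bar foo bar"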

Tags xxx commands do a command for all files in a tags table.

The "xxx from buffer" commands read the arguments from a buffer rather
than interactively, so you can edit up a huge truckload of replacements
and then do them all at once. It was really useful for Zetalisp=>Common-Lisp
conversions when there were a bunch of operators to rename.

The Lisp Machine was full of cool stuff like this that really
addressed "big system" stuff.

Of course, the other neat feature was that you could stop in the
middle and resume later with control-. EVEN IF you had done other tags
table commands in the interim (and even if you had suspended those,
too); it maintained a stack of buffers called possibilities buffers,
each of which was a reminder of what state your search was in. This
was very cool because mid-replacement you could see something else you
wanted to do and just do that and then later pop back to the original
search. I often found myself pushed 10 or 20 searches deep and
eventually was able to get back to and finish all my operations
successfully even working interrupt-driven.
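
At heart that is just a stack of suspended searches; a toy sketch of the
mechanism (nothing like the real Zwei code):

(defvar *possibilities* '()
  "Stack of suspended searches, most recent first.")

(defun suspend-search (resume-closure)
  "Push the current search state and go do something else."
  (push resume-closure *possibilities*))

(defun resume-search ()
  "What a command like control-. would invoke: pop and continue the
most recently suspended search."
  (when *possibilities*
    (funcall (pop *possibilities*))))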


Scott McKay

Sep 26, 2001, 9:36:47 AM
"James A. Crippen" <ja...@unlambda.com> wrote in message
news:m3d74e8...@kappa.unlambda.com...

> Kent M Pitman <pit...@world.std.com> writes:
>
> > I agree. That's why I did it in the first place. It was my personal
> > backup plan in case the VLM (er, Open Genera) didn't fly.
>
> > I was telling someone about the VLM today... About how many lines of Alpha
> > assembler is it, do you remember? A lot, I know that. I guess it did
> fly, after a fashion. But it didn't fix the bad management. Oh well.

This is more than you wanted to know, but...

Open Genera -- like the Lispm-in-a-Sun-box -- had a bunch of
"life support" code that glued the Alpha (or Sun) hardware and OS
to the Ivory hardware. This was what connected up the disk, network,
window system, etc. I don't recall how much code this was, but it
was neither a small amount nor a huge amount -- probably in the
range of 20K to 30K lines of C and Lisp code. Gary Palter did all
of this code.

The Ivory emulator consisted of a "compiler" and "instruction scheduler",
implemented as a set of macros and some post-processors that generated
annotated (*) Alpha assembly source code. The macros and assembler
were probably 2K to 3K lines of Lisp code. The emulator source was
probably about 15K lines, but expanded into more actual assembly code.
Paul Robertson wrote the original "compiler", I did the instruction
scheduler, and the two of us implemented the bulk of the Ivory
instruction set. Tucker Withington did the memory architecture (no
trivial job given the tagged memory, "invisible" pointers, GC traps,
and all that) and implementation.
Tucker and I did the instruction decode and main "dispatch" loop, which
I think we got down to somewhere around 8 or 9 cycles.

(*) The annotations consisted mainly of cycle and stall counts for all the
instructions, which told you how good/bad a job our compiler/scheduler
did. Using these annotations, we could do some hand optimization
to get important bits of code running fast. On the original release of
the DEC Alpha workstation, the VLM achieved a performance of
about 85% of the XL1200-class Ivory chip. New Alphas run several
times faster than this. We were all quite proud of this performance,
given the extreme schedule pressure we were under.

Craig Brozefsky

Sep 26, 2001, 10:54:36 AM

Never has my emacs seemed so shallow. My hopes and dreams disappear
like smoke from a burn test.

--
Craig Brozefsky <cr...@red-bean.com>
http://www.red-bean.com/~craig
The outer space which me wears it has sexual intercourse. - opus

Erik Winkels

Sep 26, 2001, 12:41:40 PM
Craig Brozefsky <cr...@red-bean.com> writes:
>
> Never has my emacs seemed so shallow. My hopes and dreams disappear
> like smoke from a burn test.

One starts wondering how much more has been lost in the depths of time
since the "worse is better"-attitude started prevailing :-\


Erik.
--
"SCSI is not magic. There are fundamental technical reasons why it
is necessary to sacrifice a young goat to your SCSI chain now and
then." -- John Woods

James A. Crippen

Sep 26, 2001, 1:05:51 PM
Kent M Pitman <pit...@world.std.com> writes:

Oh, I'm sure I could figure out how to hack that in... :-)

James A. Crippen

Sep 26, 2001, 1:15:37 PM
"Scott McKay" <s...@mediaone.net> writes:

> "James A. Crippen" <ja...@unlambda.com> wrote in message
> news:m3d74e8...@kappa.unlambda.com...
> > Kent M Pitman <pit...@world.std.com> writes:
> >
> > > I agree. That's why I did it in the first place. It was my personal
> > > backup plan in case the VLM (er, Open Genera) didn't fly.
> >
> > I was telling someone about the VLM today... About how many lines of Alpha
> > assembler is it, do you remember? A lot, I know that. I guess it did
> > fly, after a fashion. But it didn't fix the bad management. Oh well.
>
> This is more than you wanted to know, but...
>
> Open Genera -- like the Lispm-in-a-Sun-box -- had a bunch of "life
> support" code that glued the Alpha (or Sun) hardware and OS to the
> Ivory hardware. This was what connected up the disk, network,
> window system, etc. I don't recall how much code this was, but it
> was neither a small amount nor a huge amount -- probably in the
> range of 20K to 30K lines of C and Lisp code. Gary Palter did all
> of this code.

Right. That stuff wouldn't be terribly difficult to port, I think.



> The Ivory emulator consisted of a "compiler" and "instruction
> scheduler", implemented as a set of macros and some post-processors
> that generated annotated (*) Alpha assembly source code. The macros
> and assembler were probably 2K to 3K lines of Lisp code. The
> emulator source was probably about 15K lines, but expanded into more
> actual assembly code. Paul Robertson wrote the original "compiler",
> I did the instruction scheduler, and the two of us implemented the
> bulk of the Ivory instruction set. Tucker Withington did the memory
> architecture (no trivial job given the tagged memory, "invisible"
> pointers, GC traps, and all that) and implementation. Tucker and I
> did the instruction decode and main "dispatch" loop, which I think
> we got down to somewhere around 8 or 9 cycles.

8 or 9 cycles? That's pretty incredible. It's this bit that sounds
like it'd be hell to convert to something else. I've often wondered
exactly what would be necessary to convert the VLM to run on Intel x86
boxen. The latest round of >= 1GHz processors should be able to at
least approximate an XL1200 on most things, even accounting for using
double words (64b). Certainly the PDP-10 emulators do similar work
and do it fairly well (although the PDP-10s weren't exactly fast to
begin with).

I think all the weirdness with memory would be a difficult hotspot
though.



> (*) The annotations consisted mainly of cycle and stall counts for all the
> instructions, which told you how good/bad a job our compiler/scheduler
> did. Using these annotations, we could do some hand optimization
> to get important bits of code running fast. On the original release of
> the DEC Alpha workstation, the VLM achieved a performance of
> about 85% of the XL1200-class Ivory chip. New Alphas run several
> times faster than this. We were all quite proud of this performance,
> given the extreme schedule pressure we were under.

Kalman Reti says that he gets 80 times the speed of the XL1200 on
certain operations. So the VLM just needed to wait for hardware to
catch up with it.

I just can't afford the $5000 price tag plus the associated DEC Unix
license.

j...@itasoftware.com

Sep 26, 2001, 3:50:18 PM
Erik Winkels <aer...@xs4all.nl> writes:

> Craig Brozefsky <cr...@red-bean.com> writes:
> >
> > Never has my emacs seemed so shallow. My hopes and dreams disappear
> > like smoke from a burn test.
>
> One starts wondering how much more has been lost in the depths of time
> since the "worse is better"-attitude started prevailing :-\

Suddenly, I feel old.

Friedrich Dominicus

Sep 27, 2001, 1:57:39 AM
Well, this is probably off-topic, but I really like to read such
stories. I wish they were collected and available somewhere in a
book or the like. But I also wonder who is using OpenGenera? What are
people doing with it? And could it be that it could and should (?) be
ported, e.g. to an "actual" 64-bit processor? It seems that the end of
the Alpha processors is near. So what will happen then?

Regards
Friedrich

Christopher Stacy

Sep 27, 2001, 3:36:37 AM
>>>>> On 27 Sep 2001 07:57:39 +0200, Friedrich Dominicus ("Friedrich") writes:

Friedrich> Well, this is probably off-topic, but I really like to read such
Friedrich> stories. I wish they were collected and available somewhere in a
Friedrich> book or the like. But I also wonder who is using OpenGenera? What are
Friedrich> people doing with it? And could it be that it could and should (?) be
Friedrich> ported, e.g. to an "actual" 64-bit processor? It seems that the end of
Friedrich> the Alpha processors is near. So what will happen then?

I think that Symbolics said at some point in the past that if someone
was interested in paying for it, it could be ported.

Martin Cracauer

Sep 28, 2001, 1:46:14 PM
Anybody worked on LispMs and CMUCL's hemlock? Would you think we can
do some of the fancy stuff in Hemlock?


For those who don't know: Hemlock is an emacs-like editor written in
Common Lisp, thereby allowing much better integration with the Lisp it
controls. Not to speak of static scope. Although it runs on CMUCL
only for now (I think), it is quite generic CL + CLX code and should
be easy to port to any CL that has CLX (and that may include a Windows
Lisp).

Martin

--
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Martin Cracauer <crac...@bik-gmbh.de> http://www.bik-gmbh.de/~cracauer/
FreeBSD - where you want to go. Today. http://www.freebsd.org/

James A. Crippen

Sep 28, 2001, 3:19:58 PM
crac...@counter.bik-gmbh.de (Martin Cracauer) writes:

> Anybody worked on LispMs and CMUCL's hemlock? Would you think we can
> do some of the fancy stuff in Hemlock?
>
> For those who don't know: Hemlock is an emacs-like editor written in
> Common Lisp, thereby allowing much better integration with the Lisp it
> controls. Not to speak of static scope. Although it runs on CMUCL
> only for now (I think), it is quite generic CL + CLX code and should
> be easy to port to any CL that has CLX (and that may include a Windows
> Lisp).

The only *major* drawback that I see is that because Hemlock is
somewhat aged it may require some hacking to pull apart some of its
pieces. I haven't looked at it yet, but I suspect that being as old
as it is it may have some cruft in it that needs to be rehacked or
thought out again. I have a feeling that it's not particularly
CLOSified, in any case, which seems to be an important goal for many
people.

But yes, it would be an excellent place to start from, even if a lot
of it got torn out. (And for us historical fans, it would immediately
place the new CL Emacs at a well known position in the Emacs family
tree. :-)

I have a question for the people interested in a CL Emacs. Do you all
think starting a mailing list for this subject would be a good idea?
Or is it better to continue this thread in c.l.l and on CLiki?

If people think a mailing list is a good idea I can offer my server.
I also have the ability to serve CVS, shell accounts, and web pages.
The machine is already hosting a couple of other small Lisp-related
projects.

Martin Cracauer

Sep 28, 2001, 3:54:00 PM
ja...@unlambda.com (James A. Crippen) writes:

[hemlock]


>I have a feeling that it's not particularly
>CLOSified, in any case, which seems to be an important goal for many
>people.

It doesn't use CLOS.

That raises an interesting question: Would a CL Emacs with a tight CLIM
integration be the ultimate goal? Or a standalone CL emacs?

Web browser operating from an emacs-like buffer, interaction through
CLIM? How does that sound.

[...]


>I have a question for the people interested in a CL Emacs. Do you all
>think starting a mailing list for this subject would be a good idea?
>Or is it better to continue this thread in c.l.l and on CLiki?

Sorry, CLiki?

A mailing list is often a good idea. Although my service for the list
was often crappy (I apologize, I took too much stuff besides CMUCL and
FreeBSD), the free CLIM project got a long way.

At least one prominent CMUCL guy uses Hemlock and fixes things, so we
might have an expert. He doesn't read usenet, though (I think).

Tim Moore

Sep 28, 2001, 5:24:48 PM
In article <9p2kgo$125l$1...@counter.bik-gmbh.de>, "Martin Cracauer"
<crac...@counter.bik-gmbh.de> wrote:


> That raises an interesting question: Would a CL Emacs with a tight CLIM
> integration be the ultimate goal? Or a standalone CL emacs? Web browser
> operating from an emacs-like buffer, interaction through CLIM? How does
> that sound.

I think the answer is tight CLIM integration, especially with McCLIM, if for no
other reason than the fact that CLIM requires some emacs-like behavior in text
fields and command processing and it would be nice to kill two birds with
one stone: get both a CLIM-based editor and an editor for CLIM. I've
been noodling around with implementing a core of emacs for the text-field
and text-editor gadgets in McCLIM, but it's not quite there yet.


> [...]
>>I have a question for the people interested in a CL Emacs. Do you all
>>think starting a mailing list for this subject would be a good idea? Or
>>is it better to continue this thread in c.l.l and on CLiki?
> Sorry, CLiki?

http://ww.telent.net/cliki. The home of Lisp hackers who are too young
to remember Jimmy Carter :):):)

Tim

James A. Crippen

Sep 28, 2001, 5:53:30 PM
crac...@counter.bik-gmbh.de (Martin Cracauer) writes:

> ja...@unlambda.com (James A. Crippen) writes:
>
> [hemlock]
> >I have a feeling that it's not particularly
> >CLOSified, in any case, which seems to be an important goal for many
> >people.
>
> It doesn't use CLOS.
>
> That raises an interesting question: Would a CL Emacs with a tight CLIM
> integration be the ultimate goal? Or a standalone CL emacs?

I opt for loose CLIM integration. Thus, the interface between CLEmacs
and its display system should be abstracted enough to where the
display backend could be some funky extension of CLX or maybe Garnet,
or CLIM. Or even Dynamic Windows. Or (gh0ddess help you) TV!

Thus CLEmacs needs its own display library whose back end is portable
to some custom CLX (which would distribute bundled with a Free Lisp),
or to a CLIM (which would necessitate a Commercial Lisp, currently).

When CLIM is Free then we can consider removing this library. Sounds
like extra work? Then someone had better finish a Free CLIM RSN. I
don't think we should wait.

> Web browser operating from an emacs-like buffer, interaction through
> CLIM? How does that sound.

Web browser necessitates some sort of networking library. Another
thing I think important. Should be portable, but again its backend
can be customized for particular Lisp implementations.

> [...]
> >I have a question for the people interested in a CL Emacs. Do you all
> >think starting a mailing list for this subject would be a good idea?
> >Or is it better to continue this thread in c.l.l and on CLiki?
>
> Sorry, CLiki?

CLiki is a wiki for Common Lisp. http://ww.telent.net/cliki/index

> A mailing list is often a good idea. Although my service for the list
> was often crappy (I apologize, I took too much stuff besides CMUCL and
> FreeBSD), the free CLIM project got a long way,

Okay, I can make a mailing list. News to follow.

> At least one prominent CMUCL guy uses Hemlock and fixes things, so we
> might have an expert. He doesn't read usenet, though (I think).

Just would have to get him on board.

James A. Crippen

Sep 28, 2001, 6:03:48 PM
ja...@unlambda.com (James A. Crippen) writes:

> crac...@counter.bik-gmbh.de (Martin Cracauer) writes:
>
> > ja...@unlambda.com (James A. Crippen) writes:
> >
> > [hemlock]
> > >I have a feeling that it's not particularly
> > >CLOSified, in any case, which seems to be an important goal for many
> > >people.
> >
> > It doesn't use CLOS.
> >
> > That raises an interesting question: Would a CL Emacs with a tight CLIM
> > integration be the ultimate goal? Or a standalone CL emacs?
>
> I opt for loose CLIM integration. Thus, the interface between CLEmacs
> and its display system should be abstracted enough to where the
> display backend could be some funky extension of CLX or maybe Garnet,
> or CLIM. Or even Dynamic Windows. Or (gh0ddess help you) TV!
>
> Thus CLEmacs needs its own display library whose back end is portable
> to some custom CLX (which would distribute bundled with a Free Lisp),
> or to a CLIM (which would necessitate a Commercial Lisp, currently).

I should mention that I mean a *lightweight* library. Basically a big
pile of wrappers which should behave similarly with each different
backend.
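
A hedged sketch of what such a wrapper layer could look like -- a handful
of generic functions that each backend specializes on its own class; every
name below is invented for illustration:

(defgeneric open-display (backend &key)
  (:documentation "Connect to the underlying window system."))
(defgeneric draw-string (backend x y string &key face))
(defgeneric clear-region (backend x y width height))
(defgeneric force-display-output (backend))

;; Each backend is just a class; CLX, CLIM, etc. supply the methods.
(defclass clx-backend ()
  ((display :initarg :display)
   (window  :initarg :window)))

;; (defmethod draw-string ((b clx-backend) x y string &key face)
;;   ;; ... xlib:draw-glyphs and friends would go here ...
;;   )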

> When CLIM is Free then we can consider removing this library. Sounds
> like extra work? Then someone had better finish a Free CLIM RSN. I
> don't think we should wait.

And I'd love to see the CLX and McCLIM backends done concurrently.

> > Web browser operating from an emacs-like buffer, interaction through
> > CLIM? How does that sound.
>
> Web browser necessitates some sort of networking library. Another
> thing I think important. Should be portable, but again its backend
> can be customized for particular Lisp implementations.

This also points to a filesystem handling library (including mechanisms
to treat FTP, HTTP, TAR, etc as filesystems).

James A. Crippen

Sep 28, 2001, 6:17:44 PM
Okay, I've got a mailing list set up for discussion of a CL
implementation of Emacs.

Subscribe at http://lists.unlambda.com/mailman/listinfo/cl-emacs .

Usual mailing list rules apply -- ie, don't post the address on
Usenet, or else I will have to restrict posting to members only which
is irritating.

'james

Erik Naggum

Sep 28, 2001, 6:57:52 PM
* James A. Crippen

> I opt for loose CLIM integration. Thus, the interface between CLEmacs
> and its display system should be abstracted enough to where the display
> backend could be some funky extension of CLX or maybe Garnet, or CLIM.
> Or even Dynamic Windows. Or (gh0ddess help you) TV!

One of my desires when starting to look at clemacs was to build something
that could be a replacement for _xterm_ as well as my trusty old Emacs.
That would be the client side. The client side would also take care of
file system accesses, and would be able to invoke remote user-level file
system services and ship files back to the server. The server would be a
heavy-duty persistent Lisp process that I would attach to and detach from
when my clients wanted to move around, and which probably served more
than one user, just like database servers do. The upside of this is that
the display client would not need to be given more than screenfuls at a
time, and the file system clients could also serve information on demand
instead of stuffing the entire file across the wire. All sorts of very
interesting issues crop up in such a "distributed" system that show off
Common Lisp's abstraction power.

///
--
Why, yes, I love freedom, Mr. President, but freedom means accepting risks.
So, no, Mr. President, I am not with you, and I am not with the terrorists.
I would be happier if you left the airports alone and took care of all the
cars that are a much bigger threat to my personal safety than any hijacking.

Friedrich Dominicus

Sep 29, 2001, 1:41:08 AM
ja...@unlambda.com (James A. Crippen) writes:

>
> I opt for loose CLIM integration. Thus, the interface between CLEmacs
> and its display system should be abstracted enough to where the
> display backend could be some funky extension of CLX or maybe Garnet,
> or CLIM. Or even Dynamic Windows. Or (gh0ddess help you) TV!
>

Well, CLIM is there and it seems to work on all platforms; I do not
think the same is true for CLX. And even if it's commercial, that
does not mean that it will stay that way forever. On the other hand, it
was said here more than once that some are working on a "free"
alternative. I expect they will make quite a bit better progress than the
CL-Emacs implementers. BTW, how will one handle non-windowing systems?

What is also still unanswered is whether it must be "compatible" with
Emacs, and what has to be compatible. Maybe one should think about what one
expects from an editor today and what this editor should be capable of
doing. Probably we're not talking about an editor but a whole
"Personal Information Manager"? And if we see it from there, probably
the decisions will look completely different.

Regards
Friedrich

Daniel Barlow

Sep 29, 2001, 10:35:00 AM
Erik Naggum <er...@clemacs.org> writes:

> One of my desires when starting to look at clemacs was to build something
> that could be a replacement for _xterm_ as well as my trusty old Emacs.

:-)

I find myself wishing for M-/ support in my xterms far too often.


-dan

--

http://ww.telent.net/cliki/ - Link farm for free CL-on-Unix resources

Peter Wood

Sep 29, 2001, 12:47:52 PM
Daniel Barlow <d...@telent.net> writes:

> Erik Naggum <er...@clemacs.org> writes:
>
> > One of my desires when starting to look at clemacs was to build something
> > that could be a replacement for _xterm_ as well as my trusty old Emacs.
>
> :-)
>
> I find myself wishing for M-/ support in my xterms far too often.
>

It's possible to do quite a bit with readline. I have my Clisp set up
to do loading (C-x l), macro-expand (C-x m), compile (C-x k),
write-to-file (C-x w) and lisp indentation (F1) in an xterm or
console. Clisp has both lisp and pathname completion out of the box.
Ok, it's not dabbrev-expand, but still convenient :-)

Regards,
Peter

Erik Naggum

Sep 29, 2001, 1:09:41 PM
* Peter Wood

> It's possible to do quite a bit with readline.

One of the things I would like to do in xterm, is to see the output of a
pipeline and then decide what to do with it. One way to accomplish this
is to send the output into a temporary file, view the file, and then use
the file for input into the next pipeline. However, it would be nice if
it were possible to use a short-hand. Suppose you could request to use
the output of the previous pipeline to be interpolated as arguments or
used as standard input, possibly after altering it somehow so you knew
you got what you wanted instead of half-trusting that regexp-abusing data
mangler. That is, I think the output of a process is a _value_ that I
should be able to use for something more than merely gawk at.
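
A minimal sketch of the "output as a value" idea in present-day Common
Lisp (UIOP postdates this thread, and run-capture is an invented name):

(defun run-capture (command)
  "Run COMMAND through the shell and return its output as a string."
  (uiop:run-program command :output :string))

;; The result is an ordinary Lisp value, so it can be inspected, edited,
;; or fed into the next computation rather than merely gawked at:
;; (let ((listing (run-capture "ls *.lisp")))
;;   (count #\Newline listing))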

Kent M Pitman

Sep 29, 2001, 1:32:14 PM
Erik Naggum <er...@clemacs.org> writes:

> * Peter Wood
> > It's possible to do quite a bit with readline.
>
> One of the things I would like to do in xterm, is to see the output of a
> pipeline and then decide what to do with it. One way to accomplish this
> is to send the output into a temporary file, view the file, and then use
> the file for input into the next pipeline. However, it would be nice if
> it were possible to use a short-hand. Suppose you could request to use
> the output of the previous pipeline to be interpolated as arguments or
> used as standard input, possibly after altering it somehow so you knew
> you got what you wanted instead of half-trusting that regexp-abusing data
> mangler. That is, I think the output of a process is a _value_ that I
> should be able to use for something more than merely gawk at.

To some degree, this is what Dick Waters' iteration facility is all about.

Also, I have the distant memory that Mike McMahon (key contributor to
and maintainer of Zmacs/Zmail, among many other things while at
Symbolics) went off to do this after leaving Symbolics. Was that
company Oberon or was there an intermediate? I believe Oberon had
some visual programming tools vaguely in this family, but I don't know
if they were first or second generation. I would check out this area
and where it went before starting from scratch; there's surely lots
already done that you could learn from.

Rob Warnock

Sep 30, 2001, 6:25:54 AM
Erik Naggum <er...@clemacs.org> wrote:
+---------------

| One of the things I would like to do in xterm, is to see the output of a
| pipeline and then decide what to do with it. One way to accomplish this
| is to send the output into a temporary file, view the file, and then use
| the file for input into the next pipeline. ... That is, I think the output

| of a process is a _value_ that I should be able to use for something more
| than merely gawk at.
+---------------

This is a very useful model, but every time I've thought about it before I
kept bumping into the issue of non-terminating processes and/or interrupts
and/or the need for implied pagers. The problem with dumping the output
into a file and *then* viewing the file is that the pipeline might never
terminate and/or might fill up your disk. Also, for some tasks (e.g., finds,
greps) you want to see the partial progress in realtime, as it occurs.

But this time, given the CL context, I realized that maybe what was
needed is for there to be a parameterizable output limit that throws
a restartable exception when violated and/or drops you into a debugger
where you could scroll back and peruse the output so far and decide
whether to let it continue for a while longer (which with a smallish
limit gives you a "pager" function), or stop the pipeline (perhaps
by sending a signal, in a Unix environment) and save (or discard) the
output so far, or choose some other result entirely.
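
A minimal Common Lisp sketch of that "parameterizable output limit"
idea, using a condition with restarts; all names here are invented, this
is not code from any existing shell:

(define-condition output-limit-exceeded (error)
  ((count :initarg :count :reader output-count))
  (:report (lambda (c s)
             (format s "Pipeline has produced ~D characters so far."
                     (output-count c)))))

(defvar *output-limit* 65536)

(defun note-output (stream chunk chars-so-far)
  "Write CHUNK to STREAM; once past *OUTPUT-LIMIT*, signal a restartable error."
  (let ((seen (+ chars-so-far (length chunk))))
    (when (> seen *output-limit*)
      (restart-case (error 'output-limit-exceeded :count seen)
        (continue-output ()
          :report "Double the limit and let the pipeline keep running."
          (setf *output-limit* (* 2 *output-limit*)))
        (abort-pipeline ()
          :report "Stop the pipeline; keep the output produced so far."
          (return-from note-output (values chars-so-far :aborted)))))
    (write-string chunk stream)
    (values seen :running)))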

In any case, I think that any "CL Shell"[1] needs to provide for
real-time viewing of the progress of a pipeline *and* some sort
of built-in pager *and* selectable output limits with interactive
exception handling, all without interfering with the ability to
treat the output as a "value".


-Rob

[1] Scsh (a Scheme Shell) <URL:http://www-swiss.ai.mit.edu/ftpdir/scsh/>
contains a lot of very useful ideas and idioms (especially the
macros for describing & running pipelines) which would be worth
at least looking at before attempting a CL Shell, but AFAIK even
Scsh doesn't directly address the above "excess output" issue.

-----
Rob Warnock, 30-3-510 <rp...@sgi.com>
SGI Network Engineering <http://reality.sgi.com/rpw3/> [R.I.P.]
1600 Amphitheatre Pkwy. Phone: 650-933-1673
Mountain View, CA 94043 PP-ASEL-IA

[Note: aaan...@sgi.com and zedw...@sgi.com aren't for humans ]

Bruce Hoult

Sep 30, 2001, 8:59:03 AM
In article <9p6rvi$ntc1s$1...@fido.engr.sgi.com>, nobody@localhost wrote:

> Erik Naggum <er...@clemacs.org> wrote:
> +---------------
> | One of the things I would like to do in xterm, is to see the
> | output of a pipeline and then decide what to do with it. One
> | way to accomplish this is to send the output into a temporary
> | file, view the file, and then use the file for input into the
> | next pipeline. ... That is, I think the output of a process is
> | a _value_ that I should be able to use for something more
> | than merely gawk at.
> +---------------
>
> This is a very useful model, but every time I've thought about it
> before I kept bumping into the issue of non-terminating processes
> and/or interrupts and/or the need for implied pagers. The problem
> with dumping the output into a file and *then* viewing the file is
> that the pipeline might never terminate and/or might fill up your
> disk. Also, for some tasks (e.g., finds, greps) you want to see
> the partial progress in realtime, as it occurs.

Should be pretty easy to make a kind of combination of "tee" and "less"
which can be inserted into an arbitrary pipeline but also diverts the
stuff it is passing through to the terminal (if one is attached...) and
gives "less"-style prompts. You'd just need a single extra command in
addition to what "less" already has meaning "don't show me the rest but
just send it on to stdout". Actually, other than the "tee"
functionality that would be pretty much like running "less" with -E
("--QUIT-AT-EOF") and then hitting the G key when you'd seen enough.

-- Bruce

Lieven Marchand

Sep 30, 2001, 7:14:20 AM
rp...@rigden.engr.sgi.com (Rob Warnock) writes:

> Erik Naggum <er...@clemacs.org> wrote:
> +---------------
> | One of the things I would like to do in xterm, is to see the output of a
> | pipeline and then decide what to do with it. One way to accomplish this
> | is to send the output into a temporary file, view the file, and then use
> | the file for input into the next pipeline. ... That is, I think the output
> | of a process is a _value_ that I should be able to use for something more
> | than merely gawk at.
> +---------------
>
> This is a very useful model, but every time I've thought about it before I
> kept bumping into the issue of non-terminating processes and/or interrupts
> and/or the need for implied pagers. The problem with dumping the output
> into a file and *then* viewing the file is that the pipeline might never
> terminate and/or might fill up your disk. Also, for some tasks (e.g., finds,
> greps) you want to see the partial progress in realtime, as it occurs.

The most advanced implementation of this model that I know of is for a
relatively obscure IBM mainframe OS, CMS Pipelines. It's very well
integrated with the operating system, the system editor XEDIT (which
has an emacs like status since it incorporates the os help system,
mail and news support and is programmable in a full programming
language Rexx) and the network.

More information can be found at

http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/HCSH1A20/CCONTENTS
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/HCSG5A20/CCONTENTS

--
Lieven Marchand <m...@wyrd.be>
She says, "Honey, you're a Bastard of great proportion."
He says, "Darling, I plead guilty to that sin."
Cowboy Junkies -- A few simple words

Erik Naggum

Sep 30, 2001, 1:02:19 PM
* Erik Naggum

| One of the things I would like to do in xterm, is to see the output of a
| pipeline and then decide what to do with it. One way to accomplish this
| is to send the output into a temporary file, view the file, and then use
| the file for input into the next pipeline. ... That is, I think the
| output of a process is a _value_ that I should be able to use for
| something more than merely gawk at.

* Rob Warnock


| This is a very useful model, but every time I've thought about it before
| I kept bumping into the issue of non-terminating processes and/or
| interrupts and/or the need for implied pagers.

I did not suggest that this xterm/shell/emacs should not display the
output to the user as it happened. Quite the contrary, which I thought
would be _very_ strongly implied. Emacs keeps the process output in a
buffer and a user may grab the output and do something with it, albeit
not astonishingly conveniently. The "and then" part was intended to
pertain to the _next_ command, not the viewing-of-the-file part, which I
really thought would obviously be done while it occurred, as that is the
whole point of using xterm/shell/emacs to execute commands to begin with.
Never had I thought anyone would dispense with the fundamental idea just
because an additional idea is suggested. It feels sort of like suggesting
to XML people that they reconsider some particular concept -- their first
inclination is to fear that you suggest rebooting the whole universe.

Kaz Kylheku

unread,
Sep 30, 2001, 4:30:50 PM9/30/01
to
In article <32107721...@clemacs.org>, Erik Naggum wrote:
>* Peter Wood
>> It's possible to do quite a bit with readline.
>
> One of the things I would like to do in xterm, is to see the output of a
> pipeline and then decide what to do with it. One way to accomplish this
> is to send the output into a temporary file, view the file, and then use
> the file for input into the next pipeline. However, it would be nice if
> it were possible to use a short-hand.

You can do this with the less pager, to some degree of satisfaction; it
lets you pipe portions of the captured output to a command. (Some editors
can capture standard input, but less lets you work with it before it's
all finished).

The feature is a little stupid. Scroll until the starting line is the top
one, then mark it (ma). Then scroll until the last line of the desired
region is at the bottom, and type |a<command>.
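
A concrete session, with grep and the output filename standing in as
placeholders:

    some-command | less
      (scroll until the first interesting line is at the top of the screen)
      ma                          -- mark that position as "a"
      (scroll until the last interesting line is at the bottom)
      |a grep foo > region.txt    -- pipe the marked region through a command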

> Suppose you could request to use
> the output of the previous pipeline to be interpolated as arguments or

For constructing argument lists from input, there is xargs.
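
For example (for filenames containing whitespace you'd want find's
-print0 together with xargs -0):

    find . -name '*.bak' -print | xargs rm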

Goldhammer

unread,
Sep 30, 2001, 4:39:37 PM9/30/01
to
In article <9p6rvi$ntc1s$1...@fido.engr.sgi.com>, Rob Warnock wrote:
> Erik Naggum <er...@clemacs.org> wrote:


> +---------------
> | One of the things I would like to do in xterm, is to see the output of a
> | pipeline and then decide what to do with it. One way to accomplish this
> | is to send the output into a temporary file, view the file, and then use
> | the file for input into the next pipeline. ... That is, I think the output
> | of a process is a _value_ that I should be able to use for something more
> | than merely gawk at.
> +---------------
>
> This is a very useful model, but every time I've thought about it before I
> kept bumping into the issue of non-terminating processes and/or interrupts
> and/or the need for implied pagers. The problem with dumping the output
> into a file and *then* viewing the file is that the pipeline might never
> terminate and/or might fill up your disk.


You could try 'tail -f -n<N> <filename> | <whatever>'. This
way you can view or pipe the last N lines while the output
is being dumped, and take whatever action is necessary. No need
to wait until the disk fills up.
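
For instance (the filenames and the grep are only placeholders):

    find / -name core > /tmp/find.out 2>/dev/null &   # long-running command writes to a file
    tail -f -n 20 /tmp/find.out | grep -i lisp        # watch the last lines as they arrive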


Erik Naggum

unread,
Sep 30, 2001, 4:51:00 PM9/30/01
to
* Kaz Kylheku

| You can do this with the less pager

No. Using less is wrong, and also requires premeditation.

| For constructing argument lists from input, there is xargs.

No. xargs is _fundamentally_ braindamaged, and so is the fact that it is
necessary. Unixes with argument list maxima less than 4M are broken.
Unfortunately, _every_ Linux kernel I have used needs _patching_ to get
this fixed (it is not even a configuration option!), and I just stop
using a Linux machine that has a smaller argmax limit if I run into that
braindamaged limitation -- networks being what they are, I work elsewhere.

I think you have missed the _convenience_ argument. The fact that I have
* and / as the value and values-list of the previous (three) computations
in Common Lisp is something I want elsewhere. It would not be the same if
I had to (setq *** ** ** * * ...) every time I wanted to use that feature.

///

Erik Naggum

unread,
Sep 30, 2001, 4:53:06 PM9/30/01
to
* Goldhammer

| You could try 'tail -f -n<N> <filename> | <whatever>'.

Look, guys, this has to be _automatic_, _transparent_, and so
_convenient_ that people actually see the point in using it.

After all, pipelines are not about creating separate input and output
handles _manually_ and tying them together; they are a language construct
and therefore powerful. Why is this not obvious in this newsgroup? *sigh*

///

Hrvoje Niksic

unread,
Oct 2, 2001, 6:14:36 AM10/2/01
to
k...@ashi.footprints.net (Kaz Kylheku) writes:

> You can do this with the less pager, to some degree of satisfaction;
> it lets you pipe portions of the captured output to a command.
> (Some editors can capture standard input, but less lets you work
> with it before it's all finished).

Actually, there is an editor that allows this, except that it's
currently unmaintained and not frequently used by programmers -- it's
`joe' (which stands for Joe's Own Editor).

With joe you can do things like:

some-command | joe - | mailx -s "Hi mom"

This works exactly as one would hope -- it allows you to edit the
output of SOME-COMMAND and makes the "save&exit" key pipe the modified
contents of the buffer to `mailx'.

Of course, you can also do a simpler:

some-command | joe -

and save the output of SOME-COMMAND into a file of your choosing. And
so on.

There are differences, though. Unlike less, joe will insist that
SOME-COMMAND finish before you are allowed to see and edit its
output. Still, no other editors I know of can do this, and the
feature is extremely useful.

Kent M Pitman

unread,
Oct 2, 2001, 11:22:01 AM10/2/01
to
Hrvoje Niksic <hni...@arsdigita.com> writes:

It may well be quite useful and I don't mean to detract from that, but
(and I'm not a shell programmer so I could be wrong here) can't you just
do:

some-command > temp.text; emacs temp.text; mailx -s "Hi mom" < temp.text

(No, I didn't try. How lazy is that?) Yes, you have to remember to
save back out the result of the edit, but remembering that would be a
small price to pay in exchange for getting my own editor instead of
someone else's.

Tim Bradshaw

unread,
Oct 2, 2001, 11:38:44 AM10/2/01
to
Kent M Pitman <pit...@world.std.com> writes:


> It may well be quite useful and I don't mean to detract from that, but
> (and I'm not a shell programmer so I could be wrong here) can't you just
> do:
>
> some-command > temp.text; emacs temp.text; mailx -s "Hi mom" < temp.text
>
> (No, I didn't try. How lazy is that?) Yes, you have to remember to
> save back out the result of the edit, but remembering that would be a
> small price to pay in exchange for getting my own editor instead of
> someone else's.

The sorts of problems this has are strangely related to Lisp: you need
to choose a known-unique filename if you do this more than a few
times. You probably want to make sure that this file is not readable
by others, and finally you want to make sure that you delete it when
you are done. So you need at least GENSYM and UNWIND-PROTECT...
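
In shell terms those correspond roughly to mktemp and trap -- a sketch,
assuming a Bourne-style shell with mktemp available, and Kent's example
command line:

    tmp=`mktemp /tmp/mail.XXXXXX` || exit 1    # "GENSYM": a fresh, mode-0600 file
    trap 'rm -f "$tmp"' 0                      # "UNWIND-PROTECT": delete it on exit
    some-command > "$tmp"
    emacs "$tmp"
    mailx -s "Hi mom" < "$tmp"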

(and of course this only works for commands where it's OK to wait
until all the output has happened).

--tim

Christian Lynbech

unread,
Oct 2, 2001, 4:14:04 PM10/2/01
to
>>>>> "Kent" == Kent M Pitman <pit...@world.std.com> writes:

Kent> Hrvoje Niksic <hni...@arsdigita.com> writes:
>>
>> With joe you can do things like:
>>
>> some-command | joe - | mailx -s "Hi mom"
>>

Yes, this is indeed the shell-oriented way of working. Also, vi users
are often in the habit of starting up new vi's all the time; I have
colleagues who even use vi instead of less.

Kent> some-command > temp.text; emacs temp.text; mailx -s "Hi mom" < temp.text

However, I do think that there are better ways with emacs than going
the shell route.

If I needed output from some command put into a mail, I would start
composing the mail and type something like

C-u M-! some-command

which would execute some-command and insert its output into the
current buffer.

Emacs promotes quite a different way of working from the traditional
UNIX shell and its associated tools.


------------------------+-----------------------------------------------------
Christian Lynbech | Ericsson Telebit, Skanderborgvej 232, DK-8260 Viby J
Phone: +45 8938 5244 | email: christia...@ted.ericsson.dk
Fax: +45 8938 5101 | web: www.ericsson.com
------------------------+-----------------------------------------------------
Hit the philistines three times over the head with the Elisp reference manual.
- pet...@hal.com (Michael A. Petonic)
