
Rob Pike article


Tim Bradshaw

Jun 6, 2000
I don't even know if this is relevant to cll, but I thought it was
kind of interesting, although it's probably wrong of course. I liked
the stuff about the mass of standards being crippling.

http://www.cs.bell-labs.com/cm/cs/who/rob/utah2000.ps

--tim

William Deakin

Jun 7, 2000
Tim wrote:

> I don't even know if this is relevant to cll, but I thought it was
> kind of interesting, although it's probably wrong of course. I liked
> the stuff about the mass of standards being crippling.

I think it is an excellent article (more than kind-of interesting:
it *is* interesting). (sigh)

:) will


Jeff Dalton

Jun 13, 2000
Tim Bradshaw <t...@cley.com> writes:

> I don't even know if this is relevant to cll, but I thought it was
> kind of interesting, although it's probably wrong of course. I liked
> the stuff about the mass of standards being crippling.
>

> http://www.cs.bell-labs.com/cm/cs/who/rob/utah2000.ps

A very interesting paper.

He's right about standards, at least in some jobs, but the nightmare
is even worse than that: every little thing you want to do is being
endlessly elaborated, because it's what someone's research programme
or career is about.

For example, I want to send a message from one running program to
another. Nothing fancy needed; it can just be text. Simple, no?
Use a socket/pipe, define a simple syntax, ... All very easy in
Lisp because you can use READ and PRINT.
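
Something like this (an untested sketch; the function names are
invented, with a string stream standing in for the socket/pipe):

  ;; Sender side: PRINT gives you a readable wire format for free.
  (defun send-message (msg stream)
    (print msg stream)
    (force-output stream))

  ;; Receiver side: READ parses it straight back into structure.
  (defun receive-message (stream)
    (read stream))

  ;; Demo, using a string stream in place of the socket:
  (let ((wire (make-string-output-stream)))
    (send-message '(temperature :celsius 3) wire)
    (with-input-from-string (in (get-output-stream-string wire))
      (receive-message in)))
  ;; => (TEMPERATURE :CELSIUS 3)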

Not any more. You have to use XML. Humm. That doesn't sound too
bad. But wait, you can't just use XML. You'll have to write a DTD.
And then there are various things that are being seen as ways to
define XML "semantics" (basically: what it means as a data structure).
So you'll have to learn about them too. But you'll need to be a
bit more serious about semantics than that: you need ontologies.

"Ontologies, you say?. Well, once upon a time one might have said
something was an "int". The next step is to worry about units. Ok,
so it's an integer number of seconds. But wait a minute, what is this
"seconds"? It sounds suspiciously like a time. There's a whole world
of semantics in that word "time". You'll at least need an ontology
that covers time, and if you're not careful you'll find yourself
reading up on temporal logic. It could be worse. There are people
out there doing metaphysics and calling it "ontologies". Metaphysics
as in "just what are the fundamental constituents of the world and how
do they fit together?" Metaphysics as in thousands of years of
philosophy and still no answers. This is all just so you can send, oh
I don't know, a "3" from one program to another.

And what's this "send from one program to another?". No, no, these
days we have agents. And things that manage agents. "Brokers", for
instance, that find an agent with certain "capabilities" for you.
Capabilities? Need a language for that, and more semantics.
Don't forget the ontology.

You can forget about just "sending" too. No, no. What if the agent
on the other end doesn't want to talk to you right now, or isn't
allowed to, or wants to negotiate about the price? You can bet there
are people interested in those very issues, and not just "interested":
they plan to write and publish papers, develop software to support such
"transactions", have students doing PhDs on it, and so on.

There are - seriously - I am not kidding - people out there who will
tell you that the particular message exchange you want to do "does not
have agent semantics" and that you therefore ought to do things in
some other way that you can already tell you won't understand until
you've read several anthologies of papers.

Thank God we can still just do procedure calls. Ah, but for how much
longer? Procedures are very like agents, but they don't quite have
agent semantics ...

[I guess I'd better point out that much of this research into agents,
ontologies, etc, is both interesting and useful. But there is a real
danger that formerly simple tasks will be elaborated to such an extent
that they become major projects, involving an up-to-date knowledge of
several rapidly moving fields. And I haven't even mentioned Java
Beans and a whole bunch of other stuff that will start to get in there
too.]

Marc Battyani

Jun 13, 2000

Jeff Dalton <je...@todday.aiai.ed.ac.uk> wrote in message
news:x2wvjta...@todday.aiai.ed.ac.uk...

> Thank God we can still just do procedure calls. Ah, but for how much
> longer? Procedures are very like agents, but they don't quite have
> agent semantics ...

Sorry, but it's too late... XML has started to corrupt even procedure calls.
Now, to be buzzword-compliant, you must call procedures via SOAP or XML-RPC!
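
For comparison, a sketch of the same remote call both ways (the
envelope below is the standard XML-RPC one; the ADD method itself is
made up):

  ;; What READ and PRINT would have put on the wire:
  ;;   (add 1 2)
  ;; The buzzword-compliant equivalent:
  (format t "<?xml version=\"1.0\"?>
  <methodCall>
    <methodName>add</methodName>
    <params>
      <param><value><int>1</int></value></param>
      <param><value><int>2</int></value></param>
    </params>
  </methodCall>")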

Marc Battyani

qt...@my-deja.com

Jun 13, 2000
In article <ey37lc2...@cley.com>,
Tim Bradshaw <t...@cley.com> wrote:
> I don't even know if this is relevant to cll.
>
> http://www.cs.bell-labs.com/cm/cs/who/rob/utah2000.ps
>


I think it's quite relevant to this NG. It has bothered me that the
whole Unix/gcc/emacs world seems to be dominated by people, ideas, and
tools from the 1970s. Compared to developments in computer hardware,
that is incredible stagnation. A good example: just how modern is
Emacs Lisp?

Tom


Sent via Deja.com http://www.deja.com/
Before you buy.

Craig Brozefsky

Jun 13, 2000
qt...@my-deja.com writes:

> I think it's quite relevant to this NG. It has bothered me that the
> whole Unix/gcc/emacs world seems to be dominated by people, ideas, and
> tools from the 1970s. Compared to developments in computer hardware,
> that is incredible stagnation. A good example: just how modern is
> Emacs Lisp?

Emacs Lisp is as modern as anyone who has ever put in the effort to
modernize it has been able to make it. Which is not very much. We
have packages slathering CL compatibility and a CLOS-like object
system on top of it, but it's otherwise the same elisp we've had for
years.

Our problem is not that too few people have the intellectual awareness
to point out that Emacs Lisp is sagging under the load of history and
is unable to keep up with its modern siblings. This is a well-known
fact that even the backwards tribesmen who promote the GNU toolchain
lament. If those pygmies can figure it out you can be damn sure the
fully evolved Lispers here can.

But alas, knowledge of a problem does not seem to automagically gift
us with the code to solve it. There is a step in there that has
eluded us for the last few years. After careful consideration,
consulting with experts, and observing the amusing customs of the
intellectual Neanderthals trapped in the 70s, I believe I have found
the missing step. The Philosopher's Stone that turns the lead of
intellectual understanding into the gold of working code turns out to be
something we all have available to us: Labor.

That this staple of primitive cultures should present itself as the
solution to our present problem is as surprising to me as it is to any
respectable Lisper. It seems that having the most beautiful,
productive, and modern language is not sufficient in itself.


--
Craig Brozefsky <cr...@red-bean.com>
Lisp Web Dev List http://www.red-bean.com/lispweb
--- The only good lisper is a coding lisper. ---

Craig Brozefsky

Jun 13, 2000
qt...@my-deja.com writes:

> > The Philosopher's Stone that turns the lead of
> > intellectual understanding into the gold of working code turns out to be
> > something we all have available to us: Labor.
> >

> > It seems that having the most beautiful,
> > productive, and modern language is not sufficient in itself.
>

> Ummm. So your point is that there is no problem with elisp, you just
> have to work a little harder at using it? Then my criticism was totally
> uncalled for. My apologies.

Actually that was not my point.

qt...@my-deja.com

Jun 14, 2000
In article <87vgzdt...@piracy.red-bean.com>,
Craig Brozefsky <cr...@red-bean.com> wrote:

> qt...@my-deja.com writes:
>
>> just how modern is emacs Lisp?
>
> The Philosopher's Stone that turns the lead of
> intellectual understanding into the gold of working code turns out to be
> something we all have available to us: Labor.
>
> It seems that having the most beautiful,
> productive, and modern language is not sufficient in itself.

Ummm. So your point is that there is no problem with elisp, you just
have to work a little harder at using it? Then my criticism was totally
uncalled for. My apologies.

Tom

Christopher Browne

Jun 14, 2000
Centuries ago, Nostradamus foresaw a time when qt...@my-deja.com would say:

No, the problems with Elisp include:

- Using dynamic scope, by default, which means preserving state is
rather more complex than with static scoping, and rather more
time-consuming; and

- Not supporting multithreading (partly due to dynamic scoping).

The net result of the two considerations above is that:
a) Sometimes Elisp winds up being pretty slow, and
b) If you need to "multitask," you need to fork an extra process
_before_ things get busy.

For instance, if you're running GNUS, it can pretty much hang
everything else up while it's reading spools. As a result, for all to
"play nicely," you need to invoke multiple Emacs sessions, rather than
having some way for all the _buffers_ to play nicely.
--
cbbr...@acm.org - <http://www.ntlug.org/~cbbrowne/lisp.html>
"Some sins carry with them their own automatic punishment. Microsoft
is one such. Live by the Bill, suffer by the Bill, die by the Bill."
-- Tom Christiansen

Erik Naggum

Jun 14, 2000
* Craig Brozefsky <cr...@red-bean.com>

| Emacs Lisp is as modern as anyone who has ever put in the effort to
| modernize it has been able to make it. Which is not very much.

FWIW, it's a lot less modern than that. Several ridiculously simple
improvements to Emacs Lisp have been turned down because they were
deemed to be moving it in the direction of Common Lisp, when some
folks behind Emacs have gone off and created this cretinous Scheme
bastard called GUILE and want to base Emacs on it. Phooey!

#:Erik
--
If this is not what you expected, please alter your expectations.

Martin Cracauer

Jun 14, 2000
qt...@my-deja.com writes:

>In article <ey37lc2...@cley.com>,
> Tim Bradshaw <t...@cley.com> wrote:

>> I don't even know if this is relevant to cll.
>>
>> http://www.cs.bell-labs.com/cm/cs/who/rob/utah2000.ps

>I think it's quite relevant to this NG. It has bothered me that the
>whole Unix/gcc/emacs world seems to be dominated by people, ideas, and
>tools from the 1970s.

What I find amusing is that the article says researchers use emacs and
TeX (at least here in Germany they use Windows/Word), that the article
is obviously typeset in TeX and that at least the version I downloaded
had major typesetting errors (probably font/dvips problems).

>Compared to developments in computer hardware,
>that is incredible stagnation.

In what way did hardware *really* move? A register-based CPU, RAM,
rotating magnetic disks, VM with the same MMU design as ever. Even
the mouse is no newer than C or current Lisp derivatives. The guys
didn't even succeed in getting RISC to win out over Intel's i386
architecture. Only widely available SMP can be counted, and some more
or less autonomous graphics subprocessors.

Amorphous computing would really count. Massively parallel computing
à la the Connection Machine even failed.

Hardware got faster, but "progress" I wouldn't say.

Martin
--
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Martin Cracauer <crac...@bik-gmbh.de> http://www.bik-gmbh.de/~cracauer/
FreeBSD - where you want to go. Today. http://www.freebsd.org/

Tim Bradshaw

Jun 14, 2000
* Martin Cracauer wrote:

> In what way did hardware *really* move?

It doubles in performance every 12-18 months, and has done for more
than 20 years: that's pretty impressive. Unfortunately it now
seems to take 10,000 times as much computing power to run a word
processor as it did 20 years ago.

Modern commodity hardware can expect to run without failure for
appreciable fractions of a decade: That's pretty impressive
too. Unfortunately the software (*all* the software) they run is
written by halfwits like us who haven't worked out how to stop
buffer-overflow attacks in the same time-period.

> Only widely available SMP can be counted, and some more or less
> autonomous graphics subprocessors.

The non-pervasiveness of parallel machines is a *software* problem.
If you can write systems that parallelise well, hardware people will
have no problem at all producing a machine for them to run on. The
current big shared-memory machines are amazing feats of technology --
go look at interconnect design sometime -- to work around the fact
that software people can't produce parallel code. The failure of the
really big parallel machines like the CM was because no one could work
out how to program them.

--tim

vsync

Jun 14, 2000
cbbr...@dantzig.brownes.org (Christopher Browne) writes:

> - Using dynamic scope, by default, which means preserving state is
> rather more complex than with static scoping, and rather more
> time-consuming, and;

What do these two terms mean, and what are the differences between
them?

--
vsync
http://quadium.net/ - last updated Mon Jun 12 23:31:13 MDT 2000
Orjner.

Craig Brozefsky

Jun 14, 2000
Erik Naggum <er...@naggum.no> writes:

> * Craig Brozefsky <cr...@red-bean.com>
> | Emacs Lisp is as modern as anyone who has ever put in the effort to
> | modernize it has been able to make it. Which is not very much.
>
> FWIW, it's a lot less modern than that. Several ridiculously simple
> improvements to Emacs Lisp have been turned down because they were
> deemed to be moving it in the direction of Common Lisp, when some
> folks behind Emacs have gone off and created this cretinous Scheme
> bastard called GUILE and want to base Emacs on it. Phooey!

If these changes were implemented, is there an Emacs fork someplace
with them?

Erik Naggum

Jun 14, 2000
* Craig Brozefsky <cr...@red-bean.com>

| If these changes were implemented, is there an Emacs fork someplace
| with them?

No. If you have ever tried to maintain your own version of a moving
target like Emacs, you quickly find that you're wasting more time on
it than you could ever save from using whatever improvements you
have added. _Some_ improvements have been adopted, however.

Barry Margolin

Jun 14, 2000
In article <ey3pupk...@cley.com>, Tim Bradshaw <t...@cley.com> wrote:
>* Martin Cracauer wrote:
>
>> In what way did hardware *really* move?
>
>It doubles in performance every 12-18 months, and has done for more
>than 20 years: that's pretty impressive. Unfortunately it now
>seems to take 10,000 times as much computing power to run a word
>processor as it did 20 years ago.

But I think that Martin's point is that just speeding up isn't really a
significant move. It's doing pretty much the same thing, just faster.

RISC was a slight paradigm shift, requiring a few changes in compiler code
generator design, but not so much that it impacted HLL code. We still
don't have mainstream machines with VLIW, MPP, or dataflow architectures.
These are the types of hardware changes that would be more analogous to
changing from Elisp to a more modern Lisp dialect (which would probably
require changes in many, if not most, extension packages).

--
Barry Margolin, bar...@genuity.net
Genuity, Burlington, MA
*** DON'T SEND TECHNICAL QUESTIONS DIRECTLY TO ME, post them to newsgroups.
Please DON'T copy followups to me -- I'll assume it wasn't posted to the group.

Tom Breton

Jun 14, 2000
vsync <vs...@quadium.net> writes:

> cbbr...@dantzig.brownes.org (Christopher Browne) writes:
>
> > - Using dynamic scope, by default, which means preserving state is
> > rather more complex than with static scoping, and rather more
> > time-consuming, and;
>
> What do these two terms mean, and what are the differences between
> them?

It means that every variable is special.

Or in more detail, when you create a variable, it's visible
essentially everywhere. (Leaving out buffer-local and frame-local
variables, interning in obarrays other than the default, etc for
simplicity).

If you `let' it, it's visible to every function that is called during
the scope of the `let'. If you `defvar' it, it's visible to
everything from that point on. (Unless you take special steps like
calling makunbound)

It can be gotten around with lexical-let, which is a macro/gensym
trick, but it's rarely done.
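
To see the difference in Common Lisp terms (a sketch; elisp acts as
if every variable had been DEFVAR'd):

  (defvar *dyn* 'global)       ; special, i.e. dynamically scoped

  (defun peek () *dyn*)        ; sees whatever binding is current

  (defun demo ()
    (let ((*dyn* 'rebound)     ; visible to everything DEMO calls
          (lex   'local))      ; lexical: visible only in this LET
      (list (peek) lex)))

  ;; (demo) => (REBOUND LOCAL)
  ;; (peek) => GLOBAL again, once DEMO's LET has been exited.
  ;; In Emacs Lisp, LEX would be visible to PEEK too.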

--
Tom Breton, http://world.std.com/~tob
Not using "gh" since 1997. http://world.std.com/~tob/ugh-free.html
Some vocal people in cll make frequent, hasty personal attacks, but if
you killfile them cll becomes usable.

Rainer Joswig

Jun 14, 2000
In article <pJT15.72$Nf7.2266@burlma1-snr2>, Barry Margolin
<bar...@genuity.net> wrote:

> But I think that Martin's point is that just speeding up isn't really a
> significant move. It's doing pretty much the same thing, just faster.

Yeah, but faster makes the difference between fun and no fun.
Try to edit a DV movie with an old (last year ;-) ) machine.

> RISC was a slight paradigm shift, requiring a few changes in compiler code
> generator design, but not so much that it impacted HLL code. We still
> don't have mainstream machines with VLIW, MPP, or dataflow architectures.

Apple's latest Macs are using the G4 with the so-called
"velocity engine" from Motorola. I'd say this is quite
different, and its performance will
help to make "desktop video" editing popular.

--
Rainer Joswig, BU Partner,
ISION Internet AG, Steinhöft 9, 20459 Hamburg, Germany
Tel: +49 40 3070 2950, Fax: +49 40 3070 2999
Email: mailto:rainer...@ision.net WWW: http://www.ision.net/


Martin Cracauer

Jun 15, 2000
Tim Bradshaw <t...@cley.com> writes:

>* Martin Cracauer wrote:

>> In what way did hardware *really* move?

>It doubles in performance every 12-18 months, and has done for more
>than 20 years: that's pretty impressive. Unfortunately it now
>seems to take 10,000 times as much computing power to run a word
>processor as it did 20 years ago.

Just faster isn't impressive progress for me.

>Modern commodity hardware can expect to run without failure for
>appreciable fractions of a decade: That's pretty impressive
>too. Unfortunately the software (*all* the software) they run is
>written by halfwits like us who haven't worked out how to stop
>buffer-overflow attacks in the same time-period.

*Some* quality software runs as long. Mostly cited are some Unix
kernels (most are free/open source), and well-tested Unix daemons like
Apache. Of course, they kind of cheat, since they do concurrency
mostly by fork()ing new processes that may crash at will, but at least
the kernel survives this waste for arbitrary periods.

I'd be interested in the uptime of John Mallery's Symbolics.

>> Only widely available SMP can be counted, and some more or less
>> autonomous graphics subprocessors.

>The non-pervasiveness of parallel machines is a *software* problem.
>If you can write systems that parallelise well, hardware people will
>have no problem at all producing a machine for them to run on. The
>current big shared-memory machines are amazing feats of technology --
>go look at interconnect design sometime -- to work around the fact
>that software people can't produce parallel code. The failure of the
>really big parallel machines like the CM was because no one could work
>out how to program them.

Good point. I counted the current presence of lightly parallel
machines as one of the few hardware advances. But not an impressive
one, and that is software's fault.

Tim Bradshaw

Jun 15, 2000
* Barry Margolin wrote:

> But I think that Martin's point is that just speeding up isn't really a
> significant move. It's doing pretty much the same thing, just faster.

But that's what hardware *does*: the same thing, only faster! I also
have a serious problem with someone dismissing the kind of
performance and reliability improvements that are happening as `not
significant': that's like saying the difference between a Newcomen
engine and a steam-turbine is `not significant', except much more so
-- it's hard to find an example outside of computer hardware where the
developments have been so extraordinary.

> RISC was a slight paradigm shift, requiring a few changes in compiler code
> generator design, but not so much that it impacted HLL code. We still
> don't have mainstream machines with VLIW, MPP, or dataflow architectures.

> These are the types of hardware changes that would be more analogous to
> changing from Elisp to a more modern Lisp dialect (which would probably
> require changes in many, if not most, extension packages).

Yes, and the reason we don't have that stuff is that we can't program
those machines: a software problem.

--tim

Tim Bradshaw

Jun 15, 2000
* Martin Cracauer wrote:
> Just faster isn't impressive progress for me.

What do you want, magic?

Perhaps you want MPP machines? Doubling performance every 18 months
gives you a factor of a little more than 10,000 in 20 years: so you
*have* what would have been a 10,000 processor MPP machine in 1980
now, or a 100-processor machine in 1990.
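
A quick check in the listener (assuming one doubling per 18 months):

  (expt 2 (/ (* 20 12) 18.0))   ; => ~10321.3, "a little more than 10,000"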

Well, I guess I'm not going to make any progress and just get annoyed
about this, so I'll stop now.

--tim

Simon Leinen

Jun 15, 2000
>>>>> "mc" == Martin Cracauer <crac...@counter.bik-gmbh.de> writes:
[...]
>> In article <ey37lc2...@cley.com>,
>> Tim Bradshaw <t...@cley.com> wrote:
[...]
>>> http://www.cs.bell-labs.com/cm/cs/who/rob/utah2000.ps
[...]

> What I find amusing is that the article says researchers use emacs
> and TeX (at least here in Germany they use Windows/Word), that the
> article is obviously typeset in TeX [...]

Nit: Actually, according to the comments in the .ps file, it has been
generated from Troff source.

> and that at least the version I downloaded had major typesetting
> errors (probably font/dvips problems).

--
Simon.

Espen Vestre

Jun 15, 2000
crac...@counter.bik-gmbh.de (Martin Cracauer) writes:

> (at least here in Germany they use Windows/Word)

That's clearly not an absolute fact: in math, CS and related subjects
LaTeX is clearly still used a *lot*, also in Germany (and that's no
wonder; using Word for writing more than a few pages of math can
be a horrifying experience).

In fact, quite recently I read (to my pleasure!) an article in the
German computer magazine c't with a rather devastating review of
WYSIWYG word processors, where they pointed out that TeX was still
in widespread use in universities and remarked that there were
very good reasons for that, given the poor quality of mainstream
WYSIWYG word-processor programs.

--
(espen)

Martin Cracauer

Jun 15, 2000
Tim Bradshaw <t...@cley.com> writes:

>* Martin Cracauer wrote:
>> Just faster isn't impressive progress for me.

>What do you want, magic?

I.E.:
- massive multiprocessor systems
- get rid of unreliable magnetic harddisks and tapes. While you're at
it, get rid of the RAM/disk difference (like the Palm Pilot does).
- much less power
- Flat/thin screens are finally coming
- Better input devices, like eye-following scrolling

Yes, most of this requires software changes. However, it is wrong to
say that hardware makes better progress than software.

Tim Bradshaw

Jun 15, 2000
* Martin Cracauer wrote:

> I.E.:
> - massive multiprocessor systems

Built, no one could program them, because they don't look enough like
a PDP11, which we finally worked out how to program.

> - get rid of unreliable magnetic harddisks and tapes. While you're at
> it, get rid of the RAM/disk difference (like the Palm Pilot does).

Disks are *not* unreliable. If you're unhappy with the reliability of
a single disk, mirror it. It's easy (and cheap) to construct disk
subsystems with MTBFs of hundreds or thousands of years.

Getting rid of the RAM/disk difference is called virtual memory, I
think: kind of a solved problem. The Xerox D-machines and so on even
had the software to exploit this at the user level.

> - much less power

Low-power systems can easily be bought. Palm Pilots, mobile phones
and so on. I change the batteries in my HP48 (which has more than 1MB
of memory and runs a reasonably sophisticated OS) a few times a year:
I guess Palm Pilots are similar. Sure it would be nice to only change
them once every two years, but I'd not upgrade for that.

> - Flat/thin screens are finally coming

> - Better input devices, like eye-following scrolling

Probably the single real example.

--tim

Erik Naggum

Jun 15, 2000
* Tim Bradshaw <t...@cley.com>

| Getting rid of the RAM/disk difference is called virtual memory, I
| think: kind of a solved problem. The Xerox D-machines and so on even
| had the software to exploit this at the user level.

I think he means the MULTICS way to deal with disks. Not just kind
of a solved problem, but kind of a not-a-desirable-solution problem.

There's another way to look at this: Opening files for reading as
streams is a very serious bottleneck to the development of much more
interesting ways to deal with persistent data, and I'm not talking
about "persistent objects" and databases as a solution to this
particular problem, but the inability to think about it when data is
stored on disks that are conceptually still thought of as serial
input devices because that's what they were in 1950. Streams with
sequential access is OK for serial input, such as network protocols,
users, and time-based data (audio, video, etc), but it is not OK for
data that is mapped into memory on demand. Not just virtual memory,
but the management of input sources.

Friedrich Dominicus

Jun 15, 2000
I find this thread really interesting. It's just that I don't see the
following things mentioned.

Why should using Emacs be a bad decision?

I can't see that anything has occurred that suggests that programming
can be done better with anything other than written text. And editors
are there to handle text. And Emacs does a good job on this, IMHO.

And because I do not see any other workable approach, one may ask if
programming by writing down text isn't the optimal thing. I guess
hardly anyone would deny that using a hammer for nailing is quite an
optimal solution. And the idea of a hammer is quite old ;-)

Regards
Friedrich

Johan Kullstam

Jun 15, 2000
Tim Bradshaw <t...@cley.com> writes:

> * Martin Cracauer wrote:
>
> > I.E.:
> > - massive multiprocessor systems
>
> Built, no one could program them, because they don't look enough like
> a PDP11, which we finally worked out how to program.
>
> > - get rid of unreliable magnetic harddisks and tapes. While you're at
> > it, get rid of the RAM/disk difference (like the Palm Pilot does).
>
> Disks are *not* unreliable. If you're unhappy with the reliability of
> a single disk, mirror it. It's easy (and cheap) to construct disk
> subsystems with MTBFs of hundreds or thousands of years.
>

> Getting rid of the RAM/disk difference is called virtual memory, I
> think: kind of a solved problem. The Xerox D-machines and so on even
> had the software to exploit this at the user level.

it's not really solved hardware-wise. ram is about 10^5 times faster
than disk. while disk is getting faster, ram seems to be getting
faster even more quickly and cpu speed increase rates are outpacing
both ram and disk. if anything, the ram/disk problem is worse than
ever. virtual memory does not solve it since active swapping is not
a viable option (as opposed to simply swapping out dormant space). a
one day job in ram turns into a five year job on disk. you may as
well just crash.

--
johan kullstam l72t00052

Erik Naggum

Jun 15, 2000
* Friedrich Dominicus <Friedrich...@inka.de>

| Why should using Emacs be a bad decision?

The argument is not against Emacs, but Emacs Lisp. Huge difference.

| I can't see that anything has occurred that suggests that programming
| can be done better with anything other than written text.

Well, I think GUIs to GUI builders that let me worry less about the
stupid and repetitive code that is so often required to make these
things fly at all is a huge win.

| And because I do not see any other workable approach, one may ask if
| programming by writing down text isn't the optimal thing.

It depends on your typing speed and accuracy. Some people are
incredibly inept typers. In fact, so incredibly inept that using a
freaking electronic _rodent_ and looking at the same stupid menus
over and over and over is faster than typing to these people.

Raymond Toy

Jun 15, 2000
>>>>> "Erik" == Erik Naggum <er...@naggum.no> writes:

Erik> * Friedrich Dominicus <Friedrich...@inka.de>

Erik> | And because I do not see any other workable approach, one may ask if
Erik> | programming by writing down text isn't the optimal thing.

Erik> It depends on your typing speed and accuracy. Some people are
Erik> incredibly inept typers. In fact, so incredibly inept that using a
Erik> freaking electronic _rodent_ and looking at the same stupid menus
Erik> over and over and over is faster than typing to these people.

Hear, hear!

As a reasonably fast touch typist, I usually can't stand the GUI
thingies because having to reach for anything other than the keyboard
really slows things down. I don't even like reaching for the arrow
keys because it takes my hands away from the home row.

Ray


Espen Vestre

Jun 15, 2000
Raymond Toy <t...@rtp.ericsson.se> writes:

> I don't even like reaching for the arrow keys because it takes my
> hands away from the home row.

hear, hear :-)

My arrow keys (and the 'numpad') are covered by a CD cover half, which
I put my 'rodent' (which I mainly use for netscapism and window
selection) on to keep it close to the keyboard... (but now my *left*
arm is stiffer than the right, it's hit by the Escape Meta Alt Control
Shift Disease!)

Keyboards are perhaps the most disturbing proof of lack of innovation
in the computer industry (right now I'm typing from home on an Apple
Ergonomic Keyboard; I don't think they make them anymore :-().
--
(espen)

Tim Bradshaw

Jun 15, 2000
* Johan Kullstam wrote:

> it's not really solved hardware-wise. ram is about 10^5 times faster
> than disk. while disk is getting faster, ram seems to be getting
> faster even more quickly and cpu speed increase rates are outpacing
> both ram and disk. if anything, the ram/disk problem is worse than
> ever. virtual memory does not solve it since active swapping is not
> viable option (as opposed to simply swapping out dormant space). a
> one day job in ram turns into a five year job on disk. you may as
> well just crash.

But that's what life is going to be like, you need to get used to it.
Uniform access speeds are something you can get if your machine is
slow enough, and something you have to give up when it gets fast: at
1GHz light travels 0.3m in one cycle.
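
The arithmetic, for the record (taking c = 3e8 m/s):

  (/ 3e8 1e9)   ; => 0.3 -- meters light travels per 1GHz clock cycle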

Of course to say `x is faster than y' is fairly meaningless in this
context: disk (and any other non-local storage) has unavoidably longer
*latency* than more local storage, but it may have as much bandwidth.
Even today it's perfectly possible to design machines with disk
systems that can saturate the processor: such machines exist.

This is what I meant by my remark about the PDP11 earlier in the
thread. That was a nice slow machine with a flat memory space with
uniform access time. And we worked out how to program that
eventually, and that's what software people usually think they're
using still. But modern machines are nothing like that, and they
*will never be like that again*, unless hardware people discover that
the physics we know is wrong, and Galileo was right after all.

So in fact the HW people spend huge amounts of time and effort taking
a modern machine and faking it up to look like a huge PDP11: several
levels of cache, intelligent prefetching, cache-coherent shared-memory
multiprocessors and so on. All this effort trying to hide latency
because the software people are stuck in the 60s.

Occasionally HW people get uppity and try and build machines that
don't expend all this effort on pretending to be a PDP11: the Cray T3D
is a relatively recent example at the MPP level (no shared memory, no
cache coherency), and Tera are a recent example at the
single-processor level (no data cache). But those machines don't look
anything like PDP11s, and they're generally just too hard for people
to program as a result.

--tim (a software person, stuck in the 60s too)

Johan Kullstam

Jun 15, 2000
Tim Bradshaw <t...@cley.com> writes:

being stuck in the 60s isn't so bad. it beats getting stuck in the
80s. remember pascal?

there is a silver lining in all this: generational garbage collection
can be a big win. there was a thread a while back about moving dormant
data to disk in a smarter virtual memory scheme. lisp has a good
position here. C++, of course, deserves to lose. ;-)

--
J o h a n K u l l s t a m
[kull...@ne.mediaone.net]
Don't Fear the Penguin!

Sam Falkner

Jun 15, 2000
Raymond Toy <t...@rtp.ericsson.se> writes:

> As a reasonably fast touch typist, I usually can't stand the GUI
> thingies because having to reach for anything other than the keyboard
> really slows things down. I don't even like reaching for the arrow
> keys because it takes my hands away from the home row.

I agree with this. When I had a job that required me to use Microsoft
OS's and apps, the one thing I really did like was that I could go
*days* without even touching a mouse. I know this, because I'd
occasionally find the thing pushed way back on my desk and covered
with dust.

As for arrow keys, I don't mind them on my Kinesis contour keyboard.

http://www.kinesis-ergo.com/

I highly recommend this keyboard to anyone, but especially to fellow
emacs users.

- Sam

Per Bothner

Jun 15, 2000
Erik Naggum <er...@naggum.no> writes:
>
> FWIW, it's a lot less modern than that. Several ridiculously simple
> improvements to Emacs Lisp have been turned down because they were
> deemed to be moving it in the direction of Common Lisp, when some
> folks behind Emacs have gone off and created this cretinous Scheme
> bastard called GUILE and want to base Emacs on it. Phooey!

I started working on Kawa (http://www.gnu.org/software/kawa/) mainly
in reaction to political and technical frustrations with Guile. The
situation has much improved - but I think Kawa is still technically
superior, in spite of (or perhaps because of) the fact that it is a
mostly-one-man project.

Partly in reaction to the plan of basing Emacs on Guile, I started the
JEmacs project (http://www.JEmacs.net/). Instead of compiling Emacs
Lisp to Guile source code, I am compiling ELisp to Java bytecodes,
using the Kawa compiler. I have been making good progress; see the
screenshots at the web site.

The last week or so I have been concentrating on improving the
ELisp functionality, so I can compile and run existing Emacs
sources, as much as possible un-modified.

Unlike (say) RMS, I have no problems increasing Common Lisp
compatibility. For example, I have implemented default arguments
and (a subset of) typep - because I find them necessary or convenient
for writing low-level ELisp routines. (That way I can write them
in extended ELisp, instead of having to drop down to Java or Scheme.)

I have decided to add "Common Lisp" as a third language (after Scheme
and ELisp) that Kawa can compile. This will be a very pathetic subset
of CL - initially just Emacs Lisp with lexical scoping. However, it
should be easy to add Common Lisp features as time permits or as
contributed by volunteers.

Of course I will continue to support using Scheme as an Emacs
scripting/extension language.
--
--Per Bothner
p...@bothner.com http://www.bothner.com/~per/

Christopher Browne

Jun 16, 2000
Centuries ago, Nostradamus foresaw a time when vsync would say:

>cbbr...@dantzig.brownes.org (Christopher Browne) writes:
>
>> - Using dynamic scope, by default, which means preserving state is
>> rather more complex than with static scoping, and rather more
>> time-consuming, and;
>
>What do these two terms mean, and what are the differences between
>them?

Um. Brain fart.

"Dynamic scope" indicates having _indefinite_ scope, that is, there is
no definite limit to where a binding may be accessible, as well as
dynamic extent, defined as an extent whose duration is bounded by
points of establishment and disestablishment within the execution of a
particular form. [See the ANSI CL glossary.]

Rather than "static scope," I intended to write "lexical scope,"
defined in the ANSI CL glossary as scope that is limited to a spatial
or textual region within the establishing form.

Emacs Lisp uses dynamic scope, as was common for early Lisps; Scheme
was one of the "pioneers" in supporting lexical scoping, and that
_appears_ to be the route whereby lexical scoping ultimately entered
Common Lisp.

The use of lexical scoping strictly limits the contexts in which
bindings are accessible, which makes it rather easier to optimize the
code, and probably has other useful effects.

<http://www.xemacs.org/Architecting-XEmacs/index.html> has some
interesting links on this. I can find quite a lot of places that
claim that lexical scoping is preferable to dynamic scoping; it is
certainly easier to _understand_. It is not as easy to find
references to why it would be preferable from a performance
standpoint, and I don't think I can explain it terribly coherently.
--
cbbr...@ntlug.org - <http://www.ntlug.org/~cbbrowne/emacs.html>
"All language designers are arrogant. Goes with the territory..."
-- Larry Wall

Christopher Browne

Jun 16, 2000
Centuries ago, Nostradamus foresaw a time when qt...@my-deja.com would say:

>In article <ey37lc2...@cley.com>,
> Tim Bradshaw <t...@cley.com> wrote:
>> I don't even know if this is relevant to cll.
>>
>> http://www.cs.bell-labs.com/cm/cs/who/rob/utah2000.ps
>
>I think it's quite relevant to this NG. It has bothered me that the
>whole Unix/gcc/emacs world seems to be dominated by people, ideas, and
>tools from the 1970s. Compared to developments in computer hardware,
>that is incredible stagnation. A good example: just how modern is
>Emacs Lisp?

Can I throw a flip side of the coin (possibly a nickel :-)) into the
fray?

The automobile industry has had various developments, and lacks
thereof, over the years; it remains a constant that we still use that
thousands-of-years-old invention, The Wheel. The point being? Some
of these things may be "old" because the "old" stuff still happens to
be useful.

- It is quite unfortunate that Elisp hasn't progressed too much since
the '70s; there _are_ newer things worth learning from.

- On the other hand, the _popular_ newer developments in computer
languages could be mistaken for developments by people that never
_bothered_ to read the literature of the '70s, but merely distilled
bits of C and Simula, which were "'60s-based." (I _know_ GLS was
involved with CL and Scheme; I'm disappointed that Java _isn't_
better than it is... I'd like to blame that on him not being as
influential in its design as he should have been...)

- Another flip side (how many sides does this nickel have :-)) is that
the "conventional world of development" has a mindset dominated by
the "pundit circuit."

- Some years ago, the buzzword was "structured programming." At
which point people learned to write ugly-looking PL/1.

- Then, they headed off into "SQL world." And C.J. Date is still
griping about how badly they muffed up the implementation of
relational database systems. Effectively, the PL/1 guys seem to
have decided that SQL was a somewhat more robust way of accessing
flat files. They hadn't figured out Structured Programming yet,
by the way...

- Then they learned about Object Oriented Analysis, and everyone
started building GUIed applications using Bad C++. (Which begs
the question of whether or not (GOODP app-in-c++) could ever
*not* evaluate to NIL...) So we're on to OOA, after the populace
didn't "Get" Structured Analysis _or_ SQL.

- More recently, the programming methodology folks headed on to
UML, and so we have a "Unified Modelling Language," which is, in
no way, unified, but rather represents a conglomeration of a
dozen different diagramming schemes. Thus, the pundits are now
selling UML. Thus continuing the sequence, where the populace
couldn't "get" SSA, SQL, OOA, or, now, UML.

- Most recently, development has headed into "Internet
applications," built "on Internet Time," which, as near as I can
tell, means that you get someone's application framework, add a
scripting language, and hack it until it "sort-of works," and
then deploy that, within 60 days, because if you wait any longer,
someone will patent the techniques, and you'll get your pants
sued off. [And if _that_ happens, then there's a sexual
harassment suit in the wings since you've taken your pants
off...]

- The next "thing" appears to be using XML for data interchange.
Unfortunately, people didn't figure out SSA, SQL, OOA, UML, or
the Internet, and so the results seem easy to predict. Throw in
Peter Flynn's comments of some years ago, and it just strengthens
the result...

"DTDs are not common knowledge because programming students are
not taught markup. A markup language is not a programming
language." -- Peter Flynn <silm...@m-net.arbornet.org>

People never finish figuring out the existing stuff before trying to
adopt the next thing [that _also_ doesn't work], so that having some
"stagnant" things around that still work means that you can have
functioning systems despite all the pundits...
--
cbbr...@acm.org - <http://www.hex.net/~cbbrowne/languages.html>
"You're one of those condescending Unix computer users!"
"Here's a nickel, kid. Get yourself a real computer" - Dilbert.

Friedrich Dominicus

Jun 16, 2000
> | I can't see that anything has occurred that suggests that programming
> | can be done better with anything other than written text.
>
> Well, I think GUIs to GUI builders that let me worry less about the
> stupid and repetitive code that is so often required to make these
> things fly at all is a huge win.

Now this takes away one sort of work and may be a good or even the best
solution for creating GUIs; nevertheless you add the functionality by
writing down text. I have heard of languages in which you do
programming by choosing some sort of icons and placing them on your
desktop, but I hardly believe that many people use that kind of
programming. So probably typing is simpler and more adequate for
programming.

>
> | And because I do not see any other workable approach, one may ask if
> | programming by writing down text isn't the optimal thing.
>
> It depends on your typing speed and accuracy. Some people are
> incredibly inept typers. In fact, so incredibly inept that using a
> freaking electronic _rodent_ and looking at the same stupid menus
> over and over and over is faster than typing to these people.

Now it's not too difficult to improve on typing speed and accuracy. I
think a programmer had better take care that he/she can type well. In
fact all the programmers I've seen whom I regard as good are quite
fast and accurate typers. It's interesting to see what happens if they
find a keyboard which does not have the characters at the places they
expect them to be. But I guess that's going off-topic here.

Regards
Friedrich

vsync

Jun 16, 2000
cbbr...@news.hex.net (Christopher Browne) writes:

> - Most recently, development has headed into "Internet
> applications," built "on Internet Time," which, as near as I can
> tell, means that you get someone's application framework, add a
> scripting language, and hack it until it "sort-of works," and
> then deploy that, within 60 days, because if you wait any longer,
> someone will patent the techniques, and you'll get your pants
> sued off. [And if _that_ happens, then there's a sexual
> harassment suit in the wings since you've taken your pants
> off...]

"It compiles... ship it!"

--
vsync
http://quadium.net/ - last updated Mon Jun 12 23:31:13 MDT 2000
Orjner.

Francis Leboutte

Jun 16, 2000
Erik Naggum <er...@naggum.no> wrote:

>* Friedrich Dominicus <Friedrich...@inka.de>
>| Why should using Emacs be a bad decision?
>
> The argument is not against Emacs, but Emacs Lisp. Huge difference.
>

>| I can't see that anything has occurred that suggests that programming
>| can be done better with anything other than written text.
>
> Well, I think GUIs to GUI builders that let me worry less about the
> stupid and repetitive code that is so often required to make these
> things fly at all is a huge win.
>

Instead of using a GUI builder I prefer to add abstraction layers (text)
that ease programming of dialogs, increase reuse, simplify
maintenance and improve portability. I have used ACL on Windows for a
while and have written applications with a lot of dialogs (hundreds),
but never use the GUI builder.
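
For instance (a hypothetical sketch -- DEFINE-DIALOG, BUILD-DIALOG and
the spec format are invented for illustration, not ACL's actual API):

  ;; The toolkit-facing part is stubbed out; a real version would
  ;; construct the actual widgets here.
  (defun build-dialog (spec)
    (list :dialog spec))

  ;; One macro hides the repetitive construction code; the dialogs
  ;; themselves stay as declarative text in the sources.
  (defmacro define-dialog (name &body widgets)
    `(defun ,name ()
       (build-dialog ',widgets)))

  (define-dialog about-box
    (:label  :text "My Application")
    (:button :text "OK" :action :close))

  ;; (about-box) => (:DIALOG ((:LABEL ...) (:BUTTON ...)))
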
--
Francis Leboutte
f...@algo.be www.algo.be +32-(0)4.388.39.19

William Deakin

Jun 16, 2000
Tim wrote:

> Of course to say `x is faster than y' is fairly meaningless in this
> context: disk (and any other non-local storage) has unavoidably longer
> *latency* than more local storage, but it may have as much bandwidth.
> Even today it's perfectly possible to design machines with disk
> systems that can saturate the processor: such machines exist.

I'm not sure about how true (or relevant) this is, but here goes: I was
talking to a bloke who was working on the online processing system for
Hertz cars in the States. This consists of a continental-size Oracle
database that sits in about 4GB of memory and so never touches disk. This
`single' instance services the whole of the US.

To back this up, when a checkpoint is called on the DB the 4GB is flushed
to some DEC solid-state backup (effectively some persistent memory) which
is then dumped to disk/tape for long-term storage.

Best Regards,

:) will

ps: my apologies for this being a database example ;)


Jens Kilian

Jun 16, 2000
Espen Vestre <espen@*do-not-spam-me*.vestre.net> writes:
> My arrow keys (and the 'numpad') are covered by a CD cover half, which
> I put my 'rodent' (which I mainly use for netscapism and window
> selection) on to keep it close to the keyboard...

Sounds like you need this: http://www.pfuca.com/products/hhkb/hhkbindex.html

--
mailto:j...@acm.org phone:+49-7031-464-7698 (HP TELNET 778-7698)
http://www.bawue.de/~jjk/ fax:+49-7031-464-7351
PGP: 06 04 1C 35 7B DC 1F 26 As the air to a bird, or the sea to a fish,
0x555DA8B5 BB A2 F0 66 77 75 E1 08 so is contempt to the contemptible. [Blake]

Erik Naggum

Jun 16, 2000
* Friedrich Dominicus <Friedrich...@inka.de>

| Now it's not too difficult to improve on typing speed and accuracy.

You'd think it would be worth the investment in time and effort to
learn to type really fast and accurately, but people don't do this,
even when they are working in environments where typing speed has
become the predominant productivity factor.

Most people think much faster than they type, but they also find
that the computer is not responding fast enough, so there's no point
in typing at maximum speed.

| In fact all the programmers I've seen which I regard as good are
| quite fast and accurat typers. It's interesting to see what happens
| if they find a keyboard which does not have the characters at the
| places they expect them to be. But I guess that's going off-topic
| here.

Heh, I get completely lost when the keyboard is wrong, like having a
Norwegian layout. The QWERTY keyboard is also wrong: Parentheses
have no business being shifted, we have much less need for [ and ]
than parens, and { and } are useless unless you are C-damaged, and :
and ; are swapped, and so are + and =, ` and ~, and ' and ". OK, so
I have made myself efficient on my keyboard, but it's hard to use
somebody else's "standard" keyboard...

Raymond Toy

Jun 16, 2000
>>>>> "Erik" == Erik Naggum <er...@naggum.no> writes:

Erik> Heh, I get completely lost when the keyboard is wrong, like having a
Erik> Norwegian layout. The QWERTY keyboard is also wrong: Parentheses
Erik> have no business being shifted, we have much less need for [ and ]
Erik> than parens, and { and } are useless unless you are C-damaged, and :
Erik> and ; are swapped, and so are + and =, ` and ~, and ' and ". OK, so
Erik> I hae made myself efficient on my keyboard, but it's hard to use
Erik> somebody else's "standard" keyboard...

Many years ago, for kicks, I set a dip switch on my old Northgate
keyboard to Dvorak style. I got reasonably efficient with it after a
few hours. When I went to use someone else's standard QWERTY keyboard,
my typing speed dropped to essentially zero. At that point I switched
my keyboard back and took even longer to go from Dvorak to QWERTY than
from QWERTY to Dvorak and also vowed never to use any other keyboard
than QWERTY. (However, I do swap control and capslock and put ESC
next to 1 on my PC keyboards.)

That also means I never redefine standard keys in Emacs. If I use
someone's setup, I'd be totally lost and confused.

Ray

Rainer Joswig

Jun 16, 2000
In article <4n66r9x...@rtp.ericsson.se>, Raymond Toy
<t...@rtp.ericsson.se> wrote:

Hey, you may want a Lisp machine keyboard:

http://kogs-www.informatik.uni-hamburg.de/~moeller/symbolics-info/symbolics-images/keyboard.JPG

--
Rainer Joswig, BU Partner,
ISION Internet AG, Steinhöft 9, 20459 Hamburg, Germany
Tel: +49 40 3070 2950, Fax: +49 40 3070 2999
Email: mailto:rainer...@ision.net WWW: http://www.ision.net/

Tim Bradshaw

Jun 16, 2000
* William Deakin wrote:

> I'm not sure about how true (or relevant) this is, but here goes: I was
> talking to a bloke who was working on the online processing system for
> Hertz cars in the States. This consists of a continental-size Oracle
> database that sits in about 4GB of memory and so never touches disk. This
> `single' instance services the whole of the US.

I think this is an interesting datapoint -- I have a theory that a lot
of databases will actually fit in core. However think of people like
supermarkets, who are busy acquiring records of every transaction, or
telcos who are doing the same. Those databases will not fit in core:
not even a day's worth I expect.

--tim

Christopher Browne

Jun 16, 2000
Centuries ago, Nostradamus foresaw a time when Tim Bradshaw would say:

... Which means that you probably want "transactions" to stream off
into lower-performance secondary storage, whilst "inventory" and
"configuration" sits in memory.

Yes, I'd think this to be a big win...
--
aa...@freenet.carleton.ca - <http://www.ntlug.org/~cbbrowne/>
Rules of the Evil Overlord #103. "I will make it clear that I do know
the meaning of the word "mercy"; I simply choose not show them any."
<http://www.eviloverlord.com/>

William Deakin

Jun 16, 2000
Tim wrote:
> I think this is an interesting datapoint -- I have a theory that a lot
> of databases will actually fit in core.
Yes. I particularly liked the solid-state backup idea too. I would
imagine one could use this to dump a Lisp memory image to this kind of
`solid-state' backup straight after GC. That is, if somebody isn't
already doing this.

> However think of people like supermarkets, who are busy acquiring
> records of every transaction, or telcos who are doing the same. Those
> databases will not fit in core: not even a day's worth I expect.

True. (This sounds a bit like something we discussed a while back). I
suppose this is where `data-mining' and `data-warehousing' rear their
ugly heads. Or you dump the whole lot to a really, really big parallel
disk storage array and try to make sense of it later.

<ramble>
IIRC there was an article I read some time ago saying that the runaway
success of introducing loyalty cards in supermarkets in the UK caused
massive headaches for the IT and marketing departments, because you
then end up with more data than you can shake a stick at. Sounds like
you need a high-energy particle physicist to sort it out for you.
</ramble>

:)will

Will Hartung

Jun 16, 2000

Tim Bradshaw wrote in message ...

>* William Deakin wrote:
>
>> I'm not sure about how true (or relevant) this is, but here goes: I was
>> talking to a bloke who was working on the online processing system for
>> Hertz cars in the States. This consists of a continental-size Oracle
>> database that sits in about 4GB of memory and so never touches disk. This
>> `single' instance services the whole of the US.
>
>I think this is an interesting datapoint -- I have a theory that a lot
>of databases will actually fit in core. However think of people like
>supermarkets, who are busy acquiring records of every transaction, or
>telcos who are doing the same. Those databases will not fit in core:
>not even a day's worth I expect.


Perhaps not telcos, but even a busy store could probably keep its entire
inventory in core, particularly if they regularly snapshot the transaction
activity out of the system to free up space. During the daily operations, I
imagine there isn't much that looks at the actual transaction activity.

100,000 distinct SKUs each with a 100-byte record is a mere 10MB. Add
a, what, 15-byte SKU with a transaction ID (say 8 bytes), a line number
(2 bytes), qty (2 bytes), amount (10 bytes). That's 37 bytes per item
purchased. A half-gig of RAM will hold 14,000,000 transaction items.
This doesn't include indexes, of course, but I don't even think Macy's
in New York at Christmas comes anywhere close to this kind of volume
in a single store.
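
The back-of-the-envelope, for the listener-inclined:

  (* 100000 100)                 ; => 10000000, 10MB of SKU records
  (+ 15 8 2 2 10)                ; => 37 bytes per transaction item
  (floor (* 512 1024 1024) 37)   ; => 14510024 items in half a gig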

Essentially, if you consider the stores to be simply data collectors, versus
data processors, this isn't a problem at all. All of the processing would
happen at regular intervals at the corporate data center that the stores
upload to, either throughout the day or at night in bulk.

Think about something like Amazon, which is a far more complex system (with
reviews and suggestions and such). Just visualize how much they must keep in
RAM, and for how long.

Will Hartung
(vft...@home.com)


Michael Livshin

Jun 16, 2000
Per Bothner <p...@bothner.com> writes:

> I started working on Kawa (http://www.gnu.org/software/kawa/) mainly
> in reaction to political and technical frustrations with Guile. The
> situation has much improved - but I think Kawa is still technically
> superior, in spite of (or perhaps because of) the fact that it is a
> mostly-one-man project.

IMHO this one is a definite "because".

--
only legal replies to this address are accepted.

Well, I wish you'd just tell me rather than trying to engage my enthusiasm,
because I haven't got one.

Tim Moore

Jun 16, 2000
On Fri, 16 Jun 2000, Christopher Browne wrote:

> Emacs Lisp uses dynamic scope, as was common for early Lisps; Scheme
> was one of the "pioneers" in supporting lexical scoping, and that
> _appears_ to be the route whereby lexical scoping ultimately entered
> Common Lisp.

Also, MacLisp was lexically scoped in compiled code but dynamically scoped
in interpreted code! I don't know offhand what Lisp Machine Lisp and its
successors did.

Tim

Janos Blazi

Jun 16, 2000

Erik Naggum <er...@naggum.no> wrote in message
news:31701373...@naggum.no...

> [...]
> than parens, and { and } are useless unless you are C-damaged.

80% of my work is TeX and so I often need the curly braces, but I
stopped using C and C++ years ago.

J.B.



Courageous

unread,
Jun 17, 2000, 3:00:00 AM6/17/00
to
> Instead of using a GUI builder I prefer to add abstraction layers (text)
> that ease programming of dialogs, increase reuse, simplify
> maintenance, and improve portability. I have used ACL on Windows for a while,
> have written applications with a lot of dialogs (hundreds), but never use
> the GUI builder.

GUI builders aren't the only answer to this, however. Take a look
at XMT for Motif if you want to see interface creation done right.
Motif is largely dead, but XMT was a true pleasure to use when I
was using it.


C/

Reini Urban

unread,
Jun 17, 2000, 3:00:00 AM6/17/00
to
Per Bothner wrote:
>Erik Naggum <er...@naggum.no> writes:
>> FWIW, it's a lot less modern than that. Several ridiculously simple
>> improvements to Emacs Lisp have been turned down because they were
>> deemed to be moving it in the direction of Common Lisp, when some
>> folks behind Emacs have gone off and created this cretinous Scheme
>> bastard called GUILE and want to base Emacs on it. Phooey!
>
>I started working on Kawa (http://www.gnu.org/software/kawa/) mainly
>in reaction to political and technical frustrations with Guile. The
>situation has much improved - but I think Kawa is still technically
>superior, in spite of (or perhaps because) it is a mostly-one-man
>project.
>
>Partly in reaction to the plan of basing Emacs on Guile, I started the
>JEmacs project (http://www.JEmacs.net/). Instead of compiling Emacs
>Lisp to Guile source code, I am compiling ELisp to Java bytecodes,
>using the Kawa compiler. I have been making good progress; see the
>screenshots at the web site.

Per,
I watched your LUGM98 report on kawa (also on
http://sourceware.cygnus.com/kawa/papers/KawaLisp98-html/)
and your recent JEmacs (!) progress with interest.
Could you elaborate a bit on the upsides and downsides of the JavaVM
regarding your new Lisp plans?

I remember that you couldn't handle full tail-call elimination
then, but now you do with --full-tail-calls, only slower.
Would this be easier with the planned CL subset than with a fully
dynamic Lisp/Scheme?

>I have decided to add "Common Lisp" as a third language (after Scheme
>and ELisp) that Kawa can compile. This will be a very pathetic subset
>of CL - initially just Emacs Lisp with lexical scoping. However, it
>should be easy to add Common Lisp features as time permits or as
>contributed by volunteers.

Didn't you forget ECMAScript as a third language? (in fact the second)
So the lexical ELisp will be the fourth, won't it?

BTW: I wonder which new Emacs will make it, gtk-emacs or JEmacs.
GTK seems to be better than Swing, but Swing is much easier to work
with, and already has much more than the GTK interface can offer now.
--
Reini Urban
http://xarch.tu-graz.ac.at/autocad/news/faq/autolisp.html

Marco Antoniotti

unread,
Jun 17, 2000, 3:00:00 AM6/17/00
to

Courageous <jkra...@san.rr.com> writes:

> GUI builders aren't the only answer to this, however. Take a look
> at XMT for Motif if you want to see interface creation done right.
> Motif is largely dead, but XMT was a true pleasure to use when I
> was using it.

Pointer to XMT?

--
Marco Antoniotti ===========================================

Steven M. Haflich

unread,
Jun 17, 2000, 3:00:00 AM6/17/00
to

At least after the very early LMs, variables were lexical by default
as in modern Common Lisp. There were a variety of other capabilities
and behaviors associated with variables that (fortunately) evolution didn't
favor. For example, LM lisp could close over a dynamic variable. We had to
reproduce this in Franz Lisp (not related to Franz ACL) back in 1985 in order
to implement LM Flavors.
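
In Common Lisp terms, the nearest equivalent is to snapshot the current
dynamic binding by hand and re-establish it inside the closure. A rough
sketch of what such an emulation has to do (the LM closure facility did
this, and more, for you):

  (defvar *state* :outer)

  (defun close-over-special ()
    (let ((saved *state*))          ; snapshot the current dynamic binding
      #'(lambda ()
          (let ((*state* saved))    ; re-establish it around the body
            (format nil "state is ~S" *state*)))))

  (funcall (let ((*state* :inner))
             (close-over-special)))
  ;; => "state is :INNER" -- the captured binding, not the global :OUTER.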

Will Hartung

unread,
Jun 19, 2000, 3:00:00 AM6/19/00
to

William Deakin wrote in message <394A3EF3...@pindar.com>...

>Tim wrote:
>> I think this is an interesting datapoint -- I have a theory that a lot
>> of databases will actually fit in core.
>Yes. I particularly liked the solid-state backup idea too. I would
>imagine using this to dump a Lisp memory image to this kind of
>`solid-state' backup straight after GC. That is, if somebody isn't
>already doing this.


On a similar note...

I imagine what they are doing here is pre-loading the database into core
rather than simply caching it, since many databases allow selected tables
(and indexes) to be made resident. Meanwhile the transaction logs are
streamed to disk normally (of course, this could be a persistent solid-state
disk).

The snapshot is made to another device for recovery. If the database fails,
they can recover one of these snapshots, and then roll the logs forward. Of
course, then you must wait for the database to restart AND reload once you
have recovered.

ObLisp: for server applications, if rather than preloading your persistent
information into core you are caching it, then one thing that can be done is
this: after the application has started up, it forks(2). The new image runs
along happily, caching information on demand, etc. Whenever it's appropriate,
the running image exit(2)'s, and the parent then re-forks. This can prevent
an ugly global GC on the application, which even with native threads tends
to bring the app to a screeching halt, probably much more so than simply
reloading information from persistent store back into RAM. The global GC
will certainly have less concurrency than a cache reload.
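
A minimal sketch of that fork-and-recycle loop, assuming a CMUCL-style
UNIX:UNIX-FORK binding; UNIX:UNIX-EXIT, SERVE-UNTIL-GC-PRESSURE and
WAIT-FOR-CHILD are stand-ins for whatever your implementation and
application actually provide:

  (defun serve-forever ()
    (loop
      (let ((pid (unix:unix-fork)))     ; assumed fork(2) binding
        (if (zerop pid)
            (progn
              ;; Child: inherits the warm cache via copy-on-write pages;
              ;; serve requests until a global GC would be due, then just
              ;; exit -- the dirty heap is discarded wholesale, unGC'd.
              (serve-until-gc-pressure)
              (unix:unix-exit 0))
            ;; Parent: idle apart from taking code updates; reap the
            ;; child, then loop around and fork a fresh copy.
            (wait-for-child pid)))))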

Also, it's an interesting architecture for dynamic change. You can
(carefully) change the parent image while the child is running, and then
quickly restart it. This may be important depending on the scope of the
changes being made to the image. You certainly get into some ugly race
conditions when you start trying to change code with 100 active threads
running against it.

Finally, with this architecture your clients need never see a pause because
you can have two children running (one starting up, one exiting)
simultaneously. Nothing new here, Apache does this all the time, but it
can't change its internal image beyond what's in the conf files. Although
it does have the GC problem (err...memory leaks)....

There won't be much of a memory hit doing this, as most of the core image
isn't changing and is already mapped in swap anyway. There might be a little swapping
as the new image fires up while the old one quits. But once the old one
fades away..woohoo! Free RAM!

"Fill 'er up!...With Ethyl!"

I would imagine most of this could be alleviated with good GC tuning, but
even a long-running image would, I think, like a nice global GC at least
once in a while (once a week perhaps, rather than every day...).

Will Hartung
(vft...@home.com)


Seth Gordon

unread,
Jun 19, 2000, 3:00:00 AM6/19/00
to
William Deakin wrote:

> I'm not sure how true (or relevant) this is, but here goes: I was
> talking to a bloke who was working on the online processing system for
> Hertz cars in the states. This consists of a continental size Oracle
> database that sits in about 4GB of memory and so never touches disk. This
> `single' instance services the whole of the US.

This reminds me of a comment by Philip Greenspun (from
http://www.arsdigita.com/asj/application-servers):

> My friend Jin and I spent some spare evenings building
> http://www.scorecard.org for the Environmental Defense Fund. When a user
> types in his zip code, the server shows him a map of the factories near his
> house. Clicking on a factory will list the chemicals released. Clicking on
> a chemical will list its health effects. The site was featured on ABC World
> News, in Newsweek, in the New York Times, on CNN, and was a Yahoo Pick of
> the Week. Every single page on the site is generated on-the-fly by querying
> a relational database management system (RDBMS). Some pages require five
> SQL queries. Each page requires at least one. The site gets about 30
> requests/second at peaks (on days when traffic is over 500,000 hits). There
> are only a handful of sites on the Internet that serve a larger number of
> db-backed pages.
>
> Our hardware for this monstrously popular site? A Sun Microsystems SPARC
> Ultra 2 pizza box Unix machine, built in 1996. Its dual 167-MHz CPUs would
> be laughed at by the average Quake-playing 10-year-old. The CPUs sit idle
> 80% of the time. The disks sit idle most of the time, partly because I
> spent $4,000 on enough RAM to hold the entire 750 MB data set. Oh yes, the
> machine also serves a few hundred thousand hits/day for other customers of
> arsdigita.com and runs the street cleaning and birthday reminder services
> that we built.

ObLisp: Greenspun is an MIT alum and Lisp bi^H^H^H^H^H^H^H^H^H^H^Hwho likes
Lisp. See http://www.arsdigita.com/books/tcl/introduction.adp and scroll
down to "Lisp Without a Brain".


--
--Why is it that most kids are attracted to computers while
most adults are quite wary of computers?
--Most adults are smarter than most kids. ["Ask Uncle Louie"]
== seth gordon == sgo...@kenan.com == standard disclaimer ==
== documentation group, kenan systems corp., cambridge, ma ==

Per Bothner

unread,
Jun 19, 2000, 3:00:00 AM6/19/00
to
rur...@sbox.tu-graz.ac.at (Reini Urban) writes:

> Per,
> I watched your LUGM98 report on kawa (also on
> http://sourceware.cygnus.com/kawa/papers/KawaLisp98-html/)
> and your recent JEmacs (!) progress with interest.
> Could you elaborate a bit on the up and downsides of the JavaVM
> regarding your new lisp plans.

Well, I still think the Java VM is a reasonable target, and you can
compile Scheme/Lisp-style languages to it reasonably well. The Java
language does have its clumsy aspects - which Kawa aims to ameliorate.

> I can remember that you couldn't handle full tail-call elimination
> then but now you do with --full-tail-calls, only slower.
> Would this easier with the planned CL subset instead of a fully
> dynamic lisp/scheme?

Well, full tail-call elimination is not required for Common Lisp - but
many people expect it as a quality-of-implementation issue. So in
practice, the situation is not that different from Scheme! I.e., some
people care, but most don't, as long as at least the obvious
self-tail-recursion cases are eliminated - which Kawa does.
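
For reference, the "obvious" case is a function whose recursive call is
the very last thing it does, which the compiler can turn into a jump.
A small illustration (names mine):

  ;; Self-tail-recursive: the call to MY-LENGTH is in tail position,
  ;; so it can compile to a jump back to the top -- constant stack.
  (defun my-length (list &optional (n 0))
    (if (null list)
        n
        (my-length (cdr list) (1+ n))))

  ;; NOT a tail call: the (+ 1 ...) runs after the recursion returns,
  ;; so each recursive call needs its own stack frame.
  (defun my-length-2 (list)
    (if (null list)
        0
        (+ 1 (my-length-2 (cdr list)))))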

> Didn't you forget ECMAScript as third language? (in fact second)
> So the lexical elisp will be the forth, won't it?

Strictly speaking, yes; however, I have not done anything with the
EcmaScript implementation for a long time, and it never got to be even
minimally useful. On the other hand, I think I can get a minimally
useful CL subset (at least useful for toy programs) fairly quickly.

That does not mean I promise to make this a priority, though!
However, I'm guessing that enough people would be interested in
giving it a try that it is worth the small effort.

> BTW: Wonder which new emacs will make it, gtk-emacs or JEmacs.
> gtk seems to be better than swing, but swing is much easier to work
> with. and already has much more than the gtk interface can offer now.

Well, gtk-emacs still has the problems of an old substrate,
so I think JEmacs has its place. What I would like is a version of
JEmacs that uses gtk widgets. (JEmacs doesn't need all that much
from Swing, actually.)

Fabrice Popineau

unread,
Jun 24, 2000, 3:00:00 AM6/24/00
to
* Martin Cracauer <crac...@counter.bik-gmbh.de> writes:

> What I find amusing is that the article says researchers use emacs
> and TeX (at least here in Germany they use Windows/Word), that the
> article is obviously typeset in TeX and that at least the version I
> downloaded had major typesetting errors (probably font/dvips
> problems).

It is obviously _not_ typeset with TeX:

%!PS-Adobe-2.0
%%Version: 0.1
%%DocumentFonts: (atend)
%%Pages: (atend)
%%EndComments
%
% Version 3.3.2 prologue for troff files.
%

otherwise it would be much nicer than it is.

Also: what typesetting tools run under Plan 9?
:-))

--
Fabrice POPINEAU
------------------------
e-mail: Fabrice....@supelec.fr | The difference between theory
voice-mail: +33 (0) 387764715 | and practice, is that
surface-mail: Supelec, 2 rue E. Belin, | theoretically,
F-57078 Metz Cedex 3 | there is no difference !

Christopher Browne

unread,
Jun 27, 2000, 3:00:00 AM6/27/00
to
Centuries ago, Nostradamus foresaw a time when Fabrice Popineau would say:

>* Martin Cracauer <crac...@counter.bik-gmbh.de> writes:
>
>> What I find amusing is that the article says researchers use emacs
>> and TeX (at least here in Germany they use Windows/Word), that the
>> article is obviously typeset in TeX and that at least the version I
>> downloaded had major typesetting errors (probably font/dvips
>> problems).
>
>It is obviously _not typeset with TeX_ :

... And it should be no great surprise that Rob Pike, who most
certainly _IS_ a researcher, would use tools like Sam and troff,
particularly when he is amongst the research group that _created_
those tools in the first place.
--
cbbr...@hex.net - <http://www.hex.net/~cbbrowne/oses.html>
He's not dead. He's electroencephalographically challenged.

David Combs

unread,
Jul 2, 2000, 3:00:00 AM7/2/00
to
In article <87itvan...@frown.inka.de>,
Friedrich Dominicus <Friedrich...@inka.de> wrote:
><SNIP>

>Now it's not too difficult to improve on typing speed and accuracy. I
>think a programmer had better take care that he/she can type well. In

>fact, all the programmers I've seen whom I regard as good are quite
>fast and accurate typers. It's interesting to see what happens if they
^----- right on! :-)

>find a keyboard which does not have the characters at the places they
>expect them to be. But I guess that's going off-topic here.

David

