Lisp advocacy misadventures


Tim Daly, Jr.

Oct 25, 2002, 1:38:20 PM

I was talking with a friend of mine about Lisp. He said that people
write things in C because of speed. I said that Lisp will not
necessarily cause a program to be slow, and in fact, because it lets
you write a better program, things may even get much faster. He said
'like what?'

Hmm.

The first thing that came to mind was the OS. From somewhere I've
picked up the notion that an OS like Genera does not need to give each
process its own address space, and can forego complex IPC mechanisms,
because Lisp by its very nature won't let you shoot holes in memory.
So, because it was done in Lisp, task switching and IPC in Genera is
inherently simpler, and can therefore be faster.

Now, that's a stretch, I know. Especially because I'm talking out of
my ass - I've never used Genera.

Things proceeded:

"Doesn't the code that protects memory slow things down?"

You don't need code to protect memory. Lisp does not treat memory as
an array, and you cannot simply write a byte at an address, or get the
address of an object.

"Well, then how do you write low level code, like a kernel?"

Hmm.

Well, I'm blinded by the very misconceptions that led me to this
point, and I'm not sure what to tell him. Can you help me out?

-Tim


Barry Margolin

Oct 25, 2002, 2:14:28 PM
In article <m3d6py2...@www.tenkan.org>,

Tim Daly, Jr. <t...@tenkan.org> wrote:
>The first thing that came to mind was the OS. From somewhere I've
>picked up the notion that an OS like Genera does not need to give each
>process its own address space, and can forego complex IPC mechanisms,
>because Lisp by its very nature won't let you shoot holes in memory.
>So, because it was done in Lisp, task switching and IPC in Genera is
>inherently simpler, and can therefore be faster.
>
>Now, that's a stretch, I know. Especially because I'm talking out of
>my ass - I've never used Genera.
>
>Things proceeded:
>
>"Doesn't the code that protects memory slow things down?"
>
>You don't need code to protect memory. Lisp does not treat memory as
>an array, and you cannot simply write a byte at an address, or get the
>address of an object.
>
>"Well, then how do you write low level code, like a kernel?"

Lisp Machine Lisp includes a number of "sub-primitives" that allow you to
perform direct memory access. Although there's nothing preventing them
from being used anywhere, they are normally only used in low-level code,
like the OS, device drivers, and GC. They're all in the SYS package and
have names beginning with "%", so you're not likely to do this
unintentionally.

This is not considered a security hole because Genera is a single-user,
personal computer OS, and doesn't make any attempt to provide security.
Everything is in a single address space, with no inter-process protection;
there's no distinction between user and kernel modes.

--
Barry Margolin, bar...@genuity.net
Genuity, Woburn, MA
*** DON'T SEND TECHNICAL QUESTIONS DIRECTLY TO ME, post them to newsgroups.
Please DON'T copy followups to me -- I'll assume it wasn't posted to the group.

Wade Humeniuk

Oct 25, 2002, 2:22:12 PM
"Tim Daly, Jr." <t...@tenkan.org> wrote in message news:m3d6py2...@www.tenkan.org...
>

Speed is something of a motherhood issue that, on close scrutiny, turns
out to be a more specific question.

> I was talking with a friend of mine about Lisp. He said that people
> write things in C because of speed.

At this point you could have pointed out that if they were really concerned about
speed they would be using assembler. So there has to be more to why people
use C. Then let him ponder it out.


> I said that Lisp will not
> necessarily cause a program to be slow, and in fact, because it lets
> you write a better program, things may even get much faster.

True. You could have just pointed out that Lisp compilers produce code
which can run just as quickly as C (at least in the same neighborhood). A Lisp
program might run better because it is better designed and more robust.

As a rhetorical argument you might use,

"Say you are running a numerical simulation and it takes a week to run.
On the sixth day it throws a floating-point exception. Without a great
deal of error handling added, the C version will have to terminate, be
fixed, and restarted from scratch (6 more days later.....).
The Lisp version throws the exception; there is a good chance that,
after a day in the debugger, you can restart from the error, and 1 day
later the program finishes."
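Wade's rescue scenario is exactly what the Common Lisp condition system supports. Here is a minimal sketch of the idea (the function names are invented for illustration, and integer division stands in for the floating-point case):

```lisp
;; SAFE-STEP wraps one step of the "simulation" in RESTART-CASE, so
;; that when an error is signalled mid-run, a handler -- or a person
;; sitting in the debugger -- can supply a replacement value and carry
;; on, instead of killing the whole computation.
(defun safe-step (x)
  (restart-case (/ 1 x)              ; signals DIVISION-BY-ZERO when x = 0
    (use-value (v)
      :report "Supply a replacement value for this step."
      v)))

(defun run-simulation (inputs)
  (handler-bind ((division-by-zero
                   (lambda (c)
                     (declare (ignore c))
                     ;; Patch the bad step and resume where we were.
                     (invoke-restart 'use-value 0))))
    (mapcar #'safe-step inputs)))

;; (run-simulation '(2 0 4)) => (1/2 0 1/4)
```

Nothing unwinds past the failed step: the six days of work already done are untouched.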

Sometimes it's just better to let sleeping friends lie.

Wade


Thien-Thi Nguyen

Oct 25, 2002, 2:44:54 PM
t...@tenkan.org (Tim Daly, Jr.) writes:

> "Doesn't the code that protects memory slow things down?"

not (too much) if it's in hardware.

thi

Thomas F. Burdick

Oct 25, 2002, 2:56:06 PM
t...@tenkan.org (Tim Daly, Jr.) writes:

> I was talking with a friend of mine about Lisp. He said that people
> write things in C because of speed. I said that Lisp will not
> necessarily cause a program to be slow, and in fact, because it lets
> you write a better program, things may even get much faster. He said
> 'like what?'
>
> Hmm.
>
> The first thing that came to mind was the OS.

An example springs to mind from the collection of papers I got on loan
from Stanford: the Interlisp-D I/O system. Their Interlisp compiler
targeted a stack-based VM, with minimal optimization. The VM code was
then optimized, and compiled to native code. Apparently this gave
them decent performance, but looking at the emitted code, they would
find opportunities for up to 15% speed improvement from
hand-optimizing the end result. So, it was good, but not speed-demon
status.

Originally, large amounts of the kernel were written in Bcpl, for
performance reasons. They rewrote large amounts of it in Interlisp,
and this improved performance. Because of the ability to write code
quickly, and their refactoring tool, they could try out more ideas,
and ended up with better algorithmic performance, which improved
overall system performance. The paper this anecdote is from is
"Interlisp-D: Overview and Status" by Richard R. Burton et al., but
it's hard to find.

Another example I experienced personally: trying to rewrite an
image-manipulation library from CL to (very low-level) C++. We ripped
out the use of high-level facilities in the C++ version, which
improved speed significantly. We used arena-based allocation so the
code wouldn't spend so much time chasing pointers through structures
it was freeing. We profiled, and hand-optimized. It never did get as
fast as the Lisp code. The Lisp version was built on top of layer
after layer after layer of macros and compiler macros. Each of these
was pretty easy to write, and to verify. But the resulting Lisp
(after expansion) was essentially unreadable. A carefully done
unreadable, though, that was amenable to compilation. A direct
translation of the expanded Lisp code to C++ might have gotten
performance as good or better than the Lisp version, but would have
been nearly impossible, even with all the gensyms having useful names.
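To make the "layer after layer of macros" point concrete, here is a toy of the same shape (all names invented for illustration): each layer is small enough to verify on its own, while the expansion is the kind of type-declared, open-coded loop you would not want to write, or read, by hand.

```lisp
;; WITH-PIXELS expands into a fully declared DOTIMES over an octet
;; array.  Callers write three readable lines; the compiler sees the
;; ugly-but-optimizable expansion.
(defmacro with-pixels ((var image) &body body)
  (let ((a (gensym "IMAGE")) (i (gensym "INDEX")))
    `(let ((,a ,image))
       (declare (type (simple-array (unsigned-byte 8) (*)) ,a))
       (dotimes (,i (length ,a))
         (let ((,var (aref ,a ,i)))
           (declare (type (unsigned-byte 8) ,var))
           ,@body)))))

;; Client code stays readable; the generated loop does not have to be.
(defun total-brightness (image)
  (let ((sum 0))
    (with-pixels (p image)
      (incf sum p))
    sum))
```

Stack a dozen such layers and the final expansion is effectively unreadable, yet each individual macro remains easy to check.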

> From somewhere I've picked up the notion that an OS like Genera does
> not need to give each process its own address space, and can forego
> complex IPC mechanisms, because Lisp by its very nature won't let
> you shoot holes in memory. So, because it was done in Lisp, task
> switching and IPC in Genera is inherently simpler, and can therefore
> be faster.
>
> Now, that's a stretch, I know. Especially because I'm talking out of
> my ass - I've never used Genera.
>
> Things proceeded:
>
> "Doesn't the code that protects memory slow things down?"
>
> You don't need code to protect memory. Lisp does not treat memory as
> an array, and you cannot simply write a byte at an address, or get the
> address of an object.
>
> "Well, then how do you write low level code, like a kernel?"
>
> Hmm.
>
> Well, I'm blinded by the very misconceptions that led me to this
> point, and I'm not sure what to tell him. Can you help me out?

Presumably, there would be low-level facilities used only by the
kernel, that would give you exactly the kind of access to memory you
don't want in user code?

--
/|_ .-----------------------.
,' .\ / | No to Imperialist war |
,--' _,' | Wage class war! |
/ / `-----------------------'
( -. |
| ) |
(`-. '--.)
`. )----'

Will Deakin

Oct 25, 2002, 3:14:21 PM
Tim Daly, Jr. wrote:
> "Well, then how do you write low level code, like a kernel?"
>
> Hmm.
Well, how *do* you write low level code, like a kernel? If you look at
something like linux rather a lot of the truely low level code is, well,
just that in the form of assembly language embedded in c. There then is
nothing to stop you from doing the same thing in lisp...

:)w

Erik Naggum

Oct 25, 2002, 6:56:26 PM
* Tim Daly, Jr.

| I was talking with a friend of mine about Lisp. He said that people
| write things in C because of speed.

But this is incorrect. People use C because it /feels/ faster. Like, if
you build a catapult strong enough that it can hurl a bathtub with
someone crouching inside it from London to New York, it will feel /very/
fast both on take-off and landing, and probably during the ride, too,
while a comfortable seat in business class on a transatlantic airliner
would probably take less time (except for getting to and from the actual
plane, of course, what with all the "security"¹) but you would not /feel/
the speed nearly as much.

¹ http://www.theonion.com/onion3838/faa_passenger_ban.html

| I said that Lisp will not necessarily cause a program to be slow, and in
| fact, because it lets you write a better program, things may even get
| much faster. He said 'like what?'
|
| Hmm.

Better algorithms and type systems are well known to produce better
performance by people who actually study these things. It is often very
hard to implement better algorithms correctly and efficiently in C
because of the type poverty of that language. Yes, you get to tinker
with the bits as fast as the machine can possibly tinker, but, and this
is the catch, you get to tinker with the bits. If you are not super smart
and exceptionally experienced, the compiler will produce code that is
faster than yours. If this holds from assembly to C, it holds from C to
Common Lisp, given that you want to do exactly the same thing.

The core problem is that C programmers think they can get away with doing
much less than the Common Lisp programmer causes the computer to do. But
this is actually wrong. Getting C programmers to understand that they
cause the computer to do less than minimum is intractable. They would
not /use/ C if they understood this point, so if you actually cause them
to understand it in the course of a discussion, you will only make them
miserable and hate their lives. People are pretty good at detecting that
this is a likely outcome of thinking, and it takes conscious effort to
brace yourself and get through such experiences. Most people are not
willing even to /listen/ to arguments or information that could threaten
their comfortable view of their own existence, much less think about it,
so when you cannot answer a C programmer's "arguments" that his way of
life is just great the way it is, it is a pretty good sign that you let
him set the agenda once he realized that his way of life was under threat.
Since you have nothing to defend, your self-preservation instinct will
not activate hitherto unused parts of your brain to come up with reasons
and rationalizations for what you have done, you will not be aware that
you have been taken for a ride before it is over and you "lost".

If you deny people the opportunity to defend something they feel is under
threat, however, some people go completely insane with rage and actually
believe that you threaten them on purpose and that you willfully seek to
destroy something very valuable to them. However, some of the time, you
meet people who /think/ and who are able to deal with threats in a calm
and rational way because they realize that the threat is all in their head
and it will not go away just because they can play word games with people
and stick their head in the sand. If it /is/ the threat they feel it is,
they realize they had better pay some real attention to it instead of
fighting off the messenger so they can feel good about themselves again.

Much of the New Jersey approach is about getting away with less than is
necessary to get the /complete/ job done. E.g., perl is all about doing
as little as possible that can approximate the full solution, sort of the
entertainment industry's special effects and make-believe works, which
for all practical purposes /is/ the real thing. Regular expressions are a
pretty good approximation to actually parsing the implicit language of
the input, too, but the rub with all these 90% solutions is that you have
/no/ idea when they return the wrong value because the approximation
destroys any ability to determine correctness. Most of the time, however,
the error is large enough to cause a crash of some sort, but there is no
way to do transactions, either, so a crash usually causes a debugging and
rescue session to recover the state prior to the crash. This is deemed
acceptable in the New Jersey approach. The reason they think this also
/should/ be acceptable is that they believe that getting it exactly right
is more expensive than fixing things after crashes. Therefore, the whole
language must be optimized for getting the first approximations run fast.

See how elegantly this forms a completely circular argument? But if you
try to expose this circularity, you necessarily threaten the stability
of the whole house of cards and will therefore be met with incredible
hostility and downright hatred, and you will not even hear about the
worst fits of insane rage until years later when some moron thinks he can
get back at you for "hurting" him only because his puny brain could not
handle the information he got at the time.

| Well, I'm blinded by the very misconceptions that led me to this point,
| and I'm not sure what to tell him. Can you help me out?

Ask him why he thinks he should be able to get away with unsafe code,
core dumps, viruses, buffer overruns, undetected errors, etc, just because
he wants "speed".

--
Erik Naggum, Oslo, Norway

Act from reason, and failure makes you rethink and study harder.
Act from faith, and failure makes you blame someone and push harder.

Kaz Kylheku

Oct 26, 2002, 2:31:15 PM
t...@tenkan.org (Tim Daly, Jr.) wrote in message news:<m3d6py2...@www.tenkan.org>...

> I was talking with a friend of mine about Lisp. He said that people
> write things in C because of speed.

By far the biggest reason people write things in C is because they are
idiots who hold on to thirty year old misconceptions (and many of them
are not even that old, but they gladly inherit their misconceptions
from others before them).

> I said that Lisp will not
> necessarily cause a program to be slow, and in fact, because it lets
> you write a better program, things may even get much faster. He said
> 'like what?'
>
> Hmm.
>
> The first thing that came to mind was the OS. From somewhere I've
> picked up the notion that an OS like Genera does not need to give each
> process its own address space, and can forego complex IPC mechanisms,
> because Lisp by its very nature won't let you shoot holes in memory.
> So, because it was done in Lisp, task switching and IPC in Genera is
> inherently simpler, and can therefore be faster.
>
> Now, that's a stretch, I know. Especially because I'm talking out of
> my ass - I've never used Genera.
>
> Things proceeded:
>
> "Doesn't the code that protects memory slow things down?"

The proper answer to this is that you don't need protection when the
program can be proven not to have any bad accesses, because it's
written in a language which will not allow it. Protection and security
issues go away.

What slows things down is *unnecessary* checks performed by the
hardware, when the program is already safe and correct!

You can implement a web of trust in the toolchain that transforms
programs from source to executable representations. Only trusted
source code is allowed to use language features that gain access to
machine code which can do anything it wants with the hardware.

Using virtual memory for protection and security is abuse of the
concept; the purpose of virtual memory is to reduce fragmentation in
programs whose storage allocators cannot relocate objects. Any gaps in
the heap that are at least one page wide, and contain at least one
whole page, can be liberated by the creation of an unmapped hole, so
the memory can be mapped elsewhere.

> You don't need code to protect memory. Lisp does not treat memory as
> an array, and you cannot simply write a byte at an address, or get the
> address of an object.
>
> "Well, then how do you write low level code, like a kernel?"

The answer to this is that operating system kernels that are said to
be written in C are not entirely written in C. They contain
substantial amounts of machine language.

Writing an operating system in Lisp will also involve the same thing:
machine language.

But that operating system itself can then disallow use of machine
language to untrusted programs. The compiler will only allow low level
features when running as a supervisor, as configured by the user who
has the right password.

Without the supervisor privileges, it will reject any low-level
programs which request access to the underlying processor's
instruction set.

Thus things like device drivers can be compiled and installed as
privileged programs.

This isn't much different from the root account on UNIX, which causes
the kernel to let you do things that ordinary processes cannot, with
the potential of bringing the machine down, and losing important data.

Traditional architectures implement this protection in the CPU itself;
they separate a certain set of ``dangerous'' instructions, and
``virtualize'' them. So when the program tries to do something
illegal, a trap is generated, and the privileged kernel gains control.

However, accesses to memory are unprivileged, or privileged on the
granularity of segments or pages. You can't just say, all memory
accesses are privileged and will be trapped through the kernel. But
the granularity of doing it at the page level adds huge overheads. The
processor has to maintain a virtual address translation cache, which
is loaded from in-memory tables. Whenever there is a cache miss, it
has to walk memory to reload the entry in the translation cache. Some
unfortunate patterns of memory access can lead to extremely
pathological behavior.

If you can prove that a program does not access any memory that does
not belong to it, you can get rid of these hardware-based mechanisms,
and dispense with their inefficiency. You can prove that a program
cannot do that without even analyzing that program. Quite simply,
don't give it the language features for doing so, and write a
well-debugged interpreter and compiler, so that malicious programs
cannot exploit accidental holes in the handling of the safe language.

Kaz Kylheku

Oct 26, 2002, 2:42:08 PM
Thien-Thi Nguyen <t...@glug.org> wrote in message news:<kk9elae...@glug.org>...

> t...@tenkan.org (Tim Daly, Jr.) writes:
>
> > "Doesn't the code that protects memory slow things down?"
>
> not (too much) if it's in hardware.

Doh, talk about missing the point.

Hardware is what C programmers rely on currently! When miraculously
correct programming falls short, it's hardware that picks up the
slack. It's ultimately hardware that stops your sendmail process from
stomping over your Apache process.

If you have a better language, you don't need it in the hardware, or
in the software. You just don't need the checks; you generate code
that is free of machine-level errors.

Will Deakin

Oct 26, 2002, 2:59:53 PM
Kaz Kylheku wrote:
> Thien-Thi Nguyen wrote:

>>Tim Daly, Jr. writes:
>>>"Doesn't the code that protects memory slow things down?"
>>not (too much) if it's in hardware.
> Hardware is what C programmers rely on currently! When miraculously
> correct programming falls short, it's hardware that picks up the
> slack. It's ultimately hardware that stops your sendmail process from
> stomping over your Apache process.
(Pretend I'm very stupid -- it shouldn't be hard.) In what way does the
hardware stop processes from stomping over each other?

:)w

Wade Humeniuk

Oct 26, 2002, 3:16:46 PM

"Kaz Kylheku" <k...@ashi.footprints.net> wrote in message
news:cf333042.02102...@posting.google.com...

> t...@tenkan.org (Tim Daly, Jr.) wrote in message news:<m3d6py2...@www.tenkan.org>...
> But that operating system itself can then disallow use of machine
> language to untrusted programs. The compiler will only allow low level
> features when running as a supervisor, as configured by the user who
> has the right password.
>

This reminds me of the hardware Ring protection that there was
on Multics and CDC NOS/VE systems. Code was executed within
rings which disallowed calls to various protected code at higher
ring levels (but within a range of ring values).

Where I worked they gained the rights to port Multics
to a new processor, I think they where looking at the Intel processors
since they had a hardware ring architecture (it was along time ago, I
think 1988).

> Without the supervisor privileges, it will reject any low-level
> programs which request access to the underlying processor's
> instruction set.

--
Wade

Email: (format nil "~A@~A.~A" "whumeniu" "telus" "net")

Thien-Thi Nguyen

Oct 26, 2002, 3:35:30 PM
k...@ashi.footprints.net (Kaz Kylheku) writes:

> Doh, talk about missing the point.

presumably, you will explain how:

> Hardware is what C programmers rely on currently! When miraculously
> correct programming falls short, it's hardware that picks up the
> slack. It's ultimately hardware that stops your sendmail process from
> stomping over your Apache process.

there is a bit of news here: everyone relies on hardware, lest all this is a
mad yet very precise mass-hallucination (in which case, pass the hookah dude).

> If you have a better language, you don't need it in the hardware, or
> in the software. You just don't need the checks; you generate code
> that is free of machine-level errors.

what is this "it"?

i think the point i was making, which was typically delivered in the vague way
almost guaranteed to confuse some people, is the same as yours but expands on
it thus: some "errors" and "exceptions" can be semantically useful at a level
below that of "correct program operation", especially if their handling is
done by dedicated silicon optimized by 1000-man-year engineering gang bangs.

the end programmer writing explicit code to make use of these fruits is indeed
a WOMBAT activity, but language implementation (vendor) programmers are likely
to take these very things into account (and subsequently market their efforts
as a competitive advantage), *if the language permits*. choosing a language
that permits this means choosing where you want to get reliable slack, and in
what shape and form.

thi

Erik Naggum

Oct 26, 2002, 3:47:31 PM
* Kaz Kylheku

| By far the biggest reason people write things in C is because they are
| idiots who hold on to thirty year old misconceptions (and many of them
| are not even that old, but they gladly inherit their misconceptions from
| others before them).

In other words, they express a deep-rooted desire /not/ to be different
from anybody else, a deep-rooted desire to be /just like/ everybody else.
People whose only distinguishing mark is that they are not different are
fundamentally inconsequential. They will change when people around them
change, insofar as they do not believe that they have a /right/ not to
change because they think being just like everybody else is a /virtue/.
In a world where almost everything except human nature has changed so
much that an 80-year-old must have been /really/ mentally active all his
life to be indistinguishable from an Alzheimer's patient, the kind of
people who have a strong desire /not/ to think become not just a liability
on their immediate surroundings, they force a change in how civilization
can sustain itself when these people think they should have some power,
and indeed /have/ some power qua mass consumers, where everybody is in
fact just like everybody else and where being a minority costs real money
if not convenience. So why do I not want Common Lisp to be a mass market
language? Because this kind of people will want to exert influence over
something that is good because it has been restricted to the "elite" that
has made a conscious choice to be different from /something/, indeed to
/be/ something. The very word "exist" derives from "to step forth, to
stand out". To be just like everyone else is tantamount to not exist, to
leave not a single mark upon this world that says "I made this". Likewise
the people who form the mass do not want those exceptions, the minority
that has decided to stand out, to /exist/. All the brutality of the mass
hysteria against that which threatens the meaningless lives of those who
do not wish to have any meaning to their lives illustrates with what
vengeance meaningless people will fight the requirement to think, to form
an opinion, an idea, a thought of their own, different from what everybody
else have already said they would approve of. People who program in the
main-stream languages because they are main-stream languages have yet to
form the prerequisite concepts to say "I want to program in C". They
have not yet developed an "I" who can actually want anything on its own.

That said, there are things that I really miss from C. The ability to
make full use of the one resource that is the most scarce in modern
processors, the registers, is sorely missing from the way Common Lisp has
been implemented. If I had the time, I would seriously investigate other
options for representing fixnums and pointers instead of just dabbling in
an area where I once considered myself knowledgeable, but failed to keep
up and it appears to be a full-time job just to catch up. *sigh*

Russ Allbery

Oct 26, 2002, 4:05:04 PM
Kaz Kylheku <k...@ashi.footprints.net> writes:

> By far the biggest reason people write things in C is because they are
> idiots who hold on to thirty year old misconceptions (and many of them
> are not even that old, but they gladly inherit their misconceptions from
> others before them).

Hm. I'd say that by far the biggest reason people write things in C is
because they're building on work already done by other people, and that
work was done in C. There have been some recent threads in this newsgroup
about the lure of complete rewrites and how often that's not the right
thing to do when the system already fundamentally works, if you can
continue to maintain it without undue difficulty. (Iffy, I know, in the
case of large amounts of C code.)

Another common reason why free software in particular is written in C
rather than in some other language is simply that there are more people
who understand C, and therefore it's more likely that one will get help
with a project written in C. (For C, you can substitute Java, Perl,
Python, or the sort of lowest-common-denominator C++ that doesn't really
properly use the language as other languages that are often used simply
because one can get help in that language.)

In maintaining INN, I keep running hard into the fundamental limitations
of C, and spend a lot of time writing code that I wouldn't have to write
if it were written in Lisp. On the other hand, if I undertook the project
to rewrite it in Lisp, I'd most likely have to do it pretty much entirely
alone (while learning Lisp well enough to do a good job at it in the
process), and even when finished it's very unclear that I'd get very much
help at all, whereas right now INN has four or five active contributors.

It's still tempting, since the ease of doing things properly in Lisp may
well make up for that and then some, but part of the fun of working on
free software is working with other people I respect, and maintaining a
project all by myself isn't nearly as much fun. (And for me, the point of
free software is largely fun.) On the other hand, writing in a more
capable language would be more fun....

--
Russ Allbery (r...@stanford.edu) <http://www.eyrie.org/~eagle/>

ozan s yigit

Oct 26, 2002, 4:22:01 PM
t...@tenkan.org (Tim Daly, Jr.) writes:

> I was talking with a friend of mine about Lisp. He said that people
> write things in C because of speed. I said that Lisp will not
> necessarily cause a program to be slow, and in fact, because it lets
> you write a better program, things may even get much faster. He said
> 'like what?'

> [...]

the problem i have with this sort of advocacy is that it is often just a
thought(!) experiment (``assume that you have a mail system like postfix
that can deliver 1,000,000 pieces of mail a day, but written in CL'')
or worse, just rigorous conversation about language details we are all too
familiar with. maybe a better approach is to take an actually useful piece
like the portable aserver and an expertly written version in C
of equal functionality (?tthttpd? just guessing), and without bias analyze
them side by side to death. [you may know of other more interesting pieces
that are in heavy use and of comparable magnitude]

what you find out would be of interest to many people. [someone needs to
do a project paper?]

oz
--
No arrangement for perpetuation of ideas is secure if the ideas do not make
useful contact with the problems they are presumed to illuminate or resolve.
-- John Kenneth Galbraith

Duane Rettig

Oct 26, 2002, 5:00:01 PM
Erik Naggum <er...@naggum.no> writes:

> That said, there are things that I really miss from C. The ability to
> make full use of the one resource that is the most scarce in modern
> processors, the registers is sorely missing from the way Common Lisp has
> been implemented. If I had the time, I would seriously investigate other
> options for representing fixnums and pointers instead of just dabbling in
> an area where I once considered myself knowledgeable, but failed to keep
> up and it appears to be a full-time job just to catch up. *sigh*

When you do catch up: if you come up with a new idea that enables the
use of all 32 bits (to which I assume you are referring above, since
you've discussed this desire before), and if it also subsumes all of the
important features of the current most-used implementation, then I'd
love to hear about it. I'm always on the lookout for new, better ways
to do things, and am never afraid to rewrite even the implementation
subsystems in order to make things continuously better.

My current suspicion, based on the features of GP hardware, is that any
new ideas (or even reapplications of old ideas with twists) will either
involve a memory management trick not necessarily available on all
architectures, or else will involve a tradeoff consideration for the
positive behaviors that the current (2 or 3 bits of tag, 30 or 61 bits
of integer) implementation allows. The decision about which of these
positive behaviors is less important and thus can be removed might
either be subjective or application-dependent. I'm always ready to
discuss those tradeoffs.
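For readers unfamiliar with the representation under discussion: the usual scheme steals the low 2 (or 3) bits of each machine word as a type tag, leaving 30 bits of fixnum on a 32-bit word (or 61 on 64-bit). A sketch of the arithmetic, assuming a tag of 00 for fixnums:

```lisp
;; With a two-bit tag of 00, boxing a fixnum is a left shift and
;; unboxing is an arithmetic right shift.  The payoff: tagged fixnum
;; addition needs no untagging at all, because
;; (a << 2) + (b << 2) = (a + b) << 2.
(defconstant +tag-bits+ 2)

(defun box-fixnum (n)
  (ash n +tag-bits+))

(defun unbox-fixnum (word)
  (ash word (- +tag-bits+)))

;; (unbox-fixnum (+ (box-fixnum 20) (box-fixnum 22))) => 42
```

The cost Erik laments is that every full-width quantity in a register must either carry such a tag or be boxed on the heap, so raw 32-bit values cannot live naked in registers the way they do in C.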

--
Duane Rettig du...@franz.com Franz Inc. http://www.franz.com/
555 12th St., Suite 1450 http://www.555citycenter.com/
Oakland, Ca. 94607 Phone: (510) 452-2000; Fax: (510) 452-0182

Faried Nawaz

unread,
Oct 27, 2002, 5:23:23 PM10/27/02
to
ozan s yigit <o...@blue.cs.yorku.ca> wrote in message news:<vi4pttx...@blue.cs.yorku.ca>...

> maybe a better approach is to take an actually useful piece
> like the portable aserver and take something expertly written version in C
> of equal functionality (?tthttpd? just guessing), and without bias analyze
> them side by side to death.

http://opensource.franz.com/ contains a few good apps to compare (there is
a DNS server there, an NFS server, etc).

Kaz Kylheku

unread,
Oct 27, 2002, 8:56:28 PM10/27/02
to
Will Deakin <aniso...@hotmail.com> wrote in message news:<apeon9$bsi$1...@helle.btinternet.com>...

The hardware, first of all, supports (at least) two privilege levels
of execution, let's call them god and mortal. In god mode, you can do
anything. In mortal mode, certain dangerous machine instructions are
inaccessible: instructions for accessing devices, if the processor has
such special instructions, and instructions for changing certain
states, such as disabling interrupts or switching into god mode.
When something that requires god privileges is run in mortal mode, an
exception occurs. That exception is vectored to privileged code in the
operating system which can decide how to handle it. It runs in god
mode, of course, but mortal mode has no powers to modify that code or
vector elsewhere. Also certain regions of the address space may be off
limits to mortal mode; in some processors, for instance, the entire
top half of the address space (any address with a 1 in the most
significant bit position) is off limits. So privileged data such as
the operating system code can be placed there.

Secondly, the hardware supports virtual address translation. This
means that memory accesses go through a layer of indirection supported
by a translation table. The operating system can arrange for each
process to have its own table. This means that when sendmail is
executing, it cannot even utter the address of a piece of memory that
belongs to the Apache process; no piece of Apache's memory is mapped
into sendmail's address space.

Processes can share memory only by request through the operating
system, subject to security checks. There is also some default sharing
to save space: when two processes use the same shared library or
executable, the same objects are mapped to both address spaces. But
these pages are read-only. If they are made writable (which is
normally done only for the sake of debugging support, so that a
debugger can insert breakpoint instructions into code), then the write
accesses are trapped by god mode and result in a process getting its
own copy of the page that was accessed.

Scott L. Burson

unread,
Oct 27, 2002, 11:43:20 PM10/27/02
to
Duane Rettig wrote:

> if you come up with a new idea that enables the
> use of all 32 bits, and if it also subsumes all of the
> important features that the current most-used implementations
> provide, then I'd love to hear about it.

After spending some time on CMUCL's Python compiler [for those unaware, this
is a Common Lisp compiler named "Python"; it has nothing to do with the
language of that name], I am persuaded that the way it goes about type
inferencing is a step in the right direction. Basically, Python, rather
than just believing any type declarations provided by the programmer,
attempts to verify their correctness; and having done that, goes to more
effort than other CL implementations I have seen to extract value from
them. Just to take one example off the top of my head, if you declare a
structure slot to be of type single-float or double-float, the compiler
will store the number in unboxed form. Obviously this means it has to
reject any program that attempts to store anything other than the
appropriate kind of number in such a slot.

I can't do Python justice in a short post, so I encourage anyone who hasn't
already looked at it to read the documentation and maybe play around with
it a bit.

http://www.cons.org/cmucl/

While I think Python has made an excellent start, there is more that I think
could be done along these lines. For instance, I would like user-defined
multi-word types, like structures except that they would be immutable and
(as with bignums) the behavior of EQ on them would not be guaranteed, and
which, given the appropriate type declarations, the compiler would
manipulate unboxed. So, for instance, the compiler could pass one by
passing its contents in two or more registers. This would generalize and
give user access to mechanisms that must already exist for, e.g., unboxed
double-floats. Of course, this is a language extension, not just an
implementation trick.

Another thing I have long wanted to see is multiple entry points with
different type assumptions. Say you have a routine `foo' one of whose
parameters is a double-float. It could have a general entry point that
expects that double-float in boxed form, and a specialized entry point that
expects it in unboxed form. The general entry point is used by callers
that do not or cannot know anything about the routine they are calling;
basically, it unboxes the double-float and branches to the specialized
entry point. But the specialized entry point can also be invoked by
callers that are compiled with the knowledge that it exists.

So far, this is very similar to the block compilation that Python and some
other Lisp compilers already do. But what I'm suggesting is more automatic
and somewhat less aggressive than Python's block compilation, which (a)
would be too slow to use on an entire program and (b) interferes with
incremental redefinition. I want the Lisp compiler to keep track itself of
the existence of the specialized entry points and to call them when
possible without my having to know it's doing so.

Such a system won't be capable of all the optimizations that block
compilation can provide, the ultimate form of which, of course, is
whole-program optimization like that of Stalin or Stephen Weeks' MLton. In
particular, it can be difficult to decide which sets of assumptions a
routine could usefully make about its calling contexts deserve separate
entry points, the obviously intolerable worst case being one entry point
for each member of the power set of the possible assumptions.

Still, it seems to me that something like this could be done and would
provide a useful performance boost while still providing compilation times
that support interactive development.

-- Scott

--
To send email, remove uppercase characters from address in header.

Russell Wallace

unread,
Oct 30, 2002, 11:50:25 AM10/30/02
to
On 25 Oct 2002 12:38:20 -0500, t...@tenkan.org (Tim Daly, Jr.) wrote:

>You don't need code to protect memory.

In the following cases:

1) You have an array of 100 elements, and you try to access element
#200

2) A function expects a reference to a structure of type Foo, and
tries to access element Bar of that structure, and you pass it NIL

what happens?

I would imagine this is what he meant by code to protect memory.

--
"Mercy to the guilty is treachery to the innocent."
Remove killer rodent from address to reply.
http://www.esatclear.ie/~rwallace

Pascal Costanza

unread,
Oct 30, 2002, 12:00:27 PM10/30/02
to
Russell Wallace wrote:
> On 25 Oct 2002 12:38:20 -0500, t...@tenkan.org (Tim Daly, Jr.) wrote:
>
>
>>You don't need code to protect memory.
>
>
> In the following cases:
>
> 1) You have an array of 100 elements, and you try to access element
> #200
>
> 2) A function expects a reference to a structure of type Foo, and
> tries to access element Bar of that structure, and you pass it NIL
>
> what happens?
>
> I would imagine this is what he meant by code to protect memory.
>

MMUs can help you to handle these kinds of things. For example, NIL
usually points to an address protected by the hardware, so the processor
can signal an exception and the runtime environment can deal with it -
no upfront checks are needed.

Sorry for the use of non-technical language, I don't have any detailed
knowledge about hardware issues.

Pascal

--
Pascal Costanza University of Bonn
mailto:cost...@web.de Institute of Computer Science III
http://www.pascalcostanza.de Römerstr. 164, D-53117 Bonn (Germany)

Russell Wallace

unread,
Oct 30, 2002, 12:37:13 PM10/30/02
to
On Wed, 30 Oct 2002 18:00:27 +0100, Pascal Costanza <cost...@web.de>
wrote:

>MMUs can help you to handle these kinds of things. For example, NIL
>usually points to an address protected by the hardware, so the processor
>can signal an exception and the runtime environment can deal with it -
>no upfront checks are needed.

However, this doesn't apply to arrays. (If you told the MMU about it
every time you allocated a small array, the overhead would be much
worse than just doing an explicit compare-and-branch on access.)

Note that I'm not trying to advance the view "Lisp is slow" here. For
most purposes we have such a surplus of machine resources these days
that spending effort on efficiency is a form of mental masturbation -
if one enjoys it, great, but one shouldn't indulge in it on company
time. I'm just clarifying what I understand to be what Tim's friend
was referring to.

>Sorry for the use of non-technical language, I don't have any detailed
>knowledge about hardware issues.

No problem, I'm no hardware engineer either :)

Nicholas Geovanis

unread,
Nov 1, 2002, 1:18:44 PM11/1/02
to
On 26 Oct 2002, Kaz Kylheku wrote:

> Using virtual memory for protection and security is abuse of the
> concept; the purpose of virtual memory is to reduce fragmentation in
> programs whose storage allocators cannot relocate objects.

Mmmmm, the purpose of virtual memory is to provide software with the
illusion of larger available storage than is physically present in the
machine. Its utility and implementation follow from the fact that only a
fraction of the needed storage can actually be in use at any given time.
The "protection and security" mechanism is actually a hardware assist for
the OS such that the OS's view of virtual storage is monolithic but the
applications' views are not, and so that the OS can implement paging and
swapping.

I think you might be suffering from the single-user Intel/DOS/Win model of
virtual storage. The multi-user model is broader and subsumes the
single-user model, and the Intel implementation (after a slow start)
bit the bullet and did the whole thing. That's how OSes like Linux, whose
virtual storage model long predates the 386, can be implemented on it.
(And of course AT&T had System V on Intel long before then.)

* Nick Geovanis
| IT Computing Svcs Computing's central challenge:
| Northwestern Univ How not to make a mess of it.
| n-geo...@nwu.edu -- Edsger Dijkstra
+------------------->
