
Lisp is not an interpreted language


Ralph Silverman

Oct 21, 1996

Carl L. Gay (cg...@ix.cs.uoregon.edu) wrote:

: From: thomas hennes <aie...@pobox.oleane.com>
: Date: Sat, 19 Oct 1996 03:48:28 +0100

: Well, i am not sure about this. Consider the case of software agents
: --which is directly linked to AI if you bear in mind that GOOD agents
: are genetically evolutive (ideally). When dealing with networked agents,
: you just cannot let pointer arithmetic (an automatic feature of C/C++)
: get into the way, for obvious security reasons. LISP, on the other hand,
: is one language that manages this pretty well, as do others such as
: SafeTCL (another interpreted language!!).

: Another?

: What i actually believe is that interpreted languages and compiled
: languages have very different application domains. Interpreted languages
: work wonders when used as tiny specialized scripts.

: No, Common Lisp is not an interpreted language.

: Yes, Lisp interpreters exist.

: All commercial Common Lisps that I'm aware of are compiled by default.
: Even if you type in a "tiny specialized script" it is usually compiled
: before it is executed.

: Harumph. :)

--
***************begin r.s. response***************

lisp
is one of the earliest
high level languages,
dating to the 1950s...

certainly,
in early implementations,
lisp
was available primarily
as an interpreted language...

historically, lisp pioneered
routine availability of
recursion
to the programmer...
later...
so generally available in
the widespread tradition dating
to algol60(revised)...

for those with systems capable
of supporting (ms dr pc)dos
(286 good)...
versions of lisp interpreters
are freely available as shareware...
(lisp like)
xlisp
and
pc-lisp;

each of these is remarkable
and very well made...
(xlisp is available in a
variety of releases...)

***************end r.s. response*****************
Ralph Silverman
z007...@bcfreenet.seflin.lib.fl.us


Ralph Silverman

Oct 22, 1996

Ralph Silverman (z007...@bcfreenet.seflin.lib.fl.us) wrote:

: : Another?

: : Yes, Lisp interpreters exist.

: : Harumph. :)

: --
: ***************begin r.s. response***************


--
******************begin r.s. response******************

during the time of the early
development of
lisp
,
limitations,
now virtually taken for granted,
were not necessarily accepted...

self-modification of software
particularly,
was thought, by some,
to be a design goal for
advanced computer
programming languages...

in early, interpreted forms,
lisp
supported such use...
various aspects of
self-modification were programmed
into systems...

******************end r.s. response********************
Ralph Silverman
z007...@bcfreenet.seflin.lib.fl.us


Ben Sauvin

Oct 23, 1996

Ralph, please forgive me (this is NOT a flame), but I GOTTA know: what
text formatting language or utility are you using? :)

Ralph Silverman

Oct 24, 1996

Liam Healy (Liam....@nrl.navy.mil) wrote:
: cg...@ix.cs.uoregon.edu (Carl L. Gay) writes:

: >

: >
: > From: thomas hennes <aie...@pobox.oleane.com>
: > Date: Sat, 19 Oct 1996 03:48:28 +0100
: >
: > Well, i am not sure about this. Consider the case of software agents
: > --which is directly linked to AI if you bear in mind that GOOD agents
: > are genetically evolutive (ideally). When dealing with networked agents,
: > you just cannot let pointer arithmetic (an automatic feature of C/C++)
: > get into the way, for obvious security reasons. LISP, on the other hand,
: > is one language that manages this pretty well, as do others such as
: > SafeTCL (another interpreted language!!).
: >
: > Another?
: >
: > What i actually believe is that interpreted languages and compiled
: > languages have very different application domains. Interpreted languages
: > work wonders when used as tiny specialized scripts.
: >
: > No, Common Lisp is not an interpreted language.
: >
: > Yes, Lisp interpreters exist.
: >
: > All commercial Common Lisps that I'm aware of are compiled by default.
: > Even if you type in a "tiny specialized script" it is usually compiled
: > before it is executed.
: >
: > Harumph. :)

: This is not the first time this assumption about LISP has been made.
: I think many people make the mistaken assumption that LISP is
: interpreted because it's interactive and dynamically linked, and
: confuse the two concepts. They look at the "batch"
: (edit-compile-link-run) model of conventional languages like C or
: Fortran and identify that with compilation. After all, what language
: other than LISP is interactive and compiled? (Forth is the only one I
: can think of.)

: All the more reason that LISP is an essential component of a
: programming education.

: --
: Liam Healy
: Liam....@nrl.navy.mil

--
***************begin r.s. response********************

i guess a
'dynamic compiler'
for a language is a fancy
kind of interpreter...
whatever the name sounds like;
yes???

after all,
when a program actually has been
compiled and linked
successfully,
it runs from binary ...
NOT A SCRIPT!!!!!!!!!!!!!!!!!!!!!!!!!!
^^^^^^^^^^^^

(a program compiled and linked
properly is not, routinely,
recompiled at
runtime
...
a program requiring
anything like that
^^^^^^^^^^^^^^^^^^
^^^^^^^^^^^^^^^^^^
is interpreted!!!
)
***************end r.s. response**********************
Ralph Silverman
z007...@bcfreenet.seflin.lib.fl.us


Mark

Oct 25, 1996

Ralph Silverman <<z007...@bcfreenet.seflin.lib.fl.us>> wrote:
<CUT talk of LISP>

What about object code that is relocatable? Is that
interpreted? It sure ain't straight block load binary
to fixed address and jmp...

What about something like TAOS? Single object code loadable
onto multiple heterogeneous processors, with translation
taking place at load time? i.e. a single object translated
on demand for a specific processor? By your above logic,
interpreted.

What about Intel code running on an Alpha? Is that
interpreted? Or compiled? It was compiled with regard
to an Intel, but is now acting (to some extent) as
instructions for an interpreter.... by your above logic
an existence as both compiled and interpreted....
Hmmm

Andrew Gierth

Oct 25, 1996

WATCH THOSE NEWSGROUPS LINES!!!

This thread is now in:

comp.ai
comp.ai.genetic
comp.ai.neural-nets
comp.lang.lisp
comp.lang.c++
comp.os.msdos.programmer
comp.lang.asm.x86
comp.unix.programmer
comp.ai.philosophy

When following up, exclude as many of these as possible.

It seems Mr. Silverman is up to his tricks again.

--
Andrew Gierth (and...@microlise.co.uk)

"Ceterum censeo Microsoftam delendam esse" - Alain Knaff in nanam

Cyber Surfer

Oct 25, 1996

In article <54nr3t$d...@nntp.seflin.lib.fl.us>
z007...@bcfreenet.seflin.lib.fl.us "Ralph Silverman" writes:

> i guess a
> 'dynamic compiler'
> for a language is a fancy
> kind of interpreter...
> whatever the name sounds like;
> yes???

No. You might like to read 'Writing Interactive Compilers and
Interpreters'. Alternatively, take a look at any Forth 'interpreter',
and you'll find that there's a _compiler_. The 'interpreter'
is the address interpreter, but in some Forths, the object code
will be native code. Not that you'll often need to know this!
Forth does an excellent job of hiding such details, and revealing
them when you need them.

In that sense, Lisp is very similar. So are many Basic 'interpreters'.



> after all,
> when a program actually has been
> compiled and linked
> successfully,
> it runs from binary ...
> NOT A SCRIPT!!!!!!!!!!!!!!!!!!!!!!!!!!
> ^^^^^^^^^^^^

Not necessarily. Concepts like 'compiling' come from certain
language implementations, and their (batch) environments.

There are also C interpreters, but few people call C an
interpreted language. Some Pascal and C compilers are so fast
that you barely see them running, and there are incremental
compilers for Lisp and other languages, too. There are even
a few incremental C/C++ compilers!

'Linking' is another idea that's tied to 'batch' environments.
I find it sad that so-called 'visual' environments perpetuate
such archaic practices. They're so common that some people,
like yourself, mistake them for the _only_ way to compile.



> (a program compiled and linked
> properly is not, routinely,
> recompiled at
> runtime
> ...
> a program requiring
> anything like that
> ^^^^^^^^^^^^^^^^^^
> ^^^^^^^^^^^^^^^^^^
> is interpreted!!!
> )

This is only because of a great deal of stagnation among
compilers for languages like Pascal, C/C++, etc. In case
you've not noticed this, Java is changing this. JIT is just
another name for what PJ Brown called a "throw away compiler".
Tao Systems have used this technique in an _operating system_.

Wake up and smell the coffee (pun intended). Anyway, it'll
get much harder for you to resist pretty soon, coz you'll find
that software using such compiler tech will be available to
_you_, on your machine (whatever that is). If not, then your
opinions will be irrelevant, assuming that they aren't already,
as there's already too much interest to stop it from happening.

I don't like to use the 'inevitable' to describe technology,
but I might compare it to a steamroller. ;-) I've been using
this kind of compiler for years, so it's rather satisfying to
see 'mainstream' developers discovering it too. At last we can
move on! Hurrah! It'll be very hard to deny it works when so
many people are using it.

Please forgive me if I sound a little smug...
--
<URL:http://www.enrapture.com/cybes/> You can never browse enough
Future generations are relying on us
It's a world we've made - Incubus
We're living on a knife edge, looking for the ground -- Hawkwind


Cyber Surfer

Oct 25, 1996

In article <Dzt9M...@world.std.com> d...@world.std.com "Jeff DelPapa" writes:

> Hell, there are some machine codes that qualify under that metric. If
> the machine uses some sort of microcode, the "native" machine code
> counts as interpreted. (also there are a small number of C systems
> that have limited incremental compilation. (if nothing else the ones
> for lispm's could do this.) -- One of the lispm vendors even had a
> dynamically compiling FORTRAN)

Let's not forget the interpreter that is _very_ popular with C
programmers: printf. Common Lisp programmers have format, which
kicks printf's butt, but at a price. Not everyone wants to print
a number in Roman numerals, and those that do might be happy with
some library code. Still, dynamic linking can handle that...
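
For instance (standard Common Lisp; output shown in comments, and the
exact English wording of the ~R output can vary between implementations):

    ;; FORMAT's directives form a tiny interpreted "language" embedded
    ;; in a string, much like printf's % codes:
    (format nil "~D" 1996)    ; => "1996"  (decimal, like printf's %d)
    (format nil "~R" 1996)    ; => "one thousand nine hundred ninety-six"
    (format nil "~@R" 1996)   ; => "MCMXCVI"  (Roman numerals)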

It seems to me that Lisp programmers can be more ambitious, in
that they reach for higher goals. I've not yet had a chance to
compare MFC with Garnet, but I suspect that it might be an unfair
comparison. Which tool looks "better" may depend on what you wish
to do, and how well it helps you accomplish it.

My own experience is that when I use Lisp, I consider techniques
that I dismiss as "too expensive" when I use C++. The truth is that
it just looks like too much work in C++! While you look at small
programming examples, it may seem that Lisp's advantages are small.
When you multiply the size of the code by several orders of magnitude,
Lisp begins to make a very significant difference, not only to the
amount of time to develop something, but in many other ways, too.
Unfortunately, if you've never experienced this yourself, you might
find it hard to believe.

I'm still impressed by it today. I keep thinking, "This shouldn't
be this easy, or should it?" Well, yes it should! It's the pain of
using C++ that fools me into thinking that some things are just
inherently difficult. If I think too much about how I'd do something
in Lisp, when that option isn't available to me (yes, it is possible
for that to happen), I get seriously frustrated.

Imagine that the only forms of control flow available in C were
'if' and 'goto', or if you didn't have 'struct' and 'typedef'.
That isn't even close to how ANSI C or C++ look to me, compared
to Lisp. Of course, I might be just as happy with ML, as Lisp's
syntax and relaxed attitude to type declarations isn't really
what attracts me to the language. I just discovered Lisp earlier.

Jeff Dalton

Oct 26, 1996

>: > All commercial Common Lisps that I'm aware of are compiled by default.
>: > Even if you type in a "tiny specialized script" it is usually compiled
>: > before it is executed.

Humm. I've used a fair number of Common Lisps (mostly commercial
ones) that don't work that way. They have an interpreter as well
as a compiler, and interpretation is the default. However, most
Common Lisp implementations have a compiler that compiles to native
code and that can compile source files to object files -- and that
should be compiler-like enough to satisfy almost everyone. ^_^
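
For instance, the file-compiler route looks like this in portable Common
Lisp (the object-file extension is implementation-dependent, and
"foo.lisp" here is just a made-up example):

    ;; compile a source file to an object ("fasl") file, then load it
    (compile-file "foo.lisp")   ; writes e.g. foo.fasl -- extension varies
    (load "foo")                ; loads the compiled file (or the source,
                                ; depending on the implementation's defaults)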

In any case, Lisp is not an interpreted or compiled _language_.
_Implementations_ of Lisp might be interpreters or compilers or
some combination.

-- jd

Richard A. O'Keefe

Oct 29, 1996

>In article <54nr3t$d...@nntp.seflin.lib.fl.us>,

>Ralph Silverman <z007...@bcfreenet.seflin.lib.fl.us> wrote:
>> after all,
>> when a program actually has been
>> compiled and linked
>> successfully,
>> it runs from binary ...
>> NOT A SCRIPT!!!!!!!!!!!!!!!!!!!!!!!!!!
>> ^^^^^^^^^^^^

This view is so far out of date that it wasn't even true in the SIXTIES.

I'll mention two examples from the 70s. Both of them apply to the B6700,
a machine which was so compiler-oriented that

- it didn't have an assembler. Period. NONE. There were about 20
instructions in the (very capable for its time) operating system that
weren't generated by the ESPOL compiler, and they were hand translated
binary stuck in an array.

- the average Joe program couldn't be a compiler either; object files
had an "is-a-compiler" bit which could only be set by an operator
in the control room "blessing" the program.

Example 1. Student Fortran compiler.
The Auckland University "STUFOR" compiler was an Algol program that
read student Fortran programs, generated *native* code for them,
and called that native code.

(The original student Pascal compiler for the CDC machines did
much the same thing. Pascal, remember, is a 60s language. Wirth
reported that compiling to native code directly and jumping to it
was significantly faster than using the operating system's native
linker.)

Example 2. The REDUCE symbolic algebra system.
REDUCE is written in a Lisp dialect called PSL.
When typing at the PSL system, you could ask it to compile
a file. It did this by
reading the file,
generating _Algol_ source code,
calling the Algol compiler to generate native code.
Since the B6700 operating system let code in one object file
*dynamically* call code from another object file, problem solved.

Now just think about things like
- VCODE (a package for dynamically generating native code; you specify
the code in a sort of abstract RISC and native code is generated for
SPARC, MIPS, or something else I've forgotten -- I have VCODE and a
SPARC but not the other machines it supports).
- dynamic linking, present in UNIX System V Release 4, Win32, OS/2, VMS,
...

Run-time native code generation has been used in a lot of interactive
programming languages including BASIC (for which "throw-away compiling"
was invented), Lisp, Pop, Smalltalk, Self, Prolog, ML, Oberon, ...
The original SNOBOL implementation compiled to threaded code, but SPITBOL
compiled to native code dynamically.

The B6700 is in fact the only computer I've ever had my hands on where
dynamic code generation was difficult. (Ok, so flushing the relevant
part of the I-cache is a nuisance, but it is _only_ a nuisance.)

Oh yes, the punchline is this: on the B6700, "scripts" in their job
control language (called the WorkFlow Language, WFL) were in fact
*compiled*, so just because something is a script, doesn't mean it
can't be or isn't compiled.

--
Mixed Member Proportional---a *great* way to vote!
Richard A. O'Keefe; http://www.cs.rmit.edu.au/%7Eok; RMIT Comp.Sci.

Chris

Oct 30, 1996


Richard A. O'Keefe <o...@goanna.cs.rmit.edu.au> wrote in the article
<554cdn$ncb$1...@goanna.cs.rmit.edu.au>...


> >In article <54nr3t$d...@nntp.seflin.lib.fl.us>,
> >Ralph Silverman <z007...@bcfreenet.seflin.lib.fl.us> wrote:
> >> after all,
> >> when a program actually has been
> >> compiled and linked
> >> successfully,
> >> it runs from binary ...
> >> NOT A SCRIPT!!!!!!!!!!!!!!!!!!!!!!!!!!
> >> ^^^^^^^^^^^^
>

I believe compiling means to translate from one language to another, not
necessarily into native binary code.

Chris

Bull Horse

Nov 2, 1996

PASCAL is a 70's language. Developed in the early 70's I think 73...

Bull Horse

Nov 3, 1996

Chris wrote:
> Richard A. O'Keefe <o...@goanna.cs.rmit.edu.au> wrote in the article
> <554cdn$ncb$1...@goanna.cs.rmit.edu.au>...
> > >In article <54nr3t$d...@nntp.seflin.lib.fl.us>,
> > >Ralph Silverman <z007...@bcfreenet.seflin.lib.fl.us> wrote:
> > >> after all,
> > >> when a program actually has been
> > >> compiled and linked
> > >> successfully,
> > >> it runs from binary ...
> > >> NOT A SCRIPT!!!!!!!!!!!!!!!!!!!!!!!!!!
> > >> ^^^^^^^^^^^^
> >
> I believe compiling means to translate from one language to another, not
> necessarily into native binary code.
>
> Chris

Uh I think that Compiling means to translate anything into machine code.
Interpreted means to identify a string value with an according machine
code instruction.

scha...@wat.hookup.net

Nov 4, 1996

In <327B42...@earthlink.net>, Bull Horse <s...@earthlink.net> writes:
>PASCAL is a 70's language. Developed in the early 70's I think 73...

Definitely much earlier. It must have been 68 or 69

Hartmann Schhaffer


Chris

Nov 4, 1996

>Uh I think that Compiling means to translate anything into machine code.
>Interpreted means to identify a string value with an according machine
>code instruction.
----------
I don't think so. The compiler usually translates to assembly, then the
assembler translates to machine code. Sometimes the assembler is built in,
sometimes it may translate in one pass.

What about the Java compiler ? Is the code in machine language ? Isn't it a
compiler ?

Chris

Cyber Surfer

Nov 4, 1996

In article <327D00...@earthlink.net> s...@earthlink.net "Bull Horse" writes:

> Uh I think that Compiling means to translate anything into machine code.
> Interpreted means to identify a string value with an according machine
> code instruction.

I'd use a more general definition, and say that "compiling" is
when you translate from one form into another. It might not even
be a one-way translation. I recall reading that QuickBasic code
exists in one of 3 different forms, depending on the state of the
"interpreter" (running, editing, and a 3rd state).

If we want to, we can imagine that a typical compiler will translate
one string (e.g. source code) into another (e.g. machine code). Not
all compilers use strings, however. Compiler theory is just a special
case for strings! We can even represent trees and other data structures
as strings. The reverse is also possible.

Isn't this fun?

Mike McDonald

Nov 5, 1996

In article <01bbca97$15522cc0$LocalHost@gaijin>,

"Chris" <gai...@infonie.fr> writes:
>>Uh I think that Compiling means to translate anything into machine code.
>>Interpreted means to identify a string value with an according machine
>>code instruction.
> ----------
> I don't think so. The compiler usually translates to assembly, then the
> assembler translates to machine code. Sometimes the assembler is built in,
> sometimes it may translate in one pass.
>
> What about the Java compiler ? Is the code in machine language ? Isn't it a
> compiler ?
>
> Chris

Trying to distinguish between "compiled" and "interpreted" seems
like a complete waste of time, to me anyway. After all, everything is
interpreted eventually anyway. That's what a CPU is, after all. Just
another interpreter.

Mike McDonald
mik...@engr.sgi.com

Mark

Nov 5, 1996

Chris wrote:
>
> >Uh I think that Compiling means to translate anything into machine code.
> >Interpreted means to identify a string value with an according machine
> >code instruction.
> ----------
> I don't think so. The compiler usually translates to assembly, then
> the assembler translates to machine code. Sometimes the assembler is
> built in, sometimes it may translate in one pass.
> What about the Java compiler ? Is the code in machine language ?
> Isn't it a compiler ?

The compiler doesn't necessarily translate to assembly; the ones
that do are multipass compilers. Single pass compilers translate
source code directly to object code.

Java compiles to a bytecode; this is either interpreted by a program
acting as a virtual processor, or is the instruction set of a real
processor. In both senses the bytecode has been compiled. In only
one sense is the code strictly in machine language. The Java virtual
machine is an interpreter; it is executing idealised machine code.

Norman L. DeForest

Nov 5, 1996

[ newsgroups trimmed ]

Chris (gai...@infonie.fr) wrote:
: >Uh I think that Compiling means to translate anything into machine code.
: >Interpreted means to identify a string value with an according machine
: >code instruction.
: ----------
: I don't think so. The compiler usually translates to assembly, then the
: assembler translates to machine code. Sometimes the assembler is built in,
: sometimes it may translate in one pass.

: What about the Java compiler ? Is the code in machine language ? Isn't it a
: compiler ?

: Chris

You can compile to machine code, you can compile to a series of links
in a linked list (some implementations of Forth), or you can compile to
an intermediate pseudo-code that is interpreted at run time (GW-BASIC or
at least one version of Pascal (UCSD Pascal if I'm not wrong)).

Norman De Forest
af...@chebucto.ns.ca
http://www.chebucto.ns.ca/~af380/Profile.html

.........................................................................
Q. Which is the greater problem in the world today, ignorance or apathy?
A. I don't know and I couldn't care less.
.........................................................................
For those robots that gather e-mail addresses from postings and sig. blocks:
Junk e-mail received so far this month from:
west...@leonardo.net and inv...@onlinenow.net and act...@icanect.net and
pat...@netsoft.ie and j...@mymail.com and in...@uar.com

--

Cyber Surfer

Nov 5, 1996

In article <55m3kt$1...@fido.asd.sgi.com>
mik...@engr.sgi.com "Mike McDonald" writes:

> Trying to distinguish between "compiled" and "interpreted" seems
> like a complete waste of time, to me anyway. After all, everything is
> interpreted eventually anyway. That's what a CPU is, after all. Just
> another interpreter.

Books have been written about this. My favourite is "Writing
Interactive Compilers and Interpreters", P.J. Brown, ISBN 0
471 27609 X, ISBN 0471 100722 pbk. John Wiley & Sons Ltd,
but it isn't unique. For example, there's "Structure and
Interpretation of Computer Programs", Second Edition, by Harold
Abelson and Gerald Jay Sussman with Julie Sussman.

Dan Mercer

unread,
Nov 6, 1996, 8:00:00 AM11/6/96
to

Bull Horse (s...@earthlink.net) wrote:
: Chris wrote:
: > Richard A. O'Keefe <o...@goanna.cs.rmit.edu.au> wrote in the article
: > <554cdn$ncb$1...@goanna.cs.rmit.edu.au>...
: > > >In article <54nr3t$d...@nntp.seflin.lib.fl.us>,
: > > >Ralph Silverman <z007...@bcfreenet.seflin.lib.fl.us> wrote:
: > > >> after all,
: > > >> when a program actually has been
: > > >> compiled and linked
: > > >> successfully,
: > > >> it runs from binary ...
: > > >> NOT A SCRIPT!!!!!!!!!!!!!!!!!!!!!!!!!!
: > > >> ^^^^^^^^^^^^
: > >
: > I believe compiling means to translate from one language to another, not
: > necessarily into native binary code.
: >
: > Chris

: Uh I think that Compiling means to translate anything into machine code.
: Interpreted means to identify a string value with an according machine
: code instruction.

Then what does YACC stand for? (Yet Another Compiler-Compiler.) Lex, yacc,
and cfront all compile their input into C source code, which must be further
compiled into machine code.

--
Dan Mercer
Reply To: dame...@mmm.com

Mukesh Prasad

unread,
Nov 6, 1996, 8:00:00 AM11/6/96
to

"Compiled" vs "Interpreted" are merely words -- if
any programming language which can be compiled into a byte-code
is to be called a "compiled" language, we should drop altogether
the concept of an "interpreted" language, since for
any given programming language, a byte-code and a byte-code
compiler can be found.

But if one has to have this distinction, Lisp should
fall into the "interpreted" category, since the
"compiled" byte-code is interpreted by sofware, not
the hardware. I don't know about the Lisp
machines though, do they (or did they) have hardware
instructions corresponding one-to-one with read/eval/print?

Rainer Joswig

Nov 7, 1996

In article <3280FE...@dma.isg.mot.com>, Mukesh Prasad
<mpr...@dma.isg.mot.com> wrote:

> "Compiled" vs "Interpreted" are merely words -- if
> any programming language which can be compiled into a byte-code
> is to be called a "compiled" language, we should drop altogether
> the concept of an "interpreted" language, since for
> any given programming language, a byte-code and a byte-code
> compiler can be found.

Ahh, you guys still don't get it.


A compiler-based system ->

Language A compiles to Language B.
How language B runs (native, as byte code, ...) is not
of any importance. What is important is that Language A gets
compiled into a target language (B).


An interpreter-based system ->

Language A is read by an interpreter.
This results in some internal representation
for programs of the language A.
This internal representation is then interpreted.
The internal representation is a direct mapping
from the original source.


To give you an example:

A Lisp compiler takes expressions in the Lisp
language and compiles them, for example, to PowerPC
machine code. The PowerPC processor then executes
this machine code. This is a compiler-based system
(like Macintosh Common Lisp).

A Lisp compiler takes expressions in the Lisp
language and compiles them to Ivory (!!)
instructions. The Ivory code is then
interpreted by a virtual machine running
on a DEC Alpha processor.
This is still a compiler-based system
(like Open Genera from Symbolics).

A Lisp interpreter takes expressions in
the Lisp language, interns them and
executes this interned representation
of the program. Still, every statement
has to be examined (is it a macro,
is it a function call, is it a constant, ...).
Still, macro expansion
will happen for every loop cycle over and
over. You can get the original expression back
and change it, because there is a one-to-one
relation between the original source
and the interned representation.
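
A small Common Lisp sketch of that last point (the macro name is made up,
and whether a purely interpreted implementation really re-expands the
macro on every call, or caches the expansion, is implementation-dependent):

    ;; a macro that announces each time it is expanded
    (defmacro noisy-add1 (x)
      (format t "~&expanding NOISY-ADD1~%")
      `(+ 1 ,x))

    (defun bump (n) (noisy-add1 n))

    ;; In a pure interpreter, calling BUMP may print the message again
    ;; and again, because the interned source is re-examined each time.
    ;; After (compile 'bump) the expansion has happened exactly once,
    ;; at compile time, and further calls print nothing.
    (bump 41)
    (compile 'bump)
    (bump 41)                   ; => 42, no expansion message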


> But if one has to have this distinction, Lisp should
> fall into the "interpreted" category, since the
> "compiled" byte-code is interpreted by sofware, not
> the hardware.

Ahh, no.

There are also tons of compilers that generate optimized
machine code for CISC and RISC architectures.

>I don't know about the Lisp
> machines though, do they (or did they) have hardware
> instructions corresponding one-to-one with read/eval/print?

Sure not.

Read a basic computer science book, where they explain the
difference (get: Structure and Interpretation of
Computer Programs, by Abelson & Sussman, MIT Press).


Rainer Joswig

Patrick Juola

unread,
Nov 7, 1996, 8:00:00 AM11/7/96
to

In article <3280FE...@dma.isg.mot.com> Mukesh Prasad <mpr...@dma.isg.mot.com> writes:
>"Compiled" vs "Interpreted" are merely words -- if
>any programming language which can be compiled into a byte-code
>is to be called a "compiled" language, we should drop altogether
>the concept of an "interpreted" language, since for
>any given programming language, a byte-code and a byte-code
>compiler can be found.
>
>But if one has to have this distinction, Lisp should
>fall into the "interpreted" category, since the
>"compiled" byte-code is interpreted by sofware, not
>the hardware. I don't know about the Lisp

>machines though, do they (or did they) have hardware
>instructions corresponding one-to-one with read/eval/print?

You're palming a card. C, a compiled language by anyone's reckoning,
has no (single) hardware instruction corresponding to printf. Or even
necessarily to ||, because of the complex semantics of evaluation.

As a matter of fact, many of us consider a language that *does*
correspond 1-1 with a set of hardware instructions to be an
assembly language.... and the advantage of "compiled" languages
is that they let one get *away* from this level of detail.

Patrick

Ken Bibb

Nov 7, 1996

>"Compiled" vs "Interpreted" are merely words -- if
>any programming language which can be compiled into a byte-code
>is to be called a "compiled" language, we should drop altogether
>the concept of an "interpreted" language, since for
>any given programming language, a byte-code and a byte-code
>compiler can be found.

>But if one has to have this distinction, Lisp should
>fall into the "interpreted" category, since the
>"compiled" byte-code is interpreted by sofware, not
>the hardware.

This is not necessarily the case. Most modern lisps allow you to create
binary executables.

--
Ken Bibb "If the boundary breaks I'm no longer alone
kb...@arastar.com Don't discourage me
kb...@best.com Bring out the stars/On the first day"
kb...@csd.sgi.com David Sylvian--"The First Day"

Seth Tisue

Nov 7, 1996

In article <3280FE...@dma.isg.mot.com>,
Mukesh Prasad <mpr...@dma.isg.mot.com> wrote:
>But if one has to have this distinction, Lisp should
>fall into the "interpreted" category, since the
>"compiled" byte-code is interpreted by sofware, not
>the hardware.

There are many Lisp compilers which compile to native code, not byte
code. (It seems that many have been pointing this out on this thread
for some time, to no avail...)
--
== Seth Tisue <s-t...@nwu.edu> http://www.cs.nwu.edu/~tisue/

Jim Balter

Nov 7, 1996

In article <55t27r$d...@godzilla.cs.nwu.edu>,

Seth Tisue <ti...@cs.nwu.edu> wrote:
>In article <3280FE...@dma.isg.mot.com>,
>Mukesh Prasad <mpr...@dma.isg.mot.com> wrote:
>>But if one has to have this distinction, Lisp should
>>fall into the "interpreted" category, since the
>>"compiled" byte-code is interpreted by sofware, not
>>the hardware.
>
>There are many Lisp compilers which compile to native code, not byte
>code. (It seems that many have been pointing this out on this thread
>for some time, to no avail...)

Ignorance memes are highly resistant and mutate readily.
--
<J Q B>


Dave Seaman

Nov 7, 1996

In article <55t27r$d...@Godzilla.cs.nwu.edu>,

Seth Tisue <ti...@cs.nwu.edu> wrote:
>In article <3280FE...@dma.isg.mot.com>,
>Mukesh Prasad <mpr...@dma.isg.mot.com> wrote:
>>But if one has to have this distinction, Lisp should
>>fall into the "interpreted" category, since the
>>"compiled" byte-code is interpreted by sofware, not
>>the hardware.
>
>There are many Lisp compilers which compile to native code, not byte
>code. (It seems that many have been pointing this out on this thread
>for some time, to no avail...)

Not only that, many modern lisps use incremental compilation. A good
test is to use DEFUN to define a function and then immediately use
SYMBOL-FUNCTION to look at the definition of that function. If you are
using an interpreted lisp, you will see the lambda expression that you
specified with DEFUN, but if your lisp has incremental compilation
(such as Macintosh Common Lisp) you will instead see only that the
value is a compiled function. On systems that are not incrementally
compiled, you can probably still use (COMPILE 'FOO) in order to compile
an individual function, which may be confirmed by again using
(SYMBOL-FUNCTION 'FOO) to view the function definition.
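
A sketch of that test at the listener (the printed representation of a
compiled function varies from one implementation to another):

    (defun foo (x) (* x x))
    (symbol-function 'foo)
    ;; interpreted Lisp:         a lambda expression, e.g. (LAMBDA (X) (* X X))
    ;; incrementally compiling:  something like #<Compiled-function FOO ...>

    (compile 'foo)                                ; explicit compilation
    (compiled-function-p (symbol-function 'foo))  ; => T once FOO is compiled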

If you want more proof that some lisps are truly compiled, consider the
fact that Macintosh Common Lisp has separate versions for the PowerPC
and for 68K Macs. The 68K version will run on either type of machine
(using the system's 68K interpreter when running on the PowerPC), but
the PowerPC version will not run on a 68K machine at all (because it's
native PowerPC code and it won't run on 68K Macs). The same goes for
compiled files and saved applications that you can produce with the two
versions.

If you compile a lisp file (foo.lisp) with MCL 3.0 (the 68K version),
you get a fast-loading file (foo.fasl) that contains native 68K code.
If you compile the same file using MCL 3.9PPC, you get a file foo.pfsl
that contains native PowerPC code and is not usable on a 68K Mac. If
you use SAVE-APPLICATION to produce a double-clickable application
program, the resulting program will not run on a 68K Mac if it was
produced by the PowerPC version of MCL.

If programs were being "compiled" to byte-code, there obviously would
be no need for all this -- just re-implement the byte-code interpreter
on the PowerPC.

--
Dave Seaman dse...@purdue.edu
++++ stop the execution of Mumia Abu-Jamal ++++
++++ if you agree copy these lines to your sig ++++
++++ see http://www.xs4all.nl/~tank/spg-l/sigaction.htm ++++

Richard A. O'Keefe

Nov 8, 1996

Mukesh Prasad <mpr...@dma.isg.mot.com> writes:

>"Compiled" vs "Interpreted" are merely words

So? Are you claiming Humpty Dumpty's privilege?

There are important issues concerning binding time.

>But if one has to have this distinction, Lisp should
>fall into the "interpreted" category, since the
>"compiled" byte-code is interpreted by sofware, not
>the hardware.

This is complete and utter bull-dust.
The RUCI Lisp system I used on a DEC-10 in the early 80s compiled
Lisp to *native* DEC-10 instructions.
The Franz Lisp system I used on a VAX-11/780 around the same time
compiled Lisp to *native* VAX instructions.
The Gambit Scheme system I use on a 680x0 Macintosh compiles
Scheme (a dialect of Lisp) to *native* 680x0 instructions.
The Lisp system I use on a SPARC compiles Common Lisp
to *native* SPARC instructions.
Even the PSL system I used on a B6700 back in the 70s compiled
PSL (a dialect of Lisp) to *native* B6700 instructions.
The T system I used to use on our Encore Multimaxes compiled
T and Scheme (dialects of Lisp) to *native* NS32k instructions.

There are or have been *native-code* Lisp compilers for all the
major machines, from Univac 1108s, IBM 360s, all the way up to
Connection Machines and beyond.

>I don't know about the Lisp
>machines though, do they (or did they) have hardware
>instructions corresponding one-to-one with read/eval/print?

Which Lisp machines do you mean? CONS? CADR? LMI? Symbolics?
Xerox (1108, 1109, 1185, ...)? The Japanese ones? The European ones?

The plain fact of the matter is that Lisp
- *CAN* be interpreted by fairly simple interpreters
- *CAN* be compiled to very efficient native code on any reasonable
modern machine

[If you can compile a language to byte codes, you can compile it to
native code by treating the byte codes as "macros", and running an
optimiser over the result. This has actually been used as a route
for developing a native code compiler.]
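
A toy Common Lisp sketch of the flavour of that idea (the "byte codes" and
names here are invented; a real byte-code set and optimiser would look
rather different):

    ;; treat each "byte code" as a macro that expands into ordinary Lisp,
    ;; then hand the result to the native compiler
    (defvar *stack* '())
    (defmacro push-const (x) `(push ,x *stack*))
    (defmacro add2 () `(push (+ (pop *stack*) (pop *stack*)) *stack*))

    (defun bytecode-add (a b)
      (push-const a) (push-const b) (add2) (pop *stack*))

    (compile 'bytecode-add)     ; native code, via macro expansion
    (bytecode-add 2 3)          ; => 5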

Mukesh Prasad

Nov 8, 1996

Jim Balter wrote:
> In article <55t27r$d...@godzilla.cs.nwu.edu>,
> Seth Tisue <ti...@cs.nwu.edu> wrote:
[...]

> >There are many Lisp compilers which compile to native code, not byte
> >code. (It seems that many have been pointing this out on this thread
> >for some time, to no avail...)

Ah, "compiling to native code" brings up a different issue,
that of whether or not you want to allow eval in the
language. If you do, there is some sleight of hand
involved (like hiding an interpreter in your "compiled"
executable.) If you don't, is it Lisp? Why not
make a new language with just the features you
like and _can_ compile well?

The point was, if you claim Lisp is "Compiled" and not
"Interpreted", show me an "Interpreted" language, and
I will use your arguments to show why it should
be called "compiled" (because compilers can be written
to compile selected parts of the language to
native code.) So what is the point of such
a distinction?

> Ignorance memes are highly resistant and mutate readily.

How true!

David Longley

Nov 9, 1996

Interesting....

ENGLISH is a 90s language, and an 80s language, and a ....... and
ANSI has never had a say in it.

I have long wondered why folk in Natural Language processing
research bother...Have they ever read "Word and Object"? Do they
realise the implications of it being an anarchic, evolving, yet
ANSI free system?
--
David Longley


Cyber Surfer

Nov 9, 1996

In article <joswig-ya0231800...@news.lavielle.com>
jos...@lavielle.com "Rainer Joswig" writes:

> Read a basic computer science book, where they explain the
> difference (get: Structure and Interpretation of
> Computer Programs, by Abelson & Sussman, MIT Press).

Or an even more basic book, like PJ Brown's "Writing Interactive
Compilers and Interpreters". Brown also covers reconstructing
the source code from the internal representation. In his book,
he uses the Basic language for his examples, but the _same_
techniques can apply to Lisp and any other language. In the
case of Lisp, most implementations will actually store the
internal representation, instead of throwing it away, as most
"compilers" do.

Brown uses the word "compiler" to mean either interpreter or
compiler, because few - if any - implementations are "true"
compilers or interpreters, as that would be impractical. For
example, the C language uses a runtime - violating the "true
compiler" defintion - and functions like printf very small
interpreters. There are even C interpreters available, so
is C compiled or interpreted? The correct answer is that it
can be either, and all C implementations include a mixture
of both compiling (at compile time) _and_ interpreting (at
runtime and also compile time).

Compiler theory is a weird and wonderful area of computer
science, but some people can explain it in relatively clear
and simple terms. Brown is one of those people. If you can
cope with that book, then you _might_ be ready to take on a
book like "Structure and Interpretation of Computer Programs".
Brown's book is in fact a book about writing any interactive
software, not just compilers. SICP is a much more demanding
book, and as the intro explains, the material can form the
basis of several programming courses! A fine book, but if
you're likely to get into MIT, then it's perhaps not the best
place to start.

Apart from that, I fully agree with what you say about compilers
and interpreters. I like to think of them both as specialised
examples of string and symbol crunching: turn a string into a
tree, and then turn the tree back into a string, only this time
in a different language. A simple example would be date parsing,
where you parse a date in one format and then print it in another.
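
A tiny Common Lisp sketch of that string -> tree -> string view, using
the date example (the function names and formats are made up):

    ;; parse an ISO-style date string into a small "tree" (a list),
    ;; then print the same data back out in another format
    (defun parse-iso-date (string)          ; "1996-11-09" -> (1996 11 9)
      (list (parse-integer string :start 0 :end 4)
            (parse-integer string :start 5 :end 7)
            (parse-integer string :start 8 :end 10)))

    (defun print-us-date (date)             ; (1996 11 9) -> "11/9/1996"
      (format nil "~D/~D/~D" (second date) (third date) (first date)))

    (print-us-date (parse-iso-date "1996-11-09"))   ; => "11/9/1996"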

Bruce Tobin

Nov 9, 1996 (to Mukesh Prasad)

Mukesh Prasad wrote:
>
> Ah, "compiling to native code" brings up a different issue,
> that of whether or not you want to allow eval in the
> language. If you do, there is some sleight of hand
> involved (like hiding an interpreter in your "compiled"
> executable.) If you don't, is it Lisp?

Of course it's Lisp. Most modern Lisp texts strongly discourage the use
of eval for just this reason. I've never written a line of code with
eval in it.

> Why not
> make a new language with just the features you
> like and _can_ compile well?
>
> The point was, if you claim Lisp is "Compiled" and not
> "Interpreted", show me an "Interpreted" language, and
> I will use your arguments to show why it should
> be called "compiled" (because compilers can be written
> to compile selected parts of the language to
> native code.) So what is the point of such
> a distinction?
>

A language can be called "interpreted", in a non-technical but widely
used sense, when the implementations most commonly used for production
applications fail to compile to native code. The Basic dialect used in
Visual Basic, for example, fits this criterion (though it may not for
much longer); Lisp doesn't.

Cyber Surfer

Nov 9, 1996

My ISP is currently experiencing news server difficulties.
Apologies if you see this twice...


Erik Naggum

Nov 10, 1996

* Mukesh Prasad

| Ah, "compiling to native code" brings up a different issue, that of
| whether or not you want to allow eval in the language. If you do, there
| is some sleight of hand involved (like hiding an interpreter in your
| "compiled" executable.) If you don't, is it Lisp? Why not make a new
| language with just the features you like and _can_ compile well?

as has already been mentioned here at the myth-debunking central, many Lisp
systems "interpret" code by compiling it first, then executing the compiled
code. however, the amazing thing is that Lisp systems that don't have
`eval' (Scheme), force the programmers who need to evaluate expressions at
run-time to write their own `eval' look-a-likes. so if you allow lists,
your compiler's input language is lists, and you want to do some hard
things, and if your language does not include `eval', some smart programmer
will have to implement it by himself, and he will in all likelihood not be
smart enough to do _all_ the stuff right that would have been done right in
languages that _did_ include `eval'.

the same goes for many other language features. ever seen a C programmer
try to do dynamic types? he's likely to use a full machine word to keep
the type tag at the beginning of a struct that includes a union of the
types he needs, and will always have to dereference the pointer to find the
type! (and Lisp programmers fret about whether floating point numbers are
"boxed" or not. that's a luxury problem for the general case, where Lisp
is typically _much_ more efficient than even good C programmers would be.)
or a C programmer who wants to build a general list concept? you can bet
your favorite language construct that he will use a lot of `next' pointers
_inside_ each of the structures in the restricted set that can be elements
of his list. this is how textbooks treat the topic, too. messing with
pointers, utilizing the fact that the last three bits of a pointer on a
byte machine will always be zero if all your allocated objects are aligned
to 64 bits, such as a Lisp system can safely do, makes code non-portable,
hard to maintain, etc, and is therefore avoided in all but high-end systems
with big budgets for maintenance (such as Lisp systems).

`eval' is indispensable if you implement your own language. most of the
programs I use actually talk to their users or read configuration files and
so on, which do evaluate expressions read at run-time. one way or the
other, a program that interacts with its users or other programs, may find
itself explicitly or implicitly sporting an `eval' function, albeit for a
very different language than it was itself written in, but that's _also_ a
liability for those programming languages that inherently cannot have
`eval'. e.g., in the C/Unix world, that's what `lex' and `yacc' are for.
in the Lisp world, we already have `read' and `eval'. think about it.
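
a minimal sketch of that last point in Common Lisp (the function name and
the sample configuration forms are made up, and a careful program would
constrain what it is willing to evaluate):

    ;; read successive forms from a configuration file and evaluate them;
    ;; `read' and `eval' are all that is needed -- no parser generator.
    (defun load-config (pathname)
      (with-open-file (in pathname)
        (loop for form = (read in nil 'eof)
              until (eq form 'eof)
              do (eval form))))

    ;; a configuration file might then contain forms such as
    ;;   (setf *mail-host* "mail.example.com")
    ;;   (setf *retries* 3)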

#\Erik
--
Please address private replies to "erik". Mail to "nobody" is discarded.

Kosta Kostis

Nov 10, 1996

Bull Horse wrote:
> PASCAL is a 70's language. Developed in the early 70's I think 73...

You're right. The preface of the "PASCAL User Manual and Report" states:

A preliminary version of the programming language Pascal was
drafted in 1968. It followed in its spirit the Algol-60 and
Algol-W line of languages. After an extensive development phase,
a first compiler became operational in 1970, and publication
followed a year later (see References 1 and 8, p.104). The
growing interest in the development of compilers for other
computers called for a consolidation of Pascal, and two years of
experience in the use of the language dictated a few revisions.
This led in 1973 to the publication of a Revised Report and a
definition of a language representation in terms of the ISO
characters set.

Yes, Niklaus used a typewriter (fixed-space) font... ;)

--
kos...@acm.org, ko...@live.robin.de, ko...@blues.sub.de
Kosta Kostis, Talstr. 25, D-63322 Rödermark, Germany
http://ourworld.compuserve.com/homepages/kosta/

Chris

Nov 10, 1996


> A language can be called "interpreted", in a non-technical but widely
> used sense, when the implementations most commonly used for production
> applications fail to compile to native code. The Basic dialect used in
> Visual Basic, for example, fits this criterion (though it may not for
> much longer); Lisp doesn't.

A language could be called "compiled" when the code that runs is different
from the source code, and "interpreted" when the source itself is translated
at each execution.

As for native code, we then call it a native compiler.


J. Christian Blanchette

Nov 10, 1996

>
> What about the Java compiler ? Is the code in machine language ? Isn't it a
> compiler ?

Java source code is plain Unicode text (.java); it is compiled into .class
binaries compatible with the "Java Virtual Machine". The JVM is usually
implemented as an interpreter, although a program could convert Java bytecodes
into native machine code.

Jas.

"I take pride as the king of illiterature."
- K.C.


Mukesh Prasad

Nov 11, 1996

Erik Naggum wrote:
>
> as has already been mentioned here at the myth-debunking central, many Lisp
> systems "interpret" code by compiling it first, then executing the compiled
> code.

So that's different from the "sleigh of hand" I mentioned? Are you
a one-person myth-debunking central, or a myth-creation one?

> however, the amazing thing is that Lisp systems that don't have
> `eval' (Scheme), force the programmers who need to evaluate expressions at

I was expecting people to cite Scheme, T, Nil et al,
but it didn't happen in hordes (though there were very
interesting arguments on the order of "I never use Eval,
therefore Lisp is not interpreted" and "Until more than
N Basic compilers exist, Basic will be an interpreted
language and Lisp will be a compiled language...")

One interesting thing is, I have never seen C mentioned
as a variant of BCPL, and I have seldom seen Pascal
referred to as a kind of Algol. And nobody calls
C++ "a kind of C" anymore. Yet Scheme is even now
a "Lisp system"!

Perhaps instead of drawing fine lines in the sand about
distinctions between interpreted and compiled, and
trying to make being "compiled" the holy grail of Lisp
systems, the Lisp community should have instead tried
to see how well Lisp does as an Internet language!
Nobody cares if Java is an interpreted language, as long as
it does what they need done.

Or on second thoughts, perhaps Lisp could become a Smalltalk
like language -- a source of several ideas, instead
of something in a limbo with always having a small
but vocal minority needing to defend it by claiming
it is not interpreted and such.

Lou Steinberg

Nov 11, 1996

In article <55uech$p61$1...@goanna.cs.rmit.edu.au> o...@goanna.cs.rmit.edu.au (Richard A. O'Keefe) writes:

Mukesh Prasad <mpr...@dma.isg.mot.com> writes:

>"Compiled" vs "Interpreted" are merely words

>But if one has to have this distinction, Lisp should
>fall into the "interpreted" category, since the
>"compiled" byte-code is interpreted by sofware, not
>the hardware.

This is complete and utter bull-dust.

... and O'Keefe goes on to list a myriad Lisp systems that compiled to
native code. But Prasad's comment is "bull-dust" for a more basic
reason: By his definition EVERY language
(A) run on a microcoded machine or
(B) compiled for a 68K Mac but run on a PPC Mac
is Interpreted because
(A) on a microcoded machine, what we call "machine code" _is_ just
a byte code that is interpreted by an interpreter written in the 'real'
machine language of this machine, microcode.
(B) on a PPC Mac, 68k machine code is treated as a byte code and
executed by an interpreter.

So the same _binary object code_ can be actual machine code or a byte
code, depending on what machine you run it on. So the notion of a
_language_ being "interpreted" or "compiled" makes no sense. A
particular _implementation_ on a particular _computer_ down to a
particular _level of abstraction_ (e.g., 'down to 68K machine code')
can be "interpreted" or "compiled", but not a language.


William Paul Vrotney

Nov 12, 1996

In article <328738...@dma.isg.mot.com> Mukesh Prasad
<mpr...@dma.isg.mot.com> writes:
>
> Erik Naggum wrote:
> >
> > as has already been mentioned here at the myth-debunking central, many Lisp
> > systems "interpret" code by compiling it first, then executing the compiled
> > code.
>
> So that's different from the "sleigh of hand" I mentioned? Are you
> a one-person myth-debunking central, or a myth-creation one?
>
>

> Perhaps instead of drawing fine lines in the sand about
> distinctions between interpreted and compiled, and
> trying to make being "compiled" the holy grail of Lisp
> systems, the Lisp community should have instead tried
> to see how well Lisp does as an Internet language!
> Nobody cares if Java is an interpreted language, as long as
> it does what they need done.
>

Check out

http://www.ai.mit.edu/projects/iiip/doc/cl-http/home-page.html = Common Lisp
Hypermedia Server

> Or on second thoughts, perhaps Lisp could become a Smalltalk
> like language -- a source of several ideas, instead
> of something in a limbo with always having a small
> but vocal minority needing to defend it by claiming
> it is not interpreted and such.

This reads as though Lisp has a mind of its own. Lisp is good for AI, I
didn't know it was that good! :-)

--

William P. Vrotney - vro...@netcom.com

Erik Naggum

Nov 12, 1996

* Mukesh Prasad

| Yet Scheme is even now a "Lisp system"!

it's interesting to see just how little you know of what you speak.
Schemers call Scheme a Lisp system. many Schemers become irate when you
try to tell them that Scheme is not a Lisp.

| Or on second thoughts, perhaps Lisp could become a Smalltalk like
| language -- a source of several ideas, instead of something in a limbo
| with always having a small but vocal minority needing to defend it by
| claiming it is not interpreted and such.

this "source of several ideas" thing has been an ongoing process since its
inception. I'm surprised that you don't know this. people learn from Lisp
(then go off to invent a new syntax) all the time, all over the place.

when I was only an egg, at least I knew it. Mukesh Prasad may want to
investigate the option of _listening_ to those who know more than him,
instead of making a fool out of himself.

Jens Kilian

Nov 12, 1996

Mukesh Prasad (mpr...@dma.isg.mot.com) wrote:
> Or on second thoughts, perhaps Lisp could become a Smalltalk
> like language -- a source of several ideas, instead
> of something in a limbo with always having a small
> but vocal minority needing to defend it by claiming
> it is not interpreted and such.

You need to realize that Lisp is the second oldest high-level programming
language. It has always been "a source of several ideas", in that every
functional language has had its roots in Lisp, and lots of stuff has been
carried over into other types of languages as well.

Greetings,

Jens.
--
Internet: Jens_...@bbn.hp.com Phone: +49-7031-14-7698 (TELNET 778-7698)
MausNet: [currently offline] Fax: +49-7031-14-7351
PGP: 06 04 1C 35 7B DC 1F 26 As the air to a bird, or the sea to a fish,
0x555DA8B5 BB A2 F0 66 77 75 E1 08 so is contempt to the contemptible. [Blake]

Mukesh Prasad

Nov 12, 1996

Lou Steinberg wrote:

> particular _implementation_ on a particular _computer_ down to a
> particular _level of abstraction_ (e.g., 'down to 68K machine code')
> can be "interpreted" or "compiled", but not a language.

So what on earth is this thread about? Have you
read the topic heading?

You may not be aware of this (actually, you are obviously
not) but books on programming languages tend to divide
languages into two categories, "interpreted" and "compiled".
I repeat, *languages*, not *implementations*.

Since its inception, Lisp has been placed by programming
language theorists in the "interpreted" category.
The language itself, not any particular implementation.

However, Lisp systems have improved in technology.
In the early days, Lisp interpreters directly interpreted
the original source. An obvious improvement was
to "compact" the source code and to get rid of
comments, spaces etc prior to interpretation. But
this does not make the language "compiled".

Another improvement was to replace the original
source code by more compact and easy to interpret
"byte code". The function to do this is called
"compile", hence confusing the typical Lisp user
already.

To confuse matters more, the newer versions of the
"compile" function are more sophisticated, and can generate
machine code into which the interpreter transfers
the flow of control via a machine level jump
instruction. The confusion of the typical modern
day Lisp user is complete at this point!

However, having a function called "compile" doesn't
make language a compiled language.

An interpreted language is one which necessitates baggage
at run-time to interpret it. A compiled language
is one which doesn't. Lisp -- due to the nature
of the language definition -- necessitates baggage at
run-time, even with modern "compile" functions
which can generate machine code.

I will try once more (but not much more, this thread
has not attracted knowledgeable responses or
intelligent, unbiased discourse) to explain this -- if the
Lisp language _itself_ is to be deemed "compiled" (irrespective
of any implementation of it), then by that definition,
all languages must be deemed "compiled languages".
For any given language, things which have been
done to Lisp can be done. Thus that language's
definition does not make the language "interpreted"
any more than Lisp is.

>So the same _binary object code_ can be actual machine code or a byte
>code, depending on what machine you run it on. So the notion of a
>_language_ being "interpreted" or "compiled" makes no sense. A

You should read some books on Computer Science. It is
actually a matter of definition, not "sense". It will
only make sense if you are familiar with the definitions.
Otherwise, you might as well look at a book of mathematics
and claim the term "factors" must have something to
do with "fact"s, because that is how you choose to
understand it.

"Interpreted" and "compiled", when applied to
languages, have specific meanings.

>particular _implementation_ on a particular _computer_ down to a
>particular _level of abstraction_ (e.g., 'down to 68K machine code')
>can be "interpreted" or "compiled", but not a language.

This and other such complicated gems occurring in
this thread, are neither compiled nor interpreted, but
simple and pure BS, arising out of ignorance, bias
and lack of clear thinking.

Mukesh Prasad

Nov 12, 1996

> it's interesting to see just how little you know of what you speak.
> Schemers call Scheme a Lisp system. many Schemers become irate when you
> try to tell them that Scheme is not a Lisp.

Now if Scheme were a wild success, they would become
irate if you called it Lisp. Amazing how
it works, is it not?

(But I will admit I don't know enough Scheme to debate
how it is or is not Lisp -- I was just making
a point about human behavior...)

> | Or on second thoughts, perhaps Lisp could become a Smalltalk like
> | language -- a source of several ideas, instead of something in a limbo
> | with always having a small but vocal minority needing to defend it by
> | claiming it is not interpreted and such.

> this "source of several ideas" thing has been an ongoing process since its


> inception. I'm surprised that you don't know this. people learn from Lisp
> (then go off to invent a new syntax) all the time, all over the place.

Not too many people, Erik. The Emacs stuff that you _do_ know
about, is more an exception, not a rule. Out here, C and C++ rule,
and Java seems to be on the rise. Lisp is not really
in the picture anymore. A pity, but it made too many promises
and for some reason didn't deliver. I personally suspect it is
because it makes things too easy and encourages lax discipline,
but I may be wrong.


> when I was only an egg, at least I knew it. Mukesh Prasad may want to

Hmmm... If you depend upon things you knew as an egg (i.e.
never bothered to actually learn), no wonder you come out
with the proclamations you do!

> investigate the option of _listening_ to those who know more than him,
> instead of making a fool out of himself.

Many times, the only way not to appear a fool to fools is to
join in their foolishness.

Mukesh Prasad

unread,
Nov 12, 1996, 8:00:00 AM11/12/96
to

Jens Kilian wrote:
[snip]

> language. It has always been "a source of several ideas", in that every
> functional language has had its roots in Lisp, and lots of stuff has

That's true. Yet, for whatever reasons, none of the functional
languages have matched even the popularity of Lisp
itself, much less surpassed it to become one of the highly popular
languages.

William Paul Vrotney

unread,
Nov 13, 1996, 8:00:00 AM11/13/96
to

In article <328881...@dma.isg.mot.com> Mukesh Prasad
<mpr...@dma.isg.mot.com> writes:

> Not too many people, Erik. The Emacs stuff that you _do_ know
> about, is more an exception, not a rule. Out here, C and C++ rule,
> and Java seems to be on the rise. Lisp is not really
> in the picture anymore. A pity, but it made too many promises

^
|
Sure, you think it is a pity. What hypocrisy!

Yes, you are right: Emacs, Lisp and AI are the exception. So why are *you*
posting your predictable opinions to such exceptional news groups? Is it
because you want to advance Emacs, Lisp or AI? ... I don't think so ...

Richard A. O'Keefe

unread,
Nov 13, 1996, 8:00:00 AM11/13/96
to

Mukesh Prasad <mpr...@dma.isg.mot.com> writes:
>Ah, "compiling to native code" brings up a different issue,
>that of whether or not you want to allow eval in the
>language. If you do, there are some sleigh of hands
>involved (like hiding an interpreter in your "compiled"
>executable.)

Wrong. A *compiler* in the executable will do fine.
What's more, a *dynamically linked* compiler will also do fine,
so no space need actually be taken up in the object file.

For example, in DEC-10 Prolog, the compiler was a "shared segment"
which was swapped in when you loaded a file and swapped out again
when it had finished.

Take Oberon as another example. Looks like a stripped down Pascal.
Acts like a stripped down Pascal, _except_ it dynamically loads
new modules, and in one implementation, the module loader generates
new native code from a machine-independent compiled format.

>If you don't, is it Lisp?

Well, it might be Scheme...

David Kastrup

unread,
Nov 13, 1996, 8:00:00 AM11/13/96
to

Erik Naggum <nob...@naggum.no> writes:

> when I was only an egg, at least I knew it. Mukesh Prasad may want

> to investigate the option of _listening_ to those who know more than


> him, instead of making a fool out of himself.

We'll leave *that* option to people who needn't learn anything or
admit a mistake or even fallibility just after they left their egg
state.


--
David Kastrup Phone: +49-234-700-5570
Email: d...@neuroinformatik.ruhr-uni-bochum.de Fax: +49-234-709-4209
Institut für Neuroinformatik, Universitätsstr. 150, 44780 Bochum, Germany

Richard A. O'Keefe

unread,
Nov 13, 1996, 8:00:00 AM11/13/96
to

Mukesh Prasad <mpr...@dma.isg.mot.com> writes:
>However, Lisp systems have improved in technology.
>In the early days, Lisp interpreters directly interpreted
>the original source.

Lisp 1.5 had a compiler.
No "mainstream" Lisp interpreter has _ever_ "directly interpreted
the original source".
(I have seen an interpreter for a C-like language that did exactly
that. It was in a book by Herbert Schildt, and as you might expect
it was seriously inefficient.)
Lisp interpreters deal with _abstract syntax trees_.
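A small illustration, assuming nothing beyond a conforming Common Lisp:
READ turns the program text into list structure before evaluation or
compilation ever sees it.

  (read-from-string "(+ 1            ; a comment
                      (* 2 3))")
  ;; primary return value => (+ 1 (* 2 3))
  ;; -- a list of a symbol, a number and a sublist; the comment and
  ;;    the extra whitespace are already gone at this point.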

>An obvious improvement was
>to "compact" the source code and to get rid of
>comments, spaces etc prior to interpretation. But
>this does not make the language "compiled".

Once again, _all_ Lisp systems since 1.5 and before have
been based on abstract syntax trees, and most of them have
had _both_ an interpreter walking these trees (for debugging)
_and_ a compiler generating code (for execution).

>Another improvement was to replace the original
>source code by more compact and easy to interpret
>"byte code". The function to do this is called
>"compile", hence confusing the typical Lisp user
>already.

I once used a byte coded system, a Xerox 1108.
Thing was, the byte codes WERE THE NATIVE INSTRUCTION SET
of the machine. There was microcode underneath, but
there was microcode underneath the IBM 360 and M68000,
and nobody ever slammed BAL/360 for being an "interpreted"
language.

And again: the PSL system I was using in the 70s compiled
to *native* *instructions*.

>To confuse matters more, the newer versions of the
>"compile" function are more sophisticated, and can generate
>machine code into which the interpreter transfers
>the flow of control via a machine level jump
>instruction. The confusion of the typical modern
>day Lisp user is complete at this point!

Maybe you are confused, but Lisp users are not.
As far as a Lisp user is concerned, the question is
simply "do I get fine grain debugging, or do I get
high performance".

>However, having a function called "compile" doesn't
>make language a compiled language.

No, but in the 60s and 70s the mainstream Lisp
systems included the ability to compile to native machine
instructions, and this facility was *routinely* used.
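It still is. A hedged sketch of what that looks like in a present-day
Common Lisp (COMPILE and DISASSEMBLE are standard functions; the output
you see is of course implementation- and machine-specific):

  (defun square (x)
    (* x x))

  (compile 'square)       ; many implementations compile DEFUN anyway
  (compiled-function-p (symbol-function 'square))   ; => T
  (disassemble 'square)   ; typically prints native machine code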

Tell me this, and tell me honestly:
what properties does Scheme have (or lack)
compared with Fortran
that make Scheme "interpreted" and Fortran "compiled".
When you are answering this, consider the fact that I use
a Scheme compiler which is a "batch" compiler producing
code that often outperforms C, and the fact that I have
read a Fortran interpreter that was used to execute student
programs.

>An interpreted language is one which necessitates baggage
>at run-time to interpret it.

You haven't defined what "interpret" means.
Using any of several reasonable criteria, this makes Fortran and
C interpreted languages.

>A compiled language
>is one which doesn't. Lisp -- due to the nature
>of the language definition -- necessitates baggage at
>run-time, even with modern "compile" functions
>which can generate machine code.

Ah, but that "baggage" you are ranting about
- need not occupy any space in the executable form of the program
- need not take any *time* at run time unless it is actually *used*
- exists for C on UNIX, Windows, VMS, and other modern operating systems.

What am I getting at with that last point?
This:

A C program may *at run time* construct new code,
cause it to be compiled,
and cause it to become part of the running program.

There isn't any special _syntax_ for this, but it's in the _API_.
Presumably you know about Windows DLLs; in UNIX SVr4 look for
'dlopen' in the manuals; in VMS I don't know what it's called but
I've used a program that used it. The CMS operating system has
had a dynamic LOAD command for a very long time.

However, calling compilation interpreting simply because it
happens at run time is a bizarre abuse of the English language.

>I will try once more (but not much more, this thread
>has not attracted knowledgable responses or

It has. People have tried to tell you that Lisp systems have
been generating native machine code for decades, since very early
days indeed. You keep calling them interpreters.

>intelligent, unbiased discourse)

You appear to accept only people who agree with you as unbiased.

>to explain this -- if the
>Lisp language _itself_ is to be deemed "compiled"

With the notable exception of yourself, most people have
been arguing that it is EXTREMELY SILLY to call *ANY* language
"compiled" or "interpreted".
What people have been saying is that mainstream Lisp *SYSTEMS*
have since the earliest days offered compilers for the Lisp
language.

>(irrespective
>of any implementation of it),

Nobody in his right mind would so deem *any* programming language.
*Any* programming language can be interpreted.
Just about all of them *have* been. (There is at least one C++ interpreter.)

*Some* programming languages are hard to compile; APL2 springs to mind
because the *parsing* of an executable line can vary at run time. Even
so, there are *systems* that can reasonably be called APL compilers.
Another language that I would _hate_ to have to compile is M4; once
again the syntax can change while the program is executing.

>>So the same _binary object code_ can be actual machine code or a byte
>>code, depending on what machine you run it on. So the notion of a
>>_language_ being "interpreted" or "compiled" makes no sense. A

>You should read some books on Computer Science.

Lou Steinberg has probably read a *lot* of them.

>It is actually a matter of definition, not "sense". It will
>only make sense if you are familiar with the definitions.
>Otherwise, you might as well look at a book of mathematics
>and claim the term "factors" must have something to
>do with "fact"s, because that is how you choose to
>understand it.

This paragraph does not suggest any very profound acquaintance
with _either_ computing _or_ mathematics. One of the problems
that plagues both disciplines is that terminology and notation
are used differently by different authors. Most of the serious
mathematics and formal methods books I have include, BECAUSE
THEY NEED TO, a section explaining the notation they use.

>"Interpreted" and "compiled", when applied to
>languages, have specific meanings.

There are no *standard* meanings for those terms when applied to
languages. Why don't you cite the definitions and where you
found them?

For now, I've searched my fairly full bookshelves, and failed to
find any such definition. Amusingly, I did find the following
paragraph in the classic
Compiler Construction for Digital Computers
David Gries
Wiley International Edition, 1971.
At the beginning of Chapter 16, "Interpreters", we find this:

We use the term _interpreter_ for a program which performs
two functions:

1. Translates a source program written in the source language
(e.g. ALGOL) into an internal form; and
2. Executes (interprets, or simulates) the program in this
internal form.

The first part of the interpreter is like the first part of
a multi-pass compiler, and we will call it the "compiler".

The issue as Gries saw it back in 1971 (and I repeat that this is a
classic textbook) was between an internal form executed by software
and an internal form executed by hardware.

The question is, what are you trying to do in this thread?
Are you
(A) trying to make some substantive point about Lisp compared with
other languages?

In that case you have been proved wrong. There is no _useful_
sense in which Lisp is more "interpreted" than C, in existing
practical implementations of either.

(B) trying to make a _terminological_ point with no consequences for
what _happens_ to a program, only for what it is _called_?

In that case, you should be aware that you are NOT using
words in a way intelligible to people who have actually
*worked* on compilers, and you should explicitly state what
definitions you are using and where you got them.

The only point I care about is (A), whether there is any intrinsic
inefficiency in Lisp compared with C, and the answer is NO, there
isn't:

A programming language that is widely accepted as a Lisp dialect
(Scheme) not only _can_ be "batch" compiled like C, I routinely
use just such a compiler and get the same or better performance
out of it. (This is the Stalin compiler.)

In all the "popular" operating systems these days: DOS, Windows,
most modern UNIX systems, VMS, CMS, it is possible for a C
program to dynamically construct new code which then becomes
part of the running program.

Any book which states or implies that "Lisp" (not otherwise
specified as to dialect) is intrinsically "interpreted" is not
to be discarded lightly, but to be hurled with great force.

Jens Kilian

unread,
Nov 13, 1996, 8:00:00 AM11/13/96
to

Mukesh Prasad (mpr...@dma.isg.mot.com) wrote:
[...]

> "Interpreted" and "compiled", when applied to
> languages, have specific meanings.

_Any_ programming language can be implemented by an interpreter or a compiler.
It just doesn't make sense to speak about "compiled languages" vs "interpreted
languages". I take it that you have never heard about C interpreters?

[...]


> This and other such complicated gems occurring in
> this thread, are neither compiled nor interpreted, but
> simple and pure BS, arising out of ignorance, bias
> and lack of clear thinking.

*Plonk*

Pot, kettle, black etc.

Cyber Surfer

unread,
Nov 13, 1996, 8:00:00 AM11/13/96
to

In article <328738...@dma.isg.mot.com>
mpr...@dma.isg.mot.com "Mukesh Prasad" writes:

> I was expecting people to cite Scheme, T, Nil et al,
> but it didn't happen in hordes (though there were very
> interesting arguments on the order of "I never use Eval,
> therefore Lisp is not interpreted" and "Until more than
> N Basic compilers exist, Basic will be an interpreted
> language and Lisp will be a compiled language...")

Check my email to you, and you'll find Scheme mentioned.
EVAL is a relic from an ancient time.



> One interesting thing is, I have never seen C mentioned
> as a variant of BCPL, and I have seldom seen Pascal
> referred to as a kind of Algol. And nobody calls

> C++ "a kind of C" anymore. Yet Scheme is even now
> a "Lisp system"!

Scheme is just a dialect of Lisp. VB is a dialect of Basic,
but who calls it Basic? The name distinguishes it from other
dialects.

Sadly, some people simply refer to Common Lisp as "Lisp".
This can be confusing. If you want a short name, refer to
it as CL. When I refer to Lisp, I include CL, Scheme, Dylan,
and anything else that is Lisp-like. Check the Lisp FAQ
for examples.



> Perhaps instead of drawing fine lines in the sand about
> distinctions between interpreted and compiled, and
> trying to make being "compiled" the holy grail of Lisp
> systems, the Lisp community should have instead tried
> to see how well Lisp does as an Internet language!
> Nobody cares if Java is an interpreted language, as long as
> it does what they need done.

I'm one of the few Lisp programmers who'd like to see Lisp
become more popular. I guess most people are happy with Lisp
the way it is.



> Or on second thoughts, perhaps Lisp could become a Smalltalk
> like language -- a source of several ideas, instead
> of something in a limbo with always having a small
> but vocal minority needing to defend it by claiming
> it is not interpreted and such.

Actually, it's only fools like yourself calling it interpreted
that cause problems. You're confusing implementation details
with the language. Read a few books on compiler theory, esp
books like Brown's. <hint, hint>


--
<URL:http://www.enrapture.com/cybes/> You can never browse enough

Dave Newton

unread,
Nov 13, 1996, 8:00:00 AM11/13/96
to

Richard A. O'Keefe wrote:
> What am I getting at with that last point?
> This:
>
> A C program may *at run time* construct new code,
> cause it to be compiled,
> and cause it to become part of the running program.
>
> There isn't any special _syntax_ for this, but it's in the _API_.
> Presuambly you know about Windows DLLs; in UNIX SVr4 look for
> 'dlopen' in the manuals;

I'm confused on your point here-DLLs aren't compiled or "constructed"
at load time. I could, I suppose, spawn off a compile/link process and
create a DLL then explicitly load it, but I don't think that counts.

> In all the "popular" operating systems these days: DOS, Windows,
> most modern UNIX systems, VMS, CMS, it is possible for a C
> program to dynamically construct new code which then becomes
> part of the running program.

I used to do that on my TRaSh-80 by writing machine language into
string space. Worked pretty well. That counts. Dynamically-linked libraries
don't, IMHO. If you want to modify machine code inside a code or data
segment that probably _would_ count, but it's pretty much a pain to do.

--
Dave Newton | TOFU | (voice) (970) 225-4841
Symbios Logic, Inc. | Real Food for Real People. | (fax) (970) 226-9582
2057 Vermont Dr. | | david....@symbios.com
Ft. Collins, CO 80526 | The World Series diverges! | (Geek joke.)

Scott Nudds

unread,
Nov 13, 1996, 8:00:00 AM11/13/96
to

(Patrick Juola) wrote:
: You're palming a card. C, a compiled language by anyone's reckoning,
: has no (single) hardware instruction corresponding to printf.

Printf is an interpreter.
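It is, in the sense that it walks its control string at run time and
dispatches on each directive it finds. A toy sketch of that idea in
Lisp -- a hypothetical TOY-FORMAT handling only ~a and ~%, not the
real FORMAT or printf:

  (defun toy-format (control &rest args)
    ;; Scan the control string, "executing" each directive as it is met.
    (loop with i = 0
          while (< i (length control))
          do (let ((ch (char control i)))
               (cond ((and (char= ch #\~) (< (1+ i) (length control)))
                      (case (char-downcase (char control (1+ i)))
                        (#\a (princ (pop args)))
                        (#\% (terpri)))
                      (incf i 2))
                     (t (write-char ch)
                        (incf i))))))

  ;; (toy-format "x = ~a~%" 42)   prints "x = 42" and a newline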


--
<---->


Cyber Surfer

unread,
Nov 13, 1996, 8:00:00 AM11/13/96
to

In article <328879...@dma.isg.mot.com>
mpr...@dma.isg.mot.com "Mukesh Prasad" writes:

> You may not be aware of this (actually, you are obviously
> not) but books on programming languages tend to divide
> languages into two categories, "interpreted" and "compiled".
> I repeat, *languages*, not *implementations*.

Some people may never have used a language where the distinction
is hard to make.

Take Forth as an example. Is Forth compiled or interpreted? It
depends on your definition, but there are Forths that generate
native code. One of my batch Forth compilers generated assembly
source code, while a later version generates threaded code.

Interactive Forths can compile to either native or threaded code.
You could even argue that the Novix compiler generates _microcode_,
but since the code is 16bit, that may be debatable. Can microcode
be 16bit? Perhaps. Perhaps not.

If you run x86 code on an emulator, is that interpreted? Is it
still "native"? Who cares?



> Since its inception, Lisp has been placed by programming
> language theorists in the "interpreted" category.
> The language itself, not any particular implementation.

In which category did PJ Brown put Lisp? Or Basic...
There are C interpreters. So what?

The language theorists may not be the ones who decide these things.
It could be the marketing people. We should also ask _when_ these
categorisations were made. You didn't say, did you? By playing the
same game, I could say that C is an obscure novelty that few people
have heard of, never mind actually use. I could also say that it
was only available for Unix. However, I won't, because it would no
longer be true.



> However, Lisp systems have improved in technology.
> In the early days, Lisp interpreters directly interpreted

> the original source. An obvious improvement was


> to "compact" the source code and to get rid of
> comments, spaces etc prior to interpretation. But
> this does not make the language "compiled".

This is just history. Very old history. See above.



> Another improvement was to replace the original
> source code by more compact and easy to interpret
> "byte code". The function to do this is called
> "compile", hence confusing the typical Lisp user
> already.

Basic used to do this, and perhaps still does. Which Basic, tho?
VB? No. QuickBasic? ST Basic? BBC Basic?



> To confuse matters more, the newer versions of the
> "compile" function are more sophisticated, and can generate
> machine code into which the interpreter transfers
> the flow of control via a machine level jump
> instruction. The confusion of the typical modern
> day Lisp user is complete at this point!

What confusion? All I see here is the bollocks that you're talking.
You're talking history, which most people will ignore.



> However, having a function called "compile" doesn't
> make language a compiled language.

Not necessarily, but producing native code might. Do you mean
"compile to native code"? It's not clear - perhaps you're confusing
the word compile with some special meaning that only you know?
See PJ Brown's book for my choice. What's yours?



> An interpreted language is one which necessitates baggage

> at run-time to interpret it. A compiled language


> is one which doesn't. Lisp -- due to the nature
> of the language definition -- necessitates baggage at
> run-time, even with modern "compile" functions
> which can generate machine code.

Most, if not all, languages necessitate runtime baggage.
Perhaps you hadn't noticed this. C is a perfect example.
Very few programs use it without the standard C library.
Some of us call the OS directly - which can be thought of
as an even larger runtime! OK, that's beyond the scope of
the language, but since you're so intent on confusing the
issue with misinformation, why not? Some Lisps run without
the support of what most people would call an OS. So do
some Basics...

> I will try once more (but not much more, this thread
> has not attracted knowledgable responses or

> intelligent, unbiased discourse) to explain this -- if the
> Lisp language _itself_ is to be deemed "compiled" (irrespective
> of any implementation of it), then by that definition,
> all languages must be deemed "compiled languages".

Now you're getting it! See PJ Brown's book. The words "compile"
and "interpret" are a distraction that'll only confuse you.

> For any given language, things which have been
> done to Lisp can be done. Thus that language's
> definition does not make the language "interpreted"
> any more than Lisp is.

That's a good argument for not making such distinctions,
and yet you insist on making them, as if they still mean
anything.

> >So the same _binary object code_ can be actual machine code or a byte
> >code, depending on what machine you run it on. So the notion of a
> >_language_ being "interpreted" or "compiled" makes no sense. A
>

> You should read some books on Computer Science. It is


> actually a matter of definition, not "sense". It will
> only make sense if you are familiar with the definitions.

Ah, but _which_ CS books? Whose definitions? Let's have
some modern definitions, from the last 20 years or so.
In fact, we could go back a lot further than that and
still find that distinctions like yours are misplaced.
It's better to look at it all as merely data. In Turing's
day, this may have been better understood, but more
recently, we've been unnecessarily obsessed with hardware.

Finally, attention seems to be shifting back to the notion
that the hardware details are...just details, like any other
information inside a computer. Even marketing people are
becoming aware of it!

> Otherwise, you might as well look at a book of mathematics
> and claim the term "factors" must have something to
> do with "fact"s, because that is how you choose to
> understand it.

Now you're being silly again. Wake up and smell the coffee!
How long have you been asleep?



> "Interpreted" and "compiled", when applied to
> languages, have specific meanings.

No they don't. You're talking about _implementations_,
not languages. There is a difference, y'know. This lack
of understanding doesn't help your credibility.

> >particular _implementation_ on an particular _computer_ down to a
> >particular _level of abstraction_ (e.g., 'down to 68K machine code')
> >can be "interpreted" or "compiled", but not a language.
>

> This and other such complicated gems occurring in
> this thread, are neither compiled nor interpreted, but
> simple and pure BS, arising out of ignorance, bias
> and lack of clear thinking.

Wanna bet? Have you noticed how many emulators for the
instruction sets of major CPUs are available today? Intel
are working on their next generation of CPUs, which will
emulate the x86 family. Is that BS? Perhaps, as most of
us have only Intel's word that this is what they're doing.
However, they're not the only ones pursuing this line.
Digital also have an x86 emulator, and others have produced
emulators before them. It's a rather _old_ idea.

You've mentioned CS books, so I wonder if you've read Vol 1
of Knuth's Art of Computer Programming? Take a look at the
Mix "machine" that he describes. It might just as well be
a real machine for all we care, and that's the point. Mix
allowed him to write code for a specific archetecture, which
can be useful for certain areas of programming, like writing
a floating point package - or a compiler.

You might also want to take a look at the Pascal P4 compiler,
and the portable BCPL compiler. There are books documenting
them. Oh, and let's not forget the VP code used by Tao (see
<URL:http://www.tao.co.uk> for details).

The weight of evidence against must be crushing you. ;-)

Cyber Surfer

unread,
Nov 13, 1996, 8:00:00 AM11/13/96
to

In article <328881...@dma.isg.mot.com>
mpr...@dma.isg.mot.com "Mukesh Prasad" writes:

> Now if Scheme were a wild success, they would become
> irate if you called it Lisp. Amazing how
> it works, is it not?

This is a hypothetical question. It might not happen that way at
all, and my personal opinion is that this is more likely. Whatever
Scheme's success may be, I think that Scheme and CL programmers
can live together far more harmoniously than C and Pascal programmers
ever could. So I doubt anybody will make a fuss.

Besides, you didn't say anything about Scheme being more successful
than CL, did you? ;-)



> (But I will admit I don't know enough Scheme to debate
> how it is or is not Lisp -- I was just making
> a point about human behavior...)

Noted. It's your point, not mine. While C and Pascal programmers
may behave that way, I've not noticed CL and Scheme programmers
being so childish. Ah, but using Lisp is a sign of maturity! ;-)



> Not too many people, Erik. The Emacs stuff that you _do_ know
> about, is more an exception, not a rule. Out here, C and C++ rule,
> and Java seems to be on the rise. Lisp is not really
> in the picture anymore. A pity, but it made too many promises

> and for some reason didn't deliver. I personally suspect it is
> because it makes things too easy and encourages lax discipline,
> but I may be wrong.

Yes, you may be wrong. My suspicion, based on the opinions of
C++ programmers that I've seen posted to UseNet, is that some
people just like programming to be _difficult_, and refuse to
use anything "too easy". In fact, they'll go further, and claim
that such tools can't be used.

This is curious behaviour, considering the evidence to the
contrary. However, this evidence is frequently drawn to their
attention, in such discussions, and the issue is forgotten.



> Hmmm... If you depend upon things you knew as an egg (i.e.
> never bothered to actualy learn,) no wonder you come out
> with the proclamations you do!

What have you learned, eh? C'mon, quote some references that
support your assertions. Then we might have something to discuss.



> > investigate the option of _listening_ to those who know more than him,
> > instead of making a fool out of himself.
>

> Many times, the only way not to appear a fool to fools is to
> join in their foolishness.

As I'm doing. ;-) I could just ignore you, but it's more fun
this way. We're playing the "who knows more about compilers"
game, in which nobody scores any points (none that count, anyway),
there are no prizes (just the survival or death of certain memes),
and we can all walk away thinking, "Well, that showed him!"

It would be very childish if this kind of stupidity didn't affect
the nature of the software we use and the tools used to create it.
The effect may be small, but every meme and every head those memes
live in plays its part.

Erik Naggum

unread,
Nov 13, 1996, 8:00:00 AM11/13/96
to

* Mukesh Prasad

| Since its inception, Lisp has been placed by programming language
| theorists in the "interpreted" category. The language itself, not any
| particular implementation.

this should make you think very carefully about the qualifications of those
"programming language theorists".

| However, Lisp systems have improved in technology. In the early days,
| Lisp interpreters directly interpreted the original source.

the `read' function, which transforms a character string (the original
source) into lists/trees, has been with the language since the very
earliest days. nothing ever saw the original source code in the Lisp
system apart from this function, called the Lisp reader.

| An obvious improvement was to "compact" the source code and to get rid of
| comments, spaces etc prior to interpretation.

you have no inkling of a clue to what you talk about.

| Another improvement was to replace the original source code by more
| compact and easy to interpret "byte code".

geez. it appears that your _only_ exposure to "interpreted" languages are
BASIC systems. it is quite unbelievable that anyone should want to parade
the kind of ignorance you display across so many newsgroups.

| To confuse matters more, the newer versions of the "compile" function are
| more sophisticated, and can generate machine code into which the
| interpreter transfers the flow of control via a machine level jump
| instruction.

let me guess. you're an old ZX-80, TRS-80, CBM-64, etc, hacker, right?
you know the way the old BASIC interpreters worked, by heart, right? and
you think "interpreter" has to mean the same thing for toy computers in the
early 80's and a language "designed primarily for symbolic data processing
used for symbolic calculations in differential and integral calculus,
electrical circuit theory, mathematical logic, game playing, and other
fields of artificial intelligence" (McCarthy, et al: Lisp 1.5; MIT Press,
1962) in the early 60's.

| "Interpreted" and "compiled", when applied to languages, have specific
| meanings.

this is perhaps the first true statement I have seen you make in several weeks.
however, the specific meanings are not the ones you have in mind.

| This and other such complicated gems occurring in this thread, are
| neither compiled nor interpreted, but simple and pure BS, arising out of
| ignorance, bias and lack of clear thinking.

right. I was about to flame you for being a moron, so thanks for laying
the foundation. you don't know what you're talking about, you don't know
what Lisp is like, you don't know any of Lisp's history, you refuse to
listen when people tell you, and, finally, you don't seem to grasp even the
simplest of ideas so that you can express them legibly. in brief, you're a
moron. I sincerely hope that Motorola made an error in hiring you.

Erik Naggum

unread,
Nov 13, 1996, 8:00:00 AM11/13/96
to

* Mukesh Prasad

| > when I was only an egg, at least I knew it. Mukesh Prasad may want to
|
| Hmmm... If you depend upon things you knew as an egg (i.e. never bothered
| to actualy learn,) no wonder you come out with the proclamations you do!

the expression "I am only an egg" refers to Robert A. Heinlein's legendary
"Stranger on a Strange Land". so does the word "grok", which I assume is
even more unfamiliar to you, both as a concept and as a word. it was first
published in 1961. at that time I was _literally_ only an egg, but that is
not what the expression refers to.

Lou Steinberg

unread,
Nov 13, 1996, 8:00:00 AM11/13/96
to

I pointed out _reasons_ why calling a language compiled or interpreted
made little sense. Prasad responded with several appeals to authority:

books on programming languages tend to divide
languages into two categories, "interpreted" and "compiled".
I repeat, *languages*, not *implementations*.

Since its inception, Lisp has been placed by programming


language theorists in the "interpreted" category.
The language itself, not any particular implementation.

Can you cite any sources to back up these claims? I flatly do not believe
them. Please tell us which "books on programming languages" you are
referring to. Were they published within the last 20 years?
Can you direct us to any statement in the literature by any
programming language theorist that supports this claim?

In the early days, Lisp interpreters directly interpreted

the original source. An obvious improvement was


to "compact" the source code and to get rid of
comments, spaces etc prior to interpretation.

This is complete nonsense. One of the interesting features of _all_
interpreted implementations of Lisp, from the very first, was that
they did not interpret a character string but rather the internal
linked-list ("s-expression") representation. See, e.g., "Programming in
the Interactive Environment" by Erik Sandewall, Computing Surveys,
V. 10, # 1, March 1978, for a discussion of some of the consequences
of this approach.

--------------------------------------------------------------------
Prof. Louis Steinberg l...@cs.rutgers.edu
Department of Computer Science http://www.cs.rutgers.edu/~lou
Rutgers University

Jeff Barnett

unread,
Nov 13, 1996, 8:00:00 AM11/13/96
to

In article <56bv0i$lm0$1...@goanna.cs.rmit.edu.au>, o...@goanna.cs.rmit.edu.au (Richard A. O'Keefe) writes:
|> Mukesh Prasad <mpr...@dma.isg.mot.com> writes:
|> >Ah, "compiling to native code" brings up a different issue,
|> >that of whether or not you want to allow eval in the
|> >language. If you do, there are some sleigh of hands
|> >involved (like hiding an interpreter in your "compiled"
|> >executable.)
|>
|> Wrong. A *compiler* in the executable will do fine.
|> What's more, a *dynamically linked* compiler will also do fine,
|> so no space need actually be taken up in the object file.

Just a follow-up to the above: in a Lisp I implemented for the old
IBM 360/370 line, eval just called the compiler, ran the native code
produced, marked the function for garbage collect, then returned the
values. BTW, the reason for this was that I detested the way other
Lisps had/enjoyed differences between compiler and eval semantics.
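A rough Common Lisp sketch of that arrangement -- a hypothetical
EVAL-BY-COMPILING, not the actual 360/370 code: wrap the form in a
throwaway LAMBDA, compile it, call it, then let it go.

  (defun eval-by-compiling (form)
    ;; No tree-walking interpreter needed at run time -- only COMPILE.
    (funcall (compile nil `(lambda () ,form))))

  ;; (eval-by-compiling '(let ((x 21)) (* 2 x)))  => 42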

Jeff Barnett

Tim Olson

unread,
Nov 14, 1996, 8:00:00 AM11/14/96
to

In article <328879...@dma.isg.mot.com>
Mukesh Prasad <mpr...@dma.isg.mot.com> writes:

> You may not be aware of this (actually, you are obviously

> not) but books on programming languages tend to divide


> languages into two categories, "interpreted" and "compiled".
> I repeat, *languages*, not *implementations*.

What books are those? The ones I have read specifically state that it
is the *implementation* which can be divided into the two extremes of
"interpreted" (simulated) and "compiled" (translated). Some go on to
classify languages into these categories based upon the most common


*implementations*.

> Since its inception, Lisp has been placed by programming
> language theorists in the "interpreted" category.
> The language itself, not any particular implementation.
>

> However, Lisp systems have improved in technology.

> In the early days, Lisp interpreters directly interpreted
> the original source. An obvious improvement was
> to "compact" the source code and to get rid of
> comments, spaces etc prior to interpretation.

LISP systems never "directly interpreted the original source" --
rather, they convert input source to internal list representations of
"S-expressions". Comments and spaces are removed by the read
procedure.


> But
> this does not make the language "compiled".

"In the early days", LISP existed in both interpreted and compiled
forms. Read McCarthy's LISP1.5 Programmer's Manual (1965), which
describes the interpreter and the compiler (which generated IBM 7090
assembly language).


> An interpreted language is one which necessitates baggage
> at run-time to interpret it. A compiled language
> is one which doesn't. Lisp -- due to the nature
> of the language definition -- necessitates baggage at
> run-time, even with modern "compile" functions
> which can generate machine code.

What runtime baggage does the language LISP *require*? One might say
"garbage collection", but that can be considered a "helper function",
just like heap allocation via malloc() is for C.

-- Tim Olson
Apple Computer, Inc.
(t...@apple.com)

Patrick Juola

unread,
Nov 14, 1996, 8:00:00 AM11/14/96
to


Bingo! Thank you for so eloquently pouncing on the point I've been
trying to make for some time now.

Patrick

CHRISTOPHER ELIOT

unread,
Nov 14, 1996, 8:00:00 AM11/14/96
to

In article <LOU.96No...@atanasoff.rutgers.edu>,

Lou Steinberg <l...@cs.rutgers.edu> wrote:
>I pointed out _reasons_ why calling a language compiled or interpreted
>made little sense. Prasad responded with several appeals to authority:
>
> books on programming languages tend to divide
> languages into two categories, "interpreted" and "compiled".
> I repeat, *languages*, not *implementations*.


Dividing languages into interpreted and compiled is like saying that
"trucks run on diesel fuel and cars run on gasoline". It may be a
valid generalization, but there are exceptions. The terms "interpreter"
and "compiler" describe features of an implementation not a language.

I don't know what kind of "authority" it takes to be convincing
about this distinction. I was part of the implementation team
at MIT that wrote a Lisp compiler for the VAX in the mid 1980's.
You can look in the proceedings of AAAI-96 for my paper
describing how I used a Lisp compiler to model multi-agent
reasoning. In fact, I wrote an interpreter for a simulation
language to do that.


> Since its inception, Lisp has been placed by programming
> language theorists in the "interpreted" category.
> The language itself, not any particular implementation.
>

>Can you cite any sources to back up these claims? I flatly do not believe
>them. Please tell us which "books on programming languages" you are
>referring to. Were they published within the last 20 years?
>Can you direct us to any statement in the literature by any
>programming language theorist that supports this claim?

> In the early days, Lisp interpreters directly interpreted


> the original source. An obvious improvement was
> to "compact" the source code and to get rid of
> comments, spaces etc prior to interpretation.
>

>This is complete nonsense. One of the interesting features of _all_
>interpreted implementations of Lisp, from the very first, was that
>they did not interpret a character string but rather the internal
>linked-list ("s-expression") representation. See, e.g., "Programming in
>the Interactive Environment" by Erik Sandewall, Computing Surveys,
>V. 10, # 1, March 1978, for a discussion of some of the consequences
>of this approach.

>--------------------------------------------------------------------
>Prof. Louis Steinberg l...@cs.rutgers.edu
>Department of Computer Science http://www.cs.rutgers.edu/~lou
>Rutgers University


--
Christopher R. Eliot, Senior Postdoctoral Research Associate
Center for Knowledge Communication, Department of Computer Science
University of Massachusetts, Amherst. (413) 545-4248 FAX: 545-1249
EL...@cs.umass.edu, http://rastelli.cs.umass.edu/~ckc/people/eliot/

Carl Donath

unread,
Nov 14, 1996, 8:00:00 AM11/14/96
to

Jens Kilian wrote:
> _Any_ programming language can be implemented by an interpreter or a compiler.
> It just doesn't make sense to speak about "compiled languages" vs "interpreted
> languages". I take it that you have never heard about C interpreters?

This does not take into account languages (Lisp, APL) where the program
may generate functions and execute them. A compiler could only do this
if the compiled program included a compiler to compile and execute the
generated-on-the-fly instructions, which is difficult and/or silly.

In a phrase, "self-modifying code".

--
----------------------------------------------------------------------
-- c...@nt.com ----- ctdo...@rpa.net ----- ctdo...@mailbox.syr.edu --
----------------------------------------------------------------------

Cyber Surfer

unread,
Nov 14, 1996, 8:00:00 AM11/14/96
to

In article <3289ED...@symbiosNOJUNK.com>
david....@symbiosNOJUNK.com "Dave Newton" writes:

> I used to do that on my TRaSh-80 by writing machine language into
> string space. Worked pretty well. That counts. Dynamically-linked libraries
> don't, IMHO. If you want to modify machine code inside a code or data
> segment that probably _would_ count, but it's pretty much a pain to do.

I also POKEed machine code into string space, as well as writing
Basic code that wrote assembly source code. My first compile took
the display memory map and wrote the code for a program to load
it back in, with the load address set for the memory map. Yes,
I used a TRS-80! I learned a lot from that "mess" of a machine,
perhaps _because_ of all its faults.

While I've not yet written machine code to a data segment and then
created a code alias for it, it doesn't look hard to do. The "hard"
part is the bit that writies the machine code. Having written an
assembler in Forth, and various Forth compilers, I think I understand
the principles. Still, all I'm saying is that I _could_ do it if
I ever had a good reason to. So far, I haven't.

However, I may prefer doing it that way to writing - let's say - C
source code and then using a C compiler, esp if the app/util/whatever
has to be delivered to a client who almost certainly won't have a
C compiler or any other development tool. Whether this is preferable
to writing a bytecode interpreter, and compiling to bytecodes, will
likely depend on the requirements of the program in which you're
embedding this code. If the compiled code won't survive after
the program stops running, then using machine code may actually be
_easier_ than bytecodes.

Alternately, there's threaded code, but if the addresses are direct,
i.e. direct threading, then a change to the program will, like the
machine code approach, require all the code from your compile to be
recompiled, thus updating the addresses to match the runtime.

Complicated, isn't it? ;-) Even writing about it can be messy, but
I find it easier to write the code than to write _about_ the code.
PJ Brown explained it much better than I can!

So, I'm not disagreeing with either of you. Richard is right, you
can count a DLL, _if_ you have the tools to create a DLL handy when
your program runs. If not, then your point will be valid.

Cyber Surfer

unread,
Nov 14, 1996, 8:00:00 AM11/14/96
to

In article <3288BB...@dma.isg.mot.com>
mpr...@dma.isg.mot.com "Mukesh Prasad" writes:

> That's true. Yet, for whatever reasons, none of the functional
> languages have matched even the popularity of Lisp
> itself, much less surpass it to become one of the highly popular
> languages.

15 years ago, the same might've been said about OOP. All
you're telling us is where we currently stand in the history
of functional languages. I think we already know that.

Once again, you're trying to confuse the issue by misrepresenting
the facts. It doesn't help your argument. In fact, you appear
to be rather "clueless" when it comes to compiler theory. While
Richard Gabriel has suggested some excellent books on the subject,
I'd recommend that you start with something more basic, and which
specifically explains how compiler theory relates to _interactive_
language systems.

You, on the other hand, have not given any references for
your sources of information. Where's _your_ credibility?
C'mon, put up or shut up. ;-)

It could just be that you're confusing "interactive" with
"interpreted". The two are _not_ the same, as you would be
aware by now if you were paying attention. So, kindly go away
and start reading (and hopefully _learning_ from) some of
these books, and then come back when you can refrain from
trying to teach your grandmother to suck eggs.

The difference between ignorance and stupidity is that an
ignorant person can be educated. Well, we've done our best
to help you in this regard. The next bit is all down to you.

Good luck!

Mukesh Prasad

unread,
Nov 14, 1996, 8:00:00 AM11/14/96
to

Richard A. O'Keefe wrote:

> A C program may *at run time* construct new code,
> cause it to be compiled,
> and cause it to become part of the running program.

True, but extremely contrived and misleading. Yes, it is possible
to execute "cc" from a C program. No, it is not necessary
for C programs to bundle "cc" with the C executable.
The case you are talking about is no different from
a particular C program needing to execute "ls", and
therefore needing "ls" to be bundled with it. There
are no intrinsic language features requiring such bundling. With
Lisp, there are, which is the basic difference. (You
may choose not to use those particular language features,
but that is your own business.)

> With the notable exception of yourself, most people have
> been arguing that it is EXTREMELY SILLY to call *ANY* language
> "compiled" or "interpreted".

Actually, nobody has been arguing along the lines of "this thread
is meaningless, because there is no such thing as an
'interpreted language'." Now that, I would have
considered honest and unbiased opinion. This argument
is only pulled out, somehow, in _defense_ of the thread!

> A programming language that is widely accepted as a Lisp dialect
> (Scheme) not only _can_ be "batch" compiled like C, I routinely
> use just such a compiler and get the same or better performance
> out of it. (This is the Stalin compiler.)

This thread is not about Scheme, but Lisp.

Mukesh Prasad

unread,
Nov 14, 1996, 8:00:00 AM11/14/96
to

Richard A. O'Keefe wrote:
>
>[snip]
> ... A *compiler* in the executable will do fine.

> What's more, a *dynamically linked* compiler will also do fine,
> so no space need actually be taken up in the object file.

Correct, "hiding an interpreter" was an example strategy.
But you must hide *something* in the run-time environment,
and invoke it at run-time. As opposed to doing it
all at compile time.

Whether or not it takes up extra space, and whether
or not it is dynamically linked, is an operating
system dependent issue, and is not relevant from
a language point of view. (But if you are just
trying to get people to be interested in Lisp, it is
an actual issue of concern. But laying and defending a false
foundation in order to raise interest, does not give
one a good start.)

> >If you don't, is it Lisp?
> Well, it might be Scheme...

Sure. So why not say what you mean, instead
of talking about Lisp and switching to
a particular dialect in the middle?

Mukesh Prasad

unread,
Nov 14, 1996, 8:00:00 AM11/14/96
to

Cyber Surfer wrote:
[snip]

>Actually, it's only fools like youself calling it interpreted
>that cause problems. You're confusing implementation details
>with the language. Read a few books on compiler theory, esp
>books like Brown's. <hint, hint>
[snip]

Now what was the provocation for all these direct insults?
I am not singling you out -- but it seems a lot of
modern-day Lispers have been resorting to such tactics.
Well, at least it makes me glad I am not doing any more Lisp,
I might have been working with people like this!

Mukesh Prasad

unread,
Nov 14, 1996, 8:00:00 AM11/14/96
to

Jens Kilian wrote:

> _Any_ programming language can be implemented by an interpreter or a compiler.
> It just doesn't make sense to speak about "compiled languages" vs "interpreted
> languages". I take it that you have never heard about C interpreters?
>

That's a self-consistent set of definitions, but I didn't notice
you jumping in to say "this topic is meaningless because
there is no such thing as an interpreted language"?

If you are trying to use the term "interpreted language",
you must go by its past usage. Tactics like switching the meaning,
terms and subject in the middle of a discussion may be legitimate
in certain fields, but hopefully not in a technical
discussion on programming.

Jens Kilian

unread,
Nov 15, 1996, 8:00:00 AM11/15/96
to

Carl Donath (c...@cci.com) wrote:
> Jens Kilian wrote:
> > _Any_ programming language can be implemented by an interpreter or a compiler.
> > It just doesn't make sense to speak about "compiled languages" vs "interpreted
> > languages". I take it that you have never heard about C interpreters?

> This does not take into account languages (Lisp, APL) where the program


> may generate functions and execute them. A compiler could only do this
> if the compiled program included a compiler to compile and execute the
> generated-on-the-fly instructions, which is difficult and/or silly.

It is neither difficult nor silly. Several Prolog systems are doing this,
and the SELF language is also compiled on-the-fly.

> In a phrase, "self-modifying code".

As long as it's done in a structured fashion (i.e., generate a whole new
function/predicate/class/whatsit and use it), so what?

Greetings,

Patrick Juola

unread,
Nov 15, 1996, 8:00:00 AM11/15/96
to

In article <328B7F64...@cci.com> Carl Donath <c...@cci.com> writes:
>Jens Kilian wrote:
>> _Any_ programming language can be implemented by an interpreter or a compiler.
>> It just doesn't make sense to speak about "compiled languages" vs "interpreted
>> languages". I take it that you have never heard about C interpreters?
>
>This does not take into account languages (Lisp, APL) where the program
>may generate functions and execute them. A compiler could only do this
>if the compiled program included a compiler to compile and execute the
>generated-on-the-fly instructions, which is difficult and/or silly.

Difficult and/or silly perhaps, but also common.


Patrick

CHRISTOPHER ELIOT

unread,
Nov 15, 1996, 8:00:00 AM11/15/96
to

In article <328B7F64...@cci.com>, Carl Donath <c...@cci.com> wrote:
>Jens Kilian wrote:
>> _Any_ programming language can be implemented by an interpreter or a compiler.
>> It just doesn't make sense to speak about "compiled languages" vs "interpreted
>> languages". I take it that you have never heard about C interpreters?
>
>This does not take into account languages (Lisp, APL) where the program
>may generate functions and execute them. A compiler could only do this
>if the compiled program included a compiler to compile and execute the
>generated-on-the-fly instructions, which is difficult and/or silly.

Most modern Lisp environments include a Lisp compiler precisely so
that code can be generated on the fly and compiled. It is neither
difficult nor silly, it is actually a rather common practice.
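A minimal sketch of that practice, assuming only standard Common Lisp
(MAKE-ADDER is a made-up name): build a lambda expression as ordinary
list data at run time, hand it to COMPILE, and call the result.

  (defun make-adder (n)
    ;; The code is constructed as data, then compiled on the fly.
    (compile nil `(lambda (x) (+ x ,n))))

  ;; (funcall (make-adder 3) 4)  => 7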

>In a phrase, "self-modifying code".
>

>--
>----------------------------------------------------------------------
>-- c...@nt.com ----- ctdo...@rpa.net ----- ctdo...@mailbox.syr.edu --
>----------------------------------------------------------------------

Jim Balter

unread,
Nov 15, 1996, 8:00:00 AM11/15/96
to

CHRISTOPHER ELIOT wrote:
>
> In article <328B7F64...@cci.com>, Carl Donath <c...@cci.com> wrote:
> >Jens Kilian wrote:
> >> _Any_ programming language can be implemented by an interpreter or a compiler.
> >> It just doesn't make sense to speak about "compiled languages" vs "interpreted
> >> languages". I take it that you have never heard about C interpreters?
> >
> >This does not take into account languages (Lisp, APL) where the program
> >may generate functions and execute them. A compiler could only do this
> >if the compiled program included a compiler to compile and execute the
> >generated-on-the-fly instructions, which is difficult and/or silly.
>
> Most modern Lisp environments include a Lisp compiler precisely so
> that code can be generated on the fly and compiled. It is neither
> difficult nor silly, it is actually a rather common practice.

A compiler is just another component of a runtime system.
Code is just another form of data.
Eval is just another function.
Compilation is translation, interpretation is execution.

There really isn't much more worth saying, so please let this silly
thread die.

--
<J Q B>

Mukesh Prasad

unread,
Nov 15, 1996, 8:00:00 AM11/15/96
to

Erik Naggum wrote:
[snip]

> the expression "I am only an egg" refers to Robert A. Heinlein's legendary
> "Stranger on a Strange Land". so does the word "grok", which I assume is
> even more unfamiliar to you, both as a concept and as a word. it was first
> published in 1961. at that time I was _literally_ only an egg, but that is
> not what the expression refers to.

> #\Erik


We digress a bit, but "I am only an egg" was actually an
expression of politeness and humility - concepts you apparently
have yet to grok :-)

First you assume I don't know Lisp so you can
get away with erroneous statements, then you assume
I haven't read book xyz -- and you always
manage to pick the wrong topic. Are you
always this lucky in life too? Here is a free
hint -- embed references to Joyce's works (get a copy of
Ulysses) if you want to talk about commonly read
things that I haven't read.

/Mukesh

Mukesh Prasad

unread,
Nov 15, 1996, 8:00:00 AM11/15/96
to

Tim Olson wrote:
[snip]

> What runtime baggage does the language LISP *require*? One might say
> "garbage collection", but that can be considered a "helper function",
> just like heap allocation via malloc() is for C.

I see garbage collection as not much more than a runtime
library. But eval, intern etc require language-processing
at run-time. This is what I was referring to as "required
baggage". In other words, when the language-processor cannot
just do its work and go away, but may have to hide itself
in some guise or other in the generated executable.
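To make the distinction concrete, here is a toy sketch (TOY-INTERN and
*TOY-PACKAGE* are made-up names, not any real implementation) of the
kind of work INTERN has to do at run time -- mapping print names to
symbols in a table:

  (defvar *toy-package* (make-hash-table :test #'equal))

  (defun toy-intern (name)
    ;; Look the print name up in a table, creating the symbol if absent.
    (or (gethash name *toy-package*)
        (setf (gethash name *toy-package*) (make-symbol name))))

EVAL is the heavier piece of baggage; as noted elsewhere in the thread,
it can be backed by either an interpreter or a compiler hidden in the
run-time system.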

Mukesh Prasad

unread,
Nov 15, 1996, 8:00:00 AM11/15/96
to

Lou Steinberg wrote:
[snip]

> Can you cite any sources to back up these claims? I flatly do not believe
> them. Please tell us which "books on programming languages" you are
> referring to. Were they published within the last 20 years?
> Can you direct us to any statement in the literature by any
> programming language theorist that supports this claim?

I will have to find and look through old books,
(not being academically involved, I don't keep
them on hand) but here are some books which may have
contributed to forming my opinion:

Pratt
The Dragon Book
Mehdi and Jayajeri

In general, until your very confident challenge,
I was very sure from all my reading that
languages themselves had been categorized as interpreted
vs compiled in the past. (Hence the reason
for this thread's name -- Lisp people never
liked Lisp being called an "interpreted
language".) But I will look.

> This is complete nonsense. One of the interesting features of _all_
> interpreted implementations of Lisp, from the very first, was that
> they did not interpret a character string but rather the internal
> linked-list ("s-expression") representation. See, e.g., "Programming in
> the Interactive Environment" by Erik Sandewall, Computing Surveys,

You are, of course, correct about this. This had slipped my mind.
In any event, some amount of internal processing would
obviously be necessary -- I was primarily trying to
distinguish this from subsequent implementations using
compilations to byte-code.

Cyber Surfer

unread,
Nov 16, 1996, 8:00:00 AM11/16/96
to

In article <328B7F64...@cci.com> c...@cci.com "Carl Donath" writes:

> Jens Kilian wrote:
> > _Any_ programming language can be implemented by an interpreter or a compiler.
> > It just doesn't make sense to speak about "compiled languages" vs "interpreted
> > languages". I take it that you have never heard about C interpreters?
>
> This does not take into account languages (Lisp, APL) where the program
> may generate functions and execute them. A compiler could only do this
> if the compiled program included a compiler to compile and execute the
> generated-on-the-fly instructions, which is difficult and/or silly.

This isn't hard to do. It's just unpopular.



> In a phrase, "self-modifying code".

In a phrase, an app that includes some development tools. Err,
correct me if I'm wrong, but I do believe that there are many
apps that _do_ in fact include some form of development tool(s)!
In fact, there are APIs that support this. ActiveX allows a
programmer to very simply add a "script" language to their app,
without writing a compiler/interpreter/whatever. It's all done
inside the ActiveX classes. I don't see why this couldn't work
with native code, as the OS API supports that, too, by allowing
code to write code in memory, and then call it.

BTW, In the case of ActiveX, VBScript and JavaScript are already
available and being used. If anyone could produce Scheme or Common
Lisp classes for ActiveX scripting, then I'd be _very_ happy!

Warren Sarle

unread,
Nov 16, 1996, 8:00:00 AM11/16/96
to

Please stop cross-posting these interminable programming threads to
irrelevant newsgroups.

--

Warren S. Sarle SAS Institute Inc. The opinions expressed here
sas...@unx.sas.com SAS Campus Drive are mine and not necessarily
(919) 677-8000 Cary, NC 27513, USA those of SAS Institute.
*** Do not send me unsolicited commercial or political email! ***


David Longley

unread,
Nov 16, 1996, 8:00:00 AM11/16/96
to

In article <328B60...@dma.isg.mot.com>
mpr...@dma.isg.mot.com "Mukesh Prasad" writes:

> Jens Kilian wrote:
>
> > _Any_ programming language can be implemented by an interpreter or a compiler.
> > It just doesn't make sense to speak about "compiled languages" vs "interpreted
> > languages". I take it that you have never heard about C interpreters?
> >
>

> That's a self-consistent set of definitions, but I didn't notice
> you jumping in to say "this topic is meaningless because
> there is no such thing as an interpreted language"?
>
> If you are trying to use the term "interpreted language",
> you must go by its past usage. Tactics like switching the meaning,
> terms and subject in the middle of a discussion may be legitimate
> in certain fields, but hopefully not in a technical
> discussion on programming.
>

It's not legitimate *anywhere* except in sophistry, rhetoric and
poetry... equivocation is anathema to scientific discussion and
reliable communication in any language.
--
David Longley


Cyber Surfer

unread,
Nov 16, 1996, 8:00:00 AM11/16/96
to

In article <328B38...@dma.isg.mot.com>
mpr...@dma.isg.mot.com "Mukesh Prasad" writes:

> The case you are talking about is no different from
> a particular C program needing to execute "ls", and
> therefore needing "ls" to be bundled with it. There
> are no intrinsic language features requiring such bundling. With
> Lisp, there are, which is the basic difference. (You
> may choose not to use those particular language features,
> but that is your own business.)

No. Full Common Lisp includes EVAL and COMPILE, etc. You don't
necessarily get these functions when the code is delivered, but
that'll depend on the _implementation_. This is the same mistake
you were making earlier.

Also, not all Lisps are CL. The Scheme language doesn't include
EVAL, although some implementations may do so. It appears that
Gambit-C supports this, and this is a Scheme compiler that produces
C source code. However, you should not make the mistake of
generalising: this is only one compiler, after all.

Some Basics have an EVAL$ function that takes a string (you guessed
that, I bet) and feeds it into the interpreter. Not all Basics
support this, and some of them don't compile the code into
tokenised form and then interpret it. Visual Basic doesn't work
this way, and it would be very costly if it did, and would be
worse if it supported EVAL$. On the other hand, VB apps that link
to 3 MB of libraries are not unheard of.
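
For comparison, the Lisp analogue of such an EVAL$ is a one-liner
(the function name here is made up for illustration); note that, as
mentioned elsewhere in the thread, what EVAL actually receives is the
list structure produced by READ, not the character string itself:

    ;; Illustrative analogue of a Basic EVAL$: parse a string with READ,
    ;; then hand the resulting s-expression to EVAL.
    (defun eval-string (string)
      (eval (read-from-string string)))

    ;; (eval-string "(+ 1 2 3)") => 6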

Generalisations are dangerous. At best, they can make you look
a fool. I'll leave the worst case as an exercise for the reader.



> Actually, nobody has been arguing along the lines of "this thread
> is meaningless, because there is no such thing as an
> 'interpreted language'." Now that, I would have
> considered honest and unbiased opinion. This argument
> is only pulled out, somehow, in _defense_ of the thread!

See PJ Brown's book for my answer. ;-) Let me know when you've
read it...



> > A programming language that is widely accepted as a Lisp dialect
> > (Scheme) not only _can_ be "batch" compiled like C, I routinely
> > use just such a compiler and get the same or better performance
> > out of it. (This is the Stalin compiler.)
>
> This thread is not about Scheme, but Lisp.

Scheme is a dialect of Lisp. It's perfectly valid to refer to
it in this thread. See the Lisp FAQ.

Cyber Surfer

unread,
Nov 17, 1996, 8:00:00 AM11/17/96
to

In article <328B76...@dma.isg.mot.com>
mpr...@dma.isg.mot.com "Mukesh Prasad" writes:

_You're_ the one making a fool of yourself. Go away and read
some books about compiler theory - _interactive_ compilers, that
is. You seem unable to understand some very basic ideas.

Why not start with an excellent introduction, "Writing Interactive
Compilers and Interpreters". P.J. Brown, ISBN 0 471 27609 X, ISBN
0471 100722 pbk. John Wiley & Sons Ltd. Please come back after
reading this fine book. _Then_ we may have something to discuss.

Meanwhile, I humbly suggest that you're trying to teach your
grandmother to suck eggs...Read Brown's book and you'll begin
to understand why.

Martin Rodgers
Enrapture Limited

Cyber Surfer

unread,
Nov 17, 1996, 8:00:00 AM11/17/96
to

In article <328C68...@dma.isg.mot.com>
mpr...@dma.isg.mot.com "Mukesh Prasad" writes:

> First you assume I don't know Lisp so you can
> get away with erroneous statements, then you assume

This appears to be a very reasonable conclusion (not an assumption),
based on your views. Until you can demonstrate your superior knowledge
and wisdom on this subject, it should be safe to assume that your
views are based on ignorance - which we've been trying to help you
change. Of course, if you prefer to either remain ignorant, or
appear that way, then I don't know what we can do for you.

You might as well take your views to a place (newsgroup? whatever?)
where they'll be better appreciated/tolerated.

> I haven't read book xyz -- and you always
> manage to pick the wrong topic. Are you

Which book was that? Are you saying that you've read SIOCP? What,
if anything, did you learn from it? Please tell us, so that we
can avoid insulting your intelligence <ahem>, and so that you
can grant us the same courtesy.

> always this lucky in life too? Here is a free
> hint -- embed references to Joyce's works (get a copy of
> Ulysses) if you want to talk about commonly read
> things that I haven't read.

So you _have_ read SIOCP? Excellent. Did you understand the last
two chapters of the book? Have you also read PJ Brown's book, and
did you understand his definitions of "compiler" and "interpreter"?

Cyber Surfer

unread,
Nov 17, 1996, 8:00:00 AM11/17/96
to

In article <328C6C...@dma.isg.mot.com>
mpr...@dma.isg.mot.com "Mukesh Prasad" writes:

> The Dragon Book

There are a great many compiler techniques not covered by this
book. It has _nothing_ to say about compiling in an interactive
system, which could explain your confusion. "Interactive" does
_not_ mean "interpreted".

BTW, _which_ one do you mean? Compilers: Principles, Techniques,
and Tools is the later book.

> In general, until your very confident challenge,
> I was very sure from all my reading that
> languages themselves had been categorized as interpreted
> vs compiled in the past. (Hence the reason
> for this thread's name -- Lisp people never
> liked Lisp being called an "interpreted
> language".) But I will look.

I suspect that you've been misled by a very limited source of
information. I'm only an "amateur" compiler writer, in the sense
that none of my compilers can be described as "industrial strength",
and yet I may easily have vastly more info on compiler techniques
than you. On one shelf alone, I have at least 6 books on language
implementation (mainly compilers, but also "interpreters" of several
kinds). Most of the books on the shelf below it include compilers
and interpreters of some kind.

Someday I'll put a list of all these books into my homepage. If I
had such a page (or more likely, set of pages), I could just recommend
that you browse it...



> > This is complete nonsense. One of the interesting features of _all_
> > interpreted implementations of Lisp, from the very first, was that
> > they did not interpret a character string but rather the internal
> > linked-list ("s-expression") representation. See, e.g., "Programming in
> > the Interactive Environment" by Erik Sandewall, Computing Surveys,
>
> You are, of course, correct about this. This had slipped my mind.

How convenient. Well, I'll give you the benefit of my doubt.
However, such mistakes don't help your credibility when making
such claims as yours. Somebody ignorant of Lisp's history might
be more easily forgiven.

> In any event, some amount of internal processing would
> obviously be necessary -- I was primarily trying to
> distinguish this from subsequent implementations using
> compilations to byte-code.

What kind of processing do you mean? Parser macros?

Cyber Surfer

unread,
Nov 17, 1996, 8:00:00 AM11/17/96
to

In article <328C69...@dma.isg.mot.com>
mpr...@dma.isg.mot.com "Mukesh Prasad" writes:

> I see garbage collection as not much more than a runtime
> library. But eval, intern etc require language-processing
> at run-time. This is what I was referreing to as "required
> baggage". In other words, when the language-processor cannot
> just do its work and go away, but may have to hide itself
> in some guise or other in the generated executable.

<ahem> The flaw in this argument has been pointed out to you
repeatedly. EVAL is not a feature of every Lisp, just as it
isn't supported by every Basic dialect. Not all Common Lisp
compilers support EVAL at runtime. Of course, we should take
care and define exactly what we mean by runtime. Since your
definitions of certain words can wildly differ from others
here, perhaps "runtime" doesn't mean the same thing to you
as it does to me and others? How can we tell?

Erik Naggum

unread,
Nov 18, 1996, 8:00:00 AM11/18/96
to

* Mukesh Prasad
| First you assume I don't know Lisp ...

it's an undeniable, irrefutable _fact_ that you don't know Lisp or anything
you have been saying about interpreters in this thread. the proof is in
your own articles. if you did know Lisp, you could not have said anything
of what you have said. it would be a crime of logic to conclude anything
_other_ than that you do not know what you're talking about.

#\Erik
--
Please address private replies to "erik". Mail to "nobody" is discarded.

J. A. Durieux

unread,
Nov 18, 1996, 8:00:00 AM11/18/96
to

In article <328B38...@dma.isg.mot.com>,
Mukesh Prasad <mpr...@dma.isg.mot.com> wrote:

>This thread is not about Scheme, but Lisp.

This thread is not about cows, but mammals, right?

Ralph Silverman

unread,
Nov 18, 1996, 8:00:00 AM11/18/96
to

Richard A. O'Keefe (o...@goanna.cs.rmit.edu.au) wrote:
: Mukesh Prasad <mpr...@dma.isg.mot.com> writes:

: >"Compiled" vs "Interpreted" are merely words

: So? Are you claiming Humpty Dumpty's privilege?

: There are important issues concerning binding time.

: >But if one has to have this distinction, Lisp should
: >fall into the "interpreted" category, since the
: >"compiled" byte-code is interpreted by sofware, not
: >the hardware.

: This is complete and utter bull-dust.
: The RUCI Lisp system I used on a DEC-10 in the early 80s compiled
: Lisp to *native* DEC-10 instructions.
: The Franz Lisp system I used on a VAX-11/780 around the same time
: compiled Lisp to *native* VAX instructions.
: The Gambit Scheme system I use on a 680x0 Macintosh compiles
: Scheme (a dialect of Lisp) to *native* 680x0 instructions.
: The Lisp system I use on a SPARC compiles Common Lisp
: to *native* SPARC instructions.
: Even the PSL system I used on a B6700 back in the 70s compiled
: PSL (a dialect of Lisp) to *native* B6700 instructions.
: The T system I used to use on our Encore Multimaxes compiled
: T and Scheme (dialects of Lisp) to *native* NS32k instructions.

: There are or have been *native-code* Lisp compilers for all the
: major machines, from Univac 1108s, IBM 360s, all the way up to
: Connection Machines and beyond.

: >I don't know about the Lisp
: >machines though, do they (or did they) have hardware
: >instructions corresponding one-to-one with read/eval/print?

: Which Lisp machines do you mean? CONS? CADR? LMI? Symbolics?
: Xerox (1108, 1109, 1185, ...)? The Japanese ones? The European ones?

: The plain fact of the matters is that Lisp
: - *CAN* be interpreted by fairly simple interpreters
: - *CAN* be compiled to very efficient native code on any reasonable
: modern machine

: [If you can compile a language to byte codes, you can compile it to
: native code by treating the byte codes as "macros", and running an
: optimiser over the result. This has actually been used as a route
: for developing a native code compiler.]

: --
: Mixed Member Proportional---a *great* way to vote!
: Richard A. O'Keefe; http://www.cs.rmit.edu.au/%7Eok; RMIT Comp.Sci.

--
*********************begin r.s. response*******************

technical sophistication
of this old time programmer
is undeniable ...
and, certainly,
ideas of people like this
are of the greatest value here ...

however,
while this master has had his nose
in his terminal since the 1970s
he may have missed out on some of
the pernicious political trends
^^^^^^^^^^^^^^^^
in his poor field ...

compiled (or assembled) 'native code'
is like 'real money' ...
someone always wants to debase it ...

as though 'store coupons' were as good
as cash ...

a subtle slide of programming
from a 'native code' standard
(real programming) to an
interpreted standard is obviously
intended!!!

*********************end r.s. response*********************
Ralph Silverman
z007...@bcfreenet.seflin.lib.fl.us


Greg Heath

unread,
Nov 18, 1996, 8:00:00 AM11/18/96
to

PLEASE REMOVE COMP.AI.NEURAL-NETS FROM THIS THREAD.

THANK YOU VERY MUCH.


Gregory E. Heath he...@ll.mit.edu The views expressed here are
M.I.T. Lincoln Lab (617) 981-2815 not necessarily shared by
Lexington, MA (617) 981-0908(FAX) M.I.T./LL or its sponsors
02173-9185, USA


Mukesh Prasad

unread,
Nov 18, 1996, 8:00:00 AM11/18/96
to

Jeff Barnett wrote:

> Just a follow up to the above: in a Lisp I implemented for the old
> IBM 360/370 line, eval just called the compiler, ran the native code
> produced, marked the function for garbage collection, then returned the
> values. BTW, the reason for this was that I detested the way other
> Lisps had/enjoyed differences between compiler and eval semantics.
> Jeff Barnett

This is a good technique -- and as has been pointed out
earlier, on modern operating systems even
this is not necessary because dynamic linking
is available on most systems, so the language
processor can simply be made available as
a dynamic library at minimal overhead. No
reason even to "run" the compiler, when
you can just call it in your own address space.

(Though, of course, this does not mean that one is free
from the obligation of making the language
processor available except in particular dialects...)
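
In portable Common Lisp the technique Jeff Barnett describes can be
sketched in a couple of lines (this is only an illustration of the
idea, not his 360/370 implementation):

    ;; "Evaluate" a form by compiling a throwaway function around it
    ;; and calling it; the compiled code becomes garbage as soon as
    ;; the function object is unreachable.
    (defun eval-by-compiling (form)
      (funcall (compile nil `(lambda () ,form))))

    ;; (eval-by-compiling '(+ 1 2)) => 3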

Mukesh Prasad

unread,
Nov 18, 1996, 8:00:00 AM11/18/96
to

Jim Balter wrote:

> A compiler is just another component of a runtime system.
> Code is just another form of data.
> Eval is just another function.
> Compilation is translation, interpretation is execution.


This is nomenclature (as I was saying originally), but if at
run-time you need to lex and parse the language, to me that is
interpretation.

If to you it isn't, then there is no such thing as
an "interpreted language" by your definitions, because
all the techniques used in Lisp can be applied to any
given language, as I was saying originally.

What was so difficult to understand about either of these
two very simple points?

Mukesh Prasad

unread,
Nov 18, 1996, 8:00:00 AM11/18/96
to

Right. Well, you can talk about cows to support
a viewpoint, as long as you don't make statements
like "all mammals have four legs and two horns".

Michael Greenwald

unread,
Nov 19, 1996, 8:00:00 AM11/19/96
to

I know I'm going to regret joining in here.

Mukesh Prasad <mpr...@dma.isg.mot.com> writes:

>This is nomenclature (as I was saying originally,) but if at
>run-time you need to lex and parse the language, to me that is
>interpretation.

Nomenclature needs to be useful.

I think the point of contention here is your use of the word "need".
It is true that most dialects of Lisp have historically provided
implementation independent ways of lexing, parsing, and compiling the
language. These are *extensions*. They do not affect the semantics
of the language, nor do they categorize the *language* (as opposed to
a given implementation) as interpreted or compiled. (Of course,
however, the presence of these extensions has had an effect on what
programs were chosen to be written in Lisp --- but let's not get
confused by sociological artifacts.)

To see what I mean, consider the following assertion (ignoring the
fact that it needs some qualifications in order to be universally
true):

Any Lisp program which "needs" to use READ, EVAL, or COMPILE has
functionality such that a translation into another language (e.g. C)
would require external calls to a parser or a
compiler (e.g. exec'ing cc).

Now, if you can make some claims about a program that, when written in
Lisp, needs to use these extensions but, when written in C, doesn't
need to use these, *then* I'll grant you that "interpreted" is a
property of the language and not the program (or the language
implementation).

>If to you it isn't, then there is no such thing as
>an "interpreted language" by your definitions, because
>all the techniques used in Lisp can be applied to any
>given language, as I was saying originally.

>What was so difficult to understand about either of these
>two very simple points?

The fact that "Interpreted language" vs. "Compiled language" doesn't
strike me (and probably others as well) as a useful distinction. On
the other hand, knowing whether a given implementation of a language
is interpreted (for example, Sabre C) will likely give me some
expectations (e.g. performance, debugging environment) which might or
might not turn out to be true. Additionally, if you told me that a
program called out to the compiler to generate new code (or was
self-modifying, or required lexing or parsing of user input, or ...)
that *might* tell me something interesting about the program (although
to me it seems much less useful than the first case. We're always
"interpreting" user input; if it happens that the "interpreter" we use
for the input is the same as the language the program is written in,
well, that just seems like an interesting coincidence, but not
necessarily something important.)

I hope this clears up why your remarks seem difficult to understand to
some readers of this newsgroup.

(One final comment: I suppose one can argue that it is valid to
interpret "Interpreted language" as a statistical comment about the
majority of implementations of a given programming language. In the
case of Lisp, though, even then I believe that it would be incorrect
to call it an "interpreted language" since the large majority of Lisp
implementations are compiled.)

Casper H.S. Dik - Network Security Engineer

unread,
Nov 19, 1996, 8:00:00 AM11/19/96
to

t...@apple.com (Tim Olson) writes:

>What runtime baggage does the language LISP *require*? One might say
>"garbage collection", but that can be considered a "helper function",
>just like heap allocation via malloc() is for C.


A compiled lisp program typically requires a lisp interpreter; you can
always construct programs and execute them; if need be, compile them first.

So the lisp runtime system requires an interpreter/compiler. That is not
the case for languages in which it is not possible to create and/or
manipulate executable objects.

Casper
--
Expressed in this posting are my opinions. They are in no way related
to opinions held by my employer, Sun Microsystems.
Statements on Sun products included here are not gospel and may
be fiction rather than truth.

Erik Naggum

unread,
Nov 19, 1996, 8:00:00 AM11/19/96
to

* Casper H. S. Dik

| So the lisp runtime system requires a interpreter/compiler. That is not
| the case for languages in which it is not possible to create and/or
| manipulate executable objects.

first, you need to distinguish between the "runtime system" and
"libraries". the _runtime_ system does not require an interpreter or
compiler. the runtime system would include error handlers, the garbage
collector, the function that was called by and returns control to the
operating system and is responsible for setting up and tearing down all
sorts of things, etc.

second, if the program has a command language, it has an interpreter for
another language embedded, and implements a micro-EVAL all of its own.

third, if the program reads data files that have any non-trivial format, it
implements the equivalent of READ, including lots of special-purpose code
and lex/yacc tables.

just because the functions are _not_ called READ, EVAL or COMPILE, doesn't
mean they aren't there. a language doesn't have to have executable objects
to run code not its own. any data-driven or table-driven implementation
can be regarded as an interpreter. etc. just because you can see it by
name in one language, and not in another, doesn't mean they differ in these
important regards.

it's been said that any sufficiently large programming system contains a
Common Lisp struggling to get out.
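
To make the "micro-EVAL" point concrete, here is a minimal sketch of
the sort of table-driven command dispatcher many programs contain
without ever calling it an interpreter (the command names and
handlers are invented for illustration):

    ;; A tiny command "language": a table maps command words to
    ;; handler functions, and DISPATCH looks words up and runs them.
    (defparameter *commands*
      '(("help" . cmd-help)
        ("quit" . cmd-quit)))

    (defun cmd-help () (write-line "commands: help, quit"))
    (defun cmd-quit () (write-line "bye"))

    (defun dispatch (word)
      (let ((entry (assoc word *commands* :test #'string-equal)))
        (if entry
            (funcall (cdr entry))
            (format t "unknown command: ~A~%" word))))

    ;; (dispatch "help") prints "commands: help, quit"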

Jim Balter

unread,
Nov 19, 1996, 8:00:00 AM11/19/96
to

Mukesh Prasad wrote:
>
> Jim Balter wrote:
>
> > A compiler is just another component of a runtime system.
> > Code is just another form of data.
> > Eval is just another function.
> > Compilation is translation, interpretation is execution.
>
> This is nomenclature (as I was saying originally,) but if at
> run-time you need to lex and parse the language, to me that is
> interpretation.
>
> If to you it isn't, then there is no such thing as
> an "interpreted language" by your definitions, because
> all the techniques used in Lisp can be applied to any
> given language, as I was saying originally.
>
> What was so difficult to understand about either of these
> two very simple points?

The difficult thing to understand is why you are still on about this,
other than as some sort of ego gratification at the expense of these
newsgroups.

--
<J Q B>

David Longley

unread,
Nov 20, 1996, 8:00:00 AM11/20/96
to

Now that's *ironic*..

Not long ago, you were claiming that this "distinction" was
almost a test for one's competence in computing.....

What goes around, comes around eh?

Or is this just another example of the "fragmentation" I keep
drawing your (and others') attention to (which you seem
determined to conceive as 'hypocrisy').

http://www.uni-hamburg.de/~kriminol/TS/tskr.htm

--
David Longley


Jim Balter

unread,
Nov 20, 1996, 8:00:00 AM11/20/96
to

Oh, I "said that", did I? I'm afraid that your fragmented mental
defects make you incapable of comprehending what I say, so you should
limit yourself to direct quotation.

> What goes around, comes around eh?
>
> Or is this just another example of the "fragmentation" I keep
> drawing your (and others') attention to (which you seem
> determined to conceive as 'hypocrisy').

Longley, you are apparently too mentally defective to understand that,
since you apply your standards to others but not to yourself, you are
in *moral* error. Hypocrisy is a matter of *ethics*, which are
apparently beyond your sociopathic grasp. But if I point out that
I'm not talking science here, you will criticize me for being in a
muddle, which just further demonstrates your not-quite-sane autistic
mental defect. But that is just another example of your fragmentation,
I suppose (which can only prove your point through affirmation of the
consequent, which would only prove your point about humans having
trouble with logic through affirmation of the consequent, which ... oh,
never mind.)

--
<J Q B>

Warren Sarle

unread,
Nov 20, 1996, 8:00:00 AM11/20/96
to

You forgot to crosspost this stupid thread to sci.bio.bovine

Paul Schlyter

unread,
Nov 20, 1996, 8:00:00 AM11/20/96
to

In article <328C69...@dma.isg.mot.com>,
Mukesh Prasad <mpr...@dma.isg.mot.com> wrote:

> Tim Olson wrote:
> [snip]

>> What runtime baggage does the language LISP *require*? One might say
>> "garbage collection", but that can be considered a "helper function",
>> just like heap allocation via malloc() is for C.
>
> I see garbage collection as not much more than a runtime
> library. But eval, intern etc require language-processing
> at run-time. This is what I was referreing to as "required
> baggage". In other words, when the language-processor cannot
> just do its work and go away, but may have to hide itself
> in some guise or other in the generated executable.

Garbage collection is definitely more than a runtime library -- it
requires language support as well. Consider implementing garbage
collection in a language like C -- it would be next to impossible,
because of all the dangling pointers that may remain here, there
and everywhere in the program, which must be updated automatically
if garbage collection is to be useful. But such a feat would be
impossible from a runtime library only, and even very very hard if
the compiler generated support code for this.

Thus a language with garbage collection must be much more restrictive
on pointer usage than C. LISP fits this description pretty well, since
it doesn't have pointers that the programmer is allowed to manipulate
in the LISP program.

--
----------------------------------------------------------------
Paul Schlyter, Swedish Amateur Astronomer's Society (SAAF)
Grev Turegatan 40, S-114 38 Stockholm, SWEDEN
e-mail: pau...@saaf.se p...@home.ausys.se pa...@inorbit.com

David Longley

unread,
Nov 21, 1996, 8:00:00 AM11/21/96
to

In article <329207...@netcom.com> j...@netcom.com "Jim Balter" writes:
>
> The difficult thing to understand is why you are still on about this,
> other than as some sort of ego gratification at the expense of these
> newsgroups.
>
> --
> <J Q B>
>

David Longley wrote:
>
> In article <329207...@netcom.com> j...@netcom.com "Jim Balter" writes:
>
> > Mukesh Prasad wrote:
> > >
> > > Jim Balter wrote:
> > >
> > > > A compiler is just another component of a runtime system.
> > > > Code is just another form of data.
> > > > Eval is just another function.
> > > > Compilation is translation, interpretation is execution.
> > >
> > > This is nomenclature (as I was saying originally,) but if at
> > > run-time you need to lex and parse the language, to me that is
> > > interpretation.
> > >
> > > If to you it isn't, then there is no such thing as
> > > an "interpreted language" by your definitions, because
> > > all the techniques used in Lisp can be applied to any
> > > given language, as I was saying originally.
> > >
> > > What was so difficult to understand about either of these
> > > two very simple points?
> >
> > The difficult thing to understand is why you are still on about this,
> > other than as some sort of ego gratification at the expense of these
> > newsgroups.
> >
> > --
> > <J Q B>
> >

<DL>
>> Now that's *ironic*..
>
>> Not long ago, you were claiming that this "distinction" was
>> almost a test for one's competence in computing.....

<JB>
>Oh, I "said that", did I? I'm afraid that your fragmented mental
>defects make you incapable of comprehending what I say, so you should
>limit yourself to direct quotation.

<DL>
>> What goes around, comes around eh?
>>
>> Or is this just another example of the "fragmentation" I keep
>> drawing your (and others') attention to (which you seem
>> determined to conceive as 'hypocrisy').

<JB>
>Longley, you are apparently too mentally defective to understand that,
>since you apply your standards to others but not to yourself, you are
>in *moral* error. Hypocrisy is a matter of *ethics*, which are
>apparently beyond your sociopathic grasp. But if I point out that
>I'm not talking science here, you will criticize me for being in a
>muddle, which just further demonstrates your not-quite-sane autistic
>mental defect. But that is just another example of your fragmentation,
>I suppose (which can only prove your point through affirmation of the
>consequent, which would only prove your point about humans having
>trouble with logic through affirmation of the consequent, which ... oh,
>never mind.)

I think the above is a bit of a muddle, but, as I keep trying to
point out - that's folk psychology for you...

http://www.uni-hamburg.de/~kriminol/TS/tskr.htm

What I have tried to point out is that we do best when folk
psychological notions are eschewed altogether, except to describe
these *as* heuristics... (in itself quite a difficult concept for
many to grasp... but then, perhaps you have to be a psychologist to
begin to understand how psychologists empirically describe these
behaviours without giving them a normative status).

What I have had to say about "Fragmentation" is a consequence of
the failure of Leibniz's Law within intensional contexts, which
dominate folk psychological language. This in turn, I propose, is
characteristic of the way in which we are constrained to work outside
of the extensional stance.

'Suppose that each line of the truth table for the
conjunction of all [of a person's] beliefs could be
checked in the time a light ray takes to traverse the
diameter of a proton, an approximate "supercycle" time,
and suppose that the computer was permitted to run for
twenty billion years, the estimated time from the "big-
bang dawn of the universe to the present. A belief
system containing only 138 logically independent
propositions would overwhelm the time resources of this
supermachine.'

C. Cherniak (1986)
Minimal Rationality p.93
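
A rough back-of-the-envelope check of Cherniak's figure, with
approximate physical constants assumed purely for illustration,
bears the arithmetic out:

    ;; ~5.7e-24 s per truth-table line vs. ~6.3e17 s available:
    ;; about 1.1e41 lines could be checked, while 138 independent
    ;; propositions yield 2^138, roughly 3.5e41 lines.
    (let* ((proton-diameter 1.7d-15)                   ; metres (approx.)
           (light-speed     3.0d8)                     ; metres/second (approx.)
           (supercycle      (/ proton-diameter light-speed))
           (age-of-universe (* 20d9 365.25d0 24 3600)) ; 20 billion years in seconds
           (lines-checkable (/ age-of-universe supercycle)))
      (list lines-checkable (expt 2d0 138)))
    ;; => approximately (1.1d41 3.5d41)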

'Cherniak goes on to note that, while it is not easy to
estimate the number of atomic propositions in a typical
human belief system, the number must be vastly in excess
of 138. It follows that, whatever its practical benefits
might be, the proposed consistency-checking algorithm is
not something a human brain could even approach. Thus,
it would seem perverse, to put it mildly, to insist that
a person's cognitive system is doing a bad job of
reasoning because it fails to periodically execute the
algorithm and check on the consistency of the person's
beliefs.'

S. Stich (1990)
The Fragmentation of Reason p.152

'I should like to see a new conceptual apparatus of a
logically and behaviourally straightforward kind by
which to formulate, for scientific purposes, the sort of
psychological information that is conveyed nowadays by
idioms of propositional attitude.'

W V O Quine (1978)

In the extract from Cherniak, the point being made is that as the
number of discrete propositions increases, the number of possible
combinations increases dramatically, or, as Shafir and Tversky (1992) say:

'Uncertain situations may be thought of as disjunctions
of possible states: either one state will obtain, or
another....

Shortcomings in reasoning have typically been attributed
to quantitative limitations of human beings as
processors of information. "Hard problems" are typically
characterized by reference to the "amount of knowledge
required," the "memory load," or the "size of the search
space"....Such limitations, however, are not sufficient
to account for all that is difficult about thinking. In
contrast to many complicated tasks that people perform
with relative ease, the problems investigated in this
paper are computationally very simple, involving a
single disjunction of two well defined states. The
present studies highlight the discrepancy between
logical complexity on the one hand and psychological
difficulty on the other. In contrast to the "frame
problem" for example, which is trivial for people but
exceedingly difficult for AI, the task of thinking
through disjunctions is trivial for AI (which routinely
implements "tree search" and "path finding" algorithms)
but very difficult for people. The failure to reason
consequentially may constitute a fundamental difference
between natural and artificial intelligence.'

E. Shafir and A. Tversky (1992)
Thinking through Uncertainty: Nonconsequential Reasoning
and Choice
Cognitive Psychology 24,449-474

From a pattern recognition or classification stance, it is known that
as the number of predicates increases, the proportion of functions
that are linearly separable becomes vanishingly small, as is made clear
by the following extract from Wasserman (1989) discussing the concept
of linear separability:

'We have seen that there is no way to draw a straight
line subdividing the x-y plane so that the exclusive-or
function is represented. Unfortunately, this is not an
isolated example; there exists a large class of
functions that cannot be represented by a single-layer
network. These functions are said to be linearly
inseparable, and they set definite bounds on the
capabilities of single-layer networks.

Linear separability limits single-layer networks to classification
problems in which the sets of points (corresponding to input values)
can be separated geometrically. For our two-input case, the separator
is a straight line. For three inputs, the separation is performed by a
flat plane cutting through the resulting three-dimensional space. For
four or more inputs, visualisation breaks down and we must mentally
generalise to a space of n dimensions divided by a "hyperplane", a
geometrical object that subdivides a space of four or more
dimensions.... A neuron with n binary inputs can have 2 exp n
different input patterns, consisting of ones and zeros. Because each
input pattern can produce two different binary outputs, one and zero,
there are 2 exp 2 exp n different functions of n variables.

As shown [below], the probability of any randomly selected function
being linearly separable becomes vanishingly small with even a modest
number of variables. For this reason single-layer perceptrons are, in
practice, limited to simple problems.

   n     2 exp 2 exp n       Number of Linearly Separable Functions
   1     4                   4
   2     16                  14
   3     256                 104
   4     65,536              1,882
   5     4.3 x 10 exp 9      94,572
   6     1.8 x 10 exp 19     5,028,134

P. D. Wasserman (1989)
Linear Separability: Ch2. Neural Computing Theory and Practice
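
The n = 2 row of the table is easy to verify by brute force; the
following sketch (in Common Lisp, with a small weight/threshold grid
that is assumed to be sufficient for two binary inputs) counts the
two-input Boolean functions a single threshold unit can realise:

    ;; Count the 2-input Boolean functions realisable by one threshold
    ;; unit.  Weights in {-1, 0, 1} and thresholds in {-1.5, -0.5, 0.5,
    ;; 1.5} suffice for n = 2 (an assumption of this sketch).
    (defun threshold-output (w1 w2 theta x1 x2)
      (if (> (+ (* w1 x1) (* w2 x2)) theta) 1 0))

    (defun realisable-p (truth-table)  ; outputs for inputs 00 01 10 11
      (loop for w1 in '(-1 0 1) thereis
            (loop for w2 in '(-1 0 1) thereis
                  (loop for theta in '(-1.5 -0.5 0.5 1.5) thereis
                        (equal truth-table
                               (loop for (x1 x2) in '((0 0) (0 1) (1 0) (1 1))
                                     collect (threshold-output w1 w2 theta x1 x2)))))))

    (defun count-separable ()
      (loop for f from 0 below 16     ; all 2 exp 2 exp 2 = 16 functions
            count (realisable-p (loop for i from 0 below 4
                                      collect (ldb (byte 1 i) f)))))

    ;; (count-separable) => 14, matching the table; XOR and its
    ;; complement are the two functions left out.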

In later sections evidence is presented in the context of clinical vs.
actuarial judgment that human judgement is severely limited to
processing only a few variables. Beyond that, non-linear fits become
more frequent. This is discussed later in the context of connectionist
'intuitive', inductive inference and constraints on short-term or
working memory span (cf. Kyllonen & Christal 1990 - "Reasoning
Ability Is (Little More Than) Working-Memory Capacity?!"), but it is
worth mentioning here that in the epilogue to their expanded re-print
of their 1969 review of neural nets 'Perceptrons - An Introduction to
Computational Geometry', after reiterating their original criticism
that neural networks had only been shown to be capable of solving 'toy
problems', ie problems with a small number of dimensions, using 'hill
climbing' algorithms, Minsky and Papert (1988) effectively did a
'volte face' and said:

'But now we propose a somewhat shocking alternative:
Perhaps the scale of the toy problem is that on which,
in physiological actuality, much of the functioning of
intelligence operates. Accepting this thesis leads into
a way of thinking very different from that of the
connectionist movement. We have used the phrase "society
of mind" to refer to the idea that mind is made up of a
large number of components, or "agents," each of which
would operate on the scale of what, if taken in
isolation, would be little more than a toy problem.'

M Minsky and S Papert (1988) p266-7

and a little later, which is very germane to the fragmentation of
behaviour view being advanced in this volume:

'On the darker side, they [parallel distributed
networks] can limit large-scale growth because what any
distributed network learns is likely to be quite opaque
to other networks connected to it.'

ibid p.274

This *opacity* of aspects, or elements, of our own behaviour to
ourselves is central to the theme being developed in this volume,
namely that a science of behaviour must remain entirely extensional
and that there can not therefore be a science or technology of
psychology to the extent that this remains intensional (Quine
1960,1992). The discrepancy between experts' reports of the
information they use when making diagnoses (judgments) is reviewed in
more detail in a later section, however, research reviewed in Goldberg
1968, suggests that even where diagnosticians are convinced that they
use more than additive models (ie use interactions between variables -
which statistically may account for some of the non-linearities),
empirical evidence shows that in fact they only use a few linear
combinations of variables (cf. Nisbett and Wilson 1977, in this
context).

As an illustration of methodological solipsism (intensionalism) in
practice, consider the following, which neatly illustrates the subtle
difference between the methodological solipsist approach and that of
the methodological or 'evidential' behaviourist.

Several years ago, a prison psychologist sought the views of prison
officers and governors as to who they considered to be 'subversives'.
Those considered 'subversive' were flagged 1, those not considered
subversive were flagged 0. The psychologist then used multiple
regression to predict this classification from a number of other
behavioural variables. From this he was able to produce an equation
which predicted subversiveness as a function of 4 variables: whether
or not the inmate had a firearms offence history, the number of
reports up to arrival at the current prison, the number of moves up to
arrival where the inmate had stayed more than 28 days, and the number
of inmate assaults up to arrival.

Note that the dependent variable was binary, the inmate being
classified as 'subversive' or 'not subversive'. The prediction
equation, which differentially weighted the 4 variables, therefore
predicted the dependent variable as a value between 0 and 1. Now the
important thing to notice here is that the behavioural variables were
being used to predict something which is essentially a propositional
attitude, ie the degree of certainty of the officers' beliefs that
certain inmates were subversive.

The methodological solipsist may well hold that the officers' beliefs
are what is important; the methodological behaviourist, however, would
hold that what the officers thought was just *an approximation of what
the actual measures of inmate behaviour represented*, ie his thoughts
were just vague, descriptive terms for inmates who had lots of
reports, assaulted inmates and had been moved through lots of prisons,
and were probably in prison for violent offences. What the officers
thought was not perhaps, all that important, since we could just go to
the records and identify behaviours which are characteristic of
troublesome behaviour and then identify inmates as a function of those
measures (cf. Williams and Longley 1986).

In the one case the concern is likely to be with developing better and
better predictors of what staff THINK, and in the other, it becomes a
matter of simply recording better measures of classes of behaviour and
empirically establishing functional relations between those classes.
In the case of the former, intensional stance, one becomes interested
in the *psychology* of those exposed to such factors (ie those exposed
to the behaviour of inmates, and what they *vaguely or intuitively
describe it as*). From the extensional stance (methodological
behaviourist) defended in these volumes, such judgments can only be a
**function** of the data that staff have had access to. From the
extensional stance, one is simply interested in recording *behaviour*
itself and deducing implicit relations. Ryle (1949) and many
influential behaviourists since (Quine 1960) have, along with Hahn
(1933), suggested that this is our intellectual limit anyway:

'It is being maintained throughout this book that when
we characterize people by mental predicates, we are not
making untestable inferences to any ghostly processes
occurring in streams of consciousness which we are
debarred from visiting; we are describing the ways in
which those people conduct parts of their predominantly
public behaviour.'

G. Ryle
The Concept of Mind (1949)

Using regression technology as outlined above is essentially how
artificial neural network software is used to make classifications; in
fact, there is now substantial evidence to suggest that the two
technologies are basically one and the same (Stone 1986), except that
in neural network technology, the regression variable weights are
opaque to the judge, cf. Kosko (1992):

'These properties reduce to the single abstract property
of *adaptive model-free function estimation*: Intelligent
systems adaptively estimate continuous functions from
data without specifying mathematically how outputs
depend on inputs... A function f, denoted f: X -> Y, maps
an input domain X to an output range Y. For every
element x in the input domain X, the function f uniquely
assigns the element y to the output range Y. Functions
define causal hypotheses. Science and engineering paint
our pictures of the universe with functions.

B. Kosko (1992)
Neural Networks and Fuzzy Systems: A Dynamical Systems
Approach to Machine Intelligence p 19.

The rationale behind Sentence Management as outlined in the paper
'What are Regimes?' (Longley 1992) and in section D below, is that the
most effective way to bring about sustained behaviour change is not
through specific, formal training programmes, but through a careful
strategy of apposite allocation to activities which *naturally require
the behavioural skills* which an inmate may be deficient in. This
depends on standardised recording of activity and programme behaviour
*throughout sentence*, which will provide a *historical and actuarial
record of attainment*. This will provide differential information to
guide management's decisions as to how best to help inmates lead a
constructive life whilst in custody, and, hopefully, after release.
Initially, it will serve to support actuarial analysis of behaviour as
a practical working inmate and management information system. In
time, it should provide data to enable managers to focus resources
where they are most required (ie provide comprehensive regime
profiles, which highlight strong and weak elements). Such a system is
only interested in what inmates 'think' or 'believe' to the extent
that what they 'think' and 'believe' are specific skills which the
particular activities and programmes require, and which can therefore
be systematically assessed as criteria of formative behaviour
profiling. What is required for effective decision making and
behaviour management is a history of behavioural performance in
activities and programmes, much like the USA system of Grade Point
Averages and attendance. All such behaviours are the natural skills
required by the activities and programmes, and all such assessment is
criterion reference based.

The alternative, intensional approach, of asking staff to identify
risk factors from the documented account of the offence, and
subsequently asking staff to look out for them in the inmate's prison
behaviour may well only serve to shape inmates to inhibit
(conditionally suppress) such behaviour, especially if their
progression through the prison system is contingent on this. However,
from animal studies of acquisition-extinction-reacquisition, there is
no evidence that such behaviour inhibition is likely to produce a
*permanent* change in the inmate's behaviour in the absence of the
inmate *learning new behaviours*. Such an approach is also blind to
base rates of behaviours. Only through a system which encouraged the
acquisition of *new* behaviours can we expect there to be a change in
risk, and even this would have to be *actuarially* determined. For a
proper estimate of risk, one requires a system where inmates can be
assessed with respect to standard demands of the regime. The standard
way to determine risk factors was to derive these from *statistical
analysis*, not from *clinical (intensional) judgement*.

Much of the rationale for this stance can be deduced from the
following. Throughout the 20th century, psychologists' evaluation of
the extent to which reasoning can be formally taught has been
pessimistic. From Thorndike (1913) through Piaget (see Brainerd 1978)
to Newell (1980) it has been maintained that:

'the modern.....position is that learned problem-solving
skills are, in general, idiosyncratic to the task.'

A. Newell 1980.

Furthermore, it has been argued that whilst people may in fact use
abstract inferential rules, these rules can not be formally taught to
any significant degree. They are learned instead under natural
conditions of development and cannot be improved by formal
instruction. This is essentially Piaget's position.

The above is, in fact, how Nisbett et al (1987) opened their SCIENCE
paper '*Teaching Reasoning*'. Reviewing the history of the concept of
formal discipline, which looked to the use of Latin and the classics to
train the 'muscles of the mind', Nisbett et al. provided some
empirical evidence on the degree to which one class of inferential
rules can be taught. They describe these rules as 'a family of
pragmatic inferential rule systems that people induce in the context
of solving recurrent everyday problems'. These include "causal
schemas", "contractual schemas" and "statistical heuristics". The
latter are clearly instances of inductive rather than deductive
inference.

Nisbett et al. clearly pointed out that the same cannot be said for
the teaching of deductive inference (i.e. formal instruction in
deductive logic or other syntactic rule systems). With respect to the
teaching of logical reasoning, Nisbett et al. had the following to
say:

'Since highly abstract statistical rules can be taught
in such a way that they can be applied to a great range
of everyday life events, is the same true of the even
more abstract rules of deductive logic? We can report no
evidence indicating that this is true, and we can
provide some evidence indicating that it is not.....In
our view, when people reason in accordance with the
rules of formal logic, they normally do so by using
pragmatic reasoning schemas that happen to map onto the
solutions provided by logic.'

ibid. p.628

Such 'causal schemas' are known as 'intensional heuristics' (Agnoli
and Krantz 1989) and have been widely studied in psychology since the
early 1970s, primarily by research psychologists such as Tversky and
Kahneman (1974), Nisbett and Ross (1980), Kahneman, Slovic and Tversky
(1982), Holland et al. (1986) and Ross and Nisbett (1991).

A longitudinal study by Lehman and Nisbett (1990) looked at
differential improvements in the use of such heuristics in college
students classified by different subject groups. They found
improvements in the use of statistical heuristics in social science
students, but no improvement in conditional logic (such as the Wason
selection task). Conversely, the natural science and humanities
students produced significant improvements in conditional logic. Interestingly,
there were no changes in students studying chemistry. Whilst the
authors took the findings to provide some support for their thesis
that reasoning can be taught, it must be appreciated that the findings
at the same time lend considerable support to the view that each
subject area inculcates its own particular type of reasoning, even in
highly educated individuals. That is, the data lend support to the
thesis that training in particular skills must look to training for
transfer and application within particular skill areas. This is
elaborated below in the context of the system of Sentence Management.

Today, formal modelling of such intensional processes is researched
using a technology known as 'Neural Computing' which uses inferential
statistical technologies closely related to regression analysis.
However, such technologies are inherently inductive. They take samples
and generalise to populations. They are at best pattern recognition
systems.

Such technologies must be contrasted with formal deductive logical
systems which are algorithmic rather than heuristic (extensional
rather than intensional). The algorithmic, or computational, approach
is central to classic Artificial Intelligence and is represented today
by the technology of relational databases along with rule and
Knowledge Information Based Systems (KIBS), which are based on the First
Order Predicate Calculus, the Robinson Resolution Principle (Robinson
1965, 1979) and the long term objectives of automated reasoning (e.g.
Wos et al. 1992 and the Japanese Fifth Generation computing project) -
see Volumes 2 and 3.

The degree to which intensional heuristics can be suppressed by
training is now controversial (Kahneman and Tversky 1983; Nisbett and
Ross 1980; Holland et al. 1986; Nisbett et al 1987; Agnoli and Krantz
1989; Gladstone 1989; Fong and Nisbett 1991; Ploger and Wilson 1991;
Smith et al 1992). In fact, the degree to which they are or are not
may be orthogonal to the main theme of this paper, since the main
thrust of the argument is that behaviour science should look to
deductive inferential technology, not inductive inference. Central to
the controversy, however, is the degree to which the suppression is
sustained, and the degree of generalisation and practical application
of even 'statistical heuristics'. For example, Ploger and Wilson
(1991) said in commentary on the 1991 Fong and Nisbett paper:

'G. T. Fong and R. E. Nisbett argued that, within the
domain of statistics, people possess abstract rules;
that the use of these rules can be improved by training;
and that these training effects are largely independent
of the training domain. Although their results indicate
that there is a statistically significant improvement in
performance due to training, they also indicate that,
even after training, most college students do not apply
that training to example problems.

D. Ploger & M. Wilson
Statistical reasoning: What is the role of inferential rule training?
Comment on Fong and Nisbett.
Journal of Experimental Psychology General; 1991 Jun Vol
120(2) 213-214

Furthermore, Gladstone (1989) criticises the stance adopted by the
same group in an article in American Psychologist (1988):

'[This paper]' criticizes the assertion by D. R. Lehman
et al. that their experiments support the doctrine of
formal discipline. The present author contends that the
work of Lehman et al. provides evidence that one must
teach for transfer, not that transfer occurs
automatically. The problems of creating a curriculum and
teaching it must be addressed if teachers are to help
students apply a rule across fields. Support is given to
E. L. Thorndike's (1906, 1913) assessment of the general
method of teaching for transfer.'

R. Gladstone (1989)
Teaching for transfer versus formal discipline.
American Psychologist; 1989 Aug Vol 44(8) 1159

What this research suggests is that whilst improvements can be made by
training in formal principles (such as teaching the 'Law of Large
Numbers'), this does not in fact contradict the stance of Piaget and
others that most of these inductive skills are in fact learned under
natural lived experience ('Erlebnis' and 'Lebenswelt', Husserl 1952, or
'Being-in-the-world' Heidegger 1928). Furthermore, there is evidence
from short term longitudinal studies of training in such skills that
not only is there a decline in such skills after even a short time,
but there is little evidence of application of the heuristics to novel
problem situations outside the training domain. This is the standard
and conventional criticism of 'formal education'. Throughout this
work, the basic message seems to be to focus training on specific
skills acquisition which will not so much generalise to novel
contexts, but find application in other, similar if not identical
contexts.

Most recently, Nisbett and colleagues have looked further at the
criteria for assessing the efficacy of cognitive skills training:

'A number of theoretical positions in psychology
(including variants of case-based reasoning, instance-
based analogy, and connectionist models) maintain that
abstract rules are not involved in human reasoning, or
at best play a minor role. Other views hold that the use
of abstract rules is a core aspect of human reasoning.
The authors propose 8 criteria for determining whether
or not people use abstract rules in reasoning. They
examine evidence relevant to each criterion for several
rule systems. There is substantial evidence that several
inferential rules, including modus ponens, contractual
rules, causal rules, and the law of large numbers, are
used in solving everyday problems. Hybrid mechanisms
that combine aspects of instance and rule models are
considered.'

E. E. Smith, C. Langston and R. E. Nisbett:
The case for rules in reasoning.
Cognitive Science; 1992 Jan-Mar Vol 16(1) 1-40

We use rules, it can be argued, when we apply extensionalist
strategies which are of course, by design, domain specific. Note that
in the history of logic it took until 1879 to discover Quantification
Theory. Furthermore, research on deductive reasoning itself suggests
strongly that the view developed in this volume is sound:

'Reviews 3 types of computer program designed to make
deductive inferences: resolution theorem-provers and
goal-directed inferential programs, implemented
primarily as exercises in artificial intelligence; and
natural deduction systems, which have also been used as
psychological models. It is argued that none of these
methods resembles the way in which human beings usually
reason. They [humans] appear instead to depend, not on
formal rules of inference, but on using the meaning of
the premises to construct a mental model of the relevant
situation and on searching for alternative models of the
premises that falsify putative conclusions.'

P. N. Johnson-Laird
Human and computer reasoning.
Trends in Neurosciences; 1985 Feb Vol 8(2) 54-57

'Contends that the orthodox view in psychology is that
people use formal rules of inference like those of a
natural deduction system. It is argued that logical
competence depends on mental models rather than formal
rules. Models are constructed using linguistic and
general knowledge; a conclusion is formulated based on
the model that maintains semantic information, expresses
it parsimoniously, and makes explicit something not
directly stated by the premise. The validity of the
conclusion is tested by searching for alternative models
that might refute the conclusion. The article summarizes
a theory developed in a 1991 book by P. N. Johnson-Laird
and R. M. Byrne.'

P. N. Johnson-Laird & R. M. Byrne
Precis of Deduction.
Behavioral and Brain Sciences; 1993 Jun Vol 16(2) 323-
380

That is, human reasoning tends to focus on content or intension. As
has been argued elsewhere, such heuristic strategies invariably suffer
as a consequence of their context specificity and constraints on
working memory capacity.

--
David Longley


David Longley

unread,
Nov 21, 1996, 8:00:00 AM11/21/96
to

Balter Abuse (2)

<JB>
>Oh, I "said that", did I? I'm afraid that your fragmented mental
>defects make you incapable of comprehending what I say, so you should
>limit yourself to direct quotation.

<DL>
>> What goes around, comes around eh?
>
>> Or is this just another example of the "fragmentation" I keep
>> drawing your (and others') attention to (which you seem
>> determined to conceive as 'hypocrisy').
>>
><JB>
>Longley, you are apparently too mentally defective to understand that,
>since you apply your standards to others but not to yourself, you are
>in *moral* error. Hypocrisy is a matter of *ethics*, which are
>apparently beyond your sociopathic grasp. But if I point out that
>I'm not talking science here, you will criticize me for being in a
>muddle, which just further demonstrates your not-quite-sane autistic
>mental defect. But that is just another example of your fragmentation,
>I suppose (which can only prove your point through affirmation of the
>consequent, which would only prove your point about humans having
>trouble with logic through affirmation of the consequent, which ... oh,
>never mind.)
>

>--
><J Q B>

As to my being sociopathic or whatever - is this really likely
given the context of my work? Is it not more likely that you
still have not grasped what these issues are really all about?

Whilst the following is written with prison inmates in mind, the
same principles and system are being advocated for education &
training programmes more generally. Whilst some of the material
within this extract bears on this thread, it is posted in the
hope that it will elicit wider evaluation and discussion -
hopefully, what emerges from these fragments is an empirical
conclusion, not a particular ideology.

BEHAVIOUR MODIFICATION: SENTENCE MANAGEMENT & PLANS

'No predictions made about a single case in clinical work are
ever certain, but are always probable. The notion of
probability is inherently a frequency notion, hence
statements about the probability of a given event are
statements about frequencies, although they may not seem to
be so. Frequencies refer to the occurrence of events in a
class; therefore all predictions; even those that from their
appearance seem to be predictions about individual concrete
events or persons, have actually an implicit reference to a
class....it is only if we have a reference class to which the
event in question can be ordered that the possibility of
determining or estimating a relative frequency exists.....
the clinician, if he is doing anything that is empirically
meaningful, is doing a second-rate job of actuarial
prediction. There is fundamentally no logical difference
between the clinical or case-study method and the actuarial
method. The only difference is on two quantitative continua,
namely that the actuarial method is #more explicit# and #more
precise#.'

P. E. Meehl (1954)
Clinical versus Statistical Prediction
A Theoretical Analysis and a Review of the Evidence

This section outlines the second phase of PROBE's technology, that of
PROgramming BEhaviour. Monitoring behaviour is one essential function
of PROBE, and the major developments to date have been outlined in
Section 2. Effective *control* of behaviour, on the other hand, requires
staff and inmates to make use of that information in the interests of
programming or shaping behaviour in a pro-social (non-delinquent)
direction. This is what the Sentence Management and Planning system,
covered in this section is designed to provide. Further technical
details can be found in *Volumes 1 & 2* of this system specification.

If there is to be any change in an inmate's behaviour after release,
there will need to be a change in behaviour from the time he was
convicted, either through acquisition of new behaviours or simple
maturation (as in the age-report rate function). In ascertaining the
characteristic behaviour of classes, it is not that we make
predictions of future behaviour, but that we describe behaviour
characteristic of classes. This is clearly seen in discriminant
analysis and regression in general. We analyse the relationship
between one class and others, and, providing that an individual can be
allocated to one class or another, we can say, as a consequence of his
class membership, what other characteristics are likely to be the case
as a function of that class membership. Temporality, i.e. pre-diction
has nothing to do with it.
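
As a minimal sketch of what such class-based (actuarial) description
amounts to in practice, the following C fragment allocates an
individual to a reference class and simply reports the relative
frequency recorded for that class; the classes and rates here are
invented purely for illustration and are no part of PROBE itself.

/* Hypothetical sketch of actuarial prediction by reference class.
 * The class labels and rates are invented for illustration only. */
#include <stdio.h>

struct ref_class {
    const char *label;   /* description of the reference class            */
    double      rate;    /* observed relative frequency for that class    */
};

/* Invented actuarial table: frequency of some outcome per class. */
static const struct ref_class table[] = {
    { "class A", 0.62 },
    { "class B", 0.35 },
    { "class C", 0.11 },
};

/* The "prediction" is just a statement about the class to which the
 * individual has been allocated. */
static double class_rate(int class_index)
{
    return table[class_index].rate;
}

int main(void)
{
    int i = 1;   /* individual allocated to "class B" */
    printf("Outcome frequency for %s: %.2f\n", table[i].label, class_rate(i));
    return 0;
}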

Any system which provides a record of skill acquisition during
sentence must therefore be an asset in the long term management of
inmates towards this objective. However, research in education and
training, perhaps the most practical areas of application of Learning
Theory, clearly endorses the conclusions drawn in *Volume 1* on the
context specificity of the intensional. Some of the most influential
models of cognitive processing in the early to mid 1970s took context
as critical for encoding and recall of memory (Tulving and Thompson
1972). Generalisation Theory, ie that area of research which looks at
transfer-of-training, has almost unequivocally concluded that learning
is context specific. Empirical research supports the logical
conclusion that skill acquisition does not readily transfer from one
task to another. This is another illustration of the failure of
substitutivity in psychological contexts. In fact, many of the
attractive notions of intensionalism, so characteristic of
cognitivism, may reveal themselves on closer analysis to be vacuous:

'Generalizability theory (Cronbach, Gleser, Nanda & Rajaratnam
1972; see also, Brennan, 1983; Shavelson, Webb, & Rowley,
1989) provides a natural framework for investigating the
degree to which performance assessment results can be
generalised. At a minimum, information is needed on the
magnitude of variability due to raters and to the sampling of
tasks. Experience with performance assessments in other
contexts such as the military (e.g. Shavelson, Mayberry, Li &
Webb, 1990) or medical licensure testing (e.g. Swanson,
Norcini, & Grosso, 1987) suggests that there is likely
substantial variability due to task. Similarly,
generalizability studies of direct writing assessments that
manipulate tasks also indicate that the variance component
for the sampling of tasks tends to be greater than for the
sampling of raters (Breland, Camp, Jones, Morris, & Rock,
1987; Hieronymous & Hoover 1986).

Shavelson, Baxter & Pine (1990) recently investigated the
generalizability of performance across different hands-on
performance tasks such as experiments to determine the
absorbency of paper towels and experiments to discover the
reactions of sowbugs to light and dark and to wet and dry
conditions. Consistent with the results of other contexts,
Shavelson et al. found that performance was highly task
dependent. The limited generalizability from task to task is
consistent with research in learning and cognition that
emphasizes the situation and context-specific nature of
thinking (Greeno, 1989).'

R. L. Linn, E. L. Baker & S. B. Dunbar (1991)
Complex, Performance-Based Assessment:
Expectations and Validation Criteria
Educational Researcher, vol 20, 8, pp15-21

Intensionalists, holding that what happens inside the head matters, ie
that intension determines extension, appeal to our common, folk
psychological intuitions to support arguments for the merits of
abstract cognitive skills. However, such strategies are not justified
on the basis of educational research (see also *Volume 1*):

'Critics of standardized tests are quick to argue that such
instruments place too much emphasis on factual knowledge and
on the application of procedures to solve well-structured
decontextualized problems (see e.g. Frederiksen 1984). Pleas
for higher order thinking skills are plentiful. One of the
promises of performance-based assessments is that they will
place greater emphasis on problem solving, comprehension,
critical thinking, reasoning, and metacognitive processes.
These are worthwhile goals, but they will require that
criteria for judging all forms of assessment include
attention to the processes that students are required to
exercise.

It should not simply be assumed, for example, that a hands-on
scientific task encourages the development of problem solving
skills, reasoning ability, or more sophisticated mental
models of the scientific phenomenon. Nor should it be assumed
that apparently more complex, open-ended mathematics problems
will require the use of more complex cognitive processes by
students. The report of the National Academy of Education's
Committee that reviewed the Alexander-James (1987) study
group report on the Nation's Report Card (National Academy of
Education, 1987) provided the following important caution in
that regard:

It is all too easy to think of higher-order skills
as involving only difficult subject matter as, for
example, learning calculus. Yet one can memorize
the formulas for derivatives just as easily as
those for computing areas of various geometric
shapes, while remaining equally confused about the
overall goals of both activities.
(p.54)

The construction of an open-ended proof of a
theorem in geometry can be a cognitively complex
task or simply the display of a memorized sequence
of responses to a particular problem, depending on
the novelty of the task and the prior experience of
the learner. Judgments regarding the cognitive
complexity of an assessment need to start with an
analysis of the task; they also need to take into
account student familiarity with the problems and
the ways in which students attempt to solve them.'

ibid p. 19

As covered at length in Section 1.3, skills do not seem to generalise
well. Dretske (1980) put the issue as follows:

'If I know that the train is moving and you know that its
wheels are turning, it does not follow that I know what you
know just because the train never moves without its wheels
turning. More generally, if all (and only) Fs are G, one can
nonetheless know that something is F without knowing that it
is G. Extensionally equivalent expressions, when applied to
the same object, do not (necessarily) express the same
cognitive content. Furthermore, if Tom is my uncle, one can
not infer (with a possible exception to be mentioned later)
that if S knows that Tom is getting married, he thereby knows
that my uncle is getting married. The content of a cognitive
state, and hence the cognitive state itself, depends (for its
identity) on something beyond the extension or reference of
the terms we use to express the content. I shall say,
therefore, that a description of a cognitive state, is non-
extensional.'

F. I. Dretske (1980)
The Intentionality of Cognitive States
Midwest Studies in Philosophy 5,281-294

As noted above, this is corroborated by transfer of training research:

'Common descriptions of skills are not, it is concluded, an
adequate basis for predicting transfer. Results support J.
Fotheringhame's finding that core skills do not automatically
transfer from one context to another.'

C. Myers
Core skills and transfer in the youth training schemes:
A field study of trainee motor mechanics.
Journal of Organizational Behavior; 1992 Nov Vol 13(6) 625-632

'G. T. Fong and R. E. Nisbett (1993) claimed that human
problem solvers use abstract principles to accomplish
transfer to novel problems, based on findings that Ss were
able to apply the law of large numbers to problems from a
different domain from that in which they had been trained.
However, the abstract-rules position cannot account for
results from other studies of analogical transfer that
indicate that the content or domain of a problem is important
both for retrieving previously learned analogs (e.g., K. J.
Holyoak and K. Koh, 1987; M. Keane, 1985, 1987; B. H. Ross,
1989) and for mapping base analogs onto target problems
(Ross, 1989). It also cannot account for Fong and Nisbett's
own findings that different-domain but not same-domain
transfer was impaired after a 2-wk delay. It is proposed that
the content of problems is more important in problem solving
than supposed by Fong and Nisbett.'

L. M. Reeves & R. W. Weisberg
Abstract versus concrete information as the basis for
transfer in problem solving: Comment on Fong and Nisbett (1991).
Journal of Experimental Psychology General; 1993 Mar Vol
122(1) 125-128

'Content', recall, is a cognate of 'intension' or 'meaning'. A major
argument for the system of Sentence Management is that if we wish to
expand the range of an individual's skills (behaviours), we can do no
better than to adopt *effective* (ie algorithmic) practices to guide
placements of inmates into activities based on actuarial models of
useful relations which exist between skills, both positive and
negative. We are unlikely to identify these other than through
empirical analyses. These should identify where such skills will be
naturally acquired and practised. As discussed at length in Section 1
and in Volume 1, there is now overwhelming evidence that behaviour is
context specific. Given that conclusion, which is supported by social
role expectations (see any review of Attribution Theory), we are well
advised to focus all attempts at behaviour engineering via inmate
programmes and activities with this fully understood. Furthermore,
within the PROBE project at least, we have no alternative but to
eschew psychological, ie intensional (cognitive), processes because,
as we have seen (Section 1.3), valid inference is logically
impossible in principle within such contexts.

The work on Sentence Planning and Management constitutes the second
phase of PROBE's development, between 1990 and 1994. The work on
Sentence Planning is a direct development of the original CRC
recommendations, and comprises records 33 and 34 of the PROBE system.
Sentence Management, comprising records 30, 31 and 32, is designed as an
essential substrate, or support structure, for Sentence Planning.

3.1 SENTENCE MANAGEMENT

'I wish I had said that', said Oscar Wilde in applauding one
of Whistler's witticisms, Whistler, who took a dim view of
Wilde's originality, retorted, 'You will, Oscar; you will.'
This tale reminds us that an expression like 'Whistler said
that' may on occasion serve as a grammatically complete
sentence. Here we have, I suggest, the key to a correct
analysis of indirect discourse, an analysis that opens a lead
to an analysis of psychological sentences generally
(sentences about propositional attitudes, so-called), and
even, though this looks beyond anything to be discussed in
the present paper, a clue to what distinguishes psychological
concepts from others.'

D. Davidson (1969)
On Saying That p.93

'Finding right words of my own to communicate another's
saying is a problem of translation. The words I use in the
particular case may be viewed as products of my total theory
(however vague and subject to correction) of what the
originating speaker means by anything he says: such a theory
is indistinguishable from a characterization of a truth
predicate, with his language as object language and mine as
metalanguage. The crucial point is that there will be equally
acceptable alternative theories which differ in assigning
clearly non-synonymous sentences of mine as translations of
his same utterance. This is Quine's thesis of the
indeterminacy of translation.'

ibid. p.100

'Much of what is called for is to mechanize as far as
possible what we now do by art when we put ordinary English
into one or another canonical notation. The point is not that
canonical notation is better than the rough original idiom,
but rather that if we know what idiom the canonical notation
is for, we have as good a theory for the idiom as for its
kept companion.'

D. Davidson (1967)
Truth and Meaning


Delinquency, *simply construed*, is a failure to co-operate with some
social requirements. As an alternative to a purely custodial
model, the following outlines a positive incentive approach to
structuring time in custody. It is designed to map on to all elements
of inmate programmes, providing a systematic way of collating and
managing progress as reported by experts, which is analysed
objectively to produce reports based on actual behaviour rather than
casual judgement. It is, by design, a system which will allow
management of behaviour to be based on individual merit and
performance.


Regime & Sentence Management as a POSITIVE Behaviour Management System
An R & D Proposal

Introduction

Research between 1989 and 1991 led to the conclusion that Sentence
Planning will require a fundamental, systematic, and nationally
implemented information base, and that this can most efficiently be
derived from the management of inmate activities throughout the
estate. According to this view, Sentence Planning needs to be
supported by a system of 'Sentence Management' which focuses on the
structure and functions of available and potential inmate activities.
In this way, Sentence Planning would be integrated with the Regime
Monitoring System, effectively developing within the framework of
'accountable regimes'. This implies that the most effective way to
launch Sentence Planning is not as an additional task grafted onto the
regime, but as a natural development and improvement of inmate review
and reporting practices.

The system specified below is efficient and cost-effective with the
potential infrastructure to support and integrate several initiatives
which have begun since the re-organisation. Although not covered in
this note, two of the most significant are Prisoners Pay, and The
Place of Work in the Regime.

In broad outline, what is proposed has much in common with the
Department of Education and Science's 1984 initiative Records of
Achievement and has the benefit of using this nationally implemented
programme in behaviour assessment as a source of best practice. Whilst
the initiative outlined below is an independent development which took
its cue from recommendations published in the 1984 HMSO CRC Report,
from which the PROBE (PROfiling Behaviour) project developed, results
of R&D work over the past 6 years are reassuringly compatible with the
work done throughout the English education system during the same
period. In this context, what is outlined below focuses on what the
Department of Education and Science referred to as Formative Profiling
(continuous assessment and interactive profiling involving the inmate
throughout his career) rather than Summative Profiling (which provides
a review somewhat akin to the parole review, or more locally, Long
Term Reviews). In all that follows, the recommendations of the 1984
HMSO CRC Report are seen to be integrally related.

Broad Outline

The system, for national implementation, across all sentence groups
can be specified as a 5 step cycle:

1. Inmates are observed under natural conditions of activities.
2. Observed behaviour is rated and recorded (continuous assessment).
3. Profiles of behaviour become the focus for interview dialogues/contracts.
4. Inmates are set targets based on the behaviour ratings/observations.
5. Elements of problem behaviour are addressed by apposite allocation.

Some immediate comments follow.

With little intrusion into the running of Inmate Activities, behaviour
which is central to these activities can be monitored and recorded
more directly to identify levels of inmate competence across the range
of activities. The records of competence would guide the setting and
auditing of individual targets.

Targets will be identified within the Activity Areas supported by the
regime. This requires continuous assessment of inmates within
activities, and the setting of targets based on a set of possible
attainments drawn from those activities. Such attainment profiles
would serve to identify and audit targets and would enable allocation
staff to judge the general standard of attainment within and across
activities, thereby enhancing both target-setting and auditing.

The frequency of behaviour assessment within activities and routines,
and the auditing of the whole process must be driven by what is
practicable. The system requires assessment of attainment to be
undertaken monthly, in order to ensure standardisation in collection
of Regime Monitoring data. Targets set are to be based on observations
of behaviour which are already fundamental to the running of
activities and routines, and the progress in achieving targets will be
discussed with the inmate, guiding allocation to activities within and
between prisons. These steps are in accordance with the policy
guidelines. Whilst the targets set will be individual, and when
collated will comprise a set of short and long term objectives
defining the 'Sentence Plan', they will fall into some broad areas
(social behaviour, health, performance at work, and so on).

By making more systematic use of the information which is already
being used to select, deselect and manage inmates within activities
and with respect to routines, Sentence Planning will become a natural
co-ordinating feature of the prison's regime.

Specific programmes for problem behaviour (e.g. sex offenders) can be
seen as particular inmate activities with their own, more intensive
assessment, activity and target setting procedures explicitly designed
to address problem behaviour. Development of, and allocation to such
programmes will be integrated with other activities. These programmes
are seen as both drawing on and informing 'Risk Assessment'.

Specific Details

Fundamental to the system outlined above is the fact that classes of
behaviour (as opposed to properties of inmates) are taken as the basic
data. These classes of behaviour are demanded by activities and
routines, and should serve as basic data for Regime Monitoring.
Observations of inmate behaviour are observations of an inmate's level
of attainment with respect to characteristics that staff responsible
for the activities have specified in advance as essential to the task.

Activities and routines have a structure quite independent of the
particular inmates who are subject to the demands of activities and
routines. Perhaps the defining feature of Sentence Management is that
it comprises a process of objective continuous assessment, where what
are assessed are levels of attainment with respect to pre-set aims and
objectives, themselves defining activities and routines. Since the
focus is on classes of behaviour rather than attributes of inmates,
all of the assessments are of progress with respect to pre-determined
classes of behaviour which are requirements of activities and
routines.

Attainment Areas

Each activity area can be specified in terms of classes of behaviour
which the activity requires. These classes of behaviour are basic
skill areas which are fundamental to the nature of the activity, which
in combination account for activities being distinguishable from each
other. These basic skill areas will be referred to as Attainment
Areas. They need to be carefully selected as they will be taken to be
the defining features of the activity. From this point of view, any
part of the daily routines should be specifiable in these terms, and
staff should be encouraged to think about how best their area of
inmate supervision could be so sub-classified. Whilst the
identification of Attainment Areas may, at first glance, seem a
demanding or unfamiliar task, it is soon appreciated that the
identification of Attainment Areas is in fact a pre-requisite to the
establishment of any activity in prison, be it an education course,
industrial activity or simple housework.

Attainment Criteria

Each Attainment Area can be further classified into up to five levels
of attainment. These are levels of the same skill, progressing from a
low level of competence to a high level of competence. These must be
described in a series of direct statements, specifying particular
skills of graded sophistication which can be observed, and checked as
having been observed. Levels of competence are therefore NOT to be
specified as a scale from LOW to HIGH, but rather as a series of
specific, and observable behaviours. These are the Attainment Criteria
of the activity or routine. Just as Attainment Areas are naturally
identified by staff who design activities, so too are Attainment
Criteria natural pre-requisites for day to day supervision.

Competence Checklists (SM-1s)

For each set of Attainment Areas the Attainment Criteria comprises a
COMPETENCE CHECKLIST, against which performance can be monitored.
Competence Checklists are referred to within the system as SM-1s.

Record of Targets (SM-2s)

Targets are identified using a second form, referred to as SM-2.
Targets will generally be identified from the profile of Attainment
Criteria within Activities, (Competence Checklists being completed on
a monthly basis provide a record of progress). But Targets may also be
identified outside of standard activities, based on an analysis of
what is available within the Regime Digest, or Directory, which will be
a natural product of the process of defining Attainment Areas and
Attainment Criteria, and of printing the Competence Checklists.

The two forms, ATTAINMENTS (SM-1) and RECORD OF TARGETS (SM-2)
comprise the building blocks of the system. These forms are now
available as final drafts (and will incidentally be machine readable).
Both forms are designed to be stored in the third element of the
system, the inmate's Sentence Management Dossier. This is simply a
'pocket file' to hold the sets of the two forms, and the proposal is
that the Head of Inmate Activities and his staff be responsible for
maintaining the system.

Through an analysis of the SM-1s both within and across activity
areas, Heads of Inmate Activities would have a better picture of the
structure of the activities, and of the relative progress of inmates
within activities. With inmates actively involved in the process of
target negotiation, and with the system being objective, problems of
confidentiality so characteristic of subjective reports, would become
substantially reduced. Whilst the system can run as a paper system,
once computerised, the data collected via SM-1s and SM-2s will form
the basis of automated reports.

Relationship to the Regime Monitoring System

The proposed procedure for recording Sentence Management is intimately
related to Regime Monitoring, as it is largely based on the same
Reporting Points within Activity areas making up the RMS. This will be
even more apparent when Regime Monitoring embraces more activities
than it does at present. It also promises to provide a more
qualitative measure of regime delivery, in that the record of
attainments will be an objective record of achievement.

The design of the SM-1 form enables the capture of the basic data required
for maintenance of the Regime Monitoring System (RMS). The form
provides an efficient means of collecting such data since each SM-1
records an inmate's daily attendance in the activity via a 1-28 day
register covering each morning and afternoon session attended.

Since the form is designed to record attendance and attainment data
each month, it implicitly allows the number of hours to be calculated
for each inmate, each reporting point, and at a higher level of
aggregation to produce data on the number of inmates for each activity
area, sub-establishment and so on.
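
The aggregation implied here is simple arithmetic. As a hedged sketch
in C (the record layout and the 2.5-hour session length are
assumptions for illustration, not part of the SM-1 specification),
monthly session marks roll up into hours per inmate at a Reporting
Point, and could be summed again per activity area or establishment:

/* Illustrative only: the field names and session length are assumed. */
#include <stdio.h>

#define DAYS 28
#define SESSION_HOURS 2.5   /* assumed length of one am/pm session */

struct sm1 {
    const char *inmate;
    const char *reporting_point;
    int am[DAYS];            /* 1 = attended the morning session on day d   */
    int pm[DAYS];            /* 1 = attended the afternoon session on day d */
};

/* Roll the monthly attendance register up into hours. */
static double hours_for_form(const struct sm1 *f)
{
    int sessions = 0;
    for (int d = 0; d < DAYS; d++)
        sessions += f->am[d] + f->pm[d];
    return sessions * SESSION_HOURS;
}

int main(void)
{
    struct sm1 form = { "inmate 001", "workshop 3", {0}, {0} };
    form.am[0] = form.pm[0] = form.am[1] = 1;   /* three sessions attended */
    printf("%s at %s: %.1f hours this month\n",
           form.inmate, form.reporting_point, hours_for_form(&form));
    return 0;
}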

In terms of paperwork, this is not a demanding task, and in
capitalising on what is already done at Reporting Points (where daily
logs are maintained already) it promises to be an efficient and
accurate way of collecting the required data.

For a Reporting Point with 15 inmates, the system would require 15 SM-
1s to be completed and returned to the Head of Inmate Activities each
month. As mentioned above, the design of the forms renders them
potentially able to be processed by an Optical Mark Reader, allowing
the data to be converted to computer storable data, thereby making the
whole system easier to manage and audit.

Fundamental to the design of the SM-1 is the fact that the Attainment
Criteria are generated by staff who will be using them, each SM-1
being tied specifically to an activity. The content of the form is
'user definable'.

More than one SM-1 form will be completed per inmate per month since
the inmate will be assessed at more than one Reporting Point. To
record behaviour in daytime activities and domestically on the wings,
one SM-1 would be completed each month as a record of attainment at
the allocated work/education Reporting Point, and another on the
wings, the latter providing an assessment of the inmate's level of co-
operation/contribution to the general running of the routines, though
not necessarily contributing to the overall Regime Monitoring figures.

Although inextricably linked to the Regime Monitoring System (RMS),
the focus is at a more fundamental level of the regime - the recording
of attainment levels of individual inmates - with the RMS data being
logically compiled or deduced from those individual assessments. With
Attainment Areas and Attainment Criteria defined by the staff
supervising the Reporting Point, in consultation with the HIA, the
SM-1s and SM-2s would allow staff to define the nature and objectives
of the Reporting Points, storing them within the proposed Sentence
Management System to serve as the basic statements for any subsequent
computer profiling of the inmate's progress, as well as serving as the
basic material for
a local and national directory or digest of activities and their
curricula.

Costs and Benefits

The cost of an Optical Mark Reader (OMR: the machine to read the
contents of the forms directly into a computer) to automate the
storage of the attainments data would be in the order of 8,000 per
prison, and probably substantially less if the systems were bought in
bulk. A small system to hold and analyse the data (including
appropriate software) would be in the order of 6000. We suggest that
the management of the Sentence Management System would most naturally
rest with the HIA, who would liaise with other relevant
functional managers.

This relatively simple monitoring system would provide both Sentence
Management and Regime Monitoring information in one system.
Furthermore, the system could be Wide Area Networked (WAN), with each
AREA of 8-14 prisons being polled automatically by the Area Managers'
systems at HQ, these in turn being polled by a central system. The
system would be able directly to provide regime providers with
information bearing on their areas of concern.

This communications improvement is something which has already been
proposed to improve the efficiency of Regime Monitoring.

With data being collated once a month via SM-1s, weekly data would
only be available retrospectively once a month. Nevertheless, this may
well be a small price to pay for a substantial reduction in data
handling and the provision of a far more useful system. Such a system
would make Regime Monitoring a naturally emergent indicator of
Sentence Management, and could be implemented using much of the
already installed infrastructure for running and auditing inmate
activities.

A significant benefit is in the potential for automatic machine-
generated reports of inmate progress. These could save many thousands
of officer-hours. The practicality of such reports is already being
demonstrated in HMP Parkhurst.

Coverage of Non-Standard Inmate Activities

The SM-1 form is designed to allow all staff to formally assess any
programme of activity in a standard manner (ie, marking whether
behaviour in the activity matches the attainment criteria on the
Competence Checklist). This form has provision to record a Checklist
Code, along with the activity and reporting point identifier. This
Checklist Code will allow more than one checklist to be generated for
each Reporting Point if the extent or modular nature of the activity
requires multiple checklists for comprehensive assessment of the
skills which the activity offers.

Similarly, the SM-2 form allows targets to be identified by staff both
within an activity, or from a knowledge of what the regime has on
offer. The Head of Inmate Activities, in building a library of
Attainment Areas and Attainment Criteria, (the Regime Digest, or
Directory) will be able to provide interested staff, such as Review
boards, with a digest of what activities are available and how they
are broken down by attainment areas and criteria.

In this way, short duration intervention programmes can be included in
the 'Sentence Management Dossier' in the same way as are the more
formal activities. Formal activities (as currently defined within the
Regime Monitoring System) are so regarded because they tend to occupy
large groups of inmates in activities which are basically structured
to have inmates participate for a relatively fixed period (8 weeks to
several years).

Using this form of assessment, the staff wishing to run ad hoc
programmes, occupying either small groups or single inmates in short
modules would be tasked with defining Attainment Areas and Attainment
Criteria as a sine qua non for running the proposed programme,
submitting the proposal to the HIA to be considered as an element of
the regime.

The fact that each SM-1 has an attendance register will permit the
system to capture the extent of all activity throughout the regime,
thereby contributing to a more comprehensive profile of activity
within each establishment and the estate in general. The Head of
Inmate Activities' task would more clearly become one of co-ordinating
Attainment Areas to bring about a balanced and appropriately monitored
regime, and the data would serve as a sound information base from
which staff could build Sentence Plans.

The system is designed to support full recording of inmate behaviour
and, based on co-operation with the demands of the routines and
activities, allow staff to negotiate and contract behaviour targets
based on the inmate's level of behaviour and on known empirical relations which
hold between classes of behaviour.

See: "Fragments of Behaviour: The Extensional Stance" for a more
comprehensive account: http://www.uni-hamburg.de/~kriminol/TS/tskr.htm

--
David Longley


Rob Warnock

unread,
Nov 22, 1996, 8:00:00 AM11/22/96
to

Paul Schlyter <pau...@electra.saaf.se> wrote:
+---------------
| Garbage collection is definitely more than a runtime library -- it
| requires language support as well. Consider implementing garbage
| collection in a language like C -- it would be next to impossible...
+---------------

Gee, then I guess I better stop linking the Boehm/Demers "Conservative
Garbage Collector for C and C++" with my C programs, hadn't I? I had
*thought* it was just a runtime library, but I guess I just didn't realize
it was "impossible". ;-} ;-}


-Rob

References:
http://reality.sgi.com/employees/boehm_mti/gc.html
ftp://parcftp.xerox.com/pub/gc/gc.html

-----
Rob Warnock, 7L-551 rp...@sgi.com
Silicon Graphics, Inc. http://reality.sgi.com/rpw3/
2011 N. Shoreline Blvd. Phone: 415-933-1673 FAX: 415-933-0979
Mountain View, CA 94043 PP-ASEL-IA

!@?*$%

unread,
Nov 22, 1996, 8:00:00 AM11/22/96
to

> Gee, then I guess I better stop linking the Boehm/Demers "Conservative
> Garbage Collector for C and C++" with my C programs, hadn't I? I had
> *thought* it was just a runtime library, but I guess I just didn't realize
> it was "impossible". ;-} ;-}

It is impossible, in general.

You're just not writing general enough programs. Few people are that sadistic.

--
Where walks their Brother wan and lone |For the time being, email
who marched from halls of marbled stone?| to me might be lost or
The Brothers brood their bristling mood;| delayed. Email to the
their anger grows till air will moan. |sender will definitely go

David Longley

unread,
Nov 23, 1996, 8:00:00 AM11/23/96
to

There is a logical possibility that in restricting the subject matter
of psychology, and thereby the deployment of psychologists, to what
can only be analysed and managed from a Methodological Solipsistic
(cognitive) perspective, one will render some very significant results
of research in psychology irrelevant to applied *behaviour* science
and technology, unless taken as a vindication of the stance that
behaviour is essentially context specific. As explicated above,
intensions are not, in principle, amenable to quantitative analysis.
They are, in all likelihood, only domain or context specific. A few
further examples should make these points clearer.

Many Cognitive Psychologists study 'Deductive Inference' from the
perspective of 'psychologism', a doctrine which, loosely put,
equates the principles of logic with those of thinking. Yet the work
of Church (1936), Post (1936) and Turing (1937) clearly established
that the principles of 'effective' computation are not psychological,
and can in fact be mechanically implemented. However, researchers in
'Cognitive Science' such as Johnson-Laird and Byrne (1992) have
reviewed 'mental models' which provide an account of some of the
difficulties and some of the errors observed in human deductive
reasoning (Wason 1966). Throughout the 1970s, substantial empirical
evidence began to accumulate to refute the functionalist (Putnam 1967)
thesis that human cognitive processes were formal and computational.
Even well-educated subjects, it seems, have considerable difficulty
with relatively simple deductive Wason Selection tasks such as the
following:


_____ _____ _____ _____
| | | | | | | |
| A | | T | | 4 | | 7 |
|_____| |_____| |_____| |_____|

Where the task is to test the rule "if a card has a vowel on one side
it has an even number on the other".

Or in the following:

_____ _____ _____ _____
| | | | | | | |
| A | | 7 | | D | | 3 |
|_____| |_____| |_____| |_____|

where subjects are asked to test the rule 'each card that has an A on
one side will have a 3 on the other'. In both problems they can only
turn over a maximum of two cards to ascertain the truth of the rule.

Similarly, the majority have difficulty with the following,
similar problem, where the task is to reveal up to two hidden
halves of the cards to ascertain the truth or falsehood of the rule
'whenever there is a O on the left there is a O on the right':


_____________ _____________ ____________ ____________
| ||||||| | ||||||| ||||||| | ||||||| |
| O ||||||| | ||||||| ||||||| O | ||||||| |
|______||||||| |______||||||| |||||||______| |||||||______|
(a) (b) (c) (d)

Yet computer technology has no difficulty with these examples of the
application of basic deductive inference rules (modus ponens and modus
tollens). The above require the application of the material
conditional: the first rule is tested by turning cards A and 7, the
second by turning cards A and 7, and the third by turning cards (a)
and (d). Logicians, and others trained in the formal rules of
deductive logic, often fail to
solve such problems:

'Time after time our subjects fall into error. Even some
professional logicians have been known to err in an
embarrassing fashion, and only the rare individual takes
us by surprise and gets it right. It is impossible to
predict who he will be. This is all very puzzling....'

P. C. Wason and P. N. Johnson-Laird (1972)
Psychology of Reasoning
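
By way of contrast, here is a short sketch in C of the mechanical test
a program applies to the first card problem: under the material
conditional, a card forces a turn only if its hidden face could make
the rule false (true antecedent, false consequent), so only the A and
the 7 need turning.

/* Which cards must be turned to test "if a card has a vowel on one
 * side, it has an even number on the other"? */
#include <stdio.h>
#include <ctype.h>

static int is_vowel(char c) { return c=='A'||c=='E'||c=='I'||c=='O'||c=='U'; }
static int is_even(char c)  { return isdigit((unsigned char)c) && (c - '0') % 2 == 0; }

static int must_turn(char visible)
{
    if (isdigit((unsigned char)visible))
        return !is_even(visible);   /* an odd number might hide a vowel     */
    return is_vowel(visible);       /* a vowel might hide an odd number     */
}

int main(void)
{
    const char faces[] = { 'A', 'T', '4', '7' };
    for (int i = 0; i < 4; i++)
        printf("card %c: %s\n", faces[i],
               must_turn(faces[i]) ? "turn it" : "leave it");
    return 0;                       /* prints: turn A and 7 only */
}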

Furthermore, there is impressive empirical evidence that formal
training in logic does not generalise to such problems (Nisbett et al
1987). Yet why is this so if, in fact, human reasoning is, as the
cognitivists have claimed, essentially logical and computational?
Wason (1966) also presented subjects with series of increasing
numbers, asking them to identify the rule. In most cases, the simple
fact that the examples shared no more than simple progression was
missed, and whatever hypotheses subjects had formed were held onto even
though the actual rule was subsequently made clear. This persistence
of belief, and rationalisation of errors despite debriefing and
exposure to contrary evidence, is well documented in psychology, and
is a phenomenon which methodologically is, as Popper makes clear in
the leading quote to this paper, at odds with the formal advancement
of knowledge. Here is what Sir Karl Popper (1965) had to say about
this matter:

'My study of the CONTENT of a theory (or of any
statement whatsoever) was based on the simple and
obvious idea that the informative content of the
CONJUNCTION, ab, of any two statements, a, and b, will
always be greater than, or at least equal to, that of
its components.

Let a be the statement 'It will rain on Friday'; b the
statement 'It will be fine on Saturday'; and ab the
statement 'It will rain on Friday and it will be fine on
Saturday': it is then obvious that the informative
content of this last statement, the conjunction ab, will
exceed that of its component a and also that of its
component b. And it will also be obvious that the
probability of ab (or, what is the same, the probability
that ab will be true) will be smaller than that of
either of its components.

Writing Ct(a) for 'the content of the statement a', and
Ct(ab) for 'the content of the conjunction a and b', we
have

(1) Ct(a) <= Ct(ab) >= Ct(b)

This contrasts with the corresponding law of the
calculus of probability,

(2) p(a) >= p(ab) <= p(b)

where the inequality signs of (1) are inverted. Together
these two laws, (1) and (2), state that with increasing
content, probability decreases, and VICE VERSA; or in
other words, that content increases with increasing
IMprobability. (This analysis is of course in full
agreement with the general idea of the logical CONTENT
of a statement as the class of ALL THOSE STATEMENTS
WHICH ARE LOGICALLY ENTAILED by it. We may also say that
a statement a is logically stronger than a statement b
if its content is greater than that of b - that is to
say, if it entails more than b.)

This trivial fact has the following inescapable
consequences: if growth of knowledge means that we
operate with theories of increasing content, it must
also mean that we operate with theories of decreasing
probability (in the sense of the calculus of
probability). Thus if our aim is the advancement or
growth of knowledge, then a high probability (in the
sense of the calculus of probability) cannot possibly be
our aim as well: THESE TWO AIMS ARE INCOMPATIBLE.

I found this trivial though fundamental result about
thirty years ago, and I have been preaching it ever
since. Yet the prejudice that a high probability must be
something highly desirable is so deeply ingrained that
my trivial result is still held by many to be
'paradoxical'.

K. Popper
Truth, Rationality, and the Growth of Knowledge
Ch. 10, p 217-8
CONJECTURES AND REFUTATIONS (1965)

Modus tollens, and the extensional principle that a compound event can
be no more probable than any of its component events taken
independently, are fundamental to the logic of scientific discovery.
Yet these, along with other principles of extensionality (deductive
logic), seem to be principles which are in considerable conflict with
intuition, as Kahneman and Tversky (1983) demonstrated with their
illustration of the 'Linda Problem'. In conclusion, the above authors
wrote, twenty years after Wason's experiments on deductive reasoning
and Popper's (1965) remarks in 'Conjectures and Refutations':

'In contrast to formal theories of belief, intuitive
judgments of probability are generally not extensional.
People do not normally analyse daily events into
exhaustive lists of possibilities or evaluate compound
probabilities by aggregating elementary ones. Instead,
they use a limited number of heuristics, such as
representativeness and availability (Kahneman et al.
1982). Our conception of judgmental heuristics is based
on NATURAL ASSESSMENTS that are routinely carried out as
part of the perception of events and the comprehension
of messages. Such natural assessments include
computations of similarity and representativeness,
attributions of causality, and evaluations of the
availability of associations and exemplars. These
assessments, we propose, are performed even in the
absence of a specific task set, although their results
are used to meet task demands as they arise. For
example, the mere mention of "horror movies" activates
instances of horror movies and evokes an assessment of
their availability. Similarly, the statement that Woody
Allen's aunt had hoped that he would be a dentist
elicits a comparison of the character to the stereotype
and an assessment of representativeness. It is
presumably the mismatch between Woody Allen's
personality and our stereotype of a dentist that makes
the thought mildly amusing. Although these assessments
are not tied to the estimation of frequency or
probability, they are likely to play a dominant role
when such judgments are required. The availability of
horror movies may be used to answer the question "What
proportion of the movies produced last year were horror
movies?", and representativeness may control the
judgement that a particular boy is more likely to be an
actor than a dentist.

The term JUDGMENTAL HEURISTIC refers to a strategy -
whether deliberate or not - that relies on a natural
assessment to produce an estimation or a prediction.

Previous discussions of errors of judgment have
focused on deliberate strategies and on
misinterpretations of tasks. The present treatment calls
special attention to the processes of anchoring and
assimilation, which are often neither deliberate nor
conscious. An example from perception may be
instructive: If two objects in a picture of a three-
dimensional scene have the same picture size, the one
that appears more distant is not only seen as "really"
larger but also larger in the picture. The natural
computation of real size evidently influences the (less
natural) judgement of picture size, although observers
are unlikely to confuse the two values or to use the
former to estimate the latter.

The natural assessments of representativeness and
availability do not conform to the extensional logic of
probability theory.'

A. Tversky and D. Kahneman
Extensional Versus Intuitive Reasoning:
The Conjunction Fallacy in Probability Judgment.
Psychological Review Vol 90(4) 1983 p.294
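
The extensional point at issue, both in Popper's laws above and in
the 'Linda Problem', is simply that a conjunction can never be more
probable than either of its conjuncts. A toy numerical check in C
(the figures are invented, and independence is assumed only to give
p(ab) something concrete to be):

/* Toy illustration: p(ab) can never exceed p(a) or p(b). */
#include <stdio.h>

int main(void)
{
    double pa  = 0.7;          /* p(a): "rain on Friday" (invented)        */
    double pb  = 0.4;          /* p(b): "fine on Saturday" (invented)      */
    double pab = pa * pb;      /* p(ab) under assumed independence = 0.28  */

    printf("p(a)=%.2f  p(b)=%.2f  p(ab)=%.2f\n", pa, pb, pab);
    printf("p(ab) <= p(a): %s,  p(ab) <= p(b): %s\n",
           pab <= pa ? "yes" : "no", pab <= pb ? "yes" : "no");
    /* Content runs the other way: the conjunction says more, and is
     * therefore less probable - Popper's laws (1) and (2) above. */
    return 0;
}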

The study of Natural Deduction (Gentzen 1935; Prawitz 1971; Tennant
1990) as a psychological process (1983) is really just the study of
the performance of a skill (like riding a bicycle in fact), which
attempts to account for why some of the difficulties with deduction
per se occur. The best models here may turn out to be connectionist,
where each individual's model ends up being almost unique in its fine
detail. There is a problem for performance theories, as Johnson-Laird
and Byrne (1991) point out:

'A major difficulty for performance theories based on
formal logic is that people are affected by the content
of a deductive system... yet formal rules ought to apply
regardless of content. That is what they are: rules that
apply to the logical form of assertions, once it has
been abstracted from their content.'

P. N. Johnson-Laird and R. M. J. Byrne (1991)
Deduction p.31

The theme of this volume up to this point has been that methodological
solipsism is unlikely to reveal much more than the shortcomings and
diversity of social and personal judgment and the context specificity
of behaviour. It took until 1879 for Frege to discover the Predicate
Calculus (Quantification Theory), and a further half century before
Church (1936), Turing (1937) and others laid the foundations for
computer and cognitive science through their collective work on
recursive function theory. From empirical evidence, and developments
in technology, it looks like human and other animal reasoning is
primarily inductive and heuristic, not deductive and algorithmic.
Human beings have considerable difficulties with the latter, and this
is normal. It has taken considerable intellectual effort to discover
formal, abstract, extensional principles, often only with the support
of logic, mathematics and computer technology itself. The empirical
evidence, reviewed in this volume is that extensional principles are
not widely applied except in specific professional capacities which
are domain-specific. In fact, on the simple grounds that the discovery
of such principles required considerable effort should perhaps make us
more ready to accept that they are unlikely to be spontaneously
applied in everyday reasoning and problem solving.

For further coverage of the 'counter-intuitive' nature of deductive
reasoning (and therefore its low frequency in everyday practice) see
Sutherland's 1992 popular survey 'Irrationality', or Plous (1993) for
a recent review of the psychology of judgment and decision making. For
a thorough survey of the rise (and possibly the fall) of Cognitive
Science, see Putnam 1986, or Gardner 1987. The latter concluded his
survey of the Cognitive Revolution within psychology with a short
statement which he referred to as the 'computational paradox'. One
thing that Cognitive Science has shown us is that the computer or
Turing Machine is not a good model of how people reason, at least not
in the Von-Neumann Serial processing sense. Similarly, people do not
seem to think in accordance with the axioms of formal, extensional
logic. Instead, they learn rough and ready heuristics which they which
they try to apply to problems in a very rough, approximate way.
Accordingly, Cognitive Science may well turn to the work of Church,
Turing and other mathematical logicians who, in the wake of Frege,
have worked to elaborate what effective processing is. We will then be
faced with the strange situation of human psychology being of little
practical interest, except as a historical curiosity, an example of
pre-Fregean logic and pre-Church (1936) computation. Behaviour science
will pay as little attention to the 'thoughts and feelings' of 'folk
psychology' as contemporary physics does to quaint notions of 'folk
physics'. For some time, experimental psychologists working within the
information processing (computational) tradition have been working to
replace concepts such as 'general reasoning capacity' with more
mechanistic notions such as 'Working Memory' (Baddeley 1986):

'This series of studies was concerned with determining
the relationship between general reasoning ability (R)
and general working-memory capacity (WM). In four
studies, with over 2000 subjects, using a variety of
tests to measure reasoning ability and working-memory
capacity, we have demonstrated a consistent and
remarkably high correlation between the two factors. Our
best estimates of the correlation between WM and R were
.82, .88, .80 and .82 for studies 1 through 4
respectively.
...
The finding of such a high correlation between these two
factors may surprise some. Reasoning and working-memory
capacity are thought of differently and they arise from
quite different traditions. Since Spearman (1923),
reasoning has been described as an abstract, high level
process, eluding precise definition. Development of good
tests of reasoning ability has been almost an art form,
owing more to empirical trial-and-error than to a
systematic delineation of the requirements such tests
must satisfy. In contrast, working memory has its roots
in the mechanistic, buffer-storage model of information
processing. Compared to reasoning, short-term storage
has been thought to be a more tractable, demarcated
process.'

P. C. Kyllonen & R. E. Christal (1990)
Reasoning Ability Is (Little More Than) Working-Memory
Capacity
Intelligence 14, 389-433

Such evidence stands well with the logical arguments of Cherniak which
were introduced in Section A, and which are implicit in the following
introductory remarks of Shinghal (1992) on automated reasoning:

'Suppose we are given the following four statements:

1. John awakens;
2. John brings a mop;
3. Mother is delighted, if John awakens and cleans his room;
4. If John brings a mop, then he cleans his room.

The statements being true, we can reason intuitively to
conclude that Mother is delighted. Thus we have deduced
a fact that was not explicitly given in the four
statements. But if we were given many statements, say a
hundred, then intuitive reasoning would be difficult.

Hence we wish to automate reasoning by formalizing it
and implementing it on a computer. It is then usually
called automated theorem proving. To understand
computer-implementable procedures for theorem proving,
one should first understand propositional and predicate
logics, for those logics form the basis of the theorem
proving procedures. It is assumed that you are familiar
with these logics.'

R. Shinghal (1992)
Formal Concepts in Artificial Intelligence: Fundamentals
Ch.2 Automated Reasoning with Propositional Logic p.8
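
A minimal sketch in C of the kind of mechanisation Shinghal has in
mind: the four statements are encoded as propositional facts and
rules, and modus ponens is applied until nothing new follows. This is
plain forward chaining, not the resolution procedure his text goes on
to develop.

/* Forward chaining with modus ponens over the four statements:
 *   facts: John awakens; John brings a mop
 *   rules: brings_mop -> cleans_room
 *          awakens & cleans_room -> mother_delighted            */
#include <stdio.h>

enum { AWAKENS, BRINGS_MOP, CLEANS_ROOM, MOTHER_DELIGHTED, NPROPS };

struct rule { int premises[2]; int n; int conclusion; };

int main(void)
{
    int known[NPROPS] = {0};
    known[AWAKENS] = known[BRINGS_MOP] = 1;            /* statements 1 and 2 */

    struct rule rules[] = {
        { { BRINGS_MOP, -1 },       1, CLEANS_ROOM },        /* statement 4 */
        { { AWAKENS, CLEANS_ROOM }, 2, MOTHER_DELIGHTED },   /* statement 3 */
    };
    const char *names[NPROPS] =
        { "John awakens", "John brings a mop",
          "John cleans his room", "Mother is delighted" };

    int changed = 1;
    while (changed) {                    /* apply modus ponens to a fixpoint */
        changed = 0;
        for (int r = 0; r < 2; r++) {
            int all = 1;
            for (int p = 0; p < rules[r].n; p++)
                all = all && known[rules[r].premises[p]];
            if (all && !known[rules[r].conclusion]) {
                known[rules[r].conclusion] = 1;
                changed = 1;
            }
        }
    }
    for (int i = 0; i < NPROPS; i++)
        if (known[i]) printf("known or deduced: %s\n", names[i]);
    return 0;
}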

Automated report writing and automated reasoning drawing on actuarial
data is fundamental to the PROBE project. In contrast to such work
using deductive inference, Gluck and Bower (1988) have modelled human
inductive reasoning using artificial neural network technology (which
is heuristic, based on constraint satisfaction/approximation, or
'best fit' rather than being 'production rule' based). That is, it is
unlikely that anyone spontaneously reasons using truth-tables or the
Resolution Rule (Robinson 1965). Furthermore, Rescorla (1988), perhaps
the dominant US spokesman for research in Pavlovian Conditioning, has
drawn attention to the fact that Classical Conditioning should perhaps
be seen as the experimental modelling of inductive inferential
'cognitive' heuristic processes. Throughout this paper, it is being
argued that such inductive inferences are in fact best modelled using
artificial neural network technology, and that such processing is
intensional, with all of the traditional problems of intensionality:

'Connectionist networks are well suited to everyday
common sense reasoning. Their ability to simultaneously
satisfy soft constraints allows them to select from
conflicting information in finding a plausible
interpretation of a situation. However, these networks
are poor at reasoning using the standard semantics of
classical logic, based on truth in all possible models.'

M. Derthick (1990)
Mundane Reasoning by Settling on a Plausible Model
Artificial Intelligence 46,1990,107-157

and perhaps even more familiarly:

'Induction should come with a government health warning.

A baby girl of sixteen months hears the word 'snow' used
to refer to snow. Over the next months, as Melissa
Bowerman has observed, the infant uses the word to refer
to: snow, the white tail of a horse, the white part of a
toy boat, a white flannel bed pad, and a puddle of milk
on the floor. She is forming the impression that 'snow'
refers to things that are white or to horizontal areas
of whiteness, and she will gradually refine her concept
so that it tallies with the adult one. The underlying
procedure is again inductive.'

P. N. Johnson-Laird (1988)
Induction, Concepts and Probability p.238: The Computer
and The Mind

Connectionist systems, it is claimed, do not represent knowledge as
production rules, ie as well-formed formulae represented in the syntax
of the predicate calculus (using conditionals, modus ponens, modus
tollens and the quantifiers), but as connection weights between
activated predicates in a parallel distributed network:

'Lawful behavior and judgments may be produced by a
mechanism in which there is no explicit representation
of the rule. Instead, we suggest that the mechanisms
that process language and make judgments of
grammaticality are constructed in such a way that their
performance is characterizable by rules, but that the
rules themselves are not written in explicit form
anywhere in the mechanism.'

D E Rumelhart and D McClelland (1986)
Parallel Distributed Processing Ch. 18

Such systems are function-approximation systems, and are
mathematically a development of Kolmogorov's Mapping Neural Network
Existence Theorem (1957). Such networks consist of three layers of
processing elements. Those of the bottom layer simply distribute the
input vector (a pattern of 1s and 0s) to the processing elements of
the second layer. The processing elements of this middle or hidden
layer implement a *'transfer function'* (more on this below). The top
layer consists of output units.
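
A minimal sketch in C of the three-layer arrangement just described,
with a logistic function standing in for the hidden-layer transfer
function. The weights here are invented, and the sketch says nothing
about how such weights would be found; that is the issue taken up
below.

/* Three-layer feed-forward pass: the bottom layer just distributes the
 * input vector, hidden units apply a transfer function (here a logistic
 * sigmoid), and the output unit sums the weighted hidden activations.
 * Compile with e.g. cc net.c -lm.  Weights are invented. */
#include <stdio.h>
#include <math.h>

#define NIN  3
#define NHID 2

static double sigmoid(double x) { return 1.0 / (1.0 + exp(-x)); }

int main(void)
{
    double input[NIN] = { 1.0, 0.0, 1.0 };          /* pattern of 1s and 0s     */
    double w_ih[NHID][NIN] = { {  0.5, -0.3, 0.8 }, /* input-to-hidden weights  */
                               { -0.6,  0.9, 0.1 } };
    double w_ho[NHID] = { 1.2, -0.7 };              /* hidden-to-output weights */

    double hidden[NHID], output = 0.0;
    for (int h = 0; h < NHID; h++) {
        double net = 0.0;
        for (int i = 0; i < NIN; i++)
            net += w_ih[h][i] * input[i];           /* distribute the input     */
        hidden[h] = sigmoid(net);                   /* hidden transfer function */
        output += w_ho[h] * hidden[h];
    }
    printf("output = %f\n", output);
    return 0;
}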

An important feature of Kolmogorov's Theorem is that it is not
constructive. That is, it is not algorithmic or 'effective'. Since the
proof of the theorem is not constructive, we do not know how to
determine the key quantities of the transfer functions. The theorem
simply tells us that such a three layer mapping network must exist. As
Hecht-Nielsen (1990) remarks:

'Unfortunately, there does not appear to be too much
hope that a method of finding the Kolmogorov network
will be developed soon. Thus, the value of this result
is its intellectual assurance that continuous vector
mappings of a vector variable on the unit cube
(actually, the theorem can be extended to apply to any
COMPACT, ie, closed and bounded, set) can be implemented
EXACTLY with a three-layer neural network.'

R. Hecht-Nielsen (1990)
Kolmogorov's Theorem
Neurocomputing

That is, we may well be able to find weight-matrices which capture or
embody certain functions, but we may not be able to say 'effectively'
what the precise equations are which algorithmically compute such
functions. This is often summarised by statements to the effect that
neural networks can model or fit solutions to sample problems, and
generalise to new cases, but they can not provide a rule as to how
they make such classifications or inferences. Their ability to do so
is distributed across the weightings of the whole weight matrix of
connections between the three layers of the network. The above is to
be contrasted with the fitting of linear discriminant functions to
partition or classify an N dimensional space (N being a direct
function of the number of classes or predicates). Fisher's
discriminant analysis (and the closely related linear multiple
regression technology) arrive at the discriminant function
coefficients through the Gaussian method of Least Mean Squares, each b
value and the constant being arrived at deductively via the solution
of simultaneous equations. Function approximation, or the
determination of hidden layer weights or connections is based on
recursive feedback, elsewhere within behaviour science, this is known
as 'reinforcement', the differential strengthening or weakening of
connections depending on feedback or knowledge of results. Kohonen
(1988) commenting on "Connectionist Models" in contrast to
conventional, extensionalist relational databases, writes:

'Let me make it completely clear that one of the most
central functions coveted by the "connectionist" models
is the ability to solve *simplicitly defined relational
structures*. The latter, as explained in Sect. 1.4.5,
are defined by *partial relations*, from which the
structures are determined in a very much similar way as
solutions to systems of algebraic equations are formed;
all the values in the universe of variables which
satisfy the conditions expressed as the equations
comprise, by definition, the possible solutions. In the
relational structures, the knowledge (partial
statements, partial relations) stored in memory
constitutes the universe of variables, from which the
solutions must be sought; and the conditions expressed
by (eventually incomplete) relations, ie, the "control
structure" [9.20] correspond to the equations.

Contrary to the conventional database machines which
also have been designed to handle such relational
structures, the "connectionist" models are said to take
the relations, or actually their strengths into account
statistically. In so doing, however they only apply the
Euclidean metric, or the least square loss function to
optimize the solution. This is not a very good
assumption for natural data.'

T. Kohonen (1988)
Ch. 9 Notes on Neural Computing
In Self-Organisation and Associative Memory
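
To make the contrast concrete, here is a minimal sketch (in Python
with numpy, on an invented XOR-style sample problem, so nothing here
is drawn from the studies cited). The least-squares route solves for
its coefficients deductively in one step; the three-layer network
arrives at its hidden-layer weights only by iterated feedback, and no
explicit rule for the mapping can be read off the fitted weights.

    import numpy as np

    rng = np.random.default_rng(0)

    # Invented sample problem: y = XOR(x1, x2), which no linear
    # discriminant can fit.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([0.0, 1.0, 1.0, 0.0])

    # 1. Deductive route: solve the least-squares equations directly.
    A = np.hstack([X, np.ones((4, 1))])           # add the constant term
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)  # b-values and constant at once
    print("linear fit:", A @ coef)                # stuck at 0.5 everywhere

    # 2. 'Reinforcement' route: nudge hidden and output weights by feedback.
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)   # hidden layer
    W2 = rng.normal(size=8);      b2 = 0.0           # output unit

    for _ in range(20000):
        h = sigmoid(X @ W1 + b1)            # hidden 'transfer function'
        out = sigmoid(h @ W2 + b2)          # network output
        err = out - y                       # knowledge of results
        g_out = err * out * (1 - out)
        g_h = np.outer(g_out, W2) * h * (1 - h)
        # differential strengthening/weakening of connections
        W2 -= 1.0 * (h.T @ g_out); b2 -= 1.0 * g_out.sum()
        W1 -= 1.0 * (X.T @ g_h);   b1 -= 1.0 * g_h.sum(axis=0)

    print("network fit:", np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2), 2))
    # Typically close to [0, 1, 1, 0]: the 'solution' is distributed
    # across the whole weight matrix, with no rule to be read off it.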

Throughout the 1970s Nisbett and colleagues studied the use of
probabilistic heuristics in real world human problem solving,
primarily in the context of Attribution Theory (H. Kelley 1967, 1972).
Such inductive as opposed to deductive heuristics of inference do
indeed seem to be influenced by training (Nisbett and Krantz 1983,
Nisbett et. al 1987). Statistical heuristics are naturally applied in
everyday reasoning if subjects are trained in the Law of Large
Numbers. This is not surprising, since application of such heuristics
is an example of response generalisation - which is how psychologists
have traditionally studied the vicissitudes of inductive inference
within Learning Theory. As Wagner (1981) has pointed out, we are
perfectly at liberty to use the language of Attribution Theory as an
alternative, this exchangeability of reference system being an
instance of Quinean Ontological Relativity, where what matters is not
so much the names in argument positions, or even the predicates
themselves, but the *relations* (themselves at least two-place
predicates) which emerge from such systems.

Under most natural circumstances, inductive inference is irrational
(cf. Popper 1936, Kahneman et al. 1982, Dawes, Faust and Meehl 1989,
Sutherland 1992). This is because it is generally based on
unrepresentative sampling (drawing on the 'availability' and
'representativeness' heuristics), and this is so simply because that
is how data in a structured culture often naturally presents itself.
Research has therefore demonstrated that human inference is seriously
at odds with formal deductive logical reasoning, and the algorithmic
implementation of those inferential processes by computers (Church
1936, Post 1936, Turing 1936). One of the main points of this paper is
that we generally turn to the formal deductive technology of
mathematico-logical method (science) to compensate for the heuristics
and biases which typically characterise natural inductive inference.
Where possible, we turn to *relational databases and 4GLs* (recursive
function theory and mathematical logic) to provide descriptive, and
deductively valid pictures of individuals and collectives.

This large and unexpected body of empirical evidence from decision-
theory, cognitive experimental social psychology and Learning Theory,
began accumulating in the mid to late 1970s (cf. Kahneman, Tversky and
Slovic 1982, Putnam 1986, Stich 1990), and began to cast serious doubt
on the viability of the 'computational theory' of mind (Fodor
1975,1980) which was basic to functionalism (Putnam 1986). That is,
the substantial body of empirical evidence which accumulated within
Cognitive Psychology itself suggested that, contrary to the doctrine
of functionalism, there exists a system of independent, objective
knowledge, and reasoning against which we can judge human, and other
animal cognitive processing. However, it gradually became appreciated
that the digital computer is not a good model of human information
processing, at least not unless this is conceived in terms of 'neural
computing' (also known as 'connectionism' or 'Parallel Distributed
Processing'). The application of formal rules of logic and mathematics
to the analysis of behaviour solely within the language of formal
logic is the professional business of Applied Behaviour Scientists.
Outside of the practice of those professional skills, the scientist
himself is as prone to the irrationality of intensional heuristics as
are laymen (Wason 1966). Within the domain of formal logic applied to
the analysis of behaviour, the work undertaken by applied scientists
is impersonal. The scientists' professional views are dictated by the
laws of logic and mathematics rather than personal opinion
(heuristics).

Applied psychologists, particularly those working in the area of
Criminological Psychology, are therefore faced with a dilemma. Whilst
many of their academic colleagues are *studying* the heuristics and
biases of human cognitive processing, the applied psychologist is
generally called upon to do something quite different, yet is largely
prevented from doing so for lack of relational systems to provide the
requisite distributional data upon which to use the technology of
algorithmic decision making. In the main, the applied criminological
psychologist as behaviour scientist is called upon to bring about
behaviour change, rather than to better understand or explicate the
natural heuristics of cognitive (clinical) judgement. To the applied
psychologist, the low correlation between self-report and actual
behaviour, the low consistency of behaviour across situations, the low
efficacy of prediction of behaviours such as 'dangerousness' on the
basis of clinical judgment, and the fallibility of assessments based
on interviews, are all testament to the now *well documented
unreliability of intensional heuristics (cognitive processes) as data
sources, and we have already pointed to why this is so.* Yet
generally, psychologists can rely on no other sources, as there are,
in fact, only inadequate Inmate Information Systems. Thus, whilst
applied
psychologists know from research that they must rely on distributional
data to establish their professional knowledge base, and that they
must base their work with individuals (whether prisoners, governors or
managers) on extensional analysis of such knowledge bases, *they
neither have the systems available nor the influence to have such
systems established, despite powerful scientific evidence (Dawes,
Faust and Meehl 1989) that their professional services in many areas
depend on the existence and use of such systems.* What applied
psychologists have learned therefore is to eschew intensional
heuristics and look instead to the formal technology of extensional
analysis of observations of behaviour. The fact that training in
formal statistical and deductive logic is difficult, particularly the
latter, makes this a challenge, since most of the required skills are
only likely to be applicable when sitting in front of a computer
keyboard (Holland et al 1986). It is particularly challenging in that
the information systems are generally inadequate to allow
professionals to do what they are trained to do.

Over the past five years (1988-1993), a programme has been developed
which is explicitly naturalistic in that it seeks to record
inmate/environment (regime) interactions. This system is the
PROBE/Sentence Management system. It breaks out of solipsism by making
all assessments of behaviour, and all inmate targets *RELATIVE to
predetermined requirements of the routines and structured activities
defined under function 17 of the annual Governors Contract*. It is by
design a 'formative profiling system' which is 'criterion reference'
based.

The alternative, intensional heuristics, which are the mark of natural
human judgement (hence our rich folk psychological vocabulary of
metaphor) have to be contrasted with extensional analysis and
judgement using technology based on the deductive algorithms of the
First Order Predicate Calculus (Relational Database Technology). This
is not only coextensive with the 'scope and language of science'
(Quine 1954) but is also, to the best of our knowledge from research
in Cognitive Psychology, an effective compensatory system to the
biases of natural intensional, inductive heuristics (Agnoli and Krantz
1989). Whilst a considerable amount of evidence suggests that training
in formal logic and statistics is not in itself sufficient to suppress
usage of intensional heuristics in any enduring sense, ie that
generalisation to extra-training contexts is limited, there is
evidence that judgement can be rendered more rational by training in
the use of extensional technology. The demonstration by Kahneman and
Tversky 1983, that subjects generally fail to apply the extensional
conjunction rule in probability, that a conjunction is always equally
or less probable than its elements, and that this too is generally
resistant to counter-training, is another example, this time within
probability theory (a deductive system) of the failure of extensional
rules in applied contexts. Careful use of I.T. and principles of
deductive inference (e.g. semantic tableaux, Herbrand models, and
Resolution methods) promise, within the limits imposed by Godel's
Theorem, to keep us on track if we restrict our technology to the
extensional.
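
The extensional conjunction rule itself is easily checked
mechanically. The following is a minimal sketch (in Python, over an
invented three-coin sample space chosen purely for illustration): the
class of outcomes satisfying a conjunction is a subset of the class
satisfying either conjunct, so its probability can never exceed
theirs.

    from itertools import product

    # Sample space: ordered outcomes of three fair coin tosses.
    space = list(product("HT", repeat=3))

    def prob(predicate):
        """Probability as the relative size of a predicate's extension."""
        return sum(1 for o in space if predicate(o)) / len(space)

    A = lambda o: o[0] == "H"            # first toss is heads
    B = lambda o: o.count("H") >= 2      # at least two heads overall
    A_and_B = lambda o: A(o) and B(o)

    print(prob(A), prob(B), prob(A_and_B))   # 0.5  0.5  0.375
    assert prob(A_and_B) <= min(prob(A), prob(B))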

Before leaving the concept of Methodological Solipsism, here's how one
commentator reviewed the situation in the context of the work of
perhaps psychology's best known radical behaviourist:

'Meanings Are Not 'In the Head'

Skinner has developed a case for this claim in the book,
VERBAL BEHAVIOR (1957), and elsewhere, where he
maintains that meaning, rather than being a property of
an utterance itself, is to be found in the nature of the
relationship between occurrence of the utterance and its
context. It is important enough to put in his own words.

..meaning is not properly regarded as a property either
of a response or a situation but rather of the
contingencies responsible for both the topography of
behavior and the control exerted by stimuli. To take a
primitive example, if one rat presses a lever to obtain
food when hungry while another does so to obtain water
when thirsty, the topographies of their behaviors may be
indistinguishable, but they may be said to differ in
meaning: to one rat pressing the lever 'means food'; to
the other it 'means' water. But these are aspects of the
contingencies which have brought behavior under the
control of the current occasion. Similarly, if a rat is
reinforced with food when it presses the lever in the
presence of a flashing light but with water when the
light is steady, then it could be said that the flashing
light means food and the steady light means water, but
again these are references not to some property of the
light but to the contingencies of which the lights have
been parts.

The same point may be made, but with many more
implications, in speaking of the meaning of verbal
behavior. The over-all function of the behavior is
crucial. In an archetypal pattern a speaker is in
contact with a situation to which a listener is disposed
to respond but with which he is not in contact. A verbal
response on the part of the speaker makes it possible
for the listener to respond appropriately. For example,
let us suppose that a person has an appointment, which
he will keep by consulting a clock or a watch. If none
is available, he may ask someone to tell him the time,
and the response permits him to respond effectively...

*The meaning of a response for the speaker* includes the
stimulus which controls it (in the example above, the
setting on the face of a clock or watch) and possibly
aversive aspects of the question, from which a response
brings release. *The meaning for the listener* is close
to the meaning the clock face would have if it were
visible to him, but it also includes the contingencies
involving the appointment, which make a response to the
clock face or the verbal response probable at such a
time..

One of the unfortunate implications of communication
theory is that the meanings for speaker and listener are
the same, that something is made common to both of them,
that the speaker conveys an idea or meaning, transmits
information, or imparts knowledge, as if his mental
possessions then become the mental possessions of the
listener. There are no meanings which are the same in
the speaker and listener. Meanings are not independent
entities...

Skinner, 1974, pp.90-2

One does not have to take Skinner's word alone, however,
for much current philosophical work also leads to the
conclusion that meanings are not in the head. The issue
extends beyond the problem of meaning construed as a
linguistic property to the problem of intensionality and
the interpretation of mentality itself. While the
reasoning behind this claim is varied and complex,
perhaps an analogy with machine functions can be helpful
here. A computer is a perfect example of a system that
performs meaningless syntactic operations. The
electrical configuration of the addressable memory
locations is just formal structures, without semantic
significance to the computer either as numbers or as
representations of numbers. All the computer does is
change states automatically as electrical current runs
through its circuits. Despite the pure formality of its
operations, however, the computer (if designed and
programmed correctly) will be truth-preserving across
computations: ask the thing to add 2 + 2 and it will
give you a 4 every time. But the numerical meanings we
attach to the inputs and outputs do not enter into and
emanate from the computer itself. Rather, they remain
outside the system, in the interpretations that we as
computer users assign to the inputs and outputs of the
machine's operations. Now, if one is inclined to a
computational view of mind, then by analogy much the
same thing holds for the organic computational systems
we call our brains. Meanings are not in them, but exist
in the mode through which they in their functioning
stand to the world.

Ironies begin to mount here. Brentano's claim that
'Intentionality' is the mark of the mental is now widely
accepted. Intentionality in its technical sense has to
do with the meaningfulness, the semantic context of
mental states. But the argument is now made that
cognitive operations and their objects are formal and
syntactic only, and do not themselves have semantic
context (e.g. see Putnam, 1975; Fodor, 1980; and Stich,
1983, for a range of contributions to this viewpoint).
Semantic issues do not concern internal mental
mechanisms but concern the mode of relation between
individuals and their worlds. Such issues are not really
psychological at all, it is claimed, and are relegated
to other fields of inquiry for whatever elucidation can
be brought to them. For example, while belief is a
canonical example of a mental, intentional state, Stich
says, 'believing that p is an amalgam of historical,
contextual, ideological, and perhaps other
considerations' (1983, p.170). The net result of these
recent moves in cognitive psychology and the philosophy
of mind seems to be that the essence of mentality - its
meaningfulness - is in the process of being disowned by
modern mentalism! But Stich's ashbin of intentionality -
historical and contextual considerations - is exactly
what behaviorism seeks to address. Can it be that
BEHAVIORISM will be the instrument called for final
explication of Brentano's thesis of the mental? One's
head spins to think it.

R. Schnaitter (1987)
Knowledge as Action: The Epistemology of Radical Behaviorism
In B. F. Skinner Consensus and Controversy
Eds. S. Modgil and C. Modgil

The reawakening of interest in connectionism in the early to mid 1980s
can indeed be seen as a vindication of the basic principles of
behaviourism. What is psychological may well be impenetrable, for any
serious scientific purposes, not because it is in any way a different
kind of 'stuff', but because structurally it amounts to no more than
an n-dimensional weight space, idiosyncratic and context specific, to
each and every one of us.

--
David Longley


David Longley

unread,
Nov 23, 1996, 8:00:00 AM11/23/96
to

This is a critique of a stance in contemporary psychology, and
for want of a better term, I have characterised that as
"intensional". As a corrective, I outline what may be described
as "The Extensional Stance" which draws heavily on the
naturalized epistemology of W.V.O Quine.

Full text is available at:

http://www.uni-hamburg.de/~kriminol/TS/tskr.htm

A: Methodological Solipsism

'A cognitive theory with no rationality restrictions is
without predictive content; using it, we can have
virtually no expectations regarding a believer's
behavior. There is also a further metaphysical, as
opposed to epistemological, point concerning rationality
as part of what it is to be a PERSON: the elements of a
mind - and, in particular, a cognitive system - must FIT
TOGETHER or cohere.......no rationality, no agent.'

C. Cherniak (1986)
Minimal Rationality p.6

'Complexity theory raises the possibility that formally
correct deductive procedures may sometimes be so slow as
to yield computational paralysis; hence, the "quick but
dirty" heuristics uncovered by the psychological
research may not be irrational sloppiness but instead
the ultimate speed-reliability trade-off to evade
intractability. With a theory of nonidealized
rationality, complexity theory thereby "justifies the
ways of Man" to this extent.'

ibid p.75-76

The establishment of coherence or incoherence depends on a commitment
to clear and accurate recording and analysis of observations and their
relations within a formal system. Unfortunately, biological
limits on both neuron conduction velocity and storage capacity
impose such severe constraints on human processing capacity that we
are restricted to using heuristics rather than recursive functions.
There can be no doubt that non-human computers, at least with
respect to the propositional calculus and the first order predicate
calculus with monadic predicates (ie systems which are decidable and
have decision procedures untainted by Godel's Theorem (1931)), offer
a far more reliable way of analysing information than intuitive
judgment.
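
What a decision procedure amounts to in the propositional case can be
shown in a few lines. The sketch below (in Python; the encoding of
formulas as boolean functions is my own device, not anything in the
logical literature cited) simply exhausts the finitely many
truth-value assignments, which is exactly the sense in which validity
here is mechanically decidable.

    from itertools import product

    def valid(formula, n_vars):
        """True iff the formula holds under every assignment to its variables."""
        return all(formula(*vals)
                   for vals in product([False, True], repeat=n_vars))

    # Modus ponens, ((p -> q) and p) -> q, with 'x -> y' written as 'not x or y'.
    modus_ponens = lambda p, q: not ((not p or q) and p) or q
    # Affirming the consequent, ((p -> q) and q) -> p: an invalid pattern.
    affirm_consequent = lambda p, q: not ((not p or q) and q) or p

    print(valid(modus_ponens, 2))       # True  - decided mechanically
    print(valid(affirm_consequent, 2))  # False - refuted by an assignment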

The primary reason for writing this volume is to locate the programme
of behaviour assessment and management referred to as 'Sentence
Management' within contemporary research and development in cognitive
and behaviour science. It is also in part motivated by the author
having been in a position for some time where he has been required to
both teach, train, and support applied criminological psychologists in
the use of deductive (computer, and relational database 4GL
programming) as well as inductive (inferential statistical) inference
in an applied setting. This responsibility has led to a degree of
bewilderment. Some very influential work in mathematical logic this
century has suggested that certain domains of concern simply do not
fall within the 'scope and language of science' (Quine 1956). That
work suggests, in fact, that psychological idioms, as opposed to
behavioural terms, belong to a domain which is resistant to the tools
of scientific analysis since they flout a basic axiom which is a
precondition for valid inference.

Whilst this point has been known to logicians for nearly a century,
empirical evidence in support of this conclusion began to accumulate
throughout the 1970s and 1980s as a result of work in Decision Theory
in psychology and medicine. (Kahneman, Slovic and Tversky 1982, Arkes
and Hammond 1986). This work provided a substantial body of empirical
evidence that human judgement is not adequately modelled by the axioms
of subjective probability theory (ie Bayes Theorem, cf. Savage 1954,
Cooke 1991), or formal logic (Wason 1966, Johnson-Laird and Wason
1972), and that in all areas of human judgement, quite severe errors
of judgement were endemic probably due to basic neglect of base rates
(prior probabilities or relative frequencies of behaviours in the
population, see Eddy 1982 for a striking example of the
misinterpretation of the diagnostic contribution of mammography in
breast cancer). Instead, the evidence now strongly suggests that
judgements are usually little more than guesses, or 'heuristics' which
are prone to well understood biases such as 'anchoring',
'availability', and 'representativeness'. This work has progressively
undermined the very foundations of Cognitive Science, which takes
rationality and substitutivity as axiomatic.
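
The base-rate point can be made with a short worked example. The
figures below are invented round numbers for illustration, not Eddy's
data, and the sketch is in Python: even a test with high sensitivity
and specificity, applied where the base rate is low, yields a
posterior probability far below what unaided intuition tends to
suggest.

    def posterior(base_rate, sensitivity, specificity):
        """P(condition | positive test) by Bayes' theorem."""
        p_pos = (base_rate * sensitivity
                 + (1 - base_rate) * (1 - specificity))
        return base_rate * sensitivity / p_pos

    # Illustrative numbers only: 1% base rate, 90% sensitivity, 90% specificity.
    print(round(posterior(0.01, 0.90, 0.90), 3))   # about 0.083
    # Neglecting the base rate, intuition tends to put this near 0.9;
    # the prior relative frequency in the population does most of the work.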

It is bewildering how difficult it is to teach deductive reasoning
skills effectively if the classical, functionalist stance of
contemporary cognitive psychology is in fact true. Yet the literature
on teaching skills in deductive reasoning suggests that these *are* in
fact very difficult skills to teach. Most significantly, it is
notoriously difficult to teach such skills with the objective of
having them *applied to practical problems*. What seems to happen is
that, despite efforts to achieve the contrary, the skills that are
acquired, are both acquired and applied as intensional, inductive
heuristics, rather than as a set of formal logical rules.

This volume is therefore to be taken as a rationale for both a
programme of inmate management and assessment referred to as 'Sentence
Management' (which is both historically descriptive and deductive
rather than projective and inductive in approach), and for the format
of the current MSc 'Computing and Statistics' module which is part of
the MSc in Applied Criminological Psychology, designed to provide
formal training in behaviour science for new psychologists working
within the English Prison Service. The 'Computing and Statistics'
module could in fact be regarded as a module in 'Cognitive Skills' for
psychologists. The format adopted is consistent with the
recommendations of researchers such as Nisbett and Ross (1980) Holland
et al. (1986), and Ross and Nisbett 1991. Elaboration of the
substantial Clinical vs. Actuarial dimension can be found in section C
below.

At the heart of 20th century logic there is a very interesting problem
(illuminated over the last three decades by W.V.O Quine 1956, 1960,
1990, 1992), which seems to be divisive with respect to the
classification and analysis of 'mental' (or psychological) phenomena
as opposed to 'physical' phenomena. The problem is variously known as
Brentano's Thesis (Quine 1960), 'the problem of intensionality', or
'the content-clause problem'.

"The keynote of the mental is not the mind it is the
content-clause syntax, the idiom 'that p'".

W. V. O. Quine
Intension
The Pursuit of Truth (1990) p.71

It is the contention of this volume that the solution to this problem
renders psychology and behaviour science two very different subjects
with entirely different methods and domains of application. The
problem is reflected in differences in how language treats certain
classes of terms. One class is the 'extensional' and the other
'intensional'. This volume therefore sets out the relevant
contemporary research background and outlines the practical
implications which these logical classes have for the applied,
practical work of criminological psychologists. We will begin with a
few recent statements on the implications of Brentano's Thesis for
psychology as a science. The basic conclusion is that there can be no
scientific analysis, ie no reliable application of the laws of logic
or mathematics to psychological phenomena, because psychological
phenomena flout the very axioms which mathematical, logical and
computational processes must assume for valid inference. From the fact
that quantification is unreliable within intensional contexts, it
follows that both p and not-p can be held as truth values for the same
proposition, and from any system which allows such inconsistency, any
conclusion whatsoever can be inferred. The thrust of this volume is
that bewilderment vanishes once one appreciates that the subject
matter of Applied Criminological Psychology is exclusively that of
behaviour, and that its methodology is exclusively deductive and
analytical. This is taken to be a clear vindication of Quine's 1960
dictum that:

'If we are limning the true and ultimate structure of
reality, the canonical scheme for us is the austere
scheme that knows no quotation but direct quotation and
no propositional attitudes but only the physical
constitution and behavior of organisms.'

W.V.O Quine
Word and Object 1960 p 221

For:

'Once it is shown that a region of discourse is not
extensional, then according to Quine, we have reason to
doubt its claim to describe the structure of reality.'

C. Hookway
Logic: Canonical Notation and Extensionality
Quine (1988)

The problem with intensional (or common sense or 'folk') psychology
has been clearly spelled out by Nelson (1992):

'The trouble is, according to Brentano's thesis, no such
theory is forthcoming on strictly naturalistic, physical
grounds. If you want semantics, you need a full-blown,
irreducible psychology of intensions.

There is a counterpart in modern logic of the thesis of
irreducibility. The language of physical and biological
science is largely *extensional*. It can be formulated
(approximately) in the familiar predicate calculus. The
language of psychology, however, is *intensional*. For
the moment it is good enough to think of an
*intensional* sentence as one containing words for
*intensional* attitudes such as belief.

Roughly what the counterpart thesis means is that
important features of extensional, scientific language
on which inference depends are not present in
intensional sentences. In fact intensional words and
sentences are precisely those expressions in which
certain key forms of logical inference break down.'

R. J. Nelson (1992)
Naming and Reference p.39-42

and explicitly by Place (1987):

'The first-order predicate calculus is an extensional
logic in which Leibniz's Law is taken as an axiomatic
principle. Such a logic cannot admit 'intensional' or
'referentially opaque' predicates whose defining
characteristic is that they flout that principle.'

U. T. Place (1987)
Skinner Re-Skinned P. 244
In B.F. Skinner Consensus and Controversy
Eds. S. Modgil & C. Modgil

The *intension* of a sentence is its 'meaning', or the property it
conveys. It is sometimes used almost synonymously with the
'proposition' or 'content' communicated. The *extension* of a term or
sentence on the other hand is the CLASS of things of which the term or
sentence can be said to be true. Thus, things belong to the same
extension of a term or sentence if they are the same members of the
designated class, whilst things share the same intension,
(purportedly) if they share the same property. Here's how Quine (1987)
makes the distinction:

'If it makes sense to speak of properties, it should
make clear sense to speak of sameness and differences of
properties; but it does not. If a thing has this
property and not that, then certainly this property and
that are different properties. But what if everything
that has this property has that one as well, and vice
versa? Should we say that they are the same property? If
so, well and good; no problem. But people do not take
that line. I am told that every creature with a heart
has kidneys, and vice versa; but who will say that the
property of having a heart is the same as that of having
kidneys?

In short, coextensiveness of properties is not seen as
sufficient for their identity. What then is? If an
answer is given, it is apt to be that they are identical
if they do not just happen to be coextensive, but are
necessarily coextensive. But NECESSITY, q.v., is too
hazy a notion to rest with.

We have been able to go on blithely all these years
without making sense of identity between properties,
simply because the utility of the notion of property
does not hinge on identifying or distinguishing them.
That being the case, why not clean up our act by just
declaring coextensive properties identical? Only because
it would be a disturbing breach of usage, as seen in the
case of the heart and kidneys. To ease that shock, we
change the word; we speak no longer of properties, but
of CLASSES......

We must acquiesce in ordinary language for ordinary
purposes, and the word 'property' is of a piece with it.
But also the notion of property or its reasonable
facsimile that takes over, since these contexts never
hinge on distinguishing coextensive properties. One
instance among many of the use of classes in mathematics
is seen under DEFINITION, in the definition of number.

For science it is classes SI, properties NO.'

W. V. O. Quine (1987)
Classes versus Properties
QUIDDITIES:
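
A toy illustration may help here (the universe of three creatures
below is my own invention, in Python, and is obviously not Quine's
example beyond borrowing his predicates): two predicates that differ
as descriptions can nevertheless pick out exactly the same class, and
for extensional purposes that class is all there is to go on.

    universe = [
        {"name": "dog",    "has_heart": True,  "has_kidneys": True},
        {"name": "cat",    "has_heart": True,  "has_kidneys": True},
        {"name": "amoeba", "has_heart": False, "has_kidneys": False},
    ]

    has_heart = lambda x: x["has_heart"]        # one 'property'
    has_kidneys = lambda x: x["has_kidneys"]    # a different 'property'

    def extension(pred):
        """The class of things in the universe of which the predicate is true."""
        return {x["name"] for x in universe if pred(x)}

    # Different intensions, identical extensions over this universe:
    print(extension(has_heart) == extension(has_kidneys))   # True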

It has been argued quite convincingly by Quine (1956,1960) that the
scope and language of science is entirely extensional, that the
intensional is purely attributive, instrumental or creative, and that
there can not be a universal language of thought or 'mentalese', since
such a system would presume determinate translation relations.
Different languages are different systems of behaviour which may
achieve similar ends, but they do not support direct, determinate
translation relations. This is Quine's (1960) 'Indeterminacy of
Translation Thesis'. Despite its import, we frequently behave 'as if'
it is legitimate to directly translate (substitute), and we do this
not only within our own language as illustrated below, but within our
own thinking and language.

This profound point of mathematical logic can be made very clear with
a simple, but representative example. The sub-set of intensional
idioms with which we are most concerned in our day to day dealings
with people are the so called 'propositional attitudes' (saying that,
remembering that, believing that, knowing that, hoping that and so
on). If we report that someone 'said that' he hated his father, it is
often the case that we do not report what is articulated verbatim, ie
precisely, what was said. Instead, we frequently 'approximate' the
'meaning' of what was said and consider this legitimate on the grounds
that the 'meaning' is preserved.

Unfortunately, this assumes that, in contexts of propositional
attitude ('says that', 'thinks that', 'believes that', and, quite
pertinently, 'knows that' etc.) we are free to substitute terms or
phrases which are otherwise co-referential, as we can extensionally
with 7+3 = 10 and 5+5 = 10. That is, it assumes that inference within
intensional contexts is valid. Yet nobody would report that if Oedipus
said that he wanted to marry Jocasta that he said that he wanted to
marry his mother! The problem with intensional idioms is that they can
not be substituted for one another and still preserve the truth
functionality of the contexts within which they occur. In fact, they
can only be directly quoted verbatim, ie as behaviours. Now
substitutivity of co-referential identicals 'salva veritate'
(Leibniz's Law) is in fact a basic extensional axiom of first order
logic, and is a law which underpins all valid inference. One of the
objectives of this paper is therefore to specify in practical detail
how in fact we propose to develop a system for inmate reporting which
does not flout Leibniz's Law, but takes it as central. This is an
inversion of current practices in significant areas of the work of
applied psychologists, and whilst the example cited above is a simple
one, it is nevertheless highly representative of much of the
problematic work of practising psychologists, who, often ignorant of
the above constraint on dealing with the problems which logicians have
identified with the intensional, are, as a consequence, therefore more
often 'creative' in their dealings with inmates and in their report
writing, than is often appreciated, even though the 'Puzzle About
Belief' and modal contexts is well documented (Church 1954, Kripke
1979).
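
The failure of substitution can be modelled, crudely, by treating a
propositional-attitude report as direct quotation, which is how the
text above says such reports must in fact be handled. The sketch
below is in Python and is my own modelling device, not the author's;
the sentences are the Oedipus example from the text.

    def extensional_context(value):
        """Depends only on the reference (the value), so substitution is safe."""
        return value * 2

    assert 7 + 3 == 5 + 5                  # co-referential: both denote 10
    assert extensional_context(7 + 3) == extensional_context(5 + 5)

    # A 'says that' report behaves like a verbatim quotation:
    said_that = "Oedipus said that he wanted to marry Jocasta"

    # 'Jocasta' and 'his mother' pick out the same person, yet substituting
    # inside the quoted report changes what was said (and its truth value).
    substituted = said_that.replace("Jocasta", "his mother")
    print(said_that == substituted)        # False: the context is opaque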

Dretske (1980) put the issue as follows:

'If I know that the train is moving and you know that
its wheels are turning, it does not follow that I know
what you know just because the train never moves without
its wheels turning. More generally, if all (and only) Fs
are G, one can nonetheless know that something is F
without knowing that it is G. Extensionally equivalent
expressions, when applied to the same object, do not
(necessarily) express the same cognitive content.
Furthermore, if Tom is my uncle, one can not infer (with
a possible exception to be mentioned later) that if S
knows that Tom is getting married, he thereby knows that
my uncle is getting married. The content of a cognitive
state, and hence the cognitive state itself, depends
(for its identity) on something beyond the extension or
reference of the terms we use to express the content. I
shall say, therefore, that a description of a cognitive
state, is non-extensional.'

F. I. Dretske (1980)
The Intentionality of Cognitive States
Midwest Studies in Philosophy 5,281-294

For the discipline of psychology, the above logical analyses can be
taken either as a vindication of 20th century behaviourism/physicalism
(Quine 1960,1990,1992) or as a knockout blow to 20th century
'Cognitivism' and psychologism (methodological solipsism).

'One may accept the Brentano thesis as showing the
indispensability of intentional idioms and the
importance of an autonomous science of intention, or as
showing the baselessness of intentional idioms and the
emptiness of a science of intention. My attitude, unlike
Brentano's, is the second. To accept intentional usage
at face value is, we saw, to postulate translation
relations as somehow objectively valid though
indeterminate in principle relative to the totality of
speech dispositions. Such postulation promises little
gain in scientific insight if there is no better ground
for it than that the supposed translation relations are
presupposed by the vernacular of semantics and
intention.'

W. V. O. Quine
The Double Standard
Flight from Intension
Word and Object (1960), p218-221

In response to these mounting problems, Jerry Fodor published an
influential paper in 1980 entitled 'Methodological Solipsism
Considered as a Research Strategy for Cognitive Psychology'. In that
paper he proposed that Cognitive Psychology adopt a stance
restricting itself to the explication of the ways that subjects make
sense of the world from their 'own particular point of view'. This was
to be contrasted with the objectives of 'Naturalistic Psychology' or
'Evidential Behaviourism'.

Methodological Solipsism, as opposed to Methodological Behaviourism,
takes 'cognitive processes', mental contents (meanings/propositions)
or 'propositional attitudes' of folk/commonsense psychology at face
value. It accepts that there is a 'Language of Thought' (Fodor 1975),
that there is a universal 'mentalese' which natural languages map
onto, and which express thoughts as 'propositions'. It examines the
apparent causal relations and processes of 'attribution' between these
processes and other psychological processes which have propositional
content. It accepts what is known as the 'formality condition', ie
that thinking is a purely formal, syntactic, computational affair
which therefore has no room for semantic notions such as truth or
falsehood. Such computational processes are therefore indifferent to
whether beliefs are about the world per se (can be said to have a
reference), or are just the views of the belief holder (ie may be
purely imaginary). Technically, this amounts to beliefs not being
subject to 'existential or universal quantification' (where
'existential' refers to the logical quantifier 'there exists at
least one', and 'universal' refers to 'for all').

Methodological Solipsism looks to the *relations* between beliefs 'de
dicto', which are opaque to the holder (he may believe that the
Morning Star is the planet Venus, but not believe that the Evening
Star is the planet Venus, and therefore believe different things of
the Morning and Evening Stars). Methodological Solipsism does not
concern itself with the transparency of beliefs, ie their referential,
or 'de re' status. Some further examples of what all this entails
might be helpful here, since the implications of Methodological
Solipsism are both subtle and far ranging. The critical notions in
what follow are 'transfer of training', 'generalisation decrement',
'inductive vs. deductive inference', and the distinction between
'heuristics' and 'algorithms'.

Here is how Fodor's paper was summarised in abstract:

'Explores the distinction between 2 doctrines, both of
which inform theory construction in much of modern
cognitive psychology: the representational theory of
mind and the computational theory of mind. According to
the former, propositional attitudes are viewed as
relations that organisms bear to mental representations.
According to the latter, mental processes have access
only to formal (nonsemantic) properties of the mental
representations over which they are defined. The
following claims are defended: (1) The traditional
dispute between rational and naturalistic psychology is
plausibly viewed as an argument about the status of the
computational theory of mind. (2) To accept the
formality condition is to endorse a version of
methodological solipsism. (3) The acceptance of some
such condition is warranted, at least for that part of
psychology that concerns itself with theories of the
mental causation of behavior. A glossary and several
commentaries are included.'

J A Fodor (1980)
Methodological solipsism considered as a research
strategy in cognitive psychology.
Massachusetts Inst of Technology
Behavioral and Brain Sciences; 1980 Mar Vol 3(1) 63-109

Some of the commentaries, particularly those by Loar and Rey,
clarify what is admittedly quite a difficult, but substantial, view
widely held by graduate psychologists.

'If psychological explanation is a matter of describing
computational processes, then the references of our
thoughts do not matter to psychological explanation.
This is Fodor's main argument.....Notice that Fodor's
argument can be taken a step further. For not only are
the references of our thoughts not mentioned in
cognitive psychology; nothing that DETERMINES their
references, like Fregian senses, is mentioned
either....Neither reference nor reference-determining
sense have a place in the description of computational
processes.'

B. F. Loar
Ibid p.89

Not all of the commentaries were as formal, as the following
commentary from one of the UK's most eminent logicians makes clear:

'Fodor thinks that when we explain behaviour by mental
causes, these causes would be given "opaque"
descriptions "true in virtue of the way the agent
represents the objects of his wants (intentions,
beliefs, etc.) to HIMSELF" (his emphasis). But what an
agent intends may be widely different from the way he
represents the object of his intention to himself. A man
cannot shuck off the responsibility for killing another
man by just 'directing his intention' at the firing of a
gun:

"I press a trigger - Well, I'm blessed!
he's hit my bullet with his chest!"'

P. Geach
ibid p80

The Methodological Solipsist's stance is clearly at odds with what is
required to function effectively as an APPLIED Criminological
Psychologist if 'functional effectiveness' is taken to refer to
intervention in the behaviour of an inmate with reference to his
environment. Here's how Fodor contrasted Methodological Solipsism with
the naturalistic approach:

'..there's a tradition which argues that - epistemology
to one side - it is at best a strategic mistake to
attempt to develop a psychology which individuates
mental states without reference to their environmental
causes and effects...I have in mind the tradition which
includes the American Naturalists (notably Pierce and
Dewey), all the learning theorists, and such
contemporary representatives as Quine in philosophy and
Gibson in psychology. The recurrent theme here is that
psychology is a branch of biology, hence that one must
view the organism as embedded in a physical environment.
The psychologist's job is to trace those
organism/environment interactions which constitute its
behavior.'

J. Fodor (1980) ibid. p.64

Here is how Stich (1991) reviewed Fodor's position ten years on:

'This argument was part of a larger project. Influenced
by Quine, I have long been suspicious about the
integrity and scientific utility of the commonsense
notions of meaning and intentional content. This is not,
of course, to deny that the intentional idioms of
ordinary discourse have their uses, nor that the uses
are important. But, like Quine, I view ordinary
intentional locutions as projective, context sensitive,
observer relative, and essentially dramatic. They are
not the sorts of locutions we should welcome in serious
scientific discourse. For those who share this Quinean
scepticism, the sudden flourishing of cognitive
psychology in the 1970s posed something of a problem. On
the account offered by Fodor and other observers, the
cognitive psychology of that period was exploiting both
the ontology and the explanatory strategy of commonsense
psychology. It proposed to explain cognition and certain
aspects of behavior by positing beliefs, desires, and
other psychological states with intentional content, and
by couching generalisations about the interactions among
those states in terms of their intentional content. If
this was right, then those of us who would banish talk
of content in scientific settings would be throwing out
the cognitive psychological baby with the intentional
bath water. On my view, however, this account of
cognitive psychology was seriously mistaken. The
cognitive psychology of the 1970s and early 1980s was
not positing contentful intentional states, nor was it
adverting to content in its generalisations. Rather, I
maintained, the cognitive psychology of the day was
"really a kind of logical syntax (only psychologized).
Moreover, it seemed to me that there were good reasons
why cognitive psychology not only did not but SHOULD not
traffic in intentional states. One of these reasons was
provided by the Autonomy argument.'

Stephen P. Stich (1991)
Narrow Content meets Fat Syntax
in MEANING IN MIND - Fodor And His Critics

and writing with others in 1991, even more dramatically:

'In the psychological literature there is no dearth of
models for human belief or memory that follow the lead
of commonsense psychology in supposing that
propositional modularity is true. Indeed, until the
emergence of connectionism, just about all psychological
models of propositional memory, except those urged by
behaviorists, were comfortably compatible with
propositional modularity. Typically, these models view a
subject's store of beliefs or memories as an
interconnected collection of functionally discrete,
semantically interpretable states that interact in
systematic ways. Some of these models represent
individual beliefs as sentence like structures - strings
of symbols that can be individually activated by their
transfer from long-term memory to the more limited
memory of a central processing unit. Other models
represent beliefs as a network of labelled nodes and
labelled links through which patterns of activation may
spread. Still other models represent beliefs as sets of
production rules. In all three sorts of models, it is
generally the case that for any given cognitive episode,
like performing a particular inference or answering a
question, some of the memory states will be actively
involved, and others will be dormant......

The thesis we have been defending in this essay is that
connectionist models of a certain sort are incompatible
with the propositional modularity embedded in
commonsense psychology. The connectionist models in
question are those that are offered as models at the
COGNITIVE level, and in which the encoding of
information is widely distributed and subsymbolic. In
such models, we have argued, there are no DISCRETE,
SEMANTICALLY INTERPRETABLE states that play a CAUSAL
ROLE in some cognitive episodes but not others. Thus
there is, in these models, nothing with which the
propositional attitudes of commonsense psychology can
plausibly be identified. If these models turn out to
offer the best accounts of human belief and memory, we
shall be confronting an ONTOLOGICALLY RADICAL theory
change - the sort of theory change that will sustain the
conclusion that propositional attitudes, like caloric
and phlogiston, do not exist.'

W. Ramsey, S. Stich and J. Garon (1991)
Connectionism, eliminativism, and the future of folk
psychology.

The implications here are that progress in applying psychology will be
impeded if psychologists persist in trying to talk about, or use
psychological (intensional) phenomena within a framework (evidential
behaviourism) which inherently resists quantification into such
terms. Without bound, extensional predicates, we can not reliably use
the predicate calculus, and without the predicate (functional)
calculus we can not formulate lawful relationships, statistical or
determinate.

In the following pages, I hope to be able to explicate how dominant
the methodologically solipsistic approach is within psychological
research and practice, and how that work can only have a critically
negative impact on the practical work of the Applied Criminological
Psychologist. In the main, the following looks to the study of how
people spontaneously use socially conditioned (induced) intensional
heuristics, and how these are at odds with what we now know to be
formally optimal (valid) from the stance of the objective
(extensional) sciences. It argues that the primary objective of the
applied psychologist must be the extensional analysis of observations
of behaviour (Quine 1990) and that any intervention or advice must be
based exclusively on such data if what is provided is to be classed as
a professional service. To attempt to understand or describe behaviour
without reference to the environment within which it occurs, is, it is
argued, to only partially understand and describe behaviour at best, a
point made long ago by Brunswick and Tolman (1933). To do otherwise is
to treat self-assessment/report as a valid and reliable source of
behavioural data, whilst a substantial body of evidence from Cognitive
Psychology, some of which is reviewed in this paper, suggests such a
stance is a very fundamental error. Like 'folk physics', 'folk
psychology' has been documented and found wanting. The last section of
this paper outlines a technology for directly recording and
extensionally analysing inmate/regime interactions or relations,
thereby providing a practical direction to shape the work of Applied
Criminological Psychology.

The following pages cite some examples of research which looks at the
use of intensional heuristics from a methodological solipsistic
stance. The first looks at the degree to which intensional heuristics
can be trained, and is a development of work published by Nisbett and
Krantz (1983). The concept of response generalisation, ie the transfer
of training to new problems is the key issue in what follows. However,
as Nisbett and Wilson (1977) clearly pointed out, subjects' awareness
should not be given undue weight when assessing its efficacy; instead,
testing for change by differential placement in contexts which require
such skills should be the criterion.

'Ss were trained on the law of large numbers in a given
domain through the use of example problems. They were
then tested either on that domain or on another domain
either immediately or after a 2-wk delay. Strong domain
independence was found when testing was immediate. This
transfer of training was not due simply to Ss' ability
to draw direct analogies between problems in the trained
domain and in the untrained domain. After the 2-wk
delay, it was found that (1) there was no decline in
performance in the trained domain and (2) although there
was a significant decline in performance in the
untrained domain, performance was still better than for
control Ss. Memory measures suggest that the retention
of training effects is due to memory for the rule system
rather than to memory for the specific details of the
example problems, contrary to what would be expected if
Ss were using direct analogies to solve the test
problems.'

Fong G. T. & Nisbett R. E. (1991)
Immediate and delayed transfer of training effects in
statistical reasoning.
Journal of Experimental Psychology General; 1991 Mar Vol
120(1) 34-45

Note that the authors report a decline in performance after the delay,
a point taken up and critically discussed by Ploger and Wilson (1991).
Upon reanalysing Fong and Nisbett's results, these authors
concluded:

'The data in this study suggest the following argument:
Most college students did not apply the LLN [Law of
Large Numbers] to problems in everyday life. When given
brief instruction on the LLN, the majority of college
students were able to remember that rule. This led to
some increase in performance on problems involving the
LLN. **Overall, most students could state the rule with
a high degree of accuracy, but failed to apply it
consistently. The vast majority of college students
could memorize a rule; some applied it to examples, but
most did not.**

Fong and Nisbett (1991) concluded their article with the
suggestion that "inferential rule training may be the
educational gift that keeps on giving" (p.44). It is
likely that their educational approach may be successful
for relatively straightforward problems that are in the
same general form as the training examples. We suspect,
however, that for more complex problems, rule training
might be less effective. **Students may remember the
rule, but fail to understand the relevant implications.
In such cases, students may accept the gift, but it will
not keep on giving.'**

D. Ploger and M. Wilson
J Experimental Psychology: General, 1991,120,2,213-214
(My emphasis)

This criticism is repeated by Reeves and Weisberg (1993):

'G. T. Fong and R. E. Nisbett claimed that human problem
solvers use abstract principles to accomplish transfer
to novel problems, based on findings that Ss were able
to apply the law of large numbers to problems from a
different domain from that in which they had been
trained. However, the abstract-rules position cannot
account for results from other studies of analogical
transfer that indicate that the content or domain of a
problem is important both for retrieving previously
learned analogs (e.g., K. J. Holyoak and K. Koh, 1987;
M. Keane, 1985, 1987; B. H. Ross, 1989) and for mapping
base analogs onto target problems (Ross, 1989). It also
cannot account for Fong and Nisbett's own findings that
different-domain but not same-domain transfer was
impaired after a 2-wk delay. It is proposed that the
content of problems is more important in problem solving
than supposed by Fong and Nisbett.'

L. M. Reeves and R. W. Weisberg
Abstract versus concrete information as the basis for
transfer in problem solving: Comment on Fong and Nisbett
(1991).
Journal of Experimental Psychology General; 1993 Mar
Vol 122(1) 125-128

The above authors concluded their paper:

'Accordingly, we urge caution in development of an
abstract-rules approach in analogical problem solving at
the expense of domain or exemplar-specific information.
Theories in deductive reasoning have been developed that
give a more prominent role to problem content (e.g.
Evans, 198; Johnson-Laird, 1988; Johnson-Laird & Byrne,
1991) and thus better explain the available data; the
evidence suggests that problem solving theories should
follow this trend.'

Ibid p.127

The key issue is not whether students (or inmates) can learn
particular rules, or strategies of behaviour, since such behaviour
modification is quite fundamental to training any professional;
rather, **the issue is how well such rules are in fact applied outside
the specific training domain where they are learned**, which, writ
large, means the specialism within which they belong. This theme runs
throughout this paper in different guises. In some places the emphasis
is on 'similarity metrics', in others, 'synonymy', 'analyticity' and
'the opacity of the intensional'. Throughout, the emphasis is on
transfer of training and the fragmentation of all skill learning which
is fundamental to the rationale for the system of Sentence Management
which will be explicated and discussed in the latter parts of this
paper.

Fong et al. (1990) having reviewed the general neglect of base rate
information and overemphasis on case-specific information in parole
decision making, went on to train probation officers in the use of the
law of large numbers. This training increased probation officers' use
of base-rates when making predictions about recidivism, but this is a
specialist, context specific skill.

'Consider a probation officer who is reviewing an
offender's case and has two primary sources of
information at his disposal: The first is a report by
another officer who has known the offender for three
years; and the second is his own impressions of the
offender based on a half-hour interview. According to
the law of large numbers, the report would be considered
more important than the officer's own report owing to
its greater sample size. But research suggests that
people will tend to underemphasize the large sample
report and overemphasize the interview. Indeed, research
on probation decisions suggests that probation officers
are subject to exactly such a bias (Gottfredson and
Gottfredson; 1988; Lurigio, 1981)'

G. T. Fong, A. J. Lurigio & L. J. Stalans (1990)
Improving Probation Decisions Through Statistical Training
Criminal Justice and Behavior 17, 3, 1990, 370-388
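
The weight the law of large numbers gives to the longer report can be
seen in a small simulation (in Python, with an invented 'true rate'
of 0.3 and invented sample sizes standing in for the half-hour
interview and the three-year report; nothing here comes from the
probation studies cited): estimates based on larger behaviour samples
scatter far less around the true rate.

    import random

    random.seed(1)
    TRUE_RATE = 0.3    # hypothetical true rate of some target behaviour

    def estimate(n_observations):
        """Observed proportion in a sample of n independent observations."""
        hits = sum(random.random() < TRUE_RATE for _ in range(n_observations))
        return hits / n_observations

    interview_like = [estimate(5) for _ in range(1000)]    # few observations
    report_like = [estimate(500) for _ in range(1000)]     # many observations

    spread = lambda xs: round(max(xs) - min(xs), 2)
    print("spread of small-sample estimates:", spread(interview_like))
    print("spread of large-sample estimates:", spread(report_like))
    # The small-sample estimates range widely; the large-sample estimates
    # stay close to 0.3, which is the point of weighting the larger
    # sample more heavily.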

However, it is important to evaluate the work of Nisbett and
colleagues in the context of their early work which is clearly in the
tradition of fallibility of 'intuitive' human judgment. Their work
illustrates the conditions under which formal discipline, or cognitive
skills can be effectively inculcated, and which classes of skills are
relatively highly resistant to training. Such training generalises
most effectively to apposite situations, many of which will be
professional contexts. A major thesis of this volume is that for
extensional skills to be put into effective practice, explicit
applications must be made salient to elicit and sustain the
application of such skills. Formal, logical skills are most likely to
be applied within contexts such as actuarial analysis, which comprise
the application of professional skills in information technology. Such
a system is outlined with illustrative practical examples as framework
for applied behaviour science in the latter part of this volume.

Recently, Nisbett and colleagues (1992) in defending their stance
against the conventional view that there may in fact be little in the
way of formal rule learning, have suggested criteria for resolving the
question as to whether or not explicit rule following is fundamental
to reasoning, and if so, under what circumstances:

'A number of theoretical positions in psychology -
including variants of case-based reasoning, instance-
based analogy, and connectionist models - maintain that
abstract rules are not involved in human reasoning, or
at best play a minor role. Other views hold that the use
of abstract rules is a core aspect of human reasoning.
We propose eight criteria for determining whether or not
people use abstract rules in reasoning, and examine
evidence relevant to each criterion for several rule
systems. We argue that there is substantial evidence
that several different inferential rules, including
modus ponens, contractual rules, causal rules, and the
law of large numbers, are used in solving everyday
problems. We discuss the implications for various
theoretical positions and consider hybrid mechanisms
that combine aspects of instance and rule models.'

E. Smith, C. Langston and R. Nisbett (1992)
The Case for Rules in Reasoning, Cognitive Science 16, 1-40

Whilst the above, particularly the degree to which training must be
'taught for transfer', is clearly relevant to the training of
psychologists in the use of deductive and actuarial technology
(computing and statistics), it is also relevant to work in the domain
of cognitive skills, and, from the evidence that cognitive skills
should be treated no differently to any other behavioural skills, the
argument is relevant to any other skill training, whether part of
inmate programmes or staff training.

For instance, in some of the published studies (e.g. Porporino et al
1991), pre to post course changes (difference scores) in cognitive
skills have been presented as evidence for the efficacy of such
programmes in conjunction with the more critical (albeit to date,
quantitatively less impressive) measures of changes in reconviction
rate. Clearly one must ask whether one is primarily concerned to bring
about a change in cognitive behaviour, and/or a change in other
behaviours. In the transfer of training and reasoning studies by
Nisbett and colleagues, the issues are acknowledged to be highly
dependent on the types of heuristics being induced, and the
conventional position (which is being represented in this volume) is,
as pointed out above, still contentious, although the view being
expressed here remains the *conventional* one. The issue is one of
*generalisation* of skills to novel tasks or situations, ie situations
other than the training tasks. To what extent does generalisation in
practice occur, if at all? These issues, and the research in
experimental psychology (outside the relatively small area of
criminological psychology), are cited here as clear empirical
illustrations of *the opacity of the intensional*. The conventional
view, as Fong and Nisbett (1991) clearly state, is that:

'A great many scholars today are solidly in the
concrete, empirical, domain-specific camp established by
Thorndike and Woodworth (1901), arguing that people
reason without the aid of abstract inferential rules
that are independent of the content domain.'

Thus, whilst Nisbett and colleagues have provided some evidence for
the induction of (statistical) heuristics, they acknowledge that there
is a problem attempting to teach formal rules (such as those of the
predicate calculus) which are not 'intuitively obvious'. This issue is
therefore at the heart of the question of resourcing specific, ie
special, inmate programmes which are 'cognitively' based and which
adhere to the conventional 'formal discipline' notion. Such
investment must be compared with investment in the rest of inmate
activities which can be used to monitor and shape behaviour under the
relatively natural conditions of the prison regime. There, the natural
demands of the activities are focal, and the 'programme' element rests
in apposite allocation and clear description of what the activity area
requires/offers in terms of behavioural skills.

There is a logical possibility that in restricting the subject matter
of psychology, and thereby the deployment of psychologists, to what
can only be analysed and managed from a Methodological Solipsistic
(cognitive) perspective, one will render some very significant results
of research in psychology irrelevant to applied *behaviour* science
and technology, unless taken as a vindication of the stance that
behaviour is essentially context specific. As explicated above,
intensions are not, in principle, amenable to quantitative analysis.
They are, in all likelihood, only domain or context specific. A few
further examples should make these points clearer.

--
David Longley


David Longley

unread,
Nov 24, 1996, 8:00:00 AM11/24/96
to

It will help if an idea of what we mean by 'clinical' and 'actuarial'
judgement is provided. The following is taken from an early review
(Meehl 1954), and a relatively recent review of the status of
'Clinical vs. Actuarial Judgement' by Dawes, Faust and Meehl (1989):

'One of the major methodological problems of clinical
psychology concerns the relation between the "clinical"
and "statistical" (or "actuarial") methods of
prediction. Without prejudging the question as to
whether these methods are fundamentally different, we
can at least set forth the main difference between them
as it appears superficially. The problem is to predict
how a person is going to behave. In what manner should
we go about this prediction?

We may order the individual to a class or set of classes
on the basis of objective facts concerning his life
history, his scores on psychometric tests, behavior
ratings or check lists, or subjective judgements gained
from interviews. The combination of all these data
enables us to CLASSIFY the subject; and once having made
such a classification, we enter a statistical or
actuarial table which gives the statistical frequencies
of behaviors of various sorts for persons belonging to
the class. The mechanical combining of information for
classification purposes, and the resultant probability
figure which is an empirically determined relative
frequency, are the characteristics that define the
actuarial or statistical type of prediction.

Alternatively, we may proceed on what seems, at least,
to be a very different path. On the basis of interview
impressions, other data from the history, and possibly
also psychometric information of the same type as in the
first sort of prediction, we formulate, as a psychiatric
staff conference, some psychological hypothesis
regarding the structure and the dynamics of this
particular individual. On the basis of this hypothesis
and certain reasonable expectations as to the course of
other events, we arrive at a prediction of what is going
to happen. This type of procedure has been loosely
called the clinical or case-study method of prediction'.

P. E. Meehl (1954)
The Problem: Clinical vs. Statistical Prediction

'In the clinical method the decision-maker combines or
processes information in his or her head. In the
actuarial or statistical method the human judge is
eliminated and conclusions rest solely on empirically
established relations between data and the condition or
event of interest. A life insurance agent uses the
clinical method if data on risk factors are combined
through personal judgement. The agent uses the actuarial
method if data are entered into a formula, or tables and
charts that contain empirical information relating these
background data to life expectancy.

Clinical judgement should not be equated with a clinical
setting or a clinical practitioner. A clinician in
psychiatry or medicine may use the clinical or actuarial
method. Conversely, the actuarial method should not be
equated with automated decision rules alone. For
example, computers can automate clinical judgements. The
computer can be programmed to yield the description
"dependency traits", just as the clinical judge would,
whenever a certain response appears on a psychological
test. To be truly actuarial, interpretations must be
both automatic (that is, prespecified or routinized) and
based on empirically established relations.'

R. Dawes, D. Faust & P. Meehl (1989)
Clinical Versus Actuarial Judgement Science v243, pp
1668-1674 (1989)
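
To make the procedural distinction concrete in computational terms,
here is a minimal sketch in C of what 'mechanical combining of
information for classification purposes' amounts to: the case is
classified by a fixed, prespecified rule, and the 'prediction' is
simply the empirically established relative frequency stored for that
class. The risk factors and frequencies below are entirely invented
for illustration; nothing is drawn from Meehl's or Dawes, Faust and
Meehl's actual tables.

#include <stdio.h>

/*
 * A purely illustrative "actuarial table": for each of 8 possible
 * combinations of three binary risk factors, we store a relative
 * frequency of the outcome (e.g. reconviction) observed in past
 * cases belonging to that class.  The numbers are invented.
 */
static const double outcome_rate[8] = {
    0.10, 0.18, 0.22, 0.35,   /* prior_record = 0 */
    0.30, 0.42, 0.48, 0.65    /* prior_record = 1 */
};

/* Classify a case by objective facts: the rule is fixed in advance. */
static int classify(int prior_record, int unemployed, int under_25)
{
    return (prior_record << 2) | (unemployed << 1) | under_25;
}

/* The actuarial "judgement" is just a table lookup for the class. */
static double actuarial_prediction(int prior_record, int unemployed,
                                   int under_25)
{
    return outcome_rate[classify(prior_record, unemployed, under_25)];
}

int main(void)
{
    /* A new case: prior record, employed, over 25. */
    double p = actuarial_prediction(1, 0, 0);
    printf("Empirical relative frequency for this class: %.2f\n", p);
    return 0;
}

The clinician, on Meehl's account, is doing something of the same
general kind, but with an implicit, remembered 'table' of past cases;
the actuarial version differs in being explicit and precise about both
the classification rule and the frequencies.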

As long ago as 1941, Lundberg made it clear that any argument between
those committed to the 'clinical' (intuitive) stance and those arguing
for the 'actuarial' (statistical) was a pseudo-argument, since all the
clinician could possibly be basing his or her decision on was his or
her limited experience (database) of past cases and outcomes.

'I have no objection to Stouffer's statement that "if
the case-method were not effective, life insurance
companies hardly would use it as they do in
supplementing their actuarial tables by a medical
examination of the applicant in order to narrow their
risks." I do not see, however, that this constitutes a
"supplementing" of actuarial tables. It is rather the
essential task of creating specific actuarial tables. To
be sure, we usually think of actuarial tables as being
based on age alone. But on the basis of what except
actuarial study has it been decided to charge a higher
premium (and how much) for a "case" twenty pounds
overweight, alcoholic, with a certain family history,
etc.? These case-studies have been classified and the
experience for each class noted until we have arrived at
a body of actuarial knowledge on the basis of which we
"predict" for each new case. The examination of the new
case is for the purpose of classifying him as one of a
certain class for which prediction is possible.'

G. Lundberg (1941)
Case Studies vs. Statistical Methods - An Issue Based
on Misunderstanding. Sociometry v4 pp379-83 (1941)

A few years later, Meehl (1954), drawing on the work of Lundberg
(1941) and Sarbin (1941) in reviewing the relative merits of clinical
vs. statistical prediction (judgement), reiterated the point that all
judgements about an individual are always referenced to a class; they
are, therefore, always probability judgements.

'No predictions made about a single case in clinical
work are ever certain, but are always probable. The
notion of probability is inherently a frequency notion,
hence statements about the probability of a given event
are statements about frequencies, although they may not
seem to be so. Frequencies refer to the occurrence of
events in a class; therefore all predictions, even those
that from their appearance seem to be predictions about
individual concrete events or persons, have actually an
implicit reference to a class.... it is only if we have a
reference class to which the event in question can be
ordered that the possibility of determining or
estimating a relative frequency exists.. the clinician,
if he is doing anything that is empirically meaningful,
is doing a second-rate job of actuarial prediction.
There is fundamentally no logical difference between the
clinical or case-study method and the actuarial method.
The only difference is on two quantitative continua,
namely that the actuarial method is more EXPLICIT and
more PRECISE.'

P. Meehl (1954)
Clinical vs. Statistical Prediction:
A Theoretical Analysis and a Review of the Evidence

There has, unfortunately, over the years, been a strong degree of
resistance to the actuarial approach. It must be appreciated however,
that the technology to support comprehensive actuarial analysis and
judgment has only been physically available since the 1940s with the
invention of the computer. Practically speaking, it has only been
available on the scale we are now discussing since the late 1970s with
the development of sophisticated DBMS's (databases with query
languages based on the Predicate Calculus; Codd 1970; Gray 1984;
Gardarin and Valduriez 1989, Date 1992), and the development and mass
production of powerful and cheap microcomputers. Minsky and Papert
(1988) in their expanded edition of 'Perceptrons' (basic pattern
recognition systems) in fact wrote:

'The goal of this study is to reach a deeper
understanding of some concepts we believe are crucial to
the general theory of computation. We will study in
great detail a class of computations that make decisions
by weighting evidence.....The people we want most to
speak to are interested in that general theory of
computation.'

M. L. Minsky & S. A. Papert (1969,1990)
Perceptrons p.1

The 'general theory of computation' is, as elaborated elsewhere,
'Recursive Function Theory' (Church 1936, Kleene 1936, Turing 1937),
and is essentially the approach being advocated here as evidential
behaviourism, or eliminative materialism which eschews psychologism
and intensionalism. Nevertheless, as late as 1972, Meehl still found
he had to say:

'I think it is time for those who resist drawing any
generalisation from the published research, by
fantasising about what WOULD happen if studies of a
different sort WERE conducted, to do them. I claim that
this crude, pragmatic box score IS important, and that
those who deny its importance do so because they just
don't like the way it comes out. There are few issues in
clinical, personality, or social psychology (or, for
that matter, even in such fields as animal learning) in
which the research trends are as uniform as this one.
Amazingly, this strong trend seems to exert almost no
influence upon clinical practice, even, you may be
surprised to learn, in Minnesota!...

It would be ironic indeed (but not in the least
surprising to one acquainted with the sociology of our
profession) if physicians in nonpsychiatric medicine
should learn the actuarial lesson from biometricians and
engineers, whilst the psychiatrist continues to muddle
through with inefficient combinations of unreliable
judgements because he has not been properly instructed
by his colleagues in clinical psychology, who might have
been expected to take the lead in this development.

I understand (anecdotally) that there are two other
domains, unrelated to either personality assessment or
the healing arts, in which actuarial methods of data
combination seem to do at least as good a job as the
traditional impressionistic methods: namely, meteorology
and the forecasting of security prices. From my limited
experience I have the impression that in these fields
also there is a strong emotional resistance to
substituting formalised techniques for human judgement.

Personally, I look upon the "formal-versus-judgmental"
issue as one of great generality, not confined to the
clinical context. I do not see why clinical
psychologists should persist in using inefficient means
of combining data just because investment brokers,
physicians, and weathermen do so. Meanwhile, I urge
those who find the box score "35:0" distasteful to
publish empirical studies filling in the score board
with numbers more to their liking.'

P. E. Meehl (1972)
When Shall We Use Our Heads Instead of the Formula?
PSYCHODIAGNOSIS: Collected Papers (1971)

In 1982, Kahneman, Slovic and Tversky, in their collection of papers
on (clinical) judgement under conditions of uncertainty, prefaced the
book with the following:

'Meehl's classic book, published in 1954, summarised
evidence for the conclusion that simple linear
combinations of cues outdo the intuitive judgements of
experts in predicting significant behavioural criteria.
The lasting intellectual legacy of this work, and of the
furious controversy that followed it, was probably not
the demonstration that clinicians performed poorly in
tasks that, as Meehl noted, they should not have
undertaken. Rather, it was the demonstration of a
substantial discrepancy between the objective record of
people's success in prediction tasks and the sincere
beliefs of these people about the quality of their
performance. This conclusion was not restricted to
clinicians or to clinical prediction:

People's impressions of how they reason, and how well
they reason, could not be taken at face value.'

D. Kahneman, P. Slovic & A. Tversky (1982)
Judgment Under Conditions of Uncertainty: Heuristics and
Biases

Earlier in 1977, reviewing the Attribution Theory literature evidence
on individuals' access to the reasons for their behaviours, Nisbett
and Wilson (1977) summarised the work as follows:

'................................... there may be little
or no direct introspective access to higher order
cognitive processes. Ss are sometimes (a) unaware of the
existence of a stimulus that importantly influenced a
response, (b) unaware of the existence of the response,
and (c) unaware that the stimulus has affected the
response. It is proposed that when people attempt to
report on their cognitive processes, that is, on the
processes mediating the effects of a stimulus on a
response, they do not do so on the basis of any true
introspection. Instead, their reports are based on a
priori, implicit causal theories, or judgments about the
extent to which a particular stimulus is a plausible
cause of a given response. This suggests that though
people may not be able to observe directly their
cognitive processes, they will sometimes be able to
report accurately about them. Accurate reports will
occur when influential stimuli are salient and are
plausible causes of the responses they produce, and will
not occur when stimuli are not salient or are not
plausible causes.'

R. Nisbett & T. Wilson (1977)
Telling More Than We Can Know: Public Reports on Private
Processes

Such rules of thumb or attributions are, of course, the intensional
heuristics studied by Tversky and Kahneman (1973), or the 'function
approximations' computed by neural network systems discussed earlier
as connection weights (both in artificial and real neural nets, cf.
Kandel's work with Aplysia).

Mathematical logicians such as Putnam (1975, 1988), Elgin (1990) and
Devitt (1990) have long been arguing that psychologists may, as
Skinner (1971,1974) argued consistently, be looking for their data in
the wrong place. Despite the empirical evidence from research in
psychology on the problems of self report, and a good deal more drawn
from decision making in medical diagnosis, the standard means of
obtaining information for 'reports' on inmates for purposes of review,
and the standard means of assessing inmates for counselling remain
the clinical interview. In the Prison Service this makes little
sense, since it is possible to directly observe behaviour under
relatively natural conditions of everyday activities. The clinical
interview is still the basis of much of the work of the Prison
Psychologist despite the literature on fallibility of self-reports,
and the fallibility and unwitting distortions of those making
judgments in such contexts have been consistently documented within
psychology:

'The previous review of this field (Slovic, Fischoff &
Lichtenstein 1977) described a long list of human
judgmental biases, deficiencies, and cognitive
illusions. In the intervening period this list has both
increased in size and influenced other areas of
psychology (Bettman 1979, Mischel 1979, Nisbett & Ross
1980).'

H. Einhorn and R. Hogarth (1981)

The following are also taken from the text:

'If one considers the rather typical findings that
clinical judgments tend to be (a) rather unreliable (in
at least two of the three senses of that term), (b) only
minimally related to the confidence and amount of
experience of the judge, (c) relatively unaffected by
the amount of information available to the judge, and
(d) rather low in validity on an absolute basis, it
should come as no great surprise that such judgments are
increasingly under attack by those who wish to
substitute actuarial prediction systems for the human
judge in many applied settings....I can summarize this
ever-growing body of literature by pointing out that
over a very large array of clinical judgment tasks
(including by now some which were specifically selected
to show the clinician at his best and the actuary at his
worst), rather simple actuarial formulae typically can
be constructed to perform at a level no lower than that
of the clinical expert.'

L. R. Goldberg (1968)
Simple models or simple processes?
Some research on clinical judgments
American Psychologist, 1968, 23(7) p.483-496

'The various studies can thus be viewed as repeated
sampling from a uniform universe of judgement tasks
involving the diagnosis and prediction of human
behavior. Lacking complete knowledge of the elements
that constitute this universe, representativeness cannot
be determined precisely. However, with a sample of about
100 studies and the same outcome obtained in almost
every case, it is reasonable to conclude that the
actuarial advantage is not exceptional but general and
likely to encompass many of the unstudied judgement
tasks. Stated differently, if one poses the query:
"Would an actuarial procedure developed for a particular
judgement task (say, predicting academic success at my
institution) equal or exceed the clinical method?", the
available research places the odds solidly in favour of
an affirmative reply. "There is no controversy in social
science that shows such a large body of qualitatively
diverse studies coming out so uniformly.... as this one"
(Meehl, J. Person. Assess. 50, 370, 1986).'

R. Dawes, D. Faust & P. Meehl (1989)
ibid.

The distinction between collecting observations and integrating them
is brought out vividly by Meehl (1986):

'Surely we all know that the human brain is poor at
weighting and computing. When you check out at a
supermarket you don't eyeball the heap of purchases and
say to the clerk, "well it looks to me as if it's about
$17.00 worth; what do you think?" The clerk adds it up.
There are no strong arguments....from empirical
studies.....for believing that human beings can assign
optimal weight in equations subjectively or that they
apply their own weights consistently.'

P. Meehl (1986)
Causes and effects of my disturbing little book
J Person. Assess. 50,370-5,1986

'Distributional information, or base-rate data, consist
of knowledge about the distribution of outcomes in
similar situations. In predicting the sales of a new
novel, for example, what one knows about the author, the
style, and the plot is singular information, whereas
what one knows about the sales of novels is
distributional information. Similarly, in predicting the
longevity of a patient, the singular information
includes his age, state of health, and past medical
history, whereas the distributional information consists
of the relevant population statistics. The singular
information consists of the relevant features of the
problem that distinguish it from others, while the
distributional information characterises the outcomes
that have been observed in cases of the same general
class. The present concept of distributional data does
not coincide with the Bayesian concept of a prior
probability distribution. The former is defined by the
nature of the data, whereas the latter is defined in
terms of the sequence of information acquisition.

The tendency to neglect distributional information and
to rely mainly on singular information is enhanced by
any factor that increases the perceived uniqueness of
the problem. The relevance of distributional data can be
masked by detailed acquaintance with the specific case
or by intense involvement with it........

The prevalent tendency to underweigh or ignore
distributional information is perhaps the major error of
intuitive prediction. The consideration of
distributional information, of course, does not
guarantee the accuracy of forecasts. It does, however,
provide some protection against completely unrealistic
predictions. The analyst should therefore make every
effort to frame the forecasting problem so as to
facilitate utilising all the distributional information
that is available to the expert.'

A. Tversky & D. Kahneman (1983)
Extensional Versus Intuitive Reasoning: The Conjunction
Fallacy in Probability Judgment
Psychological Review v90(4) 1983
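
The practical force of the distinction can be shown with a small
numerical sketch. The corrective step below (regressing a
singular-information estimate towards the distributional base rate in
proportion to its predictive validity) is of the general kind Kahneman
and Tversky have recommended elsewhere for intuitive prediction; the
figures and the validity weight are invented purely for illustration
and are not taken from the quoted paper.

#include <stdio.h>

/*
 * Illustrative only.  Distributional information: the base rate of
 * the outcome in the relevant reference class.  Singular information:
 * an intuitive estimate formed from the details of the individual
 * case.  A simple corrective procedure regresses the intuitive
 * estimate towards the base rate in proportion to its predictive
 * validity (0 = worthless, 1 = perfectly valid).  Numbers invented.
 */
static double regressed_prediction(double base_rate,
                                   double intuitive_estimate,
                                   double validity)
{
    return base_rate + validity * (intuitive_estimate - base_rate);
}

int main(void)
{
    double base_rate = 0.52;  /* expected rate for the reference class */
    double intuitive = 0.10;  /* case looks very favourable on interview */

    /* With weakly valid singular evidence, the forecast should stay
     * close to the distributional figure; ignoring the base rate
     * amounts to setting validity = 1.                               */
    printf("validity 0.2 -> %.2f\n",
           regressed_prediction(base_rate, intuitive, 0.2));
    printf("validity 1.0 -> %.2f\n",
           regressed_prediction(base_rate, intuitive, 1.0));
    return 0;
}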


'The possession of unique observational capacities
clearly implies that human input or interaction is often
needed to achieve maximal predictive accuracy (or to
uncover potentially useful variables) but tempts us to
draw an additional, dubious inference. A unique capacity
to observe is not the same as a unique capacity to
predict on the basis of integration of observations. As
noted earlier, virtually any observation can be coded
quantitatively and thus subjected to actuarial analysis.
As Einhorn's study with pathologists and other research
shows, greater accuracy may be achieved if the skilled
observer performs this function and then steps aside,
leaving the interpretation of observational and other
data to the actuarial method.'

R. Dawes, D. Faust and P. Meehl (1989)
ibid.

--
David Longley


Jim Balter

unread,
Nov 24, 1996, 8:00:00 AM11/24/96
to

David Longley wrote:
>
> This is a a critique of a stance in contemporary psychology, and
> for want of a better term, I have characterised that as
> "intensional". As a corrective, I outline what may be described
> as "The Extensional Stance" which draws heavily on the
> naturalized epistemology of W.V.O Quine.

And hundreds of lines more of off-topic stuff. I apologize to these
groups for provoking the obviously disturbed Mr. Longley into posting
this inappropriate material. He has been doing so for over a year in
comp.ai.philosophy, but apparently has decided to branch out.
Hopefully he can reel himself back in and limit himself once again
to comp.ai.philosophy, where we are used to him and his autistic ways,
and have learned to tolerate his particular flavor of spam.

--
<J Q B>

Graham Hughes

unread,
Nov 24, 1996, 8:00:00 AM11/24/96
to

smr...@netcom.com (!@?*$%) writes:

>> Gee, then I guess I better stop linking the Boehm/Demers "Conservative
>> Garbage Collector for C and C++" with my C programs, hadn't I? I had
>> *thought* it was just a runtime library, but I guess I just didn't realize
>> it was "impossible". ;-} ;-}

>It is impossible, in general.

>You're just not writing general enough programs. Few people are that sadistic.

Bullshit. You've never actually *looked* at the Boehm/Demers GC, have
you? Or at Great Circle? Both are general garbage collectors, and both
will point you to memory leaks you're not otherwise handling. If they
don't handle everything, they handle more than enough.

Geez, *look* at the stuff before you make silly comments.
--
Graham Hughes (graham...@resnet.ucsb.edu)
alt.PGPlike-key."gra...@A-abe.resnet.ucsb.edu".finger.look.examine
alt.homelike-page."http://A-abe.resnet.ucsb.edu/~graham/".search.browse.view
alt.silliness."http://www.astro.su.se/~robert/aanvvv.html".look.go.laugh

!@?*$%

unread,
Nov 24, 1996, 8:00:00 AM11/24/96
to

> Bullshit. You've never actually *looked* at the Boehm/Demers GC, have
> you? Or at Great Circle? Both are general garbage collectors, and both
> will point you to memory leaks you're not otherwise handling. If they
> don't handle everything, they handle more than enough.
>
> Geez, *look* at the stuff before you make silly comments.


Fantastic! They finally solved that halting problem! This is great news!


(Remembering, of course, that C and C++, unlike Lisp, permit arbitrary
transformations on pointer values, which a collector can, apparently,
always transform back into a valid pointer.)
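
For readers outside this subthread, the technical point is easy to
show in a few lines of C. A conservative collector such as
Boehm/Demers scans memory for words that *look like* pointers; C lets
a program transform a pointer into something that no longer looks like
one and later transform it back, by which time the collector may
already have reclaimed the object. The sketch below uses the real
Boehm GC entry points GC_INIT, GC_MALLOC and GC_gcollect (the header
path may vary by installation), but it is only a sketch of the failure
mode, not a claim about how often it bites in practice.

#include <stdio.h>
#include <stdint.h>
#include <gc.h>          /* Boehm/Demers/Weiser conservative collector */

#define DISGUISE_KEY 0x5a5a5a5a5a5a5a5aULL

int main(void)
{
    GC_INIT();

    /* Allocate a collectable object and then hide the only reference
     * to it by XOR-ing the pointer with a key.  The stored value no
     * longer looks like a pointer into the GC heap, so a conservative
     * scan of the stack/data segments may find no reference at all.  */
    uintptr_t hidden = (uintptr_t)GC_MALLOC(64) ^ (uintptr_t)DISGUISE_KEY;

    GC_gcollect();       /* the object may be reclaimed here */

    /* Undoing the transformation yields a bit pattern that is once
     * again a "valid looking" pointer -- possibly to freed memory.   */
    char *p = (char *)(hidden ^ (uintptr_t)DISGUISE_KEY);
    p[0] = 'x';          /* undefined behaviour if the object was collected */
    printf("%c\n", p[0]);
    return 0;
}

In Lisp the runtime knows exactly which words are references, so this
kind of disguise cannot arise; that, rather than the halting problem,
seems to be what the "impossible, in general" claim comes down to.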

G.P. Tootell

unread,
Nov 24, 1996, 8:00:00 AM11/24/96
to

what is this crap david and what's it doing in an msdos newsgroup?
kindly keep it to the ai groups please instead of spreading it all over the
place. if we're interested we'll subscribe to comp.ai.*

nik


Da...@longley.demon.co.uk (David Longley) spammed :

|> It will help if an idea of what we mean by 'clinical' and 'actuarial'
|> judgement is provided. The following is taken from a an early (Meehl
|> 1954), and a relatively recent review of the status 'Clinical vs.
|> Actuarial Judgement' by Dawes, Faust and Meehl (1989):
|>

a whole bunch of crap pasted out of some book or something.

|> --
|> David Longley
|>

--
* putting all reason aside you exchange what you've *
* got for a thing that's hypnotic and strange *

David Longley

unread,
Nov 24, 1996, 8:00:00 AM11/24/96
to

In article <329809...@netcom.com> j...@netcom.com "Jim Balter" writes:

> David Longley wrote:
> >
> > This is a a critique of a stance in contemporary psychology, and
> > for want of a better term, I have characterised that as
> > "intensional". As a corrective, I outline what may be described
> > as "The Extensional Stance" which draws heavily on the
> > naturalized epistemology of W.V.O Quine.
>

> And hundreds of lines more of off-topic stuff. I apologize to these
> groups for provoking the obviously disturbed Mr. Longley into posting
> this inappropriate material. He has been doing so for over a year in
> comp.ai.philosophy, but apparently has decided to branch out.
> Hopefully he can reel himself back in and limit himself once again
> to comp.ai.philosophy, where we are used to him and his autistic ways,
> and have learned to tolerate his particular flavor of spam.
>
> --
> <J Q B>
>

I am indeed "disturbed", and I think many others should be too.
However, there's nothing "autistic" about what I have been
attempting to draw attention to, and if it draws the attention of
even a few third generation programmers and a few folk in AI, I
think the intrusion will have been justified.

There *is* an important applied context for all that I have
posted so far, and it's an important one (however construed), not
only for psychologists, but also for programmers (of both sorts).

So long as Balter does not follow-up with yet more of his inane
pseudo-intellectual rubbish, I'll let this thread end with this
posting.

REGIMES, ACTIVITIES & PROGRAMMES:
WHAT WORKS & WHAT CAN BE EFFECTIVELY MANAGED:

An illustrative Analysis with Implications for Viable Behaviour Science

1. What Works? Cognitive Skill Programmes or Structured Regimes &
Attainment?

'The primary reason for the impact of "What Works?" is
the extraordinary gap between the claims of success made
by proponents of various treatments and the reality
revealed by good research.'

Robert Martinson (1976)
California Research at the Crossroads
Crime & Delinquency, April 1976, pp.63-73

It may help to look closely at some recent views and analyses of 'What
Works', in the area of inmate rehabilitation. Here is what Martinson
(1976) had to say about the common defences of the 'efficacy' of
programmes:

'Palmer's critique of "What Works?" is a strong defense
of what was best in the California tradition of
"recidivism only" research; it is also a stubborn
refusal to take the step forward from that kind of
thinking to the era of "social planning" research.

The primary reason for the impact of "What Works?" is
the extraordinary gap between the claims of success made
by proponents of various treatments and the reality
revealed by good research.

Palmer bases his critique on grounds of research
methods. In doing so, he makes an interpretation error
by construing as "studies" the "efforts" Martinson
mentions in his conclusion. In fact, "effort" represents
an independent variable category; this use of the term
does not justify Palmer's statement that Martinson
inaccurately described individual studies, whose results
have been favorable or partially favorable, as being few
and isolated exceptions. The table in which Palmer
tabulates 48 percent of the research studies as having
at least partially positive results is meaningless; it
includes findings from studies of "intervention" as
dissimilar as probation placement and castration. Palmer
does not understand the difficulties of summarising a
body of research findings. The problem lies in drawing
together often conflicting findings from individual
studies which differ in the degree of reliance that can
be placed on their conclusions. It is essential to weigh
the evidence and not merely count the studies. The real
conclusion of "What Works?" is that the addition of
isolated treatment elements to a system in which a given
flow of offenders has generated a gross rate of
recidivism has very little effect in changing this rate
of recidivism.

To continue the search for treatment that will reduce
the recidivism rate of the "middle base expectancy"
group or that will show differential effects for that
group is to become trapped in a dead end. The essence of
the new "social planning" epoch is a change in the
dependent variable from recidivism to the crime rate
(combined with cost). The public does not care whether a
program will demonstrate that the experimental group
shows a lower recidivism rate than a control group;
rather, it wants to know whether the program reduced the
overall crime rate. To ask "which methods work best for
which types of offenders and under what conditions or in
what types of settings" is to impose the narrowest of
questions on the research for knowledge. The economists,
too, do not live in Palmer's world of "types" of
offenders. To them, recidivism is an aspect of
occupational choice strengthened by the atrophy of
skills and opportunity for legitimate work that occurs
during the stay in prison.

The aim of future research will be to create the
knowledge needed to reduce crime. It must combine the
analytical skills of the economist, the jurisprudence of
the lawyer, the sociology of the life span, and the
analysis of systems. Traditional "evaluation" will play
a modest but declining role.'

Robert Martinson
California Research at the Crossroads
Crime & Delinquency, April 1976, pp.63-73 (my emphasis)

Martinson's sanguine recommendation is for more work and less
rhetoric, and that work will, as he says, depend on our establishing,
and analyzing the results of better systems. The same cautious
remarks were made by Lab and Whitehead (1990) in response to the
analysis of Andrews et. al. As recently as 1990, researchers
attempting to identify 'what works' from meta-analyses of published
research on programmes produced the following (it should be noted that
the analysis offered here presents a significantly different picture
to that presented by Nuttall in the 1992 seminar referenced above (see
Annex C):

'Even without applications of the principles of risk and
need, the behavioral aspect of the responsivity
principle was supported. The mean phi coefficient for
behavioral service was 0.29 (N=41) compared with an
average phi of .04 (N=113) for nonbehavioral
interventions overall and with 0.07 (N=83) for
nonbehavioral treatments when criminal sanctions were
excluded...

We Were Not Nice to Guided Group Interaction &
Psychodynamic Therapy:

We reported empirical tests of guided group interaction
and psychodynamic therapy that yielded negative mean phi
estimates and, in response, Lab and Whitehead cited
rhetoric favorable to treatments that even their review
had found ineffective. Ideally, research findings have
the effect of increasing or decreasing confidence in an
underlying theory. Reversing that ideal of the
relationship between theory and research, Lab and
Whitehead use theory to refute research findings
unfavorable to treatments that they apparently prefer,
and they use theory to reject research findings
favorable to treatments that apparently they find less
attractive.'

Andrews et al. (1990)
A Human Science Approach or More Punishments and
Pessimism: A Rejoinder to Lab and Whitehead -
Criminology, 28,3 1990 419-429

Note that of the 154 tests of correctional treatment surveyed by
Andrews et al (1990), the division into juvenile and adult across the
four types of treatment were as follows:

JUVENILE ADULT - - 0 + +
1. CRIMINAL SANCTIONS 26 4 3 1
2. INAPPROPRIATE CORRECTIONAL SERVICE 31 7 1 3 2 1
3. UNSPECIFIED CORRECTIONAL SERVICE 29 3 1 1 1
4. APPROPRIATE CORRECTIONAL SERVICE 45 9 1 8
------ -----
131 23
a = Significant NEGATIVE Phi
b = Negative Phi
c = 0 Phi
d = Positive Phi
e = Significant POSITIVE Phi

The authors classified all of their studies into one of the above four
types to bring home their point that if programmes are analyzed as to
whether or not they are appropriately targeted etc., it becomes
easier to ascertain whether anything does in fact work.

A negative Phi coefficient indicates that, if anything, the control
group did better than the treatment group. A significant negative Phi
indicates that this trend was statistically significant, ie that the
treatment group did WORSE than the control group.
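
For readers unfamiliar with it, the Phi coefficient reported below is
simply the Pearson correlation computed on a 2x2 table (treatment
vs. control against reconvicted vs. not reconvicted). A minimal sketch
of the calculation in C follows; the cell counts in the example are
reconstructed approximately from the percentages reported for the
Ross, Fabiano and Ewles study listed below (18% of 22 treated and 70%
of 23 controls reconvicted), so the result should come out close to
the reported 0.52, give or take rounding.

#include <stdio.h>
#include <math.h>

/*
 * Phi coefficient for a 2x2 table:
 *
 *                      outcome absent   outcome present
 *   treatment group          a                b
 *   control group            c                d
 *
 * phi = (a*d - b*c) / sqrt((a+b)*(c+d)*(a+c)*(b+d))
 *
 * With "outcome" = reconviction, a positive phi means the treatment
 * group did better (fewer reconvictions) than the control group.
 */
static double phi(double a, double b, double c, double d)
{
    return (a * d - b * c) /
           sqrt((a + b) * (c + d) * (a + c) * (b + d));
}

int main(void)
{
    /* Approximate cell counts reconstructed from the reported
     * percentages for Ross et al. (1988).                          */
    double treated_ok = 18, treated_reconv = 4;
    double control_ok = 7,  control_reconv = 16;

    printf("phi = %.2f\n",
           phi(treated_ok, treated_reconv, control_ok, control_reconv));
    return 0;
}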

Of the 23 adult programmes, 9 resulted in positive significant Phi
coefficients, ie where the treatment groups did better than controls.
These 9 studies are examined more closely below. Each of the 9
studies is listed along with its Phi coefficient. Listed also are the
percentages reconvicting in the treatment and control groups. Size of
these two groups is also listed, as is the setting of the programme.

A. Appropriateness Uncertain on Targets/Style

1. Walsh A. (1985) An evaluation of the effects of adult basic
education on rearrest rates amongst
probationers.
J. of Offender Counselling, Services &
Rehabilitation
Phi = 0.21
24% Treatment Group (N=50) Reconvicted vs. 44% of Control Group (N=50)
Setting = COMMUNITY

B. Structured One-on-one Paraprofessional/peer Program

2. Andrews D.A (1980) Some Experimental investigations of the
principles
of differential association through deliberate
manipulations of the structure of service
systems
American Sociological Review 45,448-462
Phi = 0.15
15% of Treatment Group (N=72) Reconvicted vs. 28% of Controls (N=116)
Setting = COMMUNITY

C. Intensive Structured Skill Training

3. Ross R.R., Fabiano E.A. & Ewles C.D. (1988)
   Reasoning and rehabilitation.
   International Journal of Offender Therapy and
   Comparative Criminology

Phi = 0.52
18% of Treatment Group (N=22) reoffended vs. 70% of control group
(N=23)
Setting = COMMUNITY

4. Same Study

Phi = 0.31
18% of treatment group (N=22) reoffended vs. 47% of control group
(N=17)
Setting = COMMUNITY

5. Dutton D.G (1986) The outcome of court-mandated treatment for
wife
assault: A quasi-experimental evaluation
Violence and Victims 1:163-175
Phi = 0.43
4% of treatment group (N=50) reoffended vs. 40% of control group
(N=50)
Setting = COMMUNITY

D. Appropriately Matched According to Risk/Responsivity or Need
Systems

6. Baird S.C, (1979) Project Report #14: A Two Year Follow-up
Heinz R.C. & Bureau of Community Corrections, Wisconsin
Bemus B.J Department of Health and Social Services

Phi = 0.17
16% of treatment group (N=184) reoffended vs. 30% of control group
(N=184)
Setting = COMMUNITY

7. Andrews D.A, (1986) The risk principle of case classification: An
Kiessling J.J, evaluation with young adult probationers.
Robinson D. & Canadian Journal of Criminology 28,377-396
Mickus S.

Phi = 0.31
33% of treatment group (N=54) reoffended vs. 75% of control group
(N=12)
Setting = COMMUNITY

8. Andrews D.A, (1980) Program structure an effective correctional
& Kiessling J.J practices: A summary of the CaVIC research

Phi = 0.82
0% of treatment group (N=11) reoffended vs. 80% of control group
(N=10)
Setting = COMMUNITY

9. Same study
Phi = 0.27
31% of treatment group (N=34) reoffended vs. 58% of control group
(N=23)

NOTE - all 9 studies which had positive Phi coefficients for the adult
programmes were conducted in the COMMUNITY, not in custodial
settings. All 9 programmes were classified as 'Probation, Parole,
Community' (PPC)

In fact, Andrews et al. (1990) say:

'The minor but statistically significant adjusted main
effect of setting is displayed in column six of Table 1.
This trend should not be overemphasized, but the
relatively weak performance of appropriate correctional
service in residential facilities is notable from Table
2 (mean phi estimate of .20 compared with .35 for
treatment within community settings, F[1/52] = 5.89,
p<.02). In addition, inappropriate service performed
particularly poorly in residential settings compared
with community settings (-.15 versus -.04, F[1/36] =
3.74, p<.06). Thus, it seems that institutions and
residential settings may dampen the positive effects of
appropriate service while augmenting the negative impact
of inappropriate service. This admittedly tentative
finding does not suggest that appropriate correctional
services should not be applied in institutional and
residential settings. Recall that appropriate service
was more effective than inappropriate service in all
settings.'

Andrews et. al (1990) ibid p384

In England, policy both in the area of inmate programmes and Sentence
Planning is primarily focusing on convicted adult males serving long
sentences. The data cited above speaks for itself. One may or may not
agree with Andrews et. al. in their interpretation of these results.
Cited below are some further figures drawn from the Andrews paper
which should help clarify implications for the thesis being developed
in this paper.

Note that there were only 23 Adult studies. Of these, only 5 were in
residential or institutional settings. Of these 5, 4 produced negative
Phi coefficients, three of them significant (-.18,-.17, and -.14).
The fifth programme, which was the only one the authors classed as
appropriate, produced a non-significant Phi of 0.09.

This suggests that, at least in terms of the available adult studies,
the only significant findings are that adult programmes in
institutional/residential settings have, if any effect, only a
*deleterious* one on the likelihood of reconviction!

Here is the summary of average Phi coefficients from all of the
studies examined in the Andrews et. al. meta-analysis:

CORRECTIONAL SERVICE
Criminal Inapp. Unspec. Appropriate
Sanctions
Sample of Studies
Whitehead & Lab -.04 (21) -.11 (20) .09 (16) .24 (30)
Sample 2 -.13 ( 9) -.02 (18) .17 (16) .37 (24)
Justice System
Juvenile -.06 (26) -.07 (31) .13 (29) .29 (45)
Adult -.12 ( 4) -.03 ( 7) .13 ( 3) .34 ( 9)
Year of Publication
Before 1980s -.16 (10) -.09 (22) .17 (11) .24 (33)
1980s -.02 (20) -.03 (16) .11 (21) .40 (21)
Quality of Research Design
Weaker -.07 (21) -.04 (10) .15 (18) .32 (26)
Stronger -.07 ( 9) -.08 (22) .11 (14) .29 (28)
Setting
Community -.05 (24) -.14 (31) .12 (27) .35 (37)
Institution/Res. -.14 ( 6) -.15 ( 7) .21 ( 5) .20 (17)
Behavioral Intervention
No -.07 (24) -.14 (31) .12 (27) .35 (37)
Yes - -.09 ( 2) .23 ( 1) .31 (38)
Overall Mean Phi -.07 (30) -.06 (38) .13 (32) .30 (54)
S.D. .14 .15 .16 .19
Mean Phi Adjusted for -.08 (30) -.07 (38) .10 (32) .32 (54)
Other Variables

In summary, the 9 adult programmes which had positively significant
Phi coefficients were all run in the COMMUNITY, they were not run in
prisons. In fact, of the 5 adult institutional/residential programmes,
3 produced significant NEGATIVE Phi coefficients, 1 produced a non-
significant negative Phi, and the fifth a Phi of 0.09. Three of the 5
were classed by Andrews et. al. under CRIMINAL SANCTIONS (two with
significant negative Phi coefficients), 1 under INAPPROPRIATE
CORRECTIONAL SERVICE (also a significant negative Phi). The only 1 to
be classified under APPROPRIATE CORRECTIONAL SERVICE was a study by
Grant & Grant (1959) entitled 'A Group Dynamics approach to the
treatment of nonconformists in the navy', a programme which produced a
Phi of 0.09 (treated group 29% reconvicted (N=135) vs. 38% of the
untreated (N=141).

On the juvenile side, there were 30 institutional/Residential
programmes, (there were only 35 institutional or residential
programmes in the meta-analysis of 154 programmes). These 30 are
presented below:

a b c d e
JUVENILE - - 0 + +
CRIMINAL SANCTIONS 3 2 1
INAPPROPRIATE CORRECTIONAL SERVICE 6 2 2 2
UNSPECIFIED CORRECTIONAL SERVICE 5 4 1
APPROPRIATE CORRECTIONAL SERVICE 16 1 1 4 10
------
30

Of these 30 programmes, there were 11 which produced significant
negative Phi coefficients.

A. Token Economies
5 significant positive Phi coefficients,
(Note that there were also two programmes with negative phi
coefficients in this area, one significantly negative)

B. Individual/Group Counselling
1 positive significant Phi

C. Intensive Structured Skill Training
2 significant positive Phi coefficients

D. Structured one-on-one Paraprofessional/Peer Program
1 positive significant Phi

E. Family Therapy
1 positive significant Phi

UNSPECIFIED CORRECTIONAL SERVICE

Months served in programme
1 positive significant Phi

The implications for the efficacy (and therefore resourcing) of adult
programmes within residential or institutional settings are not
promising on the basis of the studies reviewed by Andrews et al. 1990.
As far as rehabilitation for adult prisoners is concerned, their study
provides little direct evidence to support a renewed faith in
rehabilitative programmes as conventionally implemented.

What their study can be taken to suggest as being worthwhile is
improvements in how we structure what we do, so that we can begin to
work towards apposite allocation of inmates to appropriate activities
or settings.

Over recent years, there have been some moves to introduce formal
'Cognitive Skills' programmes both in the Canadian Correctional System
(Porporino et. al. 1991) and more recently, within the English
system. Empirical studies to date have focused on very small numbers
in treatment and 'comparison' groups, and have produced equivocal
results when the dependent variable is taken as reconviction rate.

2. What Works in Special Cognitive Skills Programmes

In brief, and based on the *published data* at the time of writing
(1993), efficacy of the Canadian Cognitive Skills Training can not
be described as robust, nor can it be said that the programme's
content per se significantly influences recidivism. The objective here
is not to be negative: there may be further unpublished evidence which
puts the programme in a more favourable light. However, I think it
important to point out that on the basis of the evidence reported in
the Porporino et al (1991) paper we should look at the claims made for
the efficacy of the Cognitive Skills programme with a degree of
caution. The published results can in fact be taken to suggest
something equally positive if considered from the alternative
perspective of Sentence Management as outlined in "The Implications of
Recent Research for The Development of Programmes and Regimes" (1992).

The Porporino et al. paper suggests that those who are motivated to
change (those who volunteered) did almost as well as those who
actually participated in the Cognitive Skills programme. If this is
true, it would seem to be further justification for adopting the
Attainment based Sentence Management system as an infrastructure for
Inmate Programmes. If further evidence can be drawn upon to
substantiate the published claims for the efficacy of 'Cognitive
Skills', that evidence could be used to support the proposed strategy
of an integrated use of the natural demands of all activities and
routines to inculcate new skills in social behaviour and problem
solving. Sentence Management is designed to provide a prison service
with a means of integrating all of the currently used assessment
systems in use across activities. It is important to appreciate that
the criteria it looks to assess inmates with respect to, are the very
criteria which activity supervisors are already using to assess
inmates, be these NVQ Performance Criteria, or the 'can do' statements
of an RSA english course. Attainment Criteria per se, can not
therefore be dismissed lightly. The Sentence Management system is
designed to enable staff throughout the system to pull together
assessment material *in a common format*, it has not been designed to
ask anything new of such staff, although they can add additional
criteria to those they already use if they wish.

Effective programmes must produce evidence of behaviour change, and
not merely self-report, or changes in verbal behaviour. For this, one
requires measures of attainment with respect to the preset skill
levels which programme staff have been contracted to deliver. All
programmes must have predetermined goals or objectives and these can
be specified independently of any participating inmates. If there is
evidence that the special programmes approach has particular merit
which exempts it from the remarks made so far (which, from the review
of programmes below, must be viewed with caution), we should not lose
sight of the fact that special programmes are likely to be seen as
treatment programmes, and that they can only occupy inmates for a
small proportion of their time in custody. If there is evidence to
justify the efficacy of special programmes addressing how inmates
think, we should look carefully to what education and other skill
based programmes are designed to deliver. There is much to be said for
adopting an approach to inmate programmes which is education, rather
than treatment based, and one which looks to all that the regime has
to offer as an infrastructure.

The following is how the Canadian group describe the objectives of
their 'Cognitivist' approach:

'The basic assumption of the cognitive model is that the
offender's thinking should be a primary target for
offender rehabilitation. Cognitive skills, acquired
either through life experience or through intervention,
may serve to help the individual relate to his
environment in a more socially adaptive fashion and
reduce the chances of adopting a pattern of criminal
conduct.

Such a conceptualization of criminal behaviour has
important implications for correctional programming. It
suggests that offenders who are poorly equipped
cognitively to cope successfully must be taught rather
than treated. It suggests that emphasis be placed on
teaching offenders social competence by focusing on:

thinking skills, problem-solving and decision making;

general strategies for recognizing problems, analyzing
them, conceiving alternative non-criminal solutions to
them;

ways of thinking logically, objectively and rationally
without overgeneralizing, distorting facts, or
externalizing blame;

calculating the consequences of their behaviour - to
stop and think before they act;

to go beyond an egocentric view of the world and
comprehend and consider the thoughts and feelings of
other people;

to improve interpersonal problem-solving skills and
develop coping behaviours which can serve as effective
alternatives to anti-social or criminal behaviour;

to view frustrations as problem-solving tasks and not
just as personal threats;

to develop a self-regulatory system so that their pro-
social behaviour is not dependent on external control.

to develop beliefs that they can control their life;
that what happens to them depends in large measure on
their thinking and the behaviour it leads to.

To date we have been able to examine the outcome of 40
offenders who had been granted some form of conditional
release and were followed up in the community for at
least six months. On average, the follow up period was
19.7 months. We also gathered information on the outcome
of a comparison group of 23 offenders who were selected
for Cognitive Skills Training but had not participated.
These offenders did not differ from the program
participants on a number of characteristics and were
followed-up for a comparable period of time.

........................................offenders in the
treatment group were re-admitted for new convictions at
a lower rate than the comparison group during the
follow-up period. Specifically, only 20% of the
treatment group were re-admitted for new convictions
compared to 30% of the offenders in the comparison
group. It is interesting to note that the number of
offenders who were returned to prison without new
convictions (eg technical violations, day-parole
terminations) is similar yet marginally larger in the
treatment group. It is possible that the Cognitive
Skills Training participants may be subjected to closer
monitoring because of expectations regarding the
program.'

Porporino, Fabiano and Robinson
Focusing on Successful Reintegration: Cognitive Skills
Training for Offenders July 1991

'Fragments of Behaviour: The Extensional Stance', extracted from 'A
System Specification for PROfiling BEhaviour', presents a substantial
body of evidence drawn from mainstream research in the psychology of
reasoning which reveals that many of the above statements are in fact
a highly contentious set of propositions on the basis of established
empirical data.

Not only is their theoretical stance dubious on the basis of
mainstream research, but the authors tell us that seven of the 23 in
the comparison group were reconvicted for a new offence, whilst eight
of the 40 offenders in the treatment group were reconvicted for a new
offence. However, looking at returns to prison for violations of
parole etc., the authors say:

'It is interesting to note that the number of offenders
who were returned to prison without new convictions (eg
technical violations, day-parole terminations) is
similar yet marginally larger in the treatment group'.

Furthermore, when the authors compared the predicted reconviction rate
(52%) for these groups with the actual rates (20% and 30% for the
treatment and comparison groups respectively) the low rate of
reconviction in the comparison group led them to conclude:

'motivation for treatment in and of itself may be
influential in post-release success'.

In fact, the conclusion can be stated somewhat more strongly. Imagine
this was a drugs trial. The comparison group, like the treatment
group are all volunteers. They all wanted to be in the programme, they
all, effectively, wanted to take the tablets. Some, however, didn't
get to join the programme, they didn't 'get to take the tablets', but
other than that did not differ from the treatment group. In the
Porporino study, those inmates comprised the comparison group. When
the reconviction data came in, it showed that those in the comparison
group were pretty much like those in the treatment group. The
treatment, ie 'the Cognitive Skills' training, had virtually no
effect. The comparison group is remarkably like the treatment group in
not being reconvicted for a new offence. In fact, if five of the
comparison group had reconvicted rather than seven, the reconviction
rate would have been the same (20%) for both groups.

TREATMENT COMPARISON
Readmissions with New Convictions 20% 30.4%
(8/40) (7/23)

Readmissions without New Convictions 25% 21.7%
(10/40) (5/23)

No Readmissions 55% 47.9%
(22/40) (11/23)
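
As a rough check on the arithmetic, the raw counts in the table above
can be pushed through the same Phi-from-a-2x2-table calculation
sketched earlier for the Andrews et al. studies. This is purely an
illustrative computation on the published counts, not a figure
reported by Porporino et al.; on these numbers the association between
treatment and avoiding a new conviction is small (a Phi in the region
of 0.1), which is consistent with the reservations expressed here.

#include <stdio.h>
#include <math.h>

/* Same phi-from-2x2 calculation as sketched earlier, applied (purely
 * for illustration) to the Porporino et al. counts for readmission
 * with a new conviction: 8 of 40 treated vs. 7 of 23 comparison.   */
static double phi(double a, double b, double c, double d)
{
    return (a * d - b * c) /
           sqrt((a + b) * (c + d) * (a + c) * (b + d));
}

int main(void)
{
    double treated_ok = 32, treated_new_conv = 8;    /* 40 treated    */
    double compar_ok  = 16, compar_new_conv  = 7;    /* 23 comparison */

    printf("new-conviction rates: %.1f%% vs %.1f%%\n",
           100.0 * treated_new_conv / 40.0,
           100.0 * compar_new_conv  / 23.0);
    printf("phi = %.2f\n",
           phi(treated_ok, treated_new_conv, compar_ok, compar_new_conv));
    return 0;
}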

Apart from the fact that the numbers being analyzed are extremely
small, the fact that the authors take these figures to justify
statements that Cognitive Skills, ie an intensive 8-12 week course,
focusing on what inmates 'think', a course that focuses apparently on
changing 'attitudes' rather than 'teaching new/different behaviours',
is causally efficacious in bringing about reductions in recidivism is
questionable. The comparison group it must be appreciated, were all
volunteers, only differing from the treatment group in that they did
not get to participate in the programme. But only 30% of them (7/23)
were reconvicted for a new offence, compared to 20% (8/40) in the
treatment group. Compared to the expected reconviction rate for those
in either group (52%) might reasonably be led to the conclusion that
those in the comparison group did very well compared to those who
actually participated in the programme. The above pattern of results
casts some doubt as to how important the content of the Cognitive
Skills Programme was at all. The fact that the percentages in the 'No
Readmissions' and the 'Readmissions Without New Reconvictions' lends
support to this view.

These (Canadian) studies have also presented evidence for short term
longitudinal changes in 'Cognitive Skills' performance for those
participating in the programme (and somewhat surprisingly, sometimes
in the comparison groups). These changes may however be comparable in
kind to the changes observed in the more formal education studies
surveyed by Nisbett et al. (1987). The whole notion of formal training
in abstract 'Cognitive Skills' might in fact be profitably critically
evaluated in the context of such research programmes, along with the
more substantial body of research into the heuristics and biases of
natural human judgment ('commonsense') in the absence of
distributional data. Other studies, e.g. McDougall, Barnett, Ashurst
and Willis (1987), although more sensitive to some of the
methodological constraints in evaluating such programs, still give
much greater weight to their conclusions than seems warranted by
either the design of their study or their actual data. For instance,
in the above study, an anger control course resulted in a
'significant' difference in institutional reports in a three month
follow up period at the p<0.05 level using a sign test. However, apart
from methodological problems, acknowledged by the authors, the
suggestion that the *efficacious component* was cognitive must, in the
light of the arguments of Meehl (1967;1978) and others, on the simple
logic of hypothesis testing, be considered indefensible. On the basis
of their design, one might (cautiously) suggest that there is some
evidence that participating in the programme had some effect
(possibly, as p<0.05), but precisely what it was within the course
which was efficacious can not be said given the design of the study.
As readers will come to appreciate, this is a pervasive problem in
social science research, and is yet another example of 'going beyond
the information given' (Bruner 1957;1974). The force of Meehl's and
Lakatos' arguments in the light of such failures to refrain from
inductive speculation on the basis of minimal evidence should not be
treated lightly. It is a problem which has reached epidemic
proportions in psychology as many of the leading statisticians now
lament (Guttman 1985, Cohen 1990), the above studies are in fact quite
representative of the general failure of psychologists as a group to
appreciate the limits of the propositional as opposed to the predicate
calculus as a basis for their methodology. Most of the designs of
experiments adopted do not allow researchers to draw the conclusions
that they do from their studies. In the above study, the best one
could say is that behaviour improved for those inmates who
participated in a program. Logically, one simply cannot say more.

3. An Alternative: Sentence Management & Actuarial Analysis of
Attainment

At the same time that 'Cognitive Skills' programmes are being
developed in the English Prison system, an attempt is being made to
introduce a naturalistic approach to behavioural skill development and
assessment ('cognitive skills' being but one class of these
behaviours). Such skills are generally taught within education, as
elements of particular Vocational Training or Civilian Instructor
supervised training courses, NVQs or even some of the domestic
activities such as wing cleaning. This is the system of 'Sentence
Management' which looks to inculcate skills under the relatively
natural conditions of inmate activities and the day to day routines.
Systematic work over the past three years has generated a timetable
for the deployment of psychologists whereby attainment data can be
routinely collected from all areas of the regimes on a weekly basis,
automatically analysed and converted into inmate reports, used to
generate incentive levels, and to enable staff to identify norms and outliers
suitable as candidates for behavioural contract negotiation and
monitoring. Through an explicit and auditable combination of
continuous assessment of behaviour, target negotiation, contracting
and apposite allocation of inmates, the system aims to maximise the
transfer of acquired skills by teaching for transfer (Gladstone
1989) and by compensating for deficits.
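
The data-handling side of such a system is, in principle,
straightforward. The fragment below is only a schematic sketch (the
record layout, the scores and the 1.5-standard-deviation flagging
threshold are invented for illustration and are not taken from the
PROBE specification): weekly attainment scores are pooled into a norm
and outliers flagged as candidates for contract negotiation.

/* Schematic only: weekly attainment scores are pooled, a norm is
 * computed, and outliers are flagged.  Field names, scores and the
 * 1.5-sigma threshold are illustrative assumptions. */
#include <stdio.h>
#include <math.h>

struct attainment {
    const char *inmate;
    double score;                    /* e.g. supervisor rating for the week */
};

int main(void)
{
    struct attainment week[] = {
        {"A", 3.2}, {"B", 3.0}, {"C", 3.4}, {"D", 1.1}, {"E", 3.1}, {"F", 2.9}
    };
    int n = sizeof week / sizeof week[0];

    double mean = 0.0, var = 0.0;
    for (int i = 0; i < n; i++) mean += week[i].score;
    mean /= n;
    for (int i = 0; i < n; i++)
        var += (week[i].score - mean) * (week[i].score - mean);
    double sd = sqrt(var / (n - 1));

    printf("weekly norm %.2f (sd %.2f)\n", mean, sd);
    for (int i = 0; i < n; i++) {
        double z = (week[i].score - mean) / sd;
        if (fabs(z) > 1.5)           /* flag candidates for contract negotiation */
            printf("outlier: inmate %s, score %.1f (z = %.1f)\n",
                   week[i].inmate, week[i].score, z);
    }
    return 0;
}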

The Porporino empirical data is quite consistent with the argument
that 'volunteers' for programmes make up a sub-population of inmates
motivated to attain, who simply because they are 'attainers', show a
difference in reconviction rate when compared to baseline predicted
rates. That is, what is observed in studies such as that by Porporino
et al (1991) in all likelihood has nothing to do with "Cognitive
Skills" course content. Rather, the pattern in the data substantially
supports the rationale behind the system of "Sentence Management"
outlined here, in the 1992 Directorate of Inmate Programmes Senior
Management seminar report and as empirically illustrated in volume 2
of "A System Specification for PROfiling BEhaviour" (Longley 1995).

This behaviour profiling and assessment system, outlined below, is
specifically designed to provide behaviour scientists and their
managers with a formal behaviour management infrastructure, one which
gives behaviour scientists an explicit professional role in the
measurement of positive "attainment" inculcated through the natural
contingencies afforded by the regime which has been selected by
Governors and their senior management teams.

As a behaviour profiling and assessment system it is designed to
shadow the structure of the regime, providing staff with routine
feedback on how the regime is operating through sensitive measures of
positive attainment. On this basis, the system provides the sine qua
non for effective inmate sentence management (and Key Performance
Indicator Monitoring), accommodating the expertise of all activity
supervisors contributing to the regime rather than risking a divisive
or disproportionate focus on innovative (but as yet unproven)
special programmes for inmates - an example being the division between
traditional educational courses and the innovative Cognitive or
Thinking Skills programmes.

See also:

"Fragments of Behaviour: The Extensional Stance"

Robert Rodgers

unread,
Nov 25, 1996, 8:00:00 AM11/25/96
to

smr...@netcom.com (!@?*$%) wrote:
>> Bullshit. You've never actually *looked* at the Boehm/Demers GC, have
>> you? Or at Great Circle? Both are general garbage collectors, and both
>> will point you to memory leaks you're not otherwise handling. If they
>> don't handle everything, they handle more than enough.
>>
>> Geez, *look* at the stuff before you make silly comments.
>
>Fantastic! They finally solved that halting problem! This is great news!

No, but (I'm just starting on this thread) they do make C++ memory
management a lot easier, usually faster, and trivial to debug with
essentially no programmer overhead.

>(Remembering, of course, that C and C++, unlike Lisp, permit arbitrary
>transformations on pointer values which a collector can, apparently, always
>transform back into a valid pointer.)

Of course, why one would store a pointer in an int, or XOR one pointer
against another, or any of the other bizarre tricks that C programmers
take glory in (instead of, e.g., writing solid code that does their
customers some good) is another issue entirely.

GC works.
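
To make the point concrete, here is a minimal sketch (using gc.h and
GC_MALLOC from the Boehm/Demers/Weiser collector; the XOR key and the
fragment itself are invented for illustration) of the kind of
disguised pointer a conservative collector cannot be expected to
trace:

/* Why disguised pointers defeat a conservative collector (sketch). */
#include <stdio.h>
#include <stdint.h>
#include <gc.h>

int main(void)
{
    GC_INIT();

    int *p = GC_MALLOC(sizeof *p);
    *p = 42;

    /* Hide the only reference: after this, no word in scanned memory
     * holds a bit pattern that points into the object. */
    uintptr_t disguised = (uintptr_t)p ^ 0xDEADBEEFu;
    p = NULL;

    GC_gcollect();                   /* collector may now reclaim the block */

    int *q = (int *)(disguised ^ 0xDEADBEEFu);   /* undo the disguise */
    printf("%d\n", *q);              /* may still print 42, may be garbage */
    return 0;
}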


rsr

www.wam.umd.edu/~rsrodger b a l a n c e
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
"If we let our friend become cold and selfish and exacting without a
remonstrance, we are no true lover, no true friend."
- Harriet Beecher Stowe

David Longley

unread,
Nov 25, 1996, 8:00:00 AM11/25/96
to

In article <57a30d$s...@lyra.csx.cam.ac.uk>
gp...@thor.cam.ac.uk "G.P. Tootell" writes:

> what is this crap david and what's it doing in an msdos newsgroup?
> kindly keep it to the ai groups please instead of spreading it all over the
> place. if we're interested we'll subscribe to comp.ai.*
>
> nik

If you want to know what it is I suggest you *read* it. As to *why* it's
posted to these groups, I would have thought that was quite clear. Headers
from the original abusive material have been retained so that my response
to the nonsense from Balter is APPROPRIATELY circulated.
>
>
> Da...@longley.demon.co.uk (David Longley) spammed :


>
> |> It will help if an idea of what we mean by 'clinical' and 'actuarial'
> |> judgement is provided. The following is taken from a an early (Meehl
> |> 1954), and a relatively recent review of the status 'Clinical vs.
> |> Actuarial Judgement' by Dawes, Faust and Meehl (1989):
> |>
>

> a whole bunch of crap pasted out of some book or something.
>

*That's* abusive too - I suggest you make an effort to understand the
following (taking a brief break from blindly defending "netiquette").

For:

In B.F. Skinner Consensus and Controversy
Eds. S. Modgil & C. Modgil

--
David Longley


!@?*$%

unread,
Nov 25, 1996, 8:00:00 AM11/25/96
to

I don't know which is sadder, programmers uneducated in math or uneducated
in their native language.

David Williams

unread,
Nov 25, 1996, 8:00:00 AM11/25/96
to

In article <848906...@longley.demon.co.uk>, David Longley
<Da...@longley.demon.co.uk> writes

>In article <57a30d$s...@lyra.csx.cam.ac.uk>
> gp...@thor.cam.ac.uk "G.P. Tootell" writes:
>
>> what is this crap david and what's it doing in an msdos newsgroup?
>> kindly keep it to the ai groups please instead of spreading it all over the
>> place. if we're interested we'll subscribe to comp.ai.*
>>
>> nik
>
>If you want to know what it is I suggest you *read* it. As to *why* it's
>posted to these groups, I would have thought that was quite clear. Headers
>from the original abusive material have been retained so that my response
>to the nonsense from Balter is APPROPRIATELY circulated.
>>
>>
>> Da...@longley.demon.co.uk (David Longley) spammed :
>>
>> |> It will help if an idea of what we mean by 'clinical' and 'actuarial'
>> |> judgement is provided. The following is taken from an early (Meehl
>> |> 1954), and a relatively recent review of the status of 'Clinical vs.
>> |> Actuarial Judgement' by Dawes, Faust and Meehl (1989):
>> |>
>>
>> a whole bunch of crap pasted out of some book or something.
>>
>*That's* abusive too - I suggest you make an effort to understand the
>following (taking a brief break from blindly defending "netiquette").
>
> 'If we are limning the true and ultimate structure of
What is limning?

> reality, the canonical scheme for us is the austere
What do you mean by the "canonical scheme for us"?

> scheme that knows no quotation but direct quotation and
> no propositional attitudes but only the physical

What is a propositional attitude?

> constitution and behavior of organisms.'
>
> W.V.O Quine
> Word and Object 1960 p 221
>
>For:
>
> 'Once it is shown that a region of discourse is not
> extensional, then according to Quine, we have reason to

What is extensional? Who is Quine?


> doubt its claim to describe the structure of reality.'
>
> C. Hookway
> Logic: Canonical Notation and Extensionality
> Quine (1988)
>
>The problem with intensional (or common sense or 'folk') psychology
>has been clearly spelled out by Nelson (1992):
>
> 'The trouble is, according to Brentano's thesis, no such
> theory is forthcoming on strictly naturalistic, physical
> grounds. If you want semantics, you need a full-blown,
> irreducible psychology of intensions.

What?


>
> There is a counterpart in modern logic of the thesis of
> irreducibility. The language of physical and biological
> science is largely *extensional*. It can be formulated
> (approximately) in the familiar predicate calculus. The
> language of psychology, however, is *intensional*. For
> the moment it is good enough to think of an
> *intensional* sentence as one containing words for
> *intensional* attitudes such as belief.
>
> Roughly what the counterpart thesis means is that

What is the counterpart thesis?

> important features of extensional, scientific language

What?


> on which inference depends are not present in
> intensional sentences. In fact intensional words and
> sentences are precisely those expressions in which
> certain key forms of logical inference break down.'
>
> R. J. Nelson (1992)
> Naming and Reference p.39-42
>
>and explicitly by Place (1987):
>
> 'The first-order predicate calculus is an extensional
> logic in which Leibniz's Law is taken as an axiomatic
> principle. Such a logic cannot admit 'intensional' or
> 'referentially opaque' predicates whose defining

What is referentially opaque?

Very few people use these terms. You don't impress anyone with
your long words, mate. Either you want everyone to understand you,
in which case you will use simpler English, or you don't, in which
case why are you sending me and several hundred other programmers
this message? We are programmers, not English students.
Keep it simple so I can understand you or go away.

> characteristic is that they flout that principle.'
>
> U. T. Place (1987)
> Skinner Re-Skinned P. 244
> In B.F. Skinner Consensus and Controversy
> Eds. S. Modgil & C. Modgil
>

--
David Williams

Glen Clark

unread,
Nov 26, 1996, 8:00:00 AM11/26/96
to G.P. Tootell

G.P. Tootell wrote:
>
> what is this crap david and what's it doing in an msdos newsgroup?
> kindly keep it to the ai groups please instead of spreading it all over the
> place. if we're interested we'll subscribe to comp.ai.*
>
> nik

Good God no!

We don't have a clue what he's babbling about either. We thought
he came with you.

--
Glen Clark
gl...@clarkcom.com

David Longley

unread,
Nov 26, 1996, 8:00:00 AM11/26/96
to

In article <9+pIkGAG...@smooth1.demon.co.uk>
d...@smooth1.demon.co.uk "David Williams" writes:
> What is limning?

> What do you mean by the "canonical scheme for us"?
> What is a propositional attitude?
> What is extensional? Who is Quine?
> What is the counterpart thesis?
> What is referentially opaque?

> Either you want everyone to understand you
> in which case you will use simpler English or you don't, in which
> case why are you sending me and several hundred other programmers
> this message? We are programmers not English students.

> --
> David Williams
>
The issues are fundamentally computational, as many familiar with
the basis of programming will no doubt appreciate. As to specific
answers to the above, they are explained in the text and
references at:

http://www.uni-hamburg.de/~kriminol/TS/tskr.htm

'Humans did not "make it to the moon" (or unravel the
mysteries of the double helix or deduce the existence of
quarks) by trusting the availability and
representativeness heuristics or by relying on the
vagaries of informal data collection and interpretation.
On the contrary, these triumphs were achieved by the use
of formal research methodology and normative principles
of scientific inference. Furthermore, as Dawes (1976)
pointed out, no single person could have solved all the
problems involved in such necessarily collective efforts
as space exploration. Getting to the moon was a joint
project, if not of 'idiots savants', at least of savants
whose individual areas of expertise were extremely
limited - one savant who knew a great deal about the
propellant properties of solid fuels but little about
the guidance capabilities of small computers, another
savant who knew a great deal about the guidance
capabilities of small computers but virtually nothing
about gravitational effects on moving objects, and so
forth. Finally, those savants included people who
believed that redheads are hot-tempered, who bought
their last car on the cocktail-party advice of an
acquaintance's brother-in-law, and whose mastery of the
formal rules of scientific inference did not notably
spare them from the social conflicts and personal
disappointments experienced by their fellow humans. The
very impressive results of organised intellectual
endeavour, in short, provide no basis for contradicting
our generalizations about human inferential
shortcomings. Those accomplishments are collective, at
least in the sense that we all stand on the shoulders of
those who have gone before; and most of them have been
achieved by using normative principles of inference
often conspicuously absent from everyday life. Most
importantly, there is no logical contradiction between
the assertion that people can be very impressively
intelligent on some occasions or in some domains and the
assertion that they can make howling inferential errors
on other occasions or in other domains.'

R. Nisbett and L. Ross (1980)
Human Inference: Strategies and Shortcomings of Social
Judgment

--
David Longley


Message has been deleted

Gregory R Barton

unread,
Nov 26, 1996, 8:00:00 AM11/26/96
to

: The issues are fundamentally computational as many familair with
: the basis of programming will no doubt appreciate. As to specific
: answers to the above, they are explained in the text and
: references at:

...whatever. Everybody, just put this guy in your kill file. Let's get on
with normal, rational discussion...

GreG

: --
: David Longley


Jive Dadson

unread,
Nov 26, 1996, 8:00:00 AM11/26/96
to

If you all must insist, against all protests, on continuing to post these never-ending
computer language discussions to completely inappropriate newsgroups,
please keep to one subject-line so the thousands of people who are interested
in what the newsgroups are actually intended for may know what to ignore.

Thank you very much,
J.

David Longley

unread,
Nov 27, 1996, 8:00:00 AM11/27/96
to

In article <329B5E...@hal-pc.org>
mdke...@hal-pc.org "Michael D. Kersey" writes:
> Hi All,
>
> Half of this series of threads appears to be the consequence of
> someone leaving their browser logged-on and unattended in an institution
> for the insane. Maybe there should be a new separate newsgroup,
>
> comp.ai.bedlam
> or
> comp.ai.babel
> or
> comp.ai.institutionalized
>
> Perhaps it is a new form of therapy developed by Microsoft to increase
> sales figures: give each inmate a copy of InterNet Exploder and turn
> them loose. I would have preferred to be forewarned, however.
>
> Good Luck,
> Michael D. Kersey
>
Now *that's* quite a nice illustrative example of the fallibility
of common-sense or folk-psychology as a theoretical (or practical)
perspective.
--
David Longley


Warren Sarle

unread,
Nov 27, 1996, 8:00:00 AM11/27/96
to

Those of you who are intelligent enough to be able to program in C or
Lisp should also be capable of looking at the "Newsgroups" line and
noting that this thread and several related ones should NOT be
cross-posted to comp.ai,comp.ai.genetic,comp.ai.neural-nets, and
comp.ai.philosophy.

In article <wsyg21yu8...@best.com>, Thomas Breuel <t...@intentionally.blank> writes:


|> smr...@netcom.com (!@?*$%) writes:
|> > (Remembering, of course, that C and C++, unlike Lisp, permit arbitrary
|> > transformations on pointer values which a collector can, apparently, always
|> > transform back into a valid pointer.)
|>

|> Read the ANSI C spec: C does not permit "arbitrary transformations on
|> pointer values". ...
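
For concreteness, a minimal sketch of the distinction (using C99's
uintptr_t as a stand-in for the guarantee in question; an illustration,
not a quotation from the standard):

/* A round-trip conversion through a wide enough integer type is
 * defined; reinterpreting an arbitrary integer as a pointer is not. */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    int x = 7;
    int *p = &x;

    uintptr_t u = (uintptr_t)p;      /* defined: pointer -> integer        */
    int *q = (int *)u;               /* defined: same value converted back */
    printf("%d\n", *q);              /* prints 7                           */

    /* Not guaranteed: constructing a pointer from an integer that was
     * never obtained from a valid pointer, e.g. (int *)(u + 1234). */
    return 0;
}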

--

Warren S. Sarle SAS Institute Inc. The opinions expressed here
sas...@unx.sas.com SAS Campus Drive are mine and not necessarily
(919) 677-8000 Cary, NC 27513, USA those of SAS Institute.
*** Do not send me unsolicited commercial or political email! ***

