
SIMD/Vector benchmarks


silk....@yahoo.com

Mar 9, 2006, 10:29:38 AM
Is there a suite (or collection of programs) to benchmark
SIMD/vector architectures for performance?

Even more, is there a SIMD language (maybe as
an extension to 'C'/Fortran) somewhere?

Peter Matthias

Mar 9, 2006, 11:03:20 AM
silk....@yahoo.com wrote:

A SIMD language, yes, but not C/Fortran-like. More like Pascal.

http://www3.inf.ethz.ch/research/dissertations/show.php?type=diss&what=10277&lang=en

Peter

Peter Grandi

Mar 9, 2006, 12:56:48 PM
>>> On 9 Mar 2006 07:29:38 -0800, silk....@yahoo.com said:

[ ... ]

silk.morton> Even more, is there a SIMD language (maybe as
silk.morton> an extension to 'C'/Fortran) somewhere?

Sort of: both the Intel C/C++ compiler/language and, even more so,
the Codeplay VectorC compiler/language have more or less
reasonable extensions to express SIMD semantics.

http://WWW.codeplay.com/customcompilers/
http://WWW.Intel.com/cd/ids/developer/asmo-na/eng/dc/tools/compilers/
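
For a flavor of what such extensions look like in practice, here is a
minimal sketch using Intel's SSE intrinsics (the function name vadd4 and
the divisible-by-4 assumption are illustrative, not taken from either
product):

#include <xmmintrin.h>  /* Intel SSE intrinsics: __m128, _mm_add_ps, ... */

/* c[i] = a[i] + b[i], four floats at a time; assumes n % 4 == 0 */
void vadd4(const float *a, const float *b, float *c, int n)
{
    int i;
    for (i = 0; i < n; i += 4) {
        __m128 va = _mm_loadu_ps(&a[i]);          /* load 4 unaligned floats */
        __m128 vb = _mm_loadu_ps(&b[i]);
        _mm_storeu_ps(&c[i], _mm_add_ps(va, vb)); /* 4-wide add, then store  */
    }
}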

Alex Colvin

Mar 9, 2006, 1:26:28 PM
>> Even more, is there a SIMD language (maybe as
>> an extension to 'C'/Fortran) somewhere?

Lots. Look for C*, data-parallel C, HPF, ZPL, F, Cilk, Split-C, ...

--
mac the naïf

Dr. Adrian Wrigley

Mar 9, 2006, 6:19:51 PM

I did a search a few months back, and found very few languages
suited to compilation for big SIMD systems. Much of what I found
was from the '70s or early '80s.

Of the ones you mention, Cilk and Split-C do not seem to target
SIMD architectures, since they are based on multiple threads of control.
Mostly, they tweak existing languages, and often have serious
limitations as a result. Perhaps what would be needed to spur
innovation in SIMD processor architecture is a new set-declarative
language for high performance computing. But the track record
for new hardware with new languages is very poor :(

Data-parallel C looks interesting, but poses challenges to compiler
writers, as far as I can see. Didn't APL have something to offer
in this field too?

There seems little point in developing SIMD concepts in programming
languages without a decent SIMD architecture to target. Most of
what I have seen in the past that passes for SIMD is highly
inefficient for problems that aren't matched in size and shape
to the hardware, and performs poorly with common programming idioms.

Are there any modern SIMD architectures well supported by any
programming languages?
--
Adrian

Alex Colvin

Mar 9, 2006, 6:43:54 PM
>Are there any modern SIMD architectures well supported by any
>programming languages?

Data-parallel C and C* are implemented for SPMD (Single-Program, Multiple
Data) architectures -- the processors don't need to synchronize every
instruction (the ILLIAC mistake), just every time the programs
communicate. This lets them run on regular message-passing or
shared-memory multiprocessors, instead of just SIMD architectures (of
which there are few).
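
A minimal SPMD sketch in C with MPI may make that concrete; every
process runs the same program at its own pace, and only the Allreduce
below forces them to meet (the per-rank "work" is a made-up placeholder):

#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, size;
    double local, total;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* which PE am I?       */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* how many PEs in all? */

    /* each process computes independently -- no per-instruction lockstep */
    local = (double)(rank + 1);

    /* the only synchronization point: combine the partial results */
    MPI_Allreduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

    if (rank == 0)
        printf("total = %g across %d processes\n", total, size);
    MPI_Finalize();
    return 0;
}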

--
mac the naïf

Dr. Adrian Wrigley

Mar 10, 2006, 12:09:41 PM

I can't actually name *any* general-purpose large-scale SIMD architectures
less than about fifteen years old. Occasionally, a single-bit processor
array is designed (Connection Machine, GAPP, etc), but they are never
designed for high efficiency across a wide range of problem types
and vector lengths. It's a pity, because the potential definitely exists.
--
Adrian

Eugene Miya

Mar 28, 2006, 11:02:10 AM
>>>Are there any modern SIMD architectures well supported by any
>>>programming languages?

In article <pan.2006.03.10....@linuxchip.demon.co.uk.uk.uk>,


Dr. Adrian Wrigley <am...@linuxchip.demon.co.uk.uk.uk> wrote:
>On Thu, 09 Mar 2006 23:43:54 +0000, Alex Colvin wrote:
>> Data-parallel C and C* are implemented for SPMD (Single-Program, Multiple
>> Data) architectures -- the processors don't need to synchronize every
>> instruction (the ILLIAC mistake), just every time the programs
>> communicate. This lets them run on regular message-passing or
>> shared-memory multiprocessors, instead of just SIMD architectures (of
>> which there are few).

I have the ILLIAC [IV]. I am not certain that I would call it a mistake.
It was certainly a tradeoff. SPMD != SIMD. The ILLIAC had these cables
that you would not believe (big and long) regardless of how close a
processing element was. End application developers don't like
asynchrony, and they barely tolerate message passing.

>I can't actually name *any* general-purpose large-scale SIMD architectures
>less than about fifteen years old. Occasionally, a single-bit processor
>array is designed (Connection Machine, GAPP, etc), but they are never
>designed for high efficiency across a wide range of problem types
>and vector lengths. It's a pity, because the potential definitely exists.

I can think of a few, but I've not yet been invited nor allowed to see them.
They are "those need to know" things, but hints about them have been in
the literature for decades.

--

David DiNucci

Mar 28, 2006, 2:27:58 PM
Eugene Miya wrote:
> End application developers don't like asynchrony,

These days, saying that is like saying "application developers don't
like speed". If those people are still those around, they need to read
"How I Learned to Stop Worrying and Love Asynchrony". (It can be found,
among other places, between the lines of my other writings.)

> and they barely tolerate message passing.

I think you mean "because" rather than "and". If MP (and/or shared
memory) is all they know, I guess I can't blame them.

-Dave

Eugene Miya

Mar 29, 2006, 11:15:39 AM
In article <H4ednWO-e6rWE7TZ...@comcast.com>,

David DiNucci <da...@elepar.com> wrote:
>Eugene Miya wrote:
>> End application developers don't like asynchrony,
>
>These days, saying that is like saying "application developers don't
>like speed". If those people are still those around, they need to read
>"How I Learned to Stop Worrying and Love Asynchrony". (It can be found,
>among other places, between the lines of my other writings.)

See what I just posted for Bader in c.p.

This is why I said developer rather than user. When you aren't certain
yourself who goes into a code in a physical simulation, asynchrony is
a time variable that they don't want to fool with.

>and they barely tolerate message passing.
>
>I think you mean "because" rather than "and". If MP (and/or shared
>memory) is all they know, I guess I can't blame them.

No, that was a deliberate "and". This contrasts with some articles,
one recently in Science which was quite pro message passing.


%A Willi Schonauer
%Z Univ. of Karlsruhe, FRG
%T Scientific Computing on Vector Computers
%S Special Topics in Supercomputing
%V 2
%I Elsevier Science Publishers B.V. (North-Holland)
%C Amsterdam
%D 1987
%K book, text, Hockney's $n sub {1 over 2}$,
Fortran, vectorization, matrix operations, algorithms,
Fujitsu VP, CRAY-2, IBM VF, Convex C-1, FIDISOL, ETA-10,
chaining, indirect addressing and indexing, direct vectorization,
linear first order recurrence, recursive doubling, cyclic reduction,
unrolling loops, Gaussian elimination, Gauss-Jordan, LU decomposition,
Householder algorithm, Crout algorithm, pivoting,
tridiagonal linear systems, pentadiagonal, boundary value problems,
Jacobi, successive overrelaxation (SOR), conjugate gradient (CG),
finite difference methods (FDM), finite element methods (FEM),
MJOR, Gauss-Seidel, colored SOR, sparse matrix (SM), multigrid (MG),
fast Fourier Transform (FFT),
%O ISBN #: 0444702881
%X Why I like vector computers (original title).
%X Interesting comment on supercomputers. An interesting book on
vector machines: discusses algorithms for vectorization (as opposed to
numerical algorithms [that, too]). It's written in a casual but
strong technical style which might grate on some nerves.
%X Also see:
A. Bossavit, 'Programming Discipline on Vector Computers:
"Vectors" as a Data Type and Vector Algorithms,'
Supercomputers in Theoretical and Experimental Science, ed. by
Jozef T. Devreese and Piet Van\ Camp, Plenum Press, New York, 1985,
pages 73-110.


While this reference may appear "old" to some, the algorithms are older
and not much improved, and still fairly relevant (no Moore's law for
algorithms).


Do you want the ref to the Science article?

--

David DiNucci

Mar 29, 2006, 1:38:48 PM
Eugene Miya wrote:
> David DiNucci <da...@elepar.com> wrote:
>
>>Eugene Miya wrote:
>>
>>>End application developers don't like asynchrony,
>>
>>These days, saying that is like saying "application developers don't
>>like speed". If those people are still around, they need to read
>>"How I Learned to Stop Worrying and Love Asynchrony". (It can be found,
>>among other places, between the lines of my other writings.)
>
> See what I just posted for Bader in c.p.

I seem to recall there was also a similar subtitle to some movie
somewhere, too. :-) If you guys are driving to that meeting in Greece,
maybe I can hitchhike along again.

> This is why I said developer rather than user. When you aren't certain
> yourself who goes into a code in a physical simulation, asynchrony is
> a time variable that they don't want to fool with.

Nevertheless, I stand by my translation/substitution. If the developer
doesn't like speed and the user does, then it sounds like it's time to
get new developers--or at least find ways to make the old ones happy
while making the user happy. I suggest that giving developers better
models and development tools for asynchrony is easier than convincing
the users that they don't really like speed.

>>>and they barely tolerate message passing.
>>
>>I think you mean "because" rather than "and". If MP (and/or shared
>>memory) is all they know, I guess I can't blame them.
>
> No, that was a deliberate "and". This contrasts with some articles,
> one recently in Science which was quite pro message passing.

There are many things to like about message passing, but passing
messages isn't among them.

Most of those good things (from latency hiding to clean interfaces) can
be found in interfaces I've propounded over the years (e.g. CDS,
ScalPL), minus some of the bad things (like gratuitous copying in
shared memory environments).

> While this reference may appear "old" to some, the algorithms are older
> and not much improved, and still fairly relevant (no Moore's law for
> algorithms).

"No Moore's law for algorithms" sounds good, but I wonder if it's really
true. How has the number of operations in an algorithm increased over
time? If an algorithm can be considered a proof, what about the one
used in the 4-color map theorem compared to previous proofs (say,
diagonalization)? That all begs the "what's an algorithm?" question.

In any case, as long as Moore's law applies to platforms, it's worth
reconsidering the algorithms that run on them now and then.

-Dave

Alex Colvin

Mar 29, 2006, 4:37:04 PM
>>> Data-parallel C and C* are implemented for SPMD (Single-Program, Multiple
>>> Data) architectures -- the processors don't need to synchronize every
>>> instruction (the ILLIAC mistake), just every time the programs

>I have the ILLIAC [IV]. I am not certain that I would call it a mistake.

I wouldn't either. At the time, synchronizing the PEs on instructions was
probably reasonable. Now we have trouble distributing clock across a
single chip. The mistake is thinking that we should build the ILLIAC now.

>It was certainly a tradeoff. SPMD != SIMD.

The theory of SPMD is that programmers don't care about the difference,
and that the gain in looser synchronization is worth the software
synchronization overhead and the extra memory used by replicating scalar
data.

MIMD seems to be the current favorite.
--
mac the naïf

Eugene Miya

Mar 30, 2006, 4:11:43 PM
In article <Eb6dnYgVtuHbSbfZ...@comcast.com>,

David DiNucci <da...@elepar.com> wrote:
>>>"How I Learned to Stop Worrying and Love Asynchrony". (It can be found,
>
>I seem to recall there was also a similar subtitle to some movie
>somewhere, too. :-)

Actually, I can think of a few CS papers with similar subtitles.
That along with XYZ Considered Harmful, The Next 700 XYZ, The Art of XYZ,
The Science of XYZ, Games XYZers Play, if you catch my drift.

> If you guys are driving to that meeting in Greece,
>maybe I can hitchhike along again.

While the founding Chair of the SC'xy conferences now has an electric
wheelchair, he was not invited to Salishan this year, and I don't plan
to attend Asilomar (reading Terje? I have to give a talk about
Antarctic science that evening of the RAT session). So Greece is
likely out. I'm surprised you didn't mention Germany (Ken Miura was
just visiting and he's going to that; I just got back from Germany
and I am awaiting the Austrian postal system to send my trinkets back).

>> This is why I said developer rather than user. When you aren't certain
>> yourself who goes into a code in a physical simulation, asynchrony is
>> a time variable that they don't want to fool with.
>
>Nevertheless, I stand by my translation/substitution. If the developer
>doesn't like speed and the user does, then it sounds like it's time to

The two can be the same.


>get new developers--or at least find ways to make the old ones happy
>while making the user happy. I suggest that giving developers better
>models and development tools for asynchrony is easier than convincing
>the users that they don't really like speed.

Get better scientists. More Nobel laureates.
Actually I am hoping that the lack of speed makes them develop better
intellectual tools than computers. We are discussing diagrams (schematics
for instance) vs. text-based equations on another mailing list.

>>>>tolerate message passing.


>
>There are many things to like about message passing, but passing
>messages isn't among them.

Too much copying.

%A Marc Mezard
%T Passing Messages Between Disciplines
%J Science
%V 301
%N 5640
%D 19 September 2003
%P 1685-1686
%K physics/computer science (AI) perspectives,
information theory error correction, belief propagation (BP),
discrete optimization satisfiability, statistical physics spin glasses,
%X Nice short paper on the meeting of 2 disciplines.
It glosses over certain global/long distance topics, but quite nice.


>Most of those good things (from latency hiding to clean interfaces) can
>be found in interfaces I've propounded over the years (e.g. CDS,
>ScalPL), minus some of the bad things (like gratuitous copying in
>shared memory environments).

The problem, Dave, as noted by Perlis (epigram 55: "A LISP programmer
knows the value of everything, but the cost of nothing.") and
generalized to most computer people, is that implementation costs aren't
trivial (scaling issues).


>> While this reference may appear "old" to some, the algorithms are older
>> and not much improved, and still fairly relevant (no Moore's law for
>> algorithms).
>
>"No Moore's law for algorithms" sounds good, but I wonder if it's really true.

Let me know if you can come up with basic factoring algorithms faster
than Euclid.

>How has the number of operations in an algorithm increased over time?

As a start:
Take any basic algorithm, say 1-D Newton; on small machines you ran it 1-D.
When you use regular data structures like arrays (we can get to
semistructured data structures later), you start to add integer adds and
multiplies w/o increasing the basic FP calculation. Those don't count.
FP not in direct solution: doesn't count. That's a skewed metric.
I have to agree with chums at the NSA that time to solution is more important.
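
A toy illustration of that skew, with made-up code: the kernel below does
one FP multiply and one FP add per element, yet a raw instruction count
also charges it the loop's integer increments, compares, and address
arithmetic, none of which moves the solution forward:

/* axpy-style kernel: y[i] = y[i] + a * x[i]
 * "Real" work: 1 FP multiply + 1 FP add per element.
 * Also executed: integer index increment, loop compare/branch,
 * and address computation -- ops that pad the count, not the answer. */
void axpy(int n, double a, const double *x, double *y)
{
    int i;
    for (i = 0; i < n; i++)
        y[i] = y[i] + a * x[i];
}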

>If an algorithm can be considered a proof, what about the one
>used in the 4-color map theorem compared to previous proofs (say,
>diagonalization)?

Well actually, I need to get that code from the UIUC guys in the
historic software issue.

>That all begs the "what's an algorithm?" question.

Get down here, and I will see if Don will have dinner with you.
I know one when I see one.

>In any case, as long as Moore's law applies to platforms, it's worth
>reconsidering the algorithms that run on them now and then.

Hardware only. Still in many cases algorithms of the 1890s (not a
transposition). Look, Moore's law is based on scribing fine lines
(at the submicron level), and that's Newton's inverse-square law.
It's not double every 18 months, it's quadruple every 3 years.
Double is only used by managers and journalists.
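
(Arithmetically the two are the same rate: doubling every 18 months
compounds to 2 x 2 = 4x per 36 months. The point is the mechanism: a
linear shrink in line width gives a quadratic gain in device count per
process generation.)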

--

Eugene Miya

Mar 30, 2006, 4:29:31 PM
>>>> Data-parallel C and C* are implemented for SPMD (Single-Program, Multiple
>>>> Data) architectures -- the processors don't need to synchronize every
>>>> instruction (the ILLIAC mistake), just every time the programs
>>I have the ILLIAC [IV]. I am not certain that I would call it a mistake.

In article <e0eum0$bul$1...@pcls4.std.com>,


Alex Colvin <al...@TheWorld.com> wrote:
>I wouldn't either. At the time, synchronizing the PEs on instructions was
>probably reasonable. Now we have trouble distributing clock across a
>single chip. The mistake is thinking that we should build the ILLIAC now.

Amusingly, Intel, Caltech, Thinking Machines, MasPar, et al.
didn't think that in the 90s. The Paragon Touchstone Delta sits next to
the CM-2 and the IV. And we also have an ICL DAP in the backroom, and
an offer of a SUPRENUM.

It all seems so simple to people with money and chips to throw at a
problem. It's not an issue of parallelism.

>>It was certainly a tradeoff. SPMD != SIMD.
>
>The theory of SPMD is that programmers don't care about the difference,
>and that the gain in looser synchronization is worth the software
>synchronization overhead and the extra memory used by replicating scalar
>data.
>
>MIMD seems to be the current favorite.

I would not go so far as to call it a theory. It would be an embarrassing
theory (to go along with all the other embarrassing things our field does).
Frederica didn't make the acronym as part of a theory; it was merely her
observation at the time, and one lacking detail. Over time the
community has fleshed details into those words (the diff between P and I).

It's like the Sidney Harris (Science) cartoon with 2 guys looking at a
chalk board and the words: "And then a miracle occurs."
It's no wonder so many physicists hate computer science.

--

jos...@cc.hut.fi

Mar 31, 2006, 4:36:39 AM
>While this reference may appear "old" to some, the algorithms are older
>and not much improved, and still fairly relevant (no Moore's law for
>algorithms).

Yes there is a Moore's law for algorithms.

Just remember what Moore's law stated: the number of transistors doubles
every 18 months.

And then there is the equivalent for algorithms:
the number of assembler instructions for a given task by the median
programmer doubles every 3 years.

That law doesn't state HOW the median programmer creates those
assembler instructions ;-) Nor whether it is dynamic instruction count
or static instruction count. One way or the other, the average is going up.

There just isn't a "no Moore's law" for algorithms.

David DiNucci

Mar 31, 2006, 5:24:10 PM
Eugene Miya wrote:
> David DiNucci <da...@elepar.com> wrote:
>>If you guys are driving to that meeting in Greece,
>>maybe I can hitchhike along again.
>
> ...So Greece is

> likely out. I'm surprised you didn't mention Germany (Ken Miura was
> just visiting and he's going to that; I just got bac from Germany
> and I am awaiting the Austrian postal system to send my trinkets back).

I'm smiley challenged--I didn't really expect anyone to drive to Greece.
Lacking subsidies by shareholders, taxpayers, or other profits, I
don't pay much attention to expensive conferences or those requiring
much travel.

> Get better scientists. More Nobel laureates.
> Actually I am hoping that the lack of speed makes them develop better
> intellectual tools than computers.

A computer is a physical tool to leverage intellectual tools. I agree
that it's possible to overrely on tools at the expense of mastery of the
field, regardless of the field, but making the tools less capable isn't
a very good way of addressing that. Best to just let people with a
mastery of both the field and the tools outshine others.

> We are discussing diagrams (schematics
> for instance) vs. text based equations on another mailing list.

The most common ways to specify the routing of results from one
transform (e.g. function) to inputs/arguments of others are:
juxtaposition of the transforms (e.g. traditional functional
composition), associative matching of identifiers (e.g. variables,
memory locations), and drawing lines or arrows (e.g. schematics). The
last option seems to be most intuitive when the transforms/functions
produce multiple results which are each routed to different
transforms--such as in high-level software engineering, module
composition. (Then there's the issue of the conditions under which the
transforms should evaluate.)

>>>>>tolerate message passing.
>>
>>There are many things to like about message passing, but passing
>>messages isn't among them.
>
> Too much copying.

I would say: A semantics based on the assumption that copying will be
performed. So it's hard to optimize it out, just like it's difficult
to extract parallelism from languages with semantics based on the
assumption that the target platform is a sequential computer.

>>Most of those good things (from latency hiding to clean interfaces) can
>>be found in interfaces I've propounded over the years (e.g. CDS,
>ScalPL), minus some of the bad things (like gratuitous copying in
>>shared memory environments).
>
>
> The problem, Dave, as noted by Perlis (epigram 55: "A LISP programmer
> knows the value of everything, but the cost of nothing.") and
> generalized to most computer people, is that implementation costs aren't
> trivial (scaling issues).

Like others, "computer people" don't want to hear what's wrong with
something unless there's an existing alternative that costs about the
same or less and fixes the problem. Funding those alternatives when
potential customers say they've got "good enough" is a challenge.

>>That all begs the "what's an algorithm?" question.
>
> Get down here, and I will see if Don will have dinner with you.

Transpose that and it might be possible. My primary issue with his
definition and others is that it's based on a sequence, so the term
"parallel algorithm" becomes at best undefined and at worst an oxymoron.
But that doesn't stop people from using the term productively. I
think most folks interpret the term "algorithm" to just mean "a
(computer-)language independent representation of how to solve a
problem", and that's fine, but it still has the potential of being
seriously platform dependent, which doesn't seem very algorithm-ish to
me. I prefer to define it based on (folded) partial orderings of
operations.

>>>[Moore's Law]


> It's not double every 18 months, it's quadruple every 3 years.
> Double is only used by managers and journalists.

According to a thread here in October based on a Merc article based on a
CHM talk by Moore, Moore predicted doubling every year and then every
two years, and an Intel marketing exec, House, altered that in PR to
doubling every 18 months. By now I'm sure that the "law" has a hundred
different forms, at least one of which is likely to be close to
predicting the present from the past at any particular point in time.

-Dave

ram...@bigpond.net.au

Apr 2, 2006, 7:35:24 AM
Alex Colvin <al...@TheWorld.com> writes:

> I wouldn't either. At the time, synchronizing the PEs on instructions was
> probably reasonable. Now we have trouble distributing clock across a
> single chip. The mistake is thinking that we should build the ILLIAC now.

The ILLIAC 6 is currently being built at Illinois.
http://illiac6-dev.cs.uiuc.edu/

Admittedly, it is very different from the ILLIAC IV.

http://illiac6-dev.cs.uiuc.edu/design/HW/System%20Drawings%20v4.htm
http://illiac6-dev.cs.uiuc.edu/design/HW/ChannelBuffer.html


Eugene Miya

Apr 5, 2006, 2:40:42 PM
In article <87wte8n...@kafka.homenet>, <ram...@bigpond.net.au> wrote:
>The ILLIAC 6 is currently being built at Illinois.
>http://illiac6-dev.cs.uiuc.edu/
>
>Admittedly, it is very different from the ILLIAC IV.

Well, actually, so are the III, the II, and the I.
There was a V?
Pieces of the I..III are in the CS Bldg. at the UIUC.
Be warned of bikes, the paths are English.

--

Eugene Miya

Apr 6, 2006, 7:27:02 PM
In article <qs6dndWl7aK...@comcast.com>,

David DiNucci <da...@elepar.com> wrote:
>Eugene Miya wrote:
>> David DiNucci <da...@elepar.com> wrote:
>>>If you guys are driving to that meeting in Greece,
>>>maybe I can hitchhike along again.
>> ...So Greece is
>> likely out. I'm surprised you didn't mention Germany (Ken Miura was
>> just visiting and he's going to that; I just got back from Germany
>> and I am awaiting the Austrian postal system to send my trinkets back).
>
>I'm smiley challenged--I didn't really expect anyone to drive to Greece.
>Lacking subsidies by shareholders, taxpayers, or other profits, I
>don't pay much attention to expensive conferences or those requiring
>much travel.

Your sense of humor is fine.
If you want to hitch to Greece, consider helping Steve Roberts
(microship.com) complete traversing US inland waterways and then get
him to the Aegean.

I don't pay for many expensive conferences but not because of the govt.
The Press get trade access.


>> Get better scientists. More Nobel laureates.
>> Actually I am hoping that the lack of speed makes them develop better
>> intellectual tools than computers.
>
>A computer is a physical tool to leverage intellectual tools. I agree
>that it's possible to overrely on tools at the expense of mastery of the
>field, regardless of the field, but making the tools less capable isn't
>a very good way of addressing that. Best to just let people with a
>mastery of both the field and the tools outshine others.

Sometimes, it's not enough. Engelbart learned that.
Something about a brick....

Still not enough. Just think what Woz would have done minus Jobs.

>> We are discussing diagrams (schematics
>> for instance) vs. text based equations on another mailing list.
>
>The most common ways to specify the routing of results from one
>transform (e.g. function) to inputs/arguments of others are:
>juxtaposition of the transforms (e.g. traditional functional
>composition), associative matching of identifiers (e.g. variables,
>memory locations), and drawing lines or arrows (e.g. schematics). The
>last option seems to be most intuitive when the transforms/functions
>produce multiple results which are each routed to different
>transforms--such as in high-level software engineering, module
>composition. (Then there's the issue of the conditions under which the
>transforms should evaluate.)

Joe, I got your email on this; I'm just getting back from a class.
I have to catch up on my mailing list where this is being discussed.

I would prefer associative, but that's one of those "Then a miracle
happens" areas.

>>>>>>tolerate message passing.
>>>There are many things to like about message passing, but passing
>>>messages isn't among them.
>> Too much copying.
>
>I would say: A semantics based on the assumption that copying will be
>performed. So it''s hard to optimize it out, just like it's difficult
>to extract parallelism from languages with semantics based on the
>assumption that the target platform is a sequential computer.

Look at Usenet. It survives.

>>>Most of those good things (from latency hiding to clean interfaces) can
>>>be found in interfaces I've propounded over the years (e.g. CDS,
>>>ScalPL), minus some of the bad things (like gratuitous copying in
>>>shared memory environments).
>> The problem, Dave, as noted by Perlis: a LISP
>> programmer knows the value of everything, but the cost of nothing.
>

>Like others, "computer people" don't want to hear what's wrong with
>something unless there's an existing alternative that costs about the
>same or less and fixes the problem. Funding those alternatives when
>potential customers say they've got "good enough" is a challenge.

The prophet has a problem.


>>>That all begs the "what's an algorithm?" question.
>> Get down here, and I will see if Don will have dinner with you.
>
>Transpose that and it might be possible. My primary issue with his
>definition and others is that it's based on a sequence, so the term
>"parallel algorithm" becomes at best undefined and at worst an oxymoron.
> But that doesn't stop people from using the term productively.
> I think most folks interpret the term "algorithm" to just mean "a
>(computer-)language independent representation of how to solve a
>problem", and that's fine, but it still has the potential of being
>seriously platform dependent, which doesn't seem very algorithm-ish to
>me. I prefer to define it based on (folded) partial orderings of
>operations.

Think "data parallel" 8^). Danny did, and look where it got him.
I may see him again a week from tomorrow. He has been hanging out with
Kevin Kelly too long.

> >>>[Moore's Law]
>> It's not double every 18 months, it's quadruple every 3 years.
>> Double is only used by managers and journalists.
>
>According to a thread here in October based on a Merc article based on a
>CHM talk by Moore, Moore predicted doubling every year and then every
>two years, and an Intel marketing exec, House, altered that in PR to
>doubling every 18 months. By now I'm sure that the "law" has a hundred
>different forms, at least one of which is likely to be close to
>predicting the present from the past at any particular point in time.

I was there.
And I see Dave House at those events, too. He has a cube at the Museum.
I merely keep a mail alias there (and part of my library).

--

ram...@bigpond.net.au

Apr 6, 2006, 9:54:07 PM
eug...@cse.ucsc.edu (Eugene Miya) writes:

> In article <87wte8n...@kafka.homenet>, <ram...@bigpond.net.au> wrote:
> >The ILLIAC 6 is currently being built at Illinois.
> >http://illiac6-dev.cs.uiuc.edu/

> There was a V?

Since CEDAR was sometimes referred to as ILLIAC 5, they seem to have
skipped that numeral and decided to call the current version ILLIAC 6.


Eugene Miya

Apr 7, 2006, 1:50:03 PM
In article <87mzey1...@kafka.homenet>, <ram...@bigpond.net.au> wrote:
>> In article <87wte8n...@kafka.homenet>, <ram...@bigpond.net.au> wrote:
>> >The ILLIAC 6 is currently being built at Illinois.
>> >http://illiac6-dev.cs.uiuc.edu/

eug...@cse.ucsc.edu (Eugene Miya) writes:
>> There was a V?
>
>Since CEDAR was sometimes referred to as ILLIAC 5, they seem to have
>skipped that numeral and decided to call the current version ILLIAC 6.

Wow! Really? I've never heard that, and I owe Dan Reed a seminar.
It was just a bunch of Alliant FX/8s with an initially poor port
of 4.2 BSD on them to me.

When I was travelling to UC (next time I will take a puddle jumper)
I never really stopped to look for pieces of CEDAR. And I did go into
the NCSA building a few times as well as Beckman, etc. Now I have to
think for the Museum and the Smithsonian.

I just hope that the State Leg. of Ill. doesn't kill off this project
the way they apparently (hearing 2nd hand) killed off the spinoffs of
the last (phase 1) funded projects at the UIUC.

--

David DiNucci

Apr 8, 2006, 11:17:47 PM
Eugene Miya wrote:
> In article <qs6dndWl7aK...@comcast.com>,
> David DiNucci <da...@elepar.com> wrote:
>
>>Eugene Miya wrote:

>>>Get better scientists. More Nobel laureates.
>>>Actually I am hoping that the lack of speed makes them develop better
>>>intellectual tools than computers.
>>
>>A computer is a physical tool to leverage intellectual tools. I agree
>>that it's possible to overrely on tools at the expense of mastery of the
>>field, regardless of the field, but making the tools less capable isn't
>>a very good way of addressing that. Best to just let people with a
>>mastery of both the field and the tools outshine others.
>
>
> Sometimes, it's not enough. Engelbart learned that.
> Something about a brick....

I think Engelbart would be on my side on this one--e.g. from the same
section of his 1962(!) paper where he described brick writing, "It seems
reasonable to consider the development of automated external symbol
manipulation means as a next stage in the evolution of our intellectual
power."

Regardless, it's always good to read about his work. And, born in
Portland, worked at Ames at Moffett, looking for tools to augment
intellect...but maybe the similarities end there.

> Still not enough. Just think what Woz would have done minus Jobs.

So was Engelbart Woz or Jobs? Some techies can also sell.

>>>We are discussing diagrams (schematics
>>>for instance) vs. text based equations on another mailing list.
>>
>>The most common ways to specify the routing of results from one
>>transform (e.g. function) to inputs/arguments of others are:
>>juxtaposition of the transforms (e.g. traditional functional
>>composition), associative matching of identifiers (e.g. variables,
>>memory locations), and drawing lines or arrows (e.g. schematics). The
>>last option seems to be most intuitive when the transforms/functions
>>produce multiple results which are each routed to different
>>transforms--such as in high-level software engineering, module
>>composition. (Then there's the issue of the conditions under which the
>>transforms should evaluate.)
>
>
> Joe, I got your email on this; I'm just getting back from a class.
> I have to catch up on my mailing list where this is being discussed.
>
> I would prefer associative, but that's one of those "Then a miracle
> happens" areas.

(I guess this is OT, but I'm not on whatever mailing list you keep
mentioning, and c.l.visual seems out of commission.)

That's just data flow, how about control flow--re my parenthesized
comment above? People like to see control dependences as well. It's
why structured control constructs utilize indentation, to add another
(partial) dimension, to allow the control points of a structured
construct to be visually related to each other (by being juxtaposed at
the same level of indentation) as well as being related to the
neighboring statements (by being juxtaposed, period). Entities which
are juxtaposed in a program trace are juxtaposed in the code (using one
definition or the other). I consider this similarity of code and trace
the essence of structured programming.

Once you get into parallel programming, and the dependences (both data
and control) go all over, and the program trace becomes a partial
ordering/DAG rather than a sequence, indentation isn't sufficient.
Still, if you represent data and control dependences in the program by
graphical connections (e.g. lines) and consider operations as being
proximal in both the program and the trace whenever they're
connected, you get the same essence of structured programming. (I'm
sure I discuss this in some paper, too.)

> Look at Usenet. It survives.

At least so far (though Big-8 is apparently neutered for now), probably
on purely political merits alone (shared ownership, lack of censorship,
etc.). If one could create a blog with those properties, it'd probably
win out over Usenet due to other factors.

> The prophet has a problem.

Smart/living messengers have learned to carry some good news with the bad.

>>>>That all begs the "what's an algorithm?" question.
>>>
>>>Get down here, and I will see if Don will have dinner with you.
>>
>>Transpose that and it might be possible. My primary issue with his
>>definition and others is that it's based on a sequence, so the term
>>"parallel algorithm" becomes at best undefined and at worst an oxymoron.

snip

>
> Think "data parallel" 8^). Danny did, and look where it got him.
> I may see him again a week from tomorrow. He has been hanging out with
> Kevin Kelly too long.

So where did it get him? I don't know if I'd consider the CM1/2
especially successful, though it was unique and an important experiment.
Not that I've gotten further.

Yes, data parallelism does evade the oxymoron, by leaving algorithms
sequential and making the operations parallel. (Often requires
considering loops as operations with their bodies as arguments, but
that's OK.) That's almost cheating, though, not to mention specialized,
pretty restrictive, and not particularly efficient, which is probably
why that machine evaporated. Much better to make the
"algorithm-like-thingy" parallel and the operations sequential--or at
least decomposable to deterministic atomic pieces.
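
One way to picture that "loops as operations with their bodies as
arguments" idea, sketched here in C (the names are illustrative): the
map below is the single sequential operation, the body arrives as a
function pointer, and a data-parallel implementation is free to apply
it to every element at once.

/* the loop as one operation: apply body() to each element of v.
 * Written sequentially here; a data-parallel runtime could perform
 * all n applications simultaneously, since they are independent. */
void map(int n, double *v, double (*body)(double))
{
    int i;
    for (i = 0; i < n; i++)
        v[i] = body(v[i]);
}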

>>>>>[Moore's Law]
>>>
>>>It's not double every 18 months, it's quadruple every 3 years.
>>>Double is only used by managers and journalists.
>>
>>According to a thread here in October based on a Merc article based on a
>>CHM talk by Moore, Moore predicted doubling every year and then every
>>two years, and an Intel marketing exec, House, altered that in PR to
>>doubling every 18 months. By now I'm sure that the "law" has a hundred
>>different forms, at least one of which is likely to be close to
>>predicting the present from the past at any particular point in time.
>
>
> I was there.
> And I see Dave House at those events, too. He has a cube at the Museum.
> I merely keep a mail alias there (and part of my library).

I figured you were (and mentioned so in that c.a thread), which is why I
didn't understand your "it's not double" statement.

-Dave

Eugene Miya

Apr 10, 2006, 1:58:24 PM
>>>>Get better scientists.

>>>> better intellectual tools than computers.
>>>A computer is a physical tool to leverage intellectual tools. I agree
>> Sometimes, it's not enough. Engelbart learned that.
>> Something about a brick....

In article <GN-dna7oPbN...@comcast.com>,


David DiNucci <da...@elepar.com> wrote:
>I think Engelbart would be on my side on this one--e.g. from the same
>section of his 1962(!) paper where he described brick writing, "It seems
>reasonable to consider the development of automated external symbol
>manipulation means as a next stage in the evolution of our intellectual
>power."

I've seen his brick photo.

The nice discussion that I've had with him was to consider other better
tools such as a Sherlock Holmes analog using a hand lens. He has just
now seen that that might be the better way to go. But last we spoke
he got some money from the NSF. A small chance I may see him tomorrow
nite at a monthly dinner.

>Regardless, it's always good to read about his work. And, born in
>Portland, worked at Ames at Moffett, looking for tools to augment
>intellect...but maybe the similarities end there.

Yes, Doug last came by when a German film crew came to film where he
used to work in the 40x80, which was then closed. I mean no one except
an even smaller number than the normally small one had any keys. It was
just reopened by the USAF and USA.

I recommend Markoff's latest book on that era.

>> Still not enough. Just think what Woz would have done minus Jobs.


>
>So was Engelbart Woz or Jobs? Some techies can also sell.

Doug is likely neither. Doug is in a class by himself in that era
when punch cards still reigned supreme. What's so amazing about Doug
is that he still has a huge group of followers from that era and to this
day. They fall in 2 camps (subgroups): the technical guys who worked
with him at the Augmentation Research Center like Bill English and the
rest whom I occasionally see even in grocery stores like Trader Joes,
and then there were/are the less technical groupies the Stew Brands,
the Ted Nelsons (and I guess Kevin Kelly was one [at Kevin's marginal
LongNow talk last month]). I am not certain why the Catmulls and Smiths
can't sell a Pixar as well as a Jobs.

>>>>We are discussing diagrams (schematics

>> "Then a miracle happens" areas.
>
>(I guess this is OT, but I'm not on whatever mailing list you keep
>mentioning, and c.l.visual seems out of commission.)

Oh, it's a hackers' thing.

>That's just data flow, how about control flow--re my parenthesized
>comment above? People like to see control dependences as well. It's
>why structured control constructs utilize indentation, to add another
>(partial) dimension, to allow the control points of a structured
>construct to be visually related to each other (by being juxtaposed at
>the same level of indentation) as well as being related to the
>neighboring statements (by being juxtaposed, period). Entities which
>are juxtaposed in a program trace are juxtaposed in the code (using one
>definition or the other). I consider this similarity of code and trace
>the essence of structured programming.

I think the problem is your word: See. That's why I think Parnas and
Habermann variously considered global variables harmful. The problem,
like many, became scale. Single words of memory weren't a big deal.
Arrays, array handling and inconsistent array state likely also contributed
to data flow's death.

Indentation was an afc thread which involved Python.


>Once you get into parallel programming, and the dependences (both data
>and control) go all over, and the program trace becomes a partial
>ordering/DAG rather than a sequence, indentation isn't sufficient.
>Still, if you represent data and control dependences in the program by
>graphical connections (e.g. lines) and consider operations as being
>proximal in both the program and the trace whenever they're
>connected, you get the same essence of structured programming. (I'm
>sure I discuss this in some paper, too.)

You will still have problems with exception handling.
Anyways, I'm not working on that anymore.

>> Look at Usenet. It survives.
>
>At least so far (though Big-8 is apparently neutered for now), probably
>on purely political merits alone (shared ownership, lack of censorship,
>etc.). If one could create a blog with those properties, it'd probably
>win out over Usenet due to other factors.

Maybe a Wiki more than a blog.
Depends. Hard to say. I know the security guys who prefer NNTP.
Listservs are still in use.

>> The prophet has a problem.
>
>Smart/living messengers have learned to carry some good news with the bad.

Who is your Judas? [I just saw the news before the weekend.]

>>>>>"what's an algorithm?" question.

>>>"parallel algorithm" becomes at best undefined and at worst an oxymoron.

...


>> Think "data parallel" 8^). Danny did, and look where it got him.
>

>So where did it get him? I don't know if I'd consider the CM1/2
>especially successful, though it was unique and an important experiment.
>Not that I've gotten further.

In the short run, it wasn't successful. But I like his idea of a 10,000
year clock out in the middle of Nevada (really Eastern), independent of
computers. Danny had to pay Gordon Bell for his mid-1990s bet.

He did spawn a few low level ideas as a reaction to the CM, just not
visible in public in detail.

>Yes, data parallelism does evade the oxymoron, by leaving algorithms
>sequential and making the operations parallel. (Often requires
>considering loops as operations with their bodies as arguments, but
>that's OK.) That's almost cheating, though, not to mention specialized,
>pretty restrictive, and not particularly efficient, which is probably
>why that machine evaporated. Much better to make the
>"algorithm-like-thingy" parallel and the operations sequential--or at
>least decomposable to deterministic atomic pieces.

I hear you.

I suspect that interpreters helped language development more than
compiler writers are willing to accept. But this is a guess on my part.
I mean that I realize that APL was interpreted, but I don't know Iverson's
notation and semantics well enough. The place to watch might be
something like Matlab combined with spreadsheets.

>>>>>>[Moore's Law]
>>>>It's not double every 18 months, it's quadruple every 3 years.
>>>>Double is only used by managers and journalists.
>

>I didn't understand your "it's not double" statement.

I think "doubling" is deceptive. It's like talking about bytes on word
oriented architectures like Crays, DEC-10s/20s, Univacs, etc.

Look, the improvement in mask resolution is a 2-D issue: Newton and
inverse square laws (it's slice and dice in this case).

--

David DiNucci

Apr 10, 2006, 3:46:33 PM
Eugene Miya wrote:
> DiNucci wrote:
>>> [Engelbart]

>
> The nice discussion that I've had with him was to consider other better
>> tools such as a Sherlock Holmes analog using a hand lens. He has just
> now seen that that might be the better way to go. But last we spoke
> he got some money from the NSF. A small chance I may see him tomorrow
> nite at a monthly dinner.

Say "hi" from me. :-) I'm sure he'll remember, I was the one sitting in
the middle of that huge audience at the "Engelbart's Unfinished
Revolution" colloquium at Stanford. (I think I had a picnic lunch with
Gordon and friends there on the grounds that day.)

A friend (Buzz Hill) and I tend to talk a lot about "zooming" in and out
regarding levels of abstraction. I don't know if that's similar.

>>That's just data flow, how about control flow--re my parenthesized
>>comment above? People like to see control dependences as well. It's
>>why structured control constructs utilize indentation, to add another
>>(partial) dimension, to allow the control points of a structured
>>construct to be visually related to each other (by being juxtaposed at
>>the same level of indentation) as well as being related to the
>>neighboring statements (by being juxtaposed, period). Entities which
>>are juxtaposed in a program trace are juxtaposed in the code (using one
>>definition or the other). I consider this similarity of code and trace
>>the essence of structured programming.
>
>
> I think the problem is your word: See. That's why I think Parnas and
> Habermann variously considered global variables harmful.

I'm sure that's the reason. You can't *see* the associations.

> The problem,
> like many, became scale. Single words of memory weren't a big deal.
> Arrays, array handling and inconsistent array state likely also contributed
> to data flow's death.

Those are undoubtedly the trickiest things to address in a visual and/or
latency-tolerant language. But I think ScalPL is on the right track there.

>>Once you get into parallel programming, and the dependences (both data
>>and control) go all over, and the program trace becomes a partial
>>ordering/DAG rather than a sequence, indentation isn't sufficient.
>>Still, if you represent data and control dependences in the program by
>>graphical connections (e.g. lines) and consider operations as being
>>proximal in both the program and the trace whenever they're
>>connected, you get the same essence of structured programming. (I'm
>>sure I discuss this in some paper, too.)
>
>
> You will still have problems with exception handling.

Not if you handle it right. e.g. in ScalPL, exceptions are just
alternate control states associated with data. "Exceptions" are only
considered exceptions because there's traditionally only one flow of
control, and they're exceptions to that.

>>>The prophet has a problem.
>>
>>Smart/living messengers have learned to carry some good news with the bad.
>
>
> Who is your Judas? [I just saw the news before the weekend.]

I didn't. What news? This sounds scary. I figured I couldn't get into
too much trouble sitting here writing tech papers and software, for
myself mostly.

-Dave
---
http://www.elepar.com/

Eugene Miya

Apr 10, 2006, 6:39:03 PM
>>>> [Engelbart]
>> The nice discussion that I've had with him was to consider other better
>> tools such as a Sherlock Holmes analog using a hand lens. He has just

In article <v5qdnT7fn92hK6fZ...@comcast.com>,


David DiNucci <da...@elepar.com> wrote:
>Say "hi" from me. :-) I'm sure he'll remember, I was the one sitting in
>the middle of that huge audience at the "Engelbart's Unfinished
>Revolution" colloquium at Stanford. (I think I had a picnic lunch with

8^)


>Gordon and friends there on the grounds that day.)

I just saw Gordon at the Stanford 40th Annv. of the CS Dept. Founding.

>A friend (Buzz Hill) and I tend to talk a lot about "zooming" in and out
>regarding levels of abstraction. I don't know if that's similar.

Abstraction is an abstract thing. What falls off the sides is a matter
of debate. Ted Nelson tends to be better for that. He's very specific
about things he wants for transclusion. Doug tends to be a bit more vague.

One of the things that did Doug in, if you read Markoff, was that Doug
didn't think (at the time) that 5,000 commands was such a bad thing.

>>>That's just data flow, how about control flow--re my parenthesized
>>>comment above?
>>

>> I think the problem is your word: See. That's why I think Parnas and
>> Habermann variously considered global variables harmful.
>
>I'm sure that's the reason. You can't *see* the associations.

Well, who would have thought that Unix fork(2) was the way to fork a la
Dijkstra? I am sure Nick did. ;^) BTW: I just sent out both of your
posts in c.p. Part of the problem lay in one-way links. The web's
problem as well.

>> The problem,
>> like many, became scale. Single words of memory weren't a big deal.
>> Arrays, array handling and inconsistent array state likely also contributed
>> to data flow's death.
>
>Those are undoubtedly the trickiest things to address in a visual and/or
>latency-tolerant language. But I think ScalPL is on the right track there.

Perhaps.
Time will tell.
I don't have a strong enough opinion either way.


>>>Once you get into parallel programming, and the dependences (both data
>>>and control) go all over, and the program trace becomes a partial
>>>ordering/DAG rather than a sequence, indentation isn't sufficient.

>> You will still have problems with exception handling.


>
>Not if you handle it right. e.g. in ScalPL, exceptions are just
>alternate control states associated with data. "Exceptions" are only
>considered exceptions because there's traditionally only one flow of
>control, and they're exceptions to that.

We cannot assume the end users will handle things right.
When we try to make things idiot-proof, we find out how smart idiots are.
It's in part why people still use gotos.

>>>>The prophet has a problem.
>>>Smart/living messengers have learned to carry some good news with the bad.
>> Who is your Judas? [I just saw the news before the weekend.]
>
>I didn't. What news? This sounds scary. I figured I couldn't get into
>too much trouble sitting here writing tech papers and software, for
>myself mostly.

The Gospel according to Judas.

--

ram...@bigpond.net.au

Apr 10, 2006, 7:08:45 PM
eug...@cse.ucsc.edu (Eugene Miya) writes:

> >Since CEDAR was sometimes referred to as ILLIAC 5, they seem to have
> >skipped that numeral and decided to call the current version ILLIAC 6.
>
> Wow! Really? I've never heard that, and I owe Dan Reed a seminar.
> It was just a bunch of Alliant FX/8s with an initially poor port
> of 4.2 BSD on them to me.

Interestingly, this project seems to be using TigerSHARC DSPs as the
processing units.

John Savard

Apr 11, 2006, 10:13:03 AM
On 9 Mar 2006 07:29:38 -0800, silk....@yahoo.com wrote, in part:

>Is there a suite(or collections of programs) to benchmark
>SIMD/Vector architectures for performance?
>

>Even more, is there a SIMD language (maybe as
>an extension to 'C'/Fortran) somewhere?

Fortran 95 probably would count as that.

John Savard
http://www.quadibloc.com/index.html

Greg Lindahl

Apr 11, 2006, 1:12:35 PM
>>Even more, is there a SIMD language (maybe as
>>an extension to 'C'/Fortran) somewhere?
>
>Fortran 95 probably would count as that.

Compilers typically expand Fortran array syntax into the equivalent
Fortran77 code before optimization... and then vectorize it again.
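
In C terms, the point looks like this (illustrative only; real compilers
work on an internal form, not C): the whole-array statement A = B + C is
first scalarized into the explicit loop below, and the vectorizer must
then rediscover the parallelism the array syntax had stated directly.

/* scalarized form of the Fortran 95 statement  A = B + C  */
void array_add(int n, const double *b, const double *c, double *a)
{
    int i;
    for (i = 0; i < n; i++)     /* the loop the front end emits...     */
        a[i] = b[i] + c[i];     /* ...which must then be re-vectorized */
}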

-- greg

Del Cecchi

Apr 11, 2006, 4:49:37 PM

Not a Gospel. If that is the title, it should be quoted or underlined.
Proper naming is very important in computers as well. Calling something
what it isn't causes problems in any field. You recall the famous Mark
Twain quote about tails and legs.


--
Del Cecchi
"This post is my own and doesn’t necessarily represent IBM’s positions,
strategies or opinions.”

Eugene Miya

Apr 12, 2006, 1:06:46 PM
In article <4a2j34F...@individual.net>,
Del Cecchi <cecchi...@us.ibm.com> wrote:

You copied the whole note for this?

>>>>>>The prophet has a problem.

>>>>Who is your Judas? [I just saw the news before the weekend.]
>>>I didn't. What news? This sounds scary. I figured I couldn't get into
>>>too much trouble sitting here writing tech papers and software, for
>>>myself mostly.
>>
>> The Gospel according to Judas.
>
>Not a Gospel. If that is the title, it should be quoted or underlined.

I don't hear the quote marks or underlines over the airwaves.

>Proper naming is very important in computers as well.

True. IBM didn't attempt the less than successful DWIM research.
Parsing has not improved much since then.

>Calling something what it isn't causes problems in any field.

True in context. But this is Usenet, and wider news media cover this.
It's also very good for hooking people's perceptions.

>You recall the famous Mark Twain quote about tails and legs.

No. I don't. I suspect many reading have not.

--

Del Cecchi

Apr 12, 2006, 3:37:10 PM
Eugene Miya wrote:
> In article <4a2j34F...@individual.net>,
> Del Cecchi <cecchi...@us.ibm.com> wrote:
>
> You copied the whole note for this?
>
>
>>>>>>>The prophet has a problem.
>>>>>
>>>>>Who is your Judas? [I just saw the news before the weekend.]
>>>>
>>>>I didn't. What news? This sounds scary. I figured I couldn't get into
>>>>too much trouble sitting here writing tech papers and software, for
>>>>myself mostly.
>>>
>>>The Gospel according to Judas.
>>
>>Not a Gospel. If that is the title it should be quoted or underlined.
>
>
> I don't hear the quote marks or underlines over the airwaves.

That's a problem with inadequate news reporting and medium inadequacies.


>
>
>>Proper naming is very important in computers as well.
>
>
> True. IBM didn't attempt the less than successful DWIM research.
> Parsing has not improved much since then.
>
>
>>Calling something what it isn't causes problems in any field.
>
>
> True in context. But this is Usenet, and wider news media cover this.
> It's also very good for for hooking people's perceptions.

So, the Dan Rather "fake but true" argument?


>
>
>>You recall the famous Mark Twain quote about tails and legs.
>
>
> No. I don't. I suspect many reading have not.
>

"how many legs does a sheep have if you call the tail a leg?"

"four, calling a tail a leg doesn't make it a leg"

BTW, I did not verify this is a mark twain quote.

Eugene Miya

Apr 14, 2006, 7:47:19 PM
>>>>>>>>The prophet has a problem.
>>>>>>Who is your Judas? [I just saw the news before the weekend.]
>>>>>I didn't. What news? This sounds scary. I figured I couldn't get into
>>>>>too much trouble sitting here writing tech papers and software, for
>>>>>myself mostly.
>>>>The Gospel according to Judas.
>>>Not a Gospel. If that is the title it should be quoted or underlined.
>> I don't hear the quote marks or underlines over the airwaves.

In article <4a5376F...@individual.net>,


Del Cecchi <cecchi...@us.ibm.com> wrote:
>That's a problem with inadequate news reporting and medium inadequacies.

Consider speech recognition software in that.


>>>Calling something what it isn't causes problems in any field.
>> True in context. But this is Usenet, and wider news media cover this.
>> It's also very good for hooking people's perceptions.
>
>So, the Dan Rather "fake but true" argument?

I don't know enough about the CBS case (was the source ever determined?).
But I am thinking of the propensity of engineers (and some other people in
general) to be like the guy in the French guillotine joke which
ends with "I think I see your problem."

>>>You recall the famous Mark Twain quote about tails and legs.
>> No. I don't. I suspect many reading have not.
>>
>"how many legs does a sheep have if you call the tail a leg?"
>
>"four, calling a tail a leg doesn't make it a leg"

A truth vs. syntactic linguistic substitution joke.

>BTW, I did not verify this is a Mark Twain quote.

It's Usenet.
He could merely be quoting some earlier person.
We won't hold that against you.


>"This post is my own and doesn’t necessarily represent IBM’s positions,
>strategies or opinions.”

Don't worry, I just came from Almaden.

--
