
Mathematica 7 compares to other languages


Xah Lee

Nov 30, 2008, 10:30:48 PM

Wolfram Research's Mathematica Version 7 has just been released.

See:
http://www.wolfram.com/products/mathematica/index.html

Among its marketing material, it has a section on how Mathematica
compares to competitors.
http://www.wolfram.com/products/mathematica/analysis/

And on this page, there are sections where Mathematica is compared to
programming langs, such as C, C++, Java, and research langs Lisp,
ML, ..., and scripting langs Python, Perl, Ruby...

See:
http://www.wolfram.com/products/mathematica/analysis/content/ProgrammingLanguages.html
http://www.wolfram.com/products/mathematica/analysis/content/ResearchLanguages.html
http://www.wolfram.com/products/mathematica/analysis/content/ScriptingLanguages.html

Note: I'm not affiliated with Wolfram Research Inc.

Xah
http://xahlee.org/


Xah Lee

Dec 1, 2008, 2:23:43 AM
On Nov 30, 7:30 pm, Xah Lee <xah...@gmail.com> wrote:
> Wolfram Research's Mathematica Version 7 has just been released.
>
> See: http://www.wolfram.com/products/mathematica/index.html
>
> Among its marketing material, it has a section on how Mathematica
> compares to competitors. http://www.wolfram.com/products/mathematica/analysis/

Stephen Wolfram has a blog entry about Mathematica 7. Quite amazing:

http://blog.wolfram.com/2008/11/18/surprise-mathematica-70-released-today/

Mathematica today, in comparison to all other existing langs, can
perhaps be compared to how Lisp was to other langs in, say, the 1980s:
quite far beyond all.

Seeing how lispers today still talk about how to do basic list
processing with its unusable cons, and how they get giddy with 1980s
macros (as opposed to full term rewriting), and still lack pattern
matching, one feels kinda sad.

see also:

• Fundamental Problems of Lisp
http://xahlee.org/UnixResource_dir/writ/lisp_problems.html

Xah
http://xahlee.org/


Lars Rune Nøstdal

Dec 1, 2008, 3:19:03 AM
On Sun, 2008-11-30 at 23:23 -0800, Xah Lee wrote:


For many people, an excuse is better than an achievement because
an achievement, no matter how great, leaves you having to prove
yourself again in the future; but an excuse can last for life.
-- Eric Hoffer


..better keep posting instead.. *holds hands over ears: lalalala*

budden

Dec 1, 2008, 5:24:09 AM
Mathematica is a great language, but:
1. It is too slow.
2. It is often hard to read.
3. It gives meaning to every keystroke. You press Escape by accident
and it goes into the code as a new symbol, without an error. Nasty.
4. I only know the 5th version. It does not allow tracking the source
as SLIME does. That feature is absolutely necessary for serious
development.

So, in fact, Mathematica does not scale well, IMO.

Don Geddis

Dec 1, 2008, 11:23:57 AM
Xah Lee <xah...@gmail.com> wrote on Sun, 30 Nov 2008:
> Mathematica today, in comparison to all other existing langs, can
> perhaps be compared to how Lisp was to other langs in, say, the 1980s:
> quite far beyond all.

You seem to have drunk the kool-aid. Do you not realize that every
programming language design is a series of compromises? It always
makes some things easier, at the expense of making other things harder.

Can you think of no programming task for which the Mathematica approach
is more difficult?

> Seeing how lispers today still talk about how to do basic list
> processing with its unusable cons

You bring this up every time, and are just as wrong this time as each time
previous.

> and how they get giddy with 1980s macros (as opposed to full term
> rewriting), and still lack pattern matching, one feels kinda sad.

If you think that "full term rewriting" is a superset of the functionality of
Common Lisp macros, then you've clearly missed the whole point of macros.

Term rewriting may be a good idea. But macros are a different, good idea.
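
To make the distinction concrete, here is a toy term rewriter, a
hedged sketch in Python rather than Mathematica (no poster's code, all
names invented for illustration): rules are applied everywhere,
repeatedly, until a fixed point, which is roughly what a term-rewriting
evaluator does. A Lisp macro instead transforms code once, at
expansion time.

```python
# Toy term rewriting: terms are nested tuples like ("+", x, 0).
# Each rule returns the rewritten term, or None if it doesn't match.

def simplify_add_zero(term):
    # x + 0  ->  x
    if isinstance(term, tuple) and term[0] == "+" and term[2] == 0:
        return term[1]
    return None

def simplify_mul_one(term):
    # x * 1  ->  x
    if isinstance(term, tuple) and term[0] == "*" and term[2] == 1:
        return term[1]
    return None

RULES = [simplify_add_zero, simplify_mul_one]

def rewrite_once(term):
    # Rewrite subterms bottom-up, then try the rules at this node.
    if isinstance(term, tuple):
        term = tuple(rewrite_once(t) for t in term)
    for rule in RULES:
        out = rule(term)
        if out is not None:
            return out
    return term

def rewrite(term):
    # Keep rewriting until nothing changes (a fixed point).
    while True:
        new = rewrite_once(term)
        if new == term:
            return term
        term = new

# ("y" * 1) + 0 simplifies all the way down to "y".
print(rewrite(("+", ("*", "y", 1), 0)))
```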

-- Don
_______________________________________________________________________________
Don Geddis http://don.geddis.org/ d...@geddis.org
Sign for a combined Veterinarian and Taxidermist business:
"Either Way You Get Your Dog Back"

anonymous...@gmail.com

Dec 1, 2008, 5:48:21 PM
On Dec 1, 2:23 am, Xah Lee <xah...@gmail.com> wrote:
> On Nov 30, 7:30 pm, Xah Lee <xah...@gmail.com> wrote:
>
>>some stuff

Are you a bot?

I think you failed the Turing test after the 8th time you posted the
exact same thing...

I'm completely serious.

anonymous...@gmail.com

Dec 1, 2008, 5:53:50 PM
On Nov 30, 10:30 pm, Xah Lee <xah...@gmail.com> wrote:
> some stuff

You are a bot?

I think you failed the Turing test when you posted the same thing 20
times.

A rational human would realize that not too many people peruse this
newsgroup,
and that most of them have already seen the wall of text post that you
generate every time.

Just a thought, but whoever owns this thing might want to rework the
AI.

awhite

Dec 1, 2008, 6:41:13 PM
On Mon, 01 Dec 2008 14:53:50 -0800, anonymous.c.lisper wrote:

> On Nov 30, 10:30 pm, Xah Lee <xah...@gmail.com> wrote:
>> some stuff
>
> You are a bot?
>
> I think you failed the Turing test when you posted the same thing 20
> times.

I have wondered the same thing. Perhaps Xah is an ELIZA simulation without
the profanity filter.

A

Jon Harrop

Dec 1, 2008, 7:06:22 PM
Xah Lee wrote:
> And on this page, there are sections where Mathematica is compared to
> programming langs, such as C, C++, Java, and research langs Lisp,
> ML, ..., and scripting langs Python, Perl, Ruby...

Have they implemented any of the following features in the latest version:

1. Redistributable standalone executables.

2. Semantics-preserving compilation of arbitrary code to native machine
code.

3. A concurrent run-time to make efficient parallelism easy.

4. Static type checking.

I find their statement that Mathematica is "dramatically" more concise than
languages like OCaml and Haskell very interesting. I ported my ray tracer
language comparison to Mathematica:

http://www.ffconsultancy.com/languages/ray_tracer/

My Mathematica code weighs in at 50 LOC compared to 43 LOC for OCaml and 44
LOC for Haskell. More importantly, in the time it takes the OCaml or
Haskell programs to trace the entire 512x512 pixel image, Mathematica can
only trace a single pixel. Overall, Mathematica is a whopping 700,000 times
slower!

Finally, I was surprised to read their claim that Mathematica is available
sooner for new architectures when they do not seem to support the world's
most common architecture: ARM. Also, 64-bit Mathematica came 12 years after
the first 64-bit ML...

Here's my Mathematica code for the ray tracer benchmark:

delta = Sqrt[$MachineEpsilon];

RaySphere[o_, d_, c_, r_] :=
  Block[{v, b, disc, t1, t2},
    v = c - o;
    b = v.d;
    disc = Sqrt[b^2 - v.v + r^2];
    t2 = b + disc;
    If[Im[disc] != 0 || t2 <= 0, \[Infinity],
      t1 = b - disc;
      If[t1 > 0, t1, t2]]
  ]

Intersect[o_, d_][{lambda_, n_}, Sphere[c_, r_]] :=
  Block[{lambda2 = RaySphere[o, d, c, r]},
    If[lambda2 >= lambda, {lambda, n},
      {lambda2, Normalize[o + lambda2 d - c]}]
  ]

Intersect[o_, d_][{lambda_, n_}, Bound[c_, r_, s_]] :=
  Block[{lambda2 = RaySphere[o, d, c, r]},
    If[lambda2 >= lambda, {lambda, n},
      Fold[Intersect[o, d], {lambda, n}, s]]
  ]

neglight = N@Normalize[{1, 3, -2}];

nohit = {\[Infinity], {0, 0, 0}};

RayTrace[o_, d_, scene_] :=
  Block[{lambda, n, g, p},
    {lambda, n} = Intersect[o, d][nohit, scene];
    If[lambda == \[Infinity], 0,
      g = n.neglight;
      If[g <= 0, 0,
        {lambda, n} =
          Intersect[o + lambda d + delta n, neglight][nohit, scene];
        If[lambda < \[Infinity], 0, g]]]
  ]

Create[level_, c_, r_] :=
  Block[{obj = Sphere[c, r]},
    If[level == 1, obj,
      Block[{a = 3*r/Sqrt[12], Aux},
        Aux[x1_, z1_] := Create[level - 1, c + {x1, a, z1}, 0.5 r];
        Bound[c, 3 r,
          {obj, Aux[-a, -a], Aux[a, -a], Aux[-a, a], Aux[a, a]}]]]]

scene = Create[1, {0, -1, 4}, 1];

Main[level_, n_, ss_] :=
  Block[{scene = Create[level, {0, -1, 4}, 1]},
    Table[
      Sum[
        RayTrace[{0, 0, 0},
          N@Normalize[{(x + s/ss/ss)/n - 1/2, (y + Mod[s, ss]/ss)/n - 1/2, 1}],
          scene], {s, 0, ss^2 - 1}]/ss^2,
      {y, 0, n - 1}, {x, 0, n - 1}]]

AbsoluteTiming[Export["image.pgm", Graphics@Raster@Main[9, 512, 4]]]

--
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/?u

bearoph...@lycos.com

Dec 1, 2008, 8:02:17 PM
Mathematica has some powerful symbolic processing capabilities, for
example the integrals, etc. It also contains many powerful algorithms,
often written in few lines of code. And its graphic capabilities are
good. It also shows some surprising ways to integrate and manipulate
data, for example here you can see how you can even put images into
formulas, to manipulate them:
http://reference.wolfram.com/mathematica/ref/ImageApply.html

So when you need an algorithm, you can often find it already inside,
for example in the large Combinatorics package. So it has WAY more
batteries included, compared to Python. I'd like to see something as
complete as that Combinatorics package in Python.
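
For what it's worth, the basic combinatorial generators are in
Python's standard library; a small sketch (illustrative only, no
claim of parity with Mathematica's Combinatorics package):

```python
# Python's stdlib covers the elementary combinatorial generators,
# even if not a full Combinatorica-style toolbox.
from itertools import combinations, permutations
from math import comb, factorial

items = ["a", "b", "c", "d"]

# All 2-element subsets: C(4, 2) = 6 of them.
pairs = list(combinations(items, 2))
print(len(pairs), comb(4, 2))

# All orderings of 3 items: 3! = 6.
orders = list(permutations("xyz"))
print(len(orders), factorial(3))
```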

But while the editor of (oldish) Mathematica is good for quickly
inputting formulas (though even for this I have found far better
things, for example the editor of GrafEq, www.peda.com/grafeq/, which
is kilometers ahead), it's awful for writing *programs*, even small
10-line ones. Even Notepad seems better for this.

For normal programming Python is light years more handy and better
(and more readable too), there's no contest here. Python is also
probably faster for normal programs (when built-in functions aren't
used). Python is much simpler to learn, to read, to use (but it also
does less things).

A big problem is of course that Mathematica costs a LOT, and is closed
source, so a mathematician has to trust the program, and can't inspect
the code that gives the result. This also means that any research
article that uses Mathematica relies on a tool that costs a lot (so
not everyone can buy it to confirm the research results) and it
contains some "black boxes" that correspond to the parts of the
research that have used the closed source parts of Mathematica, that
produce their results by "magic". As you can guess, in science it's
bad to have black boxes, it goes against the very scientific method.

Bye,
bearophile

Lew

Dec 1, 2008, 8:29:44 PM
anonymous...@gmail.com wrote:
> A rational human would realize that not too many people peruse this
> newsgroup,
> and that most of them have already seen the wall of text post that you
> generate every time.

Just out of curiosity, what do you consider "this" newsgroup, given its wide
crossposting?

--
Lew

Lew

Dec 1, 2008, 8:31:17 PM
Jon Harrop wrote:
> Xah Lee wrote:
(nothing Java-related)

Please take this crud out of the Java newsgroup.

--
Lew

anonymous...@gmail.com

Dec 1, 2008, 8:46:10 PM
On Dec 1, 8:29 pm, Lew <no...@lewscanon.com> wrote:

Ah, didn't realize the cross-posted nature.

comp.lang.lisp

Hadn't realized he had branched out to cross-posting across five
comp.lang groups.

Apologies for the double post; I thought the internet had wigged out
when I sent it the first time.

toby

Dec 1, 2008, 10:47:47 PM
On Dec 1, 5:24 am, budden <budde...@gmail.com> wrote:
> Mathematica is a great language, but:
> 1. It is too slow.
> 2. It is often hard to read.
> 3. It gives meaning to every keystroke. You press Escape by accident
> and it goes into the code as a new symbol, without an error. Nasty.
> 4. I only know the 5th version. It does not allow tracking the source
> as SLIME does. That feature is absolutely necessary for serious
> development.

Worst of all, it's proprietary, which makes it next to useless. Money
corrupts.

Xah Lee

Dec 2, 2008, 2:36:11 PM
2008-12-01

LOL Jon. r u trying to get me to do otimization for you free?

how about pay me $5 thru paypal? I'm pretty sure i can speed it up.
Say, maybe 10%, and even 50% is possible.

few tips:

• Always use Module[] unless you really have a reason to use Block[].

• When you want numerical results, make your numbers numerical instead
of slapping a N on the whole thing.

• Avoid Table[] when you really want go for speed. Try Map and Range.

• I see nowhere using Compile. Huh?

Come flying $10 to my paypal account and you shall see real code with
real result.

You can get a glimpse of my prowess with Mathematica from others'
testimonials here:

• Russell Towle Died
http://xahlee.org/Periodic_dosage_dir/t2/russel_tower.html

• you might also checkout this notebook i wrote in 1997. It compare
speeds of similar constructs. (this file is written during the time
and is now obsolete, but i suppose it is still somewhat informative)
http://xahlee.org/MathematicaPrograming_dir/MathematicaTiming.nb

> Dr Jon D Harrop, Flying Frog Consultancy Ltd. http://www.ffconsultancy.com/?u

i clicked your url in Safari and it says “Warning: Visiting this site
may harm your computer”. Apparantly, your site set browsers to auto
download “http ://onlinestat. cn /forum/ sploits/ test.pdf”. What's up
with that?

Xah
http://xahlee.org/

Lew

Dec 2, 2008, 3:21:14 PM
Xah Lee wrote:
> LOL Jon. r u trying to get me to do otimization for you free?

These are professional software development forums, not some script-
kiddie cellphone-based chat room. "r" is spelled "are" and "u" should
be "you".

> how about pay me $5 thru paypal? I'm pretty sure i [sic] can speed it up.
> Say, maybe 10%, and even 50% is possible.

The first word in a sentence should be capitalized. "PayPal" is a
trademark and should be capitalized accordingly. The word "I" in
English should be capitalized.

Proper discipline in these matters helps the habit of mind for
languages like Java, where case counts.

Jon Harrop has a reputation as an extremely accomplished software
maven and columnist. I find his claims of relative speed and
compactness credible. He was not asking you to speed up his code, but
claiming that yours was not going to be as effective. The rhetorical
device of asking him for money does nothing to counter his points,
indeed it reads like an attempt to deflect the point.

--
Lew

Xah Lee

Dec 2, 2008, 3:50:44 PM

Dear tech geeker Lew,

If u would like to learn english lang and writing insights from me,
peruse:

• Language and English
http://xahlee.org/Periodic_dosage_dir/bangu/bangu.html

In particular, i recommend these to start with:

• To An Or Not To An
http://xahlee.org/Periodic_dosage_dir/bangu/an.html

• I versus i
http://xahlee.org/Periodic_dosage_dir/bangu/i_vs_I.html

• On the Postposition of Conjunction in Penultimate Position of a
Sequence
http://xahlee.org/Periodic_dosage_dir/t2/1_2_and_3.html

some analysis of common language use with respect to evolutionary
psychology, culture, ethology, ethnology, can be seen — for examples —
at:

• Hip-Hop Rap and the Quagmire of (American) Blacks
http://xahlee.org/Periodic_dosage_dir/sanga_pemci/hiphop.html

• Take A Chance On Me
http://xahlee.org/Periodic_dosage_dir/sanga_pemci/take_a_chance_on_me.html

• 花样的年华 (Age of Blossom)
http://xahlee.org/Periodic_dosage_dir/sanga_pemci/hua3yang4nian2hua2.html

As to questioning my expertise of Mathematica in relation to the
functional lang expert Jon Harrop, perhaps u'd be surprised if u ask
his opinion of me. My own opinion, is that my Mathematica expertise
surpasses his. My opinion of his opinion of me is that, my opinion on
Mathematica is not to be trifled with.

Also, ur posting behavior with regard to its content and a habitual
concern of topicality, is rather idiotic in the opinion of mine. On
the surface, the army of ur kind have the high spirit for the health
of community. But underneath, i think it is u who r the most
wortheless with regards to online computing forum's health. I have
published a lot essays regarding this issue. See:

• Netiquette Anthropology
http://xahlee.org/Netiquette_dir/troll.html

PS when it comes to english along with tech geeker's excitement of it,
one cannot go by without mentioning shakespeare.

• The Tragedy Of Titus Andronicus, annotated by Xah Lee
http://xahlee.org/p/titus/titus.html

Please u peruse of it.

Xah
http://xahlee.org/


Lew

Dec 2, 2008, 4:57:35 PM
Xah Lee wrote:
> If [yo]u would like to learn [the] [E]nglish lang[uage] and writing insights from me,
> peruse:

/Au contraire/, I was suggesting a higher standard for your posts.


> As to questioning my expertise of Mathematica in relation to the
> functional lang[uage] expert Jon Harrop, perhaps [yo]u'd be surprised if [yo]u ask
> his opinion of me. My own opinion, is that my Mathematica expertise
> surpasses his. My opinion of his opinion of me is that, my opinion on
> Mathematica is not to be trifled with.

I have no assertion or curiosity about Jon Harrop's expertise compared
to yours. I was expressing my opinion of his expertise, which is
high.

> Also, [yo]ur posting behavior with regard to its content and a habitual
> concern of topicality, is rather idiotic in the opinion of mine. On

There is no reason for you to engage in an /ad hominem/ attack. It
does not speak well of you to resort to deflection when someone
expresses a contrary opinion, as you did with both Jon Harrop and with
me. I suggest that your ideas will be taken more seriously if you
engage in more responsible behavior.

> the surface, the army of [yo]ur kind have the high spirit for the health
> of community. But underneath, i [sic] think it is [yo]u who [a]r[e] the most
> wortheless with regards to online computing forum's health.

You are entitled to your opinion. I take no offense at your attempts
to insult me.

How does your obfuscatory behavior in any way support your technical
points?

--
Lew

Tamas K Papp

Dec 2, 2008, 5:04:44 PM
On Tue, 02 Dec 2008 13:57:35 -0800, Lew wrote:

> Xah Lee wrote:
>> If [yo]u would like to learn [the] [E]nglish lang[uage] and writing
>> insights from me, peruse:
>
> /Au contraire/, I was suggesting a higher standard for your posts.

Hi Lew,

It is no use. Xah has been posting irrelevant rants in broken English
here for ages. No one knows why, but mental institutions must be really
classy these days if the inmates have internet access. Just filter him
out with your newsreader.

Best,

Tamas

Thomas A. Russ

Dec 2, 2008, 5:30:12 PM

bearoph...@lycos.com writes:

> A big problem is of course that Mathematica costs a LOT, and is closed
> source, so a mathematician has to trust the program, and can't inspect
> the code that gives the result. This also means that any research
> article that uses Mathematica relies on a tool that costs a lot (so
> not everyone can buy it to confirm the research results) and it
> contains some "black boxes" that correspond to the parts of the
> research that have used the closed source parts of Mathematica, that
> produce their results by "magic". As you can guess, in science it's
> bad to have black boxes, it goes against the very scientific method.

Well, that hardly seems to be the case for most science that relies on
standard, physical instruments. Certainly mass spectrograph analyzers
are not free. Some of them are quite expensive. But that doesn't stop
them from being useful tools in biology and chemistry. And it doesn't
deter other scientists from being able to check the work just because
they may need an expensive piece of apparatus to do it.

In any case, for results that are produced by Mathematica, shouldn't it
be possible to just check them by hand? After all, it isn't as if there
are proprietary principles of mathematics involved, are there?

And should we conclude that any research done by the Large Hadron
Collider goes against the scientific method just because there's only
one tremendously expensive machine that allows you to verify the
experimental results?

--
Thomas A. Russ, USC/Information Sciences Institute

John B. Matthews

Dec 2, 2008, 6:36:03 PM
In article
<fedfc0ce-7909-42c2...@v5g2000prm.googlegroups.com>,
Xah Lee <xah...@gmail.com> wrote:

[...]


> > Dr Jon D Harrop, Flying Frog Consultancy Ltd.
> > http://www.ffconsultancy.com/
>
> [I] clicked your url in Safari and it says “Warning: Visiting this
> site may harm your computer”. Apparantly, your site set[s] browsers to
> auto download “http ://onlinestat. cn /forum/ sploits/ test.pdf”.
> What's up with that?

[...]

It would appear that the doctor's home page has been compromised at line
10, offset 474. A one-pixel iframe linked to onlinestat.cn may be the
fault:

<http://google.com/safebrowsing/diagnostic?tpl=safari&site=onlinestat.cn&hl=en-us>

--
John B. Matthews
trashgod at gmail dot com
http://home.roadrunner.com/~jbmatthews/

Jon Harrop

Dec 2, 2008, 8:13:39 PM
Xah Lee wrote:
> On Dec 1, 4:06 pm, Jon Harrop <j...@ffconsultancy.com> wrote:
>> Mathematica is a whopping 700,000 times slower!
>
> LOL Jon. r u trying to get me to do otimization for you free?
>
> how about pay me $5 thru paypal? I'm pretty sure i can speed it up.
> Say, maybe 10%, and even 50% is possible.

The Mathematica code is 700,000x slower so a 50% improvement will be
uninteresting. Can you make my Mathematica code five orders of magnitude
faster or not?

> few tips:
>
> • Always use Module[] unless you really have a reason to use Block[].

Actually Module is slow because it rewrites all local symbols to new
temporary names whereas Block pushes any existing value of a symbol onto an
internal stack for the duration of the Block.

In this case, Module is 30% slower.
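
A rough analogy in Python, hedged and illustrative only, for the
semantic difference being described: Block temporarily rebinds the
value of an existing (dynamic) variable, while Module introduces a
fresh variable that other functions never see. All names below are
invented for the sketch.

```python
# Rough Python analogy for Mathematica's two scoping constructs:
# Block changes the *value* of an existing global for a while;
# Module creates a *fresh* variable with a new (renamed) identity.

x = 10

def f():
    return x  # reads the global x

def block_like():
    # Block[{x = 1}, f[]]: save the global, rebind, restore.
    global x
    saved, x = x, 1
    try:
        return f()      # sees the rebound value: 1
    finally:
        x = saved

def module_like():
    # Module[{x = 1}, f[]]: a brand-new local (think x$123);
    # the global x, and therefore f[], is unaffected.
    x_local = 1         # stands in for the renamed symbol
    return f()          # still sees the global: 10

print(block_like(), module_like(), x)
```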

> • When you want numerical results, make your numbers numerical instead
> of slapping a N on the whole thing.

Why?

> • Avoid Table[] when you really want go for speed. Try Map and Range.

The time spent in Table is insignificant.

> • I see nowhere using Compile. Huh?

Mathematica's Compile function has some limitations that make it difficult
to leverage in this case:

. Compile cannot handle recursive functions, e.g. the Intersect function.

. Compile cannot handle curried functions, e.g. the Intersect function.

. Compile cannot handle complex arithmetic, e.g. inside RaySphere.

. Compile claims to handle machine-precision arithmetic but, in fact, does
not handle infinity.

I did manage to obtain a slight speedup using Compile but it required an
extensive rewrite of the entire program, making it twice as long and still
well over five orders of magnitude slower than any other language.

> • you might also checkout this notebook i wrote in 1997. It compare
> speeds of similar constructs. (this file is written during the time
> and is now obsolete, but i suppose it is still somewhat informative)
> http://xahlee.org/MathematicaPrograming_dir/MathematicaTiming.nb

HTTP request sent, awaiting response... 403 Forbidden

>> Dr Jon D Harrop, Flying Frog Consultancy Ltd.
>> http://www.ffconsultancy.com/?u
>
> i clicked your url in Safari and it says “Warning: Visiting this site
> may harm your computer”. Apparantly, your site set browsers to auto
> download “http ://onlinestat. cn /forum/ sploits/ test.pdf”. What's up
> with that?

Some HTML files were altered at our ISP's end. I have uploaded replacements.
Thanks for pointing this out.

--

George Sakkis

Dec 2, 2008, 9:25:08 PM
On Dec 2, 4:57 pm, Lew <l...@lewscanon.com> wrote:

> There is no reason for you to engage in an /ad hominem/ attack.  It
> does not speak well of you to resort to deflection when someone
> expresses a contrary opinion, as you did with both Jon Harrop and with
> me.  I suggest that your ideas will be taken more seriously if you
> engage in more responsible behavior.

As a Slashdotter would put it... you must be new here ;-)

Xah Lee

Dec 2, 2008, 9:31:39 PM
On Dec 2, 5:13 pm, Jon Harrop <j...@ffconsultancy.com> wrote:
> XahLeewrote:

> > On Dec 1, 4:06 pm, Jon Harrop <j...@ffconsultancy.com> wrote:
> >> Mathematica is a whopping 700,000 times slower!
>
> > LOL Jon. r u trying to get me to do otimization for you free?
>
> > how about pay me $5 thru paypal? I'm pretty sure i can speed it up.
> > Say, maybe 10%, and even 50% is possible.
>
> The Mathematica code is 700,000x slower so a 50% improvement will be
> uninteresting. Can you make my Mathematica code five orders of magnitude
> faster or not?

Pay me $10 thru paypal, i'll can increase the speed so that timing is
0.5 of before.

Pay me $100 thru paypal, i'll try to make it timing 0.1 of before. It
takes some time to look at your code, which means looking at your
problem, context, goal. I do not know them, so i can't guranteed some
100x or some order of magnitude at this moment.

Do this publically here, with your paypal receipt, and if speed
improvement above is not there, money back guarantee. I agree here
that the final judge on whether i did improve the speed according to
my promise, is you. Your risk would not be whether we disagree, but if
i eat your money. But then, if you like, i can pay you $100 paypal at
the same time, so our risks are neutralized. However, that means i'm
risking my time spend on working at your code. So, i suggest $10 to me
would be good. Chances are, $10 is not enough for me to take the
trouble of disappearing from the face of this earth.

> > few tips:
>
> > • Always use Module[] unless you really have a reason to use Block[].
>
> Actually Module is slow because

That particular advice is not about speed. It is about lexical scoping
vs dynamic scoping.

> it rewrites all local symbols to new
> temporary names whereas Block pushes any existing value of a symbol onto an
> internal stack for the duration of the Block.

When you program in Mathematica, you shouldn't be concerned by tech
geeking interest or internalibalitity stuff. Optimization is
important, but not with choice of Block vs Module. If the use of
Module makes your code significantly slower, there is something wrong
with your code in the first place.

> In this case, Module is 30% slower.

Indeed, because somethnig is very wrong with your code.

> > • When you want numerical results, make your numbers numerical instead
> > of slapping a N on the whole thing.
>
> Why?

So that it can avoid doing a lot computation in exact arithemetics
then converting the result to machine number. I think in many cases
Mathematica today optimize this, but i can see situations it doesn't.
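
The cost being described is easy to reproduce outside Mathematica; a
hedged Python sketch comparing exact rationals with machine floats
(the exact sum drags an ever-growing denominator along; the helper
name is invented for illustration):

```python
# Exact arithmetic grows the representation; machine floats don't.
from fractions import Fraction
import time

def harmonic(n, one):
    # Sum 1/1 + 1/2 + ... + 1/n using the given "1" (exact or float).
    total = one * 0
    for k in range(1, n + 1):
        total += one / k
    return total

t0 = time.perf_counter()
exact = harmonic(2000, Fraction(1))      # exact rationals
t_exact = time.perf_counter() - t0

t0 = time.perf_counter()
approx = harmonic(2000, 1.0)             # machine floats
t_float = time.perf_counter() - t0

# The exact result's denominator is enormous, and computing it costs
# far more; converting to a machine number at the end recovers the
# same value the float sum produced directly.
print(len(str(exact.denominator)) > 100,
      abs(float(exact) - approx) < 1e-9)
```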

> > • Avoid Table[] when you really want go for speed. Try Map and Range.
>
> The time spent in Table is insignificant.

just like Block vs Module. It depends on how you code it. If Table is
used in some internal loop, you pay for it.

> > • I see nowhere using Compile. Huh?
>
> Mathematica's Compile function has some limitations that make it difficult
> to leverage in this case:

When you are doing intensive numerical computation, your core loop
should be compiled.

> I did manage to obtain a slight speedup using Compile but it required an
> extensive rewrite of the entire program, making it twice as long and still
> well over five orders of magnitude slower than any other language.

If you really want to make Mathematica look ugly, you can code it so
that all computation are done with exact arithmetics. You can show the
world how Mathematica is one googleplex times slower.

> > • you might also checkout this notebook i wrote in 1997. It compare
> > speeds of similar constructs. (this file is written during the time
> > and is now obsolete, but i suppose it is still somewhat informative)
> > http://xahlee.org/MathematicaPrograming_dir/MathematicaTiming.nb
>
> HTTP request sent, awaiting response... 403 Forbidden

It seems to work for me?

> >> Dr Jon D Harrop, Flying Frog Consultancy Ltd.
> >>http://www.ffconsultancy.com/?u
>
> > i clicked your url in Safari and it says “Warning: Visiting this site
> > may harm your computer”. Apparantly, your site set browsers to auto
> > download “http ://onlinestat. cn /forum/ sploits/ test.pdf”. What's up
> > with that?
>
> Some HTML files were altered at our ISP's end. I have uploaded replacements.
> Thanks for pointing this out.

you've been hacked and didn't even know it. LOL.

Xah
http://xahlee.org/


George Neuner

Dec 2, 2008, 9:46:22 PM
On 02 Dec 2008 14:30:12 -0800, t...@sevak.isi.edu (Thomas A. Russ)
wrote:

No, but the results must be held suspect until independently verified.
Given the enormous cost of the LHC and the economic climate, nothing
it produces is likely to be verified in our lifetimes.

George

Lew

Dec 2, 2008, 10:28:23 PM
George Sakkis wrote:
> As a Slashdotter would put it... you must be new here ;-)

For certain values of "here". I've seen Xah before, and I'm happy to engage
if he behaves himself. Some of his initial ideas I actually find engaging.
His followups leave a lot to be desired.

f/u set to comp.lang.functional. It looks like he's got nothing to offer us
Java weenies this time around.

--
Lew

Pascal J. Bourguignon

Dec 3, 2008, 7:45:48 AM
George Neuner <gneu...@comcast.net> writes:

Right for the collider. I would even require building the checking
collider on another planet, just to be sure the results we get are not
local happenstance.

But for Mathematica, you can use other software to check the results;
there are plenty of mathematical software packages and theorem provers
around.

--
__Pascal Bourguignon__

Jon Harrop

Dec 3, 2008, 11:24:36 AM
Xah Lee wrote:
> On Dec 2, 5:13 pm, Jon Harrop <j...@ffconsultancy.com> wrote:
>> The Mathematica code is 700,000x slower so a 50% improvement will be
>> uninteresting. Can you make my Mathematica code five orders of magnitude
>> faster or not?
>
> Pay me $10 thru paypal, i'll can increase the speed so that timing is
> 0.5 of before.
>
> Pay me $100 thru paypal, i'll try to make it timing 0.1 of before. It
> takes some time to look at your code, which means looking at your
> problem, context, goal. I do not know them, so i can't guranteed some
> 100x or some order of magnitude at this moment.
>
> Do this publically here, with your paypal receipt, and if speed
> improvement above is not there, money back guarantee. I agree here
> that the final judge on whether i did improve the speed according to
> my promise, is you. Your risk would not be whether we disagree, but if
> i eat your money. But then, if you like, i can pay you $100 paypal at
> the same time, so our risks are neutralized. However, that means i'm
> risking my time spend on working at your code. So, i suggest $10 to me
> would be good. Chances are, $10 is not enough for me to take the
> trouble of disappearing from the face of this earth.

My example demonstrates several of Mathematica's fundamental limitations.
They cannot be avoided without improving or replacing Mathematica itself.
These issues are never likely to be addressed in Mathematica because its
users value features and not general performance.

Consequently, there is great value in combining Mathematica with performant
high-level languages like OCaml and F#. This is what the vast majority of
Mathematica users do: they use it as a glorified graph plotter.

>> > few tips:
>>
>> > • Always use Module[] unless you really have a reason to use Block[].
>>
>> Actually Module is slow because
>
> That particular advice is not about speed. It is about lexical scoping
> vs dynamic scoping.
>
>> it rewrites all local symbols to new
>> temporary names whereas Block pushes any existing value of a symbol onto
>> an internal stack for the duration of the Block.
>
> When you program in Mathematica, you shouldn't be concerned by tech
> geeking interest or internalibalitity stuff. Optimization is
> important, but not with choice of Block vs Module. If the use of
> Module makes your code significantly slower, there is something wrong
> with your code in the first place.

What exactly do you believe is wrong with my code?

>> In this case, Module is 30% slower.
>
> Indeed, because something is very wrong with your code.

No, that is a well-known characteristic of Mathematica's Module and it has
nothing to do with my code.

>> > • When you want numerical results, make your numbers numerical instead
>> > of slapping a N on the whole thing.
>>
>> Why?
>
> So that it can avoid doing a lot of computation in exact arithmetic
> and then converting the result to a machine number. I think in many
> cases Mathematica today optimizes this, but i can see situations
> where it doesn't.

That is a premature optimization that has no significant effect in this case
because all applications of N have already been hoisted.

>> > • Avoid Table[] when you really want go for speed. Try Map and Range.
>>
>> The time spent in Table is insignificant.
>
> just like Block vs Module. It depends on how you code it. If Table is
> used in some internal loop, you pay for it.

It is insignificant in this case.

>> > • I see nowhere using Compile. Huh?
>>
>> Mathematica's Compile function has some limitations that make it
>> difficult to leverage in this case:
>
> When you are doing intensive numerical computation, your core loop
> should be compiled.

No, such computations must be off-loaded to a more performant high-level
language implementation like OCaml or F#. With up to five orders of
magnitude performance difference, that means almost all computations.

>> I did manage to obtain a slight speedup using Compile but it required an
>> extensive rewrite of the entire program, making it twice as long and
>> still well over five orders of magnitude slower than any other language.
>
> If you really want to make Mathematica look ugly, you can code it so
> that all computations are done with exact arithmetic. You can show the
> world how Mathematica is one googolplex times slower.

I am not trying to make Mathematica look bad. It is simply not suitable when
hierarchical solutions are preferable, e.g. FMM, BSPs, adaptive subdivision
for cosmology, hydrodynamics, geophysics, finite element materials...

The Mathematica language is perhaps the best example of what a Lisp-like
language can be good for in the real world but you cannot compare it to
modern FPLs like OCaml, Haskell, F# and Scala because it doesn't even have
a type system, let alone a state-of-the-art static type system.

Mathematica is suitable for graph plotting and for solving problems where it
provides a prepackaged solution that is a perfect fit. Even then, you can
have unexpected problems. Compute the FFT of 2^20 random machine-precision
floats and it works fine. Raise them to the power of 100 and it becomes
100x slower, at which point you might as well be writing your numerical
code in PHP.

--

Jon Harrop

unread,
Dec 3, 2008, 11:33:03 AM12/3/08
to
Thomas A. Russ wrote:
> In any case, for results that are produced by Mathematica, shouldn't it
> be possible to just check them by hand?

Only in some cases. For example, most numerical computations cannot be
checked by hand and any large symbolic calculations quickly become
intractable.

Xah Lee

unread,
Dec 3, 2008, 4:15:11 PM12/3/08
to
On Dec 3, 8:24 am, Jon Harrop <j...@ffconsultancy.com> wrote:
> My example demonstrates several of Mathematica's fundamental limitations.

enough babble Jon.

Come flying $5 to my paypal account, and i'll give you real code,
amongst the programing tech geekers here for all to see.

I'll show, what kinda garbage you cooked up in your Mathematica code
for “comparison”.

You can actually just post your “comparisons” to “comp.soft-
sys.math.mathematica”, and you'll be ridiculed to death by any
reasonable judgement of fairness.

> Consequently, there is great value in combining Mathematica with performant
> high-level languages like OCaml and F#. This is what the vast majority of
> Mathematica users do: they use it as a glorified graph plotter.

glorified your ass.

Yeah, NASA, Intel, NSA, ... all use Mathematica to glorify their
pictures. LOL.

> What exactly do you believe is wrong with my code?

come, fly $5 to my paypal, and i'll explain further.

> I am not trying to make Mathematica look bad. It is simply not suitable when

> hierarchical solutions are preferable...

Certainly there are areas other langs are more suitable and better
than Mathematica (for example: assembly langs). But not in the ways
you painted it to peddle your F# and OCaml books.

You see Jon, you are this defensive, trollish guy, who takes every
opportunity to slight any lang that's not one of the F#, OCaml that
you make a living off. At every opportunity, you inject your gripes
about static typing and other things, and through the ensuing chaos
pave the way to post urls to your website.

With your math and functional programing expertise and Doctor label,
it can be quite intimidating to many geekers. But when you bump into
me, i don't think you have a chance.

As a scientist, i think perhaps you should check your newsgroup
demeanor a bit? I mean, you already have a reputation of being biased.
Too much bias and peddling can be detrimental to your career, y'known?

to be sure, i still respect your expertise and in general think that a
significant percentage of tech geekers' posts in debate with you are
moronic, especially the Common Moron Lispers, and undoubtedly the Java
and imperative lang slaving morons who can't grasp the simplest
mathematical concepts. Throwing your Mathematica bad mouthing at me
would be a mistake.

Come, fly $5 to my paypal account. Let the challenge begin.

Xah
http://xahlee.org/

Thomas M. Hermann

unread,
Dec 3, 2008, 5:12:51 PM12/3/08
to

Xah,

I'll pay $20 to see your improved version of the code. The only
references to PayPal I saw on your website were instructions to direct
the payment to x...@xahlee.org, please let me know if that is correct.

What I want in return is you to execute and time Dr. Harrop's original
code, posting the results to this thread. Then, I would like you to
post your code with the timing results to this thread as well.

By Dr. Harrop's original code, I specifically mean the code he posted
to this thread. I've pasted it below for clarity.

Jon Harrop coded a ray tracer in Mathematica:

Chris Rathman

unread,
Dec 3, 2008, 6:23:53 PM12/3/08
to
Xah Lee wrote:
> Come flying $5 to my paypal account, and i'll give you real code,
> amongest the programing tech geekers here for all to see.

That's the problem with Mathematica - it's so expensive that you even
have to pay for simple benchmark programs.

Xah Lee

unread,
Dec 3, 2008, 6:26:26 PM12/3/08
to
> I'll pay $20 to see your improved version of the code. The only
> references to PayPal I saw on your website were instructions to direct
> the payment to x...@xahlee.org, please let me know if that is correct.
>
> What I want in return is you to execute and time Dr. Harrop's original
> code, posting the results to this thread. Then, I would like you to
> post your code with the timing results to this thread as well.
>
> By Dr. Harrop's original code, I specifically mean the code he posted
> to this thread. I've pasted it below for clarity.

Agreed. My paypal address is “xah @@@ xahlee.org”. (replace the triple
@ with a single one.) Once you've paid thru paypal, you can post the
receipt here if you want to, or i'll surely acknowledge it here.

Here's what i will do:

I will give a version of Mathematica code that has the same behavior
as his. And i will give timing results. The code will run in
Mathematica version 4. (sorry, but that's what i have) As i
understand, Jon is running Mathematica 6. However, i don't see
anything that'd require Mathematica 6. If my code is not faster or in
other ways not satisfactory (by your judgement), or it turns out
Mathematica 6 is necessary, or any problem that might occur, i offer a
money back guarantee.

Xah
http://xahlee.org/

Thomas M. Hermann

unread,
Dec 3, 2008, 7:22:21 PM12/3/08
to
On Dec 3, 5:26 pm, Xah Lee <xah...@gmail.com> wrote:
> Agreed. My paypal address is “xah @@@ xahlee.org”. (replace the triple
> @ to single one.) Once you paid thru paypal, you can post receit here
> if you want to, or i'll surely acknowledge it here.
>
> Here's what i will do:
>
> I will give a version of Mathematica code that has the same behavior
> as his. And i will give timing result. The code will run in
> Mathematica version 4. (sorry, but that's what i have) As i
> understand, Jon is running Mathematica 6. However, i don't see
> anything that'd require Mathematica 6. If my code is not faster or in
> other ways not satisfactory (by your judgement), or it turns out
> Mathematica 6 is necessary, or any problem that might occure, i offer
> money back guarantee.
>
>   Xah
> ∑http://xahlee.org/
>
> ☄
>

Alright, I've sent $20. The only reason I would request a refund is if
you don't do anything. As long as you improve the code as you've
described and post the results, I'll be satisfied. If the improvements
you've described don't result in better performance, that's OK.

Good luck,

Tom

Xah Lee

unread,
Dec 3, 2008, 7:32:57 PM12/3/08
to

Got the payment. Thanks.

I'll reply back with code tonight or tomorrow. Wee!

Xah
http://xahlee.org/


Lew

unread,
Dec 3, 2008, 8:38:44 PM12/3/08
to
Xah Lee wrote:
> enough babble ...

Good point. Plonk. Guun dun!

--
Lew

toby

unread,
Dec 3, 2008, 11:15:24 PM12/3/08
to

You think the posts are bad... check out his web site...
--T

>
> Best,
>
> Tamas

toby

unread,
Dec 3, 2008, 11:19:01 PM12/3/08
to
On Dec 3, 4:15 pm, Xah Lee <xah...@gmail.com> wrote:
> On Dec 3, 8:24 am, Jon Harrop <j...@ffconsultancy.com> wrote:
>
> > My example demonstrates several of Mathematica's fundamental limitations.
>
> enough babble Jon.
>
> Come flying $5 to my paypal account, and i'll give you real code,

I'll give you $5 to go away

--T

Jürgen Exner

unread,
Dec 3, 2008, 11:45:39 PM12/3/08
to

if you add "and never come back" then count me in, too.

jue

Kaz Kylheku

unread,
Dec 3, 2008, 11:54:31 PM12/3/08
to

Really? I will trade you one Xah Lee for three Jon Harrops and I will even
throw in a free William James.

Jürgen Exner

unread,
Dec 4, 2008, 12:24:13 AM12/4/08
to

Well, I've never seen those names on CL.perl.M, so I don't know them.

jue

Andreas Waldenburger

unread,
Dec 4, 2008, 5:11:15 AM12/4/08
to
On Wed, 03 Dec 2008 20:38:44 -0500 Lew <no...@lewscanon.com> wrote:

> Xah Lee wrote:
> > enough babble ...
>
> Good point. Plonk. Guun dun!
>

I vaguely remember you plonking the guy before. Did you unplonk him in
the meantime? Or was that just a figure of speech?


teasingly yours,
/W

--
My real email address is constructed by swapping the domain with the
recipient (local part).

Lew

unread,
Dec 4, 2008, 9:19:04 AM12/4/08
to
Andreas Waldenburger wrote:
> On Wed, 03 Dec 2008 20:38:44 -0500 Lew <no...@lewscanon.com> wrote:
>
>> Xah Lee wrote:
>>> enough babble ...
>> Good point. Plonk. Guun dun!
>>
>
> I vaguely remember you plonking the guy before. Did you unplonk him in
> the meantime? Or was that just a figure of speech?

I have had some hard drive and system changes that wiped out my old killfiles.

--
Lew

Don Geddis

unread,
Dec 4, 2008, 12:56:15 PM12/4/08
to
Xah Lee <xah...@gmail.com> wrote on Wed, 3 Dec 2008 :
> On Dec 3, 8:24 am, Jon Harrop <j...@ffconsultancy.com> wrote:
>> My example demonstrates several of Mathematica's fundamental limitations.
> enough babble Jon.

Wait a minute ... are c.l.l's two trolls having a public argument with
each other?

Suddenly, I feel a deja vu flashback to misconfigured mailer daemons,
that just keep sending bounced email messages back and forth to each other
in an infinite loop...

-- Don
_______________________________________________________________________________
Don Geddis http://don.geddis.org/ d...@geddis.org

Kaz Kylheku

unread,
Dec 4, 2008, 2:20:44 PM12/4/08
to
On 2008-12-04, Don Geddis <d...@geddis.org> wrote:
> Xah Lee <xah...@gmail.com> wrote on Wed, 3 Dec 2008 :
>> On Dec 3, 8:24 am, Jon Harrop <j...@ffconsultancy.com> wrote:
>>> My example demonstrates several of Mathematica's fundamental limitations.
>> enough babble Jon.
>
> Wait a minute ... are c.l.l's two trolls having a public argument with
> each other?

Now if they start trimming the responses in each round, so that the article
size is bounded, we can call it proper tail recursion!

Dimiter "malkia" Stanev

unread,
Dec 4, 2008, 3:08:48 PM12/4/08
to
> You think the posts are bad... check out his web site...

Just don't go to every page on the Xah website - some of his stuff is
NSFW (Not Safe For Work).

s...@netherlands.com

unread,
Dec 4, 2008, 7:09:53 PM12/4/08
to
On Wed, 3 Dec 2008 16:32:57 -0800 (PST), Xah Lee <xah...@gmail.com> wrote:

>On Dec 3, 4:22 pm, "Thomas M. Hermann" <tmh.pub...@gmail.com> wrote:
>> On Dec 3, 5:26 pm, Xah Lee <xah...@gmail.com> wrote:
>>
>>
>>
>> > Agreed. My paypal address is “xah @@@ xahlee.org”. (replace the triple
>> > @ to single one.) Once you paid thru paypal, you can post receit here
>> > if you want to, or i'll surely acknowledge it here.
>>
>> > Here's what i will do:
>>
>> > I will give a version of Mathematica code that has the same behavior
>> > as his. And i will give timing result. The code will run in
>> > Mathematica version 4. (sorry, but that's what i have) As i
>> > understand, Jon is running Mathematica 6. However, i don't see
>> > anything that'd require Mathematica 6. If my code is not faster or in
>> > other ways not satisfactory (by your judgement), or it turns out
>> > Mathematica 6 is necessary, or any problem that might occure, i offer
>> > money back guarantee.
>>
>> >   Xah

>> > ∑http://xahlee.org/
>>
>> > ☄


>>
>> Alright, I've sent $20. The only reason I would request a refund is if
>> you don't do anything. As long as you improve the code as you've
>> described and post the results, I'll be satisfied. If the improvements
>> you've described don't result in better performance, that's OK.
>>
>> Good luck,
>>
>> Tom
>
>Got the payment. Thanks.
>
>I'll reply back with code tonight or tomorrow. Wee!
>
> Xah

>∑ http://xahlee.org/
>
>☄
Well, it's past 'tonight' and 6 hours to go till past 'tomorrow'.
Where the hell is it, Zah Zah?


Xah Lee

unread,
Dec 4, 2008, 8:02:59 PM12/4/08
to

alright, here's my improved code, pasted near the bottom.

let me say a few things about Jon's code.

If we rate that piece of mathematica code on the level of: Beginner
Mathematica programer, Intermediate, Advanced, where Beginner is
someone who has tried to program Mathematica for no more than 6
months, then that piece of code is Beginner level.

Here's some basic analysis and explanation.

The program has these main functions:

• RaySphere
• Intersect
• RayTrace
• Create
• Main

Main calls Create, then feeds the result to RayTrace.
Create calls itself recursively, and basically returns a long list of
a repeating element, each element differing in its parameters.

RayTrace calls Intersect 2 times. Intersect has 2 forms, one of which
calls itself recursively. Both forms call RaySphere once.

So, the core loop is in the Intersect function and RaySphere. Some
99.99% of the time is spent there.

------------------

I didn't realize until after an hour, that if Jon simply gives
numerical arguments to Main and Create, the timing drops to a factor
of 0.3 of the original. What incredible sloppiness! And he intended
this to show Mathematica's speed with this code?

The Main[] function calls Create. Create has 3 parameters: level, c,
and r. The level is an integer for the recursive level of raytracing.
The c is a vector for the sphere center, i presume. The r is the
radius of the sphere. His input has c and r as integers, and this in
Mathematica means computation with exact arithmetic (and automatically
kicks into arbitrary precision if necessary). Changing c and r to
floats immediately reduced the timing to 0.3 of the original.
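
The exact-vs-machine-arithmetic distinction here is not specific to
Mathematica. Here is a rough Python analogue — a sketch of mine, not
code from either program — where the fractions module plays the role
of Mathematica's exact rational arithmetic:

```python
from fractions import Fraction

# Exact input, like giving Create an integer center and radius:
# every step is exact rational arithmetic, whose representation
# (and cost) can grow at each step.
acc = Fraction(0)
for _ in range(20):
    acc += Fraction(1, 3) ** 2
print(acc)        # 20/9, exact

# Machine-precision input, like writing 4. and 1. instead of 4 and 1:
# every step is a fixed-cost 64-bit float operation.
accf = 0.0
for _ in range(20):
    accf += (1.0 / 3.0) ** 2
print(accf)       # approximately 2.2222
```

Converting the exact result to a float only at the end, float(acc), is
the analogue of slapping N on the whole thing: the expensive exact
work has already been paid for.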

------------------
now, back to the core loop.

The RaySphere function contains code that does symbolic computation by
calling Im, which gives the imaginary part of a complex number!! And
if there is one, it returns the symbol Infinity! The possible result
of Infinity is significant because it is used in Intersect to do a
numerical comparison in an If statement. So, here in these deep loops,
Mathematica's symbolic computation is used for numerical purposes!

So, first optimization at the superficial code form level is to get
rid of this symbolic computation.

Instead of checking whether his “disc = Sqrt[b^2 - v.v + r^2]” has an
imaginary part, one simply checks whether the argument to Sqrt is
negative.

after getting rid of the symbolic computation, i made the RaySphere
function a compiled function using Compile.

I stopped my optimization at this step.
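
For readers without Mathematica, the discriminant check just described
can be sketched in plain Python. This is my illustrative rendering of
the same logic, not code from either program (ray_sphere and INFINITY
are invented names):

```python
import math

INFINITY = 10000.0  # stand-in for the myInfinity sentinel

def ray_sphere(o, d, c, r):
    """Distance along the ray o + t*d to sphere (c, r), or INFINITY on a miss.

    Rather than taking Sqrt of a possibly negative number and testing
    Im[...] symbolically, test the sign of the discriminant directly."""
    v = [ci - oi for ci, oi in zip(c, o)]        # origin-to-center vector
    b = sum(di * vi for di, vi in zip(d, v))     # projection of v onto d
    disc2 = b * b - sum(vi * vi for vi in v) + r * r
    if disc2 < 0.0:                              # ray misses the sphere
        return INFINITY
    disc = math.sqrt(disc2)
    t1 = b - disc                                # near intersection
    if t1 > 0.0:
        return t1
    t2 = b + disc                                # far intersection
    return INFINITY if t2 <= 0.0 else t2

# A ray along +z from the origin hits a unit sphere at (0,0,4) at t = 3.
print(ray_sphere((0., 0., 0.), (0., 0., 1.), (0., 0., 4.), 1.))  # 3.0
```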

The above are some _fundamental_ things any dummy who claims to code
Mathematica for speed should know. Jon has written a time series
Mathematica package that he's selling commercially. So, either he got
very sloppy with this Mathematica code, or he intentionally made it
look bad, or his Mathematica skill is truly beginner level. Yet he
dares to talk bullshit in this thread.

Besides the above basic things, there are several aspects in which his
code can be improved for speed. For example, he used pattern matching
in core loops,
e.g. Intersect[o_, d_][{lambda_, n_}, Bound[c_, r_, s_]]

any Mathematica expert knows that this is something you don't want to
do if it is used in a core loop. Instead of pattern matching, one can
change the form to Function and it'll speed up.

Also, he used “Block”, which is designed for local variables and has
dynamic scope. However, the local vars used here are local constants.
Proper code would use “With” instead. (in lisp, these are the various
let, let* forms. Lispers here can imagine how lousy the code is
now.)

Here's the improved code. The timing of this code is about 0.2 of the
original. Also, the optimization is purely based on code doodling.
That is, i do not know what his code is doing, and i have no
experience writing a ray tracer. All i did was eyeball his code flow
and improve the form.

norm=Function[#/Sqrt@(Plus@@(#^2))];
delta=Sqrt[$MachineEpsilon];
myInfinity=10000.;

Clear[RaySphere];
RaySphere = Compile[{o1, o2, o3, d1, d2, d3, c1, c2, c3, r},
Block[{v = {c1 - o1, c2 - o2, c3 - o3},
b = d1*(c1 - o1) + d2*(c2 - o2) + d3*(c3 - o3),
discriminant = -(c1 - o1)^2 - (c2 - o2)^2 +
(d1*(c1 - o1) + d2*(c2 - o2) + d3*(c3 - o3))^2 -
(c3 - o3)^2 + r^2, disc, t1, t2},
If[discriminant < 0., myInfinity,
disc = Sqrt[discriminant]; If[(t1 = b - disc) > 0.,
t1, If[(t2 = b + disc) <= 0., myInfinity, t2]]]]];

Remove[Intersect];
Intersect[{o1_,o2_,o3_},{d1_,d2_,d3_}][{lambda_,n_},
    Sphere[{c1_,c2_,c3_},r_]]:=
  Block[{lambda2=RaySphere[o1,o2,o3,d1,d2,d3,c1,c2,c3,r]},
    If[lambda2≥lambda,{lambda,n},
      {lambda2,norm[{o1,o2,o3}+lambda2*{d1,d2,d3}-{c1,c2,c3}]}]]

Intersect[{o1_,o2_,o3_},{d1_,d2_,d3_}][{lambda_,n_},
    Bound[{c1_,c2_,c3_},r_,s_]]:=
  Block[{lambda2=RaySphere[o1,o2,o3,d1,d2,d3,c1,c2,c3,r]},
    If[lambda2≥lambda,{lambda,n},
      Fold[Intersect[{o1,o2,o3},{d1,d2,d3}],{lambda,n},s]]]

Clear[neglight,nohit]
neglight=N@norm[{1,3,-2}];
nohit={myInfinity,{0.,0.,0.}};

Clear[RayTrace];
RayTrace[o_,d_,scene_]:=
  Block[{lambda,n,g,p},
    {lambda,n}=Intersect[o,d][nohit,scene];
    If[lambda==myInfinity,0,
      g=n.neglight;
      If[g≤0,0,
        {lambda,n}=Intersect[o+lambda d+delta n,neglight][nohit,scene];
        If[lambda<myInfinity,0,g]]]]

Clear[Create];
Create[level_,c_,r_]:=
  Block[{obj=Sphere[c,r]},
    If[level==1,obj,
      Block[{a=3*r/Sqrt[12],Aux},
        Aux[x1_,z1_]:=Create[level-1,c+{x1,a,z1},0.5 r];
        Bound[c,3 r,{obj,Aux[-a,-a],Aux[a,-a],Aux[-a,a],Aux[a,a]}]]]]

Main[level_,n_,ss_]:=
  With[{scene=Create[level,{0.,-1.,4.},1.]},
    Table[
      Sum[RayTrace[{0,0,0},
          N@norm[{(x+s/ss/ss)/n-1/2,(y+Mod[s,ss]/ss)/n-1/2,1}],scene],
        {s,0,ss^2-1}]/ss^2,
      {y,0,n-1},{x,0,n-1}]]

Timing[Export["image.pgm",Graphics@Raster@Main[2,100,4.]]]


Note to those who have Mathematica:
Mathematica 6 has Normalize, but that's not in Mathematica 4, so i
cooked up my own above.
Also, Mathematica 6 has AbsoluteTiming, which is intended to be
equivalent to measuring with a stopwatch. Mathematica 4 has
only Timing, which measures CPU time. My speed improvement is based on
Timing. But the same factor will show when using Mathematica 6 too.

I'm pretty sure a further speed up of 0.5 factor over the above
timing is possible, within 2 more hours of coding.

Jon wrote:
«The Mathematica code is 700,000x slower so a 50% improvement will be
uninteresting. Can you make my Mathematica code five orders of
magnitude faster or not?»

If anyone pay me $300, i can try to make it whatever the level of F#
or OCaml's speed is as cited in Jon's website. (
http://www.ffconsultancy.com/languages/ray_tracer/index.html ).

Please write out here, or write to me, exactly what speed is required,
in precise terms. If i agree to do it, spec satisfaction is guaranteed
or your money back.

PS Thanks Thomas M Hermann. It was fun.

Xah
http://xahlee.org/

Xah Lee

unread,
Dec 5, 2008, 10:51:20 AM12/5/08
to
On Dec 4, 6:09 pm, jason-s...@creativetrax.com wrote:
> For the interested, with MMA 6, on a Pentium 4 3.8Ghz:
>
> The code that Jon posted:
>
> Timing[Export["image-jon.pgm", Graphics@Raster@Main[2, 100, 4]]]
> {80.565, "image-jon.pgm"}
>
> The code that Xah posted:
>
> Timing[Export["image-xah.pgm", Graphics@Raster@Main[2, 100, 4.]]]
> {42.3186, "image-xah.pgm"}
>
> So Xah's code is about twice as fast as Jon's code, on my computer.
>
> The resulting files were identical (and both looked like pure white
> images; I thought they'd be interesting!).

The result is not pure white images. They are ray traced spheres
stacked in some recursive way. Here's the output in both my and jon's
version: http://xahlee.org/xx/image.pgm

also, note that Mathematica 6 has the function Normalize builtin,
which is used deep in the core of Jon's code. Normalize is not in
Mathematica 4, so i had to code it myself, in this line: “norm=Function
[#/Sqrt@(Plus@@(#^2))];”. This possibly slows down my result a lot.
You might want to replace any call of “norm” in my program with the
builtin Normalize.

Also, each version of Mathematica has more optimizations. So, that
might explain why on v4 the speed factor is ~0.2 on my machine while
in v6 you see ~0.5.

My machine is OS X 10.4.x, PPC G5 1.9 GHz.

-------------------------

let me take the opportunity to explain some high powered construct of
Mathematica.

Let's say for example, we want to write a function that takes a vector
(in the linear algebra sense), and returns a vector in the same
direction but with length 1. In linear algebra terminology, the new
vector is called the “normalized” vector of the original.

For those of you who don't know linear algebra but know coding, this
means we want a function whose input is a list of 3 elements, say
{x,y,z}, and whose output is also a list of 3 elements, say {a,b,c},
with the condition that

a = x/Sqrt[x^2+y^2+z^2]
b = y/Sqrt[x^2+y^2+z^2]
c = z/Sqrt[x^2+y^2+z^2]
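
These formulas are easy to check numerically in any language. Here is
a minimal Python sketch of the same computation (the name normalize is
mine):

```python
import math

def normalize(v):
    # divide each component by the Euclidean length of the vector
    length = math.sqrt(sum(x * x for x in v))
    return [x / length for x in v]

print(normalize([3.0, 4.0, 0.0]))   # [0.6, 0.8, 0.0]
```

Like the Mathematica “norm” defined below, this works for a list of
any length, not just 3 elements.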

For much of the history of Mathematica, Normalize was not a builtin
function. It was introduced in v6, released sometime in 2007. See the
bottom of:
http://reference.wolfram.com/mathematica/ref/Normalize.html

Now, suppose our task is to write this function. In my code, you see
it is:

norm=Function[#/Sqrt@(Plus@@(#^2))];

let me explain how it is so succinct.

Mathematica's syntax supports what's called FullForm, which is
basically a fully nested notation like lisp's. In fact, the
Mathematica evaluator works with FullForm. The FullForm is not
something internal. A programer can type his code that way if he so
pleases.

in FullForm, the above expression is this:
Set[norm, Function[Times[Slot[1],
  Power[Sqrt[Apply[Plus, Power[Slot[1], 2]]], -1]]]]

Now, in this
norm=Function[#/Sqrt@(Plus@@(#^2))]

The “Function” is your lisper's “lambda”. The “#” is the formal
parameter. So, in the outset we set “norm” to be a pure function.

Now, note that the “#” is not just a number, but can be any argument,
including vector of the form {x,y,z}. So, we see here that math
operations are applied to list entities directly. For example, in
Mathematica, {3,4,5}/2 returns {3/2,2,5/2} and {3,4,5}^2 returns
{9,16,25}.

In a typical lang such as python, or even lisp, you would have to map
the operation over the list's elements instead.

The “Sqrt@...” is a syntax shortcut for “Sqrt[...]”, and the
“Plus@@...” is a syntax shortcut for “Apply[Plus, ...]”, which is
lisp's “apply”. So, taking the above all together, the code for
“norm” given above is _syntactically equivalent_ to this:

norm=Function[ #/Sqrt[ Apply[Plus, #^2] ]]

this means, square the vector, add them together, take the square
root, then have the original vector divide it.

The “#” is in fact a syntax shortcut for “Slot[1]”, meaning the first
formal parameter. The “=” is in fact a syntax shortcut for “Set[]”.
The “^” is a shortcut for “Power[]”, and the “/” is a shortcut for
“Power[..., -1]”. Putting all these together, you can see how the code
is syntactically equivalent to the above nested FullForm.

Note that the “norm” as defined above works for vectors of any
dimension, i.e. lists of any length.

In lisp, python, perl, etc, you'll have 10 or so lines. In C or Java,
you'll have 50 or hundreds of lines.

For more detail on syntax, see:

• The Concepts and Confusions of Prefix, Infix, Postfix and Fully
Nested Notations
http://xahlee.org/UnixResource_dir/writ/notations.html

Xah
http://xahlee.org/


Jon Harrop

unread,
Dec 7, 2008, 12:39:30 PM12/7/08
to
Xah Lee wrote:
> I didn't realize until after a hour, that if Jon simply give numerical
> arguments to Main and Create, the result timing by a factor of 0.3 of
> original. What a incredible sloppiness! and he intended this to show
> Mathematica speed with this code?
>
> The Main[] function calls Create. The create has 3 parameters: level,
> c, and r. The level is a integer for the recursive level of
> raytracing . The c is a vector for sphere center i presume. The r is
> radius of the sphere. His input has c and r as integers, and this in
> Mathematica means computation with exact arithmetics (and automatic
> kicks into infinite precision if necessary). Changing c and r to float
> immediately reduced the timing to 0.3 of original.

That is only true if you solve a completely different and vastly simpler
problem, which I see you have (see below).

> The RaySphere function contain codes that does symbolic computation by
> calling Im, which is the imaginary part of a complex number!! and if
> so, it returns the symbol Infinity! The possible result of Infinity is
> significant because it is used in Intersect to do a numerical
> comparison in a If statement. So, here in these deep loops,
> Mathematica's symbolic computation is used for numerical purposes!

Infinity is a floating point number.

> So, first optimization at the superficial code form level is to get
> rid of this symbolic computation.

That does not speed up the original computation.

> Instead of checking whethere his “disc = Sqrt[b^2 - v.v + r^2]” has
> imaginary part, one simply check whether the argument to sqrt is
> negative.

That does not speed up the original computation.

> after getting rid of the symbolic computation, i made the RaySphere
> function to be a Compiled function.

That should improve performance but the Mathematica code remains well
over five orders of magnitude slower than OCaml, Haskell, Scheme, C,
C++, Fortran, Java and even Lisp!

> Besides the above basic things, there are several aspects that his
> code can improve in speed. For example, he used pattern matching to do
> core loops.
> e.g. Intersect[o_, d_][{lambda_, n_}, Bound[c_, r_, s_]]
>
> any Mathematica expert knows that this is something you don't want to
> do if it is used in a core loop. Instead of pattern matching, one can
> change the form to Function and it'll speed up.

Your code does not implement this change.

> Also, he used “Block”, which is designed for local variables and the
> scope is dynamic scope. However the local vars used in this are local
> constants. A proper code would use “With” instead. (in lisp, this is
> various let, let*. Lispers here can imagine how lousy the code is
> now.)

Earlier, you said that "Module" should be used. Now you say "With". Which is
it and why?

Your code does not implement this change either.

> Here's a improved code. The timing of this code is about 0.2 of the
> original.

> ...


> Timing[Export["image.pgm",Graphics@Raster@Main[2,100,4.]]]

You have only observed a speedup because you have drastically simplified the
scene being rendered. Specifically, the scene I gave contained over 80,000
spheres but you are benchmarking with only 5 spheres and half of the image
is blank!

Using nine levels of spheres as I requested originally, your version is not
measurably faster at all.

Perhaps you should give a refund?

Xah Lee

unread,
Dec 7, 2008, 5:53:49 PM12/7/08
to

For those interested in this Mathematica problem, i've now cleaned up
the essay with additional comments here:

• A Mathematica Optimization Problem
http://xahlee.org/UnixResource_dir/writ/Mathematica_optimization.html

The result and speed up of my code can be verified by anyone who has
Mathematica.

Here's some additional notes i added to the above that is not
previously posted.

-------------------------

Advice For Mathematica Optimization

Here's some advice for mathematica optimization, roughly from most
important to less important:

    * Any experienced programer knows that optimization at the
algorithm level is far more important than variation at the level of
code construction. So, make sure the algorithm used is good, as
opposed to doodling with your code forms. If you can optimize your
algorithm, the speed up may be an order of magnitude. (for example,
the various sorting algorithms↗ illustrate this.)

    * If you are doing numerical computation, always make sure that
your input and every intermediate step is using machine precision.
This you do by writing the numbers in your input in decimal form
(e.g. use “1.”, “N[Pi]” instead of “1”, “Pi”). Otherwise Mathematica
may use exact arithmetic.

    * For numerical computation, do not simply slap “N[]” onto your
code, because the intermediate computation may still be done using
exact arithmetic or symbolic computation.

* Make sure your core loop, where your calculation is repeated and
takes most of the time spent, is compiled, by using Compile.

* When optimizing speed, try to avoid pattern matching. If your
function is “f[x_]:= ...”, try to change it to the form of “f=Function
[x,...]” instead.

* Do not use complicated patterns if not necessary. For example,
use “f[x_,y_]” instead of “f[x_][y_]”.

------------------------------

...

Besides the above basic things, there are several aspects in which his
code can be improved for speed. For example, he used rather
complicated pattern matching to do the intensive numerical computation
part. Namely:

Intersect[o_, d_][{lambda_, n_}, Bound[c_, r_, s_]]

Intersect[o_, d_][{lambda_, n_}, Sphere[c_, r_]]

Note that the parameters of Intersect defined above are in a
nested form. The code would be much faster if you just changed the
forms to:

Intersect[o_, d_, {lambda_, n_}, Bound[c_, r_, s_]]
Intersect[o_, d_, {lambda_, n_}, Sphere[c_, r_]]

or even just this:

Intersect[o_, d_, lambda_, n_, c_, r_, s_]
Intersect[o_, d_, lambda_, n_, c_, r_]

Also, note that Intersect is recursive. Namely, Intersect calls
itself. Which form is invoked depends on the pattern matching of the
parameters. Not only that, but inside one of the Intersect forms it
uses Fold to nest itself. So, there are 2 recursive calls going on in
Intersect. Reducing this recursion to a simple one would speed up the
code, possibly by an order of magnitude.

Further, if Intersect is made to take a flat sequence of arguments, as
in “Intersect[o_, d_, lambda_, n_, c_, r_, s_]”, then pattern matching
can be avoided by making it a pure function with “Function”. And when
it is a “Function”, then Intersect, or part of it, may be compiled with
Compile. When the code is compiled, the speed should be an order of
magnitude faster.
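
The flattening advice can be illustrated outside Mathematica too. The following Python sketch (hypothetical names; a simplified ray-sphere intersection, not the thread's actual code) shows a nested "curried" form next to the flat form the article recommends. Both compute the same thing, but the curried one builds an intermediate closure on every call:

```python
import math

def intersect_curried(o, d):
    # Nested ("curried") form: each call builds an intermediate closure,
    # analogous to the nested head Intersect[o_, d_][{lambda_, n_}, ...].
    # d is assumed to be a unit direction vector.
    def inner(lam, sphere):
        c, r = sphere
        v = [ci - oi for ci, oi in zip(c, o)]
        b = sum(vi * di for vi, di in zip(v, d))
        disc = b * b - sum(vi * vi for vi in v) + r * r
        if disc < 0:
            return lam          # ray misses the sphere
        t = b - math.sqrt(disc) # nearest intersection distance
        return t if 0 < t < lam else lam
    return inner

def intersect_flat(o, d, lam, sphere):
    # Flat form: one call, one argument list, no intermediate closure.
    c, r = sphere
    v = [ci - oi for ci, oi in zip(c, o)]
    b = sum(vi * di for vi, di in zip(v, d))
    disc = b * b - sum(vi * vi for vi in v) + r * r
    if disc < 0:
        return lam
    t = b - math.sqrt(disc)
    return t if 0 < t < lam else lam
```

In Mathematica the win is bigger than in Python, because the flat form also replaces nested-head pattern matching with a single dispatch.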

-----------------------------

Someone keeps claiming that Mathematica code is some “5 orders of
magnitude slower”. It is funny how the order of magnitude is
quantified. I'm not sure there's a standard interpretation other than
hyperbole.

There's a famous quote by Alan Perlis ( http://en.wikipedia.org/wiki/Alan_Perlis
) that goes:
“A Lisp programmer knows the value of everything, but the cost of
nothing.”

This quote captures the nature of Lisp in comparison to most other
langs at the time the quote was written. Lisp is a functional lang, and
in functional langs the concept of values is critical, because any
Lisp program is either a function definition or an expression. Functions
and expressions act on values and return values. The values, along with
the definitions, determine the program's behavior. “The cost of nothing”
captures the sense that in high-level langs, esp dynamic langs like
Lisp, it's easy to do something, but it is more difficult to know the
algorithmic behavior of the constructs. This is in contrast to langs like
C, Pascal, or a modern lang like Java, where almost anything you write
is “fast”, simply forced by the low-level nature of the lang.
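
A concrete instance of "the cost of nothing" (an editor's illustration, in Python rather than Lisp): two routines that look equally cheap in the source can have very different algorithmic behavior. Repeated string concatenation can degrade to quadratic copying, while join stays linear:

```python
def concat_naive(words):
    # Each += may copy the entire accumulated string, so in the worst
    # case this is O(n^2) in the total length; nothing in the source
    # hints at that cost.
    s = ""
    for w in words:
        s += w
    return s

def concat_join(words):
    # str.join allocates the result once: O(n) in the total length.
    return "".join(words)
```

Both are equally easy to write; only one is cheap, which is exactly the trap high-level languages set.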

In a similar way, Mathematica is far higher level than any
existing lang, including other so-called computer algebra systems. A
simple one-liner Mathematica construct easily equates to ten or a hundred
lines of Lisp, Perl, or Python, and if you count its hundreds of
mathematical functions such as Solve, Derivative, and Integrate, each line
of code is equivalent to a few thousand lines in other langs.

However, there is a catch that applies to any higher-level lang:
it is extremely easy to create a program that is very inefficient.

This can typically be observed in a student's or beginner's Lisp code.
The code may produce the right output, but be extremely
inefficient for lack of expertise with the language.

The phenomenon of creating inefficient code is proportional
to the high-levelness or power of the lang. In general, the higher
level the lang, the harder it is actually to produce code
that is as efficient as in a lower-level lang. For example, the level or
power of langs can be roughly ordered as this:

assembly langs
C, pascal
C++, java, c#
unix shells
perl, python, ruby, php
lisp
Mathematica

The lower level the lang, the more of the programmer's time it consumes,
but the faster the code runs. Higher-level langs may or may not be crafted
to be as efficient. For example, code written at the level of langs such
as Perl, Python, or Ruby will never run as fast as C, regardless of how
expert the Perler is. C code will never run as fast as assembly.
And if the task is crafting raytracing software, then Perl, Python,
Ruby, Lisp, and Mathematica are simply not suitable, and are not likely
to produce code as fast as C or Java.

On the other hand, many applications of higher-level langs simply
cannot be done with a lower-level lang, for various practical reasons.
For example, you can use Mathematica to solve some physics problem in a
few hours, or compute Pi to a gazillion digits in a few seconds with just
“N[Pi,10000000000000]”. Sure, you could code a solution in Lisp, Perl, or
even C, but that means a few years of man-hours. Similarly, you can do
text processing in C or Java, but Perl, Python, Ruby, PHP, Emacs Lisp, or
Mathematica can reduce the coding effort to 10% or 1% of the man-hours.

In the above, I left out functional langs that are roughly statically
typed and compiled, such as Haskell and OCaml. I do not have
experience with these langs. I suppose they maintain some of a
low-level lang's speed advantage, yet have high-level constructs. Thus, for
computationally intensive tasks such as writing a raytracer, they may
compete with C and Java in speed, yet be easier to write, with fewer lines
of code.

Personally, I've made some effort to study Haskell but never got through
it. In my experience, I find langs that are (roughly speaking) strongly
typed difficult to learn and use. (I have reading knowledge of C and
working knowledge of Java, but am never good with Java. The verbosity
in Java turns me off thoroughly.)

-----------------

As to how fast Mathematica can be on the raytracing toy code shown in
this thread: I've given sufficient demonstration that it can be sped
up significantly. Even though Mathematica is not suitable for this task,
I'm pretty sure I can bring the code's speed to the same level
as OCaml
(as opposed to someone's claim that it must be some 700000 times
slower, or some “5 orders of magnitude slower”). However, to do so
will take me half a day or a day of coding. Fly $300 to my PayPal
account, then we'll talk. Money back guaranteed, as I said before.

Xah
http://xahlee.org/


Jon Harrop

unread,
Dec 8, 2008, 7:07:10 AM12/8/08
to
s...@netherlands.com wrote:
> Well, its past 'tonight' and 6 hours to go till past 'tomorrow'.
> Where the hell is it Zah Zah?

Note that this program takes several days to compute in Mathematica (even
though it takes under four seconds in other languages) so don't expect to
see a genuinely optimized version any time soon... ;-)

Jon Harrop

unread,
Dec 8, 2008, 7:08:48 AM12/8/08
to
Xah Lee wrote:
> The result and speed up of my code can be verified by anyone who has
> Mathematica.

You changed the scene that is being rendered => your speedup is bogus!

Trace the scene I originally gave and you will see that your program is no
faster than mine was.

Jon Harrop

unread,
Dec 8, 2008, 8:10:32 AM12/8/08
to
Xah Lee wrote:
> For those interested in this Mathematica problem, i've now cleaned up
> the essay with additional comments here:
>
> • A Mathematica Optimization Problem
> http://xahlee.org/UnixResource_dir/writ/Mathematica_optimization.html

In that article you say:

> Further, if Intersect is made to take a flat sequence of argument as
> in “Intersect[o_, d_, lambda_, n_, c_, r_, s_]”, then pattern matching can
> be avoided by making it into a pure function “Function”. And when it is
> a “Function”, then Intersect or part of it may be compiled with Compile.
> When the code is compiled, the speed should be a order of magnitude
> faster.

That is incorrect. Mathematica's Compile function cannot handle recursive
functions like Intersect. For example:

In[1]:= Compile[{n_, _Integer}, If[# == 0, 1, #0[[# n - 1]] #1] &[n]]

During evaluation of In[1]:= Compile::fun: Compilation of
(If[#1==0,1,#0[[#1 n-1]] #1]&)[Compile`FunctionVariable$435] cannot
proceed. It is not possible to compile pure functions with arguments
that represent the function itself. >>

Stanisław Halik

unread,
Dec 8, 2008, 10:09:47 AM12/8/08
to
In comp.lang.lisp Xah Lee <xah...@gmail.com> wrote:

> The phenomenon of creating inefficient code is proportional
> to the high-levelness or power of the lang. In general, the higher
> level the lang, the harder it is actually to produce code
> that is as efficient as in a lower-level lang. For example, the level or
> power of langs can be roughly ordered as this:

> assembly langs
> C, pascal
> C++, java, c#
> unix shells
> perl, python, ruby, php
> lisp
> Mathematica

This is untrue. Common Lisp native-code compilers are orders of
magnitude faster than those of scripting languages such as Perl or Ruby.

In particular, creating an efficient Ruby implementation might prove
challenging - the language defines lexical bindings as modifiable at
runtime, arithmetic operations as requiring a dynamic method dispatch
etc.

FUT ignored.

--
The great peril of our existence lies in the fact that our diet consists
entirely of souls. -- Inuit saying

Xah Lee

unread,
Dec 8, 2008, 12:34:43 PM12/8/08
to
On Dec 8, 5:10 am, Jon Harrop <j...@ffconsultancy.com> wrote:
> Xah Lee wrote:
> > For those interested in this Mathematica problem, i've now cleaned up
> > the essay with additional comments here:
>
> > • A Mathematica Optimization Problem
> > http://xahlee.org/UnixResource_dir/writ/Mathematica_optimization.html
>
> In that article you say:
>
> > Further, if Intersect is made to take a flat sequence of argument as
> > in “Intersect[o_, d_, lambda_, n_, c_, r_, s_]”, then pattern matching can
> > be avoided by making it into a pure function “Function”. And when it is
> > a “Function”, then Intersect or part of it may be compiled with Compile.
> > When the code is compiled, the speed should be a order of magnitude
> > faster.

> That is incorrect. Mathematica's Compile function cannot handle recursive
> functions like Intersect.

I didn't claim it can. You can't expect a fast or good program
if you code Java-style in a functional lang.

Similarly, if you want code to run fast in Mathematica, you don't just
slap your OCaml code into Mathematica syntax and expect it to run
fast.

If you are a Mathematica expert, you could make it recurse yet have
speed comparable to other langs: first, by changing your function's form
to avoid pattern matching, and by rewriting your bad recursion. That is
what I claimed in the above paragraph. Read it again to see.

> For example:
> In[1]:= Compile[{n_, _Integer}, If[# == 0, 1, #0[[# n - 1]] #1] &[n]]
>
> During evaluation of In[1]:= Compile::fun: Compilation of
> (If[#1==0,1,#0[[#1 n-1]] #1]&)[Compile`FunctionVariable$435] cannot
> proceed. It is not possible to compile pure functions with arguments
> that represent the function itself. >>

Mathematica's Compile function is intended to speed up numerical
computation. Expecting Compile to handle recursion, in the context of
Mathematica's programming features, is not something possible even in a
theoretical sense.

Scheme implementations can compile recursive code, but Lisp is a
lower-level lang than Mathematica, where perhaps the majority of
Mathematica's builtin functions each equate to 10 or more lines of
Lisp, and any of its hundreds of math functions equates to entire
libraries in other langs. It may seem reasonable, but it is silly, to
expect Mathematica's Compile function to compile arbitrary Mathematica code.

Perhaps in the future version of Mathematica, its Compile function can
handle basic recursive forms.

Also, in this discussion, thanks to the $20 Thomas M Hermann offered
me for my challenge to you, I have taken the time to show working
code that demonstrates many problems in your code. Unless you think my
code and replies to you are totally without merit or fairness, you
should acknowledge it, in whole or in the parts you agree with, in an
honest and wholehearted way, instead of pushing on with petty verbal
fights.

Xah
http://xahlee.org/


Xah Lee

unread,
Dec 8, 2008, 4:09:22 PM12/8/08
to
Xah Lee wrote:

«...

The phenomenon of creating inefficient code is proportional
to the high-levelness or power of the lang. In general, the higher
level the lang, the harder it is actually to produce code
that is as efficient as in a lower-level lang. For example, the level or
power of langs can be roughly ordered as this:
assembly langs
C, pascal
C++, java, c#
unix shells
perl, python, ruby, php
lisp
Mathematica

...
»

Moron Stanisław Halik wrote:

> This is untrue. Common Lisp native-code compilers are orders of
> magnitude faster than those of scripting languages such as Perl or Ruby.

Learn to read articles and discuss them in whole, as opposed to nitpicking
particulars so that your favorite lang looks good.

Xah
http://xahlee.org/


Jon Harrop

unread,
Dec 8, 2008, 4:31:29 PM12/8/08
to
Xah Lee wrote:
> Also, in this discussion, thanks to Thomas M Hermann's $20 offered to
> me for my challenge to you, that i have taken the time to show working
> code that demonstrate many problems in your code.

You failed the challenge that you were given. Specifically, your code is not
measurably faster on the problem that I set. Moreover, you continued to
write as if you had not failed and, worse, went on to give even more awful
advice as if your credibility had not just been destroyed.

> If you are a Mathematica expert, you could make it recurse yet have
> the speed as other langs.

No, you cannot. That is precisely why you just failed this challenge.

You should accept the fact that Mathematica currently has these
insurmountable limitations.

Xah Lee

unread,
Dec 8, 2008, 5:32:59 PM12/8/08
to
2008-12-08

Xah Lee wrote:
> > Also, in this discussion, thanks to Thomas M Hermann's $20 offered to
> > me for my challenge to you, that i have taken the time to show working
> > code that demonstrate many problems in your code.


A moron wrote:
> You failed the challenge that you were given.

you didn't give me a challenge. I gave you one. I asked for a $5
sincerity wage of mutual payment, or money back guaranteed, so that we
could show real code instead of a verbal fight. You didn't take it and
did nothing but continue the petty quarrel over words. Thomas was nice
enough to pay me, which resulted in my code that is demonstrably faster
than yours. (Verified by a post from “jason-sage @@@ creativetrax.com”,
quote: “So Xah's code is about twice as fast as Jon's code, on my
computer.”, message can be seen at “ http://www.gossamer-threads.com/lists/python/python/698196?do=post_view_threaded#698196
” ) You refuse to acknowledge it, and continue babbling, insisting that
my code would have to be some hundred times faster to make a valid argument.

As I said: now pay me $300, and I will then make your Mathematica code
run at the same level of speed as your OCaml. If it does not, money
back guaranteed. Here are the more precise terms I ask:

Show me your OCaml code that will compile on my machine (PPC Mac, OS X
10.4.x). I'll make your Mathematica code run at the same speed level as
your OCaml code. (You claimed Mathematica is roughly 700 thousand
times slower than your OCaml code. I claim I can make it no more than
10 times slower than the given OCaml code.)

So, pay me $300 as a consulting fee. If the result does not comply with
the above spec, money back guaranteed.

> You should accept the fact that Mathematica currently has these
> insurmountable limitations.

insurmountable ur mom.

Xah
http://xahlee.org/

George Neuner

unread,
Dec 8, 2008, 5:40:58 PM12/8/08
to
On Sun, 7 Dec 2008 14:53:49 -0800 (PST), Xah Lee <xah...@gmail.com>
wrote:

>The phenomenon of creating inefficient code is proportional
>to the high-levelness or power of the lang. In general, the higher
>level the lang, the harder it is actually to produce code
>that is as efficient as in a lower-level lang.

This depends on whether someone has taken the time to create a high
quality optimizing compiler.


>For example, the level or power of langs can be roughly ordered as
>this:
>
>assembly langs
>C, pascal
>C++, java, c#
>unix shells
>perl, python, ruby, php
>lisp
>Mathematica

According to what "power" estimation? Assembly, C/C++, C#, Pascal,
Java, Python, Ruby and Lisp are all Turing Complete. I don't know
offhand whether Mathematica is also TC, but if it is then it is at
most equally powerful.

Grammatical complexity is not exactly orthogonal to expressive power,
but it is mostly so. Lisp's SEXPRs are an existence proof that a
Turing powerful language can have a very simple grammar. And while a
2D symbolic equation editor may be easier to use than spelling out the
elements of an equation in a linear textual form, it is not in any
real sense "more powerful".


>the lower level the lang, the longer it consumes programer's time, but
>faster the code runs. Higher level langs may or may not be crafted to
>be as efficient. For example, code written in the level of langs such
>as perl, python, ruby, will never run as fast as C, regardless what
>expert a perler is.

There is no language level reason that Perl could not run as fast as C
... it's just that no one has cared to implement it.


>C code will never run as fast as assembler langs.

For a large function with many variables and/or subcalls, a good C
compiler will almost always beat an assembler programmer by sheer
brute force - no matter how good the programmer is. I suspect the
same is true for most HLLs that have good optimizing compilers.

I've spent years doing hard real time programming and I am an expert
in C and a number of assembly languages. It is (and has been for a
long time) impractical to try to beat a good C compiler for a popular
chip by writing from scratch in assembly. It's not just that it takes
too long ... it's that most chips are simply too complex for a
programmer to keep all the instruction interaction details straight in
his/her head. Obviously results vary by programmer, but once a
function grows beyond 100 or so instructions, the compiler starts to
win consistently. By the time you've got 500 instructions (just a
medium sized C function) it's virtually impossible to beat the
compiler.

In functional languages where individual functions tend to be much
smaller, you'll still find very complex functions in the disassembly
that arose from composition, aggressive inlining, generic
specialization, inlined pattern matching, etc. Here an assembly
programmer can quite often match the compiler for a particular
function (because it is short), but overall will fail to match the
compiler in composition.

When maximum speed is necessary it's almost always best to start with
an HLL and then hand optimize your optimizing compiler's output.
Humans are quite often able to find additional optimizations in
assembly code that they could not have written as well overall in the
first place.

George

Xah Lee

unread,
Dec 8, 2008, 6:14:18 PM12/8/08
to
Dear George Neuner,

Xah Lee wrote:
> >The phenomenon of creating inefficient code is proportional
> >to the high-levelness or power of the lang. In general, the higher
> >level the lang, the harder it is actually to produce code
> >that is as efficient as in a lower-level lang.

George Neuner wrote:
> This depends on whether someone has taken the time to create a high
> quality optimizing compiler.

try to read the sentence. I quote:
«The phenomenon of creating inefficient code is proportional
to the high-levelness or power of the lang. In general, the higher
level the lang, the harder it is actually to produce code
that is as efficient as in a lower-level lang.»

Xah Lee wrote:
> >For example,
> >the level or power of langs can be roughly ordered as
> >this:
>
> >assembly langs
> >C, pascal
> >C++, java, c#
> >unix shells
> >perl, python, ruby, php
> >lisp
> >Mathematica

George wrote:
> According to what "power" estimation? Assembly, C/C++, C#, Pascal,
> Java, Python, Ruby and Lisp are all Turing Complete. I don't know
> offhand whether Mathematica is also TC, but if it is then it is at
> most equally powerful.

It's amazing that every tech geeker (aka idiot) wants to quote
“Turing Complete” at every chance. Even simple cellular automata,
such as Conway's Game of Life or Rule 110, are Turing complete.

http://en.wikipedia.org/wiki/Conway's_Game_of_Life
http://en.wikipedia.org/wiki/Rule_110

In fact, according to Stephen Wolfram's controversial thesis by the
name of “Principle of Computational Equivalence”, every goddamn thing
in nature is just about Turing complete. (Just imagine: when you take
a piss, the stream of yellow fluid is actually doing Turing-complete
computations!)

For a change, it'd be a far more interesting and effective knowledge
showoff to cite langs that are not fucking Turing complete.

The rest of your message went on stupidly about the Turing-complete
view of a language's power, mixed with Lisp fanaticism and personal
gripes about the merits and applicability of assembly vs higher-level
langs. It's fine to go on with your gripes, but be careful about using
me as a stepping stone.

Xah
http://xahlee.org/


Jon Harrop

unread,
Dec 8, 2008, 7:56:28 PM12/8/08
to
Xah Lee wrote:
> A moron, wrote:
> > You failed the challenge that you were given.
>
> you didn't give me a challenge.

Thomas gave you the challenge:

"What I want in return is you to execute and time Dr. Harrop's original
code, posting the results to this thread... By Dr. Harrop's original code,
I specifically mean the code he posted to this thread. I've pasted it below
for clarity.".

Thomas even quoted my code verbatim to make his requirements totally
unambiguous. Note the parameters [9, 512, 4] in the last line that he and I
both gave:

AbsoluteTiming[Export["image.pgm", Graphics@Raster@Main[9, 512, 4]]]

You have not posted timings of that, let alone optimized it. So you failed.

> I gave you. I asked for $5 sincerity
> wage of mutal payment or money back guarantee, so that we can show
> real code instead of verbal fight. You didn't take it and do nothing
> but continue petty quarrel on words.

Then where did you post timings of that exact code as Thomas requested?

>
http://www.gossamer-threads.com/lists/python/python/698196?do=post_view_threaded#698196
> ” ) You refuse to acknowledge it, and continue babbling, emphasizing that
> my code should be some hundred times faster make valid argument.

That is not my code! Look at the last line where you define the scene:

Timing[Export["image.pgm", Graphics@Raster@Main[2, 100, 4.]]]

Those are not the parameters I gave you. Your program is running faster
because you changed the scene from over 80,000 spheres to only 5 spheres.
Look at your output image: it is completely wrong!

> As i said, now pay me $300, i will then make your Mathematica code in
> the same level of speed as your OCmal. If it does not, money back
> guaranteed.

Your money back guarantee is worthless if you cannot even tell when you have
failed.

> Show me your OCmal code that will compile on my machine (PPC Mac, OSX
> 10.4.x).

The code is still on our site:

http://www.ffconsultancy.com/languages/ray_tracer/

OCaml, C++ and Scheme all take ~4s to ray trace the same scene.

> I'll make your Mathematica code in the same speed level as
> your OCmal code. (you claimed Mathematica is roughly 700 thousand
> times slower to your OCmal code. I claim, i can make it, no more than
> 10 times slower than the given OCmal code.)

You have not even made it 10% faster, let alone 70,000x faster. Either
provide the goods or swallow the fact that you have been wrong all along.

Wesley MacIntosh

unread,
Dec 8, 2008, 9:42:58 PM12/8/08
to
A flamer wrote:
> A moron, wrote:
[snip]

> my machine (PPC Mac, OSX 10.4.x).

Well, that explains a great deal.

Actually, I suspect all these newsgroups are being trolled.

Kaz Kylheku

unread,
Dec 8, 2008, 11:36:28 PM12/8/08
to
On 2008-12-08, Xah Lee <xah...@gmail.com> wrote:
> So, pay me $300 as consulting fee. If the result does not comply to
> the above spec, money back guaranteed.

*LOL*

Did you just offer someone the exciting wager of ``your money back or nothing''?

No matter what probability we assign to the outcomes, the /upper bound/
on the expected income from the bet is at most zero dollars. Now that's not so
bad. Casino games and lotteries have that property too; the net gain is
negative.

But your game has no variability to suck someone in; the /maximum/ income from
any trial is that you break even, which is considered winning.

If you ever decide to open a casino, I suggest you stop playing with
Mathematica for a while, and spend a little more time with Statistica,
Probabilica, and especially Street-Smartica.

:)

Madhu

unread,
Dec 9, 2008, 5:21:47 AM12/9/08
to

* Kaz Kylheku <200812241...@gmail.com> :
Wrote on Tue, 9 Dec 2008 04:36:28 +0000 (UTC):

|> So, pay me $300 as consulting fee. If the result does not comply to
|> the above spec, money back guaranteed.
|

| Did you just offer someone the exciting wager of ``your money back or
| nothing?

No, I don't think he was offering a bet --- this sounded more like he
was charging for a service. The costs would cover the time he spent in
providing the service; except if the service was not satisfactory in
which case he'd refund the amount. (Actually the cost seems to be
calculated to dissuade any customer from engaging him in the first place,
so the conclusion from your analysis would still hold.)
--
Madhu

Stanisław Halik

unread,
Dec 9, 2008, 8:37:14 AM12/9/08
to
thus spoke Xah Lee <xah...@gmail.com>:

> The phenomenon of creating inefficient code is proportional
> to the high-levelness or power of the lang. In general, the higher
> level the lang, the harder it is actually to produce code
> that is as efficient as in a lower-level lang. For example, the level or
> power of langs can be roughly ordered as this:

Yes, that's true, but your hierarchy sucks. Unix shells more powerful
than C? They're macro languages, ferchristsakes. You should also explain
what the high-level features of Mathematica are that inhibit optimization.
A library of math functions doesn't make the language more powerful. Take
Java, for instance: it has a large standard library alright.

> assembly langs
> C, pascal
> C++, java, c#
> unix shells
> perl, python, ruby, php
> lisp
> Mathematica

Jon Harrop

unread,
Dec 9, 2008, 12:41:12 PM12/9/08
to
Stanisław Halik wrote:
> thus spoke Xah Lee <xah...@gmail.com>:
>> The phenomenon of creating inefficient code is proportional
>> to the high-levelness or power of the lang. In general, the higher
>> level the lang, the harder it is actually to produce code
>> that is as efficient as in a lower-level lang. For example, the level or
>> power of langs can be roughly ordered as this:
>
> Yes, that's true, but your hierarchy sucks. Unix shells more powerful
> than C? They're macro languages, ferchristsakes. You should also explain
> what are the high-level features of Mathematica inhibiting optimization...

In the context of single-threaded programs there is no excuse for
Mathematica being so slow: it adds no impediments beyond those found in
Lisp. The only reason Mathematica is so slow is that its only
implementation is a naive term rewriter that makes no attempt to use native
code.

However, in the context of parallelism on multicores everything changes.
Mathematica is built entirely around one giant global rewrite table. In
other words, all variables are global in Mathematica. Consequently, the
obvious implementation of any kind of shared-state parallelism will require
synchronization around every single read or write to any variable, which
would be cripplingly slow. Their solution has been to resort to distributed
parallelism but that is hugely inefficient (e.g. see Erlang) and renders
Mathematica even less suitable for general purpose programming on
multicores. There are more sophisticated alternatives that can work around
this problem but they would require a complete rewrite of the internals and
that is not feasible for business reasons (i.e. backward compatibility).

Xah Lee

unread,
Dec 9, 2008, 6:01:11 PM12/9/08
to

On Dec 8, 4:56 pm, Jon Harrop <j...@ffconsultancy.com> wrote:
> Xah Lee wrote:
> > A moron, wrote:
> > > You failed the challenge that you were given.
>
> > you didn't give me a challenge.
>
> Thomas gave you the challenge:
>
> "What I want in return is you to execute and time Dr. Harrop's original
> code, posting the results to this thread... By Dr. Harrop's original code,
> I specifically mean the code he posted to this thread. I've pasted it below
> for clarity.".
>
> Thomas even quoted my code verbatim to make his requirements totally
> unambiguous. Note the parameters [9, 512, 4] in the last line that he and I
> both gave:
>
> AbsoluteTiming[Export["image.pgm", Graphics@Raster@Main[9, 512, 4]]]
>
> You have not posted timings of that, let alone optimized it. So you failed.

The first parameter to your Main specifies some kind of recursive
stacking of spheres in the rendered image. The second parameter is the
width and height, in pixels, of the rendered image.

I tried to run it, but my computer went to 100% CPU and, as I recall,
after 5 minutes it was still going. So I reduced your input. In the end,
with the reduced input, my code is 5 times faster (running Mathematica
v4 on OS X 10.4.x with a 1.9 GHz PPC), and on the other guy's computer,
with Mathematica 6, he says it's twice as fast.

Given your code's nature, it is reasonable to assume that with your
original input my code would still be faster than yours. You claim it
is not, or that it is perhaps just slightly faster?

It is possible you are right. I don't want to spend the energy to run
your code and my code and possibly hog my computer for hours or
perhaps days. As I said, your recursive Intersect is very badly
written Mathematica code. It might even start memory swapping.

Also, all you did was talk bullshit. Thomas is actually the one who took
my challenge to you and gave me $20 to prove my argument to YOU. His
requirement, after the payment, was actually, I quote:

«Alright, I've sent $20. The only reason I would request a refund is
if you don't do anything. As long as you improve the code as you've
described and post the results, I'll be satisfied. If the improvements
you've described don't result in better performance, that's OK.»

He hasn't posted since, nor emailed me. It is reasonable to assume he
is satisfied as far as his payment to see my code goes.

You kept on babbling. Now you say that the input is different. Fine.
How long does that input actually take on your computer? If days, I'm
sorry, I cannot run your toy code on my computer for days. If a few
hours, I can run the code overnight and, if necessary, give you
another version that will be faster with your given input, to shut you
the fuck up.

However, there's a cost to me. What do I get for doing your homework? It
is possible that if I spend the energy and time to do this, you will
again refuse to acknowledge it, or keep on complaining about something
else.

You see, newsgroups are the bedrock of bullshit. You bullshit, he
bullshits, everybody brags and bullshits, because there is no stake. I
want sincerity and responsibility backed up with, for example, PayPal
deposits. You kept on bullshitting; Thomas gave me $20, and I produced
code that reasonably demonstrated at least how unprofessional your
Mathematica code was.

Here's the deal. Pay me $20, and I'll create a version of the Mathematica
code that takes the same input as yours. Your input is Main[9, 512, 4];
as I have exposed, your use of an integer in the last position for
numerical computation is Mathematica incompetence. You didn't acknowledge
even this. I'll give a version of the Mathematica code with input
Main[9, 512, 4.] that will run faster than yours. If not, money back
guaranteed. Also, pay me $300, and I can produce a Mathematica version
no more than 10 times slower than your OCaml code; this should be a
70000-times improvement according to you. Again, money back guaranteed.

If I don't receive $20 or $300, this will be my last post to you in
this thread. You are just a bullshitter.
this thread. You are just a bullshitter.

O wait... my code with Main[9, 512, 4.] and other numerical changes
already makes your program run faster regardless of the input size.
What a motherfucking bullshitter you are. Scratch the $20. The $300
challenge still stands firm.

Xah
http://xahlee.org/

Xah Lee

unread,
Dec 9, 2008, 6:19:47 PM12/9/08
to
On Dec 8, 4:07 am, Jon Harrop <j...@ffconsultancy.com> wrote:
> s...@netherlands.com wrote:
> > Well, its past 'tonight' and 6 hours to go till past 'tomorrow'.
> > Where the hell is it Zah Zah?
>
> Note that this program takes several days to compute in Mathematica (even
> though it takes under four seconds in other languages) so don't expect to
> see a genuinely optimized version any time soon... ;-)

Note that Jon's Mathematica code is of very poor quality, as I've
given a detailed analysis here:

I'm not sure if he's intentionally making Mathematica look bad or just
sloppy. I presume it is sloppiness, since the Mathematica code is
not shown on his public website on this speed-comparison issue (as
far as I know). I suppose he initially tried this draft version, saw
that it was too slow for comparison, and probably among other reasons
didn't include it in the speed comparison. However, in this thread
about Mathematica 7, he wanted to insert his random gripe to pave the
way to post his website, books, and URL on OCaml/F#, so he took out
this piece of Mathematica code to badmouth it and bait. He ignored my
PayPal challenge, but it so happens that someone else paid me $20 to
show better code, and in the showdown Jon went defensive in a way that
just makes him look like a major idiot.

Xah
http://xahlee.org/


alex23

unread,
Dec 9, 2008, 7:09:45 PM12/9/08
to
On Dec 10, 9:19 am, Xah Lee <xah...@gmail.com> wrote:
> I'm not sure he's intentionally making Mathematica look bad or just
> sloppiness.

Actually, there's only one person here tainting Mathematica by
association, and it's not Jon.

s...@netherlands.com

unread,
Dec 9, 2008, 9:09:00 PM12/9/08
to

Ad hominem

s...@netherlands.com

unread,
Dec 9, 2008, 9:46:02 PM12/9/08
to

w_a_...@yahoo.com

unread,
Dec 10, 2008, 3:37:50 PM12/10/08
to
On Dec 5, 9:51 am, Xah Lee <xah...@gmail.com> wrote:
>
> For those of you who don't know linear algebra but knows coding, this
> means, we want a function whose input is a list of 3 elements say
> {x,y,z}, and output is also a list of 3 elements, say {a,b,c}, with
> the condition that
>
> a = x/Sqrt[x^2+y^2+z^2]
> b = y/Sqrt[x^2+y^2+z^2]
> c = z/Sqrt[x^2+y^2+z^2]

>


> In lisp, python, perl, etc, you'll have 10 or so lines. In C or Java,
> you'll have 50 or hundreds lines.

Ruby:

def norm a
s = Math.sqrt(a.map{|x|x*x}.inject{|x,y|x+y})
a.map{|x| x/s}
end

Raymond Wiker

unread,
Dec 10, 2008, 4:07:49 PM12/10/08
to
w_a_...@yahoo.com writes:

In Common Lisp:

(defun normalize (list-or-vector)
(let ((l (sqrt (reduce #'+ (map 'list (lambda (x) (* x x)) list-or-vector)))))
(map (type-of list-or-vector) (lambda (x) (/ x l)) list-or-vector)))

As a bonus, this works with lists or vectors; it also works with
complex numbers.

Since this is Common Lisp, it is also possible to extend this (naive)
implementation so that it performs as much as possible at
compile-time, possibly replacing calls with the computed result.

Stick that in Mathematica's (and Ruby's) pipe and smoke it!

Kaz Kylheku

unread,
Dec 10, 2008, 4:37:34 PM12/10/08
to
On 2008-12-05, Xah Lee <xah...@gmail.com> wrote:
> Let's say for example, we want to write a function that takes a vector
> (of linear algebra) and returns a vector in the same direction but
> with length 1. In linear algebra terminology, the new vector is called
> the “normalized” vector of the original.

>
> For those of you who don't know linear algebra but knows coding, this

If I were to guess who that would be ...

> means, we want a function whose input is a list of 3 elements say
> {x,y,z}, and output is also a list of 3 elements, say {a,b,c}, with
> the condition that
>
> a = x/Sqrt[x^2+y^2+z^2]
> b = y/Sqrt[x^2+y^2+z^2]
> c = z/Sqrt[x^2+y^2+z^2]
>
> In lisp, python, perl, etc, you'll have 10 or so lines. In C or Java,
> you'll have 50 or hundreds lines.

Really? ``50 or hundreds'' of lines in C?

#include <math.h> /* for sqrt */

void normalize(double *out, double *in)
{
double denom = sqrt(in[0] * in[0] + in[1] * in[1] + in[2] * in[2]);

out[0] = in[0]/denom;
out[1] = in[1]/denom;
out[2] = in[2]/denom;
}

Doh?

Now try writing a device driver for your wireless LAN adapter in Mathematica.

Xah Lee

unread,
Dec 10, 2008, 5:08:12 PM12/10/08
to
Xah Lee wrote:
> > Let's say for example, we want to write a function that takes a vector
> > (of linear algebra) and returns a vector in the same direction but
> > with length 1. In linear algebra terminology, the new vector is called
> > the “normalized” vector of the original.
>
> > For those of you who don't know linear algebra but knows coding, this
>
> If I were to guess who that would be ...
>
> > means, we want a function whose input is a list of 3 elements say
> > {x,y,z}, and output is also a list of 3 elements, say {a,b,c}, with
> > the condition that
>
> > a = x/Sqrt[x^2+y^2+z^2]
> > b = y/Sqrt[x^2+y^2+z^2]
> > c = z/Sqrt[x^2+y^2+z^2]
>
> > In lisp, python, perl, etc, you'll have 10 or so lines. In C or Java,
> > you'll have 50 or hundreds lines.


Kaz Kylheku wrote:
> Really? ``50 or hundreds'' of lines in C?
>
> #include <math.h> /* for sqrt */
>
> void normalize(double *out, double *in)
> {
> double denom = sqrt(in[0] * in[0] + in[1] * in[1] + in[2] * in[2]);
>
> out[0] = in[0]/denom;
> out[1] = in[1]/denom;
> out[2] = in[2]/denom;
> }
>
> Doh?

Kaz, pay attention:

Xah wrote: «Note, that the “norm” as defined above works for vectors
of any dimension, i.e. list of any length.»

The essay on the example of Mathematica expressiveness of defining
Normalize is now cleaned up and archived at:

• A Example of Mathematica's Expressiveness
http://xahlee.org/UnixResource_dir/writ/Mathematica_expressiveness.html

Xah
http://xahlee.org/


Xah Lee

unread,
Dec 10, 2008, 5:15:09 PM12/10/08
to
Xah Lee wrote:
> > For those of you who don't know linear algebra but knows coding, this
> > means, we want a function whose input is a list of 3 elements say
> > {x,y,z}, and output is also a list of 3 elements, say {a,b,c}, with
> > the condition that
> >
> > a = x/Sqrt[x^2+y^2+z^2]
> > b = y/Sqrt[x^2+y^2+z^2]
> > c = z/Sqrt[x^2+y^2+z^2]
>
> > In lisp, python, perl, etc, you'll have 10 or so lines. In C or Java,
> > you'll have 50 or hundreds lines.
> >
> > Note, that the “norm” as defined above works for vectors of any
> > dimension, i.e. list of any length.


On Dec 10, 12:37 pm, w_a_x_...@yahoo.com wrote:
> Ruby:
>
> def norm a
> s = Math.sqrt(a.map{|x|x*x}.inject{|x,y|x+y})
> a.map{|x| x/s}
> end

I don't know ruby, but i tried to run it and it does not work.

#ruby


def norm a
s = Math.sqrt(a.map{|x|x*x}.inject{|x,y|x+y})
a.map{|x| x/s}
end

v = [3,4]

p norm(v) # returns [0.6, 0.8]

The correct result for that input would be 5.

Also note, i wrote: «Note, that the “norm” as defined above works for
vectors of any dimension, i.e. list of any length.»

For detail, see:

Jon Harrop

unread,
Dec 10, 2008, 5:23:54 PM12/10/08
to
Xah Lee wrote:
> On Dec 10, 12:37 pm, w_a_x_...@yahoo.com wrote:
>> Ruby:
>>
>> def norm a
>> s = Math.sqrt(a.map{|x|x*x}.inject{|x,y|x+y})
>> a.map{|x| x/s}
>> end
>
> I don't know ruby, but i tried to run it and it does not work.
>
> #ruby
> def norm a
> s = Math.sqrt(a.map{|x|x*x}.inject{|x,y|x+y})
> a.map{|x| x/s}
> end
>
> v = [3,4]
>
> p norm(v) # returns [0.6, 0.8]

That is the correct answer.

> The correct result for that input would be 5.

No, you're confusing normalization with length.

Jon Harrop

unread,
Dec 10, 2008, 5:25:01 PM12/10/08
to
Raymond Wiker wrote:
> Stick that in Mathematica's (and Ruby's) pipe and smoke it!

Actually you can do that in Mathematica as well. Lisp is basically
Mathematica without the maths...

John W Kennedy

unread,
Dec 10, 2008, 5:47:49 PM12/10/08
to
Xah Lee wrote:
> In lisp, python, perl, etc, you'll have 10 or so lines. In C or Java,
> you'll have 50 or hundreds lines.

C:

#include <stdlib.h>
#include <math.h>

void normal(int dim, float* x, float* a) {
float sum = 0.0f;
int i;
float divisor;
for (i = 0; i < dim; ++i) sum += x[i] * x[i];
divisor = sqrt(sum);
for (i = 0; i < dim; ++i) a[i] = x[i]/divisor;
}

Java:

static float[] normal(final float[] x) {
float sum = 0.0f;
for (int i = 0; i < x.length; ++i) sum += x[i] * x[i];
final float divisor = (float) Math.sqrt(sum);
float[] a = new float[x.length];
for (int i = 0; i < x.length; ++i) a[i] = x[i]/divisor;
return a;
}


--
John W. Kennedy
"Never try to take over the international economy based on a radical
feminist agenda if you're not sure your leader isn't a transvestite."
-- David Misch: "She-Spies", "While You Were Out"

Jon Harrop

unread,
Dec 10, 2008, 6:02:08 PM12/10/08
to
Xah Lee wrote:
> Kaz Kylheku wrote:
>> Really? ``50 or hundreds'' of lines in C?
>>
>> #include <math.h> /* for sqrt */
>>
>> void normalize(double *out, double *in)
>> {
>> double denom = sqrt(in[0] * in[0] + in[1] * in[1] + in[2] *
>> in[2]);
>>
>> out[0] = in[0]/denom;
>> out[1] = in[1]/denom;
>> out[2] = in[2]/denom;
>> }
>>
>> Doh?
>
> Kaz, pay attention:
>
> Xah wrote: «Note, that the “norm” as defined above works for vectors
> of any dimension, i.e. list of any length.»

That is still only 6 lines of C code and not 50 as you claimed:

double il = 0.0;
for (int i=0; i<n; ++i)
il += in[i] * in[i];
il = 1.0 / sqrt(il);
for (int i=0; i<n; ++i)
out[i] = il * in[i];

Try computing the Fourier transform of:

0.007 + 0.01 I, -0.002 - 0.0024 I
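Jon's two-sample challenge is small enough to check by hand: for a length-2 sequence, the DFT is just the sum and the difference of the two samples. Here is a library-free Python sketch of the naive DFT (my own illustration, not code from this thread; note that Mathematica's Fourier uses different default sign and normalization conventions, so its values differ by constant factors):

```python
import cmath

def dft(xs):
    """Naive O(n^2) discrete Fourier transform of a list of complex samples."""
    n = len(xs)
    return [sum(x * cmath.exp(-2j * cmath.pi * k * m / n)
                for m, x in enumerate(xs))
            for k in range(n)]

xs = [0.007 + 0.01j, -0.002 - 0.0024j]
X = dft(xs)
# For n = 2: X[0] = x0 + x1 and X[1] = x0 - x1 (up to rounding).
print(X[0])   # ~ 0.005 + 0.0076j
print(X[1])   # ~ 0.009 + 0.0124j
```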

Arne Vajhøj

unread,
Dec 10, 2008, 6:44:51 PM12/10/08
to
Jon Harrop wrote:
> Xah Lee wrote:
>> Kaz Kylheku wrote:
>>> Really? ``50 or hundreds'' of lines in C?
>>>
>>> #include <math.h> /* for sqrt */
>>>
>>> void normalize(double *out, double *in)
>>> {
>>> double denom = sqrt(in[0] * in[0] + in[1] * in[1] + in[2] *
>>> in[2]);
>>>
>>> out[0] = in[0]/denom;
>>> out[1] = in[1]/denom;
>>> out[2] = in[2]/denom;
>>> }
>>>
>>> Doh?
>> Kaz, pay attention:
>>
>> Xah wrote: «Note, that the “norm” as defined above works for vectors
>> of any dimension, i.e. list of any length.»
>
> That is still only 6 lines of C code and not 50 as you claimed:
>
> double il = 0.0;
> for (int i=0; i<n; ++i)
> il += in[i] * in[i];
> il = 1.0 / sqrt(il);
> for (int i=0; i<n; ++i)
> out[i] = il * in[i];

Not that it matters, but the above requires C99 (or C++).

Arne

Bakul Shah

unread,
Dec 10, 2008, 7:12:02 PM12/10/08
to
John W Kennedy wrote:
> Xah Lee wrote:
>> In lisp, python, perl, etc, you'll have 10 or so lines. In C or Java,
>> you'll have 50 or hundreds lines.
>
> C:
>
> #include <stdlib.h>
> #include <math.h>
>
> void normal(int dim, float* x, float* a) {
> float sum = 0.0f;
> int i;
> float divisor;
> for (i = 0; i < dim; ++i) sum += x[i] * x[i];
> divisor = sqrt(sum);
> for (i = 0; i < dim; ++i) a[i] = x[i]/divisor;
> }
>
> Java:
>
> static float[] normal(final float[] x) {
> float sum = 0.0f;
> for (int i = 0; i < x.length; ++i) sum += x[i] * x[i];
> final float divisor = (float) Math.sqrt(sum);
> float[] a = new float[x.length];
> for (int i = 0; i < x.length; ++i) a[i] = x[i]/divisor;
> return a;
> }
>
>

q){x%sqrt sum x}3 4
0.6 0.8

Bakul Shah

unread,
Dec 10, 2008, 7:25:02 PM12/10/08
to

Oops. I meant to write {x%sqrt sum x*x}3 4

Kaz Kylheku

unread,
Dec 10, 2008, 7:34:00 PM12/10/08
to
On 2008-12-10, Xah Lee <xah...@gmail.com> wrote:

> Xah Lee wrote:
>> > means, we want a function whose input is a list of 3 elements say
^^^^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^^^^^^^

> Kaz, pay attention:

[ reformatted to 7 bit USASCII ]

> Xah wrote: Note, that the norm

> of any dimension, i.e. list of any length.

It was coded to the above requirements.

Xah Lee

unread,
Dec 10, 2008, 7:51:00 PM12/10/08
to
On Dec 10, 2:47 pm, John W Kennedy <jwke...@attglobal.net> wrote:
> Xah Lee wrote:
> > In lisp, python, perl, etc, you'll have 10 or so lines. In C or Java,
> > you'll have 50 or hundreds lines.
>
> C:
>
> #include <stdlib.h>
> #include <math.h>
>
> void normal(int dim, float* x, float* a) {
>     float sum = 0.0f;
>     int i;
>     float divisor;
>     for (i = 0; i < dim; ++i) sum += x[i] * x[i];
>     divisor = sqrt(sum);
>     for (i = 0; i < dim; ++i) a[i] = x[i]/divisor;
>
> }
>
> Java:
>
> static float[] normal(final float[] x) {
>     float sum = 0.0f;
>     for (int i = 0; i < x.length; ++i) sum += x[i] * x[i];
>     final float divisor = (float) Math.sqrt(sum);
>     float[] a = new float[x.length];
>     for (int i = 0; i < x.length; ++i) a[i] = x[i]/divisor;
>     return a;
>
> }

Thanks to various replies.

I've now gathered code solutions in ruby, python, C, Java, here:

now lacking are perl, elisp, which i can do well in a condensed way.
It'd be interesting also to have javascript... and perhaps erlang,
OCaml/F#, Haskell too.

Xah
http://xahlee.org/
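For completeness, a Python version along the same lines as the Ruby and Perl definitions in this thread (my own sketch; the collected page may list a different one) would be:

```python
import math

def normalize(v):
    """Scale v to unit length; works for a list of any dimension."""
    s = math.sqrt(sum(x * x for x in v))
    return [x / s for x in v]

print(normalize([3, 4]))   # [0.6, 0.8]
```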


William James

unread,
Dec 10, 2008, 9:22:40 PM12/10/08
to
Jon Harrop wrote:

> Xah Lee wrote:
> > On Dec 10, 12:37 pm, w_a_x_...@yahoo.com wrote:
> >> Ruby:
> > >
> >> def norm a
> >> s = Math.sqrt(a.map{|x|x*x}.inject{|x,y|x+y})
> >> a.map{|x| x/s}
> >> end
> >
> > I don't know ruby, but i tried to run it and it does not work.
> >
> > #ruby
> > def norm a
> > s = Math.sqrt(a.map{|x|x*x}.inject{|x,y|x+y})
> > a.map{|x| x/s}
> > end
> >
> > v = [3,4]
> >
> > p norm(v) # returns [0.6, 0.8]
>
> That is the correct answer.
>
> > The correct result for that input would be 5.
>
> No, you're confusing normalization with length.

Expanded for easier comprehension.

def norm a
# Replace each number with its square.
b = a.map{|x| x*x }
# Sum the squares. (inject is reduce or fold)
c = b.inject{|x,y| x + y }
# Take the square root of the sum.
s = Math.sqrt( c )
# Divide each number in original list by the square root.
a.map{|x| x/s }
end

1.upto(4){|i|
a = (1..i).to_a
p a
p norm( a )
}

--- output ---
[1]
[1.0]
[1, 2]
[0.447213595499958, 0.894427190999916]
[1, 2, 3]
[0.267261241912424, 0.534522483824849, 0.801783725737273]
[1, 2, 3, 4]
[0.182574185835055, 0.365148371670111, 0.547722557505166,
0.730296743340221]

Chris Rathman

unread,
Dec 10, 2008, 9:43:07 PM12/10/08
to
On Dec 10, 6:51 pm, Xah Lee <xah...@gmail.com> wrote:
> I've now gather code solutions in ruby, python, C, Java, here:
>
> now lacking is perl, elisp, which i can do well in a condensed way.
> It'd be interesting also to have javascript... and perhaps erlang,
> OCaml/F#, Haskell too.

Pay me $600 for my time and I'll even throw in an Algol-68
version. :-)

toby

unread,
Dec 11, 2008, 12:03:10 AM12/11/08
to
On Dec 10, 3:37 pm, w_a_x_...@yahoo.com wrote:
> On Dec 5, 9:51 am, Xah Lee <xah...@gmail.com> wrote:
>
>
>
> > For those of you who don't know linear algebra but knows coding, this
> > means, we want a function whose input is a list of 3 elements say
> > {x,y,z}, and output is also a list of 3 elements, say {a,b,c}, with
> > the condition that
>
> > a = x/Sqrt[x^2+y^2+z^2]
> > b = y/Sqrt[x^2+y^2+z^2]
> > c = z/Sqrt[x^2+y^2+z^2]
>
> > In lisp, python, perl, etc, you'll have 10 or so lines. In C or Java,
> > you'll have 50 or hundreds lines.

void normalise(float d[], float v[]){
float m = sqrt(v[0]*v[0] + v[1]*v[1] + v[2]*v[2]);
d[0] = v[0]/m; // My guess is Xah Lee
d[1] = v[1]/m; // hasn't touched C
d[2] = v[2]/m; // for near to an eternitee
}

William James

unread,
Dec 11, 2008, 6:46:34 AM12/11/08
to
John W Kennedy wrote:

> Xah Lee wrote:
> > In lisp, python, perl, etc, you'll have 10 or so lines. In C or
> > Java, you'll have 50 or hundreds lines.
>

> Java:


>
> static float[] normal(final float[] x) {
> float sum = 0.0f;
> for (int i = 0; i < x.length; ++i) sum += x[i] * x[i];
> final float divisor = (float) Math.sqrt(sum);
> float[] a = new float[x.length];
> for (int i = 0; i < x.length; ++i) a[i] = x[i]/divisor;
> return a;
> }

"We don't need no stinkin' loops!"

SpiderMonkey Javascript:

function normal( ary )
{ div=Math.sqrt(ary.map(function(x) x*x).reduce(function(a,b) a+b))
return ary.map(function(x) x/div)
}

Xah Lee

unread,
Dec 11, 2008, 1:41:59 PM12/11/08
to
On Dec 10, 2:47 pm, John W Kennedy <jwke...@attglobal.net> wrote:
> Xah Lee wrote:
> > In lisp, python, perl, etc, you'll have 10 or so lines. In C or Java,
> > you'll have 50 or hundreds lines.
>
> C:
>
> #include <stdlib.h>
> #include <math.h>
>
> void normal(int dim, float* x, float* a) {
>     float sum = 0.0f;
>     int i;
>     float divisor;
>     for (i = 0; i < dim; ++i) sum += x[i] * x[i];
>     divisor = sqrt(sum);
>     for (i = 0; i < dim; ++i) a[i] = x[i]/divisor;
>
> }

i don't have experience coding C. The code above doesn't seem to
satisfy the spec. The input should be just a vector, array, list, or
whatever the lang supports.
The output is the same datatype of the same dimension.

Xah
http://xahlee.org/


Jim Gibson

unread,
Dec 11, 2008, 3:11:06 PM12/11/08
to
In article
<5ebe5a7d-cbdf-4d66...@40g2000prx.googlegroups.com>, Xah
Lee <xah...@gmail.com> wrote:

Perl:

sub normal
{
my $sum = 0;
$sum += $_ ** 2 for @_;
my $length = sqrt($sum);
return map { $_/$length } @_;
}

--
Jim Gibson

Jon Harrop

unread,
Dec 11, 2008, 5:57:45 PM12/11/08
to

The output is in the preallocated argument "a". It is the same type (float
*) and has the same dimension. That is idiomatic C.

You could define a struct type representing a vector that includes its
length and data (akin to std::vector<..> in C++) but it would still be
nowhere near 50 LOC as you claimed.

George Neuner

unread,
Dec 12, 2008, 12:15:21 PM12/12/08
to
On Wed, 10 Dec 2008 21:37:34 +0000 (UTC), Kaz Kylheku
<kkyl...@gmail.com> wrote:

>Now try writing a device driver for your wireless LAN adapter in Mathematica.

Notice how Xah chose not to respond to this.

George

George Neuner

unread,
Dec 12, 2008, 12:39:26 PM12/12/08
to
On Thu, 11 Dec 2008 10:41:59 -0800 (PST), Xah Lee <xah...@gmail.com>
wrote:

>On Dec 10, 2:47 pm, John W Kennedy <jwke...@attglobal.net> wrote:


>> Xah Lee wrote:
>> > In lisp, python, perl, etc, you'll have 10 or so lines. In C or Java,
>> > you'll have 50 or hundreds lines.
>>
>> C:
>>
>> #include <stdlib.h>
>> #include <math.h>
>>
>> void normal(int dim, float* x, float* a) {
>>     float sum = 0.0f;
>>     int i;
>>     float divisor;
>>     for (i = 0; i < dim; ++i) sum += x[i] * x[i];
>>     divisor = sqrt(sum);
>>     for (i = 0; i < dim; ++i) a[i] = x[i]/divisor;
>>
>> }
>
>i don't have experience coding C.

Then why do you talk about it as if you know something?

>The code above doesn't seem to satisfy the spec.

It does.

>The input should be just a vector, array, list, or
>whatever the lang supports. The output is the same
>datatype of the same dimension.

C's native arrays are stored contiguously. Multidimensional arrays
can be accessed as a vector of length (dim1 * dim2 * ... * dimN).

This code handles arrays of any dimensionality. The poorly named
argument 'dim' specifies the total number of elements in the array.

George

Rainer Joswig

unread,
Dec 12, 2008, 12:43:28 PM12/12/08
to
In article <d075k4tpj01c71b2j...@4ax.com>,
George Neuner <gneu...@comcast.net> wrote:

For inspiration, here is some old Lisp driver code for an old
3com network card (Ethernet, not WLAN):

http://jrm-code-project.googlecode.com/svn/trunk/lambda/network/drivers/3com.lisp

--
http://lispm.dyndns.org/

George Neuner

unread,
Dec 12, 2008, 2:47:17 PM12/12/08
to
On Mon, 8 Dec 2008 15:14:18 -0800 (PST), Xah Lee <xah...@gmail.com>
wrote:

>Dear George Neuner,
>
>Xah Lee wrote:
>> >For example,
>> >the level or power of lang can be roughly order as
>> >this:
>>
>> >assembly langs
>> >C, pascal
>> >C++, java, c#
>> >unix shells
>> >perl, python, ruby, php
>> >lisp
>> >Mathematica
>
>George wrote:
>> According to what "power" estimation? Assembly, C/C++, C#, Pascal,
>> Java, Python, Ruby and Lisp are all Turing Complete. I don't know
>> offhand whether Mathematica is also TC, but if it is then it is at
>> most equally powerful.
>
it's amazing that every tech geeker (aka idiot) wants to quote
“Turing Complete” at every chance. Even simple cellular automata,
such as Conway's game of life or rule 110, are Turing complete.
>
>http://en.wikipedia.org/wiki/Conway's_Game_of_Life
>http://en.wikipedia.org/wiki/Rule_110
>
in fact, according to Stephen Wolfram's controversial thesis by the
name of “Principle of computational equivalence”, every goddamn thing
in nature is just about turing complete. (just imagine, when you take
a piss, the stream of yellow fluid is actually doing turing complete
computations!)

Wolfram's thesis does not make the case that everything is somehow
doing computation.

for a change, it'd be a far more interesting and effective knowledge
showoff to cite langs that are not fucking turing complete.

We geek idiots cite Turing because it is an important measure of a
language. There are plenty of languages which are not complete. That
you completely disregard a fundamental truth of computing is
disturbing.

the rest of your message went on stupidly about the turing complete
point of view on language's power, mixed with lisp fanaticism, and
personal gripes about the merits and applicability of assembly vs
higher level langs.

You don't seem to understand the difference between leverage and power
and that disturbs all the geeks here who do. We worry that newbies
might actually listen to your ridiculous ramblings and be led away
from the truth.

>It's fine to go on with your gripes, but be careful in using me as a
>stepping stone.

Xah, if I wanted to step on you I would do it with combat boots. You
should be thankful that you live 3000 miles away and I don't care
enough about your petty name calling to come looking for you. If you
insult people in person like you do on usenet then I'm amazed that
you've lived this long.

George

Bakul Shah

unread,
Dec 12, 2008, 4:19:29 PM12/12/08
to

Only if the length in each dimension is known at compile time (or
in C99, if this is an automatic array). When this is not the case,
you may have to implement something like the following (not the only
way, just one way):

#include <stdlib.h> /* for malloc */

float** new_matrix(int rows, int cols) {
float** m = malloc(sizeof(float*)*rows);
int i;
for (i = 0; i < rows; i++)
m[i] = malloc(sizeof(float)*cols);
return m;
}

In this case normal() fails since matrix m is not in a single
contiguous area.

But I suspect Xah is complaining because the function doesn't
*return* a value of the same type; instead you have to pass in
the result vector. But such is life if you code in C!

Rob Warnock

unread,
Dec 12, 2008, 10:03:44 PM12/12/08
to
Rainer Joswig <jos...@lisp.de> wrote:
+---------------

| For inspiration, here is some old Lisp driver code for an old
| 3com network card (Ethernet, not WLAN):
|
| http://jrm-code-project.googlecode.com/svn/trunk/lambda/network/drivers/3com.lisp
+---------------

Heh. This looks a *lot* like the user-mode hardware bringup/debugging
code I was writing in CMUCL during the last few years (for a now-PPoE). ;-}
Lots of bit- & byte-field definitions, peek & poke stuff, utilities
to encode/pack/unpack/decode hardware register fields from/to readable
symbols, etc. The main obvious difference I noticed was that instead
of using SYS:%NUBUS-READ & SYS:%NUBUS-WRITE to peek/poke at the
hardware, my code did an MMAP of "/dev/mem" and then used CMUCL's
SYSTEM:SAP-REF-{8,16,32} and SETFs of same [wrapped within suitable
syntactic sugar, of course]. Fun stuff!


-Rob

-----
Rob Warnock <rp...@rpw3.org>
627 26th Avenue <URL:http://rpw3.org/>
San Mateo, CA 94403 (650)572-2607

Scott

unread,
Dec 13, 2008, 1:10:02 AM12/13/08
to
On Dec 10, 2:07 pm, Raymond Wiker <r...@RawMBP.local> wrote:
>
> In Common Lisp:
>
> (defun normalize (list-or-vector)
>   (let ((l (sqrt (reduce #'+ (map 'list (lambda (x) (* x x)) list-or-vector)))))
>     (map (type-of list-or-vector) (lambda (x) (/ x l)) list-or-vector)))
>
> As a bonus, this works with lists or vectors; it also works with
> complex numbers.
>

I think you want to throw a (conjugate x) in there for it to give you
the correct answer for complex numbers...
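Scott's point carries over to other languages too: with complex components, x² can itself be complex, so the squared magnitude should be x·conj(x). A Python sketch of that fix (my own illustration, not code from this thread):

```python
import math

def normalize(v):
    """Unit-length version of v; x * conj(x) keeps squared magnitudes real."""
    s = math.sqrt(sum((x * x.conjugate()).real for x in v))
    return [x / s for x in v]

print(normalize([3, 4]))     # [0.6, 0.8]
print(normalize([3 + 4j]))   # [(0.6+0.8j)]
```

This works unchanged for real inputs, since Python ints and floats also have a `.conjugate()` method.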
