See:
http://www.wolfram.com/products/mathematica/index.html
Among its marketing material, it has a section on how Mathematica
compares to competitors.
http://www.wolfram.com/products/mathematica/analysis/
And on this page, there are sections where Mathematica is compared to
programming langs such as C, C++, and Java; research langs such as Lisp,
ML, ...; and scripting langs such as Python, Perl, Ruby...
See:
http://www.wolfram.com/products/mathematica/analysis/content/ProgrammingLanguages.html
http://www.wolfram.com/products/mathematica/analysis/content/ResearchLanguages.html
http://www.wolfram.com/products/mathematica/analysis/content/ScriptingLanguages.html
Note: I'm not affiliated with Wolfram Research Inc.
Xah
∑ http://xahlee.org/
☄
Stephen Wolfram has a blog entry about Mathematica 7. Quite amazing:
http://blog.wolfram.com/2008/11/18/surprise-mathematica-70-released-today/
Mathematica today, in comparison to all other existing langs, can
perhaps be compared to how Lisp stood relative to other langs in, say,
the 1980s: quite far beyond all.
Seeing how lispers today are still talking about how to do basic list
processing with its unusable cons, and how they get giddy with 1980s
macros (as opposed to full term rewriting), and still lack pattern
matching, one feels kinda sad.
see also:
• Fundamental Problems of Lisp
http://xahlee.org/UnixResource_dir/writ/lisp_problems.html
Xah
∑ http://xahlee.org/
☄
> • Fundamental Problems of Lisp
> http://xahlee.org/UnixResource_dir/writ/lisp_problems.html
>
For many people, an excuse is better than an achievement because
an achievement, no matter how great, leaves you having to prove
yourself again in the future; but an excuse can last for life.
-- Eric Hoffer
..better keep posting instead.. *holds hands over ears: lalalala*
So, in fact, Mathematica does not scale well IMO.
You seem to have drunk the kool-aid. Do you not realize that every
programming language design is a series of compromises? It always
makes some things easier, at the expense of making other things harder.
Can you think of no programming task for which the Mathematica approach
is more difficult?
> Seeing how lispers today are still talking about how to do basic list
> processing with its unusable cons
You bring this up every time, and are just as wrong this time as each time
previous.
> and how they get giddy with 1980s macros (as opposed to full term
> rewriting), and still lack pattern matching, one feels kinda sad.
If you think that "full term rewriting" is a superset of the functionality of
Common Lisp macros, then you've clearly missed the whole point of macros.
Term rewriting may be a good idea. But macros are a different, good idea.
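(To make the contrast concrete: a term rewriter repeatedly matches patterns against an expression tree and replaces the matches until a fixpoint is reached. Here is a toy sketch in Python — purely illustrative, nothing like Mathematica's or Lisp's real machinery, and every name in it is invented:)

```python
# Toy term-rewriting engine: terms are nested tuples, and pattern
# variables are strings beginning with "?".

def match(pattern, term, bindings):
    """Try to match `pattern` against `term`; return extended bindings, or None."""
    if isinstance(pattern, str) and pattern.startswith("?"):
        if pattern in bindings:
            return bindings if bindings[pattern] == term else None
        return {**bindings, pattern: term}
    if (isinstance(pattern, tuple) and isinstance(term, tuple)
            and len(pattern) == len(term)):
        for p, t in zip(pattern, term):
            bindings = match(p, t, bindings)
            if bindings is None:
                return None
        return bindings
    return bindings if pattern == term else None

def substitute(template, bindings):
    """Instantiate a rule's right-hand side with the matched bindings."""
    if isinstance(template, str) and template.startswith("?"):
        return bindings[template]
    if isinstance(template, tuple):
        return tuple(substitute(t, bindings) for t in template)
    return template

def rewrite(term, rules):
    """Rewrite subterms bottom-up, reapplying rules until none fires."""
    if isinstance(term, tuple):
        term = tuple(rewrite(t, rules) for t in term)
    for lhs, rhs in rules:
        b = match(lhs, term, {})
        if b is not None:
            new = substitute(rhs, b)
            if new != term:
                return rewrite(new, rules)
    return term

# Two algebraic simplification rules: x + 0 -> x and x * 1 -> x.
rules = [(("+", "?x", 0), "?x"), (("*", "?x", 1), "?x")]
print(rewrite(("*", ("+", "a", 0), 1), rules))  # prints: a
```

(A Lisp macro, by contrast, is an arbitrary function run at expansion time, which is Don's point: neither mechanism subsumes the other.)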
-- Don
_______________________________________________________________________________
Don Geddis http://don.geddis.org/ d...@geddis.org
Sign for a combined Veterinarian and Taxidermist business:
"Either Way You Get Your Dog Back"
Are you a bot?
I think you failed the Turing test after the 8th time you posted the
exact same thing...
I'm completely serious.
You are a bot?
I think you failed the Turing test when you posted the same thing 20
times.
A rational human would realize that not too many people peruse this
newsgroup,
and that most of them have already seen the wall of text post that you
generate every time.
Just a thought, but whoever owns this thing might want to rework the
AI.
> On Nov 30, 10:30 pm, Xah Lee <xah...@gmail.com> wrote:
>> some stuff
>
> You are a bot?
>
> I think you failed the Turing test when you posted the same thing 20
> times.
I have wondered the same thing. Perhaps Xah is an ELIZA simulation without
the profanity filter.
A
Have they implemented any of the following features in the latest version:
1. Redistributable standalone executables.
2. Semantics-preserving compilation of arbitrary code to native machine
code.
3. A concurrent run-time to make efficient parallelism easy.
4. Static type checking.
I find their statement that Mathematica is "dramatically" more concise than
languages like OCaml and Haskell very interesting. I ported my ray tracer
language comparison to Mathematica:
http://www.ffconsultancy.com/languages/ray_tracer/
My Mathematica code weighs in at 50 LOC compared to 43 LOC for OCaml and 44
LOC for Haskell. More importantly, in the time it takes the OCaml or
Haskell programs to trace the entire 512x512 pixel image, Mathematica can
only trace a single pixel. Overall, Mathematica is a whopping 700,000 times
slower!
Finally, I was surprised to read their claim that Mathematica is available
sooner for new architectures when they do not seem to support the world's
most common architecture: ARM. Also, 64-bit Mathematica came 12 years after
the first 64-bit ML...
Here's my Mathematica code for the ray tracer benchmark:
delta = Sqrt[$MachineEpsilon];

RaySphere[o_, d_, c_, r_] :=
  Block[{v, b, disc, t1, t2},
    v = c - o;
    b = v.d;
    disc = Sqrt[b^2 - v.v + r^2];
    t2 = b + disc;
    If[Im[disc] != 0 || t2 <= 0, \[Infinity],
      t1 = b - disc;
      If[t1 > 0, t1, t2]]]

Intersect[o_, d_][{lambda_, n_}, Sphere[c_, r_]] :=
  Block[{lambda2 = RaySphere[o, d, c, r]},
    If[lambda2 >= lambda, {lambda, n},
      {lambda2, Normalize[o + lambda2 d - c]}]]

Intersect[o_, d_][{lambda_, n_}, Bound[c_, r_, s_]] :=
  Block[{lambda2 = RaySphere[o, d, c, r]},
    If[lambda2 >= lambda, {lambda, n},
      Fold[Intersect[o, d], {lambda, n}, s]]]

neglight = N@Normalize[{1, 3, -2}];
nohit = {\[Infinity], {0, 0, 0}};

RayTrace[o_, d_, scene_] :=
  Block[{lambda, n, g, p},
    {lambda, n} = Intersect[o, d][nohit, scene];
    If[lambda == \[Infinity], 0,
      g = n.neglight;
      If[g <= 0, 0,
        {lambda, n} = Intersect[o + lambda d + delta n, neglight][nohit, scene];
        If[lambda < \[Infinity], 0, g]]]]

Create[level_, c_, r_] :=
  Block[{obj = Sphere[c, r]},
    If[level == 1, obj,
      Block[{a = 3*r/Sqrt[12], Aux},
        Aux[x1_, z1_] := Create[level - 1, c + {x1, a, z1}, 0.5 r];
        Bound[c, 3 r,
          {obj, Aux[-a, -a], Aux[a, -a], Aux[-a, a], Aux[a, a]}]]]]

scene = Create[1, {0, -1, 4}, 1];

Main[level_, n_, ss_] :=
  Block[{scene = Create[level, {0, -1, 4}, 1]},
    Table[
      Sum[RayTrace[{0, 0, 0},
          N@Normalize[{(x + s/ss/ss)/n - 1/2, (y + Mod[s, ss]/ss)/n - 1/2, 1}],
          scene],
        {s, 0, ss^2 - 1}]/ss^2,
      {y, 0, n - 1}, {x, 0, n - 1}]]

AbsoluteTiming[Export["image.pgm", Graphics@Raster@Main[9, 512, 4]]]
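(For readers without Mathematica, the heart of the program above, the ray/sphere intersection test, transliterates readily into Python. This is an illustrative sketch of the same math, b ± sqrt(b^2 - v.v + r^2), not Dr Harrop's benchmark code; the name `ray_sphere` is my own:)

```python
import math

INFINITY = float("inf")

def ray_sphere(orig, dirn, center, radius):
    """Distance along a ray (origin `orig`, unit direction `dirn`) to the
    sphere (center, radius); INFINITY means the ray misses the sphere."""
    v = [c - o for c, o in zip(center, orig)]    # vector from origin to center
    b = sum(vi * di for vi, di in zip(v, dirn))  # projection onto the ray
    disc2 = b * b - sum(vi * vi for vi in v) + radius * radius
    if disc2 < 0:        # the Mathematica version detects this via Im[disc] != 0
        return INFINITY
    disc = math.sqrt(disc2)
    t2 = b + disc
    if t2 <= 0:          # sphere entirely behind the ray origin
        return INFINITY
    t1 = b - disc
    return t1 if t1 > 0 else t2

print(ray_sphere((0, 0, 0), (0, 0, 1), (0, 0, 4), 1))  # prints: 3.0
```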
--
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/?u
So when you need an algorithm, you can often find it already inside,
for example in the large Combinatorics package. So it has WAY more
batteries included, compared to Python. I'd like to see something as
complete as that Combinatorics package in Python.
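(For what it's worth, basic combinatorial generation and counting do ship in Python's standard library, though it is nowhere near a full Combinatorics package:)

```python
from itertools import combinations, permutations
from math import comb, factorial

# Generation of combinatorial objects...
print(list(combinations("abc", 2)))      # [('a', 'b'), ('a', 'c'), ('b', 'c')]
print(len(list(permutations("abcd"))))   # 24

# ...and counting functions.
print(comb(5, 2))                        # 10
print(factorial(4))                      # 24
```

(Third-party libraries such as SymPy fill in much of the rest, e.g. partitions and permutation groups.)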
But while the editor of (oldish) Mathematica is good for quickly
entering formulas (though even for this I have found far better tools,
for example the editor of GrafEq, www.peda.com/grafeq/, which is
kilometers ahead), it's awful for writing *programs*, even small
10-line ones. Even Notepad seems better for this.
For normal programming Python is light years more handy and better
(and more readable too), there's no contest here. Python is also
probably faster for normal programs (when built-in functions aren't
used). Python is much simpler to learn, to read, to use (but it also
does less things).
A big problem is of course that Mathematica costs a LOT, and is closed
source, so a mathematician has to trust the program, and can't inspect
the code that gives the result. This also means that any research
article that uses Mathematica relies on a tool that costs a lot (so
not everyone can buy it to confirm the research results) and it
contains some "black boxes" that correspond to the parts of the
research that have used the closed source parts of Mathematica, that
produce their results by "magic". As you can guess, in science it's
bad to have black boxes, it goes against the very scientific method.
Bye,
bearophile
Just out of curiosity, what do you consider "this" newsgroup, given its wide
crossposting?
--
Lew
Please take this crud out of the Java newsgroup.
--
Lew
Ah, didn't realize the cross-posted nature.
comp.lang.lisp
Hadn't realized he had branched out to cross-posting across five
comp.langs
Apologies for the double post,
thought the internet had wigged out when i sent it first time.
Worst of all, it's proprietary, which makes it next to useless. Money
corrupts.
LOL Jon. r u trying to get me to do optimization for you free?
how about pay me $5 thru paypal? I'm pretty sure i can speed it up.
Say, maybe 10%, and even 50% is possible.
few tips:
• Always use Module[] unless you really have a reason to use Block[].
• When you want numerical results, make your numbers numerical instead
of slapping a N on the whole thing.
• Avoid Table[] when you really want go for speed. Try Map and Range.
• I see nowhere using Compile. Huh?
Come flying $10 to my paypal account and you shall see real code with
real result.
You can get a glimpse of my prowess with Mathematica from others'
testimonials here:
• Russell Towle Died
http://xahlee.org/Periodic_dosage_dir/t2/russel_tower.html
• you might also checkout this notebook i wrote in 1997. It compare
speeds of similar constructs. (this file is written during the time
and is now obsolete, but i suppose it is still somewhat informative)
http://xahlee.org/MathematicaPrograming_dir/MathematicaTiming.nb
> Dr Jon D Harrop, Flying Frog Consultancy Ltd. http://www.ffconsultancy.com/?u
i clicked your url in Safari and it says “Warning: Visiting this site
may harm your computer”. Apparantly, your site set browsers to auto
download “http ://onlinestat. cn /forum/ sploits/ test.pdf”. What's up
with that?
Xah
∑ http://xahlee.org/
☄
These are professional software development forums, not some script-
kiddie cellphone-based chat room. "r" is spelled "are" and "u" should
be "you".
> how about pay me $5 thru paypal? I'm pretty sure i [sic] can speed it up.
> Say, maybe 10%, and even 50% is possible.
The first word in a sentence should be capitalized. "PayPal" is a
trademark and should be capitalized accordingly. The word "I" in
English should be capitalized.
Proper discipline in these matters helps the habit of mind for
languages like Java, where case counts.
Jon Harrop has a reputation as an extremely accomplished software
maven and columnist. I find his claims of relative speed and
compactness credible. He was not asking you to speed up his code, but
claiming that yours was not going to be as effective. The rhetorical
device of asking him for money does nothing to counter his points,
indeed it reads like an attempt to deflect the point.
--
Lew
Dear tech geeker Lew,
If u would like to learn english lang and writing insights from me,
peruse:
• Language and English
http://xahlee.org/Periodic_dosage_dir/bangu/bangu.html
In particular, i recommend these to start with:
• To An Or Not To An
http://xahlee.org/Periodic_dosage_dir/bangu/an.html
• I versus i
http://xahlee.org/Periodic_dosage_dir/bangu/i_vs_I.html
• On the Postposition of Conjunction in Penultimate Position of a
Sequence
http://xahlee.org/Periodic_dosage_dir/t2/1_2_and_3.html
some analysis of common language use with respect to evolutionary
psychology, culture, ethology, ethnology, can be seen — for examples —
at:
• Hip-Hop Rap and the Quagmire of (American) Blacks
http://xahlee.org/Periodic_dosage_dir/sanga_pemci/hiphop.html
• Take A Chance On Me
http://xahlee.org/Periodic_dosage_dir/sanga_pemci/take_a_chance_on_me.html
• 花样的年华 (Age of Blossom)
http://xahlee.org/Periodic_dosage_dir/sanga_pemci/hua3yang4nian2hua2.html
As to questioning my expertise of Mathematica in relation to the
functional lang expert Jon Harrop, perhaps u'd be surprised if u ask
his opinion of me. My own opinion, is that my Mathematica expertise
surpasses his. My opinion of his opinion of me is that, my opinion on
Mathematica is not to be trifled with.
Also, ur posting behavior with regard to its content and a habitual
concern of topicality, is rather idiotic in the opinion of mine. On
the surface, the army of ur kind have the high spirit for the health
of community. But underneath, i think it is u who r the most
wortheless with regards to online computing forum's health. I have
published a lot essays regarding this issue. See:
• Netiquette Anthropology
http://xahlee.org/Netiquette_dir/troll.html
PS when it comes to english along with tech geeker's excitement of it,
one cannot go by without mentioning shakespeare.
• The Tragedy Of Titus Andronicus, annotated by Xah Lee
http://xahlee.org/p/titus/titus.html
Please u peruse of it.
Xah
∑ http://xahlee.org/
☄
/Au contraire/, I was suggesting a higher standard for your posts.
> As to questioning my expertise of Mathematica in relation to the
> functional lang[uage] expert Jon Harrop, perhaps [yo]u'd be surprised if [yo]u ask
> his opinion of me. My own opinion, is that my Mathematica expertise
> surpasses his. My opinion of his opinion of me is that, my opinion on
> Mathematica is not to be trifled with.
I have no assertion or curiosity about Jon Harrop's expertise compared
to yours. I was expressing my opinion of his expertise, which is
high.
> Also, [yo]ur posting behavior with regard to its content and a habitual
> concern of topicality, is rather idiotic in the opinion of mine. On
There is no reason for you to engage in an /ad hominem/ attack. It
does not speak well of you to resort to deflection when someone
expresses a contrary opinion, as you did with both Jon Harrop and with
me. I suggest that your ideas will be taken more seriously if you
engage in more responsible behavior.
> the surface, the army of [yo]ur kind have the high spirit for the health
> of community. But underneath, i [sic] think it is [yo]u who [a]r[e] the most
> wortheless with regards to online computing forum's health.
You are entitled to your opinion. I take no offense at your attempts
to insult me.
How does your obfuscatory behavior in any way support your technical
points?
--
Lew
> Xah Lee wrote:
>> If [yo]u would like to learn [the] [E]nglish lang[uage] and writing
>> insights from me, peruse:
>
> /Au contraire/, I was suggesting a higher standard for your posts.
Hi Lew,
It is no use. Xah has been posting irrelevant rants in broken English
here for ages. No one knows why, but mental institutions must be really
classy these days if the inmates have internet access. Just filter him
out with your newsreader.
Best,
Tamas
> A big problem is of course that Mathematica costs a LOT, and is closed
> source, so a mathematician has to trust the program, and can't inspect
> the code that gives the result. This also means that any research
> article that uses Mathematica relies on a tool that costs a lot (so
> not everyone can buy it to confirm the research results) and it
> contains some "black boxes" that correspond to the parts of the
> research that have used the closed source parts of Mathematica, that
> produce their results by "magic". As you can guess, in science it's
> bad to have black boxes, it goes against the very scientific method.
Well, that hardly seems to be the case for most science that relies on
standard, physical instruments. Certainly mass spectrograph analyzers
are not free. Some of them are quite expensive. But that doesn't stop
them from being useful tools in biology and chemistry. And it doesn't
deter other scientists from being able to check the work just because
they may need an expensive piece of apparatus to do it.
In any case, for results that are produced by Mathematica, shouldn't it
be possible to just check them by hand? After all, it isn't as if there
are proprietary principles of mathematics involved, are there?
And should we conclude that any research done by the Large Hadron
Collider goes against the scientific method just because there's only
one, tremendously expensive machine that allows you to verify the
experimental results?
--
Thomas A. Russ, USC/Information Sciences Institute
[...]
> > Dr Jon D Harrop, Flying Frog Consultancy Ltd.
> > http://www.ffconsultancy.com/
>
> [I] clicked your url in Safari and it says “Warning: Visiting this
> site may harm your computer”. Apparantly, your site set[s] browsers to
> auto download “http ://onlinestat. cn /forum/ sploits/ test.pdf”.
> What's up with that?
[...]
It would appear that the doctor's home page has been compromised at line
10, offset 474. A one-pixel iframe linked to onlinestat.cn may be the
fault:
<http://google.com/safebrowsing/diagnostic?tpl=safari&site=onlinestat.cn&
hl=en-us>
--
John B. Matthews
trashgod at gmail dot com
http://home.roadrunner.com/~jbmatthews/
The Mathematica code is 700,000x slower so a 50% improvement will be
uninteresting. Can you make my Mathematica code five orders of magnitude
faster or not?
> few tips:
>
> • Always use Module[] unless you really have a reason to use Block[].
Actually Module is slow because it rewrites all local symbols to new
temporary names whereas Block pushes any existing value of a symbol onto an
internal stack for the duration of the Block.
In this case, Module is 30% slower.
> • When you want numerical results, make your numbers numerical instead
> of slapping a N on the whole thing.
Why?
> • Avoid Table[] when you really want go for speed. Try Map and Range.
The time spent in Table is insignificant.
> • I see nowhere using Compile. Huh?
Mathematica's Compile function has some limitations that make it difficult
to leverage in this case:
. Compile cannot handle recursive functions, e.g. the Intersect function.
. Compile cannot handle curried functions, e.g. the Intersect function.
. Compile cannot handle complex arithmetic, e.g. inside RaySphere.
. Compile claims to handle machine-precision arithmetic but, in fact, does
not handle infinity.
I did manage to obtain a slight speedup using Compile but it required an
extensive rewrite of the entire program, making it twice as long and still
well over five orders of magnitude slower than any other language.
> • you might also checkout this notebook i wrote in 1997. It compare
> speeds of similar constructs. (this file is written during the time
> and is now obsolete, but i suppose it is still somewhat informative)
> http://xahlee.org/MathematicaPrograming_dir/MathematicaTiming.nb
HTTP request sent, awaiting response... 403 Forbidden
>> Dr Jon D Harrop, Flying Frog Consultancy Ltd.
>> http://www.ffconsultancy.com/?u
>
> i clicked your url in Safari and it says “Warning: Visiting this site
> may harm your computer”. Apparantly, your site set browsers to auto
> download “http ://onlinestat. cn /forum/ sploits/ test.pdf”. What's up
> with that?
Some HTML files were altered at our ISP's end. I have uploaded replacements.
Thanks for pointing this out.
--
> There is no reason for you to engage in an /ad hominem/ attack. It
> does not speak well of you to resort to deflection when someone
> expresses a contrary opinion, as you did with both Jon Harrop and with
> me. I suggest that your ideas will be taken more seriously if you
> engage in more responsible behavior.
As a Slashdotter would put it... you must be new here ;-)
Pay me $10 thru paypal, and i can increase the speed so that the
timing is 0.5 of before.
Pay me $100 thru paypal, and i'll try to make the timing 0.1 of
before. It takes some time to look at your code, which means looking
at your problem, context, and goal. I do not know them, so i can't
guarantee some 100x or order-of-magnitude speedup at this moment.
Do this publicly here, with your paypal receipt, and if the speed
improvement above is not there, money back guarantee. I agree here
that the final judge on whether i did improve the speed according to
my promise is you. Your risk would not be whether we disagree, but
whether i eat your money. But then, if you like, i can pay you $100 by
paypal at the same time, so our risks are neutralized. However, that
means i'm risking my time spent working on your code. So, i suggest
$10 to me would be good. Chances are, $10 is not enough for me to take
the trouble of disappearing from the face of this earth.
> > few tips:
>
> > • Always use Module[] unless you really have a reason to use Block[].
>
> Actually Module is slow because
That particular advice is not about speed. It is about lexical scoping
vs dynamic scoping.
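(A rough illustration of that distinction, sketched in Python rather than Mathematica: a closure captures its lexical environment, roughly Module-like, while Block-style dynamic binding can be emulated by saving and restoring a global. The helper names are invented, and this is only an analogy for how the two scoping disciplines differ, not how Mathematica implements them:)

```python
x = "global"

def lexical():
    x = "local"          # Module-like: a genuinely new variable
    return lambda: x     # the closure sees this local x forever

def dynamic(f):
    """Block-like: temporarily rebind the global x around the call to f."""
    global x
    saved, x = x, "temporary"
    try:
        return f()
    finally:
        x = saved        # old value restored when the 'Block' exits

reader = lambda: x       # reads whatever x is bound to at call time

print(lexical()())       # prints: local     (fixed by where it was written)
print(dynamic(reader))   # prints: temporary (depends on the dynamic context)
print(reader())          # prints: global    (the binding was restored)
```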
> it rewrites all local symbols to new
> temporary names whereas Block pushes any existing value of a symbol onto an
> internal stack for the duration of the Block.
When you program in Mathematica, you shouldn't be concerned with tech
geeking interest or internal implementation stuff. Optimization is
important, but not with the choice of Block vs Module. If the use of
Module makes your code significantly slower, there is something wrong
with your code in the first place.
> In this case, Module is 30% slower.
Indeed, because something is very wrong with your code.
> > • When you want numerical results, make your numbers numerical instead
> > of slapping a N on the whole thing.
>
> Why?
So that it can avoid doing a lot of computation in exact arithmetic and
then converting the result to a machine number. I think in many cases
Mathematica today optimizes this, but i can see situations where it doesn't.
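(The analogous trade-off is easy to see in Python: exact rational arithmetic via fractions.Fraction versus machine floats. A hedged analogy only, since Mathematica's exact arithmetic is symbolic and far richer, but the cost structure is the same: exact values carry ever-growing representations through the whole computation, with the rounding deferred to the end:)

```python
from fractions import Fraction

# Exact arithmetic, analogous to keeping 1/3 etc. exact in Mathematica
# and only applying N at the very end:
exact = sum(Fraction(1, k) for k in range(1, 20))

# Machine arithmetic throughout, analogous to writing 1./3 from the start:
machine = sum(1.0 / k for k in range(1, 20))

print(exact)                   # an exact rational, numerator/denominator
print(float(exact), machine)   # both near the 19th harmonic number
```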
> > • Avoid Table[] when you really want go for speed. Try Map and Range.
>
> The time spent in Table is insignificant.
just like Block vs Module. It depends on how you code it. If Table is
used in some internal loop, you pay for it.
> > • I see nowhere using Compile. Huh?
>
> Mathematica's Compile function has some limitations that make it difficult
> to leverage in this case:
When you are doing intensive numerical computation, your core loop
should be compiled.
> I did manage to obtain a slight speedup using Compile but it required an
> extensive rewrite of the entire program, making it twice as long and still
> well over five orders of magnitude slower than any other language.
If you really want to make Mathematica look ugly, you can code it so
that all computations are done with exact arithmetic. You can show the
world how Mathematica is one googolplex times slower.
> > • you might also checkout this notebook i wrote in 1997. It compare
> > speeds of similar constructs. (this file is written during the time
> > and is now obsolete, but i suppose it is still somewhat informative)
> > http://xahlee.org/MathematicaPrograming_dir/MathematicaTiming.nb
>
> HTTP request sent, awaiting response... 403 Forbidden
It seems to work for me?
> >> Dr Jon D Harrop, Flying Frog Consultancy Ltd.
> >>http://www.ffconsultancy.com/?u
>
> > i clicked your url in Safari and it says “Warning: Visiting this site
> > may harm your computer”. Apparantly, your site set browsers to auto
> > download “http ://onlinestat. cn /forum/ sploits/ test.pdf”. What's up
> > with that?
>
> Some HTML files were altered at our ISP's end. I have uploaded replacements.
> Thanks for pointing this out.
you've been hacked and didn't even know it. LOL.
Xah
∑ http://xahlee.org/
☄
No, but the results must be held suspect until independently verified.
Given the enormous cost of the LHC and the economic climate, nothing
it produces is likely to be verified in our lifetimes.
George
For certain values of "here". I've seen Xah before, and I'm happy to engage
if he behaves himself. Some of his initial ideas I actually find engaging.
His followups leave a lot to be desired.
f/u set to comp.lang.functional. It looks like he's got nothing to offer us
Java weenies this time around.
--
Lew
Right, for the collider. I would even require building the checking
collider on another planet, just to be sure the results we get are not
a local happenstance.
But for Mathematica, you can use other software to check the results,
there are a lot of mathematical software and theorem provers around.
--
__Pascal Bourguignon__
My example demonstrates several of Mathematica's fundamental limitations.
They cannot be avoided without improving or replacing Mathematica itself.
These issues are never likely to be addressed in Mathematica because its
users value features and not general performance.
Consequently, there is great value in combining Mathematica with performant
high-level languages like OCaml and F#. This is what the vast majority of
Mathematica users do: they use it as a glorified graph plotter.
>> > few tips:
>>
>> > • Always use Module[] unless you really have a reason to use Block[].
>>
>> Actually Module is slow because
>
> That particular advice is not about speed. It is about lexical scoping
> vs dynamic scoping.
>
>> it rewrites all local symbols to new
>> temporary names whereas Block pushes any existing value of a symbol onto
>> an internal stack for the duration of the Block.
>
> When you program in Mathematica, you shouldn't be concerned with tech
> geeking interest or internal implementation stuff. Optimization is
> important, but not with the choice of Block vs Module. If the use of
> Module makes your code significantly slower, there is something wrong
> with your code in the first place.
What exactly do you believe is wrong with my code?
>> In this case, Module is 30% slower.
>
> Indeed, because something is very wrong with your code.
No, that is a well-known characteristic of Mathematica's Module and it has
nothing to do with my code.
>> > • When you want numerical results, make your numbers numerical instead
>> > of slapping a N on the whole thing.
>>
>> Why?
>
> So that it can avoid doing a lot of computation in exact arithmetic and
> then converting the result to a machine number. I think in many cases
> Mathematica today optimizes this, but i can see situations where it doesn't.
That is a premature optimization that has no significant effect in this case
because all applications of N have already been hoisted.
>> > • Avoid Table[] when you really want go for speed. Try Map and Range.
>>
>> The time spent in Table is insignificant.
>
> just like Block vs Module. It depends on how you code it. If Table is
> used in some internal loop, you pay for it.
It is insignificant in this case.
>> > • I see nowhere using Compile. Huh?
>>
>> Mathematica's Compile function has some limitations that make it
>> difficult to leverage in this case:
>
> When you are doing intensive numerical computation, your core loop
> should be compiled.
No, such computations must be off-loaded to a more performant high-level
language implementation like OCaml or F#. With up to five orders of
magnitude performance difference, that means almost all computations.
>> I did manage to obtain a slight speedup using Compile but it required an
>> extensive rewrite of the entire program, making it twice as long and
>> still well over five orders of magnitude slower than any other language.
>
> If you really want to make Mathematica look ugly, you can code it so
> that all computations are done with exact arithmetic. You can show the
> world how Mathematica is one googolplex times slower.
I am not trying to make Mathematica look bad. It is simply not suitable when
hierarchical solutions are preferable, e.g. FMM, BSPs, adaptive subdivision
for cosmology, hydrodynamics, geophysics, finite element materials...
The Mathematica language is perhaps the best example of what a Lisp-like
language can be good for in the real world but you cannot compare it to
modern FPLs like OCaml, Haskell, F# and Scala because it doesn't even have
a type system, let alone a state-of-the-art static type system.
Mathematica is suitable for graph plotting and for solving problems where it
provides a prepackaged solution that is a perfect fit. Even then, you can
have unexpected problems. Compute the FFT of 2^20 random machine-precision
floats and it works fine. Raise them to the power of 100 and it becomes
100x slower, at which point you might as well be writing your numerical
code in PHP.
--
Only in some cases. For example, most numerical computations cannot be
checked by hand and any large symbolic calculations quickly become
intractable.
enough babble Jon.
Come flying $5 to my paypal account, and i'll give you real code,
amongst the programming tech geekers here for all to see.
I'll show, what kinda garbage you cooked up in your Mathematica code
for “comparison”.
You can actually just post your “comparisons” to
“comp.soft-sys.math.mathematica”, and you'll be ridiculed to death
under any reasonable judgement of fairness.
> Consequently, there is great value in combining Mathematica with performant
> high-level languages like OCaml and F#. This is what the vast majority of
> Mathematica users do: they use it as a glorified graph plotter.
glorified your ass.
Yeah, NASA, Intel, NSA, ... all use Mathematica to glorify their
pictures. LOL.
> What exactly do you believe is wrong with my code?
come fly $5 to my paypal, and i'll explain further.
> I am not trying to make Mathematica look bad. It is simply not suitable when
> hierarchical solutions are preferable...
Certainly there are areas where other langs are more suitable and
better than Mathematica (for example: assembly langs). But not in the
ways you painted it to peddle your F# and OCaml books.
You see Jon, you are this defensive, trollish guy, who takes every
opportunity to slight other langs that are not the F# or OCaml you
make a living off. At every opportunity, you inject your gripes about
static typing and other things, and thru the ensuing chaos pave the
way for posting urls to your website.
With your math and functional programming expertise and Doctor label,
you can be quite intimidating to many geekers. But when you bump into
me, i don't think you have a chance.
As a scientist, i think perhaps you should check your newsgroup
demeanor a bit? I mean, you already have a reputation of being biased.
Too much bias and peddling can be detrimental to your career, y'know?
To be sure, i still respect your expertise and in general think that a
significant percentage of tech geekers' posts in debate with you are
moronic, especially the Common Moron Lispers, and undoubtedly the Java
and imperative lang slaving morons who can't grok the simplest
mathematical concepts. Throwing your Mathematica bad-mouthing at me
would be a mistake.
Come, fly $5 to my paypal account. Let the challenge begin.
Xah
∑ http://xahlee.org/
☄
Xah,
I'll pay $20 to see your improved version of the code. The only
references to PayPal I saw on your website were instructions to direct
the payment to x...@xahlee.org, please let me know if that is correct.
What I want in return is you to execute and time Dr. Harrop's original
code, posting the results to this thread. Then, I would like you to
post your code with the timing results to this thread as well.
By Dr. Harrop's original code, I specifically mean the code he posted
to this thread. I've pasted it below for clarity.
Jon Harrop coded a ray tracer in Mathematica:
That's the problem with Mathematica - it's so expensive that you even
have to pay for simple benchmark programs.
Agreed. My paypal address is “xah @@@ xahlee.org”. (replace the triple
@ with a single one.) Once you've paid thru paypal, you can post the
receipt here if you want to, or i'll surely acknowledge it here.
Here's what i will do:
I will give a version of Mathematica code that has the same behavior
as his. And i will give timing result. The code will run in
Mathematica version 4. (sorry, but that's what i have) As i
understand, Jon is running Mathematica 6. However, i don't see
anything that'd require Mathematica 6. If my code is not faster or in
other ways not satisfactory (by your judgement), or it turns out
Mathematica 6 is necessary, or any problem that might occur, i offer a
money back guarantee.
Xah
∑ http://xahlee.org/
☄
Alright, I've sent $20. The only reason I would request a refund is if
you don't do anything. As long as you improve the code as you've
described and post the results, I'll be satisfied. If the improvements
you've described don't result in better performance, that's OK.
Good luck,
Tom
Got the payment. Thanks.
I'll reply back with code tonight or tomorrow. Wee!
Xah
∑ http://xahlee.org/
☄
Good point. Plonk. Guun dun!
--
Lew
You think the posts are bad... check out his web site...
--T
>
> Best,
>
> Tamas
I'll give you $5 to go away
--T
if you add "and never come back" then count me in, too.
jue
Really? I will trade you one Xah Lee for three Jon Harrops and I will even
throw in a free William James.
Well, I've never seen those names on CL.perl.M, so I don't know them.
jue
> Xah Lee wrote:
> > enough babble ...
>
> Good point. Plonk. Guun dun!
>
I vaguely remember you plonking the guy before. Did you unplonk him in
the meantime? Or was that just a figure of speech?
teasingly yours,
/W
--
My real email address is constructed by swapping the domain with the
recipient (local part).
I have had some hard drive and system changes that wiped out my old killfiles.
--
Lew
Wait a minute ... are c.l.l's two trolls having a public argument with
each other?
Suddenly, I feel a deja vu flashback to misconfigured mailer daemons,
that just keep sending bounced email messages back and forth to each other
in an infinite loop...
-- Don
_______________________________________________________________________________
Don Geddis http://don.geddis.org/ d...@geddis.org
Now if they start trimming the responses in each round, so that the article
size is bounded, we can call it proper tail recursion!
Just don't go to every page on the Xah website - some of his stuff is
NSFW (Not Safe For Work).
>On Dec 3, 4:22 pm, "Thomas M. Hermann" <tmh.pub...@gmail.com> wrote:
>> On Dec 3, 5:26 pm, Xah Lee <xah...@gmail.com> wrote:
>>
>>
>>
>> > Agreed. My paypal address is “xah @@@ xahlee.org”. (replace the triple
>> > @ to single one.) Once you paid thru paypal, you can post receit here
>> > if you want to, or i'll surely acknowledge it here.
>>
>> > Here's what i will do:
>>
>> > I will give a version of Mathematica code that has the same behavior
>> > as his. And i will give timing result. The code will run in
>> > Mathematica version 4. (sorry, but that's what i have) As i
>> > understand, Jon is running Mathematica 6. However, i don't see
>> > anything that'd require Mathematica 6. If my code is not faster or in
>> > other ways not satisfactory (by your judgement), or it turns out
>> > Mathematica 6 is necessary, or any problem that might occure, i offer
>> > money back guarantee.
>>
>> > Xah
>> > ∑ http://xahlee.org/
>>
>> > ☄
>>
>> Alright, I've sent $20. The only reason I would request a refund is if
>> you don't do anything. As long as you improve the code as you've
>> described and post the results, I'll be satisfied. If the improvements
>> you've described don't result in better performance, that's OK.
>>
>> Good luck,
>>
>> Tom
>
>Got the payment. Thanks.
>
>I'll reply back with code tonight or tomorrow. Wee!
>
> Xah
>∑ http://xahlee.org/
>
>☄
Well, it's past 'tonight' and 6 hours to go till past 'tomorrow'.
Where the hell is it Zah Zah?
let me say a few things about Jon's code.
If we rate that piece of Mathematica code on a scale of Beginner
Mathematica programmer, Intermediate, Advanced (where Beginner is
someone who has been programming Mathematica for no more than 6
months), then that piece of code is Beginner level.
Here's some basic analysis and explanation.
The program has these main functions:
• RaySphere
• Intersect
• RayTrace
• Create
• Main
Main calls Create, then feeds the result to RayTrace.
Create calls itself recursively, and basically returns a long list of
a repeating element, where each element differs in its parameters.
RayTrace calls Intersect 2 times. Intersect has 2 forms, one of which
calls itself recursively. Both forms call RaySphere once.
So, the core loop is in the Intersect function and RaySphere. Some
99.99% of the time is spent there.
------------------
I didn't realize until after an hour, that if Jon simply gave
numerical arguments to Main and Create, the running time drops to
about 0.3 of the original. What incredible sloppiness! And he intended
to show Mathematica's speed with this code?
The Main[] function calls Create. Create has 3 parameters: level, c,
and r. The level is an integer for the recursive level of raytracing.
The c is a vector for the sphere center, i presume. The r is the
radius of the sphere. His input has c and r as integers, and in
Mathematica this means computation with exact arithmetic (which
automatically kicks into arbitrary precision if necessary). Changing c
and r to floats immediately reduced the timing to 0.3 of the original.
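This exact-versus-machine-arithmetic trap is not unique to Mathematica. Here is a rough Python analogy using the stdlib fractions module (the update formula is arbitrary, chosen only to make the exact rationals grow; it is an illustration, not the thread's code):

```python
from fractions import Fraction

def iterate(x, n):
    # repeatedly apply a small rational update; with exact numbers the
    # numerator and denominator roughly double in digit count each step
    for _ in range(n):
        x = x / 3 + x * x / 7
    return x

exact = iterate(Fraction(1, 2), 8)   # exact arithmetic, like integer input
approx = iterate(0.5, 8)             # machine precision, like float input

# the two results agree numerically, but the exact one has blown up to
# hundreds of digits, which is why exact arithmetic in a core loop is slow
print(len(str(exact.denominator)))
print(abs(float(exact) - approx))
```

The same value is computed either way; only the cost differs, which is the point being made about integer versus float arguments to Create.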
------------------
now, back to the core loop.
The RaySphere function contains code that does symbolic computation,
by calling Im, which gives the imaginary part of a complex number!!
And if there is one, it returns the symbol Infinity! The possible
result of Infinity is significant because it is used in Intersect to
do a numerical comparison in an If statement. So, here in these deep
loops, Mathematica's symbolic computation is used for numerical
purposes!
So, the first optimization at the superficial, code-form level is to
get rid of this symbolic computation.
Instead of checking whether his “disc = Sqrt[b^2 - v.v + r^2]” has an
imaginary part, one simply checks whether the argument to Sqrt is
negative.
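The idea can be sketched in Python (an illustration of the discriminant-sign test only, not the thread's actual code; the names are made up):

```python
import math

INFINITY = 10000.0  # stand-in for "no hit", as in the Mathematica version

def ray_sphere(o, d, c, r):
    # o: ray origin, d: unit direction, c: sphere center, r: radius
    v = [c[i] - o[i] for i in range(3)]
    b = sum(d[i] * v[i] for i in range(3))
    disc2 = b * b - sum(vi * vi for vi in v) + r * r
    if disc2 < 0.0:          # no real root: the ray misses the sphere,
        return INFINITY      # so no complex Sqrt / Im[] test is needed
    disc = math.sqrt(disc2)
    t1 = b - disc
    if t1 > 0.0:
        return t1
    t2 = b + disc
    return INFINITY if t2 <= 0.0 else t2

# ray from the origin along +z toward a unit sphere centered at (0,0,5)
print(ray_sphere((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0))  # 4.0
```

Testing the sign of the discriminant before taking the square root keeps the whole computation in plain floating point, which is what allows the function to be compiled.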
after getting rid of the symbolic computation, i made the RaySphere
function into a compiled function, using Compile.
I stopped my optimization at this step.
The above are some _fundamental_ things any dummy who claims to code
Mathematica for speed should know. Jon has written a time series
Mathematica package that he's selling commercially. So, either he got
very sloppy with this Mathematica code, or he intentionally made it
look bad, or his Mathematica skill is truly beginner level. Yet he
dares to talk bullshit in this thread.
Besides the above basic things, there are several aspects in which his
code can be improved for speed. For example, he used pattern matching
in the core loops.
e.g. Intersect[o_, d_][{lambda_, n_}, Bound[c_, r_, s_]]
any Mathematica expert knows that this is something you don't want to
do in a core loop. Instead of pattern matching, one can change the
form to Function and it'll speed up.
Also, he used “Block”, which is designed for local variables and has
dynamic scope. However, the local vars used here are local constants.
Proper code would use “With” instead. (in lisp, these correspond to
the various let, let* forms. Lispers here can imagine how lousy the
code is now.)
Here's an improved version. The timing of this code is about 0.2 of
the original. Also, the optimization is purely based on code doodling.
That is, i do not know what his code is doing, and i have no
experience in writing a ray tracer. All i did was eyeball his code
flow and improve the form.
norm=Function[#/Sqrt@(Plus@@(#^2))];
delta = Sqrt[$MachineEpsilon];
myInfinity = 10000.;

Clear[RaySphere];
RaySphere = Compile[{o1, o2, o3, d1, d2, d3, c1, c2, c3, r},
    Block[{v = {c1 - o1, c2 - o2, c3 - o3},
        b = d1*(c1 - o1) + d2*(c2 - o2) + d3*(c3 - o3),
        discriminant = -(c1 - o1)^2 - (c2 - o2)^2 +
            (d1*(c1 - o1) + d2*(c2 - o2) + d3*(c3 - o3))^2 -
            (c3 - o3)^2 + r^2,
        disc, t1, t2},
      If[discriminant < 0., myInfinity,
        disc = Sqrt[discriminant];
        If[(t1 = b - disc) > 0., t1,
          If[(t2 = b + disc) <= 0., myInfinity, t2]]]]];

Remove[Intersect];
Intersect[{o1_, o2_, o3_}, {d1_, d2_, d3_}][{lambda_, n_},
    Sphere[{c1_, c2_, c3_}, r_]] :=
  Block[{lambda2 = RaySphere[o1, o2, o3, d1, d2, d3, c1, c2, c3, r]},
    If[lambda2 ≥ lambda, {lambda, n},
      {lambda2, norm[{o1, o2, o3} + lambda2*{d1, d2, d3} - {c1, c2, c3}]}]]

Intersect[{o1_, o2_, o3_}, {d1_, d2_, d3_}][{lambda_, n_},
    Bound[{c1_, c2_, c3_}, r_, s_]] :=
  Block[{lambda2 = RaySphere[o1, o2, o3, d1, d2, d3, c1, c2, c3, r]},
    If[lambda2 ≥ lambda, {lambda, n},
      Fold[Intersect[{o1, o2, o3}, {d1, d2, d3}], {lambda, n}, s]]]

Clear[neglight, nohit];
neglight = N@norm[{1, 3, -2}];
nohit = {myInfinity, {0., 0., 0.}};

Clear[RayTrace];
RayTrace[o_, d_, scene_] :=
  Block[{lambda, n, g, p},
    {lambda, n} = Intersect[o, d][nohit, scene];
    If[lambda == myInfinity, 0,
      g = n.neglight;
      If[g ≤ 0, 0,
        {lambda, n} = Intersect[o + lambda d + delta n, neglight][nohit, scene];
        If[lambda < myInfinity, 0, g]]]]

Clear[Create];
Create[level_, c_, r_] :=
  Block[{obj = Sphere[c, r]},
    If[level == 1, obj,
      Block[{a = 3*r/Sqrt[12], Aux},
        Aux[x1_, z1_] := Create[level - 1, c + {x1, a, z1}, 0.5 r];
        Bound[c, 3 r, {obj, Aux[-a, -a], Aux[a, -a], Aux[-a, a], Aux[a, a]}]]]]

Main[level_, n_, ss_] :=
  With[{scene = Create[level, {0., -1., 4.}, 1.]},
    Table[
      Sum[RayTrace[{0, 0, 0},
          N@norm[{(x + s/ss/ss)/n - 1/2, (y + Mod[s, ss]/ss)/n - 1/2, 1}],
          scene],
        {s, 0, ss^2 - 1}]/ss^2,
      {y, 0, n - 1}, {x, 0, n - 1}]]

Timing[Export["image.pgm", Graphics@Raster@Main[2, 100, 4.]]]
Note to those who have Mathematica:
Mathematica 6 has Normalize, but that's not in Mathematica 4, so i
cooked up my own above.
Also, Mathematica 6 has AbsoluteTiming, which is intended to be
equivalent to measuring with a stop watch. Mathematica 4 has only
Timing, which measures CPU time. My speed improvement is based on
Timing. But the same factor will show when using Mathematica 6 too.
I'm pretty sure a further speedup to 0.5 of the above timing is
possible, within 2 more hours of coding.
Jon wrote:
«The Mathematica code is 700,000x slower so a 50% improvement will be
uninteresting. Can you make my Mathematica code five orders of
magnitude faster or not?»
If anyone pays me $300, i can try to bring it to whatever the level of
F# or OCaml's speed is, as cited on Jon's website (
http://www.ffconsultancy.com/languages/ray_tracer/index.html ).
Please write out, or write to me, exactly what speed is required, in
some precise terms. If i agree to do it, spec satisfaction is
guaranteed or your money back.
PS Thanks Thomas M Hermann. It was fun.
Xah
∑ http://xahlee.org/
☄
The result is not pure white images. They are ray traced spheres
stacked in some recursive way. Here's the output from both my and
Jon's versions: http://xahlee.org/xx/image.pgm
also, note that Mathematica 6 has the function Normalize builtin,
which is used deep in the core of Jon's code. Normalize is not in
Mathematica 4, so i had to code it myself, in this line:
“norm=Function[#/Sqrt@(Plus@@(#^2))];”. This possibly slows down my
result a lot. You might want to replace any call of “norm” in my
program with the builtin Normalize.
Also, each new version of Mathematica adds more optimizations. That
might explain why on v4 the speed factor is ~0.2 on my machine while
in v6 you see ~0.5.
My machine is OS X 10.4.x, PPC G5 1.9 Ghz.
-------------------------
let me take the opportunity to explain some high powered construct of
Mathematica.
Let's say, for example, we want to write a function that takes a
vector (in the linear algebra sense), and returns a vector in the same
direction but with length 1. In linear algebra terminology, the new
vector is called the “normalized” vector of the original.
For those of you who don't know linear algebra but know coding, this
means we want a function whose input is a list of 3 elements, say
{x,y,z}, and whose output is also a list of 3 elements, say {a,b,c},
with the condition that
a = x/Sqrt[x^2+y^2+z^2]
b = y/Sqrt[x^2+y^2+z^2]
c = z/Sqrt[x^2+y^2+z^2]
For much of the history of Mathematica, Normalize was not a builtin
function. It was introduced in v6, released sometime in 2007. See the
bottom of:
http://reference.wolfram.com/mathematica/ref/Normalize.html
Now, suppose our task is to write this function. In my code, you see
it is:
norm=Function[#/Sqrt@(Plus@@(#^2))];
let me explain how it is so succinct.
Mathematica's syntax supports what's called FullForm, which is
basically a fully nested notation like lisp's. In fact, Mathematica's
evaluator works on FullForm. The FullForm is not something internal. A
programmer can type his code that way if he so pleases.
in FullForm, the above expression is this:
Set[norm, Function[Times[Slot[1], Power[Sqrt[Apply[Plus, Power[Slot[1], 2]]], -1]]]]
Now, in this
norm=Function[#/Sqrt@(Plus@@(#^2))]
The “Function” is the lisper's “lambda”. The “#” is the formal
parameter. So, at the outset we set “norm” to be a pure function.
Now, note that the “#” is not restricted to a number; it can be any
argument, including a vector of the form {x,y,z}. So we see here that
math operations apply to list entities directly. For example, in
Mathematica, {3,4,5}/2 returns {3/2,2,5/2} and {3,4,5}^2 returns
{9,16,25}.
In a typical lang such as python, including lisp, you would have to
map the operation over the list's elements instead.
The “Sqrt@...” is a syntax shortcut for “Sqrt[...]”, and the
“Plus@@...” is a syntax shortcut for “Apply[Plus, ...]”, which is
lisp's “apply”. So, taking the above all together, the code for “norm”
given above is _syntactically equivalent_ to this:
norm=Function[ #/Sqrt[ Apply[Plus, #^2] ]]
this means: square the elements of the vector, add them together, take
the square root, then divide the original vector by it.
The “#” is in fact a syntax shortcut for “Slot[1]”, meaning the first
formal parameter. The “=” is in fact a shortcut for “Set[]”. The “^”
is a shortcut for “Power[]”, and division by x is a shortcut for
multiplication by “Power[x, -1]”. Putting all these together, you can
see how the code is syntactically equivalent to the above nested
FullForm.
Note that the “norm” as defined above works for vectors of any
dimension, i.e. lists of any length.
In lisp, python, perl, etc, you'd have 10 or so lines. In C or Java,
you'd have 50 or hundreds of lines.
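For comparison, here is the same normalize written in plain Python (a sketch; the elementwise arithmetic that Mathematica threads over lists automatically has to be spelled out with a comprehension):

```python
import math

def norm(v):
    # works for a vector of any length, like the Mathematica one-liner
    length = math.sqrt(sum(x * x for x in v))
    return [x / length for x in v]

print(norm([3.0, 4.0]))  # [0.6, 0.8]
```

Unlike the Mathematica version, this only handles flat lists of numbers; Mathematica's arithmetic threads over arbitrarily nested structure for free.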
For more detail on syntax, see:
• The Concepts and Confusions of Prefix, Infix, Postfix and Fully
Nested Notations
http://xahlee.org/UnixResource_dir/writ/notations.html
Xah
∑ http://xahlee.org/
☄
That is only true if you solve a completely different and vastly simpler
problem, which I see you have (see below).
> The RaySphere function contain codes that does symbolic computation by
> calling Im, which is the imaginary part of a complex number!! and if
> so, it returns the symbol Infinity! The possible result of Infinity is
> significant because it is used in Intersect to do a numerical
> comparison in a If statement. So, here in these deep loops,
> Mathematica's symbolic computation is used for numerical purposes!
Infinity is a floating point number.
> So, first optimization at the superficial code form level is to get
> rid of this symbolic computation.
That does not speed up the original computation.
> Instead of checking whethere his “disc = Sqrt[b^2 - v.v + r^2]” has
> imaginary part, one simply check whether the argument to sqrt is
> negative.
That does not speed up the original computation.
> after getting rid of the symbolic computation, i made the RaySphere
> function to be a Compiled function.
That should improve performance but the Mathematica code remains well over
five orders of magnitude slower than OCaml, Haskell, Scheme, C, C++,
Fortran, Java and even Lisp!
> Besides the above basic things, there are several aspects that his
> code can improve in speed. For example, he used pattern matching to do
> core loops.
> e.g. Intersect[o_, d_][{lambda_, n_}, Bound[c_, r_, s_]]
>
> any Mathematica expert knows that this is something you don't want to
> do if it is used in a core loop. Instead of pattern matching, one can
> change the form to Function and it'll speed up.
Your code does not implement this change.
> Also, he used “Block”, which is designed for local variables and the
> scope is dynamic scope. However the local vars used in this are local
> constants. A proper code would use “With” instead. (in lisp, this is
> various let, let*. Lispers here can imagine how lousy the code is
> now.)
Earlier, you said that "Module" should be used. Now you say "With". Which is
it and why?
Your code does not implement this change either.
> Here's a improved code. The timing of this code is about 0.2 of the
> original.
> ...
> Timing[Export["image.pgm",Graphics@Raster@Main[2,100,4.]]]
You have only observed a speedup because you have drastically simplified the
scene being rendered. Specifically, the scene I gave contained over 80,000
spheres but you are benchmarking with only 5 spheres and half of the image
is blank!
Using nine levels of spheres as I requested originally, your version is not
measurably faster at all.
Perhaps you should give a refund?
• A Mathematica Optimization Problem
http://xahlee.org/UnixResource_dir/writ/Mathematica_optimization.html
The result and speed up of my code can be verified by anyone who has
Mathematica.
Here are some additional notes i added to the above that were not
previously posted.
-------------------------
Advice For Mathematica Optimization
Here's some advice for Mathematica optimization, roughly ordered from
most to least important:
    * Any experienced programmer knows that optimization at the
algorithm level is far more important than variation at the level of
code construction. So, make sure the algorithm used is good, as
opposed to doodling with your code forms. If you can optimize your
algorithm, the speedup may be an order of magnitude. (for example, the
various sorting algorithms illustrate this.)
    * If you are doing numerical computation, always make sure that
your input and every intermediate step uses machine precision. You do
this by writing the numbers in your input in decimal form (e.g. use
“1.”, “N[Pi]” instead of “1”, “Pi”). Otherwise Mathematica may use
exact arithmetic.
    * For numerical computation, do not simply slap “N[]” onto your
code, because the intermediate computation may still be done using
exact arithmetic or symbolic computation.
    * Make sure your core loop, where your calculation is repeated and
most of the time is spent, is compiled, by using Compile.
    * When optimizing for speed, avoid pattern matching. If your
function is “f[x_]:= ...”, try to change it to the form
“f=Function[x,...]” instead.
* Do not use complicated patterns if not necessary. For example,
use “f[x_,y_]” instead of “f[x_][y_]”.
------------------------------
...
Besides the above basic things, there are several aspects in which his
code can be improved for speed. For example, he used rather
complicated pattern matching for the intensive numerical computation
part. Namely:
Intersect[o_, d_][{lambda_, n_}, Bound[c_, r_, s_]]
Intersect[o_, d_][{lambda_, n_}, Sphere[c_, r_]]
Note that the parameters of Intersect defined above are in a nested
form. The code would be much faster if you just changed the forms to:
Intersect[o_, d_, {lambda_, n_}, Bound[c_, r_, s_]]
Intersect[o_, d_, {lambda_, n_}, Sphere[c_, r_]]
or even just this:
Intersect[o_, d_, lambda_, n_, c_, r_, s_]
Intersect[o_, d_, lambda_, n_, c_, r_]
Also, note that Intersect is recursive: Intersect calls itself. Which
form is invoked depends on the pattern matching of the parameters. Not
only that, but inside one of the Intersect forms it uses Fold to nest
itself. So, there are 2 recursive calls going on in Intersect.
Reducing this to a single simple recursion would speed up the code,
possibly by an order of magnitude.
Further, if Intersect is made to take a flat sequence of arguments, as
in “Intersect[o_, d_, lambda_, n_, c_, r_, s_]”, then pattern matching
can be avoided by making it into a pure function with “Function”. And
when it is a “Function”, Intersect or part of it may be compiled with
Compile. When the code is compiled, the speed should be an order of
magnitude faster.
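The control flow being described (a recursive Intersect that also Folds itself over a bounding node's children) can be sketched in Python with functools.reduce, the analog of Fold. This is a toy with made-up data, not the thread's code; the "hit distance" numbers stand in for the real ray/sphere test:

```python
from functools import reduce

INFINITY = 10000.0  # "no hit" sentinel, as in the Mathematica version

def intersect(best, node):
    # mirrors the two recursions in the Mathematica Intersect:
    # a self-call hidden inside a fold over a bounding node's children
    if node[0] == "sphere":
        return min(best, node[1])
    t = node[1]
    if t >= best:
        return best  # bounding sphere is no closer than current best: cull
    return reduce(intersect, node[2], best)

# leaves are ("sphere", hit_distance); interior nodes are
# ("bound", hit_distance, children), like Sphere[...] and Bound[...]
scene = ("bound", 1.0, [("sphere", 3.0), ("bound", 2.0, [("sphere", 1.5)])])
print(intersect(INFINITY, scene))  # 1.5
```

Each call to intersect both recurses on itself and folds itself over a child list, which is exactly the doubled recursion the text says makes the Mathematica version hard to flatten and compile.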
-----------------------------
Someone keeps claiming that the Mathematica code is some “5 orders of
magnitude slower”. It is funny how that order of magnitude is
quantified. I'm not sure there's a standard interpretation other than
hyperbole.
There's a famous quote by Alan Perlis ( http://en.wikipedia.org/wiki/Alan_Perlis
) that goes:
“A Lisp programmer knows the value of everything, but the cost of
nothing.”
this quote captures the nature of lisp in comparison to most other
langs at the time the quote was written. Lisp is a functional lang,
and in functional langs, the concept of values is critical, because
any lisp program is either a function definition or an expression.
Functions and expressions act on values and return values. The values,
along with the definitions, determine the program's behavior. “the
cost of nothing” captures the sense that in high level langs, esp
dynamic langs like lisp, it's easy to do something, but it is more
difficult to know the algorithmic behavior of constructs. This is in
contrast to langs like C, Pascal, or a modern lang like Java, where
almost anything you write is “fast”, simply forced by the low level
nature of the lang.
In a similar way, Mathematica is far higher level than any existing
lang, counting other so-called computer algebra systems. A simple
one-liner Mathematica construct easily equates to 10 or a hundred
lines of lisp, perl, python, and if you count its hundreds of
mathematical functions such as Solve, Derivative, Integrate, each line
of code is equivalent to a few thousand lines in other langs.
However, there is a catch that applies to any higher level lang:
namely, it is extremely easy to create a program that is very
inefficient.
This can typically be observed in a student's or beginner's lisp code.
The code may produce the right output, but be extremely inefficient
for lack of expertise with the language.
The phenomenon of creating inefficient code is proportional to the
highlevelness or power of the lang. In general, the higher level the
lang, the harder it is to produce code that is as efficient as a lower
level lang's. For example, the level or power of langs can be roughly
ordered as this:
assembly langs
C, pascal
C++, java, c#
unix shells
perl, python, ruby, php
lisp
Mathematica
the lower level the lang, the more programmer time it consumes, but
the faster the code runs. Higher level langs may or may not be
craftable to be as efficient. For example, code written at the level
of langs such as perl, python, ruby will never run as fast as C,
regardless of how expert the perler is. C code will never run as fast
as assembly. And if the task is crafting raytracing software, then
perl, python, ruby, lisp, Mathematica are simply not suitable, and are
not likely to produce any code as fast as C or Java.
On the other hand, many applications of higher level langs simply
cannot be done in lower level langs for various practical reasons. For
example, you can use Mathematica to solve some physics problem in a
few hours, or give Pi to gazillion digits in a few seconds with just
“N[Pi,10000000000000]”. Sure, you could code a solution in lisp, perl,
or even C, but that means a few years of man hours. Similarly, you can
do text processing in C or Java, but perl, python, ruby, php, emacs
lisp, Mathematica can reduce your coding effort to 10% or 1% of the
man hours.
In the above, i left out functional langs that are roughly statically
typed and compiled, such as Haskell, OCaml, etc. I do not have
experience with these langs. I suppose they do maintain some of the
low level langs' speed advantage, yet have high level constructs.
Thus, for computationally intensive tasks such as writing a raytracer,
they may compete with C, Java in speed, yet be easier to write, with
fewer lines of code.
personally, i've made some effort to study Haskell but never went thru
with it. In my experience, i find langs that are (roughly speaking)
strongly typed difficult to learn and use. (i have reading knowledge
of C and working knowledge of Java, but am never good with Java. The
verbosity in Java turns me off thoroughly.)
-----------------
as to how fast Mathematica can be in the raytracing toy code shown in
this thread, i've given sufficient demonstration that it can be sped
up significantly. Even though Mathematica is not suitable for this
task, i'm pretty sure i can bring the code's speed to roughly the
level of OCaml's.
(as opposed to someone's claim that it must be some 700000 times
slower, or some “5 orders of magnitude slower”). However, to do so
will take me half a day or a day of coding. Come fly $300 to my paypal
account, then we'll talk. Money back guaranteed, as i said before.
Xah
∑ http://xahlee.org/
☄
Note that this program takes several days to compute in Mathematica (even
though it takes under four seconds in other languages) so don't expect to
see a genuinely optimized version any time soon... ;-)
You changed the scene that is being rendered => your speedup is bogus!
Trace the scene I originally gave and you will see that your program is no
faster than mine was.
In that article you say:
> Further, if Intersect is made to take a flat sequence of argument as
> in “Intersect[o_, d_, lambda_, n_, c_, r_, s_]”, then pattern matching can
> be avoided by making it into a pure function “Function”. And when it is
> a “Function”, then Intersect or part of it may be compiled with Compile.
> When the code is compiled, the speed should be a order of magnitude
> faster.
That is incorrect. Mathematica's Compile function cannot handle recursive
functions like Intersect. For example:
In[1]:= Compile[{n_, _Integer}, If[# == 0, 1, #0[[# n - 1]] #1] &[n]]
During evaluation of In[1]:= Compile::fun: Compilation of
(If[#1==0,1,#0[[#1 n-1]] #1]&)[Compile`FunctionVariable$435] cannot
proceed. It is not possible to compile pure functions with arguments
that represent the function itself. >>
> The phenomenon of creating code that are inefficient is proportional
> to the highlevelness or power of the lang. In general, the higher
> level of the lang, the less possible it is actually to produce a code
> that is as efficient as a lower level lang. For example, the level or
> power of lang can be roughly order as this:
> assembly langs
> C, pascal
> C++, java, c#
> unix shells
> perl, python, ruby, php
> lisp
> Mathematica
This is untrue. Common Lisp native-code compilers are orders of
magnitude faster than those of scripting languages such as Perl or Ruby.
In particular, creating an efficient Ruby implementation might prove
challenging - the language defines lexical bindings as modifiable at
runtime, arithmetic operations as requiring a dynamic method dispatch
etc.
FUT ignored.
--
The great peril of our existence lies in the fact that our diet consists
entirely of souls. -- Inuit saying
> That is incorrect. Mathematica's Compile function cannot handle recursive
> functions like Intersect.
i didn't claim it can. You can't expect to have a fast or good program
if you code Java style in a functional lang.
Similarly, if you want code to run fast in Mathematica, you don't just
slap your OCaml code into Mathematica syntax and expect it to run
fast.
If you are a Mathematica expert, you can make it recurse yet have
speed comparable to other langs: first, by changing your function's
form to avoid pattern matching, and by rewriting your bad recursion.
That is what i claimed in the above paragraph. Read it again and see.
> For example:
> In[1]:= Compile[{n_, _Integer}, If[# == 0, 1, #0[[# n - 1]] #1] &[n]]
>
> During evaluation of In[1]:= Compile::fun: Compilation of
> (If[#1==0,1,#0[[#1 n-1]] #1]&)[Compile`FunctionVariable$435] cannot
> proceed. It is not possible to compile pure functions with arguments
> that represent the function itself. >>
Mathematica's Compile function is intended to speed up numerical
computation. To want Compile to handle recursion, in the context of
Mathematica's programing features, is not something possible even in a
theoretical sense.
Scheme lisp implementations can compile recursive code, but lisp is a
lower level lang than Mathematica, where perhaps the majority of
Mathematica's builtin functions equate to 10 or more lines of lisp,
and any of its hundreds of math functions equates to entire libraries
in other langs. It is understandable, but silly, to expect
Mathematica's Compile function to compile arbitrary Mathematica code.
Perhaps in a future version of Mathematica, its Compile function can
handle basic recursive forms.
Also, in this discussion, thanks to the $20 Thomas M Hermann offered
me for my challenge to you, i have taken the time to show working code
that demonstrates many problems in your code. Unless you think my code
and replies to you are totally without merit or fairness, you should
acknowledge it, in whole or in the parts you agree with, in an honest
and wholehearted way, instead of pushing on with petty verbal fights.
Xah
∑ http://xahlee.org/
☄
«...
The phenomenon of creating code that are inefficient is proportional
to the highlevelness or power of the lang. In general, the higher
level of the lang, the less possible it is actually to produce a code
that is as efficient as a lower level lang. For example, the level or
power of lang can be roughly order as this:
assembly langs
C, pascal
C++, java, c#
unix shells
perl, python, ruby, php
lisp
Mathematica
...
»
Moron Stanisław Halik wrote:
> This is untrue. Common Lisp native-code compilers are orders of
> magnitude faster than those of scripting languages such as Perl or Ruby.
Learn to read articles and discuss them in whole, as opposed to
nitpicking on particulars so that your favorite lang looks good.
Xah
∑ http://xahlee.org/
☄
You failed the challenge that you were given. Specifically, your code is not
measurably faster on the problem that I set. Moreover, you continued to
write as if you had not failed and, worse, went on to give even more awful
advice as if your credibility had not just been destroyed.
> If you are a Mathematica expert, you could make it recurse yet have
> the speed as other langs.
No, you cannot. That is precisely why you just failed this challenge.
You should accept the fact that Mathematica currently has these
insurmountable limitations.
Xah Lee wrote:
> > Also, in this discussion, thanks to Thomas M Hermann's $20 offered to
> > me for my challenge to you, that i have taken the time to show working
> > code that demonstrate many problems in your code.
A moron, wrote:
> You failed the challenge that you were given.
you didn't give me a challenge. I gave you one. I asked for a $5
sincerity wager of mutual payment, with money back guarantee, so that
we could show real code instead of verbal fighting. You didn't take it
and did nothing but continue the petty quarrel over words. Thomas was
nice enough to pay me, which resulted in my code that is demonstrably
faster than yours. (verified by a post from “jason-sage @@@
creativetrax.com”, quote: “So Xah's code is about twice as fast as
Jon's code, on my computer.”, message can be seen at
“ http://www.gossamer-threads.com/lists/python/python/698196?do=post_view_threaded#698196
” ) You refuse to acknowledge it, and continue babbling, insisting
that my code would have to be some hundred times faster to count.
As i said, now pay me $300, and i will then bring your Mathematica
code to the same level of speed as your OCaml. If it does not, money
back guaranteed. Here are the more precise terms i ask:
Show me your OCaml code that will compile on my machine (PPC Mac, OSX
10.4.x). I'll bring your Mathematica code to the same speed level as
your OCaml code. (you claimed Mathematica is roughly 700 thousand
times slower than your OCaml code. I claim i can make it no more than
10 times slower than the given OCaml code.)
So, pay me $300 as a consulting fee. If the result does not comply
with the above spec, money back guaranteed.
> You should accept the fact that Mathematica currently has these
> insurmountable limitations.
insurmountable ur mom.
Xah
∑ http://xahlee.org/
☄
>The phenomenon of creating code that are inefficient is proportional
>to the highlevelness or power of the lang. In general, the higher
>level of the lang, the less possible it is actually to produce a code
>that is as efficient as a lower level lang.
This depends on whether someone has taken the time to create a high
quality optimizing compiler.
>For example, the level or power of lang can be roughly order as
>this:
>
>assembly langs
>C, pascal
>C++, java, c#
>unix shells
>perl, python, ruby, php
>lisp
>Mathematica
According to what "power" estimation? Assembly, C/C++, C#, Pascal,
Java, Python, Ruby and Lisp are all Turing Complete. I don't know
offhand whether Mathematica is also TC, but if it is then it is at
most equally powerful.
Grammatic complexity is not exactly orthogonal to expressive power,
but it is mostly so. Lisp's SEXPRs are an existence proof that a
Turing powerful language can have a very simple grammar. And while a
2D symbolic equation editor may be easier to use than spelling out the
elements of an equation in a linear textual form, it is not in any
real sense "more powerful".
>the lower level the lang, the longer it consumes programer's time, but
>faster the code runs. Higher level langs may or may not be crafted to
>be as efficient. For example, code written in the level of langs such
>as perl, python, ruby, will never run as fast as C, regardless what
>expert a perler is.
There is no language level reason that Perl could not run as fast as C
... it's just that no one has cared to implement it.
>C code will never run as fast as assembler langs.
For a large function with many variables and/or subcalls, a good C
compiler will almost always beat an assembler programmer by sheer
brute force - no matter how good the programmer is. I suspect the
same is true for most HLLs that have good optimizing compilers.
I've spent years doing hard real time programming and I am an expert
in C and a number of assembly languages. It is (and has been for a
long time) impractical to try to beat a good C compiler for a popular
chip by writing from scratch in assembly. It's not just that it takes
too long ... it's that most chips are simply too complex for a
programmer to keep all the instruction interaction details straight in
his/her head. Obviously results vary by programmer, but once a
function grows beyond 100 or so instructions, the compiler starts to
win consistently. By the time you've got 500 instructions (just a
medium sized C function) it's virtually impossible to beat the
compiler.
In functional languages where individual functions tend to be much
smaller, you'll still find very complex functions in the disassembly
that arose from composition, aggressive inlining, generic
specialization, inlined pattern matching, etc. Here an assembly
programmer can quite often match the compiler for a particular
function (because it is short), but overall will fail to match the
compiler in composition.
When maximum speed is necessary it's almost always best to start with
an HLL and then hand optimize your optimizing compiler's output.
Humans are quite often able to find additional optimizations in
assembly code that they could not have written as well overall in the
first place.
George
Xah Lee wrote:
> >The phenomenon of creating code that are inefficient is proportional
> >to the highlevelness or power of the lang. In general, the higher
> >level of the lang, the less possible it is actually to produce a code
> >that is as efficient as a lower level lang.
George Neuner wrote:
> This depends on whether someone has taken the time to create a high
> quality optimizing compiler.
try to read the sentence. I quote:
«The phenomenon of creating code that are inefficient is proportional
to the highlevelness or power of the lang. In general, the higher
level of the lang, the less possible it is actually to produce a code
that is as efficient as a lower level lang.»
Xah Lee wrote:
> >For example,
> >the level or power of lang can be roughly order as
> >this:
>
> >assembly langs
> >C, pascal
> >C++, java, c#
> >unix shells
> >perl, python, ruby, php
> >lisp
> >Mathematica
George wrote:
> According to what "power" estimation? Assembly, C/C++, C#, Pascal,
> Java, Python, Ruby and Lisp are all Turing Complete. I don't know
> offhand whether Mathematica is also TC, but if it is then it is at
> most equally powerful.
it's amazing that every tech geeker (aka idiot) wants to quote
“Turing Complete” at every chance. Even simple cellular automata,
such as Conway's Game of Life or rule 110, are complete.
http://en.wikipedia.org/wiki/Conway's_Game_of_Life
http://en.wikipedia.org/wiki/Rule_110
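For the record, rule 110's update rule really is that small. A one-step sketch in Python (the function name and array layout are mine), with the eight neighborhood patterns mapped per Wolfram's rule numbering:

```python
# One step of rule 110, the elementary cellular automaton cited above.
# Despite this eight-entry rule table, rule 110 is Turing complete.
RULE_110 = {(1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 1, (1, 0, 0): 0,
            (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0}

def rule110_step(cells):
    n = len(cells)
    # each cell's next state depends only on itself and its two
    # neighbors (wrapping around at the edges)
    return [RULE_110[(cells[(i - 1) % n], cells[i], cells[(i + 1) % n])]
            for i in range(n)]
```

Iterating `rule110_step` from a single live cell grows the familiar left-leaning triangle of activity.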
in fact, according to Stephen Wolfram's controversial thesis by the
name of “Principle of Computational Equivalence”, every goddamn thing
in nature is just about Turing complete. (just imagine, when you take
a piss, the stream of yellow fluid is actually doing Turing complete
computations!)
for a change, it'd be a far more interesting and effective show of
knowledge to cite langs that are not Turing complete.
the rest of your message went on stupidly about the Turing complete
point of view on a language's power, mixed with lisp fanaticism and
personal gripes about the merits and applicability of assembly vs
higher level langs.
It's fine to go on with your gripes, but be careful about using me as a
stepping stone.
Xah
∑ http://xahlee.org/
☄
Thomas gave you the challenge:
"What I want in return is you to execute and time Dr. Harrop's original
code, posting the results to this thread... By Dr. Harrop's original code,
I specifically mean the code he posted to this thread. I've pasted it below
for clarity.".
Thomas even quoted my code verbatim to make his requirements totally
unambiguous. Note the parameters [9, 512, 4] in the last line that he and I
both gave:
AbsoluteTiming[Export["image.pgm", Graphics@Raster@Main[9, 512, 4]]]
You have not posted timings of that, let alone optimized it. So you failed.
> I gave you. I asked for $5 sincerity
> wage of mutual payment or money back guarantee, so that we can show
> real code instead of verbal fight. You didn't take it and did nothing
> but continue petty quarrels on words.
Then where did you post timings of that exact code as Thomas requested?
> http://www.gossamer-threads.com/lists/python/python/698196?do=post_view_threaded#698196
> You refuse to acknowledge it, and continue babbling, emphasizing that
> my code should be some hundred times faster makes a valid argument.
That is not my code! Look at the last line where you define the scene:
Timing[Export["image.pgm",Graphics@Raster@Main[2,100,4.]]]
Those are not the parameters I gave you. Your program is running faster
because you changed the scene from over 80,000 spheres to only 5 spheres.
Look at your output image: it is completely wrong!
> As i said, now pay me $300, i will then make your Mathematica code run
> at the same level of speed as your OCaml. If it does not, money back
> guaranteed.
Your money back guarantee is worthless if you cannot even tell when you have
failed.
> Show me your OCaml code that will compile on my machine (PPC Mac, OS X
> 10.4.x).
The code is still on our site:
http://www.ffconsultancy.com/languages/ray_tracer/
OCaml, C++ and Scheme all take ~4s to ray trace the same scene.
> I'll make your Mathematica code run at the same speed level as
> your OCaml code. (You claimed Mathematica is roughly 700 thousand
> times slower than your OCaml code. I claim i can make it no more than
> 10 times slower than the given OCaml code.)
You have not even made it 10% faster, let alone 70,000x faster. Either
provide the goods or swallow the fact that you have been wrong all along.
> my machine (PPC Mac, OSX 10.4.x).
Well, that explains a great deal.
Actually, I suspect all these newsgroups are being trolled.
*LOL*
Did you just offer someone the exciting wager of ``your money back or nothing''?
No matter what probability we assign to the outcomes, the /upper bound/
on the expected income from the bet is at most zero dollars. Now that's not so
bad. Casino games and lotteries have that property too; the net gain is
negative.
But your game has no variability to suck someone in; the /maximum/ income from
any trial is that you break even, which is considered winning.
If you ever decide to open a casino, I suggest you stop playing with
Mathematica for a while, and spend a little more time with Statistica,
Probabilica, and especially Street-Smartica.
:)
|> So, pay me $300 as a consulting fee. If the result does not comply with
|> the above spec, money back guaranteed.
|
| Did you just offer someone the exciting wager of ``your money back or
| nothing?
No, I don't think he was offering a bet --- this sounded more like he
was charging for a service. The costs would cover the time he spent in
providing the service; except if the service was not satisfactory in
which case he'd refund the amount. (Actually the cost seems to be
calculated to dissuade any customer from engaging him in the first place,
so the conclusion from your analysis would still hold.)
--
Madhu
> The phenomenon of creating code that are inefficient is proportional
> to the highlevelness or power of the lang. In general, the higher
> level of the lang, the less possible it is actually to produce a code
> that is as efficient as a lower level lang. For example, the level or
> power of lang can be roughly order as this:
Yes, that's true, but your hierarchy sucks. Unix shells more powerful
than C? They're macro languages, ferchristsakes. You should also explain
what the high-level features of Mathematica are that inhibit optimization.
A math function library doesn't make a language more powerful. Take
Java, for instance: it has a large standard library, alright.
> assembly langs
> C, pascal
> C++, java, c#
> unix shells
> perl, python, ruby, php
> lisp
> Mathematica
In the context of single-threaded programs there is no excuse for
Mathematica being so slow: it adds no impediments beyond those found in
Lisp. The only reason Mathematica is so slow is that its only
implementation is a naive term rewriter that makes no attempt to use native
code.
However, in the context of parallelism on multicores everything changes.
Mathematica is built entirely around one giant global rewrite table. In
other words, all variables are global in Mathematica. Consequently, the
obvious implementation of any kind of shared-state parallelism will require
synchronization around every single read or write to any variable, which
would be cripplingly slow. Their solution has been to resort to distributed
parallelism but that is hugely inefficient (e.g. see Erlang) and renders
Mathematica even less suitable for general purpose programming on
multicores. There are more sophisticated alternatives that can work around
this problem but they would require a complete rewrite of the internals and
that is not feasible for business reasons (i.e. backward compatibility).
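The contention problem described above can be sketched in Python rather than Mathematica's actual internals (the global table and single lock here are an illustration of the argument, not Wolfram's implementation): if every variable lives in one global table, every read and write must pass through one lock, so threads serialize instead of running in parallel.

```python
import threading

# Illustration only: one global table guarded by one global lock,
# standing in for a "giant global rewrite table".
global_table = {}
table_lock = threading.Lock()

def set_var(name, value):
    with table_lock:            # every write contends on the same lock
        global_table[name] = value

def get_var(name):
    with table_lock:            # ...and so does every read
        return global_table[name]

def worker(i):
    # even though each worker touches only its "own" variable,
    # all of them serialize on the single table lock
    for n in range(1000):
        set_var("x%d" % i, n)
        get_var("x%d" % i)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

With per-variable (or lock-free) storage the workers would be independent; with one shared table the lock dominates, which is the shared-state cost the post is pointing at.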
The first parameter to your Main specifies some kinda recursively
stacked spheres in the rendered image. The second parameter is the
width and height, in pixels, of the rendered image.
I tried to run them, but my computer went to 100% cpu and after, i
recall, 5 min it was still going. So, i reduced your input. In the end,
with the reduced input, it shows my code is 5 times faster (running
Mathematica v4 on OS X 10.4.x with a PPC 1.9 GHz), and on the other
guy's computer with Mathematica 6 he says it's twice as fast.
Given your code's nature, it is reasonable to assume that with your
original input my code would still be faster than yours. You claim it
is not, or that it is perhaps just slightly faster?
It is possible you are right. I don't want to spend the energy to run
your code and my code and possibly hog my computer for hours or
perhaps days. As i said, your recursive Intersect is very badly
written Mathematica code. It might even start memory swapping.
Also, all you did is talk bullshit. Thomas is actually the one who took
my challenge to you and gave me $20 to prove my argument to YOU. His
requirement, after the payment, is actually, i quote:
«Alright, I've sent $20. The only reason I would request a refund is
if you don't do anything. As long as you improve the code as you've
described and post the results, I'll be satisfied. If the improvements
you've described don't result in better performance, that's OK.»
He hasn't posted since, nor emailed me. It is reasonable to assume he
is satisfied as far as his payment to me to see my code goes.
You kept on babbling. Now you say that the input is different. Fine.
How long does that input actually take on your computer? If days, i'm
sorry, i cannot run your toy code on my computer for days. If a few
hours, i can then run the code overnight, and if necessary, give you
another version that will be faster with your given input, to shut you
the fuck up.
However, there's a cost to me. What do i get for doing your homework?
It is possible that if i spend the energy and time to do this, you will
again refuse to acknowledge it, or keep on complaining about something
else.
You see, newsgroups are the bedrock of bullshit. You bullshit, he
bullshits, everybody brags and bullshits, because there is no stake. I
want sincerity and responsibility backed up with, for example, paypal
deposits. You kept on bullshitting; Thomas gave me $20 and i produced
code that reasonably demonstrated at least how unprofessional your
Mathematica code was.
Here's the deal. Pay me $20, then i'll create a version of Mathematica
code that has the same input as yours. Your input is Main[9, 512, 4];
as i have exposed, your use of an integer in the last part for numerical
computation is Mathematica incompetence. You didn't acknowledge even
this. I'll give a version of Mathematica with input Main[9, 512, 4.]
that will run faster than yours. If not, money back guaranteed. Also,
pay me $300, and i can produce a Mathematica version no more than 10
times slower than your OCaml code; this should be a 70000 times
improvement according to you. Again, money back guaranteed.
If i don't receive $20 or $300, this will be my last post to you in
this thread. You are just a bullshitter.
O wait... my code with Main[9, 512, 4.] and other numerical changes
already makes your program run faster regardless of the input size.
What a motherfucking bullshitter you are. Scratch the $20. The $300
challenge still stands firm.
Xah
∑ http://xahlee.org/
☄
Note that Jon's Mathematica code is of very poor quality, as i've
given detailed analysis here:
• A Mathematica Optimization Problem
http://xahlee.org/UnixResource_dir/writ/Mathematica_optimization.html
I'm not sure if he's intentionally making Mathematica look bad or just
sloppy. I presume it is sloppiness, since the Mathematica code is
not shown on his public website on this speed comparison issue (as
far as i know). I suppose he initially tried this draft version, saw
that it was too slow for comparison, and, probably among other reasons,
didn't include it in the speed comparison. However, in this thread
about Mathematica 7, he wanted to insert his random gripe to pave
roads to post his website books and url on OCaml/F#, so he took out
this piece of Mathematica to bad mouth it and bait. He ignored my
paypal challenge, but it so happens that someone else paid me $20 to
show better code, and in the showdown, Jon went defensive in a way
that just makes him look like a major idiot.
Xah
∑ http://xahlee.org/
☄
Actually, there's only one person here tainting Mathematica by
association, and it's not Jon.
>
> In lisp, python, perl, etc, you'll have 10 or so lines. In C or Java,
> you'll have 50 or hundreds lines.
Ruby:
def norm a
s = Math.sqrt(a.map{|x|x*x}.inject{|x,y|x+y})
a.map{|x| x/s}
end
In Common Lisp:
(defun normalize (list-or-vector)
(let ((l (sqrt (reduce #'+ (map 'list (lambda (x) (* x x)) list-or-vector)))))
(map (type-of list-or-vector) (lambda (x) (/ x l)) list-or-vector)))
As a bonus, this works with lists or vectors; it also works with
complex numbers.
Since this is Common Lisp, it is also possible to extend this (naive)
implementation so that it performs as much as possible at
compile-time, possibly replacing calls with the computed result.
Stick that in Mathematica's (and Ruby's) pipe and smoke it!
If I were to guess who that would be ...
> means, we want a function whose input is a list of 3 elements say
> {x,y,z}, and output is also a list of 3 elements, say {a,b,c}, with
> the condition that
>
> a = x/Sqrt[x^2+y^2+z^2]
> b = y/Sqrt[x^2+y^2+z^2]
> c = z/Sqrt[x^2+y^2+z^2]
>
> In lisp, python, perl, etc, you'll have 10 or so lines. In C or Java,
> you'll have 50 or hundreds lines.
Really? ``50 or hundreds'' of lines in C?
#include <math.h> /* for sqrt */
void normalize(double *out, double *in)
{
double denom = sqrt(in[0] * in[0] + in[1] * in[1] + in[2] * in[2]);
out[0] = in[0]/denom;
out[1] = in[1]/denom;
out[2] = in[2]/denom;
}
Doh?
Now try writing a device driver for your wireless LAN adapter in Mathematica.
Kaz Kylheku wrote:
> Really? ``50 or hundreds'' of lines in C?
>
> #include <math.h> /* for sqrt */
>
> void normalize(double *out, double *in)
> {
> double denom = sqrt(in[0] * in[0] + in[1] * in[1] + in[2] * in[2]);
>
> out[0] = in[0]/denom;
> out[1] = in[1]/denom;
> out[2] = in[2]/denom;
> }
>
> Doh?
Kaz, pay attention:
Xah wrote: «Note, that the “norm” as defined above works for vectors
of any dimention, i.e. list of any length.»
The essay on the example of Mathematica's expressiveness in defining
Normalize is now cleaned up and archived at:
• A Example of Mathematica's Expressiveness
http://xahlee.org/UnixResource_dir/writ/Mathematica_expressiveness.html
Xah
∑ http://xahlee.org/
☄
On Dec 10, 12:37 pm, w_a_x_...@yahoo.com wrote:
> Ruby:
>
> def norm a
> s = Math.sqrt(a.map{|x|x*x}.inject{|x,y|x+y})
> a.map{|x| x/s}
> end
I don't know ruby, but i tried to run it and it does not work.
#ruby
def norm a
s = Math.sqrt(a.map{|x|x*x}.inject{|x,y|x+y})
a.map{|x| x/s}
end
v = [3,4]
p norm(v) # returns [0.6, 0.8]
The correct result for that input would be 5.
Also note, i wrote: «Note, that the “norm” as defined above works for
vectors of any dimension, i.e. lists of any length.»
For detail, see:
That is the correct answer.
> The correct result for that input would be 5.
No, you're confusing normalization with length.
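The distinction being argued over can be sketched in Python (function names are mine): the Euclidean length of [3, 4] is 5, and normalization divides each component by that length, so [0.6, 0.8] is indeed the correct normalized vector.

```python
import math

def length(v):
    # Euclidean length (the 2-norm): a single scalar
    return math.sqrt(sum(x * x for x in v))

def normalize(v):
    # unit vector: same shape as the input, with length 1
    s = length(v)
    return [x / s for x in v]

length([3, 4])      # 5.0
normalize([3, 4])   # [0.6, 0.8]
```

Both functions work on a list of any dimension, matching the stated spec; only `normalize` is what the Ruby `norm` above computes.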
Actually you can do that in Mathematica as well. Lisp is basically
Mathematica without the maths...
C:
#include <stdlib.h>
#include <math.h>
void normal(int dim, float* x, float* a) {
float sum = 0.0f;
int i;
float divisor;
for (i = 0; i < dim; ++i) sum += x[i] * x[i];
divisor = sqrt(sum);
for (i = 0; i < dim; ++i) a[i] = x[i]/divisor;
}
Java:
static float[] normal(final float[] x) {
float sum = 0.0f;
for (int i = 0; i < x.length; ++i) sum += x[i] * x[i];
final float divisor = (float) Math.sqrt(sum);
float[] a = new float[x.length];
for (int i = 0; i < x.length; ++i) a[i] = x[i]/divisor;
return a;
}
--
John W. Kennedy
"Never try to take over the international economy based on a radical
feminist agenda if you're not sure your leader isn't a transvestite."
-- David Misch: "She-Spies", "While You Were Out"
That is still only 6 lines of C code and not 50 as you claimed:
double il = 0.0;
for (int i=0; i<n; ++i)
il += in[i] * in[i];
il = 1.0 / sqrt(il);
for (int i=0; i<n; ++i)
out[i] = il * in[i];
Try computing the Fourier transform of:
0.007 + 0.01 I, -0.002 - 0.0024 I
Not that it matters, but the above requires C99 (or C++).
Arne
q){x%sqrt sum x}3 4
0.6 0.8
Oops. I meant to write {x%sqrt sum x*x}3 4
> Kaz, pay attention:
[ reformatted to 7 bit USASCII ]
> Xah wrote: Note, that the norm
> of any dimention, i.e. list of any length.
It was coded to the above requirements.
Thanks for the various replies.
I've now gathered code solutions in ruby, python, C, Java, here:
• A Example of Mathematica's Expressiveness
http://xahlee.org/UnixResource_dir/writ/Mathematica_expressiveness.html
Now lacking are perl and elisp, which i can do well in a condensed way.
It'd be interesting also to have javascript... and perhaps erlang,
OCaml/F#, Haskell too.
Xah
∑ http://xahlee.org/
☄
> Xah Lee wrote:
> > On Dec 10, 12:37 pm, w_a_x_...@yahoo.com wrote:
> >> Ruby:
> > >
> >> def norm a
> >> s = Math.sqrt(a.map{|x|x*x}.inject{|x,y|x+y})
> >> a.map{|x| x/s}
> >> end
> >
> > I don't know ruby, but i tried to run it and it does not work.
> >
> > #ruby
> > def norm a
> > s = Math.sqrt(a.map{|x|x*x}.inject{|x,y|x+y})
> > a.map{|x| x/s}
> > end
> >
> > v = [3,4]
> >
> > p norm(v) # returns [0.6, 0.8]
>
> That is the correct answer.
>
> > The correct result for that input would be 5.
>
> No, you're confusing normalization with length.
Expanded for easier comprehension.
def norm a
# Replace each number with its square.
b = a.map{|x| x*x }
# Sum the squares. (inject is reduce or fold)
c = b.inject{|x,y| x + y }
# Take the square root of the sum.
s = Math.sqrt( c )
# Divide each number in original list by the square root.
a.map{|x| x/s }
end
1.upto(4){|i|
a = (1..i).to_a
p a
p norm( a )
}
--- output ---
[1]
[1.0]
[1, 2]
[0.447213595499958, 0.894427190999916]
[1, 2, 3]
[0.267261241912424, 0.534522483824849, 0.801783725737273]
[1, 2, 3, 4]
[0.182574185835055, 0.365148371670111, 0.547722557505166,
0.730296743340221]
Pay me $600 for my time and I'll even throw in an Algol-68
version. :-)
void normalise(float d[], float v[]){
float m = sqrt(v[0]*v[0] + v[1]*v[1] + v[2]*v[2]);
d[0] = v[0]/m; // My guess is Xah Lee
d[1] = v[1]/m; // hasn't touched C
d[2] = v[2]/m; // for near to an eternitee
}
> Xah Lee wrote:
> > In lisp, python, perl, etc, you'll have 10 or so lines. In C or
> > Java, you'll have 50 or hundreds lines.
>
> Java:
>
> static float[] normal(final float[] x) {
> float sum = 0.0f;
> for (int i = 0; i < x.length; ++i) sum += x[i] * x[i];
> final float divisor = (float) Math.sqrt(sum);
> float[] a = new float[x.length];
> for (int i = 0; i < x.length; ++i) a[i] = x[i]/divisor;
> return a;
> }
"We don't need no stinkin' loops!"
SpiderMonkey Javascript:
function normal( ary )
{ div=Math.sqrt(ary.map(function(x) x*x).reduce(function(a,b) a+b))
return ary.map(function(x) x/div)
}
i don't have experience coding C. The code above doesn't seem to
satisfy the spec. The input should be just a vector, array, list, or
whatever the lang supports.
The output should be the same datatype, of the same dimension.
Xah
∑ http://xahlee.org/
☄
Perl:
sub normal
{
my $sum = 0;
$sum += $_ ** 2 for @_;
my $length = sqrt($sum);
return map { $_/$length } @_;
}
--
Jim Gibson
The output is in the preallocated argument "a". It is the same type (float
*) and has the same dimension. That is idiomatic C.
You could define a struct type representing a vector that includes its
length and data (akin to std::vector<..> in C++) but it would still be
nowhere near 50 LOC as you claimed.
>Now try writing a device driver for your wireless LAN adapter in Mathematica.
Notice how Xah chose not to respond to this.
George
>On Dec 10, 2:47 pm, John W Kennedy <jwke...@attglobal.net> wrote:
>> Xah Lee wrote:
>> > In lisp, python, perl, etc, you'll have 10 or so lines. In C or Java,
>> > you'll have 50 or hundreds lines.
>>
>> C:
>>
>> #include <stdlib.h>
>> #include <math.h>
>>
>> void normal(int dim, float* x, float* a) {
>> float sum = 0.0f;
>> int i;
>> float divisor;
>> for (i = 0; i < dim; ++i) sum += x[i] * x[i];
>> divisor = sqrt(sum);
>> for (i = 0; i < dim; ++i) a[i] = x[i]/divisor;
>>
>> }
>
>i don't have experience coding C.
Then why do you talk about it as if you know something?
>The code above doesn't seems to satisfy the spec.
It does.
>The input should be just a vector, array, list, or
>whatever the lang supports. The output is the same
>datatype of the same dimension.
C's native arrays are stored contiguously. Multidimensional arrays
can be accessed as a vector of length (dim1 * dim2 * ... * dimN).
This code handles arrays of any dimensionality. The poorly named
argument 'dim' specifies the total number of elements in the array.
George
For inspiration, here is some old Lisp driver code for an old
3com network card (Ethernet, not WLAN):
http://jrm-code-project.googlecode.com/svn/trunk/lambda/network/drivers/3com.lisp
>Dear George Neuner,
>
>Xah Lee wrote:
>> >For example,
>> >the level or power of lang can be roughly order as
>> >this:
>>
>> >assembly langs
>> >C, pascal
>> >C++, java, c#
>> >unix shells
>> >perl, python, ruby, php
>> >lisp
>> >Mathematica
>
>George wrote:
>> According to what "power" estimation? Assembly, C/C++, C#, Pascal,
>> Java, Python, Ruby and Lisp are all Turing Complete. I don't know
>> offhand whether Mathematica is also TC, but if it is then it is at
>> most equally powerful.
>
>it's amazing that every tech geekers (aka idiots) want to quote
>“Turing Complete” in every chance. Even a simple cellular automata,
>such as Conway's game of life or rule 110, are complete.
>
>http://en.wikipedia.org/wiki/Conway's_Game_of_Life
>http://en.wikipedia.org/wiki/Rule_110
>
>in fact, according to Stephen Wolfram's controversial thesis by the
>name of “Principle of computational equivalence”, every goddamn thing
>in nature is just about turing complete. (just imagine, when you take
>a piss, the stream of yellow fluid is actually doing Turing complete
>computations!)
Wolfram's thesis does not make the case that everything is somehow
doing computation.
>for a change, it'd be far more interesting and effective knowledge
>showoff to cite langs that are not so-called fuck of the turing
>complete.
We geek idiots cite Turing because it is an important measure of a
language. There are plenty of languages which are not complete. That
you completely disregard a fundamental truth of computing is
disturbing.
>the rest of you message went on stupidly on the turing complete point
>of view on language's power, mixed with lisp fanaticism, and personal
>gribes about merits and applicability assembly vs higher level langs.
You don't seem to understand the difference between leverage and power
and that disturbs all the geeks here who do. We worry that newbies
might actually listen to your ridiculous ramblings and be led away
from the truth.
>It's fine to go on with your gribes, but be careful in using me as a
>stepping stone.
Xah, if I wanted to step on you I would do it with combat boots. You
should be thankful that you live 3000 miles away and I don't care
enough about your petty name calling to come looking for you. If you
insult people in person like you do on usenet then I'm amazed that
you've lived this long.
George
Only if the length in each dimension is known at compile time (or
in C99, if this is an automatic array). When this is not the case,
you may have to implement something like the following (not the only
way, just one way):
float** new_matrix(int rows, int cols) {
float** m = malloc(sizeof(float*)*rows);
int i;
for (i = 0; i < rows; i++)
m[i] = malloc(sizeof(float)*cols);
return m;
}
In this case normal() fails since matrix m is not in a single
contiguous area.
But I suspect Xah is complaining because the function doesn't
*return* a value of the same type; instead you have to pass in
the result vector. But such is life if you code in C!
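The layout difference behind this exchange can be sketched in Python (names are mine): a C 2-D array is one contiguous row-major block, so the earlier `normal()` can treat it as one long vector, whereas the malloc'd float** above is a collection of separately allocated rows.

```python
import math

def flat_index(row, col, ncols):
    # C row-major addressing: element [row][col] of a contiguous array
    return row * ncols + col

def normalize_flat(m):
    # treat the whole contiguous block as one long vector,
    # exactly as the C normal() above does with dim = rows*cols
    s = math.sqrt(sum(x * x for x in m))
    return [x / s for x in m]

# a 2x2 matrix stored contiguously, row-major
m = [3.0, 0.0,
     0.0, 4.0]
assert m[flat_index(1, 1, 2)] == 4.0
```

A list of lists (the float** analogue) has no such single flat view, which is exactly why `normal()` fails on the row-by-row allocation.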
Heh. This looks a *lot* like the user-mode hardware bringup/debugging
code I was writing in CMUCL during the last few years (for a now-PPoE). ;-}
Lots of bit- & byte-field definitions, peek & poke stuff, utilities
to encode/pack/unpack/decode hardware register fields from/to readable
symbols, etc. The main obvious difference I noticed was that instead
of using SYS:%NUBUS-READ & SYS:%NUBUS-WRITE to peek/poke at the
hardware, my code did an MMAP of "/dev/mem" and then used CMUCL's
SYSTEM:SAP-REF-{8,16,32} and SETFs of same [wrapped within suitable
syntactic sugar, of course]. Fun stuff!
-Rob
-----
Rob Warnock <rp...@rpw3.org>
627 26th Avenue <URL:http://rpw3.org/>
San Mateo, CA 94403 (650)572-2607
I think you want to throw a (conjugate x) in there for it to give you
the correct answer for complex numbers...