I think the point was to take two programs of similar code complexity
(or, rather, simplicity) implementing the same algorithm (modulo
parallelism).
So I'm not sure what exactly you're objecting to.
If you're saying that there are better algorithms to compute Fibonacci
numbers, that's not relevant: the important thing is that the two
programs are computing the same thing in the same way.
If you're saying that in C an explicit stack should have been used
instead of recursion, then it would increase the code complexity while
having non-obvious performance benefits.
In any case, assuming that on this particular task Haskell is x times
slower than C, taking sufficiently many cores and a large enough N
would outweigh that.
Then again, I don't think that article was meant as a fair comparison
between Haskell and C on an average task. (The chosen example consists of
one loop which is easily parallelisable.) All it demonstrates is that
it is very easy to exploit parallelism in Haskell when there's an
opportunity.
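For concreteness, the kind of program pair being compared can be sketched like this. It's a hedged illustration, not the article's actual code: the names `nfib`/`pfib`, the cut-off of 25, and the forkIO/MVar approach are mine (the article presumably uses `par` from the parallel package), but the algorithm is the naive doubly recursive one from the C comparison:

```haskell
import Control.Concurrent (forkIO, newEmptyMVar, putMVar, takeMVar)

-- Naive doubly recursive Fibonacci -- the same algorithm as the C version.
nfib :: Int -> Integer
nfib n
  | n < 2     = fromIntegral n
  | otherwise = nfib (n - 1) + nfib (n - 2)

-- Evaluate the two recursive calls on separate Haskell threads.
-- Compile with -threaded and run with +RTS -N to use several cores.
pfib :: Int -> IO Integer
pfib n
  | n < 25    = return (nfib n)   -- cut-off: don't fork tiny subproblems
  | otherwise = do
      v <- newEmptyMVar
      _ <- forkIO (putMVar v $! nfib (n - 1))
      r <- return $! nfib (n - 2) -- force this half before blocking
      l <- takeMVar v
      return (l + r)

main :: IO ()
main = print =<< pfib 30
```

The point of the comparison survives the details: the sequential and parallel versions share one recursive kernel, and only the forking wrapper changes.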
--
Roman I. Cheplyaka :: http://ro-che.info/
_______________________________________________
through the trouble of writing my algorithms in C/C++, but simple-minded
people often have a desire to get the best performance possible, in
which case you really want to use C, C++, Fortran or whatever high level
assembler language you like.
As a community I think we have to face the fact that writing the hot inner
loop of your application as idiomatic Haskell is not [yet] going to give you
C/Fortran performance off the bat. Though in some cases there's not really
anything stopping us but more backend/codegen work (I'm thinking of
arithmetically intensive loops with scalars only).

For example, the following Mandel kernel is in many ways the *same* as the C
version: we have the types; we've got strictness (for this loop); but the C
version was 6X faster when I tested it.
With a few years of Haskell experience in my backpack I know how to utilize laziness to get amazing performance for code that most people would feel must be written with destructively updating loop.
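One classic instance of that idea (my illustration, not the poster's actual code): the memoized Fibonacci stream, where a lazily self-referential list takes the place of the destructively updated accumulator loop you'd write in C:

```haskell
-- The infinite list of Fibonacci numbers, defined in terms of itself.
-- Lazy evaluation fills it in on demand: each element is computed once
-- and shared, so indexing is linear -- no mutable array required.
fibs :: [Integer]
fibs = 0 : 1 : zipWith (+) fibs (tail fibs)

main :: IO ()
main = print (fibs !! 50)
```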
Well, if it's "in many ways the same as C", then again it's probably not
idiomatic Haskell.
    {-# LANGUAGE BangPatterns #-}
    import Data.Complex

    mandel :: Int -> Complex Double -> Int
    mandel max_depth c = loop 0 0
      where
        loop i !z
          | i == max_depth     = i
          | magnitude z >= 2.0 = i
          | otherwise          = loop (i+1) (z*z + c)
But, anyway, it turns out that my example above is easily transformed from a
bad GHC performance story into a good one. If you'll bear with me, I'll show
how below.

First, Manuel makes a good point about the LLVM backend. My "6X" anecdote was
from a while ago and I didn't use llvm [1]. I redid it just now with
7.4.1+LLVM, results below. (The table should read correctly in a fixed-width
font, but you can also see the data in the spreadsheet here.)

                            Time (ms)  Compiled file size  Compile+runtime (ms)
    GHC 7.4.1 O0                 2444               1241K
    GHC 7.4.1 O2                  925               1132K                  1561
    GHC 7.4.1 O2 llvm             931               1133K
    GHC 7.0.4 O2 via-C            684                974K

So LLVM didn't help [1]. And in fact the deprecated via-C backend did the best!
[1] P.P.S. Most concerning to me about Haskell/C++ comparisons are David Peixotto's findings that LLVM optimizations are not very effective on Haskell-generated LLVM compared with typical clang-generated LLVM.
1) Do Haskell and its libraries need performance improvements? Probably yes. Some of the performance issues seem to be caused by the way the language is implemented, and others by how it is defined. Developers really do run into performance issues with Haskell and either learn to work around them or try to fix the offending implementation. The wiki performance page gives insight into some of the performance issues and how to address them.
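As a concrete illustration of the "learn to work around it" pattern (an example of mine, not one taken from the wiki page): replacing the lazy foldl with the strict foldl' is probably the single most common such fix:

```haskell
import Data.List (foldl')

-- foldl (+) 0 over a long list builds a huge chain of unevaluated
-- thunks before anything is forced; the strict foldl' forces the
-- accumulator at every step and runs in constant space.
sumStrict :: [Int] -> Int
sumStrict = foldl' (+) 0

main :: IO ()
main = print (sumStrict [1 .. 1000000])
```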
Also interesting is that in all my interviews, GHC performance was never raised. No one said "I have to drop into C to solve that performance problem."
Gregg
>> The profiler is certainly useful (and much better with GHC 7.4)
>
> What are the improvements in that matter? (I just noticed that some GHC
> flags wrt profiling have been renamed)
>
The executive summary can be found in the release notes[1]. There was
also a talk I remember watching a while ago which gave a pretty nice
overview. I can't recall exactly, but it might have been this[2]. Lastly,
profiling now works with multiple capabilities[3].
Cheers,
- Ben
[1] http://www.haskell.org/ghc/docs/7.4.1/html/users_guide/release-7-4-1.html
[2] http://www.youtube.com/watch?v=QBFtnkb2Erg
[3] https://plus.google.com/107890464054636586545/posts/hdJAVufhKrD
It depends on what you use the code for. If an overnight report for the
trading book of my bank takes 16 hours to run, that is not acceptable; it is
a disaster. If it runs in 8h, it's OK-ish. In business settings you often
have strict deadlines, and optimizing the code to be 2x faster can make a
huge difference: it changes missing the deadline into meeting it.
The unconditional desire for maximum possible object code performance is usually very stupid, not to mention impossible to reach
with any high level language and any multi-tasking operating system.
Haskell's average penalty compared to C is
no reason to write the entire application in C.
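When the penalty does matter for one routine, the usual compromise is to bind just that routine through the FFI rather than rewrite the whole application in C. A minimal sketch, using C's sqrt from libm as a stand-in for "the hot function written in C" (the binding mechanism is standard; the framing is my illustration):

```haskell
{-# LANGUAGE ForeignFunctionInterface #-}

-- Bind a single C function; the rest of the program stays Haskell.
foreign import ccall unsafe "math.h sqrt"
  c_sqrt :: Double -> Double

main :: IO ()
main = print (c_sqrt 2.0)
```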
"Past century"? Insults, is it?
> Do bear in mind that Java doesn't optimize --- that's the JIT's job

What are we supposed to make of that?
Why write that and not "Do bear in mind that Smalltalk doesn't optimize; that's the JIT's job", or "Do bear in mind that C doesn't optimize; that's the compiler's job"?