On 18.04.2014 20:56, F. B. wrote:
>
> C/C++ were conceived in the 80's and have remained thus far the most
> optimized high-level languages.
Well, for a given definition of "most optimized" and "high-level"...
> Languages that appeared in the 90's are
> either scripted (like Python) or rely on a virtual machine (like Java).
No.
In fact, interesting numeric work is being done in OCaml and F#.
I'm under the impression that your views are biased by your personal
experience with a specific ecosystem.
This does not invalidate them, but they need to be validated by people
with other perspectives - at least as long as you're making broad,
sweeping statements that generalize over all languages.
> This is comprehensible: writing a compiler to machine code is a really hard
> task, even the translation to assembler alone is hard. Since the LLVM
> appeared, it is now much faster to write compilers, and lots of new
> programming languages are appearing based on the LLVM. I think that LLVM is
> a good infrastructure to allow new programming languages to appear.
Agreed.
> Julia's types are like those of C, multiple dispatching is done at
> compile-time rather than at run-time whenever it is possible,
This is actually the first optimization that all OO and functional
languages introduce.
It's massively harder to do in languages which do not give you a
reliable guarantee whether a given data structure is updatable in place
or not.
> for this I
> see no disadvantage over C in terms of performance.
This is a sweeping overgeneralized statement again - benchmarks or it
didn't happen.
From the compiler folks, I hear that the difference is how polymorphic
your code actually is. Lots of polymorphism -> fewer optimization
opportunities.
The SmallEiffel folks reported this optimization to apply to roughly 90%
of call sites in their SmallEiffel compiler. They did not measure the
percentage of call executions, unfortunately.
The Waterloo of this optimization would be a tight loop where the
compiler happened to not know enough about the potential types to
de-polymorphize a call.
Polymorphic calls prevent inlining, which in turn prevents most
optimizations. It's the reason why the C++ folks have sunk so much time
into doing type flow analysis.
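A rough analogy in Python (not Julia or C - the class and both loop
functions below are invented purely to illustrate the mechanism): when
the receiver's type is known, the method lookup can be hoisted out of
the loop once, which mimics what a compiler does when it manages to
de-polymorphize a call site.

```python
class Circle:
    """Toy shape type, made up for this sketch."""
    def __init__(self, r):
        self.r = r

    def area(self):
        return 3.14159 * self.r * self.r


shapes = [Circle(1.0)] * 1000


def polymorphic(shapes):
    # Each iteration re-resolves .area dynamically - the call site
    # stays polymorphic, so nothing can be inlined or specialized.
    total = 0.0
    for s in shapes:
        total += s.area()
    return total


def devirtualized(shapes):
    # If the type is provably Circle, the lookup happens once and the
    # loop body becomes a direct call - the precondition for inlining.
    area = Circle.area
    total = 0.0
    for s in shapes:
        total += area(s)
    return total
```

Both functions compute the same result; the difference is only in how
often the dispatch decision is made - once per call versus once per
loop, which is exactly what the "tight loop with unknown types"
worst case is about.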
> The problem I see is
> metaprogramming, Julia's code can be loaded as a data structure, and be
> recompiled at run-time. I fear this capability introduces some performance
> disadvantages over C, (despite being very useful), as the code has to keep
> some track of the original source code, but I'm not expert enough about
> compilation processes to judge that.
From my compilation background, I'd say:
- It's hard to get right because it's difficult to determine which
differences will affect the compiled binary and which don't.
- If it's done right, and recompilation happens just a few dozen
times, the performance effect is negligible.
- Exception: If the run-time compiler spends a lot of time optimizing,
then each reloaded piece of code may require more of that time. That's
why run-time compilers usually do less optimization than ahead-of-time
compilers.
- This also means that you usually have less optimization for
just-in-time compilers.
- The advantage of just-in-time compilers is that they can optimize for
the processor architecture at hand, which ahead-of-time compilers
usually can't or don't.
I.e. you essentially have to do benchmarks to see which of the various
counteracting effects dominates.
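For what it's worth, Python has a small-scale analogue of "code loaded
as a data structure and recompiled at run time" (the source string and
the function name `f` below are invented for illustration):

```python
import ast

# Source code loaded as text, then as a data structure (an AST).
source = "def f(x):\n    return x + 1"
tree = ast.parse(source)

# The tree can be inspected or rewritten before (re)compilation;
# here we simply recompile it at run time and execute the result.
namespace = {}
exec(compile(tree, "<generated>", "exec"), namespace)

print(namespace["f"](41))  # → 42
```

This is also where the "keep track of the original source" cost shows
up: the AST retains enough structure to regenerate and recompile the
code, which is memory and bookkeeping an ahead-of-time compiler can
throw away after emitting the binary.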
> In any case, before starting a translation into Julia I would wait until:
>
> - someone introduces a clear way to call Julia code from Python
> (possibly through static compilation).
> - abstract types in Julia can have fields.
> - multiple inheritance support in Julia.
> - dot and __call__ operator overloading.
Now that's quite a list of things.