It seems like the goal of this is to be a higher-level replacement for
OpenCL and CUDA, which is certainly welcome. I imagine it is the same
kind of thing one usually sees in niche HPC languages, i.e. defining a
limited domain of programs and data and aiming for peak flops within
that domain. (The process often culminates in adding an object system,
at which point everybody starts to wonder what went wrong.)
Julia "gets it wrong" by NOT doing this. Instead we start with the
workflow of MATLAB/Python/R, and try to provide it within a richer
compiler infrastructure that we hope will eventually allow building up
pretty much anything instead of needing to "call out". Another way to
see it is to look at the huge effort that goes into building something
like NumPy, Star-P, pMatlab, or AccelerEyes' Jacket. The goal of Julia
is to make building functionally-similar platforms to those much
easier by moving work from people to a compiler. And the work we
move is not just the typical Fortran-compiler-style optimizations, but
also handling the prevailing dynamic-dispatch environment, so one
does not have to keep inventing ad-hoc type descriptors and dispatch
mechanisms to select operations, and code is written in Julia rather
than in C against a "Julia API".
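To make the "ad-hoc type descriptors and dispatch mechanisms" point concrete, here is a hypothetical Python sketch of the pattern that array layers built on dynamic languages tend to reinvent: values carry homemade type tags, and operations are selected by looking tags up in a kernel table. All names here are illustrative, not taken from any real library.

```python
# Homemade type tags -- the "ad-hoc type descriptors".
INT32, FLOAT64 = "int32", "float64"

# Kernel table: (operation, tag) -> implementation.
# This is the hand-rolled dispatch mechanism.
_KERNELS = {
    ("add", INT32):   lambda a, b: [x + y for x, y in zip(a, b)],
    ("add", FLOAT64): lambda a, b: [x + y for x, y in zip(a, b)],
}

class TaggedArray:
    """A value that drags its own type descriptor around."""
    def __init__(self, tag, data):
        self.tag, self.data = tag, data

def dispatch(op, a, b):
    """Select an operation by consulting the tag table at run time."""
    try:
        kernel = _KERNELS[(op, a.tag)]
    except KeyError:
        raise TypeError(f"no {op} kernel for tag {a.tag}")
    return TaggedArray(a.tag, kernel(a.data, b.data))

x = TaggedArray(INT32, [1, 2, 3])
y = TaggedArray(INT32, [10, 20, 30])
print(dispatch("add", x, y).data)  # [11, 22, 33]
```

In Julia the tag and the table both disappear: you define methods of a generic function for the argument types you care about, and the language's own dispatch (plus the compiler's specialization) does the selection.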
That was a bit gratuitous, but I just felt like writing it :)