"ANI aims to extract "good" parallelism (in the sense that with
optimization, it converges to optimality) out of nearly anything just
by the way that the dataflow paradigm forces you to think."
Is there any evidence for this other than personal anecdotes? For
someone who doesn't already have experience with a dataflow language,
this seems like a pretty bold claim. It suggests that for any
sequential algorithm with no known parallel version, writing it in
anic would make the solution obvious.
The evidence is there, it's just theoretical until the compiler gets
finished; the ideas behind ANI are several years of academic
literature review and whiteboard scribbling in the making and in fact
this is the third (fourth?) iteration of the language/compiler, though
it's the first one I've published openly. So, to imply that ANI is
some groundless personal anecdote is quite silly indeed.
And although I do have experience working with other attempts at
dataflow implementations, in most ways ANI diverges wildly from them;
going through the tutorial makes this plainly obvious. Pretty much all
past dataflow languages have focused on a limited field of application
and have had little concern for maximizing efficiency or parallelism.
Further, most dataflow languages throw away all notion of explicit
"execution" and are effectively description languages that are
intended to be solved (comparatively inefficiently) by SAT, change
propagation, or some other abstract mathematical model. ANI is a
completely different beast. Thus to compare ANI to other dataflow
languages because they share a common interpretation of data flowing
is akin to saying that C is like Haskell because both have functions.
And no, the suggestion isn't that programming in ANI will make
parallel versions of sequential programs obvious. However, ANI *does*
attempt to extract the greatest possible parallelism out of any
algorithm -- for example, in an abstract sense, ANI parallelizes all
for-loops, dealing with the data dependency issues automatically (the
dataflow paradigm makes this easy and natural to implement). However,
in a lot of ways, what the compiler's attempting to do is, on the
contrary, very fine-grained and *non*-obvious; thus implementing
algorithms in ANI wouldn't necessarily be insightful in itself.
However, the goal is that it *would* result in ultimately
better-parallelized (and faster) binaries when run through the
compiler. A useful parallel to
this is that coding a large-scale project in C and running it through
an optimizing compiler won't necessarily help you to write a faster
version in assembler, but it *will* almost surely produce a faster
binary than you'd otherwise write.
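To make the for-loop point concrete, here's a minimal sketch in plain
Python (not ANI syntax, which isn't shown in this thread) of the
distinction such a compiler has to draw: iterations with no
loop-carried dependency can all run in parallel, while iterations that
read an earlier iteration's result must be sequenced.

```python
# Hypothetical illustration of the dependency analysis a parallelizing
# compiler performs on for-loops; plain Python, not ANI.
from concurrent.futures import ThreadPoolExecutor

data = list(range(8))

# Independent iterations: out[i] depends only on data[i], so every
# iteration can run in parallel -- this is the parallelism a dataflow
# compiler can extract automatically.
def square(x):
    return x * x

with ThreadPoolExecutor() as pool:
    out = list(pool.map(square, data))

# Dependent iterations: acc[i] depends on acc[i-1] (a loop-carried
# dependency), so these steps must stay ordered -- detecting exactly
# which edges in the dataflow graph force ordering is the hard part.
acc = [0] * len(data)
for i, x in enumerate(data):
    acc[i] = (acc[i - 1] if i else 0) + x
```

The first loop maps cleanly onto parallel hardware; the second is a
prefix sum, which needs a smarter decomposition than naive splitting.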
This can be done in imperative languages, but it's a lot simpler to
implement in a dataflow language, since the operations are essentially
static chunks through which the data "flows" instead of the other way
around: http://didntread.wordpress.com/2009/07/20/what-is-dataflow/
The potential massive parallelism from dataflow languages (when the
implementations are designed to exploit it) comes from the fact that
each operation or "compute node" in the dataflow network/graph (ie the
program) can be executing in parallel, continuously taking in data,
processing it and passing it to the next node. This is less
theoretical than Adrian said, because, even though dataflow languages
which are in common use (eg Max/MSP and LabVIEW) may not exploit
inherent parallelism in their current implementations, it still
exists. Or, at least, can exist, if the language is designed to allow
for it.
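The "each compute node executes in parallel" idea can be sketched in a
few lines of plain Python (again, not ANI -- just an illustration of
the general dataflow-pipeline model described above), with each node
as a thread that continuously pulls from its input queue and pushes
downstream:

```python
# Minimal dataflow-pipeline sketch: each "compute node" is a thread
# that consumes items from an input queue, applies its operation, and
# emits results to the next node. Plain Python, not ANI.
import threading
import queue

SENTINEL = object()  # marks end of the data stream

def node(fn, inq, outq):
    """A compute node: applies fn to each item flowing through it."""
    while True:
        item = inq.get()
        if item is SENTINEL:
            outq.put(SENTINEL)  # propagate shutdown downstream
            return
        outq.put(fn(item))

q1, q2, q3 = queue.Queue(), queue.Queue(), queue.Queue()

# Two nodes wired in series; both run concurrently, so doubling item n
# can overlap with incrementing item n-1 (pipeline parallelism).
threads = [
    threading.Thread(target=node, args=(lambda x: x * 2, q1, q2)),
    threading.Thread(target=node, args=(lambda x: x + 1, q2, q3)),
]
for t in threads:
    t.start()

for x in range(5):
    q1.put(x)
q1.put(SENTINEL)

results = []
while (item := q3.get()) is not SENTINEL:
    results.append(item)
for t in threads:
    t.join()
# results == [1, 3, 5, 7, 9]
```

With more nodes (and nodes that fan out or merge), the same structure
becomes the network/graph described above, and the available
parallelism grows with the number of nodes that have data ready.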
If you are interested in learning about this, check out the book (or
rather, collection of research papers, published together as a book)
"Advanced Topics in Dataflow Computing and Multithreading", the Fleet
dataflow processor and Cleo Saulnier's blog. These will give you ideas
for different approaches to dataflow and show potential benefits of
using dataflow languages (more than just potential increase in
parallelism).
--
Daniel Kersten.
Leveraging dynamic paradigms since the synergies of 1985.