> It's just a shame, it seems to me, that there is such a nice way to
> represent the procedure in Python or even C, yet Clojure (or any Lisp
> really) struggles to idiomatically answer this question of
> convolution.
No, it's pretty easy to do convolution idiomatically, as demonstrated by some of the replies you got. What makes it seem difficult is that the literature is all written with imperative languages in mind.
What Clojure does "struggle" with is performance, in that the obvious way of writing something often results in poor performance, especially for heavy numeric code. This applies to most (all?) dynamic languages, though; you'd be better off with C if high-performance numerics are your top priority.
-Fred
--
Science answers questions; philosophy questions answers.
How about some common sense? The convolution algorithm everyone is
discussing runs in O(n m) time. So your measured time would imply that
a single core of your computer is capable of 10 * 3,000,000^2 = 9 *
10^13 FLOPS, or nearly a hundred tera-FLOPS. That should have tipped
you off that you aren't actually measuring anything useful.
Wrap your (convolve dxs dys) calls in (doall ...) so the lazy sequence
is actually realized. You will also want to reduce the three-million
figure to something much smaller, or you'll be waiting a very long time.
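The same measurement mistake is easy to reproduce in Python 3, where map is also lazy; a small sketch (the variable names are mine):

```python
import time

xs = list(range(1_000_000))

# Building the map object is O(1): nothing has been computed yet,
# so timing this line tells you nothing about the real work.
t0 = time.perf_counter()
lazy = map(lambda x: x * x, xs)
t_lazy = time.perf_counter() - t0

# Forcing the sequence (the analogue of Clojure's doall) is where
# the actual million multiplications happen.
t0 = time.perf_counter()
forced = list(lazy)
t_forced = time.perf_counter() - t0
```

On any machine, t_lazy is negligible next to t_forced, which is why timing an unrealized lazy sequence looks impossibly fast.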
By the way, no one actually uses this quadratic-time algorithm for
very wide kernels. For those you use FFTs and pointwise multiplication
(look up the convolution theorem), which yields an O(n log n)
algorithm.
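For concreteness, here is a sketch of that FFT route in pure Python (a textbook radix-2 Cooley-Tukey; the function names are mine, not from the thread):

```python
import cmath

def fft(a, invert=False):
    """Radix-2 Cooley-Tukey FFT; len(a) must be a power of two."""
    n = len(a)
    if n == 1:
        return a[:]
    even = fft(a[0::2], invert)
    odd = fft(a[1::2], invert)
    sign = 1 if invert else -1
    out = [0j] * n
    for k in range(n // 2):
        w = cmath.exp(sign * 2j * cmath.pi * k / n)
        out[k] = even[k] + w * odd[k]
        out[k + n // 2] = even[k] - w * odd[k]
    return out

def convolve_fft(xs, ys):
    """O(n log n) convolution via the convolution theorem:
    transform both inputs, multiply pointwise, transform back."""
    n = 1
    while n < len(xs) + len(ys) - 1:
        n *= 2          # zero-pad so circular convolution equals linear
    fx = fft([complex(v) for v in xs] + [0j] * (n - len(xs)))
    fy = fft([complex(v) for v in ys] + [0j] * (n - len(ys)))
    prod = [a * b for a, b in zip(fx, fy)]
    inv = fft(prod, invert=True)
    # divide by n for the inverse transform; round off FFT noise
    return [round((v / n).real, 9) for v in inv[:len(xs) + len(ys) - 1]]
```

In practice you'd reach for a library FFT rather than hand-rolling one, but the structure above is the whole trick.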
-Per
> I did check out your response: it's rather fast on my machine. it's
> not really functional, though, as you use the `let` macro as a way of
> procedurally executing a lot of functions. This isn't bad at all, but
> you're not composing functions.
'Functional' means simply avoiding mutation and side-effects. How the code is organized is a separate issue.
In case you're interested, I made a small change to the code I posted before that makes it significantly faster. I don't know what you consider 'slow', but it takes 2-3x as long as your imperative Python version on my hardware (less if I replace the outermost map with pmap, depending on the input sizes). I think that's quite acceptable, especially given Clojure's relative youth as a language.
(defn convolve-indices
  "Lists all index pairs adding up to i, where the first index is less than max-m and the second is less than max-n."
  [i max-m max-n]
  (map #(vector % (- i %))
       (range (max 0 (inc (- i max-n)))
              (min (inc i) max-m))))

(defn convolve-1
  "Returns the ith value of the convolution of ms with ns."
  [i ms ns]
  (reduce +
          (map (fn [[m n]] (* (ms m) (ns n)))
               (convolve-indices i (count ms) (count ns)))))

(defn convolve
  "Convolves ms with ns."
  [ms ns]
  (map #(convolve-1 % ms ns)
       (range (dec (+ (count ms) (count ns))))))
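For comparison, the same index-pair decomposition translates almost mechanically into Python (the names below are my own, mirroring the Clojure ones):

```python
def convolve_indices(i, max_m, max_n):
    """All index pairs (m, n) with m + n == i, m < max_m, n < max_n."""
    return [(m, i - m) for m in range(max(0, i - max_n + 1),
                                      min(i + 1, max_m))]

def convolve(ms, ns):
    """Convolution of ms with ns: each output element sums the
    products along one anti-diagonal of index pairs."""
    return [sum(ms[m] * ns[n] for m, n in convolve_indices(i, len(ms), len(ns)))
            for i in range(len(ms) + len(ns) - 1)]
```

Note there is no mutation anywhere: each output element is computed independently, which is also what makes the pmap swap mentioned above safe.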
Apologies in advance if this is somewhat OT in a Clojure group.
> double convolve(double *xs, double *is, double *ys){
> int i,j;
> for(i=0; i<len(xs); i++){
> for(j=0; j<len(is); j++){
How are you getting the size of the collections from the pointers, and
why are you checking the lengths on every loop pass (rather than once, a
priori)?
> ys[i+j] = ys[i+j] + (xs[i]*is[j]);
The following would be more idiomatic:
ys[i + j] += xs[i] * is[j];
Less repetition, less line noise, and in C++, it may avoid some
unnecessary temporaries if you ever abstract from raw ints and doubles
to something less likely to be elided by the compiler (e.g.
Clojure-style, overflow-detecting types).
> }
> }
> return ys;
> }
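To show the shape being discussed with those fixes applied (lengths taken once, up front, and the += form), here is the loop transcribed into Python; the function name is mine:

```python
def convolve_imperative(xs, ks):
    """Direct O(n*m) convolution in the imperative style above."""
    n, m = len(xs), len(ks)        # lengths computed once, not per loop pass
    ys = [0.0] * (n + m - 1)       # output sized for the full linear convolution
    for i in range(n):
        for j in range(m):
            ys[i + j] += xs[i] * ks[j]   # the += form, as suggested
    return ys
```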