--
You received this message because you are subscribed to the Google
Groups "Clojure" group.
To post to this group, send email to clo...@googlegroups.com
Note that posts from new members are moderated - please be patient with your first post.
To unsubscribe from this group, send email to
clojure+u...@googlegroups.com
For more options, visit this group at
http://groups.google.com/group/clojure?hl=en
Probably not helpful, but I tend to rely on the JVM's optimisations and just use -server. I figured this is an area where a little knowledge is a dangerous thing.
At the very least I would put together a realistic benchmark suite to prove to myself that these gains were worth it. In my experience the performance bottleneck is always in the design.
Just my 2p, although with my self-confessed ignoramus status it is more like 0.5p :).
Now I feel even more of an ignoramus :)
Thanks Colin and also Alex and Andy.
I'm trying to determine a reasonable way to do this without reading a book about it.
It sounds like I should use ^:replace, -server, and -XX:-TieredCompilation (is that right, the way I've changed + to -?), along with -XX:+AggressiveOpts. Does it make sense to use all of these together?
And maybe I should get rid of "-XX:+UseG1GC", since I'm not really sure if that's a good thing.
Assuming that none of those settings needs a bigger chunk of RAM than is available, I guess I should keep my messy code for the memory options.
So that would mean that overall I'd do the following to maximize performance on long-running, compute-intensive, memory-intensive runs:
:jvm-opts ^:replace ~(let [mem-to-use
                           (long (* (.getTotalPhysicalMemorySize
                                      (java.lang.management.ManagementFactory/getOperatingSystemMXBean))
                                    0.8))]
                       [(str "-Xmx" mem-to-use)
                        (str "-Xms" mem-to-use)
                        "-server"
                        "-XX:-TieredCompilation"
                        "-XX:+AggressiveOpts"])
Seem reasonable?
:jvm-opts ^:replace ["-server"
;;"-XX:+AggressiveOpts"
;;"-XX:+UseFastAccessorMethods"
;;"-XX:+UseCompressedOops"
"-Xmx4g"]
"Elapsed time: 185.651689 msecs"
However, if I modify the -main function in 214_intermediate.clj to wrap the time testing with (doseq [_ (range 20)]), to run the test multiple times, the behavior is much better after the first 9 or so runs, and by the end it is down to:
"Elapsed time: 35.574945 msecs"
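The warm-up wrapper described above can be sketched like this (assuming a hypothetical run-test function standing in for the body of -main):

    ;; Run the timed body 20 times so the JIT has a chance to compile
    ;; the hot paths; later iterations reflect the optimized code.
    (doseq [_ (range 20)]
      (time (run-test)))  ; run-test is a placeholder for the actual work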
I think you might be running into a situation where the VM hasn't run the functions enough times to optimize.
Rather than use the time function, you might want to try using criterium[1] to benchmark the code. The site explains all the wonderful things it does to get a good benchmark.
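A minimal criterium session might look like this (a sketch only; it assumes criterium is on the classpath and my-fn / test-input stand in for the code under test):

    (require '[criterium.core :as crit])

    ;; quick-bench warms up the JVM, runs many trials, and reports
    ;; the mean execution time along with outlier analysis.
    (crit/quick-bench (my-fn test-input))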
Unfortunately, if your data set is small and you're running a one-off calculation like this, I don't know if there's much to improve, due to the warmup cost. You could fiddle a bit with VM flags to try to optimize with fewer calls (I can't recall the flag, but I think the JVM defaults to optimizing after a function has been called 10,000 times). On the other hand, if you're processing larger datasets, I think it's reassuring that once warmed up, the Clojure code performs pretty well.
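The flag in question is probably -XX:CompileThreshold (the non-tiered -server default is 10,000 invocations); lowering it makes the JIT compile sooner, at the cost of less profiling data. A hedged sketch of how it could be added to a project.clj:

    :jvm-opts ^:replace ["-server"
                         "-XX:-TieredCompilation"    ; threshold applies most predictably without tiering
                         "-XX:CompileThreshold=1000" ; compile after 1,000 calls instead of 10,000
                         "-Xmx4g"]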
For reference, this was run on a MacBook Pro 13" early 2011, Core i7 2.7GHz.
steven
lein run -m rdp.214-intermediate-arr 1 true
;; with pmap (8x speedup on a 12-core machine)
lein run -m rdp.214-intermediate-arr 1 true
;; took around 250s.
lein run -m rdp.214-intermediate-leifp 1
;; took around 100s
lein run -m rdp.214-intermediate-arr-leifp 1 true
;; took around 70s
(defn- covered?
  [^long canvas-x ^long canvas-y ^Paper paper]
  (and (<= ^long (.x1 paper) canvas-x) (<= canvas-x ^long (.x2 paper))
       (<= ^long (.y1 paper) canvas-y) (<= canvas-y ^long (.y2 paper))))
(recur (unchecked-inc i))
(recur (inc i))
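The difference between the two recur forms above: inc performs a long-overflow check and throws ArithmeticException on overflow, while unchecked-inc compiles to a raw primitive add that wraps silently. A small sketch of the two loop shapes (hypothetical bound, for illustration only):

    ;; Checked loop: inc throws ArithmeticException on long overflow.
    (loop [i 0]
      (when (< i 1000000)
        (recur (inc i))))

    ;; Unchecked loop: unchecked-inc skips the overflow check, so it can
    ;; be slightly faster in a hot loop, but wraps around past Long/MAX_VALUE.
    (loop [i 0]
      (when (< i 1000000)
        (recur (unchecked-inc i))))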