Speed of Clojure hash map vs Java hash map


Mark Engelberg

Dec 28, 2010, 8:57:43 PM
to clojure
Just for fun, I was curious to see what it would be like,
performance-wise, to simulate a synchronized mutable hash map by
putting a clojure hash map inside an atom, and making this accessible
to Java code via the map interface.

I didn't try to implement the entire map interface, but just the major
things for testing purposes. The code is below.

Benchmarking insertions on my machine, I'm seeing that the Clojure
code (Clojure 1.2, java -server) takes roughly 8-20x longer (depending
on how large the hashmap is).

To test, for example, I do something like
(def ^com.justforfun.NonBlockingHashMap h (com.justforfun.NonBlockingHashMap.))
(def ^java.util.Map j (. java.util.Collections synchronizedMap
(java.util.HashMap.)))

(time (doseq [i (range 100000)] (.put h i 2))) ;900ms
(time (doseq [i (range 100000)] (.put j i 2))) ;70ms

Does this seem reasonable, or is there a better way to do this test?
Frankly, I was expecting Clojure's numbers to compare a bit more
favorably.

If you repeat the run, shoving new values into the existing hash
table, make sure to change the value you're sticking into the hash
table (Clojure optimizes when you store the same value in the hash
table that's already there).
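For example, a second pass could store a different value (the 3 below is
arbitrary), so that assoc can't short-circuit on an identical entry:

(time (doseq [i (range 100000)] (.put h i 3)))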

[Code]

(ns com.justforfun.NonBlockingHashMap
  (:gen-class
   :implements [java.lang.Iterable]
   :init init
   :constructors {[] []}
   :state state
   :methods [[get [Object] Object]
             [put [Object Object] void]
             [clear [] void]
             [remove [Object] void]
             [isEmpty [] boolean]
             [size [] int]]))

(defn -init []
  [[] (atom {})])

(defn -clear [this]
  (reset! (.state this) {}))

(defn -get [this k]
  (get @(.state this) k nil))

(defn -put [this k v]
  (swap! (.state this) assoc k v))

(defn -remove [this k]
  (swap! (.state this) dissoc k))

(defn -isEmpty [this]
  (zero? (count @(.state this))))

(defn -size [this]
  (count @(.state this)))

(defn -iterator [this]
  (.iterator @(.state this)))

David Nolen

Dec 28, 2010, 9:46:51 PM
to clo...@googlegroups.com
On Tue, Dec 28, 2010 at 8:57 PM, Mark Engelberg <mark.en...@gmail.com> wrote:
Just for fun, I was curious to see what it would be like,
performance-wise, to simulate a synchronized mutable hash map by
putting a clojure hash map inside an atom, and making this accessible
to Java code via the map interface.

;; ~50ms-60ms
(dotimes [_ 10]
  (let [m (atom {})
        r (range 1e5)]
   (time
    (doseq [i r]
      (swap! m assoc i 2)))))

;; same as above
(dotimes [_ 10]
  (let [m {}
        r (range 1e5)]
   (time
    (loop [m m r r]
      (if (nil? r)
        m
        (recur (assoc m (first r) 2) (next r)))))))

(def ^java.util.Map j (. java.util.Collections synchronizedMap (java.util.HashMap.)))

;; ~10ms
(dotimes [_ 10]
 (time (doseq [i (range 1e5)] (.put j i 2))))

So it's about 5X-6X slower for 1e5. But it looks to me there's no overhead from atom operations.

I note that for 1e3 keys the difference is around 2X.

Of course this isn't much of a comparison IMO because the Java HashMap isn't persistent. The Clojure version can add many keys as an atomic operation. Much more useful if you're using a map as some kind of in-memory data store.
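A minimal sketch of that point (the keys and values below are arbitrary): a whole batch of entries lands in a single swap!, so any reader that derefs the atom sees either none or all of them.

(def store (atom {}))

(swap! store into {:a 1 :b 2 :c 3})   ; several keys added atomically
(swap! store merge {:d 4 :e 5})       ; same idea with merge

@store   ; a consistent snapshot, safe to iterate without locking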

David

Mark Engelberg

Dec 29, 2010, 12:27:49 AM
to clo...@googlegroups.com
On Tue, Dec 28, 2010 at 6:46 PM, David Nolen <dnolen...@gmail.com> wrote:
> So it's about 5X-6X slower for 1e5. But it looks to me there's no overhead
> from atom operations.

I got similar results (5x slower) on the same sorts of tests. Calling
it from the class generated by gen-class seemed to cause another 2x
slowdown, resulting in the 10x difference I was seeing. It slows down
more as you go to 1e6, which indicates that it's not just a "constant
factor" slower than Java. I know that Clojure's data structures are
actually O(log32 n), but I'm so used to thinking of that as "essentially
constant" that I was surprised to see such a noticeable difference in
how much slower it was on 1e6 elements vs 1e5.

> I note that for 1e3 keys the difference is around 2X.
> Of course this isn't much of a comparison IMO because the Java HashMap isn't
> persistent. The Clojure version can add many keys as an atomic operation.
> Much more useful if you're using a map as some kind of in-memory data store.

I ran this test because I was having a discussion with someone about
the benefits of Clojure's persistent data structures, and how
concurrency is handled by sticking an immutable data structure into an
atom, ref, or agent, rather than through "synchronized" blocks. The
discussion went in the direction of memory caches as an example of the
two approaches, and it went something like this. I said,
"So you get the benefit of no blocking on reading, you can even
iterate over a snapshot of the hash table without blocking -- it's
just a whole lot cleaner."
"That must be wildly inefficient."
"It's not as inefficient as you might think. [Insert discussion of
log32, shared structure, etc.]"
"But surely you're still paying a pretty high price. And why would
you want that to be the default, paying that price all the time?"

Clearly, I couldn't really answer that question without doing some
investigating to see what the speed difference is like. I was even
hoping that if the speed difference were small enough, I could bundle
up a little library that implements HashMap as a Clojure hash map in
an atom, so that my Java pals could start using Clojure's hash maps as
a drop-in replacement for Java's (that's where the gen-class piece of
the experiment comes in). But with a 10x speed difference and so much
more memory churn, I think that's going to be a tough sell. It's
possible that Clojure becomes more competitive with Java under a lot
of contention for access from different threads, but I haven't tested
that out yet.

Anyway, thanks for confirming that the ratio of speeds on your machine
are similar to mine.

Todd

Dec 29, 2010, 1:08:17 AM
to clo...@googlegroups.com, Mark Engelberg
I thought it'd be interesting to look closer at the insertion times as a
fx of the size of the Map. The results are at:

https://gist.github.com/758198

At first I thought I'd found something interesting, only to investigate
further and realize that I'd been testing through a fx that was using
reflection (see trial results at bottom, Trials 1-4), and this was
completely swamping the results.

In Trial #5, I realized what I'd done, and broke my generic test fx into
two separate functions w/ proper type hinting (my-test-map and
my-test-nbhm). Now my results are on par w/ yours, and I don't see any
significant change in insertion times as a fx of map size up to 1e5.
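A minimal sketch of that difference (the function bodies here are assumptions; only the names my-test-map and my-test-nbhm come from the gist):

(set! *warn-on-reflection* true)

;; untyped argument: every .put goes through reflection
(defn my-test-generic [m n]
  (doseq [i (range n)] (.put m i 2)))

;; type-hinted arguments: the calls compile to direct method invocations
(defn my-test-map [^java.util.Map m n]
  (doseq [i (range n)] (.put m i 2)))

(defn my-test-nbhm [^com.justforfun.NonBlockingHashMap m n]
  (doseq [i (range n)] (.put m i 2)))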

Overall, NonBlockingHashMap insertions take ~10x longer than the
synchronized HashMap.

I think your idea of measuring under heavy contention would be very useful.
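A minimal sketch of such a test (thread count, key ranges, and function names are all arbitrary assumptions): each future writes a disjoint slice of keys, so what gets measured is CAS retries on the atom versus lock acquisition on the synchronized map.

(defn contended-insert-clj [nthreads per-thread]
  (let [m (atom {})]
    (time
     (doseq [f (doall
                 (for [t (range nthreads)]
                   (future
                     (doseq [i (range (* t per-thread) (* (inc t) per-thread))]
                       (swap! m assoc i 2)))))]
       @f))
    (count @m)))

(defn contended-insert-java [nthreads per-thread]
  (let [^java.util.Map m (java.util.Collections/synchronizedMap
                          (java.util.HashMap.))]
    (time
     (doseq [f (doall
                 (for [t (range nthreads)]
                   (future
                     (doseq [i (range (* t per-thread) (* (inc t) per-thread))]
                       (.put m i 2)))))]
       @f))
    (.size m)))

;; e.g. (contended-insert-clj 4 25000) vs (contended-insert-java 4 25000)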

-Todd

David Nolen

Dec 29, 2010, 1:15:36 AM
to clo...@googlegroups.com
On Wed, Dec 29, 2010 at 12:27 AM, Mark Engelberg <mark.en...@gmail.com> wrote:
Clearly, I couldn't really answer that question without doing some
investigating to see what the speed difference is like.  I was even
hoping that if the speed difference were small enough, I could bundle
up a little library that implements HashMap as a Clojure hash map in
an atom, so that my Java pals could start using Clojure's hash maps as
a drop-in replacement for Java's (that's where the gen-class piece of
the experiment comes in).  But with a 10x speed difference and so much
more memory churn, I think that's going to be a tough sell.  It's
possible that Clojure becomes more competitive with Java under a lot
of contention for access from different threads, but I haven't tested
that out yet.

Anyway, thanks for confirming that the ratio of speeds on your machine
are similar to mine.

Even in a single-threaded context, raw insert performance isn't the final word. What if you want to be able to deliver a snapshot for reporting? In Clojure that's free. In Java that's a blocking copy of a HashMap with 1e5 keys. Ick.
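A minimal sketch of the snapshot point (the map contents are arbitrary; assume the atom-wrapped map discussed above): deref is just a read, and the value it returns can be iterated or handed off for reporting with no locking and no copying.

(def m (atom {}))
(dotimes [i 5] (swap! m assoc i (* i i)))

(let [snapshot @m]   ; O(1): no copy, no lock; later writes don't affect it
  (doseq [[k v] snapshot]
    (println k v)))

;; The synchronizedMap equivalent has to copy the whole map inside a
;; synchronized block before it can be iterated safely.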

That's the problem with naive benchmarks. They only sound good until you have to write real programs that do real work.

David 

Mark Engelberg

Dec 29, 2010, 1:28:54 AM
to clo...@googlegroups.com
On Tue, Dec 28, 2010 at 10:15 PM, David Nolen <dnolen...@gmail.com> wrote:
> Even in a single-threaded context, raw insert performance isn't the final
> word. What if you want to be able to deliver a snapshot for reporting?

What if you don't?

Seriously, I agree with you that Clojure's data structures have some
significant advantages -- if you need those advantages. There are
still plenty of apps that use hash tables in a single-threaded manner,
or use them in a multithreaded way where contention is unlikely and
persistence is unnecessary. In many areas, Clojure has a
pay-for-what-you-need philosophy -- this just isn't one of those
areas. With respect to data structures, Clojure is very opinionated,
with an attitude of "Write it with immutable data structures -- you'll
thank me later." :) Since all the existing Java structures are easy
enough to access if you want them, this isn't necessarily a bad thing.
It just means I have to rethink my proselytizing strategy -- I was
definitely overselling the speed of the persistent data structures.

Jason Wolfe

Dec 29, 2010, 2:16:53 AM
to Clojure


On Dec 28, 10:28 pm, Mark Engelberg <mark.engelb...@gmail.com> wrote:
> On Tue, Dec 28, 2010 at 10:15 PM, David Nolen <dnolen.li...@gmail.com> wrote:
> > Even in a single-threaded context, raw insert performance isn't the final
> > word. What if you want to be able to deliver a snapshot for reporting?
>
> What if you don't?

Use transients?

(dotimes [_ 10]
  (let [m (atom (transient {}))
        r (range 1e5)]
    (time
     (doseq [i r]
       (swap! m assoc! i 2)))))

is within 50% the speed of an ordinary java.util.HashMap, by my
measurements. And if you want snapshots at every step, in the worst
case you pay the price you've measured above.

-Jason

Mark Engelberg

Dec 29, 2010, 2:45:16 AM
to clo...@googlegroups.com
On Tue, Dec 28, 2010 at 11:16 PM, Jason Wolfe <jaw...@berkeley.edu> wrote:
> Use transients?
>
> (dotimes [_ 10]
>  (let [m (atom (transient {}))
>        r (range 1e5)]
>   (time
>    (doseq [i r]
>      (swap! m assoc! i 2)))))

Since transients enforce single-threadedness, there's no reason to put
it in an atom. You're right that would work for the truly
single-threaded scenario. I'm more interested right now in the
scenario of "multi-threaded, low-contention, only occasionally need a
snapshot (for iteration without blocking)", so I hadn't really thought
in terms of transients, but I appreciate the reminder. Maybe some of
the new pod work will eventually help here since it will reportedly
allow for other sorts of ways to manage the mutable contents. Good
thinking,

Mark

Jason Wolfe

Dec 29, 2010, 3:16:52 AM
to Clojure
> Since transients enforce single-threadedness, there's no reason to put
> it in an atom.  You're right that would work for the truly
> single-threaded scenario.  I'm more interested right now in the
> scenario of "multi-threaded, low-contention, only occasionally need a
> snapshot (for iteration without blocking)", so I hadn't really thought
> in terms of transients, but I appreciate the reminder.  Maybe some of
> the new pod work will eventually help here since it will reportedly
> allow for other sorts of ways to manage the mutable contents.  Good
> thinking,

Well, if your updates come in batches, you could still see significant
benefits by storing the persistent map in an atom and using transients
to perform the batch updates within a single call to swap!.
Otherwise, I guess they might not be suitable for your purpose (yet --
we'll see what happens with pods).
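A minimal sketch of that batched approach (the helper name and batch shape are assumptions): the value held by the atom stays persistent, and each swap! folds a whole batch in through a transient before handing back a persistent map.

(defn assoc-batch [m batch]
  (persistent!
   (reduce (fn [t [k v]] (assoc! t k v))
           (transient m)
           batch)))

(def cache (atom {}))

(swap! cache assoc-batch [[:a 1] [:b 2] [:c 3]])
;; Readers still just deref the atom and always see a complete, consistent map.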

-Jason

Laurent PETIT

Dec 29, 2010, 4:15:08 AM
to clo...@googlegroups.com
Hi Mark,

I understand the value of, sometimes, comparing the raw speed of simple data structure operations.
So in your example, if I understand correctly, you're not measuring the time taken to compute the new value to be inserted, but rather just the time of insertion.

In the past I've seen colleagues of mine reject web frameworks because of this kind of "simple benchmark" approach: the web page to be served was doing nothing at all (as in "serve a static hello world message"), and they were comparing plain JSP, Struts, and another framework whose name I can't recall right now.
Of course, as expected in such a case, JSP was faster than Struts, which was faster than the other framework, in line with the number of indirection levels each solution introduced.
What was wrong was that those raw numbers were irrelevant in 99% of web page accesses, where the time taken to serve a request is dominated by I/O (database access). And the "wrong" framework (with respect to this criterion) was chosen.

Ok, I'm pretty sure you know the above story well; I still wanted to mention it, just in case.

Now, for Clojure and hash maps, I'd like to share some thoughts.
I'm basically segregating "use cases" into two categories:
  a. Feeding the hash map in big chunks: here, we are close to your example. The values have already been computed and (presumably) stored somewhere, and we just want to create a hash map as quickly as possible. In this scenario, it seems possible to use transients (see the sketch after this list).
  b. Feeding the hash map concurrently: here, where the need to feed the hash map concurrently comes from spreading non-trivial computations over multiple threads, I can see the ratio of time-to-compute-a-value to time-to-store-it becoming high, and thus the difference in insertion times becoming less relevant.
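For case (a), a minimal sketch (the key/value sequence is just a stand-in): into already builds its result through a transient under the hood, so a one-shot bulk load gets the transient speedup without any explicit transient calls.

(def bulk-loaded
  (into {} (map (fn [i] [i 2]) (range 100000))))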




nicola...@gmail.com

Dec 29, 2010, 5:13:18 AM
to clo...@googlegroups.com
I never was fully convinced an atom around a functional hash was
perfect for concurrency.

There is no write/write or read/write concurrency possible, even on
independent data.

Someone was working a while ago on a TransactionalHashMap, if I recall correctly.
Is there something already to benchmark against Java?

Ken Wesson

Dec 29, 2010, 7:00:00 AM
to clo...@googlegroups.com
On Wed, Dec 29, 2010 at 1:08 AM, Todd <t.green...@gmail.com> wrote:
> I thought it'd be interesting to look closer at the insertion times as a fx
> of the size of the Map. The results are at:
>
> https://gist.github.com/758198
>
> At first I thought I'd found something interesting, only to investigate
> further and realize that I'd been testing through a fx that was using
> reflection (see trial results at bottom, Trials 1-4), and this was
> completely swamping the results.
>
> In Trial #5, I realized what I'd done, and broke my generic test fx into two
> separate functions w/ proper type hinting (my-test-map and my-test-nbhm).
> Now my results are on par w/ yours, and I don't see any significant change
> in insertion times as a fx of map size up to 1e5.
>
> Overall, NonBlockingHashMap insertions take ~10x longer than the
> synchronized HashMap.

A more apples-to-apples comparison might be with ConcurrentHashMap,
since that allows for lots of concurrent nonblocking readers like
NonBlockingHashMap and unlike SynchronizedHashMap.

How fast is ConcurrentHashMap?

Ken Wesson

Dec 29, 2010, 7:02:47 AM
to clo...@googlegroups.com
On Wed, Dec 29, 2010 at 5:13 AM, nicola...@gmail.com
<nicola...@gmail.com> wrote:
> I never was fully convinced an atom around a functional hash was
> perfect for concurrency.
>
> There is no write/write or read/write concurrency possible, even on
> independent data.

There is if you resort to (mutable)
java.util.concurrent.ConcurrentHashMap, which I've even had occasion
to use in a Clojure project when I needed finer lock/retry granularity
than the whole map and for whatever reason reads and (commute mapref
assoc ...) didn't suffice to cover all the use-cases.
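A minimal sketch of the ref/commute variant mentioned above (names and values are placeholders): independent assoc operations are declared commutative, so concurrent transactions adding different keys don't force each other to retry.

(def mapref (ref {}))

(dosync
 (commute mapref assoc :user-42 {:name "x"}))

;; Reads are just @mapref, or (ensure mapref) inside a transaction.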

> Someone was working a while ago on a TransactionalHashMap, if I recall correctly.
> Is there something already to benchmark against Java?

Is this something like a persistent immutable ConcurrentHashMap? Or
maybe a map integrated with the ref world in some way? Or both?

Mike Meyer

Dec 29, 2010, 7:27:34 AM
to clo...@googlegroups.com
On Tue, 28 Dec 2010 22:28:54 -0800
Mark Engelberg <mark.en...@gmail.com> wrote:

[Standing on soapbox]

> On Tue, Dec 28, 2010 at 10:15 PM, David Nolen <dnolen...@gmail.com> wrote:
> > Even in a single-threaded context, raw insert performance isn't the final
> > word. What if you want to be able to deliver a snapshot for reporting?
>
> What if you don't?
>
> Seriously, I agree with you that Clojure's data structures have some
> significant advantages -- if you need those advantages. There are
> still plenty of apps that use hash tables in a single-threaded manner,
> or use them in a multithreaded way where contention is unlikely and
> persistence is unnecessary. In many areas, Clojure has a
> pay-for-what-you-need philosophy -- this just isn't one of those
> areas. With respect to data structures, Clojure is very opinionated,
> with an attitude of "Write it with immutable data structures -- you'll
> thank me later." :)

And you'll do that because some things are hard enough that letting
the programmer do it themselves - or decide when they need the help -
often results in the programmer doing it wrong. I mean - you don't
*need* garbage collection for everything: you can allocate and free
things by hand, or even use a "voluntary" garbage collector and mix
those two approaches. Historically, those things don't work very well,
and most of the world has gone to doing GC by default for
everything. There are still cases where you need to free resources by
hand, but those tend to be obvious and can either be hooked into the
GC system (so you free them just before the GC system frees some
containing object) or easy to free properly.

Letting programmers lock/unlock shared structures is a very similar
operation to allocating/freeing data structures, and (from my point of
view, anyway) historically hasn't worked very well (I've found dozens
of windows/hanging locks/etc. in large systems that had been in
*production* for years!). Going to "immutable by default" for
everything is the same as going to "gc by default" for everything -
complete with tools for dealing with the exceptions when they arise.

Such systems are still relatively new, and we're still working out the
best way to deal with things. Maybe it'll turn out there's some
compromise short of what Clojure does that actually works well. But I
don't think we'll know what "well" is unless something tackles doing
it full-bore the way Clojure does.

<mike
--
Mike Meyer <m...@mired.org> http://www.mired.org/consulting.html
Independent Network/Unix/Perforce consultant, email for more information.

O< ascii ribbon campaign - stop html mail - www.asciiribbon.org

Chas Emerick

Dec 29, 2010, 8:07:46 AM
to clo...@googlegroups.com

Totally aside from the (valid) hint this offers for your advocacy efforts, I think this is just a vote for "use the right tool for the job" -- a loaded statement that silently ropes in all sorts of premises associated with the local context and circumstances of the developers, business, and domain involved. I'd say that Clojure is opinionated along many different vectors; slipping portions of it into contexts where the developers in question insist on maintaining their current state of practice sounds like a particularly large windmill to tilt at.

Clojure's facilities, taken individually, are interesting and maybe useful in limited circumstances. Persistent data structures have a lot of useful properties (for a price), laziness would seem to be a good solution to an incredibly niche problem, and reference types are an interesting take on the state/identity question; I submit that none of these on their own are particularly compelling. Use them in concert, however (assuming you've got first class functions, of course) and you have a cumulatively augmented result enabling you to do things you simply wouldn't be able to do before (or perhaps, wouldn't have considered possible before). From that context, those apps (or, really, those portions of some apps) that really benefit from using mutable data structures seem like a niche.

Re: "if you need those advantages"... Personally, I think the most significant advantage persistent data structures offer hasn't been mentioned: the ability to program with values, and the elimination of errors and defects that arise from doing anything else. To answer your friend, *that's* why you should be willing to "pay that price all the time" (though only on assoc of course, and maybe not even there depending on the nature of the problem at hand and how much laziness/transients/pmap/etc can be brought to bear). I don't think that packaging up a hashmap-in-an-atom is really doing anyone any favors, at least if that particular animal is being called a "persistent data structure".

FWIW, you might want to take a look at the FunctionalJava library (http://www.functionaljava.org/) for your friends that might be amenable to folding some functional programming into their existing applications, but are strictly wedded to Java. It's opinionated in its own ways, but its pervasive use of generics may give some just enough of a handle on what's going on to grok what FP is about and why it's useful without going whole-hog and using a different language.

Cheers,

- Chas

David Nolen

Dec 29, 2010, 10:06:37 AM
to clo...@googlegroups.com
On Wed, Dec 29, 2010 at 1:28 AM, Mark Engelberg <mark.en...@gmail.com> wrote:
 It just means I have to rethink my proselytizing strategy -- I was
definitely overselling the speed of the persistent data structures.

I agree, trying to sell persistent data structures on speed alone just doesn't make sense. Any talk about persistent data structures needs to be fairly holistic.

But I think that most experienced programmers favor flexibility over micro benchmarks as long as flexibility doesn't come at an unreasonable cost.

Because experience shows that, unexpectedly, the day will come when you'll want that simple KV store to support atomically adding and removing keys :)

David

Rich Hickey

Dec 29, 2010, 1:00:59 PM
to clo...@googlegroups.com

On Dec 28, 2010, at 8:57 PM, Mark Engelberg wrote:

> Just for fun, I was curious to see what it would be like,
> performance-wise, to simulate a synchronized mutable hash map by
> putting a clojure hash map inside an atom, and making this accessible
> to Java code via the map interface.
>
> I didn't try to implement the entire map interface, but just the major
> things for testing purposes. The code is below.
>
> Benchmarking insertions on my machine, I'm seeing that the Clojure
> code (Clojure 1.2, java -server) takes roughly 8-20x longer (depending
> on how large the hashmap is).
>
> To test, for example, I do something like
> (def ^com.justforfun.NonBlockingHashMap h
> (com.justforfun.NonBlockingHashMap.))
> (def ^java.util.Map j (. java.util.Collections synchronizedMap
> (java.util.HashMap.)))
>
> (time (doseq [i (range 100000)] (.put h i 2))) ;900ms
> (time (doseq [i (range 100000)] (.put j i 2))) ;70ms
>
> Does this seem reasonable, or is there a better way to do this test?
> Frankly, I was expecting Clojure's numbers to compare a bit more
> favorably.

You need to compare against j.u.c.ConcurrentHashMap. It exists because
the locking strategy used by synchronizedMap doesn't scale well. You
won't even see that overhead on simplistic benchmarks like yours
because the optimizer can likely see the map never escapes and can
remove all the locking.

Here's a more realistic comparison:

(definterface M
  (put [k v])
  (get [k]))

(defn ^M h []
  (let [m (atom {})]
    (reify M
      (put [_ k v] (swap! m assoc k v))
      (get [_ k] (get @m k)))))

(defn ^java.util.Map j []
  (java.util.concurrent.ConcurrentHashMap.))

(let [v (into [] (range 100000))]
  (dotimes [_ 10]
    (let [h (h)
          j (j)]
      (println "adding")
      (time (doseq [i v] (.put h i 2)))
      (time (doseq [i v] (.put j i 2)))
      (println "lookup")
      (time (doseq [i v] (.get h i)))
      (time (doseq [i v] (.get j i))))))

adding
"Elapsed time: 96.351 msecs"
"Elapsed time: 46.474 msecs"
lookup
"Elapsed time: 30.424 msecs"
"Elapsed time: 28.3 msecs"
adding
"Elapsed time: 98.647 msecs"
"Elapsed time: 46.578 msecs"
lookup
"Elapsed time: 32.284 msecs"
"Elapsed time: 32.531 msecs"
adding
"Elapsed time: 293.863 msecs"
"Elapsed time: 34.284 msecs"
lookup
"Elapsed time: 18.632 msecs"
"Elapsed time: 15.416 msecs"
adding
"Elapsed time: 95.019 msecs"
"Elapsed time: 33.441 msecs"
lookup
"Elapsed time: 19.4 msecs"
"Elapsed time: 16.265 msecs"
adding
"Elapsed time: 89.819 msecs"
"Elapsed time: 31.929 msecs"
lookup
"Elapsed time: 19.574 msecs"
"Elapsed time: 14.87 msecs"
adding
"Elapsed time: 94.126 msecs"
"Elapsed time: 33.812 msecs"
lookup
"Elapsed time: 20.514 msecs"
"Elapsed time: 14.957 msecs"
adding
"Elapsed time: 427.7 msecs"
"Elapsed time: 32.465 msecs"
lookup
"Elapsed time: 18.858 msecs"
"Elapsed time: 14.848 msecs"
adding
"Elapsed time: 91.149 msecs"
"Elapsed time: 32.392 msecs"
lookup
"Elapsed time: 19.491 msecs"
"Elapsed time: 15.441 msecs"
adding
"Elapsed time: 91.648 msecs"
"Elapsed time: 33.283 msecs"
lookup
"Elapsed time: 19.113 msecs"
"Elapsed time: 15.028 msecs"
adding
"Elapsed time: 90.503 msecs"
"Elapsed time: 31.736 msecs"
lookup
"Elapsed time: 19.557 msecs"
"Elapsed time: 15.082 msecs"

Mark Engelberg

Dec 29, 2010, 3:39:49 PM
to clo...@googlegroups.com
Thanks for that benchmark example, Rich. One thing that really
intrigues me here is that the reify version is substantially faster
than the gen-class version. Is there any way to harness the speed of
the reify from external Java libraries calling into Clojure, or is
gen-class still pretty much the "right" way to build a component
intended for use from Java?
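One possible way to get the reify-based version in front of Java callers (a sketch only: the namespace, class, and method names are hypothetical, and it assumes AOT compilation): definterface emits a plain Java interface, and a thin gen-class static factory hands Java an instance of the reified type, so the gen-class indirection is paid once at construction rather than on every put/get.

(ns com.justforfun.maps
  (:gen-class
   :name com.justforfun.Maps
   :methods [^{:static true} [newMap [] Object]]))

(definterface IStore
  (put [k v])
  (get [k]))

(defn -newMap []
  (let [m (atom {})]
    (reify IStore
      (put [_ k v] (swap! m assoc k v))
      (get [_ k] (clojure.core/get @m k)))))

;; Java side (roughly):
;;   com.justforfun.maps.IStore s =
;;       (com.justforfun.maps.IStore) com.justforfun.Maps.newMap();
;;   s.put("a", 1);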

Stuart Halloway

Dec 29, 2010, 3:55:38 PM
to clo...@googlegroups.com
gen-class is only right when you need to do concrete inheritance. That may be true when doing Java interop, but with a well-designed Java library it doesn't have to be.

Stu
