Not unless your application is misbehaving in the first place. Since 1.0.5,
that map uses weak references so that only the PreparedStatements that are still
referenced by the application are kept. If your application continuously creates
new PreparedStatements without ever unreferencing previously created ones, then
yes, the map will grow forever, but you'd still be screwed even if Cluster was
not caching PreparedStatements at all :).
The behavior above means it wouldn't work for the use case you've described, as
a PreparedStatement is automatically removed from the cache as soon as your
application no longer holds a strong reference to it (technically "whenever the
GC feels like it", but you get the idea).
So really, it's meant to be the job of your application to cache the
PreparedStatements it plans on reusing, and to implement whatever kind of size
management you want for that cache. The internal cache of Cluster will follow.
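
For illustration, a minimal sketch of such an application-side cache could look
like the following (the StatementCache name and the eviction method are made up
for the example; only Session.prepare is actual driver API). As long as the
application holds these strong references, the corresponding entries stay alive
in Cluster's weak-reference map as well:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

import com.datastax.driver.core.PreparedStatement;
import com.datastax.driver.core.Session;

// Application-side cache of prepared statements, keyed by query string.
public class StatementCache {
    private final ConcurrentMap<String, PreparedStatement> cache =
            new ConcurrentHashMap<String, PreparedStatement>();
    private final Session session;

    public StatementCache(Session session) {
        this.session = session;
    }

    public PreparedStatement get(String query) {
        PreparedStatement ps = cache.get(query);
        if (ps == null) {
            // Preparing the same query twice is harmless, so a benign race is fine here.
            ps = session.prepare(query);
            PreparedStatement previous = cache.putIfAbsent(query, ps);
            if (previous != null) {
                ps = previous;
            }
        }
        return ps;
    }

    // Whatever eviction policy the application wants (size cap, LRU, ...) can be
    // layered on top; once an entry is dropped here and no longer referenced
    // elsewhere, the driver's internal cache will eventually let go of it too.
    public void evict(String query) {
        cache.remove(query);
    }
}
```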
To be honest, the idea doesn't fill me with joy a priori. Though I understand
the annoying aspect of what you describe, serializing PreparedStatements feels
more like a hack than something that really makes sense. Besides, making it
serializable is not enough, we'd also have to add a way to associate a newly
deserialized PreparedStatement with a Cluster instance. That's a bit too much new
API for a use case that, frankly, I'm not sure is something I'd advise doing in
general. Also, while I completely understand that you're just trying to work
around the lack of prepared statement batching in the 1.0 driver, that problem
is fixed in the 2.0 version.
Also, even in the 1.0 version, I suspect that the need for such large batch
statements can be somewhat limited in practice. For instance, at least in the
cases where batches are used for performance's sake (which is probably the most
common case in my experience), one thing I'd try is this: instead of preparing
very large batches of different sizes, prepare one batch of, say, 50-100 inserts
and reuse it repeatedly (of course, you can end up with a "remainder", but you
can use tricks like repeating the last record multiple times, since inserts are
idempotent). I suspect that in practice this might not be a lot slower than
preparing a huge batch.
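
A rough sketch of that trick, under some assumptions of mine (the table,
columns, Record class and FixedSizeBatchWriter name are invented for the
example; only Session.prepare, PreparedStatement.bind and Session.execute are
actual driver calls):

```java
import java.util.List;

import com.datastax.driver.core.PreparedStatement;
import com.datastax.driver.core.Session;

// One prepared batch of BATCH_SIZE inserts, reused for every chunk of records.
public class FixedSizeBatchWriter {

    private static final int BATCH_SIZE = 50;

    private final PreparedStatement batch;
    private final Session session;

    public FixedSizeBatchWriter(Session session) {
        this.session = session;
        // Build and prepare a single BEGIN BATCH ... APPLY BATCH statement
        // containing BATCH_SIZE identical parameterized inserts.
        StringBuilder cql = new StringBuilder("BEGIN BATCH ");
        for (int i = 0; i < BATCH_SIZE; i++) {
            cql.append("INSERT INTO ks.records (id, value) VALUES (?, ?); ");
        }
        cql.append("APPLY BATCH");
        this.batch = session.prepare(cql.toString());
    }

    public void write(List<Record> records) {
        for (int offset = 0; offset < records.size(); offset += BATCH_SIZE) {
            Object[] values = new Object[BATCH_SIZE * 2];
            for (int i = 0; i < BATCH_SIZE; i++) {
                // Past the end of the input, repeat the last record: the insert
                // is idempotent, so the "padding" rows are harmless rewrites.
                Record r = records.get(Math.min(offset + i, records.size() - 1));
                values[2 * i] = r.id;
                values[2 * i + 1] = r.value;
            }
            session.execute(batch.bind(values));
        }
    }

    // Hypothetical record type, just for the example.
    public static class Record {
        final String id;
        final String value;
        public Record(String id, String value) { this.id = id; this.value = value; }
    }
}
```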
Anyway, I don't pretend to know your exact use case because I don't, and I'm sure
you've considered many possibilities and do it the way you do for good reason.
My point is merely that I do think the problem you are describing is rather
specific, and that is why I'm not convinced it justifies making
PreparedStatement Serializable, especially given that serialization in Java is
known to be particularly painful to maintain.
--
Sylvain