Hello all.
Does anyone have experience running Hazelcast inside a managed classloading environment like JBoss Modules? As far as I can tell there are several options to choose from, and I can't decide which one is "right". The options are:
1. Include the Hazelcast jar as a library of the deployment (WAR, EAR, or whatever is specific to the app server)
Pros:
- no classloading problems
- any application-specific Java object can be seamlessly stored in a Hazelcast map, queue, or topic
Cons:
- a "cluster node" is initialized per application, not per server (JVM), so there may be several nodes in one JVM, which can cause data loss on a JVM/server crash
- cannot share the instance between applications (deployments)
2. Make a "module" out of Hazelcast and include it as a dependency in each deployment
Pros:
- single instance of HZ per JVM
- shared resource between deployments
- local invocations do not perform any serialization
- easy to keep all apps on the same version of HZ
Cons:
- classloading hell when trying to put an application-specific bean into a topic/map (queues work fine, which is a kind of magic!). Tested with HZ 3.1; I couldn't get it working, so I decided to use HashMaps instead of beans.
- because of the previous point, the only way to use HZ as a data-transfer layer is to stick to standard classes or primitives
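For reference, this is roughly how a deployment would consume such a module (a sketch; the module name com.hazelcast is my assumption, it has to match whatever name your module.xml declares):

```xml
<!-- WEB-INF/jboss-deployment-structure.xml (or META-INF/ for an EAR) -->
<jboss-deployment-structure>
    <deployment>
        <dependencies>
            <!-- module name is illustrative; match your module.xml -->
            <module name="com.hazelcast"/>
        </dependencies>
    </deployment>
</jboss-deployment-structure>
```

The same dependency can alternatively be declared via a Dependencies: entry in the jar's MANIFEST.MF.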
3. Fix option 2's issues by adding a jar with the app-specific beans to the Hazelcast module, making them visible to the HZ classloader
Pros:
- fixes the classloading issues
Cons:
- need to restart JBoss on every module change
- need to keep the versions of the DTO beans in the app and in the module in sync
- when several apps use different DTOs, the HZ module becomes unmaintainable
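In module.xml terms, option 3 would look something like this (a sketch; the jar and module names are illustrative):

```xml
<!-- modules/com/hazelcast/main/module.xml; jar names are illustrative -->
<module xmlns="urn:jboss:module:1.1" name="com.hazelcast">
    <resources>
        <resource-root path="hazelcast-3.1.jar"/>
        <!-- app-specific DTO jar bundled so the HZ classloader can see it -->
        <resource-root path="app-dtos.jar"/>
    </resources>
    <dependencies>
        <module name="javax.api"/>
    </dependencies>
</module>
```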
My heart chooses option 2 :) But I have to win the fight with classloading before it can go to production. Any thoughts?
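For what it's worth, the failure mode behind option 2 can be reproduced with plain JDK classloaders, no Hazelcast needed: when a shared module's classloader materializes a bean class on deserialization, the result is a different Class object than the deployment's own copy, so the cast back fails. A minimal stdlib sketch (the Dto class is a stand-in for an application-specific bean):

```java
import java.io.InputStream;

// Demonstrates the classloading trap: the same class bytes loaded by two
// different classloaders produce two distinct Class objects, so a cast
// across them fails. This mirrors (by analogy) what happens when a shared
// Hazelcast module's classloader deserializes a deployment-owned bean.
public class ClassLoaderDemo {

    // Stand-in for an application-specific DTO stored in a distributed map.
    public static class Dto {}

    // A classloader that re-defines a class from its .class bytes instead of
    // delegating to the parent, like two deployments each carrying their
    // own copy of the same bean.
    static class IsolatingLoader extends ClassLoader {
        Class<?> redefine(Class<?> original) throws Exception {
            String res = original.getName().replace('.', '/') + ".class";
            try (InputStream in =
                     original.getClassLoader().getResourceAsStream(res)) {
                byte[] bytes = in.readAllBytes();
                return defineClass(original.getName(), bytes, 0, bytes.length);
            }
        }
    }

    // Returns false: same class name, but a different Class identity.
    public static boolean sameClass() throws Exception {
        Class<?> redefined = new IsolatingLoader().redefine(Dto.class);
        return redefined == Dto.class;
    }

    public static void main(String[] args) throws Exception {
        Object foreign = new IsolatingLoader().redefine(Dto.class)
                .getDeclaredConstructor().newInstance();
        System.out.println(foreign instanceof Dto); // prints false
    }
}
```

This is also why pinning the right classloader on the shared side (or keeping the DTO classes visible to it, as in option 3) is the usual way out.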