All, I'm having a problem implementing the Hazelcast ticket store in CAS
6.3.4, which uses hazelcast-4.1.
Currently I'm testing with a two-node cluster fronted by a
NetScaler. Each node has its own /etc/cas/config/cas.properties, so
each node has its own Hazelcast configuration.
Here are the relevant Hazelcast configuration parameters:
cas.ticket.registry.hazelcast.page-size=500
cas.ticket.registry.hazelcast.cluster.tcpip-enabled=true
cas.ticket.registry.hazelcast.cluster.map-merge-policy=PUT_IF_ABSENT
cas.ticket.registry.hazelcast.cluster.instance-name=cas-dev
cas.ticket.registry.hazelcast.cluster.members=10.0.79.38,10.0.79.37
cas.ticket.registry.hazelcast.cluster.eviction-policy=LRU
cas.ticket.registry.hazelcast.cluster.max-no-heartbeat-seconds=300
cas.ticket.registry.hazelcast.cluster.logging-type=slf4j
cas.ticket.registry.hazelcast.cluster.port=5701
cas.ticket.registry.hazelcast.cluster.max-size=85
cas.ticket.registry.hazelcast.cluster.backup-count=1
cas.ticket.registry.hazelcast.cluster.async-backup-count=0
cas.ticket.registry.hazelcast.cluster.max-size-policy=USED_HEAP_PERCENTAGE
cas.ticket.registry.hazelcast.cluster.timeout=5
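For reference, my understanding is that the cluster properties above should amount to a Hazelcast TCP/IP-join network setup roughly like the following hazelcast.xml sketch. This is only an illustration of the assumed mapping; CAS builds the Hazelcast configuration programmatically and does not read this file:

```xml
<!-- Hypothetical hazelcast.xml equivalent of the CAS cluster properties
     above (instance name, fixed member list, port 5701, multicast off). -->
<hazelcast xmlns="http://www.hazelcast.com/schema/config">
  <instance-name>cas-dev</instance-name>
  <network>
    <port auto-increment="true">5701</port>
    <join>
      <multicast enabled="false"/>
      <tcp-ip enabled="true">
        <member>10.0.79.38</member>
        <member>10.0.79.37</member>
      </tcp-ip>
    </join>
  </network>
</hazelcast>
```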
In my testing I found that tickets were not being replicated to the
other host. I used the NetScaler to switch between the backend CAS
nodes: I logged in to one node, failed over to the other node, attempted
to access CAS, and was redirected to the login screen.
After restarting the CAS services on both nodes and tailing the CAS
log, I noticed the following error:
Cannot add a dynamic configuration
'MapConfig{name='serviceTicketsCache', inMemoryFormat=BINARY', metadataPolicy=CREATE_ON_UPDATE, backupCount=1, asyncBackupCount=0, timeToLiveSeconds=0, maxIdleSeconds=500, readBackupData=false, evictionConfig=EvictionConfig{size=85, maxSizePolicy=USED_HEAP_PERCENTAGE, evictionPolicy=LRU, comparatorClassName=null, comparator=null}, merkleTree=MerkleTreeConfig{enabled=false, depth=10}, eventJournal=EventJournalConfig{enabled=false, capacity=10000, timeToLiveSeconds=0}, hotRestart=HotRestartConfig{enabled=false, fsync=false}, nearCacheConfig=null, mapStoreConfig=MapStoreConfig{enabled=false, className='null', factoryClassName='null', writeDelaySeconds=0, writeBatchSize=1, implementation=null, factoryImplementation=null, properties={}, initialLoadMode=LAZY, writeCoalescing=true}, mergePolicyConfig=MergePolicyConfig{policy='com.hazelcast.spi.merge.LatestUpdateMergePolicy', batchSize=100}, wanReplicationRef=null, entryListenerConfigs=null, indexConfigs=null, attributeConfigs=null, splitBrainProtectionName=null, queryCacheConfigs=null, cacheDeserializedValues=INDEX_ONLY}'
as there is already a conflicting configuration
'MapConfig{name='serviceTicketsCache', inMemoryFormat=BINARY', metadataPolicy=CREATE_ON_UPDATE, backupCount=1, asyncBackupCount=0, timeToLiveSeconds=0, maxIdleSeconds=10, readBackupData=false, evictionConfig=EvictionConfig{size=85, maxSizePolicy=USED_HEAP_PERCENTAGE, evictionPolicy=LRU, comparatorClassName=null, comparator=null}, merkleTree=MerkleTreeConfig{enabled=false, depth=10}, eventJournal=EventJournalConfig{enabled=false, capacity=10000, timeToLiveSeconds=0}, hotRestart=HotRestartConfig{enabled=false, fsync=false}, nearCacheConfig=null, mapStoreConfig=MapStoreConfig{enabled=false, className='null', factoryClassName='null', writeDelaySeconds=0, writeBatchSize=1, implementation=null, factoryImplementation=null, properties={}, initialLoadMode=LAZY, writeCoalescing=true}, mergePolicyConfig=MergePolicyConfig{policy='com.hazelcast.spi.merge.LatestUpdateMergePolicy', batchSize=100}, wanReplicationRef=null, entryListenerConfigs=null, indexConfigs=null, attributeConfigs=null, splitBrainProtectionName=null, queryCacheConfigs=null, cacheDeserializedValues=INDEX_ONLY}'
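As far as I can tell, the only field that differs between the rejected and the existing MapConfig dumps is maxIdleSeconds (500 vs 10). A throwaway sketch like the following made the comparison easier for me; it's plain Java, the class and method names are mine, and the tokenizing is deliberately crude:

```java
import java.util.Arrays;
import java.util.LinkedHashSet;
import java.util.Set;
import java.util.stream.Collectors;

public class MapConfigDiff {

    // Crude tokenizer: break a MapConfig#toString() dump on commas and
    // braces so each "key=value" fragment becomes one set element.
    static Set<String> fields(String dump) {
        return Arrays.stream(dump.split("[,{}]\\s*"))
                .map(String::trim)
                .filter(s -> !s.isEmpty())
                .collect(Collectors.toCollection(LinkedHashSet::new));
    }

    public static void main(String[] args) {
        // Shortened stand-ins for the two dumps pasted above.
        String rejected = "MapConfig{name='serviceTicketsCache', maxIdleSeconds=500, backupCount=1}";
        String existing = "MapConfig{name='serviceTicketsCache', maxIdleSeconds=10, backupCount=1}";

        Set<String> onlyRejected = fields(rejected);
        onlyRejected.removeAll(fields(existing));
        System.out.println("only in rejected config: " + onlyRejected);
        // prints: only in rejected config: [maxIdleSeconds=500]
    }
}
```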
So off to Google I go, I find
https://github.com/hazelcast/hazelcast/issues/12222,
and I add -Dhazelcast.dynamicconfig.ignore.conflicts=true for giggles,
just to see something at least boot.
So now both services start up, but I'm ignoring the dynamic config
conflicts, and my testing still fails: it would appear that Hazelcast
is not able to share the TGT between nodes.
Any help would be greatly appreciated.
--
Erik Mallory
Server Analyst
Wichita State University