Severe error while loading data into map


Greg Haines

Mar 1, 2010, 4:12:49 PM
to Hazelcast
I received the following error during a test run which inserted a
large number of objects into a distributed map:

Mar 1, 2010 3:51:33 PM com.hazelcast.cluster.ClusterService dequeueProcessables
SEVERE: error processing messages obj=MPut operation= CONCURRENT_MAP_PUT name= c:mpsSeriesMap
java.lang.NullPointerException
    at java.lang.System.arraycopy(Native Method)
    at com.hazelcast.impl.Request.setPacket(Request.java:188)
    at com.hazelcast.impl.BaseManager$TargetAwareOp.invoke(BaseManager.java:815)
    at com.hazelcast.impl.BaseManager$TargetAwareOp.process(BaseManager.java:807)
    at com.hazelcast.impl.ConcurrentMapManager$MBackupAwareOp.process(ConcurrentMapManager.java:1003)
    at com.hazelcast.cluster.ClusterService.processProcessable(ClusterService.java:150)
    at com.hazelcast.cluster.ClusterService.dequeueProcessables(ClusterService.java:210)
    at com.hazelcast.cluster.ClusterService.run(ClusterService.java:161)
    at java.lang.Thread.run(Thread.java:619)

It occurred right after a client (a Java client, not a super-client)
made a request to read the contents of the map while another
super-client was putting data into the map.
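
In case it helps, here's roughly what the two processes are doing (a
simplified sketch, not our actual code: the keys/values are placeholders
and I'm going from memory on the 1.8-era APIs, i.e. Hazelcast.getMap()
on the super-client and HazelcastClient.newHazelcastClient(group,
password, address...) on the Java client):

import com.hazelcast.client.HazelcastClient;
import com.hazelcast.core.Hazelcast;

import java.util.Map;
import java.util.TreeSet;

public class ScenarioSketch {

    // Super-client process: bulk-loads the distributed map.
    static void loadData() {
        Map<String, TreeSet<Integer>> seriesMap = Hazelcast.getMap("mpsSeriesMap");
        for (int i = 0; i < 100000; i++) {
            TreeSet<Integer> series = new TreeSet<Integer>();
            series.add(i);
            seriesMap.put("series-" + i, series); // generates the CONCURRENT_MAP_PUT traffic above
        }
    }

    // Java client process: queries the same map while the load is running.
    static void queryData() {
        HazelcastClient client = HazelcastClient.newHazelcastClient(
                "ndr.sp.datagrid", "foobar", "10.10.10.211:5701");
        Map<String, TreeSet<Integer>> seriesMap = client.getMap("mpsSeriesMap");
        TreeSet<Integer> series = seriesMap.get("series-42"); // roughly where the client-side get() blows up
        System.out.println(series);
    }
}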

While attempting to replicate the error, I got a different one. This
time the Java client had this error:

INFO: Client is connecting to Connection [0] [/10.10.10.211:5701]
INFO : 2010-03-01 15:56:13,763 [main] Server ready. Listening on port 4321...
Mar 1, 2010 3:56:25 PM com.hazelcast.client.ConnectionManager destroyConnection
WARNING: Connection to Connection [0] [bpmunch/10.10.10.211:5701] is lost
Mar 1, 2010 3:56:25 PM com.hazelcast.client.ConnectionManager getConnection
INFO: Client is connecting to Connection [1] [/10.10.10.109:5701]
java.io.StreamCorruptedException: invalid type code: 00
    at java.io.ObjectInputStream$BlockDataInputStream.readBlockHeader(ObjectInputStream.java:2462)
    at java.io.ObjectInputStream$BlockDataInputStream.refill(ObjectInputStream.java:2497)
    at java.io.ObjectInputStream$BlockDataInputStream.skipBlockData(ObjectInputStream.java:2399)
    at java.io.ObjectInputStream.skipCustomData(ObjectInputStream.java:1895)
    at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1872)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1753)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1329)
    at java.io.ObjectInputStream.readObject(ObjectInputStream.java:351)
    at java.util.TreeMap.buildFromSorted(TreeMap.java:2380)
    at java.util.TreeMap.buildFromSorted(TreeMap.java:2396)
    at java.util.TreeMap.buildFromSorted(TreeMap.java:2396)
    at java.util.TreeMap.buildFromSorted(TreeMap.java:2396)
    at java.util.TreeMap.buildFromSorted(TreeMap.java:2364)
    at java.util.TreeMap.buildFromSorted(TreeMap.java:2396)
    at java.util.TreeMap.buildFromSorted(TreeMap.java:2364)
    at java.util.TreeMap.buildFromSorted(TreeMap.java:2364)
    at java.util.TreeMap.buildFromSorted(TreeMap.java:2364)
    at java.util.TreeMap.buildFromSorted(TreeMap.java:2364)
    at java.util.TreeMap.buildFromSorted(TreeMap.java:2364)
    at java.util.TreeMap.buildFromSorted(TreeMap.java:2322)
    at java.util.TreeMap.readTreeSet(TreeMap.java:2274)
    at java.util.TreeSet.readObject(TreeSet.java:518)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:974)
    at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1846)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1753)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1329)
    at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1945)
    at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1869)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1753)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1329)
    at java.io.ObjectInputStream.readObject(ObjectInputStream.java:351)
    at com.hazelcast.client.Serializer.toObject(Serializer.java:142)
    at com.hazelcast.client.ProxyHelper.getValue(ProxyHelper.java:117)
    at com.hazelcast.client.ProxyHelper.doOp(ProxyHelper.java:98)
    at com.hazelcast.client.MapClientProxy.get(MapClientProxy.java:200)
    at com.ndr.sp.datagrid.service.impl.HazelcastSlotsService.getNumSlices(HazelcastSlotsService.java:58)
    at com.ndr.sp.datagrid.server.ServerMain$DataStatusCommandHandler.handleCommand(ServerMain.java:297)
    at com.ndr.sp.datagrid.server.Server$SocketHandler.run(Server.java:155)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:885)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:907)
    at java.lang.Thread.run(Thread.java:619)
Mar 1, 2010 3:56:25 PM com.hazelcast.client.ConnectionManager destroyConnection
WARNING: Connection to Connection [1] [bpstax/10.10.10.109:5701] is lost
Mar 1, 2010 3:56:25 PM com.hazelcast.client.ConnectionManager getConnection
INFO: Client is connecting to Connection [2] [/10.10.10.100:5701]
java.io.StreamCorruptedException: invalid type code: 00
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1356)
    at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1945)
    at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1869)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1753)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1329)
    at java.io.ObjectInputStream.readObject(ObjectInputStream.java:351)
    at java.util.TreeMap.buildFromSorted(TreeMap.java:2380)
    at java.util.TreeMap.buildFromSorted(TreeMap.java:2396)
    at java.util.TreeMap.buildFromSorted(TreeMap.java:2396)
    at java.util.TreeMap.buildFromSorted(TreeMap.java:2364)
    at java.util.TreeMap.buildFromSorted(TreeMap.java:2396)
    at java.util.TreeMap.buildFromSorted(TreeMap.java:2396)
    at java.util.TreeMap.buildFromSorted(TreeMap.java:2364)
    at java.util.TreeMap.buildFromSorted(TreeMap.java:2396)
    at java.util.TreeMap.buildFromSorted(TreeMap.java:2364)
    at java.util.TreeMap.buildFromSorted(TreeMap.java:2396)
    at java.util.TreeMap.buildFromSorted(TreeMap.java:2364)
    at java.util.TreeMap.buildFromSorted(TreeMap.java:2364)
    at java.util.TreeMap.buildFromSorted(TreeMap.java:2322)
    at java.util.TreeMap.readTreeSet(TreeMap.java:2274)
    at java.util.TreeSet.readObject(TreeSet.java:518)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:974)
    at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1846)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1753)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1329)
    at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1945)
    at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1869)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1753)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1329)
    at java.io.ObjectInputStream.readObject(ObjectInputStream.java:351)
    at com.hazelcast.client.Serializer.toObject(Serializer.java:142)
    at com.hazelcast.client.ProxyHelper.getValue(ProxyHelper.java:117)
    at com.hazelcast.client.ProxyHelper.doOp(ProxyHelper.java:98)
    at com.hazelcast.client.MapClientProxy.get(MapClientProxy.java:200)
    at com.ndr.sp.datagrid.service.impl.HazelcastSlotsService.getNumSlices(HazelcastSlotsService.java:58)
    at com.ndr.sp.datagrid.server.ServerMain$DataStatusCommandHandler.handleCommand(ServerMain.java:297)
    at com.ndr.sp.datagrid.server.Server$SocketHandler.run(Server.java:155)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:885)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:907)
    at java.lang.Thread.run(Thread.java:619)

and one of the cluster nodes printed this:

Mar 1, 2010 3:56:31 PM com.hazelcast.impl.BaseManager$ResponseQueueCall getRedoAwareResult
INFO: c:mpsSeriesMap Re-doing [16] times! MLock operation= CONCURRENT_MAP_LOCK_RETURN_OLD name= c:mpsSeriesMap
Mar 1, 2010 3:56:31 PM com.hazelcast.impl.BaseManager$ResponseQueueCall getRedoAwareResult
INFO: key= Data size = 19, req.operation: CONCURRENT_MAP_LOCK_RETURN_OLD
Mar 1, 2010 3:56:32 PM com.hazelcast.impl.BaseManager$ResponseQueueCall getRedoAwareResult
INFO: c:mpsSeriesMap Re-doing [17] times! MLock operation= CONCURRENT_MAP_LOCK_RETURN_OLD name= c:mpsSeriesMap
Mar 1, 2010 3:56:32 PM com.hazelcast.impl.BaseManager$ResponseQueueCall getRedoAwareResult
INFO: key= Data size = 19, req.operation: CONCURRENT_MAP_LOCK_RETURN_OLD
Mar 1, 2010 3:56:33 PM com.hazelcast.impl.BaseManager$ResponseQueueCall getRedoAwareResult
INFO: c:mpsSeriesMap Re-doing [18] times! MLock operation= CONCURRENT_MAP_LOCK_RETURN_OLD name= c:mpsSeriesMap
Mar 1, 2010 3:56:33 PM com.hazelcast.impl.BaseManager$ResponseQueueCall getRedoAwareResult
INFO: key= Data size = 19, req.operation: CONCURRENT_MAP_LOCK_RETURN_OLD
Mar 1, 2010 3:56:34 PM com.hazelcast.impl.BaseManager$ResponseQueueCall getRedoAwareResult
INFO: c:mpsSeriesMap Re-doing [19] times! MLock operation= CONCURRENT_MAP_LOCK_RETURN_OLD name= c:mpsSeriesMap
...<ad nauseam>...

Any ideas? Worst case, I'll just replace the Java client with a
super-client. Thanks!
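
(For reference, by super-client I mean running the querying process as a
data-less member. The fallback would look something like the sketch
below; this assumes the old hazelcast.super.client system property and
reuses a made-up key, so treat it as illustrative only.)

import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.IMap;

public class QueryAsSuperClient {
    public static void main(String[] args) {
        // Must be set before the Hazelcast instance is created; the process
        // then joins the cluster but stores no partitions (assumption: this
        // is still the property name in 1.8.x).
        System.setProperty("hazelcast.super.client", "true");
        IMap<String, Object> seriesMap = Hazelcast.getMap("mpsSeriesMap");
        // Plain member API, so the client-side serialization path is bypassed.
        System.out.println(seriesMap.get("series-42")); // "series-42" is a placeholder key
    }
}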

-Greg Haines

P.S. - Hazelcast is fantastic! Lean, mean, and it does everything I
need! I tried for six weeks to get Terracotta DSO and tim-masterworker
to do what I wanted and kept bumping into invisible walls. Their
bytecode instrumentation sounds cool and all, but in practice it
encumbers local jobs with cluster awareness, is a really leaky
abstraction, and is just too much "magic".

Greg Haines

Mar 1, 2010, 4:20:23 PM
to Hazelcast
More info:
Cluster config (hazelcast.xml):

<?xml version="1.0" encoding="UTF-8"?>
<hazelcast>
    <group>
        <name>ndr.sp.datagrid</name>
        <password>foobar</password>
    </group>
    <network>
        <port auto-increment="true">5701</port>
        <join>
            <multicast enabled="true">
                <multicast-group>224.2.2.3</multicast-group>
                <multicast-port>54327</multicast-port>
            </multicast>
            <tcp-ip enabled="false" />
        </join>
    </network>
    <executor-service>
        <core-pool-size>16</core-pool-size>
        <max-pool-size>64</max-pool-size>
        <keep-alive-seconds>60</keep-alive-seconds>
    </executor-service>
    <map name="slotDataMap">
        <backup-count>1</backup-count>
        <near-cache />
    </map>
    <map name="sliceMap">
        <backup-count>1</backup-count>
        <near-cache />
    </map>
    <map name="sliceGroupMap">
        <backup-count>1</backup-count>
        <near-cache />
    </map>
    <map name="mpsListMap">
        <backup-count>1</backup-count>
        <near-cache />
    </map>
    <map name="mpsSeriesMap">
        <backup-count>1</backup-count>
        <near-cache>
            <max-size>10000</max-size>
            <eviction-policy>LFU</eviction-policy>
        </near-cache>
    </map>
</hazelcast>

Cluster size: 3 nodes (storing data and executing jobs), 1 super-client (loading data), 1 Java client (querying)
OS: RedHat Linux (all)
Java version: 1.6.0_03 (nodes and Java client), 1.6.0_14 (super-client)
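
(The nodes themselves don't do anything special with this config: as far
as I understand, hazelcast.xml just gets picked up from the working
directory/classpath, and the code only asks for the maps by name,
roughly like this sketch with made-up class names:)

import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.IMap;

public class NodeMain {
    public static void main(String[] args) {
        // The per-map settings above (backup-count, near-cache) apply purely
        // through the names used here; there is no programmatic configuration.
        IMap<String, Object> seriesMap = Hazelcast.getMap("mpsSeriesMap");
        IMap<String, Object> sliceMap = Hazelcast.getMap("sliceMap");
        System.out.println("Cluster members: " + Hazelcast.getCluster().getMembers().size());
        System.out.println("mpsSeriesMap=" + seriesMap.size() + ", sliceMap=" + sliceMap.size());
    }
}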

The second node gave this error:

Mar 1, 2010 3:56:58 PM com.hazelcast.cluster.ClusterService dequeueProcessables
SEVERE: error processing messages obj=MMigrate operation= CONCURRENT_MAP_MIGRATE_RECORD name= c:mpsSeriesMap
java.lang.NullPointerException
    at java.lang.System.arraycopy(Native Method)
    at com.hazelcast.impl.Request.setPacket(Request.java:188)
    at com.hazelcast.impl.BaseManager$TargetAwareOp.invoke(BaseManager.java:815)
    at com.hazelcast.impl.BaseManager$TargetAwareOp.process(BaseManager.java:807)
    at com.hazelcast.impl.ConcurrentMapManager$MBackupAwareOp.process(ConcurrentMapManager.java:1003)
    at com.hazelcast.cluster.ClusterService.processProcessable(ClusterService.java:150)
    at com.hazelcast.cluster.ClusterService.dequeueProcessables(ClusterService.java:210)
    at com.hazelcast.cluster.ClusterService.run(ClusterService.java:161)
    at java.lang.Thread.run(Thread.java:619)

While the third showed this:

Exception in thread "hz.executor-_hzInstance_0-thread-11" java.lang.Error: Only ServiceThread can access this method. Thread[hz.executor-_hzInstance_0-thread-11,5,_hzInstance_0]
    at com.hazelcast.impl.ConcurrentMapManager.checkServiceThread(ConcurrentMapManager.java:1356)
    at com.hazelcast.impl.ConcurrentMapManager.getTarget(ConcurrentMapManager.java:1277)
    at com.hazelcast.impl.ConcurrentMapManager.getKeyOwner(ConcurrentMapManager.java:1072)
    at com.hazelcast.impl.BaseManager.getKeyOwner(BaseManager.java:969)
    at com.hazelcast.impl.ExecutorManager$DistributedExecutorAction.handleMemberLeft(ExecutorManager.java:443)
    at com.hazelcast.impl.ExecutorManager.memberRemoved(ExecutorManager.java:710)
    at com.hazelcast.cluster.ClusterImpl$2.run(ClusterImpl.java:82)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
    at java.lang.Thread.run(Thread.java:619)

Talip Ozturk

Mar 2, 2010, 4:28:47 AM
to haze...@googlegroups.com
Greg,

Most of these exceptions can no longer happen. The root cause of your
problem seems to be the ExecutorService's handling of the member
disconnect. That part has been completely re-implemented.

Please download and try the latest snapshot, or wait for the 1.8.1
final, which is about to be released.

-talip

