BlockingQueue poll() returns too many objects

jm

Jul 9, 2009, 6:49:40 AM
to Hazelcast
Hi,

this looks suspiciously like http://code.google.com/p/hazelcast/issues/detail?id=43,
but I am using Hazelcast 1.6, where it is supposed to be fixed
already... so here is my case:

I am having a problem with a BlockingQueue<String>: the same
objects are returned several times. I'll try to explain my code.
This is Hazelcast 1.6 on JDK 1.6.0_10 on Vista.

One process (mp) does a put() on the queue from time to time,
and the other process (rp) does a poll() every second. I launched
both yesterday afternoon; mp did 3 put()s, which were properly
received by poll() on rp. Then I left the machine on and went home.
It eventually went to sleep. When I arrived this morning and the
machine woke up, rp got the same 3 objects again in 3 consecutive poll()s.

The call to poll is like this:
res = queryswaiting.poll(100, TimeUnit.MILLISECONDS);
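
In context, rp's loop is roughly this (a trimmed-down sketch of my
real code; the class name and the println are just stand-ins for my
actual classes and handler):

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

import com.hazelcast.core.Hazelcast;

public class RpLoopSketch {
    public static void main(String[] args) throws InterruptedException {
        // Distributed queue shared with mp; mp put()s, rp polls.
        BlockingQueue<String> queryswaiting = Hazelcast.getQueue("queryswaiting");
        while (true) {
            String res = queryswaiting.poll(100, TimeUnit.MILLISECONDS);
            if (res != null) {
                // Stand-in for the real handler; each element should
                // come out of the queue exactly once.
                System.out.println("got " + res);
            }
            Thread.sleep(1000); // rp polls every second
        }
    }
}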

I had logging enabled; here it is after removing unnecessary entries.
Note that I don't see anything about the put() or poll() in the logs,
and I thought all logging was enabled. In any case I can see the error
the spurious poll() results create in my code.

thanks!!
javier


mp
========
Server process Manager...@172.26.0.1:-1 initialized.
Hazelcast 1.6 (20090604) starting at Address[127.0.0.1:5701]
Copyright (C) 2009 Hazelcast.com
Wed Jul 08 18:41:56 CEST 2009 INFO: [com.hazelcast.impl.Node]


Members [1] {
Member [127.0.0.1:5701] this
}

Wed Jul 08 18:41:56 CEST 2009 FINEST:
[com.hazelcast.impl.ThreadContext] hz.ServiceThread is starting with
cacheSize 100
Wed Jul 08 18:43:21 CEST 2009 FINEST:
[com.hazelcast.impl.ThreadContext] hz.InThread is starting with
cacheSize 100
Wed Jul 08 18:43:21 CEST 2009 FINEST:
[com.hazelcast.impl.ClusterManager] false Handling JoinRequest
{nodeType=1, address=Address[127.0.0.1:5702], groupName='dev',
groupPassword='dev-pass'}
Wed Jul 08 18:43:21 CEST 2009 FINEST:
[com.hazelcast.impl.ThreadContext] hz.OutThread is starting with
cacheSize 0
Wed Jul 08 18:43:21 CEST 2009 FINEST:
[com.hazelcast.impl.ClusterManager] false Handling JoinRequest
{nodeType=1, address=Address[127.0.0.1:5702], groupName='dev',
groupPassword='dev-pass'}
Wed Jul 08 18:43:22 CEST 2009 FINEST:
[com.hazelcast.impl.ClusterManager] false Handling JoinRequest
{nodeType=1, address=Address[127.0.0.1:5702], groupName='dev',
groupPassword='dev-pass'}
Wed Jul 08 18:43:22 CEST 2009 FINEST:
[com.hazelcast.impl.ClusterManager] false Handling JoinRequest
{nodeType=1, address=Address[127.0.0.1:5702], groupName='dev',
groupPassword='dev-pass'}
Wed Jul 08 18:43:23 CEST 2009 FINEST:
[com.hazelcast.impl.ClusterManager] false Handling JoinRequest
{nodeType=1, address=Address[127.0.0.1:5702], groupName='dev',
groupPassword='dev-pass'}
Wed Jul 08 18:43:23 CEST 2009 FINEST:
[com.hazelcast.impl.ClusterManager] false Handling JoinRequest
{nodeType=1, address=Address[127.0.0.1:5702], groupName='dev',
groupPassword='dev-pass'}
Wed Jul 08 18:43:24 CEST 2009 FINEST:
[com.hazelcast.impl.ClusterManager] false Handling JoinRequest
{nodeType=1, address=Address[127.0.0.1:5702], groupName='dev',
groupPassword='dev-pass'}
Wed Jul 08 18:43:24 CEST 2009 FINEST:
[com.hazelcast.impl.ClusterManager] false Handling JoinRequest
{nodeType=1, address=Address[127.0.0.1:5702], groupName='dev',
groupPassword='dev-pass'}
Wed Jul 08 18:43:25 CEST 2009 FINEST:
[com.hazelcast.impl.ClusterManager] false Handling JoinRequest
{nodeType=1, address=Address[127.0.0.1:5702], groupName='dev',
groupPassword='dev-pass'}
Wed Jul 08 18:43:25 CEST 2009 FINEST:
[com.hazelcast.impl.ClusterManager] false Handling JoinRequest
{nodeType=1, address=Address[127.0.0.1:5702], groupName='dev',
groupPassword='dev-pass'}
Wed Jul 08 18:43:26 CEST 2009 FINEST:
[com.hazelcast.impl.ClusterManager] false Handling JoinRequest
{nodeType=1, address=Address[127.0.0.1:5702], groupName='dev',
groupPassword='dev-pass'}
Wed Jul 08 18:43:26 CEST 2009 FINEST:
[com.hazelcast.impl.ClusterManager] false Handling JoinRequest
{nodeType=1, address=Address[127.0.0.1:5702], groupName='dev',
groupPassword='dev-pass'}
Wed Jul 08 18:43:27 CEST 2009 FINEST:
[com.hazelcast.impl.ClusterManager] false Handling JoinRequest
{nodeType=1, address=Address[127.0.0.1:5702], groupName='dev',
groupPassword='dev-pass'}
Wed Jul 08 18:43:27 CEST 2009 FINEST:
[com.hazelcast.impl.ThreadContext] hz.pool-1-thread-2 is starting with
cacheSize 12
Wed Jul 08 18:43:27 CEST 2009 INFO:
[com.hazelcast.impl.ClusterManager]

Members [2] {
Member [127.0.0.1:5701] this
Member [127.0.0.1:5702]
}

Wed Jul 08 18:43:27 CEST 2009 FINEST:
[com.hazelcast.impl.ConcurrentMapManager] Migration Complete from
Address[127.0.0.1:5701]
Wed Jul 08 18:43:27 CEST 2009 FINEST:
[com.hazelcast.impl.ConcurrentMapManager] Migration Complete from
Address[127.0.0.1:5702]
Wed Jul 08 18:43:56 CEST 2009 FINEST:
[com.hazelcast.impl.ThreadContext] main is starting with cacheSize 12
Wed Jul 08 18:43:57 CEST 2009 FINEST:
[com.hazelcast.impl.ThreadContext] hz.pool-1-thread-5 is starting with
cacheSize 12
Wed Jul 08 18:44:10 CEST 2009 FINEST:
[com.hazelcast.impl.ThreadContext] hz.pool-1-thread-6 is starting with
cacheSize 12
Wed Jul 08 18:44:57 CEST 2009 FINEST:
[com.hazelcast.impl.ThreadContext] hz.pool-1-thread-7 is starting with
cacheSize 12
Thu Jul 09 09:04:23 CEST 2009 FINEST:
[com.hazelcast.impl.ClusterManager] NO HEARTBEAT should remove Address
[127.0.0.1:5702]


Members [1] {
Member [127.0.0.1:5701] this
}



Members [1] {
Member [127.0.0.1:5701] this
}

Thu Jul 09 09:04:23 CEST 2009 FINEST:
[com.hazelcast.impl.ConcurrentMapManager] Migration Complete from
Address[127.0.0.1:5701]
Thu Jul 09 09:04:23 CEST 2009 FINEST:
[com.hazelcast.impl.ConcurrentMapManager] Migration Complete from
Address[127.0.0.1:5701]
20090709-09:06:26|[[[[[BaseServerProcess.java[run[680[INFO[
Thu Jul 09 09:06:26 CEST 2009 FINEST: [com.hazelcast.impl.Node$1]
Hazelcast ShutdownHook is shutting down!


rp
=======
Server process RunnerP...@172.26.0.1:-1 initialized.
Hazelcast 1.6 (20090604) starting at Address[127.0.0.1:5702]
Copyright (C) 2009 Hazelcast.com
Wed Jul 08 18:43:21 CEST 2009 FINEST:
[com.hazelcast.impl.ThreadContext] main is starting with cacheSize 12
Wed Jul 08 18:43:21 CEST 2009 FINEST:
[com.hazelcast.impl.ThreadContext] hz.OutThread is starting with
cacheSize 0
Wed Jul 08 18:43:21 CEST 2009 FINEST:
[com.hazelcast.impl.ThreadContext] hz.InThread is starting with
cacheSize 100
Wed Jul 08 18:43:21 CEST 2009 FINEST:
[com.hazelcast.impl.ThreadContext] hz.ServiceThread is starting with
cacheSize 100
Wed Jul 08 18:43:27 CEST 2009 INFO:
[com.hazelcast.impl.ClusterManager]

Members [2] {
Member [127.0.0.1:5701]
Member [127.0.0.1:5702] this
}

Wed Jul 08 18:43:27 CEST 2009 FINEST:
[com.hazelcast.impl.ConcurrentMapManager] Migration Complete from
Address[127.0.0.1:5701]
Wed Jul 08 18:43:27 CEST 2009 FINEST:
[com.hazelcast.impl.ConcurrentMapManager] Migration Complete from
Address[127.0.0.1:5702]
20090708-18:43:27|[[[[RunnerProcess1[RunnerWorker.java[<init>[43[INFO
[Server process RunnerP...@172.26.0.1:-1 initialized.
20090708-18:43:27|[[[[RunnerProcess1[BaseServerProcess.java[doLoop[423
[INFO[Initializing worker RunnerProcess1-0.
Wed Jul 08 18:43:27 CEST 2009 FINEST:
[com.hazelcast.impl.ThreadContext] Thread-7 is starting with cacheSize
12
20090708-18:43:56|[[[[RunnerProcess1[RunnerWorker.java[process[64[INFO
[Initializing worker RunnerProcess1-0.
20090708-18:43:57|[[33[[RunnerProcess1[RunnerWorker.java[processForm
[143[INFO[VanillaQuery found: D:\projects\adamda\adam\mig-master\..
\runtime\webwork\todo\33.xml
20090708-18:43:57|[CreateQueryPojoStep[33[[RunnerProcess1
[QueryInfo.java[updateStartStep[49[INFO[==>>>>>>>>> RunnerProcess1-0
starting 33
20090708-18:44:57|[[all-rec[[RunnerProcess1[RunnerWorker.java[process
[99[INFO[==<<<<<<<<< RunnerProcess1-0 ended all-rec


Members [1] {
Member [127.0.0.1:5702] this
}



Members [1] {
Member [127.0.0.1:5702] this
}

Thu Jul 09 09:04:23 CEST 2009 FINEST:
[com.hazelcast.impl.ConcurrentMapManager] Migration Complete from
Address[127.0.0.1:5702]
20090709-09:04:23|[[[[RunnerProcess1[RunnerWorker.java[process[64[INFO
[moved to D:\projects\adamda\adam\mig-master\..\runtime\webwork\done
\all-rec.xml
20090709-09:04:23|[[[[RunnerProcess1[LogUtils.java[LogEx[167[ERROR
[VanillaQuery found: D:\projects\adamda\adam\mig-master\..\runtime
\webwork\todo\33.xml
Thu Jul 09 09:04:23 CEST 2009 FINEST:
[com.hazelcast.impl.ConcurrentMapManager] Migration Complete from
Address[127.0.0.1:5702]

Talip Ozturk

Jul 9, 2009, 6:56:29 AM
to haze...@googlegroups.com
Is this reproducible? Can you try to reproduce it with the latest snapshot at
http://code.google.com/p/hazelcast/wiki/Downloads?tm=2

-talip
--
http://www.linkedin.com/in/talipozturk
twitter @oztalip

jm

Jul 9, 2009, 6:59:24 AM
to Hazelcast
I'll try to reproduce it in a smaller piece of code and see if the
snapshot fixes it...

On Jul 9, 12:56 pm, Talip Ozturk <ozta...@gmail.com> wrote:
> Is this reproducible? Can you try to reproduce it with the latest snapshot at http://code.google.com/p/hazelcast/wiki/Downloads?tm=2
>
> -talip
>
>
>
> On Thu, Jul 9, 2009 at 1:49 PM, jm <jmugur...@gmail.com> wrote:
>
> > Hi,
>
> > this looks suspiciously like http://code.google.com/p/hazelcast/issues/detail?id=43,
> > but I am using Hazelcast 1.6, where it is supposed to be fixed
> > already... so here is my case:
>
> > I am having a problem with a BlockingQueue<String>: the same
> > objects are returned several times. I'll try to explain my code.
> > This is Hazelcast 1.6 on JDK 1.6.0_10 on Vista.
>
> > One process (mp) does a put() on the queue from time to time,
> > and the other process (rp) does a poll() every second. I launched
> > both yesterday afternoon; mp did 3 put()s, which were properly
> > received by poll() on rp. Then I left the machine on and went home.
> > It eventually went to sleep. When I arrived this morning and the
> > machine woke up, rp got the same 3 objects again in 3 consecutive poll()s.
>
> > The call to poll is like this:
> >            res = queryswaiting.poll(100, TimeUnit.MILLISECONDS);
>
> > I had logging enabled; here it is after removing unnecessary entries.
> > Note that I don't see anything about the put() or poll() in the logs,
> > and I thought all logging was enabled. In any case I can see the error
> > the spurious poll() results create in my code.
>
> > thanks!!
> > javier
>
> > mp
> > ========
> > Server process ManagerProc...@172.26.0.1:-1 initialized.
> > Server process RunnerProce...@172.26.0.1:-1 initialized.
> > [Server process RunnerProce...@172.26.0.1:-1 initialized.

Talip Ozturk

Jul 9, 2009, 7:00:44 AM
to haze...@googlegroups.com
That would be great... thanks,
-talip

jm

Jul 9, 2009, 7:35:54 AM
to Hazelcast
Hi Talip,

I developed a small test, and I reproduced the problem at least once
with 1.6:

import java.util.concurrent.*;

import com.hazelcast.core.Hazelcast;

public class HazelcastTest {
    protected static BlockingQueue<String> queryswaiting;

    public static void main(String[] args) throws InterruptedException {
        queryswaiting = Hazelcast.getQueue("queryswaiting");
        if (args.length == 1) {
            // Producer: put two elements, then just keep the node alive.
            queryswaiting.put("one");
            queryswaiting.put("two");
            while (true) {
                Thread.sleep(5000);
            }
        } else {
            // Consumer: poll with a short timeout and print whatever arrives.
            while (true) {
                String res = queryswaiting.poll(100, TimeUnit.MILLISECONDS);
                if (res != null) {
                    System.out.println("poll got " + res);
                }
                Thread.sleep(5000);
            }
        }
    }
}
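
(The 100 ms timeout and the 5-second sleeps are arbitrary; they just
keep the poller cheap. The point is that each element should come out
of the queue exactly once, whatever the timing.)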


1. Run it without params.
2. Run it with 1 param.
3. Wait to see:
poll got one
poll got two
4. Put the machine to sleep.
5. Wake it up.
6. I got again:
poll got one
poll got two

Then I tried to reproduce it with the snapshot and could not, but then
I tried again with 1.6 to double-check and could not reproduce it
either, so I am still not sure it is fixed... hopefully you can tell
for sure if the test case helps.

Would the snapshot be ready for production?

javier
On Jul 9, 1:00 pm, Talip Ozturk <ozta...@gmail.com> wrote:
> That would be great... thanks,
> -talip

jm

Jul 9, 2009, 9:40:01 AM
to Hazelcast
1. I reproduced it with the snapshot too:
2. Run with 1 param.
3. Run without params.
4. See:
poll got one
poll got two
5. Sleep, then wake up.
6. I got this (process doing the poll):

Thu Jul 09 15:34:54 CEST 2009 FINEST: [com.hazelcast.impl.BaseManager]
Executor core:16, max:64, keepAlive:60
Thu Jul 09 15:34:54 CEST 2009 FINEST:
[com.hazelcast.cluster.ClusterManager] ClusterManager adding Member
[172.26.0.63:5702] this
Thu Jul 09 15:34:54 CEST 2009 INFO: [com.hazelcast.nio.InSelector]
Started Selector at 5702
Hazelcast 1.6-snapshot-20090709 (20090709) starting at Address
[172.26.0.63:5702]
Copyright (C) 2009 Hazelcast.com
Thu Jul 09 15:34:54 CEST 2009 FINEST: [com.hazelcast.impl.Node] Address
[172.26.0.63:5702] master: Address[172.26.0.63:5701]
Thu Jul 09 15:34:54 CEST 2009 FINEST: [com.hazelcast.impl.Node]
joining... Address[172.26.0.63:5701]
Thu Jul 09 15:34:54 CEST 2009 INFO: [com.hazelcast.nio.OutSelector]
connect to Address[172.26.0.63:5701]
Thu Jul 09 15:34:54 CEST 2009 FINEST: [com.hazelcast.nio.OutSelector
$Connector] connecting to Address[172.26.0.63:5701]
Thu Jul 09 15:34:54 CEST 2009 FINEST: [com.hazelcast.nio.OutSelector
$Connector] connection check. connected: false, Address
[172.26.0.63:5701]
Thu Jul 09 15:34:54 CEST 2009 FINEST: [com.hazelcast.nio.OutSelector
$Connector] connected to Address[172.26.0.63:5701]
Thu Jul 09 15:34:54 CEST 2009 FINEST:
[com.hazelcast.impl.ThreadContext] hz.ServiceThread is starting with
cacheSize 100
Thu Jul 09 15:34:54 CEST 2009 FINEST:
[com.hazelcast.impl.ThreadContext] hz.OutThread is starting with
cacheSize 0
Thu Jul 09 15:34:55 CEST 2009 FINEST: [com.hazelcast.impl.Node] Master
connnection Connection [Address[172.26.0.63:5701]] live=true
Thu Jul 09 15:34:55 CEST 2009 FINEST:
[com.hazelcast.impl.ThreadContext] main is starting with cacheSize 12
Thu Jul 09 15:34:55 CEST 2009 FINEST:
[com.hazelcast.impl.ThreadContext] hz.InThread is starting with
cacheSize 100
Thu Jul 09 15:34:55 CEST 2009 FINEST: [com.hazelcast.impl.Node]
joining... Address[172.26.0.63:5701]
Thu Jul 09 15:34:55 CEST 2009 FINEST: [com.hazelcast.impl.Node] Master
connnection Connection [Address[172.26.0.63:5701]] live=true
Thu Jul 09 15:34:56 CEST 2009 FINEST: [com.hazelcast.impl.Node]
joining... Address[172.26.0.63:5701]
Thu Jul 09 15:34:56 CEST 2009 FINEST: [com.hazelcast.impl.Node] Master
connnection Connection [Address[172.26.0.63:5701]] live=true
Thu Jul 09 15:34:56 CEST 2009 FINEST: [com.hazelcast.impl.Node]
joining... Address[172.26.0.63:5701]
Thu Jul 09 15:34:56 CEST 2009 FINEST: [com.hazelcast.impl.Node] Master
connnection Connection [Address[172.26.0.63:5701]] live=true
Thu Jul 09 15:34:57 CEST 2009 FINEST: [com.hazelcast.impl.Node]
joining... Address[172.26.0.63:5701]
Thu Jul 09 15:34:57 CEST 2009 FINEST: [com.hazelcast.impl.Node] Master
connnection Connection [Address[172.26.0.63:5701]] live=true
Thu Jul 09 15:34:57 CEST 2009 FINEST: [com.hazelcast.impl.Node]
joining... Address[172.26.0.63:5701]
Thu Jul 09 15:34:57 CEST 2009 FINEST: [com.hazelcast.impl.Node] Master
connnection Connection [Address[172.26.0.63:5701]] live=true
Thu Jul 09 15:34:58 CEST 2009 FINEST: [com.hazelcast.impl.Node]
joining... Address[172.26.0.63:5701]
Thu Jul 09 15:34:58 CEST 2009 FINEST: [com.hazelcast.impl.Node] Master
connnection Connection [Address[172.26.0.63:5701]] live=true
Thu Jul 09 15:34:58 CEST 2009 FINEST: [com.hazelcast.impl.Node]
joining... Address[172.26.0.63:5701]
Thu Jul 09 15:34:58 CEST 2009 FINEST: [com.hazelcast.impl.Node] Master
connnection Connection [Address[172.26.0.63:5701]] live=true
Thu Jul 09 15:34:59 CEST 2009 FINEST: [com.hazelcast.impl.Node]
joining... Address[172.26.0.63:5701]
Thu Jul 09 15:34:59 CEST 2009 FINEST: [com.hazelcast.impl.Node] Master
connnection Connection [Address[172.26.0.63:5701]] live=true
Thu Jul 09 15:34:59 CEST 2009 FINEST: [com.hazelcast.impl.Node]
joining... Address[172.26.0.63:5701]
Thu Jul 09 15:34:59 CEST 2009 FINEST: [com.hazelcast.impl.Node] Master
connnection Connection [Address[172.26.0.63:5701]] live=true
Thu Jul 09 15:35:00 CEST 2009 FINEST: [com.hazelcast.impl.Node]
joining... Address[172.26.0.63:5701]
Thu Jul 09 15:35:00 CEST 2009 FINEST: [com.hazelcast.impl.Node] Master
connnection Connection [Address[172.26.0.63:5701]] live=true
Thu Jul 09 15:35:00 CEST 2009 FINEST: [com.hazelcast.impl.Node]
joining... Address[172.26.0.63:5701]
Thu Jul 09 15:35:00 CEST 2009 FINEST: [com.hazelcast.impl.Node] Master
connnection Connection [Address[172.26.0.63:5701]] live=true
Thu Jul 09 15:35:00 CEST 2009 FINEST:
[com.hazelcast.cluster.ClusterManager] MEMBERS UPDATE!!
Thu Jul 09 15:35:00 CEST 2009 FINEST:
[com.hazelcast.cluster.ClusterManager] ClusterManager adding Member
[172.26.0.63:5701] *
Thu Jul 09 15:35:00 CEST 2009 FINEST:
[com.hazelcast.cluster.ClusterManager] ClusterManager adding Member
[172.26.0.63:5702] this
Thu Jul 09 15:35:00 CEST 2009 INFO:
[com.hazelcast.cluster.ClusterManager]

Members [2] {
Member [172.26.0.63:5701] *
Member [172.26.0.63:5702] this
}

Thu Jul 09 15:35:00 CEST 2009 FINEST: [com.hazelcast.impl.BaseManager]
q:queryswaiting.maxSizePerJVM=10000
Thu Jul 09 15:35:00 CEST 2009 FINEST: [com.hazelcast.impl.BaseManager]
q:queryswaiting.maxAge=2147483647
Thu Jul 09 15:35:00 CEST 2009 FINEST:
[com.hazelcast.impl.ConcurrentMapManager] Migration Complete from
Address[172.26.0.63:5702]
Thu Jul 09 15:35:00 CEST 2009 FINEST:
[com.hazelcast.impl.ConcurrentMapManager$33$1] Migration ended!
Thu Jul 09 15:35:00 CEST 2009 FINEST:
[com.hazelcast.impl.ConcurrentMapManager] Migration Complete from
Address[172.26.0.63:5701]
Thu Jul 09 15:35:01 CEST 2009 FINEST: [com.hazelcast.impl.Node] Join
DONE
poll got one
poll got two
Thu Jul 09 15:38:43 CEST 2009 FINEST: [com.hazelcast.impl.BaseManager]
Removing Address Address[172.26.0.63:5701]
Thu Jul 09 15:38:43 CEST 2009 FINEST: [com.hazelcast.impl.BaseManager]
Now Master Address[172.26.0.63:5702]
Thu Jul 09 15:38:43 CEST 2009 FINEST:
[com.hazelcast.cluster.ClusterManager] removing Address
[172.26.0.63:5701]
Thu Jul 09 15:38:43 CEST 2009 FINEST: [com.hazelcast.impl.BaseManager]
Address[172.26.0.63:5701] is dead and its backup was Member
[172.26.0.63:5702] this *
Thu Jul 09 15:38:43 CEST 2009 INFO:
[com.hazelcast.cluster.ClusterManager]

Members [1] {
Member [172.26.0.63:5702] this *
}

Thu Jul 09 15:38:43 CEST 2009 FINEST: [com.hazelcast.impl.BaseManager]
Removing Address Address[172.26.0.63:5701]
poll got one
Thu Jul 09 15:38:43 CEST 2009 FINEST: [com.hazelcast.impl.BaseManager]
Address[172.26.0.63:5701] is dead and its backup was null
Thu Jul 09 15:38:43 CEST 2009 INFO:
[com.hazelcast.cluster.ClusterManager]

Members [1] {
Member [172.26.0.63:5702] this *
}

Thu Jul 09 15:38:43 CEST 2009 FINEST:
[com.hazelcast.impl.ConcurrentMapManager] Migration Complete from
Address[172.26.0.63:5702]
Thu Jul 09 15:38:43 CEST 2009 FINEST:
[com.hazelcast.impl.ConcurrentMapManager$33$1] Migration ended!
Thu Jul 09 15:38:43 CEST 2009 FINEST:
[com.hazelcast.impl.ConcurrentMapManager] Migration Complete from
Address[172.26.0.63:5702]
Thu Jul 09 15:38:43 CEST 2009 FINEST:
[com.hazelcast.impl.ConcurrentMapManager$33$1] Migration ended!
poll got two


The one doing the put:
Thu Jul 09 15:34:41 CEST 2009 FINEST: [com.hazelcast.impl.BaseManager]
Executor core:16, max:64, keepAlive:60
Thu Jul 09 15:34:41 CEST 2009 FINEST:
[com.hazelcast.cluster.ClusterManager] ClusterManager adding Member
[172.26.0.63:5701] this
Thu Jul 09 15:34:41 CEST 2009 INFO: [com.hazelcast.nio.InSelector]
Started Selector at 5701
Hazelcast 1.6-snapshot-20090709 (20090709) starting at Address
[172.26.0.63:5701]
Copyright (C) 2009 Hazelcast.com
Thu Jul 09 15:34:43 CEST 2009 FINEST: [com.hazelcast.impl.Node] Address
[172.26.0.63:5701] master: null
Thu Jul 09 15:34:43 CEST 2009 FINEST:
[com.hazelcast.cluster.ClusterManager] ClusterManager adding Member
[172.26.0.63:5701] this
Thu Jul 09 15:34:43 CEST 2009 FINEST: [com.hazelcast.impl.Node] Join
DONE
Thu Jul 09 15:34:43 CEST 2009 INFO: [com.hazelcast.impl.Node]


Members [1] {
Member [172.26.0.63:5701] this *
}

Thu Jul 09 15:34:43 CEST 2009 FINEST:
[com.hazelcast.impl.ThreadContext] hz.ServiceThread is starting with
cacheSize 100
Thu Jul 09 15:34:43 CEST 2009 FINEST:
[com.hazelcast.impl.ThreadContext] main is starting with cacheSize 12
Thu Jul 09 15:34:43 CEST 2009 FINEST: [com.hazelcast.impl.BaseManager]
q:queryswaiting.maxSizePerJVM=10000
Thu Jul 09 15:34:43 CEST 2009 FINEST: [com.hazelcast.impl.BaseManager]
q:queryswaiting.maxAge=2147483647
Thu Jul 09 15:34:54 CEST 2009 INFO: [com.hazelcast.nio.InSelector
$Acceptor] 5701 this socket is connected to /172.26.0.63:62217
Thu Jul 09 15:34:54 CEST 2009 FINEST:
[com.hazelcast.impl.ThreadContext] hz.InThread is starting with
cacheSize 100
Thu Jul 09 15:34:55 CEST 2009 FINEST:
[com.hazelcast.cluster.ClusterManager] false Handling JoinRequest
{nodeType=MEMBER, address=Address[172.26.0.63:5702], groupName='dev',
groupPassword='dev-pass'}
Thu Jul 09 15:34:55 CEST 2009 FINEST:
[com.hazelcast.impl.ThreadContext] hz.OutThread is starting with
cacheSize 0
Thu Jul 09 15:34:55 CEST 2009 FINEST:
[com.hazelcast.cluster.ClusterManager] false Handling JoinRequest
{nodeType=MEMBER, address=Address[172.26.0.63:5702], groupName='dev',
groupPassword='dev-pass'}
Thu Jul 09 15:34:55 CEST 2009 FINEST:
[com.hazelcast.cluster.ClusterManager] false Handling JoinRequest
{nodeType=MEMBER, address=Address[172.26.0.63:5702], groupName='dev',
groupPassword='dev-pass'}
Thu Jul 09 15:34:56 CEST 2009 FINEST:
[com.hazelcast.cluster.ClusterManager] false Handling JoinRequest
{nodeType=MEMBER, address=Address[172.26.0.63:5702], groupName='dev',
groupPassword='dev-pass'}
Thu Jul 09 15:34:56 CEST 2009 FINEST:
[com.hazelcast.cluster.ClusterManager] false Handling JoinRequest
{nodeType=MEMBER, address=Address[172.26.0.63:5702], groupName='dev',
groupPassword='dev-pass'}
Thu Jul 09 15:34:57 CEST 2009 FINEST:
[com.hazelcast.cluster.ClusterManager] false Handling JoinRequest
{nodeType=MEMBER, address=Address[172.26.0.63:5702], groupName='dev',
groupPassword='dev-pass'}
Thu Jul 09 15:34:57 CEST 2009 FINEST:
[com.hazelcast.cluster.ClusterManager] false Handling JoinRequest
{nodeType=MEMBER, address=Address[172.26.0.63:5702], groupName='dev',
groupPassword='dev-pass'}
Thu Jul 09 15:34:58 CEST 2009 FINEST:
[com.hazelcast.cluster.ClusterManager] false Handling JoinRequest
{nodeType=MEMBER, address=Address[172.26.0.63:5702], groupName='dev',
groupPassword='dev-pass'}
Thu Jul 09 15:34:58 CEST 2009 FINEST:
[com.hazelcast.cluster.ClusterManager] false Handling JoinRequest
{nodeType=MEMBER, address=Address[172.26.0.63:5702], groupName='dev',
groupPassword='dev-pass'}
Thu Jul 09 15:34:59 CEST 2009 FINEST:
[com.hazelcast.cluster.ClusterManager] false Handling JoinRequest
{nodeType=MEMBER, address=Address[172.26.0.63:5702], groupName='dev',
groupPassword='dev-pass'}
Thu Jul 09 15:34:59 CEST 2009 FINEST:
[com.hazelcast.cluster.ClusterManager] false Handling JoinRequest
{nodeType=MEMBER, address=Address[172.26.0.63:5702], groupName='dev',
groupPassword='dev-pass'}
Thu Jul 09 15:35:00 CEST 2009 FINEST:
[com.hazelcast.cluster.ClusterManager] false Handling JoinRequest
{nodeType=MEMBER, address=Address[172.26.0.63:5702], groupName='dev',
groupPassword='dev-pass'}
Thu Jul 09 15:35:00 CEST 2009 FINEST:
[com.hazelcast.cluster.ClusterManager] false Handling JoinRequest
{nodeType=MEMBER, address=Address[172.26.0.63:5702], groupName='dev',
groupPassword='dev-pass'}
Thu Jul 09 15:35:00 CEST 2009 FINEST:
[com.hazelcast.impl.ThreadContext] hz.pool-1-thread-2 is starting with
cacheSize 12
Thu Jul 09 15:35:00 CEST 2009 FINEST:
[com.hazelcast.cluster.ClusterManager] MEMBERS UPDATE!!
Thu Jul 09 15:35:00 CEST 2009 FINEST:
[com.hazelcast.cluster.ClusterManager] ClusterManager adding Member
[172.26.0.63:5701] this *
Thu Jul 09 15:35:00 CEST 2009 FINEST:
[com.hazelcast.cluster.ClusterManager] ClusterManager adding Member
[172.26.0.63:5702]
Thu Jul 09 15:35:00 CEST 2009 INFO:
[com.hazelcast.cluster.ClusterManager]

Members [2] {
Member [172.26.0.63:5701] this *
Member [172.26.0.63:5702]
}

Thu Jul 09 15:35:00 CEST 2009 FINEST: [com.hazelcast.impl.BaseManager]
Backup changed!!! so backing up to Member [172.26.0.63:5702]
Thu Jul 09 15:35:00 CEST 2009 FINEST: [com.hazelcast.impl.BaseManager]
Backup changed!!! so backing up to Member [172.26.0.63:5702]
Thu Jul 09 15:35:00 CEST 2009 FINEST: [com.hazelcast.impl.BaseManager]
Backup changed!!! so backing up to Member [172.26.0.63:5702]
Thu Jul 09 15:35:00 CEST 2009 FINEST: [com.hazelcast.impl.BaseManager]
Backup changed!!! so backing up to Member [172.26.0.63:5702]
Thu Jul 09 15:35:00 CEST 2009 FINEST: [com.hazelcast.impl.BaseManager]
Backup changed!!! so backing up to Member [172.26.0.63:5702]
Thu Jul 09 15:35:00 CEST 2009 FINEST: [com.hazelcast.impl.BaseManager]
Backup changed!!! so backing up to Member [172.26.0.63:5702]
Thu Jul 09 15:35:00 CEST 2009 FINEST: [com.hazelcast.impl.BaseManager]
Backup changed!!! so backing up to Member [172.26.0.63:5702]
Thu Jul 09 15:35:00 CEST 2009 FINEST: [com.hazelcast.impl.BaseManager]
Backup changed!!! so backing up to Member [172.26.0.63:5702]
Thu Jul 09 15:35:00 CEST 2009 FINEST: [com.hazelcast.impl.BaseManager]
Backup changed!!! so backing up to Member [172.26.0.63:5702]
Thu Jul 09 15:35:00 CEST 2009 FINEST: [com.hazelcast.impl.BaseManager]
Backup changed!!! so backing up to Member [172.26.0.63:5702]
Thu Jul 09 15:35:00 CEST 2009 FINEST:
[com.hazelcast.impl.ConcurrentMapManager] Migration Complete from
Address[172.26.0.63:5701]
Thu Jul 09 15:35:00 CEST 2009 FINEST:
[com.hazelcast.impl.ConcurrentMapManager$33$1] Migration ended!
Thu Jul 09 15:35:00 CEST 2009 FINEST:
[com.hazelcast.impl.ConcurrentMapManager] Migration Complete from
Address[172.26.0.63:5702]
Thu Jul 09 15:38:43 CEST 2009 FINEST:
[com.hazelcast.cluster.ClusterManager] NO HEARTBEAT should remove
Address[172.26.0.63:5702]
Thu Jul 09 15:38:43 CEST 2009 FINEST: [com.hazelcast.impl.BaseManager]
Removing Address Address[172.26.0.63:5702]
Thu Jul 09 15:38:43 CEST 2009 INFO: [com.hazelcast.nio.SelectorBase]
Selector exception at hz.InThread, cause=
java.nio.channels.CancelledKeyException
Thu Jul 09 15:38:43 CEST 2009 FINEST:
[com.hazelcast.cluster.ClusterManager] removing Address
[172.26.0.63:5702]
Thu Jul 09 15:38:43 CEST 2009 FINEST: [com.hazelcast.impl.BaseManager]
Address[172.26.0.63:5702] is dead and its backup was Member
[172.26.0.63:5701] this *
java.nio.channels.CancelledKeyException
at sun.nio.ch.SelectionKeyImpl.ensureValid(SelectionKeyImpl.java:55)
at sun.nio.ch.SelectionKeyImpl.interestOps(SelectionKeyImpl.java:59)
at com.hazelcast.nio.SelectorBase.run(SelectorBase.java:116)
at java.lang.Thread.run(Thread.java:619)
Thu Jul 09 15:38:43 CEST 2009 INFO:
[com.hazelcast.cluster.ClusterManager]

Members [1] {
Member [172.26.0.63:5701] this *
}

Thu Jul 09 15:38:43 CEST 2009 FINEST: [com.hazelcast.impl.BaseManager]
Removing Address Address[172.26.0.63:5702]
Thu Jul 09 15:38:43 CEST 2009 FINEST: [com.hazelcast.impl.BaseManager]
Address[172.26.0.63:5702] is dead and its backup was null
Thu Jul 09 15:38:43 CEST 2009 INFO:
[com.hazelcast.cluster.ClusterManager]

Members [1] {
Member [172.26.0.63:5701] this *
}

Thu Jul 09 15:38:43 CEST 2009 FINEST:
[com.hazelcast.impl.ConcurrentMapManager] Migration Complete from
Address[172.26.0.63:5701]
Thu Jul 09 15:38:43 CEST 2009 FINEST:
[com.hazelcast.impl.ConcurrentMapManager$33$1] Migration ended!
Thu Jul 09 15:38:43 CEST 2009 FINEST:
[com.hazelcast.impl.ConcurrentMapManager] Migration Complete from
Address[172.26.0.63:5701]
Thu Jul 09 15:38:43 CEST 2009 FINEST:
[com.hazelcast.impl.ConcurrentMapManager$33$1] Migration ended!

jm

Jul 10, 2009, 5:02:00 AM
to Hazelcast
Talip, I am leaving for vacation today, so I am not sure I will be
able to follow up next week; just a warning that I may not reply
until the 20th.

This issue is bad enough for our code, so when I'm back I'll either
have to see if this issue can be resolved or go back to our old code
for this functionality.

thanks

Talip Ozturk

Jul 10, 2009, 5:05:49 AM
to haze...@googlegroups.com
Ok. I will do my best.

Have a nice vacation,
-talip

jm

Jul 10, 2009, 5:09:20 AM
to haze...@googlegroups.com
No pressure... just wanted to let you know I might not reply to the thread...