The problem is that for the third node (node C: 127.0.0.1:8560), the backup is painfully slow. Here is the output for 100 elements:
Information: /127.0.0.1:8560 [dev] Address[127.0.0.1:8560][customers] loaded 0 in total.
Map is initialized
Number of Customers: 100
MigrationEvent{partitionId=128, oldOwner=Member [127.0.0.1:8540], newOwner=Member [127.0.0.1:8560] this}
MigrationEvent{partitionId=128, oldOwner=Member [127.0.0.1:8540], newOwner=Member [127.0.0.1:8560] this}
Current time: 2012.03.06.11.28.58
Own Entry: 0, Backup Entry: 0
Number of Customers: 100
Own Entry: 0, Backup Entry: 0
Current time: 2012.03.06.11.29.03
Number of Customers: 100
MigrationEvent{partitionId=154, oldOwner=Member [127.0.0.1:8540], newOwner=Member [127.0.0.1:8560] this}
MigrationEvent{partitionId=154, oldOwner=Member [127.0.0.1:8540], newOwner=Member [127.0.0.1:8560] this}
Current time: 2012.03.06.11.29.08
Own Entry: 0, Backup Entry: 0
Number of Customers: 100
Current time: 2012.03.06.11.29.13
Own Entry: 0, Backup Entry: 0
Number of Customers: 100
MigrationEvent{partitionId=38, oldOwner=Member [127.0.0.1:8550], newOwner=Member [127.0.0.1:8560] this}
MigrationEvent{partitionId=38, oldOwner=Member [127.0.0.1:8550], newOwner=Member [127.0.0.1:8560] this}
Current time: 2012.03.06.11.29.18
Own Entry: 0, Backup Entry: 0
Number of Customers: 100
Current time: 2012.03.06.11.29.23
Own Entry: 0, Backup Entry: 0
Number of Customers: 100
MigrationEvent{partitionId=263, oldOwner=Member [127.0.0.1:8540], newOwner=Member [127.0.0.1:8560] this}
MigrationEvent{partitionId=263, oldOwner=Member [127.0.0.1:8540], newOwner=Member [127.0.0.1:8560] this}
Current time: 2012.03.06.11.29.28
Own Entry: 1, Backup Entry: 0
Number of Customers: 100
Current time: 2012.03.06.11.29.33
Own Entry: 1, Backup Entry: 0
Number of Customers: 100
Current time: 2012.03.06.11.29.38
Own Entry: 1, Backup Entry: 0
Number of Customers: 100
Current time: 2012.03.06.11.29.43
Own Entry: 1, Backup Entry: 0
Number of Customers: 100
Current time: 2012.03.06.11.29.48
Own Entry: 1, Backup Entry: 1
Number of Customers: 100
Current time: 2012.03.06.11.29.53
Own Entry: 1, Backup Entry: 1
Number of Customers: 100
Current time: 2012.03.06.11.29.58
Own Entry: 1, Backup Entry: 1
Number of Customers: 100
Own Entry: 1, Backup Entry: 1
Current time: 2012.03.06.11.30.03
Number of Customers: 100
Own Entry: 1, Backup Entry: 1
Current time: 2012.03.06.11.30.08
Number of Customers: 100
Own Entry: 1, Backup Entry: 1
Current time: 2012.03.06.11.30.13
Number of Customers: 100
Own Entry: 1, Backup Entry: 2
Current time: 2012.03.06.11.30.18
Number of Customers: 100
Own Entry: 1, Backup Entry: 2
Current time: 2012.03.06.11.30.23
Number of Customers: 100
Current time: 2012.03.06.11.30.28
Own Entry: 1, Backup Entry: 2
Number of Customers: 100
Own Entry: 1, Backup Entry: 2
Current time: 2012.03.06.11.30.33
Number of Customers: 100
Current time: 2012.03.06.11.30.38
Own Entry: 1, Backup Entry: 2
Number of Customers: 100
Current time: 2012.03.06.11.30.43
Own Entry: 1, Backup Entry: 2
Number of Customers: 100
This shows that it takes almost two minutes to get only two backup entries for 100 elements.
Is there any listener I can register to be notified when a backup is made?
Thanks,
Md Kamaruzzaman
On Mar 5, 10:53 pm, Mehmet Dogan <meh...@hazelcast.com> wrote:
> You should either wait until all 2nd backup operations are completed before
> terminating A and B or shutdown nodes A and B gracefully.
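
For what it's worth, one way to implement the "wait until backup operations are completed" suggestion is to poll the node's backup entry count (as the log above already prints it) until it matches the owned entry count, and only then terminate the node. Below is a minimal, library-free sketch of that polling pattern; the `backupCount` supplier is a stand-in for the real Hazelcast call (in this setup that would be something like `map.getLocalMapStats().getBackupEntryCount()`, which is an assumption about the version in use):

```java
import java.util.function.LongSupplier;

public class BackupWait {

    /**
     * Polls backupCount every pollMs milliseconds until it reaches
     * expected or timeoutMs elapses. Returns true if the expected
     * backup count was reached in time.
     */
    static boolean awaitBackups(LongSupplier backupCount, long expected,
                                long timeoutMs, long pollMs)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (System.currentTimeMillis() < deadline) {
            if (backupCount.getAsLong() >= expected) {
                return true;
            }
            Thread.sleep(pollMs);
        }
        // One last check after the deadline.
        return backupCount.getAsLong() >= expected;
    }

    public static void main(String[] args) throws InterruptedException {
        // Simulated counter that ramps up to 100 over ~200 ms;
        // in a real cluster this supplier would read the local map stats.
        long start = System.currentTimeMillis();
        LongSupplier simulated =
                () -> Math.min(100, (System.currentTimeMillis() - start) / 2);

        boolean done = awaitBackups(simulated, 100, 5_000, 20);
        System.out.println(done ? "backups complete" : "timed out");
    }
}
```

The alternative Mehmet mentions, a graceful shutdown, would avoid the polling entirely: shutting a member down through its lifecycle API (rather than killing the process) is supposed to hand its partitions off before the JVM exits, so the surviving members never see missing backups. Verifying the exact shutdown call and its guarantees against the Hazelcast version in use is advisable.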