HyperDexClientException: reconfiguration affecting virtual_server


umatomba

Jan 27, 2015, 4:22:04 PM
to hyperdex...@googlegroups.com
Hi Robert,

I am trying to set up and test a 5-node cluster.
My current Docker-based cluster setup is:

Physical Host1:

192.168.100.1:
hyperdex coordinator --foreground --listen=192.168.100.1 --data=/hyperdex/coord
hyperdex daemon --foreground --listen=192.168.100.1 --coordinator=192.168.100.1 --data=/hyperdex/daemon

192.168.100.2:
hyperdex coordinator --foreground --listen=192.168.100.2 --data=/hyperdex/coord --connect-string=192.168.100.1:1982,192.168.100.2:1982,192.168.100.3:1982,192.168.100.4:1982,192.168.100.5
hyperdex daemon --foreground --listen=192.168.100.2 --coordinator=192.168.100.2 --data=/hyperdex/daemon

192.168.100.3:
hyperdex coordinator --foreground --listen=192.168.100.3 --data=/hyperdex/coord --connect-string=192.168.100.1:1982,192.168.100.2:1982,192.168.100.3:1982,192.168.100.4:1982,192.168.100.5
hyperdex daemon --foreground --listen=192.168.100.3 --coordinator=192.168.100.3 --data=/hyperdex/daemon



Physical Host2:

192.168.100.4:
hyperdex coordinator --foreground --listen=192.168.100.4 --data=/hyperdex/coord --connect-string=192.168.100.1:1982,192.168.100.2:1982,192.168.100.3:1982,192.168.100.4:1982,192.168.100.5
hyperdex daemon --foreground --listen=192.168.100.4 --coordinator=192.168.100.4 --data=/hyperdex/daemon

192.168.100.5:
hyperdex coordinator --foreground --listen=192.168.100.5 --data=/hyperdex/coord --connect-string=192.168.100.1:1982,192.168.100.2:1982,192.168.100.3:1982,192.168.100.4:1982,192.168.100.5
hyperdex daemon --foreground --listen=192.168.100.5 --coordinator=192.168.100.5 --data=/hyperdex/daemon
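With this topology in place, each coordinator's view of the cluster can be sanity-checked from any host. This is a minimal sketch assuming the `hyperdex show-config` subcommand (whose output appears later in this thread) and the addresses and port from the setup above; the exact `--host`/`--port` flag names are an assumption, and the actual query is commented out since it needs a live cluster:

```shell
# Coordinator addresses from the setup above.
coords="192.168.100.1 192.168.100.2 192.168.100.3 192.168.100.4 192.168.100.5"

# Ask each coordinator for its view of the cluster. The show-config call is
# commented out here because it requires a reachable coordinator on port 1982.
for ip in $coords; do
  echo "== coordinator $ip =="
  # hyperdex show-config --host "$ip" --port 1982
done
```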




For the YCSB tests I use the default space configuration with 24 partitions and tolerate 1 failure (http://hyperdex.org/performance/setup/):

space usertable
key k
attributes field0, field1, field2, field3, field4,
           field5, field6, field7, field8, field9
create 24 partitions
tolerate 1 failure
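For reference, the same definition can be created non-interactively. A sketch assuming the `hyperdex add-space` subcommand (which reads a space definition on stdin) and the coordinator address from the setup above; the exact `--host`/`--port` flag names are an assumption, and the actual call is commented out since it needs a live coordinator:

```shell
# Write the YCSB space definition to a file, then feed it to a coordinator.
cat > /tmp/usertable.space <<'EOF'
space usertable
key k
attributes field0, field1, field2, field3, field4,
           field5, field6, field7, field8, field9
create 24 partitions
tolerate 1 failure
EOF

# Needs a reachable coordinator (address and port from the setup above):
# hyperdex add-space --host 192.168.100.1 --port 1982 < /tmp/usertable.space
```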


While all nodes in the cluster were working, as part of my test I shut down node 4 (coordinator and daemon), and now the cluster is still unavailable for new requests on all other nodes (1, 2, 3, 5) with:

 java -Djava.library.path=/usr/local/lib/ com.yahoo.ycsb.Client -t -db org.hyperdex.ycsb.HyperDex -P /hyperdex_build/ycsb-0.1.4/workloads/workloada -p "hyperdex.host=192.168.100.1" -p "hyperdex.port=1982"


HyperDexClientException: org.hyperdex.client.HyperDexClientException: reconfiguration affecting virtual_server(1004)/server(3597616119902050258)
HyperDexClientException: org.hyperdex.client.HyperDexClientException: reconfiguration affecting virtual_server(1004)/server(3597616119902050258)
HyperDexClientException: org.hyperdex.client.HyperDexClientException: reconfiguration affecting virtual_server(1004)/server(3597616119902050258)
HyperDexClientException: org.hyperdex.client.HyperDexClientException: could not send REQ_GET to virtual_server(1004)
HyperDexClientException: org.hyperdex.client.HyperDexClientException: reconfiguration affecting virtual_server(1004)/server(3597616119902050258)
HyperDexClientException: org.hyperdex.client.HyperDexClientException: reconfiguration affecting virtual_server(1004)/server(3597616119902050258)
HyperDexClientException: org.hyperdex.client.HyperDexClientException: could not send REQ_GET to virtual_server(1004)
HyperDexClientException: org.hyperdex.client.HyperDexClientException: reconfiguration affecting virtual_server(1004)/server(3597616119902050258)
HyperDexClientException: org.hyperdex.client.HyperDexClientException: could not send REQ_GET to virtual_server(1004)
HyperDexClientException: org.hyperdex.client.HyperDexClientException: reconfiguration affecting virtual_server(1004)/server(3597616119902050258)
HyperDexClientException: org.hyperdex.client.HyperDexClientException: could not send REQ_GET to virtual_server(1004)
HyperDexClientException: org.hyperdex.client.HyperDexClientException: could not send REQ_GET to virtual_server(1004)


The cluster show-config output tells me that this region has only one replica. Is that possible?

region 408 lower=(17678129737304986950,) upper=(18446744073709551615,) replicas=[server(3597616119902050258)/virtual_server(1004)]


cluster 15136141433329036193
version 1029
flags 0
server 3597616119902050258 192.168.100.4:2012 SHUTDOWN
server 5650592981559313255 192.168.100.3:2012 AVAILABLE
server 13013824242805358806 192.168.100.1:2012 AVAILABLE
server 14154618844282470030 192.168.100.2:2012 AVAILABLE
server 17701203951474089523 192.168.100.5:2012 AVAILABLE

space 383 usertable
  fault_tolerance 1
  predecessor_width 1
  schema
    attribute k HYPERDATATYPE_STRING
    attribute field0 HYPERDATATYPE_STRING
    attribute field1 HYPERDATATYPE_STRING
    attribute field2 HYPERDATATYPE_STRING
    attribute field3 HYPERDATATYPE_STRING
    attribute field4 HYPERDATATYPE_STRING
    attribute field5 HYPERDATATYPE_STRING
    attribute field6 HYPERDATATYPE_STRING
    attribute field7 HYPERDATATYPE_STRING
    attribute field8 HYPERDATATYPE_STRING
    attribute field9 HYPERDATATYPE_STRING
  subspace 384
    attributes k
    region 385 lower=(0,) upper=(768614336404564649,) replicas=[server(13013824242805358806)/virtual_server(788), server(14154618844282470030)/virtual_server(834)]
    region 386 lower=(768614336404564650,) upper=(1537228672809129299,) replicas=[server(13013824242805358806)/virtual_server(790), server(14154618844282470030)/virtual_server(836)]
    region 387 lower=(1537228672809129300,) upper=(2305843009213693949,) replicas=[server(13013824242805358806)/virtual_server(792), server(14154618844282470030)/virtual_server(838)]
    region 388 lower=(2305843009213693950,) upper=(3074457345618258599,) replicas=[server(13013824242805358806)/virtual_server(794), server(14154618844282470030)/virtual_server(840)]
    region 389 lower=(3074457345618258600,) upper=(3843071682022823249,) replicas=[server(13013824242805358806)/virtual_server(796), server(14154618844282470030)/virtual_server(842)]
    region 390 lower=(3843071682022823250,) upper=(4611686018427387899,) replicas=[server(13013824242805358806)/virtual_server(798), server(14154618844282470030)/virtual_server(844)]
    region 391 lower=(4611686018427387900,) upper=(5380300354831952549,) replicas=[server(14154618844282470030)/virtual_server(421), server(5650592981559313255)/virtual_server(874)]
    region 392 lower=(5380300354831952550,) upper=(6148914691236517199,) replicas=[server(14154618844282470030)/virtual_server(423), server(5650592981559313255)/virtual_server(876)]
    region 393 lower=(6148914691236517200,) upper=(6917529027641081849,) replicas=[server(14154618844282470030)/virtual_server(425), server(5650592981559313255)/virtual_server(878)]
    region 394 lower=(6917529027641081850,) upper=(7686143364045646499,) replicas=[server(14154618844282470030)/virtual_server(427), server(5650592981559313255)/virtual_server(880)]
    region 395 lower=(7686143364045646500,) upper=(8454757700450211149,) replicas=[server(14154618844282470030)/virtual_server(429), server(5650592981559313255)/virtual_server(882)]
    region 396 lower=(8454757700450211150,) upper=(9223372036854775799,) replicas=[server(14154618844282470030)/virtual_server(431), server(5650592981559313255)/virtual_server(884)]
    region 397 lower=(9223372036854775800,) upper=(9991986373259340449,) replicas=[server(5650592981559313255)/virtual_server(886)]
    region 398 lower=(9991986373259340450,) upper=(10760600709663905099,) replicas=[server(5650592981559313255)/virtual_server(888)]
    region 399 lower=(10760600709663905100,) upper=(11529215046068469749,) replicas=[server(5650592981559313255)/virtual_server(890)]
    region 400 lower=(11529215046068469750,) upper=(12297829382473034399,) replicas=[server(5650592981559313255)/virtual_server(892)]
    region 401 lower=(12297829382473034400,) upper=(13066443718877599049,) replicas=[server(5650592981559313255)/virtual_server(894)]
    region 402 lower=(13066443718877599050,) upper=(13835058055282163699,) replicas=[server(5650592981559313255)/virtual_server(896)]
    region 403 lower=(13835058055282163700,) upper=(14603672391686728349,) replicas=[server(17701203951474089523)/virtual_server(1056)]
    region 404 lower=(14603672391686728350,) upper=(15372286728091292999,) replicas=[server(17701203951474089523)/virtual_server(1058)]
    region 405 lower=(15372286728091293000,) upper=(16140901064495857649,) replicas=[server(17701203951474089523)/virtual_server(1060)]
    region 406 lower=(16140901064495857650,) upper=(16909515400900422299,) replicas=[server(17701203951474089523)/virtual_server(1062)]
    region 407 lower=(16909515400900422300,) upper=(17678129737304986949,) replicas=[server(17701203951474089523)/virtual_server(1064)]
    region 408 lower=(17678129737304986950,) upper=(18446744073709551615,) replicas=[server(3597616119902050258)/virtual_server(1004)]
transfer (id=transfer(1065), region=region(408), src=server(3597616119902050258), vsrc=virtual_server(1004), dst=server(17701203951474089523), vdst=virtual_server(1066))

Awaiting your reply,
Vladimir

Holger Winkelmann

Jan 28, 2015, 2:03:30 AM
to hyperdex...@googlegroups.com
Hi,

Just FYI, these are the Docker images and the YCSB test we spoke about on IRC. Once we get this running, we will push the Docker images publicly.
If required, we could also provide the Docker images to make this reproducible for you guys. Vladimir, what do you think?

Holger


umatomba

Jan 28, 2015, 10:56:14 AM
to hyperdex...@googlegroups.com