Unable to restart OpenTSDB after updating the opentsdb.conf file

santosh kumar

Feb 4, 2016, 2:31:02 PM
to OpenTSDB
Hi,

OpenTSDB fails to start after I updated the opentsdb.conf file with the HBase table and host details.
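
For reference, the relevant part of my updated opentsdb.conf looks roughly like this (the ZooKeeper host and data table are the ones that show up in the log below; the rest of the file may differ):

    tsd.network.port = 4242
    tsd.storage.hbase.zk_quorum = 192.168.0.97
    tsd.storage.hbase.data_table = TEST_TABLE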

Below are the startup logs:

2016-02-04 18:37:33,584 INFO  [main] ZooKeeper: Client environment:zookeeper.version=3.3.6-1366786, built on 07/29/2012 06:22 GMT
2016-02-04 18:37:33,584 INFO  [main] ZooKeeper: Client environment:host.name=76316532f1a7
2016-02-04 18:37:33,584 INFO  [main] ZooKeeper: Client environment:java.version=1.7.0_91
2016-02-04 18:37:33,585 INFO  [main] ZooKeeper: Client environment:java.vendor=Oracle Corporation
2016-02-04 18:37:33,585 INFO  [main] ZooKeeper: Client environment:java.home=/usr/lib/jvm/java-7-openjdk-amd64/jre
2016-02-04 18:37:33,585 INFO  [main] ZooKeeper: Client environment:java.class.path=/usr/share/opentsdb/*.jar:/usr/share/opentsdb:/usr/share/opentsdb/bin:/usr/share/opentsdb/lib/async-1.4.0.jar:/usr/share/opentsdb/lib/asynchbase-1.7.1-20151004.015637-1.jar:/usr/share/opentsdb/lib/commons-math3-3.4.1.jar:/usr/share/opentsdb/lib/guava-18.0.jar:/usr/share/opentsdb/lib/jackson-annotations-2.4.3.jar:/usr/share/opentsdb/lib/jackson-core-2.4.3.jar:/usr/share/opentsdb/lib/jackson-databind-2.4.3.jar:/usr/share/opentsdb/lib/log4j-over-slf4j-1.7.7.jar:/usr/share/opentsdb/lib/logback-classic-1.0.13.jar:/usr/share/opentsdb/lib/logback-core-1.0.13.jar:/usr/share/opentsdb/lib/netty-3.9.4.Final.jar:/usr/share/opentsdb/lib/protobuf-java-2.5.0.jar:/usr/share/opentsdb/lib/slf4j-api-1.7.7.jar:/usr/share/opentsdb/lib/tsdb-2.2.0RC3.jar:/usr/share/opentsdb/lib/zookeeper-3.3.6.jar:/etc/opentsdb
2016-02-04 18:37:33,585 INFO  [main] ZooKeeper: Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib/x86_64-linux-gnu/jni:/lib/x86_64-linux-gnu:/usr/lib/x86_64-linux-gnu:/usr/lib/jni:/lib:/usr/lib
2016-02-04 18:37:33,585 INFO  [main] ZooKeeper: Client environment:java.io.tmpdir=/tmp
2016-02-04 18:37:33,585 INFO  [main] ZooKeeper: Client environment:java.compiler=<NA>
2016-02-04 18:37:33,585 INFO  [main] ZooKeeper: Client environment:os.name=Linux
2016-02-04 18:37:33,586 INFO  [main] ZooKeeper: Client environment:os.arch=amd64
2016-02-04 18:37:33,586 INFO  [main] ZooKeeper: Client environment:os.version=4.2.3-1.el7.elrepo.x86_64
2016-02-04 18:37:33,586 INFO  [main] ZooKeeper: Client environment:user.name=root
2016-02-04 18:37:33,586 INFO  [main] ZooKeeper: Client environment:user.home=/root
2016-02-04 18:37:33,586 INFO  [main] ZooKeeper: Client environment:user.dir=/etc
2016-02-04 18:37:33,587 INFO  [main] ZooKeeper: Initiating client connection, connectString=192.168.0.97 sessionTimeout=5000 watcher=org.hbase.async.HBaseClient$ZKClient@74a02b5e
2016-02-04 18:37:33,588 DEBUG [main] ClientCnxn: zookeeper.disableAutoWatchReset is false
2016-02-04 18:37:33,595 INFO  [main] HBaseClient: Need to find the -ROOT- region
2016-02-04 18:37:33,595 INFO  [main-SendThread()] ClientCnxn: Opening socket connection to server /192.168.0.97:2181
2016-02-04 18:37:33,603 INFO  [main-SendThread(test-opentsdb:2181)] ClientCnxn: Socket connection established to test-opentsdb/192.168.0.97:2181, initiating session
2016-02-04 18:37:33,605 DEBUG [main-SendThread(test-opentsdb:2181)] ClientCnxn: Session establishment request sent on test-opentsdb/192.168.0.97:2181
2016-02-04 18:37:33,608 INFO  [main-SendThread(test-opentsdb:2181)] ClientCnxn: Session establishment complete on server test-opentsdb/192.168.0.97:2181, sessionid = 0x15274517aa269a1, negotiated timeout = 5000
2016-02-04 18:37:33,610 DEBUG [main-EventThread] HBaseClient: Got ZooKeeper event: WatchedEvent state:SyncConnected type:None path:null
2016-02-04 18:37:33,611 DEBUG [main-EventThread] HBaseClient: Finding the -ROOT- or .META. region in ZooKeeper
2016-02-04 18:37:33,614 DEBUG [main-EventThread] HBaseClient: Done handling ZooKeeper event: WatchedEvent state:SyncConnected type:None path:null
2016-02-04 18:37:33,616 DEBUG [main-SendThread(test-opentsdb:2181)] ClientCnxn: Reading reply sessionid:0x15274517aa269a1, packet:: clientPath:/hbase/root-region-server serverPath:/hbase/root-region-server finished:false header:: 1,4  replyHeader:: 1,780424,-101  request:: '/hbase/root-region-server,T  response::
2016-02-04 18:37:33,618 DEBUG [main-SendThread(test-opentsdb:2181)] ClientCnxn: Reading reply sessionid:0x15274517aa269a1, packet:: clientPath:/hbase/meta-region-server serverPath:/hbase/meta-region-server finished:false header:: 2,4  replyHeader:: 2,780424,0  request:: '/hbase/meta-region-server,T  response:: #ffffffff0001a726567696f6e7365727665723a3630303230ffffffa21765bffffff98ffffff9459050425546a22a15686466732d68626173652d73322e696e736967687410fffffff4ffffffd4318ffffff88ffffffaefffffffeffffffb5ffffffa52a100183,s{57448,57448,1453154351746,1453154351746,0,0,0,0,75,0,57448}
2016-02-04 18:37:33,629 DEBUG [main-EventThread] HBaseClient: Resolved IP of `test-opentsdb' to x.x.x.x in 725399ns
2016-02-04 18:37:33,630 INFO  [main-EventThread] HBaseClient: Connecting to .META. region @ x.x.x.x:60020
2016-02-04 18:37:33,655 DEBUG [main-EventThread] HBaseClient: Channel [id: 0x7dfe6a97]'s state changed: [id: 0x7dfe6a97] OPEN
2016-02-04 18:37:33,658 DEBUG [main-EventThread] RegionClient: handleUpstream [id: 0x7dfe6a97] OPEN
2016-02-04 18:37:33,659 DEBUG [main-EventThread] HBaseClient: Channel [id: 0x7dfe6a97]'s state changed: [id: 0x7dfe6a97] CONNECT: /x.x.x.x:60020
2016-02-04 18:37:33,662 DEBUG [main-EventThread] RegionClient: RPC queued: HBaseRpc(method=getClosestRowBefore, table="hbase:meta", key=[83, 80, 69, 67, 84, 82, 69, 95, 68, 69, 77, 79, 44, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 58, 65, 115, 121, 110, 99, 72, 66, 97, 115, 101, 126, 112, 114, 111, 98, 101, 126, 60, 59, 95, 60, 44, 58], region=RegionInfo(table="hbase:meta", region_name="hbase:meta,,1", stop_key=""), attempt=0, timeout=-1, hasTimedout=false)
2016-02-04 18:37:33,663 DEBUG [main-EventThread] Deferred: callback=retry RPC@516058581 returned Deferred@1857604281(state=PENDING, result=null, callback=type getClosestRowBefore response -> locateRegion in META -> release .META. lookup permit -> passthrough -> retry RPC -> (continuation of Deferred@637890681 after retry RPC@516058581), errback=passthrough -> passthrough -> release .META. lookup permit -> locateRegion errback -> retry RPC -> (continuation of Deferred@637890681 after retry RPC@516058581)), so the following Deferred is getting paused: Deferred@637890681(state=PAUSED, result=Deferred@1857604281, callback=notify DeferredGroup@790309743, errback=notify DeferredGroup@790309743)
2016-02-04 18:37:33,663 DEBUG [main-EventThread] RegionClient: RPC queued: HBaseRpc(method=getClosestRowBefore, table="hbase:meta", key=[83, 80, 69, 67, 84, 82, 69, 95, 68, 69, 77, 79, 44, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 58, 65, 115, 121, 110, 99, 72, 66, 97, 115, 101, 126, 112, 114, 111, 98, 101, 126, 60, 59, 95, 60, 44, 58], region=RegionInfo(table="hbase:meta", region_name="hbase:meta,,1", stop_key=""), attempt=0, timeout=-1, hasTimedout=false)
2016-02-04 18:37:33,664 DEBUG [main-EventThread] Deferred: callback=retry RPC@549493906 returned Deferred@1164396042(state=PENDING, result=null, callback=type getClosestRowBefore response -> locateRegion in META -> release .META. lookup permit -> passthrough -> retry RPC -> (continuation of Deferred@2133286430 after retry RPC@549493906), errback=passthrough -> passthrough -> release .META. lookup permit -> locateRegion errback -> retry RPC -> (continuation of Deferred@2133286430 after retry RPC@549493906)), so the following Deferred is getting paused: Deferred@2133286430(state=PAUSED, result=Deferred@1164396042, callback=notify DeferredGroup@790309743, errback=notify DeferredGroup@790309743)
2016-02-04 18:37:33,664 DEBUG [main-EventThread] HBaseClient: Ignore any DEBUG exception from ZooKeeper
2016-02-04 18:37:33,664 DEBUG [main-EventThread] ZooKeeper: Closing session: 0x15274517aa269a1
2016-02-04 18:37:33,664 DEBUG [main-EventThread] ClientCnxn: Closing client for session: 0x15274517aa269a1
2016-02-04 18:37:33,665 DEBUG [AsyncHBase I/O Worker #1] HBaseClient: Channel [id: 0x7dfe6a97, /x.x.x.x:58208 => /x.x.x.x:60020]'s state changed: [id: 0x7dfe6a97, /x.x.x.x:58208 => /x.x.x.x:60020] BOUND: /x.x.x.x:58208
2016-02-04 18:37:33,666 DEBUG [AsyncHBase I/O Worker #1] RegionClient: handleUpstream [id: 0x7dfe6a97, /x.x.x.x:58208 => /x.x.x.x:60020] BOUND: /x.x.x.x:58208
2016-02-04 18:37:33,666 DEBUG [AsyncHBase I/O Worker #1] HBaseClient: Channel [id: 0x7dfe6a97, /x.x.x.x:58208 => /x.x.x.x:60020]'s state changed: [id: 0x7dfe6a97, /x.x.x.x:58208 => /x.x.x.x:60020] CONNECTED: /x.x.x.x:60020
2016-02-04 18:37:33,666 DEBUG [AsyncHBase I/O Worker #1] RegionClient: handleUpstream [id: 0x7dfe6a97, /x.x.x.x:58208 => /x.x.x.x:60020] CONNECTED: /x.x.x.x:60020
2016-02-04 18:37:33,667 DEBUG [main-SendThread(test-opentsdb:2181)] ClientCnxn: Reading reply sessionid:0x15274517aa269a1, packet:: clientPath:null serverPath:null finished:false header:: 3,-11  replyHeader:: 3,780425,0  request:: null response:: null
2016-02-04 18:37:33,667 DEBUG [main-SendThread(test-opentsdb:2181)] ClientCnxn: An exception was thrown while closing send thread for session 0x15274517aa269a1 : Unable to read additional data from server sessionid 0x15274517aa269a1, likely server has closed socket
2016-02-04 18:37:33,667 DEBUG [main-EventThread] ClientCnxn: Disconnecting client for session: 0x15274517aa269a1
2016-02-04 18:37:33,667 INFO  [main-EventThread] ZooKeeper: Session: 0x15274517aa269a1 closed
2016-02-04 18:37:33,668 DEBUG [main-EventThread] HBaseClient: ZooKeeper#close completed in 3648469ns
2016-02-04 18:37:33,668 INFO  [main-EventThread] ClientCnxn: EventThread shut down
2016-02-04 18:37:33,677 DEBUG [AsyncHBase I/O Worker #1] RegionClient: handleUpstream [id: 0x7dfe6a97, /x.x.x.x:58208 => /x.x.x.x:60020] WRITTEN_AMOUNT: 78
2016-02-04 18:37:33,677 DEBUG [AsyncHBase I/O Worker #1] RegionClient: Executing RPC queued: HBaseRpc(method=getClosestRowBefore, table="hbase:meta", key=[83, 80, 69, 67, 84, 82, 69, 95, 68, 69, 77, 79, 44, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 58, 65, 115, 121, 110, 99, 72, 66, 97, 115, 101, 126, 112, 114, 111, 98, 101, 126, 60, 59, 95, 60, 44, 58], region=RegionInfo(table="hbase:meta", region_name="hbase:meta,,1", stop_key=""), attempt=0, timeout=-1, hasTimedout=false)
2016-02-04 18:37:33,689 DEBUG [AsyncHBase I/O Worker #1] RegionClient: [id: 0x7dfe6a97, /x.x.x.x:58208 => /x.x.x.x:60020] Sending RPC #0, payload=BigEndianHeapChannelBuffer(ridx=12, widx=140, cap=140) [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 124, 9, 8, 0, 26, 3, 71, 101, 116, 32, 1, 113, 10, 17, 8, 1, 18, 13, 104, 98, 97, 115, 101, 58, 109, 101, 116, 97, 44, 44, 49, 18, 92, 10, 80, 83, 80, 69, 67, 84, 82, 69, 95, 68, 69, 77, 79, 44, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 58, 65, 115, 121, 110, 99, 72, 66, 97, 115, 101, 126, 112, 114, 111, 98, 101, 126, 60, 59, 95, 60, 44, 58, 18, 6, 10, 4, 105, 110, 102, 111, 88, 1]
2016-02-04 18:37:33,689 DEBUG [AsyncHBase I/O Worker #1] RegionClient: handleUpstream [id: 0x7dfe6a97, /x.x.x.x:58208 => /x.x.x.x:60020] WRITTEN_AMOUNT: 128
2016-02-04 18:37:33,690 DEBUG [AsyncHBase I/O Worker #1] RegionClient: Executing RPC queued: HBaseRpc(method=getClosestRowBefore, table="hbase:meta", key=[83, 80, 69, 67, 84, 82, 69, 95, 68, 69, 77, 79, 44, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 58, 65, 115, 121, 110, 99, 72, 66, 97, 115, 101, 126, 112, 114, 111, 98, 101, 126, 60, 59, 95, 60, 44, 58], region=RegionInfo(table="hbase:meta", region_name="hbase:meta,,1", stop_key=""), attempt=0, timeout=-1, hasTimedout=false)
2016-02-04 18:37:33,690 DEBUG [AsyncHBase I/O Worker #1] RegionClient: [id: 0x7dfe6a97, /x.x.x.x:58208 => /x.x.x.x:60020] Sending RPC #1, payload=BigEndianHeapChannelBuffer(ridx=12, widx=140, cap=140) [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 124, 9, 8, 1, 26, 3, 71, 101, 116, 32, 1, 113, 10, 17, 8, 1, 18, 13, 104, 98, 97, 115, 101, 58, 109, 101, 116, 97, 44, 44, 49, 18, 92, 10, 80, 83, 80, 69, 67, 84, 82, 69, 95, 68, 69, 77, 79, 44, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 58, 65, 115, 121, 110, 99, 72, 66, 97, 115, 101, 126, 112, 114, 111, 98, 101, 126, 60, 59, 95, 60, 44, 58, 18, 6, 10, 4, 105, 110, 102, 111, 88, 1]
2016-02-04 18:37:33,690 DEBUG [AsyncHBase I/O Worker #1] RegionClient: handleUpstream [id: 0x7dfe6a97, /x.x.x.x:58208 => /x.x.x.x:60020] WRITTEN_AMOUNT: 128
2016-02-04 18:37:33,693 DEBUG [AsyncHBase I/O Worker #1] RegionClient: handleUpstream [id: 0x7dfe6a97, /x.x.x.x:58208 => /x.x.x.x:60020] RECEIVED: BigEndianHeapChannelBuffer(ridx=0, widx=1024, cap=1024)
2016-02-04 18:37:33,693 DEBUG [AsyncHBase I/O Worker #1] RegionClient: ------------------>> ENTERING DECODE >>------------------
2016-02-04 18:37:33,700 DEBUG [AsyncHBase I/O Worker #1] RegionClient: rpcid=0, response size=520 bytes, 504 readable bytes left, rpc=HBaseRpc(method=getClosestRowBefore, table="hbase:meta", key=[83, 80, 69, 67, 84, 82, 69, 95, 68, 69, 77, 79, 44, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 58, 65, 115, 121, 110, 99, 72, 66, 97, 115, 101, 126, 112, 114, 111, 98, 101, 126, 60, 59, 95, 60, 44, 58], region=RegionInfo(table="hbase:meta", region_name="hbase:meta,,1", stop_key=""), attempt=0, timeout=0, hasTimedout=false)
2016-02-04 18:37:33,703 DEBUG [AsyncHBase I/O Worker #1] HBaseClient: Resolved IP of `test-opentsdb' to x.x.x.x in 922792ns
2016-02-04 18:37:33,703 DEBUG [AsyncHBase I/O Worker #1] HBaseClient: Channel [id: 0x83b5ad07]'s state changed: [id: 0x83b5ad07] OPEN
2016-02-04 18:37:33,703 DEBUG [AsyncHBase I/O Worker #1] RegionClient: handleUpstream [id: 0x83b5ad07] OPEN
2016-02-04 18:37:33,704 DEBUG [AsyncHBase I/O Worker #1] HBaseClient: Channel [id: 0x83b5ad07]'s state changed: [id: 0x83b5ad07] CONNECT: /x.x.x.x:60020
2016-02-04 18:37:33,704 INFO  [AsyncHBase I/O Worker #1] HBaseClient: Added client for region RegionInfo(table="TEST_TABLE", region_name="TEST_TABLE,,1454097342058.114fc87ff9467239939da4d9d6222598.", stop_key=[1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]), which was added to the regions cache.  Now we know that RegionClient@84855294(chan=null, #pending_rpcs=0, #batched=0, #rpcs_inflight=0) is hosting 1 region.
2016-02-04 18:37:33,705 DEBUG [AsyncHBase I/O Worker #1] RegionClient: RPC queued: Exists(table="TEST_TABLE", key=[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 58, 65, 115, 121, 110, 99, 72, 66, 97, 115, 101, 126, 112, 114, 111, 98, 101, 126, 60, 59, 95, 60], family=null, qualifiers=null, attempt=3, region=RegionInfo(table="TEST_TABLE", region_name="TEST_TABLE,,1454097342058.114fc87ff9467239939da4d9d6222598.", stop_key=[1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]))
2016-02-04 18:37:33,705 DEBUG [AsyncHBase I/O Worker #1] Deferred: callback=retry RPC@1691149975 returned Deferred@1211561780(state=PENDING, result=null, callback=(continuation of Deferred@1857604281 after retry RPC@1691149975), errback=(continuation of Deferred@1857604281 after retry RPC@1691149975)), so the following Deferred is getting paused: Deferred@1857604281(state=PAUSED, result=Deferred@1211561780, callback=(continuation of Deferred@637890681 after retry RPC@516058581), errback=(continuation of Deferred@637890681 after retry RPC@516058581))
2016-02-04 18:37:33,705 DEBUG [AsyncHBase I/O Worker #1] RegionClient: ------------------<< LEAVING  DECODE <<------------------ time elapsed: 11698us
2016-02-04 18:37:33,705 DEBUG [AsyncHBase I/O Worker #2] HBaseClient: Channel [id: 0x83b5ad07, /x.x.x.x:43370 => /x.x.x.x:60020]'s state changed: [id: 0x83b5ad07, /x.x.x.x:43370 => /x.x.x.x:60020] BOUND: /x.x.x.x:43370
2016-02-04 18:37:33,705 DEBUG [AsyncHBase I/O Worker #1] RegionClient: ------------------>> ENTERING DECODE >>------------------
2016-02-04 18:37:33,705 DEBUG [AsyncHBase I/O Worker #2] RegionClient: handleUpstream [id: 0x83b5ad07, /x.x.x.x:43370 => /x.x.x.x:60020] BOUND: /x.x.x.x:43370
2016-02-04 18:37:33,706 DEBUG [AsyncHBase I/O Worker #2] HBaseClient: Channel [id: 0x83b5ad07, /x.x.x.x:43370 => /x.x.x.x:60020]'s state changed: [id: 0x83b5ad07, /x.x.x.x:43370 => /x.x.x.x:60020] CONNECTED: /x.x.x.x:60020
2016-02-04 18:37:33,706 DEBUG [AsyncHBase I/O Worker #2] RegionClient: handleUpstream [id: 0x83b5ad07, /x.x.x.x:43370 => /x.x.x.x:60020] CONNECTED: /x.x.x.x:60020
2016-02-04 18:37:33,706 DEBUG [AsyncHBase I/O Worker #2] RegionClient: handleUpstream [id: 0x83b5ad07, /x.x.x.x:43370 => /x.x.x.x:60020] WRITTEN_AMOUNT: 78
2016-02-04 18:37:33,706 DEBUG [AsyncHBase I/O Worker #2] RegionClient: Executing RPC queued: Exists(table="TEST_TABLE", key=[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 58, 65, 115, 121, 110, 99, 72, 66, 97, 115, 101, 126, 112, 114, 111, 98, 101, 126, 60, 59, 95, 60], family=null, qualifiers=null, attempt=3, region=RegionInfo(table="TEST_TABLE", region_name="TEST_TABLE,,1454097342058.114fc87ff9467239939da4d9d6222598.", stop_key=[1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]))
2016-02-04 18:37:33,707 DEBUG [AsyncHBase I/O Worker #1] RegionClient: handleUpstream [id: 0x7dfe6a97, /x.x.x.x:58208 => /x.x.x.x:60020] RECEIVED: BigEndianHeapChannelBuffer(ridx=0, widx=16, cap=16)
2016-02-04 18:37:33,707 DEBUG [AsyncHBase I/O Worker #2] RegionClient: [id: 0x83b5ad07, /x.x.x.x:43370 => /x.x.x.x:60020] Sending RPC #0, payload=BigEndianHeapChannelBuffer(ridx=12, widx=166, cap=166) "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x96\t\x08\x00\x1A\x03Get \x01\x8A\x01\nA\x08\x01\x12=TEST_TABLE,,1454097342058.114fc87ff9467239939da4d9d6222598.\x12E\nA\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00:AsyncHBase~probe~<;_<P\x01"
2016-02-04 18:37:33,707 DEBUG [AsyncHBase I/O Worker #2] RegionClient: handleUpstream [id: 0x83b5ad07, /x.x.x.x:43370 => /x.x.x.x:60020] WRITTEN_AMOUNT: 154
2016-02-04 18:37:33,708 DEBUG [AsyncHBase I/O Worker #2] RegionClient: handleUpstream [id: 0x83b5ad07, /x.x.x.x:43370 => /x.x.x.x:60020] RECEIVED: BigEndianHeapChannelBuffer(ridx=0, widx=14, cap=14)
2016-02-04 18:37:33,708 DEBUG [AsyncHBase I/O Worker #1] RegionClient: ------------------>> ENTERING DECODE >>------------------
2016-02-04 18:37:33,708 DEBUG [AsyncHBase I/O Worker #2] RegionClient: ------------------>> ENTERING DECODE >>------------------
2016-02-04 18:37:33,709 DEBUG [AsyncHBase I/O Worker #1] RegionClient: rpcid=1, response size=520 bytes, 0 readable bytes left, rpc=HBaseRpc(method=getClosestRowBefore, table="hbase:meta", key=[83, 80, 69, 67, 84, 82, 69, 95, 68, 69, 77, 79, 44, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 58, 65, 115, 121, 110, 99, 72, 66, 97, 115, 101, 126, 112, 114, 111, 98, 101, 126, 60, 59, 95, 60, 44, 58], region=RegionInfo(table="hbase:meta", region_name="hbase:meta,,1", stop_key=""), attempt=0, timeout=0, hasTimedout=false)
2016-02-04 18:37:33,709 DEBUG [AsyncHBase I/O Worker #2] RegionClient: rpcid=0, response size=14 bytes, 0 readable bytes left, rpc=Exists(table="TEST_TABLE", key=[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 58, 65, 115, 121, 110, 99, 72, 66, 97, 115, 101, 126, 112, 114, 111, 98, 101, 126, 60, 59, 95, 60], family=null, qualifiers=null, attempt=3, region=RegionInfo(table="TEST_TABLE", region_name="TEST_TABLE,,1454097342058.114fc87ff9467239939da4d9d6222598.", stop_key=[1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]))
2016-02-04 18:37:33,709 DEBUG [AsyncHBase I/O Worker #2] RegionClient: ------------------<< LEAVING  DECODE <<------------------ time elapsed: 712us
2016-02-04 18:37:33,709 DEBUG [AsyncHBase I/O Worker #1] RegionClient: [id: 0x83b5ad07, /x.x.x.x:43370 => /x.x.x.x:60020] Sending RPC #1, payload=BigEndianHeapChannelBuffer(ridx=12, widx=166, cap=166) "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x96\t\x08\x01\x1A\x03Get \x01\x8A\x01\nA\x08\x01\x12=TEST_TABLE,,1454097342058.114fc87ff9467239939da4d9d6222598.\x12E\nA\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00:AsyncHBase~probe~<;_<P\x01"
2016-02-04 18:37:33,709 DEBUG [AsyncHBase I/O Worker #1] Deferred: callback=retry RPC@1950281796 returned Deferred@1343241211(state=PENDING, result=null, callback=(continuation of Deferred@1164396042 after retry RPC@1950281796), errback=(continuation of Deferred@1164396042 after retry RPC@1950281796)), so the following Deferred is getting paused: Deferred@1164396042(state=PAUSED, result=Deferred@1343241211, callback=(continuation of Deferred@2133286430 after retry RPC@549493906), errback=(continuation of Deferred@2133286430 after retry RPC@549493906))
2016-02-04 18:37:33,709 DEBUG [AsyncHBase I/O Worker #2] RegionClient: handleUpstream [id: 0x83b5ad07, /x.x.x.x:43370 => /x.x.x.x:60020] WRITTEN_AMOUNT: 154
2016-02-04 18:37:33,710 DEBUG [AsyncHBase I/O Worker #1] RegionClient: ------------------<< LEAVING  DECODE <<------------------ time elapsed: 1449us
2016-02-04 18:37:33,710 DEBUG [AsyncHBase I/O Worker #2] RegionClient: handleUpstream [id: 0x83b5ad07, /x.x.x.x:43370 => /x.x.x.x:60020] RECEIVED: BigEndianHeapChannelBuffer(ridx=0, widx=14, cap=14)
2016-02-04 18:37:33,710 DEBUG [AsyncHBase I/O Worker #2] RegionClient: ------------------>> ENTERING DECODE >>------------------
2016-02-04 18:37:33,711 DEBUG [AsyncHBase I/O Worker #2] RegionClient: rpcid=1, response size=14 bytes, 0 readable bytes left, rpc=Exists(table="TEST_TABLE", key=[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 58, 65, 115, 121, 110, 99, 72, 66, 97, 115, 101, 126, 112, 114, 111, 98, 101, 126, 60, 59, 95, 60], family=null, qualifiers=null, attempt=3, region=RegionInfo(table="TEST_TABLE", region_name="TEST_TABLE,,1454097342058.114fc87ff9467239939da4d9d6222598.", stop_key=[1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]))
2016-02-04 18:37:33,711 DEBUG [AsyncHBase I/O Worker #2] RegionClient: ------------------<< LEAVING  DECODE <<------------------ time elapsed: 457us
2016-02-04 18:37:33,738 DEBUG [main] GraphHandler: Using Gnuplot wrapper at /usr/share/opentsdb/bin/mygnuplot.sh
2016-02-04 18:37:33,754 INFO  [main] RpcHandler: TSD is in rw mode
2016-02-04 18:37:33,754 INFO  [main] RpcHandler: CORS domain list was empty, CORS will not be enabled
2016-02-04 18:37:33,755 INFO  [main] RpcHandler: Loaded CORS headers (Authorization, Content-Type, Accept, Origin, User-Agent, DNT, Cache-Control, X-Mx-ReqToken, Keep-Alive, X-Requested-With, If-Modified-Since)
2016-02-04 18:37:33,757 WARN  [main] PluginLoader: Unable to locate any plugins of the type: net.opentsdb.tsd.HttpSerializer
2016-02-04 18:37:33,773 INFO  [main] TSDB: Flushing compaction queue
2016-02-04 18:37:33,774 DEBUG [main] RegionClient: Shutdown requested, chan=[id: 0x83b5ad07, /x.x.x.x:43370 => /x.x.x.x:60020]
2016-02-04 18:37:33,774 DEBUG [main] HBaseClient: Channel [id: 0x83b5ad07, /x.x.x.x:43370 => /x.x.x.x:60020]'s state changed: [id: 0x83b5ad07, /x.x.x.x:43370 => /x.x.x.x:60020] DISCONNECT
2016-02-04 18:37:33,775 INFO  [main] HBaseClient: Channel [id: 0x83b5ad07, /x.x.x.x:43370 => /x.x.x.x:60020] is disconnecting: [id: 0x83b5ad07, /x.x.x.x:43370 => /x.x.x.x:60020] DISCONNECT
2016-02-04 18:37:33,775 DEBUG [main] HBaseClient: Removed from regions cache: RegionInfo(table="TEST_TABLE", region_name="TEST_TABLE,,1454097342058.114fc87ff9467239939da4d9d6222598.", stop_key=[1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0])
2016-02-04 18:37:33,775 DEBUG [main] HBaseClient: Association removed: RegionInfo(table="TEST_TABLE", region_name="TEST_TABLE,,1454097342058.114fc87ff9467239939da4d9d6222598.", stop_key=[1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]) -> RegionClient@84855294(chan=[id: 0x83b5ad07, /x.x.x.x:43370 => /x.x.x.x:60020], #pending_rpcs=0, #batched=0, #rpcs_inflight=0)
2016-02-04 18:37:33,775 DEBUG [main] HBaseClient: Removed from IP cache: x.x.x.x:60020 -> RegionClient@84855294(chan=[id: 0x83b5ad07, /x.x.x.x:43370 => /x.x.x.x:60020], #pending_rpcs=0, #batched=0, #rpcs_inflight=0)
2016-02-04 18:37:33,776 DEBUG [AsyncHBase I/O Worker #2] RegionClient: handleUpstream [id: 0x83b5ad07, /x.x.x.x:43370 :> /x.x.x.x:60020] DISCONNECTED
2016-02-04 18:37:33,777 DEBUG [main] RegionClient: Shutdown requested, chan=[id: 0x7dfe6a97, /x.x.x.x:58208 => /x.x.x.x:60020]
2016-02-04 18:37:33,777 DEBUG [AsyncHBase I/O Worker #2] RegionClient: handleUpstream [id: 0x83b5ad07, /x.x.x.x:43370 :> /x.x.x.x:60020] UNBOUND
2016-02-04 18:37:33,777 DEBUG [main] HBaseClient: Channel [id: 0x7dfe6a97, /x.x.x.x:58208 => /x.x.x.x:60020]'s state changed: [id: 0x7dfe6a97, /x.x.x.x:58208 => /x.x.x.x:60020] DISCONNECT
2016-02-04 18:37:33,777 DEBUG [AsyncHBase I/O Worker #2] RegionClient: handleUpstream [id: 0x83b5ad07, /x.x.x.x:43370 :> /x.x.x.x:60020] CLOSED
2016-02-04 18:37:33,777 INFO  [main] HBaseClient: Channel [id: 0x7dfe6a97, /x.x.x.x:58208 => /x.x.x.x:60020] is disconnecting: [id: 0x7dfe6a97, /x.x.x.x:58208 => /x.x.x.x:60020] DISCONNECT
2016-02-04 18:37:33,777 INFO  [main] HBaseClient: Lost connection with the .META. region
2016-02-04 18:37:33,777 DEBUG [main] HBaseClient: Removed from IP cache: x.x.x.x:60020 -> RegionClient@1885164344(chan=[id: 0x7dfe6a97, /x.x.x.x:58208 => /x.x.x.x:60020], #pending_rpcs=0, #batched=0, #rpcs_inflight=0)
2016-02-04 18:37:33,778 DEBUG [AsyncHBase I/O Worker #1] RegionClient: handleUpstream [id: 0x7dfe6a97, /x.x.x.x:58208 :> /x.x.x.x:60020] DISCONNECTED
2016-02-04 18:37:33,778 DEBUG [AsyncHBase I/O Worker #1] RegionClient: handleUpstream [id: 0x7dfe6a97, /x.x.x.x:58208 :> /x.x.x.x:60020] UNBOUND
2016-02-04 18:37:33,778 DEBUG [AsyncHBase I/O Worker #1] RegionClient: handleUpstream [id: 0x7dfe6a97, /x.x.x.x:58208 :> /x.x.x.x:60020] CLOSED
2016-02-04 18:37:33,778 DEBUG [main] HBaseClient: Releasing all remaining resources
2016-02-04 18:37:33,779 INFO  [main] TSDB: Completed shutting down the TSDB
Exception in thread "main" java.lang.RuntimeException: Initialization failed
        at net.opentsdb.tools.TSDMain.main(TSDMain.java:196)
Caused by: org.jboss.netty.channel.ChannelException: Failed to bind to: /0.0.0.0:4242
        at org.jboss.netty.bootstrap.ServerBootstrap.bind(ServerBootstrap.java:272)
        at net.opentsdb.tools.TSDMain.main(TSDMain.java:186)
Caused by: java.net.BindException: Address already in use
        at sun.nio.ch.Net.bind0(Native Method)
        at sun.nio.ch.Net.bind(Net.java:463)
        at sun.nio.ch.Net.bind(Net.java:455)
        at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
        at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
        at org.jboss.netty.channel.socket.nio.NioServerBoss$RegisterTask.run(NioServerBoss.java:193)
        at org.jboss.netty.channel.socket.nio.AbstractNioSelector.processTaskQueue(AbstractNioSelector.java:372)
        at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:296)
        at org.jboss.netty.channel.socket.nio.NioServerBoss.run(NioServerBoss.java:42)
        at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
        at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
2016-02-04 18:37:33,790 INFO  [TSDBShutdown] TSDB: Flushing compaction queue
2016-02-04 18:37:33,790 DEBUG [TSDBShutdown] HBaseClient: Releasing all remaining resources
2016-02-04 18:37:33,790 INFO  [TSDBShutdown] TSDB: Completed shutting down the TSDB

Jonathan Creasy

Feb 4, 2016, 11:07:58 PM
to santosh kumar, OpenTSDB

Are you sure it isn't already running?

"sudo netstat -nlp | grep 4242"

santosh kumar

Feb 5, 2016, 10:41:49 AM
to OpenTSDB
Changing the port number resolved the issue. Thanks for your reply.
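
For anyone who hits the same thing: the port the TSD listens on (for both the HTTP API and the telnet-style interface) is controlled by tsd.network.port in opentsdb.conf, so pointing it at a free port and restarting works around the conflict, roughly like this (4243 is just an example):

    tsd.network.port = 4243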