Invalidated cache for -ROOT- as null seems to be splitting


Chris

Jan 13, 2011, 9:18:19 AM
to OpenTSDB
I attempted to append this to a previous discussion, which seems to
have been abandoned, but was unable to. I have a small Hadoop cluster
running (3 machines). On top of that I originally set up HBase on those
3 machines (I have since reduced to 1 machine to more closely
follow the installation guide).

On the machine currently running an HBase instance, I have followed
the steps to set up OpenTSDB, but any attempt to start it using:
./src/tsdb tsd --cachedir=/tmp/tsd/ --port=4242 --staticroot=build/staticroot/ --zkquorum=node1
results in an exception:
Caused by: org.hbase.async.NonRecoverableException: Too many attempts:
HBaseRpc(method="getClosestRowBefore", table="-ROOT-",
key=".META.,tsdb,,:,:", region=RegionInfo(table="-ROOT-",
region_name="-ROOT-,,0", stop_key=""), attempt=11)

Attached is the last ~150 lines of my log with debug logging turned on
for async and root
http://pastebin.com/174Jybge

tsuna

Jan 14, 2011, 11:35:37 PM
to Chris, OpenTSDB
On Thu, Jan 13, 2011 at 6:18 AM, Chris <c.mc...@gmail.com> wrote:
> On the machine currently running an HBase instance, I have followed
> the steps to set up OpenTSDB, but any attempt to start it using:
> ./src/tsdb tsd --cachedir=/tmp/tsd/ --port=4242 --staticroot=build/staticroot/ --zkquorum=node1
> results in an exception:
> Caused by: org.hbase.async.NonRecoverableException: Too many attempts:
> HBaseRpc(method="getClosestRowBefore", table="-ROOT-",
> key=".META.,tsdb,,:,:", region=RegionInfo(table="-ROOT-",
> region_name="-ROOT-,,0", stop_key=""), attempt=11)

Hi Chris,
the log is mangled (new lines have been removed or inserted in random
locations) but from what I can see, the HBase RegionServer is closing
the connection to the client as soon as the client attempts to talk to
it:

2011-01-13 08:54:57,907 DEBUG [New I/O client worker #1-24]
RegionClient: Sending RPC #0, payload=...
2011-01-13 08:54:57,908 DEBUG [New I/O client worker #1-24]
RegionClient: [id: 0x339214b1, /<IP>:45606 => /<IP>:60020]
WRITTEN_AMOUNT: 224
2011-01-13 08:54:57,910 INFO [New I/O client worker #1-24]
HBaseClient: Lost connection with the -ROOT- region

This is typically caused by a version incompatibility between the
client and the server. The HBase RPC protocol (which is mostly
Hadoop's RPC protocol with some changes here and there) provides no
way of telling the client that it has an incompatible version, instead
the server just brutally closes the connection.

So the question is: which version of HBase are you running? There are
lots of different versions and flavors of HBase, ranging from official
releases to dev releases to releases from Cloudera.

Also, in the future, having the entire log would be better, since this
problem occurs immediately at startup.

PS: Are you running on a 12-core machine?

--
Benoit "tsuna" Sigoure
Software Engineer @ www.StumbleUpon.com

tsuna

Jan 15, 2011, 5:14:11 PM
to Chris, OpenTSDB
Chris,
you're using Cloudera b3 (0.89.20100924+28), which includes a patch
that changes the wire-protocol in a non-backwards compatible way.
This patch hasn't been integrated anywhere yet and is likely to be
changed again before its final version. I will see if I can provide a
patch for asynchbase to make it work with CDHb3, as the change looks
fairly simple to implement.


For the record, this is the change I'm referring to:

From 74542880d740e9be24b103f1d5f5c6489d01911c Mon Sep 17 00:00:00 2001
From: Todd Lipcon <to...@cloudera.com>
Date: Mon, 30 Aug 2010 16:25:56 -0700
Subject: [PATCH 16/28] CLOUDERA-BUILD. HBase running on secure hadoop,
temporary patch.

This is not upstreamed, since it currently is very difficult to do this
without reflection or a shim layer. This will be upstreamed with the
larger project of HBase security later this year.

tsuna

Jan 18, 2011, 3:35:29 AM
to Chris, OpenTSDB
Hi Chris,
can you please apply the following patch to asynchbase and let me know
if it fixes the problem for you?
https://gist.github.com/784136


Step by step instructions (you can copy-paste them):

git clone git://github.com/stumbleupon/asynchbase.git
cd asynchbase
curl https://gist.github.com/raw/784136/01fd5c7eb9ab95dfd5d50c83ae61cdc57d0c3464/0001-Add-support-for-CDHb3.patch \
  | git am -
make

Then simply copy build/hbaseasync-1.0.jar into the third_party/hbase
directory of OpenTSDB.
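As a sanity check (the jar name and paths are assumed from the steps above), you can confirm that the copy OpenTSDB will load matches the jar you just built:

```shell
# Compare checksums of the freshly built jar and the copy in OpenTSDB's
# third_party directory; the two hashes should be identical after the copy.
md5sum build/hbaseasync-1.0.jar third_party/hbase/hbaseasync-1.0.jar
```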

Chris

Jan 18, 2011, 4:46:02 PM
to OpenTSDB
Hi tsuna,
I would first like to say, I appreciate your speedy patchwork. But
after patching asynchbase, I am still getting errors. They do,
however, seem like slightly different errors which end with the same
result: http://pastebin.com/azG2cWGe Hopefully these error logs are
useful to you.

Thanks
Chris

On Jan 18, 3:35 am, tsuna <tsuna...@gmail.com> wrote:
> Hi Chris,
> can you please apply the following patch to asynchbase and let me know
> if it fixes the problem for you? https://gist.github.com/784136
>
> Step by step instructions (you can copy-paste them):
>
> git clone git://github.com/stumbleupon/asynchbase.git
> cd asynchbase
> curl https://gist.github.com/raw/784136/01fd5c7eb9ab95dfd5d50c83ae61cdc57d...

tsuna

Jan 22, 2011, 8:36:43 PM
to Chris, OpenTSDB
Chris,
sorry for the latency on my side, I'm pretty busy until after next week.

On Tue, Jan 18, 2011 at 1:46 PM, Chris <c.mc...@gmail.com> wrote:
> Hi tsuna,
> I would first like to say, I appreciate your speedy patchwork. But
> after patching asynchbase, I am still getting errors. They do,
> however, seem like slightly different errors which end with the same
> result: http://pastebin.com/azG2cWGe Hopefully these error logs are
> useful to you.

The log seems to indicate that the problem hasn't changed. It's weird
because with this patch I'm able to use the version of HBase that
comes with CDHb3 (whereas without it I have the exact same problem as
you).

What would help is if you could provide two packet captures (sudo
tcpdump -nvi eth0 -s 0 -w /tmp/pcap port 60020 – assuming you're using
eth0 to reach the RegionServer), one when the TSD attempts (and fails)
to connect to the RegionServer, and one when you use the HBase shell
like so:

$ ./bin/hbase shell
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version: 0.89.20100924+28, r, Mon Oct 11 09:52:07 PDT 2010

hbase(main):001:0> scan 'tsdb-uid'


Assuming that the scan from the shell works, I will be able to compare
the packet captures and see where the problem is. If you prefer not
to share a raw packet capture (maybe due to your company's policy),
consider running this instead:
sudo tcpdump -nvi eth0 -s 0 -X port 60020
And then remove the IPs from the output and pastebin it.
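The IP-scrubbing step can be scripted; here is a sketch (the interface and port come from the command above, while the sed pattern is my own illustration):

```shell
# Dump the RegionServer traffic in hex/ASCII (no pcap file written) and
# replace every IPv4 address with <IP> before pastebinning the output.
sudo tcpdump -nvi eth0 -s 0 -X port 60020 \
  | sed -E 's/[0-9]{1,3}(\.[0-9]{1,3}){3}/<IP>/g'
```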

karan alang

Dec 14, 2016, 5:26:53 PM
to OpenTSDB, c.mc...@gmail.com

Hi Chris, tsuna,

I seem to be getting the exact error that Chris described; here are the details of the issue ->

I've kerberized HDP 2.4 (HBase version - 1.1.2.2.4.0.0-169, OpenTSDB version - 2.2.1).
On starting OpenTSDB, I'm getting the following error ->

What I understand (from posts on the internet) is that this is because of connection issues between HBase & OpenTSDB:
the connection to the HBase .META. region is failing.
Any ideas on how to fix this?

Here is the command used to start OpenTSDB:
/build/tsdb tsd --zkbasedir=/hbase-secure --port=9999 --zkquorum=sandbox.hortonworks.com:2181 --cachedir=/tmp/tsd --staticroot=build/staticroot --auto-metric

Also attaching the hbase-site.xml, the (complete) opentsdb.log, and opentsdb.conf.

Appreciate your inputs, since I'm stuck on this issue.




Regards,
Karan Alang

--------------------

2016-12-14 22:13:01,216 INFO  [AsyncHBase I/O Worker #4] HBaseClient: Lost connection with the .META. region
2016-12-14 22:13:01,217 INFO  [AsyncHBase I/O Worker #4] HBaseClient: Invalidated cache for .META. as null seems to be splitting or closing it.
2016-12-14 22:13:01,221 ERROR [AsyncHBase I/O Worker #4] RegionClient: Unexpected exception from downstream on [id: 0x23df66d0, /10.0.2.15:35639 :> /10.0.2.15:16020]
java.io.IOException: Broken pipe
    at sun.nio.ch.FileDispatcherImpl.write0(Native Method) ~[na:1.7.0_95]
    at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47) ~[na:1.7.0_95]
    at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93) ~[na:1.7.0_95]
    at sun.nio.ch.IOUtil.write(IOUtil.java:51) ~[na:1.7.0_95]
    at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:492) ~[na:1.7.0_95]
    at org.jboss.netty.channel.socket.nio.SocketSendBufferPool$UnpooledSendBuffer.transferTo(SocketSendBufferPool.java:203) ~[netty-3.9.4.Final.jar:na]
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.write0(AbstractNioWorker.java:201) [netty-3.9.4.Final.jar:na]
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.writeFromTaskLoop(AbstractNioWorker.java:151) [netty-3.9.4.Final.jar:na]
    at org.jboss.netty.channel.socket.nio.AbstractNioChannel$WriteTask.run(AbstractNioChannel.java:335) [netty-3.9.4.Final.jar:na]
    at org.jboss.netty.channel.socket.nio.AbstractNioSelector.processTaskQueue(AbstractNioSelector.java:372) [netty-3.9.4.Final.jar:na]
    at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:296) [netty-3.9.4.Final.jar:na]
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89) [netty-3.9.4.Final.jar:na]
    at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178) [netty-3.9.4.Final.jar:na]
    at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108) [netty-3.9.4.Final.jar:na]
    at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42) [netty-3.9.4.Final.jar:na]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [na:1.7.0_95]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [na:1.7.0_95]
    at java.lang.Thread.run(Thread.java:745) [na:1.7.0_95]
2016-12-14 22:13:01,230 INFO  [main] TSDB: Flushing compaction queue
2016-12-14 22:13:01,236 INFO  [main] TSDB: Completed shutting down the TSDB
Exception in thread "main" java.lang.RuntimeException: Initialization failed
    at net.opentsdb.tools.TSDMain.main(TSDMain.java:196)
Caused by: com.stumbleupon.async.DeferredGroupException: At least one of the Deferreds failed, first exception:
    at com.stumbleupon.async.DeferredGroup.done(DeferredGroup.java:169)
    at com.stumbleupon.async.DeferredGroup.recordCompletion(DeferredGroup.java:142)
    at com.stumbleupon.async.DeferredGroup.access$000(DeferredGroup.java:36)
    at com.stumbleupon.async.DeferredGroup$1Notify.call(DeferredGroup.java:82)
    at com.stumbleupon.async.Deferred.doCall(Deferred.java:1278)
    at com.stumbleupon.async.Deferred.runCallbacks(Deferred.java:1257)
    at com.stumbleupon.async.Deferred.access$300(Deferred.java:430)
    at com.stumbleupon.async.Deferred$Continue.call(Deferred.java:1366)
    at com.stumbleupon.async.Deferred.doCall(Deferred.java:1278)
    at com.stumbleupon.async.Deferred.runCallbacks(Deferred.java:1257)
    at com.stumbleupon.async.Deferred.handleContinuation(Deferred.java:1313)
    at com.stumbleupon.async.Deferred.doCall(Deferred.java:1284)
    at com.stumbleupon.async.Deferred.runCallbacks(Deferred.java:1257)
    at com.stumbleupon.async.Deferred.callback(Deferred.java:1005)
    at org.hbase.async.HBaseRpc.callback(HBaseRpc.java:712)
    at org.hbase.async.HBaseClient.tooManyAttempts(HBaseClient.java:2058)
    at org.hbase.async.HBaseClient.handleNSRE(HBaseClient.java:2848)
    at org.hbase.async.RegionClient.failOrRetryRpcs(RegionClient.java:1212)
    at org.hbase.async.RegionClient.cleanup(RegionClient.java:1174)
    at org.hbase.async.RegionClient.channelDisconnected(RegionClient.java:1151)
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:102)
    at org.hbase.async.RegionClient.handleUpstream(RegionClient.java:1223)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
    at org.jboss.netty.channel.SimpleChannelHandler.channelDisconnected(SimpleChannelHandler.java:199)
    at org.jboss.netty.channel.SimpleChannelHandler.handleUpstream(SimpleChannelHandler.java:120)
    at org.jboss.netty.handler.timeout.IdleStateAwareChannelHandler.handleUpstream(IdleStateAwareChannelHandler.java:36)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.channelDisconnected(SimpleChannelUpstreamHandler.java:208)
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:102)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
    at org.hbase.async.HBaseClient$RegionClientPipeline.sendUpstream(HBaseClient.java:3121)
    at org.jboss.netty.channel.Channels.fireChannelDisconnected(Channels.java:396)
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.close(AbstractNioWorker.java:360)
    at org.jboss.netty.channel.socket.nio.NioClientSocketPipelineSink.eventSunk(NioClientSocketPipelineSink.java:58)
    at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendDownstream(DefaultChannelPipeline.java:779)
    at org.jboss.netty.channel.SimpleChannelHandler.closeRequested(SimpleChannelHandler.java:334)
    at org.jboss.netty.channel.SimpleChannelHandler.handleDownstream(SimpleChannelHandler.java:260)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendDownstream(DefaultChannelPipeline.java:591)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendDownstream(DefaultChannelPipeline.java:582)
    at org.hbase.async.HBaseClient$RegionClientPipeline.sendDownstream(HBaseClient.java:3111)
    at org.jboss.netty.channel.Channels.close(Channels.java:812)
    at org.hbase.async.RegionClient.exceptionCaught(RegionClient.java:1239)
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:112)
    at org.hbase.async.RegionClient.handleUpstream(RegionClient.java:1223)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
    at org.jboss.netty.channel.SimpleChannelHandler.exceptionCaught(SimpleChannelHandler.java:156)
    at org.jboss.netty.channel.SimpleChannelHandler.handleUpstream(SimpleChannelHandler.java:130)
    at org.jboss.netty.handler.timeout.IdleStateAwareChannelHandler.handleUpstream(IdleStateAwareChannelHandler.java:36)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.exceptionCaught(SimpleChannelUpstreamHandler.java:153)
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:112)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
    at org.hbase.async.HBaseClient$RegionClientPipeline.sendUpstream(HBaseClient.java:3121)
    at org.jboss.netty.channel.Channels.fireExceptionCaught(Channels.java:525)
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.write0(AbstractNioWorker.java:291)
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.writeFromTaskLoop(AbstractNioWorker.java:151)
    at org.jboss.netty.channel.socket.nio.AbstractNioChannel$WriteTask.run(AbstractNioChannel.java:335)
    at org.jboss.netty.channel.socket.nio.AbstractNioSelector.processTaskQueue(AbstractNioSelector.java:372)
    at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:296)
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
    at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
    at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
    at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.hbase.async.PleaseThrottleException: 2 RPCs waiting on "hbase:meta,,1" to come back online
    at org.hbase.async.PleaseThrottleException.make(PleaseThrottleException.java:123)
    at org.hbase.async.PleaseThrottleException.make(PleaseThrottleException.java:75)
    at org.hbase.async.HBaseClient$1RetryRpc.call(HBaseClient.java:1939)
    at org.hbase.async.HBaseClient$1RetryRpc.call(HBaseClient.java:1927)
    at com.stumbleupon.async.Deferred.doCall(Deferred.java:1278)
    at com.stumbleupon.async.Deferred.runCallbacks(Deferred.java:1257)
    at com.stumbleupon.async.Deferred.callback(Deferred.java:1005)
    at org.hbase.async.HBaseRpc.callback(HBaseRpc.java:712)
    at org.hbase.async.HBaseClient.handleNSRE(HBaseClient.java:2816)
    at org.hbase.async.RegionClient.failOrRetryRpcs(RegionClient.java:1212)
    at org.hbase.async.RegionClient.cleanup(RegionClient.java:1187)
    at org.hbase.async.RegionClient.channelDisconnected(RegionClient.java:1151)
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:102)
    at org.hbase.async.RegionClient.handleUpstream(RegionClient.java:1223)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
    at org.jboss.netty.channel.SimpleChannelHandler.channelDisconnected(SimpleChannelHandler.java:199)
    at org.jboss.netty.channel.SimpleChannelHandler.handleUpstream(SimpleChannelHandler.java:120)
    at org.jboss.netty.handler.timeout.IdleStateAwareChannelHandler.handleUpstream(IdleStateAwareChannelHandler.java:36)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.channelDisconnected(SimpleChannelUpstreamHandler.java:208)
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:102)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
    at org.hbase.async.HBaseClient$RegionClientPipeline.sendUpstream(HBaseClient.java:3121)
    at org.jboss.netty.channel.Channels.fireChannelDisconnected(Channels.java:396)
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.close(AbstractNioWorker.java:360)
    at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:93)
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
    at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:318)
    ... 7 more
Caused by: org.hbase.async.PleaseThrottleException: 2 RPCs waiting on "hbase:meta,,1" to come back online
    ... 30 more
Caused by: org.hbase.async.NotServingRegionException: Connection reset: [id: 0xd8ed6d25, /10.0.2.15:35583 :> /10.0.2.15:16020] got disconnected
Caused by RPC: HBaseRpc(method=getClosestRowBefore, table="hbase:meta", key=[116, 115, 100, 98, 45, 117, 105, 100, 44, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 58, 65, 115, 121, 110, 99, 72, 66, 97, 115, 101, 126, 112, 114, 111, 98, 101, 126, 60, 59, 95, 60, 44, 58], region=RegionInfo(table="hbase:meta", region_name="hbase:meta,,1", stop_key=""), attempt=0, timeout=0, hasTimedout=false)
    at org.hbase.async.RegionClient.failOrRetryRpcs(RegionClient.java:1208)

ManOLamancha

Dec 20, 2016, 5:14:04 PM
to OpenTSDB, c.mc...@gmail.com
On Wednesday, December 14, 2016 at 2:26:53 PM UTC-8, karan alang wrote:

Hi Chris, tsuna,

I seem to be getting the exact error that Chris described; here are the details of the issue ->

I've kerberized HDP 2.4 (HBase version - 1.1.2.2.4.0.0-169, OpenTSDB version - 2.2.1).
On starting OpenTSDB, I'm getting the following error ->

What I understand (from posts on the internet) is that this is because of connection issues between HBase & OpenTSDB:
the connection to the HBase .META. region is failing.
Any ideas on how to fix this?

Here is the command used to start OpenTSDB:
/build/tsdb tsd --zkbasedir=/hbase-secure --port=9999 --zkquorum=sandbox.hortonworks.com:2181 --cachedir=/tmp/tsd --staticroot=build/staticroot --auto-metric

Ah, you'll likely need to modify the tsd script to add the JVM flag pointing at the JAAS config for your kerberized installation. It should look something like: -Djava.security.auth.login.config=/build/jaas-hbase-client.conf
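For reference, a minimal jaas-hbase-client.conf for a keytab-based Kerberos login might look like this; the keytab path and principal below are placeholders, not values from this thread:

```
Client {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  useTicketCache=false
  keyTab="/etc/security/keytabs/opentsdb.keytab"
  principal="opentsdb@EXAMPLE.COM";
};
```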

And did you set the "hbase.security.*" settings in your conf file? (I didn't find the attachment.) Take a look at http://opentsdb.github.io/asynchbase/docs/build/html/configuration.html for the params.
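For example, a Kerberos-enabled opentsdb.conf might carry entries along these lines (property names from the asynchbase configuration page linked above; the regionserver principal is a placeholder for your cluster's):

```
hbase.security.auth.enable=true
hbase.security.authentication=kerberos
hbase.kerberos.regionserver.principal=hbase/_HOST@EXAMPLE.COM
hbase.sasl.clientconfig=Client
```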

Hunter

Mar 29, 2018, 5:50:32 AM
to OpenTSDB
Hi ManOLamancha,

Can you describe the answer more specifically, please?


thanks


On Wednesday, December 21, 2016 at 6:14:04 AM UTC+8, ManOLamancha wrote:

ManOLamancha

May 22, 2018, 1:59:53 PM
to OpenTSDB
On Thursday, March 29, 2018 at 2:50:32 AM UTC-7, huang botao wrote:
Hi ManOLamancha,

Can you describe the answer more specifically, please?
