10:29:50.089 [AsyncHBase I/O Worker #1] DEBUG o.h.a.a.KerberosClientAuthProvider - Creating sasl client: client=hbase/url...@URL.COM, service=hbase, serviceHostname=hdtdata24.com
10:29:50.104 [AsyncHBase I/O Worker #1] DEBUG org.hbase.async.RegionClient - handleUpstream [id: 0x9efdf4af, /172.22.168.12:51178 => /172.22.168.39:60020] WRITTEN_AMOUNT: 6
10:29:50.117 [AsyncHBase I/O Worker #1] DEBUG org.hbase.async.RegionClient - handleUpstream [id: 0x9efdf4af, /172.22.168.12:51178 => /172.22.168.39:60020] WRITTEN_AMOUNT: 1385
10:29:50.117 [AsyncHBase I/O Worker #1] INFO org.hbase.async.RegionClient - Initialized securityhelper: org.hbase.async.SecureRpcHelper96@435e331b for region client: RegionClient@1400264902(chan=null, #pending_rpcs=2, #batched=0, #rpcs_inflight=0)
10:29:50.124 [AsyncHBase I/O Worker #1] DEBUG org.hbase.async.RegionClient - handleUpstream [id: 0x9efdf4af, /172.22.168.12:51178 => /172.22.168.39:60020] RECEIVED: BigEndianHeapChannelBuffer(ridx=0, widx=112, cap=112)
10:29:50.124 [AsyncHBase I/O Worker #1] DEBUG org.hbase.async.RegionClient - ------------------>>ENTERING DECODE >>------------------
10:29:50.125 [AsyncHBase I/O Worker #1] DEBUG org.hbase.async.RegionClient - handleUpstream [id: 0x9efdf4af, /172.22.168.12:51178 => /172.22.168.39:60020] WRITTEN_AMOUNT: 4
10:29:50.126 [AsyncHBase I/O Worker #1] DEBUG org.hbase.async.RegionClient - handleUpstream [id: 0x9efdf4af, /172.22.168.12:51178 => /172.22.168.39:60020] RECEIVED: BigEndianHeapChannelBuffer(ridx=0, widx=58, cap=58)
10:29:50.126 [AsyncHBase I/O Worker #1] DEBUG org.hbase.async.RegionClient - ------------------>>ENTERING DECODE >>------------------
10:29:50.136 [AsyncHBase I/O Worker #1] ERROR org.hbase.async.SecureRpcHelper - Failed Sasl challenge
javax.security.sasl.SaslException: No common protection layer between client and server
    at com.sun.security.sasl.gsskerb.GssKrb5Client.doFinalHandshake(GssKrb5Client.java:252) ~[na:1.7.0_67]
    at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:187) ~[na:1.7.0_67]
    at org.hbase.async.SecureRpcHelper$1PrivilegedAction.run(SecureRpcHelper.java:286) [asynchbase-1.7.1.jar:na]
    at org.hbase.async.SecureRpcHelper$1PrivilegedAction.run(SecureRpcHelper.java:282) [asynchbase-1.7.1.jar:na]
    at java.security.AccessController.doPrivileged(Native Method) [na:1.7.0_67]
We are using CDH 5.3.8, which uses hbase-0.98.6+cdh5.3.8+126. Thanks for any feedback you can give.
<property>
  <name>hadoop.rpc.protection</name>
  <value>authentication</value>
</property>
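The "No common protection layer between client and server" SaslException above typically means the SASL quality-of-protection (QOP) requested by the client does not overlap with what the servers offer: whatever the cluster's `hadoop.rpc.protection` is set to, the asynchbase client must ask for the same level. A hedged sketch of the matching client-side settings in `opentsdb.conf` (key names are asynchbase's as of 1.7.x; the `privacy` value is illustrative and assumes the cluster side is also set to `privacy`):

```
# opentsdb.conf -- illustrative values; must mirror the cluster's setting
hbase.security.auth.enable = true
hbase.security.authentication = kerberos
hbase.rpc.protection = privacy
hbase.sasl.clientconfig = Client
```

If the cluster really uses `authentication` on both sides and this error still appears, the mismatch is usually a RegionServer whose effective `hbase.rpc.protection`/`hadoop.rpc.protection` differs from the value the client was given.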
12:43:56.046 [AsyncHBase Timer HBaseClient #1-SendThread(URL.com:2181)] DEBUG o.a.zookeeper.ClientCnxnSocketNIO - deferring non-priming packet: clientPath:null serverPath:null finished:false header:: 0,-11 replyHeader:: 0,0,0 request:: null response:: null until SASL authentication completes.
DEBUG org.hbase.async.RegionClient - handleUpstream [id: 0x2d538a7e, /172.22.168.12:47396 => /172.22.168.39:60020] RECEIVED: BigEndianHeapChannelBuffer(ridx=0, widx=58, cap=58)
12:43:52.046 [AsyncHBase I/O Worker #2] ERROR org.hbase.async.RegionClient - Unexpected exception from downstream on [id: 0xdabbfe38, /172.22.168.12:47971 => /172.22.168.39:60020]
java.lang.IndexOutOfBoundsException: Not enough readable bytes - Need 196, maximum is 184
Yeah, every time I try to start OpenTSDB it fails with "RegionClient - Unexpected exception from downstream on". Do you know a good place to start looking? I have never developed an application that required SASL; we just upgraded to Kerberos.
It's a bit of an old post now, but let me try my luck :) I tried to set up an OpenTSDB 2.2.0 service against a Hortonworks Kerberized cluster, using the Kerberos settings provided in this post, but it still does not work. I am not able to start the OpenTSDB service: it always fails to read the hbase:meta table (which does exist in HBase), and then ends with a downstream error and a broken-pipe error. Has anybody faced a similar issue with OpenTSDB? I get the error below; any help will be appreciated.
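For reference, asynchbase's Kerberos login is normally driven by a JAAS file passed to the JVM via `-Djava.security.auth.login.config=...`, whose section name matches `hbase.sasl.clientconfig` (by default `Client`). A hedged sketch, with placeholder keytab path and principal that you would replace with your own:

```
/* jaas.conf -- illustrative; keyTab and principal are placeholders */
Client {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  useTicketCache=false
  keyTab="/etc/opentsdb/opentsdb.keytab"
  principal="opentsdb/host.example.com@EXAMPLE.COM";
};
```

If the login itself were failing you would usually see a GSSAPI "no valid credentials" error rather than the broken-pipe errors below, which point at the connection being dropped after login, during or after the SASL handshake.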
10:19:59.756 DEBUG [HBaseClient.handleDisconnect] - Channel [id: 0x301ad776]'s state changed: [id: 0x301ad776] CONNECT :16020
10:19:59.756 WARN [HBaseClient.call] - Probe Exists(table="hbase:meta", key=[100, 105, 115, 97, 103, 103, 114, 101, 103, 97, 116, 105, 111, 110, 46, 103, 101, 114, 109, 97, 110, 121, 44, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 58, 65, 115, 121, 110, 99, 72, 66, 97, 115, 101, 126, 112, 114, 111, 98, 101, 126, 60, 59, 95, 60, 44, 58, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 58, 65, 115, 121, 110, 99, 72, 66, 97, 115, 101, 126, 112, 114, 111, 98, 101, 126, 60, 59, 95, 60], family=null, qualifiers=null, attempt=0, region=RegionInfo(table="hbase:meta", region_name="hbase:meta,,1", stop_key="")) failed
org.hbase.async.NonRecoverableException: Too many attempts: Exists(table="hbase:meta", key=[100, 105, 115, 97, 103, 103, 114, 101, 103, 97, 116, 105, 111, 110, 46, 103, 101, 114, 109, 97, 110, 121, 44, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 58, 65, 115, 121, 110, 99, 72, 66, 97, 115, 101, 126, 112, 114, 111, 98, 101, 126, 60, 59, 95, 60, 44, 58, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 58, 65, 115, 121, 110, 99, 72, 66, 97, 115, 101, 126, 112, 114, 111, 98, 101, 126, 60, 59, 95, 60], family=null, qualifiers=null, attempt=11, region=RegionInfo(table="hbase:meta", region_name="hbase:meta,,1", stop_key=""))
at org.hbase.async.HBaseClient.tooManyAttempts(HBaseClient.java:2056) [asynchbase-1.7.1.jar:na]
at org.hbase.async.HBaseClient.sendRpcToRegion(HBaseClient.java:1920) [asynchbase-1.7.1.jar:na]
at org.hbase.async.HBaseClient$1RetryRpc.call(HBaseClient.java:1944) [asynchbase-1.7.1.jar:na]
at org.hbase.async.HBaseClient$1RetryRpc.call(HBaseClient.java:1927) [asynchbase-1.7.1.jar:na]
at com.stumbleupon.async.Deferred.doCall(Deferred.java:1278) [async-1.4.0.jar:na]
at com.stumbleupon.async.Deferred.runCallbacks(Deferred.java:1257) [async-1.4.0.jar:na]
at com.stumbleupon.async.Deferred.callback(Deferred.java:1005) [async-1.4.0.jar:na]
at org.hbase.async.HBaseClient$ZKClient$ZKCallback.processResult(HBaseClient.java:3632) [asynchbase-1.7.1.jar:na]
at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:573) [zookeeper-3.4.6.2.3.4.0-3485.jar:3.4.6-3485--1]
at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:510) [zookeeper-3.4.6.2.3.4.0-3485.jar:3.4.6-3485--1]
10:19:59.756 DEBUG [HBaseClient.call] - Retrying 3 RPCs on NSREd region "hbase:meta,,1"
10:19:59.757 DEBUG [RegionClient.sendRpc] - RPC queued: HBaseRpc(method=getClosestRowBefore, table="hbase:meta", key="mytable,\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00:AsyncHBase~probe~<;<,:", region=RegionInfo(table="hbase:meta", region_name="hbase:meta,,1", stop_key=""), attempt=11, timeout=-1, hasTimedout=false)
10:19:59.757 DEBUG [RegionClient.sendRpc] - RPC queued: HBaseRpc(method=getClosestRowBefore, table="hbase:meta", key="mytable-uid,\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00:AsyncHBase~probe~<;<,:", region=RegionInfo(table="hbase:meta", region_name="hbase:meta,,1", stop_key=""), attempt=11, timeout=-1, hasTimedout=false)
10:19:59.757 INFO [ClientCnxn.primeConnection] - Socket connection established to :2181, initiating session
10:19:59.757 DEBUG [HBaseClient.disconnectZK] - Ignore any DEBUG exception from ZooKeeper
10:19:59.757 DEBUG [ClientCnxn.primeConnection] - Session establishment request sent on :2181
10:19:59.757 DEBUG [ZooKeeper.close] - Closing session: 0x0
10:19:59.757 DEBUG [ClientCnxn.close] - Closing client for session: 0x0
10:19:59.757 DEBUG [HBaseClient.handleDisconnect] - Channel [id: 0x301ad776,
10:19:59.757 DEBUG [RegionClient.handleUpstream] - handleUpstream [id: 0x301ad776,
10:19:59.757 DEBUG [HBaseClient.handleDisconnect] - Channel [id: 0x301ad776, CONNECTED:
10:19:59.757 DEBUG [RegionClient.handleUpstream] - handleUpstream [id: 0x301ad776,
10:19:59.757 DEBUG [Login.initUserIfNeeded] - Already logged in
10:19:59.758 INFO [KerberosClientAuthProvider.newSaslClient] - Connecting to hbase/no...@example.com
10:19:59.758 INFO [KerberosClientAuthProvider.run] - Client will use GSSAPI as SASL mechanism.
10:19:59.758 DEBUG [KerberosClientAuthProvider.run] - Creating sasl client: client=
10:19:59.758 DEBUG [RegionClient.handleUpstream] - handleUpstream [id: 0x301ad776, /10.66.48.100:53332 => /] WRITTEN_AMOUNT: 6
10:19:59.759 DEBUG [RegionClient.handleUpstream] - handleUpstream [id: 0x301ad776, /10.66.48.100:53332 => /10.66.48.102:16020] EXCEPTION: java.io.IOException: Broken pipe
10:19:59.760 ERROR [RegionClient.exceptionCaught] - Unexpected exception from downstream on
java.io.IOException: Broken pipe
Regards,
Hammad
I looked into this, having the same problem, and came to the same conclusion; I've posted my findings on GitHub. The change I made was to use readableBytes for the array dimension, which keeps netty happy. That makes the SASL handshake work and things progress much further. However, I then run into the same "Broken pipe" exceptions others are reporting later on. I'm happy to contribute code here to make this work, but I'd like to know where I'm starting from: does this work against any Kerberized Hadoop cluster, and if so, which one?
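To illustrate the fix described above (sizing the array from the readable bytes rather than the buffer's capacity), here is a self-contained sketch using `java.nio.ByteBuffer` as a stand-in for netty 3's `ChannelBuffer` (where `remaining()` plays the role of `readableBytes()`). The class and method names are illustrative, not asynchbase's actual code:

```java
import java.nio.BufferUnderflowException;
import java.nio.ByteBuffer;

// Hedged sketch of the buffer-sizing bug: the "Not enough readable bytes -
// Need 196, maximum is 184" error arises when code tries to read more bytes
// than were actually received into the buffer.
public class SaslBufferFix {

    // Buggy variant: sizes the array from capacity(), which may exceed the
    // bytes actually present; reading then overruns the readable region
    // (netty raises IndexOutOfBoundsException, ByteBuffer raises
    // BufferUnderflowException -- same root cause).
    static byte[] readChallengeBuggy(ByteBuffer buf) {
        byte[] b = new byte[buf.capacity()];
        buf.get(b); // throws if capacity() > remaining()
        return b;
    }

    // Fixed variant: sizes the array from remaining() (analogous to netty's
    // readableBytes()), so only the bytes actually received are consumed.
    static byte[] readChallengeFixed(ByteBuffer buf) {
        byte[] b = new byte[buf.remaining()];
        buf.get(b);
        return b;
    }

    public static void main(String[] args) {
        ByteBuffer buf = ByteBuffer.allocate(196); // capacity 196...
        buf.put(new byte[184]);                    // ...but only 184 bytes written
        buf.flip();                                // readable region is now 184 bytes

        try {
            readChallengeBuggy(buf.duplicate());
        } catch (BufferUnderflowException e) {
            System.out.println("buggy variant overran the readable region");
        }
        System.out.println(readChallengeFixed(buf).length); // prints 184
    }
}
```

The same principle is why the error message reads "Need 196, maximum is 184": the array was sized to 196 while only 184 bytes were readable.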