org.hbase.async.CallQueueTooBigException: Call queue is full on localhost


mahmoud hanafi

Oct 25, 2016, 2:37:43 PM
to open...@googlegroups.com
I am getting these errors. I've tried a number of different tuning options but no luck solving the problem.
Any ideas?

2016-10-25 11:37:12,091 ERROR [AsyncHBase I/O Worker #32] CompactionQueue: Failed to read a row to re-compact
org.hbase.async.CallQueueTooBigException: Call queue is full on localhost,40479,1477414651219, too many items queued ?
Caused by RPC: GetRequest(table="tsdb", key=[0, 0, 26, 88, 15, -127, -128, 0, 0, 1, 0, 0, 1, 0, 0, 2, 0, 56, -123], family=null, qualifiers=null, attempt=1, region=RegionInfo(table="tsdb", region_name="tsdb,\x00\x00\x19X\x07D \x00\x00\x01\x00\x01\xAA\x00\x00\x02\x00\x19\xB1,1477348085697.56c87297c18b284f5c5e991d072ecbc3.", stop_key=""))
    at org.hbase.async.CallQueueTooBigException.make(CallQueueTooBigException.java:62) ~[asynchbase-1.7.2.jar:na]
    at org.hbase.async.CallQueueTooBigException.make(CallQueueTooBigException.java:34) ~[asynchbase-1.7.2.jar:na]
    at org.hbase.async.RegionClient.makeException(RegionClient.java:1753) [asynchbase-1.7.2.jar:na]
    at org.hbase.async.RegionClient.decodeException(RegionClient.java:1773) [asynchbase-1.7.2.jar:na]
    at org.hbase.async.RegionClient.decode(RegionClient.java:1485) [asynchbase-1.7.2.jar:na]
    at org.hbase.async.RegionClient.decode(RegionClient.java:88) [asynchbase-1.7.2.jar:na]
    at org.jboss.netty.handler.codec.replay.ReplayingDecoder.callDecode(ReplayingDecoder.java:500) [netty-3.9.4.Final.jar:na]
    at org.jboss.netty.handler.codec.replay.ReplayingDecoder.messageReceived(ReplayingDecoder.java:485) [netty-3.9.4.Final.jar:na]
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70) [netty-3.9.4.Final.jar:na]
    at org.hbase.async.RegionClient.handleUpstream(RegionClient.java:1223) [asynchbase-1.7.2.jar:na]
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564) [netty-3.9.4.Final.jar:na]
    at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791) [netty-3.9.4.Final.jar:na]
    at org.jboss.netty.channel.SimpleChannelHandler.messageReceived(SimpleChannelHandler.java:142) [netty-3.9.4.Final.jar:na]
    at org.jboss.netty.channel.SimpleChannelHandler.handleUpstream(SimpleChannelHandler.java:88) [netty-3.9.4.Final.jar:na]
    at org.jboss.netty.handler.timeout.IdleStateAwareChannelHandler.handleUpstream(IdleStateAwareChannelHandler.java:36) [netty-3.9.4.Final.jar:na]
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564) [netty-3.9.4.Final.jar:na]
    at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791) [netty-3.9.4.Final.jar:na]
    at org.jboss.netty.handler.timeout.IdleStateHandler.messageReceived(IdleStateHandler.java:294) [netty-3.9.4.Final.jar:na]
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70) [netty-3.9.4.Final.jar:na]
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564) [netty-3.9.4.Final.jar:na]
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559) [netty-3.9.4.Final.jar:na]
    at org.hbase.async.HBaseClient$RegionClientPipeline.sendUpstream(HBaseClient.java:3121) [asynchbase-1.7.2.jar:na]
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268) [netty-3.9.4.Final.jar:na]
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255) [netty-3.9.4.Final.jar:na]
    at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88) [netty-3.9.4.Final.jar:na]
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108) [netty-3.9.4.Final.jar:na]
    at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:318) [netty-3.9.4.Final.jar:na]
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89) [netty-3.9.4.Final.jar:na]
    at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178) [netty-3.9.4.Final.jar:na]
    at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108) [netty-3.9.4.Final.jar:na]
    at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42) [netty-3.9.4.Final.jar:na]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [na:1.7.0_111]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [na:1.7.0_111]
    at java.lang.Thread.run(Thread.java:745) [na:1.7.0_111]

ManOLamancha

Dec 20, 2016, 4:11:56 PM
to OpenTSDB
On Tuesday, October 25, 2016 at 11:37:43 AM UTC-7, Mahmoud Hanafi wrote:
I am getting these errors. I've tried a number of different tuning options but no luck solving the problem.
Any ideas?

2016-10-25 11:37:12,091 ERROR [AsyncHBase I/O Worker #32] CompactionQueue: Failed to read a row to re-compact
org.hbase.async.CallQueueTooBigException: Call queue is full on localhost,40479,1477414651219, too many items queued ?

I'd suggest switching to appends in 2.2 or turning off TSD compactions. We'll likely disable compactions by default in 2.4 as they cause so many problems. To turn them off, set "tsd.storage.enable_compaction" to false.
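For reference, the relevant opentsdb.conf lines would look something like the sketch below (double-check the option names against your TSD version's documentation before relying on them):

# Turn off TSD-side row compactions:
tsd.storage.enable_compaction = false

# Or, to try the append-based write path instead (2.2+):
tsd.storage.enable_appends = true

Note these are independent choices: with appends enabled, compactions are unnecessary anyway since each row is kept in a single column.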

Thibault Godouet

Dec 21, 2016, 4:34:48 AM
to ManOLamancha, OpenTSDB
Hi there,
I'm surprised by this, as in my tests a while back appends appeared slower (in data points written per second) than the performance I get without them, even during compaction.

Just wondering if others had different results, which may mean my setup is somehow different, or I did something wrong.

john....@skyscanner.net

Dec 21, 2016, 6:50:33 AM
to OpenTSDB, clars...@gmail.com, tib...@godouet.net
Hi,

I'm using hbase 1.2 with tsdb 2.2.1 with the following settings
tsd.core.uid.random_metrics = true
tsd.core.meta.enable_realtime_ts=true
tsd.storage.fix_duplicates=true

tsd.storage.salt.width = 1
tsd.storage.salt.buckets = 8
tsd.storage.uid.width.tagv = 4

And I'm seeing lots of call queue full errors that correspond to when the TSDB compaction process is running.
I've tried moving to appends but it had a really adverse performance impact for me as well.

Are there any best-practice recommendations for tuning HBase, either for appends or for running with realtime meta sync?

Thanks

J.

ManOLamancha

Jan 7, 2017, 7:37:18 PM
to OpenTSDB, clars...@gmail.com, tib...@godouet.net
On Wednesday, December 21, 2016 at 1:34:48 AM UTC-8, Thibault Godouet wrote:
I'm surprised by this, as in my tests a while back appends appeared slower (in data points written per second) than the performance I get without them, even during compaction.

Just wondering if others had different results, which may mean my setup is somehow different, or I did something wrong.

There's definitely a region server impact with appends enabled, as the server must read, update, and write the column for every data point. I *think* we saw about a 20% overhead when testing ourselves. Our engineers wrote a co-processor that performs much less impactful appends (and atomic increments) in HBase by simply recording all of the operations and resolving them at query time. That should be open-sourced at some point, as we have HBase committers on staff.

ManOLamancha

Jan 7, 2017, 7:38:25 PM
to OpenTSDB, clars...@gmail.com, tib...@godouet.net
On Wednesday, December 21, 2016 at 3:50:33 AM UTC-8, john....@skyscanner.net wrote:
And I'm seeing lots of call queue full errors that correspond to when the TSDB compaction process is running.
I've tried moving to appends but it had a really adverse performance impact for me as well.

Are there any best practice recommendations for tuning hbase either for appends or for running with realtime meta sync ?

In your case I'd recommend enabling a compression algorithm on your tables and disabling both appends and TSD compactions.
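As a rough sketch (assuming the default 'tsdb' table with its 't' column family, and that your chosen codec, e.g. Snappy, is installed on the region servers), enabling compression would look like this from the hbase shell:

alter 'tsdb', {NAME => 't', COMPRESSION => 'SNAPPY'}
major_compact 'tsdb'

The major_compact rewrites existing store files so they pick up the new codec. Then in opentsdb.conf:

tsd.storage.enable_compaction = false
tsd.storage.enable_appends = false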