Timeout writing large blob fields.


Aneesh Mohan

Apr 10, 2016, 12:54:19 AM
to DataStax Java Driver for Apache Cassandra User Mailing List
It's a 2-node Cassandra cluster running on Amazon x-large instances. I'm writing a byte array to a blob column; it works fine for sizes up to 10 MB, but now that I'm trying to insert 30 MB of data it throws this error every time:

Exception in thread "pool-13-thread-1" com.datastax.driver.core.exceptions.WriteTimeoutException: Cassandra timeout during write query at consistency ONE (1 replica were required but only 0 acknowledged the write)
    at com.datastax.driver.core.exceptions.WriteTimeoutException.copy(WriteTimeoutException.java:54)
    at com.datastax.driver.core.DefaultResultSetFuture.extractCauseFromExecutionException(DefaultResultSetFuture.java:259)
    at com.datastax.driver.core.DefaultResultSetFuture.getUninterruptibly(DefaultResultSetFuture.java:175)
    at com.datastax.driver.core.AbstractSession.execute(AbstractSession.java:52)
    at com.razorthink.monitor.cassandra.dao.CassanndraExecCommand.run(AbstractDao.java:119)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: com.datastax.driver.core.exceptions.WriteTimeoutException: Cassandra timeout during write query at consistency ONE (1 replica were required but only 0 acknowledged the write)
    at com.datastax.driver.core.exceptions.WriteTimeoutException.copy(WriteTimeoutException.java:54)
    at com.datastax.driver.core.Responses$Error.asException(Responses.java:99)
    at com.datastax.driver.core.DefaultResultSetFuture.onSet(DefaultResultSetFuture.java:110)
    at com.datastax.driver.core.RequestHandler.setFinalResult(RequestHandler.java:249)
    at com.datastax.driver.core.RequestHandler.onSet(RequestHandler.java:433)
    at com.datastax.driver.core.Connection$Dispatcher.messageReceived(Connection.java:668)
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)
    at org.jboss.netty.handler.codec.oneone.OneToOneDecoder.handleUpstream(OneToOneDecoder.java:70)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)
    at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462)
    at org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:443)
    at org.jboss.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)
    at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
    at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:318)
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
    at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
    at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
    at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
    ... 3 more

I have increased write_timeout_in_ms to a very large value (from 5000 to 100000), yet no success. This write is the only query I'm running, so the workload on the cluster is negligible.
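For context, the write is essentially the following (a minimal sketch with the standard driver API; the keyspace, table, and column names are placeholders, not my actual schema):

import java.nio.ByteBuffer;

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.PreparedStatement;
import com.datastax.driver.core.Session;

public class BlobWriter {
    public static void main(String[] args) {
        // hypothetical schema: CREATE TABLE demo.files (id text PRIMARY KEY, data blob)
        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        Session session = cluster.connect("demo");

        byte[] payload = new byte[30 * 1024 * 1024]; // ~30 MB byte array

        PreparedStatement insert =
                session.prepare("INSERT INTO files (id, data) VALUES (?, ?)");

        // the driver expects blob values as a ByteBuffer
        session.execute(insert.bind("file-1", ByteBuffer.wrap(payload)));

        cluster.close();
    }
}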



Aneesh Mohan

Apr 10, 2016, 1:48:58 AM
to DataStax Java Driver for Apache Cassandra User Mailing List
Looking at the debug logs, I found the following error while inserting:

Mutation 32MB too large for maximum size of 16Mb.


I resolved this by setting commitlog_segment_size_in_mb to 64 MB. I was wondering, though, what implications increasing commitlog_segment_size_in_mb will have on performance.
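For anyone hitting the same limit: as far as I can tell, the commit log caps a single mutation at half the segment size, so the default 32 MB segments give the 16 MB limit in the error above, and 64 MB segments allow mutations up to 32 MB. The only change needed in cassandra.yaml is:

commitlog_segment_size_in_mb: 64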

Jack Krupansky

Apr 10, 2016, 10:32:42 AM
to java-dri...@lists.datastax.com
FWIW, the Cassandra doc says:

"The maximum theoretical size for a blob is 2GB. The practical limit on blob size, however, is less than 1 MB, ideally even smaller. A blob type is suitable for storing a small image or short string."

See:

Generally, you should try to keep your partition size below 10 MB anyway.

Large blobs should be chunked anyway - it lets you retrieve the chunks in parallel, potentially from different nodes.
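For example, a chunked layout might look something like this (just a sketch; the table and column names are mine, and putting the chunk id in the partition key is what lets different chunks land on different nodes):

-- one metadata row per file, recording how many chunks it was split into
CREATE TABLE file_meta (
    file_id     text PRIMARY KEY,
    chunk_count int
);

-- each chunk is its own partition, so chunks spread across the cluster
CREATE TABLE file_chunks (
    file_id  text,
    chunk_id int,
    data     blob,
    PRIMARY KEY ((file_id, chunk_id))
);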

All of that said, the fact that you got a timeout rather than a specific error return sounds like a bug to me, but it's probably on the server side rather than the Java driver per se.



-- Jack Krupansky


Braulio Livio

Jan 29, 2018, 9:58:41 PM
to DataStax Java Driver for Apache Cassandra User Mailing List
Thank you so very much, Aneesh Mohan! Your solution helped me.

Bráulio

Nate McCall

Jan 29, 2018, 10:52:29 PM
to java-dri...@lists.datastax.com

> It's a 2-node Cassandra cluster running on Amazon x-large instances. I'm writing a byte array to a blob column; it works fine for sizes up to 10 MB, but now that I'm trying to insert 30 MB of data it throws this error every time:


Chunking is a solution to consider. Smaller chunks spread the load around the cluster and will keep you clear of edge cases like this. 

It's now retired, but Netflix's Thrift driver, Astyanax, had an interesting example of how to do this:

It would not be much of a stretch to port this approach to CQL.
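Roughly, the write side of such a port could look like this (only a sketch; the chunk size and the hypothetical file_chunks/file_meta tables are assumptions, not Astyanax's actual API):

import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

import com.datastax.driver.core.PreparedStatement;
import com.datastax.driver.core.ResultSetFuture;
import com.datastax.driver.core.Session;

public class ChunkedBlobWriter {

    private static final int CHUNK_SIZE = 1024 * 1024; // 1 MB per chunk

    public static void write(Session session, String fileId, byte[] payload) {
        PreparedStatement insertChunk = session.prepare(
                "INSERT INTO file_chunks (file_id, chunk_id, data) VALUES (?, ?, ?)");
        PreparedStatement insertMeta = session.prepare(
                "INSERT INTO file_meta (file_id, chunk_count) VALUES (?, ?)");

        // issue the chunk inserts asynchronously; each chunk is a separate
        // partition, so the writes fan out across the cluster
        List<ResultSetFuture> futures = new ArrayList<ResultSetFuture>();
        int chunkCount = 0;
        for (int offset = 0; offset < payload.length; offset += CHUNK_SIZE) {
            int length = Math.min(CHUNK_SIZE, payload.length - offset);
            ByteBuffer chunk = ByteBuffer.wrap(payload, offset, length);
            futures.add(session.executeAsync(insertChunk.bind(fileId, chunkCount, chunk)));
            chunkCount++;
        }

        // wait for every chunk, then record the chunk count so a reader
        // knows how many chunks to fetch and reassemble
        for (ResultSetFuture future : futures) {
            future.getUninterruptibly();
        }
        session.execute(insertMeta.bind(fileId, chunkCount));
    }
}

Reading is the mirror image: fetch the chunk count, issue one async SELECT per chunk, and concatenate the results in order.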


--
-----------------
Nate McCall
Wellington, NZ
@zznate

CTO
Apache Cassandra Consulting
http://www.thelastpickle.com