Google BigTable StatusRuntimeException error when scanning all rows of a table


Nikola Yuroukov

Nov 3, 2016, 12:56:02 PM
to Google Cloud Bigtable Discuss, sdu...@google.com

I am trying to scan all rows (~1.3M rows) of a Google Cloud Bigtable table using the Java API; however, I get the following error:


Error while reading table 'projects/firm-link-147413/instances/some-bigger-table/tables/media-location-demo' : Response was not consumed in time; terminating connection. (Possible causes: row size > 256MB, slow client data read, and network problems)



The whole dataset is about 2 GB, and each individual row is very small (<100 KB). The network connection is excellent for both download and upload. The client has the capacity to read the data (I have similar code that runs when uploading, and it performs fine).
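For reference, a simplified sketch of what the scan does (the real code differs; this assumes the bigtable-hbase 1.x client's BigtableConfiguration helper, and the project/instance/table names are taken from the error message above):

```java
import java.io.IOException;

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;

import com.google.cloud.bigtable.hbase.BigtableConfiguration;

public class ScanAllRows {
  public static void main(String[] args) throws IOException {
    try (Connection connection =
            BigtableConfiguration.connect("firm-link-147413", "some-bigger-table");
        Table table = connection.getTable(TableName.valueOf("media-location-demo"));
        // Full-table scan: no start/stop row, no filter.
        ResultScanner scanner = table.getScanner(new Scan())) {
      long count = 0;
      for (Result row : scanner) {
        // Consume each row promptly; heavy per-row processing inside this loop
        // can cause "Response was not consumed in time" / DEADLINE_EXCEEDED
        // because the server-side stream times out waiting on the client.
        count++;
      }
      System.out.println("Scanned " + count + " rows");
    }
  }
}
```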

Thanks for your help!

The full stack trace is:



com.google.cloud.bigtable.grpc.io.IOExceptionWithStatus: Error in response stream
	at com.google.cloud.bigtable.grpc.scanner.ResultQueueEntry$ExceptionResultQueueEntry.getResponseOrThrow(ResultQueueEntry.java:88) ~[bigtable-hbase-1.2-0.9.3.jar!/:na]
	at com.google.cloud.bigtable.grpc.scanner.ResponseQueueReader.getNextMergedRow(ResponseQueueReader.java:95) ~[bigtable-hbase-1.2-0.9.3.jar!/:na]
	at com.google.cloud.bigtable.grpc.scanner.StreamingBigtableResultScanner.next(StreamingBigtableResultScanner.java:60) ~[bigtable-hbase-1.2-0.9.3.jar!/:na]
	at com.google.cloud.bigtable.grpc.scanner.StreamingBigtableResultScanner.next(StreamingBigtableResultScanner.java:34) ~[bigtable-hbase-1.2-0.9.3.jar!/:na]
	at com.google.cloud.bigtable.grpc.scanner.ResumingStreamingResultScanner.next(ResumingStreamingResultScanner.java:89) [bigtable-hbase-1.2-0.9.3.jar!/:na]
	at com.google.cloud.bigtable.grpc.scanner.ResumingStreamingResultScanner.next(ResumingStreamingResultScanner.java:35) [bigtable-hbase-1.2-0.9.3.jar!/:na]
	at com.google.cloud.bigtable.hbase.adapters.read.BigtableResultScannerAdapter$1.next(BigtableResultScannerAdapter.java:58) [bigtable-hbase-1.2-0.9.3.jar!/:na]
	at org.apache.hadoop.hbase.client.AbstractClientScanner$1.hasNext(AbstractClientScanner.java:94) [hbase-client-1.2.1.jar!/:1.2.1]
	at com.shutterfly.corp.migrator.locspec.service.BigTableConnector.scanAllRows(BigTableConnector.java:157) [classes!/:0.0.1-SNAPSHOT]
	at com.shutterfly.corp.migrator.locspec.service.MediaIALookupMigrator.verifyMigratedData(MediaIALookupMigrator.java:86) [classes!/:0.0.1-SNAPSHOT]
	at com.shutterfly.corp.migrator.locspec.LocspecMigratorApplication.main(LocspecMigratorApplication.java:21) [classes!/:0.0.1-SNAPSHOT]
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.8.0_102]
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:1.8.0_102]
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.8.0_102]
	at java.lang.reflect.Method.invoke(Method.java:498) ~[na:1.8.0_102]
	at org.springframework.boot.loader.MainMethodRunner.run(MainMethodRunner.java:48) [locspec-migrator-0.0.1-SNAPSHOT.jar:0.0.1-SNAPSHOT]
	at org.springframework.boot.loader.Launcher.launch(Launcher.java:87) [locspec-migrator-0.0.1-SNAPSHOT.jar:0.0.1-SNAPSHOT]
	at org.springframework.boot.loader.Launcher.launch(Launcher.java:50) [locspec-migrator-0.0.1-SNAPSHOT.jar:0.0.1-SNAPSHOT]
	at org.springframework.boot.loader.JarLauncher.main(JarLauncher.java:58) [locspec-migrator-0.0.1-SNAPSHOT.jar:0.0.1-SNAPSHOT]
Caused by: com.google.bigtable.repackaged.io.grpc.StatusRuntimeException: DEADLINE_EXCEEDED: Error while reading table 'projects/firm-link-147413/instances/some-bigger-table/tables/media-location-demo' : Response was not consumed in time; terminating connection. (Possible causes: row size > 256MB, slow client data read, and network problems)
	at com.google.bigtable.repackaged.io.grpc.Status.asRuntimeException(Status.java:536) ~[bigtable-hbase-1.2-0.9.3.jar!/:na]
	at com.google.cloud.bigtable.grpc.scanner.StreamObserverAdapter.onClose(StreamObserverAdapter.java:61) ~[bigtable-hbase-1.2-0.9.3.jar!/:na]
	at com.google.cloud.bigtable.grpc.io.ChannelPool$InstrumentedChannel$2.onClose(ChannelPool.java:201) ~[bigtable-hbase-1.2-0.9.3.jar!/:na]
	at com.google.bigtable.repackaged.io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl.close(ClientCallImpl.java:481) ~[bigtable-hbase-1.2-0.9.3.jar!/:na]
	at com.google.bigtable.repackaged.io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl.access$600(ClientCallImpl.java:398) ~[bigtable-hbase-1.2-0.9.3.jar!/:na]
	at com.google.bigtable.repackaged.io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:513) ~[bigtable-hbase-1.2-0.9.3.jar!/:na]
	at com.google.bigtable.repackaged.io.grpc.internal.ContextRunnable.run(ContextRunnable.java:52) ~[bigtable-hbase-1.2-0.9.3.jar!/:na]
	at com.google.bigtable.repackaged.io.grpc.internal.SerializingExecutor$TaskRunner.run(SerializingExecutor.java:154) ~[bigtable-hbase-1.2-0.9.3.jar!/:na]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ~[na:1.8.0_102]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) ~[na:1.8.0_102]
	at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_102]


This topic was started here, but moved to Google Groups.

Solomon Duskis

Nov 3, 2016, 2:46:58 PM
to Nikola Yuroukov, Google Cloud Bigtable Discuss
As per Stack Overflow, you're reading from a US location into a cluster in Europe. You are getting DEADLINE_EXCEEDED, which happens when the client reads data too slowly.

Do you need to run your application across continents?
--

Solomon Duskis | Google Cloud Bigtable Software Engineer | sdu...@google.com | 914-462-0531

Nikola Yuroukov

Nov 3, 2016, 3:42:26 PM
to Solomon Duskis, Google Cloud Bigtable Discuss

No, we are not.

Solomon Duskis

Nov 3, 2016, 4:49:41 PM
to Nikola Yuroukov, Google Cloud Bigtable Discuss
That's what our diagnostics say.  How are you accessing Cloud Bigtable?  Are you using Dataflow?  If so, are you setting --zone?
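With Dataflow 1.x, pinning the workers to the Bigtable cluster's zone would look something like the following (hypothetical pipeline class and project; the relevant part is the --zone flag matching the cluster's zone, e.g. a europe-west1 zone for a European cluster):

```shell
# Launch a Dataflow job with workers pinned to the Bigtable cluster's zone.
# com.example.MyPipeline and my-project are placeholders.
mvn compile exec:java \
  -Dexec.mainClass=com.example.MyPipeline \
  -Dexec.args="--runner=BlockingDataflowPipelineRunner \
               --project=my-project \
               --zone=europe-west1-b"
```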

Nikola Yuroukov

Nov 3, 2016, 4:52:17 PM
to Solomon Duskis, Google Cloud Bigtable Discuss

Hi!

No, we are not using Google Dataflow. We are using the Java HBase-based Google client.

Solomon Duskis

Nov 3, 2016, 4:56:26 PM
to Nikola Yuroukov, Google Cloud Bigtable Discuss
Are you running the client from GCE, or on a VM that's outside of Google Cloud?

Nikola Yuroukov

Nov 3, 2016, 4:57:20 PM
to Solomon Duskis, Google Cloud Bigtable Discuss

A machine that is outside Google Cloud.

Solomon Duskis

Nov 3, 2016, 6:22:49 PM
to Google Cloud Bigtable Discuss, sdu...@google.com
Strange.  If both the client machine and the Bigtable cluster are in Europe, yet the communication is going through the US central region, that sounds like a networking issue.  These things happen sometimes; they're tricky to solve, and the Bigtable team doesn't have the expertise to resolve them.

Can you please confirm that the machine is indeed in Europe?
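One quick way to check (assuming a Linux/macOS client with outbound network access; ipinfo.io is just one of several IP-geolocation services):

```shell
# Show the client's public egress IP and its reported location.
curl -s https://ipinfo.io

# Inspect the route taken toward the Bigtable endpoint; hops through
# US-based networks would corroborate the diagnostics above.
traceroute bigtable.googleapis.com
```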

Regardless, I think that debugging this may unfortunately require a cloud support package to resolve quickly.