I am trying to scan all rows (~1.3M) of a Google Cloud Bigtable table using the Java API, but I get the following error:
Error while reading table 'projects/firm-link-147413/instances/some-bigger-table/tables/media-location-demo' : Response was not consumed in time; terminating connection. (Possible causes: row size > 256MB, slow client data read, and network problems)
The whole dataset is about 2 GB, and individual rows are small (<100 KB). The network connection is excellent for both download and upload, and the client is able to keep up with the data (similar code that uploads the same data performs fine).
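For reference, the scan is essentially a straightforward full-table scan through the HBase-compatible client. This is a minimal sketch, not the actual `BigTableConnector.scanAllRows` code; the project, instance, and table names are taken from the error message, and `scan.setCaching(...)` plus prompt per-row processing are the usual first things to try when the server reports the response was not consumed in time:

```java
import java.io.IOException;

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;

import com.google.cloud.bigtable.hbase.BigtableConfiguration;

public class ScanAllRows {

    public static void main(String[] args) throws IOException {
        // Identifiers below come from the error message in this thread.
        try (Connection connection =
                     BigtableConfiguration.connect("firm-link-147413", "some-bigger-table");
             Table table = connection.getTable(TableName.valueOf("media-location-demo"))) {

            Scan scan = new Scan();
            // A smaller caching value fetches fewer rows per server round trip,
            // which can help if slow client-side processing lets the stream idle.
            scan.setCaching(100);

            long count = 0;
            try (ResultScanner scanner = table.getScanner(scan)) {
                for (Result row : scanner) {
                    // Process each row promptly; long pauses inside this loop
                    // can trigger the "response not consumed in time" timeout.
                    count++;
                }
            }
            System.out.println("Scanned " + count + " rows");
        }
    }
}
```

This sketch assumes the `bigtable-hbase` artifact and a live Bigtable instance, so it is illustrative rather than directly runnable here.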
Thanks for your help!
The full stack trace is:
com.google.cloud.bigtable.grpc.io.IOExceptionWithStatus: Error in response stream
	at com.google.cloud.bigtable.grpc.scanner.ResultQueueEntry$ExceptionResultQueueEntry.getResponseOrThrow(ResultQueueEntry.java:88) ~[bigtable-hbase-1.2-0.9.3.jar!/:na]
	at com.google.cloud.bigtable.grpc.scanner.ResponseQueueReader.getNextMergedRow(ResponseQueueReader.java:95) ~[bigtable-hbase-1.2-0.9.3.jar!/:na]
	at com.google.cloud.bigtable.grpc.scanner.StreamingBigtableResultScanner.next(StreamingBigtableResultScanner.java:60) ~[bigtable-hbase-1.2-0.9.3.jar!/:na]
	at com.google.cloud.bigtable.grpc.scanner.StreamingBigtableResultScanner.next(StreamingBigtableResultScanner.java:34) ~[bigtable-hbase-1.2-0.9.3.jar!/:na]
	at com.google.cloud.bigtable.grpc.scanner.ResumingStreamingResultScanner.next(ResumingStreamingResultScanner.java:89) [bigtable-hbase-1.2-0.9.3.jar!/:na]
	at com.google.cloud.bigtable.grpc.scanner.ResumingStreamingResultScanner.next(ResumingStreamingResultScanner.java:35) [bigtable-hbase-1.2-0.9.3.jar!/:na]
	at com.google.cloud.bigtable.hbase.adapters.read.BigtableResultScannerAdapter$1.next(BigtableResultScannerAdapter.java:58) [bigtable-hbase-1.2-0.9.3.jar!/:na]
	at org.apache.hadoop.hbase.client.AbstractClientScanner$1.hasNext(AbstractClientScanner.java:94) [hbase-client-1.2.1.jar!/:1.2.1]
	at com.shutterfly.corp.migrator.locspec.service.BigTableConnector.scanAllRows(BigTableConnector.java:157) [classes!/:0.0.1-SNAPSHOT]
	at com.shutterfly.corp.migrator.locspec.service.MediaIALookupMigrator.verifyMigratedData(MediaIALookupMigrator.java:86) [classes!/:0.0.1-SNAPSHOT]
	at com.shutterfly.corp.migrator.locspec.LocspecMigratorApplication.main(LocspecMigratorApplication.java:21) [classes!/:0.0.1-SNAPSHOT]
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.8.0_102]
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:1.8.0_102]
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.8.0_102]
	at java.lang.reflect.Method.invoke(Method.java:498) ~[na:1.8.0_102]
	at org.springframework.boot.loader.MainMethodRunner.run(MainMethodRunner.java:48) [locspec-migrator-0.0.1-SNAPSHOT.jar:0.0.1-SNAPSHOT]
	at org.springframework.boot.loader.Launcher.launch(Launcher.java:87) [locspec-migrator-0.0.1-SNAPSHOT.jar:0.0.1-SNAPSHOT]
	at org.springframework.boot.loader.Launcher.launch(Launcher.java:50) [locspec-migrator-0.0.1-SNAPSHOT.jar:0.0.1-SNAPSHOT]
	at org.springframework.boot.loader.JarLauncher.main(JarLauncher.java:58) [locspec-migrator-0.0.1-SNAPSHOT.jar:0.0.1-SNAPSHOT]
Caused by: com.google.bigtable.repackaged.io.grpc.StatusRuntimeException: DEADLINE_EXCEEDED: Error while reading table 'projects/firm-link-147413/instances/some-bigger-table/tables/media-location-demo' : Response was not consumed in time; terminating connection. (Possible causes: row size > 256MB, slow client data read, and network problems)
	at com.google.bigtable.repackaged.io.grpc.Status.asRuntimeException(Status.java:536) ~[bigtable-hbase-1.2-0.9.3.jar!/:na]
	at com.google.cloud.bigtable.grpc.scanner.StreamObserverAdapter.onClose(StreamObserverAdapter.java:61) ~[bigtable-hbase-1.2-0.9.3.jar!/:na]
	at com.google.cloud.bigtable.grpc.io.ChannelPool$InstrumentedChannel$2.onClose(ChannelPool.java:201) ~[bigtable-hbase-1.2-0.9.3.jar!/:na]
	at com.google.bigtable.repackaged.io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl.close(ClientCallImpl.java:481) ~[bigtable-hbase-1.2-0.9.3.jar!/:na]
	at com.google.bigtable.repackaged.io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl.access$600(ClientCallImpl.java:398) ~[bigtable-hbase-1.2-0.9.3.jar!/:na]
	at com.google.bigtable.repackaged.io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:513) ~[bigtable-hbase-1.2-0.9.3.jar!/:na]
	at com.google.bigtable.repackaged.io.grpc.internal.ContextRunnable.run(ContextRunnable.java:52) ~[bigtable-hbase-1.2-0.9.3.jar!/:na]
	at com.google.bigtable.repackaged.io.grpc.internal.SerializingExecutor$TaskRunner.run(SerializingExecutor.java:154) ~[bigtable-hbase-1.2-0.9.3.jar!/:na]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ~[na:1.8.0_102]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) ~[na:1.8.0_102]
	at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_102]
This topic was started here, but moved to Google Groups.
Solomon Duskis | Google Cloud Bigtable Software Engineer | sdu...@google.com | 914-462-0531
Hi!
No, we are not using Google Dataflow. We are using Google's Java HBase-based client.
The client runs on a machine outside Google Cloud.