Re: ETL data pipeline keeps running for 1 hour without any result


Poorna Chandra

Jun 8, 2016, 2:53:01 PM
to Sudarshan Thakur, CDAP User
Hi Sudarshan,

A few questions to understand your setup -
  1. Is this running on cluster or SDK?
  2. How big is the data in the SQL Server table? 
  3. Are the log lines pasted above the final log lines after one hour? If there are more logs, can you attach all the logs?
  4. When you click on "Get Schema" on the Database source in Hydrator Studio, can you see the schema being fetched?
Thanks,
Poorna.



On Wed, Jun 8, 2016 at 12:28 AM, Sudarshan Thakur <sudarsha...@gmail.com> wrote:
Hi,

I am trying to move data from one of the tables in my SQL Server database to an HBase table.

The pipeline is published successfully, but when I run it, it keeps running for an hour.

This is the warning message I am getting:

2016-06-08 06:53:52,680 - WARN  [MapReduceRunner-phase-1:o.a.h.c.Configuration@2345] - file:/opt/cdap/sdk-3.4.1/data/tmp/runner/mapreduce.default.EBRLTOHBASEDATAPIPELINEONE.phase-1.c017affb-2d45-11e6-80af-0800272d2212/mapreduce/staging/cdap482720768/.staging/job_local482720768_0004/job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval;  Ignoring.
 2016-06-08 06:53:52,681 - WARN  [MapReduceRunner-phase-1:o.a.h.c.Configuration@2345] - file:/opt/cdap/sdk-3.4.1/data/tmp/runner/mapreduce.default.EBRLTOHBASEDATAPIPELINEONE.phase-1.c017affb-2d45-11e6-80af-0800272d2212/mapreduce/staging/cdap482720768/.staging/job_local482720768_0004/job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts;  Ignoring.
 2016-06-08 06:53:53,021 - WARN  [MapReduceRunner-phase-1:o.a.h.c.Configuration@2345] - file:/opt/cdap/sdk-3.4.1/data/tmp/runner/mapreduce.default.EBRLTOHBASEDATAPIPELINEONE.phase-1.c017affb-2d45-11e6-80af-0800272d2212/mapreduce/local/localRunner/cdap/job_local482720768_0004/job_local482720768_0004.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval;  Ignoring.
 2016-06-08 06:53:53,030 - WARN  [MapReduceRunner-phase-1:o.a.h.c.Configuration@2345] - file:/opt/cdap/sdk-3.4.1/data/tmp/runner/mapreduce.default.EBRLTOHBASEDATAPIPELINEONE.phase-1.c017affb-2d45-11e6-80af-0800272d2212/mapreduce/local/localRunner/cdap/job_local482720768_0004/job_local482720768_0004.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts;  Ignoring.






Regards 
Sudarshan


Sudarshan Thakur

Jun 9, 2016, 3:20:41 AM
to CDAP User, sudarsha...@gmail.com

Hi Poorna,

I have deleted the pipeline and created it again.
1. CDAP is not running on a cluster; it is on my standalone VM.
2. The data is pretty big, about 10 million records. Can you tell me how to limit the data?
3. I got the schema, and then I also clicked the Apply button.


This time it failed.
Can you take a look at my query?

Import query: select col1, col2, col3, col4 from table where $CONDITIONS col1<100
Bounding query: select max(col1), min(col2) from table
Split-by field name: col1


Regards 
Sudarshan

Sudarshan Thakur

Jun 9, 2016, 3:29:55 AM
to CDAP User, sudarsha...@gmail.com
Also attaching the full log.


2016-06-09 07:22:09,112 - DEBUG [MapReduceRunner-phase-1:c.c.c.d.m.w.BasicLineageWriter@59] - Writing access for run program_run:default.TestPipelineOne.mapreduce.phase-1.df618ca7-2e12-11e6-9325-0800272d2212, dataset dataset:default.TestReference, accessType READ, component null, accessTime = 1465456929112
 2016-06-09 07:22:09,652 - ERROR [MapReduceRunner-phase-1:c.c.c.d.a.KafkaAuditPublisher@74] - Got exception publishing audit message AuditMessage{version=1, time=1465456929149, entityId=dataset:default.TestReference, user='', type=ACCESS, payload=AccessPayload{accessType=READ, accessor=program_run:default.TestPipelineOne.mapreduce.phase-1.df618ca7-2e12-11e6-9325-0800272d2212} AuditPayload{}}. Exception:
java.util.concurrent.ExecutionException: kafka.common.FailedToSendMessageException: Failed to send messages after 3 tries.
at com.google.common.util.concurrent.AbstractFuture$Sync.getValue(AbstractFuture.java:294) ~[com.google.guava.guava-13.0.1.jar:na]
at com.google.common.util.concurrent.AbstractFuture$Sync.get(AbstractFuture.java:281) ~[com.google.guava.guava-13.0.1.jar:na]
at com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:116) ~[com.google.guava.guava-13.0.1.jar:na]
at co.cask.cdap.data2.audit.KafkaAuditPublisher.publish(KafkaAuditPublisher.java:72) ~[co.cask.cdap.cdap-data-fabric-3.4.1.jar:na]
at co.cask.cdap.data2.audit.AuditPublishers.publishAccess(AuditPublishers.java:72) [co.cask.cdap.cdap-data-fabric-3.4.1.jar:na]
at co.cask.cdap.data2.metadata.writer.LineageWriterDatasetFramework.doWriteLineage(LineageWriterDatasetFramework.java:159) [co.cask.cdap.cdap-data-fabric-3.4.1.jar:na]
at co.cask.cdap.data2.metadata.writer.LineageWriterDatasetFramework.writeLineage(LineageWriterDatasetFramework.java:141) [co.cask.cdap.cdap-data-fabric-3.4.1.jar:na]
at co.cask.cdap.data2.dataset2.ForwardingDatasetFramework.writeLineage(ForwardingDatasetFramework.java:176) [co.cask.cdap.cdap-data-fabric-3.4.1.jar:na]
at co.cask.cdap.internal.app.runtime.workflow.NameMappedDatasetFramework.writeLineage(NameMappedDatasetFramework.java:172) [co.cask.cdap.cdap-app-fabric-3.4.1.jar:na]
at co.cask.cdap.data.dataset.SystemDatasetInstantiator.writeLineage(SystemDatasetInstantiator.java:108) [co.cask.cdap.cdap-data-fabric-3.4.1.jar:na]
at co.cask.cdap.data2.dataset2.SingleThreadDatasetCache$LineageRecordingDatasetCache.get(SingleThreadDatasetCache.java:143) [co.cask.cdap.cdap-data-fabric-3.4.1.jar:na]
at co.cask.cdap.data2.dataset2.SingleThreadDatasetCache$LineageRecordingDatasetCache.get(SingleThreadDatasetCache.java:127) [co.cask.cdap.cdap-data-fabric-3.4.1.jar:na]
at co.cask.cdap.data2.dataset2.SingleThreadDatasetCache.getDataset(SingleThreadDatasetCache.java:170) [co.cask.cdap.cdap-data-fabric-3.4.1.jar:na]
at co.cask.cdap.data2.dataset2.DynamicDatasetCache.getDataset(DynamicDatasetCache.java:150) [co.cask.cdap.cdap-data-fabric-3.4.1.jar:na]
at co.cask.cdap.data2.dataset2.DynamicDatasetCache.getDataset(DynamicDatasetCache.java:126) [co.cask.cdap.cdap-data-fabric-3.4.1.jar:na]
at co.cask.cdap.internal.app.runtime.AbstractContext.getDataset(AbstractContext.java:179) [co.cask.cdap.cdap-app-fabric-3.4.1.jar:na]
at co.cask.cdap.internal.app.runtime.batch.BasicMapReduceContext.createInput(BasicMapReduceContext.java:420) [co.cask.cdap.cdap-app-fabric-3.4.1.jar:na]
at co.cask.cdap.internal.app.runtime.batch.BasicMapReduceContext.addInput(BasicMapReduceContext.java:252) [co.cask.cdap.cdap-app-fabric-3.4.1.jar:na]
at co.cask.cdap.internal.app.runtime.batch.BasicMapReduceContext.addInput(BasicMapReduceContext.java:227) [co.cask.cdap.cdap-app-fabric-3.4.1.jar:na]
at co.cask.cdap.etl.batch.mapreduce.MapReduceSourceContext$7.call(MapReduceSourceContext.java:117) [cdap-etl-core-3.4.1.jar:na]
at co.cask.cdap.etl.batch.mapreduce.MapReduceSourceContext$7.call(MapReduceSourceContext.java:113) [cdap-etl-core-3.4.1.jar:na]
at co.cask.cdap.etl.log.LogContext.runUnchecked(LogContext.java:145) [cdap-etl-core-3.4.1.jar:na]
at co.cask.cdap.etl.log.LogContext.runWithoutLoggingUnchecked(LogContext.java:139) [cdap-etl-core-3.4.1.jar:na]
at co.cask.cdap.etl.batch.mapreduce.MapReduceSourceContext.setInput(MapReduceSourceContext.java:113) [cdap-etl-core-3.4.1.jar:na]
at co.cask.hydrator.plugin.db.batch.source.DBSource.prepareRun(DBSource.java:213) [1465456928749-0/:na]
at co.cask.hydrator.plugin.db.batch.source.DBSource.prepareRun(DBSource.java:64) [1465456928749-0/:na]
at co.cask.cdap.etl.batch.LoggedBatchConfigurable$1.call(LoggedBatchConfigurable.java:44) [cdap-etl-core-3.4.1.jar:na]
at co.cask.cdap.etl.batch.LoggedBatchConfigurable$1.call(LoggedBatchConfigurable.java:41) [cdap-etl-core-3.4.1.jar:na]
at co.cask.cdap.etl.log.LogContext.run(LogContext.java:59) [cdap-etl-core-3.4.1.jar:na]
at co.cask.cdap.etl.batch.LoggedBatchConfigurable.prepareRun(LoggedBatchConfigurable.java:41) [cdap-etl-core-3.4.1.jar:na]
at co.cask.cdap.etl.batch.mapreduce.ETLMapReduce.beforeSubmit(ETLMapReduce.java:170) [cdap-etl-batch-3.4.1.jar:na]
at co.cask.cdap.internal.app.runtime.batch.MapReduceRuntimeService$2.call(MapReduceRuntimeService.java:471) [co.cask.cdap.cdap-app-fabric-3.4.1.jar:na]
at co.cask.cdap.internal.app.runtime.batch.MapReduceRuntimeService$2.call(MapReduceRuntimeService.java:466) [co.cask.cdap.cdap-app-fabric-3.4.1.jar:na]
at co.cask.cdap.data2.transaction.Transactions.execute(Transactions.java:174) [co.cask.cdap.cdap-data-fabric-3.4.1.jar:na]
at co.cask.cdap.internal.app.runtime.batch.MapReduceRuntimeService.beforeSubmit(MapReduceRuntimeService.java:466) [co.cask.cdap.cdap-app-fabric-3.4.1.jar:na]
at co.cask.cdap.internal.app.runtime.batch.MapReduceRuntimeService.startUp(MapReduceRuntimeService.java:204) [co.cask.cdap.cdap-app-fabric-3.4.1.jar:na]
at com.google.common.util.concurrent.AbstractExecutionThreadService$1$1.run(AbstractExecutionThreadService.java:47) [com.google.guava.guava-13.0.1.jar:na]
at co.cask.cdap.internal.app.runtime.batch.MapReduceRuntimeService$1$1.run(MapReduceRuntimeService.java:386) [co.cask.cdap.cdap-app-fabric-3.4.1.jar:na]
at java.lang.Thread.run(Thread.java:745) [na:1.7.0_75]
Caused by: kafka.common.FailedToSendMessageException: Failed to send messages after 3 tries.
at kafka.producer.async.DefaultEventHandler.handle(DefaultEventHandler.scala:90) ~[org.apache.kafka.kafka_2.10-0.8.2.2.jar:na]
at kafka.producer.Producer.send(Producer.scala:77) ~[org.apache.kafka.kafka_2.10-0.8.2.2.jar:na]
at kafka.javaapi.producer.Producer.send(Producer.scala:42) ~[org.apache.kafka.kafka_2.10-0.8.2.2.jar:na]
at org.apache.twill.internal.kafka.client.SimpleKafkaPublisher$SimplePreparer.send(SimpleKafkaPublisher.java:122) ~[org.apache.twill.twill-core-0.7.0-incubating.jar:0.7.0-incubating]
... 36 common frames omitted
 2016-06-09 07:22:09,946 - DEBUG [MapReduceRunner-phase-1:c.c.c.d.m.w.BasicLineageWriter@59] - Writing access for run program_run:default.TestPipelineOne.mapreduce.phase-1.df618ca7-2e12-11e6-9325-0800272d2212, dataset dataset:default.HBASE_REFERENCE, accessType WRITE, component null, accessTime = 1465456929946
 2016-06-09 07:22:10,433 - ERROR [MapReduceRunner-phase-1:c.c.c.d.a.KafkaAuditPublisher@74] - Got exception publishing audit message AuditMessage{version=1, time=1465456929955, entityId=dataset:default.HBASE_REFERENCE, user='', type=ACCESS, payload=AccessPayload{accessType=WRITE, accessor=program_run:default.TestPipelineOne.mapreduce.phase-1.df618ca7-2e12-11e6-9325-0800272d2212} AuditPayload{}}. Exception:
java.util.concurrent.ExecutionException: kafka.common.FailedToSendMessageException: Failed to send messages after 3 tries.
at com.google.common.util.concurrent.AbstractFuture$Sync.getValue(AbstractFuture.java:294) ~[com.google.guava.guava-13.0.1.jar:na]
at com.google.common.util.concurrent.AbstractFuture$Sync.get(AbstractFuture.java:281) ~[com.google.guava.guava-13.0.1.jar:na]
at com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:116) ~[com.google.guava.guava-13.0.1.jar:na]
at co.cask.cdap.data2.audit.KafkaAuditPublisher.publish(KafkaAuditPublisher.java:72) ~[co.cask.cdap.cdap-data-fabric-3.4.1.jar:na]
at co.cask.cdap.data2.audit.AuditPublishers.publishAccess(AuditPublishers.java:77) [co.cask.cdap.cdap-data-fabric-3.4.1.jar:na]
at co.cask.cdap.data2.metadata.writer.LineageWriterDatasetFramework.doWriteLineage(LineageWriterDatasetFramework.java:159) [co.cask.cdap.cdap-data-fabric-3.4.1.jar:na]
at co.cask.cdap.data2.metadata.writer.LineageWriterDatasetFramework.writeLineage(LineageWriterDatasetFramework.java:141) [co.cask.cdap.cdap-data-fabric-3.4.1.jar:na]
at co.cask.cdap.data2.dataset2.ForwardingDatasetFramework.writeLineage(ForwardingDatasetFramework.java:176) [co.cask.cdap.cdap-data-fabric-3.4.1.jar:na]
at co.cask.cdap.internal.app.runtime.workflow.NameMappedDatasetFramework.writeLineage(NameMappedDatasetFramework.java:172) [co.cask.cdap.cdap-app-fabric-3.4.1.jar:na]
at co.cask.cdap.data.dataset.SystemDatasetInstantiator.writeLineage(SystemDatasetInstantiator.java:108) [co.cask.cdap.cdap-data-fabric-3.4.1.jar:na]
at co.cask.cdap.data2.dataset2.SingleThreadDatasetCache$LineageRecordingDatasetCache.get(SingleThreadDatasetCache.java:143) [co.cask.cdap.cdap-data-fabric-3.4.1.jar:na]
at co.cask.cdap.data2.dataset2.SingleThreadDatasetCache$LineageRecordingDatasetCache.get(SingleThreadDatasetCache.java:127) [co.cask.cdap.cdap-data-fabric-3.4.1.jar:na]
at co.cask.cdap.data2.dataset2.SingleThreadDatasetCache.getDataset(SingleThreadDatasetCache.java:170) [co.cask.cdap.cdap-data-fabric-3.4.1.jar:na]
at co.cask.cdap.data2.dataset2.DynamicDatasetCache.getDataset(DynamicDatasetCache.java:150) [co.cask.cdap.cdap-data-fabric-3.4.1.jar:na]
at co.cask.cdap.data2.dataset2.DynamicDatasetCache.getDataset(DynamicDatasetCache.java:126) [co.cask.cdap.cdap-data-fabric-3.4.1.jar:na]
at co.cask.cdap.internal.app.runtime.AbstractContext.getDataset(AbstractContext.java:179) [co.cask.cdap.cdap-app-fabric-3.4.1.jar:na]
at co.cask.cdap.internal.app.runtime.batch.BasicMapReduceContext.addOutput(BasicMapReduceContext.java:311) [co.cask.cdap.cdap-app-fabric-3.4.1.jar:na]
at co.cask.cdap.etl.batch.mapreduce.MapReduceSinkContext$4.call(MapReduceSinkContext.java:88) [cdap-etl-core-3.4.1.jar:na]
at co.cask.cdap.etl.batch.mapreduce.MapReduceSinkContext$4.call(MapReduceSinkContext.java:84) [cdap-etl-core-3.4.1.jar:na]
at co.cask.cdap.etl.log.LogContext.runUnchecked(LogContext.java:145) [cdap-etl-core-3.4.1.jar:na]
at co.cask.cdap.etl.log.LogContext.runWithoutLoggingUnchecked(LogContext.java:139) [cdap-etl-core-3.4.1.jar:na]
at co.cask.cdap.etl.batch.mapreduce.MapReduceSinkContext.addOutput(MapReduceSinkContext.java:84) [cdap-etl-core-3.4.1.jar:na]
at co.cask.hydrator.plugin.sink.HBaseSink.prepareRun(HBaseSink.java:89) [1465456927467-0/:na]
at co.cask.hydrator.plugin.sink.HBaseSink.prepareRun(HBaseSink.java:57) [1465456927467-0/:na]
at co.cask.cdap.etl.batch.LoggedBatchConfigurable$1.call(LoggedBatchConfigurable.java:44) [cdap-etl-core-3.4.1.jar:na]
at co.cask.cdap.etl.batch.LoggedBatchConfigurable$1.call(LoggedBatchConfigurable.java:41) [cdap-etl-core-3.4.1.jar:na]
at co.cask.cdap.etl.log.LogContext.run(LogContext.java:59) [cdap-etl-core-3.4.1.jar:na]
at co.cask.cdap.etl.batch.LoggedBatchConfigurable.prepareRun(LoggedBatchConfigurable.java:41) [cdap-etl-core-3.4.1.jar:na]
at co.cask.cdap.etl.batch.mapreduce.ETLMapReduce.beforeSubmit(ETLMapReduce.java:189) [cdap-etl-batch-3.4.1.jar:na]
at co.cask.cdap.internal.app.runtime.batch.MapReduceRuntimeService$2.call(MapReduceRuntimeService.java:471) [co.cask.cdap.cdap-app-fabric-3.4.1.jar:na]
at co.cask.cdap.internal.app.runtime.batch.MapReduceRuntimeService$2.call(MapReduceRuntimeService.java:466) [co.cask.cdap.cdap-app-fabric-3.4.1.jar:na]
at co.cask.cdap.data2.transaction.Transactions.execute(Transactions.java:174) [co.cask.cdap.cdap-data-fabric-3.4.1.jar:na]
at co.cask.cdap.internal.app.runtime.batch.MapReduceRuntimeService.beforeSubmit(MapReduceRuntimeService.java:466) [co.cask.cdap.cdap-app-fabric-3.4.1.jar:na]
at co.cask.cdap.internal.app.runtime.batch.MapReduceRuntimeService.startUp(MapReduceRuntimeService.java:204) [co.cask.cdap.cdap-app-fabric-3.4.1.jar:na]
at com.google.common.util.concurrent.AbstractExecutionThreadService$1$1.run(AbstractExecutionThreadService.java:47) [com.google.guava.guava-13.0.1.jar:na]
at co.cask.cdap.internal.app.runtime.batch.MapReduceRuntimeService$1$1.run(MapReduceRuntimeService.java:386) [co.cask.cdap.cdap-app-fabric-3.4.1.jar:na]
at java.lang.Thread.run(Thread.java:745) [na:1.7.0_75]
Caused by: kafka.common.FailedToSendMessageException: Failed to send messages after 3 tries.
at kafka.producer.async.DefaultEventHandler.handle(DefaultEventHandler.scala:90) ~[org.apache.kafka.kafka_2.10-0.8.2.2.jar:na]
at kafka.producer.Producer.send(Producer.scala:77) ~[org.apache.kafka.kafka_2.10-0.8.2.2.jar:na]
at kafka.javaapi.producer.Producer.send(Producer.scala:42) ~[org.apache.kafka.kafka_2.10-0.8.2.2.jar:na]
at org.apache.twill.internal.kafka.client.SimpleKafkaPublisher$SimplePreparer.send(SimpleKafkaPublisher.java:122) ~[org.apache.twill.twill-core-0.7.0-incubating.jar:0.7.0-incubating]
... 34 common frames omitted
 2016-06-09 07:22:10,455 - DEBUG [MapReduceRunner-phase-1:c.c.c.i.a.r.b.MapReduceRuntimeService@640] - Using as output for MapReduce Job: [XBRLElement]
 2016-06-09 07:22:10,455 - DEBUG [MapReduceRunner-phase-1:c.c.c.i.a.r.b.MapReduceRuntimeService@871] - Set output key class to class java.lang.Object
 2016-06-09 07:22:10,456 - DEBUG [MapReduceRunner-phase-1:c.c.c.i.a.r.b.MapReduceRuntimeService@876] - Set output value class to class java.lang.Object
 2016-06-09 07:22:10,456 - DEBUG [MapReduceRunner-phase-1:c.c.c.i.a.r.b.MapReduceRuntimeService@914] - Set map output key class to class java.lang.Object
 2016-06-09 07:22:10,456 - DEBUG [MapReduceRunner-phase-1:c.c.c.i.a.r.b.MapReduceRuntimeService@919] - Set map output value class to class java.lang.Object
 2016-06-09 07:22:10,462 - DEBUG [MapReduceRunner-phase-1:c.c.c.i.a.r.b.MapReduceRuntimeService@733] - Creating Job jar: /opt/cdap/sdk-3.4.1/data/tmp/runner/mapreduce.default.TestPipelineOne.phase-1.df618ca7-2e12-11e6-9325-0800272d2212/job.jar
 2016-06-09 07:22:10,491 - INFO  [MapReduceRunner-phase-1:c.c.c.i.a.r.b.MapReduceRuntimeService@288] - Submitting MapReduce Job: job=phase-1,=namespaceId=default, applicationId=TestPipelineOne, program=phase-1, runid=df618ca7-2e12-11e6-9325-0800272d2212
 2016-06-09 07:22:10,491 - INFO  [MapReduceRunner-phase-1:c.c.c.i.a.r.b.i.LocalClientProtocolProvider@42] - Using framework: local
 2016-06-09 07:22:10,492 - INFO  [MapReduceRunner-phase-1:c.c.c.i.a.r.b.i.LocalClientProtocolProvider@50] - Using tracker: clocal
 2016-06-09 07:22:10,950 - WARN  [MapReduceRunner-phase-1:o.a.h.m.JobSubmitter@150] - Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
 2016-06-09 07:22:10,978 - DEBUG [MapReduceRunner-phase-1:c.c.h.p.DBUtils@202] - Removing non-null driver object from drivers list.
 2016-06-09 07:22:10,983 - DEBUG [MapReduceRunner-phase-1:c.c.h.p.DBUtils@205] - Found null classloader for default driver sun.jdbc.odbc.JdbcOdbcDriver. Ignoring since this may be using system classloader.
 2016-06-09 07:22:10,984 - DEBUG [MapReduceRunner-phase-1:c.c.h.p.DBUtils@202] - Removing non-null driver object from drivers list.
 2016-06-09 07:22:10,984 - DEBUG [MapReduceRunner-phase-1:c.c.h.p.DBUtils@202] - Removing non-null driver object from drivers list.
 2016-06-09 07:22:10,984 - DEBUG [MapReduceRunner-phase-1:c.c.h.p.DBUtils@202] - Removing non-null driver object from drivers list.
 2016-06-09 07:22:10,984 - DEBUG [MapReduceRunner-phase-1:c.c.h.p.DBUtils@202] - Removing non-null driver object from drivers list.
 2016-06-09 07:22:10,984 - DEBUG [MapReduceRunner-phase-1:c.c.h.p.DBUtils@202] - Removing non-null driver object from drivers list.
 2016-06-09 07:22:10,984 - DEBUG [MapReduceRunner-phase-1:c.c.h.p.DBUtils@211] - Removing default driver com.microsoft.sqlserver.jdbc.SQLServerDriver from registeredDrivers
 2016-06-09 07:22:10,984 - DEBUG [MapReduceRunner-phase-1:c.c.h.p.d.b.s.DataDrivenETLDBInputFormat@85] - Registered JDBC driver via shim co.cask.hydrator.plugin.JDBCDriverShim@4b340423. Actual Driver SQLServerDriver:2.
 2016-06-09 07:22:13,626 - WARN  [MapReduceRunner-phase-1:o.a.h.c.Configuration@2345] - file:/opt/cdap/sdk-3.4.1/data/tmp/runner/mapreduce.default.TestPipelineOne.phase-1.df618ca7-2e12-11e6-9325-0800272d2212/mapreduce/staging/cdap578916394/.staging/job_local578916394_0002/job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval;  Ignoring.
 2016-06-09 07:22:13,629 - WARN  [MapReduceRunner-phase-1:o.a.h.c.Configuration@2345] - file:/opt/cdap/sdk-3.4.1/data/tmp/runner/mapreduce.default.TestPipelineOne.phase-1.df618ca7-2e12-11e6-9325-0800272d2212/mapreduce/staging/cdap578916394/.staging/job_local578916394_0002/job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts;  Ignoring.
 2016-06-09 07:22:13,859 - WARN  [MapReduceRunner-phase-1:o.a.h.c.Configuration@2345] - file:/opt/cdap/sdk-3.4.1/data/tmp/runner/mapreduce.default.TestPipelineOne.phase-1.df618ca7-2e12-11e6-9325-0800272d2212/mapreduce/local/localRunner/cdap/job_local578916394_0002/job_local578916394_0002.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval;  Ignoring.
 2016-06-09 07:22:13,872 - WARN  [MapReduceRunner-phase-1:o.a.h.c.Configuration@2345] - file:/opt/cdap/sdk-3.4.1/data/tmp/runner/mapreduce.default.TestPipelineOne.phase-1.df618ca7-2e12-11e6-9325-0800272d2212/mapreduce/local/localRunner/cdap/job_local578916394_0002/job_local578916394_0002.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts;  Ignoring.

On Thursday, June 9, 2016 at 12:23:01 AM UTC+5:30, poorna wrote:

Poorna Chandra

Jun 9, 2016, 10:10:17 PM
to Sudarshan Thakur, CDAP User
Hi Sudarshan,

A couple of things to check -
  1. Is there a firewall between the box running the SDK and the HBase cluster? A firewall can cause the connections to hang.
  2. Trying to read 10 million records could cause the SDK to run out of memory. The SDK is not designed to handle large amounts of data. You could try limiting the data using TOP [1] (see the sketch after this list).
  3. We can ignore the log lines that say "Got exception publishing audit message AuditMessage". Those messages should not cause the pipeline to fail.
  4. Can you attach the complete cdap-debug.log? It should be located in the SDK_HOME/logs directory.
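
A minimal sketch of limiting the import query with SQL Server's TOP clause; the table name is a placeholder, and the $CONDITIONS marker is left in place so the source can substitute its split predicate at run time (any extra filter is combined with AND):

SELECT TOP 1000 col1, col2, col3, col4 FROM my_table WHERE $CONDITIONS AND col1 < 100
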
Thanks,
Poorna.



Poorna Chandra

Jun 10, 2016, 12:36:51 AM
to Sudarshan Thakur, cdap...@googlegroups.com
Hi Sudarshan,

I see the following exception in the logs you attached (full stack trace below) -

java.lang.ClassCastException: java.lang.Short cannot be cast to java.lang.Integer

This error is thrown when converting the input record from the DB table to the output record to be written to HBase. It looks like there is a mismatch in the input/output schema.

A couple of questions -
  1. Was the schema auto-populated using the "Get Schema" button in the DB plugin configuration?
  2. Can you check whether the data type of one of the fields in the DB table is short?
Thanks,
Poorna.

Full stack trace - 

default:TestOne:DataPipelineWorkflow [WARN]  2016-06-10 03:43:05,270 - WARN  [Thread-54:o.a.h.m.LocalJobRunnerWithFix$Job@562] - Error cleaning up job: job_local697056285_0001java.lang.Exception: java.lang.ClassCastException: java.lang.Short cannot be cast to java.lang.Integer
at org.apache.hadoop.mapred.LocalJobRunnerWithFix$Job.runTasks(LocalJobRunnerWithFix.java:465) ~[co.cask.cdap.cdap-app-fabric-3.4.1.jar:na]
at org.apache.hadoop.mapred.LocalJobRunnerWithFix$Job.run(LocalJobRunnerWithFix.java:524) ~[co.cask.cdap.cdap-app-fabric-3.4.1.jar:na]
java.lang.ClassCastException: java.lang.Short cannot be cast to java.lang.Integer
at co.cask.cdap.format.RecordPutTransformer.setField(RecordPutTransformer.java:104) ~[cdap-formats-3.4.1.jar:na]
at co.cask.cdap.format.RecordPutTransformer.toPut(RecordPutTransformer.java:83) ~[cdap-formats-3.4.1.jar:na]
at co.cask.hydrator.plugin.sink.HBaseSink.transform(HBaseSink.java:150) ~[1465530168450-0/:na]
at co.cask.hydrator.plugin.sink.HBaseSink.transform(HBaseSink.java:57) ~[1465530168450-0/:na]
at co.cask.cdap.etl.common.TrackedTransform.transform(TrackedTransform.java:59) ~[cdap-etl-core-3.4.1.jar:na]
at co.cask.cdap.etl.common.TransformExecutor.executeTransformation(TransformExecutor.java:86) ~[cdap-etl-core-3.4.1.jar:na]
at co.cask.cdap.etl.common.TransformExecutor.executeTransformation(TransformExecutor.java:90) ~[cdap-etl-core-3.4.1.jar:na]
at co.cask.cdap.etl.common.TransformExecutor.runOneIteration(TransformExecutor.java:49) ~[cdap-etl-core-3.4.1.jar:na]
at co.cask.cdap.etl.batch.mapreduce.TransformRunner.transform(TransformRunner.java:154) ~[cdap-etl-batch-3.4.1.jar:na]
at co.cask.cdap.etl.batch.mapreduce.ETLMapReduce$ETLMapper.map(ETLMapReduce.java:299) ~[cdap-etl-batch-3.4.1.jar:na]
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145) ~[org.apache.hadoop.hadoop-mapreduce-client-core-2.3.0.jar:na]
at co.cask.cdap.internal.app.runtime.batch.MapperWrapper.run(MapperWrapper.java:117) ~[co.cask.cdap.cdap-app-fabric-3.4.1.jar:na]
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764) ~[org.apache.hadoop.hadoop-mapreduce-client-core-2.3.0.jar:na]
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340) ~[org.apache.hadoop.hadoop-mapreduce-client-core-2.3.0.jar:na]
at org.apache.hadoop.mapred.LocalJobRunnerWithFix$Job$MapTaskRunnable.run(LocalJobRunnerWithFix.java:243) ~[co.cask.cdap.cdap-app-fabric-3.4.1.jar:na]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) ~[na:1.7.0_75]
at java.util.concurrent.FutureTask.run(FutureTask.java:262) ~[na:1.7.0_75]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) ~[na:1.7.0_75]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) ~[na:1.7.0_75]
at java.lang.Thread.run(Thread.java:745) ~[na:1.7.0_75]
  at org.apache.hadoop.mapred.LocalJobRunnerWithFix$Job.runTasks(LocalJobRunnerWithFix.java:465)
  at org.apache.hadoop.mapred.LocalJobRunnerWithFix$Job.run(LocalJobRunnerWithFix.java:524)



On Thu, Jun 9, 2016 at 9:01 PM, Sudarshan Thakur <sudarsha...@gmail.com> wrote:
Hi Poorna,

Please find the attached log file after changing the query to use TOP 100.
--
Thanks and Regards
Sudarshan Kumar.

Sudarshan Thakur

Jun 10, 2016, 12:38:54 AM
to Poorna Chandra, cdap...@googlegroups.com
Hi Poorna,

I suspected that from the log, but Get Schema worked very well for me.
Also, there is no short type field in the table.

Poorna Chandra

Jun 10, 2016, 10:51:08 PM
to Sudarshan Thakur, cdap...@googlegroups.com
Hi Sudarshan,

We suspect this is because of a mismatch in data types between the DB table and the value CDAP expects. Can you do a "describe table" on your DB table and send me the output?

We were not able to reproduce this issue using a MySQL DB. If you send me the table schema, we'll try reproducing it on SQL Server.
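
On SQL Server, where DESCRIBE is not available, a query against INFORMATION_SCHEMA should give the equivalent output; a minimal sketch, with the table name as a placeholder:

SELECT COLUMN_NAME, DATA_TYPE, IS_NULLABLE FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME = 'my_table'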

Thanks,
Poorna.

Sudarshan Thakur

Jun 11, 2016, 5:49:46 AM
to Poorna Chandra, cdap...@googlegroups.com
Hi Poorna,

I don't think that is the problem, but I will send you the details anyway.

Sudarshan Thakur

Jun 11, 2016, 5:51:04 AM
to Poorna Chandra, cdap...@googlegroups.com
Hi Poorna,

I don't think that is the problem, but I will send you the details anyway.
I am using SQL Server.
I even tried inserting just one column into HBase, but I could not do it; I got the error again.

Sudarshan Thakur

Jun 11, 2016, 7:08:28 AM
to Poorna Chandra, cdap...@googlegroups.com
Also, there is no short type in SQL or in the HBase database, so I suspect something is happening at the Java level in the system.

Sudarshan Thakur

Jun 13, 2016, 2:19:44 AM
to Poorna Chandra, cdap...@googlegroups.com
Hi Poorna,

The data types of the columns are:

1. col1 int
2. col2 varchar
3. col3 int
4. col4 int
5. col5 smallint
6. col6 int
7. col7 datetime
8. col8 smallint


Regards 
Sudarshan

Sudarshan Thakur

Jun 13, 2016, 2:50:58 AM
to Poorna Chandra, cdap...@googlegroups.com
Hi Poorna,

I think converting the data type from smallint to short is throwing the exception.
Internally, the CDAP system must be converting smallint to the short data type, and that conversion is not handled properly.


Regards 
Sudarshan 

Rohit Sinha

Jun 13, 2016, 10:37:35 PM
to CDAP User, poo...@cask.co, sudarsha...@gmail.com
Hello Sudarshan,

We suspect that the issue is due to the SQL Server JDBC driver converting smallint to short instead of int. This does not happen on other DBs such as MySQL.

You can try casting the smallint columns to int in your query.

Here is an example: 

SELECT col1, col2, CAST(col5 AS INT), col6, CAST(col8 AS INT) FROM MY_TABLE

Please see the following link for more information: https://msdn.microsoft.com/en-us/library/ms187928(SQL.90).aspx
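
If the output schema expects the original field names, it may also be necessary to alias the cast columns back to those names; a variant of the example above, still with placeholder names:

SELECT col1, col2, CAST(col5 AS INT) AS col5, col6, CAST(col8 AS INT) AS col8 FROM MY_TABLE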

Please let us know if you have further issues/concerns.

Thanks. 
Rohit Sinha

Sudarshan Thakur

Jun 14, 2016, 12:30:39 AM
to Rohit Sinha, CDAP User, Poorna Chandra
Hi Rohit,

After casting I am no longer getting that error, but the pipeline keeps running for a very long time on 1,000 records.

Attaching a screenshot of the ETL pipeline and the flow logs as well.


2016-06-14-041449_1366x664_scrot.png
flow.log

Sudarshan Thakur

Jun 14, 2016, 2:26:52 AM
to Rohit Sinha, CDAP User, Poorna Chandra
Hi,

I even tried with just 10 records; still the same problem.

Poorna Chandra

Jun 14, 2016, 2:51:30 AM
to Sudarshan Thakur, Rohit Sinha, CDAP User
Hi Sudarshan,

From the screenshot attached, it looks like we are able to read 1,000 records from the DB. 

The next step is to write those records to HBase. When I looked at the pipeline configuration, I don't see Zookeeper quorum string for HBase set. Can you set "Zookeeper Quorum String" and "Zookeeper Client Port" for your HBase cluster?

If the Zookeeper quorum for your cluster is host1:2181,host2:2181,host3:2181 then set the following -
Zookeeper Quorum String = host1,host2,host3
Zookeeper Client Port = 2181
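
On a typical HBase cluster these values can be read from hbase-site.xml; a sketch of what the relevant entries usually look like (hostnames are placeholders):

<property>
  <name>hbase.zookeeper.quorum</name>
  <value>host1,host2,host3</value>
</property>
<property>
  <name>hbase.zookeeper.property.clientPort</name>
  <value>2181</value>
</property>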

Let me know how it goes.

Thanks,
Poorna.

Sudarshan Thakur

Jun 14, 2016, 2:59:06 AM
to Poorna Chandra, Rohit Sinha, CDAP User
Hi Poorna,

How do I know my Zookeeper Quorum String and Zookeeper Client Port?
Will it not use defaults? In the pipeline I can see that by default it will take localhost and port 2181.

I am working on a standalone VM machine.

Nitin Motgi

Jun 14, 2016, 3:02:06 AM
to sudarsha...@gmail.com, Poorna Chandra, Rohit Sinha, CDAP User
Hi Sudarshan, 

You are using an HBase sink. This means that you are attempting to connect to an HBase instance that is running on a cluster. If that's your intent, then you need to get the hostnames and port from your administrator.

Here is a small change you can make so that the pipeline stores the data on the standalone VM: instead of using the "HBase" sink, use the "Table" sink. Provide the table name and the column that should be used as the key.

Hope this helps. 

Thanks,
Nitin

Sudarshan Thakur

Jun 14, 2016, 3:19:35 AM
to Nitin Motgi, Poorna Chandra, Rohit Sinha, CDAP User
Hi Nitin,

Thanks for the help.
I have changed the sink from HBase to Table.
The pipeline published and also completed.

But when I query my table in the preview, I can't see any records, though Storage is showing 1.6 KB.

Ali Anwar

Jun 14, 2016, 3:21:27 AM
to sudarsha...@gmail.com, Nitin Motgi, Poorna Chandra, Rohit Sinha, CDAP User
Hey Sudarshan.

Any indication of what might have gone wrong, in the recent logs?

Sudarshan Thakur

Jun 14, 2016, 3:25:30 AM
to Ali Anwar, Nitin Motgi, Poorna Chandra, Rohit Sinha, CDAP User
Hi Ali,

This is the last log I can see; it looks fine.

2016-06-14 07:12:40,583 - INFO  [MapReduceRunner-phase-1:c.c.c.e.b.m.ETLMapReduce@273] - Batch Run finished : succeeded = true
 2016-06-14 07:12:41,206 - INFO  [action-phase-1-0:c.c.c.i.w.ProgramWorkflowAction@67] - MAPREDUCE Program phase-1 workflow action completed
 2016-06-14 07:12:41,209 - INFO  [WorkflowDriver:c.c.c.i.a.r.w.WorkflowDriver@524] - Workflow execution succeeded for DataPipelineWorkflow
 2016-06-14 07:12:41,268 - INFO  [NettyHttpService STOPPING:c.c.h.NettyHttpService@275] - Stopping service on address /127.0.0.1:54949...
 2016-06-14 07:12:41,308 - INFO  [NettyHttpService STOPPING:c.c.h.NettyHttpService@289] - Done stopping service on address /127.0.0.1:54949
 2016-06-14 07:12:41,316 - INFO  [WorkflowDriver:c.c.c.i.a.r.w.WorkflowProgramController@84] - Workflow service terminated from RUNNING. Un-registering service workflow.default.ETL1000TestTable.DataPipelineWorkflow.55252ce6-31ff-11e6-97b0-0800272d2212.
 2016-06-14 07:12:41,316 - INFO  [WorkflowDriver:c.c.c.i.a.r.w.WorkflowProgramController@86] - Service workflow.default.ETL1000TestTable.DataPipelineWorkflow.55252ce6-31ff-11e6-97b0-0800272d2212 unregistered.
 2016-06-14 07:12:41,390 - DEBUG [pcontroller-program:default.ETL1000TestTable.workflow.DataPipelineWorkflow-55252ce6-31ff-11e6-97b0-0800272d2212:c.c.c.a.r.AbstractProgramRuntimeService@383] - Removing RuntimeInfo: Workflow DataPipelineWorkflow 55252ce6-31ff-11e6-97b0-0800272d2212
 2016-06-14 07:12:41,390 - DEBUG [pcontroller-program:default.ETL1000TestTable.workflow.DataPipelineWorkflow-55252ce6-31ff-11e6-97b0-0800272d2212:c.c.c.a.r.AbstractProgramRuntimeService@386] - RuntimeInfo removed: RuntimeInfo{type=Workflow, appId=ETL1000TestTable, programId=DataPipelineWorkflow}
 2016-06-14 07:12:41,513 - DEBUG [pcontroller-program:default.ETL1000TestTable.workflow.DataPipelineWorkflow-55252ce6-31ff-11e6-97b0-0800272d2212:c.c.c.i.a.s.ProgramLifecycleService@317] - Program program:default.ETL1000TestTable.workflow.DataPipelineWorkflow completed successfully.






Sudarshan Thakur

Jun 14, 2016, 3:27:05 AM
to Ali Anwar, Nitin Motgi, Poorna Chandra, Rohit Sinha, CDAP User

Hi,

I inserted some records again and the storage has increased to 1.4 MB, which means the querying has some problem. I am just running the predefined query.

Ali Anwar

Jun 14, 2016, 3:29:19 AM
to sudarsha...@gmail.com, Nitin Motgi, Poorna Chandra, Rohit Sinha, CDAP User
Hey.

According to the logs, the run was successful.
Can you paste the query you are executing here? Perhaps also try a count(*) on the table, to see how many rows it has.

Lastly, check the configuration of the pipeline, to verify that the table name defined in the sink matches the one in the query, just to make sure.

Not sure what else I can think of currently.

Sudarshan Thakur

Jun 14, 2016, 3:34:14 AM
to Ali Anwar, Nitin Motgi, Poorna Chandra, Rohit Sinha, CDAP User
Hi ,

SELECT * FROM dataset_HBASE_TABLE LIMIT 5

Sharing the metadata details.

I can't find a file at the file:/opt/cdap/sdk-3.4.1/data/explore/warehouse/dataset_hbase_table location; it's empty.



Table dataset_hbase_table
Database default
Owner cdap
Creation Time Tuesday, June 14th 2016, 7:11:53 am
Compressed false
Is Dataset true
Input Format
Last Accessed Time 0
Location file:/opt/cdap/sdk-3.4.1/data/explore/warehouse/dataset_hbase_table
Number of Buckets -1
Output Format
Parameters
cdap.name HBASE_TABLE
EXTERNAL TRUE
cdap.version 3.4.1-1463051886235
transient_lastDdlTime 1465888313
comment CDAP Dataset
storage_handler co.cask.cdap.hive.datasets.DatasetStorageHandler
Retention 0
SerDe co.cask.cdap.hive.datasets.DatasetSerDe
SerDe Parameters
serialization.format 1
explore.dataset.namespace default

Ali Anwar

Jun 14, 2016, 3:47:27 AM
to sudarsha...@gmail.com, Nitin Motgi, Poorna Chandra, Rohit Sinha, CDAP User
It's an external Hive table, which means that Hive does not manage the actual data, which is why that file (in ../warehouse/..) is not there. So that's fine.

I don't know why querying the results of the Hydrator pipeline isn't working for you:
1. The Hydrator pipeline writes to a Table sink, namely 'hbase_table'.
2. The program running the pipeline succeeds.
3. A SELECT * on the table returns 0 rows.

One possibility is that it is a UI issue in serving the results of the explore query.
Can you try it out from the CLI and see what results it gives?

/opt/cdap/sdk-3.4.1/bin/cdap-cli.sh "execute 'SELECT * FROM dataset_HBASE_TABLE LIMIT 5'"

/opt/cdap/sdk-3.4.1/bin/cdap-cli.sh "execute 'SELECT count(*) FROM dataset_hbase_table'"

Regards,

Ali Anwar

Sudarshan Thakur

Jun 14, 2016, 3:53:29 AM
to Ali Anwar, Nitin Motgi, Poorna Chandra, Rohit Sinha, CDAP User
Hey Ali,
I am getting

Invalid command '/opt/cdap/sdk-3.4.1/bin/cdap-cli.sh "execute 'SELECT count(*) FROM dataset_hbase_table'"'. Enter 'help' for a list of commands
 

Not sure if the syntax is correct?

Ali Anwar

Jun 14, 2016, 3:57:18 AM
to Sudarshan Thakur, Nitin Motgi, Poorna Chandra, Rohit Sinha, CDAP User
Hey Sudarshan.
Did you start the CLI and then paste that entire line as input to the CDAP CLI?

Sudarshan Thakur

Jun 14, 2016, 3:59:07 AM
to Ali Anwar, Nitin Motgi, Poorna Chandra, Rohit Sinha, CDAP User
Hey Ali,
Yes.
This is how I did it:

Then the command, without any quotes.

Sudarshan Thakur

Jun 14, 2016, 4:00:51 AM
to Ali Anwar, Nitin Motgi, Poorna Chandra, Rohit Sinha, CDAP User
Hey Ali,

Here is something I got:

cdap (http://cdap-standalone-vm-3:10000/namespace:default)> execute 'SELECT * FROM dataset_HBASE_TABLE LIMIT 5'
Error: co.cask.cdap.explore.service.ExploreException: Cannot get next results. Reason: Response code: 500, message: 'Internal Server Error', body: 'No value for UserID exists.'

Sudarshan Thakur

Jun 14, 2016, 4:02:50 AM
to Ali Anwar, Nitin Motgi, Poorna Chandra, Rohit Sinha, CDAP User
And also:

cdap (http://cdap-standalone-vm-3:10000/namespace:default)> execute 'SELECT count(*) FROM dataset_hbase_table'
Error: Query 'SELECT count(*) FROM dataset_hbase_table' execution did not finish successfully. Got final state - ERROR

Ali Anwar

Jun 14, 2016, 4:07:11 AM
to Sudarshan Thakur, Nitin Motgi, Poorna Chandra, Rohit Sinha, CDAP User
Oh, OK. The commands I pasted earlier were one-line commands that start the CLI and execute the query in the same bash command, which is why they didn't work when used as input directly to the CLI.
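
From inside the CLI prompt, only the execute command itself is needed; for example, reusing the table name from your earlier message:

cdap (http://cdap-standalone-vm-3:10000/namespace:default)> execute 'SELECT COUNT(*) FROM dataset_hbase_table'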

Just based on that message, it seems that the schema defines 'UserId' as not nullable, but some data in the CDAP Table (and in the database table) had null values for that field?
The second error may simply be because I used lowercase for the count(*) query. I wasn't sure if it mattered.

Is it possible that some of the data you imported had null for UserId? If so, the schema would have to be updated to reflect that, in order for the explore query to work properly.

Regards,

Ali Anwar


Sudarshan Thakur

Jun 14, 2016, 4:33:52 AM
to Ali Anwar, Nitin Motgi, Poorna Chandra, Rohit Sinha, CDAP User
Hi Ali,
Thanks a lot.

Yes, that was the issue.
I removed the UserId column from the mapping and then it worked.
But if I make that field nullable, it should also work, right?

Ali Anwar

Jun 14, 2016, 4:35:48 AM
to Sudarshan Thakur, Nitin Motgi, Poorna Chandra, Rohit Sinha, CDAP User
Hey Sudarshan.

Yes, it seems like when you were making the Hydrator pipeline, if you had made the field nullable, it should also work.
Explore was failing because there is a non-null check for all fields that aren't explicitly marked nullable (via the table's schema). The table's schema is defined via the schema in the Hydrator pipeline.
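
For reference, a nullable field in the Hydrator output schema is expressed as a union with null; a minimal sketch of what the UserId field definition would look like if it is an int (the exact schema JSON depends on your pipeline):

{"name": "UserId", "type": ["int", "null"]}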

Sudarshan Thakur

Jun 14, 2016, 6:18:48 AM
to Ali Anwar, Nitin Motgi, Poorna Chandra, Rohit Sinha, CDAP User
Hi,

I was trying to ingest 30 million records, but after 8 million it looks like it hung.
Is it that my VM or the SDK is not able to handle that many records?

Nitin Motgi

Jun 14, 2016, 10:50:22 AM
to sudarsha...@gmail.com, Ali Anwar, Poorna Chandra, Rohit Sinha, CDAP User
Hi Sudarshan,

It is not advisable to run production loads or large volumes of data through the standalone SDK.

Thanks,
Nitin 


--
"Humility isn't thinking less of yourself, it's thinking of yourself less"
