Could not Setup CDAP in Distributed Mode


Avinash Dongre

Aug 17, 2015, 6:29:17 AM
to CDAP User, rob...@ampool.io
Hi All,
I am trying to set up CDAP in Distributed Mode, but so far no luck.
I have followed the process but could not get this one working.

Attached are my cdap-site.xml and logs.



cdap-site.xml
ui-cdap-avinash.log
router-cdap-avinash.log
master-cdap-avinash.log
kafka-server-cdap-avinash.log
auth-server-cdap-avinash.log

Avinash Dongre

Aug 17, 2015, 7:00:04 AM
to CDAP User, rob...@ampool.io
After choosing not to start the CDAP Kafka service, I no longer see any errors in ZooKeeper about denied client requests.

But I still cannot see anything in the UI.

503 No endpoint strategy found for request : /v3/namespaces
503 No endpoint strategy found for request : /v3/version




Thanks
Avinash

Sreevatsan Raman

Aug 17, 2015, 1:09:48 PM
to Avinash Dongre, CDAP User, rob...@ampool.io
Hi Avinash,

If you look at the Kafka logs, there is a directory permission issue (Caused by: java.io.IOException: Permission denied) with the Kafka directory (kafka.log.dir). The CDAP user should be able to write to these directories. Can you check the permissions?
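A quick way to check is a write probe as the CDAP user. This is a sketch: the path below is a placeholder, so substitute the kafka.log.dir value from your cdap-site.xml.

```shell
# Sketch: verify the CDAP user can write to the Kafka log dir.
# KAFKA_LOG_DIR is a placeholder -- use the kafka.log.dir value
# from your cdap-site.xml.
KAFKA_LOG_DIR="${KAFKA_LOG_DIR:-/tmp/cdap-kafka-logs}"
mkdir -p "$KAFKA_LOG_DIR"
if touch "$KAFKA_LOG_DIR/.write-test" 2>/dev/null; then
  rm "$KAFKA_LOG_DIR/.write-test"
  echo "writable"
else
  # typical fix (run as root): chown -R cdap:cdap "$KAFKA_LOG_DIR"
  echo "NOT writable"
fi
```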

I have followed the process but could not get this one working.

We run some of the CDAP system services (log saver, metrics processor, etc.) in YARN, and it can take several minutes for them to start up, depending on how long it takes to provision the containers. If you are not seeing any additional details in the logs after 5-10 minutes, I would recommend taking a look at the classpath of the cdap-master service to see whether the HBase and Hadoop configs and libraries are included.

You can run: <Master_Install_Dir>/bin/svc-master classpath

Here is a run from our dev setup; you can see we have the Hadoop/HBase libs and configs:

Thanks,
Sree

/opt/cdap/master/bin/svc-master classpath

/opt/cdap/hbase-compat-1.0-cdh/lib/*:/opt/cdap/master/lib/*:/etc/hbase/conf:/usr/lib/jvm/java-7-oracle-amd64/lib/tools.jar:/usr/lib/hbase:/usr/lib/hbase/lib/activation-1.1.jar:/usr/lib/hbase/lib/apacheds-i18n-2.0.0-M15.jar:/usr/lib/hbase/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/usr/lib/hbase/lib/api-asn1-api-1.0.0-M20.jar:/usr/lib/hbase/lib/api-util-1.0.0-M20.jar:/usr/lib/hbase/lib/asm-3.2.jar:/usr/lib/hbase/lib/avro.jar:/usr/lib/hbase/lib/commons-beanutils-1.7.0.jar:/usr/lib/hbase/lib/commons-beanutils-core-1.8.0.jar:/usr/lib/hbase/lib/commons-cli-1.2.jar:/usr/lib/hbase/lib/commons-codec-1.9.jar:/usr/lib/hbase/lib/commons-collections-3.2.1.jar:/usr/lib/hbase/lib/commons-compress-1.4.1.jar:/usr/lib/hbase/lib/commons-configuration-1.6.jar:/usr/lib/hbase/lib/commons-daemon-1.0.3.jar:/usr/lib/hbase/lib/commons-digester-1.8.jar:/usr/lib/hbase/lib/commons-el-1.0.jar:/usr/lib/hbase/lib/commons-httpclient-3.1.jar:/usr/lib/hbase/lib/commons-io-2.4.jar:/usr/lib/hbase/lib/commons-lang-2.6.jar:/usr/lib/hbase/lib/commons-logging-1.2.jar:/usr/lib/hbase/lib/commons-math-2.1.jar:/usr/lib/hbase/lib/commons-math3-3.1.1.jar:/usr/lib/hbase/lib/commons-net-3.1.jar:/usr/lib/hbase/lib/core-3.1.1.jar:/usr/lib/hbase/lib/curator-client-2.7.1.jar:/usr/lib/hbase/lib/curator-framework-2.7.1.jar:/usr/lib/hbase/lib/curator-recipes-2.7.1.jar:/usr/lib/hbase/lib/disruptor-3.3.0.jar:/usr/lib/hbase/lib/findbugs-annotations-1.3.9-1.jar:/usr/lib/hbase/lib/gson-2.2.4.jar:/usr/lib/hbase/lib/guava-12.0.1.jar:/usr/lib/hbase/lib/hamcrest-core-1.3.jar:/usr/lib/hbase/lib/hbase-annotations-1.0.0-cdh5.4.4.jar:/usr/lib/hbase/lib/hbase-annotations-1.0.0-cdh5.4.4-tests.jar:/usr/lib/hbase/lib/hbase-checkstyle-1.0.0-cdh5.4.4.jar:/usr/lib/hbase/lib/hbase-client-1.0.0-cdh5.4.4.jar:/usr/lib/hbase/lib/hbase-common-1.0.0-cdh5.4.4.jar:/usr/lib/hbase/lib/hbase-common-1.0.0-cdh5.4.4-tests.jar:/usr/lib/hbase/lib/hbase-examples-1.0.0-cdh5.4.4.jar:/usr/lib/hbase/lib/hbase-hadoop2-compat-1.0.0-cdh5.4.4.jar:/usr/lib/hbase/lib/
hbase-hadoop2-compat-1.0.0-cdh5.4.4-tests.jar:/usr/lib/hbase/lib/hbase-hadoop-compat-1.0.0-cdh5.4.4.jar:/usr/lib/hbase/lib/hbase-hadoop-compat-1.0.0-cdh5.4.4-tests.jar:/usr/lib/hbase/lib/hbase-it-1.0.0-cdh5.4.4.jar:/usr/lib/hbase/lib/hbase-it-1.0.0-cdh5.4.4-tests.jar:/usr/lib/hbase/lib/hbase-prefix-tree-1.0.0-cdh5.4.4.jar:/usr/lib/hbase/lib/hbase-protocol-1.0.0-cdh5.4.4.jar:/usr/lib/hbase/lib/hbase-rest-1.0.0-cdh5.4.4.jar:/usr/lib/hbase/lib/hbase-server-1.0.0-cdh5.4.4.jar:/usr/lib/hbase/lib/hbase-server-1.0.0-cdh5.4.4-tests.jar:/usr/lib/hbase/lib/hbase-shell-1.0.0-cdh5.4.4.jar:/usr/lib/hbase/lib/hbase-testing-util-1.0.0-cdh5.4.4.jar:/usr/lib/hbase/lib/hbase-thrift-1.0.0-cdh5.4.4.jar:/usr/lib/hbase/lib/high-scale-lib-1.1.1.jar:/usr/lib/hbase/lib/hsqldb-1.8.0.10.jar:/usr/lib/hbase/lib/htrace-core-3.0.4.jar:/usr/lib/hbase/lib/htrace-core-3.1.0-incubating.jar:/usr/lib/hbase/lib/htrace-core.jar:/usr/lib/hbase/lib/httpclient-4.2.5.jar:/usr/lib/hbase/lib/httpcore-4.2.5.jar:/usr/lib/hbase/lib/jackson-core-asl-1.8.8.jar:/usr/lib/hbase/lib/jackson-jaxrs-1.8.8.jar:/usr/lib/hbase/lib/jackson-mapper-asl-1.8.8.jar:/usr/lib/hbase/lib/jackson-xc-1.8.8.jar:/usr/lib/hbase/lib/jamon-runtime-2.3.1.jar:/usr/lib/hbase/lib/jasper-compiler-5.5.23.jar:/usr/lib/hbase/lib/jasper-runtime-5.5.23.jar:/usr/lib/hbase/lib/java-xmlbuilder-0.4.jar:/usr/lib/hbase/lib/jaxb-api-2.1.jar:/usr/lib/hbase/lib/jaxb-impl-2.2.3-1.jar:/usr/lib/hbase/lib/jcodings-1.0.8.jar:/usr/lib/hbase/lib/jersey-client-1.9.jar:/usr/lib/hbase/lib/jersey-core-1.9.jar:/usr/lib/hbase/lib/jersey-json-1.9.jar:/usr/lib/hbase/lib/jersey-server-1.9.jar:/usr/lib/hbase/lib/jets3t-0.9.0.jar:/usr/lib/hbase/lib/jettison-1.3.3.jar:/usr/lib/hbase/lib/jetty-6.1.26.cloudera.4.jar:/usr/lib/hbase/lib/jetty-sslengine-6.1.26.cloudera.4.jar:/usr/lib/hbase/lib/jetty-util-6.1.26.cloudera.4.jar:/usr/lib/hbase/lib/joni-2.1.2.jar:/usr/lib/hbase/lib/jruby-cloudera-1.0.0.jar:/usr/lib/hbase/lib/jsch-0.1.42.jar:/usr/lib/hbase/lib/jsp-2.1-6.1.14.jar:/usr/lib/
hbase/lib/jsp-api-2.1-6.1.14.jar:/usr/lib/hbase/lib/jsp-api-2.1.jar:/usr/lib/hbase/lib/jsr305-1.3.9.jar:/usr/lib/hbase/lib/junit-4.11.jar:/usr/lib/hbase/lib/leveldbjni-all-1.8.jar:/usr/lib/hbase/lib/libthrift-0.9.0.jar:/usr/lib/hbase/lib/log4j-1.2.17.jar:/usr/lib/hbase/lib/metrics-core-2.2.0.jar:/usr/lib/hbase/lib/netty-3.2.4.Final.jar:/usr/lib/hbase/lib/netty-3.6.6.Final.jar:/usr/lib/hbase/lib/paranamer-2.3.jar:/usr/lib/hbase/lib/protobuf-java-2.5.0.jar:/usr/lib/hbase/lib/servlet-api-2.5-6.1.14.jar:/usr/lib/hbase/lib/servlet-api-2.5.jar:/usr/lib/hbase/lib/slf4j-api-1.7.5.jar:/usr/lib/hbase/lib/slf4j-log4j12.jar:/usr/lib/hbase/lib/snappy-java-1.0.4.1.jar:/usr/lib/hbase/lib/xmlenc-0.52.jar:/usr/lib/hbase/lib/xz-1.0.jar:/usr/lib/hbase/lib/zookeeper.jar:/etc/hadoop/conf:/usr/lib/hadoop/lib/*:/usr/lib/hadoop/.//*:/usr/lib/hadoop-hdfs/./:/usr/lib/hadoop-hdfs/lib/*:/usr/lib/hadoop-hdfs/.//*:/usr/lib/hadoop-yarn/lib/*:/usr/lib/hadoop-yarn/.//*:/usr/lib/hadoop-mapreduce/lib/*:/usr/lib/hadoop-mapreduce/.//*:/etc/tez/conf.dev:/usr/lib/tez/*:/usr/lib/tez/lib/*::/etc/hadoop/conf:/usr/lib/hadoop/*:/usr/lib/hadoop/lib/*:/usr/lib/zookeeper/*:/usr/lib/zookeeper/lib/*::/etc/cdap/conf/:/opt/cdap/master/conf/:/etc/hbase/conf/
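A compact way to scan a dump like the one above is to split it on ':' and look for the conf directories. This is a sketch using a shortened sample string; in practice, pipe the output of the svc-master classpath command into the same filter.

```shell
# Sample classpath string (shortened); on a real node, replace the echo with:
#   <Master_Install_Dir>/bin/svc-master classpath
CP="/etc/hbase/conf:/usr/lib/hbase/lib/guava-12.0.1.jar:/etc/hadoop/conf:/usr/lib/hadoop/lib/*"
# Split on ':' and keep only the HBase/Hadoop conf dirs
echo "$CP" | tr ':' '\n' | grep -E '^/etc/(hbase|hadoop)/conf$'
```

If neither conf dir shows up, the master cannot find the cluster configs.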




--
You received this message because you are subscribed to the Google Groups "CDAP User" group.
To unsubscribe from this group and stop receiving emails from it, send an email to cdap-user+...@googlegroups.com.
To post to this group, send email to cdap...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/cdap-user/813416fd-e771-47a7-a90a-36e52dc4f936%40googlegroups.com.

For more options, visit https://groups.google.com/d/optout.

Avinash Dongre

Aug 17, 2015, 2:06:04 PM
to Sreevatsan Raman, CDAP User, robert geiger
Thanks, Sree, for helping me.

I have fixed the directory permission issue and the Kafka service is up and running.
But I still cannot log in to the UI; it keeps saying the session timed out.

I see no activity in the master logs, and I am seeing errors in router*.log:
co.cask.cdap.common.HandlerException: No endpoint strategy found for request : 

Attaching the logs for your reference, and also the classpath output.

What machine configuration do you suggest for a CDAP environment?

Thanks
Avinash


ui-cdap-avinash.log
router-cdap-avinash.log
master-cdap-avinash.log
kafka-server-cdap-avinash.log
auth-server-cdap-avinash.log
cdap_master_classpath.out

Shankar Selvam

Aug 17, 2015, 3:12:44 PM
to Avinash Dongre, Sreevatsan Raman, CDAP User, robert geiger
Hi Avinash,

Can you set the log level for CDAP to TRACE, so we get more information in the master logs about why master did not start up?

You can change the log level to TRACE in the /etc/cdap/conf/logback.xml file as below:
<logger name="co.cask.cdap" level="TRACE"/> 
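For context, that line sits inside the file's root element; a minimal sketch of the surrounding structure (keep your existing appenders and root logger as they are):

```xml
<configuration>
  <!-- raise CDAP packages to TRACE; your existing appenders and
       <root> logger stay unchanged -->
  <logger name="co.cask.cdap" level="TRACE"/>
</configuration>
```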

Please make this change and restart master, then provide the master logs; that will help in debugging the issue.

Thanks
Shankar

Avinash Dongre

Aug 18, 2015, 1:48:05 AM
to Shankar Selvam, Sreevatsan Raman, CDAP User, robert geiger, Nilk...@ampool.io
Still no luck.
Attached are the logs.

How do I make sure I have enough YARN containers for the various services that master starts?
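One way to check available capacity is standard Hadoop tooling: `yarn node -list -all` shows live NodeManagers, and the ResourceManager REST endpoint /ws/v1/cluster/metrics reports free memory and vcores. The sketch below checks a canned sample response (the response shape is real, but the numbers and the 4096 MB / 4 vcore requirement are made up; use your configured container sizes):

```shell
# On a live cluster:  curl -s http://<rm-host>:8088/ws/v1/cluster/metrics
# Here, a canned sample response is checked for headroom instead.
SAMPLE='{"clusterMetrics":{"availableMB":8192,"availableVirtualCores":6}}'
echo "$SAMPLE" | python3 -c '
import json, sys
m = json.load(sys.stdin)["clusterMetrics"]
# Placeholder requirement: rough total for the CDAP system-service containers
ok = m["availableMB"] >= 4096 and m["availableVirtualCores"] >= 4
print("capacity ok" if ok else "capacity LOW")
'
```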

Thanks
Avinash
master-cdap-avinash.log

Avinash Dongre

Aug 18, 2015, 10:21:21 AM
to Shankar Selvam, Sreevatsan Raman, CDAP User, robert geiger, Nilk...@ampool.io
I rebuilt my setup, and now I am getting the following error in master*.log.
The log is attached.

Thanks
Avinash


2015-08-18 19:45:31,502 - INFO  [main:c.c.c.d.u.h.HBaseTableUtil@159] - Creating table 'TableId{namespace=system, tableName=configuration}'
Exception in thread "main" java.lang.RuntimeException: org.apache.hadoop.hbase.DoNotRetryIOException: org.apache.hadoop.hbase.DoNotRetryIOException: Compression algorithm 'snappy' previously failed test. Set hbase.table.sanity.checks to false at conf or table descriptor if you want to bypass sanity checks
        at org.apache.hadoop.hbase.master.HMaster.warnOrThrowExceptionForFailure(HMaster.java:1978)
        at org.apache.hadoop.hbase.master.HMaster.sanityCheckTableDescriptor(HMaster.java:1917)
        at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:1849)
        at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:2025)
        at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:42280)
        at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2107)
        at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
        at org.apache.hadoop.hbase.ipc.FifoRpcScheduler$1.run(FifoRpcScheduler.java:74)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
        at java.util.concurrent.FutureTask.run(FutureTask.java:262)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: Compression algorithm 'snappy' previously failed test.
        at org.apache.hadoop.hbase.util.CompressionTest.testCompression(CompressionTest.java:85)
        at org.apache.hadoop.hbase.master.HMaster.checkCompression(HMaster.java:1995)
        at org.apache.hadoop.hbase.master.HMaster.checkCompression(HMaster.java:1988)
        at org.apache.hadoop.hbase.master.HMaster.sanityCheckTableDescriptor(HMaster.java:1915)
        ... 11 more

        at com.google.common.base.Throwables.propagate(Throwables.java:160)
        at co.cask.cdap.data.runtime.main.MasterServiceMain.updateConfigurationTable(MasterServiceMain.java:468)
        at co.cask.cdap.data.runtime.main.MasterServiceMain.start(MasterServiceMain.java:181)
        at co.cask.cdap.common.runtime.DaemonMain.doMain(DaemonMain.java:58)
        at co.cask.cdap.data.runtime.main.MasterServiceMain.main(MasterServiceMain.java:144)

master-cdap-adongre.log

Avinash Dongre

Aug 18, 2015, 1:56:53 PM
to Shankar Selvam, Sreevatsan Raman, CDAP User, robert geiger, Nilk...@ampool.io
Please ignore my earlier message; I guess that was a problem with my HBase setup.
With TRACE enabled, I am now getting the following error.
Full log is also attached.

/cdap is owned by the yarn user:
adongre@adongre /work/HadoopEcoSystem/hadoop-2.7.1 $ bin/hadoop fs -ls /
Found 2 items
drwxr-xr-x   - yarn    supergroup          0 2015-08-18 18:44 /cdap
drwxr-xr-x   - adongre supergroup          0 2015-08-18 23:17 /hbase



Caused by: java.io.IOException: Fails to create directory: file:/cdap/cdap/lib
        at co.cask.cdap.data2.util.hbase.HBaseTableUtil.createCoProcessorJar(HBaseTableUtil.java:314)
        at co.cask.cdap.data2.dataset2.lib.table.hbase.HBaseTableAdmin.createCoprocessorJarInternal(HBaseTableAdmin.java:208)
        at co.cask.cdap.data2.dataset2.lib.table.hbase.HBaseTableAdmin.createCoprocessorJar(HBaseTableAdmin.java:179)
        at co.cask.cdap.data2.dataset2.lib.table.hbase.HBaseTableAdmin.create(HBaseTableAdmin.java:110)
        at co.cask.cdap.api.dataset.lib.CompositeDatasetAdmin.create(CompositeDatasetAdmin.java:64)
        at co.cask.cdap.data2.dataset2.InMemoryDatasetFramework.addInstance(InMemoryDatasetFramework.java:215)
        at co.cask.cdap.data2.datafabric.dataset.DatasetsUtil.createIfNotExists(DatasetsUtil.java:67)
        at co.cask.cdap.data2.datafabric.dataset.DatasetsUtil.getOrCreateDataset(DatasetsUtil.java:54)
        at co.cask.cdap.data2.datafabric.dataset.DatasetMetaTableUtil.getInstanceMetaTable(DatasetMetaTableUtil.java:59)
        at co.cask.cdap.data2.datafabric.dataset.service.mds.MDSDatasetsRegistry.createContext(MDSDatasetsRegistry.java:58)
        at co.cask.cdap.data2.datafabric.dataset.service.mds.MDSDatasetsRegistry.createContext(MDSDatasetsRegistry.java:35)
        at co.cask.cdap.data2.dataset2.tx.TransactionalDatasetRegistry.execute(TransactionalDatasetRegistry.java:52)
        at co.cask.cdap.data2.datafabric.dataset.type.DatasetTypeManager.deleteSystemModules(DatasetTypeManager.java:397)
        ... 3 more

master-cdap-adongre.log

Sreevatsan Raman

Aug 18, 2015, 2:07:49 PM
to Avinash Dongre, Shankar Selvam, CDAP User, robert geiger, Nilk...@ampool.io
The coprocessor jar should be created in HDFS. Based on the logs, it looks like it is attempting to write to the local file system: file:/cdap/cdap/lib

I suspect the HBase/Hadoop configs are not in the classpath for the cdap-master service. As suggested earlier, make sure they are in the classpath. You can verify the cdap-master service's classpath by running:

/opt/cdap/master/bin/svc-master classpath

Thanks,
Sree

Avinash Dongre

Aug 18, 2015, 3:05:29 PM
to Sreevatsan Raman, Shankar Selvam, CDAP User, robert geiger, Nilk...@ampool.io
Attached is the output of that command. Could you please help me understand what is wrong with my setup?


master_classpath.txt

Nitin Motgi

Aug 18, 2015, 3:07:30 PM
to Avinash Dongre, Sreevatsan Raman, Shankar Selvam, CDAP User, robert geiger, Nilk...@ampool.io
Hi Avinash, 

Could you join the CDAP IRC channel? We can help you faster there. The room is #cdap on irc.freenode.net.

Thanks,
Nitin






--
"Humility isn't thinking less of yourself, it's thinking of yourself less"

Derek Wood

Aug 19, 2015, 12:57:12 AM
to Nitin Motgi, Avinash Dongre, Sreevatsan Raman, Shankar Selvam, CDAP User, robert geiger, Nilk...@ampool.io
Hi Avinash,
As we discussed on IRC, we think a bad default Hadoop config must be getting picked up in your classpath, causing it to try to write to the local filesystem instead of HDFS. I also see evidence of the default yarn-site.xml configs, which will not work if this is a multi-node cluster (yarn.resourcemanager.hostname=0.0.0.0). Can you try adding your intended Hadoop conf dir to the beginning of your classpath? You can do this by adding the following line to /opt/cdap/master/conf/master-env.sh:

CLASSPATH="/work/HadoopEcoSystem/hadoop-2.7.1/etc/hadoop:${CLASSPATH}"

Running "/opt/cdap/master/bin/svc-master classpath" should then list it as one of the first few entries. After that, restart the cdap-master service.
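The prepend in master-env.sh has ordinary shell semantics; a quick demo of what it does (the Hadoop conf path is the one from this thread, and the starting CLASSPATH value is a stand-in):

```shell
# Stand-in for whatever CLASSPATH master-env.sh already has
CLASSPATH="/opt/cdap/master/lib/*:/etc/hbase/conf"
# The line suggested above: prepend the intended Hadoop conf dir
CLASSPATH="/work/HadoopEcoSystem/hadoop-2.7.1/etc/hadoop:${CLASSPATH}"
# The conf dir is now the first classpath entry
echo "$CLASSPATH" | tr ':' '\n' | head -n 1
```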

If this doesn't shed any light on the issue, then we will need some more info to better understand your setup:
Which hadoop distribution and version are you using?
Which CDAP version?
Which JDK version?
Brief cluster topology: where is CDAP installed on the cluster?

Thanks,
-Derek


Avinash Dongre

Aug 19, 2015, 6:52:06 AM
to Derek Wood, Nitin Motgi, Sreevatsan Raman, Shankar Selvam, CDAP User, robert geiger, Nilk...@ampool.io
Hi Derek,

After adding the CLASSPATH entry, everything is working as expected.
Thanks for your help.

Thanks
Avinash