Unable to run Lingual JDBC job with the Lingual jars, but it runs when Lingual is added as a project dependency


santlal gupta

Jun 15, 2016, 8:42:01 AM
to Lingual User
Hi,

I am doing a POC in which I want to run a Lingual job without installing Lingual, by providing the required jars instead. I am sharing the source code I have written; I am facing the issues below when running the Lingual job. I have also attached code which creates the schema, catalog, and stereotype, and after creating all the required entities it executes a simple SQL query using Lingual JDBC.
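
For context, the query execution is roughly along these lines (a rough sketch, not the attached code itself; the driver class name and the "jdbc:lingual:<platform>" URL format are taken from the Lingual JDBC documentation, and the schema/table names are placeholders):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class LingualJdbcSketch
{
    public static void main(String[] args) throws Exception
    {
        // Register the Lingual JDBC driver (class name per the Lingual docs).
        Class.forName("cascading.lingual.jdbc.Driver");

        // The platform name in the URL is assumed; it should match the platform jar on the classpath.
        String url = "jdbc:lingual:hadoop2-mr1";

        try (Connection connection = DriverManager.getConnection(url);
             Statement statement = connection.createStatement();
             // Schema and table names are placeholders for whatever the catalog defines.
             ResultSet resultSet = statement.executeQuery("SELECT * FROM \"example\".\"stringdate\""))
        {
            while (resultSet.next())
                System.out.println(resultSet.getString(1));
        }
    }
}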

Below are the four cases where I am facing issues:

Case 1:

When I run the attached code with the Lingual jars (lingual-core-1.2.1.jar, lingual-platform-1.2.1.jar and lingual-hadoop2-mr1-1.2.1.jar) on the classpath, I get the exception below:
                
        java.lang.NoSuchMethodError: cascading.tap.hadoop.Hfs.getTempPath(Lorg/apache/hadoop/mapred/JobConf;)Lorg/apache/hadoop/fs/Path;
at cascading.lingual.platform.hadoop2.Hadoop2MR1PlatformBroker.getTempPath(Hadoop2MR1PlatformBroker.java:404)
at cascading.lingual.platform.PlatformBroker.getRootResultPath(PlatformBroker.java:477)
at cascading.lingual.platform.PlatformBroker.getResultPath(PlatformBroker.java:469)
at cascading.lingual.optiq.enumerable.CascadingFlowRunnerEnumerable.createResultResource(CascadingFlowRunnerEnumerable.java:277)
at cascading.lingual.optiq.enumerable.CascadingFlowRunnerEnumerable.createEnumerator(CascadingFlowRunnerEnumerable.java:172)
at cascading.lingual.optiq.enumerable.CascadingFlowRunnerEnumerable.enumerator(CascadingFlowRunnerEnumerable.java:124)
at net.hydromatic.optiq.jdbc.OptiqPrepare$PrepareResult.enumerator(OptiqPrepare.java:116)
at net.hydromatic.optiq.jdbc.OptiqStatement$1.apply(OptiqStatement.java:376)
at net.hydromatic.optiq.jdbc.OptiqStatement$1.apply(OptiqStatement.java:374)
at net.hydromatic.optiq.jdbc.OptiqResultSet.execute(OptiqResultSet.java:155)
at net.hydromatic.optiq.jdbc.OptiqStatement.executeQueryInternal(OptiqStatement.java:364)
at net.hydromatic.optiq.jdbc.OptiqStatement.executeQuery(OptiqStatement.java:79)
at cascading.lingual.jdbc.LingualStatement.executeQuery(LingualStatement.java:187)
at hydrograph.debug.server.lingual.LingualMain.runJdbcQuery(LingualMain.java:53)
at hydrograph.debug.server.lingual.LingualMain.main(LingualMain.java:34)

Jars used:
        cascading-core  : 3.0.3
        cascading-hadoop : 3.0.3
        cascading-hadoop2-mr1 : 3.0.3
        cascading-local  : 3.0.3
        hadoop-common : 2.6.0 
        hadoop-hdfs  : 2.6.0
        hadoop-mapreduce-client-core : 3.0.3
        lingual-core  : 1.2.1
        lingual-platform : 1.2.1
        lingual-hadoop2-mr1 : 1.2.1



Case 2:

On GitHub I found that Lingual 1.2.1 is compatible with Cascading 2.7.0 and Hadoop 2.4.1, so I also ran the above code with these versions, but I got the exceptions below:

        java.lang.UnsupportedOperationException:  setXIncludeAware is not supported on this JAXP implementation or earlier: class org.apache.xerces.jaxp.DocumentBuilderFactoryImpl
at javax.xml.parsers.DocumentBuilderFactory.setXIncludeAware(DocumentBuilderFactory.java:614)
at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2216)

   java.io.IOException: Cannot initialize Cluster. Please check your configuration for mapreduce.framework.name and the correspond server addresses.
at org.apache.hadoop.mapreduce.Cluster.initialize(Cluster.java:120)
at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:82)
at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:75)
at org.apache.hadoop.mapred.JobClient.init(JobClient.java:470)
at org.apache.hadoop.mapred.JobClient.<init>(JobClient.java:449)
at cascading.flow.hadoop.planner.HadoopFlowStepJob.internalNonBlockingStart(HadoopFlowStepJob.java:105)
at cascading.flow.planner.FlowStepJob.blockOnJob(FlowStepJob.java:265)
at cascading.flow.planner.FlowStepJob.start(FlowStepJob.java:184)
at cascading.flow.planner.FlowStepJob.call(FlowStepJob.java:146)
at cascading.flow.planner.FlowStepJob.call(FlowStepJob.java:48)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)

Jars used:
        cascading-core  : 2.7.0
        cascading-hadoop : 2.7.0
        cascading-hadoop2-mr1 : 2.7.0
        cascading-local  : 2.7.0
        hadoop-common : 2.4.1 
        hadoop-hdfs  : 2.4.1
        hadoop-mapreduce-client-core : 2.4.1
        lingual-core  : 1.2.1
        lingual-platform : 1.2.1
        lingual-hadoop2-mr1 : 1.2.1


Case 3:

I downloaded the Lingual wip-2.0 source code from GitHub, built it, and manually created jars for lingual-core, lingual-client, lingual-hadoop2-mr1, and lingual-platform (lingual_client_wip2.0.jar, lingual_core_wip2.0.jar, lingual_hadoop2-mr1_wip2.0.jar, and lingual_platform_wip2.0.jar). When I ran the same code, I got the exception below:

        java.io.IOException: Cannot initialize Cluster. Please check your configuration for mapreduce.framework.name and the correspond server addresses.
at org.apache.hadoop.mapreduce.Cluster.initialize(Cluster.java:120)
at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:82)
at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:75)
at org.apache.hadoop.mapred.JobClient.init(JobClient.java:470)
at org.apache.hadoop.mapred.JobClient.<init>(JobClient.java:449)
at cascading.flow.hadoop.planner.HadoopFlowStepJob.internalNonBlockingStart(HadoopFlowStepJob.java:105)
at cascading.flow.planner.FlowStepJob.blockOnJob(FlowStepJob.java:265)
at cascading.flow.planner.FlowStepJob.start(FlowStepJob.java:184)
at cascading.flow.planner.FlowStepJob.call(FlowStepJob.java:146)
at cascading.flow.planner.FlowStepJob.call(FlowStepJob.java:48)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
 
Jars used:
        cascading-core  : 3.0.3
        cascading-hadoop : 3.0.3
        cascading-hadoop2-mr1 : 3.0.3
        cascading-local  : 3.0.3
        hadoop-common : 2.6.0 
        hadoop-hdfs  : 2.6.0
        hadoop-mapreduce-client-core : 3.0.3
        lingual-core  : wip-2.0
        lingual-client : wip-2.0
        lingual-platform : wip-2.0
        lingual-hadoop2-mr1 : wip-2.0



Case 4:

I ran the above code with Lingual wip-2.0 (lingual-core, lingual-hadoop2-mr1, and lingual-platform) added as a project dependency, and the attached code ran successfully.

Jars used:
        cascading-core  : 3.0.3
        cascading-hadoop : 3.0.3
        cascading-hadoop2-mr1 : 3.0.3
        cascading-local  : 3.0.3
        hadoop-common : 2.6.0 
        hadoop-hdfs  : 2.6.0
        hadoop-mapreduce-client-core : 3.0.3
        lingual-core  : project dependency (wip-2.0)
        lingual-platform : project dependency (wip-2.0)
        lingual-hadoop2-mr1 : project dependency (wip-2.0)


In all the above cases, I faced issues when using the Lingual jars as dependencies, but when I added the Lingual source code as a project dependency instead of the jars, it executed successfully. Can someone please help me pinpoint the issue when executing the Lingual JDBC job?


Thanks 
Santlal Gupta
Attachments: LingualMain.java, SchemaWithoutCommand.java, StringDate.csv

Andre Kelpe

Jun 15, 2016, 8:44:01 AM
to lingua...@googlegroups.com
The easiest way is to use the JDBC drivers, which include everything
except Hadoop.

For Lingual 2.x try these:
http://conjars.org/repo/cascading/lingual-hadoop2-mr1-jdbc/

In Lingual 1.x we use a different naming pattern. You will find the
JDBC driver here:

http://conjars.org/repo/cascading/lingual-hadoop2-mr1/1.2.1/lingual-hadoop2-mr1-1.2.1-jdbc.jar


If you use those, you don't need any other lingual or cascading
dependency. Only add the libraries of your hadoop distro.

- André



--
André Kelpe
an...@concurrentinc.com
http://concurrentinc.com

Andre Kelpe

Jun 15, 2016, 8:44:21 AM
to lingua...@googlegroups.com
See my answer on the other list.

santlal gupta

Jun 15, 2016, 10:16:58 AM
to Lingual User
Hi Andre,

Thanks for your response.

As you suggested, I removed all Cascading and Lingual dependencies from the project and provided only lingual-hadoop2-mr1-1.2.1-jdbc.jar or lingual-hadoop2-mr1-jdbc-2.0.0-wip-94.jar, but I am getting the same exception:

        Caused by: java.io.IOException: Cannot initialize Cluster. Please check your configuration for mapreduce.framework.name and the correspond server addresses.
at org.apache.hadoop.mapreduce.Cluster.initialize(Cluster.java:120)
at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:82)
at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:75)
at org.apache.hadoop.mapred.JobClient.init(JobClient.java:470)
at org.apache.hadoop.mapred.JobClient.<init>(JobClient.java:449)
at cascading.flow.hadoop.planner.HadoopFlowStepJob.internalNonBlockingStart(HadoopFlowStepJob.java:105)
at cascading.flow.planner.FlowStepJob.blockOnJob(FlowStepJob.java:265)
at cascading.flow.planner.FlowStepJob.start(FlowStepJob.java:184)
at cascading.flow.planner.FlowStepJob.call(FlowStepJob.java:146)
at cascading.flow.planner.FlowStepJob.call(FlowStepJob.java:48)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)

Thanks 
Santlal Gupta

Andre Kelpe

Jun 15, 2016, 10:29:23 AM
to lingua...@googlegroups.com
Don't use both; you only need one of them. Also, it seems your Hadoop
is not correctly configured:
https://hadoop.apache.org/docs/r2.7.2/hadoop-project-dist/hadoop-common/SingleCluster.html#YARN_on_a_Single_Node
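
(For reference, a quick way to see what those settings resolve to on the client classpath is something like the sketch below; the class name is illustrative, the property names are standard Hadoop ones.)

import org.apache.hadoop.mapred.JobConf;

public class CheckHadoopClientConfig
{
    public static void main(String[] args)
    {
        // JobConf picks up core-site.xml and mapred-site.xml from the classpath.
        JobConf conf = new JobConf();

        // "local" here usually means the cluster's *-site.xml files are not on the classpath.
        System.out.println("mapreduce.framework.name = " + conf.get("mapreduce.framework.name"));
        System.out.println("fs.defaultFS = " + conf.get("fs.defaultFS"));
    }
}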

- André

santlal gupta

Jun 15, 2016, 10:35:05 AM
to Lingual User
Hi Andre,

I am using only one of them at a time, first one and then the other,
but I got the same exception with both.

Thanks 
Santlal Gupta

Andre Kelpe

Jun 15, 2016, 10:44:25 AM
to lingua...@googlegroups.com
As I said, your Hadoop is not correctly configured. Follow the Hadoop
documentation and it will work.

- André

Andre Kelpe

Jun 20, 2016, 12:17:59 PM
to lingua...@googlegroups.com
In Lingual 1.x, the "jdbc" part is a classifier. Something like this
should work:

compile group: 'cascading', name: 'lingual-hadoop2-mr1', version: '1.2.1', classifier: 'jdbc'

- Andre

On Mon, Jun 20, 2016 at 11:37 AM, santlal gupta <santla...@gmail.com> wrote:
> Hi Andre,
>
> I am using Gradle as a build tool in my project. I am unable to download
> http://conjars.org/repo/cascading/lingual-hadoop2-mr1/1.2.1/lingual-hadoop2-mr1-1.2.1-jdbc.jar.
>
> I tried adding the dependency in a couple of ways, as below:
>
> 1. compile group: 'cascading', name: 'lingual-hadoop2-mr1-1.2.1-jdbc',
> version: '1.2.1'
>
> 2. compile group: 'cascading', name: 'lingual-hadoop2-mr1-1.2.1', version:
> '1.2.1-jdbc'
>
> Could you please help me out?
>
> Thanks in advance.
>
> -Santlal

santlal gupta

Jun 22, 2016, 12:29:30 PM
to Lingual User
Hi Andre,
 
Yes Andre, it works.

Thanks for your valuable help.

Thanks
Santlal Gupta
 

santlal gupta

Jun 30, 2016, 5:05:17 AM
to Lingual User
Hi Andre,

I am looking for the lingual-hadoop2-mr1-1.2.1-jdbc.jar source jar or source code. I have searched for it on conjars, but it is not available there.

Can you help me get it?

Thanks
Santlal Gupta

Andre Kelpe

Jun 30, 2016, 12:33:37 PM
to lingua...@googlegroups.com
The code is here: https://github.com/cascading/lingual/tree/1.2

- Andre