[Hadoop-studio-users] Cannot deploy workflow to my cluster


Kosmaj

Nov 15, 2011, 8:57:02 AM11/15/11
to hadoop-st...@lists.sourceforge.net
Hi,
I got the Karmasphere Community Edition software and I'm excited
about its cool features.

I successfully completed "(2) Local development" and
the first half of "(3) Remote deployment".
My "cluster" is pseudo-distributed using one machine where I run
Eclipse as well. After starting Hadoop (start-all.sh) I can connect
to my HDFS and to my cluster using your plugin in Eclipse (Indigo).
I can also point the examples from (2) at input files on my HDFS;
they are found and used as expected.

But when I try to deploy a workflow, either WordCount or Pi,
I get the exceptions below. My project looks
exactly like yours in (3): I added KS Hadoop MapReduce 0.18.3
to my project, and my target cluster is running Hadoop 0.20.2.
The only thing I don't have is "Hadoop client (KS)", because this
is the Community Edition. As parameters I'm giving an input text
file (for word count) that exists on my HDFS, and a non-existing
output directory (both as hdfs://localhost URIs).

I can deploy "standalone jars" from my Hadoop-0.20.2 distro
without problems. I haven't tried streaming yet.

Your help will be greatly appreciated.
Thanks
Predrag (Kosmaj)

-------------------------------
Exception in thread "main" java.lang.RuntimeException: java.io.IOException: Call to localhost/127.0.0.1:8020 failed on local exception: java.io.EOFException
at org.apache.hadoop.mapred.JobConf.getWorkingDirectory(JobConf.java:358)
at org.apache.hadoop.mapred.FileInputFormat.setInputPaths(FileInputFormat.java:377)
at org.apache.hadoop.mapred.FileInputFormat.setInputPaths(FileInputFormat.java:350)
at HadoopJob.call(HadoopJob.java:76)
at HadoopJob.main(HadoopJob.java:169)
Caused by: java.io.IOException: Call to localhost/127.0.0.1:8020 failed on local exception: java.io.EOFException
at org.apache.hadoop.ipc.Client.wrapException(Client.java:751)
at org.apache.hadoop.ipc.Client.call(Client.java:719)
at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:216)
at org.apache.hadoop.dfs.$Proxy0.getProtocolVersion(Unknown Source)
at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:348)
at org.apache.hadoop.dfs.DFSClient.createRPCNamenode(DFSClient.java:103)
at org.apache.hadoop.dfs.DFSClient.<init>(DFSClient.java:172)
at org.apache.hadoop.dfs.DistributedFileSystem.initialize(DistributedFileSystem.java:67)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1339)
at org.apache.hadoop.fs.FileSystem.access$300(FileSystem.java:56)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1351)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:213)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:118)
at org.apache.hadoop.mapred.JobConf.getWorkingDirectory(JobConf.java:354)
... 4 more
Caused by: java.io.EOFException
at java.io.DataInputStream.readInt(DataInputStream.java:375)
at org.apache.hadoop.ipc.Client$Connection.receiveResponse(Client.java:500)
at org.apache.hadoop.ipc.Client$Connection.run(Client.java:442)


_______________________________________________
Hadoop-studio-users mailing list
Hadoop-st...@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/hadoop-studio-users

Ted Reynolds

Nov 15, 2011, 12:09:20 PM11/15/11
to hadoop-st...@lists.sourceforge.net, sup...@karmasphere.com
Kosmaj,

Just letting you know that we have received your email.  One note, though it is probably not the cause of the EOFException (which we are still looking into): the libraries you add to your project should match the Hadoop version of the cluster on which you will deploy the job.

We will get back to you with the fix for the EOFException as soon as we can.

Ted.

-- 
   Ted Reynolds
   Technical Support Engineer
The Leader in Big Data Analytics for Hadoop

P: (650)292-6113
19200 Stevens Creek Blvd. Suite 130, Cupertino, CA 95014


Ted Reynolds

Nov 15, 2011, 5:39:42 PM11/15/11
to hadoop-st...@lists.sourceforge.net, sup...@karmasphere.com
Kosmaj,

After doing some testing here, we found that this mismatch of Hadoop versions is indeed the root of your problem.  If you make sure that the versions match, you should see no EOFException.
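In case it helps others who hit this, here is a rough sketch of the failure mode and a quick check. The jar name, paths, and version strings below are assumptions for a typical pseudo-distributed install; on the cluster, `bin/hadoop version` reports the running release (e.g. "Hadoop 0.20.2"), and in the project the bundled client jar usually carries its version in the name (e.g. hadoop-0.18.3-core.jar).

```shell
# Sketch: in pre-1.0 Hadoop the RPC wire format changed between releases,
# so the client library and the cluster must run the same version; a
# mismatch surfaces on the client side as an EOFException while reading
# the RPC response (as in the stack trace above).

# Tiny helper comparing the two version strings exactly:
versions_match() {
  if [ "$1" = "$2" ]; then
    echo "OK: versions match"
  else
    echo "MISMATCH: client $1 vs cluster $2"
  fi
}

# The setup from this thread:
versions_match "0.18.3" "0.20.2"   # prints "MISMATCH: client 0.18.3 vs cluster 0.20.2"
```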

Ted.

Kosmaj

Nov 15, 2011, 10:13:44 PM11/15/11
to hadoop-st...@lists.sourceforge.net
Ted,

Thanks a lot!
I replaced the 0.18.3 library with 0.20.2 and now
everything works great, without changing a single line of code!
I have a few deprecated classes in my Pi project, which I'm going
to fix now.

It makes sense to match the library with the target cluster
version, but I was confused by your document (3), where in
section 3-v-a you show a local 0.18.3 working with 0.20.2
on the cluster. It might be a good idea to change that, or add a
short comment.

Anyway, thanks for such a speedy reply!
Taking care of one's users has become a rarity :-)

Kosmaj


