Getting error when running Spark job over YARN


manish....@gmail.com

Mar 29, 2016, 10:32:04 AM
to Alluxio Users
Hi,

When I write/read a DataFrame to/from Alluxio from a Spark job run over YARN (using the command: spark-submit --master yarn), I get the following exception (a rough sketch of the write is shown after the stack trace).
Caused by: org.apache.spark.SparkException: Job aborted due to stage failure: Task 5 in stage 0.0 failed 4 times, most recent failure: Lost task 5.3 in stage 0.0 (TID 23, alluxio-master.us-west-2.compute.internal): java.lang.NullPointerException
        at org.apache.parquet.hadoop.InternalParquetRecordWriter.flushRowGroupToStore(InternalParquetRecordWriter.java:147)
        at org.apache.parquet.hadoop.InternalParquetRecordWriter.close(InternalParquetRecordWriter.java:113)
        at org.apache.parquet.hadoop.ParquetRecordWriter.close(ParquetRecordWriter.java:112)
        at org.apache.spark.sql.execution.datasources.parquet.ParquetOutputWriter.close(ParquetRelation.scala:101)
        at org.apache.spark.sql.execution.datasources.DefaultWriterContainer.abortTask$1(WriterContainer.scala:272)
        at org.apache.spark.sql.execution.datasources.DefaultWriterContainer.writeRows(WriterContainer.scala:249)
        at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation$$anonfun$run$1$$anonfun$apply$mcV$sp$3.apply(InsertIntoHadoopFsRelation.scala:150)
        at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation$$anonfun$run$1$$anonfun$apply$mcV$sp$3.apply(InsertIntoHadoopFsRelation.scala:150)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
        at org.apache.spark.scheduler.Task.run(Task.scala:88)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
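
The write itself is just a plain DataFrame Parquet save against an alluxio:// URI, roughly like the sketch below (the input source, variable names, and output path are placeholders rather than the exact job; 19998 is the default Alluxio master port):

    // Spark 1.5 DataFrame API -- illustrative sketch only
    val df = sqlContext.read.json("hdfs:///tmp/input.json")  // placeholder input
    df.write.parquet("alluxio://<alluxio-master-host>:19998/tmp/output.parquet")  // works with --master=local, fails on YARN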

Maybe I'm missing some configuration, and that's why I'm getting this error. The same job works when I use "spark-submit --master=local". I'm using Alluxio 1.0.0 and Spark 1.5; the Spark cluster has 6 workers. I've tried with a single Alluxio worker as well as six, with one master, but it always fails with "--master=yarn".

Am I missing something in Alluxio configuration or Spark?
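
In particular, do I need something like the following in spark-defaults.conf so that the YARN executors can load the Alluxio client, or should the client jar be shipped with --jars? (The jar path below is a placeholder for wherever the Alluxio 1.0.0 client jar actually lives on each node.)

    # spark-defaults.conf -- illustrative only; replace the path with the real client jar location
    spark.driver.extraClassPath    /<path-to>/alluxio-core-client-1.0.0-jar-with-dependencies.jar
    spark.executor.extraClassPath  /<path-to>/alluxio-core-client-1.0.0-jar-with-dependencies.jar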

Thanks.

Manish

Gene Pang

Apr 1, 2016, 9:49:55 AM
to Alluxio Users
Hi Manish,

Could you take a look at the Alluxio master logs, and the Alluxio worker logs?

Also, do the Spark application logs contain any further Alluxio client log messages?

Thanks,
Gene

Gene Pang

Apr 20, 2016, 1:01:03 AM
to Alluxio Users
Hi Manish,

Were you able to resolve your issue?

Thanks,
Gene

张华玮

Jan 16, 2017, 1:02:54 AM
to Alluxio Users
I have the same problem. Did you solve it?

On Tuesday, March 29, 2016 at 10:32:04 PM UTC+8, manish....@gmail.com wrote:

Gene Pang

Jan 19, 2017, 9:33:10 AM
to Alluxio Users
Hi Manish,

Did you find a solution? If so, could you share how you resolved it?

Thanks,
Gene