Inconsistent behavior when running Spark on top of Tachyon on top of HDFS HA


Łukasz Jastrzębski

Feb 14, 2014, 8:10:23 PM
to tachyo...@googlegroups.com
Hi,

I don't understand why counting the file fails, yet when I run the same count a second time in spark-shell it returns the correct result.
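For context, the log below does not show the commands for the first run; presumably it was the same pair of calls as the second run shown further down, i.e. roughly:

scala> // Read a small text file through Tachyon, whose under-filesystem is
scala> // the HDFS HA nameservice hdfs://stanley, then count its lines.
scala> val s = sc.textFile("tachyon://hadoop-ha-1:19998/tmp/proxy.txt")
scala> s.count()  // fails with UnknownHostException: stanley on the first run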


Below is the full log from the spark-shell:

14/02/15 03:04:22 INFO : initialize(tachyon://hadoop-ha-1:19998/tmp/proxy.txt, Configuration: core-default.xml, core-site.xml, mapred-default.xml, mapred-site.xml, hdfs-default.xml, hdfs-site.xml). Connecting to Tachyon: tachyon://hadoop-ha-1:19998/tmp/proxy.txt
14/02/15 03:04:22 INFO : Trying to connect master @ hadoop-ha-1/14.255.247.81:19998
14/02/15 03:04:22 INFO : User registered at the master hadoop-ha-1/14.255.247.81:19998 got UserId 15
14/02/15 03:04:22 INFO : Trying to get local worker host : hadoop-ha-1
14/02/15 03:04:22 INFO : No local worker on hadoop-ha-1
14/02/15 03:04:22 INFO : Connecting remote worker @ hadoop-worker-6/14.255.247.53:29998
14/02/15 03:04:22 INFO : tachyon://hadoop-ha-1:19998 tachyon://hadoop-ha-1:19998 hdfs://stanley
14/02/15 03:04:22 INFO : getFileStatus(/tmp/proxy.txt): HDFS Path: hdfs://stanley/tmp/proxy.txt TPath: tachyon://hadoop-ha-1:19998/tmp/proxy.txt
14/02/15 03:04:22 INFO mapred.FileInputFormat: Total input paths to process : 1
14/02/15 03:04:22 INFO spark.SparkContext: Starting job: count at <console>:15
14/02/15 03:04:22 INFO scheduler.DAGScheduler: Got job 0 (count at <console>:15) with 2 output partitions (allowLocal=false)
14/02/15 03:04:22 INFO scheduler.DAGScheduler: Final stage: Stage 0 (count at <console>:15)
14/02/15 03:04:22 INFO scheduler.DAGScheduler: Parents of final stage: List()
14/02/15 03:04:22 INFO scheduler.DAGScheduler: Missing parents: List()
14/02/15 03:04:22 INFO scheduler.DAGScheduler: Submitting Stage 0 (MappedRDD[1] at textFile at <console>:12), which has no missing parents
14/02/15 03:04:22 INFO scheduler.DAGScheduler: Submitting 2 missing tasks from Stage 0 (MappedRDD[1] at textFile at <console>:12)
14/02/15 03:04:22 INFO scheduler.TaskSchedulerImpl: Adding task set 0.0 with 2 tasks
14/02/15 03:04:22 INFO scheduler.FairSchedulableBuilder: Added task set TaskSet_0 tasks to pool default
14/02/15 03:04:22 INFO scheduler.TaskSetManager: Starting task 0.0:0 as TID 0 on executor 201402142202-1375207182-5951-23748-5: hadoop-worker-1 (NODE_LOCAL)
14/02/15 03:04:22 INFO scheduler.TaskSetManager: Serialized task 0.0:0 as 1569 bytes in 4 ms
14/02/15 03:04:22 INFO scheduler.TaskSetManager: Starting task 0.0:1 as TID 1 on executor 201402142202-1375207182-5951-23748-5: hadoop-worker-1 (NODE_LOCAL)
14/02/15 03:04:22 INFO scheduler.TaskSetManager: Serialized task 0.0:1 as 1569 bytes in 0 ms
14/02/15 03:04:23 WARN scheduler.TaskSetManager: Lost TID 1 (task 0.0:1)
14/02/15 03:04:23 WARN scheduler.TaskSetManager: Loss was due to java.lang.IllegalArgumentException
java.lang.IllegalArgumentException: java.net.UnknownHostException: stanley
    at org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:414)
    at org.apache.hadoop.hdfs.NameNodeProxies.createNonHAProxy(NameNodeProxies.java:164)
    at org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:129)
    at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:448)
    at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:410)
    at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:128)
    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2308)
    at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:87)
    at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2342)
    at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2324)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:351)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:163)
    at org.apache.hadoop.mapred.JobConf.getWorkingDirectory(JobConf.java:587)
    at org.apache.hadoop.mapred.FileInputFormat.setInputPaths(FileInputFormat.java:315)
    at org.apache.hadoop.mapred.FileInputFormat.setInputPaths(FileInputFormat.java:288)
    at org.apache.spark.SparkContext$$anonfun$15.apply(SparkContext.scala:391)
    at org.apache.spark.SparkContext$$anonfun$15.apply(SparkContext.scala:391)
    at org.apache.spark.rdd.HadoopRDD$$anonfun$getJobConf$1.apply(HadoopRDD.scala:111)
    at org.apache.spark.rdd.HadoopRDD$$anonfun$getJobConf$1.apply(HadoopRDD.scala:111)
    at scala.Option.map(Option.scala:145)
    at org.apache.spark.rdd.HadoopRDD.getJobConf(HadoopRDD.scala:111)
    at org.apache.spark.rdd.HadoopRDD$$anon$1.<init>(HadoopRDD.scala:154)
    at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:149)
    at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:64)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:241)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:232)
    at org.apache.spark.rdd.MappedRDD.compute(MappedRDD.scala:31)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:241)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:232)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:109)
    at org.apache.spark.scheduler.Task.run(Task.scala:53)
    at org.apache.spark.executor.Executor$TaskRunner$$anonfun$run$1.apply$mcV$sp(Executor.scala:213)
    at org.apache.spark.deploy.SparkHadoopUtil.runAsUser(SparkHadoopUtil.scala:49)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:178)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:744)
14/02/15 03:04:23 WARN scheduler.TaskSetManager: Lost TID 0 (task 0.0:0)
14/02/15 03:04:23 INFO scheduler.TaskSetManager: Loss was due to java.lang.IllegalArgumentException: java.net.UnknownHostException: stanley [duplicate 1]
14/02/15 03:04:23 INFO scheduler.TaskSetManager: Starting task 0.0:0 as TID 2 on executor 201402142202-1375207182-5951-23748-5: hadoop-worker-1 (NODE_LOCAL)
14/02/15 03:04:23 INFO scheduler.TaskSetManager: Serialized task 0.0:0 as 1569 bytes in 1 ms
14/02/15 03:04:23 INFO scheduler.TaskSetManager: Starting task 0.0:1 as TID 3 on executor 201402142202-1375207182-5951-23748-5: hadoop-worker-1 (NODE_LOCAL)
14/02/15 03:04:23 INFO scheduler.TaskSetManager: Serialized task 0.0:1 as 1569 bytes in 0 ms
14/02/15 03:04:23 WARN scheduler.TaskSetManager: Lost TID 2 (task 0.0:0)
14/02/15 03:04:23 INFO scheduler.TaskSetManager: Loss was due to java.lang.IllegalArgumentException: java.net.UnknownHostException: stanley [duplicate 2]
14/02/15 03:04:23 INFO scheduler.TaskSetManager: Starting task 0.0:0 as TID 4 on executor 201402142202-1375207182-5951-23748-5: hadoop-worker-1 (NODE_LOCAL)
14/02/15 03:04:23 INFO scheduler.TaskSetManager: Serialized task 0.0:0 as 1569 bytes in 1 ms
14/02/15 03:04:23 WARN scheduler.TaskSetManager: Lost TID 3 (task 0.0:1)
14/02/15 03:04:23 INFO scheduler.TaskSetManager: Loss was due to java.lang.IllegalArgumentException: java.net.UnknownHostException: stanley [duplicate 3]
14/02/15 03:04:23 INFO scheduler.TaskSetManager: Starting task 0.0:1 as TID 5 on executor 201402142202-1375207182-5951-23748-5: hadoop-worker-1 (NODE_LOCAL)
14/02/15 03:04:23 INFO scheduler.TaskSetManager: Serialized task 0.0:1 as 1569 bytes in 1 ms
14/02/15 03:04:23 WARN scheduler.TaskSetManager: Lost TID 4 (task 0.0:0)
14/02/15 03:04:23 INFO scheduler.TaskSetManager: Loss was due to java.lang.IllegalArgumentException: java.net.UnknownHostException: stanley [duplicate 4]
14/02/15 03:04:23 INFO scheduler.TaskSetManager: Starting task 0.0:0 as TID 6 on executor 201402142202-1375207182-5951-23748-5: hadoop-worker-1 (NODE_LOCAL)
14/02/15 03:04:23 INFO scheduler.TaskSetManager: Serialized task 0.0:0 as 1569 bytes in 1 ms
14/02/15 03:04:23 WARN scheduler.TaskSetManager: Lost TID 5 (task 0.0:1)
14/02/15 03:04:23 INFO scheduler.TaskSetManager: Loss was due to java.lang.IllegalArgumentException: java.net.UnknownHostException: stanley [duplicate 5]
14/02/15 03:04:23 INFO scheduler.TaskSetManager: Starting task 0.0:1 as TID 7 on executor 201402142202-1375207182-5951-23748-5: hadoop-worker-1 (NODE_LOCAL)
14/02/15 03:04:23 INFO scheduler.TaskSetManager: Serialized task 0.0:1 as 1569 bytes in 0 ms
14/02/15 03:04:23 WARN scheduler.TaskSetManager: Lost TID 6 (task 0.0:0)
14/02/15 03:04:23 INFO scheduler.TaskSetManager: Loss was due to java.lang.IllegalArgumentException: java.net.UnknownHostException: stanley [duplicate 6]
14/02/15 03:04:23 ERROR scheduler.TaskSetManager: Task 0.0:0 failed 4 times; aborting job
14/02/15 03:04:23 INFO scheduler.TaskSchedulerImpl: Remove TaskSet 0.0 from pool default
14/02/15 03:04:23 INFO scheduler.DAGScheduler: Failed to run count at <console>:15
14/02/15 03:04:23 INFO scheduler.TaskSchedulerImpl: Ignoring update with state FAILED from TID 7 because its task set is gone
org.apache.spark.SparkException: Job aborted: Task 0.0:0 failed 4 times (most recent failure: Exception failure: java.lang.IllegalArgumentException: java.net.UnknownHostException: stanley)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$org$apache$spark$scheduler$DAGScheduler$$abortStage$1.apply(DAGScheduler.scala:1028)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$org$apache$spark$scheduler$DAGScheduler$$abortStage$1.apply(DAGScheduler.scala:1026)
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
    at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$abortStage(DAGScheduler.scala:1026)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$processEvent$10.apply(DAGScheduler.scala:619)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$processEvent$10.apply(DAGScheduler.scala:619)
    at scala.Option.foreach(Option.scala:236)
    at org.apache.spark.scheduler.DAGScheduler.processEvent(DAGScheduler.scala:619)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$start$1$$anon$2$$anonfun$receive$1.applyOrElse(DAGScheduler.scala:207)
    at akka.actor.ActorCell.receiveMessage(ActorCell.scala:498)
    at akka.actor.ActorCell.invoke(ActorCell.scala:456)
    at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:237)
    at akka.dispatch.Mailbox.run(Mailbox.scala:219)
    at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:386)
    at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
    at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
    at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
    at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)


scala> val s = sc.textFile("tachyon://hadoop-ha-1:19998/tmp/proxy.txt")
14/02/15 03:04:27 INFO storage.MemoryStore: ensureFreeSpace(45012) called with curMem=80405, maxMem=309225062
14/02/15 03:04:27 INFO storage.MemoryStore: Block broadcast_1 stored as values to memory (estimated size 44.0 KB, free 294.8 MB)
s: org.apache.spark.rdd.RDD[String] = MappedRDD[3] at textFile at <console>:12

scala> s.count()
14/02/15 03:04:29 INFO : getFileStatus(/tmp/proxy.txt): HDFS Path: hdfs://stanley/tmp/proxy.txt TPath: tachyon://hadoop-ha-1:19998/tmp/proxy.txt
14/02/15 03:04:29 INFO mapred.FileInputFormat: Total input paths to process : 1
14/02/15 03:04:29 INFO spark.SparkContext: Starting job: count at <console>:15
14/02/15 03:04:29 INFO scheduler.DAGScheduler: Got job 1 (count at <console>:15) with 2 output partitions (allowLocal=false)
14/02/15 03:04:29 INFO scheduler.DAGScheduler: Final stage: Stage 1 (count at <console>:15)
14/02/15 03:04:29 INFO scheduler.DAGScheduler: Parents of final stage: List()
14/02/15 03:04:29 INFO scheduler.DAGScheduler: Missing parents: List()
14/02/15 03:04:29 INFO scheduler.DAGScheduler: Submitting Stage 1 (MappedRDD[3] at textFile at <console>:12), which has no missing parents
14/02/15 03:04:29 INFO scheduler.DAGScheduler: Submitting 2 missing tasks from Stage 1 (MappedRDD[3] at textFile at <console>:12)
14/02/15 03:04:29 INFO scheduler.TaskSchedulerImpl: Adding task set 1.0 with 2 tasks
14/02/15 03:04:29 INFO scheduler.FairSchedulableBuilder: Added task set TaskSet_1 tasks to pool default
14/02/15 03:04:29 INFO scheduler.TaskSetManager: Starting task 1.0:0 as TID 8 on executor 201402142202-1375207182-5951-23748-5: hadoop-worker-1 (NODE_LOCAL)
14/02/15 03:04:29 INFO scheduler.TaskSetManager: Serialized task 1.0:0 as 1570 bytes in 0 ms
14/02/15 03:04:29 INFO scheduler.TaskSetManager: Starting task 1.0:1 as TID 9 on executor 201402142202-1375207182-5951-23748-5: hadoop-worker-1 (NODE_LOCAL)
14/02/15 03:04:29 INFO scheduler.TaskSetManager: Serialized task 1.0:1 as 1570 bytes in 0 ms
14/02/15 03:04:29 INFO scheduler.TaskSetManager: Finished TID 8 in 451 ms on hadoop-worker-1 (progress: 0/2)
14/02/15 03:04:29 INFO scheduler.TaskSetManager: Finished TID 9 in 456 ms on hadoop-worker-1 (progress: 1/2)
14/02/15 03:04:29 INFO scheduler.DAGScheduler: Completed ResultTask(1, 0)
14/02/15 03:04:29 INFO scheduler.DAGScheduler: Completed ResultTask(1, 1)
14/02/15 03:04:29 INFO scheduler.DAGScheduler: Stage 1 (count at <console>:15) finished in 0.461 s
14/02/15 03:04:29 INFO scheduler.TaskSchedulerImpl: Remove TaskSet 1.0 from pool default
14/02/15 03:04:29 INFO spark.SparkContext: Job finished: count at <console>:15, took 0.466730364 s
res1: Long = 5


Łukasz Jastrzębski

Feb 15, 2014, 2:36:41 PM
to tachyo...@googlegroups.com
I checked: it is a problem with running Spark on Mesos and is not related to Tachyon; the same failure occurs when reading directly from HDFS HA.
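For anyone else hitting this: the UnknownHostException suggests the executor JVMs treat the HA nameservice "stanley" as a literal hostname, because the HA client settings from hdfs-site.xml are not visible to the Mesos executors. One workaround is to make sure HADOOP_CONF_DIR with the HA-enabled hdfs-site.xml is on every executor's classpath. Alternatively, a minimal sketch of setting the HA properties on the configuration Spark ships with each Hadoop job; the namenode IDs nn1/nn2, the second host hadoop-ha-2, and port 8020 are assumptions, adjust to the actual cluster:

scala> // Sketch: mirror what an HA-enabled hdfs-site.xml would provide.
scala> // These settings end up in the JobConf that Spark serializes to executors.
scala> sc.hadoopConfiguration.set("dfs.nameservices", "stanley")
scala> sc.hadoopConfiguration.set("dfs.ha.namenodes.stanley", "nn1,nn2")
scala> sc.hadoopConfiguration.set("dfs.namenode.rpc-address.stanley.nn1", "hadoop-ha-1:8020")
scala> sc.hadoopConfiguration.set("dfs.namenode.rpc-address.stanley.nn2", "hadoop-ha-2:8020")
scala> sc.hadoopConfiguration.set("dfs.client.failover.proxy.provider.stanley", "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider")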

Best regards
Lukasz Jastrzebski

Haoyuan Li

Feb 17, 2014, 12:24:34 AM
to tachyo...@googlegroups.com
Thanks for letting us know.

Haoyuan