hi guys:
When I run Spark 1.0.2 on Tachyon 0.5.0, I get the error below.
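This is what I ran in the spark-shell (reconstructed from the "textFile at <console>:12" and "count at <console>:15" lines in the log below; the variable name is mine):

    val lines = sc.textFile("tachyon://172.22.178.63:19998/sshcp")
    lines.count()

And here is the log from the failed run: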
2014-08-19 16:17:10,367 INFO (TFS.java:initialize) - initialize(tachyon://172.22.178.63:19998/sshcp, Configuration: core-default.xml, core-site.xml, yarn-default.xml, yarn-site.xml, mapred-default.xml, mapred-site.xml, hdfs-default.xml, hdfs-site.xml). Connecting to Tachyon: tachyon://172.22.178.63:19998/sshcp
2014-08-19 16:17:10,373 INFO (TachyonFS.java:connect) - Trying to connect master @ /172.22.178.63:19998
2014-08-19 16:17:10,454 INFO (TFS.java:getFileStatus) - getFileStatus(/sshcp): HDFS Path: hdfs://ns1/sshcp (should be hdfs://ns1/user/jdmp/tachyon/data/47 in my hdfs) TPath: tachyon://172.22.178.63:19998/sshcp
2014-08-19 16:17:10,478 INFO FileInputFormat (FileInputFormat.java:listStatus) - Total input paths to process : 1
2014-08-19 16:17:10,503 INFO SparkContext (Logging.scala:logInfo) - Starting job: count at <console>:15
2014-08-19 16:17:10,516 INFO DAGScheduler (Logging.scala:logInfo) - Got job 0 (count at <console>:15) with 1 output partitions (allowLocal=false)
2014-08-19 16:17:10,517 INFO DAGScheduler (Logging.scala:logInfo) - Final stage: Stage 0(count at <console>:15)
2014-08-19 16:17:10,517 INFO DAGScheduler (Logging.scala:logInfo) - Parents of final stage: List()
2014-08-19 16:17:10,523 INFO DAGScheduler (Logging.scala:logInfo) - Missing parents: List()
2014-08-19 16:17:10,526 INFO DAGScheduler (Logging.scala:logInfo) - Submitting Stage 0 (MappedRDD[1] at textFile at <console>:12), which has no missing parents
2014-08-19 16:17:10,569 INFO DAGScheduler (Logging.scala:logInfo) - Submitting 1 missing tasks from Stage 0 (MappedRDD[1] at textFile at <console>:12)
2014-08-19 16:17:10,571 INFO YarnClientClusterScheduler (Logging.scala:logInfo) - Adding task set 0.0 with 1 tasks
2014-08-19 16:17:10,586 INFO TaskSetManager (Logging.scala:logInfo) - Starting task 0.0:0 as TID 0 on executor 5: BJHC-BIGDATA-TEST-17867.test.com (PROCESS_LOCAL)
2014-08-19 16:17:10,592 INFO TaskSetManager (Logging.scala:logInfo) - Serialized task 0.0:0 as 1719 bytes in 4 ms
2014-08-19 16:17:11,971 WARN TaskSetManager (Logging.scala:logWarning) - Lost TID 0 (task 0.0:0)
2014-08-19 16:17:11,980 WARN TaskSetManager (Logging.scala:logWarning) - Loss was due to java.io.FileNotFoundException
java.io.FileNotFoundException: File does not exist: /sshcp
at org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:61)
at org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:51)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsUpdateTimes(FSNamesystem.java:1499)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsInt(FSNamesystem.java:1448)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1428)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1402)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:468)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getBlockLocations(ClientNamenodeProtocolServerSideTranslatorPB.java:269)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:59566)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2048)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2044)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2042)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73)
at org.apache.hadoop.hdfs.DFSClient.callGetBlockLocations(DFSClient.java:1066)
at org.apache.hadoop.hdfs.DFSClient.getLocatedBlocks(DFSClient.java:1054)
at org.apache.hadoop.hdfs.DFSClient.getLocatedBlocks(DFSClient.java:1044)
at org.apache.hadoop.hdfs.DFSInputStream.fetchLocatedBlocksAndGetLastBlockLength(DFSInputStream.java:235)
at org.apache.hadoop.hdfs.DFSInputStream.openInfo(DFSInputStream.java:202)
at org.apache.hadoop.hdfs.DFSInputStream.<init>(DFSInputStream.java:195)
at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:1212)
at org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileSystem.java:290)
at org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileSystem.java:286)
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:286)
at tachyon.hadoop.HdfsFileInputStream.read(HdfsFileInputStream.java:149)
at java.io.DataInputStream.read(DataInputStream.java:83)
at org.apache.hadoop.util.LineReader.readDefaultLine(LineReader.java:211)
at org.apache.hadoop.util.LineReader.readLine(LineReader.java:174)
at org.apache.hadoop.mapred.LineRecordReader.next(LineRecordReader.java:206)
at org.apache.hadoop.mapred.LineRecordReader.next(LineRecordReader.java:45)
at org.apache.spark.rdd.HadoopRDD$$anon$1.getNext(HadoopRDD.scala:201)
at org.apache.spark.rdd.HadoopRDD$$anon$1.getNext(HadoopRDD.scala:184)
at org.apache.spark.util.NextIterator.hasNext(NextIterator.scala:71)
at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:39)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
at org.apache.spark.util.Utils$.getIteratorSize(Utils.scala:1014)
at org.apache.spark.rdd.RDD$$anonfun$count$1.apply(RDD.scala:861)
at org.apache.spark.rdd.RDD$$anonfun$count$1.apply(RDD.scala:861)
at org.apache.spark.SparkContext$$anonfun$runJob$4.apply(SparkContext.scala:1083)
at org.apache.spark.SparkContext$$anonfun$runJob$4.apply(SparkContext.scala:1083)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:111)
at org.apache.spark.scheduler.Task.run(Task.scala:51)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:183)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)
2014-08-19 16:17:11,983 INFO TaskSetManager (Logging.scala:logInfo) - Starting task 0.0:0 as TID 1 on executor 8: BJHC-BIGDATA-TEST-17871.test.com (PROCESS_LOCAL)
2014-08-19 16:17:11,984 INFO TaskSetManager (Logging.scala:logInfo) - Serialized task 0.0:0 as 1719 bytes in 0 ms
2014-08-19 16:17:13,610 WARN TaskSetManager (Logging.scala:logWarning) - Lost TID 1 (task 0.0:0)
2014-08-19 16:17:13,612 INFO TaskSetManager (Logging.scala:logInfo) - Loss was due to java.io.FileNotFoundException: File does not exist: /sshcp
at org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:61)
at org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:51)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsUpdateTimes(FSNamesystem.java:1499)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsInt(FSNamesystem.java:1448)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1428)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1402)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:468)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getBlockLocations(ClientNamenodeProtocolServerSideTranslatorPB.java:269)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:59566)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2048)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2044)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2042)
[duplicate 1]
2014-08-19 16:17:13,613 INFO TaskSetManager (Logging.scala:logInfo) - Starting task 0.0:0 as TID 2 on executor 3: BJHC-BIGDATA-TEST-17872.test.com (PROCESS_LOCAL)
2014-08-19 16:17:13,614 INFO TaskSetManager (Logging.scala:logInfo) - Serialized task 0.0:0 as 1719 bytes in 1 ms
2014-08-19 16:17:17,736 WARN TaskSetManager (Logging.scala:logWarning) - Lost TID 2 (task 0.0:0)
2014-08-19 16:17:17,737 INFO TaskSetManager (Logging.scala:logInfo) - Loss was due to java.io.FileNotFoundException: File does not exist: /sshcp
at org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:61)
at org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:51)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsUpdateTimes(FSNamesystem.java:1499)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsInt(FSNamesystem.java:1448)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1428)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1402)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:468)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getBlockLocations(ClientNamenodeProtocolServerSideTranslatorPB.java:269)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:59566)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2048)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2044)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2042)
[duplicate 2]
2014-08-19 16:17:17,738 INFO TaskSetManager (Logging.scala:logInfo) - Starting task 0.0:0 as TID 3 on executor 7: BJHC-BIGDATA-TEST-17865.test.com (PROCESS_LOCAL)
2014-08-19 16:17:17,739 INFO TaskSetManager (Logging.scala:logInfo) - Serialized task 0.0:0 as 1719 bytes in 0 ms
2014-08-19 16:17:19,127 WARN TaskSetManager (Logging.scala:logWarning) - Lost TID 3 (task 0.0:0)
2014-08-19 16:17:19,128 INFO TaskSetManager (Logging.scala:logInfo) - Loss was due to java.io.FileNotFoundException: File does not exist: /sshcp
at org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:61)
at org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:51)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsUpdateTimes(FSNamesystem.java:1499)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsInt(FSNamesystem.java:1448)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1428)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1402)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:468)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getBlockLocations(ClientNamenodeProtocolServerSideTranslatorPB.java:269)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:59566)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2048)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2044)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2042)
[duplicate 3]
2014-08-19 16:17:19,129 ERROR TaskSetManager (Logging.scala:logError) - Task 0.0:0 failed 4 times; aborting job
2014-08-19 16:17:19,131 INFO YarnClientClusterScheduler (Logging.scala:logInfo) - Removed TaskSet 0.0, whose tasks have all completed, from pool
2014-08-19 16:17:19,134 INFO YarnClientClusterScheduler (Logging.scala:logInfo) - Cancelling stage 0
2014-08-19 16:17:19,138 INFO DAGScheduler (Logging.scala:logInfo) - Failed to run count at <console>:15
org.apache.spark.SparkException: Job aborted due to stage failure: Task 0.0:0 failed 4 times, most recent failure: Exception failure in TID 3 on host BJHC-BIGDATA-TEST-17865.test.com: java.io.FileNotFoundException: File does not exist: /sshcp
at org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:61)
at org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:51)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsUpdateTimes(FSNamesystem.java:1499)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsInt(FSNamesystem.java:1448)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1428)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1402)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:468)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getBlockLocations(ClientNamenodeProtocolServerSideTranslatorPB.java:269)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:59566)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2048)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2044)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2042)
sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
java.lang.reflect.Constructor.newInstance(Constructor.java:513)
org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73)
org.apache.hadoop.hdfs.DFSClient.callGetBlockLocations(DFSClient.java:1066)
org.apache.hadoop.hdfs.DFSClient.getLocatedBlocks(DFSClient.java:1054)
org.apache.hadoop.hdfs.DFSClient.getLocatedBlocks(DFSClient.java:1044)
org.apache.hadoop.hdfs.DFSInputStream.fetchLocatedBlocksAndGetLastBlockLength(DFSInputStream.java:235)
org.apache.hadoop.hdfs.DFSInputStream.openInfo(DFSInputStream.java:202)
org.apache.hadoop.hdfs.DFSInputStream.<init>(DFSInputStream.java:195)
org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:1212)
org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileSystem.java:290)
org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileSystem.java:286)
org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:286)
tachyon.hadoop.HdfsFileInputStream.read(HdfsFileInputStream.java:149)
java.io.DataInputStream.read(DataInputStream.java:83)
org.apache.hadoop.util.LineReader.readDefaultLine(LineReader.java:211)
org.apache.hadoop.util.LineReader.readLine(LineReader.java:174)
org.apache.hadoop.mapred.LineRecordReader.next(LineRecordReader.java:206)
org.apache.hadoop.mapred.LineRecordReader.next(LineRecordReader.java:45)
org.apache.spark.rdd.HadoopRDD$$anon$1.getNext(HadoopRDD.scala:201)
org.apache.spark.rdd.HadoopRDD$$anon$1.getNext(HadoopRDD.scala:184)
org.apache.spark.util.NextIterator.hasNext(NextIterator.scala:71)
org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:39)
scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
org.apache.spark.util.Utils$.getIteratorSize(Utils.scala:1014)
org.apache.spark.rdd.RDD$$anonfun$count$1.apply(RDD.scala:861)
org.apache.spark.rdd.RDD$$anonfun$count$1.apply(RDD.scala:861)
org.apache.spark.SparkContext$$anonfun$runJob$4.apply(SparkContext.scala:1083)
org.apache.spark.SparkContext$$anonfun$runJob$4.apply(SparkContext.scala:1083)
org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:111)
org.apache.spark.scheduler.Task.run(Task.scala:51)
org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:183)
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
java.lang.Thread.run(Thread.java:662)
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1033)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1031)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1031)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:635)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:635)
at scala.Option.foreach(Option.scala:236)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:635)
at org.apache.spark.scheduler.DAGSchedulerEventProcessActor$$anonfun$receive$2.applyOrElse(DAGScheduler.scala:1234)
at akka.actor.ActorCell.receiveMessage(ActorCell.scala:498)
at akka.actor.ActorCell.invoke(ActorCell.scala:456)
at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:237)
at akka.dispatch.Mailbox.run(Mailbox.scala:219)
at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:386)
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
I read the source code and found that the function getHDFSPath in tachyon/hadoop/Utils.java builds the HDFS path as UNDERFS_DATA_FOLDER + "/" + fileName, and that file does not exist in my HDFS. I think the HDFS path should instead be built from UNDERFS_DATA_FOLDER and the file id; the sketch below shows the difference using the paths from my log.
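To make the difference concrete, here is a rough sketch in Scala of the translation as I understand it. This is not the actual Java code from Utils.java; the helper names and shapes are mine, and the concrete paths and file id (47) come from my log above:

    // Rough sketch (not the real tachyon.hadoop.Utils code) of the
    // under-FS path translation, using the paths from the log above.
    object HdfsPathSketch {
      // What getHDFSPath appears to do in 0.5.0: append the file *name*
      // to the under-FS folder. On my cluster this resolves to
      // hdfs://ns1/sshcp, which does not exist in HDFS and triggers the
      // FileNotFoundException on every executor.
      def currentPath(underfsDataFolder: String, fileName: String): String =
        underfsDataFolder + "/" + fileName

      // What I think it should do: append the file *id*, because the
      // persisted data actually lives under the data folder keyed by id,
      // e.g. hdfs://ns1/user/jdmp/tachyon/data/47.
      def expectedPath(underfsDataFolder: String, fileId: Int): String =
        underfsDataFolder + "/" + fileId

      def main(args: Array[String]): Unit = {
        println(expectedPath("hdfs://ns1/user/jdmp/tachyon/data", 47))
        // -> hdfs://ns1/user/jdmp/tachyon/data/47 (the file that exists)
      }
    }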
(Forgive my poor English.)
Best regards,
jeanlyn