Did this issue ever get resolved? I'm running into the same problem, though my setup differs slightly:
When I open spark-shell, I can load a file from HDFS via Tachyon without any problems (val s = sc.textFile("tachyon://<ip>:<port>/filepath"), where filepath is the path in HDFS; I did not use loadufs or anything similar to load HDFS into Tachyon beforehand).
However, when I'm done and want to save something back to Tachyon, I can't, and I hit exactly the same errors as outlined above:
15/07/29 17:16:15 INFO : create(tachyon://n1:19998/out/_temporary/0/_temporary/attempt_201507291716_0001_m_000001_3/part-00001, rw-r--r--, true, 65536, 1, 536870912, org.apache.hadoop.mapred.Reporter$1@20a707d5)
15/07/29 17:16:15 INFO : create(tachyon://n1:19998/out/_temporary/0/_temporary/attempt_201507291716_0001_m_000000_2/part-00000, rw-r--r--, true, 65536, 1, 536870912, org.apache.hadoop.mapred.Reporter$1@20a707d5)
15/07/29 17:16:15 INFO : /mnt/ramdisk/tachyonworker/users/2/179314884608 was created!
15/07/29 17:16:15 INFO : /mnt/ramdisk/tachyonworker/users/2/181462368256 was created!
15/07/29 17:16:15 ERROR Executor: Exception in task 0.0 in stage 1.0 (TID 2)
java.io.IOException: FailedToCheckpointException(message:Failed to rename hdfs://n1:9000//tmp/tachyon/workers/1438215000001/2/169 to hdfs://n1:9000//tmp/tachyon/data/169)
at tachyon.worker.WorkerClient.addCheckpoint(WorkerClient.java:116)
at tachyon.client.TachyonFS.addCheckpoint(TachyonFS.java:183)
at tachyon.client.FileOutStream.close(FileOutStream.java:104)
at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:106)
at org.apache.hadoop.mapred.TextOutputFormat$LineRecordWriter.close(TextOutputFormat.java:108)
at org.apache.spark.SparkHadoopWriter.close(SparkHadoopWriter.scala:102)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$13$$anonfun$apply$7.apply$mcV$sp(PairRDDFunctions.scala:1117)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1294)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$13.apply(PairRDDFunctions.scala:1116)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$13.apply(PairRDDFunctions.scala:1095)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:63)
at org.apache.spark.scheduler.Task.run(Task.scala:70)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
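For reference, here is roughly what the spark-shell session looks like (the master address n1:19998 and the /out path are taken from the log above; the input path and the map step are placeholders, not my actual job):

```scala
// Reading from HDFS through Tachyon works fine:
val s = sc.textFile("tachyon://n1:19998/filepath")

// Writing back is what fails. saveAsTextFile goes through Hadoop's
// TextOutputFormat, and the task dies in tachyon.client.FileOutStream.close()
// when the worker tries to checkpoint the block into the HDFS
// under-filesystem (the FailedToCheckpointException rename above).
s.map(_.toUpperCase).saveAsTextFile("tachyon://n1:19998/out")
```

So the read path works end to end, but any write that forces a checkpoint to the underlying HDFS fails with the rename error.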