I haven't updated my master branch in about two to three weeks, and DistributedSuite, among others, now fails for me when I run "sbt test" from the root project.
[info] DistributedSuite:
[info] - task throws not serializable exception
[info] - local-cluster format
[info] - simple groupByKey
[info] - groupByKey where map output sizes exceed maxMbInFlight *** FAILED ***
[info] spark.SparkException: Job failed: Task 1.0:1 failed more than 4 times
[info] at spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:709)
[info] at spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:707)
[info] at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:60)
[info] at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
[info] at spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:707)
[info] at spark.scheduler.DAGScheduler.processEvent(DAGScheduler.scala:352)
[info] at spark.scheduler.DAGScheduler.spark$scheduler$DAGScheduler$$run(DAGScheduler.scala:414)
[info] at spark.scheduler.DAGScheduler$$anon$1.run(DAGScheduler.scala:132)
[info] ...
[info] - accumulators *** FAILED ***
[info] spark.SparkException: Job failed: Task 0.0:0 failed more than 4 times
[info] at spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:709)
[info] at spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:707)
[info] at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:60)
[info] at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
[info] at spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:707)
[info] at spark.scheduler.DAGScheduler.processEvent(DAGScheduler.scala:352)
[info] at spark.scheduler.DAGScheduler.spark$scheduler$DAGScheduler$$run(DAGScheduler.scala:414)
[info] at spark.scheduler.DAGScheduler$$anon$1.run(DAGScheduler.scala:132)
[info] ...
[info] - broadcast variables *** FAILED ***
[info] spark.SparkException: Job failed: Task 0.0:0 failed more than 4 times
[info] at spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:709)
[info] at spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:707)
[info] at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:60)
[info] at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
[info] at spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:707)
[info] at spark.scheduler.DAGScheduler.processEvent(DAGScheduler.scala:352)
[info] at spark.scheduler.DAGScheduler.spark$scheduler$DAGScheduler$$run(DAGScheduler.scala:414)
[info] at spark.scheduler.DAGScheduler$$anon$1.run(DAGScheduler.scala:132)
[info] ...
[info] - repeatedly failing task
[info] - caching
[info] - caching on disk
[info] - caching in memory, replicated
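All three failures look identical: the scheduler aborts the stage after a task fails more than 4 times on the local-cluster master. If I'm reading DistributedSuite right, the failing tests boil down to something like the sketch below (my paraphrase, so the exact master string, worker sizes, and partition counts may differ from what the suite actually uses):

    import spark.SparkContext
    import spark.SparkContext._   // for the implicit pair-RDD functions

    // Pseudo-cluster: 2 workers, 1 core each, 512 MB per worker.
    val sc = new SparkContext("local-cluster[2,1,512]", "test")
    val pairs = sc.parallelize(1 to 100, 10).map(x => (x % 10, x))
    pairs.groupByKey(5).count()   // this shuffle is where "failed more than 4 times" surfaces
    sc.stop()
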
[info] DriverSuite:
Exception in thread "main" java.lang.NoClassDefFoundError: spark/DriverWithoutCleanup
Caused by: java.lang.ClassNotFoundException: spark.DriverWithoutCleanup
at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
[info] - driver should exit after finishing *** FAILED ***
[info] SparkException was thrown during property evaluation. (DriverSuite.scala:35)
[info] Message: Process List(./run, spark.DriverWithoutCleanup, local) exited with code 1
[info] Occurred at table row 0 (zero based, not counting headings), which had values (
[info] master = local
I do see DriverWithoutCleanup defined in DriverSuite.scala, so I'm not sure why the class can't be found.
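For reference, the object looks roughly like this (quoting from memory, so details may be slightly off):

    import org.apache.log4j.{Level, Logger}
    import spark.SparkContext

    // Runs a trivial job and returns from main() without calling sc.stop(),
    // so the suite can check that the driver JVM still exits on its own.
    object DriverWithoutCleanup {
      def main(args: Array[String]) {
        Logger.getRootLogger().setLevel(Level.WARN)
        val sc = new SparkContext(args(0), "DriverWithoutCleanup")
        sc.parallelize(1 to 100, 4).count()
      }
    }

Given the ClassNotFoundException, could it be that the ./run script the test spawns doesn't have the test classes on its classpath? Just a guess.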