Hello,
For the record, RDD stands for resilient distributed dataset. Resilient means that if one of your processes dies and loses data, that data will be recomputed automatically. From the driver program, an RDD looks like a simple collection, and Spark takes care of spreading the data and the computation out over multiple nodes.
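To make that concrete, here is a minimal sketch (the object and value names are mine, and in practice you would package this and run it with spark-submit):

    import org.apache.spark.{SparkConf, SparkContext}

    object WordLengths {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(new SparkConf().setAppName("word-lengths"))
        // From the driver, this reads like operations on a plain collection...
        val words = sc.parallelize(Seq("resilient", "distributed", "dataset"))
        // ...but map and reduce run on the cluster, and a lost partition
        // is recomputed from its lineage if an executor dies.
        val totalLength = words.map(_.length).reduce(_ + _)
        println(totalLength)
        sc.stop()
      }
    }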
Perhaps I am misunderstanding your use case. Spark is for cases where you have a large computation that you want to distribute and you don't care about the details of how it is distributed.
If, on the other hand, your use case is to perform the same maintenance task on every node, that is not a good fit for Spark.
I should also mention that Spark is picky about its dependencies. It is still built against Scala 2.10. It also ships its own build of Akka, which can lead to version conflicts if you use another build of Akka (say, the standard one) within the same app. For example, I once tried to use Spark and Play (which uses Akka) in the same app, got errors I could not resolve, and ended up splitting the project into two apps (one for Spark, one for Play) that communicate over the network.
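To illustrate the kind of setup that ran into trouble for me, a build.sbt along these lines is enough to pull in both Akkas (the version numbers are only illustrative, from the Spark 1.x / Play 2.3 era I was working with):

    scalaVersion := "2.10.4"

    libraryDependencies ++= Seq(
      "org.apache.spark"  %% "spark-core" % "1.2.0", // bundles Spark's own Akka build
      "com.typesafe.play" %% "play"       % "2.3.7"  // pulls in the standard Akka
    )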
Best, Oliver