Hi,
It depends. If you're considering Spark, I assume your use case involves terabytes or petabytes of data and tens or hundreds of nodes to crunch it. If that's not the case, you can just use a simpler, custom solution. You may not even need Redis as an external dependency, since you can use ETS to store computations and rely on Erlang's native inter-node communication (see the sketch below). But you still have to solve problems like: how do you partition the data and distribute the work across your cluster? What is your recovery strategy in case of failure? Can you afford to partially lose data?
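For the simple case, something like this rough sketch might be all you need. It assumes the nodes are already connected (e.g. via Node.connect/1) and that a Task.Supervisor named MiniCluster.TaskSup is running on every node; the module name, the supervisor name, and the caching scheme are hypothetical, not a prescribed design:

```elixir
defmodule MiniCluster do
  # Public named ETS table to cache results on this node,
  # instead of reaching out to Redis.
  def init_cache do
    :ets.new(:results, [:set, :public, :named_table])
  end

  # Naive partitioning: round-robin items across all known nodes
  # and run `fun` on each item via the remote Task.Supervisor,
  # using the BEAM's built-in distribution.
  def map_across_nodes(items, fun) do
    nodes = [node() | Node.list()]

    items
    |> Enum.with_index()
    |> Enum.map(fn {item, i} ->
      target = Enum.at(nodes, rem(i, length(nodes)))
      Task.Supervisor.async({MiniCluster.TaskSup, target}, fn -> fun.(item) end)
    end)
    # Task.await/1 crashes the caller if a remote node dies,
    # which is exactly the recovery question you'd have to answer.
    |> Enum.map(&Task.await/1)
  end

  # Store and read back cached computations locally.
  def put(key, value), do: :ets.insert(:results, {key, value})

  def get(key) do
    case :ets.lookup(:results, key) do
      [{^key, value}] -> {:ok, value}
      [] -> :not_found
    end
  end
end
```

Note that this gives you none of Spark's fault tolerance: if a node goes down mid-computation, the awaiting process crashes and the partial work is lost. That's fine for some workloads and a deal-breaker for others, which is why those three questions matter more than the choice of tooling.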