Ruffus seems to have a slightly different feature set from Luigi.
Ruffus is aimed squarely at scalability: running many operations (e.g. in bioinformatics) in parallel.
Ruffus manages task dependencies like Luigi, though unlike Luigi, no one has yet contributed (!) a runtime workflow-graph GUI visualization for Ruffus. (You do get static views of the workflow / pipeline.)
However, a key part of the design for Ruffus is that each of these dependent tasks can comprise multiple parallel operations, which have their own dependencies and can be merged together or split up and then transformed in multiple steps.
So for example, in bioinformatics, you might need to (1) split up a fastq file into small chunks, (2) run bwa to align them onto reference genome(s), (3) run stampy to refine the alignment, (4) merge all these alignments back together and (5) sort and compress to give a bam file. Each of these steps would be a separate task, with hundreds of component files running in parallel. Many of these components may fail, but that should not cause the whole pipeline to be rerun.
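To make the shape of that pipeline concrete, here is a rough, hypothetical sketch of the split -> parallel transform -> merge pattern using only the Python standard library (deliberately not the Ruffus API itself, so it stands alone): the "reads", the trivial `align` function, and the chunk size are all stand-ins for real fastq chunks and bwa/stampy runs.

```python
# Hypothetical sketch of the split -> parallel -> merge pattern described
# above. Names and the toy "alignment" are illustrative placeholders only.
from concurrent.futures import ProcessPoolExecutor


def split(reads, chunk_size):
    # Step 1: split the input into small chunks (stand-in for splitting a fastq file).
    return [reads[i:i + chunk_size] for i in range(0, len(reads), chunk_size)]


def align(chunk):
    # Steps 2-3: process one chunk independently (stand-in for bwa + stampy);
    # each chunk is a separate unit of work that can fail and be retried alone.
    return sorted(chunk)


def merge_and_sort(chunks):
    # Steps 4-5: merge all per-chunk results and sort the whole lot
    # (stand-in for merging alignments and producing a sorted bam).
    return sorted(read for chunk in chunks for read in chunk)


if __name__ == "__main__":
    reads = ["read%03d" % i for i in range(100, 0, -1)]
    chunks = split(reads, 10)
    # Each chunk is processed in parallel; a failure affects only that chunk.
    with ProcessPoolExecutor() as pool:
        aligned = list(pool.map(align, chunks))
    result = merge_and_sort(aligned)
```

In a real Ruffus pipeline each of those functions would be a decorated task, and Ruffus would track which chunks are up to date so only failed or stale pieces are rerun.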
My understanding is that these [single input -> parallel -> single output] operations would be combined into one Hadoop MapReduce task in Luigi. This makes some parts of the pipeline look simpler (all these complicated operations get hidden as a single task), but you still need to manage the underlying dependencies, complexities, and failures one way or another.
The other part of the Ruffus design is that I am very wary of monolithic systems. I try to ensure that Ruffus plays nicely with other libraries and setups. So I try to make sure that bioinformatics groups can use Ruffus on a shared computation cluster without having to take over job scheduling, worry about whether Hadoop needs to be installed and supported, etc. This is a design philosophy and has both pros and cons (and it obviously makes less sense for a single big company like Spotify, where one team can make IT decisions for the whole company).
Leo