Hi Chris,
Sorry for the delay in getting back to you on this. I do in fact have a suggestion that stems from our experience unit testing complex flows and cascades. For the most part we can effectively leverage existing Java tooling to build suites of tests, and we can practice good unit testing behaviours by composing our flows from modular assemblies and, of course, operations. These components are readily testable. However, when it comes to assemblies and larger-scale tests of flows and cascades, we find that our Cascading development short-circuits a fundamental piece of automated testing practice: the measurement and reporting of code coverage with tools such as Cobertura.
As it stands it is simple to attain 100% test coverage of assemblies and flows, because in the true Java sense we are simply exercising the construction logic and not the data processing logic that results from that construction. To truly measure test coverage in these cases, what we really need is a way to check that every vertex of the process's corresponding DAG has been exercised. I imagine this could be as simple as measuring whether or not a vertex (pipe) has transported one or more Tuples. At present this is a mental exercise: we can imagine the graph and devise appropriate test scenarios to attain full coverage. However, this is prone to error, especially if the actual DAG differs from the one we think we've constructed (human error).
As a solution, it'd be great if there were some generic hooks into our Flows, Cascades, and Assemblies onto which we could build tooling. I imagine such a tool would interrogate the DAG after a test execution and report the names of pipes that did not transport any Tuples.
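To make the idea concrete, here's a minimal sketch of what the reporting half of such a tool might look like, assuming the hypothetical hooks existed and could hand us a per-pipe Tuple count after a test run. The class and method names are illustrative only; nothing here is actual Cascading API.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of a post-run DAG coverage check. Assumes the
// test harness can supply, after a Flow completes, a count of Tuples
// transported by each pipe (each vertex of the DAG).
public class DagCoverage {

    // Returns the names of pipes that transported no Tuples during
    // the test run, i.e. the unexercised vertices of the DAG.
    public static List<String> untouchedPipes(Map<String, Long> tupleCountsByPipe) {
        List<String> untouched = new ArrayList<>();
        for (Map.Entry<String, Long> entry : tupleCountsByPipe.entrySet()) {
            if (entry.getValue() == 0L) {
                untouched.add(entry.getKey());
            }
        }
        Collections.sort(untouched); // deterministic ordering for reporting
        return untouched;
    }
}
```

A test suite could then simply assert that `untouchedPipes(...)` is empty after its scenarios have run, turning the mental exercise into an automated check.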
I'd be keen to hear your thoughts on this.
Cheers - Elliot.