The LeeYannakakis code is pretty much a straightforward adaptation of the paper. The BacktrackingSearch is a mixed breadth-/depth-first traversal of the successor tree (it should be intuitive if you skim the Gill paper).
What I like about the current binary yes/no decision of whether an ADS exists is that you can easily detect when to stop exploring a branch of the ADS. Once you want to compute a "good enough" ADS instead and you encounter a pair of states that is not distinguishable, you always have to ask yourself: "Do I continue the current ADS trace, or should I change an input symbol further up the tree?" To me, this introduces an optimization problem, because I assume you essentially want to compute the largest subset of states for which an ADS exists. This will probably make the computation more complex and slower.
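To make that concrete, here is a rough, self-contained sketch of such a yes/no check (all names are made up for illustration and the Mealy machine is encoded as plain maps, so this is explicitly *not* the LeeYannakakis/BacktrackingSearch code). It descends depth-first over the input choices and splits the current set of states breadth-first by their outputs; the moment two states with the same output converge on the same successor, the whole branch is pruned:

  import java.util.HashMap;
  import java.util.HashSet;
  import java.util.Map;
  import java.util.Set;

  final class AdsSketch {

      /**
       * Naive check whether an ADS exists for all states in 'current'.
       * The Mealy machine is given as plain maps:
       *   delta:  state -> (input -> successor state)
       *   lambda: state -> (input -> output)
       * 'budget' bounds the remaining tree depth so the search terminates.
       */
      static <S, I, O> boolean hasAds(Map<S, Map<I, S>> delta,
                                      Map<S, Map<I, O>> lambda,
                                      Set<I> inputs,
                                      Set<S> current,
                                      int budget) {
          if (current.size() <= 1) {
              return true; // a singleton is trivially identified
          }
          if (budget <= 0) {
              return false; // depth budget exhausted, give up on this branch
          }
          for (I i : inputs) { // depth: commit to one input symbol
              // breadth: group the successors by the output observed on i
              Map<O, Set<S>> blocks = new HashMap<>();
              boolean valid = true;
              for (S s : current) {
                  O out = lambda.get(s).get(i);
                  Set<S> block = blocks.computeIfAbsent(out, k -> new HashSet<>());
                  // two states with the same output converge on the same
                  // successor -> indistinguishable from here on, so i is
                  // invalid for the whole set and we can stop early
                  if (!block.add(delta.get(s).get(i))) {
                      valid = false;
                      break;
                  }
              }
              if (!valid) {
                  continue; // try the next input symbol
              }
              // every output block has to admit an ADS on its own
              boolean allBlocks = true;
              for (Set<S> block : blocks.values()) {
                  if (!hasAds(delta, lambda, inputs, block, budget - 1)) {
                      allBlocks = false;
                      break;
                  }
              }
              if (allBlocks) {
                  return true;
              }
          }
          return false;
      }
  }

The 'budget' parameter only exists to keep the naive search finite; if I remember the Lee/Yannakakis paper correctly, whenever an ADS exists at all there is one of depth at most n(n-1)/2, so that is what the sketch further below passes in.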
A poor man's approach for your task with the current implementation may be to do the sub-sampling outside the ADS computation yourself. Say you want an ADS for the states {s1, s2, s3} and it doesn't exist. Then you can check the subsets {s1, s2}, {s1, s3}, and {s2, s3} to see if an ADS exists for any of them.
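Reusing the hypothetical hasAds sketch from above, that sub-sampling could look roughly like this (again just an illustration; it enumerates O(2^n) subsets in the worst case, so it is only feasible for small state sets):

  import java.util.ArrayList;
  import java.util.HashSet;
  import java.util.LinkedHashSet;
  import java.util.List;
  import java.util.Map;
  import java.util.Set;

  final class SubsetSketch {

      /** All subsets of 'states' that leave out exactly one state. */
      static <S> List<Set<S>> oneSmaller(Set<S> states) {
          List<Set<S>> result = new ArrayList<>();
          for (S leftOut : states) {
              Set<S> subset = new HashSet<>(states);
              subset.remove(leftOut);
              result.add(subset);
          }
          return result;
      }

      /**
       * Poor man's sub-sampling: if no ADS exists for the full set, retry
       * with all subsets that leave out one state, then two, and so on.
       * Returns a largest subset that admits an ADS (singletons always do).
       */
      static <S, I, O> Set<S> largestAdsSubset(Map<S, Map<I, S>> delta,
                                               Map<S, Map<I, O>> lambda,
                                               Set<I> inputs,
                                               Set<S> states) {
          Set<Set<S>> level = Set.of(states);
          while (!level.isEmpty()) {
              Set<Set<S>> next = new LinkedHashSet<>(); // dedupes subsets
              for (Set<S> candidate : level) {
                  int n = candidate.size();
                  if (AdsSketch.hasAds(delta, lambda, inputs, candidate,
                                       n * (n - 1) / 2)) {
                      return candidate; // all sets on a level have the same
                                        // size, so the first hit is largest
                  }
                  next.addAll(oneSmaller(candidate));
              }
              level = next;
          }
          return Set.of(); // unreachable: singleton sets always succeed
      }
  }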
On Fri, May 31, 2019 at 01:26:12PM +0100, Carlos Diego Nascimento Damasceno wrote:
> Hi Markus,
>
> Thanks for the clarification.
>
> I didn't dig into the ADS code yet, but I've seen that your implementations
> return *Optional.empty()* if there is no complete adaptive experiment.
> In my case, I would like to run some experiments with incomplete ADS (or, as
> they say <https://link.springer.com/chapter/10.1007/978-3-319-25423-4_5>,