Computing ADS

Carlos Diego Nascimento Damasceno

May 30, 2019, 11:51:58 AM
to LearnLib Q&A
Dear members,

I've seen that LearnLib (more specifically, AutomataLib) has support for computing Adaptive Distinguishing Sequences [Code <https://github.com/LearnLib/automatalib/blob/659caa7f16ef4292cf68387f706c8e8581d7fc5a/util/src/main/java/net/automatalib/util/automata/ads/LeeYannakakis.java>, JavaDocs <http://learnlib.github.io/automatalib/maven-site/0.7.0/apidocs/net/automatalib/util/automata/ads/LeeYannakakis.html>].
Is this the same version used in the paper Applying Automata Learning to Embedded Control Software <https://link.springer.com/chapter/10.1007/978-3-319-25423-4_5>, where the ESM model (77 inputs and 3,410 states) was built?
Does it build incomplete ADS trees?


--
Regards, 

Carlos Diego N. Damasceno, MSc
PhD candidate @ ICMC - USP

Carlos Diego Nascimento Damasceno

May 30, 2019, 11:54:36 AM
to LearnLib Q&A
I'm asking this because I would like to run some experiments with ADS regardless of whether they can fully distinguish all states.


--
Regards, 

Carlos Diego N. Damasceno, MSc
PhD candidate @ ICMC - USP

Markus Frohme

May 30, 2019, 5:24:47 PM
to Carlos Diego Nascimento Damasceno, LearnLib Q&A
Hi Diego,


No, it's not the same version. (It's actually from my master's thesis code, which was developed around the same time as the paper.)

In AutomataLib, there is the `ADS` class [0], which is a general-purpose entry point to the various ADS computation algorithms (a short usage sketch follows the list):

* `LeeYannakakis` is the Lee & Yannakakis algorithm for complete automata. If no ADS exists, it returns a set of indistinguishable states.
* `BacktrackingSearch` uses the successor tree of Arthur Gill [1] to compute ADSs for arbitrary subsets of states. There is a "best-effort" `compute` method (trading optimality for runtime) and a `computeOptimal` method for a smallest ADS (either in the number of states or in depth).
* `StateEquivalence` solves the basic state-equivalence problem.
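
Roughly, calling the subset variant looks like this. This is just a minimal sketch going by the linked JavaDocs: I'm assuming the `compute(automaton, inputs, states)` overload and 0.8-era package paths, which moved around a bit between versions.

import java.util.Optional;
import java.util.Set;

import net.automatalib.automata.transducers.MealyMachine;
import net.automatalib.graphs.ads.ADSNode;
import net.automatalib.util.automata.ads.ADS;
import net.automatalib.words.Alphabet;

public final class AdsDemo {

    // Ask the general-purpose entry point for an ADS over a subset of states.
    // An empty Optional means no complete ADS exists for exactly these states.
    static <S, I, O> void printAds(MealyMachine<S, I, ?, O> mealy,
                                   Alphabet<I> inputs,
                                   Set<S> states) {
        Optional<ADSNode<S, I, O>> ads = ADS.compute(mealy, inputs, states);
        System.out.println(ads.isPresent()
                ? "Found an ADS for " + states
                : "No ADS exists for " + states);
    }
}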


Kind regards,
Markus


[0] - https://github.com/LearnLib/automatalib/blob/develop/util/src/main/java/net/automatalib/util/automata/ads/ADS.java
[1] - State-Identification Experiments in Finite Automata, Arthur Gill


Carlos Diego Nascimento Damasceno

May 31, 2019, 8:26:25 AM
to Markus Frohme, LearnLib Q&A
Hi Markus, 

Thanks for the clarification.

I haven't dug into the ADS code yet, but I've seen that your implementations return `Optional.empty()` if there is no complete adaptive experiment.
In my case, I would like to run some experiments with incomplete ADSs (or, as they say <https://link.springer.com/chapter/10.1007/978-3-319-25423-4_5>, trees that ‘almost’ identify states).
Thus, I would like to implement an ADS algorithm that returns, instead of `Optional.empty()`, an ADS regardless of the completeness of the adaptive experiment.

Do you have any recommendations on how to implement an extended ADS like this?
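
To illustrate, something like the following hypothetical shape is what I have in mind. These names are placeholders of my own, not existing AutomataLib API:

import java.util.Set;

import net.automatalib.graphs.ads.ADSNode;

// Hypothetical sketch, not AutomataLib API: instead of Optional.empty(),
// an "extended" computation would return the best partial ADS it found
// together with the states that the tree fails to separate.
public interface PartialAds<S, I, O> {

    // Root of the (possibly incomplete) adaptive experiment.
    ADSNode<S, I, O> tree();

    // States the returned tree cannot distinguish from one another.
    Set<S> unresolved();
}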

Markus Frohme

Jun 3, 2019, 2:28:57 PM
to Carlos Diego Nascimento Damasceno, LearnLib Q&A
The `LeeYannakakis` code is pretty much a straightforward adaptation of the paper. `BacktrackingSearch` is a mixed breadth-/depth-first traversal of the successor tree (it should be intuitive if you skim the Gill paper).

What I like about the current binary yes/no decision on whether an ADS exists is that you can easily detect when to stop exploring a branch of the ADS. When you want to compute a "good enough" ADS and encounter a pair of states that is not distinguishable, you always have to ask yourself: "Do I continue the current ADS trace, or should I change an input symbol further up the tree?" To me, this introduces a kind of optimization problem, because I assume you essentially want to compute the biggest subset of states for which an ADS exists. This will probably make the computation more complex and slower.

A poor man's approach for your task with the current implementation may be to do the sub-sampling outside the ADS computation yourself. Say you want an ADS for the states {s1, s2, s3} and it doesn't exist. Then you can check the subsets {s1, s2}, {s1, s3}, and {s2, s3} to see if an ADS exists for any of them; a sketch of this idea follows.
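
To make that concrete, here is a sketch of that subset search, assuming the same `ADS.compute(automaton, inputs, states)` entry point and package paths as in the earlier sketch. It walks the subsets in level order, largest first, so the first hit is an ADS for a maximum-size distinguishable subset; note that this is exponential in the number of states, so it is only feasible for small sets.

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashSet;
import java.util.Optional;
import java.util.Set;

import net.automatalib.automata.transducers.MealyMachine;
import net.automatalib.graphs.ads.ADSNode;
import net.automatalib.util.automata.ads.ADS;
import net.automatalib.words.Alphabet;

public final class AdsSubsetSearch {

    // Level-order search over subsets of 'states', largest first: if no ADS
    // exists for the full set, try every subset with one state removed, then
    // two, and so on. Worst case exponential in |states|.
    public static <S, I, O> Optional<ADSNode<S, I, O>> bestSubsetAds(
            MealyMachine<S, I, ?, O> mealy, Alphabet<I> inputs, Set<S> states) {

        Deque<Set<S>> queue = new ArrayDeque<>();
        Set<Set<S>> seen = new HashSet<>();
        queue.add(new HashSet<>(states));

        while (!queue.isEmpty()) {
            Set<S> current = queue.poll();
            if (current.size() < 2 || !seen.add(current)) {
                continue; // singletons need no ADS; skip already-checked subsets
            }
            Optional<ADSNode<S, I, O>> ads = ADS.compute(mealy, inputs, current);
            if (ads.isPresent()) {
                return ads; // largest subset (by size) that admits an ADS
            }
            // No ADS here: enqueue all subsets with one more state removed.
            for (S s : current) {
                Set<S> smaller = new HashSet<>(current);
                smaller.remove(s);
                queue.add(smaller);
            }
        }
        return Optional.empty();
    }
}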
