Dealing with findCounterExample method


Simone Agostinelli

Feb 13, 2020, 11:35:57 AM2/13/20
to LearnLib Q&A
Dear all,

I'm having trouble refining the findCounterExample(A hypothesis, Collection<? extends I> inputs) method of the DFAWMethodEQOracle class, which returns an object of type DefaultQuery.
The problem is that I don't understand how to build an object of type DefaultQuery from the two parameters this method takes as input.

Does anyone have an example to share with me?

Thank you in advance,
Simone Agostinelli

Markus Frohme

Feb 13, 2020, 6:44:09 PM2/13/20
to Simone Agostinelli, LearnLib Q&A
Hi Simone,


the basic idea is that this method receives the current hypothesis of type A (which is whatever type you decide to bind the generic to, e.g. DFA<?, I> if you want to implement an equivalence oracle for DFAs) and a set of inputs that should be used for analyzing the hypothesis.

And what you are expected to return is a new DefaultQuery<>(input, output) which states something like "If I test 'input' on the system under learning I observe 'output' which is different from what the hypothesis tells me" or null if you deem the hypothesis equivalent to your system under learning.

A very simple example can be seen in the SampleSetEQOracle [0]. With this oracle, you can configure static queries beforehand. When used for an equivalence test, the oracle just compares the expected output of the previously configured queries with the current hypothesis output (since we only care about output behavior, the automaton type A is bound to the more generic SuffixOutput type).

Depending on how you can determine inequivalence in your setup, you would have to construct a sequence of input symbols that exposes the differing behavior and then provide the corresponding DefaultQuery. Alternatively, if you have some code snippets you could share, we could look into providing the glue code.
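
For illustration, a minimal self-contained EQ oracle along those lines could look roughly like this (just a sketch; the class name SampleBasedDFAEQOracle and the sample map are illustrative, and the imports assume LearnLib 0.13-style package names):

```java
import java.util.Collection;
import java.util.Map;

import de.learnlib.api.oracle.EquivalenceOracle;
import de.learnlib.api.query.DefaultQuery;
import net.automatalib.automata.fsa.DFA;
import net.automatalib.words.Word;

/** Checks the hypothesis against a fixed map of sample words and their expected acceptance. */
public class SampleBasedDFAEQOracle<I> implements EquivalenceOracle.DFAEquivalenceOracle<I> {

    private final Map<Word<I>, Boolean> samples; // test word -> expected acceptance on the SUL

    public SampleBasedDFAEQOracle(Map<Word<I>, Boolean> samples) {
        this.samples = samples;
    }

    @Override
    public DefaultQuery<I, Boolean> findCounterExample(DFA<?, I> hypothesis,
                                                       Collection<? extends I> inputs) {
        for (Map.Entry<Word<I>, Boolean> e : samples.entrySet()) {
            boolean expected = e.getValue();
            // the hypothesis disagrees with the expected SUL output -> counterexample
            if (hypothesis.accepts(e.getKey()) != expected) {
                return new DefaultQuery<>(e.getKey(), expected);
            }
        }
        return null; // no disagreement found -> consider the hypothesis equivalent
    }
}
```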


Kind regards,
Markus


[0] - https://github.com/LearnLib/learnlib/blob/learnlib-0.13.0/oracles/equivalence-oracles/src/main/java/de/learnlib/oracle/equivalence/SampleSetEQOracle.java#L158



Simone Agostinelli

Feb 17, 2020, 6:24:31 AM2/17/20
to LearnLib Q&A
Dear Markus,

Thank you for the directions ;)

I want to share with you the skeleton of my code and the basic idea of what my code should do:

https://pastebin.com/cuzQTthq
The SUL works in the following way: it (i) accepts all the positive inputs and (ii) rejects all the negative ones. The goal is to build an automaton that captures and models this behavior. The problem I have with the code is that I can't find a way to programmatically capture all the negative inputs in the automaton, that is, I am not able to find a method that allows me to model the rejection of the negative inputs. With the findCounterExample() method I am able to model the positive inputs but not the negative ones. That's my main problem :-(

Kind regards,
Simone Agostinelli

Markus Frohme

Feb 18, 2020, 3:17:47 AM2/18/20
to Simone Agostinelli, LearnLib Q&A
Dear Simone,


a few things:

* are your negative and positive "inputs" complete input words that should
be accepted/rejected or individual input symbols, such that a word should
e.g. be rejected if it contains "bad" symbols? If the former, wouldn't the
SampleSetEQOracle from before already be the type of EQ oracle you need?
If the latter, what happens if test sequences contain both good and bad
symbols?

* counterexamples (DefaultQueries) can hold the expected output of the
query. So in the case of acceptor-based systems (DFAs) you can provide a
boolean value to describe either a positive (should be accepted) or
negative (should be rejected) counterexample (see the short sketch after
this list). What would help here is using Java's generic types so that you
get better compiler support.

* I think in your case it would be easier to just implement the EQ oracle
interface rather than trying to extend the WMethod oracle. After
delegating the counterexample search to super.findCounterExample you are
already past the point of testing, which is exactly what you want to
override. However, you can simply use the same (internal) methods of the
WMethod oracle to generate the W-method test sequences.

* Be careful not to let the MQ oracle and EQ oracle go out of sync.
Altering only the equivalence check and e.g. returning negative
counterexamples if they contain "bad" symbols can be problematic if the
normal MQ oracle still accepts them. You would then have a
non-output-deterministic system (MQ says 'yes', EQ says 'no'), which is
forbidden for the current algorithms in LearnLib. Maybe a "filter" for
your MQ oracle that disables certain "bad" input symbols (rejects them),
combined with the regular EQ oracles, would be the better approach?
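
Regarding the second point, constructing such counterexamples for a DFA could look roughly like this (a sketch; DefaultQuery is de.learnlib.api.query.DefaultQuery, Word is net.automatalib.words.Word, and the Activity_* symbols are only placeholders for your alphabet):

```java
// "this word should be accepted, but the current hypothesis rejects it"
DefaultQuery<String, Boolean> positiveCe =
        new DefaultQuery<>(Word.fromSymbols("Activity_A", "Activity_B"), true);

// "this word should be rejected, but the current hypothesis accepts it"
DefaultQuery<String, Boolean> negativeCe =
        new DefaultQuery<>(Word.fromSymbols("Activity_A", "Activity_X"), false);
```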

Anyway, I hacked together a few lines [0] which hopefully do what I
understood from your scenario. At least they should give you a starting
point for your own equivalence check.


Kind regards,
Markus

[0] - https://gist.github.com/mtf90/6806e3bf210b21bb285b2bab34f44bc4


Simone Agostinelli

Feb 20, 2020, 7:11:06 AM2/20/20
to LearnLib Q&A
Hi Markus,

My negative and positive inputs are complete input words that should be either accepted or rejected.

Thank you for providing me with a starting point for the equivalence check! I'm now able to compute (at least) the output, but I obtain a DFA with a single state that accepts all the input words (which is wrong, because I want to reject the negative ones), as shown in the following capture:

[attached screenshot: capture.PNG]


I don't understand why the learned DFA is the one depicted in the figure above. Have you ever seen this type of issue?

To help you better understand the problem, I attach screenshots of both the membership query and the equivalence query.

This is my Membership Query:

[attached screenshot: MQ.PNG]




And this is my Equivalence Query:

[attached screenshot: EQ.PNG]


Thank you very much for the support! 

Best,
Simone Agostinelli

Markus Frohme

Feb 20, 2020, 2:50:46 PM2/20/20
to Simone Agostinelli, LearnLib Q&A
Hi Simone,


the automaton looks like an initial hypothesis of a system where a lot of "short" words are accepted. Since most learning algorithms only check for one-letter extensions, this is an expected pattern if your initial state (empty word) and every single Activity_* input is accepted. From a learner's perspective it looks like every (queried) word is accepted, hence the model is the smallest generalization of this behavior.

This is often an issue with insufficient counterexample search. Your EQ oracle looks fine, but what value did you choose for the "maxDepth" parameter? It could be that the WMethod doesn't generate enough test words: a single state can't be separated from any other, and if the "maxDepth" value is too low, the generated test words don't explore enough between the transition cover and the characterizing part of the test word.

Also, what does your main learning loop look like (i.e. where you pass the discovered counterexamples to the learner for hypothesis refinement)? Your hypothesis may also be the result of not correctly using the counterexample to refine the hypothesis and basically terminating after the initial hypothesis.


Kind regards,
Markus



Simone Agostinelli

Feb 25, 2020, 10:07:34 AM2/25/20
to LearnLib Q&A
Dear Markus,

I chose 3 as maxDepth because with 4 the program gets stuck (I have already tried). I have also tried to reduce the number of positive/negative inputs in order to increase the maxDepth parameter, but the result is always the same: an automaton with one state accepting all the words.

Below I attach the code of the main loop of active learning:

[attached screenshot: Cattura.PNG]

where lstar is an object of type ClassicLStarDFA<String> and inputs is an object of type Alphabet<String>.

Could the problem be related to how I define the experiment?

Best regards,
Simone

Markus Frohme

Feb 26, 2020, 2:55:39 AM2/26/20
to Simone Agostinelli, LearnLib Q&A
Hi Simone,


you are using the Experiment class for the main learning loop, so
everything should be fine in that regard.
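
For reference, the standard pattern looks roughly like this (a sketch only; mqOracle and eqOracle are placeholder names for your membership and equivalence oracles, and package names assume LearnLib 0.13):

```java
import de.learnlib.algorithms.lstar.dfa.ClassicLStarDFA;
import de.learnlib.api.oracle.EquivalenceOracle.DFAEquivalenceOracle;
import de.learnlib.api.oracle.MembershipOracle;
import de.learnlib.util.Experiment.DFAExperiment;
import net.automatalib.automata.fsa.DFA;
import net.automatalib.words.Alphabet;

static DFA<?, String> learn(Alphabet<String> inputs,
                            MembershipOracle<String, Boolean> mqOracle,
                            DFAEquivalenceOracle<String> eqOracle) {
    ClassicLStarDFA<String> lstar = new ClassicLStarDFA<>(inputs, mqOracle);

    // the Experiment class alternates learning rounds and equivalence checks
    // until the EQ oracle no longer finds a counterexample
    DFAExperiment<String> experiment = new DFAExperiment<>(lstar, eqOracle, inputs);
    experiment.run();

    return experiment.getFinalHypothesis();
}
```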


An interesting observation is the fact that your program "gets stuck" (I
assume it doesn't crash but just continues running without feedback?).

The maxDepth parameter affects the number of generated test words
exponentially. So for your ~15 input symbols, it makes sense that going
from 15^3 (roughly 3,400 words) to 15^4 (roughly 50,600 words) takes
significantly longer. However, the program is running, so it generates all
these test words, compares the hypothesis output to the SUL output and
doesn't find a counterexample. This tells me that the hypothesis actually
behaves according to the SUL, at least for the generated test words.

So the next interesting part would be to look at the membership oracle,
i.e. what positive and negative sets of words do you use to answer
queries? At this point I would probably just try to debug the learning
process, i.e. setting a breakpoint after the test-word has been generated
and simply checking what (and why) the SUL and the hypothesis generate as
answers. Are you comfortable with this?

Alternatively, you could also just put some System.out.println()s and log
things like the hypothesis size, the test-word, the SUL's response, the
hypothesis' response, etc. to see if anything unexpected pops up.
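
For instance, a small helper along these lines could print the relevant information for each generated test word (a sketch; logTestWord is just an illustrative name, and DFA, MembershipOracle and Word are the usual LearnLib/AutomataLib types):

```java
// log the hypothesis size and both responses for one generated test word
static void logTestWord(DFA<?, String> hypothesis,
                        MembershipOracle<String, Boolean> sulOracle,
                        Word<String> testWord) {
    boolean sulAnswer = sulOracle.answerQuery(testWord); // what the SUL says
    boolean hypAnswer = hypothesis.accepts(testWord);    // what the hypothesis says
    System.out.println("states=" + hypothesis.size()
            + ", word=" + testWord
            + ", sul=" + sulAnswer
            + ", hyp=" + hypAnswer);
}
```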


Kind regards,
Markus




Simone Agostinelli

Feb 26, 2020, 9:17:29 AM2/26/20
to LearnLib Q&A
Dear Markus,

Thank you for the suggested directions. I will try all of them! I'll keep you updated, thanks again.

--Simone