EQ NFA


Alex F.

Dec 24, 2020, 4:31:14 AM
to LearnLib Q&A
Hi everyone,
I'm working on learning an NFA. In particular, I have a set of correct words and a set of wrong words; correctness is determined by a regular expression.
I implemented my MQ (membership query) oracle, but now I don't know how to implement an EQ (equivalence query) oracle. Can you help me?
Attached is my current code for the MQ oracle.

Thanks 
MBOracle.java
Active_Learning.java

Markus Frohme

Dec 24, 2020, 9:04:27 AM
to Alex F., LearnLib Q&A
Hi Alex,


if you have a fixed set of traces you want to check your hypothesis against, you can use the SampleSetEQOracle. After construction, you simply add the test words and their expected output.

I wrote a few lines that should give you the basic idea [0]. Note that visualizing the hypothesis may take quite some time (especially with DOT) because the model has quite a lot of transitions.


Kind regards,
Markus


[0] - https://gist.github.com/mtf90/0a4b9435f46b61ab90db12de962840f8
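To sketch the underlying idea without any library dependencies (the class and method names below are just illustrative, not LearnLib's API): such an oracle checks the hypothesis against each stored sample and reports the first disagreement as a counterexample.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.Predicate;

// Minimal sketch of a sample-set equivalence check: the hypothesis is
// queried on a fixed set of words, and the first word whose output
// disagrees with the expected value is returned as a counterexample.
public class SampleSetCheck {

    // Returns a counterexample word, or null if all samples agree.
    static String findCounterexample(Predicate<String> hypothesis,
                                     Map<String, Boolean> samples) {
        for (Map.Entry<String, Boolean> e : samples.entrySet()) {
            if (hypothesis.test(e.getKey()) != e.getValue()) {
                return e.getKey();
            }
        }
        return null;
    }

    public static void main(String[] args) {
        // Toy hypothesis: accept exactly the words containing "ab".
        Predicate<String> hypothesis = w -> w.contains("ab");

        Map<String, Boolean> samples = new LinkedHashMap<>();
        samples.put("acab", true);  // hypothesis agrees
        samples.put("ba", false);   // hypothesis agrees
        samples.put("cde", true);   // hypothesis disagrees -> counterexample

        System.out.println(findCounterexample(hypothesis, samples)); // prints "cde"
    }
}
```

The real SampleSetEQOracle additionally lets you derive the expected outputs from a membership oracle instead of listing them by hand.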

> import de.learnlib.api.oracle.SingleQueryOracle;
> import net.automatalib.words.Word;
>
> import java.util.Collection;
>
> public class MBOracle<I> implements SingleQueryOracle<I, Boolean> {
>
>     private final Collection<Word<I>> positive;
>     private final Collection<Word<I>> negative;
>
>     public MBOracle(Collection<Word<I>> positive, Collection<Word<I>> negative) {
>         // TODO: Check disjointness of positive and negative
>         this.positive = positive;
>         this.negative = negative;
>     }
>
>     @Override
>     public Boolean answerQuery(Word<I> prefix, Word<I> suffix) {
>         // A word is accepted unless it appears in the negative sample set.
>         Word<I> input = prefix.concat(suffix);
>         return !negative.contains(input);
>     }
> }
>

> import java.io.IOException;
> import java.util.*;
>
> import de.learnlib.algorithms.nlstar.NLStarLearner;
> import de.learnlib.algorithms.nlstar.NLStarLearnerBuilder;
> import de.learnlib.filter.statistic.oracle.CounterOracle;
> import net.automatalib.words.Alphabet;
> import net.automatalib.words.Word;
> import net.automatalib.words.impl.ListAlphabet;
>
> public class Active_Learning {
>
>     private static final int EXPLORATION_DEPTH = 3;
>
>     private static List<Character> getActions() {
>         List<Character> actions = new LinkedList<>();
>         for (int i = 0; i < 7; i++) {
>             actions.add((char) ('a' + i));
>         }
>         return actions;
>     }
>
>     public static void main(String[] args) throws IOException {
>         System.out.println("Starting...");
>
>         Alphabet<Character> alphabet = new ListAlphabet<>(getActions()); // 'f' not used
>
>         Collection<Word<Character>> positive = getPositiveExamples();
>         Collection<Word<Character>> negative = getNegativeExamples();
>
>         MBOracle<Character> test = new MBOracle<>(positive, negative);
>
>         CounterOracle<Character, Boolean> mqOracle = new CounterOracle<>(test, "membership queries");
>
>         NLStarLearner<Character> nlstar =
>                 new NLStarLearnerBuilder<Character>().withAlphabet(alphabet) // input alphabet
>                                                      .withOracle(mqOracle)   // membership oracle
>                                                      .create();
>     }
>
>     private static Collection<Word<Character>> getPositiveExamples() { // 19 samples
>         return Arrays.asList(
>                 Word.fromCharSequence("acab"),
>                 Word.fromCharSequence("acaba"),
>                 Word.fromCharSequence("acabc"),
>                 Word.fromCharSequence("acabab"),
>                 Word.fromCharSequence("acabac"),
>                 Word.fromCharSequence("acabed"),
>                 Word.fromCharSequence("acabe"),
>
>                 Word.fromCharSequence("abc"),
>                 Word.fromCharSequence("abca"),
>                 Word.fromCharSequence("abce"),
>                 Word.fromCharSequence("abac"),
>                 Word.fromCharSequence("abcab"),
>                 Word.fromCharSequence("abaced"),
>                 Word.fromCharSequence("abcabed"),
>
>                 Word.fromCharSequence("abababc"),
>                 Word.fromCharSequence("ababcab"),
>
>                 Word.fromCharSequence("eabcdc"),
>                 Word.fromCharSequence("eabdc"),
>                 Word.fromCharSequence("eabababccd"));
>     }
>
>     private static Collection<Word<Character>> getNegativeExamples() { // 25 samples
>         return Arrays.asList(
>                 Word.fromCharSequence("a"),
>                 Word.fromCharSequence("b"),
>                 Word.fromCharSequence("c"),
>
>                 Word.fromCharSequence("ab"),
>                 Word.fromCharSequence("ac"),
>
>                 Word.fromCharSequence("cb"),
>                 Word.fromCharSequence("ca"),
>                 Word.fromCharSequence("cabac"),
>                 Word.fromCharSequence("cde"),
>
>                 Word.fromCharSequence("ba"),
>                 Word.fromCharSequence("bc"),
>                 Word.fromCharSequence("be"),
>                 Word.fromCharSequence("bg"), // g
>                 Word.fromCharSequence("bac"),
>
>                 Word.fromCharSequence("abcabacb"),
>                 Word.fromCharSequence("abcbccbea"),
>                 Word.fromCharSequence("abacde"),
>                 Word.fromCharSequence("abcaaaccddee"),
>                 Word.fromCharSequence("abacg"), // g
>
>                 Word.fromCharSequence("deabc"),
>                 Word.fromCharSequence("de"),
>                 Word.fromCharSequence("decabg"),
>
>                 Word.fromCharSequence("edabcb"),
>                 Word.fromCharSequence("ed"),
>
>                 Word.fromCharSequence("gabced"));
>     }
> }

Alex F.

Dec 24, 2020, 11:33:06 AM
to LearnLib Q&A
Great! It works, but I noticed, as you warned me, that the visualization is problematic.

Before using active learning for the NFA, I used the same set of samples (both positive and negative) to construct a DFA with the RPNI algorithm, and in that case the number of states was 6 or 7. With the NL* algorithm the NFA has 36 states. Does this large difference derive from the way I implemented the MQ oracle?

Thank you and Merry Christmas. 

Regards,
Alex F.

Markus Frohme

Dec 24, 2020, 7:19:31 PM
to Alex F., LearnLib Q&A
Hi Alex,


I don't know how familiar you are with the intricacies of automata learning, so I'll just try to sketch the differences for now.

In passive automata learning (RPNI), you have the problem of limited information, as you can only use a fixed set of traces for learning the hypothesis. On the other hand, this gives the learning algorithm some freedom to decide how the "unknown" states of the automaton should behave. It can therefore choose the properties of the states (accepting or rejecting) in order to minimize the size of the hypothesis (or rather maximize its generalization).

With active automata learning (NL*), the learner can ask for information that is not clear during learning. Given your implementation of the MQOracle, every word is accepted except the negative examples. This places many more constraints on the language, so the hypothesis model (or rather the language) may end up more complicated and therefore require more states. In fact, the hypothesis learned via RPNI may reject words that are accepted by your NL* hypothesis for this very reason.
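To make this concrete, here is the decision rule of your oracle in isolation (a self-contained toy with a made-up sample set): every word that was never sampled is answered positively, so the learner is forced to accept arbitrary junk words.

```java
import java.util.Set;

// Illustration of the "accept unless negative" membership semantics
// used by the MBOracle in this thread, on a tiny made-up sample set.
public class MembershipDemo {

    static final Set<String> NEGATIVE = Set.of("a", "b", "c", "ab", "ac");

    // Same decision rule as MBOracle.answerQuery: reject only sampled negatives.
    static boolean answer(String word) {
        return !NEGATIVE.contains(word);
    }

    public static void main(String[] args) {
        System.out.println(answer("acab")); // true: an intended positive
        System.out.println(answer("ab"));   // false: a sampled negative
        System.out.println(answer("zzz"));  // true: never sampled, yet accepted
    }
}
```

The last query is the crucial one: the target regular language presumably rejects "zzz", but the oracle tells the learner to accept it.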

There are also differences in the model type (DFA vs. NFA), but since NFAs are usually smaller than DFAs, learning the system with the classic L* may yield even bigger hypotheses.
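The NFA/DFA size gap can be made concrete with the classic language "the n-th symbol from the end is an 'a'": an NFA needs n+1 states, while the equivalent minimal DFA needs 2^n. A toy subset-construction count (not LearnLib code) illustrates this:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashSet;
import java.util.Set;

// Counts the reachable DFA states obtained by determinizing the
// (n+1)-state NFA for "the n-th symbol from the end is 'a'" over {a, b}.
public class SubsetBlowup {

    static int dfaStates(int n) {
        char[] sigma = {'a', 'b'};
        Set<Set<Integer>> seen = new HashSet<>();
        Deque<Set<Integer>> work = new ArrayDeque<>();
        Set<Integer> init = Set.of(0);
        seen.add(init);
        work.add(init);

        while (!work.isEmpty()) {
            Set<Integer> cur = work.poll();
            for (char c : sigma) {
                Set<Integer> next = new HashSet<>();
                for (int q : cur) {
                    if (q == 0) {
                        next.add(0);               // guess: marked 'a' not seen yet
                        if (c == 'a') next.add(1); // guess: this 'a' is n-th from the end
                    } else if (q < n) {
                        next.add(q + 1);           // count down the remaining symbols
                    }                              // q == n is accepting, no successors
                }
                if (seen.add(next)) {
                    work.add(next);
                }
            }
        }
        return seen.size();
    }

    public static void main(String[] args) {
        System.out.println("NFA states: " + (3 + 1));      // 4
        System.out.println("DFA states: " + dfaStates(3)); // 8
    }
}
```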

If you want to further process the learned models, you may also check out one of the serializers -- even the DOT file itself may be worth looking at without trying to render it as an image.

I hope this clears up some things.


Merry Christmas to you as well,
Markus

Am Thu, Dec 24, 2020 at 08:33:05AM -0800 schrieb Alex F.:
> Great! It works, but I noticed, as you warned me, that the
> visualization is problematic.
>
> Before using active learning for the NFA, I used the same set of samples
> (both positive and negative) to construct a DFA with the RPNI algorithm,
> and in that case the number of states was 6 or 7. With the NL* algorithm
> the NFA has 36 states. Does this large difference derive from the way I
> implemented the MQ oracle?
>
> Thank you and Merry Christmas.
>
> Regards,
> Alex F.
>
>
> [...]

Alex F.

Dec 25, 2020, 10:46:16 AM
to LearnLib Q&A
Thanks, you gave me a better idea of the differences between the two techniques.
Could you suggest some papers for a deeper investigation?

Regards,
Alex F. 

Markus Frohme

Dec 26, 2020, 7:21:02 PM
to Alex F., LearnLib Q&A
Dear Alex,


I can suggest the book "Grammatical Inference" by Colin de la Higuera [0]. While the book has some flaws (e.g. errors in some examples), it generally gives quite a good overview of the different concepts and how they are connected, with plenty of references to the original papers if a certain topic piques your interest.


Kind regards,
Markus


[0] - http://pagesperso.lina.univ-nantes.fr/~cdlh//book/


Am Fri, Dec 25, 2020 at 07:46:16AM -0800 schrieb Alex F.:
> Thanks, you gave me a better idea of the differences between the two techniques.
> Could you suggest some papers for a deeper investigation?
>
> Regards,
> Alex F.
>
> [...]

Alex F.

Dec 28, 2020, 8:26:25 AM
to LearnLib Q&A
Dear Markus,

Thank you very much!

Kind regards, 
Alex F.