In the language I'm writing a parser for, the following are all valid floating-point values:
47.11
47.
.11
The language also has a construct to define a range such as:
a[3..5]
{1, 2, 3}[$-1..$]
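For context, the parser side consumes the range roughly like this (rule names here are my own sketch, not the actual grammar, and I'm leaving out the $ end-marker):

// sketch: how the parser side consumes a range expression
sliceExpr : atom '[' rangeExpr ']' ;
rangeExpr : atom TO atom ;
atom      : INT | FLOAT ;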
So I defined
TO: '..';
FLOAT: INT? '.' INT?;
With these definitions the lexer/parser distinguish the two correctly, but only as long as the range expression has spaces around the .. sequence. Without the spaces, maximal munch takes over: in 3..5 the lexer greedily matches 3. as a FLOAT (two characters beat the one-character INT), then matches .5 as another FLOAT, and no TO token is ever produced.
So I added a semantic predicate to my lexer right after the '.', so that it only considers the match a FLOAT if the '.' is not followed by a second '.', like so:
FLOAT: INT? '.' { _input.LA(1) != '.' }? INT?;
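In context, the whole lexer section now reads like this (a self-contained sketch: the grammar name and the WS rule are my additions here, but skipping whitespace is exactly why the spaced variant always worked):

lexer grammar RangeLexer;  // name made up for this sketch

TO    : '..' ;
FLOAT : INT? '.' { _input.LA(1) != '.' }? INT? ;
INT   : [0-9]+ ;
WS    : [ \t\r\n]+ -> skip ;  // whitespace around '..' sidesteps the ambiguity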
And yay me! Works a treat; my unit test runs green.
But! When I run all my unit tests together, a number of tests involving FLOAT values start failing. All of these tests pass if I run them individually (method by method or class by class).
I did a lot of digging and found that the generated lexer and parser cache DFA states in static fields, so the cache is shared by every instance in the same JVM, even though I instantiate new lexers and parsers for every run.
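Each failing test builds everything from scratch, roughly like this (MyLexer stands in for my generated lexer; the exact assertions are just illustrative):

import org.antlr.v4.runtime.ANTLRInputStream;
import org.antlr.v4.runtime.CommonTokenStream;
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class FloatLexingTest {
    @Test
    public void rangeWithoutSpaces() {
        // brand-new lexer instance for this test, but the DFA cache
        // lives in static fields of MyLexer and survives across tests
        MyLexer lexer = new MyLexer(new ANTLRInputStream("3..5"));
        CommonTokenStream tokens = new CommonTokenStream(lexer);
        tokens.fill();
        // expecting INT '..' INT plus EOF, not FLOAT FLOAT
        assertEquals(4, tokens.size());
        assertEquals(MyLexer.TO, tokens.get(1).getType());
    }
}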
My question: could it be that the DFA state cache does not handle the presence of the semantic predicate properly? Or (probably the more likely scenario) have I defined the predicate wrongly?
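If the shared cache is the culprit, one experiment would be to wipe the lexer's DFA before each test. Later ANTLR 4 runtimes expose this on the ATN simulator; I'm not sure the build I'm on has it yet:

// experiment: clear the shared DFA cache before lexing
// (clearDFA() is on LexerATNSimulator in later runtimes;
// treat its availability in this build as an assumption)
MyLexer lexer = new MyLexer(new ANTLRInputStream("3..5"));
lexer.getInterpreter().clearDFA();
CommonTokenStream tokens = new CommonTokenStream(lexer);
tokens.fill();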
Sorry if this was unclear; it's late and I've had a long day.
(Using ANTLR 4, retrieved via Maven on 8 May 2013, with JUnit 4.11 in Eclipse Juno.)
Thanks for any help,
Jaap