
How would you represent an abstract syntax tree in Prolog?


rupert...@googlemail.com

Jun 6, 2016, 6:33:04 AM
Taking a concrete example, here is some json I might parse:

{
  'author': 'Philip K Dick',
  'works': [
    {'title': 'The Man in the High Castle'},
    {'title': 'Do Androids Dream of Electric Sheep'}
  ]
}

How would you represent an AST in Prolog?

You translate the whole thing as a Prolog structure, using lists for the fields:

fields([field(author, "Philip K Dick"), field(works, list([fields(['title', ...

Or try and flatten it:

field(root, author, "Philip K Dick").
field(root, works, list123).

items(list123, object456).
items(list123, object457).
// Or use a nil/cons structure for lists.

field(object456, title, "The Man in the High Castle").
field(object457, title, "Do Androids Dream of Electric Sheep").

The flattening requires making up names to refer to things if there are un-named objects in the structure, or even if there are named objects but the names are not unique.
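For what it's worth, the name-invention step can be automated. Here is a minimal sketch (flatten_node/2 is my own name, assuming SWI-Prolog's gensym/2 and the fields/field/list wrapping from the example above) that asserts the flattened facts:

```prolog
:- dynamic field/3, items/2.

% Sketch: flatten a nested term into field/3 and items/2 facts,
% inventing fresh identifiers with gensym/2 (SWI-Prolog).
flatten_node(fields(Fs), Id) :-
    gensym(object, Id),
    forall(member(field(Name, Value), Fs),
           ( flatten_node(Value, VId),
             assertz(field(Id, Name, VId)) )).
flatten_node(list(Es), Id) :-
    gensym(list, Id),
    forall(member(E, Es),
           ( flatten_node(E, EId),
             assertz(items(Id, EId)) )).
flatten_node(Atomic, Atomic) :-       % atoms, numbers, strings are their own "name"
    \+ compound(Atomic).
```

So `?- flatten_node(fields([field(author, "Philip K Dick")]), Root).` would assert a fact like `field(object1, author, "Philip K Dick")` and bind Root to the invented identifier.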

Are there other approaches to representing ASTs in Prolog?

I'm interested in efficiency, and also ease of programming with such structures.

Rupert

Falco Nogatz

Jun 6, 2016, 11:43:07 AM
Hi Rupert,

you might be interested in SWI-Prolog's two ways of representing JSON terms as they are a good example of nested terms. Have a look at http://www.swi-prolog.org/pldoc/man?section=jsonsupport

In SWI-Prolog prior to version 7.x JSON terms are represented as follows:

json([
  author = 'Philip K Dick',
  works = [
    json([
      title = 'The Man in the High Castle'
    ]),
    json([
      title = 'Do Androids Dream of Electric Sheep'
    ])
  ]
])

With SWI-Prolog v7.x it is possible to use dicts to represent JSON terms:

_{
  author: 'Philip K Dick',
  works: [
    _{
      title: 'The Man in the High Castle'
    },
    _{
      title: 'Do Androids Dream of Electric Sheep'
    }
  ]
}

Please note that this notation is not compatible with the ISO Prolog standard.
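Dicts also come with built-in accessors such as get_dict/3, so field lookup does not need list traversal:

```prolog
?- D = _{author: 'Philip K Dick',
         works: [_{title: 'The Man in the High Castle'}]},
   get_dict(author, D, A),
   get_dict(works, D, [W|_]),
   get_dict(title, W, T).
```

which binds A to 'Philip K Dick' and T to 'The Man in the High Castle'.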

Jan Burse

Jun 6, 2016, 1:15:49 PM
Assuming your Prolog system understands lists,
set notation and the comma operator,
you can directly parse it with Prolog; you only
need an operator definition for (:)/2.

Look see (why does it show 2015 and not 2016?):

Welcome to SWI-Prolog (Multi-threaded, 64 bits, Version 7.3.22)
Copyright (c) 1990-2015 University of Amsterdam, VU Amsterdam

?- X = {
'author': 'Philip K Dick',
'works': [
{'title': 'The Man in the High Castle'},
{'title': 'Do Androids Dream of Electric Sheep'}
]
}.
X = {author:'Philip K Dick', works:[{title:
'The Man in the High Castle'}, {title:
'Do Androids Dream of Electric Sheep'}]}.

This works with any ISO-compatible Prolog. If you don't have
the (:)/2 operator, just add this ISO-compatible directive:

:- op(600, xfy, :).

So what do you get from this approach? Well, a JSON entity is either:
- array
- map
- atomic

You get this:
- JSON array is Prolog list
- JSON map is Prolog set, inside comma and colon
- JSON atomic is Prolog atomic

Advantage/Disadvantage over dicts:
- dicts in SWI-Prolog are just compounds where the functor is the
blob C'dict', and where the key and values are alternated.

?- X = f{a:b, c:d}, X=.. L.
L = [C'dict', f, a, b, c, d].

- So dicts are indeed more compact, you don't need the comma
and colon nodes.

- Also the keys are sorted, so equality becomes easier. An incoming
JSON stream might thus be somehow normalized (is this needed
and/or wanted?):

?- X = f{c:d, a:b}, X=.. L.
L = [C'dict', f, a, b, c, d].

- If you start coding your own ops on dicts in pure Prolog,
for example using (=..)/2, you probably don't end up with
much more efficient code.

- So you are pretty much dependent on the provision of some
efficient operations from the Prolog vendor. Which is not
so much the case for other datastructures.

Would it be possible to bootstrap dicts without much C-code,
i.e. with pure Prolog or even some impure Prolog? And thus
provide a reference implementation?

Many datastructures and thus modules are just piles of
Prolog predicates, i.e. list, set, ordset, etc. Is this
also possible for dicts?

Bye


rupert...@googlemail.com wrote:

Jan Burse

Jun 6, 2016, 1:24:07 PM
Jan Burse wrote:
>
> So what do you get by this approach. Well an JSON entity is either:
> - array
> - map
> - atomic

JSON lingo is:

- array
- object
- literal names/number/string

And a string is assumed to be double quoted, and can have
unicode hex escapes. In Prolog there might be a slight problem
distinguishing literal names and strings, if both are just atoms.

In SWI-Prolog, since it has a string type, we could distinguish
null from "null".

https://tools.ietf.org/html/rfc7159

Jan Burse

Jun 6, 2016, 1:25:07 PM
Jan Burse wrote:
> In SWI-Prolog, since it has a string type, we could distinguish
> null from "null".

But we don't find the string type in the ISO core standard.

Jan Burse

Jun 6, 2016, 1:30:39 PM
An alternative approach, without the need for strings,
would be to map strings to atoms. And have special blobs
for true, false and null.

If there is already a blob C'dict', it would be easy
to also introduce blobs C'true', C'false' and C'null',
and thus totally avoid introducing a new string datatype.

In Jekejeke Prolog one could use the reference data type
for this. But generally with blobs there is a problem with
mentioning them in Prolog code. For example the
following doesn't work in SWI-Prolog:

?- X = C'dict'.
ERROR: Syntax error: Operator expected
ERROR: X =
ERROR: ** here **
ERROR: C'dict' .

So investing into blobs and/or reference data type could
be the cheaper solution to the problem than investing into
strings. Of course as a fall-back, we could also use some
functor to wrap strings:

Sketch:

value = ARRAY
| OBJECT
| true
| false
| null
| str(ATOM)

Don't know what SWI-Prolog 7.x JSON was doing. And don't
know how dicts could solve the null versus "null" problem.
And don't know how aggravated this problem is.
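To illustrate the sketch above (the test predicate names here are mine): with the str/1 wrapper, a parser can keep the JSON literal null and the JSON string "null" apart without a dedicated string type:

```prolog
% Sketch: null parses to the plain atom null, "null" to str(null).
json_literal(true).
json_literal(false).
json_literal(null).

json_string(str(A)) :- atom(A).

% The two cases no longer overlap:
% ?- json_literal(null).      succeeds
% ?- json_string(null).       fails
% ?- json_string(str(null)).  succeeds
```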

Bye

Jan Burse wrote:

Julio Di Egidio

Jun 6, 2016, 4:04:43 PM
On Monday, June 6, 2016 at 7:15:49 PM UTC+2, Jan Burse wrote:

> - Also the keys are sorted, so equality because easier. An incoming
> JSON stream might thus be somehow normalized (is this needed
> and or wanted?):

It is not wanted: order is significant and should be preserved.

Julio

j4n bur53

Jun 6, 2016, 8:47:01 PM
Arrays are of course sorted. But I am referring to the
keys here, so I am referring to objects. And I just
learnt now,

it seems to be wanted, the RFC 7159 states:
- [...]. The names within an object SHOULD be unique.
- [...]. Implementations whose behavior does not depend on member
ordering will be interoperable in the sense that they
will not be affected by these differences.

But probably one of the worst written RFCs ever.

Julio Di Egidio wrote:

j4n bur53

Jun 6, 2016, 8:48:08 PM
j4n bur53 wrote:
> Arrays are of course sorted.
Oops, sorted --> ordered

j4n bur53

Jun 6, 2016, 8:50:37 PM
j4n bur53 wrote:
> j4n bur53 wrote:
>> Arrays are of course sorted.
> Oops, sorted --> ordered

Or wikipedia:
- Array: an ordered list
- Object: an unordered collection
https://en.wikipedia.org/wiki/JSON

Meaning/Consequence:
Arrays: you are not allowed to sort the elements.
Objects: you are allowed to sort the key-value pairs by key.
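So if one does decide to normalize, the standard keysort/2 already does the job for objects represented as lists of Key-Value pairs (plain ISO Prolog):

```prolog
% keysort/2 sorts Key-Value pairs by the standard order of the keys
% and is stable, so duplicate keys keep their relative order.
?- keysort([works-[], author-'Philip K Dick'], Sorted).
Sorted = [author-'Philip K Dick', works-[]].
```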


j4n bur53

Jun 6, 2016, 9:08:22 PM
BTW: XML-RPC, dating from 1998, might be considered
a precursor of JSON; it also has some atomics
and then arrays and structs.

Very similar idea for a universal exchange format,
especially for the actual parameters and the
return values of procedure calls,

but XML based:
https://en.wikipedia.org/wiki/XML-RPC#Data_types

j4n bur53

Jun 6, 2016, 9:11:41 PM
j4n bur53 wrote:
Ok, here you go, why JSON became more popular than XML-RPC:

Criticism (from Wikipedia page on XML-RPC):
[...] Furthermore, XML-RPC uses about 4 times the number of bytes
compared to plain XML to encode the same objects, which is itself
verbose compared to JSON.

Markus Triska

Jun 7, 2016, 2:38:34 AM
Hi Rupert,

"rupert...@googlemail.com" <rupert...@googlemail.com> writes:

> Taking a concrete example, here is some json I might parse:
>
> {
> 'author': 'Philip K Dick',
> 'works': [
> {'title': 'The Man in the High Castle'},
> {'title': 'Do Androids Dream of Electric Sheep'}
> ]
> }
>
> How would you represent an AST in Prolog?
>
> You translate the whole thing as a Prolog structure, using lists for the fields:
>
> fields([field(author, "Philip K Dick"), field(works, list([fields(['title', ...

I would like to focus on this structure: The representation you show
above has the severe drawback that you cannot distinguish the different
field types symbolically (i.e., via pattern matching). Such a
representation is called "defaulty", because you will end up with a
series of if-then-else cases and likely a "default" case every time you
analyze or convert such terms.

You already hint at the solution with the list/1 functor you introduced.
It only remains to carry this idea over to other elements too:

fields([field(author, string("Philip K Dick")),
field(works, list([fields([title, ...])]))])

The difference is only in the string/1 wrapper that lets us symbolically
distinguish the different types.

By the way, the "list" case actually does not require such a
wrapper, because you can already distinguish lists by their principal
functor and arity, and therefore you can also write this as:

fields([field(author, string("Philip K Dick")),
field(works, [fields([title, ...])])])

and "title" above should actually read field(title, string(...)), no?

Making the cases distinguishable by different functors improves
convenience and purity (it is more likely that you can use your code in
several directions, because you do not need if-then-else and other
non-monotonic constructs to handle such terms), and typically also
performance, because you cannot beat your system's first-argument
indexing by a series of if-then-else constructs within your code.
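As a small illustration of such clause-per-functor code (the predicate names are mine, not from any library): counting the string leaves of the wrapped representation needs no if-then-else, and first-argument indexing selects the right clause directly:

```prolog
% One clause per functor; dispatch is pure pattern matching.
string_count(string(_), 1).
string_count(fields(Fs), N) :- fields_count(Fs, N).
string_count([], 0).
string_count([X|Xs], N) :-
    string_count(X, N0),
    string_count(Xs, N1),
    N is N0 + N1.

fields_count([], 0).
fields_count([field(_, V)|Fs], N) :-
    string_count(V, N0),
    fields_count(Fs, N1),
    N is N0 + N1.
```

For example `?- string_count(fields([field(author, string("Philip K Dick"))]), N).` gives N = 1.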

Great question!

All the best,
Markus

--
comp.lang.prolog FAQ: http://www.logic.at/prolog/faq/

rupert...@googlemail.com

Jun 7, 2016, 4:36:15 AM
On Tuesday, June 7, 2016 at 7:38:34 AM UTC+1, Markus Triska wrote:
> > fields([field(author, "Philip K Dick"), field(works, list([fields(['title', ...
>
> I would like to focus on this structure: The representation you show
> above has the severe drawback that you cannot distinguish the different
> field types symbolically (i.e., via pattern matching). Such a
> representation is called "defaulty", because you will end up with a
> series of if-then-else cases and likely a "default" case every time you
> analyze or convert such terms.
>
> You already hint at the solution with the list/1 functor you introduced.
> It only remains to carry this idea over to other elements too:
>
> fields([field(author, string("Philip K Dick")),
> field(works, list([fields([title, ...])]))])

Thanks, yes: some type constructors to wrap the various types. Another way might be to have type tags, as in:

field(author, string, "Philip K Dick").

rupert...@googlemail.com

Jun 7, 2016, 4:50:10 AM
Hi,

Thanks for all the replies. Perhaps giving a json example was a bad choice - I do want to work with json, but I also want to work with other formats too. Perhaps XML schemas, or Java classes, or some other description of a data structure, maybe a text UML format for example. I just picked a snippet of json to illustrate what an AST for it may look like.

What I am aiming to do is to parse various schema descriptions of 'objects' with 'fields' into a common AST that can be used to convert between them. I'd like the AST structure to be efficiently query-able, both for deducing properties of how the data is structured and for transforming the AST prior to producing outputs from it.

What about the flattening approach:

On Monday, June 6, 2016 at 11:33:04 AM UTC+1, rupert...@googlemail.com wrote:
> Or try and flatten it:
>
> field(root, author, "Philip K Dick").
> field(root, works, list123).
>
> items(list123, object456).
> items(list123, object457).
> // Or use a nil/cons structure for lists.
>
> field(object456, title, "The Man in the High Castle").
> field(object457, title, "Do Androids Dream of Electric Sheep").

> I'm interested in efficiency, and also ease of programming with such structures.

Is it not much more efficient to represent a set of fields as predicates, rather than to put them in a list?

I also found that working with lists of fields requires writing explicit list recursions, which complicates the code. Back-tracking over a set of predicates would provide the iteration for me.
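To make that concrete with the flattened facts from the original post (titles_of/2 is my own sketch):

```prolog
field(root, author, "Philip K Dick").
field(root, works, list123).

items(list123, object456).
items(list123, object457).

field(object456, title, "The Man in the High Castle").
field(object457, title, "Do Androids Dream of Electric Sheep").

% Each solution is found by backtracking over the facts;
% no explicit list recursion is needed.
titles_of(Author, Title) :-
    field(Obj, author, Author),
    field(Obj, works, L),
    items(L, W),
    field(W, title, Title).
```

Here `?- titles_of("Philip K Dick", T).` enumerates both titles on backtracking, and `findall/3` collects them into a list when one is needed.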

> Are there other approaches to representing ASTs in Prolog?

What about something like this?

http://www.complang.tuwien.ac.at/adrian/termite/Manual/Contents.html

Extending the concept of zipper lists to ASTs.

Rupert

rupert...@googlemail.com

Jun 7, 2016, 5:18:00 AM
On Tuesday, June 7, 2016 at 9:50:10 AM UTC+1, rupert...@googlemail.com wrote:
> What about something like this?
>
> http://www.complang.tuwien.ac.at/adrian/termite/Manual/Contents.html
>
> Extending the concept of zipper lists to ASTs.

After a bit of hunting around, it seems you can get the Termite sources here:

https://github.com/rose-compiler/rose/tree/master/projects/SATIrE/src/termite

I have difficulty enough remembering how difference lists work, never mind this - but taking a look.

j4n bur53

Jun 7, 2016, 5:38:48 AM
You should definitely choose another name than AST.
AST = Abstract Syntax Tree, just the tree of something
you were parsing, typically an artefact written in
a programming language.

But here you are parsing the object model of a programming
language. A debugger might do this at runtime. And if the
debugger is remote there might be the need to serialize
this object model.

In general if you have a grammar as DCGs:

A --> A1, .., An

You can turn this into an AST-returning grammar by
simply adding an argument and inventing a functor f
for each rule:

A(f(X1,..,Xn)) --> A1(X1), .., An(Xn)

You might drop a functor f here and there, to get
a more concise AST, even leading to a defaulty representation,
as Markus Triska calls them.
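A toy instance of this scheme (my own grammar, not from the thread): addition expressions over single digits, with an invented functor add/2 for the recursive rule and num/1 for the leaves:

```prolog
% expr(AST) parses a code list like "1+2+3" into add/num terms.
expr(add(X, Y)) --> digit(X), [0'+], expr(Y).
expr(X)         --> digit(X).

digit(num(D)) --> [C], { 0'0 =< C, C =< 0'9, D is C - 0'0 }.
```

So `?- atom_codes('1+2+3', Cs), phrase(expr(T), Cs).` yields T = add(num(1), add(num(2), num(3))). Dropping the num/1 wrapper would give a more concise, but defaulty, tree.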

But I don't agree that defaulty representations lead
to impure code. For example, take the following
AST for a subset of JSON that doesn't have false,
true and null, with * denoting lists:

json ::= array(json*)
| object(pair*)
| ATOM
| NUMBER
pair ::= ATOM-json

Was deliberately not using (:)/2 in the representation.
You can still declaratively implement the visitor
pattern. For example an is_json predicate would
work perfectly fine as follows:

is_json(array(L)) :- is_json_list(L).
is_json(object(P)) :- is_json_pairs(P).
is_json(A) :- atom(A).
is_json(A) :- number(A).

is_json_list([]).
is_json_list([X|Y]) :- is_json(X), is_json_list(Y).

is_json_pairs([]).
is_json_pairs([A-X|Y]) :- atom(A), is_json(X), is_json_pairs(Y).

But of course you would place green cuts:

is_json(array(L)) :- !, is_json_list(L).
is_json(object(P)) :- !, is_json_pairs(P).
is_json(A) :- atom(A), !.
is_json(A) :- number(A).

Etc.. Etc..

rupert...@googlemail.com wrote:

j4n bur53

Jun 7, 2016, 5:44:36 AM
j4n bur53 wrote:
> json ::= array(json*)
> | object(pair*)
> | ATOM
> | NUMBER

You can't do this in Haskell, right? A datatype must
always have a constructor for each variant. That's why you
don't see defaulty representations in Haskell.

j4n bur53

Jun 7, 2016, 5:45:28 AM
j4n bur53 wrote:
> But here you are parsing the object model of a progamming
> language. A debugger might do this at runtime. And if the
> debugger is remote there might be the need to serialize
> this object model.

But if you go for artefacts in a programming language,
as in Termite, AST matches again as a name.

j4n bur53

Jun 7, 2016, 5:56:00 AM
Hi,

Markus Triska wrote:
> Making the cases distinguishable by different functors improves
> convenience and purity (it is more likely that you can use your code in
> several directions, because you do not need if-then-else and other
> non-monotonic constructs to handle such terms), and typically also
> performance, because you cannot beat your system's first-argument
> indexing by a series of if-then-else constructs within your code.

Applying pure Prolog in parsing is practically impossible. Even
if you avoid the defaulty representation, you might still use the
atom type, and maybe assemble it by atom_codes/2.

But atom_codes/2 is an impure predicate, in the sense that it
doesn't support all modes. A form of non-monotonicity I guess
in your lingo, right? So you might want to represent atoms
as lists of codes.

But then the codes themselves are impure. Integers in Prolog are
also impure. For example the (<)/2 predicate doesn't
support all modes. A form of non-monotonicity I guess
in your lingo, right?

So to be 100% pure you would need to use lists of Peano-represented
integers. Which is not very practical in my opinion.

Bye

Look see, (<)/2 is not monotonic:

?- X=1, X<2.
true
?- X<2, X=1.
exception

j4n bur53

Jun 7, 2016, 6:19:34 AM
Markus Triska wrote:
> I would like to focus on this structure: The representation you show
> above has the severe drawback that you cannot distinguish the different
> field types symbolically (i.e., via pattern matching). Such a
> representation is called "defaulty", because you will end up with a
> series of if-then-else cases and likely a "default" case every time you
> analyze or convert such terms.

Well, the number of if-then-else cases is the same with and without
the defaulty representation. You can check yourself in the json
example I gave:

With defaulty representation (4 clauses):

is_json(array(L)) :- is_json_list(L).
is_json(object(P)) :- is_json_pairs(P).
is_json(A) :- atom(A).
is_json(A) :- number(A).

Without defaulty representation (4 clauses):

is_json(array(L)) :- is_json_list(L).
is_json(object(P)) :- is_json_pairs(P).
is_json(string(_)).
is_json(number(_)).

But I agree a non-defaulty representation is sometimes easier to
use, but also needs more space. Probably a reason we don't find
the defaulty representation in Haskell and also in theorem provers
such as Isabelle/HOL is,

that non-defaulty representation allows easier structural induction,
either manually or automated, or a mix of it. Structural induction
is a very important tool in proving program properties etc..
But structural induction is not limited to non-defaulty representation.

Human beings and mathematical logic can do structural induction
also with defaulty representation. Just turn back the time. Think
about ZFC, where the universe is built from sets alone, a non-defaulty
representation.

But one can quickly see how to do ZFC with ur-elements(*). You
just need some more predicates to distinguish the ur-elements.
In Prolog atom/1 and number/1 do the job. You can also do structural
induction with defaulty representation, probably just a matter
of technical economy that you don't find it. Have to check.

Bye

(*)
https://en.wikipedia.org/wiki/Urelement

Julio Di Egidio

Jun 7, 2016, 6:24:58 AM
On Tuesday, June 7, 2016 at 2:47:01 AM UTC+2, j4n bur53 wrote:
>
> Arrays are of course sorted. But I am refering to the
> keys here, so I am refering to objects. And I just
> learnt now,
>
> it seems to be wanted, the RFC 7159 states:
> - [...]. The names within an object SHOULD be unique.
> - [...]. Implementations whose behavior does not depend on member
> ordering will be interoperable in the sense that they
> will not be affected by these differences.

<< JSON parsing libraries have been observed to differ as to whether or
not they make the ordering of object members visible to calling
software. Implementations whose behavior does not depend on member
ordering will be interoperable in the sense that they will not be
affected by these differences. >> (*)

Order is significant (in some languages) and SHOULD be preserved: that is an interoperable implementation!

> But probably one of the worst written RFC ever.
>
> Julio Di Egidio wrote:
> > On Monday, June 6, 2016 at 7:15:49 PM UTC+2, Jan Burse wrote:
> >
> >> - Also the keys are sorted, so equality because easier. An incoming
> >> JSON stream might thus be somehow normalized (is this needed
> >> and or wanted?):
> >
> > It is not wanted: order is significant and should be preserved.

Please do not top-post.

Julio

(*) <https://tools.ietf.org/html/rfc7159#section-4>

Jan Burse

Jun 7, 2016, 6:41:43 AM
j4n bur53 wrote:
> Look see, (<)/2 is not monotonic:
>
> ?- X=1, X<2.
> true
> ?- X<2, X=1.
> exception

So we could say that the following grammar from
a tokenizer is impure:

digit(X) --> [X], {0'0 =< X, X =< 0'9}.

Unfortunately CLP(FD) isn't really a solution. We
could go on and do the following:

digit(X) --> [X], {0'0 #=< X, X #=< 0'9}.

But we can still not reason about the program. CLP(FD)
doesn't offer some induction principles.

The simplest induction principle it could cover would
be induction for natural numbers. If we look at the
structure of peano numbers:

nat ::= s(nat)
| z

From this we could derive a structural induction,
which amounts more or less to the same as the
usual induction for natural numbers.

In CLP(FD) we have integers, so we need some sign
flipping as well. The labeling can do some work, if
the domain is limited. Which is the case admittedly
for the codes that are input to a parser.

But in a tokenizer we have also lists, on the
input side and on the output side, if we view
tokens as lists. We would need structural induction
on lists. Which is out of scope of CLP(FD).

Bye


Jan Burse

Jun 7, 2016, 6:42:50 AM
Jan Burse wrote:
> But in a tokenizer we have also lists, on the
> input side and on the output side, if we view
> tokens as lists. We would need structural induction
> on lists. Which is out of scope of CLP(FD).

Maybe show us GUPU some time soon. Maybe there
is something in it. Or it could simply give us a
better understanding of your "monotonicity" desires.

Bye


Jan Burse

Jun 7, 2016, 8:00:17 AM
More on purity. The only pure way of using is_json/1
is either with mode + or with mode -.

But mode - you can't do with either the defaulty
or the non-defaulty representation.

Since you cannot enumerate the atoms and the numbers at the
same time with simple Prolog code. I mean you could,
assuming is_atom/1 and is_number/1 do that, but you
get an incomplete enumeration.
get an incomplete enumeration.

is_json(string(A)) :- is_atom(A).
is_json(number(N)) :- is_number(N).
is_json(array(L)) :- is_json_list(L).
is_json(object(P)) :- is_json_pairs(P).

Because of the depth-first Prolog interpreter, you will
never stop enumerating atoms, and never reach enumerating
a single number.

So your purity never works anyway in depth-first Prolog.
The test case would be:

?- is_json(number(123)).
true

?- is_json(X), X=number(123).
hangs

So logically you have two Hilbert hotels following each
other(*), or symbolically an omega+omega order. Of course
your predicate will also not correctly enumerate arrays
and objects; same problem here, it will get stuck with arrays.

The only possibility to reorder here is if you also use
the freeze/2 predicate and maybe iterative deepening. So
basically my experience is the contrary of what you wrote:

Markus Triska wrote:
> Making the cases distinguishable by different functors improves
> convenience and purity (it is more likely that you can use
> your code in several directions, because you do not need
> if-then-else and other non-monotonic constructs to handle
> such terms), and typically also performance, because you
> cannot beat your system's first-argument indexing by a
> series of if-then-else constructs within your code.

My experience is you never, never get code that works in
several directions when structures are involved that are
not as trivial as Peano. If you want several directions
you have to cleverly use freeze/2.

The quine example recently being a good example, where
we tried to solve eval(X,X). Even if you use freeze/2 and
iterative deepening the interpreter might be busy a whole
lot of the time.

Bye

(*)
The depth first Prolog interpreter is not as clever as this Clerk:
https://www.youtube.com/watch?v=Uj3_KqkI9Zo
Iterative deepening and breadth first and similar techniques on
the other hand restore a simple omega enumeration.

Jan Burse

unread,
Jun 7, 2016, 8:04:28 AM6/7/16
to
Jan Burse wrote:
> The quine example recently beeing a good example. Where
> we tried to solve eval(X,X). Even if you use freeze/2 and
> iterative deepening the interpreter might be busy a whole
> lot of a time.

Remember some months ago, here on comp.lang.prolog:
https://www.cs.indiana.edu/~eholk/papers/sfp2012.pdf

Jan Burse

Jun 7, 2016, 8:04:28 AM
Jan Burse wrote:
> My experience is you never never get code that works in
> several directions when structures are involved, that are
> not as trivial as peano. If you want several directions
> you have to cleverly use freeze/2.

You can reorder in Datalog, even in some of Rdf/Sparql,
but with real structures involved, reordering becomes
impossible in standard Prolog.

Jan Burse

unread,
Jun 7, 2016, 9:21:42 AM6/7/16
to
Hi,

Well I was bluffing. Reordering for Datalog holds
only at the top-level.

But if you have recursive rules, you cannot so easily
reorder the literals in the body.

Or to begin with, if you have recursive rules in Datalog
you need something like tabling.

Tabling is available in SWI-Prolog since the last release.
It's also said that it is a reset/shift implementation.

reset/shift can be viewed as a form of co-routines. So
it's not freeze/2 that comes to the rescue here.

So the bottom line is:
- Pure Prolog is Definite aka Horn Clause Logic
- The Depth First Interpreter for Horn Clause Logic is Incomplete
- Hence this Incompleteness holds for Pure Prolog
- Asking for more Pureness doesn't give you more Completeness.
- The Pure Prolog Depth First Interpreter is already Incomplete.

Bye

Jan Burse wrote:

burs...@gmail.com

Jun 7, 2016, 9:21:42 AM
On Tuesday, June 7, 2016 at 12:24:58 PM UTC+2, Julio Di Egidio wrote:
> << JSON parsing libraries have been observed to differ as to whether or
> not they make the ordering of object members visible to calling
> software. Implementations whose behavior does not depend on member
> ordering will be interoperable in the sense that they will not be
> affected by these differences. >> (*)
>
> Order is significant (in some languages) and SHOULD
> be preserved: that is an interoperable implementation!

There is only a SHOULD for the uniqueness of the
member names, which is even not a MUST (sic!)(*).

For the rest, what you have cited, I don't see a
SHOULD or MUST.

The rest is so vague that one can only conclude
that this is the worst RFC ever.

But common sense tells me the good faith intention
of JSON is as the Wikipedia article lists it:
- arrays: order is significant for the elements,
i.e. other JSON values
- objects: order is not significant for the members,
i.e. the JSON key-value pairs

Yes or Yes?

(*) Wikipedia explains what this means:
RFC 7159 states: "The names within an object SHOULD be
unique.", where the word "SHOULD", interpreted as per
RFC 2119, means a recommendation.
https://en.wikipedia.org/wiki/JSON#cite_note-12

rupert...@googlemail.com

Jun 7, 2016, 10:16:09 AM
On Tuesday, June 7, 2016 at 10:38:48 AM UTC+1, Jan Burse wrote:
> You should definitively choose another name than AST.
> AST = Abstract Syntax Tree, just the tree of something
> you were parsing, typically an artefact written in
> a programming language.

Typically an AST is the parse tree of some concrete programming language, yes. They have also been used in a slightly more abstract form to help translate between computer languages - never completely successfully as translating the syntax isn't even half the problem.

Stepping away from executable code and focusing just on data models reduces the problem greatly. I'm fairly sure that with the right level of abstraction the lingua franca of it can be described by something that fairly closely resembles the AST produced by parsing these various data model descriptions; a document model held as a DAG that is abstract enough to capture the intentions of the data model I am trying to describe in these various formats. I can't think of a better name for it than an AST.

But actually, I think some of the work that has been done on writing static analysers in Prolog against ASTs is going to be useful to me. Or where program ASTs have been reduced to Datalog format to enable efficient querying for static analysers; I'm sure there was at least one Java bug-finder project I saw that took that approach.

Prolog is messy, we know. Want to get back to talking about handling ASTs in it?

Jan Burse

Jun 7, 2016, 10:22:21 AM
I know some cases where the order of members could be
relevant. But those are also cases where members are
multi-valued. What comes directly to mind is a HTTP
request and the attr=value pairs in the query.

The query of a HTTP request can have attr=value1 and
attr=value2 at the same time, or even many more values
for the same attribute name. In Java the problem is
solved as follows:

There is a method:
getParameter()
If an attribute has multiple values, it returns
the first value for a given attribute name, or
null.
And there is a method:
getParameterValues()
This method returns all the values or null, in
their appearing order for a given attribute
name.

http://docs.oracle.com/javaee/6/api/javax/servlet/ServletRequest.html

We can do the same modelling in JSON if we have
multi-valued attribute values. Nevertheless we
can keep the attribute names unique.

Just use the array in connection with an object.
So a query ?foo=bar&foo=baz&other=tutu or also
this query ?foo=bar&other=tutu&foo=baz could be
modelled as follows:

{"foo" : ["bar", "baz"],
"other" : ["tutu"]}

Easy?

Bye

Julio Di Egidio wrote:

Jan Burse

Jun 7, 2016, 11:01:55 AM
rupert...@googlemail.com wrote:
> Prolog is messy, we know. Want to get back to talking about handling ASTs in it?

What are your use cases? What is the Zippers use case?

rupert...@googlemail.com

Jun 7, 2016, 11:02:22 AM
On Tuesday, June 7, 2016 at 10:18:00 AM UTC+1, rupert...@googlemail.com wrote:
> After a bit of hunting around, it seems you can get the Termite sources here:
>
> https://github.com/rose-compiler/rose/tree/master/projects/SATIrE/src/termite

Here is the code for the zipper over trees. After playing around with it, I realize it's not so difficult to understand. It simply moves the appropriate sub-tree out of the way, and introduces a variable as a 'hole' in the current location.

More complex to code against than a non-zipped structure, but it holds out the promise of allowing transformations to be applied to the tree more efficiently.
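Assuming the predicates below (plus the replace_nth/5 helper from the Termite sources, which is not included in this excerpt), a session might look like this; the exact child numbering is my reading of the code, so treat it as a sketch:

```prolog
?- zip(f(a, g(b, c)), Z),   % wrap the tree in a zipper; context is []
   down(Z, 2, Z1),          % descend to a child; a fresh variable is
                            % left behind as the 'hole' in the parent
   unzip(Z1, Focus, _),     % inspect the focused subtree
   up(Z1, Z2),              % bind the hole again and move back up
   unzip(Z2, Tree, _).      % Tree is the original f(a, g(b, c))
```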


%% zip(+Tree, -Zipper) is det.
% Creates a new Zipper from Tree.
%
% How to move around a tree and replace subtrees (branches)?
%
% At each node, we cut out a branch and replace it with a free variable <Gap>.
% The original branch is given as a separate argument,
% allowing us to bind the new branch to <Gap>.
%
% In a way, this works just like Huet's zipper!
zip(X, zipper(X, [])).

%% unzip(?Zipper, ?Tree, ?Context) is det.
% Converts between the Zipper data structure and its contents.
unzip(zipper(X, Ctx), X, Ctx).

% Helper predicates to navigate through a zipper

%% walk_to(+Zipper, +Context, -Zipper1) is semidet.
walk_to(Z, Ctx, Z1) :-
    reverse(Ctx, Path),
    top(Z, Top),
    walk_to1(Top, Path, Z1), !.

walk_to1(Z, [], Z).
walk_to1(Z, [Down|Ps], Z2) :-
    (   Down = down(_, _, N)
    ;   Down = down_list(_, _, N)
    ),
    down(Z, N, Z1),
    walk_to1(Z1, Ps, Z2).

%% down(+Zipper, +BranchNum, -Zipper1) is semidet.
% Navigate downwards in the tree to child #BranchNum.
%
% * Works also with lists.
down(zipper(List, Ctx), N,
     zipper(Child, [down_list(PredsR, Succs, N)|Ctx])) :-
    is_list(List), !,
    N1 is N-1,
    length(Preds, N1),
    append(Preds, [Child|Succs], List),
    reverse(Preds, PredsR), !.
    %replace_nth(List, N1, Gap, Child, List1).

down(zipper(X, Ctx), N, zipper(Child, [down(X1, Gap, N)|Ctx])) :-
    X =.. List,
    replace_nth(List, N, Gap, Child, List1),
    X1 =.. List1.

%% up(+Zipper, -Zipper1) is semidet.
% Navigate upwards in the tree.
up(zipper(X, [down(Parent, Gap, _)|Ctx]), zipper(Parent, Ctx)) :- !, X = Gap.
up(zipper(X, [down_list(PredsR, Succs, _)|Ctx]), zipper(Parent, Ctx)) :-
    reverse(PredsR, Preds),
    append(Preds, [X|Succs], Parent),
    !.

%% right(+Zipper, -Zipper1) is semidet.
% Navigate to the next sibling in a tree or a list.
right(zipper(X, [down_list(PredsR, [Y|Succs], N)|Ctx]),
      zipper(Y, [down_list([X|PredsR], Succs, N1)|Ctx])) :- !,
    N1 is N+1.

% @tbd Could be done much(!) more efficiently, currently O(n*n)
right(zipper(X, [down(C, Gap, N)|Ctx]), X2) :-
    up(zipper(X, [down(C, Gap, N)|Ctx]), X1),
    N1 is N+1,
    down(X1, N1, X2).

%% top(+Zipper, -Zipper1) is semidet.
% Navigate back to the root of our tree.
%
% @tbd Could be implemented more efficiently, too
top(zipper(X, []), zipper(X, [])) :- !.
top(X, X2) :- up(X, X1), top(X1, X2), !.

%% goto_function(+Zipper, ?Template, +Zipper1) is nondet.
% Find a function like Template in a project or file and return its body.
% If there is only a declaration available, the declaration will be returned.
goto_function(P, Function, P1) :-
    (   % Prefer to find a real definition first
        find_function(P, Function, P1),
        unzip(P1, Function, _),
        Function = function_declaration(_Params, _Null, Def, _A1, _Ai1, _F1),
        Def \= null
    ;   find_function(P, Function, P1),
        unzip(P1, Function, _),
        Function = function_declaration(_, _, null, _, _, _)
    ), !.

find_function(P, FunctionTemplate, P3) :-
    unzip(P, project(_Files, _A, _Ai, _Fi), _), !,
    down(P, 1, P1),
    down(P1, 1, P2),
    find_function1(P2, FunctionTemplate, P3).

find_function(P, FunctionTemplate, P4) :-
    unzip(P, source_file(global(_Funcs, _A1, _Ai, _F1), _A2, _Ai2, _F2), _), !,
    down(P, 1, P1),
    down(P1, 1, P2),
    down(P2, 1, P3),
    find_function1(P3, FunctionTemplate, P4).

find_function(zipper(FuncTempl, Ctx), FuncTempl, zipper(FuncTempl, Ctx)).

find_function1(zipper([], Ctx), _, zipper([], Ctx)).
find_function1(P, FunctionTemplate, P2) :-
    (   find_function(P, FunctionTemplate, P2)
    ;   right(P, P1),
        find_function1(P1, FunctionTemplate, P2)
    ).

%% distance_from_root(+Zipper, -Distance) is det.
% Return the current distance from the root node.
distance_from_root(zipper(_, Nav), Distance) :- length(Nav, Distance).

%% next_preorder(+Zipper, -Zipper)
% @tbd change name to next_tdlr
% Return the ``next'' node in a left-to-right traversal fashion.
% Algorithm ("Rechtssucher", German for right-seeker):
%     try down(1)
%     else while not try right() do up()

next_preorder(P1, P2) :- down(P1, 1, P2), !.
next_preorder(P1, P2) :- right_or_up(P1, P2).

right_or_up(P1, Pn) :- right(P1, Pn), !.
right_or_up(P1, Pn) :-
    up(P1, P2),
    right_or_up(P2, Pn).
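
The snippet above calls a replace_nth/5 helper that is not shown. A possible definition (0-based over the =.. list, so that branch 1 is the first argument, matching how down/3 and find_function/3 use it), plus a small worked query, is sketched below; both the helper and the example are my reconstruction, not part of the original Termite sources:

```prolog
%% replace_nth(?List, +N, ?Repl, ?Elem, ?List1) is semidet.
% List1 is List with Elem, its element at 0-based index N,
% replaced by Repl. (Assumed helper; the post omits its definition.)
replace_nth([X|Xs], 0, Repl, X, [Repl|Xs]) :- !.
replace_nth([X|Xs], N, Repl, Elem, [X|Ys]) :-
    N > 0,
    N1 is N - 1,
    replace_nth(Xs, N1, Repl, Elem, Ys).

% Example: focus on the second child of f(a, g(b, c)), then
% splice a modified subtree into the hole on the way back up.
%
% ?- zip(f(a, g(b, c)), Z0),
%    down(Z0, 2, Z1),               % Z1 now focuses on g(b, c)
%    unzip(Z1, g(b, c), Ctx),
%    up(zipper(g(b, x), Ctx), Z2),  % fill the hole with g(b, x)
%    unzip(Z2, Tree, []).
% Tree = f(a, g(b, x)).
```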

rupert...@googlemail.com

unread,
Jun 7, 2016, 11:12:26 AM6/7/16
to
My use cases are:

* Being able to efficiently search the data structures with various heuristics.

For example, if one 'class' has a reference to another, but there is no back-reference, I will heuristically assume that the latter class is owned by the first one, and that they have a relationship by composition. Unless this heuristic is overridden by the user, code generated to fetch instances of the former class will also return the latter embedded in the response.

Given a set of N classes, I need to be able to efficiently query the nature of the relationships between them, of which there may be N*N. Actually, it could be more than N*N since two classes may be related in more than one way, but in practice it will typically be a lot less than N*N.

* Being able to apply transformations on the AST data model; to annotate it with extra information; and to get it into a formats more suited to producing various concrete outputs.

I transform data models from Prolog -> Java -> StringTemplate at the moment. But having the transformation spread across 3 languages is not ideal, and in StringTemplate it's downright awkward. I want to try to get the transformation part done in Prolog and have much simpler templates.

Jan Burse

unread,
Jun 7, 2016, 11:14:58 AM6/7/16
to
Jan Burse schrieb:
The SWI-Prolog dict proposal would have the advantage that
it sorts the members. But it also has some extras that you
don't find in JSON. Namely, it has a type field.

?- X = f{a:b, c:d}, X=.. L.
L = [C'dict', f, a, b, c, d].

?- X = _{a:b, c:d}, X=..L.
X = _G1525{a:b, c:d},
L = [C'dict', _G1525, a, b, c, d].

This type field really sits at argument position 1, so one
can also do:

?- X = f{a:b, c:d}, arg(1, X, Y).
X = f{a:b, c:d},
Y = f.

?- X = _{a:b, c:d}, arg(1, X, Y).
X = Y{a:b, c:d}.

So these dicts are much closer to an XML tag, where you
also have key-value pairs, but in addition the tag itself
has a type.

But when you want to model the object model of some programming
language, you will also find types everywhere.
For example, each Java instance has a class field. And for each
Java class, the corresponding Java instance that models this
class has a superclass field.

So in JSON you could go:

An instance object:

{ "class": <the class>,
  ... rest of object ... }

A class:

{ "name": <the class name>,
  "superclass": <the super class>,
  ... rest of class ... }

But the problem is that a Java object model forms a graph, so
you cannot easily represent it directly as JSON. JSON is
only good if you have object trees, or documents. If you
want to represent a graph via JSON, you somehow need to model
nodes that point to other nodes, etc.

That's why there is also the notion of a JSON document, which
is used as the exchange format in some NoSQL databases.
I haven't got my hands on one yet, but the situation is similar
to XML: it's rather for documents, i.e. object trees, or, if
you like an old notion, records with subrecords and repeated
records.

Bye

Jan Burse

unread,
Jun 7, 2016, 11:16:46 AM6/7/16
to
Jan Burse schrieb:
> Thats why there is also the notion of a JSON document, and
> it is used as the exchange format in some NoSQL databases.
> But I didn't put my hands on it, but the situation is similar
> to XML, its rather for documents, i.e. object trees, or if
> you like an old notion, records with subrecords and repeated
> records.

Now I know why I had a gut feeling that an object model isn't
an AST: because it's essentially a graph and not a document.
Maybe you should go with RDF instead of JSON.

Bye

rupert...@googlemail.com

unread,
Jun 7, 2016, 11:38:09 AM6/7/16
to
On Tuesday, June 7, 2016 at 4:14:58 PM UTC+1, Jan Burse wrote:
> But problem is a Java object model forms a graph. So basically
> you cannot so easily directly represent it as JSON. JSON is
> only good if you have object trees, or documents. If you
> want to represent a graph via JSON, you need to model somehow
> nodes that point to other roots, etc..

It's a good point. The JSON data I would be most interested in working with would be a json-schema:

http://json-schema.org/

The concept of a 'reference' has to be made explicit in json-schema for precisely the reason you describe. Two classes 'a' and 'b' that have fields that refer to each other would be modelled like this:

{
"title": "a",
"type": "object",
"properties": {
"refb": { "$ref": "#/b" }
}
}

{
"title": "b",
"type": "object",
"properties": {
"refa": { "$ref": "#/a" }
}
}

This is precisely the sort of thing that I want to be able to query the data model for. If a and b refer to each other, I would heuristically assume they have a relationship by aggregation. When generating code to fetch an instance of a, I would probably return the contents of a with some sort of id in it that refers to an instance of b.

I call this 'slicing' a relational data model, and have found that deducing the slicing model, and code to slice/un-slice for serialization purposes can be done automatically.

For some generated REST API:

HTTP GET /api/a - finds all a's.

[{
fieldOfA: "someValue",
b: "1235" // Data model got sliced here, this is a ref to b.
}]

At the moment, my rules for slicing got a bit spread around between Prolog, Java and StringTemplate, as I mentioned before. It would be much better if I could make the slicing rules explicit in Prolog, then transformation to concrete syntax could proceed along 'abstract data model' -> slicing -> transform to java -> codegen.

Julio Di Egidio

unread,
Jun 7, 2016, 11:38:18 AM6/7/16
to
On Tuesday, June 7, 2016 at 3:55:57 PM UTC+2, burs...@gmail.com wrote:
> Am Dienstag, 7. Juni 2016 12:24:58 UTC+2 schrieb Julio Di Egidio:
> > << JSON parsing libraries have been observed to differ as to whether or
> > not they make the ordering of object members visible to calling
> > software. Implementations whose behavior does not depend on member
> > ordering will be interoperable in the sense that they will not be
> > affected by these differences. >>
> > [<https://tools.ietf.org/html/rfc7159#section-4>]
> >
> > Order is significant (in some languages) and SHOULD
> > be preserved: that is an interoperable implementation!
>
> There is only a SHOULD for the uniqueness of the
> member names, which is even not a MUST (sic!)(*).
>
> For the rest, what you have cited, I dont see a
> SHOULD or MUST.

You miss the point, as usual. That spec is not just badly written, it is bad, period: that notion of interoperability in particular is just upside down, but I also agree that those SHOULD should be MUST. That said, note that I was rather answering *your question*: whether a JSON parser/translator should preserve the order of properties or not. Yes, despite the spec, of course it should, as order MAY be significant.

EOD.

Julio

Jan Burse

unread,
Jun 7, 2016, 12:24:52 PM6/7/16
to
rupert...@googlemail.com schrieb:
> Its a good point. The JSON data I would be most interested in working with would be a json-schema:
>
> http://json-schema.org/

That's what I thought: you can separate type information
from the data model itself (good news if you have
zillions of instances).

So, metaphorically, in Prolog rather than JSON: instead of:

field(author, string, "Philip K Dick").

You can have:

field(author, "Philip K Dick").
meta-field(author, string).

But a field is not a self-standing unit; in Java it
is homed in a class or instance, respectively.

So you would have:

field(instance123, author, "Philip K Dick").
meta-field(class456, author, string).

Et voilà, this is already RDF. Just ignore that
you have field/3 and meta-field/3.

Bye

Jan Burse

unread,
Jun 7, 2016, 12:27:19 PM6/7/16
to
Julio Di Egidio schrieb:
> You miss the point, as usual.
And you are not able to take criticism, as usual.

Jan Burse

unread,
Jun 7, 2016, 12:40:24 PM6/7/16
to
Julio Di Egidio schrieb:
> You miss the point, as usual. That spec is not just
> badly written, it is bad, period: that notion of
> interoperability in particular is just up-side down,
> but I also agree that those SHOULD should be MUST.
> That said, note that I was rather answering*your
> question*: whether a JSON parser/translator should
> preserve the order of properties or not. Yes,
> despite the spec, of course it should, as order MAY
> be significant.

Under what circumstances will the order be significant?
What will the parser/translator do with the order?
Why can't it just build a map, and even choose the
representation of the map?

The parser might create a HashMap or a TreeMap, depending
on the size of the object or whatever. It might write out
a HashMap in a different order than a TreeMap. It should
not be forced to use a LinkedHashMap, which preserves the
input order of the attributes.

Also, a LinkedHashMap doesn't solve the problem: if the
JSON data structure is modified later, that changes the order.
In a TreeMap the order wouldn't change; it would
always be the lexical order of the key names.

The only crucial operation that needs an order is
equality. We want these two JSONs to be equal, don't we:

{a: 1,
b: 2}

- equals -

{b: 2,
a: 1}

SWI-Prolog dicts would solve it elegantly, in that they
keysort the object, and we can then use syntactic equality.
But different keysorts are possible: ascending,
descending, etc.

Fortunately SWI-Prolog is more precise than the RFC:

A dict can not hold duplicate keys. The dict is transformed
into an opaque internal representation that does not respect
the order in which the key-value pairs appear in the input
text. If a dict is written, the keys are written according
to the standard order of terms (see section 4.7.1).
http://www.swi-prolog.org/pldoc/man?section=dicts

So it's even over-cautious. It might have a different internal
organisation, without a TreeMap or LinkedHashMap, but it
specifies the written syntax. The result is that ==/2
is the same as =/2 on written dicts, provided the dicts
are ground:

X == Y

- iff -

term_to_atom(X, S1), term_to_atom(Y, S2), S1=S2


Bye

Jan Burse

unread,
Jun 7, 2016, 12:52:35 PM6/7/16
to
Jan Burse schrieb:
>
> X == Y
>
> - iff -
>
> term_to_atom(X, S1), term_to_atom(Y, S2), S1=S2

To get the above, I would suggest using '[]' instead of
_, since with _ we have some problems:

?- X = _{a: 1, b: 2}, Y = _{b: 2, a: 1}, X==Y.
false

With the '[]' we don't have this problem. We get the
following as expected:

?- X = '[]'{a: 1, b: 2}, Y = '[]'{b: 2, a: 1}, X==Y.
X = Y, Y = '[]'{a:1, b:2}.

But there is probably a little glitch in the SWI-Prolog
parser. The ISO standard says [] is an ordinary atom, so
I don't know why this doesn't work:

?- X = []{a: 1, b: 2}, Y = []{b: 2, a: 1}, X==Y.
ERROR: Syntax error: Operator expected
ERROR: X = [
ERROR: ** here **
ERROR: ]{a: 1, b: 2}, Y = []{b: 2, a: 1}, X==Y .

OK, we have to go with the longer '[]'. Let's see
whether syntactic equality is logical equality for
written terms:

?- X = '[]'{a: 1, b: 2}, Y = '[]'{b: 2, a: 1},
term_to_atom(X, S1), term_to_atom(Y, S2), S1=S2.
X = Y, Y = '[]'{a:1, b:2},
S1 = S2, S2 = '\'[]\'{a:1,b:2}'.

Would be a little less fugly with the real []. But
anyway, works as advertised.

Bye

Jan Burse

unread,
Jun 7, 2016, 1:00:04 PM6/7/16
to
Jan Burse schrieb:
>
> Thats what I thought, you can separate type information
> from the data model itself (this is good news, if you
> have zillion of instances).

Or you do fugly stuff as follows, where you mix types
and data; you might do this deliberately even if you
understand JSON:

http://www.swi-prolog.org/pldoc/doc_for?object=term_to_json/2

Why is there a need for a type field? Only because
of variables? Most likely to get a uniform type
check, i.e. non-defaulty modelling.

Atom, Integer and Float would go through without
being wrapped into an object with a type field.

A compound probably wouldn't need the type field
either; an object with functor and args fields could
just be recognized as a compound.

Python is good at recognizing classes this way,
i.e. by the existence/non-existence of some field.
You can plug in such "classes" in Python.

Bye


Jan Burse

unread,
Jun 7, 2016, 1:02:30 PM6/7/16
to
Jan Burse schrieb:
> Or you do fugly stuff as follows, where you mix types
> and data, you might do this deliberately even if you
> understand JSON:
>
> http://www.swi-prolog.org/pldoc/doc_for?object=term_to_json/2

The source code of term_to_json/2 reveals there is more
going on; it also recognizes @(Symbol).

Jan Burse

unread,
Jun 7, 2016, 1:21:21 PM6/7/16
to
I would rather prefer to use the json_read_dict/[2,3]
predicates with a tag, so that the tags are not turned
into variables. For example the tag '[]' would give:


'[]'{
author: 'Philip K Dick',
works: [
'[]'{
title: 'The Man in the High Castle'
},
'[]'{
title: 'Do Androids Dream of Electric Sheep'
}
]
}

The reason is that variables denote open records, so you don't
have syntactic equality anymore. ==/2 therefore doesn't work,
but =/2 might.

But although =/2 might work, it still doesn't work
after serializing a term:

?- X = _{a: 1, b: 2}, Y = _{b: 2, a: 1},
term_to_atom(X,S1), term_to_atom(Y,S2), S1=S2.
false.

Variables also cause trouble when you assert/retract
terms, as they may become unrelated copies then.

Falco Nogatz schrieb:
> With SWI-Prolog v7.x it is possible to use dicts to represent JSON terms:
>
> _{
> author: 'Philip K Dick',
> works: [
> _{
> title: 'The Man in the High Castle'
> },
> _{
> title: 'Do Androids Dream of Electric Sheep'
> }
> ]
> }

Jan Burse

unread,
Jun 7, 2016, 1:25:39 PM6/7/16
to
Jan Burse schrieb:
>
> So you would have:
>
> field(instance123, author, "Philip K Dick").
> meta-field(class456, author, string).
>
> Eh voila this is already RDF. Just ingnore that
> you have field/1 and meta-field/1.

If you want to do some graph matching, you cannot
use the SWI-Prolog dicts anyway.

For example the following doesn't work as of now:

?- X = _{a:1, b:2}, Y = _{b:2, c:3}, X = Y.
false.

So an open record is only a record that is open
in the tag. You won't get a union record here, i.e.

X = Y = _{a:1, b:2, c:3}

Markus Triska

unread,
Jun 7, 2016, 3:41:14 PM6/7/16
to
Hi Rupert,

"rupert...@googlemail.com" <rupert...@googlemail.com> writes:

> Is it not much more efficient to represent a set of fields as
> predicates, rather than to put them in a list?

My experience is that with such syntax trees, you often either process
it all anyway by walking over it, or look through certain parts of the
tree in search of elements with specific properties. A fact base may
(due to argument indexing) help to make this faster if what you search
for is to a large extent already known, but it also comes with some
hefty drawbacks. For example, it makes testing your predicates and
interactive queries more complicated, and requires auxiliary elements to
provide sufficient links between different nodes, as you have shown.
Also, it is neither easy nor efficient to change parts of tree if it is
represented by Prolog facts, and your code may be harder to reuse etc.

> I also found that working with lists of fields requires writing
> explicit list recursions, which complicates the code. Back-tracking
> over a set of predicates would provide the iteration for me.

True, and so would using a very intelligent set of interface predicates
that let you more conveniently express queries over a tree that is
expressed as a single compound term.

I highly recommend to check out library(xpath) in SWI-Prolog to see how
this can be done, using a domain-specific language to formulate queries
over nested structures:

http://eu.swi-prolog.org/pldoc/man?section=xpath

The single interface predicate xpath/3 may sound quite unimpressive at
first, but boy does it make querying XML DOM trees a breeze! Try it!
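
For example, querying a parsed DOM term might look like this (a minimal sketch; the sample data is made up, but xpath/3 and the element/3 DOM representation are SWI-Prolog's):

```prolog
:- use_module(library(xpath)).

% A hand-built DOM term in the shape produced by SWI's XML parser:
% element(Tag, Attributes, Children).
sample_dom(element(works, [],
    [ element(work, [],
        [element(title, [], ['The Man in the High Castle'])]),
      element(work, [],
        [element(title, [], ['Do Androids Dream of Electric Sheep'])])
    ])).

% Enumerate the text of every title element, at any depth:
% ?- sample_dom(DOM), xpath(DOM, //title(text), T).
% T = 'The Man in the High Castle' ;
% T = 'Do Androids Dream of Electric Sheep'.
```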

All the best,
Markus

--
comp.lang.prolog FAQ: http://www.logic.at/prolog/faq/

Markus Triska

unread,
Jun 7, 2016, 3:42:08 PM6/7/16
to
j4n bur53 <janb...@fastmail.fm> writes:

> Look see, (<)/2 is not monotonic:
>
> ?- X=1, X<2.
> true
> ?- X<2, X=1.
> exception
>

"Not monotonic" means that adding a constraint yields solutions where
the predicate previously *failed*. Without the X=1 constraint in front,
X<2 does not fail, but throws an instantiation error. It does this
precisely to avoid non-monotonicity. For example, (<)/2 would be
non-monotonic if we had:

?- X<2, X=1.
false. % This would be incomplete! Therefore, an exception is thrown.

?- X=1, X<2, X=1.
true.

If that were the case, then adding the constraint X=1 in front of the
query would yield a solution where the predicate previously said there
are none. For this reason, (<)/2 throws an instantiation error in this
case, to reflect that too little is known about its arguments to make a
sound decision. Note that instantiation errors *cannot* simply be
replaced by silent failure, because adding more constraints may make the
error go away! In contrast, domain errors *can* be replaced by silent
failures: No further constraints can ever remove a domain error.

But (<)/2 is still not a true relation. From true relations, we expect
more. In particular, a true relation should not simply leave the realm
of logic by throwing instantiation errors. Instead, we expect more
useful answers also for very general queries. Consider for example the
CLP(FD) constraint (#<)/2 in such cases:

?- X #< 2.
X in inf..1.

?- X #< Y.
X#=<Y+ -1.

There are ways to make this constraint monotonic too.

Jan Burse

unread,
Jun 7, 2016, 5:33:02 PM6/7/16
to
Markus Triska schrieb:
> But (<)/2 is still not a true relation. From true relations, we expect
> more. In particular, a true relation should not simply leave the realm
> of logic by throwing instantiation errors.

Well now, here we have it, there is still something wrong
with (<)/2. It is not "non-monotonic", but it "leaves the
realm of logic".

Hm. In my opinion the instantiation error is only a message
that the result set would be infinite and that the predicate
refuses to enumerate it.

There is, for example, the following middle ground between (<)/2,
which refuses infinite result sets, and (#<)/2, which returns
infinite results intensionally, as a constraint.

less(X, Y) :- integer(X), integer(Y), !, X < Y.
less(X, Y) :- integer(Y), !, between(1,inf,H), X is Y-H.
less(X, Y) :- integer(X), !, between(1,inf,H), Y is X+H.
less(X, Y) :- /* some cantor enumeration */

We would then have:
?- X = 1, less(X,2).
X = 1

?- less(X,2), X = 1.
X = 1

Unfortunately, when I do a redo, the second query hangs. Also,
this myopic test at the toplevel doesn't say much: combinations
of less/2 might not work that would work with (#<)/2.

The "leaves the realm of logic" is rather a "leaves the realm
of computation" that "still stays in the realm of mathematics",
unless you deny that infinity is part of mathematics.

In 1900, David Hilbert proposed the solvability of
all Diophantine equations as the tenth of his fundamental
problems. In 1970, Yuri Matiyasevich solved it negatively,
by proving that a general algorithm for solving all
Diophantine equations cannot exist.
https://en.wikipedia.org/wiki/Hilbert's_tenth_problem

But these integers, which already cause so many problems,
are on an equal footing with structures, a.k.a. non-Datalog. It
is easy to see that we could encode structures, i.e. Prolog
terms, as integers only.

So if you leave finite domains (Datalog, RDF/SPARQL, etc.)
you get into a domain of infinite size; and then, even if you
can model it as Horn clauses, i.e. purely, and even if you
use something other than depth-first search, you only get
semi-decidability.

It is not only Diophantine equations that are undecidable
yet semi-decidable; Horn clauses, a.k.a. pure Prolog,
are also undecidable but semi-decidable.

For more guarantees you would need a regime of certifying
some properties for each predicate you introduce. So
that you get a pile of definitions and certificates until
your main application is certified.

Bye

rupert...@googlemail.com

unread,
Jun 8, 2016, 5:05:51 AM6/8/16
to
On Tuesday, June 7, 2016 at 5:24:52 PM UTC+1, Jan Burse wrote:
> So you would have:
>
> field(instance123, author, "Philip K Dick").
> meta-field(class456, author, string).

Yes. I don't actually need to model the instance data, just the data model that a class describes:

meta-field(classAuthor, name, string).

I also need to model where one class has a field holding a reference to another:

meta-field(book, author, ref(classAuthor)).

And where one class has a reference to >1 instances of another. I would probably model the collection type as far as stating that it is a list, set, bag or dict:

meta-field(classAuthor, books, set(book)).

If I were to parse the various input models and transform them into a description like the above, by assert-ing the meta-fields, then I would end up with a description of a data model that could be queried very efficiently.

Holding the fields in a list is not efficient, since each time I want to find out whether a field holds, say, a relation to another class, I need to make a linear scan down the list. A SWI dict would be log(n). But modelling each field as a Prolog predicate would take advantage of compilation and indexing, and provide iteration by standard Prolog search.

I think the zipper-tree style would be good for applying transformations to an AST, but the flat predicate style would be simpler and more efficient for querying. What to do? Maybe use both.
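
Sketching that flat predicate style (my illustration, not tested against a real model; I write meta_field with an underscore because a hyphenated meta-field would parse as the (-)/2 operator):

```prolog
:- dynamic meta_field/3.

% Load a data model by asserting one fact per field.
load_model :-
    assertz(meta_field(classAuthor, name, string)),
    assertz(meta_field(classAuthor, books, set(book))),
    assertz(meta_field(book, title, string)),
    assertz(meta_field(book, author, ref(classAuthor))).

%% refers_to(?A, ?B, ?Field) is nondet.
% Class A references class B through Field. First-argument
% indexing makes the A-bound case fast.
refers_to(A, B, Field) :- meta_field(A, Field, ref(B)).
refers_to(A, B, Field) :- meta_field(A, Field, set(B)).

%% aggregation(?A, ?B) is nondet.
% Heuristic: mutual references suggest aggregation.
aggregation(A, B) :- refers_to(A, B, _), refers_to(B, A, _).

% ?- load_model, aggregation(classAuthor, book).
% true.
```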

rupert...@googlemail.com

unread,
Jun 8, 2016, 5:12:09 AM6/8/16
to
On Tuesday, June 7, 2016 at 5:40:24 PM UTC+1, Jan Burse wrote:
> The parser might create a HashMap or a TreeMap, depending
> on the size of the object or what ever. He might write
> HashMap in a different order than a TreeMap. He should
> not be forced to use a LinkedHashMap which preseves the
> input order of the attributes.
>
> Also LinkedHashMap doesn't solve the problem, if later
> the JSON datastructure is modified, this changes the order.
> In a TreeMap the order wouldn't be changed, it would
> always be the lexical order of the key names.

For code generation, I do tend to use LinkedHashMap. LinkedHashMap means that whatever order I put the fields in, I get them out in the same order. A TreeMap would also work fine, but I see no reason to sort the fields alphabetically; in fact if a field were renamed it might move position as a result of alphabetical sorting, which may be annoying.

When generating code, I took the view that generated code should look and feel identical to hand-written code. It should also be possible to progressively disable code generation across a code-base, as it evolves and is customized. For those reasons, I check in generated code under source control, and also pass it through static analysis tools like Sonar. I don't want stuff jumping around in source files with no semantic difference, it will just introduce unnecessary diffs into the source control history, and prevent branches from merging easily.

rupert...@googlemail.com

unread,
Jun 8, 2016, 5:23:00 AM6/8/16
to
On Tuesday, June 7, 2016 at 5:40:24 PM UTC+1, Jan Burse wrote:
> The crucial operation that needs an order is only
> equals. We want these to JSONs to be equal, don't we:
>
> {a: 1,
> b: 2}
>
> - equals -
>
> {b: 2,
> a: 1}

Yes. I haven't so far needed to check whether two data structures are equal or not.

I might be interested in querying whether two data structures are equal, ignoring their names.

What I would be more interested in querying is whether one data structure is a super-set of another, inferring that it could be a sub-class. Or querying to find a class that is the most general super-class of two other classes. I have a feeling this is related to the most general unifier, but I am not exactly sure how to represent it as such in Prolog.
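
One way to phrase the super-set query over the meta-field facts discussed above (a sketch with illustrative class names; meta_field is spelled with an underscore so it is a legal functor):

```prolog
% Illustrative fact base: 'named' has a subset of classAuthor's fields.
meta_field(named, name, string).
meta_field(classAuthor, name, string).
meta_field(classAuthor, books, set(book)).

%% field_superset(+Sub, +Super) is semidet.
% Every field of Super also occurs in Sub with the same type,
% so Sub could plausibly be treated as a subclass of Super.
field_superset(Sub, Super) :-
    Sub \= Super,
    forall(meta_field(Super, Name, Type),
           meta_field(Sub, Name, Type)).

% ?- field_superset(classAuthor, named).
% true.
```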

rupert...@googlemail.com

unread,
Jun 8, 2016, 5:32:56 AM6/8/16
to
On Tuesday, June 7, 2016 at 8:41:14 PM UTC+1, Markus Triska wrote:
> A fact base may
> (due to argument indexing) help to make this faster if what you search
> for is to a large extent already known, but it also comes with some
> hefty drawbacks. For example, it makes testing your predicates and
> interactive queries more complicated, and requires auxiliary elements to
> provide sufficient links between different nodes, as you have shown.
> Also, it is neither easy nor efficient to change parts of tree if it is
> represented by Prolog facts, and your code may be harder to reuse etc.

Yes, I would have to use assert/retract to modify the fact base.

> I highly recommend to check out library(xpath) in SWI-Prolog to see how
> this can be done, using a domain-specific language to formulate queries
> over nested structures:
>
> http://eu.swi-prolog.org/pldoc/man?section=xpath
>
> The single interface predicate xpath/3 may sound quite unimpressive at
> first, but boy does it make querying XML DOM trees a breeze! Try it!

Thanks for this suggestion. I think the fact base approach appeals to me, because it seems simpler than writing recursive predicates over lists. It would be efficient, but I think even the N*N*F (number of classes squared times number of fields per class) query to examine the relation between classes is not overly slow in my current implementation which uses lists of fields.

If I can draw some inspiration from this xpath library to make writing queries simpler then that would be very useful.

Julio Di Egidio

unread,
Jun 8, 2016, 5:33:11 AM6/8/16
to
On Tuesday, June 7, 2016 at 6:40:24 PM UTC+2, Jan Burse wrote:
> Julio Di Egidio schrieb:
<snip>
> > That said, note that I was rather answering*your
> > question*: whether a JSON parser/translator should
> > preserve the order of properties or not. Yes,
> > despite the spec, of course it should, as order MAY
> > be significant.
>
> Under what circumstances will the order be significant?

You keep snipping and missing the same point: JSON is a data interchange format and a more proper question would be, *for which languages* will the order be significant, i.e. for which *users*. And the answer really is, for most! In most languages the order of properties is the order of declaration, not any lexicographic ordering. Think also dynamic languages, not just static ones.

> What will the parser/translator do with the order,

For the umpteenth time, just *preserve* it. Again, JSON is a data interchange format... but this really holds for *any* interchange format!

Anyway, to close on JSON: truth is, JSON is not even faithful to sparse arrays (some languages, e.g. JavaScript, have sparse arrays), so, as already reflected by some comments up-thread, it turns out that implementing a proper interchange protocol with JSON really requires building an extra layer on top of it (e.g. a custom encoding for member types and indexes) in all but the most trivial cases. Then, by the way, it becomes less and less clear why one should choose JSON over, say, XML: the greater simplicity is quite fake.

HTH, as I won't explain it yet again.

Julio

Jan Burse

unread,
Jun 8, 2016, 5:50:16 AM6/8/16
to
Julio Di Egidio schrieb:
> You keep snipping and missing the same point: JSON is
> a data interchange format and a more proper question
> would be,*for which languages* will the order be
> significant, i.e. for which*users*. And the answer
> really is, for most! In most languages the order of
> properties is the order of declaration, not any lexicographic
> ordering. Think also dynamic languages, not just static ones.

Can you give an example of a (modern) language where the
order of field declarations matters (except maybe C)?
And how would this influence JSON?

For example in Java it doesn't matter whether you say:
class Point {
    int x;
    int y;
}

-- or --

class Point {
    int y;
    int x;
}

You access the object via .x and .y anyway; after
class loading the fields might have different slot indices
depending on the declaration order, but this is not
important at runtime.

There are some reflection APIs where you get the declared
fields, but the bytecode specification and the API specifications
say that there is no guarantee that the order of the returned
lists reflects the declaration order. (See for yourself,
and see also other languages (*).)

So tools are allowed to reorder; for example an obfuscator might
reorder. The only cases where you cannot reorder are:

1) if you have initializers that depend on each other
2) if you serialize without writing member names
3) if 1) and 2) do not apply, how does the order influence JSON?

But neither is the case for JSON. To make this a case
for JSON, somebody would need to abuse JSON in the following
ways (just a sketch):

1) using JSON to transmit expressions, i.e. initializers etc.
2) using arrays in place of objects, where the position in
   the array depends on some member name position
3) if 1) and 2) are not there, what else could make you
   believe member order is important?

Bye

(*)
For C# respective CLI see:

http://stackoverflow.com/questions/8067493/if-getfields-doesnt-guarantee-order-how-does-layoutkind-sequential-work

You would need to mark a class if you want to preserve order:

ECMA 335 does say "A class marked sequentiallayout guides the loader to
preserve field order as emitted",



rupert...@googlemail.com

unread,
Jun 8, 2016, 6:02:01 AM6/8/16
to
On Wednesday, June 8, 2016 at 10:33:11 AM UTC+1, Julio Di Egidio wrote:
> By the way, it becomes less and less clear why one should choose JSON over, say, XML: the greater simplicity is quite fake.

Less typing and direct use within javascript.

In the bad old days of SOAP, if I wanted to call a webservice I downloaded its WSDL, generated a Java client from it, wrapped that up in a proxy, and then got on with trying to achieve the task at hand. In these enlightened times of JSON/REST we had to re-invent XML schemas as json-schema and have a whole new standards war around describing APIs. Not that I'd want to go back to SOAP :-(

Jan Burse

unread,
Jun 8, 2016, 6:06:29 AM6/8/16
to
Julio Di Egidio schrieb:
> Anyway, to close on JSON: truth is, JSON is not
> even faithful to sparse arrays (some languages, e.g.
> JavaScript, have sparse arrays), so, as already
> reflected by some comments up-thread, it turns out
> that implementing a proper interchange protocol with
> JSON really requires building an extra layer on top
> of it (e.g. a custom encoding for member types and
> indexes) in all but the most trivial cases. Then,
> by the way, it becomes less and less clear why one
> should choose JSON over, say, XML: the greater
> simplicity is quite fake.

The array problem is mentioned on Wikipedia, see here:
https://en.wikipedia.org/wiki/JSON#cite_note-undefinedType-18

I only worry that if the member ordering requirement
were true, then SWI-Prolog dicts wouldn't work
as currently designed.

The ordering of the fields has the benefit that via
binary search fields can be found in O(log n) time.
https://github.com/SWI-Prolog/swipl-devel/blob/7c31e0e5e9ec66cd1e67244577403d10c589cd68/src/pl-dict.c#L222-L249
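
The O(log n) lookup can be sketched in Prolog itself (my own illustration, not SWI-Prolog's actual C code), assuming the flat layout dict(Tag, K1, V1, K2, V2, ...) with sorted keys that =../2 reveals:

```prolog
% Binary search over the key-value pairs of a sorted dict term.
% Pair I has its key at argument 2*I and its value at 2*I+1
% (argument 1 is the tag).
dict_get(Dict, Key, Value) :-
    functor(Dict, _, Arity),
    N is (Arity - 1) // 2,              % number of key-value pairs
    bsearch(Dict, Key, 1, N, Value).

bsearch(Dict, Key, Lo, Hi, Value) :-
    Lo =< Hi,
    Mid is (Lo + Hi) // 2,
    KPos is 2*Mid,
    arg(KPos, Dict, K),
    compare(Order, Key, K),
    (   Order = (=) -> VPos is KPos + 1, arg(VPos, Dict, Value)
    ;   Order = (<) -> Hi1 is Mid - 1, bsearch(Dict, Key, Lo, Hi1, Value)
    ;   Lo1 is Mid + 1, bsearch(Dict, Key, Lo1, Hi, Value)
    ).
```

For example, ?- dict_get(dict(f, a, 1, b, 2), b, V). finds the pair for b in logarithmic time; a missing key simply fails.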



Jan Burse

unread,
Jun 8, 2016, 6:07:48 AM6/8/16
to
Jan Burse schrieb:
> The ordering of the fields has the benefit that via
--> The re-ordering of the fields has the benefit that via

Julio Di Egidio

unread,
Jun 8, 2016, 6:33:00 AM6/8/16
to
On Wednesday, June 8, 2016 at 11:50:16 AM UTC+2, Jan Burse wrote:
> Julio Di Egidio schrieb:
> > You keep snipping and missing the same point: JSON is
> > a data interchange format and a more proper question
> > would be, *for which languages* will the order be
> > significant, i.e. for which *users*. And the answer
> > really is, for most! In most languages the order of
> > properties is the order of declaration, not any lexicographic
> > ordering. Think also dynamic languages, not just static ones.
>
> Can you give an example of a (modern) language where the
> order of the declaration of fields matters (except maybe C)?

For example in JavaScript:

var obj = {};
obj.beta = "...";
obj.alpha = "...";

Then Object.keys(obj) returns [ "beta", "alpha" ].

(In fact, not all JS implementations are guaranteed to honour the order, but that is again a story, a long story indeed, of poor standards and poor implementations.)

Or, indeed, think Prolog: the order of declaration of predicate clauses is of course significant, as you do know.

But, really, how preserving order can be useful is not your concern, as you cannot pretend to know all languages and all relevant usage patterns, can you? Sometimes it just is significant, and that is all you need to know in order to concoct a proper interchange format.

> And how this would influence JSON?

It doesn't: for the umpteenth plus one time, you just *preserve* it!

> ECMA 335 does say "A class marked sequentiallayout guides the loader to
> preserve field order as emitted",

Right, that is another example...

Julio

Julio Di Egidio

unread,
Jun 8, 2016, 6:39:21 AM6/8/16
to
On Wednesday, June 8, 2016 at 12:02:01 PM UTC+2, rupert...@googlemail.com wrote:
> On Wednesday, June 8, 2016 at 10:33:11 AM UTC+1, Julio Di Egidio wrote:
> > By the way, it becomes less and less clear why one should choose JSON over, say, XML: the greater simplicity is quite fake.
>
> Less typing and direct use within javascript.

Bollocks, but you just snip the very explanation and reasons, indeed misrepresenting my point by capitalising that "by the way". Please be more careful:

I said "implementing a proper interchange protocol with JSON really requires building an extra layer on top of it (e.g. a custom encoding for member types and indexes) in all but the most trivial cases. Then, by the way, it becomes less and less clear why one should choose JSON over, say, XML"...

> In the bad old days of SOAP

And you are now conflating XML and web services.

Julio

Julio Di Egidio

unread,
Jun 8, 2016, 6:50:50 AM6/8/16
to
That is wrong on two accounts:

Firstly, if dicts are inadequate they are inadequate, period: your goal is to implement some requirements, not to compromise in order to make your own life easier.

Secondly, you are missing the distinction between specification and implementation: dicts might even work, it is the data representation that needs tweaking (which might include the use of some indexes, etc.).

Julio

rupert...@googlemail.com

unread,
Jun 8, 2016, 9:47:05 AM6/8/16
to
On Wednesday, June 8, 2016 at 11:39:21 AM UTC+1, Julio Di Egidio wrote:
> Bollocks, but you just snip the very explanation and reasons, indeed misrepresenting my point by capitalising that "by the way". Please be more careful:
>
> I said "implementing a proper interchange protocol with JSON really requires building an extra layer on top of it (e.g. a custom encoding for member types and indexes) in all but the most trivial cases. Then, by the way, it becomes less and less clear why one should choose JSON over, say, XML"...

Sorry. But I do agree with what you are saying.

Julio Di Egidio

unread,
Jun 8, 2016, 10:10:19 AM6/8/16
to
Should that bother me, also considering that your disagreement comes with no reasons? As a matter of fact, you do not even know what you are doing, as testified by the original subject, nor have you ever implemented anything like this, otherwise you'd know for yourself. As I keep saying, software engineering is the most complex engineering there is, and you just won't guess it, not even in several years' time.

There are in fact two recurring fallacies going hand in hand, here as in most public discussions: that you ultimately only accept arguments by your authorities, and the other side of the same coin, the Dunning-Kruger effect. Both imply no actual reasoning capability.

No worries, just go on and do it, that is the best lesson...

*Plonk*

Julio

rupert...@googlemail.com

unread,
Jun 8, 2016, 11:51:11 AM6/8/16
to
On Wednesday, June 8, 2016 at 3:10:19 PM UTC+1, Julio Di Egidio wrote:
> > Sorry. But I do agree with what you are saying.
>
> Should that bother me, also considering that your disagreement comes with no reasons? As a matter of fact, you do not even know what you are doing, as testified by the original subject, nor have you ever implemented anything like this, otherwise you'd know for yourself. As I keep saying, software engineering is the most complex engineering there is, and you just won't guess it, not even in several years' time.

Eh? I said "I do agree with you", not that I disagree.

What I was trying to point out is that JSON has taken over from XML in web services because it is perceived to be more convenient for developers to work with: less typing, and it's built into JavaScript. It doesn't really matter that this has been something of a step backwards, because the way our industry seems to go about choosing the currently fashionable tool-chain isn't based on sound engineering, but is more of a beauty contest.

At any rate I'm currently most interested in working with data models defined in json-schema, XML schema, Java and SQL.

I have implemented something like this before, and used it to generate thousands of lines of fully tested code that is running in production, and done so much quicker than other development teams. I'm now thinking about round 2, how I can improve on what I have already done.

I have not done a lot of work ASTs in Prolog though - hence my question about what the different approaches are.

Jan Burse

unread,
Jun 8, 2016, 12:30:28 PM6/8/16
to
Julio Di Egidio schrieb:
> Or, indeed, think Prolog: the order of declaration
> of predicate clauses is of course significant, as
> you do know.

Most likely you have no clue what we are talking
about concerning JSON, and worse, you also have
no clue about Prolog.

The clauses inside a predicate, yes: since there
are cuts, and depth-first search is not complete etc.,
they are by specification executed in input order.

But the order of predicates inside a Prolog
text is hardly relevant. It is not even necessary
for forward references in the majority of Prolog
systems nowadays.

So the following two variants work in Prolog
systems, even primitive ones such as GNU Prolog,
the same way:

Variant 1: rev/2 before concatenate/3

rev([], []).
rev([X|Rest], Ans) :-
rev(Rest, L),
concatenate(L, [X], Ans).

concatenate([], L, L).
concatenate([X|L1], L2, [X|L3]) :-
concatenate(L1, L2, L3).

Variant 2: rev/2 after concatenate/3

concatenate([], L, L).
concatenate([X|L1], L2, [X|L3]) :-
concatenate(L1, L2, L3).

rev([], []).
rev([X|Rest], Ans) :-
rev(Rest, L),
concatenate(L, [X], Ans).

GNU Prolog has the most burden, since it has a
difficult WAM pipeline. Other Prolog systems might
solve the problem with some lazy predicate lookup.

Here is some test run with GNU Prolog:

Variant 1:
?- rev("abc", X).

X = [99,98,97]

Variant 2:
?- rev("abc", X).

X = [99,98,97]

ABC. (Acronym Bitch Chew)

Bye

Jan Burse

unread,
Jun 8, 2016, 12:51:33 PM6/8/16
to
Hi,

Since the order of predicates in a Prolog text is
not relevant, some Prolog systems don't even bother
to reproduce the order of predicates when doing
a listing/0.

So they don't use a LinkedHashMap or similar, or
if they do use one, they don't use it for listing/0.
Take SWI-Prolog and the two variants of nrev. I
get the following results for listing/0:

Welcome to SWI-Prolog (Multi-threaded, 64 bits, Version 7.3.22)
Copyright (c) 1990-2015 University of Amsterdam, VU Amsterdam

Variant 1: rev/2 before concatenate/3
?- listing.
[...]
rev([], []).
rev([C|A], D) :-
rev(A, B),
concatenate(B, [C], D).
[...]
concatenate([], A, A).
concatenate([A|B], C, [A|D]) :-
concatenate(B, C, D).

Variant 2: rev/2 after concatenate/3
?- listing.
[...]
rev([], []).
rev([C|A], D) :-
rev(A, B),
concatenate(B, [C], D).
[...]
concatenate([], A, A).
concatenate([A|B], C, [A|D]) :-
concatenate(B, C, D).

In Jekejeke Prolog I have gone through the perils of
reproducing the order, but only as an extra feature
for the end-user, since it is not needed to execute
the predicates:

Jekejeke Prolog 2, Development Environment 1.1.4
(c) 1985-2016, XLOG Technologies GmbH, Switzerland

Variant 1: rev/2 before concatenate/3
?- listing.
% user

rev([], []).
rev([X|Rest], Ans) :-
rev(Rest, L),
concatenate(L, [X], Ans).

concatenate([], L, L).
concatenate([X|L1], L2, [X|L3]) :-
concatenate(L1, L2, L3).

Variant 2: rev/2 after concatenate/3
% user

concatenate([], L, L).
concatenate([X|L1], L2, [X|L3]) :-
concatenate(L1, L2, L3).

rev([], []).
rev([X|Rest], Ans) :-
rev(Rest, L),
concatenate(L, [X], Ans).

But the effort of doing this is feasible, since I
have only a single predicate table. If you have
zillions of dicts, however, it might be prohibitive to
also preserve the order of the key-value pairs in these
dicts, if it is only used for cosmetics during output.

So I think the SWI-Prolog decision isn't the worst.
It could use C'dict' and C'dictordered' to distinguish
different dicts, and provide an option during json_read_dict
to ask which dicts are created.

The C'dictordered' could preserve the order, with
the disadvantage that accessing a key is O(n) and
not O(log n), and the further disadvantage that it
doesn't have an equality modulo keysort.

C'dictordered' would be used in some rare cosmetic
use cases. If there is no such use case, drop it.

XYZ. (Xerxes You Zap)

Bye



Jan Burse schrieb:

Jan Burse

unread,
Jun 8, 2016, 12:53:15 PM6/8/16
to
Jan Burse schrieb:
> And a further disadvantage that they
> don't have an equality modulus keysort.
Or provide a more expensive equality also
for C'dictordered'. But that would violate
term_to_atom/2.

Bye

Markus Triska

unread,
Jun 8, 2016, 1:25:18 PM6/8/16
to
"rupert...@googlemail.com" <rupert...@googlemail.com> writes:

> I have not done a lot of work ASTs in Prolog though - hence my
> question about what the different approaches are.

I would again like to emphasize that your question is a very good one.

One issue I would also like to make explicit is uniformity of the tree.
This is orthogonal to choosing a pure representation and simply means
that if there is a natural way to make the AST uniform, you should seize
the opportunity. For example, in the SWI-Prolog XML libraries, elements
are of the form element(Name, As, Es), where As is a list of attributes,
and Es is a list of elements. This uniformity is very valuable.
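
A sketch of how this uniformity pays off (my own illustration, built on the element/3 shape described above): because every node has the same form, one generic predicate can enumerate all descendants of any tree.

```prolog
:- use_module(library(lists)).  % member/2

% element_descendant(+Tree, -Descendant): Descendant is any
% element occurring below Tree, at any depth. No knowledge of
% tag names or attributes is needed.
element_descendant(element(_, _, Es), E) :-
    member(E0, Es),
    (   E = E0
    ;   element_descendant(E0, E)
    ).
```

For example, ?- element_descendant(element(html, [], [element(body, [], [])]), D). yields D = element(body, [], []).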

All the best,
Markus

Julio Di Egidio

unread,
Jun 8, 2016, 2:17:40 PM6/8/16
to
On Wednesday, June 8, 2016 at 5:51:11 PM UTC+2, rupert...@googlemail.com wrote:
> On Wednesday, June 8, 2016 at 3:10:19 PM UTC+1, Julio Di Egidio wrote:
> > > Sorry. But I do agree with what you are saying.
> >
> > Should that bother me, also considering that your disagreement comes with no reasons? <snip>
>
> Eh? I said "I do agree with you", not that I disagree.

Right, my turn to say I am sorry.

To my partial justification, it is not easy to discuss with continuous disturbance and trolling: I'll try and be more careful in the future, and for now I am kill-filing JB once and for all.

Julio

Jan Burse

unread,
Jun 8, 2016, 2:18:08 PM6/8/16
to
Not necessarily. You have =../2 in Prolog already.

Also, XML was conceived as the eXtensible Markup Language,
and there are clumsy so-called language bindings where
XML is not parsed in itself, but just used in a transient
way to map to some language-binding object model.
ruperlssmith mentioned that already.

But if you deal with XML directly, there is a representation
element(Name, As, Es) which is eXtensible in the sense
that you don't even need a DTD: it can capture any tag name,
any attribute name, and any hierarchical structure that
comes along.

But that's not an AST belonging to a specific domain grammar.
It can be viewed as a framework for ASTs of different
domain grammars. Frameworks are valuable, yes, if they don't
cause too much overhead. Prolog itself is also a framework,
don't forget about that. You could also search a tree with
=../2, right? (*)

Bye

(*)
Besides JSON, XML and the like, when the AST is for programming
languages, I would anyway go for a format that is closer
to lambda-Prolog. If your domain grammar has variable binders,
you might want to map these things to dependent types
and do some type inference.

So maybe forget completely about JSON and XML, and look at
some of the frameworks like pure HOL or what is underneath
Haskell, and do your AST thing with these. These things could
offer you much, much more than some trivial serialization of
records inside records etc.
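
For illustration (my own sketch, not taken from any of the frameworks mentioned), a simple first-order way to represent binders in a Prolog AST is de Bruijn indices; HOAS instead reuses the meta-language's own binders:

```prolog
% lam(B)   - abstraction, binding index 0 inside B
% app(F,A) - application
% v(N)     - variable bound N binders up

% Example: the K combinator \x.\y.x
k(lam(lam(v(1)))).

% free_in(+Term, +Depth): Term has a variable reference that
% escapes Depth enclosing binders. A taste of the index
% bookkeeping de Bruijn representations require (and HOAS avoids).
free_in(v(M), N)      :- M >= N.
free_in(lam(B), N)    :- N1 is N + 1, free_in(B, N1).
free_in(app(F, A), N) :- ( free_in(F, N) ; free_in(A, N) ).
```

With this, ?- k(T), free_in(T, 0). fails, confirming that K is a closed term.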

Markus Triska schrieb:

Jan Burse

unread,
Jun 8, 2016, 2:20:46 PM6/8/16
to
Hi,

Julio Di Egidio schrieb:
> To my partial justification, it is not easy to
> discuss with continuous disturbance and trolling:
> I'll try and be more careful in the future, and for
> now I am kill-filing JB once and for all.

You might also call your kindergarten teacher and
tell her about bad JB.

Bye

Jan Burse

unread,
Jun 8, 2016, 2:23:41 PM6/8/16
to
Jan Burse schrieb:
> So maybe forget completely about JSON and XML, and look at
> some of the frameworks like pure HOL or else what is underneath
> Haskell, and do your AST thing with these. These things could
> offer you much much more than some trivial serialization of
> records inside records etc..

These frameworks usually go under the term HOAS,
https://en.wikipedia.org/wiki/Higher-order_abstract_syntax

But now I am confused, I don't know whether this is a joke:
https://github.com/hoaproject/Compiler

(Since it says PHP as well)

Jan Burse

unread,
Jun 8, 2016, 2:44:51 PM6/8/16
to
Jan Burse schrieb:
> Not necessarely. You have already =../2 in Prolog.

The interesting thing here is that dicts can also
be traversed by =../2; (*) they just have the functor
C'dict'. But I guess a use of =../2 is not intended.

But nevertheless, dicts integrate quite "uniform"
in Prolog. They inherit equality, unification, etc..

And they give a small bonus, the automatic keysorting
of the members. Still, HOAS might also have dicts,
but it will additionally have binders.

Bye

(*)
I posted this already, again some examples:

?- _{} =.. L.
L = [C'dict', _G1447].

?- f{a:1, b:2} =.. L.
L = [C'dict', f, a, 1, b, 2].


Jan Burse

unread,
Jun 8, 2016, 2:54:18 PM6/8/16
to
Jan Burse schrieb:
> Jan Burse schrieb:
>> Not necessarely. You have already =../2 in Prolog.
>
> The interesting thing here is, that dicts can also
> be traversed by =../2, (*) they have just the functor
> C'dict'. But I guess a use of =../2 is not intendet.

If you parse JSON into dicts, you would also need
to fall back to this if you want to have domain-
independent visitors or whatever.

Bye


Jan Burse

unread,
Jun 8, 2016, 4:29:19 PM6/8/16
to
Markus Triska mentioned pure data structures, such as
XML via element(Type, KeyValues, Subs). I said Prolog
has =../2, so you don't need this indirection.

On second thought, since =../2 isn't the most
efficient operator in many Prolog systems, except
maybe LPA, I would say yes, pure data structures are better.

But this would also kill the idea of parsing JSON
into dicts. And the better parsing would be something
that can be directly explored via pure Prolog.

Bye

Jan Burse schrieb:
> I would rather prefer to use the json_read_dict/[2,3]
> predicates with a tag, so that the tags are not turned
> into variables. For example the tag '[]' would give:
>
>
> '[]'{
> author: 'Philip K Dick',
> works: [
> '[]'{
> title: 'The Man in the High Castle'
> },
> '[]'{
> title: 'Do Androids Dream of Electric Sheep'
> }
> ]
> }
>
> Reason is variables denote open records, and you don't
> have syntactic equality anymore. So ==/2 doesn't work.
> But =/2 might work.
>
> But although =/2 might work, after serializing a term,
> it still doesn't work:
>
> ?- X = _{a: 1, b: 2}, Y = _{b: 2, a: 1},
> term_to_atom(X,S1), term_to_atom(Y,S2), S1=S2.
> false.
>
> Variables also put pressure when you assert/retract
> terms, and they might get unrelated then.
>
> Falco Nogatz schrieb:
>> With SWI-Prolog v7.x it is possible to use dicts to represent JSON terms:
>>
>> _{
>> author: 'Philip K Dick',
>> works: [
>> _{
>> title: 'The Man in the High Castle'
>> },
>> _{
>> title: 'Do Androids Dream of Electric Sheep'
>> }
>> ]
>> }
>

Jan Burse

unread,
Jun 8, 2016, 4:34:37 PM6/8/16
to
On the other hand there are functor/3 and arg/3,
so maybe the situation isn't that bad if one is
only visiting structures.

Still, I think dicts are quite uniform: it's not only
equality and unification (except for forming unions),
but also things like term_variables/2 etc. that work.

Bye

Here functor/3 and arg/3 on dicts:

?- X = f{a:1, b:2}, functor(X, F, A).
X = f{a:1, b:2},
F = C'dict',
A = 5.

?- X = f{a:1, b:2}, arg(3, X, A).
X = f{a:1, b:2},
A = 1.

?- X = f{a:1, b:2}, arg(5, X, A).
X = f{a:1, b:2},
A = 2.


Jan Burse schrieb:

rupert...@googlemail.com

unread,
Jun 9, 2016, 9:47:59 AM6/9/16
to
On Wednesday, June 8, 2016 at 7:17:40 PM UTC+1, Julio Di Egidio wrote:
> To my partial justification, it is not easy to discuss with continuous disturbance and trolling: I'll try and be more careful in the future, and for now I am kill-filing JB once and for all.

Lol. I only come here to listen to him talk to himself :-P

rupert...@googlemail.com

unread,
Jun 9, 2016, 9:53:12 AM6/9/16
to
> Markus Triska schrieb:
> > One issue I would also like to make explicit is uniformity of the tree.
> > This is orthogonal to choosing a pure representation and simply means
> > that if there is a natural way to make the AST uniform, you should seize
> > the opportunity. For example, in the SWI-Prolog XML libraries, elements
> > are of the form element(Name, As, Es), where As is a list of attributes,
> > and Es is a list of elements. This uniformity is very valuable.

On Wednesday, June 8, 2016 at 7:18:08 PM UTC+1, Jan Burse wrote:
> Not necessarely. You have already =../2 in Prolog.
<snip>
> You could also search a tree with =../2, right? (*)

Yes, I was playing around with the astwalk code this morning, and ended up doing precisely that. I wrote a predicate that searches for specific functors using =../2.

Is there some other advantage to uniformity that I am missing?

Jan Burse

unread,
Jun 9, 2016, 10:18:14 AM6/9/16
to
rupert...@googlemail.com schrieb:
> On Wednesday, June 8, 2016 at 7:17:40 PM UTC+1, Julio Di Egidio wrote:
>> To my partial justification, it is not easy to discuss with continuous disturbance and trolling: I'll try and be more careful in the future, and for now I am kill-filing JB once and for all.
>
> Lol. I only come here to listen to him talk to himself :-P
>

It's actually called EBS; it derives from brainwriting:

"EBS techniques have been shown to produce more ideas and help
individuals focus their attention on the ideas of others better than a
brainwriting technique (participants write individual written notes in
silence and then subsequently communicate them with the group). The
production of more ideas has been linked to the fact that paying
attention to others’ ideas leads to non-redundancy, as brainstormers try
to avoid to replicate or repeat another participant’s comment or idea."

https://en.wikipedia.org/wiki/Brainstorming#Electronic_brainstorming_.28EBS.29

But I think the newsgroup is only an EMS, not a true EBS.
You still have to take notes by yourself to organize the gist.
Very few people post multi-viewed summaries nowadays; I am
also too lazy for that, happy with my own scribbles on the
back of an envelope. I think this was slightly different in
the distant past. There is no more ROK etc.

Bye

Jan Burse

unread,
Jun 9, 2016, 10:50:45 AM6/9/16
to
Everybody is like f*** standard, just make something
new. A new SQL/NoSQL database, no problem, just do it:
http://db-engines.com/en/ranking

Ignoring the ISO core Prolog standard. No problem,
just do it: Picat, Plasma, little bit SWI, etc..
http://plasmalang.org/faq.html (formerly Mercury!)

Copying from Erlang, maybe only for the sake of
the name, eh voila Cloud Haskell:
http://haskell-distributed.github.io/documentation.html

the process of semiosis is infinite,
the content is approached ever more closely,
cultural units form units of semiosis,
and these form a cultural web
Umberto Eco, R.I.P.


Jan Burse schrieb:

j4n bur53

unread,
Jun 9, 2016, 12:04:34 PM6/9/16
to
Jan Burse schrieb:
> It is not only that Diophantine equations are undecidable
> but semi-decidable, but also that Horn Clauses aka Pure Prolog
> are undecidable but semi-decidable.


Diophantine equations over finite domains, on
the other hand, are decidable; that's why you
can do CLP(FD) with label/1.

Also the Herbrand domain (=/2 and dif/2) is
decidable, and you could do more(*) than SWI-
Prolog's dif/2, namely negation & quantifiers.

But if you add a subtree relation to the
Herbrand domain it turns undecidable. The subtree
relation is a little bit like XPath.

See also:
On the Bounded Theories of Finite Trees
Sergei Vorobyov,
www.dbai.tuwien.ac.at/staff/vorobyov/ASIAN96.pdf
(There is also a mention of rational feature
trees, which are like JSON objects (**))

(*)
Also compacting the constraints, via
contraction and subsumption. But I currently
don't know the full algorithm.

(**)
Julio would disagree, since in feature trees the
ordering of the features is unimportant.

j4n bur53

unread,
Jun 9, 2016, 12:05:23 PM6/9/16
to
j4n bur53 schrieb:
> (**)
> Julio would disagree, since feature trees the
> ordering of the features is unimportant.
ftp://ftp.dfki.de/pub/papers/local/FeatureGamesCCL94.ps.Z

Julio Di Egidio

unread,
Jun 10, 2016, 9:44:29 AM6/10/16
to
We are a bunch of smart asses...

Julio

j4n bur53

unread,
Jun 10, 2016, 2:33:58 PM6/10/16
to
Most of the smart asses are already dead.
BTW: besides the sorted key-value pairs, there is
another way to represent dicts.

A dict can also be viewed as a function, namely a
function from the keys to the values. As a function,
its keys are automatically unique, and the order
of the keys doesn't matter. A dict would thus be a type:

DICT = KEY -> VALUE

If your system is some theorem prover, you can use
extensionality to bootstrap dict equality from
value equality. The extensionality axiom would be:

forall x (f(x) = g(x)) -> f = g (ext)

But one might ask: hold on, how can you prove
this equality when the forall ranges over an infinite
domain? Well, it depends what you know about a particular
dict; you can also have knowledge about a dict which
talks about an infinity. For example, the empty dict <>
would be axiomatized:

forall x (<> x = undef)

A further question might arise: how do we then build
dicts? A convenient way is first to define a key-update
notation, which is nothing else than an if-then-else:

f[x->y] z := if z=x then y else f(z) (fun_upd_apply)

We can use the following short-cut for dicts:

<k1:v1, .., kn:vn> := <>[k1->v1]..[kn->vn]

Interestingly this works all pretty well, for example
in Isabelle/HOL. The syntax there is slightly different
but we can freely switch between using concrete dicts or
even abstractly reasoning about dict placeholders.
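
A Prolog sketch (my own) of this function view of dicts: a dict is either the empty function or a chain of key updates, and lookup just replays the if-then-else of the update rule.

```prolog
% A dict as a function: empty, or upd(F, K, V) meaning F[K->V].
% lookup(+Dict, +Key, -Value) applies the function to Key.
lookup(empty, _, undef).
lookup(upd(F, K, V), X, Y) :-
    (   X == K
    ->  Y = V
    ;   lookup(F, X, Y)
    ).
```

So ?- lookup(upd(upd(empty, a, 1), b, 2), a, Y). gives Y = 1, and a missing key gives undef, mirroring the axiom for <>; keys are unique by construction because later updates shadow earlier ones.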

So who is the smart ass R.I.P.? One that comes to mind is
John C. Reynolds. Just open the paper "Definitional
Interpreters for Higher-Order Programming Languages",
and look at I.12 on page 374.

Papers we love: John Reynolds, Definitional Interpreters for
Higher-Order Programming Languages
http://wadler.blogspot.ch/2016/06/papers-we-love-john-reynolds_10.html

We find (ext) which is (fun_upd_apply) with a different
order of arguments. Same idea I guess, but in a different
context, and probably for less ambition. But I guess
the idea is even older.

What I am still wondering is whether Haskell provides
a way to bootstrap a maybe incomplete equality for
functions. Unlike a theorem prover, it will not have
the context of a proof available.


Julio Di Egidio schrieb:

j4n bur53

unread,
Jun 10, 2016, 2:47:23 PM6/10/16
to
j4n bur53 schrieb:
> What I am still wondering is whether Haskell provides a
> way to bootstrap, a maybe incomplete equality, for
> functions. Not like a theorem prover it will not have the
> context of a proof available.

Hint: For Prolog a simple proof context might be viewed
as a constraint store and/or frozen goals.

j4n bur53

unread,
Jun 11, 2016, 7:45:47 AM6/11/16
to
rupert...@googlemail.com schrieb:
> Thanks for this suggestion. I think the fact base
> approach appeals to me, because it seems simpler than
> writing recursive predicates over lists. It would be
> efficient, but I think even the N*N*F (number of classes
> squared times number of fields per class) query to examine
> the relation between classes is not overly slow in my
> current implementation which uses lists of fields.
>
> If I can draw some inspiration from this xpath library to
> make writing queries simpler then that would be very useful.

Interesting find. There are dozens of finds on the
internet: research projects/libraries that analyze
code, based on who knows what, Haskell Parsec etc.

How to Build Static Checking Systems Using Orders of Magnitude Less Code
http://web.stanford.edu/~mlfbrown/paper.pdf

But strange as it is, I was researching SLG resolution,
which is supposedly implemented in the new SWI-Prolog
tabling, and found this little gem:

The SOUL program query language
Query and inspect your Java, Smalltalk, C and Cobol programs.
http://soft.vub.ac.be/SOUL/features-of-soul/tabled-resolution/

j4n bur53

unread,
Jun 11, 2016, 7:48:48 AM6/11/16
to
j4n bur53 schrieb:
>
> The SOUL program query language
> Query and inspect your Java, Smalltalk, C and Cobol programs.
> http://soft.vub.ac.be/SOUL/features-of-soul/tabled-resolution/

They also employ some constraint solving techniques:

"Both MustAliasLeaf and MayAliasLeaf were only detected because SOUL
employs a domain-specific unification procedure under which multiple
occurrences of the same variable express a data flow characteristic: the
parameter of method acceptVisitor and the receiver of message
?visitMethod should evaluate to the same object at run-time. The
unification procedure consults static analyses to determine whether this
is actually the case."

http://soft.vub.ac.be/SOUL/features-of-soul/template-terms/

rupert...@googlemail.com

unread,
Jun 11, 2016, 3:45:44 PM6/11/16
to
On Friday, June 10, 2016 at 7:33:58 PM UTC+1, j4n bur53 wrote:
> A dict can be also viewed as a function. Namley a
> function from the keys to the values. As a function
> keys are then automatically unique, and the order
> of the keys doesn't matter. A dict would be thus a type:
>
> DICT = KEY -> VALUE

Or we could just use a hash map...?

I somehow find the log(n) performance of dict a little disappointing (if I don't want the sort).

Markus Triska

unread,
Jun 13, 2016, 2:35:26 AM6/13/16
to
"rupert...@googlemail.com" <rupert...@googlemail.com> writes:

> Yes, I was playing around with the astwalk code this morning, and
> ended up doing precisely that. I wrote a predicate that searches for
> specific functors using =../2.
>
> Is there some other advantage to uniformity that I am missing?

The more you use extra-logical or at least extremely limited constructs
like =../2, the more you confine your code to particular modes of use.
For example, if you use =../2, you can no longer (easily) use your code
to *generate* the trees you are describing:

?- T =.. [F|Args].
ERROR: =../2: Arguments are not sufficiently instantiated

This may not seem a big deal to you initially, because you are at the
beginning probably interested in only one of several possible modes.
However, the more complex your program will grow, the more you will
yearn for features that build upon logical properties of your code, such
as declarative debugging methods and more generality, and these are
largely precluded if you use such limited constructs.
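
A small illustration (my own, not from the post above) of this mode point: with a uniform shape like element/3, plain unification does the job of =../2, and the same clause works in both directions.

```prolog
% Extracting a node's name via unification instead of =../2.
node_name(element(Name, _, _), Name).

% Analysis mode:    ?- node_name(element(p, [], []), N).   N = p.
% Generation mode:  ?- node_name(T, p).    T = element(p, _, _).
% By contrast, ?- T =.. [p|_]. raises an instantiation error,
% because =../2 cannot enumerate terms of unknown arity.
```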

Jan Burse

unread,
Jun 13, 2016, 6:21:05 AM6/13/16
to
Why bother with this artificial declarativity, and
call =../2 illogical?

It's anyway not possible to completely enumerate data
structures in ordinary Prolog, even without =../2, since
it doesn't implement fair disjunction or fair conjunction.

Assume the following very simplified JSON data structure,
which is pure:

is_json(array(L)) :- is_json_list(L).
is_json(object(L)) :- is_json_pairs(L).
is_json(xxx).

is_json_list([X|Y]) :- is_json(X), is_json_list(Y).
is_json_list([]).

is_json_pairs([xxx:X|Y]) :- is_json(X), is_json_pairs(Y).
is_json_pairs([]).

The above is declarative and doesn't use =../2. Nevertheless
it cannot completely enumerate JSON data structures. You
can try it yourself; the following query will hang:

?- is_json(X), X=object([]).
/* hangs */

On the other hand if you use fair disjunction and fair
conjunction, the above query wouldn't hang.

?- fair_is_json(X), X=object([]).
X = object([])

But even with fair disjunction and fair conjunction, negative
samples would hang, or a simple redo hangs:

?- fair_is_json(X), X=object([]).
X = object([]) ;
/* hangs */

?- fair_is_json(X), X=foo.
/* hangs */
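
One concrete way to realize something like fair_is_json/1 (my own sketch; the post does not give a definition) is to bound the term depth with a Peano counter and enumerate the bounds fairly:

```prolog
% Depth counters z, s(z), s(s(z)), ... enumerated fairly.
depth(z).
depth(s(D)) :- depth(D).

fair_is_json(X) :- depth(D), is_json(X, D).

% The grammar above, with a depth bound threaded through.
is_json(array(L), s(D))  :- is_json_list(L, D).
is_json(object(L), s(D)) :- is_json_pairs(L, D).
is_json(xxx, _).

is_json_list([X|Y], s(D)) :- is_json(X, D), is_json_list(Y, D).
is_json_list([], _).

is_json_pairs([xxx:X|Y], s(D)) :- is_json(X, D), is_json_pairs(Y, D).
is_json_pairs([], _).
```

Now ?- fair_is_json(X), X = object([]). succeeds at depth s(z), though, as noted above, backtracking then re-finds the same answer at every larger depth, and negative samples like X = foo still loop forever.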

So why bother about declarativity when we don't have the
means to describe a data structure?

In Isabelle/HOL we could describe the data structure, and it
would deduce that X=object([]) is a json data structure,
and that X=foo is not. The foo would already be
rejected by the type system.

But in Isabelle/HOL we could sometimes also prove non-
membership, since the inductive definitions don't use a
primitive CWA; they are logically completed, even adorned
with possible induction arguments.

vvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvv
But I don't see that any Prolog system offers this,
so why bother about declarativity? Is this just an
investment into some unclear future? Why should this
future not also be able to deal more cleverly with
=../2, if it is anyway more clever?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

=../2 is declarative in the same sense as </2 is
declarative. It just refuses to give an answer when
it sees that the result is infinite. Otherwise it's
a perfect relation. You can imagine both relations
by their diagram, i.e. their intended extension:

0 < 1.
0 < 2.
...
1 < 2.
1 < 3.
...

f =.. [f]
f(a) =.. [f,a]
...
g =.. [g].
g(a) =.. [g,a].
...

Nothing illogical.
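
Concretely, the relation answers in every mode where one side is sufficiently instantiated; only the fully unbound case is refused (the exact error message varies by system):

```prolog
?- foo(a, b) =.. L.
L = [foo, a, b].

?- T =.. [foo, X].
T = foo(X).

?- T =.. L.
ERROR: Arguments are not sufficiently instantiated
```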

Bye

Markus Triska schrieb:

Jan Burse

unread,
Jun 13, 2016, 7:59:15 AM6/13/16
to
You would get a similar result with a tor library
or the like, using either iterative deepening or some
other fair search strategy.

None of them can judge negative membership in
an infinite structure, and they all go astray after
the first positive membership. I'm not sure how, in the
latter case, a cut would have a logical justification here.

You need to go to FOL or so for better dealing with
negative information. Better dealing with negative
information would probably also prevent the
backtracking hang.

Just think: we humans can do it! So it's just a
challenge of implementing the same reasoning in
a machine.


Jan Burse schrieb:

Jan Burse

unread,
Jun 13, 2016, 9:01:34 AM6/13/16
to
Maybe you could give us an example where tree generation
works, and what you would do to make it work. Maybe you
mean just pattern-matching a node of a tree, and not generating
whole trees for later use.

Even with the simplest of trees, binary trees, you will
have problems with fairness. The binary tree definition is
left recursive; it might not even generate anything. A
predicate is_tree/1 hangs right away:

is_tree(node(X,Y)) :- is_tree(X), is_tree(Y).
is_tree(leaf).

Ordinary Prolog:

?- is_tree(X).
/* hangs */

Left recursive. As a last resort, tabling will probably
also not give anything sensible, since tabling is rather
for finite solution sets.

Tabling Prolog:

:- use_module(library(tabling)).
:- table is_tree/1.
is_tree(node(X,Y)) :- is_tree(X), is_tree(Y).
is_tree(leaf).

?- is_tree(X).
/* hangs */

So it's not a problem of duplicates and loops, but rather a
problem of fairness, and of course a problem of negative
information. BTW, if you want to read more about fairness:

Open Text:

Closures and fairness in the semantics of programming logic
J.-L. Lassez, M.J. Maher, 1984
http://www.sciencedirect.com/science/article/pii/0304397584900173

Bye

Markus Triska schrieb:
> The more you use extra-logical or at least extremely limited constructs
> like =../2, the more you confine your code to particular modes of use.
> For example, if you use =../2, you can no longer (easily) use your code
> to *generate* the trees you are describing:

j4n bur53

unread,
Jun 13, 2016, 10:32:45 AM6/13/16
to
Jan Burse schrieb:
> is_tree(node(X,Y)) :- is_tree(X), is_tree(Y).
> is_tree(leaf).

Also, if I reorder the clauses, it still doesn't work:

is_tree(leaf).
is_tree(node(X,Y)) :- is_tree(X), is_tree(Y).

I get:

?- is_tree(X).
X = leaf ;
X = node(leaf, leaf) ;
X = node(leaf, node(leaf, leaf)) ;
X = node(leaf, node(leaf, node(leaf, leaf))) ;
...

Because Prolog's depth-first search lacks fairness, we
don't get all the trees, only the list-like forms.

So most likely by *generate* you mean something else,
like creating deep templates, or whatever.

Templating works better without =../2, I agree. You can
make templates like:

Templ = node(node(X),leaf).

But also, most likely, since call/N is now standard,
your visitor pattern will take a closure anyway,
so that again =../2 isn't a problem.

rupert...@googlemail.com

unread,
Jun 13, 2016, 5:07:22 PM6/13/16
to
On Monday, June 13, 2016 at 11:21:05 AM UTC+1, Jan Burse wrote:
> Its anyway not possible to enumerate data structures that
> don't use =../2 in ordinary Prolog, since it doesn't
> implement fair disjunction or fair conjunction.

I'm no genius with Prolog; I can see how to keep "all modes" working in the simplest of programs. Once a piece of software gets beyond a few predicates, it's really just luck if any modes beyond my main intention produce good results. I don't know of a successful strategy for achieving this in any program of a reasonable size. Or, as Jan points out, whether it's even reasonable to expect to achieve it.

On the other hand, I don't think I will need to enumerate infinite structures - the data I will use as input will be complete, the AST will be complete. It would be nice to be able to parse <-> unparse. Or better yet, parse <-> transform <-> unparse. So I will bear uniformity in mind as a tool that might help me get there, but it isn't a big priority at the moment.

j4n bur53

unread,
Jun 14, 2016, 11:09:31 AM6/14/16
to
The new fundamentalist agenda is best described by the
following post on StackOverflow:

Features of good Prolog code? [closed]
http://stackoverflow.com/a/15710541/502187

In the old times before the attempt for a wide spread
adoption of CLP(FD), we only had this Guide:

Coding Guidelines for Prolog
http://arxiv.org/abs/0911.2899

The above guidelines contain an interesting semantic
requirement called steadfastness:

5.1 Predicates must be steadfast.

?- p(X), X=t.
-- same as --
?- p(t).

But this is just an instance of Ulrich Neumerkel's
commutativity. It's nothing else than:

?- p(X), X=t.
-- same as --
?- X=t, p(X).

What we haven't examined so far is how freeze/2 and
when/2 could help us here.

Interestingly, it's very simple to make a test predicate
steadfast. Just start with the test predicate; here is
an example of a red/black binary tree:

tree(red).
tree(black).
tree(node(X,Y)) :- tree(X), tree(Y).

Now make it steadfast via freeze/2, by adding a single
clause at the beginning of the predicate:

tree(V) :- var(V), !, freeze(V, tree(V)).
tree(red).
tree(black).
tree(node(X,Y)) :- tree(X), tree(Y).

Let's run some tests:

?- X=node(red,black), tree(X).
X = node(red,black)

?- tree(X), X=node(red,black).
X = node(red,black)

Works fine. So this is not LogicM; it's not making disjunction
or conjunction fair, it's just delaying goals.
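
The trade-off: with the extra clause, a query with an unbound argument no longer enumerates trees; it just succeeds with a residual goal (shown here roughly as SWI-Prolog prints it):

```prolog
?- tree(X).
freeze(X, tree(X)).
```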

You said you want to transform your ASTs. I know of at least one
application that made tree transformers declarative via
freeze/2 and when/2. Just start with the original code. Here is
a flipper that flips red to black and vice versa:

flip(red, black).
flip(black, red).
flip(node(X,Y), node(Z,T)) :- flip(X,Z), flip(Y,T).

Now again it's enough to add one clause at the beginning of the
predicate to make it steadfast:

flip(V, W) :- var(V), var(W), !,
when((nonvar(V);nonvar(W)), flip(V, W)).
flip(red, black).
flip(black, red).
flip(node(X,Y), node(Z,T)) :- flip(X,Z), flip(Y,T).

We have to use when/2 instead of freeze/2, since freeze/2
only listens on one variable.

Let's run some tests:

?- X=node(red,black), Y=node(black,red), flip(X,Y).
X = node(red,black),
Y = node(black,red)

?- flip(X,Y), X=node(red,black), Y=node(black,red).
X = node(red,black),
Y = node(black,red)

Of course we can also get output variables for input variables:

?- flip(X,Y), X=node(red,black).
X = node(red,black),
Y = node(black,red)

?- flip(X,Y), Y=node(black,red).
X = node(red,black),
Y = node(black,red)

Via freeze/2 and when/2 you can also apply programming techniques
known from CLP(FD): namely, to first set up a model, and then
instantiate and generate.

So CLP(FD) allows one to convert generate+test into test+generate;
freeze/2 and when/2 allow the same for structure-based problems.
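
For example, with the steadfast flip/2 above you can set up the model first and instantiate later; each binding wakes only the affected delayed goals, and the query should yield:

```prolog
?- flip(X, Y),           % both unbound: the goal just delays
   X = node(red, R),     % wakes flip/2 on the outer node
   Y = node(B, red).     % wakes the remaining delayed goals
X = node(red, black),
Y = node(black, red),
R = black,
B = black.
```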

But freeze/2 and when/2 are nowhere explicitly
on Ulrich Neumerkel's agenda.

Bye



rupert...@googlemail.com schrieb:

j4n bur53

unread,
Jun 14, 2016, 11:39:28 AM6/14/16
to
Bad news: Your Prolog system might generate unnecessary
choice points in the steadfast version that uses when/2.

Even if the Prolog system has some automatic multi-argument
indexes, it might need some manual intervention.

j4n bur53 schrieb:

Jan Burse

unread,
Jun 14, 2016, 5:32:46 PM6/14/16
to
But there are some indexing techniques which
are especially good for freeze/2 and when/2 loops.

Guarded indexes are good for freeze/2 or when/2.
https://plus.google.com/+JekejekeCh/posts/fQ4kioFzSuf

j4n bur53 schrieb: