
Programming vs. Specification


Bart Demoen

Mar 29, 1992, 5:02:04 AM
In <22...@alice.att.com> per...@alice.att.com (Fernando Pereira)
writes:

> The "logical" advantage of setof over findall is that it
> deals correctly with variables in the goal, that is
> ?- X = a, setof(Y, p(X, Y), S).
> and
> ?- setof(Y, p(X, Y), S), X = a.
> yield the same result.

not for the following definition of p/2:

p(b,1) :- ! .
p(_,2) .


Bart Demoen

Fernando Pereira

Mar 29, 1992, 6:56:16 PM
Sigh... We were discussing *logical* advantages. Of course setof cannot give a
declarative reading to a program that doesn't have one to start with, but at least
it allows a declarative reading for aggregations over goals all of whose solutions
are ground, something that findall does not do. One would think that any
gain of declarativeness would be desirable from the standpoint of *logic*
programming. (But maybe the goals of logic programming have by now been forgotten --
if they were ever appreciated -- among Prolog programmers and only survive
among deductive database people )-:).
Setof was originally designed to support the translation of
natural-language database queries into generalized quantifier formulas that
could then be translated into Prolog goals and executed. For this application,
order-independence of subgoals is essential, both because the natural-language
interpretation system should not be burdened with the need to know how
to order subgoals to obtain correct results and because subgoal reordering
can be used to improve the efficiency of query answering. Try to do this
efficiently (eg. without explicit generators for all free variables in
aggregate goals) with findall...

In my original posting, I forgot to mention an obvious but very useful class
of instantiations for the semiring schema: combine = some associative operation
like sequence concatenation, collect = set union (eg. regular algebra without *).
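
For concreteness, a small sketch of that instantiation (the edge/3 relation
below is only an illustrative assumption): combine = append/3 over edge-label
sequences, collect = set union via setof/3.

edge(a, b, [x]).
edge(b, c, [y]).
edge(a, c, [z]).

% path_word(From, To, Word): Word is the concatenation ("combine")
% of the labels along some path from From to To.
path_word(From, To, Word) :-
    edge(From, To, Word).
path_word(From, To, Word) :-
    edge(From, Mid, W1),
    path_word(Mid, To, W2),
    append(W1, W2, Word).

% path_language(From, To, Words): the set ("collect") of all such words.
path_language(From, To, Words) :-
    setof(Word, path_word(From, To, Word), Words).

% ?- path_language(a, c, L).     L = [[x,y],[z]]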

Fernando Pereira
AT&T Bell Laboratories, Murray Hill
per...@research.att.com

Bart Demoen

Mar 30, 1992, 1:53:12 AM
In <22...@alice.att.com> per...@alice.att.com (Fernando Pereira)
writes:

> The "logical" advantage of setof over findall is that it
> deals correctly with variables in the goal, that is
> ?- X = a, setof(Y, p(X, Y), S).
> and
> ?- setof(Y, p(X, Y), S), X = a.
> yield the same result.

SICStus 2.1 #3: Fri Feb 28 19:15:36 MET 1992
| ?- listing .

p(a, 1).
p(_, 2).

yes
| ?- X = a , setof(Y,p(X,Y),L) .

L = [1,2],
X = a ? ;

no
| ?- setof(Y,p(X,Y),L) , X = a .

L = [2],
X = a ? ;

L = [1],
X = a ? ;

no


Bart Demoen

Lee Naish

Mar 30, 1992, 8:35:08 PM
In article <1992Mar30.0...@kulcs.uucp> bim...@cs.kuleuven.ac.be (Bart Demoen) writes:
>In <22...@alice.att.com> per...@alice.att.com (Fernando Pereira)
>writes:
>
>> The "logical" advantage of setof over findall is that it
>> deals correctly with variables in the goal,
>> ...
> <Counterexample using setof in Sicstus>

See also the following paper, which points out nonlogical behaviour of
*all* all solutions predicates known to me at the time, and proposes a
logical version (which has been available in NU-Prolog for years). It
could be argued that setof is "less nonlogical" than findall; others
would say that's like being "less pregnant".

lee

@inproceedings{naish:solns:85,
  author    = {Lee Naish},
  title     = {All solutions predicates in {Prolog}},
  booktitle = {Proceedings of the Second IEEE Symposium on Logic Programming},
  address   = {Boston, Massachusetts},
  month     = jul,
  year      = {1985},
  pages     = {73--77}
}

Paul Tarau

Mar 31, 1992, 1:08:42 AM
Replying to my article <1992Mar27.1...@IRO.UMontreal.CA>
Fernando Pereira writes:

> My feelings about unfolding semantics parallel that famous quip by Dr. Johnson
> about a dog walking on its hind legs: it is amazing that it can be done at
> all, but it's not quite what one would call a success. Ground semantics *is*
> the logical semantics (cf. Tarski).

I do not have feelings about unfolding semantics. I am simply a
_happy customer_ of the concept. Especially in the case of binary
programs it is not only more expressive but also simpler than ground
semantics. The PLILP91 paper in the BinProlog distribution (ftp
139.103.16.2), does just that, using a very simple concept of
"arrow composition". The compiler shows that the thing can be quite
practical (200,000 LIPS on a Sparc 2).

Fernando Pereira writes:

>>Richard writes:
>>
>>> Ok, so why _isn't_ this built into more Prologs?
>>
>>> (1) It is very easy to build it yourself.
>>
>> I think this is true also for findall, but findall _is_ there.
> Not quite. Most early implementations of findall were buggy, eg. didn't
> deal properly with nested findalls.

I do not see why nesting bestof is significantly easier than nesting
findall. Buggy findalls may come from an unfortunate overloading of
assert and retract. Using the order of the clauses in the database to
implement a data structure has no "logical" excuse. We can forgive a
dynamic assert as a way to feed the program with new information, but
using the order in which clauses happen to sit in the database is like
hoping that a dog can write Dr. Johnson's phrase by touching the keyboard
with what the phrase is about. On the other hand, take a look at what
happens to Richard O'Keefe's bestof if something inside Goal has the
strange idea of asserting the clause:

'best of'('Dr. Johnson''s latest works are about reincarnation').

Sorry about overloading your very amusing citation... But we live
in the age of CUT and PASTE.

>> Findall is not nasty. Setof shows a larger smile to those who work with
>> SQL in the morning but what is logical about sorting terms in Prolog's
>> standard order?
> Sorting is irrelevant here, it is just a byproduct of removing duplicates
> in setof.

Sorting may be irrelevant, but "duplicates" are not as innocent as they
look (for example, f(_133) and f(_134) have the same set of ground
instances but they are not seen as duplicates).

However, let us suppose that we are talking only about bagof. Let us
suppose that we are considering only definite programs, to avoid Bart
Demoen's silver bullet counterexample. Oops, he just put a second one
on the net, with definite clauses:

In article <1992Mar30.0...@kulcs.uucp> Bart Demoen writes:

> p(a, 1).
> p(_, 2).
> .....

Well...

I am quite convinced that neither findall nor bagof is logical in the
serious sense that a clean semantics exists for the language obtained by
adding them to the language of definite programs. So what's left is a
matter of taste and/or implementation efficiency. Appearance of logic is
like appearance of justice. It can be harmful, if it becomes law.

Looking back with a "grep" through the 1983 PROLOG Digest, I see that
things have changed very little:

Fernando Pereira writes:

> Date: Fri 14 Oct 83 08:26:56-PDT
> From: Pereira@SRI-AI
> Subject: More on findall Vs. bagof

> Consider the following code:

> p(S) :- q(a,X), r(X,S).
> u(S) :- r(X,S), q(a,X).
>
> r(X,S) :- findall(Y,a(X,Y),S).
>
> q(X,X).
>
> a(a,1).
> a(b,2).
> a(a,3).

> ?- p(S). will return S=[2], whereas ?- u(S) will return S=[1,2,3].
> This is just because the two goals q and r were exchanged! Is this
> the kind of behaviour one should expect from a logic programming
> system ? Of course not! The problem is similar to the one of using
> \+P when P has unbound variables.

> In contrast, if findall is replaced by bagof in the example,
> both queries will return S=[2] as they should.

And later he writes:

> Date: Mon 17 Oct 83 08:59:49-PDT
> From: Pereira@SRI-AI
> Subject: Mixup

> In my latest "findall vs. bagof" example, I got my 'a's and 'b's mixed
> up. The reply to the query is S = [1,3] with bagof, and S = [1,2,3]
> or S = [1,3] with findall depending on the order of goals. Sorry for
> the confusion.

As in the example of 1992, I agree that commutativity is a nice property
of conjunction

> ?- X = a, setof(Y, p(X, Y), S).

but I do not see why it is a _necessary_ property of a second order
collecting operator when it happens to be mixed with its first order
friends. They are simply at different logical levels in the same way
the word "dog" and its friend that is sitting on the carpet are.
A look at the new language Goedel will probably help us to put things
in the right place.

Fernando Pereira writes:

> If you look at shortest-path problems,
> dynamic programming or aggregations in deductive databases, it becomes clear
> that most sum problems (what you call cooperative) and max problems (what you
> call competitive) are based on a single algebraic structure, a semiring
> whose two operations one may call ``combine'' and ``collect''.
> For instance, for shortest-path problems combine = +, collect = min,
> ...

The semiring model is definitely nice. Especially when two operators
cooperate like in the + and min case. However, it is basically
the same as Richard A. O'Keefe's reduce predicate.

What's different in bestof is the use of failure, a very natural
concept in Prolog. Both the reduce and the semiring model work with
functions (min, max, +) while bestof uses a binary order relation. I think
that using a "native" relation goes well in Prolog, while functions like +
or max become ugly 3-argument relations. Worse, it is up to the user to
provide "max" using ">" and to provide "+" using "is" while frequently
occurring order relations are _already_ there.
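
To make the contrast concrete, here is a rough sketch (reduce/3 here is only
a strawman fold, not Richard's actual reduce; best_of_list/3 and max3/3 are
made-up names; the call/N discussed earlier is assumed):

% Function style: the user has to wrap ">" into a 3-argument max3/3.
max3(X, Y, X) :- X >= Y.
max3(X, Y, Y) :- X < Y.

reduce(_F, [X], X).
reduce(F, [X,Y|Xs], R) :-
    call(F, X, Y, Z),
    reduce(F, [Z|Xs], R).

% Order style: the "native" 2-argument relation >/2 is used as it is.
best_of_list(_Order, [X], X).
best_of_list(Order, [X,Y|Xs], R) :-
    ( call(Order, X, Y) -> Z = X ; Z = Y ),
    best_of_list(Order, [Z|Xs], R).

% ?- reduce(max3, [3,1,4], M).        M = 4, but only after writing max3/3
% ?- best_of_list(>, [3,1,4], M).     M = 4, with no auxiliary definition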

For example, I can express my feelings about a very popular
subject on the net right now as:

?-bestof(X,less_illogical,member(X,[findall,bagof,setof])).

Just try the same thing with the reduce or the semiring model.

More seriously, competition from other programming paradigms (like
OOP) suggests that simple and strong metaphors are useful in designing
(and selling) language constructs. So let's learn to forget (from time
to time) what's in the books. A living language is also what its users
think it is, even if they are wrong.

Paul Tarau
ta...@iro.umontreal.ca

Fernando Pereira

Mar 31, 1992, 10:37:31 AM
In article <1992Mar30.0...@kulcs.uucp> bim...@cs.kuleuven.ac.be (Bart Demoen) writes:
>In <22...@alice.att.com> per...@alice.att.com (Fernando Pereira)
>writes:
>> The "logical" advantage of setof over findall is that it
>> deals correctly with variables in the goal [...], that is

>| ?- listing .
>p(a, 1).
>p(_, 2).
>yes
>| ?- X = a , setof(Y,p(X,Y),L) .
>L = [1,2],
>X = a ? ;
>no
>| ?- setof(Y,p(X,Y),L) , X = a .
>L = [2],
>X = a ? ;
>L = [1],
>X = a ? ;
>no
Sigh again... Maybe reading the whole message you are responding to would
help. I noted explicitly that setof's properties only held for goals all
of whose answers are ground. This is what setof was designed to do:
allow order independence in aggregate database queries. Why complain
about setof misbehaving outside its design envelope? A B-747 is not an
F-16 either.

More interestingly, setof's problem with nonground answers is just an
instance of a general problem with aggregations in the nonground case.
For another example that concerns me, take dynamic programming with
some appropriate objective function, eg. likelihood of an analysis
in a tabular parser. How does one relate, for example, conclusion
p(X) with probability 0.5 and conclusion p(a) with probability 0.3?
In terms of the combine-collect, semiring, dynamic programming model,
one would like appropriate algebraic conditions relating the semiring
operations to instantiation so that doing the operations on nonground
conclusions would have the same effect as doing them on the ground
semantics.

Fernando Pereira
2D-447, AT&T Bell Laboratories
600 Mountain Ave, PO Box 636
Murray Hill, NJ 07974-0636
per...@research.att.com


Fernando Pereira

Mar 31, 1992, 11:17:42 AM
In article <1992Mar31....@IRO.UMontreal.CA> ta...@IRO.UMontreal.CA (Paul Tarau) writes:
>Sorting may be irrelevant, but "duplicates" are not as innocent as they
>look (for example, f(_133) and f(_134) have the same set of ground
>instances but they are not seen as duplicates).
As I noted explicitly in my article, setof has reasonable properties
only for goals all of whose answers are ground. That's what it was
designed for.

>However, let us suppose that we are talking only about bagof. Let us
>suppose that we are considering only definite programs, to avoid Bart
>Demoen's silver bullet counterexample. Oops, he just put a second one
>on the net, with definite clauses:
Off the mark "silver bullets". Neither example complied with the *explicit*
design criteria and applicability limits for setof. Now, one may complain
that setof (like the whole of Prolog, for that matter) is "logical" only
under certain limited conditions, as opposed to "logical" for all legal
programs. That's engineering: devices have design limits, and can break
if those limits are exceeded. setof is *more* logical than *findall*
because it fits the declarative semantics of definite clauses for
more goals and programs than findall.

>I am quite convinced that nor findall neither bagof are logical in the
>serious sense that a clean semantics exists for the language obtained by
>adding them to the language of definite programs. So what's left is a
>matter of taste and/or implementation efficiency. Appearance of logic is
>like appearance of justice. It can be harmful, if it becomes law.
>
>Looking back with a "grep" in the 1983 PROLOG Digest I can remark that
>things have changed very little [...]
Some views change little because they happen to reflect the *truth*.

>but I do not see why it is a _necessary_ property of a second order
>collecting operator when it happens to be mixed with its first order
>friends.
I don't know about necessary, but us bears of little brain like their
aggregation operators to do the obvious thing. For example, if I have
a database of facts likes(person,book) I would hope to be able to
collect the set of pairs (person,books s/he likes) in the obvious way:

setof((Person,Books), setof(Book,likes(Person,Book),Books), Pairs)

This way is obvious because it complies with our naive view of set
comprehension, and because it is close to natural language ("Which
books does each person like?"). I'm afraid findall doesn't quite fit
the bill:

| ?- [user].
| likes(p1,b1).
| likes(p2,b1).
| likes(p2,b2).
| ^D
% user compiled in module user, 0.050 sec 384 bytes

yes
| ?- setof((P,Bs),setof(B,likes(P,B),Bs),P).

P = [(p1,[b1]),(p2,[b1,b2])],
Bs = _6875,
B = _6904 ;

no
| ?- findall((P,Bs),findall(B,likes(P,B),Bs),P).

P = [(_7214,[b1,b1,b2])],
Bs = _6875,
B = _6904 ;

>The semiring model is definitely nice. Especially when two operators
>cooperate like in the + and min case.

>What's different in bestof is the use of failure, a very natural
>concept in Prolog. Both the reduce and the semiring model work with
>functions (min, max. + ) while bestof uses a binary order relation.

The crucial property of the semiring model is that it allows
dynamic programming when combining "best" solutions to subproblems
into solutions for a problem. In many cases, this is the difference
between exponential and (low order) polynomial algorithms.
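
As a toy illustration only (with a made-up acyclic edge/3 relation):
combine = + over the edge costs along a path, collect = min over the
alternative paths.

edge(a, b, 1).
edge(b, d, 5).
edge(a, c, 2).
edge(c, d, 1).

cost(X, X, 0).
cost(X, Y, C) :-
    edge(X, Z, C1),
    cost(Z, Y, C2),
    C is C1 + C2.                          % combine = +

shortest(X, Y, Best) :-
    setof(C, cost(X, Y, C), [Best|_]).     % collect = min (head of sorted set)

% ?- shortest(a, d, C).     C = 3

This naive cost/3 re-enumerates the cost of every subproblem; the
dynamic-programming payoff comes from tabulating the best cost of each
intermediate node, and the semiring laws are what make that tabulation safe.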

Paul Tarau

Apr 4, 1992, 10:55:19 AM
In article <10...@goanna.cs.rmit.oz.au> R. A. O'Keefe writes:

> It is important to realise that bestof/3 is PRECISELY as abstract as
> min_member/3. They are exactly interdefinable:
>
> bestof(Term, Order, Goal) :-
> findall(Term, Goal, Terms),
> min_member(Order, Terms, Term).
>
> min_member(Order, Terms, Term) :-
> bestof(Term, Order, member(Term, Terms)).

You need another second order predicate to define bestof/3 in terms of
min_member/3. That's a non-trivial difference in expressiveness
between them. What I am saying is that definite programs + bestof/3 are
more expressive than definite programs + min_member/3 = definite programs.
As you know, call/N that's used in min_member/3 can be simulated by
adding a few clauses to a given definite program to make the "links"
between terms and predicates.

> What's going on in both cases is "picking the smallest element from a set",
> and the question is just how to represent the set, and whether to change
> to another representation. A good way to think when you have a problem
> like this is to stay at the abstract level, where you are concerned with
> the _set_, not its representation in Prolog (and a goal+variable combination
> is, I repeat, a concrete representation). What do you think about?

I would like to agree on this point because I teach (preach?) similar
things to my students. There is, however, a less obvious thing that makes me
uncomfortable. You are talking about a _set_ = the abstract,
clean concept that we all understand and its _concrete_ Prolog representation.
The problem is that we are (in principle) programming in
logic where we _have_ predicates. What if we can think with predicates as well
as we think with sets, or even better? That's WHY we _program_ in logic.
(By the way, when you _program_, you don't think about what's unthinkable
when you just think, you do). First order predicate calculus and its
restricted forms that we use in Prolog are in a very obvious way much
simpler than sets. Compare Zermelo-Fraenkel set theory (a reasonable
formalism that describes what a _set_ is) with, for example, resolution
on binary definite clauses. I say things like you do when I am teaching
Pascal or C because those languages do not HAVE clean mathematical objects
so they must REPRESENT them. My feeling is that in logic programming we
_have_ them or at least we can have them if we think in terms of predicates
as much as possible, most of the time. Well, let's say simply that we can
use assertional representations much more often than we normally do and
write more readable, more abstract and still very efficient programs.
Your sketch of a call/N implementation at WAM-level in a previous posting
shows that metaprogramming on assertional representations is reasonably
efficient with appropriate support. My experience with performance analysis
of BinProlog was that the amortized cost of metaprogramming is very small,
even if it seems to be used very intensively at source level.

> By the way, with respect to OR-parallel, do I really need to point out
> that bestof(X, Order, member(X,List)) is O(|List|) whether sequential
> or OR-parallel, regardless of the number of processors?

Well, if you leave it in the hands of the implementor and avoid doing
it yourself with assert and retract, you may find out that he can make
it for you O(log(|List|)). What about a process tree where only "winners"
qualify for the next game?

Paul Tarau
ta...@iro.umontreal.ca

Fernando Pereira

Apr 5, 1992, 11:14:48 AM
I think you are missing Richard's point. That goal is O(|List|) because
Prolog lists are an essentially sequential data structure.
member/2 has to match against n pairs in the list before it can generate element n.
The kind of algorithm you suggest can only be applied to tree or array
data structures which you can divide into similar size parts in O(1) time.

Fernando Pereira
2D-447 AT&T Bell Laboratories
600 Mountain Ave.
Murray Hill NJ 07974 USA
per...@research.att.com

Paul Tarau

Apr 5, 1992, 4:23:13 PM
In article <22...@alice.att.com> per...@alice.UUCP () writes:
>In article <1992Apr4.1...@IRO.UMontreal.CA> ta...@IRO.UMontreal.CA (Paul Tarau) writes:
>>In article <10...@goanna.cs.rmit.oz.au> R. A. O'Keefe writes:

>>> By the way, with respect to OR-parallel, do I really need to point out
>>> that bestof(X, Order, member(X,List)) is O(|List|) whether sequential
>>> or OR-parallel, regardless of the number of processors?

>>Well, if you leave it in the hands of the implementor and avoid doing
>>it yourself with assert and retract, you may find out that he can make
>>it for you O(log(|List|)). What about a process tree where only "winners"
>>qualify for the next game?

>I think you are missing Richard's point. That goal is O(|List|) because
>Prolog lists are an essentially sequential data structure.
>member/2 has to match against n pairs in the list before it can generate element n.
>The kind of algorithm you suggest can only be applied to tree or array
>data structures which you can divide into similar size parts in O(1) time.

Yes, but the problem is with member/2 and with the unfortunate data
structure (list) it works on, not with bestof/3. What I wanted to suggest
is that if an OR-parallel generator produces N elements in O(1) time then
bestof/3 can find the winner in O(log(N)) time. Again, that's why it is
_much_ better to have an abstract generator in the specification of
bestof, instead of a concrete data structure.
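
A sequential sketch of such a tournament (illustrative only; the point about
O(log(N)) is about the depth of the comparison tree when the rounds run in
parallel, not about the sequential work):

tournament(_Order, [Winner], Winner).
tournament(Order, Players, Winner) :-
    Players = [_,_|_],
    play_round(Order, Players, Survivors),
    tournament(Order, Survivors, Winner).

% One round: adjacent pairs play and only winners survive, so each
% round halves the field and there are about log2(N) rounds.
play_round(_, [], []).
play_round(_, [X], [X]).
play_round(Order, [X,Y|Rest], [Z|Zs]) :-
    ( call(Order, X, Y) -> Z = X ; Z = Y ),
    play_round(Order, Rest, Zs).

% ?- tournament(>, [3,1,4,1,5,9,2,6], W).     W = 9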

Paul Tarau
ta...@iro.umontreal.ca

Fernando Pereira

Apr 5, 1992, 6:52:22 PM
In article <1992Apr5.2...@IRO.UMontreal.CA> ta...@IRO.UMontreal.CA (Paul Tarau) writes:
>In article <22...@alice.att.com> per...@alice.UUCP () writes:
>> [bestof(X, Order, member(X,List))] is O(|List|) because

>>Prolog lists are an essentially sequential data structure.
>>member/2 has to match against n pairs in the list before it can generate element n.
>Yes, but the problem is with member/2 and with the unfortunate data
>structure (list) it works on, not with bestof/3.
It is always better to try to say what one means, rather than leaving
it to the imagination of the reader. As for lists being an "unfortunate"
data structure, it all depends on what you want to represent. I'm starting
to feel that this discussion is getting nowhere because of a lack of
understanding of the use of abstractions in programming. The point
of (abstract) datatypes is to provide implementations of certain
signatures satisfying certain constraints. Your bestof/3 is an
(approximate) implementation of a particular kind of set comprehension,
namely one in which the selected elements must be maximal in a
specified partial order. Of course this is an useful abstraction,
but so are many others. However, the real problem is that in Prolog
terms are first-class objects, but set comprehensions (or, for that
matter, any non-free datatype) are not. Here's a pratical test:
use Prolog to represent finite-state machines in such a way that
you can pass them around, operate on them, determinize and minimize
them, etc. with asymptotic efficiency comparable to that of a purely
functional language, and without using program-changing operations
(assert or retract). I just cannot see how to do this without
representing the machines as complex Prolog terms and the operations
as relations between complex terms. How would bestof help here?
The *real* problem with Prolog and most of its derivatives is not
the lack of a *particular* abstraction like bestof but their
lack of modern mechanisms for defining and composing abstractions,
such as Standard ML or Modula-3 have.
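
A toy sketch of the kind of term representation I mean (illustrative only;
the hard part, efficient determinization and minimization, is exactly what is
not shown):

% fsm(Start, FinalStates, Transitions), passed around as an ordinary term.
example_fsm(fsm(q0, [q1], [t(q0,a,q1), t(q1,b,q0)])).

% accepts(FSM, Word): the machine accepts the list of symbols Word.
accepts(fsm(Start, Finals, Trans), Word) :-
    run(Word, Start, Finals, Trans).

run([], State, Finals, _) :-
    member(State, Finals).
run([Sym|Syms], State, Finals, Trans) :-
    member(t(State, Sym, Next), Trans),
    run(Syms, Next, Finals, Trans).

% ?- example_fsm(M), accepts(M, [a,b,a]).     succeeds

Determinization and minimization would then be relations between such terms,
which is exactly where efficient non-free datatypes are missed.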

Fernando Pereira
per...@research.att.com

Richard A. O'Keefe

Apr 6, 1992, 12:19:57 AM
In article <1992Apr4.1...@IRO.UMontreal.CA>, ta...@IRO.UMontreal.CA (Paul Tarau) writes:
> In article <10...@goanna.cs.rmit.oz.au> R. A. O'Keefe writes:
>
> > It is important to realise that bestof/3 is PRECISELY as abstract as
> > min_member/3. They are exactly interdefinable:
> >
> > bestof(Term, Order, Goal) :-
> > findall(Term, Goal, Terms),
> > min_member(Order, Terms, Term).
> >
> > min_member(Order, Terms, Term) :-
> > bestof(Term, Order, member(Term, Terms)).
>
> You need another second order predicate to define bestof/3 in terms of
> min_member/3. That's a non-trivial difference in expressiveness
> between them. What I am saying is that definite programs + bestof/3 are
> more expressive than definite programs + min_member/3 = definite programs.
> As you know, call/N that's used in min_member/3 can be simulated by
> adding a few clauses to a given definite program to make the "links"
> between terms and predicates.

It has become evident that Paul Tarau and I are having different
arguments. Here's the argument _I_ thought we were having:

- bestof/3 is not sufficient
- once you have something like gather_solutions/3, bestof/3 is not
necessary.

I am not arguing and never have argued that the "reduce" paradigm is
sufficient! I have, in fact, been arguing in favour of a (logically
clean) version of findall/3 as _the_ "or-parallel" primitive, and had
assumed that the need for _some_ such operation was agreed between me
and Tarau, and that we were arguing about what it should look like.
So from my point of view there is no "non-trivial difference in
expressiveness between" bestof/3 and min_member/3, because I was
taking _some_ sort of 'all-solutions' construct for granted.

Let me see if I can make this absolutely unmistakable:

- I agree that a primitive for combining solutions from OR
branches is important
- I agree that it is far better to build it into the machinery
than to build it on top of assert and retract
(this is quite separate from sequential/parallel questions;
if an exception occurs you want the working storage of
the all solutions construct to be automatically reclaimed;
the assert/retract approach leaves garbage in the data base)
- I agree that bestof/3 is useful

- I claim that even if you have bestof/3 it is so restricted
that an additional primitive is needed
- I claim that if you _have_ an adequate primitive for
combining solutions from OR branches then bestof/3 becomes
an uninteresting special case

- I claim that bestof/3's interface is defective in precisely
the same sense as findall/3's; that (when used within its
domain of definition) setof/3's interface is better, and
that NU-Prolog's solutions/3 (which generalises setof/3)
is better still.

- I agree that BIN Prolog sounds like a really neat idea, and
I only wish that the C source files were available (perhaps
in shrouded form) so that I could try it out.

In particular, Tarau has argued strongly that

* it is much better to use a two-place order predicate
than a three-place "function"

while I thought that one of the major differences between us was that
I have been arguing that

- an interface based on a three-place closure is as easy to
use as one based on a two-place predicate (because with an
auxiliary definition for max/4 any two-place predicate can
trivially be "lifted")
- an interface based on a two-place predicate is inadequate.

Remember how this thread started?
Someone wanted to find the smallest and greatest numbers in a list.
It wasn't clear quite what was meant. Let's suppose that both
results are wanted.

I argued for
... max_member(Max, List),
min_member(Min, List).

Tarau argued for
... bestof(Min, <, member(Min,List)),
bestof(Max, >, member(Max,List))

I keep on saying that if you _have_ a list, you might as well work
directly with it. Let's eliminate that possible confusion by
changing the problem. There's also the fact that my aggregate/[3,4]
operations can handle simple minimum and maximum directly. Let's
eliminate that possible confusion by stipulating that the Order is
to be passed as a predicate too. The new problem specification is:

We are to write a predicate
extrema(Closure, Order, Min, Max) :-
    Closure(Min) & Closure(Max) &
    forall X Closure(X) -> Order(Min, X) & Order(X, Max)

Here Order is to be like <= . It isn't clear to me whether bestof/3
is supposed to work with things like <= or only with things like <,
but let's suppose that it works with things like <=.

The best I can figure out with bestof/3 is

extrema(Closure, Order, Min, Max) :-
    bestof(Min, Order, call(Closure,Min)),
    bestof(Max, opp(Order), call(Closure,Max)).

opp(Closure, X, Y) :-
    call(Closure, Y, X).

The significant point here is that Closure(_) has to be completely
solved TWICE. Now suppose we have the primitive that I am advocating,
namely gather_solutions/3. Here's what we do:

extrema(Closure, Order, Min, Max) :-
    gather_solutions(X, call(Closure,X), Solutions),
    min_member(Order, Solutions, Min),
    max_member(Order, Solutions, Max).

The price we pay here is that we build a list of Solutions and then
traverse it twice. Ah, but the savings! We only have to find the
solutions once!

The basic point here is that bestof/3 is defined to return ONE of the
solutions. It _cannot_ combine information from two or more solutions.
*THAT* is what Fernando and I are arguing for. We're saying that there
are natural problems where you need to _combine_ information from several
branches, not merely pick _one_ of them.

Suppose now that we decide not to build a list to represent the set of
solutions. Very well. Here's what we can do:

combine_solutions(Function, Term, Generator, Answer) :-
    gather_solutions(Term, Generator, [Solution|Solutions]),
    reduce(Function, Solutions, Solution, Answer).

Be careful! This is only defined when Function is commutative and
associative. If we further require Function(X,X) = X, then we don't
need to worry about eliminating duplicates, but of course one of the
major functions of interest, namely +, does not have this property.
Now that was a specification. combine_solutions/4 can be implemented
the same way as bestof/3. We have

bestof(Term, Order, Generator) :-
    combine_solutions(min(Order), Term, Generator, Term).

gather_solutions(Term, Generator, Solutions) :-
    combine_solutions(ord_union, [Term], Generator, Solutions).

extrema(Closure, Order, Min, Max) :-
    combine_solutions(minmax(Order),
                      (X,X), call(Closure,X), (Min,Max)).

minmax(O, (A,B), (C,D), (E,F)) :-
    min(O, A, C, E),
    max(O, B, D, F).

What is the difference between combine_solutions/4 and bestof/3?

0. It can be implemented in essentially the same way as bestof/3.

1. combine_solutions/4 is based on a binary _function_, not a binary
_predicate_. It is not restricted to returning one of the answers,
but can combine them. There are quite simple problems which it
can solve and bestof/3 can't.

2. I'm proposing that combine_solutions/4 be as much like solutions/3
as it can be, while bestof/3 is like findall/3.

Now there is something that needs justifying. NU-Prolog's solutions/3
predicate (which I am taking as my model) is *very* like setof/3.
If it is possible to get a better approximation to solutions/3 without
coroutining, I would very much like to know how to do it. Lee Naish's
"little bit pregnant" slur is rather exaggerated.

Tarau is scathing about the fact that setof/3 (AND solutions/3!) sort
their output in order to eliminate duplicates.

He is partly right. Prolog's standard ordering on terms:
variables < other constants < numbers < strings < atoms <
compound terms
does one regrettable thing: it means that Prolog programs can _only_ be
construed as working with the Herbrand universe (plus an additional
built-in predicate). In pure first order logic, it would be impossible
to depend on "a @< b" because in non-Herbrand models a and b might be
interpreted as the same element of the universe. However, to push Lee
Naish's phrase back a little earlier in time, Prolog lost its virginity
long before _that_. As soon as you allow inequality, you've restricted
yourself to the Herbrand universe. Defining "a @< b" isn't even like
succumbing to a second seducer, it's like engaging in additional
variations with the same "a ~= b" seducer. The crucial thing that makes
solutions/3 work in NU Prolog is that it can introduce suspended calls
to (~=)/2 when it isn't quite sure that a pair of solutions is distinct.
It would be perverse to gulp down the '~=' camel and then complain
about the '@<' gnat. (I suggested hashing as a way of grouping solutions
with the same values for free variables; that depends on '~='.)

Here's the "logical behaviour" ranking for all-solutions predicates:
WORST
If there are no free variables in the Generator,
findall/3 is as well-behaved as bagof/3, otherwise not.
DUBIOUS
If the Generator returns no duplicate solutions,
bagof/3 is as well-behaved as setof/3, otherwise not.
USABLE
If every two solutions returned by the Generator either
do not unify or are absolutely identical,
setof/3 is as well-behaved as solutions/3, otherwise not.
BEST
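
For concreteness, take the three facts

p(a, 1).
p(a, 1).
p(b, 2).

Then (answer layout schematic):

% ?- findall(Y, p(X,Y), L).    one answer:  L = [1,1,2], X left unbound
% ?- bagof(Y, p(X,Y), L).      two answers: X = a, L = [1,1] ; X = b, L = [2]
% ?- setof(Y, p(X,Y), L).      two answers: X = a, L = [1]   ; X = b, L = [2]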


> The problem is that we are (in principle) programming in
> logic where we _have_ predicates. What if we think with predicates as well
> as we think with sets or even better? That's WHY we _program_ in logic.

I think I'm being criticised for being on the wrong side of a fence which
I can't even see. I program with relations. Sometimes I use the algebra
of set theory. Sometimes I use the relational algebra. Sometimes I use
the algebra of graph theory. Sometimes I use the word 'predicate'. My
point is "DON'T make any distinction until it pays". We mustn't let a
word like "predicate" blind us to the fact that _sometimes_ a term may
be the right way to represent a mathematical concept, and we mustn't let
a word like "set" blind us to the fact that _sometimes_ a Prolog procedure
may be the right way to represent a mathematical concept.

> Your sketch of a call/N implementation at WAM-level in a previous posting
> shows that metaprogramming on assertional representations is reasonably
> efficient with appropriate support. My experience with performance analysis
> of BinProlog was that amortized cost of metaprogramming is very small,
> even if it seems very intensively used at source-level.

This is good news.

> > By the way, with respect to OR-parallel, do I really need to point out
> > that bestof(X, Order, member(X,List)) is O(|List|) whether sequential
> > or OR-parallel, regardless of the number of processors?
>
> Well, if you leave it in the hands of the implementor and avoid to do
> it yourself with assert and retract you may find out that he can make
> it for you O(log(|List|)). What about a process tree where only "winners"
> qualify for the next game?

I'm afraid Tarau has completely missed the point here. Of _course_ the
combination phase may have logarithmic depth. Book one, lesson three,
in a good parallel programming text. But what about the _setup_ phase?
If you are finding bestof(X, Order, member(X,[A1,...,An])), the depth of
the computation that locates An is O(|List|). (Yes, I know about
pointer jumping.) To put it simply: it's not the calls to Order which
have O(|List|) depth but the call to member/2.

--
I am writing a book on debugging aimed at 1st & 2nd year CS students using
C/Modula/Pascal-like languages. Please send suggestions (other than "you
_must_ cite "C Traps and Pitfalls") to o...@goanna.cs.rmit.oz.au

Lee Naish

Apr 6, 1992, 8:20:15 AM
A couple of points raised by Richard's postings:

>I repeat, THE ISSUE IS TREATMENT OF FREE VARIABLES. Sorting is not
>the issue.

To quote from my all solutions paper: "The main difference between
findall and solutions is the way local variables are distinguished".
Setof-like treatment of free variables is as important as Richard says.

It turns out that sorting is also an issue - the fact that the IC-Prolog
all solutions primitive does not sort solutions allows var/1 to be
implemented (but only by using the coroutining facilities). To
avoid a logical specification of an all solutions construct that allows N
factorial different permutations to be returned (or an unbounded number
of repeated elements), it seems necessary to specify some canonical
representation of the set of answers.

> Lee Naish's
>"little bit pregnant" slur is rather exaggerated.

What I said is you can argue both ways on this. Richard obviously
thinks there are degrees of "non-logicalness". From a practical point
of view I agree that setof/3 is nicer than findall/3 because it is closer
to the ideal logical construct. However, that kind of argument is not
going to convince John Lloyd to put setof/3 into Goedel (for example).
Actually John is not entirely happy with any all solutions construct,
including solutions/3. The semantics of solutions/3 was defined by a
schema: for any call to solutions with an instantiated generator, it
gives you an equivalent bit of logic. Personally I'm happy with that.

There are a couple of rough edges. One is what you do when the
generator is a variable. Well, that's just like call/1, which again
doesn't worry me. You can define call like this:

call(p(X)) :- p(X).
call(q(X, Y)) :- q(X, Y).
.....

Some people are very uptight about p/1 and q/2 being used as predicate
and function names. The context always tells you whether something is a
predicate or function, so if it worries you, add a superscript Greek letter
to everything in sight. The other rough edge is local variables in the
solutions to the generator. What we really need to do is return an
infinite list of solutions. Solutions/3 writes an error message, which
is like going into an infinite loop, but more useful.

> As soon as you allow inequality, you've restricted
>yourself to the Herbrand universe.

I don't think it's quite as bad as that. When you write a NU-Prolog
program, implicitly there are a whole lot of additional clauses:

a ~= b.
b ~= c.
....

These better be true in the intended interpretation (at least if you use
~= in your program), but it need not be a Herbrand interpretation.
Similarly, when you use negation as failure, the equality axioms (which
include the ~= case) better be true in the intended interpretation.
The equality axioms restrict the interpretation to look just like a
Herbrand interpretation, but it could be another (a Clark interpretation
perhaps?). The solutions/3 semantics implicitly uses negation.

> If every two solutions returned by the Generator either
> do not unify or are absolutely identical,
> setof/3 is as well-behaved as solutions/3, otherwise not.

The condition needs to be stronger:

1) no solution to the generator contains local variables (note that
solutions/3 just produces an error message when local variables are
present in answers - not much more clever than setof/3), and

2) for every two solutions returned by the Generator, the *global* variables
either do not unify or are identical.

If all solutions to the generator are ground (the condition mentioned by
Fernando), conditions 1) and 2) are satisfied. To show why we need 2),
consider the following (NU-Prolog) goal:

?- solutions(Y, (Y = a, X = a ; Y = b), L), (X = a ; X = b).
L = [a, b],
X = a ;

L = [b],
X = b ;

fail.

The solutions call returns the answer L=[a,b], with X=a. Setof/3
could do the same thing - no problems so far.

The next solution is L = [b], but that is only valid if X ~= a.
In NU-Prolog, the solutions call results in a delayed constraint
equivalent to X ~= a. Poor old setof doesn't have the benefit of
delayed calls - there is nothing it can do to represent this answer
properly.

Note that if the goal is reversed, so the all solutions construct
is called last, setof produces the same set of answers as solutions.
Condition 2) is satisfied since there is no binding of global variables
by the "generator".

A final subtle point. In a construct which can't use delaying, we need
a third restriction:

3) the set of answers must be constrained to be non-empty.

Consider the following:

?- solutions(Y, (Y = a, X = a), L), (X = a ; X = b).
L = [a],
X = a ;

L = [],
X = b ;

fail.

This goal satisfies conditions 1) and 2), but needs the constraint
X ~= b for the L = [] solution. Fortunately, setof/3 and bagof/3
are defined to fail rather than return empty lists.

>If it is possible to get a better approximation to solutions/3 without
>coroutining, I would very much like to know how to do it.

There are a couple of things:

1) Do quantifiers *properly*. With code which has variables which should
be local to the all solutions construct appearing elsewhere (eg

setof(X, ...), ...... X ...

), either the different occurrences of X should be considered different
variables or the compiler should generate an error. At the very least,
"style checkers" should COMPLAIN VERY LOUDLY. I'm not sure what current
commercial Prolog systems do with such code. Of course this case makes
a mess of the semantics (it can be as bad as findall) if not handled
properly.

1a) Just a point on implementation. If you do quantifiers properly, you
can distinguish local and global variables at compile time, saving some
runtime overheads, i.e., IT'S FASTER.

2) It wouldn't hurt to have versions of setof/3 which produced errors
when called "outside their design envelope", rather than returning
*something*.

lee

Richard A. O'Keefe

Apr 6, 1992, 8:44:17 PM
In article <920972...@mulga.cs.mu.OZ.AU>, l...@munta.cs.mu.OZ.AU (Lee Naish) writes:
> A couple of points raised by Richard's postings:
> >If it is possible to get a better approximation to solutions/3 without
> >coroutining, I would very much like to know how to do it.

> There are a couple of things:

> 1) Do quantifiers *properly*. With code which has variables which should
> be local to the all solutions construct appearing elsewhere (eg
>
> setof(X, ...), ...... X ...
>
> ), either the different occurrences of X should be considered different
> variables or the compiler should generate an error. At the very least,
> "style checkers" should COMPLAIN VERY LOUDLY.

Agreed. David Scott Warren ran into this problem once, and was very
surprised. Commercial Prologs are stuck with the backwards compatibility
issue: the DEC-10 Prolog manual defined "X^p(X)" to be _exactly_ like
"p(X)" outside setof/bagof forms. This overspecification was regrettable.
In fact the DEC-10 Prolog library treats existential quantifiers
specially inside not(_) as well, so that not(X^p(X)) is ok.

My preference would be for the compiler to rename existentially quantified
variables _and_ for the style checker to warn about it.
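
For example, with the existing notation:

% ?- bagof(Y, X^p(X,Y), L).   one answer; X is quantified away inside the bagof
% ?- bagof(Y, p(X,Y), L).     backtracks over the bindings of X

The renaming question arises when a variable that is meant to be local (the
template variable or an X^-quantified one) also occurs textually outside the
construct.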

> 2) It wouldn't hurt to have versions of setof/3 which produced errors
> when called "outside their design envelope", rather than returning
> *something*.

The Quintus library contains bag_of_all/3 and set_of_all/3 which are
"safe" versions of findall/3. If there are any free variables in the
Goal argument they report an error. (This makes it safe for them to
return empty lists.) I'll add
safe_{bag_of,set_of,bag_of_all,set_of_all}/3
safe_grouped_{bag_of,set_of}/4
that report an error if the list of solutions is non-ground.
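
For anyone rolling their own, the check amounts to something like this (a
sketch only, not the library code; term_variables/2 is assumed, and
raise_error/1 stands for whatever error mechanism the system provides):

checked_findall(Template, Goal, Solutions) :-
    term_variables(Goal, GoalVars),
    term_variables(Template, TemplateVars),
    free_vars(GoalVars, TemplateVars, Free),
    (   Free == []
    ->  findall(Template, Goal, Solutions)
    ;   raise_error(free_variables_in_goal(Free, Goal))   % placeholder
    ).

% Free: the variables of the Goal that do not occur in the Template.
free_vars([], _, []).
free_vars([V|Vs], TVars, Free) :-
    (   strict_member(V, TVars)
    ->  Free = Free1
    ;   Free = [V|Free1]
    ),
    free_vars(Vs, TVars, Free1).

strict_member(X, [Y|_]) :- X == Y.
strict_member(X, [_|Ys]) :- strict_member(X, Ys).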

Paul Tarau

Apr 7, 1992, 8:12:31 PM
Thanks for pointing out clearly our differences and points of agreement.
Well, the main question still left is:

Richard A. O'Keefe writes:

> - I claim that bestof/3's interface is defective in precisely
> the same sense as findall/3's; that (when used within its
> domain of definition) setof/3's interface is better, and
> that NU-Prolog's solutions/3 (which generalises setof/3)
> is better still.

OK, I'll try to fix that. Let's call it _angelic bestof/3_ for reasons I
will discuss later. Here it is:

% bestof(Winner,Game,Generator) is true iff Winner is an answer
% to Generator, Game is a closure that works as a total order on
% alternative answers to Generator, and Winner is maximal with respect
% to that total order on the set of answers.
% The implementation must ensure that:
% - no bindings travel between distinct OR-branches used to prove Generator
% - no bindings are made on possibly uninstantiated variables of the
%   closure Game

bestof(Winner,Game,Generator):-
    gather_solutions(Winner,Generator,[X|Xs]),
    let_them_play(Xs,Game,X,Winner).

let_them_play([],_,Winner,Winner).
let_them_play([Challenger|Xs],Game,Incumbent,Winner) :-
    is_better(NewIncumbent,Game,Incumbent,Challenger),
    let_them_play(Xs,Game,NewIncumbent,Winner).

is_better(Winner,Game,Incumbent,Challenger):-
    \+ \+ % ensures that the winner does not steal bindings
        call_with_closure(Game,[Challenger,Incumbent]),
    !,
    Winner=Challenger.
is_better(Incumbent,_Game,Incumbent,_Challenger).

call_with_closure(Closure,Args):- % may be replaced by call/N
    Closure=..L1,
    append(L1,Args,L2),
    Test=..L2,
    Test.

gather_solutions(X,G,Xs):-findall(X,G,Xs). % just to have it working

% tests

max(Xs,X):-bestof(X,>,member(X,Xs)).

test_max(X):-max([1,2,3,4,3,4,2,1],X).

thief(K1-_V,K2-_V):-K1>K2.

thief_max(Xs,X):-bestof(X,thief,member(X,Xs)).

test_thief(X):-
    % comment out \+ \+ in is_better/4 to steal bindings
    thief_max([1-f(_,b), 2-f(a,_), 4-f(_,_), 3-c], X).


1-st order "soundness" of _angelic bestof/3_:

If P and G are a definite program and a definite goal both possibly
containing occurrences of bestof/3 and $\theta$ is a computed answer
of P+{G}, then $\theta$ is also a computed answer of P'+{G'} where
P' and G' are obtained by replacing each occurrence of
bestof(X,Order,Generator) by Generator in P and G.

Therefore answers computed by P+{G} are sound as answers of P'+{G'}.

The nice thing is that now we can see bestof/3 as a _pragma_ that is
guiding the search without interfering with the proof itself. That's
one reason I call it "angelic". The other one is that on top of a
definite program it works like a metalogical pruning
operator: it "declaratively" cuts off branches of the search tree without
interfering with the final proof. It also has efficient OR-parallel
execution when working with an appropriate generator.

Let's call the version without \+ \+ "unrestricted bestof". You can try
out some of its not always desirable powers by executing test_thief/1.

(Angelic) bestof/3 is like \+ as it cuts some parts of the tree without
making any bindings. However \+ is more difficult as it reverses success
and failure while even unrestricted bestof/3 only gets things more
instantiated when it goes with the winning OR-branch.

Obviously some of the hidden information exchange capabilities between
different OR-branches of unrestricted bestof/3 are lost as a price for
this strong soundness property, but I agree that if that's what we want
to do, then Richard's:

> combine_solutions(Function, Term, Generator, Answer) :-
>     gather_solutions(Term, Generator, [Solution|Solutions]),
>     reduce(Function, Solutions, Solution, Answer).

does the job quite well.

Paul Tarau
ta...@iro.umontreal.ca
