
Is Visual Prolog "real" Prolog?


Jose A. S. Alegria

Apr 16, 1996
Could someone clarify if Visual Prolog is a full implementation of the
PROLOG language (ISO standard or Quintus compatible)? If not, what are
the differences from the semantic viewpoint? I understand there are
some syntactic differences (declarations and so on), but what about
unification and control semantics?

Thanks,

jalegria
Jose A. S. Alegria
Work: <jale...@bfe.pt> Home: <jale...@mail.telepac.pt>


Jonathan Lerwill

Apr 19, 1996, to Jose A. S. Alegria
Visual Prolog Technical Support
Service Request Number: 12698

Jose A. S. Alegria had a few questions about Visual Prolog

Visual Prolog is not (yet) ISO compliant, though we fully supported the
standardization effort.


VIP has static typing, does not support operator definitions, and does
not implement DCG expansion; in general, since it is a native-code
compiler, it does not support the metalinguistic constructs that tend to
rely on some capability for decompilation (interpreters, p-code
systems). The multipass compiler uses Warren-like abstractions on the
way to emitting standard machine code; the type system and flow analysis
allow the compiler to treat most forms of unification more efficiently,
although 'normal' unification is used where such optimizations are not
possible. Backtracking and other control semantics work as usual. The
fact that one of the supplied example programs is a fairly standard
Prolog interpreter demonstrates that anything deemed missing above can
be modelled fairly easily.
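
For reference, the "fairly standard Prolog interpreter" referred to is
usually some variant of the classic vanilla meta-interpreter (an
illustrative sketch, not PDC's actual example code):

% Vanilla meta-interpreter: runs a goal against the clauses
% visible through the built-in clause/2.
solve(true).
solve((A, B)) :-
        solve(A),
        solve(B).
solve(Goal) :-
        clause(Goal, Body),
        solve(Body).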

Regards

Jonathan Lerwill + Michael Alexander (PDC Atlanta)
sup...@pdc.dk

Fergus Henderson

Apr 20, 1996
Jonathan Lerwill <jona...@pdc.dk> writes:

>Jose A. S. Alegria had a few questions about Visual Prolog
>
>Visual Prolog is not (yet) ISO compliant

[...]
>VIP [...] does not implement DCG expansion

Incidentally, ISO Prolog does not include DCGs. So while lack of DCGs
is an incompatibility with traditional (Edinburgh) Prolog, it's not an
incompatibility with ISO Prolog.

The ISO Prolog committee wanted to standardize DCGs, but the problem
was that they couldn't agree on an operational semantics. The
declarative semantics were clear, but they couldn't come to any
agreement on precisely how to treat DCGs containing non-logical code.
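
To illustrate the kind of disagreement involved (a sketch of my own,
not from the committee's discussions), consider two plausible
expansions of a rule whose body ends in a cut:

% The rule   p --> q, !.   with   q --> [a].   q --> [a], [b].

% Expansion A: thread a fresh variable, unify it in after the cut.
p(S0, S) :- q(S0, S1), !, S = S1.

% Expansion B: pass the final list argument straight through.
p(S0, S) :- q(S0, S), !.

% Declaratively these agree, but the goal  p([a,b], [])  fails under A
% (q commits to its first solution, leaving S1 = [b]) and succeeds
% under B (q is called with the remainder already known to be []).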

--
Fergus Henderson <f...@cs.mu.oz.au> | "I have always known that the pursuit
WWW: <http://www.cs.mu.oz.au/~fjh> | of excellence is a lethal habit"
PGP: finger f...@128.250.37.3 | -- the last words of T. S. Garp.

Avery Andrews

Apr 23, 1996
Has anything been done about the annoying syntactic/lexical
incompatibilities with Edinburgh Prolog, such as using `=' for
`is', `free' instead of `var', etc.?
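
For readers who have not seen the Turbo/Visual dialect, an illustrative
contrast (details may vary across Visual Prolog versions):

% Edinburgh Prolog:
double(X, Y) :- Y is 2 * X.            % is/2 evaluates arithmetic
report(X) :- var(X), write(unbound).   % var/1 tests for an unbound variable

% Turbo/Visual Prolog equivalents, for comparison:
%   double(X, Y) :- Y = 2 * X.         (`=' evaluates arithmetic)
%   report(X) :- free(X), write(unbound).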

Avery....@anu.edu.au


Richard A. O'Keefe

Apr 30, 1996

and...@Turing.Stanford.EDU (Avery Andrews) writes:

>Has anything been done about the annoying syntactic/lexical
>incompatibilities with Edinburgh Prolog, such as using `=' for
>`is', `free' instead of `var', etc.?

Switch to Mercury. The type system is more powerful, the code is
blindingly fast, it isn't tied to 80*86s, it handles DCGs, and it's free.

Of course you don't get the "visual" bit. PDC Prolog's user interface
stuff shouldn't be too hard to emulate; what does Visual Prolog offer?

--
Fifty years of programming language research, and we end up with C++ ???
Richard A. O'Keefe; http://www.cs.rmit.edu.au/~ok; RMIT Comp.Sci.

Jose A. S. Alegria

May 1, 1996

o...@goanna.cs.rmit.edu.au (Richard A. O'Keefe) wrote:

>and...@Turing.Stanford.EDU (Avery Andrews) writes:

>>Has anything been done about the annoying syntactic/lexical
>>incompatibilities with Edinburgh Prolog, such as using `=' for
>>`is', `free' instead of `var', etc.?

>Switch to Mercury. The type system is more powerful, the code is
>blindingly fast, it isn't tied to 80*86s, it handles DCGs, and it's free.

Richard,

Could you post pointers to papers or other documentation on Mercury's
type system and implementation? Is Mercury a WAM based
compiler/interpreter? How does it compare to Sicstus in terms of
speed?

>Of course you don't get the "visual" bit. PDC Prolog's user interface
>stuff shouldn't be too hard to emulate; what does Visual Prolog offer?

Well, they promise ultra-fast performance without GC overhead (they
don't need it?!). This high performance and well-behaved memory
system allows them to compete with VC++ in developing "efficient"
Windows apps. Or so I was told.

Regards,

Jose A. S. Alegria
Work: <jale...@bfe.pt> Home: <jale...@mail.telepac.pt>


Jonathan Lerwill

May 1, 1996, to Richard A. O'Keefe

Please look at the Visual Prolog home page for details of this DOS,
Win3.1x, Win95, WinNT and OS/2 Prolog development system.
The online newsletter gives a good overview.

/Jonathan Lerwill
sup...@pdc.dk

Thomas Charles CONWAY

May 2, 1996

jale...@mail.telepac.pt (Jose A. S. Alegria) writes:

[deletia]


>Could you post pointers to papers or other documentation on Mercury's
>type system and implementation? Is Mercury a WAM based
>compiler/interpreter? How does it compare to Sicstus in terms of
>speed?

Check out the Mercury home page: http://www.cs.mu.oz.au/mercury/mercury.html

Mercury is not a *Prolog* system. It uses Prolog-compatible syntax, but is
a new, purely declarative language. The compiler for Mercury that we have
written makes use of a new execution algorithm, not the WAM. Our execution
strategy allows for much more aggressive low-level optimization, and the
purely declarative nature of Mercury allows for much more aggressive
high-level optimization.

Our benchmarking shows Mercury to be just under twice as fast as
Aquarius Prolog, and about four times faster than SICStus Prolog.

Actually, the ratio of the speed of the Mercury compiler running under
SICStus Prolog to that of the Mercury compiler running under Mercury is more
than that - I don't have hard data at hand, but the Mercury version is,
I think, more like 8 times faster.

pax
Thomas

Thomas Lindgren

May 2, 1996

In article <4m91i9$3...@mulga.cs.mu.OZ.AU> con...@rimmer.cs.mu.OZ.AU (Thomas Charles CONWAY) writes:

>Our benchmarking shows Mercury to be just under twice as fast as
>Aquarius Prolog, and about four times faster than SICStus Prolog.
>
>Actually, the ratio of the speed of the Mercury compiler running under
>SICStus Prolog to that of the Mercury compiler running under Mercury is more
>than that - I don't have hard data at hand, but the Mercury version is,
>I think, more like 8 times faster.

How does Mercury compare to Visual/PDC Prolog in this respect?

As to the execution speed of Prolog, it appears that customers are
more interested in functionality than they are in execution speed
nowadays. From what I can see, vendors prefer to add features
rather than work on improving the compiler. Perhaps this is because a
WAM-based system tends to be fast and space-efficient enough.
Too bad for us compiler writers :-/

("If you need the program to run really fast, just write a C
procedure for the inner loop", so to say.)

Thomas
--
Thomas Lindgren, Uppsala University
tho...@csd.uu.se, lind...@sics.se
http://www.csd.uu.se/~thomasl/

Copyright Thomas Lindgren, 1996. Distribution on Microsoft Network prohibited.

Paul Tarau

May 3, 1996

In article <3mafzro...@arnold.csd.uu.se>,

Thomas Lindgren <tho...@csd.uu.se> wrote:
>
>As to the execution speed of Prolog, it appears that customers are
>more interested in functionality than they are in execution speed
>nowadays. From what I can see, vendors prefer to add features
>rather than work on improving the compiler. Perhaps because a WAM-based
>system tends to be fast and space efficient enough.

I think Thomas Lindgren is pointing out something I have heard
from a fairly large number of people involved in logic programming.
Basically, giving away expressiveness for speed is _wrong_.
Giving up Prolog for less expressive languages for reasons
of speed, types/modes, proprietary visual tools etc. is _wrong_ too.
Some history might help on this.

For many `real' Prologers today Turbo-Prolog is `out' and Mercury `in'.
Novelty and Fergus Henderson's well-targeted postings have definitely
helped Mercury, as has our fascination with speed on some popular
functional-programs-in-relational-form (tak and friends) benchmarks :-)

I agree that there is a lot of inventive implementation skill and
talent involved in Mercury. I do not know about the latest version of
Turbo-PDC-Visual `Prolog'. Apart from not supporting meta-programming
and having an overly restrictive type system, it also seemed
to be a well-implemented language with its own merits. It had a fairly
effective mode inference algorithm, among other things. And it was
really `in', for a while.

I think that the two languages have much more in common than what
separates them from Prolog. Basically, they have given away
expressiveness for speed. By the way, the two languages should, at
some point, be compared empirically on speed, program size and size of
standalone executables on both functional style and nondeterministic
programs. Any volunteers having both implementations around? Mercury
on Linux vs. PDC on W95 or NT looks fair enough to me.

It is likely that the designers/implementors of the two languages have
(independently?) attempted to push static compilation techniques to
their limit while giving away, little by little, some of the key
features that make `real' Prolog so appealing: logical variables,
unification, the 5-line meta-interpreter etc.

Not only purity was given away, but the fancy parts of the impure stuff
as well: if I remember correctly, only ground facts were allowed in Turbo
Prolog's dynamic database. Mercury, being a `purely declarative'
`new language' (despite the fact that it still runs under
SICStus :-), does not have such a beast at all.

The few mode and type declarations, usually less than the size of the
clauses themselves :-), are enough to break meta-programming once and for
all. The same happened with Turbo Prolog. A 100-times slower
meta-interpreter for real Prolog gave back `Edinburgh Prolog
functionality', of course :-). A little bit smaller and a lot
slower than in C.

Wanting to know `too much' statically is not worth its price when half
of the code will travel on the net (to end up executed in the Java-chip
in your TV set:-) Overall, the main strength of logic programming as a
practical programming tool is expressiveness. A factor of 2-3 in speed
is simply not enough to give it away. Modes and types are helpful for
managing complexity, but without inference they are tedious enough that
they cannot be excused for killing off convenient meta-programming.

To leave some hope, I would gladly switch to Visual Prolog if it had
type inference, and to Mercury if it had at least as much mode inference
as Visual Prolog :-). Combined with a first-order representation of
type and mode declarations and a 10-line meta-interpreter around.

Until then, the slim Prolog kernel which successfully survived ISO
standardization looks just too hard to beat. Visual programming is not
an issue, as most modern Prologs have at least a Tcl/Tk interface and
Java interfaces are coming. Learning a home-brewed visual language
with Prolog syntax when you already know a generic one is not worth
it.

I think that Prolog as an embedded logic engine in a multi-paradigm,
net aware environment is still the way to go for the next few years.


Paul Tarau

--
Paul Tarau, PhD. Associate professor Phone: (506) 858-4120
Dept. of Computer Science University of Moncton FAX: (506) 858-4541
Moncton N.B. CANADA E1A-3E9 ta...@info.umoncton.ca


Peter Van Roy

May 3, 1996

In article <4m4ard$k...@goanna.cs.rmit.edu.au>, o...@goanna.cs.rmit.edu.au (Richard A. O'Keefe) writes:

|> Switch to Mercury. The type system is more powerful, the code is
|> blindingly fast, it isn't tied to 80*86s, it handles DCGs, and it's free.
|>

You don't get constraints, though (e.g.: no unification!). Perhaps the Mercury
team is working on it for a future version?

Could a Mercury expert give a *balanced* comparison between Mercury and
Prolog--not focusing solely on tweaked execution speeds for a few small
programs, nor proselytizing?

For example, how does Mercury compare with Prolog in the following areas:

1. Coroutining support (freeze and relatives)
2. Support for program transformation (term_expansion, reflective syntax)
3. Read/write of arbitrary terms
4. Program modification support (assert/retract and relatives)
5. Higher-orderness (call and relatives)
6. Database query support (setof and relatives)
7. Constraint systems (rational trees, finite domains, attributed variables)
8. Development environment (interactive top level, incremental compiler)

I like nouvelle cuisine, but I don't want to starve!

Peter

-------------------------------------------------------------------------
Peter Van Roy
Programming Systems Lab Tel: +49-681-302-5332
DFKI Fax: +49-681-302-5341
Stuhlsatzenhausweg 3 Net: van...@dfki.uni-sb.de
D-66123 Saarbruecken, Germany Web: http://ps-www.dfki.uni-sb.de/~vanroy
-------------------------------------------------------------------------

Richard A. O'Keefe

May 6, 1996

van...@dfki.uni-sb.de (Peter Van Roy) writes:
>You don't get constraints, though (e.g.: no unification!).

You don't get constraints; you _do_ get unification.
The semantics of any Mercury program that the compiler accepts is
precisely the standard semantics of the logic program obtained by
erasing the type and mode declarations.

>Perhaps the Mercury
>team is working on it for a future version?

Constraints are indeed being considered. It's interesting that a PhD
student working on constraint logic programming under me started off by
arguing that it was important to base things on Prolog and that you
could build a Prolog interpreter as fast as some compiled Prologs
(which he achieved, thanks to writing more assembly code than any
sensible person would _want_ to write), but now argues that the Prolog
part is mainly useful for building up the constraint network.

>Could a Mercury expert give a *balanced* comparison between Mercury and
>Prolog--not focusing solely on tweaked execution speeds for a few small
>programs, nor proselytizing.

I'm not an expert, but I _can_ read a Web page...

>For example, how does Mercury compare with Prolog in the following areas:

>1. Coroutining support (freeze and relatives)

Mode declarations and 'when' declarations convey similar information.
Such alternative execution strategies are indeed planned for the
future.

>2. Support for program transformation (term_expansion, reflective syntax)

No. However, the language supports program transformation in a rather
different way: it places no _semantic_ barriers in its way, unlike
Prolog. It is

    harder to write sound program transformers for Mercury in Mercury
    than sound program transformers for Prolog in Prolog

but

    easier to write sound program transformers for Mercury in Prolog
    than sound program transformers for Prolog in Prolog.

>3. Read/write of arbitrary terms

See the 'term_io' package in the Library manual. There are
term_io__read_term//1
term_io__write_term//2
amongst others.

>4. Program modification support (assert/retract and relatives)

No.

>5. Higher-orderness (call and relatives)

Yes.

>6. Database query support (setof and relatives)

All-solutions predicates, yes.

>7. Constraint systems (rational trees, finite domains, attributed variables)

No. But then, none of the Prologs I have access to has these things
either.

>8. Development environment (interactive top level, incremental compiler)

The compiler looks like a traditional UNIX compiler; usable in makefiles.
(It uses GCC as a back end to get portability.)

>I like nouvelle cuisine, but I don't want to starve!

Neither do I.

On the other hand, I am sick of having to apologise that Prolog doesn't
have constant-time updatable arrays (or that Prolog dialect XXX, which does,
has done so at the sacrifice of any connection with logic programming).
I _like_ being able to say "yes you can have efficient I/O without strange
contortions, yes you can have constant time updatable arrays, and yes you
_can_ have logic programs based on a sound execution strategy for Horn
clause programs with a pretty standard semantics, all in the same box."

I don't think anyone associated with Mercury would want to argue either
that Mercury should ever replace Prolog or that Mercury is finished.
There are lots of things to come, concurrency (where the absence of
updates to a global variable is a big help), coroutining, exceptions,
program transformation tools, great development environment, ...
There's a reason that the current release is 0.5, not 1.5!

(Can I have a Clean-style programming interface for windows? Please?
Pretty please with sugar on? I'm finally getting my head around that,
and it looks like what I've always wanted.)

Fergus Henderson

May 7, 1996

ta...@clement.info.umoncton.ca (Paul Tarau) writes:

>I think Thomas Lindgren is pointing out something I have heard
>from a fairly large number of people involved in logic programming.
>Basically, giving away expressiveness for speed is _wrong_.
>Giving up Prolog for less expressive languages for reasons
>of speed, types/modes, proprietary visual tools etc. is _wrong_ too.
>
>Some history might help on this.

and he continues

>It is likely that the designers/implementors of the two languages
>[Turbo-Prolog and Mercury] have (independently?) attempted to push
>static compilation techniques to their limit while giving away, little
>by little, some of the key features that make `real' Prolog so
>appealing: logical variables, unification, the 5-line meta-interpreter
>etc.

I'm afraid Paul has a quite fundamental misunderstanding about the history,
design and aims of Mercury. We most certainly did not start with Prolog
and then "give away, little by little, some of the key features that
make `real' Prolog so appealing". On the contrary, we set out to
design from scratch a _different_ language for _different_ purposes.

Our goal was to design a logic programming language that could
satisfy our two main aims: a language that is purely declarative, and
yet one that is a genuinely practical "real-world" tool which provides
real support for programming in the large. The wonderfully felicitous
thing about it was that these two aims led to exactly the same design
decisions.

The core of Mercury is first order predicate calculus with strong
types, modes, and determinism. Types, modes, and determinism are very
important from a software engineering perspective: compile-time
detection of errors provides a huge productivity boost, and is
particularly important for programming-in-the-large; the information
about types, modes, and determinism is essential for maintenance; and
while the efficiency which they enable -- in terms of runtime speed,
code size, and data size -- is a helpful for just about every sort of
programming, it is especially important for large real-world
applications. But types, modes, and determinism are also integral to
the way we have achieved our aim of a purely declarative language.
Some means of enforcing determinism is necessary to avoid attempts to
backtrack over I/O. Types and modes provide information which is necessary
to support accurate determinism analysis. Modes also provide a way
of handling negation, if-then-else, arithmetic, and many other difficult
constructs in a logically sound manner.
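
As a concrete illustration (a standard example, not taken from this
thread; Mercury's actual library declarations may differ in detail),
the type, mode, and determinism declarations for list concatenation
look like this, assuming the list module is imported:

:- pred append(list(T), list(T), list(T)).
:- mode append(in, in, out) is det.      % concatenating: exactly one solution
:- mode append(out, out, in) is multi.   % splitting: one or more solutions

append([], Ys, Ys).
append([X | Xs], Ys, [X | Zs]) :-
        append(Xs, Ys, Zs).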

So, starting with this core language, we have then gradually _added_
features, being careful to preserve declarativeness. These features
were not chosen at random; they were chosen because they are essential
to achieving our design goals.

For large-scale software development, there is a clear need for a
module system, so there was never any doubt about including that.
All-solutions predicates and other aggregate predicates (e.g. min, max,
sum, count) provide an essential mechanism for interfacing between
deterministic and nondeterministic code, so there was no doubt about
including those either. But getting the scope rules right for such
constructs is a bit tricky. One way to do it is to make each such
construct a special language builtin, as is done in NU-Prolog.
However, this seems very ad-hoc. Instead, we designed a much more
general and extensible scheme based on the use of higher-order
predicates, which are desirable anyway, since they provide a very
high-level and elegant mechanism which promotes code reuse. So we
added higher-order predicates and an all-solutions predicate. It would
be a shame if the use of higher order predicates to get the scope rules
right meant the code was less readable, so to allow the all-solutions
goal to be written inline we added lambda expressions (which are also
useful for a lot of other things too). Support for a declaratively
sound form of committed choice nondeterminism was added because it will
be crucial for parallel and multithreaded (coroutining) applications,
but also provides some additional flexibility that is useful for
sequential programs too. Getting arithmetic right basically requires
support for functional syntax, so we've added that too. (Functional
syntax is not an afterthought --- it's something we discussed on day 1,
but postponed implementing until after most other things. But it's
done now -- look out for it in the next release! Mercury is now an
*equational* programming language, not just a logic programming
language.) Real-world programs need a C interface, so of course we
added that.

>I think that the two languages have much more in common than what
>separates them from Prolog. Basically, they have given away
>expressiveness for speed.

Semantically, Mercury has more in common with Goedel than with either
Turbo-Prolog or standard Prolog. Efficiency was an important
consideration in the design of the language (and certainly in the
implementation!), but it was by no means the only consideration.

In any case, we think Mercury is extremely competitive with Prolog when
it comes to expressiveness. There are several considerations in
Mercury's favour.

* Mercury allows a much more natural style: there is never any need to
  introduce auxiliary predicates or to contort your code so that it meets
the requirements for indexing on only the top-level functor of the
first argument, there is never any need to litter the code with cuts,
or to scratch your head wondering exactly where the cut should be
placed. There is no danger in Mercury of accidentally writing code
that is unsound or not steadfast.

* Prolog programmers typically avoid using higher-order predicates; one
important reason for this is that they typically incur a high
overhead. By contrast, Mercury programmers can make fairly free use
of higher-order predicates and be confident that the implementation
will produce efficient code.

* Mercury now supports functional syntax. (This will be included in the
next public release.) The vast majority of Prolog systems do not.

* Mercury has been designed to support high-level optimizations and
  code transformations, such as deforestation, constraint propagation,
  as well as alternative execution strategies, such as memoing or even
  bottom-up execution. Eventually, implementations will exploit these
  opportunities, freeing programmers from having to worry so much about
  low-level details. The Mercury compiler already performs automatic
  inlining (unfolding), specialization of higher-order predicates, and
  elimination of unused arguments. (Deforestation is sketched just
  after this list.)

* Mercury's unique mode system supports a declarative means of performing
destructive updates (although the support for unique modes in the
current implementation is not yet complete). Prolog does provide
assert and retract, which can be used for this purpose, but they
are neither elegant nor efficient, and of course they are totally
non-logical.
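
To illustrate the "deforestation" mentioned in the list above, here is
an illustrative sketch in Prolog syntax (not Mercury compiler output):

% Before: sum the doubles of a list via an intermediate list Ys.
sum_doubled(Xs, S) :-
        double_all(Xs, Ys),
        sum_all(Ys, S).

double_all([], []).
double_all([X | Xs], [Y | Ys]) :-
        Y is 2 * X,
        double_all(Xs, Ys).

sum_all([], 0).
sum_all([X | Xs], S) :-
        sum_all(Xs, S0),
        S is S0 + X.

% After deforestation, the two traversals are fused and the
% intermediate list is never built:
sum_doubled_fused([], 0).
sum_doubled_fused([X | Xs], S) :-
        sum_doubled_fused(Xs, S0),
        S is S0 + 2 * X.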

There are a couple of reasons why some people think Mercury is less
expressive than Prolog.

* Mercury requires users to declare the types, modes, and determinisms
of their predicates. In our view, programmers ought to know the
intended types, modes, and determinisms of the predicates they write.
Writing down this information helps anyone who has to read the source
code in the future. (Even, in many cases, the original programmer,
especially if he/she is working on a large program.)

However, we do appreciate that writing these declarations can interfere
with exploratory programming, and so we have been working on getting the
compiler to infer them, for predicates or functions local to a module.
The latest development version of the Mercury compiler performs type
inference as well as determinism inference. Furthermore, mode
declarations for functions can be omitted, in which case the
system will presume that all arguments are input and the function
result is output. So if you write functional code, you don't need any
declarations at all, except in the module interface. Even in that case,
the compiler can often generate the declarations for you, which you can
then paste into the source code.

Peter Schachte has also been doing some work on doing mode inference
for predicates.

* Mercury doesn't allow code that is not well-typed.

Programmers who want to write code that is not well-typed can use
the Mercury library module `term'; this isolates the non-well-typed
parts of the program from those that are well-typed, and allows the
well-typed parts of the program to benefit from the better
compile-time error checking and higher runtime efficiency made
possible by type information.

* Mercury doesn't allow code that is not well-moded.

There are several categories of such code. One category is code
that would make use of coroutining; another is code that would make
use of constraints. As discussed earlier, we are working on extensions
to Mercury and Mercury's mode system to handle both of these categories.
The third category of non-well-moded code is of course bugs;
we don't have any plans to support code that falls in this category ;-)

* Mercury doesn't provide (or allow) many of the non-logical constructs
that Prolog programmers are used to, such as cut, assert/retract,
side-effect I/O, var/1, and so on.

Mercury provides alternative solutions for the problems addressed by
these constructs. Mercury is not just a new dialect of Prolog. It
is a new kind of language, and learning a new kind of language requires
learning new idioms.

>By the way, the two languages should, at
>some point, be compared empirically on speed, program size and size of
>standalone executables on both functional style and nondeterministic
>programs. Any volunteers having both implementations around? Mercury
>on Linux vs. PDC on W95 or NT looks fair enough to me.

We agree; it would definitely be interesting to compare the performance
of Mercury and Turbo/PDC/Visual Prolog.

(Comparing performance on an x86 would be useful if you want to run
code on an x86, but for any other purposes we don't think such a
comparison would be *entirely* fair. The PDC compiler is a native-code
generator designed specifically for the x86, with its tiny set of
registers and so forth, and ought to be pretty well tuned by this
time. Although the Mercury compilation scheme is portable, it is
better suited to modern architectures which have more registers. This
is not to say that Mercury performs badly on x86, just that it is at
somewhat of a disadvantage compared to a native code generator on
that platform.)

>The few mode and type declarations, usually less than the size of the
>clauses themselves :-), are enough to break meta-programming once and for
>all.

That's not the case. Of course, you can't use the non-ground
representation, since it is non-logical. But using the non-ground
representation is a bad idea anyway. Yesterday, as it happens, I sent
off two bug reports, one for SWI-Prolog and one for BinProlog; both of
them were for bugs in expand_term that are directly attributable to the
use of the non-ground representation. The implementors of SWI-Prolog
and BinProlog are extremely competent Prolog programmers; if they can't
get it right, can we expect anyone else to?

If you want examples of meta-programming in Mercury, see
<http://www.cs.mu.oz.au/~fjh/mercury/interpreter.m>, or download the
Mercury source distribution -- after all, the Mercury compiler is
itself one big meta-program. Having written 100000-odd lines of
meta-program, our experience is that meta-programming works just fine
in Mercury.

>The same happened with Turbo Prolog. A 100-times slower
>meta-interpreter for real Prolog gave back `Edinburgh Prolog
>functionality', of course :-). A little bit smaller and a lot
>slower than in C.

The reason the Turbo Prolog meta-interpreter is so slow is the same
reason that the Goedel meta-interpreter is so slow -- lack of
destructive update. Although the support for unique modes and destructive
update in Mercury is not quite complete, there is probably enough
there to write a reasonably efficient meta-interpreter using uniq_array.m.
(We leave this as an exercise for the reader. :-)

>Overall, the main strength of logic programming as a
>practical programming tool is expressiveness. A factor of 2-3 in speed
>is simply not enough to give it away. Modes and types are helpful for
>managing complexity, but without inference they are tedious enough that
>they cannot be excused for killing off convenient meta-programming.
>
>To leave some hope, I would gladly switch to Visual Prolog if it had
>type inference, and to Mercury if it had at least as much mode inference
>as Visual Prolog :-). Combined with a first-order representation of
>type and mode declarations and a 10-line meta-interpreter around.

I'm sure you will be glad to know that I've just implemented type inference,
so that will be available in the next release. Peter Schachte has been
doing some work on mode inference, so I'm hoping he will implement something
soon (but if he doesn't, I may ;-).

Here's your 10-line meta-interpreter:

:- module meta_interpreter.
:- import_module term, varset, clause_db.
:- pred solve(database::in, term::in, varset::in, varset::out).
solve(DB, functor(atom(","), [A, B], _)) --> solve(DB, A), solve(DB, B).
solve(DB, functor(atom(";"), [A, B], _)) --> solve(DB, A) ; solve(DB, B).
solve(Database, Goal) -->
        { clause(Database, Goal, ClauseVarSet, Head0, Body0) },
        rename_apart(ClauseVarSet, [Head0, Body0], [Head, Body]),
        unify(Goal, Head),
        solve(Database, Body).

This is actually the kernel of interpreter.m. Since meta-interpreters
are clearly so important to comp.lang.prolog posters, we assume that
they would agree that the useful utility predicates clause/5,
rename_apart/5, and unify/4 belong in the standard library; we have
therefore assumed that they've been put in a new library module
clause_db. To make this example useful, you probably want to add a
main/2 predicate, etc., but for the sake of argument we've squeezed it
into 10 lines, just to show that it can be done.

What does this prove? Not much. We think this 10-line meta-interpreter
is not very useful. Nor is the 5-line Prolog meta-interpreter. If you
want to do anything meaningful with either, then they're both going to
need plenty of work.

>Wanting to know `too much' statically is not worth its price when half
>of the code will travel on the net (to end up executed in the Java-chip
>in your TV set:-)

We certainly hope that half the code that people execute *won't* travel
on the net. The world has enough security problems without people
blindly executing code off the net.

As it happens, Java uses static analysis to avoid having to perform runtime
security checks, so static analysis and internet aware languages are not
exactly unrelated concepts :-)

We presume you are concerned with the size of the code that must traverse
the network. We recently did some measurements on this. Compiling the Mercury
compiler with Mercury produced a 2.17 Mb executable. By contrast, NU-Prolog
produced a save file containing 2.6 Mb of bytecode; the figure for SICStus
compactcode was 5.5 Mb. So just because a program is in bytecode instead
of native code does not mean that it is small. When we add a backend to
the Mercury compiler to generate bytecode for the Java Virtual Machine,
static analysis will no doubt still serve to reduce the size of the
generated code :-)

Incidentally, if you're interested in this sort of stuff, you may want
to check out LogicWeb, a technology for embedding Prolog rules in Web
pages, which is being developed by some people in our department, led
by Leon Sterling and Andrew Davison.

>I think that Prolog as an embedded logic engine in a multi-paradigm,
>net aware environment is still the way to go for the next few years.

We don't think there is now or ever will be any one single "way to
go". As always, different problems will require different solutions,
and no one single tool will solve all possible problems. It is up to
us to choose the right tool for the job.

Fergus Henderson and Zoltan Somogyi.

Fergus Henderson

May 7, 1996

The wording in the following post is mostly mine, but it is a
collaborative effort that expresses the joint views of myself,
Zoltan Somogyi and Thomas Conway.

van...@dfki.uni-sb.de (Peter Van Roy) writes:

>Could a Mercury expert give a *balanced* comparison between Mercury and
>Prolog--not focusing solely on tweaked execution speeds for a few small
>programs, nor proselytizing.

>For example, how does Mercury compare with Prolog in the following areas:

Comparing Mercury with Prolog is a little bit difficult, because the
name "Prolog" has been used for a very diverse group of languages.
We'll compare Mercury with Prolog as standardized by ISO, although since
ISO Prolog is quite limited in some respects, it is somewhat of a
"straw man", so we'll also mention what particular Prolog
implementations offer.

In many of the areas Peter mentioned, there is active development
work being currently undertaken by the (now large) Mercury team
here at the University of Melbourne. We will mention these, although
anyone deciding whether to use Mercury for a project starting immediately
should of course base their decision on what is currently available.

>1. Coroutining support (freeze and relatives)

ISO Prolog has no coroutining support at all.
Some Prolog systems such as SICStus Prolog and NU-Prolog do support
coroutining. (Different systems do it in different ways,
unfortunately, although it's probably possible to get some minimum
functionality like freeze/2 working on all systems which support
coroutining without too much difficulty.)
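
For example, with the freeze/2 found in SICStus and several other
systems, a goal is delayed until a given variable is instantiated:

?- freeze(X, (write(X), nl)), X = hello.
% The write goal runs only at the moment X is bound, printing: hello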

Mercury has no support for run-time coroutining, but it will perform
mode-directed goal reordering at compile time. This level of support
is enough to support multi-moded predicates, and to ensure soundness
(e.g. of negation), but of course it only handles a fairly limited
subset of the cases for which coroutining might be useful.

Thomas Conway is working on extending Mercury to support coroutining
and stream AND-parallelism for his PhD.

>2. Support for program transformation (term_expansion, reflective syntax)

ISO Prolog does not have term_expansion. Most Prolog systems do have
term_expansion. (There are some differences between individual systems;
for example, systems differ on whether or not a single clause can be
expanded to a list of clauses.)

Mercury does not have term_expansion, although it is easy to write
a term_expansion level source-to-source preprocessor, and very easy to
tell `mmake' (the Mercury front-end to GNU Make) to use this preprocessor.
Since this is important to some people, we'll include an example of such a
preprocessor as one of the example programs in the next Mercury release.
(For now, see <http://www.cs.mu.oz.au/~fjh/mercury/expand_terms.m>.)
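
For readers who have not used it, term_expansion/2 is a hook the clause
loader consults on each term it reads; a typical (purely illustrative)
use, in the style of SWI-Prolog and similar systems:

% If a file being loaded contains the term  point(1, 2),  the loader
% asserts  point3d(1, 2, 0)  instead.
term_expansion(point(X, Y), point3d(X, Y, 0)).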

Prolog has a reflective syntax, in the sense that the syntax for
representations of program fragments can be the same as for those
program fragments themselves. Mercury does not.
(Was that what you meant by a reflective syntax?)

Forgive me for asking such a basic question, but would you mind
explaining why you think having a reflective syntax is important?

>3. Read/write of arbitrary terms

Prolog supports this.

Mercury provides a meta-programming library for term manipulation which
will let you write arbitrary values of type `term', but there is
currently no way to read or write values of user-defined
types without explicitly writing code to do so.

We've actually implemented read/write of arbitrary types in Mercury,
but the current implementation slowed down compilation time by about
25% last time we measured it, which seemed to be a very high price to
pay, so currently that feature is not enabled. But we'd like to
support it in some future release - providing an implementation which
doesn't have such a cost in terms of compilation time is definitely on
our list of things to do.

>4. Program modification support (assert/retract and relatives)

Prolog supports run-time program modification. Mercury does not,
because *modifying* a running program cannot be declarative.
Dynamically loading code into a running program can be declarative,
and it is something that Mercury may eventually support (you can
probably do it now using the C interface and dlopen()).

However, if you can describe the underlying problem that you want to
solve using runtime program modification, we may be able to give a
better indication of how easy it is to achieve that aim in both Mercury
and Prolog. Mercury does include standard library modules for handling
sets, maps, etc.; these declarative alternatives can replace the use of
assert/retract to store in-memory databases.

>5. Higher-orderness (call and relatives)

ISO Prolog provides call/1, but there is no call/N (N>1).
However, it is possible for a user to implement call/N (N>1) using
call/1, univ, and append, e.g.

call(G0, X) :-
        G0 =.. L0,
        append(L0, [X], L),
        G =.. L,
        call(G).
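
For instance, with this definition a goal such as call(write, hello)
constructs and calls write(hello). (A user-defined version would
normally be given a different name, to avoid clashing with any built-in
call/2 the system provides.)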

Most Prolog systems implement call/1 pretty inefficiently,
and the calls to univ and append make call/N even less efficient.

Mercury provides a reasonably efficient call/1 and call/N (N>1).
The latest development version also does fairly aggressive inlining
and specialization of higher-order predicates within a module.
We think that in most cases, this will entirely eliminate the overhead
of using higher-order calls, which means that you can use them much more freely.
However, we don't do any cross-module inlining or specialization,
so this only works within modules, not across modules.

As a convenience feature, Mercury also provides lambda
expressions, so that you can write a higher-order term without
resorting to using an auxiliary predicate.

The most recent public release of Mercury has some annoying
restrictions on call/N (N < 5, all input arguments must precede all
output arguments) but these have been fixed in the latest development
version and so will be fixed in the next public release.

>6. Database query support (setof and relatives)

Both Prolog and Mercury provide support for getting all solutions to
a goal. With regard to expressiveness, I think they're about equal,
and with regard to efficiency, each will be better in different
circumstances but they probably come out roughly even overall.
A theoretical difference is that the Mercury versions are purely
logical, properly scoped predicates, whereas the Prolog versions are all
non-logical and improperly scoped, but that probably doesn't make a
huge amount of difference from a practical perspective. One important
practical difference is that Mercury doesn't allow you to depend on the
order of solutions (this is necessary to ensure soundness).
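
For instance, in standard Prolog the order of solutions is directly
observable with findall/3, whereas setof/3 already imposes an order:

?- findall(X, member(X, [b, a, b]), L).   % L = [b, a, b]  (call order, duplicates kept)
?- setof(X, member(X, [b, a, b]), L).     % L = [a, b]     (sorted, duplicates removed)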

>7. Constraint systems (rational trees, finite domains, attributed variables)

ISO Prolog provides no support for constraint handling other than
full unification (Herbrand constraints), although many Prolog and
Prolog-like systems do provide such support. Mercury does not provide
any support for constraint handling, not even Herbrand constraints.

We're working on adding support for constraints, using a small
extension to Mercury's mode system, and some hooks in the generated
code. To start with, we're interfacing to the CLP(R) constraint
solver, since there is a research group here who have been working on
CLP(R). David Jeffery, a student here, has been doing most of the
implementation work, and I understand he has got some simple examples
working, but a fair bit more work is required in the way of
documentation, testing, packaging, and so forth. After that we will
look towards interfacing to other constraint solvers on atomic types,
e.g. integer, boolean, and finite domain constraints. Providing
efficient support for more general constraint handling (e.g. Herbrand
constraints) without significantly reducing the efficiency of programs
or parts of programs that don't use them is a research problem, but one
that we would like to attack.

>8. Development environment (interactive top level, incremental compiler)

ISO Prolog doesn't mandate anything here, but pretty much all Prolog systems
provide some sort of interactive top level, and you can buy commercial
systems with GUI development environments - I haven't tried any commercial
Prolog systems recently, but no doubt there are some very good development
environments out there.

Mercury doesn't have anything much in the way of support for a GUI
development environment. (Of course, you can as always use emacs.)
It also doesn't have any support for developing GUI programs
(as opposed to developing programs using a GUI development
environment). Even the C interface doesn't help much, since in
the current public release it is one-way: Mercury can call C,
but not vice versa. However, most of the work for a bidirectional C
interface has already been done, so the next release of Mercury will
very likely support C calling Mercury.

Given that Mercury is a pretty new language, the lack of GUI development
environments is not too surprising. But there is a reasonable chance that
there will be at least one GUI environment for Mercury (other than
emacs) in the next 6-9 months. There are a couple of projects along
those lines happening here at the University of Melbourne.

Mercury doesn't have an interactive top-level either. The current
implementation comes with some support for using SICStus or NU-Prolog
to execute Mercury programs, although this support does not handle the
full Mercury language. (For example, it doesn't have any support for
functions.)

Mercury does have pretty good support for a UNIX-based development
environment using the usual UNIX tools. This support includes a
program to generate vi `tags' files for hypertext-like navigation of
source code; a build system based on GNU make that calculates
dependencies automatically, that allows user extension using the quite
powerful GNU Make syntax, and that supports parallel builds; and a
format for compilation error messages that is parsed by tools such as
emacs, error, elvis, and vim, so that the process of finding the piece
of source code corresponding to an error message can be automated;
and easy-to-use support for building stand-alone executables.

We're not sure exactly what you mean by an incremental compiler,
but we suspect the underlying wish is for a *fast* compiler with
fast turn-around time after small changes. The Mercury compiler
supports separate compilation, and the Mercury build system only
recompiles modules which have changed or which depend on an interface
that has changed -- adding comments to the interface part of a module
does not cause dependent modules to be recompiled. The Mercury
compiler is unfortunately not fast; as programmers who use it on a
day-to-day basis for developing a large (100,000 line) program, we
find its speed to be tolerable (probably about 5-10 times slower
than SICStus or NU-Prolog but faster than Aquarius Prolog). If you're
compiling smaller programs, it's probably quite OK (about 5-20s for a
typical ~400 line module on a 50 MHz SuperSPARC, no doubt less on a
fast Pentium).

We don't know of any Prolog system which has a working incremental
compilation system with a granularity finer than a single source file.
[NU-Prolog has an incremental compilation system, but it is broken -- I
stopped using it when I found that I spent much more time trying to
debug correct code (which didn't work because of bugs in the
incremental compilation) than was ever saved by using incremental
compilation.]

---------------------------------------------------------------------------

From a practical perspective, the most important difference between
Mercury and Prolog is that the two languages were designed with
different aims. Mercury is a pure logic language that was specifically
designed to support the needs of large, long-lived programs:
reliability, maintainability, and efficiency, among others.
Supporting the development of large, long-lived software products was
not a design goal of Prolog. Prolog is more oriented towards
exploratory programming. If you want a dynamically typed language that
supports imperative features and dynamic program modification and is
good for rapid prototyping of small programs, then Prolog will be a
better choice than Mercury. If you want a purely declarative language,
and one that provides the support that teams of software engineers
need to build robust, maintainable, efficient real-world applications,
then Mercury will be a better choice.

Peter's questions explored various aspects of language design which are
important for the applications in which Prolog is currently being used.
His questions did not cover many aspects of language design which
are more relevant to Mercury's intended application area. To make the
comparison more complete, perhaps an expert on Prolog (or another
language, e.g. OZ) would like to compare their language with
Mercury in areas such as the ones below.

1. What support does the language offer for improving program reliability?
What classes of bugs does the language design eliminate altogether or
catch automatically at compile time or run time?
When bugs are caught by the system, how useful are the diagnostics?

2. What support does the language (or implementation) offer for debugging?
Is it feasible to apply declarative debugging to the language?

3. What support does the language offer for program maintenance?
For example, what information can programmers easily find out about
existing code?

4. What support does the language offer for modularity?
Has the language been used for programs of more than 10,000 lines of code?
If so, how well did the language fare?

5. What support does the language offer for programmers working in teams?
Has the language been used for programs written by more than say 6
programmers? If so, how well did the language fare?

6. What support does the language offer for code reuse?

7. What support does the language offer for parallelism?
How hard is it to automatically parallelize programs written in the
language? Do the language or its idioms impose any limits on the
level of parallelism? Does the language impose any overheads on
parallel implementations?

8. Does the language have a simple declarative semantics?

--
Fergus Henderson
Zoltan Somogyi
Thomas Conway

Tom Howland

May 7, 1996

> When we add a backend to the Mercury compiler to generate bytecode
> for the Java Virtual Machine, static analysis will no doubt still
> serve to reduce the size of the generated code :-)

Wouldn't it have been great if Fergus hadn't ended that sentence with a
happy face?

Paul Tarau

May 8, 1996

In article <4mn60o$n...@mulga.cs.mu.OZ.AU>,
Fergus Henderson <f...@mundook.cs.mu.OZ.AU> writes...

More precisely, Zoltan Somogyi using Fergus Henderson's bionic hands
(please use goal reordering if I am wrong :-) writes:

>I'm afraid Paul has a quite fundamental misunderstanding about the
>history, design and aims of Mercury.

Not necessarily. Remember my Portland'95 predictions about "six months from
now people will end up comparing Mercury with Turbo Prolog" :-) Good
predictions usually hint towards fairly accurate `misunderstandings'
(internal representations) of what's going on.

>Semantically, Mercury has more in common with Goedel than with either
>Turbo-Prolog or standard Prolog.

I do not deny this. If it is indeed the case, it means that the three of
them are much closer to each other than to Prolog :-)

>In any case, we think Mercury is extremely competitive with Prolog when
>it comes to expressiveness. There are several considerations in
>Mercury's favour.

>* Mercury allows a much more natural style: there is never any need to
> introduce auxiliary predicates or to contort your code so that it meets
> the requirements for indexing on only the top-level functor of the
> first argument, there is never any need to litter the code with cuts,
> or to scratch your head wondering exactly where the cut should be
> placed. There is no danger in Mercury of accidentally writing code
> that is unsound or not steadfast.

Do you mean (among other remarks I agree with) that (a forest of) embedded
if-then-elses is more readable than plain Horn clause style?

a :- b, !, c.
a :- d.

always looked about as nice (ugly) to me as

a :-
    ( b -> c
    ; d
    ).

>* Prolog programmers typically avoid using higher-order predicates; one
> important reason for this is that they typically incur a high
> overhead. By contrast, Mercury programmers can make fairly free use
> of higher-order predicates and be confident that the implementation
> will produce efficient code.

Prolog supports full meta-programming, not only higher-order call/N.
Anyway, both can be made fast enough (see BinProlog) not to inhibit
people from using them. Look at BinProlog 5.00's findall, map, foldr,
foldl, call/N, etc. Prolog programmers who care about data abstraction
use either higher-order or full meta-programming quite extensively.
Systems like Eclipse have macro-expansion of map/3 with no loss in
performance. BinProlog is starting to have some direct compilation of
high-level constructs to C. (Frequently used linear recursion patterns
are directly mapped to while loops, bypassing traditional WAM code.)
You might be surprised to see that nrev/2 is not much slower than in
Mercury. More work is needed (a lot for a one-member Prolog team :-))
to make this really shine, but it can be done.
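
For reference, the map/3 mentioned here is conventionally defined via
call/N (a sketch; BinProlog's actual library code may differ):

% Apply P to corresponding elements of two lists.
map(_, [], []).
map(P, [X | Xs], [Y | Ys]) :-
        call(P, X, Y),
        map(P, Xs, Ys).

% With  double(X, Y) :- Y is 2 * X.  the goal
% ?- map(double, [1, 2, 3], L).  gives  L = [2, 4, 6].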

>* Mercury now supports functional syntax. (This will be included in the
> next public release.) The vast majority of Prolog systems do not.

That's a major improvement. And if most well-moded predicates have 1-2
modes, it means that modes are not needed anymore. If you rewrite moded
predicates functionally, then Mercury becomes unnecessary: Prolog +
functions (well-typed, well-moded) is just fine :-)

More seriously, I think this is something all Prologs should do. And
the best way to achieve it properly is to cannibalize a good Haskell
implementation (Gofer?). Even better, it is just plain multi-paradigm
programming, i.e. using the right tool for the right thing and combining
them. By the way, in this context, questions like how Mercury's
functional subset compares in expressiveness/flexibility with Haskell
(type inference, lazy evaluation, overloading etc.) become
quite natural.

>* Mercury has been designed to support high-level optimizations and
> code transformations, such as deforestation, constraint propagation,
> as well as alternative execution strategies, such as memoing or even
> bottom-up execution. Eventually, implementations will exploit these
> opportunities, freeing programmers from having to worry so much about
> low-level details. The Mercury compiler already performs automatic
> inlining (unfolding), specialization of higher-order predicates, and
> elimination of unused arguments.

So do some Prologs around. The SICStus + Mixtus (a partial evaluator
for full Prolog) combination, which I tried out a few years ago, already
did at the source level probably a lot of what you do in Mercury.
BinProlog 5.00's programmer-controlled "compile-time execution" can be
used to do this in a very precise way, only when you think it is really
useful. On the other hand, the variable execution strategies look
interesting, and obviously Mercury (as well as definite clause
Prolog!) is much happier with them than Prolog with side effects.

>* Mercury's unique mode system supports a declarative means of performing
> destructive updates (although the support for unique modes in the
> current implementation is not yet complete). Prolog does provide
> assert and retract, which can be used for this purpose, but they
> are neither elegant nor efficient, and of course they are totally
> non-logical.

`Destructive updates' can be done as an internal compiler optimization
in any logic language. This has little to do with the Mercury vs.
Prolog language design issue. Not having dynamic code is a virtue easy
to have: you just avoid having it :-)

I started without assert and retract in BinProlog when it was a
`pure and declarative logic language'. Currently they are back (of
course), as in any Prolog around. I still think it is wise not to use
them as such, and to build, for instance, an object-oriented layer
around them. Or to use implication instead. On the other hand, keeping
state in hundreds of useless arguments has no practical excuse.

I find that the Mercury revolution is regressive in enforcing
traditional argument-based handling of state information. Visual noise
is as bad as excessively hidden information.
They both make you see less.
The _real_ alternative is an object-oriented style, and programmers
have voted with their legs (i.e. fingers) on that issue.
My prediction is that Mercury will have them in some form within
the next six months. Of course, you can still change destiny,
at your own risk :-)

Less `spicy' dynamic code constructs have been around in various
Prologs. For handling backtrackable state, BinProlog has
assumptions. DCGs become an instance of them: there is no need to
translate as with the usual -->, and you get multiple DCG streams as
well. Implications (as in Lambda-Prolog and Lolli) also help avoid
using assert/retract most of the time.

Another myth around is that dynamic code is necessarily slow. The
nice thing I found out recently about asserted/assumed code and
implication is how well Prolog reacts to dynamic recompilation.
Currently it's in BinProlog 5.0 (only for asserted code). In
practical terms, the user-level distinction between
native/byte-code/interpreted code just vanishes, while performance is
usually within a factor of 2 of your best implementation method. I
think this is easy to replicate in any Prolog compiler and would be
highly beneficial for commercial implementations.

Mercury badly suffers from slow compilation. I do not know if this is
by necessity (design :-)) or by accident. Anyway, one way to get
around this is to have a quick-loading interpreter, and to combine it
with dynamic recompilation. Initially, code is loaded into memory with
no concern for speed. The more you use the code, the more likely it
becomes that it will get promoted up the `speed hierarchy' to a faster
implementation (interpreted->bytecode->native). With this you get
quick load _and_ amortized high performance.
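
A toy sketch of the promotion idea (all names here are hypothetical
illustrations, not BinProlog's actual API):

:- dynamic call_count/2.

% Count calls to Pred; once a threshold is reached, hand the
% predicate over to a (stand-in) promotion step.
note_call(Pred) :-
        (   retract(call_count(Pred, N0))
        ->  N is N0 + 1
        ;   N = 1
        ),
        assert(call_count(Pred, N)),
        (   N =:= 1000
        ->  promote(Pred)
        ;   true
        ).

% Stand-in for recompilation to a faster representation.
promote(Pred) :-
        write(promoting(Pred)), nl.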

BinProlog 5.00 does it one predicate at a time by computing `update'
vs. `call' statistics on the fly. Moving from one
representation to the other is child's play.
Without Prolog's ability to do full meta-programming it would have
taken a year. It took about one week!

I guess that some of your design decisions (which, as I said, _do_
sacrifice expressiveness for speed) will make this unusually hard to
do in Mercury. I still think it is worth doing, but it will
require a lot of work.

> However, we do appreciate that writing these declarations can interfere
> with exploratory programming, and so we have been working on getting the
> compiler to infer them, for predicates or functions local to a module.
> The latest development version of the Mercury compiler performs type
> inference as well as determinism inference.

Definitely a positive development! I am looking forward to seeing
the release.

>* Mercury doesn't allow code that is not well-typed.

>* Mercury doesn't allow code that is not well-moded.

Looks like customs officers who stop people with beards and jeans
while letting 90% of the drug dealers (usually either smiling old
ladies or people with ties :-)) go through.

More later, unless some other Prologers find the time to say what
they think :-)


Paul Tarau


P.S. Well, let's handle one more, 'cause of its story:

>That's not the case. Of course, you can't use the non-ground
>representation, since it is non-logical. But using the non-ground
>representation is a bad idea anyway. Yesterday, as it happens, I sent
>off two bug reports, one for SWI-Prolog and one for BinProlog; both of
>them were for bugs in expand_term that are directly attributable to the
>use of the non-ground representation. The implementors of SWI-Prolog
>and BinProlog are extremely competent Prolog programmers; if they can't
>get it right, can we expect anyone else to?

It was about what `expand_term(A,B)' should do with A an uninstantiated
variable. SWI crashed; BinProlog behaved as if

A were X-->Y.

SICStus, Aquarius and C-Prolog returned the more reasonable (I agree)
A=B, as if

A were meant to be (X:-Y).

During the intersection of Australian night and Canadian day, Fergus
and I exchanged a few messages. The arrival of Australian
morning :-) left us with the conclusion that a run-time error is the
most sensible thing to do. I still believe that this is unrelated
to the ground vs. non-ground representation. It simply means that
translation should either suspend until we know more about A in
expand_term(A,B), or simply return an instantiation error.
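
(Something like the following wrapper, with an ISO-style error term,
would be enough:)

   safe_expand_term(A, B) :-
       ( var(A) ->
           throw(error(instantiation_error, expand_term/2))
       ;   expand_term(A, B)
       ).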
By the way, Fergus told me that his interest in this bug was
mostly philosophical. Mine too.

Thomas Lindgren

unread,
May 8, 1996, 3:00:00 AM5/8/96
to

In article <4mn8jk$p...@mulga.cs.mu.OZ.AU> f...@mundook.cs.mu.OZ.AU (Fergus Henderson) writes:

> 7. What support does the language offer for parallelism?
> How hard is it to automatically parallelize programs written in the
> language? Do the language or its idioms impose any limits on the
> level of parallelism? Does the language impose any overheads on
> parallel implementations?

Interesting questions. I'll comment on this one a bit, since I've
worked on a parallel Prolog.

First, support for parallelism. For reasons expounded on below, the
main support needed for parallelism is probably expressing (a) loops
and (b) computations that do not share data in difficult ways.

I think automatic parallelization is a red herring except in some
very special contexts. As far as I can tell, only Fortran programs
have been automatically parallelized with any _great_ success,
and it usually takes interactive tweaking of the code to get any
performance. The reason for this is that sequential algorithms
do not parallelize well. (Even today's complex dataflow engines
running inside our workstations struggle mightily to run 2 or 3
instructions simultaneously on a sustained basis.)

The experiences of the FGCS project and related ones imply similar
conclusions. When GHC, KL1 and all the other languages were used, with
the promise of implicit and unrestricted parallelism -- run every
single goal in parallel, if need be! -- it turned out that most processes
spent their time suspended UNLESS you wrote your algorithms so that
they didn't. Note that GHC and KL1 are about as pure as you get. No
nasty var/1 or assert/retract. Ueda wrote an interesting paper on
this towards the end, proposing message-oriented scheduling amongst
other things.

What idioms impose limits on parallelism? There are two fundamental
ones: data- and control dependences. You can't read data before
it's produced, and you must in all interesting programs decide what
to do next at some point. You can partly ignore these in logic
programming languages: some data dependences disappear when you use
unification, and if you have the hardware to throw at it, you can
ignore control dependences. Just duplicate your machine at every
point you make a decision and run both (or all) paths.
(The reason that Fortran parallelizes comparatively well is that
the successful algorithms, that walk over arrays and do the same
thing to a lot of things, have few data or control dependences to
begin with.)
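
(A schematic Prolog illustration -- p/1, q/1 and q/2 are placeholders:)

   % data dependence: q/2 cannot run before p/1 has bound X
   pipeline(Y) :- p(X), q(X, Y).

   % no shared variables: the two goals are candidates for
   % independent AND-parallel execution
   indep(X, Y) :- p(X), q(Y).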

For instance, AND-parallel implementations such as &-Prolog, Reform
Prolog or Shen's DASWAM all to some extent try to overcome these
limits, by removing control dependences (speculative execution of
AND-parallel goals) and data dependences (by weeding out false
dependences as in &-Prolog and Reform Prolog, to varying degrees, or
by speculative data accesses as in DASWAM). OR-parallel
implementations tend to have few data dependences, but still have a
form of control dependences: someone must create the choice points
that will be searched, and this is typically done sequentially.
(Perhaps a topic for future research?)

In general, the problem persists even if one runs pure logic programs.
Perhaps Mercury's determinism declarations could help in reducing
speculative work (since one can see at compile-time whether a goal may
fail), but that might not be enough. I think the sequential access of
data structures such as lists will be the main impediment in the
future. (Reform Prolog attempts to bribe that particular bugbear by
paying lots of attention to list code, e.g., converting lists into
arrays at runtime, but the problem needs a lot more attention.)

Finally, language overheads on parallel implementations.

The simple answer is that any language which is not Fortran or SQL
imposes overheads on parallel execution, and even those two do it
sometimes. Nowadays, coarse-grain parallel machines are built for two
purposes: running scientific codes and answering database queries.
(There's a market for departmental servers as well, if one wants to
optimize file systems and similar things.) Since logic programmers
usually don't do scientific computations, perhaps XSB-Prolog is the
way to go? (There's been interesting work at ECRC on parallel
constraint solving in medicine, but I'm not enough into constraints to
tell you more.) Fine-grain machines, such as current workstations,
probably respond better to lower-level techniques than goal-level
parallelism.

There's lots more to be said here, but since I haven't thought it
through, I shan't. If you plan to work on parallelism, look at FGCS and
particularly coarse-grain dataflow at Berkeley, Illinois
Urbana-Champaign (e.g., Concert) and MIT.

This got a bit longer than I thought. I hope someone read it to
the end :-)

ti...@insect.sd.monash.edu.au

unread,
May 9, 1996, 3:00:00 AM5/9/96
to

In article <4mn60o$n...@mulga.cs.mu.OZ.AU>, Fergus Henderson wrote:

>
>Our goal was to design a logic programming language that could
>satisfy our two main aims: a language that is purely declarative, and
>yet one that is a genuinely practical "real-world" tool which provides
>real support for programming in the large. The wonderfully felicitous
>thing about it was that these two aims led to exactly the same design
>decisions.

i would like to explore this "real-world" issue. the main thing i think
we need to turn prolog-esque systems into real-world systems is simple
abstraction/information-hiding mechanisms. logic programming encourages a
style of programming where we have to match explicitly on the
structures we are processing. i.e. to use _it_ we need to know
the details of the internals of _it_. this is a **bad** thing,
particularly when those structures change frequently or they are
complex.

what does mercury offer in this regard?

--
Dr. Tim Menzies |
ti...@insect.sd.monash.edu.au | Help! I'm modeming...
,-_|\ www.sd.monash.edu.au/~timm | and I can't hang up!!!
/ \ Dept. Software Development, |
\_,-._/ Monash University, Caulfield, |
v Melbourne, Australia, 3145. |
+61-3-9903-1033(p)9903-1077(f)|


Tyson Richard DOWD

unread,
May 9, 1996, 3:00:00 AM5/9/96
to

ti...@insect.sd.monash.edu.au () writes:

>In article <4mn60o$n...@mulga.cs.mu.OZ.AU>, Fergus Henderson wrote:

>>
>>Our goal was to design a logic programming language that could
>>satisfy our two main aims: a language that is purely declarative, and
>>yet one that is a genuinely practical "real-world" tool which provides
>>real support for programming in the large. The wonderfully felicitous
>>thing about it was that these two aims led to exactly the same design
>>decisions.

>i would like to explore this "real-world" issue. the main thing i think
>we need to turn prolog-esque systems into real-world systems is simple
>abstraction/information-hiding mechanisms. logic programming encourages a
>style of programming where we have to match explicitly on the
>structures we are processing. i.e. to use _it_ we need to know
>the details of the internals of _it_. this is a **bad** thing,
>particularly when those structures change frequently or they are
>complex.

>what does mercury offer in this regard?

Mercury has a module system, allowing the use of abstract types.

This allows details of the implementation to be hidden from modules that
don't need to see them. A common programming style seen in Mercury is
to have a data type, and a set of 'access predicates' to return
information stored in (or computable from) that data type.
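
A minimal sketch of the style (take the details of the syntax as
indicative rather than exact):

   :- module employee.
   :- interface.
   :- type employee.                       % abstract: no structure exported
   :- pred employee_name(employee, string).
   :- mode employee_name(in, out) is det.
   :- implementation.
   :- type employee ---> emp(string, int).
   employee_name(emp(Name, _), Name).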

I believe this answers your concern.

Of course, to use a module with an abstract type, you only need to read
the interface section of the module. The types and modes of all the
exported predicates are visible in the interface (of course, any
good programmer will put lots of comments there too, but usually the
types and modes are enough information by themselves). The types and
modes should be _checked_ by the compiler. Lots of languages let you
get away without this step (some of them are even used in the real world).

This, too, is a big benefit for real-world programming.

--
Tyson Dowd # Another great idea from the
# people who brought you
t...@mundil.cs.mu.oz.au # Beer Milkshakes!
http://www.cs.mu.oz.au/~trd # Confidence --- Red Dwarf

Fernando Pereira

unread,
May 9, 1996, 3:00:00 AM5/9/96
to

In article <4mn8jk$p...@mulga.cs.mu.OZ.AU>, f...@mundook.cs.mu.OZ.AU (Fergus
Henderson) wrote:
> [ very interesting comparison of Mercury with Prolog, followed by
> the following (abbreviated here) questions ]

> 1. What support does the language offer for improving program reliability?
>
> 2. What support does the language (or implementation) offer for debugging?
>
> 3. What support does the language offer for program maintenance?
>
> 4. What support does the language offer for modularity?
>
> 5. What support does the language offer for programmers working in teams?
>
> 6. What support does the language offer for code reuse?
>
> 7. What support does the language offer for parallelism?
>
> 8. Does the language have a simple declarative semantics?
These are all good questions. But I would like to add another one:

9. How will the answers to the previous questions, the implementation, the
applications and the organizations that support it help the language find
fertile soil and prosper?

To my mind, that is the *main* question for programming language
researchers that want to see their work change the world of programming
for the better. The problem they face is simple: the ecosystem for
languages is extremely crowded, and new languages have great difficulty in
finding a vacant niche. Highly specialized languages are at risk of
finding their niche disappear or be invaded by predatory languages or
software products (cf. island species). The disturbed conditions typical
of newly settled territory (eg. the Web) are attractive to "weed"
languages such as Perl. The main component of the language ecosystem is
its carrier, the programmer. Thus the social, economic and educational
forces that affect the selection, training and employment of programmers
have great influence on the survival and prosperity of languages. In
particular, languages that demand of the majority of programmers and
organizations more up-front investment in intellectual development and new
practices are at a serious disadvantage (cf. the history of functional and
logic languages with their foundations in deep mathematical notions).
Given all this, what is our best guess for the future of new
scientifically designed languages? Will they be doomed to the same fate as
artificial human languages?

These ideas, which are just a brief sketch of something that has been
bothering me for years, are prompted too by my repeated frustration with
advanced languages as a vehicle for demanding applications such as speech
and pattern recognition and machine learning from real-world data. We
always end up caving in to C or C++, not for petty reasons but because
efficiency, portability, transferability to developers and implementation
stability end up dominating the discourse when the goal is to get
something difficult done rather than demonstrating a new language.

So, how will Mercury measure up to natural selection? Does it have a
chance? Do other modern logic-based languages have a worse or better
chance? Or is the Cambrian explosion of languages behind us and basic
successful language designs fixed for good?

--
Fernando Pereira
2B-441, AT&T Bell Laboratories
600 Mountain Ave, Murray Hill, NJ 07974-0636
per...@research.att.com

Olivier Ridoux

unread,
May 9, 1996, 3:00:00 AM5/9/96
to

In article <pereira-0805...@greeley.research.att.com>, per...@research.att.com (Fernando Pereira) writes:
...

|> 9. How will the answers to the previous questions, the implementation, the
|> applications and the organizations that support it help the language find
|> fertile soil and prosper?
|>
|> To my mind, that is the *main* question for programming language
|> researchers that want to see their work change the world of programming
|> for the better. The problem they face is simple: the ecosystem for

I think that there is a more subtle way of seeing one's work change the world
of programming than having one's language and system being accepted. It is
to see one's ideas dissolve (see note *) in the computing milieu and be accepted
in other languages and systems. I give three examples. During around 20 years
of research on automatic memory management for pointer-free programming
languages, the main echo from the computing milieu was that a real-life
system should not rely on such things. Now, Java appears and suddenly gains
wide acceptance. Consider also when the first papers on relational databases
were published, and when RDBMS became trendy. Finally, when IBM started
distributing machines with their version of UNIX (~ 1993), they had a large
advertising campaign in newspapers (in France, at least) with several slogans,
including one that went as follows: "20 years ago, academics invented UNIX; now
IBM is selling it". Now, consider what the 20-year-old UNIX and IBM's UNIX
share: ideas, merely ideas. Note that the 20-year champions of the ideas should
not expect any form of gratitude; instead they should merely be happy. Note also
that nobody is to be blamed for the delay.

So, for me the real question is which ideas of Logic Programming have already
dissolved in the milieu, and which ideas of the new forms of LP will dissolve.

This should not discourage anyone from studying programming languages and
working (hard) on their implementation. This is necessary for giving life to
ideas. It is like a prototype car; nothing will remain in anybody's car,
except ideas, and shapes. Just as a prototype car may seem excessive because
it is centered on a few ideas like speed, or smart devices, or ecology, your
prototypes look excessive to the computing milieu. However, just as anybody
expects a better car that goes faster, cheaper, safer, I think (am I naive?)
that the computing milieu expects something from this research.

Olivier


(note *) I understand that in English "to dissolve" has a more negative
connotation than what I mean, but I could not find better.

ti...@insect.sd.monash.edu.au

unread,
May 9, 1996, 3:00:00 AM5/9/96
to

In article <4ms7hn$1...@mulga.cs.mu.OZ.AU>, Tyson Richard DOWD wrote:
>ti...@insect.sd.monash.edu.au () writes:
>
>>In article <4mn60o$n...@mulga.cs.mu.OZ.AU>, Fergus Henderson wrote:
>
>>>
>>>Our goal was to design a logic programming language with that could
>>>satisfy our two main aims: a language that is purely declarative, and
>>>yet one that is a genuinely practical "real-world" tool which provides
>>>real support for programming in the large. The wonderfully felicitous
>>>thing about it was that these two aims led to exactly the same design
>>>decisions.
>
>>i would like to explore this "real-world" issue. the main thing i think
>>we need to turn prolog-esque systems into real-world systems are simple
>>abstract/information hiding mechanisms. logic programming encourages a
>>style of programming where we have to match explictedly on the
>>structures we are processing. i.e. to use _it_ we need to need to know
>>the details of the internals of _it_. this is a **bad** thing,
>>particularly when those structures change frequently or they are
>>complex.
>
>>what does mercury offer in this regard?
>
>Mercury has a module system, allowing the use of abstract types.
>
>This allows detail of implementation to be hidden from modules that
>don't need to see them. A common programming style seen in Mercury is
>to have a data type, and a set of 'access predicates' to return
>information stored in (or computable from) that data type.
>
>I believe this answers your concern.

by accessors, do you mean things like:

employee(name,emp(OldName,X,Y),OldName,emp(NewName,X,Y),NewName).

which we can use to change the value of the Name field of employee
while copying over the other variables?

if so, then that's how it's done in prolog too, rite? wrong?

--
Dr. Tim Menzies | "The Americans have need of
ti...@insect.sd.monash.edu.au | the telephone, but we do
,-_|\ www.sd.monash.edu.au/~timm | not. We have plenty of
/ \ Dept. Software Development, | messenger boys."
\_,-._/ Monash University, Caulfield, | -- Sir William Preece, chief
v Melbourne, Australia, 3145. | engineer of the British
+61-3-9903-1033(p)9903-1077(f)| Post Office, 1876


Fergus Henderson

unread,
May 9, 1996, 3:00:00 AM5/9/96
to

ta...@clement.info.umoncton.ca (Paul Tarau) writes:

>Fergus Henderson <f...@mundook.cs.mu.OZ.AU> writes...
>
>>I'm afraid Paul has a quite fundamental misunderstanding about the
>>history,design and aims of Mercury.
>
>Not necessarily. Remember my Portland'95 predictions about "six month from
>now people will end up comparing Mercury with Turbo Prolog" :-)

Certainly Mercury and Turbo Prolog have enough in common that such
comparisons are useful; I never doubted that people would compare the
two languages. But there are many important differences between the
two languages, and the history, design process, and most importantly
the aims of the two languages are quite different. In the case
of Turbo Prolog, both in name and in character the language seems to be
a Prolog variant that has been "tweaked" in various ways primarily to
improve efficiency. In the case of Mercury, efficiency is important,
but it's only one of the motivations behind the language design --
software engineering and declarativeness are at least as important.

>>In any case, we think Mercury is extremely competitive with Prolog when
>>it comes to expressiveness. There are several considerations in
>>Mercury's favor.
>
>>* Mercury allows a much more natural style: there is never any need to
>> introduce auxiliary predicates or to contort your code so that meets
>> the requirements for indexing on only the top-level functor of the
>> first argument, there is never any need to litter the code with cuts,
>> or to scratch your head wondering exactly where the cut should be
>> placed. There is no danger in Mercury of accidentally writing code
>> that is unsound or not steadfast.
>
>Do you mean (among other remarks I agree with) that (a forest of) embedded
>if-then-elses is more readable then plain Horn Clause style?

The short answer: yes.

Provided you use them appropriately, and don't overdo it as far as
nesting goes, they are more readable, IMHO. Of course, if you overdo
the nesting, whether with nested disjunctions or nested if-then-elses
(or indeed any nested construct) it can decrease readability. Any
control structure that doesn't fit on a single screen of code is asking
for trouble. But long sequences of clauses with cuts can cause the
same sorts of problems, because the clauses can't be understood in
isolation.

It comes down to using different notations for different concepts,
and saying what you mean.

>a:-b,!,c.
>a:-d.
>
>always looked about as nice (ugly) to me as
>
>a:-
> b->c
>; d.

Well, if you were to indent all your if-then-elses like that, it
*would* be ugly ;-)

There are several major problems with using cut rather than if-then-else.
The first problem is that in the clause

a :- d.

it is not at all clear that the intended semantics are that `a' is true
if `b' is false and `d' is true. You have to look at the previous
clause. This means that clauses are no longer independent; the
programmer needs to take care with the order of the clauses, and with
the complicated relationship between tests performed in one clause and
the intended semantics of the next. This problem doesn't show up
too much in trivial examples like the above, but does make itself
more apparent in slightly more complicated examples.

The second problem with using cut rather than if-then-else is that
it encourages people to write non-steadfast code. I've explained
this problem here before, so I won't do that again.

But the most severe problem with using cut rather than if-then-else
is that the semantics of clauses with cuts depends on the mode in which
they are invoked. This, combined with the usual practice in Prolog of
not writing down the intended modes of a predicate, can make it extremely
difficult to figure out the intended semantics of code written using cuts,
or what the intended effect of each cut was. Often the reader has
to do global mode inference to figure out what a certain bit of code
means.

(These last two problems only show up in examples which include
variables, which your simple example didn't.)

>>* Prolog programmers typically avoid using higher-order predicates; one
>> important reason for this is that they typically incur a high
>> overhead. By contrast, Mercury programmers can make fairly free use
>> of higher-order predicates and be confident that the implementation
>> will produce efficient code.
>
>Prolog supports full meta-programming, not only higher-order call/N.

Meta-programming is really an entirely different subject from
higher-order programming. You can do meta-programming in any
Turing-complete programming language, even with Turing machines if you
wish. (Consider, for example, the Universal Turing machine, whose
input includes an encoding of another Turing machine.) So when you say
"Prolog supports full meta-programming", what you really mean is just
that Prolog supports the use of a particularly concise syntax for
meta-programming. What you fail to mention is that this concise syntax
has semantics which are highly suspect, to say the least, in ways which
have been documented in great detail by researchers in logic
programming over the last decade or so, and that these faulty semantics
continue to cause problems in Prolog implementations, both straight-out
bugs (e.g. segmentation violations!) and more difficult
questions of semantic interpretation which simply do not arise when
using methods for meta-programming that have sound theoretical foundations.
These semantic problems have caused and will continue to cause
difficulty for even the best of Prolog programmers.

>Anyway, both can be made fast enough (see BinProlog) not to inhibit
>people from using them.

I agree that much of the problem with higher-order programming in
Prolog has been poor implementations; in general, Prolog implementors
haven't deemed higher-order programming important enough to merit
much special attention, and I'm sure things could be improved.

>>* Mercury has been designed to support high-level optimizations and
>> code transformations, such as deforestation, constraint propagation,
>> as well as alternative execution strategies, such as memoing or even
>> bottom-up execution. Eventually, implementations will exploit these
>> opportunities, freeing programmers from having to worry so much about
>> low-level details. The Mercury compiler already performs automatic
>> inlining (unfolding), specialization of higher-order predicates, and
>> elimination of unused arguments.
>
>So do some Prologs around.

True, and I wouldn't be too surprised if some C compilers do so too.
The difference is that Mercury's simple declarative semantics make it
a lot easier to do for Mercury.

>>* Mercury's unique mode system supports a declarative means of performing
>> destructive updates (although the support for unique modes in the
>> current implementation is not yet complete). Prolog does provide
>> assert and retract, which can be used for this purpose, but they
>> are neither elegant nor efficient, and of course they are totally
>> non-logical.
>
>`Destructive updates' can be done as an internal compiler optimization
> in any logic language.

That's true, but how many of them do it?

The trouble with optimizations is it is difficult to rely on them,
particularly when you are talking about complicated optimizations such
as introducing destructive update. A very small, seemingly innocuous
change to a program can prevent a compiler from being able to apply
such an optimization. There are times when not being able to apply
destructive update optimization results in unacceptable performance;
for example, when using arrays, failure to apply destructive update
optimization can change both time and space requirements from linear to
quadratic. If programmers are to make use of arrays, hash tables, and
so forth, and their code is to be reliable and maintainable, there has
to be a more robust method than "trust the optimizer".

Of course, there are also situations in which you don't need or want to
worry about details like storage reuse. In those situations, you would
not use unique mode declarations, and it is up to the compiler to infer
what it can. In such situations, Mercury's ordinary mode and
determinism systems will definitely make the analysis easier and
probably make it more precise, but you're right that implementations of
other logic languages could do it too.

So getting back to expressiveness, in the inference situations described in
the immediately preceding paragraph, neither language has an expressiveness
advantage. It's the situations for which inference is not satisfactory
that Mercury has an expressiveness advantage: in Mercury, you can write
declarative code, and ensure destructive update by using unique mode
declarations, whereas in Prolog you have to resort to non-declarative
code using assert/retract or setarg or something similarly awful.
In the case of I/O, in Prolog you have no choice but to use non-declarative
side-effect based I/O.

(I should be using future tense in the above paragraphs, since support
for unique modes and destructive update is not yet complete.)
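
(A sketch of what the array case looks like; given the caveat above,
take the mode and module names as indicative:)

   :- import_module int, array.

   :- pred bump(int, array(int), array(int)).
   :- mode bump(in, array_di, array_uo) is det.

   % the di/uo modes promise the compiler that Array0 is dead after
   % the call, so the update can be done in place
   bump(I, Array0, Array) :-
       array__lookup(Array0, I, X),
       array__set(Array0, I, X + 1, Array).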


... to be continued ;-)

Fergus Henderson

unread,
May 9, 1996, 3:00:00 AM5/9/96
to

t...@mundil.cs.mu.OZ.AU (Tyson Richard DOWD) writes:

>ti...@insect.sd.monash.edu.au () writes:
>
>>i would like to explore this "real-world" issue. the main thing i think
>>we need to turn prolog-esque systems into real-world systems is simple
>>abstraction/information-hiding mechanisms. logic programming encourages a
>>style of programming where we have to match explicitly on the
>>structures we are processing. i.e. to use _it_ we need to know
>>the details of the internals of _it_. this is a **bad** thing,
>>particularly when those structures change frequently or they are
>>complex.
>
>>what does mercury offer in this regard?
>
>Mercury has a module system, allowing the use of abstract types.
>
>This allows detail of implementation to be hidden from modules that
>don't need to see them. A common programming style seen in Mercury is
>to have a data type, and a set of 'access predicates' to return
>information stored in (or computable from) that data type.

Yes. In addition, the recently-added support for functional syntax is
very important in this regard. Mercury allows functions to be run
in reverse modes, so you can define an abstract data type and some
functions in the interface of a module, and then code in other modules
can use those functions for "pattern-matching" without depending on the
implementation details.
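
A sketch of the idea (declarations in the style of the functional
syntax just mentioned; stack/1 and push/2 are invented for the example):

   :- type stack(T).                           % abstract
   :- func push(T, stack(T)) = stack(T).
   :- mode push(in, in) = out is det.          % forward: construct
   :- mode push(out, out) = in is semidet.     % reverse: deconstruct

   % client code can "pattern-match" without seeing the representation:
   % ( S = push(Top, Rest) -> ... use Top and Rest ... ; ... )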

Fergus Henderson

unread,
May 9, 1996, 3:00:00 AM5/9/96
to

ta...@clement.info.umoncton.ca (Paul Tarau) writes:

>Fergus Henderson <f...@mundook.cs.mu.OZ.AU> writes...
>
>>* Mercury now supports functional syntax. (This will be included in the
>> next public release.) The vast majority of Prolog systems do not.
>
>That's a major improvement. And, if most well-moded predicates have 1-2
>modes it means that modes are not needed anymore. If you rewrite moded
>predicates functionally, then Mercury becomes unnecessary: Prolog +
>functions (well-typed, well-moded) is just fine :-)

Nothing is *necessary*, but multi-moded predicates - and functions! -
are one of the nice features of Mercury which I wouldn't like to give
up. As I pointed out in another article, multi-moded functions are a
nice tool for data abstraction.

Did you mean *necessary for efficiency*?

>More seriously, I think this is something all Prologs should do. And
>the best way to achieve it properly is to cannibalize a good Haskell
>implementation (Gofer?).

I think the best way to achieve it is by flattening, as is done in
NUE-Prolog and Mercury.
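
(Schematically, treating rev/1 and append/2 as functions:)

   % functional notation:
   %     Zs = rev(append(Xs, Ys))
   % flattens into relational form, one fresh variable per nested
   % call, each function becoming a predicate with an extra result
   % argument:
   %     append(Xs, Ys, T), rev(T, Zs)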

>Even better, it is just plain multi-paradigm
>programming i.e. using the right tool for the right thing and combine
>them. By the way, in this context, questions like: how Mercury's
>functional subset compares in expressiveness/flexibility with Haskell
>(type inference, lazy evaluation, overloading etc.), become
>quite natural.

You're not the first to compare Mercury with Haskell. Don Smith wrote
a paper "How to translate moded Prolog programs into Haskell" which
comments on the close semantic relationship between Mercury programs
and programs in functional languages.

Mercury (0.6) supports type inference. The language definition allows,
but does not require, lazy evalation; the current implementation uses
strict evaluation. Comparison of overloading is complicated; Haskell
supports some things Mercury doesn't, and vice versa.

Tyson Richard DOWD

unread,
May 10, 1996, 3:00:00 AM5/10/96
to

ti...@insect.sd.monash.edu.au () writes:

>In article <4ms7hn$1...@mulga.cs.mu.OZ.AU>, Tyson Richard DOWD wrote:
>>ti...@insect.sd.monash.edu.au () writes:
>>
>>>In article <4mn60o$n...@mulga.cs.mu.OZ.AU>, Fergus Henderson wrote:
>>
>>>>

>>>>Our goal was to design a logic programming language that could
>>>>satisfy our two main aims: a language that is purely declarative, and
>>>>yet one that is a genuinely practical "real-world" tool which provides
>>>>real support for programming in the large. The wonderfully felicitous
>>>>thing about it was that these two aims led to exactly the same design
>>>>decisions.
>>

>>>i would like to explore this "real-world" issue. the main thing i think
>>>we need to turn prolog-esque systems into real-world systems is simple
>>>abstraction/information-hiding mechanisms. logic programming encourages a
>>>style of programming where we have to match explicitly on the
>>>structures we are processing. i.e. to use _it_ we need to know
>>>the details of the internals of _it_. this is a **bad** thing,
>>>particularly when those structures change frequently or they are
>>>complex.
>>
>>>what does mercury offer in this regard?
>>
>>Mercury has a module system, allowing the use of abstract types.
>>
>>This allows detail of implementation to be hidden from modules that
>>don't need to see them. A common programming style seen in Mercury is
>>to have a data type, and a set of 'access predicates' to return
>>information stored in (or computable from) that data type.
>>

>>I believe this answers your concern.

>by accessors, do you mean things like:

>employee(name,emp(OldName,X,Y),OldName,emp(NewName,X,Y),NewName).

>which we can use to change the value of the Name field of employee
>while copying over the other variables?

>if so, then that's hows its done in prolog too, rite? wrong?

That's essentially the same as you would do in Mercury, but
I believe that the Prolog example above would then go on to have

employee(salary,emp(X,OldSalary,Y),OldSalary,emp(X,NewSalary,Y),NewSalary).

-- which is generally not well typed (unless the type of Salary is the same
as the type of Name)...

Mercury would tend to use:

employee__set_salary(NewSalary, emp(X, _Old, Y), emp(X, NewSalary, Y)).

The difference is, with Prolog you can break the barrier at any time, saying
E = emp(X,Y,Z)
whereas with Mercury, if you do this to an abstract type (outside of the
module it is implemented in) you'll get compile-time errors.

Any well-disciplined programmer can do information hiding in most languages,
but when programming in teams, you want to have a mechanism to enforce it.
Of course, I'd expect some Prolog variants provide features for doing this.

Paul Tarau

unread,
May 10, 1996, 3:00:00 AM5/10/96
to

In article <4mt5gr$k...@mulga.cs.mu.OZ.AU>,
Fergus Henderson <f...@mundook.cs.mu.OZ.AU> wrote:
>>
>>Do you mean (among other remarks I agree with) that (a forest of) embedded
>>if-then-elses is more readable then plain Horn Clause style?
>
>The short answer: yes.
>
On the `scale of declarativeness' I have always put if-then-else
close to the bottom. The same for negation. Good Prolog programmers
never need them, anyway :-)
Despite the problems with "!" and its obscure interaction with
moding, I still value the ability of reading Horn Clauses
and approximate "!" as not being there (i.e. as "true").
That's what programming in logic means: programming using
logic formulas. When someone tells me `let's use if-then-else to get
more declarative', it sounds like: let's eat a cheese-cake made
by Weight Watchers! I see a good intent, however :-)


>>
>>Prolog supports full meta-programming, not only higher-order call/N.
>
>Meta-programming is really an entirely different subject from
>higher-order programming. You can do meta-programming in any
>Turing-complete programming language, even with Turing machines if you
>wish.

As you definitely know, we all know this. The problem with saying it
is that it implies that we had not :-)
The real point is, of course, practical expressiveness.
It is interesting to know that programs written by people
in Inductive Logic Programming (S. Muggleton's PROGOL)
use some fancy complexity measures to throw away unpleasantly
complicated candidate clauses. Maybe humans should do the same...

By the way, as you might know, reflection in Prolog is not as bad as
in Turing machines. On the other hand, when I wrote my first
10 Prolog meta-interpreters in Turbo Prolog (more than a few years ago)
I have taken a look at various ground representations :-).
Well, their first 10 lines looked the same as the one
posted by Fergus. I have put the other 500 or so in libraries too :-)
If something is the same in Mercury and Visual Prolog, then
this is really it!
As advocates of the most obvious cure for prostate cancer
would say "we can live without it" :-). Haskell and ML have
given away Lisp's eval. I think, they have lost very little :-)
(Because Lisp's representation of programs as data was no fun.)
On the other hand, meta-programming in Prolog is not only useful but
it's fun too. Because of its non-ground representation! It really
adds practical expressiveness. And declarativeness in the (real)
sense that you do not have to say _how_, but just _what_ you mean.
Ascribing responsibility for segmentation violations (a human error)
to meta-programming, is like saying that planes crash because
they allow people to drink on board.

Fergus writes:
>
>... to be continued ;-)
>

Hopefully this will not delay the release of a Prolog-compatible Mercury 1.0 :-)


Paul Tarau

Paul Tarau

unread,
May 10, 1996, 3:00:00 AM5/10/96
to

In article <pereira-0805...@greeley.research.att.com>,
Fernando Pereira <per...@research.att.com> wrote:

>Highly specialized languages are at risk of
>finding their niche disappear or be invaded by predatory languages or
>software products (cf. island species). The disturbed conditions typical
>of newly settled territory (eg. the Web) are attractive to "weed"
>languages such as Perl. The main component of the language ecosystem is
>its carrier, the programmer.
>Thus the social, economic and educational
>forces that affect the selection, training and employment of programmers
>have great influence on the survival and prosperity of languages. In
>particular, languages that demand of the majority of programmers and
>organizations more up-front investment in intellectual development and new
>practices are at a serious disadvantage (cf. the history of functional and
>logic languages with their foundations in deep mathematical notions).
>Given all this, what is our best guess for the future of new
>scientifically designed languages? Will they be doomed to the same fate as
>artificial human languages?

I like Fernando's ecosystem metaphor. That's what `real-world' is, after
all. It's nice to stop and see the lights of the city from the top of a
hill, from time to time.

I am not really afraid of _deep_ mathematical notions in programming
language design. Category theory inspired monads in Haskell are such
an example. Take a look at Haskell 1.3's I/O-system. They made ordinary
programmer's life easier in a non-trivial way. I have heard recently
at PAP/PACT Pascal Van Hentenryck's invited constraint talk. It started
with Newton's method. And it ended quite deep, while still crystal
clear! By the way, constraint languages are earning real millions in the
real world, with real mathematics!. Lattices have made their way into
objects seamlessly. Well, class maps are usually shallow, but that's
just bad C influence :-)

I am afraid of _bad_ mathematics, however. (I am actually
talking about a more general scientific methodology here).
One symptom is `dead paradigm consolidation' i.e. over-formalization of
established knowledge of little practical value. The other is `orthodoxy
propaganda' (just look for contexts of the most frequently misused
concept in comp.lang.prolog: `declarative programming' :-)). As with
overzealous or misguided T-cells, in real-world socio-ecology, similar
things usually end up destroying the immune system of the afflicted
organism. To attract benign accidental hits from Deja News, Alta Vista
and friends (large-scale free advertising for Mercury and Prolog
:-)), I would add: "As it happened with the Third Reich, the (Great)
October Revolution and the Pentium floating point bug.". I am joking
about this because I remember a research proposal which never got funding,
about a logic programming based WEB search engine with some NL
understanding in it. At a time when searching the WEB still meant playing
with archie and veronica. If only our fellow bureaucrats could have
travelled in time (on tax-payers' money, of course) :-)

Overall, I think that the real danger does not come from innovation
but from conformity. Innovation is not necessarily artificial.
Conformity quite often is.

I will try to point this out by answering some of Fergus's questions:

> 4. What support does the language offer for modularity?
>

> 5. What support does the language offer for programmers working in teams?
>

> 6. What support does the language offer for code reuse?
>

I think that Mercury's (as well as ISO Prolog's) idea of a module is a very
weak one. Without inheritance, code reuse means very little. Modules,
as borrowed from Modula and Ada, have _no reasoning_ behind them.
It looks like forgetting what `logic programming' is all about :-)

Compare this with Java's conceptual parsimony: classes subsume modules:
let's throw them away. Objects subsume functions: let's throw them
away too! Well, it takes some courage, but it is paying off!
(I would have actually gone one step further, as in SELF 4.0:
objects and classes are the same thing, let's make them one!)

Back to logic programming, WHAT ABOUT THROWING AWAY MODULES AND JUST
START STANDARDIZING ON OBJECTS?

Most major Prologs systems have some OO dialect or library, by now.
There are also some nice theoretical developments around objects in LP.
Among the recent ones, I liked M. Martelli's LL based system. Dale
Miller's implication based constructs are another elegant and clean
formalism to start with. Sorts in Life can be as inspiring as well.

Evolutionary changes are not enough to make species survive
in ecosystems. I am afraid the same is true for languages.

The only improvement from Prolog to Mercury is a (fairly reasonable)
gain in speed. (I know this for sure for Mercury; I still have to be
convinced about Visual Prolog, if someone who has the software actually
publishes some benchmarks. The Mercury guys should not be afraid to do
this on a PC; they might just win because of gcc's good optimizer.)
Let's also hope they will manage to reduce the `expressiveness
gap' with Prolog at some point. Then compiling full Prolog to Mercury
(and redesigning Mercury to allow doing this gracefully) would benefit
all.

I think the 3 languages face basically the same problems.
Balkanization and emphasis on differences give usually very short-term
gains.

Fernando talks about "weed" languages such as Perl. With proper
guidance, what's left of Prolog can still become such a thing. I have
visited the LogicWeb at Melbourne, following a hint from Fergus
(see links from http://www.cs.mu.oz.au/~swloke/home.html). It
shows that with their text processing abilities, logic programming
languages definitely have the right genotype for the task.

At some point in this thread, Fergus hinted towards generating Java
bytecode. As Mercury already generates C, I would instead generate
plain Java after adding Java-style classes/objects to Mercury. It seems
to be less work and give better returns.
Actually, I am currently looking into how to do this for BinProlog,
which also generates C and is already embeddable in Java as a native
C method. Some help from interested volunteers around?
A free `wizard' hat left in the lobby of LogiMOO (prototype at
http://clement.info.umoncton.ca/BinProlog/BPMOO) is at stake.

To stress that innovation is not necessarily artificial, I
would suggest taking a quick look at Moondo at
http://www.intel.com/iaweb/moondo/ or ChibaMOO, starting from
http://sensemedia.net/sprawl/122. They definitely give an
unconventional meaning to "working in teams" :-)

Fergus Henderson

unread,
May 10, 1996, 3:00:00 AM5/10/96
to

ta...@clement.info.umoncton.ca (Paul Tarau) writes:

>P.S. Well, let's handle one more, 'cause of its story:
>
>>... using the non-ground
>>representation is a bad idea anyway. Yesterday, as it happens, I sent
>>off two bug reports, one for SWI-Prolog and one for BinProlog; both of
>>them were for bugs in expand_term that are directly attributable to the
>>use of the non-ground representation. The implementors of SWI-Prolog
>>and BinProlog are extremely competent Prolog programmers; if they can't
>>get it right, can we expect anyone else to?
>
>It was about how to `expand_term(A,B)' with A an uninstantiated
>variable. SWI crashed, BinProlog behaved as if
>
> A were X-->Y.

To be precise, BinProlog _bound_ A to X-->Y.

>SICStus, Aquarius, C-Prolog returned the more reasonable (I agree),
>A=B, as if
>
> A were meant to be (X:-Y).

No, as if A represented an object-level variable.

>During the intersection between Australian night and Canadian day we
>have exchanged a few messages with Fergus. Arrival of Australian
>morning :-) left us with the conclusion that a run-time error is the
>most sensible thing to do.

I agreed that expand_term(A, B) was likely to be a programming error,
and that a run-time error might therefore well be the most useful
behaviour, but I'm still not convinced that it's the best thing to do;
I think the behaviour of SICStus, Aquarius, C-Prolog, and NU-Prolog is
better on the grounds of consistency. I mean consistency of
expand_term (and other meta-programming predicates), not consistency
between different Prolog implementations (although that is another
argument in favour of returning A=B rather than a runtime error).

>I still believe that this is unrelated
>to the ground vs. non-ground representation. It simply means that
>translation should either suspend until we know more about A in
>expand_term(A,B), or simply return an instantiation error.

There is no doubt at all that this *is* related to the use of the
non-ground representation. The difficulty in defining what the "right"
behaviour is stems directly from the fact that when using a non-ground
representation there are *two* different things that a variable could
mean: either it could be a representation of an object variable (in
which case the SICStus etc. behaviour is correct), or it could be just
an uninstantiated variable which doesn't represent anything other than
absence of information, in which case delaying or returning an
instantiation error is the right thing to do. If a ground
representation were used, this confusion just simply wouldn't arise.
It is the (ab)use of free variables to mean something other than
absence of information that causes the problem.
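
(Schematically:)

   % non-ground representation: an object variable is a real Prolog
   % variable, so in expand_term(A, B) an unbound A is ambiguous:
   % "A stands for an object variable" and "no term supplied yet"
   % look exactly the same.
   %
   % ground representation: object variables are ordinary terms,
   % say var(0), as in
   %     clause(p(var(0)), q(var(0)))
   % so an unbound A can only mean "no term supplied yet".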

Fergus Henderson

unread,
May 11, 1996, 3:00:00 AM5/11/96
to

ta...@clement.info.umoncton.ca (Paul Tarau) writes:

>>>Do you mean (among other remarks I agree with) that (a forest of) embedded
>>>if-then-elses is more readable then plain Horn Clause style?

One aspect of Paul Tarau's debating technique is his choice of descriptive
phrases. Everyone has their own slant on things, and I'm probably as
guilty as he is in this respect, but I just thought I'd point out this
instance of it. By swapping the positive and negative adjectives,
which are equally applicable to both styles, Paul's leading question
could also be phrased as

Do you mean that using plain if-then-elses is more readable than
(a forest of) Prolog-style Horn clauses with embedded cuts?

--- but of course this way of phrasing the question would point to a
different answer.

>On the `scale of declarativeness' I have always put if-then-else
>close to the bottom. The same for negation.

Well, I simply don't understand that. Negation seems like a pretty
fundamental logical concept to me, and I don't see any reason to
consider it as non-declarative. If-then-else, being an abbreviation
for a combination of conjunction, disjunction, and negation, falls in
the same category. Both of these are semantically well-founded language
constructs that *say what they mean*. So I can't understand at all
why you consider them non-declarative.

Perhaps you have come to associate if-then-else and negation
with non-declarativeness simply because of the traditionally
unsound implementations of these constructs in Prolog. If so,
then I would urge you to put aside your past prejudices, and
reconsider if-then-else and negation in the light of a language
that provides logically correct implementations of these constructs.

>Good Prolog programmers never need them, anyway :-)

I think Paul and I may have to agree to disagree on what constitutes
"Good Prolog" ;-)

>Despite the problems with "!" and its obscure interaction with
>moding, I still value the ability of reading Horn Clauses
>and approximate "!" as not being there (i.e. as "true").

The problem comes from those many occasions when you *can't* just ignore
the cuts. Green cuts are fine (indeed, as a sop to Prolog programmers,
Mercury actually allows -- but ignores -- calls to '!'/0). The problems
with cut come from those occasions when the programmer *left out* the
negations.

>That's what programming in logic means: programming using
>logic formulas.

Right! So why not do it??

Here's an example; it's the sort of example that can be found in
countless Prolog textbooks and countless Prolog programs.

% min(X, Y, Min): Min is the minimum of X and Y.
min(X, Y, X) :- X < Y, !.
min(X, Y, Y).

Paul, before you read further -- do you consider this definition of
min/3 to be good Prolog?

The sources to BinProlog certainly contain lots of predicates in this style.
There is one thing you can definitely say about this style, and that is
that it is certainly *not* programming in logic in the sense you
advocated above.

The first problem with this code is that it is not steadfast.
For example, the query `min(1, 2, 2)' succeeds, even though the minimum
of 1 and 2 is 1, not 2. This is a straight-out bug: it gets the wrong
answer! Of course the bug doesn't manifest itself if min/3 is not
called with all arguments input, so it often goes unnoticed.

The bug can be fixed by postponing the output unification until after
the cut:

min(X, Y, Min) :- X < Y, !, Min = X.
min(X, Y, Y).

Good Prolog programmers know this, but even good Prolog programmers
make mistakes, and there are many Prolog programmers (and textbook
authors!) that don't know about this.

However, even this revised example still doesn't say what it means; the
cut cannot be ignored, because the meaning is dramatically changed if
you omit the cut.

In Mercury, you can write this as

min(X, Y, Min) :- if X < Y then Min = X else Min = Y.

or (using functional notation)

min(X, Y) = if X < Y then X else Y.

This style says what it means.

There is another style that is also declarative:

min(X, Y, X) :- X < Y.
min(X, Y, Y) :- X >= Y.

The problem with this style in Prolog is that it generally leaves a
choice point around; the vast majority of Prolog implementations can't
figure out that the two clauses are mutually exclusive. (Aquarius
Prolog is a notable exception.) In Mercury, you have the same problem
with this style, the difference being that the Mercury compiler will in
most situations report a determinism error (the exception being when
there is no determinism declaration and the predicate is used only in a
nondet context). In either language, you should avoid this style.

In Prolog, you can optionally add a green cut or two, if you remember
to carefully move output unifications after the cut.

% with one cut (asymmetric)
min(X, Y, Min) :- X < Y, !, Min = X.
min(X, Y, Y) :- X >= Y.

% with two cuts
min(X, Y, Min) :- X < Y, !, Min = X.
min(X, Y, Min) :- X >= Y, !, Min = Y.

These two styles are not supported in Mercury; in Mercury, the cuts are
ignored and so you have the same problems as with the previous style.
Perhaps this is what Paul was getting at.

I don't consider these two styles, which involve writing both the
condition and its negation, to be particularly important. Some people
may perhaps be a little disappointed that Mercury doesn't support them,
but they are not styles that I would ordinarily use anyway. I
generally find the if-then-else syntax, with its smaller amount of
duplication, to be preferable; furthermore, the absence of
mode-dependent and easy-to-misuse cuts means that you can read the code
without having to worry about mistakes in the use of cut.

When it comes down to it, I do prefer

min(X, Y) = if X < Y then X else Y.

to

min(X, Y, Min) :- X < Y, !, Min = X.
min(X, Y, Min) :- X >= Y, !, Min = Y.

and don't see any sufficiently strong motivations for extending Mercury
to support the latter style.

Fergus Henderson

unread,
May 11, 1996, 3:00:00 AM5/11/96
to

ta...@clement.info.umoncton.ca (Paul Tarau) writes:

>Fergus Henderson <f...@mundook.cs.mu.OZ.AU> wrote:
>>
>>Meta-programming is really an entirely different subject from
>>higher-order programming. You can do meta-programming in any
>>Turing-complete programming language, even with Turing machines if you
>>wish.
>

>As you definitely know, we all know this. The problem with saying it
>is that it implies that we had not :-)

Well, the problem with saying that "Prolog supports full meta-programming"
is that it implies that Mercury does not, which "we all know" is not the
case.

>The real point is, of course, practical expressiveness.

Agreed. My point was to show that (1) Mercury's syntax for meta-programming
is not that much less concise than Prolog's and (2) Prolog's concise syntax
for meta-programming is based on a fundamentally flawed semantics which
creates real practical problems when meta-programming in Prolog.
I agree that Prolog's non-ground meta-programming has advantages in
terms of conciseness, but I think these are outweighed by the
additional complications caused by the flawed semantics.

>Haskell and ML have
>given away Lisp's eval. I think, they have lost very little :-)
>(Because Lisp's representation of programs as data was no fun.)
>On the other hand, meta-programming in Prolog is not only useful but
>it's fun too. Because of its non-ground representation! It really
>adds practical expressiveness. And declarativeness in the (real)
>sense that you do not have to say _how_, but just _what_ you mean.

I don't find non-ground meta-programming in Prolog fun, because I have
to continually be on my guard to avoid making errors as a result of the
dangerous non-ground representation semantics. If you find walking
over minefields "fun", well, you're welcome to it ;-)
Me, I'll take the slightly longer but much safer detour, where
I can spend my time looking at the scenery, rather than carefully
watching exactly where I put my feet.

>Ascribing responsibility for segmentation violations (a human error)
>to meta-programming, is like saying that planes crash because
>they allow people to drink on board.

All programming errors are human errors. That doesn't make it any less
sensible to ascribe responsibility for those errors to language
features which make errors common-place. We're talking here about
letting the pilot drink, not the people on board! Segmentation
violations in C programs are the programmer's fault, but they're also
in many cases due to the use of dangerous constructs such as pointers.
In the particular case of SWI-Prolog's bug, the segmentation violation
was the indirect result of a cut being incorrectly placed.
Furthermore, the particular piece of SWI-Prolog code which had the cut
incorrectly placed looks like it would have been correct for the ground
representation.

>Fergus writes:
>>
>>... to be continued ;-)
>>

>Hopefully this will not delay the release of a Prolog-compatible Mercury 1.0 :-)

There is *no* chance that any release of Mercury will ever be Prolog
compatible. We may provide tools to help people convert Prolog
programs to Mercury, but there are many non-logical Prolog constructs
which will never be supported in Mercury, since there are perfectly
good Mercury constructs for doing the same thing in a declarative manner.

Fergus Henderson

unread,
May 11, 1996, 3:00:00 AM5/11/96
to

ta...@clement.info.umoncton.ca (Paul Tarau) writes:

>The only improvement from Prolog to Mercury is a (fairly reasonable)
>gain in speed.

If, after all this debate, Paul Tarau still believes this, then I think
any further efforts to convince him otherwise would be futile. I can
only reiterate my statement that Paul has failed to understand the aims
of Mercury.

Fernando Pereira

unread,
May 11, 1996, 3:00:00 AM5/11/96
to

In article <4msjsf$r...@news.irisa.fr>, rid...@irisa.fr wrote:

> In article <pereira-0805...@greeley.research.att.com>,
> per...@research.att.com (Fernando Pereira) writes:
> ...
> |> 9. How will the answers to the previous questions, the implementation, the
> |> applications and the organizations that support it help the language find
> |> fertile soil and prosper?
> |>
> |> To my mind, that is the *main* question for programming language
> |> researchers that want to see their work change the world of programming
> |> for the better. The problem they face is simple: the ecosystem for
>
> I think that there is a more subtle way of seeing one's work change the world
> of programming than having one's language and system being accepted. It is
> to see one's ideas dissolve (see note *) in the computing milieu and be accepted
> in other languages and systems.

I fully agree. But I did not bring that up because I was more concerned
with the issue I discuss below.


>I give three examples. During around 20 years
> of research on automatic memory management for pointer-free programming
> languages [...] Consider also when the first papers on relational databases
> were published, and when RDBMS became trendy. Finally, when IBM started
> distributing machines with their version of UNIX (~ 1993), they had a large
> advertising campaign in newspapers (in France, at least) with several slogans,
> including one that went as follows: "20 years ago, academics invented UNIX; now
> IBM is selling it".

Academics? So much for truth in advertising... Seriously, those are good
examples. An even older one is the influence of Algol-60 and Simula 67,
neither of which was very widely used, on other, widely-used languages
(Pascal, C, C++).


> Note that the 20 years champions of the ideas should
> not expect any form of gratitude; instead they should merely be happy. Note also
> that nobody is to be blamed for the delay.

But will research funders continue to be happy with such outcomes?
Creating a serious language implementation requires 10-100 person-years,
at an approximate cost of 1.5-15 M $US (including overhead for equipment,
etc.). That's a lot of money, and research funders are likely to ask for
concrete results from the investment, not just difficult-to-measure
diffuse influence on the field. Consider further that the same investment
could have provided summer and graduate student support for a good number
of theory faculty and their students, or industrial researchers, leading
maybe to such fundamental but patentable breakthroughs as public-key
cryptography.

So, even though I quite agree with your basic argument for the
intellectual rather than purely instrumental benefits of programming
language research, I worry that the argument does not play well with
funders and the wider CS research community, and hides a very real danger
for programming language research. It would not hurt if the community
could point to *both* intellectual influence and practical benefits
commensurate with the investments. Remember, large funded research
projects are a relatively recent phenomenon and are under serious threat
(see Andrew Odlyzko's ``The decline of unfettered research''
http://www.math.washington.edu/Special/comment/science.html). Furthermore,
it may be a bit disingenuous (in the style of the fox and grapes of the
fable) to claim that we are satisfied with pure long-term intellectual
influence. I doubt that many people would engage in the all-consuming
venture of implementing a modern programming language well without
the hope that their efforts will be of direct use to a substantial
community *for real programming tasks*. (I'm speaking here from
experience).

Fergus Henderson

May 12, 1996

ta...@clement.info.umoncton.ca (Paul Tarau) writes:

>I like Fernando's ecosystem metaphor.

Yes, so do I. I think there is room for more optimism than his article
conveys, however, and Olivier Ridoux's reply makes an important point.

>I am not really afraid of _deep_ mathematical notions in programming

>language design. [...]


>I am afraid of _bad_ mathematics, however. (I am actually
>talking about a more general scientific methodology here).
>One symptom is `dead paradigm consolidation' i.e. over-formalization of
>established knowledge of little practical value. The other is `orthodoxy
>propaganda' (just look for contexts of the most frequently misused
>concept in comp.lang.prolog: `declarative programming':-)).

In general I agree with these comments, but I strongly disagree with
the unstated implication that they apply to Mercury.

The paradigm of equational programming has great practical value, and is
most certainly not dead; on the contrary, it is only just reaching maturity,
and can look forward to a long and fruitful life. Mercury constitutes
a practical realization of this paradigm, not an over-formalization.

It is indeed true that certain terms can attract a somewhat religious
fervour, and that there are some people who think the term "declarative"
is synonymous with "good" (and similarly for "object-oriented", etc.).
It is also no doubt true that unquestioning acceptance of these sorts
of buzzwords and the concepts they represent is a very real danger
against which we must all be constantly vigilant. But the Mercury
designers are certainly not guilty of unquestioning acceptance of the
orthodoxy! Mercury stands on its merits, buzzwords and orthodoxy
notwithstanding. I hope (and believe) that open-minded critical
observers who examine Mercury will find Mercury useful and practical,
as well as elegant, and will thus be convinced that Mercury's design is
based on sound design decisions, not just buzzwords or orthodoxy.

>I will try to point this out by answering some of Fergus's questions:
>
>> 4. What support does the language offer for modularity?
>>
>> 5. What support does the language offer for programmers working in teams?
>>
>> 6. What support does the language offer for code reuse?
>
>I think that Mercury (as well as ISO-Prolog)'s idea of module is a very
>weak one.

Mercury's idea of a module is a very simple one, but I wouldn't describe
it as "weak". I'd say that Mercury's module system is the 10% of the
implementation effort and language complexity that buys you 90% of the
benefits that more complicated module systems could provide.

I *do* agree ISO Prolog's idea of a module is weak, because it does
not do anything to enforce encapsulation; its only benefit is to avoid
name clashes.

[Actually I should speak about the *draft* ISO Prolog standard, part 2,
idea of a module. The Prolog standard was split into two
parts; part 1 (the core) has been published, but part 2
(modules) is still a draft at this stage, I believe.]
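
(To make the contrast concrete, here is a sketch in Quintus/SICStus-style
module syntax; the draft standard's syntax differs in detail:)

:- module(stack, [push/3, pop/3]).  % only push/3 and pop/3 are exported

push(X, S0, [X|S0]).
pop(X, [X|S], S).

% Another module can define its own push/3 without a clash -- that is
% the name-space benefit. But nothing is encapsulated: a client that
% guesses the list representation can build and inspect "stacks"
% directly, bypassing the exported interface entirely.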

>Without inheritance, code reuse means very little.

Here, I would disagree very strongly. By placing the emphasis on inheritance,
I think you have fallen into the trap of "orthodoxy propaganda".

The ultimate goal is ease of programming and maintenance and increased
reliability. Code reuse is a very good way of achieving this.
Inheritance is one way of achieving greater code reuse, but there are
many other ways (higher-order programming being one of them).
Code reuse is much more important than inheritance.
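
(A minimal Prolog illustration: the traversal below is written once and
reused with any predicate, no inheritance in sight:)

map(_, [], []).
map(P, [X|Xs], [Y|Ys]) :-
    call(P, X, Y),       % apply the parameter predicate to each element
    map(P, Xs, Ys).

double(X, Y) :- Y is 2*X.

% ?- map(double, [1,2,3], Ys).  yields  Ys = [2,4,6]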

>Modules, as borrowed from Modula and Ada have _no reasoning_ behind them.

I disagree with this. Modules have just as much reasoning behind them
as classes.

>Compare this with Java's conceptual parsimony:

I think the main reason that Java appears conceptually parsimonious is
that the usual basis of comparison is C++! ;-)

>classes subsume modules: let's throw them away.

There is a very fine line between on the one hand taking two disparate concepts
and unifying them in a single concept, and on the other hand taking two
separate concepts and squashing them together in a single concept in a way
that confuses rather than unifies.

The idea that classes subsume modules is one of these sorts of issues.
This idea works very well in many situations, but fails badly in some
other situations, particularly when it comes to encapsulating large
software systems.

The C++ people discovered that classes were inadequate for this task
quite a while ago, and invented a different module-like concept
(namespaces) for this purpose.

The Eiffel people discovered this quite a while ago, and also invented
a different module-like concept for packaging up different classes.
(I forget the name of the construct Eiffel uses; it is extra-lingual,
implemented using a separate tool, I believe.)

The Sather people have discovered this, and have been discussing it
on comp.lang.sather, but have thus far not yet found a good solution.

The Java people realized this at the outset, which is why Java has
two different module-like language constructs: packages and classes.

Ada 95 separates the notions of inheritance and dynamic binding,
which are supported with tagged types, and encapsulation, which
is supported using packages. This approach means that they only
need one language construct for encapsulation, since Ada packages
scale well from the small scale to the large scale.

I think that there are arguments both for and against merging
the concepts of "module" and "type" in a single concept "class";
in the end, there is probably not much advantage either way
between the "unified" approach of Java and the "orthogonal"
approach of Ada 95.

>Objects subsume functions: let's throw them away too!

That's hardly an accurate description of Java. Java certainly has
functions. What C++ has but Java doesn't is non-member functions.
Java makes do with static and non-static members, which can be
virtual or non-virtual, possibly `deferred', and possibly `final'.
Add in `public', `protected', and `private', and you really have to
struggle to call it conceptual parsimony.

>Back to logic programming, WHAT ABOUT THROWING AWAY MODULES AND JUST
>START STANDARDIZING ON OBJECTS?

As explained above, objects or classes do not, on their own, provide an
adequate means of encapsulation for large software systems composed of
several sub-systems. So throwing away modules may not be the best
way to proceed.

>Most major Prologs systems have some OO dialect or library, by now.

Yes, but they're all different!

Moreover, the differences are not just syntactic, but are quite fundamental
semantic differences.

With regard to standardizing on objects, well, I agree that it would be
a good idea to research ways of integrating the paradigms of functional/
logic programming and object-oriented programming. However, I haven't
seen any existing systems which do this in a sufficiently elegant way
to convince me that this is clearly the "right" way of doing things.
I'd say that it is certainly too early to standardize on any particular
OOP extension to Prolog. (Sheesh, it's not as if any of the Prolog
systems currently available fully conform to the existing ISO Prolog
standard yet anyway, is it?)

If there was a way of integrating object-oriented programming
in Mercury which was elegant, feasible to implement, and didn't
compromise the existing advantages of Mercury, then we'd certainly
consider it.

Paul, I'm sorry if Mercury isn't sufficiently revolutionary for you.
I knew we'd encounter criticism, but being insufficiently revolutionary
was not one that I expected! ;-)

Micha Meier

May 13, 1996

Thomas says that there has not been much success with automated parallelization
of Prolog programs. I have to say that OR-parallelism exploited in CLP/FD has proved
very successful, without having to write parallel algorithms. The point here
is that CLP/FD programs end up searching a quite large search space, and OR-parallelism
makes it possible to search it in parallel without any extra cost.
See also the APPLAUSE project http://www.ecrc.de/research/collaborations/applause/applause.html

--Micha

Dominic Binks

May 13, 1996

In article <4n1jie$2...@mulga.cs.mu.OZ.AU> f...@mundook.cs.mu.OZ.AU (Fergus Henderson) writes:
ta...@clement.info.umoncton.ca (Paul Tarau) writes:
>Fergus Henderson <f...@mundook.cs.mu.OZ.AU> wrote:

>The real point is, of course, practical expressiveness.

Agreed. My point was to show that (1) Mercury's syntax for meta-programming
is not that much less concise than Prolog's and (2) Prolog's concise syntax
for meta-programming is based on a fundamentally flawed semantics which
creates real practical problems when meta-programming in Prolog.
I agree that Prolog's non-ground meta-programming has advantages in
terms of conciseness, but I think these are outweighed by the
additional complications caused by the flawed semantics.

In my experience of extensive ground meta-programming (in Goedel) and
reasonable experience of programming non-ground in Prolog, I would
not even consider writing anything but a trivial program in the
non-ground representation anymore. There are simply too many
pitfalls. Some of the most important predicates for the non-ground
representation are those that are used to avoid incorrectly binding
variables (i.e. renaming variables and grounding the term in some
reversible way). Both these are far more easily and (more
importantly) cleanly implementable in a ground representation.
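
(In standard Prolog, those two operations are usually built from
copy_term/2 and numbervars/3, roughly as follows:)

% copy_term/2 renames: it yields a variant with fresh variables.
% numbervars/3 grounds reversibly: each variable is bound to a distinct
% '$VAR'(N) term, which a suitable writer/reader can later turn back
% into variables.
?- T = f(X, g(X, Y)),
   copy_term(T, R),         % R = f(_A, g(_A, _B))
   numbervars(T, 0, End).   % T = f('$VAR'(0), g('$VAR'(0), '$VAR'(1)))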

>Haskell and ML have
>given away Lisp's eval. I think, they have lost very little :-)

This almost sounds like a case for giving away non-ground
meta-programming! :-)

>(Because Lisp's representation of programs as data was no fun.)
>On the other hand, meta-programming in Prolog is not only useful but
>it's fun too. Because of its non-ground representation! It really
>adds practical expressiveness. And declarativeness in the (real)
>sense that you do not have to say _how_, but just _what_ you mean.

I don't find non-ground meta-programming in Prolog fun, because I have
to continually be on my guard to avoid making errors as a result of the
dangerous non-ground representation semantics. If you find walking
over minefields "fun", well, you're welcome to it ;-)
Me, I'll take the slightly longer but much safer detour, where
I can spend my time looking at the scenery, rather than carefully
watching exactly where I put my feet.

I agree completely with Fergus here. I find that the ground
representation and the type system are essential for me to even hope
to program well. Maybe Paul is a far better Prolog programmer than I
am, but I seriously need all the help I can get from the compiler and
to that end types and the ground representation are essential to me.


--
-----------
Dominic Binks dom...@aethos.demon.co.uk
Aethos Communication Systems Ltd. | 220 Park Avenue | Bristol | BS8 1SB
-----------
"Meditation, 2. _spec_ (in religious use): That kind of private devotional
exercise which consists in the continuous application of the mind to the
contemplation of some truth, mystery or object of reverence, in order that the
soul may increase in love of God and holiness of life." OED


Thomas Lindgren

May 13, 1996

Actually, I also wrote:

>(There's been interesting work at ECRC on parallel
>constraint solving in medicine, but I'm not enough into constraints to
>tell you more.)

I see that the project actually was about protein folding,
among other things. I think it must've been the Imperial
Cancer Research Fund that made me think about 'medicine'.
Sorry about that.

And thanks for the reference!

Vladimir Alexiev

May 13, 1996

In article <4mvil2$g...@sol.sun.csd.unb.ca> ta...@clement.info.umoncton.ca (Paul Tarau) writes:

> > what is our best guess for the future of new
> >scientifically designed languages? Will they be doomed to the same fate as
> >artificial human languages?

> I like Fernando's ecosystem metaphor. That's what `real-world' is

I like it too; however, I don't think the goal of programming language
researchers should necessarily be to produce a language that's "socially
viable". This would be the goal of commercial language developers (although
I'm not certain there is really such a thing, as opposed to compiler
developers).

Take for example Java: I think it is successful not because there is anything
particularly new or nice about it, but because of social factors and a smart
policy by Sun. The only new thing there is the security features, but I don't
think they are sufficient at all. The security of the code is checked by a
theorem prover built into the loader? Come on!

If researchers would limit themselves to what is socially viable *now*, there
would hardly be room for novel developments. The slogan "small minds produce
small executables" that's deployed by implementors who like the "pig of a
program" approach can be paraphrased "small minds produce convenient
languages". The thing is that language research is on the border of pure and
applied science, so there should be room for both theoretical and practical
developments.

I'd also like to point out the article "Lisp: How to win big" by R. Gabriel,
http://www.ai.mit.edu/articles/good-news/good-news.html It explains why the
"worse is better" ideology wins; the best slogan I like is "Unix and C are the
ultimate computer viruses."

> Compare this with Java's conceptual parsimony: classes subsume modules:

I think this is not correct. A Java module is a collection of classes
developed for a certain purpose, but not necessarily with any inheritance or
structural connection between them. Module and class naming may make them seem
similar (eg lang.java.Parser is the full name of class Parser in module
lang.java), but the two concepts are largely orthogonal. The only commonality
is that both are used to group code by function, however methods are much more
tightly coupled inside a class than between classes in a module. Also, the
inheritance relation will usually span modules.

To me, Beta's "patterns" are the ultimate example of parsimony.

> WHAT ABOUT THROWING AWAY MODULES AND JUST START STANDARDIZING ON OBJECTS?

I'm not sure if this is completely appropriate. In any case, OO can (and has)
borrow(ed) much from research on modules, namely encapsulation. However, there
is the danger that the transfer of ideas from modules to objects will further
the concept that objects are encapsulated theories, with the associated
understanding that state change is theory change. And I think this is wrong
because state change is simpler than (and an important special case of) theory
change.

> I liked M. Martelli's LL based system. Dale Miller's implication based
> constructs are another elegant and clean formalism

I'd be glad if we can have some technical discussion on this, because OOLP
based on LL is my area of interest. BinProlog is one of the few "conventional"
(I mean, not based on LL) systems that makes some good use of LL ideas
(intuitionistic vs linear assumptions, etc.)

Delzanno and Martelli's proposal (F&O) is based on Miller's Forum, which
itself is a generalization of Lolli, a linear variant of LambdaProlog.
However, I think that F&O takes the easy way to OOP, basically by
reformulating the well-known functional-style object calculi in Forum, using
the higher-order nature of Forum. Objects are terms where all the state and
methods is represented by higher-order LL formulas embedded in the term. Thus,
most of the interesting stuff is going on at the term ("functional") level, as
opposed to the atom ("logic") level. I think that an approach representing
attributes as atoms (like in LO), and interaction in the tradition of process
calculi, or proof/interaction nets, can at least obtain more substantial
results, if not be more viable.

> Sorts in Life can be as inspiring as well.

Yes, feature terms are a very nice approach to complex/circular objects.

I think one of the least well understood aspects of OOLP is how to reconcile
committed-mode execution with declarative mode (full proof search). Other
aspects/names of the same dichotomy are imperative vs declarative, don't care
vs don't know nondeterminism, etc. For example, the reconciliation approach
taken in Oz (encapsulation of the declarative mode in a search primitive that
can return all, one, the best or on-demand answers by following different
search strategies) is a major achievement, but it's not clear if it is the
best approach. My objection is that it confines the LP component inside the
OOP component: while local object computations can be programmed declaratively,
the object interactions have to be dealt with imperatively. And with the
advent of distributed systems of rational entities (eg communities of software
agents), it is more and more important to be able to govern these interactions
using logic.

> Evolutionary changes are not enough to make species survive in ecosystems.
> I am afraid the same is true for languages.

I would put it another way: such changes are not enough to produce the
languages needed for the increasingly complex systems we want to build.
Fortran itself is alive and well, it's just the poor programmers that have
to use it who suffer.

Regards, Vlad

michael lawley

May 13, 1996

f...@mundook.cs.mu.oz.au (fergus henderson) writes:
>ta...@clement.info.umoncton.ca (Paul Tarau) writes:

>>The only improvement from Prolog to Mercury is a (fairly reasonable)
>>gain in speed.

>If, after all this debate, Paul Tarau still believes this, then I think


>any further efforts to convince him otherwise would be futile.

Perhaps, but if you'd convinced him then the rest of us wouldn't have had
the pleasure of ``eavesdropping'' on the debate.

michael

--
.
. michael lawley (law...@cit.gu.edu.au) _--_|\
. O _ The Unicycling Postgrad / *
_/=/ CIT, Griffith University, Nathan, 411, QLD, AUSTRALIA \_.--._/
| phone: +61 7 875 5041; fax: +61 7 875 5051 v
_/ \_ URL: http://www.cit.gu.edu.au./~lawley


Micha Meier

May 14, 1996

In article f...@arnold.csd.uu.se, tho...@csd.uu.se (Thomas Lindgren) writes:
>
>In article <DrC5K...@ecrc.de> mi...@ecrc.de (Micha Meier) writes:
..

> possible to search it in parallel without any extra cost. See also
> the APPLAUSE project
> http://www.ecrc.de/research/collaborations/applause/applause.html
>
>Actually, I also wrote:
>
>>(There's been interesting work at ECRC on parallel
>>constraint solving in medicine, but I'm not enough into constraints to
>>tell you more.)

Actually, the point I was trying to make was that this was not only one application
in medicine, but a bunch of applications from other domains as well. Your sentence
suggested that this was a very specialised usage of parallel constraints
(or that the other domains were not interesting :-)), which I think was not
the case.

--Micha

Paul A S Ward

May 14, 1996

>>>>> "Fergus" == Fergus Henderson <f...@mundook.cs.mu.OZ.AU> writes:

Fergus> There is another style that is also declarative:

Fergus> min(X, Y, X) :- X < Y.
Fergus> min(X, Y, Y) :- X >= Y.

Fergus> The problem with this style in Prolog is that it generally leaves a
Fergus> choice point around; the vast majority of Prolog implementations can't
Fergus> figure out that the two clauses are mutually exclusive. (Aquarius
Fergus> Prolog is a notable exception.) In Mercury, you have the same problem
Fergus> with this style, the difference being that the Mercury compiler will in
Fergus> most situations report a determinism error (the exception being when
Fergus> there is no determinism declaration and the predicate is used only in a
Fergus> nondet context). In either language, you should avoid this style.

There's a bigger problem than the simple choice point issue. The
intent in min/3 is that the conditions be mutually exclusive, but this
is not what is specified in the code! The same condition has to be
specified in two different places in the code. This is a maintenance
nightmare.
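
(In both languages the standard cure is if-then-else, which states the
test exactly once and leaves no choice point:)

min(X, Y, Min) :-
    ( X < Y ->
        Min = X
    ;
        Min = Y
    ).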
--
Paul A.S. Ward Phone: (604) 291-5711
Research Associate Fax: (604) 291-3045
Intelligent Software Group E-mail: paul...@cs.sfu.ca
Simon Fraser University URL: http://www.isg.sfu.ca/~paulward

Paul Tarau

May 15, 1996

In article <DOMINIC.96...@patmos.aethos.co.uk>,
Dominic Binks <dom...@patmos.aethos.co.uk> wrote:

> ta...@clement.info.umoncton.ca (Paul Tarau) writes:
> >(Because Lisp's representation of programs as data was no fun.)
> >On the other hand, meta-programming in Prolog is not only useful but
> >it's fun too. Because of its non-ground representation! It really
> >adds practical expressiveness. And declarativeness in the (real)
> >sense that you do not have to say _how_, but just _what_ you mean.

> I don't find non-ground meta-programming in Prolog fun, because I have
> to continually be on my guard to avoid making errors as a result of the
> dangerous non-ground representation semantics. If you find walking
> over minefields "fun", well, you're welcome to it ;-)

It would be interesting to see how the ground representation of
append/3 actually _looks_ (type/mode info included!) in Goedel and
Mercury. Maybe you (and Fergus) can post them on the net. This will also
help poor unilingual Prologers see how fun is what we are actually
talking about :-)

My view on this is actually more moderate. I am not against using ground
representation as such. Having a built-in ground representation
carefully packaged as an abstract data-type (as in Goedel or Mercury)
does no harm. The important thing is having the _choice_ to use
whatever is more suitable. My hesitation is about language design which
makes ground representation the _only_ way to go.

Another, maybe neglected, topic is the possibility of choosing the right
`granularity'. A fixed built-in ground representation might force you
to deal with too much detail (like quoting each ASCII code occurring in
a name :-)). I guess some `zooming' ability is present in the ground
representation libraries of Goedel and/or Mercury and this is not
currently a problem.

I recall from my TurboProlog experience that some of my concerns w.r.t.
ground representations were speed-related. Remember Borland's 50-100
times slower Prolog-in-TurboProlog (meta-)interpreter! And they had
not built a partial evaluator for it :-). Being quite poor at that
time, the cheapest way to have a decent Prolog around was to compile
from (Micro)-Prolog to TurboProlog. I eventually managed to do
this, but sadly enough, I had to throw it away, because the resulting
code broke the mode inference mechanism, which was probably trying to
figure out how arbitrary unification should look :-)

Reflective representations directly benefit from fast object level
unification compilation. This is also important for good compilation
time when the compiler is written in the language itself. Having
learned a few things from the `minefields of purity', now BinProlog
compiles itself to bytecode in about 50 seconds (on a Toshiba 210
laptop) and to C in about 70 seconds :-)

Actually, Prolog's reflective non-ground representation helps a lot
when going even further: self-adjusting dynamic recompilation which
happens on the fly, only when needed (see BinProlog 5.00), is quite
easy to add to any Prolog system. While benefiting from compilation,
the user will have consult times typical for interpreted code i.e. only
limited by the speed of the parser. Again, not being restricted in
expressiveness by _mandatory_ ground representation was crucial.

The funniest thing about compilers is that the better they know
themselves, the better they know other programs.

My Prolog `dream-machine' is just one step further:
the _emulator_ itself can be written in Prolog. Actually the
BinWAM is simple enough (a minimal emulator for pure Prolog
would fit in about 8 instructions). Some impurity might
be needed to make the abstract machine completely transparent
as a Prolog data structure. Once the thing works on top
of Prolog, throwing it directly to Java or C by a compilation
scheme which knows enough about itself would close the loop.

Still have to write it down, maybe next week :-)

Paul Tarau

May 15, 1996

I (Paul Tarau) wrote:

>>I still believe that this is unrelated
>>to the ground vs. non-ground representation. It simply means that
>>translation should either suspend until we know more about A in
>>expand_term(A,B), or simply return an instantiation error.

Fergus Henderson wrote:

>There is no doubt at all that this *is* related to the use of the
>non-ground representation. The difficulty in defining what the "right"
>behaviour is stems directly from the fact that when using a non-ground
>representation there are *two* different things that a variable could
>mean: either it could be a representation of an object variable (in
>which case the SICStus etc. behaviour is correct), or it could be just
>an uninstantiated variable which doesn't represent anything other than
>absence of information, in which case delaying or returning an
>instantiation error is the right thing to do. If a ground
>representation were used, this confusion just simply wouldn't arise.
>It is the (ab)use of free variables to mean something other than
>absence of information that causes the problem.

Well, I know this better, as it was my error :-) Although I have been
(mildly) bitten by this (as well as by mosquitos in Florida) I do not
think that winter vacation at the North Pole is the solution!

It might well be the case that the _methodological pitfall_ with DCGs is
the very fact of using a _preprocessor_. (I hope that Fergus will
agree that empty preprocessors cannot go wrong, whatever their
data representation is :-) ). More seriously,
translation-based implementation of DCGs makes people mix
grammar and hand-translated stuff in very tricky ways, especially
because DCGs are limited to exactly one pair of implicit arguments.
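
(For readers following along, the standard textbook expansion, which
makes that single implicit pair explicit:)

% the grammar rule
greeting --> [hello], name.

% roughly what the preprocessor turns it into: one pair of stream
% arguments threaded through every nonterminal
greeting(S0, S) :-
    S0 = [hello|S1],
    name(S1, S).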

That's why the preprocessing `implementation' of DCGs is _optional_ in
BinProlog. Multiple-stream DCG grammars based on implicit state are
more and more popular among users: for serious natural language
processing or compiler writing, ordinary DCGs are simply too weak.

In fact, even extended DCGs can be seen just as
special instances of `hypothetical assumptions'.
Assumptions (which are a kind of backtrackable assert/1) can be
discarded either by limiting their scope (with intuitionistic implication)
or by making them `linear' (usable only once).

This avoids the pitfalls of retract/1 which, when combined with
backtracking, is one of impure Prolog's most frightening constructs
(just look at the Tacoma Narrows Bridge in the home page of Peter
Reintjes, at http://www.als.com/nalp/people/pbr/pbr.html to see what
I mean :-)).

I think this cleaner style of meta-programming is not intrinsically
alien to Mercury as _typed_ logic languages like LambdaProlog-MALI are
using such constructs quite heavily.

Fergus Henderson

May 15, 1996

ta...@clement.info.umoncton.ca (Paul Tarau) writes:

>It would be interesting to see how the ground representation of
>append/3 actually _looks_ (type/mode info included!) in Goedel and
>Mercury. Maybe you (and Fergus) can post them on the net.

Which ground representation? The Mercury compiler actually uses about
three different representations of Mercury programs (not to mention a
couple of representations of the generated C code). First the source
code is read in as a list of terms. Then, these terms are analyzed to
figure out which terms are declarations, which are clauses, etc.; the
result of this stage is a lists of `items', which is a parse tree
representation that captures most of the semantic information. This
representation is designed to avoid losing any information present in
the source code, so that it can be used as the basis of pretty printers
or source-level translators. But for the compiler's purposes, much of
this information is redundant. The compiler converts the `items' into
the HLDS (High Level Data Structure) representation, at the same time
flattening unifications and performing other simplifications.

All of these ground representations are implemented using just ordinary
Mercury code; the `term' representation is in the Mercury standard library,
whereas the others are not, but apart from that there is no sense in which
any of them is built-in, and you are free to create your own representations.

The `term' representation is the closest to Prolog's standard
non-ground representation. But unlike the standard Prolog representation,
it includes information about the source code file and line number
where the terms were read in from.

Asking what the representation of append/3 actually _looks_ like is not
a very useful question. I started writing down the `term'
representation of append/3 in Mercury, but then I realized that the
question was meaningless. The Mercury representations are a mixture of
concrete and abstract data types. If the representation you are using
is an abstract data type, why does it matter what it looks like? In
fact, what does it even mean to say that the representation _looks_
like so-and-so? At the lowest level, the representation is a bunch of
ones and zeros.

The important question is how difficult is it to write programs that
manipulate this representation.

My experience is that it is easy to write meta-programs in Mercury.
But you don't have to take my word for it; you're welcome to download
the source to the Mercury compiler and have a look at it, or to have
a go at writing some meta-programs of your own in Mercury.

>I recall from my TurboProlog experience that some of my concerns w.r.t
>ground representations were speed-related. Remember Borland's 50-100
>times slower Prolog-in-TurboProlog (meta-)interpreter!

Yes, as I said last time, to get an efficient interpreter you need to
use destructive update. This is not possible in Goedel, but it can be
done in Mercury using unique modes, e.g. with the unique_array module.

>Actually, Prolog's reflective non-ground representation helps a lot
>when going even further: self-adjusting dynamic recompilation which
>happens on the fly, only when needed (see BinProlog 5.00), is quite
>easy to add to any Prolog system.

It would be quite easy to add to any Mercury system, or any language
implementation for that matter, if the system already had an
interpreter and a native-code or byte-code compiler.
I think having the interpreter and compiler are the key ingredients;
I don't see how using a non-ground representation would help.

>My Prolog `dream-machine' is just one step further:
>the _emulator_ itself can be written in Prolog. Actually the
>BinWAM is simple enough (a minimal emulator for pure Prolog
>would fit in about 8 instructions). Some impurity might
>be needed to make the abstract machine completely transparent
>as a Prolog data structure.

In Mercury, I think you could use unique modes to do this sort of
thing without any impurity.

Dominic Binks

May 16, 1996

In article <4nd0bp$1...@mulga.cs.mu.OZ.AU> f...@mundook.cs.mu.OZ.AU (Fergus Henderson) writes:

ta...@clement.info.umoncton.ca (Paul Tarau) writes:

>It would be interesting to see how the ground representation of
>append/3 actually _looks_ (type/mode info included!) in Goedel and
>Mercury. Maybe you (and Fergus) can post them on the net.

Granted, the Goedel version is enormous. However, this is a somewhat
unfair example. append/3 is a very short predicate in Prolog (2
clauses). However, in Goedel the representation is not of Append/3
but of the program that contains Append/3 which includes typing
information, the Lists module and the Integers module. This is like
including part of the Prolog interpreter.

Asking what the representation of append/3 actually _looks_ like is not
a very useful question. I started writing down the `term'
representation of append/3 in Mercury, but then I realized that the
question was meaningless. The Mercury representations are a mixture of
concrete and abstract data types. If the representation you are using
is an abstract data type, why does it matter what it looks like? In
fact, what does it even mean to say that the representation _looks_
like so-and-so? At the lowest level, the representation is a bunch of
ones and zeros.

The important question is how difficult is it to write programs that
manipulate this representation.

Exactly. In order to program meta-programs using the ground
representation you never have to trouble yourself with what the
representation looks like. In Goedel (and presumably Mercury) there
is a set of modules that provide access to a representation. What
they do is of no interest to the programmer.

My experience is that it is easy to write meta-programs in Mercury.
But you don't have to take my word for it; you're welcome to download
the source to the Mercury compiler and have a look at it, or to have
a go at writing some meta-programs of your own in Mercury.

Well, I have to somewhat disagree with this. Writing meta-programs
with the ground representation is easy once you've unlearnt the
(rather unnatural) non-ground representation. I say unnatural because
if you were writing a Prolog interpreter/compiler in C, C++, Haskell,
Java ... you would implement a set of functions/objects for
manipulating the representation of the program. So Prolog is
unnatural in its treatment of object programs.

>I recall from my TurboProlog experience that some of my concerns w.r.t
>ground representations were speed-related. Rememeber Borland's 50-100
>times slower Prolog-in-TurboProlog (meta)-interpreter!.

Yes, as I said last time, to get an efficient interpreter you need to
use destructive update. This is not possible in Goedel, but it can be
done in Mercury using unique modes, e.g. with the unique_array module.

Yes, Goedel meta-programs are slow to run. But it is still easier to
write a correct meta-program that is more amenable to specialisation
in the ground representation than in the non-ground representation.
Consider Corin Gurr's self applicable partial evaluator.

>Actually, Prolog's reflective non-ground representation helps a lot
>when going even further: self-adjusting dynamic recompilation which
>happens on the fly, only when needed (see BinProlog 5.00), is quite
>easy to add to any Prolog system.

It would be quite easy to add to any Mercury system, or any language
implementation for that matter, if the system already had an
interpreter and a native-code or byte-code compiler.
I think having the interpreter and compiler are the key ingredients;
I don't see how using a non-ground representation would help.

In practice Goedel actually does this to implement the Succeed and
Compute families of predicates. It is not required to work that way,
but the implementors felt that the delay on the first call to such a
predicate would be worth the improvement in performance on successive
calls.

>My Prolog `dream-machine' is just one step further:
>the _emulator_ itself can be written in Prolog. Actually the
>BinWAM is simple enough (a minimal emulator for pure Prolog
>would fit in about 8 instructions). Some impurity might
>be needed to make the abstract machine completely transparent
>as a Prolog data structure.

In Mercury, I think you could use unique modes to do this sort of
thing without any impurity.

Again this doesn't sound too far from Corin Gurr's partial evaluator
that uses an interpreter that uses WAM-like structures and can be
partially evaluated to be a compiler to WAM instructions. The whole
of the system is written entirely in Goedel. I don't claim it is
fast, but it can be done.

Fernando Pereira

May 16, 1996

In article <4nbqdo$7...@sol.sun.csd.unb.ca>, ta...@clement.info.umoncton.ca
(Paul Tarau) wrote:
> [...]

> I think this cleaner style of meta-programming is not intrinsically
> alien to Mercury as _typed_ logic languages like LambdaProlog-MALI are
> using such constructs quite heavily.
Good that you brought up lambda Prolog. Universal quantification in
negative contexts (essentially universal variables) is a better
representation of variables in metaprogramming than either the ground
representations of Mercury and Goedel or the standard Prolog hack. It
avoids the hassles of managing explicitly sets of constants and
substitutions in ground representations, and it avoids the bugs caused by
the standard Prolog hack. It is more expressive than either, and it is
compatible with lambda conversion in just the right way.

However, metaprogramming in lambda Prolog still requires explicit encoding
of object-level formulas as metalevel terms, because of the type system.
In the original implementation of lambda Prolog, this could be avoided by
using runtime type polymorphism, but at considerable danger of
nontermination because of underconstrained types in higher-order
unification. More recent implementations use stricter type systems, so
encoding is inevitable.

Jonas Nygren

May 17, 1996

> ta...@clement.info.umoncton.ca (Paul Tarau) writes:
>
> >It would be interesting to see how the ground representation of
> >append/3 actually _looks_ (type/mode info included!) in Goedel and
> >Mercury. Maybe you (and Fergus) can post them on the net.
>

I have followed the Mercury/Prolog debate with some interest but I am
somewhat unsure if I have understood the ground/non-ground representation
issue. As I understand it, ground representation means that a ground term
may not be, or contain, any uninstantiated variables. Is that correct?

What are the practical implications of ground representation for me as
a programmer (not compiler/interpreter implementor)? Can it be shown with
a simple example?
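
(My own guess at such an example -- the encoding on the ground side is
made up, so please correct it if it is off:)

?- Goal1 = append(Xs, [1], Ys).
% non-ground: the object-level variables Xs and Ys are real,
% still-unbound Prolog variables inside the term

?- Goal2 = app(var(xs), cons(1, nil), var(ys)).
% ground: the object-level variables are ordinary constants of some
% encoding, so the term contains no Prolog variables at all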

/jonas

Dennis Merritt

May 17, 1996

per...@research.att.com (Fernando Pereira) wrote:

>languages such as Perl. The main component of the language ecosystem is
>its carrier, the programmer. Thus the social, economic and educational
>forces that affect the selection, training and employment of programmers
>have great influence on the survival and prosperity of languages. In
>particular, languages that demand of the majority of programmers and
>organizations more up-front investment in intellectual development and new
>practices are at a serious disadvantage (cf. the history of functional and
>logic languages with their foundations in deep mathematical notions).

I believe the problem with most ‘higher-level’ languages is not
related to their theoretical roots, but rather to the simple fact that
in the process of providing a more intuitive mapping to certain
application areas, they obscure the mapping of the language to the
underlying machine, which is what is being programmed in the first
place.

A programmer’s job is one of mapping a problem specification described
in the semantics of the problem domain to code expressed in the
semantics of the computer. The semantics of the computer language
used to write a program lies somewhere in between the problem
semantics and the computer semantics. This leaves two semantic gaps,
one between the language and the machine and the other between the
language and the problem domain.

A large semantic gap on either side of the language makes programming
difficult.

The reason C is such a popular language is it has a small semantic gap
between it and the machine. In other words, if the task is
‘programming the computer’ C is a great language because it is close
to the computer. The underlying architecture of memory and a CPU
executing one instruction at a time is perfectly clear in C.

As long as the problem domain semantics can map relatively easy into
the concepts of memory, data and a series of executable instructions,
then C, or any other procedural language, is the best tool for the
job.

The difficulties in programming arise when the problem semantics do
not map easily to the tool. It is pondering these situations that
leads programmers to invent new languages with semantics that map more
closely to the problem domain.

The results of these efforts are languages that are good at expressing
the solutions to certain classes of problems, but these languages, by
necessity, leave a larger semantic gap between them and the machine.
As soon as you get beyond the simple sample programs that illustrate
how good the language is for its problem domain, the language becomes
obtuse, difficult to control, and frustrating to use.

It is only after acquiring a good clear conceptual understanding of
how the language maps to the machine underneath that a programmer can
become skilled in using the language.

This problem is made worse by the language designers and implementers,
who, by definition have an extremely clear understanding of the
mapping of the language to the machine, but try to hide it. What they
try to do, in the semantics of the language and the documentation and
the examples and the tutorials, is show how good the language is at
mapping to the problem domain, leaving few hints as to how it really
works.

It is no wonder that the most dedicated of programmers, or those
who understand the implementation, are the only ones who are
successful with these languages.

Take for example Prolog. The whole semantics of Prolog is designed to
make it look like logic, and the classic first example shows that:

person(socrates).
person(plato).
person(zeno).

mortal(X) :- person(X).

This is great, and anyone can see how logical it is, but when the
programmer’s manager asks for a report of all mortals, there isn’t a
clue in the language design that this program is the way to create
that report:

mortal_report :-
    write('Mortal Report'), nl,
    mortal(X),
    write(X), nl,
    fail.
mortal_report.

Given that all programmers know that this program, ultimately, is
going to cause the CPU to loop through a series of instructions that
are going to display the characters found at different addresses, this
C program that prints a report of persons is more satisfying in that
it more clearly maps to the machine.

void main()
{
    char *person[] = {"socrates", "plato", "zeno", NULL};
    char **p;

    printf("Person Report\n");
    for (p = person; *p != NULL; p++)
        printf(" %s\n", *p);
}

But, of course, if you try to add the wrinkle of the inference rule
about mortals, then the C code starts to get more obtuse, because an
inference rule does not map as nicely onto the underlying machine
architecture.

The net of all this is most computer applications will remain in the
restrictive category of applications that map easily to the machine
architecture, and the languages used to implement them will map easily
both from the application domain and to the machine.

Difficult applications, that is those that do not map easily to the
machine, will still be difficult for one of two reasons. Either the
language used is inadequate for expressing the application, or the
language does not map easily to the underlying machine, making it
difficult to understand and control.

In either case, the programmer implementing a difficult application is
going to have to press the envelope a bit, one way or the other.

Dennis Merritt

Fergus Henderson

May 17, 1996

per...@research.att.com (Fernando Pereira) writes:

>ta...@clement.info.umoncton.ca (Paul Tarau) wrote:
>> [...]
>> I think this cleaner style of meta-programming is not intrinsically
>> alien to Mercury as _typed_ logic languages like LambdaProlog-MALI are
>> using such constructs quite heavily.
>Good that you brought up lambda Prolog. Universal quantification in
>negative contexts (essentially universal variables) is a better
>representation of variables in metaprogramming than either the ground
>representations of Mercury and Goedel or the standard Prolog hack. It
>avoids the hassles of managing explicitly sets of constants and
>substitutions in ground representations, and it avoids the bugs caused by
>the standard Prolog hack. It is more expressive than either, and it is
>compatible with lambda conversion is just the right way.

LambdaProlog does have a nice, elegant way of representing variables
that has a simple declarative semantics based on intuitionistic logic.
I basically agree with you that LambdaProlog's mechanism for this aspect
of meta-programming is more expressive than either Mercury's or Prolog's.

However, the expressiveness advantage for meta-programming of
LambdaProlog compared to Mercury is only slight, and the LambdaProlog
approach has some very significant disadvantages in other areas -- the
most important being the loss of negation and if-then-else.

I had a long discussion about these issues with Olivier Ridoux via
email a few months ago.

The reason I say that the expressiveness advantage of LambdaProlog's
meta-programming features is only slight is that the functionality
they provide is easy enough to provide in a library module such as
the Mercury `varset' module. The main difference then is that in
LambdaProlog you avoid the need to explicitly pass around varsets.
However, if you need to keep track of other information such as the
names of the variables, then you will still need to explicitly
pass around a data structure recording this information. So in
that situation, there is not really much advantage to the LambdaProlog
approach.

But more important than that is the issue of negation and if-then-else.
The reason that Mercury and Goedel can handle negation and if-then-else in
a logically sound manner is that their semantics are based on the
Clark completion of the program. However, Lambda-Prolog's theoretical
foundation is not based on the completion semantics, and in particular
Lambda-Prolog's intuitionistic implication construct (an essential part
of how it handles meta-programming) would not make any sense in the
completion semantics.

Now when it comes to expressiveness, negation and if-then-else are absolutely
critical, for meta-programming and indeed any other kind of programming.
LambdaProlog programmers manage to get by without negation or
if-then-else by using cut instead, but I'm sure you can understand why
we would not consider that to be an acceptable work-around in Mercury.
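
(For reference, the classic cut-based encoding, which is exactly what
raises the soundness worries:)

% negation as failure, simulated with cut; only safe when G is
% sufficiently instantiated at the time of the call
not(G) :- call(G), !, fail.
not(_).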

Now, don't get me wrong. I think it would be eminently worthwhile
researching ways to unite the theoretical frameworks behind these two
different approaches in order to get the best of both worlds.
However, as Olivier Ridoux said in one of his emails (I hope he won't
mind me quoting this small part):

| All this is much work to do!

To which I responded

| Too true! And I have a thesis to write...

--------------------

In any case, meta-programming is only one small part of programming.
For many researchers, it is a very important part, but for most
programmers in industry it is not very important at all. To address
Fernando Pereira's previous post on language ecology, I think one
reason why many previous "scientifically designed languages" have not
caught on very well in the main-stream is that their designers have
concentrated on areas of more interest to theorists than
practitioners. I offer this as explanation, not as criticism.
There's nothing wrong with theorists designing languages which address
theoretical concerns rather than directly addressing practical ones,
indeed I think they play a very necessary part in supplying the "meme"
pool which gives the programming language ecosystem its necessary
biodiversity.

--------------------

>However, metaprogramming in lambda Prolog still requires explicit encoding
>of object-level formulas as metalevel terms, because of the type system.
>In the original implementation of lambda Prolog, this could be avoided by
>using runtime type polymorphism, but at considerable danger of
>nontermination because of underconstrained types in higher-order
>unification. More recent implementations use stricter type systems, so
>encoding is inevitable.

Could you please explain this in a little bit more detail, with say an
example or two?

John Fletcher

May 18, 1996

den...@amzi.com (Dennis Merritt) wrote:
<snip>

>
>person(socrates).
>person(plato).
>person(zeno).
>
>mortal(X) :- person(X).
>
> This is great, and anyone can see how logical it is, but when the
> programmer’s manager asks for a report of all mortals, there isn’t a
> clue in the language design that this program is the way to create
> that report:
>
>mortal_report :-
> write('Mortal Report'), nl,
> mortal(X),
> write(X), nl,
> fail.
>mortal_report.
>

.. or in C ...

>void main()
>{
> char *person[] = {"socrates", "plato", "zeno", NULL};
> char **p;
>
> printf("Person Report\n");
> for (p = person; *p != NULL; p++)
> printf(" %s\n", *p);
>}
>

<snip>

IMHO, the directly comparable Prolog program is:

person(socrates).
person(plato).
person(zeno).

person_report :-
    write('Person Report'), nl,
    forall( person(X), (write(X), nl) ).

(forall/2 is usually provided as a built-in or library predicate).
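
Where it is not, the usual one-line definition is the double negation:

% forall(Cond, Action): no solution of Cond for which Action fails
forall(Cond, Action) :-
    \+ (Cond, \+ Action).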

This still obscures the mapping to the machine, which I don't care about,
but makes the intended meaning clear and explicit.

Fernando Pereira

May 18, 1996

In article <4nj74a$j...@soap.news.pipex.net>, John Fletcher
<lk...@dial.pipex.com> wrote:
> den...@amzi.com (Dennis Merritt) wrote:
> <snip>
> >
> >person(socrates).
> >person(plato).
> >person(zeno).
> >
> >mortal(X) :- person(X).
> >
> > This is great, and anyone can see how logical it is, but when the
> > programmer’s manager asks for a report of all mortals, there isn’t a
> > clue in the language design that this program is the way to create
> > that report:
> >
> >mortal_report :-
> > write('Mortal Report'), nl,
> > mortal(X),
> > write(X), nl,
> > fail.
> >mortal_report.
> >
>
>
> IMHO, the directly comparable Prolog program is:
>
> person(socrates).
> person(plato).
> person(zeno).
>
> person_report :-
> write('Person Report'), nl,
> forall( person(X), (write(X), nl) ).
>
> (forall/2 is usually provided as a built-in or library predicate).
>
> This still obscures the mapping to the machine, which I don't care about,
> but makes the intended meaning clear and explicit.
Actually, neither program makes the meaning that explicit. I would rather write:

people(People) :-
    findall(Person, person(Person), People).

people_report(Report) :-
    people(People),
    format_list(People, Report).

print_people_report :-
    people_report(Report),
    print_report(Report).

with appropriate definitions of format_list to map a list to a report
object (whatever that is) and of print_report.

The reason I go over this seemingly trivial example is that it actually
illustrates the point I was making originally and Dennis Merritt
disagreed with. Both Dennis's original mortal_report and your
person_report are too tied to the underlying (Prolog) machine (they depend
on backtrack loops), where in fact what we want to accomplish has a
simple, mostly declarative, specification: print (the imperative part, a
verb) a report of the list of all people we know about (the declarative
part, expressed in English here by a noun phrase). In writing the program
this way, I follow both the natural language structure of the
specification and Richard O'Keefe's advice to keep imperative computation
as "high up" as possible in a program. The lesson here is that our own
experience of working close to the machine in other languages often
obscures the possibility of using higher-level languages to bring the
level of programming discourse much closer to the natural-language
specification of tasks. If this problem arises even with experienced
Prolog programmers, that supports my point that the mental environment
created by lower-level languages is a difficult one for high-level ones,
*independently* of the suitability of the languages to particular tasks in
an ideal world.

John Fletcher

May 19, 1996

per...@research.att.com (Fernando Pereira) wrote:
>In article <4nj74a$j...@soap.news.pipex.net>, John Fletcher
><lk...@dial.pipex.com> wrote:
>> den...@amzi.com (Dennis Merritt) wrote:
>> <snip>
>> >
>> >person(socrates).
>> >person(plato).
>> >person(zeno).
>> >
>> >mortal(X) :- person(X).
>> >
>> > This is great, and anyone can see how logical it is, but when the
>> > programmer’s manager asks for a report of all mortals, there isn’t a
>> > clue in the language design that this program is the way to create
>> > that report:
>> >
>> >mortal_report :-
>> > write('Mortal Report'), nl,
>> > mortal(X),
>> > write(X), nl,
>> > fail.
>> >mortal_report.
>> >
>>
>>
>> IMHO, the directly comparable Prolog program is:
>>
>> person(socrates).
>> person(plato).
>> person(zeno).
>>
>> person_report :-
>> write('Person Report'), nl,
>> forall( person(X), (write(X), nl) ).
>>
>> (forall/2 is usually provided as a built-in or library predicate).
>>
>> This still obscures the mapping to the machine, which I don't care about,
>> but makes the intended meaning clear and explicit.
>Actually, neither program makes the meaning that explicit. I would rather write:
>
>people(People) :-
> findall(Person, person(Person), People).
>
>people_report(Report) :-
> people(People),
> format_list(People, Report).
>
>print_people_report :-
> people_report(Report),
> print_report(Report).
>
>with appropriate definitions of format_list to map a list to a report
>object (whatever that is) and of print_report.
>

I agree that separating the details of presentation is always a good idea;
I didn't do it here because I wanted to stick closely to the pattern of
Dennis's C program - using a higher-order Prolog predicate where he had
used a higher-order C construct.

Here is Dennis's C Program again:

void main()
{
    char *person[] = {"socrates", "plato", "zeno", NULL};
    char **p;

    printf("Person Report\n");
    for (p = person; *p != NULL; p++)
        printf(" %s\n", *p);
}

If we introduce the predicates output_title/1 and output_person/1, to hide
the details of presentation, we get:

mortal_report2 :-
    output_title('Mortal Report'),
    mortal( X ),
    output_person( X ),
    fail.
mortal_report2.

person_report2 :-
    output_title('Person Report'),
    forall( person(X), output_person(X) ).

A broken output_person/1 which fails for some values of X would cause
person_report2 to fail whereas mortal_report2 would succeed. I think that
is a reasonable justification for claiming that the meaning is made (more)
explicit by the use of forall/2.
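
To make this concrete, consider a hypothetical broken output_person/1
(invented here purely for illustration) that fails on one name:

% Broken on purpose: fails for zeno.
output_person(X) :- X \== zeno, write(X), nl.

% ?- mortal_report2.   succeeds: the failure on zeno merely drives
%                      backtracking, and zeno is silently omitted.
% ?- person_report2.   fails: forall/2 reports that some person was
%                      not successfully output.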

> The reason I go over this seemingly trivial example is that it actually
> illustrates the point I was making originally and Dennis Merritt
> disagreed with. Both Dennis's original mortal_report and your
> person_report are too tied to the underlying (Prolog) machine (they
> depend on backtrack loops), where in fact what we want to accomplish has
> a simple, mostly declarative, specification: print (the imperative part,
> a verb) a report of the list of all people we know about (the
> declarative part, expressed in English here by a noun phrase).

<snip>

I think you're reading a lot into this; I don't believe that forall/2 is
tied to the underlying machine any more than findall/3 is.

I agree that we would like the definition of the program to follow a
natural language specification closely, but that specification might be:
"Print the report title then print the name of each person".

> The lesson here is that our own
> experience of working close to the machine in other languages often
> obscures the possibility of using higher-level languages to bring the
> level of programming discourse much closer to the natural-language
> specification of tasks. If this problem arises even with experienced
> Prolog programmers, that supports my point that the mental environment
> created by lower-level languages is a difficult one for high-level ones,
> *independently* of the suitability of the languages to particular tasks in
>an ideal world.

I agree that anyone who has used imperative languages is going to have
some residual influences from them; but I don't think comparing a bad
Prolog program with a (for all I know) reasonable C program actually
illustrates anything useful. In comparing person_report with the
C version it should be clear where the similarities and differences
are: we lose some declarations, some pointers and NULL - the non-person
:).

I suppose this all illustrates the importance of showing how Prolog can be
exploited practically, as well as "ideally". Your program shows that even
the use of imperative side-effects doesn't have to be kludged and can be
layered so that it doesn't dominate the structure of the program. I wonder
how many introductory courses even touch on this sort of thing...

Fernando Pereira

unread,
May 19, 1996, 3:00:00 AM5/19/96
to

In article <4nnvln$7...@soap.news.pipex.net>, John Fletcher
<lk...@dial.pipex.com> wrote:
> [...]

> I think you're reading a lot into this; I don't believe that forall/2 is
> tied to the underlying machine any more than findall/3 is.
I disagree strongly. Since forall/2 does not produce a result but just
success or failure, you are forced to put a side-effect in its second
argument. The sequence of side effects generated by that call to forall/2
is specified by the underlying machine (backtracking). In contrast,
findall/3 does not (or need not -- cf. its improved cousin setof/3)
require that kind of knowledge of the underlying machine to be used. The
order of the result list is dependent on the machine, of course, but
proper use of findall/3 does not depend on the order of the list.

> I agree that we would like the definition of the program to follow a
> natural language specification closely, but that specification might be:
> "Print the report title then print the name of each person".
That English specification is already contaminated by imperativism. Unless
we are instructing someone to do something step by step because we don't
believe they can correctly fulfill a higher-level request, we don't speak
that way. We describe the result sought, not how to accomplish it.
> [...]

> I suppose this all illustrates the importance of showing how Prolog can be
> exploited practically, as well as "ideally". Your program shows that even
> the use of imperative side-effects doesn't have to be kludged and can be
> layered so that it doesn't dominate the structure of the program. I wonder
> how many introductory courses even touch on this sort of thing...
Not that many, I'm afraid. But the problem is not just with Prolog texts;
one sees example programs with I/O spread all over the leaves of the call
tree in textbooks for all sorts of languages. Not only does this style
undermine modularity --- the meaning of a subprogram is now some awful
combination of argument/result relation and side-effects --- but it is
also impractical in event-driven situations (eg. interactive editors), in
which interaction with the outside is near the root of the call tree, not
near the leaves. As for the standard argument that separating I/O from
computation forces the program to remember more and is thus less
efficient: I prefer to build it right and *then* change it for efficiency
if necessary, after finding the bottlenecks.
general ways of thinking about interleaving the computations of multiple
modules, either by using concurrent languages or by using "deforestation"
techniques. The typical deforestation example is sameleaves/2, which
checks if two trees have identical lists of leaves. The naive definition
is

sameleaves(T1, T2) :-
    leaves(T1, L),
    leaves(T2, L).

Exercise: eliminate the intermediate list L. Similar reasoning can be used
to combine I/O with computation if efficiency demands it.
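
For the record, one possible answer to the exercise, as a sketch; it
assumes leaf(X) and tree(L,R) as the tree constructors (the tree
representation is not fixed above) and walks both trees with explicit
stacks of pending subtrees, so the intermediate leaf list is never built:

sameleaves(T1, T2) :-
    sf([T1], [T2]).

% sf(S1, S2): the trees stacked on S1 and on S2 have the same leaves.
sf([], []).
sf([leaf(X)|S1], S2) :- next(X, S2, S1).
sf([tree(L,R)|S1], S2) :- sf([L,R|S1], S2).

% next(X, S2, S1): the next leaf found in S2 is X, and the rests agree.
next(X, [leaf(X)|S2], S1) :- sf(S1, S2).
next(X, [tree(L,R)|S2], S1) :- next(X, [L,R|S2], S1).

Fusing the producer of the list with its consumer in this way is the
"similar reasoning" that applies to combining I/O with computation.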

Olivier Ridoux

unread,
May 20, 1996, 3:00:00 AM5/20/96
to

In article <4niugj$8...@mulga.cs.mu.OZ.AU>, f...@mundook.cs.mu.OZ.AU (Fergus Henderson) writes:
|> per...@research.att.com (Fernando Pereira) writes:
|>
|> >ta...@clement.info.umoncton.ca (Paul Tarau) wrote:
|> >> [...]
|> >> I think this cleaner style of meta-programming is not intrinsically
|> >> alien to Mercury as _typed_ logic languages like LambdaProlog-MALI are
|> >> using such constructs quite heavily.
|> >Good that you brought up lambda Prolog. Universal quantification in
|> >negative contexts (essentially universal variables) is a better
|> >representation of variables in metaprogramming than either the ground
|> >representations of Mercury and Goedel or the standard Prolog hack. It
|> >avoids the hassles of managing explicitly sets of constants and
|> >substitutions in ground representations, and it avoids the bugs caused by
|> >the standard Prolog hack. It is more expressive than either, and it is
|> >compatible with lambda conversion in just the right way.
|>
...

|> However, the expressiveness advantage for meta-programming of
|> LambdaProlog compare to Mercury is only slight, and the LambdaProlog
|> approach has some very significant disadvantages in other areas -- the
|> most important being the loss of negation and if-then-else.
|>
|> I had a long discussion about these issues with Olivier Ridoux via
|> email a few months ago.
|>
|> The reason I say that the expressiveness advantage of LambdaProlog's
|> meta-programming features is only slight is that the functionality
|> they provide is easy enough to provide in a library module such as
|> the Mercury `varset' module. The main difference then is that in
|> LambdaProlog you avoid the need to explicitly pass around varsets.
|> However, if you need to keep track of other information such as the
|> names of the variables, then you will still need to explicitly
|> pass around a data structure recording this information. So in
|> that situation, there is not really much advantage to the LambdaProlog
|> approach.

No. You still can do it using the universal quantification+implication
device. Suppose the unnamed representation of object terms is as follows:

(append [] X X) is represented as (all X\ (app (app (app append []) X) X)) .

Note that the occurrences of the 'X' in the object term and the 'X' in the
meta-term are only coincidental. The type of 'all' is ((term->term)->term) .
The type of 'app' is (term->term->term). Note that they have nothing to do with
the object-types. In the same vein, a constructor for object-level abstractions
could be 'abs' with type ((term->term)->term).

One can extend this representation to cope with names.

(append [] X X) is represented as (all "X" X\ (app (app (app append []) X) X)) .

Now the connection between the object 'X' and the meta 'X' is made via the
string that is passed to 'all'. The type of the new 'all' is
(string->(term->term)->term).

This is for the representation of object terms. How do we use it? In fact,
a program that has something to do with such a representation will have to
perform an induction on the constructors of the representations. E.g., suppose we
want to display an object term.

type display term -> o .

display (app A B) :-
    print "(" , display A, print " ", display B, print ")" .
display (all Var Term) :-
    pi x\ ( (display x :- print Var) => display (Term x) ) .

No 'varset', no nothing.

Let me explain these two clauses. The first one is a very Prolog-style
analysis of a compound term. The second one is more LambdaPrologish.
Since 'Term' is an abstraction with type (term->term) it is not eligible for
serving as a parameter to display. The only way to transform an abstraction
into a primitive term without losing any information is to apply it to a
universal variable. This corresponds to the intensional reading of a function:

f: x |----> blabla x

is read as "To every 'x', 'f' associates (blabla x)".
^^^^^

Having introduced a universal variable, and pursuing the induction, the program
will eventually come upon the new universal variable, and will have no clause
for handling this case because, by definition, the new universal variable is none
of the constructors of the representation. One has to extend the program
"on-the-fly" to handle the new universal variable.

To sum up, "pi x\" introduces a new universal variable which by definition is
not known statically. The implication "=>" introduces a new clause
"display x :- print Var" which knows about "x". The induction goes on with
"display (Term x)".

A few remarks. (1) I insist on an operational reading because I think it is
more helpful. However, connectives "pi" and "=>" are simply governed by the
right introduction rules of the intuitionistic sequent calculus; nothing less,
nothing more. (2) The program above relies on the left-to-right interpretation
of the "and" connective (','). This can be fixed by adapting the DCG formalism
to LambdaProlog. This is called LambdaHHG (Higher-order Hereditary Harrop
Grammars) and it has been presented at ICLP'93. (3) The program should be
made more sophisticated for displaying object terms without useless brackets.
(4) Note the free variables in (display x :- print Var). 'x' and 'Var' are
quantified outside the implied clause. This is out of reach for assert/1,
though it is only natural using implication.

|> But more important than that is the issue of negation and if-then-else.
|> The reason that Mercury and Goedel can handle negation and if-then-else in
|> a logically sound manner is that their semantics are based on the
|> Clark completion of the program. However, Lambda-Prolog's theoretical
|> foundation is not based on the completion semantics, and in particular
|> Lambda-Prolog's intuitionistic implication construct (an essential part
|> of how it handles meta-programming) would not make any sense in the
|> completion semantics.

There has been some work on intuitionistic negation, but it does not have the
operational appeal of negation-as-failure. I think there is a way
negation-as-failure can be incorporated into LambdaProlog. However, the relation
with implication will not be the classical one. This is not a problem since it
is already the case that negation-as-failure does not behave classically (eg.
fail(fail(P)) is not equivalent to 'P').
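
In standard Prolog notation (writing \+/1 for the negation), this
non-classical behaviour is easy to exhibit with a one-fact sketch:

p(1).

% ?- p(X).         succeeds with X = 1.
% ?- \+ \+ p(X).   succeeds, but leaves X unbound.

Double negation preserves success and failure but discards bindings, so
it is not the identity.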


|> >However, metaprogramming in lambda Prolog still requires explicit encoding
|> >of object-level formulas as metalevel terms, because of the type system.
|> >In the original implementation of lambda Prolog, this could be avoided by
|> >using runtime type polymorphism, but at considerable danger of
|> >nontermination because of underconstrained types in higher-order
|> >unification. More recent implementations use stricter type systems, so
|> >encoding is inevitable.
|>
|> Could you please explain this in a little bit more detail, with say an
|> example or two?

I do not know which work Fernando is alluding to, but let me offer an
explanation.

The type of 'app' above (term->term->term) can be used for
controlling the well-formedness of a meta-term, but it says nothing of the
well-formedness of the object term. In fact, it considers the object term
as untyped.

A type like ((A->B)->A->B) seems more informative but in fact it is
more a problem than a solution. The reason is that the object types can
"invade" the meta-structure, and it is the well-formedness of the latter that is
now questioned. For instance, assume an object-level constant f has type
(i->i->i); (app f X) is well-typed, but (app f X Y) is also well-typed. The
boundary between the meta-terms and the object terms is lost. Note also
that the type for 'abs' would be exactly the same, though it is expected to
have only one argument. In fact, these types are variants of the type of the
identity function. Meta-programming is more than manipulating identity functions!

One can regain the boundary between the meta-terms and the object terms
by giving type ((term (A->B)) -> (term A) -> (term B)) to 'app', and type
(((term A) -> (term B)) -> (term (A->B))) to 'abs'. Now, (app f X) is
well-typed, but (app f X Y) is not. The expected difference between 'app'
and 'abs' is now visible.

This works fine for many applications, but
it breaks when object-level constants are themselves polymorphic. The reason
is that with this scheme the object polymorphism must be equivalent to the
meta-polymorphism (e.g., constants are polymorphic in both worlds, bound
variables are monomorphic in both worlds). If the polymorphisms are different in
the two worlds, one has to explicitly represent the object-level types at
the meta-level. This sometimes amounts to emulating higher-order types, and we are
working on incorporating second-order types in LambdaProlog.

Olivier


Exercise: Forget about the conventional types of 'cons' and 'nil' and
give types for '=..', 'cons', and 'nil' that are compatible with common usage.

A+B =.. [+,A,B] . (cons + (cons A (cons B nil)))
(f X1 X2 ... Xn) =.. [f,X1,X2,...,Xn] .
(cons f (cons X1 (cons X2 ... (cons Xn nil) ... )))

Answer to come tomorrow (whatever 'tomorrow' may signify for you all).


Fergus Henderson

unread,
May 20, 1996, 3:00:00 AM5/20/96
to

rid...@calypso.irisa.fr (Olivier Ridoux) writes:

>f...@mundook.cs.mu.OZ.AU (Fergus Henderson) writes:
>|> The reason I say that the expressiveness advantage of LambdaProlog's
>|> meta-programming features is only slight is that the functionality
>|> they provide is easy enough to provide in a library module such as
>|> the Mercury `varset' module. The main difference then is that in
>|> LambdaProlog you avoid the need to explicitly pass around varsets.
>|> However, if you need to keep track of other information such as the
>|> names of the variables, then you will still need to explicitly
>|> pass around a data structure recording this information. So in
>|> that situation, there is not really much advantage to the LambdaProlog
>|> approach.
>
>No. You still can do it using the universal quantification+implication
>device.

Olivier Ridoux is right here. My comment about still needing to pass
around data structures in LambdaProlog in that situation was wrong.
I must confess that I don't know LambdaProlog as well as I perhaps ought to.

(However, my other points still hold; the expressiveness advantage
of LambdaProlog-style meta-programming is a bit more than I thought it was,
but the alternative of passing around some extra data structures, although
a bit more tedious, is not a major burden in my experience.)

>|> But more important than that is the issue of negation and if-then-else.
>|> The reason that Mercury and Goedel can handle negation and if-then-else in
>|> a logically sound manner is that their semantics are based on the
>|> Clark completion of the program. However, Lambda-Prolog's theoretical
>|> foundation is not based on the completion semantics, and in particular
>|> Lambda-Prolog's intuitionistic implication construct (an essential part
>|> of how it handles meta-programming) would not make any sense in the
>|> completion semantics.
>
> There has been some work on intuitionistic negation, but it does not
> have the operational appeal of negation-as-failure. I think there is a
> way negation-as-failure can be incorporated into LambdaProlog.
> However, the relation with implication will not be the classical one.
> This is not a problem since it is already the case that
> negation-as-failure does not behave classically (eg. fail(fail(P)) is
> not equivalent to 'P').

Negation-as-failure in Mercury and Goedel does behave "classically",
i.e. it obeys the usual laws of predicate calculus. Indeed, both the
Mercury compiler and the Goedel implementation will automatically replace
`not(not(P))' (or in Goedel `~ ~ P') with `P' whenever it occurs.

>|> >However, metaprogramming in lambda Prolog still requires explicit encoding
>|> >of object-level formulas as metalevel terms, because of the type system.
>|> >In the original implementation of lambda Prolog, this could be avoided by
>|> >using runtime type polymorphism, but at considerable danger of
>|> >nontermination because of underconstrained types in higher-order
>|> >unification. More recent implementations use stricter type systems, so
>|> >encoding is inevitable.
>|>
>|> Could you please explain this in a little bit more detail, with say an
>|> example or two?
>
>I do not know which work Fernando is alluding to, but let me offer an
>explanation. [...]

Thanks. I'm still a bit mystified about the "danger of nontermination".

Dennis Merritt

unread,
May 20, 1996, 3:00:00 AM5/20/96
to

per...@research.att.com (Fernando Pereira) wrote:

>> person_report :-
>> write('Person Report'), nl,
>> forall( person(X), (write(X), nl) ).
>>
>> (forall/2 is usually provided as a built-in or library predicate).
>>
>> This still obscures the mapping to the machine, which I don't care about,
>> but makes the intended meaning clear and explicit.
>Actually, neither program makes the meaning that explicit. I would rather write:

>people(People) :-
> findall(Person, person(Person), People).

>people_report(Report) :-
> people(People),
> format_list(People, Report).

>print_people_report :-
> people_report(Report),
> print_report(Report).

>with appropriate definitions of format_list to map a list to a report
>object (whatever that is) and of print_report.

>The reason I go over this seemingly trivial example is that it actually
>illustrates the point I was making originally and Dennis Merritt
>disagreed with.

My intention wasn't actually to disagree, but rather to present some
ideas on why programmers don't, in general, gravitate to higher-level
languages, based on having wrestled with these issues from the vendor
side of the business for a number of years. It has been my experience
that users often get confused and frustrated with products that seem
intuitive and easy-to-use to individuals working for the vendor.

I originally started working with database vendors selling, among
other things, report writers. Report writers, like Prolog, were sold
by hyping the advantages of declarative specification of reports, as
opposed to the old way of writing COBOL programs. They looked great
for the various sample reports, but they could not be expanded to
handle more complex reports.

As report specifications become more complex, with various running
totals and subtotals and formatting requirements and the like, the
declarative syntax begins to get in the way. It is only the vendor's
experienced field support people, who know exactly how the tool works,
who can come up with the magic sequence of statements to make a
complex report appear.

From the user's point of view, it is not at all clear where the line
is at which the declarative report-writer tool starts to become difficult
to use. In other words, the user becomes frustrated as he/she tries
to push the envelope of the tool. At that point, it is often easier to
go back to COBOL than to try to figure out the tool. And when the next
report comes in, well, maybe it's just safer to use COBOL again anyway.

The same is true for spreadsheets. At some point, a sophisticated
user of a spreadsheet is going to have to understand the very
procedural way in which the cells get calculated on the spreadsheet.
Armed with that knowledge, the spreadsheet user becomes a power user.

I worked for a while for Dr. Larry Harris at what was once Artificial
Intelligence Corp and later AICorp (and now part of Trinzic). We sold
Intellect, a natural language query language for mainframe databases.

Again, many examples showed the power and expressiveness of Intellect,
but when the database queries started to become complex, it became
difficult to figure out how to express the queries, because underneath
there was always the same old CPU running one instruction at a time,
and unless you understood how Intellect worked, you couldn't get the
queries to come out like you wanted.

But, Larry Harris could quickly write the most complex queries and get
them right the first time. He didn't see a problem with generating
complex elegant declarative natural language queries. But he knew
exactly how the thing worked, although I don't believe he consciously
used that information in generating his queries.

I propose that the same is true amongst those in the Prolog community.
After a while we begin to see the language declaratively and become
more comfortable using it as Fernando Pereira does (who wins the
person-report contest in which I finished a distant third). But I
suspect his skill level with the language is due in large part to what
is, at this point, second nature for him: a deep understanding of
exactly how Prolog works. (For example, it is not even worth showing
the code to generate a report from a list, but it is exactly those
kinds of recursive tasks that often drive new users nuts.)

My point again, after this rambling, is that I believe experienced
users of any tool are no longer conscious of the depth of knowledge
they have of the tool.

I recently re-read my Adventure in Prolog book (which we've just
republished), and was surprised at how painstakingly slow it goes, and
the time spent on topics which seem obvious. I would not write the
book that way today, but it was written shortly after I learned Prolog
and covers the language from the point of view of an experienced
procedural programmer who found it difficult to get started with the
language.

Dennis


Fernando Pereira

unread,
May 20, 1996, 3:00:00 AM5/20/96
to

In article <4npgg4$d...@news.irisa.fr>, rid...@irisa.fr wrote:
> [ Fernando Pereira said originally]

> |> >However, metaprogramming in lambda Prolog still requires explicit encoding
> |> >of object-level formulas as metalevel terms, because of the type system.
> |> >In the original implementation of lambda Prolog, this could be avoided by
> |> >using runtime type polymorphism, but at considerable danger of
> |> >nontermination because of underconstrained types in higher-order
> |> >unification. More recent implementations use stricter type systems, so
> |> >encoding is inevitable.
> [Fergus Henderson said]

> |> Could you please explain this in a little bit more detail, with say an
> |> example or two?
>
> I do not know to which work Fernando is alluding to, but let me offer an
> explanation.
[Excellent explanation]
That's exactly the difficulty I had in mind, but you put it more clearly
than I could have, thanks.

Olivier Ridoux

unread,
May 21, 1996, 3:00:00 AM5/21/96
to

In article <4nqjpn$3...@mulga.cs.mu.OZ.AU>, f...@mundook.cs.mu.OZ.AU (Fergus Henderson) writes:
|> rid...@calypso.irisa.fr (Olivier Ridoux) writes:
|>
|> >f...@mundook.cs.mu.OZ.AU (Fergus Henderson) writes:
...

|> >|> But more important than that is the issue of negation and if-then-else.
|> >|> The reason that Mercury and Goedel can handle negation and if-then-else in
|> >|> a logically sound manner is that their semantics are based on the
|> >|> Clark completion of the program. However, Lambda-Prolog's theoretical
|> >|> foundation is not based on the completion semantics, and in particular
|> >|> Lambda-Prolog's intuitionistic implication construct (an essential part
|> >|> of how it handles meta-programming) would not make any sense in the
|> >|> completion semantics.
|> >
|> > There has been some work on intuitionistic negation, but it does not
|> > have the operational appeal of negation-as-failure. I think there is a
|> > way negation-as-failure can be incorporated into LambdaProlog.
|> > However, the relation with implication will not be the classical one.
|> > This is not a problem since it is already the case that
|> > negation-as-failure does not behave classically (eg. fail(fail(P)) is
|> > not equivalent to 'P').
|>
|> Negation-as-failure in Mercury and Goedel does behave "classically",
|> i.e. it obeys the usual laws of predicate calculus. Indeed, both the
|> Mercury compiler and the Goedel implementation will automatically replace
|> `not(not(P))' (or in Goedel `~ ~ P') with `P' whenever it occurs.

R :- not(P) .
Q :- not(R) .

Is it also true that P and Q are equivalent in Mercury? If you answer "Yes,
negations are pushed through clauses and simplified", then I will ask whether
P' and Q' are equivalent in Mercury.

T' :- X=X . % or anything that makes T' true
R' :- not(P') , T' .
Q' :- not(R') .

Olivier

Olivier Ridoux

unread,
May 21, 1996, 3:00:00 AM5/21/96
to

In article <4npgg4$d...@news.irisa.fr>, rid...@calypso.irisa.fr (Olivier Ridoux) writes:
...

|> Exercise: Forget about the conventional types of 'cons' and 'nil' and
|> give types for '=..', 'cons', and 'nil' that are compatible with common usage.
|>
|> A+B =.. [+,A,B] . (cons + (cons A (cons B nil)))
|> (f X1 X2 ... Xn) =.. [f,X1,X2,...,Xn] .
|> (cons f (cons X1 (cons X2 ... (cons Xn nil) ... )))
|>
|> Answer to come tomorrow (whatever 'tomorrow' may signify for you all).
|>

Answer (given in commented LambdaProlog syntax):

The conventional typing of nil and cons makes lists homogeneous; all the
elements of a given list have the same type. However, a list that contains
a function and its arguments *cannot* be homogeneous. A solution is to forget
about types and to wrap everything in a non-transparent constructor.

kind f type . % Forget about type .

type ~ _ -> f . % wraps everything and forgets types.

type =.. A -> (list f) -> o . % =.. relates terms and lists of non-typed
% terms.

So doing, we have:

A+B =.. [~ +, ~ A, ~ B] .
etc.

This loses the fact that, though non-homogeneous, the lists in '=..' have some
regularity. We want to capture this regularity.

kind univ type -> type -> type .    % Declares a type constructor of arity 2.
                                    % The idea is that if the elements of a list
                                    % of type (univ A B) are applied successively
                                    % to a term of type A, the result will have
                                    % type B.

type nil (univ A A) .               % Indeed, if nothing is applied to a term of
                                    % type A, the result has type A.

type cons A -> (univ S R) -> (univ (A->S) R) .
                                    % If a term of type A is prepended to a list
                                    % (univ S R), then when all elements are
                                    % applied in sequence to a term of type
                                    % (A->S), the result has type R.

type =.. A -> (univ (H->H) A) -> o .
                                    % =.. relates any term of type A to a sequence
                                    % of terms such that if it is applied
                                    % successively to a term of type (H->H) (an
                                    % identity function) the result has type A.

Note 1. This does not give the semantics of '=..' . For instance,

12 =.. [13] or A+B =.. [A+B]

are well-typed but false according to the usual semantics.

Note 2. The H in the type of '=..' is in fact the type of the head of the term.
This suggests a variant of '=..' in which the head is isolated.

type =... A -> H -> (univ H A) -> o .

Example:

=... 12 12 [] .
=... (A+B) + [A,B] .

In fact, many classical predicates were designed in a pre-typed era.
Lots of them could benefit from some re-engineering in the light of typing.
This remark also applies to programming style. As Richard O'Keefe remarked
recently, textbooks are full of non-typable predicates (e.g., the classical
flatten/2).

Olivier

Fergus Henderson

unread,
May 21, 1996, 3:00:00 AM5/21/96
to

rid...@calypso.irisa.fr (Olivier Ridoux) writes:

>f...@mundook.cs.mu.OZ.AU (Fergus Henderson) writes:


>|> rid...@calypso.irisa.fr (Olivier Ridoux) writes:
>|> > negation-as-failure does not behave classically (eg. fail(fail(P)) is
>|> > not equivalent to 'P').
>|>

>|> Negation-as-failure in Mercury and Goedel does behave "classically",
>|> i.e. it obeys the usual laws of predicate calculus. Indeed, both the
>|> Mercury compiler and the Goedel implementation will automatically replace
>|> `not(not(P))' (or in Goedel `~ ~ P') with `P' whenever it occurs.
>
>R :- not(P) .
>Q :- not(R) .
>
>Is that also true that P and Q are equivalent in Mercury?

Yes, presuming they are both legal, the declarative and operational
semantics of Q are exactly the same as those of P.

The compiler is not guaranteed to generate exactly identical code --
the operational semantics doesn't nail down every little detail of
code generation, of course -- although in this particular case I
think it probably would, if you enable optimization.

[I'm assuming that P and Q are meta-variables here, i.e. they are
supposed to stand for some predicates `p' and `q', possibly with some
argument variables. Taken literally, what you have written is not
legal syntax, since the head of a clause can't be a variable.]

Fernando Pereira

unread,
May 21, 1996, 3:00:00 AM5/21/96
to

In article <4nqqm4$k...@orion.cybercom.net>, den...@amzi.com wrote:
> [...]

> >The reason I go over this seemingly trivial example is that it actually
> >illustrates the point I was was making originally and Dennis Merritt
> >disagreed with.
>
> My intention wasn't actually to disagree, but rather to present some
> ideas on why programmers don't, in general, gravitate to higher-level
> languages, based on having wrestled with these issues from the vendor
> side of the business for a number of years. It has been my experience
> that users often get confused and frustrated with products that seem
> intuitive and easy-to-use to individuals working for the vendor.
> [... interesting discussion of the difficulties programmers have
> with declarative tools such as 4GLs and NL DB interfaces]
I particularly liked the NL DB interface example, since I worked in the
area (not commercially) and also noticed the mismatch between the original
claims ("NL makes it easier") and reality. But with hindsight this should
not have been surprising. Crafting a complex specification in English, or
a complex program in a declarative language, is a difficult, skilled task
very different from day-to-day programming. It has often been observed
that good English majors make better specification writers than good
engineering majors. Only a minority with the inclination, training and
time can bridge the gap comfortably between specifications and
execution, whether for programming in a declarative language or for
designing an imperative program from a declarative specification. This is
precisely the ecological factor that I was discussing. Traditionally,
programming selected for people capable of mentally orchestrating complex
sequences of events and their interactions, not for specification crafters
(those typically went to law school ;-)). Thus, languages that favor the
latter will find poor soil in the former, who are the majority of the
programming profession. The two ways of thinking come together in
mathematics, although the marriage has never been perfect, viz the fights
between "dynamic" and "static" views of infinity and limits that go back
to the conflicting Newtonian and Leibnizian views of the calculus if not
to the classical world.
Thus some mathematically-inclined people find it relatively easy to jump
between declarative and imperative views of a problem. However, it is
presumptuous to suppose that this is what "real world" programming is like.

In addition to the examples we have been discussing, I encourage anyone to
look at the Microsoft Active VRML language for another instance. It is in
my opinion a very elegant functional language based on ideas from Lucid,
Concurrent ML and others to specify combinations of complex behaviors
(space-time extents in virtual space) without unnecessary explicit
mentions of time or synchronization. However, I suspect that it will have
a hard time being adopted compared with more conventional VR scripting
languages because the relationship between an algebraic specification of
a dynamical system and the synchronizations and numerical integrations
that implement it as a concrete behavior on a machine could be far too
remote for the practitioner. This is not a criticism of practitioners, by
the way, but rather a reflection on what skills and inclinations have been
selected for in the development of the programming profession (hence the
fights between "systems engineers" and "coders," too).

Olivier Ridoux

unread,
May 22, 1996, 3:00:00 AM5/22/96
to

In article <4ntai9$f...@mulga.cs.mu.OZ.AU>, f...@mundook.cs.mu.OZ.AU (Fergus Henderson) writes:
|> rid...@calypso.irisa.fr (Olivier Ridoux) writes:
|>
|> >f...@mundook.cs.mu.OZ.AU (Fergus Henderson) writes:
|> >|> rid...@calypso.irisa.fr (Olivier Ridoux) writes:
|> >|> > negation-as-failure does not behave classically (eg. fail(fail(P)) is
|> >|> > not equivalent to 'P').
|> >|>
|> >|> Negation-as-failure in Mercury and Goedel does behave "classically",
|> >|> i.e. it obeys the usual laws of predicate calculus. Indeed, both the
|> >|> Mercury compiler and the Goedel implementation will automatically replace
|> >|> `not(not(P))' (or in Goedel `~ ~ P') with `P' whenever it occurs.
|> >
|> >R :- not(P) .
|> >Q :- not(R) .
|> >
|> >Is that also true that P and Q are equivalent in Mercury?
|>
|> Yes, presuming they are both legal, the declarative and operational
|> semantics of Q are exactly same as those of P.
|>
|> The compiler is not guaranteed to generate exactly identical code --
|> the operational semantics doesn't nail down every little detail of
|> code generation, of course -- although in this particular case I
|> think it probably would, if you enable optimization.
|>
|> [I'm assuming that P and Q are meta-variables here, i.e. they are
|> supposed to stand for some predicates `p' and `q', possibly with some
|> arguments variables. Taken literally, what you have written is not
|> legal syntax, since the head of a clause can't be a variable.]

You are assuming correctly.

Back to the subject. From what you say I conclude negated goals do compute
a solution substitution. Is that still negation-as-failure? It seems closer
to constructive negation or similar devices.

Olivier

Fergus Henderson

unread,
May 22, 1996, 3:00:00 AM5/22/96
to

rid...@calypso.irisa.fr (Olivier Ridoux) writes:
>f...@mundook.cs.mu.OZ.AU (Fergus Henderson) writes:
>|> rid...@calypso.irisa.fr (Olivier Ridoux) writes:
>|>
>|> >f...@mundook.cs.mu.OZ.AU (Fergus Henderson) writes:
>|> >|> Negation-as-failure in Mercury and Goedel does behave "classically",
>|> >|> i.e. it obeys the usual laws of predicate calculus.
[...]

>|> >R :- not(P) .
>|> >Q :- not(R) .
>|> >
>|> >Is that also true that P and Q are equivalent in Mercury?
>|>
>|> Yes, presuming they are both legal, the declarative and operational
>|> semantics of Q are exactly same as those of P.
[...]

>From what you say I conclude negated goals do compute a solution substitution.

I'm sorry, I didn't really explain things. "Presuming they are both
legal" is an important phrase. In Mercury, negated goals are not
allowed to bind any non-local variables. So if P above is a goal which
binds some non-local variables (i.e. some of the arguments to R) then
the code above is not legal Mercury.

If you write a negation which would bind a non-local variable, the
compiler will first try to reorder the code so that it can schedule a
producer for that variable before the negation. If it can't do that,
it will report a mode error.

Lee Naish

unread,
May 23, 1996, 3:00:00 AM5/23/96
to

In article <4nrsa1$i...@news.irisa.fr>, rid...@calypso.irisa.fr (Olivier Ridoux) writes:


> In article <4nqjpn$3...@mulga.cs.mu.OZ.AU>, f...@mundook.cs.mu.OZ.AU (Fergus Henderson) writes:

> |> > negation-as-failure does not behave classically (eg. fail(fail(P)) is
> |> > not equivalent to 'P').
> |>

> |> Negation-as-failure in Mercury and Goedel does behave "classically",

Using a rather strict meaning of "equivalent", as I think Olivier
intends, Mercury and Goedel do not behave classically. In fact P is not
even equivalent to P. Because the clause selection rule is unspecified,
P may succeed today and loop tomorrow; ie. different operational
behaviour with the same declarative semantics.

Is this a problem? I think not. Such programs should be considered
buggy, just like programs with errors which lead to wrong answers or
type errors or syntax errors. What I believe we should be aiming for in
logic programming languages is a nice declarative semantics, a nice
procedural semantics and a nice relationship between them *for nice
programs* (niceness is partly a matter of personal taste).

In the conventional view of Horn Clause programs (pure Prolog without
negation) nice programs are simply those which are syntactically correct.
In Goedel, programs must also be well typed and in Mercury they must be
well moded to be considered nice. Introducing negation increases the
richness of the language - the semantics is nicer - but there are also
some nasty consequences. Much research in logic programming is involved
in coming up with new models which move this nastiness around, trying to
reduce it. The most obvious place for the nastiness to sit is with the
relationship between the declarative and procedural semantics, hence this
thread about NAF not behaving classically. A second possibility is to use
a different declarative semantics, one which is closer to the procedural
semantics (this makes the declarative semantics nastier). A third
possibility is to adapt the procedural semantics, eg introduce fairness
(which has nasty efficiency consequences). The final possibility, which I
have advocated, is to expand the class of programs which are considered
nasty, eg those which don't terminate. See
http://www.cs.mu.oz.au/~lee/papers/prune for some discussion.

lee (currently on holiday so I may not respond further)

Fergus Henderson

unread,
May 23, 1996, 3:00:00 AM5/23/96
to

I agree with almost all of the points Lee Naish makes in his article.
I do have a couple of comments, however.

l...@cs.mu.OZ.AU (Lee Naish) writes:

>rid...@calypso.irisa.fr (Olivier Ridoux) writes:
>> f...@mundook.cs.mu.OZ.AU (Fergus Henderson) writes:
>
>> |> > negation-as-failure does not behave classically (eg. fail(fail(P)) is
>> |> > not equivalent to 'P').
>> |>
>> |> Negation-as-failure in Mercury and Goedel does behave "classically",
>
> Using a rather strict meaning of "equivalent", as I think Olivier
> intends, Mercury and Goedel do not behave classically. In fact P is
> not even equivalent to P. Because the clause selection rule is
> unspecified, P may succeed today and loop tomorrow; ie. different
> operational behaviour with the same declarative semantics.

What Lee says above is not entirely true of Mercury, as I shall explain.

I think it is useful to distinguish between operational semantics and
behaviour: the operational semantics may specify a _set_ of allowed
behaviours. An implementation correctly implements a particular
operational semantics iff the behaviour of programs run with that
implementation is one of the behaviours allowed by the operational
semantics. An operational semantics for which the set of behaviours
allowed is always a singleton set could be called deterministic, but
most existing programming languages have non-deterministic operational
semantics.

In Mercury, `P' and `not(not(P))' always have the same operational
semantics, presuming that both are legal. (As noted previously,
in some circumstances `P' may be legal but `not(not(P))' may be
a mode error.)

Does that mean that they have the same behaviour?
Not necessarily. The same behaviour would be guaranteed
only if you have a deterministic operational semantics.

We recently revised the definition of the operational semantics of
Mercury, partly to deal with these conflicting aims.
The situation in Mercury is a little bit complicated, because in order
to satisfy the conflicting goals that Lee refers to in his article,
we've defined _two_ different operational semantics: a deterministic
one called the "strict sequential" semantics, and a non-deterministic
one called "commutative" semantics. Implementations of Mercury are
required to support both. Since any implementation of the "strict
sequential" semantics is also an implementation of the "commutative"
semantics, this does not impose any additional difficulty on
implementations. Users of Mercury may use whichever they prefer; the
University of Melbourne Mercury implementation allows users to select
between them using command line options.

So in Mercury, `P' and `not(not(P))' always have the same operational
semantics, whether you are using the "strict sequential" semantics or
the "commutative" semantics, and since the "strict sequential"
semantics is a deterministic semantics, this means that `P' and
`not(not(P))' have exactly the same behaviour if you are using that
operational semantics. In the "strict sequential" semantics, the
clause selection rule *is* specified.

Interested readers should consult the "Semantics" chapter
of the latest Mercury language reference manual, available from
<http://www.cs.mu.oz.au/mercury/doc/reference_manual.html#SEC33>.

Paul Singleton

unread,
May 24, 1996, 3:00:00 AM5/24/96
to

Fergus Henderson (f...@mundook.cs.mu.OZ.AU) wrote:

: % min(X, Y, Min): Min is the minimum of X and Y.
: min(X, Y, X) :- X < Y, !.
: min(X, Y, Y).
:
: In Mercury, you can write this as
:
: min(X, Y, Min) :- if X < Y then Min = X else Min = Y.

May I inject a bit of lateral thinking here?

In imperative languages, if/then/else is redundant, being merely
a 'case' construct specialised for the two-valued "Boolean" type.

E.g. we might write (in no particular syntax :-)

case X < Y in
TRUE: return X ;
FALSE: return Y ;
esac

In Prolog, we unthinkingly represent this two-valued type by
success and failure, but maybe we should represent it more
explicitly. e.g.

min( X, Y, Min) :-
    '<'( X, Y, B),
    min_1( B, X, Y, Min).

min_1( true, X, _, X).
min_1( false, _, Y, Y).
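
Here '<'/3 is not a standard built-in; a plausible definition of the
reified comparison assumed above would be:

'<'(X, Y, true)  :- X < Y.
'<'(X, Y, false) :- X >= Y.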

----
__ __ Paul Singleton PhD CEng MBCS email: p.sin...@keele.ac.uk
|__) (__ Dept. of Computer Science tel: +44 (0)1782 583477
| . __). Keele University, Newcastle, fax: +44 (0)1782 713082
Staffs ST5 5BG, U.K. road: UK M6 J15 follow signs

Sebastian Millies

unread,
May 24, 1996, 3:00:00 AM5/24/96
to

In article <4o43u9$3...@gerry.cc.keele.ac.uk>, pa...@cs.keele.ac.uk (Paul
Singleton) wrote:

> In imperative languages, if/then/else is redundant, being merely
> a 'case' construct specialised for the two-valued "Boolean" type.
>
> E.g. we might write (in no particular syntax :-)
>
> case X < Y in
> TRUE: return X ;
> FALSE: return Y ;
> esac
>
> In Prolog, we unthinkingly represent this two-valued type by
> success and failure, but maybe we should represent it more
> explicitly. e.g.
>
> min( X, Y, Min) :-
>     '<'( X, Y, B),
>     min_1( B, X, Y, Min).
>
> min_1( true, X, _, X).
> min_1( false, _, Y, Y).

Actually, many predicates in the Sicstus library are
written similarly, using compare/3, which provides
a "Comparison" type (instead of "Boolean") for the
table of cases. E.g., I'd write:

min(X, Y, Min) :-
    compare(Order, X, Y),
    min_table(Order, X, Y, Min).

min_table(<, X, Y, X).
min_table(=, X, Y, Y).
min_table(>, X, Y, Y).

--
Sebastian Millies
Universitaet des Saarlandes phone: +49 (681) 302 4497
FR 8.7 / Computerlinguistik fax: +49 (681) 302 4351
D-66041 Saarbruecken e-mail: mil...@coli.uni-sb.de

Fergus Henderson

unread,
May 25, 1996, 3:00:00 AM5/25/96
to

mil...@coli.uni-sb.de (Sebastian Millies) writes:

Yes, that works in Mercury too.

But in Mercury you can implement Paul Singleton's suggestion more
directly by defining a function `to_bool' which converts a predicate to
a boolean value:

:- func to_bool(pred) = bool.
:- mode to_bool((pred) is semidet) = out is det.

to_bool(Pred) = (if call(Pred) then yes else no).

You can then write

min( X, Y, Min) :-
    min_1(to_bool(X < Y), X, Y, Min).

min_1(yes, X, _, X).
min_1(no, _, Y, Y).

The Mercury compiler can generate just as efficient code for this version
as for the version using if-then-else, so there's no efficiency penalty
for using this style either.

(It doesn't do quite as well with the version using compare/3. It
specializes the call to `compare' to instead call `builtin_compare_int',
but it doesn't manage to inline it. Then again, I doubt if many
Prolog compilers do any better.)

Mercury 0.5 didn't have functions, so if you're using that, you would
have to make `to_bool' a predicate instead of a function.

:- pred to_bool(pred, bool).
:- mode to_bool((pred) is semidet, out) is det.

to_bool(Pred, Result) :-
    (if call(Pred) then Result = yes else Result = no).

min( X, Y, Min) :-
    to_bool(X < Y, B),
    min_1(B, X, Y, Min).

Mercury 0.5 also won't do as good a job of optimizing it as the latest
development version.
