
Arrays in Prolog


Albrecht Schmiedel

unread,
Aug 27, 1990, 11:52:43 AM8/27/90
to
We are implementing a knowledge representation tool involving large
inheritance networks, and until now we have been using Prolog, mainly
because this is the language we are most familiar with, and because 'rapid
prototyping' seems to be easy (compared to C, or even Lisp).
Also, we would like to continue using Prolog (for exactly
those reasons), but we are increasingly confronted with
performance problems.

Essentially, this has to do with the inability of Prolog to store,
retrieve, and replace data indexed by integers. This is actually
all we need to search and to modify large concept graphs; but
Prolog forces us (as far as I can see) to do asserts, retracts
and database calls where simple assignments and lookups in
arrays would suffice. Our program spends large portions of
the total run time doing assertions, retractions and calls in those
parts of the Prolog database that could easily be replaced by an
array storing Prolog terms.

My questions are the following:
1) Are there Prologs that address this kind of need?
(I.e. Prologs that make assertions, retractions, and calls
involving predicates that fulfill certain conditions
- for example, a guaranteed instantiated integer as its first
argument, one clause per integer, no backtracking -
similarly fast as array operations.)
2) Are there Prologs which are easily extendable by predicates
written in C that could do the job?
(It should be possible to pass *terms* back and forth;
having to decompose terms and lists before processing them
by a foreign language predicate and then having to put them
together again later on is definitely not what we want.)
3) Shouldn't we be using Prolog?
4) Is there a 'Prolog-way' of handling large graphs efficiently?
(Graphs where data referencing other nodes is attached to nodes,
and search as well as updates are required.)

-------
Albrecht Schmiedel +49 30/314-25494
Technische Universitaet Berlin sch...@db0tui11.bitnet
Sekr. FR 5-12 sch...@tubvm.cs.tu-berlin.de
Franklinstr. 28/29
1000 Berlin 10
West Germany

R. Kym Horsell

unread,
Aug 27, 1990, 5:20:07 PM8/27/90
to
In article <90239.175...@DB0TUI11.BITNET> SCH...@tubvm.cs.tu-berlin.de (Albrecht Schmiedel) writes:
\\\

>My questions are the following:
>1) Are there Prologs that adress this kind of need?
> (I.e. Prologs that make assertions, retractions, and calls
> involving predicates that fulfill certain conditions
> - for example, a guaranteed instantiated integer as its first
> argument, one clause per integer, no backtracking -
> similarly fast as array operations.)
>2) Are there Prologs which are easily extendable by predicates
> written in C that could do the job?
> (It should be possible to pass *terms* back and forth;
> having to decompose terms and lists before processing them
> by a foreign language predicate and then having to put them
> together again later on is definitely not what we want.)
>3) Shouldn't we be using Prolog?
>4) Is there a 'Prolog-way' of handling large graphs efficiently?
> (Graphs where data referencing other nodes is attached to nodes,
> and search as well as updates are required.)

So it's come to this -- destructive assignment in Prolog!

There are several methods for implementing arrays in Prolog,
and the one I have implemented is a variant of the setarg/3
predicate that also appears elsewhere.

setarg(Index,Term,Value)

replaces argument ``index'' in ``term'' with ``value'' but
allows backtracking so that ``term'' may be restored to its
former state -- no destructive assignment happens unless you
setarg(X,Y,Z), !.
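
For concreteness, here is a minimal sketch of the behaviour just described
(it assumes a backtrackable setarg/3; demo/0 is only an illustrative name):

    % The in-place update is visible while execution goes forward, and is
    % undone when execution backtracks past the setarg/3 call (no cut here).
    demo :-
        T = f(a, b, c),
        (   setarg(2, T, x),
            write(changed(T)), nl,    % prints changed(f(a,x,c))
            fail                      % force backtracking past setarg/3
        ;   write(restored(T)), nl    % prints restored(f(a,b,c))
        ).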

-Kym Horsell

Tom Fawcett

unread,
Aug 27, 1990, 7:33:04 PM8/27/90
to

I too have needed to use arrays in Prolog, and have been generally
dissatisfied with what I've found. I'm using Quintus Prolog, which has two
separate library files that implement arrays in different ways. Neither has
constant-time access and update, and the one that claims to be better requires
that you retract, modify, and re-assert the array for every modification.

When asking about this deficiency, the reaction I've gotten is that you
shouldn't need to use arrays - there is no logical interpretation of
destructive modifications to a data structure. I suppose this is generally
true; for most applications there is a more natural and elegant way to
accomplish what you want without resorting to arrays. Nevertheless, for some
applications Prolog's indexing mechanism simply isn't fast enough, and you
need the speed of arrays.

The solution I've settled on is to use a library file provided by Quintus
called vectors.pl, which allows you to create and access single dimension
arrays of arbitrary length. I haven't looked closely at the code, but it uses
Unix's malloc and free routines to do storage management inside Prolog's heap.
The package was designed to be used to pass data between C and Prolog, and not
be a general array management package, but it seems to do the job. It should
be fairly easy to write one yourself.

Unfortunately, besides having to write the code yourself, you'll end up with a
system split across two languages. You have to make sure that the split is
fairly clean, since you don't want to spend a lot of time passing data between
C and Prolog.


--


-Tom Fawcett
GTE Labs, and UMass/Amherst

Roland Karlsson

unread,
Aug 28, 1990, 2:53:53 AM8/28/90
to

> So it's come to this -- destructive assignment in Prolog!

> There are several methods for implementing arrays in Prolog
> and the one I have implemented is a variant of the setarg/3
> precicate that also appears elsewhere.

> setarg(Index,Term,Value)

> replaces argument ``index'' in ``term'' with ``value'' but
> allows backtracking so that ``term'' may be restored to its
> former state -- no destructuve assignment happens unless you
> setarg(X,Y,Z), !.

Setarg/3 is implemented in SICStus Prolog. But 'setarg,!' will NOT
lead to a destructive assignment. The correct value is reinstalled
at backtracking. I would consider it a bug if it did so.

( There is a SECRET and DANGEROUS system setarg/4 that can do destructive
assignment. Non-experts on the SICStus implementation should NOT use it.
Please do NOT tell anybody (;-) )

Roland Karlsson

PS. I agree that Prolog would benefit from having some unclean data types,
such as arrays, global variables etc., all with destructive assignment.
They should be typed or (as in ESP Prolog) defined as some type at
creation. (X=1 will give you a normal term X and Y:=1 will give you an
unclean variable Y that can be changed to another value of compatible
type, but any attempt to unify it with anything but itself will fail. That
is, 'A:=1,B:=1,A=B' will fail but 'A:=1,B:=1,A==B' and 'A:=1,A=A'
will succeed.)
--
Roland Karlsson
SICS, PO Box 1263, S-164 28 KISTA, SWEDEN Internet: rol...@sics.se
Tel: +46 8 752 15 40 Ttx: 812 61 54 SICS S Fax: +46 8 751 72 30

Richard A. O'Keefe

unread,
Aug 28, 1990, 1:14:22 AM8/28/90
to
In article <96...@bunny.GTE.COM>, to...@GTE.COM (Tom Fawcett) writes:

> I too have needed to use arrays in Prolog, and have been generally
> dissatisfied with what I've found. I'm using Quintus Prolog, which has two
> separate library files that implement arrays in different ways. Neither have
> constant-time access and update, and the one that claims to be better requires
> that you retract, modify, and re-assert the array for every modification.

I don't understand the reference. The Quintus library contains, apart
from library(vectors), two packages: library(arrays) and library(logarr).
Here's how an old copy of library(arrays) starts:

% Package: arrays
% Author : Richard A. O'Keefe
% Updated: 10/13/89
% Defines: updatable arrays in Prolog.

% Adapted from shared code written by the same author; all changes
% Copyright (C) 1987, Quintus Computer Systems, Inc. All rights reserved.

/* The operations in this file are fully described in an Edinburgh
Department of Artificial Intelligence Working Paper (WP-150) by
me, entitled "Updatable Arrays in Prolog", dated 1983.

The basic idea is that these operations give you arrays whose
elements can be fetched AND CHANGED in constant expected time
and space. This is an example of logic hacking, rather than
logic programming. For a logical solution (with O(lgN) cost)
see library(logarr) by David Warren and Fernando Pereira.

Here's how library(logarr) starts:

% Package: logarr
% Authors: Mostly David H.D. Warren, some changes by Fernando C. N. Pereira
% Updated: 11/10/87
% Purpose: Extendable arrays with logarithmic access time.

NEITHER of these packages does any asserting or retracting, nor does
either of the packages require or recommend that *you* do any asserting
or retracting. And library(arrays) specifically DOES claim to provide
constant-time access and update. (That's actually *average* time; the
worst case for any operation is O(N), but the average is O(1).)



> When asking about this deficiency, the reaction I've gotten is that you
> shouldn't need to use arrays - there is no logical interpretation of
> destructive modifications to a data structure. I suppose this is generally
> true; for most applications there is a more natural and elegant way to
> accomplish what you want without resorting to arrays. Nevertheless, for some
> applications Prolog's indexing mechanism simply isn't fast enough, and you
> need the speed of arrays.

Please, be specific. What precisely is it that you are trying to do?
Prolog's indexing is simply irrelevant to applications where you would
have used arrays; arrays are dynamic data structures which you would
pass around; they have no more to do with the data base than lists have.
Pretty well anything you might have done with Lisp-style arrays can be
*expressed* the same way using library(arrays) or library(logarr); the
remaining question is speed. Trees (such as 3+4 trees) are surprisingly
fast, and make multi-version structures easy to implement.
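
For concreteness, a small sketch of what such code looks like (assuming the
classic library(logarr) interface of new_array/1, aref/3 and aset/4;
logarr_demo/0 is just an illustrative name):

    % Each aset/4 call yields a new array version; earlier versions stay valid.
    logarr_demo :-
        new_array(A0),
        aset(3, A0, foo, A1),          % A1 = A0 with foo at index 3
        aset(7, A1, bar, A2),          % A2 = A1 with bar at index 7
        aref(3, A2, X),                % X = foo
        write(x = X), nl,
        (   aref(3, A0, _)             % A0 is unchanged: index 3 was never set in it
        ->  write(unexpected), nl
        ;   write('old version unaffected'), nl
        ).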

> The solution I've settled on is to use a library file provided by Quintus
> called vectors.pl, which allows you to create and access single dimension
> arrays of arbitrary length. I haven't looked closely at the code, but it uses
> Unix's malloc and free routines to do storage management inside Prolog's heap.
> The package was designed to be used to pass data between C and Prolog, and not
> be a general array management package, but it seems to do the job. It should
> be fairly easy to write one yourself.

Thanks for the kind words. I wrote library(vectors) so that I could pass
arrays to Fortran, not C. It isn't as fast as it could be; but of course
the answer to that is that *if* enough customers identified that as being
of importance to them, a prudent company might build the facility in, but
if no-one appears to be using the scheme that is provided, a prudent
company would leave well enough alone.

--
The taxonomy of Pleistocene equids is in a state of confusion.

James Mcguire

unread,
Aug 28, 1990, 1:57:04 PM8/28/90
to

Thought I'd put in my two cents. I have been generally pleased with the
library(logarr) facility in Quintus (mentioned by Richard O'Keefe).
I have used it many times to implement graph algorithms (strongly connected
components, depth-first spanning forests, etc) on large graphs
(> 1000 nodes with approx 5000 edges). If you're willing to modify the
code, you can make array lookup logarithmic to the base of your own choosing.
Furthermore the arrays are sparse (don't have to pre-allocate a pre-defined
number of array storage positions).

I have also used it to implement what is essentially a non-persistent
property list in Prolog (i.e. can be reclaimed on backtracking).
The first step is to compute a set of persistent integer indices for each
symbol. Given a list of symbols [s1,s2,..], compute an integer key for each
symbol and assert the created index to the Prolog database. When the symbol
names are pretty stable (hang around a long time) but the attributes describing
the symbol change frequently during computation, I've always found it
cost-effective to pre-compute persistent integer indices and to store
the attributes of a given symbol in a non-persistent (stored in a variable)
property-list data structure, as follows:

%integer_key(?Symbol, ?Array_Index_Key)
:- dynamic integer_key/2.

integer_key(symbol1, 12).

%% Update the attribute +Slot of symbol +Symbol to have value +NewValue,
%% where +PropertyListsDB1 is the existing array database containing the
%% property list of each symbol. ?PropertyListsDB2 is the new array database
%% reflecting the change to +Symbol's property list.
add_to_property_list(Symbol, Slot, NewValue, PropertyListsDB1, PropertyListsDB2) :-
    integer_key(Symbol, Key),               % retrieve Symbol's array index
    aref(Key, PropertyListsDB1, List),      % retrieve the list stored at the
                                            % Key'th index of PropertyListsDB1
    modify_property_list(List, Slot, NewValue, NewList),
                                            % update Symbol's property list
    aset(Key, PropertyListsDB1, NewList, PropertyListsDB2).
                                            % create a new property-lists array
                                            % database reflecting the change to
                                            % Symbol's property list
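
A hypothetical usage sketch of the above (it assumes new_array/1 from
library(logarr) to create the initial array database and some concrete
definition of modify_property_list/4; demo_property_lists/1 is just an
illustrative name):

    % Thread the property-lists array through successive updates; only the
    % newest version needs to be kept.
    demo_property_lists(Props) :-
        new_array(DB0),
        aset(12, DB0, [], DB1),        % give symbol1 (key 12) an empty property list
        add_to_property_list(symbol1, colour, red, DB1, DB2),
        aref(12, DB2, Props).          % Props is symbol1's updated property list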

I would still like to see Prolog incorporate a richer set of built-in
non-persistent (not stored in database) variable-based data-types, such as
hash-tables and arrays. This could be done in a very clean way where the
modification to an existing array/hash-table would be reflected in a newly
introduced variable. This is what is done in Quintus' library(logarr) facility.

----
Jim McGuire
mcg...@laic.lockheed.com
Lockheed AI Center, Menlo Park, Ca, USA.

Kave Eshghi

unread,
Aug 28, 1990, 8:53:18 AM8/28/90
to
First of all: it is nonsense to say that one should not use arrays
because it involves destructive assignment. In fact, the reason one
needs to use arrays is precisely that one needs the efficiency of destructive
assignment for very large data structures. (Well it is one of the reasons
anyway.) I find the dogmatism of the Prolog purists who oppose arrays and
then (surreptitiously) condone using asserts and retracts bordering on
religious fanaticism.

On the practical side, for those who use the Poplog system, there is a very
neat solution to this problem. Simply use Pop11 arrays to store your Prolog
terms. The interface between Pop11 and Prolog is almost transparent for
this type of thing, and you will not have any of the usual headaches related
to multiple-language programming.

For those who do not use Poplog, well....

Kave Eshghi

R. Kym Horsell

unread,
Aug 28, 1990, 3:24:52 PM8/28/90
to
In article <1990Aug28.0...@sics.se> rol...@sics.se (Roland Karlsson) writes:
\\\

>> allows backtracking so that ``term'' may be restored to its
>> former state -- no destructuve assignment happens unless you
>> setarg(X,Y,Z), !.
>
>Setarg/3 is implemented in SICStus Prolog. But 'setarg,!' will NOT
>lead to a destructive assignment. The correct value is reinstalled
>att backtracking. I would consider it a bug if it did so.

I *realize* setarg/3 is implemented in Sicstus and elsewhere -- I
wasn't trying to claim anything particularly original. My
comment about ``destructive'' assignment was meant to imply that
the assignment was not really permanent -- it was *normally* undone on
backtracking -- unless a ! was subsequently specified. (I try
to keep away from terms like ``executed'' and ``performed'' in
preparation for the day we *really* have logic programming -- or
maybe this is a perfect reason for doing the opposite?)

I was, however, under the impression that setarg/3 in Sicstus
(and certainly in my current compiler) edited the actual heap
structure that represented the PROLOG term. Why do you consider
it a bug if ``setarg(X,Y,Z),!'' DOES NOT remove the backtrack
point that restores the structure ``Y''?

-Kym Horsell

Andrew Taylor

unread,
Aug 28, 1990, 8:55:48 PM8/28/90
to
In article <39...@bingvaxu.cc.binghamton.edu> vu0...@bingvaxu.cc.binghamton.edu (R. Kym Horsell) writes:

>Why do you consider
>it a bug if ``setarg(X,Y,Z),!'' DOES NOT remove the backtrack
>point that restores the structure ``Y''?

Here is a hint, consider this predicate:

a(X) :- write(first - X), b(X).
a(X) :- write(second - X), c(X).

You'd expect that if backtracking occurred the second value of X written out
would be the same as the first value of X written out. This is not so
with your version of setarg. This is disastrous for either compiler
analysis or human understanding of programs.

Even when always undone on backtracking setarg is still a disaster so
maybe all you have done is turn a disaster into a catastrophe.

R. Kym Horsell

unread,
Aug 29, 1990, 12:40:42 AM8/29/90
to
In article <11...@cluster.cs.su.oz> and...@cluster.cs.su.oz (Andrew Taylor) writes:
>Here is a hint, consider this predicate:
>
> a(X) :- write(first - X), b(X).
> a(X) :- write(second - X), c(X).
>
>You'd expect that if backtracking occurred the second value of X written out
>would be the same as the first value of X written out. This is not so
>with your version of setarg. This is disasterous for either compiler
>analysis or human understanding of programs.
>
>Even when always undone on backtracking setarg is still a disaster so
>maybe all you have done is turn a disaster into a catastrophe.

1) I *agree* *any* form of destructive assignment is a bad idea
from the point of view of declarative (read as ``any'') reading
of a program. I am not trying to advocate it.

2) By using a cut after setarg you *lose* the backtrack
point so the modification becomes ``permanent'', which
was the whole point in my previous message.

3) If, in your example, b/1 uses setarg/3 to modify (part of whatever) X
then, unless it issues a cut, the value of X used by the second clause
of a/1 *will* be the same as the first, since backtracking had to occur
(and, as you acknowledge, backtracking through setarg/3 restores the original
argument).

On the other hand, if a cut is executed somewhere subsequent to
setarg/3 (dynamically inside b/1 somewhere) then, presumably,
the idea *was* that the change to X become ``permanent''.

4) see (1).

-Kym Horsell

Fernando Pereira

unread,
Aug 28, 1990, 10:20:31 AM8/28/90
to
In article <36...@goanna.cs.rmit.oz.au> o...@goanna.cs.rmit.oz.au (Richard A. O'Keefe) writes:
>Pretty well anything you might have done with Lisp-style arrays can be
>*expressed* the same way using library(arrays) or library(logarr); the
>remaining question is speed. Trees (such as 3+4 trees) are surprisingly
>fast, and make multi-version structures easy to implement.

Unfortunately, not fast enough. I might be able to put up with the lg(N) part,
but my practical experience is that the constant factors in library(logarr)
and some other tree representations increase the running time of certain
graph algorithms between one and two orders of magnitude over similar
algorithms implemented with straight assignment. Algorithms such as
strongly connected components for directed graphs and DFA minimization
make essential use of assignment, and are impractical for large graphs/DFAs
(~10^5 nodes) if one uses trees in place of arrays. I wish I had better
news, but some kind of data structure with fast updates (which does not
necessarily push us outside logic, cf. the multiversion arrays of Eriksson
and Rayner) is needed for graph or automata algorithms with realistic data.

Fernando Pereira
2D-447, AT&T Bell Laboratories
600 Mountain Ave, Murray Hill, NJ 07974
per...@research.att.com

Roland Karlsson

unread,
Aug 29, 1990, 5:53:08 AM8/29/90
to

OK Kym. I am lost. I do not understand what you mean by ! making the
destructive change made by setarg permanent. This is how it works
in SICStus though.

When doing a setarg you will first save code on the heap that will restore
the old value at backtracking. Then you put a pointer on the trail that
refers to this code. Then you do the change. Neither the heap nor
the trail is affected by !. So at backtracking (untrailing) you
will trap at the pointer to the heap. The code on the heap will be
executed and thereby the old value restored. So a ! will NOT make
the change permanent.

R. Kym Horsell

unread,
Aug 29, 1990, 11:14:56 AM8/29/90
to
In article <1990Aug29.0...@sics.se> rol...@sics.se (Roland Karlsson) writes:
\\\

>the trail will be effected at !. So at backtracking (untrailing) you
>will trap at the pointer to the heap. The code on the heap will be
>executed and thereby the old value restored. So a ! will NOT make
>the change permanent.

Oh. From your previous post I presumed we had implemented the same
thing, but apparently not quite.

The version of setarg/3 I have uses the *stack* to remember the
fact that a given argument (i.e. part of the term structure
stored on the heap) has been changed -- in this respect it's just
like other backtrack points.

A cut therefore will forget the fact that the change was done
and therefore it *won't* be restored on backtracking.

We have the possibility of doing things like communicating
between alternatives in a disjunction, or implementing arrays
(does SICStus allow large arities for structures? -- it's
a bit hard to support everywhere, e.g. in "copyterm", which is part of
the assert/clause mechanism). This comes in handy for implementing
bagof/3, etc.

I realize this is all a bit nasty, and use of setarg/3 is open to abuse,
but at the moment the system I have is only for my use (and if you saw the
state of the documentation it would be obvious why).

-Kym Horsell

Micha Meier

unread,
Aug 29, 1990, 7:02:22 AM8/29/90
to
In article <96...@bunny.GTE.COM> to...@GTE.COM (Tom Fawcett) writes:
>
>I too have needed to use arrays in Prolog, and have been generally
>dissatisfied with what I've found.
...

>When asking about this deficiency, the reaction I've gotten is that you
>shouldn't need to use arrays - there is no logical interpretation of
>destructive modifications to a data structure.

In every Prolog system it is possible to use the arg/3
predicate to get constant-time access to a structure
argument, so that for example instead of having

    char_in_sequence4(1, c).
    char_in_sequence4(2, u).
    char_in_sequence4(3, c).
    ...

one can write

    char_in_sequence4(I, Element) :-
        arg(I, chars(c, u, c, ...), Element).

however in many Prolog systems the arity of functors is limited, and
some of the structure-copying compilers will try to construct the
whole chars/N structure on every char_in_sequence4/2 call.

The ECRC SEPIA 3.0 system provides both these logical arrays
(implemented efficiently when the array structure is ground),
and real arrays. The latter are mainly a means to pass
compound terms between Prolog and C, so that e.g. C structures
can be mapped onto Prolog arrays and handled in Prolog; however, the arrays
can also be used purely within Prolog.

For example, the sequence

:- make_array(rotate(3, 3), integer),
   setval(rotate(0, 0), 1),
   setval(rotate(1, 2), -1),
   setval(rotate(2, 1), 1).

creates a matrix for a rotation about the x-axis; the predicate
getval/2 can be used to retrieve elements of the array.
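
A hypothetical query against the array created above (assuming getval/2
mirrors setval/2 as just described):

    ?- getval(rotate(1, 2), X).
    X = -1
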
The SEPIA system is available for a nominal fee to academic sites
and it has a number of other very useful features.

-- Micha Meier

USA micha%ecr...@pyramid.com MAIL Micha Meier
ECRC, Arabellastr. 17
EUROPE mi...@ecrc.de 8000 Munich 81
West Germany


jgar...@kean.ucs.mun.ca

unread,
Aug 30, 1990, 6:20:22 AM8/30/90
to
Concerning the recent discussion on Prolog and arrays, F. Pereira
kindly supplied me with the reference to Eriksson and Rayner to which
he alluded in his posting. We don't have it. Is it
available in electronic form from the authors (if you're out there)?
Or in reprint form? If the latter, and you're reading this, please e-mail me
and I'll provide my postal address.

@Inproceedings(Eriksson+Rayner:arrays,
  AUTHOR       = {Lars-Erik Eriksson and Mammy Rayner},
  TITLE        = {Incorporating Mutable Arrays into Logic Programming},
  BOOKTITLE    = {Proceedings of the Second International Logic Programming
                  Conference},
  YEAR         = {1984},
  PAGES        = {101--114},
  ORGANIZATION = {Uppsala University},
  EDITOR       = {Sten-\AAke T\"{a}rnlund},
  ADDRESS      = {Uppsala, Sweden},
  MONTH        = {July}
)

Thanks in advance.

John Garland

jgarland@mun Bitnet
jgar...@kean.ucs.mun.ca Internet

Peter Schachte

unread,
Aug 29, 1990, 4:37:55 PM8/29/90
to
In article <160...@otter.hpl.hp.com> k...@otter.hpl.hp.com (Kave Eshghi) writes:
>First of all: it is nonesense to say that one should not use arrays
>becauses it involves destructive assignment.

Hold on a minute. What do arrays have to do with destructive assignment?
It is perfectly possible to have arrays without having destructive
assignment, and vice versa.
--
-Peter Schachte
p...@quintus.uucp
...!sun!quintus!pds

Roland Karlsson

unread,
Aug 30, 1990, 2:53:19 AM8/30/90
to

> We have the possibility of doing things like communicating
> between alternatives in a disjunction, or implementing arrays
> (does Sicstus allow large arities for structs -- it's
> a bit hard to do (e.g. in "copyterm" -- part of the assert/clause
> mechanism) everywhere)? This comes in handy for implementing
> bagof/3, etc.

The only way the user can communicate between different Or-branches is
via the database (or a file or a UNIX* socket). Bagof etc. is
implemented with an optimized version of database predicates. A
system programmer can use a destructive setarg that never restores its
value at backtracking.

I am very very sad (%-<). SICStus only allows up to 256 arguments for a structure.

Tom Fawcett

unread,
Aug 30, 1990, 12:21:27 PM8/30/90
to

Let me say a few things about my use of arrays in Prolog, and specifically why
I want them.

I'm using them to represent large amounts of state information that have to be
accessed and updated quickly. In particular, I'm representing chessboards,
so each state has sixty-four components. I may need to store several thousand
such boards at any one time.

The "obvious" way of representing these is to use a predicate like
"boardn(square,contents)", eg "board73(a3,blank)", where a3 is a square
designator. Because Quintus Prolog indexes on the functor and first argument,
I can access and update a given square of a given board in constant time.
However, these take up more space than a 64-element vector, and it's still too
slow (every modification must involve a retract and an assert, which is much
slower than a destructive array assignment). Also, I often want to make one
board be a slight modification of another, which involves copying the contents
of one board to another, and then making the change. This is also relatively
slow.

As I mentioned before, I tried using an array package, but library(logarr) and
library(arrays) both do what I call constructive, rather than destructive,
modification. That is, to alter an array element you call a predicate that
passes back a new array with the desired element changed. If you're storing
these arrays in the database, as I must, every alteration requires a retract
and an assert. Since these database accesses seem to be more costly than the
array updates themselves, this doesn't help.

I suppose that in other applications it is not necessary to keep these arrays
in the database, and so constructive modification is fine. But in my
application, and that mentioned in the original arrays-in-prolog message
(involving storing frame information), keeping these arrays in the database is
desirable/necessary.

Real (destructive) arrays, as provided in library(vectors) under Quintus
Prolog, seem to be about 5-10 times faster than any of the other methods I
tried that had to update the database with every access.
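
For comparison, here is a sketch (not from any library; all names are
illustrative) of a board kept as one 64-argument term, read with arg/3 and
updated constructively by building a copy with a single argument changed.
Access is O(1), but each update still costs O(64), which is exactly the gap
that destructive arrays close:

    % empty_board(-Board): a board/64 term with every square blank.
    empty_board(B) :-
        functor(B, board, 64),
        fill_blank(64, B).

    fill_blank(0, _) :- !.
    fill_blank(I, B) :-
        arg(I, B, blank),              % the I-th argument is still unbound; bind it
        I1 is I - 1,
        fill_blank(I1, B).

    % set_square(+Index, +Board0, +Contents, -Board): Board is a copy of Board0
    % with argument Index replaced by Contents; Board0 itself is untouched.
    set_square(Index, Board0, Contents, Board) :-
        functor(Board0, Name, Arity),
        functor(Board, Name, Arity),
        copy_args(Arity, Index, Board0, Contents, Board).

    copy_args(0, _, _, _, _) :- !.
    copy_args(I, Index, B0, C, B) :-
        (   I =:= Index -> arg(I, B, C)
        ;   arg(I, B0, A), arg(I, B, A)
        ),
        I1 is I - 1,
        copy_args(I1, Index, B0, C, B).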

David John Hawley

unread,
Aug 29, 1990, 8:40:59 PM8/29/90
to
In article <1990Aug29.0...@sics.se> rol...@sics.se (Roland Karlsson) writes:
>
>OK Kym. I am lost. I do not understand what you mean by ! making the
>destructive change made by setarg permanent. This is how it works
>in SICStus though.

I'm lost too. There seems to be some confusion between deleting choice-
points, and throwing away trail information. I thought that (*) trail
entries can only be thrown away when the trailed item is not accessible
at or above ANY choice point. As far as trailing is concerned, a cut
just "concatenates" segments of the trail corresponding to the choice-points
that are deleted (+ garbage collection for inaccessible items as noted above).
In the example


a(X) :- write(first - X), b(X).
a(X) :- write(second - X), c(X).

a ! inside b/1 will not remove the choice-point from a/1, and so
any trail entries for X (from setarg/* or $$anything else for that matter$$)
will remain after the cut.

Footnote: (*) Better (correct?) explanations from gurus are welcome.

---------------------------
David Hawley, ICOT, 4th Lab
csnet: hawley%ico...@relay.cs.net uucp:{enea,inria,mit-eddie,ukc}!icot!hawley
ICOT, 1-4-28 Mita, Minato-ku, Tokyo 108 JAPAN. TEL/FAX {81-3-456-}2514/1618

Lars-Henrik Eriksson

unread,
Aug 30, 1990, 3:34:56 AM8/30/90
to
In article <14...@quintus.UUCP> p...@quintus.UUCP (Peter Schachte) writes:
> writes:
> >First of all: it is nonesense to say that one should not use arrays
> >becauses it involves destructive assignment.
>
> Hold on a minute. What do arrays have to do with destructive assignment?
> It is perfectly possible to have arrays without having destructive
> assignment, and vice versa.

Mats Carlsson and Ken Kahn's LM-Prolog system had an array implementation
where you assigned array elements by creating a (conceptual) copy of the
array with one element changed. The array primitives thus did not perform any
(visible) side effects. (Of course the arrays were not actually copied.)

Provided that once an array element had been set, the old version of the
array was not used anymore, this scheme guaranteed constant-time access and
update of the arrays (with virtually no overhead). If older versions of an
array were used, the access and update time increased linearly with the number
of changes made since that version.

I haven't seen this or any array scheme with similar properties implemented
in any other Prolog than LM-Prolog.

An article about this was presented at the 2nd international Logic
Programming conference in Uppsala 1984.

Lars-Henrik Eriksson Internet: l...@sics.se
Swedish Institute of Computer Science Phone (intn'l): +46 8 752 15 09
Box 1263 Telefon (nat'l): 08 - 752 15 09
S-164 28 KISTA, SWEDEN

R. Kym Horsell

unread,
Aug 30, 1990, 4:34:36 PM8/30/90
to
In article <33...@skye.ed.ac.uk> ric...@aiai.UUCP (Richard Tobin) writes:
>I have never used setarg, but it seems you are confusing two things:
>the ability to change the value of an instantiated variable, and the
>ability to preserve the value of a variable when backtracking.

In my original post my point was that setarg/3 as implemented in
the compiler I'm working on works as follows (a bit hard to
write this in Prolog, of course):

setarg(Index, Term, Arg) :-
    (   arg(Index, Term, OldArg),
        /* save ``OldArg'' somewhere */
        /* destructively change argument number ``Index'' of ``Term''
           to ``Arg'' via some dirty internal mechanism */
    ;   /* destructively change argument number ``Index'' of ``Term''
           back to ``OldArg'' via some dirty internal mechanism */
    ).

Thus I am perfectly correct to say a cut after *my* setarg/3
will lose a backtrack point (note the ;/2) and the dirty
internal stuff that changes the relevant arg *back* to its
original value is lost.

If we therefore use *my* setarg/3 information *can* be
transferred from one disjunctive branch to another by
means other than assert/retract.

I have no intention of *defending* the implementation
of such a dirty mechanism, which I realize destroys both
the *forward* and even *backward* declarative reading of
Prolog programs in which it is included.

I hope this post saves some bandwidth in the long run.

-Kym Horsell

Richard Tobin

unread,
Aug 30, 1990, 1:20:57 PM8/30/90
to
In article <39...@bingvaxu.cc.binghamton.edu> vu0...@bingvaxu.cc.binghamton.edu (R. Kym Horsell) writes:
>2) By using a cut after setarg you *lose* the backtrack
>point so the modification becomes ``permanent'', which
>was the whole point in my previous message.

I have never used setarg, but it seems you are confusing two things:
the ability to change the value of an instantiated variable, and the
ability to preserve the value of a variable when backtracking.

Cut doesn't normally affect the resetting of variables on
backtracking. If X is uninstantiated and I do X=1, ! then X will
become uninstantiated when I backtrack past the unification, regardless
of the cut. This doesn't involve a choice point, just the trail.

Similarly, if a predicate is added which can alter an already-instantiated
variable, why should cut affect whether it is reset?

If you really want assignments (or just plain instantiations) that are
not reset by backtracking, don't overload cut for it, use something else.
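
A small sketch of that point (p/1 and demo/0 are just illustrative names):
the cut inside p/1 commits to its clause, but the binding of X is still
undone when execution backtracks into the second branch of the disjunction:

    p(X) :- X = 1, !.

    demo :-
        (   p(X),
            write(inside(X)), nl,     % prints inside(1)
            fail                      % backtrack past the call to p/1
        ;   write(outside(X)), nl     % X is unbound again here
        ).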

-- Richard
--
Richard Tobin, JANET: R.T...@uk.ac.ed
AI Applications Institute, ARPA: R.Tobin%uk.a...@nsfnet-relay.ac.uk
Edinburgh University. UUCP: ...!ukc!ed.ac.uk!R.Tobin

Jose Alberto Fernandez R

unread,
Aug 31, 1990, 2:47:31 PM8/31/90
to
In article <96...@bunny.GTE.COM> to...@GTE.COM (Tom Fawcett) writes:

The "obvious" way of representing these is to use a predicate like
"boardn(square,contents)", eg "board73(a3,blank)", where a3 is a
square
designator. Because Quintus Prolog indexes on the functor and
first argument,
I can access and update a given square of a given board in constant
time.
However, these take up more space than a 64-element vector, and
it's still too
slow (every modification must involve a retract and an assert,
which is much
slower than a destructive array assignment). Also, I often want to
make one
board be a slight modification of another, which involves copying
the contents
of one board to another, and then making the change. This is also
relatively
slow.

Your problem is that you are trying to capture "time" in your program
without representing it explicitly.

1) Let us look at your predicate "boardn(square, contents)": with
"board73(a3,blank)" you are trying to express that at some point in time
board73 has a blank in position a3. Actually, you probably want to
express information about only one board per game. So why don't you
say what you really want: "board(square, time, contents)" -- now you know
that at "time" some "content" was in position "square".

2) You don't need to represent "blanks". If the position X is not true
in the database for a given point in time, then assume it is blank.

3) You can compute the value of a square, with the following rule:

value(Square, Time, Contents) :-
    ( (board(Square, Time1, Contents1), Time1 < Time) =>
        \+ moved(Square, Time1)
    ; Contents = blank ).

where "moved(Square, Time)" indicates that whatever it was in position
Square at Time, has been moved. Therefore now there is a blank.

Note that for this code to work you should do asserta instead of assert.
This means that you never retract information, only assert
information, unless you want to do some garbage collection every n
moves, which shouldn't be difficult to implement.

4) With this approach you are recording much less information than the
entire board every time.

Jose Alberto.
--
:/ \ Jose Alberto Fernandez R | INTERNET: alb...@cs.umd.edu
:| o o | Dept. of Computer Sc. | BITNET: alb...@cs.umd.edu
:| ^ | University of Maryland | UUCP: {...}!mimsy!alberto
:\ \_/ / College Park, MD 20742 |

Fernando Pereira

unread,
Sep 1, 1990, 1:52:32 PM9/1/90
to
In article <130...@kean.ucs.mun.ca> jgar...@kean.ucs.mun.ca writes:
>Concerning the recent discussion on Prolog and arrays, F. Pereira
>kindly supplied me with the reference to Eriksson and Rayner to which
>he alluded in his posting.
A couple of typos seem to have found their way into my bib database, for which
I apologize.

> AUTHOR = {Lars-Erik Eriksson and Mammy Rayner},
It should be: Manny

> EDITOR = {Sten-\AAke T\"{a}rnlund},
It should be: \Ake

Fernando Pereira
per...@research.att.com

Lars-Henrik Eriksson

unread,
Sep 3, 1990, 4:14:20 AM9/3/90
to
In article <11...@alice.UUCP> per...@alice.UUCP (Fernando Pereira) writes:
> I apologize.
> > AUTHOR = {Lars-Erik Eriksson and Mammy Rayner},
> It should be: Manny

Well, actually also: Henrik

Graham Matthews

unread,
Sep 3, 1990, 5:18:47 AM9/3/90
to

I am using SBPROLOG on a SparcStation.
What I want to do is set up a predicate to catch incoming signals.

The SB manual mentions that this can be done, but doesn't say how.

Has someone out there already solved this problem? If so, please help.

ta graham

Micha Meier

unread,
Sep 3, 1990, 8:03:51 AM9/3/90
to
>In article <1990Aug29.0...@sics.se> rol...@sics.se (Roland Karlsson) writes:
>\\\
>>the trail will be effected at !. So at backtracking (untrailing) you
>>will trap at the pointer to the heap. The code on the heap will be
>>executed and thereby the old value restored. So a ! will NOT make
>>the change permanent.
>
...

>The version of setarg/3 I have uses the *stack* to remember the
>fact that a given argument (i.e. part of the term structure
>stored on the heap) has been changed -- in this respect it's just
>like other backtrack points.
>
>A cut therefore will forget the fact that the change was done
>and therefore it *won't* be restored on backtracking.

This is interesting. Is no-one out there using the trail to store
the old value directly, together with its address? This is the
so-called value trailing, as opposed to the usual address trailing
where only the address is trailed and, on untrailing, its contents
are reset to a free variable. Value trailing is necessary to implement
efficiently coroutining systems (yes, I know that it is possible
without, but not fast enough). It was already present in MU-Prolog.
Once this mechanism is available, it can also be used to implement
backtrackable destructive assignment, if someone has to have it.
It seems to be much simpler and less space-consuming than
the two approaches mentioned above. Why trail a pointer
to code that restores the value when you can trail the value itself?
A cut of course does not prevent the restoring of the previous value,
just as in Karlsson's approach.

--Micha Meier

Tom Fawcett

unread,
Sep 4, 1990, 3:55:36 PM9/4/90
to
alb...@cs.umd.edu (Jose Alberto Fernandez R) writes:
>
> [Long message suggesting a way to represent a board sequence
> without using arrays]
>

Thanks for the suggestion. There are a few problems with it (the code in #3
is wrong, but I get the idea), but the biggest problem is that this doesn't
really address what I'm trying to do. I'm not simply trying to represent a
sequence of boards, each of which is a minor modification of another. I'm
doing a search through successor boards, trying to find the best next move
(like a minimax search, but not really). I may end up having to search
several ply. In this case, your scheme doesn't work because the boards aren't
in a linear sequence. I really do need something like independent board IDs
to represent alternate moves at each level, and it seems like the most
efficient way to implement these is with arrays.

(Incidentally, I'm told that assert is fairly expensive, so even if I can avoid
retracting facts that are no longer true, it doesn't buy me that much.)

>:/ \ Jose Alberto Fernandez R | INTERNET: alb...@cs.umd.edu
>:| o o | Dept. of Computer Sc. | BITNET: alb...@cs.umd.edu
>:| ^ | University of Maryland | UUCP: {...}!mimsy!alberto
>:\ \_/ / College Park, MD 20742 |

Fernando Pereira

unread,
Sep 4, 1990, 10:32:47 PM9/4/90
to
In article <96...@bunny.GTE.COM> to...@GTE.COM (Tom Fawcett) writes:
>alb...@cs.umd.edu (Jose Alberto Fernandez R) writes:
>> [Long message suggesting a way to represent a board sequence
>> without using arrays]
> I'm
>doing a search through successor boards, trying to find the best next move
>(like a minimax search, but not really). I may end up having to search
>several ply. In this case, your scheme doesn't work because the boards aren't
>in a linear sequence.
A multiversion data structure, such as the flexible arrays mentioned earlier
by Richard O'Keefe, will almost certainly be much faster for this application,
as well as much cleaner logically, than database modification. Ideally, of
course, Prolog systems should provide efficient implementations of
multiversion aggregate data structures (arrays, hash tables)
with O(1) access time and update to the elements of some versions (eg.
the most recent one). Such schemes have been described in the
literature, and discussions like this one indicate that they are badly
needed in Prolog (and in pure functional languages as well!).

Multiversion graph data structures are also very important in applications
such as CAD and heuristic compiler optimizations, in which complex descriptions
must be tentatively changed to examine alternative courses of action.
For DAGs with fixed in-degree, there are very nice multiversion
representations due to Tarjan et al (can't find the exact reference
right now).

Graham Matthews

unread,
Sep 6, 1990, 6:44:40 AM9/6/90
to

I am trying to get an email address for the following people

David Scott Warren
Suzanne Dietrich
Saumya K. Debray

They are (were) all at SUNY Stony Brook and were all involved in the
SBProlog project. Anybody out there know their current addresses?

ta graham

Mats Carlsson

unread,
Sep 11, 1990, 11:02:45 AM9/11/90
to
In article <10...@ecrc.de> mi...@ecrc.de (Micha Meier) writes:
> Value trailing is necessary to implement
> efficiently coroutining systems (yes, I know that it is possible
> without, but not fast enough). It was already present in MU-Prolog.

This is an interesting claim. Could you provide some arguments for
why it is that coroutining systems become so much slower without value
trailing? Value trailing seems to have a rather high price: the trail
becomes twice as large, and there is extra work for each (normal)
variable to reset.
--
Mats Carlsson
SICS, PO Box 1263, S-164 28 KISTA, Sweden Internet: ma...@sics.se
Tel: +46 8 7521543 Ttx: 812 61 54 SICS S Fax: +46 8 7517230

Peter G Ludemann

unread,
Sep 11, 1990, 8:39:27 PM9/11/90
to
IBM-Prolog has arrays and "item-lists" (like arrays, except
that the indexes are arbitrary atoms). They are O(1) for
access and update. Updates are undone on backtracking.
(There are also "global terms" which provide associative
tables whose changes may be undone by backtracking or not,
at the programmer's control).

I have used item-lists to implement a constraint logic programming
version of n-queens: the algorithm is essentially linear on "n"
whereas even the best of the permute-and-test algorithms is exponential
(for n=20, the constraint version took 40 milliseconds and I gave
up waiting for the conventional program after a few minutes).
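
For reference, a permute-and-test baseline of the kind being compared against
(a sketch only, assuming numlist/3 and permutation/2 from the system's list
library; the IBM-Prolog constraint version is not reproduced here):

    queens(N, Qs) :-
        numlist(1, N, Ns),
        permutation(Ns, Qs),          % candidate placement: column I holds the queen in row Qs[I]
        safe(Qs).

    safe([]).
    safe([Q|Qs]) :- no_attack(Q, Qs, 1), safe(Qs).

    % no_attack(Q, Qs, D): the queen in row Q attacks no queen in Qs
    % diagonally, the first queen in Qs being D columns away.
    no_attack(_, [], _).
    no_attack(Q, [Q1|Qs], D) :-
        Q1 =\= Q + D,
        Q1 =\= Q - D,
        D1 is D + 1,
        no_attack(Q, Qs, D1).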

- Peter Ludemann preferred e-mail: lude...@mlpvm1.iinus1.ibm.com

Kish Shen

unread,
Sep 12, 1990, 1:05:53 PM9/12/90
to
Mats Carlsson said:

>trailing? Value trailing seems to have a rather high price: the trail
>becomes twice as large, and there is extra work for each (normal)
>variable to reset.

Perhaps I am showing my ignorance, but can you not have a "special trail" to
store those entries that need value trailing? This is what I am doing in
my own work on implementing dependent and-parallelism in Prolog -- I have a
special trail which grows in the opposite direction to the normal trail,
towards the normal trail. This means that you only pay the cost of value
trailing for those variables that need to use it. Am I missing something?

Kish Shen
k...@cl.cam.ac.uk
Computer Lab.,
University of Cambridge
U.K.

R. Kym Horsell

unread,
Sep 13, 1990, 12:26:36 AM9/13/90
to
In article <1990Sep12.1...@cl.cam.ac.uk> k...@cl.cam.ac.uk (Kish Shen) writes:
\\\

>Perhaps I am showing my ignorance, but can you not have a "special trail" to
>store the those trailing that need value trailing? This is what I am doing for
>my own work with implementing dependent and-parallelism in Prolog -- I have a
>special trail which grows in the opposite direction to the normal
>trail, towards
>the normal trail. This means that you only pay for the cost of value
>trailing for
>those variables that need to use it. Am I missing something?
\\\

No you're not.

Although the trail _can_ grow to twice the size using naive value
trailing, the cases where values _are_ trailed (and I don't have any
figures to hand on how _often_ this happens) can be ``special
cased''. I prefer using a single bit in the trail entries (you come
across them during backtracking in sequence anyway, right?). Probably
not too portable though.

-Kym Horsell

Mark Johnson

unread,
Sep 13, 1990, 7:25:23 AM9/13/90
to

Here's a question I'd like to throw out to the prolog
community. The specific issue I'd like to raise is
that it is difficult to find an efficient term representation
of labelled cyclic graphs (for access and update). Representing
the nodes of such a graph as a label and a list of edges, where an
edge is a pair consisting of a label and a node, results in
a ``cyclic structure'', with all the logical and implementational
problems discussed in this newsgroup a while ago. And an
obvious solution, to store the nodes in an array and change
the representation of an edge to be a pair consisting of
a label and an index into this array, suffers from the lack
of an efficient array implementation.
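
As a concrete version of the index-based representation just described, here
is a sketch (it borrows the library(logarr) interface mentioned earlier in
the thread plus member/2, and does nothing to solve the efficiency problem):

    % A two-node cycle: node 1 --a--> node 2, node 2 --b--> node 1.
    % Each array slot holds node(Label, Edges); an edge is EdgeLabel-TargetIndex.
    cyclic_graph(G) :-
        new_array(G0),
        aset(1, G0, node(n1, [a-2]), G1),
        aset(2, G1, node(n2, [b-1]), G).

    % neighbour(+Graph, +FromIndex, ?EdgeLabel, ?ToIndex)
    neighbour(G, From, EdgeLabel, To) :-
        aref(From, G, node(_, Edges)),
        member(EdgeLabel-To, Edges).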

So how can cyclic graphs be efficiently implemented in Prolog?

Suggestions welcome,

Mark Johnson.

Kish Shen

unread,
Sep 13, 1990, 12:27:01 PM9/13/90
to
In article <1990Sep13....@sics.se>, ma...@sics.se (Mats
Carlsson) writes:

|> A comment to Kish's note: Presumably your scheme requires an extra
|> "special trail pointer" register and a corresponding slot in every
|> choicepoint. This means extra work when creating choicepoints and at
|> backtracking. At backtracking you would also need to check whether
|> there are special trail items to untrail.

Yes, however the cost seems small to me compared to doubling the
trail size (which increases the cost of detrailing automatically,
as you need to detrail two cells instead of one whatever you do),
or checking for a special bit, which has to be done for every
trailed value. Note that there might be more than one "special" type
of trailing: you may want to trail delayed goals and destructive
assignments, and (in my case) dependent variables. So you may need
a tag instead of a bit, and the checking is more expensive, so it
seems to make sense to make the "normal" trailing as fast as possible,
without the need to check any tags.

Of course if you are trailing an old value that is destructively
assigned to, you need to detrail your special trail before the
normal trail.

Kish Shen (k...@cl.cam.ac.uk)
Computer Lab.
University of Cambridge
Cambridge, UK

Mats Carlsson

unread,
Sep 13, 1990, 3:17:15 AM9/13/90
to
In article <39...@bingvaxu.cc.binghamton.edu> k...@bingvaxu.cc.binghamton.edu (R. Kym Horsell) writes:
In article <1990Sep12.1...@cl.cam.ac.uk> k...@cl.cam.ac.uk (Kish Shen) writes:
\\\
>Perhaps I am showing my ignorance, but can you not have a "special trail" to
>store the those trailing that need value trailing? This is what I am doing for
>my own work with implementing dependent and-parallelism in Prolog -- I have a
>special trail which grows in the opposite direction to the normal
>trail, towards
>the normal trail. This means that you only pay for the cost of value
>trailing for
>those variables that need to use it. Am I missing something?

A comment to Kish's note: Presumably your scheme requires an extra
"special trail pointer" register and a corresponding slot in every
choicepoint. This means extra work when creating choicepoints and at
backtracking. At backtracking you would also need to check whether
there are special trail items to untrail.

> Although the trail _can_ grow to twice the size using naive value
> trailing, the cases where values _are_ trailed (and I don't have any
> figures to hand on how _often_ this happens) can be ``special
> cased''. I prefer using a single bit in the trail entries (you come
> across them during backtracking in sequence anyway, right?). Probably
> not too portable though.

In Kym's scheme you would have to watch out for the magic bit in every
trail entry. This means extra work even if there are no value trail
entries at all.

Andrew Taylor

unread,
Sep 13, 1990, 9:16:55 PM9/13/90
to
In article <10...@ecrc.de> mi...@ecrc.de (Micha Meier) writes:
>This is interesting. Is no-one out there using the trail to store
>directly the old value together with its address? This is the
>so-called value-trailing, as opposed to the usual address trailing

I'm experimenting with a scheme where bound variables never need dereferencing.
This is done by keeping sharing variables in a circular list and, when
binding a variable, binding each variable in the list. This needs value-trailing.

The hope is that this scheme works better than the usual reference-chain
scheme when global analysis is involved. It should produce faster analysis
and better results where analysis becomes imprecise, where a module is
analysed alone, and where the only information comes from mode declarations.

I'm not worried about doubling the trail size. Global analysis removes
most trailing anyway. I think the cost of the extra trail reads and writes is
probably less than the benefits.

Andrew

R. Kym Horsell

unread,
Sep 13, 1990, 10:51:56 AM9/13/90
to
In article <1990Sep13....@sics.se> ma...@sics.se (Mats Carlsson) writes:
\\\

>In Kym's scheme you would have to watch out for the magic bit in every
>trail entry. This means extra work even if there are no value trail
>entries at all.
\\\

Yes, extra work is involved on every trail access. However, how much
backtracking occurs compared to, say, dereferencing? Taking into account
that the usual techniques for reducing trailed information still apply,
the overhead seems fairly reasonable.

As I said, this extra bit _can_ be anywhere that's convenient.
Apropos of machine alignment checks, it _could_ be the low-order bit
of what is otherwise an address word. Personally, I don't like this, but
this idea _has_ been used in various places with apparent success.

-Kym Horsell

Joachim Schimpf

unread,
Sep 17, 1990, 12:29:38 PM9/17/90
to
In article <1990Sep11.1...@sics.se> ma...@sics.se (Mats Carlsson) writes:
>In article <10...@ecrc.de> mi...@ecrc.de (Micha Meier) writes:
> Value trailing is necessary to implement
> efficiently coroutining systems (yes, I know that it is possible
> without, but not fast enough). It was already present in MU-Prolog.
>
>This is an interesting claim. Could you provide some arguments for
>why it is that coroutining systems become so much slower without value
>trailing?

The main advantage - compared to a scheme as presented in Mats
Carlsson's ICLP 87 paper - is a more efficient handling of updates
to the list of delayed goals associated to every suspending variable.

In a typical delay-tests-and-generate application it is quite common
to have hundreds of goals delayed on only a few variables, so these
lists can become rather long.

Using destructive and value-trailed updates, all operations can be
performed in O(1) time. Without, the only choice seems to be using
open-ended lists, which means all operations are O(length).

The reason why these lists can be destructively changed is that they
are an internal data structure, and we are sure we don't need multiple
versions at the same time.

>Value trailing seems to have a rather high price: the trail
>becomes twice as large, and there is extra work for each (normal)
>variable to reset.

It is not necessary to use value trailing for normal variables.
Having two trails, one for normal trails and one for value trails, is probably
too expensive (you also might get into trouble, since you couldn't
untrail in exactly the reverse order in which trailing had been done).
In a paper of Bowen et al (ICLP 86) there was a suggestion to
interleave the two trails by linking all the value trail entries,
assuming they occur infrequently.
In the SEPIA system we use a form of tagged trail entries, which
has turned out to be reasonably efficient.

A point to mention, however, is that garbage collection becomes
somewhat more difficult in the presence of value trailing.


--
Joachim Schimpf
ECRC, Arabellastrasse 17, D-8000 Munich 81, West-Germany
joa...@ecrc.de (from USA: joachim%ecr...@pyramid.com)

Peter G Ludemann

unread,
Sep 19, 1990, 1:11:26 AM9/19/90
to
The recent postings on arrays in Prolog have given me the idea
of taking a survey of advanced Logic Programming features.
I'm thinking of the concepts which make Prolog-III, NU-Prolog,
Trilogy, CLP(R) and other such implementations special.
If you can help, please fill in the survey below and
send it to me; when I have collected everything, I will post a
summary (if there's enough interesting information, I will write a
Technical Report).

For some questions, I have given a choice:
Logical: does not allow updating ground terms.
Quasi-logical: allows destructive updating of a ground term; any
changes are undone on backtracking.
Non-logical: allows destructive updating of a ground term but
changes are *not* undone on backtracking.

For example, suppose an implementation provides character strings
and has the ability to update a character. Notice that the
"logical" flavour has one extra parameter (the InString cannot be
changed, so OutString gets a modified copy). A "logical"
implementation is, of course, free to analyze the code and do
destructive updating if it is safe.

Logical: update(InString, Position, Character, OutString):
?- S1 = 'abc',
(update(S1, 2, '-', S2),
write([s1 = S1, s2 = S2]),
fail)
; write([s1 = S1, s2 = S2]) .

[s1 = 'abc', s2 = 'a-c'] .
[s1 = 'abc', s2 = _1] .

Quasi-logical: update(String, Position, Character):
?- S1 = 'abc',
(update(S1, 2, '-'),
write(s1 = S1))
; write(s1 = S1) .

s1 = 'a-c' .
s1 = 'abc' .

Non-logical: update(String, Position, Character):
?- S1 = 'abc',
(update(S1, 2, '-'),
write(s1 = S1))
; write(s1 = S1) .

s1 = 'a-c' .
s1 = 'a-c' .


<<<----------------------------------------------------------------->>>

*--------------------------------------------------------------------*
| This questionnaire is for my own personal research and will not |
| be used for commercial purposes. After I have collected the data, |
| a summary will be posted. |
| |
| Please feel free to insert additional information in the |
| questionnaire, or add comments at the end. Please return to: |
| |
| lude...@mlpvm1.iinus1.ibm.com *
*--------------------------------------------------------------------*


Name of the logic programming product: ______________________________
Contact address: _________________________________________________
Contact e-mail: __________________________________________________

I am: _ implementer _ friend/colleague of the implementer
_ user _ passing on what I've heard

Type of implementation:
_ Commercial
_ Academic only
_ Robust limited distribution
_ Experimental

Hardware/software platform(s): _______________________________

Type of "logic programming" implemented:
_ Prolog
_ Prolog + extensions
_ Functional language + logic programming extensions
_ Object-oriented language + logic programming extensions
_ Guarded Horn clauses
_ Other

Arrays and tables:
_ Integer indexes
_ O(1) access
_ Can change size dynamically
_ Atom indexes (associative tables)
_ Multiple indexes (e.g. 3-dimensional arrays)
_ Complex indexes (e.g., functors)
_ Logical / Quasi-logical / Non-logical
_ Built-in
_ Library package

Character manipulation:
_ Strings as a primitive datatype
_ Predicates for substring, concatenate, search, index, etc.
_ Kanji support
_ "string length" predicate counts characters, not bytes

Internal database:
_ Undone on backtracking
_ Not undone on backtracking (that is, permanent)
_ Atom indexes (associative tables)
_ Multiple indexes (e.g. 3-dimensional arrays)
_ Complex indexes (e.g., functors)

Arithmetic:
_ Overflow detected
_ "Big numbers" ("infinite precision")
_ Rational numbers
_ Floating point

Types:
_ User-defined types and unification
_ Functors with non-atomic names, e.g.: P(X)
_ Type-checking predicates
_ Lattice or hierarchy of types
_ "Infinite" (or circular) terms
_ Printing terminates for: ?- X = f(X), write(X)
_ Unification terminates for: ?- X = f(X), f(f(Z)) = X
_ Object-oriented features
_ Built-in
_ Provided by a library package

Syntax:
_ Clocksin&Mellish
_ expand_term or expand_rule predicates
_ operator declarations
_ DCG or other grammar rule notation
_ Not Clocksin&Mellish
_ Syntax can be tailored by the user
_ Multiple syntaxes supported
_ Support for non-standard keyboards

Control:
_ Interrupts
_ Delayed evaluation (wait/freeze/geler/etc.)
_ Co-routines
_ Lazy evaluation
_ Eager evaluation
_ User-defined functions (e.g., evaluated by is/2)
_ Higher-order functions ("thunks"/continuations)
_ Constraints
_ Floating point
_ Integer
_ User-tailorable
_ Logical if-then-else
_ Logical negation
_ Multi-way switch
_ Error trapping and recovery
_ Recover from stack overflow
_ Recover from operating system traps
_ Recover from foreign-language traps
_ Detect undefined or incomplete predicate
_ Can intercept *all* error messages
_ User can write own debugging predicates

Modules:
_ Atom-based
_ Predicate-based
_ Can hide predicates (control of exporting)
_ Can control importing
_ Can rename on import or export
_ Can override built-in predicates
_ Meta-predicates can control module context on call(_)

Implementation:
_ Compiler
_ Compiles some built-in predicates into in-line code
_ Compiles arithmetic in-line
_ Compiled and interpreted code can readily call each other
with the same semantics
_ Generates machine code
_ Generates optimized WAM-like code
_ Can create stand-alone read-only modules
_ Mode declarations (in,out,ground,etc.)
_ Automatically propagated by compiler
_ Multi-argument indexing
_ Indexing within an argument
_ Compiler detects deterministic predicates
_ Data area / stack limited to 2**16
_ Data area / stack limited to 2**24
_ Data area / stack limited to 2**31
_ Data area / stack is limited only by hardware resources
_ Global stack garbage collector
_ Local/trail stacks garbage collector
_ Incremental (background) garbage collector
_ Can exploit parallel machines

Database, I/O access:
_ Internal database (record, recorded)
_ Can call SQL
_ Backtrack through all answers
_ Get a list of answers
_ Automatic translation of datatypes to/from SQL
_ Can write predicates which are translated to external database
(e.g., "database" predicates are combined so that SQL can do
the joins)
_ Logical I/O (that is, using lazy/eager lists)

System integration:
_ Formatted I/O (e.g., like C's printf)
_ Windowing interface (X, Motif, PM, etc.)
_ Editor interface
_ Other interfaces
_ On-line help / manual
_ Can invoke system commands
_ Can write predicates in:
_ assembler _ C/C++ _ Cobol
_ Fortran _ Lisp _ Other
_ Can call Prolog from:
_ assembler _ C/C++ _ Cobol
_ Fortran _ Lisp _ Other
_ Can pass arbitrary Prolog terms
_ Can create Prolog terms inside foreign language code
_ Can write backtrackable predicates in foreign languages

Additional notes:

If you could get 5 things which you haven't marked above,
what would you pick?
Would they be worth adding if they increased the
product's price?
Anything that is missing from the list?
Anything you would like but haven't seen anywhere?
Additional description of some of the above features?


<<<----------------------------------------------------------------->>>

Olivier Ridoux

unread,
Sep 19, 1990, 3:53:14 AM9/19/90
to
From article <12...@ecrc.de>, by joa...@ecrc.de (Joachim Schimpf):

> In article <1990Sep11.1...@sics.se> ma...@sics.se (Mats Carlsson) writes:
>>In article <10...@ecrc.de> mi...@ecrc.de (Micha Meier) writes:
>> Value trailing is necessary to implement
>> efficiently coroutining systems (yes, I know that it is possible
>> without, but not fast enough). It was already present in MU-Prolog.
>>
>>This is an interesting claim. Could you provide some arguments for
>>why it is that coroutining systems become so much slower without value
>>trailing?
>
> The main advantage - compared to a scheme as presented in Mats
> Carlsson's ICLP 87 paper - is a more efficient handling of updates
> to the list of delayed goals associated to every suspending variable.
>
> In a typical delay-tests-and-generate application it is quite common
> to have hundreds of goals delayed on only a few variables, so these
> lists can become rather long.
>
> Using destructive and value-trailed updates, all operations can be
> performed in O(1) time. Without, the only choice seems to be using
> open-ended lists, which means all operations are O(length).

There is another solution which involves no value-trailing. The idea is to
have "attributed variables". They behave like variables but have an attribute
that is a term which may in turn be constructed with attributed variables.
The attribute is accessible when and only when the attributed variable is
free. When bound, the attributed variable is transparent (just like a regular
variable); this means that it is dereferenced upon every encounter.

An attributed variable can be seen in two ways.
(1) It can be seen as a decorated variable. In this case the attribute is
meant to represent a constraint, a type, a domain, etc. Anything that can be
interpreted as a descriptor of what the variable is or how to use it will do.
(2) It can be seen as a reversibly modifiable term. In this case the
attribute is the current value of the term. To modify the term is to bind
the attributed variable to a new value. If the term is meant to remain
modifiable, the new value must be represented by a new attributed variable.

Coroutining uses both ways of seeing it. The suspending variable is decorated
with the descriptor of the delayed goals (the first way). The descriptor of the
delayed goals must be a modifiable structure (the second way) in order to add
new delayed goals efficiently.

So all operations can be performed in O(1) time with no value-trailing.
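
Roughly, and ignoring the special dereferencing treatment that attributed
variables receive, way (2) can be mimicked in plain Prolog; the predicate
names below are invented for the illustration. A cell is a term whose last
argument is an unbound "hole"; updating binds the hole to a fresh cell, so
nothing is ever overwritten and backtracking undoes updates through the
ordinary trail:

    new_cell(Value, cell(Value, _Hole)).

    cell_value(cell(Value, Next), Current) :-
        ( var(Next) ->                  % end of the chain: current value
            Current = Value
        ;   cell_value(Next, Current)
        ).

    update_cell(cell(_Old, Next), New) :-
        ( var(Next) ->                  % bind the hole to a fresh cell
            new_cell(New, Next)
        ;   update_cell(Next, New)
        ).

In this plain rendering the current value sits at the end of a growing
chain; with real attributed variables the walk is folded into the
dereferencing loop and the chain is shortened by variable shunting, which
is exactly the point debated in the follow-ups below.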

>>Value trailing seems to have a rather high price: the trail

> It is not necessary to use value trailing even for normal variables.

The regular trail is used for both types of variable.

> A point to mention, however, is that garbage collection becomes
> somewhat more difficult in the presence of value trailing.

Attributed variables add no difficulty for garbage collection. The garbage
collection of attributes is a side effect of the "variable shunting" (also
known as "variable collapsing"). Variable shunting is the ability to recognize
that no choice-point exists in which a given variable is free. In other words,
the variable is either bound or does not exist. Such a variable can be
physically replaced by its binding value in all its occurrences. Since an
attribute is accessible only when its variable is free, it is garbage collected
with the variable.

Attributed variables are available in the MALI memory (S. Le Huitouze
PLILP'90). A similar object, called a "meta-structure", is described by
U. Neumerkel (META'90). Meta-structures are associated with an escape mechanism
that allows a user-defined unification procedure to be hooked into the system.

Olivier Ridoux

Mats Carlsson

unread,
Sep 25, 1990, 8:48:16 AM9/25/90
to

This is in fact exactly the same scheme as I described in my ICLP'87
paper. What may have confused people is that the paper introduces so
called "suspension structures", but they are really the same thing as
attributed variables.

An optimization of step (2) above, implemented in SICStus, is that
when the term to be modified was created more recently than the
youngest choicepoint, the term itself (the "attribute") can be
destructively modified, instead of binding the variable to a new
attributed variable.

> So all operations can be performed in O(1) time with no value-trailing.

That's my point too. There is one snag, however. If the optimization
of step (2) cannot be applied, it can happen that a very long chain of
variable bindings is created, turning accesses to the constrained
variables into O(N) operations. This potential problem does not occur
in a value-trailing scheme. Variable shunting can sometimes mitigate
the problem too.

Richard A. O'Keefe

unread,
Sep 26, 1990, 5:43:06 AM9/26/90
to
Fernando Pereira actually said something extremely encouraging to me:
he said that there were algorithms where something "one or two orders
of magnitude faster" than N+K trees was needed (hence arrays).

Now, my claim that N+K trees are a practical alternative to arrays is
to be understood as a claim that a particular *data structure* (which
can be inspected and manipulated as a plain Prolog data structure if
need be) is a practical alternative. That's all. The claim is that
a data structure is ok, not that it shouldn't be built in.

Here's what I have in mind. 3+4 trees represented as perfectly ordinary
Prolog terms can nevertheless be manipulated by built-in predicates that
are implemented in hand-optimised machine code. The observable behaviour
of such built-in predicates would be exactly the same as the observable
behaviour of appropriate Prolog code, but it could be a lot faster. In
fact, the attainable speedup is in exactly the "one or two orders of
magnitude faster" ballpark that Fernando says we need.
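
For concreteness, here is an illustration of the family of structures I
mean (this is NOT the book's code, and the predicate names are invented
for the example): an "array" of size 4**Depth stored as a 4-ary tree,
where a lookup peels off one base-4 digit of the index per level, so an
access touches Depth = log4(N) nodes:

    new_array(Depth, array(Depth, Tree)) :-
        new_tree(Depth, Tree).

    new_tree(0, leaf(_)).                   % one element, initially unbound
    new_tree(D, node(T0, T1, T2, T3)) :-
        D > 0, D1 is D - 1,
        new_tree(D1, T0), new_tree(D1, T1),
        new_tree(D1, T2), new_tree(D1, T3).

    aref(array(Depth, Tree), Index, Value) :-
        aref_(Depth, Tree, Index, Value).

    aref_(0, leaf(Value), _, Value).
    aref_(D, Node, Index, Value) :-
        D > 0, D1 is D - 1,
        Digit is (Index >> (2*D1)) /\ 3,    % next base-4 digit, high first
        A is Digit + 1,
        arg(A, Node, Subtree),
        aref_(D1, Subtree, Index, Value).

With this, new_array(10, A) gives a million-element "array" and each
aref/3 touches ten nodes. Updates are left out of the sketch; they would
copy one root-to-leaf path (O(Depth) new nodes), or a built-in could
construct that path in optimised machine code, as argued above.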

I haven't got all the figures I want yet. But on an Encore Multimax
(NS32532), plain ordinary C code (not hand-optimised assembler) gave
me the following average times to access any element of an N-element
array represented as a 3+4 tree:
    N =        10      t =  8 usec
              100          12 usec
            1,000          16 usec
           10,000          20 usec
          100,000          24 usec
        1,000,000          29 usec
That's about a 4 usec increase for every 10-fold expansion of the array.
The C code behaved exactly like get_label/3 on p143 of the book, and this
time does include loops to dereference, type checks, checks that the
nodes of the tree have the right functor, and preparation to build missing
parts of the tree and so on.

I repeat, this is plain ordinary C code. Hand-optimised assembly code
should be able to do somewhat better (at a rough guess, 3 usec steps
rather than 4 usec, so that 1,000-element "arrays" might cost 13 usec
per element access).

There are more measurements to be made, but it looks as though 3+4
trees _are_ a practical alternative to arrays _if_ they are implemented
well.

--
Fixed in the next release.

Joachim Schimpf

unread,
Sep 27, 1990, 12:20:09 PM9/27/90
to
In article <1990Sep19.0...@irisa.fr> rid...@irisa.irisa.fr (Olivier Ridoux) writes:
>From article <12...@ecrc.de>, by joa...@ecrc.de (Joachim Schimpf):
>>
>> ...

>> Using destructive and value-trailed updates, all operations can be
>> performed in O(1) time. Without, the only choice seems to be using
>> open-ended lists, which means all operations are O(length).
>
>There is another solution which involves no value-trailing. The idea is to
>have "attributed variables".
>
>...

>(2) It can be seen as a reversibly modifiable term. In this case the
>attribute is the current value of the term. To modify the term is to bind
>the attributed variable to a new value. If the term is meant to remain
>modifiable, the new value must be represented by a new attributed variable.
>
>...

>So all operations can be performed in O(1) time with no value-trailing.

I would like to point out that there is no such thing as
"(virtual) modification in O(1) without destructive assignment".
Basically, your modifiable data structure is still a list, where
the current value is stored at the end, together with a free place
for adding further modifications.
A change history will therefore look like this (using infix dots):

X = value1 . T1
X = value1 . value2 . T2 Trail: T1
X = value1 . value2 . value3 . T3 Trail: T2, T1
X = value1 . value2 . value3 . value4 . T4 Trail: T3, T2, T1

For both access and update, the list has to be traversed to the end.
This can be sped up by

- hiding it in the dereferencing loop (done by MALI and SICStus)
- "shunting" the chain from time to time (done by MALI)

When you claim O(1) updates, do you have in mind that the update chains
cannot grow indefinitely because the garbage collector will shorten them?
Then, still, this scheme is likely to be much slower than destructive
assignment (value-trailed if necessary), which could be pictured as follows
(as above, the trail entries need only be there if a choicepoint has been
created since the previous modification):

X = value1
X = value2 Trail: X=value1
X = value3 Trail: X=value2, X=value1
X = value4 Trail: X=value3, X=value2, X=value1

The current value is always immediately accessible, the change history
is where you need it, i.e. on the trail. The trail could be collapsed
by the garbage collector, but this is not essential for access speed.
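
For comparison, the destructive scheme needs only a few lines, assuming a
backtrackable (value-trailing) setarg/3 like the one discussed earlier in
this thread; the cell predicates are named for illustration only:

    new_cell(Value, cell(Value)).

    cell_value(Cell, Value) :-
        arg(1, Cell, Value).        % current value: a single arg/3 call

    update_cell(Cell, New) :-
        setarg(1, Cell, New).       % trailed destructive assignment,
                                    % undone on backtracking

The current value is always in argument 1, so both access and update are
O(1); as noted above, the old value need only go on the trail when a
choicepoint has been created since the previous modification.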

What Mats Carlsson has mentioned is in fact a compromise, i.e. taking
advantage of destructive assignment if no trailing is needed, and
building up the chain otherwise.

>Attributed variables are available in the MALI memory (S. Le Huitouze
>PLILP'90). A similar object, called "meta-structure" is described by
>U. Neumerkel (META'90).

I agree with Mats Carlsson that his suspension structures are essentially
the same.

Peter G Ludemann

unread,
Oct 2, 1990, 10:47:45 PM10/2/90
to
> I would like to point out that there is no such thing as
> "(virtual) modification in O(1) without destructive assignment".
> Basically, your modifiable data structure is still a list, where
> the current value is stored at the end, together with a free place
> for adding further modifications.
> A change history will therefore look like this (using infix dots):
>
> X = value1 . T1
> X = value1 . value2 . T2 Trail: T1
> X = value1 . value2 . value3 . T3 Trail: T2, T1
> X = value1 . value2 . value3 . value4 . T4 Trail: T3, T2, T1

Eh? All you need to store for each entry in the trail is:
(pointer to array, index, old value)
and you're back to O(1).

You'll speed things up by a constant factor if you can figure
when the values don't need to be trailed.

Can we change the subject a bit? How about list structures
which can be as easily accessed from the end as from the front?
That, to me, is the big advantage of arrays; not the performance
improvement. Does anyone have a nice notation for getting the
last element of a list as opposed to the first element?
Maybe something like Icon, where a[1] gets the first element and
a[-1] gets the last element.

- Peter Ludemann lude...@mlpvm1.iinus1.ibm.com

Denys Duchier

unread,
Oct 4, 1990, 11:21:10 AM10/4/90
to
In article <60...@hplabsz.HPL.HP.COM> kirs...@hplabsz.HPL.HP.COM (Evan Kirshenbaum) writes:
> I called ..X a splice (... was equivalent to .._), and it was an
> actual object in the sequence (although you couldn't get one outside
> of a sequence). Unifying a sequence that contained splices was
> possibly nondeterministic, but was reasonably easy to implement.
> Multiple solutions were obtained on backtracking. This allowed you to
> define
> append(Xs,Ys,[..Xs,..Ys]).
> member(X,[...,X,...]).
> last(L,[...,L]).

This is not exactly a novel idea. In PLANNER [Hewitt circa 67 -- I
think] it was called a segment variable. The problem with segment
variables is that unification not only may have several solutions, but
occasionally infinitely many of them. Consider:

[..X foo] to be unified with [foo ..Y]

The set of unifiers for this example is:

{ X = Y = [] }
{ X = Y = [foo] }
{ X = Y = [foo foo] }
{ X = Y = [foo foo foo] }
...
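
The same behaviour is easy to reproduce in ordinary Prolog by writing the
two segment constraints with append/3:

    ?- append(X, [foo], Z), append([foo], Y, Z).

Backtracking enumerates X = Y = [], [foo], [foo, foo], ... and never
finishes.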

Nonetheless, segment variables are very convenient, and I still use
them in my work.

--Denys

Evan Kirshenbaum

unread,
Oct 3, 1990, 8:00:12 PM10/3/90
to
In article <34...@cup.portal.com> p...@cup.portal.com (Peter G Ludemann) writes:
>> I would like to point out that there is no such thing as
>Can we change the subject a bit? How about list structures
>which can be as easily accessed from the end as from the front?
>That, to me, is the big advantage of arrays; not the performance
>improvement. Does anyone have a nice notation for getting the
>last element of a list as opposed to the first element?
>

Once upon a time, I was playing with the design of a language based on
Prolog, and one of the extensions was exactly this. The notation I
used was:
    [A,B,C]                a sequence containing exactly A, B, and C
    [A,...]                a sequence starting with A
    [A,..As]               a sequence with head A and tail As
    [...,L]                a sequence ending in L
    [..Ls,L]               a sequence ending in L with everything up to it Ls
    [...,X,...]            a sequence containing X
    [..Pre,..Mid,..Post]   a sequence partitioned into Pre, Mid, and Post
    [...,X,...,Y,...]      a sequence in which X occurs before Y


I called ..X a splice (... was equivalent to .._), and it was an
actual object in the sequence (although you couldn't get one outside
of a sequence). Unifying a sequence that contained splices was
possibly nondeterministic, but was reasonably easy to implement.
Multiple solutions were obtained on backtracking. This allowed you to
define
append(Xs,Ys,[..Xs,..Ys]).
member(X,[...,X,...]).
last(L,[...,L]).
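
For comparison, without splices the same relations come out as the familiar
append/3-based definitions in standard Prolog (the splitting that the
splices express implicitly becomes an explicit call; these shadow the usual
library predicates and are shown only as a sketch):

    append([], Ys, Ys).
    append([X|Xs], Ys, [X|Zs]) :- append(Xs, Ys, Zs).

    member(X, Seq) :- append(_, [X|_], Seq).
    last(X, Seq)   :- append(_, [X], Seq).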

If you unified [..ButLast,Last] with [..Pre,X,..Post] you got
ButLast = Pre = Post = [], Last = X
and
ButLast = [..Pre,X,..As], Post = [..As,Last]

My system also contained a notion of anonymous variables, so sequences
with anonymous splices were relatively cheap as well (no structures
needed to be consed).

Evan Kirshenbaum
HP Laboratories
3500 Deer Creek Road, Building 26U
Palo Alto, CA 94304

kirsh...@hplabs.hp.com
(415)857-7572

Evan Kirshenbaum

unread,
Oct 4, 1990, 8:09:19 PM10/4/90
to
>This is not exactly a novel idea.
Never claimed it was new. The original poster asked for a notation.

> The problem with segment
>variables is that unification not only may have several solutions, but
>occasionally infinitely many of them. Consider:
>
> [..X foo] to be unified with [foo ..Y]
>
>The set of unifiers for this example is:
>
> { X = Y = [] }
> { X = Y = [foo] }
> { X = Y = [foo foo] }
> { X = Y = [foo foo foo] }
> ...

Actually, [..X, foo] unified with [foo, ..Y] would yield
X = Y = []
and
X = [foo, ..A], Y = [..A, foo]
I don't remember the proof offhand, but if neither sequence is
circular, there is always a finite number of unifiers. The problem
occurs when you try to unify X:[a,..X] with [...,E,...] (infinitely
many unifiers) or [...,L] (no unifiers, but it's hard to prove that).

Denys Duchier

unread,
Oct 5, 1990, 12:03:48 PM10/5/90
to
In article <60...@hplabsz.HPL.HP.COM> kirs...@hplabsz.HPL.HP.COM (Evan Kirshenbaum) writes:
> > [..X foo] to be unified with [foo ..Y]
> >
> >The set of unifiers for this example is:
> >
> > { X = Y = [] }
> > { X = Y = [foo] }
> > { X = Y = [foo foo] }
> > { X = Y = [foo foo foo] }
> > ...
>
> Actually, [..X, foo] unified with [foo, ..Y] would yield
> X = Y = []
> and
> X = [foo, ..A], Y = [..A, foo]

This can't be right! ..A must be constrained to be a sequence of
foo's only.

--Denys
