[comp.lang.prolog] Destructive predicates

lagache@violet.berkeley.edu (Edouard Lagache) (03/01/88)

          >In article <710@cresswell.quintus.UUCP>, ok@quintus.UUCP
          >(Richard A. O'Keefe) writes (on replaca in PROLOG):
          >
          >Oh dear oh dear oh dear oh dear.  Have you ever felt that
          >everything you have done for the last N years has been
          >futile?  Have you ever been telling people about the milk
          >and honey of the Promised Land, only to have them turn
          >around and say "What about leeks?  We had great leeks in
          >Egypt."
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
          Hmmmm, doesn't seem that sympathetic to the idea, but at
          least Dr. O'Keefe is keeping an open mind !!!!!!!!!!!!!!!

          Okay, okay, I agree that 'replaca' isn't such a good idea.  I
          forgot to apply the old declarative test, and sure enough it
          is hard to imagine what reversing the fields (i.e.
          replaca(X,Y,[junk,b,c,d]) instead of replaca(junk,[a,b,c],Z))
          would mean declaratively.  Still, that is not quite what I
          was curious about.

               For example, suppose we had a destructive append.
          Obviously this cannot be implemented in terms of other
          PROLOG primitives, so the actual machine implementation could
          be made reversible.  (i.e. append([a,b,c],X,Y) would do what
          regular append does, but append([a,b,c],[d],Z) would
          destructively bash [a,b,c] with [d] and store the result in Z).
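
          (A sketch for reference: the ordinary append/3 being
          discussed here is the usual two-clause textbook definition,
          which builds one fresh cons cell per element of its first
          argument:)

	append([], List, List).
	append([Head|Tail], List, [Head|Rest]) :-
		append(Tail, List, Rest).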

               Ah, but you reply, what would happen upon backtracking?
          Or, more importantly, suppose the problem is append(A,B,C)
          where A = [a,b] and B = [c].  Well, we don't have to take
          destructive list bashing too seriously and could allow
          ourselves copies of A and B.  That would let us solve both
          the backtracking problem and the problem of the pointers of
          A and B getting messed up.  Even in this case we would save
          some number of copies (length of B copies of A, to be exact,
          in "standard" append).

               But of course that is what Quintus (and every
          self-respecting PROLOG) does - right?  Well, then aren't we
          allowing an okay form of destructive list bashing?  If this
          isn't okay - well, why not?

               This point notwithstanding, what would be wrong with
          having a "fully" destructive append that would be used only
          in the situations without reversibility and backtracking?  So
          what if I save only a penny - what did Ben Franklin say about
          a penny in time .......

               I guess there is something more fundamental here.  There
          is a tension here between "the function soup" of LISP, and the
          comparative austerity of the (standard) PROLOG predicate
          library.  Implementation issues aside (for the moment), each
          approach has some merits, but PROLOG tends to force the
          programmer to sacrifice "simple decomposition" for
          efficiency.  What is "simple decomposition"?  Well, I had
          better save that for my next note - stay tuned!

                                                        Edouard Lagache

debray@arizona.edu (Saumya Debray) (03/01/88)

lagache@violet.berkeley.edu (Edouard Lagache) writes:

> [ ... ] what would be wrong with having a "fully" destructive
> append that would be used only in the situations without reversibility
> and backtracking.

In general, it isn't enough to have directionality and determinism.  You
have to make sure that the "old" value of the (sub)structure being
destructively assigned to isn't accessible from anywhere else.  In other
words, it isn't enough to combine mode and determinacy analysis with
a simple live variable analysis to figure out when a structure can
be reused: sharing patterns between structures have to be looked at as
well.  As an example, consider

	p(X,Y,Z,W) :- W = [a|X], append(X,Y,Z).

where p/4 has the mode <ground, ground, free, free> -- then, given
the usual implementation of unification for the literal "W = [a|X]",
append/3 cannot use destructive assignment even though its use in
this context is directional and deterministic.
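
To make the hazard concrete, here is a sketch of what a destructive
append/3 (hypothetical, as above) would do to this clause's caller:

	?- p([1,2], [3], Z, W).
	% After W = [a|X], the cells of X = [1,2] are reachable from W,
	% so W = [a,1,2].  A destructive append/3 would then smash the
	% final nil of [1,2] to point at [3], and W would silently
	% become [a,1,2,3].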

Maurice Bruynooghe has done some work on static analyses to detect
situations where destructive assignment can be introduced as a compiler
optimization, see his paper in SLP87.
-- 
Saumya Debray		CS Department, University of Arizona, Tucson

     internet:   debray@arizona.edu
     uucp:       {allegra, cmcl2, ihnp4} !arizona!debray

pds@quintus.UUCP (Peter Schachte) (03/02/88)

In article <7289@agate.BERKELEY.EDU>, lagache@violet.berkeley.edu (Edouard Lagache) writes:
>     There
>     is a tension here between "the function soup" of LISP, and the
>     comparative austerity of the (standard) PROLOG predicate
>     library.

You mentioned the Prolog library.  There are several Prolog libraries,
containing MANY, many predicates, covering all sorts of things you
might want to do.  But they are (I think properly) not a part of the
language.  If you want them, you can load them in, but you don't have
to have all of them around if you don't need them.  Most Lisps build in
many functions that you don't really need.  (I think) many of them would be
better exported to a standard Lisp library.

Also, note that Prolog doesn't really need as many predicates as Lisp.
For example, in the case of list processing, Lisp needs to have CAR,
CDR, and CONS, while Prolog handles all these through pattern matching.
Lisp has APPEND, LDIFF, MEMBER, and ASSOC (and Common Lisp has ASSOC-IF
and ASSOC-IF-NOT), while Prolog can do all these things with just
append/3.  And by adding in a bidirectional length/2 predicate and
difference lists, Prolog provides most of the list-handling capabilities
that Lisp provides with quite a few built-in functions.  And I could go
on.  Prolog really does gain some useful generality from unification and
nondeterminism.
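
For instance (a sketch of the usual definitions, not taken from any
particular library), MEMBER and LDIFF both fall out of running append/3
in different modes:

	% member(X, List): X occurs in List.
	member(X, List) :- append(_, [X|_], List).

	% ldiff(List, Tail, Front): Front is List with the suffix
	% Tail removed.
	ldiff(List, Tail, Front) :- append(Front, Tail, List).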

>     Implementation issues aside (for the moment), each
>     approach has some merits, but PROLOG tends to force the
>     programmer to sacrifice "simple decomposition" for
>     efficiency.

No more than Lisp.  In either language, one often wants to process a
data structure (often a list), and wind up with something else.  And it
is often most convenient to do the processing in small steps,
transforming the first data structure into a second, then a third, and
so forth, until you get what you want.  Often this could be done more
efficiently by doing all the steps at once.  Then, in either language,
one has to ask oneself, "how important is efficiency here, and is it
worth writing a lot of my own predicates (functions) to do this more
efficiently?"

Lisp DOES provide destructive operations to make some things more
efficient.  And Lisp has looping constructs to save the programmer from
having to write lots of auxiliary functions.  But Prolog can often get
the effect of side-effects with unbound variables.  And Prolog doesn't
lose much efficiency from having auxiliary predicates, since they can
usually be tail-recursive.
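
As a sketch of the kind of auxiliary predicate that costs little, here
is the usual accumulator idiom; the recursive call is the last call, so
a decent compiler turns it into a loop:

	sum_list(List, Sum) :-
		sum_list(List, 0, Sum).

	% sum_list(List, Acc0, Sum): Sum is Acc0 plus the sum of List.
	sum_list([], Sum, Sum).
	sum_list([X|Xs], Acc0, Sum) :-
		Acc is Acc0 + X,
		sum_list(Xs, Acc, Sum).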

Prolog doesn't lose all that much from not having destructive
operations.  Often the loss can be mitigated by using better data
structures.  Trees instead of lists mean that to change an element you
only have to copy about log2(N) nodes rather than N, where N is the
size of the list or tree.  It would be nice if we could have compilers
that magically turned our nice pure code into side-effects, but this
is VERY hard.
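
To illustrate the path-copying point (a minimal sketch; the predicate
name and the node shape t(Key,Value,Left,Right) are made up for the
example), replacing an element in a binary search tree rebuilds only
the nodes along the search path and shares everything else:

	tree_replace(t(K,_,L,R), K, V, t(K,V,L,R)).
	tree_replace(t(K0,V0,L,R), K, V, t(K0,V0,L1,R)) :-
		K @< K0,
		tree_replace(L, K, V, L1).
	tree_replace(t(K0,V0,L,R), K, V, t(K0,V0,L,R1)) :-
		K0 @< K,
		tree_replace(R, K, V, R1).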

For now, Prolog still has some big wins over Lisp systems in performance
(faster cons, for example, and cheaper memory management overall) to put
against Lisp's advantage in having destructive operations.
-- 
-Peter Schachte
pds@quintus.uucp
...!sun!quintus!pds

alf@sics.se (Thomas Sjöland) (03/09/88)

Reacting against introducing more "non-logical" features into Prolog is
natural and justified.  The very reason Prolog is something different
from backtracking LISP is that a subset of the language can be given a
logical reading, where program execution is the construction of a proof
using some simple proof rules.

When practical people turned to making this proof procedure into a usable
programming language, i.e. inventing Logic Programming, they quickly
discovered the need for some kind of control over the theorem prover that
does not immediately destroy this nice property.  The logical reading of a
program also gives no clue as to whether a predicate is used as a producer
or a consumer of a variable.  This may vary with different queries, and it
seriously affects the execution efficiency of a logic program.

'Setarg' is considered "non-logical" by most "logic programmers", since its
logical reading would be something like this: "replace any given assumption
you are currently using for this variable with this new one for the rest of
this branch, but use the old one when you explore the rest of the tree".
This informal understanding of the primitive has a clearly non-monotonic
character, and this is what is meant by "non-logical".

It might be argued that destructive assignment is needed for efficiency.
My intuition is that examples CAN be formulated that perform better with
'setarg' than with any "pure" reformulation of the problem.  It is also
possible to write assembly programs that are more efficient than Prolog
programs.  (This is perhaps a more sensible idea, if performance is the
only purpose of the programming enterprise.)

The non-naive proponent of non-logical features in logic programming goes
ahead and tries to formulate a methodology or (better still) a compilation
scheme by which pure programs can be transformed into more efficient ones,
which perhaps use non-logical features.

Such methods are already used by logic programmers.

Let me list some that come to mind:

- Bagof, Setof, Findall and all other more or less intuitive variants
- metaprogramming 
- Programming with difference structures like d-lists
- various uses of cut (red and green, if-then-else, etc.)
- read-only annotations like wait, delay, dif, freeze


D-lists are normally (e.g. in grammar rules) considered a perfectly logical
extension, or rather, programming technique.  Indeed, their use demands no
particular control primitive to be introduced into the language, and using
them is therefore often considered totally harmless.
Some subtleties occur when you try to use them as ordinary lists in a
program, though.  Try to define a predicate dlength/2 that determines the
length of a d-list, and a predicate dappend/3 that appends two d-lists
into a third.  Try to use them together and you will see what I mean:
...dlength(D1,L1), dlength(D2,L2), dappend(D1,D2,D3)...

There is, to our knowledge, no way to formulate a predicate dlength/2
that does not instantiate the lists (thereby destroying the possibility
of using the d-list in the same way as a normal list in this example)
without using non-logical primitives like '=='.
The explanation is that d-lists have no meaning as such in the standard
logic semantics, but are INTENSIONAL in nature, i.e. their meaning is
CODED into the program, not part of the program semantics.
More bluntly put: a d-list is not an object in the Herbrand domain.
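
To illustrate (a sketch only; the pair notation Front-Back for a d-list
is one common convention, and the use of '==' and nonvar/1 is exactly
the non-logical escape discussed above):

	% dappend/3 is purely logical:
	dappend(Front-Mid, Mid-Back, Front-Back).

	% dlength/2 must peek at the open end without binding it:
	dlength(Front-Back, 0) :- Front == Back.
	dlength(Front-Back, N) :-
		Front \== Back,
		nonvar(Front),
		Front = [_|Tail],
		dlength(Tail-Back, M),
		N is M + 1.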

Using careful heuristics, which have been well investigated by the logic
programming community, programming with difference structures is
nevertheless a very powerful technique and is part of the standard
material of any Prolog curriculum.

What is perhaps needed is a methodology for the use of destructive
assignment in Prolog.  I doubt, however, whether such an endeavour will
be successful, and it will probably impose very rigorous demands on how
programs must be written in order to preserve some form of "logical"
reading.

It all boils down to the standard problems with the semantics of
assignment: how can it be understood as anything other than an
operational description of what the system does with its internal
structures at a given point in the execution, i.e. how can it be given
a "logical" or "declarative" meaning?

PS. Note that setarg can be fully defined using primitives already in the
    language, namely 'var'/1 and '!'/0.  The version of setarg which keeps
    the value assigned even on backtracking (sometimes called $setarg)
    seems to demand something else, however.  There is a project at our
    lab concerning a theorem prover implemented in SICStus Prolog which,
    apart from lots of cuts, vars and '=='s, also seems to need $setarg
    to reach an efficient implementation.  For such an implementation,
    with non-naive programmers to do the job, the designers/implementors
    see no problems caused by using non-logical features.
DS.


Thomas Sjöland
Logic Programming Systems Laboratory
SICS, PO Box 1263, S-164 28 KISTA, SWEDEN
Tel: +46 8 752 15 42	Ttx: 812 61 54 SICS S	Fax: +46 8 751 72 30
Internet: alf@sics.se or {mcvax,munnari,ukc,unido}!enea!sics.se!alf

ok@quintus.UUCP (Richard A. O'Keefe) (03/14/88)

In article <1797@sics.se>, alf@sics.se (Thomas Sjöland) writes:

> More bluntly put: A D-list is not an object in the Herbrand domain.

Right!  This is why I think it is a bad idea to use
	Front-Back
or	Front\Back
to package the two ends of a difference list as a single object,
and why I try always to use two separate arguments in my programs.

However, there is an encapsulation which works nicely, due I think to
Stuart Shieber, which Fernando Pereira told me about.

use	q(0, X, X)	for an empty list
and	q(s(s(...(0)...)), [X1,...,Xn|X], X)	for a list of N elements.

	empty_queue(q(0,X,X)).

	queue_first(X, q(s(N),[X|Front],Back), q(N,Front,Back)).

	queue_last(X, q(s(N),Front,Back), q(N,Front,[X|Back])).

queue_first(X, Queue1, Queue0) is analogous to List1 = [X|List0];
queue_last( X, Queue1, Queue0) is similar but works at the other end.
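
As a quick sketch of how the three predicates compose (this query is my
illustration, not part of the original posting):

	% Build a queue holding a then b, then take its first element:
	?- empty_queue(Q0),
	   queue_last(a, Q1, Q0),
	   queue_last(b, Q2, Q1),
	   queue_first(X, Q2, Q3).
	% binds X = a, leaving Q3 a queue holding just b.

Note that queue_last/3 works by instantiating the open tail of the
queue, so elements go on at the back in constant time.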