[comp.lang.prolog] PROLOG Digest V5 #85

PROLOG-REQUEST@SUSHI.STANFORD.EDU (Chuck Restivo, The Moderator) (11/08/87)

PROLOG Digest             Monday, 9 Nov 1987       Volume 5 : Issue 85

Today's Topics:
           Query - Side Effects & Intelligent Backtracking,
             Implementation - Style & Badness & Machines
----------------------------------------------------------------------

Date: 4 Nov 87 06:49:46 GMT
From: munnari!mulga!jayen@uunet.uu.net  (Jayen Vaghani)
Subject: Preserving side-effects in Intelligent backtracking

I am currently completing a project on preserving side-effects in
intelligent backtracking and am implementing the algorithm in a full
Prolog system.  The intelligent backtracking method we are using is
similar to the B-list scheme of Kumar and Lin.  I would be interested
in hearing from anyone else who is looking at the same area, and what
sort of results they have come up with.

Thank you.

-- Jayen

------------------------------

Date: Thu, 05 Nov 87 16:57:01 +1100
From: Jeff Schultz <munnari!mulga.oz.au!jws@uunet.UU.NET>
Subject: Programming Style

John Cugini asks how to re-write the ancestor relation cleanly so that
it runs whether it is the ancestor or the descendant argument that is
known.

His original code
        ancestor_descendant(Anc, Des) :- parent_child(Anc, Des).
        ancestor_descendant(Anc, Des) :-
                parent_child(Anc, Middle),
                ancestor_descendant(Middle, Des).

is fine for finding descendants, but performs a blind search for
ancestors.  One can write a similar predicate with the opposite
problem:

        descendant_ancestor(Des, Anc) :- parent_child(Anc, Des).
        descendant_ancestor(Des, Anc) :-
                parent_child(Middle, Des),
                descendant_ancestor(Middle, Anc).

One way to get behaviour independent of instantiation pattern
is to use both of these.  For example
        ad(Anc, Des) :-
                (var(Anc) ->
                        descendant_ancestor(Des, Anc)
                ;       ancestor_descendant(Anc, Des)
                ).
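Putting the pieces together (the parent_child/2 facts below are
invented purely for illustration), ad/2 enumerates solutions in either
direction:

```prolog
% Invented example database -- not from the original posting.
parent_child(abe, bea).
parent_child(bea, cal).

ancestor_descendant(Anc, Des) :- parent_child(Anc, Des).
ancestor_descendant(Anc, Des) :-
        parent_child(Anc, Middle),
        ancestor_descendant(Middle, Des).

descendant_ancestor(Des, Anc) :- parent_child(Anc, Des).
descendant_ancestor(Des, Anc) :-
        parent_child(Middle, Des),
        descendant_ancestor(Middle, Anc).

% Dispatch on which argument is bound, as in the text.
ad(Anc, Des) :-
        ( var(Anc) ->
                descendant_ancestor(Des, Anc)
        ;       ancestor_descendant(Anc, Des)
        ).

% ?- ad(abe, D).   gives D = bea ; D = cal
% ?- ad(A, cal).   gives A = bea ; A = abe
```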

A cleaner, though less efficient, solution is possible with a Prolog
capable of delaying goals until some suitable combination of their
arguments is instantiated.  In NU-Prolog, the following would work

        ?- ancestor_descendant(Anc, _) when Anc.
        ?- descendant_ancestor(Des, _) when Des.
        ad(Anc, Des) :-
                ancestor_descendant(Anc, Des),
                descendant_ancestor(Des, Anc).

Whichever version is sufficiently instantiated runs first with the
other verifying the answer afterwards.  The amount of work saved over
using ancestor_descendant/2 depends on the nature of the parent_child/2
relation.  The worst case is when everyone is related.

As another example, the NU-Prolog code for length/2
        ?- length(A, B) when A or B.
        length([], 0).
        length(_.L, N) :-
                N > 0,
                plus(N1, 1, N),
                length(L, N1).
also runs in either direction.  (Plus/3 waits for any two of its
arguments, and then computes the remaining one so that Z = X + Y.)
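For Prologs without when declarations, a similar bidirectional length
can be sketched by dispatching on var/1, at the cost of the metalogical
test the delaying version avoids (len/2, count/3 and build/2 are names
invented here, not NU-Prolog built-ins):

```prolog
% len(List, N): works with either argument instantiated.
len(L, N) :-
        ( var(N) ->
                count(L, 0, N)   % list known: count its elements
        ;       build(N, L)      % length known: build a skeleton list
        ).

count([], N, N).
count([_|T], N0, N) :-
        N1 is N0 + 1,
        count(T, N1, N).

build(0, []).
build(N, [_|T]) :-
        N > 0,
        N1 is N - 1,
        build(N1, T).

% ?- len([a,b,c], N).   gives N = 3
% ?- len(L, 2).         gives L = [_,_]
```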

------------------------------

Date: Fri, 6 Nov 87 11:46:09 +0100
From: "Micha Meier" <unido!ecrcvax!micha@uunet.UU.NET>
Subject: Programming Style

The question is whether

ancestor_descendant(Anc, Des) :- parent_child(Anc, Des).
ancestor_descendant(Anc, Des) :-
  parent_child(Anc, Middle),
  ancestor_descendant(Middle, Des).

can be executed efficiently and cleanly.

Although the problems mentioned by John Cugini are solved by query
optimization in the database area, there is a clean Prolog way to
handle them as well, but it requires extensions to Prolog. What is
required here is to vary the depth-first, left-to-right Prolog search
depending on the instantiation of the arguments in the call.
Suspending a call which is not sufficiently instantiated is a clean
and easy solution. Such a solution could even be simulated with the
freeze/2 primitive of Prolog II; other Prolog dialects provide
more sophisticated mechanisms - wait declarations in MU-Prolog
and ECRC-Prolog, when declarations in NU-Prolog, delay declarations
in Sepia, etc.

In our new system, Sepia, I would add the following:

?- delay parent_child(Anc, Des) if var(Anc) & var(Des).

which means exactly what it says - the call to parent_child/2 will
be delayed if both arguments are variables and it will be resumed
if one of the arguments is instantiated or if there is nothing else
to execute.
The 'delay' clause is in fact a metaclause which specifies certain
control information for the procedure. In this way metalogical
predicates need not appear in the program itself; they are present
only in metaclauses.
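As a rough sketch of how such a delay could be simulated with
freeze/2, consider the wrapper below (an approximation only: it
suspends on the first argument, so unlike the delay metaclause it is
not resumed when only the second argument later becomes bound; the
parent_child/2 fact is invented for the example):

```prolog
% Invented fact for illustration.
parent_child(abe, bea).

% d_parent_child(A, D): run parent_child/2 at once if either argument
% is already bound; otherwise suspend the call on A via freeze/2.
d_parent_child(A, D) :-
        ( nonvar(A) -> parent_child(A, D)
        ; nonvar(D) -> parent_child(A, D)
        ;              freeze(A, parent_child(A, D))
        ).

% ?- d_parent_child(A, D), A = abe.
% The call suspends, then resumes when A is bound, giving D = bea.
```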

Although there are not many Prolog systems available that
support predicate delaying, it is clear that their importance
will grow - they can handle database applications and constraint
propagation, increase efficiency, improve backtracking behavior
(so that a form of intelligent backtracking follows automatically),
avoid some infinite loops, and more. At the same time, the
implementation can be so efficient that the speed of non-delaying
execution is not visibly affected (we still make 200 KLIPS with
Sepia on the 25MHz SUN-3).

------------------------------

Date: Wed, 4 Nov 87 16:48:20 EST
From: blenko@burdvax.PRC.Unisys.COM (Tom Blenko)
Subject: Badness

I'm not certain Richard's posting was really a question, but I promise
the following will be brief (well, sort of).

I think the answer to his question is simple enough.  Most code, Prolog
and otherwise, is good enough because it runs.  Programmers have
neither the time nor the interest to engage in the pensive, reflective
approach to programming that O'Keefe espouses.  Lest anyone take this
as a denigration of "industrial quality" software development, my
experience has been that academics write significantly worse software
(by Richard's measure) than professional programmers. Which is
fine, because academics often have other more important things to
occupy themselves with.

So the answer, in a nutshell, is that goodness (and even correctness)
of software depends upon one's perspective. And the overwhelming
majority of those writing software do not share Richard's perspective.
Professional software developers can, and sometimes do, expend
additional time and effort when they determine it is necessary to
produce more "correct" software (although, of course, they are not
always successful).

My favorite bit of erroneous Prolog software is the oft-cited append()
procedure/relation:

        append([],X,X).
        append([H|T1],X,[H|T2]) :-
                append(T1,X,T2).

I don't believe I've ever seen the incorrectness (by Richard's view)
acknowledged in a text or publication, nor have I seen a correct
version (even in the Quintus library!). The problem, of course, is
illustrated by the fact that cons() may be defined in terms of
append():

        cons(A,B,C) :-
                append([A],B,C).

My claim is that the definition of append() above has been acceptable
to Prolog programmers for the same reason that most erroneous code is
acceptable to other programmers: it runs correctly enough for the task
at hand, and (in this case) significantly faster than a correct
implementation would.
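To make the point concrete: with the standard definition, the query
?- append([], foo, X) succeeds with X = foo even though foo is not a
list. One possible repair (a sketch only, and slower, which is Tom's
point; strict_append/3 and proper_list/1 are names invented here) is
to have the base case insist on a proper list:

```prolog
% Standard append/3, reproduced from the posting.
append([], X, X).
append([H|T1], X, [H|T2]) :-
        append(T1, X, T2).

% strict_append/3: like append/3, but the base case demands that the
% second argument be a proper list, so strict_append([], foo, X) fails.
strict_append([], X, X) :- proper_list(X).
strict_append([H|T1], X, [H|T2]) :-
        strict_append(T1, X, T2).

proper_list([]).
proper_list([_|T]) :- proper_list(T).

% ?- append([], foo, X).          succeeds with X = foo
% ?- strict_append([], foo, X).   fails
```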


In response to Richard's observations about the quality of Prolog code
as an instance of a more general phenomenon, and more in keeping with
the issue I suppose he intended to raise, I propose that bad Prolog
code is in many cases likely to be worse than bad Pascal or C code for
these reasons:

        1. It is almost never the case that the programmer intends to
           use the full generality of relational definitions.
           Consequently he or she is unlikely to test for (or even take
           into account) unintended uses of the software.

        2. Interactions with assert() and retract() are no easier to
           anticipate than the effects of assignment in imperative
           languages (in fact I would guess that they are often
           harder).

        3. Backtracking and cut are more difficult to use than more
           conventional control constructs (I'd be interested if
           anyone is willing to present a strong argument to the
           contrary).

        4. As Richard's earlier diatribe and several subsequent
           messages have indicated, there are (currently) some rather
           dramatic implementation-dependent pitfalls with respect to
           performance (far more than one would expect to see between
           and among C or Pascal implementations).

        5. It is well known that O'Keefe's and Naish's exhortations
           to Lagache (for sensitivity to efficiencies in the
           implementation and for conformity to simple declarative
           interpretations) will sometimes conflict. This presents
           the programmer with a dilemma that he or she may not
           (indeed may not be able to) escape.

-- Tom

------------------------------

Date: 3 Nov 87 02:27:59 GMT
From: ji.Berkeley.EDU!carlton@ucbvax.Berkeley.EDU  (Mike Carlton)
Subject: Looking For Prolog Machines

Thanks for the kind words.  Let me give some background on the Aquarius
project here at Berkeley.  The group has built several Prolog machines to
date, beginning with an NCR 9300 based system.  Prolog programs were compiled
to WAM (Warren Abstract Machine) and then to microcode.  With a cycle time
of 150 ns., this system achieved 53 KLIPS for determinate concat in 1985.

The next machine, the PLM, was a TTL implementation of the WAM with some
local improvements, see [Dobry87].  The PLM was simulated to run at 400 KLIPS
for determinate concat, based on a 100 ns. cycle.

Another project implemented the WAM instruction set in microcode on a
VAX 8600.  This machine, with an 80 ns. cycle, reached 108 KLIPS on
determinate concat.

The most recent machine (a chip, actually) is the VLSI PLM, being
fabricated right now.  This is a 200K equivalent-transistor, 1.4 micron
CMOS implementation of an extended PLM.  With a cycle time of 100 ns.
this chip has been simulated at 416 KLIPS for determinate concat and
98 KLIPS for Warren's Chat program.

The Prolog compiler used for most of this research was written by Peter Van
Roy [VanRoy85], and is available directly from him.  He can be reached via
email at vanroy@ji.berkeley.edu.

Unfortunately, the VLSI PLM is not available as a MOSIS tape as Renu's
message indicated.  The chip was funded in part by NCR and they are
fabricating it for us and for their own use.

Our focus has shifted away from single processor systems; current research
centers around parallel execution Prolog machines.  For some results from
recent work in this area, see [Fagin87].  Altogether, more than 50 papers
and reports have been produced by the Aquarius project.  If anyone would
like a list of the publications, please send electronic mail to Kathleen
Nimr (nimr@ji.berkeley.edu).  Most of these publications are still available
from her.

As for commercial products, Xenologic licensed the PLM design from the
university and has designed an improved version, called the X-1.  This is a
set of 2 VME boards which plug into a Sun workstation and function as a
coprocessor.  I don't have exact performance figures for their machine, but
have been told that they are better than the PLM.  

Hope this helps.  If you would like more information or have any
questions, please send us mail,

-- Mike (and the rest of the Aquarius group)

References:
Dobry, T., "A High Performance Architecture for Prolog", Ph.D.
       Dissertation, UCB, May 1987

Fagin, B., Despain, A., "Performance Studies of a Parallel Prolog
       Architecture", 14th ISCA, June 1987

Van Roy, P., "A Prolog Compiler for the PLM", UCB/CSD Report No. 84/203,
       November 1984

Disclaimer:  I have nothing to do with Xenologic (other than wanting my
own X-1 for the Sun in my office!).  However, some members of our research
group do have connections with Xenologic.  This should not be construed as
an advertisement for Xenologic.

------------------------------

End of PROLOG Digest
********************