[comp.lang.prolog] PROLOG Digest V5 #89

PROLOG-REQUEST@SUSHI.STANFORD.EDU (Chuck Restivo, The Moderator) (11/22/87)

PROLOG Digest            Sunday, 22 Nov 1987       Volume 5 : Issue 89

Today's Topics:
             Query - Proceedings & Test & Constraint LP,
          Implementation - Prolog Machines & Type Conversion,
                & Numbers & Specifications & Chestnuts
----------------------------------------------------------------------

Date: Tue, 17 Nov 87 22:10:49+0900
From: Dongwook Shin <dwshin%csd.kaist.ac.kr@RELAY.CS.NET>
Subject: Proceedings


Is there anyone out there who has the ESPRIT'86 proceedings or knows
where to buy them?  A report which appeared under ESPRIT Project 415
is important to me.
Thanks.

-- D.W. Shin

------------------------------

Date: 14 Nov 87 15:08:42 GMT
From: clyde!watmath!dalcs!aucs!ifocs9d@rutgers.edu  (Rick Giles)
Subject: Tail Recursion Optimization Test

Is there a straightforward test one can apply to a Prolog implementation
which will indicate that it is very likely that the implementation applies
tail recursion optimization?
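
For instance, would something like the following be a fair test?
(The predicate is just my sketch.)

        % If this succeeds for very large N without exhausting the
        % local stack, the implementation very likely performs tail
        % recursion (last call) optimization; if the stack overflows,
        % it very likely does not.
        loop(0).
        loop(N) :- N > 0, N1 is N - 1, loop(N1).

        ?- loop(1000000).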

------------------------------

Date: Mon, 9 Nov 87 11:26:08 GMT
From: Fernando Pereira <pereira%cam.sri.com@drakes.ai.sri.com>
Subject: Chat LIPS

Being the author of the natural-language analysis part of the Chat-80
program that Mike Carlton mentions in his message about Prolog
machines, I'm baffled at his statement that the new VLSI PLM achieves
``98 KLIPS for Warren's [sic] Chat program''.  How is this measured?
The system contains a top-level loop with reader, a parser, two
semantic interpretation modules, a simplifier, a query planner, a
query evaluator and a database, all with very different coding styles,
different uses of evaluable predicates, and with different degrees of
determinacy. For most examples, execution time for a query is
dominated by database access, which includes calls to ``setof''. How
could one draw any useful conclusion from a single aggregate number
for the whole program?  How does one measure ``LIPS'' for programs
using ``setof''?

------------------------------

Date: 9 Nov 87 21:52:17 GMT
From: tektronix!sequent!mntgfx!bobk@ucbvax.Berkeley.EDU  (Bob Kelley)
Subject: Constraint Logic Programming info sought

I haven't heard much in this Digest about Constraint Logic Programming.
It seems to me that CLP is what I originally thought PROLOG should have
been, before I learned about PROLOG.

Can anyone describe exactly what the differences are between CLP and PROLOG?
Is it practical to modify existing PROLOG interpreters to handle, say,
CLP(R)?  Is it practical to do this with compiled PROLOG systems based on
the Warren Abstract Machine, e.g. SB-Prolog?

I have heard that some CLP systems involve some sort of constraint solver
grafted on to a regular PROLOG system.  What tasks would such a solver
be able to perform in the domain of real numbers?  Would it solve sets
of simultaneous equations and inequalities?  Would it need to do any
algebraic rearrangement of clauses?  Could a CLP(R) meta-interpreter be
written in PROLOG?
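
To make the question concrete, here is the sort of behaviour I have
in mind (my sketch of what I understand CLP(R) to do, not output from
any actual system):

        % Standard Prolog: =:= needs both sides to evaluate to numbers.
        ?- X + Y =:= 10.
        % instantiation error: X and Y are unbound

        % CLP(R): the same information is a constraint the solver
        % keeps, and a second constraint makes the solution unique.
        ?- X + Y = 10, X - Y = 4.
        % X = 7, Y = 3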

-- Robert J. Kelley

------------------------------

Date: 17 Nov 87 02:43:14 GMT
From: ji.Berkeley.EDU!holmer@ucbvax.Berkeley.EDU  (Bruce K. Holmer)
Subject: Integer/floating point type conversion in Prolog

In your opinion, what is the proper behavior of the following code fragments?

        % Is type conversion done before the test?
        main1 :- a(1.0, 1).
        a(X, Y) :- X == Y, write(a), fail.
        a(X, Y) :- X = Y, write(b), fail.
        a(X, Y) :- X =:= Y, write(c), fail.

        % What about round-off errors?
        main2 :- X is (2.0/7.0)*7.0, Z is X - 2.0, write(Z), b(X, 2.0).
        b(X, Y) :- X == Y, write(a), fail.
        b(X, Y) :- X = Y, write(b), fail.
        b(X, Y) :- X =:= Y, write(c), fail.


To save you the work, here are the results for prologs we use:

        C-Prolog (version 1.5)
                main1:  abc
                main2:  0abc
        Quintus (VAX version 1.6 interpreted):
                main1:  c
                main2:  -1.9073e-06
        Quintus (VAX version 1.6 compiled):
                main1:  c
                main2:  0.0abc
        SB-Prolog (version 2.2 compiled)
                main1:  bc
                main2:  -0.000000b

As you can see, no one can agree on what the correct answers are!

So should type conversion be done before =/2 or ==/2?
Quintus says no, C-Prolog says yes, and SB-Prolog is in between.

Concerning round-off errors, Quintus compiled code probably computes
multi-operation expressions with full precision (and rounds the
answer), whereas run-time floating-point arithmetic is 29-bit for all
operations.  My opinion is that a program should produce identical
results, regardless of whether it is interpreted or compiled.
SB-Prolog uses an 'EPSILON' during unification to do a
'prettymuch_equal' test, but I would think that any two numbers
which unify with =/2 should also satisfy =:=/2.

Thanks for your comments,
-- Bruce

------------------------------

Date: 17 Nov 87 19:15:59 GMT
From: cbosgd!mandrill!leon@ucbvax.Berkeley.EDU  (Leon Sterling)
Subject: Comparing numbers

Bruce Holmer's questions are good examples of `kosher pigs',
i.e. strictly legalistic curiosities that can't (or, more
realistically, shouldn't) arise.

In his example of type conversion, only one of the queries makes
sense, namely 1.0 =:= 1?  (The testing procedure is irrelevant.)
The arithmetic comparisons, of which =:= is one, are the only Prolog
predicates which do evaluation as part of their comparison.

It is probably sensible for this goal to succeed, though it would
not be unreasonable for =:= only to compare values of the same type.

The meta-logical test 1.0 == 1? should only be applied to non-ground
terms. If its behaviour must be defined, I feel strongly that the
query should fail. The real number 1.0 is not the same object as the
integer 1.

The unification test 1.0 = 1? should also fail, it seems to me.
Do we want a list with one element, a say, to unify with a binary
tree with a single leaf node a?
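
Spelling the analogy out in terms of the underlying terms (the tree
representation is my choice for illustration):

        % The list [a] is the term '.'(a, []); a one-leaf binary tree
        % might be represented as leaf(a).  These have different
        % principal functors, so unification rightly fails:
        ?- [a] = leaf(a).
        % no
        % By the same reasoning, the float 1.0 and the integer 1 are
        % different constants, so 1.0 = 1 should fail too.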

Roundoff behaviour is a question for implementing arithmetic,
and is irrelevant in language design.

-- Leon Sterling

------------------------------

Date: Fri, 20 Nov 87 19:51:27 EST
From: munnari!mulga.oz.au!lee@uunet.UU.NET (Lee Naish)
Subject: Specifications vs programs

Fernando suggested the following specification for append:
        (forall x y z) (list(x) & list(y) & append(x,y,z) =>
                                list(z) & x::y = z)
where :: is the list concatenation function.

A problem with it is that the following query may result in X and/or Y
being bound to arbitrary non-lists:

?- append(X, Y, [1,2,3]).       % eg. X=42, Y=g(fred)
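
For reference, the usual definition

        append([], Ys, Ys).
        append([X|Xs], Ys, [X|Zs]) :- append(Xs, Ys, Zs).

happens to return only lists for the query above, but the
specification does not require that: if X is bound to 42, list(X) is
false, so the premise of the implication fails and the specification
is satisfied vacuously.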

It is important for Prolog procedures to be complete (at least when they
terminate normally, ie. fail on backtracking eventually), as well as sound.
Recently I wrote some code in the following form:

        all ... (append(...), ... => ...)

Implication (in NU-Prolog) is implemented using not, which uses negation
as failure.  NAF is unsound if procedures terminate without returning
all true answers.  The code above will return an incorrect answer with
the following (sound but not complete wrt the intuitive specification)
definition of append:

append(_, _, _) :- fail.

The high level specification for which (standard) append/3 is sound and
complete is horrible.  The sound and complete implementation of the
intuitive specification is also not good (it's inefficient).
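
For concreteness, a sketch of such a sound and complete version (my
reconstruction, not code from any message):

        list([]).
        list([_|Xs]) :- list(Xs).

        append([], Ys, Ys) :- list(Ys).     % the extra type test
        append([X|Xs], Ys, [X|Zs]) :- append(Xs, Ys, Zs).

The list(Ys) test in the base case is what costs: each solution now
takes time proportional to the length of Ys as well as of Xs.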

I think my approach using types is a reasonable solution to this
dilemma, though it would be nicer for everyone if the dilemma didn't
exist in the first place.

P.S. Please don't leave this message around where anti-Prolog people
may see it :-).

------------------------------

Date: Sun, 15 Nov 87 01:02:10 PST
From: quintus!ok@Sun.COM (Richard A. O'Keefe)
Subject: An Old Chestnut

I see that an old chestnut is being discussed again.
The problem is that

        ancestor_descendant(Anc, Des) :-                /* V1 */
                parent_child(Anc, Des).
        ancestor_descendant(Anc, Des) :-
                parent_child(Anc, Cld),
                ancestor_descendant(Cld, Des).

looks good for finding descendants of a known ancestor,
but searches blindly for ancestors of a known descendant.

The question is what should be done to this to make it
behave itself both ways around.

Jeff Schultz suggests either coding both directions of the
relation and picking the appropriate one depending on the
instantiation state of the arguments (this is what used to
be done in IC-Prolog:  you could write several copies of a
clause that differed only in goal order; IC Prolog realised
that they were really the same clause and would pick one)
or else using some form of delaying.
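
A sketch of that first idea in ordinary Prolog (my reconstruction,
not Jeff Schultz's actual program):

        % Pick a search direction from the instantiation state.
        ancestor_descendant(Anc, Des) :-
                (   nonvar(Anc) ->
                    down(Anc, Des)      % search from the ancestor
                ;   up(Des, Anc)        % search from the descendant
                ).

        down(Anc, Des) :- parent_child(Anc, Des).
        down(Anc, Des) :- parent_child(Anc, Cld), down(Cld, Des).

        up(Des, Anc) :- parent_child(Anc, Des).
        up(Des, Anc) :- parent_child(Par, Des), up(Par, Anc).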

Micha Meier also suggests using some sort of delaying.

There are several important points.  One is that delaying is
not really going to help.  What delaying will do for you is
to put off anything that looks too hard.  Eventually SOMEONE
has to guess a descendant otherwise the ancestor_descendant
goal will NEVER be executed.  As Jeff Schultz's program shows,
delaying is a good way to do the scheduling, but you still
have to code both directions of the relation.

But there is more to worry about.  The ancestor_descendant
code given above only LOOKS ok for finding descendants of a
known ancestor.  In fact, unless the parent/child graph is
a tree, that code is SHOCKINGLY inefficient.

What do I mean?  Well, let's confine ourselves to parent/child
graphs which are acyclic, but relate N individuals.

    ancestor_descendant(A, D) may prove the relatedness of
    *given* A and D O(2**N) times.  Consider the graph
    for i = 1..N/2 {
        parent(a[2i+0], a[2i+2]).
        parent(a[2i+1], a[2i+2]).
        parent(a[2i+0], a[2i+3]).
        parent(a[2i+1], a[2i+3]).
    }
    (i.e. N/2 generations of brother-sister incest.)

    It is also easy to construct an acyclic graph in which there
    is a unique path from A to D, but an exponential amount of
    work is done searching for it.

So even in the "forward" direction (whichever that is, both these
nasties happen when both A and D are known) this algorithm is
pretty blind.

The nice thing about declarative programming is that you can write
a specification and run it as a program.  The nasty thing about
declarative programming is that some clear specifications make
incredibly bad programs.  The hope of declarative programming is
that you can move from a specification to a reasonable program
without leaving the language.

What is the best way to handle transitive closure problems like this?
Well, a logic programming system with a bottom-up component should be
able to do a reasonable job by computing the closure bottom-up.  For
ordinary Prolog systems, it is possible to compute the transitive
closure of a graph expressed as a list of From-To arcs in cubic time.
{See the predicate warshall/2 in the DEC-10 Prolog library file
GRAPHS.PL or the Quintus Prolog library file library(graphs).}
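
For readers without those libraries, here is a sketch of warshall/2
over the Vertex-Neighbours representation used below (my
reconstruction, assuming memberchk/2 and ord_union/3 from
library(ordsets); the library code may differ):

        warshall(Graph, Closure) :-
                warshall(Graph, Graph, Closure).

        warshall([], Closure, Closure).
        warshall([V-_|Vs], Graph0, Closure) :-
                memberchk(V-Row, Graph0),   % V's current successor set
                update(Graph0, V, Row, Graph1),
                warshall(Vs, Graph1, Closure).

        % Every vertex that can already reach V can now also reach
        % everything V reaches.
        update([], _, _, []).
        update([X-Ns0|Xs], V, Row, [X-Ns|NewXs]) :-
                (   memberchk(V, Ns0) ->
                    ord_union(Ns0, Row, Ns)
                ;   Ns = Ns0
                ),
                update(Xs, V, Row, NewXs).
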
Even doing

        ancestor_descendant(Ancestor, Descendant) :-            /* V2 */
                setof(Parent-Children,
                      setof(Child, parent_child(Parent, Child), Children),
                      Graph),
                warshall(Graph, Closure),
                member(Ancestor-Descendants, Closure),
                member(Descendant, Descendants).

(which doesn't look too good) has cubic cost, which beats exponential
any day.  {By the way, this is yet another of the many examples where
it is a major help that setof/3 never yields an empty answer.}
Yes, the calls to setof/3 are going to take O(N**2 * lg N) time here,
but this is dominated by the O(N**3) time of warshall/2.

A practical compromise might be to cache this result:

        :- dynamic
                parent_child/2,
                anc_des/2,
                needs_recomputing/0.

        c_assert(Fact) :-
                (   clause(Fact, true) -> true  % already present
                ;   assert(Fact)                % otherwise assert once
                ).

        add_link(P, C) :-
                (   parent_child(P, C) -> true
                ;   assert(parent_child(P, C)),
                    c_assert(needs_recomputing)
                ).

        del_link(P, C) :-
                (   retract(parent_child(P, C)) ->
                    c_assert(needs_recomputing)
                ;   true
                ).

        recompute_if_necessary :-
                (   retract(needs_recomputing), % only when marked stale
                    retractall(anc_des(_, _)),  % flush the old cache
                    setof(Parent-Children,
                          setof(Child, parent_child(Parent, Child), Children),
                          Graph),
                    warshall(Graph, Closure),
                    member(Ancestor-Descendants, Closure),
                    member(Descendant, Descendants),
                    assert(anc_des(Ancestor, Descendant)),
                    fail                        % failure-driven loop
                ;   true                        % cache is up to date
                ).


        ancestor_descendant(Ancestor, Descendant) :-            /* V3 */
                recompute_if_necessary,
                anc_des(Ancestor, Descendant).
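
For example (my usage sketch, with invented facts):

        ?- add_link(terah, abraham),
           add_link(abraham, isaac),
           ancestor_descendant(terah, D).
        % D = abraham ;
        % D = isaac

The first call after a change pays for the recomputation; subsequent
calls go straight to the anc_des/2 table.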

Now, while the parent/child graph isn't changing, a call to
ancestor_descendant/2 takes at worst quadratic time, and when
the graph does change, it takes cubic time to recompute it.

As a matter of fact, I had a paper in the Melbourne conference
which showed how the transitive closure could be maintained as
links were added, while still maintaining cubic total time.  I have
no idea how to attain this upper bound if links are deleted.

Of course the program V2 is not as pretty as the specification V1,
but it is enormously more efficient in the general case (it has no
trouble with cyclic graphs, for example) and it is still declarative.

With release 2.0 of Quintus Prolog and the latest versions of
library(graphs) and library(ordsets) -- that's the one that speeded
up by 20% when I made it purer -- the cross-over for V2 being faster
than V1 on an incestuous graph is N=10.  For N=20, V2 was 10 times
faster than V1, and V3 was another 12 times faster still.
For a 120-fold speed-up, I'm prepared to put up with a lot...
I tried V1, V2, and V3 on some randomly generated acyclic graphs
with N=20 nodes.  The lowest V1/V2 ratio I obtained was 6 (that is
the setof & warshall version was six times faster), and V3 was some
70 times faster still.  That's a V3/V1 speedup of 420 for a tiny
little graph.  I'm not sure that my definition of "random" graphs
is reasonable, so this should still be taken as upper bound-ish.

I have a nasty feeling that there is an O(N**2) algorithm for finding
the transitive closure of an acyclic graph and that I'm looking silly
using a cubic algorithm.  Anyone know if there is, and if so what?

So, if you want to find transitive closures, a little bit of patching
here and there with :-when declarations simply isn't good enough.
You have to use a better algorithm, and if you do that, the original
problem happens to go away.

------------------------------

End of PROLOG Digest
********************