[net.lang.prolog] Reply To My Critics

OKeefe.R.A.%EDXA@sri-unix.UUCP (10/31/83)

From:  Richard HPS (on ERCC DEC-10) <OKeefe.R.A.@EDXA>

     Russ Abbott raised the question of how pure =.. (UNIV) is,
claiming that it is "in violation of all first order principles".
Not so.  As David Warren pointed out in his paper

@InProceedings<HigherOrder,
  Key = "Warren",
  Author = "Warren, D.H.D.",
  Title = "Higher-order extensions to Prolog - are they needed ?",
  BookTitle = "Tenth International Machine Intelligence Workshop,
               Cleveland, Ohio",
        Year = 1981,
        Month = "Apr",
        Note = "DAI Research Paper 154"
    >
if we have a logic program P containing predicates
<Pi/Ni; i=1..p> and functions <Fj/Nj; j=1..f>, where each
predicate is also listed among the functions, then we can form
a new logic program P' by adding the axioms

        call(Pi(X1,...,XNi)) :- Pi(X1,...,XNi). % i = 1..p

        functor(K, K, 0) :- integer(K).
        functor(Fj(X1,...,XNj), Fj, Nj).        % j = 1..f

        arg(k, Fj(X1,...,Xk,...,XNj), Xk).      % j = 1..f
                                                % k = 1..Nj

        K =.. [K] :- integer(K).
        Fj(X1,...,XNj) =.. [Fj,X1,...,XNj].     % j = 1..f

     If the Prolog interpreter finds a solution for Goal using the
program P, Goal is a logical consequence of the program P', even
in the presence of cuts, but not of course with assert and retract.
So call, functor, arg, and univ *are* first-order, in a suitably
augmented program.  It is perfectly proper to call them first-
order, because the augmentation is just definitions for those
predicates; no other part of the program is changed.  (It is NOT
the case that
Prolog working on P will find all the answers that Prolog working
on P' would find.  For example, the built-in arg does not backtrack.
But then it isn't the case that Prolog will find all the logical
consequences of a P that doesn't use these predicates, so that is
nothing new.)
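     To see what the schemata come to, here is the instantiation
for a single function symbol (my example, not one from Warren's
paper): take Fj = father/2.  These are axioms of P', written out;
they are not clauses you would load into a real Prolog, where
call, functor, arg, and =.. are built in.

        call(father(X1, X2)) :- father(X1, X2).

        functor(father(X1, X2), father, 2).

        arg(1, father(X1, X2), X1).
        arg(2, father(X1, X2), X2).

        father(X1, X2) =.. [father, X1, X2].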

     You might even accept ancestors(_) and subgoal_of(_) as first-
order.  In the transformed program each predicate has an extra
argument: the list of ancestor goals.  Although the transformed
program is different from the original, the important things are
that the transformation can be done entirely at compile time and
that the result is a pure first-order program very like the
original.
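     For instance (my sketch, not code from any actual system),
the transformed program carries the ancestor list as an extra
last argument, so the source clause

        p(X) :- q(X, Y), r(Y).

becomes

        p(X, Anc) :- q(X, Y, [p(X)|Anc]),
                     r(Y, [p(X)|Anc]).

and a call ancestors(L) in a body becomes a unification of L with
that extra argument.  Top-level goals are called with ancestor
argument [].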

     Chris Moss criticises "Richard's assumption that the Edinburgh
implementation is normative for such things as setof".  I make no
such assumption.  The version of setof and bagof I mailed to this
Digest is not in fact "the Edinburgh implementation".  My view
is the perfectly commonplace and widely accepted one that

        the first person to invent or discover something
        has the right to name it;
        other people knowing this name do not have the right
        to apply it to other things, or to use another name
        for the same thing, without consulting the relevant
        professional community.

     If enough of the Prolog community agree that setof or bagof
is a bad name (and since Chris Moss himself has proposed an
operation called seqof I assume he likes the name) I will go along
with whatever is chosen to replace it.  Till then:

    1.  The definition of setof is available in 5 places:
        MI 10, DAI RR 154, the Dec-10 Prolog manual, the C-Prolog
        manual, and this Digest.
    2.  David Warren has priority on the names setof and bagof.
        He also has priority on the operation (the one where free
        variables in the generator can be bound).
    3.  The name "findall" for what Chris Moss calls "a flat setof"
        is established on page 152 of "Programming in Prolog",
        which is the most widely available text on Prolog.  I think
        Chris Mellish invented the name.

     Chris Moss also asks 'which implementation introduced the
word "find_all"? I don't know.' The answer is two-fold.  The
first part of the answer is that the proper (according to the
normal practice of scientific and mathematical naming) name
for the operation in question is 'findall', and I occasionally
mis-spell it.  The second part of the answer is that when I
mailed an implementation of 'findall' to net.lang.prolog I
called it 'find_all' so that people could try it out whose
Prolog implementation reserved the word 'findall'.  This has
reinforced my mis-spelling habit.

     The following paragraph in his message is one I whole-
heartedly agree with, except to point out that the reasonably
clear descriptions in the 1981 DEC-10 Prolog reference manual
seem to have done no good, so there is little reason to expect
more formal definitions to help.  Alan Mycroft and someone else
(Neil Jones?) have published a formal semantics for Prolog with
the cut but without i/o or database hacking.

     Yoav Shoham is thankful for assert and retract.  Yes, and
BASIC programmers should be thankful for GOSUB.  But Pascal
programmers are even more thankful for real procedures which
are even better suited to the task of writing correct readable
programs.  I cannot read the example, but the data base seems
to be used for two different tasks.  In 'next'/2 we find

        asserta(tmpstoreglist((L1,X2,Fnextfn))),
        <<<DANGER>>>
        retract(tmpstoreglist(Copy)),

Now all that this does is to bind Copy to a copy of (L1,X2,Fnextfn)
with new variables.  The "meta-logical utilities" available as
{SU-SCORE}PS:<Prolog>METUTL.PL contain a predicate

        copy(OldVersion, NewVersion) :-
                asserta('$copy'(OldVersion)),
                retract('$copy'(NewVersion)).

or something like that.  Now there are three points to make.
The first is that this is a perfectly clear operation which
in itself has nothing whatsoever to do with the data base.
It could be implemented very easily in C or MACRO-10.  The
second point is that it is not at all difficult to implement
it in Prolog, using the var/1 and ==/2 primitives.  The third
point is that by not giving the operation a name, and writing
the asserta and retract separately, Shoham has been seduced
into putting some other code in between the parts of this
operation, where I put <<<DANGER>>> above.  In the case that
Fnextfn fails, a tmpstoreglist clause will be left in the data
base, which I assume was not intended.
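     To back up the second point: here is one way (a sketch of my
own, not the METUTL.PL code) to write copy/2 without touching the
data base, using var/1 and ==/2 and threading an association list
of Old-New variable pairs.

        copy(Old, New) :-
                copy(Old, New, [], _).

        %  copy(Term, Copy, Map0, Map): Copy is a renaming of Term;
        %  the map pairs each old variable with its new variable, so
        %  repeated occurrences of a variable stay identical.
        copy(Old, New, Map0, Map) :-
                var(Old), !,
                lookup_var(Map0, Old, New, Map).
        copy(Old, New, Map0, Map) :-
                functor(Old, F, N),
                functor(New, F, N),
                copy_args(N, Old, New, Map0, Map).

        copy_args(0, _, _, Map, Map) :- !.
        copy_args(N, Old, New, Map0, Map) :-
                arg(N, Old, OldArg),
                arg(N, New, NewArg),
                copy(OldArg, NewArg, Map0, Map1),
                M is N-1,
                copy_args(M, Old, New, Map1, Map).

        lookup_var([], Old, New, [Old-New]).  %  New is a fresh variable
        lookup_var([V-C|Map], Old, C, [V-C|Map]) :-
                V == Old, !.
        lookup_var([P|Map0], Old, New, [P|Map]) :-
                lookup_var(Map0, Old, New, Map).

So copy(f(X,g(X),Y), C) binds C to f(A,g(A),B) with A and B new.
It is slower than an implementor's C or MACRO-10 version would be
(the map lookup is linear), but it is clear and has no
<<<DANGER>>> window.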

     So this is for me a PERFECT example of the UNdesirability
of assert and retract: there is a straightforward operation which
is easy to explain (NewVersion is a renaming of OldVersion in
which all the variables are new), which could have been provided
by the implementor more efficiently than by using the data base,
and where using assert and retract has led to a program which is
harder to understand and more error-prone.  Thank you very much
for this example, Yoav Shoham.

     The other use that example makes of assert and retract is to
maintain a global variable in the data base.  If that's what you
want, fine.  But if you want lazy lists to look like Prolog objects
which you can pass around, share variables with, etc. just like
ordinary lists, then sticking them in the data base is the last
thing you want.  Yoav Shoham challenges "Does anyone have a pure
solution that is correct and efficient?"  Since no specification
is presented, I have no idea what "correct" means.  I can't take
the specification from the code, because it puts the answer in the
data base, which a pure solution can't do.  So here is my first
attempt at lazy lists in Prolog, which I have tested, but only
on two examples.

%   File   : LAZY.PL
%   Author : R.A.O'Keefe
%   Updated: 30 October 1983
%   Purpose: Lazy lists in Prolog.
%   Needs  : apply/2 from APPLIC.PL.

%   Note: this code is "pure" only in the sense that it has no
%   side-effects.  It does rely on 'nonvar' and cuts.
%   The lists are a little too eager to really be called lazy: if
%   you look at N elements, N+1 will be computed instead of N.
%   If you backtrack, the computed elements will be undone just like
%   other Prolog data structures.  "Intelligent backtracking" might
%   be a good thing if lazy lists were to be used a lot.
/*
:- type
        lazy_list(T) --> list(T)/void(T,T).

:- pred
        make_lazy(T, void(T,T), lazy_list(T)),
        head_lazy(lazy_list(T), T),
        tail_lazy(lazy_list(T), lazy_list(T)),
        member_check_lazy(T, lazy_list(T)).
*/
:- public
        make_lazy/3,
        head_lazy/2,
        tail_lazy/2,
        member_check_lazy/2.

:- mode
        make_lazy(+, +, -),
        head_lazy(+, ?),
        tail_lazy(+, -),
        member_check_lazy(+, +).


%   A lazy list is a pair consisting of a normal Prolog list (usually
%   ending with an unbound variable) and a goal which may be used to
%   generate new elements.  The idea is that [X0,X1,X2,...]/R should
%   satisfy X0 R X1, X1 R X2, ...  These objects should only be used
%   as arguments to these predicates.

make_lazy(First, Step, [First|_]/Step).


head_lazy([Head|_]/_, Head).


tail_lazy([_|Tail]/Step, Tail/Step) :-
        nonvar(Tail), !.        %  delete this clause to get logic
tail_lazy([Head|Tail]/Step, Tail/Step) :-
        apply(Step, [Head,Next]),
        Tail = [Next|_].


member_check_lazy(Thing, LazyList) :-
        head_lazy(LazyList, Thing), !.
member_check_lazy(Thing, LazyList) :-
        tail_lazy(LazyList, LazyTail),
        member_check_lazy(Thing, LazyTail).

end_of_file.
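     As a usage sketch (assuming apply/2 from APPLIC.PL, which
appends the extra arguments to Step and calls the result), the
lazy list of positive integers can be made thus:

        plus1(X, Y) :- Y is X+1.

        | ?- make_lazy(1, plus1, L), member_check_lazy(3, L).

The query succeeds, plus1 having been applied twice to compute
the elements 2 and 3.  Note that member_check_lazy will run
forever if the element never appears in the list.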

     Wilkins says "LISP without these [rplaca and tconc] is an
adequate programming language for a novice and still quite
challenging." Just so.  I'd drop "for a novice".  Maybe my
example of rplaca and tconc was ill-chosen (though the student
in question had been writing good Lisp for about a year).
But the claim that "Nearly all programming in a given language
is done by people who are not novices" is true only in the sense
that those people have experience.  It is not always true that
they have understanding.  I have seen a lot of programs written
by experienced people (in Algol, Fortran, PL/I, Pascal, C, and
Lisp) that I would have sworn were written by people who'd never
looked at the manual.  I have never met another Fortran programmer
who had bothered to read the standard (I know they exist, I'm just
saying they're rare).

     As Prolog stands *now*, there is a definite need for assert
and retract.  To deny this would be to condemn myself, as my
programs do from time to time hack the data base.  The trouble
is that people who are accustomed to other languages feel lost
without assignment and pounce on assert and retract with loud
cries of joy, **far too early**. Too early when they are learning.
I have seen someone hacking the data base like this:

    :-  assert(p(X=47+Y)).
    ?-  Y = 12, p(Eqn), write(Eqn), nl.

and being very surprised when he got "_1 = 47 + _2".  He was
expecting "X = 47 + 12".  He had not yet understood that clauses
don't share variables.  (Before criticising the instructor: he
had none.) Too early when writing a program.  You should always
look for an applicative solution FIRST.  Second too.  When an
applicative solution exists, it is usually faster than a data
base solution.

     Here is an example.  Suppose you want to maintain a table
which maps the integers 1..N to terms (for N known when the table
is created), and you want to be able to change its elements.
Most people seem to go for

        make_table(N, Init, T) :-
                gensym(table, T),
                forall(between(1, N, K), assert(table(T, K, Init))).

        fetch(T, N, Elem) :-
                table(T, N, Elem).

        store(T, N, Elem) :-
                retract(table(T, N, _)),        % bounds check
                assertz(table(T, N, Elem)).

first thing.  But in most existing Prologs, the cost of a fetch or
store is O(N+|Elem|), with a large constant.  If you use binary trees
instead, the cost of a fetch or store is O(lg N) with a small constant
(though you do need a garbage collector).  There will of course be
occasions when you need to use the data base, but you will use it more
effectively if you have considered a variety of data structures first.
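     For comparison, here is a binary-tree version (a sketch of
mine, using compare/3 and if-then-else; not from any manual).
Since there is no destructive assignment, tree_store/4 returns a
NEW tree, which shares all the untouched subtrees with the old
one.

        tree_table(N, Init, T) :-       %  balanced tree for 1..N
                build(1, N, Init, T).

        build(Lo, Hi, _, empty) :-
                Lo > Hi, !.
        build(Lo, Hi, Init, t(Mid,Init,L,R)) :-
                Mid is (Lo+Hi)//2,
                M1 is Mid-1, build(Lo, M1, Init, L),
                M2 is Mid+1, build(M2, Hi, Init, R).

        tree_fetch(t(K,V,L,R), N, Elem) :-
                compare(Order, N, K),
                (   Order = (=) -> Elem = V
                ;   Order = (<) -> tree_fetch(L, N, Elem)
                ;   tree_fetch(R, N, Elem)
                ).

        tree_store(t(K,V,L,R), N, Elem, New) :-
                compare(Order, N, K),
                (   Order = (=) -> New = t(K,Elem,L,R)
                ;   Order = (<) -> tree_store(L, N, Elem, L1),
                                   New = t(K,V,L1,R)
                ;   tree_store(R, N, Elem, R1),
                    New = t(K,V,L,R1)
                ).

Both operations fail on an out-of-range N (they fall off an empty
subtree), which gives the same bounds check as the retract
version.  The price is that you must pass the current tree around
instead of naming it once, which is exactly the applicative
discipline.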

     With regard to assert/a/z, retract, recorda/z, instance, erase,
recorded, all I am saying is "please give me instead operations that I
and compiler writers can understand".  I have seen no evidence that
anyone except David Warren, Fernando Pereira, Lawrence Byrd, and Chris
Mellish understands data base hacking in DEC-10 Prolog any better than
I do, which after four years is pretty well but not well enough.  I
have found a lot of people who THINK they understand data base hacking
but whose code shows they are wrong, and quite a few people who are
sure they don't understand it and so stick to simple cases and whose
programs in consequence work.  What I want to see is a collection of
these simple cases given names and definitions and coded efficiently.
The end of the quest will still be a form of data base hacking.  I'm
not all that bothered about assignment, or stepping outside logic.
What I DO object to is being forced to use primitives I don't
understand fully.

                 Fiat ratiocinatio, fiat ratio autem.


End of PROLOG Digest
********************