[net.lang.prolog] Univ Continued, Partitioned Data Bases

OKeefe.R.A.%EDXA@sri-unix.UUCP (11/05/83)

From:  O'Keefe HPS (on ERCC DEC-10) <OKeefe.R.A. at EDXA>

     I am obliged to agree with much of what Uday Reddy says.  But
definitely not all.  I should perhaps apologise for being a little
bit lax about what I meant by "first-order".  The fact that call,
univ, arg, functor, ancestors, subgoal_of can all be transformed
away so easily means that they have first-order POWER.  ancestors
and subgoal_of are a bit remote from logic, and I am quite happy
to concede that they are not any order, and I'd be perfectly
content if I never saw them again.  But the case of functor,
arg, call, and univ is very different.  The "transformation"
required is

                     Just Fill In The Definitions

     That is, any *logic program* using functor, arg, call, name
and univ can be *completed* by adding clauses to define those
five predicates, NO change being made to any of the original
clauses.  The meaning of functor..univ is thus in some sense
dependent on the program, but any fixed program can be completed
to a proper logic program.
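
     To make "Just Fill In The Definitions" concrete, here is a
sketch of what the added clauses look like for a tiny program.
Assume, purely for illustration, that its only function symbols
are f/2 and g/1, its only constants a and b, and its only
predicate p/1.  Then the completion is:

        functor(f(_,_), f, 2).          % one clause per function symbol
        functor(g(_),   g, 1).
        functor(a,      a, 0).
        functor(b,      b, 0).

        arg(1, f(A,_), A).              % one clause per argument position
        arg(2, f(_,B), B).
        arg(1, g(A),   A).

        f(A,B) =.. [f,A,B].             % univ, likewise
        g(A)   =.. [g,A].
        a      =.. [a].
        b      =.. [b].

        call(p(X)) :- p(X).             % one clause per predicate

A different program needs a different set of such clauses, which
is the sense in which the meaning of functor..univ depends on the
program; but no clause of the original program is touched.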

     The *REAL* trouble-maker is 'var'.  Anything Reddy cares to
say about the illogicality of 'var' has my full support.  We could
imagine having predicates %integer, %atom which recognised integers
and atoms; they would then be first-order in the same way that
functor is.  The current definitions aren't quite that, though;
they are

        integer(X) :- nonvar(X), %integer(X).
        atom(X)    :- nonvar(X), %atom(X).

The horrible thing about var is that it can succeed with a given
argument and a short while later fail with exactly the same
argument. Oddly enough, var doesn't have to be a primitive.  If
we have cuts,

        var(X) :-
                nonvar(X), !, fail.
        var(X).

        nonvar(X) :-            % vars unify with 27
                X \= 27, !.     % so does 27, but it
        nonvar(X) :-            % doesn't unify with
                X \= 42.        % 42 (vars do again).

        X \= X :- !,
                fail.
        X \= Y.
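
     The succeed-then-fail behaviour is visible in a single
conjunction:

        ?- var(X), X = 1, var(X).

The first call to var succeeds; after the binding, the second
call, with exactly the same argument, fails.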

It is because of this that I was careful to say that functor, arg,
univ, and call are first-order *in logic programs*.  What they
may be in Prolog I have no idea.

     functor, arg, and univ cause the type checker TypeCh.Pl no
end of trouble.  Basically the problem is that knowing the type
of T in

        arg(N, T, A)

doesn't help you find the type of A, and the elements of the list
yielded by univ have different types.  But then there are other
perfectly good logic programs by anyone's standards that defeat
the type checker.
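
     For example (a made-up term, just to show the point):

        ?- T = person(fred, 42), arg(1, T, X), arg(2, T, Y).
        X = fred,  Y = 42.

Even given the type of T, the type of the third argument of arg
depends on the run-time value of N; and  T =.. [person, fred, 42]
puts an atom and an integer into the one list.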

     The heart of the disagreement between Reddy and me is in the
following paragraph from Reddy's message:

        Prolog has several features that violate referential
        transparency.  Some of them are var, functor, arg, univ
        and call.  To see why, consider the simple example

                1 + 2 =.. [+, 1, 2]

        Since 1+2 denotes a semantic object (the integer 3) its
        syntactic structure should be transparent to the program.
        But using =..  allows the program to look at its syntactic
        structure.  2+1 denotes the same semantic object as 1+2.
        But replacing 1+2 by 2+1 in the above literal does not
        preserve its truthfulness.

Wrong.  1+2 does NOT denote 3.  It denotes the tree +(1,2).  In a
piece of logic, terms do not denote anything but themselves until
you specify an interpretation, and then the interpretation relation
lies OUTSIDE the program.  I am perfectly free to set up a model in
which 1+2 denotes "for all X not both elephant(X) and forgets(X)"
[there is no reason why a model cannot be another logic].  The only
semantic objects (in a computational sense) in Prolog are TREEs,
and if some predicates happen to interpret some trees as if they
were arithmetic expressions, so what?  Certainly there is a Prolog
relation which states "1+2 =:= 3", but I am at liberty to define
another such relation according to which "1+2 =::= 1".  I repeat,
in Prolog and logic programming generally, 1+2 just (computationally)
denotes +(1,2) and any other meaning we may want to give it is our
problem.

     The thing is, logic programs (as currently defined) do not use
paramodulation or demodulation.  Michael Fay's paper in the 4th CADE
looks as though it might yield a clean way of building in equational
theories (he shows how to handle complete sets of rewrites in the
unification algorithm).  In a logic programming language where say
arithmetic was built into the unification algorithm, then indeed it
would be true that "1 + 2" and "3" would be the same thing.  But in
such a system "3" would match "1+X+Y" as well, in at least three ways.
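
     For instance, reading the terms over the natural numbers, the
query

        ?- 3 = 1+X+Y.

would have to succeed with (at least) the three answers X = 0,
Y = 2;  X = 1, Y = 1;  and X = 2, Y = 0.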

     Wayne Christopher seems to have heard of Prolog/KR.  If you
can specify which clause pools are to be searched, and into which
clause pool(s) a new clause is to be inserted, there is no need
to create or destroy clause pools: you think of them as always
existing but as being empty until you put something into them (no
need to create), and you leave it to the garbage collector to page
out pools you haven't used in a while (no need to destroy).

     You don't actually get any extra logical power, as you can
add the name of the data base in which a clause lives as an extra
argument of its head, and so

        call_in(db1 v db2 v db3, foo(X, Y))
becomes
        ( foo(db1, X, Y) ; foo(db2, X, Y) ; foo(db3, X, Y) )
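
     The transformation of the clauses themselves is equally
mechanical.  A sketch, with invented predicates, assuming body
goals are looked up in the same pool as the head:

        foo(X, Y) :- bar(X), baz(Y).            % a clause in pool db1
becomes
        foo(db1, X, Y) :- bar(db1, X), baz(db1, Y).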

     I must admit that I don't see how this gives us "more control
over evaluation."  I'm not saying it doesn't, just that I don't
understand.  Could you go into a bit more detail, Wayne Christopher?
Actually, I'm not at all sure that I *want* more control over
evaluation.  Prolog has now been taught in several Universities
for periods ranging from one to five years (that I know of) and
has been taught to schoolchildren for about three.  All the
Prolog teachers I have talked with agree that

        the LOGIC part of Prolog is easy to teach
        the CONTROL part of Prolog is hard to teach.

Remember also that CONNIVER was eventually abandoned...

     I have used the DEC-10 Prolog "recorded" data base a fair
bit.  That is in fact why I want a replacement!  I can testify
that a partitioned data base, where you have to figure out for
yourself which holes to put things in, is no substitute for a
good indexing scheme.  If you can request an index on any or all
argument positions of a predicate (IC-Prolog does this), and
especially if you can specify a multi-level index on some
argument positions, you can simulate any partitioning system you
like.  Efficiently.  If you want to handle "very large data
bases", I would suggest John Lloyd's dynamic hashing.  It has
been used in at least two Prolog interpreters already.  His
reports are available from Melbourne University.