[net.lang.prolog] PROLOG Digest V3 #14

RESTIVO@SU-SCORE.ARPA (03/27/85)

From: Chuck Restivo (The Moderator) <PROLOG-REQUEST@SU-SCORE.ARPA>


PROLOG Digest           Wednesday, 27 Mar 1985     Volume 3 : Issue 14

Today's Topics:
         Implementations - C-Prolog Bug & New Engine & RF-Maple,
                & Cuts & Numbervars/3 & Semantics & CP
----------------------------------------------------------------------

Date: Tue 26 Mar 85 14:57:11-PST
From: Byrd@SRI-AI.ARPA
Subject: C-Prolog bug

(R.A.O'Keefe using Lawrence's account)

This concerns the bug in

        f(Y) :- (true,! ; X = 0), Y =< 7.
        ?- f(3).

I have no idea what the symptom may be, as this
works just fine in version 1.5a.edai.  The code
for ;/2 in 1.5a.edai does indeed use $call rather
than $hidden_call, so if that isn't the fix I'm
in trouble too.

------------------------------

Date: Tue 26 Mar 85 14:54:40-PST
From: Byrd@SRI-AI.ARPA
Subject: New Engine question

(R.A.O'Keefe using Lawrence's account)

No, put_unsafe_value is NOT needed for ALL
occurrences of an unsafe variable in the last
goal, only for the first.  The reason is that
put_unsafe_value SIDE-EFFECTS the Yn variable
in question, so that from then on it is not unsafe.
In fact, if your compiler is the least bit clever,
after having done put_unsafe_value Yn,Ai you can
thereafter copy the value of Ai directly instead of
looking at Yn again.
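
For concreteness, here is a rough sketch of how a clause with
an unsafe variable might compile, using the instruction names
from Warren's "New Engine" paper; the clause and the register
assignments are my own illustration, not taken from the paper:

        p :- q(X), r(X, X).

        %  X occurs in two goals, so it is a permanent variable
        %  (Y1); it is unsafe because it never occurs in the head.
        %
        %      allocate
        %      put_variable Y1,A1      % create X in the environment
        %      call q/1,1
        %      put_unsafe_value Y1,A1  % FIRST occurrence: globalize X
        %      put_value A1,A2         % second occurrence: copy from
        %                              % A1, no need to look at Y1
        %      deallocate
        %      execute r/2             % last call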

------------------------------

Date: Mon, 25 Mar 85 10:19:12 pst
From: Paul Voda <Voda%ubc.csnet@csnet-relay.arpa>
Subject: This is a logic language

This is a note commenting on Uday Reddy's understanding
of logic languages.

Logic does not go beyond predicate calculus. Logic is predicate
calculus (at least classical logic is). Lambda calculus can be
presented as a first-order theory, exactly as ZF is a first-order
presentation of set theory.

Not every rewriting system is a logic language. Nobody would call
Turing machines, Post systems, Markov algorithms, or even
von Neumann computers logic languages. A logic language must
bear a very close relation either to predicate logic with
terms and formulas, or else to so-called term logic
(Hermes), where formulas are just special cases of terms
(the ones yielding boolean values). Logic programming languages
based on functions are usually term logics.

What makes a programming language a logic language
is the interpretation (model) assigning the standard
logical functions to connectives, quantifiers and identity.
Here Uday is right: the syntax does not matter very much,
but the semantics does.  There is no such straightforward
interpretation for the languages mentioned above, ergo they
cannot be called logic languages.

The traditional functional languages indeed compute with
closed (Uday calls them ground) terms. The solving aspect,
where one assigns values to free variables occurring in
programs, is, as Uday observes, the very characteristic
aspect which makes an otherwise functional language
a logic programming language. There are quite a few
languages of this variety: FGL+LV of Lindstrom, Qute of
Sato, Eqlog of Goguen, Tablog of Manna and Malachi, etc.

That all these languages use unification (and/or narrowing)
proves only that unification is one method of producing
output values. But to go from this fact to the claim Uday
makes, that without unification there cannot be output
variables, is a rather strong step. Uday effectively says that
unification (or narrowing) is the only way to create
output values.  RF-Maple does not use unification, and yet
it is a logic programming language with output variables. The
absence of unification is not the only feature
distinguishing RF-Maple from the languages above (with the
possible exception of Qute).  The other one is that
RF-Maple has explicit control, allowing the programmer
to control the process of derivation.
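
For concreteness, a standard illustration (mine, not from the
original note) of unification acting as the output mechanism in
Prolog: the familiar append/3 contains no assignment, yet its
third argument receives a value.

        append([], Ys, Ys).
        append([X|Xs], Ys, [X|Zs]) :-
                append(Xs, Ys, Zs).

        %  ?- append([1], [2], Zs).
        %  Zs = [1,2]
        %
        %  Zs is bound purely by unification with the clause heads.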

To quote the last sentence of Uday's note: "I wonder how
it [i.e. RF-Maple] can [create output variables], without
using unification". Here one can offer a simple piece of
advice: "by reading the paper on RF-Maple".

-- Paul Voda

------------------------------

Date: Mon, 25 Mar 85 11:05:37 pst
From: Paul Voda <Voda%ubc.csnet@csnet-relay.arpa>
Subject: Cuts, retracts, commits

The ongoing discussion in this Digest of the uncertainties
and ambiguities of cuts and retracts in Prolog, as well
as of the behaviour of commits and read-only annotations
in Concurrent Prolog, is proof that there is something
unfinished in the definitions of both languages. A cut-free
Prolog has a rigorous semantics (Kowalski, van Emden). Cuts,
retracts, commits and read-only variables are present in the
practical versions of both Prologs only because
control is quite important in practice.  The designers
and implementors of both Prologs clearly recognized this
fact, but they failed (even operationally) to completely
describe what they really mean by the control features.
The standards of the design and implementation of these
languages do not differ very much from those of imperative
languages (Algols, Pascals, Adas). Yet we claim that the
logic programming languages are superior (as far as the
semantics goes) to the imperative ones. To define the
operational semantics of a language by a concrete interpreter
is clearly not enough. One should come up with a complete
definition of control in both Prologs before one can call
them logic programming languages with a clear conscience.

-- Paul Voda

------------------------------

Date: Tue 26 Mar 85 15:28:12-PST
From: Byrd@SRI-AI.ARPA
Subject: "soft" cuts

(R.A.O'Keefe using Lawrence's account)

[a] They are trivial to implement
[b] To my knowledge, at least two Prolog dialects
already have a closely related feature, where the
soft cut refers to an explicit disjunction.  One
is LM Prolog, where the construct is called "cases".
The other is C Prolog; I forget just when I put it
in, but 1.5a.edai certainly has it.  (Your version
of C Prolog has it if 'cases' is defined as a prefix
operator.)

The syntax in C Prolog is

        cases
            Test1 -> Branch1
        ;   Test2 -> Branch2
        ;   ...
        ;   Testn -> Branchn
        ;   /* otherwise */ BranchElse

    where this has the "logical" reading

        (   Test1, Branch1
        ;   \+ Test1, Test2, Branch2
        ;   ...
        ;   \+ Test1, \+ Test2, ..., Testn, Branchn
        ;   \+ Test1, ..., \+ Testn, BranchElse
        )

and the "procedural" reading is the same except that
the negated forms are not actually executed.  To be
specific, the tests may backtrack.

[c] However, I have never yet found a genuine use for
this facility. You see, the negations themselves only
really have a logical reading when they are ground
(well, you can relax this, but you still end up with
something having no solutions or one only).  That being
the case, the tests aren't going to backtrack in the
first place, so the normal if-then-else (just delete
the word "cases") is good enough.  Could either C
Could either Chris Moss or Peter Ludemann (or someone
else) supply an example of why this would be useful?
If the example can be done as clearly using the existing
machinery, that will not do.  Just because the
implementation of something is trivial is no reason to
put it in.

[d] A note on terminology: I've been using the phrase
"strong cut" to refer to "!" (because it is powerful
enough to force its way through intervening conjunctions
and disjunctions and cut right back to the parent goal)
and "weak cut" to refer to "->" (because it is so weak
it can't fight its way out of a paper bag, otherwise
known as a disjunction).  Having implemented "cases" over
a year ago and never having found a use for it, I suggest
that we either call it "the 'cases' construct", using the
original name from LM Prolog, or else give it no name at all.

[e] Ludemann's claim that "last call optimisation ... can
be fully determined only at run time" is in error.  His
example is ill-chosen, as in

        append([], L, L).
        append([H|T], L, [H|R]) :-
                append(T, L, R).

the instantiation pattern of the arguments has nothing to do
with the fact that the call to append/3 is the last goal in the
last clause of a predicate.  Never mind whether we ever tried
the first clause or not, THIS goal is in a determinate context.
So his example is irrelevant to his claim.  It is indeed true
that you cannot reclaim the local stack frame at the point of
doing a last call unless the parent goal is determinate.  But
you can still obtain some benefits, e.g. you can set the new
goal up so that it will return where the parent was to return,
thus making subsequent returns faster.  There are other things
you can do as well.  Alert readers of David Warren's "New
Engine" paper will have spotted that ALL last calls use the
same instruction, and the only difference is whether or not
put_unsafe_value needs to do anything.  (I had better say at
this point that although I now work for Quintus I do not yet
understand exactly what the Quintus version of that instruction
does.)

------------------------------

Date: Tue 26 Mar 85 15:54:10-PST
From: Byrd@SRI-AI.ARPA
Subject: numbervars/3

[1] The source code for numbervars/3 can be found in

        {SU-SCORE}PS:<PROLOG>METUTL.PL

This file was also broadcast to net.lang.prolog a long time
ago, and its contents were discussed and included in my
Edinburgh DAI Working Paper "Coding Metalogical Operations
in Prolog".

[2] A manual which suggests that the bindings produced by
numbervars look like integers must be a really old one;
it has never been the case that the output resembled
integers.  At one time, despite numbervars binding
variables to $VAR(N), these terms were written as _N, so
you'd get variables coming out like _0, _1, ...

[3] But that was an obvious loser.  It is entirely
possible for a term to have some things which were
bound by numbervars, some things which are still
variables, and some constants which start with
capital letters.  We would like to be able to tell
these apart.  The fact that the DEC-10 Prolog output
routines (except display/1) write $VAR(_) terms as
capital letters followed by digits was my idea;
it means that

        X = f(U,V), numbervars(X,0,_),
        writeq(p(W,X,'A'))

prints

        p(_273,f(A,B),'A')

so that you can tell what is going on.

[4] Source code for the output routines, including
the bit that hacks this feature, may be found in

        {SU-SCORE}PS:<PROLOG>WRITE.PL

In fact there is a complete kit of I/O routines;
they'll be updated as soon as I figure out how to
get the files to SU-SCORE.

Bill Clocksin's version using '_' WILL NOT WORK
with these output routines.  But then he has his own
output routines with which it does work.  Anyone
hacking numbervars/3 in a Prolog which is so <insults
deleted> that it hasn't already got it would be well
advised to copy the one in METUTL.PL; all the other
code that I've contributed to the Prolog library
assumes $VAR(_), and that is true in DEC-10 Prolog,
C Prolog, and Quintus Prolog.
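
For readers who just want the idea, here is a minimal sketch of
the $VAR(_) scheme; it is my own illustration, not the library
code, so copy METUTL.PL for real use:

        %   numbervars(?Term, +N0, -N)
        %   Binds each unbound variable in Term to '$VAR'(K),
        %   with K counting up from N0; N is N0 plus the number
        %   of variables bound.
        numbervars(Term, N0, N) :-
                (   var(Term) ->
                        Term = '$VAR'(N0),
                        N is N0 + 1
                ;   atomic(Term) ->
                        N = N0
                ;   functor(Term, _, Arity),
                    numbervars_args(0, Arity, Term, N0, N)
                ).

        numbervars_args(A, A, _, N, N) :- !.
        numbervars_args(I0, A, Term, N0, N) :-
                I is I0 + 1,
                arg(I, Term, Arg),
                numbervars(Arg, N0, N1),
                numbervars_args(I, A, Term, N1, N).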

[5] Allen van Gelder's comment that "numbervars fails
if you give it too small a range" is true but misleading;
it will also fail if you give it too large a range.
The idea is that after calling numbervars(Term, Before,
After) the number of distinct variables which used to be
in the term is After-Before.  Normally, Before should be
passed as the number of variables which have already been
instantiated this way (so that this call won't introduce
bogus aliases) and After should be a variable, so that
you can do

        numbervars(X, 0, NV1),
        numbervars(Y, NV1, NV2),
        numbervars(Z, NV2, _)

I've only ever come across one usage of numbervars/3
where it makes any sort of sense to pass a constant
as the third argument:

        ground(Term) :-
                numbervars(Term, 0, 0).

In fact, in the previous example, I would normally
write
        numbervars(.(X,Y,Z), 0, _)

------------------------------

Date: Tue, 26 Mar 85 10:46:12 pst
From: Paul Voda <Voda%ubc.csnet@csnet-relay.arpa>
Subject: Semantics of CP

This is a comment on the operational semantics of CP
as sketched by Uday Reddy in issue 3#13 of the Prolog
Digest. Variables in classical logic serve only one
purpose: to stand for (or to range over) the universe
of discourse. Thus no open predicate (i.e. a predicate
containing free variables) denotes truth or falsehood
unless the free variables are fixed by an assignment
of values (in computer science called an environment)
or replaced by constants denoting individuals.  Both
assignments and replacements by constants are widely
used in the interpretation of formal theories in logic.

Now, we are supposedly in logic programming. Hence
there is no need to rediscover this basic logical fact
by proposing a "substitution semantics for non-ground
terms". The problem with the predicate "var(X)" (we cannot
substitute for X) is a well-known confusion of use and
mention. The variable "X" is simply mentioned in the
predicate "var". The predicate is thus an intensional
predicate, on a par with "John believes that ...".
The standard method of dealing with intensions in logic
is to go to Goedel numbers. Thus we have Var("X") {I do
not have the corner quotes of Quine, so I use ordinary
quotes} and no substitution is possible. If an operationally
minded Prologist disagrees and insists on using "var" as if
it were a normal predicate, then he is definitely not a
logic programmer but rather a logic hacker.
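
A concrete symptom of this confusion (a standard illustration,
not from Voda's note): var/1 breaks the commutativity of
conjunction, which no ordinary predicate can do.

        ?- var(X), X = 1.       % succeeds: X is unbound when tested
        ?- X = 1, var(X).       % fails: X is already bound to 1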

Uday's proposal to treat read-only annotations in a
denotational way does not really solve the problem.
It escapes to the metatheory, and the vital connection
to predicate logic is lost: variables do not range
over the universe. Here I cannot resist a small attack
on traditional denotational semantics. Is it not
significant that whenever we have control different from
head-normal reduction, denotational semantics is in
trouble? All attempts to describe non-standard control
denotationally (parallelism, unification, read-only
annotations, etc.) are hopelessly complicated (if adequate
at all). This makes them useless in any practical reasoning
about our programs.

I think that the solution lies in recognizing the fact
that computations are proofs (we pay lip service to this
anyway). Thus operational semantics can be explained by
a formal theory permitting exactly those derivations we
are willing to compute with. This is the point I try to
explain in my paper "A View of Programming Languages as
Symbiosis of Meaning and Computations" in vol. 3 no. 1 of
New Generation Computing.  I did not attempt to set up a
formal theory for computations of CP, but I think it is
not too complicated.
The read-only annotation "X?" of a variable will have to be
explained as a normal one-place (postfix) function symbol
defined as "x? = x", i.e. the identity. This also admits
syntactically and semantically valid terms such as "(3 + x)?".
(There is no way in classical logic to restrict the
domain of a function to variables only unless we are in
the meta-theory, which we clearly do not want to be.) The
axioms and the rules of inference of the operational
semantics can be rigged up in such a way that formulas with
these non-intended annotations are not derivable.  Since
the terms "x" and "x?", although semantically equal, are
two different terms, the Symbiosis approach allows one to
specify transformations on "x?" which do not apply to "x".
Remember that in classical logic axioms are just
sequences of symbols. It is true that they are valid
formulas in any interpretation, but we use the axioms
only syntactically in our proofs (i.e. computations).
I think that it is perfectly possible to give a meaningful
set of axioms describing just the intended uses of
read-only annotations and ignoring the unintended ones.
But of course our axioms will then be incomplete. We know,
however, that explicit control precludes any
completeness anyway.

-- Paul Voda

------------------------------

End of PROLOG Digest
********************