[net.lang.prolog] PROLOG Digest V3 #11

RESTIVO@SU-SCORE.ARPA (03/22/85)

From: Chuck Restivo (The Moderator) <PROLOG-REQUEST@SU-SCORE.ARPA>


PROLOG Digest            Friday, 22 Mar 1985       Volume 3 : Issue 11

Today's Topics:
                    Puzzles - Oliver's Confusion,
      Implementations - Denotational semantics & RF-Maple & CP & Cuts
----------------------------------------------------------------------

Date: Wed, 20 Mar 85 09:48:24 PST
From: "David G. Cantor" <DGC@UCLA-LOCUS.ARPA>
Subject: If f(f(x)) = x^2 - 2, what is f(x)?

Q: "Can a computer solve the query: If f(f(x)) = x^2 - 2,
what is f(x)?  If so, how?"

The solution is essentially contained in the
article by Michael Restivo in the March 20
issue of Prolog Digest.

The nth Tchebycheff Polynomial may be defined
as
                         n        n
        T   (x)   =    u    +   v   ,
          n

where  u  =  (x + d)/2,  v   =  (x - d)/2, with
       d  =  sqrt(x * x - 4).

It is easy to check that, when n is an integer,
the powers of d cancel and hence that the above
functions really are polynomials.  These
polynomials satisfy numerous identities.  The
pertinent one here is that

        T   (T  (x))   =   T      (x) .
          m    n             m * n

This can be verified by elementary algebra (note
that u * v = 1).  It certainly holds for all complex
numbers m and n, subject to choosing appropriate
branches of the mth and nth powers, as well as the
square root, in the complex plane.

The function

        f(x)   =   T        (x)
                     sqrt(2)
then satisfies

        f(f(x)) =  T         (T         (x))
                     sqrt(2)    sqrt(2)

                =  T                   (x)
                     sqrt(2) * sqrt(2)

                =  T   (x)
                     2

                =   x * x   -   2,

and hence solves the original problem.
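
This composition can be checked numerically (a sketch in
Python, my own illustration rather than anything from the
article; it assumes real x > 2 so that d is real, u > 1,
and the principal real powers suffice):

```python
import math

def T(n, x):
    # T_n(x) = u^n + v^n with u = (x+d)/2, v = (x-d)/2,
    # d = sqrt(x^2 - 4).  Valid as written for real x > 2.
    d = math.sqrt(x * x - 4.0)
    u = (x + d) / 2.0
    v = (x - d) / 2.0          # note u * v = 1
    return u ** n + v ** n

def f(x):
    return T(math.sqrt(2.0), x)

x = 3.0
print(f(f(x)))        # approximately 7.0
print(x * x - 2.0)    # 7.0
```

The identity holds on all of C with suitable branch
choices, but the real case x > 2 avoids the branch
bookkeeping.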

As to how a computer could solve this:

It need only search the mathematical literature
to find a paper by Michael Fried giving all
solutions to the functional equation (due to
Issai Schur):

F   (F   (x))   =   F        (x) .
  m    n              m * n

Fried shows that, under very general conditions,
the solutions are either
               n
F   (x)  =   x       or      F   (x)   =   T   (x) ,
  n                            n             n

as given above.  The computer then need only recognize
that the given function  x * x - 2  is

T   (x) .
  2

Alternatively it could recognize the latter first,
and be led to study identities of the Tchebycheff
polynomials.

-- David G. Cantor

------------------------------

Date: Wed, 20 Mar 85 11:05 CST
From: Bill Wood <Woodw@HI-MULTICS.ARPA>
Subject: Semantics Inquiry (v3-9)

The article "Formal Vienna-Definition-Method
models of Prolog", by J.F. Nilsson, is the first
article in Chapter 4 of *Implementations of Prolog*,
J.A. Campbell (ed.), Ellis Horwood Series in
Artificial Intelligence, Ellis Horwood Limited,
Halstead Press (a division of John Wiley and Sons),
1984.  It is available in paperback.  The article
develops denotational-semantics models of Prolog,
starting from an abstract model of unification and
variable renaming, through an applicative model of
the procedural interpretation of Prolog and a couple
of storage-structure models, and ending in a discussion
of imperative models suitable for implementation
in, for example, Pascal.

-- Bill Wood

------------------------------

Date: Mon 18 Mar 85 13:54:37-MST
From: Uday Reddy <U-REDDY@UTAH-20.ARPA>
Subject: Denotational semantics

The conventional van Emden-Kowalski style semantics
for Horn clauses is given in the domain called Herbrand
Base, which is essentially the domain of functions

        B1 = ground-tuples -> 2-bool

where 2-bool is {bottom, true}.

To talk about failure, we need to use the domain

        B2 = ground-tuples -> 3-bool

where 3-bool is {bottom, true, fail}.  Here "fail"
denotes inferrable undefined-ness whereas "bottom"
denotes nontermination.  One can specify sequential
"and" and sequential "or" operations in 3-bool thereby
formalizing the sequential operational semantics of
Prolog.  But, when the sequential "or" operation is
used, ":-" can no longer be interpreted as implication.
See Mycroft, Logic programs and many-valued logic,
1st STACS, Springer-Verlag, 1984 for more details.
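
The sequential connectives on 3-bool can be sketched as
follows (a toy Python model of my own, not taken from
Mycroft's paper; the right operand is a thunk, to capture
left-to-right evaluation):

```python
BOTTOM, TRUE, FAIL = "bottom", "true", "fail"

def seq_and(p, q):
    # Sequential "and": the left conjunct is evaluated first,
    # so its nontermination (bottom) or failure propagates.
    if p == BOTTOM:
        return BOTTOM
    if p == FAIL:
        return FAIL
    return q()                 # p == TRUE: evaluate the right conjunct

def seq_or(p, q):
    # Sequential "or": alternatives are tried left to right,
    # so a diverging first alternative hides a succeeding one.
    if p == BOTTOM:
        return BOTTOM
    if p == TRUE:
        return TRUE
    return q()                 # p == FAIL: try the next alternative

# The asymmetry that breaks reading ":-" as implication:
print(seq_or(TRUE, lambda: BOTTOM))   # "true"
print(seq_or(BOTTOM, lambda: TRUE))   # "bottom"
```

The last two lines show that sequential "or" is not
commutative, which is exactly why the implication reading
of ":-" is lost.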

But, to really talk about Prolog's operational semantics,
we need more involved domains:

N1 = non-ground-tuples -> substitutions -> 2-bool
N2 = non-ground-tuples -> substitutions -> 3-bool

We can inject B1/B2 to N1/N2 by an injection h
        h p n s = p s(n)

So, pure Horn clauses can be given semantics in B1/B2,
and it gets naturally extended to N1 and N2.  The
Jones-Mycroft semantics uses a domain similar to N2, but
with list[substitutions] instead of [substitutions -> 3-bool].
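
The injection h can be sketched concretely (a toy Python
model of my own; substitutions are modelled as dicts from
variable names to ground values):

```python
def p_eq(t):
    # A B-style predicate over ground tuples:
    # "the two components are equal".
    return "true" if t[0] == t[1] else "fail"

def h(p):
    # h p n s = p s(n): apply the substitution s to the
    # non-ground tuple n of variable names, then consult
    # the ground-level predicate p.
    def lifted(n):
        def at(s):
            return p(tuple(s[v] for v in n))
        return at
    return lifted

q = h(p_eq)
print(q(("X", "Y"))({"X": 1, "Y": 1}))   # "true"
print(q(("X", "Y"))({"X": 1, "Y": 2}))   # "fail"
```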

Features generally considered impure, such as cut,
var, etc., force us to remain in N1/N2 instead of the
simpler domains B1/B2.  I believe that modes,
annotations, etc. can also be given semantics in N2,
but nobody has done so yet.
But, interestingly, sequential operational semantics
doesn't require the more involved domains N1/N2.

-- Uday Reddy

------------------------------

Date: Wed, 20 Mar 85 22:10:49 pst
From: Paul Voda <Voda%ubc.csnet@csnet-relay.arpa>
Subject: RF-Maple Horn clauses and negation

Vijay Saraswat of CMU should be publicly commended
for his courage in questioning the generality of
the sacrosanct Horn clauses.  After all, he is expressing
his doubts to the most partisan of audiences.  His
contribution to Prolog Digest V3 #10 encouraged this
note about my language RF-Maple, presented at the Fifth
Generation Conference in Tokyo (November 1984).

RF-Maple is a true union of two former languages of
mine, R(elational)-Maple and F(unctional)-Maple.  The
logical language R-Maple was designed in the spring of
1983 as a relational language (concurrently with, or
perhaps a little before, Tablog).  It has explicit
control and is based on full first-order logic.  It
contains both sequential and parallel connectives of
conjunction and disjunction.  It has negation and
explicit existential quantifiers.  The effect of output
variables is achieved by "assignments".  Yet the
language is declarative, with absolutely rigid
semantics.  The interested reader is referred to the
just-appearing issue of New Generation Computing
(Vol 3 No 1), where it is described as the language R
in my paper "A View of Programming Languages as the
Symbiosis of Meaning and Computations".  Sadly enough
- the year 1983 being the year of Concurrent Prolog -
R-Maple was rejected at POPL-84 and another conference,
so it was published only as a research report of the
University of British Columbia, Vancouver (August 1983).

R-Maple can do everything both Prologs can, including
the cute stream-oriented cooperation of C-Prolog
processes.  And it has a well-behaved negation.  R-Maple
does not use unification; predicates are invoked by
substitution, which can be implemented (using
environments) faster than unification.  In retrospect,
I think that the absence of unification, going
evidently against the fad, may have contributed to the
rejection of R-Maple.

The functional language F-Maple was designed in December
1984 as a typed programming language with ML-like types.
It has only four constructs.  The data types are specified
by CF-grammars, giving the user extensible syntax for the
type constructors.  This extensibility also applies to the
definition of functions, which are freely specified by
grammar productions.  The two additional constructs
(besides the productions for data types and function
headings) are productions for variables and for the
case construct.  F-Maple, being completely specified by
the user-definable grammars, uses a structure editor
(menu-driven, with menus extracted from the grammar
productions) to construct both the programs and the data
required as input from the terminal by the running programs.

F-Maple was designed as integrated with its own environment
(operating system), which takes care of files.  Files are
simply typed data structures brought in and out of secondary
store by a virtual-memory scheme.  Needless to say, F-Maple
was rejected from the LISP and Functional Programming
Conference 1984.  Fortunately, the union of the two
languages, RF-Maple, was accepted in Tokyo.  But such is
the spell of Horn clauses and unification that it
apparently went unnoticed.  Here I refer to the exhaustive
discussion of functions cum relations in the Prolog Digest.

This leads me back to Vijay's note again.  He claims
that, having the full predicate calculus, there might
not be models for some of our "recursive" predicates,
as for instance P(x) iff not P(x).  This is true only
if one accepts arbitrary "definitions" of predicates as
axioms.  My Symbiosis paper (the one in New Generation
Computing) presents a first-order Theory of Pairs (TP)
axiomatizing essentially the S-expressions of Lisp.  TP
is equivalent to Peano arithmetic and thus it is
consistent (i.e. it has a model).  Definitions of new
predicates are possible only by so-called conservative
extensions.  Conservative extensions of theories have
been used in logic for at least half a century, yet
they have gone absolutely unnoticed in computer science
(just about the only exception is the system of Boyer
and Moore).  The gradual extension of theories
naturally corresponds to the gradual addition of new
functions in programming.  Moreover, conservative
extensions guarantee the existence of a model.  There
is no need to constantly set up a new model for each
set of axioms.  This is what worries Vijay (and me too)
about Tablog.

Finally, Vijay has to be commended again for saying:

"...one has to introduce more control in order to write
programs  efficiently. The art of designing a logic
programming language is  basically concerned with finding
a delicate balance between expressibility and control ...."

This is exactly what the explicit control in RF-Maple is
about.  The relationship between control and meaning is
discussed in depth in the Symbiosis paper.  Obviously,
by having control, we deliberately abandon completeness.
But this does not mean that we cannot reason about the
termination of our programs.  The Symbiosis paper
advocates the use of a full-power first-order theory
(such as TP), with unlimited quantifiers and with
induction, to allow reasoning about programs.  The
control (operational semantics) is defined by a
subtheory of the meaning theory in such a way that the
proofs in the subtheory exactly correspond to the
computations by a computer.  The control subtheory
restricts the full deductive power of the meaning
theory in such a way that the programs can be
efficiently executed.  The Symbiosis paper discusses
three functional languages (having respectively
unrestricted, applicative, and lazy evaluation) as well
as two logic programming languages (a Prolog-like
language and R-Maple).

------------------------------

Date: Thu, 21 Mar 85 10:31:59 pst
From: John Gerald Cleary <Cleary%Calgary.cdn%ubc@csnet-relay>
Subject: PROLOG Digest   V3 #9

I agree with Jacob Levy that it is certainly time we
sorted out the semantics of Concurrent Prolog.

First, concerning the problem of ? annotations in the
heads of clauses: Jacob says he doesn't want the status
of a variable to become read-only in a delayed fashion
on a distributed system.  I have running a distributed
system which solves this particular problem (it was
much easier than a lot of other problems).

An example program may help:

Goal:   ...a(X), b(X), ...

a(Z?):- true | some things which better have references to Z.

b(1).

In a sequential implementation (that is, any in which
all bindings are seen immediately by everyone) either a
or b is executed first.  If a first, then X is bound to
Z?, becoming read-only, and b(X) suspends.  If b first,
then X is bound to 1 and a suspends.

If, however, the goal is

        ... a(X)@hither, b(X)@yon ...

where hither and yon are distant processes with their
own local copies of X, then both may succeed locally
and you have X bound to both 1 and Z?.

The approach I have taken is that this is fine, but
that when the bindings meet, the unification suspends.
I generate a ghost process unify(Z?,1) which will
suspend until someone somewhere binds Z.  This effect
can be achieved locally by rewriting the goal as

        ... a(X1), b(X2), eq(X1,X2), ...

eq(X,X).

Then even in a sequential simulation of concurrency
all possible things can happen depending on the order
that a, b and eq are executed.

This raises a serious problem (apart from the problem
of what CP is).  I want to write distributed systems,
and I don't want the semantics of my programs to change
when I run them distributed.  Effects such as the above
cannot occur in any of the CP interpreters I know of,
as all distribute their bindings immediately to everyone.

Udi advises us to trust in our debuggers and interpreters
and not in God.

It seems that any trust in the debuggers is sadly misplaced.
A good rule for a debugging system for concurrent programs
is that if it can happen in any semantically legal system
then it can happen in your debugging system -- at least
then it is possible to find your bugs.

It seems that unification should NOT be seen as an atomic
operation, but as one whose individual steps occur
concurrently.
This is reinforced by the following problem:

Goal:           a(X?,X)

                a(1,1):- true.

As pointed out by Vijay Saraswat, this suspends if you
unify left to right but doesn't if you unify right to
left.  (I wonder if a Hebrew implementation would
differ here (-:))  The correct solution (in my opinion,
anyway) is to 'partially' suspend the attempt at
unification at the point where unify(X?,1) is
attempted, but to continue with the second part
unify(X,1), which will succeed, waking the earlier
unify, which also succeeds, so that ultimately there is
no suspension.  To do this I essentially split the
unification into concurrent steps.  The result gives an
order-independent unification.  Code for my unify
routine follows (it works, or at least has given me no
trouble for a year now, but is slower than Udi's
original by a factor of about 2!).  It also handles
(one of) Vijay's other concerns: unify(X?,X) is
special-cased and succeeds.

Question: if unification is treated concurrently like
this, should the implicit sequencing between the
unification in the head of a goal and the execution of
the guards be maintained, and does it make any
difference to the semantics?  To put it another way,
are the following two clauses semantically identical:

a(1,foo):- some guard | some body.

a(X,Y):- unify(X,1), unify(Y,foo), some guard | some body.

I think as a point of principle that they ought to be.


% This can replace the original unify in Ehud Shapiro's 1983
% paper, "A Subset of Concurrent Prolog and Its Interpreter".

unify(X,Y):-
        u(X,Y,Head-[mark|Tail]),
        unify1(Head-Tail,changed).

unify1([mark|_]-_,suspended):- !, fail.
unify1([mark]-[],Flag):- !.
unify1([mark|Head]-[mark|Tail],changed):- !,
        unify1(Head-Tail,suspended).
unify1([u(X,Y)|Head]-Tail,Flag):-
        u(X,Y,Tail-NewTail),
        check_changed(u(X,Y),Tail-NewTail,Flag,NewFlag),
        unify1(Head-NewTail,NewFlag).

% NewFlag stays 'suspended' only when the pair was re-emitted
% unchanged; any other outcome counts as 'changed'.  The last
% clause deliberately ignores the difference list: matching
% [Z|Tail]-Tail against an empty (T-T) difference would bind
% the open tail to a cyclic term.
check_changed(_,_,changed,changed).
check_changed(XY,[Z|Tail]-Tail,suspended,suspended):- XY == Z, !.
check_changed(_,_,suspended,changed).

u(X,Y,H-H):-   var(X), !, deref(Y,Y1), u0(X,Y1).
u(X,Y,H-H):-   var(Y), !, deref(X,X1), u0(Y,X1).
u(X,Y,H-T):-   deref(X,X1), deref(Y,Y1), u1(X1,Y1,H-T).

u0(X,Y):- nonvar(Y), Y=Y1?, X==Y1, !.
u0(X,X).

/*Because of deref X&Y are both free variables*/
u1(X?,Y?,H-H):- X==Y, !.
u1(X?,Y,[u(X?,Y)|T]-T):- !.
u1(X,Y?,[u(X,Y?)|T]-T):- !.
u1([X|Xs],[Y|Ys],H-T):- !, u(X,Y,H-T1), u(Xs,Ys,T1-T).
u1([],[],H-H):- !.
u1(X,Y,H-T):- X=..[F|Xs], Y=..[F|Ys], !, u1(Xs,Ys,H-T).

deref(X,Y):- nonvar(X), X=X1?, nonvar(X1), !, deref(X1,Y).
deref(X,X).

------------------------------

Date: Wed, 20-Mar-85 12:35:53 PST
From: cdsm@icdoc.UUCP (Chris Moss)
Subject: Does the cut do too much?

Logically, though not in implementation terms,
the standard cut/slash primitive in Prolog does
two things:

1. It prevents backtracking to any calls in this clause.
2. It prevents the evaluation of other clauses for the
   same predicate.

What I would like to ask Prolog implementors is: are
there any major problems in providing a primitive which
does the second of these without the first?

There's a paper by Smolka ("Making Control and Data Flow
in Logic Programs Explicit", Gert Smolka (Cornell), ACM
Symposium on LISP and Functional Programming, 1984,
pp 311-322) which proposes replacing the standard cut by
this 'soft' cut and a 'functional call' primitive, which
between them seem to cater for all the reasonable uses
of cut.  I am at the moment asking for something more
modest (provide it in addition), but I need to motivate
the advantages of such a primitive.

It is impossible at the moment to provide a test which
is fully equivalent to adding the negation of the test
to subsequent clauses:

e.g.    A if test & then.
        A if not(test) & else.

is NOT equivalent to:

        A if test & ! & then.
        A if else.

because if 'then' backtracks into 'test', the second
formulation does not yield extra answers.  The 'soft'
cut does the trick fine.  In general it is impossible,
with cut, to evaluate all the solutions to a subproblem
while distinguishing the case in which there are no
answers from the case in which there is one or more.
An obvious example where this is desirable is a
supervisor which prints "no answers" in the first case
and the answers in the second.  We usually have to
resort to "no (more) answers" or similar.  Again, the
soft cut is adequate.
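
The difference can be sketched operationally (a toy
Python model of my own, treating each goal as a
generator of solutions; the names hard_cut and soft_cut
are mine, and this is not Smolka's formulation):

```python
def hard_cut(test, then, else_):
    # A if test & ! & then.   A if else.
    # Commits to the FIRST solution of test and
    # discards the else clause.
    for t in test():
        yield from then(t)
        return                 # cut: no backtracking into test
    yield from else_()

def soft_cut(test, then, else_):
    # A if test & soft-cut & then.   A if else.
    # Discards the else clause once test succeeds, but
    # still backtracks into test for further solutions.
    succeeded = False
    for t in test():
        succeeded = True
        yield from then(t)
    if not succeeded:
        yield from else_()

test  = lambda: iter([1, 2])             # 'test' has two solutions
then  = lambda t: iter([("then", t)])
else_ = lambda: iter([("else", None)])

print(list(hard_cut(test, then, else_)))  # [('then', 1)] - loses ('then', 2)
print(list(soft_cut(test, then, else_)))  # [('then', 1), ('then', 2)]
```

With an empty test, both versions fall through to the
else clause, so the soft cut still distinguishes "no
answers" from "one or more".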

In a straightforward Prolog implementation there seem
to be no problems in implementing this, although some
entries on the reset list might never be accessed.  But
there may be interactions with tail recursion and other
compiler optimisations that I don't know about.  I'd be
interested in people's ideas, whether they see problems
or not.

------------------------------

End of PROLOG Digest
********************