[net.lang.prolog] PROLOG Digest V3 #16

RESTIVO@SU-SCORE.ARPA (04/08/85)

From: Chuck Restivo (The Moderator) <PROLOG-REQUEST@SU-SCORE.ARPA>


PROLOG Digest             Monday, 8 Apr 1985       Volume 3 : Issue 16

Today's Topics:
                      Puzzles - Maps & C-Prolog
         Implementations - Bugs & Cases & Cuts & Determinism,
                  & Snips & Denotational Semantics
----------------------------------------------------------------------

Date: Wed, 27 Mar 85 11:04:31 -0200
From: Ehud Shapiro  <Udi%Wisdom.bitnet@WISCVM.ARPA>
Subject: Not the semantics of Concurrent Prolog

I know, I know, I should be doing semantics of Concurrent
Prolog, but instead I wrote a Prolog program for map
colouring.  It seems that the program doesn't do any work.
The magic is, as usual, in the combination of unification
and non-determinism.

The program defines the relation colour_map(Map,Colours),
between a map and a list of colours, which is true if Map
is legally coloured using Colours.

A map is represented using adjacency lists: with
each country we associate its colour, and the list of
colours of its neighbours.  If the program is used in
generate-mode, i.e. to colour an uncoloured map, these
colours would be uninstantiated variables.  The program
would then compute all possible colourings.  For example,
the uncoloured map

        __________
        |   a    |
        |________|
        |b |c |d |
        |__|__|__|
        |e  |f   |
        |___|____|

is represented by the term:

map(    [country(a,A,[B,C,D]),
         country(b,B,[A,C,E]),
         country(c,C,[A,B,D,E,F]),
         country(d,D,[A,C,F]),
         country(e,E,[B,C,F]),
         country(f,F,[C,D,E])
        ]
).

And here is the program:

colour_map([Country|Map],Colours) :-
        colour_country(Country,Colours),
        colour_map(Map,Colours).
colour_map([],_Colours).

colour_country(country(_Name,C,AdjacentCs),Colours) :-
        remove(C,Colours,Colours1),
        subset(AdjacentCs,Colours1).

It uses a couple of utilities:

subset([C|Cs],Colours) :-
        remove(C,Colours,_),
        subset(Cs,Colours).
subset([],_Colours).

remove(C,[C|Cs],Cs).
remove(C,[C1|Cs],[C1|Cs1]) :-
        remove(C,Cs,Cs1).

To test the program, use the following code:

test(Map) :-
        map(Map),
        colours(Colours),
        colour_map(Map,Colours).

colours([red,green,blue,white]).


By the way, the running time of the algorithm is
exponential in the size of the map.
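[ Because the colours are plain logical variables, the same
program also runs in check-mode: bind the colour variables
first and colour_map/2 merely verifies the colouring.  A
small sketch -- the predicate name check/0 and the
particular colouring are the editor's, assuming the map/1
and colour_map/2 definitions above: ]

```prolog
% check-mode sketch: verify one particular colouring of
% the six-country map.  Unifying against map/1 binds the
% shared colour variables before colour_map/2 is called.
check :-
        map(Map),
        Map = [country(a,red,_),  country(b,green,_),
               country(c,blue,_), country(d,white,_),
               country(e,red,_),  country(f,green,_)],
        colour_map(Map,[red,green,blue,white]).
```

The query ?- check. succeeds, since no two neighbouring
countries share a colour in this assignment.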

-- Ehud Shapiro

------------------------------

Date: Thu, 28 Mar 85 00:23:12 -0200
From: Ehud Shapiro  <Udi%Wisdom.bitnet@WISCVM.ARPA>
Subject: Addendum

p.s.  An exercise, for all you program-provers out there:

Prove that colour_map(Map,Colours) terminates successfully
if Map represents a planar graph and Colours contains four
distinct colours.

Hint: assume that remove/3 and subset/2 are defined
correctly, just concentrate on proving the first three
axioms...

------------------------------

Date: Tue, 2 Apr 85 19:24:06 pst
From: Allen VanGelder <AVG@Diablo>
Subject: Riddle: How did I do this in C Prolog?

In the following script I did what it looks like I did.
cpl gives me C Prolog top level.  write/1 has not been
redefined; the write you see actually wrote "thisIsX(6)";
p.pl has no output statements.

RIDDLE: What kind of rule(s) in p.pl account for this
behavior?

Script started on Tue Apr  2 18:50:19 1985
% cpl
C Prolog version 1.4e.edai

yes
| ?- ['p.pl'].
p.pl consulted 84 bytes 0.016667 sec.

yes
| ?- p(X).

X = 6 ;

no
| ?- p(X), write(thisIsX(X)), X=6.
thisIsX(6)
no
| ?- halt.

% Prolog execution halted
% ^D
script done on Tue Apr  2 18:53:33 1985

For those not fluent in C Prolog syntax, I
loaded "p.pl", then issued goal "p(X)" and
got the solution "X=6".  The semi-colon asks
for more solutions; there were none.  Then I
issued the more complex goal, which printed
"thisIsX(6)" and failed.

------------------------------

Date: Thu, 28 Mar 85 15:54:12 pst
From: Newton@CIT-Vax (Mike Newton)
Subject: CProlog bug (revisited)

Hi,

There was an error in my posting on the C-Prolog problem;
instead of

f(A) :- (true, ! ; B=0), A =< 7.

                the test case should have been:

f(A) :- (true, ! ; A=0), A =< 7.

                   ^------ i goofed here.


Here is a transcript of the session using the distributed
C-Prolog 1.5:

Script started on Thu Mar 28 10:09:24 1985
((cit-vax:1)) pd
/usr/src/cit/cprolog ~
((cit-vax:2)) ls
AIDigest       STRIPSTANFORD  cpl1x4a        fprolog
README         coling         cpl1x5         stanford
((cit-vax:3)) cd cpl1x5
((cit-vax:4)) ./prolog
C-Prolog version 1.5
| ?- [user].
| f(A) :- (true, ! ; A = 0) , A =< 7.
user consulted 128 bytes 0.0500008 sec.

yes
| ?- f(3).

! Error in arithmetic expression: ! is not a number

no
| ?
[ Prolog execution halted ]
((cit-vax:5)
script done on Thu Mar 28 10:10:16 1985


The problem first surfaced in a grammar rule that was
being used in our compiler.  I simplified it to the
above test case... The fix that I posted in the earlier
newsletter has not yet caused any noticeable problems, but..
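[ Editor's note: until the interpreter is fixed, one sketch
of a workaround is to keep the commitment local with
if-then-else instead of putting a cut inside the
disjunction.  This is not semantically identical in
general: -> commits only within the disjunction, whereas
the original ! cuts the whole clause. ]

```prolog
% workaround sketch: local commitment via if-then-else
f(A) :- ( true -> true ; A = 0 ), A =< 7.

% ?- f(3).   succeeds, instead of raising the
%            "! is not a number" arithmetic error
```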

Regards,

-- Mike

[818-356-6771]

------------------------------

Date: 29 Mar 85 14:05 PST
From: Kahn.pa@XEROX.ARPA
Subject: A Case for Cases

The case for Cases is mostly a matter of taste. I think
a Prolog clause should be meaningful on its own.  Yet
this often leads to redundant computation.  Consider
the common way of writing a predicate to compute the
maximum of two numbers.

max(X,Y,X) :- X > Y, !.
max(X,Y,Y).

The second clause is pretty bizarre.  Using Cases one
can package up the two clauses and get clearer (and
just as efficient) code.  I admit that in this example,
and in those that come to mind, If would do as well.
(Richard is right that it's hard to find examples where
the non-determinism of the test is useful.)  Cases has a
neater semantics than If, though.

I sometimes think that max should be written simply
as follows

max(X,Y,X) :- X > Y.
max(X,Y,Y) :- not (X > Y).

and let the compiler compile out the redundant test.
The problem with this is that it can be pretty verbose
if there are many cases, since one needs to explicitly
write the negation of the previous tests.
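[ Editor's note: for comparison, the ordinary if-then-else
formulation (not the Cases construct) states the test once
and packages both cases in a single clause: ]

```prolog
% if-then-else version: one clause, one test,
% deterministic without a bare cut
max(X,Y,Z) :- ( X > Y -> Z = X ; Z = Y ).

% ?- max(2,7,Z).   binds Z = 7
```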

------------------------------

Date: Wed, 27 Mar 85 18:13:23 pst
From: Peter Ludemann <Ludemann%UBC.Csnet@csnet-relay.arpa>
Subject: Cuts, Determinism.

I agree that "soft" cuts don't seem to have much use.
I think that tests by their nature shouldn't instantiate
any variables, so  backtracking over them won't produce
anything new.  However, the semantics of soft cuts are
nicer: it is straightforward to rewrite clauses containing
soft cuts into a form which contains no cuts at all.

As to the non-determinism of append, what I meant is this.
Suppose we have a clause:

    g(A, B, C) :- append(A, B, C), g2(A, B, C).

Unless we know that at least two of the three variables
are ground, we don't know if the call to append is
deterministic.  For example, if we called "g(X, Y,
[1,2,3])", the call to append is clearly
non-deterministic and we cannot do the last call
optimisation with "g2".  So, in the absence of mode
declarations and some moderately complicated compile-time
analysis, we can only tell at run time that a given call
is deterministic.  This is easy (and reasonably efficient)
to determine at run-time - if a goal is "done with"
(backtracking over it will only result in failure), it can
be removed from the stack.  The last call optimisation is
applicable if the backtrack clause is the parent clause.
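[ Editor's note: to see why such a call is
non-deterministic, take the usual definition of append/3
and call it with only the third argument ground; on
backtracking it enumerates every split of the list: ]

```prolog
append([], Ys, Ys).
append([X|Xs], Ys, [X|Zs]) :-
        append(Xs, Ys, Zs).

% ?- append(X, Y, [1,2]).
%    X = [],    Y = [1,2] ;
%    X = [1],   Y = [2]   ;
%    X = [1,2], Y = []    ;
%    no
```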

Warren's SRI Tech Note 309 talks about goal stacking in
the last few pages and he makes the point that tail
recursion optimization (which I prefer the call last call
optimization) is applicable at every procedure call -
"one simply discards the calling goal if it is later than
the last choice point".  I think that my (run-time) method
achieves that.

------------------------------

Date: Wed, 27 Mar 85 22:12:56 pst
From: Prolog@CIT-Vax (Paul Prolog)
Subject: Cuts and Snips

We have an implementation of prolog at Caltech
with a new concept we are calling snips, a concept
conceived by Jim Kajiya. This is somewhat like the
"soft" cut described in the last couple of digests.
Our point is to allow more localized control of
backtracking within a clause. Snips allow a backtrack
frame to be created within the scope of a clause,
but then, unlike a cut, allow only those backtrack
points the user wishes to be removed. It would work
like this. If one had a clause

                a :- b,c,[!,d,e,f,!],g.

where the [! marks the beginning of a snip region,
and !] marks the end of a snip region, backtracking
would be normal for the b and c clauses. Backtracking
would continue normally into the snip region, but once
the !] was passed, any backtrack points created by
d, e, or f would be deleted from the backtrack stack.
Then, if g failed, any alternatives of c or b would be
tried. If before the !] were executed, backtracking
took execution to the left of the snip region (left
of [!), nothing special would happen. Snip regions can
be nested as deeply as the user wishes.

Snips are useful when one wants to have backtracking
procedures, but also wants to use the same procedure
in deterministic programs. For instance, in a pattern
matcher I am writing to handle associativity and
commutativity, I need to take elements out of a
list, while keeping a list of the non-selected
elements. This is written in a clause as

   ..... ,
   append(Begin,[TestElement|End],OriginalList),
   append(Begin,End,RestList),
   .....

(Of course, there may be a better way to write this,
but I wrote it this way). Now the second append
doesn't ever need to backtrack, since it is not
being used for search. However, having two append
predicates, one with a backtrack point and one
without, would be silly. A cut may not be appropriate,
since the procedure that this clause fragment is a
part of may need to backtrack. A singleton clause
with a cut could be used, but program comprehension
would be better if all logical parts of a clause
could stick together, as in the example above.
So, rewriting it as

   ..... ,
   append(Begin,[TestElement|End],OriginalList),
   [!,append(Begin,End,RestList),!],
   .....

would remove the problem.
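[ Editor's note: as an aside, the two appends together
just select one element from a list; the usual select/3
(standard definition, not from the message) does the same
job directly.  It still leaves a backtrack point per
element, so the case for snips stands: ]

```prolog
% select(X, List, Rest): X is an element of List, and
% Rest is List with that occurrence of X removed
select(X, [X|Xs], Xs).
select(X, [Y|Xs], [Y|Ys]) :-
        select(X, Xs, Ys).

% so the clause fragment could read:
%    ..... ,
%    select(TestElement, OriginalList, RestList),
%    .....
```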

We feel that snips are a valid addition to
standard prolog. Though there are ways of getting
around the need for them, they are useful in their
own right. (A more detailed article may be coming).

Any comments?

-Keith Hughes
hughes@cit-vax

PS. The other members of our group are Mike Newton
(newton@cit-vax) and Jim Kajiya (kajiya@cit-20).

------------------------------

Date: Thu, 28 Mar 85 21:06:54 pst
From: Paul Voda <Voda%ubc.csnet@csnet-relay.arpa>
Subject: Denot. semantics versus the Symbiosis approach.

This is a note on the relation between the
denotational and "logical" semantics of logic
programming languages. Denotational semantics deals
with the meaning and control at the same time. Thus
for a sequential disjunction the formula
"P(5) or 5 = 5" will denote the undefined element if
the computation of "P(5)" does not terminate.
Viewed as a formula of a predicate calculus, i.e.
ignoring the control and concentrating only on the
meaning, the formula is true. This combination of
the meaning and control in the denotational semantics
is not too bad when the control is just the normal
reduction (lazy evaluation) of the lambda calculus.
As the control gets more complicated (explicit
parallelism, input-output annotations, cuts and commits,
etc.) the meaning functions of the denotational
semantics must necessarily grow more and more complicated
(in the case of parallelism there is even no satisfactory
solution).
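[ Editor's note: a concrete Prolog rendering of the point
-- a sketch, not from the message: the disjunctive goal
below is true as a formula, yet left-to-right control
diverges before the true disjunct is ever reached. ]

```prolog
p(5) :- p(5).       % p(5) never terminates

% ?- ( p(5) ; 5 = 5 ).
% Logically the formula "P(5) or 5 = 5" is true, but the
% computation loops in p(5) and never tries 5 = 5.
```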

In my Symbiosis paper I essentially took the step taken
by Hoare when he separated the partial correctness from
the termination. Divide et impera is always a good
methodological principle but in the case of logic (or
functional) languages the gains are quite significant.
First of all, programs of these languages are formulas
(resp. terms) of a first order theory. The meaning of programs
(corresponding to the partial correctness) is obtained
simply from the standard model of the theory. The reasoning
about the programs can utilize the full deductive power
of first order theories (quantifiers, induction). The
termination is separated from the "logical" meaning.
Now, computations are proofs and in the Symbiosis paper
I show, for five different programming languages, how to set up
a deductively restricted subtheory of the meaning theory
where the computations exactly correspond to the derivations
in the subtheories. This takes care of the operational
semantics and the problem of termination is reduced to
the proof-theoretical question of existence of a proof.

The questions of existence of proofs are much harder
than the quite straightforward questions of meaning.
But I do not share the gloomy view of Uday
(Prolog Digest 3#15) that we cannot do better than "run-it-
and-see". First of all, we are dealing with deductively
restricted subtheories. Thus it is not too difficult to
arithmetize the provability predicate (i.e. the
provability in the operational subtheory) in the meaning
theory. One can then proceed and prove a couple of
sufficient conditions for the termination. The conditions
are generally of the form "if .... terminates then
____ ....___ also terminates".  These theorems can
then be used to reason about the termination.

 ----Paul Voda

------------------------------

Date: Tue 2 Apr 85 13:46:32-PST
From: Joseph A. Goguen <GOGUEN@SRI-AI.ARPA>
Subject: logic programming semantics

I have been following the discussions of the semantics
of Prolog and Concurrent Prolog in the Digest with a
rising mixture of fascination and despair.  If the
designers of a language have trouble explaining (and
perhaps even understanding) some constructs of their
language, then perhaps there is something wrong with
those constructs.  I hasten to add that I find the
read-only annotation idea very appealing, and I hope
that somehow its semantics can be made as simple as I
once thought it might be.

There is a point that I would like to add to the
discussion:  Having a denotational semantics for a
language does not make it more respectable.  There are
many nice things about denotational semantics, besides
its mathematical elegance and the way it handles
potentially infinite control structures; one of these
is that it serves as a pretty accurate measure of
just how horrible a programming language is; in particular,
you can see how many levels of continuations are needed,
which constructs need the extra complications, how large
the definition is (e.g., how many hundreds or thousands
of semantic equations), etc.  Of course, you can give a
denotational semantics for almost anything if you really
want to (e.g., Ada).  But I claim that if a language
*needs* a denotational semantics in order to explain its
constructs, then that language is too complicated.  The
good thing about a real logic programming language is
exactly that it *does* *not* *need* such a semantics:
a program means exactly what it says.  It is a symptom
of the constructs of a language being too operational that
one is driven to denotational semantics to explain them.
So I guess what I'm saying is I am afraid that Concurrent
Prolog is getting us further away from the original
ideals of logic programming, closer to PL/I and its cousins.
In my opinion, what we need is more research on how to do
practical programming with pure logic programming
languages; this may mean enriching the logic, or doing some
things in a metalogic, or whatever.  But making Prolog less
and less pure does not seem to me the right way to go.

In case you wonder what I mean by a "logic programming
language", here is a general definition:  a language
whose programs consist of sentences in a well understood
(and reasonably simple) logical system, with operational
semantics given by deduction in that logical system.  In
particular, "well understood" should include a completeness
theorem for deduction.  Notice that both "relational" or
"Horn clause" programming (i.e., what is usually called
"logic programming") and functional programming are special
cases, with logics respectively first order Horn clause
logic, and equational logic (in fact, usually higher order
equational logic).

I would also impose another requirement, that every program
has an initial model.  This provides a foundation for
database manipulations, since you know exactly what is true
-- namely, what can be proved from the axioms -- and
everything else is false (this is Occam's famous razor); in
fact, the "closed world" that the program is talking about
is exactly the initial model.  (For those not familiar with
this terminology, the initial model is actually
characterized (for the usual logics) uniquely up to
isomorphism by the property that only what is provable is
true; it is closely related to the Herbrand universe.)
It has very recently been proven (by Tarlecki, and by Mahr
and Makowsky) that the largest sublanguage of first order
logic such that all sets of sentences have initial models
is Horn clause logic.  This result extends to many-sorted
logic and/or logic with equality.  So that gives us a
good idea of just how far we can go with logic programming
and still provide the programmer with a good idea of what
his program is about (namely, whatever is in the initial model).

------------------------------

End of PROLOG Digest
********************