[comp.lang.prolog] PROLOG Digest V5 #91

PROLOG-REQUEST@SUSHI.STANFORD.EDU (Chuck Restivo, The Moderator) (11/23/87)

PROLOG Digest            Tuesday, 24 Nov 1987      Volume 5 : Issue 91

Today's Topics:
                             Query - WAM,
                        Theory - Impediments,
                Implementation - Algorithmic Debugging
----------------------------------------------------------------------

Date: Fri, 20 Nov 87 09:54:23+0900
From: Dongwook Shin <dwshin%csd.kaist.ac.kr@RELAY.CS.NET>
Subject: Compiling Prolog features

I'd like to know how to compile two Prolog features on the Warren
Abstract Machine.  The first is how to compile the higher-order
feature, say, "call": since the argument of "call" is only constructed
at run time, its execution seems to differ from the normal execution
of compiled goals.  The second is how to embed the operator-precedence
information declared by "op(_,_,_)" into the WAM.  Any information
would be appreciated.

-- D.W. Shin

------------------------------

Date: Sun, 22 Nov 87 15:40:37 PST
From: narain%pluto@rand-unix.ARPA
Subject: Theoretical impediments to logic programming?

In his paper "Theoretical impediments to Artificial Intelligence",
M. Rabin points out that deciding the truth of even the simplest
sentences of logic can require super-exponential time.

Specifically, he cites a theorem obtained by him and M.J.  Fischer.  Let L
be  the  set  of all sentences of first order logic written using +, =, ~,
->, ().  Let <N,+> be the structure consisting of the set N={0,1,2,..}  of
natural  numbers  with  the  operation  +  of  addition.  Then  Presburger
Arithmetic (PA) is said to be the set of all sentences in L which are true
in <N,+>.  PA is decidable.  The theorem states that:

        There exists a constant c>0 such that for every decision
        procedure (algorithm) AL for PA, there exists an integer
        n0 so that for every n>n0 there exists a sentence F of L
        of length n for which AL requires more than 2**(2**(c*n))
        computational steps to decide whether F is in PA.

Surely the Horn clausal calculus is richer  than  L,  so  this  complexity
result should apply to SLD-resolution.  Does this diminish its importance?

To answer this question, let  us  address  a  related  one:  what  is  the
advantage  of  SLD-resolution over conventional brands of resolution, say,
hyper-resolution?  One might rather hastily  answer  that  the  former  is
"more  efficient"  than  the  latter.  Presumably this would mean that for
every refutable set of Horn clauses,  the  number  of  steps  required  by
SLD-resolution  to  refute  it  is  less than or equal to that required by
hyper-resolution.  To see that this is, in general,  false,  consider  the
following set of clauses:

        <-fib(s(s(s(s(0)))),Z).
        fib(0,s(0)).
        fib(s(0),s(0)).
        fib(s(s(X)),Z)<-fib(s(X),Z1),fib(X,Z2),plus(Z1,Z2,Z).

        plus(0,X,X).
        plus(s(X),Y,s(Z))<-plus(X,Y,Z).

The number of steps required by SLD-resolution to refute it is larger
than that required by hyper-resolution: working top-down,
SLD-resolution re-derives the shared fib subgoals again and again,
whereas hyper-resolution, working bottom-up from the unit clauses,
derives each fib fact only once.

So, the importance of SLD-resolution cannot be that it is "more efficient"
than  other proof procedures.  Instead it appears to lie in its simplicity
and  consequent  programmability.   It  is  simple  enough  that  one  can
visualize  and  predict  its  behavior as it is interpreting Horn clauses.
This enables one to write clauses in such a way that  when  SLD-resolution
interprets  them it behaves, in most cases, in whatever fashion one wishes
it to behave.  In particular, one can use  clauses  not  only  to  express
relations but also to express *algorithms* for computing them.

For example, not only can one write clauses expressing the  list  reversal
relation,  one can do so in such a way that lists are reversed in a number
of steps proportional to their length.  Not only  can  one  write  clauses
expressing  the  sorting  relation, one can do so in such a way that lists
are sorted in a number of steps proportional to that required by quicksort
(or  mergesort, or bubblesort).  Not only can one write clauses expressing
grammar rules, one can do so in such a way that phrases are  parsed  in  a
top-down fashion.
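
For instance, here is a minimal sketch, in ordinary Prolog notation,
of the linear-time formulation of list reversal (the auxiliary
predicate rev/3 is only illustrative):

        % The accumulator version moves each element exactly once,
        % so the number of resolution steps is proportional to the
        % length of the list.
        reverse(Xs, Ys) :- rev(Xs, [], Ys).

        rev([], Acc, Acc).
        rev([X|Xs], Acc, Ys) :- rev(Xs, [X|Acc], Ys).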

In contrast, resolution does not enjoy this property  of  programmability.
Its behavior is substantially more difficult to visualize and predict, and
hence to control.

Once algorithms are expressed as clauses they can be analyzed as
statements  of  logic  with  all  attendant  benefits.  Thus  the  central
importance of SLD-resolution is that it allows expression of  *algorithms*
in  a  logical  framework.  This  point, of course, was made originally by
Kowalski when he introduced the procedural interpretation of Horn clauses.

It is still the programmer's responsibility to  develop  good  algorithms.
SLD-resolution  only  behaves as efficiently as the algorithms which it is
interpreting will permit it to behave.  It does not claim to be an optimal
proof procedure.  Hence  the result of Rabin and Fischer does not diminish
its importance.

Comments?

-- Sanjai Narain

------------------------------

Date: Thu, 19 Nov 87 22:29:29+0900
From: Dongwook Shin <dwshin%csd.kaist.ac.kr@RELAY.CS.NET>
Subject: A comment on Algorithmic Debugging

 We are currently developing a practical version of the algorithmic
debugger originally devised by Shapiro.  One of the most confusing
problems in Shapiro's algorithm occurs when the solution set in a
user's mind differs from the one given by the declarative semantics,
so that the user can hardly understand the error-checking message.
Let me take a simple example.

 (1) g(X,Y) :- even(X), Y is X/2.
 (2) g(X,X).

 (3) even(X) :- 0 is X mod 3.
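
 To make the situation concrete, here is roughly what a standard
sequential Prolog answers for the query discussed below:

     ?- g(4,X).

     X = 4          % even(4) fails under clause (3), so clause (2)
                    % gives the only, and unintended, answer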

 In this example, the error a user will explicitly recognize is in
clause (3).  In other words, the user may take clause (2) to be
correct, because he may think clause (2) is executed only when clause
(1) fails, i.e., that clause (2) is equivalent to
"g(X,X) :- not(even(X)).".  However, we can hardly blame him: what is
really to blame is Prolog's unfair search mechanism, and he is only a
victim of it.

 Consider Shapiro's algorithmic debugger in more detail.  Suppose a
user poses the query "g(4,X)".  With the above program he certainly
gets the wrong answer "g(4,4)", and he considers it wrong because he
expects X to be instantiated to 2.  Now suppose he decides to run the
wrong-solution mode.  In this mode he may respond "no" to the question
"g(4,4) is correct?", forcing the debugger to decide that the wrong
clause is (2), because g(4,4) logically follows from g(X,X).  However,
the user may be confused by this decision if he is not well trained in
logic.  Can you say that he is not qualified to use such a beautiful
"Logic"?  If you can, consider the following program.

 (1) g(X,Y) :- even(X), Y is X/2.
 (2) g(X,X).

 (3) even(X) :- 0 is X mod 2.
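
 For comparison, the same query against this second program gives
roughly:

     ?- g(4,X).

     X = 2 ;        % intended answer, via clauses (1) and (3)
     X = 4          % clause (2) still succeeds on backtracking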


 You will think this program is wrong too.  However, almost all Prolog
systems do not tell you that clause (2) is wrong, since the first
answer to "g(4,X)" is already the intended one.  Isn't it strange that
when "0 is X mod 2" is changed to "0 is X mod 3" the debugger reports
that "g(X,X)" is not correct, although it said nothing about that
clause in the original program?

 As mentioned above, I'd like to stress again that the basic reason
why users tend to write code like this lies in the sequentiality of
Prolog.  Therefore, the debugger should take this situation into
account.  Finally, I'd like to draw the following conclusion:

 If the Prolog execution mechanism is sequential, i.e., unfair in its
search, the debugger should provide some flexibility for the victims
of this non-logical feature.  So we suggest the following policy for
algorithmic debugging.

 In the first program, if a user answers "no" to the question
"g(4,4) is true?" posed by the debugger, the debugger should confirm
the user's opinion like this: "Do you think the clause g(X,X) is
false?".  If the user is well trained in logic and not contaminated by
Prolog's sequentiality, he will surely recognize that the clause
g(X,X) is not true and answer "yes", so the debugger can conclude that
g(X,X) is false.  Now, if he replaces "g(X,X)" by
"g(X,X) :- not(even(X))", the debugger will this time indicate that
even(X) is not correct, finally compelling him to write a complete
program.  Shapiro's algorithmic debugger assumes only such a
"high-level" programmer and such a "high-level" process of debugging
(less well-trained users may complain that this process is
user-unfriendly).
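
 For reference, the complete program such a user finally arrives at
would presumably be:

 (1) g(X,Y) :- even(X), Y is X/2.
 (2) g(X,X) :- not(even(X)).

 (3) even(X) :- 0 is X mod 2.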

 However, if a user is not well trained in logic, or is contaminated
by Prolog's sequentiality, he may think that the clause "g(X,X)" is
not false, and so will definitely respond "no" to the question "Do you
think the clause g(X,X) is false?".  The debugger should then be able
to find that the wrong clause he has in mind is
"even(X) :- 0 is X mod 3", not "g(X,X)".

 We are developing an algorithmic debugger that follows the above
policy.  Do you have any comments on it?  Will Lee Naish attack this
policy with his electronic butcher's knife?  Will O'Keefe say that
the policy will only accelerate Lisp-like programming in Prolog?

-- D.W. Shin

------------------------------

End of PROLOG Digest
********************