[net.lang.prolog] PROLOG Digest V3 #36

PROLOG-REQUEST@SU-SCORE.ARPA (The Moderator) (08/19/85)

PROLOG Digest            Tuesday, 20 Aug 1985      Volume 3 : Issue 36

Today's Topics:
               Query - LP Based Specification Language,
                    LP Philosophy - Hewitt Challenge
----------------------------------------------------------------------

Date: Sat, 10-Aug-85 14:36:30 PDT
From: P. Allen Jensen gatech!gitpyr!allen@UCB-Vax
Subject: Prolog based Software Specification Language

I am trying to find information on Software Specification
Languages written in Prolog.  I have looked at two
non-procedural description techniques, one based on regular
expressions, the other utilizing a data base approach.
Bibliographic information would be of help.  It seems
to me that Prolog would be a good language to use for
developing a software specification language.

-- P. Allen Jensen

------------------------------

Date: Sun, 11-Aug-85 10:34:32 PDT
From: P. Allen Jensen  gatech!gitpyr!allen@UCB-Vax
Subject: Program Specification Languages

I am doing some research on Program Specification Languages.
I am aware of only two systems currently available:

o - Program Statement Language/Program Statement Analyzer
    (PSL/PSA) University of Michigan

o - Software Development System (SDS, SREM)
    Ballistic Missile Defense and TRW, Inc.

PSL uses Objects and Relationships to describe a system.
The language allows 22 possible objects and 36 relationships.
These descriptions are then analyzed by PSA for redundancies
or logical inconsistencies.  PSA, however, is not rigorous
and therefore cannot provide a mathematically correct
verification of the logical consistency of the specifications.
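
[ A minimal sketch of how such a specification might be encoded and
  checked in Prolog.  The object and relationship names below are
  invented for illustration; they are not PSL's actual vocabulary of
  22 objects and 36 relationships. -ed ]

    % Hypothetical PSL-style specification, written as Prolog facts.
    object(process, validate_order).
    object(input,   order_form).
    object(output,  invoice).

    receives(validate_order, order_form).
    generates(validate_order, invoice).

    % A PSA-like consistency check: an output that no process
    % generates is flagged as a dangling element of the spec.
    dangling_output(O) :-
        object(output, O),
        \+ generates(_, O).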

I am not familiar with SDS, but understand that it is more
extensive than PSL/PSA.

Any further information on currently available products or
research in this area would be appreciated.  I am considering
developing a prototype language for specifications in Prolog.
All comments and suggestions are welcome.

-- P. Allen Jensen

------------------------------

Date: Wed, 14 Aug 85 01:03:48 EDT
From: Carl E. Hewitt <HEWITT@MIT-MC.ARPA>
Subject: Prolog will fail as the foundation for AI; so will Logic as a Programming Language

           [ The following four messages are reprinted from
                       the Phil-Sci list. -ed ]

Folks,

This list has been rather dormant.  To liven things up, I
would like to throw out the following little ticker:

Prolog (like APL before it) will fail as the foundation for
Artificial Intelligence because of competition with Lisp.
There are commercially viable Prolog implementations written
in Lisp but not conversely.

LOGIC as a PROGRAMMING Language will fail as the foundation
for AI because:

1.  Logical inference cannot be used to infer the decisions
    that need to be taken in open systems because the decisions
    are not determined by system inputs.

2.  Logic does not cope well with the contradictory knowledge
    bases inherent in open systems.  It leaves out
    counterarguments and debate.

3.  Taking action does not fit within the logic paradigm.

------------------------------

Date: Wed 14 Aug 85 10:49:57-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Challenge Problem

I have long believed that any algorithm developed in a procedural
language could be reformulated more elegantly in a logic language.
This may be true, but I have come to doubt its utility.  Restating
an algorithm in nonprocedural form can be a useful exercise, but
does not improve on the procedural nature of the algorithm.
Advocates of logic programming would have a weak case if their
languages were just notations for telling the computer what to do,
step by step.

In order to sharpen my understanding of logic programming, I would
be interested in hearing about logic-based solutions to a problem
that I consider particularly "algorithmic": extraction of connected
components in images.  I would like to learn of any logic-based
approach to the problem that would entail neither massive search nor
such extensive procedural embedding as to make the "logic" portion
trivial.

The problem:  Assume that you have an array of integers, and are
to identify all maximal connected components -- i.e., all clusters
of identical integers.  The integers range from 0 to some known
maximum, with 0 being a special code that connects with the space
"outside" the array.  You are free to use any bounded intermediate
data structures.  The output is to consist of an array in which
elements of each connected component are marked with the same
integer code, and elements of different components are marked with
different codes.  Other outputs (boundary traces, lists of region
adjacencies and inclusion relationships, number of holes per
region, etc.) are desirable but of secondary importance.
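
[ For concreteness, a purely declarative reading of the connectivity
  relation, assuming the image is given as pixel(Row, Col, Value)
  facts.  This only states the problem; run naively under Prolog's
  search it does exactly the kind of massive exploration the
  challenge asks us to avoid. -ed ]

    % Two cells are in the same region if they hold the same value
    % and are linked by a chain of 4-adjacent, same-valued cells.
    adjacent(R, C, R1, C)  :- R1 is R + 1.
    adjacent(R, C, R1, C)  :- R1 is R - 1.
    adjacent(R, C, R,  C1) :- C1 is C + 1.
    adjacent(R, C, R,  C1) :- C1 is C - 1.

    same_region(R, C, R2, C2) :-
        reach(R, C, R2, C2, [R-C]).

    reach(R, C, R, C, _).
    reach(R, C, R2, C2, Seen) :-
        pixel(R, C, V),
        adjacent(R, C, R1, C1),
        pixel(R1, C1, V),
        \+ member(R1-C1, Seen),        % visited list keeps it finite
        reach(R1, C1, R2, C2, [R1-C1|Seen]).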

I know of several good algorithms for solving this problem.  Most
involve an intermediate map or equivalent data structure that records
nonmaximal connected components found during a row-by-row scan,
together with a list of discovered equivalences that can be used to
link these into maximal components.  Such algorithms are quite
efficient (aside from having to scan the array completely, in the
worst case, before starting to build the output map).  Others involve
visiting (and marking) every point in the array, tracing region
boundaries until they close and then raster scanning to find the
next unvisited point.  (This is a reasonable procedure if the
boundary traces are of primary interest.)
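
[ A sketch of the bookkeeping the first family of algorithms needs,
  in Prolog: provisional labels are integers, equivalences found
  during the scan are asserted as facts, and the canonical (smallest)
  label is recovered afterwards.  The row-by-row scan itself is
  omitted, and min_list/2, subtract/3 etc. are as in SWI-Prolog's
  list library. -ed ]

    :- dynamic equiv/2.

    % Record an equivalence between two provisional labels, once.
    note_equiv(A, A) :- !.
    note_equiv(A, B) :- ( equiv(A, B) ; equiv(B, A) ), !.
    note_equiv(A, B) :- assertz(equiv(A, B)).

    % Canonical label: the smallest label reachable through the
    % recorded equivalences.
    canonical(L, C) :-
        closure([L], [L], Labels),
        min_list(Labels, C).

    closure([], Seen, Seen).
    closure([L|Ls], Seen, Out) :-
        findall(M, neighbour(L, M), Ms),
        subtract(Ms, Seen, New),
        append(New, Ls, Queue),
        append(New, Seen, Seen1),
        closure(Queue, Seen1, Out).

    neighbour(L, M) :- equiv(L, M).
    neighbour(L, M) :- equiv(M, L).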

Is there any hope that a logic-based language could compile a
[nonalgorithmic] declarative statement of this problem into similar
code, or would the compiler have to be guided by domain knowledge that
essentially supplies it with the right answer?  If the latter
is true, in what sense is logic programming the answer to our
current needs?

-- Ken Laws

------------------------------

Date: Wed, 14 Aug 85 10:23:10 cdt
From: Kenneth Forbus Forbus%uiucdcsp@Uiuc
Subject: Prolog will fail as the foundation for AI

(Carl, I wish you had found a fresher topic to re-open the
list with. And doing it right before conference season...)

To equate "prolog" with "logic programing, as Carl & Wayne's
messages comes very close to doing, is somewhat less than
accurate.  True, prolog is currently the logic programming
language of choice.  True, prolog provides a handy notion
of variable that takes painful effort to build in Lisp.
But prolog really is "Micro-Planner done right", as its
advocates claim, and that is enough to allow most AI people
to tune out prolog because we found out a long time ago that
Micro-Planner isn't what we want.

Prolog suffers from being both too old and too new.  It is
too old, in the sense that it has chronological backtracking
built into its roots and the standard versions don't take
into account new ideas (such as TMSs).  Yes, of course one
can encode these things in logic and "run them" in Prolog
(If you really believe prolog is "programming in logic" I
have a bridge to sell you).  However, you may not get an
answer in the lifetime of the universe, and doing three or
four experiments will have you sitting fruitlessly at the
console long enough to write a fast lisp program to do the
same thing!  But prolog is too new, in that it has also not
kept pace with progress in programming environments and
language design.  Talking to prolog programmers is a bit
like talking to lisp programmers 10 years ago -- there are
many dialects, often changing rapidly, wild disagreements
on syntax, etc.  The days of CommonProlog are likely to be
pretty far off.

Judging the idea of logic programming by prolog is a bit like
judging lisp by Lisp 1.5.  I believe that only a tiny fraction
of the interesting ideas which should be encapsulated in a real
logic programming language have been developed.  If some venture
capitalist were to ask me into whose pocket he should put $1M to
get a significant advance in the state of logic programming, I
wouldn't suggest spending it on the prolog community.  I'd
probably suggest splitting it between McAllester & de Kleer,
whose ideas on TMSs, while rather different, are both far more
interesting and potentially productive than, say, improving
the speed of unification or studying and-parallelism.  We simply
don't have enough experience writing and using reasoning languages
yet to either cast our ideas into stone (i.e., adopt prolog and
spend our time optimizing it) OR pass final judgement on the
merits of logic programming.

On logic and action:  Why shouldn't Shakey be considered a
counterexample to Carl's claim about the impossibility of action in a
system based on logic?  Shakey used resolution theorem proving to
decide what to do and other kinds of routines (such as A* search)
to flesh out the details.  If the existence of these other routines
is considered sufficient to discount Shakey as a counterexample
then the claim seems rather uninterestingly obvious.

------------------------------

Date: Wed 14 Aug 85 12:07:05-EDT
From: Vijay <Vijay.Saraswat@CMU-CS-C.ARPA>
Subject: On logic programming as a foundation for AI.

This is a response to some of the issues recently raised
by Hewitt in a post to the PHIL-SCI digest. (Phil SCI
Digest, Aug 14).

For the sake of argument,  Prolog is probably a `more convenient'
language for writing compilers  than Lisp; on the other hand,
the current state of sophistication of Lisp implementations makes
them extremely attractive for developing rapid prototype
implementations.   For the near future I see the ideal programming
environment as being a Lisp Machine with a very fast Prolog on it
(If Symbolics Prolog lives up to rumours of 100K LIPS, that will
do, thank you.)  Why would anyone EVER want to write a CommonLisp
interpreter in  Prolog?  There are much better things to do ...
particularly when you already have CommonLisp and Prolog!

What does this have to do with "being a foundation for AI"?
If he means by "X is a foundation for AI" that X is used by
most people for writing their AI programs, then that is
irrelevant.  I would say that "X is a foundation for AI"
if most work being carried out in AI can fit in the theoretical
framework of X. With such a working definition it is rather
doubtful if any one paradigm can be a foundation for AI.  But I
would submit that the concept of logic programming is a step
towards reaching such a foundational understanding.

"Logic as a programming language will fail as the foundations
for AI because..."

Let's be careful how we bandy terms around.  Logic is not a
programming language. Logic is logic, a formal system. The
first axiom of logic programming (the LOGIC axiom) is that
the definite clause subset of logic has an appealing
interpretation as a programming language, if the process of
SLD-refutation (which is complete for this subset) is taken
as the inference mechanism.  In this programming language,
the user has DON'T KNOW CHOICE, i.e. the power of specifying
existential searches.  This power is rather unnerving.  The
second axiom of logic programming (the CONTROL axiom) is that
it is possible to provide general control mechanisms which can
be exploited by a programmer for controlling the search. This
led to the rise of logic programming languages, which allow,
in general, the programming of incomplete searches (hence the
handling of non-monotonic inference, defaults, inheritance ...).
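
[ The smallest possible illustration of don't know choice: member/2
  read as a specification, with the search left entirely to
  SLD-resolution and backtracking.  Most Prologs also supply this as
  a library predicate. -ed ]

    % member(X, L): X is an element of list L.
    member(X, [X|_]).
    member(X, [_|T]) :- member(X, T).

    % ?- member(X, [a, b, c]), X \== a.
    % X = b ;
    % X = c ;
    % false.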

The most ancient of these is PROLOG, which essentially provided
the control structure of SEQUENTIALITY. The language PROLOG has
something to do with logic of course, but, because of its
incomplete proof procedure, its semantics is best given via
denotational means, rather than logical means.  This is the
first corollary of logic programming: a formal understanding of
logic programming languages has to appeal to traditional computer
science techniques for giving semantics to programming languages.
The discovery is that, because of the simplicity of the
underlying execution mechanism, such semantics are surprisingly
simple: far simpler than semantics for ALGOL, LISP (has anyone
ever attempted a semantics for something like COMMONLISP?),
ACTORS...  This has very powerful implications with respect to
semantically based programming environments, program
transformations, and meta-programming.
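
[ An old but telling example of the CONTROL component at work: two
  definitions with the same logical reading, only one of which
  behaves under PROLOG's fixed depth-first, left-to-right strategy.
  -ed ]

    parent(a, b).
    parent(b, c).

    anc1(X, Y) :- parent(X, Y).
    anc1(X, Y) :- parent(X, Z), anc1(Z, Y).

    anc2(X, Y) :- anc2(X, Z), parent(Z, Y).   % left recursion first
    anc2(X, Y) :- parent(X, Y).

    % ?- anc1(a, Y).   yields Y = b, then Y = c, then fails finitely.
    % ?- anc2(a, Y).   the same logic, but the sequential control
    %                  recurses forever before ever touching a fact.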

There have been more recent languages which have provided other
control features: PARLOG, GHC, Concurrent Prolog and CP[!,|,&],
to name just a few in the main stream of Horn logic programming,
have attempted to also provide don't care choice and parallelism.
To frame it in the current parlance, these languages essentially
provide support for object-oriented programming.  While a formal
denotational semantics for the last is still being developed
(the others do not yet have formal semantics), it has already
been demonstrated that it has a surprisingly simple partial
correctness semantics.
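
[ The textbook example of don't care choice is stream merging.  The
  clauses below are written in plain Prolog syntax only to show the
  shape of the program; in PARLOG, GHC or Concurrent Prolog they
  would carry guards and a commit operator, and whichever applicable
  clause is chosen is chosen irrevocably. -ed ]

    % merge(Xs, Ys, Zs): Zs is an interleaving of streams Xs and Ys.
    merge([X|Xs], Ys, [X|Zs]) :- merge(Xs, Ys, Zs).
    merge(Xs, [Y|Ys], [Y|Zs]) :- merge(Xs, Ys, Zs).
    merge([], Ys, Ys).
    merge(Xs, [], Xs).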

More important, these languages demonstrate that while
keeping the LOGIC component more or less identical, it is
possible to achieve a great variety of operational
behaviours by changing the CONTROL component.  Hence logic
programming is not a LANGUAGE, it is a PROGRAMMING PARADIGM
encompassing all these operational behaviours.

To sum up, logic programming languages are not just any
programming languages: at some level, most programs written
in such languages have a declarative interpretation
compatible with a logic system, typically universally quantified
definite clause logic. Logic programming languages are not just
logic systems because their control component plays an important
part.  The art of designing logic programming languages is
concerned with maintaining a delicate balance between these two
divergent themes. Even with their control components, logic
programming languages can generally be given simple semantics,
which reflects their underlying conceptual simplicity.

As far as supporting most of the current AI paradigms is concerned,
it should be clear that definite clause logic naturally supports
goal-driven backward chaining.
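
[ In the smallest possible terms, the backward-chaining reading of a
  definite clause program: the query is the goal, and the rule
  bodies are chained through backwards. -ed ]

    mortal(X) :- man(X).
    man(socrates).

    % ?- mortal(Who).
    % Who = socrates.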

It is my contention (as yet unsupported) that the basic style of
computation provided by Concurrent Prolog and CP[!,|,&] naturally
supports data-driven, forward-chaining inference and also knowledge
structuring a la semantic network based languages, again via the
primary mechanism of message-passing.  (This is research in progress,
though there is an article by Shapiro and Takeuchi on object-oriented
programming in Concurrent Prolog.)
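
[ The flavour of the Shapiro and Takeuchi style: an "object" is a
  recursive predicate consuming a stream of messages.  The counter
  below is written in ordinary Prolog for concreteness; in Concurrent
  Prolog the stream would be produced incrementally and clause
  selection would be committed-choice. -ed ]

    % counter(Messages, State): a counter object processing its
    % message stream, carrying its state as an argument.
    counter([], _).
    counter([inc      | Ms], N) :- N1 is N + 1, counter(Ms, N1).
    counter([value(N) | Ms], N) :- counter(Ms, N).

    % ?- counter([inc, inc, value(V)], 0).
    % V = 2.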

Hence I would conclude that, to the extent that logic programming
languages support such  programming paradigms, and to the extent
that they themselves have secure theoretical foundations, one can
at least claim that logic programming offers a step towards a
unified understanding of the foundations of (programming paradigms
used in) AI.

Note that I am not saying anything at all about the psychological
plausibility of controlled inference as the essential "problem
solving capability" in  intelligent agents.

Let me now discuss three specific points raised by Hewitt:

"1.  Logical inference cannot be used to infer the decisions that
need to be taken in open systems because the decisions are not
determined by system inputs.

2.   Logic does not cope well with the contradictory knowledge
bases inherent in open systems.  It leaves out counterarguments
and debate.

3.   Taking action does not fit within the logic paradigm."

Some of these contentions have been made by Hewitt in other
contexts and I still remain as mystified by them as I was then.

Let us keep in mind that we are talking of programming languages,
albeit peculiar programming languages. How does a LISP function
'take action'? Presumably by doing a (setq foo 'take-action).
How does an OPS-5 program take action? Presumably by executing a
(make task-312 ^task take-action) on the right hand side of a
production. So also  logic programming languages take action by
instantiating a variable to some value. If that variable was
actually implemented as a particular memory cell which is being
monitored by a hard-wired coke-dispenser, then it would 'take
action': deliver a coke bottle.
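
[ A small rendering of that picture in Prolog.  The predicates are
  invented for illustration: decide/2 makes the decision by binding
  its second argument, and perform/1 stands in for the hard-wired
  dispenser watching the result. -ed ]

    decide(Coins, dispense_coke) :- Coins >= 50.
    decide(Coins, reject)        :- Coins <  50.

    serve(Coins) :-
        decide(Coins, Action),   % the "action" is this binding
        perform(Action).         % the side effect happens here

    perform(dispense_coke) :- write('*clunk* -- one coke'), nl.
    perform(reject)        :- write('more coins, please'), nl.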

Presumably what is confusing Hewitt is that in Prolog bindings
can be undone on backtracking.  But all that is in the programmer's
control!  If the programmer intends that the action taken be
irrevocable, he would write his axioms in such a fashion that
the binding would not be backtracked over. That is a control
problem!
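
[ Concretely, that control decision is a cut (or once/1): once the
  clause commits, backtracking will not propose an alternative
  binding for the action. -ed ]

    commit_action(Coins, dispense_coke) :- Coins >= 50, !.
    commit_action(_,     reject).

    % ?- commit_action(60, A).
    % A = dispense_coke ;
    % false.                  % no second decision to backtrack into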

About contradictory data bases.  First let me make the obvious
statement that, at a certain level of understanding, every set of
definite clauses has a model, indeed an infinity of models.  Hence
there is no scope for representing contradictory knowledge
directly via axioms in a Horn logic program.  So how do logic
programming systems do it?  Well you have to PROGRAM IN your
handling of such data.  You have to write a program, just as you
would write a program in LISP or ATOLIA which does to this data
what you want to do with it.  The axioms you write (the program
you write) will be executed in the fashion determined by the logic
programming language you choose and by the control you provide:
usually this approximates some kind of inferencing.  But the action
that gets taken when you execute your program depends upon your
program.  If you had written your program such that when you
encounter an inconsistency it would print out "The moon is made of
green cheese" and stop, then, if an inconsitency is encountered,
believe it or not, it will print out "The moon is made of green
cheese" and stop. If you wanted to program in counterargumentsd
and debate then go ahead and do that.  Logic programming does not
provide "counterarguments and debate" as a primtive concept.
The advantage that you get by writing your program in a logic
programming language is that you can reason about it, you can
(hopefully) prove that when the program encounters an inconsistency
it will print out "The moon is made of green cheese" and terminate.
And because of the simplicity of the programming paradigm that you
are using, your proofs would be relatively simple.  If you were
careful in writing your program and in choosing your logic
programming language, the answer that you would get would be a
logical consequence of YOUR AXIOMS (program), which of course COULD
HAVE DONE whatever it wanted with the data.
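
[ Saraswat's example, taken literally.  What counts as a
  contradiction here is itself a programming decision: a fact and
  its explicitly recorded denial. -ed ]

    fact(flies(tweety)).
    fact(not(flies(tweety))).

    check :-
        fact(P),
        fact(not(P)), !,
        write('The moon is made of green cheese'), nl.
    check :-
        write('No inconsistency found'), nl.

    % ?- check.
    % The moon is made of green cheese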

Hence, while on the face of it Hewitt's statement (2) is true,
it is totally irrelevant.

This also takes care of the argument that "logical inference
cannot be used to  infer decisions that need to be taken in open
systems ..." because logical inference is being used to execute
YOUR PROGRAM, which determines how to make those decisions.  If
your program is actually an OPS-5 interpreter which acts on the
data that it gets by using a knowledge base of productions, then,
by heavens, YOUR PROGRAM is NOT doing logical inference.  Hence
even if Hewitt's contention is correct, it is totally irrelevant.

-- Vijay A. Saraswat

------------------------------

End of PROLOG Digest
********************
