[comp.lang.prolog] The meaning of "declarative"

alf@sics.se (Thomas Sjöland) (10/03/88)

This is somewhat philosophical, but it turns up in all Prolog
classes when you try to motivate the language with something
other than its pragmatic virtues. (Please submit explanatory followups only.)

Statement 1, (commonly accepted, it seems, although the meaning is unclear):
	 "Prolog is a declarative language"

Statement 2, (viewpoint held by some experts who agree to statement 1):
	 "functional languages are NOT declarative, while logic programming
	  and equational languages are if they have completeness"

My view: "(declarative programming <-> programming without added intension)"
	  -> Prolog allows declarative programming but doesn't guarantee it.

	The indeterminism of the theorem prover,
	which gives completeness, has nothing to do with "declarativeness".
	My intuition leads me to think of soundness rather than completeness
	when I hear the term "declarative".  Functional programs which are pure
	lambda expressions are declarative, even when they are "higher order",
	in contrast to first order logic programs, which are declarative only
	when the intension coincides with the Herbrand interpretation.  This
	view rules out, among other things, the use of difference structures in
	"declarative logic programs".

	 The phrase "declarative programming" makes sense if your intension
	 of the program is understood as a database for a theorem prover and
	 not as a coded algorithm, i.e. any fact you intended to express in
	 the database can be concluded by the interpreter.  Consider the difference
	 between the completeness of first order predicate calculus and
	 the incompleteness of arithmetic as expressed by the Peano axioms
	 as a first order theory.  Coded algorithms in logic programs seem
	to me to have a similar relation to the underlying theorem prover
	for Horn clauses as arithmetic has to the underlying first order theory
	used to code it.  A difference is perhaps that logic programming
	assumes a particular model, the Herbrand interpretation.

Question: Does "declarative" refer to the completeness of the theorem prover
	used to run the programming language or does it refer to the intension
	of the program ?

--
Thomas Sjöland
SICS, PO Box 1263, S-164 28 KISTA, SWEDEN
Tel: +46 8 752 15 42	Ttx: 812 61 54 SICS S	Fax: +46 8 751 72 30
Internet: alf@sics.se or {mcvax,munnari,ukc,unido}!enea!sics.se!alf

jha@lfcs.ed.ac.uk (Jamie Andrews) (10/06/88)

In article <ALF.88Oct2184429@hathor.sics.se> alf@sics.se (Thomas Sjöland) writes:
>Statement 1, (commonly accepted, it seems, although the meaning is unclear):
>	 "Prolog is a declarative language"

     To me, "declarative" just means that you are writing
programs by declaring (or "defining", or "describing")
mathematical structures.  I think the term was first used to
draw a distinction between imperative languages and languages
like LISP.  So in functional languages, programs describe
functions, and in logic programming, programs describe
relations.

     I remember that when I learned LISP, it was very unclear
to me why being declarative made it so superior to conventional
languages.  I was left with the impression that my prof. just
thought it was better for some vague aesthetic reasons.
Frankly, I'm still unimpressed by purely functional programming
languages.

>My view: "(declarative programming <-> programming without added intension)"

     I'm not sure what you mean by "added intension".  Do you mean
implementation considerations that make the program do more or less
than it would seem to from a purely declarative reading?

>	My intuition leads me to think of soundness rather than completeness
>	when I hear the term "declarative".

     But consider the Prolog program

p(X) :- infinite_loop(X).
p(3).

and the query

?- p(3).

In the declarative reading of the predicate, it looks like p(3)
holds.  But when we run the query, it goes off into an infinite
loop -- thus p(3) effectively does not hold.  In this case, the
declarativeness or intuitiveness of the program has just been
misleading.  A complete system, or an understandable semantics
with respect to which a system is complete, would eliminate
this problem.

     How much of a problem is this?  It depends on your point
of view.  I think that if a system is just sound, that's good
enough for most purposes; but if it's also complete with respect
to some clean semantics, it's even better because we can also
(supposedly) see when a program is going to terminate, as
opposed to just seeing when it's not going to succeed (the only
thing we can get for sure from a declarative reading when the
system is just sound).

     BTW, re. my comment about functional programming:  logic
programming uses the language of logic, which is (by definition,
almost) very close to the way we think and deduce information
from assumptions.  So logic programming is very useful for
systems with lots of logical dependencies (expert systems,
protocol testers, etc.), and its structure corresponds to a wide
class of problems.  Functional programming can make no such
claim -- very few problems seem better thought of in terms of
mathematical functions.  The main thing that functional
programming seems to offer over relational is a clear and
deterministic decision procedure.

--Jamie.
  jha@lfcs.ed.ac.uk
"I always find mathematics so difficult" -- D.Hilbert

kh@s1.sys.uea.ac.uk (Kevin Hammond CMP) (10/07/88)

In article <818@etive.ed.ac.uk>, jha@lfcs.ed.ac.uk (Jamie Andrews) writes:
> I think the term was first used to
> draw a distinction between imperative languages and languages
> like LISP.  So in functional languages, programs describe
> functions, and in logic programming, programs describe
> relations.

But Lisp has assignment, I/O and other imperative features.  How can it
be declarative?  Lisp is functional, yes (it has higher-order
functions), but declarative, no.

-- 
Wot? No Signature?			UUCP:	...!mcvax!ukc!uea-sys!kh
					JANET:	kh@sys.uea.ac.uk

ttey@mulga.oz (YEO Tee Thiam Eric) (10/08/88)

in article <152@s1.sys.uea.ac.uk>, kh@s1.sys.uea.ac.uk (Kevin Hammond CMP) says:
> But Lisp has assignment, I/O and other imperative features.  How can it
> be declarative?  Lisp is functional, yes (it has higher-order
> functions), but declarative, no.

I believe that "declarative" is a relative adjective when used in the
context of programming languages.  When talking about programming languages,
one should talk about the degree of "declarativeness".  If one says that
Prolog is a declarative language, one could also argue that there are
features in Prolog which make it "non-declarative", e.g. the cut.

Certainly, there is a subset of Lisp (pure Lisp) which conforms to our
notion of "declarativeness", just as there is such a subset of Prolog.
Instead of branding a programming language as "declarative" or
"non-declarative" we should consider the features of the language which
encourage the declarative style of programming.

					Eric Yeo

ok@quintus.uucp (Richard A. O'Keefe) (10/10/88)

In article <ALF.88Oct2184429@hathor.sics.se> alf@sics.se (Thomas Sjöland) writes:
>Statement 1, (commonly accepted, it seems, although the meaning is unclear):
>	 "Prolog is a declarative language"

Like "The USA is a democracy", it may be commonly accepted, but it isn't
true.  (The USA is, by design, a _republic_, not a pure democracy.)
It is possible to write a Prolog program in which most of the code can
be taken as declarative, just as it is possible to write pure functional
code in Lisp.  In both languages, it is a good idea to write pure code
whenever you can.  "Prolog is _based_ on a declarative language", perhaps.

>Statement 2, (viewpoint held by some experts who agree to statement 1):
>	 "functional languages are NOT declarative, while logic programming
>	  and equational languages are if they have completeness"

Please name these experts, so that I will know whose books to avoid!

I wonder whether there is any virtue in declarativeness as such, or whether
the real issue may not be hidden dependencies within a program?

lee@munnari.oz (Lee Naish) (10/10/88)

The meaning is summed up in the first chapter ("declarative
semantics") on page 24 of the first edition of Lloyd's book.

It defines theta to be a correct answer substitution for
a program P and goal G if the universal closure of G theta
is a logical consequence of P (that is, all instances of
G theta are true in all models of P).

This definition is given before SLD resolution, the operational
semantics of Prolog, is discussed at all.  The basic semantics of
pure Prolog can be given without any operational concepts.  Most
functional languages and imperative languages have semantics based
on operational concepts such as reduction and assignment.  However,
this does not necessarily mean that declarative semantics do not
exist for such languages.  We should therefore be wary of calling
such languages non-declarative (or even non-logic-programming).

Are difference lists (etc) a problem?  If you misunderstand them 
they are, otherwise they are not (after all, they are just terms).
Thinking that [1,2,3] \ [3] is the same as [1,2] is like thinking
that 3 - 1 is the same as 2.  The fact that some people get
expressions confused with numbers does not mean that expressions
cause a problem for the semantics of the language.
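
Lee's point that difference lists "are just terms" can be made concrete
with the standard constant-time append on difference lists (a textbook
sketch, not code from this thread; dl_append/3 is my name for it):

```prolog
% A difference list represents the list Front minus its unbound
% tail Back, written here as the term Front-Back.
% Appending two difference lists is a single unification:

dl_append(A-B, B-C, A-C).

% ?- dl_append([1,2|X]-X, [3|Y]-Y, L-T).
% binds L = [1,2,3|T]: the ordinary list [1,2,3] once T = [].
```

The `-` here builds a term, just as it does in the expression 3 - 1; no
subtraction is performed, which is exactly the expression/value
distinction Lee draws above.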

Is incompleteness a problem?  Unfortunately, there is not yet a
theorem prover implementation which is complete (you need an unbounded
amount of memory, for a start).  One of the important features of SLD
resolution is that queries with correct answer substitutions will
not finitely fail.  This means that negation as failure is sound.
In programming languages, as opposed to theorem proving systems,
we want to be able to specify algorithms, and completeness is less
important.

	lee

kh@s1.sys.uea.ac.uk (Kevin Hammond CMP) (10/12/88)

In article <517@quintus.UUCP>, ok@quintus.uucp (Richard A. O'Keefe) writes:
> I wonder whether there is any virtue in declarativeness as such, or whether
> the real issue may not be hidden dependencies within a program?

If a language is declarative, then it is possible to apply program
transformations in order to gain efficiencies which are not otherwise
possible, or to compile for a parallel processor without needing to
change the code (or to recompile for the same machine with a different
number of processors).  Even if the dependencies are up front and your
program is essentially declarative, you lose a lot of this potential,
and it's hard work for the compiler writer (as I've found from my
PhD).  For example, one single assignment, intended to improve space
and time efficiency under normal assumptions, can actually force a
parallel program to run sequentially.  Things like higher-order
functions or dynamic variables are also very hard to sort out when you
allow assignment.  I/O has much the same effect as assignment.

Note that this isn't to say that an expert armed with an imperative
language and the appropriate scheduling constructs (PAR, SEQ, etc.)
can't write efficient parallel programs, but surely the aim of
language designers is to make programming easier and machine
independent (and that includes independent of the number of processors
available).  Declarative languages make this possible, at some cost in
compiler-writer time.

Incidentally, at Imperial College, London, they have a pure functional
language with lazy data structures (infinite lists), called HOPE+,
which runs faster than compiled C on a Sun-3, at least for some
examples (benchmarking is very emotive so I won't go into details!).
-- 
Wot? No Signature?			UUCP:	...!mcvax!ukc!uea-sys!kh
					JANET:	kh@sys.uea.ac.uk

alf@sics.se (Thomas Sjöland) (10/12/88)

I wanted to discuss the actually rather vague term "declarative",
to make the point that this term should perhaps be avoided.

I don't assume that something that is "declarative" is a priori
"finer" than anything else.  The unclarity has to do with which
objects you attribute the property "declarative" to (is it a program,
a specification, a programming language, a programming style or
something else?).  Lloyd in his book "Foundations of Logic
Programming" talks about "the declarative concept of finite failure",
which indicates that concepts can be declarative, etc.

Since "declarative" is a property attributed to programming languages,
I would like to make the point that we could consider a FORTRAN
program as syntactic sugar for a sequence of transition relations
which, when satisfied by a theorem prover, represents the execution of
the FORTRAN program, and thereby FORTRAN too would be a "declarative"
language.  The "declarative reading" of some PROLOG programs is well
defined in terms of least Herbrand models, of course, even though its
practical use is more limited than we sometimes claim, since most
PROLOG programming involves coding which sometimes adds meaning
to the program that the PROLOG theorem prover cannot deduce if
goal order is reversed, etc.

I would like to point out, in contrast to some other comments on my
posting, that many normal uses of 'cut' do NOT cause a problem for
the "declarative" reading of a Horn clause program as a set of axioms.
The cut just affects completeness, in that some solutions are not found.
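
A textbook illustration of this point (my sketch, not from the thread):
a membership check with a "green" cut, where each clause is still a
true statement about the relation and the cut only prunes alternatives
(member_check/2 is a hypothetical name):

```prolog
% Both clauses remain true axioms about list membership; the cut
% merely stops the search after the first occurrence is found,
% sacrificing alternative solutions (completeness), not soundness.

member_check(X, [X|_]) :- !.
member_check(X, [_|T]) :- member_check(X, T).

% ?- member_check(2, [1,2,3,2]).   succeeds once, not twice
```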

In <517@quintus.UUCP> ROK writes:
>In article <ALF.88Oct2184429@hathor.sics.se> alf@sics.se (Thomas Sjöland) writes:
>>Statement 2, (viewpoint held by some experts who agree to statement 1):
>>	 "functional languages are NOT declarative, while logic programming
>>	  and equational languages are if they have completeness"
>
>Please name these experts, so that I will know whose books to avoid!

This opinion was stated by some of my colleagues over dinner, and it
may not be as silly as it looks. You can view "declarative programming"
as meaning "programming with axioms and a theorem prover". If you see
a pure functional program as syntactic sugar for a lambda expression,
I guess it makes sense to say that the functional program is not 
"declarative", simply since a lambda expression is not an axiom 
in a theory. Another reading of a functional program could be that
it is a way of defining a directed relation, whereby it suddenly becomes 
"declarative". Well... let's just avoid the term "declarative" when we
try to describe properties of languages like Prolog, shall we ? 

I shouldn't have brought this up in the first place. We all know what we
are talking about anyway.

--
Thomas Sjöland
SICS, PO Box 1263, S-164 28 KISTA, SWEDEN
Tel: +46 8 752 15 42	Ttx: 812 61 54 SICS S	Fax: +46 8 751 72 30
Internet: alf@sics.se or {mcvax,munnari,ukc,unido}!enea!sics.se!alf

jha@lfcs.ed.ac.uk (Jamie Andrews) (10/15/88)

[ I posted this article before, but I suspect that it didn't get
  outside Edinburgh.  If it did, please forgive me for reposting. ]


simon@comp.lancs.ac.uk (Simon Brooke) (10/17/88)

In article <818@etive.ed.ac.uk> jha@lfcs.ed.ac.uk (Jamie Andrews) makes some
assertions which I find contentious:
>
>     BTW, re. my comment about functional programming:  logic
>programming uses the language of logic, which is (by definition,
>almost) very close to the way we think and deduce information
>from assumptions.  

Oh, come on now! Very, very few people can even be taught logic.  Most of
those who can will always find it excruciatingly difficult.  It is a most
unnatural and tedious way of 'deducing information from assumptions'; and
if you can find anyone who thinks logically, I wish you'd introduce them
to me - it would be a new and fascinating experience.

>So logic programming is very useful for
>systems with lots of logical dependencies (expert systems,

That's a pretty bald assertion, too.  The resolution theorem prover is not
good either at handling uncertainty or at generating high-level explanation
traces; consequently, to implement an expert system, you essentially have
to implement a new inference mechanism.  But my experience (and, judging by
the lack of Prolog-based expert systems out there in the real world, other
people's too) is that Prolog is not an easy language in which to write an
inference engine.

>protocol testers, etc.), and its structure corresponds to a wide
>class of problems.  Functional programming can make no such
>claim -- very few problems seem better thought of in terms of
>mathematical functions.  The main thing that functional
>programming seems to offer over relational is a clear and
>deterministic decision procedure.
>
Well yes, and that's a serious point. The notion that you can write 
declarative code in Prolog is a nice one, and it may be that there are
some people out there who can do it. Me, I can't, and God knows I've tried.
Furthermore, I haven't seen any significant chunk of declarative code written
in Prolog by anyone else either. Most Prolog programs which do anything useful
are procedural; but when a piece of procedural Prolog goes wrong, it is 
(in my experience) horrible to debug.

>--Jamie.
>  jha@lfcs.ed.ac.uk
>"I always find mathematics so difficult" -- D.Hilbert

And as for David #@%*&!* Hilbert and his assertion that '...after w (omega),
counting continues naturally w+1, w+2, w+3...' I don't believe it and
never did.


** Simon Brooke *********************************************************
*  e-mail : simon@uk.ac.lancs.comp                                      * 
*  surface: Dept of Computing, University of Lancaster,  LA 1 4 YW, UK. *
*                                                                       *
*  Thought for today: isn't it time you learned the Language            * 
********************* International Superieur de Programmation? *********

alf@sics.se (Thomas Sjöland) (10/17/88)

Some comments to the text by lee@munnari.oz (Lee Naish) 
<2452@munnari.oz>:

>The meaning is summed up in the first chapter ("declarative
>semantics") on page 24 of the first edition of Lloyd's book.

The definition of the word "declarative" is not found on this page.

There are definitions of the concepts "correct answer substitution",
"relation" and "partial order". The adjective "declarative" is used
in the following phrases:
	"This definition of correct answer substitution captures
	 the intuitive meaning of a 'correct answer'. It provides a 
	 declarative understanding of the desired output 
	from a program and a goal. ... showing the equivalence between 
	this declarative concept and the corresponding procedural one, 
	which is defined by the refutation procedure used by the system."

It seems that an "understanding" and a "concept" can be "declarative".
What a "procedural" concept is I don't even dare to ask...:-)

I didn't really want to argue about Lloyd's book which explains many
things about the semantics of "logic programs" in a reasonable way
(even though some logicians I know claim that the notions used are
unnecessarily complex for the task).  The task of proving this
equivalence is important if you want to say that programs can be read
as logical axioms, where results follow logically from the program and
the query seen as a specification.  (I don't really understand why you
need to prove this for ALL MODELS, though, since the least Herbrand
model is the only one a "logic programmer" is interested in, but
perhaps it is not more difficult.)

The natural reading of "declarative" in this context becomes "model
theoretic": "model theoretic understanding", "model theoretic concept".

But what is a "model theoretic language" ?

>this does not necessarily mean that declarative semantics do not
>exist for such languages.  We should therefore be wary of calling
>such languages non-declarative (or even non- logic programming).

Exactly so.  I can't think of a language that cannot be given a
"declarative reading".  I think that the point originally made by
advocates of logic programming is perhaps that this reading is so
evidently natural, as long as you do not add intension to the program
(for instance difference structures, which are NOT a part of the
"declarative reading" but may make perfect sense given a proper
methodology, just like other features of the language).  Problems arise
in program transformation, though.

>The fact that some people get
>expressions confused with numbers does not mean that expressions
>cause a problem for the semantics of the language.

Let me just refer to the result discussed by Søndergaard and Marriott
in another entry.  If things were as trivial as you imply, they wouldn't
have a result to present.

>Is incompleteness a problem? 
Well no, I guess I said something unclear about that earlier. I was
trying to relate the notion of "declarativeness" to "completeness"
and various forms of correctness (partial, total), but that line
of thought seems unfruitful. 

Just two concluding citations from your text:

>One of important features of SLD resolution is that queries with
>correct answers substitutions will not finitely fail.
>This means that negation as failure is sound.
Doesn't this say that completeness of SLD-resolution is necessary
for the understanding of negation as failure ?

>In programming languages, as opposed to theorem proving systems,
>we want to be able to specify algorithms and completeness is less
>important.

So what does SLD-resolution have to do with a programming language
anyway ? 

(This can go on and on. Shall we stop now ?  People will get bored. 
 I am sure they all have some important hacks to finish. :-)

	/Thomas
--
Thomas Sjöland
SICS, PO Box 1263, S-164 28 KISTA, SWEDEN
Tel: +46 8 752 15 42	Ttx: 812 61 54 SICS S	Fax: +46 8 751 72 30
Internet: alf@sics.se or {mcvax,munnari,ukc,unido}!enea!sics.se!alf

abbott@aero.ARPA (Russell J. Abbott) (10/18/88)

In general, it would seem that any programming language for which there
is a semantics that maps programs onto mathematical relations could be
called declarative, although doing so does not seem particularly
insightful.  For example, it is not that unfair to say that to a great
extent denotational semantics translates programs to mathematical
functions by substituting functional composition for semi-colons.  While
that might make the resulting objects declarative rather than
procedural, it is not clear that making that transformation improves
one's intuition a great deal.

From a prolog perspective, I would guess that most people, when
programming in prolog, think relatively procedurally when writing
"rules" and relatively declaratively when writing "facts."  What is
particularly powerful about prolog as a programming language is the ease
with which one can move back and forth between the declarative and
procedural domains.  "Facts" can be made "smart" by embedding them
within rule predicates, and one can gain declarative leverage on "rules"
by storing them as facts and interpreting them.

-- Russ

jha@lfcs.ed.ac.uk (Jamie Andrews) (10/19/88)

     Hum, I guess I should be happy that I got *some* response,
although I didn't really expect this...

     Perhaps I wasn't clear enough in what I was saying.  I was
not asserting that Prolog is "more declarative" than Lisp or
that you can program everything declaratively in Prolog.  I was
comparing the logic programming paradigm to the functional
programming paradigm, in general.  I was pointing out that the
overall structure of logic programs makes them suitable for
problem domains with lots of logical dependencies, whereas the
overall structure of functional programs makes them suitable
for problem domains involving functions -- seemingly fewer.
Though functions come up naturally quite frequently in programs,
they seldom seem to capture the essence of the whole problem.

     That doesn't mean that functional programming is "bad",
just that people seem to use functional programming languages
for reasons having nothing to do with the utility (or lack
thereof) of the functional programming paradigm.

In article <590@dcl-csvax.comp.lancs.ac.uk> simon@comp.lancs.ac.uk (Simon Brooke) writes:
>Oh, come on now! Very, very few people can even be taught logic.  Most of
>those who can will always find it excruciatingly difficult.  It is a most
>unnatural and tedious way of 'deducing information from assumptions'; and
>if you can find anyone who thinks logically, I wish you'd introduce them
>to me - it would be a new and fascinating experience.

     These statements are quite bizarre to me.  Formulae are
formalisations of mathematical statements.  Derivations are
formalisations of proofs.  Proof systems are often given direct
philosophical justifications, from introspection and observation
of the way informal proofs in papers work.  If you meant
something like "proof systems are not close to the way we search
for true assertions", I would agree with you, but that's not
what I was saying.

>                   ... Prolog is not an easy language in which to write an
>inference engine....
>                             ... Most Prolog programs which do anything useful
>are procedural; but when a piece of procedural Prolog goes wrong, it is 
>(in my experience) horrible to debug.

     You're talking about things to do with the problems of
control, determinism, ease of doing code with assignment,
etc., in Prolog.  I didn't deny these problems exist, but you
must realise that these are being addressed in more modern 
logic programming languages like Trilogy.  I was talking,
in any case, about the general applicability of the paradigms.

     Your comments about useful Prolog programs also apply to
useful programs in the Langage International Superieur de
Programmation (:-)).  In most of these, the functional structure
is just superficial and the real work is non-declarative.  The
difference is that the functional structure (*alone*) rarely
buys you anything that you couldn't get in imperative programs,
whereas a logical structure (*alone*) often does.

--Jamie.
  jha@lfcs.ed.ac.uk
"Autumn, to me the most congenial of seasons;
 the university, to me the most congenial of lives" --R.Davies

goldfain@osiris.cso.uiuc.edu (10/20/88)

Simon Brook (simon@comp.lancs.ac.uk) writes Oct 17, 1988 in comp.lang.prolog:
] if you cand find anyone who thinks logically, I wish you'ld introduce them
] to me - it would be a new and fascinating experience.

forall(X, thought(me,X) --> logical(X)).

please(suggest(Y, want(you, exists(Z, conversation(Z) &
                                      participants(Z,S) & subset({you,me},S)
                                      & topic(Z,Y) ))))?

:-> :->

bjr@aipna.ed.ac.uk (Brian Ross) (10/20/88)

In article <859@etive.ed.ac.uk> jha@lfcs.ed.ac.uk (Jamie Andrews) writes:
> [...] I was
>not asserting that Prolog is "more declarative" than Lisp or
>that you can program everything declaratively in Prolog.  I was
>comparing the logic programming paradigm to the functional
>programming paradigm, in general.  I was pointing out that the
>overall structure of logic programs makes them suitable for
>problem domains with lots of logical dependencies, whereas the
>overall structure of functional programs makes them suitable
>for problem domains involving functions -- seemingly fewer.
>Though functions come up naturally quite frequently in programs,
>they seldom seem to capture the essence of the whole problem.

A discussion of this is in the I.C. report "The Unification of Functional 
and Logic Languages" by Darlington, Field, and Pull. They argue that logic 
programs are more descriptively powerful than functional ones due to 
the existence of logical variables. The fact that logical variables can 
be both input and output in nature gives logic programs an extra degree 
of expressiveness (nondeterminism) not possible in conventional functional 
programs, which are strictly deterministic. This means that higher-level
"declarative" program statements are often more naturally expressed in Prolog 
than, say, Lisp.  On the negative side, having lots of complicated logical 
dependencies in a logic program means that the programmer has to be 
acutely aware of them. 
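
The bidirectional behaviour of logical variables that the report
describes can be seen in plain append/3 (the standard definition, shown
here as a sketch):

```prolog
% append/3 is one relation, usable in several modes, because its
% arguments are logical variables rather than fixed inputs/outputs.

append([], Ys, Ys).
append([X|Xs], Ys, [X|Zs]) :- append(Xs, Ys, Zs).

% Forwards:   ?- append([1], [2,3], Zs).     Zs = [1,2,3]
% Backwards:  ?- append(Xs, Ys, [1,2]).      enumerates all three splits
```

A functional append gives only the forwards direction; the backwards
query is the nondeterminism Darlington et al. have in mind.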

As far as what is meant by "declarative" Prolog programming, I think
that if a Prolog clause can be informally interpreted as a
logical statement about the problem, be it high-level or low-level, it is
declarative.  Confusion seems to arise when one considers how high-level a
statement it is, i.e. whether it can be interpreted as a statement of the
program specification.  For example, a pure Prolog program which mimics an
imperative-style computation might still be declarative, but it likely
doesn't have much in common with the natural specification of the problem.
Thus many people would say it really isn't declarative -- I would say it is,
but it isn't a very high-level declarative description of the problem!

Brian Ross

"We have all lived in this century. I have not lived in this century."
	- Senator "Zippy" Quayle

ok@quintus.uucp (Richard A. O'Keefe) (10/30/88)

In article <41315@linus.UUCP> bwk@mbunix (Kort) writes:
>In Bratko's book, the Monkey and Bananas problem handily demonstrates
>both the power and weakness of declarative languages.

I have never recommended this book.  I think you have misunderstood what
the example should be teaching about the power of declarative languages.
If I may be permitted to quote myself:

	The nice thing about declarative programming is that
	you can write a specification and run it as a program.

	The nasty thing about declarative programming is that
	some clear specifications make incredibly bad programs.

	The hope of declarative programming is that
	you can move from a specification to a reasonable program
	without leaving the language.

[My example wasn't as high-tech as monkey-and-bananas.  Just boring old
transitive closure.  Unimaginative, that's me.]

I say that this is what the example _should_ be teaching.
Having put together a nice clean specification, the next step is to
play with the algebra for a bit.  For example, a useful thing to do
might be to have the program start by looking for macro-operators.
(This is where Explanation-Based Learning comes in, and it turns out
to be beautifully easy to do with a modified pure-Horn-clause interpreter.)
Or you might even try bounded depth-first search:  trivially easy to do in
Prolog.
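To show just how trivial, here is a sketch of bounded depth-first search
over a generic state space (the move/3 and goal/1 predicate names are my
own illustrative assumptions, to be supplied by the particular problem):

```prolog
% bounded_dfs(+State, +Bound, -Moves)
% Succeeds if a goal state is reachable from State in at most
% Bound moves; Moves is the plan found.
bounded_dfs(State, _, []) :-
        goal(State).
bounded_dfs(State1, Bound, [Move|Moves]) :-
        Bound > 0,
        Bound1 is Bound - 1,
        move(State1, Move, State2),
        bounded_dfs(State2, Bound1, Moves).
```

The bound cuts off every infinite branch, which is exactly what loops in
the state space would otherwise exploit.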

>After spending an hour or two with this problem, it is easy to understand
>why Prolog programs are hard to debug.

Eh?  If you'd said *C* programs were hard to debug, or Lisp programs were
hard to debug...  Chances are you're trying to debug at the wrong level.

One of the biggest things to grasp about declarative programming (and this
means pure Lisp, ML, Miranda, Prolog, NAIL!, you name it) is the importance
of keeping it clean and obvious so that you can play with it at a high
level, looking for transformations of the problem (leading to transformations
of the program).  The big gains are almost always to be found at the upper
levels (eliminating some operation entirely by transforming the problem
space, for example) rather than by hacking at the low levels (coding the
operation a few microseconds faster).

bwk@mitre-bedford.ARPA (Barry W. Kort) (11/07/88)

In article <602@quintus.UUCP> ok@quintus.UUCP (Richard A. O'Keefe) writes:
 > 	The nice thing about declarative programming is that
 > 	you can write a specification and run it as a program.
 > 
 > 	The nasty thing about declarative programming is that
 > 	some clear specifications make incredibly bad programs.
 > 
 > 	The hope of declarative programming is that
 > 	you can move from a specification to a reasonable program
 > 	without leaving the language.
 > 
 >                            * * * 
 > 
 > One of the biggest things to grasp about declarative programming
 > (and this means pure Lisp, ML, Miranda, Prolog, NAIL!, you name it)
 > is the importance of keeping it clean and obvious so that you can
 > play with it at a high level, looking for transformations of the
 > problem (leading to transformations of the program).  The big gains
 > are almost always to be found at the upper levels (eliminating some
 > operation entirely by transforming the problem space, for example)
 > rather than by hacking at the low levels (coding the operation a
 > few microseconds faster).

A tip of the hat to Richard for his well-penned remarks.
I am amazed at the power of Prolog to solve a problem in
combinatorial logic without being given an explicit method.
But I am also disheartened by the difficulty of untangling
the effects of reordering the clauses.

When a bright student solves a brain-teaser, the order in which
the clues are stated does not strongly affect the method of
solution.  In Prolog, resequencing the clues can have dramatic
effects on the search order.  I find this phenomenon bewildering.

--Barry Kort

ok@quintus.uucp (Richard A. O'Keefe) (11/08/88)

In article <41581@linus.UUCP> bwk@mbunix (Kort) writes:
>A tip of the hat to Richard for his well-penned remarks.
>I am amazed at the power of Prolog to solve a problem in
>combinatorial logic without being given an explicit method.
>But I am also disheartened by the difficulty of untangling
>the effects of reordering the clauses.

Purr purr...

>When a bright student solves a brain-teaser, the order in which
>the clues are stated does not strongly affect the method of
>solution.  In Prolog, resequencing the clues can have dramatic
>effects on the search order.  I find this phenomenon bewildering.

Well, I'm not a student any more, but even when I was, the order of the
clues had a pretty strong effect on me.  (...rrup rrup) Why should it be
bewildering that an intelligent-but-slow organism which spends some time
thinking about the problem and a totally-stupid-but-fast programming
language interpreter should act differently?

Let's look at the monkey-and-bananas program which started this
discussion.  It's just a depth-first search, where states have the
form	state(MonkeyHoriz,MonkeyVert,BoxVert,HasBananas).

initial_state(state(atdoor,onfloor,atwindow,hasnot)).

move(	state(middle,onbox,middle,hasnot),
	grasp,
	state(middle,onbox,middle,has)).
move(	state(P,onfloor,P,H),
	climb,
	state(P,onbox,P,H)).
move(	state(P1,onfloor,P1,H),
	push(P1,P2),
	state(P2,onfloor,P2,H)).
move(	state(P1,onfloor,Q,H),
	walk(P1,P2),
	state(P2,onfloor,Q,H)).

depth_first_monkey(Moves) :-
	initial_state(State),
	depth_first_monkey(State, Moves).

depth_first_monkey(state(_,_,_,has), []).
depth_first_monkey(State1, [Move|Moves]) :-
	move(State1, Move, State2),
	depth_first_monkey(State2, Moves).

Note that this state space is just about as bad for depth-first search
as it can possibly be:  for any P,H,  push(P,P) is a possible
action which transforms state(P,onfloor,P,H) to itself, and similarly
for any P,Q,H,  walk(P,P) is a possible action which transforms
state(P,onfloor,Q,H) to itself.  Or if you're smart enough to avoid
trivial loops, push(P,Q);push(Q,P) and walk(P,Q);walk(Q,P) are loops.
Loops are BAD NEWS for depth-first search.

There is an easy way of fixing this, and while it looks like a hack it
is actually a principled strategy which competes well with breadth-first
search.  That is ITERATIVE DEEPENING.  To keep this message short:

iterative_deepening_monkey(Moves) :-
	list(Moves),
	depth_first_monkey(Moves).

list([]).
list([_|L]) :-
	list(L).

I suggest that you consult a good AI text to find out when iterative
deepening is appropriate.  In this particular case, the iterative
deepening version works well no matter how you shuffle the rules.
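For concreteness, a query against the clauses above (traced by hand, so
the exact answer may vary with your Prolog, but the shortest plan has
four moves):

```prolog
% list/1 generates candidate plans of length 0, 1, 2, ... and the
% bound-length list turns depth_first_monkey/1 into a bounded search.
?- iterative_deepening_monkey(Moves).
% Moves = [walk(atdoor,atwindow), push(atwindow,middle), climb, grasp]
```

No plan of length 0..3 exists, so the looping walk and push moves are
never given room to diverge.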

This is actually a problem where working backwards is a good idea:
there is only one move which can possibly yield the final state, only
one move which can yield that state, and not too many before that.

What one really wants is a "smart" program which notes for example that
any sequence Walk*{Push;Walk*}* can be collapsed to either Walk or
Walk?Push;Walk?, that no action can follow a 'grasp', and that no
other action than 'grasp' can follow 'climb', so that the only
"interesting" sequences are
	{Walk | Walk? Push Walk?} {Climb Grasp?}?
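That collapsed pattern can be sketched as a DCG over move sequences (a
hypothetical illustration of mine, not part of the original program; the
nonterminal names are made up):

```prolog
% Enumerate only the "interesting" plan shapes
%     {Walk | Walk? Push Walk?} {Climb Grasp?}?
interesting(Plan) :-
        phrase(plan, Plan).

plan --> locomotion, finish.

locomotion --> [walk(_,_)].
locomotion --> opt_walk, [push(_,_)], opt_walk.

opt_walk --> [].
opt_walk --> [walk(_,_)].

finish --> [].
finish --> [climb], opt_grasp.

opt_grasp --> [].
opt_grasp --> [grasp].
```

Searching only plans of this shape prunes the state space down to a
handful of candidates before any move is ever simulated.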

-- This is why I have never really got interested in things like the Zebra
puzzle:  by the time you have figured out a cunning way to represent the
search space and the rules there is nothing left to write in Prolog.

lang@macbeth.PRC.Unisys.COM (Francois-Michel Lang) (11/16/88)

In article <648@quintus.UUCP> ok@quintus.UUCP (Richard A. O'Keefe) writes:
...
	Let's look at the monkey-and-bananas program which started this
	discussion.  It's just a depth-first search, where states have the
	form	state(MonkeyHoriz,MonkeyVert,BoxVert,HasBananas).
...
	There is an easy way of fixing this, and while it looks like a hack it
	is actually a principled strategy which competes well with breadth-first
	search.  That is ITERATIVE DEEPENING.  To keep this message short:
...
	I suggest that you consult a good AI text to find out when iterative
	deepening is appropriate.  In this particular case, the iterative
	deepening version works well no matter how you shuffle the rules.
...

Can somebody point to a reference for "iterative deepening"?
I have looked in the indexes of a half-dozen or so standard AI textbooks
and found nothing.  I know this is a term and a search strategy
that Richard discusses.  Richard, did you come up with the name?
If not, do you know who did, or where the strategy is referred to
as such (other than your messages and tutorial notes)?
----------------------------------------------------------------------------
Francois-Michel Lang
Paoli Research Center, Unisys Corporation lang@prc.unisys.com (215) 648-7256
Dept of Comp & Info Science, U of PA      lang@cis.upenn.edu  (215) 898-9511

ok@quintus.uucp (Richard A. O'Keefe) (11/16/88)

In article <8326@burdvax.PRC.Unisys.COM> lang@macbeth.PRC.Unisys.COM (Francois-Michel Lang) writes:
>Can somebody point to a reference for "iterative deepening"?

I picked up the idea from one of Stickel's papers about his
"Prolog Technology Theorem Prover".  I wish I had thought of it, but I'm
not that smart.  I _thought_ I had seen it in Judea Pearl's book
"Heuristics", but can't find it again (I may have been thinking of
"staged search").  Just at the moment I can't locate Stickel's paper
(if you could see my desk you'd know why!)  or I'd quote the reference
he gave.  The Handbook of AI describes depth-first search and bounded
depth-first search, and iterative deepening is just one controlling
the other...

Actually, looking at iterative deepening as a _factored_ search gives
you ideas about other ways of exploiting the basic idea.

debray@arizona.edu (Saumya K. Debray) (11/17/88)

> In article <8326@burdvax.PRC.Unisys.COM> lang@macbeth.PRC.Unisys.COM (Francois-Michel Lang) writes:
> >Can somebody point to a reference for "iterative deepening"?

Richard E. Korf, "Depth-first Iterative-Deepening: An Optimal
Admissible Tree Search", Artificial Intelligence 27 (1985)
pp. 97-109.

-- 
Saumya Debray		CS Department, University of Arizona, Tucson

     internet:   debray@arizona.edu
     uucp:       arizona!debray

ok@quintus.uucp (Richard A. O'Keefe) (11/17/88)

In article <7919@megaron.arizona.edu> debray@arizona.edu (Saumya K. Debray) writes:
>Richard E. Korf, "Depth-first Iterative-Deepening: An Optimal
>Admissible Tree Search", Artificial Intelligence 27 (1985)
>pp. 97-109.

Apparently he also had a paper about this in IJCAI '85.

patch@hpldola.HP.COM (Pat Chkoreff) (11/18/88)

The elegant expression of a problem in Prolog yields great peace.


... Random musings after successful coding

mgv@usceast.UUCP (Marco Valtorta) (11/19/88)

See also: Korf, R.E.  "Optimal Path-Finding Algorithms."  In:
Kanal, L. and V. Kumar (eds). _Search In Artificial Intelligence_.
Springer-Verlag, 1988.

My first reaction was also to look up "iterative deepening" in Pearl's book!