[comp.lang.prolog] Fun. vs. Logic

patch@hpldola.HP.COM (Pat Chkoreff) (11/10/89)

I have been reading "Elements of Functional Programming", by Chris Reade,
and it has caused me to reconsider the relative merits of logic versus
functional programming languages.  Specifically, I am considering the merits
of Prolog versus ML or Miranda.

Most Prolog predicates implement functions, so why not be honest and just
use a functional language?  It is very difficult, if not impossible, to
write Prolog relations which work robustly in "all directions", i.e., solve
for their inputs given their outputs.  O.K., it's possible to write a parser
which can be run backward, converting a syntax tree to a token stream.  But
once you add a symbol table and a code generator, you might as well be
honest:  you have implemented a function mapping a token stream into object
code, it probably isn't reversible, and you probably don't even care.  The
nice thing about a functional language is that the functional dependencies
are documented within the language, rather than as extraneous "modes" or in
comments.

In Prolog, there is a wide divergence between theory and practice, but in a
functional language, the theory itself is practical.  Most practical Prolog
programs use constructs which are impossible to define within the pure Horn
clause model -- they are riddled with constructs such as cut, not, and
findall, and sometimes even assert and retract.  These detract from the
value of a Prolog program as a mathematical specification.  The only impure
constructs I've seen in functional programming are exceptions and
destructive assignment, and these are very rarely used.  In the "Elements of
Functional Programming", each of these is used exactly once in order to
implement some low-level stuff.  Every other program is defined purely
functionally, which means that it has well-defined semantics within lambda
calculus.

Furthermore, functional programming languages are strongly typed and
higher-order, which cannot be said for Prolog.  That allows me to do things
like manipulate graphs as full-fledged objects, rather than as data
scattered around in extensional relations.  I can do that in pure Prolog,
too, but it ain't easy.

Are my points mistaken, or are there some features of Prolog which are just
so wonderful that they eclipse all the bad things?  Maybe logical variables
or something?  I have quite a bit of experience with Prolog, but none
whatsoever with ML or Miranda.


Patrick Chkoreff   719-590-5983           ---  R.S.V.P. ---
{hpfcla,hplabs}!hpldola!patch

ok@mudla.cs.mu.OZ.AU (Richard O'Keefe) (11/18/89)

In article <11500018@hpldola.HP.COM>, patch@hpldola.HP.COM (Pat Chkoreff) writes:
> Most Prolog predicates implement functions, so why not be honest and just
> use a functional language?

A function is a relation, so what's dishonest about using a relation?
Don't laugh, but one of the things I like about Prolog is how easy it
is to have a procedure return more than one value without having to
make a special data structure to hold the values and without having
to use special syntax.
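
A small illustration, using the folklore partition/4 from quicksort:
one call delivers two result lists, and nothing special has to be
declared:

    % partition(Pivot, List, Less, NotLess): split List around Pivot,
    % returning both halves from the one call.
    partition(_, [], [], []).
    partition(P, [X|Xs], [X|Ls], Gs) :-
        X < P,
        partition(P, Xs, Ls, Gs).
    partition(P, [X|Xs], Ls, [X|Gs]) :-
        X >= P,
        partition(P, Xs, Ls, Gs).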

There is not one tiny little thing to stop you using functional syntax in
Prolog if you really want to.  Lee Naish and I both have term_expansion
hooks to let you use functional syntax in ordinary Prolog code; mine is
for Quintus/SICStus Prolog, his is for NU Prolog.  With NU Prolog
coroutining behind you, you can generate lazy code if you want.
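
To give the flavour of such a hook, here is a toy sketch, much less
capable than either package (the := operator and the add_result/3
helper are invented for the example):

    :- op(1150, xfx, :=).

    % Rewrite "f(Args) := R" facts and "f(Args) := R :- Body" rules
    % into ordinary predicates with an extra result argument, so
    % "double(X) := Y :- Y is 2*X." loads as "double(X, Y) :- Y is 2*X."
    term_expansion(((Head := Result) :- Body), (NewHead :- Body)) :-
        add_result(Head, Result, NewHead).
    term_expansion((Head := Result), NewHead) :-
        add_result(Head, Result, NewHead).

    add_result(Head, Result, NewHead) :-
        Head =.. [F|Args],
        append(Args, [Result], Args1),
        NewHead =.. [F|Args1].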

> It is very difficult, if not impossible, to
> write Prolog relations which work robustly in "all directions".

The simple reason for this is that many predicates have infinitely many
solutions in some directions.  But I fail to see why the fact that a
predicate can't be used ALL ways round is a reason for only letting me
use it ONE way round.  Why not let me use it in all the modes where it
does work?

> The nice thing about a functional language is that the functional
> dependencies are documented within the language, rather than as
> extraneous "modes" or in comments.

So use functional syntax in your Prolog programs.  No-one's stopping you.
I would point out, though, that I have _often_ found that the discipline
of taking something that I initially thought of as a function, trying to
express it in logic, and then asking "is it possible in principle to use
this more than one way around" has led me to a simpler and more general
interface.

> In Prolog, there is a wide divergence between theory and practice

The width of the difference is a good measure of the ability of the
programmer.  Wide gap = bad programmer or bad implementation.

> Most practical Prolog
> programs use constructs which are impossible to define within the pure Horn
> clause model -- they are riddled with constructs such as cut, not, and
> findall, and sometimes even assert and retract.

The theory is not "pure Horn clauses" but "first-order logic".
In NU Prolog, there are logical versions of not/1 (not/1 *is* the logical
version, as in DEC-10 Prolog and Quintus Prolog; \+ /1 is the unsound one)
and all-solutions predicates, and thanks to indexing, a competent programmer
doesn't need many cuts at all.  And while NU Prolog is not the world's
fastest Prolog system, it's not bad.

> Furthermore, functional programming languages are strongly typed and
> higher-order, which cannot be said for Prolog.

Functional languages do not have to be strongly typed.
Prolog *can* be strongly typed.  My formal specification of Prolog is
a strongly typed Prolog program.  (NU Prolog these days has two type
checkers that I know of.  Lee Naish's one is really amazing.)

Prolog-as-we-know-it is not higher order, but that doesn't stop me using
maplist and such things in Prolog, and there is Lambda Prolog which is a
higher-order logic-based language.

> That allows me to do things
> like manipulate graphs as full-fledged objects, rather than as data
> scattered around in extensional relations.  I can do that in pure Prolog,
> too, but it ain't easy.

Damn right it's easy.  I've done it.

Anything I can do in a functional language I can do in a logic-based language
in roughly the same amount of code.  If I want type-checking, I can have it.
If I want higher-order operations, I can use them.  If I want lazy functions,
I can get that with coroutining.  And I also get reversible code when I take
the trouble to write it, backtracking search when I want it, AND a language
with very strong ties to relational data base theory.

I'm not knocking functional languages:  I have a couple of Schemes here,
I'm a fan of ML from Edinburgh days, I've got and keep rereading Peyton-
Jones, and I'm really impressed with Lazy ML.  But does it have to be
only functional or only logic languages?  Of course not.  Functional and
logic languages both try to be declarative, and they have a lot in common.
One thing which Prolog programmers ought to copy from functional programming
is the determination to stick with the theory as long as possible:  it
really is the case that clean Prolog code tends to be faster than hacky stuff.

markv@phoenix.Princeton.EDU (Mark T Vandewettering) (11/19/89)

In article <11500018@hpldola.HP.COM> patch@hpldola.HP.COM (Pat Chkoreff) writes:

>Specifically, I am considering the merits
>of Prolog versus ML or Miranda.

Strictly speaking, I suppose I fall into the functional programming
camp, but I know less of logic programming techniques and
implementation, so take what I say with a grain of salt.  I think I will
play devil's advocate and point out some relatively novel features of 
logic programming languages.

>Most Prolog predicates implement functions, so why not be honest and just
>use a functional language?  It is very difficult, if not impossible, to
>write Prolog relations which work robustly in "all directions", i.e., solve
>for their inputs given their outputs.  

Actually, as I am sure many others will argue, I don't believe it is
that hard.  It may be difficult to make a PROGRAM that can be run in
reverse, but individual relations are probably not that difficult.

Even if it is relatively difficult, it is precisely this ability that
gives logic programming languages an advantage over functional
programming languages.   It is true that if a Prolog relation is used
only in a particular mode, it is advantageous to compile it relatively
conventionally.  Mode declarations and abstract interpretation can be
used to good advantage here.  The ability to perform in either mode is
a strong advantage, however, because one can query and test facts using
the same set of relations.
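
The canonical illustration is append/3, which answers different
questions depending on which arguments are bound:

    append([], Ys, Ys).
    append([X|Xs], Ys, [X|Zs]) :- append(Xs, Ys, Zs).

    % ?- append([1,2], [3], Zs).      Zs = [1,2,3]        (compose)
    % ?- append(Xs, Ys, [1,2,3]).     all four splits, on backtracking
    % ?- append(_, [Last], [1,2,3]).  Last = 3            (test/extract)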

>In Prolog, there is a wide divergence between theory and practice, but in a
>functional language, the theory itself is practical.  Most practical Prolog
>programs use constructs which are impossible to define within the pure Horn
>clause model -- they are riddled with constructs such as cut, not, and
>findall, and sometimes even assert and retract.  

There are many purists in the logic programming camp who carefully
rethink the existence of these primitives and try to develop rational
theories for their inclusion, or rational reasons for their abolition.
Remember, not all of logic programming is Prolog.

>These detract from the
>value of a Prolog program as a mathematical specification.  The only impure
>constructs I've seen in functional programming are exceptions and
>destructive assignment, and these are very rarely used. 

AIIIGH!  The horror!  ASSIGNMENT?

Exceptions are not necessarily a non-functional construct; it is
relatively simple to design a moderately effective exception handling
mechanism if you adopt a language in which continuations are first
class.  

Destructive assignment is an atrocity, and totally against the spirit of
functional programming, neatly messing up lots of nice properties.  I am
for a language without the crutch of assignment.

>Furthermore, functional programming languages are strongly typed and
>higher-order, which cannot be said for Prolog.  That allows me to do things
>like manipulate graphs as full-fledged objects, rather than as data
>scattered around in extensional relations.  I can do that in pure Prolog,
>too, but it ain't easy.

Functional languages need not be strongly typed, higher order, or
polymorphic (one you missed).  Those are properties of individual
languages in the family.  Again I would stress that there are other
languages than Prolog in the logic programming world.  The reason for
Prolog's popularity among researchers is no doubt because it presents
enough problems that haven't been solved yet :-)

>Are my points mistaken, or are there some features of Prolog which are just
>so wonderful that they eclipse all the bad things?  Maybe logical variables
>or something?  I have quite a bit of experience with Prolog, but none
>whatsoever with ML or Miranda.

Two features of Prolog are particularly outstanding: backtracking, and
logic variables.  

Backtracking allows the simple expression of complex control, and
promotes certain kinds of program structures that are cumbersome in
traditional languages, such as "generate-and-test" programs.  
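
The hackneyed example is permutation sort, in which both the generator
and the tester read as specifications:

    % naive_sort(Xs, Ys): Ys is a sorted permutation of Xs.
    naive_sort(Xs, Ys) :- permutation(Xs, Ys), sorted(Ys).

    permutation([], []).
    permutation(Xs, [Y|Ys]) :- select(Y, Xs, Zs), permutation(Zs, Ys).

    select(X, [X|Xs], Xs).
    select(X, [Y|Xs], [Y|Ys]) :- select(X, Xs, Ys).

    sorted([]).
    sorted([_]).
    sorted([X,Y|Zs]) :- X =< Y, sorted([Y|Zs]).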

Logic variables allow the instantiation of only partially defined data,
without destroying the "pure" nature by having to introduce assignment
to structures to fill in undefined portions of a data object.
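
Difference lists are the classic case: the tail of the list is an
unbound "hole" that later unification fills in, with no destructive
update anywhere:

    % A queue as a difference list Front-Hole, where Hole is the
    % uninstantiated tail of Front.
    empty_queue(Q-Q).

    % Enqueueing binds the old hole to a new cell with a fresh hole.
    add_to_queue(X, Front-[X|Hole], Front-Hole).

    % Dequeueing peels an element off the front.
    remove_from_queue(X, [X|Front]-Hole, Front-Hole).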

Each family has interesting strengths, and provides a rich environment
for future research.  In particular, I believe these two families
provide the most hope for effective parallel implementation.

Till later.

Mark VandeWettering

finin@antares (Tim Finin) (11/19/89)

In article <11500018@hpldola.HP.COM>, patch@hpldola (Pat Chkoreff) writes:
>I have been reading "Elements of Functional Programming", by Chris Reade,
>and it has caused me to reconsider the relative merits of logic versus
>functional programming languages.  Specifically, I am considering the merits
>of Prolog versus ML or Miranda....

It's hard and somewhat pointless to argue the merits of functional
programming versus logic programming languages independent of any
class of applications.  Luckily, we can have many programming
paradigms and choose one well suited for any given application.  I
think that Prolog, and most other LP languages, offer an interesting
and well integrated collection of tools: implicit backtracking,
unification, logic variables, relational programming, database, etc.
The result is very good for programming certain kinds of problems and
less good for others.

Tim
  Tim Finin                       finin@prc.unisys.com (internet)
  Unisys Paoli Research Center    215-648-7446 (office) 215-648-7412 (fax)
  PO Box 517, Paoli, PA 19301     215-386-1749 (home)

dorai@titan.rice.edu (Dorai Sitaram) (11/20/89)

Mark,
	I have cross-posted to comp.lang.scheme. If there is lack of
interest in comp.lang.prolog (since I have digressed from the
subject-line "Fun. vs. Logic"), you may consider restricting
distribution to comp.lang.scheme. Here goes...

In article <11610@phoenix.Princeton.EDU> markv@phoenix.Princeton.EDU (Mark T Vandewettering) writes:
>In article <11500018@hpldola.HP.COM> patch@hpldola.HP.COM (Pat Chkoreff) writes:
>>These detract from the
>>value of a Prolog program as a mathematical specification.  The only impure
>>constructs I've seen in functional programming are exceptions and
>>destructive assignment, and these are very rarely used. 
>
>AIIIGH!  The horror!  ASSIGNMENT?
>
>Exceptions are not necessarily a non-functional construct, it is
>relatively simple to design a moderately effective exception handling
>mechanism if you adopt a language in which continuations are first
>class.  
>
>Destructive assignment is an atrocity, and totally against the spirit of
>functional programming, neatly messing up lots of nice properties.  I am
>for a language without the crutch of assignment.

I have two questions for you, as I consider both continuations and
assignment imperative, i.e., extrafunctional features. (I use
"extrafunctional" non-pejoratively, since I have been caught using
said features without qualms. :-])

@ Why are exceptions (and continuations) "not necessarily
  non-functional"? I'd like to know the metric you use to judge
  "functionalness."

@ If, by your definition of "functionalness," continuations turn out to
  be functional, then why are assignments denied the same privilege?

In other words, what are the nice properties of "functionalness" that
are violated by assignment but preserved by adding first-class
continuations?  (I concur with you that continuations and assignment
affect a functional language in remarkably different ways: which is
but natural, since they are two distinct extensions to the base
language.)

>...
>Two features of Prolog are particularly outstanding: backtracking, and
>logic variables.  ...
>
>Each family has interesting strengths, and provides a rich environment
>for future research.  In particular, I believe these two families
>provide the most hope for effective parallel implementation.

It is probably only moderately well-known that a functional language
with extrafunctional facilities like assignment and continuations can
fairly easily embed (note: "embed" is better than "model") a logic
programming language with extralogical facilities such as Prolog. The
literature on this subject (i.e., embedding only, _not_ suggested
combinations of log+fun, which are rife) is confined to two short but
interesting papers, one by Chris Haynes and the other by Matthias
Felleisen, and probably one by Srivastava, Oxley & Srivastava which I
haven't seen.  It might be interesting to see if the gulf has been
bridged from the other side, i.e., embedding (functional + assignment
+ continuations) in a (logic + cut) language.

--dorai
--
-------------------------------------------------------------------------------
It may be that the gulfs will wash us down;
It may be we shall touch the Happy Isles.
-------------------------------------------------------------------------------

lee@munnari.oz.au (Lee Naish) (11/20/89)

In article <2743@munnari.oz.au> ok@mudla.cs.mu.OZ.AU (Richard O'Keefe) writes:
>There is not one tiny little thing to stop you using functional syntax in
>Prolog if you really want to.  Lee Naish and I both have term_expansion
>hooks to let you use functional syntax in ordinary Prolog code; mine is
>for Quintus/SICStus Prolog, his is for NU Prolog.  With NU Prolog
>coroutining behind you, you can generate lazy code if you want.

I don't think current coroutining systems are quite smart enough to give
you lazy evaluation.  My implementation of NU-Prolog + equations
("NUE-Prolog") does allow lazy evaluation but it can run under a
conventional Prolog with no problems.  The generated code does include
"when declarations" which control coroutining, but normally the
execution proceeds left to right.  The when declarations are there so
that if you call a function with the input uninstantiated the
evaluation will be delayed and, more importantly, the compiler will
generate better indexing for the code (it will index on all inputs and
thus avoid creating choice points).  An added advantage is that the code
can be run using stream and-parallelism (PNUE-Prolog?).  The input to
the system looks like this:

?- lazy concat/2.	% optional
concat([], A) = A.
concat(A.B, C) = A.concat(B, C).
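
In relational form, the eager translation amounts to roughly this (a
schematic sketch only, not the actual generated code):

?- concat(A, B, C) when A or C.		% index on either end
concat([], A, A).
concat(A.B, C, A.D) :- concat(B, C, D).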

The system is currently written as a preprocessor.  I have been meaning
to make it a bit more flexible, so it can output conventional Prolog,
(P)NU-Prolog, Parlog or (Flat) GHC (the if-then-else construct I use has
to be restricted for the latter two).

>NU Prolog these days has two type checkers that I know of
There are three that are currently being combined into one system, plus
some others....

>Prolog-as-we-know-it is not higher order, but that doesn't stop me using
>maplist and such things in Prolog

My system has some basic things like apply defined.
Prolog-as-we-know-it has call and univ/functor+arg, which is all you
need.
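
For instance, the folklore definitions in that style (a sketch, not the
code from my system):

% apply(Goal0, Extra): add the list Extra of arguments to Goal0
% and call the resulting goal.
apply(Goal0, Extra) :-
	Goal0 =.. [F|Args0],
	append(Args0, Extra, Args),
	Goal =.. [F|Args],
	call(Goal).

% maplist(P, Xs, Ys): P relates corresponding elements of Xs and Ys.
maplist(_, [], []).
maplist(P, [X|Xs], [Y|Ys]) :-
	apply(P, [X, Y]),
	maplist(P, Xs, Ys).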

	lee

pereira@alice.UUCP (Fernando Pereira) (11/20/89)

In article <2754@munnari.oz.au> lee@munmurra.UUCP (Lee Naish) writes:
>I don't think current coroutining systems are quite smart enough to give
>you lazy evaluation.  My implementation of NU-Prolog + equations
>("NUE-Prolog") does allow lazy evaluation but it can run under a
>conventional Prolog with no problems.

Sanjai Narain's LOG(F) [Narain86] is another example of lazy first-order
functional programming compilable into Prolog, working seamlessly with
logical variables. Lazy stream operations in LOG(F) are the basis for
narrowing grammars, a nice grammar formalism to describe traces of
concurrent processes [Chau&Parker89].

>>Prolog-as-we-know-it is not higher order, but that doesn't stop me using
>>maplist and such things in Prolog
>
>My system has some basic things like apply defined.
>Prolog-as-we-know-it has call and univ/functor+arg, which is all you
>need.

Two notions of ``higher-order'' need to be distinguished here. ``Higher-order''
as in functional programming means having functions as first-class
*functional* citizens. This is what I think Lee means here. The
point was made in [Warren82], and other variants are
possible [Ait-Kaci&Nasr86]. The other notion is where functions are
first-class *logical* objects, as in Church's simple theory of types.
Lambda-Prolog is the prime example of a logic programming system in
this category. While for the first notion basically what is necessary
is some means to create and apply closures (maybe with delaying until
things are sufficiently instantiated for functional evaluation, as in
LeFun [Ait-Kaci&Nasr86]), the second notion requires much more complex
machinery, including higher-order unification.

Fernando Pereira
2D-447, AT&T Bell Laboratories
600 Mountain Ave, Murray Hill, NJ 07974
pereira@research.att.com


@Article{Narain86,
  author = 	"Sanjai Narain",
  title = 	"A Technique for Doing Lazy Evaluation in Logic",
  journal = 	"Logic Programming",
  year = 	"1986",
  volume = 	"3",
  number = 	"3",
  pages = 	"259-???",
  month = 	"October"
}
@TechReport{Ait-Kaci&Nasr86,
  author = 	"Hassan Ait-Kaci and Roger Nasr",
  title = 	"Residuation: A Paradigm for Integrating Logic and
		 Functional Programming",
  institution = 	"{MCC}",
  year = 	"1986",
  type = 	"Technical Report",
  number = 	"AI-359-86",
  address = 	"Austin, Texas"
}
@InCollection{Warren82,
  author = 	"David H. D. Warren",
  title = 	"Higher-Order Extensions to {Prolog}---Are They Needed?",
  key = 	"Warren",
  booktitle = 	"Machine Intelligence 10",
  editor = 	"Hayes and Michie and Pao",
  publisher = 	"Ellis Horwood",
  year = 	"1982",
  annote = 	"Extensions; predicate objects; sets."
}
@InProceedings{Chau&Parker89,
  author = 	"H. Lewis Chau and D. Stott Parker",
  title = 	"Narrowing Grammars",
  booktitle = 	"Logic Programming: Proceedings of the Sixth
		 International Conference",
  year = 	"1989",
  editor = 	"Giorgio Levi and Maurizio Martelli",
  pages = 	"199-217",
  publisher = 	"{MIT} Press",
  address = 	"Cambridge, Massachusetts",
  month = 	"July",
  note = 	"Held in Lisbon, Portugal"
}

gateley@m2.csc.ti.com (John Gateley) (11/21/89)

In article <3055@brazos.Rice.edu> dorai@titan.rice.edu (Dorai Sitaram) writes:
>It might be interesting to see if the gulf has been
>bridged from the other side, i.e., embedding (functional + assignment
>+ continuations) in a (logic + cut) language.

In the proceedings of COMPSAC 89, there is a paper by Barrett Bryant
on implementing attribute grammars, static semantics, and axiomatic
semantics in Prolog.

John
gateley@m2.csc.ti.com

patch@hpldola.HP.COM (Pat Chkoreff) (11/22/89)

>> patch@hpldola.HP.COM (Pat Chkoreff) writes:
> ok@mudla.cs.mu.OZ.AU (Richard O'Keefe) writes:

>> Most Prolog predicates implement functions, so why not be honest and just
>> use a functional language?

> A function is a relation, so what's dishonest about using a relation?
> Don't laugh: but one of the things I like about Prolog is how easy it
> is to have a procedure return more than one value without having to
> make a special data structure to hold the values and without having
> to use special syntax.

Admittedly, things like divide(X,Y,Q,R) are nice, and it's nice to just use
append/3 instead of ldiff, append, and all those "functional" variants of
the same thing.  Absolutely.  But here's what ticks me off about Prolog:
the danger of NON-TERMINATION is constantly rearing its ugly head.  I write
a predicate, and then I immediately see how it is riddled with dangers when
I call it in certain ways.  I'm not even talking about dangers due to the
lack of the occurs check:  there's not a thing I can do about those.

I can either document these dangers in comments, mode declarations, or some
other "bag" on the side of the language, or I can rewrite my predicate to be
robust.  I, a self-styled purist, usually opt for the latter.

> So use functional syntax in your Prolog programs.  No-one's stopping you.
> I would point out, though, that I have _often_ found that the discipline
> of taking something that I initially thought of as a function, trying to
> express it in logic, and then asking "is it possible in principle to use
> this more than one way around" has led me to a simpler and more general
> interface.

Exactly.  That's what I do.  And I end up with omni-directional predicates.
But in order to do that, I have to come up with some really strange
representations for objects.  I suspect that these representations have a
lot in common with what is known as "constructive mathematics".  To build an
object, you start with [].  Given any object, there are a FINITE number of
constructors you can apply to create another object.  My domain predicates
for these objects always take the form:

    object([]).
    object([C|X]) :-
        object(X),
        construct(C, X).

The construct/2 predicate always returns a ground term for C.  Furthermore,
if X is a ground term representing an object, then C will be a ground term
representing a constructor applied to that object, and there are a FINITE
number of solutions for C.

What does all of this mean?  It means that if I invoke the query:

    object(X), write(X), nl, fail.

then the interpreter will systematically generate all possible objects in
the domain.  Of course, the objects must be recursively enumerable, but that
goes for anything you can represent in a computer.
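
For natural numbers, the trivial case, the schema instantiates like
this: a number is just a list of s constructors, and construct/2 has
exactly one solution per object:

    object([]).                     % zero
    object([C|X]) :-
        object(X),
        construct(C, X).

    construct(s, _).                % successor applies to any natural

    % ?- object(X), write(X), nl, fail.
    % prints [], [s], [s,s], [s,s,s], ...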

>> It is very difficult, if not impossible, to
>> write Prolog relations which work robustly in "all directions".

> The simple reason for this is that many predicates have infinitely many
> solutions in some directions.  But I fail to see why the fact that a
> predicate can't be used ALL ways round is a reason for only letting me
> use it ONE way round.  Why not let me use it in all the modes where it
> does work?

But how do you determine the modes in which it works, how do you document
them in the language, and what does it mean to the compiler?  What would you
end up with?  A sort of hybrid functional/logic language?  GREAT!  I'd fill
out a purchase request for that.

By the way, I'll bet that a lot of predicates which SEEM to have infinitely
many solutions in some directions can be recast so that they don't.  The
problem is:  it's a major pain in the ass.

Here's an example.  Let's write a predicate which defines "sequences of
natural numbers" in a straightforward way.

    sequence([]).
    sequence([N|Ns]) :-
        natural(N),
        sequence(Ns).

    natural([]).
    natural(s(N)) :-
        natural(N).

Now we write a predicate card(Ns,N), meaning that the cardinality (length)
of Ns is N:

    card([], []).
    card([_|Ns], s(N)) :-
        card(Ns, N).

Now we look for sequences of cardinality 2 (s(s([]))):

    sequence(Ns), card(Ns, s(s([]))).

We get one solution Ns=[[],[]], and on backtracking we fly off into the
weeds.  I know that I could fold card/2 back into sequence/1, but I could
replace card/2 with an arbitrarily complex condition which might be very
difficult to unfold.

So here's the bottom line.  In order to get REALLY robust predicates, it
boils down to the ability to do a robust generate-and-test:

    interesting_object(X) :-
        object(X),
        interesting(X).

This should iterate over all interesting objects, where the interesting/1
predicate is arbitrarily complex.  Of course, if there are no interesting
objects, then it will not terminate.  But that's life, and I can live with
that.

Let's apply this criterion to our example.  We must rewrite the domain
predicate sequence/1 to be a robust generator.  To do this, we must adopt a
purely "constructive" representation for sequences:

    sequence([]).
    sequence([C|Cs]) :-
        sequence(Cs),
        construct(C, Cs).

    construct([], _).
    construct(s(C), [C|Cs]) :-
        construct(C, Cs).

Now it's a little tricky to interpret these lists of constructors as
sequences of naturals, so I've provided a function interpret(Cs,Ns) which
maps a Cs-style sequence into the more familiar Ns-style sequence:

    interpret(Cs, Ns) :-
        sequence(Cs),
        gather(Cs, Ns).

    gather([], []).
    gather([C|Cs], [C|Ns]) :-
        toss(C, Cs, Cs0),
        gather(Cs0, Ns).

    toss([], L, L).
    toss(s(N), [_|L], L0) :-
        toss(N, L, L0).

If you invoke the query:

    ?- interpret(_, Ns), write(Ns), nl, fail.

then Prolog will begin enumerating all possible sequences of naturals, and
you'll see them in a familiar notation.

By the way, don't try to run the interpret/2 function backwards.  You'll get
the unique solution, but on backtracking it'll go into the weeds.  Oh well,
that's yet another nontermination quirk that I'll have to squash (if
possible).

Anyway, I've done these "constructive" representations for many kinds of
objects:

    -    natural numbers (trivial)

    -    rational numbers

    -    sequences of natural numbers

    -    partitioned sets

            This will iterate over all ways of partitioning N objects into N
            or fewer non-empty classes, for all N >= 0.

    -    directed acyclic graphs

            This will iterate over all possible DAGs, with any number of
            vertices and any number of edges between any two vertices.  I
            suspect that allowing cycles should be a fairly simple
            extension.

    -    entire block design hierarchies with connectivity

            A "design hierarchy" is used in computer-aided design.


As I said, this was a major pain in the ass.  But O'Keefe is right:  by
trying to make robust relations instead of just functions, I achieved some
strong insights.

The problem is:  I'm not sure that my data structures are practical.  In the
worst case, they're all O(N^2) in both time and space.  I could get rid of
the s/1 notation for numbers and save some space, but that's about all I can
do in Prolog.  However, it's interesting that all of these structures could
be implemented with arrays and integer indices in a language like C, and
they'd be really fast!  (But I'm not in a great hurry to code in C.)

>> In Prolog, there is a wide divergence between theory and practice

> The width of the difference is a good measure of the ability of the
> programmer.  Wide gap = bad programmer or bad implementation.

The problem is that I held onto the theory like a man possessed, and I might
have ended up with something totally impractical.  Perhaps I'm a bad
programmer.  In order to make things more efficient, I'll have to start
using more ad-hoc data structures.  For example, I could represent "sets" as
trees and get O(logN) access time.  But I'd have to give up my so-called
"constructive" ideal:  I doubt that it's possible to write omni-directional
predicates for manipulating trees, since they're doubly-recursive.

>> That allows me to do things
>> like manipulate graphs as full-fledged objects, rather than as data
>> scattered around in extensional relations.  I can do that in pure Prolog,
>> too, but it ain't easy.

> Damn right it's easy.  I've done it.

Yeah, but I'll bet your graphs weren't "constructive".  I'll bet you used
atoms as node identifiers, didn't you?  (:-)

> The theory is not "pure Horn clauses" but "first-order logic".

That's where we differ.  Notice that all my code consists of pure Horn
clauses.  I understand that I have to give up that ideal in order to do I/O
and machine arithmetic, but I stick to the ideal as long as possible.
Besides, what is "first-order logic" other than Horn clauses?  (That's not a
rhetorical question.)

> In NU Prolog, there are logical versions of not/1 (not/1 *is* the logical
> version, as in DEC-10 Prolog and Quintus Prolog; \+ /1 is the unsound one)
> and all-solutions predicates, and thanks to indexing, a competent programmer
> doesn't need many cuts at all.  And while NU Prolog is not the world's
> fastest Prolog system, it's not bad.

I'd like to see NU Prolog working on this database:

    person(pat).
    person(diana).

with this query:

    ?- not person(X).

I'm guessing that either X remains an unbound variable which is constrained
to not be a person, or the query freezes on X.  What DOES happen?

As far as "cuts" go, you certainly don't need them in functional languages:
they are replaced with "commitment".  Of course, you still have order
dependencies, where a clause is chosen only if its guard is true and the
disjunction of the preceding guards is false.  But "commitment" is
fundamentally syntactic sugar, whereas "cuts" are not.  You could replace
"cuts" with negated conjunctions, but how do you define negation?  In terms
of cuts, most likely.
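
Indeed, the usual definition is:

    % Negation as failure, defined with cut (the standard textbook
    % version; unsound when Goal is not ground at the time of the call):
    not(Goal) :- call(Goal), !, fail.
    not(_).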

>> Furthermore, functional programming languages are strongly typed and
>> higher-order, which cannot be said for Prolog.

> Functional languages do not have to be strongly typed.
> Prolog *can* be strongly typed.  My formal specification of Prolog is
> a strongly typed Prolog program.  (NU Prolog these days has two type
> checkers that I know of.  Lee Naish's one is really amazing.)

> Prolog-as-we-know-it is not higher order, but that doesn't stop me using
> maplist and such things in Prolog, and there is Lambda Prolog which is a
> higher-order logic-based language.

As I read about functional programming languages such as Miranda or ML, I am
impressed with their purity, coherence, simple model of execution, and
support for large-scale and higher-order programming.  Every program
corresponds to a lambda expression, and execution is essentially
beta-reduction.  Modules, concrete types, abstract types, and functional
data are a coherent part of the language.

Now pure Prolog is a slick little language, too.  But then you add cuts,
negation, and findall.  And then you add type checkers and mode
declarations.  And then you invent extended languages.  You stick all these
"bags" on the side of Prolog, and the result just doesn't cohere as well as
I'd like.

Let's say I want to write a Prolog program which maintains a dynamic
database of facts.  I can store the facts in the internal database and use
assert and retract.  But that's not clean.  So I decide to represent the
database as a data structure and filter a transaction stream through it.  I
write "relations" which edit the data structure, but they're not reversible,
and in general it is IMPOSSIBLE to make them so.  (Note well:  impossible.)
Certain query modes will be meaningless and probably nonterminating.
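
(Here is the shape of what I mean, with hypothetical helper predicates
that thread the database term through the stream:)

    % process(Transactions, Db): fold a stream of transactions over a
    % database represented as an ordinary term.  db_add/3, db_del/3,
    % and db_query/2 are hypothetical editing and lookup predicates.
    process([], _).
    process([add(F)|Ts], Db0)  :- db_add(F, Db0, Db), process(Ts, Db).
    process([del(F)|Ts], Db0)  :- db_del(F, Db0, Db), process(Ts, Db).
    process([query(Q)|Ts], Db) :- db_query(Q, Db), process(Ts, Db).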

Since in general I cannot implement completely robust relations, then give
me a language that allows me to document functional dependencies and types,
and make them meaningful to the compiler.  That would be more "honest" than
a language which LOOKS relational but really isn't, and is fraught with
dangers if you think that it is.

> One thing which Prolog programmers ought to copy from functional programming
> is the determination to stick with the theory as long as possible:  it
> really is the case that clean Prolog code tends to be faster than hacky
> stuff.

I think that I might have stuck with the theory longer than possible (:-).

In my original posting, I mentioned that some functional languages have
destructive assignment.  In response:

>markv@phoenix.Princeton.EDU (Mark T Vandewettering) writes:
> AIIIGH!  The horror!  ASSIGNMENT?
>
> Destructive assignment is an atrocity, and totally against the spirit of
> functional programming, neatly messing up lots of nice properties.  I am
> for a language without the crutch of assignment.

I agree.  I am for a pure language without crutches.  What I failed to
emphasize was that with one exception, EVERY example in "Elements of
Functional Programming" was written in the PURE language.  Destructive
assignment was used in only one place, simply to demonstrate how lazy
evaluation could be implemented in an eager language.  In contrast, MOST
Prolog programs use "impure" constructs and are not purely "relational".  So
are we in the position that virtually EVERY functional program is purely
functional, but virtually NO Prolog programs are purely relational?

Although I may sound extremely hostile to Prolog, keep in mind that I have
learned things using Prolog that I probably wouldn't have learned using
Miranda.  And I'm not "sold" on functional programming:  it seems less
general than logic programming.  It is certainly the case that the class of
mathematical relations is a proper superset of the class of mathematical
functions.  But pure relations are so difficult to implement robustly, and
pure functions are so easy, that I wonder what is going on here.  Perhaps
relations are more general, but functions are more computationally "honest".

Or what?


Patrick Chkoreff   719-590-5983
{hpfcla,hplabs}!hpldola!patch

djk@cs.mu.OZ.AU (David Keegel) (11/23/89)

I'll just respond to a couple of points in this (LONG) article.

In article <11500019@hpldola.HP.COM> patch@hpldola.HP.COM (Pat Chkoreff) writes:

] the danger of NON-TERMINATION is constantly rearing its ugly head.

] I can either document these dangers in comments, mode declarations, or some
] other "bag" on the side of the language, or I can rewrite my predicate to be
] robust.  I, a self-styled purist, usually opt for the latter.

Or you can use a co-routining system like NU-Prolog which can often
delay evaluation of a predicate which would cause infinite loops.

]     object([]).
]     object([C|X]) :-
]	   object(X),
]	   construct(C, X).

I don't understand why you don't have "construct(C, X), object(X)" as
the body of the second clause, so you can use tail recursion. Never mind.

] By the way, I'll bet that a lot of predicates which SEEM to have infinitely
] many solutions in some directions can be recast so that they don't.  The
] problem is:  it's a major pain in the ass.

Your example doesn't show infinite _solutions_, but infinite
_computation_. This is a horse of a different colour. (Except we all
know that there is only one colour of horses :-)

] Here's an example.  Let's write a predicate which defines "sequences of
] natural numbers" in a straightforward way.
]
]     sequence([]).
]     sequence([N|Ns]) :-
]         natural(N),
]         sequence(Ns).
] 
]     natural([]).
]     natural(s(N)) :-
]         natural(N).
] 
]     card([], []).
]     card([_|Ns], s(N)) :-
]         card(Ns, N).
] 
] Now we look for sequences of cardinality 2 (s(s([]))):

(I thought s(s(....(s(0))...)) was the standard notation, but never mind.)

]     sequence(Ns), card(Ns, s(s([]))).

] We get one solution Ns=[[],[]], and on backtracking we fly off into the
] weeds.

This problem is easy to solve in NU-Prolog. By writing
	?- sequence(List) when List.
NU-Prolog will delay when List is a variable. This means your call to
sequence/1 will initially delay (intuitively: ``call me when you know
what the sequence is''), and card/2 will start computing. Now instead of
testing Ns, card/2 is _generating_ Ns.

As it instantiates Ns, sequence/1 can wake up and test that each
element is a natural number (in fact it generates the numbers). Since
card instantiates Ns to [_1, _2], sequence can't "fly off into the
weeds". Natural/1 still generates infinite solutions, but this is the
intended behaviour -- you asked for all sequences of two natural
numbers. (_This_ is what Richard was talking about, I presume.)


] In order to make things more efficient, I'll have to start
] using more ad-hoc data structures.  For example, I could represent "sets" as
] trees and get O(logN) access time.  But I'd have to give up my so-called
] "constructive" ideal:  I doubt that it's possible to write omni-directional
] predicates for manipulating trees, since they're doubly-recursive.

I don't understand. What is the problem with double recursion? Why do
you call trees "ad-hoc"?

] Besides, what is "first-order logic" other than Horn clauses?  (That's not a
] rhetorical question.)

First-order logic includes negation (you have heard of that, haven't
you? :-), as well as implication and quantifiers.

] I'd like to see NU Prolog working on this database:

]     person(pat).
]     person(diana).

] with this query:

]     ?- not person(X).

] I'm guessing that either X remains an unbound variable which is constrained
] to not be a person, or the query freezes on X.  What DOES happen?

Your query fails immediately, because X only occurs once. A query

	?- not person(X), X = _.

will delay (freeze) the not, and "flounder" because nothing instantiates X.
(Uniquely occurring variables have existential quantification where
they occur; in this case _inside_ the not. Or something like that.)

--
                        David Keegel    (djk@cs.mu.OZ.AU)

   "Flattery will get you nowhere, unless someone else does it to you"

lee@munnari.oz.au (Lee Naish) (11/23/89)

In article <DJK.89Nov23184835@munginya.cs.mu.OZ.AU> djk@cs.mu.Oz.AU writes:
>...
Astute readers may have noticed some overlap with my posting - sorry if
it bored you, but I couldn't be bothered cancelling my article and
rewording it to avoid overlap.  Really clever readers would have noticed
that David and I contradicted each other:
>
>]     ?- not person(X).
>
>Your query fails immediately, because X only occurs once.

The reason for the disagreement is a useful but unfortunately confusing
feature of NU-Prolog, namely implicit quantification.  NU-Prolog allows
you to write explicit quantifiers as follows:

?- all X not p(X).

This query fails, since there is an X such that p(X).  It turns out that
some code ends up looking like this:

	all [X1,X2,X3,X4] not p(RelevantVar, X1, X2, X3, X4)

which looks much nicer if you allow implicit quantifiers:

	not p(RelevantVar, _, _, _, _)

Variables can be implicitly quantified if they occur only once in a
clause (and there is a style checker which warns you if such variables
are not prefixed with an underscore).  Last time I had anything to do
with the implementation, this implicit quantification was not done with
queries at the top level of the system.  This has changed (I guess I
should have known), so X in the original query is treated as universally
quantified outside the negation (or existentially inside).

	lee

jha@lfcs.ed.ac.uk (Jamie Andrews) (11/23/89)

Is it my imagination, or did Pat Chkoreff thrash through a
remarkably similar discussion on fun. vs. logic a couple of
years ago, on comp.ai or something?

In article <11500019@hpldola.HP.COM> patch@hpldola.HP.COM (Pat Chkoreff) writes:
>....  But here's what ticks me off about Prolog:
>the danger of NON-TERMINATION is constantly rearing its ugly head.

     You mean it isn't in functional languages?

>  I write
>a predicate, and then I immediately see how it is riddled with dangers when
>I call it in certain ways.  I'm not even talking about dangers due to the
>lack of the occurs check:  there's not a thing I can do about those.

     As far as I can tell, what you're saying is that the Prolog
interpreter is difficult to understand, ie. that its operational
semantics is more complicated than that of functional languages.
I think we knew that already.  We also know that its declarative
semantics makes it a much more appropriate language for *certain*
problems than any functional language.

>I can either document these dangers in comments, mode declarations, or some
>other "bag" on the side of the language,...

     Comments are a "bag" on the side of a language?

>....  I suspect that these representations have a
>lot in common with what is known as "constructive mathematics".

     Bing!  Five points for Pat.  The crowd goes wild.

>But how do you determine the modes in which it works, how do you document
>them in the language, and what does it mean to the compiler?  What would you
>end up with?  A sort of hybrid functional/logic language?  GREAT!  I'd fill
>out a purchase request for that.

     Double great!  Send it to:

     Complete Logic Systems, Inc.
     741 Blueridge Ave.
     North Vancouver, BC
     Canada   V7R 2J5

     Ask for a copy of Trilogy.

>By the way, I'll bet that a lot of predicates which SEEM to have infinitely
>many solutions in some directions can be recast so that they don't.  The
>problem is:  it's a major pain in the ass.
>Here's an example....

     Hmm.  So when a logic programmer encounters difficulties
like this, he or she writes papers in conferences about them and
how to solve them.  When a functional programmer encounters
difficulties like this, he or she throws up his or her hands and
goes back to Interlisp.  (No problems there!)  I guess that's
the real difference between funs. and logic?

>Besides, what is "first-order logic" other than Horn clauses?  (That's not a
>rhetorical question.)

     Yet another example of the shocking absence of logic from
our undergrad curricula.

>...  But "commitment" is
>fundamentally syntactic sugar, whereas "cuts" are not.  You could replace
>"cuts" with negated conjunctions, but how do you define negation?  In terms
>of cuts, most likely.

     Wow!  Cuts and negation cause semantic problems!  Another
five points for Pat.

>Now pure Prolog is a slick little language, too.  But then you add cuts,
>negation, and findall.  And then you add type checkers and mode
>declarations.  And then you invent extended languages.  You stick all these
>"bags" on the side of Prolog, and the result just doesn't cohere as well as
>I'd like.

     Boy, sounds a lot like Interlisp, eh?

--

     Well, enough of this; I've obviously been reading too much
alt.flame for my own good!  The serious point I want to make is
that logic programming languages often have serious problems,
many of which Pat has pointed out.  But we shouldn't judge the
logic programming paradigm by the performance of given systems,
or assume that LP researchers are blind to the problems and
don't care about solving them.  Trilogy is only one of a number
of examples of logic programming languages which try to address
such problems as irreversibility of predicates and the need for
functional notation.  (Standard disclaimer:  Trilogy was
developed by my M.Sc. advisor, Paul Voda, and I wrote the user
manual, so I'm a little biased.)

     Functional languages have their problems too.  I used the
example of Interlisp above, which was about the most extreme
example I could think of -- an ad hoc assembler-cum-operating-
system masquerading as a functional programming language.  That
was a good many years ago now, and languages like Scheme and ML
have come along since then.  The logic programming paradigm is
more complex, and has been around for a shorter time, so I don't
think we can expect beautiful things like ML to come out of it
for a while yet.

     But ML has problems too.  The type system is just too rigid
for some people's taste.  The absence of EVAL (due to its
non-typability) means that certain programming techniques are
out.  Whatever your distaste for destructive assignment, every
programmer writing serious large applications will tell you that
it's often an absolute necessity; ML has it but doesn't give
it a dynamic semantics.  (Trilogy, on the other hand, has it and
accounts for it fully.)  And, more to the point of this
discussion, you can't program search for solutions, and stay as
true to the logical structure of the problem, as well as you can
in logic programming languages.

     So the summary of the summary is that all languages have
problems; if there was one that didn't, everyone would use that
one.  You can reel off a litany of problems with a language or a
class of languages, but the response should be ``let's integrate
things and try to make everything work better'' rather than
``my paradigm is better than yours.''

     My apologies if any of the flaming above caused serious
offense.  :-)

--Jamie.
  jha@lfcs.ed.ac.uk
"The sun and the moon, the wind and the rain"

patch@hpldola.HP.COM (Pat Chkoreff) (12/01/89)

|| / hpldola:comp.lang.prolog / djk@cs.mu.OZ.AU (David Keegel) / 12:48 am
|| Nov 23, 1989 /

| In article <11500019@hpldola.HP.COM> patch@hpldola.HP.COM (Pat Chkoreff)
| writes:

|| the danger of NON-TERMINATION is constantly rearing its ugly head.

|| I can either document these dangers in comments, mode declarations, or some
|| other "bag" on the side of the language, or I can rewrite my predicate to be
|| robust.  I, a self-styled purist, usually opt for the latter.

| Or you can use a co-routining system like NU-Prolog which can often
| delay evaluation of a predicate which would cause infinite loops.

I'm glad that you (and others) brought up NU-Prolog.  I am now more aware
that the *operational* semantics of Prolog should be kept distinct from its
*model-theoretic* semantics.  My goal of being able to generate all objects
in a domain upon backtracking is of no real value.  It leads to data
structures which are artifacts of the operational semantics of the
particular dialect of Prolog I'm using.  For example, my structure for
"sequences of natural numbers" was highly unnatural, but allowed all
sequences to be generated using standard, "eager" Prolog.  (Of course, I can
still do this with "standard" sequences if I first constrain the arithmetic
sum of all the elements in the sequence.)

This means that I can relax a little when I'm using Prolog.  Let's say I'm
writing a "file editor", and I have a function which applies an editing
command to a file:

    apply_command(Command, File0, File1).

    % File1 is the file resulting from the application of Command to the
    % file File0.

I don't have to worry too much about what happens when it's called with no
arguments bound.  I certainly don't need to enumerate all possible solutions
upon backtracking!  If Command and File1 are bound, but File0 is not, I
don't need to enumerate all possible solutions for File0.

Many relations implement functions which are not invertible in principle.
My reasoning about the termination properties of such relations is
intimately related to the *operational* semantics of the Prolog I'm using.
For example, in "eager" Prolog, I have no *general* way of preventing the
interpreter from "flying into the weeds" when File0 is unbound.  The Command
could implement an arbitrary function which cannot be inverted.

With NU-Prolog, I can control this behavior:  I can simply "block" on
Command and File0, but not on File1.  This gives me the ability to implement
robust "functions" when I have to, while retaining the power of logic
programming to implement more general "relations" when I can.
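
(Presumably with a declaration along these lines; I am guessing at the
exact syntax:)

    ?- apply_command(Command, File0, _File1) when Command and File0.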

I still maintain that the danger of non-termination is higher in "eager"
Prolog than in any functional language, either eager or lazy.  In
particular, eager Prolog is more dangerous than eager ML.  In eager ML, you
know that arguments to functions will be ground terms, and you can reason
about termination accordingly.  In eager Prolog, you have no such knowledge,
and there are many nonterminating call modes that you cannot prevent.

Lazy Prolog is a "horse of a different color":  nontermination can often be
replaced by suspension.

Since I'm using eager Prolog, I'll just have to be careful, and document
things in comments.

||     object([]).
||     object([C|X]) :-
||         object(X),
||         construct(C, X).

| I don't understand why you don't have "construct(C, X), object(X)" as
| the body of the second clause, so you can use tail recursion. Never mind.

Because construct(C,X) is guaranteed to terminate a finite number of times
with C ground, but only if X is a ground term representing an object, and
the only way to guarantee that is to call object(X).  This is an artifact of
my desire to generate all possible objects upon backtracking, and my desire
to represent objects as ground terms.

Regarding my representation for natural numbers:

| (I thought s(s(....(s(0))...)) was the standard notation, but never mind.)

I figured that [] was a perfectly good niladic functor, so I might as well
use it to represent the natural number 0.  But that's kind of obscure, I'll
admit.  The fact that I'm using "0" right now to represent 0 is a good
argument against me!

|| In order to make things more efficient, I'll have to start
|| using more ad-hoc data structures.  For example, I could represent "sets" as
|| trees and get O(logN) access time.  But I'd have to give up my so-called
|| "constructive" ideal:  I doubt that it's possible to write omni-directional
|| predicates for manipulating trees, since they're doubly-recursive.

| I don't understand. What is the problem with double recursion? Why do
| you call trees "ad-hoc"?

Using trees obscures the specification.  I like to represent "sets of
objects" as lists, and just use the natural numbers 0, s(0), s(s(0)), etc.
to identify the objects.  It keeps things as simple as possible, but no
simpler.  Some of my algorithms are O(N^2) in the worst case, but the same
would go for trees unless I kept them balanced.  (Besides, I could translate
these specs into C using arrays and integer indices and get that blazing
speed I was talking about.)

Of course, I CAN write robust doubly-recursive predicates -- sort of.  For
example, the flatten/2 predicate below will work in all call modes, even
though it's a functional relation from T onto Xs.  I have to pass an extra
reference to Xs into an auxiliary predicate flatten/5 to give it something
to "chew on" before the left-recursive call.

    %----------------------------------------------------------------
    % flatten(T, Xs).
    %
    % T is a tree which is flattened to form the list Xs.
    %----------------------------------------------------------------
    flatten(T, Xs) :-
        flatten(T, Xs, [], Xs, []).

    flatten([], Xs, Xs, Ys, Ys).
    flatten(t(X,Ta,Tb), Xs, Xs0, [_|Ys], Ys0) :-
        flatten(Ta, Xs, [X|Xs1], Ys, Ys1),
        flatten(Tb, Xs1, Xs0, Ys1, Ys0).

However, the auxiliary predicate flatten/5 is "dangerous" in eager Prolog.
The query below succeeds with {T=[],Ys=[]}, and then fails to terminate upon
backtracking.

    flatten(T,[],[],Ys,[]).

I suppose I could document this in NU-Prolog with something like:

    ?- flatten(_,_,_,Ys,_) when Ys.

|| Besides, what is "first-order logic" other than Horn clauses?  (That's not
|| a rhetorical question.)

| First-order logic includes negation (you have heard of that, haven't
| you? :-), as well as implication and quantifiers.

I wanted to find out what O'Keefe was calling "first-order logic".  I
claimed that the "theory" behind Prolog was "pure Horn clauses"; he claimed
that it was "first-order logic", which includes negation.  In the literature
that I've seen, the semantics of Horn clause programs are defined model-
theoretically, but the semantics of negation are defined operationally (cf.
Chapter 5 of "The Art of Prolog").

A perverse reader might have surmised that my question was "yet another
example of the shocking absence of logic from our undergrad curricula".
You, in contrast, used a smiley face (:-).

Negation worries me.  A language feature should either be essential for
computation, or defined in terms of something that is.  Negation is NOT
essential for computation, so I HOPE that it can be defined in terms of pure
Horn clauses, which are.  Is there an effective procedure for removing
negation from a first-order logic program, resulting in a pure Horn clause
program?  I wouldn't necessarily USE the procedure -- I just want to know if
it exists.  If not, then something stinks.

This is one thing that I like about functional languages like ML and
Miranda:  everything in the language is just syntactic sugar for pure
combinatorial lambda expressions (K's and S's).
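
(For reference, the two combinators, written as rewrite rules:)

    S x y z  =  x z (y z)
    K x y    =  x

    (so the identity is I = S K K, since S K K x = K x (K x) = x)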

I'll continue to use the eager Prolog that's available to me.  I need to
find out about NU-Prolog, even if I don't use it.  At least it might give me
a "meta-language" for documenting (and thinking about) the termination
properties of my predicates.


Patrick Chkoreff   719-590-5983
{hpfcla,hplabs}!hpldola!patch


P.S.

/jha@lfcs.ed.ac.uk Jamie Andrews at Laboratory for the Foundations of
/Computer Science writes:
| My apologies if any of the flaming above caused serious
| offense.  :-)

None taken, but who pissed in your Corn Flakes?

cleary@cpsc.ucalgary.ca (John Cleary) (12/12/89)

In article <11500020@hpldola.HP.COM>, patch@hpldola.HP.COM (Pat Chkoreff) writes:
> 
> Negation worries me.  A language feature should either be essential for
> computation, or defined in terms of something that is.  Negation is NOT
> essential for computation, so I HOPE that it can be defined in terms of pure
> Horn clauses, which are.  Is there an effective procedure for removing
> negation from a first-order logic program, resulting in a pure Horn clause
> program?  I wouldn't necessarily USE the procedure -- I just want to know if
> it exists.  If not, then something stinks.

There is a simple translation that removes negation, given a correct
implementation of not-equals.  Given a predicate, say p(X1,...), which
is called somewhere in the form not(p(Y1,...)), the call can be replaced
by a positive call np(Y1,...), where the definition of np can be derived
systematically from that for p.
If p is defined as follows:

	p(X1,...) :- a11(...), ..... a1n1(....)
        p(X1,...) :- a21(...), ......a2n2(....)
	...
	p(X1,...) :- am1(...), ..... amnm(....)

then np is defined as

	np(X1,...):- not(a11(...)), not(a21(...)), ... not(am1(...))
	np(X1,...):- not(a11(...)), not(a21(...)), ... not(am2(...))
        ....
	np(X1,...):- not(a1n1(...)), not(a2n2(...)), ... not(amnm(...))

where each of the not(aij(...)) can be further translated or, if of the
form not(not(q(...))), replaced by q(...).
Warning: there can be a lot of these clauses -- n1*n2*...*nm, to be precise.
Also, I have assumed the heads of the clauses contain only distinct free
variables, so the aij(...) may include equality (=).  To avoid a circular
definition of not(_=_), this needs to be done explicitly.  Given a finite
Herbrand universe there are only a finite (but possibly large) number of
pairs of things that are not equal, so this too can be written down (it is,
however, different for each program).
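
For example, on the toy program (an illustration of mine)

	p(X) :- a(X), b(X).
	p(X) :- c(X).

the scheme yields n1*n2 = 2*1 = 2 clauses, one for each way of picking one
literal from each clause of p:

	np(X) :- not(a(X)), not(c(X)).
	np(X) :- not(b(X)), not(c(X)).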

I am not entirely sure of the theoretical status of this translation.
It assumes that the completion of the program is what you really meant,
and it gives the usual semantics for stratified programs, as near as I can
tell.  For non-stratified programs, for example:

	p :- not(p).

the translated program is:

	p :- not_p.
	not_p :- p.

which would conclude that both p and not_p are false.  So there is no
guarantee that the result will be consistent in these cases.
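
(Indeed, the completion of the original clause is the equivalence
p <-> not(p), which is outright inconsistent.)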

The standard minimal meta-interpreter for Prolog can be extended to give
this type of behavior as follows.  Assume that the program is represented
by the clauses(...) predicate, which represents all the clauses of a
particular program as a single list.  The positive interpretation is given
by call(...) and the negative by neg(...).

	% Positive interpretation.
	call(true).
	call((A,B)) :- call(A), call(B).
	call(not(A)) :- neg(A).
	call(X=X).
	call(A) :- clauses(List), scan(A, List).

	% Try each clause whose head unifies with the goal;
	% scan(A, []) has no clause, so it correctly fails.
	scan(A, [(A:-B)|_]) :- call(B).
	scan(A, [_|Tail]) :- scan(A, Tail).

	% Negative interpretation: a conjunction fails if either conjunct
	% does; a user-defined goal fails if every clause for it fails.
	neg((A,B)) :- neg(A).
	neg((A,B)) :- neg(B).
	neg(X=Y) :- not_equals(X,Y).
	neg(not(A)) :- call(A).
	neg(A) :- clauses(List), neg_scan(A, List).

	neg_scan(_, []).	% every clause has failed
	neg_scan(A, [(H:-_)|Tail]) :- not_equals(A,H), neg_scan(A, Tail).
	neg_scan(A, [(A:-B)|Tail]) :- neg(B), neg_scan(A, Tail).

	% not_equals/2 needs to be defined, depending on the program.

As you say, you might not want to use this, although, depending on the
circumstances, programs translated or interpreted this way can be
efficient, and these techniques contain the essence of a number of the
available approaches to negation.
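
For example, with this one-clause object program (a made-up example, with
not_equals spelled out by hand for the terms involved):

	clauses([ (p(a) :- true) ]).

	not_equals(a, b).
	not_equals(b, a).
	not_equals(p(a), p(b)).
	not_equals(p(b), p(a)).

	% ?- call(p(a)).        succeeds through scan/2
	% ?- call(not(p(b))).   succeeds: p(b) is not_equals to the head
	%                       p(a), so neg_scan/2 exhausts the clause list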
> 
> I'll continue to use the eager Prolog that's available to me.  I need to
> find out about NU-Prolog, even if I don't use it.  At least it might give me
> a "meta-language" for documenting (and thinking about) the termination
> properties of my predicates.
> 
My personal experience is that NU-Prolog is a wonderful language to
program in.  In fact, the step up from Prolog to NU-Prolog was almost as
much fun as the step from imperative languages to Prolog.  You need many
fewer cuts and can adopt a much freer and more expressive programming
style.  Try it; you'll love it.
As for efficiency, it is not intrinsically much less efficient than
standard Prologs, and I am sure that if as much effort were put into its
implementation as into the other systems out there, it would run just as
fast (I have heard figures of a 10% to 15% slowdown relative to eager
Prologs, but this is surely application dependent).


	John G. Cleary
	Department of Computer Science
	University of Calgary
	2500 University Drive 
	N.W. Calgary
	Alberta T2N 1N4
	Canada

	cleary@cpsc.UCalgary.ca
	Phone: (403)282-5711 or (403)220-6087

bradley@cs.utexas.edu (Bradley L. Richards) (12/14/89)

In article <11500020@hpldola.HP.COM>, patch@hpldola.HP.COM (Pat Chkoreff) writes:
>> 
>> Negation worries me.  A language feature should either be essential for
>> computation, or defined in terms of something that is.  Negation is NOT
>> essential for computation, so I HOPE that it can be defined in terms of pure
>> Horn clauses, which are.  Is there an effective procedure for removing

I know you've already gotten a lengthy response on eliminating negation, but
I think one quick additional note is in order.  While it's true that 
negation is unnecessary when computing in infinite domains (since positive
programs possess the full power of recursive function theory), this is
*not* true when computing in finite domains.  If you are computing over
the domain, say, of citizens in Hoboken, then the ability to say

    stranger(Who) :- not(citizen(Who)).

in fact gives you a power you cannot duplicate without negation.
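
Concretely (a made-up fragment; the person/1 guard is my addition, to keep
the negated call ground under negation as failure):

	person(tom).   person(sue).   person(bob).
	citizen(tom).  citizen(sue).

	stranger(Who) :- person(Who), not(citizen(Who)).

	% ?- stranger(X).   yields only X = bob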

brad

torna@majestix.ida.liu.se (Torbjorn Naslund) (12/14/89)

In article <2221@cs-spool.calgary.UUCP> cleary@cpsc.ucalgary.ca (John
Cleary) writes:  

>There is a simple translation that removes negation, provided you have a
>correct implementation of not-equals.  Given a predicate, say p(X1,...),
>which is called somewhere in the form not(p(Y1,...)), the call can be
>replaced by a positive call np(Y1,...), where the definition of np can be
>derived systematically from that of p.
>If p is defined as follows:
>
>	p(X1,...) :- a11(...), ..., a1n1(...).
>	p(X1,...) :- a21(...), ..., a2n2(...).
>	...
>	p(X1,...) :- am1(...), ..., amnm(...).
>
>then np is defined as
>
>	np(X1,...) :- not(a11(...)), not(a21(...)), ..., not(am1(...)).
>	np(X1,...) :- not(a11(...)), not(a21(...)), ..., not(am2(...)).
>	...
>	np(X1,...) :- not(a1n1(...)), not(a2n2(...)), ..., not(amnm(...)).
>
>where each of the not(aij(...)) can be further translated or, if of the
>form not(not(q(...))), replaced by q(...).
>Warning: there can be a lot of these clauses -- n1*n2*...*nm, to be precise.
>Also, I have assumed the heads of the clauses contain only distinct free
>variables, so the aij(...) may include equality (=).  To avoid a circular
>definition of not(_=_), this needs to be done explicitly.  Given a finite
>Herbrand universe there are only a finite (but possibly large) number of
>pairs of things that are not equal, so this too can be written down (it is,
>however, different for each program).

If some of the clauses for p above have right-free variables, i.e.,
variables that occur in the body but not in the head, the above approach
does not work.  Consider for instance the following example (due to
Barbuti et al. [1]):

Let P be the program
	s(X,Y) :- p(X,Y,Z).
	p(X,Y,Z) :- q(X,Y,Z), r(X,Y,Z).
	q(X,Y,Z) :- X = b.
	q(X,Y,Z) :- Z = b.
	r(X,Y,Z) :- Z = a.
	r(X,Y,Z) :- Y = b.
	
With the above approach (as I understand it) you get the following
clauses, where the intended meaning of ns is "not s":
	ns(X,Y) :- np(X,Y,Z).
	np(X,Y,Z) :- nq(X,Y,Z).
	np(X,Y,Z) :- nr(X,Y,Z).
	nq(X,Y,Z) :- not_eq(X,b), not_eq(Z,b).
	nr(X,Y,Z) :- not_eq(Z,a), not_eq(Y,b).
	not_eq(a,b).
	not_eq(b,a).

Let P' be the program obtained by adding the above clauses to P.  Now,
both s(a,b) and ns(a,b) are logical consequences of P'!  The problem is
that the variable Z in the clause for s becomes universally quantified
when this clause is "negated", and this is not taken care of in the
above approach.
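
Spelling out the two derivations (my reconstruction):

	s(a,b):  choose Z = b; then q(a,b,b) holds by its second clause
	         and r(a,b,b) by its second clause (Y = b), so p(a,b,b).
	ns(a,b): choose Z = a; then nq(a,b,a) holds, since the fact
	         not_eq(a,b) satisfies both of its conditions, so
	         np(a,b,a) and hence ns(a,b).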

[1] Barbuti, Mancarella, Pedreschi and Turini: A Transformational
Approach to Negation in Logic Programming. To appear in Journal of Logic
Programming 


		Torbjorn Naslund
		Dept. of Computer and Information Science
		Linkoping University
		S-581 83 Linkoping
		Sweden

puget@ilog.UUCP (Francois Puget) (12/14/89)

In a previous article, cleary@cpsc.ucalgary.ca (John Cleary) writes:

>There is a simple translation that removes negation, provided you have a
>correct implementation of not-equals.  Given a predicate, say p(X1,...),
>which is called somewhere in the form not(p(Y1,...)), the call can be
>replaced by a positive call np(Y1,...), where the definition of np can be
>derived systematically from that of p.
>If p is defined as follows:
>
>	p(X1,...) :- a11(...), ..., a1n1(...).
>	p(X1,...) :- a21(...), ..., a2n2(...).
>	...
>	p(X1,...) :- am1(...), ..., amnm(...).
>
>then np is defined as
>
>	np(X1,...) :- not(a11(...)), not(a21(...)), ..., not(am1(...)).
>	np(X1,...) :- not(a11(...)), not(a21(...)), ..., not(am2(...)).
>	...
>	np(X1,...) :- not(a1n1(...)), not(a2n2(...)), ..., not(amnm(...)).
>
>where each of the not(aij(...)) can be further translated or, if of the
>form not(not(q(...))), replaced by q(...).

This is unfortunately incorrect.  Let us take a simple example:

	member(X,Y) :- Y = [X|A].
	member(X,Y) :- Y = [B|C], member(X,C).

The proposed solution would define the negation of member as

	n_member(X,Y) :- not(Y = [X|A]), not(Y = [B|C]).
	n_member(X,Y) :- not(Y = [X|A]), not(member(X,C)).

Well, if anybody is convinced that this is a good definition of n_member,
I would like some explanations.

>
... deleted ...
>I am not entirely sure of the theoretical status of this translation.

You are right.

>It assumes that the completion of the program is what you really meant 

No.

The completion would give:

(1)	forall X,Y: not(member(X,Y)) <-
	       (forall A,A': not(X=A') or not(Y=[A'|A]))
	   and (forall A'',B,C: not(X=A'') or not(Y=[B|C]) or not(member(A'',C)))
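
For reference, (1) comes from negating both sides of the completed
definition of member (my reconstruction, in the same notation):

	forall X,Y: member(X,Y) <->
	       (exists A',A: X=A' and Y=[A'|A])
	    or (exists A'',B,C: X=A'' and Y=[B|C] and member(A'',C))

Negating both sides turns the existentials into the universals of (1).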
	
You see that the quantification of variables is very important here, since
some of them are universally quantified in the 'body' of the above formula.
(In fact the completion gives an equivalence instead of the implication
above.)  A good treatment of this subject can be found in (Shepherdson, 88).
Besides quantification, the proposed solution ignores shared variables.
	
>and it gives the usual semantics for stratified programs, as near as I can
>tell.  For non-stratified programs, for example:
>
>	p :- not(p).
>
>the translated program is:
>
>	p :- not_p.
>	not_p :- p.
>
>which would conclude that both p and not_p are false.  So there is no
>guarantee that the result will be consistent in these cases.

I can add that for definite clause programs, such as pure Prolog, the
completion is consistent (Shepherdson, 88).

>The standard minimal meta-interpreter for Prolog can be extended to give this
>type of behavior as follows.  Assume that the program is represented by the 

... deleted ...

The proposed program is not justified on logical grounds.  I have developed
a system that transforms the completion into a set of definite clauses
(Puget, 89).  In the member case it produces the following program for
n_member:

	n_member(X, []).
	n_member(X, [Y|Z]) :- not(X = Y), n_member(X, Z).
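
On ground queries this behaves as you would expect (with not/1 read as
negation as failure, so the arguments should be sufficiently instantiated):

	% ?- n_member(b, [a,c]).    succeeds
	% ?- n_member(a, [a,c]).    fails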

The principle is to use the equality theory defined in the completion to
simplify the formula (1) above.  This is done with a rewrite system
similar to the disunification algorithm introduced in (Comon, 88).
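
For instance, instantiating (1) at Y = [] makes both conjuncts hold
vacuously (nothing unifies [] with [A'|A] or [B|C]), giving the fact
n_member(X,[]); instantiating at Y = [H|T] reduces the first conjunct to
not(X=H) and the second to not(member(X,T)), giving the recursive clause.
(This is my reconstruction of the kind of simplification such a system
performs.)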

A similar approach is described in (Lugiez, 89) and in (Barbuti et al, 88).
These approaches are not perfect: not all the universal quantifiers can be
eliminated from the bodies of the formulas.  All of these approaches assume
that an implementation of not-equal is given; in other words, they use a
domain closure axiom.

I give here the references from memory.

(Comon, 88) Disunification: theorie et applications.  Thesis, Universite
de Grenoble, December 88.  (There are two papers in English; I will post
the references later.)

Maher and Lassez have studied this also.

(Lugiez, 89) A Deductive Procedure for First Order Formulas.  In
proceedings of ICLP 89.

(Puget, 89) Explication d'echecs en Prolog (in French).  In proceedings of
SPLT'89, March 89, Tregastel, France.

(Shepherdson, 88) Negation in Logic Programming.  In Foundations of
Deductive Databases and Logic Programming, Minker (ed.), 88.

The facts expressed above are my beliefs, and any errors are mine.

Jean-Francois Puget
email: puget@lri.lri.fr