[comp.lang.prolog] Constraint satisfaction in LOG

narain@randvax.UUCP (Sanjai Narain) (03/27/89)

SEND+MORE=MONEY is not a good problem for exhibiting one of the main
virtues of constraint languages: that coroutining between generation and
testing is performed *automatically*.  It does not have to be programmed
by the user.

Programming coroutining can greatly obscure the logic of the final program.
This is not so obvious with Prolog formulations of SMM.  For example,
Moss's first Prolog program contains coroutining, as it is verifying
addition from left to right while also generating the digits in the
summands.  Still, its logic is fairly close to that of its statement in
English.

Here is another problem where a solution in straight Prolog would not be
so clear.  Given a number K, and a set S of positive numbers, generate all
those subsets of S, the sum of whose members is equal to K.  Clearly, for
members U1,..,Uk of S, if their sum is greater than K, then generation of
all subsets of the form [U1,..,Uk|X] can be suppressed.  This can be
accomplished by coroutining between generation of subsets, and their
testing.
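(As an aside for readers outside the Prolog world: the same interleaving of
generation and testing can be sketched with lazy generators.  The Python
code below is my own illustration, not part of the LOG(F) system; the
function name and structure are modelled on the F* program further on.)

```python
def sum_eq(k, items, s=0):
    """Lazily yield subsets of `items` (positive numbers) summing to k.

    Generation and testing are interleaved: a partial subset is
    abandoned as soon as its running sum exceeds k, so no subset of
    the form [U1,..,Uk|X] with U1+..+Uk > k is ever generated.
    """
    if not items:
        if s == k:              # test at the leaf, as in sum_eq([],S,K)
            yield []
        return
    u, rest = items[0], items[1:]
    if s + u <= k:              # the pruning: never extend past k
        for tail in sum_eq(k, rest, s + u):
            yield [u] + tail
    yield from sum_eq(k, rest, s)   # subsets that skip u
```

With k = 10 and the twenty-element list used below, this yields the
solutions without ever enumerating all 2**20 subsets.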

I now show how the problem can be stated in F*, without programming
coroutining. The coroutining is performed automatically, by lazy
evaluation. Curiously enough, Prolog itself is utilized, via compilation,
to accomplish this:

	subset([])=>[].
	subset([U|V])=>[U|subset(V)].
	subset([U|V])=>subset(V).

	sum_eq([],S,K)=>if_2(equal(S,K),[]).
	sum_eq([U|V],S,K)=>if_2((U+S)=<K,[U|sum_eq(V,S+U,K)]).

	if_2(true,X)=>X.

	ans=>sum_eq(subset([1,11,2,12,3,13,4,14,5,15,6,16,
			    7,17,8,18,9,19,10,20]),0,10).

The first three rules compute subsets of a list. The next two
rules determine whether S plus the sum of the elements of a list
is equal to K. If so, they return the list itself; otherwise they
return nothing and simply fail. Now, the term sum_eq(subset(S),0,K)
has as value precisely those subsets of S whose members add up to K.

Their compilation into Prolog is listed below.  Try reduce(ans,Z).  The
ten or so solutions are found in a breeze.  The total number of subsets is
2**20 (about a million), so exhaustive search would take quite long.

Thus, in LOG(F), one can program arbitrary constraints, not just
in a fixed domain. I am curious to see the formulation of this
problem, and the above optimization, in CLP, or CHIP.

Sanjai Narain

=============================================================================
:-op(650,xfx,=>).

reduce(true,true).
reduce(false,false).
reduce([],[]).
reduce([A|B],[A|B]).
reduce(ans,A) :-
        ans=>B,
        reduce(B,A).
reduce(subset(A),B) :-
        reduce(A,C),
        subset(C)=>D,
        reduce(D,B).
reduce(if_2(A,B),C) :-
        reduce(A,D),
        if_2(D,B)=>E,
        reduce(E,C).
reduce(solution(A,B),C) :-
        solution(A,B)=>D,
        reduce(D,C).
reduce(sum_eq(A,B,C),D) :-
        reduce(A,E),
        sum_eq(E,B,C)=>F,
        reduce(F,D).

subset([])=>[].
subset([A|B])=>[A|subset(B)].
subset([A|B])=>subset(B).
sum_eq([],A,B)=>if_2(C,[]) :-
        equal(A,B,C).
sum_eq([A|B],C,D)=>if_2(E,[A|sum_eq(B,F,D)]) :-
        G is A+C,
        less_than_equal(G,D,E),
        F is C+A.
ans=>sum_eq(subset([1,11,2,12,3,13,4,14,5,15,6,16,7,17,8,18,9,19,10,20]),0,10).
solution(A,B)=>sum_eq(subset(A),0,B).
if_2(true,A)=>A.

equal(A,A,true):-!.
equal(A,B,false).

less_than_equal(A,B,true):-A=<B,!.
less_than_equal(A,B,false).

make_list(X,Y):-reduce(X,Z),make_list_1(Z,Y).

make_list_1([],[]).
make_list_1([U|V],[U|Z]):-make_list(V,Z).

lee@cs.mu.oz (Lee Naish) (04/04/89)

In article <1919@randvax.UUCP> narain@randvax.UUCP (Sanjai Narain) writes:
>	subset([])=>[].
>	subset([U|V])=>[U|subset(V)].
>	subset([U|V])=>subset(V).
>
>	sum_eq([],S,K)=>if_2(equal(S,K),[]).
>	sum_eq([U|V],S,K)=>if_2((U+S)=<K,[U|sum_eq(V,S+U,K)]).
>
>	if_2(true,X)=>X.
>
>	ans=>sum_eq(subset([1,11,2,12,3,13,4,14,5,15,6,16,
>			    7,17,8,18,9,19,10,20]),0,10).
>
>Their compilation into Prolog is listed below.  Try reduce(ans,Z).
Doesn't work - try make_list(ans, Z) (defined in the Prolog version)
instead!

Here is the equivalent NU-Prolog version.  Note that sum_eq is simpler
in Prolog because it is a simple test and doesn't have to return a
value.

subset([], []).
subset([A|B], [A|C]) :-
	subset(B, C).
subset([A|B], C) :-
	subset(B, C).

?- sum_eq(A, B, C) when A.
sum_eq([], A, A).
sum_eq([A|B], C, D) :-
	E is C + A,
	E =< D,
	sum_eq(B, E, D).

ans(A) :-
	sum_eq(A, 0, 10),
	subset([1,11,2,12,3,13,4,14,5,15,6,16,7,17,8,18,9,19,10,20],A).

The code was run through a program called nac to add control information
automatically.  Nac is able to determine that sum_eq should only be
called when its first argument is instantiated, and that it is a test.
Nac therefore reorders the calls in ans so sum_eq is before subset and
the two calls will act as coroutines.  Under NU-Prolog, this code runs
about 4-5 times as fast as the transformed LOG(F) code.  The main
reasons are (probably) that interpreted functions are not passed
around, and that clause indexing is better: the '=>' predicate in the
LOG(F) version really needs an extra level of indexing to make it as
efficient.  Because the control is so simple, it should be easy to get
this running under Sicstus, which would make it even faster.

	Lee Naish

bimbart@kulcs.uucp (Bart Demoen) (04/07/89)

As a reaction to <1919@randvax.UUCP> narain@randvax.UUCP (Sanjai Narain) and
<1333@murtoa.cs.mu.oz.au> lee@munmurra.UUCP (Lee Naish), here is a Prolog
program:

ans(_l) :-
	sub(10,[1,11,2,12,3,13,4,14,5,15,6,16,7,17,8,18,9,19,10,20],_l).

sub(0,_,[]) :- ! .
sub(_n,[_x|_l],[_x|_o]) :- _nn is _n - _x , _nn >= 0 , sub(_nn,_l,_o) .
sub(_n,[_|_l],_o) :- sub(_n,_l,_o) .

the `breeze' in which it finds the solutions is about 7 times shorter ...

although I am very much convinced of the usefulness of built-in coroutining,
I think it is, as in this example, very often overkill

bimbart@kulcs

narain@randvax.UUCP (Sanjai Narain) (04/09/89)

Lee, thanks for pointing out that make_list(ans,Z) should be used instead
of reduce(ans,Z).  My point was simply that coroutining can be
accomplished efficiently, invisibly, and in *straight* Prolog.  The
underlying theoretical basis is well worked out in the form of
completeness, computational determinism, confluence and optimality
results.

It is interesting that the NAC compiler can generate coroutining
information automatically.  I would be interested in knowing how general
it is.  Does the user ever have to specify coroutining annotations, or
does the compiler always do the right thing?  Can it handle arbitrary
symbolic generate and test predicates, or did it do a good job because it
knew about +?

As to Bart's message, I never said that coroutining could not be
programmed in Prolog. Of course it can be. My point was that its
programming can greatly obscure the logic of the program. This
does appear to be the case in Bart's program. Yes, it is a lot
faster, but then wouldn't it be even faster in C?

Regards.

Sanjai Narain

lee@munnari.oz (Lee Naish) (04/10/89)

In article <1942@randvax.UUCP> narain@randvax.UUCP (Sanjai Narain) writes:
>It is interesting that the NAC compiler can generate coroutining
>information automatically.  I would be interested in knowing how general
>it is.  Does the user ever have to specify coroutining annotations, or
>does the compiler always do the right thing?  Can it handle arbitrary
>symbolic generate and test predicates, or did it do a good job because it
>knew about +?

It works fairly well for programs which manipulate recursive data
structures (lists in this example).  It doesn't know anything about
numbers, which is a limitation.  See

%A Lee Naish
%T Automating control of logic programs
%J Journal of Logic Programming
%K jlp
%V 2
%N 3
%D October 1985
%P 167-183

It might be easier to automate this kind of coroutining in a functional
environment because more information is known about modes.  However, one
of the prices to pay is that tests must explicitly return true or false
(rather than just succeeding or failing).  Predicates seem nicer than
functions for expressing "generate and test" problems.

Although coroutining can simulate some uses of lazy evaluation (perhaps
more efficiently also), lazy evaluation seems more powerful in general.
This is something that functions have over predicates (in my opinion).
However, I think logic programming is a good implementation vehicle for
mixed "logic"/functional programming.  Quite a while back I wrote a
preprocessor which compiles NU-Prolog + functions defined by equations
down to NU-Prolog.  It has (optional) lazy evaluation and fits very
nicely with Parallel NU-Prolog, but that's another story...

	lee

narain@randvax.UUCP (Sanjai Narain) (04/11/89)

Thanks for the reference.

>> However, I think logic programming is a good implementation vehicle for
>> mixed "logic"/functional programming.

Precisely what LOG(F) accomplishes.

>> It might be easier to automate this kind of coroutining in a functional
>> environment because more information is known about modes.  However, one
>> of the prices to pay is that tests must explicitly return true or false
>> (rather than just succeeding or failing).  Predicates seem nicer than
>> functions for expressing "generate and test" problems.

In F*, the rewriting system underlying LOG(F), there is a notion of failure
even when evaluating function applications.  This is why generate-and-test
programs are as natural in it as in logic.

>> It might be easier to automate this kind of coroutining in a functional
>> environment because more information is known about modes.

There is another reason why functional programming is more elegant
than logic programming, which has to do with manipulation of
infinite structures. Suppose one wants to compute the first 3 elements of
an infinite list l. In logic one may say:

	compute_list_l(L),first(3,L,X).

By coroutining compute_list_l and first, one may solve first(3,L,X), but
one would never be able to solve compute_list_l fully.  Thus, no theorem
would be proved, so within the strict framework of logic programming
one would not be entitled to infer anything.

In functional programming however, one may evaluate the term:

	first(3,compute_list_l)

which would neatly reduce to its normal form, namely the first 3 elements
of l.
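(The contrast is easy to see in any language with lazy generators.  Here
is a Python sketch, my own illustration, taking l to be, say, an infinite
list of ones; the names compute_list_l and first are mine:)

```python
from itertools import islice

def compute_list_l():
    """The infinite list l -- here, for illustration, 1,1,1,..."""
    while True:
        yield 1

def first(n, xs):
    """The first n elements of a (possibly infinite) stream."""
    return list(islice(xs, n))

# first(3, compute_list_l()) reduces to a normal form, [1, 1, 1],
# even though compute_list_l() could never be evaluated in full.
```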

ok@quintus.UUCP (Richard A. O'Keefe) (04/13/89)

In article <1944@randvax.UUCP> narain@randvax.UUCP (Sanjai Narain) writes:
>There is another reason why functional programming is more elegant
>than logic programming, which has to do with manipulation of
>infinite structures. Suppose one wants to compute the first 3 elements of
>an infinite list l. In logic one may say:
>	compute_list_l(L),first(3,L,X).
>By coroutining compute_list_l and first, one may solve first(3,L,X), but
>one would never be able to solve compute_list_l fully.  Thus, no theorem
>would be proved, so within the strict framework of logic programming
>one would not be entitled to infer anything.
>
>In functional programming however, one may evaluate the term:
>
>	first(3,compute_list_l)
>
>which would neatly reduce to its normal form, namely the first 3 elements
>of l.

When I saw this, I thought "that can't be right, functional programming
and logic programming are hard to tell apart, they can't be that different."
And of course they aren't.

Consider a Horn clause definition (I avoid Prolog notation because we're
talking about logic programming, not Prolog as such):

	list_of_ones(cons(1,X)) <- list_of_ones(X);

	first(0, L, nil) <- true;
	first(s(N), cons(X,Xs), cons(X,Ys)) <- first(N, Xs, Ys);

Now ask a coroutining system the question

	? Ans | first(s(s(s(0))), Xs, Ans), list_of_ones(Xs);

We get the intermediate results
answer(Ans) <- first(s(s(s(0))), Xs, Ans), list_of_ones(Xs);  % do first
answer(Ans) <- Xs = cons(X1,Xs1), Ans = cons(X1,Ys1),	      % do list...
	       first(s(s(0)), Xs1, Ys1), list_of_ones(cons(X1,Xs1));
answer(Ans) <- Ans = cons(1,Ys1),
	       first(s(s(0)), Xs1, Ys1), list_of_ones(Xs1);
	by calling first then list_of_ones
answer(Ans) <- Ans = cons(1,cons(1,Ys2)),
	       first(s(0), Xs2, Ys2), list_of_ones(Xs2);
	by two more such steps
answer(Ans) <- Ans = cons(1,cons(1,cons(1,Ys3))),
	       first(0, Xs3, Ys3), list_of_ones(Xs3);	% do first
	and the final result is
answer(cons(1,cons(1,cons(1,nil)))) <- list_of_ones(Xs3);

Surely this is a valid theorem?  (The rules give us no way to find a
finite Xs3, but the theorem only says that _if_there_were_ such an
Xs3, the answer would be [1,1,1].)

If we consider a functional definition:
	def list_of_ones = cons(1, list_of_ones);
	def first(0,L) = nil
	+++ first(s(N),cons(H,T)) = cons(H,first(N,T));
and ask for the value of the expression
	first(s(s(s(0))), list_of_ones)?
we get precisely the same kind of response:
the value of the expression is cons(1,cons(1,cons(1,nil)))
PROVIDED that list_of_ones has some finite value.  Of course,
it is unavoidable in functional programming that some expressions
do NOT have (non-bottom) values...

Each answer is as valid as the other.  In functional languages one
hacks lazy evaluation by moving away from "strict" evaluation.  We
could probably work out a non-strict logic if we tried.  (Hmmm...)