[net.lang.prolog] PROLOG Digest V3 #9

RESTIVO@SU-SCORE.ARPA (03/18/85)

From: Chuck Restivo (The Moderator) <PROLOG-REQUEST@SU-SCORE.ARPA>


PROLOG Digest            Monday, 18 Mar 1985        Volume 3 : Issue 9

Today's Topics:
               Query - XEROX & Denotational Semantics,
     Implementation - Warren Engine & Unsafe Vars & Numbervars,
                         & Bounded Merge & CP
----------------------------------------------------------------------

Date: Tue, 12 Mar 85 20:14:28 gmt
From: William Clocksin <WFC%CL.Cam@ucl-cs.arpa>
Subject: Implementation?

Has anyone got a Prolog implementation running on
Xerox workstations under XDE (it would have to be
written in Mesa)?  Alternatively, does anyone want
one badly enough for me to write one (at cost)?

------------------------------

Date: 11 Feb 85 10:24 PST
From: Kahn.pa@XEROX.ARPA
Subject: is there Prolog on Xerox 1100's/Interlisp?

Unfortunately I'm not at liberty to say much about
Prolog on Xerox 1100's.  All I've been authorized
to say is that Xerox is exploring several avenues
and that something will be announced during the
summer, we think.

------------------------------

Date: Tue, 12 Mar 85 13:22:41 pst
From: (Carl Ponder) Ponder%ucbernie@Berkeley
Subject: "Prolog inquiry"

I am attempting a dissertation in which I develop a
denotational-semantics model for describing different
Prologs (conventional Prolog, Dado Prolog, Concurrent
Prolog, etc.), use it to show where they might contain
or exclude one another, and establish their logical
consistency or inconsistency.

I have found very few articles dealing with formal Prolog
semantics;  just Pereira & Monteiro on operational
semantics of concurrency, Jones & Mycroft on denotational
semantics of conventional Prolog, and a large number of
articles on pure logic programming (Apt & Van Emden, Wise,
Kowalski, etc.).

Do you know of any references dealing with the formal
semantics of how Prolog actually works (i.e. formalizing
the procedural, not the declarative semantics)? Also, do
you know of any discussing the discrepancies (formally or
in terms of programming techniques) between the declarative
& procedural interpretations (i.e. where the logic works
but the interpreter fails due to the search ordering), or
any discussing the discrepancies between Prolog and
any of the proposed parallel variations?

Thanks,

-- Carl Ponder

------------------------------

Date: Monday, 11 Mar 1985 13:00:31-PST
From: (Karl Puder) Puder%Bach.DEC@decwrl.ARPA
Subject: unsafe variables in SRI-309

According to Warren's rules, in the clause:

nrev([X|L0], L) :- nrev(L0, L1), conc(L1, [X], L).

the variable L is safe because it appears in the head.

Since L appears in the head, some other clause must have
allocated storage for it in order to pass it in.  There
is no need to keep yet another copy of L once it has been
put into an A register to hand it to conc.

If this still doesn't work, recheck your implementation
of the put instructions against the PROLOG code in the
paper, and/or read Warren's paper "An Improved Prolog
Implementation which Optimizes Tail Recursion" for a
better explanation of what is going on.  (This paper
describes TRO for a structure-sharing system, but the
basic principle is the same).

-- Karl

------------------------------

Date: 13 Mar 1985 09:59-EST
From: David Scott Warren <Warren%suny-sbcs@csnet-relay>
Subject: WPE and unsafe variables again

To continue the discussion of unsafe variables in the
(D.H.D.) Warren Prolog Engine (WPE), and the example
raised by Clocksin:

nrev([X|L0],L) :- nrev(L0,L1), conc(L1,[X],L).

The final occurrence of L is not unsafe. A variable is
unsafe if its final occurrence might dereference to a
(free) variable STORED IN THE CURRENT ENVIRONMENT. This
cannot be the case with L. When dereferencing the final
occurrence of L, it might well dereference to a free
variable, but not to a free variable stored in this
environment. The unification on entry to this clause
will point the local variable L to whatever is passed
in as the second argument, maybe a free variable, but
it will be a free variable in an environment deeper in
the stack (or in the heap).  The problem would be if
when Register 2 is loaded with the value of L, it pointed
into the current environment. This can't happen with L
occurring in the head; it will point into an environment
deeper in the stack (or already on the heap).
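
To make the safe/unsafe distinction concrete, here is a rough
compilation sketch for the clause under discussion. This is our
illustration, not code from the Digest; the mnemonics loosely follow
SRI Technical Note 309, and a real compiler may emit a different
sequence:

```
% Hedged sketch of WPE code for:
%   nrev([X|L0],L) :- nrev(L0,L1), conc(L1,[X],L).
nrev/2: allocate
        get_list         A1       % head arg 1 is [X|L0]
        unify_variable   Y1       %   X  (permanent: needed later by conc)
        unify_variable   A1       %   L0 (temporary: arg 1 of first call)
        get_variable     Y2, A2   % L   (permanent, occurs in the head)
        put_variable     Y3, A2   % L1  (fresh, lives in this environment)
        call             nrev/2
        put_unsafe_value Y3, A1   % L1: last occurrence may still point
                                  %     into THIS environment -- unsafe
        put_list         A2       % build [X]
        unify_value      Y1       %   X
        unify_nil                 %   []
        put_value        Y2, A3   % L: safe -- head unification made it
                                  %    point deeper in the stack or heap
        deallocate
        execute          conc/3
```

The point of the thread is visible in the last two put instructions:
L1 needs put_unsafe_value (it must be globalized to the heap before
the environment is discarded), while L, having occurred in the head,
needs only a plain put_value.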

Another related point that caused me some difficulty
concerns what D.H.D. Warren calls a temporary variable,
in the circumstance that the first occurrence of the
variable appears at the top level in the final literal of
the clause, and is thus translated to a `put_temp_var'
instruction. I find this situation to be better understood
as an unsafe variable situation. It is a `permanent'
variable which on its first use is definitely known to be
unsafe and so must be moved to the heap.

(Note also that it wasn't needed before this point and so
needn't be allocated space in the current environment at
all; thus it is `temp'.)

Anyway, I found the name `temp' confusing at first.

I might mention that in a graduate compiler course that I
taught last winter, I had the students write a Prolog
compiler and we used (a minor variant of) the WPE as the
intermediate code language. I wrote some rough notes
explaining the operation of the WPE for the students. They
explain a number of these issues. I'll be happy to send a
copy to anyone who is interested.

------------------------------

Date: Tue, 12 Mar 85 20:11:07 gmt
From: William Clocksin <wfc%cl.cam@ucl-cs.arpa>
Subject: numbervars/3

A definition of numbervars/3, requested by
nbs-amrf!hopp in Digest 3(8), follows:

% numbervars(Term, M, N): instantiate each variable in Term to a
% distinct term '_'(K), numbering upwards from M; N is the next
% free number.
numbervars('_'(M),M,N) :- !, succ(M,N).    % an unbound variable
numbervars(A,M,M) :- atomic(A), !.
numbervars(T,M,N) :- functor(T,_,A), numbervars(0,A,T,M,N).

% numbervars(I, Arity, Term, M, N): number the variables in
% arguments I+1 .. Arity of Term.
numbervars(A,A,_,N,N) :- !.
numbervars(Am,Ar,T,M,N) :-
        succ(Am,An),
        arg(An,T,A),
        numbervars(A,M,K),
        !,
        numbervars(An,Ar,T,K,N).



The variables are labelled (UK spelling) with compound
term _/1. It is not a good idea to write programs that
rely on a particular labelling scheme.  Oh yes:
succ(X,Y) gives an X and Y such that Y is 1 more than X.
'numbervars'/3 is often used by programs that need their
own theory of substitution.  It is possible to dispense
with this by programming the theory explicitly (define a
predicate called, say, 'match').  The latter can often
be more efficient, I suspect.
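
As a check on the definition above, here is the labelling it should
produce on a small term (our worked example, not from the Digest; it
assumes the succ/2 just described):

```prolog
% Exercising the numbervars/3 defined above:
?- numbervars(f(X, g(Y, X)), 0, N).

X = '_'(0),
Y = '_'(1),
N = 2
```

The repeated X receives the same label, which is what makes the
predicate useful for comparing terms up to renaming of variables.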

------------------------------

Date: Thu, 14 Mar 85 21:44:36 pst
From: Allen VanGelder <AVG@Diablo>
Subject: numbervars reply

There were some questions about numbervars.  I
tried to send to the author, but the mail was
returned.  Maybe you can figure out what address
is needed, or put it in the Digest.

Date: Mon, 11 Mar 85 13:28:37 pst
From: Allen VanGelder <avg@diablo>
Subject: numbervars
To: hopp@nbs-amrf.arpa

numbervars should not produce terms that are likely
to occur "by accident".  See last paragraph.

Here is a transcript of what C-Prolog does:

C-Prolog version 1.4e.edai

| ?- numbervars([X,Y],0,2), numbervars(g(U,V),0,2), X=U, Y=V.

V = B
U = A
Y = B
X = A ;

no

In other words, it succeeds exactly once.
Needless to say "A" and "B" are not variables.  When you go
over 26, it continues "A1, B1, ...".  The advantage of this
over integers is that a written term can be human-readable
as far as variables go, and still has variables in the
right places if re-read. For your purposes, I guess this is
immaterial.  Incidentally, numbervars fails if you give it
too small a range.

Finally, in spite of the above evidence,
numbervars SUPPOSEDLY gives you '$VAR'(0),
'$VAR'(1), ...  This costs you little in
implementation terms and is probably necessary
for correctness.  You do NOT want this to succeed:

Y=0, numbervars(g(X),0,1), numbervars(g(Y),0,1), g(X)=g(Y)

because a primary purpose of numbervars is to distinguish
between logically different, but unifiable, terms.
Variants SHOULD be equal after numbervars, so if Y=0 is
omitted above, the rest succeeds, because g(X) and g(Y)
are the same up to renaming variables.
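
That property can be packaged as the usual variant test; the
following is a common idiom (our sketch, with a hypothetical
predicate name), using double negation so the bindings made by
numbervars are undone on exit:

```prolog
% variant(T1, T2): true iff T1 and T2 are identical up to a
% renaming of variables.  The \+ \+ wrapper ensures no bindings
% survive the test.
variant(T1, T2) :-
    \+ \+ ( numbervars(T1, 0, N),
            numbervars(T2, 0, N),   % must number the same count of variables
            T1 == T2 ).
```

So variant(g(X), g(Y)) succeeds, while variant(g(X), g(0)) fails:
numbering g(X) ends at 1, but numbering g(0) ends at 0.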

Hope this helps.

------------------------------

Date: Mon 11 Mar 85 11:44:19-EST
From: Vijay <Vijay.Saraswat@CMU-CS-C.ARPA>
Subject: Bounded Merge revisited

[ This is in response to Kusalik's and Shapiro's
  notes in V3(8) responding to Vijay's bounded
  merge program in CP in the issue before that.
  My apologies for the confusion -ed]

Thanks to Ehud Shapiro and Tony Kusalik (private
communication) for pointing out why the bounded
merge program proposed in the previous Digest
would not work.  I now believe that  `bounded
merge' isn't a robust concept.  (When in doubt
change the topic!)  The problem is that it isn't
compositional.  If I have a bounded merge process
and I place it in a context where two streams are
being independently generated and then merged by
the given b-merge, there can be no guarantee that
two successive elements coming from the same stream
will always occur within a fixed number of elements
of each other in the output stream. This is because
the scheduler could be AND-unfair.  Consider:

system(Z):- cycle(a, 0, R), cycle(b, 0, S), b-merge(R?,S?, Z).

cycle(Tag, N, element(Tag, N).R):- N1 is N+1, cycle(Tag, N1?, R).

b-merge is your favourite bounded merge.

Now unless I assume some kind of AND-fair scheduler,
I cannot guarantee that  if element(a,1) is the 1st
element in Z then element(a,2) will be the Nth element
for some arbitrary but fixed N.  This is because it
could very well be the case that the scheduler does
not schedule cycle(a, 2, R) for execution but keeps
on scheduling cycle(b,N,S) and then merge will faithfully
keep on producing element(b,X) in the output stream.
The point is that having a so-called bounded merge
does not help produce a system which preserves that
property unless we assume some form of AND-fairness
of the scheduler of the kind that if two independent
AND-goals are ready to execute, they will be executed
in a bounded amount of time, perhaps bounded by the
number of such processes at that time or maybe even
the total number of AND-processes at that time.
For example, a breadth first AND-scheduler would be
some such scheduler.
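
For the flavour of such a scheduler, here is a sketch in sequential
Prolog (our code, not from the Digest, and only an analogy for the CP
case): a meta-interpreter that keeps its AND-goals in a queue and
appends new subgoals at the rear, so every ready goal is reduced
within one pass over the queue.

```prolog
% Breadth-first AND-scheduling as a Prolog meta-interpreter sketch.
solve([]).
solve([true|Gs]) :- !,
        solve(Gs).
solve([G|Gs]) :-
        clause(G, Body),          % resolve G against a program clause
        conj_to_list(Body, Bs),
        append(Gs, Bs, Gs1),      % enqueue new subgoals at the REAR
        solve(Gs1).

% Flatten a (A,B) conjunction into a list of goals.
conj_to_list((A,B), [A|Bs]) :- !, conj_to_list(B, Bs).
conj_to_list(A, [A]).
```

A depth-first scheduler would instead prepend Bs to the queue; the
append is exactly what buys the AND-fairness assumed above, at some
cost in queue manipulation.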

Note that AND-fairness is very different from OR-fairness
which would actually be quite difficult to define.  Suffice
it to say that, under a reasonable assumption of what an
OR-fair scheduler would be, the normal merge would be a
bounded merge.

[Note also that in their 1984 ISL paper, Shapiro and
Mierowsky assume a property similar to what we call
AND-fairness (which they call fairness, and which is
not to be confused with the fair as in fair merge,
which isn't what everyone calls fair anyway. I think
it's fair to say that all this can be fairly confusing.)]

If we are going to assume AND-fairness anyway, we may
as well assume it to get a `bounded' merge. Here is a
version which uses only `?' and `|':

X=X.
copy(A.X, A.X, A.X).

merge(X, Y, Z):- copy(X?, NewX1, NewX2),
                 copy(Y?, NewY1, NewY2),
                 merge(NewX1?, NewY1?, NewX2, NewY2, Z).

I.   merge(R, I.Y, S?, _,  I.Z):-   R=S | merge(R, Y, Z).
II.  merge(I.X, R, _, S?,  I.Z):-   R=S | merge(X, R, Z).
III. merge(I.X, I1.Y, _, _, I.I1.Z):- merge(X, Y, Z).
IV.  merge(I.X, I1.Y, _, _, I1.I.Z):- merge(X, Y, Z).
V.   merge(nil, I1.Y, _, _, I1.Y).
VI.  merge(I1.X, nil, _,_, I1.X).
VII. merge(nil, Y, _, _,  Y).
VIII.merge(X, nil, _, _,  X).

This works exactly as before.

I. can fire ONLY IF there is input in the second stream
to be merged and none in the first stream.

Similarly for II.

III and IV can fire only if there is input in both streams,
and then the order in which the two input elements are output
depends upon the guard which commits.

V-VIII take care of termination conditions.

Consider I. (Ignore the calls to copy/3 for the time
being.)  There is only one occurrence of S in the clause
head and this is read-protected.  Therefore it can only
match a variable; otherwise it will suspend forever. The
only other read-only annotation to bother about is that
of the call's 2nd argument, which will be satisfied if it
is instantiated when called. Similarly for all the other
clauses. Effectively, by keeping two copies of X and Y
in each call, one read-protected and the other not, we
have avoided the X?-X? clash that occurred earlier.

Now where's the catch?  According to the definition
of an S?-X match, X becomes a read-only variable, i.e.
ALL INSTANCES OF X ARE NOW READ-ONLY.  Hence as soon
as the first clause commits its bindings, X globally
becomes S? and of course there is no producer for S,
so everything hangs again.  This points out the
confusion of concepts involved in making X?-Y result
in Y being a read-only copy of X in all its instances.
Whereas the '?' used to be a purely local annotation,
affecting only the place it occurs in, suddenly it has
become a global annotation.  The problem now is that
instances of variables which had not explicitly been
declared read-only (and hence were implicitly not
read-only, there being no way in CP to declare something
explicitly not-read-only, that being the default case)
suddenly become read-only, leading to immediate deadlock.

Enter  copy/3.  Given an input stream, copy/3 produces two
copies of the stream (and is also suspended if there is no
input in the input stream).  Now when S? unifies with NewX2
in I, NewX2 becomes S?, i.e. a read-only copy of S.  But
the guard declares S to be the same as R, which is NewX1?.
Hence NewX2 becomes NewX1??, i.e. NewX1?.  But now the goal
copy(X?, NewX1, NewX1?) can STILL execute successfully
because when X is instantiated, say to B.Q, the second
arguments will unify, leading to NewX1? becoming B.Q? which
will unify with the third argument in the clause-head of
copy/3.

The price we pay for this is that in addition we now have
to assume AND-fairness in order to show that if a value
arrives at the original X stream, then it will be output in
a finite amount of time. All the goals of the form copy(X?,
NewX1, NewX1?), copy(NewX1?, NewX11, NewX11?), .... that
have been built up (effectively forming a linear pipeline
N long if N is the number of Y elements output since the
last X element was output) are independent of the goals
copy(Y?, NewY1, NewY2), copy(NewY1?, NewY11, NewY12) and
the system must not discriminate against them.   For
example, under breadth-first scheduling one can show that
if N elements of Y are output before an element arrives at
the X input, then at most N more elements from Y will be
output before the element that arrived at X is output
(assuming that Y supplies elements as fast as they are
consumed.)

Now for some light hearted asides.  Gimme a break!
Since when did we start writing and studying programs
in order to RUN them? Quote from the Bible p.8:

"... In our approach we shall try to redress the
balance, and we shall do so by regarding the fact that
our algorithms could actually be carried out by a
computer as a lucky accidental circumstance that need
not occupy a central position in our considerations..."

Thanks to Udi again for giving another exposition of
"stability", a concept with which I totally disagree.
The whole exercise involving bounded merge was carried
out with the purpose of determining how powerful a
language CP is.  Stability is a very strong assumption
indeed, and the aim here is to see how far you can go
with as little as possible.  The attempt here, in brief,
was to get the '?' to do the work of var.  Look, ma,
no hands.

And, to reiterate, I DON'T CARE (as contrasted
with DON'T KNOW; pardon this terrible humour)
if all known implementations of CP are stable.
We are talking of the model of computation here,
and how much mileage can be obtained from the
minimum of  extra-logical assumptions.  Do you
want the CP model to specify that all its
implementations must be stable or fair?  As Tony
Hoare said long ago (CSP) paper, one would NOT
like to do that in order to prove one's programs
correct, but here (Sigh!) some such property seems
necessary.  My deence is that you would need it
anyway if you were to prove some boundedness
property for a system.

And lastly, as I am sure Ehud would agree, the
only real touchstone for answering questions like
this (the appropriate behaviour for ?-?) is a formal,
well-justified, abstract semantics, (which takes the
place of God in these circumstances) not whether
something works or doesn't work on a (possibly buggy...
no offence meant!) interpreter.

Cheers.

-- Vijay.

------------------------------

Date: Tue 12 Mar 85 15:23:38-EST
From: Vijay <Vijay.Saraswat@CMU-CS-C.ARPA>
Subject: Bounded merge once again.

In his message to the Digest, Jacob Levy proposed
an alternate definition of the read-only annotation
in CP. Briefly, this proposal suggests that unification
of a read-only variable in a clause head with anything
in the goal should be defined to fail. Without commenting
here on the merits or otherwise of this proposal, here
is ANOTHER b-merge which works for this new definition.

test(A.X, A.X, test(done)).

merge(X,Y,Z):-
      test(X?, NewX, TestX),
      test(Y?, NewY, TestY),
      merge(NewX?, TestX, NewY?, TestY, Z).

I.  merge(X, test(Test?), A.Y, _, A.Z):- Test=done | merge(X, Y, Z).
II. merge(A.X, _, Y, test(Test?), A.Z):- Test=done | merge(X, Y, Z).
III.merge(A.X, _, B.Y, _, A.B.Z):- merge(X,Y,Z).
IV. merge(A.X, _, B.Y, _, B.A.Z):- merge(X,Y,Z).
V.  merge(nil, _, Y, _, Y).
VI. merge(X, _, nil, _, X).

This works just as it did before, with clause I firing
ONLY IF there is no input on the X channel and some on the
Y channel. Clause II handles the symmetric case and III
and IV the case in which there is input on both channels.
V and VI handle the termination conditions.

Note that you only need the fairness assumption with
respect to test/3, merge/3 and merge/5 goals. When input
arrives on a channel, it will be output after a fixed but
arbitrary number of elements from the other stream have
been output.  This number is bounded above by the number
of consecutive elements from the other channel that have
been output before the arrival of this element.
The input can be assumed to arrive asynchronously from
externally scheduled producers. A variant of this program
is needed for a k-fair (k > 0) scheduler, though.

------------------------------

Date: Tue, 12 Mar 85 11:20:01 -0200
From: Ehud Shapiro  <Udi%Wisdom.bitnet@WISCVM.ARPA>

Unfortunately, Vijay's 5-argument upgrade to his
3-argument fair merge is still not correct.  In
contrast with his merge/3, which does not work on
any example, the new merge/5 works for some examples
--- for goals merge(Xs,Ys,Zs),  in which both Xs and
Ys are fully instantiated streams (i.e. lists).
However, when called with two streams which are produced
lazily --- slower than the pace at which merge/5 can
consume them --- Vijay's program deadlocks. The reason
is that his method for checking that a stream is
uninstantiated, while succeeding in the check, also binds
that stream to a read-only variable, which causes further
attempts of the producer of the stream to suspend.
Vijay's merge/7 upgrade to the merge/5 upgrade to the
merge/3 suffers from the same problem.

On the other hand, Vijay proposes in his third note to the
Digest to change the behavior of unification in Concurrent
Prolog, namely that unify(X?,Y?), where X and Y are
variables, should succeed, binding X to Y. An interesting
change, I must admit.  Under this change, Vijay's original
merge/3 program will work correctly, and, with some additional
effort by the producers of the streams, will also allow
his merge/5 and merge/7 to work.  In addition to this
proposal, to which we will give serious consideration in
our forthcoming implementation discussions, Vijay raises
several other points concerning unification of read-only
variables.  Similar points have been made by Tony Kusalik,
and by Kazunori Ueda, in unpublished notes. I will try to
respond to some of them (although I can't promise to keep
up with Vijay's pace in the future) in light of some
implementation decisions made in our new compiler for
Flat Concurrent Prolog.

-- Concerning unify(X?,X), and unify(X?,X?):
Unification of a term with itself always succeeds,
irrespective... The problem in the Concurrent Prolog
interpreter written in Prolog is caused by X? being
represented as the term '?'(X), the lack of occur-check in
reasonable Prolog implementations, and the fact that
no-one cared for this bug before.

-- Concerning order dependence of unification:
Unification is performed from left to right, so it is not
"desperate". This order dependence is a bug or a feature,
depending on whether you are selling or buying (actually
there are some neat FCP programming tricks/techniques
that rely on this, including implementing deterministic
constraint systems, and probing a shared blackboard).

--- Concerning read-only's in the head of a clause:
When originally defining Concurrent Prolog, I noticed
that the definition allows read-only's in clause heads,
but didn't know what one could use them for.  A use for
them was discovered independently almost a year later, by
Lisa Hellerstein, when trying to implement a complex
parallel algorithm in Concurrent Prolog, and by Steve
Gregory, when trying to implement PARLOG in Concurrent
Prolog.  Since then the technique is known as 'exporting
protected data-structures', which is just what it says:
it allows a process to create an incomplete data-structure,
and fill in the parts later, without worrying that some
other process will step on the incomplete parts
maliciously or by mistake.

--- It is true that there is no use suspending on a
variable that is local to a clause, since no-one can
wake up such a suspended process.  Fortunately, it is
easy to detect this case at runtime, and fail instead.
This should be viewed as an optimization that does not
change the semantics of Concurrent Prolog.

A last comment concerning fairness.  Fairness of a
computation specified by a logic program is a property
which goes outside of the program's model theoretic
semantics.  Hence I find it completely appropriate that
extralogical features should be used to specify it.  For
that matter, read-only annotation is an extra-logical
control facility.  Certainly, for reasons of elegance,
it is preferable to be able to specify fair Concurrent
Prolog programs using this construct alone, without
introducing other extra-logical facilities such as the
fairness or stability of the underlying machine.  It
seems that following Vijay's proposed modification to the
semantics of Concurrent Prolog, it is possible to specify
fair merge using read-only variables alone, without relying
on the fairness of the underlying machine.  This should be
taken as a point in favor of this proposal.

Happy Purim,

-- Ehud Shapiro

------------------------------

Date: Thu 14 Mar 85 07:37:58-EST
From: Vijay <Vijay.Saraswat@CMU-CS-C.ARPA>
Subject: An alternative to '?' in CP.

As an alternative to CP's complicated definition of the
read-only annotation, let me propose another annotation.

The '!' annotation can only decorate instances of terms in
the head of a clause.  If a Term (variable OR constant OR
compound term) T! occurs in the head of a clause, then
unification will suspend if an attempt is made to unify a
variable against the term and will remain suspended until
the variable has been instantiated (to a constant or a
possibly non-ground compound term), after which the two
terms will be recursively unified.  (wait/1 in CP seems to
achieve the suspension part of '!' but cannot be used to
simulate it in CP-without-? without a control primitive to
sequence goals... and pleeaase don't use '?' for
sequencing!)  Like the `?', the `!' is
not inherited by embedded terms, that is, it applies only
to the functor or variable instance textually indicated in
the program.

Note that under these rules, unification is still
order-dependent, but there is a linear algorithm.  Also
unify(Y!, X!) can never occur. unify(Y, Y!) can, and
suspends till Y is instantiated.  Note that there is no
'inheritance' of '!'  via X!-Y unifications like there is
for X?-Y.  Indeed the !-annotation can never occur 'in'
any goal call at run-time.

Here each CLAUSE decides what is to be INPUT to it.
Previously each CALL decided what would be input to that
call.  If all the clauses have the same pattern of input
specifications, then that could be considered a
mode-specification for the predicate.

Examples:

merge([A|X]!, Y, [A|Z]):- merge(X, Y, Z).
merge(X, [A|Y]!, [A|Z]):- merge(X, Y, Z).

plus(X!, Y!, Z):- Z is X+Y.
plus(X!, Y, Z!):- Y is Z-X.
plus(X, Y!, Z!):- X is Z-Y.


'!' is less powerful than '?' because it cannot be used to
simulate 'var' and hence write a b-merge like program,
even assuming strict AND-fairness of the scheduler.

Nor can it be used to export 'protected' variables.  But
the whole thesis here is that clauses should be forced to
specify what their input is and hence the 'embedded
channel' problem in Hellerstein and Shapiro, ISLP, 1984
can be taken care of by !-protecting at the consume site
instead of ?-protecting at the produce site.

'!' seems to cover the vast majority of uncontroversial
uses of the `?' and is far simpler semantically.  For
example, it can be shown that once a goal previously
blocked on '!' becomes unblocked, it remains unblocked
through commit time.  With even the new proposal for '?'
that is not the case, unless restriction 1 in my note on
the read-only annotation in the previous Digest is
enforced.

(By the way, most of what I said in 'The read-only
annotation in CP' continues to be valid even under the
new proposal for '?' given by Levy in the last Digest.)

------------------------------

Date: Thu 14 Mar 85 10:20:02-EST
From: vijay <Vijay.Saraswat@CMU-CS-C.ARPA>
Subject: Levy's version of '?'.

Levy's version of '?' in CP as presented in the
last issue of the Digest is inconsistent, as the
following example shows.

Goal: ?- foo(X?, Z).

Clause: foo(Y, Y).

Suppose the first arguments are unified first.  Then
Y becomes X?, and hence unifying the second arguments
leads to unifying X? in the head with Z in the clause
which will fail.

Suppose the second arguments are unified first. Then Y
becomes Z and now of course unifying the first arguments
succeeds with Z becoming X?.

This is different from the example I had given
earlier to point out the order-dependence of
unification. There the choice was between
suspension and success, here it is between
failure and success. Does Levy mean to imply some
fixed ordering to his unification algorithm? (e.g.
unify the arguments of a compound term from left to
right) That's a strong condition!

Note that Condition 1 (all occurrences of a variable
in a goal should be read-only protected, or none
should be) is met by the goal
?- foo(X?,Z).  So that condition is not sufficient to
handle this inconsistency.  It is also not sufficient
to ensure that once a goal becomes unblocked it will
remain unblocked through commit time, for the same reason
(unlike what I had thought in the previous message `An
alternative to '?' in CP' in this issue of the Digest).

------------------------------

End of PROLOG Digest
********************