[comp.lang.prolog] Higher Order Extensions

patch@hpldola.HP.COM (Pat Chkoreff) (04/12/89)

There are certain extensions to pure Prolog which are commonly regarded
as necessary for `real world' programming, namely:

    o  set expressions (findall, etc.)
    o  negation as failure

Are these extensions really necessary, and why?

They surely are not necessary for the purpose of computation, since pure
Prolog is equivalent to the most powerful known model of computation.  I
suspect that much of the so-called `need' for these extensions arises
when structured data is encoded as a relational structure _in the
program_, rather than as data structures passed as predicate arguments.

For example, a stream of symbols ['3','+','7'] may be encoded as a
relational structure as follows:

    % input(P,S,Ps).
    %
    % The symbol at position P >= 0 in the input stream is S, and Ps is
    % the successor of P.

    input(0,'3',1).
    input(1,'+',2).
    input(2,'7',3).

However, if the stream is encoded this way, it is impossible in pure
Prolog to calculate the `length' of the stream.  You would need at least
one of the extensions.  If instead the stream were encoded as the data
structure ['3','+','7'], then it would be trivial to calculate its
length using pure Prolog.
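
For concreteness, here is a minimal sketch of what I mean, using
successor notation for the naturals (since machine arithmetic is itself
outside pure Prolog); len/2 is just my name for it:

```prolog
% The length of a list, in pure Horn clauses only.
len([], []).             % the empty stream has length zero ([])
len([_|Xs], s(N)) :-     % a nonempty stream is one longer than its tail
        len(Xs, N).
```

The goal len(['3','+','7'], N) gives N = s(s(s([]))), i.e. three.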

This may explain why _The_Art_of_Prolog_ (section 17.2) suggests that
you `need' both extensions for the breadth-first search of possibly
cyclic graphs.  In their example, they encode the graph into the program
as the edge/2 relation.  If the graph were instead represented as a data
structure, then I maintain that the extensions would be unnecessary.

I am not claiming that these data structure problems are easy:  in fact,
I find it quite difficult to represent objects that have a complex,
cross-referenced structure.  I'm doing it anyway, using some techniques
that do NOT involve circular structures and rely heavily on the natural
numbers.  It works, but that doesn't imply that I know what the hell I'm
doing.

I would appreciate some good references and lively discussion on this
subject.  I haven't seen much in the five Prolog books that I own.


Regards,
Patrick Chkoreff            "AC:  It's not just an axiom; it's the LAW!"

ok@quintus.UUCP (Richard A. O'Keefe) (04/14/89)

In article <11500013@hpldola.HP.COM> patch@hpldola.HP.COM (Pat Chkoreff) writes:
>This may explain why _The_Art_of_Prolog_ (section 17.2) suggests that
>you `need' both extensions for the breadth-first search of possibly
>cyclic graphs.  In their example, they encode the graph into the program
>as the edge/2 relation.  If the graph were instead represented as a data
>structure, then I maintain that the extensions would be unnecessary.

There is an intermediate representation:
	start(WhereToStartTheSearch).
--->	children(Node, SetOfAllThatNodesChildren).
	solution(X) :- X is a node which has what we're looking for.

In effect,
	children(Node, Children) :-
		set_of_all(Child, edge(Node,Child), Children).
With that representation, breadth-first search can be done in 4 pure
(slightly obfuscated) Horn clauses:

breadth_first(Answer) :-
	start(Start),
	breadth_star([], s(0), [Start|Q], Q, Answer).

breadth_star([], s(_), [Answer|_], [], Answer) :-
	solution(Answer).
breadth_star([], s(N), [X|F], B, Answer) :-
	children(X, Children),
	breadth_star(Children, N, F, B, Answer).
breadth_star([X|Xs], N, F, [X|B], Y) :-
	breadth_star(Xs, s(N), F, B, Y).

It is worth noting that iterative deepening is competitive with
breadth first search in Prolog.
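
As a rough sketch (mine, with hypothetical names nat/1, deepen/1, and
bounded/3), iterative deepening over the same start/1, children/2,
solution/1 representation also fits in pure Horn clauses:

```prolog
% Depth bounds in successor notation; nat/1 generates them in order.
nat([]).
nat(s(N)) :- nat(N).

deepen(Answer) :-
        nat(Bound),                  % try ever larger depth bounds
        start(Start),
        bounded(Bound, Start, Answer).

bounded(_, X, X) :-
        solution(X).
bounded(s(B), X, Answer) :-
        children(X, Children),
        mem(Child, Children),        % pick one child to descend into
        bounded(B, Child, Answer).

mem(X, [X|_]).
mem(X, [_|Ys]) :- mem(X, Ys).
```

Note that on backtracking this re-finds each answer at every
sufficiently large bound; a practical version stops at the first
solution.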

brock@tuvie (Inst.f.Prakt.Info 1802) (04/21/89)

In article <11500013@hpldola.HP.COM> patch@hpldola.HP.COM (Pat Chkoreff) writes:
>There are certain extensions to pure Prolog which are commonly regarded
>as necessary for `real world' programming, namely:
>    o  set expressions (findall, etc.)
>    o  negation as failure
>Are these extensions really necessary, and why?
>They surely are not necessary for the purpose of computation, since pure
>Prolog is equivalent to the most powerful known model of computation.
                      ^^^^^^^^^^^^^ yes you _should_ use it
And in article <1790@etive.ed.ac.uk> jah@lfcs.ed.ac.uk (James Harland)
notes:
>[nearly everything omitted] I know that findall etc are
>strictly second-order, and that pure Prolog is strictly first-order, and so
>there is a mis-match.
There is not always a real mismatch!

     In some cases, such as the one you (Pat) have mentioned,
     <<['3','+','7']>>, it is possible to have a much more convenient
     way to reason about "all solutions" in Prolog -- more clearly: to
     reason about Prolog's nondeterminism.  It is not applicable in
     general, but it is useful and convenient.  With this programming
     methodology you can have, to a limited degree, a kind of data
     abstraction.  You can apply it when the set as an object in its
     own right is not your primary interest, but when you want to state
     something about this set.

One of the first tasks when writing predicates is to choose appropriate
data structures; and usually lists are chosen...  They are mostly (for
lack of time) never changed, as one would have to rewrite a very large
part of one's predicates.  If one looks at OOPLs, but also at CLU, one
gets really jealous of their ability to change the rep(resentation) of
something so easily.  A very boring thing to program in Prolog is
something similar to (don't flame me about that comparison) iterators --
say, in CLU.  This is the case when you make some tree walks over data
structures.  You want to state something about all elements -- more
generally, about all solutions.  Universal quantification in deductive
databases is to some extent similar to it, etc. etc.  I never saw it
used as in the following -- but maybe I'm wrong about that; every
programmer probably does it unconsciously.
 
This method can only be applied when you have enumerable sets of
solutions, i.e., when you can transform the predicate P which describes
the set of solutions into the following form.  Please forgive me the
misleading names.

        normalform(In,Out) :-
                compute(In,Intermediate),
                (show(Intermediate,Out); normalform(Intermediate,Out)).

The idea is that all nondeterministic parts which will contribute to
the set of solutions are expressed by ;/2 only.  compute/2 should thus
be deterministic.

Example: list/1 + member/2

    list([]).
    list([_|Xs]) :- list(Xs).

    member(X,[X|Ys]).
    member(Y,[X|Ys]) :- member(Y,Ys).
    <==>
    member(X,Xs) :- Xs = [X|_]; Xs = [_|Ys], member(X,Ys).
    <==>
    member(X,Xs) :- Xs = [Y|Ys], (X = Y; member(X,Ys)).
    <==>

Final result:
    member(X,Xs) :- normalform([_|Xs],X).
    compute([_|Xs],Xs).
    show([X|_],X).

Example: tree/1 + member_tree/2

    tree(empty).
    tree(node(_,R,L)) :- tree(R), tree(L).

    member_tree(node(X,R,L),Y) :-
            X = Y; member_tree(R,Y); member_tree(L,Y).
    <==>
    member_tree(node(X,R,L),Y) :-
            X = Y; member(Z,[R,L]), member_tree(Z,Y).
    <==>
    member_tree(node(X,R,L),Y) :- X = Y; member_member_tree([R,L],Y).
            member_member_tree([X|Xs],Y) :-
                   member_tree(X,Y); member_member_tree(Xs,Y).
            <==>
            member_member_tree([node(X,R,L)|Xs],Y) :-
                   X = Y; member_member_tree([R,L],Y); member_member_tree(Xs,Y).
            member_member_tree([empty|Xs],Y) :-
                   member_member_tree(Xs,Y).
            <==>
            member_member_tree([node(X,A,B)|Xs],Y) :-
                   X = Y; member_member_tree([A,B|Xs],Y).
            member_member_tree([empty|Xs],Y) :-
                   member_member_tree(Xs,Y).
    <==>

Final result:
    member_tree(X,Y) :- normalform([node(_,X,empty)],Y).
    compute([node(_,R,L)|Xs],[R,L|Xs]).
    compute([empty|R],R).
    show([node(X,_,_)|_],X).

After this step the predicate stating p about all  solutions
is:
    forallnormalform(X) :-
            not compute(X,_).
    forallnormalform(X) :-
            compute(X,Y),
            show(Y,Z), 
            p(Z),
            forallnormalform(Y).

Instead of the declarative "not compute(X,_)" you'll find, for tree/1
and list/1 respectively, empty and [].

Program transformation is fun but not simple.  Most so-called program
transformations are done by hand, and so are mine.  (There might be
some typos in the above, but I believe the <==> steps are right.)
Considering just the applications mentioned, having no completely
general way to do this is no great disadvantage.  When you are capable
of designing some term structures, you'll also be capable of defining
the related "iterator".  The advantage is just that it is not
necessary to rewrite your programs.  You can play around with the rep
much as in OOPLs.  After these steps you'll unfold show/2 and
compute/2 to get the predicate you would have written by hand.
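
As a concrete check, unfolding compute([_|Xs],Xs) and show([X|_],X) in
normalform/2 for the list example gives back exactly the <==> form of
member/2 derived above:

```prolog
% member/2 after unfolding compute/2 and show/2
member(X, Xs) :-
        ( Xs = [X|_]                  % X is the head, or
        ; Xs = [_|Ys], member(X, Ys)  % X is a member of the tail
        ).
```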

This programming style is in some way a more declarative alternative
to the ad hoc map* features of functional languages, because the
patterns of recursion are explained by logical quantification.  In
Hope, for example, you could do this too; however, list/1 and tree/1
would be a type _definition_ only.  How the related iterators depend
on these type definitions, and how they can be generated, remains
mysterious in Hope.


Ulrich (ulrich@vip.at ~ UUCP: ...!mcvax!tuvie!vip!ulrich)

patch@hpldola.HP.COM (Pat Chkoreff) (04/22/89)

/ hpldola:comp.lang.prolog / jah@lfcs.ed.ac.uk (James Harland) /  6:53 am  Apr 19, 1989 /

> It is a little hard for me to understand what you are getting at here.
> ...
> However, the result you quote in the last sentence
> above is not really relevant to this discussion, as whilst pure Prolog is
> equivalent to (your favourite variation of) Turing machines, there is the
> question of convenience. Turing machines are adequate for computation, but
> does anyone seriously wish to program such beasts? A less extreme comment
> may be made about pure Prolog; it is computationally sufficient in the
> sense you mentioned above, but is it expressive enough for *convenient*
> programming? This seems to be the question that you are addressing.

I thank Richard O'Keefe and James Harland for their responses.  Allow me
to clearly state the intent of my posting, which I should have done in
the first place.

I have had a recent experience (epiphany?)  that makes me wary of the
`convenience' of using extensions outside of pure first-order logic,
including findall/3, !/0, and \+/1.  I do not claim that they are
unnecessary for building practical systems, and I do not claim that
there is a moral imperative not to use or implement them.  I claim that
their convenience may lull you into using them even though a perfectly
logical solution could be found if you tried.

My experience is with the manipulation of weighted attributed directed
acyclic graphs.  I started by encoding the weights, attributes, and
edges as relations (i.e., into the program).  To verify that the graph
was acyclic, I had to use extensions.  To do attribute synthesis, I had
to use extensions.  And I used 'em all:  findall, !, \+ -- the works.
Given my data representation, I had no choice.  I found myself having to
repeatedly write essentially the same 26 lines of cut- and findall-
laden code for the various tasks.  No really viable leap of abstraction
was possible.  I tried some ideas similar to the one O'Keefe suggested,
but they didn't go far enough.

Finally, I committed myself to the task of representing arbitrary DAGs
as terms in pure first-order logic.  I mean _pure_:  no arbitrary atomic
node identifiers, no extralogical extensions, no arg/3 or functor/3, not
even any machine arithmetic.  I wanted those things right out there in
the Herbrand universe where I could denote 'em.

I succeeded.  Now, I don't have to _check_ that a graph is acyclic:
there is no way to even _express_ a graph with cycles.  I wrote 8
(eight) lines of pure, general-purpose code that related together a
graph, edge weights, input attributes, and output attributes.  It's an
excellent generator:  if you wait long enough, you'll see any solution
that exists.  In retrospect, my solution is trivial, and that's why I
like it.

To deal with the inefficiencies of computing with the natural numbers []
(zero) and s(N) (N+1), I build natural expressions, postponing their
evaluation as long as I desire.  This also preserves the generation and
termination characteristics of the predicate, preventing it from
mindlessly generating ever-increasing natural numbers or getting lost
searching for things that don't exist.

If I want to convert an expression to a machine integer, I have a
predicate that will do so (using machine arithmetic, of course).  So
even though I have attributes with values such as 144140, I never need
to build a term of that depth.
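
As an illustration of the kind of thing I mean (the names and the
expression constructors here are made up for this posting, not taken
from my actual code):

```prolog
% Natural "expressions" built from [] (zero), s/1 (successor) and
% plus/2, evaluated to a machine integer only on demand.  eval/2 is
% where the machine arithmetic lives; everything upstream stays pure.
eval([], 0).
eval(s(E), N) :-
        eval(E, M),
        N is M + 1.
eval(plus(E1, E2), N) :-
        eval(E1, N1),
        eval(E2, N2),
        N is N1 + N2.
```

For example, eval(plus(s(s([])), s([])), N) gives N = 3, so an
attribute value like 144140 can be named by a shallow expression
rather than a term of that depth.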

Here's the scary thing.  This version is much more efficient than the
old one.  I haven't even counted the efficiency gained by simply reading
the data instead of asserting it.

I could easily fold the evaluation back into the attribute synthesis,
making it work directly with machine integers and floats, but I don't
really need to.  In any case, at least I have a working specification.
Those 26-line monsters were not only illogical, they didn't even have
the decency to fit on a single screen!

Thus, my limited experience suggests that although extralogical
extensions may seem convenient, a purely logical solution is out there
waiting for you.  It may be coy and elusive as hell, but its rewards may
far exceed the cost of the search.  Then again, maybe not.

I would certainly desire a language based on a higher-order logic.  That
would spare me from having to rely on ad hoc (albeit useful) language
extensions when things get really tough.  I want a language that allows
me to express what I want cleanly and logically, and when that's
impractical, I'd like a good C interface.

I am obviously somewhat religious on this point, and would thoroughly
appreciate a good bash from a wise iconoclast.  I'm venturing into a
new area of my project now, where my faith will be tested in a much
hotter fire.


- Patrick Chkoreff                 "Ada:  it's not just a language ...."

cdsm@doc.ic.ac.uk (Chris Moss) (04/26/89)

In article <11500013@hpldola.HP.COM> patch@hpldola.HP.COM (Pat Chkoreff) writes:
>There are certain extensions to pure Prolog which are commonly regarded
>as necessary for `real world' programming, namely:
>
>    o  set expressions (findall, etc.)
>    o  negation as failure
>
>Are these extensions really necessary, and why?
>
The standard reference for this is:
  D.H.D. Warren: Higher-order extensions to Prolog: are they needed?
   Machine Intelligence 10, 1982, pp. 441-454.  (This is a book to all
   intents and purposes.)

2 points: 
  1. The predicate to find the set of all solutions can easily be written 
in pure Prolog with negation.  (It is the list such that there are no
solutions outside it.)  The only problem is that it is O(N^2). 
"setof" (defined in Warren's paper) also needs metalogical predicates such
as var, though these _can_ be defined in Prolog with cut.
  2. findall cannot be defined in Prolog without assert/retract or equivalent
as far as I know (I've never seen a proof). The problem here is precisely
showing that there are multiple solutions.
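
For what it's worth, here is one way to write point 1 down, for a fixed
unary predicate p/1 with finitely many ground solutions (solutions/1,
solution_list/1 and mem/2 are my names; this is a sketch, not Warren's
formulation):

```prolog
% L is a list of all solutions of p/1: every member of L solves p/1,
% L has no duplicates, and -- via negation -- no solution is missing.
solutions(L) :-
        solution_list(L),
        \+ ( p(X), \+ mem(X, L) ).   % no solution of p/1 outside L

solution_list([]).
solution_list([X|Xs]) :-
        solution_list(Xs),           % bind the tail first, so that
        p(X),                        % the negation below runs on
        \+ mem(X, Xs).               % ground terms: no duplicates

mem(X, [X|_]).
mem(X, [_|Ys]) :- mem(X, Ys).
```

Each permutation of the answer set is returned, the repeated
membership tests give the O(N^2) behaviour, and the generator does not
terminate on backtracking past the last permutation.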

>They surely are not necessary for the purpose of computation, since pure
>Prolog is equivalent to the most powerful known model of computation. 

Negation is needed if you want to program strictly at the "first-order"
level.  OK: if you raise everything up to the term level, you can do
everything (e.g. write Turing machine programs).

>For example, a stream of symbols ['3','+','7'] may be encoded as a
>relational structure as follows:
>
>    % input(P,S,Ps).
>    %
>    % The symbol at position P >= 0 in the input stream is S, and Ps is
>    % the successor of P.
>
>    input(0,'3',1).
>    input(1,'+',2).
>    input(2,'7',3).
>
>However, if the stream is encoded this way, it is impossible in pure
>Prolog to calculate the `length' of the stream. You would need at least
>one of the extensions.  

Yes, negation is sufficient, using the trick above.

Chris Moss.

lang@macbeth.PRC.Unisys.COM (Francois-Michel Lang) (05/04/89)

In article <783@gould.doc.ic.ac.uk> cdsm@doc.ic.ac.uk (Chris Moss) writes:
...
>2 points: 
>  1. The predicate to find the set of all solutions can easily be written 
>in pure Prolog with negation. (It's the list of which there are no
>other solutions). The only problem is that it is O(N^2). 

I know how to find the set of all solutions in O(N^2) time
without using assert/retract (a la findall),
but unless we have somewhat different notions of "pure Prolog",
I can't do it completely purely.

How is the pure version of such a predicate written?
----------------------------------------------------------------------------
Francois-Michel Lang
Paoli Research Center, Unisys Corporation lang@prc.unisys.com (215) 648-7256
Dept of Comp & Info Science, U of PA      lang@cis.upenn.edu  (215) 898-9511

patch@hpldola.HP.COM (Pat Chkoreff) (05/05/89)

/ hpldola:comp.lang.prolog / cdsm@doc.ic.ac.uk (Chris Moss) /  4:00 am  Apr 26, 1989 /

> Ok if you raise everything up to the term level you can do everything
> (e.g. write Turing machine programs).

Precisely:  you can do anything in pure Prolog if you "raise everything
up to the term level".  All you need is pure Horn clauses:  no negation,
assert/retract, findall, etc.

The point about Turing machines proves this, but (obviously) you don't
have to go as far as emulating Turing machines.  Instead, you can
represent more interesting objects.  In my system, I like to represent
natural numbers, sets of natural numbers, partitioned sets of natural
numbers, directed acyclic graphs, etc.  I do this using very basic
concepts from universal algebra.

To model an enumerable set of objects which exist `in my head', I simply
define a set of terms which name the objects, and establish a one-to-one
correspondence between objects and their names.

The names are defined by a `term structure', by which I mean a set of
functors with fixed arities, along with some pure predicates that define
the `valid' terms constructed from applications of these functors.  Each
valid term is a name which denotes a unique object, and there is exactly
one name for every object.

I try to make the names as terse as possible, taking advantage of every
ordering or precondition I can think of to optimize the encoding scheme.

For example, let's say I wanted to represent all sets consisting of two
distinct natural numbers.  I might use the following term structure:

%     name (term)     object (interpretation)
%     ------------------------------------------------------------
%     [].             the natural number:  0.
%     s(N).           the natural number:  N+1.
%     p(A,B).         the set:             {A, (A+B+1)}.

natural([]).
natural(s(N)) :-
	natural(N).

pair(p(A,B)) :-
	natural(A),
	natural(B).


For example:

	p([],[])               denotes   {0,1}.
	p(s(s(s([]))),s([]))   denotes   {3,5}.
	p(s([]),s(s(s([]))))   denotes   {1,5}.

Note that this scheme captures the `precondition' that the numbers in
the set must be distinct.  You don't have to _check_ this condition,
since it is impossible to violate it.
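
To make the interpretation executable, one can decode a name into the
two members it denotes, using successor addition (this decoder is my
own sketch, added here for illustration):

```prolog
% plus(M, N, S): S is M + N in successor notation.
plus([], N, N).
plus(s(M), N, s(S)) :-
        plus(M, N, S).

% pair_members(Name, A, Big): the name p(A,B) denotes {A, A+B+1}.
pair_members(p(A,B), A, Big) :-
        plus(A, s(B), Big).
```

For example, pair_members(p(s([]),s(s(s([])))), X, Y) gives X = s([])
(one) and Y = s(s(s(s(s([]))))) (five), matching the {1,5} above.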

I can change interpretation if I feel like it.  For example, the same
term structure can represent all sets consisting of two distinct _even_
natural numbers:

%     name (term)     object (interpretation)
%     ------------------------------------------------------------
%     [].             the natural number:  0.
%     s(N).           the natural number:  N+2.
%     p(A,B).         the set:             {A, (A+B+2)}.


For example:

	p([],[])               denotes   {0,2}.
	p(s(s(s([]))),s([]))   denotes   {6,10}.
	p(s([]),s(s(s([]))))   denotes   {2,10}.

---

In my system, the objects of interest are directed acyclic graphs, sets
of natural numbers, etc., so I'm doing this sort of thing for them.  In
pure Prolog, there is simply no way to manipulate these objects unless
they can be represented _as objects_.  And since I _want_ to represent
these objects anyway, I might as well use pure Prolog.

I vote in favor of "raising everything up to the term level".  I like
the sound of that.

All of this idealism is probably going to earn me a good kick in the
butt one of these days, the universe being what it is.  But maybe not,
Prolog compilers being what they are (:-}).



Patrick Chkoreff   719-590-5983  %  The quest for perfection is natural,
{hpfcla,hplabs}!hpldola!patch    %  rational, irrational, and transcendental.

tarau@ouareau.iro.umontreal.ca (Paul Tarau) (05/08/89)

In article <783@gould.doc.ic.ac.uk> cdsm@doc.ic.ac.uk (Chris Moss) writes:

>  1. The predicate to find the set of all solutions can easily be written 
>in pure Prolog with negation. (It's the list of which there are no
>other solutions). The only problem is that it is O(N^2). 
>"setof" (defined in Warren's paper) also needs metalogical predicates such
>as var, though these _can_ be defined in Prolog with cut.
>  2. findall cannot be defined in Prolog without assert/retract or equivalent
>as far as I know (I've never seen a proof). The problem here is precisely
>showing that there are multiple solutions.
>

Defining a findall-like predicate in pure Prolog is relatively easy,
if we use a data-structure representation of the database.

Prolog's implementation of SLD-resolution uses a partially destructive
unification (substitutions are "applied" to goals, although input clauses are
used as fresh variants). All we have to do is to replace it with a
no-side-effect unification predicate:

	unify(X,Y,Mgu)

where Mgu is a most general unifier of X and Y, which contains
only fresh variables.


% Constructive "all-answers" predicate in pure Prolog

:-op(600,xfx,<=).

all_answers(Pattern,Goal,Answers):-
	clauses(Cs),
	solve_all([Pattern<=[Goal]],Cs,[],Answers).

solve_all([],_,As,As):-!.			% no more goals
solve_all([G|Gs],Cs,OldAs,NewAs):-
	solve_one(G,Cs,Gs,NewGs,OldAs,As),	% solve the first goal
	solve_all(NewGs,Cs,As,NewAs).		% solve the others

solve_one(Answer<=[],_,Gs,Gs,As,[Answer|As]):-!.	% keep Answer
solve_one(Goal,Cs,Gs,NewGs,As,As):-			% 
	reduce_all(Cs,Goal,Gs,NewGs).			% reduce Goal

reduce_all([],_,OldGs,OldGs):-!.		% no more clauses
reduce_all([Clause|Cs],Goal,OldGs,NewGs):- 	% select the first clause
	reduce_one(Clause,Goal,OldGs,Gs),	% reduce the others
	reduce_all(Cs,Goal,Gs,NewGs).

reduce_one(Clause,Goal,Gs,[NewGoal|Gs]):-	% successful reduction
	reduce(Goal,Clause,NewGoal),!.		% giving NewGoal
reduce_one(_,_,Gs,Gs).				% failure: no new goal

reduce(Answer<=Goal,Head<=Body,NewAnswer<=NewBody):-	% reduces a goal
	unify(tuple(Answer,	Goal,	_	),	% and an input clause
	      tuple(_,		Head,	Body	),	% giving a new goal
	      tuple(NewAnswer,	_,	NewBody	)).


% data

clauses([
	[app([],Ys,Ys)|Cont]<=Cont,
	[app([A|Xs],Ys,[A|Zs])|Cont]<=[app(Xs,Ys,Zs)|Cont],

	[nrev([],[])|Cont]<=Cont,
	[nrev([X|Xs],R)|Cont]<=[nrev(Xs,T),app(T,[X],R)|Cont],

	[perm([],[])|Cont]<=Cont,
	[perm([X|Xs],Ys)|Cont]<=[perm(Xs,Zs),ins(X,Zs,Ys)|Cont],

	[ins(X,Xs,[X|Xs])|Cont]<=Cont,
	[ins(X,[Y|Ys],[Y|Zs])|Cont]<=[ins(X,Ys,Zs)|Cont]
]).

% tests

t1(Xs):-all_answers(X,nrev([a,b,c],X),Xs). 

t2(Xs):-all_answers(X,perm([1,2,3],X),Xs). 

t3(R):-all_answers([X,Y],app(X,Y,[a,b]),R). 


Although non-destructive unification is at least as "logical" as
its Prolog cousin, one can easily avoid it by using "numbervars" 
and "melt-new" (arithmetic and "structure inspection" predicates
have pure Prolog equivalents).

In Quintus-Prolog a practical definition is:

	unify(X,Y,U):-copy_term(X,U),copy_term(Y,U).

In ALS-Prolog (a nice system, by the way) we can use "$$bagof" which
creates terms directly on the heap, without "assert" and "retract".

	unify(X,Y,U):-$$bagof(X,X=Y,[U]).

In other Prologs the fastest predicate is probably:

	unify(X,Y,U):-findall(X,X=Y,Us),Us=[U].


With a unify(X,Y,U) predicate which never fails, returning a 
"bottom" element if X and Y are not unifiable, it is
even possible to give a version of the previous program which
does not use the "negation-as-failure" implicit in the "red" cut
of reduce_one/4.
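
One sketch of such a version (unify_total/3, reduce_total/3 and
add_goal/3 are names I've made up here): with a total unifier that
answers mgu(U) or bottom, the choice formerly made by the cut becomes
a first-argument case analysis:

```prolog
% reduce_one/4 without the red cut.  unify_total/3 is assumed to
% always succeed, giving mgu(Mgu) or bottom; add_goal/3 dispatches
% on that result instead of cutting.
reduce_one(Clause, Goal, Gs, NewGs) :-
        reduce_total(Goal, Clause, Result),
        add_goal(Result, Gs, NewGs).

reduce_total(Answer<=Goal, Head<=Body, Result) :-
        unify_total(tuple(Answer, Goal, _),
                    tuple(_,      Head, Body),
                    Result).                 % mgu(tuple(...)) or bottom

add_goal(bottom, Gs, Gs).                    % failure: no new goal
add_goal(mgu(tuple(NewAnswer, _, NewBody)), Gs,
         [NewAnswer<=NewBody|Gs]).
```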

Paul Tarau

cdsm@doc.ic.ac.uk (Chris Moss) (05/09/89)

In article <10145@burdvax.PRC.Unisys.COM> lang@macbeth.PRC.Unisys.COM (Francois-Michel Lang) writes:
>I know how to find the set of all solutions in O(N^2) time
>without using assert/retract (a la findall),
>but unless we have somewhat different notions of "pure Prolog",
>I can't do it completely purely.
>
>How is the pure version of such a predicate written?

There are 2 problems. 
1. "how do I write a version of 'call' in pure Prolog?"
Answer: for all predicates in your program supply a clause of the form:
   call(foo(A,B)) :- foo(A,B).

This could easily be done by a preprocessor. 
Call does not, by itself, add any metalevel power to Prolog. Only convenience.

2. How do I recursively call a goal with uninstantiated variables?
Answer: Make a copy of the term each time with new variables.
This can be done with freeze and melt (as in Sterling and Shapiro).
It requires something akin to "univ" (or alternatively functor and arg) 
which requires a clause (similar to "call") for each function symbol 
in the program.

These are clearly inconvenient, but they CAN be done in pure Prolog.
They are more "structural" than "metalevel" facilities.

Chris Moss.

cdsm@doc.ic.ac.uk (Chris Moss) (05/10/89)

In article <1019@mannix.iros1.UUCP> tarau@iros1.UUCP (Paul Tarau) writes:
>
>Defining a findall-like predicate in pure Prolog is relatively easy,
>if we use a data-structure representation of the database.

Quite. What you've done is to take everything "up" a level.
What you haven't shown is the implementation of "findall" AS IT EXISTS,
applying to normal Prolog.

Chris Moss