[comp.lang.prolog] PROLOG DIGEST V6 #42

restivo@POLYA.STANFORD.EDU (Chuck Restivo) (07/10/88)

Date: Sunday, 10 July 1988 02:53-PST
From: Chuck Restivo (The Moderator) <PROLOG-REQUEST@POLYA.STANFORD.EDU>
Reply-to: PROLOG@POLYA.STANFORD.EDU
US-Mail: P.O. Box 4584, Stanford CA  94305
Subject: PROLOG Digest   V6 #42
To: PROLOG@POLYA.STANFORD.EDU


PROLOG Digest           Sunday, 10 July 1988      Volume 6 : Issue 42

Today's Topics:
                                                   Query - Call/1,
                                              Implementation - KR,
                                          LP Library - Book Review
----------------------------------------------------------------------

Date: 7 Jun 88 02:31:00 GMT
From: uicsrd.csrd.uiuc.edu!sehr@uxc.cso.uiuc.edu
Subject: Call/1 and goal processing

I am writing an Or-parallel interpreter for Prolog, and have run into a
difficulty.  How do other interpreters that use a structure-shared
representation implement call/1 efficiently, without unduly penalizing
ordinary goal processing?  It seems that if one allows the interpretation
of dynamically constructed goals (i.e. those built via functor/3,
arg/3, and univ (=..)), then one must cope with the possibility of
variable dereferencing during the selection of a goal to process.  Does
anyone out there know how this is done in other interpreters (C-Prolog,
SB-Prolog, etc.)?
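
To make the issue concrete, here is the kind of dynamically constructed
goal I have in mind (apply/2 is only an illustration, not code from my
interpreter):

        %  Build a goal at run time from a functor name and an argument
        %  list, then execute it via call/1.  Until =.. and call/1 are
        %  reached, the interpreter cannot know which procedure will be
        %  invoked, nor how its arguments will dereference.

        apply(Name, Args) :-
                Goal =.. [Name|Args],
                call(Goal).

        %  e.g.  ?- apply(append, [[1,2], [3], Xs]).   gives  Xs = [1,2,3]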

-- David

------------------------------

Date: 15 Jun 88 19:41:11 GMT
From: quintus!ok@unix.sri.com
Subject: Knowledge representation and Prolog

This is a forwarded message.
In the future, I suggest that instead of sending things to me for
forwarding, people should send mail to the Prolog Digest at
	Prolog@Polya.Stanford.EDU

FORWARDED MESSAGE:
  Does anyone know of any KRL written in Prolog besides APES?  Has anyone any
  comments on attempts to mimic KEE or ART type systems in Prolog? 
   
  Jerry Harper: Merrion Gates Software, Rosemount Court, Booterstown Avenue,
              : Co Dublin, IRELAND
  jharper@euroies.uucp

------------------------------

Date: 16 Jun 88 18:13:38 GMT
From: antares!finin@burdvax.prc.unisys.com  (Tim Finin)
Subject:  Knowledge representation and Prolog

We here at the Unisys Paoli Research Center have been using KNET, a
Knowledge Representation system implemented in Prolog, on a number of AI
projects since about 1982.  Attached is a brief description of KNET.
More details can be found in an article which will appear in a
forthcoming issue of the Journal of Logic Programming.  Tim

KNET is a frame-based knowledge representation system developed at the
Unisys Paoli Research Center to support the acquisition and
maintenance of large, real-world knowledge bases.  The KNET system is
currently being used in a rule-based diagnosis and maintenance system
(KSTAMP), in a model-driven configuration expert system (BEACON) and
in our natural language processing system (PUNDIT).

The KNET representation language is similar to KL-ONE in that it is
based on a formal semantic model which defines the meaning of objects
in its knowledge bases.  Objects in the knowledge base are called
concepts, each of which has a unique name.  A KNET knowledge structure
consists of a number of concepts organized into two distinct data
structures, called hierarchies.  Each hierarchy has its own structure,
but the two are related in a complex manner.

The first hierarchy is the specialization hierarchy.  Concept B is
said to specialize concept A if B is a kind of A.  There is a single
designated root concept, and all concepts participate in the
specialization hierarchy.  It is essentially the IS-A hierarchy used
as a basis of conventional semantic networks.  It is used so that
components (see below) may be described at a single concept and
inherited by all the children of that concept.  The specialization
hierarchy is more general than the conventional IS-A network in that
it is possible for a concept to specialize more than one other
concept, thus inheriting properties from all its parents.  This
feature is termed multiple inheritance.
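
As an illustration only (these predicates are not KNET's actual
interface), such a specialization hierarchy can be pictured as Prolog
facts, with inheritance following the transitive closure of the
specialization links:

        %  Illustrative sketch, not KNET code: direct specialization links.
        %  A concept may specialize several parents (multiple inheritance).
        specializes(truck,   vehicle).
        specializes(truck,   diesel_device).
        specializes(vehicle, thing).

        %  is_a(B, A): concept B is A or a (transitive) specialization of A.
        is_a(Concept, Concept).
        is_a(Concept, Ancestor) :-
                specializes(Concept, Parent),
                is_a(Parent, Ancestor).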

The second hierarchy is the aggregation hierarchy.  In this hierarchy,
the children of a concept are its components, and taken as an
aggregate they completely define that concept.  In some cases these
children are not components in a literal sense but properties; for
example, the wall voltage required by a device may be a property of
that device.  A link in this hierarchy consists of an object called a
role which names the component, together with a connection to the
owner of the role and another connection to the type of the role
(which is another concept).

The final constituent of a KNET structure is the constraint.  A
constraint is a piece of executable code attached to the network.  The
code is available for use by a program using KNET (an application);
whether, when, and how it is used is up to the application.  A
constraint is housed at some particular concept, and refers to one or
more roles.  Exactly one of the roles referred to by a constraint is
designated as the trigger of the constraint; the remaining roles are
the targets of the constraint.  The purpose of the trigger is to
provide a means for indicating within the KNET structure when an
application program should use a constraint.  The meaning of a
constraint is always defined relative to the context in which the
constraint occurs.  This means that references to roles made from
within an inherited constraint always refer to the local context
rather than to the context in which the constraint was originally
defined.

Finally, it is important to maintain consistency in knowledge
networks.  The definition of consistency varies for differing kinds of
knowledge representation, and depends on the semantic model
implemented by the knowledge representation.  For KNET, a fundamental
consistency requirement is the subsumption requirement, defined as
follows: If concept A has a role R with type B, and if concept A2
specializes A, then the type of the inherited role R2 must be B or a
specialization of (a specialization of ...) B.  If this requirement is
not met, a subsumption violation is said to occur.  The program which
builds KNET structures, the Browser/Editor, automatically checks for
and disallows subsumption violations and several other types of
inconsistencies.
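
As a rough sketch of the subsumption requirement (again illustrative
only; role/3 and subsumption_violation/2 are not KNET's interface),
using the specializes/2 and is_a/2 relations pictured earlier:

        %  role(Concept, RoleName, Type): Concept has a role RoleName
        %  whose filler must be of type Type.
        specializes(diesel_engine, engine_type).
        role(vehicle, engine, engine_type).
        role(truck,   engine, diesel_engine).

        %  A violation occurs when a concept restricts an inherited role
        %  to a type that is not a specialization of the type given at
        %  the ancestor concept.
        subsumption_violation(Concept, Role) :-
                role(Concept, Role, Type),
                is_a(Concept, Ancestor),
                Concept \== Ancestor,
                role(Ancestor, Role, AncestorType),
                \+ is_a(Type, AncestorType).

With these facts there is no violation; changing truck's engine role to
a type that is not a specialization of engine_type would be reported.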

KNET has been implemented in a standard subset of Prolog which has
allowed it to be ported to several different Prolog systems on a
variety of machines.  Versions of KNET run on VAXes, Suns, Lisp
machines and PCs.  An interactive browser/editor system for KNET
knowledge bases can use a simple ASCII terminal interface, enabling
its use on each of these machines.  A more sophisticated graphical
browser/editor is available for use on Sun workstations.

-- Tim Finin			

------------------------------

Date: 17 Jun 88 02:44:14 GMT
From: shardy@teknowledge-vaxc.arpa  (Steve Hardy)
Subject: Knowledge representation and Prolog

Jerry Harper asks about knowledge representation languages
written in Prolog (besides APES).

Teknowledge developed a Prolog-based expert system shell called
M.1.  It was released in June 1984 and has sold close to 4,000
copies.

M.1 is unusual in that it is a complete logic programming
language as well as being an easy-to-use expert system shell.

For example:

/* --- simple EMYCIN-like heuristic rule --- */

	if main-component-of-meal = beef
		then best-color-of-wine = red cf 75.

/* --- list processing --- */

	infix <>.		/* for "append" */

	[] <> LIST = LIST.

	if LIST1 <> LIST2 = LIST3
	   then [ITEM|LIST1] <> LIST2 = [ITEM|LIST3].

/* --- objects and inheritance --- */

	issquare(obj-22).
	size(obj-22) = 5.

	if isrectangle(X) and
		height(X) = H and width(X) = W and H * W = R
	    then area(X) = R.

	if issquare(X) then isrectangle(X).
	if issquare(X) and size(X) = S then height(X) = S.
	if issquare(X) and size(X) = S then width(X) = S.

After releasing four versions of M.1 in Prolog, Teknowledge
recoded the system in C.  This made the system approximately
five times faster and able to handle knowledge systems five
times larger (up to 2,000 rules on a 640K IBM PC).

It was easier to work out the design of M.1 with Prolog than it
would have been with C.

-- Steve Hardy

------------------------------

Date: 1 Jul 88 01:15:52 GMT
From: quintus!ok@unix.sri.com  (Richard A. O'Keefe)
Subject: An example from "Knowledge Systems & Prolog"

Yesterday I bought a copy of
	[A Logical Approach to Expert Systems and Natural Language Processing]
	Knowledge Systems and Prolog
	Adrian Walker, Michael McCord, John Sowa, & Walter Wilson
	Addison-Wesley 1987
	ISBN 0-201-09044-9

I haven't had time to read much of it yet.
There's some rather good advice in it, and it is easily the most
powerful argument I have ever seen for Edinburgh syntax.

For now I'd just like to comment on one particular example, found
on page 413.  I'll transliterate it into Edinburgh syntax.

%   most_general_terms(Terms, GeneralTerms)
%   is true when GeneralTerms is the set of terms in Terms which are
%   subsumed by no other term in Terms.

most_general_terms(Terms, GeneralTerms) :-
	qsort(Terms, TermsInOrder),
	most_general_terms_in_order(TermsInOrder, GeneralTerms).

%   most_general_terms_in_order(TermsInOrder, GeneralTerms)
%   is just like most_general_terms/2, except that less general terms
%   precede more general terms in TermsInOrder (that is, if X and Y
%   are both in TermsInOrder and X subsumes Y, X must follow Y).  The
%   order of terms in the result is the same as the order in the input.

most_general_terms_in_order([], []).
most_general_terms_in_order([Term|Terms], GeneralTerms) :-
	member(MoreGeneral, Terms),
	subsumes(MoreGeneral, Term),
	!,
	most_general_terms_in_order(Terms, GeneralTerms).
most_general_terms_in_order([Term|Terms], [Term|GeneralTerms]) :-
	most_general_terms_in_order(Terms, GeneralTerms).

%   This version of quicksort is supposed to order its input so that
%   if X and Y are both given and X subsumes Y, X will follow Y in
%   the output.

qsort(Terms, TermsInOrder) :-
	qsort(Terms, TermsInOrder, []).

qsort([], L, L).
qsort([Term|Terms], L0, L) :-
	partition(Terms, Term, Before, After),
	qsort(Before, L0, [Term|L1]),
	qsort(After, L1, L).

partition([], _, [], []).
partition([Term|Terms], Pivot, Before, [Term|After]) :-
	subsumes(Term, Pivot),
	!,
	partition(Terms, Pivot, Before, After).
partition([Term|Terms], Pivot, [Term|Before], After) :-
	partition(Terms, Pivot, Before, After).

%---- end of first version
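
(For those without a library version: the subsumes/2 test used above can
be approximated as follows, provided General and Specific share no
variables.  numbervars/3 grounds Specific, so the unification can bind
variables of General only, and the double negation throws those bindings
away again.)

subsumes(General, Specific) :-
        \+ \+ (   numbervars(Specific, 0, _),
                  General = Specific
              ).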

My antipathy to (falsely so-called) "quicksort" is well known by now.
But in this case its quadratic worst case hardly matters, because if there
are N terms initially, most_general_terms_in_order/2 will do O(N**2) calls
to subsumes/2 anyway.  So we might as well do

most_general_terms(Terms, GeneralTerms) :-
	most_general_terms(Terms, [], GeneralTerms).

%   At each call of most_general_terms(T, G, X)
%   there is a list L such that append(L, T, Terms) and
%   most_general_terms(L, G).

most_general_terms([], GeneralTerms, GeneralTerms).
most_general_terms([Term|Terms], GeneralTerms0, GeneralTerms) :-
	knock_out(GeneralTerms0, Term, GeneralTerms1),
	most_general_terms(Terms, GeneralTerms1, GeneralTerms).

knock_out([], Pivot, [Pivot]).
knock_out([Term|Terms], Pivot, GeneralTerms) :-
	subsumes(Pivot, Term),
	!,
	knock_out(Terms, Pivot, GeneralTerms).
knock_out([Term|Terms], Pivot, [Term|Terms]) :-
	subsumes(Term, Pivot),
	!.
knock_out([Term|Terms], Pivot, [Term|GeneralTerms]) :-
	knock_out(Terms, Pivot, GeneralTerms).

%---- end of second version

This has the nice property that the order of terms in GeneralTerms is
exactly the same as the order of terms in the original Terms list,
apart from the deletion of subsumed terms.  It does at most N(N-1)
calls to subsumes/2, and while the version of Walker et alia does
at most (1/2)N(N-1) such calls in most_general_terms_in_order/2,
it may do a similar number in partition/4.  Indeed, because subsumes/2
is a partial order (not a total order), it is likely to fail rather more
often than it succeeds, so partition/4 is likely to put most of its
first argument into the Before list, and quadratic behaviour is very
likely.  (In fact, whenever subsumes(Term, Pivot) succeeds, that tells
us that Pivot will not be in the final result, so we might as well drop
it.)

Actual measurements show that the two versions do about the same number
of calls to subsumes/2: both tend to be close to N(N-1) calls.  Sometimes
the "unsorted" method does much better than the "sorted" one, sometimes it
does a little worse.


This is actually a very interesting problem.  It crops up in all sorts of
guises.  I currently have an algorithm which does at most N(N-1) calls to
subsumes/2 and reduces to at most 2(N-1) when subsumes/2 happens to be
total.  If anyone knows of a better algorithm I would be _very_ interested
to hear of it.

------------------------------

Date: Fri, 1 Jul 88 17:30:43 PDT
From: quintus!ok@Sun.COM (Richard A. O'Keefe)
Subject: "Knowledge Systems in Prolog", more examples

Well, I've read more of the Walker, McCord, Sowa, & Wilson book,
and while I still say that there is some good advice in it, there are
one or two things it would pay you *NOT* to imitate.

(1) Wrappers.

    On pp 34-35, we are told that the Pascal declaration
	type person =
	    record
		name	      : string;
		address	      : string;
		date_of_birth : array [1..3] of integer;
		sex	      : boolean;
	    end {record};
    -- which, by the way, is not a brilliant piece of Pascal --
    introduces a type instances of which have as Prolog equivalents
    terms of the form
	person(name(N), address(A), date_of_birth(D.M.Y), sex(S))

    Again, on page 51 we are shown

	/* Prolog structure */	{ Pascal record }
				type loc =
	loc(			    record
	    farmer(F),			farmer    : string;
	    wolf(W),			wolf	  : string;
	    goat(G),			goat	  : string;
	    cabbage(C)			cabbage	  : string;
	)			    end {record};
    -- which, in context, is an amazingly bad piece of Pascal --

    DON'T *DO* THIS!  Supposing F, W, G, and C to be constants,
    the representation they recommend would cost 13 words in Quintus
    Prolog (18 words in another Prolog I know of), whereas the
    sum-of-products approach yields the *real* equivalent of the
    Pascal record as
	loc(Farmer, Wolf, Goat, Cabbage)
    at a cost of 5 words in Quintus Prolog (6 in another Prolog).
    That's a substantial waste of space and time, and worse still,
    it confuses the reader, because when you see patterns like
	loc(farmer(F),_,_,_)
    and	loc(_,wolf(W),_,_)
    floating around, your natural assumption is that patterns like
    loc(wolf(W),_,_,_) and loc(_,farmer(_),_,_) may be possible, for
    why else would anyone have unary wrappers if not to distinguish
    such cases?

(2) Over-use of "."

    At least in chapter 2, the book has an excessive fondness for ".".
    Consider the date_of_birth(Day.Month.Year) example above.  This
    would be better as date(Year,Month,Day) --which, amongst other
    advantages, will sort properly; see the comparison below-- at a
    space cost of 4 cells, which is
    admittedly the same as the cost of Day.Month.Year.  The big
    advantage here is that all things made out of dots look alike, so it
    is very hard for a human reader to tell D.M.Y from any other X.Y.Z,
    but date(Y,M,D) proclaims its nature to the world.  On page 55, this
    book actually _recommends_ X.Y, X.Y.Z and the like, so this was not
    an accidental slip but a matter of policy.
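
    To make the sorting point concrete (writing the dotted terms in
    canonical notation, so that no tokenizer reads 15.6 as a float):

        %  date(Year,Month,Day): the earlier date compares as smaller
        %  in the standard order of terms, which is what sorting wants.
        ?- date(1988,6,15) @< date(1988,7,10).            % succeeds

        %  Day.Month.Year: the Day field dominates the comparison, so
        %  the earlier date compares as larger.
        ?- '.'(15,'.'(6,1988)) @< '.'(10,'.'(7,1988)).    % fails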

    The authors have finally cured me of my lingering fondness for infix dot.
    I am now thoroughly convinced that bracket notation is to be preferred.
    Perhaps if I had been reading Lee Naish's code I might have been swayed
    the other way, but I fear that he may not be typical.

(3) Factorial

    On page 36 they present the following code:

	factorial(0,1).
	factorial(N,X) <- lt(N,0) &
	    write('Negative input to factorial').
	factorial(N,X) <- (M := N - 1) & factorial(M,Y) &
	    (X := N * Y).

    {Note: * in VM/Prolog is usually an anonymous variable, but not here.}

    What's wrong with this?  Well, according to this, factorial(-1, 472)
    is true.  {For the benefit of those fortunate enough not to have
    VM/Prolog, try
	factorial(0, 1).
	factorial(N, X) :-
		N < 0,
		write('Negative input to factorial.'), nl.
	factorial(N, X) :-
		M is N-1,
		factorial(M, Y),
		X is N*Y.
    }  For real fun, try factorial(1, 2).  If you are going to print an
    error message like this, you should at least have the decency to fail.
    So the second clause would have been better as
	factorial(N, *) <- lt(N,0) &
	    write('Negative input to factorial') & / & fail.
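
    In Edinburgh syntax, one way to write the whole predicate so that it
    fails (after complaining) on negative input, and also behaves sensibly
    on queries such as factorial(1, 2), is this sketch:

        factorial(N, X) :-
                (   N < 0 ->
                    write('Negative input to factorial.'), nl,
                    fail
                ;   N =:= 0 ->
                    X = 1
                ;   M is N - 1,
                    factorial(M, Y),
                    X is N * Y
                ).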

(4) (falsely so-called) "quick" sort    

    Section 2.4.1 (p89) starts out admirably, saying that "The choice of
    algorithm is the most important factor in determining performance".
    But the example they consider is sorting, and they say "Three
    different algorithms might be considered:
	- a permutation sort that tries all possible permutations of a
	  list in order to find one that is sorted
	- a bubble sort that interchanges pairs of elements until the
	  result is sorted
	- a version of Quicksort ..."

    I am really getting thoroughly sick of "quick" sort.  Remember, if you
    check the fine print in Knuth, Sedgewick, Mehlhorn, or any good book
    on the subject, you will find that the average case of "quick" sort
    does about 1.4 times as many comparisons as the WORST case of merge sort.
    If people are going to presume to give advice about sorting, they might
    at least check the standard literature.  (Not that sorting is a major
    topic in Walker et al, but it wasn't a major topic in Clocksin & Mellish
    either, and that didn't stop people mistaking the difference-list
    example for advice about how to sort.)
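
    For reference, a straightforward merge sort in Edinburgh syntax (a
    sketch; the names are chosen so as not to clash with any library
    sorting routines):

        merge_sort([], []).
        merge_sort([X], [X]).
        merge_sort([X,Y|Zs], Sorted) :-
                deal([X,Y|Zs], Front, Back),
                merge_sort(Front, SortedFront),
                merge_sort(Back, SortedBack),
                merge_ordered(SortedFront, SortedBack, Sorted).

        %  Deal the elements alternately into two halves.
        deal([], [], []).
        deal([X|Xs], [X|Ys], Zs) :-
                deal(Xs, Zs, Ys).

        merge_ordered([], Ys, Ys).
        merge_ordered([X|Xs], [], [X|Xs]).
        merge_ordered([X|Xs], [Y|Ys], [Z|Zs]) :-
                (   X @=< Y ->
                    Z = X, merge_ordered(Xs, [Y|Ys], Zs)
                ;   Z = Y, merge_ordered([X|Xs], Ys, Zs)
                ).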

(5) Breadth-first search.

    On pp 82-83 we find

	breadth_first(Goal, Histories, Answer) <-
	    member(Goal.Past, Histories) &
	    reverse(Goal.Past, Answer).
	breadth_first(Goal, Histories, Answer) <-
	    Histories=(*.*) &
	    set_of(Next.Current.Past,
		member(Current.Past, Histories) &
		move(Current,Next) & \member(Next,Past),
					New_history_list) &
	    remove_duplicate_head(New_history_list, Clean_list) &
	    breadth_first(Goal, Clean_list, Answer).

	remove_duplicate_head(F.S.Tail, Clean) <-
	    ((F=(Head.*) & S=(Head.*)) ->
		remove_duplicate_head(F.Tail,Clean);
		(remove_duplicate_head(S.Tail,L) &
		 Clean=(F.L))).
	remove_duplicate_head(F.nil, F.nil).
	remove_duplicate_head(nil, nil).

    Let's start with remove_duplicate_head.  The input is a list of lists,
    which is sorted, and subsequences ...,[A|T1],...,[A|Tn],... with the
    same head (A) are to be replaced by the first member of the subsequence
    (here [A|T1]).  Can we do this more cleanly?

	remove_duplicate_heads([], []).
	remove_duplicate_heads([[Tag|Info]|Given], [[Tag|Info]|Clean]) :-
		skip_duplicate_heads(Given, Tag, Clean).

	skip_duplicate_heads([[Tag|_]|Given], Tag, Clean) :- !,
		skip_duplicate_heads(Given, Tag, Clean).
	skip_duplicate_heads(Given, _, Clean) :-
		remove_duplicate_heads(Given, Clean).

    In Quintus Prolog, the cleaner version is about three times faster.

    I think the test \member(Next, Past) might perhaps be better as
    \member(Next, Current.Past), in case the problem graph has self-loops.
    If you are given a generation function which yields all of the children
    of a node at once instead of a move/2 which enumerates them, you can
    write breadth first search without appealing to set_of/3 at all.
    {The predicate called set_of/3 in this book corresponds to the
    predicate called set_of_all/3 in the Quintus library, not to the
    predicate called set_of/3, except that set_of_all/3 checks that it
    is being called soundly, and set_of/3 in this book doesn't.}
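
    Here is roughly what I mean (a sketch only, not the book's code: it
    assumes a children/2 predicate that always succeeds, giving the
    complete list of successors of a node -- [] for a dead end -- and it
    omits the cross-history duplicate elimination done above):

        breadth_first(Start, Goal, Path) :-
                breadth_first_queue([[Start]], Goal, RevPath),
                reverse(RevPath, Path).

        %  Each history is a list of nodes, most recent node first.
        breadth_first_queue([[Goal|Past]|_], Goal, [Goal|Past]).
        breadth_first_queue([[Current|Past]|Rest], Goal, Answer) :-
                children(Current, Next),
                extend_history([Current|Past], Next, NewHistories),
                append(Rest, NewHistories, Queue),
                breadth_first_queue(Queue, Goal, Answer).

        %  One new history per child not already visited on this path.
        extend_history(_, [], []).
        extend_history(History, [Child|Children], Extended) :-
                (   member(Child, History) ->
                    Extended = Extended1
                ;   Extended = [[Child|History]|Extended1]
                ),
                extend_history(History, Children, Extended1).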

Reminder:
    In this note I've concentrated on some of the infelicities in the book.
    I repeat that there is a lot of good stuff in it, and on the whole I do
    not regret its purchase.

------------------------------

Date: Sat, 2 Jul 88 22:03:31 PDT
From: quintus!ok@Sun.COM (Richard A. O'Keefe)
Subject: "Knowledge Systems and Prolog", chapter 3

Yes, it's me again, with yet a third set of comments on the
"Knowledge Systems and Prolog" book by Walker, McCord, Sowa, & Wilson.
This particular set of comments pertains to chapter 3.  I hasten to
say at the outset that there is a lot of good stuff in this chapter
which I wish I had written myself, but of course I have nothing to
say about _that_.


1.  Logic Programming Development Process (pp 110-111)

    Don't take steps 1..9 too seriously; that's not how I do it, and
    one of the most important steps has been omitted, namely "check to
    see if someone else has already solved the problem".  But the rest
    of the advice on p111 is good.

2.  Reading (p 113)

    The book presents two fragments (recast here as Prolog):
	...
	read(Stream1, X),
	read(Stream2, T),
	process(X, T)
	...
    and
	...
	customer(Name),
	catalogue_item(Item),
	interested_in(Name, Item),
	send_letter(Name, Item)
    saying "For example, catalogue_item/1 may simply read the next term".

    Now the second fragment looks as though it has a declarative reading.
    But its behaviour is radically different from the behaviour which
    would result if customer/1 and catalogue_item/1 were tables.  It is
    _NOT_ good style to give commands with side effects names which suggests
    that they are pure relations.  In fact it is very bad style.  Suppose
    we had the customer and catalogue_item tables in memory as pure
    relations, and wrote this failure-driven loop:

	send_letters :-
		customer(Customer),
		catalogue_item(Item),
		interested_in(Customer, Item),
		send_letter(Customer, Item),
		fail
	    ;	true.

    This would combine *every* Customer with *every* catalogue Item.
    But the original fragment with the two read commands doesn't do that;
    it reads an X and a T and combines _that_ X with _that_ T and no other.

    By the way, you _can_ keep (a small number of) tables in files and
    access them with read/1.  Here's an example (using Quintus Prolog):

	:- dynamic
		table_stream/2.

	initial_position('$stream_position'(0,1,0)).

	read_from_external_relation_1(Table, Entry) :-
		(   table_stream(Table, Stream) -> true
		;   external_relation(Table, FileName),
		    open(FileName, read, Stream),
		    assert(table_stream(Table, Stream))
		),
		initial_position(Zero),
		do_external_access(Zero, Stream, Entry).

	do_external_access(Before, Stream, Entry) :-
		stream_position(Stream, _, Before),
		read(Stream, Term),
		stream_position(Stream, After),
		(   Term == end_of_file -> fail
		;   Entry = Term
		;   do_external_access(After, Stream, Entry)
		).

	external_relation(customer,       'custom.dat').
	external_relation(catalogue_item, 'catalo.dat').

	customer(Customer) :-
		read_from_external_relation_1(customer, Customer).

	catalogue_item(Item) :-
		read_from_external_relation_1(catalogue_item, Item).

    Now, this _is_ a version of catalogue_item/1 which "simply read[s]
    the next term", but it is pretty remote from the first fragement.
    I don't know whether Frank McCabe invented this technique, but he's
    the one I got it from (testing the code above was the first time I
    have ever _used_ the technique, mind...).

    If you can possibly fit the information into memory, _do_ it.
    Don't keep reading it from a file over and over again.  Another
    possibility, instead of using read_from_external_relation_1/2,
    would be to read a file once and build an index in memory, so
    that fewer reads would be done.

    For example, suppose we have a file 'passwd.pl' containing items like

	passwd(teksup,123,45,'Tech Support',       '/peano/teksup','/bin/csh').
	passwd(halt,    6, 7,'Stop Machines',      '/',            '/etc/halt').
	passwd(ok,     89,10,'Richard A. O''Keefe','/dedekind/ok', '/bin/csh').

    {Crackers beware: these are _not_ real Quintus /etc/passwd entries!}

    We might do this:

	:- dynamic
		passwd_stream/1,
		passwd_index/2.

	initialise_passwd(Stream) :-
		open('passwd.pl', read, Stream),
		assert(passwd_stream(Stream)),
		repeat,
		    stream_position(Stream, Position),
		    read(Stream, Term),
		    (   Term = passwd(Key,_,_,_,_,_) ->
			assert(passwd_index(Key, Position)),
			fail
		    ;	true
		    ),
		!.	% The Stream is left open!

	passwd1(Key, Uid, Gid, Name, Home, Shell) :-
		(   passwd_stream(Stream) -> true
		;   initialise_passwd(Stream)
		),
		passwd_index(Key, Position),
		stream_position(Stream, _, Position),
		read(Stream, passwd(Key,Uid,Gid,Name,Home,Shell)).

    It is fair to give a predicate so defined a name which suggests
    that it is a pure table, because (apart from tying up a stream and
    being rather slow) that's how it _acts_.

    ONLY use this technique if you can build a good index; it can be
    hundreds, even thousands, of times slower than accessing tables in
    main memory.

3.  Wrappers (page 114-115)

    There is an example on pp 114-115 which reads thus:

	name(Name, TelephoneNumber, File) :-
		get_next_record(File, Record),
		parse_record(Record, name(Name,TelephoneNumber)).
				     ^^^^^                    ^

	parse_record(Record, name(Name,TelephoneNumber)) :-
			     ^^^^^                    ^
		parse_name(Name, Record, Rest_of_record),
		parse_blanks(_, Rest_of_record, Last_of_record),
		parse_telephone(TelephoneNumber, Last_of_record, _).

    {If you look at the code in the book, you'll see that the first
    argument of parse_blanks/3 is an anonymous variable in all calls
    and definitions, so it might as well not be there.}
    There is much to quarrel with in the choice of names here:
    the parse_*** predicates are genuinely relational, so should have
    noun-phrase or adjective-phrase names, not names that look like
    commands.  Worst of all, the operation which _is_ a command (that
    is, which has a side effect) is given the name name/3 which looks
    like a pure relation.  It would be better named e.g.
	read_name_and_phone_number(File, Name, PhoneNumber)

    My main point here is that name(_,_) is a useless wrapper.  If the
    name and phone number had to be returned to the caller so packaged,
    there'd be some sense in it, but not here.  It would be better as

	read_name_and_phone_number(Stream, Name, PhoneNumber) :-
		get_line(Stream, Chars),
		name_and_phone_number(Name, PhoneNumber, Chars, _).

	name_and_phone_number(Name, Exchange-Extension) -->
		token(Name),
		blanks,
		number(Exchange), "-", number(Extension).

    where the compound term -(_,_) here _is_ justified because that's
    what the caller wants.


4.  Query-the-user (pp 116-117, p120).

    When you hit page 116, look ahead to page 120.  I'll not comment on
    the rather major problems of the code on page 116, because the code
    on page 120 remedies some of them.

    The idea is that you want a relation user('User''s first name')
    -- by the way, I find it offensive to have computers address me by
    my first name, so whenever a program asks me my first name I lie;
    my computer account name will do fine for any genuine identification--
    but you don't want to ask unless you turn out to need the information.
    The code on page 120 is (converted to Quintus Prolog):

	:- dynamic
		known/1.

	user(Name) :-
		known(user(Name)),
		!.
	user(Name) :-
		prompted_line('What is your first name? ', Chars),
		(   verify(Chars) ->	% you have to supply verify/1
		    atom_chars(Name, Chars),
		    assert(known(user(Name)))
		;   format('~s: not verified.~n', [Chars]),
		    user(Name)
		).

    {This is not identical to the code in the book, but it has the same
    kinds of bugs, which is what I'm concerned with.}
    I loaded this into Quintus Prolog, together with library(prompt)
    and this definition of verify/1:

	verify([_,_]).
    Now see what happens:
	| ?- user(fred).
	What is your first name? jim
	jim: not verified.			% right
	What is your first name? ok
	no					% right, BUT

	| ?- listing(known/1).			% prints _nothing_

	| ?- user(Who).				% should pick up 'ok', BUT
	What is your first name? ok		% it asks AGAIN!
	Who = ok				% but that's not all...

	| ?- user(dz).
	What is your first name? ok		% it asks yet AGAIN!
	no

    The code breaks down in (at least) two ways:
    A)  If we call it with Name bound when known(user(X)) is true--i.e.
	the user has been asked for his name--but X\==Name, the user
	will be asked for his name again.  Fortunately, the _second_
	breakdown then occurs, so at least it isn't _stored_ again.
    B)	If we call it with Name bound when the user name is not known
	(or when it is known to be different from Name), we'll ask for
	the name, verify it, and then fail before storing the name.

    How should it be written?

	:- dynamic
		property_known/2.

	property_type(user, atom).
	...
	property_prompt(user, 'What is your first name? ').
	...
	property_verify(user, Chars) :-
		verify_user(Chars).
	...

	simple_query(Property, Value) :-
		simple_query_1(Property, X),
		Value = X.

	simple_query_1(Property, Value) :-
		property_known(Property, Value),
		!.
	simple_query_1(Property, Value) :-
		property_type(Property, Type),
		simple_query_1(Type, Property, Value),
		assert(property_known(Property, Value)).

	...
	simple_query_1(atom, Property, Value) :-
		property_prompt(Property, Prompt),

		repeat,
		    prompted_line(Prompt, Chars),
		    (   property_verify(Property, Chars)
		    ;   format('~s: not verified~n', [Chars]), fail
		    ),
		!,
		atom_chars(Value, Chars).
	...

	user(UserName) :-
		simple_query(user, UserName).

    Now, with this definition, we get

	| ?- user(fred).
	What is your first name? jim
	jim: not verified
	What is your first name? ok
	no

	| ?- listing(property_known/2).
	property_known(user,ok).
	yes

	| ?- user(Who).
	Who = ok 

	| ?- user(dz).
	no

    which is what you would expect something like this to do.


5.  "Global variables" (p122)

    I have to quote their VM/Prolog code here, because delax/1 doesn't
    behave quite like retract/1, more like retract_first/1.

	A ::= B <-
	    try(delax(global_value(A, *))) &
	    addax(global_value(A, B)).

	try(X) <- X.
	try(*).

    What's wrong with this?  Well, after converting to Quintus Prolog,
    I got the following result:

	| ?- x ::= 1, listing(global_value/2).
	global_value(x,1).

	| ?- x ::= 2, global_value(x, 1).
	no

	| ?- listing(global_value/2).
	global_value(x, 2).
	global_value(x, 2).

    I'll leave you to figure _that_ one out for yourselves, but the
    moral is that any time the last clause of a predicate is a
    catch-all case like the last clause of try/1, the chances are
    someone is trying to be clever and failing.


6.  Another steadfastness problem (p122)

    We are told on page 122 that "the global_value/2 technique
    described above can be used to simulate a stack, as follows:

	push(Stack, Item) <-
		addax(global_value(Stack, Item), 0).	% "asserta"

	pop(Stack, Item) <-
		delax(global_value(Stack, Item)).

    The predicates push/2 and pop/2 have their obvious meaning."

    Actually, they haven't.  Or at any rate pop/2 hasn't.  If pop/2
    succeeds, it did delete an item from the stack, but the item it
    deleted may not have been at the top of the stack...  Working
    Prolog code is

	push(Stack, Item) :-
		asserta(global_value(Stack,Item)).

	pop(Stack, Item) :-
		retract(global_value(Stack,Top)),
		!,
		Item = Top.

	top(Stack, Item) :-
		global_value(Stack, Top),
		!,
		Item = Top.
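
    To see the difference, suppose

	| ?- push(s, 1), push(s, 2).

    has been executed.  With the book's pop/2, the query pop(s, 1) can
    succeed by deleting the global_value(s,1) fact buried underneath the
    top of the stack.  With the version above, the cut commits retract/1
    to the most recently asserted fact, so pop(s, X) yields X = 2, the
    actual top of the stack.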

TO BE CONTINUED

    I have about as much more to add, but it's time to go home.

DISCLAIMER

    Remember, I'm criticising slips in a by-and-large-reasonable book;
    the good stuff can speak for itself.

    You'll recall that I had very little to criticise in the _content_
    of the Warren & Maier book, but raged about the misleading preface
    and blurb.  I recommended it to the Prolog class I taught last week,
    and if I had bought the Walker et al book by then I'd probably have
    suggested it to them as worth reading too.  Yes, I've made up my
    mind, I _shall_ recommend it to the next Quintus class, if only to
    show them why they should buy QP370 rather than VM/Prolog (:-).

---------------------------------

End of PROLOG Digest