[comp.lang.prolog] PROLOG Digest V6 #3

PROLOG-REQUEST@SUSHI.STANFORD.EDU (Chuck Restivo, The Moderator) (01/05/88)

PROLOG Digest            Tuesday, 5 Jan 1988        Volume 6 : Issue 3

Today's Topics:
                       Query - Variable Order,
                   Implementation - Task Analysis,
                  LP Library - Computing with Logic
----------------------------------------------------------------------

Date: Fri, 01 Jan 88 00:29:10 EST
From: Michael Covington <MCOVINGT%UGA.BITNET@forsythe.stanford.edu>
Subject: Dependency Grammar / Variable Word Order

  I would like to hear of any work, published or unpublished, on the
following topics:

(1) Parsing using a dependency rather than a constituency grammar,
i.e., by establishing grammatical relations between individual words
rather than gathering words into groups.

(2) Parsing strategies that have been used successfully with languages
that have variable word order, such as Russian or Latin.

I am familiar with GPSG, HPSG, and ID/LP grammars, and with various
proofs that (some) dependency grammars are equivalent to (some)
constituency or phrase-structure grammars.  However, there must have been
some good ideas in circulation much earlier, perhaps in connection
with early machine translation work. 

------------------------------

Date: Sat, 2 Jan 88 01:52:07 PST
From: quintus!ok@Sun.COM (Richard A. O'Keefe)
Subject: Task analysis

Has anyone done a task analysis for Prolog?

I'm picking the brains of someone who was incautious enough to let
his e-mail address appear in a message sent to comp.edu.  Amongst
other things, he wrote

        I carried this further in my introductory courses, producing an
        enumerated Basic Skill Set that explained what each student was
        expected to attain.  If, at any time, the student felt he/she
        didn't have any of the basic skills in any part of the course,
        it was up to them to come and get assistance.  We had found that
        students new to computing didn't have any firm idea of the level
        of detailed knowledge expected of them --- and didn't know how
        to find out.  We ended up with a "contract" document that
        contained all that and the student performance JUMPED!  I've
        seen a few texts that use this idea, too.

Rudolf Flesch, in "Why Johnny STILL Can't Read", has some similar remarks
about the value of a Task Analysis.

The generally excruciatingly bad quality of Prolog textbooks strongly
suggests that the people who wrote them didn't have an explicit task
analysis written down.  I've just started trying to sketch one out,
and it looks like a lot of hard work.  Of course, I have never done
a task analysis of anything before, which doesn't help.  Frankly, I'd
like references to task analyses for ANY programming language, or even
for word processing.

For the benefit of people who don't know what task analysis is all about,
the idea is that you work out what it is you want your students to be
able to do by the end of the course.  Then you ask yourself, ok, what
sub-tasks and skills does that require?  And so on recursively.  The
point is to break up the rather amorphous topic "understanding Prolog"
or whatever into a partially ordered set of subskills.  And not just any
subskills:  you want the small chunks to be such that you can test
whether or not a student understands each particular chunk.

For the sake of concreteness, I am starting with a single program
fragment.

/*  We are given a tree defined by two predicates:
        start(Start)              is the root of the tree
        children(Node, Children)  relates a node to a list of its children.
    We want to find nodes which satisfy a third given predicate:
        solution(Node)            is true of the nodes wanted and no others
*/

%   breadth_first(Answer)
%   is true if Answer is a node in the tree defined by start/1 and
%   children/2 and satisfies solution/1.  The searching method used
%   is simple breadth-first search.  That is, all the nodes at any
%   level of the tree will be checked before any of their children.

breadth_first(Answer) :-
        start(Start),
        breadth_star([Start], Answer),
        solution(Answer).

breadth_star([OpenNode|OpenNodes], Node) :-
        (   Node = OpenNode
        ;   children(OpenNode, Children),
            append(OpenNodes, Children, OpenNodes1),
            breadth_star(OpenNodes1, Node)
        ).
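
To have something the fragment can actually be run on, here is a small
made-up tree.  These definitions and the query are illustrative only
(they are not part of the original fragment); append/3 is assumed to be
available from the library or defined in the usual way.

%   An example tree:  a has children b and c, b has children d and e,
%   the remaining nodes are leaves, and e is the node we want.

start(a).

children(a, [b,c]).
children(b, [d,e]).
children(c, []).
children(d, []).
children(e, []).

solution(e).

%   The query and its answer:
%
%   ?- breadth_first(Answer).
%   Answer = e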

The point is, what do you have to know to understand this?

(1) English.
    It is important to be able to read English comfortably.
    It is not enough to recognise a few thousand words by sight and
    guess at the rest.  You have to be able to SEE the letters and
    punctuation marks and appreciate that spelling and punctuation
    matter.  You also have to have some glimmerings of discourse
    structure.  (E.g. the ordering background-comment,
    predicate-comment, top-level predicate, loop predicate is very
    far from being arbitrary!)

    Of course I could have written this using French comments and
    French identifiers, and then the first thing to understand would
    have been French.  The specific language doesn't matter.  But
    familiarity with an alphabetic writing system is important.
    The idea that words have structure is also important here.
    There are at least three regular morphological processes going
    on in the variable names here:
        OpenNode        has the form <Adjective><Type>
        OpenNodes       has the form <NounPhrase>+plural
        OpenNodes1      has the form <SequencePhrase>+ordinal
    You have to be able to SEE this kind of thing.

    {Digression: in a Prolog system which supports Kanji, the convention
    would presumably be to affix kanji signifying "sequence" or whatever
    and ordinal position, so this same pattern would be seen as phrase
    structure within variable names rather than morphological structure.
    Anyone care to comment?}

    At any rate, basic reading skills are needed here for three things:
        - the accompanying text
        - the comments
        - the Prolog code itself
    One of the teacher's tasks is to help the students transfer their
    existing reading skills to Prolog.  This means that we have to
    make explicit
        - character set
        - lexical structure
        - morphological rules
        - discourse rules

(2) Trees.
    I have long been of the opinion that high-school children should
    be explicitly taught "trees".  Trees show up all over the place;
    game trees, decision trees, phylogenetic trees, organisational
    charts.  I cannot think of any high-school subject I took where
    trees didn't appear.  (That specifically includes PhysEd.)  Why
    not spend a couple of periods being explicit about them?  It'd
    take a lot of the mystery out of algebra.

    Trees are needed in two ways in this example.
    The *problem* is about searching trees.  The idea of a generator
    (in the mathematical sense) is important here too:  we are not
    given the tree to be searched as an explicit data structure, but
    as a base (start/1) and generator (children/2).  In fact a really
    important skill you need for this kind of problem is to be able
    to shift between different representations of a tree and still
    understand that the same thing is going on.  (For example, we
    might be given root(Start) and arc(Parent, Child) predicates
    instead of start/1 and children/2, and it has to be clear that
    nothing fundamental has changed.)  Later examples in the set
    this is drawn from attach heuristic estimates to children and
    sort them in various ways.  Again it has to be understood that
    this doesn't change what the tree IS, only how we look at it.
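
    As a concrete sketch of that representation shift (added here for
    illustration; it is not part of the original text), the arc-based
    form can be mapped back onto start/1 and children/2 along the
    following lines, assuming root/1 and arc/2 are given and findall/3
    is available:

        start(Root) :-
                root(Root).

        children(Node, Children) :-
                findall(Child, arc(Node, Child), Children).

    Nothing about the search changes; only the way the tree is
    presented does.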

    Prolog data structures are trees.  In order to understand Prolog
    data structures, you have to understand trees first.  You have
    to grasp the fact that fred(1,2) is isomorphic to a tree with
    three nodes, and that 1 is isomorphic to a tree with one node,
    and that it makes just as much sense to ask for the label on
    the root of the tree 1 is isomorphic to (in other words, to
    call functor(1, Label, Arity)) as it does to ask for the label
    on the root of the tree fred(1,2) is isomorphic to (in other
    words, to call functor(fred(1,2), Label, Arity)).  Of course
    buggy Prolog systems which get functor(1, F, N) wrong don't
    exactly help here...
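
    For the record, those two calls ought to behave as follows in a
    Prolog that handles constants correctly (a constant being a term
    of arity 0):

        ?- functor(fred(1,2), Label, Arity).
        Label = fred, Arity = 2

        ?- functor(1, Label, Arity).
        Label = 1, Arity = 0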

    There is a third way in which trees are relevant, and it is the
    one that most Prolog books stress.  Namely, the execution tree.
    The Open University have a tree representation which is easily
    the best I have seen.  BUT this tree is totally irrelevant to
    the way I understand this fragment.

    This is an important point.  Keeping the wrong things out of a
    Task Analysis is just as important as getting the right things in.
    I cannot stress sufficiently that I do not find execution trees
    helpful in thinking about Prolog.  I have never in my life drawn
    a Prolog execution tree of any sort for my own benefit.  This
    particular fragment started life as
        start. .child.* .answer
    (an algebraic view), hence the name breadth_star/2.

(3) Basic logic.
    Relations, conjunction and disjunction.

(4) Lists and append.
    An important distinction in this particular example is that between
    sequences and sets.  Exploiting this is the key to deriving better
    searching methods.  The full context includes depth first search
    (swap the first two arguments of append/3) and it is important to
    grasp that this changes the sequence in which the tree is searched
    without changing the set of nodes which might be searched.
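
    For illustration, here is the depth-first variant just alluded to,
    obtained by swapping the first two arguments of append/3 (the name
    depth_star/2 is invented here; it is not from the original set):

        depth_star([OpenNode|OpenNodes], Node) :-
                (   Node = OpenNode
                ;   children(OpenNode, Children),
                    append(Children, OpenNodes, OpenNodes1),
                    depth_star(OpenNodes1, Node)
                ).

    The open list is now used as a stack rather than a queue:  the
    children go to the front, so a whole subtree is explored before
    its siblings, but the set of nodes that can be reached is
    unchanged.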

(5) Argument use.
    There is a conventional order of arguments in Prolog.
    If you don't know it (or if the programmer ignored it) you have a
    hard time reading Prolog code.  In this particular example, there
    was no freedom of choice whatsoever with respect to argument order,
    and it is important to understand why.
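
    As an added illustration (not part of the original), the usual
    convention is "inputs before outputs", and the fragment follows
    it: the known open list comes before the answer being sought, and
    the two known lists come before the result in append/3, whose
    standard definition reads naturally in that order:

        %   append(Front, Back, Whole):  Whole is Front followed by Back.

        append([], Ys, Ys).
        append([X|Xs], Ys, [X|Zs]) :-
                append(Xs, Ys, Zs).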

The trouble is, this is much too general, and there are gaping holes in
it even so.  For example, I've known 1st order PC since high-school days;
I can no more remember what it was like not to be able to read predicate
calculus than I can remember what it was like not to be able to read music.
I'm really having trouble trying to see the gaps, and trying to work out
what the subskills are.

I should make it clear that the goal is to teach people how to read
WELL-written Prolog programs.  I have seen some real shockers (even
one written by someone who used to be at Quintus, but isn't any more...),
and it isn't reasonable to expect ANYONE to be able to read badly written
programs.  I know there has been some work done with Pascal showing that
"expert" programmers were little better than novices at reading programs
that violated Pascal discourse conventions, and I believe the same is
true of Prolog.

So, does anyone have a Task Analysis for Prolog?

------------------------------

Date: Mon, 4 Jan 88 08:53:15 cst
From: overbeek@anl-mcs.ARPA (Overbeek)
Subject: Comments on "Computing with Logic" by Maier and Warren

Richard O'Keefe recently posted a review of

        Computing with Logic:  logic programming with Prolog
        David Maier/David S. Warren

While I have the greatest respect for Richard O'Keefe, I believe
that many of his comments were inappropriate for two reasons:

        1.  The book actually is very, very good.  Its main
            strengths are pedagogical, and Richard simply
            failed to understand or appreciate them.

        2.  Richard frequently exhibits a rather brutal style.
            He obviously prefers this type of interchange, where
            harsh statements do not carry the connotations that
            they do in ordinary discourse (i.e., statements of the
            sort that Richard makes would normally mean not only
            "you have made serious technical oversights" but also
            "and I think you are an idiot").  There is no right
            or wrong way to set the ground rules of such intercourse.
            However, it seems worth observing that Richard's tone
            is inhibiting the exchanges that might occur.  I certainly
            would not open myself up to one of Richard's caustic,
            public lashings under normal circumstances.  This time,
            I think that someone must respond.  The book is simply
            too good to allow a review like Richard's to go unanswered.
            On the other hand, I am tempted to use a pseudonym.

Why do I consider the book a useful contribution to a growing
literature?  A short response would be something like

    If you wanted to train young researchers in the field of logic
    programming, this book would be one of the best starting points.
    It offers a good initial introduction to the concepts of logic
    programming, the formal theory supporting the use of logic
    programming, and the implementation of logic programming.

In creating an introductory textbook, authors make a number of critical
decisions.  Perhaps the most important is the selection of material.
You must rank topics according to priority and make sure that the
most important get covered while the less fundamental do not.

The authors of this text chose to start their exposition with
propositional logic, from the point of view of logic programming.
They successfully conveyed most of the fundamental ideas of logic
programming, much of the essential theory supporting its use
(resolution, interpretations, models, satisfiability, etc.) in a highly
simplified context, and some insights into implementation.  All of
this is done rather painlessly in the early chapters of this book.
It represents a marvelous pedagogical achievement, in my opinion.

Then, they move on to predicate calculus without functions.  In this
simplified formalism, things like Herbrand's theorem can be made
easily accessible.  (I can just hear Richard: "Don't pander to the
idiots; give it to them straight so we can separate the real
students from those who are wasting the taxpayers' money!").  By
mixing informal discussions, the theory in graduated doses, and
a carefully motivated introduction to implementation, the book
successfully reduces the "intellectual cost" of acquiring the
background required to understand the more advanced issues.

Finally, the book goes on to cover the predicate calculus with
function symbols, Prolog, and some advanced topics in implementation.

Now, let me return to the point that an author must carefully select
material for an introductory text.  Failure to leave out complex but
less-than-fundamental topics is just as damaging as failure to include
important ones.  It is simply not wise to criticise such a text for
failing to cover garbage collection within the WAM; indeed, the authors
should be commended for restraining themselves.  They include an
incredible amount of material, most of which is presented beautifully.
This single book includes an easily understood introduction to
logic programming, Herbrand's theorem, and an introduction to the WAM.
Figuring out how to pull that off required a great deal of effort.
The result deserves applause, not the flippant criticism
offered by O'Keefe.

-- Ross Overbeek

------------------------------

End of PROLOG Digest
********************