[comp.lang.prolog] Simple decomposition and PROLOG

lagache@violet.berkeley.edu (Edouard Lagache) (03/01/88)

     One major difference between PROLOG and LISP is that LISP really takes
     modularity to extremes.  There are very few "tasks" in LISP that you
     might want to do for which someone hasn't already written a function,
     and a healthy fraction of those functions are already in the standard
     library of most "modern" LISPs.  That allows a certain advantage in
     programming, in that one can decompose a problem into "simple blocks",
     many of which are already lying around.  The idea of simple
     decomposition is one advocated by my boss, Professor Michael Clancy (of
     Oh! Pascal! fame).  Basically the idea is that problems should be
     broken down into sub-components that are as simple and "atomic" as
     possible.  To take a silly example, in procedural programming it is
     better to have separate procedures for input, processing, and output
     than to merge, say, processing and output into one procedure.
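
     As a minimal sketch of that style in Prolog (the predicate names and
     the squaring step are just placeholders of my own invention):

          main :- read_input(N), process(N,R), write_output(R).

          read_input(N) :- read(N).             /* input only */
          process(N,R) :- R is N * N.           /* pure computation only */
          write_output(R) :- write(R), nl.      /* output only */

     Each piece can be understood, tested, and replaced on its own.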

     From the standpoint of a programmer's cognitive processes, there is
     some value in this approach, in that it allows for more effective
     isolation of program components from each other.  This is of
     particular value in learning how to program, and in fact there is some
     research on the "evils" of merging related programming tasks for
     beginning programmers.

     So what does this have to do with PROLOG?  Well, in PROLOG one often
     finds value in merging the separate operations of a predicate to
     improve efficiency.  For example, let's look at Clocksin and Mellish's
     two codings of reverse.  The first example uses append:

          reverse1([],[]).    /* Base case */
          /* recursive case: append first element to end of reversed */
          reverse1([H|T],L) :- reverse1(T,Z), append(Z,[H],L). /* tail */

     The second example uses an internal routine to do the appending to
     speed up the operation of reverse:

          reverse2(L1,L2) :- revzap(L1,[],L2).    /* create "holder" list */
          /* Remove element off one list, add it to reversed list */
          /* since it starts empty, it will "fill up" in reverse order */
          revzap([X|L],L2,L3) :- revzap(L,[X|L2],L3).
          revzap([],L,L).     /* When first element list is empty, then */
                              /* copy "holder" list to return reverse */
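
     Both versions compute the same relation, of course; for example:

          ?- reverse2([a,b,c], R).
          R = [c,b,a]

     The difference is cost: reverse1 re-walks the growing result list on
     every step, while revzap makes a single pass down the input.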

     Well, I suppose PROLOG hackers actually prefer the second version, but
     since both versions really use the concept of appending, I would think
     that the second example obscures the meaning of the code.  Why?
     Because one has to "read in" the append from the code and the
     comments.  In contrast, the first example uses the word append, which
     immediately conveys the process that is intended.

     I suppose there are two responses to this: one is to say "so what,
     PROLOG programming isn't for sissies!  Especially when one is striving
     for optimal code"; the other might be "PROLOG isn't like procedural
     languages, thus techniques to improve the readability of procedural
     programs are not appropriate (and are perhaps even harmful) to the
     expression of PROLOG code".  I will let the net argue these points for
     a while.  However, I will say this: LISP is clearly well suited to a
     "simple decomposition" view of programming.  For PROLOG to compete, it
     will either have to provide enough efficient primitives to permit
     reasonably simple decomposition of problems (even if such solutions are
     not as optimal), or it will have to force programmers to de-emphasize
     the value of simple decomposition as a paradigm for good programming
     style.  I am not saying it can't be done, but it won't be easy,
     particularly when it comes to training the PROLOG programmers of the
     future (without which this language will not survive).

     So everyone in PROLOG land - flame away! (if it wasn't for my stupid
     comments, you guys would have nothing to talk about!)

                                                             Edouard Lagache

     P.S. Yes I wrote this note at a God-awful hour of the night, so please
          forgive any abuse of the English language (never mind the concepts
          I was trying to convey).

jws@mulga.oz (Jeff Schultz) (03/03/88)

Two brief points about

>          reverse1([],[]).    /* Base case */
>          /* recursive case: append first element to end of reversed */
>          reverse1([H|T],L) :- reverse1(T,Z), append(Z,[H],L). /* tail */

>          reverse2(L1,L2) :- revzap(L1,[],L2).    /* create "holder" list */
>          /* Remove element off one list, add it to reversed list */
>          /* since it starts empty, it will "fill up" in reverse order */
>          revzap([X|L],L2,L3) :- revzap(L,[X|L2],L3).
>          revzap([],L,L).     /* When first element list is empty, then */
>                              /* copy "holder" list to return reverse */

Just replacing append with nconc or whatever isn't going to change the
time complexity of reverse1: nconc still has to walk the whole of its
first argument on every step, so the quadratic number of operations
remains, though languages that have already paid for the ability to do
destructive modifications should run faster with it.  Of course, the
space complexity will change, since nothing is copied.

But, if you want backtrackable destructive modification, you need a
rather more complicated trail which will take up twice the space and
time.  You may not gain anything unless destructive modification is a
significant part of your program.

I'd be surprised to find a LISP programmer writing something similar
to either reverse1 or reverse2 -- with or without rplacd.  A do loop
producing code essentially equivalent to reverse2 seems more likely.
Now, there is nothing except syntax preventing this from being done
for Prolog, provided that it expands to Prolog code in some
intelligible way, but there doesn't seem to be much to save besides
the dummy function name.
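
To make that concrete, a purely hypothetical loop notation (none of
this syntax exists in any Prolog I know of) might expand to the usual
accumulator pair, with a generated name standing in for revzap:

     /* expansion of a loop over L1 with accumulator starting at [];
        'loop$1' is the compiler-generated dummy name */
     reverse2(L1,L2) :- 'loop$1'(L1,[],L2).
     'loop$1'([X|L],Acc,L3) :- 'loop$1'(L,[X|Acc],L3).
     'loop$1'([],Acc,Acc).

That is reverse2 all over again; the notation only saves you from
inventing the name.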

cdsm@ivax.doc.ic.ac.uk (Chris Moss) (03/03/88)

In article <7290@agate.BERKELEY.EDU> lagache@violet.berkeley.edu (Edouard Lagache) writes:
>     So what does this have to do with PROLOG?  Well, in PROLOG one often
>     finds value in merging the separate operations of a predicate to
>     improve efficiency.  For example, let's look at Clocksin and Mellish's
>     two codings of reverse.

...rev/2 and rev/3 example

>  However, I will say this: LISP is clearly well suited to a
>     "simple decomposition" view of programming.  For PROLOG to compete, it
>     will either have to provide enough efficient primitives to permit
>     reasonably simple decomposition of problems (even if such solutions are
>     not as optimal), or it will have to force programmers to de-emphasize
>     the value of simple decomposition as a paradigm for good programming
>     style.

Forgive me, but I don't see what this example has to do with the
"simple decomposition" view of programming (it doesn't sound like a
new idea to me :-).  The optimisation of naive reverse (an order
n-squared procedure) into the normal reverse (order n) is a
straightforward example of program transformation in the interests of
efficiency.  Another of the standard transformations is the merging
of two loop operations into a single loop, though as it happens this
isn't an example of that.
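
Roughly, the transformation introduces an accumulator.  Define (this
is just the textbook fold/unfold derivation, nothing new):

     revzap(Xs,A,Ys) :- reverse1(Xs,Zs), append(Zs,A,Ys).

Unfolding the two clauses of reverse1, and using the associativity of
append (doing append(Zs,[H],L) then append(L,A,Ys) is the same as
doing append(Zs,[H|A],Ys)), gives exactly

     revzap([],A,A).
     revzap([H|T],A,Ys) :- revzap(T,[H|A],Ys).

with reverse(L1,L2) :- revzap(L1,[],L2), since appending [] changes
nothing.  The repeated appends, and with them the n-squared
behaviour, have been folded away.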

Now there are lots of researchers beavering away round the world to
produce systems that do this automatically. What they ARE agreed
about is that it's much easier to do all the hard work in a pure
declarative language. Whether that is pure Lisp or pure Prolog
doesn't seem nearly so important.  For the record, there are people
(e.g. Chris Hogger at Imperial) who are doing this in Prolog.

Chris Moss.