[net.lang.prolog] PROLOG Digest V3 #40

PROLOG-REQUEST@SU-SCORE.ARPA (Chuck Restivo, The Moderator) (09/27/85)

PROLOG Digest            Friday, 27 Sep 1985       Volume 3 : Issue 40

Today's Topics:
               Implementation - Destructive Assignment,
                 LP Philosophy - Hewitt's Challenge,
                       LP Library - Benchmarks
----------------------------------------------------------------------

Date: 26 Sep 85 8:35:05 PDT
From: Deutsch.PA@Xerox.ARPA
Subject: Destructive Assignment

With regard to destructive assignment: I am constantly amazed at how
people who appear to know little about the last 20 years of
language/compiler implementation technology make statements about
"language X is intrinsically inefficient".  Even today, the amount of
compiler technology invested in Lisp is minuscule compared to Fortran.
Why??  I think it is because sub-field specialization in CS has gotten
to the point where almost no one in the Lisp/AI sub-field studies the
optimizing compiler sub-field.  Look at what David Warren's compilers
did for Prolog!

Specifically with regard to the assertion that "applicative languages
are horrendously inefficient because they have to copy all the time":
static analysis, not much more subtle than that performed by current
optimizing compilers, can identify patterns of sharing, and can result
in compiled code that implements the semantics of the source program
by updating in place in many cases.  Sometimes this can even be done
dynamically with reference counts -- the overhead of reference
counting is well documented at around 10% using deferred reference
counting for stack frames (also known as Deutsch/Bobrow reference
counting), climbing to around 50% if you count everything.
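To make the dynamic variant concrete, here is a minimal sketch (in Python for brevity; the Cell type and its explicit rc field are invented stand-ins for the reference counts a real runtime would maintain) of an applicative update that mutates in place exactly when the count shows the value is unshared, and copies otherwise:

```python
class Cell:
    """A heap object with an explicit reference count."""
    def __init__(self, items):
        self.items = items
        self.rc = 1

def share(cell):
    """Hand out another reference to the same cell."""
    cell.rc += 1
    return cell

def set_elem(cell, i, x):
    """Applicative update: semantically returns a cell whose i-th
    item is x.  Unshared argument: update in place.  Shared
    argument: copy, so other holders still see the old value."""
    if cell.rc == 1:
        cell.items[i] = x       # no other holder can observe this
        return cell
    cell.rc -= 1                # this reference is consumed
    return Cell(cell.items[:i] + [x] + cell.items[i+1:])
```

The source program never says "update in place"; the refcount check recovers that opportunity at run time, just as the static analysis recovers it at compile time.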

There is a real chicken-and-egg problem here: people think
applicative languages are inefficient, so they don't take them
seriously, so the work doesn't get done that would produce the
efficient implementations that would lead people to take them
seriously!

------------------------------

Date: Wed 25 Sep 85 11:22:32-PDT
From: Fernando Pereira <PEREIRA%sri-candide@sri-marvin>
Subject: Prolog vs. Lisp (again!)

I humbly confess my incompetence at the debating style Carl
Hewitt is using in the Prolog vs. Lisp discussions, which
seems to consist in ignoring the FACTUAL points of other
contributions and just continuing to repeat the same OPINIONS.

It is a FACT that no practical Prolog system is written entirely
in Lisp: Common, Inter or any other. Fast Prolog systems have
been written for Lisp machines (Symbolics, Xerox, LMI) but their
performance depends crucially on major microcode support (so
much so that the Symbolics implementation, for example, requires
additional microstore hardware to run Prolog). The reason for
this is simple: No Lisp (nor C, for that matter...) provides the
low-level tagged-pointer and stack operations that are critical
to Prolog performance.

The fact that Lisp is not good enough as an implementation
language for Prolog should not be considered as a weakness of Lisp,
BECAUSE Lisp was not designed for such low-level operations in the
first place. In fact, NO ``high-level'' programming language that I
know of provides those kinds of operations, and for the very simple
reason that, being high-level languages, they have no business in
exploring the recesses of particular machine architectures. ALL
fast Prolog systems that I have seen (some of which I helped
implement) rely on a careful exploitation of the underlying machine
architecture.

By the same argument, the fact that Lisp cannot be efficiently
implemented in Prolog cannot be the basis of a valid criticism of
Prolog. Prolog is not a systems programming language, and in any
case a good Lisp implementation must be carefully coupled to the
underlying machine architecture -- so much so that the fastest Lisps
rely on specialized architectures!

It seems clear to me that no single existing programming language
can be said to provide a ``foundation'' for AI. In fact, the very
notion of a programming language providing a foundation for a
scientific subject seems to me rather misguided. Does Fortran
provide a ``foundation'' for physics?  The relation between AI
problems, formal descriptions and programming concepts is far too
subtle for us to expect a ``foundation'' for AI in a mere
programming language.

The crusading tone of Hewitt's comments is also rather
unsettling.  AI researchers will use whatever language they
feel most comfortable with for the problem they are working
on, without need for any guidance from on high as to the
ultimate suitability of that language. If more researchers use
Prolog, is that a threat to Lisp users? If I do a piece of AI
research using Prolog, will it not be judged according to its
content, independently of programming language?

That kind of battle might be very important for AI software
companies, but surely we should not let marketing hype get in
the way of our research. I am sitting at a Sun workstation typing
this, with a Prolog window just to the right. Will my research be
useless to someone who sits at a Xerox 1109? If I walk down the
corridor and write a Lisp program on a Symbolics machine (as I
have done and surely will continue to do), will THAT work have
a different value? If I decide to use Hoare's CSP for the
navigation component of our robot Flakey, will I be then outside
AI, because I am not using an ``official'' AI language?

With respect to Rick McGeer's points: there are some elegant ways
of compiling Prolog into Lisp, particularly into the statically
scoped varieties (or into other statically-scoped languages such as
Algol-68...). I have reason to believe that a compiler along these
lines would produce code considerably faster than the 100 LIPS he
reports, even though still much slower than what is attainable with
a lower-level implementation.

The idea has been around in the ``folklore'' for a long time, I do
not know to whom to attribute it. Basically, given predicate
definitions like

        p :- q, r.
        p :- s.

        q.

we get the code

        ;; Each predicate takes a success continuation.  Failure is
        ;; simply returning without calling it, so the two clauses
        ;; of p are tried in sequence.
        (defun p (continuation)
           (q (function (lambda () (r continuation))))
           (s continuation))

        (defun q (continuation)
           (funcall continuation))

This is the basic control skeleton; extra code is needed for
unification. This technique can be made more complicated (and
more efficient...) in a number of ways, and it will be somewhat
more space-efficient on a tail-recursive Lisp.
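For readers who prefer to see the control skeleton run, here is the same folklore scheme sketched in Python (the definitions of r and s are invented for the illustration: r has no clauses and therefore fails, while s is assumed to be a fact):

```python
# Each predicate takes a success continuation k; failure is just
# returning without calling it, exactly as in the Lisp skeleton.

def p(k):
    q(lambda: r(k))     # clause 1:  p :- q, r.
    s(k)                # clause 2:  p :- s.

def q(k):
    k()                 # fact:  q.

def r(k):
    pass                # no clauses for r: fail by returning

def s(k):
    k()                 # assumed fact:  s.

solutions = []
p(lambda: solutions.append("yes"))   # one "yes" per solution of p
```

Here the first clause of p fails (q succeeds, but its continuation calls r, which has no clauses), so the single solution comes through the second clause.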

As to the speed of Franz, if all the appropriate compilation
flags are used, an append function written in Lisp runs at
around 25K calls/sec. on a VAX 780, if I remember correctly.
The lower speed he listed is the result of using the (default)
slow function-calling protocol that allows breaks and stack
backtraces. As a comparison, one of the commercial compiled
Prologs does 23K calls/sec. on the equivalent test.

-- Fernando Pereira

------------------------------

Date: Wed 25 Sep 85 11:34:13-PDT
From: Fernando Pereira <PEREIRA%sri-candide@sri-marvin>
Subject: My original rebuttal of Hewitt's flame

Some combination of Carl's quoting and editing has made it impossible
to distinguish in his message what I said and his comments on
what I said. Here is my original note, which I sent only to
the AIList, where the discussion had migrated from COG-SCI
(which I don't read, time being limited...)

Carl Hewitt's message is based on several misconceptions:

1. (the least interesting one) All the so-called commercially
viable Prolog systems in Lisp are not really Prolog systems
written IN Lisp, but rather Prolog systems written FOR Lisp
machines. Or is it that a microcode sublanguage or Lisp machine
pointer-smashing operations are part of Lisp as we know it? Without
those machine-level operations, those Prolog systems would run too
slowly and use too much memory to be useful for serious Prolog
programming. From the Prolog implementation point of view, what
is important about the Lisp machines is not that they run Lisp,
but that they can be microcoded and have good support for tagged
data types and stack operations.

2. If the decisions (actions) of a system are not determined by
its inputs, the system is nondeterministic. Nondeterminism in a
system can be either an artifact of our incomplete knowledge (or
lack of interest) of the detailed operation of the system; or it
can be ``real physical'' nondeterminism. It would take us too far
to discuss whether the second kind of nondeterminism is ``real''
or also an artifact. In any case, most uses of nondeterminism,
say in models of concurrency, are of the first kind, and can be
expressed appropriately in various temporal/dynamic logics.
Admittedly, these are not Prolog, but then Common Lisp is not Lisp
1.5! (Prolog is 13 years old, Lisp 25).

3. The first logic course dictum ``from a contradiction one can
conclude anything'' is getting in the way. Notice that the dictum
says ``can'', not ``must''. There is an enormous difference between
things that are in principle true and things that an agent knows to
be true in a way that can affect its action. An agent might know
``p'' and ``not p'', but it might well never come to infer the
dreaded totally unrelated ``q'' which IN PRINCIPLE follows from
the contradiction. This inference might not happen either because
of inference control mechanisms of the agent (e.g., it uses the
set-of-support strategy) or because the agent's logic is just TOO WEAK
to conclude anything from a contradiction (vide Hector Levesque's
work in the proceedings of the last AAAI). In any case, Horn clauses
(the basis of Prolog) are too weak to represent contradictions... :-)
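Point 3 can be made concrete with a toy forward chainer over propositional Horn clauses (all atom and rule names below are invented for the illustration): since ``not_p'' is just another atom to such an agent, holding both ``p'' and ``not_p'' never leads it to derive an unrelated ``q''.

```python
def forward_chain(facts, rules):
    """Compute the closure of propositional Horn rules, each given
    as (body_atoms, head_atom), over a set of atomic facts."""
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if head not in known and all(a in known for a in body):
                known.add(head)
                changed = True
    return known

# ``not_p'' is an ordinary atom, not a negation the chainer can use,
# so the ``contradiction'' is inert: nothing unrelated follows.
known = forward_chain({"p", "not_p"}, [(("p", "rain"), "wet")])
```

The agent's logic is simply too weak to notice the contradiction, let alone conclude everything from it.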

4. The question of whether ``taking action'' fits in a logic paradigm
tends to be answered negatively after an hour's worth of
consideration.  If you persist for several years, though, this
question becomes a source of insight on the relations between
knowledge, state and action that is not available to those who started
by dismissing the question after that initial hour. There is just too
much work on logics of knowledge and action in AI and computer science
for me to try to discuss it here. Some of this work has been applied
to logic programming, either in the form of new logic programming
languages based on temporal or dynamic logics or in representations of
temporal reasoning and decision in, say, Prolog.

5. It is curious to see someone by implication defend Lisp as a
language for expressing the taking of action! We know by now that the
most difficult issue in ``reactive systems'' is not EXPRESSING action,
but rather keeping it under control to prevent unwanted interactions.
In this area, less is REALLY more, and highly complex languages like
Lisp are just not suitable for the formal reasoning about programs
that is needed to help us believe our programs do what we intend. To
those who say ``...but we can keep to a carefully constrained subset
of Lisp, use only message passing for interactions,...'' I will answer
that the history of all large Lisp programs I know (and I think that
is a representative sample) is littered with the brutalized corpses of
constrained programming styles. Anyone who has looked at the current
flavor mechanism in Zetalisp and its use in the window system will
know what I mean...

6. Underlying Carl Hewitt's misconceptions is the old chestnut ``logic
is static, systems are dynamic''. Any language, be it first-order
logic or Lisp, is static; it is its USE which is dynamic (changes the
state of communicating agents). A good analogy here is the use of
differential equations to model dynamic systems in classical
mechanics. The differential equations themselves do not say directly
what happens when (they are ``static'' in Hewitt's jargon). It is
their SOLUTIONS that tell us the sequence of events. Even the
solutions are given as static objects (functions from an open interval
of the reals to some space). Does anyone worry that such equations do
not ``really'' describe the dynamic behavior of a system? Is it not
possible to combine such ``static'' entities with an incremental
solution procedure to build systems that actually control their
(classical mechanical) environment?
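The analogy can be spelled out in a few lines: the ``static'' equation dx/dt = -kx, combined with an incremental solution procedure (Euler's method below, with an arbitrary illustrative k and step size), yields a process that evolves state step by step -- which is all a controller needs.

```python
def euler(f, x0, dt, steps):
    """Incrementally solve dx/dt = f(x) starting from x0,
    returning the trajectory of successive states."""
    xs = [x0]
    for _ in range(steps):
        xs.append(xs[-1] + dt * f(xs[-1]))
    return xs

# The equation is static; the trajectory -- its use -- is dynamic.
trajectory = euler(lambda x: -0.5 * x, 1.0, 0.1, 50)
```

Nothing in the text of the equation ``happens''; the sequence of events appears only when the static description is driven by the incremental procedure.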

-- Fernando Pereira

------------------------------

Date: Tue, 24 Sep 85 09:43:30 edt
From: "Bruce T. Smith" <bts%mcnc.csnet@CSNET-RELAY>
Subject: A set of PROLOG benchmarks...

I received the following file late in the Summer, with permission
to post it to the net.  It's a set of Prolog benchmarks, by folks
at Tektronix and Portland State University.  The times included
are for C-Prolog on a Tektronix Magnolia-- as I understand it, a
predecessor of the 4404.

I've run them-- with minor changes-- with Quintus Prolog, on a VAX
785 at MCNC, and with C-Prolog, UNSW Prolog and MU-Prolog at UNC
Chapel Hill, also on a 785. (Both running 4.2bsd UNIX.) For those
folks with other Prologs or other machines, I will volunteer to
collect the results you mail and post a summary to the net later
in the Fall.

I'd also like to hear comments on the benchmarks themselves.  For
that, a discussion on the net is more appropriate.

[ I've left this file in the <Prolog> directory as:

                  TEKTRONIX_PSU-BENCHMARK.PL  -ed ]

------------------------------

End of PROLOG Digest
********************