broome@brl-bmd@sri-unix.UUCP (08/19/83)
From: Paul Broome (CTAB) <broome@brl-bmd>
Stan,
Let me climb into my pulpit and respond to your FP/AI prod. I don't
think FP and AI are diametrically opposed. To refresh everyone's
memory, here are some of your comments.
    ... Having worked with both AI and FP languages,
    it seems to me that the two are diametrically
    opposed to one another. The ultimate goal of functional
    programming language research is to produce a language that
    is as clean and free of side effects as possible; one whose
    semantic definition fits on a single side of an 8 1/2 x 11
    sheet of paper ...
Looking at Backus' Turing award lecture, I'd have to say that
cleanliness and freedom of side effects are two of Backus' goals but
certainly not succinctness of definition. In fact Backus says (CACM,
Aug. 78, p. 620), "Our intention is to provide FP systems with widely
useful and powerful primitive functions rather than weak ones that
could then be used to define useful ones."
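Backus's canonical illustration of such powerful primitives is his inner-product program, built entirely from functional forms. Here is a rough Python gloss of that example; the names `compose`, `apply_to_all`, and `insert` are my renderings of his composition, ApplyToAll, and Insert forms, not actual FP syntax:

```python
from functools import reduce

# Functional forms in the style of Backus's FP, sketched as
# higher-order Python functions (my glosses, not FP notation).
def compose(f, g):
    """Composition: compose(f, g)(x) == f(g(x))."""
    return lambda x: f(g(x))

def apply_to_all(f):
    """Backus's ApplyToAll: apply f to every element of a sequence."""
    return lambda xs: [f(x) for x in xs]

def insert(f):
    """Backus's Insert: fold a binary function through a sequence."""
    return lambda xs: reduce(f, xs)

transpose = lambda pair: list(zip(*pair))
add = lambda x, y: x + y
mult = lambda p: p[0] * p[1]

# Inner product as (Insert +) o (ApplyToAll x) o Transpose
inner_product = compose(insert(add), compose(apply_to_all(mult), transpose))

print(inner_product([[1, 2, 3], [6, 5, 4]]))  # 1*6 + 2*5 + 3*4 = 28
```

The point of the example is that the program is assembled from program-forming operations alone; there are no named variables to collide and no statements to sequence.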
Although FP has no side effects, Backus also talked about applicative
state transition (AST) systems, with one top-level change of state per
computation, i.e. one side effect. The world of expressions is a
nice, orderly one; the world of statements has all the mush. He's
trying to move the statement part out of the way.
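The flavor of an AST system can be sketched in a few lines. In this toy of my own devising (not Backus's notation), the transition function is purely applicative, and the single top-level rebinding of `state` is the only statement-like act in each computation:

```python
# A toy applicative state transition loop: the transition function is
# pure, and the one top-level rebinding of `state` is the only
# "statement" in the system.
def step(inp, state):
    """Pure transition: returns (output, new_state), mutates nothing."""
    total = state["total"] + inp
    return total, {"total": total}

state = {"total": 0}
outputs = []
for inp in [3, 4, 5]:
    out, state = step(inp, state)   # the single state change per computation
    outputs.append(out)

print(outputs)  # [3, 7, 12]
```

All the mushy statement machinery is confined to that one rebinding; everything else stays in the orderly world of expressions.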
I'd have to say one important part of the research in FP systems is to
define and examine functional forms (program forming operations) with
nice mathematical properties. A good way to incorporate (read
implement) a mathematical concept in a computer program is without
side effects. This side effect freeness is nice because it means that
a program is 'referentially transparent', i.e. it can be used without
concern about collision with internal names or memory locations AND
the program is dependable; it always does the same thing.
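To make the contrast concrete, here is a small Python sketch (my own example) of a referentially transparent function next to one with a hidden side effect:

```python
# A pure function: its result depends only on its argument, so any call
# can be replaced by its value, memoized, reordered, or duplicated.
def square(x):
    return x * x

# An impure one: hidden state makes repeated calls disagree, so the
# program's meaning depends on evaluation order and call count.
_count = [0]
def stamped_square(x):
    _count[0] += 1                  # the side effect
    return x * x + _count[0]

assert square(3) == square(3)                   # always 9: dependable
assert stamped_square(3) != stamped_square(3)   # 10, then 11: not transparent
```

The pure version is the one a transformation system (or a reader) can reason about by substitution alone.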
A second nice thing about applicative languages is that they are
appropriate for parallel execution. In a shared memory model of
computation (e.g. Ada) it's very difficult (NP-complete, see CACM, a
couple of months ago) to tell if there is collision between
processors, i.e. is a processor overwriting data that another
processor needs.
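Freedom from shared mutable state is exactly what makes the parallel case trivial. A minimal Python sketch: because `f` below is pure, its applications can be scheduled in any order, or concurrently, and no collision analysis is needed:

```python
from concurrent.futures import ThreadPoolExecutor

# A pure function: no shared mutable state, so its applications can run
# in parallel without any analysis of processor collisions.
def f(x):
    return x * x + 1

xs = list(range(8))
sequential = [f(x) for x in xs]
with ThreadPoolExecutor(max_workers=4) as pool:
    parallel = list(pool.map(f, xs))

assert parallel == sequential   # same answer regardless of scheduling
```

Contrast that with the shared-memory setting, where deciding whether one processor overwrites data another needs is the hard part.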
    On the other hand, the goal of AI research (at least in the
    AI language area) is to produce languages that can effectively
    work with as tangled and complicated representations of
    knowledge as possible. Languages for semantic nets, frames,
    production systems, etc, all have this character.
I don't think that's the goal of AI research but I can't offer a
better one at the moment. (Sometimes it looks as if the goal is to
make money.)
Large, tangled structures can be handled in applicative systems, but
not efficiently; at least I don't see how. If you view a database
update as a function mapping the pair (NewData, OldDatabase) into
NewDatabase you have to expect a new database as the returned value.
Conceptually that's not a problem. However, operationally there
should just be a minor modification of the original database when
there is no sharing and suspended modification when the database is
being shared. There are limited transformations that can help but
there is much room for improvement.
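The functional view of an update can be sketched in Python (my own toy, standing in for a real persistent data structure): the function returns a new database, but the unchanged parts are shared with the old one, so the operational cost is proportional to what changed, not to the whole database:

```python
# A purely functional "database update": returns a new database value,
# sharing all unchanged sub-structure with the old one.
def update(db, key, value):
    new = dict(db)      # copy one level only, not the whole tree
    new[key] = value
    return new

old = {"users": {"alice": 1}, "logs": ["boot"]}
new = update(old, "users", {"alice": 1, "bob": 2})

assert old["users"] == {"alice": 1}    # the old version is still intact
assert new["logs"] is old["logs"]      # the unchanged part is shared
```

When nothing else holds a reference to the old version, an implementation is free to do the minor in-place modification instead; when the old version is shared, the copy-on-write behavior above is what you want.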
An important point in all this is program transformation. As we build
bigger and smarter systems we widen the gap between the way we think
and the hardware. We need to write clear, easy to understand, and
large-chunked programs but transform them (within the same source
language) into possibly less clear, but more efficient programs.
Program transformation is much easier when there are no side effects.
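A tiny example of the kind of transformation I mean (in Python, my own illustration): the clear program makes two passes over its input, and the transformed one fuses them into a single pass. Because neither has side effects, the two are provably equivalent, and the transformation is safe:

```python
# Source-to-source transformation: fuse two passes into one.
# With no side effects, the two versions are provably the same function.
def clear_version(xs):
    doubled = [x * 2 for x in xs]       # pass 1: clear, large-chunked
    return [x + 1 for x in doubled]     # pass 2

def fused_version(xs):
    return [x * 2 + 1 for x in xs]      # one pass: less clear, faster

xs = list(range(100))
assert clear_version(xs) == fused_version(xs)
```

Try the same fusion when one of the passes increments a global counter and the equivalence, and hence the transformation, falls apart.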
    Now between the Japanese 5th generation project (and the US
    response) and the various projects to build non-von Neumann
    machines using FP, it looks to me like the seeds of a
    controversy over the best way to do programming. Should we be
    using FP languages or AI languages? We can't have it both ways,
    right? Or can we?
A central issue is efficiency. The first FORTRAN compiler was viewed
with the same distrust that the public had about computers in general.
Early programmers didn't want to relinquish explicit management of
registers or whatever because they didn't think the compiler could do
as well as they could. Later there was skepticism about garbage collection
and memory management. A multitude of sins is committed in the name
of (machine) efficiency at the expense of people efficiency. We
should concern ourselves more with WHAT objects are stored than with
HOW they are stored.
There's no doubt that applicative languages are applicable. The
Japanese (fortunately for them) are less affected by, as Dijkstra puts
it, "our antimathematical age." And they, unlike us, are willing to
sacrifice some short term goals for long term goals.
- Paul Broome
(broome@brl)