[mod.ai] Against the Tide of Common LISP

jjacobs@lll-lcc.ARPA@well.UUCP (02/11/87)

        "Against the Tide of Common LISP"

        Copyright (c) 1986, Jeffrey M. Jacobs, CONSART Systems Inc.,
        P.O. Box 3016, Manhattan Beach, CA 90266 (213)376-3802
        Bix ID: jeffjacobs, CIS Userid 75076,2603
        
        Reproduction by electronic means is permitted, provided that it is not
for commercial gain, and that this copyright notice remains intact.

The following are from various correspondences and notes on Common LISP:

Since you were brave enough to ask about Common Lisp, sit down for my answer:

I think CL is the WORST thing that could possibly happen to LISP.  In fact, I
consider it a language different from "true" LISP.  CL has everything in the
world in it, usually in 3 different forms and 4 different flavors, with 6
different options.  I think the only thing they left out was FEXPRs...

It is obviously intended to be a "compiled" language, not an interpreted
language. By nature it will be very slow; somebody would have to spend quite a
bit of time and $ to make a "fast" interpreted version (say for a VAX).  The
grotesque complexity and plethora of data types presents incredible problems to
the developer; it was several years before Gold Hill had lexical scoping,
and NIL from MIT DOES NOT HAVE A GARBAGE COLLECTOR!!!!
It just eventually eats up its entire VAX/VMS virtual memory and dies...

Further, there are inconsistencies and flat-out errors in the book.  Too many
things are left vague, poorly defined, or "up to the developer".

The entire INTERLISP arena is left out of the range of compatibility.

As a last shot: most of the fancy Expert Systems (KEE, ART) are implemented in
Common LISP.  Once again we hear that LISP is "too slow" for such things, when
a large part of the problem is the use of Common LISP as opposed to a "faster"
form (e.g. one with shallow dynamic binding and simpler LAMBDA variables; they
should have left &aux, etc., as macros).  Every operation in CL is very
expensive in terms of CPU...


______________________________________________________________

I forgot to mention that I do NOT like lexical scoping in LISP; to
allow both dynamic and lexical makes the performance even worse.  To me,
lexical scoping was and should be a compiler OPTIMIZATION, not an inherent
part of the language semantics.  I can accept SCHEME, where you always know
that it's lexical, but CL could drive you crazy (especially if you were 
testing/debugging other people's code).
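
To see why it could drive you crazy, consider what a DEFVAR buried in
somebody else's file does to an innocent LET (a toy sketch; the names are
made up):

    (DEFUN PEEK () X)             ; free reference to X

    (LET ((X 1)) (PEEK))          ; X is LEXICAL here: PEEK can't see it,
                                  ; so this is an unbound-variable error
    (DEFVAR X 0)                  ; now X is pervasively SPECIAL
    (LET ((X 1)) (PEEK))          ; the SAME code now returns 1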

This whole phenomenon is called "Techno-dazzle"; i.e. look at the
super-duper complex system I can build that will do everything.  Who cares
if it's incredibly difficult and costly to build and understand, and that
most of the features will only get used because "they are there", driving
up the CPU usage and making the whole development process more costly...

BTW, I think the book is poorly written and assumes a great deal of knowledge
about LISP, and MACLISP in particular.  I wouldn't give it to ANYBODY to learn
LISP.

...Not only does he assume you know a lot about LISP, he assumes you know a
LOT about half the other existing implementations to boot.

I am inclined to doubt that it is possible to write a good introductory text on
Common LISP;  you d**n near need to understand ALL of it before you can start
to use it. There is nowhere near the basic underlying set of primitives (or
philosophy) to start with, as there is in Real LISP (RL vs CL).  You'll notice
that there is almost NO defining of functions using LISP in the Steele book.
Yet one of the best things about Real LISP is the precise definition of a
function!

Even when using Common LISP (NIL), I deliberately use a subset.  I'm always
amazed when I pick up the book; I always find something that makes me curse.
Friday I was in a bookstore and saw a new LISP book ("Looking at LISP", I
think; the author's name escapes me).  The author uses SETF instead of SETQ,
stating that SETF will eventually replace SETQ and SET (!!).  Thinking that
this was an error, I checked in Steele; lo and behold, 'tis true (sort of).
In 2 2/3 pages devoted to SETF, there is >> 1 << line at the very bottom
of page 94!  And it isn't even clear; if the variable is lexically bound AND
dynamically bound, which gets changed (or is it BOTH)?  Who knows? 
Where is the definitive reference?
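
For what it's worth, SETF of a plain variable macro-expands into SETQ, so
it changes whichever binding SETQ would see; but you have to dig that out
of the book yourself.  A sketch (names made up):

    (DEFVAR *Y* 1)                ; dynamically bound
    (DEFUN MUDDLE ()
      (LET ((Y 2))                ; lexically bound
        (SETF Y 3)                ; -> (SETQ Y 3), hits the lexical Y
        (SETF *Y* 4)              ; -> (SETQ *Y* 4), hits the dynamic *Y*
        (LIST Y *Y*)))            ; => (3 4)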

"For consistency, it is legal to write (SETF)"; (a) in my book, that should be
an error, (b) if it's not an error, why isn't there a definition using the
appropriate & keywords?  Consistency?  Generating an "insufficient args"
error seems more consistent to me...

Care to explain this to a "beginner"?  Not to mention that SETF is a
MACRO, by definition, which will always take longer to evaluate.

Then try explaining why SET only affects dynamic bindings (a most glaring
error, in my opinion).  Again, how many years of training, understanding
and textbooks are suddenly rendered obsolete?  How many books say
(SETQ X Y) is a convenient form of (SET (QUOTE X) Y)?  Probably all
but two...
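
In CL, textbook code like this silently changes meaning (a toy example):

    (DEFUN OLD-STYLE ()
      (LET ((Z 1))                  ; Z is LEXICAL in CL
        (SET 'Z 99)                 ; changes only the dynamic value cell
        (LIST Z (SYMBOL-VALUE 'Z))))

    (OLD-STYLE)                     ; => (1 99) in Common LISP; a
                                    ; dynamically scoped LISP gives (99 99)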

Then try to introduce them to DEFVAR, which may or may not get
evaluated who knows when!  (And which often isn't implemented correctly,
e.g. in Franz Common and Gold Hill.)
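
A toy demonstration of DEFVAR's evaluation rule:

    (DEFVAR *COUNT* 0)      ; *COUNT* was unbound: 0 is evaluated, assigned
    (DEFVAR *COUNT* 999)    ; already bound: 999 is NOT evaluated or assigned
    *COUNT*                 ; => 0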

I don't think you can get 40% of the points in 4 readings!  I'm constantly
amazed at what I find in there, and it's always the opposite of Real LISP!

MEMBER is a perfect example. I complained to David Betz (XLISP) that MEMBER
used EQ instead of EQUAL.  I only checked about 4 books and manuals (UCILSP,
INTERLISP, IQLISP and a couple of others).  David correctly pointed out that
CL defaults to EQ unless you use the keyword syntax.  So years of training,
learning and ingrained habit go out the window.  How many bugs
will this introduce?  MEMQ wasn't good enough?
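
The sort of bug, for anybody whose fingers expect an EQUAL MEMBER:

    (MEMBER "RED" '("RED" "BLUE"))                ; => NIL !
    (MEMBER "RED" '("RED" "BLUE") :TEST #'EQUAL)  ; => ("RED" "BLUE")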

MEMBER isn't the only case...

While I'm at it, let me pick on the book itself a little.  Even though CL
translates lower case to upper case, every instance of LISP names, code,
examples, etc. is in **>> lower <<** case and lighter type.  In fact,
everything that is not descriptive text is in lighter or smaller type.
It's VERY difficult to read just from the point of eye strain; instead of 
the names and definitions leaping out to embed themselves in your brain,
you have to squint and strain, producing a nice avoidance response.
Not to mention that you can't skim it worth beans.

Although it's probably hopeless, I wish more implementors would take a stand
against COMMON LISP; I'm afraid that the challenge of "doing a COMMON LISP"
is more than most would-be implementors can resist.  Even I occasionally find
myself thinking "how would I implement that"; fortunately I then ask myself
WHY?

 Jeffrey M. Jacobs <UCILSP>
 CONSART Systems Inc.
 Technical and Managerial Consultants
 P.O. Box 3016, Manhattan Beach, CA 90266
 (213)376-3802
 CIS:75076,2603
 BIX:jeffjacobs
 USENET: jjacobs@well.UUCP
(originally written in late 1985 and early 1986; more to come RSN)

jjacobs@lll-lcc.ARPA@well.UUCP (02/12/87)

Some comments on "Against the Tide of Common LISP".

First, let me point out that this is a repeat of material that appeared
here last June.  There are several reasons that I have repeated it:

1) To gauge the ongoing change in reaction over the past two years.
The first time parts of it appeared in 1985, the reaction was
uniformly pro-CL.

When it appeared last year, the results were 3:1 *against* CL, mostly
via Mail.

Now, being "Against the Tide..." is almost fashionable...

2)  To lay the groundwork for some new material that is in progress
and will be ready RSN.

I did not edit it since it last appeared, so let me briefly repeat some
of the comments made last summer:

1.  My complaint that "both dynamic and lexical makes the
performance even worse" refers *mainly* to interpreted code.

I have already pointed out that in compiled code the difference in
performance is insignificant.

2.  The same thing applies to macros.  In interpreted code, a
macro takes significantly more time to evaluate.

I do not believe that it
is acceptable for a macro in interpreted code to be destructively
expanded, except under user control.
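
For reference, the technique in question is the old "displacing" macro
trick: the expansion is destructively spliced over the call site so the
same form never gets expanded twice.  A sketch, not a recommendation:

    (DEFUN DISPLACE (FORM EXPANSION)
      ;; Overwrite the macro call FORM, in place, with its expansion.
      (SETF (CAR FORM) (CAR EXPANSION)
            (CDR FORM) (CDR EXPANSION))
      EXPANSION)

    (DEFMACRO SQUARE (&WHOLE FORM X)
      (DISPLACE FORM `(* ,X ,X)))   ; self-modifying source!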

3.  SET has always been a nasty problem;  CL didn't fix the problem,
it only changed it.  Getting rid of it and using a new name would
have been better.

After all, maybe somebody *wants* SET to set a lexical variable if that's
what it gets...

I will, however, concede that CL's SET is indeed generally the desired
result.

4.  CL did not fix the problems associated with dynamic vs lexical
scoping and compilation, it only compounded them.   My comment
that

>"lexical scoping was and should be a compiler OPTIMIZATION"

is a *historical* viewpoint.  In the 'early' days, it was recognized
that most well written code was written in such a manner that
it was an easy and effective optimization to treat variables as
being lexical/local in scope.   The interpreter/compiler dichotomy
is effectively a *historical accident* rather than design or intent of the
early builders of LISP.

UCI LISP should have been released with the compiler default as
SPECIAL.  If it had been, would everybody now have a different
perspective?

BTW, it is trivial for a compiler to default to dynamic scoping...
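
In CL terms it is one proclamation per variable, at least (toy names):

    (PROCLAIM '(SPECIAL X))       ; every binding of X is now dynamic
    (DEFUN DEPTH () X)            ; free X is looked up dynamically
    (LET ((X 7)) (DEPTH))         ; => 7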

5. >I checked in Steele; lo and behold, 'tis true (sort of).
>In 2 2/3 pages devoted to SETF, there is >> 1 << line at the very bottom
>of page 94!  

I was picking on the book, not the language.  But thanks for all
the explanations anyway...

6.  >"For consistency, it is legal to write (SETF)"

I have so much heartburn with SETF as a "primitive" that I'll save it
for another day.

7. >MEMBER used EQ instead of EQUAL.

Mea culpa, it uses EQL!

8.  I only refer to Common LISP as defined in the Steele Book, and
to the Common LISP community's subsequent inability to make
any meaningful changes or create a subset.  (Excluding current
ANSI efforts).

Some additional points:

1.  Interpreter Performance

I believe that an interpreter provides a substantially better
development environment, and that compiling should be a final
step in development.

It is also one of LISP's major features that anonymous functions
are generated as non-compiled functions and must be interpreted.
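
E.g. a closure built on the fly; in most systems of this era it runs
interpreted unless explicitly handed to COMPILE:

    (DEFUN ADDER (N)
      #'(LAMBDA (X) (+ X N)))     ; closure constructed at run time
    (FUNCALL (ADDER 5) 10)        ; => 15, via the interpreter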

As such, interpreter performance is important.

2.  "Against the Tide of Common LISP"

The title expresses my 'agenda'.  Common LISP is not a practical,
real world language.

It will result in the ongoing rejection of LISP by the real world; it is
too big and too expensive.  To be accepted, LISP must be able to run
on general purpose, multi-user computers.

It is choking off acceptance of other avenues and paths of
development in the United States.

There must be a greater understanding of the problems and benefits
of Common LISP, particularly by the 'naive' would-be user.

Selling it as the 'ultimate' LISP standard is dangerous and
self-defeating!

 Jeffrey M. Jacobs
 CONSART Systems Inc.
 Technical and Managerial Consultants
 P.O. Box 3016, Manhattan Beach, CA 90266
 (213)376-3802
 CIS:75076,2603
 BIX:jeffjacobs
 USENET: jjacobs@well.UUCP

brothers@TOPAZ.RUTGERS.EDU.UUCP (02/14/87)

The fun thing about common lisp, though, is that any given little
utility function you care to write probably already exists....  I was
working on a project last year that required me to resize an
array; I wrote the little routine, then something made me look
in the arrays section of Steele, and -- lo and behold -- resize-array
(or something like that). 
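
Presumably ADJUST-ARRAY, which does exactly that:

    (DEFVAR *A* (MAKE-ARRAY 4 :ADJUSTABLE T :INITIAL-ELEMENT 0))
    (ADJUST-ARRAY *A* 8 :INITIAL-ELEMENT 0)   ; => an array of length 8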
-- 
			 Laurence R. Brothers
		      brothers@topaz.rutgers.edu
    {harvard,seismo,ut-sally,sri-iu,ihnp4!packard}!topaz!brothers
"I can't control my fingers -- I can't control my brain -- Oh nooooo!"