[comp.lang.lisp] Against the Tide of Common LISP

jjacobs@well.UUCP (02/08/87)

	"Against the Tide of Common LISP"

	Copyright (c) 1986, Jeffrey M. Jacobs, CONSART Systems Inc.,
	P.O. Box 3016, Manhattan Beach, CA 90266 (213)376-3802
	Bix ID: jeffjacobs, CIS Userid 75076,2603
	
	Reproduction by electronic means is permitted, provided that it is not
	for commercial gain, and that this copyright notice remains intact."

The following are from various correspondences and notes on Common LISP:

Since you were brave enough to ask about Common Lisp, sit down for my answer:

I think CL is the WORST thing that could possibly happen to LISP.  In fact, I
consider it a language different from "true" LISP.  CL has everything in the
world in it, usually in 3 different forms and 4 different flavors, with 6
different options.  I think the only thing they left out was FEXPRs...

It is obviously intended to be a "compileable" language, not an interpreted
language. By nature it will be very slow; somebody would have to spend quite a
bit of time and $ to make a "fast" interpreted version (say for a VAX).  The
grotesque complexity and plethora of data types presents incredible problems to
the developer;  it was several years before Golden Hill had lexical scoping,
and NIL from MIT DOES NOT HAVE A GARBAGE COLLECTOR!!!!
It just eventually eats up its entire VAX/VMS virtual memory and dies...

Further, there are inconsistencies and flat out errors in the book.  So many
things are left vague, poorly defined and "to the developer".

The entire INTERLISP arena is left out of the range of compatibility.

As a last shot; most of the fancy Expert Systems (KEE, ART) are implemented in
Common LISP.  Once again we hear that LISP is "too slow" for such things, when
a large part of it is the use of Common LISP as opposed to a "faster" form
(i.e. such as with shallow dynamic binding and simpler LAMBDA variables; they
should have left the &aux, etc as macros).  Every operation in CL is very
expensive in terms of CPU...


______________________________________________________________

I forgot to leave out the fact that I do NOT like lexical scoping in LISP; to
allow both dynamic and lexical makes the performance even worse.  To me,
lexical scoping was and should be a compiler OPTIMIZATION, not an inherent
part of the language semantics.  I can accept SCHEME, where you always know
that it's lexical, but CL could drive you crazy (especially if you were 
testing/debugging other people's code).
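
A minimal sketch of the kind of thing that bites you when reading someone
else's code (the variable names here are purely illustrative):

(defvar *mode* :loud)              ; DEFVAR makes *MODE* special (dynamic)

(defun shout (s)
  (if (eq *mode* :loud)            ; free reference -> the DYNAMIC binding
      (string-upcase s)
      s))

(defun demo ()
  (let ((*mode* :quiet)            ; rebinds the special variable
        (s "hello"))               ; S is lexical; SHOUT cannot see it
    (shout s)))                    ; => "hello", because *MODE* is :QUIET here

;; Reading SHOUT by itself, nothing tells you whether *MODE* is a constant
;; or something a caller rebinds on the fly; with one scoping rule you at
;; least knew which rules applied everywhere.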

This whole phenomenon is called "Techno-dazzle"; i.e. look at what a super
duper complex system that will do everything I can build.  Who cares if it's
incredibly difficult and costly to build and understand, and that most of the
features will only get used because "they are there", driving up the cpu usage
and making the whole development process more costly...

BTW, I think the book is poorly written and assumes a great deal of knowledge
about LISP and MACLISP in particular.  I wouldn't give it to ANYBODY to learn
LISP.

...Not only does he assume you know a lot about LISP, he assumes you know a LOT
about half the other existing implementations to boot.

I am inclined to doubt that it is possible to write a good introductory text on
Common LISP;  you d**n near need to understand ALL of it before you can start
to use it. There is nowhere near the basic underlying set of primitives (or
philosophy) to start with, as there is in Real LISP (RL vs CL).  You'll notice
that there is almost NO defining of functions using LISP in the Steele book.
Yet one of the best things about Real LISP is the precise definition of a
function!

Even when using Common LISP (NIL), I deliberately use a subset.  I'm always
amazed when I pick up the book; I always find something that makes me curse.
Friday I was in a bookstore and saw a new LISP book ("Looking at LISP", I
think, the author's name escapes me).  The author uses SETF instead of SETQ,
stating that SETF will eventually replace SETQ and SET (!!).   Thinking that
this was an error, I checked in Steele; lo and behold, 'tis true (sort of).
In 2 2/3 pages devoted to SETF, there is >> 1 << line at the very bottom
of page 94!  And it isn't even clear; if the variable is lexically bound AND
dynamically bound, which gets changed (or is it BOTH)?  Who knows? 
Where is the definitive reference?

"For consistency, it is legal to write (SETF)"; (a) in my book, that should be
an error, (b) if it's not an error, why isn't there a definition using the
appropriate & keywords?  Consistency?  Generating an "insufficient args"
error seems more consistent to me...

Care to explain this to a "beginner"?  Not to mention that SETF is a
MACRO, by definition, which will always take longer to evaluate.

Then try explaining why SET only affects dynamic bindings (a most glaring
error, in my opinion).  Again, how many years of training, understanding
and textbooks are suddenly rendered obsolete?  How many books say
(SETQ X Y) is a convenient form of (SET (QUOTE X) Y)?  Probably all
but two...
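
A short sketch of the change being complained about (illustrative code, not
taken from any of the books mentioned):

(defun old-style ()
  (let ((x 10))                    ; in CL this X is a LEXICAL variable
    (set 'x 99)                    ; SET only changes the SYMBOL-VALUE of X
    (list x (symbol-value 'x))))   ; => (10 99)

;; In an "all-special" Lisp, where (SETQ X Y) really was shorthand for
;; (SET (QUOTE X) Y), the SET above would have changed the X bound by the
;; LET, and the first element of the result would have been 99.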

Then try to introduce them to DEFVAR, which may or may not get
evaluated who knows when!  (And which often isn't implemented correctly,
e.g. in Franz Common and Golden Hill).

I don't think you can get 40% of the points in 4 readings!  I'm constantly
amazed at what I find in there, and it's always the opposite of Real LISP!

MEMBER is a perfect example. I complained to David Betz (XLISP) that MEMBER
used EQ instead of EQUAL.  I only checked about 4 books and manuals (UCILSP,
INTERLISP, IQLISP and a couple of others).  David correctly pointed out that
CL defaults to EQ unless you use the keyword syntax.  So years of training,
learning and ingrained habit go out the window.  How many bugs
will this introduce?  MEMQ wasn't good enough?
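
To make the complaint concrete (note that, as follow-ups point out, the CL
default test is actually EQL, not EQ):

(member '(a b) '((a b) (c d)))                ; => NIL, EQL fails on list structure
(member '(a b) '((a b) (c d)) :test #'equal)  ; => ((A B) (C D)), the old behavior
(member 'b '(a b c))                          ; => (B C), symbols are EQL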

MEMBER isn't the only case...

While I'm at it, let me pick on the book itself a little.  Even though CL
translates lower case to upper case, every instance of LISP names, code,
examples, etc are in **>> lower <<** case and lighter type.  In fact,
everything that is not descriptive text is in lighter or smaller type.
It's VERY difficult to read just from the point of eye strain; instead of 
the names and definitions leaping out to embed themselves in your brain,
you have to squint and strain, producing a nice avoidance response.
Not to mention that you can't skim it worth beans.

Although it's probably hopeless, I wish more implementors would take a stand
against COMMON LISP; I'm afraid that the challenge of "doing a COMMON LISP"
is more than most would-be implementors can resist.  Even I occasionally find
myself thinking "how would I implement that"; fortunately I then ask myself
WHY?

 Jeffrey M. Jacobs <UCILSP>
 CONSART Systems Inc.
 Technical and Managerial Consultants
 P.O. Box 3016, Manhattan Beach, CA 90266
 (213)376-3802
 CIS:75076,2603
 BIX:jeffjacobs
 USENET: jjacobs@well.UUCP

yerazuws@rpics.UUCP (02/08/87)

In article <2545@well.UUCP>, jjacobs@well.UUCP (Jeffrey Jacobs) writes:
/ 
/ 	"Against the Tide of Common LISP"
/ 
/ 	Copyright (c) 1986, Jeffrey M. Jacobs, CONSART Systems Inc.,
/ 	P.O. Box 3016, Manhattan Beach, CA 90266 (213)376-3802
/ 	Bix ID: jeffjacobs, CIS Userid 75076,2603
/ 	
/ 	Reproduction by electronic means is permitted, provided that it is not
/ 	for commercial gain, and that this copyright notice remains intact."

Do I really need to keep that there?   :-)

/ 
/ The following are from various correspondences and notes on Common LISP:
/ 
/ Since you were brave enough to ask about Common Lisp, sit down for my answer:
/ 
/ I think CL is the WORST thing that could possibly happen to LISP.  In fact, I
/ consider it a language different from "true" LISP.  CL has everything in the
/ world in it, usually in 3 different forms and 4 different flavors, with 6
/ different options.  I think the only thing they left out was FEXPRs...

Sorry, no.  Flavors are not part of the CL definition.  You can add them
yourself if you want.  :-)

/ It is obviously intended to be a "compileable" language, not an interpreted
/ language. By nature it will be very slow; somebody would have to spend quite a
/ bit of time and $ to make a "fast" interpreted version (say for a VAX).  The
/ grotesque complexity and plethora of data types presents incredible problems to
/ the developer;  it was several years before Golden Hill had lexical scoping,
/ and NIL from MIT DOES NOT HAVE A GARBAGE COLLECTOR!!!!
/ It just eventually eats up it's entire VAX/VMS virtual memory and dies...

Agreed- but garbage collectors are no fun to write.  The DEC product
GC's a full three megabytes in less than ten seconds- provided you have
enough physical memory.  Not my (or CL's) fault if someone decides not to
complete the implementation.

/ 
/ Further, there are inconsistencies and flat out errors in the book.  So many
/ things are left vague, poorly defined and "to the developer".
/ 
/ The entire INTERLISP arena is left out of the range of compatability.

True - there is no "spaghetti stack".  But I've never really found
a good use for such a stack.  I've never had to deal with a problem
that sat up and said to me "Hey, dummy, use the spaghetti stack!"

/ 
/ As a last shot; most of the fancy Expert Systems (KEE, ART) are implemented in
/ Common LISP.  Once again we hear that LISP is "too slow" for such things, when
/ a large part of it is the use of Common LISP as opposed to a "faster" form
/ (i.e. such as with shallow dynamic binding and simpler LAMBDA variables; they
/ should have left the &aux, etc as macros).  Every operation in CL is very
/ expensive in terms of CPU...
/ 
 
That depends on what model you use to interpret/compile your lisp source
into.  There's no reason why your compiler/interpreter can't remember what
few variables are lexically scoped and handle them accordingly, keeping
the rest shallowly-bound in a hash table with a fixup stack.
 
/ 
/ ______________________________________________________________
/ 
/ I forgot to leave out the fact that I do NOT like lexical scoping in LISP; to
/ allow both dynamic and lexical makes the performance even worse.  To me,
/ lexical scoping was and should be a compiler OPTIMIZATION, not an inherent
/ part of the language semantics.  I can accept SCHEME, where you always know
/ that it's lexical, but CL could drive you crazy (especially if you were 
/ testing/debugging other people's code).

The rule I use is simple - if it appears in the argument list, it's 
lexical.  If not, it's dynamic.  Then just look for PROGV's and that tells
you whether it's a binding that will get undone someday or if it
is a global. 
 
I also ignore all the warnings from the compiler "Foo has been assumed 
special". (to the extent that I worry only that I didn't blow it and
write "foo" when I meant "foobar")

/ 
/ This whole phenomenon is called "Techno-dazzle"; i.e. look at what a super
/ duper complex system that will do everything I can build.  Who cares if it's
/ incredibly difficult and costly to build and understand, and that most of the
/ features will only get used because "they are there", driving up the cpu useage
/ and making the whole development process more costly...
/ 

Well, I can say that the features concerning general sequences are 
something I've had to kludge for myself in most other lisps, and I am
very glad to see them in CL.  They make writing an optimizing compiler
much easier (that is, writing an optimizing compiler IN lisp, not necessarily
FOR lisp) 
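
For instance, a few of the generic sequence operations in question; they
take lists, vectors and strings interchangeably (a small illustrative sketch):

(remove-if #'evenp #(1 2 3 4 5))           ; => #(1 3 5)
(position #\a "banana" :from-end t)        ; => 5
(map 'list #'+ '(1 2 3) #(10 20 30))       ; => (11 22 33)
(concatenate 'string "foo" "-" "bar")      ; => "foo-bar"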
 
So what if they're only macros?  They save me the trouble of having to 
write such a thing myself, they are probably tested better than I would
bother to test my own creations, and maybe they're even somewhat 
optimized (Hi Walter and Paul! :-)  ).

I admit, when I sit down to do something truly bizarre, I do need a copy
of the book with me- but I really can write cleaner (that is, more 
understandable and easier-to-test-and-debug) code if I use the 
built-ins.  

/ BTW, I think the book is poorly written and assume a great deal of knowledge
/ about LISP and MACLISP in particular.  I wouldn't give it to ANYBODY to learn
/ LISP
/ 
 
True.  Franz Opus 36 is better for people starting out.  

/ ...Not only does he assume you know a lot about LISP, he assume you know a LOT
/ about half the other existing implementations to boot.
/ 
/ I am inclined to doubt that it is possible to write a good introductory text on
/ Common LISP;  you d**n near need to understand ALL of it before you can start
/ to use it. There is nowhere near the basic underlying set of primitives (or
/ philosophy) to start with, as there is in Real LISP (RL vs CL).  You'll notice
/ that there is almost NO defining of functions using LISP in the Steele book.
/ Yet one of the best things about Real LISP is the precise definition of a
/ function!
/ 
/ Even when using Common LISP (NIL), I deliberately use a subset.  I'm always
/ amazed when I pick  up the book; I always find something that makes me curse.
/ Friday I was in a bookstore and saw a new LISP book ("Looking at LISP", I
/ think, the author's name escapes me).  The author uses SETF instead of SETQ,
/ stating that SETF will eventually replace SETQ and SET (!!).   Thinking that
/ this was an error, I  checked in Steel; lo and behold, tis true (sort of).
/ In 2 2/3 pages devoted to SETF, there is >> 1 << line at the very bottom
/ of page 94!  And it isn't even clear; if the variable is lexically bound AND
/ dynamically bound, which gets changed (or is it BOTH)?  Who knows? 
/ Where is the definitive reference?
/ 

Yeah, I ignore SETF pretty much unless I'm making an array reference.  Then
I pretend I'm typing AREF and then fix the syntax.  

/ "For consistency, it is legal to write (SETF)"; (a) in my book, that should be
/ an error, (b) if it's not an error, why isn't there a definition using the
/ approprate & keywords?  Consistency?  Generating an "insufficient args"
/ error seems more consistent to me...
/ 
/ Care to explain this to a "beginner"?  Not to mention that SETF is a
/ MACRO, by definition, which will always take longer to evaluate.
/ 
/ Then try explaining why SET only affects dynamic bindings (a most glaring
/ error, in my opinion).  Again, how many years of training, understanding
/ and textbooks are suddenly rendered obsolete?  How many books say
/ (SETQ X Y) is a convenient form of (SET (QUOTE X) Y)?  Probably all
/ but two...
/ 
/ Then try to introduce them to DEFVAR, which may or may not get
/ evaluated who knows when!  (And which aren't implemented correctly
/ very often, e.g. Franz Common and Golden Hill).
 
Why bother DEFVARing?  It says right in CLTL that it won't affect correctness,
just efficiency.

/ 
/ I don't think you can get 40% of the points in 4 readings!  I'm constantly
/ amazed at what I find in there, and it's always the opposite of Real LISP!
/ 
/ MEMBER is a perfect example. I complained to David Betz (XLISP) that MEMBER
/ used EQ instead of EQUAL.  I only checked about 4 books and manuals (UCILSP,
/ INTERLISP, IQLISP and a couple of others).  David correctly pointed out that
/ CL defaults to EQ unless you use the keyword syntax.  So years of training,
/ learning and ingrained habit go out the window.  How many bugs
/ will this introduce.  MEMQ wasn't good enough?
/ 
 
Different lisp designers have different ideas about where it's right to 
EQ, EQUAL, etc.  Matter of personal taste.

/ MEMBER isn't the only case...
/ 
/ While I'm at  it, let me pick on the book itself a little.  Even though CL
/ translates lower case to upper case, every instance of LISP names, code,
/ examples, etc are in **>> lower <<** case and lighter type.  In fact,
/ everything that is not descriptive text is in lighter or smaller type.
/ It's VERY difficult to read just from the point of eye strain; instead of 
/ the names and definitions leaping out to embed themselves in your brain,
/ you have to squint and strain, producing a nice avoidance response.
/ Not to mention that you can't skim it worth beans.
/ 
 
True.  I wish I could get a copy of CLTL in TeXable form- and then modify
the function-font macro to be about 1.2 times the size of descriptive text.

/ Although it's probably hopeless, I wish more implementors would take a stand
/ against COMMON LISP; I'm afraid that the challenge of "doing a COMMON LISP"
/ is more than most would-be implementors can resist.  Even I occasionally find
/ myself thinking "how would I implement that"; fortunately I then ask myself
/ WHY?
/ 
/  Jeffrey M. Jacobs <UCILSP>
/  CONSART Systems Inc.
/  Technical and Managerial Consultants
/  P.O. Box 3016, Manhattan Beach, CA 90266
/  (213)376-3802
/  CIS:75076,2603
/  BIX:jeffjacobs
/  USENET: jjacobs@well.UUCP

Common lisp does have an "interior logic" that you can get into.  It makes
sense after a while.  Things like the scoping DO make a LOT of sense when
you start seriously considering a compiler.  I know my code runs faster
with lexical than with dynamic.  Remember, when you scope lexically and
compile, you have an absolute displacement onto the stack. This is
a fast thing.  If you dynamically scope, you have to call a routine
to go into a hash table and find out where whatever-it-is is kept.
This is a not-so-fast thing for a compiled language.  On calling
and return, you have to fix up the hash symbol table for each and
every dynamically-scoped argument.  This is a very-not-fast thing.
 
I agree that this argument goes away when you interpret code.
 
If you wonder why I'm so worried about why the code should be compilable,
efficiency considerations, etc, it's because I'm writing a CL compiler
and so it DOES matter to me. (no, the compiler's not done yet, no
beg-to-post mail please)
	
	-Bill Yerazunis

rpk@lmi-angel.UUCP (02/10/87)

In article <> jjacobs@well.UUCP (Jeffrey Jacobs) writes:
>
>	"Against the Tide of Common LISP"
>
>Further, there are inconsistencies and flat out errors in the book.  So many
>things are left vague, poorly defined and "to the developer".

This is sadly true, though at least CL has a spec that isn't simply a summary of
what the first implementation did, unlike Interlisp or Maclisp.

>The entire INTERLISP arena is left out of the range of compatability.

Most of this can be done with a compatibility package, except for the
Interlisp ``feature'' about all arguments being optional.  (This can be done
on the Lisp Machine with lambda-macros, but that's only for MIT-derived
machines.)

>I forgot to leave out the fact that I do NOT like lexical scoping in LISP; to
>allow both dynamic and lexical makes the performance even worse.  To me,
>lexical scoping was and should be a compiler OPTIMIZATION, not an inherent
>part of the language semantics. 

I cannot agree with this at all; if somebody can't implement lexical scoping in an
efficient manner, they're doing something WRONG.  At compile time, no time
is spent ``deciding'' whether a variable is lexical or special  (it's
lexically apparent from the code, right ?).  In most cases, lexical
variables can go on the stack or in registers, which IS efficient.  Special
variable references go through symbols, at least in a straightforward
implementation.  That can be pretty efficient, too.

>I can accept SCHEME, where you always know
>that it's lexical, but CL could drive you crazy (especially if you were 
>testing/debugging other people's code).

Huh ?  Whether or not a variable is lexical can be determined by looking at
its lexical context (practically an axiom, eh ?).  So if it's being used
freely, you can assume it's special.

>This whole phenomenon is called "Techno-dazzle"; i.e. look at what a super
>duper complex system that will do everything I can build.  Who cares if it's
>incredibly difficult and costly to build and understand, and that most of the
>features will only get used because "they are there", driving up the cpu useage
>and making the whole development process more costly...

Well, maybe having a function like MAP (takes a result type, maps over ANY
combination of sequences) is a pain to implement, but the fact there is
quite a bit of baggage in the generic sequence functions shouldn't slow down
parts of the system that don't use it.  The CORE of Common Lisp, which is
lexical scoping, 22 special forms, some data types, and
evaluation/declaration rules, is not slow at all.  It is not as elegant as
Scheme, true, but there is certainly a manageable set of primitives.  Quite a
bit of Common Lisp can be implemented in itself.

>BTW, I think the book is poorly written and assume a great deal of knowledge
>about LISP and MACLISP in particular.  I wouldn't give it to ANYBODY to learn
>LISP

If you're talking about CLtL (Steele), that's true, but it's not meant to
teach [Common] Lisp anyway.

[More stuff about trying to learn about Lisp (in general) from CLtL.  Sort
 of like trying to learn English from the Oxford Unabridged Dictionary.]

>...The author uses SETF instead of SETQ,
>stating that SETF will eventually replace SETQ and SET (!!).

This is silly.  SETQ is very ingrained in Lisp, though ``theoretically''
it's not needed anymore.  The author was drawing a conclusion not based on
the way people actually use Lisp.  The reason why SETF works on symbols
(turning into SETQ) is that macros which are advertised to use ``places''
(expressions that give values and can be written into) don't have to check
for the simple case themselves -- it's just the logical way for SETF to
work.
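
A sketch of that point; the same operator handles the plain-variable case and
the structured cases (the variables here are illustrative):

(defvar *vec* (vector 1 2 3))
(defvar *tbl* (make-hash-table))

(let ((x 0))
  (setf x 10)                          ; on a plain variable, SETF is just SETQ
  (setf (aref *vec* 0) 99)             ; same operator writes an array element
  (setf (gethash 'color *tbl*) 'red)   ; ...or a hash-table entry
  x)                                   ; => 10

;; A macro written in terms of "places", such as PUSH or ROTATEF, gets the
;; simple variable case for free instead of special-casing symbols.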

>Thinking that this was an error,

What ?

>I  checked in Steel; lo and behold, tis true (sort of).
>In 2 2/3 pages devoted to SETF, there is >> 1 << line at the very bottom
>of page 94!  And it isn't even clear; if the variable is lexically bound AND
>dynamically bound, which gets changed (or is it BOTH)?  Who knows?
>Where is the definitive reference?

It follows the rules for variable scoping, so it follows the same rules that
SETQ and MULTIPLE-VALUE-SETQ do.  The reason why this is not made explicit
is that the author expects (and rightly so) that if a language feature uses
a basic language concept (like setting variables), it will follow the rules
described for that concept, which were built up early in the book.  In this
case, the sections on variables (and scoping) and the page before the one
you mentioned, which discussed the generalized variable concept (not the
generalized ``symbol naming a particular variable which is stored specially
or lexically'' concept).

>"For consistency, it is legal to write (SETF)"; (a) in my book, that should be
>an error, (b) if it's not an error, why isn't there a definition using the
>approprate & keywords?  Consistency?  Generating an "insufficient args"
>error seems more consistent to me...

Well, (SETF) does nothing.  You probably wouldn't write this, but again, a
macro would find it useful.  Should (LIST) signal an error too ?
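
A sketch of the kind of macro that benefits; PARALLEL-INIT is an invented
name here, not part of the standard:

(defmacro parallel-init (&rest pairs)
  "Expand into one SETF over however many PLACE/VALUE pairs were collected."
  `(setf ,@pairs))

;; (parallel-init a 1 b 2)  expands to  (SETF A 1 B 2)
;; (parallel-init)          expands to  (SETF), which is legal and does nothing,
;;                          so the macro writer needn't treat zero pairs specially.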

>Care to explain this to a "beginner"?  Not to mention that SETF is a
>MACRO, by definition, which will always take longer to evaluate.

Since you're a beginner, by your own admission, why do you think that a form
which is a macro call will be noticeably more expensive (in the interpreter,
the compiled case can't ever be slower) ?  There are ways to optimize
macroexpansion, you know.  Also, anybody can implement SETF as a special
form as long as they hide the fact from the user.

>Then try explaining why SET only affects dynamic bindings (a most glaring
>error, in my opinion).  Again, how many years of training, understanding
>and textbooks are suddenly rendered obsolete?  How many books say
>(SETQ X Y) is a convenient form of (SET (QUOTE X) Y)?  Probably all
>but two...

Once you acknowledge the existence of lexical scoping, then SET only makes
sense on special variables, because lexically scoped variables can be stored
in ways that (1) don't depend on the symbols that name them (2) aren't
accessible dynamically from the callee.  SET is a FUNCTION that operates on
SYMBOLS, not variables.

Much of the problem is due to the fact that many textbooks on Lisp before
1982 or so (as opposed to a Scheme derivative with lexical scoping) assume
an all-special variable implementation.  This is fast becoming a minority
for serious users of Lisp.  So (SETQ x value) is equivalent to (SET 'x
value) in old Lisps, but even then, you're treading on thin ice.  The old
Lisp Machine implementation went like this: all variables were special in
the interpreter, but in the compiler you had shallow binding and local
variables lived in the stack and were not accessible at all via SET or
SYMBOL-VALUE (called SYMEVAL in Lisp Machine Lisp).  The SAME piece of CODE
behaved differently depending on whether it OR its callers OR its callees
were interpreted or compiled.  I think this is true of Maclisp and maybe of
Franz.

>I don't think you can get 40% of the points in 4 readings!  I'm constantly
>amazed at what I find in there, and it's always the opposite of Real LISP!

Ah, see, now maybe CL suffers from the Swiss Army Knife syndrome, but by
using the word ``Real'' you obviously have a few prejudices of your own.
(Oh, by the way, they reversed the arguments to CONS, ha ha...)

>MEMBER is a perfect example. I complained to David Betz (XLISP) that MEMBER
>used EQ instead of EQUAL.

It's actually EQL...

>How many bugs will this introduce.

It won't introduce bugs into new code written by people who read the manual
and understand the interface and semantics of MEMBER.  Your (legitimate)
obstacle is porting ``traditional Lisp'' code to Common Lisp.  The
experience at LMI is that you use the package system to apparently redefine
functions which have the same names as different Common Lisp functions.  It
is a familiar technique for me because Lisp Machine Lisp had quite a few
name conflicts with Common Lisp:

----------------------------------------
(make-package 'real-lisp)

(in-package 'real-lisp)

(shadow '(member assoc rassoc delete )) ; etc...

(export '(memq member))

;;; If you're a speed freak, change this to a macro, or hope the compiler
;;; can handle inline functions, or that the implementation can call
;;; functions quickly, or use ZL:DEFSUBST on the Lisp machine...
(defun member (x list)
  (lisp:member x list :test #'equal))

(defun memq (x list)
  (lisp:member x list :test #'eq))
----------------------------------------

Now you can move things into the REAL-LISP package.  If you're more
ambitious you could make REAL-LISP a package that actually had all the right
symbols in it itself (as opposed to inheriting them) and exporting them, and
then other packages could USE it.
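
Usage then looks like this (a sketch, with the same caveats as the code above):

(in-package 'real-lisp)

(member '(1 2) '((1 2) (3 4)))   ; => ((1 2) (3 4)) -- the EQUAL test, old behavior
(memq 'b '(a b c))               ; => (B C)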

>Although it's probably hopeless, I wish more implementors would take a stand
>against COMMON LISP; I'm afraid that the challenge of "doing a COMMON LISP"
>is more than most would-be implementors can resist.  Even I occasionally find
>myself thinking "how would I implement that"; fortunately I then ask myself
>WHY?

Well, the main winning alternative is even further away from your Real Lisp
than Common Lisp is: Scheme, or T, which can be pretty much turned into a
systems programming language.
-- 
Robert P. Krajewski
Internet/MIT: RPK@MC.LCS.MIT.EDU
        UUCP: ...{cca,harvard,mit-eddie}!lmi-angel!rpk

preece@ccvaxa.UUCP (02/10/87)

	jjacobs@well.UUCP:
>	"Against the Tide of Common LISP"
>	...
> MEMBER is a perfect example. I complained to David Betz (XLISP) that MEMBER
> used EQ instead of EQUAL.  I only checked about 4 books and manuals (UCILSP,
> INTERLISP, IQLISP and a couple of others).  David correctly pointed out that
> CL defaults to EQ unless you use the keyword syntax.  So years of training,
> learning and ingrained habit go out the window.  How many bugs
> will this introduce.  MEMQ wasn't good enough?
----------
This error of fact (MEMBER defaults to EQL in CL, not to EQ) is just
one of many things that got bashed on when this was posted before
(6-9 months ago).  It's often useful to have a diatribe posted to make
us think about our preconceptions and reconsider our biases, but one
would appreciate it if a NEW diatribe were posted rather
than one which has already been around the block...
[I could be wrong; it could be that it was posted to the CL mailing
list and that this list didn't see it; in either case the author
should have reviewed it in the light of the discussion.]

-- 
scott preece
gould/csd - urbana
uucp:	ihnp4!uiucdcs!ccvaxa!preece
arpa:	preece@gswd-vms

jjacobs@well.UUCP (02/11/87)

In <768@rpics.RPI.EDU>, Bill Yerazunis writes:

>Do I really need to keep that there?   :-)

 Only if you reproduce the whole thing somewhere else :-)

>Why bother DEFVARing?  It says right in CLTL that it won't affect
>correctness, just efficiency.

It says right in CLTL "DEFVAR is the recommended way to declare the
use of a special variable".

I contend that this does affect correctness (and is also a good reason to
refer to the manual even when doing *simple* things).  :-)
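
A sketch of the correctness point (illustrative names):

(defun report () *level*)        ; free reference to *LEVEL*

;; Without a DEFVAR, (let ((*level* 1)) (report)) binds a LEXICAL *LEVEL*
;; that REPORT cannot see, so REPORT signals an unbound-variable error (or
;; returns a stale global value, if one exists).

(defvar *level* 0)               ; proclaims *LEVEL* special

(let ((*level* 1))               ; now LET rebinds the DYNAMIC variable
  (report))                      ; => 1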

>Common lisp does have an "interior logic" that you can get into.  It makes
>sense after a while.  Things like the scoping DO make a LOT of sense when
>you start seriously considering a compiler.

 Some of it makes sense, but not *good* sense :-) 

The issue I raise is not "lexical vs dynamic"; it's the godawful mess that CL
uses!

(As a general rule of language design I agree that lexical is better;
dynamic scoping for LISP is both a personal prejudice and a
performance issue).

>  I know my code runs faster
>with lexical than with dynamic.  Remember, when you scope lexically and
>compile, you have an absolute displacement onto the stack. This is
>a fast thing. 

>If you dynamically scope, you have to call a routine
>to go into a hash table and find out where whatever-it-is is kept.
>This is a not-so-fast thing for a compiled language.  On calling
>and return, you have to fix up the hash symbol table for each and
>every dynamically-scoped argument.  This is a very-not-fast thing

Say what???????????

The "value cell" is normally statically located at a known address!!!
No need to perform hash table lookup at all!!!  All references are by
address.

Access time may possibly be *faster*, i.e. MOV ADDR, dest instead of 
MOV INDEX(SP),dest, depending on CPU architecture!

Simplisticially, dynamic binding becomes:

PUSH	#SPEC_CELL_ADDR	; save address for later restoral 
PUSH	SPEC_CELL_ADDR	; save current value
MOV	new_value, SPEC_CELL_ADDR ; set value

restoring becomes simply

POP	R1	; get address of special cell, R1 assumed to be a reg.
POP	(R1)	; restore value.

 Jeffrey M. Jacobs
 CONSART Systems Inc.
 Technical and Managerial Consultants
 P.O. Box 3016, Manhattan Beach, CA 90266
 (213)376-3802
 CIS:75076,2603
 BIX:jeffjacobs
 USENET: well!jjacobs

ram@spice.cs.cmu.edu.UUCP (02/11/87)

For the record, I am not replying to this message in hopes of convincing
MR. Jacobs of anything, since he has shown himself to be beyond reason in
previous discussions.  I simply want to point out that almost nothing he
says is true, and the few true things he says are irrelevant.

>
>I think CL is the WORST thing that could possibly happen to LISP.  In fact, I
>consider it a language different from "true" LISP.  CL has everything in the
>world in it, usually in 3 different forms and 4 different flavors, with 6
>different options.  I think the only thing they left out was FEXPRs...
I presume that you have some dialect in mind which is "true" lisp, but all
the Lisps I have ever used have had a great deal of duplication of
functionality, since Lisps tend to accrete rather than being designed.

>
>It is obviously intended to be a "compileable" language, not an interpreted
>language. By nature it will be very slow; somebody would have to spend quite 
> a bit of time and $ to make a "fast" interpreted version (say for a VAX).  
Compiled = slow?  How silly of me, I thought the purpose of compilation was
to make code run faster.  I presume that there is an unspoken assumption
that you will run code interpreted even though a compiler is available.  In
my experience, non-stupid people debug code compiled when they have a modern
Lisp environment supporting incremental compilation.

>The grotesque complexity and plethora of data types presents incredible 
>problems to the developer;  it was several years before Golden Hill had
>lexical scoping, and NIL from MIT DOES NOT HAVE A GARBAGE COLLECTOR!!!!
It is true that Common Lisp has a few types that are non-trivial to
implement and are not supported by some Lisps.  The main examples are
bignum, complex and ratio arithmetic.  Your other two assertions, while
true, have nothing to do with the complexity of Common Lisp datatypes.  It
is true that implementing full lexical scoping in a compiler is non-trivial,
causing some implementors headaches; the Common Lisp designers felt that
this cost was adequately compensated for by the increment in power and
cleanliness.  NIL existed after a fashion before anyone had thought of
designing Common Lisp, and it didn't have a garbage collector then either.

>Further, there are inconsistencies and flat out errors in the book.  So many
>things are left vague, poorly defined and "to the developer".
True: this is a major motivation for the current ANSI standards effort.
However, some of the vaguenesses in the spec are quite deliberate.  People
who have not participated in a standards effort involving many
implementations may not appreciate how much a standard can be simplified by
leaving behavior in obscure cases undefined.  This is quite different from
documenting a single implementation system where you can assume that what
the implementation does is the "right" thing.

>
>The entire INTERLISP arena is left out of the range of compatability.
True, and quite deliberate.  Interlisp is substantially incompatible with
all the Lisps that we wanted to be compatible with.  Of course, this is
largely because all of the active members of the Common Lisp design effort
were using Maclisp family Lisps.  Other Lisp communities such as
XEROX/Interlisp were hiding their heads in the sand, hoping we would never
accomplish anything.

>
>As a last shot; most of the fancy Expert Systems (KEE, ART) are implemented in
>Common LISP.  Once again we hear that LISP is "too slow" for such things, when
>a large part of it is the use of Common LISP as opposed to a "faster" form
>(i.e. such as with shallow dynamic binding and simpler LAMBDA variables; they
>should have left the &aux, etc as macros).  Every operation in CL is very
>expensive in terms of CPU...
Even if you personally insist on using an interpreter, vendors using Lisp as
an implementation substrate will be less stupid.  As you mentioned earlier,
Common Lisp was designed to be efficiently compilable, and none of the
above "ineffencies" have a negative effect on compiled code.  As for
fundamental innefficiency, look at Robert P. Gabriel's book on measuring
Lisp performance.  He compares many Lisps, both Common and uncommon, and the
Common Lisps do quite well.  For example, Lucid Common Lisp on the SUN is 
2x-4x faster than Franz on the same hardware.

>
>______________________________________________________________
>
>I forgot to leave out the fact that I do NOT like lexical scoping in LISP; to
>allow both dynamic and lexical makes the performance even worse.  
Only in compiled code...

>To me,
>lexical scoping was and should be a compiler OPTIMIZATION, not an inherent
>part of the language semantics.  
Sticking your foot in your mouth and revealing that you have no understanding
of lexical scoping (as opposed to local scoping)...

>I can accept SCHEME, where you always know
>that it's lexical, but CL could drive you crazy (especially if you were 
>testing/debugging other people's code).
For one, Common Lisp is hardly unique in having both dynamic and static
variables.  Every Lisp that I know of allows dynamic binding, and every Lisp
that I know of will also statically bind variables, at least in compiled
code.  I believe that Scheme allows fluid binding; certainly T does.
I have also never heard anyone but you claim that mixed 
lexical/dynamic scoping makes programs hard to understand, and I have to
deal with some real dimbulb users as part of my job.  In contrast, I have
frequently heard claimed (and personally experienced) obscure bugs and
interactions due to the use of dynamic scoping.

>
>BTW, I think the book is poorly written and assume a great deal of knowledge
>about LISP and MACLISP in particular.  I wouldn't give it to ANYBODY to learn
>LISP
>
>...Not only does he assume you know a lot about LISP, he assume you know a LOT
>about half the other existing implementations to boot.
There is a substantial element of truth here, but then CLTL wasn't intended
to be a "learning programming through Lisp book".  The problem is that you
have all these Lisp wizards defining a standard, and they find it impossible
to "think like a novice" when specifying things.

>
>I am inclined to doubt that it is possible to write a good introductory text on
>Common LISP;

This is questionable.  I believe that all the Maclisp family Lisp
introductory books are going over to Common Lisp (e.g. Winston and Horn).
Of course, you probably consider these books to be a priori not good.

>you d**n near need to understand ALL of it before you can start
>to use it. There is nowhere near the basic underlying set of primitives (or
>philosophy) to start with, as there is in Real LISP (RL vs CL).  
Not really true, although this is the closest that you have come to a valid
esthetic argument against Common Lisp.  Once you understand it, you realize
that there actually is a primitive subset, but this is only hinted at in
CLTL.

>You'll notice
>that there is almost NO defining of functions using LISP in the Steele book.
>Yet one of the best things about Real LISP is the precise definition of a
>function!
Once again, this is largely a result of good standards practice.  If you say
that a given operation is equivalent to a piece of code, then you vastly
over-specify the operation, since you require that the result be the same
for *all possible conditions*.  This unnecessarily restricts the
implementation, resulting in the performance penalties you so dread.

>
>Even when using Common LISP (NIL), I deliberately use a subset.  I'm always
>amazed when I pick  up the book; I always find something that makes me curse.
I will avoid elaborating possible conclusions about your limited mental
capacity; this statement only shows how emotionally involved you are in
denouncing a system which takes you out of your depth.

>Friday I was in a bookstore and saw a new LISP book ("Looking at LISP", I
>think, the author's name escapes me).  The author uses SETF instead of SETQ,
>stating that SETF will eventually replace SETQ and SET (!!).   Thinking that
>this was an error, I  checked in Steel; lo and behold, tis true (sort of).
>In 2 2/3 pages devoted to SETF, there is >> 1 << line at the very bottom
>of page 94!  And it isn't even clear; if the variable is lexically bound AND
>dynamically bound, which gets changed (or is it BOTH)?  Who knows? 
>Where is the definitive reference?
Well, obviously it sets the place named, and in a particular lexical
environment, a given name only names one variable, lexical or special as the
case may be.  Your incomprehension provides some evidence that the
specification is inadequate, although you do exhibit an amazing capacity for
incomprehension.

>
>"For consistency, it is legal to write (SETF)"; (a) in my book, that should be
>an error, (b) if it's not an error, why isn't there a definition using the
>approprate & keywords?  Consistency?  Generating an "insufficient args"
>error seems more consistent to me...
The syntax specified in CLTL is:
  SETF {place value}*
In CLTL's notation for macro syntax, this states that an arbitrary number of
(place, value) pairs may be specified.  The sentence you complain about
is only restating the obvious so that even you could not miss this point.

>
>Then try explaining why SET only affects dynamic bindings (a most glaring
>error, in my opinion).  Again, how many years of training, understanding
>and textbooks are suddenly rendered obsolete?  How many books say
>(SETQ X Y) is a convenient form of (SET (QUOTE X) Y)?  Probably all
>but two...
Well, the times they are a changin'...  Of course, if you understood lexical
variables, you would understand why you can't compute a variable name at run
time and then reference it.

>
>Then try to introduce them to DEFVAR, which may or may not get
>evaluated who knows when!  (And which aren't implemented correctly
>very often, e.g. Franz Common and Golden Hill).
It is true that DEFVAR's behavior is somewhat non-intuitive, but it is
usually the "right thing" unless you are doing wrong things in your variable
inits.  This is an instance of the MIT philosophy of doing the right
thing even if it is a bit more complicated (like in ITS EMACS vs.
imitations, a subject which I could flame about with verbosity and
irrationality comparable to yours).

>
>MEMBER is a perfect example. I complained to David Betz (XLISP) that MEMBER
>used EQ instead of EQUAL.  I only checked about 4 books and manuals (UCILSP,
>INTERLISP, IQLISP and a couple of others).  David correctly pointed out that
>CL defaults to EQ unless you use the keyword syntax.  So years of training,
>learning and ingrained habit go out the window.  How many bugs
>will this introduce.  MEMQ wasn't good enough?
Of course you are wrong here, although only in a minor way.  It uses EQL,
like every other Common Lisp function that has an implicit equality test.
This particular decision was agonized over for quite a while, but it was
decided to change MEMBER in the interest of consistency (which I believe you
defended earlier).

>
>While I'm at  it, let me pick on the book itself a little.  Even though CL
>translates lower case to upper case, every instance of LISP names, code,
>examples, etc are in **>> lower <<** case and lighter type.  In fact,
>everything that is not descriptive text is in lighter or smaller type.
Yep, Digital Press botched the typesetting pretty badly.  Of course, the
reason that the code is in lower case is that everyone with any taste codes
in lower case.  The reason that READ uppercases is that Maclisp did.

Flamingly yours...
                   Rob MacLachlan (ram@c.cs.cmu.edu)

yerazuws@rpics.UUCP (02/11/87)

In article <2565@well.UUCP>, jjacobs@well.UUCP (Jeffrey Jacobs) writes:
> 
> In <768@rpics.RPI.EDU>, Bill Yerazunis writes:
> 
> >  I know my code runs faster
> >with lexical than with dynamic.  Remember, when you scope lexically and
> >compile, you have an absolute displacement onto the stack. This is
> >a fast thing. 
> 
> >If you dynamically scope, you have to call a routine
> >to go into a hash table and find out where whatever-it-is is kept.
> >This is a not-so-fast thing for a compiled language.  On calling
> >and return, you have to fix up the hash symbol table for each and
> >every dynamically-scoped argument.  This is a very-not-fast thing
> 
> Say what???????????
> 
> The "value cell" is normally statically located at a known address!!!
> No need to perform hash table lookup at all!!!  All references are by
> address.
> 
> Access time may possibly be *faster*, i.e. MOV ADDR, dest instead of 
> MOV INDEX(SP),dest.depending on CPU architecture!

Yes- and no.  The fixed address will be fine and dandy as long
as you assume a fixed symbol table size and location.  In LISP, 
assuming a fixed size and/or location for anything is a bad idea,
because you never can be sure that the garbage collector isn't going
to sneak up behind your back and move it on you when you aren't
looking.
	
Or even worse, you gensym up a few thousand temporary symbols and
you run out of symbol table space.  OUCH!
	
The obvious cure for this is to call some sort of hashing function
and thereby circumvent the move/growth problem.
	
	
A related problem is the problem inherent in a lambda-definition. If I have

(defun foo (x)
  (setq x 3))
 
foo returns the value 3, but a global value of X (if it exists) should
be unchanged!  Therefore, you have to save and restore EVERY formal
parameter (possibly to an undefined state).  This requires a lot of
instructions.... and cycles.
	
Lexical scoping gives each function a discardable copy on the stack. 
Therefore, no difficult restores.
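
In other words, taking the FOO above and giving the symbol X a global value
for comparison (a sketch):

(setf (symbol-value 'x) 42)    ; give the symbol X a global value, without
                               ; proclaiming X special

(foo 7)                        ; => 3; the SETQ assigned only the lexical parameter
(symbol-value 'x)              ; => 42, the global value is untouched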
	
	-Bill Yerazunis

yerazuws@rpics.UUCP (02/11/87)

My apologies for the EMACS/postnews interface that added lots of 
blank spaces and lines.  It has been dealt with.
	
Regrets and apologies.

	-Bill Yerazunis

paul@osu-eddie.UUCP (02/12/87)

In article <2565@well.UUCP> jjacobs@well.UUCP (Jeffrey Jacobs) writes:
>(As a general rule of language design I agree that lexical is better;
>dynamic scoping for LISP is both a personal prejudice and a
>performance issue).

I must take an anti-dynamic stand at this point.  The main thing I
have against dynamic scoping is that I ALWAYS run the risk of having
someone else's code do uncool things to my routine's variables just
because the names are the same.  The name is what I happen to call it,
but if it is inside a routine, it should be THAT ROUTINE'S variable,
and no one else's.  Lisp just happens to be a language, but that shouldn't
make any difference.

>Say what???????????
>
>The "value cell" is normally statically located at a known address!!!
>No need to perform hash table lookup at all!!!  All references are by
>address.

Not always.  What if I happen to be running a compiler that puts all
local variables into REGISTERS, and only pushes them when they need to
be saved (see _Structure and Interpretation of Computer Programs_,
Abelson & Sussman, 1985, MIT Press, for a good example of register
handling).  In that case, a local variable fetch is the fastest thing
your machine can do.  It is possible to make a dynamic code compiler
use registers, but it is VERY much harder.

As for interpreted code, what if your interpreter has an evaluate-lambda
routine that works by compiling the lambda body (ONLY if it hasn't
done so already) and then CALLing it?  Note that CALLing might
actually mean running a pseudo-machine on the compiled code.  If this
part of the interpreter is written correctly, it will be just as fast as
a normal interpreter for most code, and VERY much faster for loops.
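
A rough sketch of that evaluate-lambda idea in Common Lisp itself
(CALL-LAMBDA and the cache variable are invented names):

(defvar *compiled-lambdas* (make-hash-table :test #'eq))

(defun call-lambda (lambda-expr &rest args)
  "Compile LAMBDA-EXPR the first time it is applied, then reuse the result."
  (let ((fn (or (gethash lambda-expr *compiled-lambdas*)
                (setf (gethash lambda-expr *compiled-lambdas*)
                      (compile nil lambda-expr)))))
    (apply fn args)))

;; (call-lambda '(lambda (x) (* x x)) 12)   ; => 144
;; Passing the SAME lambda expression (the same list object) again reuses
;; the cached compiled function instead of recompiling.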

-- 
	     -- Paul Placeway
		Department of Computer and Information Science
	SNail:	The Ohio State University
		2036 Neil Ave. Columbus OH USA 43210-1277
	ARPA:	paul@ohio-state.{arpa,csnet}
	UUCP:	...!cb{osgd,att}!osu-eddie!paul

jjacobs@well.UUCP (02/12/87)

Some comments on "Against the Tide of Common LISP".

First, let me point out that this is a repeat of material that appeared
here last June.  There are several reasons that I have repeated it:

1) To gauge the ongoing change in reaction over the past two years.
The first time parts of it appeared in 1985, the reaction was
uniformly pro-CL.

When it appeared last year, the results were 3:1 *against* CL, mostly
via Mail.

Now, being "Against the Tide..." is almost fashionable...

2)  To lay the groundwork for some new material that is in progress
and will be ready RSN.

I did not edit it since it last appeared, so let me briefly repeat some
of the comments made last summer:

1.  My complaint that "both dynamic and lexical makes the performance
even worse" refers *mainly* to interpreted code.

I have already pointed out that in compiled code the difference in
performance is insignificant.

2.  The same thing applies to macros.  In interpreted code, a
macro takes significantly more time to evaluate.

I do not believe that it
is acceptable for a macro in interpreted code to be destructively
expanded, except under user control.

3.  SET has always been a nasty problem;  CL didn't fix the problem,
it only changed it.  Getting rid of it and using a new name would
have been better.

After all, maybe somebody *wants* SET to set a lexical variable if that's
what it gets...

I will, however, concede that CL's SET is indeed generally the desired
result.

4.  CL did not fix the problems associated with dynamic vs lexical
scoping and compilation, it only compounded them.   My comment
that

>"lexical scoping was and should be a compiler OPTIMIZATION"

is a *historical* viewpoint.  In the 'early' days, it was recognized
that most well written code was written in such a manner that
it was an easy and effective optimization to treat variables as
being lexical/local in scope.   The interpreter/compiler dichotomy
is effectively a *historical accident* rather than design or intent of the
early builders of LISP.

UCI LISP should have been released with the compiler default as
SPECIAL.  If it had been, would everybody now have a different
perspective?

BTW, it is trivial for a compiler to default to dynamic scoping...

5. >I  checked in Steel; lo and behold, tis true (sort of).
>In 2 2/3 pages devoted to SETF, there is >> 1 << line at the very bottom
>of page 94!  

I was picking on the book, not the language.  But thanks for all
the explanations anyway...

6.  >"For consistency, it is legal to write (SETF)"

I have so much heartburn with SETF as a "primitive" that I'll save it
for another day.

7. >MEMBER used EQ instead of EQUAL.

Mea culpa, it uses EQL!

8.  I only refer to Common LISP as defined in the Steele Book, and
to the Common LISP community's subsequent inability to make
any meaningful changes or create a subset.  (Excluding current
ANSI efforts).

Some additional points:

1.  Interpreter Performance

I believe that development under an interpreter provides
a substantially better development environment, and that
compiling should be a final step in development.

It is also one of LISP's major features that anonymous functions
get generated as non-compiled functions and must be interpreted.

As such, interpreter performance is important.

3.  "Against the Tide of Common LISP"

The title expresses my 'agenda'.  Common LISP is not a practical,
real world language.

It will result in systems that are too slow and too expensive.  To be
accepted, LISP must be able to run
on general purpose, multi-user computers.

It is choking off the chance of other avenues and paths of
development in the United States.

There must be a greater understanding of the problems and benefits
of Common LISP, particularly by the 'naive' would-be user.

Selling it as the 'ultimate' LISP standard is dangerous and
self-defeating!

 Jeffrey M. Jacobs
 CONSART Systems Inc.
 Technical and Managerial Consultants
 P.O. Box 3016, Manhattan Beach, CA 90266
 (213)376-3802
 CIS:75076,2603
 BIX:jeffjacobs
 USENET: jjacobs@well.UUCP

patrick@mcc-pp.UUCP (02/12/87)

As a new arrival to the Lisp world (2 years experience with Lisp,
vs 15 with C, Bliss, and Fortran), I think I have a different
view of Common Lisp vs other Lisps than either the definers of
the Common Lisp standard or jjacobs.

... On compilers vs interpreters
As a systems and performance measurement type, I have always been
concerned with how fast my programs run.  One of the critical
measures of success of OS code is how fast it is perceived to be.
My default programmer's model says interpreters are slow.
Also, old rumors about programs behaving differently in compiled
and interpreted mode made me distrust the interpreter as a naive user.
Since I have an incremental compiler (Lisp machine), I compile everything
before I run it, except top-level commands.  I have not noticed
significant impediments to development using this procedure.
Breakpoints and function tracing are still available as well
as the old, old reliable of print statements.  Indeed, when at
a breakpoint, I can rewrite and recompile any function that I am
not currently within.  Thus, from my viewpoint, all discussion of
how fast something is in an interpreter is irrelavent to my purposes.
Dynamically created closures can also be handled by an incremental compiler.
I claim that this approach to Lisp development is followed without lossage
by many of the new arrivals to the Lisp world.

...on Common Lisp environments
I recognize that Lisp machines are too expensive for most developers,
but workstations such as Sun now have Common Lisp compilers
(from Kyoto, Franz, and Lucid at a minimum), with runtime
environment development continuing.  I claim that reasonable
Common Lisp development environments are available on $15,000 workstations
and multiuser systems such as Vaxes and Sequents today, and will be
available soon on $5000 PCs (based on high performance, large address
space chips such as M68020 or Intel 386).

...on portability
Implementors of major commercial programs want as wide a potential
market for their product as possible.  Thus, they chose to implement
in the best available PORTABLE environment, rather than the best
environment.  Common Lisp appears the best choice.
Researchers without the requirement for portability
may choose other environments such as Scheme or InterLisp.

...on commonality
I was shocked to discover that MacLisp and InterLisp are significantly
more different from each other than C and Pascal are.  I am surprised that they
are commonly lumped together as the same language.  Scheme is
yet farther away both in syntax and philosophy.  All are in the
same family just as C and Pascal are both related to Algol60,
but beyond that...

...on Common Lisp the Language
I learned Common Lisp from Steele's book and found it heavy going.
An excellent first effort at defining a standard, but definitely
not a teaching or implementor's aid.  The intro books are becoming
available (with some lingering historical inconsistencies).
Someone should write a book describing the "definitive" core of the language,
followed by reasonable macros and library functions for the rest of
the language.  It would be a great aid to experimental implementors.
Commercial implementations would continue to set themselves apart
by the quality of their optimizers, debugging environments, etc.
<<A side note, C debuggers provide dynamic access to lexical
  variables.  I am sure Common Lisp ones can too, at some
  implementation cost.  I wonder when they will...>>
On the other hand, with only a brief exposure to Franz Lisp and 
MultiLisp before plunging into Common Lisp (with ZetaLisp extensions),
I did not have the disadvantage of historical assumptions about the
definition of Lisp.

...on dynamic vs lexical scoping
Common Lisp did not go far enough in lexical scoping.  Specifically,
it did not provide a way to define a lexical variable at the
outermost level.  As it is, I cannot define global variables without
some risk of changing the performance of some function that
happened to use the same variable name, even if the variable
is used in a lexical way.  The current recommended convention of
defining specials with *foo* leaves much to be desired.
Other than globals, the only other use I have seen
for dynamically scoped variables is to pass additional values
to and from functions without using the argument list (generally
considered poor practice by software engineering types, but
occasionally preferred for performance reasons).
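
One partial workaround I have seen (a sketch only; the names are invented,
and it only gives you access through functions rather than a variable you
can reference directly) is to hide the "global" inside a top-level LET:

  (let ((counter 0))                      ; lexical, invisible to unrelated code
    (defun next-count () (incf counter))
    (defun reset-count () (setq counter 0)))

But that is a programming trick, not a language feature.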

I seem to have rambled on, but I agree with Jeffrey Jacobs in saying:
> There must be a greater understanding of the problems and benefits
> of Common LISP, particularly by the 'naive' would-be user.
> 
> Selling it as the 'ultimate' LISP standard is dangerous and
> self-defeating!

Instead, I consider it the current Lisp standard, subject to
"slow and careful" revision and improvement.
By comparison, Fortran 77 is not Modula2, but it is far better than
Fortran II.

I hope and expect "Common Lisp 2000" will represent significant
improvements over Common Lisp, perhaps with some remaining
historical uglinesses removed or better hidden (including dynamic scoping).

-- Patrick McGehearty,
   representing at least one view of the growing community of Lisp users.

willc@tekchips.UUCP (02/12/87)

Jeffrey Jacobs (jjacobs@well.UUCP):
>I can accept SCHEME, where you always know
>that it's lexical, but CL could drive you crazy (especially if you were 
>testing/debugging other people's code).

Robert P Krajewski (rpk@mc.lcs.mit.edu):
>Huh ?  Whether or not a variable is lexical can be determined by looking at
>its lexical context (practically an axiom, eh ?).  So if it's being used
>freely, you can assume it's special.

Rob MacLachlan (ram@spice.cs.cmu.edu):
>I have also never heard anyone but you claim that mixed 
>lexical/dynamic scoping makes programs hard to understand, and I have to
>deal with some real dimbulb users as part of my job.  In contrast, I have
>frequently heard claimed (and personally experienced) obscure bugs and
>interactions due to the use of dynamic scoping.

Thanks to proclamations and DEFVARs (which perform proclamations), it
is not possible to tell whether a Common Lisp variable is lexical simply
by looking at its lexical context.  See page 157 of CLtL.  This is a major
lose.  As Mr Jacobs observed, it drives you crazy when you try to read
code.

I certainly agree with Mr MacLachlan's point that dynamic scoping makes
programs hard to understand.

Rob MacLachlan (ram@spice.cs.cmu.edu):
>For one, Common Lisp is hardly unique in having both dynamic and static
>variables.  Every Lisp that I know of allows dynamic binding, and every Lisp
>that I know of will use also statically bind variables, at least in compiled
>code.  I believe that Scheme allows fluid binding; certainly T does.

Neither the 1985 nor 1986 Scheme reports talk about dynamic (fluid)
variables.  The reason is that many different semantics are possible for
dynamic variables, each with their own best use, and Scheme is powerful
enough that these various semantics can be implemented by portable Scheme
code.  We reasoned that programmers can load whichever variety of dynamic
variables they want out of a code library.  The standard procedure library
described in the Scheme reports doesn't describe any of the possibilities
for dynamic variables because we wanted to avoid premature standardization.

Peace from a real dimbulb user,
Will Clinger
willc%tekchips@tektronix.csnet

mincy@think.UUCP (02/13/87)

In article <2573@well.UUCP> jjacobs@well.UUCP (Jeffrey Jacobs) writes:
>Some additional points:

>1.  Interpreter Performance
>I believe that development under an interpreter provides
>a substantially better development environment, 
Absolutely.
>and that compiling should be a final step in development.
I find that I almost always run code compiled, even code under
development.  The reasons are that most silly typos are caught by
the compiler, such as misspelled variables and functions.
When I have bugs, the lisp debugger is usually sufficient on
compiled functions.  If I have an unusually hard bug to understand,
then I might run interpreted with step.  

>It is also one of LISP's major features that anonymous functions
>get generated as non-compiled functions and must be interpreted.
Most anonymous functions (I presume you mean lambda expressions where 
you write #'(lambda (...) ...) in your code) will get compiled.
Only by saying '(lambda ...) or consing one up on the fly will you 
get anonymous interpreted functions.
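
To spell out the distinction (a sketch; MAKE-ADDER is a made-up example):

  (defun make-adder (n)
    #'(lambda (x) (+ x n)))       ; appears in the source, so the compiler
                                  ; compiles it along with MAKE-ADDER

  (defun make-adder-form (n)
    `(lambda (x) (+ x ,n)))       ; consed up at run time; it is only a list
                                  ; until you hand it to EVAL or COMPILE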

>As such, interpreter performance is important.
Yes, but not *that* important.

>3.  "Against the Tide of Common LISP"
>The title expresses my 'agenda'.  Common LISP is not a practical,
>real world language.
I find that attitude most unfortunate.  If you want to say Common LISP
is a large language, and that it is difficult to implement well - then
I would agree with you totally.  But I have to disagree strongly with
this statement.  

> Jeffrey M. Jacobs


-- jeff
seismo!godot.think.com!mincy

sharma@uicsrd.UUCP (02/13/87)

	There is a pretty good critique of Common Lisp in :

	"A Critique of Common Lisp" by Rodney Brooks and Richard Gabriel 
(Stanford). It appeared in the proceedings of the 1984 ACM Symposium on
Lisp and Functional Programming.

rpk@lmi-angel.UUCP (02/14/87)

In article <> jjacobs@well.UUCP (Jeffrey Jacobs) writes:
>
>Some comments on "Against the Tide of Common LISP".
>
>1) To gauge the ongoing change in reaction over the past two years.
>The first time parts of it appeared in 1985, the reaction was
>uniformly pro-CL.
>
>When it appeared last year, the results were 3:1 *against* CL, mostly
>via Mail.

What exactly are you trying to imply here ?  What were the circumstances of
rejection ?

>4.  CL did not fix the problems associated with dynamic vs lexical
>scoping and compilation, it only compounded them.   My comment
>that
>
>>"lexical scoping was and should be a compiler OPTIMIZATION"
>
>is a *historical* viewpoint...  The interpreter/compiler dichotomy
>is effectively a *historical accident* rather than design or intent of the
>early builders of LISP.

Well, it's a fairly large blot on language semantics.  Common Lisp decided
to get the semantics right, while not removing historical phenomena like the
names of certain list manipulation functions (NCONC or RPLACA).

>BTW, it is trivial for a compiler to default to dynamic scoping...

Yeah, the Lisp Machine compiler used to allow that.  It's pretty disgusting.
It would also be trivial to put a lot of other switches in the compiler that
would permit it to be more ad hoc because it was more convenient to implement
it that way.

>I have so much heartburn with SETF as a "primitive" that I'll save it
>for another day.

Well, I'd like to hear them.  It would be interesting to see what your
objections are.

>7. >MEMBER used EQ instead of EQUAL.
>
>Mea culpa, it uses EQL!

Nitpicking aside, this is hardly arbitrary -- remember that since Common
Lisp is a new dialect, there was only a secondary consideration in being
compatible with other Lisp dialects.  This decision was made, so there's no
use complaining that MEMBER in Common Lisp is a different function than MEMBER
in Maclisp.  In a previous posting I indicated one solution for porting
*existing* code.  There is a need for various older Lisp -> Common Lisp
compatibility packages, and to a large extent they can be very portable.

>1.  Interpreter Performance
>
>I believe that development under an interpreter provides
>a substantially better development environment, and that
>compiling should be a final step in development.

It depends on what implementation you're using.  Because the Lisp Machine
effort was driven by system programmers and a specialized architecture,
debugging compiled code is *easier* in most cases than debugging interpreted
code.  In non-specialized implementations, this is less likely to be true if
not many conventions of a ``virtual Lisp machine'' are honored in compiled
code.

>It is also one of LISP's major features that anonymous functions
>get generated as non-compiled functions and must be interpreted.

This is another a priori opinion.  How many mature implementations of Lisp
actually behave like this in the first place?  What are the applications of
such an accidental behavior?

>3.  "Against the Tide of Common LISP"

>The title expresses my 'agenda'.  Common LISP is not a practical,
>real world language.

OK, back to the crusade...

>It will result in LISP being seen as too slow and too expensive.  To be
>accepted, LISP must be able to run on general purpose, multi-user computers.

It takes a consultant to come up with conclusions like this, and
requirements like this...

>There must be a greater understanding of the problems and benefits
>of Common LISP, particularly by the 'naive' would-be user.

You're talking about a group with little clout.  If the analogous attitude
were true in the personal computer marketplace, then everybody would have
Macs; instead ``power users'' are perfectly happy to tweak PCs.  Everyone
starts out naive, but people who want programs written will not tolerate
naivete in the would-be implementors.  Would-be users are not catered to by
a language definition, but by good textbooks, education, and lots of
hands-on experience.

>Selling it as the 'ultimate' LISP standard is dangerous and
>self-defeating!

Who said that ?  Common Lisp is not a step forward in terms of Lisp
``features.''  By reining in the spec and getting diverse implementors to
agree on something, I can write a program on a Sun, and have it work on
(say) a Lisp Machine (Zetalisp), a Silicon Graphics box (Franz Common Lisp),
a DEC-20 (Hedrick's Common Lisp), a VAX (NIL or DEC Standard Lisp), and so
on.  Before, one had to make a decision on whether to use a safe Maclisp
subset or a safe Interlisp subset, if one indeed expected portability to be
worth one's while at all.  And then you got to show off your expertise in #+/-,
STATUS and SSTATUS, and the knowledge of n dialects' opinions on whether NTH
was 0 or 1-based.  At least now there is a Lisp which is no more repugnant
than C (actually, a lot less, in my freely admitted biased opinion) as a
portable programming language.
-- 
Robert P. Krajewski
Internet/MIT: RPK@MC.LCS.MIT.EDU
        UUCP: ...{cca,harvard,mit-eddie}!lmi-angel!rpk

jjacobs@well.UUCP (02/14/87)

In <131@spice.cs.cmu.edu> Rob MacLachlan writes, without using any
four letter words or sexual innuendo!  The improvement in his
vocabulary and manners since last summer is to be commended!

Unfortunately, his reading skills have not improved as much.

In general, I do not hold most of the views that he attributes to me,
and his ability to misinterpret what I write amazes me, particularly
since he has already seen most of this discussed previously.

I'm sure his talent for leaping to irrelevant and unwarranted conclusions is
already legendary, as are his rapier wit, subtle sarcasm, and debating
society method of argumentation.

As such, I will only reply to those points of his which deal with
my original arguments, or which need addressing, such as his
apparent lack of understanding of the basic issues of software
engineering!

>>
>>It is obviously intended to be a "compileable" language, not an interpreted
>>language. By nature it will be very slow; somebody would have to spend quite 
>> a bit of time and $ to make a "fast" interpreted version (say for a VAX).  
>Compiled = slow?  How silly of me, I thought the purpose of 
>compilation was to make code run faster.

Read the paragraph again Rob!

>However, some of the vaguenesses in the spec are quite deliberate.  People
>who have not participated in a standards effort involving many
>implementations may not appreciate how much a standard can be simplified by
>leaving behavior in obscure cases undefined.  This is quite different from
>documenting a single implementation system where you can assume that what
>the implementation does is the "right" thing.

Gee, just what the world needs; deliberately vague specs!!!

(And the COMMON LISP effort certainly doesn't begin to achieve
what the rest of the world considers a "standards effort")

>>The entire INTERLISP arena is left out of the range of compatability.
>True, and quite deliberate.  Interlisp is substantially incompatible with
>all the Lisps that we wanted to be compatible with.  Of course, this is
>largely because all of the active members of the Common Lisp design effort
>were using Maclisp family Lisps.  Other Lisp communities such as
>XEROX/Interlisp were hiding their heads in the sand, hoping we would never
>accomplish anything.

How to win friends and influence people!  I hope Rob gets tenure
at CMU, 'cause he might have a hard time getting a job elsewhere.

>>As a last shot; most of the fancy Expert Systems (KEE, ART) are implemented in
>>Common LISP.  Once again we hear that LISP is "too slow" for such things, when
>>a large part of it is the use of Common LISP as opposed to a "faster" form
>>(i.e. such as with shallow dynamic binding and simpler LAMBDA variables; they
>>should have left the &aux, etc as macros).  Every operation in CL is very
>>expensive in terms of CPU...
>Even if you personally insist on using an interpreter, vendors using Lisp as
>an implementation substrate will be less stupid.

Many vendors have *already* abandoned *compiled* Common LISP!
Interpreter speed had nothing to do with it.

>>I forgot to leave out the fact that I do NOT like lexical scoping in LISP; to
>>allow both dynamic and lexical makes the performance even worse.  
>Only in compiled code...

And I'm supposed to be ignorant about building compilers??? 

>>There is nowhere near the basic underlying set of primitives (or
>>philosophy) to start with, as there is in Real LISP (RL vs CL).  

>Not really true, although this is the closest that you have come to a valid
>esthetic argument against Common Lisp.  Once you understand it, you realize
>that there actually is a primitive subset, but this is only hinted at in
>CLTL.

There is a *BIG* difference between what I say and "hinting"!  The
failed attempt to create a subset is sufficient proof of that...

Any *true* 'core' exists only in Rob's imagination!

>>You'll notice
>>that there is almost NO defining of functions using LISP in the Steele book.
>>Yet one of the best things about Real LISP is the precise definition of a
>>function!
>Once again, this is largely a result of good standards practice.

Good standards practice = vague and poorly defined?????

>  If you say
>that a given operation is equivalent to a piece of code, then you vastly
>over-specify the operation, since you require that the result be the same
>for *all possible conditions*.  This unnecessarily restricts the
>implementation, resulting in the performance penalties you so dread.

Oh, I see.  Expecting understandable, consistent results is an 
unnecessary restriction on the implementor!!!

>Well, the times they are a changin'...  Of course, if you understood lexical
>variables, you would understand why you can't compute a variable name at run
>time and then reference it.

B.S!  All the compiled code for SET need do is check that the first argument
be lexically equivalent to a lexically apparent variable and change
the appropriate cell, stack location, or whatever.  Easy for a compiler
to do!

(This may not be what a lot of people *want*, but it is possible).

>Flamingly yours...
>                   Rob MacLachlan (ram@c.cs.cmu.edu)

 Jeffrey M. Jacobs
 CONSART Systems Inc.
 Technical and Managerial Consultants
 P.O. Box 3016, Manhattan Beach, CA 90266
 (213)376-3802
 CIS:75076,2603
 BIX:jeffjacobs
 USENET: jjacobs@well.UUCP

jjacobs@well.UUCP (02/14/87)

In 138@lmi-angel.UUCP, Bob Krajewski writes:

>>I can accept SCHEME, where you always know
>>that it's lexical, but CL could drive you crazy (especially if you were 
>>testing/debugging other people's code).
>
>Huh ?  Whether or not a variable is lexical can be determined by looking at
>its lexical context (practically an axiom, eh ?).  So if it's being used
>freely, you can assume it's special.

I was referring to *debugging* code, not compiling it.  In examining
a function, it is not *lexically* apparent whether an argument is
SPECIAL or local, i.e. if I enter

(DEFUN CONFUSE_ME (X Y Z)...

Now, you tell me if the variables are going to be dynamic or lexical?
Was a DEFVAR or a PROCLAIM issued earlier?  No way to tell, is
there?

And should I assume that in

(DEFUN FOO (FUM FIE) (LIST FUM FIE F1E))

F1E is SPECIAL?  Would you?  Especially when someone else wrote it?

Common LISP makes an old problem worse, not better.
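
To make the hazard concrete, here is an invented example (imagine the
DEFVAR buried in some other file):

  (defvar y 'whatever)            ; makes Y pervasively SPECIAL

  (defun confuse-me (y z)         ; nothing here says Y and Z differ...
    (helper))

  (defun helper () y)             ; ...yet HELPER sees CONFUSE-ME's Y

  (confuse-me 1 2)                ;=> 1.  Remove the DEFVAR and Y becomes an
                                  ; ordinary lexical; HELPER then signals an
                                  ; unbound variable error instead.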

>
>Well, maybe having a function like MAP (takes a result type, maps over ANY
>combination of sequences) is a pain to implement, but the fact there is
>quite a bit of baggage in the generic sequence functions shouldn't slow down
>parts of the system that don't use it.  The CORE of Common Lisp, which is
>lexical scoping, 22 special forms, some data types, and
>evaluation/declaration rules, is not slow at all.  It is not as elegant as
>Scheme, true, but there is certainly a manageable set of primitives.  Quite a
>bit of Common Lisp can be implemented in itself.
>

If there is a CORE, why couldn't the committee formed to produce a subset come
up with anything?  Certainly *parts* of it can be implemented in itself;
why should they then be considered a critical part of the language that
can't be done without?

I'd love to believe that what you describe is a *real* core, but I can't.
The book and the rest of the CL community say otherwise!

I will address the issue of baggage in the near future, but let me state that
there is one major piece of baggage which I believe has a more adverse
effect on CL than anything else, and that is the absurd complexity
of the LAMBDA list!  Function calling will never be the same :-)

>Well, (SETF) does nothing.  You probably wouldn't write this, but again, a
>macro would find it useful.  Should (LIST) signal an error too ?

One of the most common errors found in software is an improper number
of arguments.  This has plagued more programs and languages than
I care to recall.  Common LISP has given up the built-in error
checking of previous LISPs.

A macro which generates a function with zero arguments should
almost certainly be checking for the correct number of arguments.

>>Care to explain this to a "beginner"?  Not to mention that SETF is a
>>MACRO, by definition, which will always take longer to evaluate.
>
>Since you're a beginner, by your own admission, why do you think that a form
>which is a macro call will be noticeably more expensive (in the interpreter,
>the compiled case can't ever be slower) ?

You misunderstand me; I'm not a beginner.  In fact, I'm an old fogey :-)
I was one of the co-developers of UCI LISP, and crawled through
INTERLISP, MACLISP and of course Stanford LISP probably before
you could read :-)

You know perfectly well that interpreting macros takes more time!
First you have to EVAL the form, and then give the result to EVAL
again!  (And I *don't* consider it kosher to destructively expand
without the user's control).

>(Oh, by the way, they reversed the arguments to CONS, ha ha...)

You mean (CONS 'A 'B) => (B . A)? :-)

>It won't introduce bugs into new code written by people who read the manual
>and understand the interface and semantics of MEMBER.  Your (legitimate)
>obstacle is porting ``traditional Lisp'' code to Common Lisp.

Hey, CLtL specifically states that one objective is to remain compatible
as much as possible.  Now MEMBER is a very basic primitive, going
back even before my time!

>Well, the main winning alternative is even further away from your Real Lisp
>than Common Lisp is: Scheme, or T, which can pretty much be turned into a
>systems programming language.
>--
>Robert P. Krajewski

What about Le_LISP, or the proposed ISO/EU-LISP?  Part of the
problem with the enormous amount of effort devoted to Common LISP
is its stifling of other work in the United States.

 Jeffrey M. Jacobs
 CONSART Systems Inc.
 Technical and Managerial Consultants
 P.O. Box 3016, Manhattan Beach, CA 90266
 (213)376-3802
 CIS:75076,2603
 BIX:jeffjacobs
 USENET: jjacobs@well.UUCP

levin@ucla-cs.UUCP (02/14/87)

Can we all give Jeff Jacobs some slack?  We should all 
recognize by now that "Against the tide..." (got that copyrighted
yet jeff?) is his pet area. :-)

It all began back in the days when he wrote UCI Lisp with Meehan...

shebs@utah-cs.UUCP (02/14/87)

I really shouldn't respond to this crud, but as one of the people who has
spent quite a bit of time thinking about CL subsets,  I wanted to correct
the following misstatement:

In article <2582@well.UUCP> jjacobs@well.UUCP (Jeffrey Jacobs) writes:

>>The CORE of Common Lisp, which is
>>lexical scoping, 22 special forms, some data types, and
>>evaluation/declaration rules, is not slow at all.  It is not as elegant as
>>Scheme, true, there is certainly a manageable set of primitives.  Quite a
>>bit of Common Lisp can be implemented in itself.
>
>If there is a CORE, why couldn't the committee to produce a subset come
>up with anything?  Certainly *parts* of it can be implemented in itself;
>why should they then be considered a critical part of the language that
>can't be done without?

First, there have been several proposals for subsets.  They typically have
these problems:

1. Missing but desired features.  Even within Utah, a subset that would make
everyone here happy contains over 400 functions.  About the only thing
that everybody agrees on is that NSUBSTITUTE-IF-NOT should be omitted.
If you take all groups, there is even more variance about what is important.

2. Lack of orthogonality.  If CL had a MEMBER that does EQUAL tests and a
MEMQ that does EQ tests, but only an ASSOC that does EQUAL tests and no ASSQ,
you can bet that everybody would moan and complain about it being
"non-orthogonal".  Similarly for UNION and UNIONQ, INTERSECTION and INTER-
SECTIONQ, and so on.  It's a no-win situation for designers;  either they
add strange functions and be accused of making a fat language, or leave them
out and be accused of inconsistency.

3. Semantic interconnection.  Parts of a design interact with each other.
The time functions use the Universal Time standard that counts in seconds
from 1/1/1900.  It's a better choice than the Unix pseudo-standard 1/1/1970,
but unfortunately universal time is almost always a bignum, so you have to
have bignums around.  Or consider keywords.  If you decide that keywords
to functions are bad and throw them out, what do you do about DEFSTRUCT
constructor functions?  I know what's been done in the past, and it's enough
to make one retch  - like making constructor macros instead of functions :-( .
Again it's a no-win for designers, since if they didn't make things
interconnected, people would bitch about needless duplication of language concepts.
There is still quite a bit of interest in trying to modularize, but it's
beyond the state of the art.  (EuLisp was supposed to be like that, and it
seems to have bogged down...)

4. Inconsistent extensions.  If the standard does not say anything about
a SORT function, then inevitably several people will write mutually
inconsistent packages for sorting.  This generates two sub-problems:  first,
programs are continually loading this module or that module.  PSL for instance
has hundreds of modules that can be loaded, but it's a pain in practice to
forget one of them (and autoloading has its disadvantages as well).  More
importantly, one can get gray hairs trying to integrate two programs each
using a SORT with different arguments and behavior (maybe one package needs
a destructive SORT, and the other a non-destructive one).  If you standardize
SORT, the problems go away.

So those are some of the more significant reasons why no subsets are favored.
I urge people to get copies of old drafts of CL and the archived discussions.
They're extremely interesting, and one gets a sense of the number of competing
interests that had to make what they considered to be major compromises
in order to produce anything at all.  Would-be introducers of new Lisp
dialects should especially study the material and reflect on their chances
of success...

>...there is one major piece of baggage which I believe has a more adverse
>effect on CL than anything else, and that is the absurd complexity
>of the LAMBDA list!

In PCLS we do interprocedural analysis to completely eliminate the overhead
of complex lambdas by reducing them to simple calls.  It's extremely effective.
See Utah PASS Project Opnote 86-01 for details.

>Part of the
>problem with the enormous amount of effort devoted to Common LISP
>is it's stifling of other work in the United States.

That's a problem with any standard.  Who is it that's being stifled
anyway?  Not me...

> Jeffrey M. Jacobs
							stan shebs

ram@spice.cs.cmu.edu.UUCP (02/16/87)

>Subject: Re: Against the Tide of Common LISP
>Date: 13 Feb 87 19:00:00 GMT
>Nf-From: uicsrd.CSRD.UIUC.EDU!sharma    Feb 13 13:00:00 1987
>
>
>	There is a pretty good critique of Common Lisp in :
>
>	"A Critique of Common Lisp" by Rodney Brooks and Richard Gabriel 
>(Stanford). It appeared in the proceedings of the 1984 ACM Symposium on
>Lisp and Functional Programming.

Yeah, this paper is reasonably coherent, but should be taken with a grain 
of salt.  Some of the arguments in it are semi-bogus in that they present a
problem, but don't present simple, commonly used solutions that largely
solve the problem.

For example, in one section complaining about the inefficiency of the
complex calling mechanisms and their use in langauge primitives, they
basically construct a straw man out of SUBST (or some similar function).

What they do is observe that SUBST is required to take keywords in Common
Lisp and that the obvious implementation of SUBST is recursive.  From this
they leap to the conclusion that a Common Lisp SUBST must gather all the
incoming keys into a rest arg and then use APPLY to pass the keys into
recursive invocations.  If this was really necessary, then it would be a
major efficiency problem.  Fortunately, this is easily fixed by writing an
internal SUBST that takes normal positional arguments, and then making the
SUBST function call this with the parsed keywords.  It is also easy to make
the compiler call the internal function directly, totally avoiding keyword
overhead.
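
The fix looks roughly like this (a sketch with invented names; the real code
also handles :TEST-NOT, :KEY and so forth, and is correspondingly hairier):

  (defun %my-subst (new old tree test)         ; internal worker, positional args
    (cond ((funcall test old tree) new)
          ((atom tree) tree)
          (t (cons (%my-subst new old (car tree) test)
                   (%my-subst new old (cdr tree) test)))))

  (defun my-subst (new old tree &key (test #'eql))
    ;; parse the keywords exactly once, then recurse positionally; a compiler
    ;; transform can call %MY-SUBST directly and skip even this step.
    (%my-subst new old tree test))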

Now presumably the authors knew that this overhead could be avoided by a
modest increment in complexity, but this isn't at all obvious to readers not
familiar with Common Lisp implementation techniques.

As I remember, the paper also complained about the excessive hair in the
Common Lisp ARRAY type preventing the obvious implementation.  I agree that
adjustable and displaced arrays are largely useless, and not worth the
overhead.  There is no doubt that they got in the language because lisp
machine compatibility was our number one compatibility priority.  The
element of bogosity comes into the argument when they neglect to mention
Common Lisp's SIMPLE-ARRAY type, which can be used as a declaration to tell
the system that you haven't done anything weird with this array, and it can
be accessed in a reasonable fashion.  This invalidates any argument of
inherent inefficiency of Common Lisp arrays, although it does impose on the
user a bit by requiring the declaration.

Probably the best criticism that they leveled against Common Lisp was aimed
at the numeric types and operations.  Since Common Lisp only supports generic
arithmetic, extensive declarations and at least some compiler smarts are
required to generate good code for conventional architectures.  On the other
hand, I think that there are powerful cleanliness and portability arguments
in favor of the generic arithmetic decision.

The COMPLEX, RATIO and to a lesser degree BIGNUM types also require
substantial work to implement, yet are not used all that much by "ordinary"
code (whatever that is).  A lot of the complexity of numbers in Common Lisp 
was motivated by a desire to "do numbers right" in hopes that Common Lisp
would be taken seriously for number crunching.  This is definitely a break
with the past, when most implementations had poorly defined and implemented
floating point support.

I also point out that, despite any misgivings voiced in the paper, Gabriel
is a major player in Lucid Inc., whose sole product is a Common Lisp 
implementation.  Evidently he believes that it is a practical, real-world
programming language.

I think that there is a good chance that Common Lisp will become the
"FORTRAN of Lisps".  Some of the constructs will seem bizzare, and many of
the restrictions will seem arbitrary; nobody will attempt to defend it
esthetically, but many people will get lots of work done.

  Rob

ram@spice.cs.cmu.edu.UUCP (02/16/87)

Since some people may not have understood my claims for the desirability of
a standard not specifying everything, I will elaborate.

Consider the DOTIMES macro.  In CMU Common Lisp,
  (dotimes (i n) body) ==>

  (do ((i 0 (1+ i))
       (<gensym> n))
      ((>= i <gensym>))
    body)

Now, if Common Lisp required this implementation, it would imply that
setting "I" within the body is a meaningful thing to do.  Instead, Common
Lisp simply specifies in English what DOTIMES does, and then goes on to say
that the result of setting the loop index is undefined.  This allows the
implementation to assume that the loop index is not set, possibly increasing
efficiency.

The same sort of issues are present in the "destructive" functions, possibly
to a greater degree.  If an implementation was specified for NREVERSE, then
users could count on the argument being destructively modified in a
particular way.  This is bad, since the user doesn't need to know how the
argument is destroyed as long as he uses the result properly, and requiring
the argument to be modified in a particular way would have strong
interactions with highly implementation-dependent properties such as storage
management disciplines.  For example, in some implementations it might be
most efficient to make the "destructive" operations identical to the normal
operations, and not modify the argument at all.
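
The practical rule this leaves the user with is simple enough (L is just a
placeholder):

  (setq l (nreverse l))      ; right: always use the returned value
  (nreverse l)               ; wrong: whether L ends up reversed afterwards
                             ; is up to the implementation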

In any case, the tremendous complexity of Common Lisp would make it very
difficult to specify it all in a formal way such as that used in the ADA
language specification.  When reading the Common Lisp manual, you must
assume that whenever the meaning of a construct is not explicitly specified,
it is undefined, and therefore erroneous.  

This difficulty of complete specification can be used as an argument against
complex languages such as Common Lisp, but you should remember that
specification is not an end in itself; languages exist to be used.
Completeness of specification certainly doesn't seem to predict language
success.  Consider Algol 68 and C.

  Rob

larryb@bcsaic.UUCP (02/16/87)

In article <2582@well.UUCP> jjacobs@well.UUCP (Jeffrey Jacobs) writes:
>a function, it is not *lexically* apparent whether an argument is
>SPECIAL or local, i.e. if I enter
>
>(DEFUN CONFUSE_ME (X Y Z)...
>
>Now, you tell me if the variables are going to be dynamic or lexical?
>Was a DEFVAR or a PROCLAIM issued earlier?  No way to tell, is
>there?

It is precisely for that reason that ALL special variables should be
so noted by the use of surrounding asterisks, such as *terminal-io*.
This does not, admittedly, rebut your complaint.  However, as in most
cases, maintainability is still a programmer's responsibility.  If I
have to maintain your code, I will be upset if you do not stick with
this convention.

>You know perfectly well that interpreting macros takes more time!
>First you have to EVAL the form, and then give the result to EVAL
>again!  (And I *don't* consider it kosher to destructively expand
>without the user's control).
>
What is your alternative?  Don't use macros if you don't like them.  How
else could they be implemented that would not cause an extra step at
eval time?

LSB
-- 

* The opinions expressed are not necessarily those of my employer *

andy@Shasta.UUCP (02/17/87)

In article <2582@well.UUCP> jjacobs@well.UUCP (Jeffrey Jacobs) writes:
>You know perfectly well that interpreting macros takes more time!
>First you have to EVAL the form, and then give the result to EVAL
>again!  (And I *don't* consider it kosher to destructively expand
>without the user's control).

It is possible to cache the result of macro expansion without
destructively modifying user code.  If this technique is implemented
correctly, this cache is flushed appropriately when a macro is redefined
(or changed to a function).  In other words, speed is the only difference
between using this technique and re-expanding the macro anew each time the
relevant code is eval'd.
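
Something crude along these lines would do (a sketch only, assuming the
interpreter funnels expansion through one function; real implementations are
surely cleverer):

  (defvar *expansion-cache* (make-hash-table :test #'eq))  ; keyed by the form itself

  (defun cached-macroexpand-1 (form &optional env)
    (or (gethash form *expansion-cache*)
        (setf (gethash form *expansion-cache*)
              (macroexpand-1 form env))))

  (defun flush-expansion-cache ()      ; call whenever a macro is (re)defined
    (clrhash *expansion-cache*))

The user's code is never touched; throwing the cache away costs nothing but
speed.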

If one is slightly more clever, one can apply a variation of this
technique to function bodies and perform some other analysis at the
same time so that interpreted code runs fairly fast.  (No, you don't
have to do it when the function is first called.  There are other
opportunities.)  Of course, you have to do this right or it gets in
the way of debugging, but ....

I'm sure that major CL vendors use better techniques than I can come
up with in 10 minutes.

I've forgotten why JJ is so down on macros; doesn't "real" lisp have
them?

>    Certainly *parts* of [Common Lisp] can be implemented in itself;
>why should they then be considered a critical part of the language that
>can't be done without?

Every lisp dialect that I've written non-trivial programs in (Interlisp,
Maclisp, Franz, T) predefines forms that I could have defined myself using
other forms.  This is good.  I prefer to build on other people's work;
Turing machine programming is so tedious.  Since CL is not a minimal
language, each vendor can choose a different core and implement the rest
of the language using it.  This too is good; it leads to higher performance.

-andy
-- 
Andy Freeman
UUCP:  ...!decwrl!shasta!andy forwards to
ARPA:  andy@sushi.stanford.edu
(415) 329-1718/723-3088 home/cubicle

jjacobs@well.UUCP (02/18/87)

In <141@lmi-angel.UUCP>, Bob Krajewski writes:

>>When it appeared last year, the results were 3:1 *against* CL, mostly
>>via Mail.
>
>What exactly are you trying to imply here ?  What were the circumstances of
>rejection ?

Simple!  Last time I started this discussion, most of the comments
received were private, not public, and most of them were of the form "I
don't like CL much either"!

>>I have so much heartburn with SETF as a "primitive" that I'll save it
>>for another day.
>
>Well, I'd like to hear them.  It would be interesting to see what your
>objections are.

Real Soon Now :-)

>>7. >MEMBER used EQ instead of EQUAL.
>>
>>Mea culpa, it uses EQL!
>
>Nitpicking aside, this is hardly arbritrary -- remember that since Common
>Lisp is a new dialect, there was only a secondary consideration in being
>compatible with other Lisp dialects.

Foolish me, I believed the book!

>In non-specialized implementations, this is less likely to be true if
>not many conventions of a ``virtual Lisp machine'' are honored in compiled
>code.

At what point should a compiler go for full out machine dependent 
optimization, as opposed to honoring a "virtual LISP machine"?
(Not that any such VLM is defined by Common LISP anyway).

>>It is also one of LISP's major features that anonymous functions
>>get generated as non-compiled functions and must be interpreted.

I was referring to CONS'ed functions created at run time.  Sorry if
that wasn't clear...

>OK, back to the crusade...
>
>>It will result in LISP being seen as too slow and too expensive.  To be
>>accepted, LISP must be able to run on general purpose, multi-user computers.
>
>It takes a consultant to come up with conclusions like this, and
>requirements like this...
>
>>There must be a greater understanding of the problems and benefits
>>of Common LISP, particularly by the 'naive' would-be user.

It takes a LISP Machine Vendor to ignore that large a market :-)

Seriously, how are sales of LISP machines to the commercial sector?
What percentage of sales are to Universities and DARPA/DoD funded
R&D?  How many LISP machines have been sold to banks?  To
Insurance Companies?  To aerospace companies?

The simple truth is that the perception that LISP is big and slow
is extremely common.  It also happens to be *true*.  LISP
implementors have been dreaming up features that outstrip
improvements in hardware since before I wrote my first
function in LISP, finally resulting in a storage
management scheme where 50% of memory is always unused :-)

>>Selling it as the 'ultimate' LISP standard is dangerous and
>>self-defeating!
>
>Who said that ?  Common Lisp is not a step forward in terms of Lisp
>``features.''  By reining in the spec and getting diverse implementors to
>agree on something, I can write a program on a Sun, and have it work on
>(say) a Lisp Machine (Zetalisp), a Silicon Graphics box (Franz Common Lisp),
>a DEC-20 (Hedrick's Common Lisp), a VAX (NIL or DEC Standard Lisp), and so
>on.

Boy, are you a dreamer! Last year, when I made the rounds of non-SPICE
derived Common LISPs, I managed to break every one within 10 minutes!
And if you can get anything to run in NIL, please let me know how!

And of course, anything that gets developed on any of the LISP
machines is guaranteed to have non-CL code in them!

I'm not clear on one thing; when you say "reining in the spec", are
you referring to Common LISP as 'being a reined-in spec' or saying
that Common LISP needs to be 'reined in'?

>At least now there is a Lisp which is no more repugnant
>than C (actually, at lot less, in my freely admitted biased opinion) as a
>portable programming language

This is stretching the term "portable"; one assumes that portable
means "easily" transported :-)  Code written in Common LISP
may be portable, but the language itself sure isn't!!!

>Robert P. Krajewski

 Jeffrey M. Jacobs
 CONSART Systems Inc.
 Technical and Managerial Consultants
 P.O. Box 3016, Manhattan Beach, CA 90266
 (213)376-3802
 CIS:75076,2603
 BIX:jeffjacobs
 USENET: jjacobs@well.UUCP

jjacobs@well.UUCP (02/18/87)

In <2626@mcc-pp.UUCP>, Patrick McGehearty writes:

>... On compilers vs interpreters
>As a systems and performance measurement type, I have always been
>concerned with how fast my programs run.  One of the critical
>measures of success of OS code is how fast it is perceived to be.

We not only share a common background, but a common *concern*;
that of performance!  In fact, this is one of my biggest gripes
about Common LISP; in exchange for very little, if any, improvement in
functionality, it requires an enormous increase in CPU and
memory.

It is hard to believe, but combining all those options for
LAMBDA lists, allowing the 'initform' of DEFVAR to not be
executed until the variable value is needed, implicit lexical
closures, etc., result in a dramatic increase in CPU and memory
requirements, both directly and indirectly.

The worst part is that you *cannot* get around them!  You
can elect to use SETQ instead of SETF, but you can't elect
to use MEMQ instead of MEMBER!
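
(Yes, you can paper over individual cases yourself; a sketch, with the
obvious definition:

  (defmacro memq (item list)
    `(member ,item ,list :test #'eq))

but that only helps the code you write yourself.)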

>Also, old rumors about programs behaving differently in compiled
>and interpreted mode made me distrust the interpreter as a naive user.

Well, Common LISP is supposed to be the same.  Most experienced
LISP programmers will tell you that even with the differences
between compiler and interpreter, it was seldom a problem.

>Breakpoints and function tracing are still available as well
>as the old, old reliable of print statements.  Indeed, when at
>a breakpoint, I can rewrite and recompile any function that I am
>not currently within...
>I claim that this approach to Lisp development is followed without lossage
>by many of the new arrivals to the Lisp world.

You can't know what you are missing if you've never had it!

Interpreted LISP can provide a debugging and development
environment that is far beyond that of break, trace and print!

There is a tremendous amount of seminal work, primarily in and
from InterLISP, that is only possible in an interpreter.  Such
things as automatic error correction, being able to change
a function that you are currently 'in', being able to alter and
modify the flow and results of a lengthy computation, etc.

I've worked in situations where Integration and Test literally
can take hours or days; the ability to change something that
is in progress without having to restart from scratch would be an
enormous asset in such situations.

I recommend looking at some of the capabilities of InterLISP
(or even UCI LISP) to understand what is potentially being lost.

>...on Common Lisp environments
>I recognize that Lisp machines are too expensive for most developers,
>but workstations such as Sun now have Common Lisp compilers
>(from Kyoto, Franz, and Lucid at a minimum), with runtime
>environment development continuing.  I claim that reasonable
>Common Lisp development environments are available on $15,000 workstations
>and multiuser systems such as Vaxes and Sequents today, and will be
>available soon on $5000 PCs (based on high performance, large address
>space chips such as M68020 or Intel 386)

To quote a system manager, "Common LISP is a great tool for turning
a VAX 780 into a single user machine!"

To quote Charles Hedrick,
"Personally I would have wished for CL to be smaller.  As the manager
of a number of timesharing systems, I cringe at a Lisp where each user
takes 8 MB of swap space"

The problem is that you have been hoodwinked into believing that you
can't have similar capabilities and functionality without a LISP
machine or dedicated workstation.  (Do you think the LISP
machine vendors would have been happy with a small core of
primitives that would run on anything, and with successively
complex layers for those who want them?)

The French produce a LISP called Le_LISP; it is "standard" across
VAXes, MS/PC-DOS, Macintosh and various 68000 workstations.
There is also a VLSI implementation in progress.  They begin
by defining a very simple "virtual machine", with a great deal
of thought given to how people actually write LISP code (as
opposed to the CL committee, whose basic approach was how
people *might* want to write code :-).

According to Dr. Lee Rice of Marquette (DEC Professional, March 
1986), Le_LISP on a VAX 780 supported an additional
37 students for an AI class with no noticeable degradation even
during peak periods!

The Rice article gave a simple FIBONACCI benchmark; interpreted,
LE_LISP on a VAX 780 took 4.25 seconds, compared to 8.9 for
VAX InterLISP, 16.1 for FRANZ, and 29 seconds on a Symbolics.  "Optimized
Compiled" gave 0.12 for LE_LISP and 0.15 for Symbolics.
(BTW, anybody having Gabriel benchmarks for LE_LISP on
other than Mac or PC, please let me know).
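
(The FIBONACCI benchmark is presumably the usual doubly recursive
definition, something like

  (defun fib (n)
    (if (< n 2)
        n
        (+ (fib (- n 1)) (fib (- n 2)))))

which is nothing but function calls and small-integer arithmetic, i.e.
exactly what an interpreter spends most of its time on.)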

The Macintosh version will run on a 512K MAC, and will execute
the BROWSE benchmark!  I know of no other serious commercial
implementation that will run BROWSE in 512K.  It runs TAK on a
512K MAC in 62 seconds, *interpreted* (and remember that the
MAC has a ridiculous amount of overhead).

So take your daily dose of salt!

>...on portability
>Implementors of major commercial programs want as wide a potential
>market for their product as possible.  Thus, they chose to implement
>in the best available PORTABLE environment, rather than the best
>environment.  Common Lisp appears the best choice.
>Researchers without the requirement for portability
>may choose other environments such as Scheme or InterLisp.

Most implementors of commercial Expert Systems have *abandoned*
Common LISP, primarily due to the abysmal performance.

Portability is certainly desirable in a language, but the high
cost of CL far outweighs its 'portability'.  Common syntax
and semantics are wonderful, but the ability to run in a cost
effective manner is also important!

>...on commonality
>I was shocked to discover that MacLisp and InterLisp are significantly
>more different than C and Pascal.  I am surprised that they
>are commonly lumped together as the same language.  Scheme is
>yet farther away both in syntax and philosophy.  All are in the
>same family just as C and Pascal are both related to Algol60,
>but beyond that...

You will also notice that InterLISP is *one* language, whereas
MacLISP is the root from which almost all of the other dialects
sprang.

(I also don't consider SCHEME to be a LISP dialect; I consider it
a separate language, with similar syntax).

You will note that I have not defended any particular dialect of
LISP.   My main complaint is that Common LISP is not only not
an improvement on other dialects, but is a major step backward in
language design.
Common LISP is enormously wasteful of CPU and memory, 
and ignores nearly all of the lessons learned throughout the years
in the field of software engineering.

>Someone should write a book describing the "definitive" core of the language,
>followed by reasonable macros and library functions for the rest of
>the language.

There is no "definitive core" at this time, (nor is there a conceptual
core).

If you examine the history of LISP, it started with a very
small, well defined set of primitives and grew explosively.  But
as large as it grew, it was still defined in terms of 'smaller'
operations.  See the NEW UCI LISP Manual and the InterLISP
manual, or, if you can get your hands on one, an old MACLISP manual.

It is hard to believe that a language with LISP's historical roots
would result in something as broadly defined as Common LISP.

Hopefully, the ANSI Committee will improve on the situation, but
I doubt that the fundamental design flaws can be eliminated.

>-- Patrick McGehearty,
>   representing at least one view of the growing community of Lisp users.

 Jeffrey M. Jacobs
 CONSART Systems Inc.
 Technical and Managerial Consultants
 P.O. Box 3016, Manhattan Beach, CA 90266
 (213)376-3802
 CIS:75076,2603
 BIX:jeffjacobs
 USENET: jjacobs@well.UUCP
P.S.  You do know that the real motivation behind the development of
LISP machines was to have something to run EMACS on? :-)

jjacobs@well.UUCP (02/18/87)

In <1137@spice.cs.cmu.edu>, Rob MacLachlan writes:
>>Subject: Re: Against the Tide of Common LISP
>>Date: 13 Feb 87 19:00:00 GMT
>>Nf-From: uicsrd.CSRD.UIUC.EDU!sharma    Feb 13 13:00:00 1987
>>
>>       There is a pretty good critique of Common Lisp in :
>>
>>       "A Critique of Common Lisp" by Rodney Brooks and Richard Gabriel 
>>(Stanford). It appeared in the proceedings of the 1984 ACM Symposium on
>>Lisp and Functional Programming.
>
>Yeah, this paper is reasonably coherent, but should be taken with a grain 
>of salt.  Some of the arguments in it are semi-bogus in that they present a
>problem, but don't present simple, commonly used solutions that largely
>solve the problem.

>For example, in one section complaining about the inefficiency of the
>complex calling mechanisms and their use in language primitives, they
>basically construct a straw man out of SUBST (or some similar function).
>
>What they do is observe that SUBST is required to take keywords in Common
>Lisp and that the obvious implementation of SUBST is recursive.  From this
>they leap to the conclusion that a Common Lisp SUBST must gather all the
>incoming keys into a rest arg and then use APPLY to pass the keys into
>recursive invocations.  If this was really necessary, then it would be a
>major efficiency problem.  Fortunately, this is easily fixed by writing an
>internal SUBST that takes normal positional arguments, and then making the
>SUBST function call this with the parsed keywords.  It is also easy to make
>the compiler call the internal function directly, totally avoiding keyword
>overhead.
>
>Now presumably the authors knew that this overhead could be avoided by a
>modest increment in complexity, but this isn't at all obvious to readers not
>familiar with Common Lisp implementation techniques.

Ok, let's try another example.  Let's assume that SUBST is contained in
a loop requiring 1,000,000 executions.  What "largely" solves this problem?

And to prove my case, let me quote from a REAL EXPERT:

"Common LISP has very powerful argument passing mechanisms.
Unfortunately, two of the most poweful mechanisms, rest arguments
and keyword arguments, have a serious performance penalty
in Spice LISP.

...

Neither problem is serious unless thousands of calls are being made
to the function in question..."

- Spice LISP User's Guide, Chapter 5, Efficiency, by Rob MacLachlan.

Of course the real problem with Common LISP is that the user has
no choice; there are no alternate primitives which don't involve the
keyword overhead, so the experienced user must instead rely on the
implementor for efficiency.  There is no guarantee, or even a good estimate,
of how the efficiency will vary from machine to machine, or implementation
to implementation, thus offsetting some of the great claims of
portability.  (What runs well on one implementation may
run terribly on another).

>I also point out that, despite any misgivings voiced in the paper, Gabriel
>is a major player in Lucid Inc., whose sole product is a Common Lisp 
>implementation.  Evidently he believes that it is a practical, real-world
>programming language.

A man who combines good aesthetic judgement with good business
judgement.  Sell 'em what they want, not what they need!
After all, "nobody ever went broker by underestimating
the taste of the American consumer":-)

>  Rob


 Jeffrey M. Jacobs
 CONSART Systems Inc.
 Technical and Managerial Consultants
 P.O. Box 3016, Manhattan Beach, CA 90266
 (213)376-3802
 CIS:75076,2603
 BIX:jeffjacobs
 USENET: jjacobs@well.UUCP

jjacobs@well.UUCP (02/18/87)

In <1138@spice.cs.cmu.edu>, Rob MacLachlan writes:

>Since some people may not have understood my claims for the desirability of
>a standard not specifying everything, I will elaborate.
>Consider the DOTIMES macro.  In CMU Common Lisp,
>  (dotimes (i n) body) ==>
>
>  (do ((i 0 (1+ i))
>       (<gensym> n))
>      ((>= i <gensym>))
>    body)

>Now, if Common Lisp required this implementation, it would imply that
>setting "I" within the body is a meaningful thing to do.  Instead, Common
>Lisp simply specifies in English what DOTIMES does, and then goes on to say
>that the result of setting the loop index is undefined.  This allows the
>implementation to assume that the loop index is not set, possibly increasing
>efficiency.

There are two possibilities here: 1. the implementation allows
the "i" to be set, as in some other, older languages, or 2.
the disclaimer can be made in English.  "Result is undefined"
is a valid specification; not specifying things is a different
animal.

>The same sort of issues are present in the "destructive" functions, possibly
>to a greater degree.  If an implementation was specified for NREVERSE, then
>users could count on the argument being destructively modified in a
>particular way.  This is bad, since the user doesn't need to know how the
>argument is destroyed as long as he uses the result properly, and requiring
>the argument to be modified in a particular way would have strong
>interactions with highly implementation-dependent properties such as storage
>management disciplines.  For example, in some implementations it might be
>most efficient to make the "destructive" operations identical to the normal
>operations, and not modify the argument at all.
>

I see; as long as the result of (RPLACA X Y) is any CONS cell with
the CAR set to Y and the CDR the same as before, then this is
ok!!!!

>In any case, the tremendous complexity of Common Lisp would make it very
>difficult to specify it all in a formal way such as that used in the ADA
>language specification.  

See the InterLISP manual, and others.

> When reading the Common Lisp manual, you must
>assume that whenever the meaning of a construct is not explicitly specified,
>it is undefined, and therefore erroneous.  

I've seen enough specs in my time to know that making those kinds
of assumptions is deadly.  Further, that is nearly the
definition of a bad specification!

>Completeness of specification certainly doesn't seem to predict language
>success.  Consider Algol 68 and C.

C isn't successful??????

>  Rob

 Jeffrey M. Jacobs
 CONSART Systems Inc.
 Technical and Managerial Consultants
 P.O. Box 3016, Manhattan Beach, CA 90266
 (213)376-3802
 CIS:75076,2603
 BIX:jeffjacobs
 USENET: jjacobs@well.UUCP

chan@hpfclp.UUCP (02/18/87)

>>Well, the times they are a changin'...  Of course, if you understood lexical
>>variables, you would understand why you can't compute a variable name at run
>>time and then reference it.

>B.S!  All the compiled code for SET need do is check that the first argument
>be lexically equivalent to a lexically apparent variable and change
>the appropriate cell, stack location, or whatever.  Easy for a compiler
>to do!

I don't see how it's possible to do this (excuse my potential ignorance).
Once the target argument for SET is evaluated and you have some symbol,
how does the compiled code decide whether or not the symbol identifies
a lexical variable? It seems to me that the information identifying the
names of lexical variables (and their place on the stack) has been compiled 
away. It seems like this can't be done for the same reason that you
can't EVAL a form that contains a reference to a lexical variable.

Anyway, if you're keeping track, I prefer to ride the wave rather than
go against the tide. Hang Ten!

			-- Chan Benson
			{ihnp4 | hplabs}!hpfcla!chan
			Hewlett-Packard Company
			Fort Collins, CO

As usual, HP has nothing to do with what I say here.

michaelm@bcsaic.UUCP (02/20/87)

In article <2602@well.UUCP> jjacobs@well.UUCP (Jeffrey Jacobs) writes:
>
>In <2626@mcc-pp.UUCP>, Patrick McGehearty writes:
>>Also, old rumors about programs behaving differently in compiled
>>and interpreted mode made me distrust the interpreter as a naive user.
>
>Well, Common LISP is supposed to be the same.  Most experienced
>LISP programmers will tell you that even with the differences
>between compilers and interpreter, it was seldom a problem.

Well, I doubt that I count as an *experienced* LISP programmer, but the
difference between compiler and interpreter is a constant problem for me.
I've been running on VAXs and Suns; maybe it's different on LISP machines...
Some of it can be attributed to buggy implementations (e.g. I've never
succeeded in changing the readtable in compiled Franz), more of it is due to
the difference in scoping, which (hopefully) isn't a problem with Common Lisp
(or Scheme).  But the biggest problem for me--and one that still causes me
headaches in CL-- is when a program needs to refer to functions or
structures that are defined in other files.  I need to do "eval-when"s to load
the files, and invariably I get the condition under which to load wrong...
What I *really* want is a construct like
	defined-in <list of (<list of functions or structures> <file w/ def>)>
which would be smart enough to look for the definition of just those functions
at compile time for compiled code, and at load time for interpreted code,
and preserve the definitions. 
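
Something along these lines can be faked with what Common Lisp
already provides; a rough sketch (the file name and the FBOUNDP
guard are made up for illustration):

;; Make the dependency available both when this file is compiled
;; and when the compiled or interpreted code is later loaded.
(eval-when (compile load eval)
  (unless (fboundp 'needed-function)   ; crude "already loaded?" test
    (load "other-file")))

;; A real DEFINED-IN would do this bookkeeping automatically from the
;; (<functions> <file>) pairs, which is exactly the part that is easy
;; to get wrong by hand.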
-- 
Mike Maxwell
Boeing Advanced Technology Center
	arpa: michaelm@boeing.com
	uucp: uw-beaver!uw-june!bcsaic!michaelm

jjacobs@well.UUCP (02/22/87)

In <1284@Shasta.STANFORD.EDU>, Andy Freeman writes:

>I've forgetten why JJ is so down on macros; doesn't "real" lisp have
>them?

I'm not down on macros; I am down on SETF.

In a nutshell, SETF is essentially
a primitive, i.e. there are no corresponding operations for
many of its features, so, as a macro, it becomes excessively
expensive, particularly for arrays.  I also don't believe that
'primitives' should take anything other than a fixed number of arguments.

Both of these aspects should be reserved for a "higher level".
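
To illustrate: the expansions below are invented (every
implementation expands SETF differently, which is part of the
point), but they are typical of the shape:

(SETF (CAR X) Y)
;; might expand into something close to the primitive:
;;   (PROGN (RPLACA X Y) Y)

(SETF (AREF A I J) V)
;; has no user-visible primitive at all; it typically expands into a
;; call on some implementation-internal array setter, e.g.
;;   (INTERNAL-ASET A I J V)
;; whose cost the programmer can neither see nor choose.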

 Jeffrey M. Jacobs
 CONSART Systems Inc.
 Technical and Managerial Consultants
 P.O. Box 3016, Manhattan Beach, CA 90266
 (213)376-3802
 CIS:75076,2603
 BIX:jeffjacobs
 USENET: jjacobs@well.UUCP

jjacobs@well.UUCP (02/22/87)

In <6950002@hpfclp.HP.COM>, Chan Benson writes:

>>B.S!  All the compiled code for SET need do is check that the first argument
>>be lexically equivalent to a lexically apparent variable and change
>>the appropriate cell, stack location, or whatever.  Easy for a compiler
>>to do!

>I don't see how it's possible to do this (excuse my potential ignorance).

Let's take a simple example:

(DEFUN FOO (X Y Z) (SET X (CONS Y Z)))

The compiler would have to generate code that would effectively
be equal to

(COND	((EQ X 'X)  (SETQ X (CONS Y Z)))
	((EQ X 'Y)  (SETQ Y (CONS Y Z)))
	((EQ X 'Z)  (SETQ Z (CONS Y Z)))
	(T (SET-SYMBOL-VALUE X (CONS Y Z))))

i.e. a hidden 'macro-expansion' at compile time.  This should put
to rest assertions that it "can't be done"; it's actually trivial.
(Whether it's desirable or not is another discussion).
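
The same dispatch can even be written at the source level; a minimal
sketch (LEXICAL-SET is a made-up macro, shown only to make the
expansion explicit; a compiler doing it automatically would supply
the variable list itself):

(DEFMACRO LEXICAL-SET (NAME-FORM VALUE-FORM &REST LEXICAL-VARS)
  ;; Expands into the dispatching COND shown above.  The first
  ;; argument appears in every test, exactly as in the hand-written
  ;; version.
  `(COND ,@(MAPCAR #'(LAMBDA (VAR)
                       `((EQ ,NAME-FORM ',VAR) (SETQ ,VAR ,VALUE-FORM)))
                   LEXICAL-VARS)
         (T (SET ,NAME-FORM ,VALUE-FORM))))

;; (DEFUN FOO (X Y Z) (LEXICAL-SET X (CONS Y Z) X Y Z))
;; macroexpands into essentially the COND above.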

 Jeffrey M. Jacobs
 CONSART Systems Inc.
 Technical and Managerial Consultants
 P.O. Box 3016, Manhattan Beach, CA 90266
 (213)376-3802
 CIS:75076,2603
 BIX:jeffjacobs
 USENET: jjacobs@well.UUCP

ram@spice.cs.cmu.edu.UUCP (02/23/87)

I found this highly relevant document about the Common Lisp design process
while cleaning my directory.  It was written by Skef Wholey, who implemented
a large part of Spice Lisp while a full-time undergraduate student.  The
Common Lisp spec was being developed at the same time that Spice Lisp was
being written.  This was a cause for no little aggravation for Skef, since he
often had to rewrite code several times.

________________________________________________

			     Common LISP
		(to the tune of Dylan's "Maggie's Farm")

I ain't gonna hack on Common LISP no more,
I ain't gonna hack on Common LISP no more.
See, it was spawned from MACLISP,
And then they threw in Scheme,
And now everybody's made it
"just a little" short of clean.
The language specification is insane.

I ain't gonna hack on Guy Steele's LISP no more,
I ain't gonna hack on Guy Steele's LISP no more.
When you mail him a question,
And then wait for a reply,
Well you can sit for weeks now,
And begin to think he's died.
His MAIL.TXT is one great big black hole.

I ain't gonna hack on Fahlman's LISP no more,
I ain't gonna hack on Fahlman's LISP no more.
Well he gives you an X-1,
And he puts you on a Perq,
And he asks you with a grin,
"Hey son, how much can you work?"
If I reboot one more time I'll lose my brain.

I ain't gonna hack on Dave Moon's LISP no more,
I ain't gonna hack on Dave Moon's LISP no more.
We had a simple SETF,
But it choked on LDB.
So Lunar Dave done fixed it:
Go look at page eighty three.
The Gang of Five they didn't take a poll.

I ain't gonna hack on Common LISP no more,
I ain't gonna hack on Common LISP no more.
With its tons of sequence functions,
And its lexical scoping,
I've now begun to like it,
But the users are moping:
"Without EXPLODE my life is full of pain."

(harmonica and fade)

barmar@mit-eddie.UUCP (02/24/87)

In article <2603@well.UUCP> jjacobs@well.UUCP (Jeffrey Jacobs) writes:
>Of course the real problem with Common LISP is that the user has
>no choice; there are no alternate primitives which don't involve the
>keyword overhead, so the experienced user must instead rely on the
>implementor for efficiency.

And if there were, you would be complaining about the fact that the
language provides two ways of doing these things (the primitive and
keyworded versions), and that it makes the language even bigger.

>There is no guarantee, or even good estimate,
>how the efficiency will vary from machine to machine, or implementation
>to implementation, thus offsetting some of the great claims of
>portability.  (What runs well on one implementation may
>run terribly on another).

I hope you aren't intending to imply that only Common Lisp is subject
to this problem.  It is true of all languages for which there are
multiple implementations, and true of most other standardized things
(for example, VT102's implement X3.64 more slowly than VT200's).

>>I also point out that, despite any misgivings voiced in the paper, Gabriel
>>is a major player in Lucid Inc., whose sole product is a Common Lisp 
>>implementation.  Evidently he believes that it is a practical, real-world
>>programming language.

I'd like to point out that Gabriel is one of the most vocal members of
X3J13 (the Common Lisp standardization committee) regarding the issues
of simplification.  For example, he is one of the people arguing for
the merging of the function and value cells, in the style of Scheme
and EuLisp.  Evidently he would rather work WITH the Common Lisp
community than AGAINST it in order to move it in the directions he
would prefer.
-- 
    Barry Margolin
    ARPA: barmar@MIT-Multics
    UUCP: ..!genrad!mit-eddie!barmar

barmar@mit-eddie.UUCP (02/24/87)

In article <2626@well.UUCP> jjacobs@well.UUCP (Jeffrey Jacobs) writes:

Describing a way for SET to assign to lexical variables:

>(DEFUN FOO (X Y Z) (SET X (CONS Y Z)))
>
>The compiler would have to generate code that would effectively
>be equal to
>
>(COND	((EQ X 'X)  (SETQ X (CONS Y Z)))
>	((EQ X 'Y)  (SETQ Y (CONS Y Z)))
>	((EQ X 'Z)  (SETQ Z (CONS Y Z)))
>	(T (SET-SYMBOL-VALUE X (CONS Y Z))))

That is the wrong thing, though.  Consider FOO being used in the
following:

(DEFUN FOO-CALLER (ARG1 ARG2)
  (LET ((X 3))
    (FOO 'X ARG1 ARG2)
    (PRINT X)))

The X that is passed to FOO is in the lexical scope of FOO-CALLER, so
it is the one that one would expect to be assigned.  One of the goals
of lexical scoping is that it should not make any difference to the
caller what the names of locals are in a function; if a lexical
variable is renamed, the only place you have to look for references
to the variable is within the lexical scope of that variable.  The
parameters to FOO are not lexical variables because they can be
referenced outside the function.

I will admit that there are uses for this type of thing; for example,
in an object-oriented programming system implemented using lexical
variables, one might have a SET-INSTANCE-VARIABLE function that takes
the name of an instance variable.  However, this would probably differ
from FOO because it wouldn't have the T clause, since it is ONLY
interested in lexical variables.  I doubt that there is a use for the
generality of the FOO example, in which SET will set either a lexical
or special.
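
A minimal sketch of what such a system might look like (all the
names here are made up); note that unknown names signal an error
instead of falling through to the dynamic SET:

(DEFUN MAKE-POINT (X Y)
  #'(LAMBDA (OPERATION &REST ARGS)
      (CASE OPERATION
        (GET-X X)
        (GET-Y Y)
        (SET-INSTANCE-VARIABLE
         (CASE (FIRST ARGS)
           (X (SETQ X (SECOND ARGS)))
           (Y (SETQ Y (SECOND ARGS)))
           (T (ERROR "Not an instance variable: ~S" (FIRST ARGS))))))))

;; (SETQ P (MAKE-POINT 1 2))
;; (FUNCALL P 'SET-INSTANCE-VARIABLE 'X 10)
;; (FUNCALL P 'GET-X)  => 10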
-- 
    Barry Margolin
    ARPA: barmar@MIT-Multics
    UUCP: ..!genrad!mit-eddie!barmar