[net.lang.lisp] Against the Tide of Common LISP

jjacobs@well.UUCP (Jeffrey Jacobs) (06/20/86)

	"Copyright (c) 1986, Jeffrey M. Jacobs, CONSART Systems Inc.,
	P.O. Box 3016, Manhattan Beach, CA 90266 (213)376-3802
	Bix ID: jeffjacobs, CIS Userid 75076,2603
	
	Reproduction by electronic means is permitted, provided that it is not
	for commercial gain, and that this copyright notice remains intact."

The following are from various correspondences and notes on Common LISP:

Since you were brave enough to ask about Common Lisp, sit down for my answer:

I think CL is the WORST thing that could possibly happen to LISP.  In fact, I
consider it a language different from "true" LISP.  CL has everything in the
world in it, usually in 3 different forms and 4 different flavors, with 6
different options.  I think the only thing they left out was FEXPRs...

It is obviously intended to be a "compilable" language, not an interpreted
language. By nature it will be very slow; somebody would have to spend quite a
bit of time and $ to make a "fast" interpreted version (say for a VAX).  The
grotesque complexity and plethora of data types presents incredible problems to
the developer;  it was several years before Golden Hill had lexical scoping,
and NIL from MIT DOES NOT HAVE A GARBAGE COLLECTOR!!!!  It just eventually eats
up its entire VAX/VMS virtual memory and dies...

Further, there are inconsistencies and flat out errors in the book.  So many
things are left vague, poorly defined, or "up to the developer".

The entire INTERLISP arena is left out of the range of compatibility.

As a last shot; most of the fancy Expert Systems (KEE, ART) are implemented in
Common LISP.  Once again we hear that LISP is "too slow" for such things, when
a large part of it is the use of Common LISP as opposed to a "faster" form
(e.g. one with shallow dynamic binding and simpler LAMBDA variables; they
should have left &aux, etc., as macros).  Every operation in CL is very
expensive in terms of CPU...


______________________________________________________________

I forgot to mention the fact that I do NOT like lexical scoping in LISP; to
allow both dynamic and lexical makes the performance even worse.  To me,
lexical scoping was and should be a compiler OPTIMIZATION, not an inherent
part of the language semantics.  I can accept SCHEME, where you always know
that it's lexical, but CL could drive you crazy (especially if you were 
testing/debugging other people's code).

This whole phenomenon is called "Techno-dazzle"; i.e. look at what a super-duper
complex system I can build that will do everything.  Who cares if it's
incredibly difficult and costly to build and understand, and that most of the
features will only get used because "they are there", driving up the CPU usage
and making the whole development process more costly...

BTW, I think the book is poorly written and assumes a great deal of knowledge
about LISP and MACLISP in particular.  I wouldn't give it to ANYBODY to learn
LISP.

...Not only does he assume you know a lot about LISP, he assumes you know a LOT
about half the other existing implementations to boot.

I am inclined to doubt that it is possible to write a good introductory text on
Common LISP;  you d**n near need to understand ALL of it before you can start
to use it. There is nowhere near the basic underlying set of primitives (or
philosophy) to start with, as there is in Real LISP (RL vs CL).  You'll notice
that there is almost NO defining of functions using LISP in the Steele book.
Yet one of the best things about Real LISP is the precise definition of a
function!

Even when using Common LISP (NIL), I deliberately use a subset.  I'm always
amazed when I pick  up the book; I always find something that makes me curse.
Friday I was in a bookstore and saw a new LISP book ("Looking at LISP", I
think, the author's name escapes me).  The author uses SETF instead of SETQ,
stating that SETF will eventually replace SETQ and SET (!!).   Thinking that
this was an error, I checked in Steele; lo and behold, tis true (sort of).
In 2 2/3 pages devoted to SETF, there is >> 1 << line at the very bottom
of page 94!  And it isn't even clear; if the variable is lexically bound AND
dynamically bound, which gets changed (or is it BOTH)?  Who knows?  Where is
the reference?

"For consistency, it is legal to write (SETF)"; (a) in my book, that should be
an error, (b) if it's not an error, why isn't there a definition using the
appropriate & keywords?  Consistency?  Generating an "insufficient args"
error seems more consistent to me...

Care to explain this to a "beginner"?  Not to mention that SETF is a
MACRO, by definition, which will always take longer to evaluate.

Then try explaining why SET only affects dynamic bindings (a most glaring
error, in my opinion).  Again, how many years of training, understanding
and textbooks are suddenly rendered obsolete?  How many books say
(SETQ X Y) is a convenient form of (SET (QUOTE X) Y)?  Probably all
but two...

Then try to introduce them to DEFVAR, which may or may not get
evaluated who knows when!  (And which very often isn't implemented
correctly, e.g. in Franz Common and Golden Hill).

I don't think you can get 40% of the points in 4 readings!  I'm constantly
amazed at what I find in there, and it's always the opposite of Real LISP!

MEMBER is a perfect example. I complained to David Betz (XLISP) that MEMBER
used EQ instead of EQUAL.  I only checked about 4 books and manuals (UCILSP,
INTERLISP, IQLISP and a couple of others).  David correctly pointed out that
CL defaults to EQ unless you use the keyword syntax.  So years of training,
learning and ingrained habit go out the window.  How many bugs
will this introduce?  MEMQ wasn't good enough?

MEMBER isn't the only case...

While I'm at it, let me pick on the book itself a little.  Even though CL
translates lower case to upper case, every instance of LISP names, code,
examples, etc. is in **>> lower <<** case and lighter type.  In fact,
everything that is not descriptive text is in lighter or smaller type.
It's VERY difficult to read just from the point of eye strain; instead of 
the names and definitions leaping out to embed themselves in your brain,
you have to squint and strain, producing a nice avoidance response.
Not to mention that you can't skim it worth beans.

Although it's probably hopeless, I wish more implementors would take a stand
against COMMON LISP; I'm afraid that the challenge of "doing a COMMON LISP"
is more than most would-be implementors can resist.  Even I occasionally find
myself thinking "how would I implement that"; fortunately I then ask myself
WHY?

Jeffrey M. Jacobs <UCILSP>

shebs@utah-cs.UUCP (Stanley Shebs) (06/21/86)

In article <1311@well.UUCP> jjacobs@well.UUCP (Jeffrey Jacobs) writes:

>	"Copyright (c) 1986, Jeffrey M. Jacobs, CONSART Systems Inc.,
>	P.O. Box 3016, Manhattan Beach, CA 90266 (213)376-3802
>	Bix ID: jeffjacobs, CIS Userid 75076,2603
>	
>	Reproduction by electronic means is permitted, provided that it is not
>	for commercial gain, and that this copyright notice remains intact."

I don't know why I'm bothering to respond to this uninformed flame, but I've got
a few minutes to kill before running off to the compiler conference...
Besides, somebody might read this and suppose that this Jeff Jacobs
knows what he's talking about, which is rarely the case, based on this
limited sample.

>I think CL is the WORST thing that could possibly happen to LISP.  In fact, I
>consider it a language different from "true" LISP.

Well gee, I wonder what is "true" Lisp...

>  CL has everything in the
>world in it, usually in 3 different forms and 4 different flavors, with 6
>different options.

Actually, they decided to omit flavors, because no one could agree on
a standard! :-)

>  I think the only thing they left out was FEXPRs...

FEXPRs are bad (see Pitman's paper in the 1980 Lisp conference)

>It is obviously intended to be a "compilable" language, not an interpreted
>language. By nature it will be very slow; somebody would have to spend quite a
>bit of time and $ to make a "fast" interpreted version (say for a VAX).

So who cares about making interpreted code fast?  By the time I get to caring
about the speed of a program I'm building, it's generally stable enough
that compiling is no big deal.  ALL production quality Lisps nowadays are
designed to be "compilable" languages.  Can't beat the speed of C programs
by running interpreted, you know...

>The grotesque complexity and plethora of data types

Yeah, Common Lisp has just about as many data types as Fortran...

>it was several years before Golden Hill had lexical scoping,
>and NIL from MIT DOES NOT HAVE A GARBAGE COLLECTOR!!!!  It just eventually eats
>up its entire VAX/VMS virtual memory and dies...

So what do broken implementations have to do with the language standard?

>Further, there are inconsistencies and flat out errors in the book.  So many
>things are left vague, poorly defined and "to the developer".

It's hard to imagine that a first version of a language standard will be
perfect.  I hear about oddities in the Ada language standard too.
Many of the vaguenesses in the Common Lisp standard are to allow
implementations some freedom to do things in different ways.  Would
you like it if the *language* standard said that the architecture will
be tagged, and that positive fixnums range between 0 and 32767, and
that they will have a tag of 42?

>The entire INTERLISP arena is left out of the range of compatibility.

Most folks that haven't followed the Common Lisp effort don't realize
how monumental a task it was to get all the Maclispers to agree on a 
language standard.  There is simply no way to get a Lisp that is both
Maclisp- and Interlisp-compatible.

>Every operation in CL is very expensive in terms of CPU...

Not in clever implementations...

>I forgot to mention the fact that I do NOT like lexical scoping in LISP;

I LIKE lexical scoping in Lisp - dynamic scoping is for special situations,
like rebinding a global variable for the duration of a function (for
instance, one gets an equivalent effect to Unix redirection by rebinding
*standard-input* or *standard-output* to file streams).  There is no
good way to do this if you don't have dynamic scoping.  Programs that
rely heavily on dynamic scoping are usually sloppily written and extremely
difficult to analyze - things go on behind your back all the time.
Default dynamic scoping in Lisp is the result of history - in 1958,
programming language semantics was not well understood!  In another
20 years, no one will understand why anyone would suggest that default
dynamic scoping was a good thing.
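
A minimal sketch of that idiom (REPORT and the file name here are made up for
illustration): *standard-output* is a special variable, so a LET rebinds it
dynamically for the extent of the call.

;; everything REPORT prints goes to the file instead of the terminal
(with-open-file (log-stream "report.txt" :direction :output)
  (let ((*standard-output* log-stream))
    (report)))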

>to allow both dynamic and lexical makes the performance even worse.

This is totally wrong.

>lexical scoping was and should be a compiler OPTIMIZATION, not an inherent
>part of the language semantics.

Great - a program that works interpreted, because it relies on dynamic
scoping and your misnamed variable that happened to have been bound a while
ago, will not work because your officemate that you gave a copy of the
compiled code to doesn't have that same variable bound in *his/her* Lisp!
Changing dynamic to lexical scoping is a massive and dramatic change
in program semantics - I prefer my compiler optimizations to affect speed
and not meaning.

>This whole phenomenon is called "Techno-dazzle"; i.e. look at what a super
>duper complex system that will do everything I can build.  Who cares if it's
>incredibly difficult and costly to build and understand, and that most of the
>features will only get used because "they are there", driving up the CPU usage
>and making the whole development process more costly...

To put things in perspective, the Zetalisp language (that the Symbolics
people sell like hotcakes for big bucks) is at least an order of magnitude
larger and more complex than Common Lisp, which they consider to be a small
subset for the exchange of programs.  All reports have been that the
Symbolics is super-fast for development and for execution... does anyone
want to contradict me?

>BTW, I think the book is poorly written and assumes a great deal of knowledge
>about LISP and MACLISP in particular.  I wouldn't give it to ANYBODY to learn
>LISP.

Neither would I - it's not *supposed* to be an instructional book.  Has
anyone tried to learn Ada from the ANSI standard?  The Common Lisp standard
is very nice for implementors.  I appreciate the explanations and comparisons
to other Lisps and I enjoy the jokes.  There are some problems, which is why
there is another version in the works.  Almost all of the problems are
extremely technical, and take a long time just to explain to someone not
familiar with the language, let alone to solve...  There are also some open
problems with language design involved - like compiling to different machines
within the same programming environment, and support of program-analyzing
programs in a portable manner.

>...Not only does he assume you know a lot about LISP, he assumes you know a LOT
>about half the other existing implementations to boot.

A competent implementor should and will know about a bunch of implementations.
*I* didn't have any trouble with the references to other Lisps...

>I am inclined to doubt that it is possible to write a good introductory text on
>Common LISP;  you d**n near need to understand ALL of it before you can start
>to use it. There is nowhere near the basic underlying set of primitives (or
>philosophy) to start with, as there is in Real LISP (RL vs CL).  You'll notice
>that there is almost NO defining of functions using LISP in the Steele book.
>Yet one of the best things about Real LISP is the precise definition of a
>function!

Finally, a semi-valid point.  There has been some effort to define various
subsets, but no consensus has emerged.  Any time someone defines a subset,
everybody whose favorite feature has been left out will rant and rave and
insist that it be included.  By the time you're done, you've got the whole
language again (spoken from personal experience!).

As for the non-use of Lisp definitions, most such definitions are either
simple and wrong, or complicated and right.  The Spice Lisp sources are
in the public domain, and people often look at those for ideas.  See the
comment about freedom of implementation above.

>Friday I was in a bookstore and saw a new LISP book ("Looking at LISP", I
>think, the author's name escapes me).  The author uses SETF instead of SETQ,
>stating that SETF will eventually replace SETQ and SET (!!).

Reports about the death of SETQ and SET have been exaggerated for 
a long time...

>   Thinking that
>this was an error, I checked in Steele; lo and behold, tis true (sort of).
>In 2 2/3 pages devoted to SETF, there is >> 1 << line at the very bottom
>of page 94!  And it isn't even clear; if the variable is lexically bound AND
>dynamically bound, which gets changed (or is it BOTH)?  Who knows?  Where is
>the reference?

Chapters 3 and 5 contain information about scoping and binding rules.
Guy Steele has mercifully not repeated the obvious throughout the book!
To answer your question, (SETF x y) *always* expands into (SETQ x y);
see the top of page 94.  SETQ sets the lexical binding if there is one,
otherwise the dynamic binding; see the definition of SETQ on page 91 
(where there is a reference to the "usual rules"), and the definition
of "usual rules" in the section on variables, p. 55.  I think it would
be impossible to write a language standard that caters to those who
can't read.
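
To make the "usual rules" concrete, here is a small sketch (the variable names
are invented; the behavior is as described on pages 55 and 91):

(defvar *x* 1)              ; *X* is declared special, so it is dynamic

(defun try-setf ()
  (let ((y 2))              ; Y is lexical
    (setf y 10)             ; sets the lexical binding of Y
    (setf *x* 20)           ; sets the dynamic binding of *X*
    (list y *x*)))          ; => (10 20)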

>"For consistency, it is legal to write (SETF)"; (a) in my book, that should be
>an error, (b) if it's not an error, why isn't there a definition using the
>appropriate & keywords?  Consistency?  Generating an "insufficient args"
>error seems more consistent to me...

Well, it doesn't seem more consistent to *me*!  Should we take a vote?
Should we weight the votes in inverse proportion to the ignorance of
the voter?

>Care to explain this to a "beginner"?

Suppose you want to write a macro that expands into something with
a SETF in it.  Now suppose that this macro has a list of variables
that all need to be SETFed.  If (SETF) is valid, then the list of
things need only be appended - if an empty list is not valid, then
the macro will have to test for this first and do something else if the
list is empty.  Also, in the obvious recursive definition of SETF,
expanding (SETF) into NIL is a nice termination case.  Note that this
is all convenience; if you like to suffer while programming, use Pascal.
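
A sketch of the kind of macro I mean (the name SET-ALL is invented):

;; expands into a single SETF whose argument list is built by appending;
;; with empty VARS and VALS the expansion is just (SETF), which is
;; harmless rather than an error
(defmacro set-all (vars vals)
  `(setf ,@(mapcan #'list vars vals)))

(set-all (a b) (1 2))       ; expands to (SETF A 1 B 2)
(set-all () ())             ; expands to (SETF)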

>  Not to mention that SETF is a
>MACRO, by definition, which will always take longer to evaluate.

Bogus!  This is only true for *some* interpreters and *never* (so far as
I've seen) for compilers.  In fact, many expert Lisp hackers replace
functions with equivalent macros when possible, to save function call overhead
(I don't recommend this practice, most compilers have hooks to opencode user
functions when requested).

>Then try explaining why SET only affects dynamic bindings (a most glaring
>error, in my opinion).

So how in the world are you going to get this to work in a compiler that
compiles lexical references into positions on stack frames?

(let ((var (if (random-predicate) '*realvar1* '*realvar2*)))
  (set var 45))

Remember, SET evaluates *both* arguments, so it can't know the variable
name until runtime.  Here's another example:

(set (read) 45)

Since (set x y) == (setf (symbol-value x) y), it applies only to the
dynamic binding (read the definition of SYMBOL-VALUE carefully).

>  Again, how many years of training, understanding
>and textbooks are suddenly rendered obsolete?

Those folks that managed to pass CS 101 shouldn't have any problem.

>  How many books say
>(SETQ X Y) is a convenient form of (SET (QUOTE X) Y)?  Probably all
>but two...

I use my Lisp 1.5 manual for historical research only (sorry JMC).

>Then try to introduce them to DEFVAR, which may or may not get
>evaluated who knows when!  (And which very often isn't implemented
>correctly, e.g. in Franz Common and Golden Hill).

Perhaps you should try a non-broken Common Lisp then...

>I don't think you can get 40% of the points in 4 readings!  I'm constantly
>amazed at what I find in there, and it's always the opposite of Real LISP!

So what is Real Lisp anyway?  Show the definition so we can flame at it!

>MEMBER is a perfect example. I complained to David Betz (XLISP) that MEMBER
>used EQ instead of EQUAL.  I only checked about 4 books and manuals (UCILSP,
>INTERLISP, IQLISP and a couple of others).  David correctly pointed out that
>CL defaults to EQ unless you use the keyword syntax.  So years of training,
>learning and ingrained habit go out the window.  How many bugs
>will this introduce?  MEMQ wasn't good enough?

MEMQ is a stupid name.

>MEMBER isn't the only case...

This was a tough problem for the designers.  On the one hand, everybody
complains about non-obvious names like CAR and CDR and MEMQ.  On the
other hand, the considerably better name MEMBER, which ought to be used
to denote *all* sorts of membership tests, had been grabbed up for EQUAL
tests.  I would rather have one obvious function name than MEMBER, MEMQ,
MEMQL, MEMBER-EQUALP, ad nauseam.  And I *especially* don't want to
write my own membership tests just because I needed to use some specialized
test function!!
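
For the record, the keyword style in question looks like this (and the default
test per the book is actually EQL, not EQ):

(member '(a) '((a) (b)))                  ; NIL  -- default EQL test
(member '(a) '((a) (b)) :test #'equal)    ; ((A) (B))
(member 3 '(1 2 3 4) :test #'<)           ; (4)  -- any two-argument test works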

>While I'm at it, let me pick on the book itself a little.  Even though CL
>translates lower case to upper case, every instance of LISP names, code,
>examples, etc. is in **>> lower <<** case and lighter type.  In fact,
>everything that is not descriptive text is in lighter or smaller type.
>It's VERY difficult to read just from the point of eye strain; instead of 
>the names and definitions leaping out to embed themselves in your brain,
>you have to squint and strain, producing a nice avoidance response.
>Not to mention that you can't skim it worth beans.

Guy Steele has profusely and publicly apologized for the fonts, even though
it's actually the fault of Digital Press, who couldn't seem to manage
a boldface fixed-width font (the manuals printed at CMU were much better).

>Although it's probably hopeless, I wish more implementors would take a stand
>against COMMON LISP; I'm afraid that the challenge of "doing a COMMON LISP"
>is more than most would-be implementors can resist.  Even I occasionally find
>myself thinking "how would I implement that"; fortunately I then ask myself
>WHY?

I wish more people would think about the issues of language design before
flaming about somebody else's design!

I have no idea what Jeff Jacobs thinks Real Lisp is, but I do know what
objections people have to Common Lisp.  Most of those objections go away
when they start to think about efficiency vs usability, or consistency vs
sanity.

For instance, many people object to the sequence functions, on the grounds
that users could write them themselves.  So which users want to try writing
SORT?  Probably only those with all kinds of time to waste getting a decent
algorithm, debugging fencepost errors, etc.  The smart users will suggest
that it be a library function written in Lisp.  Ah, but what should the
interface be?  Two arguments, a sequence and a predicate?  One argument, a
sequence, and a global variable *sort-predicate*?  How about two global
variables and a 0-argument function (don't laugh, people really do these sorts
of things sometimes).  Will the sort be stable or not?  Now it's unlikely
that two different sites will define their library SORT function to be
quite the same.  So the poor person faced with porting a program must
carry along all the libraries from wherever the program was written.
The even poorer person will have to combine programs from several different
places, each with a different sort routine, and try to prevent each
definition of SORT from stepping on the others.  At this point, standardizing
on a sort function should seem like the right thing.
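
For comparison, the standardized interface settles exactly those questions:
SORT takes a sequence and a predicate, may take a :KEY, is allowed to be
destructive, and is not required to be stable; STABLE-SORT is the stable one.

(sort (list 3 1 2) #'<)                          ; => (1 2 3)
(sort (list "bb" "a" "ccc") #'< :key #'length)   ; => ("a" "bb" "ccc")
(stable-sort (list '(2 . a) '(1 . b) '(2 . c)) #'< :key #'car)
                                                 ; => ((1 . B) (2 . A) (2 . C))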

Fortranners and Pascallers and Cers don't worry about this sort of thing,
because they just write sort routines and membership tests over and over
and over again, and wonder why they have to work 10 times as hard to get
their programs as sophisticated as the same ones in Lisp.

Speaking of Fortran, why do people think having complex numbers and
different sizes of floats makes a language large and complex?  Perhaps
it's rationals and bignums that make the language too complex?
Personally I'm much happier when (* (/ 1 2) 2) = 1 instead of 0,
and when (+ 2405920349 2452458405) is not 45.  There's something to be said
for mathematical sanity, which is why symbolic algebra systems have rationals
and bignums.  Maybe character objects are bad, but somehow I can't get
excited about trying to remember that 121 actually means 'y' (at least
in ASCII, which is not the only character set in the world).
After all, isn't Abstraction a Good Thing?

I think those who flame about language designs should be forced to
design, implement, and distribute their own language, then listen
while know-nothing second-guessers complain about things they
don't understand to begin with...

>Jeffrey M. Jacobs <UCILSP>

							stan shebs

jjacobs@well.UUCP (Jeffrey Jacobs) (06/22/86)

In <3827@utah-cs.UUCP>, Stanley Shebs responds to
my original article, <1311@well.UUCP> jjacobs@well.UUCP...

>I don't know why I'm bothering to respond to this uninformed flame, but I've got
>a few minutes to kill before running off to the compiler conference...
>Besides, somebody might read this and suppose that this Jeff Jacobs
>knows what he's talking about, which is rarely the case, based on this
>limited sample.

>I think those who flame about language designs should be forced to
>design, implement, and distribute their own language, then listen
>while know-nothing second-guessers complain about things they
>don't understand to begin with...

I certainly appreciate personal attacks like this; they make things so much
livelier :-).  To make my ignorance perfectly clear, let me state that I have
been extensively involved in the design and distribution of LISP (and
other systems); my experience started in 1971 with UCI LISP.  You can find
my name along with the other people who really did the work on the front
page of the Meehan book.  BTW, James Meehan had nothing to do with the
original implementations; all he did was edit the book (mostly just reproducing
the manuals; he didn't even do much editing).  "The New UCI LISP Manual"...

It's a bad idea to jump to conclusions about other people's ignorance based on
a limited sample, particularly in this medium.  (Ad hominem attacks are also
considered bad form).

>Well gee, I wonder what is "true" Lisp...

Let me use the term "Real LISP" (RL) instead.  I am not going to give a 
definition that will be worth flaming at.  The point is that CL makes a
_radical_ departure from previous LISP work both in terms of syntax
and semantics.  As an oversimplified definition: the development environment
is interpreted, with dynamic scoping (lexical scoping being a compiler
optimization), no distinction between executed code and data unless
specifically compiled (i.e. not incrementally compiled),
no required keywords, and pre-CL function definitions (such as MEMBER).
Examples include MACLISP, INTERLISP, FRANZ and UCI.

>>  CL has everything in the
>>world in it, usually in 3 different forms and 4 different flavors, with 6
>>different options.

>Actually, they decided to omit flavors, because no one could agree on
>a standard! :-)

>>  I think the only thing they left out was FEXPRs...

My error here; FEXPRs are easily duplicated. What is really left out is
MACROs, i.e. access to the actual form.

>FEXPRs are bad (see Pitman's paper in the 1980 Lisp conference)

How did all that work get done before 1980?  If we'd only known we were doing
it wrong :-).  Because of the above, I won't bother to address the Pitman
paper.

>>It is obviously intended to be a "compilable" language, not an interpreted
>>language.

"Compilable" should read "compiled"...

>Can't beat the speed of C programs by running interpreted, you know...

But you can get one heck of a better development environment running 
interpreted...

>>it was several years before Golden Hill had lexical scoping,
>>and NIL from MIT DOES NOT HAVE A GARBAGE COLLECTOR!!!!

>So what do broken implementations have to do with the language standard?

It's not a matter of "broken", it's a matter of incomplete.  How much time, 
effort and money will it take before a "complete" implementation that doesn't 
derive from SPICE will appear?

>Many of the vaguenesses in the Common Lisp standard are to allow
>implementations some freedom to do things in different ways.  Would
>you like it if the *language* standard said that the architecture will
>be tagged, and that positive fixnums range between 0 and 32767, and
>that they will have a tag of 42?

Much of the vagueness will have a direct impact on the user and the
portability of code.

>Most folks that haven't followed the Common Lisp effort don't realize
>how monumental a task it was to get all the Maclispers to agree on a 
>language standard.  There is simply no way to get a Lisp that is both
>Maclisp- and Interlisp-compatible.

Seems easy enough to me; just put everybody's favorite feature into the
language and you can get agreement :-).

I do like the casual way Interlisp is just ignored; nothing of any importance
has ever been done in Interlisp anyway :-).

Being serious, your statement is ridiculous.  CL isn't Maclisp compatible 
either. There is a great range of commonality between Interlisp and
Mac-lispish implementations.  (We stole from the best for UCI LISP <grin>).
But there is also a tremendous rivalry between the 2 camps, and a tremendous
financial investment in Interlisp...

>>I forgot to mention the fact that I do NOT like lexical scoping in LISP;

>I LIKE lexical scoping in Lisp - dynamic scoping is for special situations,
>like rebinding a global variable for the duration of a function (for
>instance, one gets an equivalent effect to Unix redirection by rebinding
>*standard-input* or *standard-output* to file streams).  There is no
>good way to do this if you don't have dynamic scoping.  Programs that
>rely heavily on dynamic scoping are usually sloppily written and extremely
>difficult to analyze - things go on behind your back all the time.

Thanks for your tutorial, I had NO idea what it was for:-).  

>Default dynamic scoping in Lisp is the result of history - in 1958,
>programming language semantics was not well understood!  In another
>20 years, no one will understand why anyone would suggest that default
>dynamic scoping was a good thing.

We are actually fairly close on this issue!  I certainly would not design a
_new_ language with dynamic scoping as the default.  (One of the reasons I
state that CL is a NEW language, different from RL).

The main reasons I suggest dynamic is preferable in LISP are a) historical, 
b) LISP is not block structured; the separation of declaring a variable 
SPECIAL (DEFVAR being recommended, p. 68) from an actual function definition
makes it very difficult to debug a function "locally", 
c) the excessive need to provide compiler declarations makes for some pretty 
ugly code.

Well written LISP code should be almost completely independent of lexical or
dynamic scoping considerations.  A free variable is obviously special; the
only real problem comes in when a variable is bound.

>>to allow both dynamic and lexical makes the performance even worse.

>This is totally wrong.

Say WHAT?  It certainly is TRUE in an interpreter; it takes longer to look up
a lexical variable than a dynamic variable, and it takes even longer when you
have to determine whether the lookup should be lexical or dynamic.  Add a
little more time to check if it's a CONSTANT or DEFVAR...

>To put things in perspective, the Zetalisp language (that the Symbolics
>people sell like hotcakes for big bucks) is at least an order of magnitude
>larger and more complex than Common Lisp, which they consider to be a small
>subset for the exchange of programs.  All reports have been that the
>Symbolics is super-fast for development and for execution... does anyone
>want to contradict me?

To put things even more in perspective, Symbolics machines do not "sell like
hotcakes".  I don't have the number handy, but I believe the number of units
sold is less than 4000.  Big bucks is putting it mildly; figure a minimum of
$120,000 for a decent, single-user system.  Zeta-Lisp is bigger, but not in the
same way as CL; the "bigness" is in the environment and number of available
functions (a la INTERLISP).  And it REQUIRES very expensive, very specialized
hardware.  And nobody runs in CL mode by choice; it slows things down a LOT
(and still isn't complete).

If you would like to assert that the only way to get a decent CL
implementation  is to develop a Symbolics class machine for it, I'll be glad
to agree :-).

But I'm real tired of hearing that LISP "is slow" and requires special
hardware; if you can't work with general-purpose multi-user hardware, maybe you should be
doing something else.

BTW, I've heard reports that Le-LISP can give Symbolics a run for the money
using general-purpose architectures.

>Neither would I - it's not *supposed* to be an instructional book.

So what is everybody supposed to have learned from?

>There are some problems, which is why
>there is another version in the works.  Almost all of the problems are
>extremely technical, and take a long time just to explain to someone not
>familiar with the language, let alone to solve...  There are also some open
>problems with language design involved - like compiling to different machines
>within the same programming environment, and support of program-analyzing
>programs in a portable manner.

No sh*t, Sherlock!  You think the flames I've put up are even the half of it?

Do you think maybe the drive to make it a "standard" may be just a wee bit
premature?

>>There is nowhere near the basic underlying set of primitives (or
>>philosophy) to start with, as there is in Real LISP (RL vs CL).

>Finally, a semi-valid point.  There has been some effort to define various
>subsets, but no consensus has emerged.  Any time someone defines a subset,
>everybody whose favorite feature has been left out will rant and rave and
>insist that it be included.  By the time you're done, you've got the whole
>language again (spoken from personal experience!).

The point isn't the lack of a subset, it's the lack of a starting set.  RLs
started with a small basic set of types and functions.  Even though they
grew to tremendous size, the growth was mostly by adding new functions.
CL starts out with a tremendous base language, attempting to have everything
in it.  The user pays the price for this...

This also goes along with my earlier explanation about how agreement was
reached among the LISP community.  You have my sympathies.  It was a lot
easier back in the "good ol'" days...

>As for the non-use of Lisp definitions, most such definitions are either
>simple and wrong, or complicated and right.

That is almost slanderous; do you really want to stand there and say that
about all the other LISP manuals around?  I find it a lot easier to 
understand a well defined function than prose.
I always find that I get scr**ed by the sentence I didn't read.  Good
readers tend to skim.

>Chapters 3 and 5 contain information about scoping and binding rules.
>Guy Steele has mercifully not repeated the obvious throughout the book!
>To answer your question, (SETF x y) *always* expands into (SETQ x y);
>see the top of page 94.  SETQ sets the lexical binding if there is one,
>otherwise the dynamic binding; see the definition of SETQ on page 91 
>(where there is a reference to the "usual rules"), and the definition
>of "usual rules" in the section on variables, p. 55.  I think it would
>be impossible to write a language standard that caters to those who
>can't read.

Thanks for the references.  Too bad there aren't any in the book....

Unfortunately this isn't written in the proper style of a standard, and I
don't think I should have to read 5 pages to find what should be clearly 
and simply stated under the function definition.
Take a look at IEEE and ANSI standards to see what a "real" standard should
look like.  If I look up SETF, I should be able to
find out what it does, not have to read back to the chapter preface.

As you so clearly point out, it's a lousy reference manual to boot!

>>Then try explaining why SET only affects dynamic bindings (a most glaring
>>error, in my opinion).

>So how in the world are you going to get this to work in a compiler that
>compiles lexical references into positions on stack frames?

That's the implementor's problem <grin>. It can be done (but probably
isn't worth it).   See below for how this should have been handled.

>>  How many books say
>>(SETQ X Y) is a convenient form of (SET (QUOTE X) Y)?  Probably all
>>but two...

I will agree that the _definition_ of SET given  is definitely useful; what I
object to is the capricious changing of the semantics of a function that
is so old and so ingrained in RL.  It SHOULD HAVE BEEN GIVEN A NEW NAME!!!!!!!
(And this applies to every other function whose historical RL definition has
been incompatibly changed).

CL is supposed to promote compatibility, etc.; the first thing it does is
become incompatible with most other LISPs.

>I use my Lisp 1.5 manual for historical research only (sorry JMC).

Maybe you should do a little more historical research!  Or at least read
something besides Steele and Winston&Horn v2.

>>  Again, how many years of training, understanding and textbooks are
>>rendered obsolete?

>Those folks that managed to pass CS 101 shouldn't have any problem.

Passed CS 101 at what school?  What year?  Using what implementation of LISP?
If you spend 10 years using SETQ thinking it's the same as (SET (QUOTE ...) ...),
or that MEMBER is different from MEMQ, and suddenly it all changes, you're
gonna be damned unhappy.  Especially if you have trained other people and
your company's business depends on getting debugged software out the door.
(Or, in my case, you often make a living debugging other people's code).

>MEMQ is a stupid name.

So is SHEBS (the best I can do for an ad hominem argument :-).

But it's obviously distinct from MEMBER, and has a great deal of "historical"
weight behind it...

>>MEMBER isn't the only case...

>This was a tough problem for the designers.  On the one hand, everybody
>complains about non-obvious names like CAR and CDR and MEMQ.  On the
>other hand, the considerably better name MEMBER, which ought to be used
>to denote *all* sorts of membership tests, had been grabbed up for EQUAL
>tests.  I would rather have one obvious function name than MEMBER, MEMQ,
>MEMQL, MEMBER-EQUALP, ad nauseam.  And I *especially* don't want to
>write my own membership tests just because I needed to use some specialized
>test function!!

Fine.  I don't agree, but DON'T CALL IT MEMBER!!  Call it something else and
if you want to add funky syntax and make it a special form, be my guest.  But
don't kill my working code that goes back many years!!!!  And don't
invalidate all of the text books, articles, etc that have already been
written and used for many years!

>I think those who flame about language designs should be forced to
>design, implement, and distribute their own language, then listen
>while know-nothing second-guessers complain about things they
>don't understand to begin with...

Terribly sorry I wasn't available to help with the CL definition.  On the other
hand, perhaps a little more time should have been taken, and some people who
weren't completely devoted to a life of developing LISP per se should also
have been consulted.  Let's face it; design by committee is a polite term.  
The kitchen sink approach is probably more apt.  As you and I point out,
one of the key means of getting agreement was including everybody's favorite
feature!  This ain't "design"...

There are a lot of good things in CL, but it's a mammoth compromise and you
and I both know it.  There are tons of problems and the mad rush to make it
a standard is very premature.   It's the rush to make it a standard that I
object to.  It is simply not a good standard.  As you say:

>Perhaps you should try a non-broken Common Lisp then...
>Not in clever implementations...

Where are these  clever, non-broken Common LISPs?  As I also point out,
implementation is very costly and the results are forcing many firms to
recreate LISP in C to get decent performance for their ES shells.  The only
"non-broken" versions I am aware of are re-written SPICE!  It's hard to
believe that MIT can't come up with a version (or if they have,
they haven't notified me of  the update; last version I have is 0.286).

Perhaps you can keep the personal attacks to a minimum next time.  You may
not know who I am; that doesn't make me an illiterate idiot!  I've probably
been around a LOT longer than you have, and probably have a much wider range
of experience, which is not confined to LISP language development (although we
have more LISP designs than you'll ever see).

-JJ, CONSART Systems Inc.
CIS: 75076,2603
BIX: jeffjacobs

hedrick@topaz.RUTGERS.EDU (Charles Hedrick) (06/22/86)

In general, I agree with Shebs' response.  A couple more things:

It was realized when CL was designed that it was larger and would take
more time to implement correctly than Lisp 1.6.  The designers took
the view that this was a one-time price, and that it was better for
each manufacturer to spend a few more man-hours once if it would save
their users from then on.  This is probably right.

There is nothing in CL that makes a fast interpreter impossible.
There is plenty that makes it difficult.  The DEC-20 implementation
has an interpreter that is reasonably fast.  The major speed problem
is caused by lexical bindings.  Simple implementations are low because
they have to put all of the bindings on lists, thus replacing push by
CONS and pop by a GC.  The DEC-20 implementation prevents this by
putting the data on the stack where possible, and copying into the
heap in the rare case of someone asking for an actual closure object.
There are similar tricks for the other difficulties.  The resulting
interpreter is not as fast as an interpreter for Lisp 1.6, but the
difference is not enough to cause trouble.  What we get for all of
this pain is an interpreter whose binding semantics are compatible
with those of the compiler.  I spent several years supporting UCI Lisp.
Many users never compiled code because when they compiled it, it
stopped working.  The primary problem was that the binding semantics
changed, and they suddenly had to declare lots of things "special".
In my opinion it is worth a man-month from each implementor to get rid
of this problem.  Actually, I voted against this feature of CL.  I
would have preferred that the compiler use "special" all the time.
(We did this to a compiler for a variant of UCI Lisp.  It helped
things immensely.)  It was turned down, I think for two reasons (1)
This makes the compiler pay in efficiency.  Adding lexicals to the
interpreter makes the interpreter pay in efficiency.  Since you
compile when you want speed, it makes sense to pick the definition
that is fastest for the compiler (2) There are a number of people who
believe that lexical binding is safer, and who want to use lexical
closures.
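
A sketch of the two cases being distinguished (the function names are
invented):

;; nothing escapes this call, so an implementation can keep FACTOR's
;; binding on the stack
(defun scale-all (items factor)
  (mapcar #'(lambda (x) (* x factor)) items))

;; here the closure outlives the LET, so COUNT must be copied into
;; (or allocated in) the heap
(defun make-counter ()
  (let ((count 0))
    #'(lambda () (incf count))))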

A number of things said in the original posting are just wrong.  I can
think of no way to implement SET so it works for lexically-bound
variables in compiled code.  There are such things in UCI Lisp.  They
are called SPECIAL.  I am reasonably sure that SET does not work with
them.  If the manual fails to say this, it is a documentation error.

The primary reason for SETF is for convenience in writing code that
writes other code.  Examples are macros and code to implement flavors
or other structured-programming constructs.  For all such purposes, it
is very handy to be able to reverse the definition of a component.
That is, suppose we have a list, and we refer to its components as
(CAR X), (CADR X), and (CADDR X).  We want to be able to change the
components uniformly by doing (SETF (CADR X) ...) instead of having to
transform this into (RPLACA (CDR X) ...).  If a person is writing the
code, this may not be so important (though it is a very common bug
for a person to write (RPLACD (CAR X) ...) where he meant (RPLACA (CDR
X) ...)).  However researchers are now beginning to depend upon high-level
tools.  We want a macro or other code-constructor to be able to take
the definition of a component and be able to build code to change the
element by sticking SETF around it.  It is precisely such large systems
as KEE that make many of the complexities of CL necessary.
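
Concretely, the effect is as if (the exact expansion is up to the
implementation):

(setf (cadr x) 'new)            ; same side effect as (rplaca (cdr x) 'new)
(setf (get 'foo 'color) 'red)   ; same idea for a property-list access

so a code-constructing macro can turn any accessor form it is handed into an
update form just by wrapping SETF around it.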

Personally I would have wished for CL to be smaller.  As the manager
of a number of timesharing systems, I cringe at a Lisp where each user
takes 8 MB of swap space.  (Our DEC-20 implementation does better than
this.  We were very careful to allow as much to be shared as possible.
The implementation that takes 8MB per user was designed primarily for
single-user workstations. Unfortunately we have a situation where we
want 4 people to use it.)  But I have no question that the things that
made it bigger will save a number of people time, and facilitate the
building of large systems.  My sense of things is that CL
implementations, particularly after they have been in use for a few
years and get tuned, will be about as fast as other Lisps.  However
they will be enormous.  The attitude of the CL designers was "memory
is cheap".  If you envision CL as being used for large systems on
single-user systems, this is right.  That's certainly what the CL
designers had in mind.  Those of us trying to run large student
timesharing systems may have some problems with this.  It may be that
for us a subset will prove helpful.  Or the vendors that supply
timesharing systems may simply be more careful in their
implementations, so that all of the code can be shared.

bzs@bu-cs.UUCP (Barry Shein) (06/23/86)

>From: jjacobs@well.UUCP (Jeffrey Jacobs)
>I think CL is the WORST thing that could possibly happen to LISP.  In fact, I
>consider it a language different from "true" LISP.  CL has everything in the
>world in it, usually in 3 different forms and 4 different flavors, with 6
>different options.  I think the only thing they left out was FEXPRs...

Hear hear...I agree completely, especially the first sentence.

It's clear that CL was designed almost entirely with the compiler
in mind and in doing so has really interfered with the user's
environment.

Try writing a (savework) function in CL which saves off everything
you typed in (functions, variables etc) in a re-readable form, it
can be done I suppose, but just try a general attack (don't forget
packages...)

Where is the user environment anyhow? It's not there, every vendor
gets to make it up and in so doing will add a zillion (+-3) functions.
Will *this* be part of CL? No. Will programs 'accidently' use these
vendor supplied functions to make things useable? Yes. Will your
code run on other CLs? No.

Other than lexical scoping and macros (which I hate except in very
few situations) hopefully this can all be fixed by a CL/2 which
actually addresses the issues of -using- the language. Right now
it's obviously a set of lowest-common-denominators among a few vendors,
designed by a committee obviously and skirting almost all the interesting
issues.

This could be the beginning of a long dark ages for LISP. Franz and
Interlisp were/are far superior even if the compiler has to figure a
few things out (what a waste of a human I suppose.) Others probably
were/are also.

	-Barry Shein, Boston University

kempf@hplabsc.UUCP (Jim Kempf) (06/23/86)

What really upset me about CLtL is that the quality of the book binding
was so bad. I have had my book for about a year and a half, use it
constantly, and it has already fallen apart. I hope DEC chooses a
better book binding technology when they issue the next edition.
		Jim Kempf	hplabs!kempf

cycy@isl1.ri.cmu.edu.UUCP (06/25/86)

stan shebs starts off saying:

>I don't know why I'm bothering to respond to this uninformed flame, but I've got

You should talk about uninformed....

>Fortranners and Pascallers and Cers don't worry about this sort of thing,
>because they just write sort routines and membership tests over and over
>and over again, and wonder why they have to work 10 times as hard to get
>their programs as sophisticated as the same ones in Lisp.

Obviously, you know much about Common Lisp, and nothing about C (and probably
Fortran or Pascal, but since I haven't used those languages in many years, I won't
comment about them). Either that, or you are a pretty poor C programmer.
There are reasons why people have C libraries, after all. I have programmed
both in C and in Lisp (mostly Franz, but lately mostly Common-lisp). They both
have advantages and disadvantages, and one of the things I liked about Franz
was the ability to link C routines in with the Lisp code. But in my years of
programming in C, I have not duplicated code. And there are definitely things
I can do much faster in C than in Lisp, and vice versa. Depending on the
goal, one can work ten times as hard to reach a level of sophistication
achieved in C in Lisp, or achieved in Lisp in C. My points are these: neither
language is more virtuous than the other (there are some really awful things
about Lisp, but on the other hand, there are some things I'd rather not even
consider doing in C rather than Lisp and the other way around), and second,
a good C programmer does not in fact rewrite code s/he's already written. There
is nothing in either of these languages that I am aware of that makes one better
than the other on this score. I suggest you learn to programme in C before
making such remarks.

>I think those who flame about language designs should be forced to
>design, implement, and distribute their own language, then listen
>while know-nothing second-guessers complain about things they
>don't understand to begin with...

I think those who flame about a language they evidently know little or nothing
about should consider learning it first.

By the way, Steele's book is terrible. I mean, even as a reference. The UPM
is easier to read. It assumes in some sections that one is reading it like
a book (as opposed to a reference). I think that at least he could have
put the entries in each chapter in alphabetical order. I like the Franz
documentation better. And there are problems with Common-lisp. It is hardly
perfection. The thing that bothers me the most is the limitations of
parameter passing. I forget exactly what the problem was...I think I wanted
to have &optional parameters followed by &key, but I'd have had to put a &rest
between them. Most annoying. Oh well. I was told the reason for this, but
as I recall, it didn't seem like a very good reason (and I wasn't alone
in my opinion).



					-- Chris Young.
arpa: cycy@cmu-ri-isl1
uucp: {...![arpa/uucp gateway (eg. ucbvax)]}!cycy@cmu-ri-isl1

"We had it tough... I had to get up at 9 o'clock at night, half an hour
before I went to bed, eat a lump of dry poison, work 29 hours down mill,
and when we came home our Dad would kill us, and dance about on our grave 
singing haleleuia..."

					-- Monty Python

martin@kuling.UUCP (06/25/86)

In article <1311@well.UUCP> jjacobs@well.UUCP (Jeffrey Jacobs) writes:
>I think CL is the WORST thing that could possibly happen to LISP.  In fact, I
>consider it a language different from "true" LISP.  
To me, CL is the first "true" Lisp I have seen! Functions are first-class
objects, built-in functions have names that describe what they do, not
idiotic short-hands like Maclisp's munkam or Franz' dtpr, it has the nicest
set of data types I have seen (though I admit that it is more a feature
than an important part of the language) and powerful arithmetic.

>...
>different options.  I think the only thing they left out was FEXPRs...
You don't want FEXPRs. Use macros.

>It is obviously intended to be a "compilable" language, not an interpreted
>language. By nature it will be very slow; somebody would have to spend quite a
>bit of time and $ to make a "fast" interpreted version (say for a VAX).  ...
Why should anyone spend time and money to make a fast interpreter? It is the
compiled code that should be "fast".

>and NIL from MIT DOES NOT HAVE A GARBAGE COLLECTOR!!!!  It just eventually eats
>up its entire VAX/VMS virtual memory and dies...
>
Neither does Zetalisp on the Lambda... What's that got to do with the language
definition?

>The entire INTERLISP arena is left out of the range of compatibility.
Yes, fortunately!

>As a last shot; most of the fancy Expert Systems (KEE, ART) are implemented in
>Common LISP.  Once again we hear that LISP is "too slow" for such things, when
>a large part of it is the use of Common LISP as opposed to a "faster" form
>(e.g. one with shallow dynamic binding and simpler LAMBDA variables; they
>should have left &aux, etc., as macros).  Every operation in CL is very
>expensive in terms of CPU...
If so, it is time to build new machines that supports the language better.
For years now we have built the machines and then designed the languages for
them when it should be the other way around. That's why we are stuck with
ugly things like F-N, Pascal and C. (No flames plz, I've heard them before.)

>I forgot to mention the fact that I do NOT like lexical scoping in LISP; to
>allow both dynamic and lexical makes the performance even worse.  To me,
>lexical scoping was and should be a compiler OPTIMIZATION, not an inherent
>part of the language semantics.  I can accept SCHEME, where you always know
>that it's lexical, but CL could drive you crazy (especially if you were 
>testing/debugging other people's code).
That is perhaps a matter of taste but I do like lexical scoping. I have
tested/debugged a lot of "other people's" Maclisp and Franzlisp code which is
heavily built on dynamic scope (which is a type of side effect to me) and
*that* can drive you crazy.
Besides, when do you get dynamic scope without asking for it? The only
(few) cases I could find are when it is "natural" (in some way),
e.g. with-open-file.
Lexical scoping *IS* a part of the language semantics! Lexical scoping as
a compiler optimization seems utterly stupid to me.

>BTW, I think the book is poorly written and assumes a great deal of knowledge
>about LISP and MACLISP in particular.  I wouldn't give it to ANYBODY to learn
>LISP.
>
>...Not only does he assume you know a lot about LISP, he assumes you know a LOT
>about half the other existing implementations to boot.
>
It is simply not a tutorial book. It is a book for Lisp hackers/implementors/
developers.

>Friday I was in a bookstore and saw a new LISP book ("Looking at LISP", I
>think, the author's name escapes me).  The author uses SETF instead of SETQ,
>stating that SETF will eventually replace SETQ and SET (!!).   Thinking that
				   ^^^^^^^^^^^^^^^^^^^^
>this was an error, I checked in Steele; lo and behold, tis true (sort of).
It is an error. SETF will (eventually) replace SETQ. It can not replace SET.

>In 2 2/3 pages devoted to SETF, there is >> 1 << line at the very bottom
>of page 94!  And it isn't even clear; if the variable is lexically bound AND
>dynamically bound, which gets changed (or is it BOTH)?  Who knows?  Where is
>the reference?
At the moment of reference you can only see one binding of a variable. Which
one depends of the circumstances and follows the rules outlined in chapter 3.
I have never had any trouble with this, it is very simple.

>"For consistency, it is legal to write (SETF)"; (a) in my book, that should be
>an error, (b) if it's not an error, why isn't there a definition using the
>appropriate & keywords?  Consistency?  Generating an "insufficient args"
>error seems more consistent to me...
>
>Care to explain this to a "beginner"?  Not to mention that SETF is a
>MACRO, by definition, which will always take longer to evaluate.
>
All functions and special forms that allow an indefinite number of arguments
should allow zero args if possible, for consistency. Care to explain to a
"beginner" why a form that allows one, two, three or N (pairs of)
arguments should not allow zero arguments?
Macros do NOT always take longer to evaluate. That may be true interpreted,
but never compiled.
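
A minimal sketch of why (SQUARE and HYPOT2 are made-up names): the expansion
happens once, before the code runs.

(defmacro square (x) `(* ,x ,x))

(defun hypot2 (a b)
  (+ (square a) (square b)))
;; compiling HYPOT2 expands SQUARE at compile time; the compiled function
;; just multiplies and adds, with no macro machinery left at run time.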

>Then try explaining why SET only affects dynamic bindings (a most glaring
>error, in my opinion).  Again, how many years of training, understanding
>and textbooks are suddenly rendered obsolete?  ...
It is obvious. If SET affected lexical bindings, a function might not always
return the same thing, depending on the names of its (lexical) variables.
Suppose SET did change lexical bindings and you have this (silly) function :
(defun foo (a b) (set a b))
Now, if you have a variable 'x' dynamically bound to 42 then the call
(foo 'x 666)
would, of course, change the dynamic binding of 'x' to 666. But if your
dynamic variable happen to be called 'a' the call
(foo 'a 666)
would NOT change the dynamic binding of 'a' which is probably not what
you expected...


					PEM
-- 
Per-Erik Martin,  Computing Science Dep., Uppsala University, Sweden
UUCP: martin@kuling.UUCP  (...!{seismo,mcvax}!enea!kuling!martin)

jjacobs@well.UUCP (Jeffrey Jacobs) (06/25/86)

>References:<5194@topaz.RUTGERS.EDU <1311@well.UUCP> <3827@utah-cs.UUCP>
In <5194@topaz.RUTGERS.EDU>, Charles Hedrick says:

>In general, I agree with Shebs' response.  A couple more things:

Mr. Hedrick does not state what points he agrees with.  However, he then
goes on to say:

>There is nothing in CL that makes a fast interpreter impossible.
>There is plenty that makes it difficult.  The DEC-20 implementation
>has an interpreter that is reasonably fast.  The major speed problem
>is caused by lexical bindings...

I fail to see where Hedrick disagrees with me, unless it's my use of the
word "impossible".  Since the original article was deliberately "flameboyant"
I'll be glad to downgrade "impossible" to "plenty difficult" <grin>.

>Personally I would have wished for CL to be smaller.  As the manager
>of a number of timesharing systems, I cringe at a Lisp where each user
>takes 8 MB of swap space.

This also sounds suspiciously like agreement with my position about CL
being a bit overgrown.  Just how many UCI LISP versus CL users can a 2060 
support?

>Simple implementations are slow because 
>they have to put all of the bindings on lists, thus replacing push by
>CONS and pop by a GC.

(Note: Hedrick's original msg. had "low", not "slow").

The overhead penalty is not in the CONSing; the time for CONS should be roughly
equal to pushing the data on the stack.  The REAL overhead is in accessing
the data, whether searching a list or searching a stack.  If the value of
FOO is accessed many times, the overhead increases dramatically.  In fairness,
it should be pointed out that there is possibly BETTER performance in a
lexically scoped interpreter using a stack than in a deep bound dynamic
environment, as the interpreter would go to the value cell if the variable
wasn't found in the lexical environment, as opposed to searching the entire
stack.

In the case of using a list for the environment, a careful implementation
should be able to RETURN the "lexical" environment to the free list on
exit, reducing the need for GC.  (I assume that he didn't do a GC after
every function exit).
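
As a rough sketch of the access path in question (hypothetical names, not
any particular implementation):

(defun variable-value (symbol environment)
  ;; ENVIRONMENT is an alist of (SYMBOL . VALUE) pairs that the
  ;; interpreter builds up as it binds lambda variables.
  (let ((binding (assoc symbol environment)))
    (if binding
        (cdr binding)              ; found in the "lexical" environment
        (symbol-value symbol))))   ; otherwise fall back to the value cell

;; Every reference pays for the ASSOC; a shallow-bound dynamic lookup
;; pays only for SYMBOL-VALUE.  Binding is cheap either way -- it is the
;; repeated accesses to FOO that dominate.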

>What we get for all of
>this pain is an interpreter whose binding semantics are compatible
>with those of the compiler.  I spent several years supporting UCI Lisp.
>Many users never compiled code because when they compiled it, it
>stopped working.  The primary problem was that the binding semantics
>changed, and they suddenly had to declare lots of things "special".

 <flame on> 
The code didn't "stop working"; the programmer just didn't know how to do a
complete job.  It's a matter of not being taught decent programming habits.
Versions of DWIM/Programmer's Assistant have been available for UCI LISP
since 1972, which make it very easy to keep track of necessary SPECIALs, etc.
The notion that gets into students' heads that they can just blindly write
code without any forethought or planning is what gives Comp. Sci. grads such
a bad reputation in industry.  There's really no excuse for failing to
understand such a basic principle of the language being used.

Some people need to be protected from themselves;  LISP (and
assembler) are not for them.  Let them use PASCAL and M-2.
 <flame off>

Historically, the default to local/lexical compilation was an obvious
optimization, based in large part on the fact that most good LISP programmers
wrote very functional code with very little need of dynamic scoping.

>In my opinion it is worth a man-month from each implementor to get rid
>of this problem.  

A man-month? See below...

> Actually, I voted against this feature of CL.  I
> would have preferred that the compiler use "special" all the time.
>(We did this to a compiler for a variant of UCI Lisp.  It helped
>things immensely.)

You're gonna hate me for this; UCI LISP was meant to be that way when
released.  Daryle had planned to do it, but didn't
have time and forgot to tell me before he graduated.  I was so busy with
the interpreter and other things that as long as nobody reported bugs, I
didn't give the compiler much thought.  So it just fell through the cracks
<sigh>.  (I'm not sure we could have convinced Rusty, anyway). 
Development at UCI basically stopped when I left <sigh>.

If you voted against it, it sounds like we agree!

>It was turned down, I think for two reasons (1)
>This makes the compiler pay in efficiency.  Adding lexicals to the
>interpreter makes the interpreter pay in efficiency.  Since you
>compile when you want speed, it makes sense to pick the definition
>that is fastest for the compiler 

This is highly machine and implementation dependent.  There are two things
that must be considered: time to perform the binding and time to access.
Time to access is really the key to performance.  (Given the overhead
involved in CL's extended binding capabilities, time to bind might
even be cheaper <grin>).

>(2) There are a number of people who
>believe that lexical binding is safer, and who want to use lexical
>closures.

From an overall, general language design perspective, I believe that
lexical scoping is better.  But there should be a SYNTAX to go along
with it.  LISP syntax simply doesn't provide a decent distinction
between lexical and dynamic; I can go along with either one or t'other,
but not mixed.  Besides, we need at least one language with dynamic
scoping <grin>...

>A number of things said in the original posting are just wrong.  I can
>think of no way to implement SET so it works for lexically-bound
>variables in compiled code.  There are such things in UCI Lisp.  They
>are called SPECIAL.  I am reasonably sure that SET does not work with
>them.  If the manual fails to say this, it is a documentation error.

If you put on your "stupid ways to do it" cap, you could figure out a
way to do it.  But it's not really something you would WANT to do.
SET has ALWAYS been a problem, more so when interpreted than compiled,
due to the shadowing problem.  For consistency, CL should have thrown
out SET, and defined a new function, maybe SET-SYMBOL-VALUE.  But it
should NOT have changed the semantics of SET (or MEMBER).  The definition
of SET in CL is very useful; it's the name I object to.

BTW, SET should work with variables declared SPECIAL.  I'm not sure
what your comment means.

>The primary reason for SETF is for convenience in writing code that
>writes other code.  Examples are macros and code to implement flavors
>or other structured-programming constructs...
>... However researchers are now beginning to depend upon high-level
>tools.  We want a macro or other code-constructor to be able to take
>the definition of a component and be able to build code to change the
>element by sticking SETF around it.  It is precisely such large systems
>as KEE that make many of the complexities of CL necessary.

Perhaps this is why KEE is being rewritten in C :-)?  (So are most of
the other high end shells.  Want to bet that the C implementation looks
a lot like "Real LISP" (RL)?)

The results of using constructs like SETF (and similar features in ADA)
aren't in yet, and won't be for many years.  My personal opinion is that
they will prove undesirable due to difficulty in debugging them and
"type" problems.  I think it's better to write a more specific function
that's changeable rather than a "generic" function.  But SETF as a macro
does involve at least one more EVAL (despite what Shebs said).

>But I have no question that the things that
>made it bigger will save a number of people time, and facilitate the
>building of large systems.  My sense of things is that CL
>implementations, particularly after they have been in use of a few
>years and get tuned, will be about as fast as other Lisps.  However
>they will be enormous.  The attitude of the CL designers was "memory
>is cheap".  If you envision CL as being used for large systems on
>single-user systems, this is right.  That's certainly what the CL
>designers had in mind.  Those of us trying to run large student
>timesharing systems may have some problems with this.  It may be that
>for us a subset will prove helpful.  Or the vendors that supply
>timesharing systems may simply be more careful in their
>implementations, so that all of the code can be shared.

They also assume the CPU is cheap!!!

There's an old saying:

"LISP programmers know the VALUE of everything and the COST of nothing".

Have you seen the price for memory launched into orbit lately?
It's definitely not cheap!  How often have we heard about
"artificially intelligent" robots exploring Mars (or wherever). Given the
power and weight required for CL, you can forget that one!!!  In fact,
given the current trend, almost no delivered E.S. system will be in LISP.
"Rewriting" it in C after "prototyping" seems well on the way to becoming
an acceptable part of the process; you can bet that this will not be
economically viable in the real world.

It might be cheap when you have a University grant or DARPA to support you;
it gets a little more complicated when you have to justify a single user
station for $125,000 (or even $50,000) to a manager.  Then tell him
that once you have it working, you'll have to rewrite it!  From
a business point of view, I can get a lot VAX for my money...

I envision LISP being used for a great many things besides "large systems".
It's hard to visualize using CL for a small embedded system.

As long as I have been in the field, I have heard that LISP was big, slow
and hard to learn.  I always said that this was a myth, that reasonably
sized, efficient LISPs were available.  With CL as a standard, I will
be flat out wrong.  I think CL has its place; I also think that the
choking off of other LISP development is a tremendous mistake.  Yet
this is exactly what is happening; implementors feel they "have" to
achieve full CL status (whatever that really is; even SPICE isn't
fully "Steele").

Disclaimer:  My articles on Common LISP are about Common LISP as defined
in Guy Steele's book.  It is specifically NOT about any particular
implementation; although I may point out deficiencies and deviations
from CL as defined in Steele, I am not attacking any implementation or
person (INCLUDING Guy Steele).  There is a great deal of work in CL;
it's the mad rush to make it a "standard" that I am against.

-Jeffrey M. Jacobs, CONSART Systems Inc., Manhattan Beach, CA
CIS:[75076,2603]
BIX:jeffjacobs

martin@kuling.UUCP (Erik Martin) (06/26/86)

In article <1316@well.UUCP> jjacobs@well.UUCP (Jeffrey Jacobs) writes:
>
>In <3827@utah-cs.UUCP>, Stanley Shebs responds to
>my original article, <1311@well.UUCP> jjacobs@well.UUCP...
>...

>>>  I think the only thing they left out was FEXPRs...
>
>My error here; FEXPRs are easily duplicated. What is really left out is
>MACROs, i.e. access to the actual form.

What? MACROs left out? Explain plz.

>It's not a matter of "broken", it's a matter of incomplete.  How much time, 
>effort and money will it take before a "complete" implementation that doesn't 
>derive from SPICE will appear?
>...
>recreate LISP in C to get decent performance for their ES shells.  The only
>"non-broken" versions I am aware of are re-written SPICE!  It's hard to
>...

What's wrong with an implementation derived from SPICE if it is complete?
The implementation we have (under TOPS-20) is derived from Spice and works
fine.  It's fast enough and lacks only complex numbers (so far, but I don't
miss them).  It's not as fast as Good ol' MacLisp but, on the other hand, the
compiler hasn't had ten years of trimming (as a matter of fact, the compiler
is the weakest part of it, so far (again)).

				PEM
-- 
Per-Erik Martin,  Computing Science Dep., Uppsala University, Sweden
UUCP: martin@kuling.UUCP  (...!{seismo,mcvax}!enea!kuling!martin)

martin@kuling.UUCP (Erik Martin) (06/26/86)

In article <830@bu-cs.UUCP> bzs@bu-cs.UUCP writes:
>
>Where is the user environment anyhow? It's not there, every vendor
>gets to make it up and in so doing will add a zillion (+-3) functions.
>Will *this* be part of CL? No. Will programs 'accidently' use these
>vendor supplied functions to make things useable? Yes. Will your
>code run on other CLs? No.

Do you mean that this is a special property of CL?  Foo!  I've used the same
version of MacLisp on different machines and the user environments can be
totally different, depending on which features the local hackers have
included.  Also there are different libraries which are supposed to exist;
programs just load them.  This is a problem you never get around in any
interpreted, dynamic language.
Common Lisp is fairly well defined in environmental matters; in particular,
the I/O functions cannot be better defined unless you make certain
assumptions about hardware and terminal capabilities.
Besides, will a Franzlisp program run in Interlisp?  No.  Will a Maclisp
program run in Franz'?  Probably not.  Will a CL program run in (another)
CL?  Yes, if you don't use implementation dependent features not included in
the standard.  It's up to you.  (If you really *want* to lose...)

>This could be the beginning of a long dark ages for LISP. Franz and
>Interlisp were/are far superior even if the compiler has to figure a
>few things out (what a waste of a human I suppose.) Others probably
>were/are also.

I think that what you (and Jeffrey Jacobs) really mean is that CL does
not look and behave like what you are used to, and therefore must be bad.
("I've hacked Lisp for ten years and know what a REAL Lisp looks like,
 so don't you come here with that new stuff!")
If people had thought that way the last twenty years we would still use
LISP 1.5 (the oldest version I've seen).


					PEM
-- 
Per-Erik Martin,  Computing Science Dep., Uppsala University, Sweden
UUCP: martin@kuling.UUCP  (...!{seismo,mcvax}!enea!kuling!martin)

hugh@hcrvx1.UUCP (Hugh Redelmeier) (06/26/86)

In article <313@hplabsc.UUCP> kempf@hplabsc.UUCP (Jim Kempf) writes:
>What really upset me about CLtL is that the quality of the book binding
>was so bad. I have had my book for about a year and a half, use it
>constantly, and it has already fallen apart.

Sounds like DEC uses dynamic binding (it changes over time).  Lexical
is *much* more appropriate for Common Lisp.  Dynamic binding should
be an option (cheaper in paperback).

patrick@pp.UUCP (Patrick McGehearty) (06/26/86)

The original article is clearly stirring much controversy.
I suggest we break the discussion into several major topics.

First is the general question of when is it appropriate to define a
standard and stick to it.  AI and its hype are coming out of the closet
of academic research and entering the commercial arena.  Your average
company is not interested in learning the pros and cons of five different
leading Lisp implementations (i.e. UCI, PSL, InterLisp, CommonLisp, FranzLisp
to name a few) and then identifying which commercial products are available
in which implementations.  Also, the commercial vendors are not interested
in reimplementing everything many times.  Thus, there are good and strong 
motivations for selecting a standard and going with it for a few years.
The Common Lisp implementors group (i.e. those who read fa.commonlisp)
are already talking about what should be in the current standard vs what
might be reserved for a future standard.  So I will not reply to minor
nitpicking, rather concentrate on what I perceive to be major issues.

Interpreter vs Compilers
I currently am working on a Lambda in Common Lisp and previously worked
in MultiLisp (developed by Halstead at MIT) on a VAX.  Since MultiLisp
did not have an interpreter, and since I come from a C development
background, I never (except for top-level commands) use the interpreter.
Incremental compilation takes a trivial amount of time and execution
is sooo much faster in compiled mode.  Thus, I wonder why anyone is
still concerned with interpreter implementations.  I note in passing
that Unix engineering workstations currently sell for less than $15000
and Kyoto Common Lisp sells for $700 object (written in C, provides
C interface to Lisp routines).  So do not complain about Common
Lisp only being useful for people who can get $125000 together.
Many of the original "difficult to implement efficiently" complaints
seemed to be related to interpreter issues.  I did not see why any
of those items would be difficult to generate good compiled code
for.

Which brings us to the question of Dynamic vs Lexical Scoping.
Coming from a C background, I find dynamic scoping to be
distasteful and lending itself to poorly engineered programs.
To understand the effects of lexical variables, the user/debugger
can rely on a static analysis of the source code.
To understand the effects of dynamic variables, the user/debugger
must understand the dynamic behavior of a running program.
It seems clear to me that the first is simpler than the second.
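
A small example of the difference (sketched in Common Lisp; the function
names are made up):

;; Lexical: what SCALE means inside SCALE-ALL can be read off this one
;; definition, with no knowledge of the callers.
(defun scale-all (scale items)
  (mapcar #'(lambda (x) (* scale x)) items))

;; Dynamic: the value seen inside SCALE-ALL* depends on whichever caller
;; most recently rebound *SCALE* in the running program.
(defvar *scale* 1)
(defun scale-all* (items)
  (mapcar #'(lambda (x) (* *scale* x)) items))

(let ((*scale* 10))          ; dynamic rebinding somewhere up the call chain
  (scale-all* '(1 2 3)))     ; => (10 20 30)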

I further claim (here I deviate from Common Lisp) that there should
be a concept of "Global Lexical" variable.  Such a variable could
be masked by a function that happens to use the same name, but would
be available anywhere that it was not explicitly masked out.
This concept would allow those of us who find dynamic scoping
distasteful to completely avoid it.  Instead, we are told to
use *foo* as a convention to reduce the chance of a collision
with global special variables (that is, surround special variable names
with *s to make them stand out).

Parallel Implications - special variables also have significant
implications for rapid execution of parallel programs (if you
are not interested in speed, why bother to execute in parallel?)
In order that each subthread of the computation may have its own
value for a special variable if it has the appropriate declarations,
special variables must be supported by "deep" binding rather than
"shallow" binding.  This implemenation approach means that every
access to a special variable must be preceeded by a search down
a "current specials" stack or something similar to find the current
definition.  Global lexicals avoid this problem.

A challenge for those who prefer "dynamic" binding:
Show a short program (less than 40 lines) which demonstrates how
special variables make for a better (easier to read or faster to 
execute when compiled) program.

bsmith@uiucdcsp.CS.UIUC.EDU (06/27/86)

It seems incredible to me that anyone would prefer Franz Lisp over
Common Lisp.  I recently had to convert a natural language front-end
written in Franz on a vax to Common Lisp on a Symbolics.  It was a
nightmare I hope I never have to go through again.  Functions
continuously mentioned variables that were defined in other files
(the program was spread out across 6 files).  There was no way to
know what was happening at any given time without extensive comments
(which were missing).  I never appreciated lexical scoping more than
I did then.  If Common Lisp is the dark ages compared to Franz Lisp,
then I plan on being very happy in my monastery surrounded by 
Symbolics and Explorers.

kempf@hplabsc.UUCP (Jim Kempf) (06/27/86)

I would like to propose a new attitude to languages and language
research called "langnosticism". The langnostic avoids religious
arguments in favor of reasoned response, and, in particular, avoids
such terms as b*lls*it and f*ck when responding to others' postings.
This attitude can be contrasted with "evangelical languianity", in
which the poster feels compelled either to push *a* particular
language as the one, true path to salvation, or to deride someone
else's language as being a heretical deviation from the Word.

Seriously, folks, there are too many interesting issues in language
research to allow the discussion to degenerate into name-calling.
		Jim Kempf	hplabs!kempf

preece@ccvaxa.UUCP (06/27/86)

> Jeff Jacobs writes:

> As an oversimplified definition; development environment is interpreted
> with dynamic scoping (lexical scoping is a compiler optimization), no
> distinction between executed code and data unless specifically compiled
> (i.e. not incrementally compiled), no required keywords and pre-CL
> function definitions (such as MEMBER).  Examples include MACLISP,
> INTERLISP, FRANZ and UCI.
----------
I have to agree with Hedrick and Shebs and one of the primary
philosophies of CL -- the compiled environment and the interpreted
environment should behave the same way to the greatest extent possible.
The notion of dynamic scoping turning into lexical at compilation is
appalling.

On the other hand, I'm not completely pleased with the CL approach,
either.  There are times when scoping doesn't do what those of us coming
from the non-Lisp side would expect.
----------
>  It's not a matter of "broken", it's a matter of incomplete.  How much
> time, effort and money will it take before a "complete" implementation
> that doesn't derive from SPICE will appear?
----------
Isn't Lucid sui generis?  Franz Common claims to be, but I gather
you consider it to be broken (I haven't used it).
----------
>  Unfortunately this isn't written in the proper style of a standard,
> and I don't think I should have to read 5 pages to find what should be
> clearly and simply stated under the function definition.
----------
There's plenty of discussion on the CL mailing list about problems
with the document, but it's not all that bad.  A document specifying
the complete details of each function (and therefore repeating tons
of material common to other functions doing related things) would
have been MUCH bigger.  Such a thing would be useful, God knows, but
for a first statement of the principles and contents of the language,
CLtL is much preferable.  A more tutorial approach would not have been
as effective either.

I agree with your complaint about the fonts used for non-text material.
Personally, I would have preferred a non-monospaced font and something
denser (thus standing out better).  I certainly would NOT have supported
use of upper-case, which is fine for occasional emphasis but much
less readable.
----------
>  If you spend 10 years using SETQ thinking it's the same as (SET
> (QUOTE)...  or MEMBER is different from MEMQ, and suddenly it all
> changes, you're gonna be damned unhappy.

> Fine.  I don't agree, but DON'T CALL IT MEMBER!!  Call it something
> else and if you want to add funky syntax and make it a special form, be
> my guest.  But don't kill my working code that goes back many years!!!!
> And don't invalidate all of the text books, articles, etc that have
> already been written and used for many years!
----------
Well, yes and no.  You can't do a derivative language with new features
without changing the meaning of some of the existing constructs.  It's
vital, of course, that you point things out sufficiently clearly that
the careful reader (anyone who skims this kind of document gets
exactly what she deserves) can take note of what has changed.  The
book is reasonably careful about giving explicit notice about such
functions (which your original posting actually complained about).

MEMBER, by the way, does not default to EQ testing, it defaults
to EQL testing.  It should not be a big deal to make that kind
of change in existing code -- just change the name of the function
in your existing code and provide a macro supporting your name
in terms of the underlying CL primitive (I'm sure that kind of change
is second nature to you anyway, given the kind of Lisp background
you have).
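
For instance (a sketch of the kind of shim meant here; MEMQ is the old
EQ-testing name being preserved):

;; Keep the old name, defined in terms of the CL primitive:
(defmacro memq (item list)
  `(member ,item ,list :test #'eq))

;; Old calls such as (memq 'foo flags) keep working unchanged, while new
;; code uses MEMBER with an explicit :TEST where the default EQL matters.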

The number one goal of CL was to provide a common base for new Lisp
development.  It seems to be the case that the world of AI vendors
has bought that goal -- everybody seems to be building versions of
CL, even Xerox.  Nobody would expect the first version of something as
big as CL to be definitive, but conforming implementations should be
close enough that porting will be much easier than in the past.
Experience should indicate the specific areas that need further work in
specifying a formal standard (an effort now under way) and moving
towards common understanding of the issues that were insufficiently
specified in CLtL.


-- 
scott preece
gould/csd - urbana
uucp:	ihnp4!uiucdcs!ccvaxa!preece
arpa:	preece@gswd-vms

shebs@utah-cs.UUCP (Stanley Shebs) (06/29/86)

In article <1019@isl1.ri.cmu.edu> cycy@isl1.ri.cmu.edu (Christopher Young) writes:

>>Fortranners and Pascallers and Cers don't worry about this sort of thing,
>>because they just write sort routines and membership tests over and over
>>and over again, and wonder why they have to work 10 times as hard to get
>>their programs as sophisticated as the same ones in Lisp.
>
>Obviously, you know much about Common Lisp, and nothing about C (and probably
>Fortran or Pascal, but since I haven't used the languages in many years, I won't
>comment about them). Either that, or you are a pretty poor C programmer.

I started life as a Fortranner about 11 years ago, and worked with large
Fortran programs in industry.  I've also hacked C quite a bit, including
a C compiler - I might still be a lousy C programmer tho, there's no rating
system as for chess players... :-)

>There are reasons why people have C libraries, after all.

Well, non-polymorphic library functions aren't particularly useful.
A sort routine that only works on arrays of integers doesn't help me
sort some hairy structure with an integer slot in it.  My experience
with C is that getting any dynamic typing requires some really strange
union and pointer hacking that would be better done in assembly language
maybe.  In fact, much of such code seems to duplicate Lisp innards, although
someone not familiar with Lisp implementation might not realize it.

If C libraries are so useful, why are they 1 or 2 orders of magnitude
smaller than Lisp libraries?

>I have not duplicated code.

An ambitious assertion - I'd like to look at your code and see if that's
*really* true.  How do you do the equivalent of a membership test without
writing a loop each time?

> And there are definitely things
>I can do much faster in C than in Lisp

Sadly, this is still true; but the number of things has dwindled rapidly
in the past few years...

>The thing that bothers me the most [about CL] is the limitations of
>parameter passing. I forget exactly what the problem was...I think I wanted
>to have &optional parameters followed by &key, but I'd had to have a &rest
>between them. Most annoying. Oh well. I was told the reason for this, but
>as I recall, it didn't seem like a very good reason (and I wasn't alone
>in my opinion).

Sigh, most people flame about the brain-damage of having &keywords in
the *first* place, while others are unhappy about no general destructuring
a la 3-Lisp and POP-11.  Language designers just don't get no respect...

>					-- Chris Young.

							stan shebs

shebs@utah-cs.UUCP (Stanley Shebs) (06/29/86)

In article <335@hplabsc.UUCP> kempf@hplabsc.UUCP (Jim Kempf) writes:
>I would like to propose a new attitude to languages and language
>research called "langnosticism".

I have taken to calling myself an "ideologist" (as opposed to "ideologue")
but unfortunately the name of the subject ("ideology") is already
being used...

And no, I'm not a Common Lisp ideologue either - I just want to make
sure that it is criticized on valid rather than invalid grounds...

							stan shebs

jjacobs@well.UUCP (Jeffrey Jacobs) (06/29/86)

You cannot blame poor programming habits on the language; it's quite
feasible to write the same mess in Common LISP.

PLEASE NOTE: CL is NOT  lexically scoped; it allows BOTH dynamic and
lexical scoping, which makes the problem worse, not better.

It is possible to write bad programs in any language; it is possible to
write good programs in any language.

Jeffrey M. Jacobs, CONSART Systems Inc., Manhattan Beach, CA

shebs@utah-cs.UUCP (Stanley Shebs) (06/30/86)

In article <1316@well.UUCP> jjacobs@well.UUCP (Jeffrey Jacobs) writes:
>
>In <3827@utah-cs.UUCP>, Stanley Shebs responds to
>my original article, <1311@well.UUCP> jjacobs@well.UUCP...
>

>>I think those who flame about language designs should be forced to
>>design, implement, and distribute their own language, then listen
>>while know-nothing second-guessers complain about things they
>>don't understand to begin with...
>
>I certainly appreciate personal attacks like this; they make things so much
>livelier :-).  To make my ignorance perfectly clear, let me state that I have
>not only been extensively involved in design and distribution of LISP (and
>other systems); my experience started in 1971 with UCI LISP.

I suppose at this point it's appropriate for me to put on my know-nothing
hat and proceed to comment on the design (or lack thereof) of UCI Lisp.
There's a manual right here... 

>Let me use the term "Real LISP" (RL) instead.  I am not going to give a 
>definition that will be worth flaming at.  The point is that CL makes a
>_radical_ departure from previous LISP work both in terms of syntax
>and semantics.  As an oversimplified definition; development environment
>is interpreted with dynamic scoping (lexical scoping is a compiler
>optimization), no distinction between executed code and data unless 
>specifically compiled (i.e. not incrementally compiled),
>no required keywords and pre-CL function definitions (such as MEMBER).
>Examples include MACLISP, INTERLISP, FRANZ and UCI.

Well in that case, let me substitute the term "Obsolete Lisp".  People
take a dimmer view of poorly defined semantics nowadays - it's no longer
OK for Lisp to be "Fortran with parentheses".

>>Can't beat the speed of C programs by running interpreted, you know...
>
>But you can get one heck of a better development environment running 
>interpreted...

Sure - I use the interpreter too.  But WHO CARES about the *performance*
of the interpreter!!!

>>Many of the vaguenesses in the Common Lisp standard are to allow
>>implementations some freedom to do things in different ways
>
>Much of the vagueness will have a direct impact on the user and the
>portability of code.

Yeah, like improve portability, at least when programmers adhere to
the definition of the language.  Are you suggesting the language standard
for the interchange of programs should read like the UCI manual and say that
577777Q represents zero?  I don't think that non-DEC-20 people would
like that part of the specification very much!

>Being serious, your statement is ridiculous.  CL isn't Maclisp compatible 
>either. There is a great range of commonality between Interlisp and
>Mac-lispish implementations.  (We stole from the best for UCI LISP <grin>).

Somehow I doubt a UCI Lisp program using the ";" or ";;" functions is going to
work very well in Maclisp.  Square brackets in UCI Lisp and Franz won't
go over real well in PSL, where they denote vectors.  UCI Lisp and PSL are
the only dialects to define functions with a macro DE, while Maclisp and
Franz use defun, and Interlispers do it in several different ways.

CL is just as compatible with older dialects as the older dialects are with
each other!  But once you convert to the standard, you don't have to do it
any more...  It's very misleading to suggest that there isn't a lot of pain
and agony to translate one dialect of "Real Lisp" to another.  Tim Finin
wrote his Franzlator to convert Interlisp code to Franz, and it had hundreds
of conversion rules.

Common Lisp has the advantage that a number of experienced Lisp implementors
met and argued about most aspects of the language, while some of the features
of older dialects look like they were put together by an undergrad that had
been dropped on the head as a child.  PSL has this function DELATQIP for
instance, which does destructive deletions on alists using a key, while UCI
Lisp has a function SPEAK which returns the number of CONSes executed
(although it is not said whether this count is reset between GCs - a rather
glaring screwup).  Franz defines (and documents!) operations for beating
on allocated number cells directly, so one can globally change 3.1416 to
be 2.72 (sort of like the undocumented feature of certain Fortrans).

>The main reasons I suggest dynamic is preferable in LISP are a) historical, 
>b) LISP is not block structured; the separation of declaring a variable 
>SPECIAL (DEFVAR being recommended, p 68) from an actual function definition 
>makes it very difficult to debug a function "locally", 
>c) the excessive need to provide compiler declarations makes for some pretty 
>ugly code.

Huh?  Declaration for what?  Local variables?

>Well written LISP code should be almost completely independent of lexical or
>dynamic scoping considerations.  A free variable is obviously special; the
>only real problem comes in when a variable is bound.

(let ((num 45))
  (mapcar #'(lambda (x) (+ x num)) '(3 4 5)))

I would prefer num to be lexical and not dynamic, even though it is free
within the inner lambda.  It's also not desirable for typos to suddenly
materialize as specials - it would be a tough debug to find.
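
For example (a hypothetical misspelling):

(let ((num 45))
  (mapcar #'(lambda (x) (+ x nmu)) '(3 4 5)))   ; NMU is a typo for NUM

;; Under dynamic-by-default rules the misspelled NMU silently becomes a
;; reference to some global, and the bug only shows up (or worse, doesn't)
;; at run time; a lexically scoped compiler can warn about the unbound
;; free variable right away.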

>>>to allow both dynamic and lexical makes the performance even worse.
>
>>This is totally wrong.
>
>Say WHAT?  It certainly is TRUE in an interpreter; it takes longer to look up
>a lexical variable than a dynamic variable, and it takes even longer when you
>have to determine whether the lookup should be lexical or dynamic.  Add a
>little more time to check if it's a CONSTANT or DEFVAR...

Ah, we're talking about the interpreter again.  I still don't understand why
anyone would think interpreter performance would matter - the overhead
is so tremendous already that dynamic/lexical lookups aren't necessarily
significant.

>I'm real tired of hearing that LISP "is slow" and requires special
>hardware ; if you can't work with gp multi-user hardware, maybe you should be
>doing something else.

I don't use Lisp machines, I expect my (compiled) Common Lisp programs to
go faster than equivalent C programs, and I beat on the compiler if they
don't go fast enough.

>>Neither would I - it's not *supposed* to be an instructional book.
>
>So what is everybody supposed to have learned from?

Rodney Brooks has a book "Programming in Common Lisp", Bob Kessler is
working on a book that features objects and Common Lisp, and of course
the 2nd edition of Winston & Van Horn's "LISP" uses Common Lisp.

>>There are some problems, which is why
>>there is another version in the works.  Almost all of the problems are
>>extremely technical, and take a long time just to explain to someone not
>>familiar with the language, let alone to solve...  There are also some open
>problems with language design involved - like compiling to different machines
>within the same programming environment, and support of program-analyzing
>programs in a portable manner.
>
>No sh*t, Sherlock!  You think the flames I've put up are even half?

These problems certainly aren't solved in UCI Lisp either!

>Do you think maybe the drive to make it a "standard" may be just a wee bit
>premature?

So when is it going to be less premature?  When we have 40 mutually
incompatible Lisp dialects instead of only 10?

>>>There is nowhere near the basic underlying set of primitives (or
>>>philosophy) to start with, as there is in Real LISP (RL vs CL).

It's definitely an exaggeration to suggest that UCI Lisp has some basic
underlying set of primitives that CL doesn't.

>>Finally, a semi-valid point.  There has been some effort to define various
>>subsets, but no consensus has emerged.  Any time someone defines a subset,
>>everybody whose favorite feature has been left out will rant and rave and
>>insist that it be included.  By the time you're done, you've got the whole
>>language again (spoken from personal experience!).
>
>The point isn't the lack of a subset, it's the lack of a starting set.  RLs 
>started with a small basic set of types and functions.  Even though they
>grew to tremendous size, the growth was mostly by adding new functions.

Interlisp grew by adding magic flags to every property list in sight.

>CL starts out with a tremendous base language, attempting to have everything
>in it.  The user pays the price for this...

Tremendousness is in the eye of the beholder.  A lot of people want to
make Common Lisp even bigger.  The other aspect (which I mentioned originally)
is that one wants to standardize on user functions, rather than end up in a
situation where each site has massive and mutually incompatible libraries,
which defeats portability.  For instance, the UCI Lisp sort function
lexorder is quite different from PSL's gsort.  UCI Lisp's set intersection
function takes any number of arguments, while PSL's only allows 2.
The user pays the price, in porting effort.

>>As for the non-use of Lisp definitions, most such definitions are either
>>simple and wrong, or complicated and right.
>
>That is almost slanderous; do you really want to stand there and say that
>about all the other LISP manuals around?  I find it a lot easier to 
>understand a well defined function than prose.
>I always find that I get scr**ed by the sentence I didn't read.  Good
>readers tend to skim.

I was exaggerating perhaps, but it does happen.  Definitions in manuals
are "wrong" if they differ from the source code, which might have error
checks or other argument processing.  For instance, the UCI Lisp manual
has a definition of EQUAL that doesn't say anything about what happens
when comparing arrays, although I might guess that it returns nil.
Without a copy of source code, I can't really be sure.
Sure, you can explain member and some other list
functions by definition, but that's about it.  For instance, (push x place)
sort of expands into (setf place (cons x place)), except that place will
get evaluated twice, so the real macro has to be trickier.  UCI Lisp
avoids this by only allowing variables as places to push things, which
is a nuisance.  I also notice that the UCI manual doesn't have definitions
for any of the array functions, or string functions, or explode, or
many others.
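
To make the point concrete (a naive sketch, not the real PUSH, which CLtL
specifies in terms of the SETF machinery so that the subforms of PLACE are
evaluated only once):

(defmacro naive-push (x place)
  `(setf ,place (cons ,x ,place)))

;; Fine for a plain variable:
;;   (naive-push 1 stack)  ==  (setf stack (cons 1 stack))
;; But with a place that has side effects, the subforms run twice:
;;   (naive-push 1 (aref v (incf i)))
;; increments I twice and may read one slot while storing into another --
;; exactly the trickiness the real macro has to avoid.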

>Unfortunately this isn't written in the proper style of a standard, and I
>don't think I should have to read 5 pages to find what should be clearly 
>and simply stated under the function definition.
>Take a look at IEEE and ANSI standards to see what a "real" standard should
>look like.  If I look up SETF, I should be able to
>find out what it does, not have to read back to the chapter preface.

SETF contains a fine description of what it does.  It is clearly unreasonable
to expect that the description of SETF should also include documentation
on SET, RPLACA, and everything else that it expands into, or tutorial
material on the handling of symbols and their values.

A "real" standard (there's that word "real" again!), for instance my copy
of the Ada standard, has *hundreds* of cross references.  You can bet that
when Common Lisp becomes an ANSI standard, its document will be just the
same.

>>>Then try explaining why SET only affects dynamic bindings (a most glaring
>>>error, in my opinion).
>
>>So how in the world are you going to get this to work in a compiler that
>>compiles lexical references into positions on stack frames?
>
>That's the implementor's problem <grin>. It can be done (but probably
>isn't worth it).   See below for how this should have been handled.

Presumably UCI Lisp does it wrong too, since the manual notes (p. 105) that
"In compiled functions, SET can be used only on globally bound and
SPECIAL variables".  Should I assume that this gross difference in
the behavior of compiled and interpreted code is another "optimization"?

>>Those folks that managed to pass CS 101 shouldn't have any problem
> [with different languages]
>
>Passed CS 101 at what school?  What year?  Using what implementation of LISP?

We try to teach our students enduring principles while encouraging them
to be flexible in the minutiae of languages and programs.

>If you spend 10 years using SETQ thinking it's the same as (SET (QUOTE)...
>or MEMBER is different from MEMQ, and suddenly it all changes, you're
>gonna be damned unhappy.

That's life in Lisp land.  Personally, I welcomed most of the changes in
Common Lisp, since they were considerable improvements over the bizarrities
of other dialects, which I had been annoyed at for a long time.

>>MEMQ is a stupid name.
>
>So is SHEBS, (the best I can do for an ad hominem argument :-) .

Maybe you should put a SHEBS function in UCI Lisp then - it takes any
amount of arguments and returns a random flame!

>But it's obviously distinct from MEMBER, and has a great deal of "historical"
>weight behind it...

"Historical weight" - spare me!  This isn't Fortran or Cobol, whose
definitions haven't changed significantly in almost the entire history of
computer science!

>Fine.  I don't agree, but DON'T CALL IT MEMBER!!  Call it something else and
>if you want to add funky syntax and make it a special form, be my guest.

Ah, but if we call it DIFFERENT-MEMBER-WITH-JJ-APPROVED-NAME, then many
people will complain because the name is too long, or because it's different
from MEMBER, or any number of other reasons.

>But don't kill my working code that goes back many years!!!!

Nobody's forcing you to use Common Lisp.  You're perfectly free to
fall into the backwaters of computing.

>And don't
>invalidate all of the text books, articles, etc that have already been
>written and used for many years!

Too late, the 1959 LISP Programmers Manual is already invalidated.

>Let's face it; design by committee is a polite term.  
>The kitchen sink approach is probably more apt.  As you and I point out,
>one of the key means of getting agreement was including everybody's favorite
>feature!  This ain't "design"...

Methinks you're getting rather inconsistent.  On the one hand, you accuse
CL of being specified by including everybody's favorite feature (presumably
from pre-existing Lisps);  on the other hand, you say that it's a new
language that is incompatible with Lisp.  Both of these assertions can
be true only if the pre-existing Lisps were fundamentally incompatible.
But you've already claimed that they are all "Real Lisp" and basically
the same!

>There are a lot of good things in CL, but it's a mammoth compromise and you
>and I both know it.

Suppose the Common Lisp committee had all been dropping acid and decided
UCI Lisp was "the Ultimate".  Non-UCI-Lispers would have promptly ignored
the committee and continued to go their own ways.  It is interesting to
note that the Lisp community has been converging faster on Common Lisp
than the DoD types have been standardizing on Ada, and there has been no
official dictum that Common Lisp will be used for projects.  There is a
lot to be said for compromise when the circumstances demand it.

>There are tons of problems

UCI Lisp (and Franz and PSL and Interlisp) have megatons of problems then.

>>Perhaps you should try a non-broken Common Lisp then...
>>Not in clever implementations...
>
>Where are these  clever, non-broken Common LISPs?

PCLS isn't bad, although it's a subset.  But then it's had only about
one person-year put into it.  Try VAXLisp or Lucid Common Lisp (which
has a hot compiler) or HP Common Lisp or Symbolics Common Lisp or
Kyoto Common Lisp.

>the results are forcing many firms to
>recreate LISP in C to get decent performance for their ES shells.

I don't know which companies you might be referring to, but the ES
shells I've had occasion to examine have such brain-damaged algorithms
no amount of translating to various languages is going to help them.
Recall the usual shibboleths about benchmarking and performance analysis...

>The only "non-broken" versions I am aware of are re-written SPICE!

PCLS has a couple functions stolen from Spice, but it's mostly new code.
Symbolics Common Lisp is all their own.  HP is careful to emphasize that
their Common Lisp is not based on Spice, but that's not necessarily an
advantage.  Spice is good code.  You could do a lot worse, especially if
you don't pay attention to the specification.

>It's hard to believe that MIT can't come up with a version (or if they have,
>they haven't notified me of  the update; last version I have is 0.286).

MIT isn't some sort of magical place whence optimal correct code appears
every so often.  There aren't many people there willing to build their
own Lisp anymore; they either go with a commercial version or hack Scheme
instead (a lot of Common Lisp features derive from Scheme, by the way).

>You may not know who I am; that doesn't make me an illiterate idiot!

I don't *care* who you are; I just go by your statements.

>I've probably been around a LOT longer than you have,

Probably true - certainly I can't get into an early 70s Lisp frame
of mind without working at it.

>and probably have a much wider range of experience,

There used to be people with a great deal of experience designing oared
galleys, but they weren't very helpful with modern programming languages.

>we have more LISP designs than you'll ever see

So where are all these great Lisp designs eh?  Are any of them implemented?
Why don't you subject them to public scrutiny?

>-JJ, CONSART Systems Inc.

Probably everybody is getting tired of this.  There is a particular topic
that I would be glad to discuss further, which is the rationale for the
various design decisions that went into Common Lisp.  Many people have asked
about various features good or bad, and when I've gone to look into my
historical material, there have been some interesting discoveries.  For
instance, did you know that an early draft (the "Swiss Cheese Edition")
proposed no less than 578 functions to operate on sequences?  It was some
time before it occurred to anyone that keywords might reduce the number
of functions.  So far, I've been able to find some pretty solid reasons
even for some of the most bizarre Common Lisp features.  So if anybody
has questions, post or mail me, and I'll try to find out the real story
and explain it.

							stan shebs

shebs@utah-cs.UUCP (Stanley Shebs) (06/30/86)

In article <830@bu-cs.UUCP> bzs@bu-cs.UUCP (Barry Shein) writes:

>Try writing a (savework) function in CL which saves off everything
>you typed in (functions, variables etc) in a re-readable form, it
>can be done I suppose, but just try a general attack (don't forget
>packages...)

I'm not 100% clear on why DRIBBLE is not the right thing, but I don't
think that Franz or PSL have any magical features that make it more
possible to write such a function...

>Where is the user environment anyhow? It's not there, every vendor
>gets to make it up and in so doing will add a zillion (+-3) functions.
>Will *this* be part of CL? No. Will programs 'accidently' use these
>vendor supplied functions to make things useable? Yes.
>Will your code run on other CLs? No.

There is some debate about that very point, and the consensus seems
to be that the LISP package should be pure and not contain anything
not already defined in the standard.  So if programmers stick to using
the lisp package they'll be OK for porting.  Alas, some implementations
still export all kinds of crud out of the LISP package.

Consider the alternative, which would be to have a language standard
that did not allow any extensions.  Imagine the howls of outrage from
Symbolics when the user environment is required to be a read-eval-print
loop.  Imagine the howls of outrage from Unix people if the standard
is changed to require at least 4 extra bits on each character, and
keyboard shift keys to go with them.  Imagine howls of outrage from
everybody if the implementation had to be designed such that it was
impossible to redefine builtin functions (improves portability, right?).

>Other than lexical scoping and macros (which I hate except in very
>few situations) hopefully this can all be fixed by a CL/2 which
>actually addresses the issues of -using- the language. Right now
>it's obviously a set of lowest-common-denominators among a few vendors,
>designed by a committee obviously and skirting almost all the interesting
>issues.

Well, they strayed into attempts to define fancy character sets and
file system interfaces, and the results are pretty disastrous.  Some
things are better left unspecified by a standard...

If you have a proposal for what the user environment should look like,
by all means let's hear it!  Remember that you have to accommodate fancy
interfaces on Lisp machines and dumb terminals on IBMs and everything
else in between.

>This could be the beginning of a long dark ages for LISP.

Hmmm, I thought it was the *end* of the Dark Ages! :-)

>	-Barry Shein, Boston University

bzs@bu-cs.UUCP (Barry Shein) (07/01/86)

From: shebs@utah-cs.UUCP (Stanley Shebs)
>If you have a proposal for what the user environment should look like,
>by all means let's hear it!  Remember that you have to accommodate fancy
>interfaces on Lisp machines and dumb terminals on IBMs and everything
>else in between.

[first off, this is a general statement, not a flame or anything]

Hmm, maybe I'm more disturbed by this statement than any other so
far, not that it isn't obviously true, it is.

Hey, I'll live with CL, as I've said before, give me a sharp knife
and a length of rope...(and I'll probably hang myself.) Having used
Lisp for almost 10 years now and basically loving the language I
was disturbed by a number of things I saw in the Steele book. But,
I'm open minded, maybe there are things I just don't "see" about
the exigencies. I still hate macros. Doubt you'll shake me of that.

Back to the above, I believe you, I guess I'm just disappointed that
the "standard" gave in to this kind of thing, I know I know, pragmatism,
but since when have "us lispers" ever yielded to that except when we
had to (single case lisps?)

I would have thought that the lisp machines would have been a strong
indication that to make lisp a raving success YOU MUST HAVE AN
ENVIRONMENT, well, maybe not MUST, but it does seem to be where the
action has been since around 1980 (or earlier if you count PARC.)

When are we going to screw up the courage to tell them to throw
away their damn trashy hardware, this field is too young to settle
in like that. I certainly remember the days when everything had
to run on card images to be "useful", then paper terminals (hey,
everyone doesn't *HAVE* a CRT, will you CS types get out of the
clouds!), I guess we just have to decide when the next phase has
been reached.

Gee, I sure hope the Multi-Media-Mail folks aren't saying their
stuff is useless unless it works on a 3278.

Maybe we *do* build our own prisons.

>Imagine the howls of outrage from Unix people if the standard
>is changed to require at least 4 extra bits on each character, and
>keyboard shift keys to go with them.

I'm a "unix people" I guess, I wouldn't howl, I think it's the
only general purpose OS that even *might* adapt given the challenge.
Most of the others are still nervous about upper/lower case character
sets and the dreaded COLUMN 72!

As a matter of fact, the day they howl is the day I stop being
a "unix people". Maybe you're right, maybe the day is coming.
UNIX "solved" device, process and file system abstraction, maybe
this next gulp will choke it? User abstraction (or is that distraction.)

The twelve-piece-suiters are carpetbagging on our brains.

It sure looks like the theme of the 80's is "Standards vs Innovation",
not just computers either, life imitates art ya know.

	-Barry Shein, Boston University

jjacobs@well.UUCP (Jeffrey Jacobs) (07/02/86)

In <3837@utah-cs.UUCP>, Stan Shebs writes:

>I suppose at this point it's appropriate for me to put on my know-nothing
>hat and proceed to comment on the design (or lack thereof) of UCI Lisp.
>There's a manual right here... 

and goes on to beat on UCI LISP.

I don't know why he bothered; nobody suggested it should be a standard, 
or was without problems.

With the necessary changes, it might be a good model for a subset of CL
(i.e. range and number of functions, etc).  But we already know what
happens when a subset is attempted :-)

>Are you suggesting the language standard
>for the interchange of programs should read like the UCI manual and say that
>577777Q represents zero?  I don't think that non-DEC-20 people would
>like that part of the specification very much!

No, I'm not suggesting any such thing.  The manual(s) are a description
of the language implementation, not a "specification".  (And it was
originally written for a DEC-10).

I do think that _parts_ of the manual provide a good example of how
things should be written; much of this is actually from the Stanford 1.6
manual.  (I also wish that Meehan had done some _real_ editing on the book;
there are still typos from the original 1973 Tech Report)!

>>But you can get one heck of a better development environment running 
>>interpreted...
>
>Sure - I use the interpreter too.  But WHO CARES about the *performance*
>of the interpreter!!!

I care about the performance of the interpreter; development time costs
$ (in the real world).  A bad interpreter seriously impacts productivity.
The longer I can work in the interpreter, the more I can produce.

Let's also not forget that (APPLY foo args), where foo is an arbitrary
structure, results in interpretation, not compilation.

>Yeah, like improve portability, at least when programmers adhere to
>the definition of the language.  

It will be interesting to see just how "portable" things will really be.
C is nowhere near as portable as many people claim; I suspect that due
to the size and complexity of CL that we will see the same problem,
(particularly with so many implementations being "broken").

>Somehow I doubt a UCI Lisp program using the ";" or ";;" functions is going to
>work very well in Maclisp.  Square brackets in UCI Lisp and Franz won't
>go over real well in PSL, where they denote vectors.  UCI Lisp and PSL are
>the only dialects to define functions with a macro DE, while Maclisp and
>Franz use defun, and Interlispers do it in several different ways.

Is that UCI, Franz or INTER-LISP's fault?  They all came before PSL, so
it's really PSL's fault.

Maclisp used to use DE (and I'm sure it's still there).  Remember, the
basic order of creation was MAC->Stanford 1.6->UCI.

>CL is just as compatible with older dialects as the older dialects are with
>each other!  But once you convert to the standard, you don't have to do it
>any more...  It's very misleading to suggest that there isn't a lot of pain
>and agony to translate one dialect of "Real Lisp" to another.  Tim Finin
>wrote his Franzlator to convert Interlisp code to Franz, and it had hundreds
>of conversion rules.

Translating between Interlisp and MacLisp derived versions was ALWAYS a
pain; between MacLisp types it was much less so.  (I did CNNVR in about
a week).

There used to be a program called TRANSOR that worked pretty well; did it
get lost?  Or does everybody feel the need to start from scratch...

>>The main reasons I suggest dynamic is preferable in LISP are a) historical, 
>>b) LISP is not block structured; the separation of declaring a variable 
>>SPECIAL (DEFVAR being recommended, p 68) from an actual function definition 
>>makes it very difficult to debug a function "locally", 
>>c) the excessive need to provide compiler declarations makes for some pretty 
>>ugly code.
>
>Huh?  Declaration for what?  Local variables?

No, declarations of SPECIALs.

>>Well written LISP code should be almost completely independent of lexical or
>>dynamic scoping considerations.  A free variable is obviously special; the
>>only real problem comes in when a variable is bound.
>
>(let ((num 45))
>  (mapcar #'(lambda (x) (+ x num)) '(3 4 5)))
>
>I would prefer num to be lexical and not dynamic, even though it is free
>within the inner lambda.  It's also not desirable for typos to suddenly
>materialize as specials - it would be a tough debug to find.

The example you give is meaningless.  The results are identical for
either case.

*PLEASE NOTE* that my gripe with CL is that it allows BOTH dynamic
and lexical binding.

(DEFUN FOO (X Y Z)...

can get you in even WORSE trouble if X, Y or Z has been DEF'ed previously.

Dynamic is my *personal* preference; I can live with lexical without much
heartburn.  Allowing both is crazy.

CL should have come up with a different mechanism for dynamic binding
(maybe DLAMBDA, DLET and DLET*).
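
Something of the sort can even be layered on top of what CL already has (a
rough sketch; DLET is the hypothetical form being proposed, expanded into
LET plus SPECIAL declarations):

(defmacro dlet (bindings &body body)
  ;; Dynamically bind every variable in BINDINGS; plain LET stays lexical.
  `(let ,bindings
     (declare (special ,@(mapcar #'(lambda (b) (if (consp b) (car b) b))
                                 bindings)))
     ,@body))

;; (dlet ((debug-level 3)) (run-pass)) rebinds DEBUG-LEVEL dynamically for
;; the extent of RUN-PASS (whose own references to DEBUG-LEVEL must of
;; course also be special), without relying on the *name* convention.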

>>>>to allow both dynamic and lexical makes the performance even worse.
>>
>>>This is totally wrong.
>>
>>Say WHAT?  It certainly is TRUE in an interpreter; it takes longer to look up
>>a lexical variable than a dynamic variable, and it takes even longer when you
>>have to determine whether the lookup should be lexical or dynamic.  Add a
>>little more time to check if it's a CONSTANT or DEFVAR...
>
>Ah, we're talking about the interpreter again.  I still don't understand why
>anyone would think interpreter performance would matter - the overhead
>is so tremendous already that dynamic/lexical lookups aren't necessarily
>significant.

It is VERY significant.  Variable reference is certainly the most frequent
operation in LISP (CL or RL).

HISTORICAL note: it is much easier to build a dynamically scoped compiler.
The access time to a special cell is equivalent to a stack based
variable.  A version of UCI LISP at Rutgers did just that; the original
release of UCI would have had this, but it fell through the cracks.

The main reasons for the lexical scoping in the compiler was
space and speed; dynamic binding required an extra PUSH and space on
the stack.  Memory was a much more critical resource in those days.
(I have already pointed out that it was an obvious optimization based
on observed coding practices).

I assume that nobody would complain if the compilers had also been
dynamic, resulting in identical semantics :-)

>I don't use Lisp machines, I expect my (compiled) Common Lisp programs to
>go faster than equivalent C programs, and I beat on the compiler if they
>don't go fast enough.

That's great, but not everybody in the world is going to have a compiler
beater handy.  (And I would certainly like to see your system).

>Tremendousness is in the eye of the beholder.  A lot of people want to
>make Common Lisp even bigger.  

No argument here.  LISP development is a disease akin to drug addiction
or alcoholism; once you start, you can't stop!  One of my gripes with CL
is that there are too many ways to do the same thing.

>>>As for the non-use of Lisp definitions, most such definitions are either
>>>simple and wrong, or complicated and right.

>>I always find that I get scr**ed by the sentence I didn't read.  Good
>>readers tend to skim.
>
>I was exaggerating perhaps, but it does happen.  Definitions in manuals
>are "wrong" if they differ from the source code, which might have error
>checks or other argument processing.  For instance, the UCI Lisp manual
>has a definition of EQUAL that doesn't say anything about what happens
>when comparing arrays, although I might guess that it returns nil.
>Without a copy of source code, I can't really be sure.

If you read it carefully, you will find that an ARRAY is represented as a
SUBR; EQUAL would simply compare those values.

They aren't "wrong" as long as the result is specified by the given
definition.
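
The sort of definition at issue is the classic one-liner (a sketch,
written here as SIMPLE-EQUAL so as not to clobber the real thing):

(defun simple-equal (x y)
  (or (eq x y)
      (and (consp x) (consp y)
           (simple-equal (car x) (car y))
           (simple-equal (cdr x) (cdr y)))))

It is certainly "simple", and it says nothing at all about arrays; whether
that makes it "wrong" depends on whether the manual claims it is the whole
story.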

Steele's book could be MUCH better...

>Maybe you should put a SHEBS function in UCI Lisp then - it takes any
>amount of arguments and returns a random flame!

"Random" is the key word :-).

>Too late, the 1959 LISP Programmers Manual is already invalidated.

So are about 100 or so other books.

>Suppose the Common Lisp committee had all been dropping acid and decided
>UCI Lisp was "the Ultimate".  Non-UCI-Lispers would have promptly ignored
>the committee and continued to go their own ways.  It is interesting to
>note that the Lisp community has been converging faster on Common Lisp
>than the DoD types have been standardizing on Ada, and there has been no
>official dictum that Common Lisp will be used for projects.  There is a
>lot to be said for compromise when the circumstances demand it.

Aha! So the real point is that ANY standard is good, not that CL is a
good standard!!!  My point is that CL is not a good standard...
(However, I have been known to compromise when the circumstances demand it).

>>Where are these  clever, non-broken Common LISPs?
>
>PCLS isn't bad, although it's a subset.  But then it's had only about
>one person-year put into it.  Try VAXLisp or Lucid Common Lisp (which
>has a hot compiler) or HP Common Lisp or Symbolics Common Lisp or
>Kyoto Common Lisp.

Lots of subsets around, aren't there <grin>?  Last time I checked,
neither HP nor Symbolics were "Steele complete"; there were still
things either not meeting the standard or still missing.  I haven't
had a chance to try the other 2.  It is still necessary to stay
away from some of the more subtle aspects of the specification
(in fact one should probably stay away from them anyway).

>>the results are forcing many firms to
>>recreate LISP in C to get decent performance for their ES shells.
>
>I don't know which companies you might be referring to, but the ES
>shells I've had occasion to examine have such brain-damaged algorithms
>no amount of translating to various languages is going to help them.
>Recall the usual shibboleths about benchmarking and performance analysis...

Inference, Carnegie Group and Teknowledge to start.
KES II and NEXPERT are both written in C.  I was apparently wrong about
Intellicorp converting KEE to C (although from what I hear they should).

>>The only "non-broken" versions I am aware of are re-written SPICE!
>
>PCLS has a couple functions stolen from Spice, but it's mostly new code.
>Symbolics Common Lisp is all their own.  HP is careful to emphasize that
>their Common Lisp is not based on Spice, but that's not necessarily an
>advantage.  Spice is good code.  You could do a lot worse, especially if
>you don't pay attention to the specification.

Do you mean the Steele specification?  If so, and they don't meet it,
aren't they "broken", i.e. non-portable and non-CL?

>>we have more LISP designs than you'll ever see
>
>So where are all these great Lisp designs eh?  Are any of them implemented?
>Why don't you subject them to public scrutiny?

Like I said, Lisp development is a disease; once you get it you can't
get rid of it.  However, these designs are proprietary, and wouldn't
be of much interest to you anyway.  Some of the work will be published,
but publication isn't a very high priority (remember, I'm
in industry, not academia).  They are the result of applying years
of software engineering experience rather than any great theoretical
advance.  Mostly practical solutions and a couple of new "laws" on
garbage collection.  

-Jeffrey M. Jacobs, CONSART Systems Inc., Manhattan Beach, CA
CIS:[75076,2603]
BIX:jeffjacobs

shebs@utah-orion.UUCP (Stanley Shebs) (07/02/86)

In article <1372@well.UUCP> jjacobs@well.UUCP (Jeffrey Jacobs) writes:

>>Ah, we're talking about the interpreter again.  I still don't understand why
>>anyone would think interpreter performance would matter - the overhead
>>is so tremendous already that dynamic/lexical lookups aren't necessarily
>>significant.
>
>It is VERY significant.  Variable reference is certainly the most frequent
>operation in LISP (CL or RL).

The most frequent operation in the interpreter is to go through the
dispatch loop deciding whether to do variable lookups or macro expansion
or function application or hook functions or whatever other feature
the implementors decided was handy.  Variable lookup may be the most
common case, but even lexical lookup is likely to be cheaper than
function application.  I don't recall seeing any thorough studies of
the real costs in an interpreter, although when one gets a 20-to-1
or better speedup using a compiler, it's easy to see why some systems
rely on an incremental compiler instead!
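
In other words, before the interpreter ever touches a variable it has
already paid for something like the following dispatch (a sketch only;
every helper named here - LOOKUP-VARIABLE, SPECIAL-FORM-P,
EVAL-SPECIAL-FORM, MACRO-P, EXPAND-MACRO, APPLY-FUNCTION - is invented
for illustration):

(defun toy-eval (form env)
  (cond ((symbolp form) (lookup-variable form env))  ; variable reference
        ((atom form) form)                           ; self-evaluating object
        ((special-form-p (car form)) (eval-special-form form env))
        ((macro-p (car form)) (toy-eval (expand-macro form env) env))
        (t (apply-function (toy-eval (car form) env)
                           (mapcar #'(lambda (arg) (toy-eval arg env))
                                   (cdr form))))))

The dispatching itself, plus the consing of argument lists, tends to swamp
the difference between one variable-lookup strategy and another.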

>That's great, but not everybody in the world is going to have a compiler
>beater handy.  (And I would certainly like to see your system).

The nice thing about compiler beating is that it only needs to be done
once, and the optimizations are available forever after.  If you want
a copy of PCLS, send your USnail address to cruse@utah-20 to get the forms.
We also have a paper on PCLS that was unfortunately not accepted for the
Lisp conference - it shows the severe hacks our compiler does to transform
Common Lisp into efficient PSL.

>>Spice is good code.  You could do a lot worse, especially if
>>you don't pay attention to the specification.
>
>Do you mean the Steele specification?  If so, and they don't meet it,
>aren't they "broken", i.e. non-portable and non-CL?

Sorry to be confusing.  What I meant was that there are a number of
subtle aspects of the language that an unwary implementor can get caught
by.  Such things include error and default handling in the package system,
argument list binding and declarations, the user hooks for SETF, and
many other things.  These things are complex for the implementor so that
the language user has an easier time with the language - there's no
silliness with installing magic properties on symbols, etc.  When we left
out some of these details in versions of PCLS, you can bet we heard from
the users!  Spice Lisp is the model implementation.  There are a few boners
here and there, but it goes to a lot of trouble to get details right.
Every Common Lisp implementor should study Spice Lisp code, whether or
not s/he actually uses any of it directly.  PCLS on the other hand is
quick and dirty, and invalid programs will get the most mystifying of
error messages, since we go for speed!
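
For instance, the SETF hooks let a user extend assignment without ever
touching a property list (an illustration only; MIDDLE is an invented
accessor):

(defun middle (x)
  (second x))

(defsetf middle (x) (new-value)
  `(setf (second ,x) ,new-value))

(let ((l (list 'a 'b 'c)))
  (setf (middle l) 'q)
  l)                              ; => (A Q C)

Making that work smoothly - and signal sensible errors when it can't - is
where the implementor's time goes.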

							stan

andy@Shasta.STANFORD.EDU (Andy Freeman) (07/03/86)

An (apply foo <args>) form should be interpreted iff the function it
appears in is interpreted, or it is typed at top level (or appears at top
level in a file that is loaded).  The function being called will
be interpreted if it hasn't been compiled, but what's strange about
that?  If the function was compiled, then compiled code will be run.
Any implementation that can't handle this is wrong.

It is possible to do lexical variable reference in interpreted code
as fast as deep-binding lookup of dynamic variables.  Deep-binding 
lookup in compiled code isn't much faster than it is in interpreted
code.  (The exception to this is when it can be shown equivalent to
lexical scope.  Unfortunately, broken compilers don't bother to
prove equivalence, they just change the semantics.)  Lexical variable
reference is much faster in compiled code than in interpreted code.

For those of you who are saying "but shallow-binding is much faster,
why would anyone use deep-binding", think about multi-processors, or
even multi-tasking.  Now name a lisp machine that uses shallow-binding.
Shallow-binding only works in a very restricted environment.  No
one wants to work in that kind of environment anymore.
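
For readers who haven't met the terms: under deep binding a variable
reference searches the current stack of (symbol . value) pairs, while
under shallow binding the current value sits in the symbol's value cell
and every rebinding must save and restore it.  A sketch (the function
names are invented):

(defun deep-lookup (symbol bindings)
  (let ((pair (assoc symbol bindings)))   ; BINDINGS is the binding stack
    (if pair (cdr pair) (symbol-value symbol))))

(defun shallow-lookup (symbol)
  (symbol-value symbol))                  ; a single memory reference

The single memory reference is what makes shallow binding fast, and the
saving and restoring of value cells is what makes it painful the moment
two processes want different bindings of the same variable.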

-andy

darrelj@sdcrdcf.UUCP (Darrel VanBuer) (07/03/86)

Summary: Transor still exists

In article <1372@well.UUCP> jjacobs@well.UUCP (Jeffrey Jacobs) writes:
>There used to be a program called TRANSOR that worked pretty well; did it
>get lost?  Or does everybody feel the need to start from scratch...
>
TRANSOR still exists in the Interlisp world, though it almost disappeared
because no one had wanted to see why it wouldn't compile in VAX or D versions.
I think there are almost no users.
The worst problem now is that the majority of Interlisp programmers have
only used Interlisp D, so have almost no familiarity with the old teletype
structure editor.  Transor translations are specified in terms of those
editor commands.  Even Jim Goodwin, the author, would like to see Transor
disappear, but the problem is that it's not easy to create a tool with the
power and flexibility provided by Transor + editor.
-- 
Darrel J. Van Buer, PhD
System Development Corp.
2525 Colorado Ave
Santa Monica, CA 90406
(213)820-4111 x5449
...{allegra,burdvax,cbosgd,hplabs,ihnp4,orstcs,sdcsvax,ucla-cs,akgua}
                                                            !sdcrdcf!darrelj
VANBUER@USC-ECL.ARPA