[comp.lang.lisp] What's the value of lexical scoping?

mkent@dewey.soe.berkeley.edu (Marty Kent) (06/03/88)

I've been wondering lately why it is that "modern" lisps like Common Lisp
and Scheme are committed to lexical scoping.  To me, the only *obvious*
effect of lexical scoping is that it makes it very much more difficult to
write reasonable debugging tools (so the system writers don't bother with
it). Actually I have in mind the lisps for the Mac II, which are Allegro
Common Lisp and MacScheme.  (Since it lacked a compiler last I heard, I
haven't taken XLisp seriously. Perhaps there are other "serious" lisp
systems available for the Mac or Mac II; if there are any, I'd love to
hear about them...)

(To return to my main stream...) With dynamic scoping, you can actually
implement a break loop by just reading, eval'ing and printing.  With
Common Lisp's way of evaluating calls to EVAL in a null lexical
environment, it seems to me that in order to set up a decent break package
one has to know about the implementation of the runtime stack, the
structure of stack frames etc. (NOTE: by "a decent break package" I mean
one in which you can *at the very least* examine the values of locals on
the stack at break-time.) In fact, with Allegro Common Lisp the situation
is even worse, because the compiler doesn't save the names of locals in
the stack frames, which makes it pretty much impossible to scan at runtime
to resolve a name-based reference. 
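
For concreteness, here is a minimal sketch of such a break loop (the
:CONTINUE convention is invented for the example):

(defun break-loop ()
  ;; Under dynamic scoping, the broken function's locals are still
  ;; bound when EVAL runs, so forms typed at the prompt can see them.
  ;; Common Lisp's EVAL uses a null lexical environment, so this loop
  ;; would see only special (dynamic) variables.
  (loop
    (format t "~&break> ")
    (let ((form (read)))
      (when (eq form :continue)
        (return))
      (print (eval form)))))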

It seems to me the Common Lisp specification missed out by not mandating
certain runtime stack manipulation primitives, a la (for instance)
Interlisp.  

I realize that discarding variable names from compiled code makes for
faster and smaller object modules, but it seems to me this kind of
"optimization" should be dependent on something like the setting of
"speed" in an optimize declaration.

Well, I don't really mean to just sit here and bitch; what I'm really
hoping is that someone will tell me either:
1) actually it's easy to set up a decent runtime debugger using stock
Common Lisp functions, you simply have to ...
or
2) while it's true that Common Lisp's scoping makes it difficult to write
debuggers, lexical scoping is still a good trade-off because it buys
you...

I'd be glad to hear about either of these alternatives, or some new way of
looking at the situation...


Marty Kent  	Sixth Sense Research and Development
		415/642 0288	415/548 9129
		MKent@dewey.soe.berkeley.edu
		{uwvax, decvax, inhp4}!ucbvax!mkent%dewey.soe.berkeley.edu
Kent's heuristic: Look for it first where you'd most like to find it.

psa@otter.hple.hp.com (Patrick Arnold) (06/03/88)

I think there are two issues at stake here: decent debuggers and binding
rules.

The first issue should be addressed by the language implementors. There is
no reason why compiled code should not retain enough information to produce
comparable debugging information. From what I remember of Scheme, it uses
lexical scoping and has a very good debugger (though it sometimes has no
information because of continuations, but that's a different story).

The biggest pain with dynamic binding is that it suffers from the
downward (or upward) funarg problem.  This refers to the potential for a
procedure (function?)  to capture variables from the environment in
which it is being used.  This may not always be desirable because it
violates the "black box" notion of a procedure, namely that a procedure
behaves the same in any context. Lexical binding does not have this
problem.

The justification for dynamic binding is that it makes some forms of
abstraction easier to handle (this is important for programming in the
large).  Suppose we had two procedures which share a common
sub-procedure.  Further suppose we want to use implicit parameter
passing (i.e., parameters that are not passed explicitly).  Then in a
statically scoped language you would be forced to repeat the definition
of the shared procedure in order to capture the implicit parameters,
whereas in a dynamically bound language you could (carefully) share a
single definition amongst many procedures, as sketched below.
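
A rough sketch of this in a Lisp with dynamic binding (the names are
invented for illustration): the shared sub-procedure picks up *indent*
implicitly from whichever caller rebinds it.

(defvar *indent* 0)        ; the implicit parameter

(defun emit (text)         ; the shared sub-procedure
  (format t "~&~vT~A" *indent* text))

(defun emit-nested (text)
  (let ((*indent* (+ *indent* 4)))  ; dynamically rebound for callees
    (emit text)))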

So ideally a language should enable both styles of binding, together
with a set of pragmatic guidelines about how they should and shouldn't
be used in programming.  The two types of binding enable two of the most
important aspects of a structured approach to computer software, namely
abstraction and information hiding.

There is a basic (but expositional) discussion of this in Structure and
Interpretation of Computer Programs by Abelson and Sussman on pages 321 to
323.

Hope this helps.

			Patrick.

tlh@cs.purdue.EDU (Tom "Hey Man" Hausmann) (06/03/88)

In article <1350015@otter.hple.hp.com>, psa@otter.hple.hp.com (Patrick Arnold) writes:
> The first issue should be addressed by the language implementors. There is
> no reason why compiled code should not retain enough information to produce
> comparable debugging information. 

    Optimizations (e.g. code motion) can make debugging the original source
    difficult unless the debugger is a very good one and maintains a great
    deal of information about the original source.

    -Tom

cox@renoir.Berkeley.EDU (Charles A. Cox) (06/05/88)

In article <24508@ucbvax.BERKELEY.EDU> mkent@dewey.soe.berkeley.edu (Marty Kent) writes:
>  [...] In fact, with Allegro Common Lisp the situation
>is even worse, because the compiler doesn't save the names of locals in
>the stack frames, which makes it pretty much impossible to scan at runtime
>to resolve a name-based reference. 

I am more familiar with the Allegro Common Lisp that runs on Unix
machines, but I am told that with the Mac OS Allegro, beginning with
version 1.2, setting the *SAVE-DEFINITIONS* compiler flag will cause
the parameter names and values to be printed in a backtrace.  This
will aid in debugging.

In the UNIX version of Allegro Common Lisp, there is a variable called
COMP:SAVE-LOCAL-NAMES-SWITCH which is bound to a function.  When this
user redefinable function returns T, the compiler will save the
names of all the local variables.  These variables are then accessible
by name using the `:LOCAL' top level command.

Hope this helps.

	Charley Cox
	cox@renoir.Berkeley.EDU

simon@comp.lancs.ac.uk (Simon Brooke) (06/07/88)

In article <24508@ucbvax.BERKELEY.EDU> mkent@dewey.soe.berkeley.edu (Marty Kent) writes:
>I've been wondering lately why it is that "modern" lisps like Common Lisp
>and Scheme are committed to lexical scoping.  

Good! Somebody else prepared to stand up and say Common LISP is a mess. If
you share this opinion, read the end of this posting even if you skip the
middle... it is important.

[I'm just commenting here on bits from Marty's posting - serious stuff
later]
>
>I realize that discarding variable names from compiled code makes for
>faster and smaller object modules, but it seems to me this kind of
>"optimization" should be dependent on something like the setting of
>"speed" in an optimize declaration.
>
This sort of 'optimisation' is pointless anyway, now that we work in
32-bit address spaces and memory is cheap. It must, surely, always be
better to keep your local names with your code.

>Well, I don't really mean  to just sit here and bitch,  what I'm really
>hoping is that someone will tell me either:
>1) actually it's easy to set up a decent runtime debugger using stock
>Common Lisp functions, you simply have to ...

Throw away that cruddy fortran-with-brackets and buy yourself a real LISP.
I don't know if Metacomco have yet ported Cambridge LISP onto the Mac, but
they easily could, and probably would if they felt there was a demand;
this wouldn't solve your problem, as it static-binds when compiled (ugh),
but it is otherwise a nice lisp. More seriously, LeLisp has certainly been
ported onto the Mac, and - I haven't played with it - it is reported to be
a really nice LISP. I understand that the manuals are still only available
in French, though. Finally, if you (or your employer) have a wallet as
deep as the Marianas trench, there's the much-heralded micro-explorer.
That *ought* to give a decent LISP environment, but again I haven't seen
one.

>or
>2) while it's true that Common Lisp's scoping makes it difficult to write
>debuggers, lexical scoping is still a good trade-off because it buys
>you...
>
We had a long discussion about this on the uk.lisp newsgroup. I still have
much of this on file and could post it if people are interested (I can't
easily mail to the States). Advocates of lexical scoping offered a number
of extremely tricky programming examples which couldn't be done with
anything else. These were very impressive *as tricks*, but I couldn't ever
imagine using any of them in a serious programming situation. In short, I
wasn't convinced - but I should add that I didn't convince anyone else
either.
>
>
*** If you don't like Common LISP, the future is hopeful - but you should
*** do something about it now!

As you *ought* to know, an ISO working group is currently preparing a new
LISP standard, to be known as ISLISP. They hope to have this ready for the
end of 1989, so the time to influence it is *as soon as possible*.
Regrettably, this group is working from Common LISP as a basis; however,
the good news is that it appears that dynamic binding a la EuLisp will be
incorporated, and there will be no packages. The character set is being
looked after by the Japanese, which has to be good news, because it
guarantees that we will get an extended character set (how the CL
committee were ever allowed to get away with upper case only - and, for
G*d's sake, why they wanted to - is far beyond me). 

Obviously, I have my ideas about what a good LISP looks like (all right,
as a minimum it has dynamic binding, both LAMBDA and NLAMBDA forms, at
least the option of non-intrusive garbage collection; although it allows
macros, there is nothing you can't do with a function; and it does not
have packages, PROG, GO, stupid tokens in parameter lists, SETF....) -
everybody else out there has their own list. If you *care* about your
working language, the best way to make sure that this committee does not
produce another ugly camel is to identify your nearest working group
member and lobby as hard as you can. *DO IT NOW*.


** Simon Brooke *********************************************************
*  e-mail : simon@uk.ac.lancs.comp                                      * 
*  surface: Dept of Computing, University of Lancaster,  LA 1 4 YW, UK. *
*                                                                       *
*  Thought for today: isn't it time you learned the Language            * 
********************* International Superieur de Programmation? *********

jackson@esosun.UUCP (Jerry Jackson) (06/09/88)

>>Obviously, I have my ideas about what a good LISP looks like (all right,
>>as a minimum it has dynamic binding, both LAMBDA and NLAMBDA forms, at
>>least the option of non-intrusive garbage collection; although it allows
>>macros, there is nothing you can't do with a function; and it does not
>>have packages, PROG, GO, stupid tokens in parameter lists, SETF....) -
>>everybody else out there has their own list. If you *care* about your
>>working language, the best way to make sure that this committee does not
>>produce another ugly camel is to identify your nearest working group
>>member and lobby as hard as you can. *DO IT NOW*.


FLAME ON

This is really incredible.... I've heard people flame about Common Lisp
many times.. (I have even done it myself on a few occasions..), but
I've never heard anyone attack some of these features -- 

*ahem* -- First of all, CL supports dynamic binding for those cases where
it is useful (I admit they definitely exist), although dynamic binding
is quite clearly a *BUG* (the names you give to local variables should not
matter...)

NLAMBDA -- cannot be made efficient (unless you consider a run-time call
to EVAL efficient)

packages -- Ok, I agree with this one, however a case may be made for 
an environment oriented package system (requiring lexical scoping)

PROG,GO -- For people who never have to write powerful tools I would
agree that these are not necessary, but if you had ever tried to compile
a special purpose language to lisp and make it reasonably efficient, you
would appreciate the value of having things like PROG and GO as 
compilation targets

tokens in parameter lists -- Isn't it really obvious that something
like member with a few options is better than the excessive proliferation
of look-alike functions (a la memq memql memqual ...)

SETF -- I can't believe my eyes... This is one of the BEST things about
CL... I don't know what to say.  Anyone who has actually USED CL with setf
for a while knows what I'm talking about.

HAVE YOU EVER USED LISP????? (I'm quite sure you have never used CL --
no one who had could have said the things you said.)

FLAME OFF

+-----------------------------------------------------------------------------+
|   Jerry Jackson                       UUCP:  seismo!esosun!jackson          |
|   Geophysics Division, MS/22          ARPA:  esosun!jackson@seismo.css.gov  |
|   SAIC                                SOUND: (619)458-4924                  |
|   10210 Campus Point Drive                                                  |
|   San Diego, CA  92121                                                      |
+-----------------------------------------------------------------------------+

jeff@aiva.ed.ac.uk (Jeff Dalton) (06/10/88)

In article <515@dcl-csvax.comp.lancs.ac.uk> simon@comp.lancs.ac.uk (Simon Brooke) writes:
>In article <24508@ucbvax.BERKELEY.EDU> mkent@dewey.soe.berkeley.edu (Marty Kent) writes:
>>I've been wondering lately why it is that "modern" lisps like Common Lisp
>>and Scheme are committed to lexical scoping.  
>
>Good! somebody else prepared to stand up and say Common LISP is a mess. If
>you share this opinion, read the end of this posting even if you skip the
>middle... it is important.

I will respond to this, because I have talked with Simon before on
Uk.lisp, and because I am familiar with the various standardization
efforts mentioned in his message.  Nothing I say has any official
standing of course.

For one thing, it is significant that Marty Kent mentioned Scheme as
well as Common Lisp.  He did not say Common Lisp is a mess, nor did
he mention any problem with Common Lisp other than the effects of
its use of lexical scope on debugging.  I therefore think it a bit
unfair to enlist him in the anti-Common Lisp cause just yet.

This exchange is a sign of a general problem faced by the Lisp
community, namely that we are trying to standardize Lisp at a
time when our conception of the language is changing.  One aspect
of this change is the move towards lexical scoping.

In addition, there is a problem of "uneven development": some
people have gone further than others or in a different direction.
I do not mean to imply that those who have gone further are right.
Nonetheless, it may be difficult to explain "modern" Lisp without
retracing a lot of history.

It may even be that the only way to understand is to try out the
other point of view and see what it's like.

My own thinking on these matters was informed in part by the
first Scheme papers by Steele and Sussman starting in 1975,
by Stoy's book on Denotational Semantics (see especially his
comments on 1st-class functions), and by two papers in the
Conference Record of the 1980 Lisp Conference:

     Kent M. Pitman. Special Forms in Lisp. Pages 179-187.
     [An argument that macros are better than fexprs]

     Steven S. Muchnick and Uwe F. Pleban.  A Semantic
     Comparison of Lisp and Scheme.  Pages 56-64.
     [In part a reconstruction of Lisp development, as is
     Steele and Sussman's "The Art of the Interpreter".]

     [For other references, see the R*RS Scheme reports.]

Scheme-like ways of thinking were not immediately convincing, but
experience with Scheme showed that such an approach had virtues
despite seeming to impose a number of restrictions.  For one thing,
the lexical resolution of the differences between interpreter and
compiler semantics (many Lisps have partial lexical scoping in
compiled code but only dynamic scoping in interpreted) began to
seem better overall than the dynamic resolution (have compiled
code use dynamic scoping too), as did the lexical form of
functional values.

These ideas eventually became strong enough to influence Common Lisp:
  -  Common Lisp has lexical scoping with indefinite extent.
  -  Common Lisp does not support user-defined special forms.
  -  Common Lisp uses the same rules of scope and extent
     for both interpreted and compiled code.

It is important to note that for many these are all good things.
In particular, it is likely that all three will be true of any
Lisp standard, whether developed by x3j13 or ISO's WG-16.  They
are also true of the suggestions made by the EuLisp committee.
Indeed, one of the key questions now is how much further in the
Scheme direction the standard should move.

However, to others these things are all bad things and represent 
capitulation to the forces of Pascal, or something of that sort.
In addition, some, and I think Simon is among them, find Common
Lisp a confusing mixture.  They prefer both Scheme and "dynamic
Lisp" to Common Lisp, but still prefer dynamic Lisp to Scheme.
Such a position is not simply a complaint that Common Lisp has
lexical scoping; it is more complex.  If they would like a standard
that improves on Common Lisp in some respects, it may be possible
to achieve one; but I do not think the Lisp community has the
resources or the inclination to build a standard for dynamic Lisp.

This seems enough for one message; I will respond further in the
next.

Jeff Dalton,                      JANET: J.Dalton@uk.ac.ed             
AI Applications Institute,        ARPA:  J.Dalton%uk.ac.ed@nss.cs.ucl.ac.uk
Edinburgh University.             UUCP:  ...!ukc!ed.ac.uk!J.Dalton

simon@comp.lancs.ac.uk (Simon Brooke) (06/10/88)

Before I start: thanks to Jeff Dalton for his piece, and I accept his
correction. I was clearly wrong to claim that Marty Kent necessarily
shared my opinion of CL. Also, Jeff is perfectly right to suggest that I
would prefer Scheme - which appears a clean, elegant, well designed
language - to CL. But for the rest....

Things are bubbling! good. Let's look at some of the arguments that have
been advanced. Firstly, the objection to dynamic binding that's most
commonly advanced: namely, that you fall down holes when you confuse
locals with globals. As John Levine writes:

]	Every time I have used a dynamically scoped language (Lisp, APL,
]	Snobol4) I have been bitten quite painfully by strange bugs due
]	to unintended aliasing of names.

If, in a lexically scoped lisp, you refer to a global variable
thinking it's a local, or vice-versa, you'll still fall down a hole.
Lexical scoping does not obviate the need for good software engineering
practice - namely, in this case a *naming scheme*.

The Xerox community has got used to a scheme under which, among other
things, all globals are marked out with asterisks:
	*thisIsAGlobal*
and locals are not:
	thisIsALocal
Even Common Lisp people, despite the fact that they use a language which
throws away 50% of all the information in its input stream (that really
is unbelievable!), could adopt a simple convention like this. Then you
won't fall down *that* kind of hole.

Secondly, the argument that Lisp should become more like conventional
languages so that people switching to it will find it easier to learn.
Douglas Rand expressed it thus:

]	... most programmers, especially those who are transplants from 
]	'normal' languages like FORTRAN, PL/1, C, and PASCAL will expect
]	lexically scoped behaviour.

I don't buy this. The reason these people are switching to LISP is because
they are *dissatisfied* with the *expressiveness* of their current
language. They know that LISP is significantly different; they know they
are going to have to re-learn. What they don't want is to find, when
they've put in that effort, that we have castrated LISP to the extent that
it gives them no advantage over what they've left.

The argument advanced by Patrick Arnold:

]	the potential for a procedure (function) to capture variables from
]	the environment ... violates the "black box" notion of a
]	procedure 

seems to me to be the same thing in different clothes. The '"black box"
notion of a procedure' cannot come from LISP, because LISP has no
procedures. It comes, in fact, from conventional programming in the ALGOL
tradition. Part of the power and expressiveness of LISP is that we can,
when we want to, and when we know what we're doing, write functions which
are sensitive to changes in their environment. If you don't like this, you
will find that there are plenty of other *very good* languages (Pascal, 
Modula, Ada - even Scheme) which cater for your needs. Don't come and mess
up the one language which has the expressiveness to do this.

The key point I want to make is one which Patrick made admirably:

]	...dynamic binding... makes some forms of abstraction easier to
]	handle (this is important for programming in the large).

Precisely so. And it is precisely for its ability to handle abstraction
that we choose LISP as a language. If we reduce its power to do so, we
reduce its value to 'just another programming language' - and one,
furthermore, which is greedy of machine resources and doesn't integrate
well with others.

One last point, quickly. Douglas Rand says:

]	... Common Lisp preserves the ability to screw yourself for the
]	hearty adventurer types (you can always do a (declare (special ..)))
]	but saves the rest of us mere mortals from our folly.

This is, in my opinion (not, I admit, widely shared as yet) one of the
worst of the Common LISP messes. It is the nature of LISP that code is
built up incrementally. You build your system on the top of my system. Let
us say that you are a mortal and I am a hearty adventurer. How are you to
know which tokens I have declared special? Well, I *ought* to have
documented them; or you could always read the source file; or, as a last
gasp, you could always, *every single time you use a new variable* ask the
system whether I've already declared it special. But are you *really*
going to do these things? No. Mixing your binding schemes is asking for
trouble - and trouble of a particularly nasty sort.

Perhaps what is being said in all this is that what we actually need is
two standard languages: say ISO Scheme and ISO Dynamic LISP...? I do not
reject the value of standardisation. To be able to transfer programs -
and, perhaps even more important, programming experience - from one
computing environment to another will be of steadily increasing importance
as LISP becomes accepted as a tool for some types of commercial
programming.

Happy Lisping!

** Simon Brooke *********************************************************
*  e-mail : simon@uk.ac.lancs.comp                                      * 
*  surface: Dept of Computing, University of Lancaster,  LA 1 4 YW, UK. *
*                                                                       *
*  Neural Nets: "It doesn't matter if you don't know how your program   *
***************  works, so long as it's parallel" - R. O'Keefe **********

barmar@think.COM (Barry Margolin) (06/13/88)

In article <519@dcl-csvax.comp.lancs.ac.uk> simon@comp.lancs.ac.uk (Simon Brooke) writes:
>The Xerox community has got used to a scheme under which, among other
>things, all globals are marked out with asterisks:
>	*thisIsAGlobal*
>and locals are not:
>	thisIsALocal
>Even Common Lisp people, despite the fact that they use a language which
>throws away 50% of all the information in its input stream (that really
>is unbelievable!), could adopt a simple convention like this. Then you
>won't fall down *that* kind of hole.

The above is the same convention as is used by the Common Lisp
community.  All the special variables defined in CLtL use this naming
scheme.  I don't know whether the convention was developed at Xerox or
not, but it has been used at MIT since about 1981 (unfortunately, a
large part of the MIT Lisp Machine OS was written before the
convention caught on, and many of the old non-starred names still
exist in the Symbolics system).

>Secondly, the argument that Lisp should become more like conventional
>languages so that people switching to it will find it easier to learn.
>Douglas Rand expressed it thus:
>]	... most programmers, especially those who are transplants from 
>]	'normal' languages like FORTRAN, PL/1, C, and PASCAL will expect
>]	lexically scoped behaviour.
>I don't buy this. The reason these people are switching to LISP is because
>they are *dissatisfied* with the *expressiveness* of their current
>language. They know that LISP is significantly different; they know they
>are going to have to re-learn. What they don't want is to find, when
>they've put in that effort, that we have castrated LISP to the extent that
>it gives them no advantage over what they've left.

First of all, the original text said "especially", not "only", meaning
that even people who aren't switching are likely to expect
lexically-scoped behavior.  Lexical scoping simplifies understanding
of a program, because one can look at a function call and the function
definition and determine the behavior, without having to know the
entire history of the call tree.

Second, just because some feature is used in the traditional languages
does not mean that it should automatically be excluded from Lisp.
Lexical scoping is a good thing, and we should not be prejudiced just
because it was used in algebraic languages first.

>The argument advanced by Patrick Arnold:
>
>]	the potential for a procedure (function) to capture variables from
>]	the environment ... violates the "black box" notion of a
>]	procedure 
>
>seems to me to be the same thing in different clothes. The '"black box"
>notion of a procedure' cannot come from LISP, because LISP has no
>procedures. 

Huh?  I think you are using too restricted a definition of
"procedure".  In the above context, I think "procedure", "function",
and "subroutine" should all be considered synonymous.

>	     It comes, in fact, from conventional programming in the ALGOL
>tradition. Part of the power and expressiveness of LISP is that we can,
>when we want to, and when we know what we're doing, write functions which
>are sensitive to changes in their environment. If you don't like this, you
>will find that there are plenty of other *very good* languages (Pascal, 
>Modula, Ada - even Scheme) which cater for your needs. Don't come and mess
>up the one language which has the expressiveness to do this.

This distinction implies that only Lisp allows one to write functions
that are dependent upon the environment.  All the other languages
mentioned allow functions and procedures to refer to global variables.
The only unique feature of Lisp is that it does not force formal
parameter variables to be lexically-scoped local variables.  I don't
see this as a major feature, and I doubt computer science would have
been held back had Lisp required programmers to write:

(defun do-something (new-read-base)
  (let ((*read-base* new-read-base))
    ...))

or even

(defun do-something (new-read-base)
  (fluid-let ((*read-base* new-read-base))
    ...))

instead of

(defun do-something (*read-base*)
  ...)

>One last point, quickly. Douglas Rand says:
>
>]	... Common Lisp preserves the ability to screw yourself for the
>]	hearty adventurer types (you can always do a (declare (special ..)))
>]	but saves the rest of us mere mortals from our folly.
>
>This is, in my opinion (not, I admit, widely shared as yet) one of the
>worst of the Common LISP messes. It is the nature of LISP that code is
>built up incrementally. You build your system on the top of my system. Let
>us say that you are a mortal and I am a hearty adventurer. How are you to
>know which tokens I have declared special? Well, I *ought* to have
>documented them; or you could always read the source file; or, as a last
>gasp, you could always, *every single time you use a new variable* ask the
>system whether I've already declared it special. But are you *really*
>going to do these things? No. Mixing your binding schemes is asking for
>trouble - and trouble of a particularly nasty sort.

Packages, while they are not perfect, are the solution to the above
problem.  You can make sure that your variables don't collide with
mine by using a different package from me.  Yes, if you make use of
inherited packages, you run into the above problem.  One solution is
to not use inherited packages when you are not intimately familiar
with the system whose package you are inheriting from; another is to
make use of the *naming scheme* mentioned above (unfortunately, if the
provider of the system doesn't follow this convention, you lose).

>Perhaps what is being said in all this is that what we actually need is
>two standard languages: say ISO Scheme and ISO Dynamic LISP...?

A dynamic-only Lisp would be a bad idea.  We had one -- Maclisp -- and
we've abandoned it.  Most of the Common Lisp designers are former
Maclisp developers and programmers, and they consciously chose to
switch to lexical scoping by default.

Barry Margolin
Thinking Machines Corp.

barmar@think.com
uunet!think!barmar

marc@mosart.UUCP (Marc P. Rinfret) (06/14/88)

In article <515@dcl-csvax.comp.lancs.ac.uk> simon@comp.lancs.ac.uk (Simon Brooke) writes:
>In article <24508@ucbvax.BERKELEY.EDU> mkent@dewey.soe.berkeley.edu (Marty Kent) writes:
>>I've been wondering lately why it is that "modern" lisps like Common Lisp
>>and Scheme are committed to lexical scoping.  
>
>Good! somebody else prepared to stand up and say Common LISP is a mess.

I think you are jumping the gun.  Wondering about a feature of a language
and saying it "is a mess" are two different things.

>...
>Finally, if you (or your employer) have a wallet as
>deep as the Marianas trench, there's the much-heralded micro-explorer.
>That *ought* to give a decent LISP environment, but again I haven't seen
>one.
	I haven't seen one, but I understand it gives you pretty much the same
environment as the one you get on the Explorer (or the Lambda).  This is
a really good system, and yes, it does support lexical scoping.  And yes,
it also keeps the names of the local variables around, and yes, it has a
good debugger.


>>2) while it's true that Common Lisp's scoping makes it difficult to write
>>debuggers, lexical scoping is still a good trade-off because it buys
>>you...
>>
OK, tell me: how many of us will have the pleasure of WRITING a debugger?
You cannot assess the value of a language by how easy or hard it is to
write a debugger for it.  A language is good if you can easily develop
code in it; a good debugger is nice to have, but it is a feature of the
implementation.  If the system you have doesn't include one, that's bad;
go get a good one now.  If you are developing your own system, the
debugger is worth the additional effort, but then you don't have to
design it on top of the implementation -- you make it part of it.

>We had a long discussion about this on the uk.lisp newsgroup. I still have
>much of this on file and could post it if people are interested (I can't
>easily mail to the States). 

Sure post.

>Advocates of lexical scoping offered a number
>of extremely tricky programming examples which couldn't be done with
>anything else. These were very impressive *as tricks*, but I couldn't ever
>imagine using any of them in a serious programming situation. In short, I
>wasn't convinced - but I should add that I didn't convince anyone else
>either.

The question here is not which feature enables you to pull the best
tricks, but which has the language play the fewest tricks on you.  I
believe that accidental dynamic capture of identifiers is a dirty trick.
It makes for hard-to-find problems.

>...
>the good news is that it appears that dynamic binding a la EuLisp will be
>incorporated

Common Lisp as it is currently defined DOES incorporate dynamic
binding; you simply have to ask for it with (DEFVAR ...).  I think it is
nice to have the choice of both.  One of the nice things about having
lexical binding be the default is (on top of preventing accidental
shadowing) that it enables the compiler to warn you when you use an
undeclared variable -- pretty often a typo that, with full dynamic
scoping, would only be detected at runtime -- as the sketch below shows.
Unless you don't like to declare variables, in which case you may as
well use F......!
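
A made-up illustration: most compilers will flag the typo below at
compile time as a free reference to an undeclared variable, where a
fully dynamic Lisp would quietly defer the lookup to runtime.

(defun sum-list (lst)
  (let ((total 0))
    (dolist (x lst totla)   ; typo: TOTLA instead of TOTAL
      (incf total x))))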

> and there will be no packages. 

The package system of Common Lisp is something that no one likes, but
you do need something like it if you are to develop any decent-sized
system.  I don't know the kind of projects you have been involved with,
but when you work on a large one (> 25 man-years, with people coming and
going), you need something to manage your namespace.  Saying "no
packages" is not good enough; do you have any ideas how to replace it?

>Obviously, I have my ideas about what a good LISP looks like (all right,
>as a minimum it has dynamic binding, both LAMBDA and NLAMBDA forms, at
>least the option of non-intrusive garbage collection; although it allows
>macros, there is nothing you can't do with a function; and it does not
>have packages, PROG, GO, stupid tokens in parameter lists, SETF....) -

So what's your problem with CL?  It has all you want; the features you
don't like you simply don't use.  If LISP is to be more than an academic
toy or a philosophical statement it must include some "impure" features.
If you want to stay pure, keep away from these.  I have never personally
used PROG and GO, but I've seen cases where they were well used.

SETF is something really nice, and it is based on an easy-to-grasp
concept.  It reduces namespace clutter (nice when you don't like
packages!): you don't have to look around to figure out the name of the
modifier function, provided you know the accessor, as the example below
shows.  Again, this is something you may better appreciate when you're
working with large systems.
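
A trivial made-up example: one SETF form covers every accessor, so
there is no separate modifier name to remember.

(defstruct point x y)

(let ((p (make-point :x 1 :y 2)))
  (setf (point-x p) 10)     ; no need for a separate SET-POINT-X
  p)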

simon@comp.lancs.ac.uk (Simon Brooke) (06/16/88)

In article <199@esosun.UUCP> jackson@esosun.UUCP (Jerry Jackson) disagrees
with some of the things which I see as valuable in LISP. I'd like to
advance some defence of them, point by point. Firstly:

>CL supports dynamic binding for those cases where
>it is useful (I admit they definitely exist), although dynamic binding
>is quite clearly a *BUG* (the names you give to local variables should not
>matter...)
>
If you wish to gain information from your environment, then clearly, the
names of the symbols you use do matter. If you bind your locals either
in an arg list or in a let statement, then they don't matter. If you
*don't* do this, then you are using globals, which will get you into equal
trouble no matter what binding scheme you use. So this argument is simply
not tenable. I agree that we can debate (and disagree) about which binding
scheme is preferable, but it makes no sense to describe those you don't
like as bugs.

>NLAMBDA -- cannot be made efficient (unless you consider a run-time call
>to EVAL efficient)
>
No, I agree that it cannot. I use LISP for its expressiveness, not its
efficiency; and while I appreciate that generally you can do with a macro
all that you can do with an NLAMBDA, few people can read a macro of more
than moderate complexity. We use LISP to convey information, not only to a
machine but also to other people. Writing code they can't read doesn't
achieve this object.

>PROG,GO -- For people who never have to write powerful tools I would
>agree that these are not necessary, but if you had ever tried to compile
>a special purpose language to lisp and make it reasonably efficient, you
>would appreciate the value of having things like PROG and GO as 
>compilation targets
>
Whilst we still programme largely for von Neumann architectures, there is
need for an iterative construct in LISP; however, there are many more
elegant iterative structures than PROG available to the designers of
modern LISPs. If you are using PROG for any purpose other than iteration,
then (if I were advising you - and of course, you might not accept my
advice) I would suggest that you probably need a clearer analysis of your
problem. Myself, I would never use GO or GOTO in any language.

>tokens in parameter lists -- Isn't it really obvious that something
>like member with a few options is better than the excessive proliferation
>of look-alike functions (a la memq memql memqual ...)
>
Obviously it is, but it isn't at all obvious to me that sticking tokens in
the parameter list even helps with this.

>SETF -- I can't believe my eyes... This is one of the BEST things about
>CL... I don't know what to say.  Anyone who has actually USED CL with setf
>for a while knows what I'm talking about.
>
So you actually like overwriting cons cells without knowing what else is
pointing to them!? Either you aren't serious, or you haven't looked at
what SETF does. We all *know* RPLACs are dangerous; we all use them with
care (I hope). But SETF allows us to overwrite a cons cell without even
getting hold of it to identify it first! That is *terrifying*! And you are
going to put that horror into the hands of the innocent?

>HAVE YOU EVER USED LISP????? 

Yes. Why do you think I care about it so much?


** Simon Brooke *********************************************************
*  e-mail : simon@uk.ac.lancs.comp                                      * 
*  surface: Dept of Computing, University of Lancaster,  LA 1 4 YW, UK. *
*                                                                       *
* Thought for today: The task of a compiler is to take programs ... and *
******************** mutilate them beyond recognition [Elson] ***********

krulwich-bruce@CS.YALE.EDU (Bruce Krulwich) (06/18/88)

In article <525@dcl-csvax.comp.lancs.ac.uk> simon@comp.lancs.ac.uk (Simon
Brooke) writes:
>If you wish to gain information from your environment, then clearly, the
>names of the symbols you use do matter. If you bind your locals either
>in an arg list or in a let statement, then they don't matter. If you
>*don't* do this, then you are using globals, which will get you into equal
>trouble no matter what binding scheme you use.

This is simply not true, especially when using programming techniques
encouraged in lexically scoped LISPs.  Suppose you pass around a function.
In a lexically scoped LISP such a function can reference variables from the
function that created it.  In a dynamically scoped LISP these variable
references can be blocked by other variables in the system.  This is
something you may not have done, having not worked with lexically scoped
LISPs, but it is incredibly powerful.  (See, for example, the book AI
PROGRAMMING, by Charniak et al.)

>>HAVE YOU EVER USED LISP????? 
>Yes. Why do you think I care about it so much?

There is a big difference between the capabilities available (and thus the
techniques used) in modern LISPs as opposed to older LISPs.  I really
suggest looking at AI PROGRAMMING or a similar book before claiming that
such capabilities are not needed.


Bruce Krulwich

Net-mail: krulwich@{yale.arpa, cs.yale.edu, yalecs.bitnet, yale.UUCP}

	Goal in life: to sit on a quiet beach solving math problems for a
		      quarter and soaking in the rays.   

jackson@esosun.UUCP (Jerry Jackson) (06/18/88)

I must admit that after reading the measured response of Simon Brooke
to my *inflammatory* posting I felt somewhat abashed, but I would still
like to respond to some of his points... I think we are converging on
the good and bad points of both sides..


>>CL supports dynamic binding for those cases where
>>it is useful (I admit they definitely exist), although dynamic binding
>>is quite clearly a *BUG* (the names you give to local variables should not
>>matter...)
>>
>If you wish to gain information from your environment, then clearly, the
>names of the symbols you use do matter. If you bind your locals either
>in an arg list or in a let statement, then they don't matter. If you
>*don't* do this, then you are using globals, which will get you into equal
>trouble no matter what binding scheme you use. So this argument is simply
>not tenable. I agree that we can debate (and disagree) about which binding
>scheme is preferable, but it makes no sense to describe those you don't
>like as bugs.

I would like to elaborate on why I called this a *bug*.  It is not that
I just don't like it.  Here is an example of what I was talking about --

(defun foo (l)
  (my-mapcar #'(lambda (z)
		 (eql z l))
	     '(1 2 3 4)))

(defun my-mapcar (f l)
  (if (null l)
      nil
    (cons (funcall f (car l))
	  (my-mapcar f (cdr l)))))

With lexical scoping, the result of: (foo 2) => (nil t nil nil)
With dynamic scoping, the result of: (foo 2) => (nil nil nil nil)

With dynamic scoping, it is impossible to write a general procedure
which takes functional arguments that doesn't have this problem.  This is
why I said it's a bug -- it violates the notion that the names you pick
for *locals* shouldn't matter -- (notice that "l" in this case was
bound in the arglist)


>>PROG,GO -- For people who never have to write powerful tools I would
>>agree that these are not necessary, but if you had ever tried to compile
>>a special purpose language to lisp and make it reasonably efficient, you
>>would appreciate the value of having things like PROG and GO as 
>>compilation targets
>>
>Whilst we still programme largely for von Neumann architectures, there is
>need for an iterative construct in LISP; however, there are many more
>elegant iterative structures than PROG available to the designers of
>modern LISPs. If you are using PROG for any purpose other than iteration,
>then (if I were advising you - and of course, you might not accept my
>advice) I would suggest that you probably need a clearer analysis of your
>problem. Myself, I would never use GO or GOTO in any language.

As I said in my original statement, I am not advocating the use of the
abominable "go"-man in user code.  What I am saying, is that "go" is
a useful target for compilers for embedded languages -- (I have recently
written a compiler for a lisp-based prolog that compiles to lisp which
takes advantage of this...)

In fact, personally I don't much like iteration at all... That's why I
want implementors to be able to produce tail-recursive control structures
(even for embedded languages)


>>SETF -- I can't believe my eyes... This is one of the BEST things about
>>CL... I don't know what to say.  Anyone who has actually USED CL with setf
>>for a while knows what I'm talking about.
>>
>So you actually like overwriting cons cells without knowing what else is
>pointing to them!? Either you aren't serious, or you haven't looked at
>what SETF does. We all *know* RPLACs are dangerous; we all use them with
>care (I hope). But SETF allows us to overwrite a cons cell without even
>getting hold of it to identify it first! That is *terrifying*! And you are
>going to put that horror into the hands of the innocent?

On the contrary, I think that the benefits of SETF are most apparent when
you *do* know your target -- (I'm not really sure it is even possible to
do the opposite -- SETF is pretty dumb.. you have to tell it where the 
cell you want changed is and it has to know at compile time where that is)
Yes, RPLAC's are bad.  SETF is basically the same as the assignment 
mechanism in a more typical language like 'C':

a[i].wow = 5;  =>  (setf (wow (elt a i)) 5)

Is this bad?


BTW: There are things *I* don't like about CL -- 

1) packages -- The package system of CL is based on the wrong idea.
A programmer doesn't care if someone else uses the same symbol as *data*;
he only cares if it is a variable name or a function name, etc.  Since
what is important is the set of *bindings* for a symbol, an environment
system would be more appropriate.

2) #' -- By distinguishing function bindings from variable bindings,
CL makes many uses of lexical scoping awkward and nearly opaque
(as well as requiring extra special forms); see the sketch at the end
of this message.

3) the equality predicates -- I admit that I don't have a good answer
to this problem, but I think equalp was not well thought out (couldn't
we at least have a function just like equalp except that it is case-sensitive
for strings; or an option to equalp? -- I know, I know, everyone has his
own set)

4) A nit-pick -- has anyone ever found a use for the top level form: '-' ?

However, if you consider the magnitude of the task of designing this
language, they did pretty well. (I never thought I'd say this -- I used
to be an Interlisp-D hacker..)
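
To illustrate point 2 with a small made-up example:

(defun compose (f g)
  ;; CL's separate function namespace forces #' and FUNCALL here; in a
  ;; single-namespace Lisp this would just be
  ;;   (define (compose f g) (lambda (x) (f (g x))))
  #'(lambda (x) (funcall f (funcall g x))))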

+-----------------------------------------------------------------------------+
|   Jerry Jackson                       UUCP:  seismo!esosun!jackson          |
|   Geophysics Division, MS/22          ARPA:  esosun!jackson@seismo.css.gov  |
|   SAIC                                SOUND: (619)458-4924                  |
|   10210 Campus Point Drive                                                  |
|   San Diego, CA  92121                                                      |
+-----------------------------------------------------------------------------+

barmar@think.COM (Barry Margolin) (06/19/88)

In article <525@dcl-csvax.comp.lancs.ac.uk> simon@comp.lancs.ac.uk (Simon Brooke) writes:
>>SETF
>So you actually like overwriting cons cells without knowing what else is
>pointing to them!? Either you aren't serious, or you haven't looked at
>what SETF does. We all *know* RPLACs are dangerous; we all use them with
>care (I hope). But SETF allows us to overwrite a cons cell without even
>getting hold of it to identify it first! That is *terrifying*! And you are
>going to put that horror into the hands of the innocent?

I don't understand this point at all.  How does SETF allow you to
overwrite something without requiring you to know what you're
overwriting?

Maybe the problem you are referring to is the difference in behavior
of SETF depending upon whether it is operating on a structured object
or not.  If it is modifying a structured object, it modifies the
object, so all references to that object see the change.  On the other
hand, if it is given a character or a number, it modifies only the
referent it is given.  Examples:

Structured:
	(setq x (cons 1 2))
	(setq y x)
	(eql x y) => T
	(setf (car x) 3)
	x => (3 . 2)
	y => (3 . 2)
	(eql x y) => T

Non-structured:
	(setq x #\a)
	(setq y x)
	(eql x y) => T
	(setf (char-bit x :meta) t)
	x => #\meta-a
	y => #\a
	(eql x y) => NIL

However, these same inconsistencies would exist if you were forced to
use the pre-SETF equivalents:
	(setq x (rplaca x 3))
	(setq x (set-char-bit x :meta t))

(excuse the anachronism).  The inconsistency isn't in SETF, but in the
fact that the language allows side-effects to some data types but not
others.  If side effects on conses were not permitted, e.g.

(defun rplaca (cons new-car)
  (cons new-car (cdr cons)))

the two SETFs would be consistent regarding side effects.

Barry Margolin
Thinking Machines Corp.

barmar@think.com
{uunet,harvard}!think!barmar

barmar@think.COM (Barry Margolin) (06/19/88)

In article <209@esosun.UUCP> jackson@esosun.UUCP (Jerry Jackson) writes:
>1) packages -- The package system of CL is based on the wrong idea.
>A programmer doesn't care if someone else uses the same symbol as *data*;
>he only cares if it is a variable name or a function name, etc.  Since
>what is important is the set of *bindings* for a symbol, an environment
>system would be more appropriate.

One program's function name is another program's data.  Macros, for
instance, are programs whose data will later be interpreted as a
program.

And what about symbols used in property lists?  Both a quantum physics
program and an auto inventory program might use the COLOR property of
symbols.

>4) A nit-pick -- has anyone ever found a use for the top level form: '-' ?

Not for anything serious.  It's just a holdover from MacLisp.  It's
trivial to implement, and I guess the CL designers saw no reason to
drop it.

In MacLisp, which didn't have the LABELS construct, it could be used
to do recursion without actually defining a new function.  For
example, factorial(10) could be done with:

((lambda (n)
   (if (< n 2) 1
       (* n (funcall (car -) (1- n)))))
 10)

Barry Margolin
Thinking Machines Corp.

barmar@think.com
{uunet,harvard}!think!barmar

aarons@cvaxa.sussex.ac.uk (Aaron Sloman) (06/24/88)

From: simon@comp.lancs.ac.uk (Simon Brooke)

>Part of the power and expressiveness of LISP is that we can,
>when we want to, and when we know what we're doing, write functions which
>are sensitive to changes in their environment. If you don't like this, you
>will find that there are plenty of other *very good* languages (Pascal,
>Modula, Ada - even Scheme) which cater for your needs. Don't come and mess
>up the one language which has the expressiveness to do this.

The "one" language? No. Pop-11 is another language that allows dynamic
scoping. (For those who don't know it, there's an enthusiastic review of
Alphapop, a subset of Pop-11 for the Mac, in Byte May 1988).

The full version of Pop-11, currently available only in the Poplog
system, allows both lexical and dynamic scoping. For historical reasons
dynamic scoping is the default (i.e. input and output local parameters
default to being dynamically scoped unless explicitly declared lexical
using "lvars", "lconstant" or "dlvars").

Pop-11 is a sort of mixture of Lisp (dynamic scoping is available, along
with macros, lists, garbage collection, etc), Scheme (functions are
ordinary values of variables and in all respects first-class objects,
and lexical scoping is available), Pascal (rich, readable syntax,
records, arrays), and Forth (use of an explicit stack for passing
arguments and results).  It also includes partial application, a pattern
matcher, a lightweight process mechanism, and tools for creating new
incremental compilers, which is how the Poplog system can include
compilers for a variety of languages (Pop-11, Prolog, Common Lisp, ML,
and other user-implemented languages).

Pop-11 is a much expanded derivative of Pop-2, the language that was
used for many years in the Edinburgh University AI department.

One interesting fact about Pop-11 is that when lexical scoping became
available (version 10.2, 1985) many users (including the Poplog system
developers) found themselves switching from dynamic scoping as their
default to lexical,
    (a) because there were fewer bugs due to unintended interactions
    between procedures taking procedures as arguments, and
    (b) because it improved efficiency.  Moreover, they also became
    aware of additional expressive power available.

E.g. here's a procedure that takes two procedures and returns a third
procedure which is their functional composition:

    define compose(f1, f2);
        lvars procedure (f1, f2);

        define lvars f3(x);
            f2(f1(x))
        enddefine;

        return(f3)
    enddefine;

So, using it:

    vars root4;

    compose(sqrt,sqrt) -> root4;
    root4(16) =>
    ** 2.0

A procedure that takes a start number and an increment number and
returns two procedures, a number generator and a "reset" procedure:

    define make_generator(start, incr) -> generator -> reset;
        lvars start, incr, generator, reset, count=start;

        ;;; create the number generator procedure
        procedure();
            count;                  ;;; result left on stack
            count + incr -> count;  ;;; increment for next time
        endprocedure -> generator;  ;;; procedure assigned to generator

        procedure();
            start -> count;
        endprocedure -> reset;      ;;; procedure assigned to reset
    enddefine;

    vars gen1, reset1, gen2, reset2; ;;; declare some global variables

    ;;; Create two generators and their "reset" procedures, one to
    ;;; generate multiples of 3, the second multiples of 5.

    make_generator(3,3) -> gen1 -> reset1;
    make_generator(5,5) -> gen2 -> reset2;

    gen1() =>       ;;; run gen1 and print out its result
    ** 3
    gen1() =>
    ** 6
    gen2() =>
    ** 5
    gen2() =>
    ** 10
    ;;; reset the first sequence
    reset1();
    gen1() =>
    ** 3
    gen2() =>
    ** 15

So, using lexical scoping it is very easy to define a procedure that
(each time it is invoked) creates a family of procedures that share a
private environment.

(For some of the simpler cases you can also do this using partial
application, which is a bit more efficient, though less elegant).
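
For comparison, here is the generator transcribed into Common Lisp
closures (a sketch only; the names are invented):

(defun make-generator (start incr)
  (let ((count start))
    (values #'(lambda ()                   ; the number generator
                (prog1 count (incf count incr)))
            #'(lambda ()                   ; the reset procedure
                (setq count start)))))

;; (multiple-value-bind (gen reset) (make-generator 3 3)
;;   (list (funcall gen) (funcall gen)     ; => 3, then 6
;;         (progn (funcall reset)
;;                (funcall gen))))         ; => 3 again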

I suspect Lisp (and therefore perhaps AI) would have a much healthier
future as a general purpose language if it used the richer, more
readable (for ordinary mortals) syntax of Pop.

Cheers
Aaron Sloman,
School of Cognitive Sciences, Univ of Sussex, Brighton, BN1 9QN, England
    ARPANET : aarons%uk.ac.sussex.cvaxa@nss.cs.ucl.ac.uk
              aarons%uk.ac.sussex.cvaxa%nss.cs.ucl.ac.uk@relay.cs.net
    JANET     aarons@cvaxa.sussex.ac.uk
    BITNET:   aarons%uk.ac.sussex.cvaxa@uk.ac
        or    aarons%uk.ac.sussex.cvaxa%ukacrl.bitnet@cunyvm.cuny.edu

As a last resort (it costs us more...)
    UUCP:     ...mcvax!ukc!cvaxa!aarons
            or aarons@cvaxa.uucp

jeff@aiva.ed.ac.uk (Jeff Dalton) (07/05/88)

In article <519@dcl-csvax.comp.lancs.ac.uk> simon@comp.lancs.ac.uk (Simon Brooke) writes:
>Things are bubbling! good. Let's look at some of the arguments that have
>been advanced. Firstly, the objection to dynamic binding that's most
>commonly advanced: namely, that you fall down holes when you confuse
>locals with globals. [...]  If, in a lexically scoped lisp, you refer
>to a global variable thinking it's a local, or vice-versa, you'll still
>fall down a hole.  Lexical scoping does not obviate the need for good
>software engineering practice - namely, in this case a *naming scheme*.

There are many cases where both lexical and dynamic binding produce
the same result.  If you stick to these cases, problems with dynamic
binding will, of course, not appear.

The problem with dynamic binding is not so much that (incorrect)
references to (supposedly) local variables might refer to a global
instead -- that is clearly a problem in any language that has global
variables -- but that there is no way to have a local variable
whose value is not visible everywhere.  One cannot determine by
local inspection what references to a variable exist: any function
called might refer to it.  *All* variables are globally visible,
not just the ones meant to be global.

A naming scheme can handle this problem, but bugs are much harder
to localize when it breaks down.  Suppose F has a local M, G has
a local N, F calls G, and the author of G mistakenly typed M in one
place instead of N (see the sketch below).  Neither N nor M was meant
to be global, so a naming scheme for globals would not have helped.
A naming scheme that forbade local variable names such as N and M
would not be acceptable.  And by "local" here I should really say
something like "dynamic variables meant to have only local (i.e.
lexically valid) references".
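
Here is the scenario as (made-up) code.  Under dynamic scoping the
typo quietly picks up F's binding of M; a lexically scoped compiler
can instead warn that M is an undefined free variable:

(defun f (x)
  (let ((m 10))      ; F's local M
    (g x)))

(defun g (x)
  (let ((n 1))       ; G's local N
    (+ x m)))        ; typo: M where N was meant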

In most code, dynamic scope is needed in a minority of cases and
all other cases that turn out to refer to the dynamic binding of
a variable will be bugs.  This suggests that some explicit step be
required to get dynamic scope and that lexical scope be the default.
And so it is a Good Thing that Common Lisp (and the varieties of
Scheme that provide dynamic scope) require such explicit steps and
a Bad Thing when a Lisp provides only dynamic variables.

Jeff Dalton,                      JANET: J.Dalton@uk.ac.ed             
AI Applications Institute,        ARPA:  J.Dalton%uk.ac.ed@nss.cs.ucl.ac.uk
Edinburgh University.             UUCP:  ...!ukc!ed.ac.uk!J.Dalton

jeff@aiva.ed.ac.uk (Jeff Dalton) (07/05/88)

In article <519@dcl-csvax.comp.lancs.ac.uk> simon@comp.lancs.ac.uk (Simon Brooke) writes:
>The key point I want to make is one which Patrick made admirably:

]	...dynamic binding... makes some forms of abstraction easier to
]	handle (this is important for programming in the large).

>Precisely so. And it is precisely for its ability to handle abstraction
>that we choose LISP as a language. If we reduce its power to do so, we
>reduce its value to 'just another programming language[...]

The key point is that dynamic binding makes *some* forms of abstraction
easier to handle.  Lexical scoping also has this property, though for
different abstractions.  If you are going to have only one or the other,
the question is which abstraction forms are more important.  A change
from dynamic scoping to lexical is not necessarily a reduction in
power (particularly since dynamic scoping is usually implemented
without a way to get closures over the dynamic environment).

The designers of Common Lisp decided to avoid both reductions in
power by providing both forms of scoping.

Jeff Dalton,                      JANET: J.Dalton@uk.ac.ed             
AI Applications Institute,        ARPA:  J.Dalton%uk.ac.ed@nss.cs.ucl.ac.uk
Edinburgh University.             UUCP:  ...!ukc!ed.ac.uk!J.Dalton

jeff@aiva.ed.ac.uk (Jeff Dalton) (07/05/88)

In article <519@dcl-csvax.comp.lancs.ac.uk> simon@comp.lancs.ac.uk (Simon Brooke) writes:
>This is, in my opinion (not, I admit, widely shared as yet) one of the
>worst of the Common LISP messes. It is the nature of LISP that code is
>built up incrementally. You build your system on the top of my system. Let
>us say that you are a mortal and I am a hearty adventurer. How are you to
>know which tokens I have declared special? Well, I *ought* to have
>documented them; or you could always read the source file; or, as a last
>gasp, you could always, *every single time you use a new variable* ask the
>system whether I've already declared it special. But are you *really*
>going to do these things? No. Mixing your binding schemes is asking for
>trouble - and trouble of a particularly nasty sort.

Actually, you are not the only one who makes this argument.  The reasons
I do not find it convincing are:

1. The same naming convention you suggest earlier in your message -- 
   that the names of dynamic variables begin and end with "*" -- can
   be (and is) used in Common Lisp.  So I don't have to, as a last
   gasp, every single time, etc.

2. While it is true that I might not know that someone whose code I
   use has declared X special (i.e., dynamicly scoped), I also may
   not know that s/he has defined a function F.  Name conflicts of 
   this sort are not introduced by having both lexical and dynamic
   scope.  Indeed, they are *less* likely than in Lisps where every
   variable is dynamic.

The Common Lisp mixture of lexical and dynamic scoping is not perfect,
but the problems for the most part involve technical details not the
mere fact that both lexical and dynamic scope are available.

For example, there is no way in Common Lisp to guarantee that a
variable is not special.  In (LET ((A 10)) ...), A might have been
proclaimed special somewhere, and there's no way to turn that off.
But this is because special proclamations affect all bindings as
well as all references, not because special variables exist at all.
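
A small made-up example of the trap:

(proclaim '(special a))    ; perhaps buried in someone else's file

(defun show () (print a))  ; refers to the dynamic binding of A

(let ((a 10))  ; this LET now dynamically binds A -- SHOW prints 10 --
  (show))      ; and nothing at the binding site can turn that off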

Jeff Dalton,                      JANET: J.Dalton@uk.ac.ed             
AI Applications Institute,        ARPA:  J.Dalton%uk.ac.ed@nss.cs.ucl.ac.uk
Edinburgh University.             UUCP:  ...!ukc!ed.ac.uk!J.Dalton

jeff@aiva.ed.ac.uk (Jeff Dalton) (07/05/88)

In article <525@dcl-csvax.comp.lancs.ac.uk> simon@comp.lancs.ac.uk (Simon Brooke) writes:
>If you wish to gain information from your environment, then clearly, the
>names of the symbols you use do matter. If you bind your locals either
>in an arg list or in a let statement, then they don't matter. 

But if the Lisp provides only dynamic scope, the names of variables
bound in arg lists and LETs do matter even though you often don't
want them to.

] NLAMBDA -- cannot be made efficient (unless you consider a run-time call
] to EVAL efficient)

>No, I agree that it cannot. I use LISP for its expressiveness, not its
>efficiency; and while I appreciate that generally you can do with a macro
>all that you can do with an NLAMBDA, few people can read a macro of more
>than moderate complexity.

You can easily write NLAMBDAs in Common Lisp by using a function
together with a macro that adds quotes to the arguments.  Whether
is is desirable to do so is another matter.  The problems are not
just of efficiency but also of understanding.

] SETF -- I can't believe my eyes... This is one of the BEST things about
] CL... I don't know what to say.  Anyone who has actually USED CL with setf
] for a while knows what I'm talking about.

>So you actually like overwriting cons cells without knowing what else is
>pointing to them!? Either you aren't serious, or you haven't looked at
>what SETF does. We all *know* RPLACs are dangerous; we all use them with
>care (I hope). But SETF allows us to overwrite a cons cell without even
>getting hold of it to identify it first! That is *terrifying*! And you are
>going to put that horror into the hands of the innocent?

SETF of CAR and RPLACA are the same thing as far as what you've said
is concerned.  You have not given a reason why SETF is more terrifying
than RPLACA, for it does not let you modify cons cells without getting
hold of them first any more than RPLACA does.

Jeff Dalton,                      JANET: J.Dalton@uk.ac.ed             
AI Applications Institute,        ARPA:  J.Dalton%uk.ac.ed@nss.cs.ucl.ac.uk
Edinburgh University.             UUCP:  ...!ukc!ed.ac.uk!J.Dalton

mrys@ethz.UUCP (Michael Rys) (07/17/88)

In article <478@aiva.ed.ac.uk> jeff@uk.ac.ed.aiva (Jeff Dalton) writes:
>In article <519@dcl-csvax.comp.lancs.ac.uk> simon@comp.lancs.ac.uk (Simon Brooke) writes:
>>Things are bubbling! good. Let's look at some of the arguments that have
>>been advanced. Firstly, the objection to dynamic binding that's most
>>commonly advanced: namely, that you fall down holes when you confuse
>>locals with globals. [...]  If, in a lexically scoped lisp, you refer
>>to a global variable thinking it's a local, or vice-versa, you'll still
>>fall down a hole.  Lexical scoping does not obviate the need for good
>>software engineering practice - namely, in this case a *naming scheme*.
>
>There are many cases where both lexical and dynamic binding produce
>the same result.  If you stick to these cases, problems with dynamic
>binding will, of course, not appear.
>
>...
>A naming scheme can handle this problem, but bugs are much harder

In APL only dynamic scoping exists.  A possible way to get
the same result as with static scoping (aka lexical scoping) is to
introduce three new scope classes.  For a detailed description see
the paper by Seeds, Arpin and LaBarre in an APL Quote Quad, 1978, or
my paper 'Scope and access classes in APL' in the APL88 Conference
Proceedings (available from the ACM).  Of course this new scheme would
require new ideas for the symbol table...

Michael Rys

IPSANet : mrys@ipsaint
UUCP    : mrys@ethz.uucp