[comp.lang.lisp] Scheme as an Algol-like, not Lisp-like, language

pcg@cs.aber.ac.uk (Piercarlo Grandi) (02/27/91)

On 25 Feb 91 15:12:05 GMT, jeff@aiai.ed.ac.uk (Jeff Dalton) said:

jeff> In article <PCG.91Feb23162955@odin.cs.aber.ac.uk>
jeff> pcg@cs.aber.ac.uk (Piercarlo Grandi) writes:

barmar> Traditionally, one of the most important differences between
barmar> Lisp and most other languages has been the fact that memory
barmar> management is automatic.

pcg> Unfortunately this has traditionally encouraged sloppiness.

jeff> More often automatic storage management makes it easier to write
jeff> code that is clearer and in a sense more abstract.

Mostly agreed, but clearer and more abstract code should be modularized
so that explicit storage reclamation can be slipped in easily. When this
cannot be done, one has a tradeoff between having it easier to write
code that is slightly clearer and more abstract and using a machine
which costs ten times less, and the choice is often loaded :-). We all
already know this, but I'd like to emphasize it.

jeff> A number of Lisp hackers _did_ think about garbage generation
jeff> rates when it was appropriate to do so.

Ah yes, they did, the good ones and those with vast experience and
serious problems, but not everybody is a Dalton or Margolin, and garbage
generation rates are not even mentioned, just like locality of reference
in C/Unix books, in any Lisp/Scheme book I have seen -- please don't
post counterexamples! Every gross generalization like this has them, but
I hope you get the message, which is that garbage generation rates are not
thought to be an issue worth mentioning, while for example time, but not
space, complexity is often prominently addressed (entire books devoted
to it!).

Even research in these issues has become unfashionable, like research in
locality of reference, and I can think of precious few contributions in
either field for the past ten years or so.

barmar> Even EVAL isn't as fundamental to Lisp as I once thought, since
barmar> Scheme doesn't have it [ ... ]

pcg> The entire Scheme report is really about defining the semantics of
pcg> 'eval'. 

jeff> I hope you're not supposing that Barmar didn't know this.

Your hope is well founded. I do occasionally restate the obvious just to
make it clearer what the shape of my reasoning is, if any, and not null
and void where prohibited by relevant statutes :-).

pcg> It does not have a way to invoke recursively the 'eval', but
pcg> that is probably best left as an implementation issue.

jeff> There are programs that can use it and they cannot be written in
jeff> portable Scheme. [ ... ] I don't see how any of this makes it an
jeff> implementation issue.

Well, my reasoning goes as follows: the Scheme RR defines the semantics
of 'eval', but does not define how to *invoke* it, because invoking it
*is* an implementation dependent issue. It may be something like 'scheme
<file>', for example, in most implementations. Or the implementation may
be a Scheme machine and all you have to type is '<expr>', 'eval' being
implicit.
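A minimal sketch of what such an implementation-dependent invocation might look like, assuming an implementation that happens to expose an eval procedure taking an expression and an environment (the name, arity, and very existence of which the report leaves open):

```scheme
;; Not portable R3RS: assumes the implementation supplies 'eval' and
;; 'interaction-environment'.  A program fragment is built as data...
(define fragment (list '+ 1 2))

;; ...and handed back to the evaluator explicitly.
(display (eval fragment (interaction-environment)))  ; prints 3
(newline)
```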

pcg> Also, Scheme is not a Lisp -- it is an Algorithmic Language :-).

jeff> Single inheritance thinking rides again.  [Common Lisp is a Lisp but
jeff> not supposedly an "Algorithmic language".  Scheme is an Algorithmic
jeff> language.  Since Algorithmic language is not a subcategory of Lisp,
jeff> and Lisp is not a subcategory of Algorithmic language (because of
jeff> such Lisps as Common Lisp), it must be that Scheme is not a Lisp.]

Now, *you* are underestimating me a bit. The serious point I was making
is that a Lisp is historically a symbolic List Processing language,
while, except for the survival of 'cons', 'car', and 'cdr', Scheme is
more of something in the Algol 60 tradition, with a superficially
Lisp-like syntax.

Unfortunately I was not using, mea culpa, Algorithmic Language in the
proper sense of alternative to Programming Language, but in the narrower
and less proper sense of Algol like language. Scheme is an Algorithmic
Language that is Algol-like, while there are non Algol-like algorithmic
languages (e.g. APL, or ML); Common Lisp is a Programming Language that
is Lisp-like (in a cancerous way :->), while there are Lisp-like
languages that are not programming languages (the old UW or Vincennes
Lisps, Forth and TRAC (TM) maybe qualify as Lisp-like too).

The lack of an explicit 'eval' gives the show away, I am afraid (just
like the consequence of having 'quasiquote' & Company as primitives
does).

Scheme cannot be used, portably, to "reason" about programs, inasmuch as it
cannot *directly* build and execute *program fragments*, like every
Lisp-like language is supposed to do. Some say this is *the*
distinguishing characteristic of Lisp-like languages, however rarely it
is used in practice.
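The construction half of this is portable; only the execution half is missing. A small sketch:

```scheme
;; Building a program fragment with quasiquote is portable Scheme:
(define n 4)
(define fragment `(* ,n ,n))   ; the list (* 4 4), a program as data

(display fragment)             ; prints (* 4 4)
(newline)
;; Executing 'fragment', however, needs an implementation-supplied
;; 'eval', which the report does not standardize.
```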

Scheme can "reason" more powerfully than most other languages (including
Common Lisp), thanks to lexical closures and to continuations, and to
explicit environments supported by many implementations, about *program
states*, in both the context and the control graph, but this really is
in the Algol-like language tradition, where 'own' is the root of all
such powerful technology. OO technology, which is related to it, is
after all a consequence of 'own', of Simula I and Simula 67, all
Algol-like languages.

The Lisp-like language tradition had to fumble with the upward and
downward funarg issue almost forever (until Steele and Baker, more or
less), even if as of now funargs are more mainstream in the Lisp-like
language camp than in the Algol-like language camp.

After all this, IMNHO it is not so far out to say that Scheme is an
alternative syntax (modulo the latent type issue) for an Algol 68
subset, if Algol 68 had upward funargs (and some Algol 68
_implementations_ had them, as an extension to the standard!).

I think that the provision of *excessive* and *regrettable* (complex and
rational!) numeric facilities in Scheme is also designed to give the
impression that it is designed to be more of an Algol-like language than
a Lisp-like language.

It is true however that most Scheme *implementations* have actually been
like Lisp implementations, that is workspace based, and with 'eval'; but
I am sure that a 'schemec' compiler that generated object code like the
'cc' compiler could be done, and actually some imperfect realizations do
exist (scheme2c from DEC actually is mostly used to produce objects to
be loaded in a Scheme environment; PreScheme from MIT is better geared,
I seem to understand, to generating standalone executables to be linked
with a library).

Actually, let me say, I think that one of the fatal mistakes in Scheme,
like it was for Pascal, is that there is no standard way to address
separate compilation! Seriously. Think carefully about the
implications...
--
Piercarlo Grandi                   | ARPA: pcg%uk.ac.aber.cs@nsfnet-relay.ac.uk
Dept of CS, UCW Aberystwyth        | UUCP: ...!mcsun!ukc!aber-cs!pcg
Penglais, Aberystwyth SY23 3BZ, UK | INET: pcg@cs.aber.ac.uk

jinx@zurich.ai.mit.edu (Guillermo J. Rozas) (02/27/91)

    Now, *you* are underestimating me a bit. The serious point I was making
    is that a Lisp is historically a symbolic List Processing language,
    while, except for the survival of 'cons', 'car', and 'cdr', Scheme is
    more of something in the Algol 60 tradition, with a superficially
    Lisp-like syntax.

Well, what makes a Lisp?  Let me suggest a few requirements.
1. List processing.
2. Programs that look like data.
3. Objects with unbounded extent and automatic storage management.
4. EVAL (making use of 2).
5. Powerful syntactic extension facilities that make use of 2.
6. Call by value.
7. Imperative constructs.
8. Functional style encouraged, or at least not discouraged.
9. Latent types.
10. Generic arithmetic, with integers not restricted to the word size
of the machine.
11. Interactive development environments.
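Point 10 is easy to demonstrate; a sketch (any Scheme with full bignum support behaves this way):

```scheme
;; Generic arithmetic: exact integers are not limited to the machine
;; word, so 2^100 is computed exactly rather than overflowing.
(display (expt 2 100))   ; prints 1267650600228229401496703205376
(newline)
```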

You are right to claim that Scheme is not a Lisp because of its lack
of 4 and 5, but

- Every implementation of Scheme that I know of has both.

- It is very much the intent of the RnRS group to agree on a portable
way to do 5, and we have not yet agreed on 4 not because we don't want
EVAL in the language, but primarily because we cannot completely agree
on its arity.

- There are dialects of Lisp out there that don't satisfy some of
those requirements, yet no one thinks they are not Lisp.


Of course, Scheme is also Algol-like because it is a lexically scoped,
procedural language with imperative constructs, but so is CL.  As you
probably know, the original claim that Scheme was Algol-like, as
opposed to other Lisps, was because other Lisps were dynamically
scoped at the time.

    Scheme cannot be used, portably, to "reason" about programs, inasmuch as it
    cannot *directly* build and execute *program fragments*, like every
    Lisp-like language is supposed to do. Some say this is *the*
    distinguishing characteristic of Lisp-like languages, however rarely it
    is used in practice.

Well, not quite right about Scheme.  R3RS, the last report published,
includes LOAD, and an adequate EVAL can be written in terms of it:

(define (eval S-expression)
  ;; Write the expression out to a temporary file...
  (with-output-to-file "<some temp file>"
    (lambda ()
      (write S-expression)))
  ;; ...then LOAD it, which makes the implementation evaluate it.
  (load "<some temp file>"))

    I think that the provision of *excessive* and *regrettable* (complex and
    rational!) numeric facilities in Scheme is also designed to give the
    impression that it is designed to be more of an Algol-like language than
    a Lisp-like language.

Hmm.  That's interesting.  My reading of the various reports and the
draft IEEE standard, and my understanding of their intent, must be
different from yours.  Scheme does not require ratnums or recnums, it
merely requires that built-in operators should handle them
transparently if they are provided by an implementation.

You are confusing rational numbers with ratnums, an implementation
technique for representing arbitrary rationals.  You are also
confusing complex numbers with recnums, a representation technique for
reals with non-zero imaginary parts.

The built-in operators should handle rationals (whether implemented as
ratnums or floats) and complex numbers (whether they have a non-zero
imaginary part or not) correctly, but implementations are free not to
supply any way to construct non-float rationals (nor even floats for
that matter) nor any way to construct non-real complex numbers.

In other words, the requirement is one of integration if the features
are present, not a requirement on the presence!
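A sketch of the distinction: because the operators must integrate whatever representation the implementation chooses, the same portable code is correct whether (/ 1 3) comes back exact or inexact:

```scheme
;; (/ 1 3) may yield an exact 1/3 (a ratnum) or an inexact 0.333...
;; (a float), depending on the implementation; either way the
;; built-in operators must handle the result transparently.
(define third (/ 1 3))

(display (* 3 third))   ; 1 if exact, a value very near 1.0 if inexact
(newline)
```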

Furthermore, I can't see why you would say that these features put
Scheme in the Algol camp instead of the Lisp camp.  Algol-60 (the
Algol meant by Steele and Sussman) did not have generic arithmetic,
but Lisps typically do.

    Actually, let me say, I think that one of the fatal mistakes in Scheme,
    like it was for Pascal, is that there is no standard way to address
    separate compilation! Seriously. Think carefully about the
    implications...

More on this along the way, we hope.

Scheme is not a finished language.  It is finished enough for some
purposes, but it is seriously lacking in others.  The lack of other
facilities is not because they are not considered important by the
authors of the report, but because they have not yet agreed on what
they should look like.

I think you are reading too much into the Scheme reports.

jaffer@gerber.ai.mit.edu (Aubrey Jaffer) (02/27/91)

>> 4. EVAL (making use of 2).
>> 5. Powerful syntactic extension facilities that make use of 2.
   ...
>> You are right to claim that Scheme is not a Lisp because of its lack
>> of 4 and 5, but
>> - Every implementation of Scheme that I know of has both.

SCM has neither facility.  Eval makes compilation impossible without
including an interpreter or compiler in the run-time support.

The lack of macros makes pure scheme code easily readable.  I would
not object to macros if they were required to be syntactically
differentiated from variable names in order to preserve readability.
I find when reading common-lisp code that I often don't know if a user
defined symbol is a macro or a function.

To make a more radical suggestion I think that scheme might do very
well to NOT include macros.  Introducing new syntactic constructs
(macros) in order to avoid typing a few lambdas is bad programming
style in that the code becomes unreadable.  If a syntactic construct
provides a programming paradigm not already supported by scheme then
it should be added to the language.
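A sketch of the tradeoff described here, assuming no macro facility at all: the iteration a WHILE macro would provide can be had from an ordinary procedure, at the cost of typing the few lambdas:

```scheme
;; A procedural WHILE: condition and body are passed as thunks -- the
;; explicit lambdas a macro would let the caller omit.
(define (while-loop test? body)
  (if (test?)
      (begin (body)
             (while-loop test? body))))

(define i 0)
(while-loop (lambda () (< i 3))
            (lambda () (set! i (+ i 1))))

(display i)   ; prints 3
(newline)
```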

jeff@castle.ed.ac.uk (Jeff Dalton) (02/27/91)

In article <PCG.91Feb26194909@odin.cs.aber.ac.uk> pcg@cs.aber.ac.uk (Piercarlo Grandi) writes:
>jeff> More often automatic storage management makes it easier to write
>jeff> code that is clearer and in a sense more abstract.
>
>Mostly agreed, but clearer and more abstract code should be modularized
>so that explicit storage reclamation can be slipped in easily. When this
>cannot be done, one has a tradeoff between having it easier to write
>code that is slightly clearer and more abstract and using a machine
>which costs ten time less, and the choice is often loaded :-). We all
>already know this, but I'd like to emphasize it.

I would disagree that the code is only "slightly clearer" and that
the machine would cost 10 times less.  Let's suppose Lisp is a factor
of 2 slower due to garbage collection, which in some cases is an
overestimate.  I bought a 386 machine a few months ago, and now for
more or less the same price I could get a 486 that's twice as fast.
So it looks like I could get my clearer and more abstract code
without paying anything more.  This estimate is no doubt off,
but I don't think it's off by a factor of 10.

Of course, if someone came up with a language in which I could slip in
explicit storage management (1) without having to write more awkward
code in order to allow the slipping-in and (2) without having slower
automatic storage management, I would be happy to use it. 

>jeff> A number of Lisp hackers _did_ think about garbage generation
>jeff> rates when it was appropriate to do so.
>
>Ah yes, they did, the good ones and those with vast experience and
>serious problems, but not everybody is a Dalton or Margolin, and garbage
>generation rates are not even mentioned, just like locality of reference
>in C/Unix books, in any Lisp/Scheme book I have seen -- please don't
>post counterexamples! Every gross generalization like this has them, but
>I hope you get the message, which is that garbage generation rates are not
>thought to be an issue worth mentioning, while for example time, but not
>space, complexity is often prominently addressed (entire books devoted
>to it!).

Although books clearly have some impact on programming practice,
I don't think they necessarily give a true picture of what that 
practice is.

Since you won't let me use counterexamples, let me reply with some
gross generalizations.  Almost all Lisps texts are introductions and
consequently treat efficiency as a relatively minor concern.  It might
be argued that they're wrong to do this, but I think it's understandable.
Experienced programmers think differently, and many User Manuals for
implementations discuss efficiency at length in recognition of this. 

>Even research in these issues has become unfashionable, like research in
>locality of reference, and I can think of precious few contributions in
>either field for the past ten years or so.

I will leave this to people who have a stock of references nearer to
hand.  There are certainly things I've heard about only in the last 10
years, but of course that's not the same thing.

>barmar> Even EVAL isn't as fundamental to Lisp as I once thought, since
>barmar> Scheme doesn't have it [ ... ]
>pcg> The entire Scheme report is really about defining the semantics of
>pcg> 'eval'. 
>jeff> I hope you're not supposing that Barmar didn't know this.
>
>Your hope is well founded. I do occasionally restate the obvious just to
>make it clearer what the shape of my reasoning is, if any, and not null
>and void where prohibited by relevant statutes :-).

Yes, but I think you end up addressing the wrong point.  Barmar was
clearly talking about a sense of "have EVAL" in which Lisp normally
has it but Scheme does not.  All Lisps, including Scheme, define the
semantics of 'eval'.  So I would say Barmar must be talking about
something else, namely that Scheme does not make eval available as
a function.  Since this is a well-known and much-discussed issue,
Barmar didn't need to state it explicitly in order to make his meaning
clear.  Or so I would have thought.

>pcg> It does not have a way to invoke recursively the 'eval', but
>pcg> that is probably best left as an implementation issue.
>
>jeff> There are programs that can use it and they cannot be written in
>jeff> portable Scheme. [ ... ] I don't see how any of this makes it an
>jeff> implementation issue.
>
>Well, my reasoning goes as follows: the Scheme RR defines the semantics
>of 'eval', but does not define how to *invoke* it, because invoking it
>*is* an implementation dependent issue. It may be something like 'scheme
><file>', for example, in most implementations. Or the implementation may
>be a Scheme machine and all you have to type is '<expr>', 'eval' being
>implicit.

Well, all that is true, but it is still not what Barmar was talking
about.

I will leave the "Scheme isn't a Lisp" for another message while I
go look for a better editor on this random machine I'm using (the
usual one being broken).

-- jeff

jinx@zurich.ai.mit.edu (Guillermo J. Rozas) (02/28/91)

In article <13573@life.ai.mit.edu> jaffer@gerber.ai.mit.edu (Aubrey Jaffer) writes:


   >> 4. EVAL (making use of 2).
   >> 5. Powerful syntactic extension facilities that make use of 2.
      ...
   >> You are right to claim that Scheme is not a Lisp because of its lack
   >> of 4 and 5, but
   >> - Every implementation of Scheme that I know of has both.

   SCM has neither facility.  Eval makes compilation impossible without
   including an interpreter or compiler in the run-time support.

I did not know that SCM did not include EVAL or macros.  Now I do.

Your comment about EVAL is no different from saying that compilation
of code that does IO is impossible without including WRITE (or even
more apropos, FORMAT) in the run-time support.
It is perfectly possible to build a Scheme system that links only
those run-time modules needed, and thus code not using EVAL would
never need it.

   The lack of macros makes pure scheme code easily readable.  I would
   not object to macros if they were required to be syntactically
   differentiated from variable names in order to preserve readability.
   I find when reading common-lisp code that I often don't know if a user
   defined symbol is a macro or a function.

   To make a more radical suggestion I think that scheme might do very
   well to NOT include macros.  Introducing new syntactic constructs
   (macros) in order to avoid typing a few lambdas is bad programming
   style in that the code becomes unreadable.  If a syntactic construct
   provides a programming paradigm not already supported by scheme then
   it should be added to the language.

I agree that macros can be abused and often are.  But I disagree with
your suggestion that all common and useful constructs should be
included in the language.  A language can't be (and shouldn't be) that
comprehensive.  Furthermore, as large as the design group is, it will
not include the complete community, nor will it be able to envision
all paradigms.

In particular, a language designed by consensus, such as the RnRS
dialects of Scheme, will have a hard time adding new special forms if
part of the design group feels strongly against adding them (I and
many others in the MIT Scheme community feel this way).  Yet I would
like to be able to use WHILE, FLUID-LET and perhaps even DO* even
though I might not want them in the language.

It is also the case that code can become much more readable with
judicious use of syntactic sugar.  I find code written with FLUID-LET
easier to read than code that passes many additional almost-constant
parameters around, or that open-codes FLUID-LET where it would be
used.  WHILE is often appropriate for imperative programs, and many
other special forms can be similarly defended.
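A sketch of what a FLUID-LET expands into, written here as an ordinary procedure (names hypothetical; the real special form is more readable, which is exactly the argument for the sugar, and this version ignores non-local exits via continuations):

```scheme
;; Hand-coded dynamic rebinding: save the old value, install the new
;; one, run the body, restore.  A FLUID-LET macro hides this pattern.
(define indentation 0)

(define (with-indentation new-value thunk)
  (let ((old indentation))
    (set! indentation new-value)
    (let ((result (thunk)))
      (set! indentation old)
      result)))

(display (with-indentation 4 (lambda () indentation)))   ; prints 4
(newline)
(display indentation)                                    ; prints 0
(newline)
```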

I don't know how to resolve the tension between providing powerful
(and easily misused) tools and discouraging their abuse.  Perhaps the
best solution is to follow the guidelines that I once heard from Alan
Bawden (my paraphrasing): 
- No Lisp programmer should be allowed to write macros unless s/he's
been granted a license.
- Alan Bawden has a license and grants all other licenses.
I think this would have the properties that I (and perhaps you)
desire, but is, unfortunately, infeasible and politically incorrect.

markf@zurich.ai.mit.edu (Mark Friedman) (02/28/91)

In article <13573@life.ai.mit.edu> jaffer@gerber.ai.mit.edu (Aubrey Jaffer) writes:

   I find when reading common-lisp code that I often don't know if a user
   defined symbol is a macro or a function.

I find when reading code that I often don't know what a procedure does
or what its arguments are supposed to be.  My point is that there are
other things that one needs to know in order to understand code. The
issue of whether a combination is a macro call, a syntactic form, or a
procedure call is only one of them.

   If a syntactic construct provides a programming paradigm not
   already supported by scheme then it should be added to the
   language.

You've obviously never been to a Scheme standards or RNRS meeting :-)
Seriously, macros ARE a way to add to the language. Procedures are
another. Why discriminate against one of them?

-Mark
--

Mark Friedman
MIT Artificial Intelligence Lab
545 Technology Sq.
Cambridge, Ma. 02139

markf@zurich.ai.mit.edu

jeff@castle.ed.ac.uk (Jeff Dalton) (02/28/91)

In article <PCG.91Feb26194909@odin.cs.aber.ac.uk> pcg@cs.aber.ac.uk (Piercarlo Grandi) writes:
>On 25 Feb 91 15:12:05 GMT, jeff@aiai.ed.ac.uk (Jeff Dalton) said:

>pcg> Also, Scheme is not a Lisp -- it is an Algorithmic Language :-).
>
>jeff> Single inheritance thinking rides again.  [Common Lisp is a Lisp but
>jeff> not supposedly an "Algorithmic language".  Scheme is an Algorithmic
>jeff> language.  Since Algorithmic language is not a subcategory of Lisp,
>jeff> and Lisp is not a subcategory of Algorithmic language (because of
>jeff> such Lisps as Common Lisp), it must be that Scheme is not a Lisp.]
>
>Now, *you* are underestimating me a bit. The serious point I was making
>is that a Lisp is historically a symbolic List Processing language,
>while, except for the survival of 'cons', 'car', and 'cdr', Scheme is
>more of something in the Algol 60 tradition, with a superficially
>Lisp-like syntax.

1. Scheme aims to be both a Lisp and an Algol-like language.  The
   former ought to be clear on a variety of grounds.  Evidence for
   the latter includes the name of the TeX style file for the Scheme
   reports (ie, "algol60").

   The idea that Scheme is not a Lisp is promulgated mostly by those
   who do not like Lisp but want to, somehow, make an exception of
   Scheme.  Due to single-inheritance thinking, they think this ought
   to be done by placing Scheme outside of Lisp.

   I would also disagree with the suggestion that Scheme is more an
   Algol than a Lisp.

2. The idea that Scheme is further away from list processing than
   other Lisps is somewhat bizarre, because many other Lisps have a
   greater range of data types than Scheme.  Scheme is different
   from traditional Lisp in large part because it was developed
   later.  Lisp is not now, if it ever was, a language in which
   "everything is a list".  

3. The Scheme syntax is not _superficially_ Lisp-like; it is a Lisp
   syntax.

>Unfortunately I was not using, mea culpa, Algorithmic Language in the
>proper sense of alternative to Programming Language, but in the narrower
>and less proper sense of Algol like language. Scheme is an Algorithmic
>Language that is Algol-like, while there are non Algol-like algorithmic
>languages (e.g. APL, or ML); Common Lisp is a Programming Language that
>is Lisp-like (in a cancerous way :->), while there are Lisp-like
>languages that are not programming languages (the old UW or Vincennes
>Lisps, Forth and TRAC (TM) maybe qualify as Lisp-like too).

"Proper sense"?  Give me a break.

I will agree that you have now departed from single-inheritance, but
only to the extent of having two hierarchies instead of one.  I do not
find your elaborated classification scheme any more convincing.  ML is
closer to Algol in many ways than Scheme is.  Forth and TRAC may be
in some class that also includes Lisp, but there's surely a better
name for it than "Lisp-like".

Scheme, on the other hand, is a variety of Lisp.  (It's often said to
be a "dialect" of Lisp, but that may imply too strongly that Lisp is a
single language.)  Indeed, you have the bizarre notion that "Lisp-like"
includes some Lisps (eg, Common) but not others (Scheme).  (Or perhaps
you just have the bizarre notion that "Lisp-like" is a good name for
this category.)

The idea that Common Lisp is a cancerous departure from "old UW Lisp"
is, moreover, just wrong.  The evolution from Lisp 1.5 to the main
Lisps of the 70s, MacLisp and InterLisp, was a fairly natural one.
Common Lisp is the unification of several successors to Maclisp.  It
is, clearly, larger than MacLisp, but it is not larger than InterLisp.
Moreover, many things that may seem peculiar to Common Lisp (format,
defstruct, dotimes, etc) were already used in MacLisp.

Some critics of Common Lisp don't want to accept this.  They want
Common Lisp to be a uniquely bad language.  (Incidently, when I talk
about "some critics" I do not want to imply that they necessarily
include pgc.  Maybe they do, but I don't know.)

>The lack of an explicit 'eval' gives the show away, I am afraid (just
>like the consequence of having 'quasiquote' & Company as primitives
>does).

What?  I'll address eval below, where you elaborate on this point.
However, Common Lisp could just as well have quasiquote.  It isn't
that significant a difference.  (I hope you aren't counting it as part
of the claim that Scheme cannot be used to "*directly*" build program
fragments.)

>Scheme cannot be used, portably, to "reason" about programs, inasmuch as it
>cannot *directly* build and execute *program fragments*, like every
>Lisp-like language is supposed to do. Some say this is *the*
>distinguishing characteristic of Lisp-like languages, however rarely it
>is used in practice.

Some say it, but they're wrong.  There isn't a single distinguishing
feature, and EVAL is not first on my list.  But this goes back to the
notion that there is a useful category "Lisp-like", characterized (it
now seems) by the ability to build and execute program fragments at
run time.  This way of thinking makes sense only if you escape single-
inheritance thinking and let this ability be one characteristic among
others (and change the name).  

>Scheme can "reason" more powerfully than most other languages (including
>Common Lisp), thanks to lexical closures and to continuations, and to
>explicit environments supported by many implementations, about *program
>states*,

I'd stay away from "supported by many implementations" if I were you,
since virtually all implementations let you directly execute program
fragments by calling EVAL and construct them via macros if by nothing
else.

>in both the context and the control graph, but this really is
>in the Algol-like language tradition, where 'own' is the root of all
>such powerful technology. OO technology, which is related to it, is
>after all a consequence of 'own', of Simula I and Simula 67, all
>Algol-like languages.

If you start tracing good ideas back to Algol, virtually every "good"
language will be "Algol-like".  The principal root of such things in
Scheme is the lambda calculus and the first-class status of functions.

>The Lisp-like language tradition had to fumble with the upward and
>downward funarg issue almost forever (until Steele and Baker, more or
>less), even if as of now funargs are more mainstream in the Lisp-like
>language camp than in the Algol-like language camp.

Most of the programming language world managed to "fumble" with
functions, if you want to call it that.  Lisp 1.5 had upward
funargs.  Where it went wrong was in not having lexical scoping.
The reason the upward/downward distinction became important
was that the downward ones were easier to implement efficiently.
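An upward funarg, for concreteness (a sketch): a closure that escapes the activation that created it, capturing its free variable lexically:

```scheme
;; make-adder returns a closure that outlives make-adder's activation,
;; carrying the binding of n with it -- the classic upward funarg.
(define (make-adder n)
  (lambda (x) (+ x n)))

(define add5 (make-adder 5))

(display (add5 3))   ; prints 8
(newline)
```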

>After all this, IMNHO it is not so far out to say that Scheme is an
>alternative syntax (modulo the latent type issue) for an Algol 68
>subset, if Algol 68 had upward funargs (and some Algol 68
>_implementations_ had them, as an extension to the standard!).

Well, yes, let's confine the type question to parentheses.  That way
we can treat it as unimportant.

And let's try this alternative syntax + some implementations trick
more generally.  Let's see.  Because of the KCL compiler, Common Lisp
is demonstrably an alternative syntax for C, plus a library of
procedures.  And C is very like a subset of Algol68 (or else say the
KCL compiler could just as well emit Algol68), so it must be that
Common Lisp is an Algol-like language.  If anything doesn't quite fit,
we can just say "modulo" in parentheses.

>I think that the provision of *excessive* and *regrettable* (complex and
>rational!) numeric facilities in Scheme is also designed to give the
>impression that it is designed to be more of an Algol-like language than
>a Lisp-like language.

It was MacLisp (more or less) that started the emphasis on having
good numeric facilities in Lisp.  Common Lisp and Scheme have *fortunately*
provided exact rational arithmetic.  (N.B. not "exact" in the technical
Scheme sense.)  Other languages (eg, Haskell) have been influenced by
Common Lisp and Scheme.  

>It is true however that most Scheme *implementations* have actually been
>like Lisp implementations, that is workspace based, and with 'eval'; but
>I am sure that a 'schemec' compiler that generated object code like the
>'cc' compiler could be done, and actually some imperfect realizations do
>exist (scheme2c from DEC actually is mostly used to produce objects to
>be loaded in a Scheme environment; PreScheme from MIT is better geared,
>I seem to understand, to generating standalone executables to be linked
>with a library).

And the same can be done with (surprise!) Common Lisp.  The same sort of
imperfect realizations already exist, eg KCL.

-- jeff

kend@data.UUCP (Ken Dickey) (02/28/91)

[WARNING: I have not seen the previous postings.  Some brain-dead
comments may result].

pcg@cs.aber.ac.uk (Piercarlo Grandi) writes:
pcg>On 25 Feb 91 15:12:05 GMT, jeff@aiai.ed.ac.uk (Jeff Dalton) said:


AUTOMATIC STORAGE MANAGEMENT:	================================

>jeff> More often automatic storage management makes it easier to write
>jeff> code that is clearer and in a sense more abstract.

pcg>Mostly agreed, but clearer and more abstract code should be modularized
pcg>so that explicit storage reclamation can be slipped in easily. When this
pcg>cannot be done, one has a tradeoff between having it easier to write
pcg>code that is slightly clearer and more abstract and using a machine
pcg>which costs ten time less, and the choice is often loaded :-). 

This strikes me as very similar to the argument against virtual memory.

>jeff> A number of Lisp hackers _did_ think about garbage generation
>jeff> rates when it was appropriate to do so.

pcg>Ah yes, they did, the good ones and those with vast experience and
pcg>serious problems, but not everybody is a Dalton or Margolin, and garbage
pcg>generation rates are not even mentioned, just like locality of reference
pcg>in C/Unix books, in any Lisp/Scheme book I have seen -- please don't
pcg>post counterexamples! Every gross generalization like this has them, but
pcg>I hope you get the message, which is garbage generation rates is not
pcg>thought to be an issue worth mentioning, while for example time, but not
pcg>space, complexity is often prominently addressed (entire books devoted
pcg>to it!).

I don't ever remember a book on C discussing file sizes, but many Unix
systems run with disks ~98% full at all times.  Perhaps some topics
are considered "advanced" for programming language texts?

Are you particularly fond of "dumb" storage managers?  [I have to tell
it everything, even tell it to clean up after itself!].


EVAL: 	================================

>jeff> There are programs that can use it and they cannot be written in
>jeff> portable Scheme. [ ... ] I don't see how any of this makes it an
>jeff> implementation issue.

My recollection is that R^NRS defined LOAD.  Thus EVAL may be written
as a function which writes an expression to a file and LOADs it.
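That trick can be sketched in a few lines.  This is an illustrative
sketch, not from any posting: it assumes CALL-WITH-OUTPUT-FILE will
(re)create the file (R^NRS leaves the existing-file case unspecified),
and it captures the value by wrapping the expression in a SET! of a
known top-level variable.

```scheme
;; EVAL built from WRITE and LOAD, as described above.
(define *eval-result* #f)

(define (eval-via-load expr)
  (let ((file "/tmp/eval-tmp.scm"))
    ;; Write (set! *eval-result* <expr>) as text...
    (call-with-output-file file
      (lambda (port)
        (write (list 'set! '*eval-result* expr) port)))
    ;; ...then LOAD it, which evaluates the form at top level.
    (load file)
    *eval-result*))

;; (eval-via-load '(+ 1 2))  =>  3
```

Of course this only handles expressions whose free variables are
top-level, which is part of why it is a workaround rather than a real
EVAL.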

pcg>The lack of an explicit 'eval' gives the show away, I am afraid (just
pcg>like the consequence of having 'quasiquote' & Company as primitives
pcg>does).

pcg>Scheme cannot be used, portably, to "reason" about programs, inasmuch as it
pcg>cannot *directly* build and execute *program fragments*, like every
pcg>Lisp-like language is supposed to do. Some say this is *the*
pcg>distinguishing characteristic of Lisp-like languages, however rarely it
pcg>is used in practice.

I find it interesting that you argue for making storage management
explicit, yet seem to ignore implementation issues such as linking
together applications with minimal runtime systems [which is what EVAL
is about--keeping a full runtime because one might have to EVAL
anything].  I have never seen a Scheme system without some form of
EVAL (although it may be a compiler invocation).

As Scheme has READ, one can certainly reason well about programs
without using EVAL.  Again, however, I know of no Scheme
implementation which lacks the ability to "*directly* build and
execute *program fragments*".
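For what it's worth, here is the kind of thing meant by building
fragments directly -- a minimal sketch using only standard list
operations (the fragment and the rewrite are invented for
illustration):

```scheme
;; A program fragment is just a list; no EVAL is needed to inspect or
;; rewrite it.
(define fragment '(lambda (x) (* x x)))

(car fragment)    ; => lambda
(cadr fragment)   ; => (x)

;; Rewrite the body, turning the squaring fragment into a cubing one.
(define rewritten
  (list 'lambda (cadr fragment) '(* x x x)))
;; rewritten => (lambda (x) (* x x x))
```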


PROGRAMMING ENVIRONMENTS:	================================

pcg>Actually, let me say, I think that one of the fatal mistakes in Scheme,
pcg>like it was for Pascal, is that there is no standard way to address
pcg>separate compilation! Seriously. Think carefully about the
pcg>implications...

Now that there is a Scheme Language standard, perhaps the Scheme
community will do more work on development environment/invocation
standards.  Note that other languages have typically failed in this
area.

pcg>--
pcg>Piercarlo Grandi		 | INET: pcg@cs.aber.ac.uk

================================

Ken Dickey			kend@data.uucp

gumby@Cygnus.COM (David Vinayak Wallace) (02/28/91)

   Date: 26 Feb 91 19:49:09 GMT
   From: pcg@cs.aber.ac.uk (Piercarlo Grandi)

   [discussion of garbage generation profiles]

   Even research in these issues has become unfashionable, like research in
   locality of reference, and I can think of precious few contributions in
   either field for the past ten years or so.

Just check out the last few L&FP proceedings.  The last one was even
in Europe!  And that's just for starters...

Not to mention that several modern lisps provide stack-consing and
areas, allowing manual storage management if desired.

barmar@think.com (Barry Margolin) (02/28/91)

In article <8768@castle.ed.ac.uk> jeff@castle.ed.ac.uk (Jeff Dalton) writes:
>In article <PCG.91Feb26194909@odin.cs.aber.ac.uk> pcg@cs.aber.ac.uk (Piercarlo Grandi) writes:
>>On 25 Feb 91 15:12:05 GMT, jeff@aiai.ed.ac.uk (Jeff Dalton) said:
>
>The idea that Common Lisp is a cancerous departure from "old VW Lisp"
>is, moreover, just wrong.  The evolution from Lisp 1.5 to the main
>Lisps of the 70s, MacLisp and InterLisp, was a fairly natural one.
>Common Lisp is the unification of several successors to Maclisp.  It
>is, clearly, larger than MacLisp, but it is not larger than InterLisp.
>Moreover, many things that may seem peculiar to Common Lisp (format,
>defstruct, dotimes, etc) were already used in MacLisp.

Quite true.  The most notable difference between Maclisp and Common Lisp in
this regard is the first-class standing given to many of these packages.
In Maclisp, one had to load lots of optional libraries to get these
facilities; consequently, the Maclisp manual was much smaller than the
Common Lisp manual.  The problem was that there were sometimes more than
one version of some of the facilities (Bernie Greenberg did a DEFVAR for
Multics Emacs, but its syntax was (defvar (var1 init1 doc1) ... (varN initN
docN))).  Common Lisp incorporates most of the popular facilities, so that
users would be able to port their programs without having to copy all these
auxiliary libraries.

>>Scheme cannot be used, portably, to "reason" about programs, inasmuch as it
>>cannot *directly* build and execute *program fragments*, like every
>>Lisp-like language is supposed to do. Some say this is *the*
>>distinguishing characteristic of Lisp-like languages, however rarely it
>>is used in practice.
>
>Some say it, but they're wrong.  There isn't a single distinguishing
>feature, and EVAL is not first on my list.

I just wanted to mention that my original comment about realizing that EVAL
isn't fundamental to Lisp-like languages, was based on my previous belief
in the above statement about "reasoning" about programs.  If the lambda
calculus was designed to allow reasoning about algorithms, and Lisp was
intended to be a realization of the lambda calculus, then it seemed that
Lisp should be able to reason about itself.  Further, many AI researchers
thought Lisp was appropriate for machine learning, because learning could
be implemented by rewriting programs.

However, I don't think the original Lisp had EVAL.  Recall that Lisp was
originally a Fortran library for symbolic manipulation of data structures.
All it had was conses, symbols, and numbers.  Lambda calculus and predicate
calculus could be represented using these structures, but I suspect that
the programs that did this treated them abstractly (i.e. to write theorem
provers).  At some point one of McCarthy's students realized that an
*executable* program could be represented using the same data structures as
lambda calculus, and wrote EVAL, which allowed the interactive Lisp
interpreter to be written.  While EVAL is important to the Lisp family of
languages, it seems that symbolic manipulation is more fundamental.

--
Barry Margolin, Thinking Machines Corp.

barmar@think.com
{uunet,harvard}!think!barmar

wilson@uicbert.eecs.uic.edu (Paul Wilson) (03/01/91)

gumby@Cygnus.COM (David Vinayak Wallace) writes:


>   Date: 26 Feb 91 19:49:09 GMT
>   From: pcg@cs.aber.ac.uk (Piercarlo Grandi)

>   [discussion of garbage generation profiles]

>   Even research in these issues has become unfashionable, like research in
>   locality of reference, and I can think of precious few contributions in
>   either field for the past ten years or so.

>Just check out the last few L&FP proceedings.  The last one was even
>in Europe!  And that's just for starters...

References to relevant recent research can be found in my paper in the
March SIGPLAN Notices, "Issues and Strategies in Heap Management and
Hierarchical Memories," which is a position paper from the GC workshop
at OOPSLA/ECOOP '91.

Being a resolutely unfashionable person, I'm doing research on garbage
collection, locality of reference, and their interactions.  My next
paper, to be presented at the next SIGPLAN conference, is on improving
virtual memory performance by using better copying traversal algorithms,
to cluster data.  (Yes, it's been tried before, but I do some different
things, and it works very well...)

Some other people doing relevant research include Ben Zorn at the
U. of Colorado at Boulder, and Doug Johnson at TI, and the MUSHROOM
group at the U. of Manchester.

>Not to mention that several modern lisps provide stack-consing and
>areas, allowing manual storage management if desired.

Right.  On the other hand, the holy grail for gc folk is to avoid
having to do that sort of stuff explicitly the vast majority of the
time.  There have been great strides toward this with generational
garbage collectors -- having a bunch of short-lived data isn't nearly
as expensive as it used to be, though it's still more expensive than
stack allocation.  I suspect a bit of lifetime analysis in compilers
could help a lot too.

   -- PRW


-- 
Paul R. Wilson                         
Software Systems Laboratory               lab ph.: (312) 996-9216
U. of Illin. at C. EECS Dept. (M/C 154)   wilson@bert.eecs.uic.edu
Box 4348   Chicago,IL 60680 

kiran@copper.ucs.indiana.edu (Kiran Wagle) (03/02/91)

jaffer@gerber.ai.mit.edu (Aubrey Jaffer) writes:

>The lack of macros makes pure scheme code easily readable. 

No it doesn't.  (Who is doing the reading?)  I find a named procedure
_much_ easier to handle conceptually, and often name things just for
this reason.  Macros allow one to say things using words whose meanings
are immediately obvious (at least to the writer of the macro),
and thus are easier to reason about.

 
>To make a more radical suggestion I think that scheme might do very
>well to NOT include macros.  Introducing new syntactic constructs
>(macros) in order to avoid typing a few lambdas is bad programming
>style in that the code becomes unreadable.  

Doesn't this argument apply to all special forms and procedures?
All we need is if & lambda--should we get rid of and & or,
cond, etc.? Or are these tools that allow us to focus on the rest of
the program? I use named procedures and macros to avoid the
lambda--but also to help me conceptualize what's going on here. 
Other syntactic sugar likewise--are you willing to say that all 
code should be written at the lowest level possible? 
Why not code in binary?

Never underestimate the power of a name.

--
		--kiran_________________________kiran@copper.ucs.indiana.edu

"There may be two people in the world		Kiran Wagle 
who agree with each other on everything,	405 E. 8th St. #7
but *I* am not one of them...."			Bloomington, IN 47408-3788
		--David Friedman		(812) 331-1710
_______________________________________________________________________________

jaffer@gerber.ai.mit.edu (Aubrey Jaffer) (03/03/91)

>>Doesn't this argument apply to all special forms and procedures?
>>All we need is if & lambda--should we get rid of and & or,
>>cond, etc.? Or are these tools that allow us to focus on the rest of
>>the program? I use named procedures and macros to avoid the
>>lambda--but also to help me conceptualize what's going on here. 
>>Other syntactic sugar likewise--are you willing to say that all 
>>code should be written at the lowest level possible? 
>>Why not code in binary?

No.  As I said in the beginning of the article, the problem stems from
not being able to syntactically differentiate between procedure calls
and special forms.  Scheme has fewer than 20 special forms.  I can
remember that small number.  A requirement that macro (or special
form) symbols start with * or some other marker would make me happy.

The radical suggestion I made was prompted by the realization that
Scheme's 16 special forms seem to cover almost all the ways I write
code (control structure).  No one seems to share that observation with
me.

Someone suggested that `while' should be added to Scheme.  To my mind
`while' is not different enough from `do' to be useful:

(while <CONDITIONAL> <CODE ...>)  ==>
(do () ((not <CONDITIONAL>)) <CODE ...>)
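The translation above is mechanical enough to be a macro.  A sketch
using syntax-rules (an appendix to the report, so not guaranteed
portable):

```scheme
;; WHILE in terms of DO, exactly as in the translation above.
(define-syntax while
  (syntax-rules ()
    ((while test body ...)
     (do () ((not test)) body ...))))

;; Example: collect N down to 1 into a list, smallest first.
(define (iota-down n)
  (let ((acc '()))
    (while (> n 0)
      (set! acc (cons n acc))
      (set! n (- n 1)))
    acc))
;; (iota-down 3)  =>  (1 2 3)
```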

jinx@zurich.ai.mit.edu (Guillermo J. Rozas) (03/03/91)

In article <13645@life.ai.mit.edu> jaffer@gerber.ai.mit.edu (Aubrey Jaffer) writes:

   Someone suggested that `while' should be added to Scheme.  To my mind
   `while' is not different enough from `do' to be useful:

   (while <CONDITIONAL> <CODE ...>)  ==>
   (do () ((not <CONDITIONAL>)) <CODE ...>)

+ is not different enough from - to be useful:

(define (+ x y)
  (- x (- 0 y)))

yet you wouldn't want to do away with it, right? :-)

The language should be flexible enough to adapt to the thinking of the
programmer, not the other way around.

ok@goanna.cs.rmit.oz.au (Richard A. O'Keefe) (03/04/91)

In article <13645@life.ai.mit.edu>, jaffer@gerber.ai.mit.edu (Aubrey Jaffer) writes:
> Someone suggested that `while' should be added to Scheme.  To my mind
> `while' is not different enough from `do' to be useful:

> (while <CONDITIONAL> <CODE ...>)  ==>
> (do () ((not <CONDITIONAL>)) <CODE ...>)

This may be a matter of taste and background.  I find DO inordinately
hard to read.  The fact that the loop condition is negated is sometimes
an advantage, but not always.  I have learned to like T's ITERATE which
appears in Scheme as named-LET (let's face it, that's what a Prolog
programmer _expects_ a loop to look like), but even so WHILE is clearer
at times.

-- 
The purpose of advertising is to destroy the freedom of the market.

jeff@aiai.ed.ac.uk (Jeff Dalton) (03/04/91)

In article <JINX.91Feb26213028@chamarti.ai.mit.edu> jinx@zurich.ai.mit.edu writes:
>    Now, *you* are underestimating me a bit. The serious point I was making
>    is that a Lisp is historically a symbolic List Processing language,
>    while, except for the survival of 'cons', 'car', and 'cdr', Scheme is
>    more of something in the Algol 60 tradition, with a superficially
>    Lisp-like syntax.
>
>Well, what makes a Lisp?  Let me suggest a few requirements.
>1. List processing.
>2. Programs that look like data.
>3. Objects with unbounded extent and automatic storage management.
>4. EVAL (making use of 2).
>5. Powerful syntactic extension facilities that make use of 2.
>6. Call by value.
>7. Imperative constructs.
>8. Functional style encouraged, or at least not discouraged.
>9. Latent types.
>10. Generic arithmetic, with integers not restricted to the word size
>of the machine.
>11. Interactive development environments.
>
>You are right to claim that Scheme is not a Lisp because of its lack
>of 4 and 5, but

I'd say he is _wrong_ to claim Scheme is not a Lisp because it doesn't
have 4 and 5.  As you note later in your article:

>- There are dialects of Lisp out there that don't satisfy some of
>those requirements, yet no one thinks they are not Lisp.

The idea that we judge whether something is of a certain kind by
saying it must have _all_ properties in a list is philosophically
dubious to say the least [references omitted].  This has long been
recognized explicitly by the AI community and at least implicitly by
the Lisp community.  That's why the Algol-syntax language used with
Reduce was called RLISP, for example.  RLISP didn't have all of the
typical Lisp properties, but it did have enough of them.  [Exactly
what counts as "enough" may depend on many things.]

Remember too that some people used to (and maybe still do) write Lisp
in a syntax based on the Meta-language described in the Lisp 1.5 book.
If the Lisp 2 project had been a success, dual-notation Lisp (such as
RLISP / PSL) might have become the norm, and our idea of what Lisp is
typically like might be very different.

-- jd

john@mingus.mitre.org (John D. Burger) (03/06/91)

jaffer@gerber.ai.mit.edu (Aubrey Jaffer) writes:

>The radical suggestion I made was prompted by the realization that
>Scheme's 16 special forms seem to cover almost all the ways I write
>code (control structure).  No one seems to share that observation with
>me.

But one of the beauties of Lisp is the ability to embed your own
language in it.  Do you object to defining DEF-type macros, e.g.

  (DEFPREDICATE AUTOMOBILE
    (ISA MOBILE-OBJECT MACHINE)
    (HAS-PARTS DOOR TIRE ENGINE))

What if I define a new data structure, and then want to write new
control constructs for it, e.g. streams:

  (DO-STREAM (ELEMENT MY-STREAM)
    (PRINT ELEMENT))

Are you seriously suggesting that doing these with "primitives" is
more readable, or indeed better in any way at all?
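(For concreteness, DO-STREAM above could itself be written as a short
macro.  The stream accessors STREAM-NULL?, STREAM-CAR, and STREAM-CDR
are illustrative names, not from any particular implementation.)

```scheme
;; A sketch of DO-STREAM: bind ELEMENT to each element of the stream
;; in turn and run the body for its effects.
(define-syntax do-stream
  (syntax-rules ()
    ((do-stream (element stream) body ...)
     (let loop ((s stream))
       (if (not (stream-null? s))
           (begin
             (let ((element (stream-car s))) body ...)
             (loop (stream-cdr s))))))))
```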
--
John Burger                                               john@mitre.org

"You ever think about .signature files? I mean, do we really need them?"
  - alt.andy.rooney