[comp.lang.lisp] Virtues of Lisp syntax

jeff@aiai.ed.ac.uk (Jeff Dalton) (09/06/90)

In article <1350028@otter.hpl.hp.com> sfk@otter.hpl.hp.com (Steve Knight) writes:
>With trepidation, I'd like to try to make a couple of points about Lisp
>syntax.  

The important thing to understand about (many, perhaps most, of) the
people who like Lisp is that they like the syntax and indeed prefer it
to the more Algol-like alternatives.

They are not, in their view, sacrificing something in order to get
extensibility (or whatever).  And if they wanted a different syntax,
they would build it on top of Lisp.

So I think it's important to get the issues right.  What it comes down
to, in my opinion, is that some people don't like Lisp syntax and are
consequently willing to use more complicated mechanisms for getting
some of the same benefits.

There isn't much that can be done about this except, when necessary,
pointing out some better ways of using and understanding Lisp.  No one
has to like Lisp in the end, but at least then they'll have given it
a fair shot.

>The basic point is that the virtues claimed for Lisp syntax
>are not exclusive to Lisp. 

Actually, the full set of virtues is exclusive to Lisp in the sense
that no other language offers the same set.  (I am of course counting
Scheme as a kind of Lisp here.)

>Lawrence writes:
>> Simplicity is nice, but extensibility is the much more important
>> virtue of Lisp syntax.  

I think it's misleading to consider such things in isolation.

>It is quite easy to imagine a programming language with (say) an Algol-like
>syntax in which the parse tree is available as a standard data-structure.

And many such languages have been built on top of Lisp.

>Speaking personally, I am no fan of the Lisp or Prolog style of syntax.
>It seems to me to be an unfortunate conflation of issues -- external 
>syntax is there to make programs readable etc -- internal structure is
>there to make manipulation of programs convenient.  

My view is just the opposite.  It's fortunate that a readable
external syntax can correspond so closely to a flexible, easily
manipulated data structure.

>                                                    I think that Prolog
>makes the better job of it, having the advantage of being relational so
>that quoting rules aren't required, and providing a good infix/prefix/
>postfix operator system.

It's not because Prolog is relational that quoting rules aren't
required.  And instead of quoting, Prolog has case conventions.
Variables begin in upper case, symbolic constants ("atoms") don't.
And to have something that begins in upper case treated as an
atom you have to ... wait for it ... put (single) quotes around
it.  (Yes, I know we can argue about whether these quotes are
like the Lisp quote or like the |...| syntax.)

BTW, the Lisp quoting rules needn't be any more confusing than
the quoting rules of, say, Basic.

>I agree with Lawrence in
>thinking that many folks pre-judge on the basis of ignorance, it would be 
>odd if it wasn't true!  However, the issue of Lisp's syntax is largely
>independent of programming environment -- you don't need Lisp's syntax
>to do sensible things with programs.  

The syntax issue is not independent of programming environment.
But this is not a claim that you need Lisp in order to do "sensible
things" -- it's possible to do sensible things with a variety of
languages -- but rather the observation that Lisp *requires* a certain
kind of programming environment if it's to be used effectively.  
(In particular, indentation is the key to making Lisp readable,
and an editor that does much of it for you is a big help.)

When students, for example, try to write Lisp programs without an
editor with a good Lisp mode, it's not surprising that they find it
difficult.  And since they don't know what it would be like to use a
better editor, it's not surprising that they think Lisp is to blame.

On the other hand, Lisp *does* make it easier than other languages to
do certain sensible things.

>I am inclined to think that the best interpretation of Lawrence's
>observation is that expensive Lisp equipment is of much more interest to
>folks who are prepared to tolerate the problems with Lisp.  

That's true, of course, if you think of Lisp programmers as
tolerating problems.

>And I would count the syntax as, overall, a problem because the
>benefits could be achieved without the cost.

Let me try to be clear at the risk of repetition.  Many Lisp
programmers do not regard it as a cost; they regard the syntax
as a benefit.  Many, and not just because they're "used
to" Lisp, find Lisp *easier* to read than other languages that
have a more complicated syntax.  Moreover, the combination of
benefits available in Lisp cannot be achieved (without giving
up something) in other languages.

>> Indeed, the "haters" usually
>> have never used any Lisp system beyond the 1962-vintage Lisp 1.5 that
>> most of us oldsters were introduced to in undergraduate school.
>
>That doesn't ring a bell with me.  My (UK based) experience is that people's
>exposure is to Lisps such as Lucid.  This doesn't really sweeten
>the experience, to be fair, since so many commercial lisp systems have
>intense memory requirements -- which ends up spoiling the experience.

This has been my experience as well.  Scheme is much better, I think,
at creating a good initial impression.

>Another factor in the development of attitudes hostile to Lisp, in my view,
>is the bias of university teaching.  (I'm only talking about the UK here.)

I agree, and it can be true in the US as well.

Jeff Dalton,                      JANET: J.Dalton@uk.ac.ed             
AI Applications Institute,        ARPA:  J.Dalton%uk.ac.ed@nsfnet-relay.ac.uk
Edinburgh University.             UUCP:  ...!ukc!ed.ac.uk!J.Dalton

andy@Theory.Stanford.EDU (Andy Freeman) (09/06/90)

In article <3368@skye.ed.ac.uk> jeff@aiai.UUCP (Jeff Dalton) writes:
>In article <1350028@otter.hpl.hp.com> sfk@otter.hpl.hp.com (Steve Knight) writes:
>                                                    I think that Prolog
>makes the better job of it, having the advantage of being relational so
>that quoting rules aren't required, and providing a good infix/prefix/
>postfix operator system.

Prolog most definitely has quoting rules; case matters in obscure ways
and constants that begin with the wrong case have to be quoted.  The result
isn't simpler.

As to the advantages of infix/postfix/prefix syntax, I note that
operator arity is not limited to 1 and 2.  Again, prolog uses two
kinds of syntax to handle a case that is handled by one syntax in
lisp.  (I'm referring to the opr(i,j,k,l,m) vs a <- b , c business.)
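
For instance, the one Lisp notation covers every arity; a couple of
ordinary examples:

    (+ 1 2 3 4 5)       ; => 15, n-ary +
    (max 3 1 4 1 5)     ; => 5, same shape whatever the operator
    (list)              ; => NIL, zero arguments work too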

-andy
--
UUCP:    {arpa gateways, sun, decwrl, uunet, rutgers}!neon.stanford.edu!andy
ARPA:    andy@neon.stanford.edu
BELLNET: (415) 723-3088

lou@cs.rutgers.edu (lou) (09/06/90)

In article <1350028@otter.hpl.hp.com> sfk@otter.hpl.hp.com (Steve Knight) writes:

  >Speaking personally, I am no fan of the Lisp or Prolog style of syntax.
  >It seems to me to be an unfortunate conflation of issues -- external 
  >syntax is there to make programs readable etc -- internal structure is
  >there to make manipulation of programs convenient.  

The problem with this distinction between internal syntax (only
programs see it) and external syntax (for people) is that in an
interactive programming environment you need to be able to take
something in internal syntax and display it to the user (e.g. as part
of a debugger).  But if this display is in internal syntax, then there
is little point in having the external syntax - the cost of dealing
with two syntaxes generally (in my experience and I think in that of
others) is very high, and outweighs any advantage of a nice external
syntax.

Thus, if internal and external syntaxes are different, the debugger has
to be able to display things in external syntax.  I.e., you have to
back-translate from internal to external or else you have to somehow
access the original external syntax from which a given piece of
internal syntax was produced.  These are both possible, and I know of
systems that do them, but either approach is more complex and more
expensive than simply using a common internal/external syntax.  So the
question is, is the advantage of having separate syntaxes worth the
cost?
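
Lisp's answer, of course, is to make the two syntaxes coincide: the
structure the reader builds is the structure the printer shows.  A
trivial Common Lisp illustration:

    (read-from-string "(defun square (x) (* x x))")
    ;; => (DEFUN SQUARE (X) (* X X)) -- a plain list that a debugger
    ;; can hand to PRIN1 and display essentially as the user wrote it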

I think most people who use lisp would agree with me that, if you have
a good programming environment, including an editor that does
indentation for you, lisp syntax is just as humanly-usable as Pascal's,
so the advantage of a separate syntax is 0, and thus not worth any
cost at all.  The one exception to this is in arithmetic expressions,
where I find that the lispish form is less readable than the Pascal
form.  (I know that much of this is a matter of taste and experience,
and I do not claim that everyone should agree that lisp syntax is as
usable as Pascal.)

--
					Lou Steinberg

uucp:   {pretty much any major site}!rutgers!aramis.rutgers.edu!lou 
arpa:   lou@cs.rutgers.edu

sfk@otter.hpl.hp.com (Steve Knight) (09/06/90)

Andy (& Jeff too) point out that:
>Prolog most definitely has quoting rules; case matters in obscure ways
>and constants that begin with the wrong case have to be quoted.  The result
>isn't simpler.

OK - you can see the distinction between variables (begin with upper case)
and atoms (anything else) as a quoting scheme.  However, it certainly has
none of the complexity of Lisp's quoting schemes -- there's nothing that
corresponds to backquoting.  It hardly deserves the adjective 'obscure'.

As evidence of this, I've never seen a failure-to-quote error committed in
Prolog, though I've seen it many times in Lisp.  Is this just a UK thing?
Perhaps teaching methods in the US mean that students rarely or never 
make those errors?  I know it is a big problem for Lisp acceptance for
students in the UK.

>As to the advantages of infix/postfix/prefix syntax, I note that
>operator arity is not limited to 1 and 2.  Again, prolog uses two
>kinds of syntax to handle a case that is handled by one syntax in
>lisp.  (I'm referring to the opr(i,j,k,l,m) vs a <- b , c business.)

This point eludes me.  Prolog has a limited way of extending its own
syntax, it is true.  I was simply stating my view that it is able to
create a more satisfactory syntax even with those limits.

Obviously, the attractiveness of syntax is in the eye of the beholder.
I have to accept that folks like Jeff are sincere when they argue that the
syntax of Lisp is very attractive for them.  (I am even inclined to agree
when the alternative is C++.)  

My argument, which wasn't contradicted, I think, was only that you could
have the same benefits of Lisp syntax without the exact same syntax.

Steve

freeman@argosy.UUCP (Jay R. Freeman) (09/07/90)

    I have forgotten the details, but there used to be a non-Lispy
front-end parser for the once highly-popular "Portable Standard Lisp"
dialect.  I believe it was called "RLisp", and accepted source in
a quite conventional Algol-like syntax.  I myself did not prefer it to
conventional Lisp syntax (and though at the time I was much more familiar
with conventional Lisp than with any language with Algol-like syntax, I
nevertheless became well acquainted with this parser, since my assignment
at the time was to convert it into something else).

    Perhaps it would be interesting to find out if there are any other
users or former users of PSL on the net, and what if anything they
thought of RLisp.  I had the feeling that it had not caught on and did
not look as if it were going to.

                                                -- Jay Freeman

	  <canonical disclaimer -- I speak only for myself>

rpk@rice-chex.ai.mit.edu (Robert Krajewski) (09/07/90)

In article <LOU.90Sep6100024@atanasoff.rutgers.edu> lou@cs.rutgers.edu writes:
>...But if this display is in internal syntax, then there
>is little point in having the external syntax - the cost of dealing
>with two syntaxes generally (in my experience and I think in that of
>others) is very high, and outweighs any advantage of a nice external
>syntax.

This is exactly why nobody uses the ``original'' McCarthy Lisp syntax
anymore.  People threw it out 25 years ago...
Robert P. Krajewski
Internet: rpk@ai.mit.edu ; Lotus: robert_krajewski.lotus@crd.dnet.lotus.com

bill@ibmpcug.co.uk (Bill Birch) (09/07/90)

As beauty is in the eye of the beholder, a discussion about the merits of 
LISP syntax shows the different views of the language. I guess my view 
of the LISP syntax could take some people by surprise.  As far as I am
concerned LISP is a software development tool.  It allows me to express
solutions to problems in small languages of my own creation. The interpreters
and compilers for these languages are LISP programs.

For this use, the syntax of LISP is ideal since it places no restrictions 
on the format of my own languages. For example, I have implemented an 
assembler for some very nasty state-engines (usually coded in hex by 
"binary aboriginal" contractors).  Just browsing through the average
introduction to LISP text will show many examples of mini-languages
implemented with the humble s-expression.
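
As a toy illustration of what I mean (the state-machine notation here
is invented for the example, along with a five-line interpreter for it):

    (defparameter *machine*
      '((start  (on a go-to seen-a))
        (seen-a (on b go-to done))
        (done)))

    (defun run (machine input)
      (let ((state (caar machine)))
        (dolist (item input (eq state 'done))
          (let ((rule (find item (cdr (assoc state machine))
                            :key #'second)))
            (when rule (setq state (fourth rule)))))))

    (run *machine* '(a b))   ; => T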

This is what LISP is all about in my view. LISP is a powerful tool,
the Play-Dough (Plasticine) of programming languages.

Bill


-- 
Automatic Disclaimer:
The views expressed above are those of the author alone and may not
represent the views of the IBM PC User Group.
-- 

ericco@stew.ssl.berkeley.edu (Eric C. Olson) (09/08/90)

People often confuse the printed representation of Lisp with its
actual representation.  That is, Lisp functions are lists with the
symbol lambda as their car.  It happens that common Lisp readers use
a fairly direct representation of Lisp functions, namely the body of a
defun form.  However, one could imagine a defun-algol function
that translates Algol-like syntax into Lisp functions.  Additionally,
one could have a print-algol function that would coerce Lisp functions
into Algol-like printed representations.
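
For example (a sketch in plain Common Lisp; F is just an illustrative
variable):

    (setq f '(lambda (x) (* x x)))   ; a function as a plain list
    (funcall (eval f) 5)             ; => 25
    (setf (third f) '(+ x x))        ; rewrite its body as list data
    (funcall (eval f) 5)             ; => 10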

I think of Lisp as an algorithmic assembler.  It allows the programmer
to deal with the issues of a problem without getting involved in the
details of a specific machine.  Oftentimes, it's useful to implement
a new language on top of Lisp to provide data abstraction.  A rule-based
expert system comes to mind.

It's easy to translate into Lisp from a language using another syntax.
The reverse is not true, especially with the advent of aggressive
automatic translators, which can, for example, prove whether or not
a binding can be allocated on a stack instead of a heap (in a Lisp-to-C
translator).

The only other languages that provide this level of
functionality are the stack-oriented ones.  I find them more
difficult to understand -- perhaps because they don't have all the
development tools that other languages have.

IMHO,
Eric

Eric
ericco@ssl.berkeley.edu

Chewy@cup.portal.com (Paul Frederick Snively) (09/09/90)

I believe Jay is referring to MLisp.

MLisp is indeed an Algol-like syntax for Lisp.  The version that I have is
written in PLisp (Pattern-matching Lisp), which in turn is written in Common
Lisp.

Paul Snively
Macintosh Developer Technical Support
Apple Computer, Inc.
Hanging out at Chewy@cup.portal.com

expc66@castle.ed.ac.uk (Ulf Dahlen) (09/09/90)

In article <10466@life.ai.mit.edu> rpk@rice-chex.ai.mit.edu (Robert Krajewski) writes:
>This is exactly why nobody uses the ``original'' McCarthy Lisp syntax
>anymore.  People threw it out 25 years ago...

What did the ``original'' McCarthy Lisp syntax look like?


--Ulf Dahlen
Linkoping University, Sweden   and   Edinburgh University, Scotland
Internet: uda@ida.liu.se

Reuben_Bert_Mayo@cup.portal.com (09/10/90)

Jay is referring to RLisp.  MLisp is yet another effort along the same lines.
RLisp used to be distributed with the REDUCE computer algebra package.  
(Maybe it still is; I haven't used Reduce lately.)  Portable Standard Lisp was 
originally developed to support portability of Reduce.  RLisp was covered
in a chapter in the book by Organick, Forsythe, and Plummer, _Programming
Language Structures_, Academic Press (1978).

Personally I didn't find RLisp attractive.  A few control statements such as
IF..THEN were in Algol syntax, but the guts of the work still had to be done 
with Lisp 1.5-like functions, e.g. cons(car(a), b), so you felt like you 
were switching between two different languages within the same expression!


Scheme, now, _feels_ like Algol-60 (the world's sweetest version of Fortran),
and I'd say that feel is more important than look.
                       -- Bert

kessler%cons.utah.edu@cs.utah.edu (Robert R. Kessler) (09/10/90)

RLISP was an Algol-like syntax that was translated into Portable
Standard Lisp.  To those of us who implemented and subsequently used
it, there were many mixed views.  Some people felt that they liked it
because they didn't like the ``ugly'' Lisp syntax.  They still had
access to all of the Lisp operations, but didn't have to put up with
the parens (remember that when we were writing RLISP, we didn't have fancy
text editors that did paren bouncing, auto-indentation, etc -- try
writing Lisp code without the editor features, it really is much more
difficult).  The others among us felt that RLISP just got in the way,
so we used PSL.  RLISP has currently diverged somewhat.  PSL is still
being distributed (by us and others) and supports a flavor of RLISP.
That version is still in use by the Alpha_1 group here at Utah, which
has a solid modelling package with a mode in which users can
define models in RLISP.  The REDUCE algebra system (which is also
still being distributed) has a slightly different version for
supporting computer algebra (in that case, RLISP works well -- the
most common users of REDUCE are non-computer scientists who find
things like infix operators a requirement).  Finally, there is
something called RLISP-88 from Rand, which has extended RLISP with
concurrency operations, an object system, and other neat features.

B.

soder@nmpcad.se (Hakan Soderstrom) (09/11/90)

The syntax of Lisp is about the same as the syntax of
Assembler: it doesn't exactly stop you from doing what you
want, but it doesn't help either. Almost all kinds of errors
appear as run time errors.

Jeff Dalton writes,

>My view is just the opposite.  It's fortunate that a readable
>external syntax can correspond so closely to a flexible, easily
>manipulated data structure.

Yes, this is the crux of the matter. It also means that the
syntax is a compromise between machine readability and human
readability. Because it was designed in the 60's, there is a
bias towards machine readability. You help the compiler
build its data structure.

Goodness. I promised never to enter a syntax argument again
... it is one of those sure-to-flame topics. But it is fun!
And where would we be without Lisp?

	- Hakan

--
----------------------------------------------------
Hakan Soderstrom             Phone: +46 (8) 752 1138
NMP-CAD                      Fax:   +46 (8) 750 8056
P.O. Box 1193                E-mail: soder@nmpcad.se
S-164 22 Kista, Sweden

ned@pebbles.cad.mcc.com (Ned Nowotny) (09/11/90)

In article <1990Sep10.091911.20877@hellgate.utah.edu> kessler%cons.utah.edu@cs.utah.edu (Robert R. Kessler) writes:
=>RLISP was an Algol-like syntax that was translated into Portable
=>Standard Lisp.  To those of us who implemented and subsequently used
=>it, there were many mixed views.  Some people felt that they liked it
=>because they didn't like the ``ugly'' Lisp syntax.  They still had
=>access to all of the Lisp operations, but didn't have to put up with
=>the parens (remember that when we were writing RLISP, we didn't have fancy
=>text editors that did paren bouncing, auto-indentation, etc -- try
=>writing Lisp code without the editor features, it really is much more
=>difficult).  The others among us felt that RLISP just got in the way,
=>so we used PSL.  RLISP has currently diverged somewhat.  PSL is still
=>being distributed (by us and others) and supports a flavor of RLISP.
=>That version is still in use by the Alpha_1 group here at Utah, which
=>has a solid modelling package with a mode in which users can
=>define models in RLISP.  The REDUCE algebra system (which is also
=>still being distributed) has a slightly different version for
=>supporting computer algebra (in that case, RLISP works well -- the
=>most common users of REDUCE are non-computer scientists who find
=>things like infix operators a requirement).

In so far as extension languages are concerned, this is the most
important argument against unsugared Lisp syntax.  Most people
learned mathematics with infix operators and most people are more
accustomed to communicating in a written form where keywords and
separators are the typical delimiters, obviating the need for
parenthesis or bracket matching.  In fact, most users are not
persuaded by arguments that Lisp syntax is "elegant" or "easy
to learn."  They are far more likely to believe that the programmer
was too lazy to build a simple parser and therefore decided, because
of the obvious intrinsic value of the product, that the user should
be willing to be the parser for an otherwise unfamiliar notation.
This attitude, at best, is not customer-oriented and, in any case,
is unproductive.  Parsing technology is well developed.  Extension
languages can fairly easily accommodate an ALGOL-like syntax while
still providing all the semantics of Lisp (or Scheme, for that
matter.)

=>Finally, there is
=>something called RLISP-88 from Rand, which has extended RLISP with
=>concurrency operations, an object system, and other neat features.
=>
=>B.


Ned Nowotny, MCC CAD Program, Box 200195, Austin, TX  78720  Ph: (512) 338-3715
ARPA: ned@mcc.com                   UUCP: ...!cs.utexas.edu!milano!cadillac!ned
-------------------------------------------------------------------------------
"We have ways to make you scream." - Intel advertisement in the June 1989 DDJ.

andy@Neon.Stanford.EDU (Andy Freeman) (09/12/90)

In article <11048@cadillac.CAD.MCC.COM> ned%cad@MCC.COM (Ned Nowotny) writes:
>In so far as extension languages are concerned, this is the most
>important argument against unsugared Lisp syntax.  Most people
>learned mathematics with infix operators and most people are more
>accustomed to communicating in a written form where keywords and
>separators are the typical delimiters, obviating the need for
>parenthesis or bracket matching.  In fact, most users are not
>persuaded by arguments that Lisp syntax is "elegant" or "easy
>to learn."  They are far more likely to believe that the programmer
>was too lazy to build a simple parser and therefore decided, because
>of the obvious intrinsic value of the product, that the user should
>be willing to be the parser for an otherwise unfamiliar notation.
>This attitude, at best, is not customer-oriented and, in any case,
>is unproductive.  Parsing technology is well developed.  Extension
>languages can fairly easily accommodate an ALGOL-like syntax while
>still providing all the semantics of Lisp (or Scheme, for that
>matter.)

This makes a couple of assumptions that are unlikely to be true.

1)  We're not doing +,-,*,/ arithmetic, we're programming.  (BTW - "+"
    isn't really a binary operator, neither is "*"; there are surprisingly
    few true binary, or unary, operations.)
2)  One consequence is that binary and unary operators are the exception;
    in fact, operators with arbitrary arity are common, or at least would
    be if "modern" languages were as upto date as lisp.  That being
    the case, infix notation doesn't work and prefix notation requires
    delimiters, which brings us back to lisp-like syntaxes.

As to the development of parsing technology, the state-of-the-art
syntax for n-ary operators, user-defined or system defined, is:
    op(<operands, possibly separated by commas>)

I don't see that that is a big improvement over lisp syntax.

-andy
-- 
UUCP:    {arpa gateways, sun, decwrl, uunet, rutgers}!neon.stanford.edu!andy
ARPA:    andy@neon.stanford.edu
BELLNET: (415) 723-3088

zmacx07@doc.ic.ac.uk (Simon E Spero) (09/12/90)

    One thing that a lot of people seem to be ignoring is the way that all 
modern lisps make it so easy to augment lisp's syntax to suit the job at
hand. When it comes to building complex mathematical expressions, prefix 
notation is absolutely hopeless. 

    When you can add a simple operator-precedence parser and attach it to a 
macro in a few minutes, there is no need to bother writing it out long-hand. 
Surely the main reason lisp has survived so long is its ability to take on 
and mimic the special features of newer languages that evolve around it.
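
Something along those lines -- a minimal sketch, with an invented
operator table, of the parser-behind-a-macro trick:

    (defparameter *ops* '((+ . 1) (- . 1) (* . 2) (/ . 2)))

    (defun parse-infix (tokens min-prec)
      ;; Precedence climbing; returns the parse tree and leftover tokens.
      (let ((lhs (pop tokens)))
        (when (consp lhs)                 ; parenthesized subexpression
          (setq lhs (parse-infix lhs 0)))
        (loop
          (let ((prec (cdr (assoc (first tokens) *ops*))))
            (when (or (null prec) (< prec min-prec))
              (return (values lhs tokens)))
            (multiple-value-bind (rhs rest)
                (parse-infix (rest tokens) (1+ prec))
              (setq lhs (list (first tokens) lhs rhs)
                    tokens rest))))))

    (defmacro infix (&rest tokens)
      (values (parse-infix tokens 0)))

    (infix 1 + 2 * 3)     ; => 7
    (infix (1 + 2) * 3)   ; => 9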

Simon
--
zmacx07@uk.ac.ic.doc | sispero@cix.co.uk |    ..!mcsun!ukc!slxsys!cix!sispero
------------------------------------------------------------------------------
The Poll Tax.    | Saddam Hussein runs Lotus 123 on | DoC,IC,London SW7 2BZ
I'm Not. Are you?| Apple Macs.|   I love the smell of Sarin in the morning

jeff@aiai.ed.ac.uk (Jeff Dalton) (09/13/90)

In article <SODER.90Sep11141859@basm.nmpcad.se> soder@nmpcad.se (Hakan Soderstrom) writes:
>The syntax of Lisp is about the same as the syntax of
>Assembler: it doesn't exactly stop you from doing what you
>want, but it doesn't help either.  Almost all kinds of errors
>appear as run time errors.

Actually, Lisp syntax does help many people to do what they want.
It's certainly much more helpful than assembler.  Maybe it doesn't
help *you* to do what you want, but so what?  No one ever claimed
Lisp was the answer to all problems.

Of course people who think run-time checking is the worst of all
possible sins won't like Lisp.  Those people would do well to use
another language instead.  ML is a good choice if they want most
of the type work done for them.

>Jeff Dalton writes,
>
>>My view is just the opposite.  It's fortunate that a readable
>>external syntax can correspond so closely to a flexible, easily
>>manipulated data structure.
>
>Yes, this is the crux of the matter. It also means that the
>syntax is a compromise between machine readability and human
>readability. 

That's exactly what it doesn't mean.  In order to be a compromise it
would have to be worse for humans (as compared to other programming
languages -- because every programming language makes such compromises
to some extent) in order to be better for machines.  

But, as I pointed out before, (many) Lisp programmers don't regard it
as worse for humans: they prefer it to the more Algol-like syntaxes.
Critics of Lisp's syntax consistently ignore this point and suppose
that the syntax must be a cost rather than a benefit.

Of course, some people who dislike Lisp syntax may also happen to
think the syntax is good for machines.  But it's only because they
prefer the syntax of other programming languages that they see Lisp as
making a greater compromise.  And different preferences are just what
we expect on questions of syntax.  Different people prefer different
things.  It might be nice if everyone preferred the same syntax, but
it isn't so.

In any case, the idea that Lisp is more of a compromise than other
languages seems rather bizarre.  It may seem plausible (to some) if
we restrict ourselves to syntax.  But Lisp is notorious for being
unsuited to "conventional" machines.  (Before anyone flames me,
let me point out that I think Lisp can be implemented effectively
on conventional machines.  Nonetheless, it has a reputation that
is not entirely unjustified.)

>Because it was designed in the 60's, there is a
>bias towards machine readability. You help the compiler
>build its data structure.

There might be something to this if other languages designed at about
the same time, such as FORTRAN and Algol 60, showed the same
"bias" -- but they don't.

In any case, you're confusing the origin of Lisp syntax with
the question of whether it really is readable.

-- Jeff

jeff@aiai.ed.ac.uk (Jeff Dalton) (09/13/90)

In article <1350030@otter.hpl.hp.com> sfk@otter.hpl.hp.com (Steve Knight) writes:
>Andy (& Jeff too) point out that:
>>Prolog most definitely has quoting rules; case matters in obscure ways
>>and constants that begin with the wrong case have to be quoted.  The result
>>isn't simpler.
>
>OK - you can see the distinction between variables (begin with upper case)
>and atoms (anything else) as a quoting scheme.  However, it certainly has
>none of the complexity of Lisp's quoting schemes -- there's nothing that
>corresponds to backquoting.  It hardly deserves the adjective 'obscure'.

Backquote is, in my opinion, a separate issue.  Let me put it this
way: the rules for the use of QUOTE in Lisp (and for its abbreviation
the single quote) are neither obscure nor difficult to understand.
As I said in my previous message, the quoting rules for Lisp needn't
be any more confusing than those of, say, Basic.

As far as Prolog is concerned, the original claim was that Prolog
didn't have quoting rules.  I don't really want to quarrel about
whether Prolog is better or worse than Lisp in this respect or in
general.

>As evidence of this, I've never seen a failure-to-quote error committed in
>Prolog, though I've seen it many times in Lisp.  Is this just a UK thing?
>Perhaps teaching methods in the US mean that students rarely or never 
>make those errors?  I know it is a big problem for Lisp acceptance for
>students in the UK.

In my experience, in both the US and the UK, it is true that students
often make quoting mistakes in Lisp and that they often find the
quoting rules confusing.  I think there are a number of possible
contributing factors.

A common mistake in presentation, I think, is to make non-evaluation
seem too much of a special case.  For example, some people see SETQ as
confusing because it evaluates one "argument" and not another and so
they prefer to present SET first.  This doesn't work so well in Scheme
or even in Common Lisp, but it sort of worked for older Lisps.  
Unfortunately, it makes SETQ seem like a special case when it's
actually a pretty standard assignment:

          Lisp                                Basic

      (setq a 'apples)                let a$ = "apples"
      (setq b a)                      let b$ = a$
   vs (setq b 'a)                  vs let b$ = "a$"

Indeed, the whole terminology in which SETQ, COND, etc. are presented
as funny kinds of "functions" and where functions are described as
"evaluating an argument" (or not) may be a mistake.

>Obviously, the attractiveness of syntax is in the eye of the beholder.
>I have to accept that folks like Jeff are sincere when they argue that the
>syntax of Lisp is very attractive for them.  (I am even inclined to agree
>when the alternative is C++.)  

I would agree, provided we don't take this "eye of the beholder"
stuff too far.  It's true that different people will prefer different
syntaxes and that we can't say they're wrong to do so.  However, we
shouldn't go on to conclude that all views on the virtues or otherwise
of a syntax are equally valid.  Sometimes we can say, for example,
that someone hasn't learned good techniques for reading and writing
code in a particular syntax and that's why they find it so hard
to read and write.

>My argument, which wasn't contradicted, I think, was only that you could
>have the same benefits of Lisp syntax without the exact same syntax.

Actually, I did disagree with you on this point.  Perhaps I should
have said so more explicitly.  I don't think you can get the same
benefits.  You can get *some* of the benefits, but by sacrificing some
of the others.  Lisp occupies something like a local maximum in
"benefit space".

-- Jeff

pcg@cs.aber.ac.uk (Piercarlo Grandi) (09/13/90)

On 12 Sep 90 02:12:38 GMT, andy@Neon.Stanford.EDU (Andy Freeman) said:

andy> In article <11048@cadillac.CAD.MCC.COM> ned%cad@MCC.COM (Ned
andy> Nowotny) writes:

ned> In so far as extension languages are concerned, this is the most
ned> important argument against unsugared Lisp syntax.  Most people
ned> learned mathematics with infix operators and most people are more
ned> accustomed to communicating in a written form where keywords and
ned> separators are the typical delimiters, obviating the need for
ned> parenthesis or bracket matching.

I would also like to observe that while providing users with a familiar,
mathematics-like syntax may help sales, it is actually extremely
misleading, because even if they look like mathematical expressions, the
semantics of expressions in programs are only weakly related to those of
the mathematical expressions they resemble, especially for
floating point, or unsigned in C (which actually uses modular arithmetic).

andy> This makes a couple of assumptions that are unlikely to be true.

andy> 1)  We're not doing +,-,*,/ arithmetic, we're programming.  (BTW - "+"
andy>     isn't really a binary operator, neither is "*"; there are
andy>     surprisingly few true binary, or unary, operations.)

Precisely. Agreed. Even the semantics are different.

andy> 2)  One consequence is that binary and unary operators are the exception;
andy>     in fact, operators with arbitrary arity are common, or at least would
andy>     be if "modern" languages were as up to date as lisp.  That being
andy>     the case, infix notation doesn't work and prefix notation requires
andy>     delimiters, which brings us back to lisp-like syntaxes.

The real challenge here is that we want some syntax that says, apply
this operator symbol to these arguments and return these value_s_. Even
lisp syntax does not really allow us to easily produce multiple values.

So either we say that, after all, all functions take just one argument and
return one result (and they may be both structured), which may be an
appealing solution, or we are stuck; mainly because our underlying
mathematical habits do not cope well with program technology (and I am
not happy with those that would like, like the functionalists, to reduce
programming to what is compatible with *their* notion of maths).

andy> As to the development of parsing technology, the state-of-the-art
andy> syntax for n-ary operators, user-defined or system defined, is:
andy>     op(<operands, possibly separated by commas>)

andy> I don't see that that is a big improvement over lisp syntax.

Actually, there is state-of-the-art technology for multiple arguments to
multiple results, and it is *Forth* of all things, or maybe POP-2. A lot
of the power of Forth and Pop-2 indeed comes from their functions being
able to map the top N arguments on the stack to a new top of M results. Maybe
postfix is not that bad after all. Something like

	10 3 / ( 10 and 3 replaced with 1 and 3 ) quot pop rem pop

or with some sugaring like Pop-11.

Another alternative is that used in languages like CDL or ALEPH, based
on the affix/two-level grammar style: junk function notation entirely, and
just use imperative prefix syntax. Something like

	divide + 10 + 3 - quot - rem.

I have seen substantial programs written like this, and this notation
actually is not as verbose as it looks, and is remarkably clear. For
example, Smalltalk syntax is essentially equivalent to this, as in:

	10 dividedBy: 3 quotient: quot remainder: rem!

There are many interesting alternatives...
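
(For what it's worth, Common Lisp's multiple-value machinery already
covers the quotient/remainder case, though the syntax is hardly
lightweight:

    (floor 10 3)                    ; => 3 and 1, two values
    (multiple-value-bind (quot rem) (floor 10 3)
      (list quot rem))              ; => (3 1)

so the underlying capability is there, whatever one thinks of the
notation.)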
--
Piercarlo "Peter" Grandi           | ARPA: pcg%uk.ac.aber.cs@nsfnet-relay.ac.uk
Dept of CS, UCW Aberystwyth        | UUCP: ...!mcsun!ukc!aber-cs!pcg
Penglais, Aberystwyth SY23 3BZ, UK | INET: pcg@cs.aber.ac.uk

oz@yunexus.yorku.ca (Ozan Yigit) (09/13/90)

[ongoing discussion regarding lisp syntax]

In article <3408@skye.ed.ac.uk> jeff@aiai.UUCP (Jeff Dalton) writes:

>I would agree, provided we don't take this "eye of the beholder"
>stuff too far.

Why stop now, after we have come this far ?? ;-)

>... It's true that different people will prefer different
>syntaxes and that we can't say they're wrong to do so.  However, we
>shouldn't go on to conclude that all views on the virtues or otherwise
>of a syntax are equally valid. 

It follows therefore that one should try to substantiate some or all of the
claims regarding the effectiveness and benefits of a syntax, such as that
of lisp, instead of just presenting opinions. I have seen studies on
"natural artifical languages" (Gary Perlman's term for programming/command
languages), effects of punctuation in programming, etc., but I don't recall
seeing a study that has a definitive word on overwhelming benefits of one
syntax over another. If you know of any that substantiate various claims
[so far made] about lisp syntax, I would be very interested in it.

[I am curious: Does anyone know why Papert chose lisp-like semantics but
not the syntax for Logo?]

[and regarding the benefits of syntax]

>You can get *some* of the benefits, but by sacrificing some
>of the others.  Lisp occupies something like a local maximum in
>"benefit space".

Really. QED is it?

oz
---
The king: If there's no meaning             Usenet:    oz@nexus.yorku.ca
in it, that saves a world of trouble        ......!uunet!utai!yunexus!oz
you know, as we needn't try to find any.    Bitnet: oz@[yulibra|yuyetti]
Lewis Carroll (Alice in Wonderland)         Phonet: +1 416 7365257x33976

jeff@aiai.ed.ac.uk (Jeff Dalton) (09/14/90)

In article <11048@cadillac.CAD.MCC.COM> ned%cad@MCC.COM (Ned Nowotny) writes:

>In so far as extension languages are concerned, this is the most
>important argument against unsugared Lisp syntax.  Most people
>learned mathematics with infix operators and most people are more
>accustomed to communicating in a written form where keywords and
>separators are the typical delimiters, obviating the need for
>parenthesis or bracket matching.

Um, when's the last time *you* wrote expressions in an infix
language?  Parentheses and bracket-matching are definitely
involved.  That is, the difference is, to some extent, a
matter of degree.

Of course, you're right that there are more user-friendly syntaxes than
that provided by Lisp, at least if the users are not already familiar
with Lisp.  However, (1) the implementation of a Lisp-based extension
language tends to be simpler and smaller, (2) the result is a proper
programming language rather than something more restricted, (3) Lisp
is at least as "friendly" as some of the alternatives such as Post-
Script, (4) experience with a number of implementations of Emacs (eg,
Multics Emacs, GNU Emacs) -- and of other things -- has shown that
users, even "non-programmers", can use Lisp effectively as an
extension language and even find such use pleasant.

>                                  In fact, most users are not
>persuaded by arguments that Lisp syntax is "elegant" or "easy
>to learn."  They are far more likely to believe that the programmer
>was too lazy to build a simple parser and therefore decided, because
>of the obvious intrinsic value of the product, that the user should
>be willing to be the parser for an otherwise unfamiliar notation.

I also think you overestimate the extent to which users will be
comfortable with mathematics and the rigidities imposed by programming
languages in general.  That is, many users will feel they are parsing
an unfamiliar notation regardless.

>This attitude, at best, is not customer-oriented and, in any case,
>is unproductive.  Parsing technology is well developed.  Extension
>languages can fairly easily accommodate an ALGOL-like syntax while
>still providing all the semantics of Lisp (or Scheme, for that
>matter.)

True.

-- JD

ok@goanna.cs.rmit.oz.au (Richard A. O'Keefe) (09/14/90)

In article <PCG.90Sep13135243@odin.cs.aber.ac.uk>, pcg@cs.aber.ac.uk (Piercarlo Grandi) writes:
> On 12 Sep 90 02:12:38 GMT, andy@Neon.Stanford.EDU (Andy Freeman) said:
> andy> 1)  We're not doing +,-,*,/ arithmetic, we're programming.  (BTW - "+"
> andy>     isn't really a binary operator, neither is "*"; there are
> andy>     surprisingly few true binary, or unary, operations.)

> Precisely. Agreed. Even the semantics are different.

I missed this the first time it came around.
I have some bad news for the two of you:  in floating-point arithmetic
"+" _is_ a binary operation.  Floating-point "+" and "*" are not
associative.  If one Lisp compiler turns (+ X Y Z) into
(plus (plus X Y) Z) and another turns it into (plus X (plus Y Z))
then they are going to produce _different_ results.  For integer
and rational arithmetic, there's no problem, but anyone doing floating
point calculations in Lisp has to be very wary of non-binary + and * .
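
To make the hazard concrete (assuming IEEE-style single-precision
floats; exact results are implementation-dependent):

    (+ (+ 1.0e20 -1.0e20) 3.0)   ; => 3.0
    (+ 1.0e20 (+ -1.0e20 3.0))   ; => 0.0, the 3.0 is absorbed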

-- 
Heuer's Law:  Any feature is a bug unless it can be turned off.

jjacobs@well.sf.ca.us (Jeffrey Jacobs) (09/15/90)

Having watched the LISP Syntax thread for a while, I thought a little
history might be in order...

Way back in the old days, development in LISP and the underlying
philosophy of the language were substantially different.

LISP was an interpreted language.  Debugging and development was done in
interpreted mode; compilation was considered the *last* step in the
development process.

LISP had two fundamental data types, CONS cells and ATOMs.  ATOMs were
"indivisible", and included "interned symbols", numbers and strings.
(Arrays were added, but were less important to the underlying
concept).

A key aspect of the language was the "equivalence of data and (source)code",
i.e. code consisted of LISP structures, which could be manipulated *exactly*
like any other LISP structure.  Note that this is substantially
different from the modern view, where "functions (*not* code) are a data
_type_" and not directly modifiable, e.g. even "interpreted" code in
most modern implementations
gets converted to something other than a CONS based structure.

This "equivalence" allowed some very interesting capabilities that are
no longer available in modern implementations.  Since the interpreter
operated on list structures, it was possible to dynamically modify code
while in the process of execution.  Now, most of us didn't write
self-modifying code (although we probably all tried it at least once).
But we were able to stop an execution, and make changes to code and
continue from the breakpoint without having to recompile or start over.
We could issue a break, put a trace point around an expression *within*
a defined function, and continue.  Or we could fix it, and continue;
the "fix" would be propagated even to pending calls.  E.g. if you
had
(DEFUN FOO (X Y)
  expr1
  expr2
  ...)

and expr1 invoked FOO recursively, you could break the execution,
change expr2 (or TRACE it, or BREAK it, or...), and all of the pending
invocations on the stack were affected.  (You can't do this with
compiled code).

It allowed things like structure editors (where you didn't need to
worry about messing up parends), DWIM, and other features that have
been lost in the pursuit of performance.

With this view (and combined with the mathematical purity/simplicity
of McCarthy's original concept) LISP syntax not only makes sense,
it is virtually mandatory!

Of course, it also effectively mandated dynamic scoping.  "Local"/lexical
scoping really came about as the default for the compiler primarily because
most well written LISP code didn't use function arguments as free/special
variables, so it was an obvious optimization.  However, several years
ago, Daryle Lewis confided in me that he had intended that UCI LISP be
released with the compiler default set to everything being SPECIAL.
Given the historical problems in reconciling local and free variables,
and the fact that the vast majority of LISPers who learned the language
in the '70s and early '80s learned UCI LISP, I can't help but wonder
what effect this might have had on Common LISP...

(FWIW, REDUCE was originally done in UCI LISP way back in the early '70s,
and BBN/INTERLISP supported MLISP, an Algol-like syntax.  Seems to me that
RLISP must go back that far as well.  Given that structure editors are
incredibly ancient, I wonder why the people at Utah didn't use one of
those.  Oh, and almost nobody ever learned LISP using 1.5...)

Personally, I think the whole reason LISP machines were created
was so that people could run EMACS :-)

However, if compilation is the primary development strategy (which it
is with CL), then the LISP syntax is not particularly useful.  Modern
block structured syntax is much easier to read and maintain; it also
allows constructs such as type declarations, etc. to be much
more readable.  Infix notation is indeed much more familiar to most
people.  Keyword syntax in most languages is much more obvious, readable
and certainly less prone to errors and abuse than CL.  The elimination
of direct interpretation of structures (as read) and the almost total use
of text editors does indeed leave LISP syntax a relic from the past.


Jeffrey M. Jacobs (co-developer of UCI LISP, 1973)
ConsArt Systems Inc, Technology & Management Consulting
P.O. Box 3016, Manhattan Beach, CA 90266
voice: (213)376-3802, E-Mail: 76702.456@COMPUSERVE.COM

barmar@think.com (Barry Margolin) (09/15/90)

In article <20301@well.sf.ca.us> jjacobs@well.sf.ca.us (Jeffrey Jacobs) writes:
>Way back in the old days, ...
>LISP had two fundamental data types, CONS cells and ATOMs.  ATOMs were
>"indivisible", and included "interned symbols", numbers and strings.
>(Arrays were added, but were less important to the underlying
>concept).

I don't know which "old days" you're referring to, but in the old days I
remember (I learned MacLisp in 1980) arrays predated strings.  PDP-10
MacLisp had just recently acquired a kludgey fake string mechanism, but
there was little support for anything but input and output of them.
Arrays, on the other hand, had existed since at least the early 70's.
--
Barry Margolin, Thinking Machines Corp.

barmar@think.com
{uunet,harvard}!think!barmar

masinter@parc.xerox.com (Larry Masinter) (09/15/90)

(This may sound a little like a flame, but I don't mean it to be.  Just
comparing my recollection of history to Mr. Jacobs's.)

In article <20301@well.sf.ca.us> jjacobs@well.sf.ca.us (Jeffrey Jacobs) writes:

>Having watched the LISP Syntax thread for a while, I thought a little
>history might be in order...

>Way back in the old days, development in LISP and the underlying
>philosophy of the language were substantially different.

>LISP was an interpreted language.  Debugging and development was done in
>interpreted mode; compilation was considered the *last* step in the
>development process.

*Way* back in the old days, Lisp was batch. Punch cards & JCL.
Although the first generation (Lisp 1.5 on 7040 and its ilk) may have
been primarily interpreted, most of the second generation Lisps were
compiled (I think I remember that in Stanford's Lisp/360 the DEFINE
function invoked the compiler.)  After all, since it was batch (or
interactive batch) you might as well compile.

>LISP had two fundamental data types, CONS cells and ATOMs.  ATOMs were
>"indivisible", and included "interned symbols", numbers and strings.
>(Arrays were added, but were less important to the underlying
>concept).

BBN Lisp had strings, arrays, hash tables and a few other data types
in the early 70's, and I don't think it was unique. Lisp 1.5 only had
a few datatypes, though.

>A key aspect of the language was the "equivalence of data and (source)code",
>i.e. code consisted of LISP structures, which could be manipulated *exactly*
>like any other LISP structure.  Note that this is substantially
>different from the modern view, where "functions (*not* code) are a data
>_type_" and not directly modifiable, e.g. even "interpreted" code in
>most modern implementations
>gets converted to something other than a CONS based structure.

I think the move toward clearly distinguishing between functions and
lists that denote functions is positive. It just means that there is a
one-way transformation, not two-way.

>This "equivalence" allowed some very interesting capabilities that are
>no longer available in modern implementations.  Since the interpreter
>operated on list structures, it was possible to dynamically modify code
>while in the process of execution.  Now, most of us didn't write
>self-modifying code (although we probably all tried it at least once).
>But we were able to stop an execution, and make changes to code and
>continue from the breakpoint without having to recompile or start over.
>We could issue a break, put a trace point around an expression *within*
>a defined function, and continue.  Or we could fix it, and continue;
>the "fix" would be propagated even to pending calls.  E.g. if you
>had
>(DEFUN FOO (X Y)
>  expr1
>  expr2
>  ...)

>and expr1 invoked FOO recursively, you could break the execution,
>change expr2 (or TRACE it, or BREAK it, or...), and all of the pending
>invocations on the stack were affected.  (You can't do this with
>compiled code).

Unfortunately, it was also impossible to avoid some fairly problematic
situations, e.g.,

 (defun foo (x y)
   expr1
   expr2
   ...)

if you deleted, added, or renamed an argument, you might start
executing code where the variable bindings of the old stack frame
didn't match what the new code expected.

[As a side note, many learning LISP programmers frequently do
encounter self-modifying code and are mystified by it, e.g., 

   (let ((var '(a b c)))     ; VAR is bound to the quoted constant itself
       ...
       (nconc var value))    ; so NCONC destructively extends the literal
                             ; list that sits in the function's own source

]

>It allowed things like structure editors (where you didn't need to
>worry about messing up parends), DWIM, and other features that have
>been lost in the pursuit of performance.

Medley has a structure editor which works fine in a compile-only mode
(compile & install when done editing the structure.)  I don't think
there's a very strong correlation between compilation and structure
editing.

As for dynamic error correction, insertion of breakpoints and trace
points, however, the major impediment is not that Lisp is compiled:
the major problem is the presence of *MACROS* in the language.

(defun stopme () 
  (macrolet ((nothing (&rest ignore) nil))
    (nothing  (try to insert a (breakpoint) in here somewhere))))

Macros can cause the executed code to look completely different than
the original source; this plays havoc with most source-level debugging
techniques.

>With this view (and combined with the mathematical purity/simplicity
>of McCarthy's original concept) LISP syntax not only makes sense,
>it is virtually mandatory!

Certainly you could take Lisp, print it backward, use indentation
instead of parens and *completely* change the way the language looks
as a sequence of characters and still wind up with the same features.
As long as there's an invertible relation between the printed
representation and the internal structure, you could still use
structure editors and do error correction, etc.

>Of course, it also effectively mandated dynamic scoping.  "Local"/lexical
>scoping really came about as the default for the compiler primarily because
>most well written LISP code didn't use function arguments as free/special
>variables, so it was an obvious optimization.  However, several years
>ago, Daryle Lewis confided in me that he had intended that UCI LISP be
>released with the compiler default set to everything being SPECIAL.
>Given the historical problems in reconciling local and free variables,
>and the fact that the vast majority of LISPers who learned the language
>in the '70s and early '80s learned UCI LISP, I can't help but wonder
>what effect this might have had on Common LISP...

The Interlisp compiler always had the compiler default that all
variables are SPECIAL. (Daryle moved on to BBN and worked on the
BBN-Lisp compiler with Alice Hartley.)

I don't really know the demographics, but my impression was that the
popular Lisp implementations of the '70s were MacLisp, Lisp 1.6, Franz
Lisp, Interlisp, and that UCI Lisp wasn't so widely used. Maybe the
distinction is between the research vs. the student community.

>(FWIW, REDUCE was originally done in UCI LISP way back in the early '70s,
>and BBN/INTERLISP supported MLISP, an Algol-like syntax.  Seems to me that
>RLISP must go back that far as well.  Given that structure editors are
>incredibly ancient, I wonder why the people at Utah didn't use one of
>those.  Oh, and almost nobody ever learned LISP using 1.5...)

I'm not sure about the REDUCE bit, although it doesn't ring true (my
'standard lisp' history is a bit rusty), but you certainly shouldn't
confuse MLISP (which is the meta-lisp used in McCarthy's books) with
CLISP (which is the DWIM-based support for infix notation that was
added to Interlisp in the mid-70s.)  I'm not sure what editor Griss
and Hearn used for REDUCE, but I'd guess it was some variant of TECO.
Remember that in the early 70's, people still actually used teletype
machines and paper-based terminals to talk to computers; neither text-
nor structure-based editors actually let you *see* anything unless you
asked them to print it out for you.

> Personally, I think the whole reason LISP machines were created
> was so that people could run EMACS :-)

I believe EMACS predated LISP machines by some margin; besides, Eine
Is Not Emacs.

>However, if compilation is the primary development strategy (which it
>is with CL), then the LISP syntax is not particularly useful.  Modern
>block structured syntax is much easier to read and maintain; it also
>allows constructs such as type declarations, etc. to be much
>more readable.  Infix notation is indeed much more familiar to most
>people.  Keyword syntax in most languages is much more obvious, readable
>and certainly less prone to errors and abuse than CL.  The elimination
>of direct interpretation of structures (as read) and the almost total use
>of text editors does indeed leave LISP syntax a relic from the past.
 
The ability to write programs that create or analyze other programs
without resorting to extensive string manipulation or pattern matching
has always been a strength of Lisp, whether or not you can do so on
the fly with programs that are running on the stack.

I guess C doesn't have a 'Modern block structured syntax' (I certainly
get cross-eyed looking at some of the constructs that cross my
screen); perhaps it is a relic from the past, too. Certainly there are
languages with a more principled syntax (Prolog, Smalltalk). I think
HyperTalk is fairly easy to read even though it is pretty inconsistent
about how it does nesting. I think a lot of people find Lisp hard to
read and maintain, and that the difficulty is intrinsic, but I don't
think there is as strong a correlation with editor technology as you
imply.


--
Larry Masinter (masinter@parc.xerox.com)
Xerox Palo Alto Research Center (PARC)
3333 Coyote Hill Road; Palo Alto, CA USA 94304
Fax: (415) 494-4333

rpk@rice-chex.ai.mit.edu (Robert Krajewski) (09/15/90)

In article <6217@castle.ed.ac.uk> expc66@castle.ed.ac.uk (Ulf Dahlen) writes:
>What did the ``original'' McCarthy Lisp syntax look like?

Umm, actually, I can't recall exactly...

But I think it was something like:

The list (a b c) ==> (a, b, c)

(cons x y) ==> cons[x;y]
Robert P. Krajewski
Internet: rpk@ai.mit.edu ; Lotus: robert_krajewski.lotus@crd.dnet.lotus.com

pcg@cs.aber.ac.uk (Piercarlo Grandi) (09/16/90)

On 12 Sep 90 02:12:38 GMT, andy@Neon.Stanford.EDU (Andy Freeman) said:

andy> 1)  We're not doing +,-,*,/ arithmetic, we're programming.  (BTW - "+"
andy>     isn't really a binary operator, neither is "*"; there are
andy>     surprisingly few true binary, or unary, operations.)

In article <PCG.90Sep13135243@odin.cs.aber.ac.uk>, pcg@cs.aber.ac.uk
(Piercarlo Grandi) writes:

pcg> Precisely. Agreed. Even the semantics are different.

On 14 Sep 90 07:45:24 GMT, ok@goanna.cs.rmit.oz.au (Richard A. O'Keefe)
said:

ok> I missed this the first time it came around.  I have some bad news
ok> for the two of you: in floating-point arithmetic "+" _is_ a binary
ok> operation.  Floating-point "+" and "*" are not associative.

The semantics *are* different. Didn't I write that?

ok> If one Lisp compiler turns (+ X Y Z) into (plus (plus X Y) Z) and
ok> another turns it into (plus X (plus Y Z)) then they are going to
ok> produce _different_ results.

As long as people *know* that the semantics are different, and this is
the big problem, they can choose to code the thing in any of
the three ways.

ok> For integer and rational arithmetic, there's no problem,

Well, this is a case in point for my argument about the pitfalls; there
is still a problem. Nobody constrains you to have only positive numbers
as the operands to n-ary fixed point +, so that

	(+ -10 +32767 +10)

is not well defined on a 16 bit machine, unless you do modular
arithmetic throughout.

As we already know but sometimes forget, arithmetic on computers follows
*very* different rules from arithmetic in vanilla mathematics, and using
a notation that resembles the latter can be utterly misleading, even for
very competent people, unless they pause for hard thought.

	(+ a b c d e)

simply means apply repeated *computer* addition on *computer* fixed or
floating point throughout. It is the *programmer's* responsibility to
make sure this makes sense -- the unfamiliar syntax may make him pause,
at least -- and somehow maps into vanilla mathematics not too
inaccurately.

ok> but anyone doing floating point calculations in Lisp has to be very
ok> wary of non-binary + and * .

Anyone doing floating point arithmetic on *any* machine, IEEE standard
or Cray, has to be very very wary of assuming it is the same as
arithmetic on reals, and yet a lot of people do (and then complain that
two computers with different floating representations print utterly
different results to their programs!).

The real problem is that *lots* of people still believe that floating
point numbers are reals, and fixed point ones have infinite precision!

They miss out completely on the non-associativity and modularity, and all
the other funny, funny properties of floating and fixed point.
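
A two-line demonstration of how the grouping matters (a hypothetical
REPL exchange, assuming IEEE single floats):

    (+ (+ 1.0e20 -1.0e20) 1.0)    ; => 1.0
    (+ 1.0e20 (+ -1.0e20 1.0))    ; => 0.0, the 1.0 is swallowed whole

Both are plausible readings of (+ 1.0e20 -1.0e20 1.0), yet they disagree.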

My contention is not with Lisp n-ary operators (which are damn useful and
safe in other domains); it is that even the simplest existing operators
on floats and fixeds have entirely non-obvious semantics, and that this
is carefully disguised by conventional, mathematics-looking syntax.  Ah,
sales!

--
Piercarlo "Peter" Grandi           | ARPA: pcg%uk.ac.aber.cs@nsfnet-relay.ac.uk
Dept of CS, UCW Aberystwyth        | UUCP: ...!mcsun!ukc!aber-cs!pcg
Penglais, Aberystwyth SY23 3BZ, UK | INET: pcg@cs.aber.ac.uk

scott@wiley.uucp (Scott Simpson) (09/17/90)

In article <PCG.90Sep15203024@odin.cs.aber.ac.uk> pcg@cs.aber.ac.uk (Piercarlo Grandi) writes:
>Anyone doing floating point arithmetic on *any* machine, IEEE standard
>or Cray, has to be very very wary of assuming it is the same as
>arithmetic on reals, and yet a lot of people do (and then complain that
>two computers with different floating representations print utterly
>different results to their programs!).

Real programmers don't use real numbers. Donald Knuth did all the
arithmetic in TeX in fixed point numbers he called FIXes, which are
integers scaled by 2^20 (i.e., he uses the low 20 bits for the
fractional part). This gives the same results on all (32 bit or
greater) machines, but you still must be wary with your arithmetic.
(Come to think of it, maybe he did it in scaled points, which are
scaled by 2^16. I don't have the source here. I'll have to check. My
point is the same though.)
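
For the flavor of it, here is a minimal sketch of that style of scaled
arithmetic in Lisp (the names are mine, certainly not Knuth's):

    (defconstant unity (expt 2 16))           ; 1.0 in scaled points
    (defun sp (x) (round (* x unity)))        ; real -> scaled point
    (defun sp* (a b) (floor (* a b) unity))   ; product of two scaled points

Since only integer operations are involved, every machine with big
enough integers gets bit-identical results.
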
Scott Simpson    		TRW 			scott@coyote.trw.com

ericco@stew.ssl.berkeley.edu (Eric C. Olson) (09/17/90)

One way of evaluating the "virtues" of languages is to compare their
expressiveness.  For example, someone already mentioned that an infix
notation parser can be readily added to Lisp.  I mentioned previously
that for rule-based expert systems, a lhs/rhs parser is also a simple
extension to Lisp.  Making a similar system in an Algol language is
more complex.  In order for the rules to be expressed in the native
language, the generated "meta" level code must be passed through the
compiler.  This may require system calls, or an inferior process, or
that the compiler be embedded in the program.  A good working example of
this is C++, which is (typically) compiled by translation into C code.

However, there are significant limitations on what can be done in C++.
For example, suppose you need to convolve an image as quickly as
possible, and that the convolution kernel is not known at compile
time.  In Lisp, one can easily write code to make a function that applies
a specific convolution kernel to an image.  The generated code contains
only a single loop, a bunch of conditional statements for boundaries, a
sum, and a bunch of multiplications.  Almost any modern Lisp compiler
can generate efficient code for this.  In addition, special cases can be
handled; in particular, kernels containing zeros and ones can be
optimized.
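
Something along these lines (a sketch of my own, 1-D and without the
boundary handling mentioned above) conveys the idea:

    ;; Build and compile a convolver specialized to one kernel at run time.
    ;; Zero weights generate no code at all; unit weights lose their multiply.
    (defun make-convolver (kernel)
      (let ((terms (loop for w in kernel
                         for j from 0
                         unless (zerop w)
                         collect (if (= w 1)
                                     `(aref image (+ i ,j))
                                     `(* ,w (aref image (+ i ,j)))))))
        (compile nil
          `(lambda (image out)
             (dotimes (i (- (length image) ,(1- (length kernel))) out)
               (setf (aref out i) (+ ,@terms)))))))

A call like (make-convolver '(1 0 2)) returns a compiled function whose
inner loop contains exactly two AREF terms and one multiply.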

My impression is that writing a similar piece of code in Algol
languages would be needlessly complicated (although doable).

I've also seen Lisp implementations in C, and C implementations in
Lisp.  Although I've seen more Lisp implementations in C, I rather
enjoyed using C in Lisp -- since it caught all my illegal pointers
during program development and safely returned me to the Lisp debugger.

Anyway, I've yet to find an application that can be easily implemented
in an Algol language that cannot be readily implemented in Lisp.
However, the above examples are easily implemented in Lisp, but, I
think, are not readily implemented in Algol languages.

But, hey, someone tell me I'm wrong.  I'll listen.

Eric
ericco@ssl.berkeley.edu

jeff@aiai.ed.ac.uk (Jeff Dalton) (09/18/90)

In article <15089@yunexus.YorkU.CA> oz@yunexus.yorku.ca (Ozan Yigit) writes:
>In article <3408@skye.ed.ac.uk> jeff@aiai.UUCP (Jeff Dalton) writes:

>>... It's true that different people will prefer different
>>syntaxes and that we can't say they're wrong to do so.  However, we
>>shouldn't go on to conclude that all views on the virtues or otherwise
>>of a syntax are equally valid. 

>It follows therefore that one should try to substantiate some or all of the
>claims regarding the effectiveness and benefits of a syntax, such as that
>of lisp, instead of just presenting opinions.

What follows from what I said is that people should find out how good
Lisp programmers use the language before deciding that it has an
inherently losing syntax.

>I have seen studies on "natural artificial languages" [...] but I don't recall
>seeing a study that has a definitive word on overwhelming benefits of one
>syntax over another. If you know of any that substantiate various claims
>[so far made] about lisp syntax, I would be very interested in it.

I thought I was reasonably clear about what I had in mind when making
the statement you quote above, namely such things as the fact that
some people decide against Lisp syntax when they haven't used a good
editor or haven't learned some of the more effective techniques for
understanding Lisp.

The point I was making above was, more or less, just that informed
opinion should carry more weight than uninformed opinion.  This has
rather little to do with definitive words on the overwhelming benefits
of one syntax over another.

I suppose, however, I have claimed that Lisp's syntax is simpler than
some of the alternatives.  That seems to me rather self-evident.  In
any case, I don't intend to argue about it.

I have also claimed that Lisp's quoting rules needn't be more
confusing than those of other languages such as (for example) Basic.
I don't expect everyone to agree with me on this, but it's based
on the observation that quotes are needed in the same cases, namely
when the external representation of some data object might otherwise
be mistaken for an expression.  This suggests that it should be
possible to explain (enough of) Lisp's quotation rules in a way that
is not more confusing.  If you want to say this is "just opinion", I
suppose we'll just have to disagree.

>>You can get *some* of the benefits, but by sacrificing some
>>of the others.  Lisp occupies something like a local maximum in
>>"benefit space".
>
>Really. QED is it?

That's right, at least until someone provides a counterexample :-)

Steve Knight provided one, but of the wrong sort.  He showed that
one of the benefits of Lisp syntax -- extensibility -- could be
obtained with another syntax.  But it was done by giving up some
of the simplicity of the Lisp approach.

I am quite happy for someone to decide that the syntax of some other
language is better than that of Lisp, or that they prefer to give up
some of the simplicity (or whatever other virtues might be claimed).
Indeed, I think these are questions on which reasonable people can
reasonably disagree.  It's a mistake, in my view, to suppose that
some sort of study will give us the definitive answers.

However, when people start saying that Lisp's syntax is a cost
rather than a benefit, with the implication that everyone ought to
see it that way, I think it makes sense to point out that some
people *don't* see it that way, that they don't feel they are
tolerating the syntax in order to get extensibility (or whatever),
that they aren't trying to make life easier for machines at
their own expense, and so on.  Any definitive answer about 
Lisp syntax has to take these views into account.

-- Jeff

jjacobs@well.sf.ca.us (Jeffrey Jacobs) (09/18/90)

> old days (Barry Margolin learned LISP in 1980)

Early '70s; UCI LISP and BBN(/INTER)LISP both had strings.  But implementation
was a bit different than today.

Jeffrey M. Jacobs
ConsArt Systems Inc, Technology & Management Consulting
P.O. Box 3016, Manhattan Beach, CA 90266
voice: (213)376-3802, E-Mail: 76702.456@COMPUSERVE.COM

lgm@cbnewsc.att.com (lawrence.g.mayka) (09/18/90)

In article <PCG.90Sep13135243@odin.cs.aber.ac.uk>, pcg@cs.aber.ac.uk (Piercarlo Grandi) writes:
> The real challenge here is that we want some syntax that says, apply
> this operator symbol to these arguments and return these value_s_. Even
> lisp syntax does not really allow us to easily produce multiple values.

Common Lisp supports multiple value return with a fairly simple syntax.
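
For instance (standard Common Lisp; the function itself is my own
example):

    (defun stats (list)
      (values (reduce #'+ list) (length list)))   ; return sum and count

    (multiple-value-bind (sum n) (stats '(1 2 3))
      (/ sum n))                                  ; => 2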


	Lawrence G. Mayka
	AT&T Bell Laboratories
	lgm@iexist.att.com

Standard disclaimer.

lgm@cbnewsc.att.com (lawrence.g.mayka) (09/18/90)

In article <20301@well.sf.ca.us>, jjacobs@well.sf.ca.us (Jeffrey Jacobs) writes:
> the "fix" would be propagated even to pending calls.  E.g. if you
> had
> (DEFUN FOO (X Y)
>   expr1
>   expr2
>   ...)
> 
> and expr1 invoked FOO recursively, you could break the execution,
> change expr2 (or TRACE it,  or BREAK it or...), all of the pending
> invocations on the stack were affected.  (You can't do this with
> compiled code).

You can if you use the Set Breakpoint command on a Symbolics
workstation.  TRACE :WHEREIN and ADVISE-WITHIN are also applicable if
'expr2' is a function call.  Or if you mean to change FOO permanently,
recompile it and then reinvoke it at its earliest occurrence on the
stack.

> However, if compilation is the primary development strategy (which it
> is with CL), then the LISP syntax is not particularly useful.  Modern

The great power of Common Lisp's unique syntax extension facilities
(a.k.a. its macro capability) depends on the language's regular
syntax.


	Lawrence G. Mayka
	AT&T Bell Laboratories
	lgm@iexist.att.com

Standard disclaimer.

lgm@cbnewsc.att.com (lawrence.g.mayka) (09/18/90)

In article <MASINTER.90Sep15020347@origami.parc.xerox.com>, masinter@parc.xerox.com (Larry Masinter) writes:
> about how it does nesting. I think a lot of people find Lisp hard to
> read and maintain, and that the difficulty is intrinsic, but I don't
> think there is as strong a correlation with editor technology as you
> imply.

Most complaints I've heard about Lisp syntax, from both novices and
regular users, boil down to the claim that the repetitive,
positionally dependent syntax of most Lisp constructs has insufficient
redundancy for easy recognition by the human eye.  Repetition of
parentheses could be reduced by defining (e.g., via the macro
character facility) a character pair such as {} to be synonymous with
().  Positional dependency could be reduced simply by making greater
use of keywords (e.g., defining macros synonymous with common
constructs but taking keyword arguments instead of positional ones).
The difficulty some people have in reading Lisp is hence not intrinsic
to its syntax, but rather an accident of common practice.
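
The {} trick, for instance, is a few lines along the pattern of the
READ-DELIMITED-LIST example in CLtL (untested here; details may vary by
implementation):

    (set-macro-character #\} (get-macro-character #\)))
    (set-macro-character #\{
      #'(lambda (stream char)
          (declare (ignore char))
          (read-delimited-list #\} stream t)))

After this, {foo {bar baz}} reads as the same structure as (foo (bar baz)).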


	Lawrence G. Mayka
	AT&T Bell Laboratories
	lgm@iexist.att.com

Standard disclaimer.

lgm@cbnewsc.att.com (lawrence.g.mayka) (09/18/90)

In article <PCG.90Sep15203024@odin.cs.aber.ac.uk>, pcg@cs.aber.ac.uk (Piercarlo Grandi) writes:
> is still a problem. Nobody constrains you to have only positive numbers
> as the operands to n-ary fixed point +, so that
> 
> 	(+ -10 +32767 +10)
> 
> is not well defined on a 16 bit machine, unless you do modular
> arithmetic throughout.

Common Lisp integers have arbitrary precision, so Lisp programmers
don't need to worry about 16-bit machine words vs. 32-bit machine
words.  Rational numbers in Lisp are likewise exact.


	Lawrence G. Mayka
	AT&T Bell Laboratories
	lgm@iexist.att.com

Standard disclaimer.

jjacobs@well.sf.ca.us (Jeffrey Jacobs) (09/19/90)

Larry Masinter writes:

> *Way* back in the old days, Lisp was batch. Punch cards & JCL.
> Although the first generation (Lisp 1.5 on 7040 and its ilk) may have
> been primarily interpreted, most of the second generation Lisps were
> compiled (I think I remember that in Stanford's Lisp/360 the DEFINE
> function invoked the compiler.)

I'm not sure which "second generation LISPs" Larry refers to.  Having
fortunately been spared any "batch LISPs" (other than, briefly and
mercifully, SDC's), I can't comment.  Certainly, Maclisp, BBN/Inter-, UCI,
Stanford 1.6 meet my definition of being primarily interpreted during
the development process.  (I can't imagine anything more horrible than
programming LISP in a batch environment ;-)

>BBN Lisp had strings and arrays hash tables and a few other data types
>in the early 70's, and I don't think it was unique. Lisp 1.5 only had
>a few datatypes, though.

UCI and 1.6 also had similar data types.  What I meant to point out was
that the conceptual focus was on CONS cells and ATOMs as the *primary*
data types of interest -- as opposed to modern Common LISPs, where the
term "atom" has disappeared, people are deathly afraid of using CONS
cells, and the focus seems to be primarily on sequences, hash-tables,
vectors, etc.  (This reached its peak in the SPICE literature, where
users were warned to 'avoid the use of CONS whenever possible'.)

>if you deleted, added, or renamed an argument, you might start
>executing code where the variable bindings of the old stack frame
>didn't match what the new code expected.

True, but protecting the programmer from dumb mistakes is not something
for which LISP is famous.  Even in a compiled environment, changing
argument lists still causes problems.  And in the interpreted mode,
you could often fix the problem and continue, as opposed to starting over.


>[As a side note, many learning LISP programmers frequently do
> encounter self-modifying code and are mystified by it, e.g.,
>   (let ((var '(a b c)))
>       ...
>       (nconc var value))
> ]

This isn't self modifying code, it's destructive modification of a data
structure (and let's avoid a circular argument about "var" being "code";
the learning programmer encounters this as a data example, not an
example of code).

As an aside, destructive modification of structures, or even
the building of any structure that is not a fairly simple list,
seems to be something that either programmers don't learn or fear
greatly.  I know one former member of the committee that confessed
to never having used RPLACA or its equivalent.  See also the
recent thread on building lists and TCONC...

>As for dynamic error correction, insertion of breakpoints and trace
>points, however, the major impediment is not that Lisp is compiled:
>the major problem is the presence of *MACROS* in the language.

Sorry, this one doesn't fly.  The information needed for dynamic correction,
tracing, etc. is simply gone.  You can't issue a break, followed
by a "TRACE FOO IN FUN" where "FUN" is a compiled function, continue,
and have pending calls to FOO in previous invocations of FUN traced.
(The Medley example simply demonstrates this.)

Nor can I buy the "MACROS" argument.  Most applications programmers (as
opposed to "language implementors") make very little use of macros.  Nor
do they have much reason to do source level debugging of macros supplied
with their particular implementation.  IOW, *most* of their debugging is
done on functions, not macros.  And if they do use macros, they are usually
fairly simple and straightforward.

>I don't really know the demographics, but my impression was that the
>popular Lisp implementations of the '70s were MacLisp, Lisp 1.6, Franz
>Lisp, Interlisp, and that UCI Lisp wasn't so widely used. Maybe the
>distinction is between the research vs. the student community.

The distinction is indeed the "student community".  UCI LISP was,
according to my records and correspondence, the most widely used system
for *teaching* LISP.  In fact, it was used for classes at Stanford around '75
or '76, and was used at CMU into the '80s (CMU rewrote it).  I don't think
it ever was adopted by SAIL, although I'm sure they incorporated many of the
enhancements.  Certainly BBN/Interlisp and MacLisp were the premier
versions used in the research community, followed by Franz later in the
decade.  (I'm not real clear on what happened with 1.6; UCI was a
melding of Stanford 1.6 and BBN).

>I'm not sure about the REDUCE bit, although it doesn't ring true (my
>'standard lisp' history is a bit rusty)

Trust me; I was providing support to Utah while I was at ISI in 1974!

> but you certainly shouldn't
>confuse MLISP (which is the meta-lisp used in McCarthy's books) with
>CLISP (which is the DWIM-based support for infix notation that was
>added to Interlisp in the mid-70s.)

Mea culpa, it was indeed CLISP!  This was available at least as early
as '74.  My earlier BBN/Inter manuals are in storage, but I think it
might even go back a little earlier.

>I'm not sure what editor Griss
>and Hearn used for REDUCE, but I'd guess it was some variant of TECO.
>Remember that in the early 70's, people still actually used teletype
>machines and paper-based terminals to talk to computers; neither text
>or structure based editors actually let you *see* anything unless you
>asked it to print it out for you.

I remember, literally.  I used to have a TI Silentwriter next to two terminals
in my office...

>The ability to write programs that create or analyze other programs
>without resorting to extensive string manipulation or pattern matching
>has always been a strength of Lisp, whether or not you can do so on
>the fly with programs that are running on the stack.

Quite true!  And, I might add, the simple syntax of LISP makes it much
easier to write code that generates LISP code than is the case with other
languages.

>I guess C doesn't have a 'Modern block structured syntax' (I certainly
>get cross-eyed looking at some of the constructs that cross my
>screen); perhaps it is a relic from the past, too.

Heck, I get *serious* eye strain trying to read most C code.  Nobody
on this planet will ever accuse me of being a C-booster.  (And it
certainly qualifies as a "relic"; the biggest influence on C was
probably the difficulty of pushing the keys on the old ASR teletypes!)

>> Personally, I think the whole reason LISP machines were created
>> was so that people could run EMACS :-)
>I believe EMACS predated LISP machines by some margin

Hey, my point exactly.  More (?) than one EMACS user on a PDP-10 brought
it to its knees, so the LISP machine had to be invented :-)

>I think a lot of people find Lisp hard to
>read and maintain, and that the difficulty is intrinsic, but I don't
>think there is as strong a correlation with editor technology as you
>imply.

I didn't mean to imply any such correlation; in fact, I'm quite happy to
use EMACS.  But I'd also like to have a structure editor, and DWIM, and
a lot of other things that are no longer available.  More importantly,
I'd like to have seen more of the underlying concepts and philosophy
maintained!

>(This may sound a little like a flame, but I don't mean it to be. Just
>comparing my recollection of History to Mr. Jacobs.)

No flame taken...

Jeffrey M. Jacobs
ConsArt Systems Inc, Technology & Management Consulting
P.O. Box 3016, Manhattan Beach, CA 90266
voice: (213)376-3802, E-Mail: 76702.456@COMPUSERVE.COM

"I hope that when I get old I won't jes sit around talking about glory days,
but I probably will" - Bruce Springsteen

moore%cdr.utah.edu@cs.utah.edu (Tim Moore) (09/19/90)

In article <20501@well.sf.ca.us> jjacobs@well.sf.ca.us (Jeffrey Jacobs) writes:
>Larry Masinter writes:
>>...
>True, but protecting the programmer from dumb mistakes is not something
>for which LISP is famous.  Even in a compiled environment, changing
>argument lists still causes problems.  And in the interpreted mode,
>you could often fix the roblem and continue, as opposed to starting over.
>

I'd say that Lisp, with its copious runtime typechecking, does a
pretty good job of protecting the user from dumb mistakes, compared
with other languages like C.

>
>>[As a side note, many learning LISP programmers frequently do
>> encounter self-modifying code and are mystified by it, e.g.,
>>   (let ((var '(a b c)))
>>       ...
>>       (nconc var value))
>> ]
>
>This isn't self modifying code, it's destructive modification of a data
>structure (and let's avoid a circular argument about "var" being "code";
>the learning programmer encounters this as a data example, not an
>example of code).
>

It is self-modifying code in the sense that it changes the behavior of
the form from one evaluation to the next by tweaking a structure that
is not obviously part of the global state of the program. It's clearly
a hack that has been made obsolete by closures. It's not legal Common
Lisp code, because the constant may be in read-only memory.
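
(The closure version keeps the state in fresh, explicitly mutable
cells; a sketch, with names of my own invention:)

    (let ((var (list 'a 'b 'c)))      ; LIST conses fresh cells at load time
      (defun add-value (value)
        (setf var (nconc var (list value)))
        var))

Here each call mutates the closed-over list, not a constant embedded in
the code.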

>As an aside, destructive modification of structures, or even
>the building of any structure that is not a fairly simple list,
>seems to be something that either programmers don't learn or fear
>greatly.  I know one former member of the committee that confessed
>to never having used RPLACA or its equivalent.  See also the
>recent thread on building lists and TCONC...
>

A reason for this may be that one of the most influential Lisp books
of the 80's (IMHO), Abelson and Sussman's "Structure and
Interpretation of Computer Programs", does a pretty good job of
warning against the pitfalls of destructive modification.  Also, avoiding
destructive modification is sound engineering practice.  Far too often
have I stumbled over someone's clever destructive hack when trying to
modify code.

I think many Lisp programmers go through a phase where they discover
the forbidden pleasures of destructive modification and go overboard
using it. I myself am starting to grow out of it. It causes too much
grief in the long run.

Also, in modern Lisps that use a generational garbage collector,
destructive modification of a structure can be as expensive as, if not
more expensive than, allocating a fresh structure.

>>As for dynamic error correction, insertion of breakpoints and trace
>>points, however, the major impediment is not that Lisp is compiled:
>>the major problem is the presence of *MACROS* in the language.
>...

>Nor can I buy the "MACROS" argument.  Most applications programmers(as
>opposed to "language implementors"), make very little use of macros.  Nor
>do they have much reason to do source level debugging of macros supplied
>with their particular implementation.  IOW, *most* of their debugging is
>done on functions, not macros.  And if they do use macros, they are usually
>fairly simple and straightforward.
>

It's not the macros that the application programmers write themselves
that cause problems; at least the programmer knows what the expansions
of those macros look like. Rather, it's the macros that are a part of
the language that cause trouble. Consider how often Common Lisp
programmers use cond, setf, dotimes, dolist, do, and do*. The
expansions of these macros can be pretty hairy. Maintaining the
correspondence between the internal representation of a function and
the form that produced it is not trivial.
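
A cheap way to see the hair is to ask the Lisp directly; the output
below is made up but typical, and the real thing varies by
implementation:

    * (macroexpand '(dotimes (i n) (f i)))
    (DO ((I 0 (1+ I)))
        ((>= I N) NIL)
      (F I))
    T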

>>I'm not sure about the REDUCE bit, although it doesn't ring true (my
>>'standard lisp' history is a bit rusty)
>
>Trust me; I was providing support to Utah while I was at ISI in 1974!

So that explains the old copy of the UCI Lisp manual in the PASS group
library... 

>>The ability to write programs that create or analyze other programs
>>without resorting to extensive string manipulation or pattern matching
>>has always been a strength of Lisp, whether or not you can do so on
>>the fly with programs that are running on the stack.
>
>Quite true!  And, I might add, the simple syntax of LISP makes it much
>easier to write code that generates LISP code than is the case with other
>languages.

I third this point. Many problems can be solved cleverly and
efficiently by generating code for the solution instead of solving the
problem directly. Pattern matchers come to mind. To return briefly to
the original subject of this thread, I don't think that the trend
towards an opaque function representation has affected this capability
at all.

>
>Jeffrey M. Jacobs
>ConsArt Systems Inc, Technology & Management Consulting
>P.O. Box 3016, Manhattan Beach, CA 90266
>voice: (213)376-3802, E-Mail: 76702.456@COMPUSERVE.COM
>
>"I hope that when I get old I won't jes sit around talking about glory days,
>but I probably will" - Bruce Springsteen
Tim Moore                    moore@cs.utah.edu {bellcore,hplabs}!utah-cs!moore
"Ah, youth. Ah, statute of limitations."
		-John Waters

jwz@lucid.com (Jamie Zawinski) (09/19/90)

In article <20501@well.sf.ca.us> jjacobs@well.sf.ca.us (Jeffrey Jacobs) writes:
>
> Most applications programmers (as opposed to "language implementors"), make
> very little use of macros.  Nor do they have much reason to do source level
> debugging of macros supplied with their particular implementation.  IOW,
> *most* of their debugging is done on functions, not macros.  And if they do
> use macros, they are usually fairly simple and straightforward.

I disagree; macros are Common Lisp's only serious hook into the compiler, and
I use them all the time when writing code that needs to be efficient.  In a
pagination system I worked on, we used macros to turn a structure describing a
finite-state machine into a parser for it.  The strength of the thing was the
fact that, once all of the macroexpansion was done, there were a lot of
clauses of the form (= 4 4) which the compiler optimized away.  No need to
learn the meta-language that things like lex use.  And what other language
lets you define new control structures?

(And though I'm a "language implementor" now, I wasn't then. :-))

Saying that most macros are fairly simple is like saying that most functions
aren't recursive.  It's probably true, but it's not really that meaningful a
generalization.  

		-- Jamie

mike@ists.ists.ca (Mike Clarkson) (09/20/90)

In article <1990Sep18.180829.8801@hellgate.utah.edu> moore%cdr.utah.edu@cs.utah.edu (Tim Moore) writes:
>>Nor can I buy the "MACROS" argument.  Most applications programmers(as
>>opposed to "language implementors"), make very little use of macros.  Nor
>>do they have much reason to do source level debugging of macros supplied
>>with their particular implementation.  IOW, *most* of their debugging is
>>done on functions, not macros.  And if they do use macros, they are usually
>>fairly simple and straightforward.
>>
>
>It's not the macros that the application programmers write themselves
>that cause problems; at least the programmer knows what the expansions
>of those macros look like. Rather, it's the macros that are a part of
>the language that cause trouble. Consider how often Common Lisp
>programmers use cond, setf, dotimes, dolist, do, and do*. The
>expansions of these macros can be pretty hairy. Maintaining the
>correspondence between the internal representation of a function and
>the form that produced it is not trivial.

A bit of a tangent on macros and Scheme:

When I look at the kind of environment I write Common Lisp code in
these days, I feel that most application programmers make very heavy use
of macro packages, some of which are getting very large.  How many files
that you have start with

(require 'pcl)
(require 'clx)
(require 'loop)

etc.  This doesn't invalidate Jeff Jacobs or Tim Moore's arguments, but it
leads me to an observation.  As we go on as lisp programmers (in Common Lisp),
we are building higher and higher levels of abstraction, relying on
larger and larger programming bases above the underlying language.
The large packages hide many of the bookkeeping details from the applications
programmer, and assuming they work as described, s/he is able to write
much denser, more compact code.  Look at programming in Picasso as an example.

This is "a good thing" in the Abselson and Sussman sense, and each or
these packages I mentioned is becoming a standard of sorts, getting
incorporated back into the evolving Common Lisp standard.  Contrast this
with the case in Scheme: there is no standard definition of macros or
structures or advanced loops or any of the things neccessary to build
these abstracting packages.  As a result, they are difficult to build
and by definition, ex-standard. 

The lack of these parts of the Scheme standard is intentional.  The
current Scheme point of view is that "The ability to alter the syntax
of the language creates numerous problems.  All current implementations
of Scheme have macro facilities that solve those problems to one degree
or another, but the solutions are quite different and it isn't clear at
this time which solution is best, or indeed whether any of the solutions
are truly adequate.  Rather than standardize, we are encouraging
implementations to continue to experiment with different solutions."  But
in the interim, time marches on, and it is crippling the language's
development into areas that are also very important.

I dearly love Scheme as a language, but because of the lack of
standardization it's difficult to find the abstracting packages on which
we increasingly rely.  I feel that the approach of encouraging
implementations to continue to experiment with different solutions has
had its merits (first class continuations as an example), but the time
has come to solidify the language and build upwards on the very real
accomplishments of the base language. 


Mike.



-- 
Mike Clarkson					mike@ists.ists.ca
Institute for Space and Terrestrial Science	uunet!attcan!ists!mike
York University, North York, Ontario,		FORTRAN - just say no. 
CANADA M3J 1P3					+1 (416) 736-5611

pcg@cs.aber.ac.uk (Piercarlo Grandi) (09/20/90)

On 18 Sep 90 00:21:37 GMT, lgm@cbnewsc.att.com (lawrence.g.mayka) said:

lgm> In article <PCG.90Sep13135243@odin.cs.aber.ac.uk>,
lgm> pcg@cs.aber.ac.uk (Piercarlo Grandi) writes:

pcg> The real challenge here is that we want some syntax that says, apply
pcg> this operator symbol to these arguments and return these value_s_. Even
pcg> lisp syntax does not really allow us to easily produce multiple values.

lgm> Common Lisp supports multiple value return with a fairly simple syntax.

It is fairly ad hoc, ugly and "inefficient".  It essentially implies that
you are assigning a list to a list.  It is also slightly inconsistent in
flavour with the rest of the language.

Conventions like that used by Aleph (no return values, only in and out
parameters), or Forth (take N from the stack, return M from the stack)
seem to be much more consistent...

You have a point that it is simple, though.  But it is still a bit of a
fixup job (IMNHO of course).
--
Piercarlo "Peter" Grandi           | ARPA: pcg%uk.ac.aber.cs@nsfnet-relay.ac.uk
Dept of CS, UCW Aberystwyth        | UUCP: ...!mcsun!ukc!aber-cs!pcg
Penglais, Aberystwyth SY23 3BZ, UK | INET: pcg@cs.aber.ac.uk

tmb@ai.mit.edu (Thomas M. Breuel) (09/21/90)

In article <12812@ists.ists.ca>, mike@ists.ists.ca (Mike Clarkson) writes:
|> moore%cdr.utah.edu@cs.utah.edu (Tim Moore) writes:
|> >It's not the macros that the application programmers write themselves
|> >that cause problems [...], it's the macros that are a part of
|> >the language that cause trouble.
|> 
|> When I look at the kind of environment I write Common Lisp code in
|> these days, I feel that most application programmers make very heavy use
|> of macro packages, some of which are getting very large.  How many files
|> that you have start with
|> 
|> (require 'pcl)
|> (require 'clx)
|> (require 'loop)
|> 
|> etc.  This doesn't invalidate Jeff Jacobs or Tim Moore's arguments, but it
|> leads me to an observation.  As we go on as lisp programmers (in Common
|> Lisp), we are building higher and higher levels of abstraction, relying on
|> larger and larger programming bases above the underlying language.
|> [...]
|> I dearly love Scheme as a language, but because of the lack of
|> standardization [of macros] it's difficult to find the abstracting packages
|> on which we increasingly rely.  I feel that the approach of encouraging
|> implementations to continue to experiment with different solutions has
|> had its merits (first class continuations as an example), but the time
|> has come to solidify the language and build upwards on the very real
|> accomplishments of the base language. 

The use of macros to implement PCL, CLX, and LOOP is
highly questionable:

* In a language like Scheme (as opposed to CommonLisp)
that provides efficient implementations of higher order
constructs with natural syntax, there is little need or
excuse for something as horrible as LOOP in the first
place.  

* I see no reason for an X interface to make heavy use of
macros (everything can be implemented nicely as functions
or integrable functions).  

* It's perhaps OK to prototype an object system as a macro
package, but for production use, an object system (or any
equally complex extension to the language) should be part
of the compiler, for reasons of efficiency and
debuggability.

Altogether, I think the decision not to standardise macros
in Scheme has had the beneficial side effect of
discouraging their use (maybe that was one of the intents
in the first place). On the other hand, I doubt that the
differences between DEFINE-MACRO in different implementations
of Scheme are so significant that it is a serious obstacle for
writing a portable X window system interface, portable
iteration constructs, or portable prototypes of something
like an object system (even PCL doesn't run out of the box
on every implementation of CommonLisp).

Much more important to my taste than a standard macro
facility (there exists a de-facto standard anyway), would
be guidelines for a standard foreign function interface
to Scheme (at least for numerical code), and guidelines
for optimization features such as MAKE-MONOTYPE-VECTOR.
I say "guidelines" because I realize that such extensions
should not really be part of the language (they may not
even be implementable on some systems), but that there
should be a common consensus about how extensions of
this kind should look.

lgm@cbnewsc.att.com (lawrence.g.mayka) (09/21/90)

In article <1990Sep20.170311@ai.mit.edu>, tmb@ai.mit.edu (Thomas M. Breuel) writes:
> * In a language like Scheme (as opposed to CommonLisp)
> that provides efficient implementations of higher order
> constructs with natural syntax, there is little need or
> excuse for something as horrible as LOOP in the first
> place.  

We can talk about Dick Waters' SERIES package instead, if you prefer.
Though functional rather than iterative in style, the SERIES package
still makes heavy use of macros.  Why?  To perform source code
rearrangement during compilation, especially the unraveling of series
expressions into iteration to improve performance.  In general, I see
the primary purpose of macros not as simple syntactic sugar but as a
means of manipulating source code "surreptitiously" at compile time
for reasons of efficiency improvement, tool applicability (e.g.,
recording the source file of a function definition so that Meta-. can
find it on request), existence in the compilation environment (e.g.,
DEFPACKAGE), etc.



	Lawrence G. Mayka
	AT&T Bell Laboratories
	lgm@iexist.att.com

Standard disclaimer.

jjacobs@well.sf.ca.us (Jeffrey Jacobs) (09/22/90)

Larry Masinter wrote:
>>>[As a side note, many learning LISP programmers frequently do
>>> encounter self-modifying code and are mystified by it, e.g.,
>>>   (let ((var '(a b c)))
>>>       ...
>>>       (nconc var value))
>>> ]

To which I replied:
>>This isn't self modifying code, it's destructive modification of a data
>>structure (and let's avoid a circular argument about "var" being "code";
>>the learning programmer encounters this as a data example, not an
>>example of code).

And Tim Moore wrote:

>It is self-modifying code in the sense that it changes the behavior of
>the form from one evaluation to the next by tweaking a structure that
>is not obviously part of the global state of the program.

Tim is quite correct, and points out what I (now) assume Larry Masinter
meant.  I (mis-)took the example in a different context.

>It's not legal Common
>Lisp code, because the constant may be in read-only memory.

Correct me if I'm wrong, but it seems to me that this *is* legal and
that the compiler should specifically *not* allocate quoted lists such
as that to read-only memory!

In any case, it's basically poor programming style, which is found in
every language.

>Rather, it's the macros that are a part of
>the language that cause trouble. Consider how often Common Lisp
>programmers use cond, setf, dotimes, dolist, do, and do*. The
>expansions of these macros can be pretty hairy.
>Maintaining the
>correspondence between the internal representation of a function and
>the form that produced it is not trivial.

This assumes a destructive replacement at the point of invocation of
these forms.  In the dark distant past, macro expansion during
interpretation occurred *every* time the form was encountered (and
COND wasn't a macro); one paid a price during interpretation in
exchange for ease of debugging and retention of the original form,
and got the improvement in efficiency upon compilation.  As time went
on, destructive replacement during interpretation became an option, and
subsequently became the de-facto default.  (Note that this provides its
own set of problems, i.e. a macro being changed while previous
invocations don't get updated.)

But none of these arguments is particularly compelling, IMHO.  They
throw the baby out with the bath water, i.e. "because the
programmer may have difficulty dealing with certain things in such
an environment, we will eliminate all such capabilities"!


>"Ah, youth.
I remember youth...
>Ah, statute of limitations."
Thank god!
>		-John Waters

Jeffrey M. Jacobs
ConsArt Systems Inc, Technology & Management Consulting
P.O. Box 3016, Manhattan Beach, CA 90266
voice: (213)376-3802, E-Mail: 76702.456@COMPUSERVE.COM

aarons@syma.sussex.ac.uk (Aaron Sloman) (09/22/90)

	Lawrence G. Mayka
	AT&T Bell Laboratories (lgm@cbnewsc.att.com)
writes:
>
> Most complaints I've heard about Lisp syntax, from both novices and
> regular users, boil down to the claim that the repetitive,
> positionally dependent syntax of most Lisp constructs has insufficient
> redundancy for easy recognition by the human eye.

And insufficient redundancy for the system to provide compile-time
help to the user who makes mistakes.

> ....Repetition of
> parentheses could be reduced by defining (e.g., via the macro
> character facility) a character pair such as {} to be synonymous with
> ().  Positional dependency could be reduced simply by making greater
> use of keywords (e.g., defining macros synonymous with common
> constructs but taking keyword arguments instead of positional ones).
> The difficulty some people have in reading Lisp is hence not intrinsic
> to its syntax, but rather an accident of common practice.
>

If this sort of enhancement of redundancy and readability were done
in some standard, generally agreed, way, then some of the main
objections that I and others have to the lisp family of languages
(Common Lisp, Scheme, T, ....) would go away. I _might_ even
consider using T in place of Pop-11 one day???


Aaron Sloman,
School of Cognitive and Computing Sciences,
Univ of Sussex, Brighton, BN1 9QH, England
    EMAIL   aarons@cogs.sussex.ac.uk
or:
            aarons%uk.ac.sussex.cogs@nsfnet-relay.ac.uk
            aarons%uk.ac.sussex.cogs%nsfnet-relay.ac.uk@relay.cs.net
    BITNET: aarons%uk.ac.sussex.cogs@uk.ac
    UUCP:     ...mcvax!ukc!cogs!aarons
            or aarons@cogs.uucp

aarons@syma.sussex.ac.uk (Aaron Sloman) (09/22/90)

jwz@lucid.com (Jamie Zawinski) writes:

> .....And what other language
> lets you define new control structures?

Just for the record -- Pop-11 does, either using macros (though they
are somewhat different from Lisp macros, as pointed out in previous
discussion in comp.lang.lisp) or by defining new "syntax words" that
read in some text and plant code for the incremental compiler.

Aaron Sloman,
School of Cognitive and Computing Sciences,
Univ of Sussex, Brighton, BN1 9QH, England
    EMAIL   aarons@cogs.sussex.ac.uk
or:
            aarons%uk.ac.sussex.cogs@nsfnet-relay.ac.uk

miller@GEM.cam.nist.gov (Bruce R. Miller) (09/22/90)

In article <20692@well.sf.ca.us>, Jeffrey Jacobs writes: 
> 
> Larry Masinter wrote:
> >>>[As a side note, many learning LISP programmers frequently do
> >>> encounter self-modifying code and are mystified by it, e.g.,
> >>>   (let ((var '(a b c)))
> >>>       ...
> >>>       (nconc var value))
> To which I replied:
> >>This isn't self modifying code, it's ...
> And Tim Moore wrote:
> >It is self-modifying code in the sense ...
> Tim is quite correct, and points out what I (now) assume Larry Masinter
> ...
> >>It's not legal Common Lisp code, because ...
> Correct me if I'm wrong, but it seems to me that this *is* legal and...
> In any case, it's basically poor programming style, which is found in
> every language.

That last point is RIGHT in any case.
Without digging thru CLtL to determine its legality, it seems pretty
clear that it is unclear what it SHOULD do!  And that different
interpreters/compilers will make different choices.  And that the
Committee probably didn't even want to specify.

Having written such things accidentally in the past (most likely on a
misguided optimization binge :>), and been, of course, severely bitten, I
was curious what Rel 8 Symbolics (on Ivory) did.  Interpreted, it does
permanently modify the data.  Compiled, it gave an error to the effect:

Error: Attempt to RPLACD a list that is embedded in a structure and
       therefore cannot be RPLACD'ed.  

BRAVO! (And, to the best of my knowledge, they don't even have
read-only memory; or maybe Ivory does now?)

In any case, one could imagine a perfectly legitimate interpreter
consing the list fresh every time, giving a 3rd behavior.  Before you
groan, consider that this might in fact be the most consistent behavior!
The (QUOTE (A B C)) form is (conceptually, at least) evaluated each time
the function is called!  Should (QUOTE (A B C)) sometimes
return (A B C FOO)?

In any case, I suspect we've wandered off on a tangent here...

> >Rather, it's the macros that are a part of
> >the language that cause trouble. Consider how often Common Lisp

YOW! For my tastes (note that I said Taste!) the lisp macro may be the
single most dramatic and convincing feature in FAVOR of lisp!
[Not that I don't like its other features.]

Some people argue that the lisp code/data equivalence is seldom used --
If they mean programs that explicitly build other programs, yes I write
relatively few of them.  BUT, the power & flexibility of macros comes
from being able to write macros almost exactly like writing any other
lisp.  And most lisp programmers I know, myself included, use macros
quite heavily.  Even when I use a continuation style (sort of), I
usually hide it in a cozy with-foo type macro.
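
(The shape I mean, as a sketch -- WITH-OPEN-THING and its two helper
functions are made-up names:)

    (defun call-with-open-thing (fn)
      (let ((thing (acquire-thing)))          ; hypothetical acquisition
        (unwind-protect (funcall fn thing)
          (release-thing thing))))            ; hypothetical cleanup

    (defmacro with-open-thing ((var) &body body)
      `(call-with-open-thing #'(lambda (,var) ,@body)))

The caller writes (with-open-thing (x) ...) and never sees the
continuation-passing underneath.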

bruce.

moore%cdr.utah.edu@cs.utah.edu (Tim Moore) (09/22/90)

In article <20692@well.sf.ca.us> jjacobs@well.sf.ca.us (Jeffrey Jacobs) writes:
>
>Larry Masinter wrote:
>>>>[As a side note, many learning LISP programmers frequently do
>>>> encounter self-modifying code and are mystified by it, e.g.,
>>>>   (let ((var '(a b c)))
>>>>       ...
>>>>       (nconc var value))
...
>And Tim Moore wrote:
...
>>>It's not legal Common
>>>Lisp code, because the constant may be in read-only memory.
>
>Correct me if I'm wrong, but it seems to me that this *is* legal and
>that the compiler should specifically *not* allocate quoted lists such
>as that to read-only memory!

For better or worse, this is from CLtL2, pg 115:

    X3J13 [the body specifying ANSI Common Lisp] voted in January 1989 to
    clarify that it is an error to destructively modify any object that
    appears as a constant in executable code, whether within a quote
    special form or as a self-evaluating form.

I'm not sure where this was specified in the original CLtL; obviously
it wasn't specified too clearly, or X3J13 wouldn't have needed to
clarify this point.

>Jeffrey M. Jacobs
>ConsArt Systems Inc, Technology & Management Consulting
>P.O. Box 3016, Manhattan Beach, CA 90266
>voice: (213)376-3802, E-Mail: 76702.456@COMPUSERVE.COM


Tim Moore                    moore@cs.utah.edu {bellcore,hplabs}!utah-cs!moore
"Ah, youth. Ah, statute of limitations."
		-John Waters

peter@ficc.ferranti.com (Peter da Silva) (09/22/90)

In article <PCG.90Sep13135243@odin.cs.aber.ac.uk>, pcg@cs.aber.ac.uk (Piercarlo Grandi) writes:
> The real challenge here is that we want some syntax that says, apply
> this operator symbol to these arguments and return these value_s_. Even
> lisp syntax does not really allow us to easily produce multiple values.

I may be being more than usually dense, here, but what's wrong with returning
a list? What could be more natural?
-- 
Peter da Silva.   `-_-'
+1 713 274 5180.   'U`
peter@ferranti.com

peter@ficc.ferranti.com (Peter da Silva) (09/22/90)

In article <12812@ists.ists.ca> mike@ists.ists.ca (Mike Clarkson) writes:
> I feel that the approach of encouraging
> implementations to continue to experiment with different solutions has
> had its merits (first class continuations as an example), but the time
> has come to solidify the language and build upwards on the very real
> accomplishments of the base language. 

As an object lesson on what can happen to a language if you let this sort
of experimentation run unchecked, look at Forth. They are finally working
on an ANSI standard for the language, and they can't even agree on how
division should work!
-- 
Peter da Silva.   `-_-'
+1 713 274 5180.   'U`
peter@ferranti.com

pcg@cs.aber.ac.uk (Piercarlo Grandi) (09/24/90)

On 22 Sep 90 15:35:21 GMT, peter@ficc.ferranti.com (Peter da Silva) said:

peter> In article <PCG.90Sep13135243@odin.cs.aber.ac.uk>,
peter> pcg@cs.aber.ac.uk (Piercarlo Grandi) writes:

pcg> The real challenge here is that we want some syntax that says,
pcg> apply this operator symbol to these arguments and return these
pcg> value_s_. Even lisp syntax does not really allow us to easily
pcg> produce multiple values.

In other words there is some difference between say

	(f (a b c d e))		vs.	(f a b c d e)
	(return (a b c))	vs.	(return a b c)

For example,

	(lambda (x y) (list (div x y) (rem x y)))

could be rewritten as

	(lambda (x y) (div x y) (rem x y))

if we removed the implicit-progn rule. But this opens up other
questions...

peter> I may be being more than usually dense, here, but what's wrong
peter> with returning a list? What could be more natural?

That it is not returning multiple values -- it is returning a single
value.  You can always get out of the multiple value difficulty like
that.  Unfortunately it is then a good reason to require functions to
have a single parameter as well.  Maybe this is the right way to do
things, or maybe a function over a cartesian product or a curried
function is not quite the same thing as a function over a list.

One argument I could make is that Aleph or Forth seem to handle, each in
their way, the multiple-parameter/multiple-result problem more
elegantly, in a more fundamental way than passing around lists and
multiple-bind.

The effect may be largely the same, though.


--
Piercarlo "Peter" Grandi           | ARPA: pcg%uk.ac.aber.cs@nsfnet-relay.ac.uk
Dept of CS, UCW Aberystwyth        | UUCP: ...!mcsun!ukc!aber-cs!pcg
Penglais, Aberystwyth SY23 3BZ, UK | INET: pcg@cs.aber.ac.uk

ok@goanna.cs.rmit.oz.au (Richard A. O'Keefe) (09/24/90)

On 14 Sep 90 07:45:24 GMT, ok@goanna.cs.rmit.oz.au (I) wrote
> operation.  Floating-point "+" and "*" are not associative.

In article <PCG.90Sep15203024@odin.cs.aber.ac.uk>, pcg@cs.aber.ac.uk (Piercarlo Grandi) writes:
> The semantics *are* different.  Didn't I write that?

I was responding to this exchange:
    Andy Freeman wrote:
	"+" isn't really a binary operator, neither is "*";
	there are surprisingly few true binary operations
    to which Piercarlo Grandi replied
	Precisely.  Agreed.

and surely to agree that "+" and "*" are "not really" binary operations
but are really N-ary "associative" operations "precisely" is to say that
they are associative?

> ok> For integer and rational arithmetic, there's no problem,
> 
> Well, this is a case in point for my argument about the pitfalls; there
> is still a problem. Nobody constrains you to have only positive numbers
> as the operands to n-ary fixed point +, so that
> 
> 	(+ -10 +32767 +10)
> 
> is not well defined on a 16 bit machine, unless you do modular
> arithmetic throughout.

This turns out not to be the case.  Given any arithmetic expression made
up of 16-bit integer constants, variables having suitable values, and
the operations "+", unary "-", binary "-", and "*", if the result of the
whole expression should be in range, the answer is guaranteed correct.
If the implementation detects integer overflow, then arithmetic
expressions are only weakly equivalent to their mathematical counterparts,
but if integer overflow is ignored, then the answer is right iff it is in
range.  In particular, on a 16-bit machine, if
	(+ -10 +32767 +10)
gives you any answer other than 32767 or "overflow", your Lisp is broken.

Given that I referred to "integer and rational arithmetic", I think it's
reasonably clear that I was referring to Common Lisp (and of course T!)
where integer and rational arithmetic are in fact offered.


> As we already know but sometimes forget, arithmetic on computers follows
> *very* different rules from arithmetic in vanilla mathematics,

Not for integer and rational arithmetic it doesn't, and even N-bit
arithmetic is correct *modular* arithmetic (except for division).

> 	(+ a b c d e)
> 
> simply means apply repeated *computer* addition on *computer* fixed or
> floating point throughout.

But it is an utterly hopeless notation for that!  The only excuse for
using N-ary notation for addition is that it genuinely doesn't matter
which way the additions are done.  For 2s-complement (with overflow
ignored) and for integer and rational arithmetic, this is indeed the
case, and (+ ...) notation can be used without misleading.  But for
FP it matters.

> ok> but anyone doing floating point calculations in Lisp has to be very
> ok> wary of non-binary + and * .
> 
> Anyone doing floating point arithmetic on *any* machine, IEEE standard
> or Cray, has to be very very wary of assuming it is the same as
> arithmetic on reals

This completely misses the point.  I am supposing someone who thoroughly
understands floating-point arithmetic.  (+ a b c) is a trap for _that_
person, no matter how much he understands IEEE or CRAY or even /360
arithmetic, because there is no promise about how it will turn into
>floating-point< operations (which we assume the user thoroughly grasps).
(+ a b c) may be compiled as (+ (+ a b) c) or as (+ a (+ b c)) or even
as (+ (+ a c) b).  For floating-point, these are different formulas.
The trap here is that the particular Lisp used by a floating-point
expert may compile (+ a b c) into (+ (+ a b) c) and he may mistake this
for something guaranteed by the language.  The copy of CLtL2 I've
borrowed is at home right now, but there is nothing in CLtL1 to require
this.  The entire description of "+" is
	+ &rest numbers				[Function]
	This returns the sum of the arguments.  If there are no arguments,
	the result is 0, which is an identity for this operation.
There is no reason to expect that + will always involve an addition.
As far as I can see, there is nothing to prevent a CL implementation
compiling
	(declare (type float X))
	(+ X X X)
as
	(* 3 X)
even though on the given machine the two imagined floating-point formulas
may yield different answers.  I'm sure CLtL2 must clarify this.

-- 
Heuer's Law:  Any feature is a bug unless it can be turned off.

roman@sparc17.hri.com (Roman Budzianowski) (09/25/90)

In article <E++5MBC@xds13.ferranti.com>, peter@ficc.ferranti.com (Peter
da Silva) writes:
> In article <PCG.90Sep13135243@odin.cs.aber.ac.uk>, pcg@cs.aber.ac.uk
> (Piercarlo Grandi) writes:
> > The real challenge here is that we want some syntax that says, apply
> > this operator symbol to these arguments and return these value_s_. Even
> > lisp syntax does not really allow us to easily produce multiple values.
> 
> I may be being more than usually dense, here, but what's wrong with returning
> a list? What could be more natural?
> -- 
> Peter da Silva.   `-_-'
> +1 713 274 5180.   'U`
> peter@ferranti.com

My understanding was that the major reason is efficiency: multiple
values are returned on the stack, instead of consing a new list.  The
rest is syntactic sugar (important in that if you are interested only
in the first value there is no additional syntax).
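
For example (my illustration): FLOOR returns a quotient and a remainder
as two values, and a caller that wants only the first needs nothing
extra:

    (floor 7 2)            ; => 3 and 1, with no list consed
    (+ 1 (floor 7 2))      ; => 4; the second value silently disappears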

sfk@otter.hpl.hp.com (Steve Knight) (09/25/90)

Eric Olson points out:
> However there are significant limitations on what can be done in C++.
> For example, suppose you need to convolve an image as quickly as
> possible, and that the convolution kernel is not known at compile
> time.  In Lisp, one can easily write code to make a function that applies
> a specific convolution kernal to an image.  [...]
> My impression is that writing a similiar piece of code in Algol
> languages would be needlessly complicated (although doable).

This point is one that illustrates the usefulness of creating code on the
fly.  Where I differ in view is in thinking that this capability does not
entail the direct relationship between internal and external syntax that Lisp's
syntax possesses.  

Jeff Dalton has argued, and I concede, that if you don't have such a direct 
relationship then a certain simplicity is lost.  I justify this, in my mind,
by believing that the benefit of having a block structured syntax outweighs
the minor complications introduced.  Of course, you'd have to agree the 
complications were minor, the simplicity was not significant, and that
a different syntax might be better -- and it looks to me as if Jeff has a
different assessment on all of these.

> Anyway, I've yet to find an application that can be easily implemented
> in an Algol language that cannot be readily implemented in Lisp.
> However, the above examples are easily implemented in Lisp, but, I
> think, are not readily implemented in Algol languages.

I'd be happy to prove that this was not possible, on the simple basis of
matching constructs & concepts between the two schools.  I've never heard
anyone state the contrary.  Of course, this is NOT the same as stating that
Lisp is an unsuitable delivery vehicle -- which, alas, is all too often
the case.

Steve

jeff@aiai.ed.ac.uk (Jeff Dalton) (09/26/90)

In article <1990Sep20.234752.19591@cbnewsc.att.com> lgm@cbnewsc.att.com (lawrence.g.mayka) writes:
>In article <1990Sep20.170311@ai.mit.edu>, tmb@ai.mit.edu (Thomas M. Breuel) writes:
>> * In a language like Scheme (as opposed to CommonLisp)
>> that provides efficient implementations of higher order
>> constructs with natural syntax, there is little need or
>> excuse for something as horrible as LOOP in the first
>> place.  

I think this ritual Common Lisp bashing is becoming a bit of a bore,
don't you?

The truth is that I can easily write a named-let macro in Common Lisp
and write a large class of loops in exactly the way I would write them
in Scheme.  The efficiency isn't bad either, because most CL compilers
can optimize self-tail-recursion for local functions.  (Yes, I know
Scheme does better than that.)
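
(Something like this, say -- NLET is my name for it:)

    (defmacro nlet (name bindings &body body)
      `(labels ((,name ,(mapcar #'first bindings)
                  ,@body))
         (,name ,@(mapcar #'second bindings))))

    (nlet iter ((i 0) (acc nil))
      (if (= i 5)
          (nreverse acc)
          (iter (1+ i) (cons i acc))))    ; => (0 1 2 3 4)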

Moreover, if I want something like the Abelson & Sussman COLLECT macro
for streams I can write it in portable Common Lisp.  I would argue
that code using such macros is significantly easier to understand than
the same thing done with nested recursive loops or nested calls to
MAP-STREAM and FLATMAP.

Until Scheme has a standard way of writing macros, I can't write
such things in portable Scheme.  So I am very much in favor of
having such a standard mechanism for Scheme.

Of course, if you prefer higher-order functions instead of macros,
there is already a large set of them available in Common Lisp (MAPCAR,
SOME, REDUCE, etc.) and it is easy to write more.

So (1) Scheme's advantages over Common Lisp in this area, while real
and significant, are not as great as one might suppose, and (2) there
are iteration macros that are worth having regardless of one's views
on LOOP.

>We can talk about Dick Waters' SERIES package instead, if you prefer.

Yes, let's.  I don't expect Scheme folk to flock to this beastie,
but it does show one of the benefits of having macros, namely that
it enables such work to be done.

Moreover, one of the arguments for Scheme, I would think, is that
anyone doing such work in Scheme would be encouraged by the nature of
the language to look for simple, general, "deep" mechanisms rather
than the relatively broad, "shallow" mechanisms that would fit better
in a big language such as Common Lisp.  (This is not to imply that the
Series package is shallow or in any way bad (or even that shallow in
this sense is bad).  Anyway, I'm speaking in general here and no
longer taking Series as my example.)

-- Jeff

lou@cs.rutgers.edu (lou) (09/27/90)

In article <1990Sep18.180829.8801@hellgate.utah.edu> moore%cdr.utah.edu@cs.utah.edu (Tim Moore) writes:

>It's not the macros that the application programmers write themselves
>that cause problems; at least the programmer knows what the expansions
>of those macros look like. Rather, it's the macros that are a part of
>the language that cause trouble. Consider how often Common Lisp
>programmers use cond, setf, dotimes, dolist, do, and do*. The
>expansions of these macros can be pretty hairy. Maintaining the
>correspondence between the internal representation of a function and
>the form that produced it is not trivial.

Here is a case in point that actually happened to a student in my AI
Programming class.  He was getting an error message about bad
variables in a let, and could not figure out what was going wrong,
since the code being executed did not have a let in it!  It turns out
that in our lisp, prog is implemented as a macro that expands into a
form that involves a let.  He did have a prog, and the syntax of that
prog was bad - not bad enough to prevent the macro expansion but bad
enough to cause the let it expanded into to give an error.  If the
error message were in terms of a prog he would have seen his error
quite quickly.  In fact he did not, and wasted time on it.

On the other hand, using the interactive debugger (and my vast
experience :-) I was able to figure it out pretty quickly.  But this
does point out that any time the code as seen in the debugger / error
messages / etc. differs from the code as written by the programmer,
there is a price to pay.
--
					Lou Steinberg

uucp:   {pretty much any major site}!rutgers!aramis.rutgers.edu!lou 
internet:   lou@cs.rutgers.edu

ram@wb1.cs.cmu.edu (Rob MacLachlan) (09/30/90)

You just weren't using the right compiler.  foo.lisp is:
    (defun test ()
        (prog ((:end 42))
            (print :end)))

________________________________
* (compile-file "test:foo.lisp")

Python version 0.0, VM version DECstation 3100/Mach 0.0 on 29 SEP 90 01:13:48 pm.
Compiling: /afs/cs.cmu.edu/project/clisp/new-compiler/tests/foo.lisp 29 SEP 90 01:13:19 pm


In: DEFUN TEST
  (PROG ((:END 42))
        (PRINT :END))
--> BLOCK 
==>
  (LET ((:END 42)) (TAGBODY (PRINT :END)))
Error: Name of lambda-variable is a constant: :END.
________________________________

  Rob (ram@cs.cmu.edu)