[comp.lang.misc] What does an anti-perl look like

rh@smds.UUCP (Richard Harter) (06/11/91)

Most of the articles on the ap thread have expired here; in particular I missed
the original article that sparked the exchanges.  Byron, I gather, dislikes
the style of Perl and would like, so to speak, an anti-Perl, even if he has
to write it himself.  This is really a lang.misc topic, so I've added that
as a group.

Byron, you don't want to do it.  It's more work to develop a language than
one might imagine, particularly if it is to be a good one.  If you don't
like Perl there are a number of perfectly reasonable alternatives, among
them TCL, Python, and Icon.  There is also Lakota, of which I am a
principal developer.  [Apologies duly made to all who fuss about people
talking about their own work.  My justification is that I want to talk
about the language design issues, which are of general interest.]

Briefly, Lakota occupies the same general linguistic niche as Perl, but
it is about as far as one can get from the style of Perl.  When I say the
same niche, what I am getting at is this:  The power of UNIX shell programming
rests not in the shell language itself, but in the collection of tool
programs that are standardly available, e.g. sed, awk, find, uniq, sort,
and their interconnection with pipes and redirection.  There are a number
of problems with this bag of tools approach, however.  These include
(a) the execution of the resulting scripts is slow because each tool is
a separate process, (b) most of the tools (and the shell itself) have
various size limitations, (c) the standard tools do not complement each
other completely, (d) the implementations of the tools vary from OS to 
OS, and (e) there are numerous conflicts between command line usage
in the shell and the principal tools.  And that's only the beginning of
the list :-).

The Perl solution (and the Lakota solution) is to provide a single language
which has the functionality of the shell and the principal tools.  Now
the approach that Larry has taken with Perl is to, in effect, fuse the
major tools of UNIX into a single language with a common syntax and
the annoying limitations removed.  This approach has a number of merits;
it fits in with the UNIX/C style, which offers both familiarity and
the use of a proven intellectual technology.
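[To make the point concrete, here is a rough sketch in Python (one of the
alternatives named above) of the classic tr | sort | uniq -c | sort -rn
word-count pipeline collapsed into a single process.  The helper name
word_frequencies is mine, not anything standard.]

```python
from collections import Counter

# The shell idiom   tr ' ' '\n' | sort | uniq -c | sort -rn
# done in one process: no per-tool fork/exec cost, no tool line-length
# limits, and the result is a data structure, not a text stream.
def word_frequencies(text):
    """Return (word, count) pairs, most frequent first."""
    return Counter(text.split()).most_common()

sample = "the quick fox and the lazy dog and the fox"
for word, count in word_frequencies(sample):
    print(count, word)
```

[The intermediate results never leave the process, which is exactly the
limitation-removal that a fused language buys you.]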

The approach taken in Lakota is much more radical for those who are
accustomed to doing everything "the UNIX way", although not particularly
radical in terms of languages in general.  The essence of the matter is
to accept the shell tools functionality as a requirement but scrap the
entire shell syntax and style.  In a sense this is a dangerous step
because one has to resolve a lot of issues anew.  However, there is a lot
to gain.  Conventional shell programming has many more faults than the ones
I have listed above.  What they amount to is that shell programming
produces scripts that are hard to read and hard to maintain, and programming
structures that do not scale up well to large programs.

Some of the features that make Lakota an 'anti-Perl' are:

(A)  Brutally stripped syntax.  The only special characters are (){} and
the white space characters.  There are no quote characters, no
metacharacters, and no escape character, nor any need for them.

(B)  English language commands.  Call it COBOL if you like.  The upshot
is that someone who has a modest command of the language can read it
and maintain it without being a language lawyer.

(C)  A hierarchical procedure/module structure with variable isolation
between procedures and modules.  Variables are, by default, local to the
procedure they are used in; they can be made global to a particular module.
All inter-module coupling must be done via explicitly shared procedures.

(D)  Variable substitution is done via enclosure in parentheses and may
be nested, e.g. (a-(b)) means replace (b) by the contents of b and then
replace the resulting string (the concatenation of a- and the contents
of b) by its contents.

(E)  A minimal number of special variables; functional operators are
used instead.  [Braces {} are used to delimit functions.]  Thus 
{all-arguments} for the list of all arguments and {argument 15} for the
15th argument.

And so on and so forth.
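[Since Lakota itself is not at hand here, a small Python sketch of the
nested-substitution rule in (D) as I read it; the function substitute and
the env dictionary are my own illustrative names, not Lakota syntax.]

```python
import re

def substitute(expr, env):
    """Resolve nested (...) substitutions, innermost first.

    For '(a-(b))': first (b) is replaced by the contents of b, and
    the resulting name is then itself looked up -- the rule in (D).
    """
    innermost = re.compile(r'\(([^()]*)\)')   # a group with no nested parens
    while True:
        match = innermost.search(expr)
        if match is None:
            return expr
        expr = expr[:match.start()] + env[match.group(1)] + expr[match.end():]

env = {'b': 'x', 'a-x': 'hello'}
print(substitute('(a-(b))', env))   # (b) -> 'x', then (a-x) -> 'hello'
```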

The main point I would like to make is that it is not such a simple thing
to design and implement a language, particularly one which is to have
strong string manipulation capabilities.

Also this article is longer than it should be.
-- 
Richard Harter, Software Maintenance and Development Systems, Inc.
Net address: jjmhome!smds!rh Phone: 508-369-7398 
US Mail: SMDS Inc., PO Box 555, Concord MA 01742
This sentence no verb.  This sentence short.  This signature done.

byron@donald.tamu.edu (Byron Rakitzis) (06/11/91)

In article <541@smds.UUCP> rh@smds.UUCP (Richard Harter) writes:
>Most of the articles on the ap thread have expired here; in particular I missed
>the original article that sparked the exchanges.

>Byron, you don't want to do it.  It's more work to develop a language than
>one might imagine, particularly if it is to be a good one.  If you don't
>like Perl there are a number of perfectly reasonable alternatives, among
>them TCL, Python, and Icon.

Your comment is perfectly valid. So far, I have received numerous messages
with suggestions for (already existing) alternatives to perl. Most people
named TCL, Python, Icon (as above) but I have also heard that people use
Scheme. This is the first I've heard of Lakota. Is it available for anonymous
ftp?

What it boils down to is this: whether or not ap is a lot of work, I fear it
is going to get written sometime in the near future. When I re-implemented
rc, I thought I came close to what I wanted as my command interpreter, but
now that I am aware of its shortcomings, I want to try to improve on the
rc model. I have several goals in mind:

1) I want the core command interpreter to be independent of any particular
application.

2) I want it to be easy to add modules so that the core command interpreter
may be grown into, say, an ap.

3) I want the syntax of the language to be elegant. This sounds like a tall
order, but what I'm doing is ruling out languages like Scheme; I just cannot
deal with the Lots of Irrelevant and Stupid Parentheses. Also, I'm afraid that
Python does not quite fit the bill either. Syntax that is whitespace-dependent
just feels "wrong". I know, I tried using Python for a while.

4) This follows from (3): I've recently checked out the Oberon system, and I
am really excited and intrigued by the ideas they bring forth in this environment.
I think they did certain things completely wrong: the interclicking mouse language,
the tiled windows, but the fact that anything on the screen may be instantly
interpreted as a command at the push of a button is a big win, in my view.
There is a window system somewhere in my fingers, and one day it is going to come
out. I'm going to need a command interpreter for this task.

Let me state here that I have not ruled out any of the other interpreters yet.
I still want to see if, say, TCL cannot satisfy my needs. Also, let me state
right away that (1)-(4) contain strong doses of religion; please don't take my
comments as flames. However, if you disagree, I'm always willing to argue. :-)

--
Byron Rakitzis
byron@archone.tamu.edu

jbw@maverick.uswest.com (Joe Wells) (06/12/91)

In article <541@smds.UUCP> rh@smds.UUCP (Richard Harter) writes:

   (A)  Brutally stripped syntax.  The only special characters are (){} and
   the white space characters.  There are no quote characters, no meta
   characters and no escape character, and no need for them.

Does this mean that the NUL character (0) is treated like a letter?

-- 
Joe Wells <jbw@uswest.com>

oz@ursa.ccs.yorku.ca (Ozan Yigit) (06/12/91)

byron@donald.tamu.edu (Byron Rakitzis) writes:

   3) I want the syntax of the language to be elegant. This sounds like a tall
   order, but what I'm doing is ruling out languages like Scheme; I just cannot
   deal with the Lots of Irrelevant and Stupid Parentheses.

Your loss. Scheme is a lot more than just Cambridge Polish notation
with lots of parens. I include a reasonably complete bibliography for
you or anybody else who may wish to have a more detailed view of this
language and some of the concepts embodied within. I am also certain
that these references would be helpful in designing and implementing
other languages worth remembering.

oz
---

%A John Reynolds
%T Definitional Interpreters for Higher Order Programming Languages
%J ACM Conference Proceedings
%P 717-740
%I ACM
%D 1972

%A Gerald Jay Sussman
%A Guy Lewis Steele Jr.
%T Scheme: an Interpreter for Extended Lambda Calculus
%R MIT AI Memo 349
%I Massachusetts Institute of Technology
%C Cambridge, Mass.
%D December 1975

%A Guy Lewis Steele Jr.
%A Gerald Jay Sussman
%T Lambda, the Ultimate Imperative
%R MIT AI Memo 353
%I Massachusetts Institute of Technology
%C Cambridge, Mass.
%D March 1976
%K imperative

%A Guy Lewis Steele Jr.
%T Lambda, the Ultimate Declarative
%R MIT AI Memo 379
%I Massachusetts Institute of Technology
%C Cambridge, Mass.
%D November 1976
%K declarative

%A Guy Lewis Steele Jr.
%T Debunking the ``Expensive Procedure Call'' Myth, or Procedure Call
Implementations Considered Harmful, or LAMBDA, the Ultimate GOTO
%J ACM Conference Proceedings
%P 153-162
%I ACM
%D 1977
%K ultimate

%A Guy Lewis Steele Jr.
%T Macaroni is Better than Spaghetti
%J Proceedings of the Symposium on Artificial Intelligence and
Programming Languages
%P 60-66
%O Special joint issue of SIGPLAN Notices 12(8) and SIGART Newsletter 64
%D August 1977
%K macaroni

%A Mitchell Wand
%T Continuation-Based Program Transformation Strategies
%J Journal of the ACM
%V 27
%N 1
%P 174-180
%D 1978

%A Mitchell Wand
%A Daniel P. Friedman
%T Compiling lambda expressions using continuations and
factorizations
%J Journal of Computer Languages
%V 3
%P 241-263
%D 1978

%A Guy Lewis Steele Jr.
%A Gerald Jay Sussman
%T The Revised Report on Scheme, a Dialect of Lisp
%R MIT AI Memo 452
%I Massachusetts Institute of Technology
%C Cambridge, Mass.
%D January 1978
%K r-report

%A Guy Lewis Steele Jr.
%T Rabbit: a Compiler for Scheme
%R MIT AI Memo 474
%I Massachusetts Institute of Technology
%C Cambridge, Mass.
%D May 1978
%K rabbit

%A Guy Lewis Steele Jr.
%A Gerald Jay Sussman
%T The Art of the Interpreter, or the Modularity Complex
(parts zero, one, and two)
%R MIT AI Memo 453
%I Massachusetts Institute of Technology
%C Cambridge, Mass.
%D May 1978
%K modularity

%A Guy Lewis Steele Jr.
%A Gerald Jay Sussman
%T Design of LISP-Based Processors
or, SCHEME: A Dielectric LISP
or, Finite Memories Considered Harmful
or, LAMBDA: The Ultimate Opcode
%R MIT-AI Memo 514
%I Massachusetts Institute of Technology
%C Cambridge, Mass.
%D 1979

%A Uwe F. Pleban
%T The Standard Semantics of a Subset of SCHEME, a Dialect of LISP
%R Computer Science Technical Report TR-79-3
%I University of Kansas
%C Lawrence, Kansas
%D July 1979

%A Guy Lewis Steele Jr.
%T Compiler Optimization Based on Viewing LAMBDA as RENAME + GOTO
%B AI: An MIT Perspective
%E Patrick Henry Winston
%E Richard Henry Brown
%I MIT Press
%C Cambridge, Mass.
%D 1980
%K rename+goto

%A Guy Lewis Steele Jr.
%A Gerald Jay Sussman
%T The Dream of a Lifetime: a Lazy Variable Extent Mechanism
%J Conference Record of the 1980 Lisp Conference
%P 163-172
%I The Lisp Conference
%D 1980
%K lazy

%A Drew McDermott
%T An Efficient Environment Allocation Scheme in an Interpreter
for a Lexically-Scoped Lisp
%J Conference Record of the 1980 Lisp Conference
%P 154-162
%I The Lisp Conference, P.O. Box 487, Redwood Estates CA.
%D 1980
%O Proceedings reprinted by ACM

%A Steven S. Muchnick
%A Uwe F. Pleban
%T A Semantic Comparison of Lisp and Scheme
%J Conference Record of the 1980 Lisp Conference
%P 56-65
%I The Lisp Conference, P.O. Box 487, Redwood Estates CA.
%D 1980

%A Uwe F. Pleban
%T A Denotational Approach to Flow Analysis and Optimization of
SCHEME, A Dialect of LISP
%R Ph.D. Dissertation
%I University of Kansas
%C Lawrence, Kansas
%D 1980

%A Mitchell Wand
%T Continuation-Based Multiprocessing
%J Conference Record of the 1980 Lisp Conference
%P 19-28
%I The Lisp Conference
%D 1980

%A Mitchell Wand
%T SCHEME Version 3.1 Reference Manual
%R Computer Science Technical Report 93
%I Indiana University
%C Bloomington, Indiana
%D June 1980
%K scheme3.1

%A Guy Lewis Steele Jr.
%A Gerald Jay Sussman
%T Design of a Lisp-based Processor
%J CACM
%V 23
%N 11
%P 628-645
%D November 1980

%A Rex A. Dwyer
%A R. Kent Dybvig
%T A SCHEME for Distributed Processes
%R Computer Science Department Technical Report #107
%I Indiana University
%C Bloomington, Indiana
%D April 1981

%A Gerald Jay Sussman
%A Jack Holloway
%A Guy Lewis Steele Jr.
%A Alan Bell
%T Scheme-79 - Lisp on a Chip
%J IEEE Computer
%V 14
%N 7
%P 10-21
%D July 1981
%I IEEE
%K scheme79

%A John Batali
%A Edmund Goodhue
%A Chris Hanson
%A Howie Shrobe
%A Richard M. Stallman
%A Gerald Jay Sussman
%T The Scheme-81 Architecture - System and Chip
%J Proceedings, Conference on Advanced Research in VLSI
%P 69-77
%E Paul Penfield, Jr.
%C Artech House, Dedham MA.
%D 1982
%K scheme81

%A Jonathan A. Rees
%A Norman I. Adams
%T T: A Dialect of Lisp or, LAMBDA: The Ultimate Software Tool
%J Conference Record of the 1982 ACM Symposium on Lisp and
Functional Programming
%P 114-122
%D 1982
%K T

%A Gerald Jay Sussman
%T LISP, Programming and Implementation
%B Functional Programming and its Applications
%E Darlington, Henderson, Turner
%I Cambridge University Press
%C London
%D 1982

%A R. Kent Dybvig
%T C-Scheme
%R Computer Science Department Technical Report #149 (MS Thesis)
%I Indiana University
%C Bloomington, Indiana
%D 1983

%A Pee Hong Chen 
%A W.Y. Chi
%A E.M. Ost
%A L.D. Sabbagh 
%A G. Springer
%T Scheme Graphics Reference Manual
%R Computer Science Technical Report No. 145
%I Indiana University 
%C Bloomington, Indiana
%D August 1983

%A Pee Hong Chen
%A Daniel P. Friedman
%T Prototyping data flow by translation into Scheme
%R Computer Science Technical Report #147
%I Indiana University
%C Bloomington, Indiana
%D August 1983

%A Carol Fessenden
%A William Clinger
%A Daniel P. Friedman
%A Christopher T. Haynes
%T Scheme 311 version 4 Reference Manual
%R Computer Science Technical Report 137
%I Indiana University
%C Bloomington, Indiana
%D February 1983
%O Superseded by Computer Science Technical Report 153, 1985
%K scheme311

%A William Clinger
%T The Scheme 311 compiler: An Exercise in Denotational Semantics
%J Conference Record of the 1984 ACM Symposium on Lisp and
Functional Programming
%P 356-364
%D 1984
%K compile311

%A Daniel P. Friedman
%A Christopher T. Haynes
%A Eugene E. Kohlbecker
%T Programming with Continuations
%B Program Transformation and Programming Environments
%P 263-274
%E P. Pepper
%I Springer-Verlag
%D 1984

%A Christopher T. Haynes
%A Daniel P. Friedman
%T Engines Build Process Abstractions
%J Conference Record of the 1984 ACM Symposium on Lisp and
Functional Programming
%C Austin, TX.
%P 18-24
%D 1984

%A Christopher T. Haynes
%A Daniel P. Friedman
%A Mitchell Wand
%T Continuations and Coroutines
%J Conference Record of the 1984 ACM Symposium on Lisp and
Functional Programming
%C Austin, TX.
%P 293-298
%D 1984

%A Daniel P. Friedman
%A Mitchell Wand
%T Reification: reflection without metaphysics
%J Conference Record of the 1984 ACM Symposium on LISP and Functional
Programming
%C Austin, TX.
%P 348-355
%D August 1984

%A Jonathan A. Rees
%A Norman I. Adams
%A James R. Meehan
%T The T manual, fourth edition
%I Yale University Computer Science Department
%D January 1984

%A Guillermo J. Rozas
%T Liar, an Algol-like Compiler for Scheme
%R S. B. Thesis
%I Department of Electrical Engineering and Computer Science,
Massachusetts Institute of Technology
%D January 1984
%K liar

%A Richard Schooler
%A James W. Stamos
%T Proposal For a Small Scheme Implementation
%R MIT LCS Memo TM-267
%I Massachusetts Institute of Technology
%C Cambridge, Mass.
%D October 1984

%T MIT Scheme Manual, Seventh Edition
%I Department of Electrical Engineering and Computer Science,
Massachusetts Institute of Technology
%C Cambridge, Mass.
%D September 1984
%K mitscheme

%T MacScheme Reference Manual
%I Semantic Microsystems
%C Sausalito, California
%D 1985
%K macscheme

%A Harold Abelson
%A Gerald Jay Sussman
%A Julie Sussman
%T Structure and Interpretation of Computer Programs
%I MIT Press
%C Cambridge, Mass.
%D 1985
%K siocp

%A William Clinger
%A Daniel P. Friedman
%A Mitchell Wand
%T A Scheme for a Higher-Level Semantic Algebra
%B Algebraic Methods in Semantics
%E J. Reynolds, M. Nivat
%P 237-250
%I Cambridge University Press
%C London
%D 1985

%A Amitabh Srivastava
%A Don Oxley
%A Aditya Srivastava
%T An (other) Integration of Logic and Functional Programming
%J Proceedings of the Symposium on Logic Programming
%P 254-260
%I IEEE
%D 1985

%E William Clinger
%T The Revised Revised Report on Scheme, or An Uncommon Lisp
%R MIT AI Memo 848
%I Massachusetts Institute of Technology
%C Cambridge, Mass.
%O Also published as Computer Science Department Technical Report 174,
Indiana University, June 1985
%D August 1985
%K rrrs

%A Daniel P. Friedman
%A Christopher T. Haynes
%T Constraining Control
%J Proceedings of the Twelfth Annual Symposium on Principles of
Programming Languages
%C New Orleans, LA.
%P 245-254
%I ACM
%D January 1985

%A Daniel P. Friedman
%A Christopher T. Haynes
%A Eugene E. Kohlbecker
%A Mitchell Wand
%T Scheme 84 Interim Reference Manual
%R Computer Science Technical Report 153
%I Indiana University
%C Bloomington, Indiana
%D January 1985
%K scheme84

%A Pee Hong Chen
%A David Sabbagh
%T Scheme as an Interactive Graphics Programming Environment
%R Computer Science Technical Report No. 166
%I Indiana University
%C Bloomington, Indiana
%D March 1985

%A R. Kent Dybvig
%A Bruce T. Smith
%T Chez Scheme Reference Manual Version 1.0
%I Cadence Research Systems
%C Bloomington, Indiana
%D May 1985

%T TI Scheme Language Reference Manual
%I Texas Instruments, Inc.
%O Preliminary version 1.0
%D November 1985

%A Michael A. Eisenberg
%T Bochser: An Integrated Scheme Programming System
%R MIT Computer Science Technical Report 349
%I Massachusetts Institute of Technology
%C Cambridge, Mass.
%D October 1985
%K bochser

%T Transliterating Prolog into Scheme
%A Matthias Felleisen
%R Computer Science Technical Report #182
%I Indiana University
%C Bloomington, Indiana
%D October 1985

%A David H. Bartley
%A John C. Jensen
%T The Implementation of PC Scheme
%J Proceedings of the 1986 ACM Conference on Lisp
and Functional Programming
%P 86-93
%D 1986
%K pcscheme

%A R. Kent Dybvig
%A Daniel P. Friedman
%A Christopher T. Haynes
%T Expansion-Passing style: Beyond Conventional Macros
%J Conference Record of the 1986 ACM Conference on Lisp and
Functional Programming
%P 143-150
%D 1986

%A Marc Feeley
%A Guy LaPalme
%T Closure Generation based on viewing LAMBDA as EPSILON plus COMPILE
%O Submitted for Publication
%D 1986

%A Matthias Felleisen
%A Daniel P. Friedman
%T A Closer Look At Export and Import Statements
%J Journal of Computer Languages
%V 11
%N 1
%P 29-37
%I Pergamon Press
%D 1986

%A Daniel P. Friedman
%A Matthias Felleisen
%T The Little LISPer: Second Edition
%I Science Research Associates, Inc.
%C Palo Alto, California
%D 1986

%A Christopher T. Haynes
%A Daniel P. Friedman
%A Mitchell Wand
%T Obtaining Coroutines With Continuations
%J Journal of Computer Languages
%V 11
%N 3/4
%P 143-153
%I Pergamon Press
%D 1986

%A Mitchell Wand
%T Finding the Source of Type Errors
%J Conference Record of the Thirteenth Annual Symposium on
Principles of Programming Languages
%P 38-43
%I ACM
%C St. Petersburg, Fla.
%D 1986

%A Mitchell Wand
%T From Interpreter to Compiler: A Representational Derivation
%B Programs as Data Objects
%I Springer-Verlag Lecture Notes
%D 1986

%A Matthias Felleisen
%A Daniel P. Friedman
%T Control operators, the SECD-machine, and the lambda-calculus
%J 3rd Working Conference on the Formal Description of
Programming Concepts
%C Ebberup, Denmark
%P 193-219
%D August 1986

%A Eugene E. Kohlbecker
%T Syntactic Extensions in the Programming Language Lisp
%R Computer Science Technical Report #199 (Ph.D. Dissertation)
%I Indiana University
%C Bloomington, Indiana
%D August 1986

%A Eugene E. Kohlbecker
%A Daniel P. Friedman
%A Matthias Felleisen
%A Bruce Duba
%T Hygienic macro expansion
%J Symposium on LISP and Functional Programming
%P 151-161
%D August 1986
%O To appear in Lisp and Symbolic Computation
%K hygienic

%A Mitchell Wand
%T The mystery of the tower revealed: a non-reflective
description of the reflective tower
%J Proceedings of the 1986 ACM Symposium on LISP and Functional Programming
%P 298-307
%D August 1986
%K tower

%E Jonathan A. Rees
%E William Clinger
%T Revised^3 Report on the Algorithmic Language Scheme
%J ACM Sigplan Notices
%V 21
%N 12
%D December 1986
%K rrrrs

%A Christopher T. Haynes
%T Logic Continuations
%J Proceedings of the Third International Conference on
Logic Programming
%P 671-685
%I Springer-Verlag
%D July 1986

%A Matthias Felleisen
%A Daniel P. Friedman
%A Eugene E. Kohlbecker
%A Bruce Duba
%T Reasoning with Continuations
%J Proceedings of the Symposium on Logic in Computer Science
%P 131-141
%I IEEE Computer Society Press
%C Washington DC
%D June 1986

%A David Kranz
%A Richard Kelsey
%A Jonathan A. Rees
%A Paul Hudak
%A James Philbin
%A Norman I. Adams
%T Orbit: An Optimizing Compiler for Scheme
%J Proceedings of the SIGPLAN '86 Symposium on Compiler
Construction
%P 219-233
%I ACM
%O Published as SIGPLAN Notices 21(7), July 1986
%D June 1986
%K orbit

%A Marc Feeley
%T Deux Approches a' L'implantation du Language Scheme
%I M.Sc. Thesis, De'partement d'Informatique et de Recherche
Ope'rationelle, University of Montreal
%D May 1986

%A Kevin J.  Lang 
%A Barak A. Pearlmutter
%T Oaklisp: an Object-Oriented Scheme with First Class Types
%J ACM Conference on Object-Oriented Systems, Programming,
Languages and Applications
%P 30-37
%D September 1986

%A William Clinger
%T The Scheme of things:  Streams versus Generators
%R Technical Report
%I Tektronix, Inc.
%D 1987

%A R. Kent Dybvig
%T The Scheme Programming Language
%I Prentice-Hall, Inc.
%C Englewood Cliffs, New Jersey
%D 1987
%K splang

%A Marc Feeley
%A Guy LaPalme
%T Using Closures for Code Generation
%J Journal of Computer Languages
%V 12
%N 1
%P 47-66
%I Pergamon Press
%D 1987

%A Matthias Felleisen
%T Reflections on Landin's J-Operator: A Partly Historical Note
%J Journal of Computer Languages
%V 12
%N 3/4
%P 197-207
%I Pergamon Press
%D 1987

%A Matthias Felleisen
%A Daniel P. Friedman
%T A Reduction Semantics for Imperative Higher-Order Languages
%J Parallel Architectures and Languages Europe
%E De Bakker, Nijman and Treleaven
%B Lecture Notes in Computer Science
%V 259
%I Springer-Verlag
%C Berlin
%P 206-223
%D 1987

%A Matthias Felleisen
%A Daniel P. Friedman
%A Eugene E. Kohlbecker
%A Bruce Duba
%T A syntactic theory of sequential control
%J Theoretical Computer Science
%V 52
%P 205-237
%D 1987

%A Daniel P. Friedman
%A Matthias Felleisen
%T The Little LISPer
%I MIT Press
%D 1987
%O Trade Edition
%K littlelisper

%A Christopher T. Haynes
%A Daniel P. Friedman
%T Abstracting Timed Preemption with Engines
%J Journal of Computer Languages
%V 12
%N 2
%P 109-121
%I Pergamon Press
%D 1987
%K engines

%A Stephen Slade
%B The T programming Language
%I Prentice-Hall Inc.
%C Englewood Cliffs, N.J.
%D 1987

%A R. Kent Dybvig
%T Three Implementation Models for Scheme
%R Department of Computer Science Technical Report #87-011 (Ph.D. Dissertation)
%I University of North Carolina at Chapel Hill
%C Chapel Hill, North Carolina
%D April 1987

%A Matthias Felleisen
%T The Calculi of lambda-v-cs conversion: a syntactic
theory of control and state in imperative higher-order programming
languages
%R Computer Science Technical Report #226. (Ph.D. Dissertation)
%I Indiana University
%C Bloomington, Indiana
%D August 1987

%A James S. Miller
%T A Parallel Processing System Based on MIT Scheme
%R MIT LCS Technical Report 402 (Ph.D. Dissertation)
%I Massachusetts Institute of Technology
%C Cambridge, Mass.
%D August 1987

%A Matthias Felleisen
%A Daniel P. Friedman
%A Bruce Duba
%A John Merrill
%T Beyond Continuations
%R Computer Science Dept. Technical Report #216
%I Indiana University
%C Bloomington, Indiana
%D February, 1987

%A Matthias Felleisen
%A Daniel P. Friedman
%T A calculus for assignments in higher-order languages
%J Conference Record of the 14th Annual ACM Symposium on Principles of
Programming Languages
%C Munich, West Germany
%P 314-345
%D January 1987

%A Matthias Felleisen
%A Daniel P. Friedman
%T A Syntactic Theory of Sequential State
%R Computer Science Dept. Technical Report #230
%I Indiana University
%C Bloomington, Indiana
%D October 1987

%A Christopher T. Haynes
%A Daniel P. Friedman
%T Embedding continuations in procedural objects
%J ACM Transactions on Programming Languages and Systems
%V 9
%N 4
%P 582-598
%D October 1987

%A Michael Eisenberg
%T Programming In Scheme
%E Harold Abelson
%I The Scientific Press
%C Redwood City, CA
%D 1988

%A David Kranz
%T Orbit: An optimizing compiler for Scheme
%R Computer Science Technical report #632 (Ph.D. Dissertation)
%I Yale University
%D 1988
%K orbit-thesis

%A Mitchell Wand
%A Daniel P. Friedman
%T The Mystery of the Tower Revealed: A Non-Reflective
Description of the Reflective Tower
%B Meta-Level Architectures and Reflection
%E P. Maes and D. Nardi
%I Elsevier Sci. Publishers B.V. (North Holland)
%P 111-134
%D 1988
%O Also to appear in Lisp and Symbolic Computation

%A Daniel P. Friedman
%A Mitchell Wand
%A Christopher T. Haynes
%A Eugene E. Kohlbecker
%B Programming Languages: Their Abstractions, Representations,
and Implementations
%I MIT Press and McGraw-Hill
%D 1988-1989
%O in progress

%A Norman Adams
%A Jonathan Rees
%T Object-Oriented Programming in Scheme
%J Conference Record of the 1988 ACM Conference on Lisp
and Functional Programming
%P 277-288
%D August 1988
%K oopinscheme

%A William D. Clinger
%A Anne H. Hartheimer
%A Eric M. Ost
%T Implementation Strategies for Continuations
%J Conference Record of the 1988 ACM Conference on Lisp
and Functional Programming
%P 124-131
%D August 1988
%K contimpl

%A Matthias Felleisen
%T \(*l-vs-CS: An Extended \(*l-Calculus for Scheme
%J Conference Record of the 1988 ACM Conference on Lisp
and Functional Programming
%P 72-85
%D August 1988
%K calculus

%A Harold Abelson
%A Gerald Jay Sussman
%T Lisp: A Language for Stratified Design
%J BYTE
%D February 1988
%P 207-218

%A William Clinger
%T Semantics of Scheme
%J BYTE
%D February 1988
%P 221-227

%A Alan Bawden
%A Jonathan Rees
%T Syntactic Closures
%J Proceedings of the 1988 ACM Symposium on LISP
and Functional Programming
%C Salt Lake City, Utah.
%D July 1988
%K macrology

%A R. Kent Dybvig
%A Robert Hieb
%T A Variable-Arity Procedural Interface
%J Proceedings of the 1988 ACM Symposium on LISP and Functional Programming
%C Salt Lake City, Utah
%D July 1988
%P 106-115
%O Also Indiana University Computer Science Department Technical Report #247

%A Matthias Felleisen
%A Mitchell Wand
%A Daniel P. Friedman
%A Bruce Duba
%T Abstract Continuations: A Mathematical Semantics for
Handling Functional Jumps
%J Proceedings of the 1988 ACM Symposium on LISP
and Functional Programming
%C Salt Lake City, Utah.
%D July 1988

%A R. Kent Dybvig
%A Daniel P. Friedman
%A Christopher T. Haynes
%T Expansion-Passing Style: A General Macro Mechanism
%J Lisp and Symbolic Computation: An International Journal
%V 1
%N 1
%I Kluwer Academic Publishers
%P 53-76
%D June 1988

%A Olin Shivers
%T Control Flow Analysis in Scheme
%J Proceedings of the Sigplan 1988 Conference on Programming Language
Design and Implementation
%P 164-174
%C Atlanta, Georgia
%D June 1988
%K schflow

%A John Franco
%A Daniel P. Friedman
%T Creating Efficient Programs by Exchanging Data for Procedures
%R Computer Science Technical Report #245
%I Indiana University
%C Bloomington, Indiana
%D March 1988

%A Kevin J.  Lang 
%A Barak A. Pearlmutter
%T Oaklisp: an Object-Oriented Dialect of Scheme
%J Lisp and Symbolic Computation: An International Journal
%V 1
%N 1
%I Kluwer Academic Publishers
%P 39-51
%D May 1988
%K oaklisp

%A Olin Shivers
%T The Semantics of Scheme Control Flow Analysis (Preliminary).
%R Technical Report ERGO-90-090
%I CMU School of Computer Science
%C Pittsburgh, Penn.
%D November 1988

%A R. Kent Dybvig
%A Robert Hieb
%T Engines from Continuations
%J Journal of Computer Languages
%V 14
%N 2
%P 109-123
%D 1989
%O Also Indiana University Computer Science Department Technical Report #254

%A George Springer
%A Daniel P. Friedman
%B Scheme and the Art of Programming
%I MIT Press and McGraw-Hill
%D 1989
%K scheme-art

%A Matthias Felleisen
%A Robert Hieb
%T The Revised Report on the Syntactic Theories of Sequential Control
and State.
%R Computer Science Technical Report No. 100
%I Rice University
%D June 1989

%A Steven R. Vegdahl
%A Uwe F. Pleban
%T The Runtime Environment for Screme, a Scheme Implementation
on the 88000
%J Proceedings of the Third International Conference on Architectural
Support for Programming Languages and Operating Systems
%C Boston, Mass.
%D April 1989
%P 172-182

%A Joel F. Bartlett
%T SCHEME->C a Portable Scheme-to-C Compiler
%R Research Report 89/1
%I DEC Western Research Laboratory
%C Palo Alto, California
%D January 1989

%A J. Michael Ashley
%A Richard M. Salter
%T A Revised State Space Model for a Logic Programming Embedding in Scheme
%B BIGRE Bulletin
%O Putting Scheme to Work
%E Andr\o'e\(aa' Pic, Michel Briand, Jean B\o'e\(aa'zivin
%N 65
%D July 1989

%A Olivier Danvy
%T Programming with Tighter Control
%B BIGRE Bulletin
%O Putting Scheme to Work
%E Andr\o'e\(aa' Pic, Michel Briand, Jean B\o'e\(aa'zivin
%N 65
%D July 1989

%A Olivier Danvy
%T Combiner Logiquement en Scheme
%B BIGRE Bulletin
%O Putting Scheme to Work
%E Andr\o'e\(aa' Pic, Michel Briand, Jean B\o'e\(aa'zivin
%N 65
%D July 1989

%A Vincent Delacour
%T Picolo Expresso
%B BIGRE Bulletin
%O Putting Scheme to Work
%E Andr\o'e\(aa' Pic, Michel Briand, Jean B\o'e\(aa'zivin
%N 65
%D July 1989

%A Alain Deutsch
%A Renaud Dumeur
%A Charles Consel
%A Jean-Daniel Fekete
%T CSKIM: An Extended Dialect of Scheme
%B BIGRE Bulletin
%O Putting Scheme to Work
%E Andr\o'e\(aa' Pic, Michel Briand, Jean B\o'e\(aa'zivin
%N 65
%D July 1989

%A Simon M. Kaplan
%A Joseph P. Loyall
%T GARP/Scheme: Implementing a Concurrent, Object-Based Language
%B BIGRE Bulletin
%O Putting Scheme to Work
%E Andr\o'e\(aa' Pic, Michel Briand, Jean B\o'e\(aa'zivin
%N 65
%D July 1989

%A Tan Gon Kim
%A Bernard P. Zeigler
%T The DEVS-Scheme Modelling and Simulation Environment
%B BIGRE Bulletin
%O Putting Scheme to Work
%E Andr\o'e\(aa' Pic, Michel Briand, Jean B\o'e\(aa'zivin
%N 65
%D July 1989

%A Guy Lapalme
%A Marc Feeley
%T Micro-Scheme
%B BIGRE Bulletin
%O Putting Scheme to Work
%E Andr\o'e\(aa' Pic, Michel Briand, Jean B\o'e\(aa'zivin
%N 65
%D July 1989

%A Julia L. Lawall
%A Daniel P. Friedman
%T Embedding the Self Language in Scheme
%B BIGRE Bulletin
%O Putting Scheme to Work
%E Andr\o'e\(aa' Pic, Michel Briand, Jean B\o'e\(aa'zivin
%N 65
%D July 1989

%A Andr\o'e\(aa' Pic
%A Michel Briand
%T Visual Programming with Generators
%B BIGRE Bulletin
%O Putting Scheme to Work
%E Andr\o'e\(aa' Pic, Michel Briand, Jean B\o'e\(aa'zivin
%N 65
%D July 1989

%A Christian Queinnec
%T Validation Suite Generation
%B BIGRE Bulletin
%O Putting Scheme to Work
%E Andr\o'e\(aa' Pic, Michel Briand, Jean B\o'e\(aa'zivin
%N 65
%D July 1989

%A J. C. Royer
%A J. P. Braquelaire
%A P. Casteran
%A M. Desainte-Catherine
%A J. G. Penaud
%T Le mod\o'e\(ga'le OBJScheme: principes et applications
%B BIGRE Bulletin
%O Putting Scheme to Work
%E Andr\o'e\(aa' Pic, Michel Briand, Jean B\o'e\(aa'zivin
%N 65
%D July 1989

%A Robert Strandh
%T OOOZ, A Multi-User Programming Environment Based on Scheme
%B BIGRE Bulletin
%O Putting Scheme to Work
%E Andr\o'e\(aa' Pic, Michel Briand, Jean B\o'e\(aa'zivin
%N 65
%D July 1989

%A Nitsan S\o'e\(aa'niak
%T Compilation de Scheme par sp\o'e\(aa'cialisation explicite
%B BIGRE Bulletin
%O Putting Scheme to Work
%E Andr\o'e\(aa' Pic, Michel Briand, Jean B\o'e\(aa'zivin
%N 65
%D July 1989

%A John Wade Ulrich
%T Enumeration Algorithms and Non-deterministic Programming in Scheme
%B BIGRE Bulletin
%O Putting Scheme to Work
%E Andr\o'e\(aa' Pic, Michel Briand, Jean B\o'e\(aa'zivin
%N 65
%D July 1989

%A Jonathan Rees
%T Modular Macros
%R Master's thesis
%I Department of Electrical Engineering and Computer Science,
Massachusetts Institute of Technology
%D May 1989
%K modmac

%A Williams Ludwell Harrison III
%T The Interprocedural Analysis and Automatic Parallelization
of Scheme Programs
%J Lisp and Symbolic Computation: An International Journal
%V 2
%N 3/4
%I Kluwer Academic Publishers
%D October 1989

%A Michael Eisenberg
%A William Clinger
%A Anne Hartheimer
%T Programming In MacScheme
%E Harold Abelson
%I The Scientific Press
%C Redwood City, CA
%D 1990

%A John Franco
%A Daniel P. Friedman
%T Towards A Facility for Lexically Scoped, Dynamic Mutual Recursion
in Scheme
%J Journal of Computer Languages
%V 15
%N 1
%P 55-64
%I Pergamon Press
%D 1990

%A John Franco
%A Daniel Friedman
%A Steven Johnson
%T Multi-way Streams in Scheme
%J Journal of Computer Languages
%V 15
%N 2
%P 109-125
%D 1990

%A Samuel Kamin
%B Programming Languages: An Interpreter-based Approach
%I Addison-Wesley
%C Reading, Mass.
%D 1990

%A Guillermo Rozas
%A James Miller
%T Free Variables and First-Class Environments
%J Lisp and Symbolic Computation: An International Journal
%V 3
%N 4
%I Kluwer Academic Publishers
%D December 1990

%A Kurt Normark
%T Simulation of Object-Oriented Concepts and Mechanisms in Scheme
%R Institute for Electronic Systems Technical Report 90-01
%I Aalborg University
%C Aalborg, Denmark
%D January 1990
%K oopmech

%A Dorai Sitaram
%A Matthias Felleisen
%T Control Delimiters and Their Hierarchies
%J Lisp and Symbolic Computation: An International Journal
%V 3
%N 1
%I Kluwer Academic Publishers
%P 67-99
%D January 1990
%K ctrldelim

%A Pavel Curtis
%A James Rauen
%T A Module System for Scheme
%J Proceedings of the 1990 ACM Conference on Lisp
and Functional Programming
%C Nice, France
%D June 1990
%K module

%A Marc Feeley
%A James S. Miller
%T A Parallel Virtual Machine for Efficient Scheme Compilation
%J Proceedings of the 1990 ACM Conference on Lisp
and Functional Programming
%C Nice, France
%D June 1990

%A Chris Hanson
%T Efficient Stack Allocation for Tail-Recursive Languages
%J Proceedings of the 1990 ACM Conference on Lisp
and Functional Programming
%C Nice, France
%D June 1990

%A Morry Katz
%A Daniel Weise
%T Continuing Into the Future:
On the Interaction of Futures and First-Class Continuations
%J Proceedings of the 1990 ACM Conference on Lisp
and Functional Programming
%C Nice, France
%D June 1990

%A Pierre Bonzon
%T A Metacircular Evaluator for a Logical Extension of Scheme
%J Lisp and Symbolic Computation: An International Journal
%I Kluwer Academic Publishers
%V 3
%N 2
%P 113-133
%D March 1990

%A R. Kent Dybvig
%A Robert Hieb
%T Continuations and Concurrency
%J Proceedings of the Second ACM SIGPLAN Symposium on 
Principles and Practice of Parallel Programming
%C Seattle, Washington
%D March 1990
%P 128-136
%O Also Indiana University Computer Science Department Technical Report #256

%A Olin Shivers
%T Data-Flow Analysis and Type Recovery in Scheme
%R Technical Report CMU-CS-90-115
%I CMU School of Computer Science
%C Pittsburgh, Penn.
%D March 1990
%O Also to appear in Topics in Advanced Language Implementation,
Ed. Peter Lee, MIT Press.

%A R. Kent Dybvig
%A Robert Hieb
%T A New Approach to Procedures with Variable Arity
%J Lisp and Symbolic Computation: An International Journal
%V 3
%N 3
%I Kluwer Academic Publishers
%D September 1990
%P 229-244

%A Robert Hieb
%A R. Kent Dybvig
%A Carl Bruggeman
%T Representing Control in the Presence of First-Class Continuations
%J Proceedings of the SIGPLAN '90 Conference on
Programming Language Design and Implementation
%C White Plains, New York
%D June 1990 (to appear)

%T IEEE Standard for the Scheme Programming Language
%R IEEE Std 1178-1990
%I Institute of Electrical and Electronics Engineers, Inc.
%C New York, NY
%D 1991

%A Dorai Sitaram
%A Matthias Felleisen
%T Modeling continuations without continuations
%J Proceedings of the Eighteenth ACM Symposium on 
Principles of Programming Languages
%D 1991
%P 185-196

rh@smds.UUCP (Richard Harter) (06/13/91)

In article <JBW.91Jun11165952@maverick.uswest.com>, jbw@maverick.uswest.com (Joe Wells) writes:
> In article <541@smds.UUCP> rh@smds.UUCP (Richard Harter) writes:

>    (A)  Brutally stripped syntax.  The only special characters are (){} and
>    the white space characters.  There are no quote characters, no meta
>    characters and no escape character, and no need for them.

> Does this mean that the NUL character (0) is treated like a letter?

No - but it's an interesting thought.  The character set is all printable
characters.  Special characters, in this context, are punctuation characters,
characters used as operators, etc.  For example, +-=,.[](){}!~&|$\ etc in
the UNIX shell.  It is a design spec that OS commands be legal Lakota commands,
in so far as possible, i.e. a Lakota script can have a mixture of Lakota
and UNIX shell commands under UNIX or Lakota and DCL under VMS.  The
stripped syntax is a consequence.  For this purpose, Lakota is an extension
language over the underlying OS command language; to avoid syntax conflicts
one avoids attaching special meaning to any character that has special meaning
in the underlying OS languages of interest.
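The dispatch rule described above (anything the interpreter does not recognize is passed through to the underlying OS command language untouched) can be sketched in a few lines. This is a hypothetical illustration of the design principle only; the thread does not show Lakota's actual command set, and the `upcase` built-in here is invented.

```python
import subprocess

# Hypothetical built-in commands, for illustration only.
BUILTINS = {
    "upcase": lambda args: print(" ".join(args).upper()),
}

def run_line(line):
    """Run one script line: interpret it if the first word is a known
    extension-language command, otherwise hand the whole line to the
    OS shell untouched, so ordinary shell commands stay legal."""
    words = line.split()
    if not words:
        return
    if words[0] in BUILTINS:
        BUILTINS[words[0]](words[1:])
    else:
        # Pass-through: the line is treated as an OS command.
        subprocess.run(line, shell=True, check=False)

run_line("upcase mixed script")   # handled by the interpreter
run_line("echo handled by sh")    # handed to the shell
```

Because the extension language claims no special characters of its own, the pass-through never needs to rewrite or re-quote the shell line, which is the point of the stripped syntax.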

-- 
Richard Harter, Software Maintenance and Development Systems, Inc.
Net address: jjmhome!smds!rh Phone: 508-369-7398 
US Mail: SMDS Inc., PO Box 555, Concord MA 01742
This sentence no verb.  This sentence short.  This signature done.

rh@smds.UUCP (Richard Harter) (06/13/91)

In article <17132@helios.TAMU.EDU>, byron@donald.tamu.edu (Byron Rakitzis) writes:
> In article <541@smds.UUCP> rh@smds.UUCP (Richard Harter) writes:
     .....

> Your comment is perfectly valid. So far, I have received numerous messages
> with suggestions for (already existing) alternatives to perl. Most people
> named TCL, Python, Icon (as above) but I have also heard that people use
> Scheme. This is the first I've heard of Lakota. Is it available for anonymous
> ftp?

Sorry, no, you'll need to sign a license.  However write me at the address
below or give me an address to send it to, and I'll get a copy to you.

> What it boils down to is this: whether or not ap is a lot of work, I fear it
> is going to get written sometime in the near future. When I re-implemented
> rc, I thought I came close to what I wanted as my command interpreter, but
> now that I am aware of its shortcomings, I want to try to improve on the
> rc model. I have at least several goals in mind:

Also look at Scheme and Rexx as others have noted.  Elsewhere David Gudeman
has deprecated Perl and Python as reinventing an unneeded wheel (see his
article in comp.unix.shell).  I think David is wrong, but that is a matter
for another article.  He argues that refinement of existing languages is
the right way to go.  There is something to this; the problem, however, is
that this approach doesn't lead to fundamental improvements in language
design.  Taking an existing language and hacking at it (which is what most
people do when they tackle creating a language) is unprofitable -- a
language is more than a sum of pieces.  Moreover the "improved" language
is more than likely to be a dead end.


> 1) I want the core command interpreter to be independent of any particular
> application.

By this I gather you mean application area.

> 2) I want it to be easy to add modules so that the core command interpreter
> may be grown into, say, an ap.

This is an interesting topic.  The approach in Lakota is to have a core
library (the interpreter and the basic commands) and an extension facility
to add new commands and functions.  I believe TCL does the same thing.
And, of course, one can provide libraries of predefined scripts.  It
occurs to me that this is a somewhat parochial approach -- that there
may be a more powerful and general way to deal with language extension.
I would be interested in seeing comments on this topic.
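The core-plus-extensions approach above can be sketched as an interpreter whose command table the host program may grow at run time, in the spirit of Tcl's registered commands. All names here are invented for illustration; this mirrors the general idea, not Lakota's or TCL's actual API.

```python
class Interp:
    """A toy core interpreter: a dispatch table of commands."""

    def __init__(self):
        self.commands = {}

    def register(self, name, fn):
        # Extension hook: the host program adds new commands here.
        self.commands[name] = fn

    def eval(self, line):
        name, *args = line.split()
        if name not in self.commands:
            raise NameError("unknown command: " + name)
        return self.commands[name](*args)

interp = Interp()
interp.register("concat", lambda *a: " ".join(a))
interp.register("upper", lambda s: s.upper())

print(interp.eval("concat a b c"))  # a b c
print(interp.eval("upper hello"))   # HELLO
```

Libraries of predefined scripts then sit on top of whatever command set the host has registered.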

> 3) I want the syntax of the language to be elegant. This sounds like a tall
> order, but what I'm doing is ruling out languages like Scheme; I just cannot
> deal with the Lots of Irrelevant and Stupid Parentheses. Also, I'm afraid that
> Python does not quite fit the bill either. Syntax that is whitespace dependent
> just feels "wrong". I know, I tried using Python for a while.

I haven't used Python, so I don't know what the whitespace dependencies are.
I'm not sure what "elegance" is -- I would take it as a combination of
conciseness, coherence, simplicity, and power.  I would rate Scheme as
elegant.  I would also rate Lakota as elegant, but I'm prejudiced.  It
does use indentation for grouping.

> Let me state here that I have not ruled out any of the other interpreters yet.
> I still want to see if, say, TCL cannot satisfy my needs. Also, let me state
> right away that (1)-(4) contain strong doses of religion; please don't take my
> comments as flames. However, if you disagree, I'm always willing to argue. :-)

Understood.  One way to approach languages is in terms of the needs that
they fulfill.  This, in no particular order, is a list of needs that Lakota
was designed to meet.

(a)	The general objective was to provide a language in which it was
	convenient and feasible to write moderately large-scale administrative
	scripts.

(b)	It should be possible to embed the interpreter in programs written
	in other languages and have it available as a service utility.

(c)	It should be possible to write programs which are a mixture
	of 3GL compiled code and interpreted routines.

(d)	The code should be readable and maintainable by someone with a
	modest exposure to the language.

(e)	As much as possible, scripts should be portable across OS's.

(f)	OS commands (or commands in any other underlying target language)
	should be directly executable.

(g)	Standard structured programming concepts should be natural.

(h)	It should be able to handle string and list manipulation.

(i)	The language should have the functionality of standard UNIX
	tools such as sed, awk, and find.

(j)	There should not be any arbitrary size restrictions.

-- 
Richard Harter, Software Maintenance and Development Systems, Inc.
Net address: jjmhome!smds!rh Phone: 508-369-7398 
US Mail: SMDS Inc., PO Box 555, Concord MA 01742
This sentence no verb.  This sentence short.  This signature done.

alan@cwi.UUCP (Alan Wright ) (06/14/91)

In article <541@smds.UUCP>, rh@smds.UUCP (Richard Harter) writes:
> Most of the articles on the ap thread have expired here; in particular I missed
> the original article that sparked the exchanges.  Byron, I gather, dislikes
> the style of Perl and would like, so to speak, an anti-Perl, even if he has
> to write it himself.  This is really a lang.misc topic, so I've added that
> as a group.

I also missed the beginning of this thread, but I would like to know what
Byron (or anyone else) has in mind for an anti-Perl.

> Byron, you don't want to do it.  It's more work to develop a language than
> one might imagine, particularly if it is to be a good one.  If you don't

I have to disagree. I think more people should be experimenting with more
languages of this general flavor. These very high-level interpreted
languages become more usable and more practical as hardware gets faster
and faster. Of course, it would be helpful if one has a source of funding
or some guaranteed revenue from the result....

> like Perl there are a number of perfectly reasonable alternatives, among
> them TCL, Python, and Icon.  There is also Lakota, which I am a principal

... and we sell one called Accent. (Actually, it comes bundled with another
product.)

> developer.  [Apologies to all who fuss about people talking about their
> own work duly made.  My justification is that I want to talk about the
> language design issues, which are of general interest to those who are 
> interested in language design issues.]

ditto.

Our background is as follows:

Some three years ago, we used Perl to prototype a product. When we shifted
to developing the production version, we decided we still wanted to use
a very-high-level interpreted language with dynamic strings, arrays, tables,
etc.... We did not want to use Perl, because we didn't want the compile
phase at runtime, nor could we deliver source code. So we designed and
implemented Accent, which has the following substantial differences from
Perl:
	
	- declarations and strong type checking
	- functions, modules, and classes
	- remote function calls (as easy as local calls)
	- high-level (i.e. dynamic) first class data structures:
	  (strings, arrays, tables, composites)
	- table keys can be any data type (just as values can)
	- more palatable syntax (derived from C, Pascal, and others)
	- portable source and object code

In short, we added what we thought were good software engineering
features, so that we could write production code (and in large quantity). 

> The main point I would like to make is that it is not such a simple thing
> to design and implement a language, particularly one which is to have
> strong string manipulation capabilities.

Could you elaborate on this (especially on why you think string
manipulation is a significant problem). Initial development of such a
language can be done in only a few months. If there is much interest in
the result, then the additional work to polish and distribute the language
is justified.

The key issue here is how to generate interest. We would probably give away
Accent if people wanted it, but new languages seem to be met with
nothing but negativity and criticism. I think there is plenty of room for 
many more very-high-level languages, which do not require substantial
learning to use effectively. Languages like Perl, Accent, and perhaps
Lakota only take hours or days to learn thoroughly, and thus pay for this
effort very quickly. 

To sum up, I would like to see more variations on the Perl theme. More
languages to test various combinations of features. (Yes, I am aware of
Icon, Python, Rexx, and others, but I don't like any of them).
This technology is still far from providing the final solution, so we
need to keep trying!

P.S. Richard, I would like to hear more about Lakota. In case you are not
going to sell it for profit, I see nothing wrong with discussing it at
length in this forum.
 

kend@data.UUCP (Ken Dickey) (06/14/91)

>byron@donald.tamu.edu (Byron Rakitzis) writes:

>> 3) I want the syntax of the language to be elegant. This sounds like a tall
>> order, but what I'm doing is ruling out languages like Scheme; I just cannot
>> deal with the Lots of Irrelevant and Stupid Parentheses.

You probably don't use C's ugly curly braces, parentheses, or square
brackets (why so many types of parentheses?).  Aside from being
elegant--which it is--Scheme's syntax is trivial.  I *never* have to
look at a language manual to use Scheme.  I *always* have to look at
manuals for non-lisp-family languages.  Syntax should be trivial!

-Ken Dickey				kend@data.uucp

rh@smds.UUCP (Richard Harter) (06/14/91)

In article <796@cwi.UUCP>, alan@cwi.UUCP (Alan Wright ) writes:

	I am going to address the following PS first:


> P.S. Richard, I would like to hear more about Lakota. In case you are not
> going to sell it for profit, I see nothing wrong with discussing it at
> length in this forum.

Well, we are sort of on the gray edge here -- Lakota is proprietary to SMDS.
I am a principal in SMDS.  We are using it internally; at some point it
may be marketed at a nominal price consistent with financial reality.  At
present anyone reading this who wants a beta site copy can get it for the
asking.  In view of this, the usual Usenet caveats are relevant.  My view
is that it is permissible and valid to discuss Lakota in the context of
language design issues, e.g. "I took this path and made these decisions
for these reasons".  That seems to me to be fair and legitimate; my interest
is in discussing what should and shouldn't be in languages and what
they are good for.  If people disagree, I'll hold my peace.

> In article <541@smds.UUCP>, rh@smds.UUCP (Richard Harter) writes:
	....

> I also missed the beginning of this thread, but I would like to know what
> Byron (or anyone else) has in mind for an anti-Perl.

By the by, I'm not knocking Perl; that was Byron.

> > Byron, you don't want to do it.  It's more work to develop a language than
> > one might imagine, particularly if it is to be a good one.  If you don't

> I have to disagree. I think more people should be experimenting with more
> languages of this general flavor. These very high-level interpreted
> languages become more usable and more practical as hardware gets faster
> and faster. Of course, it would be helpful if one has a source of funding
> or some guaranteed revenue from the result....

There is much to what you say.  Actually, even without the hardware speedup
factor it makes sense.  In most applications most of the execution time is
spent in a small percentage of the code.  Write that 5-10% in C or whatever.
It is my contention that a good high-level interpreted language is several
times more expressive than a traditional 3GL.  Grant me a factor of 5, i.e.
it takes 5 times as much 3GL code to do what you can do in the interpreter.
A good high level interpreter should run about 2-3 times slower than the
equivalent 3GL code.  For example:  suppose you have an application that
has 100,000 lines of 3GL code.  10,000 lines accounts for 90% of the CPU
usage; 90,000 accounts for 10%.  Replace the 90,000 by 18,000 lines of
interpreted code.  The result is a program that has 28,000 lines of code
and runs 10-20% slower.  The development time drops sharply.  These are
reasonable tradeoffs.

However my point was really that it is not so simple to develop a general
language that is a significant improvement on what is already available.
There is a psychological trap involved, a tendency to say "I use and
like X, but it could be a lot better so I am going to create my own
language Y."  The result is X with a bunch of extra miscellaneous features.

	....

> ... and we sell one called Accent. (Actually, it comes bundled with another
> product.)

> Our background is as follows:

> Some three years ago, we used Perl to prototype a product. When we shifted
> to developing the production version, we decided we still wanted to use
> a very-high-level interpreted language with dynamic strings, arrays, tables,
> etc.... We did not want to use Perl, because we didn't want the compile
> phase at runtime, nor could we deliver source code. So we designed and
> implemented Accent, which has the following substantial differences from
> Perl:

> 	- declarations and strong type checking
> 	- functions, modules, and classes
> 	- remote function calls (as easy as local calls)
> 	- high-level (i.e. dynamic) first class data structures:
> 	  (strings, arrays, tables, composites)
> 	- table keys can be any data type (just as values can)
> 	- more palatable syntax (derived from C, Pascal, and others)
> 	- portable source and object code

> In short, we added what we thought were good software engineering
> features, so that we could write production code (and in large quantity). 

That's a fairly impressive collection of features.  It reads like a full
blown modern OOL.  I gather from your comments about a compile-phase for
Perl at runtime that you can parse an Accent program in advance and execute
off of a representation of the parsed program.  Is this correct?

> > The main point I would like to make is that it is not such a simple thing
> > to design and implement a language, particularly one which is to have
> > strong string manipulation capabilities.

> Could you elaborate on this (especially on why you think string
> manipulation is a significant problem).

That was somewhat of an overstatement on my part, conditioned by
the particular issues that I was trying to address.  If you are in a 
typed language with string variables you can do quite nicely with the
usual collection of string manipulation utilities and operators if you
are only concerned with strings as the content of variables.  However
if you permit the dynamic creation of variable (and procedure) names,
things get interesting.  If one is going to allow multiple levels of
substitution and the inclusion of arbitrary characters one has to be
fairly careful.
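The hazard alluded to here can be made concrete with a toy substitution scheme. The `$(name)` syntax and the expand-until-fixed-point rule below are invented for illustration and are not Lakota's actual notation; the point is only that once values can themselves contain references, data and code blur.

```python
import re

# Toy variable store: note "path" holds a *nested* reference whose
# inner part computes the name of the outer variable.
env = {"user": "rh", "home-rh": "/u/rh", "path": "$(home-$(user))"}

def expand(text, depth=10):
    """Repeatedly replace innermost $(name) references until the
    text stops changing -- multiple levels of substitution."""
    pat = re.compile(r"\$\(([^()$]*)\)")
    for _ in range(depth):
        new = pat.sub(lambda m: env.get(m.group(1), ""), text)
        if new == text:
            return new
        text = new
    raise RecursionError("substitution did not terminate")

print(expand("$(path)"))    # /u/rh

# The care Harter mentions: a data value that merely *contains*
# "$(" is re-expanded too, unless the language adds quoting rules --
# but quoting reintroduces the special characters the design avoids.
env["doc"] = "the price is $(unset)"
print(expand("$(doc)"))     # the stray reference silently vanishes
```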

> To sum up, I would like to see more variations on the Perl theme. More
> languages to test various combinations of features. (Yes, I am aware of
> Icon, Python, Rexx, and others, but I don't like any of them).
> This technology is still far from providing the final solution, so we
> need to keep trying!

I am curious.  Why don't you like them?
-- 
Richard Harter, Software Maintenance and Development Systems, Inc.
Net address: jjmhome!smds!rh Phone: 508-369-7398 
US Mail: SMDS Inc., PO Box 555, Concord MA 01742
This sentence no verb.  This sentence short.  This signature done.

marti@mint.inf.ethz.ch (Robert Marti) (06/14/91)

In article <505@data.UUCP> kend@data.UUCP (Ken Dickey) writes:
>I *never* have to look at a language manual to use Scheme.
Apparently, you *never* use do-loops.  (Of course, using tail-recursion
is the preferred style, anyway ... )
 
>I *always* have to look at manuals for non-lisp-family languages.
I *never* have to look at a language manual to use C.  (I admit to
having a table of operator precedence pasted to my workstation,
though.)
 
>Syntax should be trivial!
Syntax should make for (human-) readable code.  If you find that Lots
of Irrelevant Stupid Parentheses make for readable code, fine.  I don't!
(I guess that makes me a wimp, huh?  But Real Programmers (TM) program
in FORTRAN, anyway ;-)

Note:  IMHO, Scheme has many strong points:  It's a small and orthogonal
language (first-class procedures, continuations).  However, its syntax
is definitely not one of them.

Robert Marti                      |  Phone:    +41 1 254 72 60
Institut fur Informationssysteme  |  FAX:      +41 1 262 39 73
ETH-Zentrum                       |  E-Mail:   marti@inf.ethz.ch
CH-8092 Zurich, Switzerland       |

skrenta@amix.commodore.com (Rich Skrenta) (06/15/91)

oz@ursa.ccs.yorku.ca (Ozan Yigit) writes:

> Your loss. Scheme is a lot more than just Cambridge polish notation
> with lots of parens. I include a reasonably complete biblography for
> you or anybody else who may wish to have a more detailed view of this
> language and some of the concepts embodied within. I am also certain
> that these references would be helpful in designing and implementing
> other languages worth remembering.

Scheme may very well be The Future, but this arrogant attitude does nothing
to promote the language.  Scheme seems to share much of its syntax with
Lisp, which tends to be a "write only" language.  Languages in which it
is often easier to scrap a function and write it from scratch than to
debug it don't earn high marks among those who have to maintain code for
a living.

Scheme may be wonderful, but you shouldn't dismiss people who complain
about its syntax.  Their complaints may be valid.

Rich
--
skrenta@amix.commodore.com

dwp@willett.pgh.pa.us (Doug Philips) (06/15/91)

In article <505@data.UUCP>,
	kend@data.UUCP (Ken Dickey) writes:

+>byron@donald.tamu.edu (Byron Rakitzis) writes:

+>> 3) I want the syntax of the language to be elegant. This sounds like a tall
+>> order, but what I'm doing is ruling out languages like Scheme; I just cannot
+>> deal with the Lots of Irrelevant and Stupid Parentheses.

+You probably don't use C's ugly curly braces, parentheses, or square
+brackets (why so many types of parentheses?).  Aside from being
+elegant--which it is--, Scheme's syntax is trivial.  I *never* have to
+look at a language manual to use Scheme.  I *always* have to look at
+manuals for non-lisp-family languages.  Syntax should be trivial!

And then there are Forth and PostScript with even less syntax!

-Doug
---
Preferred:  dwp@willett.pgh.pa.us	Ok:  {pitt,sei,uunet}!willett!dwp

adrianho@barkley.berkeley.edu (Adrian J Ho) (06/15/91)

In article <2625@amix.commodore.com> skrenta@amix.commodore.com (Rich Skrenta) writes:
>Scheme may very well be The Future, but this arrogant attitude does nothing
>to promote the language.

Well, I guess Ozan might have gotten a _little_ over-enthusiastic, but
do cut him some slack -- he is, after all, the Scheme Repository
maintainer.  8-)

>			Scheme seems to share much of its syntax with
>Lisp, which tends to be a "write only" language.

IMHO, no more "write-only" than C.  If your Scheme/Lisp code is neatly
indented and you take care to use descriptive names _and_ comments,
it's very readable.  It's probably a matter of getting used to a
particular language.  Having a powerful editor like Emacs helps, too 8-).

>						Languages in which it
>is often easier to scrap a function and write it from scratch than to
>debug it don't earn high marks among those who have to maintain code for
>a living.

Hardly true of Scheme/Lisp.  The only language that might be truly
said to possess this characteristic is APL, and only because of its
infernal character set.  (Because of this, APL is the one language
I've encountered in my life that I've never _wanted_ to master.)
Substituting each APL operator with a descriptive name would certainly
go a long way towards making it less of a "write-only" language.

>Scheme may be wonderful, but you shouldn't dismiss people who complain
>about its syntax.  Their complaints may be valid.

Perhaps, but the _environment_ under which the complainants use a
particular language often goes a _very_ long way towards shaping their
opinions about language features.  For instance, I found (like the
original Scheme complainant) the innumerable parens of Scheme programs
to be a royal pain in the butt -- until I discovered Emacs.  With its
paren-matching and "intelligent" indentation capabilities, all was
well again in SchemeLand.  8-)

Another example: Does anyone program in Smalltalk on a regular basis
without a class browser?  Would anyone _like_ to?  8-)

Summary: Syntax is only one aspect of a language's appeal.
Development tools, although they have absolutely nothing to do with
the language per se, also play an important role.  With the right
tools, even APL can be a joy to program.  8-)

[NOTE: This article should not be construed as an APL-bashing post.  I
picked the language as an (extreme) example of a "write-only" language
(to the general programming populace).]

rockwell@socrates.umd.edu (Raul Rockwell) (06/15/91)

Rich Skrenta:
   Scheme seems to share much of its syntax with Lisp, which tends to
   be a "write only" language.  Languages in which it is often easier
   to scrap a function and write it from scratch than to debug it
   don't earn high marks among those who have to maintain code for a
   living.

"Write only" languages tend to be ones you don't understand.

Programmers who can't debug programs don't earn high marks in
anybody's book (except, perhaps, their own).  [And an inability to
debug in an interactive environment suggests a severe lack of
discipline.]

-- 
Raul <rockwell@socrates.umd.edu>

oz@ursa.ccs.yorku.ca (Ozan Yigit) (06/15/91)

skrenta@amix.commodore.com (Rich Skrenta) writes:

   Scheme may very well be The Future, but this arrogant attitude does
   nothing to promote the language.

I don't see anything particularly arrogant about a strong suggestion that
there is more to some languages than their syntax might alone suggest, and
that they should perhaps be studied before their dismissal. I also happen to
like scheme a lot, and I can go to some lengths to help people learn more
about the language, hence the posting of a near-complete bibliography.

   Scheme may be wonderful, but you shouldn't dismiss people who complain
   about its syntax.  Their complaints may be valid.

If I were to dismiss people because they complain about the syntax, I would
not try to help them see beneath it by posting real information. Surprising
as it may seem, I have no strong feelings about lisp syntax, and in the past
I have challenged other people who made unsubstantiated or irrational claims
about the usefulness of such syntax.

Btw: there is at least one book near completion about the representation and
implementation of programming languages that shows the implementation [using
scheme] of scheme-like languages without a lot of brackets. It should be out
before the summer is over.

oz
---
Often it is means that justify ends: Goals    | email: oz@nexus.yorku.ca
advance technique and technique survives even | phone: 416-736-5257 x 33976
when goal structures crumble. -- A. J. Perlis | other: oz@ursa.ccs.yorku.ca

alan@cwi.UUCP (Alan Wright ) (06/16/91)

In article <552@smds.UUCP>, rh@smds.UUCP (Richard Harter) writes:
> 
> Well, we are sort of on the gray edge here -- Lakota is proprietary to SMDS.
> I am a principal in SMDS.  We are using it internally; at some point it
> may be marketed at a nominal price consistent with financial reality.  At

You may find that the best you can do prior to some large following
developing is to literally give your language away. Further, the
prevailing attitude is that it must be available in source form.

> There is much to what you say.  Actually, even without the hardware speedup
> factor it makes sense.

I agree, I just think some of the traditional performance-related objections
to interpretive languages are now less relevant. 

> In most applications most of the execution time is
> spent in a small percentage of the code.  Write that 5-10% in C or whatever.
> It is my contention that a good high-level interpreted language is several
> times more expressive than a traditional 3GL.  Grant me a factor of 5, i.e.
> it takes 5 times as much 3GL code to do what you can do in the interpreter.
> A good high level interpreter should run about 2-3 times slower than the
> equivalent 3GL code.  For example:  suppose you have an application that
> has 100,000 lines of 3GL code.  10,000 lines accounts for 90% of the CPU
> usage; 90,000 accounts for 10%.  Replace the 90,000 by 18,000 lines of
> interpreted code.  The result is a program that has 28,000 lines of code
> and runs 10-20% slower.  The development time drops sharply.  These are
> reasonable tradeoffs.

I agree with your reasoning here, but my experience shows that not only
do you achieve higher productivity via such languages, but you also
tend to implement more generality, flexibility, and other such
qualities with the more expressive language. I suspect that a general
rule here is that the more code a programmer has to write, the more
corners (s)he will tend to cut. 

> However my point was really that it is not so simple to develop a general
> language that is a signifigant improvement on what is already available.

It is not simple to successfully popularize one, but it is NOT difficult
for many of us to try. I suppose it may be a lot like writing a hit song.
It is difficult to predict what features will have strong appeal. 

> There is a psychological trap involved, a tendency to say "I use and
> like X, but it could be a lot better so I am going to create my own
> language Y."  The result is X with a bunch of extra miscellaneous features.

Not necessarily. For example, there is really no resemblance between
Perl and Accent. The reason for this is that Perl was designed for
one niche (system administration?), while our needs were somewhat
different (production CASE software prototyping and development). 

> > 	- declarations and strong type checking
> > 	- functions, modules, and classes
> > 	- remote function calls (as easy as local calls)
> > 	- high-level (i.e. dynamic) first class data structures:
> > 	  (strings, arrays, tables, composites)
> > 	- table keys can be any data type (just as values can)
> > 	- more palatable syntax (derived from C, Pascal, and others)
> > 	- portable source and object code

I forgot one:
		- simple API for graphical user interfaces (for X, Sunview, ...)

> > In short, we added what we thought were good software engineering
> > features, so that we could write production code (and in large quantity). 
> 
> That's a fairly impressive collection of features.  It reads like a full
> blown modern OOL.

Our notion of classes is not well enough developed to qualify as a
real OO language. This is actually more a matter of how we limited
our present implementation than a syntactic issue. 

> I gather from your comments about a compile-phase for
> Perl at runtime that you can parse an accent program in advance and execute
> off of a representation of the parsed program.  Is this correct?

Yes. Parse once, execute multiple.... Also, we can deliver one object code
for all of our platforms.

> > languages to test various combinations of features. (Yes, I am aware of
> > Icon, Python, Rexx, and others, but I don't like any of them).
> > This technology is still far from providing the final solution, so we
> > need to keep trying!
> 
> I am curious.  Why don't you like them?

I should rephrase: I don't feel they are well suited for my
applications.  I could provide an analysis of each, but instead let me
just say that the set of features we ultimately put into Accent were
some of the features I wanted.  I think you'll find that each of these
other languages (let me add Scheme, Smalltalk, ABC, TCL) really was
designed with a different use in mind, or is missing some key feature,
or has some critical misfeature. (No vendor support is a critical
misfeature.)

Now, back to the topic of Lakota. I remember seeing something you posted
in the past which suggested to me that Lakota was perhaps more like a
REXX than a Perl. What I remember is something about controlling the
interactions among a set of subordinate tool processes. Is this correct?

throopw@sheol.UUCP (Wayne Throop) (06/17/91)

> rh@smds.UUCP (Richard Harter)
> The power of UNIX shell programming
> rests not in the shell language itself, but in the collection of tool
> programs that are standardly available, e.g. sed, awk, find, uniq, sort,
> and their interconnection with pipes and redirection.  [..but..]
> the execution of the resulting scripts is slow because each tool is
> a separate process, [...]

A thought that occurs to me is that the rapid adoption of perl in the
Unix community is a back-door defection to a philosophy that Lisp folks
have been preaching for years.  In perl, one is partly abandoning the
notion of small, stand-alone tools that do one thing well, and
instead adopting the monolithic everything-in-one-language approach that
Lisp has always had.  You know.  "Swiss army chainsaw."  Perl simply
makes this palatable to the Unix community by leaving out some of the
parentheses and including a baroque syntax and idiosyncratic semantics.  :-)

> The main point I would like to make is that it is not such a simple thing
> to design and implement a language, particularly one which is to have
> strong string manipulation capabilities.

Having had recent experience in doing just that (for the usual misguided
reasons), I mildly disagree.  By compromising runtime speed somewhat,
one can put together a complete interpreted string-manipulation language
for well under 2000 lines of C code and a month or two of elapsed time. 
This doesn't include design time.  Double or treble or more that effort
if you then need to squeeze speed out of the thing.  Does this match
other folks' estimates/experience?

As an aside, I also find it interesting that "everybody" seems to have
implemented or be implementing a tiny language or two, even if they
later repent.  I don't know if that's good, bad, or indifferent, 
but it seems to be so. 

> Message-ID: <552@smds.UUCP>
> However my point was really that it is not so simple to develop a general
> language that is a significant improvement on what is already available.

Ah.  Now *this* formulation seems more apt.  "Significant improvement on
what is already available" is the key.  Part of the problem is, just
exactly what's "significant" is a difficult question, and easy to be
self-deceptive about.  Wrangling with myself over exactly this issue is
what led me down the slippery slope of interpreter implementation.

Somebody whose name I forget just now, talking about TCP/IP and ISO,
coined an apt phrase in this regard.  "Just because your local tire shop
doesn't have four-ply radials in stock is no reason to go out and
reinvent the travois."

As I said, I myself have a fine travois which I use daily.  I don't
regret inventing it, and I learned a lot from it, and it is a fine and
dandy tool... but my original justification for writing it was, in
retrospect, insufficient.


Finally, I note that I'm familiar with at least one alternative to the
"brutal stripping" of syntax to allow the "outer interpreter" (eg: the
shell) to smoothly encapsulate several "inner interpreters" (eg:
commands with mini-languages in their arguments, like expr, grep, and so
on and on).  It is, however, somewhat trickier. 

Instead of giving symbols no interpretation at all, you can give them
*lexical* significance, but no *semantic* significance.  You can get
away with this because the lexical significance of characters is pretty
standard across languages.  For example, () always match, and so on. 
One also has to have a regular and predictable quote convention to deal
with the few leftover conflicts, sigh.  But one almost always has to
have some moral equivalent of this, even in "brutally stripped"
syntaxes. 

What this buys you is the ability to give multiple, more natural
semantic interpretations to the characters.  Or to put it another way,
the semantics associated with the characters are determined by context,
rather than being fixed rigidly by the shell.  This is especially
important in a multi-lingual debugger. 
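A sketch of the idea (a hypothetical illustration in Python, not any particular shell's algorithm): the outer interpreter honors only the *lexical* significance of brackets and quotes, so a balanced argument reaches the inner interpreter intact, and the inner interpreter alone decides what it means.

```python
# Outer interpreter that tracks only lexical significance: brackets
# must balance and quotes must pair, but no meaning is assigned.

def split_args(s):
    """Split s on spaces, but keep bracketed/quoted runs as one token."""
    args, buf, depth, quote = [], [], 0, None
    for ch in s:
        if quote:                      # inside a quoted run
            buf.append(ch)
            if ch == quote:
                quote = None
        elif ch in "\"'":              # open a quoted run
            quote = ch
            buf.append(ch)
        elif ch in "([{":              # brackets: lexical nesting only
            depth += 1
            buf.append(ch)
        elif ch in ")]}":
            depth -= 1
            buf.append(ch)
        elif ch == " " and depth == 0: # split only at top level
            if buf:
                args.append("".join(buf))
                buf = []
        else:
            buf.append(ch)
    if buf:
        args.append("".join(buf))
    return args

print(split_args('grep (a|b) "two words" file'))
# The (a|b) pattern and the quoted string survive as single tokens;
# only the inner interpreter (here, grep) gives them semantics.
```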
--
Wayne Throop  ...!mcnc!dg-rtp!sheol!throopw

lwall@jpl-devvax.jpl.nasa.gov (Larry Wall) (06/18/91)

In article <2154@sheol.UUCP> throopw@sheol.UUCP (Wayne Throop) writes:
: > rh@smds.UUCP (Richard Harter)
: > The power of UNIX shell programming
: > rests not in the shell language itself, but in the collection of tool
: > programs that are standardly available, e.g. sed, awk, find, uniq, sort,
: > and their interconnection with pipes and redirection.  [..but..]
: > the execution of the resulting scripts is slow because each tool is
: > a separate process, [...]
: 
: A thought that occurs to me is that the rapid adoption of perl in the
: Unix community is a back-door defection to a philosophy that Lisp folks
: have been preaching for years.  In perl, one is partly abandoning the
: notion of small, stand-alone tools that do one thing well, and
: instead adopting the monolithic everything-in-one-language approach that
: Lisp has always had.  You know.  "Swiss army chainsaw."  Perl simply
: makes this palatable to the Unix community by leaving out some of the
: parentheses and including a baroque syntax and idiosyncratic semantics.  :-)

That's not too far off the mark.  The baroque syntax and idiosyncratic
semantics are necessary in Perl precisely because they're borrowed from
Unix culture.  As such, ugliness is part of its design goal.  However,
the monolithicism isn't intended to take anything away from the toolbox
approach, but merely to give you an alternative.  Perl isn't a toolbox,
but a small machine shop where you can special-order certain sorts of
tools at low cost and in short order.

As someone else remarked, it's the height of arrogance to design a
language.  With normal human languages, nobody can even hope to design
more than a few catchy phrases.  Thus, such languages are totally
undesigned.  They are, nevertheless, useful upon occasion.  I have just
enough hubris to think that I can design a medium-sized language that
has some of the positive aspects of a "large" language without a lot of
its negative aspects.  If this is what you're reacting to when you say
"I want anti-perl," then your anti-perl is going to be solving a
different problem than Perl does.  More power to you, but I'll keep
doing my thing...

Larry Wall
lwall@netlabs.com

rh@smds.UUCP (Richard Harter) (06/18/91)

In article <798@cwi.UUCP>, alan@cwi.UUCP (Alan Wright) writes:
> In article <552@smds.UUCP>, rh@smds.UUCP (Richard Harter) writes:

> You may find that the best you can do prior to some large following
> developing is to literally give your language away. Further, the
> prevailing attitude is that it must be available in source form.

Well, yes, that is generally the case.

> I agree with your reasoning here, but my experience shows that not only
> do you achieve higher productivity via such languages, but you also
> tend to implement more generality, flexibility, and other such
> qualities with the more expressive language. I suspect that a general
> rule here is that the more code a programmer has to write, the more
> corners (s)he will tend to cut. 

Particularly as deadlines grow closer.  :-)  I think you are right -- the
gain I see is that a well designed "higher level" language is easier to
write -- line for line of code -- than a traditional compiled 3GL because
irrelevant considerations (for the purpose of the code) do not intrude
themselves.  To give a typical example from C, quite often the most natural
way to implement an algorithm is to use pointers into data structures.  The
pointers all have to be declared and initialized.  While this is simple
enough, it is overhead.

I take it as a maxim that any language which does not provide automatic
dynamic storage which you do not have to manage is not really satisfactory.

> > There is a psychological trap involved, a tendency to say "I use and
> > like X, but it could be a lot better so I am going to create my own
> > language Y."  The result is X with a bunch of extra miscellaneous features.

> Not necessarily. For example, there is really no resemblance between
> Perl and Accent. The reason for this is that Perl was designed for
> one niche (system administration?), while our needs were somewhat
> different (production CASE software prototyping and development). 

This is not what I had in mind.  You were trying to do something different.
What I was thinking of was more along the lines of fiddling with minor
features and adding gadgets.  If you look at the major languages you see
quite different underlying concepts of what the languages are about.
Compare, for example, FORTRAN, COBOL, lisp, apl, forth, SNOBOL, and eiffel.
You are talking whole different modes of thinking here.  My point is that
there is a great tendency to elaborate on the mode of thinking that one
is familiar with.  One, in effect, rewrites C with one's particular pet
peeves removed.

> > I gather from your comments about a compile-phase for
> > Perl at runtime that you can parse an accent program in advance and execute
> > off of a representation of the parsed program.  Is this correct?

> Yes. Parse once, execute multiple.... Also, we can deliver one object code
> for all of our platforms.

At this point we have settled for parsing at run time (but parsing is
very cheap.)  This is a one time cost since procedures, once parsed,
remain available in parsed form for re-execution.  Perhaps later.
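The parse-once arrangement can be sketched in a few lines (a hypothetical illustration in Python; the class and method names are mine, not Lakota's internals):

```python
# Minimal sketch of parse-at-runtime with caching: a procedure is
# parsed the first time it is invoked, and the parsed form is kept
# for re-execution, so the parse is a one-time cost.

class Interpreter:
    def __init__(self):
        self._parsed = {}   # procedure name -> parsed representation

    def parse(self, source):
        # Stand-in "parse": split the source into a list of statements.
        return [line.strip() for line in source.splitlines() if line.strip()]

    def run(self, name, source):
        if name not in self._parsed:          # one-time parse cost
            self._parsed[name] = self.parse(source)
        for stmt in self._parsed[name]:       # execute off the parsed form
            pass                              # ... dispatch each statement
        return self._parsed[name]

interp = Interpreter()
src = "set x 1\nprint x"
first = interp.run("demo", src)
again = interp.run("demo", src)
assert first is again   # the second call reuses the parsed representation
```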

> Now, back to the topic of Lakota. I remember seeing something you posted
> in the past which suggested to me that Lakota was perhaps more like a
> REXX than a Perl. What I remember is something about controlling the
> interactions among a set of subordinate tool processes. Is this correct?

Just so.  The general notion is that you can create execution frames on
the fly which have persistence but which are decoupled.  An execution frame
is sort of like a lightweight process; however, from the viewpoint of the
OS the execution frames are all part of the same process.  From a 3GL
program you can start up (or resume) an execution frame which can, in
turn, suspend itself and pass control (and data) back to the calling
program.  You can also link 3GL routines into the execution frame via an
interface which makes them look like Lakota commands and intrinsic
functions.  [3GL is an overstatement -- the interface understands C and
the interpreter is written in C.]  The interpreter expects that there is
an underlying shell language to which it passes any command it doesn't
recognize as a Lakota command.  The real difference between REXX and
Lakota (in this regard) is that there isn't a natural underlying OS such
as VM/CMS that Lakota is tied to.  (Although AREXX for the Amiga
apparently is a happy combination.)  The objective here is to be able to
have multiple pieces of functionality working together within a common
environment.
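The execution-frame idea maps closely onto coroutines: a frame suspends itself, hands a result back to its caller, and can later be resumed with new data. A sketch using Python generators (illustrative only; the names are mine, not Lakota's interface):

```python
# An "execution frame" as a decoupled, persistent coroutine: it keeps
# its own state (total) across suspensions within a single OS process.

def frame(name):
    total = 0
    while True:
        value = yield total        # suspend; pass the result to the caller
        if value is None:
            break
        total += value             # resumed with data from the caller

f = frame("accumulator")
next(f)                            # start the frame (runs to first yield)
print(f.send(10))                  # frame resumes, suspends again -> 10
print(f.send(5))                   # state persisted between calls -> 15
```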
-- 
Richard Harter, Software Maintenance and Development Systems, Inc.
Net address: jjmhome!smds!rh Phone: 508-369-7398 
US Mail: SMDS Inc., PO Box 555, Concord MA 01742
This sentence no verb.  This sentence short.  This signature done.

rh@smds.UUCP (Richard Harter) (06/18/91)

In article <2154@sheol.UUCP>, throopw@sheol.UUCP (Wayne Throop) writes:
> > rh@smds.UUCP (Richard Harter)
	....

> A thought that occurs to me is that the rapid adoption of perl in the
> Unix community is a back-door defection to a philosophy that Lisp folks
> have been preaching for years.  In perl, one is partly abandoning the
> notion of small, stand-alone tools that do one thing well, and
> instead adopting the monolithic everything-in-one-language approach that
> Lisp has always had.  You know.  "Swiss army chainsaw."  Perl simply
> makes this palatable to the Unix community by leaving out some of the
> parentheses and including a baroque syntax and idiosyncratic semantics.  :-)

Baroque?!  How can you say such a thing.  I opine (loverly word, opine) that
the problem with the UNIX tool set approach is three-fold.  The first fold
is that it is (relatively) expensive to connect the tools together.  This
is intrinsic to the UNIX approach of making everything a process and the
need to use temporary files for data flows that can't be shoe-horned into
the redirect/pipe model.  The second fold is simply that the various
utilities are not all consistent.  This is reasonable -- they were
developed independently in an evolutionary fashion.  After the fact one
can look at the whole mess and propose [with the aid of hindsight] a
rationalization of the utilities.  In some sense this is what perl has
done.  The third fold is that the shell (sh, csh) which serves as the
glue to tie all of the pieces together has a number of unfortunate
features.  

> > The main point I would like to make is that it is not such a simple thing
> > to design and implement a language, particularly one which is to have
> > strong string manipulation capabilities.

I didn't say what I meant, here.  I had in mind a number of issues that
regularly create syntactical awkwardnesses unless they are dealt with
carefully.  What I meant to say was that it's simple to design a language
that has a number of messy little annoyances that you don't realize are
there until after you start writing code in it.

> Having had recent experience in doing just that (for the usual misguided
> reasons), I mildly disagree.  By compromising runtime speed somewhat,
> one can put together a complete interpreted string-manipulation language
> for well under 2000 lines of C code and a month or two of elapsed time. 
> This doesn't include design time.  Double or treble or more that effort
> if you then need to squeeze speed out of the thing.  Does this match
> other folks' estimates/experience?

Sounds about right, particularly if you are sensible about using canned
library utilities.  Lakota currently runs about 8500 lines of C, and the
time spent in development is much more.  [There have been a couple of
rewrites and a fair bit of work on speed.]  Lakota does a lot more, which
accounts for the difference in size.

> As an aside, I also find it interesting that "everybody" seems to have
> implemented or be implementing a tiny language or two, even if they
> later repent.  I don't know if that's good, bad, or indifferent, 
> but it seems to be so. 

Who has not cursed the languages that they had available and wished for
better?  It seems to me that it would be natural for someone with a modern
CS background to implement a tiny language, given the common emphasis
on language principles and implementation.

> > However my point was really that it is not so simple to develop a general
> > language that is a significant improvement on what is already available.

> Ah.  Now *this* formulation seems more apt.  "Significant improvement on
> what is already available" is the key.  Part of the problem is, just
> exactly what's "significant" is a difficult question, and easy to be
> self-deceptive about.  Wrangling with myself over exactly this issue is
> what led me down the slippery slope of interpreter implementation.

Self-deceptive?  Is that to my address, sir?   Name your seconds.  It's
pistols at dawn for you.  :-)

> Somebody whose name I forget just now, talking about TCP/IP and ISO,
> coined an apt phrase in this regard.  "Just because your local tire shop
> doesn't have four-ply radials in stock is no reason to go out and
> reinvent the travois."

> As I said, I myself have a fine travois which I use daily.  I don't
> regret inventing it, and I learned a lot from it, and it is a fine and
> dandy tool... but my original justification for writing it was, in
> retrospect, insufficient.

Ah yes.  One's own travois is a much finer vehicle than any automobile
from the workshops of Detroit.  Most of the time my shell is a Lakota
shell.  My, ah, justification is that it looks just like the Bourne
shell except when I need something better.  However the truth of the
matter is that it is my very own travois.  :-)
-- 
Richard Harter, Software Maintenance and Development Systems, Inc.
Net address: jjmhome!smds!rh Phone: 508-369-7398 
US Mail: SMDS Inc., PO Box 555, Concord MA 01742
This sentence no verb.  This sentence short.  This signature done.

dwp@willett.pgh.pa.us (Doug Philips) (06/18/91)

In article <2154@sheol.UUCP>, throopw@sheol.UUCP (Wayne Throop) writes:

+                                                         Perl simply
+makes this palatable to the Unix community by leaving out some of the
+parentheses and including a baroque syntax and idiosyncratic semantics.  :-)

:-) :-)

+As an aside, I also find it interesting that "everybody" seems to have
+implemented or be implementing a tiny language or two, even if they
+later repent.  I don't know if that's good, bad, or indifferent, 
+but it seems to be so. 

Perhaps because "everybody" has read Jon Bentley's Programming Pearls
column on Little Languages?

+As I said, I myself have a fine travois which I use daily.  I don't
+regret inventing it, and I learned a lot from it, and it is a fine and
+dandy tool... but my original justification for writing it was, in
+retrospect, insufficient.

I think that one of the important benefits of writing/implementing your
own LL in addition to accomplishing the immediate task at hand, is in
the learning experience.

-Doug
---
Preferred:  dwp@willett.pgh.pa.us	Ok:  {pitt,sei,uunet}!willett!dwp

gudeman@cs.arizona.edu (David Gudeman) (06/19/91)

In article  <1991Jun18.004338.23499@jpl-devvax.jpl.nasa.gov> Larry Wall writes:
]In article <2154@sheol.UUCP> throopw@sheol.UUCP (Wayne Throop) writes:
]:a philosophy that Lisp folks
]: have been preaching for years.  In perl, one is partly abandoning the
]: notion of small, stand-alone tools that do one thing well, and
]: instead adopting the monolithic everything-in-one-language approach that
]: Lisp has always had.  You know.  "Swiss army chainsaw."

]That's not too far off the mark...

Actually, it _is_ off the mark.  This characterization of Lisp as a
"Swiss army chainsaw" comes from Unix types who are trying to
characterize Lisp as a "tool" so that they can have a nice mental box
to put it in.  However, putting Lisp in a group with lex, yacc, make,
awk, etc. is not quite correct.  It would be no less (and no more)
accurate to characterize Lisp as an operating system.

Basically Unix works on the principle that all tools are separate
programs, loaded separately by the operating system.  This has several
disadvantages -- most obviously that running a separate program as a
separate process is expensive.

Another problem with this approach is that communication between tools
is awkward and inefficient because it is all done through character
streams or disk files.  For tools that want to communicate complex
data (like a parser communicating with a code generator), each one has
to be able to read and write character representations of the data.
This is expensive in both development time and execution time.

The Lisp philosophy is to have a system where tools communicate
through high-level data structures, and where communication between
tools is no different from communication between procedures within a
tool.  Lisp tries to tie everything into a single model and avoid the
two-level system of Unix.  In Lisp, the development language, the
shell, the "pipes", the "file system" and sometimes the process
handler are all the same system (Lisp) and the tools are Lisp
procedures.  This isn't any more monolithic than Unix, it is just that
the divisions are put in different places.
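The contrast can be sketched in one process (an illustration of the general point in Python, not of any particular Lisp system): Unix-style tools must flatten structured data into a character stream and re-parse it on the other side, while in-process "tools" hand the structure over directly.

```python
# A toy "parser" tool producing structured records.
def parser(source):
    return [{"word": w, "len": len(w)} for w in source.split()]

# Unix-style handoff: serialize to a character stream, then re-parse.
def to_stream(records):
    return "\n".join(f"{r['word']}\t{r['len']}" for r in records)

def from_stream(text):
    return [{"word": w, "len": int(n)}
            for w, n in (line.split("\t") for line in text.splitlines())]

records = parser("small tools talk")
# Both ends must agree on a textual format -- development and run-time cost.
assert from_stream(to_stream(records)) == records

# Lisp-style handoff: the next "tool" simply receives the structure.
def code_generator(records):
    return [r["word"].upper() for r in records]

print(code_generator(records))    # no serialization step at all
```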

You can have the "tools" approach in Lisp, but there is no distinction
between a tool and a library routine.  In Lisp, you don't need both an
"rm" and an "unlink()".  If you have "cp" you don't have to code your
own "copy_file()".  And when you write a compiler, you don't have to
decide whether to put the parser and code generator in a single
program for speed, or separate them for maintainability.
--
					David Gudeman
gudeman@cs.arizona.edu
noao!arizona!gudeman

skrenta@amix.commodore.com (Rich Skrenta) (06/21/91)

Looks like I rubbed a scheme zealot the wrong way.  :-)

rockwell@socrates.umd.edu (Raul Rockwell) writes:
> "Write only" languages tend to be ones you don't understand.
>
> Programmers who can't debug programs don't earn high marks in
> anybody's book (except, perhaps, their own).

Strong words, but little insight.  Of course you have to understand a
language to debug it.  No kidding.  But given equal proficiency, could a
programmer debug a similar algorithm implemented in Algol or Forth faster?
How about C vs. Assembly?  Scheme vs. teco?

It seems clear that some languages open code written in them to inspection
and understanding.  Other languages seem to swallow code in a black goo of
obfuscation.

It's possible that the Magical Power of Lisp-like-languages outweighs the
disadvantages of folding every syntactic construct onto the pattern f(a b c).
I'm suspicious, though.  The Scheme crowd's arguments sound an awful lot
like all of the excuses I heard for using Forth.

> [And an inability to
> debug in an interactive environment suggests a severe lack of
> discipline.]

This statement inspired such wonder in me, I couldn't choose between
my responses:

	1)  My interactive environment is Unix and vi, does that count?  :-)

	2)  Good thing all of those nasty undisciplined Fortran programmers
	    debugging their card stacks between batch runs are gone.

	3)  What on earth does discipline have to do with using debuggers
	    and code viewing tools?

> Raul <rockwell@socrates.umd.edu>

Rich
--
skrenta@amix.commodore.com

gateley@rice.edu (John Gateley) (06/22/91)

In article <2714@amix.commodore.com> skrenta@amix.commodore.com (Rich Skrenta) writes:

   But given equal proficiency, could a
   programmer debug a similar algorithm implemented in Algol or Forth faster?
   How about C vs. Assembly?  Scheme vs. teco?

As much as it may surprise you, Scheme is a descendant of Algol. And
as much as it may surprise you, the best environment for
coding/debugging in any language I have ever seen was for
Scheme/Common Lisp (the TI Explorer, though Scheme is not publicly
available).

   It's possible that the Magical Power of Lisp-like-languages outweighs the
   disadvantages of folding every syntactic construct onto the pattern f(a b c).
   I'm suspicious, though.  The Scheme crowds' arguments sound an awful lot
   like all of the excuses I heard for using Forth.

Your pattern is wrong - it should be (f a b c). But you have missed a
lot: Scheme syntax is simple (<function/command> argument ...), but
that is only one small feature of the language. You can write "the
magical power of lisp" into a C style language if you choose, or you
can write a more restrictive version of "lisp" which is basically
equivalent to C but uses prefix notation. So what - it's only syntax.
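The "only syntax" point can be made concrete with a toy evaluator for expressions in the (f a b c) shape (a hypothetical Python sketch, not Scheme itself): the prefix form and the infix form denote the same computation.

```python
# Toy evaluator for prefix expressions of the shape (f a b c).
import operator

OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul}

def eval_prefix(expr):
    # An expression is either a number or a tuple (op, arg, arg, ...).
    if isinstance(expr, (int, float)):
        return expr
    op, *args = expr
    vals = [eval_prefix(a) for a in args]   # evaluate subexpressions
    result = vals[0]
    for v in vals[1:]:                      # fold the operator left-to-right
        result = OPS[op](result, v)
    return result

# (+ 1 2 3) in prefix form is the same computation as 1 + 2 + 3 in infix.
print(eval_prefix(("+", 1, 2, 3)))              # 6
print(eval_prefix(("*", ("+", 1, 2), 4)))       # 12
```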

No, I'm not a Scheme zealot, I just don't like to see a language
trashed because of misunderstanding or dislike of a single feature.

John
gateley@rice.edu
--
"I've thought the thoughts of little children and the thoughts of men
 I've thought the thoughts of stupid people who have never been
 so much in love as they should be and got confused too easily
 to fall in love again." The Residents and Renaldo and the Loaf

rockwell@socrates.umd.edu (Raul Rockwell) (06/22/91)

Rich Skrenta:
   Looks like I rubbed a scheme zealot the wrong way.  :-)

I have a little trouble accepting this idea.  Partially because this
is in response to one of my articles, and partially because there are
languages I find much more suited to my purposes than scheme.  (Scheme
is too awkward at expressing higher-order functions for my taste :-).

Raul Rockwell (me):
   > "Write only" languages tend to be ones you don't understand.

   > Programmers who can't debug programs don't earn high marks in
   > anybody's book (except, perhaps, their own).

Rich Skrenta:
   Strong words, but little insight.  Of course you have to understand
   a language to debug it.  No kidding.  But given equal proficiency,
   could a programmer debug a similar algorithm implemented in Algol
   or Forth faster?  How about C vs. Assembly?  Scheme vs. teco?

I seem to recall a team of Forth programmers winning a "recent"
programming contest -- they were the first to come up with a working
program which displayed text on a moving LED bar, or something on that
order.  I don't know if anybody even attempted Algol.

The C vs. Assembly argument is basically that there are so _many_
assembly languages that it becomes difficult to learn all of them, let
alone learn many well.  C is not the end-all on portability, however.

I'll let scheme and teco fans argue about the virtues of those two
languages.

Anyways, the point [or the "insight", if you prefer] was that
"write-only" is a reflection on programmer skill in a language more
than a reflection of the language itself.

Rich Skrenta:
   It seems clear that some languages open code written in them to
   inspection and understanding.

Ah, yes, like scheme... ;-)

   Other languages seem to swallow code in a black goo of obfuscation.

Obviously referring to C, here...  ;-)

   It's possible that the Magical Power of Lisp-like-languages
   outweighs the disadvantages of folding every syntactic construct
   onto the pattern f(a b c).  I'm suspicious, though.  The Scheme
   crowds' arguments sound an awful lot like all of the excuses I
   heard for using Forth.

Well, if I pointed out the language that *I* prefer (and find clear
and easy to use), you'd probably choke to death.  Let me just say that
I do prefer infix notation.

[This is me:]
   > [And an inability to debug in an interactive environment suggests
   > a severe lack of discipline.]

[And that's Rich:]
   This statement inspired such wonder in me, I couldn't choose
   between my responses:

	   1)  My interactive environment is Unix and vi, does that count?  :-)

to the degree that the environment is interactive, yes.

	   2)  Good thing all of those nasty undisciplined Fortran
	       programmers debugging their card stacks between batch
	       runs are gone.

Why?  I was simply pointing out that an interactive environment
provides many opportunities for debugging -- if you have the insight
to use them.  A batch environment has fewer opportunities for
debugging and so enforces a sort of discipline.

	   3)  What on earth does discipline have to do with using
	       debuggers and code viewing tools?

And here I thought you were using Unix and vi.  ;-)

Anyways, if you get lost in the wonder of it all, you aren't going to
get much debugging done.  On the other hand, if you know exactly what
you want to do, and only breakpoint/trace/single-step/dump the key
sections of code, your debugging goes a lot faster.

Once you've been using an environment for a while, this all becomes
second nature, but since the topic seemed to be write-only languages
(which, by definition, are not well understood) I don't think I was
out of line with that comment (3).  For instance, breakpoints, and
code viewing, are trivial with scheme.

-- 
Raul <rockwell@socrates.umd.edu>

oz@ursa.ccs.yorku.ca (Ozan Yigit) (06/22/91)

skrenta@amix.commodore.com (Rich Skrenta) writes:

   ... But given equal proficiency, could a
   programmer debug a similar algorithm implemented in Algol or Forth faster?
   How about C vs. Assembly?  Scheme vs. teco?

Debugging is an interesting problem. As far as lisp / its descendants go,
they have been well documented in the last two decades, so the answer is
out there in the literature. If you are really in a hurry for something,
a copy of Interactive Programming Environments, by Barstow, Shrobe, and
Sandewall may be of some help.

   It's possible that the Magical Power of Lisp-like-languages outweighs the
   disadvantages of folding every syntactic construct onto the pattern f(a b c).

I don't know what you are trying to say. Could you be a bit more specific?

   I'm suspicious, though.  The Scheme crowds' arguments sound an awful lot
   like all of the excuses I heard for using Forth.

There is no "scheme crowd" making "excuses" for anything.  The language is
out there for you to accept or reject.  All one can ask is an enlightened
response one way or the other.  I hope the bibliography just posted can help
facilitate that.  Btw, I will have a substantial collection of abstracts
before long, in case that may be more helpful.
 
enjoy.	oz
---
In seeking the unattainable, simplicity  |  Internet: oz@nexus.yorku.ca
only gets in the way. -- Alan J. Perlis  |  Uucp: utai/utzoo!yunexus!oz

sw@smds.UUCP (Stephen E. Witham) (06/24/91)

In article <GATELEY.91Jun22001123@gefion.rice.edu>, gateley@rice.edu (John Gateley) writes:
> ...You can write "the
> magical power of lisp" into a C style language if you choose, or you
> can write a more restrictive version of "lisp" which is basically
> equivalent to C but uses prefix notation. So what - its only syntax.

"It's only syntax?"  (When you say "more restrictive," you imply more than
syntax, but...)  Do you mean that syntax is unimportant, or do you
mean that changing the syntax of Scheme would be easy, or what?

If you say syntax is unimportant, then you're pretty much ignoring 
the main issue here.  Scheme with a different syntax is a nice idea,
but it would be a different language, as far as this discussion goes.

If you say changing Scheme to look like C would be easy, that's only
half true.  I've seen a description of an Algolish, infix-notation front
end for Lisp, and it looked great, and I'm sure that at a certain level
it's a trivial thing to do.  But there are problems: how do you define
new operators?  How do you define macros?  How do you write programs
that write programs?  How do you put lambda expressions in the middle
of expressions?  All of these add little chunks of complexity, and
little chunks of complexity count.

> No, I'm not a Scheme zealot, I just don't like to see a language
> trashed because of misunderstanding or dislike of a single feature.

If a single--integral--feature makes the whole language confusing, then 
it's fair to point that out.

--Steve

sw@smds.UUCP (Stephen E. Witham) (06/24/91)

In article <ROCKWELL.91Jun22014535@socrates.umd.edu>, rockwell@socrates.umd.edu (Raul Rockwell) writes:
 
> Anyways, the point [or the "insight", if you prefer] was that
> "write-only" is a reflection on programmer skill in a language more
> than a reflection of the language itself.

(Disclaimer up front:  I work for SMDS, with Richard Harter, who wrote
Lakota and is in this discussion.  The following isn't a plug for Lakota 
in particular, though.)

I disagree strongly.  When you write a program, you transform your
ideas into a form that the computer can use.  In some languages, the
transformation scrambles the evidence of the original thinking process.
In other languages, it's better preserved, so that you can go back and
pick up where you left off.

You might say that a "good" programmer can write a clear program in
any language.  This deserves two responses.  The first is that the design
of a language (or anything else) should have its users in mind.  If the
typical, or natural, way programmers use a language is write-only, then
that's the language designer's fault.  Blaming users for the faults of
our designs is one of the crappiest tendencies of computerists, and it's
not mitigated by applying the same macho ethic to ourselves as users of
languages.  So, perhaps a programmer could write a clear program in a
given language, but if the language makes this hard for some people, or if 
it takes a long time for them to learn how to do it, then the language is 
write-only for those people.

Second, it's not true that you can write any program clearly in any
language.  In the process of making a program "clearer" in a bad
language, all you can do is add complexity--extra variables, extra lines, 
extra function definitions, extra comments--and make the program bigger, 
with more parts.  This pushes details over the horizon, so that at any one 
point, you can see less and less of the relevant detail you need (and more 
and more stuff that's just compensating for the weakness of your language).

--Steve

new@ee.udel.edu (Darren New) (06/25/91)

>> [And an inability to
>> debug in an interactive environment suggests a severe lack of
>> discipline.]

>	2)  Good thing all of those nasty undisciplined Fortran programmers
>	    debugging their card stacks between batch runs are gone.

Actually, having done both, I would say an inability to debug in a batch
environment suggests a lack of discipline, and an inability to debug in
an interactive environment suggests inadequate tools.

	       -- Darren

-- 
--- Darren New --- Grad Student --- CIS --- Univ. of Delaware ---
----- Network Protocols, Graphics, Programming Languages, FDTs -----
+=+ Nails work better than screws, when both are driven with hammers +=+

rockwell@socrates.umd.edu (Raul Rockwell) (06/25/91)

Me:>> [And an inability to debug in an interactive environment
   >> suggests a severe lack of discipline.]

Rich Skrenta:
   >	2)  Good thing all of those nasty undisciplined Fortran
   >	    programmers debugging their card stacks between batch runs
   >	    are gone.

Darren New:
   Actually, having done both, I would say an inability to debug in a
   batch environment suggests a lack of discipline, and an inability
   to debug in an interactive environment suggests inadequate tools.

Yeah.

Except I'd say that an inability to debug in a batch environment
suggests a lack of paper ;-)

Define the ability and inclination to make tools you need as
"discipline" and you'd have the gist of what I was trying to say.

-- 
Raul <rockwell@socrates.umd.edu>

gateley@rice.edu (John Gateley) (06/25/91)

In article <583@smds.UUCP> sw@smds.UUCP (Stephen E. Witham) writes:

   In article <GATELEY.91Jun22001123@gefion.rice.edu>, gateley@rice.edu (John Gateley) writes:
   > ...You can write "the
   > magical power of lisp" into a C style language if you choose, or you
   > can write a more restrictive version of "lisp" which is basically
   > equivalent to C but uses prefix notation. So what - it's only
   > syntax.

   "It's only syntax?"  (When you say "more restrictive," you imply more than
   syntax, but...)  Do you mean that syntax is unimportant, or do you
   mean that changing the syntax of Scheme would be easy, or what?

I mean that syntax is relatively unimportant. My personal preference
is for the prefix notation of scheme, but many people don't like it.
What is important about a language though, is the ease with which you
can express algorithms and ideas. This has very little to do with
syntax, unless you choose a completely confusing syntax. Perhaps the
scheme-syntax bashers are claiming that its syntax is completely
confusing. If so, I have to disagree, it's very simple, easy to learn,
and there are lots of tools available to help with those nasty little
parentheses.

   If you say syntax is unimportant, then you're pretty much ignoring 
   the main issue here.  Scheme with a different syntax is a nice idea,
   but it would be a different language, as far as this discussion goes.

I disagree with this statement, but ...

   If you say changing Scheme to look like C would be easy, that's only
   half true.  I've seen a description of an Algolish, infix-notation front 
   end for Lisp, and it looked great, and I'm sure that at a certain level,
   it's a trivial thing to do, but there are problems, like, how do you 
   define new operators,

You don't - scheme doesn't have operators! (cheap answer, I know).
However, it should be fairly easy to come up with some technique. Ada
has the capability of defining new infix operators, doesn't it?

   and how do you define macros,

Say foo is a macro, then foo(argument, ... , argument) would be an
invocation of the macro. The parser would have to have a table for
macro definitions, but this wouldn't be hard. See the answer to the
next question for the rest of the answer to this.

   how do you write
   programs that write programs,

This is the trickiest of the questions you pose. There are cheap
answers: have an intermediate language for these programs which looks
like scheme, or just have the program-writing program output a text
string. A better idea might be to provide a library with program
constructing functions. Remember, any program writing program must
send its output through eval (or be a macro) and so the parser will
be run on the resulting program as well.
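
To make the "output a text string" route concrete, here is a rough
sketch in Python (the helper name is invented for illustration): a
program-writing program emits source text, and that text goes back
through the language's own parser via eval, just as generated Scheme
code would.

```python
# Hypothetical sketch of the "cheap answer": a program-writing program
# that emits its output as a text string.  The generated text is then
# handed back to the language's own parser and evaluator via eval,
# the same way generated Scheme code must pass through eval.

def make_adder_source(n):
    """Generate the source text of a function that adds n."""
    return f"lambda x: x + {n}"

# The generated text goes back through the parser/evaluator.
add3 = eval(make_adder_source(3))
assert add3(4) == 7
```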

   and how do you put lambda expressions in
   the middle of expressions.

What's wrong with saying function (arg ...) { exp ... } a la C? Since
Scheme is expression oriented, these guys can appear anywhere. Perhaps
I missed something here?
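
For illustration, the same point sketched in Python, which is also
expression oriented in this respect: an anonymous function can sit in
the middle of an expression, without a separate top-level definition.

```python
# An anonymous function used mid-expression, as an argument to map --
# no top-level function definition needed.
squares = list(map(lambda x: x * x, [1, 2, 3]))
assert squares == [1, 4, 9]

# Or defined and applied immediately, inside a larger expression.
assert (lambda a, b: a + b)(2, 3) + 1 == 6
```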

   All of these add little chunks of 
   complexity, and little chunks of complexity count.

However, all these little chunks of complexity are already present in
some form in more standard (syntactically) languages. If you choose
the more complex syntax then you have to pay the price.

John
gateley@rice.edu

--
"I've thought the thoughts of little children and the thoughts of men
 I've thought the thoughts of stupid people who have never been
 so much in love as they should be and got confused too easily
 to fall in love again." The Residents and Renaldo and the Loaf

gudeman@cs.arizona.edu (David Gudeman) (06/26/91)

In article  <GATELEY.91Jun25134644@gefion.rice.edu> John Gateley writes:
]
]I mean that syntax is relatively unimportant.

How can you say that syntax is unimportant when such a huge percentage
of people who try lisp or scheme are turned off by the syntax?  It is
certainly important to them.  It is a hindrance to the goal of getting
more people to use the language, and it is a distraction for many
people trying to use the language.  Just because it isn't a problem
for _you_ doesn't mean that it isn't a problem.

Any language feature that is unpalatable to large numbers of people is
a misfeature.  And calling it "relatively unimportant" isn't going to
make it any less important to the people who are bothered by it.  If
you don't think a lot of people are seriously bothered by lisp syntax
then you are mistaken.

By the way, fancy editors may make it easier to balance parens when
writing lisp, but they don't help much in reading it.
--
					David Gudeman
gudeman@cs.arizona.edu
noao!arizona!gudeman

rockwell@socrates.umd.edu (Raul Rockwell) (06/26/91)

David Gudeman:
   How can you say that syntax is unimportant when such a huge
   percentage of people who try lisp or scheme are turned off by the
   syntax?  It is certainly important to them.  It is a hindrance to
   the goal of getting more people to use the language, and it is a
   distraction for many people trying to use the language.

   Any language feature that is unpalatable to large numbers of people
   is a misfeature.  And calling it "relatively unimportant" isn't
   going to make it any less important to the people who are bothered
   by it.

I don't think that syntax is the big issue for people who don't like
lisp or scheme.  The big issues are more along the line of lack of
decent reference works (something like the old MAC-Lisp manual would
be nice -- very complete, to the point, and cheap).

-- 
Raul <rockwell@socrates.umd.edu>

sw@smds.UUCP (Stephen E. Witham) (06/26/91)

In article <GATELEY.91Jun25134644@gefion.rice.edu>, gateley@rice.edu (John Gateley) writes:
> In article <583@smds.UUCP>, sw@smds.UUCP, I (Stephen E. Witham) write:
> 
>    ...Do you mean that syntax is unimportant, or do you
>    mean that changing the syntax of Scheme would be easy, or what?
> 
> I mean that syntax is relatively unimportant. My personal preference
> is for the prefix notation of scheme, but many people don't like it.
> What is important about a language though, is the ease with which you
> can express algorithms and ideas. This has very little to do with
> syntax, unless you choose a completely confusing syntax. Perhaps the
> scheme-syntax bashers are claiming that its syntax is completely
> confusing. If so, I have to disagree, it's very simple, easy to learn,
> and there are lots of tools available to help with those nasty little
> parentheses.

Scheme is a great language.  Lisp notation is simple to understand--for
my "left brain."  That means I can easily figure out (given a manual)
how to write a program, or figure out (given a manual) what a program
means.  But I think this thread is about "write-onliness," which to me
includes when I look at a program and my eyes (or "right brain") can't 
immediately grasp the structure of it.  I think visual "graspability" is 
also important to let a program serve as a visual pegboard that memories 
and ideas about the program stick to.

The problem with Lisp notation, for my "right brain", is that *every*
*structure*looks*the*same*.  It's all just forms with arguments.
With C, there are visual differences between function definitions,
variable declarations, variable assignments, control structures,
expressions, array references, structure field references, and function 
calls.  In Lisp or Scheme, in order to tell what type of thing something 
is, you have to look at the word at the beginning of the expression or 
special form it's in, know what types of arguments that form takes, and 
then count down through the arguments.

This issue is, for me, only about half of what makes a language readable,
clear, or simple, or not.  Scheme is very elegant otherwise.

(Lots of discussion of how do Scheme in C style.  John makes good
suggestions.  He doesn't get this question:)

>    and how do you put lambda expressions in
>    the middle of expressions.
> 
> What's wrong with saying function (arg ...) { exp ... } a la C? Since
> Scheme is expression oriented, these guys can appear anywhere. Perhaps
> I missed something here?

Well, what I meant was, in C, function definitions are always on the
top level, and trying to stick them into the middle of expressions
would make a visual mess, sort of like the worst of C and Scheme
combined.  Of course, visual mess doesn't bother you. ;^] 

My main point was, anybody who can say "just syntax" is either ignoring
the issue of visual clarity, or they have parsers and reference
manuals built into their corneas.  Perceptual simplicity is hard to
produce, and Lisp takes a simplistic approach to simplicity.

In order to make a language visually simple, you have to have a good idea 
of the kinds of things people will do in it, and in order to do that, you
have to make assumptions.  Scheme (even more than older Lisps) goes the
other way and insists on generality in everything.  So maybe that makes
Scheme a good AI language--it makes it hard to do easy things, but easy
to do hard things. :-)

--Steve

dwp@willett.pgh.pa.us (Doug Philips) (06/26/91)

In article <4582@optima.cs.arizona.edu>,
	gudeman@cs.arizona.edu (David Gudeman) writes:

+In article  <GATELEY.91Jun25134644@gefion.rice.edu> John Gateley writes:
+]
+]I mean that syntax is relatively unimportant.

+How can you say that syntax is unimportant when such a huge percentage
+of people who try lisp or scheme are turned off by the syntax?  It is
+certainly important to them.  It is a hindrance to the goal of getting
+more people to use the language, and it is a distraction for many
+people trying to use the language.  Just because it isn't a problem
+for _you_ doesn't mean that it isn't a problem.

And just because it is a problem for the so-called masses means merely
that it is DIFFERENT from what they are used to.  Not wrong.  DIFFERENT.

+Any language feature that is unpalatable to large numbers of people is
+a misfeature.  And calling it "relatively unimportant" isn't going to
+make it any less important to the people who are bothered by it.  If
+you don't think a lot of people are seriously bothered by lisp syntax
+then you are mistaken.

"Tons of people don't like lisp syntax" ... Does ANYONE have any
non-anecdotal, non-fictional data to back this up?  Yeah, me'n'my'buddies
donna like it.  Wow, am I ever impressed.

No, I don't particularly care for LISP syntax because it is not what I use
most of the time.  Once you get into the groove it's not that big of a deal.

All you've really argued for is making all programming language syntax
sufficiently similar that some first year programming weenie can switch
to it with no hassles.

If you want something really weird try PostScript or Forth.

-Doug
---
Preferred:  dwp@willett.pgh.pa.us	Ok:  {pitt,sei,uunet}!willett!dwp

gudeman@cs.arizona.edu (David Gudeman) (06/27/91)

In article  <2925.UUL1.3#5129@willett.pgh.pa.us> Doug Philips writes:
]In article <4582@optima.cs.arizona.edu>,
]	gudeman@cs.arizona.edu (David Gudeman) writes:
]
]+How can you say that syntax is unimportant when such a huge percentage
]+of people who try lisp or scheme are turned off by the syntax?
]
]And just because it is a problem for the so-called masses means merely
]that it is DIFFERENT from what they are used to.  Not wrong.  DIFFERENT.

No, WRONG.  If it is uncomfortable for people, then it is by
definition the wrong syntax to have them program in.

Definition: A "wrong syntax" is any syntax that is uncomfortable for
the people using it.

It is possible that people could become comfortable with a syntax that
they were once uncomfortable with.  In this case it would cease being
a "wrong syntax" for that individual.  However, in order to make such
an exercise worthwhile you first have to demonstrate that there would
be some great advantage to the new syntax.  There is no great
advantage to lisp syntax.

It is possible (but not certain, contrary to what some lispers would
claim) that if everyone had grown up with lisp-like syntax in math
classes that they would all be just as happy with it as they are with
infix syntax.  Who cares?  The fact is that they didn't grow up with
that sort of syntax, they aren't happy with it, and they shouldn't
have to learn a completely different syntax just because a small
influential community of programmers learned that syntax back in the
days of dinosaurs and IBM 360's when parsing was poorly understood.

It has become institutionalized by now.  People who use that syntax on
a regular basis usually had no trouble with it (or they wouldn't be
using it now), and they assume that everyone is just like them, and
should have no trouble with it either.  And if anyone _does_ have
trouble with it, then the person must be an uneducated, reactionary
slob who uses phrases like "me'n'my'buddies donna like it".

Wake up.  People are different.  They have different tastes, different
tolerance for change, and different backgrounds.  And when you have
something that a huge majority agree on (like infix notation) then you
should exploit this miracle, not spurn it.  And for heaven's sake,
don't try to undermine it (which is just what the lisp community has
been doing).

]"Tons of people don't like lisp syntax" ... Does ANYONE have any
]non-anecdotal, non-fictional data to back this up?  Yeah, me'n'my'buddies
]donna like it.  Wow, am I ever impressed.

Does anyone have any non-anecdotal, non-fictional data to back up the
claim that the syntax of lisp is _not_ a hindrance to learning it?
(sheesh. If you live in a glass house, don't throw stones.)  When the
only evidence available is anecdotal, then that is what you have to
use to form an opinion.  My experience involves a non-scientific
sample of perhaps 10 to 20 individuals who learned lisp and didn't
like it.  When asked why they didn't like it, the syntax is _always_
mentioned as a negative feature.  The sample size is small, but the
variance is impressively low.

By the way, I personally don't have any problem with lisp syntax.  I
program in lisp quite a bit and never have minded the syntax (when
using an editor that bounces parens).

]All you've really argued for is making all programming language syntax
]sufficiently similar that some first year programming weenie can switch
]to it with no hassles.

No, I've argued for making syntax comfortable for people in general,
people who have grown up with modern mathematical notations and
learned to program in C or Pascal or BASIC.  Beginning programmers
aren't the biggest problem, I expect that it is established
programmers who have the most difficulty with radically different
syntax.
--
					David Gudeman
gudeman@cs.arizona.edu
noao!arizona!gudeman

gateley@rice.edu (John Gateley) (06/27/91)

In article <4582@optima.cs.arizona.edu> gudeman@cs.arizona.edu (David Gudeman) writes:

   In article  <GATELEY.91Jun25134644@gefion.rice.edu> John Gateley writes:
   ]
   ]I mean that syntax is relatively unimportant.

   How can you say that syntax is unimportant when such a huge percentage
   of people who try lisp or scheme are turned off by the syntax?  It is
   certainly important to them.  It is a hindrance to the goal of getting
   more people to use the language, and it is a distraction for many
   people trying to use the language.  Just because it isn't a problem
   for _you_ doesn't mean that it isn't a problem.

Let me try and explain what I mean :^) and at the same time reinforce
what a couple of later posts have already said. Many people don't like
Lisp/Scheme, true, but to blame it all or even mostly on the syntax is
unfair. There are many features which require a different mindset:
recursion instead of loops, first class functions, dynamic typing,
macros etc. etc. etc. Among all these new features, I think the
problem of learning an extremely simple syntax is the smallest of the
challenges. The syntax is very regular, has very few special
characters, and can be learned quickly. With a small amount of
practice (and EVERY language requires a small amount, or more) the
programs become understandable, and those nasty little parens cease to
be nasty.

   Any language feature that is unpalatable to large numbers of people is
   a misfeature.  And calling it "relatively unimportant" isn't going to
   make it any less important to the people who are bothered by it.  If
   you don't think a lot of people are seriously bothered by lisp syntax
   then you are mistaken.

I bet that all the people who are bothered by it are much more
bothered by some of the other features (which are semantic features).
And, a lot of the problem here is being faced with something "new".
For example, I put off learning emacs for the longest time, just
because I hate learning new editors. Different commands, different
styles etc. But, once I did, I really enjoyed it. Similarly
Scheme/Lisp is something new to most people, and all it takes are a
couple of changes to make them frustrated and go back to the old way
of doing it.

   By the way, fancy editors may make it easier to balance parens when
   writing lisp, but they don't help much in reading it.

I disagree VERY much with this statement. I do most of my code reading
on a terminal in an emacs buffer so that I can use all the commands
when needed. I very rarely print out Scheme code and look at it on
paper.

John
gateley@rice.edu
--
"I've thought the thoughts of little children and the thoughts of men
 I've thought the thoughts of stupid people who have never been
 so much in love as they should be and got confused too easily
 to fall in love again." The Residents and Renaldo and the Loaf

mhcoffin@tolstoy.waterloo.edu (Michael Coffin) (06/27/91)

There's at least one advantage to the lisp syntax, and that advantage
is important to lispers, although others may not find it important.
Lisp programs have a simple, logical representation as data within
lisp: a program is a list of lists, i.e., a tree.  In most programming
languages, source-to-source transformations are a pain because to do
anything general you have to parse the language into some idiosyncratic
form, manipulate it, and then translate back into the source language.
In lisp you put a single quote in front of the program and you have
data.  That's why lisp is the language of choice for building embedded
languages.  It's also why lisp has general-purpose syntax extension
that integrates neatly with the rest of the language (e.g.,
extend-syntax) while C has a crude, tacked-on macro processor
pre-pass.
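
A rough sketch of the idea in Python (not how Lisp itself does it,
just an analogy): if the program is nothing but nested lists, a
source-to-source transformation is ordinary list manipulation.  Here
the Lisp expression (* (+ 1 2) 3) is written as nested Python lists
and one operator is renamed.

```python
# The Lisp expression (* (+ 1 2) 3) represented as nested lists -- a
# tree.  A source-to-source transformation is then just tree-walking.

def rename(tree, old, new):
    """Replace every occurrence of the symbol `old` with `new`."""
    if isinstance(tree, list):
        return [rename(t, old, new) for t in tree]
    return new if tree == old else tree

program = ['*', ['+', 1, 2], 3]
assert rename(program, '+', 'add') == ['*', ['add', 1, 2], 3]
```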

For people who are interested in the semantics of new constructs as
opposed to syntax, this is a big win.  For example, for fun I once
implemented ``Icon in Scheme'', using streams with lazy evaluation to
implement generators and coexpressions.  The syntax was lisp-ish, but
it had equivalents for all the interesting Icon constructs except
string scanning.  Although I was learning Scheme at the time, it only
took me about three days to get it running.

-mike

gateley@rice.edu (John Gateley) (06/27/91)

In article <591@smds.UUCP> sw@smds.UUCP (Stephen E. Witham) writes:

   [Scheme is good, but not visually graspable]

   The problem with Lisp notation, for my "right brain", is that *every*
   *structure*looks*the*same*.  It's all just forms with arguments.
   With C, there are visual differences between function definitions,
   variable declarations, [...]
   In Lisp or Scheme, in order to tell what type of thing something 
   is, you have to look at the word at the beginning of the expression or 
   special form it's in, know what types of arguments that form takes, and 
   then count down through the arguments.

   [...]

   My main point was, anybody who can say "just syntax" is either ignoring
   the issue of visual clarity, or they have parsers and reference
   manuals built into their corneas.  Perceptual simplicity is hard to
   produce, and Lisp takes a simplistic approach to simplicity.

For me (but of course I am one of those strange Scheme people :^), the
problem is exactly the same but in the other direction. When I look at
languages with more conventional syntax, I have to go through the same
contortions you do. I drag the manual out etc. I accept what you are
saying, but I don't think it implies write-onlyness (perhaps the
problem here is that to me, write-onlyness is a global property which
states that it is ALWAYS hard to read programs in a particular
language as opposed to initially hard). 

It seems to me, also, that people do have parsers and reference
manuals in their corneas. Consider reading - do you spell out the
words letter by letter and then parse them into a single word? Or do
you get the "gestalt" all at once? But this is getting much too
philosophical and outside my field :^).

I still think that Scheme/Lisp style syntax is a viable alternative
to more conventional syntaxes (syntaci?). With any syntax there is
going to be an initial period where it seems write-only, and I don't
see that this period is any longer for Scheme/Lisp than other
languages (and my personal experience indicates it is shorter). I
think the general idea of "write-onlyness" for Scheme comes from the
amazing number of languages with more conventional syntax, the extra
expressive power available, and a sort of "social inertia" to new
things.

John
gateley@rice.edu
--
"I've thought the thoughts of little children and the thoughts of men
 I've thought the thoughts of stupid people who have never been
 so much in love as they should be and got confused too easily
 to fall in love again." The Residents and Renaldo and the Loaf

gudeman@cs.arizona.edu (David Gudeman) (06/27/91)

In article  <GATELEY.91Jun26150215@gefion.rice.edu> John Gateley writes:
]
]... Many people don't like
]Lisp/Scheme, true, but to blame it all or even mostly on the syntax is
]unfair. There are many features which require a different mindset:
]recursion instead of loops, first class functions, dynamic typing,
]macros...

Actually, you _can_ write loops in lisp, and you can ignore
first-class functions and macros if you don't like them.  When these
things are mentioned as negative aspects of lisp, they are actually
negative reactions to the way they were taught lisp, or negative
reactions to advanced programming techniques.  Neither should be taken
as a criticism of lisp.  The lack of static type checking might be
considered a real criticism, but the rest are not.
--
					David Gudeman
gudeman@cs.arizona.edu
noao!arizona!gudeman

gudeman@cs.arizona.edu (David Gudeman) (06/27/91)

In article  <1991Jun26.223026.13792@watserv1.waterloo.edu> Michael Coffin writes:
]There's at least one advantage to the lisp syntax, and that advantage
]is important to lispers, although others may not find it important.
]Lisp programs have a simple, logical representation as data within
]lisp: a program is a list of lists, i.e., a tree.

I'll agree that that would be a huge advantage if it were only
possible with lisp syntax.  But in prolog you can do anything that you
can do with lisp syntax, and prolog uses mostly conventional
function-call and operator syntax.

And I understand (from reading, not first-hand experience) that there
are some languages that have even improved on prolog in this area.
--
					David Gudeman
gudeman@cs.arizona.edu
noao!arizona!gudeman

mathew@mantis.co.uk (Giving C News a *HUG*) (06/27/91)

gudeman@cs.arizona.edu (David Gudeman) writes:
> In article  <2925.UUL1.3#5129@willett.pgh.pa.us> Doug Philips writes:
> ]And just because it is a problem for the so-called masses means merely
> ]that it is DIFFERENT from what they are used to.  Not wrong.  DIFFERENT.
> 
> No, WRONG.  If it is uncomfortable for people, then it is by
> definition the wrong syntax to have them program in.

Oh?  Presumably because object-oriented programming is uncomfortable and
confusing for many people, it is also by definition the wrong way to have
them write programs?

>                                             There is no great
> advantage to lisp syntax.

1. It's simple.  You can summarize all the syntax you need to know on the
   back of a postcard.

2. It's easy to implement.

3. It's flexible, in the way that (say) infix isn't.  Most of the basic
   operations (+, *, and so on) naturally extend to any number of arguments.

4. You don't have to worry about any precedence rules whatsoever.

I'm sure other people can come up with lots of advantages I've forgotten.
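
Points 3 and 4 can be sketched in a few lines of Python (a toy
evaluator, invented purely for illustration): prefix operators take
any number of arguments, and the nesting itself settles what would
otherwise be precedence questions.

```python
# Toy evaluator for prefix expressions written as nested lists.
# Each operator naturally extends to any number of arguments, and
# there are no precedence rules -- the nesting IS the parse.

from functools import reduce

OPS = {'+': lambda a, b: a + b, '*': lambda a, b: a * b}

def evaluate(expr):
    """Evaluate a prefix expression given as nested lists."""
    if not isinstance(expr, list):
        return expr                      # a literal number
    op, *args = expr
    return reduce(OPS[op], (evaluate(a) for a in args))

# (+ 1 2 3 4): one operator, four arguments, no precedence questions.
assert evaluate(['+', 1, 2, 3, 4]) == 10
# (* (+ 1 2) 4): the nesting disambiguates, not a precedence table.
assert evaluate(['*', ['+', 1, 2], 4]) == 12
```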

> It is possible (but not certain, contrary to what some lispers would
> claim) that if everyone had grown up with lisp-like syntax in math
> classes that they would all be just as happy with it as they are with
> infix syntax.  Who cares?  The fact is that they didn't grow up with
> that sort of syntax, they aren't happy with it, and they shouldn't
> have to learn a completely different syntax [...]

Ah!  You're arguing that people should be taught Lisp or RPN in schools?  I
agree wholeheartedly.  At school I devised a method of using my Casio
calculator in an RPN-like way, because I found it more straightforward.  When
I later found out about RPN and got an HP, I thought "Why didn't they teach
me this, or at least mention it, when we started using calculators?"

> ]"Tons of people don't like lisp syntax" ... Does ANYONE have any
> ]non-anecdotal, non-fictional data to back this up?  Yeah, me'n'my'buddies
> ]donna like it.  Wow, am I ever impressed.
> 
> Does anyone have any non-anecdotal, non-fictional data to back up the
> claim that the syntax of lisp is _not_ a hindrance to learning it?

You're the one making the positive assertion, that Lisp's syntax is a
hindrance to learning it.  You're the one who has to provide the evidence.

>                                                              When the
> only evidence available is anecdotal, then that is what you have to
> use to form an opinion.

When the only evidence is anecdotal, I would advise you not to form such a
strongly-held opinion.


mathew

 

sw@smds.UUCP (Stephen E. Witham) (06/27/91)

In article <ROCKWELL.91Jun26084802@socrates.umd.edu>, rockwell@socrates.umd.edu (Raul Rockwell) writes:
 
> I don't think that syntax is the big issue for people who don't like
> lisp or scheme.  

For people who want a Scheme-like language, but don't like Scheme, like
me, the main reason is probably the syntax.

> The big issues are more along the line of lack of decent reference works...

The last Scheme manual I saw (PC Scheme from TI) seemed pretty complete to me.

--Steve

sw@smds.UUCP (Stephen E. Witham) (06/27/91)

In article <2925.UUL1.3#5129@willett.pgh.pa.us>, dwp@willett.pgh.pa.us (Doug Philips) writes:
> In article <4582@optima.cs.arizona.edu>,
> 	gudeman@cs.arizona.edu (David Gudeman) writes:
 
> +How can you say that syntax is unimportant when such a huge percentage
> +of people who try lisp or scheme are turned off by the syntax?  
 
> And just because it is a problem for the so-called masses means merely
> that it is DIFFERENT from what they are used to.  Not wrong.  DIFFERENT.

To an intellectual populist like myself, something that is inaccessible
to large numbers of people for insufficient reason is WRONG.
 
> Once you get into the groove it's not that big of a deal.

Once you've banged your head enough, you become numb.  But that doesn't
mean the damage stops.  Lisp syntax just doesn't make good use of the
way human brains and visual systems work.  Practice can compensate,
but not totally.  It means you spend conscious energy to do what you
can do unconsciously with other languages.

--Steve

sw@smds.UUCP (Stephen E. Witham) (06/28/91)

In article <GATELEY.91Jun26150215@gefion.rice.edu>, gateley@rice.edu (John Gateley) writes:
 
> ...There are many features which require a different mindset:
> recursion instead of loops, first class functions, dynamic typing,
> macros etc. etc. etc. Among all these new features, I think the
> problem of learning an extremely simple syntax is the smallest of the
> challenge. The syntax is very regular, has very few special
> characters, and can be learned quickly. 

That's true.  Learning the rules of Lisp syntax is easy.  But there are
two reasons why that doesn't make it easy to read:

#1 is that you're talking about consciously learning
the rules and consciously parsing things.  But Lisp syntax doesn't 
transfer easily to the unconscious.

#2 is that what other languages do with syntax, lisp does with special 
keywords, like defun, lambda, cond, let, prog, loopxxx... I don't know
the exact words, but they all have special structural meanings and
special treatments of certain arguments.  Learning all the special
forms and their argument forms takes longer than just learning to
balance parens.  And still your eyes can't learn it very well.

> I bet that all the people who are bothered by it are much more
> bothered by some of the other features (which are semantic features).

Well, I for one LOVE the semantics of Scheme and dislike the syntax.

> And, a lot of the problem here is being faced with something "new".

That may be for some people.  I've known about Lisp for a long time,
so I don't have a new user's point of view.  But I don't think the 
semantics are that different from other languages, just more consistent.  
I still think syntax is what puts off most of the people who are put off.

> ...I do most of my code reading on a terminal in an emacs buffer so that 
> I can use all the commands when needed. I very rarely print out Scheme 
> code and look at it on paper.

This sounds like a REAL INTERESTING point.  Do you mean you use the
editor to step between arguments and to find matching parens?  If so,
you're FEELING your way, or maybe more like WALKING, around the code--
in any case, something kinesthetic is going on...COOL!  But notice how
it's like groping in the dark (in a well-labeled, tree-structured space, 
(wasn't that a short story by Hemingway?) I grant you).  In other words, 
Lisp syntax is no help to your EYES, but with emacs as a CANE...  This 
probably has some great implications for user interfaces for blind people!

--Steve

sw@smds.UUCP (Stephen E. Witham) (06/28/91)

In article <1991Jun26.223026.13792@watserv1.waterloo.edu>, mhcoffin@tolstoy.waterloo.edu (Michael Coffin) writes:
> There's at least one advantage to the lisp syntax, and that advantage
> is important to lispers, although others may not find it important.
> Lisp programs have a simple, logical representation as data within
> lisp: a program is a list of lists, i.e., a tree.  

Lisp has two features: an internal form for programs, and a (over:-) simple
syntax.  I think with an "infix Scheme," having a well-defined internal 
form would give you most of the power.  You'd need a "code-constant" or
"internal-form-of-this-quoted-code-fragment" construct, e.g.,

x = quote {
    function ( a, b ) {
        return a+b;
        }
    };

(That's not a lambda expression, it's a quoted lambda expression.)
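To make the idea concrete, here is a rough sketch of what such a code-constant
buys you, using Python's `ast` module as a stand-in for the hypothetical
`quote { ... }` construct (the construct itself is imaginary; only the sketch
below is real code):

```python
import ast

# Hypothetical "x = quote { function (a, b) { return a+b; } };"
# sketched with the standard ast module: x is bound to the
# *representation* of the lambda, not to a callable.
x = ast.parse("lambda a, b: a + b", mode="eval")

# x is ordinary data -- a tree of nodes we can take apart:
lam = x.body
print(type(lam).__name__)                  # Lambda
print([arg.arg for arg in lam.args.args])  # ['a', 'b']

# ...and it only becomes a function when explicitly compiled.
f = eval(compile(x, "<quoted>", "eval"))
print(f(2, 3))                             # 5
```

The point of the sketch is the separation: the internal form is a first-class
value you can inspect, and turning it into a procedure is a distinct,
explicit step.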

> In most programming
> languages, source-to-source transformations are a pain because to do
> anything general you have to parse the language into some idiosyncratic
> form, manipulate it, and then translate back into the source language.

If the compiler (or a library routine) can do the parsing,
and there's a standard (not idiosyncratic) internal form,
and a library routine can print programs,
(or else, you don't go to ASCII, but stay in internal form),
then the situation would be pretty much the same as with Lisp.
Lisp has to be parsed and printed, too, if you're working with ASCII.  

> It's also why lisp has general-purpose syntax extension
> that integrates neatly with the rest of the language (e.g.,
> extend-syntax) while C has a crude, tacked-on macro processor
> pre-pass.

I've always wanted "semantic macros" instead of text macros, for C.

--Steve

sw@smds.UUCP (Stephen E. Witham) (06/28/91)

In article <GATELEY.91Jun26193626@gefion.rice.edu>, gateley@rice.edu (John Gateley) writes:
> 
> For me (but of course I am one of those strange Scheme people :^), the
> problem is exactly the same but in the other direction. When I look at
> languages with more conventional syntax, I have to go through the same
> contortions you do. I drag the manual out etc. ...
 
> Consider reading - do you spell out the
> words letter by letter and then parse them into a single word? Or do
> you get the "gestalt" all at once? 

Words--gestalt.  Sentences--infix notation (well, mostly) plus special
beginning and end markers!  Paragraphs, sections, chapters--visual!  Hah!  
But English isn't that different from Lisp.  On the other hand, English
is constructed for reading straight through rather than hacking on.

--Steve

dwp@willett.pgh.pa.us (Doug Philips) (06/28/91)

In article <4601@optima.cs.arizona.edu>,
	gudeman@cs.arizona.edu (David Gudeman) writes:
+No, WRONG.  If it is uncomfortable for people, then it is by
+definition the wrong syntax to have them program in.

+Definition: A "wrong syntax" is any syntax that is uncomfortable for
+the people using it.

No, it is "uncomfortable syntax."  (NOT (EQ 'Uncomfortable 'Wrong))
see also very last comment.

+It is possible that people could become comfortable with a syntax that
+they were once uncomfortable with.  In this case it would cease being
+a "wrong syntax" for that individual.  However, in order to make such
+an exercise worthwhile you first have to demonstrate that there would
+be some great advantage to the new syntax.  There is no great
+advantage to lisp syntax.

Indeed.  As someone else has already pointed out, Lisp syntax allows
programs/code to be data, and vice versa.  Of course, if you come
from a background in Algol-like languages, you may not see how that
could be valuable.  If you think that orthogonality between syntax
and semantics is useful, you would like lisp syntax.

+It is possible (but not certain, contrary to what some lispers would
+claim) that if everyone had grown up with lisp-like syntax in math
+classes that they would all be just as happy with it as they are with
+infix syntax.  Who cares?  The fact is that they didn't grow up with
+that sort of syntax, they aren't happy with it, and they shouldn't
+have to learn a completely different syntax just because a small
+influential community of programmers learned that syntax back in the
+days of dinosaurs and IBM 360's when parsing was poorly understood.

You have still failed to show who these "they" are that are so
unhappy, and just how numerous "they" are.

The fact is part of what makes lisp powerful is its highly regular
syntax and that it permits the program/data division to be easily
crossed.

+It has become institutionalized by now.  People who use that syntax on
+a regular basis usually had no trouble with it (or they wouldn't be
+using it now), and they assume that everyone is just like them, and
+should have no trouble with it either.  And if anyone _does_ have
+trouble with it, then the person must be an uneducated, reactionary
+slob who uses phrases like "me'n'my'buddies donna like it".

Just as the people who use Algol-like languages have become
institutionalized and assume that everyone is just like them.

+Wake up.  People are different.  They have different tastes, different
+tolerance for change, and different backgrounds.  And when you have
+something that a huge majority agree on (like infix notation) then you
+should exploit this miracle, not spurn it.  And for heaven's sake,
+don't try to undermine it (which is just what the lisp community has
+been doing).

Wake up?  Why?  You haven't said anything new.  You keep asserting the
same things over and over with no support.

All hail the long and venerated tradition.

+Does anyone have any non-anecdotal, non-fictional data to back up the
+claim that the syntax of lisp is _not_ a hindrance to learning it?
+(sheesh. If you live in a glass house, don't throw stones.)

You:  X!
Me:   Support it.
You:  if not X, then X.  Prove not X to me or I'll assume X.

Yawn.  Can you distinguish between a challenge to support and a claim
for an opposite position?

+When the only evidence available is anecdotal, then that is what you
+have to use to form an opinion.  My experience involves a
+non-scientific sample of perhaps 10 to 20 individuals who learned
+lisp and didn't like it.  When asked why they didn't like it, the
+syntax is _always_ mentioned as a negative feature.  The sample size
+is small, but the variance is impressively low.

When the sample size is small, the variance don't tell you squat.
You cannot prove your position by lack of opposition.  Of course you
gave the game away when you admitted it was a matter of opinion.

+No, I've argued for making syntax comfortable for people in general,
+people who have grown up with modern mathematical notations and
+learned to program in C or Pascal or BASIC.  Beginning programmers
+aren't the biggest problem, I expect that it is established
+programmers who have the most difficulty with radically different
+syntax.

I agree that beginning programmers aren't the biggest problem.  They
haven't yet learned any preconceived notions about how programming
languages should be written.  That admission undermines your position
about "modern mathematical notations", which is not the source of the
difficulty, and highlights that it is instead dogmatic adherence to the
syntax of an early learned language.

Since it is not confusing to beginners who have no preconceived
notions about programming languages, _and_ (tie in from first
comment...) since it is something that people can become accustomed
to, THEN the real point is that rightness/wrongness is nothing
*inherent* about the syntax itself, but is merely a matter of the
habitual experience of people.

-Doug
---
Preferred:  dwp@willett.pgh.pa.us	Ok:  {pitt,sei,uunet}!willett!dwp

macrakis@osf.org (Stavros Macrakis) (06/28/91)

In article <603@smds.UUCP> sw@smds.UUCP (Stephen E. Witham) suggests
adding more conventional syntax on top of a Scheme-like semantic base.

Algol-like syntax has gone with Lisp-like languages in several
different projects:

 -- Lisp 2.0 was a pretty standard Lisp inside, with Algol syntax outside.

 -- ECL (or EL/1) had an Algol-like syntax, a Lisp-like approach to
    the language environment (interpreter with compatible compiler,
    program represented as linked-list data structures available to
    other programs, dynamic scoping, ...) and an original approach to
    dynamic data typing (types as data, etc.).

 -- The Macsyma user language was fairly Algol-like on the outside,
    very Lisp-like on the inside.  (Macsyma is a large symbolic
    algebra system.)

Of course, all of this depends on what you call `Algol-like' and what
you call `Lisp-like'.  If you're not careful, you end up calling every
expression-oriented language with a tree-like internal form Lisp-like,
and every language with begin-end blocks Algol-like.

Perhaps Scheme ought to be called Algol-like for that matter, since
Algol has had static scoping from the beginning, and Algol 68 even
introduced first-class closures....

And oh, yes, I agree with Witham that there's more to Lisp's syntax
than S-expressions: you've got to count all the special forms as part
of the syntax to be fair....  One of the advantages of Scheme is that
it reduces the number of these.

	-s

** I don't have bibliographies, but here are some pointers:

For Lisp 2.0: see Jean Sammet's Programming Languages: History and
	Fundamentals

For ECL: Ben Wegbreit's papers in CACM around 1970-75.

For Macsyma: Macsyma Users' Manual (I don't think the language was
	described in any detail in the published papers).

rockwell@socrates.umd.edu (Raul Rockwell) (06/28/91)

Stephen E. Witham:
> Lisp has two features: an internal form for programs, and a (over:-)
> simple syntax.  I think with an "infix Scheme," having a
> well-defined internal form would give you most of the power.  You'd
> need a "code-constant" or
> "internal-form-of-this-quoted-code-fragment" construct, e.g.,
>
> x = quote {
>     function ( a, b ) {
>         return a+b;
>         }
>     };
>
> (That's not a lambda expression, it's a quoted lambda expression.)


Why such a complicated expression?  Why not just
x =: +

Then, presumably, when you say 
   1 x 1
the answer would be
2

Or is it important to quote the thing?  You could always say
x =: '+'

(too simplistic?  what if you want to have a code constant for some
existing function, independent of its symbol?  Well, you could always
adopt a secondary quoting scheme....  Like, maybe
x =: + quoted
)

Requiring a lambda expression to express an anonymous function seems
to me to be one of LISP's flaws, not one of its advantages.  Or, put
another way, infix notation already has a clear scheme for
representing arguments, why not take advantage of it?

-- 
Raul <rockwell@socrates.umd.edu>

kers@hplb.hpl.hp.com (Chris Dollin) (06/28/91)

Doug Philips says:

   Indeed.  As someone else has already pointed out, Lisp syntax allows
   programs/code to be data, and vice versa.  Of course, if you come
   from a background in Algol-like languages, you may not see how that
   could be valuable.  If you think that orthogonality between syntax
   and semantics is useful, you would like lisp syntax.

Can we dispose of this myth for once and for all?

It's not ``Lisp syntax'' that allows code-as-data; it's because there's a
canonical representation for program texts as a supported datatype. In Lisp,
the supported datatype happens to be lists, and the mapping twixt source and
representation is at least translucent. But there's no reason an ``Algol-like''
language (whatever that means in 1991; are C, Ada, ML, Pop11, or Scheme
Algol-like? Answers on a postcard please ...) shouldn't provide a suitable
data-type and representation scheme.

For example ``here's one I prepared earlier'': in my Pop-like language Pepper,
I plan to include a construct such as ``<$ Expr $>'' to mean ``the parse tree
associated with Expr''. There will be operations on parse trees, such as gluing
them together and taking them apart, or generating code from them. (The present
compiler uses lists for parse trees - surprise! - but the post-bootstrap
compiler will have a more compact representation engineered to its needs.)

So it's not the (concrete) syntax of Lisp that gives code-as-data; it's the act
of making the parse tree *available*.

[Incidentally, rather than an anti-quoting syntax within the quote brackets, I
am thinking of making the quote take a left operand, viz, the names to be taken
as meta-variables inside the quotation, thus:

    (x) <$ f(x) $>

would denote the parse tree for an application whose function-part is f and
whose argument part is the expression in the variable x. Any comments on this
idea?]
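One way to see what that operand-quotation might work out to in practice:
Python (to pick one Algol-like language that exposes its parse trees) lets
you sketch both the quotation and the meta-variable substitution in a few
lines.  All the names below are my own illustration, not Pepper:

```python
import ast

def quote(src):
    """Parse tree for an expression -- a stand-in for <$ Expr $>."""
    return ast.parse(src, mode="eval").body

class Substitute(ast.NodeTransformer):
    """Replace a meta-variable name with another parse tree."""
    def __init__(self, name, tree):
        self.name, self.tree = name, tree
    def visit_Name(self, node):
        return self.tree if node.id == self.name else node

# (x) <$ f(x) $>, with the meta-variable x filled in by the
# parse tree for 'abc' * 2 -- gluing two trees together:
template = quote("f(x)")
filled = Substitute("x", quote("'abc' * 2")).visit(template)
filled = ast.fix_missing_locations(ast.Expression(filled))

# Generating code from the glued tree and running it, with f = len:
code = compile(filled, "<glued>", "eval")
result = eval(code, {"f": len})
print(result)   # 6
```

The mechanics differ from Lisp's lists, but the workflow Dollin describes --
quote, splice, compile -- carries over intact.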

--

Regards, Chris ``GC's should take less than 0.1 second'' Dollin.

gudeman@cs.arizona.edu (David Gudeman) (06/28/91)

In article  <2Xc8415w164w@mantis.co.uk> Giving C News a *HUG* writes:
]gudeman@cs.arizona.edu (David Gudeman) writes:
]> 
]> No, WRONG.  If it is uncomfortable for people, then it is by
]> definition the wrong syntax to have them program in.
]
]Oh?  Presumably because object-oriented programming is uncomfortable and
]confusing for many people, it is also by definition the wrong way to have
]them write programs?

Two answers: (1) there is presumably some other advantage to object-oriented
programming that gives an incentive to go through the discomfort -- the
same does not apply to lisp syntax.  (2) if your object oriented
syntax is all that uncomfortable for large numbers of people then,
yes, it is probably wrong.  Find a better one.

]>                                             There is no great
]> advantage to lisp syntax.
]
]1. It's simple.  You can summarize all the syntax you need to know on the
]   back of a postcard.

Not true.  You have to specify the special syntax of defun, let,
unwind-protect, do (which is worse than anything I have ever seen in
an algol-like language), etc.  You have to specify what operators take
more than the normal number of parameters.  And for non-associative
operations you have to specify what multiple parameters mean.  Lisp
syntax is _not_ trivial.

]2. It's easy to implement.

That isn't an advantage for the person using the syntax.

]3. It's flexible, in the way that (say) infix isn't.  Most of the basic
]   operations (+, *, and so on) naturally extend to any number of arguments.

A very minor advantage, and probably the only real one.

]4. You don't have to worry about any precedence rules whatsoever.

The same is true of infix syntax.  If you don't know the precedence
you can always use parens (and the result is no worse than lisp).  The
difference is that with infix syntax you have the choice.

]> Does anyone have any non-anecdotal, non-fictional data to back up the
]> claim that the syntax of lisp is _not_ a hindrance to learning it?
]
]You're the one making the positive assertion, that Lisp's syntax is a
]hindrance to learning it.  You're the one who has to provide the evidence.

The fact is that you seem to believe that lisp syntax does not cause
problems.  This is a real opinion.  It is different from having no
opinion on the matter, and is just as subject to the need for evidence
as is the opposite opinion.  I can even rephrase your stand as a
positive assertion: "lisp syntax is just as good as algol-like
syntax".  There, now you have to give evidence and I don't.

]>                                                              When the
]> only evidence available is anecdotal, then that is what you have to
]> use to form an opinion.

]When the only evidence is anecdotal, I would advise you not to form such a
]strongly-held opinion.

You have no idea how strongly I hold this opinion.
--
					David Gudeman
gudeman@cs.arizona.edu
noao!arizona!gudeman

gudeman@cs.arizona.edu (David Gudeman) (06/28/91)

In article  <2928.UUL1.3#5129@willett.pgh.pa.us> Doug Philips writes:
]In article <4601@optima.cs.arizona.edu>,

]+Definition: A "wrong syntax" is any syntax that is uncomfortable for
]+the people using it.
]
]No, it is "uncomfortable syntax."  (NOT (EQ 'Uncomfortable 'Wrong))
]see also very last comment.

People who choose which programming language to use are going to agree
with my definition, and are not usually going to choose a language
with wrong syntax.

]Indeed.  As someone else has already pointed out, Lisp syntax allows
]programs/code to be data, and vice versa.  Of course, if you come
]from a background in Algol-like languages, you may not see how that
]could be valuable.  If you think that orthogonality between syntax
]and semantics is useful, you would like lisp syntax.

Oh please.  I am fully conversant with high-level language issues.  I
don't like algol-like languages, and I don't have anything against
lisp syntax personally.  My objection to lisp syntax is that it is a
problem for _other_ people.  It is not a problem for me.  Furthermore,
there is no relationship between lisp syntax and the ability to treat
program as data.  Quite the contrary, prolog does a much better job
(in my opinion) of treating programs as data, and prolog has
traditional syntax.

]The fact is part of what makes lisp powerful is its highly regular
]syntax and that it permits the program/data division to be easily
]crossed.

Half true.  The program-as-data is an extremely valuable paradigm.
The "highly regular syntax" (which is a myth anyway) has nothing to do
with it.  In fact, I suspect that the syntax of lisp was a major
factor in making C the language of the 80's, and the syntax of scheme
is what will make some other language the language of the 90's.

]Just as the people who use Algol-like languages have become
]institutionalized and assume that everyone is just like them.

True enough.  People who think there is no alternative to static
typing irritate me even more than people who think lisp syntax is a
non-issue.

]Wake up?  Why?  You haven't said anything new.  You keep asserting the
]same things over and over with no support.
]
]All hail the long and venerated tradition.

Contrary to you?  What support have you given for your view except for
your contempt of tradition and of the convenience of others?

]You:  X!
]Me:   Support it.
]You:  if not X, then X.  Prove not X to me or I'll assume X.
]
]Yawn.  Can you distinguish between a challenge to support and a claim
]for an opposite position?

Yes I can.  Can you?  You have _not_ just said "support it", you have
been making counter claims (see the end).  You have been claiming "not
X" which is different from just asking me to support my view.
Furthermore, I have not said anything similar to "Prove not X to me or
I'll assume X".  I reached my conclusions about X on my own, thank
you, with evidence that is conclusive enough for my purposes.  If you
can show me counter-X evidence that is stronger than my X evidence, I
will likely change my mind.  This is not an emotional issue with me.

]+...  My experience involves a
]+non-scientific sample of perhaps 10 to 20 individuals who learned
]+lisp and didn't like it.  When asked why they didn't like it, the
]+syntax is _always_ mentioned as a negative feature.  The sample size
]+is small, but the variance is impressively low.
]
]When the sample size is small, the variance don't tell you squat.

On the contrary.  With a variance of 0, a sample size of 10 is quite
telling.  If you tossed a coin 10 times and got heads every time,
wouldn't you feel safe in concluding that the coin is _very_ probably
biased?

]You cannot prove your position by lack of opposition.  Of course you
]gave the game away when you admitted it was a matter of opinion.

Where did I admit that?  I said that some people are comfortable with
lisp syntax and some aren't.  So I guess the answer to the question "is
lisp syntax comfortable?" is a matter of opinion.  But the question
"is lisp syntax a hindrance in getting people to accept lisp?" is not
a matter of opinion.  The answer is clearly "yes".

]I agree that beginning programmers aren't the biggest problem.  They
]haven't yet learned any preconceived notions about how programming
]languages should be written.  That admission undermines your position
]about "modern mathematical notations", which is not the source of the
]difficulty, and highlights that it is instead dogmatic adherence to the
]syntax of an early learned language.

That admission does not undermine my position about modern
mathematical notations at all.  There are at least two or three
effects here.  (Since you seem to like arguing about rhetorical
method, I will observe that the following is speculation.  However,
what I am trying to explain by the speculation is the _observation_
that lisp syntax is a hindrance in getting people to accept lisp.)

First, people learn infix notation and f(x) notation from their early
school years.  If they learn lisp as their first programming language,
then they have no reason to expect that the programming language will
be like math (after all, prose isn't like math), so they don't rebel
against the difference.  But on the other hand, they don't have the
helpful semantic hooks to use when learning lisp that they would have
had learning a language that looks more like math.  So learning to
program has been harder for them than it should have been, but they
won't know that.

People who already know a language with a more traditional syntax
already know that it is possible to make languages look like math, and
they will expect it.  When these more experienced people don't get
their little semantic hooks they _know_ that they are being subjected
to unnecessary difficulty.  That is the source of the negative
reactions to lisp.

A third possible factor is that lisp notation is actually, objectively
harder for the human mind to grasp.  I wouldn't want to claim this
outright, but I rather suspect that it is true.  I _don't_ believe
this in general about prefix and postfix notations (like Postscript).
Human languages have all three forms of operator/operand distribution:
prefix, infix and postfix (and arguably "outfix" or brackets).  What
human languages don't have (as far as I know) is any pervasive feature
that could be described as prefix-with-required-brackets.

]Since it is not confusing to beginners who have no preconceived
]notions about programming languages,

Looks like you are claiming "not X" there.  What's your evidence?

] _and_ (tie in from first
]comment...) since it is something that people can become accustomed
]to,

If you mean "everyone" or even "most everyone" can become accustomed
to it, then you are making another assertion without evidence.  If you
only mean "some people" then your following:

] THEN the real point is that rightness/wrongness is nothing
]*inherent* about the syntax itself, but is merely a matter of the
]habitual experience of people.

doesn't follow.

Furthermore, _I_ already suggested that this difficulty is probably
caused by experience.  Your final sentence makes it sound like I
was arguing against that.  My point (as I clearly stated it before) is
that it doesn't matter why people have difficulty with lisp syntax,
they do.  And if their problems are just because they are not used to it
then it is still a real problem.  (And you haven't even proven that
the problems can be overcome by everyone).
--
					David Gudeman
gudeman@cs.arizona.edu
noao!arizona!gudeman

markf@zurich.ai.mit.edu (Mark Friedman) (06/28/91)

In article <ROCKWELL.91Jun27233537@socrates.umd.edu> rockwell@socrates.umd.edu (Raul Rockwell) writes:

   Stephen E. Witham:
   > Lisp has two features: an internal form for programs, and a
   > (over:-) simple syntax.  I think with an "infix Scheme," having a
   > well-defined internal form would give you most of the power.
   > You'd need a "code-constant" or
   > "internal-form-of-this-quoted-code-fragment" construct, e.g.,
   >
   > x = quote {
   >     function ( a, b ) {
   >         return a+b;
   >         }
   >     };
   >
   > (That's not a lambda expression, it's a quoted lambda expression.)


   Why such a complicated expression?  Why not just
   x =: +

Because he was responding to someone who was remarking upon the
usefulness of lisp code being represented as lists. Stephen was trying
to show that you could have some syntax (like 'quote') which when
given a code fragment could return a useful internal representation of
that code, the same way that, in lisp, you can slap a 'quote' around a
piece of code and get a list. He wanted the variable 'x' to be bound
to a representation of the code of the procedure, not the procedure
itself.

   Requiring a lambda expression to express an anonymous function
   seems to me to be one of LISP's flaws, not one of its advantages.

What's your alternative? You need some syntax to say that you want a
function that takes some number of arguments.
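For what it's worth, a language with both conventions shows the two positions
side by side (Python here, purely as an illustration):

```python
import operator

# Rockwell's "x =: +" works when the function already exists
# under a name -- no lambda needed:
x = operator.add
print(x(1, 1))        # 2

# But for a *new* anonymous function you still need some syntax
# that says which arguments it takes -- Friedman's point.  Here
# that syntax is lambda:
y = lambda a, b: a + 2 * b
print(y(1, 1))        # 3
```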

   Or, put another way, infix notation already has a clear scheme for
   representing arguments, why not take advantage of it?

I won't argue about the usefulness of infix notation, but what has
that to do with anonymous functions?

-Mark
--

Mark Friedman
MIT Artificial Intelligence Lab
545 Technology Sq.
Cambridge, Ma. 02139

markf@zurich.ai.mit.edu

dwp@willett.pgh.pa.us (Doug Philips) (06/28/91)

In message <601@smds.UUCP>, sw@smds.UUCP (Stephen E. Witham) writes:

+In article <2925.UUL1.3#5129@willett.pgh.pa.us>, dwp@willett.pgh.pa.us (Doug Philips) writes:
+> And just because it is a problem for the so called masses means merely
+> that it is DIFFERENT from what they are used to.  Not wrong.  DIFFERENT.

+To an intellectual populist like myself, something that is inaccessible
+to large numbers of people for insufficient reason is WRONG.

Then any computer programming language is WRONG.  Your definition is
useless.  The dictionary definition even more so.

+> Once you get into the groove its not that big of a deal.

+Once you've banged your head enough, you become numb.  But that doesn't
+mean the damage stops.  Lisp syntax just doesn't make good use of the
+way human brains and visual systems work.  Practice can compensate,
+but not totally.  It means you spend conscious energy to do what you
+can do unconsciously with other languages.

There is almost nothing you can do unconsciously without first having
been trained.  All you are saying is that there has been sufficient
training/experience with other languages for internalization to have
occurred.  That is nothing inherent in the language, it is due to an
external artifact of experience.

In article <602@smds.UUCP>, sw@smds.UUCP (Stephen E. Witham) writes:
+That's true.  Learning the rules of Lisp syntax is easy.  But there are
+two reasons why that doesn't make it easy to read:

+#1 is that you're talking about consciously learning
+the rules and consciously parsing things.  But Lisp syntax doesn't 
+transfer easily to the unconscious.

Another unsubstantiated claim.

+#2 is that what other languages do with syntax, lisp does with special 
+keywords, like defun, lambda, cond, let, prog, loopxxx... I don't know
+the exact words, but they all have special structural meanings and
+special treatments of certain arguments.  Learning all the special
+forms and their argument forms takes longer than just learning to
+balance parens.  And still your eyes can't learn it very well.

Another unsubstantiated claim.  Oh, and by the way, obfuscated code
can be written in any language.  The formatting (as opposed to syntax)
has much to do with how easily "your eyes" can see structure.  That
factor would have to be accounted for as well.

+I still think syntax is what puts off most of the people who are put off.

We are still stuck at unsupported assertions.

+This sounds like a REAL INTERESTING point.  Do you mean you use the
+editor to step between arguments and to find matching parens?  If so,
+you're FEELING your way, or maybe more like WALKING, around the code--
+in any case, something kinesthetic is going on...COOL!  But notice how
+it's like groping in the dark (in a well-labeled, tree-structured space, 
+(wasn't that a short story by Hemingway?) I grant you).  In other words, 
+Lisp syntax is no help to your EYES, but with emacs as a CANE...  This 
+probably has some great implications for user interfaces for blind people!

Non sequitur.  VI (the editor I use, no comment on preference here) lets
you step over C blocks easily.  That I use '%' to move around in a file
has nothing to do with my eyes, but my interaction with the editor.
Do not confuse the format and layout of the code with its syntax, nor
confuse it with the way in which the editor does things.  And, _I_
do use printed lisp.

In article <603@smds.UUCP>, sw@smds.UUCP (Stephen E. Witham) writes:

+Lisp has two features: an internal form for programs, and a (over:-) simple
+syntax.  I think with an "infix Scheme," having a well-defined internal 
+form would give you most of the power.  You'd need a "code-constant" or
+"internal-form-of-this-quoted-code-fragment" construct, e.g.,

+x = quote {
+    function ( a, b ) {
+        return a+b;
+        }
+    };

+(That's not a lambda expression, it's a quoted lambda expression.)

Ok, so that looks like something an Algol-ic would "understand" "easily."
The real question is to show how a program would have written it.  What
can the program do with 'x', and how?  There are no "built-in" data structures
in a language like C or Pascal for representing that.  You'd have
to add one in.  Now you aren't talking the same language.  I am curious
to follow this part up, but it really isn't part of this thread any more.

+If the compiler (or a library routine) can do the parsing,
+and there's a standard (not idiosyncratic) internal form,
+and a library routine can print programs,
+(or else, you don't go to ASCII, but stay in internal form),
+then the situation would be pretty much the same as with Lisp.
+Lisp has to be parsed and printed, too, if you're working with ASCII.  

Yes, but to me the real issue is how do programs manipulate data-which-
is-program?  In Lisp you use the same functions you would for any other
list.  You'd have to add language level support (for quoted lambda
constants and the "internal form" at least).  I suspect, but am willing
to investigate further, that doing so would alter the language in a 
non-trivial way.
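
As a sketch of what such language-level support looks like, here is the
idea transliterated into Python (an illustration only, not Lisp): the
parser produces a standard internal form that ordinary functions can
walk and rewrite, which is roughly the "quoted lambda" above as data.

```python
import ast

# Parse source text into the standard internal form -- the analogue of
# "quote" handing you a data structure rather than a string.
x = ast.parse("def f(a, b):\n    return a + b")

# The internal form is ordinary data; walk it with ordinary code, much
# as Lisp walks a list.
func = x.body[0]
print(func.name)                             # f
print([arg.arg for arg in func.args.args])   # ['a', 'b']

# Manipulate program-as-data: rename the function, print it back out.
func.name = "add"
print(ast.unparse(x))
```

The point stands either way: the language had to ship this internal
form and its parser/printer as built-ins before programs could treat
programs as data.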

-Doug
---
Preferred:  dwp@willett.pgh.pa.us	Ok:  {pitt,sei,uunet}!willett!dwp

gateley@rice.edu (John Gateley) (06/28/91)

In article <602@smds.UUCP> sw@smds.UUCP (Stephen E. Witham) writes:

(hopefully not taken too much out of context):

   But Lisp syntax doesn't transfer easily to the unconscious.
   And still your eyes can't learn it very well.

I would be interested in some data backing up these assertions.  At
this point it is just you and I saying "no you can't!  yes you can!".
Unfortunately I don't have any data backing up my assertions other
than I have helped teach scheme for several years, as well as
programming in it, and have not noticed the things you are describing.

   I write:
   > ...I do most of my code reading on a terminal in an emacs buffer so that 
   > I can use all the commands when needed. I very rarely print out Scheme 
   > code and look at it on paper.

   This sounds like a REAL INTERESTING point.  Do you mean you use the
   editor to step between arguments and to find matching parens?  If so,
   you're FEELING your way, or maybe more like WALKING, around the code--
   in any case, something kinesthetic is going on...COOL!  But notice how
   it's like groping in the dark (in a well-labeled, tree-structured space, 
   (wasn't that a short story by Hemingway?) I grant you).  In other words, 
   Lisp syntax is no help to your EYES, but with emacs as a CANE...  This 
   probably has some great implications for user interfaces for blind people!

I think you are taking my point slightly incorrectly. Suppose you are
programming in language X, and have a big huge function (several
pages, full of nested loops etc.). Can your eyes parse it? I use the
editor to help me with cases like that, and to make sure that parens
balance. I do not consider myself "feeling my way". I have a good tool
and I use it.

j

--
"I've thought the thoughts of little children and the thoughts of men
 I've thought the thoughts of stupid people who have never been
 so much in love as they should be and got confused too easily
 to fall in love again." The Residents and Renaldo and the Loaf

mathew@mantis.co.uk (Industrial Poet) (06/29/91)

gudeman@cs.arizona.edu (David Gudeman) writes:
> In article  <2Xc8415w164w@mantis.co.uk> Giving C News a *HUG* writes:
> ]Oh?  Presumably because object-oriented programming is uncomfortable and
> ]confusing for many people, it is also by definition the wrong way to have
> ]them write programs?
> 
> Two answers: (1) there is presumably some other advantage to object-oriented
> programming that gives incentive to go through the discomfort -- the
> same does not apply to lisp syntax.

That's your opinion.  A lot of people see great advantages in the syntactic
flexibility and simplicity of Lisp.

> ]1. It's simple.  You can summarize all the syntax you need to know on the
> ]   back of a postcard.
> 
> Not true.  You have to specify the special syntax of defun, let,
> unwind-protect, do (which is worse than anything I have ever seen in
> an algol-like language), etc.

Sorry?  What "special syntax" are you referring to here with your examples? 
I'm not entirely sure I understand what you mean by a "special" syntax as
opposed to the standard one.

>                             You have to specify what operators take
> more than the normal number of parameters.

You're applying a concept which doesn't fit the language.  Lisp functions
like + and cond don't have a "normal number of parameters".  (+ 2 3 5) isn't
a special version of + with an abnormal number of parameters; it's the same +
which you find in (+ 2 3).

>                                          And for non-associative
> operations you have to specify what multiple parameters mean.

You have to specify what the parameters mean for other sorts of syntax as
well.

> ]You're the one making the positive assertion, that Lisp's syntax is a
> ]hindrance to learning it.  You're the one who has to provide the evidence.
> 
> The fact is that you seem to believe that lisp syntax does not cause
> problems.

Untrue.  I believe that Lisp syntax does cause problems, I just don't believe
that Lisp syntax is a hindrance to learning the language -- at least, not to
those who are actually prepared to try and learn the language.

>                             I can even rephrase your stand as a
> positive assertion: "lisp syntax is just as good as algol-like
> syntax".

That isn't what I'm saying at all.  You haven't re-phrased my stand, you have
re-defined it.

My opinion is: (not (believe (syntax-hinders-learning Lisp))).

If it were (believe (not (syntax-hinders-learning Lisp))) then you could
indeed re-phrase it as a positive assertion.  But as it stands I am not
making an assertion; I am simply doubting an assertion you have made.

> ]When the only evidence is anecdotal, I would advise you not to form such a
> ]strongly-held opinion.
> 
> You have no idea how strongly I hold this opinion.

Clearly you hold it fairly strongly, or you wouldn't go on about it so much.


mathew

 

new@ee.udel.edu (Darren New) (06/29/91)

In article <603@smds.UUCP> sw@smds.UUCP (Stephen E. Witham) writes:
>Lisp has two features: an internal form for programs, and a (over:-) simple
>syntax.  I think with an "infix Scheme," having a well-defined internal 
>form would give you most of the power.  You'd need a "code-constant" or
>"internal-form-of-this-quoted-code-fragment" construct, e.g.,

I would like to just interject that Hermes is a strongly-typed,
powerful, high-level modular infix distributed programming language
that has "program" as a built-in data type.  All of what *I've* seen
Lisp do with programs could be done as easily with Hermes programs.
You can even pass them around, store them in libraries, and so on.
Check it out.  It's neat. :-)

Now, if only you could write a constant of any type, we'd be in great shape.
	  -- Darren

-- 
--- Darren New --- Grad Student --- CIS --- Univ. of Delaware ---
----- Network Protocols, Graphics, Programming Languages, FDTs -----
+=+ Nails work better than screws, when both are driven with hammers +=+

rockwell@socrates.umd.edu (Raul Rockwell) (06/29/91)

Stephen E. Witham:
      > Lisp has two features: an internal form for programs, and a
      > (over:-) simple syntax.  I think with an "infix Scheme,"
      > having a well-defined internal form would give you most of the
      > power.  You'd need a "code-constant" or
      > "internal-form-of-this-quoted-code-fragment" construct, e.g.,
      > x = quote {
      >     function ( a, b ) {
      >         return a+b;
      >         }
      >     };
      > (That's not a lambda expression, it's a quoted lambda expression.)

Raul Rockwell (me):
      Requiring a lambda expression to express an anonymous function
      seems to me to be one of LISP's flaws, not one of its advantages.

Mark Friedman:
   What's your alternative? You need some syntax to say that you want
   a function that takes some number of arguments.

Well, you can say that an infix function takes one or two arguments.
Make one argument optional (with greedy binding) and ambiguity may be
resolved when the function is used.  e.g.  x-y  vs.  -y

A less elegant solution might be to always require two arguments and
use BOTTOM a lot.
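
A minimal sketch of the optional-argument idea, in Python rather than
in any real infix language (the function name is made up): one
definition serves both the "x-y" and the "-y" readings.

```python
def minus(x, y=None):
    # With one argument, act as unary negation ("-y"); with two,
    # as binary subtraction ("x - y").
    return -x if y is None else x - y

print(minus(5))      # -5
print(minus(7, 2))   # 5
```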

Me:   Or, put another way, infix notation already has a clear scheme
      for representing arguments, why not take advantage of it?

Mark Friedman:
   I won't argue about the usefulness of infix notation, but what has
   that to do with anonymous functions?

You can define anonymous functions using infix notation by either (a)
explicitly declaring dummy variables (similar to (lambda (x y) ...))
(b) using a pair of "standard names" for dummy variables, or (c)
tacitly defining functions without the use of any dummy variables
(just say what functions preprocess the arguments/postprocess the
results).
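
The three styles might be sketched as follows, in Python for
illustration (the helper names are made up, and (b)'s "standard
names" convention is simulated with a wrapper):

```python
from operator import mul

# (a) explicit dummy variables, like (lambda (x y) ...)
f_a = lambda x, y: x * y + 1

# (b) a fixed pair of "standard names" for the dummies; every
# anonymous function body refers to the same two names.
def with_standard_names(body):
    return lambda alpha, omega: body(alpha, omega)
f_b = with_standard_names(lambda alpha, omega: alpha * omega + 1)

# (c) tacit (point-free): build the function by saying what
# postprocesses the result, never naming the arguments.
def compose_post(post, f):
    return lambda *args: post(f(*args))
f_c = compose_post(lambda r: r + 1, mul)

print(f_a(3, 4), f_b(3, 4), f_c(3, 4))   # 13 13 13
```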

I expect you'd want a good variety of data-structure building
functions for something like this (otherwise you could only work with
"toy" data).  [And a good variety of meta-functions for leverage.]

Or maybe I'm missing the point of the question??

-- 
Raul <rockwell@socrates.umd.edu>

skrenta@blekko.commodore.com (Rich Skrenta) (06/29/91)

dwp@willett.pgh.pa.us (Doug Philips) writes:
> +Once you've banged your head enough, you become numb.  But that doesn't
> +mean the damage stops.  Lisp syntax just doesn't make good use of the
> +way human brains and visual systems work.  Practice can compensate,
> +but not totally.  It means you spend conscious energy to do what you
> +can do unconsciously with other languages.
> 
> There is almost nothing you can do unconsciously without first having
> been trained.  All you are saying is that there has been sufficient
> training/experience with other languages for internalization to have
> occurred.  That is nothing inherent in the language, it is due to an
> external artifact of experience.

I could write sentences in all-caps without proper spacing and you would
still be able to understand them.  They would just be harder for your
eyes to parse, and experience would never completely overcome this.

Mapping every semantic meaning onto the same syntactic construct makes lisp
hard for eyes to parse.

	a[i] * a[i+1]

	(* (index a i) (index a (+ i 1)))

Hmmm.  Let's do a study.  Flash equivalent code up to programmers well versed
in Lisp and an infix language and see which they can parse visually faster.

--
Rich Skrenta (skrenta@blekko.commodore.com)

gaynor@paul.rutgers.edu (Silver) (06/30/91)

My opinions on this discussion highly favor the all-is-data, syntax-is-simple,
memory-is-managed, ... languages.

gudeman@cs.arizona.edu writes:
>> 1. It's simple.  You can summarize all the syntax you need to know on the
>>    back of a postcard.
>
> Not true.  You have to specify the special syntax of defun, let,
> unwind-protect, do (which is worse than anything I have ever seen in an
> algol-like language), etc.  You have to specify what operators take more than
> the normal number of parameters.  And for non-associative operations you have
> to specify what multiple parameters mean.  Lisp syntax is _not_ trivial.

Make the distinction between SYNTAX and SEMANTICS.  In a description of Lisp's
semantics, one should introduce the notions of macros and read-time evaluation.
The semantics of the special forms can then be described in a straightforward
fashion, for example, by noting the equivalence of the following:

    (let ((variable-1 value-1)     ((lambda (variable-1 ... variable-N)
          ...                         form-1
          (variable-N value-N))       ...
      form-1                          form-M)
      ...                            value-1
      form-M)                        ...
                                     value-N)
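
A concrete instance of the same equivalence, transliterated into
Python purely for illustration: a `let' binding is just the
application of a lambda to the values.

```python
# (let ((x 1) (y 2)) (+ x y))  computes the same thing as
# ((lambda (x y) (+ x y)) 1 2): bind the names, evaluate the body.
let_style = (lambda x, y: x + y)(1, 2)
print(let_style)   # 3
```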

I tend to agree with you that the SEMANTICS of some `do's are much too hairy,
so I never use it.  Nonetheless, this degenerate case works the same as the
others, just a matter of degree.

>> 3. It's flexible, in the way that (say) infix isn't.  Most of the basic
>>    operations (+, *, and so on) naturally extend to any number of arguments.
>
> A very minor advantage, and probably the only real one.

The ability to handle variable numbers of arguments is a big win.  Seriously
affects readability and usability.  This feature should be provided by a clean
(hell, transparent) syntax.

>> 4. You don't have to worry about any precedence rules whatsoever.
>
> The same is true of infix syntax.  If you don't know the precedence you can
> always use parens (and the result is no worse than lisp).  The difference is
> that with infix syntax you have the choice.

YOU can insert the parens, but the next bloke might not.  And you may have to
play with his code someday.  And when you do, you better remember that C's
bitwise-and operator `&' is of higher precedence than the bitwise-xor operator
`^'.  The regular syntax eliminates this type of error even if I'm not fond of
expressing *every* expression in prefix notation.

Note that I've conveniently sidestepped the issue.  Because programs are data,
I know that I can write an infix expression macro within the bounds of the
language.  Experience has shown it to be relatively straightforward.  (However,
I never use the critter because I am comfortable with prefix notation.)
Extending this toy to handle new operators with a given precedence and
associativity is relatively easy.  In fact, I'd say that it does infix better
than most imperative languages because it is so easy to manipulate programs as
data in Lisp.
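
A toy of the sort described, sketched in Python rather than as a Lisp
macro (the helper name and the two-operator precedence table are
assumptions): it rewrites a flat infix token list into prefix form,
honoring the precedence of * over +.

```python
# Precedence table for the two operators this toy handles.
PREC = {"+": 1, "*": 2}

def to_prefix(tokens):
    pos = 0
    def parse(min_prec):
        nonlocal pos
        lhs = tokens[pos]; pos += 1
        # Consume operators at or above the current precedence,
        # recursing for the tighter-binding right-hand sides.
        while pos < len(tokens) and PREC.get(tokens[pos], 0) >= min_prec:
            op = tokens[pos]; pos += 1
            rhs = parse(PREC[op] + 1)
            lhs = f"({op} {lhs} {rhs})"
        return lhs
    return parse(1)

print(to_prefix(["a", "+", "b", "*", "c"]))   # (+ a (* b c))
```

Extending the table with new operators and associativities is the
"relatively easy" part Silver alludes to.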

>> You're the one making the positive assertion, that Lisp's syntax is a
>> hindrance to learning it.  You're the one who has to provide the evidence.
>
> The fact is that you seem to believe that lisp syntax does not cause
> problems.  This is a real opinion.  It is different from having no opinion on
> the matter, and is just as subject to the need for evidence as is the
> opposite opinion.  I can even rephrase your stand as a positive assertion:
> "lisp syntax is just as good as algol-like syntax".  There, now you have to
> give evidence and I don't.

Well said.  But let's dispense with the meta-arguments for the moment.  I have
never had a problem teaching Lisp.  I've invariably gotten faster results with
Lisp than with imperative languages like C and Pascal.  The major issue was one
of syntax.  Once we struggled through a formal discussion of the general syntax
and the low-level semantics, the students had no problem learning the
individual special forms.

The key words here are simplicity and genericity.

Regards, [Ag]

gaynor@paul.rutgers.edu (Silver) (06/30/91)

gudeman@cs.arizona.edu writes:
> People who choose which programming language to use are going to agree
> with my definition, and are not usually going to choose a language
> with [an uncomfortable] syntax.

I dunno 'bout that.  I am not as fond of Prolog's syntax as I am of Lisp's.
Yet when the shoe fits, I wear it.  The dialect of Lisp I most frequently use
is GNU Emacs Lisp.  This Lisp is truly canine, but the Emacs environment is
fantastic.

Regardless, as long as the syntax is short, simple, and unsweetened with fluff,
it is acceptable.  Notational simplicity will win out in the end, not only by
virtue of simplicity, but because it is easy to transform and extend.  From a
practical perspective, there is no reason why one shouldn't be able to
mechanically switch between reasonably simple syntaxes.

Regards, [Ag]

rockwell@socrates.umd.edu (Raul Rockwell) (07/01/91)

A. Gaynor:
   The ability to handle variable numbers of arguments is a big win.
   Seriously affects readability and usability.  This feature should
   be provided by a clean (hell, transparent) syntax.

Note that LISP's ability to handle a variable number of arguments is
equivalent to an ability to handle one argument of arbitrary
complexity.  Further, it is (or should be) trivial to extend a
function which is associative, and takes two arguments, to a function
which takes an arbitrary number of arguments [or takes one argument
which is a list of arguments...].
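
That extension is mechanical; a sketch in Python (the function names
are made up):

```python
from functools import reduce

def add2(a, b):          # the binary, associative core
    return a + b

def add_many(*args):     # variadic version, like Lisp's (+ ...)
    return reduce(add2, args)

def add_list(xs):        # one argument of arbitrary complexity
    return reduce(add2, xs)

print(add_many(2, 3, 5))    # 10
print(add_list([2, 3, 5]))  # 10
```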

-- 
Raul <rockwell@socrates.umd.edu>

gudeman@cs.arizona.edu (David Gudeman) (07/01/91)

In article  <ZTL042w164w@mantis.co.uk> Industrial Poet writes:
]gudeman@cs.arizona.edu (David Gudeman) writes:
]> 
]> Two answers: (1) there is presumably some other advantage to object-oriented
]> programming that gives incentive to go through the discomfort -- the
]> same does not apply to lisp syntax.
]
]That's your opinion.  A lot of people see great advantages in the syntactic
]flexibility and simplicity of Lisp.

A lot of people don't.

]> ]1. It's simple.  You can summarize all the syntax you need to know on the
]> ]   back of a postcard.
]> 
]> Not true.  You have to specify the special syntax of defun, let,
]> unwind-protect, do (which is worse than anything I have ever seen in
]> an algol-like language), etc.
]
]Sorry?  What "special syntax" are you referring to here with your examples? 
]I'm not entirely sure I understand what you mean by a "special" syntax as
]opposed to the standard one.

The do loop is part of the standard syntax.  It is a very complex
structure, and cannot be described on a postcard even by itself, let
alone along with the rest of the syntax.  The syntax of lisp special
forms is just as complicated as control structures in any other
language.  The only difference is that in lisp these control structures
are forced to look like function calls (which makes them harder to
recognize).  The _only_ thing in lisp syntax that can truly be called
"simpler" than traditional syntax is the lack of infix operations.

]
]>                             You have to specify what operators take
]> more than the normal number of parameters.
]
]You're applying a concept which doesn't fit the language.  Lisp functions
]like + and cond don't have a "normal number of parameters".  (+ 2 3 5) isn't
]a special version of + with an abnormal number of parameters; it's the same +
]which you find in (+ 2 3).

First, cond isn't a function; it is a special form (called a control
structure in other languages).  People often make such mistakes in
Lisp because of the "simplifying" homogeneous syntax.  (In fact when I
taught lisp, I found one of the most confusing things to my students
was trying to distinguish between functions and special forms.)
Second, (+ 2 3 5) certainly _is_ a special version of + with an
abnormal number of parameters.  + is taken from mathematics where it
almost always has just two arguments.

]If it were (believe (not (syntax-hinders-learning Lisp))) then you could
]indeed re-phrase it as a positive assertion.  But as it stands I am not
]making an assertion; I am simply doubting an assertion you have made.

Sorry, I had you confused with the person who wrote the original
article.  He certainly _did_ make positive assertions, therefore your
claim that I was the only one who needed to provide evidence is wrong.

]> You have no idea how strongly I hold this opinion.
]
]Clearly you hold it fairly strongly, or you wouldn't go on about it so much.

Shows what you know.  I would go on about anything if I was prodded
right (I expect many of the regular readers of this group can vouch
for that :-).  My prodding in this case came from an article where
someone said the issue of syntax isn't important.  I find this
attitude to be particularly arrogant.  Just because it isn't a problem
for _you_ doesn't mean it isn't a problem.  And my observations
indicate that it _is_ a problem for a lot of people.

It so happens that I _don't_ hold this opinion very strongly.  I could
be easily swayed by an experimental study on the matter (even though I
am not under the illusion that such studies are always sound).
--
					David Gudeman
gudeman@cs.arizona.edu
noao!arizona!gudeman

gudeman@cs.arizona.edu (David Gudeman) (07/01/91)

In article  <Jun.30.04.42.26.1991.22172@paul.rutgers.edu> Silver writes:
]My opinions on this discussion highly favor the all-is-data, syntax-is-simple,
]memory-is-managed, ... languages.

These three things are mostly orthogonal.  The only dependency is that
it is pretty hard to have an all-is-data model without memory
management -- otherwise those features are independent and unrelated.

]Make the distinction between SYNTAX and SEMANTICS.

If you have a weird syntax that encourages structures with weird
semantics (like a multiple-argument minus), then the syntax is making
the semantics more complicated, and you have to take that into account
when discussing whether the syntax makes the language simpler.

]The semantics of the special forms can then be described in a straightforward
]fashion, for example, by noting the equivalence of the following:

That doesn't make the syntax any simpler.  You still have to describe
all the possible variations of all the special forms.

]I tend to agree with you that the SEMANTICS of some `do's are much too hairy,
]so I never use it.  Nonetheless, this degenerate case works the same as the
]others, just a matter of degree.

I guess it depends on what you are going to call syntax (the
distinction is not as obvious as you seem to think).  I include in the
syntax the positioning of the relevant parts according to their
function.  If you are going to claim that syntax is just what is
described by the BNF grammar, then I say that there is no reason to
suppose that such simplicity is an advantage.  Who cares if you can
describe lisp syntax as a sequence of s-expressions?  That doesn't
tell you anything about how to write a lisp program.

]>> 3. It's flexible, in the way that (say) infix isn't.  Most of the basic
]>>    operations (+, *, and so on) naturally extend to any number of arguments
]> A very minor advantage, and probably the only real one.
]The ability to handle variable numbers of arguments is a big win.  Seriously
]affects readability and usability.  This feature should be provided by a clean
](hell, transparent) syntax.

I don't believe (+ x1 x2 ... xn) is any clearer than x1 + x2 + ... +
xn.  Hmm.  I think I'm going to have to take back where I said that
this is an advantage of lisp.  I was really thinking about the ability
to distribute an operation over a sequence, which is unrelated to
syntax.

]YOU can insert the parens, but the next bloke might not.  And you may have to
]play with his code someday.  And when you do, you better remember that C's
]bitwise-and operator `&' is of higher precedence than the bitwise-xor operator
]`^'.  The regular syntax eliminates this type of error even if I'm not fond of
]expressing *every* expression in prefix notation.

If you don't remember the precedence then look it up and put in the
parens.  Or have an editor do it for you.  It was lisp defenders who
set the precedent that if an editor function can make up for a
deficiency then the deficiency doesn't count.

]Note that I've conveniently sidestepped the issue.  Because programs are data,
]I know that I can write an infix expression macro within the bounds of the
]language.

Terrific.  Then I'll get an infix language with programs-as-data and
do the reverse.  It doesn't prove anything either way.  (Except that
the fact that people always want such things in lisp and never want it
in prolog implies that prolog has better syntax).

][teaching languages]
]I've invariably gotten faster results with Lisp than with imperative
]languages like C and Pascal.  The major issue was one of syntax.
]Once we struggled through a formal discussion of the general syntax
]and the low-level semantics, they had no problem learning the
]individual special forms.

It sounds like you are agreeing with me that syntax is the major
problem in getting people to accept lisp.  It is obvious to me that
except for the syntax, it is _much_ easier to program in lisp than in
C or Pascal.  If lisp had a decent syntax it would be that much better
a language.

(Just to nit-pick, Lisp _is_ an imperative language.  It happens to be
expression-based and to have first-class functions and lots of
built-in functionals, but it also has a global store and assignment.)
--
					David Gudeman
gudeman@cs.arizona.edu
noao!arizona!gudeman

diamond@jit533.swstokyo.dec.com (Norman Diamond) (07/01/91)

In article <4673@optima.cs.arizona.edu> gudeman@cs.arizona.edu (David Gudeman) writes:
>In article  <2928.UUL1.3#5129@willett.pgh.pa.us> Doug Philips writes:
>>In article <4601@optima.cs.arizona.edu>,
>>>Definition: A "wrong syntax" is any syntax that is uncomfortable for
>>>the people using it.
>>No, it is "uncomfortable syntax."  (NOT (EQ 'Uncomfortable 'Wrong))
>>see also very last comment.
>
>People who choose which programming language to use are going to agree
>with my definition,

Yes.

>and are not usually going to choose a language with wrong syntax.

No.  I have to choose and use wrong syntaxes all the time, because languages
with the right syntaxes aren't properly implemented and/or are missing a few
occasionally-needed features, and my employer won't let me build a language
with a less-wrong syntax.
--
Norman Diamond       diamond@tkov50.enet.dec.com
If this were the company's opinion, I wouldn't be allowed to post it.
Permission is granted to feel this signature, but not to look at it.