mclennan@cs.utk.edu (Bruce MacLennan) (02/28/91)
> From: jhf@chaco.c3.lanl.gov (Joe Fasel)
> . . . . .
> Hi, Bruce.
>
> The following query was posted to comp.lang.functional recently:
>
> | From: ddr@margaux.inria.fr (Daniel de Rauglaudre)
> . . . . .
> | There is a question of great importance in our institute: why "lambda"
> | in "lambda calculus"?  Why not "alpha" or "dzeta" or any other symbol?
> | Does anybody know the origin of this choice?  Thank you for your answers.
> |
> | Daniel de Rauglaudre
> | INRIA - France
> | ddr@inria.inria.fr
>
> I recall that you got the answer straight from the horse's mouth at the
> 82 LFP conference.  I think I remember the gist of it, but I might get
> the details wrong, so I wonder if you could relate the story to us.
>
> Thanks.  I hope all is going well for you and your family.
>
> --Joe

At the 1982 LISP and Functional Programming Conference I asked Alonzo
Church about the origin of the lambda symbol.  What I found out is briefly
summarized in a footnote on p. 357 of my book "Functional Programming:
Practice and Theory" (Addison-Wesley, 1990).  Since Church never confirmed
the story in writing, I thought it was inappropriate to attribute it to him
in my book.  Nevertheless, here is the history of the lambda symbol, based
on the notes I wrote down after my conversation.

Church said that the starting point was Russell and Whitehead's abstraction
operator (in Principia Mathematica), which they wrote with a caret over the
bound variable: $\hat{x}(x^2+1)$.  To facilitate mechanical manipulation of
the bound variables, he began to write the caret in front of the bound
variable (because, I presume, this made the abstraction a string rather
than a two-dimensional structure): $\hat{}x(x^2+1)$.  From there the caret
symbol evolved into an uppercase lambda, $\Lambda x(x^2+1)$, and finally a
lowercase lambda, $\lambda x(x^2+1)$.  I presume the latter stages of this
evolution were under the pressure of writing convenience and to avoid
confusion with other symbols (such as the and-sign).
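The final form described above is the direct ancestor of the anonymous-function syntax in modern languages. As a minimal illustration (in Python, a language not mentioned in the thread; the variable name `square_plus_one` is our own), the term $\lambda x(x^2+1)$ can be written and applied like so:

```python
# The abstraction lambda x . x^2 + 1 from Church's notation, written as a
# modern anonymous function.  Applying it to 3 reduces to 3**2 + 1 = 10.
square_plus_one = lambda x: x**2 + 1

print(square_plus_one(3))  # → 10
```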
Bruce MacLennan
Department of Computer Science
107 Ayres Hall
The University of Tennessee
Knoxville, TN 37996-1301
(615)974-0994/5067
maclennan@cs.utk.edu
aarons@syma.sussex.ac.uk (Aaron Sloman) (03/11/91)
mclennan@cs.utk.edu (Bruce MacLennan) writes:
> Date: 27 Feb 91 16:53:22 GMT
> Followup-To: why lambda ? (Daniel de Rauglaudre)
..........
> At the 1982 LISP and Functional Programming Conference I asked
> Alonzo Church about the origin of the lambda symbol.
..........
> Church said that the starting point was Russell and Whitehead's
> abstraction operator (in Principia Mathematica), which they wrote
> with a caret over the bound variable: $\hat{x}(x^2+1)$.
..........

It is perhaps worth noting that the first person (as far as I know) to use
a variable-binding operator as an abstraction operator was Gottlob Frege,
who also invented the existential and universal quantifiers, though he used
a cumbersome two-dimensional notation for implication.

Russell learnt about Frege's notation as a result of reviewing his work (I
think it was the first volume of Frege's "The Basic Laws (Grundgesetze) of
Arithmetic"), the first full-blown attempt to show (a) that all concepts of
arithmetic can be defined solely in terms of purely logical concepts, and
(b) that all truths of arithmetic can be proved solely on the basis of
truths of logic.

Although Russell found that Frege's system was inconsistent (because it
allowed the formulation of Russell's paradox, concerning the set of all
sets that are not members of themselves), he continued to use many of the
ideas, though with a rather different notation.

I believe computer science owes a great deal to Frege's pioneering work,
including the generalisation of the notion of a function to include
predicates and higher-order functions, and the first proper analysis of
variables.

Aaron Sloman,
School of Cognitive and Computing Sciences,
Univ of Sussex, Brighton, BN1 9QH, England
EMAIL aarons@cogs.sussex.ac.uk