[comp.theory] Models of Lambda Calculus

fs@rex.cs.tulane.edu (Frank Silbermann) (01/11/90)

Denotational semantics for LISP-like functional languages
(e.g. Scheme minus side-effects) typically map program syntax
to an expression in an extended lambda calculus.
Typical extensions include atoms, booleans, sequence constructors,
and delta-conversion rules.

It is understood that such an extended lambda calculus
can be encoded within the pure untyped lambda calculus,
which has as an extensional model a domain isomorphic
to its own function space (i.e. a nontrivial solution
to the domain equation "D ~= D->D").
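
Such an encoding can be sketched concretely.  Below is a minimal
illustration in Python (my own, not from the thread), using Python
closures to stand in for pure untyped lambda terms: booleans and
pairs are coded as Church encodings, with an observer to_int that
lives outside the calculus.

```python
# Church encodings: the "extensions" (booleans, pairs, numerals)
# expressed as pure lambda terms, here as Python closures.
TRUE  = lambda t: lambda f: t
FALSE = lambda t: lambda f: f
IF    = lambda b: lambda t: lambda f: b(t)(f)

PAIR = lambda a: lambda b: lambda sel: sel(a)(b)
FST  = lambda p: p(TRUE)
SND  = lambda p: p(FALSE)

ZERO = lambda s: lambda z: z
SUCC = lambda n: lambda s: lambda z: s(n(s)(z))

def to_int(n):
    # observe a Church numeral from outside the calculus
    return n(lambda x: x + 1)(0)
```

The delta rules of the extended calculus then become ordinary
beta-reductions on the encoded terms.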

Nevertheless, has the extended lambda calculus ever
been studied in its own right?  For instance,
has anyone ever shown the domain
	D ~= (Atom + Bools + DxD + D->D)_bottom
to be an extensional model of an extended lambda calculus?
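
The sum-of-summands reading of that equation can be approximated as
tagged values.  The sketch below (my own illustration, with made-up
names) is set-theoretic rather than a genuine Scott domain: the
lifting to bottom is modelled by nontermination or raised errors, and
the D->D summand ranges over arbitrary Python functions rather than
continuous ones.

```python
from dataclasses import dataclass
from typing import Callable, Union

# Summands of D ~= (Atom + Bools + DxD + D->D)_bottom as tagged values.
@dataclass
class Atom:
    name: str

@dataclass
class Bool:
    val: bool

@dataclass
class Pair:
    fst: "D"
    snd: "D"

@dataclass
class Fun:
    fn: Callable[["D"], "D"]

D = Union[Atom, Bool, Pair, Fun]

def apply_d(f: D, x: D) -> D:
    """Application is defined only on the D->D summand."""
    if isinstance(f, Fun):
        return f.fn(x)
    raise TypeError("cannot apply a non-function element of D")

def car(p: D) -> D:
    """A delta-rule-style projection on the DxD summand."""
    if isinstance(p, Pair):
        return p.fst
    raise TypeError("car of a non-pair element of D")
```

The question in the post is whether a domain of this shape, properly
constructed, is an extensional model of the extended calculus.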

I am especially interested in extensions
to the _untyped_ lambda calculus,
which seems more appropriate for describing languages
like pure Scheme.

I would be grateful to receive references.

	Frank Silbermann	fs@rex.cs.tulane.edu

pdm@daimi.dk (Peter D. Mosses) (01/12/90)

In article <1773@rex.cs.tulane.edu>, fs@rex (Frank Silbermann) writes:
>Denotational semantics for LISP-like functional languages
>(e.g. Scheme minus side-effects) typically map program syntax
>to an expression in an extended lambda calculus.
>Typical extensions include atoms, booleans, sequence constructors,
>and delta-conversion rules.
>...
>I am especially interested in extensions
>to the _untyped_ lambda calculus,
>which seems more appropriate for describing languages
>like pure Scheme.

One shouldn't really speak of a syntax-directed translation from one
language to another as a denotational semantics.  The main idea of
denotational semantics is to map (all) phrases of programs to
their denotations - where the denotation of a phrase is an abstract
semantic entity (such as a higher-order function) that represents the
contribution of the phrase to the behaviour of programs in which it
may occur.

Because denotational semantics is compositional, a program
transformation (the replacement of one phrase by another, preserving
program behaviour) is known to be valid whenever the phrases involved
have equal denotations.  But when the denotation of a phrase is taken
to be syntactic (e.g., the phrase itself, or its translation to the
lambda-calculus), syntactically different phrases almost always have
different denotations, which makes such a "semantics" useless for
justifying program transformations, etc.
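
The role of equal denotations can be seen in a toy compositional
semantics (entirely my own illustration): the two phrases below are
syntactically different but denote the same value in every
environment, which is exactly what licenses the rewrite
(x + 0) ~> x.

```python
# A compositional denotation function for a tiny expression language:
# the denotation of a phrase depends only on the denotations of its
# subphrases.
def denote(expr, env):
    tag = expr[0]
    if tag == 'lit':
        return expr[1]
    if tag == 'var':
        return env[expr[1]]
    if tag == 'add':
        return denote(expr[1], env) + denote(expr[2], env)
    raise ValueError(tag)

phrase1 = ('add', ('var', 'x'), ('lit', 0))   # x + 0
phrase2 = ('var', 'x')                        # x
```

A purely syntactic "semantics" would assign these two phrases
different denotations and so could not justify the transformation.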

This is not to say that explaining a language by translating it to the
lambda-calculus is a bad idea.  (It was advocated initially by Peter
Landin in the mid-60's, for explaining Algol-60).  The lambda-calculus
has conversion rules, which can be used to show that different terms
are "equivalent".  It just shouldn't be confused with giving a
denotational semantics using (typed) lambda-NOTATION for identifying
particular elements of Scott domains.

Presumably your "LISP-like languages" have QUOTE and EVAL functions.
Here it is necessary for the denotation of every phrase (or at least,
those that can be arguments to QUOTE) to be isomorphic to the syntax
of the phrase itself, which makes the denotations uninteresting.
Moreover, a compositional translation of such a language to (extended)
lambda-calculus has to build up terms that represent the syntactic
structure of phrases, so it doesn't "explain away" the translated
language as much as the kind of translation advocated by Landin.
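
The effect of QUOTE on the denotations can be sketched as follows
(again my own construction, not a standard semantics): to stay
compositional, every phrase must denote a pair of its value and its
own syntax, so that QUOTE can be given a denotation at all.

```python
# Each phrase denotes (value, syntax): the syntax component is what a
# compositional treatment of QUOTE forces into the denotation.
def denote(expr, env):
    tag = expr[0]
    if tag == 'lit':
        return (expr[1], expr)
    if tag == 'var':
        return (env[expr[1]], expr)
    if tag == 'quote':
        return (expr[1], expr)        # the value IS the quoted syntax
    if tag == 'add':
        v1, _ = denote(expr[1], env)
        v2, _ = denote(expr[2], env)
        return (v1 + v2, expr)
    raise ValueError(tag)
```

Note that two phrases computing the same value but written
differently now get different denotations: this is the collapse into
"uninteresting" denotations described above.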

Thus I don't think that models of the extended lambda-calculus are
relevant to the description of "LISP-like languages" such as Scheme.
A structural operational semantics would be more appropriate.  Or just
leave out QUOTE and EVAL {B-)


--
Peter D. Mosses					     pdmosses@daimi.DK
----------------------------------------------------------------------
[ Computer Science Department * Aarhus University * Aarhus * Denmark ]
----------------------------------------------------------------------
