[comp.lang.functional] Laziness and Leftmost Rule

fs@rex.cs.tulane.edu (Frank Silbermann) (06/02/90)

Bjorn Lisper
>	But there must be cases where some argument
>	to a non-strict function has to be evaluated
>	in order to determine what is to be done next,
>	even in a lazy language. In all languages I've seen,
>	lazy or not, this is done on a left to right basis.

The theoretical justifications for purely leftmost evaluation
were developed for syntactic term-rewriting systems
and for the pure untyped lambda calculus.
Many assume that these justifications carry over
to more complicated languages.  This is not always true.
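
In the pure calculus the justification is real enough; the
standard example (OMEGA is just my name for the usual
self-application term):

	OMEGA = (lam x. x x)(lam x. x x)

	(lam x. y) OMEGA
	    leftmost-outermost:  reduces in one step to  y
	    argument-first:      keeps rewriting OMEGA forever

Leftmost-outermost finds the normal form y; any strategy that
insists on evaluating the argument first never will.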

>	The fact that leftmost-outermost reduction works
>	for the lambda calculus (in the sense that this strategy
>	will find the normal form of any lambda expression  
>	which has one) is an uninteresting quirk ...  So,
>	there's a problem with the very theoretical foundations
>	of functional programming!

Conventional wisdom is that the pure untyped lambda calculus
is the theoretical basis of Lisp-like functional programming.
If you choose to view logical and mathematical primitives
as syntactic sugar for pure lambda expressions
(whose operational behavior seems to simulate execution
of these primitives), then you have no right to complain
if these simulated operators sometimes do not behave
the way logical and mathematical primitives ought.

If you want numbers and booleans to behave according
to the rules, then you should base your language
on a lambda calculus which admits numbers and booleans
as distinct expression types (i.e. a typed lambda calculus).
Then you can throw in arithmetic and boolean primitives
whose parallel evaluation mechanisms have nothing to do
with beta-reduction.
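
To make that concrete, here is a rough sketch in Haskell of a toy
language in which parallel-OR is a primitive whose evaluation rule
interleaves its two arguments.  The datatype and the names (Expr,
POr, Loop, step, eval) are mine, purely for illustration:

	-- Toy expressions: booleans, a divergent term standing in
	-- for "bottom", and a parallel-or primitive.
	data Expr = B Bool | Loop | POr Expr Expr

	-- One small evaluation step.  The last POr rule swaps its
	-- arguments each step, so both sides get worked on fairly.
	step :: Expr -> Expr
	step (B b)                     = B b
	step Loop                      = Loop
	step (POr (B True)  _)         = B True
	step (POr _         (B True))  = B True
	step (POr (B False) (B False)) = B False
	step (POr l r)                 = POr (step r) (step l)

	-- Drive the stepper for at most n steps.
	eval :: Int -> Expr -> Maybe Bool
	eval 0 _     = Nothing
	eval _ (B b) = Just b
	eval n e     = eval (n - 1) (step e)

Here eval 10 (POr Loop (B True)) yields Just True even though the
left argument never terminates.  The toy omits lambda and
application entirely, precisely to emphasise that the rule for the
POr primitive owes nothing to beta-reduction.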

Frank Silbermann   Tulane University, New Orleans, Louisiana, USA
		   fs@rex.cs.tulane.edu

bjornl@sics.se (Björn Lisper) (06/02/90)

In article <3462@rex.cs.tulane.edu> fs@rex.cs.tulane.edu (Frank Silbermann)
writes:
%Bjorn Lisper
%>	But there must be cases where some argument
%>	to a non-strict function has to be evaluated
%>	in order to determine what is to be done next,
%>	even in a lazy language. In all languages I've seen,
%>	lazy or not, this is done on a left to right basis.

%The theoretical justifications for purely leftmost evaluation
%were developed for syntactic term-rewriting systems
%and also for the pure untyped lambda calculus.
%Many assume that the justifications carry over
%to more complicated languages.  This is not always true.

Which is quite the point I've been trying to make.

%>	The fact that leftmost-outermost reduction works
%>	for the lambda calculus (in the sense that this strategy
%>	will find the normal form of any lambda expression  
%>	which has one) is an uninteresting quirk ...  So,
%>	there's a problem with the very theoretical foundations
%>	of functional programming!

Whoa...you've merged a text I commented on with my comment! I didn't write
the text before the "..." above!

Bjorn Lisper

fs@rex.cs.tulane.edu (Frank Silbermann) (06/05/90)

(my previous post)
>%	The theoretical justifications for purely leftmost evaluation
>%	were developed for syntactic term-rewriting systems
>%	and also for the pure untyped lambda calculus.
>%	Many assume that the justifications carry over
>%	to more complicated languages.  This is not always true.

The pure untyped lambda calculus is Turing equivalent.
One can express and compute any computable function via this notation.

Through a very complicated encoding, one could represent
symmetric boolean operations (e.g. parallel-OR).
The source program would be encoded as a pure-LC object program
via some sort of dove-tailing algorithm, so that leftmost
reduction of the object program would simulate parallel
evaluation of the source program.

However, such an encoding would be very ugly
(so would implementation via Turing machines
or via arithmetic in a Godel-numbering scheme).
Such an exercise is not very interesting --
all it does is build one machine (the language evaluator)
on top of another (the pure lambda calculus).
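
For concreteness, here is the essence of such a dove-tailing
simulation, written in Haskell rather than in raw lambda terms
(the names and the step-budget scheme are mine, for illustration
only):

	-- Each argument is given as a step-bounded evaluator:
	-- run it for n steps and report Just a result, or Nothing.
	parallelOr :: (Int -> Maybe Bool)
	           -> (Int -> Maybe Bool) -> Bool
	parallelOr runL runR = search 1
	  where
	    search n = case (runL n, runR n) of
	      (Just True, _)           -> True
	      (_, Just True)           -> True
	      (Just False, Just False) -> False
	      _                        -> search (n + 1)

If either argument ever yields TRUE it is eventually found; if
both yield FALSE the answer is FALSE; otherwise the search
(correctly) never terminates.  Doing the same trick with
Church-encoded data inside the pure calculus is exactly the
ugliness complained about above.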

To show that a "functional" program is not merely
a set of computer instructions, but simultaneously
a mathematical statement of fact (declarative),
we must answer the following questions:

1)  Upon what domain or domains of values are
    the function mappings defined?

2)  From a given program expression,
    how is the function mapping derived?

For these questions,
term-rewriting theory is irrelevant; denotational semantics
provides functional programming's theoretical basis.
Domain theory describes the appropriate domains.
Since beta reduction resembles function application,
it is natural to use lambda expressions to denote
the various elements.

Interpreting lambda expressions as functions is justified
when we prove that the relationship between lambda expressions
and domain elements is _fully abstract_, i.e.  that
two lambda expressions denote the same domain element
IFF in any context they yield equivalent results.
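
In symbols, writing [[E]] for the domain element denoted by the
expression E (the double brackets are just my shorthand here):

	[[M]] = [[N]]   iff   for every program context C[ ],
	                      C[M] and C[N] are observationally
	                      equivalent (same termination behavior,
	                      same observable result).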

For instance, the pure untyped lambda calculus is fully abstract
w.r.t. a domain isomorphic to its own function space
(i.e. a universe D isomorphic to the universe of
computable unary functions over D).
This is not the domain most programmers have in mind.
We want functions as objects; but do we want _only_ functions?

We need a lambda calculus with different kinds
of expressions to represent different kinds of objects
(not just functions, but also integers, booleans and sequences).
This is called a _typed_ lambda calculus.
To define a language via denotational semantics,
we map program statements into a kernel lambda calculus
whose fully abstract interpretation is both appropriate
and well understood.
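
A cartoon of that recipe, sketched in Haskell (the Term, Value and
denote names are my own illustrative stand-ins; a real treatment
interprets terms in Scott domains, not in Haskell values):

	data Ty   = TInt | TBool | TFun Ty Ty

	data Term = IntLit Int | BoolLit Bool | Var String
	          | Lam String Ty Term | App Term Term
	          | If Term Term Term

	-- A naive semantic "domain": integers, booleans, functions.
	data Value = VInt Int | VBool Bool | VFun (Value -> Value)

	type Env = [(String, Value)]

	-- The denotation function: each term is mapped directly to
	-- the value (number, boolean, function) it stands for.
	denote :: Env -> Term -> Value
	denote _   (IntLit n)  = VInt n
	denote _   (BoolLit b) = VBool b
	denote env (Var x)     =
	  case lookup x env of
	    Just v  -> v
	    Nothing -> error ("free variable " ++ x)
	denote env (Lam x _ body) =
	  VFun (\v -> denote ((x, v) : env) body)
	denote env (App f a) =
	  case denote env f of
	    VFun g -> g (denote env a)
	    _      -> error "applied a non-function"
	denote env (If c t e) =
	  case denote env c of
	    VBool True  -> denote env t
	    VBool False -> denote env e
	    _           -> error "non-boolean condition"

The point is that the meaning of a program is read off
compositionally from this mapping, not from any particular
reduction strategy.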

So far, there has been more progress with typed languages
like pure ML and LCF, and less progress with untyped languages
such as pure Scheme.
A common weakness is the use of an "extended lambda calculus"
whose operational properties are convenient, but whose
denotational semantics (i.e. its fully abstract interpretation)
has never been worked out.

>%>	So, there's a problem with the very theoretical foundations
>%>	of functional programming!  /Bjorn Lisper

Of course, otherwise there would be no more work for theorists!

	Frank Silbermann	fs@rex.cs.tulane.edu
	Tulane University, New Orleans, Louisiana  USA

paul@batserver.cs.uq.oz.au (Paul Bailes) (06/05/90)

fs@rex.cs.tulane.edu (Frank Silbermann) writes:


> <lots of good stuff, but then ...>

>For instance, the pure untyped lambda calculus is fully abstract
>w.r.t. a domain isomorphic to its own function space
>(i.e. a universe D isomorphic to the universe of
>computable unary functions over D).

Not so, I think. As you appreciate at the start of your letter,
parallel-or is only (untyped-)lambda-definable through some
complex encoding/interpretation. However, parallel-or exists in the
Scott domain. Hence, lambda-calc's not fully-abstract. The detailed
support for this comes from Mulmuley (1987) and Plotkin (1977). While
they're working with typed lambda-calc., I can't see how that affects
the issue (i.e. the semantic domain's got something in it that the
programming language hasn't). Anyhow, I look forward to being corrected
if appropriate.

REFERENCES

Mulmuley, K. ``Full Abstraction and Semantic Equivalence'', MIT Press, 1987.

Plotkin, G.D. ``LCF Considered as a Programming Language'', Theoretical
	Computer Science 5, pp. 223-255, 1977.

Paul Bailes
Dept Comp Sci
Uni of Queensland QLD 4072
AUSTRALIA

fs@rex.cs.tulane.edu (Frank Silbermann) (06/06/90)

>>		...the pure untyped lambda calculus is fully abstract
>>		w.r.t. a domain isomorphic to its own function space
>>		(i.e. a universe D isomorphic to the universe of
>>		computable unary functions over D).

In article <3835@moondance.cs.uq.oz.au> paul@batserver.cs.uq.oz.au writes:

>	Not so, I think.  Parallel-or exists in the Scott domain.
>	But, parallel-or is only (untyped-)lambda-definable
>	through some complex encoding/interpretation.
>	Hence, lambda-calc's not fully-abstract.

Parallel-or exists in _which_ Scott domain?
There is no such thing as "fully-abstract",
except in relation to some specified (or implied) domain.

The Scott domain D described by:
		D =~ (Int + Bools + D->D )_bot
does indeed contain a (curried) parallel-OR.
However, the Scott domain D described by:
		D =~ (D->D)
contains no Boolean functions at all.  In fact,
this domain lacks even the base Boolean values TRUE and FALSE!
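
The contrast is easy to see if one writes the two domain equations
as (very rough) Haskell type definitions -- an analogy only, since
Haskell types are not Scott domains, and the names are mine:

	-- D =~ (D->D): every element is a function from D to D;
	-- there is nothing that could serve as a bare TRUE or 3.
	newtype DFunOnly = DFunOnly (DFunOnly -> DFunOnly)

	-- D =~ (Int + Bools + D->D)_bot: integers and booleans are
	-- present as values in their own right, alongside functions.
	data DWithBase = DInt Int
	               | DBool Bool
	               | DFun (DWithBase -> DWithBase)

Parallel-OR, as a function on the Boolean summand, lives naturally
in the second domain; in the first there are no booleans for it to
act on at all.
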
So, please explain again why you believe
this domain D =~ (D->D) is not a fully abstract model
of the pure untyped lambda calculus.

>	The detailed support for this comes from
>	Mulmuley (1987) and Plotkin (1977).
>	While they're working with typed lambda-calc.,
>	I can't see how that affects the issue ...  Paul Bailes

In programming languages, we usually assume the existence of
base atomic values (e.g. integers, booleans or other atoms),
so D =~ (D->D) is rarely the appropriate domain.
Therefore, the pure-untyped-lambda-calculus
is not the best basis for programming language design.
Mulmuley and Plotkin use typed lambda calculi,
whose fully abstract models (their corresponding domains)
contain the kinds of values desired.

	Frank Silbermann	fs@rex.cs.tulane.edu
	Tulane University, New Orleans, Louisiana  USA

paul@batserver.cs.uq.oz.au (Paul Bailes) (06/06/90)

fs@rex.cs.tulane.edu (Frank Silbermann) writes:


>The Scott domain D described by:
>		D =~ (Int + Bools + D->D )_bot
>does indeed contain a (curried) parallel-OR.
>However, the Scott domain D described by:
>		D =~ (D->D)
>contains no Boolean functions at all.  In fact,
>this domain lacks even the base Boolean values TRUE and FALSE!
>So, please explain again why you believe
>this domain D =~ (D->D) is not a fully abstract model
>of the pure untyped lambda calculus.

> ...

>In programming languages, we usually assume the existence of
>base atomic values (e.g. integers, booleans or other atoms),
>so D =~ (D->D) is rarely the appropriate domain.
>Therefore, the pure-untyped-lambda-calculus
>is not the best basis for programming language design.
>Mulmuley and Plotkin use typed lambda calculi,
>whose fully abstract models (their corresponding domains)
>contain the kinds of values desired.

Thanks for pointing out the need for a Bool type to make
parallel-or etc. meaningful. The only thing that still
troubles me is that, of course, Ints, Bools, etc. can be
simulated in lambda-calc. without any sort of complicated
interpretation, e.g. ``1'' can be ``declared'' in lambda-calc
as

	(lam f.(lam x.f x))

Likewise true and false as respectively

	(lam x.(lam y. x)) <<called T below>>
	(lam x.(lam y. y)) <<called F below>>

These exist in pure, untyped lambda-calc, in the D =~ D->D
domain. What's stopping a parallel-or

	parallel-or T ? ->> T
	parallel-or ? T ->> T
	parallel-or F F ->> F

where ? means anything (incl ``bottom''), from existing in the domain?

Thanks in advance for your help

Paul Bailes

fs@rex.cs.tulane.edu (Frank Silbermann) (06/07/90)

>>		The Scott domain D described by:
>>				D =~ (Int + Bools + D->D )_bot
>>		does indeed contain a (curried) parallel-OR.
>>		However, the Scott domain D described by:
>>				D =~ (D->D)
>>		contains no Boolean functions at all.  In fact,
>>		this domain lacks even the base Boolean values TRUE and FALSE!

Adapted from <3859@moondance.cs.uq.oz.au> by paul@batserver.cs.uq.oz.au:

>	Thanks for pointing out the need for a Bool type to make
>	parallel-or etc. meaningful.  Nevertheless, Ints, Bools, etc.
>	can be simulated in lambda-calc. without any sort of complicated
>	interpretation e.g. ``TRUE'' and ``FALSE'' can be ``declared'' as
>	...
>		(lam x.(lam y. x)) <<called T below>>
>		(lam x.(lam y. y)) <<called F below>>
>
>	These exist in pure, untyped lambda-calc, in the D =~ D->D domain.
>	What's stopping a parallel-or ...?	Paul Bailes

Since the pure untyped lambda calculus is Turing equivalent,
you can indeed simulate any computable function, even parallel-or.
But nobody says it has to be easy!
The dove-tailing required to ensure fairness
(e.g. regularly swapping left and right arguments)
will probably be mucho ugly (and inefficient).
The suggested encodings for TRUE and FALSE were chosen for
their simplicity _under the assumption_ that sequential-or sufficed.
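
Transcribed into Haskell (just to watch the laziness at work; the
names cTrue, cFalse, decode and seqOr are mine), the standard
encodings give a sequential OR for free -- and only a sequential
one:

	-- Church booleans, read directly off the lambda terms above.
	cTrue, cFalse :: a -> a -> a
	cTrue  x _ = x            -- (lam x.(lam y. x))
	cFalse _ y = y            -- (lam x.(lam y. y))

	-- Decode a Church boolean by handing it two ordinary Bools.
	decode :: (Bool -> Bool -> Bool) -> Bool
	decode b = b True False

	-- Sequential OR: it commits to inspecting its LEFT argument.
	seqOr :: (Bool -> Bool -> Bool)
	      -> (Bool -> Bool -> Bool) -> Bool
	seqOr p q = p True (decode q)

	-- seqOr cTrue undefined  ==>  True  (q is never touched)
	-- seqOr undefined cTrue  ==>  bottom (the left argument is
	--   forced before anything else can happen)

Getting the second case to come out TRUE is precisely what the
dove-tailing machinery has to buy; it cannot be had just by
applying one encoded argument to the other.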

Aside from the difficulties of simulating parallelism,
note that if you erroneously apply your encoding of TRUE to an object,
it will happily compute a value, whereas I would want the result
to be undefined (BOTTOM), or perhaps a special error code.

The right typed lambda calculi will avoid
the need for difficult encodings.
The disadvantage of a more complex system
is that proofs become more complicated.

	Frank Silbermann	fs@rex.cs.tulane.edu