[comp.lang.functional] Laziness and Back-to-front Lists

farrell@batserver.cs.uq.oz.au (Friendless) (05/28/90)

In comp.lang.functional Nick writes:
>I can write things like
>	fun f x = cons(1, f x)
>to build an infinite list, but not
>	fun f x = reverse_cons(f x, 1)
>because of the reduction order.

  I suppose you mean:

fun = cons 1 fun
fun = reverse_cons fun 1

  ? f and x seem entirely useless.

  I'm not sure what you want here. The fact is that the two definitions of fun
don't define the same function - the first has an infinite tail, and the
second has an infinite head. In, say, Miranda, you won't be able to get
anything out of the second because lists are evaluated head to tail only.
If you defined a list by

list * ::= Nil | Cons * (list *) | RCons (list *) *

and for the second said 

fun = RCons fun 1

  then the definitions would be symmetric. Of course lists aren't ever defined
like this. I think someone did a paper on this sort of stuff (maybe Kennaway &
Sleep?) which I can hunt down if anyone cares.
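
For concreteness, here is a minimal sketch of that symmetric setup in
Haskell notation (the type and the function names are purely illustrative):

    -- A list type with both a front and a back constructor.
    data List a = Nil
                | Cons a (List a)    -- element at the front
                | RCons (List a) a   -- element at the back

    front :: List Int
    front = Cons 1 front             -- infinite tail

    back :: List Int
    back = RCons back 1              -- infinite head

    headOf :: List a -> a
    headOf (Cons x _)  = x
    headOf _           = error "no front element"

    lastOf :: List a -> a
    lastOf (RCons _ x) = x
    lastOf _           = error "no back element"

    main :: IO ()
    main = do
        print (headOf front)   -- prints 1 without forcing the rest
        print (lastOf back)    -- prints 1 without forcing the rest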


							John

aipdc@castle.ed.ac.uk (Paul D. Crowley) (05/29/90)

WARNING: Uninformed opinions ahead!

I was under the impression that in a truly "pure" functional language
the problem doesn't arise because nothing has more than one argument. 
Things that look as if they should have more than one argument are dealt
with lambda-calculus style:

2+3:  ((plus 2) 3)
(plus 2) means plus applied to two, and equals a function which adds two
to its argument.

so ((plus 2) 3) = (plus2 3) = 5
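
In Haskell notation that reads roughly as follows (plus and addTwo are just
illustrative names):

    plus :: Int -> Int -> Int
    plus x y = x + y           -- really a function returning a function

    addTwo :: Int -> Int
    addTwo = plus 2            -- partial application: "plus applied to two"

    main :: IO ()
    main = print (addTwo 3)    -- ((plus 2) 3) = 5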

Under these circumstances you _expect_ order of arguments to be
significant, so it doesn't surprise you that you get different results
if you change it.

I'm trying to imagine a language which evaluated expressions
breadth-first.   Maybe it's already been done?  Obviously your
"done-so-far" structure would be _much_ more complicated than a stack.

-- 
\/ o\ Paul Crowley aipdc@uk.ac.ed.castle
/\__/ "Trust me, I know what I'm doing" - Sledge Hammer

bjornl@tds.kth.se (Bj|rn Lisper) (05/29/90)

In article <4289@castle.ed.ac.uk> aipdc@castle.ed.ac.uk (Paul D. Crowley)
writes:
%WARNING: Uninformed opinions ahead!

%I was under the impression that in a truly "pure" functional language
%the problem doesn't arise because nothing has more than one argument. 
%Things that look as if they should have more than one argument are dealt
%with lambda-calculus style:

%2+3:  ((plus 2) 3)
%(plus 2) means plus applied to two, and equals a function which adds two
%to its argument.

%so ((plus 2) 3) = (plus2 3) = 5

%Under these circumstances you _expect_ order of arguments to be
%significant, so it doesn't surprise you that you get different results
%if you change it.

Yes, this is exactly the "problem" with lazy functional languages as I see
it: they rely on a serial evaluation order of arguments obtained from the
currying of functions in order to get unary functions only, as in lambda
calculus.  HOWEVER, this ordering is an imposed, arbitrary one. As I
understand it this is inherent in the lambda calculus itself. So, there's a
problem with the very theoretical foundations of functional programming!

My last comment is deliberately provocative. It would be interesting to hear
any comments on it from people more involved in the field than me.

Bjorn Lisper

brian@comp.vuw.ac.nz (Brian Boutel) (05/30/90)

In article <BJORNL.90May29165842@pike.tds.kth.se>, bjornl@tds.kth.se
(Bj|rn Lisper) writes:
> In article <4289@castle.ed.ac.uk> aipdc@castle.ed.ac.uk (Paul D. Crowley)
> writes:
> %WARNING: Uninformed opinions ahead!
> 
> %I was under the impression that in a truly "pure" functional language
> %the problem doesn't arise because nothing has more than one argument. 
> %Things that look as if they should have more than one argument are dealt
> %with lambda-calculus style:
> 
> %2+3:  ((plus 2) 3)
> %(plus 2) means plus applied to two, and equals a function which adds two
> %to its argument.
> 
> %so ((plus 2) 3) = (plus2 3) = 5
> 
> %Under these circumstances you _expect_ order of arguments to be
> %significant, so it doesn't surprise you that you get different results
> %if you change it.
> 
> Yes, this is exactly the "problem" with lazy functional languages as I see
> it: they rely on a serial evaluation order of arguments obtained from the
> currying of functions in order to get unary functions only, as in lambda
> calculus.  HOWEVER, this ordering is an imposed, arbitrary one. As I
> understand it this is inherent in the lambda calculus itself. So, there's a
> problem with the very theoretical foundations of functional programming!
> 
> My last comment is deliberately provocative. It would be interesting to hear
> any comments on it from people more involved in the field than me.
> 
> Bjorn Lisper

I think there is a misunderstanding here.
There is no need to *evaluate* the argument of a curried function before
you need its value. 

If I define f x y = E, I may be defining f to be a function which takes
an argument x and produces a function which takes an argument y etc.,
but all this implies is that I can use f 3, for example, as a function.
It says nothing about the order of evaluation of x and y.

In lazy evaluation, evaluation only occurs when a value is needed.
Ultimately, this is because it has to be output, but within a program,
evaluation is driven by pattern matching (a need to know which case
applies), or by a conditional (a need to know which branch to take), or
by a strict primitive function like plus (needs the values of its
arguments before it can be applied). In all other cases, arguments are
passed to functions without prior evaluation. (Actually, if you do
strictness analysis, it may be possible to determine at compile time
that a particular argument is always evaluated, and so, to improve
efficiency, it can safely be evaluated before being passed to the function.)
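
A tiny sketch of this in Haskell notation (illustrative names): an argument
that is never matched on and never reaches a strict primitive is simply
never evaluated.

    ignore :: Int -> a -> Int
    ignore n _ = n + 1     -- (+) forces n, but the second argument is
                           -- passed as a thunk and never touched

    main :: IO ()
    main = print (ignore 41 (error "never evaluated"))   -- prints 42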

In plus x y, plus is strict in both its arguments, and they will both be
evaluated. The order in which this happens is unimportant. Real
implementations do not implement plus as taking one argument and
producing a new function, they implement it by using ordinary machine
arithmetic, where addition is an operation that adds two values. 

In f (complex expression), I can still pass this as a functional
argument to something without evaluating (complex expression) first.
With call-by-need, when (complex expression) is eventually evaluated, it
will be replaced by its value, and will not need to be evaluated again.
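
A sketch of that sharing behaviour in Haskell notation; Debug.Trace is used
here only to make the single evaluation visible (names are illustrative):

    import Debug.Trace (trace)

    double :: Int -> Int
    double x = x + x       -- the argument is needed twice

    main :: IO ()
    main = print (double (trace "evaluating the argument" (2 + 3)))
    -- "evaluating the argument" is printed once: after the first use the
    -- thunk is overwritten with its value 5, so the result 10 reuses it.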

None of this invalidates the semantics of a non-strict language.

The issue of the order in which cases and arguments are checked in
pattern matching is different. If you are primarily interested in
reasoning about programs, you will want all equations in a function
definition to be disjoint, so at most one equation can match the actual
arguments. In practice this is inconvenient, as you often need a case
that matches "everything else" after some special values have been dealt
with. If you are interested in writing programs, you want semantics that
are easy to understand, and if you can't have a situation where equation
sequence and argument order make no difference, at least you will want
something obvious like top-to-bottom, left-to-right.
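
For example (a Haskell-style sketch with overlapping equations, where the
top-to-bottom order is exactly what carries the "everything else" meaning):

    -- The second equation also matches 0; textual order disambiguates.
    pred' :: Int -> Int
    pred' 0 = 0        -- special value dealt with first
    pred' n = n - 1    -- "everything else"

    main :: IO ()
    main = print (pred' 0)   -- 0; with the equations swapped it would be -1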

Even if this is not totally pure, lazy evaluation with these
restrictions is still very useful, and should not be rejected. 

--brian
Internet: brian@comp.vuw.ac.nz
Postal: Brian Boutel, Computer Science Dept, Victoria University of Wellington,
        PO Box 600, Wellington, New Zealand
Phone: +64 4 721000
Fax:   +64 4 712070

aipdc@castle.ed.ac.uk (Paul D. Crowley) (05/30/90)


bjornl@tds.kth.se (Bj|rn Lisper) sez in <BJORNL.90Jul28100843@tarpon.tds.kth.se>
/In article <1990May29.234840.29162@comp.vuw.ac.nz> brian@comp.vuw.ac.nz
\(Brian Boutel) writes:
/%> (Bjorn Lisper) wrote:
\%> Yes, this is exactly the "problem" with lazy functional languages as I see
/%> it: they rely on a serial evaluation order of arguments obtained from the
\%> currying of functions in order to get unary functions only, as in lambda
/%> calculus.  HOWEVER, this ordering is an imposed, arbitrary one. As I
\%> understand it this is inherent in the lambda calculus itself.
/
\%I think there is a misunderstanding here.
/%There is no need to *evaluate* the argument of a curried function before
\%you need its value. 
/
\Oops, I never claimed this. 

As far as I can see, the only kind of laziness that lambda-calculus is
open to is a curried function which says "Ignore the argument: the
answer is ...".  If such a function is produced the argument need not be
evaluated.

/%If I define f x y = E, I may be defining f to be a function which takes
\%an argument x and produces a function which takes an argument y etc.,
/%but all this implies is that I can use f 3, for example, as a function.
\%It says nothing about the order of evaluation of x and y.
/
\Doesn't it? Unless you know that f is strict, I'd say that whenever the
/value of f x y is needed, f x must be evaluated first and then, if needed,
\(f x) y. All this assuming that we want to follow the semantics given by
/lambda calculus (which imposes this order).
\
/(Please correct me if I'm wrong. I'm not a lambda calculus expert.)
\
/%In lazy evaluation, evaluation only occurs when a value is needed.
\%Ultimately, this is because it has to be output, but within a program,
/%evaluation is driven by pattern matching (a need to know which case
\%applies), or by a conditional ( a need to know which branch to take) or
/%by a strict primitive function like plus (needs the values of its
\%arguments before it can be applied). In all other cases, arguments are
/%passed to functions without prior evaluation.
/
\But in some sense, a conditional is a function.  To call some functions
/conditionals is a way to hide the basic "problem".  For instance, if(b,x,y)
\defined by
/
\b => if(b,x,y) = x
/not b => if(b,x,y) = y
\
/is a (non-strict) function in three arguments. Left-to-right lazy evaluation
\will give the ordinary "conditional" semantics. As I pointed out, there are
/cases where other evaluation orders terminate whereas this doesn't.

The kind of laziness I was talking about above applies particularly well
to if - you can use it to produce a version of if which only evaluates
the appropriate half.  
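
Something along these lines, sketched in Haskell notation (cond is an
illustrative name, not a primitive):

    cond :: Bool -> a -> a -> a
    cond True  t _ = t     -- the untaken branch is never forced
    cond False _ e = e

    main :: IO ()
    main = print (cond True 1 (error "branch not taken"))   -- prints 1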

I can't see any way of implementing breadth-first evaluation of the kind
that would cope with

(((if forever) x) x)

using lambda-calculus notions.

Of course, what would solve all this would be a program that tests
whether an expression halts... :-)


-- 
\/ o\ Paul Crowley aipdc@uk.ac.ed.castle
/\__/ "Trust me, I know what I'm doing" - Sledge Hammer

lambert@spectrum.eecs.unsw.oz (Tim Lambert) (05/30/90)

In article <BJORNL.90Jul28100843@tarpon.tds.kth.se> bjornl@tds.kth.se (Bj|rn Lisper) writes:
% For instance, if(b,x,y) defined by
% 
% b => if(b,x,y) = x
% not b => if(b,x,y) = y
% 
% is a (non-strict) function in three arguments. Left-to-right lazy evaluation
% will give the ordinary "conditional" semantics. As I pointed out, there are
% cases where other evaluation orders terminate whereas this doesn't.

If you want a different evaluation order you can define it (in
Miranda) like this:

> if b x x = x
> if True x y = x
> if False x y = y

Now if is strict in its second and third arguments but not its first
one.
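
Haskell has no repeated pattern variables, but roughly the same behaviour
can be sketched with an equality guard (which is also where the trouble
with functional arguments shows up, since it needs an Eq instance):

    if' :: Eq a => Bool -> a -> a -> a
    if' _ x y | x == y = x   -- forces both branches, ignores the test
    if' True  x _ = x
    if' False _ y = y

    main :: IO ()
    main = print (if' undefined 'a' 'a')   -- 'a'; the test is never evaluated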

% Of course. A more interesting example is multiplication, that strictly
% speaking (sic!) is non-strict, since whenever one argument returns zero the
% other argument is not needed. if f(x) is non-terminating, then 0*f(x) still
% terminates (and returns zero) if '*' is treated as a non-strict function
% with left-to-right evaluation order. Is '*' usually implemented as a strict
% or as a non-strict operator in lazy functional languages?

Strict, since if you don't like this you can easily roll your own:

> 0 $times y = 0
> x $times y = x*y
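
In Haskell notation the same trick looks like this (times and loop are
illustrative names); the built-in (*) is strict, so the last line would hang
if it used (*) directly:

    times :: Int -> Int -> Int
    times 0 _ = 0          -- short-circuits on a zero left argument
    times x y = x * y

    loop :: Int
    loop = loop            -- a non-terminating computation

    main :: IO ()
    main = print (0 `times` loop)   -- prints 0; (0 * loop) would not terminate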

Tim

jrk@sys.uea.ac.uk (Richard Kennaway) (05/30/90)

In article <BJORNL.90May29165842@pike.tds.kth.se> bjornl@tds.kth.se (Bj|rn Lisper) writes:

>In article <4289@castle.ed.ac.uk> aipdc@castle.ed.ac.uk (Paul D. Crowley)
>writes:
>%WARNING: Uninformed opinions ahead!

>%I was under the impression that in a truly "pure" functional language
>%the problem doesn't arise because nothing has more than one argument. 
>%Things that look as if they should have more than one argument are dealt
>%with lambda-calculus style:

>%2+3:  ((plus 2) 3)
>%(plus 2) means plus applied to two, and equals a function which adds two
>%to its argument.

>%so ((plus 2) 3) = (plus2 3) = 5

>%Under these circumstances you _expect_ order of arguments to be
>%significant, so it doesn't surprise you that you get different results
>%if you change it.

>Yes, this is exactly the "problem" with lazy functional languages as I see
>it: they rely on a serial evaluation order of arguments obtained from the
>currying of functions in order to get unary functions only, as in lambda
>calculus.  HOWEVER, this ordering is an imposed, arbitrary one. As I
>understand it this is inherent in the lambda calculus itself. So, there's a
>problem with the very theoretical foundations of functional programming!

>My last comment is deliberately provocative. It would be interesting to hear
>any comments on it from people more involved in the field than me.

>Bjorn Lisper

Perhaps I can clear up some of the confusion in this thread.

Laziness has nothing to do with currying.  There is no reason why
order of arguments should be any more significant in a curried
language than in an uncurried one.

The lack of laziness apparent in the supposedly lazy functional languages
also has nothing to do with currying.  They impose a particular evaluation
strategy based on textual order, but only because it simplifies the
implementation.  It is not required by laziness; on the contrary, it
loses some laziness (and is therefore a Bad Thing :-)).

The fact that leftmost-outermost reduction works for the lambda calculus
(in the sense that this strategy will find the normal form of any
lambda expression which has one) is an uninteresting quirk of
its usual concrete syntax.  Write all your applications in reverse
order, and rightmost-outermost becomes the preferred strategy.

The "sequentiality" possessed by lambda calculus and certain term
rewrite systems has nothing to do with left-to-right pattern matching.

Left-to-right pattern matching does not imply that arguments to functions
are evaluated from left to right, but that matching against rules is
attempted from left to right, evaluation of arguments only being
triggered when the matcher attempts to match a constructor symbol in
a rule against an unevaluated subterm.  For example, take the following
Miranda rules:

	f x 0  =  0
	loop   =  loop

Miranda will use these rules to evaluate the term (f loop 0) to 0,
without attempting to evaluate the first argument to f, because the
rule for f does not need to know the value of that argument.  (The
Miranda syntax is a little confusing: in the above rules, "f" and "loop"
are function symbols, but "x" is a variable.)
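
A Haskell rendering of the same two rules makes the behaviour easy to check
(the missing cases of f do not matter for this example):

    f :: a -> Int -> Int
    f _ 0 = 0

    loop :: a
    loop = loop

    main :: IO ()
    main = print (f loop 0)   -- prints 0; the first argument is never evaluated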

--
Richard Kennaway          SYS, University of East Anglia, Norwich, U.K.
Internet:  jrk@sys.uea.ac.uk		uucp:  ...mcvax!ukc!uea-sys!jrk

alti@staffa.ed.ac.uk (Thorsten Altenkirch) (05/31/90)

Bjorn's article reminds me of the following problem: in a lazy
language we can define OR in two possible ways, such that it has the
following behaviours (_|_ = bottom):

OR1 True _|_  = True        OR2 True _|_  = _|_
OR1 _|_  True = _|_         OR2 _|_  True = True
...                         ...

but there is no way to get the 'intended' behaviour :

OR True _|_  = True
OR _|_  True = True
...

It is fairly obvious that we need some sort of parallel evaluation to
get the desired result. But even given that, it's not obvious what
programming language construct would enable us to write 'symmetric'
programs like OR. Candidates are the 'fair bottom-avoiding merge' or,
more fundamentally, McCarthy's ambiguity operator [which, applied to two
arguments, returns a value if the evaluation of at least one of them
terminates]. But both obviously destroy the Church-Rosser property!
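
For concreteness, the two sequential versions can be sketched in Haskell
(or1 and or2 are illustrative names); no definition of this sequential kind
satisfies both of the 'intended' equations at once:

    or1 :: Bool -> Bool -> Bool     -- left-biased: forces its first argument
    or1 True  _ = True
    or1 False b = b

    or2 :: Bool -> Bool -> Bool     -- right-biased: forces its second argument
    or2 _ True  = True
    or2 b False = b

    main :: IO ()
    main = do
        print (or1 True undefined)   -- True
        print (or2 undefined True)   -- True
        -- or1 undefined True and or2 True undefined both diverge.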

Thorsten


--
Thorsten Altenkirch 
JCMB Room 1404                     JANET: alti@uk.ac.ed.lfcs
LFCS, Dept. of Computer Science    UUCP:  ..!mcvax!ukc!lfcs!alti
University of Edinburgh            ARPA:  alti%lfcs.ed.ac.uk@nsfnet-relay.ac.uk
Edinburgh EH9 3JZ, UK.             Tel:   031-667-1081 / 031 229 24 88 (p)

jrk@sys.uea.ac.uk (Richard Kennaway) (05/31/90)

In article <LAMBERT.90May30224949@nankeen.spectrum.cs.unsw.oz.au> lambert@spectrum.eecs.unsw.oz (Tim Lambert) writes:

>In article <BJORNL.90Jul28100843@tarpon.tds.kth.se> bjornl@tds.kth.se (Bj|rn Lisper) writes:
>% For instance, if(b,x,y) defined by
>% 
>% b => if(b,x,y) = x
>% not b => if(b,x,y) = y
>% 
>% is a (non-strict) function in three arguments. Left-to-right lazy evaluation
						  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
This is rather a contradiction in terms.  The left-to-right (and don't
forget top-to-bottom as well) strategy is lazier than strict evaluation,
but not as lazy as possible.  It is possible to be lazier, but it takes
more work :-).

>% will give the ordinary "conditional" semantics. As I pointed out, there are
>% cases where other evaluation orders terminate whereas this doesn't.

>If you want a different evaluation order you can define it (in
>Miranda) like this:

>> if b x x = x
>> if True x y = x
>> if False x y = y

>Now if is strict in its second and third arguments but not its first
>one.

...but you then get an error if the second and third arguments
are functions (Miranda doesn't let you compare functions for equality -
which is reasonable).  The ordinary definition of if (i.e. omit the first
rule above) works fine for functions.

>% Of course. A more interesting example is multiplication, that strictly
>% speaking (sic!) is non-strict, since whenever one argument returns zero the
>% other argument is not needed. if f(x) is non-terminating, then 0*f(x) still
>% terminates (and returns zero) if '*' is treated as a non-strict function
>% with left-to-right evaluation order. Is '*' usually implemented as a strict
>% or as a non-strict operator in lazy functional languages?

>Strict, since if you don't like this you can easily roll your own:

>> 0 $times y = 0
>> x $times y = x*y

This only short-circuits when the zero is the first argument, not the
second.  In Miranda, you can't define a multiplication which is lazy in
both arguments.

--
Richard Kennaway          SYS, University of East Anglia, Norwich, U.K.
Internet:  jrk@sys.uea.ac.uk		uucp:  ...mcvax!ukc!uea-sys!jrk

jrk@sys.uea.ac.uk (Richard Kennaway CMP RA) (06/01/90)

In article <BJORNL.90Jul30101627@tarpon.tds.kth.se> bjornl@tds.kth.se (Bj|rn Lisper) writes:
...
>Which is something that irritates me: that a function should change its
>semantics just because its arguments in the definition are written down in a
>different order (while the rest of the definition is the same). I guess this
>is hard to get around, though: in order to get reasonably simple evaluation
>rules some order must be chosen.

Yes, it's easy to implement, and it lets you write "everything else" rules
just by putting them last, e.g.

	fac 0  =  1
	fac x  =  x * (fac (x-1))

Which is why it's nearly always done that way.  The only exception I know
of is Hope+, which is still left-to-right within each rule, but orders
different rules by specificity.  That is, pattern-matching is always
attempted with the most specific possible rule, regardless of textual
order.  E.g. in the above example, translated into Hope+, you could write
the two rules in either order and they would behave in the same way.

But it irritates me as well.  It seems just as reasonable (with respect
to understandability by people rather than machines) to write default
rules first, followed by all the exceptions; and if I have a function
of 12 arguments, I want to give those arguments in an order meaningful
to me, rather than use that ordering to control the evaluation order.

>In article <1539@sys.uea.ac.uk> jrk@sys.uea.ac.uk (Richard Kennaway) writes:
...
>   a rule against an unevaluated subterm.  For example, take the following
>   Miranda rules:

>	   f x 0  =  0
>	   loop   =  loop

>   Miranda will use these rules to evaluate the term (f loop 0) to 0,
>   without attempting to evaluate the first argument to f, because the
>   rule for f does not need to know the value of that argument.

>So Miranda then goes a little further than left to right: it checks for
>patterns that can be matched before evaluation takes place. This requires
>knowledge about the arguments: For the example function f above, I suppose
>that the evaluation of a more general term (f loop y) would not terminate
>even if y should happen to evaluate to 0.

No, for example (f loop (1-1)) would also evaluate to 0.  The pattern-
matcher looks at the first argument of the rule for f, finds that it
doesn't need to know anything about it, looks at the second argument
in the rule (0), and sees that it must evaluate the second argument of
the given term (1-1), does so, finds the term now matches, and applies the
rule.

>An even more general strategy than pattern matching would perhaps then try
>to do some transformation of y before the call, in order to see if it is
>equal to 0 so that the first rule can be applied.

As described above, that's exactly what Miranda, and all the other lazy
functional languages, do.

>The most general
>transformation would of course be to evaluate y, but then termination is not
>guaranteed.

It's guaranteed more often than I suspect you think.  Consider the following
example:

	f x Nil  =  0
	f x (Cons y x)  =  1
	tail (Cons u v)  =  v

(where Nil and Cons are the usual list constructors - Miranda actually
has these built-in, but I forget what it calls them.)

Now consider evaluating the term:

	f loop (tail (Cons loop (Cons loop loop)))

This will be evaluated to 1.  As before, the pattern-matcher skips the
first argument, and sees that it must do some evaluation of the second.
It applies the 'tail' rule, giving

	f loop (Cons loop loop)

which matches the second f rule.  None of the 'loop' subterms are ever
evaluated.
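
A left-linear Haskell approximation (Haskell does not allow the repeated
variable in the second rule, but the point about the unevaluated 'loop'
subterms survives):

    data List a = Nil | Cons a (List a)

    f :: a -> List b -> Int
    f _ Nil        = 0
    f _ (Cons _ _) = 1

    tl :: List a -> List a
    tl (Cons _ v) = v

    loop :: a
    loop = loop

    main :: IO ()
    main = print (f loop (tl (Cons loop (Cons loop loop))))
    -- prints 1; none of the 'loop' subterms is ever evaluated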

>Bjorn Lisper

--
Richard Kennaway          SYS, University of East Anglia, Norwich, U.K.
Internet:  jrk@sys.uea.ac.uk		uucp:  ...mcvax!ukc!uea-sys!jrk

bjornl@tds.kth.se (Bj|rn Lisper) (07/28/90)

In article <1990May29.234840.29162@comp.vuw.ac.nz> brian@comp.vuw.ac.nz
(Brian Boutel) writes:
> I (Bjorn Lisper) wrote:
%> Yes, this is exactly the "problem" with lazy functional languages as I see
%> it: they rely on a serial evaluation order of arguments obtained from the
%> currying of functions in order to get unary functions only, as in lambda
%> calculus.  HOWEVER, this ordering is an imposed, arbitrary one. As I
%> understand it this is inherent in the lambda calculus itself.

%I think there is a misunderstanding here.
%There is no need to *evaluate* the argument of a curried function before
%you need its value. 

Oops, I never claimed this. What I meant to say was that there is an imposed
order for arguments whenever the value of the function call itself is needed.
(Unless you know that the function is strict in some arguments, of course,
in which case you can safely evaluate those in any order.)

%If I define f x y = E, I may be defining f to be a function which takes
%an argument x and produces a function which takes an argument y etc.,
%but all this implies is that I can use f 3, for example, as a function.
%It says nothing about the order of evaluation of x and y.

Doesn't it? Unless you know that f is strict, I'd say that whenever the
value of f x y is needed, f x must be evaluated first and then, if needed,
(f x) y. All this assuming that we want to follow the semantics given by
lambda calculus (which imposes this order).

(Please correct me if I'm wrong. I'm not a lambda calculus expert.)

%In lazy evaluation, evaluation only occurs when a value is needed.
%Ultimately, this is because it has to be output, but within a program,
%evaluation is driven by pattern matching (a need to know which case
%applies), or by a conditional ( a need to know which branch to take) or
%by a strict primitive function like plus (needs the values of its
%arguments before it can be applied). In all other cases, arguments are
%passed to functions without prior evaluation.

But in some sense, a conditional is a function.  To call some functions
conditionals is a way to hide the basic "problem".  For instance, if(b,x,y)
defined by

b => if(b,x,y) = x
not b => if(b,x,y) = y

is a (non-strict) function in three arguments. Left-to-right lazy evaluation
will give the ordinary "conditional" semantics. As I pointed out, there are
cases where other evaluation orders terminate whereas this doesn't.

%In plus x y, plus is strict in both its arguments, and they will both be
%evaluated. The order in which this happens is unimportant. Real
%implementations do not implement plus as taking one argument and
%producing a new function, they implement it by using ordinary machine
%arithmetic, where addition is an operation that adds two values. 

Of course. A more interesting example is multiplication, which strictly
speaking (sic!) is non-strict, since whenever one argument evaluates to zero
the other argument is not needed. If f(x) is non-terminating, then 0*f(x)
still terminates (and returns zero) if '*' is treated as a non-strict
function with left-to-right evaluation order. Is '*' usually implemented as
a strict or as a non-strict operator in lazy functional languages?

%The issue of the order in which cases and arguments are checked in
%pattern matching is different. ...
% ...If you are interested in writing programs, you want semantics that
%are easy to understand, and if you can't have a situation where equation
%sequence and argument order make no difference, at least you will want
%something obvious like top-to-bottom, left-to-right.

%Even if this is not totally pure, lazy evaluation with these
%restrictions is still very useful, and should not be rejected. 

Oh, I've never rejected lazy evaluation. I've just tried to point out that
there are cases where "usual" lazy evaluation does not terminate but other
evaluation orders do. So any claim that lazy evaluation is the "best" kind
of evaluation must draw its support from other arguments than superior
termination properties or "purity". I can for instance buy your "easy to
understand" argument above. (Since I use the latin alphabet, so I naturally
read from left to right.... :^))

Bjorn Lisper

bjornl@tds.kth.se (Bj|rn Lisper) (07/30/90)

In article <1539@sys.uea.ac.uk> jrk@sys.uea.ac.uk (Richard Kennaway) writes:

   Laziness has nothing to do with currying.  There is no reason why
   order of arguments should be any more significant in a curried
   language than in an uncurried one.

Right. But there must be cases where some argument to a non-strict function
has to be evaluated in order to determine what is to be done next, even in a
lazy language. In all languages I've seen, lazy or not, this is done on a
left to right basis.

   The fact that leftmost-outermost reduction works for the lambda calculus
   (in the sense that this strategy will find the normal form of any
   lambda expression which has one) is an uninteresting quirk of
   its usual concrete syntax.  Write all your applications in reverse
   order, and rightmost-outermost becomes the preferred strategy.

Which is something that irritates me: that a function should change its
semantics just because its arguments in the definition are written down in a
different order (while the rest of the definition is the same). I guess this
is hard to get around, though: in order to get reasonably simple evaluation
rules some order must be chosen.

   The "sequentiality" possessed by lambda calculus and certain term
   rewrite systems has nothing to do with left-to-right pattern matching.

   Left-to-right pattern matching does not imply that arguments to functions
   are evaluated from left to right, but that matching against rules is
   attempted from left to right, evaluation of arguments only being
   triggered when the matcher attempts to match a constructor symbol in
   a rule against an unevaluated subterm.  For example, take the following
   Miranda rules:

	   f x 0  =  0
	   loop   =  loop

   Miranda will use these rules to evaluate the term (f loop 0) to 0,
   without attempting to evaluate the first argument to f, because the
   rule for f does not need to know the value of that argument.

So Miranda then goes a little further than left to right: it checks for
patterns that can be matched before evaluation takes place. This requires
knowledge about the arguments: For the example function f above, I suppose
that the evaluation of a more general term (f loop y) would not terminate
even if y should happen to evaluate to 0.

An even more general strategy than pattern matching would perhaps then try
to do some transformation of y before the call, in order to see if it is
equal to 0 so that the first rule can be applied. The most general
transformation would of course be to evaluate y, but then termination is not
guaranteed.

Bjorn Lisper