[comp.lang.functional] Can laziness sometimes be too lazy?

mmcg@bruce.cs.monash.OZ.AU (Mike McGaughey) (07/20/90)

Are there situations where lazy evaluation is *too* lazy?

The following example (due to Lloyd Alison) shows what I mean:

let rec first n l =
		if n = 0 then []
		else (hd l).(first (n-1) (tl l))
and rec length l =
		if l = [] then 0
		else 1 + length (tl l)
in
    length (first 2 [])

In a strict language, the result is an error, due to the (tl []) in "first".

In a lazy language (LML here), you get the result 2.

If we replace those IFs with pattern matching, which is slightly
stricter than it needs to be, we get a pattern match failure.

The problem, in this case, is that "length" is passed a list of
closures (of length 2) which are never evaluated, but which correspond
to nonexistent list elements.
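[For readers who want to try this today: the example transliterates directly into Haskell, which is also lazy and behaves like the LML version. This is a sketch, not the original LML code; the primed names are renamings to avoid the Prelude.]

```haskell
-- Transliteration of the LML example into Haskell (also lazy).
-- first' takes the first n elements without checking that l is long enough.
first' :: Int -> [a] -> [a]
first' n l = if n == 0
             then []
             else head l : first' (n - 1) (tail l)

len' :: [a] -> Int
len' l = if null l then 0 else 1 + len' (tail l)

-- len' only forces the spine of the list, never its elements, so the
-- bad 'head []' and 'tail []' thunks are built but never evaluated:
main :: IO ()
main = print (len' (first' 2 ([] :: [Int])))   -- prints 2
```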

Is this problem well known?  Has anyone any insight into what the
language "should" do, and why?  Is it just a case of the increased
freedom of laziness introducing a more subtle class of errors?

Mike.
--
Mike McGaughey			ACSNET:	mmcg@bruce.cs.monash.oz

blenko-tom@CS.YALE.EDU (Tom Blenko) (07/20/90)

In article <2706@bruce.cs.monash.OZ.AU> mmcg@bruce.cs.monash.OZ.AU (Mike McGaughey) writes:
|
|let rec first n l =
|		if n = 0 then []
|		else (hd l).(first (n-1) (tl l))
|and rec length l =
|		if l = [] then 0
|		else 1 + length (tl l)
|in
|    length (first 2 [])
|
|In a strict language, the result is an error, due to the (tl []) in "first".
|
|In a lazy language (LML here), you get the result 2.
|
|If we replace those IFs with pattern matching, which is slightly
|stricter than it needs to be, we get a pattern match failure.
|
|The problem, in this case, is that "length" is passed a list of
|closures (of length 2) which are never evaluated, but which correspond
|to nonexistent list elements.
|
|Is this problem well known?  Has anyone any insight into what the
|language "should" do, and why?  Is it just a case of the increased
|freedom of laziness introducing a more subtle class of errors?

I don't think there's a good answer to your question (as it relates to
this example, anyway) unless it is clear what 'first' is intended to
mean.  The given definition for 'first' appears to be incorrect.

What is supposed to be the result of

	(first 2 [])

for example? A "correct" version of 'first' might be

	let rec cfirst n l =
			if n = 0 then []
			else if l = [] then []
			else (hd l).(cfirst (n-1) (tl l))

in which case the indicated behavior does not occur.
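[A Haskell sketch of this guarded version, for checking the claim; `cfirst` here is a transliteration, not LML syntax:]

```haskell
-- cfirst stops at the end of the list instead of inventing elements,
-- so it never builds a thunk over head [] or tail [].
cfirst :: Int -> [a] -> [a]
cfirst n l
  | n == 0    = []
  | null l    = []
  | otherwise = head l : cfirst (n - 1) (tail l)
```

With this definition, `length (cfirst 2 [])` is 0 rather than 2.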

	Tom

wg@opal.cs.tu-berlin.de (Wolfgang Grieskamp) (07/21/90)

blenko-tom@CS.YALE.EDU (Tom Blenko) writes:

>In article <2706@bruce.cs.monash.OZ.AU> mmcg@bruce.cs.monash.OZ.AU (Mike McGaughey) writes:
>|
>|let rec first n l =
>|		if n = 0 then []
>|		else (hd l).(first (n-1) (tl l))
>|and rec length l =
>|		if l = [] then 0
>|		else 1 + length (tl l)
>|in
>|    length (first 2 [])
>|
>|In a strict language, the result is an error, due to the (tl []) in "first".
>|
>|In a lazy language (LML here), you get the result 2.
>| [..]

>I don't think there's a good answer to your question (as it relates to
>this example, anyway) unless it is clear what 'first' is intended to
>mean.  The given definition for 'first' appears to be incorrect.

>What is supposed to be the result of

>	(first 2 [])

>for example? A "correct" version of 'first' might be

>	let rec cfirst n l =
>			if n = 0 then []
>			else if l = [] then []
>			else (hd l).(cfirst (n-1) (tl l))

>in which case the indicated behavior does not occur.


The intended definition is obviously:

	let rec first n l =
			if n = 0 then []
			else if l = [] then BOTTOM 
			else (hd l).(first (n-1) (tl l))

where BOTTOM must be an unsightly polymorphic zero-arity function!!
The example highlights the pain of "lazy" languages (besides the fact
that the notion stands only for an implementation detail, not for a
semantic concept).

(-:

You folks of the lazy lobby (so silent about Mike's
example?), have you ever written larger programs in NOR (normal-order)
languages?  Of course you have, but no debugging was necessary, because
they were totally correct from the beginning.

Say, have you ever delegated a programming task in a NOR language
to someone else, for example to a student, and then had to use
that service yourself?  Of course you have, but your students (or
assistants, to step upwards) always write totally correct programs,
so there was never any need to spend hours going through all of your
excellent code to track down a bug in a trivial function in a totally
different place.

:-)

Very seriously: according to my experience, pure laziness does not
seem to be the way to solve the "software crisis".  Some might imagine
marvellous debuggers, but that's an old argument of the von Neumann
faction.

I believe strictness introduces a kind of redundancy which makes
programs easier to understand, to validate, to verify, and
(consequently) to manipulate formally.  There is only a small
number of problems which are naturally normal-order (streams).
The rest is only a kind of dirty programmer optimization
(if I reduce a "lazy" boolean conjunction over a list, it stops when
the first FALSE occurs, and the like).

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Wolfgang Grieskamp 
wg@opal.cs.tu-berlin.de tub!tubopal!wg wg%opal@DB0TUI11.BITNET

tom@stl.stc.co.uk (Tom Thomson) (07/23/90)

response to Wolfgang Grieskamp's shot across the bows of lazy languages:-

I guess there'll be plenty of other responses, so I'm not going into the
whole lazy versus eager thing.  Three points though:-

1)  WG's BOTTOM should of course be TOP, there isn't an absence of information
    (or a nonterminating construct), there's an excess of information (the
    function's type requires the argument to have type non-nil-list, the
    argument type is nil-list, the inferred type would have to have both
    pieces of information which pushes it straight to the top of the type
    lattice).  Mostly, I prefer to call this sort of thing ERROR rather
    than TOP or BOTTOM, but BOTTOM is just plain wrong.

2)  Almost all the code I've ever written has been splattered with "if you
    ever get here, generate an error".  Hope+ even had the type ERROR
    added to it to permit this style of programming.  The Flagship machine
    design included tag bits with every value, one of which indicated that
    the value had type "ERROR", and our parallel emulators implemented it.
    Most large software systems have this sort of programming, and we're
    quite happy when no errors are generated because those bits don't get
    executed.  Lazy languages allow this style of programming, just as
    imperative languages did, but eager languages effectively prevent it.
3)  Why should it be assumed that the presence of a construct which, if
   evaluated, leads to an error or a nonterminating computation is a bug?
   I don't much like the original program, but if it's what the programmer
   intended there's no bug in it. For nice examples of bottom, some of us
   program with infinite lists all the time (and intentionally write
   non-terminating loops in imperative languages too). 

Tom Thomson

fs@rex.cs.tulane.edu (Frank Silbermann) (07/24/90)

Mike McGaughey:
>>>	Are there situations where lazy evaluation is *too* lazy?
>>>	The following example (due to Lloyd Alison) shows what I mean:
>>>	Let
>>>		rec first n l be
>>>			if n = 0 then [] else (hd l).(first (n-1) (tl l))
>>>	and
>>>		rec length l be
>>>			if l = [] then 0 else 1 + length (tl l)
>>>	in	length (first 2 [])
>>>
>>>	In a strict language, the result is an error,
>>>	due to the (tl []) in "first".
>>>	In a lazy language (LML here), you get the result 2.
>>>
>>>	The problem, in this case, is that "length"
>>>	is passed a list of closures (of length 2)
>>>	which are never evaluated, but which correspond
>>>	to nonexistent list elements.
>>>	 What "should" the language do, and why?

Tom Blenko:
>>	I don't think there's a good answer to your question
>>	(as it relates to this example, anyway) unless it is clear
>>	what 'first' is intended to mean.
>>	What is supposed to be the result of `(first 2 [])', for example?

In a lazy language, you have two choices.
You can define the result to be "bottom"
(the least-defined domain element, according to the partial order).
This approach emphasizes flexibility at the expense of safety,
and is therefore appropriate in a language for building prototypes.
Or, you can use strong typing to catch all such operations
at compile-time and reject such programs as illegal.  
This approach emphasizes safety at the expense of flexibility,
and is therefore more appropriate for building a production system.

Wolfgang Grieskamp:
>	The intended definition is obviously: ...

If the language was defined via an operational semantics,
then whatever result the standard implementation provides
is the correct result, by definition.  However,
here there seems to be doubt as to whether
the result is really what the implementor intended.

Such questions are less likely to arise when
the language is defined via denotational semantics.
Rather than defining the constructs implicitly
via a black-box implementation, denotational semantics
gives the intended meaning more directly.
This is surely one of the motivations for its development.

Wolfgang Grieskamp:
>	According to my experience pure "lazyness" seems 
>	not be the way to solve the "software-crisis".
>	I believe strictness makes programs easier to understand,
>	to valificate, to verify, and (consequently) to manipulate formally.
>	There is only a small number of problems which are naturally
>	normal-order (streams). 

One of my research projects is a language offering
_both_ lazy and strict constructs, so the programmer
can consider the situation and choose appropriately.

	Frank Silbermann	fs@rex.cs.tulane.edu
	Tulane University	New Orleans, Louisiana  USA

jgk@osc.COM (Joe Keane) (07/24/90)

In article <2706@bruce.cs.monash.OZ.AU> mmcg@bruce.cs.monash.OZ.AU (Mike
McGaughey) writes:
>Are there situations where lazy evaluation is *too* lazy?
>
>The following example (due to Lloyd Alison) shows what I mean:
>
>let rec first n l =
>		if n = 0 then []
>		else (hd l).(first (n-1) (tl l))
>and rec length l =
>		if l = [] then 0
>		else 1 + length (tl l)
>in
>    length (first 2 [])
>
>In a strict language, the result is an error, due to the (tl []) in "first".
>
>In a lazy language (LML here), you get the result 2.

The whole idea of being lazy is that you do as little work as possible.  The
constraint is that you can't change the return values or side effects of
an expression.  One thing that you are allowed to do is change bottom into
some valid value, as long as you're consistent.  I won't explain this, except
to point out that a program which gets stuck can't contradict one that does
something useful.
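[The point about consistently replacing bottom can be seen in a one-line Haskell sketch; `undefined` plays the role of bottom here, and the example is mine, not from the post:]

```haskell
-- A function that never inspects its argument maps bottom to an
-- ordinary value; Haskell's 'undefined' denotes bottom.
alwaysTwo :: a -> Int
alwaysTwo _ = 2

-- alwaysTwo undefined yields 2: the stuck argument is never demanded,
-- so it cannot contradict the useful result.
```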

Many functional languages are weak in exception handling.  For example, one
whose name I won't mention says that if you get any error, you lose, the
result is the same as bottom.  In a language like this, it is valid for the
example expression to return 2, for the reason given above.

Now suppose we're in a language with real exception handling.  It's only a
matter of terminology whether you count an exception as a kind of return
value, a side effect, or something different.  Given the constraint above, the
implementation must examine the expression far enough to determine whether it
will raise an exception.  This is exactly what you'd expect.  If it doesn't,
it's broken.

In summary, I think exception handling is an important part of any language.
In particular, defining errors to be the same as bottom is an unwise thing to
do, and this example points out some of the consequences.

mmcg@bruce.cs.monash.OZ.AU (Mike McGaughey) (07/25/90)

This is something I said recently by mail (edited); I hope it
clarifies my interest somewhat:

-----
> It is simply
> a matter of which properties you prefer to have satisfied:

Yes, indeed.  There is, however, a basic inconsistency which arises
when using lazy languages - the routine that prints the result of the
expression has to be strict.

Now, this means that if I had just written:

    (first 2 [])

the program would have failed; if I then incorporate that program into
a larger program that takes the length, naively, I would expect that to
fail too.  It's perfectly obvious that it won't, of course, but you get
the idea - one would like to be able to simply glue functions together
and have the system do just what it would have done on the functions
individually.
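[The composition point can be made concrete in Haskell: inspecting the elements of `(first 2 [])` fails, while taking its length quietly succeeds. A sketch with renamed functions; the exception handling is only there to display the failure:]

```haskell
import Control.Exception (SomeException, evaluate, try)

first' :: Int -> [a] -> [a]
first' n l = if n == 0 then [] else head l : first' (n - 1) (tail l)

main :: IO ()
main = do
  -- Forcing the elements (as a printer would) raises the head-of-[] error:
  r <- try (evaluate (sum (first' 2 []))) :: IO (Either SomeException Int)
  putStrLn (either (const "inspecting the elements fails") show r)
  -- Taking the length never demands the elements, so it succeeds:
  print (length (first' 2 ([] :: [Int])))   -- prints 2
```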

So, laziness affects one of the basic strengths of functional
programming - the composition of functions doesn't always preserve the
semantics of those functions.  More precisely, it *does* preserve the
semantics of individual functions, but the failure semantics of a
function may not be propagated upwards when you naively expect it to
be.  It seems that the programmer has to be aware of subtleties he
shouldn't really need to worry about - he sometimes needs to know
whether the value being computed will actually be inspected or not!

I am actually a part of the 'lazy lobby', so I am happy to live with
this "problem".  However, it's an interesting anomaly, considering
the aforementioned functional philosophy.
-----

On reflection, there are a number of possible responses to my query -
the most common one has been "your program is incorrect - rewrite it".
This is manifestly obvious and misses the point.  It might be closer to
the mark to say that the example illustrates that lazy programmers are
safer programming in a top-down fashion, or in such a way that they
never rely on a function's failure, or that they always make their intentions
clear by using pattern matching (but there are certainly cases where
pattern matching won't help.  And what about languages that don't have
pattern matching?  Also, could automatic program transformations cause
problems here?).

There seems to be a fundamental inelegance in having to know whether or
not the result of a function will actually be inspected by a program.
It's not very modular.  Is this important, from either a semantic or
reliability point of view?  Is it a symptom of something much deeper?
Is there an elegant fix for lazy languages which preserves laziness, in
all its glory, as well as allowing cleaner functional glue?  Is it
fundamentally impossible? (I suspect so, but perhaps there is a radical
solution).

This is not just a case of "Well, don't write programs that fail".  We
are, in a deeper sense, talking about whether it is possible to cleanly
integrate exception handling into a lazy language.  Those of you who
have used (say) SML will realise just how useful this could be.

One more thing:

jgk@osc.COM (Joe Keane) [23 Jul 90 22:48:39 GMT]:
> 
> One thing that you are allowed to do is change bottom into
> some valid value, as long as you're consistent.  I won't explain this, except
> to point out that a program which gets stuck can't contradict one that does
> something useful. [etc]

Program SomethingUseful :-)

let IncomingMissiles = (first 20 Missiles)
in
    if (length IncomingMissiles) > 0 then
	CounterAttackLaunchCode
    else
	PeacefulCode
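[Transliterated into Haskell, the guard really does go the wrong way: with an empty sky, the condition still comes out true. A sketch; `first'` is the usual renamed transliteration and the empty list stands in for Missiles:]

```haskell
first' :: Int -> [a] -> [a]
first' n l = if n == 0 then [] else head l : first' (n - 1) (tail l)

-- length only forces the spine, so (first' 20 []) has length 20
-- even though there are no missiles at all:
launch :: Bool
launch = length (first' 20 ([] :: [Int])) > 0   -- True
```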


For something more prosaic along the same lines, try an operating system.

So, if you have any ideas...

Mike.
--
Mike McGaughey			ACSNET:	mmcg@bruce.cs.monash.oz

"His state is kingly; thousands at his bidding speed,
 And post o'er land and ocean without rest."
	- Milton.

jgk@osc.COM (Joe Keane) (07/25/90)

In article <3913@rex.cs.tulane.edu> fs@rex.cs.tulane.edu (Frank Silbermann)
writes:
>In a lazy language, you have two choices.
>You can define the result to be "bottom",
>(the least-defined domain element, according to the partial-order).
>This approach emphasizes flexibility at the expense of safety,
>and is therefore appropriate in a language for building prototypes.
>Or, you can use strong typing to catch all such operations
>at compile-time and reject such programs as illegal.  
>This approach emphasizes safety at the expense of flexibility,
>and is therefore more appropriate for building a production system.

I don't agree with this dichotomy.  Specifically, I don't like either of the
alternatives.  There are better ways to handle the problem.

Despite what some people say, I don't think strong typing is such a good deal.
It catches obvious errors, but doesn't do you any good against less obvious
bugs.  In the example given, strong typing is no help, unless perhaps your
typing system includes the length of a list.  The problem is how to predict
and deal with run-time errors.

fs@rex.cs.tulane.edu (Frank Silbermann) (07/25/90)

In article <3188@osc.COM> jgk@osc.COM (Joe Keane) writes:
>
>	Suppose we're in a language with real exception handling.
>	It's only a matter of terminology whether you count an exception
>	as a kind of return value, a side effect, or something different.

>	In particular, defining errors to be the same as bottom
>	is an unwise thing to do ...

If you "handle" an exception then it is no longer an error
-- just another possible result.  Semantically speaking,
an error is an _unhandled_ exception.  Since it is unhandled,
why not declare the result to be `undefined' (i.e. bottom)?

Make an analogy with mathematics.  Consider the function f(x) = 5/x.
What is the value of the function at x=0?  `Error' or `undefined'?

It is a design issue as to whether you make the programmer
handle exceptions explicitly, or whether the language implicitly
tacks on exception-handling routines.  I have no argument
with those who advocate implicit exception-handling mechanisms
(though the increased expressiveness requires a more complicated language).

	Frank Silbermann	fs@rex.cs.tulane.edu

fs@rex.cs.tulane.edu (Frank Silbermann) (07/25/90)

In article <2734@bruce.cs.monash.OZ.AU> mmcg@bruce.cs.monash.OZ.AU (Mike McGaughey) writes:
>	If I had just written:
>
>	    (first 2 [])
>
>	the program would have failed;
>	if I then incorporate that program into a larger program
>	that takes the length, naively, I would expect that to fail too.

This depends on whether you view the above as illegal or undefined.
If you view it as undefined, then there is no reason you can't
have partially-defined objects.  If you say that in a correct program
all objects should be fully defined, then what about functions?
We don't expect functions to be defined over every possible argument!
What should be the value of `factorial(-5)'?
If you view it as an error, then is not the definition
of `factorial' also in error, and therefore
any program containing it?

If you want to view `(first 2 [])' as illegal
(and therefore any program containing it as illegal),
then you must have a program analyzer strong enough to detect
potential problems at compile time -- so the program can be rejected.
The only computable way to avoid all potential problems
is to accept only programs which can be guaranteed
to run without problems.  Unfortunately,
no algorithm exists that accepts all and only the good programs,
so we are stuck with a choice between safety and flexibility.

	Frank Silbermann	fs@rex.cs.tulane.edu
	Tulane University	New Orleans, Louisiana  USA

jgk@osc.COM (Joe Keane) (07/26/90)

How about the following C function:

#include <stdlib.h>	/* for malloc(); ITEM is assumed defined elsewhere */

struct list
{
  int length;
  ITEM* items;
};

struct list* first(length, old)
  int length;
  struct list* old;
{
  struct list* new = (struct list*)malloc(sizeof(struct list));
  int i;
  new->length = length;
  new->items = (ITEM*)malloc(length * sizeof(ITEM));
  for (i = 0; i < length; i++)
    new->items[i] = old->items[i];	/* no check against old->length */
  return new;
}

This is a fairly close translation of the functional version of the first
function.  You could easily make it recursive but I think it's a little
clearer in the iterative form.  Note that it makes exactly the same mistake as
the functional version.  It doesn't check the length of the old list, so the
new list may include some invalid elements.

I think the question is whether a list should be allowed to have elements
which raise exceptions.  Certainly if the list is declared to have elements of
a specific type, then it can't.  If the elements can be of any type, then
maybe.  In a language with good exception handling, you would be able to
declare a list of elements which don't raise exceptions, or declare a list of
elements which may raise exceptions.  You can get specific and say which kinds
of exceptions are allowed.  It's like Burger King, have it your way.

wg@opal.cs.tu-berlin.de (Wolfgang Grieskamp) (07/26/90)

jgk@osc.COM (Joe Keane) writes:

>[...] One thing that you are allowed to do [in lazy languages] is change 
>bottom into some valid value, as long as you're consistent.  

	letrec BOTTOM = BOTTOM

You cannot ever change bottom to a legal value.  You can only suspend
the access to bottom in normal-order languages.  Bottom is not an error,
a catchable exception, or anything like that.  It is the real fatal case,
the last thing a program feels before nirvana!

Exception handling is only distantly related to the problem.  Like
Michael Ashley (who intentionally changed the subject) I believe that an
exception is a side effect, and you have to leave the comfortable
semantic preserve that FLs provide in order to treat them.

Handling errors is yet another thing.  I don't like the idea of
implicitly extending the domain of each data object by an error value,
not least to avoid wasting tag bits.

I would prefer some kind of subsorting (not only for this
reason).  For example, you may have a sort 'list', implemented as
usual, and you may have a sort

	type list_or_error == list U {ERROR}

Sort list is compatible with list_or_error, but not vice versa, unless
you establish a constraint which checks that an l of sort list_or_error
is also of sort list.

I think this is the direction suggested by Frank Silbermann, which he
calls "strong typing" (I would prefer the notion of subsorting; the OOP
people made this concept famous).

For example, list itself may be defined as

	type list == nil_list U cons_list

If hd and tl are now defined only on cons_list, the original "first"
definition is type-incorrect.  You must inspect both the nil_list case
and the cons_list case, and are forced to choose a way to handle the
nil_list.
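[This subsorting idea can be approximated in Haskell with an explicit sum type, where hd and tl are defined only on the cons alternative; the names below are mine, chosen to mirror the sorts above:]

```haskell
-- 'List' is the union of a nil sort and a cons sort; hd and tl exist
-- only on the cons sort, so the type checker forces every caller to
-- decide what to do in the nil case.
data List a = Nil | C (Cons a)
data Cons a = Cons a (List a)

hd :: Cons a -> a
hd (Cons x _) = x

tl :: Cons a -> List a
tl (Cons _ xs) = xs

first :: Int -> List a -> List a
first n l
  | n == 0    = Nil
  | otherwise = case l of
      Nil -> Nil   -- the forced decision: here we simply cut the list off
      C c -> C (Cons (hd c) (first (n - 1) (tl c)))

len :: List a -> Int
len Nil   = 0
len (C c) = 1 + len (tl c)
```

Here `len (first 2 Nil)` is 0: the type discipline made the nil case explicit, so no thunk over a nonexistent element is ever built.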

However, if you choose to let the result be BOTTOM, the problem is
still not solved, in my opinion.  When the fatal-case handler is raised,
you would like to see where the bottom was produced!  So BOTTOM would
have to carry a kind of suspended call-chain backtrace.  It's clear that
such a BOTTOM cannot be the regular case.


mmcg@bruce.cs.monash.OZ.AU (Mike McGaughey) writes:

>On reflection, there are a number of possible responses to my query -
>the most common one has been "your program is incorrect - rewrite it".
>This is manifestly obvious and misses the point.  It might be closer to
>the mark to say that the example illustrates that lazy programmers are
>safer programming in a top-down fashion, or in such a way that they
>never rely on a function's failure, or that they always make their intentions
>clear by using pattern matching (but there are certainly cases where
>pattern matching won't help.  And what about languages that don't have
>pattern matching?  

Pattern matching cannot solve the problem.  Pattern matching is
just syntactic flavour - if you translate it to low-level case
cascades, the strictness properties do not change.  Maybe what you
expect from pattern matching can be supported by assertions (Eiffel
has them).  In a clumsy syntax:

	letrec first n l {length(l) >= n} = ...


>Also, could automatic program transformations cause problems here?.

Actually, for many transformations you need an expensive strictness
analysis to prove their applicability in normal-order languages.

>There seems to be a fundamental inelegance in having to know whether or
>not the result of a function will actually be inspected by a program.

The programmer needs the freedom to choose between strict and non-strict 
programming constructs. I would prefer to make the non-strict case explicit, 
and let the strict one be the default. 
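[Haskell makes the opposite default choice (non-strict by default, strictness opt-in via `seq`), but the effect asked for here is available: forcing each head as it is produced makes the original example fail exactly as in a strict language. A sketch, not LML:]

```haskell
-- 'seq' forces its first argument before returning the second, so each
-- head element is demanded as soon as the cons cell is built.
firstStrict :: Int -> [a] -> [a]
firstStrict n l =
  if n == 0
  then []
  else let h = head l
       in h `seq` (h : firstStrict (n - 1) (tail l))

-- firstStrict 2 [] now raises the head-of-[] error as soon as the
-- result is demanded at all; firstStrict 2 [1,2,3] is still [1,2].
```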

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Wolfgang Grieskamp            "Learning form OOPs means learning victory ..."
wg@opal.cs.tu-berlin.de        tub!tubopal!wg        wg%opal@DB0TUI11.BITNET

rama@titan.rice.edu (Ramarao Kanneganti) (07/26/90)

In article <3926@rex.cs.tulane.edu> fs@rex.cs.tulane.edu (Frank Silbermann) writes:
>If you "handle" an exception then it is no longer an error
>-- just another possible result.  Semantically speaking,
>an error is an _unhandled_ exception.  Since it is unhandled,
>why not declare the result to be `undefined' (i.e. is bottom)?

Error is much more than bottom.  Bottom is least informative.  If you
are sitting in front of a terminal, you notice whether the program never
comes back with anything, or whether it comes back with an error message.
Error is more informative than bottom.  Would you equate abort(5) with
bottom, simply because abort is unhandled?

>
>Make an analogy with mathamatics.  Consider the function f(x) = 5/x.
>What is the value of the function at x=0?  `Error' or `undefined'?
>

It can be a matter of choice how you define the function f.  But if
you want to give maximal information, it should be error.  Suppose you
have a domain with a chain like a <= b <= c.  After getting the result
b, on a further query you get an error.  Do you still call the
result b?  I claim that it is an error element, and that it is above b.
But any function which is strict in bottom is also strict in error, so
either choice will be consistent.


>	Frank Silbermann	fs@rex.cs.tulane.edu

	thanks,
	Ramarao Kanneganti	rama@rice.edu

jashley@silver.ucs.indiana.edu (J. Michael Ashley) (07/26/90)

In article <1681@opal.tubopal.UUCP> wg@opal.cs.tu-berlin.de writes:
>
> Bottom is not an error, 
>catchable exeception, or something like this. It is  the real fatal case,
>the last thing a program feels before [nirvana]!

I think I just found my "clever .signature".

>Handling errors is again another thing. I dont like the idea 
>extending the domain of each data object implicit by 
>an error value.

A lot of people don't like this idea.  I think that's *one* of the
motivations for using continuation semantics.  If you get an error, you
throw away the current continuation and invoke an error continuation.
That way you don't have to pass this silly ERROR value all the way
through the rest of the program.

So we extend the domain or use a non-functional exception handler.
This doesn't look very good.

>However, if you choose to let the result be BOTTOM, the problem is
>still not solved, in my opinion.  When the fatal-case handler is
>raised, you would like to see where the bottom was produced!

Sorry, I don't understand this sentence.

>So BOTTOM would have to
>carry a kind of suspended call-chain backtrace.

What's a suspended call-chain backtrace?

>It's clear that such a BOTTOM cannot be the regular case.

Can somebody very precisely define the meaning of bottom?  I'm getting
a little confused here now that Wolfgang and other people have started
throwing it around liberally.

It sounds like people want bottom to be the value of any nonsensical
expression.  Is this right?


mike ashley
jashley@lanl.gov

jgk@osc.COM (Joe Keane) (07/28/90)

I think we all agree that bottom is the least-defined element.  This means you
don't know anything about it.  Is it an error?  You don't know.  Is it a
number?  You don't know.  Is it bottom?  You don't know.  And so on.

The last one is key.  The most important property of bottom is that you never
know when you have it.  If you `know' something is bottom, you know something
about it, so it's not really bottom, is it?

Now, in a lot of cases we can determine that an expression is stuck in an
infinite loop.  Usually this is defined to be bottom, but we can extend the
semantics so that when this is detected it raises an `infinite loop' exception.
However, it should be clear that this is no longer bottom, and that no matter
how sophisticated our detection algorithm, some expressions will still be
bottom.

Also, depending on your type system, there can be different `degrees' of
bottom.  For example, you may know an expression is an integer but its exact
value is unknown.  I don't think this changes the arguments.

In article <1681@opal.tubopal.UUCP> wg@opal.cs.tu-berlin.de (Wolfgang
Grieskamp) writes:
>Exception handling is only distantly related to the problem.  Like Michael
>Ashley (who intentionally changed the subject) I believe that an exception
>is a side effect, and you have to leave the comfortable semantic preserve
>that FLs provide in order to treat them.

Needless to say, I disagree with this.  If FL designers believe this is true,
they will only design toy languages.  You're OK as long as you don't make any
mistakes.

>However, if you choose to let the result be BOTTOM, the problem is still
>not solved, in my opinion.  When the fatal-case handler is raised, you
>would like to see where the bottom was produced!  So BOTTOM would have to
>carry a kind of suspended call-chain backtrace.  It's clear that such a
>BOTTOM cannot be the regular case.

You're right.  If the error `BOTTOM' does some sort of backtrace, it's not
really bottom.

>The programmer needs the freedom to choose between strict and non-strict 
>programming constructs. I would prefer to make the non-strict case explicit, 
>and let the strict one be the default. 

As I said before, I think a sort of hybrid approach is the best.  You can
declare that a given argument to a function must be valid, i.e. must not raise
an exception.  Then upon entering the function, the exception system will do
enough work to ensure that the argument is valid.  However, the actual value
need not be computed until it's really needed.
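[This hybrid can be sketched in Haskell as a wrapper that checks, on entry, that every element of the argument evaluates without raising an exception (to weak head normal form), while anything deeper stays lazy. My construction, under the assumption that "valid" means "evaluates without raising an exception":]

```haskell
-- Force each element of the list to weak head normal form before
-- handing the list on; an invalid argument fails at the call boundary,
-- but values inside the elements stay as lazy as before.
validated :: [a] -> [a]
validated xs = go xs `seq` xs
  where
    go []       = ()
    go (y : ys) = y `seq` go ys
```

Applied to the running example, `length (validated (first 2 []))` fails immediately at the boundary, while `length (validated [1,2])` is still 2.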

In article <10268@brazos.Rice.edu> rama@titan.rice.edu (Ramarao Kanneganti)
writes:
>Error is much more than bottom.  Bottom is least informative.  If you are
>sitting in front of a terminal, you notice whether the program never comes
>back with anything, or whether it comes back with an error message.  Error
>is more informative than bottom.  Would you equate abort(5) with bottom,
>simply because abort is unhandled?

I agree completely.  Bottom gives you no information, not even that it's
bottom.

>A lot of people don't like this idea.  I think that's *one* of the motivations
>for using continuations semantics.  If you get an error, you throw away the
>current continuation and invoke an error continuation.  That way you don't
>have to pass this silly ERROR value all the way through the rest of the
>program.

I don't think these are really any different.  It's just a matter of
implementation.  The naive way to deal with an exception is to represent it as
a return value, and keep passing it back up the chain, until someone handles
it, i.e. does something besides pass it up.  You can be a little smarter and
figure out who is going to handle the exception, and jump straight there.

>It sounds like people want bottom to be the value of any nonsensical
>expression.  Is this right?

I think it's a bad idea.

wg@opal.cs.tu-berlin.de (Wolfgang Grieskamp) (07/29/90)

jgk@osc.COM (Joe Keane) writes:

 >I think we all agree that bottom is the least-defined element.  This means you
 >don't know anything about it.  Is it an error?  You don't know.  Is it a
 >number?  You don't know.  Is it bottom?  You don't know.  And so on.

Agreed, actually.  But I wouldn't say that you know nothing; rather, from
inside the language's view, nothing is known about bottom.

 >Now, in a lot of cases we can determine that an expression is stuck in an
 >infinite loop.  Usually this is defined to be bottom, but we can extend the
 >semantics so that when this is detected it raises an `infinite loop' exception.

I have a simple pragmatic problem with the approach of extending the
semantics.  Imagine you have a functional language with sum-of-products
types, and a simple compiler for it.  This compiler is able to generate
exception code for expressions such as hd(nil).

Now you have written a program, gone through the validation phase and
so on, and are quite sure your program is correct.  The compiler has
a switch to disable checking of undefined expressions.  Because the
program is highly data-structure oriented, you will get a speedup of
about 10% using this switch.  You do it!   ???
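For comparison, here is the example from the top of the thread, transcribed
by me into Haskell (a lazy language, standing in for LML) - exactly the kind
of undefined expression such a switch would stop checking for, and which
laziness may never evaluate at all:

```haskell
-- The spine of (first 2 []) is fully defined; only the element thunks
-- (head [], head (tail [])) are erroneous.  len never demands the
-- elements, so the result is 2 rather than an error.
first :: Int -> [a] -> [a]
first 0 _ = []
first n l = head l : first (n - 1) (tail l)

len :: [a] -> Int
len []       = 0
len (_ : xs) = 1 + len xs

-- len (first 2 []) evaluates to 2 in a lazy language.
```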


--
Wolfgang Grieskamp 
wg@opal.cs.tu-berlin.de 

D.A.Harrison@newcastle.ac.uk (Dave Harrison) (07/31/90)

In article <3203@osc.COM>, jgk@osc.COM (Joe Keane) writes:
|>
|>Now, in a lot of cases we can determine that an expression is stuck in an
|>infinite loop.  Usually this is defined to be bottom, but we can extend the
|>semantics so that when this is detected it raises an `infinite loop'
|>exception.
|>However, it should be clear that this is no longer bottom, and that no matter
|>how sophisticated our detection algorithm, some expressions will still be
|>bottom.
|>

Sort of related to this is an idea I found in a paper by Manfred Broy [Broy 83].
The idea is to timestamp every element in a domain. For the domain of booleans
you get (tt = true, ff = false, ! = bottom) :


              <infinity,!>
                   |
                   .
                   .
                   |
          <3,tt> <3,!> <3,ff>
               \   |   /
                \  |  /
          <2,tt> <2,!> <2,ff>
               \   |   /
                \  |  /
          <1,tt> <1,!> <1,ff>
               \   |   /
                \  |  /
                 <0,!>

(Sorry about the diagram - a bit tricky in ASCII and I'm no artist at the best
of times :-) ).

The timestamps on tt and ff denote the times at which a program works out
that the result is true or false.  The timestamps on ! denote that the
program hasn't worked out what the result is by that time.  The relevance
to this thread is that non-termination is not modelled by the weakest
element, <0,!>, but by the "highest" element, <infinity,!> (i.e. the
program doesn't know the answer even "after infinite time").

Because of the shape of the diagram above I call these things herring-bone 
domains. Among other things I've used them to give a semantics for a 
functional language for real-time programming [Harrison 88]. 
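My own reading of the construction can be written down as a small Haskell
datatype with the approximation order as a predicate.  This is a sketch only
(not Broy's or Harrison's actual definitions), and <infinity,!>, being the
limit of the spine, has no finite representative here.

```haskell
-- A timed value: either no answer yet at time t (<t,!>), or the answer
-- v produced at time t (<t,v>).
data Timed a = Unknown Integer     -- <t,!>
             | Known Integer a     -- <t,v>
             deriving (Eq, Show)

-- x `below` y: x approximates y in the herring-bone order as I read the
-- diagram: the Unknowns form an ascending spine, a value learned at
-- time t' refines ignorance at any earlier time t < t', and the Known
-- values are the maximal "ribs".
below :: Eq a => Timed a -> Timed a -> Bool
Unknown t `below` Unknown t'  = t <= t'
Unknown t `below` Known t' _  = t < t'
Known t v `below` Known t' v' = t == t' && v == v'
Known _ _ `below` Unknown _   = False
```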

If anyone is remotely interested please get in touch. 

Dave

[Broy 83]
  "Applicative Real-Time Programming"
  M. Broy
  Proceedings IFIP  1983.
  North-Holland Information Processing 83.

[Harrison 88]
  "Functional Real-Time Programming : The Language Ruth and its Sementics"
  D. Harrison
  T.R. 59, Ph.D. Thesis. University of Stirling 1988

---------------------------------

Dave Harrison				JANET: D.A.Harrison@uk.ac.newcastle
Computing Laboratory			PHONE: +44 91 222 7784
University of Newcastle upon Tyne
Newcastle upon Tyne NE1 7RU
U.K.

tom@nw.stl.stc.co.uk (Tom Thomsom) (08/08/90)

In article <10268@brazos.Rice.edu> rama@titan.rice.edu (Ramarao Kanneganti) writes:
>result b? I claim that it is an error element, and it is above b. But
>any function which is strict in bottom is also strict in error, so any
>choice will be consistent.
A strange way of putting it.
The trouble is assigning a meaning to "strict in error".
Normally "strict" means "does not transform bottom or top", and "strict in
bottom" means "does not transform bottom".  So "strict in error", by analogy,
would mean "does not transform error"; if that's what you meant, I am in
TOTAL DISagreement with you - the information in the error can be used to
derive a result.
On the other hand, maybe you meant that any program that was going to
evaluate bottom would instead evaluate the error that occurred in the place
where the bottom would otherwise have been, i.e. if you were going to hit a
bottom then you'd hit an error if that particular bottom were turned into
one; in that case I would agree with you.
 
Maybe if we are going to discuss both errors and strictness we need a definition
for phrases like "strict in error"?  Or maybe we shouldn't use such phrases?
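One way to make such phrases concrete is a sketch in Haskell (mine, not from
the thread), where errors differ from bottom precisely because a handler can
observe the error's message and derive a result from it:

```haskell
import Control.Exception

-- A non-strict function never demands its argument, so it maps both
-- bottom and error to an ordinary result.
constTrue :: a -> Bool
constTrue _ = True

-- A strict function must evaluate its argument, so an error element in
-- the argument position reaches whoever observes the result ...
strictNot :: Bool -> Bool
strictNot b = not b

-- ... and the information in that error can then be used: the handler
-- sees the message, which a true bottom could never supply.
describe :: Bool -> IO String
describe b = do
  r <- try (evaluate b)
  return (case r of
            Left (ErrorCall msg) -> "failed: " ++ msg
            Right v              -> "value: "  ++ show v)
```

Under this reading, "strict in error" would mean behaving like strictNot
(the error propagates), while describe shows a function that does
transform the error rather than merely pass it on.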