[comp.parallel] optim., specul., eager; refs

wilson@uicbert.eecs.uic.edu (11/23/88)

   (This is the first of several response summary/elaboration postings I'm
likely to post about optimistic computation & related issues.  Towards
the end it's got some basic references.
   I'm not sure if there are well-defined boundaries in this discussion,
so bear with me and feel free to post differing views.)

    Several people have pointed out to me that people in functional (and
parafunctional) programming have been working on eager evaluation and
speculative computation.  As I see it, speculative and optimistic
computation are two ways to APPLY eager evaluation.  And the optimistic
use of eager evaluation in a functional language is a degenerate case
of the optimism I'm talking about.

    Eager evaluation is the evaluation of expressions (usually in a
functional programming context) before they are called for.  Where
lazy evaluation evaluates things only as absolutely necessary, eager
evaluation evaluates them before it is known whether they need to
be evaluated.

(--  a short digression to explain lazy and eager evaluation:  --)

    Take the expression  "FOO(BAR(x),BLEEN(y))"

    In most conventional languages, BAR(x) and BLEEN(y) will be
evaluated and then FOO will be applied to their results.  In a lazy
system, FOO will begin executing and if it actually tries to use
the values from the expressions BAR(x) and BLEEN(y), they will
be evaluated as needed.  Thus if BAR(x) is used and BLEEN(y) turns
out not to be, BLEEN(y) need not be evaluated.  This allows you
to do nifty things like representing infinite lists, determining
the next element on an as-needed basis, rather than precomputing
the whole list at an infinite cost.
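   (A sketch of the lazy case in Python, just for illustration -- Python
is strict, so zero-argument lambdas stand in for lazy arguments and a
generator plays the infinite list; the function bodies are made up to
mirror the FOO/BAR/BLEEN example:)

```python
# Lazy evaluation sketch: arguments are passed as thunks and forced only
# when FOO actually uses them.  BLEEN would blow up if evaluated, but it
# never is, because FOO only touches its first argument.

def BAR(x):
    return x * 2

def BLEEN(y):
    raise RuntimeError("diverges -- should never be forced")

def FOO(bar_thunk, bleen_thunk):
    return bar_thunk() + 1        # forces BAR(x); BLEEN(y) stays a thunk

result = FOO(lambda: BAR(3), lambda: BLEEN(4))   # no error: BLEEN unevaluated

# The infinite-list trick: a generator computes the next element only on
# demand, so taking a prefix costs a finite amount of work.
def naturals():
    n = 0
    while True:                   # conceptually infinite
        yield n
        n += 1

prefix = [x for _, x in zip(range(5), naturals())]
print(result, prefix)
```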

    With eager evaluation, you may begin evaluating BAR(x) and/or
BLEEN(y) at the *same time* as FOO.  If FOO accesses a part of
the result of BAR(x) or BLEEN(y) before it's actually computed,
FOO may have to stop and wait for the result.
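   (The eager case can be sketched the same way, with Python's
concurrent.futures standing in for whatever mechanism a real system
uses; the function bodies and delays are invented:)

```python
# Eager evaluation sketch: BAR(x) and BLEEN(y) start in parallel with
# FOO, which blocks only if it touches a result before it is ready.
import concurrent.futures
import time

def BAR(x):
    time.sleep(0.05)              # pretend this is expensive
    return x * 2

def BLEEN(y):
    time.sleep(0.05)
    return y + 10

def FOO(bar_future, bleen_future):
    # .result() blocks until the eager computation has finished
    return bar_future.result() + bleen_future.result()

with concurrent.futures.ThreadPoolExecutor() as pool:
    b = pool.submit(BAR, 3)       # begin evaluating eagerly...
    g = pool.submit(BLEEN, 4)
    answer = FOO(b, g)            # ...while FOO runs

print(answer)
```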

(-- and now back to speculative and optimistic stuff --)

   As I understand it, speculative computation is the eager evaluation
of OR-parallel programs.  In general it is nondeterministic because
more than one branch of an OR may succeed and return a result, and
whichever does so first has its value used, in a winner-take-all
fashion.  For example,  "OR(A,B,C)"  may begin evaluating A, B, and
C simultaneously, and whenever one succeeds, the others are terminated.
If more than one may succeed and if the times for the computation
may vary because of vagaries of process scheduling or resource
allocation, then on some runs A may "win" and on others, B or C.
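   (A winner-take-all sketch in Python, with failure modeled as an
exception; the branch functions and their timings are made up, and
real systems would actually abort the losing processes rather than
merely cancel them:)

```python
# Speculative OR-parallelism: start all branches together, use the first
# result that succeeds, discard the rest.
import concurrent.futures
import time

def A(): time.sleep(0.20); return "A's answer"
def B(): time.sleep(0.01); return "B's answer"
def C(): time.sleep(0.20); raise RuntimeError("C fails")

def par_or(*branches):
    with concurrent.futures.ThreadPoolExecutor() as pool:
        futs = [pool.submit(br) for br in branches]
        for fut in concurrent.futures.as_completed(futs):
            try:
                result = fut.result()
            except Exception:
                continue              # this branch failed; wait for another
            for other in futs:        # a winner: terminate the losers
                other.cancel()        # (threads can't be killed; a real
            return result             #  system would abort them outright)
    raise RuntimeError("all branches failed")

winner = par_or(A, B, C)
print(winner)
```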

   Optimistic computation is different in that it is deterministic
in input-output terms.  For example, if we evaluate IF A then B else C
eagerly, B and C may be evaluated in parallel with A.  (Sometimes,
anyway.)  Whether the results of B or C are used depends only on A,
so the result will always be the same.  The actual work done may
vary, however.  Even if A always turns out to be false, on some runs B
may run to completion, and on others it won't.
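   (The same sketch works for the optimistic case -- both arms start
before the test is decided, and since there are no side effects here,
the losing arm's work is simply wasted; again the names and delay are
illustrative:)

```python
# Optimistic evaluation of "IF A then B else C": the arms run eagerly in
# parallel with the test, but which result is USED depends only on A, so
# the answer is deterministic even though the work done may vary.
import concurrent.futures
import time

def A(): time.sleep(0.05); return False
def B(): return "then-branch value"
def C(): return "else-branch value"

def optimistic_if(test, then_fn, else_fn):
    with concurrent.futures.ThreadPoolExecutor() as pool:
        then_fut = pool.submit(then_fn)   # start both arms eagerly...
        else_fut = pool.submit(else_fn)
        if test():                        # ...while the test runs
            else_fut.cancel()             # discard the losing arm
            return then_fut.result()
        then_fut.cancel()
        return else_fut.result()

chosen = optimistic_if(A, B, C)
print(chosen)                             # always the else branch: A is false
```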

   In previous postings, I used the term optimistic computation in a
somewhat more restrictive sense.  I was not really referring to eager
evaluation in a functional context, but rather to "eager" execution
in a more conventional language framework, where side-effects may
complicate the issue.  In the "IF A then B else C" example, B and
C might each have incompatible side-effects.  To execute B and C
in parallel, they must be isolated from each other's effects to
be sure that the result is deterministic (and/or equivalent to the
usual kind of serial evaluation).


----------- SOME (OLD) REFERENCES --------------

   Optimistic computation (allowing side effects) was originally
discussed (I think) in transaction-based concurrency control
by Kung and Robinson (see their article in the June 1981 TODS).
Jefferson at UCLA has a positively *beautiful* system called
Time Warp for optimistic distributed simulation.  (The basic
paper on it, called "Virtual Time", was in the July 1985 TOPLAS.)

   Morry Katz and Tom Knight both have done designs for optimistic
execution of overtly serial programs.  Knight's system was discussed
in his 1986 Lisp and Functional Programming Conference paper, and
uses specialized hardware to keep track of assumptions about data
values.

   Katz' system was software-based, and he did simulations of it.
The simulations proved rather disappointing because of a serialization
bottleneck.  Last I heard, he was redesigning it to make it more
analogous to Time Warp (it was originally closely analogous to Kung and
Robinson's system).  I don't know whether he's done much with it
lately, but the original system was described in his MS thesis from
MIT.  (June 1986, MIT EECS Dept., title "ParaTran: A Transparent,
Transaction-based Runtime Mechanism for Parallel Execution of Scheme".)

   One thing to note about the above systems is that all of them
make exactly one guess at a choice point, computing one possible
future.  This amounts to a depth-first backtracking search through
possible executions on the part of each independent process.  This
reduces the problem of isolating the side effects of each possible
future to one of "undoing" a linear sequence of states.
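   (A toy version of that "undoing a linear sequence of states" idea in
Python: the process logs the old value of every location it writes, so
a bad guess can be rolled back by replaying the log in reverse.  This
is my own illustration, not any particular system's design:)

```python
# An undo log over a shared mutable store (a plain dict here).
_MISSING = object()   # sentinel: "key did not exist before this write"

class UndoLog:
    def __init__(self, store):
        self.store = store            # the shared state being speculated on
        self.log = []                 # (key, old_value) pairs, write order

    def write(self, key, value):
        self.log.append((key, self.store.get(key, _MISSING)))
        self.store[key] = value

    def rollback(self):
        # undo in reverse order, restoring each location's prior value
        for key, old in reversed(self.log):
            if old is _MISSING:
                del self.store[key]
            else:
                self.store[key] = old
        self.log.clear()

store = {"x": 1}
txn = UndoLog(store)
txn.write("x", 99)        # speculative writes...
txn.write("y", 7)
txn.rollback()            # guess was wrong: undo them, newest first
print(store)
```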


   In a future posting, I'll post more references, some of them about
current work that people have told me about in responses.

   Also, people who are interested in the basics of eager evaluation
in a functional or parafunctional context may want to look at Halstead
and Hudak's papers in the August 1986 IEEE Computer. 

   Halstead's group at MIT is exploring explicit eager evaluation; they
use objects called "futures" which are a sort of placeholder with an
associated process.  They represent the result of a computation which may
or may not have finished yet.  They can be passed around like any other
object before the result is in, but an attempt to actually use the
result causes the accessing process to block until it's been computed.
Their system allows side effects, but doesn't provide any special
provisions for isolating them or undoing them.  It's up to the programmer
to use scoping and binding disciplines to keep things straight.
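   (Python's concurrent.futures can again stand in for these
MultiLisp-style futures, though the "touch" is explicit via .result()
rather than implicit; slow_square is invented for the example:)

```python
# Futures as first-class placeholders: they travel through ordinary data
# structures before any result is in, and only touching a result blocks.
import concurrent.futures
import time

def slow_square(n):
    time.sleep(0.05)
    return n * n

with concurrent.futures.ThreadPoolExecutor() as pool:
    placeholders = [pool.submit(slow_square, n) for n in range(4)]
    # pass the placeholders around like any other objects...
    pairs = list(zip("abcd", placeholders))
    # ...then each touch blocks until that particular result is ready
    results = {name: fut.result() for name, fut in pairs}

print(results)
```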

   Hudak and others at Yale do something called parafunctional
programming, which is functional programming with annotations that
can specify things like eager evaluation.  The semantics of the
programs are functional, with the annotations allowing the specification
of different evaluation orders, etc.

DISCLAIMER:  Any of this could be wrong, since I'm no expert in this
             area.  Feel free to bring real expertise to bear on this.


Paul R. Wilson                         
Electronic Mind Control Laboratory*
U. of Illin. at C. EECS Dept. (M/C 154)   wilson%uicbert@uxc.cso.uiuc.edu
Box 4348   Chicago,IL 60680      (* a.k.a.  Human-Computer Interaction Lab)
