[comp.parallel] Heretics and Infidels

zenith@ensmp.fr (Steven Ericsson-Zenith) (01/05/91)

A number of people, I know, are awaiting this posting, which is
essentially a reply to the statements made by David Gelernter in a
posting here on the 19th of December. I know that a number of points
have been raised by others in response to my original; they are not
ignored - primarily, most were (I believe) misunderstandings due to the
brevity with which, in this forum, a case must be made. In many respects
this reply suffers the same inadequacy even though it is LONG.  If
things remain unclear after this post, perhaps we could have a
discussion offline and I will summarize for the list later.

An original questioner (hshi@maytag.waterloo.edu) wrote to ask (I
paraphrase): "is there some parallel programming model between shared
variables and message passing, and if there is such a model, would it
be more suitable for present computer architectures?"

Paolo Ciancarini wrote (Dec 13th) in reply that he believed Linda's
Generative Communication through tuple space constituted a further
"different model".

I disagreed with Paolo's claim for Linda's distinction by noting
fundamental similarities between all three paradigms where: "... the
persistent nature of the intermediary objects [used for data sharing
between processes] <is a> significant distinguishing characteristic."

Thus I generalized by encompassing shared variables, message passing
objects (channels, mailboxes), and Linda's tuples as components which
distinguish "variants of the same model". Again, these components
express how *processes* share data.

A *process*, for the purposes of this discussion, is a sequence whose
interaction with other *processes* is completely described by some
"Process Interaction Model". A "Process Interaction Model", in addition
to the aforementioned "intermediary objects" includes whatever other
mechanism you may have for *process synchronization*. This mechanism
includes the synchronization characteristics of the operations (e.g.
"in", "rd" etc.  in Linda) acting upon the mentioned "intermediary
objects" and any explicit constructions for process synchronization
(e.g. || in CSP), or other constructions affecting the synchronization
characteristics of operations (such as choice in CSP).
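
To make the distinction concrete, here is a minimal sketch - my own
illustration, not any particular Linda implementation - of an
"intermediary object" whose operations carry the synchronization
characteristics themselves: out is asynchronous, while in and rd block
until a matching tuple exists.

```python
import threading

class TupleSpace:
    """A minimal, illustrative tuple space (not Yale C-Linda)."""
    def __init__(self):
        self._tuples = []
        self._cv = threading.Condition()

    def out(self, tup):
        # 'out' is asynchronous: deposit a tuple and continue.
        with self._cv:
            self._tuples.append(tup)
            self._cv.notify_all()

    def _match(self, pattern):
        # None in a pattern position acts as a wildcard ("formal").
        for t in self._tuples:
            if len(t) == len(pattern) and all(
                    p is None or p == v for p, v in zip(pattern, t)):
                return t
        return None

    def in_(self, pattern):
        # 'in' blocks until a match exists, then removes it: the
        # synchronization is a property of the operation itself.
        with self._cv:
            while (t := self._match(pattern)) is None:
                self._cv.wait()
            self._tuples.remove(t)
            return t

    def rd(self, pattern):
        # 'rd' blocks like 'in' but leaves the tuple in place.
        with self._cv:
            while (t := self._match(pattern)) is None:
                self._cv.wait()
            return t
```

A process executing ts.in_(("count", None)) simply suspends until some
other process executes ts.out(("count", 1)) - no explicit
synchronization construct is needed.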

I then said: "It would seem desirable to have all process interactions
defined by the associated 'interaction model'", and pointed out that "as
things stand in Linda, processes are able to side-effect each other
directly, i.e. outside of the tuple space model -- at least this is the
case for the Yale C-Linda model. The process model in Linda is vague to
say the least, with the semantics of process creation varying according
to implementation."

This prompted David Gelernter to say:

   False.  The semantics of process creation are described in the "How
   to" book, in exactly the same way they've been described in the
   literature for years.  Zenith is correct in stating that there are
   Linda implementations (not ours) that don't follow our version of the
   semantics.  But that's not to say that the definition doesn't exist,
   is it?

Well, I'm very familiar with this literature and have, in a small way,
contributed to it. I, and I suspect many others, haven't yet read
David's book, since it was only published in the past few months (but it
was worth the plug anyway Dave :-) ). So I do not speak with regard to
that particular publication.

My observation is this: the fact that (I hesitate to say "all") other
implementations do not follow the version of the "definition" in the
literature is testimony to the vagueness of the "definition".

Not only that: let me quote from the literature in which the semantics
of process creation have supposedly been described "in exactly the same
way" "for years".

First let us consider Nick Carriero's thesis "The implementation of
Tuple Space machines" published in December 1987, in which (on page 4)
he says:

  "There are significant unanswered questions about *eval*. Some of the
  most important concern variable bindings in [a process]. Can [a
  process] have unbound variables? If it can, when will these variables
  be bound? At the time *eval* executes or at the time [the process]
  executes?"
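
Carriero's binding question can be made concrete in any language with
closures. The following sketch is my own illustration (in Python, for
brevity; Linda itself is hosted in C) of the two possible answers he
leaves open:

```python
x = 1

def make_eval_time():
    # Bind the free variable when "eval" executes: capture the
    # current value of x (a call-by-value reading).
    captured = x
    return lambda: captured

def make_exec_time():
    # Defer the lookup of x until the created process actually runs.
    return lambda: x

at_eval = make_eval_time()
at_exec = make_exec_time()
x = 2   # the creating environment changes after "eval"

print(at_eval())  # 1: x was bound when eval executed
print(at_exec())  # 2: x was bound when the process executed
```

The two readings give different answers as soon as the creating
environment changes between process creation and process execution -
which is exactly the gap the thesis identifies.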

And in the most comprehensive and least dogmatic work of its kind on
Linda I've seen to date, Jerry Leichter's thesis, published in July
1989, devotes a section to "The problem of eval". In it he says (pages
27-28):

  "However, producing an exact specification for [eval] that is both
  natural within the C environment, and implementable with reasonable
  efficiency, is difficult,"

He then, in subsections, deals with the issue of closures and proposes a
semantics for eval (page 31). The rules are too long to repeat here,
but to summarize: they attempt to guarantee that the created processes
cannot cause side effects, with clear statements about the treatment of
pointers. He at least makes statements like:

  "In no case is any information passed back from the execution
  environment to the creating environment, except by explicit tuple
  space operations."

I take this to mean that free variables cannot be changed by any of the
many things (including pointers) C uses to change the value of a
variable.
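
Leichter's rule can be caricatured in a few lines: if the arguments to
the created process are deep-copied at creation time, no mutation
inside the process can reach the creating environment. This is a
Python sketch of the principle only - C, with raw pointers, is
precisely where the guarantee is hard to deliver:

```python
import copy
import threading

def eval_like(fn, *args):
    """Spawn a process-like thread whose arguments are deep-copied,
    so mutation inside it cannot leak back to the creator
    (an illustration of Leichter's rule, not his implementation)."""
    frozen = copy.deepcopy(args)
    t = threading.Thread(target=fn, args=frozen)
    t.start()
    return t

buf = [0, 0, 0]

def worker(b):
    b[0] = 99          # mutates only the worker's private copy

eval_like(worker, buf).join()
print(buf)   # [0, 0, 0]: the creating environment is untouched
```

Any information the worker wants to return must travel by an explicit
tuple space operation, exactly as Leichter's statement requires.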

Leichter later states when discussing "Alternative approaches to eval",
(page 279) that:

  "The eval operation does not fit comfortably into C, nor into most
  sequential languages."

In a much more recent and advanced piece of work "Semantics and Analysis
of First Class Tuple Spaces" published April 1990, Suresh Jagannathan
expertly attempts to shore up the walls of uncertainty with some
formalism (at last!). I quote here his informal statement of eval
semantics:

  "[eval] deposits a tuple consisting of [some number of] processes into
  [tuple space]; the ith process is responsible for computing the value
  of [the respective component of the tuple]. Each process evaluates
  within its own private environment. To achieve this, all functions
  applied within eval are transformed into closed combinators that
  reference no free variables."
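
The "closed combinator" transformation Jagannathan describes is
classical closure conversion (lambda lifting): each free variable
becomes an explicit parameter, so the function's environment is
packaged alongside the code and can be shipped to another process. A
hypothetical sketch, my names throughout:

```python
from functools import partial

def make_scaler(k):
    # 'scale' is open: it references the free variable k.
    def scale(x):
        return k * x
    return scale

# After closure conversion: 'scale_closed' references no free
# variables; its former environment (k) is an explicit argument.
def scale_closed(k, x):
    return k * x

# The environment travels explicitly with the combinator.
scale3 = partial(scale_closed, 3)

print(make_scaler(3)(7), scale3(7))  # 21 21
```

The two forms compute the same result, but only the closed combinator
has a meaning independent of the creating environment - which is what a
private per-process environment demands.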

In fact, if we return to the original appearance of eval in the IEEE
paper "Parallel programming in Linda", published in 1985, on which both
Gelernter and Carriero are credited authors, we find the following
statement regarding eval:

  "The process that is implicitly created to do the evaluating has
  available to it the value of each name appearing in [the tuple] as
  that name was defined in the caller's environment when eval was
  executed. (If such a name is a function or procedure name, its value
  is an executable expression). This protocol amounts to an "unevaluated" 
  variant of call-by-value.  Note that it is an error to invoke, within
  an eval tuple, a function whose body refers to non-local variables -
  their values won't be available to the process that does the
  evaluation."

Indeed, I am aware that Suresh used this statement as a basis for his
work - as should be apparent.

It has also been pointed out to me that: "there is .. a research report
by Jim Narem [a member of the Linda Group] that attempts to provide an
operational semantics of existing implementations; it described three
different Yale eval implementations, each with very different
semantics." I regret that I do not have a copy of this piece of the
literature and do not recall it.

There are several other works which I've failed to mention.

So, to summarize my observation of the literature: the first *real*
definition that I find is that produced by Suresh Jagannathan in April
of this past year, which superficially concurs with an earlier informal
statement by Gelernter. However, the statement in the 1985 paper is in
the context of the following statement from the paper:

  "Linda is essentially identical to C."

In that case the treatment of pointers is left to a now uncertain
statement about names, whereas Suresh's formal description explicitly
excludes names which are "addresses".

Much of the "vagueness" to which I referred really results from
comments in Carriero's thesis - which is unfortunate, since it is one of
the most influential documents on the implementation of Linda.  The
adopted Yale C-Linda product is based on this work. The real problem,
however, concerns how names that are pointers are treated (this won't
come as much of a surprise) - only Leichter's thesis, cited earlier,
confronts this problem.

I can hear some cries now - wait a minute! You're confusing
implementation with the definition! Indeed, that is what has happened.
In the face of only informal definition and contradictory "definitive"
implementations, "vagueness" and uncertainty abound. It should be no
surprise that when others attempted to apply the model to other host
languages, the semantics of process creation were by necessity
different - and thus the confusion continues.

(continued in part 2)
--
Steven Ericsson Zenith * email: zenith@ensmp.fr *   fax: (1).64.69.47.09
                       |                        | voice: (1).64.69.48.52
	     Ecole Nationale Superieure des Mines de Paris
	  35 Rue Saint-Honore 77305 Fontainebleau Cedex France
    "All see beauty as beauty only because they see ugliness" LaoTzu

zenith@ensmp.fr (Steven Ericsson-Zenith) (01/07/91)

(continued from part 1)

Indeed, when I earlier pointed to the problem, Paolo Ciancarini, who is
currently a visitor to the Linda group and no doubt as familiar with
this literature as I am, wrote:

 "I am not aware of the implementations you are speaking about.
  In the pure Linda model, that is our argument of discussion,
  agents do not share variables. Period.
  (I agree that side effects are undesirable; this is not an issue)."

Paolo shares an idealized notion of "Pure Linda" not dissimilar to the
one I brought, by assumption, with me when I first joined the same group
at Yale.

Indeed, return to Carriero's thesis and look in the conclusion, where,
discussing various alternative approaches (page 86), he says:

  "But just as it is the case that while many elementary kinematics
  problems are more easily analyzed if we disregard friction, it would not
  do to outlaw friction; so too is it the case that while many elementary
  programs can be more easily analyzed by eliminating side effects, it
  will not do to outlaw side effects. Side effects are simply, and
  necessarily, a fact of life."

At the root of this statement is a cultural dogma prevalent in the USA,
just as Paolo's statement and my own idealization against side effects
are founded in an equally powerful European dogma. Intuitively I prefer
the more formal and disciplined approach (yet I see no clear evidence to
resolve this particular issue of side effects) - however, I hate dogma!
Wherever it is found it stifles creativity and innovation.

In fact, it was the study of the implementations at Yale and
consideration of the pragmatic problems they presented, alongside a
study of the more formal semantic problems in the LINDA model which led
me to the invention of "Ease".  (It is only fair to point out that my
earlier involvement in the definition and application of Occam led me
there too). That's what research is all about!

I'm sure to modify my view as we gain a deeper understanding of the
problems. Yet I have a case to make.

My thesis, however (that Ease is an easier and more efficient
programming model than these other models), is demonstrable, and its
degree of validity can be shown by the following test. Take any
non-trivial explicitly parallel code, be it hand generated or
automatically generated, using either message passing or Linda, and
translate the program into an equivalent program written using the Ease
model. If the thesis is good then in both cases the translation will be
trivial (providing the process model of the original program is well
behaved :-) and the resulting program will display improved performance
characteristics. What is more, that improvement will in many cases be
significant - especially where the data is a non-trivial structure
(e.g. a list or array) and is exchanged (acted upon) often.

In the worst cases (where the existing code is closely mapped to the
machine, and data exchange is trivial) the performance will be no slower
than the original.

The performance improvements will be found when compiled for shared
memory OR distributed memory machines OR some combination of both
architectures - in all cases when scaled from 1 (a uniprocessor) to N
(where N is some number of nodes understood to be the program's
optimum), and compared with the equivalent original.

Now there is a test to sink your teeth into! And not only applicable to
Ease. This is what the Ease Project here is doing right now.  We are
looking at the Ease model and we expect soon to be in a position to make
these tests and produce the results. Whatever the result - with any luck
our work will show us a better way to do it - and I encourage productive
dissent.

It's interesting to note that an old friend from the European message
passing community who I met again recently said "Essentially what you've
done in Ease is clean up message passing.". It's interesting because
many of my American friends think I've cleaned up Linda!

The Ease model is inherently more efficient. Its existence does not
undermine the fine work on whose shoulders it climbs. The model uses CSP
as its mathematical basis, and inherits many of its ideas from the less
formally defined LINDA. Both good work. CSP remains a useful
mathematical tool (now one of a number). LINDA was an interesting
exercise - it's time to move on.

If programming models are to evolve to achieve the effective
exploitation of the coming machines in the field of High Performance
Computing then we cannot hang around to make a commercial success of one
model or the other. We must be innovative and bold. We must be prepared
to learn from what we have done before, give it up and move on.

(continued in part 3, where we finally get to the issue of real time)

zenith@ensmp.fr (Steven Ericsson-Zenith) (01/07/91)

(continued from part 2)

In a message to comp.parallel on December 19th David Gelernter writes:

   Steven Zenith writes

     For example, in the case of the Linda optimizer, write a Linda
     program or Linda-ize an existing C code. You cannot predict the
     performance of the program, and you will not know the performance
     until the task is complete - any empirical analysis you perform at
     any point in the development will likely be invalidated by
     subsequent changes to the program since the optimizer will almost
     certainly change strategy as you add new tuple types. Ok, so you
     learn everything there is to know about the optimizer - now you
     have subverted portability. Most "successful" Linda programs are
     either written within a few feet of the implementor of the
     optimizer or are so parallel and have such granularity that the
     Linda overhead disappears into insignificance and in such cases
     pretty much any model would do...

   Two claims are made here: (a) Linda is unpredictable... the inference
   being, I assume, that it isn't suitable for realtime applications.
   (b) You have to understand the optimizer and subvert portability in
   order to get decent performance from Linda programs.

   Regarding (a), realtime is a difficult area.  The only substantial
   experiment I'm aware of involving realtime use of Linda is Mike
   Factor's work on an intensive care unit monitor.  He developed a
   performance model (for use by a heuristic scheduler) that predicted
   the runtime of his Linda application within a couple of percent.
   Factor's work has been described at length in the literature.  We've
   never made any blanket claims for Linda in realtime situations; on
   the basis of Factor's work, Linda is highly promising in this area,
   but that's our only claim for now.

I know this work too. Mike Factor's trellis ideas are interesting. They
are by no means dependent upon Linda for implementation. In fact, Mike
Factor's thesis far from showing Linda as "highly promising" provides
damning evidence to the contrary.

First of all, the category of real-time dealt with by Mike's thesis
(published December 1990) "The Process Trellis Software Architecture for
Parallel, Real-Time Monitors", is novel. It deals with a "soft
worst-case guarantee" where (page 5):

  "... given the worst-case scenario of module invocations (i.e., most
  time consuming), the program will on average meet its deadline."

Having chosen this novel category he considers three possible
implementations (pages 49-51):

  "Full-Scale Processes: The obvious implementation of the trellis is to
  let each trellis process be a full-blown language or operating system
  process. For instance, in a Linda implementation the process would be 
  created by eval, in a Unix implementation by fork. ..."

This is dismissed because it reduces "efficiency and predictability", as
indeed it does.

  "It reduces efficiency because information at compile time that would
  be useful in scheduling is withheld from the program which performs
  the scheduling. ..."

  "It reduces predictability because a system other than the run-time
  kernel decides when a process gets CPU time. ..."

Indeed, this will come as no surprise to those with any experience in
real-time programming. It is surprising that Mike didn't junk the idea
of using Linda, or indeed any conventional UNIX platform, at this point.
But such things are very often dependent upon who your supervisor is :-)

Mike then considers:

  "Dynamic-Worker: The dynamic worker kernel controls in a dynamic,
  distributed fashion when and where each process executes. ... 
  These workers ... are full-scale Unix-type processes..."

And he implemented this one but found (page 51):

  "the dynamic-worker kernel is inappropriate, even though it controls
  when and where processes execute. Its control is dynamic, and thus,
  the kernel is less efficient and predictable than it could be."

So finally:

  "The Static-Worker Run-Time Kernel: ... This kernel controls when and
  where processes execute in a static fashion. ... It is highly efficient
  accessing Linda's tuple space only when necessary. ..."

He overcame the problem by using Tuple Space as little as possible! Even
so, the process scheduler is written using Linda.

To return to Gelernter's comments:

   Statement (b) is false.  The specific claim about 'most "successful"
   Linda applications' is also false.  "How to Write Parallel Programs"
   by Carriero and Gelernter (MIT Press 1990) presents 7 sets of
   performance results for a range of programs on a range of parallel
   machines (both shared and distributed memory).  (Look up "performance"
   in the index.)  These applications were developed with no regard for
   the optimizer whatever.  In fact, the book describes exactly how they
   WERE developed: on the basis of a simple and Linda-independent
   performance model.  We've reported similar performance results in the
   literature going back to 1986.

   (Zenith's statements ARE true to the vacuous extent that any system
   with an optimizer permits programmers who understand the optimizer to
   allow for its behavior in coding.  But the specific claim that you
   *must* do this to get good performance using Linda is false.)

Ah, that word "false" again - "true to a vacuous extent"?! Remember, I
said, "You cannot predict the performance of the program, and you will
not know the performance until the task is complete. ..." and that
"Most "successful" Linda programs are either written within a few feet
of the implementor of the optimizer ...". I did not make the "specific
claim that you *must* do this to get good performance using Linda"; my
observation is stronger than that - the effect is subliminal even when
performance is less of an issue: a programmer *will* do this!

Let's return to Mike Factor's thesis, where in justifying "Why Linda"
(page 169) he says:

  "There is, however, one problem in predicting the execution time of a
  Linda program; Linda does not constrain the cost of accessing
  shared-memory; the time to complete a Linda operation is, in
  principle, unbounded. But a necessary condition for predicting a
  program's execution time is that it execute only time-bounded
  operations. Fortunately, the tuple space operations that occur in the
  static-worker kernel are predictable in practice given the Linda
  implementation; THIS WOULD HAVE BEEN EXTREMELY DIFFICULT TO VERIFY
  WITHOUT THE EXPERTISE OF LINDA'S IMPLEMENTORS, [the caps are mine] .."

Further, in July 1988, Ken Musgrave (a member of Mandelbrot's group at
Yale) published a paper entitled "Grid Tracing: Fast Ray Tracing for
Height Fields". In it he says:

  "Although the incipient scanline tasks are stored in a theoretically
  unordered "tuple space" in the Linda system, the implementation of
  this space is of course[?] an ordered list, and this virtual ordering
  in practice can be exploited to assure that the tasks will be executed
  by the "worker" processes in roughly the same order as the task
  "tuples" were output into the tuple space ... the image can still be
  expected to [be] rendered in an orderly fashion, such as
  top-to-bottom."

Wanna bet? Not if you run it on anything else it can't!  Ken also
acknowledges "the invaluable assistance" of the implementors.
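
The hazard is easy to state in code: the model defines tuple space as a
bag, so an implementation is free to return matching tuples in any
order. A sketch of my own (not Yale's implementation) of a perfectly
legal alternative:

```python
import random

# Tuple space is a *bag*: 'in' may return any matching tuple.
# Code that assumes list order is relying on one implementation.
tasks = [("scanline", y) for y in range(5)]

def in_any(bag):
    # A perfectly legal implementation: nondeterministic choice.
    t = random.choice(bag)
    bag.remove(t)
    return t

drained = [in_any(tasks)[1] for _ in range(5)]
# 'drained' is some permutation of 0..4; the top-to-bottom order
# (0, 1, 2, 3, 4) is not guaranteed by the model.
```

Run against the Yale implementation's ordered list, the scanlines come
back top-to-bottom; run against this one, they come back in any order
at all - and both are valid tuple spaces.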

As I mentioned before, I have not yet seen David's book and so cannot
comment. Perhaps MIT Press will send me a review copy? :-)

Steven
PS. This message ends my reply.