[comp.parallel] Response to comments on "Linda in Context"

carriero@YALE.EDU (Nicholas Carriero) (04/27/89)

Udi Shapiro of the Weizmann Institute and Ken Kahn of Xerox PARC both
posted interesting & challenging comments in response to our recent
Linda paper in CACM.  Here are our reactions (would someone with
access post this to the CLP list?):


  Reply-To: Ken Kahn <kahn@arisia.Xerox.COM>
  Date: Thu, 13 Apr 89 16:16:06 -0700
  From: Ken Kahn <kahn@arisia.Xerox.COM>
  To: clp.x@Xerox.COM
  Subject: Linda in Context

Our basic response: Ken's central claim is that
  
  I think the biggest design philosophy difference between "us" and
  "them" is the degree to which one values UNIFORMITY.

We are great lovers of uniformity, and getting uniformity (in our
terms) is a prime impetus in the Linda project.  Prolog is a nice
language that will be around for a long time.  So are C, Scheme,
Smalltalk and (yes) Fortran, and a bunch of others.  From our point of
view, it makes no sense to force people to learn a new way to get
parallelism for every host language they are interested in.  It's a
waste of time and of conceptual resources, and it makes heterogeneous
applications harder to achieve.  That's why Linda dissects out the
parallelism stuff and leaves the rest alone.  Yes, that makes us
"non-uniform" with the base language, but we did it to get uniformity
with the rest of the world---which in our view (& of course it's a
subjective point) is more important.
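
To make "dissects out the parallelism stuff" concrete: in C-Linda the
host language is ordinary C, and the coordination is the four
tuple-space operations out, in, rd and eval.  Here is a minimal
master/worker sketch.  The constants and the squaring "work" are ours,
purely for illustration; rd (the non-destructive read) happens not to
appear; and the code wants a C-Linda compiler, not plain cc.

    #define NWORKERS 4                    /* our numbers, for illustration  */
    #define NTASKS   100

    int worker(void)
    {
        int task;

        for (;;) {                        /* a real program would stop the  */
            in("task", ? task);           /* workers with a poison-pill     */
            out("result", task * task);   /* tuple; we elide that here      */
        }
    }

    real_main()          /* the C-Linda entry point; the runtime owns main */
    {
        int i, sum = 0, partial;

        for (i = 0; i < NWORKERS; i++)
            eval("worker", worker());     /* live tuples: new processes     */
        for (i = 0; i < NTASKS; i++)
            out("task", i);               /* passive tuples: the work       */
        for (i = 0; i < NTASKS; i++) {
            in("result", ? partial);      /* withdraw any matching result   */
            sum += partial;
        }
    }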


  Essentially the
  view behind Linda is that it is fine to have a very different
  computational model for programming in the small and "in the
  medium".  I say "in the medium" because I don't see how Linda can
  scale to large-scale distributed computing (open systems) without
  making yet another shift in computational model.  It would seem
  that Linda's practicality rests upon global compiler
  optimizations.  As systems get larger and more distributed it
  seems implausible that one module which adds a tuple of a certain
  form could be compiled knowing all the other modules that read
  tuples of that form.

An attack from the CLP camp because some desired extension to Linda
will be HARD TO IMPLEMENT?

The comment re global compiler optimization is true as far as it goes,
but misses an important point.  Part of the unity of Linda is that
precisely the same communication model holds for fine grained, medium
grained and coarse grained distributed computations.  The compiler is
important for good performance in small and medium scale computations,
but is less important (fortunately) where you see it being most
problematic---in the large.  Furthermore, there are all sorts of
intermediate optimization levels to be explored in the absence of
global knowledge.

In general, claims that efficient Linda "seems implausible" in some
context are themselves no longer plausible.  Those who have been
involved in or interested in the project from the start know that the
community (a) dismissed Linda because it couldn't be implemented at
all, (b) assured us that OK, we could implement it, but not on more
than 8 nodes (the original S/Net), (c) told us all right, so it works
on more than 8 nodes, but only for toy applications, (d) admitted that
it works for real applications, but would never run on a network or a
hypercube, and so on.  These claims have ALL been falsified.  Perhaps
you can't see a good way to do Linda in the large because you don't
want to do it.  We DO want to do it.
  
  Linda shows that some advantages come from sacrificing
  uniformity.

Like uniformity.  One communication scheme supports fine, medium and
large computations.  One communication scheme is compatible with many
programming languages.  One communication scheme is compatible with
many hardware models.  One communication scheme is compatible with
many application domains.  It would have been much harder to achieve
all this if the communication scheme were bound to one particular
computation model.  And then there are the interesting possibilities
of unifying a file system with tuple space, using tuple space to
support a parallel shell/interpreter, and in fact to lay the
foundation of an entire OS.  (See our recent paper on multiple tuple
spaces. Cogent Research in Oregon has done considerable work on the
last two, btw).

  The uniformity and
  simplicity of CLP (and CCP (concurrent constraint programming))
  leads me to hope that semantically based and enabled tools will
  be built.  I hope that the simple and clean languages enable
  tools to reason about programs, to transform them, to debug them,
  to visualize them, etc.  The practicality of such tools may rest
  upon the simplicity of the language.  In principle such tools
  could be built for C-Linda, Fortran-Linda, etc. but I expect
  would be too complex to do in practice.
  
We've already built debugging and visualization tools.  And we're
working on transformation tools (though perhaps not in the sense you
are thinking of).  We probably disagree on the role tools to reason
about programs will play in near and medium term.  Bottom line: we
know "simple and clean" languages enable tools because working with a
"simple and clean" system we have built or are building them.


  What worries me in this Linda vs CLP debate is that Linda seems
  to be spreading quickly, solving real problems in the world, etc.
  while CLP's potential superiority rests upon a lot of research
  that has yet to be done.  Maybe we should move a bit more quickly...
  
Spurring research would be one of the more gratifying results of the
paper.

  
  
  From: udi@wisdom.weizmann.ac.il (Udi Shapiro)
  Newsgroups: comp.lang.prolog
  Subject: Embedding Linda and other joys of concurrent logic programming
  Message-ID: <8904141416.AA01840@dorsim.wisdom.weizmann.ac.il>
  Date: 14 Apr 89 14:16:23 GMT
  Lines: 336

We were looking forward to a response from Udi.  We've had doubts
about CP from the beginning, but have always found his papers among
the most interesting & stimulating in the whole parallelism field.
   
Our basic response to his note: as we've pointed out repeatedly (see
e.g. Aug '86 IEEE Computer), the fact that you can write Linda
implementations in other languages is both vacuous (of course you can)
and irrelevant (who cares?).  Programmers either prefer to work with
the Linda model of coordination, or they prefer CP, or whatever.  If
they prefer CP, fine.  If they prefer Linda, they need a decent Linda
implementation, and your Linda-in-CP demo is completely beside the
point---it's merely one of many ways to get Linda, and from an
efficiency standpoint it's a lousy way.  So what's the point?

Prolog is a neat and elegant language.  We're more convinced than ever
that the right way to have a neat, elegant parallel Prolog is merely
to add Linda.  Several people are looking at this prospect & we're
eager to see what they come up with.  Why get stuck with recursive
streams for IPC when there's an alternative that's not only more
powerful but simpler?  Linda and Prolog have always been a potentially
good semantic match.

   
  The dining philosophers
  -----------------------

  The need for assigning unique identifiers to philosophers can be
  eliminated if the language is strengthened with read-only variables or
  with a test-and-set primitive.  We consider the second extension: in
  FCP(^,!), an !-annotated term in the clause head denotes an output
  assignment of the term to a variable in the corresponding position in
  the goal. The assignment fails if the corresponding position is a
  non-variable term. It can be mixed (but not nested) with
  ^-annotations, and all assignments and unifications should be done
  atomically. In FCP(^,!), an anonymous dining philosopher can be
  specified by:

Wait a minute, what happened to simplicity?  Does it make sense that
you need to deal with "extensions" (primitive low-level ones) to deal
with as trivial a problem as dining philosophers?  Doesn't it make the
CLP camp uncomfortable to respond to the non-CLP world in terms of
"OK, so the previous version can't handle this the way we'd like, but
with these great new extensions..."?

Basically, though, our reaction to your dining-philosophers example in super-duper
extended FCP is, this is more like it!  From our humble low-level
perspective, the typical CLP program isn't long and rambling, it's
short and impenetrable.  We'd argue that your new version is about the
same length as the Linda version, but that the Linda version is
perhaps an order of magnitude clearer.  This is strictly subjective.
But we're happy to let potential programmers judge for themselves.
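
For the record, the Linda version we have in mind runs along these
lines.  think() and eat() are application stubs, and the Num-1 "room
ticket" tuples are what keep the philosophers from deadlocking:

    #define Num 5

    void think(void) { /* application code goes here */ }
    void eat(void)   { /* application code goes here */ }

    phil(int i)
    {
        while (1) {
            think();
            in("room ticket");             /* at most Num-1 are seated, so  */
            in("chopstick", i);            /* the circular wait that causes */
            in("chopstick", (i+1) % Num);  /* deadlock can never close      */
            eat();
            out("chopstick", i);
            out("chopstick", (i+1) % Num);
            out("room ticket");
        }
    }

    initialize()
    {
        int i;

        for (i = 0; i < Num; i++) {
            out("chopstick", i);
            eval("phil", phil(i));         /* spawn philosopher i           */
            if (i < Num - 1)
                out("room ticket");        /* Num-1 tickets in all          */
        }
    }

Numbered philosophers and all, the whole protocol is visible at a
glance; readers can make the comparison themselves.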


  Embedding Linda
  ---------------
  ...   
  These trivial embeddings of Linda in FCP(^) and in FCP(^,!) are
  further evidence of the generality and versatility of concurrent
  logic programming.

We would be surprised if you *couldn't* embed Linda in FCP(^,!).  Your
code certainly is a trivial embedding, and really says nothing more
than that you can implement Linda by having "out" append a tuple to the end
of a shared list and having "in" search the list.  Now if you had a
decent implementation of FCP-Linda (and there are groups thinking
about variants of Prolog-Linda) you would have something---something
you could use to free yourself from the burden of casting every
problem into the mold of a stream of shared logic variables.
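
To underline how little the embedding says, here is the same trick in
plain, sequential C.  All the names are ours, and nothing here blocks,
locks, matches on anything but the tag, or distributes tuples across
machines, which is exactly where the real implementation work lies:

    #include <stdlib.h>
    #include <string.h>

    /* A "tuple" here is just a tag plus one integer field; real tuples
       are typed and variadic, but that changes nothing essential. */
    struct tuple { const char *tag; int val; struct tuple *next; };

    static struct tuple *space;  /* the shared list; a real one is locked */

    void out_tuple(const char *tag, int val)   /* "out": add to the list  */
    {
        struct tuple *t = malloc(sizeof *t);
        t->tag = tag;
        t->val = val;
        t->next = space;          /* we push at the front; for matching,  */
        space = t;                /* the order of the list is immaterial  */
    }

    int in_tuple(const char *tag, int *val)    /* "in": search and remove */
    {
        struct tuple **p;

        for (p = &space; *p; p = &(*p)->next)
            if (strcmp((*p)->tag, tag) == 0) {
                struct tuple *t = *p;
                *val = t->val;
                *p = t->next;
                free(t);
                return 1;      /* a real "in" blocks rather than failing  */
            }
        return 0;
    }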

  The DNA example
  ---------------
   
  The program is more efficient than the other two in two respects.
  First, it avoids recomputation of the auxiliary functions p and q
  (although some future Crystal compiler may automatically transform the
  functional program into a more intricate functional program that
  avoids the recomputation).   Second, it specifies a regular process
  structure with local communication only.  This structure can be
  mapped --- manually, using Turtle programs, or automatically --- to a
  fine-grain concurrent multicomputer in such a way that nearby processes
  reside on nearby processors.  Efficiently mapping the other two
  programs, which are not specified by process networks and have
  non-local communication, is more difficult.
   
We don't understand the claims of greater efficiency.  First, all the
fragments can avoid the recomputation by the same transformation,
i.e., compute h' from the triple h, p, q.  (It is interesting to note
that there is actually a common variant of the program that does
precisely this.  The transformation assumes a particular constraint
on the insertion/deletion penalties that was not present in the
original algorithm; with a general penalty function, the original
algorithm may genuinely require the recomputation.  The code in the
CACM paper follows the structure of the general algorithm, but with a
penalty function that would permit the variant.)  With such a
transformation the second point is also moot, no?  What does
this have to do with FCP vs Linda vs FP?  It's an algorithms issue
with the same implications for FCP, Linda, Cobol and anything else.
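
For the curious, here is the transformation we mean, sketched in C
under the assumption of affine insertion/deletion penalties (a gap of
length k costs open + k*extend, which is the kind of constraint
mentioned above).  Every name is ours, for illustration; this is not
the code from the paper:

    #include <limits.h>
    #include <string.h>

    #define NEG_INF (INT_MIN / 2)   /* "minus infinity", overflow-safe */
    #define MAX2(a,b)   ((a) > (b) ? (a) : (b))
    #define MAX3(a,b,c) MAX2(MAX2((a),(b)),(c))

    /* h[i][j]: best score of a[0..i) vs b[0..j); p: best score ending
       in a gap in b; q: best score ending in a gap in a.  Carrying the
       triple means p and q are never recomputed. */
    int score(const char *a, const char *b,
              int match, int mismatch, int open, int extend)
    {
        int m = strlen(a), n = strlen(b), i, j;
        int h[m+1][n+1], p[m+1][n+1], q[m+1][n+1];  /* VLAs, for brevity */

        h[0][0] = 0;
        p[0][0] = q[0][0] = NEG_INF;
        for (i = 1; i <= m; i++) {
            h[i][0] = p[i][0] = -(open + i*extend);
            q[i][0] = NEG_INF;
        }
        for (j = 1; j <= n; j++) {
            h[0][j] = q[0][j] = -(open + j*extend);
            p[0][j] = NEG_INF;
        }
        for (i = 1; i <= m; i++)
            for (j = 1; j <= n; j++) {
                p[i][j] = MAX2(h[i-1][j] - (open + extend),
                               p[i-1][j] - extend);
                q[i][j] = MAX2(h[i][j-1] - (open + extend),
                               q[i][j-1] - extend);
                h[i][j] = MAX3(h[i-1][j-1]
                                 + (a[i-1] == b[j-1] ? match : -mismatch),
                               p[i][j], q[i][j]);
            }
        return h[m][n];
    }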

  Conclusions
  -----------

  However, the time
  in which the loss in performance in using concurrent logic languages
  is small enough to be easily compensated by the gain they offer
  in expressiveness has yet to come.

It's nice to close with a point we can all agree on!