[comp.parallel] response to Braner Re: Chare Kernel / Linda

kale@kale.cs.uiuc.edu (L. V. Kale') (05/10/89)

braner@tcgould.tn.cornell.edu writes:

>With Chares: parent process tells child its ID; child is programmed to
>send the return result to that destination (the parent).
>
>With Linda: parent process looks for result with a specific tag, e.g.,
>a specific string in the first element of the tuple.  Child is programmed to
>send the return result in a tuple with that tag.
>
>I don't see how Linda is restrictive there.  In fact, it is less restrictive,
> ...

Consider a divide-and-conquer application (a pretty common occurrence).
The computation is recursive, so the parent (with Linda) cannot be looking
for a single constant string. For every instance, the parent has to
make up a unique string and ask the child (as a parameter in eval?) to
use that string as the first element. This is no clearer than the parent
sending its own ID down. (Sometimes the other parameters of eval may
act as a key.) But in either case, when the child does an "out",
the system does not know, in general, which PROCESSOR to send
the tuple to. It has to go through the hashing mechanism - which I
think is overkill: using a uniform, "most general" mechanism
when the specific form of communication - one process sending data to
another - is obvious to the programmer.
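To make the unique-tag point concrete, here is a minimal Python sketch of the
Linda pattern I am describing. The "tuple space" is just a toy dict keyed by
tag (standing in for Linda's hashed tuple matching, not a real Linda runtime),
and the recursive workers run inline rather than being spawned via eval. The
names out/inp mirror Linda's operations; tree_sum and the tag-minting counter
are my own illustration.

```python
import itertools

# Toy tuple space: a dict keyed by tag. This stands in for Linda's
# hashed tuple matching; it is NOT a real Linda implementation.
_space = {}
_uid = itertools.count()

def out(tag, value):
    """Deposit a tuple (tag, value) into the space."""
    _space.setdefault(tag, []).append(value)

def inp(tag):
    """Withdraw one tuple matching tag (blocks in real Linda)."""
    return _space[tag].pop()

def tree_sum(values):
    """Recursive divide-and-conquer sum over the tuple space.
    Because instances nest, a fixed constant tag would collide
    across levels of the recursion, so each parent must mint a
    fresh unique tag and pass it to its children."""
    if len(values) == 1:
        return values[0]
    tag = ("sum", next(_uid))   # unique per parent instance
    mid = len(values) // 2
    # In real Linda these would be eval() calls spawning children;
    # the children are told the tag as a parameter.
    out(tag, tree_sum(values[:mid]))
    out(tag, tree_sum(values[mid:]))
    return inp(tag) + inp(tag)

print(tree_sum([1, 2, 3, 4, 5]))   # -> 15
```

Note that every inp(tag) here goes through the dict lookup - the stand-in for
the hashing mechanism - even though each result has exactly one known
consumer, which is the overhead the Chare approach of sending to the parent's
ID avoids.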

>All the approaches
>for global data are easier to implement on shared-memory machines, of
>course.  It's tough with distributed memory, and will always be less
>efficient than straight message-passing.  

Which is why one must design abstractions (read: language features)
in a machine-independent parallel programming language
in such a way that they can be implemented efficiently on both kinds
of machines. The Chare Kernel language, for example, is not a "straight
message passing" language. It is tough, but doable.

>But the goal is (for some of
>us, anyway) an 80%-efficient application created and maintained with
>one-third the human work as compared with the 90%-efficient optimized
>implementation...

I identify with this goal (well, percentages may vary :-) ).
That is why we are developing a machine independent parallel
programming language, not programming the raw machine.

-- Kale