[comp.arch] parallel computing

henry@utzoo.UUCP (Henry Spencer) (07/10/87)

> ... Future programs, written by future
> programmers familiar with parallel algorithms and hardware are
> likely to write programs that use parallelism...

But *which* *flavor* of parallelism?  And how are they going to do this
when the languages don't support it in a standard way?  And how are the
languages going to support it in a standard way when the current machines
are all so different from each other that there's practically no common
ground?  Parallelism is never going to get really popular in programming
until there is some vaguely portable way of expressing it, or some utterly
overwhelming consensus on the preferred architecture of parallel machines.
I am not holding my breath.
-- 
Mars must wait -- we have un-         Henry Spencer @ U of Toronto Zoology
finished business on the Moon.     {allegra,ihnp4,decvax,pyramid}!utzoo!henry

dinucci@ogcvax.UUCP (David C. DiNucci) (07/12/87)

In article <utzoo.8283> henry@utzoo.UUCP (Henry Spencer) writes:
>But *which* *flavor* of parallelism?  And how are they going to do this
>when the languages don't support it in a standard way?  And how are the
>languages going to support it in a standard way when the current machines
>are all so different from each other that there's practically no common
>ground?  Parallelism is never going to get really popular in programming
>until there is some vaguely portable way of expressing it, or some utterly
>overwhelming consensus on the preferred architecture of parallel machines.
>I am not holding my breath.

Some ideas on this subject are starting to emerge.  I refer you to a
book just being published by MIT Press called "The Characteristics of
Parallel Algorithms", where a number of them are discussed by their
proponents -- Lusk and Overbeek's monitors, Harry Jordan's "The Force",
and Robert Babb's Large Grain Dataflow (LGDF), to name a few.  I
believe Lusk and Overbeek are writing another book on their approach
with a number of other people.  With luck, a paper by Babb and me on
the new version of LGDF will be presented at the HICSS-21 conference
in Hawaii this winter.

There are lots of similarities in these models.  Almost all use some
common sequential language as a base, then supplement it (usually
with macros) with new constructs which are implemented in a machine-
specific way.  Each also tends to have one flavor for shared-memory
machines and another flavor (or perhaps no flavor) for
distributed-memory machines.
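
To give the flavor of the macro approach, here is my own sketch in C.
The macro names are hypothetical, not the actual Lusk/Overbeek
macros; the point is that the program is written once against the
macros, and only the expansions know the machine:

    /* Sketch only -- hypothetical macro names.  On a shared-memory
     * machine the expansion might be a test-and-set spinlock, as
     * here; a distributed-memory machine would expand these quite
     * differently. */

    typedef volatile int monitor_t;

    #define MONITOR_INIT(m)   ((m) = 0)
    #define MONITOR_ENTER(m)  \
        do { while (__sync_lock_test_and_set(&(m), 1)) ; } while (0)
    #define MONITOR_EXIT(m)   __sync_lock_release(&(m))

    monitor_t queue_lock;
    int queue[100], queue_len;

    void add_work(int item)           /* callable from any process */
    {
        MONITOR_ENTER(queue_lock);
        queue[queue_len++] = item;    /* the protected region */
        MONITOR_EXIT(queue_lock);
    }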

LGDF, which I am most familiar with, is becoming what you might call a
very-high-level graphical language.  That is, an LGDF program IS a
network, and the nodes in it, which would be analogous to
statements in other languages, are actually sequential processes
in LGDF.  These processes can be written in any supported high-level
language (currently C and Fortran), augmented by two "special"
statements (macro calls): "grant" and "receive".
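
In rough outline (simplified, not the exact macro syntax), a node
looks something like this:

    /* Simplified sketch.  A node is an ordinary sequential routine;
     * "receive" waits until the named datapath is readable here, and
     * "grant" releases it to the next node.  The empty macros just
     * let the sketch compile; a real expansion would be shared-memory
     * or message-passing code for the target machine. */

    struct datapath { double buf[1024]; int n; };

    #define RECEIVE(dp)  /* stub: block until 'dp' is granted to us   */
    #define GRANT(dp)    /* stub: make 'dp' readable by the next node */

    struct datapath raw;

    void smooth_node(void)          /* one node in the LGDF network */
    {
        int i;
        RECEIVE(raw);               /* wait for upstream data */
        for (i = 0; i < raw.n; i++)
            raw.buf[i] *= 0.5;      /* the sequential body */
        GRANT(raw);                 /* pass the buffer downstream */
    }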

The idea here is to let the high-level language (C or Fortran)
abstract away the low-level machine-specific details of the sequential
instruction set of the machine (as they have always done), while the
LGDF language abstracts away the low-level machine-specific details
of the inter-process interactions (including operating-system interface,
memory hierarchy, and interconnection structure).  The new version of
LGDF does this well enough that the same (source) program should run
well on shared- and distributed-memory machines.  We have implemented
it on the Sequent with good results.  With luck, an implementation
on the Intel hypercube will follow soon.

There are other ways to go, such as writing programs in declarative
or dataflow languages or CSP, in which sequential execution is not
implied by statement ordering but is instead imposed by other
constructs, and only where the programmer or compiler deems it
absolutely necessary.  Mastering these
new techniques will not happen overnight.  Another way, of
course, is to hope that vectorizing and parallelizing compilers will
come to the rescue.  All of these approaches have their drawbacks and
advantages.
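
To see what "ordering by dependence rather than by text" buys,
consider a fragment like this (my illustration only):

    /* In a dataflow reading, textual order does not imply execution
     * order -- only data dependences do.  A dataflow or declarative
     * language makes this explicit; a parallelizing compiler has to
     * rediscover it from code like this. */

    double f(double x) { return x * x; }
    double g(double y) { return y + 1.0; }

    double example(double x, double y)
    {
        double a = f(x);   /* no dependence on the next line...     */
        double b = g(y);   /* ...so these two could run in parallel */
        return a + b;      /* this must wait for both a and b       */
    }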

In any case, the software development cycle, especially debugging,
should not be forgotten when considering a solution to this problem.
If one were watching the execution of a parallel program using a
debugger, would its actions ever surprise the user or be hard to follow,
even if the user had a source listing or other tools handy?  Is it
even feasible to write a debugger?  My own view is that too many people
are offering new parallel constructs and models without considering
these issues.

Dave DiNucci    dinucci@Oregon-Grad     ..!sequent!ogcvax!dinucci

Disclaimer:  I co-authored the two works on LGDF cited here, but will
  not benefit financially from either one (to the best of my knowledge!)

baden@ucbarpa.Berkeley.EDU (Scott B. Baden) (07/13/87)

I also agree with David DiNucci: help *is* coming.
I just filed my dissertation this spring;
the subject was a programming discipline that can
help the programmer write somewhat portable software.
My approach is to provide a virtual machine -- an abstract
local-memory multiprocessor -- and some simple VM operations
whose semantics are insensitive both to the application
and to various aspects of the underlying system running
the VM (e.g., whether or not memory is shared).
The approach isn't universal, but I believe
that it can make multiprocessors more attractive than
they have been in the past for many interesting problems
in mathematical physics and engineering.
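Roughly, the kind of interface I mean (a simplified sketch, not the
exact operation set from the dissertation):

    /* Simplified sketch.  Each node owns its own data, and data moves
     * only through these operations, so their semantics do not change
     * whether memory is really shared or not.  Interface only: each
     * machine implements the operations in its own way. */

    void vm_send(int to, const void *buf, int nbytes);
    void vm_recv(int from, void *buf, int nbytes);
    void vm_barrier(void);             /* global synchronization */
    int  vm_self(void);                /* my node number */

    /* Swap boundary rows with a partner node; nothing here cares
     * whether vm_send is a copy through shared memory or a real
     * message on the cube. */
    void exchange_boundary(double *out, double *in, int n)
    {
        int partner = vm_self() ^ 1;
        vm_send(partner, out, n * (int)sizeof(double));
        vm_recv(partner, in,  n * (int)sizeof(double));
        vm_barrier();
    }
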
I tried my approach on a large (= "real world")
fluids problem and ran it on two very different architectures --
the Cray X-MP/416 and an Intel hypercube.
The codes were not identical, but differed primarily in 
mundane ways-- (1) the Cray supports vector-mode arithmetic
so inner loops had to be re-worked in order to vectorize;
(2) the Cray had more memory than I could use, but I didn't
really have enough memory on the cube (with 32 nodes).
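
A trivial illustration (not from my codes) of the kind of inner-loop
rework the Cray wanted:

    /* The branchy form can defeat vectorization; the branch-free
     * form computes the same thing and vectorizes readily. */

    void accum_branchy(double *a, double *b, int n)
    {
        int i;
        for (i = 0; i < n; i++)
            if (b[i] > 0.0)
                a[i] += b[i];          /* conditional store */
    }

    void accum_vector(double *a, double *b, int n)
    {
        int i;
        for (i = 0; i < n; i++)        /* same result, no branch */
            a[i] += (b[i] > 0.0) ? b[i] : 0.0;
    }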


I agree with Henry Spencer's remarks that parallelism
suffers from an image problem; however, I can't see that
architecture will provide all the answers.  My own
results suggest that the programmer is better off if he can
remain as aloof as possible from the way that the processors
are strapped together, whether through shared memory, message
passing, or whatever, and that he need not necessarily
pay a heavy performance penalty for keeping his distance
from the innards of the machine.  In short,  software is needed
to insulate the programmer from novel developments in
parallel architecture.  When a programmer wants
to use a new machine he shouldn't have to rewrite his code, or
he will resist the innovation.  The field is still too young
for anyone to commit to one kind of machine.
Perhaps someday architectures will become standardized,
but I think it will be a while before that happens
(and I'm not so sure that it ever will).


As an aside: I've found that many of the problems I encountered
in writing multiprocessor software were mundane:
lack of a good debugger, system bugs, lack of application
libraries, and so on.  In short, these machines haven't been around
for long, and are harder to use than the more mature
uniprocessor systems.  Many of the problems have nothing
to do with the introduction of parallelism, but rather
with the newness of the machines themselves.

Comments?

Scott Baden	baden@lbl-csam.arpa   ...!ucbvax!baden
					(will be forwarded to lbl)