[comp.parallel] parallelism terminology

eugene@orville.nas.nasa.gov (Eugene Miya) (02/05/90)

Guido Jouret makes a very good point about terminology, but I think you
are way too late to standardize.  Your bias shows in keeping the word
"parallel"; it confines your thinking to geometrically restricted adjectives.
Taking an idea from Benjamin Whorf, I've suggested the following
exercise.  Work a week without using the obvious words: "parallel,
multiprocessor, concurrent, etc."  Observe the new words you start to use.
Language is far too imprecise to help us here.
See if the scope of what you mean changes.  It will.  Assumptions
become clearer sooner.  The physicists learned to do this studying quantum
mechanics decades ago (i.e., light is a particle on MWF, and a wave
on TTS).  I detected a race condition in a problem I had over a week ago.
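
For concreteness, here is the shape of such a race as a minimal sketch
(the shared counter, thread count, and all names are mine, purely
illustrative, not the actual problem I hit): two unsynchronized threads
increment a shared counter, and lost updates make the total come out short.

/* Two threads race on an unprotected counter; counter++ compiles to
 * a load-add-store sequence, so increments can be lost. */
#include <pthread.h>
#include <stdio.h>

#define N 1000000

static long counter = 0;                /* shared, no lock */

static void *increment(void *arg)
{
    (void)arg;
    for (long i = 0; i < N; i++)
        counter++;                      /* the race is here */
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, increment, NULL);
    pthread_create(&t2, NULL, increment, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld (expected %ld)\n", counter, 2L * N);
    return 0;
}

On a real multiprocessor the printed total usually falls short of the
expected value, and by a different amount each run.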
Here is part of the keyword list from my bibliography.

parallel
multiprocess
multiprocessor
multiprocessing
multitask
distributed
cooperative
competitive
loosely-coupled
tightly-coupled
pipelined
systolic
simultaneous
synchronous/asynchronous
vector
array
autonomous
connected/connectionist/cellular/automata
The following depend on whether you have "different" kinds of applications:
fault-tolerant
graceful degradation
reliable
dependable
Motherhood.

gkj@doc.imperial.ac.uk (Guido K Jouret) (02/07/90)

In response to E. Miya's concerns:

In my original posting I only mentioned different kinds of *parallelism*
because I consciously wanted to separate "characteristics" of programs/
algorithms from their implementation or the architectures they run on.
That's why I don't want to deal with such issues as loosely-coupled,
fault-tolerant, etc.  That's a whole other ballgame...

I think it's important to separate program or algorithm "characteristics"
(i.e. kinds of parallelism present) from the way in which such programs
are actually run (e.g. processor-farm model, divide-and-conquer, static
pipeline, etc.).
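
To make the distinction concrete, here is a minimal sketch (the function,
worker count, and all names are mine, purely illustrative).  The
*characteristic* is that f can be applied to every element independently;
the *execution model* below happens to be a small processor farm, but a
static pipeline or divide-and-conquer runner could execute the same
abstract specification.

/* Characteristic: out[i] = f(in[i]) for all i, with no ordering
 * constraints.  Execution model: a processor farm in which each
 * worker repeatedly grabs the next unclaimed index. */
#include <pthread.h>
#include <stdio.h>

#define N        16
#define WORKERS  4

static double in[N], out[N];
static int next_task = 0;                           /* the farm's queue */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static double f(double x) { return x * x; }         /* the independent op */

static void *worker(void *arg)
{
    (void)arg;
    for (;;) {
        pthread_mutex_lock(&lock);
        int i = next_task++;                        /* claim one task */
        pthread_mutex_unlock(&lock);
        if (i >= N)
            return NULL;
        out[i] = f(in[i]);
    }
}

int main(void)
{
    pthread_t t[WORKERS];
    for (int i = 0; i < N; i++)
        in[i] = (double)i;
    for (int w = 0; w < WORKERS; w++)
        pthread_create(&t[w], NULL, worker, NULL);
    for (int w = 0; w < WORKERS; w++)
        pthread_join(t[w], NULL);
    for (int i = 0; i < N; i++)
        printf("%g ", out[i]);
    printf("\n");
    return 0;
}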

Thanks to all the people who have been sending me contributions to add
to my list of things to consider.  Keep them coming!

Guido...

 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~+< Guido K. Jouret >+~~~~~~~~~~~~~~~~~~~~~~~~~~~
/ email:  gkj@uk.ac.ic.doc                         =  Humor is like a frog:    \
| rmail:  Functional Programming Section           =                           |
|         Dept. of Computing                       =  It can be dissected, but |
|         Imperial College                         =    usually dies in the    |
|         London SW7 2AZ                           =         process.          |
|         U.K.                                     =                           |
|  tel:   44-1-589-5111 xt: 7532                   =                           |
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

eugene@eos.arc.nasa.gov (Eugene Miya) (02/09/90)

I enjoyed reading Guido's comments below, and I respond openly.
Yes, I know exactly what you are saying.

>In my original posting I only mentioned different kinds of *parallelism*
>because I consciously wanted to separate "characteristics" of programs/
>algorithms from their implementation or the architectures they run on.
>That's why I don't want to deal with such issues as loosely-coupled,
>fault-tolerant, etc.  That's a whole other ballgame...

Parallelism is a very large elephant, and all the researchers around it
are the blind men of the Indian parable.  Each deals with only a few of
its characteristics.

So most researchers attempt just that separation, and that is where many
of them fail.
Permit me to draw an analogy: warm fusion.  Physicists are only
able to achieve fusion for very short periods of time with lots
and lots of preparation and concern for problems (overheads, costs).
They have the concept of "breakeven."  Parallel processing has a
somewhat similar problem.  We get parallelism for very short periods
of time, with high overheads.  At best we can get super-unitary speedups,
but usually only linear ones.
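
To put rough numbers on "breakeven," a back-of-the-envelope sketch; the
cost model (Amdahl's serial fraction plus a per-processor overhead term)
is my own crude assumption, not a measurement:

#include <stdio.h>

int main(void)
{
    double t_serial = 100.0;        /* time on one processor      */
    double f        = 0.05;         /* inherently serial fraction */
    double ovh      = 0.5;          /* overhead per processor     */

    for (int p = 1; p <= 64; p *= 2) {
        double t_par = f * t_serial
                     + (1.0 - f) * t_serial / p
                     + ovh * p;     /* startup/sync costs grow with p */
        printf("p=%2d  T=%7.2f  speedup=%5.2f\n",
               p, t_par, t_serial / t_par);
    }
    return 0;
}

At p = 1 the "parallel" run is slower than the serial one (below
breakeven), and past a few dozen processors the overhead term eats the
gains: the short-periods-with-high-overheads problem in miniature.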

The most successful parallel architectures have paid serious attention to
designing in some type of "graceful degradation."  Parallel machines
aren't by default more reliable because of replication.  It has to be designed
in from the beginning.  It doesn't just come naturally.  This attention
to detail is critical in making systems.  Unfortunately, a few people
have adopted these buzzwords without much thought and with a lot of assumptions.
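
One concrete example of reliability that must be designed in rather than
assumed: triple modular redundancy with a majority voter.  A minimal
sketch (my own illustration, not a description of any machine mentioned
here); replication alone does nothing until a voter like this is built in:

#include <stdio.h>

/* Majority vote over three replicated results. */
static int vote(int a, int b, int c)
{
    if (a == b || a == c) return a;
    if (b == c)           return b;
    return -1;                      /* no majority: signal failure */
}

int main(void)
{
    /* The third replica returns a corrupted result. */
    printf("voted result: %d\n", vote(42, 42, 17));   /* prints 42 */
    return 0;
}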

Increasingly we will have to see parallel computers used, like telescopes
and particle accelerators, as tools for research about computation.

>I think it's important to separate program or algorithm "characteristics"
>(i.e. kinds of parallelism present) from the way in which such programs
>are actually run (e.g. processor-farm model, divide-and-conquer, static
>pipeline, etc.).

Yes, your key word is "virtual."  It's that separation of concept from
detail.  But I would like to eventually see the hardware you run on.
Programs are written for architectures.  Architectures are rarely built
for programs (a few, like the GF-11, are exceptions).

I was told a joke by George Adams [Hi George] which I passed on to Steve
(your moderator).  In short: an EE and a CS are stranded on a desert island.
They decide that a computer would be useful for designing a boat.  The EE says,
"A desert island, with lots of sand around?  I will build a computer."  The CS says,
"Posit a machine......"

Not a flame, just a comment.
--eugene miya

landman@hanami.Eng.Sun.COM (Howard A. Landman x61391) (02/21/90)

>landman@hanami.Eng.Sun.COM writes
>>One of the interesting things about the Connection Machine is
>>that it clearly had a particular language (CM-Lisp) in mind.

In article <8076@hubcap.clemson.edu> kale@cs.uiuc.edu (L. V. Kale') writes:
>Isn't that a contradiction? Seems to me that CM-Lisp is designed
>for CM. (At least the idea of a data parallel SIMD machine existed
>before the language).

CM languages have changed somewhat from the original ideas, but
the alpha and beta operators in CM-Lisp are a mathematical
abstraction which is quite clean and hardware-independent.
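
For readers who have not used them, here is a loose rendering of that
flavor, cast as plain sequential map/reduce in C.  This is my own sketch,
not the actual CM-Lisp semantics, but it shows why the abstraction says
nothing about hardware: alpha applies a function at every element "at
once," and beta combines all the elements.

#include <stdio.h>

#define N 8

/* alpha: elementwise application, conceptually simultaneous;
 * a CM could assign one processor per element. */
static void alpha(double (*f)(double), const double *in, double *out, int n)
{
    for (int i = 0; i < n; i++)
        out[i] = f(in[i]);
}

/* beta: combine all elements with an associative operator;
 * a CM would use a log-depth combining tree. */
static double beta(double (*op)(double, double), const double *v, int n)
{
    double acc = v[0];
    for (int i = 1; i < n; i++)
        acc = op(acc, v[i]);
    return acc;
}

static double square(double x)        { return x * x; }
static double add(double a, double b) { return a + b; }

int main(void)
{
    double v[N] = {1, 2, 3, 4, 5, 6, 7, 8}, sq[N];
    alpha(square, v, sq, N);
    printf("sum of squares = %g\n", beta(add, sq, N));  /* 204 */
    return 0;
}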

>I think that C* etc. are not expressive enough for non data-parallel
>applications.

That's arguably true of any data-parallel language, but then one
could also argue that using a full MIMD-oriented language to attack
a data-parallel problem is grossly inappropriate.  The restriction
to D-P brings a great deal of conceptual simplicity.  And there are
LOTS of D-P problems worth solving.

I say "arguably" above since it appears possible to emulate MIMD
behavior in SIMD languages (and vice versa of course), but the
performance degradation would probably be totally unacceptable.
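
To see where the degradation comes from, consider the classic emulation
trick: interpret.  In the toy sketch below (the instruction set and all
names are my own assumptions), every "cycle" the controller broadcasts
each possible opcode in turn, and a processing element commits only the
one matching its private program counter, so each MIMD step costs the
whole opcode repertoire rather than one instruction.

#include <stdio.h>

enum op { OP_INC, OP_DEC, OP_NOP, NUM_OPS };

#define PES   4
#define PLEN  3

/* Each PE has its own program: the MIMD part. */
static enum op prog[PES][PLEN] = {
    { OP_INC, OP_INC, OP_INC },
    { OP_DEC, OP_NOP, OP_INC },
    { OP_NOP, OP_NOP, OP_NOP },
    { OP_INC, OP_DEC, OP_INC },
};

int main(void)
{
    int acc[PES] = {0}, pc[PES] = {0};

    for (int step = 0; step < PLEN; step++) {
        /* The SIMD part: broadcast every opcode; a PE stays masked
         * off except on the opcode it actually wants. */
        for (int o = 0; o < NUM_OPS; o++) {
            for (int p = 0; p < PES; p++) {
                if (prog[p][pc[p]] != (enum op)o)
                    continue;               /* idle this pass */
                if (o == OP_INC) acc[p]++;
                else if (o == OP_DEC) acc[p]--;
            }
        }
        for (int p = 0; p < PES; p++)
            pc[p]++;                        /* all advance together */
    }
    for (int p = 0; p < PES; p++)
        printf("PE %d: acc = %d\n", p, acc[p]);
    return 0;
}

Even with three opcodes each step already costs 3x; a realistic
instruction set makes the factor intolerable, hence "totally
unacceptable" above.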

	Howard A. Landman
	landman@eng.sun.com -or- sun!landman