[comp.parallel] Justifications for || processing

muttiah@stable.ecn.purdue.edu (Ranjan S Muttiah) (12/17/90)

I'm wondering how people in the application side of parallel processing
justify coding for a parallel machine.  I would be very much interested
in knowing how much complexity analysis plays a part in this.

On a related note, how prevalent are compilers that parallelize code?

john%ghostwheel.unm.edu@ariel.unm.edu (John Prentice) (12/18/90)

In article <12301@hubcap.clemson.edu> pur-ee!muttiah@stable.ecn.purdue.edu (Ranjan S Muttiah) writes:
>I'm wondering how people in the application side of parallel processing
>justify coding for a parallel machine.  I would be very much interested
>in knowing how much complexity analysis plays a part in this.
>

My group here justifies it simply: we can't get the performance we need
for our calculations without it.  We are doing computational solid
dynamics and Monte Carlo quantum scattering calculations.  We want to be
able to do three-dimensional calculations, but the current machines
aren't up to it.  Parallel systems offer hope as we get systems that
can do 100 gigaflops and up.  We don't see serial systems (vector, etc.)
as the route to this kind of performance.  Neither does anyone else, so
far as I have heard (consider that even Cray is now getting on the
parallel bandwagon).

As for complexity analysis, it is not used in our efforts.

>On a related note, how prevalent are compilers that parallelize code?

I don't know of any in the applications world.  On the CM2, the Fortran
compiler detects parallelism through the Fortran Extended array syntax,
which is somewhat "automatic" I suppose (except that you have to put in
this syntax specifically for the CM2; nobody else uses it.  When Fortran
Extended gets out, that will be another matter).
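[For readers who haven't seen the array syntax being discussed, here is
a minimal sketch of the idea; the variable names are made up for
illustration, not taken from any post:]

```fortran
C     Conventional serial Fortran: one element at a time
      DO 10 I = 1, N
         C(I) = A(I) + B(I)
 10   CONTINUE

C     Fortran Extended array syntax: the same operation written as a
C     single whole-array statement, which a compiler like the CM2's
C     can map directly onto parallel hardware
      C(1:N) = A(1:N) + B(1:N)
```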

John K. Prentice
Computational Physics Groups
Amparo Corporation
Albuquerque, NM  USA 

carter@iastate.edu (Michael Brannon Carter) (12/18/90)

> I'm wondering how people in the application side of parallel processing
> justify coding for a parallel machine.  I would be very much interested
> in knowing how much complexity analysis plays a part in this.

For the applications I see at the Scalable Computing Facility at Ames
Laboratory, the answer is simple: people want to run BIGGER problems.
They want to run them as fast as possible, but the primary emphasis, at
least here, is on the size of the problem.  More atoms in the
electrolyte, more atoms in the metallic cluster, etc.

When physicists or chemists are told they can have a bunch of time
on a parallel computer to run bigger problems, they usually jump at the
chance.  This is followed shortly thereafter by a lot of headscratching
trying to figure out how to make it parallel.  We find ourselves having
to help people along a lot because they just don't know how to program a
parallel computer.  (If anyone really KNOWS how...)  Granted, these are
researchers, so there are a lot of graduate students around to actually
do the work.  I can't speak to the commercial side of things.

As for complexity analysis, it remains on an intuitive level, not a
formal one.  We simply decide what functions to make parallel, and just
how that is to be done, by obvious factors (the percentage of runtime
consumed by a particular function) and experience.  Since we see mainly
scientific applications, the operations performed in them tend to be
well-understood algorithms: matrix math, global communications, etc.
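[The rule of thumb above -- look at the fraction of runtime a routine
consumes -- is essentially Amdahl's law.  A back-of-the-envelope sketch,
with made-up numbers rather than figures from any post:]

```fortran
C     Amdahl's law: if a fraction P of the runtime can be run in
C     parallel on NP processors, the overall speedup is
C         S = 1 / ((1 - P) + P/NP)
      PROGRAM AMDAHL
      REAL P, S
      INTEGER NP
      P  = 0.9
      NP = 100
      S  = 1.0 / ((1.0 - P) + P / REAL(NP))
C     Even with 90% of the runtime parallelizable, 100 processors
C     yield a speedup of only about 9.2 -- which is why the profile
C     of the code matters so much before parallelizing anything
      PRINT *, 'SPEEDUP = ', S
      END
```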

> On a related note, how prevalent are compilers that parallelize code?

There are none.  Note that I do *NOT* call the MIMDizer a parallelizing
compiler!  As far as I know, it only works for certain shared-memory
machines by recognizing very specific constructs in FORTRAN.  Try this
on for size: MIMDizer ~ ||izing compiler the way Expert System ~ AI.

--
Michael B. Carter

tseng@rice.edu (Chau-Wen Tseng) (12/20/90)

Michael Carter (carter@iastate.edu) writes:

> > On a related note, how prevalent are compilers that parallelize code?
>
> There are none.  Note that I do *NOT* call the MIMDizer a parallelizing
> compiler!  As far as I know, it only works for certain shared-memory
> machines by recognizing very specific constructs in FORTRAN.

A distinction needs to be made between shared-memory and
distributed-memory parallelizing compilers.  For shared-memory
computers, commercial firms such as Stardent, Convex, IBM, KAP, and
Pacific Sierra all provide compilers to convert sequential Fortran into
parallel and/or vector code.  Many of these compilers were based on the
pioneering work done at the Univ of Illinois and Rice.

For distributed memory computers, the state of the art is much less
advanced.  I know of at least two commercial parallelizing compilers: 
MIMDizer from Pacific Sierra and Aspar from ParaSoft.  The CM Fortran 
compiler from Thinking Machines Corp/Compass is close - it requires the 
programmer to specify the parallelism, but handles almost everything else.
In addition, there are a ton of universities tackling this problem.  They 
include groups working on Superb, Crystal, ParaScope, ID Nouveau, Kali, 
Dino, AL, Pandore, Paragon, Oxygen, Booster, Spot, and Parti, just to 
name a few :-)  


Chau-Wen Tseng
Dept of Computer Science
Rice University