[comp.parallel] Corollary of Modest Potential

gary@hermes.chpc.utexas.edu (Gary Smith) (06/07/91)

In article <1991Jun5.120653.7852@hubcap.clemson.edu> baugh%ssd.intel.com@RELAY.CS.NET
(Jerry Baugh) writes:

>
>One of the reasons we are proud of our numbers here at Intel is that
>we got a high GFLOP number with a relatively small problem size (we can
>still increase the problem size as we have not yet run out of physical
>memory).
>
Fine.  Scale it to the physical memory limit and tell us how long it takes.
Worley has written a marvelous paper, "The Effect of Time Constraints on
Scaled Speed-Up" (ORNL/TM-11031) wherein he shows that a meaningful
interpretation of speed-up depends on constraints on execution time.  Why don't
advocates of the promise of massive parallelism acknowledge such issues?
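To see why time constraints bite, consider a minimal sketch (my own
illustration, not from Worley's report; the cost model t = c*n^x and the
one-hour budget are assumed purely for concreteness).  Even granting
PERFECT linear speedup, a fixed wall-clock budget T caps the problem size
that p processors can finish:

# Hypothetical illustration: largest problem size n finishable within a
# fixed time budget T on p processors, assuming sequential cost
# t = c * n**x and ideal (perfect) speedup -- the most optimistic case.

def max_problem_size(p, T, c=1.0, x=4):
    """Largest n satisfying c * n**x / p <= T under ideal speedup."""
    return (T * p / c) ** (1.0 / x)

if __name__ == "__main__":
    T = 3600.0  # a one-hour budget, chosen arbitrarily
    for p in (1, 100, 10000, 1000000):
        print(f"p = {p:>7}: n_max = {max_problem_size(p, T):,.0f}")

Note that multiplying the machine by 10,000 buys only a factor of ten
in n -- the corollary of modest potential in miniature.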

john@spectre.unm.edu (John Prentice) replies to the above article:

>Allowing the
>benchmarker to scale the problem to his or her physical memory would
>seem to me to be a prescription for getting a lot of results that can't be
>compared to each other.  What I would prefer is to just fix the
>dimension, but fix it big.

Another important paper, "Type Architectures, Shared Memory, and the
Corollary of Modest Potential," should be read by john@spectre.unm.edu
and most other advocates of the promise of massive parallelism.  To
paraphrase Lawrence Snyder (author of the paper - Annual Review of
Computer Science, 1986), it is certainly still premature to be pessimistic
about ultimately achieving the benefits of massive parallelism, yet
THERE IS REASON TO BE EXTREMELY CAUTIOUS.

Continuing to paraphrase Snyder: The frequent challenge with physical
phenomena models is not to solve a fixed-size problem faster, but to
solve larger instances within a fixed time budget.  Simple rationale:
Solutions are so computationally intensive that problem instances of an
"interesting" size do not complete execution in a tolerable amount of
time.  What is the effect of parallelism on problem size?  Keeping time
fixed and UNREALISTICALLY assuming the best possible speedup, it follows
that superlinear problems (the interesting, realistic ones) can only be
improved sublinearly by parallel computation.  Specifically, if a problem
takes t = c*n^x sequential steps, and if p processors solve a problem
larger by a factor of m in the same time, t = c*(m*n)^x/p, then equating
the two expressions gives m^x = p, so the problem size can increase by
at most a factor of m = p^(1/x).  For example, to increase by two orders
of magnitude the size of a problem whose sequential time is t = c*n^4
(three spatial dimensions plus one temporal) requires, WITH UNREALISTIC
OPTIMISM, p = 100^4 = 100,000,000 processors!
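To put numbers on the corollary, here is a small illustrative script
(my own sketch, not from Snyder's paper; the exponent x = 4 and the
processor counts are arbitrary choices) that evaluates the best-case
growth factor m = p^(1/x) and its inverse p = m^x, the processor count
needed for a desired growth in problem size:

# Illustrative sketch of the corollary of modest potential, assuming
# sequential cost t = c * n**x and PERFECT speedup on p processors.

def size_growth(p, x):
    """Best-case factor m by which problem size can grow: m = p^(1/x)."""
    return p ** (1.0 / x)

def processors_needed(m, x):
    """Processors required for a factor-m increase in size: p = m^x."""
    return m ** x

if __name__ == "__main__":
    x = 4  # e.g., three spatial dimensions plus one temporal
    for p in (100, 10000, 100000000):
        print(f"p = {p:>9,}: size grows by at most m = {size_growth(p, x):.1f}")
    # The example from the text: two orders of magnitude in size
    print(f"m = 100 requires p = {processors_needed(100, x):,.0f} processors")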

Again, it's time for the advocates of the promise of massive parallelism
to acknowledge Snyder's "corollary of modest potential."

-- 
---Gary

Randolph Gary Smith                       Internet: gary@chpc.utexas.edu
Systems Group                             Phonenet: (512) 471-2411
Center for High Performance Computing     Snailnet: 10100 Burnet Road
The University of Texas System                      Austin, Texas 78758-4497