[comp.hypercube] Amdahl's law

delp@udel.EDU (Gary Delp) (09/14/87)

I have heard the following observation attributed to Gene Amdahl, but
have not been able to find it in the literature.  If you have seen it,
and have some hard information on it, I would appreciate it if you
would let me know directly.  I will report back on the results.

The "Law" as I understand it states that there is a fundamental limit
to the speedup that can be acheived by taking a serial task and
running it on a parallel processor.  This limit is set by the portions
of the task that still be serialized.  the formula is:

           S + P            S = time required by serial portion of task
speedup = --------- , where P = total time required by parallel portion
           S + P/N          N = Number of Processors

This obviously ignores such factors as communication overhead and
latency, but is meant to provide a fundamental limit, not a firm
prediction of performance.
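
For concreteness, here is a small sketch of the formula in Python (the
function name and the 10%-serial numbers are mine, purely illustrative):

    # Amdahl's law: speedup of a task with serial time S and
    # parallelizable time P when spread across N processors.
    def amdahl_speedup(S, P, N):
        return (S + P) / (S + P / N)

    # Illustrative numbers: a task that is 10% serial.
    print(amdahl_speedup(1.0, 9.0, 10))    # ~5.26
    print(amdahl_speedup(1.0, 9.0, 1000))  # ~9.91 -- never reaches 10x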

Harold Stone's new book "High-Performance Computer Architecture",
which is, by the way, a fine addition to the choices of textbooks in
the field, covers the issues in this area without mentioning the
fundamental limit.

Gary Delp               ARPA:   delp@udel.edu
123 Evans Hall          BITNET: delp@udel.edu
EE U of D               CSNET:  delp%udel.edu@relay.cs.net
Newark, DE 19716

eugene@pioneer.arpa (Eugene Miya N.) (09/14/87)

%A Gene M. Amdahl
%T Validity of the Single Processor Approach to Achieving Large Scale Computing Capabilities
%J AFIPS Conference Proceedings (SJCC)
%V 30
%D 1967
%P 483-485

If you get a chance to hear him speak, I highly recommend it.
John Backus is another IBMer (he developed FORTRAN) who gives good
lectures.  The paper itself is not great; it is an off-the-cuff piece
rather than an academic paper.  Still, it is required reading (my
opinion) for anyone thinking about parallel processing (but I'll
mention that Wednesday).

--eugene

lindsay@k.gp.cs.cmu.edu (Donald Lindsay) (09/18/87)

There are several answers to Amdahl's Law.

To recap: he said that a parallel computation cannot run any faster than its
inherently sequential portion.
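
Put as arithmetic: as N grows, (S + P)/(S + P/N) climbs toward
(S + P)/S, i.e. the reciprocal of the serial fraction.  A quick sketch
(the 5% figure is just an example):

    # Limiting case of Amdahl's law as N -> infinity:
    # speedup approaches 1/f, where f is the serial fraction.
    def max_speedup(serial_fraction):
        return 1.0 / serial_fraction

    print(max_speedup(0.05))  # 20.0 -- a 5% serial task caps out at 20x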

Some people with multiprocessors ignore the Law.  They feel that they
replicate, not parallelize: just run other problems too!  The
counter-argument is that some people have only one problem (like the
weather).  The phrase THEY use is "time to solution".

Cleve Moler at Intel Scientific Computers argues that as a problem increases
in size, the fraction of it which is sequential will decrease.  He defines
an "effective" algorithm to be one for which this is true, and claims that
there are many effective algorithms.
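
A toy model of that claim (my own illustration, not Moler's analysis):
suppose the serial work grows like n while the parallel work grows like
n**2.  Then the serial fraction shrinks as the problem grows, and the
speedup bound rises with it:

    # Illustrative scaling model (assumed here, not Moler's):
    # serial work ~ n, parallel work ~ n**2.
    def serial_fraction(n):
        return n / (n + n**2)

    for n in (10, 100, 1000):
        f = serial_fraction(n)
        print(n, f, 1.0 / f)  # fraction falls, speedup bound rises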

Multiflow has another answer.  They find more parallelism in programs
than had previously been suspected.  This doesn't defeat the Law, but
it gives many problems another chance.  For example, Multiflow claims
that a Monte Carlo benchmark ran faster on their machine than the
customer had ever seen it go before - even on Crays.  (An atypical
case, of course.)

On a MIMD machine such as a hypercube, it can happen that a program
runs faster on 2**5 nodes than on 2**6 nodes.  On a Multiflow, it can
happen that a program takes the same time on 7 functional units as on
14 functional units.  Or, you can win!  Exciting times.
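
One toy way to see how more nodes can lose (a made-up model, not a
measurement): add a communication cost that grows with the node count
to Amdahl's formula, and the speedup curve can turn over.

    import math

    # Amdahl's law plus a per-node communication term.
    # The coefficient c = 2.0 is invented for illustration.
    def speedup(S, P, N, c=2.0):
        return (S + P) / (S + P / N + c * math.log2(N))

    for N in (8, 16, 32, 64):
        print(N, speedup(1.0, 99.0, N))
    # With these made-up numbers the curve peaks at N = 32 and then
    # declines, so 2**5 nodes really can beat 2**6.
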
-- 
	Don		lindsay@k.gp.cs.cmu.edu    CMU Computer Science

delp@udel.EDU (Gary Delp) (09/22/87)

in article <11370002@hpsmtc1.HP.COM>, swh@hpsmtc1.HP.COM (Steve Harrold) says:
> If there is some dispute as to how accurate Amdahl's "Law" is, shouldn't
> it be more properly labelled "Amdahl's Theorem"??

Indeed, what people have said is that the world of parallel processing
isn't as bad as Amdahl's Law might at first make it seem, not that the
"law" is untrue.  They have said that the serial portion of code can
be minimized and that separate tasks may be run in parallel.  But no
one has given an example where the *law* has been shown to be wrong.
(Like the *law* of gravity, it is an observation which fits all the
observed data and predicts nature; also like the law of gravity, it is
arbitrarily complex depending on what factors must be considered, but
simple in the commonly used cases.)

The law, simply stated, says that if you need to do something, and
that thing has some elements which must happen in a certain order,
then it doesn't matter how many widgets there are to do it: the speed
with which it can be done is limited by the serial execution speed of
the things that must be done in series.

BTW, the reference that Eugene Miya graciously provided (thanks,
Eugene) only vaguely referred to this law.  Is it possible that there
are more recent references?  Has anyone heard him talk and refer to
this law?  Any information would be helpful.
-- 
Gary Delp	     ARPA:   delp@udel.edu CSNET:  delp%udel.edu@relay.cs.net
123 Hall, EE U of D		  ...!ihnp4!berkeley	  -\
Newark, DE 19716     UUCP:	  ...!allegra!berkeley	  -->!delp@udel.edu
(302)-451-6653 or 2405		  ...!harvard		  -/

eugene@pioneer.arpa (Eugene Miya N.) (09/28/87)

There are many other references to Amdahl's law.  Worlton has a
couple I like (the COMPCON "Bottleneckology" paper is a favorite; it
also argues for the use of the harmonic mean).  It's unclear to me
what people want.  (I saw Cleve's paper posted in troff.)  I only
posted the original reference.  Send mail.

From the Rock of Ages Home for Retired Hackers:

--eugene miya
  NASA Ames Research Center
  eugene@ames-aurora.ARPA
  "You trust the `reply' command with all those different mailers out there?"
  "Send mail, avoid follow-ups.  If enough, I'll summarize."
  {hplabs,hao,ihnp4,decwrl,allegra,tektronix}!ames!aurora!eugene