[comp.arch] Amdahl's Law vs Amdahl/Case Rule

martelli@cadlab.sublink.ORG (Alex Martelli) (05/21/91)

wayne@dsndata.uucp (Wayne Schlitt) writes:
:In article <13096@pt.cs.cmu.edu> lindsay@gandalf.cs.cmu.edu (Donald Lindsay) writes:
:> The Amdahl rule (1+ Mb/s, sustained, per MIP) suggests that the push
:> towards 100 MHz processors is also a push past 100 Mb/s.  [ ... ]
:I haven't been able to find Amdahl's law stated in any of the books I
:have looked at, but I can't say that I have looked real hard either... :->

I'd say...:-)  Open your copy of Hennessy and Patterson, "Computer
Architecture: A Quantitative Approach" (you DO have one, yes?  it
would be really absurd to lack this crucial work for anybody with
the slightest hint of a suggestion of a possibility of interest in
computer architecture... it's GREAT!), and there it is, in the
inner cover pages, amongst other Definitions, Trivia, Formulas, and
Rules of Thumb (it's one of the latter, of course).  It's identified
as the Amdahl/Case Rule: "A balanced computer system needs about 1
megabyte of main memory capacity and 1 megabit per second of I/O
bandwidth per MIPS of CPU performance" (page 17), where the page
reference points to the main text, smack in the middle of chapter
one, "Fundamentals of Computer Design".

The name "Amdahl's Law" is reserved for a totally different result,
number 1 in the "Formulas" section on the same page, and reads:
                                     1
speedup = ----------------------------------------------------------
          (1-Fraction.enhanced)+(Fraction.enhanced/Speedup.enhanced)

(the crucial reason why you never get as much benefit as "common sense"
would suggest from specialized coprocessors, vectors, parallelism...).
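
As a sanity check on the formula, here's a little C fragment of my
own (not from H&P) tabulating the speedup when an enhancement covers
90% of the run time:

    #include <stdio.h>

    /* Amdahl's Law: overall speedup when a fraction f of the work
       is sped up by a factor s and the rest is left untouched. */
    static double amdahl(double f, double s)
    {
        return 1.0 / ((1.0 - f) + f / s);
    }

    int main(void)
    {
        double f = 0.9;  /* fraction of run time the enhancement covers */

        printf("s =   2: speedup = %.2f\n", amdahl(f, 2.0));   /* 1.82  */
        printf("s =  10: speedup = %.2f\n", amdahl(f, 10.0));  /* 5.26  */
        printf("s = 1e9: speedup = %.2f\n", amdahl(f, 1e9));   /* 10.00 */
        return 0;
    }

Even an infinitely fast enhancement limited to 90% of the work buys
you at most 10x overall -- which is the whole point.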
-- 
Alex Martelli - CAD.LAB s.p.a., v. Stalingrado 53, Bologna, Italia
Email: (work:) martelli@cadlab.sublink.org, (home:) alex@am.sublink.org
Phone: (work:) ++39 (51) 371099, (home:) ++39 (51) 250434; 
Fax: ++39 (51) 366964 (work only), Fidonet: 332/407.314 (home only).

burley@albert.gnu.ai.mit.edu (Craig Burley) (05/22/91)

In article <860@cadlab.sublink.ORG> martelli@cadlab.sublink.ORG (Alex Martelli) writes:

   (the crucial reason why you never get as much benefit as "common sense"
   would suggest from specialized coprocessors, vectors, parallelism...).

Never say never!  You should use the law to estimate how much benefit to
EXPECT, especially on a "traditional" problem.

But on some problems, parallelization (or even coprocessors, I suppose)
can have a superscalable (is this the right word?) effect on performance.

E.g. a program which runs in X units of time on a single processor can run
in X/10 units of time on eight parallel processors.  (Or even less time.)

But I think this works only for programs including solution-space searches
as critical elements of their algorithms.
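
To see how, consider this toy sketch (all numbers invented for
illustration): a linear scan of a solution space, done sequentially
vs. split into 8 slices.  If the goal happens to sit near the front
of some slice, the lucky worker finishes after far fewer probes than
1/8 of the sequential count:

    #include <stdio.h>

    #define SPACE   800000L  /* size of the solution space          */
    #define GOAL    512345L  /* where the single solution happens   */
    #define WORKERS 8        /* to sit, and how many parallel scans */

    int main(void)
    {
        long chunk = SPACE / WORKERS;
        long seq_probes = GOAL + 1;   /* one processor scans from 0 */
        long par_probes = 0;
        int w;

        /* Each worker scans its own contiguous slice; the search
           stops when the worker whose slice holds the goal hits it. */
        for (w = 0; w < WORKERS; w++) {
            long start = w * chunk;
            if (GOAL >= start && GOAL < start + chunk)
                par_probes = GOAL - start + 1;
        }
        printf("sequential: %ld probes\n", seq_probes);
        printf("parallel:   %ld probes (%.1fx with %d workers)\n",
               par_probes, (double)seq_probes / par_probes, WORKERS);
        return 0;
    }

Here the parallel search needs 12346 probes against 512346 for the
sequential one -- about 41x from 8 workers.  Of course it cuts both
ways: averaged over all goal positions the advantage shrinks back
toward linear, which is why the effect shows up on search-type
problems and not across the board.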
--

James Craig Burley, Software Craftsperson    burley@gnu.ai.mit.edu

prener@watson.ibm.com (Dan Prener) (05/24/91)

In article <BURLEY.91May22040315@albert.gnu.ai.mit.edu>, burley@albert.gnu.ai.mit.edu (Craig Burley) writes:

|> But on some problems, parallelization (or even coprocessors, I suppose)
|> can have a superscalable (is this the right word?) effect on performance.
|> 
|> E.g. a program which runs in X units of time on a single processor can run
|> in X/10 units of time on eight parallel processors.  (Or even less time.)
|> 
|> But I think this works only for programs including solution-space searches
|> as critical elements of their algorithms.

There can be even more trivial reasons for super-linear speedup.

For example, the N processors together might have not only more real
memory than the single processor used as the base for the speedup
computation actually has; together they might have more real memory
than is even architecturally possible on a single processor.
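
A back-of-the-envelope version of that argument, with purely made-up
(but plausible) numbers: once the working set fits in the combined
real memory of the N machines, accesses that used to be page faults
become memory hits, and the per-access cost drops by far more than a
factor of N:

    #include <stdio.h>

    int main(void)
    {
        double hit_ns    = 200.0;   /* main-memory access, ~200 ns   */
        double fault_ns  = 20.0e6;  /* disk page fault, ~20 ms in ns */
        double miss_rate = 0.01;    /* accesses that fault when the
                                       working set overflows memory  */

        /* Average access cost on one memory-starved processor... */
        double starved = (1.0 - miss_rate) * hit_ns
                       + miss_rate * fault_ns;
        /* ...versus N processors whose combined memory holds it all. */
        double fits = hit_ns;

        printf("starved: %.0f ns/access, fits: %.0f ns/access (%.0fx)\n",
               starved, fits, starved / fits);
        return 0;
    }

With these numbers the per-access cost drops by about 1000x, which
dwarfs the mere N-fold credit you'd give the processors themselves.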
-- 
                                   Dan Prener (prener @ watson.ibm.com)