[comp.arch] Balanced system - a tentative definition

hjm@cernvax.UUCP (hjm) (08/11/88)

Someone a while ago asked what a "balanced system" is.  I propose
the following definition for debate/flaming:

"A balanced system is one where an improvement in the performance
of  any single part would not increase the overall performance of
the system, and where the degrading of any single part would  de-
crease the overall performance."

So that's my definition.  Comments, anyone?

The explanation of this definition is that every part of the sys-
tem  is  going as fast as it can, and that no one part is holding
up the process.  Consequently, if any one  part  slows  down,  it
would drag the rest of the system down with it.

In this definition, I include both the hardware and the  software
in  the  term  "system",  as  a system can only be balanced for a
given problem, or class of problems.  For example,  Amdahl's  Law
about  1 MIPS & 1 MB & 1 Mbyte/sec was for I/O intensive business
applications written in COBOL.  This is  obviously  not  directly
applicable to image processing written in assembler or C, for ex-
ample.
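
The rule of thumb quoted above can be turned into a quick balance check. The sketch below scales memory size and I/O rate by CPU speed against the 1 MIPS / 1 MB / 1 Mbyte/sec figures; the function name and the second machine's numbers are invented for illustration, not taken from Amdahl.

```python
# Balance check against the rule of thumb quoted above: roughly 1 MB
# of memory and 1 Mbyte/sec of I/O per MIPS, for I/O-intensive COBOL
# workloads.  Function name and example figures are invented.

def balance_ratios(mips, mem_mb, io_mb_per_s):
    """Scale each resource by CPU speed; 1.0 means 'in balance'."""
    return {"memory": mem_mb / mips, "io": io_mb_per_s / mips}

# the canonical 1-MIPS machine is balanced by construction:
print(balance_ratios(mips=1.0, mem_mb=1.0, io_mb_per_s=1.0))
# a faster CPU with the same disk is I/O-starved for this workload:
print(balance_ratios(mips=4.0, mem_mb=4.0, io_mb_per_s=1.0))
```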

Including the software as part of the system is  a  more  general
view  than that expressed by the original debaters (who were dis-
cussing memory speeds and CPU speeds  with  reference  to  screen
blitting), but is necessary as software is most often the limiting
factor on system performance; the most dramatic speedups are usu-
ally  achieved by tweaking the software.  But, there comes a time
when, for instance, the number of calculations required cannot be
changed (you always have to do the same number of FLOPs in an FFT
calc) and the CPU is then the limit.  If it is choked by a memory
that requires wait-states then that is the next point of improve-
ment.  Increasing the memory speed might then lead to a  balanced
system  where  the  CPU is processing as fast as possible and sa-
turating the memory whilst executing a well-written program.   At
this  point,  increasing either the CPU speed or the memory speed
would have no effect, and the software can't be  tweaked  anymore
anyway  -  a  balanced  system.   But, detuning any one component
(CPU, memory or software) would cause a speed hit.  Hence the de-
finition.
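
The argument above can be sketched as a toy model: treat overall throughput as set by the slowest of the three parts, so that in a balanced system all parts run at the same effective rate. All rates below are arbitrary invented units.

```python
# Toy model of the definition: overall performance is limited by the
# slowest part, so in a balanced system all parts are running at the
# same effective rate.  The rates are arbitrary invented units.

def system_rate(cpu, memory, software):
    # the bottleneck component sets the pace for everything else
    return min(cpu, memory, software)

base       = system_rate(cpu=10, memory=10, software=10)
faster_cpu = system_rate(cpu=20, memory=10, software=10)  # no gain
slower_mem = system_rate(cpu=10, memory=5,  software=10)  # a loss

print(base, faster_cpu, slower_mem)
```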

(I  haven't  mentioned  I/O-related  parameters,  such  as   disk
transfer  rates  or  disk  access times, but the extension to any
given system parameter is not difficult.)

        Hubert Matthews

suitti@haddock.ISC.COM (Steve Uitti) (08/12/88)

In article <794@cernvax.UUCP> hjm@cernvax.UUCP () writes:
>"A balanced system is one where an improvement in the performance
>of  any single part would not increase the overall performance of
>the system, and where the degrading of any single part would  de-
>crease the overall performance."
>
>The explanation of this definition is that every part of the sys-
>tem  is  going as fast as it can, and that no one part is holding
>up the process.  Consequently, if any one  part  slows  down,  it
>would drag the rest of the system down with it.

	I would first add to the definition that parts of the system
are subsystems that act independently of each other in that they can
perform at the same time (pure overlap).  If one looks at a RAM system
(with bandwidth and latency) and the CPU system (with instruction
speed), for a "balanced" system, the RAM bandwidth should be saturated
while the CPU is never stalled (under some conditions which make up
the rest of the system).
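
Steve's overlap criterion can be stated as a tiny sketch, assuming the CPU's memory demand and the RAM's bandwidth can each be summarized as a single rate (a simplification; the figures are invented):

```python
# Sketch of the overlap criterion: CPU and RAM run concurrently, and
# the system is balanced exactly when the CPU's demand for data
# matches what the RAM can supply.  Rates are invented Mbyte/sec.

def overlap_state(cpu_demand_mb_s, ram_bw_mb_s):
    if cpu_demand_mb_s > ram_bw_mb_s:
        return "CPU stalls (memory-bound)"
    if cpu_demand_mb_s < ram_bw_mb_s:
        return "RAM idle (CPU-bound)"
    return "balanced: RAM saturated, CPU never stalled"

print(overlap_state(80, 100))
print(overlap_state(120, 100))
print(overlap_state(100, 100))
```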

	It is a nice definition, but doesn't do much for the real world.

	In real life I generally consider the CPU/RAM part as one half
of the system, and disk I/O as the other half (and I ignore other
kinds of I/O, unless the application being considered requires them).
Usually the parameter that could be changed is how much RAM the system
has, and this determines how much paging will happen.  I will contend
that, in real life, a system that does not have to page (or swap) is
always faster than one that does, even if the disk bandwidth is never
even close to saturation, and even if disk I/O can take place while
the CPU is doing useful work (perhaps more often the case than not).  The
problem is that disk I/O requires CPU.  CPU is required to manage RAM
resources, disk paging area resources, perform context switches, etc.
My experience with real (UNIX) systems is that a faster CPU performs
I/O quicker than a slower one, even to the same very slow (floppies)
disks.  Very sad.
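
The point that disk I/O requires CPU can be sketched with a toy cost model (all numbers invented): even if every disk transfer overlaps perfectly with computation, the CPU-side cost of handling each page fault still lengthens the run.

```python
# Toy cost model: even with perfectly overlapped disk transfers, each
# page fault still charges the CPU for fault handling, page-frame
# management, and a context switch.  All figures are invented.

def run_time(compute_s, faults, cpu_cost_per_fault_s):
    # disk transfer time is assumed fully hidden behind computation,
    # so only the CPU-side cost of each fault adds to elapsed time
    return compute_s + faults * cpu_cost_per_fault_s

no_paging = run_time(compute_s=100.0, faults=0,
                     cpu_cost_per_fault_s=0.002)
paging    = run_time(compute_s=100.0, faults=50_000,
                     cpu_cost_per_fault_s=0.002)

print(no_paging, paging)  # the paging run is slower even though the
                          # disk itself was never saturated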

	The interdependencies between RAM bandwidth and CPU speed in
the presence of a cache are complicated enough to raise similar
problems.  I'm sure someone out there can elaborate.

	Stephen

colwell@mfci.UUCP (Robert Colwell) (08/14/88)

In article <794@cernvax.UUCP> hjm@cernvax.UUCP () writes:
>
>
>Someone a while ago asked what a "balanced system" is.  I propose
>the following definition for debate/flaming:
>
>"A balanced system is one where an improvement in the performance
>of  any single part would not increase the overall performance of
>the system, and where the degrading of any single part would  de-
>crease the overall performance."
>
>So that's my definition.  Comments, anyone?

I like your definition, Hubert.  The only thing that might be worth
stating explicitly is that the above must hold true for the workload
of interest.  For any given architecture, it is easy to conjure up
a program that runs extremely badly.  But that doesn't in itself 
mean that the architecture is deficient or imbalanced.   I know
Hubert knows this, but it's worth being very clear about it, because
I think this one issue was the primary cause of an awful lot of
early RISC/CISC fights ("my machine kicks your machine's butt on the
following set of toy benchmarks!"  "Oh, yeah, well my machine spits
on your benchmarks and blows you away on this other set!")

It's also usually possible to produce a program that runs extremely
well on some given machine; if you're going to go this route, though,
it behooves you to make that program something useful.

Bob Colwell            mfci!colwell@uunet.uucp
Multiflow Computer
175 N. Main St.
Branford, CT 06405     203-488-6090

colwell@mfci.UUCP (Robert Colwell) (08/14/88)

In article <6082@haddock.ISC.COM> suitti@haddock.ima.isc.com (Steve Uitti) writes:
>In article <794@cernvax.UUCP> hjm@cernvax.UUCP () writes:
>>"A balanced system is one where an improvement in the performance
>>of  any single part would not increase the overall performance of
>>the system, and where the degrading of any single part would  de-
>>crease the overall performance."
>
>	It is a nice definition, but doesn't do much for the real world.

I disagree.  In the design of our TRACE VLIW, we had to balance
dozens of system design issues, such as number of register files,
number of register file write ports, number of register file read
ports, number of system buses, instruction sets, memory sizes,
functional unit quantities, functional unit pipelining, memory
pipelining, etc. ad-nearly-infinitum.  When somebody comes up to me
with code that they say is bottlenecked by one or another of these,
I don't shake my head sorrowfully.  I feel that as long as each of
the design parameters occasionally becomes the limiting case for some
program, then the design is probably pretty-well balanced.  If every
programmer were to complain about the same thing, I'd say that that
area probably deserved more attention in the design.

>	In real life I generally consider the CPU/RAM part as one half
>of the system, and disk I/O as the other half (and I ignore other
>kinds of I/O, unless the application being considered requires them).
>Usually the parameter that could be changed is how much RAM the system
>has, and this determines how much paging will happen.

These are just the obvious large-scale items.  At least for those,
one has resources with which to make the decisions; there are books,
graphs, data, studies, and experts around.  Not that it's a piece of
cake, either, I grant you.

Bob Colwell            mfci!colwell@uunet.uucp
Multiflow Computer
175 N. Main St.
Branford, CT 06405     203-488-6090