[comp.arch] RISCy designs

renglish@hplabsz.HPL.HP.COM (Bob English) (09/12/90)

One of the recurring arguments in this group concerns what happens when
a machine is used for purposes beyond those foreseen by its designers.
A recent example is the discussion of whether floating-point or
multiple-precision integer support is an appropriate use of hardware.
I don't want to get into that particular argument, but I'd like to
discuss a related issue.

Much of the RISC design philosophy revolves around measuring actual
system usage and tuning those parts of your design which measurements
show to be important.  These measurements can be done either on existing
systems or on simulators, but they share the common element of
presupposing a workload.  The danger is that by tuning too closely to
that particular workload, the machine can become less useful for other
purposes.

Classic examples of this include machines that perform quite well on
individual benchmarks, but run out of cache and die when two benchmarks
are run simultaneously.  Other examples that some here might bring up
are machines that run extremely well until asked to multiply two 32-bit
numbers.  Since the true workload of a machine is never precisely known
at design time, there is always the danger that the target workload will
be wrong in some subtle way, and the machine will fail in the market for
which it is intended.

The question, then, is how do you design a machine with this in mind?
How do you choose the size of the cache so that if you miss the target
workload, the machine still performs acceptably?  One possible approach
would be simply to make the cache larger than the models indicate, but
by how much?
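One way to get a feel for the exposure, short of building the machine,
is to run candidate traces against a toy cache model.  The sketch below
(C again; the cache parameters and synthetic traces are invented for
illustration, not measurements) counts misses in a direct-mapped cache
for one job alone and for two jobs interleaved, a crude stand-in for
timesharing:

#include <stdio.h>

#define LINE_SIZE   32          /* bytes per cache line (a guess)     */
#define CACHE_LINES 1024        /* 32KB direct-mapped cache (a guess) */
#define REFS        8192        /* references per synthetic job       */

static long tags[CACHE_LINES];

/* Count misses for an address trace in a direct-mapped cache. */
static long misses(unsigned long *trace, int n)
{
    long miss = 0;
    int i;

    for (i = 0; i < CACHE_LINES; i++)
        tags[i] = -1;
    for (i = 0; i < n; i++) {
        long line = (long)(trace[i] / LINE_SIZE);
        int index = (int)(line % CACHE_LINES);
        if (tags[index] != line) {
            tags[index] = line;
            miss++;
        }
    }
    return miss;
}

int main(void)
{
    static unsigned long a[REFS], b[REFS], both[2 * REFS];
    int i;

    /* Job A sweeps a 32KB region starting at 0; job B sweeps a 32KB
       region starting at 1MB.  Each fits the cache alone; together
       they need twice the cache and conflict heavily. */
    for (i = 0; i < REFS; i++) {
        a[i] = (unsigned long)i * 4;
        b[i] = 0x100000UL + (unsigned long)i * 4;
    }
    for (i = 0; i < REFS; i++) {
        both[2 * i]     = a[i];
        both[2 * i + 1] = b[i];
    }

    printf("job A alone:      %ld misses\n", misses(a, REFS));
    printf("A, B interleaved: %ld misses\n", misses(both, 2 * REFS));
    return 0;
}

In this contrived case either job alone fits and the interleaved run
misses on nearly every reference.  The interesting questions are how
much margin doubling CACHE_LINES actually buys, and whether the traces
you feed the model look anything like what customers will run.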

--bob--
renglish@hplabs.hp.com
"I've seen pictures of Dave, Bill, and John, but I wouldn't presume to
speak for them."