[net.arch] Re**3: Cache Revisited

ehj@mordor.UUCP (Eric H Jensen) (09/06/85)

In article <455@mtxinu.UUCP> ed@mtxinu.UUCP (Ed Gould) writes:
>In article <170@mips.UUCP> mash@mips.UUCP (John Mashey) writes:
>
>>                                                 Consider the ultimate
>>case: a smart compiler and a machine with many registers, such that
>>most code sequences fetch a variable just once, so that most data references
>>are cache misses.  Passing arguments in registers also drives the hit
>>rate down.
>
>With this ultimate machine/compiler combination it seems intuitively
>that a data cache would then be a *bad* idea, since having a cache
>can't be faster than an uncached memory reference (for what would
>be a miss) and is often slower.  We can then use the real estate saved
>for even more registers!

An important advantage of the cache is that it uses the same name
space as main memory.  This matters when your compiler cannot
determine access patterns a priori, and so cannot push your load
instructions sufficiently far ahead of the uses of the loaded data to
hide main memory latency.  For applications that do a lot of pointer
chasing (read: a lot of loads (i.e. compilers)) and touch linked
records more than once, experience suggests that the records that
will be touched again remain in the cache (implicit fetch-ahead also
helps you quite a bit).

A second aspect of maintaining the same name space is the potential
for reads and writes of shared data to be very fast in a shared
memory model.  Although some may argue that the shared memory model
is not the way to go, there seem to be sufficient utilitarian, if not
practical, counterarguments to pursue the shared memory model for the
present.

It also turns out that you can save a lot of hardware (in the
multiprocessor case) by implementing locking mechanisms in the cache
instead of in the memory, on systems where you cannot simply `lock'
the bus.

All three items above raise issues that are not easily handled with
just a large register file.  Some kind of name space mapping is
needed.  If you provide a large register file with name space mapping
you have a cache....

It seems to me the cache is an indispensable element of the hardware
memory hierarchy.  It need not take up `precious' real estate.  If
delayed loads are used, part or all of one pipe stage can be taken up
by wire length.  Also, when the processor speed is 10 times the
memory speed, a cache miss is not all that much slower than an
uncached reference.

Now of course caches could be made a little smarter ...



-- 
eric h. jensen         (S1 Project @ Lawrence Livermore National Laboratory)
Phone: (415) 423-0229  USMail: LLNL, P.O. Box 5503, L-276, Livermore, Ca., 94550
ARPA:  ehj@angband     UUCP:   ...!decvax!decwrl!mordor!angband!ehj