[net.arch] Split instruction and data caches

daemon@houligan.UUCP (03/19/86)

Jack Jansen requested results on the performance of split caches.
There have also been several requests for information on the 
performance of other cache configurations.  As a starting point for
studying the RESEARCH in this area, I recommend _Cache Memories_,
by Alan Jay Smith, ACM Computing Surveys, Vol. 14, No. 3, September
1982.  This 60-page paper explains many cache design alternatives
and presents the results of a large number of trace-driven simulations.
It also has an excellent list of references.  If you want further
references or performance information after you have read this paper,
I will be glad to help.  I would also appreciate any information that
you might have, since I am actively involved in designing caches for
high-performance computers.  Mail it to me and I will summarize
appropriate information to the net.

In actual practice, the choice of a split cache design over a unified
cache is usually driven by virtual page size, speed requirements, and
available RAM configurations rather than by hit ratio.  A split cache of
the same total size and associativity as a unified cache usually has a
slightly lower hit rate than the unified cache, although this is heavily
dependent on instruction mix and workload.  In the caches I've worked on,
a split cache has also required slightly more control logic to implement.
(This all refers to a split cache with a single read port, i.e. a single
cache split into two parts.  A multiport cache is another subject.)
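
To give a feel for how the trace-driven comparisons behind that claim are
set up, here is a toy C sketch (my own illustration, not taken from
Smith's paper or from any machine I've worked on).  It runs one reference
stream through a unified cache and through a split I/D cache of half the
size each, with the same associativity, and prints both hit rates.  The
synthetic trace, the LRU policy, and every size parameter are placeholder
assumptions; a real study would replay traces captured from actual
programs, and the numbers this prints say nothing about real workloads.

/* splitsim.c - toy trace-driven comparison of unified vs. split caches.
 * All parameters below are placeholder assumptions for illustration.   */
#include <stdio.h>
#include <stdlib.h>

#define LINE_BYTES   16            /* bytes per cache line                 */
#define TOTAL_LINES  256           /* lines in the unified cache           */
#define ASSOC        2             /* 2-way set associative (both designs) */
#define N_REFS       100000L       /* length of the synthetic trace        */

struct cache {
    int sets;
    unsigned long *tag;            /* tag[set * ASSOC + way]; 0 == empty   */
    unsigned int  *age;            /* LRU age counters, smaller == newer   */
    long hits, refs;
};

static void cache_init(struct cache *c, int lines)
{
    c->sets = lines / ASSOC;
    c->tag  = calloc((size_t)lines, sizeof *c->tag);
    c->age  = calloc((size_t)lines, sizeof *c->age);
    c->hits = c->refs = 0;
}

/* Reference one byte address; true LRU replacement within the set. */
static void cache_ref(struct cache *c, unsigned long addr)
{
    unsigned long line = addr / LINE_BYTES + 1;   /* +1 so 0 means "empty" */
    int base = (int)(line % (unsigned long)c->sets) * ASSOC;
    int w, victim = 0;

    c->refs++;
    for (w = 0; w < ASSOC; w++)
        c->age[base + w]++;                       /* age every way         */
    for (w = 0; w < ASSOC; w++) {
        if (c->tag[base + w] == line) {           /* hit                   */
            c->hits++;
            c->age[base + w] = 0;
            return;
        }
        if (c->age[base + w] > c->age[base + victim])
            victim = w;                           /* remember oldest way   */
    }
    c->tag[base + victim] = line;                 /* miss: replace LRU way */
    c->age[base + victim] = 0;
}

int main(void)
{
    struct cache unified, icache, dcache;
    long i;

    cache_init(&unified, TOTAL_LINES);
    cache_init(&icache,  TOTAL_LINES / 2);        /* split: half each      */
    cache_init(&dcache,  TOTAL_LINES / 2);

    srand(1);
    for (i = 0; i < N_REFS; i++) {
        /* Crude synthetic trace: a mostly sequential instruction stream
           plus scattered data references.  Real studies replay traces
           gathered from running programs, as in Smith's survey.          */
        unsigned long iaddr = (unsigned long)(i * 4 % 65536);
        unsigned long daddr = (unsigned long)(rand() % 32768) + 0x100000UL;

        cache_ref(&unified, iaddr);               /* instruction fetch     */
        cache_ref(&icache,  iaddr);
        if (i % 3 == 0) {                         /* occasional data ref   */
            cache_ref(&unified, daddr);
            cache_ref(&dcache,  daddr);
        }
    }

    printf("unified: %.2f%% hits\n", 100.0 * unified.hits / unified.refs);
    printf("split  : %.2f%% hits\n",
           100.0 * (icache.hits + dcache.hits) / (icache.refs + dcache.refs));
    return 0;
}

To compare other configurations, change TOTAL_LINES, ASSOC, or how the
split capacity is divided between the I and D sides, and feed in a real
address trace in place of the synthetic one; the interesting comparisons
come from varying those inputs rather than from this particular code.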

I hope this information helps.


                 The above opinions are solely mine.

Jeff Oltmann                             ...!{brl-bmd,pur-ee,sun}!gould!joltmann
Gould, Computer Systems Division         6901 W. Sunrise Blvd.  
Ft. Lauderdale, FL  33310-4499           (305) 587-2900