[comp.sys.ibm.pc.hardware] Faster RAM <==> Faster operation

jackin@vehka.uta.fi (Markku Mäenpää) (11/01/90)

I am wondering how faster RAM affects performance.
I understand that non-cache boards probably benefit from
faster RAM, but is it the same with cached motherboards too?
Say, for example, I have a 33 MHz / 64K cache board: do
I get a speed increase if I use 70 ns RAM instead of 80 ns?
It seems that currently a RAM write is 1 ws and a read is 0 ws.

Markku


--
Markku Mäenpää 
email : jackin@vehka.uta.fi
phone : 358 31 561 575 (in Finland (931) 561 575)

silver@xrtll.uucp (Hi Ho Silver) (11/05/90)

In article <1718@kielo.uta.fi> jackin@vehka.uta.fi (Markku Mäenpää) writes:
$I am wondering how faster RAM affects performance.
$I understand that non-cache boards probably benefit from
$faster RAM, but is it the same with cached motherboards too?

   Yes, but nowhere near as much.  The idea of a cache is that the vast
majority of memory references occur in a fairly small area of memory
during any given time period.  For example, in a matrix multiplication
routine, you'd have some loop code that gets executed repeatedly, and
your data fetches from the matrices would also be done repeatedly.  By
loading these frequently-accessed areas into a fast cache, your computer
doesn't have to wait for main memory as often.
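   To make that concrete, here's a rough C sketch of the sort of matrix
multiply loop I mean (my own illustration, not benchmark code); the
comments point out the code and data that get touched over and over:

#include <stddef.h>

#define N 64

/* Multiply two N x N matrices:  c = a * b.
 * The triple loop is the "loop code that gets executed repeatedly";
 * each pass re-reads a row of a and a column of b, so both the
 * instructions and much of the data stay resident in the cache. */
void matmul(double a[N][N], double b[N][N], double c[N][N])
{
    size_t i, j, k;

    for (i = 0; i < N; i++)
        for (j = 0; j < N; j++) {
            double sum = 0.0;
            for (k = 0; k < N; k++)
                sum += a[i][k] * b[k][j];   /* repeated data fetches */
            c[i][j] = sum;
        }
}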

   If 90% of your memory accesses are serviced out of the cache, and your
main memory runs at 2 wait states (and assuming cache misses take no
additional clock cycles of their own), your average number of wait states
would be

	.9 * 0		+ .1 * 2		= .2
	(the cache)	(main memory)

   If you reduce your main memory wait states to 1 by replacing the
chips with faster memory, you then end up with an average of .1 wait
states per memory access.
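
   If you want to play with the numbers yourself, the averaging is easy
to code up; here's a small C program (my own sketch, reusing the made-up
90% hit rate from above):

#include <stdio.h>

/* Average wait states per access: cache hits cost 0 ws,
 * misses go to main memory at mem_ws wait states. */
static double avg_ws(double hit_rate, double mem_ws)
{
    return hit_rate * 0.0 + (1.0 - hit_rate) * mem_ws;
}

int main(void)
{
    printf("2 ws main memory: %.2f average ws\n", avg_ws(0.90, 2.0)); /* 0.20 */
    printf("1 ws main memory: %.2f average ws\n", avg_ws(0.90, 1.0)); /* 0.10 */
    return 0;
}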

   On my old 286 system, running at 0 ws was about 10% faster than
running at 1 ws; if that ratio holds for 386 systems as a rough guideline,
then cutting the average from .2 ws to .1 ws in the above example would
only speed your machine up by about 1%.  Admittedly, the numbers are
entirely made-up, but they should show you that you are not likely to
gain much by speeding up main memory on a cached motherboard.
Keep in mind that although a 90% hit rate may sound pretty high, you
actually don't need a very large cache to achieve that kind of rate.  I
don't have the figures available, but that rate should be achievable with
a 64K cache, and perhaps even with a 32K one.
-- 
HI ROGER |Nikebo says "Nikebo knows how to post.  Just do it."| silver@xrtll
_________|-----------------------|_______________|------------|_____________
yunexus!xrtll!silver (L, not 1)  | Hi Ho Silver  | costing the net thousands
Silver:  Ever Searching for SNTF |i need a grilf | upon thousands of dollars

jackin@vehka.uta.fi (Markku Mäenpää) (11/05/90)

In article <1990Nov4.221653.4823@xrtll.uucp> silver@xrtll.UUCP (Hi Ho Silver) writes:
>In article <1718@kielo.uta.fi> jackin@vehka.uta.fi (Markku Mäenpää) writes:
>$I am wondering how faster RAM affects performance.
>$I understand that non-cache boards probably benefit from
>$faster RAM, but is it the same with cached motherboards too?
>
>   Yes, but nowhere near as much. 
>   If 90% of your memory accesses are serviced out of the cache, and your
>main memory runs at 2 wait states (and assuming cache misses take no
>additional clock cycles of their own), your average number of wait states
>would be
>
>	.9 * 0		+ .1 * 2		= .2
>	(the cache)	(main memory)
>
>   If you reduce your main memory wait states to 1 by replacing the
>chips with faster memory, you then end up with an average of .1 wait
>states per memory access.

This is true for the write-back cache method, which doesn't update main
memory as long as a write hits a cached area. This method is used in, for
example, Everex machines. However, the most common method is write-through,
which *always* updates main memory when a write occurs.
So if we assume half of all accesses are reads and half are writes, our
read/write operations go like this:

	90% hit the cache
	10% miss the cache

	.45 * 0		+ .45 * 2	+ .1 * 2 	= 1.1
	(cache/read)	(cache/write)	(main memory)	

Now if we could reduce our main memory wait states by 1, we would get 
.55 wait states, which would be a noticeable speed increase over 1.1 ws.
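
Written out as a quick C sketch (again just made-up figures, and my
assumption of a 50/50 read/write split):

#include <stdio.h>

/* Average wait states with a write-through cache: read hits are free,
 * but write hits still go to main memory, as do all misses.
 * write_frac is the assumed fraction of accesses that are writes. */
static double wt_avg_ws(double hit_rate, double write_frac, double mem_ws)
{
    double read_hits  = hit_rate * (1.0 - write_frac);  /* 0 ws each   */
    double write_hits = hit_rate * write_frac;           /* mem_ws each */
    double misses     = 1.0 - hit_rate;                  /* mem_ws each */

    return read_hits * 0.0 + write_hits * mem_ws + misses * mem_ws;
}

int main(void)
{
    printf("2 ws DRAM: %.2f average ws\n", wt_avg_ws(0.90, 0.5, 2.0)); /* 1.10 */
    printf("1 ws DRAM: %.2f average ws\n", wt_avg_ws(0.90, 0.5, 1.0)); /* 0.55 */
    return 0;
}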

The third method, a posted write-through cache, is also used, but I don't
know how common it is.

Markku
--
Markku Mäenpää 
email : jackin@vehka.uta.fi
phone : 358 31 561 575 (in Finland (931) 561 575)