[net.arch] Access times of BIG memories

radford@calgary.UUCP (10/11/86)

In article <236@ima.UUCP>, johnl@ima.UUCP (John R. Levine) writes:
> In article <403@vaxb.calgary.UUCP> radford@calgary.UUCP (Radford Neal) writes:
> >... the time required to decode the addresses for your huge
> >memories is logarithmic in the size of the memory.
> 
> Not really true -- memory addresses are not decoded by a tree but more
> typically by broadcasting the address and having all of the memory units
> look for their own addresses in parallel.  More realistically, by the time
> you take into account caches, pipelined buses, interleaved memory and all
> the other speed up tricks commonly done in hardware, the address decode time
> is the least of your problems.
> -- 
> John R. Levine, Javelin Software Corp., Cambridge MA +1 617 494 1400

I'm not a hardware engineer, but when you broadcast the address to a 
number of memory units, doesn't the capacitance of the lines, and hence
the access time, go up *linearly* with the amount of memory, which is
(asymptotically) even worse than the log n time for a decode tree?
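
For concreteness, here is a minimal back-of-the-envelope sketch (in
Python, with made-up constants) of the two delay models being compared:
a decode tree whose depth grows as log2(n), versus a broadcast line
whose RC delay grows with every unit hung on it. GATE_DELAY_NS and
WIRE_DELAY_PER_UNIT_NS are illustrative assumptions, not real device
figures.

    import math

    GATE_DELAY_NS = 2.0             # assumed delay per decode-tree level
    WIRE_DELAY_PER_UNIT_NS = 0.001  # assumed extra RC delay per unit on the line

    def decode_tree_delay(n_units):
        # Balanced decode tree: one gate delay per level, log2(n) levels.
        return GATE_DELAY_NS * math.ceil(math.log2(n_units))

    def broadcast_delay(n_units):
        # Broadcast line: capacitance, and hence delay, grows with each
        # unit attached.
        return WIRE_DELAY_PER_UNIT_NS * n_units

    for n in (2**10, 2**20, 2**30, 2**40):
        print("%16d units: tree %8.1f ns, broadcast %16.1f ns"
              % (n, decode_tree_delay(n), broadcast_delay(n)))

With these (arbitrary) constants the broadcast line wins at small sizes
and loses badly at big ones; the exact crossover doesn't matter, only
that one curve is O(log n) and the other O(n).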

I agree that this may not be of practical significance for present-size
memories, but this whole discussion has been about ridiculously BIG memories,
for which I don't think a constant access time can be assumed.

Also, I don't think caches and interleaving help you in this respect. 
Pipelining clearly does, but there are limits if you stick with a basically
von Neumann architecture.
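
A rough illustration of that limit, again with assumed numbers:
pipelining lets *independent* accesses overlap, so throughput is one
access per issue slot no matter how long the latency is, but a chain of
*dependent* accesses (each address computed from the previous result,
the usual von Neumann pattern) pays the full latency every time.

    LATENCY_NS = 100.0  # assumed end-to-end access latency of a big memory
    ISSUE_NS = 10.0     # assumed pipeline issue interval

    def independent_ns(k):
        # k overlapping accesses: pay the latency once, then one per slot.
        return LATENCY_NS + (k - 1) * ISSUE_NS

    def dependent_ns(k):
        # k chained accesses: no overlap possible, full latency each time.
        return k * LATENCY_NS

    for k in (1, 10, 100):
        print("%4d accesses: independent %7.1f ns, dependent %8.1f ns"
              % (k, independent_ns(k), dependent_ns(k)))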

    Radford Neal
    The University of Calgary