[net.arch] My BIG Mistake in Description of Use for BIG Memories

wa263@sdcc12.UUCP (bookmark) (09/26/86)

<-- bug snack


	I posted a serious error the other night when detailing a good
use for big core memories.  I alluded to setting up a simple "address
calculation" sort; just stuffing items into a big array in positions
indicated by their keys and then scanning the array afterward to find
them in order.  I said that this was O(n), but a good friend has
pointed out that it is NOT: if keys have values in 1..k (k >= n) the
sort scheme I outlined takes O(k) time.

	This is what happens when you have something in your head,
type it onto the screen, decide the language is too ordinary, set out
to impress everyone with your computer-science phraseology, recast your
stuff into pseudo-mathematical cant (generalizing wildly in the process)
and send it out into the cold light of day.  I should be more careful to
mind my n's and k's.

	Even good internal sorting methods take O(n log n) time.  The
address calculation sort I described CAN do better than that; in fact,
when k < n log n it is often preferable (assuming that we have the
required fast memory).  As a rough example, with n = k = one million
items the scan costs about 10^6 steps, against roughly 2 x 10^7 for a
comparison sort.  Also, in many applications k == n, giving us every
incentive to do things this way.

	I must thank my friend for pointing out my foolishness.  I hope
that you-all will pardon me.  I still think that large main memories are
very handy.


					bookmark@sdcsvax.ARPA

				    ...!sdcsvax!bookmark