david@kessner.denver.co.us (David D. Kessner) (02/22/91)
I think there is an important 'fact' that has been overlooked in this
64-bit addressing discussion. Here is a train of thought that gets close
to the real issue:
32 bits is enough for most applications today, but it is limiting
for some supercomputers, and for computers that map the file system
into virtual memory.
Therefore, 32-bit addressing is a real limitation that needs to be
dealt with in the near future. It's not a 'problem' right now, but
it will be.
Therefore, addressing that uses more than 32 bits is required for
future high-performance computers.
The next logical step (arguably) is 64-bit addressing. This is based
on the usual doubling of data/address bits every so often. Another
possibility is 48 bits, but that may cause problems because it is 6 bytes
long rather than 8 (a power of 2); besides, 64 bits is more elegant--
but I am no CPU designer.
Therefore, 64-bit addressing is the most logical step for addressing
to take, even though the full 64 bits will not be used for quite a while.
This makes a better case for 64 bits than anything else I have heard-- because
32 bits is not enough, and 64 bits seems like the most logical step.
As I said, I am no CPU designer but doesn't this make sense?
- David K
--
David Kessner - david@kessner.denver.co.us | do {
1135 Fairfax, Denver CO 80220 (303) 377-1801 (p.m.) | . . .
This is my system so I can say any damn thing I want! | } while( jones);