[comp.arch] 64-bit addressing

dswartz@bigbootay.sw.stratus.com (02/18/91)

Not that I'm arguing with all of the erudite claims I've heard in this
newsgroup about how bad a 64-bit virtual address space is because it is
[too inefficient | unnecessary | too expensive | ...].  Given the current
state of technology, I'm sure the people making them understand the issues
better than I do.  I just can't help remembering, back in the good old days
when I was programming a 16-bit PDP-11 with 128KB of main memory and 16MB
of hard disk, hearing similar lamentations about how a 32-bit machine
would be a huge lose: just think of all the extra memory you'll use once
the pointers and integers all double in size!  I now have a home PC with
64 times the main memory and 16 times the disk storage (and a processor
which is probably 10 times faster, at 1/10 the cost!)  All of this happened
within 10 years.  Although I would like to have 4GB of main memory in a
desktop box, my main interest in a large address space is being able to
map ANYTHING of interest into memory.  It might be a fairly small database
mapped read-only over a pair of tin cans connected by string, but hey...

--

Dan S.

wilson@uicbert.eecs.uic.edu (Paul Wilson) (02/18/91)

dswartz@bigbootay.sw.stratus.com writes:

>Not that I'm arguing with all of the erudite claims I've heard in this
>newsgroup about how bad a 64-bit virtual address space is because it is
>[too inefficient | unnecessary | too expensive | ...].  Given the current
>state of technology, I'm sure the people making them understand the issues
>better than I do.  I just can't help remembering, back in the good old days
>when I was programming a 16-bit PDP-11 with 128KB of main memory and 16MB
>of hard disk, hearing similar lamentations about how a 32-bit machine
>would be a huge lose: just think of all the extra memory you'll use once
>the pointers and integers all double in size!  I now have a home PC with
>64 times the main memory and 16 times the disk storage (and a processor
>which is probably 10 times faster, at 1/10 the cost!)  All of this happened
>within 10 years.  Although I would like to have 4GB of main memory in a
>desktop box, my main interest in a large address space is being able to
>map ANYTHING of interest into memory.  It might be a fairly small database
>mapped read-only over a pair of tin cans connected by string, but hey...

  Then you ought to *like* pointer swizzling.  Your address space can be
arbitrarily large, at little cost, on hardware with a fixed-size bus and
in a language with only one kind of pointer.  You can implement a really
huge address space if you want to, one in which you *could* address more
bytes of memory than there are subatomic particles in the universe.

  Pointer swizzling is like segmentation, but with the upper bits of the
address translated _at_page_fault_time_ rather than by hardware.  And
(mercifully) programmers needn't know any of this is going on, if the
language they use fully supports the abstraction of pointers.
(Unrestricted C doesn't, quite.  Eiffel and Modula-3 do, as do Lisp
and Smalltalk.)  Like segmentation, though, it gets into trouble when
object sizes approach the segment size.  But if you just want to map a
big database into your desktop machine, you're *really* in luck.  You
could have all of the computers in the world implement one unbelievably
huge heap, and you could follow pointers around the planet.
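
  To make that concrete, here is a minimal sketch in C of the
translate-at-fault-time idea.  Everything in it (the names, the linear
lookup table, the 4KB page size) is made up for illustration, and a real
system would also need an unswizzle map for writing pages back out:

    /* Hypothetical sketch: turn 64-bit persistent addresses into
       native pointers the first time their page is referenced. */

    #include <stdint.h>
    #include <stdlib.h>

    #define PAGE_SIZE 4096
    #define MAX_PAGES 1024               /* no overflow check; a sketch */

    typedef uint64_t longptr;            /* persistent (long) address */

    static uint64_t ppage_of[MAX_PAGES]; /* persistent page numbers */
    static void    *frame_of[MAX_PAGES]; /* their in-core frames */
    static int      npages;

    /* Find (or reserve) the in-core frame for a persistent page. */
    static void *frame_for(uint64_t ppage)
    {
        for (int i = 0; i < npages; i++)
            if (ppage_of[i] == ppage)
                return frame_of[i];
        /* First reference: reserve a frame now so the pointer can be
           swizzled; its contents are faulted in only if touched. */
        ppage_of[npages] = ppage;
        frame_of[npages] = malloc(PAGE_SIZE);
        return frame_of[npages++];
    }

    /* Translate one long pointer found in a freshly faulted page
       into an ordinary hardware pointer. */
    void *swizzle(longptr p)
    {
        return (char *)frame_for(p / PAGE_SIZE) + (p % PAGE_SIZE);
    }

Nothing above cares how wide longptr really is: widen it to 96 or 128
bits (as a struct of two words, say) and the hardware never knows.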

  The real question is whether the bus should be 32 or 64 bits, or
something in between.  Even if you have a 64-bit word, pointer swizzling
could be handy -- you could implement the heap described above without
*ever* running out of bits.  You can have an arbitrarily large address
space with no problem.

>Dan S. 

   -- Paul
-- 
Paul R. Wilson                         
Software Systems Laboratory               lab ph.: (312) 996-9216
U. of Illin. at C. EECS Dept. (M/C 154)   wilson@bert.eecs.uic.edu
Box 4348   Chicago,IL 60680 

raje@dolores.Stanford.EDU (Prasad Raje) (02/19/91)

First the statement:

A 64-bit address will be able to address each byte in a memory array
10.5 kilometers on a side, populated chock full with 1Gbit DRAMs.

put differently

You will require a square 10.5 kilometers on a side completely filled
with 1Gbit DRAMs to exhaust the addressability of a 64-bit byte pointer.

(the thought of 1Gbit DRAMs from Stanford to Sunnyvale is weirdly attractive)

The (simple) math:

1 Gbit DRAMs should be available by 2000 A.D. and are estimated to
occupy an area of 8 cm^2 each.
1 GByte (2^30 bytes) of memory is 8 such chips, occupying 64 cm^2.
2^64 bytes of memory will occupy 64 x 2^34 cm^2.
Put the chips edge to edge (never mind interconnect; maybe we will have
parallel optical connections in the Z direction) and you end up with a
square 10.5 kilometers on a side (6.5 miles for you non-metric types).
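
(A quick program to check the arithmetic, taking the estimates above --
2^30 bits and 8 cm^2 per chip -- as given:)

    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        double chips    = pow(2, 64) * 8 / pow(2, 30); /* chips for 2^64 bytes */
        double area_cm2 = chips * 8.0;                 /* 8 cm^2 per chip */
        double side_km  = sqrt(area_cm2) / 1.0e5;      /* cm -> km */
        printf("%.3g chips, %.2f km (%.2f mi) on a side\n",
               chips, side_km, side_km / 1.609344);
        return 0;  /* prints: 1.37e+11 chips, 10.49 km (6.52 mi) on a side */
    }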

So the question is: just what kind of store is a processor, or even a
massive bevy of processors, going to address with 64-bit pointers?

Maybe we are talking about addressing files on disk.  I am not intimately
familiar with disk storage densities, but my impression is that disk is
not much denser (in surface area per bit) than DRAM.  (Please correct
me if I am wrong.)

Prasad (just getting a feel for how big 2^64 really is)

trebor@lkbreth.foretune.co.jp (Robert Trebor Woodhead) (02/19/91)

raje@dolores.Stanford.EDU (Prasad Raje) writes:


>First the statement:

>A 64-bit address will be able to address each byte in a memory array
>10.5 kilometers on a side, populated chock full with 1Gbit DRAMs.

>So the question is: just what kind of store is a processor, or even a
>massive bevy of processors, going to address with 64-bit pointers?

Well, gee, for starters, consider massively parallel hydrodynamics
simulations.  It is important to realize that "the need for a 64-bit
address space" does not equate to "the need for 2^64 bits/bytes/words"
of real memory.

Also, consider a worldwide "community memory pool".  Millions of people
make their "square meter" of 1Gbit DRAMs readable to the rest of the
world, adding pointers to other people's data as they wish.

I confidently predict that within 20 years (perhaps within 10!) 64 bits
will seem too small, and the big fight will be between the "96 bits is
enough" and the "you're nuts, we need at least 128 bits" camps, with a
lunatic fringe out there clamoring for 256 bit addressing.

Woodhead's 1st law of Computer Resource Utilization:

   "Each time you double the {size of RAM/size of hard disk/speed of CPU},
    the time required before this increased capacity becomes inadequate
    is halved."

Woodhead's 2nd law:

   "For any interesting task you want to do, your processor is always
    a little bit too slow, and your memory is always a little bit too
    small.  This is true on every machine from a Nintendo to a Cray.
    Insightful programmers refer to this as the 'Job Security
    Syndrome.'"


-- 
+--------------------------------------------------------------------------+
| Robert J. Woodhead, Biar Games / AnimEigo, Incs.   trebor@foretune.co.jp |
| "The Force. It surrounds us; It enfolds us; It gets us dates on Saturday |
| Nights." -- Obi Wan Kenobi, Famous Jedi Knight and Party Animal.         |
+--------------------------------------------------------------------------+

dhinds@elaine26.stanford.edu (David Hinds) (02/20/91)

In article <RAJE.91Feb18123559@dolores.Stanford.EDU> raje@dolores.Stanford.EDU (Prasad Raje) writes:
>
>First the statement:
>
>A 64-bit address will be able to address each byte in a memory array
>10.5 kilometers on a side, populated chock full with 1Gbit DRAMs.
>
   How about using holographic memories?  I saw a description in BYTE a
few months back of Bellcore prototypes with storage densities of
10^12 bits per cubic centimeter.  By my calculations, a cube 2.6 meters
on a side would have a capacity of 2^64 bits.  Double the edge length to
get 2^64 bytes.  This is non-volatile storage, and retrieving a page of
100K bits takes "a nanosecond".  The I/O interface would have to handle
10^15 bits/second to keep up with this, but then you could read the
entire cube in a few hours.  I suspect this may take a few years to
develop.
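
(Checking those figures, taking the quoted 10^12 bits/cm^3 density and
10^15 bits/second interface rate at face value:)

    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        double bits   = pow(2, 64);
        double vol    = bits / 1.0e12;          /* cm^3 at 1e12 bits/cm^3 */
        double edge_m = cbrt(vol) / 100.0;      /* cube edge in meters */
        double hours  = bits / 1.0e15 / 3600.0; /* full read at 1e15 b/s */
        printf("edge %.1f m (x2 for bytes), read time %.1f h\n",
               edge_m, hours);
        return 0;  /* prints: edge 2.6 m (x2 for bytes), read time 5.1 h */
    }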

 -David Hinds
  dhinds@cb-iris.stanford.edu

wicklund@intellistor.com (Tom Wicklund) (02/20/91)

In <RAJE.91Feb18123559@dolores.Stanford.EDU> raje@dolores.Stanford.EDU (Prasad Raje) writes:

>First the statement:

>A 64-bit address will be able to address each byte in a memory array
>10.5 kilometers on a side, populated chock full with 1Gbit DRAMs.

>You will require a square 10.5 kilometers on a side completely filled
>with 1Gbit DRAMs to exhaust the addressability of a 64-bit byte pointer.

>(the thought of 1Gbit DRAMs from Stanford to Sunnyvale is weirdly attractive)


Before deciding how ridiculous a fully populated 64-bit address space
is, consider whether or not a full 64-bit space will appear
immediately.  The short-term need is for more than 32 bits.  I imagine
that processors might be built with 36, 40, 48, etc. physical address
lines rather than a full 64.  How many real systems today implement more
than 24-26 bits of physical address, even though the chips can handle 32?

Alternatively, consider the size of a 4GB memory back in the early 80's,
when 32-bit addresses were being designed into processors.  The dominant
memory chip was the 256Kbit part (32KB), so you'd need 131,072 chips to
implement the memory.  Obviously ridiculous at the time.
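
(The chip count checks out -- a 256Kbit part holds 32KB:)

    #include <stdio.h>

    int main(void)
    {
        unsigned long long bits = 4ULL << 30 << 3;  /* 4 GB in bits */
        printf("%llu chips\n", bits / (256 << 10)); /* prints: 131072 chips */
        return 0;
    }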

dmocsny@minerva.che.uc.edu (Daniel Mocsny) (02/22/91)

In article <9042@lkbreth.foretune.co.jp> trebor@lkbreth.foretune.co.jp (Robert Trebor Woodhead) writes:
>I confidently predict that within 20 years (perhaps within 10!) 64 bits
>will seem too small, and the big fight will be between the "96 bits is
>enough" and the "you're nuts, we need at least 128 bits" camps, with a
>lunatic fringe out there clamoring for 256 bit addressing.

Very few people who say "enough" understand fully the venerable
historic tradition from which they speak. Humans have two conflicting
characteristics: we are at once (1) almost infinitely adaptive, and
(2) almost infinitely greedy. Because reality is very hard on us, 
nobody gets to be as greedy as they would like to be. Our adaptive
ability rescues us, by helping us believe that whatever we can get
at time t is what we in fact "need". However, this human tendency
to mistake the attainable for the desirable is nothing more than a
psychological coping strategy. When something more becomes attainable 
(and this becomes sufficiently obvious), we eventually revise
our notion of "need" upward to match it.

However, overcoming our adaptation to old limits takes time.  Almost 
every significant innovation meets widespread initial skepticism. 
Hard reality forces people to truncate their expectations, to the point
where limitations actually DEFINE life for us. Relaxing a limitation
can then seem like a threat instead of an opportunity.

Suppose you can chain a lion to a stake. Eventually the lion will
stop tugging at the chain, and accept the small circle of ground as
his territory. Keep him there for enough years, and supposedly you
can one day remove the chain, but he will no longer try to run away.
He has grown used to life within a small circle, and no longer 
understands the desire to go anywhere else. In fact, he may have
become an "expert" at living in that pathetic circumstance, and he
may resent having to complicate his life with a whole bunch of new
unknowns.

This mentality has been pervasive among humans all through the
technological era. For example: 

"What need have we of this 'telephone'? London has no shortage of
messenger boys."

19th-century London business had adapted itself to a particular method
of communication.  Its slow speed and inefficiency limited what that
business could accomplish, but they also defined the framework in which
business could take place.  Therefore, if you wanted to succeed
in business, you had to adapt your operations to match
the limits of existing technology. To be ahead of your time was to
make a fatal error. You either designed a business that could run well
with only the services of messenger boys, or else you went broke.

A system which defines success as ability to adapt to current limits
is not a system which fosters visionary thinking. Almost invariably, 
the innovators are in some sense "outside" the system. The system as
a whole does not embrace the innovation until it becomes apparent as
the new limiting reality.

Unlike previous generations, however, we do not have to wait decades
between revolutions.  One would think that the last ten years would
suffice to quell forever anyone's tendency to pronounce "enough", but
this does not seem to happen.


--
Dan Mocsny				
Internet: dmocsny@minerva.che.uc.edu