[comp.arch] 64-bits, How many years?

rmbult01@ulkyvx.BITNET (Robert M. Bultman) (02/17/91)

In article <6050@mentor.cc.purdue.edu>, hrubin@pop.stat.purdue.edu (Herman Rubin) writes:
> In article <3206@crdos1.crd.ge.COM>, davidsen@crdos1.crd.ge.COM (Wm E Davidsen Jr) writes:
>             .....................
>
> >   Assuming that you have a 1 gigaBYTE bus on that memory, it will take
> > 316 years (almost 317) to swap a program in.
>
> The CYBER 205, definitely not the fastest machine in the world, can move
> roughly 50 megawords (400 megabytes) per second per pipe.  Now even allowing
> for overhead (vector units are limited to 65535 words, and setup costs), this
> time will not be doubled.  I see no way to get your pessimistic result.
> --
>
And in <MCCALPIN.91Feb16123129@pereland.cms.udel.edu> mccalpin@perelandra.cms.udel.edu (John D. McCalpin) writes:
> Herman probably did not understand what Davidsen was saying.
> I get a slightly different number from the following calculation:
>
> perelandra 1% bc
> m=2^64              % number of bytes addressable by 64 bits
> 18,446,744,073,709,551,616  % a big number 2*10^19
> r=1,000,000,000         % 1 GB/s data bus rate
> m/r
> 18,446,744,073          % time in seconds for transfer
> m/r/(86400*365)
> 584             % time in years for transfer
> quit
> --
> John D. McCalpin            mccalpin@perelandra.cms.udel.edu
> Assistant Professor         mccalpin@brahms.udel.edu
> College of Marine Studies, U. Del.  J.MCCALPIN/OMNET

Well, here is another slightly different answer:

Hypothetical 1 Gbyte transfer rate:
===================================
  address      transfer
   space         rate         seconds to years conversion
|---------|   |---------|   |-----------------------------|
                second         hour        day       year
2**64 bytes X ----------- X ---------- X ------- X -------- = 544.77 years
              2**30 bytes   3600 secs.   24 hrs.   365 days


Hypothetical slow CYBER 205 operating without setup:
====================================================
  address     transfer rate 1      seconds to
   space      CYBER 205 pipe     years convers.
|---------|   |-------------|   |--------------|
                  second             year          1394.61 years
2**64 bytes X --------------- X ---------------- = --------------
              400*2**20 bytes   31,536,000 secs.   CYBER 205 pipe

Now, just how many pipes are there in a CYBER 205...

                   year
define: T = ------------------
            31,536,000 seconds

CRAY-1 using all i/o channels at same time:
===========================================
              1 word          second
2**64 bytes X ------- X ------------------------ X T  = 6,093.15 years
              8 bytes   500,000 words X 24 chnls

(Note: above transfer rate based upon maximum data streaming rate
of 500,000 words/second from Richard M. Russell, "The CRAY-1 Computer
System", Communications of the ACM, Jan. '78, V21, N1, Table I,
and was assumed to be per channel, not combined)
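
For anyone who wants to re-run these numbers, here is a minimal C sketch of
the same arithmetic (the rates are the ones assumed above, not measured
figures):

#include <stdio.h>

/* Time to move 2**64 bytes at the rates assumed above: a 1 GB/s bus,
   one CYBER 205 pipe, and 24 CRAY-1 channels at 500,000 words/s each. */
int main(void)
{
    double space = 18446744073709551616.0;   /* 2**64 bytes */
    double year  = 365.0 * 24 * 3600;        /* 31,536,000 seconds */

    double gbus  = 1073741824.0;             /* 2**30 bytes/s */
    double cyber = 400.0 * 1048576;          /* 400 * 2**20 bytes/s, one pipe */
    double cray  = 500000.0 * 8 * 24;        /* words/s * 8 bytes * 24 chnls */

    printf("1 GB/s bus:      %8.2f years\n", space / gbus  / year);
    printf("CYBER 205 pipe:  %8.2f years\n", space / cyber / year);
    printf("CRAY-1 channels: %8.2f years\n", space / cray  / year);
    return 0;
}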

Rob Bultman, University of Louisville, Speed Scientific School

gillies@m.cs.uiuc.edu (Don Gillies) (02/19/91)

Re: Is 64bits too much -- how many years?

In the 1940s, main memory was about 1K*48 bits (about 2^13 bytes).  In the
1990's, main memory of a fairly big machine is 256Mbytes (2^28 bytes).
That's a 15 bit increase in 50 years.  So we conclude that because
64-28 = 36, it will take 120 years to outgrow the 64-bit address
space.  By then, the technology for handling the "out of address
space" problem will be a lost art.  Luckily, we will all be dead by
then...  Also, I bet a 64-bit address space will take a fusion reactor
power plant to supply enough energy.....
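
(A quick back-of-envelope check in C, using the same assumed growth figures;
the 15-bits-per-50-years rate is the only input:)

#include <stdio.h>

/* Address bits grew from ~13 (2**13 bytes, 1940s) to ~28 (256 MB, 1990s),
   i.e. 15 bits per 50 years.  At that rate, how long until the remaining
   64 - 28 = 36 bits are used up? */
int main(void)
{
    double bits_per_year = (28.0 - 13.0) / 50.0;    /* 0.3 bits/year */
    double years = (64.0 - 28.0) / bits_per_year;   /* 36 / 0.3 = 120 */
    printf("years to outgrow 64 bits: %.0f\n", years);
    return 0;
}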


Don Gillies	     |  University of Illinois at Urbana-Champaign
gillies@cs.uiuc.edu  |  Digital Computer Lab, 1304 W. Springfield, Urbana IL
---------------------+------------------------------------------------------
"UGH!  WAR! ... What is it GOOD FOR?  ABSOLUTELY NOTHING!"  
	- the song "WAR" by Edwin Starr, circa 1965

-- 

sef@kithrup.COM (Sean Eric Fagan) (02/19/91)

In article <1991Feb18.163010.31688@m.cs.uiuc.edu> gillies@m.cs.uiuc.edu (Don Gillies) writes:
>Also, I bet a 64-bit address space will take a fusion reactor
>power plant to supply enough energy.....

You seem to be assuming that silicon (or even GaAs) will be used.  Using
protein-based "memory" takes a lot less power (none, actually) to maintain.

-- 
Sean Eric Fagan  | "I made the universe, but please don't blame me for it;
sef@kithrup.COM  |  I had a bellyache at the time."
-----------------+           -- The Turtle (Stephen King, _It_)
Any opinions expressed are my own, and generally unpopular with others.

baum@Apple.COM (Allen J. Baum) (02/19/91)

[]
>In article <1991Feb18.163010.31688@m.cs.uiuc.edu> gillies@m.cs.uiuc.edu (Don Gillies) writes:
>
>Re: Is 64bits too much -- how many years?
>I bet a 64-bit address space will take a fusion reactor
>power plant to supply enough energy.....

Remember when computers of the future would take all the power of Niagara Falls
to operate?

--
		  baum@apple.com		(408)974-3385
{decwrl,hplabs}!amdahl!apple!baum

jerry@TALOS.UUCP (Jerry Gitomer) (02/19/91)

baum@Apple.COM (Allen J. Baum) writes:

:[]
::In article <1991Feb18.163010.31688@m.cs.uiuc.edu: gillies@m.cs.uiuc.edu (Don Gillies) writes:
::
::Re: Is 64bits too much -- how many years?
::I bet a 64-bit address space will take a fusion reactor
::power plant to supply enough energy.....

:Remember when computers of the future would take all the power of Niagara Falls
:to operate?

They do, but collectively rather than individually ;-)

-- 
Jerry Gitomer at National Political Resources Inc, Alexandria, VA USA
I am apolitical, have no resources, and speak only for myself.
Ma Bell (703)683-9090      (UUCP:  ...{uupsi,vrdxhq}!pbs!npri6!jerry 

davidsen@crdos1.crd.ge.COM (Wm E Davidsen Jr) (02/20/91)

In article <1991Feb18.163010.31688@m.cs.uiuc.edu> gillies@m.cs.uiuc.edu (Don Gillies) writes:

|                                        So we conclude that because
| 64-28 = 36, it will take 120 years to outgrow the 64-bit address
| space.  

  We may never run out of 64 bits of address space. That's not to say we
won't have problems larger than that, but there's a real possibility
that some limitations of physics will hold us back.

  The first is the size of an electron, and the minimum size of a
trace. A trace has to be a certain size, or it ceases to be a conductor
and becomes an exercise in probability. Therefore you can only downsize
a chip so far, even in theory. Given that limit, and the speed of light
relative to ever-increasing clock rates, it may never be practical to
build a computer with all the memory 64 bits will address, ignoring the
engineering and financial problems.

  Now, the reason I say we "may" never run out is that other technology
may be developed, although I don't think it will be an extension of
anything we have today. Optical computing is size-limited by the
wavelength of light, and is still limited by the speed of light. Mark
that one off as a candidate for ultra-dense computing.

  How about using isotopes of individual atoms for bits? Let's use
hydrogen, since the atoms are small. We add a neutron to the atom for a
1, remove it for a zero. Of course if we have an error and keep
adding... we get a new meaning to the term "program blowup." Well, okay,
I'm kidding, but the external stuff to diddle atoms is likely to be
larger than the atom, so this is unlikely, too.

  Therefore, I conclude that the speed of light makes 64 bits likely as
the largest physical address space we will ever need. I have lots of
faith in new development, but I have faith in relativity and physics,
too.
-- 
bill davidsen	(davidsen@crdos1.crd.GE.COM -or- uunet!crdgw1!crdos1!davidsen)
  "I'll come home in one of two ways, the big parade or in a body bag.
   I prefer the former but I'll take the latter" -Sgt Marco Rodrigez

rmc@snitor.UUCP (Russell Crook) (02/21/91)

In article <3209@crdos1.crd.ge.COM> davidsen@crdos1.crd.ge.com (bill davidsen) writes:
>In article <1991Feb18.163010.31688@m.cs.uiuc.edu> gillies@m.cs.uiuc.edu (Don Gillies) writes:
>
>|                                        So we conclude that because
>| 64-28 = 36, it will take 120 years to outgrow the 64-bit address
>| space.  
>
>  We may never run out of 64 bits of address space. That's not to say we
>won't have problems larger than that, but there's a real possibility
>that some limitations of physics will hold us back.
>
<<<<lots of stuff on sizes, etc. being constrained by physical reality>>>
>
>  Therefore, I conclude that the speed of light makes 64 bits likely as
>the largest physical address space we will ever need. I have lots of
>faith in new development, but I have faith in relativity and physics,
>too.
>-- 
>bill davidsen	(davidsen@crdos1.crd.GE.COM -or- uunet!crdgw1!crdos1!davidsen)
>  "I'll come home in one of two ways, the big parade or in a body bag.
>   I prefer the former but I'll take the latter" -Sgt Marco Rodrigez

All of this presupposes two dimensional memory. In 3D, 64 bits seems
to be in reach.  Some numbers:

2**64 = 2 * 2**63.  Assuming a cubic array of bits, that's 1.26*2**21 bits
on a side, or about 2.6 * 10**6.  If we constrain our memory cells to be one
micron (which isn't too far from current praxis, except in 2D),
this yields a cube about 2.6 metres on a side.  Large, but not ridiculously so.
If you can get the cells down to .1 micron including wires (e.g., 1000
angstroms, or about 10**8 to 10**9 atoms per cell), the size is about 26 cm on
a side, which would fit on a desktop...

The speed of light would, however, restrict the clock rate to some fraction
(say a third) of a gigahertz (i.e., about 3 ns access time) for the smaller
version, and 30 ns or so for the larger one, so there could be some
performance limitations.  Even that applies only to completely random access.
If you treat the memory like a current-day disk, you get 3-30 ns of
seek+latency, followed by some arbitrary transfer rate.
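
(For reference, a small C sketch of the same geometry; the 1 micron and
0.1 micron cell sizes are the assumed figures above, and the light-travel
time is just one traversal of the cube:)

#include <stdio.h>
#include <math.h>

/* Side length of a cube holding 2**64 one-bit cells, and the one-way
   light-travel time across it, for the two assumed cell sizes. */
int main(void)
{
    double side_cells = cbrt(pow(2.0, 64.0));   /* ~2.6e6 cells on a side */
    double c = 3.0e8;                           /* speed of light, m/s */
    double cell[2] = { 1.0e-6, 0.1e-6 };        /* metres per cell */
    int i;

    for (i = 0; i < 2; i++) {
        double side_m = side_cells * cell[i];
        printf("%.1f um cells: cube %.2f m on a side, light transit %.2f ns\n",
               cell[i] * 1e6, side_m, side_m / c * 1e9);
    }
    return 0;
}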

    I won't argue about 128 bits being enough :->

------------------------------------------------------------------------------
Russell Crook, Siemens Nixdorf Information Systems, Toronto Development Centre
2235 Sheppard Ave. E., Willowdale, Ontario, Canada M2J 5B5   +1 416 496 8510
uunet!{imax,lsuc,mnetor}!nixtdc!rmc,  rmc%nixtdc.uucp@{eunet.eu,uunet.uu}.net,
      rmc.tor@nixdorf.com (in N.A.), rmc.tor@nixpbe.uucp (in Europe)
      "... technology so advanced, even we don't know what it does."

keith@MIPS.com (Keith Garrett) (02/22/91)

In article <1991Feb18.202512.13150@kithrup.COM> sef@kithrup.COM (Sean Eric Fagan) writes:
>In article <1991Feb18.163010.31688@m.cs.uiuc.edu> gillies@m.cs.uiuc.edu (Don Gillies) writes:
>>Also, I bet a 64-bit address space will take a fusion reactor
>>power plant to supply enough energy.....
>
>You seem to be assuming that silicon (or even GaAs) will be used.  Using
>protein-based "memory" takes a lot less power (none, actually) to maintain.

Then why do I get so hungry??? 8^}
-- 
Keith Garrett        "This is *MY* opinion, OBVIOUSLY"
      Mips Computer Systems, 930 Arques Ave, Sunnyvale, Ca. 94086
      (408) 524-8110     keith@mips.com  or  {ames,decwrl,prls}!mips!keith

darcy@druid.uucp (D'Arcy J.M. Cain) (02/22/91)

In article <3209@crdos1.crd.ge.COM> bill davidsen writes:
>  Therefore, I conclude that the speed of light makes 64 bits likely as
>the largest physical address space we will ever need. I have lots of
>faith in new development, but I have faith in relativity and physics,
>too.

Boy, that's the kind of statement that can come back to haunt you in ten
or twenty years.  :-)

-- 
D'Arcy J.M. Cain (darcy@druid)     |
D'Arcy Cain Consulting             |   There's no government
West Hill, Ontario, Canada         |   like no government!
+1 416 281 6094                    |

henry@zoo.toronto.edu (Henry Spencer) (02/23/91)

In article <1991Feb21.170537.1441@druid.uucp> darcy@druid.uucp (D'Arcy J.M. Cain) writes:
>>the largest physical address space we will ever need. I have lots of
>>faith in new development, but I have faith in relativity and physics,
>>too.
>
>Boy, that's the kind of statement that can come back to haunt you in ten
>or twenty years.  :-)

Yes, such "proofs" have a tendency to have a lot of hidden assumptions.
Like the "proof" in the late 70s that it was impossible to make 64Kb DRAMs
with optical lithography, which assumed no change in cell design and no
fundamental improvements in the optical processes.  (In fact, both cells
and processes changed, and now people are gearing up to do 64Mb (note M not
K!) DRAMs with optical lithography.)
-- 
"Read the OSI protocol specifications?  | Henry Spencer @ U of Toronto Zoology
I can't even *lift* them!"              |  henry@zoo.toronto.edu  utzoo!henry

kahn@theory.tn.cornell.edu (Shahin Kahn) (02/27/91)

So, it looks like 64-bit-sized memory is out of the question (how many
systems have you seen with more than 256 MB of memory?).
I guess we'll have to page.  Where do we page to?  How many systems do you know
with more than 200 GB of disk?  How much of it is relatively-fast disk?
What am I missing?

I am all for 64-bit addressing, by the way.  In fact, I am for
*arbitrary*-length addressing.

But is this 64-bit addressing thing more than just marketing for the moment?
Or is it that if you're going past 32, you may as well go to 64?
Then why not design it for arbitrary length and just implement 48 bits
for now?

speaking for me,
Shahin.

peter@ficc.ferranti.com (Peter da Silva) (02/27/91)

> Then why not design it for arbitrary length and just implement 48 bits
> for now?

How do you plan to allow for this in the instruction stream? What's the
word size? How big are the registers? How do you implement this?
-- 
Peter da Silva.  `-_-'  peter@ferranti.com
+1 713 274 5180.  'U`  "Have you hugged your wolf today?"

mash@mips.com (John Mashey) (03/03/91)

In article <+MR97B7@xds13.ferranti.com> peter@ficc.ferranti.com (Peter da Silva) writes:
>> Then why not design it for arbitrary length and just implement 48 bits
>> for now?
>How do you plan to allow for this in the instruction stream? What's the
>word size? How big are the registers? How do you implement this?

Actually, the question comes up fairly often, so why don't we just
take care of it once and for all, i.e.:
	why 64-bits?  why not 48 bits?

The answer is simple:
	It's really AWKWARD to build a byte-oriented machine with
	word-sizes that are not power-of-two in terms of character sizes,
	especially if you'd like C to be reasonable on it.

Let's assume that you get over the awkwardnesses of what you think
ints, shorts, and such are.  Let us also assume that you care whether
or not C works on this machine (that is NOT a given, just an assumption;
after all, 48-bit machines have been designed and sold, although
not recently).

However, consider this: memory is usually physically organized
as words or multiples of words.  Consider what happens when you do
a load or store of (for example) 32 bits on a byte-addressed machine:
1) Compute the address.
	T = tag, high-order bits
	I = index, middle bunch of bits
	xx = throw away low-order 2 bits
	T...TI...Ixx
2) Index the cache with I, check the resulting tags.
OR, send T..TI..I to the memory system, with some extra specifier to
provide the access size and alignment.

Now, consider the way byte-addressing works:
	T..TI..I00
	T..TI..I01
	T..TI..I10
	T..TI..I11
	(TI+1..)00
	....
access the 4 bytes, in order; i.e., you can access the word using
a character pointer, and if you keep incrementing it, you get the
first byte of the next word, and all of this works perfectly fine.
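
(In C terms, a sketch of that decomposition for 4-byte words and a
direct-mapped cache; the 12-bit index width is only illustrative:)

#include <stdio.h>

/* Split a byte address into tag / index / byte-offset for 4-byte words:
   everything is a shift or a mask because the word size is a power of 2. */
int main(void)
{
    unsigned long addr   = 0x12345678UL;
    unsigned long offset = addr & 0x3;            /* low 2 bits: byte in word */
    unsigned long index  = (addr >> 2) & 0xFFF;   /* next 12 bits: cache index */
    unsigned long tag    = addr >> 14;            /* remaining high bits: tag */

    printf("tag=%#lx index=%#lx offset=%#lx\n", tag, index, offset);
    return 0;
}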

Now, suppose you have 48-bit words, with 8-bit chars?
Consider word 0.  It has bytes numbered from 0..5, or 000..101.
Now, in what word is byte 6 (110)?
Well, it's in word 1.
So, how do you compute the index of the word that contains the byte,
from the byte address?
	YOU DIVIDE BY 6, which is not a power of 2.
	Given the recent discussions of speed of division, it should be
	clear why computer designers do not wish to include a divide
	(that is NOT just a right-shift) in every partial word access....
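
(A sketch of the contrast in C: byte-to-word index with 4-byte words versus
hypothetical 6-byte words:)

#include <stdio.h>

/* Byte address -> (word, offset).  With 4-byte words it is a shift and a
   mask; with 6-byte words it is a genuine divide and modulo. */
int main(void)
{
    unsigned long byte_addr = 22;

    unsigned long word4 = byte_addr >> 2;    /* 4-byte words: shift */
    unsigned long off4  = byte_addr & 0x3;

    unsigned long word6 = byte_addr / 6;     /* 6-byte words: divide */
    unsigned long off6  = byte_addr % 6;

    printf("byte %lu: word %lu byte %lu (4-byte words), word %lu byte %lu (6-byte words)\n",
           byte_addr, word4, off4, word6, off6);
    return 0;
}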

So, maybe what you do is have 12-bit bytes, which at least gives you
4 of them ...  but has other problems.

Or, maybe, you punt on thinking there is a straightforward incrementation
that maps words into bytes.
	This has been done.  Many word-addressed machines, like DEC
	PDP-10s, and (gasp!) Stanford MIPS use word-addressing,
	but have special byte-pointers and instructions for dealing
	with them. (Note that MIPS Computer Systems MIPS use byte
	addresses - I wouldn't have come here if we'd stuck with word
	addresses :- life is too short.)
	Note that such things usually have a word address, then steal
	a couple of bits to give the byte number within the word, and when you
	do p++ in one of these, the hardware increments the byte offset,
	and if it exceeds the maximum, it resets the offset to 0, and
	increments the word address.  I.e., this sneaks around the
	division problem.

	It IS possible to port C to such things (as, for example,
	people at BTL did with Honeywell mainframes, Sperry/Unisys 1100s,
	XDS Sigma machines, etc), but it is never once been very pleasant.
	(Various BTL friends did ports to some fo t hese; the debriefing
	memos on the efforts were interesting.  I felt especially sorry
	for the folks who did the string routines for the Univac 1100
	series machines a long time ago.  I don't know if it is still
	true, but at that time, the byte-within-word offset was actually
	stored as part of the opcode, i.e., you had things like
	"load 1st byte", "load 2nd byte", (different nomenclature,
	but that's the idea).  Hence, an efficient strcpy was at least
	100s of lines long, as you had to decode the pointers to
	figure out which permutation of alignments was needed for
	the basic loop, i.e.:
		1: load 1st byte, store 1st byte
		   load 2nd byte, store 2nd byte...
		2: load 1st byte, store 2nd byte
		   load 2nd byte, store 3rd byte
	  	3:....
		   
	Worse, there are now huge numbers of application programs that
	are very portable amongst power-of-two-byte-addressed machines,
	but which will be miserable to make run otherwise.  This fact
	WAS NOT TRUE at the time people were doing C ports to such
	machines in the early 1970s, but it's true now.

So...  48 is actually a pretty good number, as it is divisible by
2,3,4,6,8,12,16, and 24, but.....
-- 
-john mashey	DISCLAIMER: <generic disclaimer, I speak for me only, etc>
UUCP: 	 mash@mips.com OR {ames,decwrl,prls,pyramid}!mips!mash 
DDD:  	408-524-7015, 524-8253 or (main number) 408-720-1700
USPS: 	MIPS Computer Systems MS 1/05, 930 E. Arques, Sunnyvale, CA 94086

kym@bingvaxu.cc.binghamton.edu (R. Kym Horsell) (03/03/91)

In article <660@spim.mips.COM> mash@mips.com (John Mashey) writes:
>Now, in what word is byte 6 (110)?
>Well, it's in word 1.
>So, how do you compute the index of the word that contains the byte,
>from the byte address?
>	YOU DIVIDE BY 6, which is not a power of 2.
>	Given the recent discussions of speed of division, it should be
>	clear why computer designers do not wish to include a divide
>	(that is NOT just a right-shift) in every partial word access....

Not to disagree with the thrust of John's argument, which is based on
more than the little detail I'm about to nit pick :-), but division by
constants _can_ be fairly efficient (although non-powers-of-two are not
half so good as powers of 2).  Viz:

#include <stdlib.h>

int main(void){
	long sixth=0xaaaa;	/* ~2**18/6, a fixed-point reciprocal of 6 */
	long i;
	for(i=1;i<0x8000L;i++){
		long j=i*sixth;			/* "multiply by 1/6" */
		long j1=(j+(1<<15))>>18;	/* round and shift */
		if(i/6!=j1)exit(1);		/* check against a real divide */
		}
	return 0;
	}

To make the `multiply by 1/6' pipe requires a typical tree of
depth 4 for 48 bits. I'm not seriously suggesting putting up with
a latency of >=4 for address decoding, but higher figures _have_ been
known :-).

-kym

rpw3@rigden.wpd.sgi.com (Rob Warnock) (03/04/91)

In article <660@spim.mips.COM> mash@mips.com (John Mashey) writes:
+---------------
| Or, maybe, you punt on thinking there is a staightforward incrementation
| that maps words into bytes.
| 	This has been done.  Many word-addressed machines, like DEC PDP-10s...
|
| 	It IS possible to port C to such things... I felt especially sorry
| 	for the folks who did the string routines for the Univac 1100
| 	series machines a long time ago.  I don't know if it is still
| 	true, but at that time, the byte-within-word offset was actually
| 	stored as part of the opcode, i.e., you had things like
| 	"load 1st byte", "load 2nd byte",...
+---------------

The way the PDP-10 handled this (and not just for C -- it was a system-wide
convention) was to have a byte-pointer for a "string" point to the "-1'st"
byte within the first word (by having the "P" field of the pointer be 36),
whereupon an Increment-And-Load-Byte (ILDB) instruction would get the first
byte of the string (and another ILDB would get the 2nd, and so on), but the
word address for the string pointer was still "correct".  Cute hack...
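
(A rough C analogue of that convention -- a byte pointer carrying a word
index plus a byte position, initialized one byte "before" the string so that
increment-and-load fetches byte 0 first; this is only a sketch of the idea,
not real PDP-10 byte-pointer semantics, and the 4-byte word is just for
illustration:)

#include <stdio.h>

#define BYTES_PER_WORD 4    /* a PDP-10 packed e.g. five 7-bit bytes per word */

struct byteptr { int word; int pos; };

/* Increment-and-load-byte: bump the position first, spilling into the
   next word if needed, then load.  Starting at pos = -1 means the first
   call returns byte 0 of word 0, yet the word address starts out correct. */
static char ildb(struct byteptr *p, const char words[][BYTES_PER_WORD])
{
    if (++p->pos >= BYTES_PER_WORD) {
        p->pos = 0;
        p->word++;
    }
    return words[p->word][p->pos];
}

int main(void)
{
    const char data[2][BYTES_PER_WORD] = {
        { 'A', 'B', 'C', 'D' }, { 'E', 'F', 'G', 'H' }
    };
    struct byteptr p = { 0, -1 };   /* the "-1'st" byte of word 0 */
    int i;

    for (i = 0; i < 8; i++)
        putchar(ildb(&p, data));
    putchar('\n');                  /* prints ABCDEFGH */
    return 0;
}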


-Rob

-----
Rob Warnock, MS-1L/515		rpw3@sgi.com		rpw3@pei.com
Silicon Graphics, Inc.		(415)335-1673		Protocol Engines, Inc.
2011 N. Shoreline Blvd.
Mountain View, CA  94039-7311

koll@NECAM.tdd.sj.nec.com (Michael Goldman) (03/20/91)

Back in Feb. '91 bill davidsen posted a comment to the effect that the limits
of trace size, electron size, and photon size would mean 64 bits address
size would be all we'd ever need - because we couldn't use any more than that.
Others commented about the need for large virtual spaces but this is to mention
some ways around the limits.

I read sometime in the last 6 months that a research group (AT&T or one of
the other giga-companies) was working on using the energy state of the
electrons of an atom as a way of storing information.

I.e., since an electron can be at any of a large number of energy levels, which
can be induced by sending a photon to the electron, then the energy level of
the valence electrons would be one-to-one mappable to numbers.  So, if one
atom could have 8 easily manipulated energy states, two atoms could represent
numbers 0-63, 4 atoms could represent (8 * 8 * 8 * 8) 0-4095, etc.

One could also include the energy state of the nucleus, or the spin state
of the electron or nucleus, but I haven't heard of anyone working on that
angle.  Signal transmission could be just the appropriate-energy photon
traveling in free-space.  With the resulting density of components, it
would be practical to add all the functions of a CPU to each word.  The
ultimate in massive parallelism.
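
(Just to make the counting explicit -- with s distinguishable states per atom,
n atoms give s**n codes; a throwaway C check of the figures above:)

#include <stdio.h>

/* 8 states per atom: 2 atoms cover 0-63, 4 atoms cover 0-4095, as claimed. */
int main(void)
{
    unsigned long codes = 1;
    int states = 8, atoms;

    for (atoms = 1; atoms <= 4; atoms++) {
        codes *= states;
        printf("%d atoms: values 0-%lu\n", atoms, codes - 1);
    }
    return 0;
}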

Possibly, this is the direction neural nets are going towards.

	("Say buddy, can you paradigm?")

hrubin@pop.stat.purdue.edu (Herman Rubin) (03/20/91)

In article <1991Mar19.225915.17474@sj.nec.com>, koll@NECAM.tdd.sj.nec.com (Michael Goldman) writes:
> 
> Back in Feb. '91 bill davidsen posted a comment to the effect that the limits
> of trace size, electron size, and photon size would mean 64 bits address
> size would be all we'd ever need - because we couldn't use any more than that.
> Others commented about the need for large virtual spaces but this is to mention
> some ways around the limits.
> 
> I read sometime in the last 6 months that a research group (AT&T or one of
> the other giga-companies) was working on using the energy state of the
> electrons of an atom as a way of storing information.
> 
> I.e., since an electron can be at any of a large number of energy levels, which
> can be induced by sending a photon to the electron, then the energy level of
> the valence electrons would be one-to-one mappable to numbers.  So, if one
> atom could have 8 easily manipulated energy states, two atoms could represent
> numbers 0-63, 4 atoms could represent (8 * 8 * 8 * 8) 0-4095, etc.

		[More details of the same type.]

There are real problems with this.  The uncertainty principle also requires
that if the energy is precisely known, the time cannot be.  If we want very
high switching speeds, and we need precision, do we not need enough particles
to keep the uncertainty principle from overwhelming things?  Also, how about
spontaneous emission of radiation?

Furthermore, if one sends a photon to an electron, one cannot be sure what
will happen.  The probability distribution of results is rarely simple.  
An energy high enough to cause the desired transition to be reasonably
likely is high enough to cause unwanted transitions with good probability.
The way out is redundancy, but there goes the efficiency.

All present memory devices to my knowledge, except those causing permanent
changes on a write, use an inefficient hysteresis loop.  It is this 
thermal inefficiency which allows reliable storage.
--
Herman Rubin, Dept. of Statistics, Purdue Univ., West Lafayette IN47907-1399
Phone: (317)494-6054
hrubin@l.cc.purdue.edu (Internet, bitnet)   {purdue,pur-ee}!l.cc!hrubin(UUCP)

koll@dragon.nec.com (Michael Goldman) (03/21/91)

 Herman Rubin was kind enough to dignify my meanderings on electron states
mapping to numbers resulting in molecule-sized CPUs with the thought that
the Uncertainty Principle ("It's 10 femtoseconds O'clock! Do you know where
your electron is?") would preclude combining high switching speeds with
precision.

 All of which is true (as far as current theory goes), but the energy levels
 of valence electrons are quantized so that your precision would only have
 to be such that the photon sent is >= the quantum necessary to jump to
 the next higher energy level and < that needed to go to the level above that.
 One could simplify it by having only 2 states per atom - base energy level = 0
 and above base level = 1.  While the speed would be limited, it would be orders
 of magnitude greater than the current solid state devices which rely on masses
 of electrons creeping over energy barriers.  As for the point of state decay,
 this would map to the capacitor leakage of RAMs and require refreshing
 periodically.  The same old problems but on a much smaller and faster scale.
 
 But maybe you still don't like this.  How about another research area I read
 of some months ago, of using the relative positions of nodes on a long organic
 molecule to represent bits.  The simplest case would be 2 atoms sticking out
 on a small chain like watch hands (watch hands? boy, does that date me).
 Each 90 degree relative position could be distinguished so that you could
 count to 4 on each such chain.  This would lend a whole new meaning
 to the term computer virus!

mash@mips.com (John Mashey) (03/21/91)

In article <1991Mar20.162907.22485@sj.nec.com> koll@dragon.nec.com (Michael Goldman) writes:
>
> All of which is true (as far as current theory goes), but the energy levels
> of valence electrons are quantized so that your precision would only have
....  more along discussion of making computers REALLY small and fast...

Of course, there had better be some serious redundancy in all of this.

I'm reminded of a science fiction story (can somebody recall the author
and title?)  about successive waves of shrinkage as all of the information
in galactic society gets squeezed into smaller and smaller physical space
(using techniques of the ilk mentioned in this thread), and this is
wonderful, but then, somewhere, somebody loses the tiny object,
or perhaps the index to it, and society collapses because no one can
get the information back :-) 
-- 
-john mashey	DISCLAIMER: <generic disclaimer, I speak for me only, etc>
UUCP: 	 mash@mips.com OR {ames,decwrl,prls,pyramid}!mips!mash 
DDD:  	408-524-7015, 524-8253 or (main number) 408-720-1700
USPS: 	MIPS Computer Systems MS 1/05, 930 E. Arques, Sunnyvale, CA 94086

koll@NECAM.tdd.sj.nec.com (Michael Goldman) (03/22/91)

This is to argue that we may be able to circumvent the speed of light limiting
computer speed by using the quantum tunneling effect currently under
development at TI and others. (I think they made a functioning circuit a year
or two ago.)

In a much earlier posting, Bill Davidsen expressed an innocent faith in
relativity and physics as providing limits to getting to a 64-bit address space.
I'm here to tell ya, Bill, they ain't the same.  I.e., relativity is a subset
of physics.  Quantum mechanics is another subset.

The special theory of relativity is basically a philosophical
resolution of the paradox posed by the finite speed of light, and the
non-existence of an "ether" or medium for its transmission.  The paradox was
that one could otherwise travel faster than a light beam carrying information
about events that happened before your departure.  The resolution was that
you can't travel faster than light (other results follow from that).

However, if one is on the sub-atomic level, this paradox may not arise.
The Heisenberg uncertainty principle states that (your uncertainty in
determining a particle's momentum) * (your uncertainty in determining its
position) >= Planck's constant / (4 * pi).  Therefore,
if you know a particle's position precisely, your uncertainty concerning
its speed is infinite, so its speed could be infinite!

This is more than a wild surmise.  When I was studying particle physics, I
went to the professor about a problem in which it seemed that some of the
sub-atomic particles were exchanging other sub-atomic particles (as a binding
force) in times that implied exceeding the speed of light.  He assured me that
that was, in fact, what was happening due to quantum mechanics and the
resultant probability distributions and reminded me of the uncertainty
principle in the context where light = a photon = just another sub-atomic
particle.  Relativity is not violated, but the IMPLICATION of relativity in
the macro world we live in that the speed of light cannot be exceeded does
not apply.  Quantum mechanics gives results which say that a particle has a
finite (though tiny) probability of being anywhere in the universe, which
means that it could be definitely at point A at one time, and definitely
at point B at another time, and that the delta-time is a probability
distribution on an open-ended scale - i.e., the probability is the area under
a curve that extends to infinity.  (Einstein and others had problems with
this.  Einstein never accepted it, hence his famous saying that God does not
play dice with the universe.)  Cf. "Schroedinger's cat".

The quantum tunneling effect that TI and others are pursuing relies on the
quantum mechanical results above.  These do not change simply because there
is a barrier between the two places an electron might be.  One transmits
electrons THROUGH (not over) potential barriers by somehow using these effects
to make the probability ~ 1 that an electron that was once on one side of
the barrier is now on the other side of the barrier.  I don't know how they
do this, but an interesting thought is that they are in a realm where
relativistic results take on a different nature than on the macro scale
we live in.  So, maybe we'll get switching times faster than light!?

	"A lady who was truly quite bright,
	 Traveled far faster than light.
	 She set out one day,
	 In a relative way,
	 And returned the previous night."

acha@CS.CMU.EDU (Anurag Acharya) (03/22/91)

In article <1991Mar21.181256.1494@sj.nec.com> koll@NECAM.tdd.sj.nec.com (Michael Goldman) writes:
>This is to argue that we may be able to circumvent the speed of light limiting
> computer speed by using the quantum tunneling effect currently under
> development at TI and others. (I think they made a functioning circuit a year
> or two ago.)
...............
>So, maybe we'll get switching times faster than light!?

One of the implications of the theory of special relativity is that
*information* cannot be transmitted at a speed faster than that of light.
Therefore, it is *not* possible to switch faster than the speed of light --
no matter what quantum mechanical trick you pull. Tunnel effects have been
used in working devices for quite some time now -- e.g., tunnel diodes.
No one claims that any of these will ever switch faster than the speed of
light.

anurag