lindsay@MATHOM.GANDALF.CS.CMU.EDU (Donald Lindsay) (11/24/89)
nelson@m.cs.uiuc.edu writes:
>> If you project the slope of the clock rates of supercomputers, you
>> will see sub-nanosecond CYCLE times before 1995.
> Actually, I don't see this (dare I say it) EVER occurring.

Oh ye of little faith!  Today's gates are, repeat after me, slow, slow,
slow!  For example, if you build a ring oscillator on a Motorola MCA3 ECL
gate array, you see a 120 picosecond gate delay.  Many major labs have
built ring oscillators at 20 ps or below.  The lab record for a HEMT
(i.e. GaAs/AlGaAs) is about 10 picoseconds.  That's at room temperature:
in liquid nitrogen the HEMT record is better than 6 picoseconds.  Beyond
HEMT is a zoo of proposed exotic devices: unipolar tunnelling transistors,
quantum dots, and the like.  One thing is sure: today's Y-MP gates aren't
the last word in speed.

As for SRAM: there are already lab chips with access times under one
nanosecond.  Cypress is currently advertising a 3 ns SRAM (1Kx4).

>a nanosecond is only 12 inches of wire

More precisely, a nanosecond is 30 cm in a vacuum.  The copper of the
VAX 9000 circuit board (MCU) gets 160 picoseconds/inch.  However, an MCU
is only 4 inches across (two thirds of a nanosecond), and can
realistically hold 40 or 50 SRAMs.  Plus, of course, we would expect a
hot processor to have an on-chip cache.  The fancy packaging would be
important only to keep the second-level cache nearby, thus reducing the
cost of a first-level miss.

A more serious problem is power distribution.  The 10 picosecond HEMT
took a milliwatt per gate: ouch.  Luckily, liquid nitrogen temperatures
reduce resistivity by a factor of six.  That temperature also slows down
physical processes - including the processes by which chips fail (such
as atoms electromigrating downstream in your power wires).

The other serious problem is signal distribution around a chip.  Wiring
doesn't shrink as easily as devices, so we will see heavy emphasis on
keeping things local.  Special-purpose chips (say, signal processing
pipelines) might reach the equivalent of 10 GHz before the century is
out.  General-purpose chips have to push signals through metal (e.g. the
bus through the cache) and will probably bottleneck on the capacitance.
-- 
Don		D.C.Lindsay 	Carnegie Mellon Computer Science
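A quick sanity check of the cycle-time budget these figures imply.  The
gate delay, board propagation speed, and board width below are the
numbers quoted in the post; the logic depth per cycle is an assumed
illustrative value, so treat this as a back-of-the-envelope sketch
rather than a design:

/* Cycle-time budget sketch.  The 120 ps gate, 160 ps/inch board, and
 * 4-inch MCU span are the figures quoted above; the logic depth per
 * pipeline stage is an assumed illustrative value.                   */
#include <stdio.h>

int main(void)
{
    double gate_delay_ps  = 120.0;  /* Motorola MCA3 ECL gate (quoted)  */
    double logic_depth    = 8.0;    /* assumed gates per pipeline stage */
    double wire_ps_per_in = 160.0;  /* VAX 9000 MCU board (quoted)      */
    double board_span_in  = 4.0;    /* MCU width (quoted)               */

    double logic_ps = gate_delay_ps * logic_depth;
    double wire_ps  = wire_ps_per_in * board_span_in;

    printf("logic: %.0f ps   board crossing: %.0f ps   total: %.0f ps\n",
           logic_ps, wire_ps, logic_ps + wire_ps);

    /* With 20 ps lab gates the same depth is 160 ps of logic --
     * comfortably sub-nanosecond even with one full board crossing.  */
    return 0;
}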
seanf@sco.COM (Sean Fagan) (11/25/89)
In article <7076@pt.cs.cmu.edu> lindsay@MATHOM.GANDALF.CS.CMU.EDU (Donald Lindsay) writes:
>nelson@m.cs.uiuc.edu writes:
>>> If you project the slope of the clock rates of supercomputers, you
>>> will see sub-nanosecond CYCLE times before 1995.
>> Actually, I don't see this (dare I say it) EVER occurring.
>Oh ye of little faith!
>Many major labs have built ring
>oscillators at 20 ps or below.

And don't forget protein-based logic.  (Yeah, yeah: jokes of "don't
forget to feed it!" abound.)

CMU announced, a few years ago, a protein-based RAM and a NAND gate.  The
RAM had an access time of, if I remember correctly, something like 3
picoseconds, and the NAND gate was at something like 6 picoseconds.  They
were using lasers to access and change the states; the protein just
stored it (as large as protein molecules are, they are orders of
magnitude *smaller* than any current circuit).

*If* this pans out (and I believe that either Cray Research or Cray
Computer is looking into it), it could be *very* significant.  So, yes,
you could end up with a Cray-7, with 16384 processors (extrapolating from
the trend of the past 3), and deity alone knows how much memory, all on
your desk.  But don't forget to feed it 8-).

(Seriously: I don't know enough about current research to say whether it
would work or not.  Initial results show it *might* work, and might work
soon enough and cheaply enough to be a viable research project, but
that's all I know.)
-- 
Sean Eric Fagan  | "Time has little to do with infinity and jelly donuts."
seanf@sco.COM    |    -- Thomas Magnum (Tom Selleck), _Magnum, P.I._
(408) 458-1422   | Any opinions expressed are my own, not my employers'.
terry@sunquest.UUCP (Terry Friedrichsen) (12/05/89)
In article <7076@pt.cs.cmu.edu>, lindsay@MATHOM.GANDALF.CS.CMU.EDU (Donald Lindsay) writes:
> Beyond
> HEMT is a zoo of proposed exotic devices: unipolar tunnelling
> transistors, quantum dots, and the like.  One thing is sure: today's
> Y-MP gates aren't the last word in speed.
>
> Don		D.C.Lindsay 	Carnegie Mellon Computer Science

Could you do me (and perhaps many others in comp.arch) a favor, if it's
convenient, and post some references to these "proposed exotic devices"?
Sounds like interesting reading (I GOTTA find out what a "quantum dot"
is!).

Terry R. Friedrichsen

TERRY@SDSC.EDU   (alternate address; I live and work in Tucson)

"Do, or do not.  There is no 'try'."   Yoda - The Empire Strikes Back
lindsay@MATHOM.GANDALF.CS.CMU.EDU (Donald Lindsay) (12/11/89)
In article <1106@sunquest.UUCP> terry@sunquest.UUCP (Terry Friedrichsen) writes:
>Could you do me (and perhaps many others in comp.arch) a favor, if it's
>convenient, and post some references to these "proposed exotic devices"?
>Sounds like interesting reading (I GOTTA find out what a "quantum dot" is!).

Sure.

The one-color diagrams are in IEEE Journal of Quantum Electronics,
vol. QE-22, #9, Sep 86.  (Special issue: hundreds of pages: have a recent
physics degree.)  Plus, keep an eye on Applied Physics Letters.

Two-color diagrams are hard to find; the last one I saw was:
	Electronics, Oct 88, p. 143,
	"Will Quantum-Effect Technology Represent a Quantum Jump in ICs?"

Three-color diagrams are likewise scarce:
	R. T. Bate, Scientific American, vol. 256, #3, Mar 88, p. 96.
	Mark Reed, Byte, May 89, p. 275, "The Quantum Transistor"
Both gentlemen are physicists at Texas Instruments.

Working devices are even scarcer, i.e. not yet.  Also, products are 6 to
infinity years away, so don't get _too_ excited.

The basic insight is very simple: there is a limit below which
transistors will not work: this is about the 0.2 - 0.35 micron level.
Below that, quantum effects will be unavoidable.  So, the dream is to
make quantum effects into a feature rather than a bug.

Ballistic transistors are nearer term.  The insight here is that
electrons, moving through a crystal because of an applied voltage, do
_not_ travel at their "drift velocity".  In fact, they accelerate, then
bump into the lattice ("emit phonons").  Then they accelerate again, and
so on.  So, imagine a channel region shorter than the mean free path:
electrons can cross it "ballistically".  A GaAs ballistic transistor
would have a channel of about 0.4 microns.  This is doable.  A silicon
ballistic transistor would have to be smaller - bad news for silicon.
Diamond devices would be the biggest (if only we could make them).

Drift velocities are higher at lower temperatures, so liquid nitrogen
cooling would allow larger devices.  However, the article I'm stealing
all this from claims that the biggest benefit occurs when the electrons
are injected at high speed, probably by a heterostructure.  So far,
heterostructures have been GaAlAs on top of GaAs, or GaInAs on InP, or
the like: bleah: all hard to work with.  There are recent reports of
silicon heterostructures using germanium, or silicon carbide.  Hmmm: the
set of possible futures keeps growing.

Disclaimer: I don't do this stuff: I collect.  Please correct any
mistakes, and information donations are welcome.
-- 
Don		D.C.Lindsay 	Carnegie Mellon Computer Science
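For a rough feel for why a sub-half-micron channel matters, here is the
transit-time arithmetic.  The 0.4 micron channel length is the figure
quoted above; the electron velocity is an assumed ballpark number, not
one from the post, so the result is only indicative:

/* Ballistic transit-time sketch: time = channel length / velocity.
 * The 0.4 micron channel is quoted above; the velocity is an assumed
 * ballpark figure for electrons in GaAs.                             */
#include <stdio.h>

int main(void)
{
    double channel_m = 0.4e-6;   /* channel length, 0.4 micron (quoted) */
    double velocity  = 5.0e5;    /* assumed electron velocity, m/s      */

    double transit_s = channel_m / velocity;
    printf("transit time: %.1f ps\n", transit_s * 1e12);  /* ~0.8 ps */
    return 0;
}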
andrew@dtg.nsc.com (Lord Snooty @ The Giant Poisoned Electric Head ) (12/14/89)
I read recently in the trade press that Hughes had bettered the world
record for device speed, which they themselves already held.  They
constructed a ring oscillator at 0.6 THz.  This was, I believe, a bipolar
silicon process, and was definitely at room temperature.

Any details, anyone?
-- 
...........................................................................
Andrew Palfreyman	a wet bird never flies at night
time sucks		andrew@dtg.nsc.com
there are always two sides to a broken window
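For reference, the usual way to turn a ring-oscillator frequency into a
per-stage gate delay is t_pd = 1 / (2 * N * f) for N inverting stages.
The post gives only the 0.6 THz figure; the stage count below is an
assumed illustrative value:

/* Ring-oscillator relation: an N-stage ring of inverting gates
 * oscillates at f = 1 / (2 * N * t_pd).  The 0.6 THz figure is from
 * the post; the stage count is an assumed illustrative value.       */
#include <stdio.h>

int main(void)
{
    double freq_hz = 0.6e12;  /* 0.6 THz, as reported above    */
    double stages  = 5.0;     /* assumed number of ring stages */

    double t_pd_ps = 1.0e12 / (2.0 * stages * freq_hz);
    printf("per-stage delay: %.2f ps\n", t_pd_ps);
    return 0;
}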