[comp.arch] Polonium-Powered Parallel Processor

mmm@cup.portal.com (Mark Robert Thorson) (12/02/90)

Excerpted from a posting in sci.electronics several moons ago:

> The electrodes took
> some experimentation, but as I recall, they ended up using the little
> Polonium-impregnated strip of metal taken from an anti-static brush
> for phonograph records - remember vinyl disks?? :{) The Polonium
> emits a charged particle (Beta, isn't it? An electron?), and builds
> up a charge of the opposite polarity. This charge dissipates at a rate
> dependent on the humidity, but also on the atmospheric electrostatic
> potential. The amplifiers were common IC op-amps, LM709's, I think.

This got me thinking about the possibility of making a self-powered chip.
_MOS_Integrated_Circuits_ by Penney and Lau has a nice chapter on dynamic
logic structures which resemble charge-transfer devices, in which data
is manipulated as charge loaded or unloaded from dynamic nodes, rather than
the static approach of defining the voltage on a node by the ratio between
a pull-up and a pull-down device.
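
For anyone who hasn't played with dynamic logic, here is a toy model
(mine, not Penney and Lau's) of a single precharge/evaluate node: charge
the node capacitor high on one clock phase, conditionally yank it down
through the pull-down network on the other, and watch the stored charge
leak away in between; that is why such nodes have to be clocked or
refreshed.  Every number in it is invented for illustration.

#include <stdio.h>
#include <math.h>

/* Toy model of one dynamic-logic node: precharge to VDD, conditionally
   discharge during evaluate, then let the charge leak off the node
   capacitor.  All component values are made up. */
int main(void)
{
    const double VDD = 5.0;      /* supply, volts            */
    const double tau = 1e-3;     /* leakage time constant, s */
    const double dt  = 1e-4;     /* simulation step, s       */

    for (int cycle = 0; cycle < 4; cycle++) {
        double v = VDD;                   /* precharge phase          */
        int pulldown_on = cycle & 1;      /* input to pull-down stack */
        if (pulldown_on)                  /* evaluate phase           */
            v = 0.0;                      /* node yanked to ground    */
        for (int i = 0; i < 10; i++) {    /* hold phase: leakage      */
            v *= exp(-dt / tau);
            printf("cycle %d  t=%4.1f ms  node=%.3f V\n",
                   cycle, (i + 1) * dt * 1e3, v);
        }
    }
    return 0;
}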

Now imagine the power for each gate coming from self-recharging capacitors
fabricated on a special polonium metallization layer.  These would be like
little power supplies distributed all over the chip.  Advantages are:
elimination of chip real estate consumed by large power distribution
structures, elimination of cross-coupling of adjacent circuits through
power supply noise, and -- most important -- continuous operation of chips
in unbroken, unpackaged wafers.  I.e. you could smash one of these wafers
and the individual pieces would continue computing as long as they were
large enough to hold a complete die.
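
Before getting too excited, it is worth a back-of-the-envelope look at
what a polonium layer could actually deliver.  None of these numbers
come from the posting above; I'm assuming a strip activity on the order
of 500 microcuries (roughly what the record brushes carry) and the
textbook charge and energy of an alpha particle:

#include <stdio.h>

/* Rough estimate of the current and power from a Po-210 source.
   The 500 uCi activity is an assumption, not a datasheet number. */
int main(void)
{
    const double per_curie = 3.7e10;              /* decays/s per Ci      */
    const double activity  = 500e-6 * per_curie;  /* assumed strip, Bq    */
    const double q_e       = 1.602e-19;           /* elementary charge, C */
    const double e_alpha   = 5.3e6 * q_e;         /* ~5.3 MeV alpha, J    */

    printf("decays per second : %.3g\n", activity);
    printf("charging current  : %.3g A\n", activity * 2.0 * q_e);
    printf("kinetic power     : %.3g W\n", activity * e_alpha);
    return 0;
}

That works out to about six picoamps of charging current per strip (and
a dozen or so microwatts if you could somehow harvest the alphas'
kinetic energy instead), which is why the charge-transfer style of
logic, with its tiny nodes and very low duty cycle, is about the only
style with a prayer of living on it.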

One obvious application would be data recorders, such as eavesdropping
devices and aircraft flight recorders.  If each chip had a built-in
silicon accelerometer, you could embed the chips in the skin of an
aircraft, and if it blew up, you could reconstruct the paths of the
individual pieces as they fell to the ground.
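
Reconstructing a fragment's path from its logged accelerometer samples
is just dead reckoning: integrate acceleration once to get velocity and
again to get position.  A minimal one-axis sketch, with a made-up sample
rate and made-up data:

#include <stdio.h>

/* Dead reckoning from logged accelerometer samples: integrate
   acceleration once for velocity, once more for position.
   One axis, fixed sample interval, invented data. */
int main(void)
{
    const double dt = 0.01;                       /* 100 Hz sampling */
    const double accel[] = { 0.0, -9.8, -9.8, -9.8, -9.8, -9.8 };
    const int n = sizeof accel / sizeof accel[0];
    double v = 0.0, x = 0.0;

    for (int i = 0; i < n; i++) {
        v += accel[i] * dt;                       /* m/s */
        x += v * dt;                              /* m   */
        printf("t=%.2f s  v=%.3f m/s  x=%.4f m\n", (i + 1) * dt, v, x);
    }
    return 0;
}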

But more important is the construction of a massively parallel processor
on a gargantuan scale.  By storing the wafers in standard wafer boats,
you could use standard wafer-probing equipment to access the processor
array.  Just pop a boat in the machine, and it can automatically step
through each die on each wafer, loading each one with its part of a
massively parallel computation.  Then you send the wafers back to the
warehouse to crunch for a few weeks, and later bring them back to read
off the results.
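
In software, the load pass is just a triple loop over boats, wafers, and
die sites.  The probe-station and boot-protocol calls below are
hypothetical placeholders; I am not describing any real prober
interface, just the shape of the loop:

#include <stdio.h>

/* Hypothetical sketch of the load pass.  step_to_die(), die_is_good(),
   and load_work_unit() stand in for whatever the wafer prober and the
   on-die boot protocol actually provide.  Counts are assumed. */
enum { BOATS = 4, WAFERS_PER_BOAT = 25, DIE_PER_WAFER = 100 };

static void step_to_die(int b, int w, int d) { (void)b; (void)w; (void)d; }
static int  die_is_good(int b, int w, int d) { (void)b; (void)w; (void)d; return 1; }

static void load_work_unit(int b, int w, int d, long unit)
{
    printf("boat %d wafer %2d die %3d <- work unit %ld\n", b, w, d, unit);
}

int main(void)
{
    long unit = 0;
    for (int b = 0; b < BOATS; b++)
        for (int w = 0; w < WAFERS_PER_BOAT; w++)
            for (int d = 0; d < DIE_PER_WAFER; d++) {
                step_to_die(b, w, d);
                if (die_is_good(b, w, d))
                    load_work_unit(b, w, d, unit++);
            }
    return 0;
}

The read pass at the end of the run would be the same loop with a
hypothetical read_result() in place of load_work_unit().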

When I worked at a major commodity semiconductor company, one of the
engineers in a neighboring cubicle showed me a wafer for the part he
was selling.  It was some cheap 8-bit microcontroller (CPU+memory+I/O)
which wasn't selling very well, and he said they had thousands and
thousands in storage.  He said they cost $1.50/wafer to manufacture,
but they were thinking of destroying these already-tested wafers to
recycle them for the gold plating on the back of the wafer (used to
attach the die to the lead-frame).

Now imagine a polonium-powered RISC microcontroller, say with 100
good die per wafer.  If the cost per wafer were even as low as $5,
that would be a nickel per processor, far cheaper than any other form
of parallel processing technology.
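
Spelling the arithmetic out:

#include <stdio.h>

/* The nickel-per-processor arithmetic, using the figures above. */
int main(void)
{
    const double cost_per_wafer = 5.00;    /* dollars, pessimistic */
    const double good_die       = 100.0;   /* per wafer            */
    double per_processor = cost_per_wafer / good_die;

    printf("cost per processor : $%.2f\n", per_processor);
    printf("processors per $1M : %.0f\n", 1.0e6 / per_processor);
    return 0;
}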

While the processors are crunching, they would be sitting in wafer boats in
polyethylene bags in cardboard boxes in a warehouse.  Or you could shrink-
wrap the boxes, put them on forklift pallets, and just store them on
a parking lot or a flat piece of ground.  It would be the largest, densest,
and most cost-effective supercomputer ever constructed.

The greatest limitation of this architecture is, of course, that there
is no communication between processors, except for that provided through the
wafer-probe head.  One application for such a computer is the factoring
of large numbers for cryptography.  This problem is easy to set up in a small
number of bytes: each processor is assigned a separate big number
to factor.  For $5 million, you would get 100 million processors working
on their assigned parts of the problem, which would substantially extend
the reach of brute-force attacks.
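
For what it's worth, here is the sort of kernel each die might run.  The
work split (one independent number per die) is straight from the
paragraph above, but everything else is my invention: a 64-bit value and
naive trial division stand in for the multi-precision arithmetic and
smarter algorithms a real attack would use.

#include <stdio.h>

/* Toy per-die kernel: factor the single number this die was handed at
   load time.  Trial division on a 64-bit value stands in for a real
   multi-precision routine. */
static unsigned long long smallest_factor(unsigned long long n)
{
    if (n % 2 == 0) return 2;
    for (unsigned long long d = 3; d * d <= n; d += 2)
        if (n % d == 0)
            return d;
    return n;                       /* n is prime */
}

int main(void)
{
    unsigned long long my_number = 600851475143ULL;  /* loaded via probe */
    unsigned long long f = smallest_factor(my_number);
    printf("%llu has factor %llu\n", my_number, f);
    return 0;
}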