[comp.arch] Faster busses are hard; can we do wider?

colin@array.UUCP (Colin Plumb) (06/13/91)

I'm trying to evaluate the feasibility of some >1000 line buses
professionally, so I'm making this blatant effort to pick the net's
brains.

At the last ASPLOS, Howard Davidson <hld@sun.com> gave a nifty tutorial
on the state of the art in fitting more into a smaller space and
(partially consequently) making it go faster.  More transistors per
debuggable unit require more testing facilities such as boundary scan,
denser and faster parts require better cooling than casually forced air,
and wire delays become ever more significant.

He was really big on multi-chip modules, which are basically miniature
PC boards on which bare dice are mounted.  You get shorter wire lengths,
better thermal properties, finer pitch, and the like.  You also get
repair and debugging problems since you can't just stick a logic analyzer
on everything so easily.

Then there's the inter-board communications problem.  Past 100 MHz, it
gets rather painful, so there's a lot of pressure to widen system
busses instead of speeding them up.  (Witness the 256-bit maximum Futurebus+
width.)
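
To make the pressure concrete, here's a trivial tabulation of width
times clock.  A sketch only: the width/clock pairings below are my
illustrative guesses, not anybody's spec.

#include <stdio.h>

/* Back-of-the-envelope bus bandwidth: bytes/sec = (width/8) * clock.
 * Illustrative numbers, not product figures.
 */
int main(void)
{
    static const struct { int bits; double mhz; } bus[] = {
        {  64, 100.0 },     /* fast but narrow                   */
        { 256,  25.0 },     /* Futurebus+ at its maximum width   */
        { 256, 100.0 },     /* wide *and* fast: the painful case */
    };
    int i;

    for (i = 0; i < 3; i++)
        printf("%4d bits @ %5.1f MHz -> %6.1f MB/s\n",
               bus[i].bits, bus[i].mhz, bus[i].bits / 8.0 * bus[i].mhz);
    return 0;
}

The point being that a 256-bit bus at a sedate 25 MHz already matches
a 64-bit bus at a painful 100 MHz.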

Cinch Connectors make a tangled-wire bump called the Synapse that has
gorgeous electrical properties (0.5 nH per pin), and they pack on 40 mil
centres, about 1 mm.  100 connectors per square cm makes massively
wide data buses feasible.
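
For scale, a quick footprint calculation at that density; the
1024-signal count is just my round number for a >1000 line bus:

#include <stdio.h>
#include <math.h>

/* Connector area for a wide bus at 40 mil (1.016 mm) centres.
 * The 1024-signal count is a round number, not a real design.
 */
int main(void)
{
    double pitch = 1.016;                   /* mm between centres */
    int    signals = 1024;
    double side = ceil(sqrt((double)signals)) * pitch;

    printf("%d pins: %.1f x %.1f mm (%.1f sq cm)\n",
           signals, side, side, side * side / 100.0);
    return 0;
}

About 10 square cm for a 1024-pin field, which is why the density
matters.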

Another company called Betaphase make a sort of ribbon cable (it's actually
a flexible PC board) which clamps onto gold fingers at the edge of a PC board
and provides 125 connectors per inch, per side.

And apparently AMP have something in the works, too.  Probably others.

Has anyone played with this stuff?  Is it available, does it work?  How
do *you* get enough data to your screaming wonder to keep it from
starving?  Just talking to a cache at >1GB/sec is hard work.  And even
with a cache, a dozen processors or so will ask for a heck of a lot of
bandwidth out of main memory.  Widening it to a full (secondary) cache
line is an easy way out (or so we think) if only the connector
technology can manage it.
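
For what it's worth, the arithmetic behind that worry; every number
here is a guess I made up for illustration:

#include <stdio.h>

/* Rough aggregate main-memory demand for a dozen processors, and the
 * bus width a full secondary cache line implies.  Miss rate and line
 * size are illustrative guesses only.
 */
int main(void)
{
    int    cpus = 12;
    double misses_per_sec = 1.0e6;   /* L2 misses per CPU per second */
    int    line_bytes = 64;          /* hypothetical L2 line size    */
    double demand = cpus * misses_per_sec * line_bytes;

    printf("aggregate demand: %.0f MB/s\n", demand / 1.0e6);
    printf("line-wide bus:    %d data signals\n", line_bytes * 8);
    return 0;
}

Even those soft numbers say main memory needs hundreds of MB/s, and a
line-wide path is 512 data signals before you add address, control,
ECC, power, and ground, which is where the >1000 line figure above
comes from.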

Would anyone like to share experiences, opinions, or prejudices?
-- 
	-Colin

minsky@parc.xerox.com (Henry Minsky) (06/22/91)

In article <1940@array.UUCP> colin@array.UUCP (Colin Plumb) writes:
>
>Cinch Connectors make a tangled-wire bump called the Synapse that has
>gorgeous electrical properties (0.5 nH per pin), and they pack on 40 mil
>centres, about 1 mm.  100 connectors per square cm makes massively
>wide data buses feasible.
>
>Would anyone like to share experiences, opinions, or prejudices?
>-- 
>	-Colin

We are designing a multiprocessor using packaging based on the Cinch
technology; we had them injection mold custom connectors, square
arrays about 1.4 inches on a side, with an array of 372 20-mil
buttons (fuzzballs) of 1-mil gold-copper alloy wire stuffed through
holes in the connector and sticking out each side.  We package our
network chips in custom packages which we call pad-grid arrays (or
Land Grid Arrays).  The chip carriers have pads on both top and bottom
of the package, and thus we can use the Cinch connectors to connect
the chip to both the PC board below it and the PC board above.

The Cinch connector and our chip carrier have enough extra
feed-throughs that we can use the packaging for both the chip-to-board
connections and the board-to-board connections.  This lets us dispense
with the backplane entirely; all signal flow between boards is over
very short wires and can have good impedance matching.

The connectors have channels molded in for liquid cooling
(Fluorinert).  The chip carriers also have channels and an
integral heat sink for running the fluid across the back of the chip.


The fuzzbutton connectors are very low resistance, unlike some of the
conductive polymer or wires-in-silicone sheets. We believe that the PC
board traces can be impedance matched with the connectors, although we
haven't really done enough experiments with this yet. 
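
The lumped-element arithmetic we use as a sanity check; the 0.5 nH
figure is Cinch's, but the 50 ohm target is our own assumption:

#include <stdio.h>
#include <math.h>

/* Treat a fuzz button as a lumped series L and ask what shunt C
 * would make it look like a section of matched line:
 *     Z0 = sqrt(L/C),  delay t = sqrt(L*C).
 * 0.5 nH is Cinch's figure; 50 ohms is our assumed target.
 */
int main(void)
{
    double L  = 0.5e-9;             /* henries */
    double Z0 = 50.0;               /* ohms    */
    double C  = L / (Z0 * Z0);      /* farads  */
    double t  = sqrt(L * C);        /* seconds */

    printf("C = %.2f pF, delay = %.0f ps\n", C * 1.0e12, t * 1.0e12);
    return 0;
}

At 0.2 pF and 10 ps the button is electrically short compared to our
edge rates, which is why we suspect the real matching work lives in
the board traces.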

There is one problem with the connectors; the fuzz buttons have an
amazing affinity for human fingertips; the little wire balls tend to
grab onto the whorls on your fingertips, and get pulled out of their
holes. This means you need to be careful when handling the connectors.
Otherwise, the connectors seem quite reliable; we have encased a
sample board in plastic and then sanded it away to get a
cross-section, and the buttons make very good compression area contact
with the pads of the board and chip-carrier. And the whole system is
solderless, giving easy access for debugging our prototypes. 

The entire stack of boards and chips will be held under compression
using two cast aluminum tooling plates, with through bolts to squeeze
the stack. The cooling fluid can be pumped through manifold channels
machined into the aluminum endplates.

scheinin@mrlaxa.mrl.uiuc.edu (Alan L. Scheinine) (06/22/91)

   > The Cinch connector and our chip carrier have enough extra
   > feed-throughs that we can use the packaging for both the chip-to-board
   > connections and the board-to-board connections.  This lets us dispense
   > with the backplane entirely; all signal flow between boards is over
   > very short wires and can have good impedance matching.
     [...and more interesting information.]

   I have pondered parallel designs and interconnection schemes
for many years.  Within the universe of different interconnection
schemes there are qualitatively different versions, i.e. not obviously
isomorphic.  Nonetheless, when it comes down to implementation, every
scheme seems to be roughly the same speed (very roughly).  (Of
course, any given interconnection scheme has algorithms for which it
is well matched.)  I've concluded that one essential area that needs
progress is the connection hardware.  For a long time, I expected that
a very clever interconnection topology would result in a great leap
forward in parallel computers.  Now I think that interconnection
topology ideas are adequately clever; no further leaps are needed.
As mundane as it may be, tiny connectors are the key technology for
further great leaps in parallel computing.
   My words have not been chosen carefully and my generalizations
have many exceptions.  Please understand that I am not trying to
present an overall philosophy.  Rather, I am presenting a notion for
the purpose of saying:

   Let's hear more about tiny connector hardware!  It is important.

                      Alan Scheinine
                      u10534@uy.ncsa.uiuc.edu

wcs@erebus.att.com (Bill Stewart) (06/26/91)

In article <1991Jun21.195509.28579@parc.xerox.com> minsky@parc.xerox.com (Henry Minsky) writes:
]>Cinch Connectors make a tangled-wire bump called the Synapse that has
]>gorgeous electrical properties (0.5 nH per pin), and they pack on 40 mil
]>centres, about 1 mm.  100 connectors per square cm makes massively wide
]There is one problem with the connectors; the fuzz buttons have an
]amazing affinity for human fingertips; the little wire balls tend to
]grab onto the whorls on your fingertips, and get pulled out of their

Misc.kids has been complaining for a long time that kids can't tie
shoelaces because they only know how to use Velcro - 
I guess some of these kids are designing computer hardware now :-)
-- 
				Pray for peace;		  Bill
# Bill Stewart 908-949-0705 erebus.att.com!wcs AT&T Bell Labs 4M-312 Holmdel NJ
# No, that's covered by the Drug Exception to the Fourth Amendment.
# You can read it here in the fine print.