stein@dhw68k.cts.com (Rick 'Transputer' Stein) (06/09/90)
Hi there fellow multiconfuser engineers! I have surreptitiously
obtained a copy of Dick Pountain's Byte magazine piece "Virtual
Channels: The Next Generation of Transputers" appearing in the
April 1990 issue (special European and world editions only).
I have also learned, through the same path, that the editors at Byte
view this kind of stuff as a strictly European fad, and not worthy
of publication in the U.S. I guess this attitude is typical of
American management: always responding too late with too little.
Pountain's article surprises me because I was led to believe that
all this stuff about the H1 was still under non-disclosure
(Damn! Scooped again by my British cousins). I guess Inmos
is leaking stuff out in the same way that Detroit shows off its
1995 iron in 1990. What good is it if you can't buy it now?
I guess they prefer to promote wet-dreams!
A brief summary of Pountain's piece (my editorial comments in []):
Current Txxx do not fully embody the CSP model. Why? Logical
concurrency is great on a single Txxx, but when you map it into
physical concurrency, you've got problems connecting all the soft
channels with physical links. [That's why there are multiconfuser OSs].
Pountain prefers the Occam route [I'm a die-hard C type myself],
although he complains that routing software (e.g., OSs) degrades
performance [True, but it gives you the freedom to write your application
without worrying about all the junk underneath it]. The n-dimensional
hypercube is something that can't really be implemented with current Txxx:
with only four hardware links per chip, the physical connectivity
just isn't there. Virtual channels will solve this.
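[To see where those multiconfuser OS cycles go, here's a quick C sketch
(mine, with totally hypothetical names; nothing from Inmos) of the
bookkeeping a routing layer does today: stamp every soft-channel message
with a channel id so a pile of soft channels can share one physical link.
The H1 pulls exactly this chore into silicon.]

#include <stddef.h>
#include <string.h>

#define MAX_MSG 256

/* One soft-channel message, tagged so the far end can demultiplex it.
 * This tagging-and-copying is the software overhead Pountain gripes about. */
struct tagged_msg {
    int  channel_id;            /* which soft channel this belongs to */
    int  len;                   /* payload length in bytes */
    char body[MAX_MSG];
};

/* Hypothetical stand-in for pushing bytes down one hardware link. */
extern void link_send(const void *buf, size_t len);

/* Many soft channels, one physical link: the tag does the routing. */
void soft_channel_send(int channel_id, const char *msg, int len)
{
    struct tagged_msg t;

    if (len > MAX_MSG)
        len = MAX_MSG;          /* sketch only: real code would fragment */
    t.channel_id = channel_id;
    t.len = len;
    memcpy(t.body, msg, (size_t)len);
    link_send(&t, offsetof(struct tagged_msg, body) + (size_t)len);
}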
Pountain went right to the source: David May @ Inmos in the U.K.
[Tell me about virtual channels, David.] Virtual channels are just
like soft ones, except they "physically" connect distinct Txxx address
spaces. The analogy to a telephone network is made: you call
somebody on AT&T [the Death Star], and magically the connection
is complete. All the switching is taken care of by the phone
company (analogous to the virtual channel). Virtual channels eliminate
topological dependence of interconnections.
The H1 has an on-chip communication controller, and it "whacks"
messages into packets for transport throughout the interconnect network.
Messages of arbitrary length are whacked into 32-byte packets for transport.
An internal packet protocol preserves message integrity.
Each packet terminates with an EOP (end-of-packet) token, except the
last, which ends with an EOM (end-of-message).
<header><0...32 bytes of data><EOP/EOM> --- packet protocol
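[If you want that whacking rule in code, here's a minimal C sketch. The
header contents and terminator encodings are my guesses; the article
only gives the frame shape above.]

#include <string.h>

#define PKT_DATA_MAX 32         /* 0...32 bytes of data per packet */

enum terminator { EOP, EOM };   /* end-of-packet vs. end-of-message */

struct packet {
    unsigned char   header;     /* destination virtual channel (my guess) */
    int             len;        /* bytes of payload in this packet */
    unsigned char   data[PKT_DATA_MAX];
    enum terminator term;
};

/* Hypothetical hook: hand one packet to the communication controller. */
extern void send_packet(const struct packet *p);

/* Whack an arbitrary-length message into 32-byte packets: EOP terminates
 * every packet except the last, which gets EOM instead. */
void send_message(unsigned char dest, const unsigned char *msg, int len)
{
    do {                        /* even an empty message sends one packet */
        struct packet p;
        int chunk = (len < PKT_DATA_MAX) ? len : PKT_DATA_MAX;

        p.header = dest;
        p.len    = chunk;
        memcpy(p.data, msg, (size_t)chunk);
        msg += chunk;
        len -= chunk;
        p.term = (len > 0) ? EOP : EOM;
        send_packet(&p);
    } while (len > 0);
}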
The virtual link: two virtual channels, one for send, one for receive.
Communication is still synchronized, but it's now across a virtual link.
Each output message is queued on a link; buffering is in memory,
so more than one byte can be queued.
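[In data-structure terms a virtual link might look something like this.
A sketch under my own assumptions; the real H1 layout isn't public.]

#include <string.h>

/* One direction of a virtual link. */
struct virtual_channel {
    int            id;          /* header value stamped on its packets */
    unsigned char *queue;       /* in-memory output buffer */
    int            queued;      /* bytes currently waiting to go */
    int            capacity;    /* so more than one byte can be queued */
};

/* A virtual link pairs two virtual channels, one each way. */
struct virtual_link {
    struct virtual_channel out; /* send */
    struct virtual_channel in;  /* receive */
};

/* Queue a message for output; the hardware drains the queue packet by
 * packet. The rendezvous still completes only when the far side takes
 * the message, so communication stays synchronized. */
int vlink_queue(struct virtual_link *vl, const unsigned char *msg, int len)
{
    if (vl->out.queued + len > vl->out.capacity)
        return -1;              /* no room: caller must block and retry */
    memcpy(vl->out.queue + vl->out.queued, msg, (size_t)len);
    vl->out.queued += len;
    return 0;
}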
You need the C104 to fully exploit virtual channel stuff. C104 == complete
packet-switching exchange [e.g., AT&T (the Death Star) on a chip]. C104
does the worm-hole routing, no store-and-forward stuff. C104 figures all
the routing [like magic, and I don't know how it happens]. Oh, it uses
"interval routing." [I'm not gonna' try to 'splain this here; see the
sketch below.] It is deadlock-free, however. It can succumb to "hot
spots." The solution: try to distribute your message traffic evenly
throughout the network [no fooling, so we still need those simulated
annealer load balancers].
You won't need the Occam PLACE keyword anymore with this stuff.
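[OK, since I weaseled out of explaining interval routing, here's the
gist in a dozen lines of C, as I understand it (a sketch, not Inmos
code): label every node 0..N-1, assign each output link a half-open
interval of labels, and send each packet out whichever link's interval
contains the destination label in its header. One compare per link, no
routing tables, which is how the C104 can pick an output while the
worm-hole header is still arriving.]

#define NLINKS 4                /* the real C104 has a lot more links */

struct interval { int lo, hi; };        /* link covers labels lo..hi-1 */

/* Pick the output link for a packet headed to dest_label. */
int route(const struct interval link[NLINKS], int dest_label)
{
    int i;

    for (i = 0; i < NLINKS; i++)
        if (dest_label >= link[i].lo && dest_label < link[i].hi)
            return i;           /* forward the worm out link i */
    return -1;                  /* label outside every interval: bogus */
}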
The article quotes the same Inmos propaganda:
on-chip caching and DRAM controller w/static-column access.
Peak 100 MIPS, 20 MFLOPS.
Memory protection (thank God!) w/4 protected regions/process
(I guess that means recursion, finally).
H/W support for IEEE exceptions and hose-ups.
Finally, the Inmos simulations of routing claim that a message delay of
12 microseconds in a 64-node hypercube only grows to 27 microseconds in a
16K-node hypercube. [That's intense! And the ratio checks out: hop count
scales with log2 of the node count, so the diameter goes from 6 hops at 64
nodes to 14 at 16K, and 14/6 is about 2.3, right in line with 27/12. Look
out Thinking Machines!]
[Does all this stuff mean an end to OSs for transputers? Hell no!
If anything, it means that they are more vital than ever. A standard
message-passing interface is really needed. Somebody oughta lobby the
IEEE. Pretty soon Intel will probably announce a similar thing
(or have they already, with the iWarp thing?). So, who cares if you build
your application on IMS or Intel or TI iron? Not me! But if you want
to make money, your codes better run on anyone's iron. So that means
portability, partner. You need a standard I/F to make that work. Isn't
that right, X-windows people? What all this virtual channel stuff means
to me is that simulations will run faster thanks to the hardware assist,
and that's a definite plus. The topology independence is a real boon to
software development. Shit, start writing your fully parallel
next-generation toxic waste dumps now, while you've still got the chance.
Linearly scalable software will become a lot easier to build with the
C104 and H1.]
Rick 'Transputer' Stein (signing off!)
--
Richard M. Stein (aka, Rick 'Transputer' Stein)
Sole proprietor of Rick's Software Toxic Waste Dump and Kitty Litter Co.
"You build 'em, we bury 'em." uucp: ...{spsd, zardoz, felix}!dhw68k!stein