[comp.arch] Languages vs. machines

edw@IUS1.CS.CMU.EDU (Eddie Wyatt) (03/29/88)

Stats on TF3

> 
> 30 MW ? Are you sure?

   It may not have been exactly 30 MW, but it was some outrageous number like that.
This thing was supposed to take half the power output of Yorktown.
The speaker said that if they were to turn this thing on without a
gradual power ramp, the voltage differential would melt all the power
lines back to the power plant.  The same was true for turning this
thing off.  (I can see real meaning given to the term crash! :-)

  Note that the machine consisted of 4096 processors.  That means each
processor would have to be a 1-gigaflop processor - pretty damn impressive
if they can build it.
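
A quick check of the arithmetic, just multiplying the two numbers above
(the C below is only a back-of-the-envelope sketch, not anything from the
talk):

	/* aggregate rate = processor count * per-processor rate */
	#include <stdio.h>

	int main(void)
	{
	    int    nproc       = 4096;   /* processor count from the talk */
	    double gflops_each = 1.0;    /* per-processor rate            */

	    printf("aggregate: %.2f Tflops\n", nproc * gflops_each / 1000.0);
	    return 0;
	}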

  Another interesting stat that was brought up was the uptime of the
machine.  Given the expected hardware failure rate of the processors,
they could expect to lose a processor once every three days.
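
If that one-failure-every-three-days figure is right, it implies a pretty
healthy per-processor MTBF.  A minimal sketch of the scaling (the
independent-failure assumption is mine; the 4096 count and the 3-day figure
are from the talk as I remember it):

	/* With N independent processors of MTBF m each, the machine as a
	 * whole sees a failure roughly every m/N.  Work backwards from
	 * "one lost processor every 3 days".
	 */
	#include <stdio.h>

	int main(void)
	{
	    int    nproc        = 4096;  /* processor count              */
	    double machine_days = 3.0;   /* days between lost processors */
	    double each_days    = machine_days * nproc;

	    printf("implied per-processor MTBF: %.0f days (~%.1f years)\n",
	           each_days, each_days / 365.0);
	    return 0;
	}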

  Candidates for this beast include NASA.  Does anyone at NASA
want to comment?
-- 

Eddie Wyatt 				e-mail: edw@ius1.cs.cmu.edu

koopman@A.GP.CS.CMU.EDU (Philip Koopman) (03/30/88)

In article <1247@PT.CS.CMU.EDU>, edw@IUS1.CS.CMU.EDU (Eddie Wyatt) writes:
>    It may not have been exactly 30 MW, but it was some outrageous number like that.
> This thing was supposed to take half the power output of Yorktown.
> ...
>   Note that the machine consisted of 4096 processors.  That means each
> processor would have to be a 1-gigaflop processor - pretty damn impressive
> if they can build it.
> ...
> Eddie Wyatt 				e-mail: edw@ius1.cs.cmu.edu


I believe you folks are talking about the TF-1 processor.
The head architect of that recently gave a talk at CMU, and I think
I can remember some of the details:

About 32000 processors, total floating point computation speed 1 Tflops.
Every processor has 2 computation elements that are checked for
consistency to spot errors.  Any inconsistency takes the element off-line.
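
A minimal sketch of that duplicate-and-compare idea, just to show the
flavor (the structure below is my guess at how such a check behaves, not
the actual TF-1 logic, which does this in hardware):

	/* Run the same operation on two computation elements and compare;
	 * a mismatch flags the pair off-line.  Illustrative only.
	 */
	#include <stdio.h>

	static double element_a(double x, double y) { return x * y + 1.0; }
	static double element_b(double x, double y) { return x * y + 1.0; }

	int checked_op(double x, double y, double *result, int *online)
	{
	    double r1 = element_a(x, y);   /* first computation element */
	    double r2 = element_b(x, y);   /* duplicated element        */

	    if (r1 != r2) {                /* inconsistency detected    */
	        *online = 0;               /* take the element off-line */
	        return -1;
	    }
	    *result = r1;
	    return 0;
	}

	int main(void)
	{
	    double r;
	    int online = 1;

	    if (checked_op(2.0, 3.0, &r, &online) == 0)
	        printf("result %.1f, element %s\n", r,
	               online ? "on-line" : "off-line");
	    return 0;
	}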

Total power about 3.5 MW.  Yes, powering the system up is interesting
(so is cooling).
Implementation is CMOS.  That means that if the system clock dies
at the full operating speed of something like 20 MHz or so, the
dI/dt current change melts the power lines and blows up the
substation (and maybe the East Coast power grid???? *grin*)
They're working on redundant/fail-safe clock distribution.
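
For a feel for the numbers, here's a rough V = L*dI/dt estimate, assuming
the load collapses within one clock period when the clock dies (the power
and clock figures are from the talk; the feeder voltage and inductance are
pure guesses on my part):

	/* Inductive kick if a few megawatts of CMOS load vanishes in one
	 * clock period.  Feeder voltage and inductance are assumptions.
	 */
	#include <stdio.h>

	int main(void)
	{
	    double power_w  = 3.5e6;   /* total machine power                */
	    double feeder_v = 480.0;   /* assumed distribution voltage       */
	    double dt_s     = 50e-9;   /* one 20 MHz clock period            */
	    double induct_h = 1e-6;    /* assumed feeder inductance, henries */

	    double amps = power_w / feeder_v;
	    double didt = amps / dt_s;        /* A/s                 */
	    double kick = induct_h * didt;    /* volts, V = L*dI/dt  */

	    printf("I = %.0f A, dI/dt = %.1e A/s, spike ~ %.0f kV\n",
	           amps, didt, kick / 1000.0);
	    return 0;
	}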

The thing has got a LOT of packet switching capability (more
than all the telephone switching capability in the world) to
get the processors to communicate.

An interesting and ambitious architecture!


~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
~  Phil Koopman             5551 Beacon St.             ~
~                           Pittsburgh, PA  15217       ~
~  koopman@faraday.ece.cmu.edu   (preferred address)    ~ 
~  koopman@a.gp.cs.cmu.edu                              ~
~                                                       ~
~  Disclaimer: I'm a PhD student at CMU, and I do some  ~
~              work for WISC Technologies.              ~
~              (No one listens to me anyway!)           ~
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

edw@IUS1.CS.CMU.EDU (Eddie Wyatt) (03/30/88)

 > I believe you folks are talking about the TF-1 processor.
 > The head architect of that recently gave a talk at CMU, and I think
 > I can remember some of the details:
 > 
 > About 32000 processors, total floating point computation speed 1 Tflops.
 > Every processor has 2 computation elements that are checked for
 > consistency to spot errors.  Any inconsistency takes the element off-line.
 > 
 > Total power about 3.5 MW.  Yes, powering the system up is interesting
 > (so is cooling).
 > Implementation is CMOS.  That means that if the system clock dies
 > at the full operating speed of something like 20 MHz or so, the
 > dI/dt current change melts the power lines and blows up the
 > substation (and maybe the East Coast power grid???? *grin*)
 > They're working on redundant/fail-safe clock distribution.
 > 
 > The thing has got a LOT of packet switching capability (more
 > than all the telephone switching capability in the world) to
 > get the processors to communicate.
 > 
 > An interesting and ambitious architecture!

  I wanted to thank you for clearing up some of my misinformation.
This discussion started over someone mocking someone else about the plans
to build tera-machines.  I cross-posted into comp.arch hoping someone
would clear it up, and someone did.

Oh, didn't the speaker say that the speed of the beast was going to be
upped to 3 teraflops?

BTW: 32000 processors sounds about right considering what was said
about the switching network.
-- 

Eddie Wyatt 				e-mail: edw@ius1.cs.cmu.edu

dfk@duke.cs.duke.edu (David Kotz) (03/30/88)

In article <1252@PT.CS.CMU.EDU>, koopman@A.GP.CS.CMU.EDU (Philip Koopman) writes:
> I believe you folks are talking about the TF-1 processor.
> The head architect of that recently gave a talk at CMU, and I think
> I can remember some of the details:
> 

Many details are available in an article in the recent issue of
"Supercomputing" magazine, supposedly the only thing out there written
about the TF-1. Try some of these specs out for size:

32,768 processors
arranged in a 40' donut
with 3000 miles of wiring in between (butterfly-style packet-switched network)
global 50 MHz clock
using 2.5 MW
water cooled
total 3 Tflops single-precision or 1.5 Tflops double-precision

Each processor:
single 300-pin CMOS chip has
	50 Mips fixed-point unit
	100 Mflops floating-point unit
	128 (32-bit) registers
	interface to switch (50 Mbytes/s)
and two 200 Mbyte/s channels to
	4 Mbytes of data RAM	(=> 128 Gbytes total)
	1 Mbyte of instruction RAM

Processors are packed 8 to a board (actually 16, all are replicated).
Switch nodes are packed 16 8x8 nodes to a board (actually 32, all replicated).
In addition, the whole switch is replicated 8 times to lower contention.
The wiring is in 504 layers of 64 wires each. 
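
A quick sanity check on how the aggregate figures follow from the
per-processor ones (the inputs are from the article; the arithmetic below
is mine):

	/* Aggregate TF-1 figures from the per-processor specs above. */
	#include <stdio.h>

	int main(void)
	{
	    int    nproc       = 32768;
	    double mflops_each = 100.0;       /* single-precision, per chip */
	    double data_ram_mb = 4.0;         /* data RAM per processor     */
	    int    layers = 504, wires = 64;  /* switch wiring              */

	    printf("peak: %.2f Tflops single-precision\n",
	           nproc * mflops_each / 1.0e6);
	    printf("data RAM: %.0f Gbytes total\n",
	           nproc * data_ram_mb / 1024.0);
	    printf("wires in the switch harness: %d\n", layers * wires);
	    return 0;
	}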

This is a BIG machine. Don't look for it under your desk anytime soon...

David Kotz
-- 
Department of Computer Science, Duke University, Durham, NC 27706
ARPA:	dfk@cs.duke.edu
CSNET:	dfk@duke        
UUCP:	{ihnp4!decvax}!duke!dfk