[comp.sys.transputer] Who Wants Massively Parallel Processors Anyway?

moses@NADC.NADC.NAVY.MIL (Bill Moses) (02/27/91)

Luigi Rizzo (lr@cs.brown.edu) writes:

>  TRAM modules are kind of a toy: you buy one (or an evaluation
>  kit) and don't care too much about the price; then your
>  interest disappears... 
>  ...those who are really interested in massive use of Transputer,
>  and do care about their [company|school]'s money, sure they
>  build their own modules...

Jeff Carroll (carroll@ssc-vax.boeing.com) writes:

>     I think there are two kinds of people who use transputers.
> One is the computer scientist who is doing research on parallel
> systems, and the other is the scientist or engineer whose
> number-crunching application takes too long to run on his PC.
> ...the needs of either are likely to be met with a relatively small
> transputer network (about the size that will fit into the spare
> slots in the user's PC).

This brings up a simple question: Who wants massively parallel
machines anyhow? Computer Scientists and Engineers experiment
with and develop systems - what are they used for? I'm all
for research for its own sake, but there are actually people
out there who want to apply it. I suppose the questions are more
along these lines: What are the foreseen uses of large parallel
machines? Is this in line with current research? What should 
they be used for (besides weather prediction, Mandelbrot sets, etc.)?

Just wondering

Bill <Moses@NADC.NAVY.MIL>

rbe@yrloc.ipsa.reuter.COM (Robert Bernecky) (03/01/91)

The question was: "who wants massive parallelism anyway..."?

Answer: 

Anybody with a large problem to solve, or a problem that must be solved
quickly, which can be mapped onto such an architecture.

For example: Dow Jones wants to supply a service to their tens of
thousands of customers, letting them search, say, the New York Times
articles for the past 10 years for all occurrences of "not a crook"
and "national security" within the same paragraph.

The Connection Machine (with a piddly 64k processors) does a fairly
bangup job of this.
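
The query above is embarrassingly parallel: every paragraph can be
tested independently, one per processor. A minimal sketch of the idea
(hypothetical, in Python; not the CM-2's actual software, where the map
would run in lockstep across all 64k processors):

```python
# Phrases the customer wants co-occurring in one paragraph.
PHRASES = ("not a crook", "national security")

def matches(paragraph):
    """True if every target phrase occurs in this paragraph."""
    text = paragraph.lower()
    return all(p in text for p in PHRASES)

def search(paragraphs):
    # Each matches() call is independent of the others, so on a
    # massively parallel machine every paragraph gets its own
    # processor; here a plain sequential filter stands in for that.
    return [p for p in paragraphs if matches(p)]
```

The point is that the per-paragraph test never changes; only the number
of processors you throw at the corpus does.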

I suspect the human genome problem is another candidate for massively
parallel processing.

The key to making MPP work lies in NOT having to program for it
explicitly. That is where languages such as J should be helpful --
reflect the way we think, rather than the way computers are built.
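
To illustrate the point (my Python stand-in, not J itself): the J
average is the one-liner (+/ % #), a whole-array expression that says
*what* to compute with no loop at all, leaving a parallel
implementation free to evaluate it however it likes.

```python
def mean_explicit(xs):
    # Machine-oriented style: a serial, order-dependent loop that
    # over-specifies *how* the sum is accumulated.
    total = 0.0
    for x in xs:
        total += x
    return total / len(xs)

def mean_implicit(xs):
    # Array-oriented style, in the spirit of J's (+/ % #):
    # sum(xs) is a single whole-collection reduction, which an MPP
    # compiler could tree-reduce in O(log n) steps with no change
    # to the source.
    return sum(xs) / len(xs)
```

Both produce the same number; only the second leaves the parallelism
to the implementation.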

When I was Director of Research at I.P. Sharp (bought out by Reuters,
arch-enemy \\\\\competitor\\\\\\\\\\honorable opponent of Dow Jones),
I proposed we get a CM2 to look at such applications.

This was refused by the forward-thinking management of Reuters,
who were happily exploring ideas such as mediocrely parallel systems
and farms of Sun workstations. I believe both ideas, and the people who
were pushing them, are no longer at Reuters either. Food for thought...

Bob Bernecky 
Snake Island Research Inc.

ps: I'm not there any more either. 

frost@watop.nosc.mil (Richard Frost) (03/04/91)

Jeff Carroll (carroll@ssc-vax.boeing.com) writes:

>     I think there are two kinds of people who use transputers.
> One is the computer scientist who is doing research on parallel
> systems, and the other is the scientist or engineer whose
> number-crunching application takes too long to run on his PC.

Some of us want them because our number-crunching applications take too
long even on a Cray YMP, or because it's simply more cost-effective to use
a large transputer network for an inherently parallel application.
(Never mind that a 25-33 MHz PC or Mac outperforms a VAX ;-)

moses@NADC.NADC.NAVY.MIL (Bill Moses) writes:

>Who wants massively parallel machines anyhow?
>What are the foreseen uses of large parallel machines?

Digital signal processing:  Specifically, at my site a proposed application
is the identification of dim incoming hostile targets (e.g. Exocet missiles).
Another application was proposed at CERN to detect weak particle signatures.
Some image-processing applications are better suited to transputers, or MIMD
in general.  Real-time non-linear control of many dynamic systems, including
municipal traffic control, is possible with parallel machines.

Numerical Analysis: The folks at Los Alamos and elsewhere have been busy
re-writing the book for parallel architectures.  Check recent SIAM proceedings.

Symbolic Analysis:  The nth order hyper-foobar expansion of f(x,y,z).
Soliton or wavelet solutions to the general case of Maxwell's
equations => goodbye to ray-tracing.

Combinatorics/Graph Theory:  Transportation and network optimization,
and of course the TSP.

Others will undoubtedly contribute more ...

--
(Note: please e-mail directly as the mail header "From:" line is broken)
Richard Frost				Naval Ocean Systems Center
frost@watop.nosc.mil			voice: 619-553-6960

hht@filbert.sarnoff.com (Herbert H. Taylor x2733) (03/06/91)

** This brings up a simple question: Who wants massively parallel
** machines anyhow? Computer Scientists and Engineers experiment
** with and develop systems - what are they used for? I'm all
** for research for its own sake, but there are actually people
** out there who want to apply it. I suppose the questions are more
** along these lines: What are the foreseen uses of large parallel
** machines? Is this in line with current research? What should 
** they be used for (besides weather prediction, Mandelbrot sets, etc.)?

   We have developed a massively parallel video supercomputer, a.k.a.
the Princeton Engine. This is a SIMD machine with 2048 custom 16-bit
DSP chips. It was originally intended for HDTV research but has since
found a diverse range of applications, including image pyramid
processing, multispectral analysis, SAR, histogram equalization, data
compression algorithms, neural nets, terrain rendering, medical
imaging, volume visualization, ultrasound processing, teleoperation
and, oh, even Mandelbrot sets.... All these applications run in
continuous real-time. For example, we simulate proposed HDTV systems
in continuous real-time. On conventional computing platforms such as
high-end mainframes, HDTV simulations take 10 to 20 hours to compute
ONE second of video, while on the Princeton Engine the simulation runs
continuously - we watch "television" on the computer. For a
conventional mainframe to assemble even a few seconds of a single
simulation of an important design idea (perhaps to show your boss)
might take weeks to compose, simulate and evaluate. On a system such
as the Princeton Engine the process is real-time.
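
One of the applications listed above, histogram equalization, shows why
per-frame video work suits a SIMD machine. A hypothetical Python sketch
(not the Princeton Engine's code; each stage below maps to a standard
parallel primitive, noted in the comments):

```python
def equalize(frame, levels=256):
    """Remap pixel values so the output histogram is roughly flat.

    `frame` is a list of rows of integer pixel values in [0, levels).
    """
    # 1. Histogram: on a SIMD machine, a parallel reduction over pixels.
    hist = [0] * levels
    for row in frame:
        for v in row:
            hist[v] += 1
    # 2. Cumulative distribution: a parallel prefix scan over the bins.
    total = sum(hist)
    cdf, running = [], 0
    for h in hist:
        running += h
        cdf.append(running)
    # 3. Remap: every processor applies the same lookup table to its
    #    own pixel in lockstep - the pure SIMD step.
    lut = [round((c / total) * (levels - 1)) for c in cdf]
    return [[lut[v] for v in row] for row in frame]
```

Stages 1 and 2 need inter-processor communication; stage 3, which
touches every pixel, needs none, which is where the 2048-way
parallelism pays off at video rates.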

  In fact, the kind of interaction one can have with the creative
design process when the response of the computer is instantaneous is
extraordinary - and is difficult to describe without sounding
unbelievable. It is different even from the interaction one can have
with other very fast computers - even a supercomputer.

 The Princeton Engine was originally developed when DSRC was RCA Labs.
One system has been here at DSRC in Princeton and one in Indianapolis,
Indiana, for almost three years. This spring we are placing another system
at NIST in Maryland under DARPA sponsorship. The NIST system will be
used primarily by DARPA HDTV program participants.

 Many of the important conceptual ideas embodied in the recent DARPA
BAA for High Definition systems will require massively parallel systems
to fully explore. DARPA's interest lies in "high
resolution systems for applications in command and control, battle
management, training and simulation, intelligence analysis, weapons
systems..." If one examines the processing requirements of
several of the areas embodied in the BAA, it is evident that a
significant measure of parallelism will be required. The technical areas
of the BAA include displays, processors, etc. These are fundamentally
the areas where the Princeton Engine has found great applicability.


  H Taylor.