[net.arch] Why I am a cynic about multiprocessor flaming

reid@Glacier.ARPA (Brian Reid) (05/10/85)

Several people seem to have misunderstood my posting about multiprocessors.
First, I should explain that I think I know a lot about multiprocessors. I
have programmed the Univac parallel monsters, the C.mmp and Cm* machines at
Carnegie-Mellon, and have briefly flirted with writing code for several
unnamed experimental MIMD machines. I have also paid very close attention to
the published results of the multiprocessor research groups at Bell Labs,
Goodyear, Texas Instruments, Livermore, MIT, Illinois, and other places. I
would like to recommend that people who want to have informed opinions about
multiprocessors try reading some of this history also.

The reason I voiced my skepticism is that I have been quietly reading a
bunch of stuff about multiprocessors in this group for a long time; the
conversations, by and large, have not been about how to build ultra-high-end
or ultra-special-purpose machines, but about how to build cost-effective
midrange computers. 256 Z-80s do not make a computer worth anything at all,
regardless of how cheap it is.

It is obviously true that multiprocessors are a big win for problems that
have inherent parallelism and that are important enough to warrant dedicated
machines (e.g. FFT, convolution). But many years of research on MIMD
multiprocessors have shown that such machines are nearly impossible to
program, and that the cost of synchronizing the computations usually
dominates the useful work. Many of the correspondents in this newsgroup
(e.g. Brooks) know this, and it is obvious from their postings that they
understand its ramifications. Many of the others, however, do not seem to.
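
To see why synchronization so often wins, consider a back-of-envelope
model (my own illustration here, not a result from any of the groups
above). Suppose a job contains W units of work that divide perfectly
among N processors, and suppose the total cost of keeping those
processors in step grows with N, say s*N for some per-processor
synchronization overhead s. A single processor pays no such overhead,
so T(1) = W and

	T(N) = W/N + s*N
	S(N) = T(1)/T(N) = W / (W/N + s*N)

The compute term shrinks as you add processors but the overhead term
grows, so T(N) bottoms out at N = sqrt(W/s) and actually gets *worse*
beyond that. Unless the work between synchronizations is enormous
compared to the cost of a synchronization, the best N is
embarrassingly small.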

The lure of parallel computers is strong. They seem to be so perfect. Take a
nice cheap chip and replicate it N times to make a supercomputer. What
history has shown, again and again and again, is that if N is greater than 4,
you are wasting your time unless you intend to dedicate this computer to the
solution of problems that are well understood and can be decomposed into a
large number of independent tasks that do not need to synchronize very often.
The intended application of the Connection Machine (logical inferences) is
one such problem (though I still suspect that its communication and
synchronization delays will dominate, and I eagerly await seeing some
numbers).

Every year I read about a new university research project to take hundreds
and hundreds of X chips and connect them together in topology Y to make the
ultimate computer. It is much rarer to read benchmark results from those
groups, five years later, showing that the machines they built are
worth anything. The Caltech cube is an exception--kudos to Chuck Seitz and
all the rest of his research group. I won't mention any names, but I can
think of 5 or 6 other recent university projects that I would have to
classify as failures, because they didn't understand the lessons of
multiprocessor history. If you are going to flame about multiprocessors on
netnews, you should at least learn the history and vocabulary of the field.
-- 
	Brian Reid	decwrl!glacier!reid
	Stanford	reid@SU-Glacier.ARPA