[comp.parallel] Critical Issues in Distributed-Memory Multicomputing.

wangjw@usceast.cs.scarolina.edu (Jingwen Wang) (12/03/90)

   Distributed-memory multicomputers have now been installed in 
many research institutions and universities. The most exciting time for
such architectures (I would guess 1984-1988) seems to have come
to an end. The exploratory stage of these architectures is in fact
complete. Many scientists who worked on such architectures earlier have
now switched to other directions (such as those at Caltech). The
hypercube computers still seem far from mature enough for widespread
engineering use. Software is always a vital problem for any
parallel computer, and is particularly a problem for multicomputers.
  There still seems to be much to be done in this area. Many researchers
are numerical scientists who develop parallel algorithms for such
machines. The software and programming problems are given less
attention.
  What in hell are the most critical issues in this area? What are the
ongoing efforts to cope with these issues? I am not so sure about
these. We seem to be in an endless loop, designing endless algorithms
for an endless variety of applications. When will the usual engineer
(not the computer engineer) be able to use such machines with ease?
  May I suggest that interested experts contribute their ideas.
Of course, we don't expect anyone to bring a complete answer
to all problems. Any comments are welcome!

Jingwen Wang

gdburns@osc.edu (12/04/90)

>
>   Distributed-memory multicomputers have now been installed in 
>many research institutions and universities. The most exciting time for
>...
>engineering use. Software is always a vital problem for any
>parallel computer, and is particularly a problem for multicomputers.
>  There still seems to be much to be done in this area. Many researchers
>are numerical scientists who develop parallel algorithms for such
>machines. The software and programming problems are given less
>attention.
>...
>for an endless variety of applications. When will the usual engineer
>(not the computer engineer) be able to use such machines with ease?
>
>Jingwen Wang

I am willing to take a shot at this.

I have been working on infrastructure s/w for multicomputers since '84
and I should first say that the situation has improved dramatically since
then.  I think we need four things for multicomputers to work:  a)
h/w, b) infrastructure s/w (compilers, comm/sync libraries and OS),
c) algorithms (embodied in libraries not just papers) and d) education.

H/W is in reasonable shape.  Witness Touchstone, Intel, Ncube, office
buildings full of incredible workstations and decent networks, and the
easy-to-buy, easy-to-build transputers.

Infrastructure s/w, which I'll just call OS, doesn't get enough attention
from the vendors, in my opinion, because the customer tends to look at
either the glamour of the h/w or the bottom line application.  OS doesn't
get enough attention from academia because, IMHO, the work that needs
to be done is grunt level - essentially big D and very little r.

Algorithms are going well, I think, because of the number of people
doing work.  Getting the work productized and distributed is not going
so well.  I have not done a survey but I'm sure you could find a
workable algorithm for every solver somewhere in the literature.
In some cases you can find several alternatives.

Education is the kicker.  You have to learn how to program distributed
memory.  There are no true magic pills.  It is not hard to learn but
if you assume you don't have to learn anything or you expect to throw
a switch and make the whole thing go faster, you're in trouble.

Back to OS, I think that we need to get out our hammer and saw and
build a quality lifestyle on multicomputers.  Quality to me means:

1) a dynamic multi-tasking environment on every node
2) a powerful, no-nonsense, save the paradigms for later, support everything
   message passing system on every node
3) Unix file access in all its glory from every node
4) lots of control (user is boss not servant) and monitoring
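
To make (2) concrete, here is a rough sketch of the kind of node-level
interface I mean.  The names and signatures are hypothetical, not
Trollius or any vendor's actual calls; think of it as the shape of the
layer, written out as C declarations:

    /* A minimal node-level message-passing layer (hypothetical names,
     * not a real API).  Typed sends and receives, plus the status
     * calls whose absence can really ruin your day.
     */
    typedef struct {
        int src;          /* sending node id              */
        int type;         /* user-chosen message type tag */
        int len;          /* payload length in bytes      */
    } msg_info;

    int msg_send(int dest, int type, const void *buf, int len);
    int msg_recv(int src, int type, void *buf, int maxlen, msg_info *info);
    int msg_probe(int type, msg_info *info);  /* status without blocking */
    int msg_nodeid(void);                     /* who am I?               */
    int msg_nnodes(void);                     /* how many nodes?         */

Everything fancier (paradigms, virtual topologies, synchronization
libraries) can be layered on top of something this small.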

What we are used to for OS is a black box of nodes with performance as
goal one and there is no goal two.  Now we are taking the other extreme
and putting Mach and Unix friends on every node.  If scalable scientific
computation is the goal, I believe this is overkill.  I don't blame the
vendors who do what the market and the bottom line dictate.  I don't
blame academia who have to publish or die.  There just aren't many places
that can afford to engineer nuts and bolts software.

I think MIMDizer and F90 are logical next steps in multicomputer tool
technology.  I/O and comm libraries like CrOS and Cubix are also very good.
Linda and Strand are, IMHO, appropriate for functional parallelism.
These things make multicomputer programming sensible and even fun.
But not enough effort has been spent on open environments with process
control/status, message status, error code delivery, and other mundane
support whose absence can really ruin your day.

--Greg
-- 
Greg Burns				gdburns@tbag.osc.edu
Trollius Project			(614) 292-8492
Research Computing			The Ohio State University

fsset@bach.lerc.nasa.gov (Scott E. Townsend) (12/04/90)

In article <12027@hubcap.clemson.edu> wangjw@usceast.cs.scarolina.edu (Jingwen Wang) writes:
>
>   Distributed-memory multicomputers have now been installed in 
>many research institutions and universities. The most exciting time for
>such architectures (I would guess 1984-1988) seems to have come
>to an end. The exploratory stage of these architectures is in fact
>complete. Many scientists who worked on such architectures earlier have
>now switched to other directions (such as those at Caltech). The
>hypercube computers still seem far from mature enough for widespread
>engineering use. Software is always a vital problem for any
>parallel computer, and is particularly a problem for multicomputers.
>  There still seems to be much to be done in this area. Many researchers
>are numerical scientists who develop parallel algorithms for such
>machines. The software and programming problems are given less
>attention.
>  What in hell are the most critical issues in this area? What are the
>ongoing efforts to cope with these issues? I am not so sure about
>these. We seem to be in an endless loop, designing endless algorithms
>for an endless variety of applications. When will the usual engineer
>(not the computer engineer) be able to use such machines with ease?
>  May I suggest that interested experts contribute their ideas.
>Of course, we don't expect anyone to bring a complete answer
>to all problems. Any comments are welcome!
>
>Jingwen Wang

I'm no expert, but it seems to me that Distributed-Memory Multi-Computers 
are at a point where they need some standardization at the user level,
de facto or otherwise.  We need some standard portable languages and tools
to allow users to worry about their problem, not whether they are running
on an NCube or IPSC or some other machine.

At a low level, this implies a portable message passing library with tools
for debugging and monitoring.  People here, at Argonne, at Oak Ridge, and
many other facilities are working on this.  But a common standard that
a user can expect to exist on whatever machine they run on isn't here yet.
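
As a sketch of what the bottom of such a standard might look like (the
machine-specific entry points below are placeholders, not the real
NCube or iPSC calls):

    /* One portable user-level call, with the machine-specific call
     * hidden behind a compile-time switch.  The vendorX_* functions
     * are placeholders for whatever each machine actually provides.
     */
    int par_send(int dest, int type, void *buf, int len)
    {
    #if defined(MACHINE_A)
        return vendorA_send(dest, type, buf, len);    /* placeholder */
    #elif defined(MACHINE_B)
        return vendorB_write(buf, len, dest, type);   /* placeholder */
    #else
    #error "no message-passing layer configured for this machine"
    #endif
    }

Recompile with a different switch and the application source doesn't
change; at this level, that is about all a de facto standard has to mean.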

At the next level, I think you want to hide the details of message passing.
This might be through distributed shared memory, or maybe by the compiler
generating message passing calls in the object code.  I don't expect
great efficiency on 'dusty decks', but an algorithm designer should only
be concerned with parallelism, not the details of each little message.
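
To take a concrete case: in a one-dimensional domain decomposition, the
"little messages" are the ghost-cell exchanges at block boundaries.
Here is a sketch of what the programmer should NOT have to write by
hand, using the hypothetical par_send above plus a symmetric
par_recv(src, type, buf, len):

    /* Ghost-cell exchange for a 1-D block decomposition: each node
     * owns u[1..n] and mirrors its neighbors' edge values in u[0]
     * and u[n+1].  This is the kind of code one would hope a compiler
     * could generate from a global-view loop.  Assumes buffered
     * sends, so both neighbors may send before either receives.
     */
    void exchange(double *u, int n, int me, int nnodes)
    {
        if (me > 0) {                 /* swap edge values with left  */
            par_send(me - 1, 0, &u[1], sizeof(double));
            par_recv(me - 1, 0, &u[0], sizeof(double));
        }
        if (me < nnodes - 1) {        /* swap edge values with right */
            par_send(me + 1, 0, &u[n], sizeof(double));
            par_recv(me + 1, 0, &u[n + 1], sizeof(double));
        }
    }

Once a layer like this is standard, the algorithm designer writes the
update loop and leaves the exchange to the compiler or runtime.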

There are a number of research efforts in this area, but the tools/systems
developed are all just a bit different.  If the low level gets standardized,
then maybe something like a parallel gcc could be developed.

Unfortunately, I don't think enough experience has been gained with these
systems for people to agree on a standard.  We all know about SEND and RECV,
but we're still playing around with the details.  And I don't think a system
will really be usable for general problem solving until the programmer can
concentrate on parallel algorithm design rather than passing individual
messages.

(You might compare this with the stdio package for the C language.  I/O is
often different on different operating systems, e.g. UNIX, VM, MS-DOS, yet
I can expect fprintf to exist and always work the same way.  It also hides
the implicit buffering from me.)

--
------------------------------------------------------------------------
Scott Townsend               |   Phone: 216-433-8101
NASA Lewis Research Center   |   Mail Stop: 5-11
Cleveland, Ohio  44135       |   Email: fsset@bach.lerc.nasa.gov
------------------------------------------------------------------------

eugene@nas.nasa.gov (Eugene N. Miya) (12/13/90)

I go on vacation and come back to this....
In article <12051@hubcap.clemson.edu> gdburns@osc.edu writes:
>>
>>   Distributed-memory multicomputers have now been installed in 
>>many research institutions and universities. The most exciting time for
>>...
>>for an endless variety of applications. When will the usual engineer
>>(not the computer engineer) be able to use such machines with ease?
>>
>>Jingwen Wang

A good post of some of the issues.

I think you have to ask if the usual engineer uses non-DMMCs with ease.
I assert not.  Read back in the literature using "automatic programming"
as a keyword.  It has gotten easier, but we have only learned with time
and experience.  We have a deceptive situation: some programming
appears very easy to learn: the basics, the syntax of programming
languages for simple programs, and so on.

>I am willing to take a shot at this.
>
>H/W is in reasonable shape.

A nice ATTEMPT at a state-of-the-art survey.
I don't quite agree.  We have perfected packaging, but we have not
perfected communication or storage.  We have a fetish for CPU speed
while ignoring balance.  Amdahl deserves a lot of credit for his little
3 page NCC paper (1967).
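
(For anyone who hasn't read it: the argument is that if a fraction p of
a job parallelizes over n processors, speedup = 1 / ((1 - p) + p/n).
With p = 0.95 the speedup never exceeds 20 no matter how large n grows,
which is why balance matters more than raw CPU speed.)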

That you ignored difficulties with programming languages tells me we have
a long way to go.  A reviewer can tell the quality of a survey paper
by the ratio of text on hardware to text on software.  This also goes for
market literature, personnel in a start-up, and a few other artifacts.

>from the vendors, in my opinion, because the customer tends to look at
>either the glamour of the h/w or the bottom line application.

One can't blame them for watching the bottom line, especially when a
technology as incomplete as EE/CS is involved.  A friend from IBM TJW (I
was just skiing with him) gave me an article which ended with:
"Creating the Universe was simple: God didn't have an installed base."

>OS doesn't
>get enough attention from academia because, IMHO, the work that needs
>to be done is grunt level - essentially big D and very little r.

The OS people are constrained by the limitations of hardware.  End users
don't care as much about the OS.  That indifference creates some subtle
problems (Elxsi, Denelcor, and a few other defunct companies can
tell you about this.)  I had an interesting lunch discussion about
the concept of a "load average" on a multiprocessor.

>Algorithms are going well, I think, because of the number of people
>doing work.  Getting the work productized and distributed is not going well.
>I have not done a survey but I'm sure you could find a
>workable algorithm for every solver somewhere in the literature.

I do not think algorithms are going well.  I have done some survey.
Algorithms by their very nature tend to be sequential.  What tends to be
"parallel" is data.  There are a few books (Akl, Ortega and Voigt)
but these all tend toward specialized algorithms.  I took the word "good"
out before "books."  It's too early to judge quality.  Akl's book is now
in its second edition.  I don't know what you expect from "productization."
And there are algorithms which do not parallelize well at this time.
It depends on a vague concept of "level of parallelism."

>Education is the kicker.  You have to learn how to program distributed
>memory.  There are no true magic pills.

Very good (no silver bullets).

See, you are re-inventing programming again.  Hence the automatic
programming comment.  This must have been what programming was like in
1946-1950.  Now we replace the LOADs and STOREs with SENDs and RECEIVEs.
The goal at the start was to hide the "underlying" architecture, but to
get at the speed, we need to see it.  It's no better with the FORALLs
and DOALLs, or a host of other constructs.  The problem is not syntactic,
it's semantic.  It makes comparison and evaluation difficult.
We ended up creating something now called a compiler.  But we
are overloading the functionality of compilers.  Part of that is the
problem of existing programming languages built for a von Neumann
model of programming.
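
To see why the problem is semantic rather than syntactic, compare the
same one-line update in the two models (a sketch, reusing the
hypothetical par_send/par_recv names from earlier in this thread;
x, owner_of_y, and TAG_Y are assumed context):

    /* Shared memory: one statement, the machine is invisible. */
    x = y + 1.0;

    /* Message passing: the same update when y lives on another node.
     * The syntax is still simple; the semantics (who owns y, when is
     * it valid, does this call block?) is where the difficulty moved.
     */
    double y_copy;
    par_recv(owner_of_y, TAG_Y, &y_copy, sizeof(double));  /* blocks */
    x = y_copy + 1.0;

The FORALLs and DOALLs dress the second form up to look like the first,
which is exactly why comparison and evaluation are difficult.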

If the users are unwilling to explore new programming languages,
some models of parallel computation will be in trouble.  Perfect
automatic parallelization only exists in the mind of the media, hopeful
users, and very few computer scientists.

Anita Jones and others wrote some interesting comments about the ease
of parallel programming in the early 1980s.  Her reports and surveys
from CMU (Cm* and earlier C.mmp papers) noted many of the problems.  
We still have those problems today, yet we are not talking the same
human language.

We are so closed-minded that earlier approaches like LISP and APL,
and languages like VAL and SISAL, can't even get a chance at a
reasonable evaluation.

>Back to OS, I think that we need to get out our hammer and saw and
>build a quality lifestyle on multicomputers.  Quality to me means:
>
>1) a dynamic multi-tasking environment on every node
>2) a powerful, no-nonsense, save the paradigms for later, support everything
>   message passing system on every node
>3) Unix file access in all its glory from every node
>4) lots of control (user is boss not servant) and monitoring

Naive Motherhood: somewhat.  This is partially what led to CISC
architectures.  It also created Unix and RISCs. This isn't a flame, but
I think we put too much functionality into some parallel systems.
We have to simplify and strip out.  No one would seriously consider
a new OS not written in a high-level language (a development tar pit).
But we see people trying to explore concepts like light-weight
tasking.

>I think MIMDizer and F90 are logical next steps in multicomputer tool
>technology.  I/O and comm libraries like CrOS and Cubix are also very good.
>Linda and Strand are, IMHO, appropriate for functional parallelism.
>These things make multicomputer programming sensible and even fun.
>But not enough effort has been spent on open environments with process
>control/status, message status, error code delivery, and other mundane
>support whose absence can really ruin your day.

I am not so sure. (Sorry John 8^).  I don't know how best to describe
what we have to do.  Developers have to be able to play around with
real (not simulated; we sometimes do too much of this, let's simulate
1 TFLOP......8^) architectural choices.  User communities must be
willing to try new developments like OSes, languages, and architectures.

We are all blind men trying to describe an elephant.  I recommend
Fred Brooks' The Mythical Man-Month (1975).  One thing Brooks noted is
that the structure of a system reflects the bureaucracy which built it.
Parallel computing research is in a state of flux because of the immaturity
of the technologies which feed it (H/W and S/W).  It says something about
those doing research, development, and the funding agencies.  We
lack critical mass and we have a meek user community (the cost penalty is
high for a mistake).  This is why the "critical mass" analogy is important.
If we ignore or forget aspects of parallel computing, we will not see
problems or their solution.  We lack good conceptual models like the
von Neumann machine in sequential programming.

It is important in any science to have a healthy dose of skepticism.
CS/EE tends to be a bit weak on this (perennial optimists 8^).
A good test would be to guess which companies you would be willing to
invest personal, real dollars in.  The same test applies to me, for
that matter.

--e. nobuo miya, NASA Ames Research Center, eugene@orville.nas.nasa.gov
  {uunet,mailrus,other gateways}!ames!eugene

carroll@cs.washington.edu (Jeff Carroll) (12/27/90)

In article <12235@hubcap.clemson.edu> eugene@wilbur.nas.nasa.gov (Eugene N. Miya) writes:
>In article <12051@hubcap.clemson.edu> gdburns@osc.edu writes:
>I think you have to ask if the usual engineer uses non-DMMCs with ease.
>I assert not.  Read back in the literature using "automatic programming"
>as a keyword.  It has gotten easier, but we have only learned with time
>and experience.  We have a deceptive situation: some programming
>appears very easy to learn: the basics, the syntax of programming
>languages for simple programs, and so on.

	In my experience we now have two "generations" of practicing
engineers - those who have learned how to use the computer as an
engineering tool, and those who have not. (I hesitate to use the term
"generation" because some belonging to the former group have been
practicing since 1960, and some of the latter graduated from college in
1985.)

	The first group is quite adept at sitting down at the computer
and coding a calculation that needs to be performed, in their personal
language of choice (be it FORTRAN, BASIC, Pascal, C, or Forth).  It's
certainly simpler than programming an SR-52 or an HP-41.

	We have some of these people who have learned to use hypercubes
competently with relatively little trouble. Most of them have developed
enough understanding of the computer in general that they can understand
what goes on in a DMMP machine.

	The second group are the ones that are hard pressed to generate
spaghetti FORTRAN that gets the right answer. They are completely
hopeless when it comes to programming DMMPs.


>>I am willing to take a shot at this.

>>H/W is in reasonable shape.

>A nice ATTEMPT at a state-of-the-art survey.
>I don't quite agree.  We have perfected packaging, but we have not
>perfected communication or storage.  We have a fetish for CPU speed
>while ignoring balance.  Amdahl deserves a lot of credit for his little
>3 page NCC paper (1967).

	I think that it must be admitted that HW is at least way ahead
of SW. The economics of the situation dictate that unless there are
overwhelming arguments against it, we must use silicon which is
available off-the-shelf as our building blocks, and adhere to industry
standards wherever possible.

	The DMMP manufacturers have done a good job of using standards
when it comes to storage and IO. What is needed now is an interprocessor
interface standard - the transputer link is nearly perfect except that
it is *way too slow*. It has all the other goodies - it's simple, it's
cheap, and it's public domain.

>That you ignored difficulties with programming languages tells me we have
>a long way to go.  A reviewer can tell the quality of a survey paper
>by the ratio of text on hardware to text on software.  This also goes for
>market literature, personnel in a start-up, and a few other artifacts.
>
>>from the vendors, in my opinion, because the customer tends to look at
>>either the glamour of the h/w or the bottom line application.

	In the MIMD world, nobody with SOTA hardware has decent
software. Thus none of the vendors face market pressure to improve their
product.

>>OS doesn't
>>get enough attention from academia because, IMHO, the work that needs
>>to be done is grunt level - essentially big D and very little r.

>The OS people are constrained by the limitations of hardware.  End users
>don't care as much about the OS.  That indifference creates some subtle
>problems (Elxsi, Denelcor, and a few other defunct companies can
>tell you about this.)  I had an interesting lunch discussion about
>the concept of a "load average" on a multiprocessor.

	There are not enough systems programmers to go around in the
industry at large, and even fewer (I guess) who can deal with MIMD
boxes. Nonetheless the vendors push the hardware because that's
ultimately where the performance is (you can always machine-code a sexy
demo to take around to trade shows. This is not unique to parallel
systems. I understand that that's what IBM did with the RS6000.)

>>Algorithms are going well, I think, because of the number of people
>>doing work.  Getting the work productized and distributed is not going well.
>>I have not done a survey but I'm sure you could find a
>>workable algorithm for every solver somewhere in the literature.

>I do not think algorithms are going well.  I have done some survey.

	There are lots and lots of books containing papers about and
algorithms for machines that don't exist any more. What is lacking (as
far as I know) is a broadly accepted taxonomy upon which parametrizable
algorithms can be developed. Unfortunately there are only a handful of
people around with the mental equipment to do this sort of thing (I
certainly don't claim to be one of them).


>>Education is the kicker.  You have to learn how to program distributed
>>memory.  There are no true magic pills.


>See, you are re-inventing programming again.  Hence the automatic
>programming comment.  This must have been what programming was like in
>1946-1950.  Now we replace the LOADs and STOREs with SENDs and RECEIVEs...

>If the users are unwilling to explore new programming languages,
>some models of parallel computation will be in trouble.  Perfect
>automatic parallelization only exists in the mind of the media, hopeful
>users, and very few computer scientists...

>We are so closed-minded that earlier approaches like LISP and APL,
>and languages like VAL and SISAL, can't even get a chance at a
>reasonable evaluation.

	I often wonder what happened to the flowchart. We are just now
developing the technology (GUIs, OOP) which will enable us to make
serious use of the flowchart as a programming tool.

>>Back to OS, I think that we need to get out our hammer and saw and
>>build a quality lifestyle on multicomputers.  Quality to me means:

>>1) a dynamic multi-tasking environment on every node
>>2) a powerful, no-nonsense, save the paradigms for later, support everything
>>   message passing system on every node
>>3) Unix file access in all its glory from every node
>>4) lots of control (user is boss not servant) and monitoring

>Naive Motherhood: somewhat.  This is partially what led to CISC
>architectures.  It also created Unix and RISCs. This isn't a flame, but
>I think we put too much functionality into some parallel systems...

	I think a generally useful DMMP system has to support node
multitasking at least to the level of permitting the "virtual processor"
concept. You shouldn't (within practical limits) have to size the
problem to fit the machine. I'm not so concerned about file system
access; you can always build a postprocessor to interpret the answer for
you once it exists in some algorithmically convenient form.

	The stripped-down compute-node kernel is a good idea, as long as
you can spawn multiple copies of it on a physical processor. Similarly
the specialized file server node is a good idea as long as it has
adequate connectivity to the rest of the machine. It should also support
multitasking, since there are useful IO massaging processes that can be
done on them rather than screwing up your load balance by putting them
on "compute nodes".

>>I think MIMDizer and F90 are logical next steps in multicomputer tool
>>technology.  I/O and comm libraries like CrOS and Cubix are also very good.
>>Linda and Strand are, IMHO, appropriate for functional parallelism...

>I am not so sure. (Sorry John 8^).  I don't know how best to describe
>what we have to do... 
>... User communities must be
>willing to try new developments like OSes, languages, and architectures.

	Again, I expect to see the resurgence of the flowchart in the
form of graphical programming aids. What good is that Iris on your desk
if it can't help you program? If all you can do is display the results
of your work, it might as well be a big-screen TV in the conference
room.

	Once we have the right taxonomy and flowcharting concepts, we
can start to merge the various parallel technologies that have developed
(MIMD, SIMD, VLIW, dataflow, ...) into systems that can be
architecturally parametrized to suit the application/range of
applications desired.

	Maybe I should go back to grad school.

	Jeff Carroll
	carroll@atc.boeing.com

eugene@nas.nasa.gov (Eugene N. Miya) (01/03/91)

In article <12401@hubcap.clemson.edu> bcsaic!carroll@cs.washington.edu
(Jeff Carroll) writes:
>	In my experience we now have two "generations" of practicing
>engineers - those who have learned how to use the computer as an
>engineering tool, and those who have not.

Actually I suggest you sub-divide those who learned computers into
two groups (pre-card-deck and post-card-deck).  I also suggest that
engineers who lack computer experience have valuable practical and
theoretical experience which makes them knowledgeable computer skeptics.

Older computer people are just now making it into the management ranks
of organizations.  Taking a FORTRAN/batch-oriented view of programming
has some distinct disadvantages.  This can be MORE detrimental than
total computer naivete.  I say this from earlier attempts to bring NASA
out of the card age.  But that is a political discussion.

We must not be computer bigots toward those who do not know computers.

>	I think that it must be admitted that HW is at least way ahead
>of SW. The economics of the situation dictate that unless there are

Again, this is where I suggested Fred Brooks' Mythical Man-Month.
See his 1960s S curve on systems development where HW cost and development
initially dominate.

>	The DMMP manufacturers have done a good job of using standards
>when it comes to storage and IO. What is needed now is an interprocessor
>interface standard - the transputer link is nearly perfect except that
>it is *way too slow*. It has all the other goodies - it's simple, it's
>cheap, and it's public domain.

"Good job" is a bit strong.  The first hypercubes had all I/O going
thru a single node.  They discovered the need for balance quickly.
We don't have standards or even good concepts for parallel disk systems
like this (infancy).  It does have to be "public domain," but
that's a touchy issue.  Again balance: See Amdahl's article.

>	There are not enough systems programmers to go around in the
>industry at large, and even fewer (I guess) who can deal with MIMD
>boxes. Nonetheless the vendors push the hardware because that's
>ultimately where the performance is (you can always machine-code a sexy
>demo to take around to trade shows. This is not unique to parallel
>systems. I understand that that's what IBM did with the RS6000.)

When the CM made Time, Steve Squires (then head of DARPA's office on this)
was quoted as saying only 1 in 3 programmers will make the transition to
parallel programming architectures.  Now where did he pull that figure
from?  I'm trying to maintain a comprehensive bibliography in the field,
and that's certainly a topic to interest me.  Performance appears to be
in the hardware, and it sets bounds, but I would not discount optimizing
software.

>	There are lots and lots of books containing papers about and
>algorithms for machines that don't exist any more. What is lacking (as
>far as I know) is a broadly accepted taxonomy upon which parametrizable
>algorithms can be developed. Unfortunately there are only a handful of
>people around with the mental equipment to do this sort of thing (I
>certainly don't claim to be one of them).

Actually, there are fairly few books on parallel algorithms, and only a
few books on machines.  Knowledge gained from most failed projects is
lost, and "man" learns from failure.
I admit I can't make it to Computer Literacy Bookshop every week [an ace
card for Silicon Valley, next to Fry's Electronics (groceries next to
electronic chips in a building decorated like a chip); I think Stevenson
(your moderator) walked out with $300 in books once 8^)], and I can't ask
them a special favor for every new book on parallelism, but I could,
and I know they would say yes.
A taxonomy would help, but it's only one step.  I don't claim the mental
equipment either, BTW.

>	I often wonder what happened to the flowchart. We are just now
>developing the technology (GUIs, OOP) which will enable us to make
>serious use of the flowchart as a programming tool.

I know of at least two attempts, and probably more, to make dataflow
languages using GUIs (one of the first and crudest was a language
called Appleflow, written as a demo on a Mac in 1985).  I recall a joke
about how hard it was to program a factorial n! function.

Geometry is filled with wonderful analogies but they fall down
when it comes to certain aspects of timing and synchronization.
Certainly more work needs to be done, but it is no relief to the "dusty deck."

>	I think a generally useful DMMP system has to support node
>multitasking at least to the level of permitting the "virtual processor"
>concept. You shouldn't (within practical limits) have to size the
>problem to fit the machine. I'm not so concerned about file system
>access; you can always build a postprocessor to interpret the answer for
>you once it exists in some algorithmically convenient form.

The VP is something which was added to the Connection Machine.  It
certainly helps.  We shouldn't have to size the problem to the machine,
but we do.  The US builds and funds special-purpose, one-of-a-kind
machines.  I fear your "post-processor" because it seems to leave too
much until after the fact.  It shrugs off potentially important details,
and computer users frequently don't like that.

Machines/architectures have "failed" because they lack appropriate
balance and attention to detail.  This is why comp/arch is an art.  

>	Again, I expect to see the resurgence of the flowchart in the
>form of graphical programming aids. What good is that Iris on your desk
>if it can't help you program? If all you can do is display the results
>of your work, it might as well be a big-screen TV in the conference
>room.
>
>	Once we have the right taxonomy and flowcharting concepts, we
>can start to merge the various parallel technologies that have developed
>(MIMD, SIMD, VLIW, dataflow, ...) into systems that can be
>architecturally parametrized to suit the application/range of
>applications desired.

You need to walk into a room filled with non-parallelism people.
I had a discussion with a former editor of mine (Peter Neumann, SRI).
The software engineers are trying to get away from flow charts (which
have their problems).  As some of us who have read and discussed the
Pancake/Bergmark paper know, we have too much confusion and too many
differing assumptions about parallelism.  Users make too many
assumptions about synchronization.  The notation for parallelism does
not completely scale at this time.

>	Maybe I should go back to grad school.

I need to, too. 8^)  We all probably do, and we need to go back to schools
filled with collections of different parallel computers (hence the 40s
analogy).  But our educational system lacks the money for this hardware
which has greater architectural diversity than the days of the ENIAC and
the Harvard machines.  Students should have access to any architecture
they want, get a chance to try different things, .... but that is education.

Too much said.

--e. nobuo miya, NASA Ames Research Center, eugene@orville.nas.nasa.gov
  {uunet,mailrus,other gateways}!ames!eugene

**If you are unfamiliar with Computer Literacy Bookshop, you shouldn't
be: you can even buy UMI Press PhD theses in hardcover form there.
(408-435-1118)