[comp.arch] Hypercubes and second note:

eugene@pioneer.UUCP (02/19/87)

Doug Pase writes:
>Hypercubes are hard to use only if you don't know anything about them, or
>about what they're good for.

I won't mince words: Hypercubes are difficult to program.  I've not seen
a parallel machine which is not difficult to program in some way.  I
know people who don't like programming vector machines, and those are by
far the best developed.  Hypercubes have information-management and
communications issues which do not occur in uniprocessors.
Shared-memory machines are little better: McGraw (LLNL) and Grit (CSUFC)
compare them to OS-level programming of applications, and many would agree;
you have to deal with consistency problems, deadlock, etc.
Whether an application is amenable to parallelism right now really
depends on its structure.  In fact, the term "parallel" is extremely
deceiving.
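[A hypothetical illustration, not from the original thread, in modern Python: one of the consistency hazards mentioned above. Two tasks that each need two locks can deadlock if they acquire them in opposite orders; the standard discipline is a single global lock order.]

```python
# Sketch of the lock-ordering discipline that avoids deadlock: every
# task acquires lock_a before lock_b, so no cycle of waiting can form.
import threading

lock_a, lock_b = threading.Lock(), threading.Lock()
counter = 0

def transfer():
    global counter
    # Fixed acquisition order (a, then b) in every task: no deadlock.
    with lock_a:
        with lock_b:
            counter += 1

threads = [threading.Thread(target=transfer) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

[If one task instead took lock_b first, the two tasks could each hold one lock while waiting forever for the other.]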

>For large scale computing,
>a hypercube might actually be easier to use.
	     ^^^^^ this is a big IF.

SECOND, more serious note: it is beginning to appear that the powers that
be are putting a tighter and tighter grip on my multiprocessor
bibliography.  It has not yet had a SENSITIVE classification put on it,
but I think that is only a matter of time.  What this means is that it
might not be possible to distribute it outside the US anymore under
penalty of law.  This includes Allies of the US like France and England.
All for a few bibliographic references whose titles are in the public
domain.  I will try to clear the currently pending foreign requests;
requests by international companies are best made through your US
offices.  But once that SENSITIVE label is placed on it, the penalties
on me or anyone who transports it outside the US will be serious.

From the Rock of Ages Home for Retired Hackers:

--eugene miya
  NASA Ames Research Center
  eugene@ames-aurora.ARPA
  "You trust the `reply' command with all those different mailers out there?"
  "Send mail, avoid follow-ups.  If enough, I'll summarize."
  {hplabs,hao,nike,ihnp4,decwrl,allegra,tektronix,menlo70}!ames!aurora!eugene

news@cit-vax.UUCP (02/20/87)

Organization : California Institute of Technology
From: jon@oddhack.Caltech.Edu (Jon Leech)
Path: oddhack!jon

In article <347@ames.UUCP> eugene@pioneer.UUCP (Eugene Miya N.) writes:
>Doug Pase writes:
>>Hypercubes are hard to use only if you don't know anything about them, or
>>about what they're good for.
>
>I won't mince words: Hypercubes are difficult to program.  I've not seen
>a parallel machine which is not difficult to program in some way.  I

	I believe that conventional FBAPP languages and even LISP are 
not appropriate paradigms for programming such machines, which is partly
why they're so horrible right now (putting junk like the 80286 inside
doesn't help either). It's a pity that so much effort is going into
producing hardware instead of software to use it effectively.
One of the potential problems I see in better programming paradigms,
however, is that there are often several ways to decompose a problem - 
in image rendering, for example, you can decompose in frame space,
image space, object space, or combinations thereof - and a language 
designed to abstract away parallelism may also restrict which 
decompositions can be easily implemented.
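[A hypothetical sketch, not from the original thread, in modern Python: the same toy rendering job decomposed two different ways. A parallel language that fixes one decomposition would make the other awkward to express.]

```python
# Toy "renderer": a pixel is lit if any rectangular object covers it.
def shade(x, y, objects):
    return any(ox <= x < ox + w and oy <= y < oy + h
               for (ox, oy, w, h) in objects)

def render_by_image_space(width, height, objects, n_chunks):
    # Image-space decomposition: each task shades a band of scanlines.
    image = [[False] * width for _ in range(height)]
    rows_per_chunk = (height + n_chunks - 1) // n_chunks
    for chunk in range(n_chunks):        # each chunk could be one processor
        for y in range(chunk * rows_per_chunk,
                       min((chunk + 1) * rows_per_chunk, height)):
            for x in range(width):
                image[y][x] = shade(x, y, objects)
    return image

def render_by_object_space(width, height, objects):
    # Object-space decomposition: each task rasterizes one object,
    # and partial images are merged (here with logical OR).
    image = [[False] * width for _ in range(height)]
    for (ox, oy, w, h) in objects:       # each object could be one processor
        for y in range(max(oy, 0), min(oy + h, height)):
            for x in range(max(ox, 0), min(ox + w, width)):
                image[y][x] = True
    return image

objects = [(1, 1, 3, 2), (4, 0, 2, 5)]
```

[Both decompositions compute the same image; which one is *natural* to write depends entirely on what the language lets you express.]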

    -- Jon Leech (jon@csvax.caltech.edu || ...seismo!cit-vax!jon)
    Caltech Computer Science Graphics Group
    __@/

miner@ulowell.UUCP (02/23/87)

In article <1824@cit-vax.Caltech.Edu> jon@oddhack.UUCP (Jon Leech) writes:
>One of the potential problems I see in better programming paradigms
>however, is that there are often several ways to decompose a problem...
>a language designed to abstract away parallelism may also restrict which 
>decompositions can be easily implemented.

You would not want a language that abstracted away the parallelism; what
we need to develop are languages that model the parallelism of these new
architectures.  As far as I am concerned, most of the recently accepted
languages (Pascal, Modula-2, Ada) are no better than FORTRAN(1) for
implementing algorithms on these machines.  We need to develop better
languages, based on parallel and dataflow concepts, to model these
architectures.

(1) Backus, "Can Programming Be Liberated from the von Neumann Style?",
    Communications of the ACM, 1978
-- 
Rich Miner !ulowell!miner Cntr for Productivity Enhancement 617-452-5000x2693

news@cit-vax.UUCP (02/23/87)

Organization : California Institute of Technology
From: jon@oddhack.Caltech.Edu (Jon Leech)
Path: oddhack!jon

In article <1081@ulowell.cs.ulowell.edu> miner@ulowell.cs.ulowell.edu (Richard Miner) writes:
>You would not want a language that abstracted away the parallelism; what
>we need to develop are languages that model the parallelism of these new
>architectures.  

	No, YOU would not want it. I think it's far too early in the parallel
programming game to restrict our options by saying 'we don't want to do X
because...' - let someone try X and see if it works. I don't know if it's
possible to develop a reasonable language which will work well on a wide
variety of parallel architectures, but we're certainly not going to create
such a beast by making the language reflect a particular architecture!

    -- Jon Leech (jon@csvax.caltech.edu || ...seismo!cit-vax!jon)
    Caltech Computer Science Graphics Group
    __@/

miner@ulowell.UUCP (02/24/87)

In article <1854@cit-vax.Caltech.Edu> jon@oddhack.UUCP (Jon Leech) writes:
>(talk about parallel languages) but we're certainly not going to create
>such a beast by making the language reflect a particular architecture!
As you mention, what right do we have to make such global statements?

"I still feel" that we need to develop languages that model parallel 
constructs.  I am not saying develop something that models just a cube or 
just a data flow machine.  "I think" we need to develop a few languages that 
allow us to express parallel statements as easily as sequential languages 
FORTRAN, C, Ada (mostly), force us to think in terms of Von Neumann type 
machines.

These are the opinions of someone who has been coding parallel routines 
for a set of parallel dataflow chips(NEC7281's) in a parallel dataflow 
assembly language.  I want a High Level Parallel Language HLPL.
-- 
Rich Miner !ulowell!miner Cntr for Productivity Enhancement 617-452-5000x2693

ccplumb@watnot.UUCP (02/24/87)

In article <347@ames.UUCP> eugene@pioneer.UUCP (Eugene Miya N.) writes:
>SECOND, more serious note: it is beginning to appear that the powers that
>be are putting a tighter and tighter grip on my multiprocessor
>bibliography.  It has not yet had a SENSITIVE classification put on it,
>but I think that is only a matter of time.  What this means is that it
>might not be possible to distribute it outside the US anymore under
>penalty of law.  This includes Allies of the US like France and England.
>All for a few bibliographic references whose titles are in the public
>domain.  I will try to clear the currently pending foreign requests;
>requests by international companies are best made through your US
>offices.  But once that SENSITIVE label is placed on it, the penalties
>on me or anyone who transports it outside the US will be serious.

Two points...
A) Gee, that sounds interesting... I *would* like a copy of what you've got.
   I thought it would be very dry, but with such a recommendation....
   (You may tell The Powers That Be that _The_Purloined_Letter_ might have
   been more successful.)
B) It strikes me as funny that the most accessible multiprocessing system
   currently available (and thus, may I suggest, one of the most promising)
   is the Transputer, which the U.S. Gov't can't do a damn thing about.

	-Colin Plumb (ccplumb@watnot.UUCP)

NSA CIA contras Iran bombing hostages gargoyle (<-- this is probably the code
                                                    name for *something*)

pase@ogcvax.UUCP (02/24/87)

In article <ulowell.1081> miner@ulowell.cs.ulowell.edu (Richard Miner) writes:
>You would not want a language that abstracted away the parallelism; what
>we need to develop are languages that model the parallelism of these new
>architectures.  As far as I am concerned, most of the recently accepted
>languages (Pascal, Modula-2, Ada) are no better than FORTRAN(1) for 
>implementing algorithms on these machines.  We need to develop better
>languages, based on parallel and dataflow concepts, to model these 
>architectures.

(1)	Agreed, FORTRAN, Ada, Pascal, Modula-2, etc. are not well suited
for parallel processing of any sort.  Kuck and his associates have done some
very good work on parallelizing these languages as much as can be done, but
many will agree that it is difficult and mostly ad hoc.
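[A hypothetical sketch, not from the thread, in modern Python, of why that parallelization is ad hoc: the first loop's iterations are independent and parallelize trivially; the second carries a dependence from each iteration to the next, which a compiler must prove away or give up on.]

```python
def independent(a, b):
    # Iterations touch disjoint data: safe to run every i in parallel.
    c = [0.0] * len(a)
    for i in range(len(a)):
        c[i] = a[i] + b[i]
    return c

def loop_carried(a):
    # a[i] depends on a[i-1] computed in the previous iteration:
    # a sequential prefix sum, not trivially parallelizable.
    for i in range(1, len(a)):
        a[i] = a[i] + a[i - 1]
    return a
```

[Real programs mix both kinds of loop, which is why automatic dependence analysis only gets so far.]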

(2)	Occam and other languages of that class allow the free expression of
parallelism, but have the disadvantages that a) the programmer is responsible
for the placement of processes, and b) therefore the placement cannot change
to adapt to the requirements of the system.

	This is a disadvantage for several reasons.  Optimal static allocation
of tasks to processors is well known to be NP-complete; it is very difficult
even for small problems.  It assumes advance knowledge of the communication,
memory, and CPU requirements, much of which is not available for any but the
most regular applications (e.g., matrix operations, FFTs, etc.).  The
allocation must be recomputed any time there is a change in the network.
There is no fault tolerance in such a system: if a processor goes down, too
bad.  The computation must be restarted from the last save-point, but only
after the system has been fixed.  There is no way to run on a partial system.
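[A hypothetical sketch, not from the thread, in modern Python: since optimal static placement is NP-complete, real systems settle for heuristics like longest-task-first, and the whole assignment must be recomputed whenever the machine configuration changes. The task names and costs here are made up.]

```python
def static_placement(task_costs, n_procs):
    # Greedy heuristic: place the heaviest tasks first, each on the
    # currently least-loaded processor.  Not optimal in general.
    loads = [0] * n_procs
    placement = {}
    for task, cost in sorted(task_costs.items(), key=lambda kv: -kv[1]):
        proc = min(range(n_procs), key=lambda p: loads[p])
        placement[task] = proc
        loads[proc] += cost
    return placement, max(loads)   # max load = finishing time (makespan)

tasks = {"fft": 7, "io": 2, "solve": 8, "plot": 3, "setup": 4}
placement4, makespan4 = static_placement(tasks, 4)
# If one processor goes down, nothing adapts: everything is re-placed,
# and the makespan changes.
placement3, makespan3 = static_placement(tasks, 3)
```

[Even this toy version assumes the costs are known in advance, which, as noted above, holds only for the most regular applications.]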

(3)	Dataflow languages offer a clean alternative which is easy to
learn and simple to use.  They are best suited for dataflow architectures,
but could be used to some advantage on more conventional architectures.
This is a promising area.
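[A hypothetical sketch, not from the thread, of the dataflow model in modern Python (illustrative only; not a dataflow language): an operation "fires" as soon as all of its input tokens have arrived, with no program counter and no explicit task placement.]

```python
import operator

# Each node: (operation, names of input nodes).  Leaves carry constants.
graph = {
    "a":      (lambda: 3, ()),
    "b":      (lambda: 4, ()),
    "sum":    (operator.add, ("a", "b")),
    "prod":   (operator.mul, ("a", "b")),
    "result": (operator.sub, ("prod", "sum")),
}

def run_dataflow(graph):
    values = {}
    pending = dict(graph)
    while pending:
        # Fire every node whose inputs are all available.  The nodes in
        # each "ready" set are independent and could run in parallel.
        ready = [n for n, (_, ins) in pending.items()
                 if all(i in values for i in ins)]
        for n in ready:
            op, ins = pending.pop(n)
            values[n] = op(*(values[i] for i in ins))
    return values
```

[Here "sum" and "prod" fire together in the second wave; the schedule falls out of the data dependencies rather than being written by the programmer.]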

(4)	Other languages - LML, &c. - allow the free expression of parallelism
but do not require a parallel computation model.  These are the languages
which "abstract the parallelism away".  They could execute equally well on a
uniprocessor or a multiprocessor system.  There are no explicit task
partitioning or communication mechanisms.  These languages, when combined with
dynamic load balancing techniques, allow algorithms to be expressed naturally
with no fuss over the underlying hardware.  Fault tolerance is also possible,
though I won't go into that here.
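[A hypothetical sketch, not from the thread, of that last combination in modern Python: the algorithm is written with no task placement at all, and a dynamic scheduler (here, a shared work queue) balances the load. The same program text runs unchanged on one worker or many.]

```python
import queue
import threading

def run_balanced(work_items, fn, n_workers):
    q = queue.Queue()
    for item in work_items:
        q.put(item)
    results = []
    lock = threading.Lock()

    def worker():
        while True:
            try:
                item = q.get_nowait()   # idle workers pull the next task
            except queue.Empty:
                return
            r = fn(item)
            with lock:
                results.append((item, r))

    threads = [threading.Thread(target=worker) for _ in range(n_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return dict(results)

square = lambda n: n * n
```

[Losing a worker just means the survivors drain the queue, which is the seed of the fault tolerance mentioned above.]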
-- 
Doug Pase   --   ...ucbvax!tektronix!ogcvax!pase   or   pase@Oregon-Grad