[comp.compilers] compiling for parallel machines

deb@quimby.dartmouth.edu (Deb Banerjee) (08/05/89)

Your suspicion about software rather than hardware being the challenge in
parallel programming systems may be correct.

I am working on high-level programming languages and systems for SIMD
machines (I have the Connection Machine in mind).

References:

"Partitioning and Scheduling parallel programs for multiprocessors"
Vivek Sarkar, MIT press 1989.
--based on his Ph.D thesis (1987).


"Optimizing Supercompilers for supercomputers"
Michael Wolfe, MIT press 1989.
--deals with dependence analysis etc.


IEEE Software July 1989 - is a special issue on Parallel Programming
environments and languages.

The state of the art in parallel languages is rather poor. The languages are
too low level: they put more burden on the programmer, the excessive detail
the programmer must specify prevents program optimization, and portability is
out of the question.
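
To make "too low level" concrete, here is a minimal C sketch of the detail a
typical message-passing system forces on the programmer for a simple global
sum. The send/recv primitives are hypothetical stand-ins, not any real
library's interface; a high-level data-parallel language could express the
same computation as a single reduction.

/* Hypothetical blocking message-passing primitives -- illustrative
   stand-ins, not a real library's API. */
extern int my_id(void);                /* this processor's number  */
extern int num_procs(void);            /* number of processors     */
extern void send(int dest, double x);  /* blocking send            */
extern double recv(int src);           /* blocking receive         */

/* Global sum by explicit messages: every processor ships its partial
   sum to processor 0, which accumulates them.  The programmer manages
   processor numbers, message order, and data distribution by hand,
   and the code is wedded to this one machine organization. */
double global_sum(double partial)
{
    int p;
    double total = partial;

    if (my_id() == 0) {
        for (p = 1; p < num_procs(); p++)
            total += recv(p);
    } else {
        send(0, partial);
    }
    return total;  /* meaningful only on processor 0 */
}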

There has been quite a bit of work on parallel programming environments, but
most of it doesn't look very elegant.

Could you send me any other information you get?  Hope you find this useful.

Thanks
--Deb
[From deb@quimby.dartmouth.edu (Deb Banerjee)]
--
Send compilers articles to compilers@ima.isc.com or, perhaps, Levine@YALE.EDU
{ decvax | harvard | yale | bbn }!ima.  Meta-mail to ima!compilers-request.
Please send responses to the author of the message, not the poster.

eric@snark.uu.net (Eric S. Raymond) (08/06/89)

In <1989Aug4.180349.3480@esegue.uucp> Deb Banerjee wrote:
> Your suspicion about software rather than hardware being the challenge in
> parallel programming systems may be correct.

I would have thought this was obvious!

There's been enough theory done on network topology, relationships between
complexity (in the automata-theory sense) and various resource/transaction
costs, and multiprocessor scaling laws that the area can be described as
`mature'. Empirically, parallel-processing architectures have been
*converging* into a few broad families over the last three years rather than
*diverging*, a sure sign that researchers in the hardware end of the problem
now have a shared paradigm (in Thomas Kuhn's sense).

The software end is nowhere near this well-evolved. There are formalisms
like CSP that allow us to discuss the behavior of parallel systems in a
rigorous way, but no one has even come up with a convincing general attack on
the higher-level problem -- automatic parallelization of algorithms expressed
for a uniprocessor abstract machine in real languages (i.e. in the presence
of multiple assignment, side-effects and data aliasing).

That is the really *hard* problem!
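
To see why aliasing alone defeats automatic parallelization, consider this
minimal C sketch (an illustrative example, not from any particular compiler):

/* If a and b point to disjoint arrays, the n iterations are independent
   and could run on n processors at once.  If the caller passes
   scale(x + 1, x, n), then b[i] is the element a[i-1] written on the
   previous iteration, and the loop is strictly serial.  Nothing in this
   function tells the compiler which case it is compiling for. */
void scale(double *a, double *b, int n)
{
    int i;

    for (i = 0; i < n; i++)
        a[i] = 2.0 * b[i];
}

/* scale(x, y, n)      -- iterations independent, parallelizable       */
/* scale(x + 1, x, n)  -- a[i] = 2*a[i-1]: a true dependence, serial   */
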
-- 
      Eric S. Raymond = eric@snark.uu.net    (mad mastermind of TMN-Netnews)

jps@cat.cmu.edu (James Salsman) (08/07/89)

In article <1989Aug6.113745.309@esegue.uucp> eric@snark.uu.net (Eric S. Raymond) writes:
> The software end is nowhere near this well-evolved. There are formalisms
> like CSP that allow us to discuss the behavior of parallel systems in a
> rigorous way, but no one has even come up with a convincing general attack on
> the higher-level problem -- automatic parallelization of algorithms expressed
> for a uniprocessor abstract machine in real languages (i.e. in the presence
> of multiple assignment, side-effects and data aliasing).
> 
> There is the really *hard* problem!

Automatic parallelization under the conditions you describe is SO
difficult that the resulting code usually isn't sped up enough to overcome
the practical costs of the added parallel hardware and of the time spent
doing the compile.

At CMU, the Ergo project is working on the compilation of languages into
combinator graphs; the idea is that combinator graphs can be parallelized
automatically.
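
As a rough illustration -- a toy sketch in C, and not the Ergo project's
actual representation -- here is a combinator graph and one normal-order
reduction step.  Because each reduction allocates new application cells
rather than overwriting shared ones, disjoint redexes can safely be reduced
in parallel:

#include <stdlib.h>

/* A node is either a combinator (S, K, I) or an application cell. */
enum tag { APP, S, K, I };

struct node {
    enum tag tag;
    struct node *fun, *arg;   /* used only when tag == APP */
};

static struct node *mk(enum tag t, struct node *f, struct node *a)
{
    struct node *n = malloc(sizeof *n);  /* error checking omitted */
    n->tag = t; n->fun = f; n->arg = a;
    return n;
}

/* One reduction step at the root; returns the (possibly new) root.
   Rules:  I x -> x;   K x y -> x;   S f g x -> (f x)(g x).
   The argument x is shared, not copied, in the S rule -- that sharing
   is what makes this graph (not tree) reduction. */
static struct node *step(struct node *n)
{
    if (n->tag == APP && n->fun->tag == I)
        return n->arg;
    if (n->tag == APP && n->fun->tag == APP && n->fun->fun->tag == K)
        return n->fun->arg;
    if (n->tag == APP && n->fun->tag == APP &&
        n->fun->fun->tag == APP && n->fun->fun->fun->tag == S) {
        struct node *f = n->fun->fun->arg;
        struct node *g = n->fun->arg;
        struct node *x = n->arg;
        return mk(APP, mk(APP, f, x), mk(APP, g, x));
    }
    return n;  /* no redex at the root */
}

Note the tag field carried on every node and tested at every step -- that is
the type-tag overhead mentioned below.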

Combinators are related to the type-free lambda calculus, so in practice
the combinator graph reduction engine must suffer the efficiency blow of
some sort of type-tag system.  In addition, combinator graphs must not
have local state (assignment or destructive data structure operations) if
they are to be parallelized.  This is a *BIG* lose.

I think the real solution is to begin widespread instruction in parallel
programming languages such as Occam, Linda, and *Lisp at the university
level.

:James

Disclaimer:  I have nothing to do with the Ergo project.
-- 
:James P. Salsman (jps@CAT.CMU.EDU)