[net.arch] Yet another language

donz@tekcbi.UUCP (Don Zocchi) (03/23/85)

Occam was a nice try at manual parallel processing.
Wouldn't it be nice to apply such techniques as vectorization and
parallel microcode optimization to a "standard" language such as C?

Having programmed on array processors, multi-microprocessor systems, and
distributed mainframes, I feel that the best solutions have hidden the
parallel nature of the machine.  By hiding the parallel nature, I mean
removing the need for parallel constructs in the source, yet providing
multi-processor performance advantages.
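
For example, a loop like the following contains no parallel constructs
at all, yet a vectorizing compiler could, in principle, spread its
iterations across processors (a sketch of the idea, not any particular
compiler):

	/* Ordinary C: every iteration is independent, so a smart
	 * compiler could run them on several processors at once
	 * without the programmer writing a parallel construct. */
	void saxpy(int n, float a, float x[], float y[])
	{
		int i;

		for (i = 0; i < n; i++)
			y[i] = a * x[i] + y[i];	/* no cross-iteration dependence */
	}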

Has anyone done any work on, say, a vectorizing C compiler for
multi-microprocessor systems?

	Don Zocchi,
	Tektronix, Inc.
	Computer Based Instruments

jww@bonnie.UUCP (Joel West) (03/29/85)

> Occam was a nice try at manual parallel processing.
> Wouldn't it be nice to apply such techniques as vectorization and
> parallel microcode optimization to a "standard" language such as C?
> 
> Having programmed on array processors, multi-microprocessor systems, and
> distributed mainframes, I feel that the best solutions have hidden the
> parallel nature of the machine.  By hiding the parallel nature, I mean
> removing the need for parallel constructs in the source, yet providing
> multi-processor performance advantages.

I would agree on the "manual" vs. "automatic" parallelization point.
However, if taking FORTRAN DO loops and vectorizing them were easy, the
Illiac IV would have been a raging success.  C is much, much worse,
because it encourages you to get down to the byte level and to use
neat, compact side-effect tricks.  Attempting to recover the intent of
a low-level algorithm is a fairly significant artificial-intelligence
problem.
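
Consider the classic string-copy idiom (a stock example, not taken from
anyone's code):

	/* Compact, side-effecting C.  If d and s overlap (say
	 * d == s + 1), each iteration reads a byte the previous one
	 * wrote, and the compiler usually cannot prove that they
	 * don't overlap -- so it cannot safely vectorize the loop. */
	char *copystr(char *d, const char *s)
	{
		char *p = d;

		while ((*d++ = *s++) != '\0')
			;	/* all the work is in the side effects */
		return p;
	}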

Instead, you need your user to express abstractions, not lower-level
implementation details of "for" loops, etc.  The higher you go, the 
better off you are.  Consider three examples:
	1) In APL (a language I don't know at all, so I'll fake it)
	   if you say
		A <- B * C
	   where B and C are matrices, then there's an obvious automatic
	   parallelism.

	2) In some database or artificial intelligence language, you say

		for each HACKER with PC.TYPE = "IBM"
		    compute SQUANDERED as the total of SOFTWARE.INVESTMENT

	   obviously you can turn this loose on the whole database in
	   parallel and somehow merge running subtotals into a total

	3) In discrete simulation, many things go on in parallel in the
	   real situation you model (4,000 assemblers in the factory, etc.)
	   so you can express each one (of a class) and let the event
	   scheduler run them in parallel.  This works quite nicely if
	   things are loosely coupled -- guy A doesn't change guy B at
	   time 100, but instead at time 100, guy A sets up a change
	   for guy B at time 101.

The last is a situation I've investigated in some detail and I believe
it is quite feasible to crack.
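
To make the loose coupling concrete, here is roughly what posting a
deferred change might look like in C (my own sketch -- the names and
structure are invented, not taken from any real simulator; error
handling omitted):

	#include <stdlib.h>

	struct event {
		long when;		/* simulated time the change fires */
		int who;		/* which "guy" it applies to */
		int new_state;		/* the change itself */
		struct event *next;
	};

	static struct event *agenda;	/* kept sorted by 'when' */

	/* Guy A never touches guy B directly at time 100; he posts a
	 * change for guy B to take effect at time 101.  All events
	 * scheduled for the same instant are then independent, so the
	 * scheduler is free to run them in parallel. */
	void post(long when, int who, int new_state)
	{
		struct event *e = malloc(sizeof *e);
		struct event **p = &agenda;

		e->when = when;
		e->who = who;
		e->new_state = new_state;
		while (*p != NULL && (*p)->when <= when)
			p = &(*p)->next;
		e->next = *p;
		*p = e;
	}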
-- 
	Joel West				     (619) 457-9681
	CACI, Inc. - Federal 3344 N. Torrey Pines Ct La Jolla 92037
	jww@bonnie.UUCP (ihnp4!bonnie!jww)
	westjw@nosc.ARPA

   "The best is the enemy of the good" - A. Mullarney

cdshaw@watrose.UUCP (Chris Shaw) (03/29/85)

> > Occam was a nice try at manual parallel processing.
> > Wouldn't it be nice to apply such techniques as vectorization and
> > parallel microcode optimization to a "standard" language such as C?
> > 
> > Having programmed on array processors, multi-microprocessor systems, and
> > distributed mainframes, I feel that the best solutions have hidden the
> > parallel nature of the machine.  
> 
> 
> Instead, you need your user to express abstractions, not lower-level
> implementation details of "for" loops, etc.  The higher you go, the 
> better off you are.  Consider three examples:
> 	1) In APL (a language I don't know at all, so I'll fake it)
> 	   if you say
> 		A <- B * C
> 	   where B and C are matrices, then there's an obvious automatic
> 	   parallelism.
> -- 
> 	Joel West

I think that optimizing/vectorizing C is at best a stop-gap measure.
The best you can do (probably) is to use the parallel section of your
machine about 10% of the time.

Besides, one is likely to get hellishly complicated code at both the
user and object levels.  As Joel mentioned, the best way to do
parallelism is through a higher-level abstraction.  If one has any
doubts, just look at the higher-level abstraction offered by
C/Pascal/whatever as opposed to assembler.

I read a very interesting article about a year ago by John Backus: his
1977 Turing Award Lecture, printed in the August '78 CACM.  The article
described the language FP in terms of higher-level abstraction and
program provability, and although FP isn't exactly something you can
build device drivers with, it IS a useful first step in a new paradigm
of the type required to do useful parallel programming.
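
Backus's stock example is the inner product, built by composing
combining forms instead of writing a loop.  Mimicking the shape in C
(my own illustration -- the names are mine, and real FP has no loops
or index variables at all):

	#include <stddef.h>

	typedef double (*binop)(double, double);

	static double add(double a, double b) { return a + b; }
	static double mul(double a, double b) { return a * b; }

	/* (insert f): fold a vector down to a scalar */
	static double insert(binop f, double unit, const double *v,
			     size_t n)
	{
		double acc = unit;
		size_t i;

		for (i = 0; i < n; i++)
			acc = f(acc, v[i]);
		return acc;
	}

	/* inner product = (insert +) o (apply-to-all *).  Every
	 * product in the apply-to-all step is independent -- exactly
	 * the parallelism the FP style exposes for free. */
	double ip(const double *x, const double *y, size_t n)
	{
		double prod[256];	/* fixed bound, for the sketch */
		size_t i;

		if (n > 256)
			n = 256;
		for (i = 0; i < n; i++)	/* apply-to-all * */
			prod[i] = mul(x[i], y[i]);
		return insert(add, 0.0, prod, n);
	}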

One would probably be better off learning a parallel language and using
it on a parallel machine than writing half a dozen C/FORTRAN
post-processors to take advantage of one's architecture.

The fundamental point to realize, I think, is that one can only adapt a
tool so far (von Neumann-style languages) before one has to create a
new tool altogether.

Chris Shaw
University of Waterloo

lazarus@sunybcs.UUCP (Daniel G. Winkowski) (04/01/85)

> I read a very interesting article about a year ago by John Backus: his
> 1977 Turing Award Lecture, printed in the August '78 CACM.  The article
> described the language FP in terms of higher-level abstraction and
> program provability, and although FP isn't exactly something you can
> build device drivers with, it IS a useful first step in a new paradigm
> of the type required to do useful parallel programming.

I have done a small amount of programming in FP (4.? BSD, implemented
in Lisp) and have realized several things.  1) Though it runs god-awful
slow on a von Neumann architecture, it is easy to see how it might take
full advantage of a parallel machine.  2) All languages seriously
proposed for parallel processing attempt to eliminate global side
effects through variables.  FP accomplishes this very simply: it allows
no variables!  Aghast!  Shock!  'Pure Lisp' also allows no global side
effects.  Whereas von Neumann-style languages are programming by effect
-- sequential changes to a global environment.  [see also VAL - a Data
Flow Language by Jack Dennis at MIT]
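
In C terms (my own illustration, nothing to do with FP syntax), the
contrast looks like this:

	/* Programming by effect: every call mutates a global, so the
	 * calls must run one after another, in order. */
	static double total;

	void tally(double x)
	{
		total += x;
	}

	/* Side-effect-free: the result depends only on the inputs, so
	 * sub-sums over separate slices of v can be computed on
	 * separate processors and merged afterward. */
	double sum(const double *v, int n)
	{
		double s = 0.0;
		int i;

		for (i = 0; i < n; i++)
			s += v[i];
		return s;
	}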

> One would probably be better off learning a parallel language and using
> it on a parallel machine than writing half a dozen C/FORTRAN
> post-processors to take advantage of one's architecture.
> 
> The fundamental point to realize, I think, is that one can only adapt a
> tool so far (von Neumann-style languages) before one has to create a
> new tool altogether.
> 
> Chris Shaw
> University of Waterloo

Agreed.  There are going to be many people, though, who will want to
port their Fortran programs to this new machine (whichever it might
be).  Unfortunately, because of this, I suspect that much more effort
will go into the porting problem than into the language-design problem.
Or, at least, few will bother to learn a new type of programming if
they can continue using Fortran (though there is no way a converted
Fortran program will be anywhere near as efficient as a program written
with parallel considerations in mind).

=-=
Today we live in the future,
Tomorrow we'll live for the moment,
But, pray we never live in the past.
--------------
Dan Winkowski @ SUNY Buffalo Computer Science (716-636-2879)
UUCP:	..![bbncca,decvax,dual,rocksanne,watmath]!sunybcs!lazarus
CSNET:	lazarus@Buffalo.CSNET     ARPA:	lazarus%buffalo@CSNET-RELAY

gottlieb@cmcl2.UUCP (Allan Gottlieb) (04/02/85)

In article <1436@sunybcs.UUCP> lazarus@sunybcs.UUCP (Daniel G. Winkowski) writes:
>              2) All languages seriously proposed for parallel
>processing attempt to eliminate global side effects through variables.

Presumably you mean all languages seriously proposed for
nonshared-memory parallel processors.  I can't imagine why the Illinois
Cedar people or our Ultracomputer group would want to outlaw writing
global variables, since we are proposing architectures that feature
(expensive to implement) globally shared memory.
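
Indeed, our fetch-and-add primitive exists precisely so that many
processors can update a shared variable at once; simultaneous requests
to the same word are combined in the memory network.  A C-flavored
sketch of the interface (the function name is mine, and the serial
body below only documents the semantics -- on the real machine this is
a single combinable memory reference):

	static int counter;	/* a shared global */

	/* Return the old value and add 'incr' -- atomically, on the
	 * hardware: N simultaneous fetch-and-adds to one word cost
	 * about the same as a single memory reference. */
	int fetch_and_add(int *addr, int incr)
	{
		int old = *addr;

		*addr += incr;
		return old;
	}

	/* e.g., processors grab distinct work items with no locks */
	int next_work_item(void)
	{
		return fetch_and_add(&counter, 1);
	}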
-- 
Allan Gottlieb
GOTTLIEB@NYU
{floyd,ihnp4}!cmcl2!gottlieb   <---the character before the 2 is an el