[comp.lang.apl] Intermediate language for APL

burns@latcs1.oz (Jonathan Burns) (09/05/89)

I'm reposting some discussion from comp.parallel a little while back, because
of the interest here in intermediate representations for APL compilers.
Tim Budd has sent me his papers on his lambda-form representation, which
I find very neat. I've ordered his book, in fact.

From: eugene@eos.arc.nasa.gov (Eugene Miya)
Newsgroups: comp.parallel

I received a kind of neat posting on languages for parallelism
and vectorism.  Unfortunately, the author isn't able to post.
I think he makes interesting comments on APL (which perhaps
deserves a second look).

--eugene
[complete: (please reply to author, or follow up to comp.parallel)]

Subject:      Parallelism/Vectorism
To: eos.arc.nasa.gov!eugene@cs.utexas.edu
From: uunet.UU.NET!tmsoft!loc@cs.utexas.edu (Leigh Clayton)
X-To:         Eugene Miya
Organization: I. P. Sharp Associates
X-Mail:       1900/2 First Canadian Place,
              Toronto, Canada, M5X 1E3
X-Telephone:  +1 (416) 364-5361
X-Fax:        +1 (416) 364-2910
X-Telex:      0622259
Date:         17 May 89 16:14:59 UT

 I think I may fairly be accused of prejudice in favour of APL, since I've
spent the last 16 years prodding an implementation of it along (Sharp APL,
in case you can't guess from my header).

 No language that I know of is perfect (except perhaps CDC 6400 COMPASS :-),
but there are certainly several features of 'modern' APL that I believe at
least point the way towards effective use of scalable architectures. By
'modern' APL I mean APL as Ken (Iverson) now sees it, which is to say *not*
IBM/APL2, but 'Dictionary' APL, as the APL community usually calls it.


 - Generalised operator/function syntax. This is something APL has always
   had, but its value and generality have not been fully exploited or
   recognised until quite recently. The number of 'operators' (that is,
   meta-functions whose arguments and results are functions, the archetypal
   one being "/": in "+/" the operator "/" takes "+" as its argument, and
   produces "sum reduction" (usually called "summation" by non-APLers) as
   its result) [pardon the long parenthesis] has increased quite radically
   in the last 5 years or so, partly to deal with the other points below -

 - Boxed arrays. Since the structure of APL is embodied much more within
   its data structures than within its programs (ie it is more like LISP
   than PASCAL), it made sense to extend the levels of structure for APL
   objects, and general arrays essentially allow an arbitrary level of
   data hierarchy orthogonal to APL's traditional n-dimensional structuring.

 - Function rank. This is probably the most important change to APL formalism
   in the last ten or fifteen years, and it complements and completes the
   notion of operators. Basically, each function is defined to have an
   implicit rank: the rank (number of dimensions) of the arguments it
   naturally applies to. Thus the rank of "+" is 0, meaning it works on
   rank-0 operands (scalars). Dyadic iota is defined to have a right rank
   of 0 (its right argument is a scalar) and a left rank of 1 (its left
   argument is a vector/list). The power of this simple notion comes from
   two things (see the short example after this list):

     - If the rank of the argument is larger than the function rank, the
       function is applied (potentially in parallel) to *each* properly-
       ranked structure (which we call a cell) within the argument (ie, as
       Iota's left argument, a matrix is a collection of vector cells).

     - There is an operator to explicitly state the rank of a function,
       which allows explicit control of the application of a function
       across a structure.
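
 To make rank concrete, here is a sketch of how these definitions behave
in a rank-based APL (my own illustration, assuming 1-origin indexing;
exact symbols and axis conventions vary between dialects):

       +/ 1 2 3 4              ⍝ "/" applied to "+" derives sum reduction
   10
       1 2 3 ⍳ 2               ⍝ dyadic iota: left rank 1, right rank 0
   2
       M ← 2 3 ⍴ 1 2 3 4 5 6   ⍝ a 2-by-3 matrix: rows 1 2 3 and 4 5 6
       M ⍳ 5                   ⍝ the left argument exceeds rank 1, so iota
   4 2                         ⍝ is applied to each rank-1 cell (row): 5 is
                               ⍝ absent from row 1 (4 = one past the end)
                               ⍝ and at position 2 of row 2

 Each row lookup is independent of the others, which is exactly the
cell-level parallelism discussed below.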

 All implementations of APL that I am currently aware of are (essentially)
single-processor implementations (Sharp APL runs on large IBM multi-CPU
systems, but a given interpreter only uses a single CPU at a time). But
APL could be used effectively to exploit massive parallelism (at least of
the MIMD sort) by at least the following two strategies:

 - Each cell within any single function invocation can be evaluated
   separately.

 - An expression such as   foo¨>x   (where "¨", a dieresis, is the
   dual/each operator and ">" opens a box) requests that the function
   "foo" be evaluated for each sub-box of the array "x", and these
   evaluations can proceed independently.
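
 For instance (my own sketch, using Sharp-style notation in which "<"
boxes a value; the exact spelling of each/dual differs between dialects):

       x ← (<1 2 3), (<4 5 6 7), (<8 9)  ⍝ three boxes of unequal length
       +/¨> x                            ⍝ evaluates +/1 2 3, +/4 5 6 7 and
                                         ⍝ +/8 9 as independent tasks,
                                         ⍝ conceptually yielding 6 22 17

 Nothing in the expression orders the three reductions, so a scheduler is
free to hand each one to a different processor.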

 And I'm sure that, once that problem is seriously studied, there will
turn out to be lots of other opportunities.

[ Chat deleted ... JB]

  Leigh Clayton,                        loc@tmsoft.UUCP


From: budd@mist.cs.orst.edu (Tim Budd)
Newsgroups: comp.parallel
Subject: re: APL and parallelism
Keywords: APL parallelism
Message-ID: <5583@hubcap.clemson.edu>
Date: 23 May 89 20:52:20 GMT
Sender: fpst@hubcap.clemson.edu
Lines: 36
Approved: parallel@hubcap.clemson.edu

Regarding APL and Parallelism

I've been an advocate of APL for parallel processors for a long time
(see my TOPLAS paper in 84 and my book ``An APL Compiler'' published by
Springer in 1988), so I was interested in Leigh's comments passed on by
Eugene.  Some comments on his remarks:

(1) My students and I continue to pursue approaches to generating efficient
code from APL for a variety of machines.  Our latest technique involves
a nice intermediate form based on the lambda calculus.  We can translate
APL (and FP too!) into this form, perform transformations and optimizations
at the intermediate-form level, and then generate code from it.  The
interesting thing is that the intermediate form does not involve any
explicit control flow, and thus we can introduce parallelism during code
generation in a variety of ways.  This allows us to generate fine-grain
SIMD (Connection Machine-style) code by introducing parallelism one way,
and coarse-grain MIMD (Sequent-style) code by introducing parallelism
another way.  There are also interesting time/space tradeoffs, which we
are treating as a *code-generation* problem and not something the
programmer needs to deal with.  If you are interested in finding out more
about this, let me know and I can send you technical reports.
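
[To give the flavour of such a lambda form - this sketch is mine, not
necessarily Budd's actual representation. An APL expression like

       +/ 2 × ⍳n

might translate to something like

       reduce (lambda a b. a+b) (map (lambda i. 2 × i) (iota n))

where iota, map and reduce are primitives with no committed evaluation
order: a SIMD back end can expand the map over all n index values at
once, while a MIMD back end can split the index range across processors
and combine partial sums. The parallelism decision is thus made at code
generation time, as Tim describes. ... JB]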

(2) APL is, despite being anathema to the academic community (for
reasons which I have never quite understood), still alive and well.
Following the publication of my book I was contacted by several commercial
Wall Street types offering to throw large sums of money my direction if I
would develop a commercial-quality compiler for APL.  Being a silly college
professor, I resisted, because (1) I don't think I understand the problems
well enough to make such a system yet; (2) there are a few dark corners of
the language I've been able to ignore since I'm just ``doing research'',
and if I were serious about a commercial product I would have to really
look into them; and (3) if I were interested in making money I would never
have become a college professor in the first place.

--tim budd, budd@cs.orst.edu


Refs: [JB]

Technical papers:
``A New Approach to Vector Code Generation for Applicative Languages'', 1988
``Composition and Compilation in Functional Programming Languages'', 1988
by:
Timothy A. Budd
Department of Computer Science
Oregon State University
Corvallis, Oregon 97331
budd@cs.orst.edu

Book:
``An APL Compiler''
Timothy A. Budd
Springer-Verlag, 1988
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Jonathan Burns        |
burns@latcs1.oz       | AI is born free, yet everywhere is found in loops.
Computer Science Dept |          
La Trobe University   |
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~