CLT@SAIL.STANFORD.EDU (Carolyn Talcott) (09/30/86)
Speaker: James M. Boyle, Argonne National Laboratory
Time: Monday, October 6, 4pm
Place: 252 Margaret Jacks (Stanford Computer Science Dept)
Deriving Parallel Programs
from Pure LISP Specifications
by Program Transformation
Dr. James M. Boyle
Mathematics and Computer Science Division
Argonne National Laboratory
Argonne, IL 60439-4844
boyle@anl-mcs.arpa
How can one implement a "dusty deck" pure Lisp program on global-
memory parallel computers? Fortunately, pure Lisp programs have a declara-
tive interpretation, which protects their decks from becoming too dusty!
This declarative interpretation means that a pure Lisp program is not
over-specified in the direction of sequential execution. Thus there is
hope to detect parallelism automatically in pure Lisp programs.
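To make the point concrete, here is a minimal Python sketch (not the talk's actual Lisp) of why purity matters: the two recursive calls below share no state, so nothing in the program text forces them to run sequentially, and a parallel evaluation must produce the same answer.

```python
from concurrent.futures import ThreadPoolExecutor

# A pure (side-effect-free) function: its two recursive calls are
# data-independent, so the program does not over-specify an order.
def tree_sum(tree):
    if not isinstance(tree, tuple):      # leaf
        return tree
    left, right = tree
    return tree_sum(left) + tree_sum(right)

# The same function with the independent calls evaluated concurrently;
# because tree_sum is pure, the result is guaranteed to be identical.
def tree_sum_par(tree, pool):
    if not isinstance(tree, tuple):
        return tree
    left, right = tree
    fut = pool.submit(tree_sum, left)    # evaluate left subtree in parallel
    return tree_sum(right) + fut.result()

tree = ((1, 2), (3, (4, 5)))
with ThreadPoolExecutor() as pool:
    assert tree_sum_par(tree, pool) == tree_sum(tree) == 15
```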
In this talk I shall describe a stepwise refinement of pure Lisp pro-
grams that leads to a parallel implementation. From this point of view,
the pure Lisp program is an abstract specification, which program transfor-
mations can refine in several steps to a parallel program. I shall
describe some of the transformations--correctness-preserving rewrite rules
--used to carry out the implementation.
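The flavor of such a rewrite rule can be shown in a small Python analogue (a hypothetical example, not one of the speaker's actual rules): the rule (map f (append xs ys)) => (append (map f xs) (map f ys)) preserves meaning while exposing two independent computations that a later refinement step could place on separate processors.

```python
# Left-hand side of the rule: map over the concatenation.
def lhs(f, xs, ys):
    return list(map(f, xs + ys))

# Right-hand side: two independent maps, concatenated afterward.
# For any pure f, lhs and rhs denote the same value -- the rewrite
# is correctness preserving, but rhs exposes parallelism.
def rhs(f, xs, ys):
    return list(map(f, xs)) + list(map(f, ys))

xs, ys = [1, 2], [3, 4]
assert lhs(lambda n: n * n, xs, ys) == rhs(lambda n: n * n, xs, ys) == [1, 4, 9, 16]
```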
An important property of a parallel program is whether it can
deadlock. I shall discuss a number of the design decisions involved in the
refinement and their role in preserving the correctness of the transformed
program, especially with regard to deadlock.
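One standard illustration of the design issue (a generic sketch, not a description of the speaker's scheme): if two workers acquire a pair of locks in opposite orders they can block each other forever, while imposing a single global acquisition order removes the cycle.

```python
import threading

a, b = threading.Lock(), threading.Lock()

# If one thread takes a then b while another takes b then a, each can
# end up holding one lock and waiting on the other: deadlock.
# Acquiring locks in a fixed global order (here, by object id) breaks
# the circular wait, so this version cannot deadlock.
def with_both(lock1, lock2, work):
    first, second = sorted((lock1, lock2), key=id)
    with first:
        with second:
            return work()

results = []
t1 = threading.Thread(target=lambda: results.append(with_both(a, b, lambda: "t1")))
t2 = threading.Thread(target=lambda: results.append(with_both(b, a, lambda: "t2")))
t1.start(); t2.start()
t1.join(); t2.join()
assert sorted(results) == ["t1", "t2"]
```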
Implementing a transformational refinement often leads to interesting
insights about programming. I shall discuss some of these insights,
including one about the compilation of recursive programs, and some that
suggest ways to systematically relax the "purity" requirement on the Lisp
program being implemented.
We have used this approach to implement a moderately large pure Lisp
program (1300 lines, 42 functions) on several parallel machines, including
the Denelcor HEP (r.i.p.), the Encore Multimax, the Sequent Balance 8000,
and the Alliant FX/8. I shall discuss some measurements of the performance
of this program, which has achieved a speedup of 12.5 for 16 processors on
realistic data, and some of the optimizations used to obtain this perfor-
mance.
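For scale, the reported figure works out to a parallel efficiency of roughly 78 percent of ideal linear speedup:

```python
# Speedup and processor count as reported in the abstract.
speedup, procs = 12.5, 16
efficiency = speedup / procs   # fraction of ideal linear speedup
assert abs(efficiency - 0.78125) < 1e-9
```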
Oh, yes, and by the way, the transformations produce a parallel pro-
gram in FORTRAN!