hains@iro.umontreal.ca (Gaetan Hains) (11/11/89)
The research I referred to can be put in the category of program transformation, or program derivation until it is partly automated. In a sense it aims at generalising existing methodologies for systolic array design. The target architecture will preferably have distributed memory, with shared memory as a special case. We certainly agree about the relevance of APL workspaces.

By functional array expressions I mean more than notation. My reference is L. Mullin's recent thesis (Syracuse Univ.), which defines a calculus for expressions formed from multi-dimensional arrays. Expressions can be transformed by second-order operators like reshape, flatten, n-D rotate, etc., which only affect the "indexing" by which the array is accessed. The goal, of course, is to reshape until you get something close enough to a given type of architecture.

This approach can't deal with irregular sparse ("graph") problems. At least for certain graph problems, A. Gibbons has recently shown that it can be optimal to simulate divide-and-conquer shared-memory algorithms on a mesh. But that is another story.

The algebraic path problem and FFT are obvious candidates for a methodology like ours, since they are known to have good algorithms for distributed machines. What good is it to derive known algorithms? You can systematically derive variants for all sizes and shapes of meshes, including the 2x2x...x2 cubes, which raises the possibility of portability without losing efficiency. Another advantage is the ability to predict efficiency symbolically, without simulation.

All of this boils down to problem-dependent global optimisations and may never be automated. Whenever it applies, however, I expect it to complement general-purpose load balancing and routing methods. Linda could certainly be used as a target language, and Express/Helios as the OS from which to use the program transformation tools I have in mind.

Gaetan Hains, Universite de Montreal (hains@iro.umontreal.ca)
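To make the "indexing-only" idea above concrete, here is a minimal sketch (my own illustration, not Mullin's actual calculus or notation) of arrays represented as a shape plus an indexing function. Reshape and rotate then compose index maps and never touch the underlying data; the names `Arr`, `from_list`, `reshape`, and `rotate` are all assumptions for the example.

```python
from math import prod

class Arr:
    """An array as a shape plus an indexing function: get(idx) -> value."""
    def __init__(self, shape, get):
        self.shape, self.get = shape, get

def from_list(shape, flat):
    """Wrap a row-major flat list as an Arr of the given shape."""
    def get(idx):
        pos = 0
        for i, n in zip(idx, shape):
            pos = pos * n + i
        return flat[pos]
    return Arr(shape, get)

def reshape(new_shape, a):
    """Second-order operator: reinterpret the same row-major data
    under a new shape.  Only the index map changes."""
    assert prod(new_shape) == prod(a.shape)
    def get(idx):
        # flatten idx under new_shape, then unflatten under a.shape
        pos = 0
        for i, n in zip(idx, new_shape):
            pos = pos * n + i
        old = []
        for n in reversed(a.shape):
            old.append(pos % n)
            pos //= n
        return a.get(tuple(reversed(old)))
    return Arr(new_shape, get)

def rotate(axis, k, a):
    """Cyclic shift by k along one axis; again, indexing only."""
    def get(idx):
        j = list(idx)
        j[axis] = (j[axis] + k) % a.shape[axis]
        return a.get(tuple(j))
    return Arr(a.shape, get)

a = from_list((2, 3), [0, 1, 2, 3, 4, 5])
b = rotate(1, 1, a)      # each row cyclically shifted left by one
c = reshape((3, 2), a)   # same flat data, read as 3x2
print([b.get((0, j)) for j in range(3)])   # -> [1, 2, 0]
print([c.get((1, j)) for j in range(2)])   # -> [2, 3]
```

In this reading, "reshaping until you get something close to a given architecture" means rewriting the composition of such index maps so that the final data layout matches the machine's mesh, while the flat data vector stays fixed.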