edith@ai.toronto.edu (Edith Fraser) (02/23/90)
Department of Computer Science, University of Toronto
(GB = Galbraith Building, 35 St. George Street)

-------------------------------------------------------------

NUMERICAL ANALYSIS

GB220, at 3:00 p.m., 8 March 1990

Dr. Craig Douglas
Thomas J. Watson Research Center

"Parallel Multigrid and Domain Reduction Methods: Elliptic Boundary Value Problems"

A parallel algorithm is analyzed that combines the best features of the multigrid, projection, and domain decomposition methods. Subproblems are solved on reduced portions of the domain, and the solution is recovered on the whole domain using simple extension operators. Unlike standard domain decomposition methods, no boundary or overlap information must be transferred between neighboring subproblems, since there is no concept of a neighboring domain. When the subproblems are solved directly, a direct method results. We show how to use iterative methods to efficiently solve the problem on the entire domain. We also show that the matrix associated with a grid on the entire domain need never be generated, making this an efficient method from a storage point of view.

While suitable for general domains, special cases will be emphasized. For a square and a cube, there are obvious 4-way and 8-way domain reductions. We also demonstrate a technique which leads to nonobvious 8-way and (up to) 64-way domain reductions, respectively. We also analyze variants of the algorithm which result in iterative methods.

We analyze Hackbusch's robust multigrid algorithm for some model problems and show that our parallel algorithm uses much less computer time and at most the same amount of storage. We also exploit the composition of two classes of restriction operators to produce many smaller problems to solve. This is similar to the total reduction method variant of standard multigrid. By using more, smaller problems, we speed up our parallel algorithm noticeably.
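[To illustrate the flavor of domain reduction in one dimension -- a sketch only, not the algorithm from the talk, with illustrative function names: for the standard (-1, 2, -1) Poisson stencil, eliminating the odd-numbered grid points yields a half-size system with the same stencil on the even points. That reduced subproblem is solved directly, and the odd points are then recovered with a simple extension operator, so the full-grid matrix is never formed.]

```python
def thomas(sub, diag, sup, rhs):
    """Solve a tridiagonal system by the Thomas algorithm.
    sub/diag/sup are the sub-, main, and super-diagonals; rhs is the
    right-hand side. sub[0] and sup[-1] are ignored."""
    n = len(rhs)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = sup[0] / diag[0], rhs[0] / diag[0]
    for i in range(1, n):
        denom = diag[i] - sub[i] * cp[i - 1]
        cp[i] = sup[i] / denom if i < n - 1 else 0.0
        dp[i] = (rhs[i] - sub[i] * dp[i - 1]) / denom
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def poisson_by_domain_reduction(n, f):
    """Solve tridiag(-1, 2, -1) u = h^2 f on n interior points (n odd,
    zero Dirichlet boundary values) by domain reduction: eliminate the
    odd grid points, solve the half-size even-point system directly,
    then extend back to the odd points."""
    h = 1.0 / (n + 1)
    b = [h * h * f((i + 1) * h) for i in range(n)]  # b[i] ~ grid point i+1
    m = (n - 1) // 2                                # number of kept even points
    # Substituting u_{i±1} = (b_{i±1} + u_i + u_{i±2}) / 2 into the even
    # equations gives the same stencil with RHS b_{i-1} + 2 b_i + b_{i+1}.
    reduced_rhs = [b[2 * j] + 2.0 * b[2 * j + 1] + b[2 * j + 2] for j in range(m)]
    u_even = thomas([-1.0] * m, [2.0] * m, [-1.0] * m, reduced_rhs)
    # Scatter the reduced solution, then apply the extension operator.
    u = [0.0] * n
    for j in range(m):
        u[2 * j + 1] = u_even[j]       # grid point 2(j+1) -> index 2j+1
    for i in range(0, n, 2):           # indices of odd grid points 1, 3, ..., n
        left = u[i - 1] if i > 0 else 0.0
        right = u[i + 1] if i < n - 1 else 0.0
        u[i] = (b[i] + left + right) / 2.0
    return u
```

[For constant f the discretization is exact, so the reduced-then-extended solution matches u(x) = x(1-x)/2 to rounding error; composing two such reduction steps gives quarter-size problems, which is the spirit of the composed restriction operators mentioned above.]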
Complexity results are derived for both coarse- and fine-grained parallel computation. Further, computational experience on a variety of parallel architectures will be discussed, with an emphasis on shared and distributed memory systems such as the Encore Multimax and Intel iPSC/2.