workman@decwrl.dec.com (Will Workman) (02/13/90)
At MasPar, we are focused on fine-grain, or massively parallel, computing to leverage Data Parallel programming techniques. This approach fits many scientific and engineering applications in which a grid or array of data can be mapped onto an array of processors, with each processor performing the computation for a point (or group of points) under a single instruction stream; indirect addressing can provide some flexibility in the data each processor selects. These applications generally involve large data sets, so the time to swap out one user for another is usually excessive, and "time sharing" as we have known it for extending access to large machines is not appropriate.

Your focus appears to be on coarse-grain, or multi-processing, parallelism. We can expect that mainframes and other large machines will continue to evolve toward multiple processors, with newer operating systems like Mach replacing standard UNIX and single-processor configurations. There is no architectural boundary that would preserve the single-CPU systems on which most of the earlier mainframes and time-sharing systems were based, and we expect operating systems will continue to evolve to take advantage of multiprocessor configurations, including load balancing. But a fundamental difference remains: such systems are still process driven, and are not able to take advantage of the power of Data Parallel programming techniques, which offer the highest performance on problems with thousands of data points, as are commonly found in scientific and engineering applications.

My personal viewpoint is that we must separate parallelism into two types - multi-processor, which is process driven, and Data Parallel - as the trade-offs become clearer and the programming techniques are distinctly different.

Best Regards,

Will Workman, Dir of Fed Mkt, MasPar
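The data-parallel model described above can be sketched in a few lines; this is an illustrative example, not MasPar code - the grid, the 4-neighbor relaxation step, and the function name are all assumptions chosen for illustration. Conceptually, each "virtual processor" owns one grid point, and the same instruction is applied at every point in lockstep; the explicit loops below merely stand in for what a SIMD array machine does in a single step.

```python
# Hedged sketch of the data-parallel style: one grid point per
# "virtual processor", a single instruction stream applied uniformly.
# (Illustrative only; not a MasPar API.)

def step(grid):
    """One data-parallel relaxation sweep: every interior point is
    replaced by the average of its four neighbors, conceptually in
    lockstep across the whole array."""
    n = len(grid)
    new = [row[:] for row in grid]
    for i in range(1, n - 1):        # on a SIMD array machine these
        for j in range(1, n - 1):    # loops vanish: one instruction,
            new[i][j] = 0.25 * (     # executed at all points at once
                grid[i - 1][j] + grid[i + 1][j]
                + grid[i][j - 1] + grid[i][j + 1])
    return new

grid = [[0.0] * 4 for _ in range(4)]
grid[0] = [100.0] * 4                # hot boundary along one edge
grid = step(grid)
```

A process-driven (coarse-grain) decomposition of the same problem would instead hand each processor its own block of the grid and its own instruction stream, which is exactly the trade-off distinguished above.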