bernhold@qtp.ufl.edu (David E. Bernholdt) (02/12/90)
Take as a basic assumption that I have an algorithm which needs to do a fair bit of I/O. I want to produce a parallel version of this algorithm, and I'm interested in ideas for handling the I/O. I'm not sure how much work has been done on this -- all the discussions I hear seem to ignore doing any kind of I/O in parallel. By the way, I'm mainly concerned with shared-memory MIMD machines -- I realize that each class of machine presents different problems and solutions.

To provide fuel for the discussion, here are some of my thoughts:

1) Most machines do not have an I/O channel for each process, so having every process initiate all of the I/O requests it wants will really kill performance -- all processes will end up waiting for the disk. Is this a real problem? In a single-user environment? Multi-user?

2) One solution might be to have one process make all of the I/O calls, storing the data in a large buffer. The other processes would "read" or "write" their data from the large buffer. What sort of machines will this work on? How general is it?

3) What about cases where the number of CPUs working on your code is indeterminate at any given time (Cray autotasking, for example)?

Any pointers to the literature, responses to the points above, or other discussion would be appreciated.
--
David Bernholdt                     bernhold@qtp.ufl.edu
Quantum Theory Project              bernhold@ufpine.bitnet
University of Florida
Gainesville, FL 32611               904/392 6365
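[Editor's note: the dedicated-I/O-process scheme in point 2 can be sketched as a producer/consumer arrangement. The following is a minimal illustration only, with Python threads standing in for the shared-memory processes, a queue standing in for the "large buffer", and an in-memory sink standing in for the disk; all names are made up for the example, and a real implementation would use the machine's own tasking and I/O primitives.]

```python
import io
import queue
import threading

def run_io_server(n_workers, records_per_worker):
    """Sketch of point 2: one process owns the I/O channel; the
    compute processes only touch a shared buffer."""
    buf = queue.Queue()     # the shared "large buffer"
    sink = io.StringIO()    # stands in for the disk

    def worker(wid):
        # Compute processes never issue I/O calls directly; they
        # just deposit their results into the shared buffer.
        for i in range(records_per_worker):
            buf.put("worker %d record %d" % (wid, i))
        buf.put(None)       # per-worker end-of-stream marker

    def io_server():
        # The single I/O process drains the buffer and performs all
        # actual writes, so only one process ever waits on the disk.
        done = 0
        while done < n_workers:
            item = buf.get()
            if item is None:
                done += 1
            else:
                sink.write(item + "\n")

    server = threading.Thread(target=io_server)
    workers = [threading.Thread(target=worker, args=(w,))
               for w in range(n_workers)]
    server.start()
    for t in workers:
        t.start()
    for t in workers:
        t.join()
    server.join()
    return sink.getvalue().splitlines()

lines = run_io_server(n_workers=4, records_per_worker=3)
# All 12 records reach the "disk", written by the single I/O thread.
```

This generalizes to any shared-memory machine with some form of lock or queue primitive; the open question raised in point 3 (an indeterminate number of workers) would require the I/O server to learn the worker count dynamically rather than taking it as a parameter.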