rminnich@super.org (Ronald G. Minnich) (12/14/90)
In article <12221@hubcap.clemson.edu>, hshi@maytag.waterloo.edu writes:
|> A naive question:
|> We have two parallel programming models: shared memory model via
|> shared variables and distributed memory model via message passing.
|> Is there any model which is somewhere between? Will it be more
|> suitable for the present computer architectures?

The shared memory model is very nice. Lots of systems (e.g. Ivy, MemNet, ...)
have demonstrated that you can provide a strongly-consistent shared memory
model to an application running on a bunch of workstations. It does not hide
the fact, however, that there is a network under there, and some things that
work on tightly coupled shared memory systems run very badly indeed in these
network-based shared memories.

My experience is that allowing the high-latency, message-driven nature of the
underlying network to manifest itself in the set of operators you provide is
a useful approach. One can provide the strict shared-memory model on a
network of, e.g., Suns, but having implemented such a system and watched how
applications use it, I think that is a mistake. Rather, you provide an
extended model of shared memory, with operators that can (depending on the
underlying network) cause pages to move, allow you to access different-size
pages, allow you to indicate that a high latency on the next fetch is
acceptable, and so on. We have found here that such extended operators are
both easy to use and can improve program performance.

For an idea of what the operators are in an earlier version of our system
(Mether), see the 10th ICDCS paper "Reducing host load, network load, and
latency ...". I hope to have a tech. report out on the current system soon
(it's my dissertation).

So the answer is, "yes, there is something in between, and we have an
instance of it at SRC". I am sure there are other such systems elsewhere.
Take a look at Lipton's PRAM (get a tech. report) or Hutto's slowly
consistent memory (10th ICDCS).

ron