carriero@YALE.EDU (Nicholas Carriero) (05/12/89)
To those who have commented on our CACM article, thanks; we've found parts of this discussion (particularly the objections of Shapiro, Kahn and Miller) interesting and illuminating. But we don't have time to respond to everything, and so at this point we're calling it quits. Please don't interpret non-comment from us as either acceptance or rejection of a posting. We'll conclude by responding to a posting that (unfortunately) is merely ludicrous. This is exactly the sort of thing we have neither the time nor the inclination to rebut, but let's get it over with this one time. Michael Scott of Rochester writes:

> I've been trying to sit this discussion out, but I can't contain myself
> any longer. "Speculative arguments about Linda's efficiency" ARE relevant
> precisely because Linda proponents have had eight years to make the case
> and they haven't succeeded.

Kale was attempting to make a point by arguing from the potential for inefficiency. Such arguments have been proved wrong in the past. Scott goes on to contradict himself, and to illustrate this point. He says:

> Linda is a simple, elegant, and appealing approach to writing parallel
> programs. For small-scale parallelism with medium to coarse-grain process
> interactions it is clearly very nice.

Eight years ago it was argued that Linda would fail in these very domains. Apparently Scott is conceding that those eight years weren't wasted after all. In large part, those arguments were similar to Kale's ("it looks like it has got to be inefficient, so forget it"), or, in Scott's phrasing:

> For large-scale parallel programming, however, Linda has problems with
> efficiency, modularity, and scalability that have not been resolved and
> that I do not believe can be resolved.

Historically, we have repeatedly faced the "so you've done X, can you do Y" syndrome. This is merely another instance.
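For readers following the thread who haven't seen Linda itself: processes coordinate through a shared "tuple space" rather than by message passing. The following is a minimal single-machine sketch of that model in Python, purely for illustration; the class and method names are our hypothetical choices, not the Yale C-Linda implementation (which compiles these operations and distributes the tuple space across nodes).

```python
import threading

class TupleSpace:
    """Toy tuple space: out() deposits a tuple, in_() withdraws a
    matching tuple (blocking until one exists), rd() reads a matching
    tuple without removing it. None acts as a wildcard ("formal")."""

    def __init__(self):
        self._tuples = []
        self._cond = threading.Condition()

    def out(self, *tup):
        # Deposit a tuple and wake any blocked readers.
        with self._cond:
            self._tuples.append(tup)
            self._cond.notify_all()

    def _match(self, pattern):
        # Return the first stored tuple matching the pattern, else None.
        for tup in self._tuples:
            if len(tup) == len(pattern) and all(
                p is None or p == v for p, v in zip(pattern, tup)
            ):
                return tup
        return None

    def in_(self, *pattern):
        # Withdraw a matching tuple, blocking until one is available.
        with self._cond:
            while (tup := self._match(pattern)) is None:
                self._cond.wait()
            self._tuples.remove(tup)
            return tup

    def rd(self, *pattern):
        # Read a matching tuple without removing it.
        with self._cond:
            while (tup := self._match(pattern)) is None:
                self._cond.wait()
            return tup

ts = TupleSpace()
ts.out("task", 1)
ts.out("task", 2)
tag, n = ts.in_("task", None)   # withdraws one ("task", ...) tuple
```

The point of the model, and the source of both its appeal and the efficiency debate above, is that workers never name each other: a master drops "task" tuples into the space and any idle worker withdraws one, which is what makes the master/worker programs discussed in this thread so easy to write.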
This statement and the following could easily be interpreted as meaning that Linda has been charged for the last eight years with the mission of satisfying Scott's "large-scale parallel" programming challenge (whatever that is) and has failed to do so. This is not the case. In fact, we moved from one domain to another, with varying degrees of success, as machines became available. We continue to do this. When, as we believe we will, we demonstrate adequate performance in the "large-scale" domain, history teaches us that we will have to brace ourselves for the "hyper-scale" enthusiasts. Note, we are not arguing that there are no legitimate reasons for concern, or that we know we will always be able to deliver an appropriate level of efficiency. We are rejecting the tendency to dismiss Linda based on speculations about efficiency in a particular domain. Informed, detailed arguments based on theoretical, experiential and experimental considerations would be a different matter, but such arguments require that the critic actually expend some effort---we all know it's a lot easier to speculate.

> The burden of proof lies on the Linda camp. They have yet to produce a
> single application that addresses the performance question convincingly.
> How long is the world supposed to wait?

Being charitable, let's assume you left off "large-scale parallelism" from in front of "application"; otherwise, this statement is simply irresponsible. An article in Byte last Fall and two recent reports, on our hypercube system and on programming methodology, all discuss Linda applications whose speedup increases close to linearly through 64 nodes on the iPSC/2, our biggest machine. Maybe these applications would break at 65 nodes, but no one who understands the structure of the system seems to think this is likely. This might have something to do with the fact that, when Intel introduced its new parallel disk server for the iPSC/2 at the last hypercube conference, they used a Linda program to demo it.
The world is not waiting. More than a dozen hardware concerns are involved in Linda work. At least as many Linda-related research projects (independent of our efforts at Yale) are underway around the world.

Nick & Dave