mark%umcp-cs@UDel-Relay@sri-unix.UUCP (08/17/83)
From: Mark Weiser <mark%umcp-cs@UDel-Relay>

This *maximum* time business needs further ground rules if we are to discuss it here (which we probably shouldn't). For instance, the argument that communication and multiplication paths don't matter in an nxn matrix multiply, but that the limiting step is the summation of n numbers, seems to allow too much power in specifying components. I am allowed unboundedly many processors and communication paths, but only a tree of adders?

I can build you a circuit that will add n numbers simultaneously, so that means the *maximum* speed of an nxn matrix multiply is constant. But it just ain't so. As n grows larger and larger and larger, the communication paths and the addition circuitry will also either grow and grow and grow, or the algorithm will slow down. Good old time-space tradeoff.

(Another time-space tradeoff for matrix multiply on digital computers: just remember all the answers and look them up in ROM. Result: constant time matrix multiply for bounded n.)
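[A sketch, not from the original post, of the "tree of adders" model Weiser grants: summing n numbers with pairwise adders takes O(log n) parallel steps, not constant time. The function name and step-counting are illustrative only.]

```python
def tree_sum(values):
    """Sum a list with a balanced tree of two-input adders,
    counting the number of parallel steps (the tree's depth)."""
    values = list(values)
    steps = 0
    while len(values) > 1:
        # Every adjacent pair adds simultaneously: one parallel step.
        values = [values[i] + values[i + 1] if i + 1 < len(values)
                  else values[i]
                  for i in range(0, len(values), 2)]
        steps += 1
    return values[0], steps

total, depth = tree_sum(range(16))  # 16 numbers -> 4 parallel steps
```

So under the tree-of-adders assumption, the summation step of an nxn multiply costs log n parallel time, and that is exactly the component choice Weiser is objecting to as arbitrary.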
sts@ssc-vax.UUCP (Stanley T Shebs) (08/20/83)
Hmmm, I didn't know that addition of n numbers could be performed simultaneously - ok then, constant-time matrix multiplication, given enough processors. I still haven't seen any hard data on limits to speed because of communications problems. If it seems like there are limits but you can't prove it, then maybe you haven't discovered the cleverest way to do it yet...

stan the lep hack
ssc-vax!sts (soon utah-cs)

ps The space cost of constant- or log-time matrix mults is of course ridiculous
pps Perhaps this should move to net.applic?
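[A hedged sketch of the scheme Shebs alludes to, not code from either post: with n^3 multipliers, every product a[i][k]*b[k][j] can be formed in one parallel step; each output cell then needs its n products summed, which is where the disputed "simultaneous" n-input adder (versus a log-depth tree) comes in. Sequential Python only simulates the data flow.]

```python
def parallel_matmul(a, b):
    """Matrix multiply organized as the two parallel phases discussed above."""
    n = len(a)
    # Phase 1: all n^3 products at once (one step, given n^3 multipliers).
    products = [[[a[i][k] * b[k][j] for k in range(n)]
                 for j in range(n)] for i in range(n)]
    # Phase 2: each of the n^2 cells sums its n products -- constant time
    # only if a simultaneous n-input adder is granted, log n with a tree.
    return [[sum(products[i][j]) for j in range(n)] for i in range(n)]
```

For n = 2 this uses 8 multipliers and 4 two-way sums; the "ridiculous" space cost Shebs concedes is the n^3 multipliers plus the wiring to fan the inputs out to all of them.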