aglew@ccvaxa.UUCP (05/20/86)
As I was reading Hillis' book on the Connection Machine, I was struck by the statement <<we don't care if the processors are computing ANDs or ADDition - we aren't comparing the quality of the instruction set>>. This started me wondering - has any work been done on an information-theoretic measure of computing power? It seems intuitively obvious to me that AND is less powerful than ADD, but how do we quantify this? I've started out by using the type of approach that is used in a first communications class to `define' entropy.

Such a measure of compute power would have to be probabilistic - probably (:-) parametrized for a particular input distribution. I'm only worrying about functions that take two N bit inputs and produce an N bit output. The distinction between AND and ADD seems to be one of dependencies between bits - but that isn't the whole story, since a function that just copies its first input bit to every output bit has all the dependencies you'd want, but doesn't really seem very powerful. And so on...

Can anyone suggest properties that such a measure should have, as in cascades of computation, or ifs, or...? Anyone know of any work along this line? Is there yet another field of computer science/math/engineering that I'm totally ignorant of?

---

Before people start replying that such a measure is meaningless unless it is associated with a particular benchmark, let me say (1) you're probably right, but (2) I hope you're wrong. The whole beauty of Shannon's work in information theory was that it was independent of the communications channel used; I think it at least possible that the same type of thing can be done for computation. I wouldn't be surprised if it has been done already; if it hasn't been done yet, I'm probably not the person to do it; but what the heck, it gives me something to do while the kernel is making.

Andy "Krazy" Glew. Gould CSD-Urbana.
USEnet: ihnp4!uiucdcs!ccvaxa!aglew
1101 E. University, Urbana, IL 61801
ARPAnet: aglew@gswd-vms
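As a quick illustration of the entropy-style comparison hinted at above, here is a small sketch (mine, not from the post) that measures the Shannon entropy of each function's output when both N-bit inputs are drawn uniformly and independently - just one choice of input distribution, as the post notes. On this measure, ADD (mod 2^N) preserves the full N bits of output entropy, while AND loses some, since each output bit is 1 only a quarter of the time:

```python
from collections import Counter
from itertools import product
from math import log2

def output_entropy(f, n_bits):
    """Entropy (in bits) of f(a, b) when a, b are independent
    uniform n_bits-bit integers."""
    counts = Counter(f(a, b) for a, b in product(range(2 ** n_bits), repeat=2))
    total = sum(counts.values())
    return -sum((c / total) * log2(c / total) for c in counts.values())

N = 4
mask = (1 << N) - 1
h_and = output_entropy(lambda a, b: a & b, N)          # bitwise AND
h_add = output_entropy(lambda a, b: (a + b) & mask, N)  # ADD mod 2^N

# AND: each output bit is Bernoulli(1/4), so H = 4 * H(1/4) ~ 3.245 bits.
# ADD: for any fixed a, a+b mod 16 is uniform in b, so H = 4.0 bits exactly.
print(f"AND: {h_and:.3f} bits, ADD: {h_add:.3f} bits")
```

Of course, output entropy alone can't be the whole story either - the bit-copying function mentioned above shows that dependency structure matters too - but it does separate AND from ADD in the intuitively right direction.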