kp@uts.amdahl.com (Ken Presting) (02/24/90)
In article <90Feb15.231415est.6212@neat.cs.toronto.edu> radford@ai.toronto.edu (Radford Neal) writes:

>In article <e1oq020b88jL01@amdahl.uts.amdahl.com> kp@amdahl.uts.amdahl.com (Ken Presting) writes:
>>It is not correct that calculations or algorithms are less powerful
>>than other mathematical notations (such as the differential equations
>>used to express the laws of physics), as Penrose suggests. Any
>>mathematical description of a physical object (such as the brain) may
>>be directly translated into a computer program, with no loss of
>>information. This claim is a consequence of mathematical theorems which
>>are certainly familiar to Penrose.
>
>I think you have missed Penrose's point, here. I have only just
>started reading _The Emperor's New Mind_, but from what I have read,
>it is clear that Penrose believes that the (true) laws of physics are
>_not_ computable, and that neurons somehow make use of these laws
>to perform non-computable operations. Penrose is also quite aware that
>the _currently accepted_ laws of physics _are_ computable, so pointing
>this out is not an argument against his views.

Maybe I need to extend this paragraph.

If some hypothesis is proposed, then to test the hypothesis it is necessary to draw conclusions from it (combined with assertions regarding initial conditions, state functions, etc.). The hypothesis is disconfirmed if the conclusions mismatch observation. Confirmation is harder to define, but we can say that the hypothesis is successfully tested when conclusions based on it match observation. As Ray Allis has observed in this group, a computer simulation operates by drawing conclusions from the description of a system. Now, any conclusion which might be drawn by a physicist can also be drawn by a computer (by enumeration, if the theory is stated in a complete logic).
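[Editor's note: the enumeration idea is easy to sketch in code. Below is a minimal, purely illustrative Python version; the "theory" is a couple of invented Horn-style rules and the "initial conditions" are invented facts, none of which appear in the posts. A real physical theory would need full first-order logic, but the principle is the same: anything a reasoner can conclude from a finite description, a program can reach mechanically.]

```python
# A toy sketch of "drawing conclusions by enumeration".  All names here
# are made up for the example; the rules stand in for lawlike statements
# and the facts stand in for measured initial conditions.

def enumerate_consequences(facts, rules):
    """Yield every derivable fact exactly once.

    `rules` is a list of (premises, conclusion) pairs; a rule fires once
    all of its premises have already been derived.
    """
    derived = set(facts)
    for fact in facts:
        yield fact
    changed = True
    while changed:                      # repeat until no rule adds anything new
        changed = False
        for premises, conclusion in rules:
            if conclusion not in derived and all(p in derived for p in premises):
                derived.add(conclusion)
                yield conclusion
                changed = True

# Hypothetical "physics": two initial conditions and two lawlike rules.
facts = ["ball_released", "gravity_on"]
rules = [
    (["ball_released", "gravity_on"], "ball_accelerates"),
    (["ball_accelerates"], "ball_hits_floor"),
]

print(list(enumerate_consequences(facts, rules)))
# -> ['ball_released', 'gravity_on', 'ball_accelerates', 'ball_hits_floor']
```

Exhaustive closure is hopelessly inefficient, but efficiency is exactly what the argument sets aside: the claim is only that the set of conclusions is mechanically reachable.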
Therefore, if a physical hypothesis has been successfully tested for a given system, the physicists have *nothing* to say about that system which the programmers can't match. So my point is not that a computer program can always update its variables to match the trajectory of any physical system through its phase space. Last month I argued to the contrary. My point is that whatever claims the physicists care to make about the trajectory must be stated in finite notations, and are therefore susceptible to automated enumeration of consequences. A related point holds for measurements of initial conditions - sure, there may be an infinite amount of info in a real analog system, but when you measure it, you get a finite amount of info.

There is a problem with getting the enumeration to run in real time. The elapsed time for a physical system to traverse a trajectory is not related to the length of the enumeration of assertions *about* the trajectory. If the trajectory is chaotic, then there is no upper bound on the number of significant digits needed to represent the trajectory, and no upper bound on the length of the assertions in the enumeration. So if brain processes are chaotic, we can't use our deductive simulator to control a robot in a real-time duplication of some human's behavior.

But the finiteness of measurements and the need to test physical hypotheses is significant here also. Suppose we truncate our significant digits (ouch), and run our simulation in real time. Then we try to compare the behavior of the robot to the behavior of a real person. They won't match, but there's no way to tell if the mismatch is due to unmeasurable discrepancies in the initial conditions, or to inaccurate simulation of the trajectory. Of course, if we let slip to the testers that the robot is computer-controlled, they'll know for sure that there are inaccuracies in the simulation.
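[Editor's note: the "no upper bound on significant digits" worry can be made concrete with the doubling map x -> 2x mod 1, a textbook chaotic system that is not mentioned in the posts and stands in here for any chaotic brain process. Each iteration shifts the binary expansion left one bit, so every step of the trajectory consumes one more digit of the measured initial condition. Exact rational arithmetic makes the divergence precise.]

```python
# Two initial conditions agreeing to 40 binary digits, iterated exactly
# under the doubling map.  The separation doubles each step, so any fixed
# truncation of the measurement eventually dominates the trajectory.

from fractions import Fraction

def doubling_orbit(x0, steps):
    """Iterate x -> 2x mod 1 exactly, using rational arithmetic."""
    xs = [x0]
    for _ in range(steps):
        xs.append((2 * xs[-1]) % 1)
    return xs

measured = Fraction(1, 3)                  # the "true" initial condition
truncated = measured + Fraction(1, 2**40)  # agree to 40 binary digits

a = doubling_orbit(measured, 41)
b = doubling_orbit(truncated, 41)

# After n steps the separation is exactly 2**(n-40):
print(abs(a[10] - b[10]))   # prints 1/1073741824, i.e. 2**-30: negligible
print(abs(a[39] - b[39]))   # prints 1/2: the orbits are maximally apart
```

Adding one more measured bit buys exactly one more step of agreement, which is the sense in which the testers can always be forced to measure longer than a lifetime.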
But they could never figure that out by external observation - experimental runs on chaotic systems are inherently unrepeatable. The best they could do is run many tests, looking for statistical deviations between the real person and the robot. Eventually, they'll find some, but we can always add a significant digit and force them to make the testing process take longer than a lifetime.

(To get the simulation to run in real time at all requires that a more efficient algorithm than enumeration of assertions be used. I'd rather stick with enumeration and assume that a non-deterministic TM does the enumeration, just for elegance. Associative memory will give us real machines with non-deterministic speed, for finite problems.)

It is especially interesting that the designers (and testers) of an AI constructed out of analog devices are protected (or hampered) by similar phenomena. The specifications for the analog hardware must be finitely stated, the accuracy of the construction can be verified only down to a finite tolerance, the components' behavior can be predicted with only finite accuracy, and the initial state can be measured with limited precision. So if there is something very very small, but very very important about being born (for example), and we don't notice it, and therefore we can't give the same property to something that we build, then it won't help us at all to have a non-symbolic AI. If nobody noticed and described "that certain something", then no way can we build it into anything. But if we can build it, we can also program it.

To sum up, the hypothetical uncomputability of any hypothetical laws of physics is just irrelevant. We'll crunch the same symbols the *physicists* crunch. Who cares what the *particles* compute?

I feel a little sheepish making this argument, after having fanatically argued against the "if all else fails, we can simulate the brain" view so recently.
The key idea here - the finiteness of scientific method - had not occurred to me before I got into Penrose's book myself. Still, I think my position here is consistent with my former arguments. I see no way to be sure that any given simulation will capture *everything* about an arbitrary physical process. I also see no way to be sure that the inevitable inaccuracies in a simulation will not leave out something important. I agree with Penrose that a computer simulation necessarily diverges from the system it describes. I deny the possibility that the difference is necessarily significant. I guess I'll leave the paragraph as it was.

>Unlike the Chinese Room, Penrose's argument makes perfect sense to me.
>I don't believe it (as yet), but that's just because I think the
>possibility that the real laws of physics are non-computable is remote,
>and the possibility that this would make any difference to the operation
>of neurons even if it was true is even more remote.

They both make sense to me. Searle's are more useful, I think.

>But who knows? Maybe he's right. The only way to refute him is to
>succeed in creating an artificial intelligence. I certainly hope none
>of the proponents of "strong AI" think they've proved _their_ position
>in the absence of such a demonstration.
>
>    Radford Neal

Thank you for your comments.

Ken Presting
dmocsny@uceng.UC.EDU (daniel mocsny) (02/25/90)
In article <2al902Zg8bnn01@amdahl.uts.amdahl.com> kp@amdahl.uts.amdahl.com (Ken Presting) writes:

>The specifications for the analog hardware must be finitely
>stated, the accuracy of the construction can be verified only down to
>a finite tolerance, the components' behavior can be predicted with only
>finite accuracy, and the initial state can be measured with limited
>precision. So if there is something very very small, but very very
>important about being born (for example), and we don't notice it, and
>therefore we can't give the same property to something that we build,
>then it won't help us at all to have a non-symbolic AI. If nobody
>noticed and described "that certain something", then no way can we build
>it into anything. But if we can build it, we can also program it.

I disagree. The analog builders might "get lucky," and be able to reproduce their success even though they can't explicate any programmable underlying mechanism. Remember, *loads* of fine engineering with real-world machines has preceded solid theoretical models of what is going on. (The compass, the optical telescope, gunpowder, rockets, selective breeding, public sanitation, civil engineering in the ancient world, etc. - in short, almost everything discovered before the Second World War.)

You don't have to have any idea of how to simulate combustion before you can invent gunpowder and conquer nations. All you have to know is how to find white crystals in certain caves, a yellow funny-smelling substance in certain rocks, how to roast logs down to charcoal, and a repeatable formula for mixing these things together. Your recipe for gunpowder is a reproducible, logical description of the process, but it is nowhere near complete enough *by* *itself* to allow a computer to reproduce much of the sensory information an observer would obtain from watching you mix up a big batch of gunpowder and set a match to it.
The real world contains many useful phenomena that give us an enormous head start in the game of artificially creating complex sensory experiences. Trying to recreate the same sensory experiences via logical computation and general-purpose actuators is vastly harder, which is why our efforts to virtualize greatly lag our success at actualizing. We may consider the real world to be like one massive ongoing computation, with many hooks for us to get in and call subroutines we don't comprehend beyond their interfaces.

Even in this day of computing machines and extensive theories, experiment still leads theory at least as often as otherwise. Consider the recent example of high-temperature superconductors. A viable industry may emerge without much of a theoretical model. And it will have ample precedent. In my field (chemical engineering) everything is curve fits and correlations. (Thumb through the _Chemical Engineer's Handbook_ the next time you get bored with determinism. The extent to which our economy depends on practically uncomputable phenomena is rather appalling.)

I concede that the likely complexity of an intelligent machine greatly lowers the probability that a dirty-handed empiricist could build one by accident. But I don't think empiricism and theory have to be so nicely interchangeable as you imply, especially in the short run.

In the long run, who knows? Can any phenomenon be so truly uncomputable that no logical process could behave equivalently (if not exactly)? The presence of such phenomena would seem to imply a universe where theory is of no value at all. I hope we all agree that's not true... :-)

Dan Mocsny
dmocsny@uceng.uc.edu
kp@uts.amdahl.com (Ken Presting) (02/28/90)
In article <3750@uceng.UC.EDU> dmocsny@uceng.UC.EDU (daniel mocsny) writes:

>In article <2al902Zg8bnn01@amdahl.uts.amdahl.com> kp@amdahl.uts.amdahl.com (Ken Presting) writes:
>> . . . If nobody
>>noticed and described "that certain something", then no way can we build
>>it into anything. But if we can build it, we can also program it.
>
>I disagree. The analog builders might "get lucky," and be able to
>reproduce their success even though they can't explicate any
>programmable underlying mechanism.

Good point. I should have explicitly qualified "build" with "deliberately". But even this qualification won't, by itself, rescue my point altogether -

>You don't have to have any idea of how to simulate combustion before
>you can invent gunpowder and conquer nations. . . .

I can think of an even more damaging example. Natural selection has "built" an intelligent organism, by trial and error, and certainly did not start out with any understanding of the phenomenon it was destined to produce.

> . . . {A} recipe for
>gunpowder is a reproducible, logical description of the process, but
>it is nowhere near complete enough *by* *itself* to allow a computer
>to reproduce much of the sensory information an observer would obtain
>from watching you mix up a big batch of gunpowder and set a match to
>it.
>
>The real world contains many useful phenomena that give us an enormous
>head start in the game of artificially creating complex sensory
>experiences. Trying to recreate the same sensory experiences via
>logical computation and general-purpose actuators is vastly harder,

This comment strikes at the heart of my mistake. A general-purpose computer is easy enough to imagine, and not so hard to make. But now that I think about it, I am convinced that a general-purpose *actuator* is clearly impossible.
The picture I drew of a deductive simulator controlling a robot would require an actuator that could take any assertion about the state of a human body and *poof* cause it to be the case that some congregation of atoms has that very state. Short of Star Trek's transporter, the idea is absurd.

So let me backpedal a bit. I need to start with a special-purpose transducer which can drive a dedicated actuator. I can recite a standard story here, for example a two-way radio inside the skull of a suitably decerebrated (ouch) human body, which is the "actuator". The transducers might be electrodes (or better, electrically controlled neurotransmitter dispensers) connected to the remains of a spinal column, with corresponding neurotransmitter sensors connected to efferent neurons from the retina, skin, etc.

Endocrine "signals" from the rest of the body are also important in brain function, and we would like our robot also to respond appropriately to various psychoactive drugs. Fortunately, current technology provides us with a suggestive example of general-purpose chemical sensors in the form of gas chromatographs, PET and MRI scanners, etc., so it is perhaps conceivable that future development (and especially miniaturization) will result in devices capable of encoding the various chemical events to which the brain is sensitive.

Of course, since I have described a body which is physically altered, it is trivial to identify observable differences between the robot and a normal human. Just X-ray its head. So my tidy picture of arbitrarily accurate calculations subverting increasingly careful or protracted measurements needs an indefinite number of ad-hoc exceptions. If we want our robot to make a reasonable showing in (say) a Total Turing Test, we'd have to make it very shy of radiologists, and who knows what else.
To rescue my final conclusion - that any detectable deviations between the model and a real human are eliminable - I must add the assumption that sensor and actuator technology will progress without limit. This is much more problematic than assuming that the deductive simulation can be made arbitrarily precise.

I consider the failure of this story to be a reductio of all proposals to gauge the success of AI in terms of indistinguishability from humans. The problem is not that there is some threshold of accuracy that androids can never reach. Rather, the problem is that sensor and actuator technology will necessarily progress at a finite pace, always leaving some detectable difference between the robot and the real thing. As long as we refrain from analyzing intelligence, we will be hard pressed to find principled grounds on which to ignore objections that the detectable differences show the failure of our implementation.

In a future post, I will discuss Dan's objections to my hasty disposal of empiricism. The main idea is that measurement enforces theory-dependence. In the meantime, my thanks to Dan for a stimulating objection.
kp@uts.amdahl.com (Ken Presting) (03/03/90)
In article <3750@uceng.UC.EDU> dmocsny@uceng.UC.EDU (daniel mocsny) writes:

>In article <2al902Zg8bnn01@amdahl.uts.amdahl.com> kp@amdahl.uts.amdahl.com (Ken Presting) writes:
>> . . . If nobody
>>noticed and described "that certain something", then no way can we build
>>it into anything. But if we can build it, we can also program it.
>
>I disagree. The analog builders might "get lucky," and be able to
>reproduce their success even though they can't explicate any
>programmable underlying mechanism. . . .
>
> . . . (Thumb through the _Chemical Engineer's
>Handbook_ the next time you get bored with determinism. The extent to
>which our economy depends on practically uncomputable phenomena is
>rather appalling.)

Here (and below) I think I can salvage some useful conclusions from my original argument. I had focused on Penrose's arguments, which depend on the possible existence of uncomputable functions in the laws of physics. But I propose a simulation technique which depends on deduction rather than numerical solution of differential equations. The standard logical operation which represents the determinist metaphysical thesis is *deduction*, not computation. Of course, computers are entirely adequate to implement deductive systems, by the Church-Turing thesis. If there is any way at all to predict a future state from a past state, computers can do it as well as any mere physicist.

Using deduction doesn't really change the power of the simulator; it just puts the focus on a different issue. I do think this eliminates any concerns with the computability of the functions which are used to state the laws, given that the laws are stated in a complete logic. Recursive enumerability is sufficient, supposing that real-time performance is not required, or that machines with non-deterministic performance exist (such as associative memory). The computability or uncomputability of every other process is irrelevant.
Even if we lack a theoretical account of a measured phenomenon, so that we are unable to deduce our measurements from a more general theory, we still have the expedient of adding the tabulated data points themselves to the data base of a simulation. One point I am urging against Penrose is that whatever the scientists can state about the phenomenon we want to model, AI workers can implement, because deduction itself is a computable process. Penrose goes on to suppose that non-deductive processes have a role in mathematics and science, but that is a different subject, with huge controversies of its own. Hypothesis formation is non-deductive, but you need deductions from the hypothesis to get testable predictions, and it's the test of the hypothesis that gets published. The only relevant non-deductive process I can think of is measurement, which causes no problems for my argument. Quite the contrary:

>I concede that the likely complexity of an intelligent machine greatly
>lowers the probability that a dirty-handed empiricist could build one
>by accident. But I don't think empiricism and theory have to be so
>nicely interchangeable as you imply, especially in the short run.

This is why I made a very big deal out of *measuring* the robot. In engineering, theory most often has a background role if anything, and need have no role at all. As you observe, the bang is no smaller for being unexplainable. But a measuring device without a theoretical explanation is no measuring device at all - it's just another mysterious correlation!

This point is crucial to my argument. Penrose can speculate at will about possible laws. But if he is presented with a robot, and wants to criticize its behavior, he'll need some sort of observation on which to base the criticism. If he knows how to make the measurement, then there is a way to simulate the same effect.
This dependency of measurement on theory holds pretty well for complex measuring devices, but stops irritatingly short of providing a rationalist reduction of empiricism (it irritates me, anyway). And I should grant right away that a device (X-rays are a good example) can make useful measurements before it's explained by the theorists, once the experimentalists have a reasonably thorough description of its performance. But a new form of measurement is not going to topple any old theories until it's explained by a new theory itself.

(Via E-mail, Dan made further comments, which I'll address indirectly here. Btw, Dan, that letter made my day!)

It is possible to finesse the actuator issue by restricting the observer in a Turing test to a limited interface with the simulator. Dan suggests using the simulator to drive a light and sound source, thus reproducing most of an observer's experience of interacting with a person. This would be like substituting a "virtual reality" environment for Turing's original teletype.

I have considered an even more radically restricted interface - present the "observer" with two sets of descriptions, one set generated by the simulator, and the other set recorded by hand by technicians watching, measuring, X-raying, etc. a live human. The observer then must decide which is the description of the real person versus the output of the simulator. The observer is allowed to specify any situation, any measurement, any dialogue, to his heart's content. The technicians who report on the real human are required to couch their reports in the same notation as used by the simulator.

My approach suffers severely from the "simulations don't fly" problem, which is why I stayed with the actuators. But any proposal for a success condition for AI which involves restricting the observations allowed to the judges has a deeper problem. In order to claim that "observations of types X, Y ... provide adequate information to determine the presence of thought in the observed system", it is necessary to show that other types of observations are irrelevant. For example, the familiar Turing test says "Observations of language behavior are sufficient...". This is OK if you can show that (at least some aspect of) thought is independent of all the other things people do, and all the other attributes they possess. But you *can't* do that until you have some independently motivated account of what thought is.

I don't think AI is going to get the amateur speculators off its back (and out of the general press) until there is a good reason for saying "We don't care that our duck don't waddle. It's STILL a duck". Or else settle for Weak AI. <dismount soapbox>

>In the long run, who knows? Can any phenomenon be so truly uncomputable
>that no logical process could behave equivalently (if not exactly)?
>The presence of such phenomena would seem to imply a universe where
>theory is of no value at all. I hope we all agree that's not true... :-)

Agreed! I'd emphasize deduction as the crucial logical process, rather than computation. And we have to be quite forgiving wrt the time scale of the simulation (cf. Dunning's comments on chaotic systems), or else focus on the finiteness of the equivalency-checking process. I have a hard time believing that we could ever convince ourselves that we have discovered an absolutely non-simulable process. I bet that if a candidate ever arose, there'd be lots of talk about "hidden variables" or some such nonsense... :-}

>Dan Mocsny
>dmocsny@uceng.uc.edu

Thanks again for a very stimulating article.

Ken Presting
radford@ai.toronto.edu (Radford Neal) (03/04/90)
In article <b3Yc02Cj8dl401@amdahl.uts.amdahl.com> kp@amdahl.uts.amdahl.com (Ken Presting) writes:

>... I had focused on Penrose's arguments, which depend on
>the possible existence of uncomputable functions in the laws of physics.
>But I propose a simulation technique which depends on deduction rather
>than numerical solution of differential equations. The standard
>logical operation which represents the determinist metaphysical thesis
>is *deduction*, not computation. Of course, computers are entirely
                                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
>adequate to implement deductive systems, by the Church-Turing thesis.
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
>If there is any way at all to predict a future state from a past state,
>computers can do it as well as any mere physicist.

I think you're suffering from a lack of imagination here. Penrose is of course denying that computers are adequate to implement any deductive system, if by "deductive system" you mean the process by which real mathematicians establish new truths. If you're to understand his argument, you have to try to imagine how this might be true.

Let's say that one day Penrose announces that he is able to solve, say, the word problem for semi-groups - a well-known non-computable problem. People give him instances of this problem. After a period of time that goes up only reasonably with the size of the instance he announces the answer: YES or NO. In those cases where the true answer is subsequently determined, he always turns out to be right. This holds even for very difficult cases that require increasingly subtle arguments to establish that the answer is NO, as well as cases where extremely complex reductions are needed to demonstrate that the answer is YES.

I think it would be quite reasonable to conclude in the above situation that Penrose can somehow perform non-computable operations, and hence that the laws of physics must also be non-computable.
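[Editor's note: the word problem for semigroups asks, given defining relations u = v over an alphabet, whether two words denote the same element. By the Markov-Post theorem there is no algorithm that settles every instance, which is what makes the oracle scenario above bite. But the YES side is computably enumerable: breadth-first rewriting finds any derivable equality. The sketch below, with its toy presentation, is my own illustration and does not appear in the posts.]

```python
# Semi-decision procedure for the semigroup word problem: search the space
# of words reachable from w1 by applying relations in either direction.
# Derivable equalities are eventually found; underivable ones can make the
# search run forever, so a step budget stands in for "never halts".

from collections import deque

def equal_words(w1, w2, relations, max_steps=100000):
    """Return True if w1 = w2 is derivable; None if the budget runs out."""
    rules = relations + [(v, u) for (u, v) in relations]  # both directions
    seen, queue = {w1}, deque([w1])
    steps = 0
    while queue and steps < max_steps:
        w = queue.popleft()
        if w == w2:
            return True
        steps += 1
        for lhs, rhs in rules:
            i = w.find(lhs)
            while i != -1:                       # rewrite at every position
                nxt = w[:i] + rhs + w[i + len(lhs):]
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
                i = w.find(lhs, i + 1)
    return None  # budget exhausted or search space closed: no derivation found

# A toy commutative presentation: the single relation ab = ba.
print(equal_words("abab", "aabb", [("ab", "ba")]))  # prints True
```

The asymmetry is the whole point: a computer can certify every YES given enough time, while the imagined Penrose-oracle also delivers reliable NOs in reasonable time, something no Turing machine can do uniformly.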
You could construct a similar scenario in which the word problem is solved by some physical computer (obviously not Turing-equivalent), rather than by a human being.

    Radford Neal
jrk@sys.uea.ac.uk (Richard Kennaway) (03/04/90)
In article <90Mar3.152728est.6160@neat.cs.toronto.edu> radford@ai.toronto.edu (Radford Neal) writes:

>In article <b3Yc02Cj8dl401@amdahl.uts.amdahl.com> kp@amdahl.uts.amdahl.com (Ken Presting) writes:
>>Of course, computers are entirely
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
>>adequate to implement deductive systems, by the Church-Turing thesis.
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
>
>I think you're suffering from a lack of imagination here. Penrose is
>of course denying that computers are adequate to implement any
>deductive system, if by "deductive system" you mean the process by
>which real mathematicians establish new truths. If you're to understand
>his argument, you have to try to imagine how this might be true.
>
>Let's say that one day Penrose announces that he is able to solve, say,
>the word problem for semi-groups - a well-known non-computable problem.
>People give him instances of this problem. After a period of time
>that goes up only reasonably with the size of the instance he announces
>the answer: YES or NO. In those cases where the true answer is
>subsequently determined, he always turns out to be right. This holds
>even for very difficult cases that require increasingly subtle
>arguments to establish that the answer is NO, as well as cases where
>extremely complex reductions are needed to demonstrate that the answer
>is YES.
>
>I think it would be quite reasonable to conclude in the above situation
>that Penrose can somehow perform non-computable operations, and hence
>that the laws of physics must also be non-computable. You could construct
>a similar scenario in which the word problem is solved by some physical
>computer (obviously not Turing equivalent), rather than by a human being.

And if pigs had wings they could fly. So what?

--
Richard Kennaway
SYS, University of East Anglia, Norwich, U.K.
Internet: jrk@sys.uea.ac.uk    uucp: ...mcvax!ukc!uea-sys!jrk
smoliar@vaxa.isi.edu (Stephen Smoliar) (03/06/90)
In article <90Mar3.152728est.6160@neat.cs.toronto.edu> radford@ai.toronto.edu (Radford Neal) writes:
>
>Let's say that one day Penrose announces that he is able to solve, say,
>the word problem for semi-groups - a well-known non-computable problem.
>People give him instances of this problem. After a period of time
>that goes up only reasonably with the size of the instance he announces
>the answer: YES or NO. In those cases where the true answer is
>subsequently determined, he always turns out to be right. This holds
>even for very difficult cases that require increasingly subtle
>arguments to establish that the answer is NO, as well as cases where
>extremely complex reductions are needed to demonstrate that the answer
>is YES.
>

I think this gets to the heart of the matter behind Penrose's argument very nicely. It also raises some rather troubling issues which probably need further discussion. I realize that, to some extent, these issues conflict with a personal tendency to try to abstract the world into deductive operations; but, in spite of the fact that I know this may only be a personal bias, there are aspects of this example which worry me. I think the problem is one of the epistemological foundations of issues of BELIEF.

Penrose announces that he can solve the word problem. For any case we give him, he responds with the correct answer. Has he convinced us? Let me illustrate the difficulty by constructing what I hope is a valid analogy: Penrose announces that he can solve the general medical diagnosis problem. For any case from the medical literature we give him, he responds with the correct answer. If you were running a hospital, would you want him going on rounds through your wards?

The point I am trying to get at here is that there is more to reasoning than getting the right answer.
If Penrose says (using whatever reasoning powers "work" for him), "This patient has a malignant brain tumor which will be fatal if not corrected by surgery within one hour," would YOU rush the patient into the operating room WITHOUT ASKING ANY QUESTIONS? Whether you are an administrator or a fellow physician, chances are, you would want some kind of JUSTIFICATION for Penrose's decision before taking any hasty action; and if, when confronted with the question of justification, Penrose were to respond, "I just know it," you would be in quite a quandary. (Believe me, I have no idea how I would respond in such a situation, particularly if I knew that Penrose's track record had been flawless prior to this incident.)

I think the point we have to confront here is that CONVINCING reasoning lies at neither extreme. If we are talking about "the process by which real mathematicians [or any other thinkers, for that matter] establish new truths," then we have no reason to believe that deductive systems are the only mechanisms which may kick in. Instead, as Marvin Minsky pointed out in THE SOCIETY OF MIND, such systems for formal reasoning are better qualified to SUMMARIZE and JUSTIFY those "new truths," once they have been encountered. The question then reduces to whether or not there are MECHANISMS, deductive or otherwise, for establishing (or even hypothesizing) them. AI argues that such mechanisms do, indeed, exist; but, because we have given so much of our attention to deductive systems, we are probably still a far cry from a better understanding of their nature.

=========================================================================
USPS:     Stephen Smoliar
          USC Information Sciences Institute
          4676 Admiralty Way  Suite 1001
          Marina del Rey, California  90292-6695
Internet: smoliar@vaxa.isi.edu

"Only a schoolteacher innocent of how literature is made could have
written such a line."--Gore Vidal
kp@uts.amdahl.com (Ken Presting) (03/07/90)
In article <90Mar3.152728est.6160@neat.cs.toronto.edu> radford@ai.toronto.edu (Radford Neal) writes: >> . . . Of course, computers are entirely >>adequate to implement deductive systems, by the Church-Turing thesis. > >I think you're suffering from a lack of imagination here. . . . Them's fightin' words. >Let's say that one day Penrose announces that he is able to solve, say, >the word problem for semi-groups - a well-known non-computable problem. >People give him instances of this problem. After a period of time >that goes up only reasonably with the size of the instance he announces >the answer: YES or NO. In those cases where the true answer is >subsequently determined, he always turns out to be right. This holds >even for very difficult cases that require increasingly subtle >arguments to establish that the answer is NO, as well as cases where >extremely complex reductions are needed to demonstrate that the answer >is YES. If you drop the restriction that the computing device (wet or dry) be constructed on purpose, the trap door swings both ways. My feeble imagination can picture a computer pre-programmed with any number of Y's & N's, (of course I can't imagine how) just waiting to pop out when a word problem gets typed in. <grunt> Urrrnnnh. (Ouch!) Seriously straining my simplistic cerebrum, I can even imagine sooo many little y's and n's mashed into memory that the only word problems the machine couldn't "solve" are sooo long that nobody could type them in in a lifetime. (Picture all the little y's and n's packed into a Penrose tiling of main storage. Don't ask me how they got that way.) And that's the best I can do. I put in the original argument that the period for testing the simulation output would have to be finite. That was on purpose. I tell you, that argument has more spin than a whole flock of Fermions. > . . . 
>You could construct
>a similar scenario in which the word problem is solved by some physical
>computer (obviously not Turing equivalent), rather than by a human being.

Funny you should mention that:

>From: kp@uts.amdahl.com (Ken Presting)
>Summary: Peano's Piano - major uncomputability in a physical process
>Message-ID: <80tJ02UA8cX201@amdahl.uts.amdahl.com>
>Date: 27 Feb 90 21:34:11 GMT
>
>Suppose that one day we find an old piano. On the keys are digits,
>arithmetic operations + - * / **, parentheses, quantifiers, and "x".
>The piano makes no sound for any individual keystroke, but we observe
>that if we play "1+1=2", then close the cover, it sounds a major chord,
>while if we play "1+1=3" it sounds a minor chord. Playing for a while,
>we can soon convince ourselves of the proper notation (it makes no sound
>if we play "1+1=+", plays a major chord for "(x1) 2*x1=x1+x1", etc).
>
>Consider the hypothesis that this piano will sound a major chord for
>every true sentence, and a minor chord for every false one.
>
>Clearly, we could never build a machine on purpose, guaranteed to
>play correctly - that would contradict Goedel's theorem.
>
>For similar reasons, no matter how long we observe it,
>or how carefully we examine its inner workings, we could never prove
>to ourselves that it is reliable. If we could figure out how & why it
>worked, we would have an algorithm for deciding truth, another
>contradiction.
>
>Most important, such a device could absolutely never be simulated by a
>computer.
>
>If anybody can think of a physical or logical reason why such a device
>*cannot* exist, I'd be very interested. I think that Penrose' arguments
>may reduce to the assertion that such a device is not disallowed by the
>laws of physics.

(Note: My construction explicitly assumes that the device is not bounded in the range of inputs for which it is successful.)

Of course, I wrote this before I strained my imagination on Penrose tiles.
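It is worth noticing where the piano stops being magical. For *quantifier-free* sentences, truth of a closed arithmetic identity like "1+1=2" is decidable by simple evaluation, so an (anachronistic) program can sound the right chord for those; it is exactly the unbounded quantifiers, as in "(x1) 2*x1=x1+x1", that put the full hypothesis beyond any algorithm, by Goedel's theorem. A hypothetical sketch - the function name and chord labels are mine, and it makes no attempt at quantified sentences:

```python
# Decide closed, quantifier-free arithmetic equations by evaluating
# both sides. Quantified sentences are simply out of scope here.

def chord(sentence: str) -> str:
    """Sound a 'major' chord for a true equation, 'minor' for a false one."""
    left, right = sentence.split("=")
    env = {"__builtins__": {}}  # pure arithmetic only: no names, no builtins
    return "major" if eval(left, env) == eval(right, env) else "minor"

print(chord("1+1=2"))     # major
print(chord("1+1=3"))     # minor
print(chord("2**3=4+4"))  # major
```

The easy half of the piano's behavior is thus mechanical; the philosophical weight rests entirely on the quantified cases.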
It was also before that unfortunate accident that I stumbled on a partial answer. Ted Dunning's comment on machines that can store a real number got me thinking. Sure, a real number can encode an infinite sequence in its decimal expansion: instant Entscheidungsantwort. But that is not just an infinite amount of information, it's a *non-denumerable* amount of information.

Now, a real physical device is made of finitely many atoms, each with a *denumerable* infinity of orbitals. That makes only denumerably many states, and therefore a denumerable amount of information in any one state. So, if you want to encode the answer to the EP, you'll have to do so in some other set of states than the electron orbitals, the nucleon orbitals, molecular vibration modes, et al. (Assuming, of course, that all particles in the machine are in bound states. Tunneling would lose information when it occurs, so I'll ignore it.)

Now, "who ya gonna call?" It's hard enough to read anything out of the outer orbital states. Since the difference in energy between levels decreases without limit, the time required to measure the energy level of an excited electron (say) with sufficient accuracy to distinguish between adjacent levels will increase without limit. (Assuming that all those excited 'trons stay put long enough to make the machine useful. Most likely, you'd get one big white flash. Bigger, if you used excited nuclear states.)

What's left? Store data in hidden variables? Call Ghostbusters - there's a ghost in that machine...

******

Radford, I hope you found this article both informative and amusing. I do not feel that I have the luxury of offending anyone who is willing to take the time to read and respond to my ideas. But I do enjoy exercising my imagination. :-)

Ken Presting

(Wait; what's that still small voice? What's that? "Dyyynaamiic Prooocessseeeesss... Dddyyynnnaammiicc Pprroocceesseess..."
All right, all right, I haven't considered the possibility of storing information in dynamic processes. Wait for my imagination to recover, and I'll post again.)
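The counting step in the article above can be made explicit. A state of finitely many atoms, each with denumerably many orbitals, is a finite tuple of natural numbers, and Cantor's pairing function gives a bijection between N x N and N; iterating it codes any such tuple by a single natural, so the set of states stays denumerable. A sketch with illustrative function names of my own choosing:

```python
# Cantor's pairing function: a bijection N x N -> N, the standard
# witness that adding a countable coordinate never escapes countability.

def cantor_pair(a: int, b: int) -> int:
    """Code the pair (a, b) as a single natural number."""
    return (a + b) * (a + b + 1) // 2 + b

def cantor_unpair(z: int):
    """Recover (a, b) from its code, by locating the diagonal w."""
    w = 0
    while (w + 1) * (w + 2) // 2 <= z:
        w += 1
    b = z - w * (w + 1) // 2
    return (w - b, b)

# every pair gets a unique code, and the code decodes back
for a in range(20):
    for b in range(20):
        assert cantor_unpair(cantor_pair(a, b)) == (a, b)

# a three-atom "state" (2, 5, 7) coded as one natural, by nesting
print(cantor_pair(2, cantor_pair(5, 7)))
```

A non-denumerable amount of information, such as an arbitrary real's full decimal expansion, has nowhere to live among these codes - which is the point of the orbital argument.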
kp@uts.amdahl.com (Ken Presting) (03/07/90)
In article <12240@venera.isi.edu> smoliar@vaxa.isi.edu.UUCP (Stephen Smoliar) writes:

>In article <90Mar3.152728est.6160@neat.cs.toronto.edu> radford@ai.toronto.edu
>(Radford Neal) writes:
>>
>>Let's say that one day Penrose announces that he is able to solve, say,
>>the word problem for semi-groups - a well-known non-computable problem.
>>
>. . . I realize that, to some extent, these issues conflict
>with a personal tendency to try to abstract the world into deductive
>operations; . . .

When I built the argument on which Radford is commenting, I was *very* careful to apply deductive abstractions *only* to logical or epistemological issues. Kuhn, Feyerabend, Kitcher, and many others have made it very clear that there can be no general method for defining the appropriate formal abstraction in which to describe natural phenomena. This point holds for abstraction in engineering as well as in science, I believe. However, deductive reasoning still has an indispensable role in all argumentation, including scientists' arguments with each other over theories. I would caution against any application of the concept of deduction outside the context of an analysis of arguments.

>Let me illustrate the difficulty by constructing what I hope is a valid
>analogy: Penrose announces that he can solve the general medical diagnosis
>problem. For any case from the medical literature we give him, he responds
>with the correct answer. If you were running a hospital, would you want him
>going on rounds through your wards?

A small point: Diagnosing a case from a textbook description is different from diagnosing a case from clinical observation. Interpreting X-rays is very tricky, for example. Clinical skill is not implied by inferential skill. This issue is not central to the rest of your example, but I thought it was worth mentioning.
Assuming that Penrose can magically (or whatever) diagnose from clinical observations, he would constitute a case of a measuring device without a theoretical explanation. A "mysterious correlation".

>The point I am trying to get at here is that there is more to reasoning than
>getting the right answer. If Penrose says (using whatever reasoning powers
>"work" for him), "This patient has a malignant brain tumor which will be fatal
>if not corrected by surgery within one hour," would YOU rush the patient into
>the operating room WITHOUT ASKING ANY QUESTIONS? Whether you are an
>administrator or a fellow physician, chances are, you would want some
>kind of JUSTIFICATION for Penrose's decision before taking any hasty
>action; and if, when confronted with the question of justification,
>Penrose were to respond, "I just know it," you would be in quite a quandary.
>(Believe me, I have no idea how I would respond in such a situation,
>particularly if I knew that Penrose's track record had been flawless
>prior to this incident.)

This example brings up an issue which is very important for AI, but is well outside anything Penrose addresses in his book. When a belief is to be used as the basis for a practical decision, one's confidence in the belief is only one factor. The risks entailed by taking action must also be considered, as well as the cost of the action, missed opportunities, etc. The rationality of an action is by no means determined solely by the rationality of the beliefs which direct it.

It is interesting to notice the variation in the points of view of the several actors in Stephen's story. A consulting physician could rationally refuse to take any position at all (moral considerations aside) if his primary concern were his reputation. The administrator must consider the hospital's liability, which will be decided at least partly on the grounds of the hospital's scientific efforts to establish Penrose' reliability. The patient is in quite a pickle.
Considerations of epistemic purity are likely to have little influence on his decision.

Penrose, Neal, and I have been discussing a case in which epistemic purity alone is relevant. This is appropriate, since we are considering a (fairly) well-defined hypothesis - can a robot controlled by a computer program exhibit behavior indistinguishable from natural human behavior? The parameters of the discussion are also (fairly) clear - the computer is to be allowed any finite speed, storage, and number of processors, and is otherwise constrained only by computability theory. The human's behavior is constrained only by the laws of physics.

As I understand Radford's objection, he is insisting that any claim that computers are capable of modeling all real processes must address the issues of (a) other laws of physics and (b) phenomena which are describable but not explainable. Since I constructed my argument in terms of making measurements, and science proceeds by constructing abstract deductive theories, tested by measurement and observation, I think the argument stands. Deduction is by no means the whole of science, but it cannot be abandoned by scientists. My argument (before the Peano Piano) is not based on the limitations of the processes to be modeled, but on the limitations of the observers' ability to make persuasive objections to the model.

>I think the point we have to confront here is that CONVINCING reasoning lies at
>neither extreme. If we are talking about "the process by which real
>mathematicians [or any other thinkers, for that matter] establish new
>truths," then we have no reason to believe that deductive systems are
>the only mechanisms which may kick in.

Stephen, with this point you have returned to the epistemic issue of discovering new truths. Your example, which involved a practical decision, does not directly bear on the epistemic issue. That said, I agree with this point entirely.
(I am amazed at the difficulty I am having in finding an area of disagreement with you, given the volume of objections and counter-objections. I suspect that I am not making clear how narrow the scope of my generalizations is, which leads you to cite the broad range of phenomena which you (rightly) believe to be relevant to AI as a whole.)

Let me emphasize again how important the issue of practical reason is. Although practical decision-making can be overlooked in *some* theoretical contexts, in the daily behavior of human beings, it is absolutely impossible to separate epistemic from practical issues. I cannot recommend the work of Donald Davidson too highly on this issue. His essay "Belief and the Basis of Meaning", in _Inquiries into Truth and Interpretation_, is irreplaceable. He concludes that we cannot ascribe beliefs to an agent without also ascribing desires, and rational pursuit of those desires.

> Instead, as Marvin Minsky pointed
>out in THE SOCIETY OF MIND, such systems for formal reasoning are better
>qualified to SUMMARIZE and JUSTIFY those "new truths," once they have been
>encountered.

Could you give me a page or chapter reference on this?

> The question then reduces to whether or not there are MECHANISMS,
>deductive or otherwise, for establishing (or even hypothesizing) them. AI
>argues that such mechanisms do, indeed, exist;

It is certainly the case that no *effective* finitary mechanisms exist for establishing new truths in general, although partially effective mechanisms might. Do you have Edelman in mind here? Natural selection is a fine example of a partially effective finitary mechanism (heredity at least is finitary, though the selection process can involve non-finitary interactions with the environment). Whether selection can be totally effective is a very interesting issue. That may not even be necessary. I mean by "effective" a property of processes analogous to recursiveness in functions.
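The "partially effective finitary mechanism" of selection can be caricatured in a few lines: finite genomes, a finitary copy-with-one-flip heredity step, and a selection step with no guarantee of reaching a global optimum. Everything here - the names, the bit-string genomes, and especially the fitness function standing in for the environment - is my own illustrative invention, not a model of anything Edelman proposes:

```python
# Toy mutation-selection loop. Heredity is finitary (copy a finite
# string, flip one bit); selection just keeps the fitter half.
import random

random.seed(0)  # reproducible run

def fitness(genome):
    """Count of 1-bits: a stand-in 'environment', nothing more."""
    return sum(genome)

def evolve(pop_size=8, length=16, generations=50):
    pop = [[random.randint(0, 1) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]      # selection
        children = []
        for parent in survivors:
            child = parent[:]                 # finitary heredity
            child[random.randrange(length)] ^= 1  # one mutation
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))
```

Because the best survivor is always retained, fitness never decreases; but nothing in the mechanism certifies that it will establish the "truth" (the all-ones genome) - which is the sense in which it is only partially effective.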
Before a semantic notion such as "truth", or a logical notion such as recursiveness (or finiteness, for that matter) can be applied to a process or its results, the process must be interpreted as being at least linguistic. "Truth", "effectiveness", et al are examples of normative concepts.

> but, because we have given
>so much of our attention to deductive systems, we are probably still a far
>cry from a better understanding of their nature.

At the risk of tedium (who said "that's a certainty"? :-) let me emphasize again that deduction must be applied only in models of argumentation. In the case of determinist metaphysics, deduction is *not* applied to the relation between objects, states, processes, or any other real thing, but only to assertions about those things. I can't speak for anyone else's use of "deduction", but (excepting mistakes) my use of "deduction" (and all logical terms) is restricted to the case of assertions.

Ken Presting