kp@uts.amdahl.com (Ken Presting) (02/24/90)
In article <90Feb15.231415est.6212@neat.cs.toronto.edu> radford@ai.toronto.edu (Radford Neal) writes:
>In article <e1oq020b88jL01@amdahl.uts.amdahl.com> kp@amdahl.uts.amdahl.com (Ken Presting) writes:
>>It is not correct that calculations or algorithms are less powerful
>>than other mathematical notations (such as the differential equations
>>used to express the laws of physics), as Penrose suggests.  Any
>>mathematical description of a physical object (such as the brain) may
>>be directly translated into a computer program, with no loss of
>>information.  This claim is a consequence of mathematical theorems which
>>are certainly familiar to Penrose.
>
>I think you have missed Penrose's point, here.  I have only just
>started reading _The Emperor's New Mind_, but from what I have read,
>it is clear that Penrose believes that the (true) laws of physics are
>_not_ computable, and that neurons somehow make use of these laws
>to perform non-computable operations.  Penrose is also quite aware that
>the _currently accepted_ laws of physics _are_ computable, so pointing
>this out is not an argument against his views.

Maybe I need to extend this paragraph.  If some hypothesis is proposed,
then to test the hypothesis it is necessary to draw conclusions from it
(combined with assertions regarding initial conditions, state functions,
etc).  The hypothesis is disconfirmed if the conclusions mismatch
observation.  Confirmation is harder to define, but we can say that the
hypothesis is successfully tested when conclusions based on it match
observation.

As Ray Allis has observed in this group, a computer simulation operates
by drawing conclusions from the description of a system.  Now, any
conclusion which might be drawn by a physicist can also be drawn by a
computer (by enumeration, if the theory is stated in a complete logic).
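[The "drawing conclusions by enumeration" idea can be sketched in a few lines.  This is an editorial toy, not anything from the thread: a Horn-clause forward chainer standing in for "enumeration of consequences", with an invented mini-theory of a falling body as the axioms.]

```python
# Toy sketch of drawing conclusions mechanically from a finitely stated
# theory.  The rules and facts below are invented for illustration; a real
# physical theory would of course be far larger, but the enumeration idea
# is the same: fire every applicable rule until nothing new appears.

def consequences(facts, rules):
    """Return everything derivable from `facts` under `rules`.

    `rules` is a list of (premises, conclusion) pairs; a rule fires once
    all of its premises have been derived.  Iterates to a fixed point.
    """
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in derived and all(p in derived for p in premises):
                derived.add(conclusion)
                changed = True
    return derived

# Hypothetical mini-theory of a falling body:
rules = [
    ({"released", "near_earth"}, "accelerates_down"),
    ({"accelerates_down", "no_air_resistance"}, "v = g*t"),
]
facts = {"released", "near_earth", "no_air_resistance"}

print(sorted(consequences(facts, rules)))
```

For a theory in a complete logic the same fixed-point loop, run breadth-first over all sentences, would eventually emit every consequence a physicist could state.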
Therefore, if a physical hypothesis has been successfully tested for a
given system, the physicists have *nothing* to say about that system
which the programmers can't match.

So my point is not that a computer program can always update its
variables to match the trajectory of any physical system through its
phase space.  Last month I argued to the contrary.  My point is that
whatever claims the physicists care to make about the trajectory must be
stated in finite notations, and are therefore susceptible to automated
enumeration of consequences.  A related point holds for measurements of
initial conditions - sure, there may be an infinite amount of info in a
real analog system, but when you measure it, you get a finite amount of
info.

There is a problem with getting the enumeration to run in real time.
The elapsed time for a physical system to traverse a trajectory is not
related to the length of the enumeration of assertions *about* the
trajectory.  If the trajectory is chaotic, then there is no upper bound
on the number of significant digits needed to represent the trajectory,
and no upper bound on the length of the assertions in the enumeration.
So if brain processes are chaotic, we can't use our deductive simulator
to control a robot in a real time duplication of some human's behavior.

But the finiteness of measurements and the need to test physical
hypotheses is significant here also.  Suppose we truncate our
significant digits (ouch), and run our simulation in real time.  Then we
try to compare the behavior of the robot to the behavior of a real
person.  They won't match, but there's no way to tell if the mismatch is
due to unmeasurable discrepancies in the initial conditions, or to
inaccurate simulation of the trajectory.  Of course, if we let slip to
the testers that the robot is computer-controlled, they'll know for sure
that there are inaccuracies in the simulation.
But they could never figure that out by external observation -
experimental runs on chaotic systems are inherently unrepeatable.  The
best they could do is run many tests, looking for statistical deviations
between the real person and the robot.  Eventually, they'll find some,
but we can always add a significant digit and force them to make the
testing process take longer than a lifetime.

(To get the simulation to run in real time at all requires that a more
efficient algorithm than enumeration of assertions be used.  I'd rather
stick with enumeration and assume that a non-deterministic TM does the
enumeration, just for elegance.  Associative memory will give us real
machines with non-deterministic speed, for finite problems.)

It is especially interesting that the designers (and testers) of an AI
constructed out of analog devices are protected (or hampered) by similar
phenomena.  The specifications for the analog hardware must be finitely
stated, the accuracy of the construction can be verified only down to a
finite tolerance, the components' behavior can be predicted with only
finite accuracy, and the initial state can be measured with limited
precision.  So if there is something very very small, but very very
important about being born (for example), and we don't notice it, and
therefore we can't give the same property to something that we build,
then it won't help us at all to have a non-symbolic AI.  If nobody
noticed and described "that certain something", then no way can we build
it into anything.  But if we can build it, we can also program it.

To sum up, the hypothetical uncomputability of any hypothetical laws of
physics is just irrelevant.  We'll crunch the same symbols the
*physicists* crunch.  Who cares what the *particles* compute?

I feel a little sheepish making this argument, after having fanatically
argued against the "if all else fails, we can simulate the brain" view
so recently.
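[The "add a significant digit" point above can be made quantitative with a standard back-of-envelope power calculation.  The numbers are editorial, not Presting's: detecting a mean shift of size delta at fixed confidence takes on the order of (z*sigma/delta)^2 trials, so each extra digit of simulation accuracy multiplies the testers' workload by roughly 100.]

```python
# Rough sketch: trials needed to detect a statistical deviation of size
# `delta` (in units of the data's standard deviation `sigma`) at a z-sigma
# confidence level.  The 1/delta^2 scaling is the whole point: shrink the
# detectable mismatch tenfold and the testing takes ~100x longer.

def trials_needed(delta, sigma=1.0, z=3.0):
    """Approximate sample size for a z-sigma detection of a mean shift
    `delta`: n ~ (z * sigma / delta)**2."""
    return (z * sigma / delta) ** 2

for digits in range(1, 5):
    delta = 10.0 ** (-digits)        # one more significant digit each round
    print(digits, int(trials_needed(delta)))
```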
The key idea here - the finiteness of scientific method - had not
occurred to me before I got into Penrose's book myself.  Still, I think
my position here is consistent with my former arguments.  I see no way
to be sure that any given simulation will capture *everything* about an
arbitrary physical process.  I also see no way to be sure that the
inevitable inaccuracies in a simulation will not leave out something
important.  I agree with Penrose that a computer simulation necessarily
diverges from the system it describes.  I deny that the difference is
necessarily significant.  I guess I'll leave the paragraph as it was.

>Unlike the Chinese Room, Penrose's argument makes perfect sense to me.
>I don't believe it (as yet), but that's just because I think the
>possibility that the real laws of physics are non-computable is remote,
>and the possibility that this would make any difference to the operation
>of neurons even if it was true is even more remote.

They both make sense to me.  Searle's are more useful, I think.

>But who knows? Maybe he's right. The only way to refute him is to
>succeed in creating an artificial intelligence. I certainly hope none
>of the proponents of "strong AI" think they've proved _their_ position
>in the absence of such a demonstration.
>
>    Radford Neal

Thank you for your comments.

Ken Presting
dmocsny@uceng.UC.EDU (daniel mocsny) (02/25/90)
In article <2al902Zg8bnn01@amdahl.uts.amdahl.com> kp@amdahl.uts.amdahl.com (Ken Presting) writes:
>The specifications for the analog hardware must be finitely
>stated, the accuracy of the construction can be verified only down to
>a finite tolerance, the components' behavior can be predicted with only
>finite accuracy, and the initial state can be measured with limited
>precision.  So if there is something very very small, but very very
>important about being born (for example), and we don't notice it, and
>therefore we can't give the same property to something that we build,
>then it won't help us at all to have a non-symbolic AI.  If nobody
>noticed and described "that certain something", then no way can we build
>it into anything.  But if we can build it, we can also program it.

I disagree.  The analog builders might "get lucky," and be able to
reproduce their success even though they can't explicate any
programmable underlying mechanism.  Remember, *loads* of fine
engineering with real-world machines has preceded solid theoretical
models of what is going on.  (The compass, the optical telescope,
gunpowder, rockets, selective breeding, public sanitation, civil
engineering in the ancient world, etc., in short, almost everything
discovered before the Second World War.)

You don't have to have any idea of how to simulate combustion before you
can invent gunpowder and conquer nations.  All you have to know is how
to find white crystals in certain caves, a yellow funny-smelling
substance in certain rocks, how to roast logs down to charcoal, and a
repeatable formula for mixing these things together.  Your recipe for
gunpowder is a reproducible, logical description of the process, but it
is nowhere near complete enough *by* *itself* to allow a computer to
reproduce much of the sensory information an observer would obtain from
watching you mix up a big batch of gunpowder and set a match to it.
The real world contains many useful phenomena that give us an enormous
head start in the game of artificially creating complex sensory
experiences.  Trying to recreate the same sensory experiences via
logical computation and general-purpose actuators is vastly harder,
which is why our efforts to virtualize greatly lag our success at
actualizing.  We may consider the real world to be like one massive
ongoing computation, with many hooks for us to get in and call
subroutines we don't comprehend beyond their interfaces.

Even in this day of computing machines and extensive theories,
experiment still leads theory at least as often as otherwise.  Consider
the recent example of high-temperature superconductors.  A viable
industry may emerge without much of a theoretical model.  And it will
have ample precedent.  In my field (chemical engineering) everything is
curve fits and correlations.  (Thumb through the _Chemical Engineer's
Handbook_ the next time you get bored with determinism.  The extent to
which our economy depends on practically uncomputable phenomena is
rather appalling.)

I concede that the likely complexity of an intelligent machine greatly
lowers the probability that a dirty-handed empiricist could build one by
accident.  But I don't think empiricism and theory have to be so nicely
interchangeable as you imply, especially in the short run.  In the long
run, who knows?  Can any phenomenon be so truly uncomputable that no
logical process could behave equivalently (if not exactly)?  The
presence of such phenomena would seem to imply a universe where theory
is of no value at all.  I hope we all agree that's not true... :-)

Dan Mocsny
dmocsny@uceng.uc.edu
ted@nmsu.edu (Ted Dunning) (02/27/90)
In article <3750@uceng.UC.EDU> dmocsny@uceng.UC.EDU (daniel mocsny) writes:
   Can any phenomenon be so truly uncomputable that no logical process
   could behave equivalently (if not exactly)?
yes.
   The presence of such phenomena would seem to imply a universe where
   theory is of no value at all.
no.
virtually all chaotic dynamical systems have the characteristic of a
computational horizon beyond which any particular computer cannot keep
up with the physical system in doing the simulation.
the reason for this is that sensitive dependence on initial conditions
requires that the arithmetic that needs to be done gets harder and
harder to do fast enough to keep up with real time. before too long,
you have a system which requires a computer larger than the entire
universe to predict.
all of this assumes that real numbers have some relevance to the real
world, which would be pretty hard to verify.
--
Offer void except where prohibited by law.
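[Ted's "computational horizon" has a minimal textbook illustration (editorial choice, not Ted's example): the doubling map x -> 2x mod 1, which shifts the binary expansion left one bit per step.  Predicting k coarse observations therefore consumes exactly k bits of the initial condition, so any finite measurement, however precise, runs out after finitely many steps.]

```python
# Sensitive dependence made exact: the doubling map on B-bit binary
# fractions.  Two seeds that agree in their first B-1 bits produce
# identical coarse observations for B-1 steps, then part company when
# the one uncertain bit reaches the front.

B = 40                                   # bits of "measurement precision"
x1 = 0b1011_0110_0101_1101_0010_1110_1010_0011_0101_1010  # a 40-bit seed
x2 = x1 ^ 1                              # same seed, last bit uncertain

def itinerary(x, steps, bits=B):
    """Coarse observations (left/right half of [0,1)) of x -> 2x mod 1."""
    symbols = []
    for _ in range(steps):
        symbols.append(x >> (bits - 1))  # leading bit = which half we're in
        x = (2 * x) % (1 << bits)        # doubling map on B-bit fractions
    return symbols

s1, s2 = itinerary(x1, B), itinerary(x2, B)
horizon = next(k for k in range(B) if s1[k] != s2[k])
print(horizon)   # prints 39: the hidden 40th bit surfaces at step B-1
```

A map with Lyapunov exponent ln 2 per step is the cleanest case; generic chaotic systems consume initial-condition bits at a rate set by their own exponents, which is why the arithmetic "gets harder and harder", as Ted says.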
kp@uts.amdahl.com (Ken Presting) (02/28/90)
In article <3750@uceng.UC.EDU> dmocsny@uceng.UC.EDU (daniel mocsny) writes:
>In article <2al902Zg8bnn01@amdahl.uts.amdahl.com> kp@amdahl.uts.amdahl.com (Ken Presting) writes:
>> . . . If nobody
>>noticed and described "that certain something", then no way can we build
>>it into anything.  But if we can build it, we can also program it.
>
>I disagree.  The analog builders might "get lucky," and be able to
>reproduce their success even though they can't explicate any
>programmable underlying mechanism.

Good point, I should have explicitly qualified "build" with
"deliberately".  But even this qualification won't, by itself, rescue my
point altogether -

>You don't have to have any idea of how to simulate combustion before
>you can invent gunpowder and conquer nations. . . .

I can think of an even more damaging example.  Natural selection has
"built" an intelligent organism, by trial and error, and certainly did
not start out with any understanding of the phenomenon it was destined
to produce.

> . . . {A} recipe for
>gunpowder is a reproducible, logical description of the process, but
>it is nowhere near complete enough *by* *itself* to allow a computer
>to reproduce much of the sensory information an observer would obtain
>from watching you mix up a big batch of gunpowder and set a match to
>it.
>
>The real world contains many useful phenomena that give us an enormous
>head start in the game of artificially creating complex sensory
>experiences.  Trying to recreate the same sensory experiences via
>logical computation and general-purpose actuators is vastly harder,

This comment strikes at the heart of my mistake.  A general-purpose
computer is easy enough to imagine, and not so hard to make.  But now
that I think about it, I am convinced that a general-purpose *actuator*
is clearly impossible.
The picture I drew of a deductive simulator controlling a robot would
require an actuator that could take any assertion about the state of a
human body and *poof* cause it to be the case that some congregation of
atoms has that very state.  Short of Star Trek's transporter, the idea
is absurd.

So let me backpedal a bit.  I need to start with a special purpose
transducer which can drive a dedicated actuator.  I can recite a
standard story here, for example a two-way radio inside the skull of a
suitably decerebrated (ouch) human body, which is the "actuator".  The
transducers might be electrodes (or better, electrically controlled
neurotransmitter dispensers) connected to the remains of a spinal
column, with corresponding neurotransmitter sensors connected to
efferent neurons from the retina, skin, etc.

Endocrine "signals" from the rest of the body are also important in
brain function, and we would like our robot also to respond
appropriately to various psycho-active drugs.  Fortunately, current
technology provides us with a suggestive example of general-purpose
chemical sensors in the form of gas chromatographs, PET and MRI
scanners, etc, so it is perhaps conceivable that future development (and
especially miniaturization) will result in devices capable of encoding
the various chemical events to which the brain is sensitive.

Of course, since I have described a body which is physically altered, it
is trivial to identify observable differences between the robot and a
normal human.  Just X-ray its head.  So my tidy picture of arbitrarily
accurate calculations subverting increasingly careful or protracted
measurements needs an indefinite number of ad-hoc exceptions.  If we
want our robot to make a reasonable showing in (say) a Total Turing
Test, we'd have to make it very shy of radiologists, and who knows what
else.
To rescue my final conclusion, that any detectable deviations between
the model and a real human are eliminable, I must add the assumption
that sensor and actuator technology will progress without limit.  This
is much more problematic than assuming that the deductive simulation can
be made arbitrarily precise.

I consider the failure of this story to be a reductio of all proposals
to gauge the success of AI in terms of indistinguishability from humans.
The problem is not that there is some threshold of accuracy that
androids can never reach.  Rather, the problem is that sensor and
actuator technology will necessarily progress at a finite pace, always
leaving some detectable difference between the robot and the real
thing.  As long as we refrain from analyzing intelligence, we will be
hard pressed to find principled grounds on which to ignore objections
that the detectable differences show the failure of our implementation.

In a future post, I will discuss Dan's objections to my hasty disposal
of empiricism.  The main idea is that measurement enforces
theory-dependence.  In the meantime, my thanks to Dan for a stimulating
objection.
kp@uts.amdahl.com (Ken Presting) (02/28/90)
In article <TED.90Feb26144832@kythera.nmsu.edu> ted@nmsu.edu (Ted Dunning) writes:
>In article <3750@uceng.UC.EDU> dmocsny@uceng.UC.EDU (daniel mocsny) writes:
>
> Can any phenomenon be so truly uncomputable that no logical process
> could behave equivalently (if not exactly)?
>
>yes.
>
>virtually all chaotic dynamical systems have the characteristic of a
>computational horizon beyond which any particular computer cannot keep
>up with the physical system in doing the simulation.
>
>the reason for this is that sensitive dependence on initial conditions
>requires that the arithmetic that needs to be done gets harder and
>harder to do fast enough to keep up with real time.  before too long,
>you have a system which requires a computer larger than the entire
>universe to predict.

This is not uncomputable enough.

Suppose that one day we find an old piano.  On the keys are digits,
arithmetic operations + - * / **, parentheses, quantifiers, and "x".
The piano makes no sound for any individual keystroke, but we observe
that if we play "1+1=2", then close the cover, it sounds a major chord,
while if we play "1+1=3" it sounds a minor chord.  Playing for a while,
we can soon convince ourselves of the proper notation (it makes no sound
if we play "1+1=+", plays a major chord for "(x1) 2*x1=x1+x1", etc).

Consider the hypothesis that this piano will sound a major chord for
every true sentence, and a minor chord for every false one.  Clearly, we
could never build a machine on purpose, guaranteed to play correctly -
that would contradict Goedel's theorem.  For similar reasons, no matter
how long we observe it, or how carefully we examine its inner workings,
we could never prove to ourselves that it is reliable.  If we could
figure out how & why it worked, we would have an algorithm for deciding
truth, another contradiction.  Most important, such a device could
absolutely never be simulated by a computer.
If anybody can think of a physical or logical reason why such a device
*cannot* exist, I'd be very interested.  I think that Penrose's
arguments may reduce to the assertion that such a device is not
disallowed by the laws of physics.
dmocsny@uceng.UC.EDU (daniel mocsny) (02/28/90)
In article <TED.90Feb26144832@kythera.nmsu.edu> ted@nmsu.edu (Ted Dunning) writes:
>
>In article <3750@uceng.UC.EDU> dmocsny@uceng.UC.EDU (daniel mocsny) writes:
>
> Can any phenomenon be so truly uncomputable that no logical process
> could behave equivalently (if not exactly)?
>
>yes.

I should expand on my use of "equivalently."  While I agree that some
chaotic systems are probably hopeless to simulate with arbitrary
precision, certainly that does not prevent an artificial system from
being equivalently "interesting."  I.e., the artificial system may not
exactly reproduce all the behavior of the chaotic system, but it may
satisfy some of the same performance measures, it may exhibit behavior
that appears to be similarly complex, or it may spend roughly the same
amount of time in the same attractor basins.

What has this to do with AI?  Perhaps much.  Billions of intelligent
individuals exist today, and NOT ONE of them can exactly duplicate the
behavior of ANY other one.  For that matter, none of us can exactly
duplicate our recent behavior.  I interpret this to mean that
intelligence is not a point, but a space, and arbitrarily many
intelligences may exist.  Thus a simulated brain could perhaps deviate
substantially from the behavior of the real brain, and still appear to
behave quite intelligently.  There is no one answer to the problem of
intelligence, and we probably haven't exhausted the possibilities yet.

>virtually all chaotic dynamical systems have the characteristic of a
>computational horizon beyond which any particular computer cannot keep
>up with the physical system in doing the simulation.
>
>the reason for this is that sensitive dependence on initial conditions
>requires that the arithmetic that needs to be done gets harder and
>harder to do fast enough to keep up with real time.  before too long,
>you have a system which requires a computer larger than the entire
>universe to predict.
If we regard the system of (chaotic system + sensor readout) as a
computer giving the state of the system at time t, the question is:
"From where does the chaotic system derive its great computing power?"
For the chaotic system fits quite nicely into the universe, and yet it
performs the equivalent of a seemingly uncountable number of elementary
operations.

>all of this assumes that real numbers have some relevance to the real
>world, which would be pretty hard to verify.

Yes, particularly since no computer I know of always performs real
arithmetic (they usually have to stop the party at some finite
precision). :-)  Since I don't know of any tight way to associate real
numbers with the real world, I'm happy enough when a system that uses
"numbers" to make decisions gets an answer I can't distinguish from
another system that uses "atoms."

Dan Mocsny
dmocsny@uceng.uc.edu
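[Dan's "equivalent but not identical" point can be seen concretely in a standard chaotic example (editorial choice, not Dan's): two nearby seeds of the r=4 logistic map separate completely, yet their long-run statistics agree, since both orbits sample the same invariant measure, whose mean is 1/2.]

```python
# Two trajectories of the logistic map x -> 4x(1-x) from nearly equal
# seeds.  Pointwise they diverge to O(1) separation, but their time
# averages agree: equivalently "interesting", though not identical.

def orbit(x, steps):
    xs = []
    for _ in range(steps):
        x = 4.0 * x * (1.0 - x)
        xs.append(x)
    return xs

a = orbit(0.3, 100_000)
b = orbit(0.3 + 1e-9, 100_000)           # perturb the seed in the 9th digit

max_gap = max(abs(p - q) for p, q in zip(a, b))
mean_a = sum(a) / len(a)
mean_b = sum(b) / len(b)
print(max_gap, mean_a, mean_b)           # gap is O(1); both means near 0.5
```

The same contrast shows up in any statistic of the invariant measure (time in a given attractor region, apparent complexity), which is why a simulation can miss the exact trajectory and still match every performance measure of interest.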
ted@nmsu.edu (Ted Dunning) (03/01/90)
In article <3806@uceng.UC.EDU> dmocsny@uceng.UC.EDU (daniel mocsny) writes:

   > Can any phenomenon be so truly uncomputable that no logical process
   > could behave equivalently (if not exactly)?
   >
   >yes.

   I should expand on my use of "equivalently."  While I agree that some
   chaotic systems are probably hopeless to simulate with arbitrary
   precision, certainly that does not prevent an artificial system from
   being equivalently "interesting."  I.e., the artificial system may not
   exactly reproduce all the behavior of the chaotic system, but it may
   satisfy some of the same performance measures, it may exhibit behavior
   that appears to be similarly complex, or it may spend roughly the same
   amount of time in the same attractor basins.

indeed it has been shown that while a computed simulation of a chaotic
system will diverge from the `true' trajectory for a particular initial
condition, for many systems it will stay within a small neighborhood of
some trajectory of the system.  thus simulations can reasonably be used
to investigate the behavior of some chaotic systems.

   If we regard the system of (chaotic system + sensor readout) as a
   computer giving the state of the system at time t, the question is:
   "From where does the chaotic system derive its great computing power?"
   For the chaotic system fits quite nicely into the universe, and yet it
   performs the equivalent of a seemingly uncountable number of elementary
   operations.

for that matter, if we posit a RAM machine which can manipulate real
numbers it _can_ solve the halting problem.  real numbers have strange
consequences for the theory of algorithms.  not to mention for the
hardware types.

--
Offer void except where prohibited by law.
smoliar@vaxa.isi.edu (Stephen Smoliar) (03/01/90)
In article <3806@uceng.UC.EDU> dmocsny@uceng.UC.EDU (daniel mocsny) writes:
>
>I interpret this to mean that intelligence is not a point, but a
>space, and arbitrarily many intelligences may exist.  Thus a simulated
>brain could perhaps deviate substantially from the behavior of the
>real brain, and still appear to behave quite intelligently.  There is
>no one answer to the problem of intelligence, and we probably haven't
>exhausted the possibilities yet.
>

I basically sympathize with this position and wish to consider the
possibility that Dan has not gone far enough.  That is, why should the
mathematical concept of "space" be any better for trying to talk about
patterns of behavior than "point" (beyond its capacity to indicate that
we are not talking about a single thing)?  We have a tendency to assume
that every word can be associated with an OBJECT, but we may be
deliberately confusing ourselves if we try to assume that intelligent
behavior has the sort of object properties we associate with a loaf of
bread or a jug of wine.  I would argue that the temporal nature of
behavior makes it quite a different "thing" to talk about (to the extent
that WHAT HAPPENS over an interval of time--as opposed to the temporal
interval itself--really constitutes a "thing").

I suppose what I am trying to argue is that we tend to talk about
properties which emerge from the dynamics of a complex system AS IF THEY
WERE CONCRETE ARTIFACTS when we really have no justification for doing
so.  Rather than agonizing over the "nature of intelligence"--as if it
were a "thing" which had a "nature"--perhaps we should be reviewing our
vocabulary as it pertains to talking about ANY form of behavior.
=========================================================================
USPS:      Stephen Smoliar
           USC Information Sciences Institute
           4676 Admiralty Way  Suite 1001
           Marina del Rey, California  90292-6695
Internet:  smoliar@vaxa.isi.edu

"Only a schoolteacher innocent of how literature is made could have
written such a line."--Gore Vidal
mmt@dciem.dciem.dnd.ca (Martin Taylor) (03/01/90)
In article <3750@uceng.UC.EDU> dmocsny@uceng.UC.EDU (daniel mocsny) writes:
   Can any phenomenon be so truly uncomputable that no logical process
   could behave equivalently (if not exactly)?
yes.
   The presence of such phenomena would seem to imply a universe where
   theory is of no value at all.
no.
virtually all chaotic dynamical systems have the characteristic of a
computational horizon beyond which any particular computer cannot keep
up with the physical system in doing the simulation.
================
Sorry I don't have the commentator's name here. There was no signature at
the bottom and the header has scrolled off my finite screen. But...
The problem is not whether a process is computable, or if computable is
chaotic, but that there is no way that the *actual* behaviour of any
small part of the universe can be described by deterministic laws, even
though the entire universe may be. For the behaviour of a part of the
universe to be determined, that part would have to be isolated from the
rest of the universe, and thus be unobservable. If it were not isolated,
the part whose behaviour is supposedly described would be affected by
"surprising" events (events from other parts of the universe not subsumed
in the description of the boundary conditions and/or descriptive laws).
Often, these outside influences do not matter, because the observed system
is in a dynamic state near an attractor, or some similar insensitive
condition. But it might be near a repellor across which the unexpected
event pushes it, into a quite different basin of attraction. I don't
think it much matters about the system being chaotic, provided that the
chaos is expressed in the form of a strange attractor, because it is
likely that most of the trajectories in the attractor have behaviourally
similar consequences.
What this points out is that there is *NO* physical system that can
be guaranteed to compute according to any algorithm. If the algorithmic
nature of computation is what causes Penrose to require new physics,
he need not bother. All he needs is to note that brains and computers
are sub-parts of the universe, and therefore are non-deterministic.
--
Martin Taylor (mmt@zorac.dciem.dnd.ca ...!uunet!dciem!mmt) (416) 635-2048
"Viola, the man in the room
doesn't UNDERSTAND Chinese. Q.E.D." (R. Kohout)
kp@uts.amdahl.com (Ken Presting) (03/01/90)
In article <TED.90Feb28090039@kythera.nmsu.edu> ted@nmsu.edu (Ted Dunning) writes:
>for that matter, if we posit a RAM machine which can manipulate real
>numbers it _can_ solve the halting problem.

By starting with a real number in storage whose (binary) expansion below
the radix point is 1 for Goedel numbers of machine plus input state that
halt, 0 if not?

Is there a way to do this that does not depend on an infinite
table-lookup?
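[The construction Presting is gesturing at can be miniaturized into a runnable toy (editorial, not from the thread).  Here the "table" covers only four hand-labelled hypothetical programs; packing the bits into one rational and reading them back is trivial, which is exactly the point: on a real-RAM the whole difficulty hides in *acquiring* the infinitely many bits, i.e. the infinite table lookup.]

```python
from fractions import Fraction

# Halting behaviour of four hypothetical numbered programs, labelled by
# hand (program 1 loops forever, say).  A genuine oracle would need this
# table for *all* programs, which is not mechanically obtainable.
halts = [True, False, True, True]

# Pack the table into a single "real": halting bit of program i becomes
# bit i+1 of the binary expansion.  Here 0.1011 (binary) = 11/16.
oracle_real = sum(Fraction(int(h), 2 ** (i + 1)) for i, h in enumerate(halts))

def oracle(i, r=oracle_real):
    """Read bit i (0-based) of r's binary expansion: does program i halt?"""
    return int(r * 2 ** (i + 1)) % 2 == 1

print([oracle(i) for i in range(4)])   # recovers [True, False, True, True]
```

So the answer to Presting's question, as far as the standard argument goes, appears to be no: the real-RAM's power over the halting problem comes precisely from being handed a non-computable real, which is an infinite lookup table by another name.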
kp@uts.amdahl.com (Ken Presting) (03/02/90)
In article <12085@venera.isi.edu> smoliar@vaxa.isi.edu.UUCP (Stephen Smoliar) writes:
>...We have a tendency to assume that every word can be associated with an
>OBJECT, but we may be deliberately confusing ourselves if we try to assume that
>intelligent behavior has the sort of object properties we associate with a loaf
>of bread or a jug of wine.  I would argue that the temporal nature of behavior
>makes it quite a different "thing" to talk about (to the extent that WHAT
>HAPPENS over an interval of time--as opposed to the temporal interval
>itself--really constitute a "thing").  I suppose what I am trying to
>argue is that we tend to talk about properties which emerge from the
>dynamics of a complex system AS IF THEY WERE CONCRETE ARTIFACTS when
>we really have no justification for doing so.  Rather than agonizing
>over the "nature of intelligence"--as if it were a "thing" which had
>a "nature"--perhaps we should be reviewing our vocabulary as it pertains
>to talking about ANY form of behavior.

Stephen, I agree wholeheartedly with your ultimate suggestion, but I'd
like to put in a few words in favor of hairsplitting.  (This will give
me a temporary respite from agonizing over my reply to your terrific
article on the Kronos Quartet performance.  You made me wish I'd been
there.)

In the first place, human bodies (and computers) *are* objects.  After
all, that is how Penrose gets into the debate at all.  One way to think
about intelligence is to try to figure out what "object properties" the
body has which make possible intelligent behavior.  The neurologists are
working on this from one angle.  AI has been working on it from another.
So have philosophers.

In the second place, the dynamical properties of a running computer are
neatly determined by its program.  The program is of course a static
object (modulo the contents of its data structures, but you see what I
mean).
The $64K question is: what properties does the program need to have, so
as to allow the emergence of intelligent behavior?

In the third place, it's easy to give lots of necessary conditions for
the intelligence of behavior.  This is a worthwhile enterprise, but
until we can state a sufficient condition, those goal posts are gonna
keep on movin'.  I doubt that Turing-style imitation conditions will
fill the bill, and in any case they are useless as design guides.

Finally, words like "conscious", "intelligent", and "rational" have very
important roles in the conceptual framework of our moral and legal
system.  What if our machines demand suffrage?  Freedom of speech?
Freedom of movement?  These are not pressing problems, I'll admit.  But
I think that the social and moral impact of AI deserves at least as much
investigation as the same issues in Biotechnology.  Someday, a computer
is going to kill somebody, and it won't look like a "glitch".  Questions
like "Does the machine understand right and wrong" and "was it
intentional" will determine whether it's the machine or the programmer
that gets powered off.

A lot of philosophers have gotten badly burned by the concept of
emergent properties.  I don't think Searle is relying on that concept,
but his arguments about programs not determining any "causal powers" are
similar to some arguments based on emergence.  In my own thinking, I
have found that the concept of "normative property" does a much better
job of capturing the relation between (eg) rationality and goal-seeking
behavior.  Dan Dennett's "Intentional Stance" is an OK example of a
normative approach, although he's not always as thorough as someone like
Davidson.  If you find that the concept of emergence is useful in
formulating some ideas, you will probably find normativity both more
useful and more reliable.  That's what I'd recommend in response to your
suggestion for a vocabulary review.
I was looking for a way to bring in normativity in response to your Kronos point about the expressibility of information. That is a Very Difficult problem. I think that reading *anything*, even reading meters, is immensely complex. Much harder to understand than pain or pleasure, which are awful.
kp@uts.amdahl.com (Ken Presting) (03/02/90)
In article <2953@dciem.dciem.dnd.ca> mmt@dretor.dciem.dnd.ca (Martin Taylor) writes:
>>virtually all chaotic dynamical systems have the characteristic of a
>>computational horizon beyond which any particular computer cannot keep
>>up with the physical system in doing the simulation.
>
>The problem is not whether a process is computable, or if computable is
>chaotic, but that there is no way that the *actual* behaviour of any
>small part of the universe can be described by deterministic laws, even
>though the entire universe may be.  For the behaviour of a part of the
>universe to be detemined, that part would have to be isolated from the
>rest of the universe, and thus be unobservable.

Pardon my philosophisms, but unobserved does not imply undetermined.
Sure, we couldn't tell what was happening between observations, but what
that shows is that we don't know whether our descriptions are true.  It
does not show that the descriptions are false.  Of course, the usual
strategy is to take some system which can be reset to some more-or-less
known state (say, a ball at the top of a ramp), let it run for a while,
and observe it at the end.  If the outcome is reproducible, then the
system is deterministic.  Pace Hume, this is a pretty reliable form of
argument.

>                             If it were not isolated,
>the part whose behaviour is supposedly described would be affected by
>"surprising" events (events from other parts of the universe not subsumed
>in the description of the boundary conditions and/or descriptive laws.

But part of setting up a measurement is controlling as many outside
influences as possible, so that the act of measurement is the only
perturbation.  What makes measurements different from other
perturbations is that there is a theoretical account of how the
measuring device interacts with the system to be measured.
For example, a galvanometer in a circuit changes the resistance and
inductance of the circuit, but the theory of electromagnetism has very
specific predictions about the magnitude of those changes.  And of
course the theory also explains the degree of deflection of the needle.
It is very simplistic to simply read a meter and suppose that the
reading describes the state of the system.  For convenience in the lab,
measuring devices are designed to perturb as little as possible.  But
the most sensitive measurements must take the perturbations into
account, and correct for them, using a theoretical analysis (and
experience with the devices, of course).

>Often, these outside influences do not matter, because the observed system
>is in a dynamic state near an attractor, or some similar insensitive
>condition.

If we consider the system under observation to include the perturbations
of the measuring device, as in the case of a circuit with a
galvanometer, then we can let photons fall on the meter (violating the
controlled interaction objective), for exactly the reason you state.
The meter is highly insensitive to photon impact.  I always thought that
the whole point of designing measuring devices was to achieve this
effect - known perturbation on the measured system, coupled to a
macroscopically observable indicator.  This applies to every measurement
from litmus paper to bubble chambers.

>          But it might be near a repellor across which the unexpected
>event pushes it, into a quite different basin of attraction.

This is of course what digital hardware designers try to avoid.

>                                                        I don't
>think it much matters about the system being chaotic, provided that the
>chaos is expressed in the form of a strange attractor, because it is
>likely that most of th trajectories in the attractor have behaviourally
>similar consequences.

If you could elaborate on "strange attractors", I would appreciate it.
This is outside my (very limited) familiarity with chaos theory.
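[Editorial aside: the "computational horizon" Taylor quotes at the top of this exchange is easy to demonstrate concretely.  The sketch below is my illustration, not anything from the thread; it iterates the logistic map, a standard textbook chaotic system, from two initial conditions closer together than any realistic measurement could distinguish.]

```python
# Sensitive dependence on initial conditions in the logistic map
# x -> r*x*(1-x) at r = 4.0, a standard chaotic regime.

def logistic_orbit(x0, r=4.0, steps=50):
    """Return the orbit of x0 under the logistic map."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

# Two initial conditions differing by 1e-9 -- far below any plausible
# measurement resolution.
a = logistic_orbit(0.123456789)
b = logistic_orbit(0.123456790)

# The discrepancy roughly doubles each step, so after a few dozen
# iterations it fills the whole unit interval.  Past that horizon the
# simulation tells you nothing about the real trajectory.
gap = [abs(x - y) for x, y in zip(a, b)]
print(gap[0], gap[25], gap[50])
```

Any finite truncation of the initial condition, however generous, buys only a bounded prediction horizon, which is the point about unmeasurable discrepancies made earlier in this thread.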
>What this points out is that there is *NO* physical system that can
>be guaranteed to compute according to any algorithm.

I suppose you mean that unless the outside influences are restricted,
any system can be pushed out of any attractor basin (eg by holding an RF
radiator near the bus traces on a CPU board).  But it's not that big a
deal to isolate a system for practical purposes.  After all, computers
do work pretty well.

>              If the algorithmic
>nature of computation is what causes Penrose to require new physics,
>he need not bother.  All he needs is to note that brains and computers
>are sub-parts of the universe, and therefor are non-deterministic.

For the reasons above, I don't think this follows.  I'm not sure whether
you're making a metaphysical, epistemological, or practical argument.
You seem to me to have a very intriguing idea, which is why I tried to
make my objections very explicit.  I may well be missing something.

>--
>Martin Taylor (mmt@zorac.dciem.dnd.ca ...!uunet!dciem!mmt) (416) 635-2048
>"Viola, the man in the room
>doesn't UNDERSTAND Chinese.  Q.E.D." (R. Kohout)

(Love the .sig :-)
smoliar@vaxa.isi.edu (Stephen Smoliar) (03/03/90)
In article <2c7R02Sr8d8b01@amdahl.uts.amdahl.com> kp@amdahl.uts.amdahl.com (Ken Presting) writes:
>In article <12085@venera.isi.edu> smoliar@vaxa.isi.edu.UUCP (Stephen Smoliar)
>writes:
>>                  Rather than agonizing
>>over the "nature of intelligence"--as if it were a "thing" which had
>>a "nature"--perhaps we should be reviewing our vocabulary as it pertains
>>to talking about ANY form of behavior.
>
>Stephen, I agree wholeheartedly with your ultimate suggestion, but I'd
>like to put in a few words in favor of hairsplitting.
>
>In the first place, human bodies (and computers) *are* objects.  After
>all, that is how Penrose gets into the debate at all.  One way to think
>about intelligence is to try to figure out what "object properties" the
>body has which make possible intelligent behavior.  The neurologists are
>working on this from one angle.  AI has been working on it from another.
>So have philosophers.
>
I have no problem with this point; and I think Marvin Minsky (THE
SOCIETY OF MIND) and Gerald Edelman (THE REMEMBERED PRESENT) have done
the best jobs of articulating that THIS is the primary issue.

>In the second place, the dynamical properties of a running computer are
>neatly determined by its program.  The program is of course a static
>object (modulo the contents of its data structures, but you see what
>I mean).  The $64K question is: what properties does the program need
>to have, so as to allow the emergence of intelligent behavior?
>
This is where I would like to move in and split a hair or two.  It's
that word "neatly" which is giving me trouble.  After all, if there were
a clean relationship between the static properties of a program and the
dynamic properties of the device running that program, we wouldn't have
all the software problems we have, would we?  Even when we invoke all
the motherhood of software engineering and enforce static properties
which simplify that relationship, we STILL have problems!  Furthermore,
this is only the tip of the iceberg.
If we consider cellular automata, instead of "clean" applicative LISP
code (for example), we encounter devices with utterly trivial static
properties exhibiting extraordinarily complex behavior with little clue
as to the relationship between them.  (At the Artificial Life
Conference, Chris Langton presented a fascinating talk in which he tried
to model this behavior in terms of the physics of phase transitions; and
HE was still able to talk about his model at a relatively rough level.)

I think we should be honest with ourselves and admit that we still do
not have terribly good ways to describe the relationship between a
program and the device which runs that program.  I would argue that the
reason for this is that we still lack good ways to describe and reason
about the dynamic properties of processes.  The best we have been able
to do, thus far, is to abstract those processes into static objects.
This is how software engineering SOMETIMES gives us a handle on
reasoning about what our programs actually do, but the power of this
approach is only as good as the abstraction we develop.  Finding the
right abstraction often remains the intractable problem in software
engineering.

Now I would like to push my argument one step further and conjecture
that the reason we have so much trouble with the dynamic properties of
processes is that we still have considerable trouble with time, itself.
Many of the key issues of intentionality (at least as I have thus far
been able to understand the story) emerged from Brentano's attempts to
deal with the ways in which time affects our abilities to perceive.
These issues were further developed in a series of lectures on "internal
time-consciousness" which Husserl delivered in 1905 and Heidegger
subsequently edited and published under the title THE PHENOMENOLOGY OF
INTERNAL TIME-CONSCIOUSNESS.  (Probably the best place to read up on all
this is in Izchak Miller's MIT Press book, HUSSERL, PERCEPTION, AND
TEMPORAL AWARENESS.)
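[Editorial aside: Smoliar's cellular-automaton point above can be made painfully concrete.  The sketch below is mine, not Langton's model; it runs an elementary cellular automaton, Rule 30, whose entire "static" description is one byte, yet whose evolution is irregular enough that it has been used as a pseudo-random generator.]

```python
# Rule 30: an elementary one-dimensional cellular automaton.  The whole
# program is the single integer below; each cell's next state is the
# bit of RULE indexed by the (left, center, right) neighborhood.

RULE = 30  # eight bits of "static properties"

def step(cells):
    """One synchronous update of a row of cells, wrapping at the edges."""
    n = len(cells)
    return [
        (RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

# Start from a single live cell and watch the triangle of chaos grow.
row = [0] * 31
row[15] = 1
for _ in range(15):
    print("".join(".#"[c] for c in row))
    row = step(row)
```

The gap between the one-byte static description and the printed behavior is exactly the program/process gap under discussion: nothing in the eight bits of RULE visibly "contains" the pattern.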
The key questions seem to be how we perceive the passage of time and how
we perceive what is happening as time passes.  These remain very sticky
issues which tend to resist any attempts at reductive abstraction to
static objects.

I would like to insert an aside here to the effect that I feel my
interest in music has been a major impulse to thinking about these
questions.  What makes music particularly interesting is that we have to
deal with (i.e. reason about) the passage of time on several different
scales simultaneously.  This is nicely described in a passage I would
like to quote from Stephen Handel's LISTENING, where, among other
things, he tries to compare these time scales with respect to the
perception of both music and speech:

    These variations often occur almost instantaneously (the changes in
    loudness, quality, and timbre of a note when a violin is initially
    bowed); they may also occur in short time periods (the changes in
    pitch, duration, and timbre for a syllable or a note) or occur in
    long time periods (the changes in loudness, duration, order, and
    rhythm among elements of a sentence or a musical phrase and the
    changes in position for a moving object).

As Handel points out, what makes the situation particularly difficult is
that these time scales cannot be segregated.  In other words "what
happens in the short time intervals affects what happens in the long
time intervals and vice versa."  Unfortunately, most people who "do"
music theory have little to say about these dynamic processes (just as
most software engineers try to abstract away the dynamic properties of
running programs), although David Lewin has begun to ask some of these
questions as a result of having read Miller's book on Husserl's
time-consciousness.

>In the third place, its easy to give lots of necessary conditions for
>the intelligence of behavior.  This is a worthwhile enterprise, but
>until we can state a sufficient condition, those goal posts are gonna
>keep on movin'.
>I doubt that Turing-style imitation conditions will
>fill the bill, and in any case they are useless as design guides.
>
Returning to the matter at hand, Ken, you may agree wholeheartedly with
my ultimate suggestion; but I think here you have fallen into the trap I
have been trying to avoid.  Necessary and sufficient conditions are
devices we use when we are talking about OBJECTS.  They carry an
implicit assumption that we have an object which we can hold up and ask
whether or not those conditions are satisfied.  The point I want to
hammer away at for a while is the skeptical question of whether or not
processes, behaviors, and other "things that pass with time" can be
viewed as such objects.  The reason my intuition is putting up such a
fight is that I worry about whether or not I can ever do more with a
necessary or sufficient condition than apply it to a SNAPSHOT of a
process (i.e. to "freeze time" and perform various forms of static
analysis on that "frozen moment").  There is a nagging voice which keeps
saying that no matter how many snapshots I accumulate, I may never be
able to capture the process, itself.

>Finally, words like "conscious", "intelligent", and "rational" have
>very important roles in the conceptual framework of our moral and
>legal system.

You have no argument with me here.  I would only assert that this makes
it all the more important that we home in on better ways to talk about
those words, reason with them, and use them.  Consciousness may be far
too important to be relegated to a universe of objects which are
delimited by necessary and sufficient conditions.

>A lot of philosophers have gotten badly burned by the concept of
>emergent properties.  I don't think Searle is relying on that concept,
>but his arguments about programs not determining any "causal powers"
>are similar to some arguments based on emergence.
>In my own thinking,
>I have found that the concept of "normative property" does a much
>better job of capturing the relation between (eg) rationality and
>goal-seeking behavior.  Dan Dennett's "Intentional Stance" is an OK
>example of a normative approach, although he's not always as thorough
>as someone like Davidson.  If you find that the concept of emergence
>is useful in formulating some ideas, you will probably find
>normativity both more useful and more reliable.
>
I appreciate this observation and will try to follow through on it.
Watch this space for further development.  Meanwhile, I shall probably
continue to use rec.music.classical as a forum for working out ideas
specific to relating our understanding of the passage of time to what is
roughly called "music appreciation" (otherwise known as "what the music
critic's ear tells the music critic's brain . . . assuming he has
either").  Perhaps at some point my struggles in these two arenas will
come to a point of convergence.

=========================================================================
USPS:	Stephen Smoliar
	USC Information Sciences Institute
	4676 Admiralty Way  Suite 1001
	Marina del Rey, California  90292-6695

Internet:  smoliar@vaxa.isi.edu

"Only a schoolteacher innocent of how literature is made could have
written such a line."--Gore Vidal
kp@uts.amdahl.com (Ken Presting) (03/03/90)
In article <3750@uceng.UC.EDU> dmocsny@uceng.UC.EDU (daniel mocsny) writes:
>In article <2al902Zg8bnn01@amdahl.uts.amdahl.com> kp@amdahl.uts.amdahl.com (Ken Presting) writes:
>> . . .  If nobody
>>noticed and described "that certain something", then no way can we build
>>it into anything.  But if we can build it, we can also program it.
>
>I disagree. The analog builders might "get lucky," and be able to
>reproduce their success even though they can't explicate any
>programmable underlying mechanism. . . .
>
> . . .  (Thumb through the _Chemical Engineer's
>Handbook_ the next time you get bored with determinism. The extent to
>which our economy depends on practically uncomputable phenomena is
>rather appalling.)

Here (and below) I think I can salvage some useful conclusions from my
original argument.  I had focused on Penrose's arguments, which depend
on the possible existence of uncomputable functions in the laws of
physics.  But I propose a simulation technique which depends on
deduction rather than numerical solution of differential equations.  The
standard logical operation which represents the determinist metaphysical
thesis is *deduction*, not computation.  Of course, computers are
entirely adequate to implement deductive systems, by the Church-Turing
thesis.  If there is any way at all to predict a future state from a
past state, computers can do it as well as any mere physicist.

Using deduction doesn't really change the power of the simulator, it
just puts the focus on a different issue.  I do think this eliminates
any concerns with the computability of the functions which are used to
state the laws, given that the laws are stated in a complete logic.
Recursive enumerability is sufficient, supposing that real-time
performance is not required, or that machines with non-deterministic
performance exist (such as associative memory).  The computability or
uncomputability of every other process is irrelevant.
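[Editorial aside: the claim that "deduction itself is a computable process" can be sketched mechanically.  The toy below is my construction, not Presting's; it forward-chains over Horn clauses, and the propositions and rule names are made up for illustration.  Everything the rule set entails eventually appears, which is all recursive enumerability promises - it says nothing about how long the wait is.]

```python
# Forward-chaining enumeration of consequences of a set of Horn clauses.
# Each rule is a (premises, conclusion) pair; facts are bare atoms.

def consequences(facts, rules):
    """Return the closure of `facts` under `rules` (modus ponens)."""
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in known and all(p in known for p in premises):
                known.add(conclusion)
                changed = True
    return known

# A toy "theory" echoing the argument in the surrounding posts.
theory = [
    (("measured(x)",), "finite_description(x)"),
    (("finite_description(x)", "stated_in_complete_logic"), "enumerable(x)"),
]
print(consequences({"measured(x)", "stated_in_complete_logic"}, theory))
```

For a finite propositional rule set this loop terminates; for a full first-order theory the analogous enumeration need never terminate, which is exactly why real-time performance is a separate question from computability.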
Even if we lack a theoretical account of a measured phenomenon, so that
we are unable to deduce our measurements from a more general theory, we
still have the expedient of adding the tabulated data points themselves
to the data base of a simulation.  One point I am urging against Penrose
is that whatever the scientists can state about the phenomenon we want
to model, AI workers can implement, because deduction itself is a
computable process.  Penrose goes on to suppose that non-deductive
processes have a role in mathematics and science, but that is a
different subject, with huge controversies of its own.  Hypothesis
formation is non-deductive, but you need deductions from the hypothesis
to get testable predictions, and it's the test of the hypothesis that
gets published.  The only relevant non-deductive process I can think of
is measurement, which causes no problems for my argument.  Quite the
contrary:

>I concede that the likely complexity of an intelligent machine greatly
>lowers the probability that a dirty-handed empiricist could build one
>by accident. But I don't think empiricism and theory have to be so
>nicely interchangeable as you imply, especially in the short run.

This is why I made a very big deal out of *measuring* the robot.  In
engineering, theory most often has a background role if anything, and
need have no role at all.  As you observe, the bang is no smaller for
being unexplainable.  But a measuring device without a theoretical
explanation is no measuring device at all - it's just another mysterious
correlation!

This point is crucial to my argument.  Penrose can speculate at will
about possible laws.  But if he is presented with a robot, and wants to
criticize its behavior, he'll need some sort of observation on which to
base the criticism.  If he knows how to make the measurement, then there
is a way to simulate the same effect.
This dependency of measurement on theory holds pretty well for complex
measuring devices, but stops irritatingly short of providing a
rationalist reduction of empiricism (it irritates me, anyway).  And I
should grant right away that a device (X-rays are a good example) can
make useful measurements before it's explained by the theorists, once
the experimentalists have a reasonably thorough description of its
performance.  But a new form of measurement is not going to topple any
old theories until it's explained by a new theory itself.

(Via E-mail, Dan made further comments, which I'll address indirectly
here.  btw, Dan, that letter made my day!)

It is possible to finesse the actuator issue by restricting the observer
in a Turing test to a limited interface with the simulator.  Dan
suggests using the simulator to drive a light and sound source, thus
reproducing most of an observer's experience of interacting with a
person.  This would be like substituting a "virtual reality" environment
for Turing's original teletype.

I have considered an even more radically restricted interface - present
the "observer" with two sets of descriptions, one set generated by the
simulator, and the other set recorded by hand by technicians watching,
measuring, X-raying, etc a live human.  The observer then must decide
which is the description of the real person versus the output of the
simulator.  The observer is allowed to specify any situation, any
measurement, any dialogue, to his heart's content.  The technicians who
report on the real human are required to couch their reports in the same
notation as used by the simulator.

My approach suffers severely from the "simulations don't fly" problem,
which is why I stayed with the actuators.  But any proposal for a
success condition for AI which involves restricting the observations
allowed to the judges has a deeper problem.  In order to claim that
"observations of types X, Y ...
provide adequate information to determine the presence of thought in the
observed system" it is necessary to show that other types of
observations are irrelevant.  For example, the familiar Turing test says
"Observations of language behavior are sufficient...".  This is OK if
you can show that (at least some aspect of) thought is independent of
all the other things people do, and all the other attributes they
possess.  But you *can't* do that until you have some independently
motivated account of what thought is.

I don't think AI is going to get the amateur speculators off its back
(and out of the general press) until there is a good reason for saying
"We don't care that our duck don't waddle.  It's STILL a duck".  Or else
settle for Weak AI.  <dismount soapbox>

>In the long run, who knows? Can any phenomenon be so truly uncomputable
>that no logical process could behave equivalently (if not exactly)?
>The presence of such phenomena would seem to imply a universe where
>theory is of no value at all. I hope we all agree that's not true... :-)

Agreed!  I'd emphasize deduction as the crucial logical process, rather
than computation.  And we have to be quite forgiving wrt the time scale
of the simulation (cf Dunning's comments on chaotic systems), or else
focus on the finiteness of the equivalency-checking process.

I have a hard time believing that we could ever convince ourselves that
we have discovered an absolutely non-simulable process.  I bet that if a
candidate ever arose, there'd be lots of talk about "hidden variables"
or some such nonsense... :-}

>Dan Mocsny
>dmocsny@uceng.uc.edu

Thanks again for a very stimulating article.

Ken Presting
dsa@dlogics.UUCP (David Angulo) (03/04/90)
In article <36Is02Wr8diC01@amdahl.uts.amdahl.com>, kp@uts.amdahl.com (Ken Presting) writes:
> In article <2953@dciem.dciem.dnd.ca> mmt@dretor.dciem.dnd.ca (Martin Taylor) writes:
> >>virtually all chaotic dynamical systems have the characteristic of a
> >>computational horizon beyond which any particular computer cannot keep
> >>up with the physical system in doing the simulation.
> >
> >The problem is not whether a process is computable, or if computable is
> >chaotic, but that there is no way that the *actual* behaviour of any
> >small part of the universe can be described by deterministic laws, even
> >though the entire universe may be.  For the behaviour of a part of the
> >universe to be detemined, that part would have to be isolated from the
> >rest of the universe, and thus be unobservable.
>
> Pardon my philosophisms, but unobserved does not imply undetermined.
> Sure, we couldn't tell what was happening between observations, but what
> that shows is that we don't know whether our descriptions are true.  It
> does not show that the descriptions are false.  Of course, the usual
> strategy is to take some system which can be reset to some more-or-less
> known state (say, a ball at the top of a ramp), let it run for a while,
> and observe it at the end.  If the outcome is reproducible, then the
> system is deterministic.  Pace Hume, this is a pretty reliable form of
> argument.
>
As I said before, all these arguments and examples about
reproducibility, determinism, and computability of the universe or
ANYTHING are moot.  The universe (and everything in it) is not
deterministic, not reproducible, and not computable.  Quantum physics
dictates that ANYTHING is possible.  There is only an overriding
probability that "normal" type things will happen.  Particles CAN come
out of black holes!  But what does this have to do with thinking systems
anyway?

f o d d e r   f o r   p o o r   s o f t w a r e
--
David S. Angulo                  (312) 266-3134
Datalogics                       Internet: dsa@dlogics.UUCP
441 W. Huron                     UUCP: ..!uunet!dlogics!dsa
Chicago, Il. 60610               FAX: (312) 266-4473
radford@ai.toronto.edu (Radford Neal) (03/04/90)
In article <b3Yc02Cj8dl401@amdahl.uts.amdahl.com> kp@amdahl.uts.amdahl.com (Ken Presting) writes:
>... I had focused on Penrose' arguments, which depend on
>the possible existence of uncomputable functions in the laws of physics.
>But I propose a simulation technique which depends on deduction rather
>than numerical solution of differential equations.  The standard
>logical operation which represents the determinist metaphysical thesis
>is *deduction*, not computation.  Of course, computers are entirely
                                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
>adequate to implement deductive systems, by the Church-Turing thesis.
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
>If there is any way at all to predict a future state from a past state,
>computers can do it as well as any mere physicist.

I think you're suffering from a lack of imagination here.  Penrose is of
course denying that computers are adequate to implement any deductive
system, if by "deductive system" you mean the process by which real
mathematicians establish new truths.  If you're to understand his
argument, you have to try to imagine how this might be true.

Let's say that one day Penrose announces that he is able to solve, say,
the word problem for semi-groups - a well-known non-computable problem.
People give him instances of this problem.  After a period of time that
goes up only reasonably with the size of the instance he announces the
answer: YES or NO.  In those cases where the true answer is subsequently
determined, he always turns out to be right.  This holds even for very
difficult cases that require increasingly subtle arguments to establish
that the answer is NO, as well as cases where extremely complex
reductions are needed to demonstrate that the answer is YES.

I think it would be quite reasonable to conclude in the above situation
that Penrose can somehow perform non-computable operations, and hence
that the laws of physics must also be non-computable.
You could construct a similar scenario in which the word problem is
solved by some physical computer (obviously not Turing equivalent),
rather than by a human being.

    Radford Neal
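[Editorial aside: it may help to see where computability runs out on the word problem Neal cites.  The sketch below is my illustration, not anything from the thread; the relation set is a made-up toy presentation.  A breadth-first search over rewrites will eventually *confirm* that two words are equal, but for unequal words it may simply never halt, hence the explicit budget.  No algorithm removes that asymmetry in general; that is the semi-decidability at issue.]

```python
from collections import deque

def equal_words(u, v, relations, max_steps=10000):
    """Semi-decide u = v in a semigroup presentation.

    `relations` is a list of (lhs, rhs) word pairs, applicable in both
    directions anywhere inside a word.  Returns True if a rewrite path
    from u to v is found, or None when the budget runs out -- which is
    a "don't know", NOT a proof of inequality."""
    if u == v:
        return True
    seen, frontier = {u}, deque([u])
    while frontier and len(seen) < max_steps:
        w = frontier.popleft()
        for lhs, rhs in relations:
            for a, b in ((lhs, rhs), (rhs, lhs)):  # relations go both ways
                i = w.find(a)
                while i != -1:
                    w2 = w[:i] + b + w[i + len(a):]
                    if w2 == v:
                        return True
                    if w2 not in seen:
                        seen.add(w2)
                        frontier.append(w2)
                    i = w.find(a, i + 1)
    return None  # budget exhausted

# Toy presentation: generators a, b with the single relation ab = ba.
rels = [("ab", "ba")]
print(equal_words("aab", "aba", rels))  # finds a rewrite: True
print(equal_words("a", "b", rels))      # no path exists: None
```

A reliable oracle for the NO answers is exactly what Markov and Post showed no Turing machine can provide, which is why Neal's hypothetical Penrose would be doing something genuinely non-computable.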
jrk@sys.uea.ac.uk (Richard Kennaway) (03/04/90)
In article <90Mar3.152728est.6160@neat.cs.toronto.edu> radford@ai.toronto.edu (Radford Neal) writes:
>In article <b3Yc02Cj8dl401@amdahl.uts.amdahl.com> kp@amdahl.uts.amdahl.com (Ken Presting) writes:
>>Of course, computers are entirely
>  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
>>adequate to implement deductive systems, by the Church-Turing thesis.
>  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
>I think you're suffering from a lack of imagination here.  Penrose is
>of course denying that computers are adequate to implement any
>deductive system, if by "deductive system" you mean the process by
>which real mathematicians establish new truths.  If you're to understand
>his argument, you have to try to imagine how this might be true.
>Let's say that one day Penrose announces that he is able to solve, say,
>the word problem for semi-groups - a well-known non-computable problem.
>People give him instances of this problem.  After a period of time
>that goes up only reasonably with the size of the instance he announces
>the answer: YES or NO.  In those cases where the true answer is
>subsequently determined, he always turns out to be right.  This holds
>even for very difficult cases that require increasingly subtle
>arguments to establish that the answer is NO, as well as cases where
>extremely complex reductions are needed to demonstrate that the answer
>is YES.
>I think it would be quite reasonable to conclude in the above situation
>that Penrose can somehow perform non-computable operations, and hence
>that the laws of physics must also be non-computable.  You could construct
>a similar scenario in which the word problem is solved by some physical
>computer (obviously not Turing equivalent), rather than by a human being.

And if pigs had wings they could fly.  So what?

--
Richard Kennaway          SYS, University of East Anglia, Norwich, U.K.
Internet:  jrk@sys.uea.ac.uk	uucp:  ...mcvax!ukc!uea-sys!jrk
utility@quiche.cs.mcgill.ca (Ronald BODKIN) (03/05/90)
In article <361@dlogics.UUCP> dsa@dlogics.UUCP (David Angulo) writes:
>As I said before, all these arguments and examples about reproducibility,
>determinism, and computability of the universe or ANYTHING are moot.  The
>universe (and everything in it) is not deterministic, not reproducible, and
>not computable.  Quantum physics dictates that ANYTHING is possible.  There
>is only an overriding probability that "normal" type things will happen.
>Particles CAN come out of black holes!  But what does this have to do with
>thinking systems anyway?

The universe can equivalently be regarded as deterministic in much the
same way any non-deterministic process can -- the objects are sets with
associated (real number?) probabilities.  I would argue the universe is
indeed computable as it IS being computed (by the universe).  And the
probability function for things is not at all arbitrary.  The point is
really that particles are not exactly the real objects of our universe.

		Ron
smoliar@vaxa.isi.edu (Stephen Smoliar) (03/06/90)
In article <90Mar3.152728est.6160@neat.cs.toronto.edu> radford@ai.toronto.edu (Radford Neal) writes:
>
>Let's say that one day Penrose announces that he is able to solve, say,
>the word problem for semi-groups - a well-known non-computable problem.
>People give him instances of this problem.  After a period of time
>that goes up only reasonably with the size of the instance he announces
>the answer: YES or NO.  In those cases where the true answer is
>subsequently determined, he always turns out to be right.  This holds
>even for very difficult cases that require increasingly subtle
>arguments to establish that the answer is NO, as well as cases where
>extremely complex reductions are needed to demonstrate that the answer
>is YES.
>
I think this gets to the heart of the matter behind Penrose's argument
very nicely.  It also raises some rather troubling issues which probably
need further discussion.  I realize that, to some extent, these issues
conflict with a personal tendency to try to abstract the world into
deductive operations; but, in spite of the fact that I know this may
only be a personal bias, there are aspects of this example which worry
me.

I think the problem is one of the epistemological foundations of issues
of BELIEF.  Penrose announces that he can solve the word problem.  For
any case we give him, he responds with the correct answer.  Has he
convinced us?  Let me illustrate the difficulty by constructing what I
hope is a valid analogy:  Penrose announces that he can solve the
general medical diagnosis problem.  For any case from the medical
literature we give him, he responds with the correct answer.  If you
were running a hospital, would you want him going on rounds through your
wards?

The point I am trying to get at here is that there is more to reasoning
than getting the right answer.
If Penrose says (using whatever reasoning powers "work" for him), "This
patient has a malignant brain tumor which will be fatal if not corrected
by surgery within one hour," would YOU rush the patient into the
operating room WITHOUT ASKING ANY QUESTIONS?  Whether you are an
administrator or a fellow physician, chances are, you would want some
kind of JUSTIFICATION for Penrose's decision before taking any hasty
action; and if, when confronted with the question of justification,
Penrose were to respond, "I just know it," you would be in quite a
quandary.  (Believe me, I have no idea how I would respond in such a
situation, particularly if I knew that Penrose's track record had been
flawless prior to this incident.)

I think the point we have to confront here is that CONVINCING reasoning
lies at neither extreme.  If we are talking about "the process by which
real mathematicians [or any other thinkers, for that matter] establish
new truths," then we have no reason to believe that deductive systems
are the only mechanisms which may kick in.  Instead, as Marvin Minsky
pointed out in THE SOCIETY OF MIND, such systems for formal reasoning
are better qualified to SUMMARIZE and JUSTIFY those "new truths," once
they have been encountered.  The question then reduces to whether or not
there are MECHANISMS, deductive or otherwise, for establishing (or even
hypothesizing) them.  AI argues that such mechanisms do, indeed, exist;
but, because we have given so much of our attention to deductive
systems, we are probably still a far cry from a better understanding of
their nature.

=========================================================================
USPS:	Stephen Smoliar
	USC Information Sciences Institute
	4676 Admiralty Way  Suite 1001
	Marina del Rey, California  90292-6695

Internet:  smoliar@vaxa.isi.edu

"Only a schoolteacher innocent of how literature is made could have
written such a line."--Gore Vidal
kp@uts.amdahl.com (Ken Presting) (03/07/90)
In article <90Mar3.152728est.6160@neat.cs.toronto.edu> radford@ai.toronto.edu (Radford Neal) writes:
>> . . . Of course, computers are entirely
>>adequate to implement deductive systems, by the Church-Turing thesis.
>
>I think you're suffering from a lack of imagination here. . . .

Them's fightin' words.

>Let's say that one day Penrose announces that he is able to solve, say,
>the word problem for semi-groups - a well-known non-computable problem.
>People give him instances of this problem. After a period of time
>that goes up only reasonably with the size of the instance he announces
>the answer: YES or NO. In those cases where the true answer is
>subsequently determined, he always turns out to be right. This holds
>even for very difficult cases that require increasingly subtle
>arguments to establish that the answer is NO, as well as cases where
>extremely complex reductions are needed to demonstrate that the answer
>is YES.

If you drop the restriction that the computing device (wet or dry) be
constructed on purpose, the trap door swings both ways. My feeble
imagination can picture a computer pre-programmed with any number of Y's &
N's (of course I can't imagine how), just waiting to pop out when a word
problem gets typed in.

<grunt> Urrrnnnh. (Ouch!) Seriously straining my simplistic cerebrum, I
can even imagine sooo many little y's and n's mashed into memory that the
only word problems the machine couldn't "solve" are sooo long that nobody
could type them in in a lifetime. (Picture all the little y's and n's
packed into a Penrose tiling of main storage. Don't ask me how they got
that way.)

And that's the best I can do. I put in the original argument that the
period for testing the simulation output would have to be finite. That
was on purpose. I tell you, that argument has more spin than a whole
flock of fermions.

> . . .
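The little y's and n's can be made concrete. Restricted to instances of
bounded length, any undecidable problem is a finite function, so a lookup
table "decides" it; the non-computable part is filling the table in. A
minimal sketch, in which the table entries are hypothetical placeholders,
since no effective procedure could supply the correct ones:

```python
# A "solver" for a non-computable problem, restricted to short instances:
# a finite lookup table. The catch is that no algorithm can populate the
# table correctly -- that is exactly the non-computable part, so the
# entries below are hypothetical placeholders, not real answers.
ANSWERS = {
    "ab = ba": "Y",    # hypothetical instance/answer pairs
    "abc = cab": "N",
}

def bounded_solver(instance: str) -> str:
    """Answer any instance in the table; punt on everything else."""
    return ANSWERS.get(instance, "instance too long to type in a lifetime")

print(bounded_solver("ab = ba"))
```

The moral cuts both ways: finitely much oracular behavior is no evidence
of non-computability, which is why the testing period matters.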
>You could construct
>a similar scenario in which the word problem is solved by some physical
>computer (obviously not Turing equivalent), rather than by a human being.

Funny you should mention that:

>From: kp@uts.amdahl.com (Ken Presting)
>Summary: Peano's Piano - major uncomputability in a physical process
>Message-ID: <80tJ02UA8cX201@amdahl.uts.amdahl.com>
>Date: 27 Feb 90 21:34:11 GMT
>
>Suppose that one day we find an old piano. On the keys are digits,
>arithmetic operations + - * / **, parentheses, quantifiers, and "x".
>The piano makes no sound for any individual keystroke, but we observe
>that if we play "1+1=2", then close the cover, it sounds a major chord,
>while if we play "1+1=3" it sounds a minor chord. Playing for a while,
>we can soon convince ourselves of the proper notation (it makes no sound
>if we play "1+1=+", plays a major chord for "(x1) 2*x1=x1+x1", etc).
>
>Consider the hypothesis that this piano will sound a major chord for
>every true sentence, and a minor chord for every false one.
>
>Clearly, we could never build a machine on purpose, guaranteed to
>play correctly - that would contradict Goedel's theorem.
>
>For similar reasons, no matter how long we observe it,
>or how carefully we examine its inner workings, we could never prove
>to ourselves that it is reliable. If we could figure out how & why it
>worked, we would have an algorithm for deciding truth, another
>contradiction.
>
>Most important, such a device could absolutely never be simulated by a
>computer.
>
>If anybody can think of a physical or logical reason why such a device
>*cannot* exist, I'd be very interested. I think that Penrose's arguments
>may reduce to the assertion that such a device is not disallowed by the
>laws of physics.

(Note: My construction explicitly assumes that the device is not bounded
in the range of inputs for which it is successful.)

Of course, I wrote this before I strained my imagination on Penrose tiles.
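To see exactly where the piano outruns anything we could build on purpose:
quantifier-free arithmetic identities are mechanically checkable, and even
the universal "(x1)" sentences can be tested by exhaustion - but only up
to a finite cutoff. A toy sketch, using my own hypothetical rendering of
the keyboard's notation; the gap between "true below the cutoff" and
"true" is where Goedel and Tarski forbid a reliable mechanism:

```python
# Toy "piano": decides quantifier-free sentences exactly, but checks a
# universally quantified sentence "(x1) ..." only for x1 below a finite
# cutoff. Closing that gap for *all* sentences is precisely what the
# Goedel/Tarski theorems rule out. (The string encoding is my own
# hypothetical reading of the piano's keyboard.)
def sounds_major(sentence: str, cutoff: int = 1000) -> bool:
    expr = sentence.replace("=", "==")   # the piano's "=" means equality
    if expr.startswith("(x1)"):          # universal quantifier over x1
        body = expr[len("(x1)"):]
        return all(eval(body, {"x1": n}) for n in range(cutoff))
    return bool(eval(expr, {}))

print(sounds_major("1+1=2"))             # major chord
print(sounds_major("1+1=3"))             # minor chord
print(sounds_major("(x1) 2*x1=x1+x1"))   # major chord - but only verified up to the cutoff
```

The third call is the interesting one: the function answers with finite
evidence, while the piano is supposed to answer with truth.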
It was also before that unfortunate accident that I stumbled on a partial
answer. Ted Dunning's comment on machines that can store a real number
got me thinking. Sure, a real number can encode an infinite sequence in
its decimal expansion: instant Entscheidungsantwort. But that is not just
an infinite amount of information, it's a *non-denumerable* amount of
information.

Now, a real physical device is made of finitely many atoms, each with a
*denumerable* infinity of orbitals. That makes only denumerably many
states, and therefore a denumerable amount of information in any one
state. So, if you want to encode the answer to the EP, you'll have to do
so in some other set of states than the electron orbitals, the nucleon
orbitals, molecular vibration modes, et al. (Assuming, of course, that
all particles in the machine are in bound states. Tunneling would lose
information, when it occurs, so I'll ignore it.)

Now, "who ya gonna call?" It's hard enough to read anything out of the
outer orbital states. Since the difference in energy between levels
decreases without limit, the time required to measure the energy level of
an excited electron (say) with sufficient accuracy to distinguish between
adjacent levels will increase without limit. (Assuming that all those
excited 'trons stay put long enough to make the machine useful. Most
likely, you'd get one big white flash. Bigger, if you used excited
nuclear states.)

What's left? Store data in hidden variables? Call Ghostbusters - there's
a ghost in that machine...

******

Radford, I hope you found this article both informative and amusing. I do
not feel that I have the luxury of offending anyone who is willing to take
the time to read and respond to my ideas. But I do enjoy exercising my
imagination. :-)

Ken Presting

(Wait; what's that still small voice? What's that? "Dyyynaamiic
Prooocessseeeesss... Dddyyynnnaammiicc Pprroocceesseess..."
All right, all right, I haven't considered the possibility of storing information in dynamic processes. Wait for my imagination to recover, and I'll post again.)
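The counting argument in the article above compresses into one line of
cardinal arithmetic - a sketch, on the stated assumptions (finitely many
atoms, each with denumerably many bound states, tunneling ignored):

```latex
% k atoms, each with countably many orbitals: the joint state space S
% is a finite product of countable sets, hence countable.
|S| \;\le\; \aleph_0^{\,k} \;=\; \aleph_0
% An arbitrary real number, read off its decimal expansion, is one of
% 2^{\aleph_0} > \aleph_0 possibilities, so no assignment of bound
% states can encode every real.
```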
kp@uts.amdahl.com (Ken Presting) (03/07/90)
In article <12240@venera.isi.edu> smoliar@vaxa.isi.edu.UUCP (Stephen Smoliar) writes:
>In article <90Mar3.152728est.6160@neat.cs.toronto.edu> radford@ai.toronto.edu
>(Radford Neal) writes:
>>
>>Let's say that one day Penrose announces that he is able to solve, say,
>>the word problem for semi-groups - a well-known non-computable problem.
>>
>. . . I realize that, to some extent, these issues conflict
>with a personal tendency to try to abstract the world into deductive
>operations; . . .

When I built the argument on which Radford is commenting, I was *very*
careful to apply deductive abstractions *only* to logical or
epistemological issues. Kuhn, Feyerabend, Kitcher, and many others have
made it very clear that there can be no general method for defining the
appropriate formal abstraction in which to describe natural phenomena.
This point holds for abstraction in engineering as well as in science, I
believe. However, deductive reasoning still has an indispensable role in
all argumentation, including scientists' arguments with each other over
theories. I would caution against any application of the concept of
deduction outside the context of an analysis of arguments.

>Let me illustrate the difficulty by constructing what I hope is a valid
>analogy: Penrose announces that he can solve the general medical diagnosis
>problem. For any case from the medical literature we give him, he responds
>with the correct answer. If you were running a hospital, would you want him
>going on rounds through your wards?

A small point: Diagnosing a case from a textbook description is different
from diagnosing a case from clinical observation. Interpreting X-rays is
very tricky, for example. Clinical skill is not implied by inferential
skill. This issue is not central to the rest of your example, but I
thought it was worth mentioning.
Assuming that Penrose can magically (or whatever) diagnose from clinical
observations, he would constitute a case of a measuring device without a
theoretical explanation. A "mysterious correlation".

>The point I am trying to get at here is that there is more to reasoning than
>getting the right answer. If Penrose says (using whatever reasoning powers
>"work" for him), "This patient has a malignant brain tumor which will be fatal
>if not corrected by surgery within one hour," would YOU rush the patient into
>the operating room WITHOUT ASKING ANY QUESTIONS? Whether you are an
>administrator or a fellow physician, chances are, you would want some
>kind of JUSTIFICATION for Penrose's decision before taking any hasty
>action; and if, when confronted with the question of justification,
>Penrose were to respond, "I just know it," you would be in quite a quandary.
>(Believe me, I have no idea how I would respond in such a situation,
>particularly if I knew that Penrose's track record had been flawless
>prior to this incident.)

This example brings up an issue which is very important for AI, but is
well outside anything Penrose addresses in his book. When a belief is to
be used as the basis for a practical decision, one's confidence in the
belief is only one factor. The risks entailed by taking action must also
be considered, as well as the cost of the action, missed opportunities,
etc. The rationality of an action is by no means determined solely by the
rationality of the beliefs which direct it.

It is interesting to notice the variation in the points of view of the
several actors in Stephen's story. A consulting physician could
rationally refuse to take any position at all (moral considerations aside)
if his primary concern were his reputation. The administrator must
consider the hospital's liability, which will be decided at least partly
on the grounds of the hospital's scientific efforts to establish Penrose's
reliability. The patient is in quite a pickle.
Considerations of epistemic purity are likely to have little influence on
his decision.

Penrose, Neal, and I have been discussing a case in which epistemic purity
alone is relevant. This is appropriate, since we are considering a
(fairly) well-defined hypothesis - can a robot controlled by a computer
program exhibit behavior indistinguishable from natural human behavior?
The parameters of the discussion are also (fairly) clear - the computer is
to be allowed any finite speed, storage, and number of processors, and is
otherwise constrained only by computability theory. The human's behavior
is constrained only by the laws of physics.

As I understand Radford's objection, he is insisting that any claim that
computers are capable of modeling all real processes must address the
issues of (a) other laws of physics and (b) phenomena which are
describable but not explainable. Since I constructed my argument in terms
of making measurements, and science proceeds by constructing abstract
deductive theories, tested by measurement and observation, I think the
argument stands. Deduction is by no means the whole of science, but it
cannot be abandoned by scientists. My argument (before the Peano Piano)
is not based on the limitations of the processes to be modeled, but on the
limitations of the observers' ability to make persuasive objections to the
model.

>I think the point we have to confront here is that CONVINCING reasoning lies at
>neither extreme. If we are talking about "the process by which real
>mathematicians [or any other thinkers, for that matter] establish new
>truths," then we have no reason to believe that deductive systems are
>the only mechanisms which may kick in.

Stephen, with this point you have returned to the epistemic issue of
discovering new truths. Your example, which involved a practical
decision, does not directly bear on the epistemic issue. That said, I
agree with this point entirely.
(I am amazed at the difficulty I am having in finding an area of
disagreement with you, given the volume of objections and
counter-objections. I suspect that I am not making clear how narrow the
scope of my generalizations is, which leads you to cite the broad range of
phenomena which you (rightly) believe to be relevant to AI as a whole.)

Let me emphasize again how important the issue of practical reason is.
Although practical decision-making can be overlooked in *some* theoretical
contexts, in the daily behavior of human beings it is absolutely
impossible to separate epistemic from practical issues. I cannot
recommend the work of Donald Davidson too highly on this issue. His essay
"Belief and the Basis of Meaning", in _Inquiries into Truth and
Interpretation_, is irreplaceable. He concludes that we cannot ascribe
beliefs to an agent without also ascribing desires, and rational pursuit
of those desires.

> Instead, as Marvin Minsky pointed
>out in THE SOCIETY OF MIND, such systems for formal reasoning are better
>qualified to SUMMARIZE and JUSTIFY those "new truths," once they have been
>encountered.

Could you give me a page or chapter reference on this?

> The question then reduces to whether or not there are MECHANISMS,
>deductive or otherwise, for establishing (or even hypothesizing) them. AI
>argues that such mechanisms do, indeed, exist;

It is certainly the case that no *effective* finitary mechanisms exist for
establishing new truths in general, although partially effective
mechanisms might. Do you have Edelman in mind here? Natural selection is
a fine example of a partially effective finitary mechanism (heredity at
least is finitary, though the selection process can involve non-finitary
interactions with the environment). Whether selection can be totally
effective is a very interesting issue. That may not even be necessary.

I mean by "effective" a property of processes analogous to recursiveness
in functions.
Before a semantic notion such as "truth", or a logical notion such as
recursiveness (or finiteness, for that matter) can be applied to a process
or its results, the process must be interpreted as being at least
linguistic. "Truth", "effectiveness", et al. are examples of normative
concepts.

> but, because we have given
>so much of our attention to deductive systems, we are probably still a far
>cry from a better understanding of their nature.

At the risk of tedium (who said "that's a certainty" :-) let me emphasize
again that deduction must be applied only in models of argumentation. In
the case of determinist metaphysics, deduction is *not* applied to the
relation between objects, states, processes, or any other real thing, but
only to assertions about those things. I can't speak for anyone else's
use of "deduction", but (excepting mistakes) my use of "deduction" (and
all logical terms) is restricted to the case of assertions.

Ken Presting