kp@uts.amdahl.com (Ken Presting) (12/29/89)
In article <4702@itivax.iti.org> dhw@itivax.UUCP (David H. West) writes:
>If I'm trying to choose between nearly-equally-preferred alternatives,
>fluctuations may tip the balance, but IMO the "thought" aspect here
>lies in the ability to evaluate utility reasonably well, not in the
>ability to evaluate it perfectly.  Internal and external
>fluctuations also affect my ability to carry out my intentions, but that
>doesn't [in itself!] make me unintelligent or non-conscious, just
>not omnipotent.

I couldn't agree with this more.  (I've even tried to get a new thread
started on it :-).

Evaluating a utility function and applying a decision procedure seems to
me to be the formal essence of cognition.  Any system that does this has,
I suggest, the beginnings of intelligence.  Several other qualities are
relevant (use of a public language, self-modification, susceptibility to
argument) to full "personhood", but you have to respect a machine that
knows what it wants and knows how to get it.

There is no reason a finitary system couldn't do this - that's why I
believe in strong AI.
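[Editor's sketch: Presting's "evaluate a utility function, then apply a
decision procedure" picture can be made concrete in a few lines of C.  The
options and their utility scores below are invented placeholders, not
anything proposed in the thread; the point is only the shape of the
procedure: score each option, pick the best.]

    /* Minimal sketch of a utility-maximizing decision procedure.        */
    /* Options and utility values are illustrative placeholders only.    */
    #include <stdio.h>

    #define N_OPTIONS 2

    int main(void)
    {
        const char *options[N_OPTIONS] = { "turn heater on", "leave heater off" };
        double      utility[N_OPTIONS] = { -1.0, -4.5 };   /* hypothetical scores */
        int best = 0, i;

        for (i = 1; i < N_OPTIONS; i++)       /* decision procedure: pick the   */
            if (utility[i] > utility[best])   /* option with the highest utility */
                best = i;

        printf("decision: %s\n", options[best]);
        return 0;
    }

[Whether scoring and selecting in this sense amounts to "the beginnings of
intelligence" is, of course, exactly what the rest of the thread disputes.]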
turpin@cs.utexas.edu (Russell Turpin) (12/29/89)
In article <0cTG02uf793w01@amdahl.uts.amdahl.com>, kp@uts.amdahl.com (Ken Presting) writes:
> Evaluating a utility function and applying a decision procedure seems to
> me to be the formal essence of cognition.  Any system that does this has,
> I suggest, the beginnings of intelligence.  Several other qualities are
> relevant (use of a public language, self-modification, susceptibility to
> argument) to full "personhood", but you have to respect a machine that
> knows what it wants and knows how to get it.

You're in good company.  John McCarthy makes the same kind of
argument.  According to him, a thermostat is capable of three
different thoughts: "it's too hot", "it's too cold", and "the
temperature here is just right".

Russell
kp@uts.amdahl.com (Ken Presting) (12/30/89)
In article <7462@cs.utexas.edu> turpin@cs.utexas.edu (Russell Turpin) writes:
>In article <0cTG02uf793w01@amdahl.uts.amdahl.com>, kp@uts.amdahl.com (Ken Presting) writes:
>> Evaluating a utility function and applying a decision procedure seems to
>> me to be the formal essence of cognition.  Any system that does this has,
>> I suggest, the beginnings of intelligence.  Several other qualities are
>> relevant (use of a public language, self-modification, susceptibility to
>> argument) to full "personhood", but you have to respect a machine that
>> knows what it wants and knows how to get it.
>
>You're in good company.  John McCarthy makes the same kind of
>argument.  According to him, a thermostat is capable of three
>different thoughts: "it's too hot", "it's too cold", and "the
>temperature here is just right".
>
>Russell

This is a familiar can of worms.  A thermostat is certainly less
intelligent than your average worm.  I would go so far as to grant that
the thermostat uses a public language - the setting dial is calibrated in
print.  But the fragment of language recognized is so small that the worm
gets the edge by virtue of a much more complex utility function.

The big deficiency in the thermostat, the worm, and every attempt at AI
I know of, is that you can't get them to alter their decision procedure
by inputting arguments via the interface that recognizes the public
language.  Of course, some humans have trouble in this regard, but anyone
who's completely immune to reason is incompetent or insane.
turpin@cs.utexas.edu (Russell Turpin) (12/30/89)
In article <5cK702mf795h01@amdahl.uts.amdahl.com>, kp@uts.amdahl.com (Ken Presting) writes:
> I would go so far as to grant that the thermostat uses a public language -
> the setting dial is calibrated in print. ...
>
> The big deficiency in the thermostat, the worm, and every attempt at AI
> I know of, is that you can't get them to alter their decision procedure
> by inputting arguments via the interface that recognizes the public
> language. ...

I guess I'll have to play the devil's advocate here, even if it is just
to point out that you want to be more specific in your criticism.

I happen to argue with my thermostat quite frequently.  Quite often it
thinks it is just right when I think it is too cold, or vice versa.  It
has a neat device, a lever calibrated in the same public language, that
permits me to tell it what I think "too cold" means.  Sometimes my
arguments even work, and the thermostat modifies its decision procedure
to become more to my liking.

Russell
ian@oravax.UUCP (Ian Sutherland) (12/30/89)
In article <5cK702mf795h01@amdahl.uts.amdahl.com> kp@amdahl.uts.amdahl.com (Ken Presting) writes:
>The big deficiency in the thermostat, the worm, and every attempt at AI
>I know of, is that you can't get them to alter their decision procedure
>by inputting arguments via the interface that recognizes the public
>language.

Doesn't the thermostat change its decision procedure when I turn the
dial?  Of course, I can't convince it to adopt a decision procedure which
ignores the setting on the dial entirely, but I probably couldn't
convince you to adopt a decision procedure which ignores which neurons in
your brain are firing either.  Both you and the thermostat change within
certain limits defined by your construction.  The difference seems to me
to be one of degree rather than kind.
--
Ian Sutherland          ian%oravax.uucp@cu-arpa.cs.cornell.edu
        Sans Peur
jiii@visdc.UUCP (John E Van Deusen III) (12/31/89)
In article <7462@cs.utexas.edu> turpin@cs.utexas.edu (Russell Turpin) writes:
> ...
> You're in good company.  John McCarthy makes the same kind of
> argument.  According to him, a thermostat is capable of three
> different thoughts: "it's too hot", "it's too cold", and "the
> temperature here is just right". ...

The device purported to be having thoughts about the ambient temperature
is in fact constructed from two thermostats, (one for too hot and the
other for too cold), and the actual situation is being obfuscated by
complexity.  A single thermostat is only capable of answering the
question, "Is the ambient temperature above a certain design value or
user setting - true or false?"

It is NOT capable of "knowing" the answer to the decision problem that it
answers.  This is because a thermostat can not send electrical current
through its own circuit, and thus it has no way to test its own state.
It CAN NOT develop thoughts about the ambient temperature, because it has
no access to a thermostat!

Suppose that we wire in another thermostat, (and a few supercomputers),
so the first one could obtain information about the ambient temperature.
Would the first thermostat then begin having thoughts about the ambient
temperature?  No, for the reason that it is not in the nature of its
being to care about the temperature.  Temperature is irrelevant to its
existence as a thermostat.  It is not a matter of degree or complexity;
it is a matter of essence.

This does not say that it is impossible to build machines that think.
Such machines, however, will not scale up from the rudimentary thoughts
of nuts and bolts; and they probably will not scale up from the equally
non-existent thoughts of universal Turing machines and algorithms.
--
John E Van Deusen III, PO Box 9283, Boise, ID 83707, (208) 343-1865
uunet!visdc!jiii
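[Editor's sketch: the construction Van Deusen describes - McCarthy's
"three thoughts" device built from two thermostats - amounts to two
threshold comparisons.  The setpoints below are invented for illustration.]

    /* Two threshold tests ("thermostats") yield the three states that   */
    /* McCarthy glosses as thoughts.  Setpoints are illustrative only.   */
    #include <stdio.h>

    const char *report(double temp, double cold_setpoint, double hot_setpoint)
    {
        if (temp < cold_setpoint) return "it's too cold";   /* first thermostat trips  */
        if (temp > hot_setpoint)  return "it's too hot";    /* second thermostat trips */
        return "the temperature here is just right";        /* neither trips           */
    }

    int main(void)
    {
        printf("%s\n", report(62.0, 65.0, 75.0));   /* prints the "too cold" state   */
        printf("%s\n", report(70.0, 65.0, 75.0));   /* prints the "just right" state */
        return 0;
    }

[Whether anything in those few lines "knows" the answer it computes is the
question the post above answers in the negative.]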
sarge@metapsy.UUCP (Sarge Gerbode) (01/01/90)
In article <0cTG02uf793w01@amdahl.uts.amdahl.com> kp@amdahl.uts.amdahl.com (Ken Presting) writes:
>Evaluating a utility function and applying a decision procedure seems to
>me to be the formal essence of cognition.  Any system that does this has,
>I suggest, the beginnings of intelligence.  Several other qualities are
>relevant (use of a public language, self-modification, susceptibility to
>argument) to full "personhood", but you have to respect a machine that
>knows what it wants and knows how to get it.
>There is no reason a finitary system couldn't do this - that's why I
>believe in strong AI.

I would agree that the ability to decide (which implies the ability to
*intend*) is crucial to the process of understanding.  But intending (or
deciding) requires a conscious being.  It is not enough to say that given
condition A, result A' obtains, whereas given condition B, result B'
obtains.  The same could be said of an eroded hillside: When it rains,
you get mudslides; when it doesn't rain, you get cracks.  That doesn't
mean the hillside is *deciding* anything.  It behaves that way because
that is the physics of the situation.  The same is true of a computer,
except that what occurs with a computer is simpler and more predictable.

It is important to recognize the difference between reality and metaphor.
We tend to use anthropomorphic metaphors when talking about machines.
And that's OK, until we get so wrapped up in the metaphor that we lose
sight of the fact that it *is* a metaphor.  It's the same problem that
exists with the metaphor of "mental illness", used to describe confusion,
unhappiness, and aberrant behavior.  (I've heard the term "computer
doctor" used in jest and the term "virus", but nobody yet thinks that in
debugging a program we are curing an illness.)
--
Sarge Gerbode -- UUCP:  pyramid!thirdi!metapsy!sarge
Institute for Research in Metapsychology
431 Burgess Drive; Menlo Park, CA 94025
turney@njord.cs.cornell.edu (Jenn Turney) (01/09/90)
In article <5cK702mf795h01@amdahl.uts.amdahl.com> kp@amdahl.uts.amdahl.com (Ken Presting) writes:
>The big deficiency ...
>... you can't get them to alter their decision procedure
>by inputting arguments via the interface that recognizes the public
>language.

In article <1213@oravax.UUCP> ian@oravax.odyssey.UUCP (Ian Sutherland) writes:
>Doesn't the thermostat change its decision procedure when I turn the
>dial?  Of course, I can't convince it to adopt a decision procedure which
>ignores the setting on the dial entirely, but I probably couldn't
>convince you to adopt a decision procedure which ignores which neurons
>in your brain are firing either.

A clarification and disambiguation should be made before this discussion
continues (if it does).  I missed the start of it so forgive me if this
has already been covered, but judging from the content of these messages,
I suspect it has not been.

As someone else (sorry, I can't attribute it) pointed out, it is tenuous
to say that the thermostat "decides" whether it is above or below a
certain temperature.  It might be more accurate to state that the
thermostat EMBODIES a decision procedure.

The answer to the question of whether the decision procedure changes when
the dial is turned depends on what you consider to be the decision
procedure.  Is the procedure followed in deciding that the temperature is
above 65 different from the one followed in deciding that the temperature
is above 70?  If the answer is no, then the first excerpt makes sense
(and doesn't otherwise).  Mr. Sutherland seems to realize the ambiguity
but hasn't explicitly stated it.  Turning the dial changes the decision
procedure only if you consider a decision procedure to be an
_instantiation_ of the function "Is the temperature below X?".

I've intentionally left this question open.  I think arguments can be
made both ways.  If you want to discuss "decision procedures", that's
fine, but make sure your audience knows what meaning you've assigned to
the phrase.

Jenn
________
| |    turney@svax.cs.cornell.edu | Let us a little permit Nature to take
| | |  Dept. of Computer Science  | her own way; she better understands her
\_| |  Cornell University         | own affairs than we.    -- Montaigne
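[Editor's sketch: Turney's ambiguity, put in C terms.  Read the fixed
function "is the temperature below X?" as the parameterized procedure, and
read a particular thermostat, with its setting bound in, as an
instantiation of that procedure.  Turning the dial changes the
instantiation but leaves the parameterized function untouched.  The names
and setpoints below are invented for illustration.]

    /* The parameterized procedure: the same code for every dial setting. */
    #include <stdio.h>

    typedef struct { double setpoint; } thermostat;

    int below_threshold(double temp, double setpoint)
    {
        return temp < setpoint;
    }

    /* One instantiation: the procedure with this thermostat's setting bound in. */
    int heater_should_run(const thermostat *t, double temp)
    {
        return below_threshold(temp, t->setpoint);
    }

    int main(void)
    {
        thermostat t = { 65.0 };
        printf("%d\n", heater_should_run(&t, 63.0));  /* 1: below 65, heat     */
        t.setpoint = 70.0;                            /* "turning the dial"    */
        printf("%d\n", heater_should_run(&t, 68.0));  /* 1: below 70, heat now */
        return 0;
    }

[On the first reading the dial changes only data; on the second it changes
the instantiated procedure, which is the reading Sutherland needs.]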
bloch@thor.ucsd.edu (Steve Bloch) (01/11/90)
turney@cs.cornell.edu (Jenn Turney) writes:
>ian@oravax.odyssey.UUCP (Ian Sutherland) writes:
>>Doesn't the thermostat change its decision procedure when I turn the
>>dial?
>As someone else (sorry, I can't attribute it) pointed out, it is tenuous
>to say that the thermostat "decides" whether it is above or below a
>certain temperature...  The answer to the question of whether the
>decision procedure changes when the dial is turned depends on what you
>consider to be the decision procedure...
>Turning the dial changes the decision procedure only if you consider
>a decision procedure to be an _instantiation_ of the function "Is the
>temperature below X?".

I would think the decision procedure is not just DETECTING whether some
criterion is met, but DECIDING WHAT TO DO -- in this case, whether or not
to turn on the heater.  If that's what we're interested in, the decision
procedure IS changing, in that the action will be carried out under
different conditions than it would have been before.

"The above opinions are my own.  But that's just my opinion."
Stephen Bloch          bloch%cs@ucsd.edu
kerry@bcsaic.UUCP (Kerry Strand) (01/14/90)
In article <7691@sdcsvax.UCSD.Edu> bloch@thor.UUCP (Steve Bloch) writes:
>turney@cs.cornell.edu (Jenn Turney) writes:
>> The answer to the question of whether the
>>decision procedure changes when the dial is turned depends on what you
>>consider to be the decision procedure...
>
>I would think the decision procedure is not just DETECTING whether
>some criterion is met, but DECIDING WHAT TO DO -- in this case,
>whether or not to turn on the heater.  If that's what we're interested
>in, the decision procedure IS changing, in that the action will be
>carried out under different conditions than it would have been
>before.

A thermostat is as much a decision-making entity as is the Great Divide
that runs down the backbone of the Rocky Mountains.  The continental
divide determines whether a particular drop of rain will flow to the
Atlantic Ocean or to the Pacific Ocean.  I can "change the decision
procedure" with a shovel by moving dirt that forms the ridge eastwardly,
which will result in more drops being selected to flow to the Pacific.

You may object that there is no feedback to the Great Divide on the
effect of its action.  But consider a thermostat that has been
disconnected from the heater.  It still functions the same, switching its
switch as the temperature changes.  Is it really "deciding what to do?"
--
Kerry Strand                          kerry@atc.boeing.com
                                      uw-beaver!ssc-vax!bcsaic!kerry
Boeing Advanced Technology Center     (206) 865-3412
P.O. Box 24346 MS 7L-64
Seattle, WA 98124-0346
jeff@aiai.ed.ac.uk (Jeff Dalton) (01/15/90)
In article <1213@oravax.UUCP> ian@oravax.odyssey.UUCP (Ian Sutherland) writes:
>Doesn't the thermostat change its decision procedure when I turn the
>dial? [...]
> Both you and the thermostat change
>within certain limits defined by your construction.  The difference
>seems to me to be one of degree rather than kind.

That's one of the problems: it "seems to you", but maybe it seems some
other way to someone else.  How do we decide which of you is right?

Of course, things might be defined so that it's pretty clear what the
answer is, but then you run the risk of confining yourself to something
trivial when there are much more interesting issues lurking nearby.

My view is that we don't yet know enough to say what the interesting
similarities and differences are, and so shouldn't think that the
concepts we can define now are necessarily the ones we really ought to
care about.

But suppose you're right and it is just a matter of degree.  That may not
tell us all that much in the end.  For example, the difference between
moving at 3 miles per hour and moving at 90 percent of the speed of light
is a matter of degree, but there are nonetheless significant effects that
aren't noticeable at the lower speed.  Our current theories may lead us
to say there aren't any interesting effects as we move away from the
thermostat case, but why should we suppose those theories are correct?

-- Jeff
jeff@aiai.ed.ac.uk (Jeff Dalton) (01/16/90)
In article <984@metapsy.UUCP> sarge@metapsy.UUCP (Sarge Gerbode) writes:
>I would agree that the ability to decide (which implies the ability
>to *intend*) is crucial to the process of understanding.  But
>intending (or deciding) requires a conscious being.  It is not enough
>to say that given condition A, result A' obtains, whereas given
>condition B, result B' obtains.  The same could be said of an eroded
>hillside: When it rains, you get mudslides; when it doesn't rain,
>you get cracks.  That doesn't mean the hillside is *deciding*
>anything.  It behaves that way because that is the physics of the
>situation.  The same is true of a computer, except that what occurs
>with a computer is simpler and more predictable.

That's a good point, but I think we have to allow for the possibility
that we behave as we do because of the physics of the situation too.
Dennett's book _Elbow Room_ is worth reading on this point. However,
that wouldn't mean intending had to occur at every level, so your
point would still be valid.
-- Jeff
kp@uts.amdahl.com (Ken Presting) (01/16/90)
In article <1213@oravax.UUCP> ian@oravax.odyssey.UUCP (Ian Sutherland) writes:
>Doesn't the thermostat change its decision procedure when I turn the
>dial?
> .....  Both you and the thermostat change
>within certain limits defined by your construction.  The difference
>seems to me to be one of degree rather than kind.

There is an important difference in kind if you focus on the "public
language".  The subset of language used with the thermostat does not
include sequences of sentences, so there is no possibility of the
thermostat engaging in an argument.

It's not an argument, it's just contradiction.
No, it's not.
Yes it is!

(apologies to Monty Python :-)
adamsf@turing.cs.rpi.edu (Frank Adams) (01/18/90)
In article <1551@skye.ed.ac.uk> jeff@aiai.UUCP (Jeff Dalton) writes:
>But suppose you're right and it is just a matter of degree.  That may
>not tell us all that much in the end. [...]
>Our current theories may lead us to say there aren't any interesting
>effects as we move away from the thermostat case, but why should we
>suppose those theories are correct?

No, you're missing the point.  Of course there *are* interesting effects
as we move away from the thermostat case -- talking with people is *much*
more interesting than talking with thermostats.  But if the difference
*is* only one of degree, the strong AI principle is validated, and it is
possible to make computers that think.