blenko-tom@cs.yale.edu (Tom Blenko) (08/08/90)
In article <53619@iuvax.cs.indiana.edu> dave@cogsci.indiana.edu (David Chalmers) writes:
|...
|This is still a misstatement of Searle's position. He is deeply
|opposed to *any* behavioural criteria for intelligence.

Perhaps your language is just a bit casual here, but I know of no
reason to doubt that Searle has behavioral criteria for intelligence --
his claim is that they are not sufficient, not that they aren't
necessary.

|Actually, Searle's argument is all about introspection. There might be other
|arguments about the topic that aren't, but those arguments certainly aren't
|Searle's.

I don't think Searle would agree with this. My recollection is that
his reasoning goes something like this: we individually have conscious
(introspective) experience and we have come to ascribe that
consciousness to other (similar) intelligent entities. If we were to
discover that an entity displaying the necessary behavioral
characteristics nevertheless lacked introspective experience, we would
not consider it intelligent.

Having conscious experiences and establishing the (shared)
understanding that other intelligent entities have similar experiences
are two quite different things. Evidence for the former is immediately
accessible and relates directly to introspection. Evidence for the
latter is a far more complicated issue, and I propose has more to do
with communication, shared experience, and social conventions than with
introspection, per se.

|At the bottom line, there are two quite separate Chinese Room problems: one
|about consciousness (phenomenology), and the other about intentionality
|(semantics). These problems are quite separate -- the correct answer to the
|first is the Systems Reply, and the correct answer to the second is the Robot
|Reply. One of the biggest sources of confusion in the entire literature
|on the Chinese Room stems from Searle conflating these two issues.
You've said this in two different messages now, I believe, and I
suspect many people would be sympathetic with this position. However, I
don't think you've done much to argue against Searle -- in particular,
to show which part of Searle's argument you are disagreeing with, and
precisely why you claim it is incorrect.

	Tom
forbis@milton.u.washington.edu (Gary Forbis) (08/08/90)
Some approaches to the other minds problem cause me to have serious concerns.

In article <25761@cs.yale.edu> blenko-tom@cs.yale.edu (Tom Blenko) writes:
>In article <53619@iuvax.cs.indiana.edu> dave@cogsci.indiana.edu (David Chalmers) writes:
>|Searle's argument is all about introspection.

>I don't think Searle would agree with this. My recollection is that
>his reasoning goes something like this: we individually have conscious
>(introspective) experience and we have come to ascribe that
>consciousness to other (similar) intelligent entities. If we were to
>discover that an entity displaying the necessary behavioral
>characteristics nevertheless lacked introspective experience, we would
>not consider it intelligent.

>Evidence for the former is immediately
>accessible and relates directly to introspection. Evidence for the
>latter [...] has more to do
>with communication, shared experience, and social conventions than with
>introspection, per se.

"Similar" is a very subjective attribute. I will come back to this after
I consider the clause in which it appears.

I am confused about what is claimed in "we have come to ascribe that
consciousness to other (similar) intelligent entities." At first I
thought "similar" applied to "intelligent entities," but then I saw the
paragraph ends "we would not consider it intelligent." I have come to
reread the passage as "we individually have conscious (introspective)
experiences and have come to ascribe consciousness and intelligence to
other similar entities."

The problem comes down to defining "similar entities." The phrase "If we
were to discover that an entity displaying the necessary behavioral
characteristics nevertheless lacked introspective experience, we would
not consider it intelligent" (I presume this is due to lack of
similarity) leads me to discount behavior as a basis for similarity.
How are we to determine capability for "introspective experience" if not
through behavior?
I am left with "similar" implying "physically similar." I am very
uncomfortable with basing similarity on physical attributes. I can
imagine gender, skin color, etc. being used as a basis for physical
similarity. When these attributes are used as a basis for intelligence,
discrimination can be justified. I'm sure no one in this group would use
skin color as a basis for intelligence, but any other physical attribute
is just as baseless. Each human is different from the next, and we
cannot know that the basis for our "introspective experience" does not
lie somewhere within this difference. How are we to determine capability
for "introspective experience" if not through behavior?

--gary forbis
dave@cogsci.indiana.edu (David Chalmers) (08/08/90)
In article <25761@cs.yale.edu> blenko-tom@cs.yale.edu (Tom Blenko) writes:
>In article <53619@iuvax.cs.indiana.edu> dave@cogsci.indiana.edu (David Chalmers) writes:
>|This is still a misstatement of Searle's position. He is deeply
>|opposed to *any* behavioural criteria for intelligence.
>
>Perhaps your language is just a bit casual here, but I know of no
>reason to doubt that Searle has behavioral criteria for intelligence --
>his claim is that they are not sufficient, not that they aren't
>necessary.

Actually, I don't think that Searle believes behaviour is even a
*necessary* criterion. I'm pretty sure he's open to the possibility of
brain-in-vat style intelligence. But you're right, the claim I was
making was that Searle is opposed to behaviour as a sufficient criterion.

>|Actually, Searle's argument is all about introspection. There might be other
>|arguments about the topic that aren't, but those arguments certainly aren't
>|Searle's.
>
>I don't think Searle would agree with this. My recollection is that
>his reasoning goes something like this: we individually have conscious
>(introspective) experience and we have come to ascribe that
>consciousness to other (similar) intelligent entities. If we were to
>discover that an entity displaying the necessary behavioral
>characteristics nevertheless lacked introspective experience, we would
>not consider it intelligent.

I agree with this statement of Searle's position, but I don't see what
you're arguing with. The point is that Searle believes "intentionality
=> introspectability". Therefore "~introspectability => ~intentionality".
And this is how his arguments go: demonstrate (arguably) that certain
entities don't have introspectability (consciousness), and so (by the
implicit premise) don't have intentionality.
Given that the implicit premise is highly disputable and not even argued
for in the original paper (although he does produce some arguments in
this direction in the BBS paper forthcoming this year), we may conclude
that the arguments, so far as they go, should really be taken to be
about introspectability/consciousness/phenomenology, not about
intentionality.

>Having conscious experiences and establishing the (shared)
>understanding that other intelligent entities have similar experiences
>are two quite different things. Evidence for the former is immediately
>accessible and relates directly to introspection. Evidence for the
>latter is a far more complicated issue, and I propose has more to do
>with communication, shared experience, and social conventions than with
>introspection, per se.

I agree, and again don't see a source of disagreement. The question of
how we might establish that certain creatures have conscious experience
is independent of the question about what conclusions we might draw,
once we know that they have such experiences. Direct introspection is
the easiest way to answer the first question, but it's rather limited,
so one hopes it's not the only way. Analogy is another method that
seems to serve us well in real life.

>| [stuff re the two different chinese room arguments, about consciousness
>| and introspection]
>
>You've said this in two different messages now, I believe, and I
>suspect many people would be sympathetic with this position. However, I
>don't think you've done much to argue against Searle -- in particular,
>to show which part of Searle's argument you are disagreeing with, and
>precisely why you claim it is incorrect.

Apologies for repetition. As for not arguing against Searle, that's
quite deliberate. The present discussion is just clarification of the
structure of Searle's argument.

--
Dave Chalmers (dave@cogsci.indiana.edu)
Concepts and Cognition, Indiana University.

"It is not the least charm of a theory that it is refutable."
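[The inference pattern sketched in the post above is ordinary contraposition plus modus tollens; written out in logical notation (the labels I and C are mine, standing for "has intentionality" and "has introspectability/consciousness"):]

```latex
% Searle's implicit premise, as reconstructed in the post above:
%   intentionality requires introspectability
I \rightarrow C
% which is logically equivalent to its contrapositive:
\lnot C \rightarrow \lnot I
% The Chinese Room argument then runs by modus tollens:
% show the Room lacks C, and conclude that it lacks I.
```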
blenko-tom@cs.yale.edu (Tom Blenko) (08/09/90)
In article <6022@milton.u.washington.edu> forbis@milton.u.washington.edu (Gary Forbis) writes:
|... Each human is different
|from the next and we cannot know the basis for our "introspective experience"
|does not lie somewhere within this difference. How are we to determine
|capability for "introspective experience" if not through behavior?

How does one determine whether a city has civic pride (or to what degree
it might be said to have civic pride)? Yet one can rank-order the cities
one has lived in according to the civic pride one feels they possess.
And one can arrive at such judgements without participating in the life
of the city.

	Tom
blenko-tom@cs.yale.edu (Tom Blenko) (08/09/90)
In article <53635@iuvax.cs.indiana.edu> dave@cogsci.indiana.edu (David Chalmers) writes:
|The point is that Searle believes "intentionality =>
|introspectability". Therefore "~introspectability => ~intentionality". And
|this is how his arguments go: demonstrate (arguably) that certain entities
|don't have introspectability (consciousness), and so (by the implicit premise)
|don't have intentionality. Given that the implicit premise is highly
|disputable and not even argued for in the original paper (although he does
|produce some arguments in this direction in the BBS paper forthcoming this
|year), we may conclude that the arguments, so far as they go, should
|really be taken to be about introspectability/consciousness/phenomenology,
|not about intentionality.

Other intentional states Searle mentions are joy, fear, love, hunger,
and exhilaration. I understand his claim to be that if you believe an
entity has all of these intentional properties, and then you discover
that it doesn't really, then you will no longer suppose it to have a
mind. Don't you think most people would agree?

|The question of
|how we might establish that certain creatures have conscious experience is
|independent of the question about what conclusions we might draw, once
|we know that they have such experiences. Direct introspection is the easiest
|way to answer the first question, but it's rather limited, so one hopes
|it's not the only way. Analogy is another method that seems to serve us
|well in real life.

But I think this misses an important point (in the same way that many
previous postings to this group have): irrespective of what (objective)
evidence for X is available or justifiable, each of us concludes X or
not-X every day. Each of us, for example, necessarily has an immediate,
comprehensive, and robust theory of the physical world. The theory is
sure to be wrong in many respects, and (I presume) inconsistent and
incoherent as well.
It is "correct enough" for us to successfully navigate through the
physical world, so it must reflect information about the (real) physical
world. It is also flavored by influences from other sources, e.g.,
education, social convention, etc. So X or not-X is not necessarily an
objective property of an entity examined in isolation, but may reflect
as well the conventions society employs in its treatment of the entity.

|As for not arguing against Searle, that's quite
|deliberate. The present discussion is just clarification of the structure
|of Searle's argument.

I don't think one understands Searle's (or anyone else's) argument until
one can both attack and defend it.

	Tom
dave@cogsci.indiana.edu (David Chalmers) (08/09/90)
In article <25771@cs.yale.edu> blenko-tom@cs.yale.edu (Tom Blenko) writes:
>Other intentional states Searle mentions are joy, fear, love, hunger,
>and exhilaration. I understand his claim to be that if you believe an
>entity has all of these intentional properties, and then you discover
>that it doesn't really, then you will no longer suppose it to have
>a mind. Don't you think most people would agree?

Hey, I agree with this to a large extent. I'm a great believer in the
importance of consciousness to mind. Given that Searle's argument
proceeds by establishing a lack of consciousness in certain creatures,
the question that we must ask is: for what properties P, traditionally
regarded as mental, is it true that P must necessarily be accompanied by
consciousness?

Now joy, hunger, even understanding may well be such properties.
Therefore, if the Chinese Room is not conscious, then it cannot
understand, feel joy, be hungry, etc. The only claim I made in the
original article is that intentionality is not such a property -- or at
least that the link between intentionality and consciousness is highly
disputable. Most of the discussion of intentionality in the last 20
years of the philosophy of mind has proceeded without ever using the
notion of consciousness, and I take the lesson to be that the two are
quite separable. Searle may disagree, but he doesn't produce any
arguments for this in the BBS paper.

Assuming for now that Searle's argument does establish the
non-consciousness of the Chinese Room (which it doesn't, but never mind
that for now), then we may go along with Searle in saying that certain
intentional attributions like "understands X" may not be made to the
Chinese Room. However, this does not imply that the Chinese Room lacks
intentionality. The problem with the above attribution lies in the
"understands", not in the "X". So even if the Chinese Room does not
"understand X", it may still "schmunderstand X", or something.
So, while Searle's arguments might establish that certain intentional
properties may not be attributed to the Chinese Room, they do not
establish that *no* intentional properties may be ascribed to it.
("Belief", for instance, is an intentional property that is often taken
to be quite independent of consciousness.)

Note that I'm not saying *anything* about "mind" or "intelligence". I'd
probably be happy to concede that lack of consciousness implies lack of
mind, although "mind" is a very ambiguous, multi-faceted term. The point
is solely a point about intentionality -- i.e. semantics. I don't see
any reason why semantics, even "intrinsic semantics", should require
full-blown consciousness.

>But I think this misses an important point (in the same way that many
>previous postings to this group have): irrespective of what (objective)
>evidence for X is available or justifiable, each of us concludes X or
>not-X every day.

This is true. However, concluding X does not make X true. Our
"everyday" reasoning is fallible. One hopes, although there is no
guarantee, that rigorous science and philosophy will allow us to
establish X or not-X in a more reliable way.

Unless you want to argue that there is no "fact-of-the-matter" about X,
over and above our everyday judgments. This may be true for certain X
(such as "likeability", perhaps), but I don't see any reason why it
should be true for "consciousness". At the very least, it is a premise
of Searle's argument that there is an objective fact-of-the-matter about
such questions, and I tend to agree with him about this.
"Intentionality" is slightly trickier. Searle certainly believes there
is a fact-of-the-matter about such things. On the other hand, Dan
Dennett has made a career out of the argument that attributions of
intentionality are for the most part observer-relative.

>I don't think one understands Searle's (or anyone else's) argument
>until one can both attack and defend it.
I agree with this, and have done both many times (the former more often
than the latter, but the latter is sometimes necessary when Searle's
arguments are misinterpreted or underestimated). Not now, however.

--
Dave Chalmers (dave@cogsci.indiana.edu)
Concepts and Cognition, Indiana University.

"It is not the least charm of a theory that it is refutable."
forbis@milton.u.washington.edu (Gary Forbis) (09/05/90)
You know, I've seen this line several times before. After some personal
mail I think I understand what is being said, but then the next time I
see it I find once again I cannot make heads or tails of it. I would
like to see what the consensus opinion is. Would those who feel so
inclined send me a brief interpretation of the following quoted text,
giving particular emphasis to how it relates to the other minds problem?

In article <25770@cs.yale.edu> blenko-tom@cs.yale.edu (Tom Blenko) writes:
>How does one determine whether a city has civic pride (or to what
>degree it might be said to have civic pride)? Yet one can rank-order
>the cities one has lived in according to the civic pride one feels they
>possess. And one can arrive at such judgements without participating
>in the life of the city.

Thanks,

gary forbis@milton.u.washington.edu