berleant@ut-sally.UUCP (Dan Berleant) (07/04/87)
In article <955@mind.UUCP> harnad@mind.UUCP (Stevan Harnad) writes:

>Another red herring in Wittgenstein's "family resemblance" metaphor was
>the issue of negative and disjunctive features. Not-F is a perfectly good
>feature. So is Not-F & Not-G. Which quite naturally yields the
>disjunctive feature F-or-G [sic] [...]
>There's absolutely no reason to restrict "features" to monadic,
>conjunctive features that subjects can report by introspection.

To my mind, this reconciles the 'classical' view of concept
representation with the 'probabilistic' one. There may not be much
difference between a classical view augmented to allow *arbitrary*
boolean expressions of features (instead of just a conjunction), and a
probabilistic view which proposes a list of possibly-present features,
each associated with a number describing (e.g.) the probability of
category X given feature F. There is some difference, I think, which I
will ignore for the purposes of this posting.

>The problem in principle is whether there are any logical (and nonmagical)
>alternatives to a feature-set sufficient to sort the confusable
>alternatives correctly. I would argue that -- apart from contrived,
>gerrymandered cases that no one would want to argue formed the real
>basis of our ability to categorize -- there are none.

You may be dismissing too quickly the typicality and reaction-time
results that have been interpreted as supporting probabilistic and
exemplar-based category representations. We need to be able to explain
both these results and the observed correctness of categorization even
for atypical instances. Yes, we can classify even a platypus as a
mammal, but the fact that it takes longer is not irrelevant. The answer
may lie in the hypothesis that there are 2 representations for
categories: a 'core' of defining features, and a heuristic categorizer
that uses other features that are handy and useful rather than defining
(or maybe the heuristic categorizer uses exemplars instead of
features).
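One way to picture the two-representation hypothesis is as a fast
heuristic check backed by a slower core check. The sketch below is
purely illustrative: the feature names, the cue-counting rule, and the
toy mammal data are all my own assumptions, not claims about how minds
(or this posting's author) would actually implement it.

```python
# Minimal sketch of the two-representation hypothesis: a fast heuristic
# categorizer backed by a slower "core" of defining features.
# Feature names and the toy mammal data are illustrative assumptions.

CORE = {"has_fur", "nurses_young"}        # defining features (slow, sure)
HEURISTIC = {"live_birth", "four_legs"}   # handy-but-not-defining cues (fast)

def categorize(instance):
    """Return (is_mammal, pathway). Try the quick heuristic first;
    fall back to the defining core when the cues are ambiguous."""
    cues = sum(f in instance for f in HEURISTIC)
    if cues == len(HEURISTIC):    # every cue present: fast yes
        return True, "heuristic"
    if cues == 0:                 # no cue present: fast no
        return False, "heuristic"
    # Mixed cues (the platypus case): consult the slower core checker.
    return CORE <= instance, "core"

dog = {"has_fur", "nurses_young", "live_birth", "four_legs"}
platypus = {"has_fur", "nurses_young", "four_legs"}   # lays eggs

print(categorize(dog))       # fast heuristic pathway
print(categorize(platypus))  # atypical: slower core pathway, still correct
```

The extra step taken for the platypus is one way the longer reaction
times observed for atypical instances could fall out of such an
architecture while correctness is preserved.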
For example, my concept of thunder contains defining features like
'caused by lightning' and 'is an atmospheric phenomenon'. However, in
actual practice I am more likely to identify thunder using heuristic
features like 'rumbling noise' and 'associated with rainstorms'. The
heuristic categorizer then accounts for things like typicality and
reaction-time results in psychological experiments. The core is used as
a slower-but-surer checker, which ensures correctness (e.g. informing
me that a certain loud boom is thunder, while a loud rumbling truck
passing in the rain is not producing thunder, even though the faster
heuristic categorizer might have disagreed). Thus, we have 2 pathways
along which categories are grounded. What is the third?

Anders Weinstein writes that the semantic meaning of terms is not dealt
with by such arguments, but is nevertheless an important part of a
category. He illustrates this with the fact that thunder may mean (to
some!) 'angry gods nearby'. How is this aspect of a category to be
grounded? First of all, the terms in the definition presumably are
grounded via the 2 routes discussed above (even 'gods', I suppose). But
what basis is there for claiming the definition exists in the first
place? One basis is, you ask people and they tell you. But it would be
preferable to have an objective basis for grounding, since people are
not always the most reliable measuring instruments.

I pointed out in a previous article that perhaps logic can help here.
Consider a sentence with 2 variables, e.g. FISH SWIM, where FISH and
SWIM are variables. Thus the sentence we are considering is equivalent
to the sentence A B. Obviously, many bindings would satisfy the
sentence. For example, FISH=fish and SWIM=swim, or FISH=mountains and
SWIM=erode (because fish do swim, and mountains do erode). By adding
many more true sentences, the possible bindings of the variables become
much more constrained.
To illustrate, consider now the sentence FISH LIVE, where FISH and LIVE
are variables. Clearly the variable FISH can potentially be bound to
fewer meanings than before, since whatever FISH is bound to must both
SWIM and LIVE. As we add more and more sentences, eventually perhaps
there will no longer be a set of variable bindings for which all the
sentences (millions of them, maybe -- just go to a library and start
counting!) are true and in which FISH means 'mountain'.

On the other hand, this is only a hypothesis: maybe a Martian
attempting to deduce the meanings of English words could figure out a
way to do it consistently with FISH=mountain. Indeed, maybe your
neighbor has already figured out a way to do that, but as long as you
both agree on the truthfulness of all the sentences you are mutually
aware of, there is no way to tell! Shades of the Turing test...

My question is, would this method of 'grounding' the semantics of
categories be sufficient to do the job? Only in theory? Potentially in
practice? ...

Dan Berleant
UUCP: {gatech,ucbvax,ihnp4,seismo...& etc.}!ut-sally!berleant
ARPA: ai.berleant@r20.utexas.edu
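The variable-binding proposal above is concrete enough to run as a toy
constraint search. In the sketch below, the miniature world, the
vocabulary, and the extra assumption that distinct variables must be
bound to distinct words (decryption is injective) are all my own; it
only shows how adding a second true sentence prunes the consistent
bindings.

```python
# Toy sketch of the degrees-of-freedom argument: each added true
# sentence constrains which bindings of the variables remain consistent.
# The "world", vocabulary, and injectivity assumption are illustrative.
from itertools import permutations

# True sentences in the toy world, as (word, word) facts.
WORLD = {("fish", "swim"), ("fish", "live"),
         ("mountains", "erode"), ("birds", "fly"), ("birds", "live")}
VOCAB = ["fish", "mountains", "birds", "swim", "erode", "fly", "live"]

def consistent_bindings(sentences):
    """All injective bindings (distinct variables -> distinct words)
    under which every sentence comes out true in WORLD."""
    vars_ = sorted({v for pair in sentences for v in pair})
    results = []
    for words in permutations(VOCAB, len(vars_)):
        b = dict(zip(vars_, words))
        if all((b[x], b[y]) in WORLD for x, y in sentences):
            results.append(b)
    return results

# One sentence: FISH SWIM. FISH=mountains (with SWIM=erode) survives.
one = consistent_bindings([("FISH", "SWIM")])
# Two sentences: FISH SWIM and FISH LIVE. FISH=mountains is ruled out.
two = consistent_bindings([("FISH", "SWIM"), ("FISH", "LIVE")])

print(sorted({b["FISH"] for b in one}))  # includes 'mountains'
print(sorted({b["FISH"] for b in two}))  # 'mountains' eliminated
```

Note that FISH=birds still survives the second sentence (birds fly and
live), which is exactly the residual, undetectable alternative reading
the Martian/neighbor passage worries about.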
harnad@mind.UUCP (Stevan Harnad) (07/05/87)
In Article 181 of comp.cog-eng berleant@ut-sally.UUCP (Dan Berleant) of
U. Texas CS Dept., Austin, Texas writes:

> may not be much difference between a classical view augmented to...
> *arbitrary* boolean expressions of features...and a probabilistic view

I agree that such a probabilistic representation is possible. Now the
question is, will it work, is it economical (and is it right)? Note,
though, that even graded (probabilistic) individual features must yield
an all-or-none feature SET. So even this would not be evidence of
graded membership. (I don't think you'd disagree.)

> need to...explain...typicality and reaction time results...interpreted
> as supporting probabilistic and exemplar-based category representations

Yes, but it seems only appropriate that we should account for the
categorization performance capacity itself before we worry about its
fine-tuning. (Experimental psychology has a long history of bypassing
the difficult but real problems underlying our behavioral capacities
and fixating instead on fine-tuning.)

> may [be] 2 representations for categories: a 'core' of defining features
> and a heuristic categorizer... 2 pathways [grounding] categories

You may be right. It's an empirical question whether the heuristic
component will be necessary to generate successful performance. If it
is, it is still not obvious that the need for it would be directly
related to the grounding problem.

> [Re:] Anders Weinstein [on] the semantic meaning of...thunder/...`angry
> gods nearby'...: The terms in the definition presumably are grounded
> via the 2 routes discussed above... [now] Consider a sentence with 2
> variables, e.g. FISH SWIM... Obviously, many bindings would satisfy
> the sentence. [But]...by adding many more true sentences, the possible
> bindings of the variables become much more constrained.

I accepted this argument the first time you made it.
I think it's right; I've made similar degrees-of-freedom arguments
against Quine myself, and I've cross-referenced your point in my
response to Weinstein. I don't believe, though, that this reduction of
the degrees of freedom of the interpretation (even to zero) is
sufficient to ground a symbol system. Even if there's only one way to
interpret an entire language, the decryption must be performed; and
it's not enough that the mapping should be into a natural language
(that's still a symbol/symbol relation, leaving the entire edifice
hanging by a skyhook of derived rather than intrinsic meaning). The
mapping must be into the world. But, in any case, you seem to rescind
your degrees-of-freedom argument immediately after you make it:

> On the other hand... Maybe a Martian [or] your neighbor... could
> figure out [an alternative] way to do it consistently... but as long
> as you both agree on the truthfulness of all the sentences you are
> mutually aware of, there is no way to tell! Shades of the Turing test...

This is standard Quinean indeterminacy again! So you don't believe your
degrees-of-freedom argument! Well, I do. And it's partly because of
degrees-of-freedom and convergence considerations that I am so sanguine
about the TTT. (I called this the "convergence" argument in "Minds,
Machines and Searle": There may be many arbitrary ways to successfully
model a toy performance, but as you move toward the TTT, the degrees of
freedom shrink.)

> would this method of 'grounding' the semantics of categories be
> sufficient to do the job? Only in theory? Potentially in practice? ...

I think it would not (although it may simplify the task of grounding
somewhat). Even if only one interpretation is possible, it must be
intrinsic, not derivative.

> Are you assuming a representation of episodes (more generally,
> exemplars) that is iconic rather than symbolic?

Yes, I am assuming that episodic representations would be iconic.
This is related to the distinction in the human memory literature
concerning "episodic" vs. "semantic" memory. The former involves
qualitative recall of when something happened (e.g., Kennedy's
assassination) and the particulars of the experience; the latter
involves only the *product* of past learning (e.g., knowing how to ride
a bicycle, do calculus or speak English). It's much harder to imagine
how the former could be symbolic (although, of course, there are
"constructive" memory theories, such as Bartlett's, that suggest that
what we remember as an episode may be based on reconstruction and
logical inference...).

> *no* category representation method can generate category boundaries
> when there is significant interconfusability among categories!

I would be very interested to know your basis for this assertion
(particularly as "significant interconfusability" is not exactly a
quantitative predicate). If I had said "complete indeterminacy," or
even "radical underdetermination" (say, features that would require
exponential search to find), I could understand why you would say this
-- but significant interconfusability... Can you remember first looking
at cellular structures under a microscope? Have you seen Inuit snow
taxonomies? Have you ever tried serious mushroom-picking? Or chicken
sexing? Or tumor identification? Art classification? Or, to pick some
more abstract examples: paleolinguistic taxonomy? ideological
typologizing? or problems at the creative frontiers of pure
mathematics?
--
Stevan Harnad (609) - 921 7771
{bellcore, psuvax1, seismo, rutgers, packard} !princeton!mind!harnad
harnad%mind@princeton.csnet   harnad@mind.Princeton.EDU