SDEIBEL%ZEUS.DECnet@GE-CRD.ARPA (11/24/88)
In Vol8 Issue 131 of the BITNET distribution of AILIST, Nick Taylor mentioned the problem of defining intelligence. This is indeed a problem: what are we really talking about when we set ourselves off from the "animals", etc.? I'm not foolish enough to pretend I have any answers, but I did find some interesting ideas in Ray Jackendoff's book "Consciousness and the Computational Mind". Jackendoff suggests (in Chapter 2, I believe) that one fundamental characteristic separating the actions of humans (and possibly other animals) from non-intelligent systems/animals/etc. is the way in which the components of intelligent entities interact. The matter of interest in intelligent entities is the way in which independently acting sub-parts (e.g. neurons) interact and the way in which the states of these sub-parts combine combinatorially. In non-intelligent entities (e.g. a stomach), on the other hand, the matter of interest is the way in which the actions of sub-parts (e.g. secreting cells) SUM into a coherent whole.

While vague, this idea of intelligence as arising from complexity and the interaction of independent units seemed interesting to me in that it offers a nice, simple general description of intelligence. Oh, yes, it could start to imply that computers are intelligent, etc., etc., but one must not forget the complexity gap between the brain and the most complex computers in existence today!

Rather than wrestle with the subtleties and complexities of words like "intelligence" (among others), it might be better to accept the fact that we may never be able to decide what intelligence is. How about "the sum total of human cognitive abilities"? Then we can forget about the definition and concentrate on deciding how humans might achieve some of their cognitive feats. Try deciding what we really mean when we say "bicycle" and you'll run into monumental problems. Why should we expect characterising "intelligence" to be any easier?

Stephan Deibel (sdeibel%zeus.decnet@ge-crd.arpa)