[mod.ai] Multidimensional nature of intelligence

larry@JPL-VLSI.ARPA.UUCP (12/17/86)

I don't think we need separate practitioners' and philosophers' AI discussion 
lists, but rather more effort to bring the two types of discussion together.  
This is such an effort.

There seems to me to be little gain from giving a Turing Test, which 
measures intelligence on a single dimension with a binary scale.  Further, 
it's only useful after one's work has produced a (hopefully) intelligent 
machine, so it gives little help in the creation of the machine.  More 
useful would be a test that treated intelligence as a multi-dimensional 
activity, somewhat like the various clinical IQ tests but considerably 
expanded, perhaps with social or emotional dimensions.
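
To make the contrast concrete, here is a minimal sketch (in Python; the 
dimension names and scores are invented for illustration) of what such a 
multi-dimensional result might look like next to a single-bit verdict:

    # Hypothetical test result: one score per dimension, 0.0 to 1.0.
    # Dimension names and numbers are invented, not from any real test.
    profile = {
        "verbal":    0.9,
        "spatial":   0.4,
        "memory":    0.7,
        "social":    0.2,   # dimensions clinical IQ tests largely omit
        "emotional": 0.1,
    }

    # A Turing-style verdict collapses all of this into a single bit:
    passes = all(score > 0.5 for score in profile.values())

    # The profile itself is what helps during construction: it says
    # where the machine is weak, not merely whether it "passed".
    for dimension, score in sorted(profile.items(), key=lambda p: p[1]):
        print(f"{dimension:>9}  {score:.1f}")
    print("single-dimension verdict:", passes)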

I'd also like to see more microscopic measures, based on my belief that 
"higher" intellectual capabilities are composites of essentially independent 
capacities.  Memory and emotion, for instance, seem to depend upon quite 
different mechanisms.  (Though in an organism as complex as a human there 
may not be any measures that are completely orthogonal.)
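
That belief is testable in principle: if two capacities really are 
independent, scores on them should be roughly uncorrelated across many 
subjects.  A sketch of such a check (the score lists are invented 
placeholders, not data):

    # Pearson correlation between two capacity measures across subjects.
    # A value near zero suggests the measures are (nearly) orthogonal.
    def correlation(xs, ys):
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        var_x = sum((x - mx) ** 2 for x in xs)
        var_y = sum((y - my) ** 2 for y in ys)
        return cov / (var_x * var_y) ** 0.5

    # Invented scores for five subjects on two capacities.
    memory_scores  = [0.7, 0.5, 0.9, 0.3, 0.6]
    emotion_scores = [0.2, 0.8, 0.4, 0.7, 0.5]

    print(correlation(memory_scores, emotion_scores))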

Consciousness might be one of those higher capacities, but my guess is that 
it is not essential for intelligence.  On the other hand, I doubt that 
it is an epiphenomenon having no effect on intelligent systems.  Perhaps it 
serves to integrate what would otherwise be disparate parts working against 
their individual and collective survival--in other words, consciousness 
ensures that there are no orthogonal measures of intelligence!

Before we can investigate (and duplicate) consciousness, we must first
investigate the functions on which it depends.  One of them is memory, which 
seems to come in many varieties.  Perhaps the most crucial dimension of 
memory (for the study of consciousness) is its volatility.  The most 
volatile is very-short-term (a half to one-and-a-half seconds) and seems to 
be primarily electrical in nature.  Short-term memory (15-30 minutes) may be 
primarily a chemical phenomenon.  Longer-term memory seems more related to
biological mechanisms, and seems to come in two types, which I call mid-term 
(half-hour to about a day) and long-term.  The transfer between mid- and 
long-term memory apparently occurs during sleep or sleep-like phenomena.

To relate this to consciousness, I would guess that consciousness is 
primarily a function of very-short-term memory but depends in successively 
lesser ways on the other volatile memory banks.  So to duplicate 
consciousness we might have to utilize some kind of multi-buffered pipeline 
memory.
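
One crude way to render that guess in code: a chain of buffers using the 
rough retention spans above, where items decay unless promoted to the next 
stage, and the mid- to long-term transfer runs only during a "sleep" pass.  
The class names and the promotion rule are invented; this sketches the 
architecture, not the biology:

    import time

    class Buffer:
        """One memory tier: items decay after a fixed retention span."""
        def __init__(self, name, retention_seconds):
            self.name = name
            self.retention = retention_seconds
            self.items = {}            # item -> time stored

        def store(self, item):
            self.items[item] = time.time()

        def expire(self):
            now = time.time()
            self.items = {i: t for i, t in self.items.items()
                          if now - t < self.retention}

    class PipelineMemory:
        def __init__(self):
            self.vstm = Buffer("very-short-term", 1.5)        # electrical?
            self.stm  = Buffer("short-term", 30 * 60)         # chemical?
            self.mtm  = Buffer("mid-term", 24 * 60 * 60)      # biological?
            self.ltm  = Buffer("long-term", float("inf"))

        def perceive(self, item):
            self.vstm.store(item)      # everything enters the pipeline here

        def rehearse(self, item):
            # Invented rule: attended items move one stage along the pipe.
            if item in self.stm.items:
                self.mtm.store(item)
            elif item in self.vstm.items:
                self.stm.store(item)

        def sleep(self):
            # Mid- to long-term transfer happens only during "sleep".
            for item in list(self.mtm.items):
                self.ltm.store(item)
            self.mtm.items.clear()

        def conscious_window(self):
            # Per the guess above: consciousness is primarily a function
            # of whatever currently occupies the most volatile buffer.
            for buf in (self.vstm, self.stm, self.mtm):
                buf.expire()
            return list(self.vstm.items)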

Free will is another of those nebulous ideas that may seem not to relate to 
AI practice.  I would first say that the connection between freedom and 
willing may be spurious.  I see others, including machines, making decisions 
all the time, so will is obviously a real phenomenon and probably an 
indispensable one for intelligence (unlike consciousness).  But at least in 
machines most decisions are based on information and rules stored in some 
kind of memory (with the remaining decisions the result of error).  I 
surmise that human decisions are similarly determined.
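
To put the machine half of that claim in code: a minimal sketch (rule table 
and inputs invented) of a decision driven entirely by information and rules 
held in memory, with everything outside the table falling into the "error" 
case:

    # Decisions looked up from stored rules; the rule table is invented.
    RULES = {
        ("obstacle", "near"): "turn",
        ("obstacle", "far"):  "continue",
        ("goal",     "near"): "stop",
    }

    def decide(percept, distance):
        # Anything the stored rules don't cover is the 'error' remainder.
        return RULES.get((percept, distance), "undefined (error)")

    print(decide("obstacle", "near"))   # -> turn
    print(decide("goal",     "far"))    # -> undefined (error)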

Secondly, some psych research indicates that decisions are rarely (or never) 
consciously made.  Instead we seem to subconsciously perform a very rapid 
vector summation of many conflicting motives (some "rational," some 
emotional).  Then we decide on motion along the resulting vector (in either 
a positive or negative direction), and then create a publicly acceptable 
reason for our decision, which finally pops up into the conscious mind.  
(And most of us are so quick and skilled at subconscious rationalization 
that it seems to us as if the "reason" preceded the decision.)  To 
duplicate/emulate this form of decision-making, analog computation may be 
more efficient than symbolic computation.
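
A toy rendering of that account, collapsing the motive vectors to one 
dimension for brevity (all labels and weights invented):

    # Each motive pulls toward (+) or away from (-) a candidate action.
    motives = {
        "hunger":          +0.8,   # emotional
        "diet resolution": -0.5,   # "rational"
        "social pressure": +0.3,
        "cost":            -0.2,
    }

    # Subconscious step: rapid summation of the conflicting motives,
    # then motion in the positive or negative direction along the result.
    resultant = sum(motives.values())
    decision = "approach" if resultant > 0 else "avoid"

    # Conscious step: only afterwards is a publicly acceptable reason
    # produced; here, the strongest motive that agrees with the choice.
    agreeing = {m: w for m, w in motives.items()
                if (w > 0) == (resultant > 0)}
    stated_reason = max(agreeing, key=lambda m: abs(agreeing[m]))

    print(decision, "because of", stated_reason)

Note that the summation step is pure arithmetic on continuous weights, which 
is why analog hardware might suit it better than symbolic rules.
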
                            Larry @ jpl-vlsi.arpa