kahn%UCLA-CS@sri-unix.UUCP (02/05/84)
From: Philip Kahn <kahn@UCLA-CS>

In response to Rene Bach's question whether "the brain is a parallel processor": there is no response other than an emphatic YES!  The brain comprises about 10E9 neurons, each one making locally autonomous calculations; it's hard to get more parallel than that!  The lower brain functions (e.g., sensory preprocessing, lower motor control, etc.) are highly distributed, locally autonomous processors (i.e., pure parallel data flow).  At the higher thought-processing levels, however, it has been shown (I can't cite anything offhand, but I can dig out sources if someone wants them) that logic tends to run in a serial fashion.  That is, the brain is parallel (a hardware structure), yet higher logic processes apply the timing of thought in a serial manner (a "software" structure).

It is generally agreed that the brain is an associational machine; it processes based upon the timing of diffuse stimuli and the resulting changes in the "action potential" of its member neurons.  "Context" helps to define the strength and structure of those associational links.  Higher thinking is generally a cognitive process in which the context of situations is manipulated; changing the context (and some associational links) will often result in a "conclusion" significantly different from the one previously arrived at.

Higher thought may be thought of as a three-process cycle: decision (evaluation of an associational network), reasonability testing (i.e., is the present decision, made with the new "context", no different from the decision arrived at with the previous "context"?), and context alteration (i.e., "if my 'decision' is not 'reasonable', what 'contextual association' may be omitted or in error?").  This cycle continues until the second step -- reasonability testing -- concludes that the result of this 'thinking' process is at least plausible.  Although the implementation in the brain (assuming the trichotomy is correct) is via parallel neural structures, the movement of information through those structures is serial in nature.

An interesting note on the above trichotomy: observe what occurs when the input to the associational network is changed.  If the new input is not consistent with the previously existing 'context', then the 'reasonability tester' will cause an automatic readjustment of the 'context'.

Needless to say, this is not a rigorously proven theory of mine, but I feel it is quite plausible, and there are profuse psychophysical and psychological studies that reinforce the above model.  As of now, I use it as a general guiding light in my work with vision systems, but it seems equally applicable to general AI.
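To make the trichotomy concrete, here is a minimal sketch of the decision / reasonability-testing / context-alteration cycle, written in Python.  It is only an illustration under my own simplifying assumptions: the "associational network" is reduced to a weighted vote over context links, and every name in it (evaluate_associations, is_reasonable, revise_context, think) is a hypothetical invention of mine, not part of any established model.

    # Toy sketch of the decision -> reasonability test -> context alteration
    # cycle.  All names and the weighted-vote "associational network" are
    # hypothetical illustrations, not a model of actual neural circuitry.

    def evaluate_associations(stimulus, context):
        """Decision: weighted vote of the stimulus features under the context."""
        score = sum(context.get(feature, 0.0) for feature in stimulus)
        return "accept" if score > 0 else "reject"

    def is_reasonable(new_decision, old_decision):
        """Reasonability test: the decision under the new context should be
        no different from the one reached under the previous context."""
        return old_decision is not None and new_decision == old_decision

    def revise_context(context, stimulus):
        """Context alteration: weaken the association most likely in error."""
        weakest = min(stimulus, key=lambda f: abs(context.get(f, 0.0)), default=None)
        if weakest is not None:
            context = dict(context)
            context[weakest] = context.get(weakest, 0.0) * 0.5
        return context

    def think(stimulus, context, max_cycles=10):
        """Iterate the three-process cycle until the decision seems plausible."""
        decision = None
        for _ in range(max_cycles):
            new_decision = evaluate_associations(stimulus, context)
            if is_reasonable(new_decision, decision):
                return new_decision
            decision = new_decision
            context = revise_context(context, stimulus)
        return decision  # settle for the best-so-far result

    # Example: a "vision" stimulus with two features and an initial context.
    print(think(["edge", "texture"], {"edge": 0.8, "texture": -0.2}))

In this toy version a decision counts as plausible simply if it survives a perturbation of the context; a richer reasonability tester could instead compare the decision against stored expectations.

Philip Kahn
KAHN@UCLA-CS.ARPA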