[net.ai] "Rational Psychology"

Black@YALE.ARPA@sri-unix.UUCP (09/30/83)

From:  "John B. Black" <Black@YALE.ARPA>


     Recently on this list, Pereira held up Doyle's "Rational Psychology"
article in AI MAGAZINE as a model for us all.  Actually, I think what Pereira
is really requesting is a reduction in overblown claims and assertions offered
without justification (e.g., "solutions" to the natural language problem).
However, since he raised the "rational psychology" issue, I thought I would
comment on it.

     I too read Doyle's article with interest (although it seemed essentially
the same as Don Norman's numerous calls for a theoretical psychology in the
early 1970s), but (like the editor of this list) I was left wondering what
the vague descriptions of "rational psychology" actually refer to.  However,
Doyle does give some examples of what he means: mathematical logic and
decision theory, mathematical linguistics, and mathematical theories of
perception.  Unfortunately, this list is rather disappointing because --
with the exception of the mathematical theories of perception -- these fields
have all proved misleading when actually applied to people's behavior.

     Having a theoretical (or "rational" -- terrible name with all the wrong
connotations) psychology is certainly desirable, but it does have to make some
contact with the field it is a theory of.  One of the problems here is that
the "calculus" of psychology has yet to be invented, so we don't have the tools
we need for the "Newtonian mechanics" of psychology.  The latest mathematical
candidate was catastrophe theory, but it turned out to be a catastrophe when
applied to human behavior.  Perhaps Pereira and Doyle have a "calculus"
to offer.

     The lack of an appropriate mathematics, however, does not stop a
theoretical psychology from existing.  In fact, I offer three recent examples
of what a theoretical psychology ought to be doing at this time:

 Tversky, A.  Features of similarity.  PSYCHOLOGICAL REVIEW, 1977, 84, 327-352.

 Schank, R.C.  DYNAMIC MEMORY.  Cambridge University Press, 1982.

 Anderson, J.R.  THE ARCHITECTURE OF COGNITION.  Harvard University Press, 1983.

gary@rochester.UUCP (10/11/83)

This is in response to John Black's comments, to wit:

>     Having a theoretical (or "rational" -- terrible name with all the wrong
> connotations) psychology is certainly desirable, but it does have to make some
> contact with the field it is a theory of.  One of the problems here is that
> the "calculus" of psychology has yet to be invented, so we don't have the tools
> we need for the "Newtonian mechanics" of psychology.  The latest mathematical
> candidate was catastrophe theory, but it turned out to be a catastrophe when
> applied to human behavior.  Perhaps Pereira and Doyle have a "calculus"
> to offer.

This is an issue that I (and, I think, many AI'ers) am particularly interested
in: the correspondence between our programs and the actual workings of
the mind. I believe that an *explanatory* theory of behavior will not be at
the functional level of correspondence with human behavior. Theories which are
at the functional level are important for pinpointing *what* it is that people
do, but they don't get a handle on *how* they do it. And I think there are
side effects of the brain's architecture on behavior that do not show up
in functional-level models.

This is why I favor (my favorite model!) connectionist models as a
possible "calculus of psychology". Connectionist models, for those unfamiliar
with the term, are a version of neural network models developed here at
Rochester (with related models at UCSD and CMU) that attempt to bring the
basic model unit into line with our current understanding of the information
processing capabilities of neurons. The units themselves are relatively stupid
and slow, but they have state, and they can compute simple functions (not
restricted to linear ones). The simplicity of the functions is limited only
by "gentleman's agreement", since we still have no real idea of the upper
limit of neuronal capabilities; we are guided by what we seem to need to
accomplish whatever task we set them to. The payoff is that the units are
highly connected to one another and can compute in parallel. They are not
allowed to pass symbol structures around, and their output is restricted to
values in the range 1..10. Thus we feel that they are most likely to match
the brain in power.
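
To make the unit model concrete, here is a toy sketch in Python; the class,
the weighted-sum rule, and the synchronous update scheme are my own
illustration, not the actual Rochester formulation:

    # A toy connectionist-style unit: relatively stupid and slow, but
    # stateful, computing a simple (not necessarily linear) function of
    # its inputs, with output restricted to the range 1..10.

    class Unit:
        def __init__(self, name):
            self.name = name
            self.state = 1          # internal state
            self.inputs = []        # list of (unit, weight) pairs

        def connect(self, other, weight):
            self.inputs.append((other, weight))

        def output(self):
            # No symbol structures are passed around -- only a small
            # value, clamped into 1..10.
            return max(1, min(10, self.state))

        def compute(self):
            # Any simple function is allowed; a rounded weighted sum
            # is used here purely for illustration.
            return round(sum(w * u.output() for u, w in self.inputs))

    def step(units):
        # The units compute "in parallel": every new state is based
        # only on the outputs from the previous step.
        new_states = [u.compute() for u in units]
        for u, s in zip(units, new_states):
            u.state = s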

The problem is how to compute with the things! We regard the outcome of a
computation as a "stable coalition": a set of units which mutually
reinforce one another. We use the units themselves to represent values of
parameters of interest, so that mutually compatible values reinforce one
another and mutually exclusive values inhibit one another. These could
be the senses of the words in a sentence, the color of a patch in the
visual field, or the direction of an intended eye movement. The result is
something that looks a lot like constraint relaxation.
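
As a rough illustration of that kind of relaxation, here is a toy word-sense
example in Python; the weights, activations, and convergence test are
invented for the example and are not taken from any of our papers:

    # Toy constraint relaxation: mutually compatible values reinforce
    # one another, mutually exclusive values inhibit one another, and
    # synchronous updates repeat until nothing changes -- the surviving
    # highly active units form the "stable coalition".

    def relax(units, links, max_steps=25):
        # units: dict of name -> activation in 1..10
        # links: dict of (name, name) -> weight; positive weights
        #        reinforce, negative weights inhibit
        for _ in range(max_steps):
            new = {}
            for name, act in units.items():
                net = 0.0
                for (a, b), w in links.items():
                    if name == a:
                        net += w * units[b]
                    elif name == b:
                        net += w * units[a]
                new[name] = max(1, min(10, round(act + net)))
            if new == units:    # a fixed point: the coalition is stable
                break
            units = new
        return units

    # Senses of "bank" in "money in the bank": the financial sense and
    # "money" support each other; the two senses compete.
    units = {"bank/finance": 5, "bank/river": 5, "money": 8}
    links = {("bank/finance", "money"): 0.5,
             ("bank/finance", "bank/river"): -0.4}
    print(relax(units, links))
    # -> the financial sense and "money" settle at 10; the river sense
    #    is suppressed to 1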

Anyway, I don't want to go on forever. If this sparks discussion or interest,
references are available from the U. of R. CS Dept., Rochester, NY 14627
(the bibliography is a TR called "The Rochester Connectionist Papers").
   
gary cottrell   (allegra or seismo)!rochester!gary or gary@rochester

shebs@utah-cs.UUCP (Stanley Shebs) (10/14/83)

I'm always suspicious of trying to reach general conclusions by studying
the detailed low-level behavior of the human brain.  It's not quite the
same thing as learning about a 780 by studying its individual transistors;
rather, it's like trying to characterize computability by studying a specific
brand of computer.  You'll have a hard time distinguishing between what is
*possible* and what is an accident of design or evolution.  It seems to me
that the brain uses neurons with certain characteristics not because they're
particularly good, but because early primates had them, and so on ad infinitum.

From the standpoint of universal psychology, it's more interesting to find
out what human brains *can't* do, and why, and what AI programs can and can't
do, and why...

							stan the l.h.
							utah-cs!shebs