[net.ai] Mental states of machines

PEREIRA@SRI-AI.ARPA (12/03/83)

Steven Gutfreund's criticism of John McCarthy is unjustified.  I
haven't read the article in "Psychology Today", but I am familiar with
the notion put forward by JMC and condemned by SG.  The question can
be put in simple terms: is it useful to attribute mental states and
attitudes to machines? The answer is that our terms for mental states
and attitudes ("believe", "desire", "expect", etc...) represent a
classification of possible relationships between world states and the
internal (inacessible) states of designated individuals. Now, for
simple individuals and worlds, for example small finite automata, it
is possible to classify the world-individual relationships with simple
and tractable predicates. For more complicated systems, however, the
language of mental states is likely to become essential, because the
classifications it provides may well be computationally tractable in
ways that other classifications are not. Remember that individuals of
any "intelligence" must have states that encode classifications of
their own states and those of other individuals. Computational
representations of the language of mental states seem to be the only
means we have to construct machines with such rich sets of states that
can operate in "rational" ways with respect to the world and other
individuals.
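
For illustration, here is a rough sketch in Python (the agent, its map,
and the "believes" predicate are all invented for this note) of a
mental-state term used as a cheap classification over an agent's
internal states:

    from dataclasses import dataclass, field

    @dataclass
    class Agent:
        """A toy agent whose 'internal state' is just a map of observed cells."""
        observed: dict = field(default_factory=dict)   # cell index -> contents

        def sense(self, cell: int, contents: str) -> None:
            self.observed[cell] = contents

    def believes(agent: Agent, cell: int, contents: str) -> bool:
        # "believes" names a class of internal states: every state in which
        # the agent's map records `contents` at `cell`, whatever the rest of
        # that state looks like.  The predicate is cheap to evaluate even
        # though the set of raw states it covers is enormous.
        return agent.observed.get(cell) == contents

    a = Agent()
    a.sense(3, "door")
    print(believes(a, 3, "door"))   # True
    print(believes(a, 5, "wall"))   # False

For a small automaton one could list the qualifying states one by one;
for a richer system, a predicate of this kind is the only classification
we can realistically compute with.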

SG's comment is analogous to the following criticism of our use of
terms like "execution", "wait" or "active" when talking about the
states of computers: "it is wrong to use such terms when we all know
that what is down there is just a finite state machine, which we
understand so well mathematically."

Fernando Pereira

gutfreund%umass-cs%CSNet-Relay@sri-unix.UUCP (12/08/83)

From:  Steven Gutfreund <gutfreund%umass-cs@CSNet-Relay>

I have no problem with using anthropomorphic (or "mental") descriptions of
systems as a heuristic for dealing with difficult problems. One such
trick I especially approve of is Seymour Papert's "body syntonicity"
technique. The basic idea is to help young children understand the
interaction of mathematical concepts by having them enter a turtle
world, become active participants in it, and use that perspective to
understand the construction of geometric structures.

What I am objecting to is that I sense John McCarthy is implying
something more in his article: that human mental states are no different
from the states of the very complex systems that we sometimes describe,
as a shorthand, in mental terms.

I would refer to Ilya Prigogine's Nobel Prize-winning work in chemistry
(awarded in 1977) on "Dissipative Structures" to illustrate the foolishness
of McCarthy's claim.

Dissipative structures can be explained to some extent to non-chemists by means
of the termite analogy. Termites construct large, rich, and complex domiciles.
These structures are sometimes six feet tall and are filled with complex
arches and domed chambers (it took human architects many thousands of
years to come up with these concepts). Yet if one watches termites at
the lowest "mechanistic" level (one termite at a time), all one sees
is a termite placing drops of sticky wood pulp at random spots.
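
The spirit of the analogy can be reproduced in a toy simulation (a rough
sketch only; the grid size, counts, and pick-up/drop rule below are
invented): each simulated termite follows a purely local, random rule,
yet the chips end up gathered into piles.

    import random

    SIZE, CHIPS, TERMITES, STEPS = 30, 150, 20, 200_000
    MOVES = [(-1, 0), (1, 0), (0, -1), (0, 1)]

    chips = set()
    while len(chips) < CHIPS:                        # scatter chips at random
        chips.add((random.randrange(SIZE), random.randrange(SIZE)))
    termites = [{"pos": (random.randrange(SIZE), random.randrange(SIZE)),
                 "carrying": False} for _ in range(TERMITES)]

    def step(t):
        x, y = t["pos"]
        dx, dy = random.choice(MOVES)                # purely random local walk
        t["pos"] = ((x + dx) % SIZE, (y + dy) % SIZE)
        if not t["carrying"] and t["pos"] in chips:
            chips.remove(t["pos"])                   # empty-handed: pick the chip up
            t["carrying"] = True
        elif t["carrying"] and t["pos"] in chips:
            for dx, dy in MOVES:                     # bumped into a chip: drop ours nearby
                spot = ((t["pos"][0] + dx) % SIZE, (t["pos"][1] + dy) % SIZE)
                if spot not in chips:
                    chips.add(spot)
                    t["carrying"] = False
                    break

    for _ in range(STEPS):
        step(random.choice(termites))

    clustered = sum(1 for (x, y) in chips
                    if any(((x + dx) % SIZE, (y + dy) % SIZE) in chips
                           for dx, dy in MOVES))
    print(f"{clustered} of {len(chips)} chips now touch another chip")

No single termite "intends" a pile; the order shows up only at the level
of the whole colony.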

What Prigogine noted was that there are parallels in chemistry, where random
underlying processes spontaneously give rise to complex and richly ordered
structures at higher levels.

If I accept McCarthy's argument that complex systems based on finite-state
automata exhibit mental characteristics, then I must also hold that termite
colonies have mental characteristics, that Douglas Hofstadter's Aunt Hillary
has mental characteristics, and that certain colloidal suspensions and
amorphous crystals have mental characteristics.

                                                - Steven Gutfreund
                                                  Gutfreund.umass@csnet-relay

  [I, for one, have no difficulty with assigning mental "characteristics"
  to inanimate systems.  If a computer can be "intelligent", and thus
  presumably have mental characteristics, why not other artificial
  systems?  I admit that this is Humpty-Dumpty semantics, but the
  important point to me is the overall I/O behavior of the system.
  If that behavior depends on a set of (discrete or continuous) internal
  states, I am just as happy calling them "mental" states as calling
  them anything else.  To reserve the term mental for beings having
  volition, or souls, or intelligence, or neurons, or any other
  intuitive characteristic seems just as arbitrary to me.  I presume
  that "mental" is intended to contrast with "physical", but I side with
  those seeing a physical basis to all mental phenomena.  Philosophers
  worry over the distinction, but all that matters to me is the
  behavior of the system when I interface with it.  -- KIL]

mmt@dciem.UUCP (Martin Taylor) (12/12/83)

Any discussion of the nature and value of mental states in either
humans or machines should include consideration of the ideas of
J. G. Taylor (no relation). In his "Behavioral Basis of Perception"
(Yale University Press, 1962), he sets out mathematically a basis
for changes in perception/behaviour dependent on transitions into
different members of "sets" of states. These "sets" look very like
the mental states referenced in the earlier discussion, and may
be tractable in studies of machine behaviour. They also tie in
quite closely with the recent loose talk about "catastrophes" in
psychology, although they are much better specified than the analogists'
models. The book is not easy reading, but it is very worthwhile, and
I think the ideas still have a lot to offer, even after 20 years.

Incidentally, in view of the mathematical nature of the book, it
is interesting that Taylor was a clinical psychologist interested
initially in behaviour modification.
-- 

Martin Taylor
{allegra,linus,ihnp4,uw-beaver,floyd,ubc-vision}!utzoo!dciem!mmt

decot@cwruecmp.UUCP (Dave Decot) (12/13/83)

What makes you think that Hofstadter's Aunt Hillary (an animate system if I ever
saw one) cannot have mental states, but that YOUR collection of chemicals
(the one in your skull) can?  Please define "animate" and "mental state"
in such a way that Aunt Hillary does not qualify but your brain does.  Do not
use such terms as "life", "soul", "mind", unless you define them.

Dave Decot
decvax!cwruecmp!decot    (Decot.Case@rand-relay)

PEREIRA@SRI-AI.ARPA (12/15/83)

The only reason I have to believe that a system encodes in its states
classifications of the states of other systems is that the systems we
are talking about are ARTIFICIAL, and therefore this is part of our
design. Of course, you are free to say that down at the bottom our
system is just a finite-state machine, but that's about as helpful as
making the same statement about the computer on which I am typing this
message when discussing how to change its time-sharing resource
allocation algorithm.

Besides this issue of convenience, it may well be the case that
certain predicates on the states of another system, or of the system
itself, are simply not representable within the system. One does not
even need to go as far as incompleteness results in logic: in a system
that has the means to represent a single binary relation (say, the
immediate accessibility relation for a maze), no logical combination of
that relation can represent its transitive closure (the accessibility
relation) [example due
to Bob Moore]. Yet the transitive closure is causally connected to the
initial relation in the sense that any change in the latter will lead
to a change in the former. It may well be the case (SPECULATION
WARNING!) that some of the "mental state" predicates have this
character, that is, they cannot be represented as predicates over
lower-level notions such as states.
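
The fixpoint character of the closure is easy to see in a toy
computation (a sketch for illustration only; the three-corridor maze and
the Python code are invented, and nothing here is due to Moore beyond
the general point): accessibility is obtained by repeating the
composition step until nothing new appears, not by any single, fixed
combination of the immediate-accessibility relation.

    def transitive_closure(edges):
        """edges: a set of (a, b) pairs -- the immediate-accessibility relation."""
        closure = set(edges)
        while True:
            # one more round of composition: if a->b and b->c, then a->c
            step = {(a, d) for (a, b) in closure for (c, d) in closure if b == c}
            if step <= closure:          # fixpoint reached: nothing new to add
                return closure
            closure |= step

    maze = {("A", "B"), ("B", "C"), ("C", "D")}
    print(("A", "D") in transitive_closure(maze))   # True, found only by iteration

The number of composition rounds needed grows with the size of the maze,
which is exactly why no fixed combination of the base relation can stand
in for the closure.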

-- Fernando Pereira