[comp.ai.philosophy] Testing Machine Consciousness

mcdermott-drew@cs.yale.edu (Drew McDermott) (10/19/90)

  In article <VINSCI.90Oct18224945@nic.soft.fi> vinsci@soft.fi (Leonard Norrgard) writes:
  >You wrote:
  >>[...]
  >>Seriously, though (and maybe I should say that I haven't looked at
  >>comp.ai in a couple of years), what could ever be sufficient evidence
  >>for machine consciousness?
  >
  >How about first finding evidence that other humans are conscious? When
  >you have that, apply it to machines. (Hint: #1 is a little bit troublesome.)


Okay, it's time for: 

Why There is No Other-Minds Problem

Copyright (1990) Drew McDermott

\section{The Traditional Other-Minds Problem}

The other-minds problem is supposed to be something like this: How can
I tell whether there are any minds in the universe besides mine?  What
I wish to argue is that this problem cannot be coherently stated, and
hence is no problem at all.

Let me hedge a bit.  There is an other-minds problem to the extent
that there is an ``outside-reality'' problem.  Descartes (or maybe Aquinas)
was the first to raise the issue whether all of reality might be
an illusion.  All I can know about the world comes through my senses,
so perhaps some Evil Deceiver is feeding systematically false
information through my senses, and there is in fact no real world at
all.

This kind of Total Skepticism is supposed to be independent of the other-minds
problem.  That is, someone could pooh-pooh Cartesian doubt about
reality as a whole, and still claim to entertain doubt that there are
any minds but his.  It is this claim that I dispute.

I begin by observing that, to the untrained observer, there is a lot
of mind around.  Every other person has a mind (excepting the usual
people in comas, and such), and many animals have minds, too.  Indeed,
the naive observer is likely to get carried away and see mind at work
in the actions of storms and planets.  As we get sophisticated, we are
able to draw distinctions more carefully, and we get less tempted to
ascribe mind to the weather.  However, we never get tempted to discard
the concept of mind completely.  (Contrast the concept of God, which
many people are sincerely willing to discard.)  One may as an idle
speculation suppose that mind is much scarcer than it appears to be,
but when it comes to dealing with other creatures we never cease
for an instant to assume that they have minds.

So why in the world is there some issue about whether other minds
exist?  I think the problem is arrived at like this: I
know I have a mind.  (I think, therefore I am.)  But my belief that
others have minds is due to a chain of inference that might be faulty.
I see someone heading for the freezer, and I infer that she wants ice
cream.  I see someone writhing on the floor, and I infer that he is in
pain.  But the inferences could be wrong.  These people could be
just going through the motions, and not have minds at all.

There are two flaws with this way of stating the problem.  First, the
perception of mind in these cases sometimes requires a deep chain
of thought, but usually it does not.  Sometimes we have to infer
that someone is in pain by observing how stiffly he moves, but usually
the pain is just as observable as the stiffness of motion.  We see
enough of the phenomenon to recognize it; from the part we infer the
whole.  It's like seeing the front of an animal and inferring the
presence of the entire animal.  We are occasionally wrong, but we
couldn't be wrong all the time without recourse to the sort of boring
skepticism I put aside early on.

In other words, it's just a faulty analysis of the situation to assume
that the subjective experience is the primary definition of mind, and
all other instances are recognized by inference to the primary
definition.  Mental phenomena tend to have three aspects: subjective
experience, behavioral consequences, and implementation mechanisms.  There
is no particular reason to assign primacy to any one of these aspects
(except for a hangover from Cartesian philosophy; see below).  

Mental phenomena are ``natural kinds.''  As Hilary Putnam has pointed
out, such entities are defined by ostension to some extent, and not
defined by a set of necessary and sufficient properties.  At one point
in history, water was known as a certain abundant transparent liquid.
Later we found out it was H_2O.  The result was not an inference that
in the presence of water one often finds H_2O, but instead the
discovery that water {\it is} H_2O.  We have similar discoveries to
make about mental phenomena.

Consider an analogy to photosynthesis.  Suppose humankind had noted
that plants were capable of surviving on sunshine, water, air, and
dirt, and had given the name ``photosynthesis'' to this ability.  (This is
not intended to be historically accurate.)  Then people would have
expected, with much justification, that when plants were opened up and
examined carefully, mechanisms capable of photosynthesis would be
found.  Similarly, we have no reason not to expect mechanisms capable
of mental phenomena to be found when we open up brains.  Of course, in
each case, the mechanisms responsible do not leap out at us, although
by now we've explained photosynthesis pretty well.  

For some reason, when people talk about opening up brains, they start
talking about looking for ``correlates'' of mind instead of
``mechanisms'' of mind.  The only possible reason is some preexisting
philosophical bias.

Anyway, the first flaw may be summarized thus: The perception of
mental phenomena does not require an inference {\it from} behavior
{\it to} subjectivity.  Evolution has shaped us to recognize certain
real things, such as the three-dimensionality of the world.  Mental
phenomena are one such thing.  It takes special training and
perversity to pretend that all we really are sure of is the subjective
aspect of mind, and everything else is mere evidence for this aspect.
It's just like pretending that two-dimensional images are all we're
really certain of, and the three-dimensional world is an ``inference''
from it.  In the nineteenth century, psychologists really did try to
convince themselves of such statements, based on certain radical-empiricist
preconceptions.   We've gotten over those preconceptions.  I hope
we've gotten over the corresponding ones for the perception of minds.

The second flaw in the statement of the problem is that the idea of
``just going through the motions'' is not sufficiently clear.  It does
not mean ``faking it.''  Suppose someone were faking pain or
intentionality very convincingly.  Then that would not mean he didn't
have a mind; quite the contrary.  I think the phrase is supposed to
mean ``behaving automatically, like a robot.''  But this phrase is
still ambiguous.  If it means, ``behaving like a Walt Disney
audioanimatron,'' then it's quite obvious that people are not ``just
going through the motions,'' because Disney technology is not
sophisticated enough to duplicate people.  No one above a certain age
attributes mind to the Abe Lincoln in the Disneyworld Hall of
Presidents.  It just cannot be the case that a person is ``just going
through the motions'' in this sense.

If the phrase means ``behaving like the most sophisticated robot it is
possible to build,'' then it is not clear that there is actually a
contrast between ``really having a mind'' and ``just going through the
motions.''  That's because we don't know what the most sophisticated
possible robot is capable of.  If you simply want to assume that there
is a contrast between having a real mind and being a robot, then you
have assumed away a very interesting question.  You haven't said
anything about what the contrast {\it is}.  I will say more about this
in Section 2.

I hope at this point you are feeling a certain frustration.  ``I know
the distinction I want to make, but I just can't put it into words.''
Try this: Suppose other people are just hollow shells; there's nothing
``inside.''  But this isn't getting us anywhere.  We know there's a
lot inside.  

Okay, how about this phrasing: As granted above,
mental phenomena have three aspects, subjective, behavioral, and
implementational.  Why couldn't it simply be the case that the first
one is often or always absent?  The only agent I'm sure has subjective
experience is me, so what's my evidence that anyone else has?

The problem with this objection is that it misconstrues {\it aspects}
as {\it parts}.  This construal would make sense for problems like the
``other-planets problem'': We observe one star with planets and ask
what our evidence is that some or any other stars have any.  But
subjective experience is not an appendage of mentation in this way.  A
mental event can be experienced from several different angles,
including (although the categories are fairly arbitrary), the
subjective, the behavioral, and the implementational.  But the
experiences are all {\it of the same event}.  For example, consider a
parent dealing with a child's earache.  Both persons are experiencing
the same earache.  We could if we wanted sort the experiences into
bins.  One classification would be into behavioral, subjective, and
implementational.  (Behavioral: an observation of a squeal;
subjective: an observation of a twinge of pain; implementational:
$\ldots$ unlikely, under the circumstances).  Another would be into
who owns what (my squeal; her squeal; our twinge).

On any occasion any angle of observation could be missing, but to
suppose that a particular angle is missing {\it normally} is like
supposing that objects normally do not have insides.  Once we
understand geometry, we do not need to cut open objects and keep a
tally in order to verify that every object has an inside; having an
inside is part of what it means to be an object in the sort of space
we inhabit.  Of course, some objects could actually be weird
Klein-bottlish things which, when cut open, reveal their outsides
again.  It's not logically necessary that all objects have an inside,
nor is it logically necessary that all agents with minds have
subjective experience.  It's just extremely likely that agents capable
of experiencing the world would also experience some of their own
workings.

My problem, you now might claim, is that I'm ``objectifying''
subjectivity.  The issue is not whether we can observe our own mental
phenomena, as someone might observe aircraft, but whether we can {\it
feel} them.  If I were {\it counting} twinges of pain, then I might
grant that your pain was in principle as observable as mine, but when
{\it experiencing} them the issue of whose they are takes on an
altogether different meaning.  My ``raw feels'' are intrinsically {\it
private}.  I can observe my own feelings --- or, more precisely, I can
{\it have} my own feelings --- in a way that no one else can.  And
other people's feelings are forever hidden from me.  So what makes me
think those feelings exist?

Our intuitions, through centuries of training, have become warped
regarding this point.  Suppose a brain surgeon has my skull open, and
touches an electrode to a particular point, causing a pleasant
tingling sensation.  Suppose the surgeon is equipped with a good
theory of how the brain works (much better than what we now have), so
that she can see the reverberations die away, and get reported to the
memory log as pleasant, exactly as I report subjectively.  We're both
observing my sensations directly, but with different ``instruments''
and from different distances.  The pleasant tingling sensation seems
quite different to me than to her (e.g., it's not particularly  
pleasant for her), but then again my voice sounds different to me than
to her.  It's true that she's not ``having'' my feelings, but only
because she's not me.  If she wished, she could probably wire up an
apparatus to cause her brain to experience my brain's states in a way
very close to the way my brain experiences them, but she still
couldn't have my feelings, any more than she could have my location.  

The fact is that other people's sensations are more ``directly''
observable than a lot of things we grant reality to with much less
``direct'' evidence, like quarks or electromagnetic fields.  So why
are we so curiously persnickety about the matter?

I think the answer ultimately derives from the epistemology of
philosophers like Descartes and Kant.  Those guys took it for granted
that each of us is in the strange epistemological position of having
to reconstruct the world from our sense data.  This theory gets us
into some ridiculous quandaries (which is why --- in case you haven't
heard --- most philosophers have abandoned it).  The most ridiculous
quandary is that you have to be reconstructed from my sense data, and
I have to be reconstructed from yours.  On this theory, it's easy to
say who has a mind: Those doing these reconstructions are the actual
minds, and the other things in the world are just passive objects of
recognition.  The other-minds problem then just becomes the question
of how many reconstructors there are.  In other words, when I see
someone with an apparent mind, I have to ask ``Is he actually building
the world from sense data, or is he just an aggregation of --- or
inference from --- my sense data?''

Unfortunately, while this may have seemed an acceptable question to
Descartes, I assume (er, I hope) that any modern person would be
embarrassed by it.  Taking this question seriously presupposes the
``movie theater'' view of the world, in which the human race consists
of a bunch of consciousnesses all watching the same movie (the world).
What's absurd about this view is that we have to reconcile it with the fact
that we're all {\it in} the movie.  The original reconciliation was to
assume that it's only our bodies that are in the movie --- the minds
are out in the theater, connected up to the bodies somehow.  If you
want to buy this picture, then, yes, you have an other-minds problem
(namely, how many seats are occupied?).  But in the twentieth century
it's gradually becoming clear that the minds and the bodies are all
mixed up together.

So we're left with frustration: How do you state the other-minds
problem?  


\section{The Individual Other-Mind Problem}

You may be willing to grant that the traditional other-minds problem
is ill-posed, yet claim that when faced with a particular entity we
still have the problem of determining whether it has a mind.  This is
the ``individual other-mind'' problem.

The usual way this problem is posed is to present a Gedanken
experiment involving an intelligent robot, about which there is doubt
whether it has a mind.  What's odd about this setting is that when the
individual other-mind problem comes up in everyday life, what we are
confronted with is a {\it borderline case}, such as a nonhuman
mammal or a person in a coma.  The evidence we cite one way or the
other has to do with completely mundane phenomena like whether the
creature appears to fear pain, or shows brain activity of a certain
kind.  There is nothing special about the individual other-mind
problem in this setting.  It's like the problem of distinguishing
between animals and plants.  We believe there's a difference; we
believe there are borderline cases; we believe that by studying
animals and plants more we'll learn more about how to --- and whether
to --- classify the borderline cases.  End of story.

By contrast, in the Gedanken experiments I referred to, we are
supposed to imagine the existence of a creature whose mental status
would not be in doubt at all, if such creatures could actually be
created.  Suppose in the remote future you had an acquaintance who was
a computerized mathematician, a pretty good mathematician, whose
talent was somewhat blunted by its preoccupation with whether other
mathematicians had plagiarized its results.  This computer would not
be able to walk around, but it would be able to attend conferences
(via remote TV hookups), and carry on a completely colloquial spoken
conversation about mathematics, and about the daily affairs and
conspiracies of mathematicians.

I've painted this picture for a variety of reasons, besides the fact
that it's fun to engage in this sort of science fiction.  What I'm
trying to convey is the idea of an entity that probably couldn't pass
Turing's Test, but that no one would deny was intelligent.  (The
computer might rebuff all attempts to Turing Test it by saying things
like, ``I admit I'm not a human; now can we discuss inaccessible
cardinals?'')

One might deny that we can ever build such a thing.  Of course, there
is little evidence that we can.  But people who are fond of the
other-minds problem are usually eager to place this topic on
the table: the notion of an entity that would, if it existed, seem
overwhelmingly intelligent, and yet {\it still} would not {\it really}
``have a mind.''  The reason they are so eager is that they have a
vivid image of the possibility of such an entity existing without
``anybody being at home.''  That is, the robot mathematician could be
a ``hollow shell,'' with no subjective experience.

But the problems I raised in Section 1 now arise again with this sort of
vivid image.  There is also a new problem, which is that the whole
experiment assumes we'll know a lot more in the future about how minds
work, and then asks us to predict what it is we'll know.  If we could
actually build such a robot mathematician, we would presumably have a
theory of mind that would make twentieth-century cognitive science
seem puny.  So what good is it to speculate about --- let alone set
limits to --- the answers that future cognitive science would give us
regarding the state of the robot's mind?

Here's an analogy: Suppose someone in the year 1850 speculated that
some day we might understand the chemical basis of life.  Now suppose
a vitalist proposed a Gedanken experiment in which some goo is created
that behaves remarkably like an ameba.  ``How would you know,''
he challenges, ``whether it was really alive?''  The proper response
from the anti-vitalist is to say, ``Don't ask {\it me}; wait a hundred
years and ask Watson and Crick.''  (And of course, Watson and Crick
would say, ``Now that we understand what's going on, the question
whether something is really alive has lost any interest or meaning.'')
Unfortunately, a hundred years before the success of molecular biology,
this answer doesn't sound very convincing.

So we have to say a little more.  If we return to the issues raised in
Section 1, they appear here in a slightly different form, but are
still relevant.  First, consider the fact that mental phenomena form
natural kinds.  In the Gedanken experiment, that means that we have to
consider whether the future cognitive scientists have succeeded in
creating intelligence and paranoia in the robot mathematician, or
creating other things that only {\it appear} to be intelligence and
paranoia.  It's odd that people often take it for granted that there
could {\it be} pseudo-intelligence or pseudo-paranoia.  Let's crank up the
photosynthesis analogy again.  Suppose someone built an artificial
plant.  How would we tell whether it was doing real photosynthesis or
pseudo-photosynthesis?  Well, chances are it would be doing something
close to real photosynthesis, but it would also be interesting if
there were entirely different mechanisms that looked macroscopically
just like photosynthesis.  On the other hand, it might turn out that
there simply are no such mechanisms.

When it comes to mind, we don't know enough yet to say whether it's
possible for there to be pseudo-intelligence or pseudo-paranoia.  That
is, it might turn out that there's essentially just one way to create
these things, or that there are a million quite different ways.
Hence, if we ever build intelligent robots, it may be that they are
doing things quite similar to what we do, or it may turn out that
there are lots of design choices; {\it but these questions are entirely
empirical}.  When you go to the robot store, you may be able to select
(e.g.) a companion that experiences jealousy, a companion that
experiences something rather different from jealousy in an interesting
way, or a companion that's quite good at faking jealousy for
fantasy-game purposes.

Now let's turn to the ``just going through the motions'' question.
Here we come, I suppose, to the crux of the matter.  Obviously, if
someone believes that the robot might be ``just going through the
motions,'' he's not alluding to the possibility that the robot is
really remotely controlled by a person.  (Cf. the old
mechanical-chess-player hoax.)  In that case, the debate would be just
about the location of the mind, not its reality.

No, the skeptic believes that the robot is actually really doing what
it appears to be doing, but that ``doing'' is not sufficient.  The
robot's mechanisms might be executing exactly the computational steps
that the future cognitive science says are those carried out by real
brains, and yet the skeptic insists that it might still not really
have a mind (or be conscious, or however you want to phrase it).  ``In
the end,'' says the skeptic, ``only the robot can be completely sure
whether it is conscious.''  

Okay, we'll ask it: ``Are you conscious?''  or, more specifically,
``Are you really afraid that Prof. Potter sneaked a peek at your
proof, or are you just faking it?''

The skeptic will turn up his nose at {\it this}, obviously.  All we're
going to get out of this experiment is more observable data, both verbal behavior
(e.g., the robot stutters as it denies it's paranoid) and observations
of data structures and neuronal transmissions.  The ultimate skeptic
insists that all of these things are ultimately irrelevant, that
subjective experience is fundamentally unobservable by all but the
experiencer.  Here I must part company with him.  For one thing, as I
said in Section 1, if it weren't for warped philosophical intuitions
we would all grant that observations of mental implementational
mechanisms {\it are} observations of subjective experience.  I just
don't know what we're talking about if we're not talking about an
observable phenomenon.  The skeptic claims simultaneously that his
subjective experience can never be observed by me, and that when
he casually alludes to it I know exactly what he is referring to.
Surely the second part of this claim relies on an implicit belief that
his experience and mine are ultimately mediated by the same mechanisms,
and we could in principle open up our brains and verify that.

If the skeptic denies this point, then I am afraid he is committed to
a dualistic position, in which mental substances are connected in
fairly arbitrary ways to physical objects.  Suppose an entity does
have a mind in this sense --- a subjectivity arbitrarily associated
with it.  Then there is nothing linking this kind of mind with any
information-processing capacity of the system.  Suppose the computers
at the National Weather Service do have this kind of subjective mind,
in the same sense that trees or rocks might.  These minds might be dreaming
about God; the chances that they are thinking about the weather are
negligible.  Somehow our theory of the mind associated with an entity
has got to incorporate the idea that if sensors transduce information
from the world into the entity, then what the mind of the entity knows
about is the information so transduced.  The idea that you could know
all about the operation of the entity, and {\it still know nothing
about its subjective experience}, carries with it the consequence that
its subjective experience need have nothing to do with its operation.
We would do well to steer clear of such a nightmarish form of the
mind-body problem.

\section{Conclusion}

The other-minds problem is supposed to be the problem of deciding
whether other people, or a particular entity, is possessed of a mind.
However, the problem turns out to be a vestigial organ of an extinct
philosophical organism, Cartesian epistemology.  The question is
invariably posed in a setting where it is completely obvious, prima
facie, that the creature in question has a mind, and it presupposes
that the alternative --- that it does not ``really'' have a mind ---
makes sense.  But the alternative cannot be spelled out without
presupposing dualism or solipsism.

dave@cogsci.indiana.edu (David Chalmers) (10/20/90)

In article <26852@cs.yale.edu> mcdermott-drew@cs.yale.edu (Drew McDermott) writes:

>Why There is No Other-Minds Problem
>[...]
>Okay, how about this phrasing: As granted above,
>mental phenomena have three aspects, subjective, behavioral, and
>implementational.  Why couldn't it simply be the case that the first
>one is often or always absent?  The only agent I'm sure has subjective
>experience is me, so what's my evidence that anyone else has?

Yes, this is the right way to state the problem.  "Mind" as the term
is traditionally used has behavioural, functional, and phenomenological
aspects.  The "other minds" problem is concerned only with the
phenomenological (subjective) aspects.

>The problem with this objection is that it misconstrues {\it aspects}
>as {\it parts}.  This construal would make sense for problems like the
>``other-planets problem'': We observe one star with planets and ask
>what our evidence is that some or any other stars have any.  But
>subjective experience is not an appendage of mentation in this way.  A
>mental event can be experienced from several different angles,
>including (although the categories are fairly arbitrary), the
>subjective, the behavioral, and the implementational.  But the
>experiences are all {\it of the same event}.

The flaw in your argument is right here -- the rest is just wrapping.
It may be that in many cases -- in particular, the cases where subjective
phenomena exist -- the subjective, behavioural, and functional aspects
are all aspects of the same event.  But this does not imply that in all
cases, they must be tied together.  Certainly, in *my* case the three
aspects go together.  Does that allow me to deduce that in your case, or
in a robot's case, they must also?  Of course not.  The coincidence of
functional organization, behavioural consequences and subjectivity in some
cases is quite compatible, a priori, with the coincidence of functional
organization, behavioural consequences and non-subjectivity in others.
Analogy time: In certain cases, temperature coincides with the motion of gas
molecules -- they are literally two aspects of the same thing.  In other
cases, temperature exists without the motion of gas molecules.

Of course, you may be wanting to make a *claim* that in all cases where at
least one of them occurs, the three aspects of "mind" will be tied together as
different aspects of the same thing.  This is a non-trivial claim, whose
truth-value may or may not be "true".  The non-triviality of this claim is the
reason why there is an Other-Minds Problem.  (I in fact believe that the claim
is true -- more specifically, I believe that the phenomenological is
supervenient on the functional.  But I can't prove it, at least not easily,
and so the Other Minds problem can still be raised.)

>It's just extremely likely that agents capable
>of experiencing the world would also experience some of their own
>workings.

Your phrasing right here concedes that the OMP is really a problem.
"Extremely likely" is not good enough.  It was always extremely likely
that you and other people all have subjective experience.  The OMP
is "How can you know *for sure*?"  As far as I'm concerned, the best
answer for now is "You can't, but you can be pretty confident on various
inductive grounds".

>That is, the robot mathematician could be
>a ``hollow shell,'' with no subjective experience.
>
>So what good is it to speculate about --- let alone set
>limits to --- the answers that future cognitive science would give us
>regarding the state of the robot's mind?

Of course we can't predict these answers in advance.  But few advocates of the
OMP (Searle excepted) would argue in advance that we know that certain
intelligent-seeming beings could *not* have minds.  They just argue that the
question is open.

>When it comes to mind, we don't know enough yet to say whether it's
>possible for there to be pseudo-intelligence or pseudo-paranoia.

I hate to get stuck in a rut, but you've once again conceded that there *is*
an OMP.

>Okay, we'll ask it: ``Are you conscious?''  or, more specifically,
>``Are you really afraid that Prof. Potter sneaked a peek at your
>proof, or are you just faking it?''

I have a paper that addresses the relationship between claims about
consciousness (e.g. "Sure, I have these really weird subjective feels"),
and consciousness itself.  It's a highly non-trivial issue, but the
conclusion I come to is that if we want our claims about consciousness
to reflect the properties of consciousness, then consciousness must be
supervenient on functional organization (i.e., whenever you have the right
functional -- that is, abstract causal -- organization, it must be
accompanied by consciousness).  If the source of consciousness is something
other than functional organization -- e.g. if consciousness was
biochemistry-specific, as Searle seems occasionally to believe -- then our
consciousness-claims would be deeply irrelevant to consciousness itself.
In this case, there'd be little point even talking about the Mind-Body
Problem, as we wouldn't know what we were talking about.  Shoemaker's paper
"Functionalism and Qualia" (reprinted in Block, _Readings in the Philosophy
of Psychology_, Vol. 1) touches on issues related to this.

>If the skeptic denies this point, then I am afraid he is committed to
>a dualistic position, in which mental substances are connected in
>fairly arbitrary ways to physical objects.  Suppose an entity does
>have a mind in this sense --- a subjectivity arbitrarily associated
>with it.  Then there is nothing linking this kind of mind with any
>information-processing capacity of the system.  Suppose the computers
>at the National Weather Service do have this kind of subjective mind,
>in the same sense that trees or rocks might.  These minds might be dreaming
>about God; the chances that they are thinking about the weather are
>negligible.

One can be some form of dualist *without* believing that minds are
arbitrarily associated with information-processing.  My favourite
theory of consciousness is sort-of-dualist, but still holds that the
causal roots (really, the supervenience base) of consciousness lie in
information-processing.  There can be a dualistic mind-brain association
without it being an arbitrary one.

If we could *prove* such a theory, or any theory delineating the specific
roots of consciousness in the physical, then there would no longer be an OMP.
We would know precisely which entities have, or don't have, minds.
Unfortunately no-one yet has done more than offer plausibility arguments
for various theories.  This is OK -- for all we know, that's the best we can
do.  But while that's the best we can do, the OMP will remain.

>However, the problem turns out to be a vestigial organ of an extinct
>philosophical organism, Cartesian epistemology.  The question is
>invariably posed in a setting where it is completely obvious, prima
>facie, that the creature in question has a mind, and it presupposes
>that the alternative --- that it does not ``really'' have a mind ---
>makes sense.  But the alternative cannot be spelled out without
>presupposing dualism or solipsism.

This is quite false.  It's perfectly consistent to hold a quite
materialist theory of mind, where subjective phenomena are identical
to certain material events -- but only *certain* material events, such
as ones that occur in a given biochemistry.  I believe that for various
reasons this is highly implausible, but it is a coherent position.

--
Dave Chalmers                            (dave@cogsci.indiana.edu)      
Center for Research on Concepts and Cognition, Indiana University.
"It is not the least charm of a theory that it is refutable."

mcdermott-drew@cs.yale.edu (Drew McDermott) (10/25/90)

As usual, my previous posting on Why There is no Other-Minds Problem
needs further explanation.

David Chalmers (dave@cogsci.indiana.edu) objects thus:

  In article <26852@cs.yale.edu> mcdermott-drew@cs.yale.edu (Drew McDermott) writes:

    >Okay, how about this phrasing: As granted above,
    >mental phenomena have three aspects, subjective, behavioral, and
    >implementational.  Why couldn't it simply be the case that the first
    >one is often or always absent?  The only agent I'm sure has subjective
    >experience is me, so what's my evidence that anyone else has?

  Yes, this is the right way to state the problem.  "Mind" as the term
  is traditionally used has behavioural, functional, and phenomenological
  aspects.  The "other minds" problem is concerned only with the
  phenomenological (subjective) aspects.

    >The problem with this objection is that it misconstrues {\it aspects}
    >as {\it parts}.  This construal would make sense for problems like the
    >``other-planets problem'': We observe one star with planets and ask
    >what our evidence is that some or any other stars have any.  But
    >subjective experience is not an appendage of mentation in this way.  A
    >mental event can be experienced from several different angles,
    >including (although the categories are fairly arbitrary), the
    >subjective, the behavioral, and the implementational.  But the
    >experiences are all {\it of the same event}.

  The flaw in your argument is right here -- the rest is just wrapping.

Well, I'm not going to acknowledge that my argument is a flaw plus
wrapping!  I will acknowledge that we have a clash of several
different intuitions, and that people are hard to budge from those
they're used to.  What I want to do is reassure those who think that
there's something silly about the other-minds problem that their
intuitions are basically healthy.

  It may be that in many cases -- in particular, the cases where subjective
  phenomena exist -- the subjective, behavioural, and functional aspects
  are all aspects of the same event.  But this does not imply that in all
  cases, they must be tied together.  Certainly, in *my* case the three
  aspects go together.  Does that allow me to deduce that in your case, or
  in a robot's case, they must also?  Of course not.  The coincidence of
  functional organization, behavioural consequences and subjectivity in some
  cases is quite compatible, a priori, with the coincidence of functional
  organization, behavioural consequences and non-subjectivity in others.
  Analogy time: In certain cases, temperature coincides with the motion of gas
  molecules -- they are literally two aspects of the same thing.  In other
  cases, temperature exists without the motion of gas molecules.

  Of course, you may be wanting to make a *claim* that in all cases where at
  least one of them occurs, the three aspects of "mind" will be tied together as
  different aspects of the same thing.  This is a non-trivial claim, whose
  truth-value may or may not be "true".  The non-triviality of this claim is the
  reason why there is an Other-Minds Problem.  (I in fact believe that the claim
  is true -- more specifically, I believe that the phenomenological is
  supervenient on the functional.  But I can't prove it, at least not easily,
  and so the Other Minds problem can still be raised.)

This restates the problem again pretty well, but fails to convince me
it's real.  It seems to propose that we could account for all the
*observable properties* of subjectivity (or consciousness) and still not
be sure we had "really" accounted for consciousness.  This is what
seems to me to be a preposterously high standard.

Suppose we build a robot, and it claims to be subjectively aware.  We
ask it how it tells the difference between red things and green
things, and it says they look different.  When we press it, it starts
to tell us about qualia.  It treats its own decisions as free.
Inspection of its blueprints shows that it has the same functional
organization as the brain.  (We can't do this today, of course.)

It's at this point that we hit the intuition that we still couldn't
know whether it *really* experienced anything.  My claim is that this
intuition is empty.  When we've accounted for everything, there's
nothing left to account for.  If necessary, we could even put into the
robot the intuition that mere observables are not enough to be sure an
entity is subjectively aware.  (I don't think it's necessary; I think
the intuition is due to faulty education, not wiring.  But I could be
wrong.)

  Your phrasing right here concedes that the OMP is really a problem.
  "Extremely likely" is not good enough.  It was always extremely likely
  that you and other people all have subjective experience.  The OMP
  is "How can you know *for sure*?"  As far as I'm concerned, the best
  answer for now is "You can't, but you can be pretty confident on various
  inductive grounds".

What is the standard?  Do we know the theory of evolution "for sure"?
Most scientists are considerably annoyed by creationists' refusal to
grant that the theory of evolution is as certain as anything ever gets
in science.  They should also be annoyed by philosophers' attempts to
uphold a similar refusal here with respect to a hypothetical future
theory of mind.

Whence this refusal?  I think Chalmers's use of the word
"phenomenological" above is quite revealing.  This word derives from
Kant's notion of "phenomenon," or thing-as-it-appears.  Kant and his
buddies were obsessed by the distinction between appearance and
reality.  To our minds this obsession seems quaint.  We picture the
world as populated by a variety of information-processing systems,
some conscious, others simpler.  All of them introduce errors and
approximations into data, and most manage to cope with these
distortions most of the time.  We can often come up with a
quantitative theory about how close a system is to the truth about a
situation.  But *this* theory of appearance vs.  reality is obviously
not what Kant and Descartes were worried about.  They were thinking of
the situation of a mind that could be absolutely certain only of
"appearance," and had to reason back from there to reality.  If you
take this picture seriously, then a mind isn't a mind unless things
"appear" to it in this way.  And nothing can "appear" to a robot in
this sense, because (a) there's no absolute certainty; and (b) it's
not even possible or necessary to single out one part of the robot as
the "subject" of these "phenomena."  *This,* I think, is what make
people nervous about the idea that robots can have minds.  All we have
to do is junk the whole epistemological framework, and the problem
goes away.  Consciousness becomes just another part of the world, like
photosynthesis.

  It's perfectly consistent to hold a quite materialist theory of mind,
  where subjective phenomena are identical to certain material events --
  but only *certain* material events, such as ones that occur in a given
  biochemistry.  I believe that for various reasons this is highly
  implausible, but it is a coherent position.


  Dave Chalmers                            (dave@cogsci.indiana.edu)      
  Center for Research on Concepts and Cognition, Indiana University.

By the way, I agree with this point.  I am emphatically not arguing
that a computationalist or functionalist position is correct a priori.
It may well turn out (although I doubt it) that consciousness is a
biochemical property, or a property of enormous and unintelligible
neural nets.  It just seems to me that *any* materialist theory is
going to be open to the "objection" that there's no way to be *sure*
that the objects it predicts are conscious *really* have minds, blah,
blah.

                              Drew McDermott
                              mcdermott@cs.yale.edu