[mod.ai] another stab at "what are we arguing about"

cugini@icst-ecf.UUCP (02/09/87)

 
> > Me:       "Why I am not a Methodological Epiphenomenalist"
>  
> Harnad: This is an ironic twist on Russell's sceptical book about religious
> beliefs!  I'm the one who should be writing "Why I'm Not a Methodological
> Mentalist."

Yeah, but I said it first...

OK, seriously folks, I think I see this discussion starting to converge on
a central point of disagreement (don't look so skeptical).  Harnad, 
Reed, Taylor, and I have all mentioned this "on the side" but I think it
may be the major sticking point between Harnad and the latter three.

> Reed: ...However, I don't buy the assumption that two must *observe the same
> instance of a phenomenon* in order to perform an *observer-independent
> measurement of the same (generic) phenomenon*. The two physicists can
> agree that they are studying the same generic phenomenon because they
> know they are doing similar things to similar equipment, and getting
> similar results. But there is nothing to prevent two psychologists from
> doing similar (mental) things to similar (mental) equipment and getting
> similar results, even if neither engages in any overt behavior apart
> from reporting the results of his measurements to the other....
> 
> What is objectively different about the human case is that not only is
> the other human doing similar (mental) things, he or she is doing those
> things to similar (human mind implemented on a human brain) equipment.
> If we obtain similar results, Occam's razor suggests that we explain
> them similarly: if my results come from measurement of subjectively
> experienced events, it is reasonable for me to suppose that another
> human's similar results come from the same source. But a computer's
> "mental" equipment is (at this point in time) sufficiently dissimilar
> from a human's that the above reasoning would break down at the point
> of "doing similar things to similar equipment with similar results",
> even if the procedures and results somehow did turn out to be identical.


 
> > Harnad: Everything resembles everything else in an infinite number of
> > ways; the problem is sorting out which of the similarities is relevant.
>  
> Taylor: Absolutely.  Watanabe's Theorem of the Ugly Duckling applies.  The
> distinctions (and similarities) we deem important are no more or less
> real than the infinity of ones that we ignore.  Nevertheless, we DO see
> some things as more alike than other things, because we see some similarities
> (and some differences) as more important than others.
>  
> In the matter of consciousness, I KNOW (no counterargument possible) that
> I am conscious, Ken Laws knows he is conscious, Steve Harnad knows he is
> conscious.  I don't know this of Ken or Steve, but their output on a
> computer terminal is enough like mine for me to presume by that similarity
> that they are human.  By Occam's razor, in the absence of evidence to the
> contrary, I am forced to believe that most humans work the way I do.
> Therefore
> it is simpler to presume that Ken and Steve experience consciousness than
> that they work according to one set of natural laws, and I, alone of all
> the world, conform to another.


The Big Question: Is your brain more similar to mine than either is to any
plausible silicon-based device?

I (and Reed and Taylor?) have been pushing the "brain-as-criterion" view
based on a very simple line of reasoning:

1. my brain causes my consciousness. 
2. your brain is a lot like mine.
3. therefore, by "same cause, same effect" your brain probably
   causes consciousness in you.

(BTW, the above does NOT deny the relevance of similar performance in
confirming 3.)

Now, when I say simple things like this, Harnad says complicated things like:
Re 1: how do you KNOW your brain causes your consciousness?  How can you have
causal knowledge without a good theory of mind-brain interaction?
Re 2: how do you KNOW your brain is similar to others'?  Similar wrt
what features?  How do you know these are the relevant features?

For now (and with some luck, forever) I am going to avoid a
straightforward philosophical reply.  I think there may be some
reasonably satisfactory (but very long and philosophical) answers to
these questions, but I maintain that the questions are really not relevant.

We are dealing with the mind-body problem.  That's enough of a philosophical
problem to keep us busy.  I have noticed (although I can't explain why)
that when people start discussing the mind-body problem, they (even I, once
in a while) start to use it as a hook on which to hang every other
known philosophical problem:

1. well how do we know anything at all, much less our neighbors' mental states?
   (skepticism and epistemology).

2. what does it mean to say that A causes B, and what is the nature of
   causal knowledge?  (metaphysics and epistemology).

3. is it more moral to kill living thing X than a robot?  (ethics).

All of these are perfectly legitimate philosophical questions, but
they are general problems, NOT peculiar to the mind-body problem.
When addressing the mind-body problem, we should deal with its
peculiar features (of which there are enough), and not get mired in
more general problems * unless they are truly in doubt and thus their
solution truly necessary for M-B purposes. *

I do not believe this is so of the issues Harnad raises.  I
believe people can a) have causal knowledge, both of instances and
types of events, without any articulated "deep" theory of the
mechanics going on behind the scenes (indeed, the deep knowledge
comes later, as an attempt to explain the already-observed causal
interaction), and b) spot relevant similarities without being
able to articulate them.

A member of an Amazon tribe could find out, truly know, that light
switches cause lights to come on, with a few minutes of
experimentation.  It is no objection to his knowledge to say that he
has no causal theory within which to embed this knowledge, or to
question his knowledge of the relevance of the similarities among
various light switches, even if he is hard-pressed to say anything
beyond "they look alike."  It is a commonplace observation that many
people can distinguish between canines and felines without being
able to say why.  I do not assert, I am quick to add, that
these rough-and-ready processes are infallible - yes, yes, are whales
more like cows than fish, how should I know?

But to raise the specter of certainty is again a side-issue.
Do we not all agree that the tribesman's knowledge of lights and light
switches is truly knowledge, however unsophisticated?

Now, S. Harnad, upon your solemn oath, do you have any serious practical
doubt that, in fact,

1. you have a brain?
2. that it is the primary cause of your consciousness?
3. that other people have brains?
4. that these brains are similar to your own (and if not, why do you
   and everyone else use the same word to refer to them?), at least
   more so than any other object with which you are familiar?

Now if you do know these utterly ordinary assertions to be true,
* even if you can't produce a high-quality philosophical defense for
them (which inability, I argue, does not cast serious doubt on them,
or on the status of your belief in them as knowledge) * then what
is wrong with the simple inference that others' possession of a brain
is a good reason (not necessarily the only reason) to believe that
they are conscious?

John Cugini <Cugini@icst-ecf>
------