[comp.ai] Harnad's epiphenomenalism

aweinste@Diamond.UUCP (02/10/87)

In defending his thesis of "methodological epiphenomenalism", one of Harnad's
favorite strategies is apparently a variant of G.E. Moore's "naturalistic
fallacy" argument:  For any proposed definition of consciousness, he will
ask:  "You say consciousness is X, but why couldn't you just as well have X
WITHOUT consciousness?" If we concede the meaningfulness of this question in
all cases, obviously this objection will be decisive.

But, I think this argument is as question-begging now as it was when Moore
used it in ethical philosophy.  The definer is proposing that X is just what
consciousness IS.  Accordingly, he does *not* grant that you could have X
without consciousness since, on his view, X and consciousness are one and the
same.

Put another way, the materialist is not trying to ADD anything to the
objective, causal story of X by calling it consciousness. Rather, he is
attempting to illuminate the problematic common-sense notion of consciousness
by showing how it is interpretable in naturalistic terms.  Obviously the
adequacy of any proposed definition of consciousness will need to be
established; the issues to be considered will pertain to whether or not the
definition does reasonable justice to the pre-analytic application of the
term, etc. But these issues are just the usual ones for inter-theoretical
identification, and don't present any special problem in the case of mind and
brain.

Another point that Harnad has often stated is that behavior is in practice
our only criterion for the ascription of consciousness.  While this is
currently true, it does not at all preclude the revision of our theory in the
direction of a more refined criterion.  Compare, say, the definition of
"gold." At one time, this substance was identifiable solely on the basis of
its superficial properties such as color, hardness, and specific gravity.
With the growth of scientific knowledge, a new definition of gold in terms of
atomic structure has come to be accepted, and this criterion now supersedes
the earlier ones. If you like, you might say that atomic theory came to
reveal the "essence" of gold.  I see no reason to suppose an analogous shift
couldn't arise out of the study of the mind and brain.

Harnad's "methodological epiphenomenalism" is apparently an unavoidable
consequence of his philosophy of mind, which seems to be epiphenomenalism
simpliciter.  I am surprised to find many of Harnad's interlocutors
essentially granting him this controversial premise. Whatever happened to
materialism? As I understood it, the whole field of cognitive science -- the
rehabilitation of mentalistic theorizing in psychology -- was inspired by the
philosophical insight that the functional states of computers seemed to have
just the right sorts of features we would want for psycho-physical
identification. Harnad must believe that this philosophy has failed, dooming
us to return to an uneasy and unappealing view: ontological dualism coupled
with methodological behaviorism -- the worst of both worlds.

Well, I don't think we ought to give this up so easily.  I would urge that
cognitivists *not* buy into the premise of so many of Harnad's replies: the
existence of some weird parallel universe of subjective experience.
(Actually, *multiple* such universes, one per conscious subject, though of
course the existence of more than my own is always open to doubt.) We should
recognize no such private worlds. The most promising prospect we have is that
conscious experiences are either to be identified with functional states of
the brain or eliminated from our ultimate picture of the world. How this
reduction is to be carried out in detail is naturally a matter for
empirical study to reveal, but this should remain one (distant) goal of
mind/brain inquiry.

Anders Weinstein		aweinste@DIAMOND.BBN.COM
BBN Labs, Cambridge MA

harnad@mind.UUCP (02/11/87)

aweinste@Diamond.BBN.COM (Anders Weinstein) of BBN Labs, Cambridge, MA,
writes:

>	For any proposed definition of consciousness, [Harnad] will
>	ask:  "You say consciousness is X, but why couldn't you just as
>	well have X WITHOUT consciousness?"
>	I think this argument is as question-begging now as it was when Moore
>	used it in ethical philosophy. The definer is proposing that X is
>	just what consciousness IS.  Accordingly, he does *not* grant that
>	you could have X without consciousness since, on his view, X and
>	consciousness are one and the same.

It unfortunately has to be relentlessly reiterated that these matters
are not settled by definitions or obiter dicta. It simply won't do to
say "On my view, consciousness and X [say, memory, learning,
self-referential capacity, linguistic capacity, or what have you] are
one and the same." It is perfectly legitimate -- indeed, mandatory, if
SOMEONE is going to exercise some self-critical constraints on mentalistic
interpretation -- to ask WHY a candidate process should be interpreted as
conscious. If all the functional answers to that question -- "it's so it
can accomplish X," or "it's so it can accomplish Y this way rather than that
way" -- would be the SAME for an unconscious process, then there are indeed
strong grounds for supposing that the mentalistic interpretation is
methodologically (I might even say, to bait the functionalists more
pointedly, "functionally") superfluous. (It's not the skepticism
that's question-begging, but the mentalistic interpretation that's
supererogatory.)

I have no idea how or why Moore used a similar argument in ethics.
My own argument is purely methodological (and functional -- I am a
kind of functionalist too): I am concerned with how to get devices we
build (and hence understand) to DO what minds can do. These devices may
also turn out to BE what minds are (namely conscious), but I do not
believe that there is any objective, scientific way to ascertain that.
Nor do I think it is methodologically possible or relevant (or, a
fortiori, necessary) to do so. My pointed "why" questions are intended to
pare off the unjustified and distracting mentalistic hype and leave a clearer
image of just how far we really have or haven't gotten in answering the
"how" questions, which are the only scientifically tractable ones in the
area of theoretical bioengineering that mind-modeling occupies.

>	the materialist is attempting to illuminate the problematic
>	common-sense notion of consciousness by showing how it is
>	interpretable in naturalistic terms. Obviously the adequacy of
>	any proposed definition of consciousness will need to be established;
>	the issues to be considered will pertain to whether or not the
>	definition does reasonable justice to the pre-analytic application
>	of the term, etc. But these issues are just the usual ones for
>	inter-theoretical identification, and don't present any special
>	problem in the case of mind and brain.

But it is just the question of whether these issues are indeed the
"usual" ones in the mind/brain case that is at issue. I've given lots of
logical and methodological reasons why they're not. Wishful thinking, hopeful
overinterpretation and scientistic dogma seem to be the only rejoinders
I'm hearing. (I'm a materialist too; methodological constraints on
theoretical inference and its deliverances are what's at issue here.)

>	Compare, say, the definition of "gold."...
>	growth of scientific knowledge...new definition of gold
>	I see no reason to suppose an analogous shift
>	couldn't arise out of the study of the mind and brain.

I like the way Nagel handled this old reductionist chestnut: In
a chestnut-shell, he pointed out that all of the standard
reduction/revision scenarios of science have always consisted of one
objective account of an objective phenomenon being superseded or
subsumed by another objective account of an objective phenomenon (heat
--> mean molecular motion, etc.). There's nothing in this standard
revision-scenario that applies to -- much less can handle --
redefining subjective phenomena objectively. That prominent disanalogy
is yet another of the faces of the mind/body problem (that
functionalist euphoria sometimes overlooks). As it stands, the faith
in an eventual successful "redefinition" is just that: a faith. One
wonders why it does not founder in the sea of counter-examples and
disanalogies rightly generated by Moore's (if it's really his) method
of pointed "why" challenges. But there's no accounting for faith.

>	Harnad's "methodological epiphenomenalism" is apparently an
>	unavoidable consequence of his philosophy of mind, which seems to
>	be epiphenomenalism simpliciter.

No, I'm not an ontological epiphenomenalist (which I suppose is a kind
of dualism), just a methodological one. I don't think consciousness
can enter into scientific theory-building and theory-testing, for the
reasons I've stated. In fact, I think it retards theory-building to
try to account for consciousness or to dress theory up with conscious
interpretations. (Among other things, it masks the performance work
that still remains to be done, and lionizes possible nonstarters.)

However, I have no doubt that consciousness exists, and no serious
doubts that organisms are conscious. Moreover, I'm quite prepared to believe
the same of devices that pass the TTT, and on exactly the same grounds. These
devices may well have "captured" consciousness functionally. Yet not only
is there no way of knowing whether or not they really have; it even makes no
methodological difference to their functioning or to our theoretical
understanding of it whether or not they have really captured
consciousness. This is not an ontological issue. The mind/body problem
simply represents a methodological constraint on what can be known objectively,
i.e., scientifically. (Note that this constraint is not just the ordinary
underdetermination of scientific inferences about unobservables; it's
much worse. For, as I've pointed out several times before, although
hypothesized entities such as quarks or superstrings are no more
observable or "verifiable" than consciousness, it is a methodological
fact that the respective theories from which they come cannot account for the
objective phenomena without positing their existence, whereas any
theory of the objective phenomena of mind -- i.e., I/O performance
capacity, perhaps supplemented by structure and function -- will work
just as well with or without a mentalistic interpretation.)

>	the whole field of cognitive science -- the rehabilitation of
>	mentalistic theorizing in psychology -- was inspired by the
>	philosophical insight that the functional states of computers
>	seemed to have just the right sorts of features we would want for
>	psycho-physical identification. Harnad must believe that this
>	philosophy has failed, dooming us to return to an uneasy and
>	unappealing view: ontological dualism coupled with methodological
>	behaviorism -- the worst of both worlds.

I certainly believe that the view has failed methodologically. But I
don't think the consequence is ontological dualism (for the reasons
I've stated) and it's not clear what "methodological behaviorism" is
(or was; I'll return to this important point). Nor do I consider
cognitive science to be synonymous with mentalistic theorizing; nor
do I consider the field to be inspired by the psycho-physical
identificatory hopes aroused by the computer. If you want to know what
I think, it's this:

Behaviorism, in a reaction against the sterility of introspectionism,
rejected reflecting and theorizing on what went on in the mind,
suggesting instead that psychology's task was to study observable
behavior. But in its animus against mentalistic theory, behaviorism
managed to do in or trivialize theory altogether. Put another way, not
only was behaviorism opposed to (observing or) theorizing about what went
on in the MIND, it also opposed theorizing about what went on in the HEAD.
As a consequence, behavioristic psychology effectively became a
"science" without a theoretical or inferential branch to speak of. 

Now what I think happened with the advent of cognitive science was
that, again, just as unobservable mental processes and unobservable
(shall we call them) "internal" processes had been jointly banned from
the citadel, they were, with the rise of computer modeling (and
neural modeling), jointly readmitted. The mistake, as I see it,
was to embrace indiscriminately BOTH the legitimate right (and need) to
make theoretical inferences about the unobservable functional substrates of
behavior AND the temptation to make mentalistic interpretations of
them. In my view, the first advances empirical progress (in fact is
essential for it), the second beclouds and retards it. Cognitive
science is (or should be) behaviorism-with-a-theory (or theories) at
last. If that's "methodological behaviorism," then it took the computer
era to make it so.

>	Well, I don't think we ought to give this up so easily.
>	I would urge that cognitivists *not* buy into the premise of
>	so many of Harnad's replies: the existence of some weird parallel
>	universe of subjective experience... conscious experiences are
>	either to be identified with functional states of the brain or
>	eliminated from our ultimate picture of the world. How this
>	reduction is to be carried out in detail is naturally a matter for
>	empirical study to reveal, but this should remain one (distant)
>	goal of mind/brain inquiry.

Identify it with the functional states if you like. But then FORGET
about it until you've GOT the functional states that deliver the
performance (TTT) goods. When you've got those -- i.e., when all the
objective questions there are to be answered are answered -- then no
harm whatever will be done by an orgy of mentalistic interpretation of
the objective story.

No "weird parallel universe." Just the familiar subjective one we all
know at first hand. Plus the methodological constraint that the
complete scientific picture is doomed to fail to account to our satisfaction
for the existence, nature, and utility of subjectivity.

-- 

Stevan Harnad                                  (609) - 921 7771
{allegra, bellcore, seismo, rutgers, packard}  !princeton!mind!harnad
harnad%mind@princeton.csnet           

adt@minster.UUCP (02/12/87)

In article <4021@quartz.Diamond.BBN.COM> aweinste@Diamond.BBN.COM (Anders Weinstein) writes:

>Well, I don't think we ought to give this up so easily.  I would urge that
>cognitivists *not* buy into the premise of so many of Harnad's replies: the
>existence of some weird parallel universe of subjective experience.
>(Actually, *multiple* such universes, one per conscious subject, though of
>course the existence of more than my own is always open to doubt.) We should
>recognize no such private worlds. The most promising prospect we have is that
>conscious experiences are either to be identified with functional states of
>the brain or eliminated from our ultimate picture of the world. How this
>reduction is to be carried out in detail is naturally a matter for
>empirical study to reveal, but this should remain one (distant) goal of
>mind/brain inquiry.
>
>Anders Weinstein		aweinste@DIAMOND.BBN.COM
>BBN Labs, Cambridge MA

Why is it necessary to assert that there are no subjective universes? All that
is necessary is that everyone, in their own subjective universe, agrees on the
definition of consciousness as they perceive it. Eliminating conscious
experiences from our ultimate picture of the world sounds like throwing away
half the results so that the theory fits. The analogy of our understanding of
gold in terms of its atomic structure is a useful one, but it does not require
the rejection of subjective universes. If objectivism is taken to its limit
as above, then surely it must be possible to define "beautiful" in terms of
physical states of mind; or "beautiful" should be eliminated from our
ultimate picture of the world; OR "beautiful" is not a conscious experience.
I would be interested to know which of these possibilities you support.