[mod.ai] More on Minsky on Minds

harnad@seismo.CSS.GOV@mind.UUCP (01/21/87)

From: MINSKY%OZ.AI.MIT.EDU@XX.LCS.MIT.EDU
Subject: AIList Digest   V5 #4

>	I don't believe that the phenomenon of "first order consciousness"
>	exists, that Harnad talks about.  The part of the mind that speaks is
>	not experiencing the toothache, but is reacting to signals that were
>	sent some small time ago from other parts of the brain.

There seems to be a contradiction in the above set of statements. If
the meaning of "first order consciousness" (call it "C-1") has been
understood, then one cannot at the same time say one does not believe
C-1 exists AND that "the part of the mind that speaks is not
experiencing the toothache" -- unless of course one believes NO part
of the mind is experiencing the toothache; for whatever part of
the mind IS experiencing the toothache is the part of the mind having
C-1. If Minsky DOES mean that no part of the mind is experiencing the
toothache, then I wish to offer my own humble experience as a
counterexample: I (and therefore, a fortiori, some part of my mind)
certainly do experience toothache.

To minimize cross-talk and misunderstanding, I will explicitly
define C-1 and C-2 ("2nd order consciousness"):

	To have (or be) C-1 is to have ANY qualitative experience at all; to
	feel, see, hear. Philosophers call having C-1 "having qualia."
	A helpful formulation we owe to the philosopher Tom Nagel is
	that whenever one has C-1 -- i.e., whenever one experiences
	anything at all -- there is something it is "like" to have that
	experience, and we experience what that something is like directly.
	Note: Everyone who is not in the grip of some theoretical
	position knows EXACTLY what I mean by the above, and I use
	the example of having a current toothache merely as a standard
	illustration.

	To have (or be) C-2 (or C-N) is to be aware of having a
	lower-order experience, such as C-1. The distinction between
	C-1 and C-2 is often formulated as the distinction between
	"being aware of something" (say, having a toothache) and "being
	aware of being aware of something" (including, say, remembering,
	thinking about or talking about having a toothache, or about
	what it's like to have a toothache).

My critiques of the extracts from Minsky's book were based on the
following simple point: His hypotheses about the functional
substrates of consciousness are all based on analogies between things
that can go on in machines (and perhaps brains) and things that seem to
go on in C-2. But C-2 is really just a 2nd-order frill on the mind/body
problem, compared with the problem of capturing the machine/brain
substrates of C-1.  Worse than that, C-2 already presupposes C-1. You can't
have awareness-of-awareness without having awareness -- i.e., direct,
first-order experiences like toothaches -- in the first place. This
led directly to my challenge to Minsky: Why do any of the processes he
describes require C-1 (and hence any level of C) at all? Why can't all
the functions he describes be accomplished without being given the
interpretation that they are conscious -- i.e. that they are accompanied
by any experience -- at all? What is there about his scenario that could not
be accomplished COMPLETELY UNCONSCIOUSLY?

To answer the last question is finally to confront the real mind/body
problem. And if Minsky did so, he would find that the conscious
interpretation of all his machine processes is completely
supererogatory. There's no particular reason to believe that systems
with only the kinds of properties he describes would have (or be) C-1. Hence
there's no reason to be persuaded by the analogies between their inner
workings and some of our inferences and introspections about C-2 either.

To put it more concretely using Minsky's own example: There is perhaps
marginally more inclination to believe that systems with the inner workings
he describes [objectively, of course, minus the conscious interpretation
with which they are decorated] are more likely to be conscious
than a stone, but even this marginal additional credibility derives only
from the fact that such systems can (again, objectively) DO more than
a stone, rather than from the C-2 interpretations and analogies. [And
it is of course this performance criterion alone -- what I've called
elsewhere the Total Turing Test -- that I have argued is the ONLY
defensible criterion for inferring consciousness in any device other than
oneself.]


>	I think Harnad's phenomenology is too simple-minded to take seriously.
>	If he has ever had a toothache, he will remember that one is not
>	conscious of it all the time, even if it is very painful; one becomes
>	aware of it in episodes of various lengths. I suppose he'll argue that
>	he remains unconsciously conscious of it. I...ask him to review his
>	insistence that ANYTHING can happen instantaneously - no matter how
>	convincing the illusion is...

I hope no one will ever catch me suggesting that we can be "unconsciously
conscious" of anything, since I regard that as an unmitigated contradiction
in terms (and probably a particularly unhelpful Nachlass from Freud).
I am also reasonably confident that my simple-minded phenomenology is
shared by anyone who can pry himself loose from prior theoretical
commitments.

I agree that toothaches fade in and out, and that conscious "instants"
are not punctate, but smeared across a fuzzy interval. But so what?
Call Delta-T one of those instants of consciousness of a toothache. It
is when I'm feeling that toothache RIGHT NOW that I am having a 1st
order conscious experience. Call it Delta-C-1 if you prefer, but it's
still C-1 (i.e., experiencing pain now) and not just C-2 (i.e.,
remembering, describing, or reflecting on experiencing pain) that's
going on then. And unless you can make a case for C-1, the case for C-2
is left trying to elevate itself by its boot-straps.

I also agree, of course, that conscious experiences (both C-1 and C-2)
involve illusions, including temporal illusions. [In an article in
Cognition and Brain Theory (5:29-47, 1982) entitled "Consciousness: An
Afterthought" I tried to show how an experience might be a pastische
of temporal and causal illusions.] But one thing's no illusion, and
that's the fact THAT we're having an experience. The toothache I feel
I'm having right now may in fact have its causal origin in a tooth
injury that happened 90 seconds ago, or a brain event that happened 30
milliseconds ago, but what I'm feeling when I feel it is a
here-and-now toothache, and that's real. It's even real if there's no
tooth injury at all. The point is that the temporal and causal
CONTENTS of an experience may be illusory in their relation to, or
representation of, real time and real causes, but they can't be illusions
AS experiences. And it is this "phenomenological validity" of
conscious experience (C-1 in particular) that is the real burden of
any machine/brain theory of consciousness.

It's a useful constraint to observe the following dichotomy (which
corresponds roughly to the objective/subjective dichotomy): Keep
behavioral performance and the processes that generate it on the
objective side (O) of the ledger, and leave them uninterpreted. On the
subjective (S) side, place conscious experience (1st order and
higher-order) and its contents, such as they are; these are of course
necessarily interpreted. You now need an argument for interpreting any
theory of O in terms of S. In particular, you must show why the
uninterpreted O story ALONE will not work (i.e., why ALL the processes
you posit cannot be completely unconscious). [The history of the
mind/body problem to date -- in my view, at least -- is that no one
has yet managed to do the latter in any remotely rigorous or
convincing way.]

Consider the toothache. On the O side there may (or may not) be
tooth injury, neural substrates of tooth injury, verbal and nonverbal
expressions of pain, and neural substrates of verbal and nonverbal
expressions of pain. These events may be arranged in real time in
various ways. On the S side there is my feeling -- fading
in and out, smeared across time, sometimes vocalized sometimes just
silently suffered -- of having a toothache.

The mind/body problem then becomes the problem of how (and why) to
equate those objective phenomena (environmental events, neural events,
behaviors) with those subjective phenomena (feelings of pain, etc.).
My critique of the excerpts from Minsky's book was that he was conferring
the subjective interpretation on his proposed objective processes and
events without any apparent argument about why the VERY SAME objective
story could not be told with equal objective validity WITHOUT the
subjective interpretation. [If that sounds like a Catch-22, then I've
succeeded in showing the true face of the mind/body problem at last.
It also perhaps shows why I recommend methodological epiphenomenalism --
i.e., not trying to account for consciousness, but only for the
objective substrates of our total performance capacity -- in place of
subjective over-interpretations of those same processes: Because, at
worst, the hermeneutic embellishments will mask or distract from
performance weaknesses, and at best they are theoretically (i.e.,
objectively) superfluous.]

>	As for that "mind/body problem" I repeat my slogan, "Minds are simply
>	what brains do."

Easier said than done. And, as I've suggested, even when done, it's no
"solution."
Stevan Harnad
{allegra, bellcore, seismo, rutgers, packard}  !princeton!mind!harnad
harnad%mind@princeton.csnet
(609)-921-7771