[comp.ai] Searle, Turing, Symbols, Categories

harnad@mind.UUCP (Stevan Harnad) (11/13/86)

kgd@rlvd.UUCP (Keith Dancey) of Rutherford Appleton Laboratories,
Atlas Buildings, U.K. writes:

>	I don't think it wise to try and equate 'mind' and 'intelligence'.
>	A 'mind' is an absolute thing, but 'intelligence' is relative.

I'm not quite sure what you mean, but perhaps it is that intelligence
seems to be something you can have to varying degrees, whereas a mind
seems to be an all-or-none phenomenon. If so, I agree. But it's also
true that the only handle we have on what might distinguish "mere"
clever performance tricks from "true" intelligence is that the latter,
at least, is indeed exhibited by humans, i.e., creatures with minds.

So the first-order problem is that of distinguishing pseudo-intelligent
performance from intelligent performance (if there is any difference),
and one "sure" case of the latter is our own. The reasoning then runs
like this: (i) I "know" I'm intelligent, because I know what it's "like,"
first-hand. (ii) I infer that other people have minds like my own, and
hence that they too are intelligent; the basis for my inference is
that they act the way I do in all respects that seem intuitively
relevant (they don't have to be my mirror image, just indistinguishable
from me in all the intuitively compelling  and rationally justifiable
respects, i.e., total performance capacity). (iii) Why deny
the same benefit of the doubt to other organisms and devices, if
indistinguishable in the SAME respects? (iv) Having ascertained that
turing-indistinguishable performance capacity is the basis for my
inference about mind, the inference about "intelligence" inherits it.

The only refinement I've put on this is the notion of the Total Turing
Test -- something that captures ALL of our performance capacity. This
is to rule out toy problems, toy models and toy modules, which mimic
subtotal fragments of our performance capacity, and leave open a
larger-than-necessary possibility that they do so in a significantly
different way, namely, unintelligently.

To put it another way, having a mind seems to be a sufficient
condition for displaying intelligent performance. If it is not also a
necessary condition, then our theories have, as I've argued, yet another
order of underdetermination in cognitive science, over and above
the underdetermination of ordinary scientific theory.

Note that even in human beings (and other organisms) HOW intelligent
they are is a matter of degree, but THAT they are intelligent at all
seems to be an all-or-none accompaniment of being human beings (or
other organisms). [Please, for standard objections about mental retardation,
aphasia, coma, brain death, etc., and their rebuttals, see prior iterations
of this discussion.]
 
>	 most people would, I believe, accept that a monkey has a
>	'mind'.  However, they would not necessarily so easily accept that a
>	monkey has 'performance capacity that is indistinguishable from human
>	performance capacity'. On the other hand, many people would accept
>	that certain robotic processes had 'intelligence', but would be very
>	reluctant to attribute them with 'minds'. I think there is something
>	organic about 'minds', but 'intelligence' can be codified, within
>	limits, of course.

I agree that monkeys and other organisms have minds. I think they also
have intelligence. In prior iterations I suggested that nonhuman
variants of the Total Turing Test (TTT) will probably be needed too. These
will still have to be "total" though, within the ecology in question.
Unfortunately, because not all of us are good naturalists, and because
all of us have weaker intuitions for minds in other species than our
own, these nonhuman variants will be both functionally more difficult
to attain (because of difficulties in knowing the species' total performance
ecology) and intuitively less compelling. They may be important way
stations on the path to the human TTT Utopia, though.

The second part of your statement, about accepting (a priori?) that certain
robotic processes have intelligence (and that intelligence is codifiable
[i.e. symbolic?]) unfortunately begs the question, which is whether there is
in fact some important natural or functional kind of which human
intelligence and any old fragment of clever robotic performance can both count
a priori as instances. [Since both this atheoretical view you mention
and the TTT view I advocate happen to share an ultimate reliance on
performance -- clever in one case, turing-indistinguishable from
mindful performance in the other -- and since the mind really only
plays a shadowy intuitive role in the TTT view, I'm inclined to think
that nonmodularity (i.e., the insistence on TOTAL performance
capacity) is really the only thing that separates the two views.]
---
I will close with a reply to an earlier comment by eugene@aurora.UUCP
(Eugene miya) of the NASA Ames Research Center, Mt. View, Ca., who wrote:

>	No single question can answer the question of intelligence, then how
>	many? I hope a finite, preferably small, or at least a countable number.

The Total Turing Test clearly has to be open-ended, just the way it is
when we use it informally in our ongoing provisional solutions to the
"other minds" problem. Being only an informal criterion for capturing
our intuitions about having a mind (the way Church's Thesis tries to
capture our intuitions about "effective procedures"), success on the
turing test, even after a lifetime of trials, can be no guarantor of
anything. And that's without mentioning the normal inductive risk that
forever attends any empirical hypothesis as long as time goes on...

The same is true of what I called the formal component of the Total
Turing Test, i.e., the empirical burden of designing a device that displays
ALL of our performance capacities. Here, though, the only liability is
inductive risk, which, as I've argued, is just the normal
underdetermination of scientific theory. The formal component makes no
mention of capturing mind, only total performance capacity. To a good
enough approximation I suppose the number of performance tasks the
model must prove capable of handling here is finite, though it's probably
very large.

>	[The turing test] should be timed as well as checked for accuracy...
>	Turing would want a degree of humor...
>	check for `personal values,' `compassion,'...
>	should have a degree of dynamic problem solving...
>	a whole body of psychometric literature which Turing did not consult.

I think that these details are premature and arbitrary. We all know
(well enough) what people can DO: They can discriminate, categorize,
manipulate, identify and describe objects and events in the world, and
they can respond appropriately to such descriptions. Now let's get
devices to (1) do it all (formal component) and then let's see whether
(2) there's anything that we can detect informally that distinguishes
these devices from other people we judge to have minds BY EXACTLY THE
SAME CRITERIA (namely, total performance capacity). If not, they are
turing-indistinguishable and we have no non-arbitrary basis for
singling them out as not having minds.
-- 

Stevan Harnad                                  (609) - 921 7771
{allegra, bellcore, seismo, rutgers, packard}  !princeton!mind!harnad
harnad%mind@princeton.csnet           

spf@bonnie.ATT.COM (11/13/86)

>From clyde!rutgers!princeton!mind!harnad Thu Nov 13 11:36:38 EST 1986
>Summary: On (1) whether having a mind is a necessary or merely a sufficient
>	 condition for having intelligence; and on (2) the open-endedness
>	 of the formal and informal Total Turing Test (TTT)
>
>kgd@rlvd.UUCP (Keith Dancey) of Rutherford Appleton Laboratories,
>Atlas Buildings, U.K. writes:
>
>>	I don't think it wise to try and equate 'mind' and 'intelligence'.
>>	A 'mind' is an absolute thing, but 'intelligence' is relative.
>
>I'm not quite sure what you mean, but perhaps it is that intelligence
>seems to be something you can have to varying degrees, whereas a mind
>seems to be an all-or-none phenomenon. 

What is your operational definition (a la Bridgman) of mind, or of
intelligence?  I think if you examine the notion of mind in the
psychological literature, you'll find only inadequate definitions,
most of which are circular in nature.  Notice that Turing hasn't defined
intelligence either.  He presumed the existence of such a definition
and then defined artificial intelligence in terms of that.

A) Is a mind a brain?
    If so, does any creature with a neurological complex (no matter
how simple) have a mind (and for that matter, can we not contemplate
a non-neurological brain)?
B)  Is mind consciousness?
  Does that mean that my unconscious experiences are "mindless"?
C)  Is mind intelligence?
  Does that mean that folks who consistently exhibit unintelligent
characteristics have no mind?

>Note that even in human beings (and other organisms) HOW intelligent
>they are is a matter of degree, but THAT they are intelligent at all
>seems to be an all-or-none accompaniment of being human beings (or
>other organisms).

I think this is merely an artifact of common language.  The
anthropologist might say that humans are intelligent (a binary
judgement), but you might claim that your boss is NOT (very) intelligent.
Do you really mean to say that your boss is not (very) human, or are you
merely using a higher-resolution definition of "intelligence" than
the anthropologist's binary intelligence metric?
 
In the end, we still lack the basic definition of that which we
seek to understand: intelligence.

Steve Frysinger
****

Remember what the dormouse said: "Feed your head!"
		-- Jefferson Airplane

harnad@mind.UUCP (Stevan Harnad) (11/16/86)

spf@bonnie.UUCP (Steve Frysinger) of AT&T Bell Laboratories, Whippany NJ
writes:

>	What is your operational definition (a la Bridgman) of mind,
>	or of intelligence?

Don't have one. Don't need one. And neither you nor I would recognize
one or have any basis for accepting one if we saw one. But we DO know
we have a mind (at least I do -- this is not intended facetiously, but
as a reminder of the basic point at issue, namely, the "other-minds"
problem), first-hand, without drawing on any "operational
definition." THAT's what we're trying to guess whether anyone
or anything ELSE but ourselves has. And I'm proposing the Total
Turing Test (a) because it's what we use already in all cases but our
own personal one, (b) because it gives cognitive science an empirical
problem to work with (modeling total performance capacity) and (c)
because there don't seem to be any nonarbitrary alternatives.

>	Is a mind a brain? If so, does any creature with a neurological
>	complex (no matter how simple) have a mind (and for that matter,
>	can we not contemplate a non-neurological brain)?

Brains have minds. I have no idea how far down the phylogenetic scale
that is true. Yes, we can contemplate a non-neurological brain; that's
what the robotic functionalism I advocate aims toward. But it's an
empirical question how many brain-like properties a device must have
to generate mind-like total performance capacity.

>	Is mind consciousness? Does that mean that my unconscious experiences
>	are "mindless"?

I am conscious, which is synonymous with "I have a mind." To be conscious
is to have qualitative experience. Strictly speaking, the consciousness is
only going on while experiences are going on. When I'm unconscious, I'm
unconscious.  But since I'm still alive, and wake up eventually (and, though
this is not necessary to the point, since when I wake up I experience a sense
of continuity with my prior experience), it seems reasonable to say
that I have a mind all along, only sometimes it's turned off.
"Unconscious experiences" (not to be confused with forgotten
experiences) is a contradiction in terms.

>	[1] Is mind intelligence? [2] Does that mean that folks who
>	consistently exhibit unintelligent characteristics have no mind?

[1] People (and animals, and perhaps future robots) are intelligent.
"Mind" is not synonymous with "intelligence" (why should it be?). Nor
is having a mind synonymous with having intelligence, although it may
be a sufficient, and possibly even a necessary condition for it. [2]
As I suggested in the module you are commenting on, intelligence does admit
of degrees, but not mind. But whereas I can imagine a person or animal
that is less intelligent than most others, I can't imagine one with no
intelligence at all. (This still does not make mind synonymous with
intelligence; they may be causally related, or merely correlated.)

-- 

Stevan Harnad                                  (609) - 921 7771
{allegra, bellcore, seismo, rutgers, packard}  !princeton!mind!harnad
harnad%mind@princeton.csnet           

lambert@mcvax.UUCP (11/22/86)

In article <229@mind.UUCP> harnad@mind.UUCP (Stevan Harnad) writes:
> I know directly that my
> performance is caused by my mind, and I infer that my
> mind is caused by my brain. I'll go even further (now that we're
> steeped in phenomenology): It is part of my EXPERIENCE of my behavior
> that it is caused by my mind. [I happen to believe (inferentially) that
> "free will" is an illusion, but I admit it's a phenomenological fact
> that free will sure doesn't FEEL like an illusion.] We do not experience our
> performance in the passive way that we experience sensory input. We
> experience it AS something we (our minds) are CAUSING. (In fact, that's
> probably the source of our intuitions about what causation IS. I'll
> return to this later.)

I hope I am not suffering from a terrible disease like incipient
schizophrenia, but for me it is not the case that I perceive/experience/
am-directly-aware-of my performance being caused by anything.  It just
happens.  I have some indirect evidence that there is some relation between
the performance I can watch happening and some sensations (such as anxiety
or happiness) that I can somehow experience directly, whereas others have no
such direct access and can only infer the presence or absence of these
sensations within me by circumstantial evidence.

How do I know I have a mind?  This reminds me of the question put to a
priest (teaching religion) by one of the pupils: "Father, how do we know
that people have a soul?"  "Well," said the priest, "here I have a card in
memory of Klaas de Vries.  Look, here it says: `Pray for the soul of Klaas
de Vries.'  They wouldn't put that there if people had no souls, would
they?"  There is something funny with this debate: it is hardly
translatable into Dutch.  The problem is that if you look up "mind" in an
English-Dutch dictionary, some eight translations are suggested, none of
which has "mind" as their primary meaning if translated back to English,
except for idiomatic reasons (like in: "So many men, so many minds").
Instead, we find (1) memory; (2) meaning; (3) thoughts; (4) ghost; (5)
soul; (6) understanding; (7) attention; (8) desire.  Of these, I contend,
"ghost" and "soul" are closest in meaning if someone says: "I know I have
mind.  But how can I know that other people have minds?"

OK, if you substitute "consciousness" for "mind", then this does no
essential harm to the debate and things become translatable to Dutch.  What
you gain is that you lose the suggestion evoked (at least to me) by the
word "mind" that it is something perhaps not quite, but almost, tangible,
something that you could lock up in a box, or cut in three, or take a
picture of with a camera using aura-sensitive film.  "Consciousness" is
more like "appetite": you can have it and you can loose it, but even though
it is functionally related to bodily organs, you normally don't think of it
as something located somewhere.  Does our appetite cause our eating?  ("My
appetite made me eat too much.")  How can we know for sure that other
people have appetites as well?  I propose to consider the question, "Can
machines have an appetite?"

Now why is consciousness "real", if free will is an illusion?  Or, rather,
why should the thesis that consciousness is "real" be more compelling than
the analogous thesis for free will?  In either case, the essential argument
is: "Because I [the proponent of that thesis] have direct, immediate,
evidence of it."  Sometimes we are conscious of certain sensations.  Do
these sensations disappear if we are not conscious of them?  Or do they go
on on a subconscious level?  That is like the question if a falling tree in
the middle of a forest makes a sound in the absence of creatures capable of
hearing.  That is a matter of the most useful (convenient) definition.  Let
us agree that the sensations continue at least if it can be shown that the
person involved keeps behaving as if the concomitant sensations continued,
even though professing in retrospection not to have been aware of them.  So
people can be afraid without realizing it, say, or drive a car without
being conscious of the traffic lights (and still halt for a red light).

How can you know that you have been conscious of something that you reacted
upon?  You stopped in front of a red light (or so others tell you) while
involved in a heated argument.  You have no remembrance whatsoever of that
light being red, or of your slowing down (or of having been at that
intersection at all).  Maybe your attention was so completely focussed on
the argument that the reaction to the traffic light was fully automatic.
Now someone tells you: No, it wasn't automatic.  You muttered something
unfriendly about that other car driver who made as if he was going to drive
on and then suddenly braked.  And now, zzzap!, the whole episode pops up in
your mind.  You remember that car, the intersection, the traffic light, its
jumping to red, the slight annoyance at not making it, and the anger about
that *@#$%!!! other driver whose car you almost crashed into.

Maybe everything is conscious.  Maybe stones are conscious of lying on the
ground, being kicked against, being picked up.  Their problem is, they can
hardly tell us.  The other problem is, they have no memory (lacking an
appropriate substrate for storing a trace of these experiences).  They are
like us with that traffic light, if there hadn't been that other car with
that idiot driver.  Even if we experience something consciously, if we
lose all remembrance of it, there is no way in which we can tell for sure
that there was a conscious experience.  Maybe we can infer consciousness by
an indirect argument, but that doesn't count.  Indirect evidence can be
pretty strong, but it can never give certainty.  Barring false memories, we
can only be sure if we remember the experience itself.  Now maybe
everything we experience is stored in memory.  It may be that we cannot
recall it like that, but using special techniques (hypnosis, electro-
stimulation, mnemonic drugs) it could be retrieved.  On the other hand, it
is more plausible that not quite everything is stored in memory, since that
would require a tremendous channel width for storing things, which is not
really functional, or, at least, there are presumably better trade-offs in
terms of survival capability given a limited brain capacity.

If some things we experience do not leave a recallable trace, then why
should we say that they were experienced consciously?  Or, why shouldn't we
maintain the position that stones are conscious as well?  That position is
maintainable, but it is not very useful in the sense that the word
"consciousness" looses its meaning; it becomes coextensive with
"existence".  We "loose" our bicameral minds, Freud, and all that jazz.
More useful, then, to use "consciousness" only for experiences that are,
somehow, recallable.  It makes sense that not all, not most of, but some of
the things that go on in our heads are stored away: in order to use for
determining patterns, for better evaluation of the expected outcome of
alternatives, for collecting material that is useful for the construction
or refinement of the model we have of the outside world, and so on.

Being the kind of animal homo is, it also makes sense to store material
that is useful for the refinement of the model we have of our inside world,
that which we think of as "ourselves".  After all, we consult that model to
pre-evaluate the outcome of certain alternatives.  If we don't "know"
ourselves, we are bound to do things (take on a responsibility, marry
someone, etc., things with a long-term commitment) that will lead us unto
suffering.  (We do these things anyway, and one of the causes is that we
don't know ourselves that well.)  So a lot of the things that go on "in the
front of our minds" are stored away, and are recallable.  And it is only
because of this recallability that we can say that these things were "in
the front of our minds", or "in our minds" at all.

Imagine now a machine programmed to "eat" and also to keep up some dinner
conversation.  It has some built-in rules about etiquette, such as that it is
impolite to eat too much, but also some parameter varying in time to model
"hunger", and a rule IF hunger THEN eat.  It just happens that the machine
is very, very hungry.  There is a conflict here, but fortunately our
machine is equipped with a conflict-resolution module (CRM) that uses fuzzy
logic to get an outcome for conflicting rules.  The outcome here is that
the machine eats more than is polite.  The dinner-conversation module (DCM)
has no direct interface with the CRM, but it is supplied with the resultant
behaviour as part of its input data and so it concludes (using the rule
base) that it is not behaving too politely.  Speaking anthropomorphically,
we would say that the machine is feeling uneasy about it.  Actually, a flag
"uneasiness" is raised, and the DCM is programmed to do something about it.
Using the rule base, the DCM finds a rule that tells it that uneasiness
about being impolite can be reduced by apologizing about it.  The apology
submodule (ASM) is invoked, which discovers that a casual apology will do
in this case, one form of which is just to state an appropriate cause for
the inappropriate behaviour.  The rule base tells ASM that PROBABLE CAUSE
OF eat IS appetite, (next to tape-worms, but these are measured as less
appropriate under the circumstances), so "<<SELF, having, appetite>;
<goodness, 0.6785>>" is passed back to DCM, which, after invoking
appropriate syntactic transformations, utters the unforgettable words:
"Boy, do I have an appetite today."

How different are we from that machine?  If we keep wolfing down food at a
dinner, knowing that we are misbehaving (or just substitute any behaviour
that you are prone to and that you realize is just not quite right--come
on, there must be something), is the choice made the result of a conscious
process?  I think it is not.  I have no reason to think it is.  Even if we
ponder a question consciously ("Whether 'tis nobler in the mind to suffer
..."), I think the outcome is not the result of the conscious process, but,
rather, that the consciousness is a side-effect of the conflict-resolution
process going on.  I think the same can be said about all "conscious"
processes.  The process is there, anyway; it could (in principle) take
place without leaving a trace in memory, but for functional reasons it does
leave such a trace.  And the word we use for these cognitive processes that
we can recall as having taken place is "conscious".

We can as it were instantly focus our attention on things that we are not
conscious of most of the time (the sensation of sitting on a chair, the
colour of the sky).  This means merely that we can influence which part of
the processes going on all the time get the preferential treatment of being
stored away for future reference.  The ability to do so is clearly
functional, notwithstanding the fact that we can make a non-functional use
of it.  This is not different from the fact that it is functional that I
can raise my arm by "willing" it to rise, although I can use that ability
to raise it gratuitously.  If the free will here is an illusion (which I
think is primarily a matter of how you choose to define something as
elusive as "free will"), then so is the free will to direct your attention
now to this, then to that.  Rather than to say that free will is an
"illusion", we might say that it is something that features in the model
people have about "themselves".  Similarly, I think it is better to say
that consciousness is not so much an illusion, but rather something to be
found in that model.  A relatively recent acquisition of that model is
known as the "subconscious".  Quite recent additions are "programs",
"sub-programs", "wrong wiring", etc.

A sufficiently "intelligent" machine, able to pass not only the dinner-
conversation test but also a sophisticated Turing test, must have a model
of itself.  Using that model, and observing its own behaviour (including
"internal" behaviour!), it will be led to conclude not only that it has an
appetite, but also volition and awareness, and it will probably attribute
some of its darker sides (about which it comes to conclude that it feels
guilt, from which it deduces that it has a conscience) to lack of affection
in childhood or "wrong wiring".  Is it mistaken then?  Is the machine taken
in by an illusion?

I propose to consider the question, "Can machines have illusions?"

-- 

Lambert Meertens, CWI, Amsterdam; lambert@mcvax.UUCP

rathmann@brahms (the late Michael Ellis) (11/26/86)

> Steve Harnad >> Keith Dancey

>>	[The turing test] should be timed as well as checked for accuracy...
>>	Turing would want a degree of humor...
>>	check for `personal values,' `compassion,'...
>>	should have a degree of dynamic problem solving...
>>	a whole body of psychometric literature which Turing did not consult.
>
>I think that these details are premature and arbitrary. We all know
>(well enough) what people can DO: They can discriminate, categorize,
>manipulate, identify and describe objects and events in the world, and
>they can respond appropriately to such descriptions. 

    Just who is being arbitrary here? Qualities like humor, compassion,
    artistic creativity and the like are precisely those which many of us
    consider to be those most characteristic of mind! As to the
    "prematurity" of all this, you seem to have suddenly and most
    conveniently forgotten that you were speaking of a "total turing
    test" -- I presume an ultimate test that would encompass all that we
    mean when we speak of something as having a "mind", a test that is
    actually a generations-long research program. 

    As to whether or not "we all know what people do", I'm sure our
    cognitive science people are just *aching* to have you come and tell
    them that us humans "discriminate, categorize, manipulate, identify, and
    describe". Just attach those pretty labels and the enormous preverbal
    substratum of our consciousness just vanishes! Right? Oh yeah, I suppose
    you provide rigorous definitions for these terms -- in your as
    yet unpublished paper...
 
>Now let's get devices to (1) do it all (formal component) and then
>let's see whether (2) there's anything that we can detect informally
>that distinguishes these devices from other people we judge to have
>minds BY EXACTLY THE SAME CRITERIA (namely, total performance
>capacity). If not, they are turing-indistinguishable and we have no
>non-arbitrary basis for singling them out as not having minds.

    You have an awfully peculiar notion of what "total" and "arbitrary"
    mean, Steve: it's not "arbitrary" to exclude those traits that most
    of us regard highly in other beings whom we presume to have minds.
    Nor is it "arbitrary" to exclude the future findings of brain
    research concerning the nature of our so-called "minds". Yet you
    presume to be describing a "total turing test". 

    May I suggest that what you are describing is not a "test for mind", but
    rather a "test for simulated intelligence", and the reason you will
    not or cannot distinguish between the two is that you would elevate
    today's primitive state of technology to a fixed methodological
    standard for future generations. If we cannot cope with the problem,
    why, we'll just define it away! Right? Is this not, to paraphrase
    Paul Feyerabend, incompetence upheld as a standard of excellence?  

-michael

    Blessed be you, mighty matter, irresistible march of evolution,
    reality ever new born; you who by constantly shattering our mental
    categories force us to go further and further in our pursuit of the
    truth.

-Pierre Teilhard de Chardin "Hymn of the Universe"

harnad@mind.UUCP (Stevan Harnad) (11/28/86)

Lambert Meertens (lambert@boring.uucp) of CWI, Amsterdam, writes:

>	for me it is not the case that I perceive/experience/
>	am-directly-aware-of my performance being caused by anything.
>	It just happens.

Phenomenology is of course not something it's easy to settle
disagreements about, but I think I can say with some confidence that
most people experience their (voluntary) behavior as caused by THEM.
My point about free will's being an illusion is a subtler one. I am
not doubting that we all experience our voluntary actions as freely
willed by ourselves. That EXPERIENCE is certainly real, and no
illusion. What I am doubting is that our will is actually the cause of our
actions, as it seems to be. I think our actions are caused by our
brain activity (and its causes) BEFORE we are aware of having willed
them, and that our experience of willing and causing them involves a
temporal illusion (see S. Harnad [1982] "Consciousness: An afterthought,"
Cognition and Brain Theory 5: 29 - 47, and B. Libet [1986]
"Unconscious cerebral initiative and the role of conscious will in
voluntary action," Behavioral and Brain Sciences 8: 529 - 566.)

Of course, my task of supporting this position would be much easier if
the phenomenology you describe were more prevalent...

>	How do I know I have a mind?... The problem is that if you
>	look up "mind" in an English-Dutch dictionary, some eight
>	translations are suggested.

The mind/body problem is not just a lexical one; nor can it be settled by
definitions. The question "How do I know I have a mind?" is synonymous
with the question "How do I know I am experiencing anything at all
[now, rather than just going through the motions AS IF I were having
experience, but in fact being only an insentient automaton]?"
And the answer is: By direct, first-hand experience.

>	"Consciousness" is more like "appetite"...  How can we know for
>	sure that other people have appetites as well?... "Can machines
>	have an appetite?"

I quite agree that consciousness is like appetite. Or, to put it more
specifically: If consciousness is the ability to have (or the actual
having of) experience in general, appetite is a particular experience
most conscious subjects have. And, yes, the same questions that apply to
consciousness in general apply to appetite in particular. But I'm
afraid that this conclusion was not your objective here...

>	Now why is consciousness "real", if free will is an illusion?
>	Or, rather, why should the thesis that consciousness is "real"
>	be more compelling than the analogous thesis for free will?
>	In either case, the essential argument is: "Because I [the
>	proponent of that thesis] have direct, immediate, evidence of it."

The difference is that in the case of the (Cartesian) thesis of the
reality of consciousness (or mind) the question is whether there is
any qualitative, subjective experience going on AT ALL, whereas in the
case of the thesis of the reality of free will the question is whether
the dictates of a particular CONTENT of experience (namely, the causal
impression it gives us) are true of the world. The latter, like the
existence of the outside world itself, is amenable to doubt. But the former,
namely, THAT we are experiencing anything at all, is not open to doubt,
and is settled by the very act of experiencing something. That is the
celebrated Cartesian Cogito.

>	Sometimes we are conscious of certain sensations. Do these
>	sensations disappear if we are not conscious of them?  Or do they go
>	on on a subconscious level?  That is like the question "If a falling
>	tree..."

The following point is crucial to a coherent discussion of the
mind/body problem: The notion of an unconscious sensation (or, more
generally, an unconscious experience) is a contradiction in terms!

[Test it in the form: "unexperienced experience." Whatever might that
mean? Don't answer. The Viennese delegation (as Nabokov used to call
it) has already made almost a century's worth of hermeneutic hay with the
myth of the "subconscious" -- a manifest nonsolution to the mind/body
problem that simply consisted of multiplying the mystery by two. The problem
isn't the unconscious causation of behavior: If we were all
unconscious automata there would be no mind/body problem. The problem
is conscious experience. And anthropomorphizing the sizeable portion
of our behavior that we DON'T have the illusion of being the cause of
is not only no solution to the mind/body problem but not even a
contribution to the problem of finding the unconscious causes of
behavior -- which calls for cognitive theory, not hermeneutics.]

It would be best to stay away from the usually misunderstood and
misused problem of the "unheard sound of the falling tree." The example is
typically used to deride philosophers, but the unheard last laugh is usually
on the derider.

>	Let us agree that the sensations continue at least if it can be
>	shown that the person involved keeps behaving as if the concomitant
>	sensations continued, even though professing in retrospection not
>	to have been aware of them.  So people can be afraid without
>	realizing it, say, or drive a car without being conscious of the
>	traffic lights (and still halt for a red light).

I'm afraid I can't agree with any of this. A sensation may be experienced and
then forgotten, and then perhaps again remembered. That's unproblematic,
but that's not the issue here, is it? The issue is either (1)
unexperienced sensations (which I suggest is a completely incoherent
notion) or (2) unconsciously caused or guided behavior. The latter is
of course the category most behavior falls into. So unconscious
stopping for a red light is okay; so is unconscious avoidance or even
unconscious escape. But unconscious fear is another matter, because
fear is an experience, not a behavior (and, as I've argued, the
concept of an unconscious experience is self-contradictory).

If I may anticipate what I will be saying below: You seem to have
altogether too much intuitive confidence in the explanatory
power of the concept and phenomenology of memory in your views on the
mind/body problem. But the problem is that of immediate, ongoing
qualitative experience. Anything else -- including the specifics of the
immediate content of the experience (apart from the fact THAT it is an
experience) and its relation to the future, the past or the outside
world --  is open to doubt and is merely a matter of inference, rather
than one of direct, immediate certainty in the way experiential matters
are. Hence whereas veridical memories and continuities may indeed happen 
to be present in our immediate experiences, there is no direct way that
we can know that they are in fact veridical. Directly, we know only
that they APPEAR to be veridical. But that's how all phenomenological
experience is: An experience of how things appear. Sorting out what's
what is an indirect, inferential matter, and that includes sorting out
the experiences that I experience correctly as remembered from those
that are really only "deja vu." (This is what much of the writing on
the problem of the continuity of personal identity is concerned with.)

>	Maybe everything is conscious.  Maybe stones are conscious...
>	Their problem is, they can hardly tell us.  The other problem is,
>	they have no memory...  They are like us with that traffic light...
>	Even if we experience something consciously, if we lose all
>	remembrance of it, there is no way in which we can tell for sure
>	that there was a conscious experience.  Maybe we can infer
>	consciousness by an indirect argument, but that doesn't count.
>	Indirect evidence can be pretty strong, but it can never give
>	certainty.  Barring false memories, we can only be sure if we
>	remember the experience itself.

Stones have worse problems than not being able to tell us they're
conscious and not being able to remember. And the mind/body problem is not
solved by animism (attributing conscious experience to everything); it
is merely compounded by it. The question is: Do stones have
experiences? I rather doubt it, and feel that a good part of the M/B
problem is sorting out the kinds of things that do have experiences from
the kinds of things, like stones, that do not (and how, and why,
functionally speaking).

If we experience something, we experience it consciously. That's what
"experience" means. Otherwise it just "happens" to us (e.g., when we're
distracted, asleep, comatose or dead), and then we may indeed be like the
stone (rather than vice versa). And if we forget an experience, we
forget it. So what? Being conscious of it does not consist in or
depend on remembering it, but on actually experiencing it at the time.
The same is true of remembering a previously forgotten experience:
Maybe it was so, maybe it wasn't. The only thing we are directly
conscious of is that we experience it AS something remembered.

Inference may be involved in trying to determine whether or not a
memory is veridical, but it is certainly not involved in determining
THAT I am having any particular conscious experience. That fact is
ascertained directly. Indeed it is the ONLY fact of consciousness, and
it is immediate and incorrigible. The particulars of its content, on
the other hand -- what an experience indicates about the outside world, the
past, the future, etc. -- are indirect, inferential matters. (To put
it another way, there is no way to "bar false memories." Experiences
wear their experientiality on their sleeves, so to speak, but all of the
rest of their apparel could be false, and requires inference for
indirect confirmation.)

>	If some things we experience do not leave a recallable trace, then
>	why should we say that they were experienced consciously?  Or, why
>	shouldn't we maintain the position that stones are conscious
>	as well?...  More useful, then, to use "consciousness" only for
>	experiences that are, somehow, recallable.

These stipulations would be arbitrary (and probably false). Moreover,
they would simply fail to be faithful to our direct experience -- to
"what it's like" to have an experience. The "recallability" criterion
is a (weak) external one we apply to others, and to ourselves when
we're wondering whether or not something really happened. But when
we're judging whether we're consciously experiencing a tooth-ache NOW,
recallability has nothing to do with it. And if we forget the
experience (say, because of subsequent anesthesia) and never recall it
again, that would not make the original experience any less conscious.

>	the things that go on in our heads are stored away: in order to use for
>	determining patterns, for better evaluation of the expected outcome of
>	alternatives, for collecting material that is useful for the
>	construction or refinement of the model we have of the outside world,
>	and so on.

All these conjectures about the functions of memory and other
cognitive processes are fine, but they do not provide (nor can they
provide) the slightest hint as to why all these functional and
behavioral objectives are not simply accomplished UNconsciously. This
shows as graphically as anything how the mind/body problem is
completely bypassed by such functional considerations. (This is also
why I have been repeatedly recommending "methodological
epiphenomenalism" as a research strategy in cognitive modeling.)

>	Imagine now a machine programmed to "eat" and also to keep up
>	some dinner conversation... IF hunger THEN eat... equipped with
>	a conflict-resolution module... dinner-conversation module...
>	Speaking anthropomorphically, we would say that the machine is
>	feeling uneasy... apology submodule... PROBABLE CAUSE OF eat
>	IS appetite...  "<<SELF, having, appetite>... <goodness, 0.6785>>"
>	How different are we from that machine? 

On the information you give here, the difference is likely to be like
night and day. What you have described is a standard anthropomorphic
interpretation of simple symbol-manipulations. Overzealous AI workers
do it all the time. What I believe is needed is not more
over-interpretation of the pathetically simple toy tricks that current
programs can perform, but an effort to model life-size performance
capacity: The Total Turing Test. That will diminish the degrees of
freedom of the model to the size of the normal underdetermination of
scientific theories by their data, and it will augment the problem of
machine minds to the size of the other-minds problem, with which we
are already dealing daily by means of the TTT.

In the process of pursuing that distant scientific goal, we may come to
know certain constraints on the enterprise, such as: (1) Symbol-manipulation
alone is not sufficient to pass the TTT. (2) The capacity to pass the TTT
does not arise from a mere accretion of toy modules. (3) There is no autonomous
symbolic macromodule or level: Symbolic representations must be grounded in
nonsymbolic processes. And if methodological epiphenomenalism is
faithfully adhered to, the only interpretative question we will ever need
to ask about the mind of the candidate system will be precisely the
same one we ask about one another's minds; and it will be answered on
precisely the same basis as the one we use daily in dealing with the
other-minds problem: the TTT.

>	if we ponder a question consciously... I think the outcome is not
>	the result of the conscious process, but, rather, that the
>	consciousness is a side-effect of the conflict-resolution
>	process going on. I think the same can be said about all "conscious"
>	processes. The process is there, anyway; it could (in principle) take
>	place without leaving a trace in memory, but for functional reasons
>	it does leave such a trace. And the word we use for these cognitive
>	processes that we can recall as having taken place is "conscious".

Again, your account seems to be influenced by certain notions, such as
memory and "conflict-resolution," that appear to be carrying more intuitive
weight than they can bear. Not only is the issue not that of "leaving
a trace" (as mentioned earlier), but there is no real functional
argument here for why all this shouldn't or couldn't be accomplished
unconsciously. [However, if you substitute for "side-effect" the word
"epiphenomenon," you may be calling things by their proper name, and
providing (inadvertently) a perfectly good rationale for ignoring them
in trying to devise a model to pass the TTT.]

>	it is functional that I can raise my arm by "willing" it to rise,
>	although I can use that ability to raise it gratuitously. If the
>	free will here is an illusion (which I think is primarily a matter
>	of how you choose to define something as elusive as "free will"),
>	then so is the free will to direct your attention now to this,
>	then to that.  Rather than to say that free will is an "illusion",
>	we might say that it is something that features in the model
>	people have about "themselves".  Similarly, I think it is better to say
>	that consciousness is not so much an illusion, but rather something to
>	be found in that model. A relatively recent acquisition of that model is
>	known as the "subconscious".  Quite recent additions are "programs",
>	"sub-programs", "wrong wiring", etc.

My arm seems able to rise in two important ways: voluntarily and
involuntarily (I don't know what "gratuitously" means). It is not a
matter of definition that we feel as if we are causing the motion in
the voluntary case; it is a matter of immediate experience. Whether
or not that experience is veridical depends on various other factors,
such as the true order of the events in question (brain activity,
conscious experience, movement) in real time, and the relation of the
experiential to the physical (i.e., whether or not it can be causal). The
same question does indeed apply to willed changes in the focus of
attention. If free will "is something that features in the model
people have of 'themselves'," then the question to ask is whether that
model is illusory. Consciousness itself cannot be something found in
a model (although the concept of consciousness might be) because
consciousness is simply the capacity to have (or the having of)
experience. (My responses to the concept of the "subconscious" and the
over-interpretation of programs and symbols are described earlier in
this module.)

>	A sufficiently "intelligent" machine, able to pass not only the
>	dinner-conversation test but also a sophisticated Turing test,
>	must have a model of itself. Using that model, and observing its
>	own behaviour (including "internal" behaviour!), it will be led to
>	conclude not only that it has an appetite, but also volition and
>	awareness...Is it mistaken then? Is the machine taken in by an illusion?
>	"Can machines have illusions?"

What a successful candidate for the TTT will have to have is not
something we can decide by introspection. Doing hermeneutics on its
putative inner life before we build it would seem to be putting the
cart before the horse. The question whether machines can have
illusions (or appetites, or fears, etc.) is simply a variant on the
basic question of whether any organism or device other than oneself
can have experiences.
-- 

Stevan Harnad                                  (609) - 921 7771
{allegra, bellcore, seismo, rutgers, packard}  !princeton!mind!harnad
harnad%mind@princeton.csnet           

harnad@mind.UUCP (Stevan Harnad) (11/29/86)

Peter O. Mikes <mordor!pom> at S-1 Project, LLNL wrote:

>	An example of ["unexperienced experience"] is subliminal perception. 
>	Similar case is perception of outside world during
>	dream, which can be recalled under hypnosis. Perception
>	is not same as experience, and sensation is an ambiguous word.

Subliminal perception can hardly serve as a clarifying example since
its own existence and nature is anything but clearly established.
(See D. Holender (1986) "Semantic activation without conscious
identification," Behavioral and Brain Sciences 9: 1 - 66.) If subliminal
perception exists, the question is whether it is just a case of dim or
weak awareness, quickly forgotten, or the unconscious registration of
information. If it is the former, then it is merely a case of a weak
and subsequently forgotten conscious experience. If it is the latter,
then it is a case of unconscious processing -- one of many, for most
processes are unconscious (and studying them is the theoretical burden of
cognitive science).

Dreaming is a similar case. It is generally agreed (from studies in
which subjects are awakened during dreams) that subjects are conscious
during their dreams, although they remain asleep. This state is called
"paradoxical sleep," because the EEG shows signs of active, waking
activity even though the subject's eyes are closed and he continues to
sleep. Easily awakened in that stage of sleep, the subject can report
the contents of his dream, and indicates that he has been consciously
undergoing the experience, like a vivid day-dream or a hallucination.
If the subject is not awakened, however, the dream is usually
forgotten, and difficult if not impossible to recall. (As usual,
recognition memory is stronger than recall, so sometimes cues will be
recognized as having occurred in a forgotten dream.) None of this
bears on the issue of consciousness, since the consciousness during
dreams is relatively unproblematic, and the only other phenomenon
involved is simply the forgetting of an experience.

A third hypothetical possibility is slightly more interesting, but,
unfortunately, virtually untestable: Can there be unconscious
registration of information at time T, and then, at a later time, T1,
conscious recall of that information AS IF it had been experienced
consciously at T? This is a theoretical possibility. It would still
not make the event at T a conscious experience, but it would mean that
input information can be put on "hold" in such a way as to be
retrospectively experienced at a later time. The later experience
would still be a kind of illusion, in that the original event was NOT
actually experienced at T, as it appears to have been upon
reflection. The nervous system is probably playing many temporal (and
causal) tricks like that within very short time intervals; the question
only becomes dramatic when longer intervals (minutes, hours, days) are
interposed between T and T1.

None of these issues are merely definitional ones. It is true that
"perception" and "sensation" are ambiguous, but, fortunately,
"experience" seems to be less so. So one may want to separate
sensations and perceptions into the conscious and unconscious ones.
The conscious ones are the ones that we were consciously aware of
-- i.e., that we experienced -- when they occurred in real time. The
unconscious ones simply registered information in our brains at their
moment of real-time occurrence (without being experienced), and
the awareness, if any, came only later.

>	suggest that we follow the example of acoustics, which solved the
>	'riddle' of falling tree by defining 'sound' as physical effect 
>	(density wave) and noise as 'unwanted sound' - so that the tree
>	which falls in a deserted place makes a sound but does not make noise.
>	Accordingly, perception can be unconscious but experience can't.

Based on the account you give, acoustics solved no problem. It merely
missed the point.

Again, the issue is not a definitional one. When a tree falls, all you
have is acoustic events. If an organism is nearby, you have acoustic
events and auditory events (i.e., physiological events in its nervous
system). If the organism is conscious, it hears a sound. But, unless
you are that organism, you can't know for sure about that. This is
called the mind/body problem. "Noise" and "unwanted sound" have
absolutely nothing to do with it.
 
>	mind and consciousness (or something like that) should be a universal
>	quantity, which could be applied to machine, computers... 
>	Since we know that there is no sharp division between living and
>	nonliving, we should be able to apply the measure to everything 

We should indeed be able to apply the concept conscious/nonconscious
to everything, just as we can apply the concept living/nonliving. The
question, however, remains: What is and what isn't conscious? And how are
we to know it?  Here are some commonsense things to keep in mind. I
know of only one case of a conscious entity directly and with
certainty: My own. I infer that other organisms that behave more or
less the way I would are also conscious, although of course I can't be
sure. I also infer that a stone is not conscious, although of course I
can't be sure about that either. The problem is finding a basis for
making the inference in intermediate cases. Certainty will not be
possible in any case but my own. I have argued that the Total Turing
Test is a reasonable empirical criterion for cognitive science and a
reasonable intuitive criterion for the rest of us. Moreover, it has
the virtue of corresponding to the subjectively compelling criterion
we're already using daily in the case of all other minds but our own.
-- 

Stevan Harnad                                  (609) - 921 7771
{allegra, bellcore, seismo, rutgers, packard}  !princeton!mind!harnad
harnad%mind@princeton.csnet