[comp.ai] More on the functional irrelevance of the brain to mind-modeling

harnad@mind.UUCP (02/19/87)

"CUGINI, JOHN" <cugini@icst-ecf> wrote on mod.ai:

>	The Big Question: Is your brain more similar to mine than either
>	is to any plausible silicon-based device?

That's not the big question, at least not mine. Mine is "How does the
mind work?" To answer that, you need a functional theory of how the
mind works, a way of testing whether the theory works, and a way of
deciding whether a device implemented according to the theory has a
mind. That's what I proposed the formal and informal TTT for: testing
and implementing a functional theory of mind.

Cugini keeps focusing on the usefulness of "presence of `brain'"
as evidence for the possession of a mind. But in the absence of a
functional theory of the brain, its superficial appearance hardly
helps in constructing and testing a functional theory of the mind.

Another way of putting it is that I'm concerned with a specific
scientific (bioengineering) problem, not an exobiological one ("Does this
alien have a mind?"), nor a sci-fi one ("Does this fictitious robot
have a mind?"), nor a clinical one ("Does this comatose patient or
anencephalic have a mind?"), nor even the informal, daily folk-psychological
one ("Does this thing I'm interacting with have a mind?"). I'm only
concerned with functional theories about how the mind works.

>	A member of an Amazon tribe could find out, truly know, that light
>	switches cause lights to come on, with a few minutes of
>	experimentation. It is no objection to his knowledge to say that he
>	has no causal theory within which to embed this knowledge, or to
>	question his knowledge of the relevance of the similarities among
>	various light switches, even if he is hard-pressed to say anything
>	beyond "they look alike."

Again, I'm not concerned with informal, practical, folk heuristics but
with functional, scientific theory.

>	Now, S. Harnad, upon your solemn oath, do you have any serious
>	practical doubt, that, in fact,
>	1. you have a brain?
>	2. that it is the primary cause of your consciousness?
>	3. that other people have brains?
>	4. that these brains are similar to your own?

My question is not a "practical" one, but a functional, scientific
one, and none of these correlations among superficial appearances help.

>	how do you know that two performances
>	by two entities in question (a human and a robot) are relevantly
>	similar?  What is it precisely about the performances you intend to
>	measure?  How do you know that these are the important aspects?
>	...as I recall, the TTT was a kind
>	of gestalt you'll-know-intelligent-behavior-when-you-see-it test.
>	How is this different from looking at two brains and saying, yeah
>	they look like the same kind of thing to me?

Making a brain look-alike is a trivial task (they do it in Hollywood
all the time). Making a (TTT-strength) behavioral look-alike is not. My
claim is that a successful construction of the latter is as close as we
can hope to get to a functional understanding of the mind.

There's no "measurement" problem. The data are in. Build a robot that
can detect, discriminate, identify, manipulate and describe objects
and events and can interact linguistically indistinguishably from the
way we do (as ultimately tested informally by laymen) and you'll have
the problem licked.

As to "relevant" similarities: Perhaps the TTT is too exacting. TOTAL
human performance capacity may be more than what's necessary to capture mind
(for example, nonhuman species and retarded humans also have minds).
Let's say it's to play it safe; to make sure we haven't left anything
relevant out; in any case, there will no doubt be many subtotal
way-stations on the long road to the asymptotic TTT.

The brain's another matter, though. Its structural appearance is
certainly not good enough to go on. And its function is an ambiguous
matter. On the one hand, its behavioral capacities are among its
functional capacities, so behavioral function is a subset of brain
function. On the other hand, over and above that, we do not know what
implementational details are relevant. The TTT could in principle be
beefed up to demand not only behavioral indistinguishability but
anatomical, physiological and pharmacological indistinguishability as
well. I'd go for the behavioral asymptote first, though, as the most
likely criterion of relevance, before adding implementational
constraints, especially because those implementational details will
play no role in our intuitive judgments about whether the device in
question has a mind like ours, any more than they do now. Nor will
they significantly increase the objective validity of the (frail) TTT
criterion itself, since brain correlates are ultimately validated
against behavioral correlates.

My own guess, though, is that our total performance capacity will be
as strong a hardware constraint as is needed to capture all the relevant
functional similarities.

>	Just a quick pout here - last December I posted a somewhat detailed
>	defense of the "brain-as-criterion" position...
>	No one has responded directly to this posting.

I didn't reply because, as I indicated above, you're not addressing the same
question I am (and because our exchanges have become somewhat repetitive).
-- 

Stevan Harnad                                  (609) - 921 7771
{allegra, bellcore, seismo, rutgers, packard}  !princeton!mind!harnad
harnad%mind@princeton.csnet