[comp.ai] Where might CR understanding come from

throopw@agarn.dg.com (Wayne A. Throop) (03/14/89)

> gilbert@cs.glasgow.ac.uk (Gilbert Cockton)
> I am a
> holist, but I don't see how an attribute of a part can be transferred
> to the whole if it doesn't exist in the part.

No problem, the systems reply doesn't claim that understanding was
"transferred" to the whole system from Searle... it claims that the
whole system understands as an emergent property, as a consequence
of the arrangement of the rules for which Searle just happens to be
an interpreter.  Understanding is a property of a process interacting
(however indirectly) with the object of understanding.

> The interesting thing
> about systems is the attributes of the whole which CANNOT be attributes
> of the parts, not true here I'm afraid.

Why isn't it true here?  I'm under the impression that it IS true here.

--
"This is just the sort of thing that people never believe."
                              --- Baron Munchausen
--
Wayne Throop      <the-known-world>!mcnc!rti!xyzzy!throopw

gilbert@cs.glasgow.ac.uk (Gilbert Cockton) (03/16/89)

In article <4079@xyzzy.UUCP> throopw@agarn.dg.com (Wayne A. Throop) writes:
>> gilbert@cs.glasgow.ac.uk (Gilbert Cockton)
>> about systems is the attributes of the whole which CANNOT be attributes
>> of the parts, not true here I'm afraid.
>
>Why isn't it true here?  I'm under the impression that it IS true here.

I mustn't be clear.
Usually, a system possesses attributes which no part *CAN* possess,
and thus does not possess.

Here, the part Searle can possess understanding.

The issue is one of attributes common to a/some part(s) and the
emergent system.  I don't know an example where a system has
attributes that a part CAN have, but does not have.

Equilibrium, for example, can never be an attribute of a part (unless
it is a system).  As Searle is not a system, this doesn't apply.
-- 
Gilbert Cockton, Department of Computing Science,  The University, Glasgow
	gilbert@uk.ac.glasgow.cs <europe>!ukc!glasgow!gilbert

geddis@polya.Stanford.EDU (Donald F. Geddis) (03/18/89)

In article <2599@crete.cs.glasgow.ac.uk> gilbert@cs.glasgow.ac.uk (Gilbert Cockton) writes:
>Usually, a system possesses attributes which no part *CAN* possess,
>and thus does not possess.
>
>Here, the part Searle can possess understanding.
>
>The issue is one of attributes common to a/some part(s) and the
>emergent system.  I don't know an example where a system has
>attributes that a part CAN have, but does not have.

Counterexample:

The program MACSYMA running on a particular computer has the attribute
CAN-SOLVE-SYMBOLIC-CALCULUS-EQUATIONS (CSSCE attribute).  Now, give me a
copy of the source code for MACSYMA and I'll hand-simulate it.

Now the system (Me + Source Code) can solve all sorts of complex equations
that I alone can't.  However, I do have the capacity to have the CSSCE
attribute, I just don't happen to have it at this moment.

So there's an example where a system has attributes that a part can have,
but does not have.

	-- Don
-- 
Geddis@Polya.Stanford.Edu
"Ticking away the moments that make up a dull day..." -- Pink Floyd

harnad@elbereth.rutgers.edu (Stevan Harnad) (03/18/89)

               SYMBOL LOGIC 101 (MAKEUP)

geddis@polya.Stanford.EDU (Donald F. Geddis) of
Organization: Stanford University

" The program MACSYMA running on a particular computer has the attribute
" CAN-SOLVE-SYMBOLIC-CALCULUS-EQUATIONS (CSSCE attribute). Now, give me a
" copy of the source code for MACSYMA and I'll hand-simulate it.
" 
" Now the system (Me + Source Code) can solve all sorts of complex equations
" that I alone can't. However, I do have the capacity to have the CSSCE
" attribute, I just don't happen to have it at this moment.
" 
" So there's an example where a system has attributes that a part can have,
" but does not have.

I'll rephrase it cryptically, since it's all been said longhand, in vain,
so many times before:

(1) It is not in dispute that systems can have attributes that their parts
do not have. What is in dispute is what systems, what parts, what attributes.

(2) It is not in dispute that Searle has the capacity to understand
Chinese. He just does not happen to understand it at the moment.

(3) There is no basis whatever (I HOPE everyone agrees) for projecting
Searle's undisputed actual capacity for understanding English now, and
potential for understanding Chinese in the future, onto anything at
all, part or whole. I hope everyone sees THAT's just double-talk...

(4) The attribute of being able to solve equations is not the same as the
attribute of understanding.

(5) Neither is the attribute of being able to manipulate Chinese
symbols (even under the counterfactual hypothesis that one can
manipulate them well enough to pass the LTT) the same as the attribute
of being able to understand Chinese symbols. Why (for those who thought
it might have been)? One reason is Searle's Chinese Room Argument.
(Here and in my papers I've given several others, including the
"symbol-grounding problem.")

(6) What is the simple conclusion of this simple argument that someone
who has understood it must draw -- unless he has a valid
counterargument (or has become unalterably soft-wired to the
simple-minded belief that thinking is just symbol crunching)? That
thinking is not just symbol crunching. Q.E.D. (R.I.P.)

Refs:   Searle, J. (1980) Minds, Brains and Programs. Behavioral and Brain 
                          Sciences 3: 417-457
        Harnad, S. (1989) Minds, Machines and Searle. Journal of Experimental
                          and Theoretical Artificial Intelligence 1: 5 - 25.
-- 
Stevan Harnad INTERNET:  harnad@confidence.princeton.edu    harnad@princeton.edu
srh@flash.bellcore.com    harnad@elbereth.rutgers.edu      harnad@princeton.uucp
BITNET:   harnad@pucc.bitnet           CSNET:  harnad%princeton.edu@relay.cs.net
(609)-921-7771

throopw@agarn.dg.com (Wayne A. Throop) (03/21/89)

> harnad@elbereth.rutgers.edu (Stevan Harnad)
> [..it is not the case that..]
> the attribute of being able to manipulate Chinese
> symbols (even under the counterfactual hypothesis that one can
> manipulate them well enough to pass the LTT) the same as the attribute
> of being able to understand Chinese symbols. Why (for those who thought
> it might have been)? One reason is Searle's Chinese Room Argument.

More like "Chinese Room Anecdote".  Given that its sole force of
argument is gained by assuming its conclusion by appeal to
anthropomorphism, there are many people who don't find it convincing.

Not that I think any near descendant of "Doctor" or other toys
should be thought of as possessing understanding.  Just that Searle
has failed to prove (or even show reason) that a much more distant
and complicated descendant should not be thought of in this way.

> (Here and in my papers I've given several others, including the
> "symbol-grounding problem.")

And since the symbols of a symbol crunching system are every bit as
well grounded as those humans use (at least potentially), this reason
is somewhat less than convincing also.

( Note: I don't contend that they are as well grounded by the criteria
  Steve Harnad would like to use.  I just think that some irrelevant
  criteria are snuck in there. )

> [...] thinking is not just symbol crunching. Q.E.D. (R.I.P.)

And claiming that these anecdotes constitute "proof" of anything
at all, especially in the course of pursuing "argument by insult
and intimidation" combined with "argument by emphatic assertion"
is (need I say it?) equally unconvincing.

--
"Who would be fighting with the weather like this?"
        "Only a lunatic."
                "So you think D'Artagnian is involved?"
                        --- Porthos, Athos, and Aramis.
--
Wayne Throop      <the-known-world>!mcnc!rti!xyzzy!throopw

throopw@agarn.dg.com (Wayne A. Throop) (03/21/89)

>,>>> gilbert@cs.glasgow.ac.uk (Gilbert Cockton)
>> throopw@dg-rtp.dg.com (Wayne Throop)
>>> about systems is the attributes of the whole which CANNOT be attributes
>>> of the parts, not true here I'm afraid.
>>Why isn't it true here?  I'm under the impression that it IS true here.
> Usually, a system possesses attributes which no part *CAN* possess,
> and thus does not possess.
> Here, the part Searle can possess understanding.

But the relevant part, the one pointed to in the attempt to show the
shortcomings of the CR, is not "Searle", but rather "Searle blindly
following rules".  "Searle blindly following rules" cannot (by
definition of what it means to blindly follow rules) understand.  Even
if Searle goes out and learns Chinese, "Searle blindly following
rules" does not, in any relevant sense, understand Chinese.  The
systems reply is that Searle-plus-the-rules *does* understand.

( Of course, Searle's reply  to the systems reply is moot, because he
  assumes the conscious mind of Searle has access to all capabilities of
  all systems operating in the physical system of Searle's brain, despite
  the copious evidence that this is not the case (eg, savant talents,
  multiple personalities). )

In fact, the whole CR argument is a simple and blatant appeal to
anthropomorphism, an anecdote with no force of formal reasoning behind
it.  It doesn't "prove" anything at all, and convinces only those
who already agree with its disguised anthropocentric premises.

--
"Who would be fighting with the weather like this?"
        "Only a lunatic."
                "So you think D'Artagnian is involved?"
                        --- Porthos, Athos, and Aramis.
--
Wayne Throop      <the-known-world>!mcnc!rti!xyzzy!throopw

ellis@unix.SRI.COM (Michael Ellis) (03/22/89)

> Wayne A. Throop

>The systems reply is that Searle-plus-the-rules *does* understand.

     That requires a leap of faith I'm willing to make for other
     humans, animals (and martians should they arrive), but not
     for an artifact whose design fails to include whatever
     relevant causes there may be to consciousness itself.

>In fact, the whole CR argument is a simple and blatant appeal to
>anthropomorphism, an anecdote with no force of formal reasoning behind
>it.  It doesn't "prove" anything at all, and convinces only those
>who already agree with its disguised anthropocentric premises.

    What "anthropocentric" premises might those be? That subjective
    experience and intentional states with semantic content are real?
    That the subject under study is the human mind? 

-michael

fransvo@htsa.uucp (Frans van Otten) (03/23/89)

Michael Ellis writes:

>Wayne A. Throop writes:
>
>>The systems reply is that Searle-plus-the-rules *does* understand.
>
>That requires a leap of faith I'm willing to make for other
>humans, animals (and martians should they arrive), but not
>for an artifact whose design fails to include whatever
>relevant causes there may be to consciousness itself.

The word "understanding" has three very different meanings.  I think
the main problem in this entire discussion is that most people keep
mixing those meanings.

***   First interpretation of "understanding"

I can say "I understand".  This statement is based on some feeling;
it is subjective.  This might be represented/implemented by a flag
which is set to "true" or "false", or by a variable which contains
a value within a (continuous) range, where there is some critical
value when I get the feeling that I do understand.
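
To make this concrete, here is a minimal sketch of both representations
(the names and the threshold are invented): a bare flag, and a graded
value that flips the report "I understand" once it crosses some
critical level.

    # Sketch only: an invented threshold and invented names.
    UNDERSTANDING_THRESHOLD = 0.8      # the "critical value"

    class Subject:
        def __init__(self):
            self.understands = False   # representation 1: a flag
            self.confidence = 0.0      # representation 2: a continuous value

        def absorb_evidence(self, amount):
            self.confidence = min(1.0, self.confidence + amount)
            self.understands = self.confidence >= UNDERSTANDING_THRESHOLD

        def report(self):
            if self.understands:
                return "I understand"
            return "I don't understand yet"

    s = Subject()
    s.absorb_evidence(0.5); print(s.report())   # below the critical value
    s.absorb_evidence(0.4); print(s.report())   # crossed it: the feeling appears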

***   Second interpretation of "understanding"

I can also say "You understand".  This refers to some other entity
than myself.  This is the "understanding" which is "measured" by
the Turing Test (whatever variant you are using).  Then first of
all, there is the "other minds problem".  Does the entity I am
referring to have a mind ?  I don't know.  But I do say that it
understands.  So I might conclude: External behaviour resulting in
me stating "you understand" does NOT require that the entity has a
mind.

And there was the argument "when you know the algorithm used, you
usually don't say that it understands".  I disagree on this one.
Let's take a calculator.  I can say "it understands how to add two
numbers".  Let's take an email-program.  When I mail to someone on
my local computer, the program "understands" not to call some other
system for that message.  When I mail to some unknown system, it
does understand that the message must be sent to some backbone.
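
Roughly, the mailer's whole "understanding" is a routing rule like the
sketch below (the host names and the table are invented for the
example):

    # Sketch only: a toy routing rule with invented hosts.
    LOCAL_HOST = "htsa"
    KNOWN_HOSTS = {"htsa", "mcvax"}
    BACKBONE = "mcvax"

    def route(address):
        user, _, host = address.partition("@")
        if host in ("", LOCAL_HOST):
            return "deliver locally; call no other system"
        if host in KNOWN_HOSTS:
            return "call " + host + " directly"
        return "forward to backbone " + BACKBONE

    print(route("frans"))            # local mail: no outside call
    print(route("someone@unknown"))  # unknown system: off to the backbone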

My point is that in the process of thinking about things like
"understanding", "consciousness", "intelligence" etc. many people
lose their normal interpretations of those words and start to
require undefined things, just because "human understanding is so
very special, how could some simple device like a computer ever
be able to have it?".

***   Third interpretation of "understanding"

Neither of the above-described interpretations of "understanding"
mentions the actual process involved in understanding.  There is
no hint as to how understanding might work, what it might be.  The third
interpretation of "understanding" concerns this process or state,
the actual implementation of understanding a concept.

The Chinese Room argument mixes the three described meanings of
"understanding" in such a way that nobody knows what is true and
what is false:

  1.  We have a set of rules [a computer program] defining which
      Chinese characters should be presented in response to incoming
      Chinese characters (a toy caricature of such a rule set is
      sketched just after this list).  It is assumed that when someone
      strictly follows those rules, the other person(s) involved in
      this conversation could be fooled into believing they actually
      are having a correspondence with a human being (or some other
      entity) understanding what is written.  This concerns the
      second interpretation of understanding I described: "you
      understand":  the Turing Test is passed.

  2.  Then the argument continues: "the person interpreting the
      rules doesn't understand Chinese".  This concerns the first
      interpretation of understanding (the subjective one).

  3.  Then finally, the conclusion is made: "as this person, who
      is doing everything a computer would do, does not understand
      Chinese, then a computer can't understand Chinese either".
      This refers to the third interpretation of understanding, the
      objective one, the kind AI-researchers are (or should be)
      interested in.
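
Here is the toy caricature of the rule set promised in point 1.  The
transliterated symbols and the canned replies are invented for
illustration; nothing in it interprets anything, it only matches and
emits:

    # Sketch only: a caricature of the room's rule book.
    RULES = {
        "ni hao ma":          "wo hen hao, xie xie",   # "how are you" -> "fine, thanks"
        "ni hui zhongwen ma": "dang ran hui",          # "do you know Chinese" -> "of course"
    }
    DEFAULT_REPLY = "qing zai shuo yi bian"            # "please say that again"

    def room(incoming):
        # The person in the room does nothing but look the symbols up.
        return RULES.get(incoming, DEFAULT_REPLY)

    print(room("ni hao ma"))    # passes as conversation, understands nothing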

Objections against this argument have been:

  1.  The set of rules would be far too big, too complex to be
      executed by a human in a reasonable amount of time, etc.,
      so the argument relies on intuitions which are thus misguided.

      Of course, the set of rules would be highly complex.  But
      the Chinese Room argument was a thought experiment, so that
      is no problem (but it helps the confusion).  Would it have
      been possible for Einstein to travel on a light beam?

  2.  The Turing Test tests behavioural characteristics of the
      system, not on the internal (cognitive) functions.

      Very true, but that was exactly why the Turing Test was used
      in the argument.  The idea was "make it seem to understand,
      this can be measured by the Turing Test, then see inside to
      find out whether or not it does understand".

  3.  The person executing the rules is not able to feel the
      understanding of the system.

      That is true, too.  This is one of the flaws of the argument.
      But it doesn't touch the heart of it.  We are not looking for
      a system that has a feeling that it does understand.  We are
      trying to find a system that (objectively) understands.

My objection against the Chinese Room argument is that the different
interpretations of the word "understanding" are mixed in a way that
is not tolerable.  First of all, tests of behavioural characteristics,
such as the Turing Test, cannot produce any evidence.  The best they can do
is "it seems to understand" or "it doesn't seem to understand".  Drew
McDermott was very clear on this in the paper he posted on the net.

Secondly, as many posters wrote, you can't base a conclusion about an
entire system on observations (let alone feelings) of a single part
of it.  This has been shown in many ways.  The best way was probably
"let someone execute the rules of physics of his own body; although
the person understands things, he wouldn't understand those things
from calculating his own physics".

Note: The question has been raised, if the person would understand
the rules, would he then understand Chinese?  I think Stevan Harnad
is right in saying "no"; the symbol grounding problem does exist!
When you know that you should respond with a certain Chinese character
when you see another certain one, this doesn't make you understand
Chinese!

My theory on this is that understanding requires representation of
the external symbols (be it Chinese characters, vocal representations
of words, red traffic lights, or whatever) in internal symbols.  You
can base actions on your internal symbols, but not on external symbols.
Then the "denotation/connotation" argument of Roel Wieringa shows up.
How do I translate external symbols to internal symbols?  The process
of reaching a conclusion based on an internal state (I call this
process "intelligence") is independent of the external symbols and
their meanings.  But the rules that are used to reach a conclusion are
based on the socially accepted meanings of the external symbols.  The
same holds for the translation from external to internal symbols.  This
is what Karl Kluge meant when he wrote about communicating his desire
for a bowl of icecream to his apartment-mate.
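
In code, my theory amounts to something like the following sketch (the
table and the symbols are of course made up; the point is only the
separation of the two steps):

    # Sketch only: external symbols are translated to internal symbols,
    # and the decision rule ("intelligence") sees the internal ones only.
    EXTERNAL_TO_INTERNAL = {
        "red traffic light":   "must-stop",       # socially agreed meanings
        "green traffic light": "may-go",
        "the word 'icecream'": "desirable-food",
    }

    def perceive(external_symbol, internal_state):
        meaning = EXTERNAL_TO_INTERNAL.get(external_symbol, "unknown")
        internal_state.add(meaning)

    def decide(internal_state):
        if "must-stop" in internal_state:
            return "stop"
        if "desirable-food" in internal_state:
            return "ask my apartment-mate for a bowl of icecream"
        return "carry on"

    state = set()
    perceive("red traffic light", state)
    print(decide(state))   # "stop" -- the rule never touched the external words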

Final note:  If you ever manage to make a system that does understand,
pack it in a green humanoid container, put this in a rocket, write in
big letters "From Mars" on it, and launch this rocket to Michael Ellis.

-- 
	Frans van Otten
	Algemene Hogeschool Amsterdam
	Technische en Maritieme Faculteit
	fransvo@htsa.uucp

throopw@agarn.dg.com (Wayne A. Throop) (03/25/89)

> ellis@unix.SRI.COM (Michael Ellis)
> [..the "systems reply" claim that the system posesses the understanding..]
> requires a leap of faith I'm willing to make for other
> humans, animals (and martians should they arrive), but not
> for an artifact whose design fails to include whatever
> relevant causes there may be to consciousness itself.

I'm not willing to make it easy either.  The commotion is over just
what constitutes "relevant causes".  I simply do not think that the
human brain has any mysterious "causal powers" that a computer
executing a suitable program does not.

>> [..the CR argument..]
>> doesn't "prove" anything at all, and convinces only those
>> who already agree with its disguised anthropocentric premises.
> What "anthropocentric" premises might those be? That subjective
> experience and intentional states with semantic content are real?
> That the subject under study is the human mind?

No.  The appeal is made through the argument that since Searle doesn't
understand, and since Searle (the human component) is where we would
normally look for understanding in such a system, the system must not
understand.  The conclusion simply doesn't follow.  It's rather like
finding a corpse stabbed to death, showing that the butler didn't do
it, and concluding that there was no homicide.

--
If someone tells me I'm not really conscious, I don't marvel about
how clever he is to have figured that out... I say he's crazy.
          --- Searle (paraphrased) from an episode of PBS's "The Mind"
--
Wayne Throop      <the-known-world>!mcnc!rti!xyzzy!throopw

gilbert@cs.glasgow.ac.uk (Gilbert Cockton) (03/28/89)

In article <4506@xyzzy.UUCP> throopw@agarn.dg.com (Wayne A. Throop) writes:
>I'm not willing to make it easy either.  The commotion is over just
>what constitutes "relevant causes".  I simply do not think that the
>human brain has any mysterious "causal powers" that a computer
>executing a suitable program does not.

OK then, let's hear what a "suitable" program would be.  I contend that
AI research doesn't have a grasp of what "suitable" means at all.

For one, human minds are not artefacts, whereas computer programs
always will be.  This alone will ALWAYS result in performance
differences.  Given a well-understood task, computer programs will
out-perform humans.  Given a poorly understood task, they will look
almost as silly as the author of the abortive program.

The issue as ever is what we do and do not understand about the human
mind, the epistemological constraints on this knowledge, and the
ability of AI research as it is practised to add anything at all to
this knowledge.

Come on then, boys and girls in AI, let's hear it on "suitable" :-)

Cyberpunk fragments are acceptable, and indeed indistinguishable in a
LTT from some AI research :-]
-- 
Gilbert Cockton, Department of Computing Science,  The University, Glasgow
	gilbert@uk.ac.glasgow.cs <europe>!ukc!glasgow!gilbert

maddoxt@novavax.UUCP (Thomas Maddox) (03/29/89)

In article <2599@crete.cs.glasgow.ac.uk> gilbert@cs.glasgow.ac.uk (Gilbert Cockton) writes:

>I mustn't be clear.

	His motto.


		       Tom Maddox 
	 UUCP: ...{ucf-cs|gatech!uflorida}!novavax!maddoxt

fransvo@htsa.uucp (Frans van Otten) (03/30/89)

Gilbert Cockton writes:

>In article <4506@xyzzy.UUCP> throopw@agarn.dg.com (Wayne A. Throop) writes:
>>I'm not willing to make it easy either.  The commotion is over just
>>what constitutes "relevant causes".  I simply do not think that the
>>human brain has any mysterious "causal powers" that a computer
>>executing a suitable program does not.

[...]

>For one, human minds are not artefacts, whereas computer programs
>always will be.  This alone will ALWAYS result in performance
>differences.  Given a well-understood task, computer programs will
>out-perform humans.  Given a poorly understood task, they will look
>almost as silly as the author of the abortive program.

This discussion has degraded into a fight between two groups with different
viewpoints:

  1. Humans have some mysterious powers that are responsible for their
     having a mind.  Animals might also have these powers, maybe even
     martians.  This property might be inherent to the building material;
     carbon-hydrogen has it, Si doesn't.

  2. Understanding etc. are properties which arise from a certain way to
     process information.  The information theory is what matters, not
     the way it is implemented.  If we humans can do it using our hardware
     (neurons etc), then computers are able to do this using theirs.

I believe that those who support 1. are in an ideological grip.  This
is an unsupported way of looking at things.  If these people think
they could find support in religions, I have to disappoint them.  In no
religion known to me is it stated that the mind/spirit/... (the non-physical
thing) is dependent (in its being) on its body (the physical thing).  This
includes religions ranging from Christianity to Buddhism, Zen and Sufism.

Nor is there any support for this viewpoint from the technological
world.  There is no apparent chemical reason why carbon-hydrogen
molecular groups can, and silicon molecules cannot, give rise to
something as high-level as understanding and consciousness.

So I think these people are stuck somewhere between a "rational" and a
"not-rational" (emotional/...) viewpoint, but are too lazy to really think
about the issue.  When they join a discussion, it becomes a mess.

My personal opinion on this is as follows.  In the evolutionary process,
with "survival of the fittest", you have to behave in such a way that you
will survive long enough to raise a new generation.  As the level of
complexity of the organism increases, it will have to do more "information
processing": to find food, to protect against enemies, etc.  My point:
intelligence etc. developed out of a need to determine how to behave in
order to survive.  So the behaviourist approach is justified: "when the
system seems to act intelligently, it *is* intelligent".

Then we invented the computer.  We start wondering: can we make this
machine intelligent?  Before we can write a program for this, we must
understand the algorithm humans use.  This proves to be very difficult.
Research is hindered by people claiming that understanding requires very
mysterious causal powers which computers, due to their design, can never
have.  Gilbert Cockton even claims that because human minds are not
artifacts, while computer systems always will be, there will always be
performance differences.  Apart from the fact that this statement is
nonsense, it is not of any importance to AI-research.
-- 
	Frans van Otten
	Algemene Hogeschool Amsterdam
	Technische en Maritieme Faculteit
	fransvo@htsa.uucp

jb@aablue.UUCP (John B Scalia) (03/30/89)

In article <2691@crete.cs.glasgow.ac.uk> gilbert@cs.glasgow.ac.uk
(Gilbert Cockton) writes:
> [Some deletions here]
>For one, human minds are not artefacts, whereas computer programs
>always will be.  This alone will ALWAYS result in performance
>differences.  Given a well-understood task, computer programs will
>out-perform humans.  Given a poorly understood task, they will look
>almost as silly as the author of the abortive program.

Right, Gilbert. I could argue equally well that anything/person performing
a poorly understood task will always be outperformed by a person/thing
accomplishing a well-understood task. This has to be true even if both
"laborers" are human. Else, we'd all have children born who can already
speak all languages as well as perform integral calculus, instead of
adults who despite years of "programming" cannot read or write.

Hey, the "wiring" is in place, but it takes years of "programming" to make
it accomplish much. In this same vein, computer programs are still much
simpler, running on much simpler hardware, but they are still deeply
related and much more guaranteeable.

I believe here lies most of the AI problem: most of us have a hard time
with a computer program which produces an error, whereas a human being
erring is not only possible but often expected.

-- 
A A Blueprint Co., Inc. - Akron, Ohio +1 216 794-8803 voice
UUCP:	   {uunet!}aablue!jb	Marriage is a wonderful institution, but who
FidoNet:   1:157/697		wants to spend their life in an institution.
EchoNet:   US:OH/AKR.0

jackson@freyja.css.gov (Jerry Jackson) (03/31/89)

In article <813@htsa.uucp>, fransvo@htsa (Frans van Otten) writes:
>Gilbert Cockton writes:

>This discussion has degraded into a fight between two groups with different
>viewpoints:
>
>  1. Humans have some mysterious powers that are responsible for their
>     having a mind.  Animals might also have these powers, maybe even
>     martians.  This property might be inherent to the building material;
>     carbon-hydrogen has it, Si doesn't.

This is a seriously flawed statement of the position.  It is not that
carbon "has something" that silicon doesn't -- that would be *stupid*.  What is
claimed is that possibly it is not merely functional structure that determines
the mind.  The "silicon-based" computers we have are brain-like only in
functional organization (if that :-).  Perhaps consciousness is a *chemical*
phenomenon and not a result of a particular functional structure.  If a computer
could be built based on carbon instead of silicon, the argument would be
the same.

>
>  2. Understanding etc. are properties which arise from a certain way to
>     process information.  The information theory is what matters, not
>     the way it is implemented.  If we humans can do it using our hardware
>     (neurons etc), then computers are able to do this using theirs.
>
>I believe that those who support 1. are in an ideological grip.  This
>is an unsupported way of looking at things.  If these people think
>they could find support in religions, I have to disappoint them.  In no
>religion known to me is it stated that the mind/spirit/... (the non-physical
>thing) is dependent (in its being) on its body (the physical thing).  This
>includes religions ranging from Christianity to Buddhism, Zen and Sufism.

If anyone in this group were to appeal to religions for support they might
as well put "Don't pay attention to this article!" in the subject line.
Pointing out that no major religion supports a point of view is irrelevant.

>
>Nor is there any support for this viewpoint from the technological
>world.  There is no apparent chemical reason why carbon-hydrogen
>molecular groups can, and silicon molecules cannot, give rise to
>something as high-level as understanding and consciousness.
>

See above.

>So I think these people are stuck somewhere between a "rational" and a
>"not-rational" (emotional/...) viewpoint, but are too lazy to really think
>about the issue.  When they join a discussion, it becomes a mess.

No comment.

>
>My personal opinion on this is as follows.  In the evolutionary process,
>with "survival of the fittest", you have to behave in such a way that you
>will survive long enough to raise a new generation.  As the level of
>complexity of the organism increases, it will have to do more "information
>processing": to find food, to protect against enemies, etc.  My point:
>intelligence etc. developed out of a need to determine how to behave in
>order to survive.  So the behaviourist approach is justified: "when the
>system seems to act intelligently, it *is* intelligent".
>

I think most people involved in this argument assume that humans evolved
to their present state.  This, however, is beside the point.  Yes, if
one wishes to define intelligence "from the outside", it is perfectly
ok to do so.  Searle and others are simply arguing that something is
left out when one does so.  This may not make a practical difference
in system performance at all.  The main point of the CR thought experiment
is that there is a subjective experience that is usually labeled "understanding"
that would appear to be missing from the CR.  This doesn't mean the room
can't behave identically to a human speaker of Chinese.  In fact, the very
argument presupposes that it *can*.  Humans have a strange attribute known
as subjectivity that doesn't immediately appear to be reducible to structure
or functional organization.  It may even be totally unnecessary for intelligent
behavior.  If so, though, it is hard to imagine why such a thing would evolve.

Some people seem to misunderstand why "pain", for instance, is considered to
be problematic for machine intelligence.  A common point of view I have
seen on the net goes something like this:

	The computer has sensors that determine when it is damaged or likely
	to be damaged.  These send a signal to the central processor which
	takes appropriate action.. (like saying "Ouch!" :-).  

This hardly explains pain!  The signal in question fulfills the same functional
role as a signal in the human nervous system.. i.e. indicating a hazard to the
body.  The only thing missing is the *pain*!  To use an example I have used
before, ask yourself why you take aspirin for a headache.  I claim it is not
because you contemplate the fact that a signal is travelling through your
body and you wish it would stop.  You take the aspirin because your head
*hurts*.  The functionalist model would map a pain signal to some quantity
stored in memory somewhere... Does it really make sense to imagine:

	X := 402; -- OW! OW! 402, ohmigod!... X := 120; WHEW!.. thanks!

I can imagine a system outputting this text when the quantity X changes,
but I can't honestly imagine it actually being in pain.. Can you?
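
Spelled out, the functionalist story I am objecting to is no more than
this (the threshold and the names are mine):

    # Sketch of the view under attack: "pain" as a number in memory.
    PAIN_THRESHOLD = 300

    def central_processor(damage_signal):
        if damage_signal > PAIN_THRESHOLD:
            return "Ouch!"       # the "appropriate action"
        return "carry on"

    print(central_processor(402))   # "Ouch!" -- but where is the hurt?
    print(central_processor(120))   # "carry on" -- whew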

>Then we invented the computer.  We start wondering: can we make this
>machine intelligent?  Before we can write a program for this, we must
>understand the algorithm humans use.  This proves to be very difficult.
>Research is hindered by people claiming that understanding requires very
>mysterious causal powers which computers, due to their design, can never
>have.  Gilbert Cockton even claims that because human minds are not
>artifacts, while computer systems always will be, there will always be
>performance differences.  Apart from the fact that this statement is
>nonsense, it is not of any importance to AI-research.
>-- 
>	Frans van Otten
>	Algemene Hogeschool Amsterdam
>	Technische en Maritieme Faculteit
>	fransvo@htsa.uucp


--Jerry Jackson

pluto@beowulf.ucsd.edu (Mark E. P. Plutowski) (03/31/89)

In article <448@esosun.UUCP> jackson@freyja.css.gov (Jerry Jackson) writes:
>... What is claimed is that 
>possibly it is not merely functional structure that determines
>the mind.  The "silicon-based" computers we have are brain-like only in
>functional organization (if that :-).  Perhaps consciousness is a *chemical*
>phenomenon and not a result of a particular functional structure.  

Perhaps this is true, at some  level of analysis.  
But it is surely false, at some level of implementation.
If consciousness includes all mental calculations, then
witness the computation facilitated by topological mappings,
interconnection schemata, and by the compartmentalization of
these "chemical phenomena."




> [such a human trait] as subjectivity ... doesn't immediately appear to be reducible to structure
>or functional organization.  It may even be totally unnecessary for intelligent
>behavior.  If so, though, it is hard to imagine why such a thing would evolve.
>

Depending upon your perspective, perhaps.  
But, it should  not be difficult to imagine *how* such
a thing would evolve.  How can we not  be subjective, unless
out of body experience is a fact?  Indeed, it is more likely that
subjectivity is a result of the underlying implementation
of  intelligence, rather than a refuting piece of  evidence against 
the possibility that intelligence depends upon its underlying
structure.

If, for all practical purposes, we can divorce consciousness
from its underlying structure, then, why is one  particular
practical purpose so difficult for individuals to achieve?

Namely, objectivity.


Mark Plutowski				
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
You can skin a gift horse in the mouth, but you can't make him drink.
----------------------------------------------------------------------
Mark Plutowski				INTERNET: pluto%cs@ucsd.edu	
Department of Computer Science, C-014   	  pluto@beowulf.ucsd.edu
University of California, San Diego     BITNET:	  pluto@ucsd.bitnet
La Jolla, California 92093   		UNIX:{...}ucsd!cs.UCSD.EDU!pluto
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-

gilbert@cs.glasgow.ac.uk (Gilbert Cockton) (03/31/89)

>This discussion has degraded into a fight between two groups with different
>viewpoints:
There is considerable diversity, as well as incompatibility, in the
arguments both for and against the possibility of strong AI.

You are particularly poor in your grasp of all the anti-AI arguments.
Some are based on the impossibility of simulating the brain's hardware
on a digital computer (indeed, the impossibility of accurately and
faithfully simulating ANY part of the natural world on a computer).
Others rely on epistemic arguments.  Others rely on theories of
ideology which deny any possible objective status to such value laden
concepts as 'intelligence', which are symptomatic of a system of
social stratification peculiar to modern Europe (and taken on in an
even cruder form in the New World).  The word doesn't have a usable
meaning in any scientific context.  

The sensible approach is to identify tasks for automation, describe
them accurately and acceptably, and then proceed.  Designing systems to
possess ill-defined and hardly understood properties is an act of
intellectual dishonesty at worst, and an act of intellectual mediocrity
at best.  Robotics has the advantage of dealing with fairly well-defined
and understood tasks.  I'd be surprised if anyone in robotics really
cares if their robots are 'intelligent'.  Successful performance at a
task is what matters.  This is NOT THE SAME as intelligent behaviour,
as we can have clear conditions for success for a task, but not for
intelligent behaviour.  Without verification or falsification
criteria, the activity is just a load of mucking about - a headless
chicken paradigm of enlightenment.

>with "survival of the fittest", you have to behave in such a way that you
>will survive long enough to raise a new generation.  As the level of
>complexity of the organism increases, it will have to do more "information
>processing": to find food, to protect against enemies, etc.  My point:
>intelligence etc. developed out of a need to determine how to behave in
>order to survive.  So the behaviourist approach is justified: "when the
>system seems to act intelligently, it *is* intelligent".

You equate intelligence with a high degree of information processing
(by co-location of sentences, there is no explicit or clear argument in
this paragraph).  A cheque clearing system does a high degree of
information processing.  It must be intelligent then - and AI was
achieved 20 years ago?

You are making a historical point.  Please make it like a competent historian.
Otherwise leave evolutionary arguments alone, as you are just making things up.

>Before we can write a program for this, we must
>understand the algorithm humans use.  This proves to be very difficult.
>Research is hindered by people claiming that understanding requires very
>mysterious causal powers which computers, due to their design, can never have.

'Mysterious' is true only in the sense that we do not yet understand them.
'Eternally mysterious' would not be true.  What is true is that
causation in human/animal behaviour, and causation in physics, are very
different types of cause (explanatory dualism).   This does not hold up
research at all, it just directs research in different directions.
Logical necessity is a further type of pseudo-causation.  Its relation
to human agency is highly tenuous, and it is wrong to bet too much on
it in any research into psychology.

Computers cannot uncover mysteries.  Automation research may do, in
that the task or problem must be properly studied, and it is this
study which advances knowledge, rather than the introverted computer
simulation. Attempts at computer simulation do, however, expose gaps in
knowledge, but this does not make the mystery go away - it only deepens
it.  The problem is that, if studies are driven by the imperative to
automate, this will force the research into an epistemic and
methodological straitjacket.  This is a narrow approach to the study
of human behaviour, and is bound to produce nonsense unless it balances
itself with other work.  Hence AI texts are far less 'liberal' than
psychology ones - the latter consider opposing theories and paradigms.

>Gilbert Cockton even claims that because human minds are not
>artifacts, while computer systems always will be, there will always be
>performance differences.  Apart from the fact that this statement is
>nonsense, it is not of any importance to AI-research.

It is highly relevant.  I take it that you think it is nonsense because
I offer no support (reasonable) and you don't want to believe it (typical).

An artefact is designed for a given purpose.  As far as the purpose is
concerned, it must be fully understood.  The human 'mind' (whatever
that is - brain? consciousness?  culture? civilisation? knowledge?) was
not 'designed' for a given purpose as far as I can see (i.e. I am not a
convinced creationist, although nor have I enough evidence to doubt
some form of creation).  As 'mind' was not designed, and not by us more
importantly, it is not fully understood for any of its activities
('brains' are of course, e.g. sleep regulation).  Hence we cannot yet
build an equivalent artefact until we understand it.  Building in
itself does not produce understanding.  It can expose ignorance, but
this is not cured by further building, but by further study.  Strong AI
does not do this study.

My argument is initially from scepticism.  I extend the argument to all
forms of (pseudo-)intellectual activity which cannot improve our
understanding. Strong AI, as modelling without study, i.e. without
directed attempts to fill gaps in knowledge by proper, liberal, study,
is one such dead-end.  Computer modelling based on proper liberal
study is more profitable, but only as a generator of new hypotheses.
It does not establish the truth of anything.  Finally, establishing the
truth of anything concerning human agency, is far far harder than
establishing the truth about the physical world, and this is hard
enough, and getting harder since the advent of competing quantum
interpretations.

We have institutionalised research.  There are areas to be studied, and
a permanent role in our societies for people who are drawn to advancing
knowledge.  Unfortunately, too many (of the weaker?) researchers today
see any argument on methodological grounds as an attack on research, an
attack on their freedom, a threat to the advance of scientific
knowledge, a threat to their next funding.

The purpose of research is to advance knowledge.  Advancing knowledge
requires an understanding of what can, and cannot, count as knowledge.
In our bloated academia, respect for such standards is diminishing.

Research is not hindered by ideas, but by people acting on them.  If
strong AI cannot win the arguments in research politics, then tough,
well - ironic really, for without research politics, it would not have
grown as it did in the first place.  Those that live by the flam, die
by the flam.
-- 
Gilbert Cockton, Department of Computing Science,  The University, Glasgow
	gilbert@uk.ac.glasgow.cs <europe>!ukc!glasgow!gilbert

smoliar@vaxa.isi.edu (Stephen Smoliar) (03/31/89)

In article <813@htsa.uucp> fransvo@htsa.uucp (Frans van Otten) writes:
>
>My personal opinion on this is as follows.  In the evolutionary process,
>with "survival of the fittest", you have to behave in such a way that you
>will survive long enough to raise a new generation.  As the level of
>complexity of the organism increases, it will have to do more "information
>processing": to find food, to protect against enemies, etc.  My point:
>intelligence etc. developed out of a need to determine how to behave in
>order to survive.  So the behaviourist approach is justified: "when the
>system seems to act intelligently, it *is* intelligent".
>
>Then we invented the computer.  We start wondering: can we make this
>machine intelligent?  Before we can write a program for this, we must
>understand the algorithm humans use.  This proves to be very difficult.
>Research is hindered by people claiming that understanding requires very
>mysterious causal powers which computers, due to their design, can never
>have.  Gilbert Cockton even claims that because human minds are not
>artifacts, while computer systems always will be, there will always be
>performance differences.  Apart from the fact that this statement is
>nonsense, it is not of any importance to AI-research.

I find myself basically sympathetic to this approach.  However, because
recently our Public Television Network has begun a series of programs about
current problems in American education, I have been toying with a darker side
of this evolutionary model.  Let us accept Frans' premise that intelligent
behavior emerges because it is necessary for survival (i.e., if you lack
physical virtues like strength or speed, you need brains).  Then the computer
comes along, sort of like the cherry on top of this monstrous technological
sundae.  At each step in the history of technology, machines have made
intelligent behavior less and less necessary for survival.  Is there a
danger that, as machines increase their potential for "intelligent behavior,"
they will "meet" the corresponding human potential, which is in decline?
Hopefully, this will not be the case.  Hopefully, we, as humans, will have to
become MORE intelligent in order to interact with the very intelligent machines
we build.  I just wonder whether or not the technological entrepreneurs who
wish to fashion the world in their image will see it that way.

njahren@umn-d-ub.D.UMN.EDU (the hairy guy) (04/01/89)

In article <813@htsa.uucp> fransvo@htsa.uucp (Frans van Otten) writes:
>This discussion has degraded into a fight between two groups with different
>viewpoints:
>
>  1. Humans have some mysterious powers that are responsible for their
>     having a mind.  Animals might also have these powers, maybe even
>     martians.  This property might be inherent to the building material;
>     carbon-hydrogen has it, Si doesn't.
>
>  2. Understanding etc. are properties which arise from a certain way to
>     process information.  The information theory is what matters, not
>     the way it is implemented.  If we humans can do it using our hardware
>     (neurons etc), then computers are able to do this using theirs.
>
I think there is at least one alternative here.  In the characterization
of (2), I think there is a certain ambiguity in the use of the term
"information."  The stains on my coffee cup carry the information that
it contained coffee yesterday, but is this information _for_the_coffee_
_cup?_  Certainly not.  My mind carries the information that I drank
coffee yesterday, and this _is_ information _for_ me.  So there is a
fundamental difference between two things that we would call information.

Now we can ask the question: what is the minimum amount of information
that an "information processing" sequence must contain in order to
be an instance of mentality?  Now one thing that seems reasonable is that
the sequence must be able to carry the information that it is _about_
something (other than itself).  For instance, if I am thinking about
alligators, I must know that I am thinking about alligators, or else
I wouldn't be thinking about alligators (this is not a big revelation
to most people).  Now I might be mistaken, like I might be thinking
about an alligator but be attaching the name "crocodile" to it, or I
might think that alligators are small furry things that rub up against
you, while we can imagine a possible world where alligators are long
green things that would just as soon chow on you as look at you.  Also,
I might not know very much about alligators.  For instance, suppose I
am a little kid and my mother says to me: "Go tell your father that the
alligators are coming up from the swamps and we should leave some milk
and cookies for them."  Now all I would know about alligators is that
they are something my folks are talking about.  But in all these cases,
I would submit that if I think about alligators, I know that my thoughts
are directed towards alligators, it's just that the other mental images
of alligators I could appeal to are either incorrect or very sketchy.

Now, if we accept this as a pretty necessary feature of mentality, we
can ask, in the abstract, whether syntactic digital computation is a
sufficiently rich process to carry the information that it is _about_
something.  If we find reason to believe that it is not, then we would
also have reason to believe (1) human mentality is not syntactic digital
computation, and (2) syntactic digital computation cannot give rise to
a system of information processing as rich as human mentality, no matter
what medium it is implemented in.

I see the Chinese Room argument as an argument that syntactic digital
computation is in fact _not_ sufficiently rich to meet the standard for
mentality I outlined above.  I personally find it convincing, and would
be willing to discuss either this interpretation of the CR or the standards
above, but I believe I have shown that there is at least one reasonable
alternative to the two positions Mr van Otten describes above.

>order to survive.  So the behaviourist approach is justified: "when the
>system seems to act intelligently, it *is* intelligent".

And all this time I thought there was something called "consciousness."
Imagine!  But seriously, isn't the question of consciousness and 
intentionality the question that makes philosophy of mind interesting in
the first place?  And isn't your behavioristic brushing of them aside
tantamount to denying them as important aspects of mentality?  And if you
do choose to deny this, don't you come up with the problem that we
_are_ conscious and intentional, and that that's why we're doing all
this in the first place?

Neal.

530 N 24th Ave E               "I cannot disclaim, for my opinions
Duluth, MN 55812                *are* those of the Institute for
njahren@ub.d.umn.edu            Advanced Partying"

"Silence in El Salvador also.  Just how old are "the bad old
days"?  Down goes Vice President Quayle on February 3 to
urge good conduct on the Salvadorian Army.  On the eve of
Quayle's speech, says the human rights office of the
Catholic Archdiocese, five uniformed troops broke into the
homes of university students Mario Flores and Jose Gerardo
Gomez and took them off.  They are found the next day, just
about the time Quayle and the U.S. press are rubbing noses
at the embassy, dead in a ditch, both shot at close range.
Gomez's fingernails have signs of "foreign objects" being
driven under them.  Flores's facial bones and long vertebrae
are fractured, legs lacerated, penis and scrotum bruised "as
a result of severe pressure or compression."  No
investigation follows."
				    --Alexander Cockburn
			      _The_Nation_, 3 April, 1989

lee@uhccux.uhcc.hawaii.edu (Greg Lee) (04/02/89)

From article <2705@crete.cs.glasgow.ac.uk>, by gilbert@cs.glasgow.ac.uk (Gilbert Cockton):
" ...  As 'mind' was not designed, and not by us more
" importantly, it is not fully understood for any of its activities
" ('brains' are of course, e.g. sleep regulation).  Hence we cannot yet
" build an equivalent artefact until we understand it.  ...

It doesn't follow.  Think of a diamond, for instance.

		Greg, lee@uhccux.uhcc.hawaii.edu

trevor@mit-amt (Trevor Darrell) (04/02/89)

  In article <2705@crete> gilbert@cs.glasgow.ac.uk (Gilbert Cockton) writes:
  >...
  >My argument is initially from scepticism.  I extend the argument to all
  >forms of (pseudo-)intellectual activity which cannot improve our
  >understanding. 
  >...
  >The purpose of research is to advance knowledge.  Advancing knowledge
  >requires an understanding of what can, and cannot, count as knowledge.
  >In our bloated academia, respect for such standards is diminishing.

Excuse me, but exactly how does one determine when an activity can or
cannot improve understanding? And have you published your test
of what can, and cannot, count as knowledge? ``References?''

Would you have had all intellectual explorations throughout the
ages constrained by these tests? All the artistic explorations? Would you
prescribe them as an absolute guide to your child's education? 

Are you perhaps a bit lacking in the rigor of your debate? (Maybe diatribe
is a better term?...)

Trevor Darrell
MIT Media Lab, Vision Science.
trevor@whitechapel.media.mit.edu

ssingh@watdcsu.waterloo.edu ( SINGH S - INDEPENDENT STUDIES ) (04/02/89)

Regarding the division of thought: I think BOTH are right in a way, but 
both also have their flaws.

Carbon and hydrogen do have something going for them that silicon does not:
the property of being INCREDIBLY plastic. If you could freeze someone at
time t, map out the neural networks, then freeze them again at time t+5,
you would find changes in the nets. Current computer technology does
not allow circuitry to change itself. Unless there is a major revolution 
in the design of hardware, I honestly think the first truly intelligent computer
will be made with organic materials. Who knows? It may even be grown using
recombinant DNA or something like that. There is no way we can match the
plasticity of the brain with current technology.

The second idea is very pure, and very open-ended. While I admire this
purity and free-form style, I think it has become undisciplined. Such
all-encompassing models should be able to express every macroscopic
observation in terms of the theoretical models. No one seems to be able
to agree on even the most basic of definitions. Plenty of vigour, but
no rigour.

lee@uhccux.uhcc.hawaii.edu (Greg Lee) (04/02/89)

It's hard to see where CR understanding might come from (if it exists).
It's hard to see where understanding might come from (if it exists).
It's hard to see where consciousness might come from (if it exists).
It's hard to see where meaning might come from (if it exists).

I know why it's hard.  These things don't exist.  There are theories to
the contrary that are embedded in the way we ordinarily talk about
people and their behavior.  Meaning is the most obvious case.  When two
sentences are paraphrases, we say they 'mean the same thing'.  Then
there must be a thing that they both mean, right?  Thinking of that as a
theory, it might be right, or it might be wrong.  We ought to devise
alternative theories and look for evidence.  We have.  I think it's
wrong, myself, but opinions differ.  If it does turn out to be wrong,
then the effort to program meaning into a machine can never be
successful -- not because meaning is essentially human, or essentially
organic, or essentially analogue, or essentially denotational, or any of
the other straws that have been grasped at in this discussion.  But
because there's simply no such thing in the world to be found in us or
to be put into a machine.

		Greg, lee@uhccux.uhcc.hawaii.edu

murthy@tut.cis.ohio-state.edu (Murthy Gandikota) (04/04/89)

In article <5755@watdcsu.waterloo.edu> ssingh@watdcsu.waterloo.edu ( SINGH S    - INDEPENDENT STUDIES    ) writes:
>+5, you would find changes in the nets. Current computer technology does 
>not allow circuitry to change itself. Unless there is a major revolution 
>in the design of hardware, I honestly think the first truly intelligent computer
>will be made with organic materials. Who knows? It may even be grown using
>recombinant DNA or something like that. There is no way we can match the
>plasticity of the brain with current technology.

This provokes me to post a thought experiment I've made on
self-organizing neural nets.  The point is, for a neural net to be as
efficient a storage/processing device as the brain, it should be able to
change its connections towards some optimality.  Suppose there are two
independent concepts A and B represented as two neurons/nodes.  As long
as no relationship has been discovered between them, there is no connection
between them.  Say after some time a relationship is found between A
and B; then a connection can be created between them.  However, this
won't be optimal if A and B have a degree/extent relationship. In
which case, A and B have to be merged into some C, with the
degrees/extents captured in (the hidden rules of) C. A ready and
simple example I can put down is, A=bright red, B=dull red, C=shades
of red. Has anyone thought of this before?
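
Rendered literally, the merge step might look something like the sketch
below (the data structure is just one way of putting it):

    # Sketch only: concepts as nodes; a degree/extent relationship merges
    # A and B into C, with the gradation hidden inside C's rule.
    class Net:
        def __init__(self):
            self.nodes = {"bright red", "dull red"}   # A and B, unrelated
            self.edges = set()
            self.rules = {}

        def relate(self, a, b):
            self.edges.add((a, b))                    # ordinary relationship

        def merge(self, a, b, c, rule):
            self.nodes -= {a, b}                      # A and B disappear...
            self.nodes.add(c)                         # ...replaced by C
            self.rules[c] = rule                      # hidden rule of C

    net = Net()
    net.merge("bright red", "dull red", "shades of red",
              lambda x: "bright red" if x > 0.5 else "dull red")
    print(net.nodes)                          # {'shades of red'}
    print(net.rules["shades of red"](0.9))    # the degree is captured inside C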


--murthy




-- 
"What can the fiery sun do to a passing rain cloud, except to decorate
it with a silver lining?"  
Surface mail: 65 E.18th Ave # A, Columbus, OH-43201; Tel: (614)297-7951

gilbert@cs.glasgow.ac.uk (Gilbert Cockton) (04/04/89)

In article <3633@uhccux.uhcc.hawaii.edu> lee@uhccux.uhcc.hawaii.edu (Greg Lee) writes:
>From article <2705@crete.cs.glasgow.ac.uk>, by gilbert@cs.glasgow.ac.uk (Gilbert Cockton):
>" ...  As 'mind' was not designed, and not by us more
>" importantly, it is not fully understood for any of its activities
>" ('brains' are of course, e.g. sleep regulation).  Hence we cannot yet
>" build an equivalent artefact until we understand it.  ...
>
>It doesn't follow.  Think of a diamond, for instance.
>
Category mistake.  

Diamonds are
	a) concrete
	b) 'assayable' - i.e. you can test chemically that X is indeed a diamond
	c) synthesisable by following well-understood chemical theories

Minds are
	a) abstract
	b) not 'assayable' - what the word covers is vague.
	c) not provably synthesisable because of (b) no test for mindhood, and also no
	   theory of how minds get made and function

I am still thinking of a diamond however.
I cannot think of a mind.
-- 
Gilbert Cockton, Department of Computing Science,  The University, Glasgow
	gilbert@uk.ac.glasgow.cs <europe>!ukc!glasgow!gilbert

gilbert@cs.glasgow.ac.uk (Gilbert Cockton) (04/04/89)

In article <3684@mit-amt> trevor@media-lab.media.mit.edu (Trevor Darrell) writes:
>
>Excuse me, but exactly how does one determine when an activity can or
>cannot improve understanding? And have you published your test
>of what can, and cannot, count as knowledge? ``References?''
>
I'm utterly derivative :-)  Go mug up on epistemology and philosophy of science, then
characterise the activities of the various branches of AI (they are not birds of a
feather until you reach the strong AI stuff, like real-world semantics), and then
decide to what extent any of them has a coherent view of what it is to promote
convincement.

>Would you have had all intellectual explorations throughout the
>ages constrained by these tests? All the artistic explorations? Would you
>perscribe them as an absolute guide to your child's education? 

Of course not, but we aren't talking about individual exploration, or private
enlightenment.  We are talking about institutionalised creation of knowledge in the
post-war academic culture.  This eats resources and shapes people's images of the
future.  Democracy demands quality control.

I know of no incident in the history of science where continued romantic mucking
about got anywhere.  As A.N. Whitehead argued, all learning must begin with a stage of
Romance, otherwise there will be no motivation, no drive to learn, no fascination.
But it must be followed by a stage of analysis, a specialisation based on proper
discipline, for "where one destroys specialism, one destroys life."  With specialism,
the mind is "a disciplined regiment", not a "rabble".  For Whitehead, a final stage of
Generalisation, which applies specialised analysis to common-sense, real-world
contexts, is essential.

(A.N. Whitehead (as in Russell & _), The Aims of Education, 1929)

As for children's education, I designed and implemented curricula based on what
Whitehead called his 'Rhythm of Education'.

AI rarely gets beyond the first beat of the rhythm.  Sometimes it dodges it and goes
straight into logic etc, but never gets to generalisation, since it was never
grounded in anything sensible in the first place.

I note that Trevor works in AI vision, which has not got to where it is by romantic
sci-fi, but by proper analysis of psychophysical and physiological knowledge.  Once
reasoning is needed, all this good work becomes diluted with the shifting sands
of basic AI.
-- 
Gilbert Cockton, Department of Computing Science,  The University, Glasgow
	gilbert@uk.ac.glasgow.cs <europe>!ukc!glasgow!gilbert

fransvo@htsa.uucp (Frans van Otten) (04/04/89)

Jerry Jackson writes:

>This is a seriously flawed statement of the position.  It is not that
>carbon "has something" that silicon doesn't -- that would be *stupid*.
>What is claimed is that possibly it is not merely functional structure
>that determines the mind.  The "silicon-based" computers we have are
>brain-like only in functional organization (if that :-).  Perhaps
>consciousness is a *chemical* phenomenon and not a result of a particular
>functional structure.

I don't understand.  First you say that it is stupid to believe that
carbon has something that silicon doesn't (sounds sensible to me).
Then you claim that having a mind might be some chemical phenomenon
rather than an abstract information processing phenomenon.  So carbon
"has it"; "it" would be certain chemical properties.  What is the
difference ?

[...]

>I think most people involved in this argument assume that humans evolved
>to their present state.  This, however, is beside the point ...  Searle
>and others are simply arguing that something is left out when one does
>so ...  Humans have a strange attribute known as subjectivity that doesn't
>immediately appear to be reducible to structure or functional organization.
>It may even be totally unnecessary for intelligent behavior.  If so, though,
>it is hard to imagine why such a thing would evolve.

I don't think that it is beside the point.  Evolving to the present
state, including the present brain organization and mind/consciousness/
etc., is a process which is determined by external behaviour.  From the
evolutionary point of view, there is no reason why any subjective experience
would exist if it did not have a relationship with "the outside world".
But such a relationship does exist: when you experience the subjective
feeling that you understand, you behave differently than when you don't
have that feeling.  So I claim that *every* subjective experience is an
internal state which (partly) determines the behaviour.


Gilbert Cockton writes:

>>As the level of complexity of the organism increases, it will have
>>to do more "information processing" ...  My point: intelligence etc.
>>developed out of a need to determine how to behave in order to survive.
>
>You equate intelligence with a high degree of information processing.
>A cheque clearing system does a high degree of information processing.
>It must be intelligent then - and AI was achieved 20 years ago?

Please note the difference between "many simple tasks, all the same" and
"many different and difficult tasks".  But yes: AI was invented (at least)
20 years ago.  The cheque clearing system you write about does understand
how to process a cheque.

[...]

>>Gilbert Cockton even claims that because human minds are not artifacts,
>>while computer systems always will be, there will always be performance
>>differences.  Apart from the fact that this statement is nonsense, it
>>is not of any importance to AI-research.
>
>An artefact is designed for a given purpose.  As far as the purpose is
>concerned, it must be fully understood...  mind is not fully understood...
>hence we cannot yet build an equivalent artefact until we understand it.

So when we understand how the human mind works, we can build a machine
which has properties like "consciousness", "understanding" etc.  Do you
claim that this would not be an artifact (maybe because we didn't design
it ourselves, but rather copied it) ?  Or would we have built an artifact
with a mind ?  Then there would be no performance differences...  That's
all I wanted to say.

-- 
	Frans van Otten
	Algemene Hogeschool Amsterdam
	Technische en Maritieme Faculteit
	fransvo@htsa.uucp

fransvo@htsa.uucp (Frans van Otten) (04/05/89)

Greg Lee writes:

>It's hard to see where CR understanding might come from (if it exists).
>
>These things don't exist ... the effort to program meaning into a
>machine can never be successful ... because there's simply no such
>thing in the world to be found in us or to be put into a machine.

Without getting too philosophical, let me explain that this is partly
true and partly false.  Humans are conscious.  This is true simply
because we state it.  But what do we mean by "conscious" ?  It is some
subjective phenomenon.  Subjective phenomena are in my opinion nothing
more than certain internal states (a flag that is set, a variable, ...).
So when I say "I am conscious" I only say that that specific flag is
set.  So consciousness does exist (otherwise the word would never have
been invented), but it is not a phenomenon that can be observed or
detected in the functional structure or in the chemical structure or
wherever.

The same holds for "understanding".  I define "understanding" as
"represented in internal symbols".  So it is valid to say that my
calculator understands addition; it is hard-wired into it.

Understanding in itself is useless.  But it becomes necessary when
you want to do something with this understood concept (or whatever
it is).  So when I want to use my calculator, it must be able to
perform the rules for addition (which it understands).  So I have
to feed it with batteries and numbers.  At the moment that I feed
it with numbers, it understands the instantiation of addition I want
it to perform now, e.g. 5 + 3 = 8.

So "understanding" exists at many levels: I can understand a general
concept like addition, but I can also understand an instantiation of
addition: I understand that 5 and 3 are 8.
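
Put in concrete (if toy) terms -- a sketch only, with the function name
an arbitrary stand-in for the calculator's hard-wired rule:

    # The general rule the calculator "understands" (is wired to perform):
    def addition(x, y):
        return x + y

    # One instantiation of that rule: "5 and 3 are 8".
    result = addition(5, 3)
    assert result == 8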

The more general "rules" that a system "understands" determine the
general behaviour of the system.  This set of rules is more commonly
referred to as the "intelligence" of the system.  The more rules
the system understands, the more intelligent it is.  Don't immediately
say "nonsense"; think about it.  Don't you ever notice the use of this
word in general ?  In signatures, some people write:

   Dumb mailers:          {backbones}!foo!bar!etc!my_name
   Intelligent mailers:   my_name@etc.foo_domain

When we are not discussing AI, "understanding" is a normal word.
Under normal circumstances, a watch understands that it should include
February 29 every fourth year.  But when we start discussing AI (or
AI-related philosophy), words like "understanding" and "intelligence"
acquire a mysterious load.

The Chinese Room Argument is nonsense.  Let me explain it once more:
 
  1. It passes the TT.  This means: a human being can't tell the
     difference between a Chinese speaker and The Room.  The behaviour
     of the Room is such that the humans in the jury set their
     flag: "it understands".

  2. John Searle, inside the room, doesn't experience understanding
     of Chinese.  This means: the internal states of the system
     "Searle" are such that he concludes (unconsciously) not to set
     his flag "I understand".

  3. John Searle then concludes: "As I don't understand, the entire
     Room doesn't understand" (etc).  He should have concluded: "I
     don't seem to understand, but this doesn't say anything about
     the Room's understanding capability".

  4. I say: Sure, the Room does understand Chinese.  Only to the
     extent that the set of books provides for, of course.  And of
     course, the Room does not have the [human] sense of "understanding",
     as there is no flag for such an internal state.  (A sketch of
     this flag idea follows the list.)

  5. I add to that: In humans, these "flags" are probably located
     in the right hemisphere.  And the symbol grounding problem is
     probably solved in humans by connecting "understood" concepts
     to "understood" (physical) sensations.  (This latter statement
     is supported by certain psychological models.)  Disclaimer: I
     am not sure about these statements, they merely seem very
     probable to me.
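
The flag reading above can be put as a toy sketch (illustrative only:
the rule book is shrunk to a lookup table, and the class names and
flags are stand-ins invented for the sketch, not anything taken from
Searle's argument):

    RULE_BOOK = {"你好吗": "很好", "你是谁": "我是一个房间"}   # toy stand-in for the books

    class ChineseRoom:
        def __init__(self, rules):
            self.rules = rules
            self.feels_understanding = False   # no flag for the human sense (point 4)

        def answer(self, chinese_input):
            # Searle-in-the-room just follows the rules.
            return self.rules.get(chinese_input, "请再说一遍")

    class JuryMember:
        def __init__(self):
            self.flag_it_understands = False

        def judge(self, room):
            reply = room.answer("你好吗")
            # Behaviour alone is what sets the jury's flag (point 1).
            if reply in RULE_BOOK.values():
                self.flag_it_understands = True

    room = ChineseRoom(RULE_BOOK)
    juror = JuryMember()
    juror.judge(room)
    print(juror.flag_it_understands)   # True  - the jury's flag is set
    print(room.feels_understanding)    # False - the Room has no such flag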

-- 
	Frans van Otten
	Algemene Hogeschool Amsterdam
	Technische en Maritieme Faculteit
	fransvo@htsa.uucp

cam@edai.ed.ac.uk (Chris Malcolm cam@uk.ac.ed.edai 031 667 1011 x2550) (04/06/89)

In article <2721@crete.cs.glasgow.ac.uk> gilbert@cs.glasgow.ac.uk (Gilbert Cockton) writes:
>In article <3633@uhccux.uhcc.hawaii.edu> lee@uhccux.uhcc.hawaii.edu (Greg Lee) writes:
>>From article <2705@crete.cs.glasgow.ac.uk>, by gilbert@cs.glasgow.ac.uk (Gilbert Cockton):
>>" ...  As 'mind' was not designed, and not by us more
>>" importantly, it is not fully understood for any of its activities
>>" ('brains' are of course, e.g. sleep regulation).  Hence we cannot yet
>>" build an equivalent artefact until we understand it.  ...
>>
>>It doesn't follow.  Think of a diamond, for instance.
>>
>Category mistake.  
>
>Diamonds are
>	a) concrete
>	b) 'assayable' - i.e. you can test chemically that X is indeed a 
>	diamond
>	c) synthesisable by following well-understood chemical theories
>
>Minds are
>	a) abstract
>	b) not 'assayable' - what the word covers is vague.
>	c) not provably sythesisable becuase of (b) no test for mindhood,
>	 and also no theory of how minds get made and function

There are many different kinds of understanding.  People are
extremely good at fiddling about and getting things to work, using
the minimum understanding necessary for the job.  Consequently
sailing ships achieved considerable sophistication before the theory
of aerodynamics was discovered; and steam engines were made to work
not only before the theory of heat engines and thermodynamics
existed, but in the face of some weird and quite wrong ideas about
the principles involved.  And don't forget that evolution has
re-invented the optical eye a number of times, despite never having
been to school, let alone having a scientific understanding of
optics. 

Richard Gregory in "Mind in Science" argues that not only do people
sometimes make working devices in advance of a proper theoretical
understanding of the principles involved, but that this is actually
the way science usually progresses: somebody makes something work,
and then speculates "how interesting - I wonder WHY it works?"

So I expect that AI will produce working examples of mental
behaviour BEFORE anyone understands how they work in the analytic
sense (as opposed to the follow-this-construction-recipe sense), and
that it will be examination and experimentation with these working
models which will then lead to a scientific understanding of mind.
As for "mind" not being assayable, it's a pity nobody has invented a
mind-meter, but we are all equipped with sufficient understanding to
be able to say "that looks pretty like a mind to me".  Even if
closer examination or analysis proves such a judgement wrong,
subjecting these judgements to analysis such as the Chinese Room
argument, and testing them on the products of AI labs, is a good way
of refining them.  Current ideas about what constitutes mental
behaviour are a good deal more sophisticated than those of several
decades ago, partly due to the experience of exercising our concepts
of mentality on such devices as the von Neumann computer.  I don't
see any reason why AI, psychology, and philosophy shouldn't
continue to muddle along in the same sort of way, gradually refining
our understanding of mind until the point where it becomes
scientific.

A (new) category mistake?  I assert that I will have a scientific
understanding of mind when I can tell you exactly how to make one of
a given performance, and be proven right by the constructed device,
although such a device had never before been built.  Unfortunately I
don't expect any of us to live that long, but that's just a
technical detail.

This idea that you have to understand something properly before
being able to make it is a delusion of armchair scientists who have
swallowed the rational reconstruction of science usually taught in
schools, and corresponds to the notion sometimes held by
schoolteachers of English that no author could possibly write a
novel or poem of worth without being formally educated in the 57
varieties of figures of speech.  It also corresponds to the notion
that one can translate one language into another by purely syntactic
processing, a notion that AI disabused itself of some time ago after
contemplating its early experimental failure to do just that.

The human mind is fortunately far too subtle and robust to permit a
little thing like not understanding what it's doing to get in the
way of doing it. Otherwise we wouldn't even be able to think, let
alone create artificial intelligence.
-- 
Chris Malcolm    cam@uk.ac.ed.edai   031 667 1011 x2550
Department of Artificial Intelligence, Edinburgh University
5 Forrest Hill, Edinburgh, EH1 2QL, UK		

markh@csd4.milw.wisc.edu (Mark William Hopkins) (04/06/89)

In article <820@htsa.uucp> fransvo@htsa.UUCP (Frans van Otten) writes:
>Greg Lee writes:
>
>>These things don't exist ... the effort to program meaning into a
>>machine can never be successful ... because there's simply no such
>>thing in the world to be found in us or to be put into a machine.
>
>The same holds for "understanding".  I define "understanding" as
>"represented in internal symbols".

... and let me add here.  Those "symbols" are the symbolic operations that
control the body's muscular/skeletal functions.  For example, we "understand"
the verb to "move" by relating it to the control we exercise over our muscles;
the verb to "eat" by our ability to eat and digest food and so on.
     These are biological universals of the human race that, in effect, create
a universal semantic formalism for all human languages, which, in turn, gives
us all a common basis for learning our first (and second and ...) language.
     Understanding a *human* language is intimately related to experiencing our
biological condition.
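
A rough sketch of that grounding idea (purely illustrative: the verbs,
the motor routines and the table tying them together are assumptions
of the sketch, not a claim about how the brain actually does it):

    # Each verb's "symbol" is tied to a bodily operation; a verb counts
    # as "understood" here only if such a tie exists.

    def move_limb(limb, direction):
        """Stand-in for the motor command that moves a limb."""
        return "motor command: %s -> %s" % (limb, direction)

    def chew_and_swallow(food):
        """Stand-in for the ingestion sequence."""
        return "motor command: ingest %s" % food

    GROUNDED_LEXICON = {
        "move": lambda: move_limb("arm", "forward"),
        "eat":  lambda: chew_and_swallow("bread"),
    }

    def understand(verb):
        action = GROUNDED_LEXICON.get(verb)
        return action() if action else "no grounding for this symbol"

    print(understand("move"))   # motor command: arm -> forward
    print(understand("eat"))    # motor command: ingest bread
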
     BTW, just what is CR anyway?

lee@uhccux.uhcc.hawaii.edu (Greg Lee) (04/06/89)

From article <2721@crete.cs.glasgow.ac.uk>, by gilbert@cs.glasgow.ac.uk (Gilbert Cockton):
" In article <3633@uhccux.uhcc.hawaii.edu> lee@uhccux.uhcc.hawaii.edu (Greg Lee) writes:
" >From article <2705@crete.cs.glasgow.ac.uk>, by gilbert@cs.glasgow.ac.uk (Gilbert Cockton):
" >" ...  As 'mind' was not designed, and not by us more
" >" importantly, it is not fully understood for any of its activities
" >" ('brains' are of course, e.g. sleep regulation).  Hence we cannot yet
" >" build an equivalent artefact until we understand it.  ...
" >
" >It doesn't follow.  Think of a diamond, for instance.
" >
" Category mistake.  

Whose category mistake?  Yours?  Certainly not mine.  If you had
argued:
	As 'mind' was not designed, and not by us more
	importantly, and as it is abstract and not 'assayable'
	and not provably synthesizable, it is not fully
	understood ... Hence we cannot build an equivalent
	artifact ...

then I would not have made the particular objection that I made (though
I might have pointed out the circularity).  But that's not what you
said.

" ...
" Minds are
" 	a) abstract
" 	b) not 'assayable' - what the word covers is vague.
" 	c) not provably sythesisable becuase of (b) no test
"	   for mindhood, and also no
" 	   theory of how minds get made and function

lee@uhccux.uhcc.hawaii.edu (Greg Lee) (04/06/89)

From article <820@htsa.uucp>, by fransvo@htsa.uucp (Frans van Otten):
" >no such thing in the world to be found in us or to be put into a machine.
"
" Without getting too philosophical, let me explain that this is partly
" true and partly false.  Humans are conscious.  This is true simply
" because we state it.  But what do we mean by "conscious" ?

I think there is a crucial confusion in what you say here.  The
fact that humans say they are conscious is a fact of human
behavior, worthy of study.  That humans are conscious can
alternatively be taken as a theory some of us have about
human behavior, though perhaps not a very well defined one.
Taken as a fact, it's undeniable.  Taken as a theory, if it
can be made specific enough to have empirical consequences,
it can be wrong.  I think it is wrong.  But more importantly,
I think we can't even make sense of these issues, if from the
true fact that humans say they are conscious, we draw the
conclusion that the (or some) corresponding theory must be
correct.  That's not sensible.  The fact is different from
the theory; the theory does not follow from the fact.

" ... The Chinese Room Argument is nonsense. ...

I'm with you on that point.
			Greg, lee@uhccux.uhcc.hawaii.edu

gilbert@cs.glasgow.ac.uk (Gilbert Cockton) (04/07/89)

In article <322@edai.ed.ac.uk> cam@edai (Chris Malcolm) writes:
>There are many different kinds of understanding.  People are
>extremely good at fiddling about and getting things to work, using
>the minimum understanding necessary for the job.  Consequently

Oh yawn, yawn, yawn.
OK, knowing how, knowing that, knowing why, all different - freshman
epistemology this, but don't let me ryle (sic) you :-)

>sailing ships achieved considerable sophistication before the theory
>of aerodynamics was discovered; and steam engines were made to work
>not only before the theory of heat engines and thermodynamics
>existed, but in the face of some wierd and quite wrong ideas about
>the principles involved.

They were making ships, not minds though.  If they'd tried to make
fish, things would have been different.  Ships and steam engines are
artefacts with functions, they are not attempts to copy nature.  The
rest of your argument here falls down because of this simple
confusion.  Just to make life harder though, "mind" isn't obviously a
part of nature in the sense that most people use it (the eternal, that
which does not depend on us being here).  The presence of (the illusion of,
for Greg Lee et al.) consciousness, understanding, volition,
inspiration and imagination puts "mind" out of the normal scope of
nature.  You're really going to "build" one of these, and say, "well I
didn't know what I was doing, but hey I got there, and yeah, it's a
mind"?  Stop kidding yourself.  All examples of artefacts which exploit
yet-to-be-understood phenomena have been made as tools to solve a
narrow problem.  This does not apply to strong AI, and your attempt to
exploit the analogy does not suggest to me that you have thought, or
indeed want to think, long and hard about the argument.  You appear, to
me, to be clutching at straws.

>mind-meter, but we are all equipped with sufficient understanding to
>be able to say "that looks pretty like a mind to me".  

Sorry, but this is balls - see under animism. There are cultures which
give minds to things which ours does not.  I suppose they are wrong and
we are right?  Why not trot down to Portobello at the weekend and
ask folk on the beach the following:

	Which of these have "minds"
		a) all humans
		b) Muriel Gray
		c) Neil Kinnock or Ronald Reagan
		d) the beach donkeys
		e) wasps 
		f) slugs
		g) stroke patients with no mobility or speech
		h) rivers
		i) trees
		j1) god (for atheists)
		j2) God (for believers)
		j2 supplementary) the Holy Spirit
		k) South Bridge
		l) an intelligent front end to a computer system

So what are the right answers?  Will we all make the same judgements? If
not, then is the Turing Test a sensible basis for a serious research
topic?  Will the word of one person, enough for Turing, really be
enough for research funders?  What if ten people are split 8-2 in
favour of the computer having a mind?  What if the 8 are all children?
Come on, let's have some proper criteria for success, not what your
mum says.  If the Turing Test was THE test, then bring up a wino from
Greyfriars today and give him some cans of Heavy to say your robots
have minds.  Surely that would count as passing the Turing Test? :-)

>Current ideas about what constitutes mental 
>behaviour are a good deal more sophisticated than those of several 
>decades ago, partly due to the experience of exercising our concepts 
>of mentality on such devices as the von Neumann computer.

Evidence please.  I studied cognitive psychology as part of my first
degree ten years ago.  I presume the progress has happened since then,
because things were very poor then.  In the late 1970s, there was
considerable contempt for the monkeys and cannibals and other
artificial problem solving work, which appeared to be the state of the
art at the time.  What's changed?

>This idea that you have to understand something properly before
>being able to make it is a delusion of armchair scientists who have

You're right that we didn't use to do things this way.  However, the
idea that you can let loose an artefact which is not understood these
days borders on the immoral.  Legally, with changes in product
liability, you *MUST* understand what you have made.  Hence the social
irrelevance of much expert systems work.  If they can't be guaranteed,
they can't be used.

>The human mind is fortunately far too subtle and robust to permit a
>little thing like not understanding what it's doing to get in the
>way of doing it. Otherwise we wouldn't even be able to think, let
>alone create artificial intelligence.

What have you been eating?  My mind is made, that's how I think with
it.  Your computer mind isn't made, so you can't think with that.

You haven't answered the question of how computer modelling of an
ill-defined social construct improves our understanding of nature.
-- 
Gilbert Cockton, Department of Computing Science,  The University, Glasgow
	gilbert@uk.ac.glasgow.cs <europe>!ukc!glasgow!gilbert

fransvo@htsa.uucp (Frans van Otten) (04/07/89)

Greg Lee writes:

>Frans van Otten writes:
>
>>Humans are conscious.  This is true simply because we state
>>it.  But what do we mean by "conscious" ?
>
>The fact that humans say they are conscious [... can been taken
>as] a fact of human behavior [... or as] a theory.  Taken as a
>fact, it's undeniable.  Taken as a theory [...] it can be wrong.
>I think it is wrong.  But more importantly, I think we can't even
>make sense of these issues, if from the true fact that humans say
>they are conscious, we draw the conclusion that the (or some)
>corresponding theory must be correct.  That's not sensible.  The
>fact is different from the theory; the theory does not follow from
>the fact.

I can't follow you.  What do I mean when I say "I am hungry" ?

  1.  I am in need of food.
  2.  I have a (subjective) feeling that I call "hungry".  This
      feeling has been caused by an empty stomach, or by something
      else.

Taken as (1), it is deniable.  I can have a hungry feeling without
actually needing food.  Taken as (2), it is undeniable:  I *am*
hungry.  Maybe this hungry feeling is caused by something other
than an actual need for food, but I don't say anything about that.

Now when I say "I am conscious", I have a (subjective) feeling which
I call "conscious".  With this statement, I don't say anything about
what might have caused this feeling.  In my article, I stated:

>>Humans are conscious (1).  This is true simply because we state
>>it (2).  But what do we mean by "conscious" (3) ?

Maybe I should have written:

  (1) Humans say they are conscious.
  (2) So they have a subjective feeling which they call "conscious".
  (3) What might this feeling mean, or what might have caused it ?

Then I continued my article trying to answer that question.

So where do we misunderstand each other ?

-- 
	Frans van Otten
	Algemene Hogeschool Amsterdam
	Technische en Maritieme Faculteit
	fransvo@htsa.uucp

lee@uhccux.uhcc.hawaii.edu (Greg Lee) (04/08/89)

From article <322@edai.ed.ac.uk>, by cam@edai.ed.ac.uk (Chris Malcolm    cam@uk.ac.ed.edai   031 667 1011 x2550):
" ...
" This idea that you have to understand something properly before
" being able to make it is a delusion of armchair scientists who have
" swallowed the rational reconstruction of science usually taught in
" schools, and corresponds ...   It also corresponds to the notion
" that one can translate one language into another by purely syntactic
" processing, a notion that AI disabused itself of some time ago after
" contemplating its early experimental failure to do just that.

What you say seems to assume that the syntax of natural languages was or
is understood.  That is not the case.  It's very far from being the
case.  The failure you mention, consequently, does not suggest that
translation cannot be achieved by syntactic processing.

		Greg, lee@uhccux.uhcc.hawaii.edu

lee@uhccux.uhcc.hawaii.edu (Greg Lee) (04/08/89)

From article <826@htsa.uucp>, by fransvo@htsa.uucp (Frans van Otten):
" ...
" I can't follow you.  What do I mean when I say "I am hungry" ?
" 
"   1.  I am in need of food.
"   2.  I have a (subjective) feeling that I call "hungry".  This
"       feeling has been caused by an empty stomach, or by something
"       else.
" 
" Taken as (1), it is deniable.  I can have a hungry feeling without
" actually needing food.  Taken as (2), it is undeniable:  I *am*
" hungry.

Though I have reservations about (2), I think this is a good analogy.
One might say "I am hungry" to mean "Give me some food now" or to
announce an intention to raid the refrigerator.  If so, it's an
explanation of the demand or of the up-coming behavior.  As students of
human behavior, we might take the explanation seriously, and go about
the business of trying to put it in a rigorous way by trying to identify
a chemical hunger-need syndrome or a neuronal hunger-feeling syndrome.
That's step one.  Having done that, as step two, we could investigate
its truth.  Maybe for hunger-feeling it would have to be true, as you
say.  I don't think that's so clear.

But what about step one?  Does it have to succeed?  We're dealing
with a folk explanation of behavior.  Maybe it's just wrong.

Similarly, consciousness.

"  Maybe this hungry feeling is caused by something else
" then an actual need for food, but I don't say anything about that.
" 
" Now when I say "I am conscious", I have a (subjective) feeling which
" I call "conscious".  With this statement, I don't say anything about
" what might have caused this feeling.

That's not what I meant by saying it was a theory.  I meant that
statements about consciousness are used ordinarily to explain or justify
behavior.  Folk theories are advanced.

" ... So where do we misunderstand eachother ?

Since you don't recognize the theoretical nature of humans' statements
about consciousness, you are prevented from entertaining the possibility
that the theories might be incorrect.

		Greg, lee@uhccux.uhcc.hawaii.edu