[comp.ai] Can Machines Think?

crowston@athena.mit.edu (Kevin Crowston) (12/18/89)

I also read Searle's and the Churchlands' articles in Scientific American
and I'm not sure I understand Searle's argument.  Perhaps someone who
does can try to explain once more.

Searle seems to be saying that the Turing Test is meaningless as a test of
understanding because the Chinese Room can pass it, even though the
person in the Chinese Room doesn't understand Chinese.  But it seems to
me that this argument equates the mind and the brain and thus mislocates 
the thing doing the understanding.  I agree that the man in the room doesn't
understand Chinese; but I would argue similarly that a computer, as a
collection of wires and silicon, or a brain, as a blob of protoplasm,
don't understand anything either.  In all three cases, the thing doing the
understanding is the program not the hardware.  

Searle acknowledges this argument (it's counterargument c in his
article), but answers it by imagining a situation in which the man in
the room memorizes the rules, the inbox, etc.  He argues that it can't
be the rules that do the understanding, since all there is in the room
is the man (who we agree doesn't understand Chinese).

The part I don't understand is, what difference does it make how the
rules are stored?  I don't see why it makes a difference if the man
memorizes the rules or reads them off a piece of paper.  In the latter
case, admittedly, you can point to the rule book; but that doesn't
mean the rule book doesn't exist in the former case.  It seems to me
that Searle's second example is really the same example, in which case
the argument (that it's the rules that do the understanding, not the man
in the room) remains unanswered.  

I expect the Scientific American articles will set off another wave of
articles; I look forward to the debate.  

Kevin Crowston

jfw@bilbo.mc.duke.edu (John Whitehead) (12/19/89)

In article <1989Dec18.014229.18058@athena.mit.edu> crowston@athena.mit.edu (Kevin Crowston) writes:
>I also read Searle's and the Churchlands' articles in Scientific American
>and I'm not sure I understand Searle's argument.  Perhaps someone who
>does can try to explain once more.

For a good analysis of this -- and many other similar thought-challenging
papers -- check out _The Mind's I_, edited by Douglas Hofstadter and
Daniel Dennett.  I haven't seen the Sci Am article, but I imagine it is
similar (if not identical) to the one in this book.

     - John Whitehead

sarge@metapsy.UUCP (Sarge Gerbode) (12/19/89)

In article <1989Dec18.014229.18058@athena.mit.edu> crowston@athena.mit.edu (Kevin Crowston) writes:

>Searle seems to be saying that the Turing Test is meaningless as a test of
>understanding because the Chinese Room can pass it, even though the
>person in the Chinese Room doesn't understand Chinese.  But it seems to
>me that this argument equates the mind and the brain and thus mislocates 
>the thing doing the understanding.  I agree that the man in the room doesn't
>understand Chinese; but I would argue similarly that a computer, as a
>collection of wires and silicon, or a brain, as a blob of protoplasm,
>don't understand anything either.  In all three cases, the thing doing the
>understanding is the program not the hardware.

On reflection, I don't think you can dispose of the issue that easily
by differentiating between the program and the hardware.  The program
is a schema that describes the electronic state the hardware should
be in when the code file is loaded.  In a very real sense, then, the
shape of the physical machine has been altered by loading the code
file, just as much as if you had flipped switches within the machine
(as we used to do with the old panel switches).  So after the code is
loaded, there is actually a different physical machine there, just as
much as if one had gone out and bought a different machine.

Just because it isn't "hard" (i.e., you can't kick it, and it's easy
to change), doesn't mean it isn't a physical entity.
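
To make that concrete, here is a toy C sketch (purely illustrative, not
anybody's real architecture): the "hardware" is a fixed loop, and "loading
a program" just fills in a table of rules.  With a different table loaded,
the same loop behaves, for all practical purposes, as a different machine.

#include <stdio.h>

#define STEPS 4

typedef int (*rule)(int);        /* one "switch setting" */

static int inc(int x) { return x + 1; }
static int dbl(int x) { return 2 * x; }

/* the unchanging "hardware": apply whatever rules have been loaded */
static void run(rule program[STEPS], int x)
{
    int i;
    for (i = 0; i < STEPS; i++)
        x = program[i](x);
    printf("final state: %d\n", x);
}

int main(void)
{
    rule machine_a[STEPS] = { inc, inc, inc, inc };  /* program A loaded */
    rule machine_b[STEPS] = { dbl, dbl, dbl, dbl };  /* program B loaded */

    run(machine_a, 1);   /* prints 5  */
    run(machine_b, 1);   /* prints 16 */
    return 0;
}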
-- 
Sarge Gerbode -- UUCP:  pyramid!thirdi!metapsy!sarge
Institute for Research in Metapsychology
431 Burgess Drive; Menlo Park, CA 94025

crowston@athena.mit.edu (Kevin Crowston) (12/19/89)

In article <968@metapsy.UUCP> sarge@metapsy.UUCP (Sarge Gerbode) writes:
>On reflection, I don't think you can dispose of the issue that easily
>by differentiating between the program and the hardware.  The program
>is a schema that describes the electronic state the hardware should
>be in when the code file is loaded.  In a very real sense, then, the
>shape of the physical machine has been altered by loading the code
>file, just as much as if you had flipped switches within the machine
>(as we used to do with the old panel switches).  So after the code is
>loaded, there is actually a different physical machine there, just as
>much as if one had gone out and bought a different machine.

But even so, the program still exists in both cases, right?

Actually, I think you've made a key point here.  Loading the software
essentially gives you a different machine.  But I think this actually
supports my argument.  Imagine the effect on the "understanding" done
by the Chinese room of replacing the person with someone else (assuming
that someone can also follow the rules).  Now imagine changing the 
rulebook.  In the first case, the Chinese room will be unaffected; in
the second, it might change.  I would argue that this is further
evidence that it's the program not the hardware that matters.  Since it
could be anyone in the Chinese Room, it shouldn't matter what that
person happens to think. 
>
>Just because it isn't "hard" (i.e., you can't kick it, and it's easy
>to change), doesn't mean it isn't a physical entity.

Actually, this was my point.  Software exists, even though you can't point
to it.

Kevin Crowston

dave@cogsci.indiana.edu (David Chalmers) (12/19/89)

"Programs" do not think.
Cognition is not "symbol-manipulation."
The "hardware/software" distinction is unimportant for thinking about minds.

However:

Systems with an appropriate causal structure think.
Programs are a way of formally specifying causal structures.
Physical systems which implement a given program *have* that causal structure,
physically.  (Not formally, physically.  Symbols were simply an intermediate
device.)

Physical systems which implement the appropriate program think.
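
A toy illustration of "implementing" a formally specified causal structure
(my sketch, not anything from the articles): take a two-state parity machine
given purely as a transition table.  The two realizations below -- one
holding its state as an integer, one as a character -- step through the same
pattern of transitions on the same input, and in that sense both *have* the
causal structure the table specifies.

#include <stdio.h>

/* the "program": next_state[state][input] for states {0,1}, inputs {0,1} */
static const int next_state[2][2] = { {0, 1}, {1, 0} };

/* realization 1: state held as an int, 0 = even, 1 = odd */
static int run_ints(const int *input, int n)
{
    int s = 0, i;
    for (i = 0; i < n; i++)
        s = next_state[s][input[i]];
    return s;
}

/* realization 2: state held as a character, 'E' = even, 'O' = odd */
static char run_chars(const int *input, int n)
{
    char s = 'E';
    int i;
    for (i = 0; i < n; i++)
        s = next_state[s == 'O'][input[i]] ? 'O' : 'E';
    return s;
}

int main(void)
{
    int bits[] = { 1, 0, 1, 1 };
    printf("int realization:  %d\n", run_ints(bits, 4));   /* 1   (odd) */
    printf("char realization: %c\n", run_chars(bits, 4));  /* 'O' (odd) */
    return 0;
}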

--
Dave Chalmers     (dave@cogsci.indiana.edu)      
Concepts and Cognition, Indiana University.
"It is not the least charm of a theory that it is refutable" -- Fred

mbb@cbnewsh.ATT.COM (martin.b.brilliant) (12/20/89)

From article <31821@iuvax.cs.indiana.edu>, by dave@cogsci.indiana.edu
(David Chalmers)...

Slightly edited to make the bones barer:

  1. Systems with an appropriate causal structure think.
  2. Programs are a way of formally specifying causal structures.
  3. Physical systems implement programs.
  4. Physical systems which implement the appropriate program think.

I take it that (1) is an acceptable definition.  Does anybody think it
begs the question?

The weakest link here may be (2), the supposition that programs can
implement any causal structure whatever, even those that do what we
call thinking.

The software/hardware duality question is semantically resolved by (3).

The conclusion is (4), which seems to assert "strong AI."

M. B. Brilliant					Marty
AT&T-BL HO 3D-520	(201) 949-1858
Holmdel, NJ 07733	att!hounx!marty1 or marty1@hounx.ATT.COM
After retirement on 12/30/89 use att!althea!marty or marty@althea.UUCP

Disclaimer: Opinions stated herein are mine unless and until my employer
	    explicitly claims them; then I lose all rights to them.

kp@uts.amdahl.com (Ken Presting) (12/20/89)

In article <968@metapsy.UUCP> sarge@metapsy.UUCP (Sarge Gerbode) writes:
>On reflection, I don't think you can dispose of the issue that easily
>by differentiating between the program and the hardware.  The program
>is a schema that describes the electronic state the hardware should
>be in when the code file is loaded.  In a very real sense, then, the
>shape of the physical machine has been altered by loading the code
>file, just as much as if you had flipped switches within the machine
>(as we used to do with the old panel switches).  So after the code is
>loaded, there is actually a different physical machine there, just as
>much as if one had gone out and bought a different machine.

This is a very good point, and often overlooked.  The physical
instantiation of data immensely complicates the concept of "symbol system".
  When machines were built from gears and axles, it was trivial to
distinguish symbols from mechanisms.  Symbols are part of a language, are
written or spoken, and (most importantly) have no mechanical functions.
But communication and computing devices blur the distinction.  In these
machines, an instance of a symbol (a charge, a current pulse, a switch)
has a mechanical role in the operation of the device.
  The first problem that arises is how to distinguish symbol manipulation
systems from other machines.  What makes a symbol "explicit"? The clearest
case of explicit symbols is printed text in a human language, but we need
to resolve hard cases.  One hard case is microcode or firmware.  The
hardest case is probably neural nets.
  Conclusion:  No definition of "symbol manipulation system" which uses
the term "explicit" will be of much help (until "explicit" is defined).

lee@uhccux.uhcc.hawaii.edu (Greg Lee) (12/20/89)

From article <6724@cbnewsh.ATT.COM>, by mbb@cbnewsh.ATT.COM (martin.b.brilliant):
>From article <31821@iuvax.cs.indiana.edu>, by dave@cogsci.indiana.edu
>(David Chalmers)...
>
>Slightly edited to make the bones barer:
>
>1. Systems with an appropriate causal structure think.
>2. Programs are a way of formally specifying causal structures.
>3. Physical systems implement programs.
>4. Physical systems which implement the appropriate program think.
>
>I take it that (1) is an acceptable definition.  Does anybody think it
>begs the question? ...

This and similar discussions have seemed to revolve around
an equivocation between theories about how a thing works and
how the thing does work.  1-4 invite this equivocation in several
ways; consequently they do not serve to clarify.  `causal' and
`structure' have to do with theory-making -- we attribute
cause/effect and structure to something in our efforts to
understand it.  So 1. in effect says that we can now understand,
or will come to understand, how people think by making a theory
involving cause and structure.  If the former, it's false;  if
the latter, it does beg the question.

If `program' in 3. is read as `theory' and `physical system' read
as thing about which the theory is made, which is the best I can
make of it, 3. is a generalization of 1. -- we can make (good)
theories about things.  As applied to human thought, and interpreted
as a prediction, it likewise begs the question.

			Greg, lee@uhccux.uhcc.hawaii.edu

kp@uts.amdahl.com (Ken Presting) (12/20/89)

In article <6724@cbnewsh.ATT.COM> mbb@cbnewsh.ATT.COM (martin.b.brilliant) writes:
>From article <31821@iuvax.cs.indiana.edu>, by dave@cogsci.indiana.edu
>(David Chalmers)...
>
>  1. Systems with an appropriate causal structure think.
>  2. Programs are a way of formally specifying causal structures.
>  3. Physical systems implement programs.
>  4. Physical systems which implement the appropriate program think.
>
>I take it that (1) is an acceptable definition.  Does anybody think it
>begs the question?

I don't think so.  Presumably, humans think because of the way we're
built, and the mechanical/chemical/electrical structure determines the
causal structure of our brains.

>The weakest link here may be (2), the supposition that programs can
>implement any causal structure whatever, even those that do what we
>call thinking.

Agreed.  The multi-body problem of astrophysics is a clear case of a
causal system which cannot be precisely represented by an algorithm.
But the argument could succeed with a weaker version of 2, IF we could
figure out which causal structures are relevant to thought

>The software/hardware duality question is semantically resolved by (3).

This is problematic.  Harnad's "symbol grounding problem" (and some of
Searle's objections, I think) point out the difficulty of claiming that
some object "thinks" strictly on the basis of its internal operation,
or even on the basis of its outputs.  Harnad would want to know how the
symbols found in the output are grounded, while Searle might claim that
the machine *simulated* thinking, but did not itself *think*.
  I agree that the software/hardware duality can only be resolved by the
concept of implementation used in (3).  I'm
just repeating a familiar (but important) theme.

miken@wheaties.ai.mit.edu (Michael N. Nitabach) (12/20/89)

In article <5767@uhccux.uhcc.hawaii.edu>, lee@uhccux.uhcc.hawaii.edu
(Greg Lee) says:

>`causal' and
>`structure' have to do with theory-making -- we attribute
>cause/effect and structure to something in our efforts to
>understand it.

This view of the fundamental nature of causation derives from a particular
metaphysical tradition, beginning with the British Empiricists, e.g. Locke
and Hume.  This is the view that causation is not an aspect of the world
which our mentality can recognize, but rather a schema which our mind imposes
on events with appropriate spatiotemporal relations.  A conceptually
opposite--Realist--stance would be that causation exists as an actual
attribute of certain pairs of physical events.  Greg's argument in that posting
rests on a particular metaphysical assumption, and not on a simple matter
of definition or brute fact.

Mike Nitabach

ele@cbnewsm.ATT.COM (eugene.l.edmon) (12/20/89)

In article <31821@iuvax.cs.indiana.edu> dave@cogsci.indiana.edu (David Chalmers) writes:
>Systems with an appropriate causal structure think.

Could you elaborate on this a bit? 



-- 
gene edmon    ele@cbnewsm.ATT.COM

sm5y+@andrew.cmu.edu (Samuel Antonio Minter) (12/20/89)

1988:11:19:05:13 SFT

     Couldn't you use the Chinese room analogy to prove that humans don't
truly understand either?  In this case the matter/energy in the human body
take the role of the man in the room and all his stacks of cards, while
the basic laws of physics take the role of the instruction book.  After all,
just as the instruction book tells the man what to do, thus simulating a room
which understands Chinese, the laws telling how various atoms, electrons,
energy fields, etc. interact with each other "instruct" the matter and energy
of the human body how to simulate intelligent behavior.  Maybe even
understanding Chinese!  Is there an error in this argument that I'm missing?
     If there isn't, then it is a more powerful counterargument than the
argument "of course the man doesn't understand, but the whole room does."

1988:11:19:05:19 SFT

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
~ |\     You have just read a message from Abulsme Noibatno Itramne! ~
~ | \                                                                ~
~ | /\   E-Mail@CMU: sm5y+@andrew.cmu.edu    <Fixed Width Fonts!>    ~
~ |/  \  S-Mail@CMU: Sam Minter                First the person      ~
~ |\  /|             4730 Centre Ave., #102          next            ~
~ | \/ |             Pittsburgh, PA 15213       the function!!       ~
~ | /  |                                                             ~
~ |/   |  <-----An approximation of the Abulsme symbol               ~
~                                                                    ~
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

mmcg@bruce.OZ (Mike Mc Gaughey) (12/20/89)

sm5y+@andrew.cmu.edu (Samuel Antonio Minter) [20 Dec 89 05:20:12 GMT]:
> 1988:11:19:05:13 SFT
> 
>      Couldn't you use the Chinese room analogy to prove that Humans don't
> truly understand either.  In this case the matter/energy in the human body
> take the role of the man in the room and all his stacks of cards, while
> the basic laws of physics take the role of the instuction book.  After all

No - this only proves that the laws of physics don't think (just as the man
in the room didn't understand).  The total system behavior (i.e., of a brain) is
that of an entity which _does_ understand the concepts represented by the
symbols being manipulated.

Mike.
--
Mike McGaughey			ACSNET:	mmcg@bruce.cs.monash.oz

"You have so many computers, why don't you use them in the
 search for love?" - Lech Walesa  :-)

lee@uhccux.uhcc.hawaii.edu (Greg Lee) (12/21/89)

From article <5610@rice-chex.ai.mit.edu>, by miken@wheaties.ai.mit.edu (Michael N. Nitabach):
>In article <5767@uhccux.uhcc.hawaii.edu>, lee@uhccux.uhcc.hawaii.edu
>(Greg Lee) says:
>
>>`causal' and
>>`structure' have to do with theory-making -- we attribute
>>cause/effect and structure to something in our efforts to
>>understand it.
>
>This view of the fundamental nature of causation derives from a particular
>metaphysical tradition, beginning with the British Empiricists, e.g. Locke
>and Hume.  This is the view that causation is not an aspect of the world
>which our mentality can recognize, but rather a schema which our mind imposes
>on events with appropriate spatiotemporal relations. ...

No, that's not quite the view I expressed.  In saying that causation is
something we attribute in theory-making, I do not need to go so far as
to say "causation is not an aspect of the world".  And I don't.  It may
be, in the case of very good theories, that it is reasonable to confound
what the theory says about a thing with the thing itself, or to take the
theory to be a discovery rather than an invention.  But in the case of
not-so-good theories, where there is some doubt as to whether what the
theory says is a cause is indeed a cause, confusing the theory with what
it describes ought to be avoided.

In the present discussion, we are dealing with not-so-good theories.

Surely there's no one who is going to try to defend the view that one
should never distinguish between a theory and what that theory purports
to describe.

			Greg, lee@uhccux.uhcc.hawaii.edu

dejongh@peirce.cis.ohio-state.edu (Matt Dejongh) (12/21/89)

In article <6724@cbnewsh.ATT.COM> mbb@cbnewsh.ATT.COM (martin.b.brilliant) writes:
>From article <31821@iuvax.cs.indiana.edu>, by dave@cogsci.indiana.edu
>(David Chalmers)...
>
>Slightly edited to make the bones barer:
>
>  1. Systems with an appropriate causal structure think.
>
>I take it that (1) is an acceptable definition.  Does anybody think it
>begs the question?

I do.  What is "an appropriate causal structure?"  Give me a
definition and an example.

	matt




----------------------------------------------------------------------------
Matt DeJongh               | Laboratory for Artificial Intelligence Research 
                           | Department of Computer and Information Sciences 
dejongh@cis.ohio-state.edu | The Ohio State University, Columbus, Ohio 43210

ladkin@icsib (Peter Ladkin) (12/21/89)

In article <5610@rice-chex.ai.mit.edu>, miken@wheaties (Michael N. Nitabach) writes:
>This view of the fundamental nature of causation derives from a particular
>metaphysical tradition, beginning with the British Empiricists, e.g. Locke
>and Hume.  This is the view that causation is not an aspect of the world
>which our mentality can recognize, but rather a schema which our mind imposes
>on events with appropriate spatiotemporal relations.

this is hardly locke's view, and barely that of hume. locke rather strongly
held that primary qualities of matter caused secondary qualities. this 
causation was not a product of anything like a mental schema. and `events
with appropriate spatiotemporal relations' were not the only inhabitants
of the physical world. you might also count bishop berkeley in with the
other two, and for him causation was `in the world'. of course, it
got there by being in the intention of a god, for him. all this is 
well-known and well-researched material. so much for summarising the
views of the british empiricists.

peter ladkin

dave@cogsci.indiana.edu (David Chalmers) (12/21/89)

gene edmon writes:
>...David Chalmers writes:
>>Systems with an appropriate causal structure think.
>
>Could you elaborate on this a bit? 

Well, seeing as you ask.  The basic idea is that "it's not the meat, it's
the motion."  At the bottom line, the physical substance of a cognitive
system is probably irrelevant -- what seems fundamental is the pattern of
causal interactions that is instantiated.  Reproducing the appropriate causal
pattern, according to this view, brings along with it everything that is
essential to cognition, leaving behind only the inessential.  (Incidentally,
I'm by no means arguing against the importance of the biochemical or the
neural -- just asserting that they only make a difference insofar as they
make a *functional* difference, that is, play a role in the causal dynamics
of the model.  And such a functional difference, on this view, can be
reproduced in another medium.)

And yes, of course this is begging the question.  I could present arguments
for this point of view but no doubt it would lead to great complications.
Just let's say that this view ("functionalism", though this word is a dangerous
one to sling around with its many meanings) is widely accepted, and I can't
see it being unaccepted soon.  The main reason I posted was not to argue for
this view, but to delineate the correct role of the computer and the program
in the study of mind.

The other slightly contentious premise is the one that states that computers
can capture any causal structure whatsoever.  This, I take it, is the true
import of the Church-Turing Thesis -- in fact, when I look at a Turing
Machine, I see nothing so much as a formalization of the notion of causal
system.  And this is why, in the philosophy of mind, "computationalism" is
often taken to be synonymous with "functionalism".  Personally, I am
a functionalist first, but accept computationalism because of the plausibility
of this premise.  Some people will argue against this premise, saying that
computers cannot model certain processes which are inherently "analog".  I've
never seen the slightest evidence for this, and I'm yet to see an example of
such a process.  (The multi-body problem, by the way, is not a good example --
lack of a closed-form solution does not imply the impossibility of a
computational model.)  Of course, we may need to model processes at a low,
non-superficial level, but this is not a problem.

The other option for those who argue against the computational metaphor is to
say "yes, but computation doesn't capture causal structure *in the right
way*".  (For instance, the causation is "symbolic", or it has to be
mediated by a central processor.)  I've never found much force in these
arguments.

--
Dave Chalmers     (dave@cogsci.indiana.edu)      
Concepts and Cognition, Indiana University.
"It is not the least charm of a theory that it is refutable" -- Fred

kp@uts.amdahl.com (Ken Presting) (12/22/89)

In article <31945@iuvax.cs.indiana.edu> dave@cogsci.indiana.edu (David Chalmers) writes:
>The other slightly contentious premise is the one that states that computers
>can capture any causal structure whatsoever.  This, I take it, is the true
>import of the Church-Turing Thesis -- in fact, when I look at a Turing
>Machine, I see nothing so much as a formalization of the notion of causal
>system.  . . .
>    . . .         Some people will argue against this premise, saying that
>computers cannot model certain processes which are inherently "analog".  I've
>never seen the slightest evidence for this, and I'm yet to see an example of
>such a process.  (The multi-body problem, by the way, is not a good example --
>lack of a closed-form solution does not imply the impossibility of a
>computational model.)  Of course, we may need to model processes at a low,
>non-superficial level, but this is not a problem.

What makes the multi-body problem a counter-example is not just the fact
that the problem has no closed-form solution, but the chaotic nature of
the mechanical system.

In a chaotic system, an arbitrarily small change in initial conditions
will over time produce an arbitrarily *large* difference in subsequent
states.  It is true that for any physical system, a numerical solution
of the differential equations can generate a prediction of the future
state with an error as small as desired.  But if a numeric model of a
physical system is to run in real time (as needed for the Turing test),
or just proportional to real time, then there will be a fixed minimum
error in the transitions from state to state.  The error may be reduced
by using faster processors or faster algorithms, but for a given
combination of processor, algorithm, and lag behind real-time, there must
be a limit on the number of terms evaluated, and a minimum error.

So at the end of the first time slice, there will be a finite difference
between the state of the real system and the calculated state of the
model.  The real system will proceed chaotically (as will the model), to
amplify the discrepancy in initial state and each subsequent state until
(sooner or later) the real system will be in a state which diverges from
the state of the model by any amount (up to the size of the system).

That was a rough sketch of a proof that not all causal systems can be
modeled by programs.  Let me add a plausibility argument, so
that the claim will not seem counter-intuitive.

What makes the analog causal system different from the algorithm is that
each state of the analog system encodes an infinite amount of information.
This holds even for electrons bound into the quantum orbitals of an atom.
There is a finite number of electrons, and transitions between levels are
discrete, but there are infininitely many energy levels (most of them very
close together).  Of course, a processor has finitely many states, and
encodes a finite amount of information.  In addition, an analog system
need not time-slice its calculations.  It can make infinitely many
transitions in any interval.  A Turing machine would need to have both
infinitely many states and run arbitrarily fast to precisely match an
analog system using a numerical algorithm.  Now, the lack of precision
is insignificant for many analog systems in many applications, where the
error is constant or grows slowly.  But in a chaotic system, the error
in the model can grow very rapidly.  If the numerical model cannot be
allowed to lag arbitrarily far behind real-time (thus trading time for
memory) then the amplification of error will make the model useless.

The "solutions in closed form" of rational mechanics gurantee that that
a numerical model for an analog system will have a rapidly computable
algorithm (often a polynomial).  So the fact that the multi-body problem
has no solution in closed form is relevant, but that's not the whole
story.  More important is chaotic amplification of initial differences.

This does *not* show that strong AI is impossible with algorithms.  There
is no way to know whether intelligence requires chaos.  But the brain is
certainly complicated enough to be as chaotic as fluid flow.  Modeling
human behavior may well have to deal with this problem.
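
To see the amplification in miniature, here is a toy C program (illustrative
only -- a one-dimensional logistic map, nothing like a real multi-body
simulation).  Two trajectories starting within 1e-12 of each other disagree
in the first decimal place within about forty iterations, which is the fate
of any fixed per-step round-off in a real-time model of a chaotic system.

#include <stdio.h>

int main(void)
{
    double r = 4.0;              /* logistic map x' = r*x*(1-x), chaotic */
    double x = 0.3;              /* the "real" system                    */
    double y = 0.3 + 1e-12;      /* the model, with a tiny initial error */
    double d;
    int i;

    for (i = 1; i <= 60; i++) {
        x = r * x * (1.0 - x);
        y = r * y * (1.0 - y);
        d = x - y;
        if (d < 0.0)
            d = -d;
        if (i % 10 == 0)
            printf("step %2d  x=%.12f  y=%.12f  |x-y|=%.3e\n", i, x, y, d);
    }
    return 0;
}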

BTW, I'd like to know if anyone has heard this kind of argument before
in connection with AI.  It must be old hat in simulation circles.

dhw@itivax.iti.org (David H. West) (12/22/89)

In article <ebfl02Ef76Hs01@amdahl.uts.amdahl.com> kp@amdahl.uts.amdahl.com (Ken Presting) writes:
|What makes the multi-body problem a counter-example is not just the fact
|that the problem has no closed-form solution, but the chaotic nature of
|the mechanical system.
|
|In a chaotic system, an arbitrarily small change in initial conditions
|will over time produce an arbitrarily *large* difference in subsequent
|states. [...]
|That was a rough sketch of a proof that not all causal systems can be
|modeled by programs.  Let me add a plausibility argument, so
|that the claim will not seem counter-intuitive.
|
|What makes the analog causal system different from the algorithm is that
|each state of the analog system encodes an infinite amount of information.

Real intelligent systems (e.g. humans) function quite successfully
at a finite temperature despite the influence of thermal
fluctuations (Brownian motion), which cause finite random perturbations
of everything.  A finite system embedded in a thermal environment
cannot encode an infinite amount of information.

This would seem to indicate that your argument has no bearing on
what may be necessary for intelligence, at least over a time-scale
short enough that the physical embodiment of the intelligence is
not disrupted by thermal fluctuations, whether or not these are
chaotically amplified.

-David West      dhw@iti.org

kp@uts.amdahl.com (Ken Presting) (12/22/89)

In article <4689@itivax.iti.org> dhw@itivax.UUCP (David H. West) writes:
>Real intelligent systems (e.g. humans) function quite successfully
>at a finite temperature despite the influence of thermal
>fluctuations (Brownian motion), which cause finite random perturbations
>of everything.  A finite system embedded in a thermal environment
>cannot encode an infinite amount of information.

A finite *digital* system is limited in the density of its states by
thermal (and other) fluctuations.  You are right to point out that the
brain is limited in its computational power by this fact.  I think that
the most productive and interesting approach to cognitive science (if not
artificial intelligence) must take advantage of the brain's limitations;
cognitive science wants to learn about how the brain computes and the algorithms it
follows.

But the brain viewed as a computational system is different from the brain
viewed as a causal system.  The same distinction applies to electronic
computers - there are plenty of hardware failure modes to complicate the
causal description of the system which are irrelevant to the computational
description.  If we want to claim that we've accurately modeled the
computational power of the brain by demonstrating that our model is
faithful to the causal interactions in the brain, we're stuck with
modeling the details (including thermal motion) down to whatever level a
critic might demand.

The word "encode" might be ill-chosen for causal systems.  If position &
velocity (et al) encode anything, the code would have infinitely many
"symbols," one for each state of the system.

David continues:
>This would seem to indicate that your argument has no bearing on
>what may be necessary for intelligence, (...)

This is true.  Certainly there is no reason to believe that infinite
processing power (or infinite anything else) is necessary for
intelligence.  What I think we can conclude is that a familiar "safety
net" argument for AI doesn't quite work (different from the original):

(1) The brain is a causal system that thinks
(2) We can model causal systems with numerical methods
(3) Therefore, we can make a model that thinks.

Searle (I believe) would object that we would get a simulation of
thinking, not the real thing.  I'm objecting that it is physically
impossible to make an adequate model.  If we could let the model crank
as long as it needed before responding (ie relax the real-time constraint)
then my argument would not hold.  The model would eventually converge
close enough to the real system to be indistinguishable.  But that won't
pass the Turing test.

dove@portia.Stanford.EDU (Dav Amann) (12/22/89)

Here we go.

Recently, several posters have debated the concept of thinking
machines without explicitly discussing thought.  We all know that we
think but I doubt that many individuals can articulate what it means
to think.  Does, for example, thinking imply consciousness?  In other
words, do things think without being aware that they are thinking?

If thought is merely the process of solving problems, then answers
become obvious.  Solving problems does not imply consciousness.  My Mac
solves all sorts of problems for me, but it certainly does not spend
its off hours contemplating the Eternal Void.  (At least, I don't
think that it does.)

However, I believe that when individuals talk about thought they imply
some sort of consciousness, or awareness.  When we say a machine that
thinks, we mean a machine that understands, reasons, draws
conclusions, learns, and is somehow aware.

Thus when Searle talks of the Chinese room, he is questioning the
awareness of the machine rather than its imitation of reasoning
processes.  (At least, as far as I can tell.)

I believe that the problem of consciousness comes from a choice of
metaphysics.  Most of us in the Western world are disciples of the
ancient Greek metaphysician, Democritus.  Democritus, you'll recall,
was the philosopher who theorized that all of reality was made of
atoms.  Understand the atoms and you'll understand reality.
Mathematics, physics, logic all went a long way towards cementing this
mind set.  The whole is equal to the sum of its parts.  Understand the
parts, and you'll understand the whole.

When this viewpoint is applied to a theory of mind, you get a lot of
folks saying, "All that's in there is a bunch of neurons firing in
certain patterns.  All that's happening is the exchange of
neurochemicals.  Understand those, and you'll understand the mind."

Well, perhaps not.  Somehow, intuitively speaking, I think that the
mind is more than the firing of neurons, though it does seem to
encompass the firing of neurons.  There's more there, I say, than the
exchange of certain neurochemicals.  Plato knew more about the human
mind than me and he knew much less about the construction of the
brain.  Perhaps this intuition explains the vigorous defense of Cartesian
dualism since Descartes without very much empirical evidence.  

Lately, however, one of the newer sciences has been breaking away from
Democritus.  Biologists understand more and more about the
individual cell, yet they find it harder and harder to explain the
relationship between cells in terms of the individual cell.
Entomologists understand a lot about termites, but they cannot explain
why five termites together will build arches the Romans would be
proud of.  The whole is more than the sum of its parts.

Perhaps the mind is much the same way.  Understanding the switches and
chemicals inside of the brain may in some way add to the knowledge of
our selves, but I don't think that it can ever fully explain our
selves and our consciousness.  

So the question arises, How can we understand ourselves or our
consciousness? How can we tell whether a machine thinks?  To these
questions I profess my ignorance, but I do not think that any method
which only looks at the parts of the brain will accomplish that lofty
goal.

						Dav Amann
						dove@portia.stanford.edu

dmocsny@uceng.UC.EDU (daniel mocsny) (12/22/89)

In article <7853@portia.Stanford.EDU>, dove@portia.Stanford.EDU (Dav Amann) writes:
> Recently, several posters have debated the concept of thinking
> machines without explicitly discussing thought.  We all know that we
> think but I doubt that many individuals can articulate what it means
> to think.

I doubt that *any* individuals can articulate what thinking is, any
more than a fish can comprehend that water is wet. 

****

One fringe benefit of being largely ignorant of both metaphysics and
AI is being blissfully unaware of all the ways I am not supposed to
think. Exercising this freedom tonight I had the idea (probably not
original) that an analogy may exist between the questions: "Can
machines think?" and "Are viruses alive?"

A virus is simply a protein coat around some genetic material. If it
is floating in a sterile environment it can't metabolize any available
nutrients the way bacteria do. It doesn't reproduce, it doesn't
respond to stimuli, it is an inert speck of macromolecules (albeit
a highly organized speck). In effect, the virus doesn't possess
sufficient causal structure to qualify as "life" in a disorganized
environment.

However, place the virus in the right environment (i.e., a host) and
it mechanistically attaches itself to receptors on host cells by
passively responding to intermolecular attractive forces. Its protein
coat dissolves mechanistically, and its genetic material enters the
host cell and alters its metabolism. As a result, the host cell
generates many copies of the original virus, then ruptures.

In the environment of the host, the virus sure looks like life.
Outside the host it does not. Therefore, whether we choose to consider
the virus "alive" depends on our frame of reference. To an alien
biologist from another planet who has no knowledge of suitable host
organisms, the information in the virus' structure would be so much
nonsense. In a sense, the virus requires a host to "ground" the
information represented in its internal structure. In other words,
a virus is like "life" with a removable context, apart from which
it becomes "non-life". Life with an on-off switch, as it were.

Now consider a very complex symbol-processing system, say one that
could make a good showing in the "Chinese Room", and moreover, one
that implements statistical pattern-matching algorithms giving it some
"learning" ability. Now, when interacting with chinese speakers,
the SPS could give a fairly convincing imitation of "thinking".
But is it thinking? The Chinese Room argument is supposed to
demonstrate conclusively that the SPS does not think. Assume
Searle is correct, and the SPS does not think. In what way, then,
does its failure to think constrain its possible behaviors?

Let us view the relationship between a symbol-processing system and
human minds as analogous to the relationship between a (benign) virus
and its host. Apart from the host, the virus is nothing, but while
interacting with the host it satisfies enough of the requirements of
"life" to be essentially alive. The virus effectively abstracts, or
mirrors, some of the essential causal structure of the host, so that
in combination with the host it can display highly complex, almost
purposive, behaviors.

So too the SPS abstracts or mirrors some of the essential causal
structure of the human mind that created it. It does not abstract
enough of that structure to "stand on its own", i.e., if we merely
print out its list of machine instructions we do not see anything
vaguely resembling "thinking", and neither can the SPS even imitate
thinking in the wrong environment. And yet, while interacting with the
appropriate "hosts", the SPS is capable of arbitrarily complex
behaviors, perhaps indistinguishable (in an arbitrarily complex SPS)
from the behavior of the minds that created it.

So perhaps a useful way to view symbol processing systems is not as
"thinking systems", but rather, "mind viruses".  (My apologies to Dr.
Rapaport if he has already published a series of papers exploring this
very notion!) :-)

> Entomologists understand a lot about termites, but they cannot explain
> why five termites together will build arches the Romans would be
> proud of.  The whole is more than the sum of its parts.

If we view "mind" as an emergent property, or epiphenomenon, of
"brain", then doesn't that mean we have no way to point to any
tangible structure in the brain and say "this produces mind" or "that
is mind"? (Because an emergent property, by definition, has no tidy
basis in the structure of any part; it only emerges when all the parts
get together. This may, of course, be only an artifact of our
conceptual deficiency.)

Or in other words, perhaps we have no way even in principle to elicit
the causal mechanisms that give rise to mind? The flip side to that
argument is, of course, that we have no way to arbitrarily restrict
the underlying causal mechanisms that could give rise to "mind" as an
epiphenomenon. Or even which causal aspects of the brain give rise to
"mind". Why, then, couldn't "mind" just as well be an epiphenomenon of
(sufficiently complex) "program"? I.e., if we can't say just how the
brain gives rise to mind, how can we be so sure programs can't do it
too?  I don't see how the Chinese Room addresses this at all.

Dan Mocsny
dmocsny@uceng.uc.edu

dhw@itivax.iti.org (David H. West) (12/22/89)

In article <3185@uceng.UC.EDU> dmocsny@uceng.UC.EDU (daniel mocsny) writes:
>One fringe benefit of being largely ignorant of both metaphysics and
>AI is being blissfully unaware of all the ways I am not supposed to
>think. Exercising this freedom tonight I had the idea (probably not
>original) that an analogy may exist between the questions: "Can
>machines think?" and "Are viruses alive?"

I rather like the quote attributed to Dijkstra (can anyone
provide a hard citation for this?): "The question of whether a
machine can think is like the question of whether a submarine can
swim".

>So perhaps a useful way to view symbol processing systems is not as
>"thinking systems", but rather, "mind viruses".  (My apologies to Dr.
>Rapaport if he has already published a series of papers exploring this
>very notion!) :-)

Richard Dawkins has.  His word for it was "meme", a conflation (I take 
it) of "memory" and "gene".

-David West        dhw@iti.org

jgk@osc.COM (Joe Keane) (12/23/89)

In article <013Y02gH77ra01@amdahl.uts.amdahl.com> kp@amdahl.uts.amdahl.com (Ken Presting) writes:
>If we want to claim that we've accurately modeled the
>computational power of the brain by demonstrating that our model is
>faithful to the causal interactions in the brain, we're stuck with
>modeling the details (including thermal motion) down to whatever level a
>critic might demand.

This is not true.  The concept of an exact simulation may be theoretically
interesting, but has nothing to do with AI.  The important question is, what
level of simulation is necessary to pass the Turing test.  I think it is
necesary to model different parts of the brain, but not to worry about the
exact distribution of potassium ions.  If you believe that it must be exact
down to the incoming cosmic rays, i don't know what to say.

>I'm objecting that it is physically
>impossible to make an adequate model.  If we could let the model crank
>as long as it needed before responding (ie relax the real-time constraint)
>then my argument would not hold.  The model would eventually converge
>close enough to the real system to be indistinguishable.  But that won't
>pass the Turing test.

If i'm not mistaken, you're arguing that it takes an arbitrarily large amount
of computation to do an acceptable simulation.  I don't believe this at all.

Let's perform a thought experiment.  I'll make a copy of the universe (that's
why it's a thought experiment) and then remove an electron in your clone's
brain.  This should not cause a severe change in your clone, but since the
brain is chaotic, this may cause him to do something differently than the
original you.  However, despite the fact that you and your clone are now
different, there is no way to tell which is the `real' you.

dave@cogsci.indiana.edu (David Chalmers) (12/23/89)

Ken Presting writes:

>What makes the multi-body problem a counter-example is not just the fact
>that the problem has no closed-form solution, but the chaotic nature of
>the mechanical system.

Chaos is only a problem if we need to model the behaviour of a particular
system over a particular period of time exactly -- i.e., not just capture
how it might go, but how it *does* go.  This isn't what we're trying to do
in cognitive science, so it's not a problem.  We can model the system to
a finite level of precision, and be confident that what we're missing is
only random "noise."  So while we won't capture the exact behaviour of System X
at 3 p.m. on 12/22/89, we'll generate equally plausible behaviour -- in other
words, how the system *might* have gone, if a few unimportant random 
parameters had been different.
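
A toy way to see the point (my sketch, using the same logistic map that
keeps coming up in the chaos discussion): perturbing the initial condition
changes *which* trajectory you get, but the long-run statistics -- here just
the time-average of the state -- come out the same, so the perturbed run is
a perfectly good example of how the system might have gone.

#include <stdio.h>

/* iterate the chaotic logistic map and return the time-average of x */
static double mean_orbit(double x0, int n)
{
    double x = x0, sum = 0.0;
    int i;
    for (i = 0; i < n; i++) {
        x = 4.0 * x * (1.0 - x);
        sum += x;
    }
    return sum / n;
}

int main(void)
{
    printf("mean, original run:  %.4f\n", mean_orbit(0.3, 100000));
    printf("mean, perturbed run: %.4f\n", mean_orbit(0.3 + 1e-9, 100000));
    /* both print a mean near 0.5, though the two orbits diverge pointwise */
    return 0;
}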

This leads to a point which is a central tenet of functionalism -- you don't
need to capture a system's causal dynamics exactly, but only at a certain level
of abstraction.  Which level of abstraction?  Well, this is usually
specified teleologically, depending on what you're trying to capture.  Usually,
it's a level of abstraction that captures plausible input/output relationships.
Anything below this, we can consider either implementational detail, or noise.
Just what this level of abstraction is, of course, is a matter of some debate.
The most traditional functionalists, including the practitioners of
"symbolic" AI, believe that you may go to a very high level of abstraction
before missing anything important.  The move these days seems to be towards
a much less abstract modelling of causal dynamics, in the belief that what
goes on at a low level (e.g. the neural level) makes a fundamental difference.
(This view is sometimes associated with the name "eliminative materialism",
but it's really just another variety of functionalism.  Even at the neural
level, what we're trying to capture are causal patterns, not substance.)

>What makes the analog causal system different from the algorithm is that
>each state of the analog system encodes an infinite amount of information.

Arguable.  My favourite "definition" of information is due to Bateson, I
think (no endorsement of Bateson's other views implied): "Information is a
difference that makes a difference."  An infinite number of bits may be
required to describe the state of a system, but in any real-world system, all
of these after a certain point will not make any difference at all, except as
random parameter settings.  (The beauty of Bateson's definition is that the
final "difference" depends on our purposes.  If we wanted a precise simulation
of the universe, these bits would indeed be "information".  If we want a
cognitive model, they're not.)

Incidentally, you can concoct hypothetical analog systems which contain
an infinite amount of information, even in this sense -- by coding up
Chaitin's Omega for instance (and thus being able to solve the Halting
Problem, and be better than any algorithm).  In the real world, quantum
mechanics makes all of this irrelevant, destroying all information beyond
N bits or so.  

Happy Solstice.
--
Dave Chalmers     (dave@cogsci.indiana.edu)      
Concepts and Cognition, Indiana University.
"It is not the least charm of a theory that it is refutable"

mbb@cbnewsh.ATT.COM (martin.b.brilliant) (12/24/89)

In article <32029@iuvax.cs.indiana.edu> dave@cogsci.indiana.edu (David Chalmers) writes:
>Chaos is only a problem if we need to model the behaviour of a particular
>system over a particular period of time exactly -- i.e., not just capture
>how it might go, but how it *does* go.  This isn't what we're trying to do
>in cognitive science, so it's not a problem.  We can model the system to
>a finite level of precision, and be confident that what we're missing is
>only random "noise."  .....

I second that.  The goal of AI is not to model a particular mind, but
to create a mind.  One thing we know from experience about minds -
which is reinforced by the argument based on chaos - is that two minds
never think exactly alike.

That would almost prove that if we modeled a given mind exactly, we
would NOT have created a mind, because a REAL mind never duplicates
another mind.  To prove we have created a mind, we have to have one
that does not exactly model another.

The chaos argument proves that if we create a mind, we will
automatically meet that requirement.  How fortunate!

M. B. Brilliant					Marty
AT&T-BL HO 3D-520	(201) 949-1858
Holmdel, NJ 07733	att!hounx!marty1 or marty1@hounx.ATT.COM
After retirement on 12/30/89 use att!althea!marty or marty@althea.UUCP

Disclaimer: Opinions stated herein are mine unless and until my employer
	    explicitly claims them; then I lose all rights to them.

sarge@metapsy.UUCP (Sarge Gerbode) (12/26/89)

In article <1989Dec19.061822.27585@athena.mit.edu> crowston@athena.mit.edu
(Kevin Crowston) writes:

>>[After object] code is loaded, there is actually a different
>>physical machine there, just as much as if one had gone out and
>>bought a different machine.

>But even so, the program still exists in both cases, right?

Good question.  What *is* a "program", anyway?  The ascii source
characters, taken as an aggregate?  The machine-language code, as a
sequence of octal or hex characters?  The magnetic patterns on the
disc?  The electronic patterns in RAM when the program is loaded?  Or
is it, as I suspect, the detailed *concept* the programmer had in
mind when he wrote the source code?  Perhaps the program (or, if you
will, the overall algorithm) is a *possibility* that can be
actualized (implemented) in a variety of ways.  This possibility
exists in the mind of a conscious being as the concept called "the
program".  Without the concept, you would not have a "program" but a
mere pattern of electronic whatevers.
-- 
Sarge Gerbode -- UUCP:  pyramid!thirdi!metapsy!sarge
Institute for Research in Metapsychology
431 Burgess Drive; Menlo Park, CA 94025

sarge@metapsy.UUCP (Sarge Gerbode) (12/26/89)

In article <24Yy02PR76bt01@amdahl.uts.amdahl.com> kp@amdahl.uts.amdahl.com
(Ken Presting) writes:
In article <968@metapsy.UUCP> sarge@metapsy.UUCP (Sarge Gerbode) writes:

>>On reflection, I don't think you can dispose of the issue that easily
>>by differentiating between the program and the hardware.  The program
>>is a schema that describes the electronic state the hardware should
>>be in when the code file is loaded.  In a very real sense, then, the
>>shape of the physical machine has been altered by loading the code
>>file, just as much as if you had flipped switches within the machine
>>(as we used to do with the old panel switches).  So after the code is
>>loaded, there is actually a different physical machine there, just as
>>much as if one had gone out and bought a different machine.

>This is a very good point, and often overlooked.  The physical
>instantiation of data immensely complicates the concept of "symbol
>system".

>When machines were built from gears and axles, it was trivial to
>distinguish symbols from mechanisms.  Symbols are part of a language,
>are written or spoken, and (most importantly) have no mechanical
>functions.  But communication and computing devices blur the
>distinction.  In these machines, an instance of a symbol (a charge, a
>current pulse, a switch) has a mechanical role in the operation of
>the device.

I may have a somewhat radical viewpoint on this, but to me a symbol
is defined as such by the intention of the conscious being using it.
A symbol is a perceivable or detectable entity that is used to direct
attention to a particular reality or potential reality.

Charges, current pulses, etc., are rightly regarded as symbols only to
the extent that they are intended (ultimately) to be comprehended by
some sort of conscious entity as indicating certain realities (or
potential realities).  In the absence of such intentions, they are not
symbols but mere charges, current pulses, etc.

Of course, things can be decoded without being *intended* to be so
decoded.  Scientists are continually decoding (understanding) elements
of the physical universe.  But these elements are (rightly) not
thought of as symbols because (unless one thinks of the universe as a
communication from God) they are not intended to be decoded in a
particular way.
-- 
Sarge Gerbode -- UUCP:  pyramid!thirdi!metapsy!sarge
Institute for Research in Metapsychology
431 Burgess Drive; Menlo Park, CA 94025

bsimon@stsci.EDU (Bernie Simon) (12/28/89)

I would like to make a few points that seem clear to me, but apparently
aren't clear to others in this discussion.

1) All physical objects are not machines. For example, stones, clouds,
flowers, butterflies, and people are not machines. This should be
obvious, but some people use the word machine to include all physical
objects. Not only is this contrary to ordinary usage, it obscures an
important distinction between what is an artifact and what is not.

2) Not all machines are computers. Lamps, screwdrivers, and cars are not
computers.

3) There are some activities which can be performed by physical objects
and machines which cannot be performed by computers.  Birds can fly and
airplanes can fly, but computers cannot fly.  Of course, a computer can
control an airplane, but this misses the distinction I am trying to
make.  The distinction is that all computers, as computers, are
equivalent to Turing machines.  If the computers performs some other
activity during its operation than executing a program (for example,
flying) it is because the machine which contains the computer is 
capable of the activity (as airplanes are capable of flying).

4) The simulation of a physical activity by a computer cannot be
identified with the physical activity. A computer running a flight
simulation program is not flying.

5) Hence, while it may be possible to build a machine that thinks, it
does not follow that it will be possible to build a computer that
thinks, as not all physical activities can be performed by computers.

6) While there are good reasons to believe that thinking is a physical
activity, there are no good reasons for believing that thinking is the
execution of a computer program. Nothing revealed either through
introspection or the examination of the anatomy of the brain leads to
the conclusion that the brain is operating as a computer. If someone
claims that it is, the burden of proof is on that person to justify that
claim. Such proof must be based on analysis of the brain's structure and
not on logical, mathematical, or philosophical grounds. Since even the
physical basis of memory is poorly understood at present, any claim that
the brain is a computer is at best an unproven hypothesis.


						Bernie Simon

andrew@dtg.nsc.com (Lord Snooty @ The Giant Poisoned Electric Head ) (12/28/89)

. 1) All physical objects are not machines. 
. 2) Not all machines are computers. 

   What is a machine? It could be said that:
	A stone is a machine; slow crystallisation processes within.
	The internal elementary particle dynamics too.
	This is machinery.
   What is a computer? It could be said that:
	Lamps could be computers if composed of Finite State Automata
	on the molecular level. The program ensures that output
	intensity remains approximately constant, and that the
	structural form remains relatively invariant during illumination.
	This is computing.

. 3) There are some activities which can be performed by physical objects
. and machines which cannot be performed by computers.  
. The distinction is that all computers, as computers, are
. equivalent to Turing machines.  

	What if the physical environment were used to compute with,
	instead of electrical energy? - think abacus. Flying then, for
	example, might be an emergent and NECESSARY property of such
	a computer.

. 4) The simulation of a physical activity by a computer cannot be
. identified with the physical activity.

	Irrelevant in light of rebuttal of 3)

. 5) Hence, while it may be possible to build a machine that thinks, it
. does not follow that it will be possible to build a computer that
. thinks, as not all physical activities can be performed by computers.

	There is no known constraint on the physical activities which
	can be performed by computers, including those of the brain
	(I naturally exclude violations of the known physical laws).

. 6) While there are good reasons to believe that thinking is a physical
. activity, there are no good reasons for believing that thinking is the
. execution of a computer program. Nothing revealed either through
. introspection or the examination of the anatomy of the brain leads to
. the conclusion that the brain is operating as a computer. If someone
. claims that it is, the burden of proof is on that person to justify that
. claim. Such proof must be based on analysis of the brain's structure and
. not on logical, mathematical, or philosophical grounds. Since even the
. physical basis of memory is poorly understood at present, any claim that
. the brain is a computer is at best an unproven hypothesis.

	Repeat - what is a computer?
-- 
...........................................................................
Andrew Palfreyman	a wet bird never flies at night		time sucks
andrew@dtg.nsc.com	there are always two sides to a broken window

mbb@cbnewsh.ATT.COM (martin.b.brilliant) (12/28/89)

In article <1037@ra.stsci.edu> bsimon@stsci.EDU (Bernie Simon) writes:
!I would like to make a few points that seem clear to me, but apparently
!aren't clear to others in this discussion.

Good thing to do.

!1) Not all physical objects are machines. For example, stones, clouds,
!flowers, butterflies, and people are not machines...

Meaning that a machine has to be an artifact.  OK, people sometimes
call people machines to emphasize that people and machines are governed
by the same physics.  But saying that people are machines really begs
the question "Can machines think," doesn't it?

!2) Not all machines are computers. Lamps, screwdrivers, and cars are not
!computers.

OK.  But all computers are machines.  And a machine can contain a computer.

!3) There are some activities which can be performed by physical objects
!and machines which cannot be performed by computers.  Birds can fly and
!airplanes can fly, but computers cannot fly...

!4) The simulation of a physical activity by a computer cannot be
!identified with the physical activity. A computer running a flight
!simulation program is not flying.

True.

!5) Hence, while it may be possible to build a machine that thinks, it
!does not follow that it will be possible to build a computer that
!thinks, as not all physical activities can be performed by computers.

We seem to have got off the track.  The question was not whether
computers can think, but whether machines can think.  If you put a
computer into a machine that can accept sensory input and create
motor output, it might be able to do what we call thinking.

!6) While there are good reasons to believe that thinking is a physical
!activity, there are no good reasons for believing that thinking is the
!execution of a computer program....

I wouldn't believe that for a minute.  I don't know exactly what
thinking is, but it is probably something a computer can't do alone,
but a machine with a computer in it might be able to do.

!.... Nothing revealed either through
!introspection or the examination of the anatomy of the brain leads to
!the conclusion that the brain is operating as a computer....

Is that a requirement for machines to think?  Consider a machine with
sensory inputs and motor outputs.  It needs a controller.  Do you have
to have an actual brain inside, or will it be sufficient to have a
computer that simulates the brain?

Flying.  We talked about flying.  A computer can't fly.  But if you
build a machine with eyes, and wings, and feet, and it needs a
controller, a machine that simulates a brain will be just as effective
as a genuine brain.

M. B. Brilliant					Marty
AT&T-BL HO 3D-520	(201) 949-1858
Holmdel, NJ 07733	att!hounx!marty1 or marty1@hounx.ATT.COM
After retirement on 12/30/89 use att!althea!marty or marty@althea.UUCP

Disclaimer: Opinions stated herein are mine unless and until my employer
	    explicitly claims them; then I lose all rights to them.

dg1v+@andrew.cmu.edu (David Greene) (12/28/89)

Excerpts from netnews.comp.ai: 27-Dec-89 Re: Can Machines Think?
martin.b.brilliant@cbnew (2932)

> !In article <1037@ra.stsci.edu> bsimon@stsci.EDU (Bernie Simon) writes:

> !5) Hence, while it may be possible to build a machine that thinks, it
> !does not follow that it will be possible to build a computer that
> !thinks, as not all physical activities can be performed by computers.

> We seem to have got off the track.  The question was not whether
> computers can think, but whether machines can think.  If you put a
> computer into a machine that can accept sensory input and create
> motor output, it might be able to do what we call thinking.


I would welcome some clarification... 

Let's assume there  is some agreement on what constitutes "what we call
thinking" -- a big assumption. 
Is it the case that machines alone can think?  Or is it that a machine
requires a computer (as a necessary but not sufficient condition) to
think?  (and that a computer alone is insufficient)

If it is only the machine+computer combination that is capable, what is
it about the combination?  Is it the ability to control its sensory
inputs and outputs (the machine part) or some other distinction?



-David
--------------------------------------------------------------------
 David Perry Greene      ||    ARPA:   dg1v@andrew.cmu.edu, dpg@isl1.ri.cmu.edu
 Carnegie Mellon Univ.  ||    BITNET:  dg1v%andrew@vb.cc.cmu.edu
 Pittsburgh, PA 15213    ||    UUCP: !harvard!andrew.cmu.edu!dg1v
--------------------------------------------------------------------

kp@uts.amdahl.com (Ken Presting) (12/29/89)

Here is the original argument under discussion:

  1. Systems with an appropriate causal structure think.
  2. Programs are a way of formally specifying causal structures.
  3. Physical systems implement programs.
  4. Physical systems which implement the appropriate program think.

I have been arguing that this argument is unsound because (2) is false.
By no means do I dispute the conclusion, though of course others would.

 David Chalmers writes:
>Chaos is only a problem if we need to model the behaviour of a particular
>system over a particular period of time exactly -- i.e., not just capture
>how it might go, but how it *does* go.  This isn't what we're trying to do
>in cognitive science, so it's not a problem.  We can model the system to
>a finite level of precision, and be confident that what we're missing is
>only random "noise."  So while we won't capture the exact behaviour of System X
>at 3 p.m. on 12/22/89, we'll generate equally plausible behaviour -- in other
>words, how the system *might* have gone, if a few unimportant random
>parameters had been different.

 M. B. Brilliant writes:
>I second that.  The goal of AI is not to model a particular mind, but
>to create a mind.

These objections seem to grant at least a part of my point - some of the
characteristics of some causal systems cannot be specified by programs.
I agree that an AI need not model any particular person at a particular
time.  But since the error in a numerical model is cumulative over time
slices, it's not just the behavior of the system at a given time that
won't match, but also the general shape of the trajectories through the
state space of the system.  If a numerical model of the brain is claimed
to be accurate except for "noise", and therefore claimed to be conscious,
then it must be shown that what is called "noise" is irrelevant to
consciousness (or thinking).  Fluctuations that seem to be "noise" may
have significant consequences in a chaotic system.
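
To see how quickly such fluctuations can matter, here is a rough C
sketch using the standard textbook example, the logistic map
x' = 4x(1-x); the starting value and the size of the perturbation are
arbitrary choices of mine:

/* Illustrative sketch only: two trajectories of the chaotic logistic
 * map whose initial states differ by 10^-12 -- the sort of discrepancy
 * a finite-precision model dismisses as "noise" -- decorrelate within
 * a few dozen iterations. */
#include <stdio.h>

int main(void)
{
    double x = 0.4, y = 0.4 + 1e-12;   /* tiny initial discrepancy */
    int step;

    for (step = 0; step <= 60; step++) {
        if (step % 10 == 0)
            printf("step %2d  x=%.9f  y=%.9f  diff=%.2e\n",
                   step, x, y, y - x);
        x = 4.0 * x * (1.0 - x);       /* logistic map, r = 4 */
        y = 4.0 * y * (1.0 - y);
    }
    return 0;
}

By step 40 or so the two runs are effectively unrelated, even though
they started closer together than any measurement could distinguish.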

 David Chalmers continues:
>Incidentally, you can concoct hypothetical analog systems which contain
>an infinite amount of information, even in this sense -- by coding up
>Chaitin's Omega for instance (and thus being able to solve the Halting
>Problem, and be better than any algorithm).  In the real world, quantum
>mechanics makes all of this irrelevant, destroying all information beyond
>N bits or so.

Quantum mechanics can't destroy any information - it just makes the
information statistical.  Note that in the wave formulation of QM, the
probability waves are continuous, and propagate and interfere
deterministically. No probability information is ever lost, but retrieving
the probabilistic information can be time consuming.  The physical system
need not "retrieve" the probabilistic information; it can react directly.

John Nagle writes:
>    2.  Recent work has resulted in an effective way to solve N-body
>        problems to an arbitrary level of precision and with high
>        speed.  See "The Rapid Evaluation of Potential Fields in
>        Particle Systems", by L.F. Greengard, MIT Press, 1988.
>        ISBN 0-262-07110-X.
>
>        Systems with over a million bodies are now being solved using
>        these techniques.

It's not enough to do fast and accurate calculations; the calculations
must remain fast no matter how accurate the simulation has to be.  Every
computer must have a finite word size, so when accuracy levels require
multiple words to represent values in a state vector, the model will slow
down in proportion to the number of words used.  This effect is
independent of the efficiency of the basic algorithm.
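
As a rough illustration of the word-size point, here is a C sketch of
multiple-precision addition; the base and the word count W are
arbitrary, but every arithmetic step must do work proportional to W
no matter how clever the outer algorithm is:

/* Illustrative sketch: numbers stored little-endian as W base-65536
 * digits.  Doubling the precision doubles the work of every addition,
 * independently of the efficiency of the simulation algorithm. */
#include <stdio.h>

#define W    8            /* words of precision per value */
#define BASE 65536UL      /* 16-bit digits kept in unsigned longs */

/* sum = a + b, with carry propagation across all W words */
static void add_multiword(const unsigned long a[], const unsigned long b[],
                          unsigned long sum[])
{
    unsigned long carry = 0;
    int i;

    for (i = 0; i < W; i++) {          /* work grows linearly with W */
        unsigned long t = a[i] + b[i] + carry;
        sum[i] = t % BASE;
        carry  = t / BASE;
    }
}

int main(void)
{
    unsigned long a[W] = {65535, 65535}, b[W] = {1}, s[W];
    int i;

    add_multiword(a, b, s);
    for (i = W - 1; i >= 0; i--)
        printf("%05lu ", s[i]);        /* most significant digit first */
    printf("\n");
    return 0;
}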

dhw@itivax.iti.org (David H. West) (12/29/89)

In article <f6xk02b078qO01@amdahl.uts.amdahl.com> kp@amdahl.uts.amdahl.com (Ken Presting) writes:
|Here is the original argument under discussion:
|
|  1. Systems with an appropriate causal structure think.
|  2. Programs are a way of formally specifying causal structures.
|  3. Physical systems implement programs.
|  4. Physical systems which implement the appropriate program think.
|
|I have been arguing that this argument is unsound because (2) is false.
|By no means do I dispute the conclusion, though of course others would.

[quotes from David Chalmers and Marty Brilliant omitted]

|These objections seem to grant at least a part of my point - some of the
|characteristics of some causal systems cannot be specified by programs.

Since the essence of your point is the lack of infinite precision,
you could just as well say that the characteristics of some systems
cannot be perceived.  But you haven't given any reasons to suppose
that this prohibits intelligence or consciousness, only omniscience.

|then it must be shown that what is called "noise" is irrelevant to
|consciousness (or thinking).  Fluctuations that seem to be "noise" may
|have significant consequences in a chaotic system.

If I'm trying to choose between nearly-equally-preferred alternatives,
fluctuations may tip the balance, but IMO the "thought" aspect here
lies in the ability to evaluate utility reasonably well, not in the 
ability to evaluate it perfectly.  Internal and external
fluctuations also affect my ability to carry out my intentions, but that
doesn't [in itself!] make me unintelligent or non-conscious, just
not omnipotent.

|It's not enough to do fast and accurate calculations; the calculations
|must remain fast no matter how accurate the simulation has to be.  Every

This seems to imply that speed can make the difference between
thought and non-thought, but none of your points 1-4 mention speed. 

-David West     dhw@iti.org

hwajin@wrs.wrs.com (Hwa Jin Bae) (12/30/89)

In article <7853@portia.Stanford.EDU> dove@portia.Stanford.EDU (Dav Amann) writes:
[...]
>Perhaps this intuition explains the vigorous defense of Cartesian
>dualism since Descartes without very much empirical evidence.  
>Lately, however, one of the newer sciences has been breaking away from
>Democritus.  Biology discusses and understands more and more about the
>individual cell, yet they find it harder and harder to explain the
>relationship between cells within the pretext of the individual cell.
>Entomologists understand a lot about termites but they cannot explain
>why five termites together will build arches the Romans would be
>proud of.  The whole is more than the sum of its parts.

This theme is further detailed in Fritjof Capra's _The Turning Point_,
which states that the Cartesian-Newtonian framework is not sufficient for
a complete understanding of human and physical problems.  [I'm sure
you have all noticed the recent abundance of writing on this particular
subject in popular literature.]  His solution seems to be to incorporate
a holistic and ecological aspect into the Cartesian-Newtonian framework,
producing a new "multidisciplinary" methodology to approach problems.
Not only that, he seems to be proposing that various different but
mutually consistent concepts may be used to describe different aspects
and levels of reality, without the need to reduce the phenomena of any
level to those of another.  Interesting.

hwajin

dave@cogsci.indiana.edu (David Chalmers) (12/30/89)

In article <1037@ra.stsci.edu> bsimon@stsci.EDU (Bernie Simon) writes:

>I would like to make a few points that seem clear to me, but apparently
>aren't clear to others in this discussion.

Hmmm, do I take it from the references line that you mean me?

>1) Not all physical objects are machines. 
>2) Not all machines are computers.
>3) There are some activities which can be performed by physical objects
>and machines which cannot be performed by computers.

1) Arguable but not relevant.
2) Of course.
3) Of course.

>Birds can fly and airplanes can fly, but computers cannot fly. [...]
>4) The simulation of a physical activity by a computer cannot be
>identified with the physical activity. A computer running a flight
>simulation program is not flying.
>5) Hence, while it may be possible to build a machine that thinks, it
>does not follow that it will be possible to build a computer that
>thinks, as not all physical activities can be performed by computers.

If you recall, the *premise* of the current discussion was that thinking is
thinking in virtue of its abstract causal structure, and not in virtue of
physical details of implementation.  If you want to argue with this premise
 -- functionalism -- then fine.  The point was not to defend it, but to defend
a view of the relation between computation and cognition which is less
simple-minded than "the mind is a computer".

Of course, I also believe that functionalism is true.  The functionalist
believes that thinking is fundamentally different to flying (and heating,
swimming, and nose-blowing).  The essence of flying certainly *cannot*
be captured in an abstract causal structure.  This is because there are
substantive *physical* criteria for flying.  An object, *by definition*,
is not flying unless it is (very roughly) engaged in ongoing motion without
any connection to the ground.  Nothing abstract about this -- it's a solid,
physical criterion. If you capture the causal patterns without the correct
physical realization, then it's not flying, period.  Similarly for
nose-blowing and the rest. 

Thinking, on the other hand, has no such solid criteria in its definition.
The only definitive criterion for thought is "having such and such a subjective
experience" -- which is far away from physical details (and a criterion which
is understood notoriously badly).  Of course, this doesn't *prove* that
thinking is not nevertheless inseparable from physical details -- a correct
theory of mind *might* just require that for these experiences, you can't
get away with anything but pointy-headed neurons.  But at the very least,
physical details are out of the *definition*, and there is thus a
principled difference between thinking and flying.  Which makes the jump to
functionalism much more plausible.  Maybe "thinking" is more like "adding"
than like "flying".

Most arguments against functionalism are in terms of "funny instantiations" --
as in "but *this* has the right causal dynamics, and surely *this* doesn't
think".  Generally Chinese objects seem to be favoured for these arguments --
whether Rooms, Gyms or Nations.  Some people find these intuitively compelling.
As for me, I find the arguments sufficiently unconvincing that my "faith" is
not only affirmed but strengthened.

>6) While there are good reasons to believe that thinking is a physical
>activity, there are no good reasons for believing that thinking is the
>execution of a computer program. Nothing revealed either through
>introspection or the examination of the anatomy of the brain leads to
>the conclusion that the brain is operating as a computer. If someone
>claims that it is, the burden of proof is on that person to justify that
>claim. Such proof must be based on analysis of the brain's structure and
>not on logical, mathematical, or philosophical grounds. Since even the
>physical basis of memory is poorly understood at present, any claim that
>the brain is a computer is at best an unproven hypothesis.

I agree.  Did you read my first note?  The whole point is that you can accept
the computational metaphor for mind *without* believing somewhat extreme
statements like "the brain is a computer", "the mind is a program", "cognition
is just symbol-manipulation" and so on.  The role of computer programs is
that they are very useful formal specifications of causal dynamics (which
happen to use symbols as an intermediate device).  Implementations of
computer programs, on the other hand, possess *physically* the given causal
dynamics.  So if you accept (1) functionalism, and (2) that computer programs
can capture any causal dynamics, then you accept that implementations of
the right computer programs think.

--
Dave Chalmers     (dave@cogsci.indiana.edu)      
Concepts and Cognition, Indiana University.
"It is not the least charm of a theory that it is refutable"

bwk@mbunix.mitre.org (Kort) (12/30/89)

In article <1037@ra.stsci.edu> bsimon@stsci.EDU (Bernie Simon) writes:

 > 6) While there are good reasons to believe that thinking is a physical
 > activity, there are no good reasons for believing that thinking is the
 > execution of a computer program.  Nothing revealed either through
 > introspection or the examination of the anatomy of the brain leads to
 > the conclusion that the brain is operating as a computer.  If someone
 > claims that it is, the burden of proof is on that person to justify that
 > claim. Such proof must be based on analysis of the brain's structure and
 > not on logical, mathematical, or philosophical grounds.  Since even the
 > physical basis of memory is poorly understood at present, any claim that
 > the brain is a computer is at best an unproven hypothesis.

The brain is a collection of about 400 anatomically identifiable
neural networks, interconnected by trunk circuits called nerve bundles,
and connected to the outside world by sensory organs (eyes, ears, nose,
tactile sensors) and effectors (muscles, vocal cords).  Neural networks
are programmable computational devices, capable of categorizing stimuli
into cases, and capable of instantiating any computable function (some
more easily than others).  Artificial neural networks are used today
for classifying applicants for credit or insurance.  They have also
been used to read ASCII text and drive a speech synthesizer, thereby
demonstrating one aspect of language processing.  As to memory, you
might want to explore recent research on the Hebbian synapse.
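
As a toy illustration of that sort of programmability, here is a C
sketch of a single artificial neuron (a perceptron) trained to
categorize two-bit inputs into the classes of logical OR; the learning
rate and epoch count are arbitrary.  The "program" ends up stored in
the weights rather than in any list of instructions:

/* Illustrative sketch: perceptron learning rule applied to OR. */
#include <stdio.h>

int main(void)
{
    double w[2] = {0.0, 0.0}, bias = 0.0, rate = 0.1;
    int x[4][2]   = {{0,0}, {0,1}, {1,0}, {1,1}};
    int target[4] = {0, 1, 1, 1};               /* the OR function */
    int epoch, i;

    for (epoch = 0; epoch < 25; epoch++) {
        for (i = 0; i < 4; i++) {
            double net = w[0]*x[i][0] + w[1]*x[i][1] + bias;
            int out = (net > 0.0) ? 1 : 0;
            double err = target[i] - out;
            w[0] += rate * err * x[i][0];       /* adjust the weights */
            w[1] += rate * err * x[i][1];
            bias += rate * err;
        }
    }
    for (i = 0; i < 4; i++) {
        double net = w[0]*x[i][0] + w[1]*x[i][1] + bias;
        printf("%d OR %d -> %d\n", x[i][0], x[i][1], (net > 0.0) ? 1 : 0);
    }
    return 0;
}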

--Barry Kort

bwk@mbunix.mitre.org (Kort) (12/30/89)

In article <f6xk02b078qO01@amdahl.uts.amdahl.com>
kp@amdahl.uts.amdahl.com (Ken Presting) writes:

 > I agree that an AI need not model any particular person at a particular
 > time.  But since the error in a numerical model is cumulative over time
 > slices, it's not just the behavior of the system at a given time that
 > won't match, but also the general shape of the trajectories through the
 > state space of the system.  If a numerical model of the brain is claimed
 > to be accurate except for "noise", and therefore claimed to be conscious,
 > then it must be shown that what is called "noise" is irrelevant to
 > consciousness (or thinking).  Fluctuations that seem to be "noise" may
 > have significant consequences in a chaotic system.

Noise is to thinking as genetic mutations are to evolution.  Most
noise is counterproductive, but occasionally the noise leads to a
cognitive breakthrough.  That's called serendipity.

--Barry Kort

cam@aipna.ed.ac.uk (Chris Malcolm) (12/30/89)

In article <973@metapsy.UUCP> sarge@metapsy.UUCP (Sarge Gerbode) writes:
>In article <1989Dec19.061822.27585@athena.mit.edu> crowston@athena.mit.edu
>(Kevin Crowston) writes:

>>>[After object] code is loaded, there is actually a different
>>>physical machine there, just as much as if one had gone out and
>>>bought a different machine.

>>But even so, the program still exists in both cases, right?

>Good question.  What *is* a "program", anyway?
> ...
>is it, as I suspect, the detailed *concept* the programmer had in
>mind when he wrote the source code?

Computer programs, like knitting patterns, sometimes arise by
serendipitous accident. "Gee, that looked good - wonder if I can do it
again?" There are also those cases where one computer program invents
another. In these cases there need never have been any mind which had a
detailed concept, or even intention, behind the source code. Even in
fully deliberate programs, it is sometimes the case that the programmer
fixes a bug by accident without understanding it - sometimes the only
way, for obscure bugs, is just to tinker until it goes away. But I would
not like to say that programs which have never been understood are
therefore not programs, any more than I would like to say that if God
does not understand how I'm built then I'm not really a person.
-- 
Chris Malcolm    cam@uk.ac.ed.aipna   031 667 1011 x2550
Department of Artificial Intelligence, Edinburgh University
5 Forrest Hill, Edinburgh, EH1 2QL, UK

gall@yunexus.UUCP (Norm Gall) (12/31/89)

bwk@mbunix.mitre.org (Kort) writes:

| Noise is to thinking as genetic mutations are to evolution.  Most
| noise is counterproductive, but occasionally the noise leads to a
| cognitive breakthrough.  That's called serendipity.

Don't you think you are playing fast and loose with these concepts?
What you say noise is when you equate it with genetic mutation is not
the same as what a radio operator knows it to be, what a philosopher
knows it to be, and what the mother of a teenager in Windsor, ON
knows it to be.

I'm not saying that you haven't defined your concepts well enough (AI
scientists have more than adequately defined it, for their purposes).
My question is "What licenses you to shift the meaning of any
particular term?"

nrg

-- 
York University          | "Philosophers who make the general claim that a 
Department of Philosophy |       rule simply 'reduces to' its formulations
Toronto, Ontario, Canada |       are using Occam's razor to cut the throat
_________________________|       of common sense.'             - R. Harris

miron@fornax.UUCP (Miron Cuperman ) (12/31/89)

In article <f6xk02b078qO01@amdahl.uts.amdahl.com>
	kp@amdahl.uts.amdahl.com (Ken Presting) writes:
> David Chalmers writes:
>>So while we won't capture the exact behaviour of System X
>>at 3 p.m. on 12/22/89, we'll generate equally plausible behaviour -- in other
>>words, how the system *might* have gone, if a few unimportant random
>>parameters had been different.
>
>These objections seem to grant at least a part of my point - some of the
>characteristics of some causal systems cannot be specified by programs.
>I agree that an AI need not model any particular person at a particular
>time.  But since the error in a numerical model is cumulative over time
>slices, it's not just the behavior of the system at a given time that
>won't match, but also the general shape of the trajectories through the
>state space of the system.

My thoughts:

Let us say you see a leaf falling.  Since you are a chaotic system, your
trajectory through space-time may be completely different than it would
be if you did not see that leaf.  But did that leaf make you
non-human?  Did it 'kill' you because it changed your future so
drastically?  I don't think so.

Let us say that we model someone on a computer but we do not capture
everything.  Because of the imperfections of the model the resulting
system will diverge.  (Also because the inputs to this system and to
the original are different.)  Isn't that equivalent to the falling leaf
incident? (assuming the model is close enough so it does not cause a
breakdown in the basic things that make a human -- whatever those
are.)  I don't agree that some characteristics cannot be specified by
programs.  They can be specified up to any precision we would like.  I
don't think the human brain is so complex that it has an infinite
number of *important* parameters (that without them you will fail the
'thinking test').  Actually I don't think there are so many that $10M
will not capture today (if we knew how to model them).

You also wrote that chaotic systems are specifically hard to model.  A
computer is a chaotic system.  It is very easy to model a computer.
Therefore it may be possible to model other chaotic systems.  You have
to justify your claim better.

Summary:  Inaccurate modeling may have an effect similar to 'normal'
  events.  Since the inputs will be different anyway, the inaccuracy
  may not matter.

		Miron Cuperman
		miron@cs.sfu.ca

jpp@tygra.UUCP (John Palmer) (12/31/89)

}In article <85217@linus.UUCP> bwk@mbunix.mitre.org (Barry Kort) writes:
}In article <1037@ra.stsci.edu> bsimon@stsci.EDU (Bernie Simon) writes:
}
} } 6) While there are good reasons to believe that thinking is a physical
} } activity, there are no good reasons for believing that thinking is the
} } execution of a computer program.  Nothing revealed either through
} } introspection or the examination of the anatomy of the brain leads to
} } the conclusion that the brain is operating as a computer.  If someone
} } claims that it is, the burden of proof is on that person to justify that
 } } claim. Such proof must be based on analysis of the brain's structure and
} } not on logical, mathematical, or philosophical grounds.  Since even the
} } physical basis of memory is poorly understood at present, any claim that
} } the brain is a computer is at best an unproven hypothesis.
}
}The brain is a collection of about 400 anatomically identifiable
}neural networks, interconnected by trunk circuits called nerve bundles,
}and connected to the outside world by sensory organs (eyes, ears, nose,
}tactile sensors) and effectors (muscles, vocal cords).  Neural networks
}are programmable computational devices, capable of categorizing stimuli
}into cases, and capable of instantiating any computable function (some
}more easily than others).  Artificial neural networks are used today
}for classifying applicants for credit or insurance.  They have also
}been used to read ASCII text and drive a speech synthesizer, thereby
}demonstrating one aspect of language processing.  As to memory, you
}might want to explore recent research on the Hebb's synapse.
}
}--Barry Kort

But the brain is not structurally programmable. The tradeoff principle
states that no system can have structural programmability, evolutionary
adaptability and efficiency at the same time. 
 
Digital computers are programmable, but lack efficiency (I may post
more on this later) and evolutionary adaptability. The brain (ie:
humans) has evolutionary adaptability and (relative) efficiency.

Biological neurons are much more complex than their weak cousins  
(artificial neurons) and contain internal dynamics which play 
a very important role in their function. Things like second messenger
systems and protein/substrate interactions are important. Internal
dynamics rely heavily on the laws of physics and we cannot determine
what "function" a neuron "computes" unless we do a physics experiment
first. Computer engineers work very hard to mask off the effects of
the laws of physics (ie: by eliminating the effects of background
noise) in order to produce a device which is structurally programmable.

Biological neurons, on the other hand, RELY on the laws of physics
to do their work. The basic computing element of biological systems,
the protein, operates by recognizing a substrate. This is accomplished
by Brownian motion and depends on weak bonds (van der Waals interactions,
etc). Thus, there is a structure/function relationship which is 
essential.      

Artificial neural nets will still be unable to solve hard problems 
(pattern recognition, REAL language processing, etc) because they
are implemented in silicon (usually as a virtual machine on top of
a standard digital computer) and are therefore inherently inefficient.
In theory (Church-Turing Thesis) it is possible for such problems to
be solved by digital computers, but most of the hard problems are 
intractable. We are very quickly reaching the limits of 
speed of silicon devices.

The only hope of solving these hard problems is by developing devices 
which take advantage of the laws of physics and that have a very
strong structure/function relationship. Of course, these devices 
will not be structurally programmable, but will have to be developed
by an evolutionary process. 

My point: We are not going to solve the hard problems of AI by 
simply developing programs for our digital computers. We have to
develop hardware that has a strong structure/function relationship.

Sorry if this posting seems a little incoherent. It's 5am and I just
woke up. I'll post more on this later. Most of these ideas are to
be attributed to Dr. Michael Conrad, Wayne State University, 
Detroit, MI. 
-- 
=  CAT-TALK Conferencing Network, Prototype Computer Conferencing System  =
-  1-800-446-4698, 300/1200/2400 baud, 8/N/1. New users use 'new'         - 
=  as a login id.   E-Mail Address: ...!uunet!samsung!sharkey!tygra!jpp   =
-           <<<Redistribution to GEnie PROHIBITED!!!>>>>                  -

sarge@metapsy.UUCP (Sarge Gerbode) (01/02/90)

In article <1779@aipna.ed.ac.uk> cam@aipna.ed.ac.uk (Chris Malcolm) writes:
>In article <973@metapsy.UUCP>sarge@metapsy.UUCP (Sarge Gerbode) writes:

>>What *is* a "program", anyway?
>>...
>>is it, as I suspect, the detailed *concept* the programmer had in
>>mind when he wrote the source code?

>Computer programs, like knitting patterns, sometimes arise by
>serendipitous accident. "Gee, that looked good - wonder if I can do it
>again?"

>There are also those cases where one computer program invents
>another. In these cases there need never have been any mind which had a
>detailed concept, or even intention, behind the source code.

You haven't really defined "program", yet.  Do you mean the ASCII
code?  I can see how that could arise from a random source, but
doesn't it take a conscious being to look at the source code and
label it as a "program"?  I suppose it would be easy to design a
program that would randomly generate syntactically correct ASCII C
code that would compile and run without run-time errors (probably has
been done).  But would you really call such a random product a
program?  And if so, what's so interesting about programs as such?
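
For what it's worth, here is a rough C sketch of such a generator; the
tiny grammar, the depth limit, and the restriction to +, - and * are
arbitrary choices made only so the emitted program is free of run-time
errors.  Whether its output counts as a "program" before anyone reads
it is, of course, exactly the question at issue.

/* Illustrative sketch: emit a random but syntactically correct C
 * program (a bounded arithmetic expression wrapped in main) that will
 * compile and run without run-time errors. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* print a random arithmetic expression of bounded depth */
static void expr(int depth)
{
    if (depth == 0 || rand() % 3 == 0) {
        printf("%d", rand() % 10);         /* leaf: a small constant */
    } else {
        const char *ops = "+-*";           /* no '/', so no division by 0 */
        putchar('(');
        expr(depth - 1);
        printf(" %c ", ops[rand() % 3]);
        expr(depth - 1);
        putchar(')');
    }
}

int main(void)
{
    srand((unsigned) time(NULL));
    printf("#include <stdio.h>\n");
    printf("int main(void) { printf(\"%%d\\n\", ");
    expr(3);                               /* small depth keeps values in range */
    printf("); return 0; }\n");
    return 0;
}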

>Even in fully deliberate programs, it is sometimes the case that the
>programmer fixes a bug by accident without understanding it -
>sometimes the only way, for obscure bugs, just to tinker until it
>goes away. But I would not like to say that programs which have never
>been understood are therefore not programs, any more than I would
>like to say that if God does not understand how I'm built then I'm
>not really a person.

Good point.  But even if a program were accidentally generated
randomly (like the fabled monkeys accidentally producing a Shakespeare
play), would it not require a conscious being to *label* such a
production a "program", in order for it to be one?

I'm not sure about this point.  I suppose there might be an argument
for saying that the enterprise of science is to discover the programs
that exist in Nature, so that we can understand, predict, and control
Nature.  In particular, the DNA system could be (has been) described
as a program.  I'm not sure if this usage is legitimate, or if we are
engaging in a bit of anthropomorphizing, here.  Or "theomorphizing",
if we find ourselves thinking as if the universe was somehow
programmed by some sort of intelligent Being and we are discovering
what that program is.
-- 
Sarge Gerbode -- UUCP:  pyramid!thirdi!metapsy!sarge
Institute for Research in Metapsychology
431 Burgess Drive; Menlo Park, CA 94025

dhw@itivax.iti.org (David H. West) (01/03/90)

In article <191@fornax.UUCP> miron@cs.sfu.ca (Miron Cuperman) writes:
>You also wrote that chaotic systems are specifically hard to model.  A
>computer is a chaotic system.  

How so?

byoder@smcnet.UUCP (Brian Yoder) (01/03/90)

In article <979@metapsy.UUCP>, sarge@metapsy.UUCP (Sarge Gerbode) writes:
> In article <24Yy02PR76bt01@amdahl.uts.amdahl.com> kp@amdahl.uts.amdahl.com
> (Ken Presting) writes:
> >In article <968@metapsy.UUCP>sarge@metapsy.UUCP (Sarge Gerbode) writes:
> 
> I may have a somewhat radical viewpoint on this, but to me a symbol
> is defined as such by the intention of the conscious being using it.
> A symbol is a perceivable or detectable entity that is used to direct
> attention to a particular reality or potential reality.
> 
> Charges, current pulses, etc., are rightly regarded as symbols only to
> the extent that they are intended (ultimately) to be comprehended by
> some sort of conscious entity as indicating certain realities (or
> potential realities).  In the absence of such intentions, they are not
> symbols but mere charges, current pulses, etc.

Consider the real implementation of most programs though.  They are written
in a high-level language like C, Pascal, FORTRAN, or COBOL. That's what the
programmer knew about.  The compiler turns those symbols into symbols
that no human (usually) ever looks at or understands.  The end user sees
neither of these; he sees the user interface and understands what the
program is doing from yet another perspective.  What is the intelligence
that understands the machine language symbols?

One more step higher in complexity is to consider systems with complex
memories that load memory as they go (virtual memory kinds of systems)
which have a different physical configuration each time they are executed.

One more step takes us to self-modifying languages like LISP which can 
execute and build statements in their own language.  No human ever sees 
these intermediate symbols, but those constructs are processed and
are reflected in the behavior of the program.
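
A toy C sketch of that situation (the three-operation machine below is
an arbitrary invention of mine): one routine generates a little
"program" as data, another executes it, and no human ever writes or
reads the generated opcodes.

/* Illustrative sketch: code built at run time, then interpreted. */
#include <stdio.h>

enum { PUSH, ADD, MUL, HALT };

/* generate opcodes that compute (a + b) * c */
static int generate(int code[], int a, int b, int c)
{
    int n = 0;
    code[n++] = PUSH; code[n++] = a;
    code[n++] = PUSH; code[n++] = b;
    code[n++] = ADD;
    code[n++] = PUSH; code[n++] = c;
    code[n++] = MUL;
    code[n++] = HALT;
    return n;
}

/* run the generated opcodes on a small stack machine */
static int run(const int code[])
{
    int stack[16], sp = 0, pc = 0;
    for (;;) {
        switch (code[pc++]) {
        case PUSH: stack[sp++] = code[pc++];        break;
        case ADD:  sp--; stack[sp-1] += stack[sp];  break;
        case MUL:  sp--; stack[sp-1] *= stack[sp];  break;
        case HALT: return stack[sp-1];
        }
    }
}

int main(void)
{
    int code[32];
    generate(code, 2, 3, 4);
    printf("(2 + 3) * 4 = %d\n", run(code));   /* prints 20 */
    return 0;
}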

Finally, we have really dynamic systems like neural networks that aren't
so much "loaded with a program" as "taught" what to do.  They, like us,
don't have a static program controlling their behavioral outputs.  In a sense
our brains become "different machines" from minute to minute as we
learn and act.  (Some might say that large portions of the population
remain changeless through video stimulation, but this effect has not
yet been proven :-)

Are all of these working with "symbols"?  If not which are? Is it 
only humans that can identify a symbol?  What if all of the records
about punched cards were destroyed while card readers still existed,
would the little holes in card decks still be symbols? After the readers
were destroyed?

Brian Yoder



> -- 
> Sarge Gerbode -- UUCP:  pyramid!thirdi!metapsy!sarge
> Institute for Research in Metapsychology
> 431 Burgess Drive; Menlo Park, CA 94025


-- 
-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-
| Brian Yoder                 | answers *byoder();                            |
| uunet!ucla-cs!smcnet!byoder | He takes no arguments and returns the answers |
-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-

byoder@smcnet.UUCP (Brian Yoder) (01/03/90)

In article <6902@cbnewsh.ATT.COM>, mbb@cbnewsh.ATT.COM (martin.b.brilliant) writes:
> In article <1037@ra.stsci.edu> bsimon@stsci.EDU (Bernie Simon) writes:

[Parts 1-4 deleted for brevity]

> !5) Hence, while it may be possible to build a machine that thinks, it
> !does not follow that it will be possible to build a computer that
> !thinks, as not all physical activities can be performed by computers.

I think that this is being a little bit too restrictive.  It is pretty
clear (at least to those of us who believe that humans can think ;-) that
brains think.  However, without the "machine" through which it operates,
it couldn't do much about changing the world or discovering facts about
it.  To be fair, our potentially intelligent computer would have to have
some kind of "body" with senses and output devices (hands, wheels, or
at least a video display).

> !6) While there are good reasons to believe that thinking is a physical
> !activity, there are no good reasons for believing that thinking is the
> !execution of a computer program....
> 
> I wouldn't believe that for a minute.  I don't know exactly what
> thinking is, but it is probably something a computer can't do alone,
> but a machine with a computer in it might be able to do.

What would be missing is something for the computer/machine to think about
and a way for it to let us know that it thought something.  There's not
much to think about without any input.  As for what thinking is, the 
definition ought to include interpretation of information, the deduction 
of new information, and decisions about courses of action.  Isn't that 
something both brains and programs do pretty well?

> !.... Nothing revealed either through
> !introspection or the examination of the anatomy of the brain leads to
> !the conclusion that the brain is operating as a computer....

Maybe we should look at it the other way around: we could have a computer
acting as a brain in this machine/computer.  If it interpreted sensory data,
selected actions, and orchestrated their implementation (say, by flapping
wings), isn't that accomplishing the same end as a brain would?


Brian Yoder


-- 
-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-
| Brian Yoder                 | answers *byoder();                            |
| uunet!ucla-cs!smcnet!byoder | He takes no arguments and returns the answers |
-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-

bwk@mbunix.mitre.org (Kort) (01/03/90)

In article <6126@yunexus.UUCP> gall@yunexus.UUCP writes:

 > In article <85218@linus.UUCP> bwk@mbunix.mitre.org (Barry Kort) writes:

 > > Noise is to thinking as genetic mutations are to evolution.  Most
 > > noise is counterproductive, but occasionally the noise leads to a
 > > cognitive breakthrough.  That's called serendipity.
  
 > Don't you think you are playing fast and loose with these concepts?
 > What you say noise is when you equate it with genetic mutation is not
 > the same as what a radio operator knows it to be, what a philosopher
 > knows it to be, and what the mother of a teenager in Windsor, ON
 > knows it to be.

I'm using noise as a metaphor and analogizing it to random perturbations
in an otherwise deterministic system.  I find that analogies and
metaphors are useful tools in creative thinking, helping to direct
the mind toward deeper understanding of complex processes.

 > I'm not saying that you haven't defined your concepts well enough (AI
 > scientists have more than adequately defined it, for their purposes).
 > My question is "What licenses you to shift the meaning of any
 > particular term?"
  
My birthright licenses me to use my brain and mind to seek knowledge
and understanding, and to communicate interesting and intriguing ideas
with like-minded philosophers.

I hope you share the same birthright.  I would hate to see you
voluntarily repress your own opportunity to participate in the
exploration of interesting ideas.

--Barry Kort

flink@mimsy.umd.edu (Paul V Torek) (01/04/90)

kp@amdahl.uts.amdahl.com (Ken Presting) writes:
>If a numerical model of the brain is claimed to be accurate except for 
>"noise", and therefore claimed to be conscious, then it must be shown 
>that what is called "noise" is irrelevant to consciousness (or thinking).

Are you suggesting that
(a) Some types of conscious thought might go wrong were it not for the
	"noise", or
(b) Although a "noiseless" system might pass the Turing Test, "noise"
	might be necessary for consciousness to exist at all?
(Or something else?)

Most of the rest of your article suggests (a), but (b) strikes me as a
more interesting thesis.  I can't think of any argument against (b).
-- 
"There ain't no sanity clause" --Marx
Paul Torek					flink@mimsy.umd.edu

gilham@csl.sri.com (Fred Gilham) (01/04/90)

Brian Yoder writes:

| Consider the real implementation of most programs though.  They are written
| in a high-level language like C, Pascal, FORTRAN, or COBOL. That's what the
| programmer knew about.  The compiler turns those symbols into symbols
| that no human (usually) ever looks at or understands.  The end user sees
| neither of these; he sees the user interface and understands what the
| program is doing from yet another perspective.  What is the intelligence
| that understands the machine language symbols?

I'm pretty sure that you are using the word `symbol' here in different
ways.  In the case of a programmer writing in some programming
language, I would say that symbols (at various levels of abstraction)
are being used.  However, when the program is compiled, the symbols
disappear.  To say that the compiler turns the symbols into other
symbols is, I believe, to speak metaphorically.  The point is that
symbols only exist when there is someone to give them a meaning.

I envision the process in this way:


   meaning (in the mind)
     |                                          computerized
     |====>symbol==>(some physical pattern)==>syntactic transformation
                                                     |
                                                     |
   meaning<==symbol<==(some physical pattern)<=======|
   (back in
    the mind)

It seems to me that the computer starts and ends with the physical
patterns.  Everything else happens in our heads.
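
A small C sketch of what a purely syntactic transformation looks like
(ROT13 is just a convenient example of mine): the routine rewrites byte
patterns by character arithmetic, and any "message" in the input or
output exists only for the people reading it.

/* Illustrative sketch: the machine shuffles character codes; it never
 * consults a meaning. */
#include <stdio.h>

/* rotate alphabetic characters 13 places; leave everything else alone */
static void rot13(char *s)
{
    for (; *s; s++) {
        if (*s >= 'a' && *s <= 'z')
            *s = 'a' + (*s - 'a' + 13) % 26;
        else if (*s >= 'A' && *s <= 'Z')
            *s = 'A' + (*s - 'A' + 13) % 26;
    }
}

int main(void)
{
    char text[] = "Can machines think?";
    rot13(text);
    printf("%s\n", text);     /* prints "Pna znpuvarf guvax?" */
    rot13(text);
    printf("%s\n", text);     /* applying it again restores the original */
    return 0;
}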

The fact that the transformations themselves can be described
symbolically tends to fool people into thinking that the computer is
actually using and manipulating symbols, or even manipulating meaning.
This has been described as a ``hermeneutical hall of mirrors'', where
we project onto the computer our own thought processes.  The computer
manipulates the patterns in ways that are meaningful to us; therefore
the computer must be doing something involving meaning.  But it isn't,
any more than the Eliza program actually understood the people that
talked to it, even though THEY thought it did.

-Fred Gilham      gilham@csl.sri.com

kp@uts.amdahl.com (Ken Presting) (01/04/90)

In article <21606@mimsy.umd.edu> flink@mimsy.umd.edu (Paul V Torek) writes:
>kp@amdahl.uts.amdahl.com (Ken Presting) writes:
>>If a numerical model of the brain is claimed to be accurate except for 
>>"noise", and therefore claimed to be conscious, then it must be shown 
>>that what is called "noise" is irrelevant to consciousness (or thinking).
>
>Are you suggesting that
>(a) Some types of conscious thought might go wrong were it not for the
>	"noise", or
>(b) Although a "noiseless" system might pass the Turing Test, "noise"
>	might be necessary for consciousness to exist at all?
>(Or something else?)
>
>Most of the rest of your article suggests (a), but (b) strikes me as a
>more interesting thesis.  I can't think of any argument against (b).

I have in mind the "something else".

The point about noise in chaotic systems arises as an objection to the
argument that if all other attempts at AI fail, at least we can
numerically model the physics of the brain.  For this argument to work,
we need to be sure that we really *can* make an accurate model.  Chaotic
systems can mechanically amplify small discrepancies in initial state,
such as noise.  Numerical models trade speed for precision, so if a model
is to have the arbitrary precision needed to eliminate all discrepancies,
the model would run well behind real time, and fail the Turing test.

I think (b) is reversed.  Random brain events are probably important in
human behavior, thus affecting the Turing test.  But at least the sort
of thinking that is used to evaluate decision functions or logical
arguments seems to depend little on randomness.

Creative thinking - inventing proofs, constructing metaphors - could very
well profit from random influences.

markh@csd4.csd.uwm.edu (Mark William Hopkins) (01/05/90)

In article <35@tygra.UUCP> jpp@tygra.UUCP (John Palmer) writes:
* My point: We are not going to solve the hard problems of AI by 
* simply developing programs for our digital computers. We have to
* develop hardware that has a strong structure/function relationship.

Don't let biological precedent mislead you into this conclusion.  Nobody ever
said that nature has the best of what is possible.

If digital systems and quasi-analogical systems such as neural nets have
complementary strengths, then combining a digital computer and neural net into
one integrated system (where each tackles those tasks it can best handle) will
undoubtedly create a system capable of more than either is by itself ... and
probably capable of much more than biological systems ever were.

I can already see places where neural nets can be used in conjunction with
a classical problem-solving AI program (e.g. to "learn" evaluation functions)
... and these are just simplistic applications.
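
A rough C sketch of that integration point (the position "features" and
the hand-written linear scorer below are arbitrary stand-ins): a
classical search routine scores candidate positions through a pluggable
evaluation function, and the same slot could just as well be filled by
a trained network's output.

/* Illustrative sketch: one-ply search with a pluggable evaluator. */
#include <stdio.h>

#define NMOVES 4

/* evaluation function type: in a hybrid system, a neural net could sit here */
typedef double (*eval_fn)(const double features[]);

static double linear_eval(const double f[])
{
    return 1.0 * f[0] - 0.5 * f[1];      /* e.g. material minus exposure */
}

/* pick the move whose resulting position scores best under `evaluate' */
static int best_move(double positions[NMOVES][2], eval_fn evaluate)
{
    int m, best = 0;
    double best_score = evaluate(positions[0]);

    for (m = 1; m < NMOVES; m++) {
        double s = evaluate(positions[m]);
        if (s > best_score) { best_score = s; best = m; }
    }
    return best;
}

int main(void)
{
    double positions[NMOVES][2] = {
        {1.0, 0.0}, {3.0, 5.0}, {2.0, 1.0}, {4.0, 6.0}
    };
    printf("best move: %d\n", best_move(positions, linear_eval));
    return 0;
}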

miron@fornax.UUCP (Miron Cuperman ) (01/05/90)

In article <4711@itivax.iti.org> dhw@itivax.UUCP (David H. West) writes:
>In article <191@fornax.UUCP> miron@cs.sfu.ca (Miron Cuperman) writes:
>>You also wrote that chaotic systems are specifically hard to model.  A
>>computer is a chaotic system.  
>
>How so?

Ok.  I was unclear.  The point is that many digital systems diverge
in output when just one info bit is changed.  My concept of chaos may
be fuzzy.
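
A small C sketch of that kind of divergence (the xorshift-style update
and the seed are arbitrary choices of mine): run two copies of a
deterministic bit-mixing update from states differing in a single bit
and count how many bits differ after each step.

/* Illustrative sketch: one flipped input bit quickly spreads until the
 * two states share almost no bits. */
#include <stdio.h>

/* a simple deterministic update that mixes bits aggressively */
static unsigned long step(unsigned long x)
{
    x ^= (x << 13) & 0xFFFFFFFFUL;
    x ^= x >> 17;
    x ^= (x << 5) & 0xFFFFFFFFUL;
    return x & 0xFFFFFFFFUL;
}

/* count the bit positions where a and b disagree */
static int differing_bits(unsigned long a, unsigned long b)
{
    unsigned long d = a ^ b;
    int n = 0;
    while (d) { n += (int)(d & 1); d >>= 1; }
    return n;
}

int main(void)
{
    unsigned long a = 12345, b = 12345 ^ 1;    /* one-bit difference */
    int i;

    for (i = 0; i <= 8; i++) {
        printf("step %d: %08lx %08lx  (%2d bits differ)\n",
               i, a, b, differing_bits(a, b));
        a = step(a);
        b = step(b);
    }
    return 0;
}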

Actually, I would like to see some references on chaotic systems.
If I get enough, I may even post a list.

	Miron Cuperman
	miron@cs.sfu.ca

Nagle@cup.portal.com (John - Nagle) (01/05/90)

     We had this discussion last year.  Everybody in the field has
heard it altogether too many times.  Could we get the philosophy
out of comp.ai, please?

					John Nagle

hougen@umn-cs.CS.UMN.EDU (Dean Hougen) (01/06/90)

In article <25621@cup.portal.com> Nagle@cup.portal.com (John - Nagle) writes:
>
>     We had this discussion last year.  Everybody in the field has
>heard it altogether too many times.  Could we get the philosophy
>out of comp.ai, please?

I was here for last year's discussion, and it was obvious that not everybody
had heard the arguments enough times, or at least had not paid attention.
Quite a few people, for example, talked about Searle's Chinese Room argument
as if it had something to do with translation (it doesn't).

Should we get the philosophy out of comp.ai?  Definitely, and the CogSci,
and the math, and the EE, and the ... CompSci. ;)  A lot of people see
overlap between ai and phil.  If you don't, simply add a subject line
or two to your kill file until the talk dies down.  Or call for the
creation of a new group to handle this discussion.  Until then the calls
for help and calls for papers and calls for participants will be mixed
with some real discussion. :)

Dean Hougen
--
"The world move on a womans hips.  She start to walk and she shake it up.
     - Talking Heads

zocy641@ut-emx.UUCP (01/08/90)

	Fred writes a program on his clone to sieve for primes.  He then does
a hex-dump of the compiled machine code, and puts the printout in a bottle.

	The bottle washes ashore at the feet of Sam.  Sam has never seen a
clone, in fact he has had no contact with any modern folks.  Still he
believes the symbols are some sort of message.  Through great effort he finally
extracts the algorithm from the code.  Is there not then a relation between
the symbols the two men express the algorithm with?

	Where then were the symbols hiding in the bottle?
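
For concreteness, here is a plain sieve of Eratosthenes of the sort
Fred might have written (the details and the limit of 100 are my own
guesses); the hex dump in the bottle would be the machine code this
compiles to.

/* Illustrative sketch: sieve for the primes up to LIMIT. */
#include <stdio.h>

#define LIMIT 100

int main(void)
{
    char composite[LIMIT + 1] = {0};
    int i, j;

    for (i = 2; i * i <= LIMIT; i++)
        if (!composite[i])
            for (j = i * i; j <= LIMIT; j += i)
                composite[j] = 1;          /* cross out multiples of i */

    for (i = 2; i <= LIMIT; i++)
        if (!composite[i])
            printf("%d\n", i);             /* print the primes */
    return 0;
}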

	BTW IMHO I will be convinced to let the AIs vote in Usenet elections,
when they can perform visual parsing.  I.e. I send them some pictures in (say)
PostScript, and they reply as to what they see.  When I am content with the
response, I will welcome them aboard.

	As a test case to calibrate the results, let's try the test on the
current Usenet lusers.  Send your entries to:

	Henry J. Cobb	hcobb@walt.cc.utexas.edu

jpp@tygra.UUCP (John Palmer) (01/08/90)

In article <23040@ut-emx.UUCP> hcobb@walt.cc.utexas.edu (Henry J. Cobb) writes:
}
}	Fred writes a program on his clone to sieve for primes.  He then does
}a hex-dump of the compiled machine code, and puts the printout in a bottle.
}
}	The bottle washes ashore at the feet of Sam.  Sam has never seen a
}clone, in fact he has had no contact with any modern folks.  Still he
}believes the symbols are some sort of message.  Through great effort he finally
}extracts the algorithm from the code.  Is there not then a relation between
}the symbols the two men express the algorithm with?
}
}	Where then were the symbols hiding in the bottle?
}

It would not be possible for "Sam" to extract the algorithm.      


-- 
=  CAT-TALK Conferencing Network, Prototype Computer Conferencing System  =
-  1-800-446-4698, 300/1200/2400 baud, 8/N/1. New users use 'new'         - 
=  as a login id.   E-Mail Address: ...!uunet!samsung!sharkey!tygra!jpp   =
-           <<<Redistribution to GEnie PROHIBITED!!!>>>>                  -

gilbert@cs.glasgow.ac.uk (Gilbert Cockton) (01/09/90)

In article <25621@cup.portal.com> Nagle@cup.portal.com (John - Nagle) writes:
>
>     We had this discussion last year.  Everybody in the field has
>heard it altogether too many times.  Could we get the philosophy
>out of comp.ai, please?

Look, we wait all year for the science, and if we don't see any we're
entitled to some philosophy at the end of the year as a reward for
our patience.

I vote the discussion continues until an interesting result in AI is
announced :-)

As an alternative to the Sci Am discussion, post articles on the
following:

"The death of positivism in the study of Man rules out Truth in AI"

No credit will be given for postings which do not use real AI papers as
examples.

Candidates must write on one side of the text editor.
-- 
Gilbert Cockton, Department of Computing Science,  The University, Glasgow
	gilbert@uk.ac.glasgow.cs <europe>!ukc!glasgow!gilbert

byoder@smcnet.UUCP (Brian Yoder) (01/09/90)

In article <GILHAM.90Jan3110834@cassius.csl.sri.com>, gilham@csl.sri.com (Fred Gilham) writes:
 
> Brian Yoder writes:
 
> | Consider the real implementation of most programs though.  They are written
> | in a high-level language like C, Pascal, FORTRAN, or COBOL. That's what the
> | programmer knew about.  The compiler turns those symbols into symbols
> | that no human (usually) ever looks at or understands.  The end user sees
> | neither of these; he sees the user interface and understands what the
> | program is doing from yet another perspective.  What is the intelligence
> | that understands the machine language symbols?
> 
> I'm pretty sure that you are using the word `symbol' here in different
> ways.  In the case of a programmer writing in some programming
> language, I would say that symbols (at various levels of abstraction)
> are being used.  However, when the program is compiled, the symbols
> disappear.  To say that the compiler turns the symbols into other
> symbols is, I believe, to speak metaphorically.  The point is that
> symbols only exist when there is someone to give them a meaning.
 
> I envision the process in this way:
 
 
>    meaning (in the mind)
>      |                                          computerized
>      |====>symbol==>(some physical pattern)==>syntactic transformation
>                                                      |
>                                                      |
>    meaning<==symbol<==(some physical pattern)<=======|
>    (back in
>     the mind)
 
> It seems to me that the computer starts and ends with the physical
> patterns.  Everything else happens in our heads.
 
> The fact that the transformations themselves can be described
> symbolically tends to fool people into thinking that the computer is
> actually using and manipulating symbols, or even manipulating meaning.
> This has been described as a ``hermeneutical hall of mirrors'', where
> we project onto the computer our own thought processes.  The computer
> manipulates the patterns in ways that are meaningful to us; therefore
> the computer must be doing something involving meaning.  But it isn't,
> any more than the Eliza program actually understood the people that
> talked to it, even though THEY thought it did.

The point I was trying to make was that the information loaded in the memory
of the computer IS a set of symbols.  If anyone bothered to look in there
with a debugging tool they'd see the symbols in there (the machine
language) even though it was never in anyone's head before.  Would
you maintain that in this example the pattern in memory does not
consist of symbols, but that it does after it has been probed by the
debugger?  That seems a bit odd.  Do they go back to being non-symbols
when the debugger is removed?  Are the words on this screen symbols
when you stop looking at them? When you forget them? When they disappear
from the screen?  I say that they are carriers of information and exist
in whatever medium they are expressed in.  Thus, symbols exist all the time
(though perhaps they cannot be translated into certain forms with the
available equipment):

Brain containing      Paper containing     Disk containing 
Symbols         =====>Symbols         ====>Symbols        ===+
                                                             |
                                                        Memory containing
                                                        translated symbols
                                                        (Object Code)
                                                             | 
Brain Containing      Screen containing     Computations     | 
Symbols         <=====Symbols          <====Express Symbols==+

If we had a book written in Chinese and all people able to read Chinese
suddenly dropped dead, wouldn't the things in the book still be symbols?
Would they not still express information?

I don't think a symbol needs to be read for it to be a symbol any more
than a boat needs to float before it's a boat.  What do you think?


Brian Yoder

-- 
-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-
| Brian Yoder                 | answers *byoder();                            |
| uunet!ucla-cs!smcnet!byoder | He takes no arguments and returns the answers |
-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-

gilham@csl.sri.com (Fred Gilham) (01/10/90)

Brian Yoder writes:

===============
Brain containing      Paper containing     Disk containing 
Symbols         =====>Symbols         ====>Symbols        ===+
                                                             |
                                                        Memory containing
                                                        translated symbols
                                                        (Object Code)
                                                             | 
Brain Containing      Screen containing     Computations     | 
Symbols         <=====Symbols          <====Express Symbols==+
===============
I reply:

Brain containing      Paper containing     Disk containing 
Symbols         =====>marks          ====> magnetic domains==+
                                                             |
                                                        Memory containing
                                                        electric charges
                                                             | 
Brain Containing      Screen containing     Computations     | 
Symbols         <=====Light patterns   <====(transformation==+
                                            from physical states
                                            to other physical
                                            states)

Using a debugger doesn't really change this.  It is only when the
light patterns get translated into symbols in our heads that the
symbols exist.  We use physical states to represent symbols.  When you
use a debugger, you impose a transformation from physical states of
the computer's memory to other physical states (light patterns) on the
screen or whatever.  These light patterns have no intrinsic meaning,
only the meaning we impose on them.  If you assert otherwise, I don't see how
you can escape the conclusion that the physical patterns have meaning
for everyone, since they themselves embody meaning.  But this cannot
be true.  For example, when I read some mathematical text, I may see
some fancy squiggle.  My first task is to find out what the author
means by this fancy squiggle.  If the meaning were implicit in the
fancy squiggle, I wouldn't have this problem.

-Fred Gilham  gilham@csl.sri.com

jb3o+@andrew.cmu.edu (Jon Allen Boone) (01/10/90)

gilham@csl.sri.com (Fred Gilham) writes:
> I reply:
> 
> Brain containing      Paper containing     Disk containing 
> Symbols         =====>marks          ====> magnetic domains==+
>                                                              |
>                                                         Memory containing
>                                                         electric charges
>                                                              | 
> Brain Containing      Screen containing     Computations     | 
> Symbols         <=====Light patterns   <====(transformation==+
>                                             from physical states
>                                             to other physical
>                                             states)
> Using a debugger doesn't really change this.  It is only when the
> light patterns get translated into symbols in our heads that the
> symbols exist.  We use physical states to represent symbols.  When you
> use a debugger, you impose a transformation from physical states of
> the computer's memory to other physical states (light patterns) on the
> screen or whatever.  These light patterns have no intrinsic meaning,
> only that we impose on them.  If you assert otherwise, I don't see how
> you can escape the conclusion that the physical patterns have meaning
> for everyone, since they themselves embody meaning.  But this cannot
> be true.  For example, when I read some mathematical text, I may see
> some fancy squiggle.  My first task is to find out what the author
> means by this fancy squiggle.  If the meaning were implicit in the
> fancy squiggle, I wouldn't have this problem.

Well, for the computer, the physical states don't mean anything until
we symbol-processors force it to interpret them one way or another...
thus, to the computer, they are symbols.  In other words, if we didn't
build computers the way we do, then the binary state of a particular
section of memory wouldn't *have* to mean what it does to the
computer.  Thus, I claim that the computer uses symbols too, albeit
different ones than we use.  To me this is as valid as your claiming that
a fancy squiggle is a symbol, since you have to go find out what it
means.  The computer is microcoded to figure out what its symbols are
supposed to mean - it doesn't happen that way by default.
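
To put that concretely, here is a small Python sketch of my own (nothing to
do with any particular machine's microcode): the same four bytes can be read
as characters, as an integer, or as a floating-point number, and nothing in
the bytes themselves picks out one reading over the others.

import struct

# One physical pattern: the same four bytes "in memory".
raw = bytes([0x41, 0x42, 0x43, 0x44])

print(raw.decode("ascii"))           # read as characters:       ABCD
print(struct.unpack(">I", raw)[0])   # read as a 32-bit integer: 1094861636
print(struct.unpack(">f", raw)[0])   # read as a 32-bit float:   about 12.14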

> -Fred Gilham  gilham@csl.sri.com

- iain
"trip to the left please...."

flink@mimsy.umd.edu (Paul V Torek) (01/11/90)

I asked:
pt>Are you suggesting that
pt>(a) Some types of conscious thought might go wrong were it not for the
pt>	"noise", or
pt>(b) Although a "noiseless" system might pass the Turing Test, "noise"
pt>	might be necessary for consciousness to exist at all?
pt>(Or something else?)

kp@amdahl.uts.amdahl.com (Ken Presting) writes:
>I have in mind the "something else".
>
>The point about noise in chaotic systems arises as an objection to the
>argument that if all other attempts at AI fail, at least we can
>numerically model the phyics of the brain.  For this argument to work,
>we need to be sure that we really *can* make an accurate model.  Chaotic
>systems can mechanically amplify small discrepancies in initial state

But that doesn't matter unless (a) is true.  As many people pointed out
in reply to you, the fact that an AI system doesn't duplicate the thought
processes of any *particular* person is no problem for strong AI.  The
fact that my thought processes are different from yours doesn't
necessarily mean I'm wrong (or that I'm not really thinking) -- it just
means I'm different.

Now suppose that (a) *were* true -- the "noiseless" system goes wrong,
because (say) it can't think creatively, because "noise" is necessary
to do so.  Now *that* would be a problem.

>I think (b) is reversed.  Random brain events are probably important in
>human behavior, thus affecting the Turing test.  But at least the sort
>of thinking that is used to evaluate decision functions or logical
>arguments seems to depend little on randomness.

I agree that any particular person's behavior probably depends on random
events in her brain, but I doubt that this would affect the Turing test
-- a noiseless system would not respond "wrongly", just differently.  That's
my hunch.  But let's let that pass.  Your last sentence says that a
noiseless system would probably pass those aspects of the Turing Test
which involve such tasks as you mention.  I agree, but what (b) was
suggesting was that the Turing Test might not be an adequate test of
whether a "thinker" is conscious.  (And if you define "thought" such
that it must be conscious, then non-conscious things can't think.)
-- 
"There ain't no sanity clause" --Marx
Paul Torek					flink@mimsy.umd.edu

kp@uts.amdahl.com (Ken Presting) (01/11/90)

In article <21745@mimsy.umd.edu> flink@mimsy.umd.edu (Paul V Torek) writes:
>I asked:
>pt>Are you suggesting that
>pt>(a) Some types of conscious thought might go wrong were it not for the
>pt>	"noise", or
>
>kp@amdahl.uts.amdahl.com (Ken Presting) writes:
> .....  For this {"backstop"} argument to work,
>>we need to be sure that we really *can* make an accurate model.  Chaotic
>>systems can mechanically amplify small discrepancies in initial state
>
>But that doesn't matter unless (a) is true.  As many people pointed out
>in reply to you, the fact that an AI system doesn't duplicate the thought
>processes of any *particular* person, is no problem for strong AI.  The
>fact that my thought processes are different from yours doesn't
>necessarily mean I'm wrong (or that I'm not really thinking) -- it just
>means I'm different.

The divergence of the model from the real system means that for any
person, at any time, the model would diverge significantly from that
person's states (assuming that the brain is significantly chaotic, for the
purpose of discussion).  So it's not just that the numerical model can't
simulate me or you, it can't simulate *anybody*, *ever*.  So if we want to
claim that the simulation is close enough to brain function to be
simulated thought, then we have to show that the chaotic aspects of brain
function are inessential to thought.
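
If anyone wants to see the sort of divergence I mean, here is a toy Python
calculation (the logistic map, a textbook chaotic system - obviously not a
brain model): two runs whose initial states differ by one part in a billion
have nothing to do with each other after a few dozen iterations.

# Logistic map x -> r*x*(1-x) at r = 4.0, a textbook chaotic system.
def iterate(x, steps, r=4.0):
    for _ in range(steps):
        x = r * x * (1.0 - x)
    return x

real, model = 0.400000000, 0.400000001   # initial states differ by 1e-9
for steps in (10, 30, 50):
    print(steps, iterate(real, steps), iterate(model, steps))
# Around 30-50 iterations the two runs are already completely uncorrelated,
# even though the "model" started arbitrarily close to the "real" system.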

BTW, my thanks to you and to the others in comp.ai who are participating
in the philosophical discussions here.  You folks have helped me to
clarify my ideas by your constructive and thoughtful comments.  This group
is a good example of the solid intellectual value of Usenet.

hankin@sauron.osf.org (Scott Hankin) (01/11/90)

kp@uts.amdahl.com (Ken Presting) writes:

>The divergence of the model from the real system means that for any
>person, at any time, the model would diverge significantly from that
>person's states (assuming that the brain is significantly chaotic, for the
>purpose of discussion).  So it's not just that the numerical model can't
>simulate me or you, it can't simulate *anybody*, *ever*.  So if we want to
>claim that the simulation is close enough to brain function to be
>simulated thought, then we have to show that the chaotic aspects of brain
>function are inessential to thought.

However, if the brain is sufficiently chaotic, we would have to assume that
a "perfect duplicate" (such as might come out of a matter duplicator) would
immediately diverge from the original.  I fail to see how that matters.
Would the duplicate therefore not be thinking?  Would he/she not be the
same as the original?  I suspect the answer to the first question is no,
for the duplicate would still function in the same manner as the original,
who, we can only assume, thinks.  I also suspect that the answer to the
second is no.  The duplicate would cease being the same as the original at
the point of duplication.  They would be substantially the same, to be
sure, but start to differ almost immediately.

I don't feel that the issue is the simulation of any given personality, but
rather whether a simulation could have thought processes as close to yours
as yours are to mine.

- Scott
------------------------------
Scott Hankin  (hankin@osf.org)
Open Software Foundation

bls@cs.purdue.EDU (Brian L. Stuart) (01/12/90)

In article <85YE02kP7duL01@amdahl.uts.amdahl.com> kp@amdahl.uts.amdahl.com (Ken Presting) writes:
>The divergence of the model from the real system means that for any
>person, at any time, the model would diverge significantly from that
>person's states (assuming that the brain is significantly chaotic, for the
>purpose of discussion).  So it's not just that the numerical model can't
>simulate me or you, it can't simulate *anybody*, *ever*.  So if we want to
>claim that the simulation is close enough to brain function to be
>simulated thought, then we have to show that the chaotic aspects of brain
>function are inessential to thought.
>

This is not what's really at issue here.  To simulate (or possess)
intelligence is not the same as to simulate someone who possesses intelligence.
We don't need to accurately simulate anyone.  The questions that are
significant here are: first, are the chaotic properties of the brain
necessary for intelligence?  If so, then what characteristics of the
brain's attractor are necessary?  If these characteristics are also
sufficient, then there is no reason that any system possessing the
same characteristics in its attractor will not also be intelligent.
If the attractor characteristics are not sufficient, then we have the
problem of finding out what else is necessary.

In general, just because small changes in the input to chaotic systems
can lead to qualitatively different behavior does not mean that that
behavior is unconstrained.  It is still constrained by the system's
attractor.  Simulating an existing intelligence is a red herring;
natural intelligent systems don't simulate others, so artificial
ones likewise need not.
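
Here is a toy Python illustration of that constraint (the logistic map again,
standing in for any chaotic system - not a claim about brains): two
trajectories started arbitrarily close together disagree wildly step by step,
yet the long-run statistics that the attractor imposes come out essentially
the same.

# Two chaotic trajectories: pointwise divergent, statistically constrained.
def trajectory(x, steps, r=4.0):
    points = []
    for _ in range(steps):
        x = r * x * (1.0 - x)
        points.append(x)
    return points

def histogram(points, bins=10):
    counts = [0] * bins
    for x in points:
        counts[min(int(x * bins), bins - 1)] += 1
    return counts

a = trajectory(0.3, 100000)
b = trajectory(0.3000001, 100000)
print(histogram(a))   # the two histograms come out nearly identical,
print(histogram(b))   # even though a[n] and b[n] disagree wildly for large n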

>BTW, my thanks to you and to the others in comp.ai who are participating
>in the philosophical discussions here.  You folks have helped me to
>clarify my ideas by your constructive and thoughtful comments.  This group
>is a good example of the solid intellectual value of Usenet.

Ditto.

Brian L. Stuart
Department of Computer Science
Purdue University

ercn67@castle.ed.ac.uk (M Holmes) (01/12/90)

Just a thought while hairs are being split on the difference between
thinking computers, thinking machines, and thinking hybrids of computers
and machines.

It seems to have been suggested that computers would need to be able to
manipulate the environment (basically have senses and have hands) in
order to do what we call thinking. I'm not sure I'd agree but I think
it's irrelevant anyway, for the following reasons.

As a thought experiment (which is all thinking computers/machines are at
the present time) suppose that we simulate a world within a computer
system. Then we build an artificial intelligence embedded within this
simulation and allow "it" a simulated ability to sense and manipulate
the simulated environment. This would seem to fulfill the criteria for a
hybrid computer/machine which can sense and manipulate the "real" world.
It would however simply be a program in a computer system. The point
being that both sensing and manipulation are simply a form of information
processing, which is what computers do anyway.

It could be argued that this would just be "simulated thinking" but it
isn't clear that this would be any different from the real thing.

 -- A Friend of Fernando Poo

ssingh@watserv1.waterloo.edu ($anjay "lock-on" $ingh - Indy Studies) (01/15/90)

In article <35@tygra.UUCP> jpp@tygra.UUCP (John Palmer) writes:
>
>Artificial neural nets will still be unable to solve hard problems 
>(patttern recognition, REAL language processing, etc) because they
>are implemented in silicon (usually as a virtual machine on top of
>a standard digital computer) and are therefore inherently inefficient.
>In theory (Church-Turing Thesis) it is possible for such problems to
>be solved by digital computers, but most of the hard problems are 
>intractable. We are very quickly reaching the limits of 
>speed of silicon devices.

The inherent inefficiency you attribute to digital computers may be due
in part to the Von Neumann bottleneck (See Hillis, W. Daniel, The Connection
Machine, MIT Press 1985). The strong version of the Church-Turing Thesis,
described in Hofstadter (Metamagical Themas, Bantam Books, 1985) implies
that a digital computer can, given enough time, solve ANY and ALL problems.
This was the intention of having such a general architecture for computers;
ie, if everything is done in the memory, nothing need be physically changed.
But when you try to program something as intensive as image or language
processing on such a general architecture, things quickly bog down because
the serial architecture of the computer, while capable of carrying out the
computations necessary, gets stuck in the bottleneck between processor and
memory. Image or language processing are problems that lend themselves
well to a parallel architecture because they can be broken down and solved
over many processors, providing a far greater information throughput than
is possible with a purely serial design. Serial machines are good for
simulation, because they are so open-ended, but as actual implementations
of intelligence, they are somewhat lean.
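
As a crude Python sketch of why such problems decompose so naturally (this is
just an illustration of mine, nothing like an actual Connection Machine
program): each row of an "image" can be filtered independently, so the work
farms out across several processors with essentially no communication
between them.

from multiprocessing import Pool

def smooth_row(row):
    # Trivial "image processing": average each pixel with its neighbors.
    return [(row[max(i - 1, 0)] + row[i] + row[min(i + 1, len(row) - 1)]) / 3.0
            for i in range(len(row))]

if __name__ == "__main__":
    image = [[(x * y) % 256 for x in range(512)] for y in range(512)]
    with Pool(4) as pool:                        # rows are independent, so
        smoothed = pool.map(smooth_row, image)   # four workers split the job
    print(len(smoothed), len(smoothed[0]))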

>The only hope of solving these hard problems is by developing devices 
>which take advantage of the laws of physics and that have a very
>strong structure/function relationship. 
>
>My point: We are not going to solve the hard problems of AI by 
>simply developing programs for our digital computers. We have to
>develope hardware that has a strong structure/function relationship.

This is why neural nets are the preferred mode of exploration today in
large parts of AI research. There does indeed exist a strong structure/
function relation between the NN's parallel design and the parallel
nature of the problems they are being built to solve.


-- 
$anjay "lock-on" $ingh
ssingh@watserv1.waterloo.edu

"A modern-day warrior, mean mean stride, today's Tom Sawyer, mean mean pride."

es@sinix.UUCP (Dr. Sanio) (02/06/90)

In article <4050@jarthur.Claremont.EDU> jwilkins@jarthur.Claremont.EDU (Jeff Wilkinson) writes:
>so why don't we make one that does "continiously aquire, test, and
>generalize"?  why don't we make one with reasoning imperfect and chaotic
>enough to simulate human behavior?  why not make a mammoth machine, a
>dynamic system so complex that it boggles the immagination, a self
>organizing system of such sclae that it no longer "computes", but instead
>manipulates fuzzy, vague, HUMAN-type thoughts symbolicly?  would this not be
>thought in a machine?
>
The problem is, IMHO, that we are still not sure what we are looking for.
As far as I know, nobody has yet given a valid answer to the question
"what is intelligence (thought)?".  We can identify some aspects - such as
pattern recognition, symbol creation and manipulation, some aspects of
logic - and vaguely try to combine them into a generalized model of thought.
In fact, we can simulate those isolated aspects more or less well in
algorithmic machines.  But even trying to combine them into a system with
the capabilities of a rodent exceeds our skills and the capabilities of our
machines.
Personally, I don't share the view that the human mind is something mysterious
which cannot be modelled or reproduced at all, for all time.  But I doubt
that we are much closer to that goal (which can be questioned for its
usefulness, btw, but that's a different topic) than the medieval alchemists
were when they modeled a human body from clay and treated it with various
substances, electricity (some experimented with static electricity!), etc.,
in order to give it the spirit of life.
If the human brain is comparable to our computers, it must be regarded (in
my opinion, and given our state of knowledge) as a machine which recursively
and steadily reprograms its code, data and even hardware (to stay inside the
metaphor).  (Even simple self-modifying code is frowned upon in programming,
as you probably know - we're largely unable to do it in a sufficiently
reliable way, so for us it's a widely banned technique.)
So far, we have neither understood the hardware (the interplay of the neurons,
the way of storing information, etc. - we have some basic knowledge and lots
of speculation about that), nor have we - beyond some initial steps - decoded
the firmware (the genetic information laid down in a single cell - that's why
the discussion about inheritance of intelligence frequently breaks out again
in this group and sci.psychology).  About the software, we are purely
speculating.
Like the alchemists, we have some intuition about what's going on, won by
introspection and observation, but no valid knowledge of what "intelligence"
is nor of how a brain works.  IMHO, the goal of building an "intelligent"
machine is - at least at present - pure megalomania.

>				 -=jefsoph/jeff wilkinson/wilky=-
>				-=jwilkins@jarthur.claremont.edu=-

regards, es		

ian@oravax.UUCP (Ian Sutherland) (02/07/90)

In article <907@athen.sinix.UUCP> es@athen.UUCP (Dr. Sanio) writes:
>In article <4050@jarthur.Claremont.EDU> jwilkins@jarthur.Claremont.EDU (Jeff Wilkinson) writes:
>>so why don't we make one that does "continiously aquire, test, and
>>generalize"?  why don't we make one with reasoning imperfect and chaotic
>>enough to simulate human behavior?  why not make a mammoth machine, a
>>dynamic system so complex that it boggles the immagination, a self
>>organizing system of such sclae that it no longer "computes", but instead
>>manipulates fuzzy, vague, HUMAN-type thoughts symbolicly?  would this not be
>>thought in a machine?

Maybe so, but why in the world would we want to build such a machine?
For people who don't have enough flesh-and-blood friends?  If I were
building a machine to help me with a task which needed to function
like a human (e.g. a robot to perform or supervise a sophisticated
task in a dangerous environment), I'd want it to have LESS of the
kinds of chaotic, fuzzy vagueness described above than a human.

>The problem is, IMHO, that we are still not sure what we are looking for.
>As far as I know, nobody has given, up to now, a valid answer on the 
>question "what is intelligence (thought)".

Indeed.  I don't see why such a definition is necessary, or even
helpful.  It seems to me that most of the useful work that gets done in
the area of AI happens when people stop trying to make a machine that
"thinks", whatever that means, and adopt a more concrete goal, like
trying to make a machine to do medical diagnoses.  I think the pursuit
of "intelligence" in the field of AI is very counterproductive.

>But I doubt 
>that we are much closer to that goal
[...]
>than the medieval alchemists when 
>they modeled a human body from clay and treated it by some substances, elec-
>tricity (some experimented with static electricity!) etc in order to give
>them the spirit of life.

The likening of AI to alchemy has got to be one of the most apt
metaphors I've ever heard ...
-- 
Ian Sutherland		ian%oravax.uucp@cu-arpa.cs.cornell.edu

Sans Peur

taplin@thor.acc.stolaf.edu (Brad Taplin) (02/07/90)

In article <1326@oravax.UUCP> ian@oravax.odyssey.UUCP (Ian Sutherland) writes:

>>>>mamoth, fuzzy, almost "humane" computer...
>..........but why in the world would we want to build such a machine?
>For people who don't have enough flesh-and-blood friends?  If I were
>building a machine to help me with a task which needed to function
>like a human (e.g. a robot to perform or supervise a sophisticated
>task in a dangerous environment), I'd want it to have LESS of the
>kinds of chaotic, fuzzy vagueness described above than a human.

Depends on the application. My first inspiration for getting into AI
was the movie "2001". There's an application in which I would prefer
the "face", if not the actual algorithms, to appear more "humane".
This "friend in a box" could be found and activated at will to play
the perfect sounding board. Her/his memory and attitude could be
specifically designed to encourage the user and help them forget it's just
a machine doing all the "multi-tasking" I imagine such a system doing
for me.

Perhaps we'd be wise to endow our "friendly computer" with two linked
minds, one for information processing and retrieval, and one for the
pleasant, clear, almost artful (if you believe a computer can produce
art) presentation of everything. The difference between such machines 
and today's computers could be as important (or un- if you like) as
that between a stark office and a warmly decorated one. Some won't
care, but most of us would find it much easier to work in the more
comfy environment. Sorry if I've offended your spartan nature.

>>...medeival monks and clay models.........
>> (some experimented with static electricity!) etc in order to give
>> them the spirit of life.

Who was that again? Some monk in the 1500s, wasn't he? Buried models, seeded
with sperm or something, forty days in horse dung. The products had a name?

>The likening of AI to alchemy has got to be one of the most apt
>metaphors I've ever heard ...

ditto... but what fun to know we're onto something, even if 
our current understanding is riddled with wild presumptions. 
-- 
########################################################################
  "...I've gotten two thousand, fourteen times smarter since then..."
B.R.T c/o Jan Aho St.Olaf Northfield MN 55057 taplin@thor.acc.stolaf.edu
########################################################################

ian@oravax.UUCP (Ian Sutherland) (02/08/90)

In article <11185@thor.acc.stolaf.edu> taplin@thor.stolaf.edu (Brad Taplin) writes:
>In article <1326@oravax.UUCP> ian@oravax.odyssey.UUCP (Ian Sutherland) writes:
>
>>>>>mamoth, fuzzy, almost "humane" computer...
>Depends on the application.

Indeed.  If you really DON'T have enough flesh and blood friends, and
you prefer your friends to be mammoth, fuzzy and vague ;-) you might
want such a machine.  The kind of application the original poster was
talking about was one in which you wanted the machine to be
INTELLIGENT.  For applications such as this, I claim you don't want to
mimic the attributes of humans which cloud their thinking.
-- 
Ian Sutherland		ian%oravax.uucp@cu-arpa.cs.cornell.edu

Sans Peur

andrew@dtg.nsc.com (Lord Snooty @ The Giant Poisoned Electric Head ) (02/08/90)

If we substitute "universe" for "chinese room", and "the laws of physics"
for "the rules of chinese language", are we to conclude that the cosmos
is a conscious entity?
-- 
...........................................................................
Andrew Palfreyman	andrew@dtg.nsc.com	Albania before April!

hougen@umn-cs.cs.umn.edu (Dean Hougen) (02/08/90)

In article <621@berlioz.nsc.com> andrew@dtg.nsc.com (Lord Snooty @ The Giant Poisoned Electric Head       ) writes:
>
>If we substitute "universe" for "chinese room", and "the laws of physics"
>for "the rules of chinese language", are we to conclude that the cosmos
>is a conscious entity?
>Andrew Palfreyman	andrew@dtg.nsc.com	Albania before April!

Who is it talking to?

Dean Hougen
--
"Summoning his cosmic powers,
 And glowing slightly from his toes,
 The psychic emination grows."   - Pink Floyd

ian@oravax.UUCP (Ian Sutherland) (02/08/90)

In article <621@berlioz.nsc.com> andrew@dtg.nsc.com (Lord Snooty @ The Giant Poisoned Electric Head       ) writes:
-
-If we substitute "universe" for "chinese room", and "the laws of physics"
-for "the rules of chinese language", are we to conclude that the cosmos
-is a conscious entity?

Why is this absurd (as your "Summary" line suggests)?
-- 
Ian Sutherland		ian%oravax.uucp@cu-arpa.cs.cornell.edu

Sans Peur

randy@ms.uky.edu (Randy Appleton) (02/08/90)

I just read the Jan Scientific American, the one with Searle and so on.
Here is the one burning question I have.  I think a satisfactory answer to this
will convince me that Searle is right, and strong-AI is wrong.  But until
then, I find Searle's argument to be imprecise gobbly-gook.

What exactly IS the difference between "understanding" and "the formal
manipulation of syntactic symbols"?  He uses those two phrases quite a lot,
and I think it is this difference that is his main argument.  BUT HE NEVER
SAYS WHAT IT IS! ARG!

Well, thanks
Randy

taplin@thor.acc.stolaf.edu (Brad Taplin) (02/08/90)

In article <1328@oravax.UUCP> ian@oravax.odyssey.UUCP (Ian Sutherland) writes:
>In article <11185@thor.acc.stolaf.edu> taplin@thor.stolaf.edu (Brad Taplin) writes:
>>In article <1326@oravax.UUCP> ian@oravax.odyssey.UUCP (Ian Sutherland) writes:
>>>>>>mamoth, fuzzy, almost "humane" computer...
>>Depends on the application.
 
>Indeed.  If you really DON'T have enough flesh and blood friends, and
>you prefer your friends to be mammoth, fuzzy and vague ;-) you might
>want such a machine.  The kind of application the original poster was
>talking about was one in which you wanted the machine to be
>INTELLIGENT...
SPOILER: Several screenfuls of some crazed AI terrorist's ideas!

Thanks for the reassuring ;-). I do have trouble finding trustworthy 
friends so I would want such a machine :-Y. But about INTELLIGENCE...

Understood. I counter your claim by suggesting that SOME (albeit few)
of those "fuzzy" characteristics can actually aid "thinking", speed
problem solving. Your straight algorithms might lead directly to
complex solutions, but I can imagine situations in which a quick,
reliable guess beats a somewhat slower but perfect response.

Ever read "The Art of Motorcycle Maintenance"? Persig suggests that when
his excessively single-minded pursuits of "Truth" in the mountains got
bogged down he'd clear his head and let his attention drift. The
critical eye remained open (wrote Persig) but the focus became whatever
struck the imagination. I've tried such "lateral thinking" and found it
often led to useful and accurate ideas my previous trains of thought
might have never reached. If indeed the possibilities are as complex and
ever-changing in real-world scenarios as I imagine them to be, then
could not practical AI benefit from the "alchemy" Persig suggests?

Imagine I've designed a computer to take over the responsibilities of an
air traffic control tower. Some panicked pilot radios that they need an
emergency landing NOW, yet the "independent" mind in my computer knows
damn well that recalculating precisely which planes go where on a
crowded New York Friday will probably take way too long. My computer
then divvies up its tasks into three: the first watches and prioritizes
everything, the second (top priority now) starts making "fuzzy"
educated guesses, and the third calculates, under a more methodical system,
the best possible solution.

Now, if the pilot needs an answer before program3 is done, program1
(judge, manager, communicator) takes the best program2 has offered so
far and passes it on, along with a rough estimate of its chances
of working, to all planes, vehicles, and people involved. While prog2
spools up new ideas of ever-greater complexity, prog1 keeps both the
thoughtful and the quick progs informed. Once prog3 has decided on the best
possible solution the pilot is informed, and if s/he thinks that ultimate
decision is still workable prog1 helps everyone execute the prog3 plan.
If not, then everything starts afresh and prog2 has a spool of untested
ideas waiting to be considered.
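
Here's a rough Python sketch of what I have in mind (every name and number in
it is made up, and a real tower system would look nothing like this): prog2
keeps a best-so-far guess ready, prog3 grinds toward the exact answer, and
prog1 hands back whichever is available when the deadline hits.

import threading, time, random

class Tower:
    def __init__(self):
        self.best_guess = None      # prog2's most promising guess so far
        self.final_plan = None      # prog3's exact answer, once computed
        self.lock = threading.Lock()

    def prog2_guesser(self):        # quick, "fuzzy" educated guesses
        while self.final_plan is None:
            guess = ("divert to runway %d" % random.randint(1, 4),
                     random.random())           # (plan, rough chance it works)
            with self.lock:
                if self.best_guess is None or guess[1] > self.best_guess[1]:
                    self.best_guess = guess
            time.sleep(0.01)

    def prog3_solver(self):         # slow, methodical, best-possible plan
        time.sleep(2.0)             # stands in for the heavy recalculation
        self.final_plan = "full resequencing of all traffic"

    def prog1_manager(self, deadline):         # judge / manager / communicator
        threading.Thread(target=self.prog2_guesser, daemon=True).start()
        threading.Thread(target=self.prog3_solver, daemon=True).start()
        time.sleep(deadline)
        if self.final_plan is not None:
            return "exact plan: " + self.final_plan
        with self.lock:
            plan, chance = self.best_guess or ("hold position", 0.0)
        return "provisional plan: %s (est. chance %.2f)" % (plan, chance)

print(Tower().prog1_manager(deadline=0.5))   # pilot needs an answer NOW
print(Tower().prog1_manager(deadline=3.0))   # given time, the exact plan wins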

One might argue that all three algorithms in this crisis control
situation should be seen as arrow-straight, but I'm still under the
impression that the quick-thinking prog2 must work most efficiently
by being "softer", not just simpler, than prog3. Well-placed random
variables could result in a very workable solution, even if it ain't
the best, in this painfully strict timeframe.

Tell me if (and why) I'm barking up a felled tree.
-- 
########################################################################
"...I've gotten two thousand fourteen times smarter since then..." -MCP
Brad Taplin, alum, magna sin laude, afloat?  taplin@thor.acc.stolaf.edu
########################################################################

andrew@dtg.nsc.com (Lord Snooty @ The Giant Poisoned Electric Head ) (02/09/90)

In article <1329@oravax.UUCP>, ian@oravax.UUCP (Ian Sutherland) writes:
> I wrote:
> -If we substitute "universe" for "chinese room", and "the laws of physics"
> -for "the rules of chinese language", are we to conclude that the cosmos
> -is a conscious entity?
> Why is this absurd (as your "Summary" line suggests)?		 Sans Peur

Because, as Dean points out, there is no external referent to "universe".
However, if we now consider a microcosm which obeys the laws of physics
(and we are not proponents of Mach) perhaps the absurdity goes away?
-- 
...........................................................................
Andrew Palfreyman	andrew@dtg.nsc.com	Albania before April!

asanders@adobe.COM (02/09/90)

>... are we to conclude that the cosmos is a conscious entity?

>>Who is it talking to?

Itself, perhaps?

kp@uts.amdahl.com (Ken Presting) (02/12/90)

(This article is very long.  I hope it's useful.)


In article <14069@s.ms.uky.edu> randy@ms.uky.edu (Randy Appleton) writes:
>I just read the Jan Scientific American, the one with Searle and so on.
>Here is the one burning question I have.  I think a satisfactory answer to this
>will convince me that Searle is right, and strong-AI is wrong.  But until
>then, I find Searle's argument to be imprecise gobbly-gook.
>
>What exactly IS the difference between "understanding" and "the formal
>manipulation of syntatic symbols"?  He uses those two phrases quite alot,
>and I think it is this difference that is his main argument.  BUT HE NEVER
>SAYS WHAT IT IS! ARG!

I think this is the $64K question.  Here is part of the answer:

The simplest way to define "understanding" is "knowledge of meanings".
That is, you understand English because given (most) any English
expression, you know what it means.  This much is common sense.  As the
definition becomes more technical and precise, it also becomes more
controversial (because the technicalities may not match common sense).
We can analyze the Chinese Room well enough with just this definition.

Now, let me say a few words about knowledge.  There is much less agreement
about the definition of knowledge than about the definition of semantics.
The most popular definition is "true beliefs, accompanied by reasons".
The best example is mathematical knowledge - we may well believe a
conjecture such as the 4-color map theorem (which is true), but until we
find the proof, we can't call it knowledge.  Another important distinction
is between knowledge and know-how.  Knowledge properly describes the
relation between a person, his beliefs, and reality (ie the truth of the
beliefs).  Know-how depends only on abilities.  You can "know how" to
hit a baseball, without knowing any physics or physiology.

Finally, let's look at the case where Searle memorizes the rules and
passes the Turing test without the books.  Searle is correct to say that
he still does not know Chinese.  Anyone who knows both English and Chinese
must be able to translate from one into the other, but Searle cannot.
What he has learned by memorizing the rules is how to respond to Chinese
questions.  So he has some Chinese know-how, but no knowledge of Chinese.
So I think the Chinese Room example has a real point.

If asked in Chinese about the meaning of a Chinese phrase, Searle would
no doubt be able to respond correctly.  This might suggest that he does
in fact "understand" Chinese.  But notice that if his questioner should
ask Searle his name, or the time of day, or the color of his tie, he
would *not* be able to answer correctly.  This is because Searle's rules
are limited to procedures for manipulating Chinese symbols, and do not
include procedures for looking at his watch or his tie.  By learning the
rules, Searle knows that the correct response to the Chinese question,
"What is a tie?" is the Chinese answer "A strip of cloth worn around the
neck."  But he does not know that the Chinese phrase "your tie" denotes
the strip of cloth around his own neck.  That is why he can correctly
claim that by learning the rules he does not learn Chinese.

Searle is correct that the rules contain no information about Chinese
semantics.  But he is wrong about *why* that information is absent.  He
thinks that programs have no semantics, which is an obvious mistake.
It is not because the Chinese responses are programmed that they have
no semantics.  Rather, the Turing test itself is too easy.  Turing did
not insist that the conversation in the imitation game include references
to events outside the dialogue.  The Turing test (as most people think of
it) can be passed by a program that uses no semantic information.

Searle's argument revolves around the claim that information of a certain
type - semantic information - cannot be learned by memorizing rules.
Let's look more closely at what Searle can learn by memorizing rules.

He would not learn the semantics of Chinese, but he might learn the syntax
of Chinese.  If he were asked in Chinese whether some expression were
grammatically correct, he would apply the rules and produce the correct
answer.  If he were asked in English about the same Chinese phrase, he
would _examine_ the rules, and perhaps find no rule which applies to the
expression.  Searle could infer that the expression is ungrammatical, on
the assumption that the rules cover all valid Chinese expressions.  If the
rules cover ungrammatical expressions as well, there would probably be
a small set of responses to the effect of "I don't understand", and an
examination of the rules would exhibit a large group of expressions for
which the "I don't understand" symbol was the prescribed response.
Depending on the sophistication of the rules, inferring the syntax of
Chinese might be easy or hard, but by definition the rules contain all
the information necessary to infer a complete specification of Chinese
syntax.  Since information content is invariant under inference, by
learning the rules that enable him to pass the Chinese Turing Test, Searle
_would_ learn Chinese syntax, and could apply that knowledge in English
conversations (once he has performed the necessary inferences, no trivial
task).

Now suppose that Searle is provided with rules which not only allow him
to pass the standard Turing test, but also enable him to answer Chinese
questions about the color of his tie, and all the other everyday queries
he might encounter living in China.  When he is given the Chinese question
"What color is your tie?" the rules will no doubt direct him to look at
his tie, note its color, and select a Chinese symbol appropriate to that
color.  Clearly Searle is on his way to learning the semantics of the
Chinese color vocabulary.  The path from here to complete knowledge of
Chinese semantics is difficult.  Language-learning problems related to
this have been studied by philosophers under the name "radical translation"
or "radical interpretation".  Armed with the rules for manipulating the
symbols and the procedures for assigning symbols to observable qualities,
Searle would be well prepared for the radical translation process.
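
Just to make the distinction concrete, here is a toy sketch in Python (the
tokens, the rule entries, and the look_at_my_tie stub are all inventions of
mine, not anything Searle describes): most rules map symbols to symbols, the
grounded ones direct an observation of the world, and anything uncovered
falls through to the "I don't understand" response mentioned above.

# Toy "rulebook": the symbolic entries map symbols to symbols with no
# reference to anything outside the dialogue.
SYMBOLIC_RULES = {
    "WHAT-IS-A-TIE?": "A-STRIP-OF-CLOTH-WORN-AROUND-THE-NECK",
    "HELLO":          "HELLO-TO-YOU",
}

def look_at_my_tie():
    # Stand-in for an observation procedure: not a symbol manipulation but
    # an instruction to go look at the world and report what is seen.
    return "BLUE"

# Grounded entries: the "rule" is a procedure that consults the environment.
GROUNDED_RULES = {
    "WHAT-COLOR-IS-YOUR-TIE?": look_at_my_tie,
}

def respond(question):
    if question in SYMBOLIC_RULES:
        return SYMBOLIC_RULES[question]       # pure symbol shuffling
    if question in GROUNDED_RULES:
        return GROUNDED_RULES[question]()     # symbol tied to an observation
    return "I-DO-NOT-UNDERSTAND"              # the catch-all response

print(respond("WHAT-IS-A-TIE?"))
print(respond("WHAT-COLOR-IS-YOUR-TIE?"))
print(respond("GLORP-FNORD"))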

So if we add the appropriate proviso to the Turing test, requiring that
the system not only respond coherently in kind to Chinese questions,
but also display native competence in Chinese descriptions of its physical
environment, then by learning the same rules Searle _would_ learn Chinese.
Or at least, he would have enough information to figure out Chinese.  And
that knowledge of Chinese would be part of Searle's own knowledge, not a
part of some "second personality". At this point, I think I've dismembered
Searle's original example, but I should anticipate a probable objection:
(make that several objections)

Objection 1:
Searle is a smart guy, speaks a couple of languages, knows about radical
translation, and in general is already a thinking thing before he memorizes
the rules for Chinese.  Not so for a computer running the same program.
The strong AI idea is that just by loading the rules into the machine, the
machine will understand Chinese, that is, know the meaning of Chinese
expressions.  But a computer has none of the pre-existing talents that can
be attributed to Searle.  So what if the program contains all the
information about Chinese syntax and semantics?  The computer can't perform
a radical translation into a language it already speaks, because it doesn't
speak any language at all - and don't say it speaks machine language, there
aren't even any declarative sentences in machine language.  Plus the
computer would have to be programmed to perform a radical translation, and
off you go into an infinite regress.  What Searle has before the radical
translation is just more know-how about Chinese syntax and semantics, so
when the rules are programmed into a computer, all you'll get is a
mechanized rulebook, not a thinking thing.

Reply: Mechanized, yes; rulebook, no.
       If you can find any symbols inside a computer, you're looking at
       it through a hermeneutic hall of mirrors.

Objection 2:
It doesn't matter that programming languages have semantics.  What you
need to do is get semantics into the *data* - the output of the machine.

Reply: It's the implementation that forces semantics onto the data.
       Nobody claims that a program that's not running can think about
       anything.

Objection 3:
And what about feelings/emotions/sensations/qualia/consciousness?

Reply: You define 'em, I'll argue about 'em.
       (Actually I have some definitions of my own for these concepts,
        but if I told you, that would start an even bigger argument)

Objection 4:
Ah, but what about the reasons for beliefs?  Searle has good reasons to
believe his answers to questions about Chinese syntax and semantics.  The
computer has no choice but to answer as it is programmed.  Pressed to
explain his answers, Searle could cite the expertise of the rule-writers
and his own success in applying the rules.  Searle has real experience
of success with the rules, and real experience of the author's reliability.
The computer has no such background, and therefore has no knowledge.

Reply: Okay, so the computer has only opinions.  I thought you wanted a
       thinking machine.  Now you want Athena, sprung fully-formed from
       Zeus's brow.  How is it that *you* know what English words mean?
       No - I mean *before* you learned about linguistics.

Objection 5:
Ever heard of the frame problem?  To suppose that a set of rules could
specify native competence in syntactic performance is one thing.  But
such semantic performances as forming perceptual judgements and reporting
them are quite another matter.  You might as well build an android, and
you might have to.

Reply: The answer to the frame problem is to use a smaller frame. 24 x 80
       is about right.

***********************************************************************

I'd better stop wisecracking before I get into trouble.

So far, I've talked about understanding, but not discussed "formal
symbol manipulation" at all.  That is (perhaps surprisingly) MUCH more
difficult.  Common sense notions of understanding and knowledge are
good enough to show what's happening in the Chinese Room, but we will
need very precise concepts of formal symbol, symbol token, semantics,
operation, program, implementation, and interpretation, before we can
coherently discuss symbol manipulation.  (The problem is getting your
_manos_ on an _objectus_abstractus_)

All the objections here depend on the difference between people and
computers.  The Chinese Room is easy because it deals only with a person's
knowledge and abilities.  I won't be able to say much about the objections
above until I've made some points about computers, but I promise I'll get
to them (supposing anybody cares).  I didn't want to leave the impression
that I was unaware of the issues.

I'll be thinking furiously and typing spasmodically for a day or two.
In the meantime, I'd be delighted to get any feedback whatsoever on this
article.  I think it's pretty slick.

Ken Presting

sn13+@andrew.cmu.edu (S. Narasimhan) (02/12/90)

> Excerpts from netnews.comp.ai: 8-Feb-90 Re: Can Machines Think? Randy
> Appleton@ms.uky.ed (549)

> I just read the Jan Scientific American, the one with Searle and so on.
> Here is the one burning question I have.  I think a satisfactory answer to this
> will convince me that Searle is right, and strong-AI is wrong.  But until
> then, I find Searle's argument to be imprecise gobbly-gook.

> What exactly IS the difference between "understanding" and "the formal
> manipulation of syntatic symbols"?  He uses those two phrases quite alot,
> and I think it is this difference that is his main argument.  BUT HE NEVER
> SAYS WHAT IT IS! ARG!

> Well, thanks
> Randy
 
Surprisingly, none of the postings on this subject ever dealt with this
question directly. However, I believe that a clear distinction exists
between symbol manipulation and "understanding". 
      I would say a system "understands" iff, given a certain event
in some representation, the system can retrieve from its memory (this
can be called the "case-base") a previous event which is "related" to
the current event in some way.  The case-base is a collection of
previous experiences either manually input or acquired through learning.
The above-mentioned relation between events can be quite subjective,
just as responses to a certain question can be quite subjective. It is
possible to design a test, which I call the case-retrieval test, to
determine whether a system understands a certain event. Must the
system be "intelligent" to "understand" things? This depends on what we
mean by "intelligence". However, it is possible to deal with the
question of whether a system "understands" without defining
"intelligence". The system, however, should be able to reason by what is
called "analogical reasoning" or, more generally, "case-based
reasoning".
           On the other hand, a system which can only manipulate symbols,
or rather, which can give "good" responses to input symbols, need not
"understand" at all.
            On the basis of the above, I agree with Searle that the
person in the Chinese room need not *necessarily* understand the
question. However, I don't agree with him when he says "It is
impossible to build an 'understanding' system which manipulates only
symbols."
            Note that he also doesn't define what semantics is.  I
believe that even semantics is basically a group of symbols. (If
interested, see my Feb. 7 posting in comp.ai titled "Semantics are
symbols".)

        You might wonder on what basis I say that case retrieval is
"understanding". I'll give an example. Do you understand "x"?  If not,
can you say why you don't understand "x"?  Is it because it is
meaningless?  But why is it "meaningless"?
        Do you "understand" the following group of symbols : "John
walked yesterday."
         Do you "understand" this : " xyzgf#$  ran  yesterday ".
         Do you "understand" this : " John walked $$3ewr"
         Do you "understand" this : " John ee##2323 yesterday."
         Do you "understand" this : " $%#$@ #@@@@ FGGFd "
I think you will notice the difference in the degree of  your
"understanding" the above sentences.

For the first one, you were able to retrieve a "complete" case from
your memory with respect to object, action, time, etc.  However, this
isn't the extreme case. Suppose you had a friend whose name was John and
you actually saw him walking yesterday; then the above sentence might
have retrieved that very case.  I call this exact matching "knowing" and
the particular case "knowledge", cf. rules in an expert system. Coming
back to the above examples, you'll notice that you understand them
less and less, i.e., retrieve more and more inexact cases, until you reach
the last one, where you may not be able to retrieve any case at all. You'd
say that you don't understand the last sentence at all, but do
understand the others partially.
       Interestingly, note that if you "know" something then you don't
need to understand it. For example, do you "understand" that 2 x 2 =
4?
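
Here is a small Python sketch of the kind of case-retrieval test I mean (the
case base and the crude token-overlap measure are placeholders of my own
choosing): the score of the best-matching case serves as the "degree of
understanding" of the input.

# A toy case base of previously experienced "events".
CASE_BASE = [
    "John walked yesterday",
    "Mary ran home",
    "John ate lunch",
]

def overlap(a, b):
    # Crude relatedness measure: fraction of the input's tokens that also
    # appear in the stored case.
    a_tokens, b_tokens = set(a.lower().split()), set(b.lower().split())
    return len(a_tokens & b_tokens) / max(len(a_tokens), 1)

def retrieve(event):
    # Return the best-matching case and its score (the "degree of understanding").
    best = max(CASE_BASE, key=lambda case: overlap(event, case))
    return best, overlap(event, best)

for sentence in ["John walked yesterday",
                 "xyzgf#$ ran yesterday",
                 "John walked $$3ewr",
                 "$%#$@ #@@@@ FGGFd"]:
    print(sentence, "->", retrieve(sentence))
# Scores fall from 1.0 (an exact case: "knowing") toward 0.0 (no case retrieved).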

Narasimhan.

weyand@csli.Stanford.EDU (Chris Weyand) (02/12/90)

kp@uts.amdahl.com (Ken Presting) writes:


::Finally, let's look at the case where Searle memorizes the rules and
::passes the Turing test without the books.  Searle is correct to say that
::he still does not know Chinese.  Anyone who knows both English and Chinese
::must be able to translate from one into the other, but Searle cannot.
::What he has learned by memorizing the rules is how to respond to Chinese
::questions.  So he has some Chinese know-how, but no knowledge of Chinese.
::So I think the Chinese Room example has a real point.

::If asked in Chinese about the meaning of a Chinese phrase, Searle would
::no doubt be able to respond correctly.  This might suggest that he does
::in fact "understand" Chinese.  But notice that if his questioner should
::ask Searle his name, or the time of day, or the color of his tie, he
::would *not* be able to answer correctly.  This is because Searle's rules
::are limited to procedures for manipulating Chinese symbols, and do not
::include procedures for looking at his watch or his tie.  By learning the
::rules, Searle knows that the correct response to the Chinese question,
::"What is a tie?" is the Chinese answer "A strip of cloth worn around the
::neck."  But he does not know that the Chinese phrase "your tie" denotes
::the strip of cloth around his own neck.  That is why he can correctly
::claim that by learning the rules he does not learn Chinese.

No!  Searle's assumption is that the room answers "all" questions.  This
includes "what is your name?"  If the questioner asked Searle for *his* name
Searle would reply "Searle" (unless he asked in Chinese).  If he asked in
Chinese Searle would manipulate the book, in which case the CR would respond
with a name.  Remember there are obviously two agents here:
Searle and the Chinese speaker.  Clearly a system couldn't pass
the Turing Test if it couldn't answer questions that would (weakly) imply self-
awareness.  But Searle's assumption is that the CR does pass the TT!

Searle, rather than marvel at such a machine that could pass the TT, would
simply scoff and say "yeah, but it's still just a simulation".

::Searle is correct that the rules contain no information about Chinese
::semantics.  But he is wrong about *why* that information is absent.  He
::thinks that programs have no semantics, which is an obvious mistake.
::It is not because the Chinese responses are programmed that they have
::no semantics.  Rather, the Turing test itself is too easy.  Turing did
::not insist that the conversation in the imitation game include references
::to events outside the dialogue.  The Turing test (as most people think of
::it) can be passed by a program that uses no semantic information.

Absolutely not!  The Turing Test, if anything, is too hard.  Turing even
acknowledged that himself.  Turing didn't insist on anything in particular
about what the interrogator should ask.  He simply said that rather than
ask the question "could a machine think?" we should ask whether it can fool
us into believing it is a person.  Clearly, for us to believe an agent was
a person we would have to ask it all kinds of questions that referred to
various events.  We'd ask how it felt at the moment, whether it likes to read,
ask it to tell us a romantic story, to explain what it means to be conscious,
whether or not it has free will, and why, etc.

::Searle's argument revolves around the claim that information of a certain
::type - semantic information - cannot be learned by memorizing rules.
::Let's look more closely at what Searle can learn by memorizing rules.

::He would not learn the semantics of Chinese, but he might learn the syntax
::of Chinese.  If he were asked in Chinese whether some expression were
::grammatically correct, he would apply the rules and produce the correct
::answer.  If he were asked in English about the same Chinese phrase, he
::would _examine_ the rules, and perhaps find no rule which applies to the
::expression.  Searle could infer that the expression is ungrammatical, on
::the assumption that the rules cover all valid Chinese expressions.  If the
::rules cover ungrammatical expressions as well, there would probably be
::a small set of resonses to the effect of "I don't understand", and an
::examination of the rules would exhibit a large group of expressions for
::which the "I don't understand" symbol was the prescribed response.
::Depending on the sophistication of the rules, inferring the syntax of
::Chinese might be easy or hard, but by definition the rules contain all
::the information necessary to infer a complete specification of Chinese
::syntax.  Since information content is invariant under inference, by
::learning the rules that enable him to pass the Chinese Turing Test, Searle
::_would_ learn Chinese syntax, and could apply that knowledge in English
::conversations (once he has performed the necessary inferences, no trivial
::task).

Clearly Searle has internalized more than the rules of Chinese syntax.
The CR can carry on a conversation well enough to pass the Turing Test.
This obviously takes more intelligence than a natural language parsing
system and a look-up table.  In effect Searle has internalized an entire
brain/mind!  A ridiculous thought, even in principle.  Searle has grossly
misled the reader who buys into this argument that since all of the system
is in him, and since he doesn't understand, there is no understanding.

::Now suppose that Searle is provided with rules which not only allow him
::to pass the standard Turing test, but also enable him to answer Chinese
::questions about the color of his tie, and all the other everyday queries
::he might encounter living in China.  When he is given the Chinese question
::"What color is your tie?" the rules will no doubt direct him to look at
::his tie, note its color, and select a Chinese symbol appropriate to that
::color.  Clearly Searle is on his way to learning the semantics of the
::Chinese color vocabulary.  The path from here to complete knowledge of
::Chinese semantics is difficult.  Language-learning problems related to
::this have been studied by philosophers under the name "radical translation"
::or "radical interpretation".  Armed with the rules for manipulating the
::symbols and the procedures for assigning symbols to observable qualities,
::Searle would be well prepared for the radical translation process.

I think you are confused about the CR situation.  Searle is only manipulating
the signs and symbols of the book.  The book, with Searle manipulating it,
is another agent that happens to speak Chinese.  If the interrogators asked
for the color of the agent's tie the CR would certainly not respond with the
color of Searle's tie.  The questions are not aimed at Searle.
This is all part of Searle's sophistry.  He wants us to identify with him,
the manipulator (the CPU), and not with the Chinese speaker embodied within
the book.  Obviously Searle doesn't understand a word of Chinese; Searle
doesn't have to convince me of that.

In Dennett and Hofstadter's "The Mind's I" Searle's article is included
with comments from Hofstadter and Dennett.  Their rebuttal to Searle is
very good and if you haven't read it I would refer you to it.  I think you'll
find it very interesting.

::So if we add the appropriate proviso to the Turing test, requiring that
::the system not only respond coherently in kind to Chinese questions,
::but also display native competence in Chinese descriptions of its physical
::environment, then by learning the same rules Searle _would_ learn Chinese.
::Or at least, he would have enough information to figure out Chinese.  And
::that knowledge of Chinese would be part of Searle's own knowledge, not a
::part of some "second personality". At this point, I think I've dismembered
::Searle's original example, but I should anticipate a probable objection:
::(make that several objections)

We don't need to add any proviso.  We can ask whatever questions we want.
There were never any constraints on the questions; hence the power of the
test.
It's possible that Searle could learn Chinese if he had some way of relating
the symbols coming into the room to the world.  But more likely Searle
would sit in the room for the rest of his life without ever knowing what he
was doing.  Searle is simply a symbol manipulator.  



::I'll be thinking furiously and typing spasmodically for a day or two.
::In the meantime, I'd be delighted to get any feedback whatsoever on this
::article.  I think it's pretty slick.

::Ken Presting

Very interesting ideas I'll be reading.


--Chris Weyand
--weyand@csli.Stanford.Edu

ted.kihm@canremote.uucp (TED KIHM) (02/12/90)

ru>What exactly IS the difference between "understanding" and "the formal
ru>manipulation of syntatic symbols"?
ru>BUT HE NEVER SAYS WHAT IT IS! ARG!

It's that little bit of the ineffable that makes us different
from machines!  Perfectly legitimate argument.

In any case, despite his rambling on, Searle really only makes one
point.  Searle is stoutly refuting the proposition that ALL computer
programs are intelligent.  Now that we've been enlightened to the fact
that "Hello World" does not constitute an intelligent entity, let's get
on with it!
---
 ~ DeLuxe 1z11a18 #2979  If I had finished this Tagline,
 ~ QNet 2.04: NorthAmeriNet: Sound Advice BBS ~ Gladstone ~ MO

kp@uts.amdahl.com (Ken Presting) (02/13/90)

In article <12214@csli.Stanford.EDU> weyand@csli.Stanford.EDU (Chris Weyand) writes:
>kp@uts.amdahl.com (Ken Presting) writes:
>::Finally, let's look at the case where Searle memorizes the rules and
>::passes the Turing test without the books.
>
>::. . . notice that if his questioner should
>::ask Searle his name, or the time of day, or the color of his tie, he
>::would *not* be able to answer correctly.
>
>No!  Searle's assumption is that the room answers "all" questions.  This
>includes "what is  your name?"  If the questioner asked Searle for *his* name
>Searle would reply "Searle" (unless he asked in Chinese).  If he asked in
>Chinese Searle may manipulate the book in which case the CR would respond with
>a name.  Remember there are obviously two agents here;
>Searle and the Chinese speaker.  Clearly a system couldn't pass
>the Turing Test if it couldn't answer questions that would (weakly) imply self-
>awareness.  But Searle's assumption is that the CR does pass the TT!

It is not obvious to Searle that there are two agents!  I would like to
avoid discussing whether he is right about that, because I think we can
make real progress if we stick to some simpler issues first.

Searle has carefully separated the CR example from the rest of his
argument about AI, and I want to follow him in that.  The CR does a great
job of splitting out one human capacity from the rest of thinking.  Searle
wants to focus on learning meanings, which is fine by me.  So let's
consider only what Searle (the guy who speaks English) learns from the
rulebooks.

>::Searle is correct that the rules contain no information about Chinese
>::semantics.  But he is wrong about *why* that information is absent.  He
>::thinks that programs have no semantics, which is an obvious mistake.
>::It is not because the Chinese responses are programmed that they have
>::no semantics.  Rather, the Turing test itself is too easy.  Turing did
>::not insist that the conversation in the imitation game include references
>::to events outside the dialogue.  The Turing test (as most people think of
>::it) can be passed by a program that uses no semantic information.
>
>Absolutely not!  The Turing Test if anything is too hard.  Turing even
>acknowledged that himself.  Turing didn't insist on anything in particular
>about what the interrogator should ask.  He simply said that rather than
>ask the question "could a machine think"  we should ask whether it can fool
>us into believing it is a person.  Clearly for us to believe an agent was
>a person we would have to ask it all kinds of questions that referred to 
>various events.  We'd ask how they felt at the moment, if they like to read,
>to tell us a romantic story, to explain what it means to be conscious, 
>whether or not it had free will, why? etc.

The questions you suggest are perfect examples of what I mean
about the usual idea of the Turing test being too easy.  But you do
have a good point about the Turing test being too hard.  I agree that in
some respects it is too hard.

When you suggest questions such as "Do you like to read", you allow for
fixed responses.  Now, none of us programmers would be particularly
impressed just because somebody programmed a computer to tell the time.
But check this out:  No "formal symbol manipulator" can tell the time.
A clock, even a clock chip, is not a formal symbol, and reading a clock
is not a formal manipulation.  (I swiped this time-of-day example from
somebody here on the net, but I've forgotten who)
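
To make the "fixed responses" point concrete, here is a minimal sketch in C
(my own illustration, not anything from Searle's paper or Turing's): every
answer is a function of the input symbols alone, so a question whose true
answer depends on the world outside the table can only ever get a canned
reply.

/* Sketch of a "fixed response" program: a pure symbol manipulator that
   maps input strings to output strings by table lookup.  Nothing here can
   answer "What time is it?" truthfully, because no entry depends on
   anything outside the symbols themselves. */

#include <stdio.h>
#include <string.h>

struct rule {
    const char *question;
    const char *answer;
};

static const struct rule rulebook[] = {
    { "Do you like to read?",    "Yes, mostly philosophy." },
    { "What color is your tie?", "Blue." },        /* fixed; may be false */
    { "What time is it?",        "About noon." },  /* fixed; usually false */
};

static const char *respond(const char *question)
{
    size_t i;

    for (i = 0; i < sizeof rulebook / sizeof rulebook[0]; i++)
        if (strcmp(question, rulebook[i].question) == 0)
            return rulebook[i].answer;
    return "I don't understand the question.";
}

int main(void)
{
    puts(respond("Do you like to read?"));
    puts(respond("What time is it?"));    /* same canned answer at 3 a.m. */
    return 0;
}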


>::He would not learn the semantics of Chinese, but he might learn the syntax
>::of Chinese.
>
>Clearly Searle has internalized more than the rules of Chinese syntax.
>The CR can carry on a conversation well enough to pass the Turing Test.
>This obviously takes more intelligence than a natural language parsing
>system and look-up table.   . . .

True, but slow down!  Searle has not admitted that he would learn
*anything* by memorizing the rules.  At this stage, we are only talking
about knowledge of language.  We'll add other knowledge to the argument
later.


>::Now suppose that Searle is provided with rules which not only allow him
>::to pass the standard Turing test, but also enable him to answer Chinese
>::questions about the color of his tie, and all the other everyday queries
>::he might encounter living in China.  When he is given the Chinese question
>::"What color is your tie?" the rules will no doubt direct him to look at
>::his tie, note its color, and select a Chinese symbol appropriate to that
>::color.  Clearly Searle is on his way to learning the semantics of the
>::Chinese color vocabulary.  The path from here to complete knowledge of
>::Chinese semantics is difficult.  Language-learning problems related to
>::this have been studied by philosophers under the name "radical translation"
>::or "radical interpretation".  Armed with the rules for manipulating the
>::symbols and the procedures for assigning symbols to observable qualities,
>::Searle would be well prepared for the radical translation process.
>
>I think you are confused about the CR situation.  Searle is only manipulating
>the signs and symbols of the book.  The book with Searle manipulating it
>is another agent that happens to speak Chinese.  If the interrogators asked
>for the color of the agent's tie the CR would certainly not respond with the
>color of Searle's tie.  The questions are not aimed towards Searle.
>This is all part of Searle's sophistry.  He wants us to identify with him
>the manipulator (the CPU) and not with the Chinese Speaker embodied within the
>book.  Obviously Searle doesn't understand a word of Chinese; he doesn't
>have to convince me of that.

I'm trying to convince you that Searle *does* understand Chinese!  Or at
least, from the right kind of rulebooks, he could figure it out.  This
means that if somebody holds up the Chinese character for blue, and says
"What does this mean" in English, Searle will be able to say "Blue", and
be able to explain his reasons for thinking that the character means blue.

Given the two-agents-in-one-body point of view, I can see how you would
find some ambiguity in the question "What color is your tie?".  So turn
it around.  Suppose the interrogator asks "What color is MY tie?" (in
Chinese, of course).  Plug that into the paragraph above, and you should
see my point.


>In Dennett and Hofstadter's "The Mind's I" Searle's article is included
>with comments from Hofstadter and Dennett.  Their rebuttal to Searle is
>very good and if you haven't read it I would refer you to it.  I think you'll
>find it very interesting.

I have read it, thank you.  I think Dennett does a little better than
Hofstadter;  Searle's terminology is more familiar to philosophers.


>::So if we add the appropriate proviso to the Turing test, requiring that
>::the system not only respond coherently in kind to Chinese questions,
>::but also display native competence in Chinese descriptions of its physical
>::environment, then by learning the same rules Searle _would_ learn Chinese.
>::Or at least, he would have enough information to figure out Chinese.  And
>::that knowledge of Chinese would be part of Searle's own knowledge, not a
>::part of some "second personality". At this point, I think I've dismembered
>::Searle's original example, but I should anticipate a probable objection:
>::(make that several objections)
>
>We don't need to add any proviso.  We can ask whatever questions we want.
>There were never any constraints on the questions; hence the power of the
>test.
>It's possible that Searle could learn Chinese if he had some way of relating
>the symbols coming in to the room with the world.  But more likely Searle
>would sit in the room for the rest of his life without ever knowing what he
>was doing.  Searle is simply a symbol manipulator.  

The proviso is just that the right kind of questions do get asked.

If Searle can tell the time, he's not just a symbol manipulator.

>Very interesting ideas I'll be reading.
>--Chris Weyand
>--weyand@csli.Stanford.Edu

Thanks for taking the time to comment.

lee@uhccux.uhcc.hawaii.edu (Greg Lee) (02/13/90)

From article <898D02hl87rd01@amdahl.uts.amdahl.com>, by kp@uts.amdahl.com (Ken Presting):
" ... But notice that if his questioner should
" ask Searle his name, or the time of day, or the color of his tie, he
" would *not* be able to answer correctly. ...

Yes, he would:

	name?: (in Chinese) Hao Wang.
	time?: (in Chinese) I'm not wearing my watch.
	tie color?: (in Chinese) Green.

				Greg, lee@uhccux.uhcc.hawaii.edu

kp@uts.amdahl.com (Ken Presting) (02/13/90)

In article <6573@uhccux.uhcc.hawaii.edu> lee@uhccux.uhcc.hawaii.edu (Greg Lee) writes:
>From article <898D02hl87rd01@amdahl.uts.amdahl.com>, by kp@uts.amdahl.com (Ken Presting):
>> ... But notice that if his questioner should
>> ask Searle his name, or the time of day, or the color of his tie, he
>> would *not* be able to answer correctly. ...
>
>Yes, he would:
>
>	name?: (in Chinese) Hao Wang.
>	time?: (in Chinese) I'm not wearing my watch.
>	tie color?: (in Chinese) Green.

I'm not sure what you have in mind.

If the answers are false, Chinese interrogators will know Searle is faking.
Searle may not know the difference between "Hao Wang" and "John Searle"
(printed in Chinese characters), but the audience would.  When I said
"answer correctly" I meant "make a statement which is true", not just
"make a statement that is meaningful Chinese, relevant to the topic of
the question".

radford@ai.toronto.edu (Radford Neal) (02/13/90)

In article <d74702yL871701@amdahl.uts.amdahl.com> kp@amdahl.uts.amdahl.com (Ken Presting) writes:

>Searle has carefully separated the CR example from the rest of his
>argument about AI, and I want to follow him in that.  The CR does a great
>job of splitting out one human capacity from the rest of thinking.  Searle
>wants to focus on learning meanings...

I think this is one of the places where Searle goes seriously wrong. 
"Meanings" have no meaning outside the context of consciousness in general.

To illustrate, let's consider the question of whether an air traffic control
computer understands the meaning of the word "airplane". Certainly, we
wouldn't say it understood "airplane" if it, say, issued instructions to
pilots that would make sense only on the assumption that airplanes can
fly under water. But, asks the skeptic, even if it does a wonderful job
of air traffic control, does it really understand the word "airplane"?

The answer is: Who cares? If turning over air traffic control to the computer
reduces the number of accidents, I (and I presume everyone else) am all in 
favour of doing so. Debating whether the computer understands the word
"airplane" is something best left to those incapable of doing anything useful 
with their time.

Now consider a computer that is said to understand the words "love", "fear",
and "courage". It is clear to me that any entity that truely understands
these words has the moral status of a "person". Conversely, I would not
consider any entity to have such moral status if it didn't understand, to
at least some degree, at least some such concepts. [ I will ignore here 
the problem of entities that are, perhaps, only embryonic or degenerate
persons, such as babies and the severely demented. ]

Given this, it is perverse to discuss in isolation the question of whether
the computer really understands "love", "fear", or "courage". The answer
hinges on the whole question of whether the computer is a person, a question
which we will answer in accord with our empathic sense. I will believe the
computer is a person, and understands those important words, if and only if 
I recognize in it the essential attributes that make my own life valuable.

Unlike the question of whether the air traffic control computer understands
"airplane", this question has real implications - a computer that is a 
person has the moral rights and responsibilities of a person, with all that 
implies for our actions. I don't think this question can be answered by the
sort of debate that accompanies the Chinese Room Problem.

    Radford Neal

daryl@oravax.UUCP (Steven Daryl McCullough) (02/13/90)

In article <e58H02l087KM01@amdahl.uts.amdahl.com>, kp@uts.amdahl.com (Ken Presting) writes:
> In article <6573@uhccux.uhcc.hawaii.edu> lee@uhccux.uhcc.hawaii.edu (Greg Lee) writes:
> >From article <898D02hl87rd01@amdahl.uts.amdahl.com>, by kp@uts.amdahl.com (Ken Presting):
> >> ... But notice that if his questioner should
> >> ask Searle his name, or the time of day, or the color of his tie, he
> >> would *not* be able to answer correctly. ...
> >
> >Yes, he would:
> >
> >	name?: (in Chinese) Hao Wang.
> >	time?: (in Chinese) I'm not wearing my watch.
> >	tie color?: (in Chinese) Green.
> 
> I'm not sure what you have in mind.
> 
> If the answers are false, Chinese interrogators will know Searle is faking.
> Searle may not know the difference between "Hao Wang" and "John Searle"
> (printed in Chinese characters), but the audience would.  When I said
> "answer correctly" I meant "make a statement which is true", not just
> "make a statement that is meaningful Chinese, relevant to the topic of
> the question".

In the original Turing Test, it was required that the interrogator
only be able to question the "contestant" via a teletype system, not
in "person". The reason for this stipulation is that the goal of
artificial intelligence is to reproduce a human mind, *not* a human
body. It isn't fair, then, to look at the contestant and say "Hey, I
can tell you are a computer by your keyboard!" Likewise, I think it is
not fair in the Chinese Room to test the veracity of answers to questions
like "What color tie are you wearing?". If Searle stays inside the Chinese
Room, then the interrogator wouldn't be able to know that the answer
is false. Answering *correctly* is not required for the Turing Test,
only answering convincingly.

Someone in this newsgroup (I don't remember who) brought up the issue
that if a computer program succeeded in passing the Turing Test, it
would have to do so through lying; it would have to claim to be a
human being, to have headaches occasionally, to wear green ties, etc.
I don't think the fact that these claims are false should in any way
be held against the computer program; it could very well have the
*mind* of a human being with stomach aches, etc., and so could be
answering truthfully as far as it knows. A human being can be
similarly mistaken about the state of his or her own body; for
example, the "phantom limb" experience of amputees, or the "phantom
odors" experienced when one's brain is stimulated by an electrode.


Daryl McCullough, Odyssey Research Associates
oravax.uucp!daryl@cu-arpa.cs.cornell.edu

kp@uts.amdahl.com (Ken Presting) (02/14/90)

In article <90Feb12.205915est.10612@ephemeral.ai.toronto.edu> radford@ai.toronto.edu (Radford Neal) writes:
>In article <d74702yL871701@amdahl.uts.amdahl.com> kp@amdahl.uts.amdahl.com (Ken Presting) writes:
>
>>Searle has carefully separated the CR example from the rest of his
>>argument about AI, and I want to follow him in that.  The CR does a great
>>job of splitting out one human capacity from the rest of thinking.  Searle
>>wants to focus on learning meanings...
>
>I think this is one of the places where Searle goes seriously wrong. 
>"Meanings" have no meaning outside the context of consciousness in general.
>
>To illustrate, let's consider the question of whether an air traffic control
>computer understands the meaning of the word "airplane".  . . .
>
>The answer is: Who cares? . . .

Agreed.  Of course, an air traffic control computer does not have to be
conscious to work well.


>Now consider a computer that is said to understand the words "love", "fear",
>and "courage". It is clear to me that any entity that truely understands
>these words has the moral status of a "person". Conversely, I would not
>consider any entity to have such moral status if it didn't understand, to
>at least some degree, at least some such concepts.

I think you are on the right track here, but I would put more emphasis on
concepts such as "promise", "truth", "due process", and a lot of others.
In general, the question "What qualities require that we grant human
rights to an organism" is high on my list of important philosphical issues
related to AI.

>Given this, it is perverse to discuss in isolation the question of whether
>the computer really understands "love", "fear", or "courage". The answer
>hinges on the whole question of whether the computer is a person, a question
>which we will answer in accord with our empathic sense. I will believe the
                                         ~~~~~~~~~~~~~~
>computer is a person, and understands those important words, if and only if 
>I recognize in it the essential attributes that make my own life valuable.

I disagree strongly on this point.  I don't believe there is any such
"sense", although I would grant that there are emotions which do a similar
job.  These emotions are very important, but I think we are going to need
rational grounds to make judgments such as who or what has human rights.
I wouldn't trust my, your, or anybody else's emotions on these questions.

>Unlike the question of whether the air traffic control computer understands
>"airplane", this question has real implications - a computer that is a 
>person has the moral rights and responsibilities of a person, with all that 
>implies for our actions. I don't think this question can be answered by the
>sort of debate that accompanies the Chinese Room Problem.

Well, here we are discussing that very question in regard to the Chinese
Room!  I'm glad you brought it up.

Here is why I think the Chinese Room is a valuable contribution to AI.
Turing's test isolated conversation from all the other forms of human
behavior, and allowed AI research to concentrate its attention.  It did
not *force* researchers to concentrate on language, of course - that
would have been stupid.  Searle's Chinese Room allows us to focus even
more clearly on one aspect of human language use.  Again, it does not
force anybody to do anything, and is not intended to do so.  Humans not
only can pronounce words and respond to words, they can also understand.
Perhaps you will agree that those who state problems also contribute to
a project, though not always as significantly as those who solve the
problems.

You have pointed out that the ability to understand certain words may be
necessary if moral standing is to be granted to computers.  Searle has
argued that it is impossible for computers to understand any words at all.
I have argued that programs can contain enough information to allow for
understanding, which does not solve Searle's problem.  But it does show
that Searle has not proved the problem unsolvable.

kp@uts.amdahl.com (Ken Presting) (02/14/90)

In article <1336@oravax.UUCP> daryl@oravax.UUCP (Steven Daryl McCullough) writes:
>In the original Turing Test, it was required that the interrogator
>only be able to question the "contestant" via a teletype system, not
>in "person". The reason for this stipulation is that the goal of
>artificial intelligence is to reproduce a human mind, *not* a human
>body. . . .
> . . . Answering *correctly* is not required for the Turing Test,
>only answering convincingly.

I agree completely that convincing answers are all that is required.  My
point is that it is trivially simple to get a "pure symbol system" to
generate unconvincing answers.  This is great for Strong AI, because it
shows that computers are anything but pure symbol systems.

Suppose the interrogator asks, in Chinese, "What day is it?" or "What
month is it?"  It is common to become confused occaisionally about the
date, or day of the week.  But a rulebook like Searle's, or a computer
which was so lazily programmed that it did not examine the system clock,
would *never* get it straight.  So an interrogator would start to get
suspicious.
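
For contrast, here is the kind of thing the non-lazy programmer writes (a
sketch of mine, assuming nothing beyond the standard C library): one call to
the system clock and the program never gets the date wrong, which no fixed
rulebook can manage.

/* Sketch: answering "What day is it?" by reading the system clock.
   The answer depends on the machine's state at the moment the question
   arrives, not on any rule that could have been written down in advance. */

#include <stdio.h>
#include <time.h>

int main(void)
{
    char buf[64];
    time_t now = time(NULL);             /* read the system clock */
    struct tm *local = localtime(&now);  /* break it into calendar fields */

    strftime(buf, sizeof buf, "Today is %A, %B %d.", local);
    puts(buf);   /* e.g. "Today is Tuesday, February 13." */
    return 0;
}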

Now consider a teletype-oriented question.  Suppose the interrogator types
as fast as he can the question "How long did this question take to type?"
Then suppose he types the same question again, very slowly.  A human on
the other teletype could tell the difference immediately.  SO COULD A REAL
COMPUTER.  But Searle, manipulating symbols, wouldn't have a chance.
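
Here, sketched in C (again my own illustration, not Turing's protocol), is
how a real computer notices the difference: it reads the clock before and
after the line arrives, information that is simply not present in the
symbols typed.

/* Sketch: noticing how long a question took to type.  The elapsed time
   is measured from the moment the program starts waiting for input
   (roughly when the interrogator starts typing) to the moment the
   complete line arrives. */

#include <stdio.h>
#include <time.h>

int main(void)
{
    char line[256];
    time_t start, finish;

    printf("Ask your question: ");
    fflush(stdout);

    start = time(NULL);                  /* clock before the question */
    if (fgets(line, sizeof line, stdin) == NULL)
        return 1;
    finish = time(NULL);                 /* clock after the question */

    printf("That question took about %.0f seconds to type.\n",
           difftime(finish, start));
    return 0;
}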

What this shows is that Searle's Axiom 1 is false.  Programs *do* have
semantics.  It does *not* show that the program understands what it is
saying or doing, but that is something I will get to later.


>Someone in this newsgroup (I don't remember who) brought up the issue
>that if computer program succeeded in passing the Turing Test, it
>would have to do so through lying; it would have to claim to be a
>human being, to have headaches occasionally, to wear green ties, etc.

That was me!

>I don't think the fact that these claims are false should in any way
>be held against the computer program; it could very well have the
>*mind* of a human being with stomach aches, etc., and so could be
>answering truthfully as far as it knows. A human being can be
>similarly mistaken about the state of his or her own body; for
>example, the "phantom limb" experience of amputees, or the "phantom
>odors" experienced when one's brain is stimulated by an electrode.

The problem is not that the computer lies.  There is only a problem if
the computer does not know the truth.  To put it better, there is a big
problem if the program does not know *any* of the truth.

Ken Presting

daryl@oravax.UUCP (Steven Daryl McCullough) (02/14/90)

In article <2dSM02LL88qx01@amdahl.uts.amdahl.com>, kp@uts.amdahl.com (Ken Presting) writes:
> >I don't think the fact that these claims are false should in any way
> >be held against the computer program; it could very well have the
> >*mind* of a human being with stomach aches, etc., and so could be
> >answering truthfully as far as it knows. A human being can be
> >similarly mistaken about the state of his or her own body; for
> >example, the "phantom limb" experience of amputees, or the "phantom
> >odors" experienced when one's brain is stimulated by an electrode.
> 
> The problem is not that the computer lies.  There is only a problem if
> the computer does not know the truth.  To put it better, there is a big
> problem if the program does not know *any* of the truth.
> 
> Ken Presting

I'm not sure if we are in disagreement or not. I don't usually
consider it to be part of intelligence to *know* what is true and what
is not.  Knowing what is true (insofar as this is possible) depends on
the sophistication and reliability of one's information-gathering
equipment, which for a nonhandicapped human being includes eyes, ears,
etc. In my opinion, the only criterion of intelligence is the ability to
correctly reach conclusions based on the information one has. The fact
that a person or computer program has no access to a watch or a calendar
to determine the time of day does not indicate a lack of intelligence,
in my opinion.

Let me call a computer program "virtually intelligent" if it can
converse perfectly intelligently about information that it receives
through conversation alone, but has no other source of new information
(that is, it may have memories, but it has no way of learning what
time it is, or whether it is raining, or any other fact about the real
world unless that fact is revealed through conversation). It seems to
me that it would be a relatively small task to modify a "virtually
intelligent" program to make it "truly intelligent"; it would only
require hooking up timers and TV cameras, etc. Do you agree?
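
Something like the following is what I would mean by "hooking up timers" (a
sketch of my own, not a description of any existing program): the
conversational core consults a table of information sources, so giving it a
new sense is just registering one more entry.

/* Sketch: a conversational core that consults a table of information
   sources.  Adding a sense (a clock, a camera) means adding one entry;
   the core that picks a source and reports its reading is unchanged. */

#include <stdio.h>
#include <string.h>
#include <time.h>

typedef void (*source_fn)(char *answer, size_t len);

struct source {
    const char *topic;   /* what the source can report on */
    source_fn   read;    /* how to read it */
};

static void read_clock(char *answer, size_t len)
{
    time_t now = time(NULL);
    strftime(answer, len, "It is %H:%M.", localtime(&now));
}

static const struct source senses[] = {
    { "time", read_clock },
    /* { "scene", read_camera },  a TV camera would slot in the same way */
};

int main(void)
{
    char answer[64];
    size_t i;
    const char *topic = "time";   /* topic extracted from some question */

    for (i = 0; i < sizeof senses / sizeof senses[0]; i++)
        if (strcmp(topic, senses[i].topic) == 0) {
            senses[i].read(answer, sizeof answer);
            puts(answer);
        }
    return 0;
}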

Daryl McCullough