[sci.philosophy.tech] Can Machines Think?

crowston@athena.mit.edu (Kevin Crowston) (12/18/89)

I also read Searle's and the Churchlands' articles in Scientific American
and I'm not sure I understand Searle's argument.  Perhaps someone who
does can try to explain once more.

Searle seems to be saying that the Turing Test is meaningless as a test of
understanding because the Chinese Room can pass it, even though the
person in the Chinese Room doesn't understand Chinese.  But it seems to
me that this argument equates the mind and the brain and thus mislocates 
the thing doing the understanding.  I agree that the man in the room doesn't
understand Chinese; but I would argue similarly that a computer, as a
collection of wires and silicon, or a brain, as a blob of protoplasm,
don't understand anything either.  In all three cases, the thing doing the
understanding is the program, not the hardware.

Searle acknowledges this argument (it's counterargument c in his
article), but answers it by imagining a situation in which the man in
the room memorizes the rules, the inbox, etc.  He argues that it can't
be the rules that do the understanding, since all there is in the room
is the man (who we agree doesn't understand Chinese).

The part I don't understand is, what difference does it make how the
rules are stored?  I don't see why it makes a difference if the man
memorizes the rules or reads them off a piece of paper.  In the latter
case, admittedly, you can point to the rule book; but that doesn't
mean the rule book doesn't exist in the former case.  It seems to me
that Searle's second example is really the same example, in which case
the argument (that it's the rules that do the understanding, not the man
in the room) remains unanswered.  

I expect the Scientific American articles will set off another wave of
articles; I look forward to the debate.  

Kevin Crowston

jfw@bilbo.mc.duke.edu (John Whitehead) (12/19/89)

In article <1989Dec18.014229.18058@athena.mit.edu> crowston@athena.mit.edu (Kevin Crowston) writes:
>I also read Searle's and the Churchlands' articles in Scientific American
>and I'm not sure I understand Searle's argument.  Perhaps someone who
>does can try to explain once more.

For a good analysis of this -- and many other similar thought-challenging
papers -- check out _The Mind's I_, edited by Douglas Hofstadter and
Daniel Dennett.  I haven't seen the Sci Am article, but I imagine it is
similar (if not identical) to the one in this book.

     - John Whitehead

sarge@metapsy.UUCP (Sarge Gerbode) (12/19/89)

In article <1989Dec18.014229.18058@athena.mit.edu> crowston@athena.mit.edu (Kevin Crowston) writes:

>Searle seems to be saying that the Turing Test is meaningless as a test of
>understanding because the Chinese Room can pass it, even though the
>person in the Chinese Room doesn't understand Chinese.  But it seems to
>me that this argument equates the mind and the brain and thus mislocates 
>the thing doing the understanding.  I agree that the man in the room doesn't
>understand Chinese; but I would argue similarly that a computer, as a
>collection of wires and silicon, or a brain, as a blob of protoplasm,
>don't understand anything either.  In all three cases, the thing doing the
>understanding is the program not the hardware.

On reflection, I don't think you can dispose of the issue that easily
by differentiating between the program and the hardware.  The program
is a schema that describes the electronic state the hardware should
be in when the code file is loaded.  In a very real sense, then, the
shape of the physical machine has been altered by loading the code
file, just as much as if you had flipped switches within the machine
(as we used to do with the old panel switches).  So after the code is
loaded, there is actually a different physical machine there, just as
much as if one had gone out and bought a different machine.

Just because it isn't "hard" (i.e., you can't kick it, and it's easy
to change), doesn't mean it isn't a physical entity.
-- 
Sarge Gerbode -- UUCP:  pyramid!thirdi!metapsy!sarge
Institute for Research in Metapsychology
431 Burgess Drive; Menlo Park, CA 94025

crowston@athena.mit.edu (Kevin Crowston) (12/19/89)

In article <968@metapsy.UUCP> sarge@metapsy.UUCP (Sarge Gerbode) writes:
>On reflection, I don't think you can dispose of the issue that easily
>by differentiating between the program and the hardware.  The program
>is a schema that describes the electronic state the hardware should
>be in when the code file is loaded.  In a very real sense, then, the
>shape of the physical machine has been altered by loading the code
>file, just as much as if you had flipped switches within the machine
>(as we used to do with the old panel switches).  So after the code is
>loaded, there is actually a different physical machine there, just as
>much as if one had gone out and bought a different machine.

But even so, the program still exists in both cases, right?

Actually, I think you've made a key point here.  Loading the software
essentially gives you a different machine.  But I think this actually
supports my argument.  Imagine the effect on the "understanding" done
by the Chinese room of replacing the person with someone else (assuming
that someone can also follow the rules).  Now imagine changing the 
rulebook.  In the first case, the Chinese room will be unaffected; in
the second, it might change.  I would argue that this is further
evidence that it's the program, not the hardware, that matters.  Since it
could be anyone in the Chinese Room, it shouldn't matter what that
person happens to think. 
>
>Just because it isn't "hard" (i.e., you can't kick it, and it's easy
>to change), doesn't mean it isn't a physical entity.

Actually, this was my point.  Software exists, even though you can't point
to it.

Kevin Crowston

dave@cogsci.indiana.edu (David Chalmers) (12/19/89)

"Programs" do not think.
Cognition is not "symbol-manipulation."
The "hardware/software" distinction is unimportant for thinking about minds.

However:

Systems with an appropriate causal structure think.
Programs are a way of formally specifying causal structures.
Physical systems which implement a given program *have* that causal structure,
physically.  (Not formally, physically.  Symbols were simply an intermediate
device.)

Physical systems which implement the appropriate program think.

--
Dave Chalmers     (dave@cogsci.indiana.edu)      
Concepts and Cognition, Indiana University.
"It is not the least charm of a theory that it is refutable" -- Fred

mbb@cbnewsh.ATT.COM (martin.b.brilliant) (12/20/89)

From article <31821@iuvax.cs.indiana.edu>, by dave@cogsci.indiana.edu
(David Chalmers)...

Slightly edited to make the bones barer:

  1. Systems with an appropriate causal structure think.
  2. Programs are a way of formally specifying causal structures.
  3. Physical systems implement programs.
  4. Physical systems which implement the appropriate program think.

I take it that (1) is an acceptable definition.  Does anybody think it
begs the question?

The weakest link here may be (2), the supposition that programs can
implement any causal structure whatever, even those that do what we
call thinking.

The software/hardware duality question is semantically resolved by (3).

The conclusion is (4), which seems to assert "strong AI."

M. B. Brilliant					Marty
AT&T-BL HO 3D-520	(201) 949-1858
Holmdel, NJ 07733	att!hounx!marty1 or marty1@hounx.ATT.COM
After retirement on 12/30/89 use att!althea!marty or marty@althea.UUCP

Disclaimer: Opinions stated herein are mine unless and until my employer
	    explicitly claims them; then I lose all rights to them.

kp@uts.amdahl.com (Ken Presting) (12/20/89)

In article <968@metapsy.UUCP> sarge@metapsy.UUCP (Sarge Gerbode) writes:
>On reflection, I don't think you can dispose of the issue that easily
>by differentiating between the program and the hardware.  The program
>is a schema that describes the electronic state the hardware should
>be in when the code file is loaded.  In a very real sense, then, the
>shape of the physical machine has been altered by loading the code
>file, just as much as if you had flipped switches within the machine
>(as we used to do with the old panel switches).  So after the code is
>loaded, there is actually a different physical machine there, just as
>much as if one had gone out and bought a different machine.

This is a very good point, and often overlooked.  The physical
instantiation of data immensely complicates the concept of "symbol system".
  When machines were built from gears and axles, it was trivial to
distinguish symbols from mechanisms.  Symbols are part of a language, are
written or spoken, and (most importantly) have no mechanical functions.
But communication and computing devices blur the distinction.  In these
machines, an instance of a symbol (a charge, a current pulse, a switch)
has a mechanical role in the operation of the device.
  The first problem that arises is how to distinguish symbol manipulation
systems from other machines.  What makes a symbol "explicit"? The clearest
case of explicit symbols is printed text in a human language, but we need
to resolve hard cases.  One hard case is microcode or firmware.  The
hardest case is probably neural nets.
  Conclusion:  No definition of "symbol manipulation system" which uses
the term "explicit" will be of much help (until "explicit" is defined).

lee@uhccux.uhcc.hawaii.edu (Greg Lee) (12/20/89)

From article <6724@cbnewsh.ATT.COM>, by mbb@cbnewsh.ATT.COM (martin.b.brilliant):
>From article <31821@iuvax.cs.indiana.edu>, by dave@cogsci.indiana.edu
>(David Chalmers)...
>
>Slightly edited to make the bones barer:
>
>1. Systems with an appropriate causal structure think.
>2. Programs are a way of formally specifying causal structures.
>3. Physical systems implement programs.
>4. Physical systems which implement the appropriate program think.
>
>I take it that (1) is an acceptable definition.  Does anybody think it
>begs the question? ...

This and similar discussions have seemed to revolve around
an equivocation between theories about how a thing works and
how the thing does work.  1-4 invite this equivocation in several
ways, consequently they do not serve to clarify. `causal' and
`structure' have to do with theory-making -- we attribute
cause/effect and structure to something in our efforts to
understand it.  So 1. in effect says that we can now understand,
or will come to understand, how people think by making a theory
involving cause and structure.  If the former, it's false;  if
the latter, it does beg the question.

If `program' in 3. is read as `theory' and `physical system' read
as thing about which the theory is made, which is the best I can
make of it, 3. is a generalization of 1. -- we can make (good)
theories about things.  As applied to human thought, and interpreted
as a prediction, it likewise begs the question.

			Greg, lee@uhccux.uhcc.hawaii.edu

kp@uts.amdahl.com (Ken Presting) (12/20/89)

In article <6724@cbnewsh.ATT.COM> mbb@cbnewsh.ATT.COM (martin.b.brilliant) writes:
>From article <31821@iuvax.cs.indiana.edu>, by dave@cogsci.indiana.edu
>(David Chalmers)...
>
>  1. Systems with an appropriate causal structure think.
>  2. Programs are a way of formally specifying causal structures.
>  3. Physical systems implement programs.
>  4. Physical systems which implement the appropriate program think.
>
>I take it that (1) is an acceptable definition.  Does anybody think it
>begs the question?

I don't think so.  Presumably, humans think because of the way we're
built, and the mechanical/chemical/electrical structure determines the
causal structure of our brains.

>The weakest link here may be (2), the supposition that programs can
>implement any causal structure whatever, even those that do what we
>call thinking.

Agreed.  The multi-body problem of astrophysics is a clear case of a
causal system which cannot be precisely represented by an algorithm.
But the argument could succeed with a weaker version of 2, IF we could
figure out which causal structures are relevant to thought.

>The software/hardware duality question is semantically resolved by (3).

This is problematic.  Harnad's "symbol grounding problem" (and some of
Searle's objections, I think) point out the difficulty of claiming that
some object "thinks" strictly on the basis of its internal operation,
or even on the basis of its outputs.  Harnad would want to know how the
symbols found in the output are grounded, while Searle might claim that
the machine *simulated* thinking, but did not itself *think*.
  I agree that the software/hardware duality can only be resolved by the
concept of implementation used in (3).  I'm
just repeating a familiar (but important) theme.

miken@wheaties.ai.mit.edu (Michael N. Nitabach) (12/20/89)

In article <5767@uhccux.uhcc.hawaii.edu>, lee@uhccux.uhcc.hawaii.edu
(Greg Lee) says:

>`causal' and
>`structure' have to do with theory-making -- we attribute
>cause/effect and structure to something in our efforts to
>understand it.

This view of the fundamental nature of causation derives from a particular
metaphysical tradition, beginning with the British Empiricists, e.g. Locke
and Hume.  This is the view that causation is not an aspect of the world
which our mentality can recognize, but rather a schema which our mind imposes
on events with appropriate spatiotemporal relations.  A conceptually
opposite--Realist--stance would be that causation exists as an actual
attribute of certain pairs of physical events.  Greg's argument in that posting
rests on a particular metaphysical assumption, and not on a simple matter
of definition or brute fact.

Mike Nitabach

ele@cbnewsm.ATT.COM (eugene.l.edmon) (12/20/89)

In article <31821@iuvax.cs.indiana.edu> dave@cogsci.indiana.edu (David Chalmers) writes:
>Systems with an appropriate causal structure think.

Could you elaborate on this a bit? 



-- 
gene edmon    ele@cbnewsm.ATT.COM

sm5y+@andrew.cmu.edu (Samuel Antonio Minter) (12/20/89)

1988:11:19:05:13 SFT

     Couldn't you use the Chinese room analogy to prove that humans don't
truly understand either?  In this case the matter/energy in the human body
takes the role of the man in the room and all his stacks of cards, while
the basic laws of physics take the role of the instruction book.  After all,
just as the instruction book tells the man what to do, thus simulating a room
which understands Chinese, the laws telling how various atoms, electrons,
energy fields, etc. interact with each other "instruct" the matter and energy
of the human body how to simulate intelligent behavior.  Maybe even
understanding Chinese!  Is there an error in this argument that I'm missing?
     If there isn't, then it is a more powerful counterargument than the
argument "of course the man doesn't understand, but the whole room does."

1988:11:19:05:19 SFT

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
~ |\     You have just read a message from Abulsme Noibatno Itramne! ~
~ | \                                                                ~
~ | /\   E-Mail@CMU: sm5y+@andrew.cmu.edu    <Fixed Width Fonts!>    ~
~ |/  \  S-Mail@CMU: Sam Minter                First the person      ~
~ |\  /|             4730 Centre Ave., #102          next            ~
~ | \/ |             Pittsburgh, PA 15213       the function!!       ~
~ | /  |                                                             ~
~ |/   |  <-----An approximation of the Abulsme symbol               ~
~                                                                    ~
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

mmcg@bruce.OZ (Mike Mc Gaughey) (12/20/89)

sm5y+@andrew.cmu.edu (Samuel Antonio Minter) [20 Dec 89 05:20:12 GMT]:
> 1988:11:19:05:13 SFT
> 
>      Couldn't you use the Chinese room analogy to prove that humans don't
> truly understand either?  In this case the matter/energy in the human body
> takes the role of the man in the room and all his stacks of cards, while
> the basic laws of physics take the role of the instruction book.  After all,

No - this only proves that the laws of physics don't think (just as the man
in the room didn't understand).  The total system behavior (i.e., of a brain) is
that of an entity which _does_ understand the concepts represented by the
symbols being manipulated.

Mike.
--
Mike McGaughey			ACSNET:	mmcg@bruce.cs.monash.oz

"You have so many computers, why don't you use them in the
 search for love?" - Lech Walesa  :-)

lee@uhccux.uhcc.hawaii.edu (Greg Lee) (12/21/89)

From article <5610@rice-chex.ai.mit.edu>, by miken@wheaties.ai.mit.edu (Michael N. Nitabach):
>In article <5767@uhccux.uhcc.hawaii.edu>, lee@uhccux.uhcc.hawaii.edu
>(Greg Lee) says:
>
>>`causal' and
>>`structure' have to do with theory-making -- we attribute
>>cause/effect and structure to something in our efforts to
>>understand it.
>
>This view of the fundamental nature of causation derives from a particular
>metaphysical tradition, beginning with the British Empiricists, e.g. Locke
>and Hume.  This is the view that causation is not an aspect of the world
>which our mentality can recognize, but rather a schema which our mind imposes
>on events with appropriate spatiotemporal relations. ...

No, that's not quite the view I expressed.  In saying that causation is
something we attribute in theory-making, I do not need to go so far as
to say "causation is not an aspect of the world".  And I don't.  It may
be, in the case of very good theories, that it is reasonable to confound
what the theory says about a thing with the thing itself, or to take the
theory to be a discovery rather than an invention.  But in the case of
not-so-good theories, where there is some doubt as to whether what the
theory says is a cause is indeed a cause, confusing the theory with what
it describes ought to be avoided.

In the present discussion, we are dealing with not-so-good theories.

Surely there's no one who is going to try to defend the view that one
should never distinguish between a theory and what that theory purports
to describe.

			Greg, lee@uhccux.uhcc.hawaii.edu

dejongh@peirce.cis.ohio-state.edu (Matt Dejongh) (12/21/89)

In article <6724@cbnewsh.ATT.COM> mbb@cbnewsh.ATT.COM (martin.b.brilliant) writes:
>From article <31821@iuvax.cs.indiana.edu>, by dave@cogsci.indiana.edu
>(David Chalmers)...
>
>Slightly edited to make the bones barer:
>
>  1. Systems with an appropriate causal structure think.
>
>I take it that (1) is an acceptable definition.  Does anybody think it
>begs the question?

I do.  What is "an appropriate causal structure?"  Give me a
definition and an example.

	matt




----------------------------------------------------------------------------
Matt DeJongh               | Laboratory for Artificial Intelligence Research 
                           | Department of Computer and Information Sciences 
dejongh@cis.ohio-state.edu | The Ohio State University, Columbus, Ohio 43210

ladkin@icsib (Peter Ladkin) (12/21/89)

In article <5610@rice-chex.ai.mit.edu>, miken@wheaties (Michael N. Nitabach) writes:
>This view of the fundamental nature of causation derives from a particular
>metaphysical tradition, beginning with the British Empiricists, e.g. Locke
>and Hume.  This is the view that causation is not an aspect of the world
>which our mentality can recognize, but rather a schema which our mind imposes
>on events with appropriate spatiotemporal relations.

this is hardly locke's view, and barely that of hume. locke rather strongly
held that primary qualities of matter caused secondary qualities. this 
causation was not a product of anything like a mental schema. and `events
with appropriate spatiotemporal relations' were not the only inhabitants
of the physical world. you might also count bishop berkeley in with the
other two, and for him causation was `in the world'. of course, it
got there by being in the intention of a god, for him. all this is 
well-known and well-researched material. so much for summarising the
views of the british empiricists.

peter ladkin

dave@cogsci.indiana.edu (David Chalmers) (12/21/89)

gene edmon writes:
>...David Chalmers writes:
>>Systems with an appropriate causal structure think.
>
>Could you elaborate on this a bit? 

Well, seeing as you ask.  The basic idea is that "it's not the meat, it's
the motion."  At the bottom line, the physical substance of a cognitive
system is probably irrelevant -- what seems fundamental is the pattern of
causal interactions that is instantiated.  Reproducing the appropriate causal
pattern, according to this view, brings along with it everything that is
essential to cognition, leaving behind only the inessential.  (Incidentally,
I'm by no means arguing against the importance of the biochemical or the
neural -- just asserting that they only make a difference insofar as they
make a *functional* difference, that is, play a role in the causal dynamics
of the model.  And such a functional difference, on this view, can be
reproduced in another medium.)

And yes, of course this is begging the question.  I could present arguments
for this point of view but no doubt it would lead to great complications.
Just let's say that this view ("functionalism", though this word is a dangerous
one to sling around with its many meanings) is widely accepted, and I can't
see it being unaccepted soon.  The main reason I posted was not to argue for
this view, but to delineate the correct role of the computer and the program
in the study of mind.

The other slightly contentious premise is the one that states that computers
can capture any causal structure whatsoever.  This, I take it, is the true
import of the Church-Turing Thesis -- in fact, when I look at a Turing
Machine, I see nothing so much as a formalization of the notion of causal
system.  And this is why, in the philosophy of mind, "computationalism" is
often taken to be synonymous with "functionalism".  Personally, I am
a functionalist first, but accept computationalism because of the plausibility
of this premise.  Some people will argue against this premise, saying that
computers cannot model certain processes which are inherently "analog".  I've
never seen the slightest evidence for this, and I have yet to see an example of
such a process.  (The multi-body problem, by the way, is not a good example --
lack of a closed-form solution does not imply the impossibility of a
computational model.)  Of course, we may need to model processes at a low,
non-superficial level, but this is not a problem.
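
To make "formally specifying a causal structure" a bit more concrete, here is
a minimal sketch (my own toy example, in Python; the particular machine and
its encoding are purely illustrative).  The transition table is the "program"
-- a finite description of how state and symbol jointly determine the next
state -- and anything that realizes those transitions implements it:

    # Toy Turing machine.  The transition table formally specifies a
    # state-transition (causal) structure; running it on a tape is one
    # way of implementing that structure.  This machine just appends a
    # '1' to a unary string -- the example is illustrative only.

    def run_tm(tape, table, state="scan", blank="_", max_steps=1000):
        cells = dict(enumerate(tape))
        head = 0
        for _ in range(max_steps):
            if state == "halt":
                break
            symbol = cells.get(head, blank)
            state, write, move = table[(state, symbol)]
            cells[head] = write
            head += 1 if move == "R" else -1
        return "".join(cells[i] for i in sorted(cells)).strip(blank)

    # The "program": scan right over 1s, write a 1 on the first blank, halt.
    table = {
        ("scan", "1"): ("scan", "1", "R"),
        ("scan", "_"): ("halt", "1", "R"),
    }

    print(run_tm("111", table))   # prints 1111

The table is medium-independent: relays, neurons, or a patient man with a
rulebook could all realize the same transition structure.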

The other option for those who argue against the computational metaphor is to
say "yes, but computation doesn't capture causal structure *in the right
way*".  (For instance, the causation is "symbolic", or it has to be
mediated by a central processor.)  I've never found much force in these
arguments.

--
Dave Chalmers     (dave@cogsci.indiana.edu)      
Concepts and Cognition, Indiana University.
"It is not the least charm of a theory that it is refutable" -- Fred

kp@uts.amdahl.com (Ken Presting) (12/22/89)

In article <31945@iuvax.cs.indiana.edu> dave@cogsci.indiana.edu (David Chalmers) writes:
>The other slightly contentious premise is the one that states that computers
>can capture any causal structure whatsoever.  This, I take it, is the true
>import of the Church-Turing Thesis -- in fact, when I look at a Turing
>Machine, I see nothing so much as a formalization of the notion of causal
>system.  . . .
>    . . .         Some people will argue against this premise, saying that
>computers cannot model certain processes which are inherently "analog".  I've
>never seen the slightest evidence for this, and I'm yet to see an example of
>such a process.  (The multi-body problem, by the way, is not a good example --
>lack of a closed-form solution does not imply the impossibility of a
>computational model.)  Of course, we may need to model processes at a low,
>non-superficial level, but this is not a problem.

What makes the multi-body problem a counter-example is not just the fact
that the problem has no closed-form solution, but the chaotic nature of
the mechanical system.

In a chaotic system, an arbitrarily small change in initial conditions
will over time produce an arbitrarily *large* difference in subsequent
states.  It is true that for any physical system, a numerical solution
of the differential equations can generate a prediction of the future
state with an error as small as desired.  But if a numeric model of a
physical system is to run in real time (as needed for the Turing test),
or just proportional to real time, then there will be a fixed minimum
error in the transitions from state to state.  The error may be reduced
by using faster processors or faster algorithms, but for a given
combination of processor, algorithm, and lag behind real-time, there must
be a limit on the number of terms evaluated, and a minimum error.

So at the end of the first time slice, there will be a finite difference
between the state of the real system and the calculated state of the
model.  The real system will proceed chaotically (as will the model), to
amplify the discrepancy in initial state and each subsequent state until
(sooner or later) the real system will be in a state which diverges from
the state of the model by any amount (up to the size of the system).

That was a rough sketch of a proof that not all causal systems can be
modeled by programs.  Let me add a plausibility argument, so
that the claim will not seem counter-intuitive.

What makes the analog causal system different from the algorithm is that
each state of the analog system encodes an infinite amount of information.
This holds even for electrons bound into the quantum orbitals of an atom.
There is a finite number of electrons, and transitions between levels are
discrete, but there are infinitely many energy levels (most of them very
close together).  Of course, a processor has finitely many states, and
encodes a finite amount of information.  In addition, an analog system
need not time-slice its calculations.  It can make infinitely many
transitions in any interval.  A Turing machine would need to have both
infinitely many states and run arbitrarily fast to precisely match an
analog system using a numerical algorithm.  Now, the lack of precision
is insignificant for many analog systems in many applications, where the
error is constant or grows slowly.  But in a chaotic system, the error
in the model can grow very rapidly.  If the numerical model cannot be
allowed to lag arbitrarily far behind real-time (thus trading time for
memory) then the amplification of error will make the model useless.
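
The sensitivity itself is easy to exhibit numerically.  Here is a minimal
sketch (in Python; the logistic map is just a conveniently small stand-in for
a chaotic system, and the constants are illustrative rather than tied to any
mechanical problem):

    # Two trajectories of a chaotic map, started 1e-12 apart.  The gap
    # grows roughly exponentially until it is as large as the state
    # space allows -- the "amplification of error" described above.

    def step(x, r=4.0):          # logistic map; r = 4.0 is chaotic
        return r * x * (1.0 - x)

    a, b = 0.3, 0.3 + 1e-12
    for t in range(61):
        if t % 10 == 0:
            print(t, abs(a - b))
        a, b = step(a), step(b)

This is exactly the situation of a fixed-precision model lagging behind a
real chaotic system: the initial discrepancy is tiny, but it does not stay
tiny.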

The "solutions in closed form" of rational mechanics gurantee that that
a numerical model for an analog system will have a rapidly computable
algorithm (often a polynomial).  So the fact that the multi-body problem
has no solution in closed form is relevant, but that's not the whole
story.  More important is chaotic amplification of initial differences.

This does *not* show that strong AI is impossible with algorithms.  There
is no way to know whether intelligence requires chaos.  But the brain is
certainly complicated enough to be as chaotic as fluid flow.  Modeling
human behavior may well have to deal with this problem.

BTW, I'd like to know if anyone has heard this kind of argument before
in connection with AI.  It must be old hat in simulation circles.

dhw@itivax.iti.org (David H. West) (12/22/89)

In article <ebfl02Ef76Hs01@amdahl.uts.amdahl.com> kp@amdahl.uts.amdahl.com (Ken Presting) writes:
|What makes the multi-body problem a counter-example is not just the fact
|that the problem has no closed-form solution, but the chaotic nature of
|the mechanical system.
|
|In a chaotic system, an arbitrarily small change in initial conditions
|will over time produce an arbitrarily *large* difference in subsequent
|states. [...]
|That was a rough sketch of a proof that not all causal systems can be
|modeled by programs.  Let me add a plausibility argument, so
|that the claim will not seem counter-intuitive.
|
|What makes the analog causal system different from the algorithm is that
|each state of the analog system encodes an infinite amount of information.

Real intelligent systems (e.g. humans) function quite successfully
at a finite temperature despite the influence of thermal
fluctuations (Brownian motion), which cause finite random perturbations
of everything.  A finite system embedded in a thermal environment
cannot encode an infinite amount of information.

This would seem to indicate that your argument has no bearing on
what may be necessary for intelligence, at least over a time-scale
short enough that the physical embodiment of the intelligence is
not disrupted by thermal fluctuations, whether or not these are
chaotically amplified.

-David West      dhw@iti.org

dove@portia.Stanford.EDU (Dav Amann) (12/22/89)

Here we go.

Recently, several posters have debated the concept of thinking
machines without explicitly discussing thought.  We all know that we
think but I doubt that many individuals can articulate what it means
to think.  Does, for example, thinking imply conciousness?  In other
words, do things think without being aware that they are thinking?

If thought is merely the process of solving problems, then answers
become obvious.  Solving problems does not imply consciousness.  My Mac
solves all sorts of problems for me, but it certainly does not spend
its off hours contemplating the Eternal Void.  (At least, I don't
think that it does.)

However, I believe that when individuals talk about thought they imply
some sort of consciousness, or awareness.  When we speak of a machine that
thinks, we mean a machine that understands, reasons, draws
conclusions, learns, and is somehow aware.

Thus when Searle talks of the Chinese room, he is questioning the
awareness of the machine rather than its imitation of reasoning
processes.  (At least, as far as I can tell.)

I believe that the problem of consciousness comes from a choice of
metaphysics.  Most of us in the Western world are disciples of the
ancient Greek metaphysician, Democritus.  Democritus, you'll recall,
was the philosopher who theorized that all of reality was made of
atoms.  Understand the atoms and you'll understand reality.
Mathematics, physics, logic all went a long way towards cementing this
mind set.  The whole is equal to the sum of its parts.  Understand the
parts, and you'll understand the whole.

When this viewpoint is applied to a theory of mind, you get a lot of
folks saying, "All that's in there is a bunch of neurons firing in
certain patterns.  All that's happening is the exchange of
neurochemicals.  Understand those, and you'll understand the mind."

Well, perhaps not.  Somehow, intuitively speaking, I think that the
mind is more than the firing of neurons, though it does seem to
encompass the firing of neurons.  There's more there, I say, than the
exchange of certain neurochemicals.  Plato knew more about the human
mind than I do, yet he knew much less about the construction of the
brain.  Perhaps this intuition explains why Cartesian dualism has been so
vigorously defended since Descartes, despite very little empirical evidence.

Lately, however, some of the newer sciences have been breaking away from
Democritus.  Biologists understand more and more about the
individual cell, yet find it harder and harder to explain the
relationships between cells in terms of the individual cell alone.
Entomologists understand a lot about termites, but they cannot explain
why five termites together will build arches the Romans would be
proud of.  The whole is more than the sum of its parts.

Perhaps the mind is much the same way.  Understanding the switches and
chemicals inside of the brain may in some way add to the knowledge of
our selves, but I don't think that it can ever fully explain our
selves and our consciousness.  

So the question arises, How can we understand ourselves or our
consciousness? How can we tell whether a machine thinks?  To these
questions I profess my ignorance, but I do not think that any method
which only looks at the parts of the brain will accomplish that lofty
goal.

						Dav Amann
						dove@portia.stanford.edu

dave@cogsci.indiana.edu (David Chalmers) (12/23/89)

Ken Presting writes:

>What makes the multi-body problem a counter-example is not just the fact
>that the problem has no closed-form solution, but the chaotic nature of
>the mechanical system.

Chaos is only a problem if we need to model the behaviour of a particular
system over a particular period of time exactly -- i.e., not just capture
how it might go, but how it *does* go.  This isn't what we're trying to do
in cognitive science, so it's not a problem.  We can model the system to
a finite level of precision, and be confident that what we're missing is
only random "noise."  So while we won't capture the exact behaviour of System X
at 3 p.m. on 12/22/89, we'll generate equally plausible behaviour -- in other
words, how the system *might* have gone, if a few unimportant random 
parameters had been different.
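
A toy illustration of the point (the logistic map standing in for an
arbitrary chaotic system; the numbers are illustrative only): two runs whose
initial conditions differ by one part in 10^12 soon disagree pointwise almost
everywhere, yet their long-run statistics are essentially indistinguishable,
so either run is an "equally plausible" history of the system.

    # Pointwise the perturbed run is "wrong" almost everywhere, but its
    # long-run statistics match the original -- the difference is noise.

    def trajectory(x0, n=100000, r=3.9):
        x, out = x0, []
        for _ in range(n):
            x = r * x * (1.0 - x)
            out.append(x)
        return out

    a = trajectory(0.3)
    b = trajectory(0.3 + 1e-12)

    agree = sum(abs(p - q) < 1e-3 for p, q in zip(a, b)) / len(a)
    print("fraction of steps still within 1e-3:", agree)
    print("mean of run A:", sum(a) / len(a))
    print("mean of run B:", sum(b) / len(b))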

This leads to a point which is a central tenet of functionalism -- you don't
need to capture a system's causal dynamics exactly, but only at a certain level
of abstraction.  Which level of abstraction?  Well, this is usually
specified teleologically, depending on what you're trying to capture.  Usually,
it's a level of abstraction that captures plausible input/output relationships.
Anything below this, we can consider either implementational detail, or noise.
Just what this level of abstraction is, of course, is a matter of some debate.
The most traditional functionalists, including the practitioners of
"symbolic" AI, believe that you may go to a very high level of abstraction
before missing anything important.  The move these days seems to be towards
a much less abstract modelling of causal dynamics, in the belief that what
goes on at a low level (e.g. the neural level) makes a fundamental difference.
(This view is sometimes associated with the name "eliminative materialism",
but it's really just another variety of functionalism.  Even at the neural
level, what we're trying to capture are causal patterns, not substance.)

>What makes the analog causal system different from the algorithm is that
>each state of the analog system encodes an infinite amount of information.

Arguable.  My favourite "definition" of information is due to Bateson, I
think (no endorsement of Bateson's other views implied): "Information is a
difference that makes a difference."  An infinite number of bits may be
required to describe the state of a system, but in any real-world system, all
of these after a certain point will not make any difference at all, except as
random parameter settings.  (The beauty of Bateson's definition is that the
final "difference" depends on our purposes.  If we wanted a precise simulation
of the universe, these bits would indeed be "information".  If we want a
cognitive model, they're not.)

Incidentally, you can concoct hypothetical analog systems which contain
an infinite amount of information, even in this sense -- by coding up
Chaitin's Omega for instance (and thus being able to solve the Halting
Problem, and be better than any algorithm).  In the real world, quantum
mechanics makes all of this irrelevant, destroying all information beyond
N bits or so.  

Happy Solstice.
--
Dave Chalmers     (dave@cogsci.indiana.edu)      
Concepts and Cognition, Indiana University.
"It is not the least charm of a theory that it is refutable"

mbb@cbnewsh.ATT.COM (martin.b.brilliant) (12/24/89)

In article <32029@iuvax.cs.indiana.edu> dave@cogsci.indiana.edu (David Chalmers) writes:
>Chaos is only a problem if we need to model the behaviour of a particular
>system over a particular period of time exactly -- i.e., not just capture
>how it might go, but how it *does* go.  This isn't what we're trying to do
>in cognitive science, so it's not a problem.  We can model the system to
>a finite level of precision, and be confident that what we're missing is
>only random "noise."  .....

I second that.  The goal of AI is not to model a particular mind, but
to create a mind.  One thing we know from experience about minds -
which is reinforced by the argument based on chaos - is that two minds
never think exactly alike.

That would almost prove that if we modeled a given mind exactly, we
would NOT have created a mind, because a REAL mind never duplicates
another mind.  To prove we have created a mind, we have to have one
that does not exactly model another.

The chaos argument proves that if we create a mind, we will
automatically meet that requirement.  How fortunate!

M. B. Brilliant					Marty
AT&T-BL HO 3D-520	(201) 949-1858
Holmdel, NJ 07733	att!hounx!marty1 or marty1@hounx.ATT.COM
After retirement on 12/30/89 use att!althea!marty or marty@althea.UUCP

Disclaimer: Opinions stated herein are mine unless and until my employer
	    explicitly claims them; then I lose all rights to them.

sarge@metapsy.UUCP (Sarge Gerbode) (12/26/89)

In article <1989Dec19.061822.27585@athena.mit.edu> crowston@athena.mit.edu
(Kevin Crowston) writes:

>>[After object] code is loaded, there is actually a different
>>physical machine there, just as much as if one had gone out and
>>bought a different machine.

>But even so, the program still exists in both cases, right?

Good question.  What *is* a "program", anyway?  The ascii source
characters, taken as an aggregate?  The machine-language code, as a
sequence of octal or hex characters?  The magnetic patterns on the
disc?  The electronic patterns in RAM when the program is loaded?  Or
is it, as I suspect, the detailed *concept* the programmer had in
mind when he wrote the source code?  Perhaps the program (or, if you
will, the overall algorithm) is a *possibility* that can be
actualized (implemented) in a variety of ways.  This possibility
exists in the mind of a conscious being as the concept called "the
program".  Without the concept, you would not have a "program" but a
mere pattern of electronic whatevers.
-- 
Sarge Gerbode -- UUCP:  pyramid!thirdi!metapsy!sarge
Institute for Research in Metapsychology
431 Burgess Drive; Menlo Park, CA 94025

sarge@metapsy.UUCP (Sarge Gerbode) (12/26/89)

In article <24Yy02PR76bt01@amdahl.uts.amdahl.com> kp@amdahl.uts.amdahl.com
(Ken Presting) writes:
>In article <968@metapsy.UUCP>sarge@metapsy.UUCP (Sarge Gerbode) writes:

>>On reflection, I don't think you can dispose of the issue that easily
>>by differentiating between the program and the hardware.  The program
>>is a schema that describes the electronic state the hardware should
>>be in when the code file is loaded.  In a very real sense, then, the
>>shape of the physical machine has been altered by loading the code
>>file, just as much as if you had flipped switches within the machine
>>(as we used to do with the old panel switches).  So after the code is
>>loaded, there is actually a different physical machine there, just as
>>much as if one had gone out and bought a different machine.

>This is a very good point, and often overlooked.  The physical
>instantiation of data immensely complicates the concept of "symbol
>system".

>When machines were built from gears and axles, it was trivial to
>distinguish symbols from mechanisms.  Symbols are part of a language,
>are written or spoken, and (most importantly) have no mechanical
>functions.  But communication and computing devices blur the
>distinction.  In these machines, an instance of a symbol (a charge, a
>current pulse, a switch) has a mechanical role in the operation of
>the device.

I may have a somewhat radical viewpoint on this, but to me a symbol
is defined as such by the intention of the conscious being using it.
A symbol is a perceivable or detectable entity that is used to direct
attention to a particular reality or potential reality.

Charges, current pulses, etc., are rightly regarded as symbols only to
the extent that they are intended (ultimately) to be comprehended by
some sort of conscious entity as indicating certain realities (or
potential realities).  In the absence of such intentions, they are not
symbols but mere charges, current pulses, etc.

Of course, things can be decoded without being *intended* to be so
decoded.  Scientists are continually decoding (understanding) elements
of the physical universe.  But these elements are (rightly) not
thought of as symbols because (unless one thinks of the universe as a
communication from God) they are not intended to be decoded in a
particular way.
-- 
Sarge Gerbode -- UUCP:  pyramid!thirdi!metapsy!sarge
Institute for Research in Metapsychology
431 Burgess Drive; Menlo Park, CA 94025

bsimon@stsci.EDU (Bernie Simon) (12/28/89)

I would like to make a few points that seem clear to me, but apparently
aren't clear to others in this discussion.

1) All physical objects are not machines. For example, stones, clouds,
flowers, butterflies, and people are not machines. This should be
obvious, but some people use the word machine to include all physical
objects. Not only is this contrary to ordinary usage, it obscures an
important distinction between what is an artifact and what is not.

2) Not all machines are computers. Lamps, screwdrivers, and cars are not
computers.

3) There are some activities which can be performed by physical objects
and machines which cannot be performed by computers.  Birds can fly and
airplanes can fly, but computers cannot fly.  Of course, a computer can
control an airplane, but this misses the distinction I am trying to
make.  The distinction is that all computers, as computers, are
equivalent to Turing machines.  If the computer performs some activity
other than executing a program during its operation (for example,
flying), it is because the machine which contains the computer is
capable of the activity (as airplanes are capable of flying).

4) The simulation of a physical activity by a computer cannot be
identified with the physical activity. A computer running a flight
simulation program is not flying.

5) Hence, while it may be possible to build a machine that thinks, it
does not follow that it will be possible to build a computer that
thinks, as not all physical activities can be performed by computers.

6) While there are good reasons to believe that thinking is a physical
activity, there are no good reasons for believing that thinking is the
execution of a computer program. Nothing revealed either through
introspection or the examination of the anatomy of the brain leads to
the conclusion that the brain is operating as a computer. If someone
claims that it is, the burden of proof is on that person to justify that
claim. Such proof must be based on analysis of the brain's structure and
not on logical, mathematical, or philosophical grounds. Since even the
physical basis of memory is poorly understood at present, any claim that
the brain is a computer is at best an unproven hypothesis.


						Bernie Simon

andrew@dtg.nsc.com (Lord Snooty @ The Giant Poisoned Electric Head ) (12/28/89)

. 1) All physical objects are not machines. 
. 2) Not all machines are computers. 

   What is a machine? It could be said that:
	A stone is a machine; slow crystallisation processes within.
	The internal elementary particle dynamics too.
	This is machinery.
   What is a computer? It could be said that:
	Lamps could be computers if composed of Finite State Automata
	on the molecular level. The program ensures that output
	intensity remains approximately constant, and that the
	structural form remains relatively invariant during illumination.
	This is computing.

. 3) There are some activities which can be performed by physical objects
. and machines which cannot be performed by computers.  
. The distinction is that all computers, as computers, are
. equivalent to Turing machines.  

	What if the physical environment were used to compute with,
	instead of electrical energy? - think abacus. Flying then, for
	example, might be an emergent and NECESSARY property of such
	a computer.

. 4) The simulation of a physical activity by a computer cannot be
. identified with the physical activity.

	Irrelevant in light of rebuttal of 3)

. 5) Hence, while it may be possible to build a machine that thinks, it
. does not follow that it will be possible to build a computer that
. thinks, as not all physical activities can be performed by computers.

	There is no known constraint on the physical activities which
	can be performed by computers, including those of the brain
	(I naturally exclude violations of the known physical laws).

. 6) While there are good reasons to believe that thinking is a physical
. activity, there are no good reasons for believing that thinking is the
. execution of a computer program. Nothing revealed either through
. introspection or the examination of the anatomy of the brain leads to
. the conclusion that the brain is operating as a computer. If someone
. claims that it is, the burden of proof is on that person to justify that
. claim. Such proof must be based on analysis of the brain's structure and
. not on logical, mathematical, or philosophical grounds. Since even the
. physical basis of memory is poorly understood at present, any claim that
. the brain is a computer is at best an unproven hypothesis.

	Repeat - what is a computer?
-- 
...........................................................................
Andrew Palfreyman	a wet bird never flies at night		time sucks
andrew@dtg.nsc.com	there are always two sides to a broken window

mbb@cbnewsh.ATT.COM (martin.b.brilliant) (12/28/89)

In article <1037@ra.stsci.edu> bsimon@stsci.EDU (Bernie Simon) writes:
!I would like to make a few points that seem clear to me, but apparently
!aren't clear to others in this discussion.

Good thing to do.

!1) All physical objects are not machines. For example, stones, clouds,
!flowers, butterflies, and people are not machines...

Meaning that a machine has to be an artifact.  OK, people sometimes
call people machines to emphasize that people and machines are governed
by the same physics.  But saying that people are machines really begs
the question "Can machines think," doesn't it?

!2) Not all machines are computers. Lamps, screwdrivers, and cars are not
!computers.

OK.  But all computers are machines.  And a machine can contain a computer.

!3) There are some activities which can be performed by physical objects
!and machines which cannot be performed by computers.  Birds can fly and
!airplanes can fly, but computers cannot fly...

!4) The simulation of a physical activity by a computer cannot be
!identified with the physical activity. A computer running a flight
!simulation program is not flying.

True.

!5) Hence, while it may be possible to build a machine that thinks, it
!does not follow that it will be possible to build a computer that
!thinks, as not all physical activities can be performed by computers.

We seem to have got off the track.  The question was not whether
computers can think, but whether machines can think.  If you put a
computer into a machine that can accept sensory input and create
motor output, it might be able to do what we call thinking.

!6) While there are good reasons to believe that thinking is a physical
!activity, there are no good reasons for believing that thinking is the
!execution of a computer program....

I wouldn't believe that for a minute.  I don't know exactly what
thinking is, but it is probably something a computer can't do alone,
but a machine with a computer in it might be able to do.

!.... Nothing revealed either through
!introspection or the examination of the anatomy of the brain leads to
!the conclusion that the brain is operating as a computer....

Is that a requirement for machines to think?  Consider a machine with
sensory inputs and motor outputs.  It needs a controller.  Do you have
to have an actual brain inside, or will it be sufficient to have a
computer that simulates the brain?

Flying.  We talked about flying.  A computer can't fly.  But if you
build a machine with eyes, and wings, and feet, and it needs a
controller, a machine that simulates a brain will be just as effective
as a genuine brain.

M. B. Brilliant					Marty
AT&T-BL HO 3D-520	(201) 949-1858
Holmdel, NJ 07733	att!hounx!marty1 or marty1@hounx.ATT.COM
After retirement on 12/30/89 use att!althea!marty or marty@althea.UUCP

Disclaimer: Opinions stated herein are mine unless and until my employer
	    explicitly claims them; then I lose all rights to them.

dg1v+@andrew.cmu.edu (David Greene) (12/28/89)

Excerpts from netnews.comp.ai: 27-Dec-89 Re: Can Machines Think?
martin.b.brilliant@cbnew (2932)

> !In article <1037@ra.stsci.edu> bsimon@stsci.EDU (Bernie Simon) writes:

> !5) Hence, while it may be possible to build a machine that thinks, it
> !does not follow that it will be possible to build a computer that
> !thinks, as not all physical activities can be performed by computers.

> We seem to have got off the track.  The question was not whether
> computers can think, but whether machines can think.  If you put a
> computer into a machine that can accept sensory input and create
> motor output, it might be able to do what we call thinking.


I would welcome some clarification... 

Let's assume there  is some agreement on what constitutes "what we call
thinking" -- a big assumption. 
Is it the case that machines alone can think?  Or is it that a machine
requires a computer (as a necessary but not sufficient condition) to
think?  (and that a computer alone is insufficient)

If it is only the machine+computer combination that is capable, what is
it about the combination?  Is it the ability to control its sensory
inputs and outputs (the machine part) or some other distinction?



-David
--------------------------------------------------------------------
 David Perry Greene      ||    ARPA:   dg1v@andrew.cmu.edu, dpg@isl1.ri.cmu.edu
 Carnegie Mellon Univ.  ||    BITNET:  dg1v%andrew@vb.cc.cmu.edu
 Pittsburgh, PA 15213    ||    UUCP: !harvard!andrew.cmu.edu!dg1v
--------------------------------------------------------------------

kp@uts.amdahl.com (Ken Presting) (12/29/89)

Here is the original argument under discussion:

  1. Systems with an appropriate causal structure think.
  2. Programs are a way of formally specifying causal structures.
  3. Physical systems implement programs.
  4. Physical systems which implement the appropriate program think.

I have been arguing that this argument is unsound because (2) is false.
By no means do I dispute the conclusion, though of course others would.

 David Chalmers writes:
>Chaos is only a problem if we need to model the behaviour of a particular
>system over a particular period of time exactly -- i.e., not just capture
>how it might go, but how it *does* go.  This isn't what we're trying to do
>in cognitive science, so it's not a problem.  We can model the system to
>a finite level of precision, and be confident that what we're missing is
>only random "noise."  So while we won't capture the exact behaviour of System X
>at 3 p.m. on 12/22/89, we'll generate equally plausible behaviour -- in other
>words, how the system *might* have gone, if a few unimportant random
>parameters had been different.

 M. B. Brilliant writes:
>I second that.  The goal of AI is not to model a particular mind, but
>to create a mind.

These objections seem to grant at least a part of my point - some of the
characteristics of some causal systems cannot be specified by programs.
I agree that an AI need not model any particular person at a particular
time.  But since the error in a numerical model is cumulative over time
slices, it's not just the behavior of the system at a given time that
won't match, but also the general shape of the trajectories through the
state space of the system.  If a numerical model of the brain is claimed
to be accurate except for "noise", and therefore claimed to be conscious,
then it must be shown that what is called "noise" is irrelevant to
consciousness (or thinking).  Fluctuations that seem to be "noise" may
have significant consequences in a chaotic system.

 David Chalmers continues:
>Incidentally, you can concoct hypothetical analog systems which contain
>an infinite amount of information, even in this sense -- by coding up
>Chaitin's Omega for instance (and thus being able to solve the Halting
>Problem, and be better than any algorithm).  In the real world, quantum
>mechanics makes all of this irrelevant, destroying all information beyond
>N bits or so.

Quantum mechanics can't destroy any information - it just makes the
information statistical.  Note that in the wave formulation of QM, the
probability waves are continuous, and propagate and interfere
deterministically. No probability information is ever lost, but retrieving
the probabilistic information can be time consuming.  The physical system
need not "retrieve" the probabilistic information; it can react directly.

John Nagle writes:
>    2.  Recent work has resulted in an effective way to solve N-body
>        problems to an arbitrary level of precision and with high
>        speed.  See "The Rapid Evaluation of Potential Fields in
>        Particle Systems", by L.F. Greengard, MIT Press, 1988.
>        ISBN 0-262-07110-X.
>
>        Systems with over a million bodies are now being solved using
>        these techniques.

It's not enough to do fast and accurate calculations; the calculations
must remain fast no matter how accurate the simulation has to be.  Every
computer must have a finite word size, so when accuracy levels require
multiple words to represent values in a state vector, the model will slow
down in proportion to the number of words used.  This effect is
independent of the efficiency of the basic algorithm.
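
The word-size effect is easy to see in a toy setting.  The sketch below (in
Python, using the standard decimal module; the map and all parameters are
illustrative) iterates a chaotic map at several working precisions and counts
how many steps each run stays close to a much higher-precision reference.
More digits buy a longer faithful stretch, but the higher-precision runs take
longer per step:

    # Precision vs. speed on a chaotic map: each run is compared against
    # a 500-digit reference trajectory.  "Faithful steps" is the number
    # of iterations before the run drifts more than 1e-6 from it.
    import time
    from decimal import Decimal, getcontext

    R, X0, STEPS = Decimal("3.9"), Decimal("0.5"), 400

    def trajectory(prec):
        getcontext().prec = prec
        x, out = X0, []
        t0 = time.perf_counter()
        for _ in range(STEPS):
            x = R * x * (1 - x)
            out.append(x)
        return out, time.perf_counter() - t0

    reference, _ = trajectory(500)       # stand-in for "the real system"
    for prec in (16, 32, 64, 128):
        run, elapsed = trajectory(prec)
        faithful = next((i for i, (a, b) in enumerate(zip(run, reference))
                         if abs(a - b) > Decimal("1e-6")), STEPS)
        print(prec, "digits:", faithful, "faithful steps,",
              round(elapsed, 4), "seconds")

None of this settles whether the lost digits matter for thinking; it only
illustrates why accuracy cannot be had for free at a fixed speed.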

dave@cogsci.indiana.edu (David Chalmers) (12/30/89)

In article <1037@ra.stsci.edu> bsimon@stsci.EDU (Bernie Simon) writes:

>I would like to make a few points that seem clear to me, but apparently
>aren't clear to others in this discussion.

Hmmm, do I take it from the references line that you mean me?

>1) All physical objects are not machines. 
>2) Not all machines are computers.
>3) There are some activities which can be performed by physical objects
>and machines which cannot be performed by computers.

1) Arguable but not relevant.
2) Of course.
3) Of course.

>Birds can fly and airplanes can fly, but computers cannot fly. [...]
>4) The simulation of a physical activity by a computer cannot be
>identified with the physical activity. A computer running a flight
>simulation program is not flying.
>5) Hence, while it may be possible to build a machine that thinks, it
>does not follow that it will be possible to build a computer that
>thinks, as not all physical activities can be performed by computers.

If you recall, the *premise* of the current discussion was that thinking is
thinking in virtue of its abstract causal structure, and not in virtue of
physical details of implementation.  If you want to argue with this premise
 -- functionalism -- then fine.  The point was not to defend it, but to defend
a view of the relation between computation and cognition which is less
simple-minded than "the mind is a computer".

Of course, I also believe that functionalism is true.  The functionalist
believes that thinking is fundamentally different to flying (and heating,
swimming, and nose-blowing).  The essence of flying certainly *cannot*
be captured in an abstract causal structure.  This is because there are
substantive *physical* criteria for flying.  An object, *by definition*,
is not flying unless it is (very roughly) engaged in ongoing motion without
any connection to the ground.  Nothing abstract about this -- it's a solid,
physical criterion. If you capture the causal patterns without the correct
physical realization, then it's not flying, period.  Similarly for
nose-blowing and the rest. 

Thinking, on the other hand, has no such solid criteria in its definition.
The only definitive criterion for thought is "having such and such a subjective
experience" -- which is far away from physical details (and a criterion which
is understood notoriously badly).  Of course, this doesn't *prove* that
thinking is not nevertheless inseparable from physical details -- a correct
theory of mind *might* just require that for these experiences, you can't
get away with anything but pointy-headed neurons.  But at the very least,
physical details are out of the *definition*, and there is thus a
principled difference between thinking and flying.  Which makes the jump to
functionalism much more plausible.  Maybe "thinking" is more like "adding"
than like "flying".

Most arguments against functionalism are in terms of "funny instantiations" --
as in "but *this* has the right causal dynamics, and surely *this* doesn't
think".  Generally Chinese objects seem to be favoured for these arguments --
whether Rooms, Gyms or Nations.  Some people find these intuitively compelling.
As for me, I find the arguments sufficiently unconvincing that my "faith" is
not only affirmed but strengthened.

>6) While there are good reasons to believe that thinking is a physical
>activity, there are no good reasons for believing that thinking is the
>execution of a computer program. Nothing revealed either through
>introspection or the examination of the anatomy of the brain leads to
>the conclusion that the brain is operating as a computer. If someone
>claims that it is, the burden of proof is on that person to justify that
>claim. Such proof must be based on analysis of the brain's structure and
>not on logical, mathematical, or philosophical grounds. Since even the
>physical basis of memory is poorly understood at present, any claim that
>the brain is a computer is at best an unproven hypothesis.

I agree.  Did you read my first note?  The whole point is that you can accept
the computational metaphor for mind *without* believing somewhat extreme
statements like "the brain is a computer", "the mind is a program", "cognition
is just symbol-manipulation" and so on.  The role of computer programs is
that they are very useful formal specifications of causal dynamics (which
happen to use symbols as an intermediate device).  Implementations of
computer programs, on the other hand, possess *physically* the given causal
dynamics.  So if you accept (1) functionalism, and (2) that computer programs
can capture any causal dynamics, then you accept that implementations of
the right computer programs think.

--
Dave Chalmers     (dave@cogsci.indiana.edu)      
Concepts and Cognition, Indiana University.
"It is not the least charm of a theory that it is refutable"

bwk@mbunix.mitre.org (Kort) (12/30/89)

In article <1037@ra.stsci.edu> bsimon@stsci.EDU (Bernie Simon) writes:

 > 6) While there are good reasons to believe that thinking is a physical
 > activity, there are no good reasons for believing that thinking is the
 > execution of a computer program.  Nothing revealed either through
 > introspection or the examination of the anatomy of the brain leads to
 > the conclusion that the brain is operating as a computer.  If someone
 > claims that it is, the burden of proof is on that person to justify that
 > claim. Such proof must be based on analysis of the brain's structure and
 > not on logical, mathematical, or philosophical grounds.  Since even the
 > physical basis of memory is poorly understood at present, any claim that
 > the brain is a computer is at best an unproven hypothesis.

The brain is a collection of about 400 anatomically identifiable
neural networks, interconnected by trunk circuits called nerve bundles,
and connected to the outside world by sensory organs (eyes, ears, nose,
tactile sensors) and effectors (muscles, vocal cords).  Neural networks
are programmable computational devices, capable of categorizing stimuli
into cases, and capable of instantiating any computable function (some
more easily than others).  Artificial neural networks are used today
for classifying applicants for credit or insurance.  They have also
been used to read ASCII text and drive a speech synthesizer, thereby
demonstrating one aspect of language processing.  As to memory, you
might want to explore recent research on the Hebbian synapse.
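
For concreteness, a toy sketch of "categorizing stimuli into cases": a single
perceptron, written here in Python, with made-up credit-style features and
labels.  It is only meant to illustrate that such a network learns a decision
rule from examples, not to describe any real screening system.

# Toy perceptron: "categorizes stimuli into cases" by learning a linear
# decision rule from labelled examples.  The (income, debt) features and
# the approve/decline labels below are invented for illustration.
def train_perceptron(samples, epochs=50, lr=0.1):
    n = len(samples[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, target in samples:
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = target - pred                     # 0 when already correct
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def classify(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

data = [((0.9, 0.1), 1), ((0.8, 0.2), 1), ((0.2, 0.9), 0), ((0.3, 0.8), 0)]
w, b = train_perceptron(data)
print([classify(w, b, x) for x, _ in data])         # [1, 1, 0, 0]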

--Barry Kort

bwk@mbunix.mitre.org (Kort) (12/30/89)

In article <f6xk02b078qO01@amdahl.uts.amdahl.com>
kp@amdahl.uts.amdahl.com (Ken Presting) writes:

 > I agree that an AI need not model any particular person at a particular
 > time.  But since the error in a numerical model is cumulative over time
 > slices, it's not just the behavior of the system at a given time that
>won't match, but also the general shape of the trajectories through the
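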
 > state space of the system.  If a numerical model of the brain is claimed
 > to be accurate except for "noise", and therefore claimed to be conscious,
 > then it must be shown that what is called "noise" is irrelevant to
 > consciousness (or thinking).  Fluctuations that seem to be "noise" may
 > have significant consequences in a chaotic system.

Noise is to thinking as genetic mutations are to evolution.  Most
noise is counterproductive, but occasionally the noise leads to a
cognitive breakthrough.  That's called serendipity.
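
As a toy illustration of the analogy (my own construction): a hill-climber
whose steps are occasionally perturbed by large random "noise" can escape a
local optimum that a purely local search stays stuck on, even though most of
those perturbations are wasted.

# Toy illustration: most random perturbations are counterproductive, but a
# rare large one lets the search escape a local optimum ("serendipity").
import math
import random

def f(x):
    # Two bumps: a small local peak near x=1, a higher peak near x=4.
    return math.exp(-(x - 1) ** 2) + 2 * math.exp(-(x - 4) ** 2)

def climb(with_noise, steps=5000, start=0.0):
    random.seed(1)
    x = start
    for _ in range(steps):
        candidate = x + random.uniform(-0.1, 0.1)    # small local step
        if with_noise and random.random() < 0.01:    # rare big "mutation"
            candidate = x + random.uniform(-5.0, 5.0)
        if f(candidate) > f(x):                      # keep only improvements
            x = candidate
    return x

print("without noise:", round(climb(False), 2))  # stuck near the small peak
print("with noise:   ", round(climb(True), 2))   # usually reaches the big one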

--Barry Kort

cam@aipna.ed.ac.uk (Chris Malcolm) (12/30/89)

In article <973@metapsy.UUCP> sarge@metapsy.UUCP (Sarge Gerbode) writes:
>In article <1989Dec19.061822.27585@athena.mit.edu> crowston@athena.mit.edu
>(Kevin Crowston) writes:

>>>[After object] code is loaded, there is actually a different
>>>physical machine there, just as much as if one had gone out and
>>>bought a different machine.

>>But even so, the program still exists in both cases, right?

>Good question.  What *is* a "program", anyway?
> ...
>is it, as I suspect, the detailed *concept* the programmer had in
>mind when he wrote the source code?

Computer programs, like knitting patterns, sometimes arise by
serendipitous accident. "Gee, that looked good - wonder if I can do it
again?" There are also those cases where one computer program invents
another. In these cases there need never have been any mind which had a
detailed concept, or even intention, behind the source code. Even in
fully deliberate programs, it is sometimes the case that the programmer
fixes a bug by accident without understanding it - sometimes the only
way, for obscure bugs, is just to tinker until it goes away. But I would
not like to say that programs which have never been understood are
therefore not programs, any more than I would like to say that if God
does not understand how I'm built then I'm not really a person.
-- 
Chris Malcolm    cam@uk.ac.ed.aipna   031 667 1011 x2550
Department of Artificial Intelligence, Edinburgh University
5 Forrest Hill, Edinburgh, EH1 2QL, UK

gall@yunexus.UUCP (Norm Gall) (12/31/89)

bwk@mbunix.mitre.org (Kort) writes:

| Noise is to thinking as genetic mutations are to evolution.  Most
| noise is counterproductive, but occasionally the noise leads to a
| cognitive breakthrough.  That's called serendipity.

Don't you think you are playing fast and loose with these concepts?
What you say noise is when you equate it with genetic mutation is not
the same as what a radio operator knows it to be, what a philosopher
knows it to be, and what the mother of a teenager in Windsor, ON
knows it to be.

I'm not saying that you haven't defined your concepts well enough (AI
scientists have more than adequately defined it, for their purposes).
My question is "What licenses you to shift the meaning of any
particular term?"

nrg

-- 
York University          | "Philosophers who make the general claim that a 
Department of Philosophy |       rule simply 'reduces to' its formulations
Toronto, Ontario, Canada |       are using Occam's razor to cut the throat
_________________________|       of common sense.'             - R. Harris

miron@fornax.UUCP (Miron Cuperman ) (12/31/89)

In article <f6xk02b078qO01@amdahl.uts.amdahl.com>
	kp@amdahl.uts.amdahl.com (Ken Presting) writes:
> David Chalmers writes:
>>So while we won't capture the exact behaviour of System X
>>at 3 p.m. on 12/22/89, we'll generate equally plausible behaviour -- in other
>>words, how the system *might* have gone, if a few unimportant random
>>parameters had been different.
>
>These objections seem to grant at least a part of my point - some of the
>characteristics of some causal systems cannot be specified by programs.
>I agree that an AI need not model any particular person at a particular
>time.  But since the error in a numerical model is cumulative over time
>slices, it's not just the behavior of the system at a given time that
>won't match, but also the general shape of the trajectories through the
>state space of the system.

My thoughts:

Let us say you see a leaf falling.  Since you are a chaotic system, your
trajectory through space-time may be completely different from what it would
have been if you had not seen that leaf.  But did that leaf make you
non-human?  Did it 'kill' you because it changed your future so
drastically?  I don't think so.

Let us say that we model someone on a computer but we do not capture
everything.  Because of the imperfections of the model the resulting
system will diverge.  (Also because the inputs to this system and to
the original are different.)  Isn't that equivalent to the falling leaf
incident? (assuming the model is close enough so it does not cause a
breakdown in the basic things that make a human -- whatever those
are.)  I don't agree that some characteristics cannot be specified by
programs.  They can be specified up to any precision we would like.  I
don't think the human brain is so complex that it has an infinite
number of *important* parameters (ones without which you would fail the
'thinking test').  Actually, I don't think there are so many that $10M
would not capture today (if we knew how to model them).

You also wrote that chaotic systems are specifically hard to model.  A
computer is a chaotic system.  It is very easy to model a computer.
Therefore it may be possible to model other chaotic systems.  You have
to justify your claim better.

Summary:  Inaccurate modeling may have an effect similar to 'normal'
  events.  Since the inputs will be different anyway, the inaccuracy
  may not matter.
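
A small numerical illustration of the point (invented here, using the
logistic map as a stand-in for any chaotic system): a "model" that starts one
part in a billion away from the "real" trajectory soon diverges from it, yet
both remain perfectly ordinary trajectories of the same kind of system.

# Illustrative only: the logistic map is chaotic, so a "model" that starts
# one part in a billion away from the "real" trajectory soon diverges from
# it -- yet both stay perfectly ordinary trajectories of the same system.
def logistic(x, r=3.9):
    return r * x * (1.0 - x)

real, model = 0.500000000, 0.500000001          # differ by 1e-9
for step in range(1, 51):
    real, model = logistic(real), logistic(model)
    if step % 10 == 0:
        print(f"step {step:2d}: real={real:.6f}  model={model:.6f}  "
              f"diff={abs(real - model):.2e}")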

		Miron Cuperman
		miron@cs.sfu.ca

jpp@tygra.UUCP (John Palmer) (12/31/89)

}In article <85217@linus.UUCP> bwk@mbunix.mitre.org (Barry Kort) writes:
}In article <1037@ra.stsci.edu} bsimon@stsci.EDU (Bernie Simon) writes:
}
} } 6) While there are good reasons to believe that thinking is a physical
} } activity, there are no good reasons for believing that thinking is the
} } execution of a computer program.  Nothing revealed either through
} } introspection or the examination of the anatomy of the brain leads to
} } the conclusion that the brain is operating as a computer.  If someone
} } claims that it is, the burden of proof is on that person to justify that
} } claim. Such proof must be based on analysis of the brain's structure and
} } not on logical, mathematical, or philosophical grounds.  Since even the
} } physical basis of memory is poorly understood at present, any claim that
} } the brain is a computer is at best an unproven hypothesis.
}
}The brain is a collection of about 400 anatomically identifiable
}neural networks, interconnected by trunk circuits called nerve bundles,
}and connected to the outside world by sensory organs (eyes, ears, nose,
}tactile sensors) and effectors (muscles, vocal cords).  Neural networks
}are programmable computational devices, capable of categorizing stimuli
}into cases, and capable of instantiating any computable function (some
}more easily than others).  Artificial neural networks are used today
}for classifying applicants for credit or insurance.  They have also
}been used to read ASCII text and drive a speech synthesizer, thereby
}demonstrating one aspect of language processing.  As to memory, you
}might want to explore recent research on the Hebbian synapse.
}
}--Barry Kort

But the brain is not structurally programmable. The tradeoff principle
states that no system can have structural programmability, evolutionary
adaptability and efficiency at the same time. 
 
Digital computers are programmable, but lack efficiency (I may post
more on this later) and evolutionary adaptability. The brain (ie:
humans) has evolutionary adaptability and (relative) efficiency.

Biological neurons are much more complex than their weak cousins  
(artificial neurons) and contain internal dynamics which play 
a very important role in their function. Things like second messenger
systems and protein/substrate interactions are important. Internal
dynamics rely heavily on the laws of physics and we cannot determine
what "function" a neuron "computes" unless we do a physics experiment
first. Computer engineers work very hard to mask off the effects of
the laws of physics (ie: by eliminating the effects of background
noise) in order to produce a device which is structurally programmable.

Biological neurons, on the other hand, RELY on the laws of physics
to do their work. The basic computing element of biological systems,
the protein, operates by recognizing a substrate. This is accomplished
by Brownian motion and depends on weak bonds (van der Waals interactions,
etc). Thus, there is a structure/function relationship which is 
essential.      

Artificial neural nets will still be unable to solve hard problems 
(pattern recognition, REAL language processing, etc) because they
are implemented in silicon (usually as a virtual machine on top of
a standard digital computer) and are therefore inherently inefficient.
In theory (Church-Turing Thesis) it is possible for such problems to
be solved by digital computers, but most of the hard problems are 
intractable. We are very quickly reaching the limits of 
speed of silicon devices.

The only hope of solving these hard problems is by developing devices 
which take advantage of the laws of physics and that have a very
strong structure/function relationship. Of course, these devices 
will not be structurally programmable, but will have to be developed
by an evolutionary process. 

My point: We are not going to solve the hard problems of AI by 
simply developing programs for our digital computers. We have to
develop hardware that has a strong structure/function relationship.

Sorry if this posting seems a little incoherent. It's 5am and I just
woke up. I'll post more on this later. Most of these ideas are to
be attributed to Dr. Michael Conrad, Wayne State University, 
Detroit, MI. 
-- 
=  CAT-TALK Conferencing Network, Prototype Computer Conferencing System  =
-  1-800-446-4698, 300/1200/2400 baud, 8/N/1. New users use 'new'         - 
=  as a login id.   E-Mail Address: ...!uunet!samsung!sharkey!tygra!jpp   =
-           <<<Redistribution to GEnie PROHIBITED!!!>>>>                  -

sarge@metapsy.UUCP (Sarge Gerbode) (01/02/90)

In article <1779@aipna.ed.ac.uk> cam@aipna.ed.ac.uk (Chris Malcolm) writes:
>In article <973@metapsy.UUCP>sarge@metapsy.UUCP (Sarge Gerbode) writes:

>>What *is* a "program", anyway?
>>...
>>is it, as I suspect, the detailed *concept* the programmer had in
>>mind when he wrote the source code?

>Computer programs, like knitting patterns, sometimes arise by
>serendipitous accident. "Gee, that looked good - wonder if I can do it
>again?"

>There are also those cases where one computer program invents
>another. In these cases there need never have been any mind which had a
>detailed concept, or even intention, behind the source code.

You haven't really defined "program", yet.  Do you mean the ascii
code?  I can see how that could arise from a random source, but
doesn't it take a conscious being to look at the source code and
label it as a "program"?  I suppose it would be easy to design a
program that would randomly generate syntactically correct ascii C
code that would compile and run without run-time errors (probably has
been done).  But would you really call such a random product a
program?  And if so, what's so interesting about programs as such?
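
Something along these lines is certainly easy to sketch.  For instance (in
Python rather than C, and entirely illustrative): generate random expressions
from a tiny grammar; every one of them parses and runs without error.

# Toy sketch: randomly generate syntactically correct expressions from a
# tiny grammar and run them.  Whether the random product deserves the name
# "program" is exactly the question at issue.
import random

def random_expr(depth=3):
    """Return a string that is always a valid arithmetic expression."""
    if depth == 0 or random.random() < 0.3:
        return str(random.randint(1, 9))
    op = random.choice(['+', '-', '*'])
    return f"({random_expr(depth - 1)} {op} {random_expr(depth - 1)})"

random.seed(0)
for _ in range(3):
    src = random_expr()
    print(src, "=", eval(src))    # every generated string parses and runs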

>Even in fully deliberate programs, it is sometimes the case that the
>programmer fixes a bug by accident without understanding it -
>sometimes the only way, for obscure bugs, just to tinker until it
>goes away. But I would not like to say that programs which have never
>been understood are therefore not programs, any more than I would
>like to say that if God does not understand how I'm built then I'm
>not really a person.

Good point.  But even if a program were accidentally generated
randomly (like the fabled monkeys accidentally producing a Shakespeare
play), would it not require a conscious being to *label* such a
production a "program", in order for it to be one?

I'm not sure about this point.  I suppose there might be an argument
for saying that the enterprise of science is to discover the programs
that exist in Nature, so that we can understand, predict, and control
Nature.  In particular, the DNA system could be (has been) described
as a program.  I'm not sure if this usage is legitimate, or if we are
engaging in a bit of anthropomorphizing, here.  Or "theomorphizing",
if we find ourselves thinking as if the universe was somehow
programmed by some sort of intelligent Being and we are discovering
what that program is.
-- 
Sarge Gerbode -- UUCP:  pyramid!thirdi!metapsy!sarge
Institute for Research in Metapsychology
431 Burgess Drive; Menlo Park, CA 94025

dhw@itivax.iti.org (David H. West) (01/03/90)

In article <191@fornax.UUCP> miron@cs.sfu.ca (Miron Cuperman) writes:
>You also wrote that chaotic systems are specificaly hard to model.  A
>computer is a chaotic system.  

How so?

byoder@smcnet.UUCP (Brian Yoder) (01/03/90)

In article <979@metapsy.UUCP>, sarge@metapsy.UUCP (Sarge Gerbode) writes:
> In article <24Yy02PR76bt01@amdahl.uts.amdahl.com> kp@amdahl.uts.amdahl.com
> (Ken Presting) writes:
> >In article <968@metapsy.UUCP>sarge@metapsy.UUCP (Sarge Gerbode) writes:
> 
> I may have a somewhat radical viewpoint on this, but to me a symbol
> is defined as such by the intention of the conscious being using it.
> A symbol is a perceivable or detectable entity that is used to direct
> attention to a particular reality or potential reality.
> 
> Charges, current pulses, etc., are rightly regarded as symbols only to
> the extent that they are intended (ultimately) to be comprehended by
> some sort of conscious entity as indicating certain realities (or
> potential realities).  In the absence of such intentions, they are not
> symbols but mere charges, current pulses, etc.

Consider the real implementation of most programs, though.  They are written
in a high-level language like C, Pascal, FORTRAN, or COBOL.  That's what the
programmer knew about.  The compiler turns those symbols into symbols
that no human (usually) ever looks at or understands.  The end user sees
neither of these; he sees the user interface and understands what the
program is doing from yet another perspective.  What is the intelligence
that understands the machine language symbols?

One more step higher in complexity is to consider systems with complex
memories that load memory as they go (virtual memory kinds of systems)
which have a different physical configuration each time they are executed.

One more step takes us to self-modifying languages like LISP which can 
execute and build statements in their own language.  No human ever sees 
these intermediate symbols, but those constructs are processed and
are reflected in the behavior of the program.
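
A minimal sketch of that last step (in Python here, though LISP is the
classic case; the example is my own): a program that assembles source text at
run time, compiles it, and calls it, so the intermediate symbols never pass
in front of human eyes.

# Illustration: a program assembles new source text at run time, compiles
# it, and calls it.  No human ever reads the generated code.
def make_adder(n):
    src = f"def adder(x):\n    return x + {n}\n"
    namespace = {}
    exec(compile(src, "<generated>", "exec"), namespace)
    return namespace["adder"]

add_five = make_adder(5)
print(add_five(10))   # 15, computed by a function nobody wrote by hand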

Finally, we have really dynamic systems like neural networks that aren't
so much "loaded with a program" as "taught" what to do.  They like us,
don't have a static program controling the behavior outputs.  In a sense
our brains become "different machines" from minute to minute as we 
learn and act.  (Some might say that large portions of the population
remain changeless through video stimulation, but this effect has not
yet been proven :-)

Are all of these working with "symbols"?  If not, which are?  Is it
only humans that can identify a symbol?  What if all of the records
about punched cards were destroyed while card readers still existed,
would the little holes in card decks still be symbols? After the readers
were destroyed?

Brian Yoder



> -- 
> Sarge Gerbode -- UUCP:  pyramid!thirdi!metapsy!sarge
> Institute for Research in Metapsychology
> 431 Burgess Drive; Menlo Park, CA 94025


-- 
-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-
| Brian Yoder                 | answers *byoder();                            |
| uunet!ucla-cs!smcnet!byoder | He takes no arguments and returns the answers |
-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-

byoder@smcnet.UUCP (Brian Yoder) (01/03/90)

In article <6902@cbnewsh.ATT.COM>, mbb@cbnewsh.ATT.COM (martin.b.brilliant) writes:
> In article <1037@ra.stsci.edu> bsimon@stsci.EDU (Bernie Simon) writes:

[Parts 1-4 deleted for brevity]

> !5) Hence, while it may be possible to build a machine that thinks, it
> !does not follow that it will be possible to build a computer that
> !thinks, as not all physical activities can be performed by computers.

I think that this is being a little bit too restrictive.  It is pretty
clear (at least to those of us who believe that humans can think ;-) that
brains think.  However, without the "machine" through which it operates,
a brain couldn't do much about changing the world or discovering facts about
it.  To be fair, our potentially intelligent computer would have to have
some kind of "body" with senses and output devices (hands, wheels, or
at least a video display).

> !6) While there are good reasons to believe that thinking is a physical
> !activity, there are no good reasons for believing that thinking is the
> !execution of a computer program....
> 
> I wouldn't believe that for a minute.  I don't know exactly what
> thinking is, but it is probably something a computer can't do alone,
> but a machine with a computer in it might be able to do.

What would be missing is something for the computer/machine to think about
and a way for it to let us know that it thought something.  There's not
much to think about without any input.  As for what thinking is, the 
definition ought to include interpretation of information, the deduction 
of new information, and decisions about courses of action.  Isn't that
something brains and programs both do pretty well?

> !.... Nothing revealed either through
> !introspection or the examination of the anatomy of the brain leads to
> !the conclusion that the brain is operating as a computer....

Maybe we should look at it the other way around: we could have a computer
acting as a brain in this machine/computer.  If it interpreted sensory data,
selected actions, and orchestrated their implementation (say, by flapping
wings), isn't that accomplishing the same end as a brain would?


Brian Yoder


-- 
-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-
| Brian Yoder                 | answers *byoder();                            |
| uunet!ucla-cs!smcnet!byoder | He takes no arguments and returns the answers |
-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-<>-

bwk@mbunix.mitre.org (Kort) (01/03/90)

In article <6126@yunexus.UUCP> gall@yunexus.UUCP writes:

 > In article <85218@linus.UUCP> bwk@mbunix.mitre.org (Barry Kort) writes:

 > > Noise is to thinking as genetic mutations are to evolution.  Most
 > > noise is counterproductive, but occasionally the noise leads to a
 > > cognitive breakthrough.  That's called serendipity.
  
 > Don't you think you are playing fast and loose with these concepts?
 > What you say noise is when you equate it with genetic mutation is not
 > the same as what a radio operator knows it to be, what a philosopher
 > knows it to be, and what the mother of a teenager in Windsor, ON
 > knows it to be.

I'm using noise as a metaphor and analogizing it to random perturbations
in an otherwise deterministic system.  I find that analogies and
metaphors are useful tools in creative thinking, helping to direct
the mind toward deeper understanding of complex processes.

 > I'm not saying that you haven't defined your concepts well enough (AI
 > scientists have more than adequately defined it, for their purposes).
 > My question is "What licenses you to shift the meaning of any
 > particular term?"
  
My birthright licenses me to use my brain and mind to seek knowledge
and understanding, and to communicate interesting and intriguing ideas
with like-minded philosophers.

I hope you share the same birthright.  I would hate to see you
voluntarily repress your own opportunity to participate in the
exploration of interesting ideas.

--Barry Kort

flink@mimsy.umd.edu (Paul V Torek) (01/04/90)

kp@amdahl.uts.amdahl.com (Ken Presting) writes:
>If a numerical model of the brain is claimed to be accurate except for 
>"noise", and therefore claimed to be conscious, then it must be shown 
>that what is called "noise" is irrelevant to consciousness (or thinking).

Are you suggesting that
(a) Some types of conscious thought might go wrong were it not for the
	"noise", or
(b) Although a "noiseless" system might pass the Turing Test, "noise"
	might be necessary for consciousness to exist at all?
(Or something else?)

Most of the rest of your article suggests (a), but (b) strikes me as a
more interesting thesis.  I can't think of any argument against (b).
-- 
"There ain't no sanity clause" --Marx
Paul Torek					flink@mimsy.umd.edu

gilham@csl.sri.com (Fred Gilham) (01/04/90)

Brian Yoder writes:

| Consider the real implementation of most programs, though.  They are written
| in a high-level language like C, Pascal, FORTRAN, or COBOL.  That's what the
| programmer knew about.  The compiler turns those symbols into symbols
| that no human (usually) ever looks at or understands.  The end user sees
| neither of these; he sees the user interface and understands what the
| program is doing from yet another perspective.  What is the intelligence
| that understands the machine language symbols?

I'm pretty sure that you are using the word `symbol' here in different
ways.  In the case of a programmer writing in some programming
language, I would say that symbols (at various levels of abstraction)
are being used.  However, when the program is compiled, the symbols
disappear.  To say that the compiler turns the symbols into other
symbols is, I believe, to speak metaphorically.  The point is that
symbols only exist when there is someone to give them a meaning.

I envision the process in this way:


   meaning (in the mind)
     |                                          computerized
     |====>symbol==>(some physical pattern)==>syntactic transformation
                                                     |
                                                     |
   meaning<==symbol<==(some physical pattern)<=======|
   (back in
    the mind)

It seems to me that the computer starts and ends with the physical
patterns.  Everything else happens in our heads.
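
A toy version of the "syntactic transformation" box (my own sketch): a table
that rewrites one character pattern into another.  Nothing in the lookup
refers to what the patterns mean; any meaning sits with the people who wrote
the table and who read the output.

# Toy "syntactic transformation": character patterns map to other character
# patterns by table lookup.  The meaning of the patterns lives with the
# people who wrote the table and who read the output, not in the lookup.
RULES = {
    "HELLO": "BONJOUR",       # the table's author knew these were greetings;
    "GOODBYE": "AU REVOIR",   # the transformation does not
}

def transform(pattern):
    return RULES.get(pattern, "???")

print(transform("HELLO"), transform("GOODBYE"))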

The fact that the transformations themselves can be described
symbolically tends to fool people into thinking that the computer is
actually using and manipulating symbols, or even manipulating meaning.
This has been described as a ``hermeneutical hall of mirrors'', where
we project onto the computer our own thought processes.  The computer
manipulates the patterns in ways that are meaningful to us; therefore
the computer must be doing something involving meaning.  But it isn't,
any more than the Eliza program actually understood the people that
talked to it, even though THEY thought it did.

-Fred Gilham      gilham@csl.sri.com

kp@uts.amdahl.com (Ken Presting) (01/04/90)

In article <21606@mimsy.umd.edu> flink@mimsy.umd.edu (Paul V Torek) writes:
>kp@amdahl.uts.amdahl.com (Ken Presting) writes:
>>If a numerical model of the brain is claimed to be accurate except for 
>>"noise", and therefore claimed to be conscious, then it must be shown 
>>that what is called "noise" is irrelevant to consciousness (or thinking).
>
>Are you suggesting that
>(a) Some types of conscious thought might go wrong were it not for the
>	"noise", or
>(b) Although a "noiseless" system might pass the Turing Test, "noise"
>	might be necessary for consciousness to exist at all?
>(Or something else?)
>
>Most of the rest of your article suggests (a), but (b) strikes me as a
>more interesting thesis.  I can't think of any argument against (b).

I have in mind the "something else".

The point about noise in chaotic systems arises as an objection to the
argument that if all other attempts at AI fail, at least we can
numerically model the physics of the brain.  For this argument to work,
we need to be sure that we really *can* make an accurate model.  Chaotic
systems can mechanically amplify small discrepancies in initial state,
such as noise.  Numerical models trade speed for precision, so if a model
is to have the arbitrary precision needed to eliminate all discrepancies,
the model would run well behind real time, and fail the Turing test.

I think (b) is reversed.  Random brain events are probably important in
human behavior, thus affecting the Turing test.  But at least the sort
of thinking that is used to evaluate decision functions or logical
arguments seems to depend little on randomness.

Creative thinking - inventing proofs, constructing metaphors - could very
well profit from random influences.

flink@mimsy.umd.edu (Paul V Torek) (01/11/90)

I asked:
pt>Are you suggesting that
pt>(a) Some types of conscious thought might go wrong were it not for the
pt>	"noise", or
pt>(b) Although a "noiseless" system might pass the Turing Test, "noise"
pt>	might be necessary for consciousness to exist at all?
pt>(Or something else?)

kp@amdahl.uts.amdahl.com (Ken Presting) writes:
>I have in mind the "something else".
>
>The point about noise in chaotic systems arises as an objection to the
>argument that if all other attempts at AI fail, at least we can
>numerically model the physics of the brain.  For this argument to work,
>we need to be sure that we really *can* make an accurate model.  Chaotic
>systems can mechanically amplify small discrepancies in initial state

But that doesn't matter unless (a) is true.  As many people pointed out
in reply to you, the fact that an AI system doesn't duplicate the thought
processes of any *particular* person, is no problem for strong AI.  The
fact that my thought processes are different from yours doesn't
necessarily mean I'm wrong (or that I'm not really thinking) -- it just
means I'm different.

Now suppose that (a) *were* true -- the "noiseless" system goes wrong,
because (say) it can't think creatively, because "noise" is necessary
to do so.  Now *that* would be a problem.

>I think (b) is reversed.  Random brain events are probably important in
>human behavior, thus affecting the Turing test.  But at least the sort
>of thinking that is used to evaluate decision functions or logical
>arguments seems to depend little on randomness.

I agree that any particular person's behavior probably depends on random
events in her brain, but I doubt that this would affect the Turing test
-- a noiseless system would not respond "wrongly", just differently.  That's
my hunch.  But let's let that pass.  Your last sentence says that a
noiseless system would probably pass those aspects of the Turing Test
which involve such tasks as you mention.  I agree, but what (b) was
suggesting was that the Turing Test might not be an adequate test of
whether a "thinker" is conscious.  (And if you define "thought" such
that it must be conscious, then non-conscious things can't think.)
-- 
"There ain't no sanity clause" --Marx
Paul Torek					flink@mimsy.umd.edu

kp@uts.amdahl.com (Ken Presting) (01/11/90)

In article <21745@mimsy.umd.edu> flink@mimsy.umd.edu (Paul V Torek) writes:
>I asked:
>pt>Are you suggesting that
>pt>(a) Some types of conscious thought might go wrong were it not for the
>pt>	"noise", or
>
>kp@amdahl.uts.amdahl.com (Ken Presting) writes:
> .....  For this {"backstop"} argument to work,
>>we need to be sure that we really *can* make an accurate model.  Chaotic
>>systems can mechanically amplify small discrepancies in initial state
>
>But that doesn't matter unless (a) is true.  As many people pointed out
>in reply to you, the fact that an AI system doesn't duplicate the thought
>processes of any *particular* person, is no problem for strong AI.  The
>fact that my thought processes are different from yours doesn't
>necessarily mean I'm wrong (or that I'm not really thinking) -- it just
>means I'm different.

The divergence of the model from the real system means that for any
person, at any time, the model would diverge significantly from that
person's states (assuming that the brain is significantly chaotic, for the
purpose of discussion).  So it's not just that the numerical model can't
simulate me or you, it can't simulate *anybody*, *ever*.  So if we want to
claim that the simulation is close enough to brain function to be
simulated thought, then we have to show that the chaotic aspects of brain
function are inessential to thought.

BTW, my thanks to you and to the others in comp.ai who are participating
in the philosophical discussions here.  You folks have helped me to
clarify my ideas by your constructive and thoughtful comments.  This group
is a good example of the solid intellectual value of Usenet.

hankin@sauron.osf.org (Scott Hankin) (01/11/90)

kp@uts.amdahl.com (Ken Presting) writes:

>The divergence of the model from the real system means that for any
>person, at any time, the model would diverge significantly from that
>person's states (assuming that the brain is significantly chaotic, for the
>purpose of discussion).  So it's not just that the numerical model can't
>simulate me or you, it can't simulate *anybody*, *ever*.  So if we want to
>claim that the simulation is close enough to brain function to be
>simulated thought, then we have to show that the chaotic aspects of brain
>function are inessential to thought.

However, if the brain is sufficiently chaotic, we would have to assume that
a "perfect duplicate" (such as might come out of a matter duplicator) would
immediately diverge from the original.  I fail to see how that matters.
Would the duplicate therefore not be thinking?  Would he/she not be the
same as the original?  I suspect the answer to the first question is no,
for the duplicate would still function in the same manner as the original,
who, we can only assume, thinks.  I also suspect that the answer to the
second is no.  The duplicate would cease being the same as the original at
the point of duplication.  They would be substantially the same, to be
sure, but start to differ almost immediately.

I don't feel that the issue is the simulation of any given personality, but
rather whether a simulation could have thought processes as close to yours
as yours are to mine.

- Scott
------------------------------
Scott Hankin  (hankin@osf.org)
Open Software Foundation

bls@cs.purdue.EDU (Brian L. Stuart) (01/12/90)

In article <85YE02kP7duL01@amdahl.uts.amdahl.com> kp@amdahl.uts.amdahl.com (Ken Presting) writes:
>The divergence of the model from the real system means that for any
>person, at any time, the model would diverge significantly from that
>person's states (assuming that the brain is significantly chaotic, for the
>purpose of discussion).  So it's not just that the numerical model can't
>simulate me or you, it can't simulate *anybody*, *ever*.  So if we want to
>claim that the simulation is close enough to brain function to be
>simulated thought, then we have to show that the chaotic aspects of brain
>function are inessential to thought.
>

This is not what's really at issue here.  To simulate (or possess)
intelligence is not the same as to simulate some particular being that
possesses intelligence.
We don't need to accurately simulate anyone.  The questions that are
significant here are: first, are the chaotic properties of the brain
necessary for intelligence?  If so, then what characteristics of the
brain's attractor are necessary?  If these characteristics are also
sufficient, then there is no reason that any system possessing the
same characteristics in its attractor will not also be intelligent.
If the attractor characteristics are not sufficient, then we have the
problem of finding out what else is necessary.

In general, just because small changes in the input to chaotic systems
can lead to qualitatively different behavior does not mean that that
behavior is unconstrained.  It is still constrained by the system's
attractor.  Simulating an existing intelligence is a red herring;
natural intelligent systems don't simulate others, so artificial
ones likewise need not.
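
A crude numerical illustration of "constrained by the attractor" (my own,
using the logistic map): two trajectories started a hair apart diverge point
by point, yet their long-run statistics come out nearly identical, because
both are confined to the same attractor.

# Two chaotic trajectories diverge in detail, yet share the same long-run
# statistics: the behavior is unpredictable but still constrained.
def logistic(x, r=3.9):
    return r * x * (1.0 - x)

def histogram(x0, steps=100000, bins=10):
    counts = [0] * bins
    x = x0
    for _ in range(steps):
        x = logistic(x)
        counts[min(int(x * bins), bins - 1)] += 1
    return [c / steps for c in counts]

h1 = histogram(0.500000000)
h2 = histogram(0.500000001)   # a hair away from the first starting point
print(max(abs(a - b) for a, b in zip(h1, h2)))   # small, despite divergence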

>BTW, my thanks to you and to the others in comp.ai who are participating
>in the philosophical discussions here.  You folks have helped me to
>clarify my ideas by your constructive and thoughtful comments.  This group
>is a good example of the solid intellectual value of Usenet.

Ditto.

Brian L. Stuart
Department of Computer Science
Purdue University

ercn67@castle.ed.ac.uk (M Holmes) (01/12/90)

Just a thought while hairs are being split on the difference between
thinking computers, thinking machines, and thinking hybrids of computers
and machines.

It seems to have been suggested that computers would need to be able to
manipulate the environment (basically have senses and have hands) in
order to do what we call thinking. I'm not sure I'd agree but I think
it's irrelevant anyway, for the following reasons.

As a thought experiment (which is all thinking computers/machines are at
the present time) suppose that we simulate a world within a computer
system. Then we build an artificial intelligence embedded within this
simulation and allow "it" a simulated ability to sense and manipulate
the simulated environment. This would seem to fulfill the criteria for a
hybrid computer/machine which can sense and manipulate the "real" world.
It would, however, simply be a program in a computer system. The point
being that both sense and manipulation are simply forms of information
processing, which is what computers do anyway.
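
A minimal sketch of the thought experiment (entirely illustrative): a
simulated one-dimensional world and an agent embedded in it, with a simulated
sense and a simulated effector.  From the outside, it is all just one program
processing information.

# Toy version of the thought experiment: a simulated world, and an agent
# embedded in it that "senses" and "manipulates" that world.  From the
# outside it is all just one program processing information.
world = {"agent_pos": 0, "food_pos": 7}

def sense():
    """The agent's simulated sense: which way is the food?"""
    return 1 if world["food_pos"] > world["agent_pos"] else -1

def act(direction):
    """The agent's simulated effector: move one step."""
    world["agent_pos"] += direction

while world["agent_pos"] != world["food_pos"]:
    act(sense())

print("agent reached the food at position", world["agent_pos"])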

It could be argued that this would just be "simulated thinking" but it
isn't clear that this would be any different from the real thing.

 -- A Friend of Fernando Poo