[net.misc] Re. Turing Test

jim (12/04/82)

In reply to Tom Blenko's article on Searle: Searle does *not* make an
analogy between water boiling and AI programs, and he also does *not* claim
that it is completely impossible to have intelligence with silicon chips
(at least, not in the talk I saw, regardless of what he said in his
article).  What he does claim, and in this I think he is probably right,
is that it is impossible to get intelligence out of a system that merely
manipulates formal symbols.
    Searle uses the water boiling analogy to demonstrate how a macroscopic
phenomenon (namely the phase transition from a liquid to a gas in water)
can occur as the result of a microscopic one (namely the increased jiggling
of the molecules when the temperature increases). Such cooperative phenomena are
commonplace in nature, another example being a ferromagnet which becomes
magnetized when the temperature is reduced below the critical temperature.
In the ferromagnet case, the magnetization results from the lining up of
the magnetic dipole moments of the molecules or atoms in the magnet, and
is well described by the Ising theory or by the Landau theory.
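The ferromagnet example can be made concrete with a toy two-dimensional
Ising model. The sketch below (in Python, purely illustrative and not part
of the original post; lattice size, temperatures, and sweep count are
arbitrary choices) runs a Metropolis Monte Carlo simulation in units where
J = k_B = 1, and shows the magnetization per spin staying large below the
critical temperature (T_c ~ 2.27) and collapsing well above it.

```python
import random
import math

def ising_magnetization(L=16, T=1.5, sweeps=400, seed=0):
    """Mean |magnetization| per spin of an L x L Ising model at
    temperature T, estimated by Metropolis Monte Carlo (J = k_B = 1)."""
    rng = random.Random(seed)
    spins = [[1] * L for _ in range(L)]          # start fully ordered
    for _ in range(sweeps):
        for _ in range(L * L):
            i, j = rng.randrange(L), rng.randrange(L)
            # Sum of the four nearest neighbours (periodic boundaries).
            nb = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j]
                  + spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
            dE = 2 * spins[i][j] * nb            # energy cost of a flip
            if dE <= 0 or rng.random() < math.exp(-dE / T):
                spins[i][j] = -spins[i][j]
    m = sum(sum(row) for row in spins) / (L * L)
    return abs(m)

# Below T_c the dipoles line up (|m| near 1); well above T_c they do not.
print(ising_magnetization(T=1.5), ising_magnetization(T=4.0))
```

The point of the toy model is exactly the one made above: the macroscopic
order parameter (magnetization) emerges from nothing but the microscopic
spin-flip rule.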
    Neurons in the brain can be treated by the same kind of mathematical
techniques which have been successfully used to describe cooperative
phenomena in physics. Statistical mechanical 'order parameters' can
be derived which can then be related to the macroscopic brain states.
A good reference on the technicalities is: H. Haken, "Synergetics:
An Introduction", Springer, Heidelberg, 1980. In fact, an article in Physica 5D
from May of this year describes *exactly* such a treatment for the neocortex.
    This is a far cry from a lisp program, which definitely does *not*
evolve from any microscopic aspects of the underlying hardware, but is
artificially layered on from the outside. Searle made the point when I
saw him that nobody would mistake a simulation of a hurricane on a 
computer for the real thing, so why claim a lisp program is anything more than
a simulation?
    Incidentally, I don't think it is impossible to construct an intelligent
machine, just that most AI researchers are going about it the wrong way.
The architecture necessary to have an intelligent machine would have to
very closely follow that of the brain, and intelligence would have to 
evolve out of that architecture rather than out of a program which 
someone hacked into that architecture.  And, until we know more about
the architecture of the brain, *real* AI will, in my opinion, probably
remain elusive.

					Jim Kempf
					arizona!jim

tombl (12/07/82)

Regarding jim at arizona's comments on Searle and the connection to
mathematical modelling of brain activity:

	With regard to Searle's claims, his argument was that just as
the properties of individual water molecules are in a causal
relationship with macroscopic properties of water, so are AI programs
causal with respect to brain states (he does not define these, and
neither am I). What he went on to claim was that the macroscopic
properties result not only from properties of individual molecules, but
from the properties of the embedding medium (the laws of statistical
mechanics?); by analogy, since silicon does not recreate the properties
of the embedding medium (the brain), AI programs cannot recreate brain
states.

	The mistake Searle makes, and which you repeat by making your
reference to Haken, is to assume that developing a mathematical theory
relating microscopic properties to macroscopic behavior somehow
introduces additional causal factors. A simpler version of the same
issue is demonstrated by the following example: if I stick an electrode
near a particular part of the brain and record the electrical activity
there (evoked potential), I can correlate an emotional state, say
fear, with level of activity recorded at the electrode. I now have a
model of the brain. When neurons in a particular part of the brain fire
(microscopic activity), fear is observed (macroscopic state). Yet it
would be silly for me to claim that my theory contributes in any sense
to the process by which the mental state is produced. The only causal
factor involved is the activity of neurons, which have no knowledge
whatsoever of my theory.  I can call my theory cerebral mechanics;
I can write a book about it; my theory may become more sophisticated
and elaborate, and may provide some very interesting explanation of
phenomena produced by populations of interacting elements; I can
calculate constants to fit my model to observed behavior until I am
blue in the face. The fact remains, however, that it is the behavior
of the "microscopic" elements and their behavior alone which is
producing the macroscopic states.
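The electrode example is, at bottom, a point about correlation versus
causation. The hypothetical sketch below (Python; every number and name is
invented for illustration, not taken from any actual experiment) computes
the Pearson correlation between recorded firing rates and a rated fear
level. The coefficient can be arbitrarily close to 1 and the "theory" still
contributes nothing to producing either signal.

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented data: evoked-potential firing rates (spikes/s) and a 0-10
# fear rating, observed together across ten trials.
firing_rate = [12, 15, 20, 24, 30, 33, 40, 44, 51, 55]
fear_rating = [1, 1, 2, 3, 4, 4, 6, 7, 8, 9]
print(round(pearson_r(firing_rate, fear_rating), 3))  # high r, zero causation
```

A high r lets the model *predict* the macroscopic state from the
microscopic one; it does not add a causal factor to the neurons.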

Enough of that. I do have a question for those still bothering to read
these, especially those, like Jim, who feel for some reason that we
need to somehow recreate the brain in order to emulate its behavior:

	Why? I gather that a good part of the motivation, as in
Searle's case, is that we have these conscious states and we cannot
imagine a machine having conscious states. Isn't it possible that our
conscious states are just a grand illusion? (Reference: Stevan Harnad,
"Consciousness: An Afterthought", Cognition and Brain Theory, 5(1),
pp. 29-48. The author describes a finite state machine implementation
of human consciousness.) A simulation of a hurricane certainly is not a
hurricane, but how are they different? Oh well...

	Tom Blenko
	decvax!teklabs!tekmdp!tombl
	ucbvax!teklabs!tekmdp!tombl

lemmon (12/08/82)

Jim Kempf has said some things that I think need a response.  First,
he comments on the phase change analogy.  Indeed it is possible
(I presume) to make statistical predictions about assemblies of neurons
(though they interact differently than iron atoms), but to what purpose?
One could also do statistical studies of the behavior of circuits
in a digital computer, and engineers concerned with power consumption
and heat flow might do this.  But how would such calculations help
anyone debug, say, a new compiler? Or figure out why a net message
went astray?  Similarly such calculations about the brain might help
in explaining epilepsy or brain waves.

About the relation between simulation and reality, it seems to me
that when it is information processing (including intelligence) that
is being simulated, then a simulation is just as good as the "real"
thing (assuming the simulation is accurate and we are not interested
in efficiency).  Consider whether a commercial chess player "really"
plays chess.  "Actually", it only responds to button pushes by lighting
certain patterns of lines on 7-segment displays.  But it can still beat
humans.  Would you say that only those players with robot arms
"really" play chess?

Jim Kempf also says that "The architecture necessary to have an
intelligent machine would have to very closely follow that of the brain ...".
He seems to claim knowledge that none of us have yet.  How closely
does the architecture of a flying machine have to follow that of a bird?
All natural flying machines work with flapping wings; no human-built
ones do (except for a few toys, etc.)  It may be that the brain's
structure is the best way to do intelligence, or even the only way;
but until we know more, working from the other end makes sense.
Anyway, computer programs are excellent ways of trying out ideas,
either for explaining natural intelligence or for engineering
the artificial variety.

	Alan Lemmon
	linus!lemmon
	lemmon@mitre or lemmon@usc-ecl

sher (12/09/82)

I just read an article which argued that one could simulate intelligence
without being intelligent.  It offered as a prominent example the claim
that a simulation of a hurricane on a computer could never be mistaken
for a real hurricane (an interesting question in itself: what if a very
big fan were one of the said computer's I/O devices?).  I am interested
in exactly how a computer simulation of intelligence differs from "real
intelligence".  If no one else is interested, or someone else has already
said the same thing, just ignore me.

-David Sher