[comp.ai] The limitations of logic

ray@bcsaic.UUCP (Ray Allis) (12/06/88)

In <696@quintus.UUCP> ok@quintus.uucp (Richard A. O'Keefe) says:
>In order for a digital system to emulate a neural net adequately,
>it is not necessary to model the entire physical universe, as Ray
>Allis seems to suggest.  It only has to emulate the net.

Emulation, simulation and modelling are all *techniques for analysis*
of the Universe.  The trap analysts get into is to forget that the
model is not identical to the thing modelled.  Emulating, simulating
or modelling a neural net, however "adequately", does not *duplicate*
a neural net or its behavior.  You don't expect Revell models of F-16s
to do much dogfighting; why would you expect a model of a mind to think?

>>You see, all the ai work being done on digital computers is modelling using
>>formal logic.

>Depending on what you mean by "formal logic", this is either false or
>vacuous.  All the work on neural nets uses formal logic too (whether the
>_nets_ do is another matter).

Well sheesh!  How many interpretations of the phrase "formal logic" ARE
there?  I meant "form-al": of or pertaining to form, disregarding
content.  I realize the phrase is redundant; I can only plead seduction by
common usage.

I meant to include most neural net work in the phrase "ai work".

ok@quintus.uucp (Richard A. O'Keefe) (12/07/88)

In article <9020@bcsaic.UUCP> ray@bcsaic.UUCP (Ray Allis) writes:
>In <696@quintus.UUCP> ok@quintus.uucp (Richard A. O'Keefe) says:
>>In order for a digital system to emulate a neural net adequately,
>>it is not necessary to model the entire physical universe, as Ray
>>Allis seems to suggest.  It only has to emulate the net.
>
>Emulation, simulation and modelling are all *techniques for analysis*
>of the Universe.  The trap analysts get into is to forget that the
>model is not identical to the thing modelled.  Emulating, simulating
>or modelling a neural net, however "adequately", does not *duplicate*
>a neural net or its behavior.  You don't expect Revell models of F-16s
>to do much dogfighting; why would you expect a model of a mind to think?

By "neural net" I have meant all along the kind of thing AI people call
a neural net, that is a computational device of nodes and weighted links.
I haven't meant a network of neurons.  That may be the cause of the
misunderstanding.  The point is that a *connectionist* net is not a
natural part of the Universe to be modelled, but is *itself* a formal
model, and it *is* possible for one formal model to completely duplicate
the behaviour of another.  I have never alleged that a model of a mind
would think (or for that matter, that it would not).  All I have claimed
is that *one* formal model (a digital system) is capable of emulating
*another* formal model (a connectionist net) to the point where the "real"
thing (a connectionist net) and its emulation cannot be distinguished.
I neither claim nor deny that either can model collections of
biological neurons, minds, human beings, or even politicians.
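
To make the emulation point concrete, here is a sketch in C of the sort
of net I mean; the topology, weights, and thresholds are invented for
illustration, not taken from any particular system.  The net *is* this
formal object and nothing more, so any digital machine running the
program reproduces the net's behaviour exactly:

    #include <stdio.h>

    /* A toy connectionist net: 2 inputs, 2 hidden nodes, 1 output,
       with weighted links and hard thresholds.  All numbers invented. */

    static double step(double x) { return x >= 0.0 ? 1.0 : 0.0; }

    int main(void)
    {
        double in[2]      = { 1.0, 0.0 };         /* input activations    */
        double w_ih[2][2] = { { 0.5, -0.4 },      /* input->hidden links  */
                              { 0.3,  0.8 } };
        double w_ho[2]    = { 1.0, -0.7 };        /* hidden->output links */
        double hid[2], out = 0.0;
        int i, j;

        for (j = 0; j < 2; j++) {                 /* hidden layer */
            double sum = 0.0;
            for (i = 0; i < 2; i++)
                sum += in[i] * w_ih[i][j];
            hid[j] = step(sum - 0.2);             /* hidden threshold 0.2 */
        }
        for (j = 0; j < 2; j++)                   /* output layer */
            out += hid[j] * w_ho[j];
        printf("output = %g\n", step(out - 0.1)); /* output threshold 0.1 */
        return 0;
    }

Run it on any two digital machines and you get the same answer; that is
all I mean by one formal model duplicating another.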

>>>You see, all the ai work being done on digital computers is modelling using
>>>formal logic.
>
>>Depending on what you mean by "formal logic", this is either false or
>>vacuous.  All the work on neural nets uses formal logic too (whether the
>>_nets_ do is another matter).
>
>Well sheesh!  How many interpretations of the phrase "formal logic" ARE
>there?  I meant "form-al": of or pertaining to form, disregarding
>content.  I realize the phrase is redundant; I can only plead seduction by
>common usage.

How many interpretations?  Uncountably many.  "Formal logic" is usually
taken as meaning some variant of predicate calculus.  Seduction by
common usage would be an excellent excuse, if only it were true.
Differential equations are not normally regarded as "formal logic",
for example, though we employ (if we are wise!) formal logic in
reasoning about them.  The point I was making is that connectionist
nets provide no mystical escape from "formal logic" (whatever that is).

The claim that "all the AI work being done on digital computers is	
modelling using formal logic" simply isn't true:  as soon as you connect
the thing to the real world with cameras, pressure sensors, "arms", and
so on, you have a thing which is behaving in the _real_ world, not a
model of it.  I don't know what a Revell model is, but the Air Force
expect the Artificial Wingman project to deliver a computer program
which when embodied in a computer in an appropriate aircraft should be
able to do real dogfighting.

bph@buengc.BU.EDU (Blair P. Houghton) (12/08/88)

In article <9020@bcsaic.UUCP> ray@bcsaic.UUCP (Ray Allis) writes:
>In <696@quintus.UUCP> ok@quintus.uucp (Richard A. O'Keefe) says:
>>In order for a digital system to emulate a neural net adequately,
>>it is not necessary to model the entire physical universe, as Ray
>>Allis seems to suggest.  It only has to emulate the net.
>
>Emulation, simulation and modelling are all *techniques for analysis*
>of the Universe.  The trap analysts get into is to forget that the
>model is not identical to the thing modelled.  Emulating, simulating
>or modelling a neural net, however "adequately", does not *duplicate*
>a neural net or its behavior.  You don't expect Revell models of F-16s
>to do much dogfighting; why would you expect a model of a mind to think?

(Ignoring the fact that a Revell model is not "adequate" for flight...)

Because I believe the neuron to be overspecified for the purposes of
thought, having been required also to be alive and to provide for its own
energy transport, conversion, and utilization.

I don't expect that an artificial neural network will be able to
regenerate if destroyed on a cellular level (real neural tissue has
only a limited capacity for this anyway), nor indeed to grow itself in
a womb; but I certainly
do expect it to be able to carry on all the information processing
innate to a real neural network.

Better, I believe the artificial net, not having to support itself, will
be capable of thinking longer, harder, and in a smaller space than the
real net.

Further, a large part of the activity of real neural networks is not thought
per se, but processing of sensory, autonomic, and efferent information
without regard to content.  Such activity is modelled adequately by
ordinary electronics such as logic devices [see McCulloch and Pitts, 1943].
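
Since that reference is doing some work here, a McCulloch-Pitts unit is
small enough to write down.  This C sketch is my own toy (the weights
and thresholds are the textbook ones, not from any particular net); one
and the same unit acts as an AND gate or an OR gate depending only on
its threshold:

    #include <stdio.h>

    /* A McCulloch-Pitts threshold unit: fires (outputs 1) exactly when
       the weighted sum of its inputs reaches the threshold. */

    static int mp_unit(const int *in, const int *w, int n, int threshold)
    {
        int i, sum = 0;
        for (i = 0; i < n; i++)
            sum += in[i] * w[i];
        return sum >= threshold;          /* all-or-none firing */
    }

    int main(void)
    {
        int w[2] = { 1, 1 };              /* unit weights */
        int a, b;

        for (a = 0; a <= 1; a++)
            for (b = 0; b <= 1; b++) {
                int in[2] = { a, b };
                printf("%d %d -> AND=%d OR=%d\n", a, b,
                       mp_unit(in, w, 2, 2),    /* threshold 2: AND */
                       mp_unit(in, w, 2, 1));   /* threshold 1: OR  */
            }
        return 0;
    }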

I don't expect small children to need real F-16s to excite their
imaginations; why would you expect a topologically and neuromimetically
accurate model of a brain NOT to think?

				--Blair
				  "And what does it take in its coffee?"

bwk@mitre-bedford.ARPA (Barry W. Kort) (12/10/88)

In article <1628@buengc.BU.EDU> bph@buengc.bu.edu (Blair P. Houghton) 
wonders what we would expect a topologically and neuromimetically
accurate model of a brain to think.

I imagine the simulated brain would take stock of the situation
and inquire of its creator, "To what purpose or goal shall I devote
my ability to think?"

And if you were the creator of that simulated brain, how would you answer
that simulated question?

--Barry Kort

bph@buengc.BU.EDU (Blair P. Houghton) (12/12/88)

In article <42836@linus.UUCP> bwk@mbunix (Kort) writes:
>In article <1628@buengc.BU.EDU> bph@buengc.bu.edu (Blair P. Houghton) 
>wonders what we would expect a topologically and neuromimetically
>accurate model of a brain to think.
>
>I imagine the simulated brain would take stock of the situation
>and inquire of its creator, "To what purpose or goal shall I devote
>my ability to think?"
>
>And if you were the creator of that simulated brain, how would you answer
>that simulated question?

"My child, to whatever your little double-poly-technology heart desires."

				--Blair
				  "Practicing for RoboDaddyhood, or
				   a career in VLSI genetics..."

ap1i+@andrew.cmu.edu (Andrew C. Plotkin) (12/12/88)

/ In article <1628@buengc.BU.EDU> bph@buengc.bu.edu (Blair P. Houghton)
/ wonders what we would expect a topologically and neuromimetically
/ accurate model of a brain to think.
/
/ I imagine the simulated brain would take stock of the situation
/ and inquire of its creator, "To what purpose or goal shall I devote
/ my ability to think?"

I assume this simbrain has been taught language in the same way our brains are.
(If it was created with already-existing patterns taken from a human, it's real
easy to figure out what it would think -- just ask the human it's simulating!)
So I would assume it to have developed a personality over the years of its
maturing; I doubt it would placidly look to its creators for direction. (Unless
its education had been designed to indoctrinate it with the fact that it was a
servant.)

--Z

bwk@mitre-bedford.ARPA (Barry W. Kort) (12/13/88)

In a previous article, I speculated on the thoughts of a
topologically and neuromimetically accurate model of a brain.
I wrote:

 > I imagine the simulated brain would take stock of the situation
 > and inquire of its creator, "To what purpose or goal shall I devote
 > my ability to think?"

In article <YXci3Ey00Xo48FpkVH@andrew.cmu.edu> ap1i+@andrew.cmu.edu
(Andrew C. Plotkin) comments:

 > I doubt it would placidly look to its creators for direction.
 > (Unless its education had been designed to indoctrinate it
 > with the fact that it was a servant.)

OK.  Let's say the simulated brain considers itself to be a
self-directed free thinker.  The question remains.  How does
it select the subject of its contemplation?

--Barry Kort

bph@buengc.BU.EDU (Blair P. Houghton) (12/14/88)

In article <YXci3Ey00Xo48FpkVH@andrew.cmu.edu> ap1i+@andrew.cmu.edu (Andrew C. Plotkin) writes:
>/ In article <1628@buengc.BU.EDU> bph@buengc.bu.edu (Blair P. Houghton)
>/ wonders what we would expect a topologically and neuromimetically
>/ accurate model of a brain to think.
>
>I assume this simbrain has been taught language in the same way our brains are.
>(If it was created with already-existing patterns taken from a human, it's real
>easy to figure out what it would think -- just ask the human it's simulating!)

This is something we dealt with in my first Neural Science course.  Our
brains are sufficiently similar as to be indistinguishable for the purposes
of survival, yet inescapably dissimilar for the purposes of comparison.

Imagine a million identical brains, all born with the same neural
mappings and chemical levels, but in a million ordinary humans in
ordinary human environments.  Even two who are Siamese twins will have
slight differences in perspective, literally, and will develop some
differing ideas.  Learning from one's own thoughts is one of the most
prevalent human mental activities (being a part of deduction,
imagination, intuition, etc.) and no two of the many identical brains
can be expected to always have parallel thoughts, because of these
differences in their perspective; hence they will all be trained
differently.  The upshot is that the environment is capable of
producing all of the differences between humans, yet cannot account
for all of the similarities.
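
A cartoon of that compounding in C (all constants invented): two
"brains" start in identical states; one sees every stimulus from a
slightly shifted perspective, and because each also learns from its own
state, the difference grows rather than washing out:

    #include <stdio.h>

    int main(void)
    {
        double a = 1.0, b = 1.0;        /* identical initial states */
        double stimulus = 0.5;          /* the shared environment   */
        int t;

        for (t = 1; t <= 10; t++) {
            a = 1.1 * a + stimulus;          /* learns from own state + world   */
            b = 1.1 * b + stimulus + 0.001;  /* same world, shifted perspective */
            printf("t=%2d  divergence = %g\n", t, b - a);
        }
        return 0;
    }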

In response to your statement, therefore, the fact that SimBrain is in
a Bakelite box will make it very different from the teacher, as will the
teacher-student relationship.  We can't know what to expect it to think.

				--Blair
				  "Just ask the human _I'm_ simulating."

ap1i+@andrew.cmu.edu (Andrew C. Plotkin) (12/14/88)

bph@buengc.BU.EDU (Blair P. Houghton) writes....

/ In article <YXci3Ey00Xo48FpkVH@andrew.cmu.edu> ap1i+@andrew.cmu.edu (Andrew C. Plotkin) writes:
/>/   bph@buengc.bu.edu (Blair P. Houghton)
/>/ wonders what we would expect a topologically and neuromimetically
/>/ accurate model of a brain to think.
/>
/>I assume this simbrain has been taught language in the same way our brains are.
/>(If it was created with already-existing patterns taken from a human, it's real
/>easy to figure out what it would think -- just ask the human it's simulating!)
/
/  Our
/ brains are sufficiently similar as to be indistinguishable for the purposes
/ of survival, yet inescapably dissimilar for the purposes of comparison.
/ [...]
/ In response to your statement, therefore, the fact that SimBrain is in
/ a bakelite box will make it very different from the teacher, as will the
/ teacher-student relationship.  We can't know what to expect it to think.

I understand that, no problem. I meant something like: If it's a good
simulation, and it's simulating a particular person, we can get a good idea of
what it's thinking by asking that person "What would you think if you found
yourself looking out of a camera lens? (and whatever other sensory effects the
simbrain is receiving.)" You won't get a neuron-perfect answer, but it gets the
idea across.
    In other words, your question sounded like "System X is a simulation of
system Y.  I wonder how X will behave."  Not a philosophically helpful question.

   On the other hand, if the simbrain *was* taught language and so forth the
usual way, through years of experience, my answer becomes "It will think like a
person who has grown up with these experiences." This is, of course, much harder
to answer.  (Read "impossible", given our current knowledge of psychology,
although speculation is of course possible; see Robert Heinlein, Jeffrey
Carver, et al...  :-) )

--Z

bwk@mitre-bedford.ARPA (Barry W. Kort) (12/15/88)

In article <1656@buengc.BU.EDU> bph@buengc.bu.edu (Blair P. Houghton)
writes about empowering a topologically and neuromimetically accurate
model of a brain to select a worthwhile goal of cerebration.

 > "My child, to whatever your little double-poly-technology heart desires."

I love it!

--Barry Kort

bph@buengc.BU.EDU (Blair P. Houghton) (12/16/88)

In article <42962@linus.UUCP> bwk@mbunix (Barry Kort) writes:
>
>OK.  Let's say the simulated brain considers itself to be a
>self-directed free thinker.  The question remains.  How does
>it select the subject of its contemplation?

The same way you do:  as a response to continuous stimulation by your
environment, and the occasional random fluctuation in attention level,
along with the mechanisms that keep churning your thoughts within
your memory apparatus.

If it thinks, I would have no doubt that ceasing to stimulate it
would allow it to dream.
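
As code, a toy of that mechanism might look like the following C sketch
(the subjects, drive levels, and size of the fluctuation are all
invented): each candidate subject has an ongoing stimulation level, a
random fluctuation is added, and attention goes to whatever comes out
on top.  Set the drives to zero and the choice is pure fluctuation,
which is as good a cartoon of dreaming as a few lines will buy:

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    int main(void)
    {
        const char *subject[3] = { "hunger", "this thread", "a stray image" };
        double drive[3]        = { 0.4, 0.7, 0.1 };  /* environmental stimulation */
        int t, i;

        srand((unsigned)time(NULL));
        for (t = 0; t < 5; t++) {                    /* five moments of attention */
            int best = 0;
            double bestv = -1.0;
            for (i = 0; i < 3; i++) {
                double noise = 0.5 * rand() / RAND_MAX;  /* random fluctuation */
                double v = drive[i] + noise;
                if (v > bestv) { bestv = v; best = i; }
            }
            printf("moment %d: attending to %s\n", t, subject[best]);
        }
        return 0;
    }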

				--Blair

bph@buengc.BU.EDU (Blair P. Houghton) (12/16/88)

In article <42994@linus.UUCP> bwk@mbunix (Barry Kort) writes:
>In article <1656@buengc.BU.EDU> bph@buengc.bu.edu (Blair P. Houghton)
>writes about empowering a topologically and neuromimetically accurate
>model of a brain to select a worthwhile goal of cerebration.
>
> > "My child, to whatever your little double-poly-technology heart desires."
>
>I love it!

Why, thank you.

				--Blair
				  "And thank's to all the
				   'little people' who made
				   it happen.  If only they
				   would stop marching around
				   in my hair at night..."

geb@cadre.dsl.PITTSBURGH.EDU (Gordon E. Banks) (12/22/88)

In article <42962@linus.UUCP> bwk@mbunix (Barry Kort) writes:
>
>OK.  Let's say the simulated brain considers itself to be a
>self-directed free thinker.  The question remains.  How does
>it select the subject of its contemplation?
>

Possibly we will need to understand better the control mechanisms
that direct attention in the real brain in order to properly
design an artificial one.  Both the parietal and frontal lobes play
roles in directing conscious attention.  The frontal lobes
have a role in assessing competing stimuli, and the parietal
lobes are needed in order to attend to them.  I suspect that
more abstract topics for contemplation also compete for the attention
of the more serially behaving consciousness.  The exact mechanisms
aren't clear to me at this time.

bwk@mbunix.mitre.org (Barry W. Kort) (12/28/88)

In article <1902@cadre.dsl.PITTSBURGH.EDU> geb@cadre.dsl.pittsburgh.edu
(Gordon E. Banks) rejoins the discussion on model brains.  He begins
with the question at hand:

>>OK.  Let's say the simulated brain considers itself to be a
>>self-directed free thinker.  The question remains.  How does
>>it select the subject of its contemplation?
>
>Possibly we will need to understand better the control mechanisms
>that direct attention in the real brain in order to properly
>design an artificial one.  Both the parietal and frontal lobes play
>roles in directing conscious attention.  The frontal lobes
>have a role in assessing competing stimuli, and the parietal
>lobes are needed in order to attend to them.  I suspect that
>more abstract topics for contemplation also compete for the attention
>of the more serially behaving consciousness.  The exact mechanisms
>aren't clear to me at this time.

Good.  Then we have a candidate subject for contemplation.  I now
propose that awareness of an unsolved problem becomes a stimulus
in the contest for brain time and attention.  If this theory is
correct, then we can move on to the selection criteria for choosing
the winning puzzle which captivates and fascinates the roving mind.
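
A toy version of that proposal in C (the problems and numbers are mine,
purely for illustration): each unsolved problem is a stimulus whose
claim on attention grows while it stays unsolved, and attending to the
winner damps its urgency for a while:

    #include <stdio.h>

    #define NPROB 3

    int main(void)
    {
        const char *problem[NPROB] = { "free will", "attention itself", "lunch" };
        double urgency[NPROB]      = { 0.3, 0.5, 0.9 };   /* initial salience */
        int t, i;

        for (t = 0; t < 6; t++) {
            int best = 0;
            for (i = 1; i < NPROB; i++)        /* the contest for brain time */
                if (urgency[i] > urgency[best])
                    best = i;
            printf("tick %d: contemplating %s\n", t, problem[best]);
            urgency[best] *= 0.3;              /* attended to: urgency drops    */
            for (i = 0; i < NPROB; i++)
                urgency[i] += 0.2;             /* still unsolved: urgency grows */
        }
        return 0;
    }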

But perhaps I move too fast.  Perhaps there are other stimuli
besides awareness of unsolved problems which direct the focus
of conscious attention.

--Barry Kort