[talk.religion.misc] Elementary AI Philosophy

mirk@warwick.UUCP (Mike Taylor) (01/16/89)

In article <1241@arctic.nprdc.arpa> meadors@nprdc.arpa (Tony Meadors) writes:
>In article <18464@santra.UUCP> ay@hutcs.UUCP (Beta) writes:
>>What might be the essential element of the human mind, the something that
>>humans have and which makes them different from machines?
>
>Roughly, you assumed that there exists some "essential element
>of minds" and that "humans have" some essence which makes them
>fundamentally "different from machines." as you show in the next
>few sentences you're fishing for some "essence" more akin to a "soul",
>something that would in fact separate MEN from BEASTS.

I am not at all sure that the original poster mightn't have a point here.
I assume that most of you are familiar with John Searle's intriguing
"Chinese Room" objection to the claim of "Strong AI": in a nutshell, that
a human could hand-simulate any putative AI program without gaining any
*understanding* of the supposedly cognitive acts involved.  Searle
concludes from this that we have no evidence to assume that the program
itself, when running on a computer, "understands" as such (leaving aside
for now the perennial problem of exactly what we mean by "understand").

As far as this goes, I am forced to accept Searle's argument - the replies
which he attempts to refute in his article "Minds, Brains, and Programs" seem
to me to be dealt with fairly adroitly by him.  But Searle then seems to
lack the courage of his convictions, and instead of adopting the "soulist"
point of view, which seems to me to follow fairly naturally from the
"Chinese Room" thought-experiment, he adopts the rather startling position
of believing that there is somehow something magical about the neural
substrate, that brains are somehow just made of "the right stuff" to
support intelligence, whereas computers are not.

To my mind, if Searle's conclusion that we have no evidence for the Strong AI
hypothesis, (that correctly programmed machines can "understand" in roughly
the same sense in which we do), is correct at all, then the difference
between the human mind and an AI program is something far more fundamental
than the mere fact of the physical composition of the substrate.

In short, I think that the intellectually honest response to accepting
Searle's experiment is to postulate the existence of "soul" or "spirit".

>>Christianity offers one answer to the question.  According to the Old
>>Testament, God created man to be His image.

And this entailed endowing him (and her!  Don't forget Eve :-) with a soul
and spirit.  Whether or not you accept the existence of the Christian God,
I think that it is unscientific to dismiss out of hand the idea that there
is something "supernatural" (read: "operating by mechanisms as yet unknown
to mankind") about the mind.

>>Accepting a new hypothesis can always lead to new results, and I
>>haven't seen this one in AI textbooks.

I think that the reason for this is clear.  If AI researchers decided that
the mind was "magical", and that they were unable to synthesise one for very
fundamental and immutable reasons, then there wouldn't be any point in doing
what they do.  Accepting a soulist position undermines strong AI to the
point of collapse.

So-called weak AI remains a valid proposition under this hypothesis, though.
It would still be interesting and useful to see what can be learned about human
cognition by *simulating* cognitive processes, even if it turns out that this
is all we can achieve.

>>It might be a good idea for AI researchers to study the romantic
>>literature written by women for women (sic).
>You'll have to spell this one out for me.

Me too.  Any explanations out there, anyone?

Hmm, well I hope that this has provoked some of you to thought.  A disclaimer
here, to finish with: I am not in any way an AI professional - not even a
Computer Scientist, just an interested layman.  Please forgive any factual
errors in the above, or misuse of terms etc.
______________________________________________________________________________
Mike Taylor - {Christ,M{athemat,us}ic}ian ...  Email to: mirk@uk.ac.warwick.cs
*** Unkle Mirk sez: "Em9 A7 Em9 A7 Em9 A7 Em9 A7 Cmaj7 Bm7 Am7 G Gdim7 Am" ***
------------------------------------------------------------------------------

ftoomey@maths.tcd.ie (Fergal Toomey) (01/17/89)

In article <904@ubu.warwick.UUCP> mirk@emerald.UUCP (Mike Taylor) writes:

>To my mind, if Searle's conclusion that we have no evidence for the Strong AI
>hypothesis, (that correctly programmed machines can "understand" in roughly
>the same sense in which we do), is correct at all, then the difference
>between the human mind and an AI program is something far more fundamental
>than the mere fact of the physical composition of the substrate.

In the absence of any real knowledge of how the mind works, I am perfectly
happy to allow the possibility of the existence of a soul, although I think
it's more useful to science to assume for the moment that a natural
explanation of the mind is possible. As regards your comments on Searle's
"Chinese Room": as I see it, all Searle did was to show that it is 
impossible to test for the presence or absence of 'mind' in a machine that
appears to be acting intelligently; for an automaton can show all the
appearance of being self-conscious - while being in fact a mere automaton.
I don't think it follows that it is impossible to make a machine that has
a mind (ie. is self-conscious), or that the mind is a supernatural
phenomenon. The question is rather: if we ever succeed in making a mind
'of nuts and bolts', how will we know we have succeeded?

			Yours, Fergal Toomey
				TCD

arm@ihlpb.ATT.COM (Macalalad) (01/18/89)

In article <904@ubu.warwick.UUCP> mirk@emerald.UUCP (Mike Taylor) writes:
>I assume that most of you are familiar with John Searle's intriguing
>"Chinese Room" objection to the claim of "Strong AI": In a nutshell, that
>a human could hand-simulate any putative AI program without gaining any
>*understanding* of the supposedly cognitive acts involved.  Searle
>concludes from this that we have no evidence to assume that the program
>itself, when running on a computer, "understands" as such (leaving aside
>for now the perennial problem of exactly what we mean by "understand").
>

While I am not so familiar with Searle's argument, it seems fairly
clear to me that Searle has made a fundamental error.  He assumes that 
in order for a computer to be intelligent, there must be something
inside the computer that "understands."  Of course, that isn't the
case.  That would be like assuming a little man inside my head that
"understands" for me so that I can be intelligent.  If and when an
intelligent computer is constructed, it will not be the program which
will understand, nor the processor.  It will be the computer as a
whole.

-Alex

dan-hankins@cup.portal.com (Daniel B Hankins) (01/19/89)

In article <904@ubu.warwick.UUCP> mirk@warwick.UUCP (Mike Taylor) writes:

>I assume that most of you are familiar with John Searle's intriguing
>"Chinese Room" objection to the claim of "Strong AI": In a nutshell, that
>a human could hand-simulate any putative AI program without gaining any
>*understanding* of the supposedly cognitive acts involved.  Searle
>concludes from this that we have no evidence to assume that the program
>itself, when running on a computer, "understands" as such (leaving aside
>for now the perennial problem of exactly what we mean by "understand").

     This is true, as far as it goes.  He has, however, missed the principle
of emergent properties of dynamic systems.  A computer program without a
cpu to execute on can do nothing.  A cpu without a program to run can do
nothing.  But combine the two and the system can do much.  The ability to
process information is an emergent property of the dynamic system
consisting of a cpu and program together.

     Of course, the man in the Chinese room does not understand Chinese. 
And of course, the room, pencil, paper, and rules do not understand
Chinese.  But the room, pencil, paper, rules and man *system* *DOES*
understand Chinese.  If one converses with the Chinese room system, one
gets intelligible results in Chinese - results which demonstrate that the
system understands the language, even though the man inside understands
nothing.  Does Searle adequately answer this argument?

>Whether or not you accept the existence of the Christian God, I think that
>it is unscientific to dismiss out of hand the idea that there is something
>"supernatural" (read: "operating by mechanisms as yet unknown to mankind")
>about the mind.

     Supernatural does not mean mechanisms *as yet* unknown to mankind. 
Supernatural means mechanisms *unknowable* by mankind - mechanisms which
are not natural and not governed by physical laws.  It is completely
scientific to dismiss out of hand such mechanisms.
     If the soul is in principle (though not yet in practice)
understandable, as you imply above, then rules can be set down describing
its operation.  Such rules can be embodied in a computer program and made
alive by running the program.  There is then nothing supernatural about
this definition of soul.  It is completely natural in every way.
     If one wishes to adopt the notion of an uncaused causal agent, one
must abandon scientific investigation of the causes.  This leads the
scientist to dismiss such a notion out of hand, simply in order to have
something to do.


Dan Hankins
Seen on Usenet:
"God is real, unless declared as integer."

bwk@mbunix.mitre.org (Barry W. Kort) (01/20/89)

In article <904@ubu.warwick.UUCP> mirk@emerald.UUCP (Mike Taylor) writes:

 > It would still be interesting and useful to see what can be
 > learned about human cognition by *simulating* cognitive processes,
 > even if it turns out that this is all we can achieve.

I *simulate* cognitive processes by programming computers to engage
in deductive, inductive, inferential, and model-based reasoning.
What I learn about human cognition is that the average carbon-based
neural network is not very skilled at these forms of cognition.
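
As a concrete (if toy) illustration of what the deductive end of such a
simulation can look like, here is a sketch of a forward-chaining rule engine.
The facts and rules are invented purely for the example, and no claim is made
that this resembles any particular system:

def forward_chain(facts, rules):
    # Keep applying rules whose premises are all known until nothing
    # new can be derived.  This is the bare bones of deductive reasoning.
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

facts = {"socrates is a man"}
rules = [
    ({"socrates is a man"}, "socrates is mortal"),   # (premises, conclusion)
    ({"socrates is mortal"}, "socrates will die"),
]
print(forward_chain(facts, rules))
# -> {'socrates is a man', 'socrates is mortal', 'socrates will die'}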

But all is not lost.  By watching the silicon thinker, ordinary
humans can find out how these mental processes work, and they
can download them from the machine to their own brains.  (That's AI.)

I'm still trying to teach the computer to emulate such classical
human cognitive processes as worrying, mystical insight, intuition,
imagination, and fantasy.  So far, I'm not making much progress.

--Barry Kort

bwk@mbunix.mitre.org (Barry W. Kort) (01/21/89)

In article <9423@ihlpb.ATT.COM> arm@ihlpb.UUCP (Alex Macalalad) 
responds to Mike Taylor's comments about the Chinese Room.  Alex writes:

 > [Searle] assumes that in order for a computer to be intelligent,
 > there must be something inside the computer that "understands." 
 > Of course, that isn't the case.  That would be like assuming a
 > little man inside my head that "understands" for me so that I
 > can be intelligent.  If and when an intelligent computer is
 > constructed, it will not be the program which will understand,
 > nor the processor.  It will be the computer as a whole.

There *can* be "something inside the computer that `understands'",
but that something need not be thought of as a "homunculus".

If we substitute the word "comprehend" for "understand", we have
a better chance of seeing how a computer can have an idea of how
things work out there in the real world.  The verb "to comprehend"
means "to capture with".  In my mind, and in the mind of my
computer, I construct models which replicate the structure and
behavior of real-world objects.  I capture (comprehend) reality
with such models.

As to intelligence, that step follows easily after I have a working
model.  I can now do "thought experiments" on the model to find
out what will happen if I diddle the controls on the model, or
if I perturb the operating environment in which the model is
embedded.  I call this "cognition".  (Some people call it
model-based reasoning, or modal logic.)

While we are on the point of capturing ideas with models, let us
take note of two other captivating terms.  The word, "phenomenal"
means "to capture with the senses."  Contrast that well-known
word with its lesser-known counterpart, "noumenal", which means
"to capture with thought."  Theories are the product of noumenal
processes.

--Barry Kort

bwk@mbunix.mitre.org (Barry W. Kort) (01/21/89)

In article <232@maths.tcd.ie> ftoomey@maths.tcd.ie (Fergal Toomey) asks:

 > The question is rather: if we ever succeed in making a mind
 > 'of nuts and bolts', how will we know we have succeeded?

It will tell us.

--Barry Kort

smoliar@vaxa.isi.edu (Stephen Smoliar) (01/24/89)

In article <43763@linus.UUCP> bwk@mbunix.mitre.org (Barry Kort) writes:
>
>There *can* be "something inside the computer that `understands'",
>but that something need not be thought of as a "homunculus".
>
>If we substitute the word "comprehend" for "understand", we have
>a better chance of seeing how a computer can have an idea of how
>things work out there in the real world.  The verb "to comprehend"
>means "to capture with".  In my mind, and in the mind of my
>computer, I construct models which replicate the structure and
>behavior of real-world objects.  I capture (comprehend) reality
>with such models.
>
>As to intelligence, that step follows easily after I have a working
>model.  I can now do "thought experiments" on the model to find
>out what will happen if I diddle the controls on the model, or
>if I perturb the operating environment in which the model is
>embedded.  I call this "cognition".  (Some people call it
>model-based reasoning, or modal logic.)
>
I'm not sure that things progress quite so "easily" as that.  Let
us assume, for the sake of argument, that you have "something inside
the computer" that can construct the sorts of models you have in mind.
It is not a simple step from having those models to performing thought
experiments on them.  What sorts of issues remain?

	1.  First of all, there is the problem that you are going
	to have to manage lots of models.  Thus, there are questions
	of storage management.  There are also questions of retrieval:
	How do you know what model to access and when to access it?

	2.  Next, there is the issue of manipulating those models.
	Metaphorically speaking, any model is going to have lots of
	"knobs" on it.  Observing a model's behavior is a matter of
	knowing how to twist which knobs when and what to look for.

	3.  This then brings up the issue of "thought experiments."
	Like laboratory experiments, thought experiments must be planned.
	Furthermore, those plans are designed in response to hypotheses
	to be investigated.  Thus, before we can talk about thought
	experiments, we have to talk about some agent which "thinks
	them up."  Thus, while it may be true that you do not need
	a homunculus to build your models (and I probably would be
	willing to contest THAT point, too, given more time to think
	about it), it would seem that your scenario ultimately depends
	on having a homunculus to manipulate them.

Is there any way to escape the homunculus?  I would argue that it can only be
escaped by rejecting the premise which fostered it:  the idea that
understanding can be attributed to a "something inside the computer."
Both Minsky and Edelman have proposed views of comprehension in which
understanding emerges from the interactions of components.  The important
point here is that no individual component may be said to embody understanding,
nor is it the case that all components are subjected to some kind of
"intelligent control."  It is simply that there is this behavior which
is a byproduct of interaction which manifests the characteristics of
understanding;  and because it is an emergent behavior rather than a
"thing," it makes no sense to try to isolate it in a single component
of a system.

arm@ihlpb.ATT.COM (Macalalad) (01/25/89)

I tried mailing this reply, but it bounced back.  So....

In article <43763@linus.UUCP> Barry Kort writes:
>There *can* be "something inside the computer that `understands'",
>but that something need not be thought of as a "homunculus".
>
>If we substitute the word "comprehend" for "understand", we have
>a better chance of seeing how a computer can have an idea of how
>things work out there in the real world.  The verb "to comprehend"
>means "to capture with".  In my mind, and in the mind of my
>computer, I construct models which replicate the structure and
>behavior of real-world objects.  I capture (comprehend) reality
>with such models.

Barry, I'm not sure what this has to do with homunculi.  Who is
doing the comprehending here, the model or you?  If I build a model
railroad, does the model railroad then have some comprehension of
railroads?

If what you're trying to say is that you construct the computer system
such that it builds its own models of reality, then I would answer
that the computer system as a whole, and not its models, is
comprehending.

>As to intelligence, that step follows easily after I have a working
>model.  I can now do "thought experiments" on the model to find
>out what will happen if I diddle the controls on the model, or
>if I perturb the operating environment in which the model is
>embedded.  I call this "cognition".  (Some people call it
>model-based reasoning, or modal logic.)

Most people call it the scientific method. (:-)  Again, where does
the cognition lie, in you or the model?

>While we are on the point of capturing ideas with models, let us
>take note of two other captivating terms.  The word, "phenomenal"
>means "to capture with the senses."  Contrast that well-known
>word with its lesser-known counterpart, "noumenal", which means
>"to capture with thought."  Theories are the product of noumenal
>processes.

As I understand it, noumenal refers to the thing-in-itself, as
opposed to phenomenal, which refers to the thing-perceived.  Kant
argued that the noumenon is unknowable through rational thought, i.e.
we can learn only about things as they are perceived, and nothing
about things-in-themselves.  If by saying that theories are the
product of noumenal thought, you imply that I can never fully
understand your theory, then you may be right. (:-)

>--Barry Kort

Barry, I have long admired your thoughtful and thought-provoking
discussion on the net.  Please let me know if I have completely
missed your point (which is very possible).

My point (just in case you missed it the first time) was to argue
that Searle's man inside the computer was essentially a homunculus.
Further, when Searle assumes that the computer understands Chinese
only if the man inside the computer understands Chinese, he implies
that we must have a similar little man inside our heads in order for
us to understand.  Finally, since most of us agree that none of us
need a little man inside our head in order to understand, Searle's
assumption that a computer _does_ can easily be seen as fallacious.

As to my comment that the computer system as a whole understands, I
meant that understanding was an emergent property of the computer
system.  (Sort of like the way enclosure is an emergent property
of a box.  Does the property of enclosure lie within the top? the
bottom? the sides?  It emerges from the way the top, bottom and
sides are put together to form a box.)

Something to think about....

-Alex

bwk@mbunix.mitre.org (Barry W. Kort) (01/25/89)

In article <7346@venera.isi.edu> smoliar@vaxa.isi.edu.UUCP
(Stephen Smoliar) rejoins the discussion on computer models.
Stephen writes:

>	1.  First of all, there is the problem that you are going
>	to have to manage lots of models.  Thus, there are questions
>	of storage management.  There are also questions of retrieval:
>	How do you know what model to access and when to access it?

I delegate storage management to the operating system.  After all,
I use the corporate Library, which has a very nice system of storage
management (which I did not invent).  As to selection of the 
appropriate model, this is done by pattern-matching on the *structure*
of the knowledge base (without regard to the semantics).  Thus two
English sentences which have the same diagram are analogs, even
if the semantics are completely unrelated.
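
A toy sketch of that structure-only matching (the representations and the
library entries are made up for illustration; this is not the actual system
being described):

def shape(expr):
    # Replace every leaf symbol with "*" so that only the structure
    # (the "diagram") of the expression remains.
    if isinstance(expr, tuple):
        return tuple(shape(part) for part in expr)
    return "*"

# A made-up library of models, keyed by the knowledge they capture.
library = {
    ("dog", ("chased", "cat")): "agent-acts-on-patient model",
    ("rain", ("wets", ("the", "street"))): "cause-and-effect model",
}

def retrieve(query):
    # Select models by comparing structure alone, ignoring the semantics.
    target = shape(query)
    return [model for key, model in library.items() if shape(key) == target]

# "judge sentenced prisoner" has the same diagram as "dog chased cat",
# even though the meanings are completely unrelated.
print(retrieve(("judge", ("sentenced", "prisoner"))))
# -> ['agent-acts-on-patient model']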

>	2.  Next, there is the issue of manipulating those models.
>	Metaphorically speaking, any model is going to have lots of
>	"knobs" on it.  Observing a model's behavior is a matter of
>	knowing how to twist which knobs when and what to look for.

Just as Carnegie-Mellon's Terregator goes out to play (so that
it can calibrate its vision), we can let the computer play with
the model to discover interesting cause-and-effect linkages between
stimulus and response.  The play can be systematic (all possible
combinations in lexicographic order), or chaotic (totally random)
or something in between (heuristic rules with priorities and randomized
tie-breaking).
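
For concreteness, here is a rough sketch of those three styles of play over
a model's knobs.  The knob names are invented for illustration; in real play
each generated setting would be fed to the model and the response recorded:

import itertools
import random

KNOBS = {"steering": [-1, 0, 1], "throttle": [0, 1, 2]}   # hypothetical knobs

def systematic(knobs):
    # All possible combinations, in lexicographic order.
    names = sorted(knobs)
    for values in itertools.product(*(knobs[n] for n in names)):
        yield dict(zip(names, values))

def chaotic(knobs, trials=10):
    # Totally random settings.
    for _ in range(trials):
        yield {name: random.choice(values) for name, values in knobs.items()}

def heuristic(knobs, score, trials=10):
    # Prefer settings the heuristic scores highly; break ties at random.
    for _ in range(trials):
        yield {
            name: max(values, key=lambda v: (score(name, v), random.random()))
            for name, values in knobs.items()
        }

for setting in systematic(KNOBS):
    print(setting)        # in real play, apply the setting and observe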

>	3.  This then brings up the issue of "thought experiments."
>	Like laboratory experiments, thought experiments must be planned.
>	Furthermore, those plans are designed in response to hypotheses
>	to be investigated.  Thus, before we can talk about thought
>	experiments, we have to talk about some agent which "thinks
>	them up."  Thus, while it may be true that you do not need
>	a homunculus to build your models (and I probably would be
>	willing to contest THAT point, too, given more time to think
>	about it), it would seem that your scenario ultimately depends
>	on having a homunculus to manipulate them.

Experiments are planned (or thought up) to resolve questions which
arise in theory construction or theory testing.  The starting
hypothesis is that a model or theory can be constructed at all.
The goal is to construct a compact, easy to use model with
explanatory power.  Unless someone is motivated to ask "Why does
the world work the way it does?", there is no motivation to
seek the answer.

>Is there any way to escape the homunculus?

A knowledge system reposes a symbolic replica of some chunk of the world.
The "understanding" occurs when the knowledge system captures (comprehends)
an accurate replica.  A model which imitates the structure and behavior
of real-world systems is a tool of survival.  The mariner's most valuable
instrument is his charts.  He who sojourns without a map is destined to
blindly walk into brick walls.

--Barry Kort

dave@hotlr.ATT ( C D Druitt hotlk) (01/27/89)

In article <9465@ihlpb.ATT.COM> arm@ihlpb.UUCP (55528-Macalalad,A.R.) writes:
 > In article <43763@linus.UUCP> Barry Kort writes:
 > >There *can* be "something inside the computer that `understands'",
 > >but that something need not be thought of as a "homunculus".
 (Occam was here) 
 > Barry, I'm not sure what this has to do with homunculi.  Who is
 > about things-in-themselves.  If by saying that theories are the
 > product of noumenal thought, you imply that I can never fully
 > understand your theory, then you may be right. (:-)
 > My point (just in case you missed it the first time) was to argue
 > that Searle's man inside the computer was essentially a homunculus.
 > Further, when Searle assumes that the computer understands Chinese
 > only if the man inside the computer understands Chinese, he implies
 > that we must have a similar little man inside our heads in order for
 > us to understand.  Finally, since most of us agree that none of us
 > need a little man inside our head in order to understand, Searle's
 > assumption that a computer _does_ can easily be seen as fallacious.
 (Occam was here, too! 8-)) 
 > As to my comment that the computer system as a whole understands, I
 > meant that understanding was an emergent property of the computer
 > system.  (Sort of like the way enclosure is an emergent property
 > of a box.  Does the property of enclosure lie within the top? the
 > bottom? the sides?  It emerges from the way the top, bottom and
 > sides are put together to form a box.)
 > 
 > Something to think about....
 > 
 > -Alex
After a few seconds of good, hard thought, I am left feeling like
a minority. You see, I think the little man in our heads _IS_ there!
And WE ARE HIM!
Maybe the disagreement is more one of abstract semanticism (;-):
Just as enclosure requires conjugal relationships between the elements
of the box (though a box is only a small sample of the groups that
give rise to the concept of enclosure) to give it its emergent property,
the "little man" is made up of - what?
Isn't _he_ simply _you_ or _me_? Aren't we our own little men in our
heads?
The working collaboration of the substructures into a superstructure
with a controller (or little man) to make choices and decisions
is something that occurs in nature to an infinite degree and on
infinite levels. Isn't a corporation a living organism? A city? A country?
The assumption that _most of us_ (four out of five politicians surveyed?)
agreeing on some thought means that we understand it, or that any
dissenter is easily seen to be fallacious, is itself necessarily fallacious.

(sorry to get upset about this - I like the little man!
 I think he's a very useful and worthwhile concept to explore!)

Dave DRuitt
(the NODES)
(201) 949-5898

("what you need, son, is a little exposure to full frontal nodity (sic)")

bwk@mbunix.mitre.org (Barry W. Kort) (01/29/89)

In article <9465@ihlpb.ATT.COM> arm@ihlpb.UUCP (55528-Macalalad,A.R.)
publishes a somewhat extended discussion in lieu of bounced E-mail.
I hope that the netters won't find our open dialogue too offensive.

Starting with Alex's concluding paragraphs:

 > Barry, I have long admired your thoughtful and thought-provoking
 > discussion on the net.  Please let me know if I have completely
 > missed your point (which is very possible).

Thank you, Alex.  I appreciate your kind remarks.  I don't think
you missed my point, but perhaps you missed my intent.

 > My point was to argue that Searle's man inside the computer was
 > essentially a homunculus.   ...  Finally, since most of us agree
 > that none of us need a little man inside our head in order to
 > understand, Searle's assumption that a computer _does_ can
 > easily be seen as fallacious.

I completely agree, Alex.  I was hoping to go beyond your refutation
of Searle, and suggest a plausible mechanism for understanding
that illustrates the process without any need for a homunculus.

 > As to my comment that the computer system as a whole understands,
 > I meant that understanding was an emergent property of the
 > computer system.

Yes.  I, too, am persuaded by the "emergent property" argument.

Returning now to the details of my original comments,

>In article <43763@linus.UUCP> Barry Kort writes:
>>There *can* be "something inside the computer that `understands'",
>>but that something need not be thought of as a "homunculus".
>>
>> [Deleted discussion of computer-resident models.]
>
>Barry, I'm not sure what this has to do with homunculi.  Who is
>doing the comprehending here, the model or you?  If I build a model
>railroad, does the model railroad then have some comprehension of
>railroads?

It has nothing to do with homunculi.  There is no "who", unless
you want to think of the computer as an "amplifier of the mind",
in which case the computer and I together form the "who".  The
computer reposes the model on my behalf.  (It may be sapient,
but it is not yet sentient, conscious, or the slightest
bit interested in my model.)

>If what you're trying to say is that you construct the computer system
>such that it builds its own models of reality, then I would answer
>that the computer system as a whole, and not its models, is
>comprehending.

I would agree with that.  But you must know that my computer is
not so advanced.  I have to laboriously teach it everything it knows.
Maybe someday I'll get my hands on a more powerful computer, but
for now I'm stuck with garden variety low-budget models.

>>As to intelligence, that step follows easily after I have a working
>>model.  I can now do "thought experiments" on the model to find
>>out what will happen if I diddle the controls on the model, or
>>if I perturb the operating environment in which the model is
>>embedded.  I call this "cognition".  (Some people call it
>>model-based reasoning, or modal logic.)
>
>Most people call it the scientific method. (:-)  Again, where does
>the cognition lie, in you or the model?

Yeah, I was hoping to sneak the name of that step past my critics
who don't want to admit that they are unclear on the scientific
method.  Anyway, the cognition is a joint effort, but the computer
does most of the dirty work.  (I'm not only sneaky, I'm lazy to boot.)

>>[Defintion of "phenomenal" as "to capture with the senses,"
>>and "noumenal", as "to capture with thought." 
>
>As I understand it, noumenal refers to the thing-in-itself, as
>opposed to phenomenal, which refers to the thing-perceived.  Kant
>argued that the noumenon is unknowable through rational thought, i.e.
>we can learn only about things as they are perceived, and nothing
>about things-in-themselves.  If by saying that theories are the
>product of noumenal thought, you imply that I can never fully
>understand your theory, then you may be right. (:-)

Kant did take a perfectly meaningful word ("noumenal"), and
elaborate it as you have outlined.  Personally, I don't
understand Kant's distinction.  I was using the word in
its more primitive denotation as that which is perceived by
insight.  My favorite metaphor is that of the jigsaw puzzle
whose big picture is assembled in one's head instead of by
laborious hand-assembly for the benefit of the regular
visual system.  (For very small jigsaw puzzles, anyone can do this.)

As to understanding someone else's theory, I think it not
impossible.  But I am reminded of Raymond Smullyan's whimsical
Professor Griffin, curator of the Master Forest in _To Mock
A Mockingbird_.  Griffin's rule is that only the elite are
permitted to enter his forest.  But the kindly professor
has a liberal definition of "elite".  He defines elite as
anyone who wishes to enter.

--Barry Kort

fransvo@htsa.uucp (Frans van Otten) (01/31/89)

In article <526@hotlr.ATT> dave@hotlr.UUCP (54246 - C D Druitt hotlk) writes:
>In article <9465@ihlpb.ATT.COM> arm@ihlpb.UUCP (55528-Macalalad,A.R.) writes:
> > In article <43763@linus.UUCP> Barry Kort writes:
> > >There *can* be "something inside the computer that `understands'",
> > >but that something need not be thought of as a "homunculus".
> (Occam was here) 

I've seen this name - Occam - several times in this newsgroup. I have not
the slightest idea what is meant, what his theories are about, etc. Would
anyone care to explain?  Thanks.

-- 
                         Frans van Otten
                         Algemene Hogeschool Amsterdam
			 Technische en Maritieme Faculteit
                         fransvo@htsa.uucp

shouse@macomw.ARPA (claude shouse) (01/31/89)

In article <44077@linus.UUCP>, bwk@mbunix.mitre.org (Barry W. Kort) writes:
> In article <9465@ihlpb.ATT.COM> arm@ihlpb.UUCP (55528-Macalalad,A.R.)
> publishes a somewhat extended discussion in lieu of bounced E-mail.
> I hope that the netters won't find our open dialogue too offensive.
> 

If I get a vote I hope you keep it right here.  I'm absorbed by it and
hope that others are also.

> 
>  > My point was to argue that Searle's man inside the computer was
>  > essentially a homunculus.   ...  Finally, since most of us agree
>  > that none of us need a little man inside our head in order to
>  > understand, Searle's assumption that a computer _does_ can
>  > easily be seen as fallacious.
> 

Forgive me if this has been covered already.  Why is it fallacious?
A man does not need a homunculus because he is the homunculus.  But
the computer would need a homunculus if it were to understand things
like fear and joy.  Am I way off base here?

A while back a linguist posted two sentences that were parsed the same
way.  1)  That plane flies like an arrow.
      2)  Fruit flies like an orange.

Maybe I really don't know how far AI has progressed in computers.  Do we
really have computers that generate the proper models to get the sense of
these ideas?

Just wondering.

Claude Shouse

hassell@tramp.Colorado.EDU (Christopher Hassell) (02/01/89)

In article <474@macomw.ARPA> shouse@macomw.ARPA (claude shouse) writes:
# In article <44077@linus.UUCP>, bwk@mbunix.mitre.org (Barry W. Kort) writes:
# >  > My point was to argue that Searle's man inside the computer was
# >  > essentially a homunculus.   ...  Finally, since most of us agree
# >  > that none of us need a little man inside our head in order to
# >  > understand, Searle's assumption that a computer _does_ can
# >  > easily be seen as fallacious.
# 
# Forgive me if this has been covered already.  Why is it fallacious?
# A man does not need a homunculus because he is the homunculus.  But
# the computer would need a homunculus if it were to understand things
# like fear and joy.  Am I way off base here?
That is a good point regarding this idea.
  
# A while back a linguist posted two sentences that were parsed the same
# way.  1)  That plane flies like an arrow.
#       2)  Fruit flies like an orange.
# 
# Maybe I really don't know how far AI has progressed in computers.  Do we
# really have computers that generate the proper models to get the sense of
# these ideas?
# 
# Just wondering.
# 
# Claude Shouse

They have.  The rational process of mass-association trial-and-error used
to make sense of a sentence is presumed to be the better way to handle
this, with a bit of X-works-more-often-than-Y thrown in.   Presumably the
same holds for humans as well.

This "Star-Trek" Principle <a noted issue in 60s S.F.> has been around
and is viscerally maintained to be true: 
  That "emotions" are beyond
          description/generation and therefore not implementable/predictable.

It is a toughy certainly.

The biggest problem I see is that of *describing* emotions in any manner 
other than ........ EMOTIONAL.  Not a truism by far, but it does make it
generally difficult to analyze things.  As for other inhibiting factors
there are the Well-Heck-We-Affect-Everything-So-What-Is-Science-Worth clan.
They seem predisposed to nothing but self-maintained ignorance, so....

I personally find a problem with this, though.  There are a lot of VERY
ticklish issues with regard to examining oneself in ORDER to decompose
oneself into implementable ideas and principles, but that still won't
stop some of us. :-)

I have always found the vague definitions available quite accurate with
regard to emotion:

Sadness: Feeling of helplessness/grief<loss> from strong event.
Hate:  Feeling of infringement and trespassing by someone else resulting
         in possibly arbitrary classifications and actions.
Happiness:  Sorta the opposite of all of helplessness/need/sadness
Love: <ooh yuk> The having of extreme trust/respect/need for and with someone
          else or for something.  Happiness in its satisfaction is notable.

Easy ones are Surprise, Fear <of sadness, in general>, Apathy, Interest...

This is by no means formal, or necessarily accurate.  It IS however 
"simulatable" by "computable" methods.  They require LOTS and LOTS of
associations as basis <to define helplessness in all colors, as well as
  infringement, as well as trust....>

I have often wondered about a model to use: <NOTE BTW that these are all the
  knowledge-representation Sledgehammer approaches and that N-nets are 
  much better at this "computation" in principle and in practice>

Given Maslow's hierarchy of needs, couldn't one say that if these DO cover
ALL of "Human Needs", then these could be modeled as "substances"?

I always wondered if we could give credence to the idea of "I don't get 
  no Respect!" as being provable, within fuzzy limits.  Things like 
  Security and Power could be strange.  All of the interactions of such
  a beast would be strange.  But...then again, so is the subject of the
  simulation!  :->

I have heard *Reams-and-Reams* of argument on Why-even-if-it-looks-like-it-
-AI-won't-Make-A-Comparable-Intelligence.  These arguments are interesting
in that no one has done anything except find the best way to express them.

Logically and scientifically speaking... couldn't there be another hypothesis
to test if this one doesn't seem to work?  This is NOT a light possibility,
but neither was the discovery of the A-bomb.  Not Fun, but not End-of-Everything,
and certainly not Impossible by default.

It does seem that too many people look at their text editor and their operating
system <if it does> and see the SERIOUS DUMBNESS that shows up when it is
compared to humans.  The extrapolations don't seem to come into it.

That's enough dissension for now.
### C>H> ###

bwk@mbunix.mitre.org (Barry W. Kort) (02/01/89)

In article <474@macomw.ARPA> shouse@macomw.ARPA (Claude Shouse) 
votes in favor of keeping the current line of discussion (about
homonculi) in the open forum.  Hearing no objection, I am glad to oblige.
 
Claude writes:

 > A man does not need a homunculus because he is the homunculus.  But
 > the computer would need a homunculus if it were to understand things
 > like fear and joy.  Am I way off base here?

Claude, I am not persuaded that man is his own homunculus.  Perhaps
you can elaborate your reasoning here.

As to a computer having fear and joy, I would like to address that in
more detail.

First, I think we should distinguish between *having* emotions and
*understanding* them.  I have many emotions which I undeniably
experience.  Some of them I can name, and some of them I can
explain in terms of the life circumstances that gave rise to them.
I also experience emotions from time to time that I cannot name
or explain.

An ambulatory automaton may reasonably adopt (or be given) two
conflicting goals:

	1.  Explore your environment and construct a map of
	    the territory so that you may navigate your way
	    without bumping into walls and people.

	2.  Exercise caution so as not to fall over and break
	    yourself.

The second goal ("be safe") would lead to behavior similar to
fear-based response in animals and people.  Self-protective
behavior is important for survival of a sentient being.  It
would be fair to say that prudence is the result of responding
to fear (of damage or destruction) in an intelligent way.

The first goal ("to boldy go where no automaton has gone before")
would be seen as the behavior pattern of a curious individual
with a healthy sense of adventure.  The discovery of a previously
unknown portion of the territory gives rise to an important
internal activity: map-making.  A goal-oriented automaton needs
to recognize progress and achievement, and that recognition may
fairly be labeled as a state of joy or satisfaction.

Thus I claim that a sentient automaton *has* emotions (although
it may not understand them, just as we do not always understand
our own fears and anxieties).
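
To make the claim a little more concrete, here is a toy rendering of those
two goals (my own sketch, not a description of any real robot): candidate
moves are scored by novelty against risk, and the "fear" and "joy" are just
internal variables driven by those scores.

class Automaton:
    def __init__(self):
        self.explored = set()   # cells already mapped
        self.fear = 0.0         # rises with perceived risk
        self.joy = 0.0          # rises with discovery

    def choose(self, moves):
        # moves: list of (cell, risk) pairs; prefer novel, low-risk cells.
        def score(move):
            cell, risk = move
            novelty = 0.0 if cell in self.explored else 1.0
            return novelty - 2.0 * risk      # caution outweighs curiosity
        return max(moves, key=score)

    def step(self, moves):
        cell, risk = self.choose(moves)
        self.fear = 0.8 * self.fear + risk   # fear-like response to danger
        if cell not in self.explored:
            self.explored.add(cell)          # map-making
            self.joy += 1.0                  # joy of discovery
        return cell

bot = Automaton()
print(bot.step([((0, 0), 0.1), ((0, 1), 0.9)]))   # picks the safer novel cell
print(bot.fear, bot.joy)                          # -> 0.1 1.0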

 > A while back a linguist posted two sentences that were parsed the same
 > way.  1)  That plane flies like an arrow.
 >       2)  Fruit flies like an orange.
 > 
 > Maybe I really don't know how far AI has progressed in computers.  Do we
 > really have computers that generate the proper models to get the sense of
 > these ideas?

People working in natural language understanding have noted how
ambiguous our language is, and how easily we can be thrown off
by misleading contexts.

Try these:

	The astronomer married the star.
	The movie producer married the star.

	I saw the man in the park with a telescope.

(Does "with a telescope" modify "saw", "man", or "park"?)

To answer your question, computers are having just as hard a
time understanding English as people are.  Except that computers
are more honest about disclosing their confusion.
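
To make the ambiguity concrete, a trivial sketch (mine, not any actual parser
of the day) that just enumerates the attachment readings of the telescope
sentence:

sentence = "I saw the man in the park with a telescope"
heads = ["saw", "man", "park"]     # candidate attachment points
pp = "with a telescope"

for head in heads:
    print('reading: "%s" modifies "%s"' % (pp, head))
# A parser with no world knowledge must carry all three readings along
# (or guess); a human usually never even notices the other two.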

--Barry Kort

dmocsny@uceng.UC.EDU (daniel mocsny) (02/05/89)

Here I am enjoying the vivid intellectual scenery in yet another Barry Kort
article, and Lo! He throws in an aside that echoes my very thoughts.

In article <44236@linus.UUCP>, bwk@mbunix.mitre.org (Barry W. Kort) writes:
> ... computers are having just as hard a
> time understanding English as people are.  Except that computers
> are more honest about disclosing their confusion.

Yes! And this sentence is pregnant with potential for large increases
in scientific productivity. How? Writing is a major (if not the
primary) avenue for scientific communication. A distressing fraction
of the scientific literature displays much more than the necessary
level of semantic complexity. This complicates the task of the reader,
lowering scientific productivity across the board, and making our jobs
less fun. This happens because the technical community learns to write
largely by reading itself. We develop a writing style from tradition,
and not by understanding of how to best render facts for easy
comprehension. (See John Brogan, _Clear Technical Writing_,
McGraw-Hill, 197x, where x is a small positive integer).

Computers lack the fantastic wetware that lets us (sort of) cut
through semantic noise. Thus computers will force us to write more
clearly, and we will benefit. Like the child who could not see the
Emperor's new clothes, our computers will tell us when we are
ambiguous, when we add nonessential words, when our sentence structure
is too complex, and how to better say what we are really trying to
say.

> --Barry Kort

Dan Mocsny
dmocsny@uceng.uc.edu

A discipline is mature when it makes sense to a computer.

bwk@mbunix.mitre.org (Barry W. Kort) (02/07/89)

In article <654@uceng.UC.EDU> dmocsny@uceng.UC.EDU (Daniel Mocsny) writes:

 > Computers lack the fantastic wetware that lets us (sort of) cut
 > through semantic noise. Thus computers will force us to write more
 > clearly, and we will benefit. Like the child who could not see the
 > Emperor's new clothes, our computers will tell us when we are
 > ambiguous, when we add nonessential words, when our sentence structure
 > is too complex, and how to better say what we are really trying to
 > say.

Well, Dan, I've been using WWB (Writer's WorkBench) ever since the first
version came out of Murray Hill, and at least some of your vision is
already a reality.
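
A toy sketch of the sort of checks such a tool performs (my own illustration,
not WWB's actual rules; the filler-word list and the length threshold are
arbitrary choices):

import re

FILLERS = {"basically", "very", "quite", "utilize", "in order to"}
MAX_WORDS = 25

def check(text):
    # Flag overlong sentences and a few filler words/phrases.
    warnings = []
    for sentence in re.split(r"(?<=[.!?])\s+", text.strip()):
        words = sentence.split()
        if len(words) > MAX_WORDS:
            warnings.append("Long sentence (%d words): %s..."
                            % (len(words), sentence[:40]))
        for filler in FILLERS:
            if filler in sentence.lower():
                warnings.append('Possible filler "%s": %s...'
                                % (filler, sentence[:40]))
    return warnings

sample = "Basically, we utilize a very complex methodology in order to proceed."
for warning in check(sample):
    print(warning)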

But I have also noticed that even my best writing often fails to reach
my audience.  So I have turned to other means of communication.  I
now explain my ideas (in excruciating detail) to my computer, and
have it draw pictures of the image in my mind's eye.  I then let
people watch the color cartoons, which they seem to enjoy very much.
I notice that I can communicate ideas more effectively with computer
animation than with text.  (They also like the funny sound effects.)

--Barry Kort