[net.ai] The Mind's I

reid (01/21/83)

This newsgroup is awfully dry, after so much traffic in net.misc.  

I have been wading through "The Mind's I", by Hofstadter and Dennett.  I find
myself intrigued by some of the stuff in there, in particular people's
tendency to immediately "trust" and "identify with" computers, or to ascribe
human tendencies and even emotions to cleverly written programs.  I think we
all ascribe some sort of muse or "mind" to the computer to which we type so
much and so often.  If nothing else, it makes life simpler to ignore the 
hardware and the nuts and bolts.  That is the point of all high-level goings-on
anyway.  It seems to me that we will quickly reach a point where we can
treat computers in much the same manner as we treat fellow humans, without
ever assuming that they are human or should be.

For instance, I think it not unreasonable to ask a computer to understand me
(maybe someday in natural language), to cooperate with me, to take some
initiative on its own, and to make life simpler for me.  It is reasonable for
the computer not to understand occasionally and to need clarification, or even
for it to screw up and do what I said, and not what I meant.  Contrast this
with one of your subordinate (human) workers.  Why is it necessary for us to
ascribe human emotions to things which may well function very much like
humans someday?  In the workplace, especially.
I think that computers will not likely be suitable marriage material, no
matter how sophisticated they become, and probably not very good for discussing
literature or problems.  That is why it will be important to have people
around.  Think how nice it would be to make your computer do all the dirty
work for you so you can think lofty thoughts and discuss Tolstoy, without the
guilt of having just made your secretary do all the work for which you are
getting paid....

"TERRY, how about writing up a report on the latest MumbleNet protocol for me,
I have a game of chess to play...."

Respectfully,
Glenn Reid

cutler (01/23/83)

A few points about the view I recently stated:

	1) I'm not making a philosophical statement about AI
	   or anything that comes from AI.  If a program acts
	   human and is as (un)predictable as a human, then
	   for all practical purposes I would treat it as though
	   it understands human behavior and associated functions.
	   From this point of view it is irrelevant whether or not
	   the program actually does understand or just mimics
	   understanding.
	2) Will an AI intelligence necessarily be human? No,
	   but to paraphrase a leader in the field, we know
	   humans can achieve a high level of intelligence, but
	   we don't know for certain that there is any other form
	   of highly intelligent "life", so at least for now we
	   should use the only available model.  This expresses
	   two goals which I share: to create an artificial intelligence
	   and to figure out how people work.

						Ben Cutler
						decvax!yale-comix!cutler