[net.ai] AI and Human Intelligence

spaf%gatech@UDel-Relay@sri-unix.UUCP (08/23/83)

From:  The soapbox of Gene Spafford <spaf%gatech@UDel-Relay>

[The following are excerpts from several interchanges with the author.
-- KIL]

Words do not necessarily mean what I want them to mean, nor what you
want them to mean, but what we all agree they mean.  My point is that
we may very well need to consider emotions and ethics in any model we
care to construct of a "human" intelligence.  The ability to handle a
conversation, as is implied by the Turing test, is not sufficient in 
my eyes to classify something as "intelligent."  That is, what
*exactly* is intelligence?  Is it something measured by an IQ test?
I'm sure you realize that that particular point is a subject of much
conjecture.

If these discussion groups are for discussion of artificial
"intelligence," then I would like to see some thought given as to the
definition of "intelligence."  Is emotion part of intelligence?  Is
superstition part of intelligence?

FYI, I do not believe what I suggested -- that bigots are less than
human.  I made that suggestion to start some comments.  I have gotten
some interesting mail from people who have thought some about the
idea, and from a great many people who decided I should be locked away
for even coming up with the idea.

[...]

That brought to mind a second point -- what is human?  What is
intelligence?  Are they the same thing? (My belief -- no, they aren't.)
I proposed that we might classify "human" as being someone who *at
least tries* to overcome irrational prejudices and bigotry.  More than
ever we need such qualities as open-mindedness and compassion, as
individuals and as a society.  Can those qualities be programmed into
an AI system?  [...]

My original submission to Usenet was intended to be a somewhat 
sarcastic remark about the nonsense that was going on in a few of the
newsgroups.  Responses to me via mail indicate that at least a few
people saw through to some deeper, more interesting questions.  For
those people who immediately jumped on my case for making the
suggestion, not only did you miss the point -- you *are* the point.

--
  The soapbox of Gene Spafford
  CSNet:  Spaf @ GATech ARPA:  Spaf.GATech @ UDel-Relay
  uucp: ...!{sb1,allegra,ut-ngp}!gatech!spaf
        ...!duke!mcnc!msdc!gatech!spaf

laura@utcsstat.UUCP (Laura Creighton) (08/25/83)

Goodness, I stopped reading net.ai a while ago, but had an AI
problem to submit and decided to read this in case the question
had already been asked and answered. News here only lasts for
2 weeks, but things have changed...

At any rate, you are all discussing here what I have been discussing
in mail to AI types (none of whom mentioned that this was going on
here, the cretins! ;-) ). I am discussing bigotry by mail with AI folk.

I have a problem in furthering my discussion. When I mentioned it
I got the same response from 2 of my 3 AI folk, and am waiting for
the same one from the third. I gather it is a fundamental AI
sort of problem.

I maintain that 'a problem' and 'a description of a problem'
are not the same thing. Thus 'discrimination' is a problem,
but the word 'nigger' is not. 'Nigger' is a word which describes
the problem of discrimination. One may decide not to use the
word 'nigger', but abolishing the word only gets rid of one
description of the problem, not the problem itself.

If there were no words to express discrimination, and discrimination
existed, then words would be created (or existing words would be
perverted) to express discrimination. Thus language can be counted
upon to reflect the attitudes of society, but changing the language
is not an effective way to change society.


This position is not going over very well. I gather that there is
some section of the AI community which believes that language
(the description of a problem) *is* the problem.  I am thus
reduced to saying, "oh no it isn't, you silly person," but am left
holding the bag when they start quoting from texts. I can bring
out anthropology and linguistics and they can bring out some
epistemology and Knowledge Representation, but the discussion
isn't going anywhere...

Can anybody out there help?

laura creighton
utzoo!utcsstat!laura

mark@umcp-cs.UUCP (08/27/83)

Laura says that words are only descriptions of the problem, not the
problem itself, so we need not be concerned about using the words.
Her example, however, uses a word which is not a description
of the problem at all, but in fact helps to create the problem by
creating divisive images of classes of people and therefore
setting the stage for treating some people as less than human.
Words are not just descriptions--words have powerful effects
on the world.
-- 
spoken:	mark weiser
UUCP:	{seismo,allegra,brl-bmd}!umcp-cs!mark
CSNet:	mark@umcp-cs
ARPA:	mark.umcp-cs@UDel-Relay