[net.ai] Prejudice and Frames, Turing Test

turner@rand-unix@sri-unix.UUCP (08/19/83)

  I don't think prejudice is a by-product of Minsky-like frames.
Prejudice is simply one way to be misinformed about the world.  In
people, we also associate prejudice with the inability to correct
incorrect information in light of experiences which prove it to be
wrong.

  Nothing in Minsky frames, as opposed to any other theory, is a
necessary condition for this.  In any understanding situation, the
thinker must call on background information, regardless of how that is
best represented.  If this background information is incorrect and not
corrected in light of new information, then we may have prejudice.

  Of course, this is a subtle line.  A scientist doesn't change his
theories just because a fact wanders by that seems to contradict
them.  If he is wise, he waits until a body of irrefutable
evidence builds up.  Is he prejudiced in favor of his current theories?
Yes, I'd say so, but in this case it is a useful prejudice.

  So prejudice is really a property of the algorithm for modifying
known information in light of new information.  An algorithm that
resists change too strongly results in prejudice.  The opposite
extreme -- an algorithm that changes too easily -- results in faddism,
blowing whichever way the wind blows.
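The trade-off can be sketched in a few lines.  This is a minimal
illustration only; the class name `BeliefHolder` and the `resistance`
parameter are invented for the example, not taken from any actual
system:

```python
class BeliefHolder:
    """Toy belief reviser: flips a stored generalization only after
    `resistance` consecutive pieces of contradictory evidence."""

    def __init__(self, belief, resistance):
        self.belief = belief          # current generalization (True/False)
        self.resistance = resistance  # contradictions needed before revising
        self.contradictions = 0

    def observe(self, evidence):
        if evidence == self.belief:
            self.contradictions = 0   # confirming data resets the count
        else:
            self.contradictions += 1
            if self.contradictions >= self.resistance:
                self.belief = evidence
                self.contradictions = 0

# resistance=1 flips on every stray datum (faddism); a resistance so
# large it is never reached means the belief never revises (prejudice).
```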

                        -----------

  Stan's point in I:42 about Zeno's paradox is interesting.  Perhaps
the mind-set forced upon the AI community by Alan Turing is wrong.
Is Turing's Test a valid test for Artificial Intelligence?

  Clearly not.  It is a test of Human Mimicry Ability.  It rests on
the assumption that the ability to mimic a human requires intelligence.
This has been shown in the past not to be entirely true; ELIZA is an
example of a program that clearly has no intelligence and yet mimics a
human in a limited domain fairly well.
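ELIZA's mimicry rests on keyword-and-template matching rather than on
any understanding.  A minimal sketch of that style of matching follows;
the patterns are illustrative stand-ins, not Weizenbaum's actual
script:

```python
import re

# Each rule: (keyword pattern, response template).  Illustrative only.
RULES = [
    (re.compile(r"\bI am (.*)", re.IGNORECASE),
     "Why do you say you are {0}?"),
    (re.compile(r"\bmy (\w+)", re.IGNORECASE),
     "Tell me more about your {0}."),
]

def respond(sentence):
    """Return the first matching template, filled with the captured
    text -- no model of meaning anywhere."""
    for pattern, template in RULES:
        m = pattern.search(sentence)
        if m:
            return template.format(*m.groups())
    return "Please go on."  # stock reply when no keyword matches
```

The program appears responsive while manipulating only surface strings,
which is exactly why passing in a limited domain proves so little.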

  A common theme in science fiction is "Alien Intelligence".  That is,
the sf writer bases his story on the idea:  "What if alien
intelligence weren't like human intelligence?"  Many interesting
stories have resulted from this premise.  We face a similar situation
here.  We assume that Artificial Intelligence will be detectable by
its resemblance to human intelligence.  We really have little ground
for this belief.

  What we need is a better definition of intelligence, and a test
based on this definition.  In the Turing mind-set, the definition of
intelligence is "acts like a human being" and that is clearly
insufficient.  The Turing test also leads one to think erroneously
that intelligence is a property with two states (intelligent and
non-intelligent) when even amongst humans there is a wide variance in
the level of intelligence.

  My initial feeling is to relate intelligence to the ability to
achieve goals in a given environment.  The more intelligent man today
is the one who gets what he wants; in short, the more you achieve your
goals, the more intelligent you are.  This means that a person may be
more intelligent in one area of life than in another.  He is, for
instance, a great businessman but a poor father.  This is no surprise.
We all recognize that people have different levels of competence in
different areas.

  Of course, this definition has problems.  If your goal is to lift
great weights, then your intelligence may be dependent on your
physical build.  That doesn't seem right.  Is a chess program more
intelligent when it runs on a faster machine?

  In the sense of this definition we already have many "intelligent"
programs in limited domains.  For instance, in the domain of
electronic mail handling, there are many very intelligent entities.
In the domain of human life, no electronic entities.  In the domain of
human politics, no human entities (*ha*ha*).

  I'm sure it is nothing new to say that we should not worry about the
Turing test and instead worry about more practical and functional
problems in the field of AI.  It does seem, however, that the Turing
Test is a limited and perhaps blinding outlook onto the AI field.


                                        Scott Turner
                                        turner@randvax

ech@pyuxll.UUCP (Ned Horvath) (08/25/83)

The characterization of prejudice as an unwillingness/inability to
adapt to new (contradictory) data is an appealing one.  Perhaps this
belongs in net.philosophy, but it seems to me that a requirement for
becoming a fully functional intelligence (human or otherwise) is to
abandon the search for compact, comfortable "truths" and view
knowledge as an approximation and learning as the process of
improving those approximations.

There is nothing wrong with compact generalizations: they reduce
"overhead" in routine situations to manageable levels.  It is when
they are applied exclusively and/or inflexibly that generalizations
yield bigotry and the more amusing conversations with Eliza et al.

As for the Turing test, I think it may be appropriate to think of it
as a "razor" rather than as a serious proposal.  When Turing proposed
the test there was a philosophical argument raging over the
definition of intelligence, much of which was outright mysticism.
The famous test cuts the fog nicely: a device needn't have
consciousness, a soul, emotions -- pick your own list of nebulous
terms -- in order to function "intelligently."  Forget whether it's
"the real thing"; it's performance that counts.

I think Turing recognized that, no matter how successful AI work was,
there would always be those (bigots?) who would rip the back off the
machine and say, "You see?  Just mechanism, no soul, no emotions..."
To them, the Turing test replies, "Who cares?"

=Ned=

emma@uw-june (Joe Pfeiffer) (08/25/83)

I don't think I can accept some of the comments being bandied about
regarding prejudice.  Prejudice, as I understand the term, refers to
prejudging a person on the basis of class, rather than judging that
person as an individual.  Class here is used in a wider sense than
economic.  Examples would be "colored folk got rhythm" or "all them white
saxophonists sound the same to me"-- this latter being a quote from
Miles Davis, by the way.  It is immediately apparent that prejudice is a
natural result of making generalizations and extrapolating from
experience.  This is a natural, and I would suspect inevitable, result of a
knowledge acquisition process which generalizes.

Bigotry, meanwhile, refers to inflexible prejudice.  Miles has used a
lot of white saxophonists, as he recognizes that they don't all sound
the same.  Were he bigoted, rather than prejudiced, he would refuse to
acknowledge that.  The problem lies in determining at what point an
apparent counterexample should modify a conception.  Do we decide that
gravity doesn't work for airplanes, or that gravity always works but
something else is going on?  Do we decide that a particular white sax
man is good, or that he's got a John Coltrane tape in his pocket?

In general, I would say that some people out there are getting awfully
self-righteous regarding a phenomenon that ought to be studied as a
result of our knowledge acquisition process rather than used to
classify people as sub-human.

-Joe P.