[net.ai] Intelligence and Categorization

AXLER.Upenn-1100%Rand-Relay@sri-unix.UUCP (11/16/83)

From:  AXLER.Upenn-1100@Rand-Relay (David M. Axler - MSCF Applications Mgr.)

     I think Tom Portegys' comment in 1:98 is very true.  Knowing whether or
not a thing is intelligent, has a soul, etc., is quite helpful in letting
us categorize it.  And, without that categorization, we're unable to know
how to understand it.  Two minor asides that might be relevant in this
regard:

     1)  There's a school of thought in the fields of linguistics, folklore,
and anthropology, which is based on the notion (admittedly arguable)
that the only way to truly understand a culture is to first record and
understand its native categories, as these structure both its language and its
thought, at many levels.  (This ties in to the Sapir-Whorf hypothesis that
language structures thought, not the reverse...)  From what I've read in this
area, there is definite validity in this approach.  So, if it's reasonable to
try and understand a culture in terms of its categories (which may or may not
be translatable into our own culture's categories, of course), then it's
equally reasonable for us to need to categorize new things so that we can
understand them within our existing framework.

     2)  Back in medieval times, there was a concept known as the "Great
Chain of Being", which essentially stated that everything had its place in
the scheme of things; at the bottom of the chain were inanimate things, at the
top was God, and the various flora and fauna were in-between.  This set of
categories structured a lot of medieval thinking, and had major influences on
Western thought in general, including thought about the nature of intelligence.
Though the viewpoint implicit in this theory isn't widely held any more, it's
still around in other, more modern, theories, but at a "subconscious" level.
As a result, the notion of 'machine intelligence' can be a troubling one,
because it implies that the inanimate is being relocated in the chain to a
position nearly equal to that of man.

I'm ranging a bit far afield here, but this ought to provoke some discussion...
Dave Axler

west@sdcsla.UUCP (11/30/83)

Quoted section between lines of "---".   I omit Dave's point #2, which
I find uninteresting.
----------------------------------------------------------------------------
>From:  AXLER.Upenn-1100@Rand-Relay (David M. Axler - MSCF Applications Mgr.)

     I think Tom Portegys' comment in 1:98 is very true.  Knowing whether or
not a thing is intelligent, has a soul, etc., is quite helpful in letting
us categorize it.  And, without that categorization, we're unable to know
how to understand it.  Two minor asides that might be relevant in this
regard:

     1)  There's a school of thought in the fields of linguistics, folklore,
and anthropology, which is based on the notion (admittedly arguable)
that the only way to truly understand a culture is to first record and
understand its native categories, as these structure both its language and its
thought, at many levels.  (This ties in to the Sapir-Whorf hypothesis that
language structures thought, not the reverse...)  From what I've read in this
area, there is definite validity in this approach.  So, if it's reasonable to
try and understand a culture in terms of its categories (which may or may not
be translatable into our own culture's categories, of course), then it's
equally reasonable for us to need to categorize new things so that we can
understand them within our existing framework.
----------------------------------------------------------------------------

Deciding whether a thing is or is not intelligent seems to be a hairier
problem than "simply" categorizing its behavior and other attributes.

As to point #1, trying to understand a culture by looking at how it
categorizes does not constitute a validation of the process of
categorization (particularly in scientific endeavours).   Restated: There
is no connection between the fact that anthropologists find that studying
a culture's categories is a very powerful tool for aiding understanding,
and the conclusion that we need to categorize new things to understand them.

I'm not saying that categorization is useless (far from it), but the work
of Sapir and Whorf has no direct bearing on this subject (in my view).

What I am saying is that while deciding to treat something as "intelligent",
e.g., a computer chess program, may prove to be the most effective way of
dealing with it in "normal life", it does nothing to help you understand
it.   If you choose to classify the chess program as intelligent,
what has that told you about the chess program?   If you classify it
as unintelligent...?   I think this reflects more upon the interaction
between you and the chess program than upon the structure of the chess
program.

			-- Larry West   UC San Diego
possible net addresses:
			-- ARPA:	west@NPRDC
			-- UUCP:	ucbvax!sdcsvax!sdcsla!west
			--	or	ucbvax:sdcsvax:sdcsla:west