[net.ai] Behavioristic definition of intelligence

portegys@ihuxv.UUCP (Tom Portegys) (11/14/83)

What is the purpose of knowing whether something is 
intelligent?  Or has a soul?  Or has consciousness?

I think one of the reasons is that it makes it easier to
deal with it.  If a creature is understood to be a human
being, we all know something about how to behave toward it.
And if a machine exhibits intelligence, the quintessential 
quality of human beings, we will also know what to do.

One thing this implies is that we really should not worry
too much about whether a machine is intelligent until one
actually arrives.  The definition will be determined in part
by how we behave toward it.  Right now, I don't feel
very confused about how to act in the presence of a computer
running an AI program.

           Tom Portegys, Bell Labs IH, ihuxv!portegys

alf@ttds.UUCP (Thomas Sjoeland) (11/21/83)

Doesn't the concept "intelligence" have some characteristics in common with
a concept such as "traffic"?  It seems obvious that one can measure such
entities as "traffic intensity" and the like, thereby gaining an indirect
understanding of the conditions that determine the "traffic", but it seems
very difficult to find a direct measure of "traffic" as such.  Some may say
that "traffic" and "traffic intensity" are synonymous concepts, but I don't
agree.  The common opinion among psychologists seems to be that
"intelligence" is that which is measured by an intelligence test.  By
measuring a set of problem solving skills and weighing the results together
we get a value.  Why not call it "intelligence"?  The measure could be
applicable to machine intelligence also as soon as (if ever) we teach the
machines to pass intelligence tests.  It should be quite clear that
"intelligence" is not the same as "humanness", which is what a Turing
test measures.