[net.ai] Definition of Intelligence

v.kahn%UCLA-LOCUS@sri-unix.UUCP (11/02/83)

From:  Philip Kahn <v.kahn@UCLA-LOCUS>

        When it comes down to it, isn't intelligence the ability to
recognize space-time relationships?  The nice thing about this definition
is that it recognizes that ants, programs, and humans all possess
varying degrees of intelligence (that is, varying degrees of ability
to recognize space-time relationships).  This implies that
intelligence is only correlative, and only indirectly related to
physical interaction with the environment.

Laws@SRI-AI.ARPA (11/04/83)

From:  Ken Laws <Laws@SRI-AI.ARPA>

I like the idea that the intelligence of an organism should be
measured relative to its goals (which usually include survival, but
not in the case of "smart" bombs and kamikaze pilots).  I don't think
that goal-satisfaction criteria can be used to establish the "relative
intelligence" of organisms with very different goals.  Can a fruit fly
be more intelligent than I am, no matter how well it satisfies its
goals?  Can a rock be intelligent if its goals are sufficiently
limited?

To illustrate this in another domain, let us consider "strength".  A
large bulldozer is stronger than a small one because it can apply more
brute force to any job that a bulldozer is expected to do.  Can we
say, though, that a bulldozer is "stronger" than a pile driver, or
vice versa?

Put another way: If scissors > paper > rock > scissors ..., does it
make any sense to ask which is "best"?  I think that this is the
problem we run into when we try to define intelligence in terms of
goals.  This is not to say that we can define it to be independent of
goals, but goal satisfaction is not sufficient.

Instead, I would define intelligence in terms of adaptability or
learning capability in the pursuit of goals.  An organism with hard-
wired responses to its environment (e.g., a rock, a fruit fly, MACSYMA)
is not intelligent because it does not adapt.  I, on the other hand,
can be considered intelligent even if I do not achieve my goals as
long as I adapt to my environment and learn from it in ways that would
normally enhance my chances of success.
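A toy sketch of the distinction (the agents, payoffs, and learning
rule are all invented for illustration):

    import random

    class HardWired:
        # fixed response, no matter what the environment returns
        def act(self):
            return "A"
        def learn(self, action, reward):
            pass   # never adapts

    class Adaptive:
        # keeps a running value estimate per response, shifts toward payoff
        def __init__(self):
            self.value = {"A": 0.0, "B": 0.0}
        def act(self):
            if random.random() < 0.1:          # occasional exploration
                return random.choice(list(self.value))
            return max(self.value, key=self.value.get)
        def learn(self, action, reward):
            self.value[action] += 0.1 * (reward - self.value[action])

    def reward(action):                        # environment pays only for "B"
        return 1.0 if action == "B" else 0.0

    for name, agent in [("hard-wired", HardWired()), ("adaptive", Adaptive())]:
        total = 0.0
        for _ in range(1000):
            a = agent.act()
            r = reward(a)
            agent.learn(a, r)
            total += r
        print(name, total)

The hard-wired agent scores nothing; the adaptive one discovers "B"
and keeps improving whether or not it ever plays perfectly, which is
exactly the sense of "intelligent even if I do not achieve my goals"
above.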

Whether speed of response must be included as a measure of
intelligence depends on the goal, but I would say that, in general,
rapid adaptation does indicate greater intelligence than the same
response produced slowly.  Multiple choice aptitude tests, however,
exercise such limited mental capabilities that a score of correct
answers per minute is more a test of current knowledge than of ability
to learn and adapt within the testing period.  Knowledge relative to
age (IQ) is a useful measure of learning ability and thus of
intelligence, but cannot be used for comparing different species.  I
prefer unlimited-time "power" tests for measuring both competence and
intelligence.

The Turing test imposes a single goal on two organisms, namely the
goal of convincing an observer at the other end of a tty that he/it is
the true human.  This will clearly only work for organisms capable
of typing at human speed and capable of accepting such a goal.  These
conditions imply that the organism must have a knowledge of human
psychology and capabilities, or at least a belief (probably incorrect)
that it can "fake" them.  Given such a restricted situation, the
nonhuman organism is to be judged intelligent if it can appropriately
modify its own behavior in response to questioning at least as well as
the human can.  (I would claim that a nonadapting organism hasn't a
chance of passing the test, and that this is just what the observer
will be looking for.)

I do not believe that a single test can be devised which can determine
the relative intelligences of arbitrary organisms, but the public
wants such a test.  What shall we give them?  I would suggest the
following procedure:

For two candidate organisms, determine a goal that both are capable
of accepting and that we consider related to intelligence.  For an
interesting test, the goal must be such that neither organism is
specially adapted or maladapted for achieving it.  The goal might be
absolute (e.g., learn 100 nonsense syllables) or relative (e.g.,
double your vocabulary).  If no such goal can be found, the two
organisms cannot be ranked against each other.  If a goal is found,
we can rank them
along the dimension of the indicated behavior and we can infer a
similar ranking for related behaviors (e.g., verbal ability).  The
actual testing for learning ability is relatively simple.
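A toy sketch of the procedure (the learning model and rates are
invented; the point is only the ranking), using the "learn 100
nonsense syllables" goal and counting study passes to mastery:

    import random

    def passes_to_mastery(learn_rate, n_items=100, max_passes=500, seed=0):
        # each study pass, every unlearned syllable is mastered with
        # probability learn_rate; return the passes needed to learn all
        rng = random.Random(seed)
        learned = [False] * n_items
        for p in range(1, max_passes + 1):
            learned = [ok or rng.random() < learn_rate for ok in learned]
            if all(learned):
                return p
        return max_passes

    candidates = {"organism-1": 0.30, "organism-2": 0.10}   # invented rates
    ranking = sorted(candidates, key=lambda c: passes_to_mastery(candidates[c]))
    print(ranking)   # faster learner first, along this one dimension only

The resulting order licenses a ranking only along the tested behavior
and an inference to related behaviors, nothing more.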

How can we test a computer for intelligence?  Unfortunately, a computer
can be given a wide variety of sensors and effectors and can be made
to accept almost any goal.  We must test it for human-level adaptability
in using all of these.  If it cannot equal human ability on nearly all
measurable scales (e.g., game playing, verbal ability, numerical
ability, learning new perceptual and motor skills, etc.), it cannot
be considered intelligent in the human sense.  I know that this is
exceedingly strict, but it is the same test that I would apply to
decide whether a child, idiot savant, or other person were intelligent.
On the other hand, if I could not match the computer's numerical and
memory capabilities, it would have the right to judge me unintelligent by
computer standards.

The intelligence of a particular computer program, however, should
be judged by much less stringent standards.  I do not expect a
symbolic algebra program to learn to whistle Dixie.  If it can
learn, without being programmed, a new form of integral faster
than I can, or if it can find a better solution than I can in
any length of time, then I will consider it an intelligent symbolic
algebra program.  Similar criteria apply to any other AI program.

I have left open the question of how to measure adaptability,
relative importance of differing goals, parallel satisfaction of
multiple goals, etc.  I have also not discussed creativity, which
involves autonomous creation of new goals.  Have I missed anything,
though, in the basic concept of intelligence?

                                        -- Ken Laws

unbent@ecsvax.UUCP (11/07/83)

"...intelligence is the ability to recognize space-time relationships..."

I'm not sure what this means.  Does it mean:  Being able to
get around without bumping into things?  Remembering where
the bone is buried?  Being able to read the logical map of
the usenet network?  I sure wouldn't put that last one on a
continuum with the first two!

                                --Jay Rosenberg
                                 (ecsvax!unbent)