[comp.ai] Intelligence and complex behavior.

dmocsny@uceng.UC.EDU (daniel mocsny) (12/08/88)

How about this working definition of intelligence (and this time I'm
going to proofread it so I distinguish between the noun
``intelligence'' and the adjective ``intelligent'' -- my thanks to
longsuffering comp.ai'ers for sparing me the grammatical flames on my
previous gaffe...):

Intelligence: the ability to find fault with any attempt to define
intelligence.

*****

Earlier I speculated that having some satisfying way to discuss
complexity is a necessary precursor to having a satisfying (i.e.,
quantitative and perhaps reproducible) way to discuss intelligence.
That is because I think the only way we might be able to dissociate
intelligence from the social context is to find some measure of
behavioral complexity.

We do not ordinarily think of a rock as being intelligent. Yet it is a
physical system and it obeys physical laws (as I suppose we are
tacitly assuming humans are and do). If I kick the rock, it sails to
the other side of the room (unless the rock is massive, in which case
I break my toe). If I push the rock off the table, it falls to the
floor. I
don't need to play with it very long to pretty much grasp the scope of
potential rock behaviors. It cannot adapt to changing external
conditions in very elaborate ways, so I don't consider it intelligent.

Moving up the organizational scale, we have motile microbes whipping
flagella in pursuit of concentration gradients, plants growing toward
the light, ant colonies with their emergent properties, our immune
systems that learn (sometimes) to recognize invaders and to spare
self, and on up to human culture with its seemingly endless capacity
for arbitrary, often frivolous, variation. Certainly,
externally observable human behavior doesn't account for everything
going on inside the human brain, but we could lump in neural activity
as ``behavior,'' if we could measure it with sufficient resolution.

I think we may eventually discuss intelligence in terms of the
complexity of behaviors a system can potentially exhibit. Learning
is part of that, as is previously learned knowledge.
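
To hint at what I mean, here is a toy sketch in C. Everything in it
is my own invention for illustration: an 8-bit ``system'' driven by a
1-bit input, with an arbitrary transition rule. Counting the distinct
states it can ever reach is about the crudest behavioral-repertoire
measure imaginable, but it makes the idea concrete.

/* Toy measure of behavioral repertoire: breadth-first count of the
 * distinct states an 8-bit system can reach from state 0, given a
 * 1-bit external input at each step.  The transition rule below is
 * an arbitrary invention for illustration.
 */
#include <stdio.h>

#define NSTATES 256

/* hypothetical transition rule: next state from current state s
 * and external input b (0 or 1); wraps mod 256 automatically */
static unsigned char step(unsigned char s, int b)
{
    return (unsigned char)(5 * s + 17 * b + 3);
}

int main(void)
{
    int seen[NSTATES] = {0};
    unsigned char queue[NSTATES];
    int head = 0, tail = 0, count = 0;
    int b;

    queue[tail++] = 0;                  /* start state */
    seen[0] = 1;

    while (head < tail) {
        unsigned char s = queue[head++];
        count++;
        for (b = 0; b <= 1; b++) {
            unsigned char t = step(s, b);
            if (!seen[t]) {             /* enqueue each state once */
                seen[t] = 1;
                queue[tail++] = t;
            }
        }
    }
    printf("reachable states: %d of %d\n", count, NSTATES);
    return 0;
}

A rock scores low on such a count; anything we would call adaptive
scores high. A real measure would also have to care about how the
behaviors are structured, not just how many there are.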

*****

A quick thought on the idea of humans as machines. I think the reason
many people find this notion offensive follows from the distinction
between the sorts of organization we impart to technological artifacts
and that apparent in biological systems (see S. Wolfram, ``Approaches
to Complexity Engineering,'' Physica D 22 (1986) 385-399, in the
proceedings of the Los Alamos conference on Evolution, Games and
Learning).
Technological systems tend to have components that are hierarchical,
constrained, periodic, synchronized, linear, etc., and emergent
behaviors are generally destructive and purposely designed out.
Biological systems have many opposite tendencies. Nonlinear
interactions between many individually trivial components give rise to
complex emergent behaviors, yielding robust and adaptive systems.
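
To see how little machinery that takes, here is a sketch of one of
Wolfram's elementary one-dimensional cellular automata (rule 110).
The code and parameters are my own choices for illustration, not
anything from the paper above. Each cell consults only itself and
its two neighbors, yet the pattern the whole array traces out is
anything but trivial:

/* Elementary 1-D cellular automaton, rule 110: each cell updates
 * from a trivial 3-cell neighborhood rule, yet the global pattern
 * is complex -- simple components, nontrivial emergent behavior.
 */
#include <stdio.h>
#include <string.h>

#define WIDTH 64
#define STEPS 24
#define RULE  110

int main(void)
{
    unsigned char cell[WIDTH] = {0}, next[WIDTH];
    int t, i;

    cell[WIDTH - 1] = 1;                /* single live cell */

    for (t = 0; t < STEPS; t++) {
        for (i = 0; i < WIDTH; i++)
            putchar(cell[i] ? '*' : ' ');
        putchar('\n');
        for (i = 0; i < WIDTH; i++) {
            int l = cell[(i + WIDTH - 1) % WIDTH];  /* wrap around */
            int c = cell[i];
            int r = cell[(i + 1) % WIDTH];
            /* bit k of the rule number gives the new state for
             * neighborhood k = (l,c,r) read as a 3-bit number */
            next[i] = (RULE >> (l * 4 + c * 2 + r)) & 1;
        }
        memcpy(cell, next, sizeof cell);
    }
    return 0;
}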

Wolfram says the trick to building non-brittle artifacts is to learn
how to specify systems that exhibit a desired emergent behavior,
something we don't know how to do yet. (I'm looking into GLAs now,
and I wonder if they might provide some way to create, say, a neural
net that implements, or can efficiently learn, a desired mapping from
R^n to I^m.)
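
As the most trivial instance of ``learning a desired mapping,'' here
is a sketch of a single perceptron -- my own illustration, nothing to
do with GLAs in particular. It learns a mapping from R^2 to {0, 1}
(so n = 2, m = 1) by the classic perceptron rule, on a made-up,
linearly separable data set:

/* A single perceptron trained with the classic perceptron rule on
 * a linearly separable toy problem (label = 1 iff x + y > 1).
 * The data are invented for illustration.
 */
#include <stdio.h>

#define NPTS 4

static const double x[NPTS][2] = {
    {0.0, 0.0}, {1.0, 0.2}, {0.3, 1.0}, {0.9, 0.9}
};
static const int target[NPTS] = { 0, 1, 1, 1 };

int main(void)
{
    double w[2] = {0.0, 0.0}, bias = 0.0, rate = 0.1;
    int epoch, i, errors = 1;

    for (epoch = 0; epoch < 100 && errors > 0; epoch++) {
        errors = 0;
        for (i = 0; i < NPTS; i++) {
            int out = (w[0]*x[i][0] + w[1]*x[i][1] + bias > 0.0);
            int err = target[i] - out;          /* -1, 0, or +1 */
            if (err) {                          /* nudge the weights */
                w[0] += rate * err * x[i][0];
                w[1] += rate * err * x[i][1];
                bias += rate * err;
                errors++;
            }
        }
    }
    printf("after %d epochs: w = (%g, %g), bias = %g\n",
           epoch, w[0], w[1], bias);
    return 0;
}

A lone perceptron only handles the linearly separable mappings, of
course; the hard question is how to specify (or grow) nets that can
learn the rest.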

When someone calls people machines, strictly speaking (s)he is only
echoing the materialist assumption. On the other hand, if we take
``machine'' to refer to our best technological artifacts, then the
claim does, I think, fall a bit short.

Cheers,

Dan Mocsny
dmocsny@uceng.uc.edu