[comp.ai] Learning an Environment

kp@uts.amdahl.com (Ken Presting) (03/17/90)

In article <1990Mar16.104707.29360@hellgate.utah.edu> kbreinho%ug.utah.edu@cs.utah.edu (Keith Breinholt) writes:
>
>WHEN we have the algorithms, input devices and other hardware so that machines
>can learn an arbitrary environment "out of the box", THEN AND ONLY THEN will
>I believe that machines are sentient.
>

This position seems to me to be too strong.  Human beings have very
complex sensory and motor functions, and the use of these functions is
a large part of human intelligence.  But we don't count ourselves as
lacking in intelligence because we can't see radio waves, and can't
bend steel beams with our bare hands.  Our sensorimotor functions are
arbitrarily limited by natural selection to a class that is useful
and reliably reproducible.

I'd like to suggest that a machine with radically limited facilities for
interaction with its environment could still exhibit category formation,
concept learning, and behavioral complexity more than sufficient to
merit the term "intelligent" (whatever we eventually decide that means).

One thing about Keith's claim that I agree with is that the environment
itself must be the real world, with real people, places, and things.
Furthermore, I would go beyond Keith's statement to say that what the
machine does learn must include all the things that people learn - how
to use the sensory and motor facilities they have, and how to participate
in the activities of other people.

Human beings engage in a number of behaviors which are generally agreed
to involve intellect - negotiating contracts, writing legal opinions,
arguing about science and philosophy, for example.  It is easy to
overlook the role of perception in these activities, but it is
important to note that people engaged in them rely regularly on
observations and on premises drawn from observation.  None of these
intellectual activities is a pure calculation.

A machine which had very limited inputs and outputs (perhaps no more than
a keyboard and printer) could participate in arguments or negotiations
with a few practical handicaps, but with no intellectual handicaps.  For
example, suppose that a computer were acting as a judge in a courtroom,
with the court stenographer typing on its keyboard, and a bailiff to
announce the computer's directions and rulings.  The machine would be
dependent on the cooperation of able-bodied helpers, but that does not
seem to be an intellectual handicap.  The machine would have all the
information available (e.g.) to an appellate court.

The question then is what can a printer-keyboard machine learn, and how?
The central observation here is that the input to the machine, which
appears to be symbolic from our point of view, is no different from
our own afferent nerve impulses, from another point of view.  If the
inputs are categorized according to programmed operations, then the
machine does not "learn" categories in the human sense; for genuine
learning, the machine would have to form its own categories from the
input stream itself.
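
To make the distinction concrete, here is a toy Python sketch (my own
illustration, nothing more - the word lists and the crude co-occurrence
grouping are invented for the example).  The first routine applies
categories fixed in advance by the programmer; the second groups words
by how they co-occur in the input it is given, so whatever categories
it ends up with come from the environment rather than from the program.

from collections import Counter

# 1. Programmed categorization: the categories and their membership are
#    fixed in advance.  Whatever the machine does with them, it is not
#    learning the categories itself.
PROGRAMMED_CATEGORIES = {
    "legal": {"contract", "ruling", "appeal"},
    "science": {"experiment", "hypothesis", "data"},
}

def programmed_categorize(token):
    for category, members in PROGRAMMED_CATEGORIES.items():
        if token in members:
            return category
    return "unknown"

# 2. Induced categorization: group words by their most frequent
#    neighbor in the input stream - a crude stand-in for real
#    clustering, but the groupings depend only on the input.
def induce_categories(sentences):
    cooccurrence = {}
    for sentence in sentences:
        words = sentence.lower().split()
        for w in words:
            cooccurrence.setdefault(w, Counter()).update(
                x for x in words if x != w)
    buckets = {}
    for w, neighbors in cooccurrence.items():
        anchor = neighbors.most_common(1)[0][0] if neighbors else w
        buckets.setdefault(anchor, set()).add(w)
    return buckets

if __name__ == "__main__":
    stream = [
        "the contract requires a ruling on appeal",
        "the experiment tests a hypothesis against data",
    ]
    print(programmed_categorize("ruling"))   # fixed in advance: 'legal'
    print(induce_categories(stream))         # groupings found in the input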

Another question is how the machine could exhibit any behavior that would
reasonably be called "motivated" as opposed to "reflex".  It seems natural
to suppose that a printer-keyboard machine could have *curiosity*, but
perhaps not much else.  But if what we're after is behavior that
resembles what we call "intelligent" in humans, that may be sufficient.
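
To put "motivated" vs. "reflex" in equally concrete (and equally toy)
terms: a reflex machine maps each input to a canned response, while a
minimally curious one keeps track of what it has already seen and
directs its questions at whatever is unfamiliar, so its behavior
depends on the state of its own knowledge and not just on the current
input.  Again, this is only an illustration, not a proposal.

REFLEX_TABLE = {"hello": "hello", "goodbye": "goodbye"}

def reflex_reply(token):
    # Same input, same output, every time - no internal state involved.
    return REFLEX_TABLE.get(token, "?")

class CuriousMachine:
    def __init__(self):
        self.seen = set()

    def reply(self, token):
        # Ask about anything unfamiliar; acknowledge anything familiar.
        if token not in self.seen:
            self.seen.add(token)
            return "What is '%s'?" % token
        return "Noted."

if __name__ == "__main__":
    m = CuriousMachine()
    for word in ["hello", "quasar", "quasar"]:
        print(reflex_reply(word), "|", m.reply(word))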

I used the courtroom example mainly for variety.  I think that arguing
about science and philosophy is actually the more suggestive case.  The
main issue is this: since the human interface to the environment is
already a subset of the possible interactions, why should another
subset be inadequate to support the development of intelligence,
simply because it's a smaller subset?


Ken Presting