[mod.ai] Seminar - AI from the Bottom Up

Marcella.Zaragoza@ISL1.RI.CMU.EDU.UUCP (02/27/87)

			AI SEMINAR

TOPIC:    "Artificial Intelligence from the Bottom Up"

SPEAKER:  Hans Moravec, Robotics

WHEN:     Tuesday, March 3, 1987, 3:30 pm

WHERE:    Wean Hall 5409

			ABSTRACT

       Computers were created to do arithmetic faster and better than
people. AI attempts to extend this superiority to other mental arenas.
Some mental activities require little data, but others depend on
voluminous knowledge of the world.  Robotics was pursued in AI labs
partly to automate the acquisition of world knowledge.  It was soon
noticed that the acquisition problem was less tractable than the mental
activities it was to serve.  While computers often exhibited adult-level
performance in difficult mental tasks, robotic controllers were
incapable of matching even infantile perceptual skills.

       In hindsight the dichotomy is not surprising.  Animal genomes
have been engaged in a billion-year arms race among themselves, with
survival often awarded to the quickest to produce a correct action from
inconclusive perceptions.  We are all prodigious Olympians in perceptual
and motor areas, so good that we make the hard look easy.  Abstract
thought, on the other hand, is a small new trick, perhaps less than a
hundred thousand years old, not yet mastered.  It just looks hard when
we do it.

       How hard and how easy?  Average human beings can be beaten at
arithmetic by a one operation per second machine, in logic problems by
100 operations per second, at chess by 10,000 operations per second,
and in some narrow "expert systems" areas by a million operations per
second.  Robotic performance cannot yet provide the same standard of
comparison, but
a calculation based on retinal processes and their computer visual
equivalents suggests that 10 BILLION (10^10) operations per second are
required to do the job of the retina, and a TRILLION (10^12) to match the
bulk of the human brain.
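
       As an illustration of how such an estimate can be assembled (the
particular figures below - a million retinal output fibers, ten image
updates per second, a thousand computer operations to match each cell's
detection work, and a brain a hundred times the retina's size - are
assumptions chosen to reproduce the stated magnitudes, not the
speaker's own calculation):

	# Hypothetical retina-style estimate; every figure here is an
	# illustrative assumption, not taken from the talk.
	ganglion_cells  = 1_000_000  # assumed output fibers per retina
	updates_per_sec = 10         # assumed image updates per second
	ops_per_cell    = 1_000      # assumed operations to match one
	                             # cell's edge/motion detection

	retina_ops = ganglion_cells * updates_per_sec * ops_per_cell
	print(retina_ops)            # 10**10: ten billion ops/second

	brain_vs_retina = 100        # assumed brain-to-retina size ratio
	print(retina_ops * brain_vs_retina)  # 10**12: a trillion ops/second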

       Truly expert human performance may depend on mapping a problem
into structures originally constructed for perceptual and motor tasks -
so it can be internally visualized, felt, heard or perhaps smelled and
tasted.  Such transformations give the trillion operations per second
engine a purchase on the problem.  The same perceptual-motor structures
may also be the seat of "common sense", since they probably contain a
powerful model of the world - developed to solve the merciless life and
death problems of rapidly jumping to the right conclusion from the
slightest sensory clues.

       Semilog plots of computer power hint that trillion operation per
second computers will be common in twenty to forty years.  Can we
expect to program them to mimic the "hard" parts of human thought in
the same way that current AI programs capture some of the easy parts?
It is unlikely that introspection of conscious thought can carry us
very far - most of the brain is not instrumented for introspection; its
neurons are occupied efficiently solving the problem at hand, as in the
retina.  Neurobiologists are providing some very helpful
instrumentation extra-somatically, but not fast enough for the forty
year timetable.
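
       A sketch of the extrapolation behind that range (both the
starting point of roughly a million operations per second for a common
machine of the day and the two-year doubling time are assumptions for
illustration, not figures from the talk):

	# Hypothetical semilog extrapolation; the starting point and
	# doubling time below are assumed, not taken from the talk.
	import math

	current_ops  = 1e6    # assumed ~1 MIPS for a common 1987 machine
	target_ops   = 1e12   # the trillion ops/second figure above
	years_per_2x = 2.0    # assumed doubling time of computer power

	doublings = math.log2(target_ops / current_ops)  # about 20
	print(years_per_2x * doublings)                  # roughly 40 years

A faster assumed doubling time of a year or so gives the low end of
the twenty-to-forty-year range.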

       Another approach is to attempt to parallel the evolution of
animal nervous systems by seeking situations with selection criteria
like those in their history.  By solving similar incremental problems,
we may be driven, step by step, through the same solutions (helped,
where possible, by biological peeks at the "back of the book").  That
animals started with small nervous systems gives confidence that small
computers can emulate the intermediate steps, and mobile robots provide
the natural external forms for recreating the evolutionary tests we
must pass.  By this "bottom up" route I hope one day to meet my "top
down" colleagues halfway.  Together we can then metaphorically drive
the golden spike that unites the two efforts.