[comp.ai.digest] navigation and symbol manipulation

jbn@GLACIER.STANFORD.EDU (John B. Nagle) (08/25/88)

To: comp-ai-digest@decwrl.dec.com
Path: labrea!glacier!jbn
From: John B. Nagle <jbn@glacier.stanford.edu>
Newsgroups: comp.ai.digest
Subject: Re: navigation and symbol manipulation
Date: Tue, 23 Aug 88 02:05 EDT
References: <19880820041455.2.NICK@HOWARD-JOHNSONS.LCS.MIT.EDU>
Reply-To: John B. Nagle <glacier!jbn@labrea.stanford.edu>
Organization: Stanford University
Lines: 51


In a previous article, Stephen Smoliar writes:
>It is also worth noting that Chapter 8 of Gerald Edelman's NEURAL DARWINISM
>includes a fascinating discussion of the possible role of interaction between
>sensory and motor systems.  I think it is fair to say that Edelman shares
>Nagle's somewhat jaundiced view of mathematical logic, and his alternative
>analysis of the problem makes for very interesting, and probably profitable,
>reading.

       I do not take a "jaundiced view" of mathematical logic, but I
think its applicability limited.  I spent some years on automated program
verification (see my paper in ACM POPL '83) and have a fairly good idea of
what can be accomplished by automated theorem proving.  I consider mathematical
logic to be a very powerful technique when applied to rigidly formalizable
systems.  But outside of such systems, it is far less useful.  Proof is so
terribly brittle.  There have been many attempts to somehow deal with
the brittleness problem, but none seem to be really satisfying.  So,
it seems appropriate to accept the idea that the world is messy and go
from there; to seek solutions that can begin to cope with the messiness of
the real world.

       The trouble with this bottom-up approach, of course, is that you
can spend your entire career working on problems that seem so utterly
trivial to people who haven't struggled with them.  Look at Marc
Raibert's papers.  He's doing very significant work on legged locomotion.
Progress is slow; first bouncing, then constrained running, last year a forward
flip, maybe soon a free-running quadruped.  A reliable off-road runner is
still far away.  But there is real progress every year.  Along the way
are endless struggles with hydraulics, pneumatics, gyros, real-time control
systems, and mechanical linkages.  (I spent the summer of '87 overhauling
an electrohydraulic robot, and I'm now designing a robot vehicle.  I can
sympathize.)

       How much more pleasant to think deep philosophical thoughts.
Perhaps, if only the right formalization could be found, the problems
of common-sense reasoning would become tractable.  One can hope.
The search is perhaps comparable to the search for the Philosopher's Stone.
One succeeds, or one fails, but one can always hope for success just ahead.
Bottom-up AI is by comparison so unrewarding.  "The people want epistemology", 
as Drew McDermott once wrote.  It's depressing to think that it might take
a century to work up to a human-level AI from the bottom.  Ants by 2000,
mice by 2020 doesn't sound like an unrealistic schedule for the medium term,
and it gives an idea of what might be a realistic rate of progress.

       I think it's going to be a long haul.  But then, so was physics.
So was chemistry.  For that matter, so was electrical engineering.  We
can but push onward.  Maybe someone will find the Philosopher's Stone.
If not, we will get there the hard way.  Eventually.


					John Nagle

josh@KLAATU.RUTGERS.EDU (J Storrs Hall) (09/05/88)

To: comp-ai-digest@rutgers.edu
Path: klaatu.rutgers.edu!josh
From: J Storrs Hall <josh@klaatu.rutgers.edu>
Newsgroups: comp.ai.digest
Subject: Re: navigation and symbol manipulation
Date: Mon, 29 Aug 88 22:02 EDT
References: <19880824193843.3.NICK@HOWARD-JOHNSONS.LCS.MIT.EDU>
Organization: Rutgers Univ., New Brunswick, N.J.
Lines: 36



>       How much more pleasant to think deep philosophical thoughts.
>Perhaps, if only the right formalization could be found, the problems
>of common-sense reasoning would become tractable.  One can hope.
>The search is perhaps comparable to the search for the Philosopher's Stone.
>One succeeds, or one fails, but one can always hope for success just ahead.
>Bottom-up AI is by comparison so unrewarding.  "The people want epistemology", 
>as Drew McDermott once wrote.  It's depressing to think that it might take
>a century to work up to a human-level AI from the bottom.  Ants by 2000,
>mice by 2020 doesn't sound like an unrealistic schedule for the medium term,
>and it gives an idea of what might be a realistic rate of progress.
>
>       I think it's going to be a long haul.  But then, so was physics.
>So was chemistry.  For that matter, so was electrical engineering.  We
>can but push onward.  Maybe someone will find the Philosopher's Stone.
>If not, we will get there the hard way.  Eventually.

There is much to be said for this point of view.  However, halfway
through the historical life of the steam engine, thermodynamics was
put on a sound basis.  In physics we had Newton; in chemistry,
Mendeleev.  In EE there were Maxwell's equations.  There is a two-way
feedback here:  sufficient practical experience allows one to
formulate general principles, which then inform and amplify practical 
efforts.

I think the robot and the expert system are the Newcomen engines of
AI.  Our "science" may be all epicycles and alchemy, but what we are
after is not a Philosopher's Stone but a periodic table and a
calculus.  

There was a feeling among some of the people I polled at AAAI this
year that there is a bit of a malaise in "theoretical AI".  My guess
is that we have our phlogiston and caloric theories that can be turned
into the real thing with some more work and insight.

--JoSH

mt@MEDIA-LAB.MEDIA.MIT.EDU (Michael Travers) (09/15/88)

Date: Wed, 7 Sep 88 23:59 EDT
From: Michael Travers <mt@media-lab.media.mit.edu>
Subject: Re: navigation and symbol manipulation
To: glacier!jbn@LABREA.STANFORD.EDU, AIList@AI.AI.MIT.EDU
In-Reply-To: The message of 23 Aug 88 02:05 EDT from John B. Nagle <jbn@glacier.stanford.edu>

    Date: 23 Aug 88 06:05:43 GMT
    From: jbn@glacier.stanford.edu (John B. Nagle)

				   It's depressing to think that it might take
    a century to work up to a human-level AI from the bottom.  Ants by 2000,
    mice by 2020 doesn't sound like an unrealistic schedule for the medium term,
    and it gives an idea of what might be a realistic rate of progress.

Well, we're a little ahead of schedule.  I'm working on agent-based
systems for programming animal behavior, and ants are my main test case.
They're pretty convincing, and have been blessed by real ant
ethologists.  But I won't make any predictions as to how long it will
take to extend this methodology to mice or humans.
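[Editorial note: Travers does not describe his agent machinery in this message, but the flavor of agent-based behavioral programming can be sketched in miniature.  Everything below -- the `World` and `Ant` classes, the `step_toward` helper, the pheromone bookkeeping -- is a hypothetical illustration of the general style, not his system: each ant is a small state machine that seeks food, then carries it back to the nest, laying pheromone along the way.]

```python
def step_toward(pos, target):
    """Move one grid cell toward target (x first, then y)."""
    x, y = pos
    tx, ty = target
    if x != tx:
        x += 1 if tx > x else -1
    elif y != ty:
        y += 1 if ty > y else -1
    return (x, y)

class World:
    """A tiny grid world: a nest, some food cells, a pheromone map."""
    def __init__(self, nest, food):
        self.nest = nest
        self.food = set(food)
        self.pheromone = {}   # cell -> deposited pheromone count
        self.delivered = 0    # food items carried back to the nest

class Ant:
    """A behavioral agent with two states: foraging and carrying."""
    def __init__(self, world):
        self.world = world
        self.pos = world.nest
        self.carrying = False

    def step(self):
        w = self.world
        if self.carrying:
            # Lay pheromone on the current cell, then head home.
            w.pheromone[self.pos] = w.pheromone.get(self.pos, 0) + 1
            self.pos = step_toward(self.pos, w.nest)
            if self.pos == w.nest:
                self.carrying = False
                w.delivered += 1
        elif self.pos in w.food:
            # Standing on food: pick it up.
            w.food.discard(self.pos)
            self.carrying = True
        elif w.food:
            # Forage: head for a known food cell (omniscient here,
            # to keep the sketch deterministic; a real model would
            # wander and follow pheromone gradients).
            self.pos = step_toward(self.pos, min(w.food))

if __name__ == "__main__":
    world = World(nest=(0, 0), food=[(2, 1)])
    ant = Ant(world)
    for _ in range(10):
        ant.step()
    print(world.delivered, sorted(world.pheromone))
    # prints: 1 [(0, 1), (1, 1), (2, 1)]
```

The interesting behavior in real models of this kind comes from many such agents interacting only through the shared pheromone map, which is what lets trail-following emerge without any central plan.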