[comp.ai] Discover magazine's "Invasion of the Insect Robots"

loren@tristan.llnl.gov (Loren Petrich) (03/12/91)

	In the current issue of "Discover" magazine, there is an
article about attempting to construct robots with insect-level
intelligence. It featured the work of Rodney Brooks, who has had great
success with such robots.

	He and his colleagues have built several kinds of such robots,
including some six-legged walkers and some wheeled ones.

	They are programmed with simple heuristics which compete with
each other to make responses. For instance, short-range responses like
"back off", when triggered by the prospect of a collision, can
override long-range goals like "track prey".
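
	This competition of simple reflexes is what Brooks calls a
"subsumption architecture": a higher-priority behavior suppresses the
output of the ones below it. Just to illustrate the flavor of the
arbitration -- the behavior names, sensor fields, and thresholds below
are invented for illustration, not taken from Brooks's robots -- a toy
sketch in Python might look like this:

def back_off(sensors):
    # Short-range reflex: fires at the prospect of a collision.
    if sensors["range_to_obstacle"] < 0.2:   # metres; arbitrary threshold
        return "reverse"
    return None                               # no opinion

def track_prey(sensors):
    # Long-range goal: head toward whatever the robot is following.
    if sensors["prey_bearing"] is not None:
        return "turn_toward_prey"
    return None

def wander(sensors):
    # Default behavior when nothing else fires.
    return "move_forward"

# Highest-priority behavior first: its output suppresses everything below it.
BEHAVIORS = [back_off, track_prey, wander]

def arbitrate(sensors):
    for behavior in BEHAVIORS:
        action = behavior(sensors)
        if action is not None:
            return action

print(arbitrate({"range_to_obstacle": 0.1, "prey_bearing": 30.0}))  # reverse
print(arbitrate({"range_to_obstacle": 2.0, "prey_bearing": 30.0}))  # turn_toward_prey

	The point is only that each behavior is trivial on its own; the
apparent competence comes from the fixed priority ordering.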

	Is this type of setup an example of "Fuzzy Logic"?

	The article reports that Pattie Maes has developed a program
that can help one of Brooks's six-legged robots learn to walk. At
first, the robot flails its legs around uselessly, but it then learns
which leg motions both keep it upright and make it go forward. Before
too long, the robot has learned an insect gait -- moving the two
outer legs on each side in sync with the middle leg of the other side.
The article did not describe the algorithm, but it was probably some
kind of Neural Net algorithm.
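
	For reference, the gait being described -- the two outer legs
on each side moving in sync with the middle leg of the other side --
is the standard alternating "tripod" gait. A trivial sketch (the leg
labels are mine, not the article's) that just prints the two tripods:

# Legs: L1..L3 = left front/middle/rear, R1..R3 = right front/middle/rear.
tripod_a = ["L1", "L3", "R2"]   # left front, left rear, right middle
tripod_b = ["R1", "R3", "L2"]   # right front, right rear, left middle

for step in range(4):
    swinging = tripod_a if step % 2 == 0 else tripod_b
    standing = tripod_b if step % 2 == 0 else tripod_a
    print("step", step, "- swing:", swinging, " stand on:", standing)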

	Brooks got the idea from considering that insects are awfully
dumb animals, but ones which are nevertheless able to walk and fly and
see. He had the idea of building up from simple reflexes. He decided
to leave behind the traditional AI model of symbolic reasoning about
the world, because it had yielded only very limited success in
controlling robots. For instance, such models have trouble with
unfamiliar circumstances, circumstances that Brooks's robots have no
trouble with. And they require a LOT more processing power than
Brooks's robots, which only need simple microprocessors.

	Not surprisingly, he has looked down on symbolic-modeling AI,
which has been the mainstream of the field. And he has gotten a lot of
flak from those who have worked in the symbolic-AI field, who have
accused him of scaling down his goals from human-level intelligence to
insect-level intelligence.

	My response to that would be that insect-level intelligence is
better than level-zero intelligence.

	There are a lot of analogies with the Neural Nets field -- a
system not based on traditional, symbol-manipulating AI that is much
simpler than it and easily outperforms it, but which seems to have
much less in the way of ultimate potential.

	Is that fair?

	And does anyone else know of any details of Brooks's work?


$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$
Loren Petrich, the Master Blaster: loren@sunlight.llnl.gov

Since this nodename is not widely known, you may have to try:

loren%sunlight.llnl.gov@star.stanford.edu

demers@odin.ucsd.edu (David E Demers) (03/12/91)

In article <92995@lll-winken.LLNL.GOV> loren@tristan.llnl.gov (Loren Petrich) writes:

>	In the current issue of "Discover" magazine, there is an
>article about attempting to construct robots with insect-level
>intelligence. It featured the work of Rodney Brooks, who has had great
>success with such robots.
[...]
>	Is this type of setup an example of "Fuzzy Logic"?

No, but there may be a way to express subsumption architectures
with Fuzzy logic.
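
Purely as speculation, the hard "back off overrides track prey" rule
could be restated as a fuzzy blend of the two actions. A toy sketch --
the membership function and the blending rule are invented for
illustration only:

def mu_obstacle_close(distance):
    # Degree of truth of "the obstacle is close": 1.0 at 0 m, 0.0 past 0.5 m.
    return max(0.0, min(1.0, (0.5 - distance) / 0.5))

def blended_speed(distance, cruise_speed=1.0, reverse_speed=-0.5):
    # Weight "back off" by how true "obstacle is close" is,
    # and "go forward" by its complement.
    w = mu_obstacle_close(distance)
    return w * reverse_speed + (1.0 - w) * cruise_speed

for d in (0.1, 0.3, 0.6):
    print("distance %.1f m -> commanded speed %+.2f" % (d, blended_speed(d)))

Whether that buys anything over a plain priority scheme is another
question.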

[...]
>	There are a lot of analogies with the Neural Nets field -- a
>system not based on traditional, symbol-manipulating AI that is much
>simpler than it and easily outperforms it, but which seems to have
>much less in the way of ultimate potential.

>	Is that fair?

Not really - they are different tools.  Although work is being
done on higher-level cognitive tasks with NNs (addition/subtraction,
for example), symbolic systems work better on many tasks.  They
are really two different methods, and comparing them is probably
not useful except with respect to a particular problem.  AI should
be inclusive; let's use the right tool for a particular task.



>	And does anyone else know of any details of Brooks's work?

He has published quite a bit.  You may want to look at the
recent collection of papers "AI at MIT: Expanding Frontiers"
edited by Patrick Winston (MIT Press, 1990) ISBN: 0-262-23150-6
The first two chapters of vol. 2 are by Brooks; one discusses
the subsumption architecture in general terms, the second
shows the application to six-legged locomotion.

-- 
Dave DeMers					demers@cs.ucsd.edu
Computer Science & Engineering	C-014		demers%cs@ucsd.bitnet
UC San Diego					...!ucsd!cs!demers
La Jolla, CA 92093-0114	  (619) 534-8187,-0688  ddemers@UCSD

barryf@aix01.aix.rpi.edu (Barry B. Floyd) (03/12/91)

Is Brooks on the net (USENET)?
 
barry
-- 
+--------------------------------------------------------------------+ 
| Barry B. Floyd                   \\\       barry_floyd@mts.rpi.edu |
| Manager Information Systems - HR    \\\          usere9w9@rpitsmts |
+-Rensselaer Polytechnic Institute--------------------troy, ny 12180-+

karln@uunet.uu.net (03/13/91)

In article <92995@lll-winken.LLNL.GOV> loren@tristan.llnl.gov (Loren Petrich) writes:
>flak from those who have worked in the symbolic-AI field, who have
>accused him of scaling down his goals from human-level intelligence to
>insect-level intelligence.
>
	I was at the pub the other night, after which I am not sure
there is a difference.  [;-}>  <- a sly sort of smile from a square head with beard]

  Very interesting subject. Although I have heard repeated claims that neural
nets are supposed to learn, as in the learning-to-walk example, this is the
first I personally have heard of such success. It seemed to me that Neural
Nets were mostly only good for statistical output; in other words, given a
number in, the Neural Net's statistical 'database' shows that this is probably
the number out. This is from seeing many Cartesian-to-Polar converters and
such.

   I suppose what is involved in a neural net learning to walk is a "feels
good" circuit or subroutine for feedback. If the bug is not moving, the neural
net gets negative feedback. If the bug is moving, then the net gets positive
feedback. The level or severity of feedback would depend on how well the bug is
moving as told. If the "brain" wants to go forward, but the neural net drives
the bug slightly sideways, then the feedback is bad. The feedback gets better
as the bug goes straighter, and worse as the bug goes more sideways. I wonder
what other kinds of feedback are required to get the business working properly.
Do all the legs' current positions need to be fed back? How about the current
leg-positioning command? This seems like something to keep track of around here.
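
   To make that "feels good" signal a little more concrete, here is a toy
sketch. The weights and the small penalty for standing still are arbitrary
numbers of mine, not anything from Brooks's or Maes's actual work:

import math

def feedback(commanded_heading, dx, dy, upright=True):
    # commanded_heading in radians; (dx, dy) is the body displacement this step.
    if not upright:
        return -1.0                               # fell over: strongly negative
    forward  = dx * math.cos(commanded_heading) + dy * math.sin(commanded_heading)
    sideways = -dx * math.sin(commanded_heading) + dy * math.cos(commanded_heading)
    return forward - 0.5 * abs(sideways) - 0.02   # -0.02 penalizes not moving

print(feedback(0.0, 0.10, 0.00))   # straight ahead: positive
print(feedback(0.0, 0.02, 0.08))   # mostly sideways: negative
print(feedback(0.0, 0.00, 0.00))   # not moving at all: slightly negative

   A learning procedure would then adjust whatever drives the legs so as to
push this number up; the leg positions and the current leg command would be
its inputs, as suggested above.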


Karl Nicholas
karln!karln@uunet.uu.net

mullen@evax.arl.utexas.edu (Dan Mullen) (03/13/91)

I find it rather ironic that after almost a half century of research in
"artificial intelligence" we have humbly accepted our limitations and
assumed the task of modeling insects.   I remember reading once that 
a mosquito with its paltry 10,000 neurons is infinitely more intelligent
than our fastest super-computer.  I'm not arguing on either side of that.
My point is simply that modeling insects is a good place to start. Maybe
soon there will be the small rodent robot and then the farm animal robot. 
Maybe an entire robot farm, and then .....       d.m.

pat@cs.strath.ac.uk (Pat Prosser) (03/13/91)

In article <1991Mar12.201920.18088@evax.arl.utexas.edu> mullen@evax.arl.utexas.edu (Dan Mullen) writes:
>I find it rather ironic that after almost a half century of research in
>"artificial intelligence" we have humbly accepted our limitations and
>assumed the task of modeling insects.   I remember reading once that 
>a mosquito with its paltry 10,000 neurons is infinitely more intelligent
>than our fastest super-computer.  I'm not arguing on either side of that.
>My point is simply that modeling insects is a good place to start. Maybe
>soon there will be the small rodent robot and then the farm animal robot. 
>Maybe an entire robot farm, and then .....       d.m.

and then .... Do Robots Dream of Electric Sheep?

zane@ddsw1.MCS.COM (Sameer Parekh) (03/17/91)

In article <1991Mar12.201920.18088@evax.arl.utexas.edu> mullen@evax.arl.utexas.edu (Dan Mullen) writes:
>I find it rather ironic that after almost a half century of research in
>"artificial intelligence" we have humbly accepted our limitations and
>assumed the task of modeling insects.   I remember reading once that 
>a mosquito with its paltry 10,000 neurons is infinitely more intelligent
>than our fastest super-computer.  I'm not arguing on either side of that.
>My point is simply that modeling insects is a good place to start. Maybe
>soon there will be the small rodent robot and then the farm animal robot. 
>Maybe an entire robot farm, and then .....       d.m.
	I agree that trying to create low intelligence is a good start,
yet there is no purpose in having 80 artificial cows.  They can't
do anything better than us, so why use them? (Or were you joking?)


-- 
zane@ddsw1.MCS.COM

johnc@ms.uky.edu (John Coppinger) (03/17/91)

In article <1991Mar12.201920.18088@evax.arl.utexas.edu>, mullen@evax.arl.utexas.edu (Dan Mullen) writes:
> I find it rather ironic that after almost a half century of research in
> "artificial intelligence" we have humbly accepted our limitations and
> assumed the task of modeling insects.

When you try to run down the path before first learning to crawl,
you'll soon find yourself sprawled out in the mud only a few yards from
where you began.  I believe this happened to AI.  Perhaps it's not an
acceptance of limitations.  Rather, it's the realization that crawling is
the important first step towards running.

>   I remember reading once that 
> a mosquito with its paltry 10,000 neurons is infinitely more intelligent
> than our fastest super-computer. 

Ah, the power of parallel...


-- 
-- John Coppinger                    "You'll find that your left cuff link  --
-- University of Kentucky             will be communicating with your right --
-- johnc@s.ms.uky.edu                 cuff link via satellite"              --
-- johnc@graphlab.cc.uky.edu [NeXT]          -- Nicholas Negroponte         -- 

karln@uunet.uu.net (03/20/91)

In article <1991Mar17.063506.28939@ddsw1.MCS.COM> zane@ddsw1.MCS.COM (Sameer Parekh) writes:
>	I agree that trying to create low intelligence is a good start,
>yet there is no purpose in having 80 artificial cows.  They can't
>do anything better than us, so why use them? (Or were you joking?)
>
 
	I do not think that is true. An artificial cow could be 'programmed'
to eat more at the right time to produce more meat come market time. An artificial
cow might be able to have two calves. The error of that statement is glaring.

	Anyway we should ask a farmer about that one ...

        "Mr Farmer, what would you do with a programable cow?"

	"Well, could I run my Nintendo games on it?!?!" ....

      Also, I do not think many people would argue that the first
computers, made from spinning wheels and brass bars, could do math
better than we can .... you gotta start somewhere.



	I keep waiting for someone to wonder why there has to be a problem with
two types of AI. I seem to get the impression that the insect neural-net
application is like the motor control center of the brain. Why ask a symbol
processor to move legs when it could just 'think' I-wanna-go-there and let the
neural net solve the specifics of locomotion, which it is doing quite well?
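
	Just to make that division of labor concrete, here is a toy sketch.
All of the names are invented, and the "neural net" is only a stub:

def symbolic_planner(world_state):
    # High-level layer: reasons about goals, never about individual legs.
    if world_state["hungry"]:
        return {"goal": "food_site", "heading": world_state["food_bearing"]}
    return {"goal": "explore", "heading": 0.0}

def locomotion_controller(heading):
    # Low-level layer: in the real thing this would be the trained neural
    # net; here it just reports what it would do.
    return "generate gait toward heading %+.2f rad" % heading

state = {"hungry": True, "food_bearing": 0.8}
plan = symbolic_planner(state)
print(plan["goal"], "->", locomotion_controller(plan["heading"]))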

	Karl Nicholas
	karln!karln@uunet.uu.net

usenet@cs.utk.edu (USENET News Poster) (03/21/91)

The suggestion for a "rodent robot" brings to mind a pair of articles in
the magazine _INK_, edited, I believe, by Steve Ciarcia of Circuit Cellar
fame (or his brother, I don't remember which).  Anyway, these two articles in
the 1990 volume describe how to build "MityMouse II", a rodent robot.
I've taken these magazines home, so I don't have the exact citations.
From: sfp@mars.ornl.gov (Phil Spelt)

BTW, there are numerous other approaches to robot behavior.  We here at ORNL
have produced two autonomous robots which are capable of doing a variety of
"intelligent" things -- see my article in _IEEE_Expert_, Winter, 1989.

Phil Spelt, Cognitive Systems & Human Factors Group  sfp@epm.ornl.gov
============================================================================
Any opinions expressed or implied are my own, IF I choose to own up to them.
----------------------------------------------------------------------------
MIND.  A mysterious form of matter secreted by the brain.  Its chief activity
consists in the endeavor to ascertain its own nature, the futility of the
attempt being due to the fact that it has nothing but itself to know itself
with.   -- Ambrose Bierce
============================================================================

carroll@ssc-vax (Jeff Carroll) (03/22/91)

In article <1991Mar21.123604.284@ulrik.uio.no> espen@math.uio.no (Espen J. Vestre) writes:
>
>In these Patriot times, don't you think there'd be a market even for 
>things  simpler than artificial cows?  Sharks .... or swarms of 
>insects.... yech.
>

	I can see it now. Artificial locusts... frogs... gnats... flies...
Maybe we could start out with artificial hailstones. Yeah, that's it...
We could call them "brilliant pebbles".



-- 
Jeff Carroll
carroll@ssc-vax.boeing.com

zane@ddsw1.MCS.COM (Sameer Parekh) (03/26/91)

In article <1991Mar19.225113.15536@uunet.uu.net> karln@karln.UUCP () writes:
>In article <1991Mar17.063506.28939@ddsw1.MCS.COM> zane@ddsw1.MCS.COM (Sameer Parekh) writes:
>>	I agree that trying to create low intelligence is a good start,
>>yet there is no purpose in having 80 artificial cows.  They can't
>>do anything better than us, so why use them? (Or were you joking?)
>>
> 
>	I do not think that is true. An artificial cow could be 'programmed'
>to eat more at the right time to produce more meat come market time. An artificial
>cow might be able to have two calves. The error of that statement is glaring.
	
	The idea of an artificial cow, I had thought, was a robotic being
with the intelligence of a cow.  Making an artificial cow that
is EDIBLE, in my belief, would be tougher than designing a human-level
intelligence.


-- 
The Ravings of the Insane Maniac Sameer Parekh -- zane@ddsw1.MCS.COM

karln@uunet.uu.net (03/28/91)

In article <1991Mar25.173925.21895@ddsw1.MCS.COM> zane@ddsw1.MCS.COM (Sameer Parekh) writes:
>In article <1991Mar19.225113.15536@uunet.uu.net> karln@karln.UUCP () writes:
>>In article <1991Mar17.063506.28939@ddsw1.MCS.COM> zane@ddsw1.MCS.COM (Sameer Parekh) writes:
>> 
>>	I do not think that is true. An artificial cow could be 'programmed'
>>to eat more at the right time to produce more meat come market time. An artificial
>>cow might be able to have two calves.
>	
>	The idea of an artificial cow, I had thought, was a robotic being
>with the intelligence of a cow.  Making an artificial cow that
>is EDIBLE, in my belief, would be tougher than designing a human-level
>intelligence.
>

True. I was thinking that an artificial cow would be a cow with a brain
transplant. Perhaps in the future, putting the desired `behavior patterns'
into a cow via `brain transplant' would be better than trying to breed
this `behavior pattern' into cows.

  If this ever were the case, the small Neural Net to handle walking and
eating could be very useful.

  Really just some way-out-there sort of thoughts. My apologies for seeming
nasty.


BTW, I have received the program from Pat and Greg Williams that models
the insect behavior of Dr. Beer's insects. Sure enough, the insects walk around
and show a certain immunity to neural damage or removal, without failing
completely.

  However, the bugs come with the neural nets already programmed. The bugs
come knowing how to walk.

  I was sort of hoping to see these bugs flounder around a bit and then
eventually learn to walk. Does anybody have any suggestions on how I
should go about setting this up? Is this set of programs capable of it?
Any comments? Anybody else have this program?

karl nicholas

karln!karln@uunet.uu.net