[comp.ai.philosophy] AI - the real problem

nagle@well.sf.ca.us (John Nagle) (02/01/91)

     It's been thirty-two years since Samuel's checkers program, the first
major success of AI.  And yet we still can't build something with the
competence of an ant brain in dealing with the real world.  This is
discouraging.

     It's encouraging that this is now recognized as a problem.  Brooks,
Connell, Maes, and others are working on artificial insects.  The level 
of insect competence demonstrated to date is still rather low.  

     There is a bit of hubris in trying to address human-level intelligence
from our present level of ignorance.  We now understand that just getting an
ant through a minute of life is hard.  Walking over rough ground is hard.
Avoiding obstacles is hard.  Picking up things is hard.  Piling things up
is hard.  General ant level competence is very hard.

     We will not achieve lizard-level competence until we have ant-level
competence well in hand.  We will not achieve rodent-level competence
until we have lizard-level competence.  And we will not achieve primate-level
competence until we can build rodent-level brains.  And until we have achieved
primate-level competence, we will not successfully build a general-purpose
human-level AI.

     However, we just might succeed working from the bottom up.

     This, not undecidability, is the problem with AI.

					John Nagle

ms@cs.toronto.edu (Manfred Stede) (02/01/91)

nagle@well.sf.ca.us (John Nagle) writes:


>     It's been thirty-two years since Samuel's checkers program, the first
>major success of AI.  And yet we still can't build something with the
>competence of an ant brain in dealing with the real world.  This is
>discouraging.

<lines deleted>

>competence until we can build rodent-level brains.  And until we have achieved
>primate-level competence, we will not successfully build a general-purpose
>human-level AI.

>     However, we just might succeed working from the bottom up.
>     This, not undecidability, is the problem with AI.

It is, of course, a problem only if the goal of AI is to build a general-purpose
human-level AI, as you state it. Not everybody subscribes to this view, though.

If there is a problem with AI at all, I should think, it is that
the community is unable to agree on the nature of the discipline and its goals,
but nevertheless engages in discussions of "the problem with AI". Quite often,
these debates are bound to remain fruitless, because people just start out from
totally diverging assumptions.

Maybe the problem, if it exists, started when the field was christened 'AI'?
This seems to be the only discipline that advertises its long-term ambitions,
however nebulous, in its very title, instead of choosing a more neutral one.

Maybe there are in fact two or many more fields, hidden under a common title 
which sounds exciting and is likely to ease the process of grant hunting?

Maybe written by Manfred Stede, ms@ai.toronto.edu 

mikeb@wdl31.wdl.loral.com (Michael H Bender) (02/02/91)

John Nagle writes:

   .... There is a bit of hubris in trying to address human-level intelligence
   from our present level of ignorance .... 
   We will not achieve lizard-level competence until we have ant-level
   competence well in hand.  We will not achieve rodent-level competence
   until we have lizard-level competence.  And we will not achieve
   primate-level competence until we can build rodent-level brains.  And
   until we have
   achieved primate-level competence, we will not successfully build a 
   general-purpose human-level AI.

The only problem I have with this argument is that if we were to just 
focus on "building up" AI competence from lower-level components, we 
would ignore one of the greatest sources of information: our ability to 
introspect about our cognitive processes. It is this ability which has
led to the development of expert system technology, the GPS and its
follow-ons, and so on.

Clearly, AI will be more successful when it marries the cognitive approach,
which has been so popular of late, and the "developmental" approach
which John recommends. But that does not mean we should go to the other
extreme and ignore the "higher-level" aspects of human intelligence.

Mike Bender

dnk@yarra-glen.aaii.oz.au (David Kinny) (02/04/91)

mikeb@wdl31.wdl.loral.com (Michael H Bender) writes:

>John Nagle writes:

>   .... There is a bit of hubris in trying to address human-level intelligence
>   from our present level of ignorance .... 
>   We will not achieve lizard-level competence until we have ant-level
>   competence well in hand.  We will not achieve rodent-level competence
>   until we have lizard-level competence.  And we will not achieve primate-
>   level competence until we can build rodent-level brains.  And until we
>   have achieved primate-level competence, we will not successfully build a 
>   general-purpose human-level AI.

>The only problem I have with this argument is that if we were to just 
>focus on "building up" AI competence from lower-level components, we 
>would ignore one of the greatest sources of information: our ability to 
>introspect about our cognitive processes. 

What is it that leads you to believe that introspection is one of
the greatest sources of information about our cognitive processes?
I would regard it as a most unreliable source of information about the
true nature of those processes.  Man did not develop an adequate theory of
human vision from his experience of it; that had to wait for the
development of optics, neuroanatomy, etc., and is still a long way from
complete.  Why then should it be any different with thinking?

Consider the inability of people with certain brain lesions to even
be aware that something is wrong.  They may have lost completely some
function, such as the ability to see in half of the visual field,
but they are unaware of the loss; it cannot even be brought to their
attention.  Perhaps others with a bit more psychology/neurology under
their belts could offer some hard data on the value of introspection.

It seems to me that introspection produces subjective and unreliable
information about a tiny subset of our higher level cognitive processes,
and says nothing about the vast bulk of the edifice of intelligence.
If we could understand the workings of a lizard mind it would be a
major step towards understanding human cognitive processes.

-- 
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
David Kinny                       Australian Artificial Intelligence Institute
dnk@aaii.oz.AU                                  1 Grattan Street
Phone: +61 3 663 7922                  CARLTON, VICTORIA 3053, AUSTRALIA

G.Joly@cs.ucl.ac.uk (Gordon Joly) (02/05/91)

In article <22951@well.sf.ca.us>, nagle@well.sf.ca.us (John Nagle) said          

>>      There is a bit of hubris in trying to address human-level intelligence
>> from our present level of ignorance.  We now understand that just getting an
>> ant through a minute of life is hard.  Walking over rough ground is hard.
>> Avoiding obstacles is hard.  Picking up things is hard.  Piling things up
>> is hard.  General ant level competence is very hard.

``Ignorance is in the mind of the beholder.''

>>      We will not achieve lizard-level competence until we have ant-level
>> competence well in hand.  We will not achieve rodent-level competence
>> until we have lizard-level competence.  And we will not achieve primate-level
>> competence until we can build rodent-level brains.  And until we have achieved
>> primate-level competence, we will not successfully build a general-purpose
>> human-level AI.

``Intelligence is in the mind of the beholder.''

I agree with John Nagle; I am not attacking him.  However,
animal/human/rock intelligence is not a linear process running
parallel to evolution.

``Satori: When the learning curve becomes a step function.''

I just think that we have a long way to go to catch up with the dolphins.

``So long and thanks for all the fish.''

Gordon Joly                                       +44 71 387 7050 ext 3716
Internet: G.Joly@cs.ucl.ac.uk          UUCP: ...!{uunet,ukc}!ucl-cs!G.Joly
Computer Science, University College London, Gower Street, LONDON WC1E 6BT

G.Joly@cs.ucl.ac.uk (Gordon Joly) (02/05/91)

In article <22951@well.sf.ca.us>, nagle@well.sf.ca.us (John Nagle) says

>      We will not achieve lizard-level competence until we have ant-level
> competence well in hand.  We will not achieve rodent-level competence
> until we have lizard-level competence.  And we will not achieve primate-level
> competence until we can build rodent-level brains.  And until we have achieved
> primate-level competence, we will not successfully build a general-purpose
> human-level AI.
> 
>      However, we just might succeed working from the bottom up.

I see a discontinuity in the development of intelligence, at the point of
the emergence of homo sapiens. Perhaps there should be a metric of
intelligence, with rocks and logs (cf Twin Peaks) at 0 and us humans
at 1.

    ``There must be something unique about man because otherwise,
    evidently, the ducks would be lecturing about Konrad Lorenz and
    the rats would be writing papers about B. F. Skinner.''

The Ascent of Man, Jacob Bronowski, BBC Publications, 1978.

Gordon Joly                                       +44 71 387 7050 ext 3716
Internet: G.Joly@cs.ucl.ac.uk          UUCP: ...!{uunet,ukc}!ucl-cs!G.Joly
Computer Science, University College London, Gower Street, LONDON WC1E 6BT

punch@pleiades.cps.msu.edu (Bill Punch) (02/06/91)

>John Nagle (nagle@well.sf.ca.us) writes:
>     There is a bit of hubris in trying to address human-level intelligence
>from our present level of ignorance.  We now understand that just getting an
>ant through a minute of life is hard.  Walking over rough ground is hard.
>Avoiding obstacles is hard.  Picking up things is hard.  Piling things up
>is hard.  General ant level competence is very hard.

While the point is well taken (ant intelligence is hard and we are far
from a full simulation of it) I question the "hubris" of working from
the other end of the spectrum. That is, I agree fully with the
difficulty and necessity of working at the "simpler" levels of
intelligence, but question why working at pieces of the "complicated"
levels isn't also revealing.

Sometimes I feel that my colleagues look upon work like knowledge-based
systems as a sort of fifth wheel, at best an engineering feat, at worst a
simple hack to impress the uninformed. In fact, I think any number of
KBS researchers pushing on particular themes of human cognition are
investigating a basis for intelligence. Further, I think that both ends
working towards the middle is a good approach, and that progress will be
more quickly forthcoming as a result.

Finally, I think the problem with much of the work of AI is not its
fractured, nonuniform nature. I think exploration of problems/solutions
in this way is good and to be expected. What it lacks is some unifying
theme/work to bring these various groups together. Research is focusing
only on little, discrete problems and not on the
unification/integration of these pieces into a bigger whole.  There are
those who are doing it, but this work is just beginning. It is this next
stage of research, if and when it occurs, that will bring AI some more
coherent lines of thought. My opinion.

				>>>bill<<<
			punch@pleiades.cps.msu.edu

	"Call on God, but row away from the rocks", Indian proverb.

schraudo@beowulf.ucsd.edu (Nici Schraudolph) (02/06/91)

G.Joly@cs.ucl.ac.uk (Gordon Joly) writes:

>I see a discontinuity in the development of intelligence, at the point of
>the emergence of homo sapiens.

>    ``There must be something unique about man because otherwise,
>    evidently, the ducks would be lecturing about Konrad Lorenz and
>    the rats would be writing papers about B. F. Skinner.''
>
>The Ascent of Man, Jacob Bronowski, BBC Publications, 1978.

It is interesting to note that this perceived discontinuity between
human and animal intelligence - that seems so obvious when looking at
the achievements of our species and its impact on this planet - all but
evaporates at the neurobiological and even psychophysical level.  In
other words: since (to the best of our knowledge) we're not built very
different from other animals, how come we're so smart?

One hypothesis is that the "extra intelligence" resides not so much
in the individual human brain, but rather in the countless cognitive
artifacts and social structure we have created for ourselves. (*)
Thus although we still essentially have animal brains, over the
generations we collectively managed to create an environment in
which slightly modified animal brains can pull all sorts of amazing
stunts (such as discussing their own epistemology :-).

From this point of view, the bottom-up approach to AI will not culminate
in an intelligent machine per se but rather in a population of machines
capable of structuring their environment so as to eventually become
intelligent.  Might take a while though...

(*) I owe this perspective to Ed Hutchins, although he is in no way
    responsible for my ramblings.

-- 
Nicol N. Schraudolph, C-014          | "And long cars in long lines
University of California, San Diego  |  And great big signs, and they all say:
La Jolla, CA 92093-0114              |  Hallelujah.  Every man for himself."
nici%cs@ucsd.{edu,bitnet,uucp}       |      - Laurie Anderson, "Big Science".

nagle@well.sf.ca.us (John Nagle) (02/07/91)

G.Joly@cs.ucl.ac.uk (Gordon Joly) writes:

>I see a discontinuity in the development of intelligence, at the point of
>the emergence of homo sapiens. Perhaps there should be a metric of
>intelligence, with rocks and logs (cf Twin Peaks) at 0 and us humans
>at 1.

      Unclear.  At our present state of ignorance, we can't really
answer this question.  However, opinion in neurobiology suggests
that the brains of mammals differ primarily in quantitative ways.
See "The Evolution of the Brain", by Sir John Eccles, (1988,
Routledge, London), especially chapter 3, section 3.1.  Results
from dissection, DNA measures of evolutionary distance, and time
required for evolution all indicate that the brains of humans are
not all that different from those of the other mammals.  

      From an AI perspective, this is encouraging, in that if we
can work our way up into the mammal range at all, we may be most of 
the way to human-level AI.  It also indicates that levels beyond
human intelligence might be possible with a similar architecture
but more hardware.


					John Nagle

nagle@well.sf.ca.us (John Nagle) (02/07/91)

punch@pleiades.cps.msu.edu (Bill Punch) writes:

>While the point is well taken (ant intelligence is hard and we are far
>from a full simulation of it) I question the "hubris" of working from
>the other end of the spectrum. That is, I agree fully with the
>difficulty and necessity of working at the "simpler" levels of
>intelligence, but question why working at pieces of the "complicated"
>levels isn't also revealing.

      The question is whether effective "abstract" high-level intelligences
can be built without the underpinnings of the machinery that allows animals
to deal with the real, physical world.  Based on results in AI to date,
the answer appears to be no.  Expert systems have not begun to exhibit
"common sense", or even understanding of the consequences of their actions.
AI is still stalled on the common sense front.  

      This may all be to the good.  It has taken thirty years for AI to
dispose of most of the debris of several thousand years of philosophy on
what is "intelligence".  Now we know that none of the classical ideas
(intelligence is arithmetic, intelligence is logic, intelligence is
language, intelligence is chess...) lead anywhere.  With the decks
cleared, we can now begin to work.

					John Nagle

mikeb@wdl35.wdl.loral.com (Michael H Bender) (02/08/91)

David Kinny writes:

   What is it that leads you to believe that introspection is one of
   the greatest sources of information about our cognitive processes?

    .... (many intelligent and relevant comments deleted) ...

   If we could understand the workings of a lizard mind it would be a
   major step towards understanding human cognitive processes.

Because it is the only REAL source I/you/we have when it comes to our
higher level cognitive processes (e.g., consciousness).  Laboratory
experiments can measure many different things, but they CANNOT reliably
measure consciousness.

I believe that consciousness is important. For instance, you mention vision
and I admit that there has been excellent neurological research in that
area. However, this doesn't change the fact that we can CONSCIOUSLY control
what we look at, even consciously ignore items that are designed to attract
our attention (something that lizards might find hard to do). No AI
system that ignores this ability for conscious control will be worth its
beans (or, should I say, worth its vision?).

I honestly do believe that there is a valid reason for trying to understand
the workings of a lizard; however, I also believe that there is a very
valid reason for introspection and understanding of conscious cognitive
processes. Although somewhat looked down upon currently, it is likely that
expert system technology grew out of this type of research.

Mike Bender

hm02+@andrew.cmu.edu (Hans P. Moravec) (02/08/91)

schraudo@beowulf.ucsd.edu (Nici Schraudolph) writes:
>G.Joly@cs.ucl.ac.uk (Gordon Joly) writes:
 
>>I see a discontinuity in the development of intelligence, at the point of
>>the emergence of homo sapiens.
...
> One hypothesis is that the "extra intelligence" resides not so much
> in the individual human brain, but rather in the countless cognitive
> artifacts and social structure we have created for ourselves. (*)
> Thus although we still essentially have animal brains, over the
> generations we collectively managed to create an environment in
> which slightly modified animal brains can pull all sorts of amazing
> stunts (such as discussing their own epistemology :-).
>  
> From this point of view, the bottom-up approach to AI will not culminate
> in an intelligent machine per se but rather in a population of machines
> capable of structuring their environment so as to eventually become
> intelligent.  Might take a while though...

Unlike humans, the robots don't start in an intellectual vacuum.  They'll
get their initial epistemology, mathematics and do-it-yourself tips
from us, almost the instant their minds are capable of absorbing it.
Getting to where we are now is (we'll say in hindsight) the trivial
part.  Things will get interesting when they then begin to push into
unexplored mental territory.

        Hans Moravec

dailey@frith.uucp (Chris Dailey) (02/09/91)

In article <1416@ucl-cs.uucp> G.Joly@cs.ucl.ac.uk (Gordon Joly) writes:
>
>In article <22951@well.sf.ca.us>, nagle@well.sf.ca.us (John Nagle) says
>
[.. ant -> lizard -> rodent -> primate -> ..]
[.. general purpose human level AI quote  ..]
[.. deleted..]
>I see a discontinuity in the development of intelligence, at the point of
>the emergence of homo sapiens. Perhaps there should be a metric of
>intelligence, with rocks and logs (cf Twin Peaks) at 0 and us humans
>at 1.

So what do you do about some form of intelligence that has some
understandings that humans do not, yet lacks some of the understandings
that humans do have?  A "metric of intelligence" would have a meaning
biased by the person who thinks up the definition, just as was the early
sociological/anthropological classification of peoples (a scale that
went from civilized to barbarians, with the sociologists and
anthropologists being in the civilized category).  Such a definition
or scale could not help but be inherently egocentric, which would
render the concept almost meaningless.

>Gordon Joly                                       +44 71 387 7050 ext 3716
--
Chris Dailey   dailey@(frith.egr|cps).msu.edu
    __  __  ___       | "A line in the sand." -- The Detroit News
 __/  \/  \/ __:>-    |
 \__/\__/\__/         | "Allein in der sand." -- me

dailey@frith.uucp (Chris Dailey) (02/09/91)

Sorry for the long included quote, but I thought it best to keep it:

In article <1991Feb5.165223.9584@msuinfo.cl.msu.edu> punch@pleiades.cps.msu.edu (Bill Punch) writes:
>>John Nagle (nagle@well.sf.ca.us) writes:
>> [...] General ant level competence is very hard.

>While the point is well taken (ant intelligence is hard and we are far
>from a full simulation of it) I question the "hubris" of working from
>the other end of the spectrum. That is, I agree fully with the
>difficulty and necessity of working at the "simpler" levels of
>intelligence, but question why working at pieces of the "complicated"
>levels isn't also revealing.
>
>Sometimes I feel that my colleagues look upon work like knowledge-based
>systems as a sort of fifth wheel, at best an engineering feat, at worst a
>simple hack to impress the uninformed. In fact, I think any number of
>KBS researchers pushing on particular themes of human cognition are
>investigating a basis for intelligence.

I think it is important to keep in mind, however, that human cognition
may not be the most proper route to look at for implementation on
traditional computer architecture.  I think we have a good shot at
coming up with some 'true' form of intelligent cognition (whatever that
may be...), but it will not be something that we would recognize as
modelled after natural/biological intelligence.

I don't think we need to be able to make a full model of ant
intelligence before we can say we've created something that has the
level of intelligence that an ant does.  Putting the limits of the known
on that which is unknown, although often helpful, is often just a
crutch.

A saying to demonstrate the pitfalls of conventional thinking:  "The
most intelligent creature in the universe is a rock.  No one would know
it because they have lousy I/O."

>Further, I think that both ends
>working towards the middle is a good approach, and that progress will be
>more quickly forthcoming as a result.

Maybe it's more of a circle and we are working our way inward.  Or
perhaps a circle where we are in the center working our way out?

>				>>>bill<<<
>			punch@pleiades.cps.msu.edu
--
Chris Dailey   dailey@(frith.egr|cps).msu.edu
    __  __  ___       | "A line in the sand." -- The Detroit News
 __/  \/  \/ __:>-    |
 \__/\__/\__/         | "Allein in der sand." -- me

dailey@frith.uucp (Chris Dailey) (02/09/91)

In article <MIKEB.91Feb7144048@wdl35.wdl.loral.com> mikeb@wdl35.wdl.loral.com (Michael H Bender) writes:
>David Kinny writes:
[...]
>   If we could understand the workings of a lizard mind it would be a
>   major step towards understanding human cognitive processes.
[...]
>I honestly do believe that there is a valid reason for trying to understand
>the workings of a lizard; however, I also believe that there is a very
>valid reason for introspection and understanding of conscious cognitive
>processes. Although somewhat looked down upon currently, it is likely that
>expert system technology grew out of this type of research.

We also need to try to understand what all minds have in common.  Which
basic instincts that are common among all (somewhat intelligent) living
creatures would need to be held in common by an artificial intelligence
to truly be intelligent?  (Of course we wouldn't want to limit ourselves
to just these attributes -- maybe this would be a sufficient condition,
but not a necessary condition.)

Would survival be one of these traits?  If so, would we give a machine
the ability to control its own survival?  Would we make it
intelligent enough to make sure we couldn't destroy it?  If we are not
willing to do so, could we actually create intelligence? (maybe a
catch-22)

What artificial intelligence is there that actually tries to mimic the
basic attributes among various species?  I would think that if we took a
good look, we wouldn't find much (although I am not at all well versed
with AI literature).  Maybe that is where the challenge should lie.
Maybe in attempting such a feat we would learn so much more about our
own intelligence and about the other areas of AI.

>Mike Bender
--
Chris Dailey   dailey@(frith.egr|cps).msu.edu
    __  __  ___       | "A line in the sand." -- The Detroit News
 __/  \/  \/ __:>-    |
 \__/\__/\__/         | "Allein in der sand." -- me

G.Joly@cs.ucl.ac.uk (Gordon Joly) (02/14/91)

nagle@well.sf.ca.us (John Nagle) writes
< G.Joly@cs.ucl.ac.uk (Gordon Joly) writes:
< 
< >I see a discontinuity in the development of intelligence, at the point of
< >the emergence of homo sapiens. Perhaps there should be a metric of
< >intelligence, with rocks and logs (cf Twin Peaks) at 0 and us humans
< >at 1.
< 
<       Unclear.  At our present state of ignorance, we can't really
< answer this question.  However, opinion in neurobiology suggests
< that the brains of mammals differ primarily in quantitative ways.
< See "The Evolution of the Brain", by Sir John Eccles, (1988,
< Routledge, London), especially chapter 3, section 3.1.  Results
< from dissection, DNA measures of evolutionary distance, and time
< required for evolution all indicate that the brains of humans are
< not all that different from those of the other mammals.  

I am aware of some of this work, from a brush with molecular genetics
in psychiatry. Fascinating to learn that Josie Bloggs and Freda Smith
are ten times further apart, genetically, than, say, a Caucasian man is
from a Semitic man or a Negro man.  But I digress...

Will the sequencing of the human genome be a short cut to AI? We just
read the secrets of intelligence off the database?

<       From an AI perspective, this is encouraging, in that if we
< can work our way up into the mammal range at all, we may be most of 
< the way to human-level AI.  It also indicates that levels beyond
< human intelligence might be possible with a similar architecture
< but more hardware.
< 
< 
< 					John Nagle

``QED: The Boy Who Draws Buildings'' BBC TV, 13 Feb, The Times review says

`` [...] the progress of Stephen Wiltshire, the autistic boy with an
extraordinary gift for drawing. His speciality is buildings, which he
can capture with an architect's attention to detail after studying
them for only a few minutes. He has what psychologists call the
"savant syndrome", which has noting to do with photographic memory or
even intelligence, as his IQ is only half that of normal person [...]''

An example of (exploited!) human beings with talents that do not seem
to "match" with IQ. The cases described by Oliver Sachs are also in
this class. "Seeing Voices" describes people with some form of damage
to the brain; in one case a person believes that all the furniture is
on one side of their room, but gives a true description of reality when
signing.

I still believe that there is a missing link between the other mammals
and us. I do not feel that rats will ever paint a picture of a bridge,
mainly because they don't want to. And chimps do not have language.

Gordon Joly                                       +44 71 387 7050 ext 3716
Internet: G.Joly@cs.ucl.ac.uk          UUCP: ...!{uunet,ukc}!ucl-cs!G.Joly
Computer Science, University College London, Gower Street, LONDON WC1E 6BT

mikeb@wdl35.wdl.loral.com (Michael H Bender) (02/14/91)

Chris Dailey writes:

  ... historical review of previous comments ...

   We also need to try to understand what all minds have in common.  Which
   basic instincts that are common among all (somewhat intelligent) living
   creatures would need to be held in common by an artificial intelligence
   to truly be intelligent?  (Of course we wouldn't want to limit ourselves
   to just these attributes -- maybe this would be a sufficient condition,
   but not a necessary condition.)

   Would survival be one of these traits?  If so, would we give a machine
   the ability to control its own survival?  Would we make it
   intelligent enough to make sure we couldn't destroy it?  If we are not
   willing to do so, could we actually create intelligence? (maybe a
   catch-22)

   What artificial intelligence is there that actually tries to mimic the
   basic attributes among various species?  I would think that if we took a
   good look, we wouldn't find much (although I am not at all well versed
   with AI literature).  Maybe that is where the challenge should lie.
   Maybe in attempting such a feat we would learn so much more about our
   own intelligence and about the other areas of AI.


The basic problem with what you propose is that as you go "up" the
evolutionary ladder you discover that these traits slowly change. (For
instance, does an amoeba have a need to survive when it spits in half?)
Clearly there are biological factors that affect our behavior, but they
can be strongly affected by environmental and sociological factors.
Often, we are responding to many different, contradictory, biological
drives at the same time. So what you are proposing sounds very
difficult to me. I suggest that you read Wilson (?) who wrote
Sociobiology (or was it Biosociology?).


By the way, do you really believe that there is only 1 (!) challenge?

Mike Bender

mmt@client2.DRETOR.UUCP (Martin Taylor) (02/14/91)

nici%cs@ucsd.{edu,bitnet,uucp} (Nicol N. Schraudolph) says:

>One hypothesis is that the "extra intelligence" resides not so much
>in the individual human brain, but rather in the countless cognitive
>artifacts and social structure we have created for ourselves. (*)
>Thus although we still essentially have animal brains, over the
>generations we collectively managed to create an environment in
>which slightly modified animal brains can pull all sorts of amazing
>stunts (such as discussing their own epistemology :-).

Just think of the difference in behaviour between a feedback loop with
a gain of 1-epsilon as compared to one with a gain of 1+epsilon.  In
the first case, any input eventually decays to zero, whereas in the
second it grows to infinity.  I think our "intelligence" is 2*epsilon
greater than that of chimpanzees, where epsilon is small enough to
allow the smartest chimps to be smarter than the dumbest humans, but
not enough chimps are smart enough to allow their mutual communication
to add up to "cognitive artifacts" or joint invention, whereas most humans
can use language cleverly enough to aid one another in developing
concepts, and can port pre-built concepts from one human to another.
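
(A quick numeric sketch of that gain intuition -- purely illustrative,
with arbitrarily chosen numbers rather than anything measured: iterate
x <- g*x on either side of g = 1 and the qualitative split appears.)

    # Toy illustration of a feedback gain just below vs. just above 1.
    def iterate(gain, x=1.0, steps=200):
        for _ in range(steps):
            x *= gain
        return x

    eps = 0.01
    print(iterate(1 - eps))   # ~0.13: the signal eventually decays away
    print(iterate(1 + eps))   # ~7.3:  the same signal keeps on growing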

Just as the universe seems "naturally" to have just enough mass to
be on the border between closing and not closing, even though we can't
find all that mass, so, I think we "naturally" have juuuuust enough
intelligence to regenerate and expand our cognitive artifacts over
time, and to slowly learn to build "mind-expanders" that increase our
ability to do so (writing, calculators, computers ...).  We are where
we are because this is almost the only place people that would ask
such questions could be.  A few centuries ago, few people could have
conceived the question; a few centuries hence, few will be interested
in such trivia.

(*) Schraudolph attributes the idea to Ed Hutchins.  With all due respect
to Ed, I think it is in the Zeitgeist, and has been for quite some time.
-- 
Martin Taylor (mmt@ben.dciem.dnd.ca ...!uunet!dciem!mmt) (416) 635-2048
There is no legal canon prohibiting the application of common sense
(Judge James Fontana, July 1990, on staying the prosecution of a case)

G.Joly@cs.ucl.ac.uk (Gordon Joly) (02/14/91)

I said 
> An example of (exploited!) human beings with talents that do not seem
> to "match" with IQ. The cases described by Oliver Sachs are also in
> this class. "Seeing Voices" describes people with some form of damage
> to the brain; in one case a person believes that all the furniture is
> on one side of their room, but gives a true description of reality when
> signing.

I must apologise for spelling Oliver Sacks' name incorrectly. Also,
the example is in "Seeing Voices", a revised edition published by
Picador in the U.K. this year (HarperPerennial edition, 1990, in the
US).

The observations were of a subject with a "massive lesion to the right
cerebral hemisphere". There is a reference to "What the Hands Reveal
About the Brain", H. Piozner, E. S. Kilma & U. Bellugi, The MIT
Press/Bradford Books, 1987.

The television program I mentioned was interesting; Stephen Wiltshire's
talent was amazing; "he sometimes preferred to draw buildings with his
back towards them" having taken a good look. His drawings fetch around
1,000 pounds (about 2,000 US dollars). Yet he could not spot the link
between "apple, banana, pear, orange".

Gordon Joly                                       +44 71 387 7050 ext 3716
Internet: G.Joly@cs.ucl.ac.uk          UUCP: ...!{uunet,ukc}!ucl-cs!G.Joly
Computer Science, University College London, Gower Street, LONDON WC1E 6BT

minsky@media-lab.MEDIA.MIT.EDU (Marvin Minsky) (02/14/91)

There has been considerable discussion under this subject of
differences between human and animal thought.  Has anyone considered
the conjecture that humans have 3.5 levels of STM, or large-scale
temporary K-lines -- and procedures capable of learning to use them?
Maybe chimps have only 2.5 layers of recursion abilities -- and
earlier mammals only 1.5.  This could account for many aspects of
human abilities in language, planning, problem-solving, etc.  And note
the positive feedback: with a larger (yet still small) such stack,
you also get more time to put more things into LTM to use as a
"virtual" STM stack.

For example, Marcus grammars can do a lot of "natural language
grammar" with 3 stack-like registers, but not very much with only two.

By "2.5" levels of stack, I simply mean that the first register is
very competent and capacious, the second less so, etc., so the thing
trails off.  That's why, presumably, you can understand sentences with
2 levels of embedding, but have trouble with 3, etc.
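
(A toy illustration of the bounded-depth idea -- my own sketch, not
Minsky's K-line machinery and not an actual Marcus parser: a recognizer
for nested brackets whose stack is capped at a fixed depth handles
shallow embedding but fails as soon as the nesting exceeds its
registers.)

    # Illustrative sketch only: a hard cap on stack depth standing in
    # for a small, bounded short-term memory.
    def parse_nested(s, max_depth):
        depth = 0
        for ch in s:
            if ch == '(':
                depth += 1
                if depth > max_depth:
                    return False          # overflow: out of "registers"
            elif ch == ')':
                if depth == 0:
                    return False          # unmatched close bracket
                depth -= 1
        return depth == 0

    print(parse_nested('(())', 3))        # True:  2 levels of embedding fit
    print(parse_nested('((()))', 3))      # True:  3 levels just fit
    print(parse_nested('(((())))', 3))    # False: 4 levels overflow the cap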

dailey@frith.uucp (Chris Dailey) (02/15/91)

In article <MIKEB.91Feb13122512@wdl35.wdl.loral.com> mikeb@wdl35.wdl.loral.com (Michael H Bender) writes:
>Chris Dailey writes:
[... My comments deleted ...]
>The basic problem with what you propose is that as you go "up" the
>evolutionary ladder you discover that these traits slowly change. (For
>instance, does an amoeba have a need to survive when it spits in half?)

[I assume you meant "splits".]

But survival is just one possible 'necessary trait'.  Ability to
reproduce, adapt, and interact with the environment might be others,
and there'd be undoubtedly many more.  Splitting in half is not the
sort of trait I was thinking about (but can be considered a REALIZATION
of one such trait).  [As an aside, maybe that's why they call cellular
automata 'Life' -- because the images on the screen appear to exhibit
some of these traits/characteristics ...?]

>Clearly there are biological factors that affect our behavior, but they
>can be strongly affected by environmental and sociological factors.
>Often, we are responding to many different, contradictory, biological
>drives at the same time.

... as well as intellectual factors.  Different traits are affected by
each type of factor.  Different combinations of factors and traits can
result in very different responses (as with a light flashing at a
certain frequency to an average person and to an epileptic).

>So what you are proposing sounds very
>difficult to me. 

Even so, there are general categories of traits across all lifeforms
(er, maybe I should say lifeforms that show signs of intelligence)
that I think would be desirable in such a model.  (Besides, nobody ever
said it would be easy!  :)

>By the way, do you really believe that there only is 1 (!) challenge?

No, by 'where the challenge should lie' I guess I meant 'where I'd like
to see the next step be taken'.  Sorry for the bad wording and personal
opinion.

>Mike Bender


--
Chris Dailey   dailey@(frith.egr|cps).msu.edu
    __  __  ___       | "A line in the sand." -- The Detroit News
 __/  \/  \/ __:>-    |
 \__/\__/\__/         | "Allein in der sand." -- me

nagle@well.sf.ca.us (John Nagle) (02/16/91)

      Worrying about the differences between humans and the other mammals
remains premature, given our grossly inadequate knowledge about how anything
more complex than a slug really works.  We just don't have the knowledge to
address the design of a human-level intelligence, given that efforts to
build even synthetic ant-level intelligences have only been marginally
successful to date.

      This is harsh, yes.  But there is too much wishful thinking in AI.
The feeling that real AI is just around the corner keeps researchers
looking for a "magic bullet" that will crack the problem.  This is 
analogous to the belief of medieval alchemists that they were near to
transmuting lead into gold, when the principles of chemistry had not
yet been discovered.  Two centuries of slow, painstaking work, by men
such as Lavoisier, was needed to put chemistry on a sound footing.
Top-down AI may be the same way.  

     I might be wrong.  Someone might pull off a miracle.  But from
the bottom up, we don't need a miracle.  Ordinary cleverness will
suffice.

					John Nagle

mikeb@wdl35.wdl.loral.com (Michael H Bender) (02/22/91)

Marvin Minsky writes:

> There has been considerable discussion under this subject of
> differences between human and animal thought.  Has anyone considered
> the conjecture that humans have 3.5 levels of STM, or large-scale
> temporary K-lines -- and procedures capable of learning to use them?
> Maybe chimps have only 2.5 layers of recursion abilities -- and
> earlier mammals only 1.5.  This could account for many aspects of
> human abilities in language, planning, problem-solving, etc.  And note
> the positive feedback: with a larger (yet still small) such stack,
> you also get more time to put more things into LTM to use as a
> "virtual" STM stack.
>  For example, Marcus grammars can do a lot of "natural language
> grammar" with 3 stack-like registers, but not very much with only two.
>  By "2.5" levels of stack, I simply mean that the first register is
> very competent and capacious, the second less so, etc., so the thing
> trails off.  That's why, presumably, you can understand sentences with
> 2 levels of embedding, but have trouble with 3, etc.

I think there is an analogy between what you are suggesting and other
reported limitations in human and animal thought. 

	Example 1: It has been reported (I don't remember the source) that
	crows can reliably count up to 4 (?) but not past this number. 

	Example 2: The studies associated with "The Magic Number 7". 

In both these examples there may be built-in, structural limitations on
the number of objects that can be perceived or conceptualized at any one
time, and these limitations appear to vary from species to species.

Mike Bender

G.Joly@cs.ucl.ac.uk (Gordon Joly) (02/26/91)

Michael H Bender writes:
 > Marvin Minsky writes:
 > 
 > > There has been considerable discussion under this subject of
 > > differences between human and animal thought.  
 > > [...]
 >
 > I think there is an analogy between what you are suggesting and other
 > reported limitations in human and animal thought. 
 > 
 > 	Example 1: It has been reported (I don't remember the source) that
 > 	crows can reliably count up to 4 (?) but not past this number. 
 > 
 > 	Example 2: The studies associated with "The Magic Number 7". 
 > 
 > In both these examples there may be built-in, structural limitations on
 > the number of objects that can be perceived or conceptualized at any one
 > time, and these limitations appear to vary from species to species.
 > 
 > Mike Bender

Human intelligence is the sum, in part, of all previous
thought, transmitted through language.

There was a suggestion in the AI Journal that 2.5 million years of
humanity compared with the rest of life on earth meant that simulating
the IQ of an earwig or a lizard meant that you were almost home and
dry, and that human IQ was within spitting distance.

If the first people walked the Earth 2.5 million years ago, then why
did they have a brain with the same biology as that of Einstein,
Beethoven, or us?

Gordon Joly                                       +44 71 387 7050 ext 3716
Internet: G.Joly@cs.ucl.ac.uk          UUCP: ...!{uunet,ukc}!ucl-cs!G.Joly
Computer Science, University College London, Gower Street, LONDON WC1E 6BT

   "I didn't do it. Nobody saw me do it. You can't prove anything!"

feldman@rex.cs.tulane.edu (Damon Feldman) (02/27/91)

In <23176@well.sf.ca.us> nagle@well.sf.ca.us (John Nagle) writes:


>      Worrying about the differences between humans and the other mammals
>remains premature, given our grossly inadequate knowledge about how anything
>more complex than a slug really works.  We just don't have the knowledge to
>address the design of a human-level intelligence, given that efforts to
>build even synthetic ant-level intelligences have only been marginally
>successful to date.
	
>     I might be wrong.  Someone might pull off a miracle.  But from
>the bottom up, we don't need a miracle.  Ordinary cleverness will
>suffice.

	Nonsense.  The brain is structured fundamentally differently
from a computer and has capabilities that use this structure to its
maximal advantage (I believe for a variety of reasons).  Modeling the
brain with a computer is no more feasible than having a human perform
all the tasks of a computer.
	The problems I refer to are things like having 10^10 (or
something like that) connections among neurons.  A computer cannot
keep track of 10^10 little peices of information in the first place,
and if it could, it could not model the interactions that occur among
all of them many times per second.
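
(A back-of-envelope version of that claim; every figure below is an
assumed round number, not a measurement.)

    # Rough, illustrative arithmetic only: the update rate a naive
    # simulation of that much wiring would need.
    connections = 1e10         # assumed connection count (order of magnitude)
    updates_per_second = 100   # assumed rate at which connections change
    ops_per_update = 10        # assumed arithmetic per connection update

    required = connections * updates_per_second * ops_per_update
    print(f"{required:.0e} operations per second")   # ~1e+13
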
	The brain has defined certain areas of AI by being good in
them, and cannot be matched by a Von Neumann Machine (or some toy with
a few thousand processors) in these areas.

Just my opinion,
Damon
-- 

Damon Feldman                  feldman@rex.cs.tulane.edu
Computer Science Dept.         Tulane University, New Orleans LA, USA

dailey@buster.cps.msu.edu (Chris Dailey) (03/01/91)

Gordon Joly (G.Joly@cs.ucl.ac.uk) writes:
>Michael H Bender writes:
> > Marvin Minsky writes:
> > > There has been considerable discussion under this subject of
> > > differences between human and animal thought.  
> > > [...]
> > I think there is an analogy between what you are suggesting and other
> > reported limitations in human and animal thought. 
> > 
> > 	Example 1: It has been reported (I don't remember the source) that
> > 	crows can reliably count up to 4 (?) but not past this number. 
> > 
> > 	Example 2: The studies associated with "The Magic Number 7". 
> > Mike Bender
>
>Human intelligence is the sum, in part, of all previous
>thought, transmitted through language.

How about transmitted through experience?  Why limit yourself to just
language?  Is there not intelligence in knowing what the seasons
bring?  Or that if you drink Mountain Dew then you get jittery?  These are
manifestations of human intelligence that may also be similar to
manifestations of intelligence in general.

How about this:  Human intelligence is the ability to recognize
patterns in the environment [in order] to make generalizations and
predictions about future events.

Maybe life could be defined as the desire for intelligence/ability in
order to assure one's survival and/or happiness and/or propagation of
the species.

These are not propositions for formal definitions, I'm just kicking
around some ideas.

>Gordon Joly                                       +44 71 387 7050 ext 3716
-- 
Chris Dailey   dailey@(frith.egr|cps).msu.edu
    __  __  ___       | "A line in the sand." -- The Detroit News
 __/  \/  \/ __:>-    |
 \__/\__/\__/         | "Allein in der sand." -- me

nagle@well.sf.ca.us (John Nagle) (03/01/91)

G.Joly@cs.ucl.ac.uk (Gordon Joly) writes:

>There was a suggestion in the AI Journal that 2.5 million years of
>humanity compared with the rest of life on earth meant that simulating
>the IQ of an earwig or a lizard meant that you were almost home and
>dry, and that human IQ was within spitting distance.

     Reference, please.

     I've been saying something like that for a few years, although
more along the lines that once we get up to low-end mammal capability
we should almost be there.  This follows Sir John Eccles' comments
in "Evolution of the Brain" that the brains of mammals differ only
quantitatively.

     The discouraging thing is that even building an ant brain is hard.
The encouraging thing is that this is now recognized.

					John Nagle

nagle@well.sf.ca.us (John Nagle) (03/01/91)

feldman@rex.cs.tulane.edu (Damon Feldman) writes:

>	Nonsense.  The brain is structured fundamentally differently
>from a computer and has capabilities that use this structure to its
>maximal advantage (I believe for a variety of reasons).  Modeling the
>brain with a computer is no more feasible than having a human perform
>all the tasks of a computer.

>	The problems I refer to are things like having 10^10 (or
>something like that) connections among neurons.  A computer cannot
>keep track of 10^10 little peices of information in the first place,
>and if it could, it could not model the interactions that occur among
>all of them many times per second.

      If all we needed was to store 10^10 "little peices" [sic] of
information, it would be easy enough.  RAM prices are down to around
$100/MB, so for a million dollars, one can store 10^10 bytes.  And maybe
some could be paged out to disk.  Disk drives are down below $5/MB.
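
(The arithmetic behind that figure, taking the quoted RAM price at face
value and using decimal megabytes for round numbers:)

    # Cost check: 10^10 bytes of RAM at the quoted $100 per megabyte.
    bytes_needed = 10**10
    dollars_per_mb = 100
    megabytes = bytes_needed / 10**6       # 10,000 MB
    print(megabytes * dollars_per_mb)      # 1,000,000 dollars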

      Hardware is not the real problem any more.  If you believe Moravec's
figures on brain complexity, a Connection Machine has more than enough power to
emulate a mouse.  Yet we still have trouble doing ants.  

      Squirrel-level AI is worth striving for.  Good eye-hand coordination,
good balance, good performance on manual tasks, and about three orders of
magnitude below human brain size.  When we can do a squirrel, we will be
close to human-level performance.

					John Nagle

rickert@mp.cs.niu.edu (Neil Rickert) (03/01/91)

In article <1991Feb28.193218.21879@msuinfo.cl.msu.edu> dailey@buster.cps.msu.edu (Chris Dailey) writes:
>
>How about this:  Human intelligence is the ability to recognize
>patterns in the environment [in order] to make generalizations and
>predictions about future events.

 Sounds OK, providing that you will agree that Pavlov's dog exhibited
human intelligence.

-- 
=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=
  Neil W. Rickert, Computer Science               <rickert@cs.niu.edu>
  Northern Illinois Univ.
  DeKalb, IL 60115                                   +1-815-753-6940

kludge@grissom.larc.nasa.gov ( Scott Dorsey) (03/01/91)

In article <1991Feb28.204538.21350@mp.cs.niu.edu> rickert@mp.cs.niu.edu (Neil Rickert) writes:
>In article <1991Feb28.193218.21879@msuinfo.cl.msu.edu> dailey@buster.cps.msu.edu (Chris Dailey) writes:
>>
>>How about this:  Human intelligence is the ability to recognize
>>patterns in the environment [in order] to make generalizations and
>>predictions about future events.
>
> Sounds OK, providing that you will agree that Pavlov's dog exhibited
>human intelligence.

Actually, I like this definition.  By this definition, Pavlov's dog does
exhibit quite a degree of intelligence, far more than, say, a worm.  But
much less than Dr. Minsky, because it was not able to make as sophisticated
generalizations or as detailed predictions.  A rock or a Turing machine by
this definition would have no intelligence.  Whether a Turing machine
may be used to simulate a system which is intelligent is a question that
can't yet be answered.
--scott

feldman@rex.cs.tulane.edu (Damon Feldman) (03/01/91)

In <23399@well.sf.ca.us> nagle@well.sf.ca.us (John Nagle) writes:

>feldman@rex.cs.tulane.edu (Damon Feldman) writes:

>>	Nonsense.  The brain is structured fundamentally differently
>>from a computer and has capabilities that use this structure to its
>>maximal advantage (I believe for a variety of reasons).  Modeling the

>>	The problems I refer to are things like having 10^10 (or
>>something like that) connections among neurons.  A computer cannot
>>keep track of 10^10 little peices of information in the first place,
>>and if it could, it could not model the interactions that occur among
>>all of them many times per second.

>      If all we needed was to store 10^10 "little peices" [sic] of
>information, it would be easy enough.  RAM prices are down to around
>$100/MB, so for a million dollars, one can store 10^10 bytes.  And maybe
>some could be paged out to disk.  Disk drives are down below $5/MB.

>      Hardware is not the real problem any more.  If you believe Moravec's
>figures on brain complexity, a Connection Machine has more than enough power to
>emulate a mouse.  Yet we still have trouble doing ants.

	In terms of computing power alone, I agree, we've got enough.
If not with a connection machine then with a whole bunch of 'em.
	I don't think that is the point, personally.  As I said
before, there are things computers can do that we cannot, although we
have more computational power than a Macintosh.  This is because we
operate differently, not because we don't perform enough operations per
second. 
	Tasks that people do tend to be those that we can do!  In
other words, the set of all tasks achievable with a totally parallel,
10^9-processor, superconnected computer with program-as-structure
(i.e. brain) is achievable by people.  To simulate human (or even
animal) behavior, a computer would have to be able to do all the
things that a totally parallel, 10^9-processor etc. machine can do.
Since a computer has relatively few processors and program-as-data
etc. it cannot.  Not because it doesn't perform enough operations per
second, but because its mode of operation is better for some things
and worse for others.
	Given a computer and a person with the same computational
power (using any definition you like) the person will still take a lot
longer to calculate 20,000 digits of pi.  The inefficiency of the
person is due to the way he works, since computational power has been
equalized.  The person would take a year to do what the computer can
do in an hour.  How bad does it get before we accept that there is a
fundamental difference in operational capabilities, beyond speed and
memory? 
	To make a more concrete analogy, it is similar to trying to
achieve the programming capabilities of C running on a connection machine
using a distributed system of PS-2's running lisp.  Lisp has strong
points, but it cannot do some of the things C can do without gross
inefficiency with respect to both time and storage needs.

hope this was coherent,
Damon

-- 

Damon Feldman                  feldman@rex.cs.tulane.edu
Computer Science Dept.         Tulane University, New Orleans LA, USA

dailey@buster.cps.msu.edu (Chris Dailey) (03/01/91)

I (dailey@buster.cps.msu.edu) wrote:
>How about this:  Human intelligence is the ability to recognize
>patterns in the environment [in order] to make generalizations and
>predictions about future events.

To which Neil Rickert (rickert@mp.cs.niu.edu) replied:
> Sounds OK, providing that you will agree that Pavlov's dog exhibited
>human intelligence.

To which Scott Dorsey (kludge@grissom.larc.nasa.gov) replied:
>Actually, I like this definition.  By this definition, Pavlov's dog does
>exhibit quite a degree of intelligence, far more than, say, a worm.  But
>much less than Dr. Minsky, because it was not able to make as sophisticated
>generalizations or as detailed predictions.  A rock or a Turing machine by
>this definition would have no intelligence.  Whether a Turing machine
>may be used to simulate a system which is intelligent is a question that
>can't yet be answered.
>--scott

Maybe I shouldn't have said, "Human intelligence...".  I think it is a
mistake to be searching for intelligence inherently based on our own
form (or for reasonable computer emulation thereof).  I think I should
have said, "Higher intelligence".  That leaves us with:

     Higher intelligence is the ability to recognize patterns in the
     environment [in order] to make generalizations and predictions
     about future events.

So if we desire to improve this definition, let us consider a plant
that always points towards the sun to get the maximum amount of light.
Can we [and do we want to] say that the plant is displaying
intelligence?  There is a form of pattern recognition there, right?
But is it making predictions or just responding to its environment?
And if it is just responding to its environment, should that be a part
of the definition of higher intelligence?  Or maybe it is part of a
definition of lower intelligence, which should possibly be included in
a definition of higher intelligence?

This definition requires interaction with the environment.  Should that
be?  Or is this not really a requirement for higher intelligence but a
necessity to be able to determine whether there is higher intelligence?

Sorry for posing so many questions at once.
-- 
Chris Dailey   dailey@(frith.egr|cps).msu.edu
    __  __  ___       | "A line in the sand." -- The Detroit News
 __/  \/  \/ __:>-    |
 \__/\__/\__/         | "Allein in der sand." -- me

cam@aipna.ed.ac.uk (Chris Malcolm) (03/02/91)

In article <1473@ucl-cs.uucp> G.Joly@cs.ucl.ac.uk (Gordon Joly) writes:

>If the first people walked the Earth 2.5 million years ago, then why
>did they have a brain with the same biology as that of Einstein,
>Beethoven, or us?

With a much poorer cultural inheritance (far fewer giants to stand on the
shoulders of) it took an Einstein to invent such things as the
bow-and-arrow and a Marie Curie to invent such things as the basket --
the basket has been cited as the single most important human invention.

Don't forget that the evolution of the human brain size has to run
parallel with the evolution of the female hips. In other words, our
brain size is a compromise between the advantages of a big brain, and
the disadvantages of the brain damage that afflicts at least some due to
the difficulty of getting the big head out. Brains would keep getting
bigger until the effects of the hip bottleneck nullified the advantage.
The next increment in cleverness would then be produced by wider or more
flexible female hips, and the cycle would begin again.  And a social
culture-passing animal doesn't need _everyone_ to be smart, just enough
smart people to make the tribe and the culture smart. Culture is a
wonderful way of sharing out the mental talents of the few.  Under these
various circs you'd expect wide variation in smartness, partly due to
wide variation in genetic endowment, and partly due to the fact that
many (most?) people suffer from minor early brain damage.

The hip argument is just the most obvious example of a general truth:
that when there is strong selective pressure for some change (and the
rate of evolution of human brain size suggests there was), then it will
be achieved along with damaging adaptive stress to the rest of the
physiology. Once the selected-for change has stabilised, evolution will
then work "invisibly" to remove the adaptive stress by bringing the rest
of the genetic spec into line with it. The suggestion is that the big
brain hasn't been around for long enough for this to have happened yet.
-- 
Chris Malcolm    cam@uk.ac.ed.aipna   +44 (0)31 667 1011 x2550
Department of Artificial Intelligence,    Edinburgh University
5 Forrest Hill, Edinburgh, EH1 2QL, UK                DoD #205

cam@aipna.ed.ac.uk (Chris Malcolm) (03/02/91)

In article <23398@well.sf.ca.us> nagle@well.sf.ca.us (John Nagle) writes:
>G.Joly@cs.ucl.ac.uk (Gordon Joly) writes:

>>There was a suggestion in the AI Journal that 2.5 million years of
>>humanity compared with the rest of life on earth meant that simulating
>>the IQ of an earwig or a lizard meant that you were almost home and
>>dry, and that human IQ was within spitting distance.

>     Reference, please.

Not the requested reference, but others relevant.

Moravec, H., {\em Locomotion, Vision, and Intelligence}, in Robotics
Research 1, eds Brady and Paul, MIT Press, 1984.

This says it quite explicitly, and suggests that the proper basis for
an AI research programme is therefore to start with a simple
responsive mobile creature and then recapitulate the "ox-trail of
evolution".

See MIT AI MEMO 899, R. Brooks, "Achieving Artificial Intelligence
through building robots", 1986, which suggests just such a research
programme.

Evans, Christopher (1931-1979), The Mighty Micro: The Impact of the
Computer Revolution, new edition, 2nd impression, Victor Gollancz,
London, c1982.

This earlier popular book gives the same sort of numbers and arguments
which form the basis of Moravec's recommendation.

But I am sure that the general idea has been part of the
systems/cybernetics culture for as long as enough has been known to do
the sums, i.e., I think it's at least as old as AI, but I'm sorry, a
rapid rummage of my bookshelves has failed to turn up a reference.

The very earliest robotics research was along the kind of lines
suggested by these arguments, e.g., the MH1 MIT hand of the late 1950s
early 1960s. The later successes of the analytic approach, e.g., the
position-controlled robot arm with inverse-kinematics giving Cartesian
end-effector position specification, and the use of inverse Newtonian
optics to deduce object position from stereo images -- these successes
(IMHO) tempted research away from what then was a barely-articulated
paradigm into the "classical approach", which performs "sensor-fusion"
via world models, and in which the behaviour of the system is made to
conform to the formal model by using the formal model as an ingredient
in the implementation. The funding crisis of the 1970s, when AI
researchers were criticised for being "unscientific" ("where's the
formal model, then?"), helped the shift along.

"How do you know which leg to move next?" asked the ant of the
centipede. The centipede thought for a long time, started waving its
legs experimentally, and fell over.




-- 
Chris Malcolm    cam@uk.ac.ed.aipna   +44 (0)31 667 1011 x2550
Department of Artificial Intelligence,    Edinburgh University
5 Forrest Hill, Edinburgh, EH1 2QL, UK                DoD #205

G.Joly@cs.ucl.ac.uk (Gordon Joly) (03/03/91)

Chris Malcolm writes:
 > In article <23398@well.sf.ca.us> nagle@well.sf.ca.us (John Nagle) writes:
 > >G.Joly@cs.ucl.ac.uk (Gordon Joly) writes:
 > 
 > >>There was a suggestion in the AI Journal that 2.5 million years of
 > >>humanity compared with the rest of life on earth meant that simulating
 > >>the IQ of an earwig or a lizard meant that you were almost home and
 > >>dry, and that human IQ was within spitting distance.
 > 
 > >     Reference, please.
 > 
 > [...]
 > -- 
 > Chris Malcolm    cam@uk.ac.ed.aipna   +44 (0)31 667 1011 x2550
 > Department of Artificial Intelligence,    Edinburgh University
 > 5 Forrest Hill, Edinburgh, EH1 2QL, UK                DoD #205

Sorry; on further reading of the article, I think I also
misrepresented the author. Apologies.

%T Today the earwig, tomorrow man?
%A David Kirsh
%J Artificial Intelligence
%V 47
%N 1-3
%P 161-184
%D 1991
%I Elsevier
%C Amsterdam
%E David Kirsh

Gordon Joly                                       +44 71 387 7050 ext 3716
Internet: G.Joly@cs.ucl.ac.uk          UUCP: ...!{uunet,ukc}!ucl-cs!G.Joly
Computer Science, University College London, Gower Street, LONDON WC1E 6BT

   "I didn't do it. Nobody saw me do it. You can't prove anything!"

G.Joly@cs.ucl.ac.uk (Gordon Joly) (03/04/91)

Chris Malcolm writes:
 > In article <1473@ucl-cs.uucp> G.Joly@cs.ucl.ac.uk (Gordon Joly) writes:
 > 
 > >If the first people walked that Earth 2.5 million years ago, then why
 > >did they have a brain with the same biology of that of Einstein,
 > >Beethoven or us?
 > 
 > With a much poorer cultural inheritance (far less giants to stand on the
 > shoulders of) it took an Einstein to invent such things as the
 > bow-and-arrow and a Marie Curie to invent such things as the basket --
 > the basket has been cited as the single most important human invention.

Yes, I see.

 > Don't forget that the evolution of the human brain size has to run
 > parallel with the evolution of the female hips. In other words, our
 > brain size is a compromise between the advantages of a big brain, and
 > the disadvantages of the brain damage that afflicts at least some due
 > the difficulty of getting the big head out.
 > [...]
 > -- 
 > Chris Malcolm    cam@uk.ac.ed.aipna   +44 (0)31 667 1011 x2550
 > Department of Artificial Intelligence,    Edinburgh University
 > 5 Forrest Hill, Edinburgh, EH1 2QL, UK                DoD #205

No, I disagree. The idea that a woman has to have large hips to give
birth to large(r) babies is a widespread misconception (pun intended).

Wide hips are a device of sexual selection, not a device for babies with
bigger heads; what is at work here is sexism, since it is men doing the
selecting.

Anyway, we still have a lot of redundancy left to soak up, cf. Sacks's work.

OMMM....

Gordon Joly                                       +44 71 387 7050 ext 3716
Internet: G.Joly@cs.ucl.ac.uk          UUCP: ...!{uunet,ukc}!ucl-cs!G.Joly
Computer Science, University College London, Gower Street, LONDON WC1E 6BT

   "I didn't do it. Nobody saw me do it. You can't prove anything!"

scotp@csc2.essex.ac.uk (Scott P D) (03/05/91)

In article <4083@aipna.ed.ac.uk> cam@aipna.ed.ac.uk (Chris Malcolm) writes:
>Don't forget that the evolution of the human brain size has to run
>parallel with the evolution of the female hips. In other words, our
>brain size is a compromise between the advantages of a big brain, and
>the disadvantages of the brain damage that afflicts at least some due
>the difficulty of getting the big head out. Brains would keep getting
>bigger until the effects of the hip bottleneck nullified the advantage.
>The next increment in cleverness would then be produced by wider or more
>flexible female hips, and the cycle would begin again.  
>
That is until some particularly smart combination of chromosomes stumbles
across the idea that it isn't really necessary to build the complete
brain before birth.  There is no reason why the development of the
neural circuitry should not continue for some time in the neonate.

This is in fact what seems to have happened.  It is one reason why the
average human cannot even crawl when 3 months old yet the average calf
is able to follow the herd after only a few hours.  Clearly this
strategy for producing a larger brain is only possible in a species
that provides a great deal of post natal care.  Possibly there is a
threshold brain size for this to be possible, and it would be this that
determined how large the pelvic opening needed to be.

So I am afraid your vision of eternally widening female hips is not
to be.  Sorry if I have ruined a good fantasy.


Paul Scott, Dept Computer Science, University of Essex, Colchester, UK.

sarima@tdatirv.UUCP (Stanley Friesen) (03/07/91)

In article <4083@aipna.ed.ac.uk> cam@aipna.ed.ac.uk (Chris Malcolm) writes:
>>If the first people walked that Earth 2.5 million years ago, then why
>>did they have a brain with the same biology of that of Einstein,
>>Beethoven or us?
>
>With a much poorer cultural inheritance (far less giants to stand on the
>shoulders of) it took an Einstein to invent such things as the
>bow-and-arrow and a Marie Curie to invent such things as the basket --
>the basket has been cited as the single most important human invention.

Very good!  The basket was the most important invention, and the bow
and arrow was the *hardest*.  It did indeed take an Einstein to invent
it.  [It is one of those things that is obvious once seen, but hard to
derive for the first time].


By the way, the original poster exaggerated a bit.  The earliest hominids,
2.5 million years ago, had *far* smaller brains than we do.  (Proportionally
to their bodies, the brain of Australopithecus afarensis was about the same
size as a chimpanzee's.)  It was not until about 100 thousand years ago
(give or take a few tens of thousands) that a modern-sized brain appeared.
-- 
---------------
uunet!tdatirv!sarima				(Stanley Friesen)

christo@psych.toronto.edu (Christopher Green) (03/08/91)

In article <162@tdatirv.UUCP> sarima@tdatirv.UUCP (Stanley Friesen) writes:
>
>The earliest hominids,
>2.5 million years ago, had *far* smaller brains than we do.  (Proportionally
>to their bodies, the brain of Australopithecus afarensis was about the same
>size as a chimpanzee's.)  It was not until about 100 thousand years ago
>(give or take a few tens of thousands) that a modern-sized brain appeared.

Neanderthals actually had a brain LARGER than that of Homo sapiens sapiens.
Didn't they appear before 100,000 years ago?
-christopher-

-- 
Christopher D. Green
Psychology Department                             e-mail:
University of Toronto                   christo@psych.toronto.edu
Toronto, Ontario M5S 1A1                cgreen@lake.scar.utoronto.ca 

smoliar@isi.edu (Stephen Smoliar) (03/11/91)

In article <1991Mar1.145213.1423@msuinfo.cl.msu.edu> dailey@buster.cps.msu.edu
(Chris Dailey) writes:
>
>     Higher intelligence is the ability to recognize patterns in the
>     environment [in order] to make generalizations and predictions
>     about future events.
>
>So if we desire to improve this definition, let us consider a plant
>that always points towards the sun to get the maximum amount of light.
>Can we [and do we want to] say that the plant is displaying
>intelligence?  There is a form of pattern recognition there, right?
>But is it making predictions or just responding to its environment?
>And if it is just responding to its environment, should that be a part
>of the definition of higher intelligence?  Or maybe it is part of a
>definition of lower intelligence, which should possibly be included in
>a definition of higher intelligence?
>
All these questions may stem from the assumption that pattern recognition,
generalization, and prediction are each SINGLE skills which an agent either
has or lacks.  One of the things which interests me the most about Edelman's
THE REMEMBERED PRESENT is that he tries to approach the problem of what the
brain does in terms of different levels of capacity for what he calls
"categorical perception," a piece of terminology which tends to incorporate
what Chris probably has in mind for pattern recognition, generalization, and
prediction.  Thus, the heliotropic response of a plant (such as a heliotrope)
may be regarded as a very limited capacity for perceptual categorization.  The
sensory hairs on a Venus fly-trap endow it with a somewhat greater capacity
through its ability to (generally) distinguish food from debris.  When we move
into the animal kingdom, we find a variety of means by which different
organisms can be distinguished by the sophistication of their respective
capacities for categorical perception;  and Edelman's book is essentially
an outline of all the features which constitute human capacity in this respect.
-- 
USPS:	Stephen Smoliar
	5000 Centinela Avenue  #129
	Los Angeles, California  90066
Internet:  smoliar@venera.isi.edu

schraudo@beowulf.ucsd.edu (Nici Schraudolph) (03/12/91)

G.Joly@cs.ucl.ac.uk (Gordon Joly) writes:

>Any, we still have a lot redundancy left to soak up, cf Sacks work.

I think such claims of redundancy in the brain -- ranging from lesion
studies down to the infamous "you only use 10%" ads -- are premature;
we simply don't know enough about brain function to make such a deter-
mination.  Just because a person can still function normally with a
lesion of A doesn't mean that A was redundant: the lesion may have pro-
duced a defect that escaped our tests, or the function of A has been taken
over by B, slightly compromising the function of B in the process, or...

Even if the redundancy is there, it may be vital (as far as evolution is
concerned) to compensate for the detrimental effects of concussions, micro-
lesions, toxic substances, ageing, and the like.  We simply don't know
how much slack there is for increasing brain functionality without in-
creasing brain size.

-- 
Nicol N. Schraudolph, CSE Dept.  |   "I don't know about your dreams, but mine
Univ. of California, San Diego   | are sort of hackney: same thing night after
La Jolla, CA 92093-0114, U.S.A.  | night, just this repetitive.  And the color
nici%cs@ucsd.{edu,bitnet,uucp}   | is really bad..."    - Laurie Anderson.

sarima@tdatirv.UUCP (Stanley Friesen) (03/14/91)

In article <schraudo.668710823@beowulf> schraudo@beowulf.ucsd.edu (Nici Schraudolph) writes:
<G.Joly@cs.ucl.ac.uk (Gordon Joly) writes:
<I think such claims of redundancy in the brain -- ranging from lesion
<studies down to the infamous "you only use 10%" ads -- are premature;
<we simply don't know enough about brain function to make such a deter-
<mination.  Just because a person can still function normally with a
<lesion of A doesn't mean that A was redundant: the lesion may have pro-
<duced a defect that escaped our tests, or the function of A has been taken
<over by B, slightly compromising the function of B in the process, or...

You are making some very good points.  And indeed most recent work in
neurobiology tends to bear your 'guesses' out.  That is, as we find out
more about brain function, we are finding less and less 'redundancy'.

The reality seems to be a combination of the two alternatives you mentioned,
perhaps with a distinct bias towards the second (sometimes with more than
just a slight compromise in B's function; sometimes the function seems to
be taken over in part by B and C and D and ...)

<Even if the redundancy is there, it may be vital (as far as evolution is
<concerned) to compensate for the detrimental effects of concussions, micro-
<lesions, toxic substances, ageing, and the like.  We simply don't know
<how much slack there is for increasing brain functionality without in-
<creasing brain size.

Again, a very good point.  The human body is a wonder of self-correcting,
fail-safe, fault-tolerant operation.  I suspect that *any* redundancy that
actually is present is geared towards that end.  (This is certainly true
in such areas as tensile strength of bone &c.).
-- 
---------------
uunet!tdatirv!sarima				(Stanley Friesen)

dailey@cpsin3.cps.msu.edu (Chris Dailey) (03/15/91)

Here is a quote from an article by John Nagle (nagle@well.sf.ca.us) that
passed by here a while ago.  I'm sorry that I've lost the message ID.
(I'm retyping this from hardcopy -- sorry about any mistakes.)

>     It's been thirty-two years since Samuels' checkers program, the first
>major success of AI.  And yet we still can't build something with the
>competence of an ant brain in dealing with the real world.  This is
>discouraging.

>     It's encouraging that this is now recognized as a problem.  Brooks,
>Connell, Maes, and others are working on artificial insects.  The level
>of insect competence demonstrated to date is still rather low.

>     There is a bit of hubris in trying to address human-level intelligence
>from our present level of ignorance.  We now understand that just getting an
>ant through a minute of life is hard.  Walking over rough ground is hard.
>Avoiding obstacles is hard.  Picking up things is hard.  Piling things up
>is hard.  General ant level competence is very hard.

Let's look at what ant level intelligence is.  First, think of the
spider-like robots being created at MIT by Brooks et al.  They seem
to be a success at navigating over rough ground, etc., etc.  So what
level of intelligence is this?  To me, this is something that most
humans would do without even thinking about it under most circumstances
(that is, unless the terrain were extremely rough).

There would seem to me to be a part of the brain that has
responsibility for this type of lower level activity.  My guess is that
it is more closely linked to the central nervous system, and we take it
for granted when thinking about intelligence.  But without it, the more
cognitive parts of our brain would not be able to function.

So what of ants and spiders?  I would guess that their brains consist
of mostly this type of lower level intelligence.  Little of their
brains would be devoted to the higher level functions, such as social
interaction.  (However, as cellular automata show, simple rules can
lead to very complex behavior.)
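
(As a purely illustrative aside: the cellular-automata point is easy to
demonstrate.  The short Python sketch below, written for this note rather
than taken from any of the robots discussed here, runs Wolfram's elementary
rule 110, whose update rule is a table of just eight entries yet whose
output is strikingly intricate.)

# Illustrative sketch only: an elementary one-dimensional cellular automaton.
# The entire "rule" is eight bits, but the global behaviour is very complex.

RULE = 110   # the 8-bit rule number encodes the outcome for each neighbourhood

def step(cells):
    """Apply the rule once to a row of 0/1 cells, wrapping at the edges."""
    n = len(cells)
    return [(RULE >> (4 * cells[(i - 1) % n] +
                      2 * cells[i] +
                      cells[(i + 1) % n])) & 1
            for i in range(n)]

row = [0] * 63 + [1]              # start from a single live cell
for _ in range(30):               # print thirty generations
    print(''.join('#' if c else '.' for c in row))
    row = step(row)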

I would guess that we are composed of a hierarchy of intelligence
levels.  There are probably well-defined communication links between
each level, and each level responds accordingly to the communications
it receives from other levels.
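
(Again purely as an illustration, with every name invented for the purpose:
one way to picture such a hierarchy in code is a chain of levels joined by
one well-defined link each, where a level answers the messages it knows
about and passes everything else upward.  A minimal Python sketch:)

# Illustrative sketch only; the level names and messages are invented.
# Each level handles what it can and forwards the rest over its one
# well-defined link to the level above.

class Level:
    def __init__(self, name, handles, superior=None):
        self.name = name
        self.handles = set(handles)   # message types this level can deal with
        self.superior = superior      # the next level up, if any

    def respond(self, message):
        if message in self.handles:
            return "%s handles '%s'" % (self.name, message)
        if self.superior is not None:
            return self.superior.respond(message)   # pass it up the hierarchy
        return "nothing handles '%s'" % message

# A toy three-level stack: reflexes at the bottom, planning at the top.
planning   = Level("planning",   ["route blocked", "novel object"])
locomotion = Level("locomotion", ["rough ground", "steep slope"], planning)
reflexes   = Level("reflexes",   ["leg slipping", "obstacle ahead"], locomotion)

for msg in ["leg slipping", "rough ground", "route blocked"]:
    print(reflexes.respond(msg))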

Well, those are my thoughts for now ...
-- 
Chris Dailey   dailey@(frith.egr|cps).msu.edu            
    __  __  ___    | Software Engineer Wanna-be studying compiler design &
 __/  \/  \/ __:>- | implementation. Temporarily residing in Software
 \__/\__/\__/      | Engineering Lab, Engineering Bldg., MSU Campus -- cps424

nagle@well.sf.ca.us (John Nagle) (03/17/91)

dailey@cpsin3.cps.msu.edu (Chris Dailey) writes:

>I would guess that we are composed of a hierarchy of intelligence
>levels.  There are probably well-defined communication links between
>each level, and each level responds accordingly to the communications
>it receives from other levels.

     Minsky calls this the "A-brains and B-brains" theory in his
"Society of Mind".  It's a conjecture at this point.  Sensory
data is not necessarily filtered through autonomous lower levels.
In an evolved system, there's no real reason to expect clean
interfaces and structured design, and good reason not to expect it.
But it's a useful way to think about the problem.

     It's worth recognizing that the "higher levels" only have to
deal with the things the lower levels get wrong.  More on the
implications of this later.
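
(A tiny invented illustration of that point, in Python again: if the lower
level copes with most of the input stream on its own, the higher level's
workload is only the lower level's error rate, not the full sensory
bandwidth.)

# Illustrative sketch only (all names invented): the higher level is
# consulted only for the cases the lower level gets wrong.

def reflex_step(terrain):
    """Cheap low-level controller: copes with easy terrain, fails otherwise."""
    return terrain in ("flat", "gentle slope")

def deliberate(terrain):
    """Expensive high-level deliberation, invoked only on reflex failure."""
    return "replanning around %s" % terrain

terrains = ["flat", "flat", "gentle slope", "boulder field",
            "flat", "crevasse", "gentle slope", "flat"]

escalations = 0
for t in terrains:
    if not reflex_step(t):        # the lower level got this one wrong
        escalations += 1
        print(deliberate(t))
print("higher level consulted on %d of %d steps" % (escalations, len(terrains)))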

     One major reason for working bottom-up is that there is a
strong possibility that the "upper layers" (or the "more recently
evolved layers") use many of the same components evolved to make
the lower levels work.  The rapidity with which evolution proceeded
from the lowest mammals to the highest to date indicates this is 
a likely hypothesis.  So bottom-up work may lead to development of
enough of a parts catalog that building the higher levels can be
tackled with better options than we face at our present level of
ignorance.

					John Nagle