[comp.ai.philosophy] Perceptron limitations...

ssingh@watserv1.waterloo.edu (Sneaky Sanj ;-) (04/02/91)

I am writing up a paper for philosophy and I would like to push the
non-reductive materialist model of the mind.

The basic idea is that an increase in quantity gives rise to a 
spontaneous and sudden change in quality.

I was wondering if it is correct to cite Minsky & Papert's _Perceptrons_
to support such a model of mind, where a human mind must be of
sufficient complexity before it can process a symbolic language like
English, and where the failure to demonstrate this capacity in lower
primates is the result of a less complex brain.

This is analogous to the way a single-layer net is unable to learn
the XOR rule, while a multi-layer one can implement it successfully.
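
In case a concrete picture helps, here is a minimal sketch in Python
(my own toy, not anything from the book; the weights are just one
hand-picked working assignment) of a two-layer threshold net computing
XOR.  No single threshold unit can compute it, since the four input
points are not linearly separable:

    def step(x):
        return 1 if x >= 0 else 0

    def xor_net(a, b):
        # Hidden layer: h1 fires on "a OR b", h2 fires on "a AND b".
        h1 = step(1*a + 1*b - 0.5)   # OR
        h2 = step(1*a + 1*b - 1.5)   # AND
        # Output: (a OR b) AND NOT (a AND b), i.e. a XOR b.
        return step(1*h1 - 2*h2 - 0.5)

    for a in (0, 1):
        for b in (0, 1):
            print(a, b, xor_net(a, b))   # agrees with a XOR b on all four inputs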

It might be that I am way off base. But regrettably, I lack the 
skill to understand the rigour of the book, so if anyone can
help me out, I would be very grateful.

Thanks in advance.

Ice. "We're all clones..."-Alice Cooper.

-- 
"No one had the guts... until now!"  
$anjay $ingh     Fire & "Ice"     ssingh@watserv1.[u]waterloo.{edu|cdn}/[ca]
ROBOTRON Hi-Score: 20 Million Points | A new level of (in)human throughput...
!blade_runner!terminator!terminator_II_judgement_day!watmath!watserv1!ssingh!

cpshelley@violet.uwaterloo.ca (cameron shelley) (04/02/91)

In article <1991Apr2.092041.9391@watserv1.waterloo.edu> ssingh@watserv1.waterloo.edu (Sneaky Sanj ;-) writes:
>I am writing up a paper for philosophy and I would like to push the
>non-reductive materialist model of the mind.
>
>The basic idea is that an increase in quantity gives rise to a 
>spontaneous and sudden change in quality.

Apparently not, if you're just talking about lumping neuron upon neuron
in a `brain'.  Not only do our brains contain more than some minimum of
neural cells, but the cells come in many kinds and similar ones tend to
group themselves together.  The groups then tend to take on different
functions.  This kind of diversity is apparently part of what makes
`mind' possible.  Since the human brain is our best (understood) example
for mind, the factor of morphological diversity should at least be taken
into account.

>I was wondering if it is correct to cite Minsky & Papert's _Perceptrons_
>to support such a model of mind, where a human mind must be of
>sufficient complexity before it can process a symbolic language like
>English, and where the failure to demonstrate this capacity in lower
>primates is the result of a less complex brain.

Slightly off topic, there is an article in a 1976 Scientific American 
about paleoneurology.  The author suggests that since our ancestors
had a very rudimentary sense of smell, they could not do things like
territory marking by scent (like wolves), so they resorted to vocalizations
(like apes do, at least when film-crews are around).  This, he claims,
might have been our first impetus to speech and thus language.  If 
you're going to start comparing us with lower primates, then you
should check out the paleoneurological work.

(The article is the first in a SA reader printed last year.  If you
want to borrow it, e-mail me and I'll bring it in...)

				Cam

--
      Cameron Shelley        | "Belladonna, n.  In Italian a beautiful lady;
cpshelley@violet.waterloo.edu|  in English a deadly poison.  A striking example
    Davis Centre Rm 2136     |  of the essential identity of the two tongues."
 Phone (519) 885-1211 x3390  |				Ambrose Bierce

fi@grebyn.com (Fiona Oceanstar) (04/03/91)

Cameron Shelley writes:
>Not only do our brains contain more than some minimum of
>neural cells, but the cells come in many kinds and similar ones tend to
>group themselves together.  The groups then tend to take on different
>functions.  This kind of diversity is apparently part of what makes
>`mind' possible.  Since the human brain is our best (understood) example
>for mind, the factor of morphological diversity should at least be taken
>into account.

It does my heart good to hear someone alluding to cells, even anatomy,
on this newsgroup.  I feel that I am not alone out here, in my pup tent
in the realm of neurobiology.  And I do agree with Cameron: models that view
the brain as homogeneous are hard for me to make heads or tails of--because
the brain is so highly structured, so complex in three dimensions.

>Slightly off topic, there is an article in a 1976 Scientific American 
>about paleoneurology.  The author suggests that since our ancestors
>had a very rudimentary sense of smell, they could not do things like
>territory marking by scent (like wolves), so they resorted to vocalizations
>(like apes do, at least when film-crews are around).  This, he claims,
>might have been our first impetus to speech and thus language.  If 
>you're going to start comparing us with lower primates, then you
>should check out the paleoneurological work.

'Makes me think of those experiments with vervet monkeys in Africa
(Seyfarth et al., _Science_ 210:801), where they found that vervet
monkeys give different alarm calls for different predators.  What they
did to crack the code: they taped the alarm calls and played them back at
different times, to figure out, from the monkeys' reactions, the meanings
of the different "words."  They found that one call caused the monkeys
to run into the trees--that would be "leopard."  One call caused them
to look up at the sky--"eagle."  And one call caused them to look at
the ground--"snake."  They noticed that while the adult monkeys called
primarily for leopards, martial eagles, and pythons, the youngsters
were more confused in their calling behavior, giving leopard alarms to
a wide variety of mammals, eagle alarms to many birds, and snake alarms
to "snake-like objects."

Can't you just imagine it?  This little monkey looks down, sees what he
thinks is a snake on the ground, and goes "Snake! Snake!" in a loud
voice.  Then some adult comes over, checks out the situation, and
discovers that it's not a snake, just a long twisty vine.  The adult
goes over to the little guy, cuffs him around a bit, and says "Don't
say 'snake' when it's not really a snake, you dummy!"

And they say only humans have language.

							--Fiona O.

ssingh@watserv1.waterloo.edu (Sneaky Sanj ;-) (04/03/91)

In article <1991Apr2.182825.4500@grebyn.com> fi@grebyn.com (Fiona Oceanstar) writes:
>Cameron Shelley writes:
>>Not only do our brains contain more than some minimum of
>>neural cells, but the cells come in many kinds and similar ones tend to
>>group themselves together.  The groups then tend to take on different
>>functions.  This kind of diversity is apparently part of what makes
>>`mind' possible.  Since the human brain is our best (understood) example
>>for mind, the factor of morphological diversity should at least be taken
>>into account.

Physical implementation aside, the bottom line is that they are still
devices that can be modelled by finite automata (or is this the
continuous vs discrete argument again ;-). The question remains whether,
if you string enough of them together, an ability for more complex
computation arises that is not present in less complex networks in any
form, and is irreducible to any one of the elements.

With my convoluted understanding of neural nets, _Perceptrons_ is the
only book I know of that attempts to address this, and I was just
pondering the notion that language is possible in humans because the
capacity for abstraction that underlies language can only be implemented
by a sufficiently complex brain. Do Minsky's results have any relevance
here?

>[...]  And I do agree with Cameron: models that view
>the brain as homogeneous are hard for me to make heads or tails of--because
>the brain is so highly structured, so complex in three dimensions.

I agree that the brain is highly structured, but it is bad to immediately
trash models that abstract the brain as homogeneous. Modelling the brain
in this way allows for a generality that dwelling on the connectivity of
the hippocampus does not. Remember that we are dealing with a highly
refined and highly tweaked information processor. It makes sense from
an evolutionary standpoint to have a hard-wired link as outlined below.

stimulus -> iconic mem -> STM -> hippocampus -> LTM.

Presumably, links between iconic memory and STM are hardwired into place
via specialized structures that take away from the homogeneity of the brain.
Why? I would guess so that a high enough informational bandwidth can be
achieved to process information in real time. If not, we might be a repast
for a mean sabre-toothed tiger! :-) Mind you, the specialized structure
of the hippocampus serves a different function, but it may need to
be "optimized" in an analogous fashion.

So anyhow, I maintain that homogeneous models are good and convenient for
simulation and theoretical results. Domain-specific optimization is best
left to field-testing. And homogeneous models are able to achieve
functional equivalence to more specialized models, even if real-world
implementations are not as effective. 
 
>[neat story about monkeys deleted]

>And they say only humans have language.
>
>							--Fiona O.

Your example seemed to imply that it was very much like classical
conditioning, where a certain stimulus leads to a certain response.

This is not language. These monkeys most likely do not have the
ability to communicate the symbol for "eagle" or "snake" without
actually seeing such a thing; i.e., stimulus -> response.

That's the difference. I can type "snake" and you know what I'm
talking about. I don't have to physically bring you a snake or
appeal to sense data to communicate the concept of a snake. I doubt
that those monkeys could do that.
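
To put the contrast in toy form (entirely my own caricature, with the
call labels invented), a stimulus-bound signaller emits a call only in
response to a present stimulus, while a symbol user can produce "snake"
with no snake anywhere in sight:

    # Hypothetical stimulus -> response table; the call names are made up.
    ALARM_CALLS = {"leopard": "bark", "eagle": "cough", "snake": "chutter"}

    def vervet(stimulus):
        # A call is emitted only in response to a present stimulus.
        return ALARM_CALLS.get(stimulus)   # None if nothing is seen

    def human(topic):
        # Displaced reference: the symbol stands in for the absent thing.
        return "Let me tell you about the " + topic + " I saw last year."

    print(vervet("snake"))   # 'chutter': pure stimulus -> response
    print(vervet(None))      # None: no stimulus, no call
    print(human("snake"))    # talks about a snake with no snake present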

Sanjay Singh never existed... There was only...
Ice.

-- 
"No one had the guts... until now!"  
$anjay $ingh     Fire & "Ice"     ssingh@watserv1.[u]waterloo.{edu|cdn}/[ca]
ROBOTRON Hi-Score: 20 Million Points | A new level of (in)human throughput...
!blade_runner!terminator!terminator_II_judgement_day!watmath!watserv1!ssingh!

cpshelley@violet.uwaterloo.ca (cameron shelley) (04/03/91)

In article <1991Apr2.214606.16223@watserv1.waterloo.edu> ssingh@watserv1.waterloo.edu (Sneaky Sanj ;-) writes:
[...]
>Physical implementation aside, the bottom line is that they are still
>devices that can be modelled by finite automata (or is this the
>continuous vs discrete argument again ;-). The question remains whether,
>if you string enough of them together, an ability for more complex
>computation arises that is not present in less complex networks in any
>form, and is irreducible to any one of the elements.

My point was only that "physical implementation aside" itself is begging
a question.  I don't see anything wrong with that provided it is 
acknowledged.  But phrases like "if you string enough of them together"
would indicate you aren't intending to address structure seriously,
which I think would be a mistake.  If it were only a numbers game, then
we might expect brains to be far less differentiated than they are.
Since morphological diversity is used in implementing real minds (as
opposed to `vapour' ones), why ignore it?

While it is true that a large Turing machine can functionally imitate
a smaller, more sophisticated machine, this ignores a lot of operational
overhead involved in control and coordination.  This `meta-structure'
may be important to `mind'; even the distribution of this complexity
could have a critical impact -- how do you know otherwise?  The answer
might well be: "none of that matters much", but it would be *nice* to
know the reasons...

>With my convoluted understanding of neural nets, _Perceptrons_ is the
>only book I know of that attempts to address this, and I was just
>pondering the notion that language is possible in humans because the
>capacity for abstraction that underlies language can only be implemented
>by a sufficiently complex brain. Do Minsky's results have any relevance
>here?

Well, I can't speak for Minsky, but I wonder why dynamic structure (or
what I called "operational" above) is so ignorable in favour of static
structure -- or `selectively' ignorable.  Rather than view "language"
as implicit in a set brain, try looking at it as a process of 
communication -- maybe both!  Parts of the brain are built to support
things like language use; there might be more reason than you suspect,
but you'll never know if you don't look.

Btw, I'm not claiming I have an answer here, only a legitimate question.

>I agree that the brain is highly structured, but it is bad to immediately
>trash models that abstract the brain as homogeneous. Modelling the brain
>in this way allows for a generality that dwelling on the connectivity of
>the hippocampus does not. 

I didn't say you should trash your model, and it would be bad to do so.
It might also be premature to claim that your abstraction represents
what you think it does without convincing argument.  All I've seen
in connectionist literature (admittedly not a whole lot) is something
like "brains are parallel, neural nets are parallel, ergo neural nets
are brains (kinda sorta)".  Even with the usual caveats, I take this
with a grain of salt.

>Remember that we are dealing with a highly
>refined and highly tweaked information processor. 

When the "tweaking" has been established as trivial, you will have less
of a problem.

[...]
>So anyhow, I maintain that homogeneous models are good and convenient for
>simulation and theoretical results. Domain-specific optimization is best
>left to field-testing. And homogeneous models are able to achieve
>functional equivalence to more specialized models, even if real-world
>implementations are not as effective. 

Suppositions for the purposes of study are fine; treating them as given
is not, I think, doing the subject justice.

--
      Cameron Shelley        | "Belladonna, n.  In Italian a beautiful lady;
cpshelley@violet.waterloo.edu|  in English a deadly poison.  A striking example
    Davis Centre Rm 2136     |  of the essential identity of the two tongues."
 Phone (519) 885-1211 x3390  |				Ambrose Bierce

erich@eecs.cs.pdx.edu (Erich Stefan Boleyn) (04/07/91)

fi@grebyn.com (Fiona Oceanstar) writes:

>It does my heart good to hear someone alluding to cells, even anatomy,
>on this newsgroup.  I feel that I am not alone out here, in my pup tent
>in the realm of neurobiology.  And I do agree with Cameron: models that view
>the brain as homogeneous are hard for me to make heads or tails of--because
>the brain is so highly structured, so complex in three dimensions.

   I am constantly surprised at the low level of knowledge about neuroscience
present in AI work, especially work with neural networks.  Having had more
formal training in neuroscience than in AI/Cogsci seems quite helpful to
me in general.  There has been an amazing amount of work on both sides that
really needs to be correlated; otherwise there will be too much not only of
reinventing the wheel, but also of missing research paths because you didn't
know they were there.

   Anyway, I agree.


   Off topic again (and not having the reference to earlier articles handy),
I remember an article referring to some work done on dreaming rats, cats, and
rabbits related to one of the ideas that you mentioned, i.e. that dreaming
appeared to be correlated with complexes of neurons that were active during
behavior characterized by theta-wave activity...  which was then correlated
to primary survival behavior in each of the species.

   It seemed that, on the low level, this was a means of activating more
permanent changes in neural structure (by the bursts of stimulation provided
in REM sleep), and that, conceptually, it serves to reinforce the
aforementioned survival behaviors.  Beyond the perceived higher-level effects
in the human mind...  (I should note that "perceived" and "actual" are very
different things)...  I wonder what other effects these states would have on
the neural level.  I am curious about this because cognitive effects
(especially at the level where they can be noticed) often seem to be a very
limited explanation of neural behavior.

   Comments please?   (or references for the more advanced kibitzers ;-)

   Erich

             "I haven't lost my mind; I know exactly where it is."
     / --  Erich Stefan Boleyn  -- \       --=> *Mad Genius wanna-be* <=--
    { Honorary Grad. Student (Math) }--> Internet E-mail: <erich@cs.pdx.edu>
     \  Portland State University  /        Phone #:  (503) 289-4635