[sci.psychology] Perceptron limitations...

ssingh@watserv1.waterloo.edu (Sneaky Sanj ;-) (04/03/91)

In article <1991Apr2.182825.4500@grebyn.com> fi@grebyn.com (Fiona Oceanstar) writes:
>Cameron Shelley writes:
>>Not only do our brains contain more than some minimum of
>>neural cells, but the cells come in many kinds and similar ones tend to
>>group themselves together.  The groups then tend to take on different
>>functions.  This kind of diversity is apparently part of what makes
>>`mind' possible.  Since the human brain is our best (understood) example
>>for mind, the factor of morphological diversity should at least be taken
>>into account.

Physical implementation aside, the bottom line is that they are still
devices that can be modelled by finite automata (or is this the
continuous vs. discrete argument again? ;-). The open question is
whether, if you string enough of them together, a capacity for more
complex computation arises that is not present in less complex
networks in any form, and is irreducible to any one of the elements.

With my convoluted understanding of neural nets, _Perceptrons_ is the
only book I know of that attempts to address this. I was just pondering
the notion that language is possible in humans because the capacity for
abstraction that underlies language can only be implemented by a
sufficiently complex brain. Do Minsky's results have any relevance here?
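
Here's a toy illustration of the point, for the curious (Python; the
weights, thresholds, and search grid are my own choices, nothing
canonical): Minsky and Papert's classic example is that no single
threshold unit can compute XOR, while two layers of the very same units
can. Stringing a few together buys a computation that no one element
performs and that doesn't reduce to any of them.

def unit(w1, w2, theta, x1, x2):
    # a McCulloch-Pitts style threshold unit
    return 1 if w1*x1 + w2*x2 >= theta else 0

cases = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]   # x1, x2, XOR

# Brute-force search: no single unit gets all four cases right,
# because XOR is not linearly separable.
grid = [i / 4.0 for i in range(-8, 9)]
found = any(all(unit(w1, w2, t, x1, x2) == y for x1, x2, y in cases)
            for w1 in grid for w2 in grid for t in grid)
print("single unit computes XOR:", found)    # -> False

# Two layers of the *same* units manage it easily:
def xor_net(x1, x2):
    h1 = unit(1, 1, 0.5, x1, x2)       # OR
    h2 = unit(1, 1, 1.5, x1, x2)       # AND
    return unit(1, -2, 0.5, h1, h2)    # OR and-not AND = XOR

print("two-layer net computes XOR:",
      all(xor_net(x1, x2) == y for x1, x2, y in cases))   # -> True

Whether anything like this scales up to the capacity for abstraction
behind language is, of course, exactly the open question.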

>[...]  And I do agree with Cameron: models that view
>the brain as homogeneous are hard for me to make heads or tails of--because
>the brain is so highly structured, so complex in three dimensions.

I agree that the brain is highly structured, but it would be a mistake to
immediately trash models that abstract the brain as homogeneous. Modelling
the brain this way allows for a generality that dwelling on the connectivity
of the hippocampus does not. Remember that we are dealing with a highly
refined and highly tweaked information processor. It makes sense from an
evolutionary standpoint to have a hard-wired link like the one outlined below:

stimulus -> iconic mem -> STM -> hippocampus -> LTM.

Presumably, links between iconic memory and STM are hardwired into place
via specialized structures that take away from the homogeneity of the brain.
Why? I would guess so that a high enough informational bandwidth can be
achieved to process information in real time. If not, we might end up as a
repast for a mean sabre-toothed tiger! :-) Mind you, the specialized
structure of the hippocampus serves a different function, but it may need
to be "optimized" in an analogous fashion.

So anyhow, I maintain that homogeneous models are good and convenient for
simulation and theoretical results. Domain-specific optimization is best
left to field-testing. And homogeneous models are able to achieve
functional equivalence to more specialized models, even if real-world
implementations are not as effective.
 
>[neat story about monkeys deleted]

>And they say only humans have language.
>
>							--Fiona O.

Your example seemed to imply that it was very much like classical
conditioning, where a certain stimulus led to a certain response.

This is not language. These monkeys most likely do not have the
ability to communicate the symbol for "eagle" or "snake" without
actually seeing, or believing they see, such a thing; i.e., stimulus
-> response.

That's the difference. I can type "snake" and you know what I'm
talking about. I don't have to physically bring you a snake or
appeal to sense data to communicate the concept of a snake. I doubt
that those monkeys could do that.

Sanjay Singh never existed... There was only...
Ice.

-- 
"No one had the guts... until now!"  
$anjay $ingh     Fire & "Ice"     ssingh@watserv1.[u]waterloo.{edu|cdn}/[ca]
ROBOTRON Hi-Score: 20 Million Points | A new level of (in)human throughput...
!blade_runner!terminator!terminator_II_judgement_day!watmath!watserv1!ssingh!

cpshelley@violet.uwaterloo.ca (cameron shelley) (04/03/91)

In article <1991Apr2.214606.16223@watserv1.waterloo.edu> ssingh@watserv1.waterloo.edu (Sneaky Sanj ;-) writes:
[...]
>Physical implementation aside, the bottom line is that they are still
>devices that can be modelled by finite automata (or is this the
>continuous vs. discrete argument again? ;-). The open question is
>whether, if you string enough of them together, a capacity for more
>complex computation arises that is not present in less complex
>networks in any form, and is irreducible to any one of the elements.

My point was only that "physical implementation aside" itself is begging
a question.  I don't see anything wrong with that provided it is 
acknowledged.  But phrases like "if you string enough of them together"
would indicate you aren't intending to address structure seriously,
which I think would be a mistake.  If it were only a numbers game, then
we might expect brains to be far less differentiated than they are.
Since morphological diversity is used in implementing real minds (as
opposed to `vapour' ones), why ignore it?

While it is true that a large Turing machine can functionally imitate
a smaller, more sophisticated machine, this ignores a lot of operational
overhead involved in control and coordination.  This `meta-structure'
may be important to `mind'; even the distribution of this complexity
could have a critical impact -- how do you know different?  The answer
might well be: "none of that matters much", but it would be *nice* to
know the reasons...
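
To make the overhead concrete, a crude sketch (Python; the one-instruction
"program" encoding is invented for the example, and this is nothing like a
real Turing machine): a general dispatcher can imitate a specialized
routine exactly, but pays a control-and-coordination cost on every step.

import time

def specialized(n):
    # the "hard-wired" structure: sums 0..n-1 directly
    total = 0
    for i in range(n):
        total += i
    return total

PROGRAM = [("add_i", None)]   # a one-instruction program for the same job

def general(program, n):
    # a crude dispatcher imitating the routine above, one opcode at a time
    total = 0
    for i in range(n):
        for op, _ in program:      # interpretive overhead on every step
            if op == "add_i":
                total += i
    return total

n = 1000000
t0 = time.perf_counter(); a = specialized(n); t1 = time.perf_counter()
b = general(PROGRAM, n);                      t2 = time.perf_counter()
print(a == b)                  # same answer...
print(t1 - t0, "vs", t2 - t1)  # ...but a very different cost

Same answer, extra machinery -- whether the analogous overhead in brains
"matters much" is exactly the question.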

>With my convoluted understanding of neural nets, _Perceptrons_ is the
>only book I know of that attempts to address this. I was just pondering
>the notion that language is possible in humans because the capacity for
>abstraction that underlies language can only be implemented by a
>sufficiently complex brain. Do Minsky's results have any relevance here?

Well, I can't speak for Minsky, but I wonder why dynamic structure (or
what I called "operational" above) is so ignorable in favour of static
structure -- or `selectively' ignorable.  Rather than viewing "language"
as implicit in a fixed brain, try looking at it as a process of
communication -- maybe both!  Parts of the brain are built to support
things like language use; there might be more reason than you suspect,
but you'll never know if you don't look.

Btw, I'm not claiming I have an answer here, only a legitimate question.

>I agree that the brain is highly structured, but it would be a mistake to
>immediately trash models that abstract the brain as homogeneous. Modelling
>the brain this way allows for a generality that dwelling on the connectivity
>of the hippocampus does not.

I didn't say you should trash your model, and it would be bad to do so.
It might also be premature to claim that your abstraction represents
what you think it does without convincing argument.  All I've seen
in connectionist literature (admittedly not a whole lot) is something
like "brains are parallel, neural nets are parallel, ergo neural nets
are brains (kinda sorta)".  Even with the usual caveats, I take this
with a grain of salt.

>Remember that we are dealing with a highly
>refined and highly tweaked information processor. 

When the "tweaking" has been established as trivial, you will have less
of a problem.

[...]
>So anyhow, I maintain that homogeneous models are good and convenient for
>simulation and theoretical results. Domain-specific optimization is best
>left to field-testing. And homogeneous models are able to achieve
>functional equivalence to more specialized models, even if real-world
>implementations are not as effective.

Suppositions for the purposes of study are fine; treating them as given
is not, I think, doing the subject justice.

--
      Cameron Shelley        | "Belladonna, n.  In Italian a beautiful lady;
cpshelley@violet.waterloo.edu|  in English a deadly poison.  A striking example
    Davis Centre Rm 2136     |  of the essential identity of the two tongues."
 Phone (519) 885-1211 x3390  |				Ambrose Bierce