[comp.ai.neural-nets] intrinsic vs. extrinsic rules

dar@telesoft.com (David Reisner) (03/15/90)

In article <7181@gelatinosa.ai.mit.edu>, miken@wheaties.ai.mit.edu (Michael N. Nitabach) writes:
>
> development of intelligence depends on *both* interactions with the environ-
> ment, and *many* innate structures of the mind.  Why does radical behaviorism
> continue to exert such a seductive pull on workers in fields other than
> psychology?

As noted in the earlier posting...

> In article <5209@ccncsu.ColoState.EDU>, ld231782@longs.LANCE.ColoState.Edu
> (Lawrence Detweiler) writes:
>
> This is a hopeful sentiment for the future of machine thought.

It is an obviously self-serving view. If you WANT machine thought to be
possible or to be able to replace human thought (because you like the idea,
it would be useful, it's interesting, or you get along better with machines
than with people), then this viewpoint makes things easier.  I think this
sort of view would be less common if more people were familiar with the
behaviour of other species, and could therefore see human behaviour
through somewhat different eyes.

-David

ld231782@longs.LANCE.ColoState.EDU (Lawrence Detweiler) (03/22/90)

There seems to be enough public interest in this thread to justify
another posting.

miken@wheaties.ai.mit.edu (Michael N. Nitabach) in
<7181@gelatinosa.ai.mit.edu>:

>>In short, it may be possible to have intelligence with nothing but
>           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
>>simple rules like Hebb's that depend only on input.  This is a hopeful
> ^^^^^^^^^^^^^^^^^^^^^^^^
>>sentiment for the future of machine thought.
>
>I find it a depressing "sentiment" that workers in the area of AI can still
>maintain such absolutely behavioristic positions.  It is clear that the
>development of intelligence depends on *both* interactions with the environ-
>ment, and *many* innate structures of the mind.
[...]
>Virtually all psychologists in academia have rightly discarded
>pure behaviorism--the position that the *only* innate capacities necessary
>for the achievement of intelligence are stimulus-stimulus and action-
>consequence association--as a viable view of cognitive development.
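
(The "simple rules like Hebb's that depend only on input" being debated
above can be sketched in a few lines.  This is an illustrative modern
rendering, not code from any of the posts; the function name and the
learning rate are invented for the example.)

```python
def hebbian_update(weights, pre, post, lr=0.1):
    """Hebb's rule: strengthen the connection w[i][j] whenever
    pre-synaptic unit i and post-synaptic unit j fire together.
    The update depends only on local activity -- no teacher,
    no global error signal."""
    return [[w + lr * pre[i] * post[j] for j, w in enumerate(row)]
            for i, row in enumerate(weights)]
```

Note that the rule is purely correlational: co-active units are linked,
and nothing else about the architecture is assumed -- which is exactly
the point of contention in this thread.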


It is clear that some combination of mechanisms in the brain allows it
to think.  My speculations apply to intelligence outside of the brain,
whereas behaviorism makes assertions about its role INSIDE (the last
time I checked, psychology was talking about humans).  Admittedly I
have cited biological examples in my favor suggestive of an
environmentally dominant role in the development of intelligent behavior
(for example, the development of the cat's visual system).  However,
even if they are wholly inappropriate and misapplied, that does not
condemn the speculation, because its central idea transcends them.

(I find it depressing that these views could be misconstrued to be
associated with behaviorism and furthermore that conventional wisdom has
proclaimed behaviorism dead.  I have no personal affinity for nor
adherence to this ideology, but I see a distressing trend and tendency
in conventional thought to ridicule any innocent prod to think by
identifying it with some other supposedly manifestly erroneous view.)

cdh@praxis.co.uk (Chris Hayward) in <4972@newton.praxis.co.uk>:
>As babies become humans (a long process :-) their behaviour expands,
>they build on acquired knowledge and world-understanding. But that
>behaviour is still directed towards being rested (if you don't believe
>this, try staying awake for a couple of days!), fed, comfortable, and
>then intellectually satisfied. 

Let us suppose that the satisfaction of curiosity is the most important
of these, because it is the rule that governs useful emerging behavior
beyond the rudimentary ones.

>The key point, I think, is that if we include *curiosity* as a primary
>motivator, everything else falls into place. It provides the bootstrap.
>
>But how will we put it into a neural-network?

What if curiosity is a natural consequence of the wiring?  It is hard to
imagine any global overseeing mechanism that decides what is interesting
and what isn't.  Because curiosity is arguably the most important aspect
of learning, my suggestion that intelligence arises naturally from the
interactions of electricity (stimuli) and wires (the brain) does not
seem so radical.

dar@telesoft.com (David Reisner) in <744@telesoft.com>:

>It is an obviously self-serving view. If you WANT machine thought to be
>possible or to be able to replace human thought (because you like the idea,
>it would be useful, it's interesting, or you get along better with machines
>than with people), then this viewpoint makes things easier.

Humans think, humans are machines, therefore machines think.  My views
naturally assume that the duplication of thought outside the brain is
feasible (presumably that is the very reason for the existence of this
forum!).  It is the view that our intelligence is sacred that is the
truly self-serving one!  To suppose that some other entity is capable of
duplicating humankind's main claim to fame is as disheartening as to
think we descended from apes!  Then again, imitation is the sincerest
form of flattery.

bill@boulder.Colorado.EDU (03/22/90)

In article <5423@ccncsu.ColoState.EDU> ld231782@longs.LANCE.ColoState.EDU 
(Lawrence Detweiler) writes:
>
>What if curiosity is a natural consequence of the wiring?  It is hard to
>imagine any global overseeing mechanism that decides what is interesting
>and what isn't.  

  It's not so hard to imagine, at least in a general way.  All you really
need is a novelty detector:  a mechanism that compares the current input
with the contents of memory, and, if nothing like the current input has
ever been seen, generates an impulse to investigate it.  Kohonen has
described a simple sort of neural-network novelty detector, and there
is a good deal of evidence for novelty-detection mechanisms in the
brain.
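
(The novelty detector described above -- compare the current input with
memory, and flag it if nothing similar has been seen -- can be sketched
directly.  This is a simple similarity-threshold version, not Kohonen's
actual novelty filter; the class name and threshold are invented for
the example.)

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors (0.0 if either is zero)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

class NoveltyDetector:
    """Flags an input as novel when no stored pattern resembles it."""
    def __init__(self, threshold=0.9):
        self.memory = []
        self.threshold = threshold

    def observe(self, pattern):
        # Novel if every stored pattern is below the similarity
        # threshold; an empty memory makes the first input novel.
        novel = all(cosine(pattern, m) < self.threshold
                    for m in self.memory)
        self.memory.append(pattern)
        return novel
```

The "impulse to investigate" would then just be whatever action the
system takes whenever `observe` returns true.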

>                 Because curiosity is arguably the most important aspect
>of learning, my suggestion that intelligence arises naturally from the
>interactions of electricity (stimuli) and wires (the brain) does not
>seem so radical.

  The more we learn about brains, the clearer it becomes that human
intelligence is the result of a complex and laboriously designed (by
evolution) architecture.  The brain, even the neocortex, is by no means
a homogeneous net in which everything is connected to everything
else.  The local details of connections may (and probably do) result
from experience, but the broad global organization of the net is
preprogrammed.
  I suppose it is theoretically possible that merely hooking up a 
sufficiently massive glob of wires will somehow give rise to intelligence,
but I see no reason to believe it.

	-- Bill Skaggs

miken@wheaties.ai.mit.edu (Michael N. Nitabach) (03/23/90)

ld231782@longs.LANCE.ColoState.EDU (Lawrence Detweiler) says in article
<5423@ccncsu.ColoState.EDU>:

>>>In short, it may be possible to have intelligence with nothing but
>>           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
>>>simple rules like Hebb's that depend only on input.  This is a hopeful
>> ^^^^^^^^^^^^^^^^^^^^^^^^
>>>sentiment for the future of machine thought.
>>
>>I find it a depressing "sentiment" that workers in the area of AI can still
>>maintain such absolutely behavioristic positions.

>(I find it depressing that these views could be misconstrued to be
>associated with behaviorism and furthermore that conventional wisdom has
>proclaimed behaviorism dead.  I have no personal affinity for nor
>adherence to this ideology, but I see a distressing trend and tendency
>in conventional thought to ridicule any innocent prod to think by
>identifying it with some other supposedly manifestly erroneous view.)

Why can't you see the historical roots of your point of view?  It is
clear that the essence of your claim--that intelligence can arise in an
organism that is born with nothing but some associative rule--is identical
with the major claim of behaviorism.  I think it is very sad that many
of today's intellectuals have neither knowledge of nor interest in the
historical antecedents of their work.  It is folly to think that because
an old idea has been clothed in some new terminology or adopted by some
new discipline, it will not retain all of its already well understood
limitations.  And we already know that "nothing but simple rules like
Hebb's" will not suffice to explain the behavior of even the simplest
organisms.
--Mike Nitabach