[sci.psychology] Reference needed

erich@eecs.cs.pdx.edu (Erich Boleyn) (09/06/90)

Hi,

   I am searching for the reference to something I heard a long time ago
about an experiment that was conducted on young children concerning pattern
matching and generalizing skills.

   From what I remember, it was done with sets of colored blocks of
various shapes, and the children were given feedback on how strings of
these blocks were formed (syntactically).  It was based on the English
language, and although many of them became quite good at it, none of them
made the connection that they were using a derivative of English (apparently
it was very similar).

   I would appreciate anything on this (send e-mail to: erich@eecs.ee.pdx.edu)
and thanks.

   Erich

   ___--Erich S. Boleyn--___  CSNET/INTERNET:  erich@cs.pdx.edu
  {Portland State University}     ARPANET:     erich%cs.pdx.edu@relay.cs.net
       "A year spent in           BITNET:      a0eb@psuorvm.bitnet
      artificial intelligence is enough to make one believe in God"

brat@c3pe.C3.COM (John Whitten) (09/13/90)

Hello... this is the first time I've posted to USENET, so if I get it wrong don't
shout TOO loud :-)

I have been interested in Robotics and Artificial Intelligence for many years
and have followed discussions concerning Neural Networking and Parallelism
rather closely... I firmly believe that machines can be made that could be
considered 'intelligent' ... as an aside, one of my pet peeves is that folks
are fond of saying that machines can never be intelligent because [insert
excuse]... they never seem to remember that our ONLY definition of intelligence
includes ourselves, and that we assert ourselves to BE intelligent and not
merely a clever trick of nature... indeed, I believe it could be argued quite
successfully that we are NOT intelligent... (end of tangent)

Anyway, getting to the point, I believe that intelligence requires 'significant
complexity' (heheh define THAT!) and that many of the connections must be
faulty or malformed (or at the very least different) for a 'personality' to
form. Sorta like the grain of sand that irritates the oyster... and that while
'raw hardware' is an important part, it is the underlying STRUCTURE that forms
the basic framework upon which an intelligence hangs... 

Experience is also a part. One of the most fundamental experiences is the
discovery that 'I' (an intelligent entity) am not alone and that 'you' exist.
I firmly believe that intelligence cannot form in isolation: something that is
totally isolated has no use for intelligence, because intelligence is a tool
used to interact successfully with others in 'our' universe.

The third major condition involves the ability to perceive and affect the
environment. Without this ability, the boundaries of where 'I' end and 'you'
begin cannot be established.

With this laid out as my underlying philosophy, my search for intelligence has
led me to very large interconnected massively parallel systems (and Neural
Networking as an offshoot). One of my first beliefs is that a successful
system will not be constructed from only one type of network; in fact, it
takes the concerted effort of many different types of networks working
together as well as independently to process all the different types of
information that must be sorted and evaluated.

Another question concerns the initial 'seed' (or grain of sand) that kicks
the whole process off... I believe that every intelligent system begins life
with a sort of 'Random number' (for lack of a better word) which becomes the
essence of that entity... and that given identical hardware and environments,
different 'seed points' will develop into radically different personalities.
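
To make that concrete, here is a toy sketch (in Python, which is obviously
not what you'd run on a massively parallel machine, and the little two-layer
XOR net is invented purely for illustration): two nets identical in every
respect except their initial random seed, trained on the same data, end up
with noticeably different internal weights.

    # Two "identical" nets, same data, same training; only the seed differs.
    import numpy as np

    def train_tiny_net(seed, steps=2000, lr=0.5):
        rng = np.random.default_rng(seed)
        # 2 inputs -> 3 hidden units -> 1 output, learning XOR
        W1 = rng.normal(size=(2, 3))
        W2 = rng.normal(size=(3, 1))
        X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
        y = np.array([[0], [1], [1], [0]], dtype=float)
        for _ in range(steps):
            h = np.tanh(X @ W1)
            out = 1.0 / (1.0 + np.exp(-(h @ W2)))
            d_out = (out - y) * out * (1 - out)     # gradient at the output
            d_hid = (d_out @ W2.T) * (1 - h ** 2)   # backpropagated to hidden
            W2 -= lr * h.T @ d_out
            W1 -= lr * X.T @ d_hid
        return W1

    print(np.round(train_tiny_net(seed=1), 2))
    print(np.round(train_tiny_net(seed=2), 2))  # same task, different result

Both nets solve the same problem; the difference that persists between them
comes entirely from where each one happened to start.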

This leads to both 'Network Programming' and 'Network Topographies'. Massively
parallel systems of any type may be said to have a topography which describes
the state of the system at any given moment. It is only the INITIAL state
that I am presently concerned with... the question I posed to myself was
'supposing that all the physical limitations were removed and the hardware
designs worked out and a system built, how would you program the damned thing?'

Neural Networks are funny beasts... seems you hafta TRAIN them to do what you
want (Hmmm sounds interesting)... but the catch is, to see an increase in
their performance and learning rates, you must initialize them with values that
are closer to the optimal values... There seems to be a catch-22 forming
here... how can you initialize them with values that approximate good
performance on problems yet to be encountered? This seems to be where the real
intelligence part comes in.
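
The effect behind that catch is easy to show with a toy example (Python
again, and the quadratic 'loss' below is just a stand-in for whatever error
a real network would be minimizing): a gradient search started near the
optimum converges in fewer steps than one started from an arbitrary point,
but 'near the optimum' presumes you already know the answer, which is
exactly the catch-22.

    # Gradient descent on a toy quadratic loss: count steps to convergence.
    import numpy as np

    def steps_to_converge(w_init, target, lr=0.1, tol=1e-3, max_steps=10000):
        w = np.array(w_init, dtype=float)
        for step in range(1, max_steps + 1):
            grad = 2 * (w - target)          # gradient of |w - target|^2
            w -= lr * grad
            if np.linalg.norm(w - target) < tol:
                return step
        return max_steps

    target = np.array([3.0, -2.0])           # the "optimal values"
    rng = np.random.default_rng(0)
    random_w = rng.normal(scale=10.0, size=2)
    print("random start:  ", steps_to_converge(random_w, target))
    print("informed start:", steps_to_converge([2.9, -1.9], target))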

Fractals. This is where I am now. Dunno if this is a blind alley or what. From
what I understand of fractals, it seems the perfect programming language for
Neural Networks... the idea of describing a topography using fractal-based
means is intriguing to me... this would allow networks to 'pass on' the
benefits (and setbacks) of their experience(s) to future generations... this
would in turn tend to 'tweak' the values over time/generations to those that
would more closely approximate the actual values without hard-coding them into
the system. In this manner, fractals could be used to simulate the built-in
routines (or instincts) that we humans (and other similar life forms) possess.
The rest of course is up to the network.
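
To show roughly what I mean by 'fractal-based means' (and this is only a
hand-waving sketch, not a worked-out scheme; the Kronecker tiling below is
just one simple self-similar expansion rule picked for illustration): a
small heritable 'genome' of a few numbers unfolds into a full matrix of
initial weights, so each generation can pass on and mutate the small code
rather than the whole matrix.

    # A 4-number "genome" unfolds, self-similarly, into a full matrix of
    # initial weights; offspring inherit a mutated genome, not the matrix.
    import numpy as np

    def expand(genome, depth):
        block = np.array(genome, dtype=float).reshape(2, 2)
        W = block
        for _ in range(depth - 1):
            W = np.kron(W, block)   # each cell becomes a scaled copy of the block
        return W

    genome = [0.6, -0.3, 0.1, 0.9]           # the compact, heritable part
    init_weights = expand(genome, depth=4)   # expands to a 16 x 16 matrix
    print(init_weights.shape)

    rng = np.random.default_rng(1)
    child_genome = [g + rng.normal(scale=0.05) for g in genome]   # next generation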

I seem to be getting to the rambling stage so perhaps I'll quit while I'm
ahead (thus giving me a better chance of outdistancing the flames :-)  )

I don't know how to make those cute little signatures at the bottom so I'll
just state for the record that my opinions are strictly my own and do not
in any way constitute the opinions of anyone else.

   Brat Wizard [aka John Whitten - in real life]