[net.misc] AI ramblings

group6 (12/03/82)

   Ever since I read D. R. Hofstadter's "Gödel, Escher, Bach: an Eternal
Golden Braid" I have held the opinion that many researchers in Artificial
Intelligence, at least those trying to mimic human intelligence, are going
about it in the wrong way.  Instead of trying to make super-smart chess
programs or other "expert" programs, I believe we need to examine the
hardware that we are trying to copy, instead of trying to imitate it with
hardware designed for tasks
that are *completely* different.  I don't know of too many people whose brain
is much like a PDP-10, even if it runs an "intelligent" operating system.
Because the architecture of the human brain is so different from the
architecture of today's conventional computers, it is understandably
difficult to get them to do anything even remotely approximating human
intelligence.

   I believe an understanding of some important aspects of thought and reasoning
is to be gained by modelling the mind with several n-illion similar (but *not*
identical) circuits (call them "newrons"), connected in a random network.
Some of the circuits should be connected to the external world, so that our
"mind" can do more than contemplate itself.  After this electronic tangle has
been allowed to run for a while (?) we may observe the development of some kind
of behavior.
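A modern sketch of the proposed experiment (every parameter here, from the
network size and fan-in to the thresholds and the external stimulus, is a
hypothetical choice of mine, not anything from the post): each "newron" is a
simple threshold unit with randomly chosen inputs and weights, a few units
are driven by the outside world, and the whole tangle is updated
synchronously to see what activity develops.

```python
import random

random.seed(42)  # reproducible wiring

N = 100          # number of "newrons" (arbitrary)
FAN_IN = 5       # random inputs per unit (arbitrary)

# Each unit gets FAN_IN randomly chosen input units with random weights,
# plus a random firing threshold: similar but *not* identical circuits.
wiring = [[(random.randrange(N), random.uniform(-1, 1)) for _ in range(FAN_IN)]
          for _ in range(N)]
thresholds = [random.uniform(0, 1) for _ in range(N)]

# A few units are "sensors" connected to the external world.
sensors = list(range(10))

def step(state, external):
    """One synchronous update of the whole tangle."""
    new = []
    for i in range(N):
        if i in sensors:
            new.append(external[i])  # sensors just copy the outside world
            continue
        total = sum(w * state[j] for j, w in wiring[i])
        new.append(1 if total >= thresholds[i] else 0)
    return new

state = [random.randint(0, 1) for _ in range(N)]
for t in range(50):
    stimulus = [1 if t % 7 == 0 else 0] * N  # an arbitrary periodic input
    state = step(state, stimulus)

print(sum(state), "units active after 50 steps")
```

Whether anything recognizable as "behavior" emerges from such a net, rather
than noise or a frozen fixed point, is exactly the open question the post
raises.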

   This experiment is similar to the "primordial soup" study done a while back
by Stanley Miller, in which the supposed mixture of compounds present in the early
atmosphere and oceans was subjected to radiation and electrical discharges for
an extended period.  After a few weeks, the apparatus contained several of the
simpler amino acids used in all known life forms.

   Any comment?  Has this type of thing been tried before?  Am I beating a dead
cow?  Am I way off base?

				-Dave Decot
				-...!decvax!cwruecmp!group6

(By the way, how about a new group net.ai, net.psychology, net.??? for this
kind of discussion?)

soreff (12/06/82)

Self organising systems have been tried in AI, and by and large haven't worked.
There was a great deal of work on "Perceptrons" in the 50's and 60's, growing
out of McCulloch and Pitts's 1943 work on neurons. It turned out that the
networks used had some intrinsic problems (Minsky and Papert proved some
theorems which put bounds on the kinds of patterns such networks could
recognize, bounds below human performance). There is also a lot of evidence
that neural systems start out with a lot of structure, so random nets are
probably not the way to go.			-Jeffrey Soreff
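The Minsky/Papert bounds can be seen concretely with the parity (XOR)
function: no single-layer perceptron can compute it, because its outputs are
not linearly separable. A minimal sketch (the training loop and its
parameters are my own illustration, not from any of the posts) trains a
single threshold unit with the Rosenblatt learning rule on AND, which it
learns, and on XOR, which it never can:

```python
# Rosenblatt perceptron learning rule: w += (target - output) * x.
def train(samples, epochs=100):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        errors = 0
        for x, target in samples:
            out = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            if out != target:
                errors += 1
                w[0] += (target - out) * x[0]
                w[1] += (target - out) * x[1]
                b += (target - out)
        if errors == 0:
            return True   # converged: a separating line exists
    return False          # no epoch was error-free

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

print("AND learnable:", train(AND))   # True
print("XOR learnable:", train(XOR))   # False: not linearly separable
```

AND is linearly separable, so the perceptron convergence theorem guarantees
the loop terminates; XOR has no separating line at all, so no amount of
training helps, which is the flavor of limit Minsky and Papert proved.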

zrm (12/08/82)

While the central nervous system starts out with a lot of organisation,
that organisation is also very malleable. Cases of asymptomatic agenesis
of the corpus callosum behave like normal controls in the same sorts of
experiments where commissurotomy patients exhibit their remarkable
"split brain" behavior. Both are missing the corpus callosum, but the
subjects born without it have developed other means of
hemisphere-to-hemisphere communication. That is, while the organ
specialised to send information from one side to the other is gone, if it
was not present at birth, other parts of the brain take over its function.

The most important thing about our brains may not be their structure,
but the way in which they develop.

Cheers,
Zig

jcz (12/13/82)

References: cwruecmp.312



Reminds me of how the freshman CSC majors here at State used
to resubmit the same job over and over in the hope that
transmission errors would fix the bug.

--jcz
NCSU