[net.misc] Off Base on AI?

cbostrum (12/06/82)

I wouldn't say that you are "off base" necessarily, Dave, but one must
remember that the basic AI manifesto is to produce artificially
intelligent behavior, not to mimic human (intelligent or otherwise)
behavior. AI people are usually also committed to the idea that there
is what Newell calls a "knowledge level", that is, a higher level of
processing going on where the things being processed are "propositions",
"ideas", "concepts", or what have you. Thus simulating neurons is not
what they want to do; rather they want to find general "symbol systems"
that produce intelligent behavior.

It would be depressing (and shocking as well) to discover that there are
no general principles involved in intelligence, but that it really is
just a certain sort of collection of neurons, which can **only** be
described on a physiological level. (Actually, if it got down that far,
one would probably have to go all the way down to an elementary-particle
bedrock. Why stop sooner?)

The view that there **is** some sort of "knowledge level" relevant to
"minds" is currently the most popular view in philosophy of mind; it is
known as functionalism and is advocated by people like Hilary Putnam,
Jerry Fodor, and Daniel Dennett.

As far as simulating humans goes, there are a number of different
viewpoints. Within AI there were originally two "schools". One could be
associated with Minsky, and involved getting AI any way possible; the
other with Simon, and involved simulating the most intelligent things we
knew of (humans) in order to get AI. Thus Simon's first program, the
Logic Theorist, was criticized heavily by Hao Wang because he could
write a program that did a much better job (is using truth tables more
intelligent?). Wang missed the point of Simon's endeavor here.
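
For the flavor of Wang's side of that exchange (a toy of my own in
modern notation, not Wang's actual program, which was considerably more
sophisticated), a brute-force truth-table check of a propositional
formula takes only a few lines:

```python
from itertools import product

def is_tautology(formula, variables):
    """True iff the formula holds under every assignment to its variables."""
    return all(formula(dict(zip(variables, values)))
               for values in product([False, True], repeat=len(variables)))

# A formula of the sort the Logic Theorist proved from Principia:
# (p -> ~p) -> ~p, written with material implication a -> b as (not a) or b.
print(is_tautology(
    lambda e: (not ((not e["p"]) or (not e["p"]))) or (not e["p"]),
    ["p"]))
```

Mechanical and exhaustive, which was exactly Wang's point and exactly
not Simon's: the Logic Theorist was interesting because it searched for
proofs somewhat the way a human might, not because it decided
propositional logic efficiently.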

Although the Simonesque approach might be more appealing to modern
cognitive psychologists, I personally think that AI researchers should
be looking for essential characteristics of intelligence and attempting
to mechanize them. Of course this doesn't mean they can't use psychology
and introspection to get clues, but merely that the meaning of
"intelligent behavior" is not in any sense "human behavior".

Thr "random `newral` net" idea **has** been tried a number of times
before. I dont know much about the results, except that they didnt get
too far. Certain sorts of very simple regularities resulted, but that
was about all. Other studies have succesfully "evolved" tiny finite
automata to accept very simple languages by natural selection and
random mutation of machine tables. Patrick Suppes has shown that with
some very simple conditioning axioms one can condition a FSA to accept
an arbitrary language asymptotically.
I would appreciate any and all sorts of references on this sort of approach.
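
To make the "evolved machine tables" idea concrete, here is a toy
sketch of my own (hypothetical, not any of the actual studies): random
mutation plus selection of a two-state machine table, with an
even-parity language as the target.

```python
import random

random.seed(0)
N_STATES, ALPHABET = 2, "01"

def random_machine():
    # Machine table: next state for each (state, symbol), plus accepting states.
    table = {(q, a): random.randrange(N_STATES)
             for q in range(N_STATES) for a in ALPHABET}
    accepting = {q for q in range(N_STATES) if random.random() < 0.5}
    return table, accepting

def accepts(machine, string):
    table, accepting = machine
    state = 0
    for symbol in string:
        state = table[(state, symbol)]
    return state in accepting

def mutate(machine):
    # "Random mutation of the machine table": perturb one transition,
    # or flip one state's accept/reject status.
    table, accepting = dict(machine[0]), set(machine[1])
    if random.random() < 0.7:
        q, a = random.choice(list(table))
        table[(q, a)] = random.randrange(N_STATES)
    else:
        accepting ^= {random.randrange(N_STATES)}
    return table, accepting

# Target: a very simple regular language, strings with an even number of 1s.
samples = [format(i, "b").zfill(4) for i in range(16)]
target = lambda s: s.count("1") % 2 == 0
fitness = lambda m: sum(accepts(m, s) == target(s) for s in samples)

# "Natural selection": keep whichever of parent and mutant classifies
# more of the sample strings correctly.
best = random_machine()
for _ in range(500):
    child = mutate(best)
    if fitness(child) >= fitness(best):
        best = child
print(fitness(best), "of", len(samples), "samples classified correctly")
```

With only two states the search space is tiny, so this should converge
almost immediately; the studies I alluded to presumably differed in the
details, which is part of why I'd like the references.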

I second Dave Decot's idea of some new newsgroups for this sort of
thing. There should be a net.analytic_philosophy (analytic, to keep out
existentialists, ethical and religious philosophers, etc.), and maybe a
net.ai too.