[comp.ai] AI

litow@uwm-cs.UUCP (10/28/87)

Recently postings have focused on the topic: 'AI - success or failure'. Some
postings have been concerned with epistemological or metaphysical matters.
Other postings have taken the view that AI is a vast collection of design
problems for which much of the metaphysical worry is irrelevant. Based
upon its history and current state, it seems to me that AI is an area of
applied computer science largely aimed at design problems. I think that
AI is an unfortunate moniker because AI work is basically fuzzy programming
(more accurately the design of systems supporting fuzzier and fuzzier
programming) where the term 'fuzzy' is not being used in a pejorative sense.

All of the automation issues in AI work are support issues for really fuzzy
programming, i.e., where humans can extend the interface with automata so
that human/automata interaction becomes increasingly complex and
'undisciplined'. Thus in a large sense AI is the frontier part of software
science. It could be claimed that at some stage of extension the interface
becomes so complex (by human standards at the time) that cognition can be
ascribed to the systems. Personally I doubt this will happen. On the other
hand the free use of play-like interfaces must have unforeseeable and
gigantic consequences for humans. This is where I see the importance of AI.

I distinguish between cognitive studies and AI. The metaphysics belongs to
the former, not the latter.

vangelde@cisunx.UUCP (Timothy J Van) (07/13/88)

What with the connectionist bandwagon, everyone seems to be getting a lot
clearer about just what AI is and what sort of a picture of cognition
it embodies.  The short story, of course, is that AI claims that thought
in general and intelligence in particular are the rule-governed manipulation
of symbols.  So AI is committed to symbolic representations with a
combinatorial syntax and formal rules defined over them.  The implementation
of those rules is computation.
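As an illustrative sketch (not from the posting), the "rule-governed manipulation of symbols" picture can be made concrete as a tiny forward-chaining production system: working memory holds symbol structures, and rules are purely syntactic condition/action pairs defined over them. The symbols and rules below are invented examples; any resemblance to a particular classical system is loose.

```python
def forward_chain(facts, rules, max_cycles=100):
    """Rule-governed symbol manipulation: repeatedly fire every rule
    whose premise symbols are all in working memory, adding each
    rule's conclusion, until quiescence (no new symbols derivable).

    facts: set of symbol strings (working memory)
    rules: list of (premises, conclusion) pairs
    """
    facts = set(facts)
    for _ in range(max_cycles):
        derived = {concl for prems, concl in rules
                   if set(prems) <= facts and concl not in facts}
        if not derived:
            break  # quiescence: no rule produces a new symbol
        facts |= derived
    return facts

# Hypothetical symbols/rules for illustration only:
rules = [(["bird(tweety)"], "flies(tweety)"),
         (["flies(tweety)"], "airborne(tweety)")]
print(sorted(forward_chain({"bird(tweety)"}, rules)))
# → ['airborne(tweety)', 'bird(tweety)', 'flies(tweety)']
```

The point of the sketch is that nothing in the computation depends on what the symbols mean; the rules are defined entirely over their syntactic form, which is exactly the commitment the paragraph above describes.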

Supposedly, the standard or "classical" view in cognitive psychology is
committed to exactly the same picture in the case of human cognition, and 
so goes around devising models and experiments based on these commitments.


My question is - how much of cognitive psychology literally fits this kind
of characterization?  Some classics, for example the early Shepard and
Metzler experiments on image rotation, don't seem to fit the description
very closely at all.  Others, such as the SOAR system, often seem to
remain pretty vague about exactly how much of their symbolic machinery
they are really attributing to the human cognizer.

So, to make my question a little more concrete - I'd be interested to know
what people's favorite examples of systems that REALLY DO FIT THE 
DESCRIPTION are?  (Or any other interesting comments, of course.)

Tim van Gelder