[comp.ai] Cognitive AI vs Expert Systems

krulwich-bruce@CS.YALE.EDU (Bruce Krulwich) (06/17/88)

In article <19880615061536.5.NICK@INTERLAKEN.LCS.MIT.EDU> 
dg1v+@ANDREW.CMU.EDU (David Greene) writes:
>I'm not advocating Mr. Cockton's views, but the limited literature breadth in
>many AI papers *is* self-defeating.  For example, until very recently, few
>expert system papers acknowledged the results of 20+ years of psychology
>research on Judgement and Decision Making.

This says something about expert systems papers, not about papers discussing
serious attempts at modelling intelligence.  It is wrong to assume (as both
you and Mr. Cockton do) that the expert system work typical of the business
world (in other words, applications programs) is at all similar to the work
done by researchers investigating serious intelligence.  (See work on
case-based reasoning, explanation-based learning, expectation-based
processing, plan transformation, and constraint-based reasoning, to name a
few areas.)


Bruce Krulwich

Net-mail: krulwich@{yale.arpa, cs.yale.edu, yalecs.bitnet, yale.UUCP}

	Goal in life: to sit on a quiet beach solving math problems for a
		      quarter and soaking in the rays.   

krulwich-bruce@CS.YALE.EDU (Bruce Krulwich) (06/21/88)

In a previous post, I claimed that there were differences between people
doing "hard AI" (trying to achieve serious understanding and intelligence)
and "soft AI" (trying to achieve intelligent behavior).  

dg1v+@ANDREW.CMU.EDU (David Greene) responds:
>Since my research concerns developing knowledge acquisition approaches (via
>machine learning) to address real world environments, I'm well acquainted
>with not only the above literature, but psych, cog psych, JDM (judgement and
>decision making), and BDT (behavioral decision theory).
>
>While I suspect AI researchers who work in Expert Systems might resent being
>excluded from work in "serious intelligence", I think my point is that, for a
>given phenomenon, multiple viewpoints from different disciplines (literature)
>can provide important breadth and insights.

I agree fully, and I think you'll find this in the references section of a
lot of "hard AI" research work.  (As a matter of fact, a fair number of
researchers in "hard AI" are professors in, or have degrees in, psychology,
linguistics, etc.)  I'm sorry if my post seemed insulting -- it wasn't
intended that way.  I truly believe, however, that there are differences in
the research goals, methods, and results of the two areas.  That's not a
judgement, but it is a difference.

Bruce Krulwich

mikeb@wdl1.UUCP (Michael H. Bender) (06/22/88)

I think terms like "hard AI" and "soft AI" are potentially offensive
and imply a set of values (i.e. that some problems are of more value
than others).  Instead, I highly recommend that you use the distinctions
proposed by Jon Doyle in AI Magazine (Spring 88), in which he
distinguishes between the following (note - the short definitions are
MINE, not Doyle's):

 o	COMPUTATIONAL COMPLEXITY ANALYSIS - i.e. the search for
	explanations based on computational complexity

 o	ARTICULATING INTELLIGENCE - i.e. codifying command and expert
	knowledge.

 o	RATIONAL PSYCHOLOGY - i.e. the branch of cognitive science that
	tries to understand human thinking

 o	PSYCHOLOGICAL ENGINEERING - i.e. the development of new
	techniques for implementing human-like behaviors and
	capacities 

Note - using this demarcation, it is easier to pinpoint the different
areas in which a person is working.