[net.cog-eng] transition from AI to Cognitive Science

mikes@tekecs.UUCP (Michael Sellers) (09/05/86)

> I find it very interesting that there is so much excitement generated over
> parallel processing computer systems by the AI community.  Interesting in
> that the problems of AI (the intractability of language, vision, and general
> cognition, to name a few) are limited far less by computational
> power than by our lack of understanding. [...]
> 	         The field that was once A.I. is very quickly headed back to
> its origins in computer science and is producing "Expert Systems" by the
> droves.  The problem isn't that they aren't useful, but rather that they
> are being touted as the A.I., and true insights into actual human thinking
> are still rare (if not non-existent).

Inordinate amounts of hype have long been a problem in AI; the only difference 
now is that there is actually a small something there (i.e. knowledge based 
systems), so the hype is rising to truly unbelievable heights.  I don't know
that AI is returning to its roots in computer science; more likely there is
just more emphasis on the area(s) where something actually *works* right now.

> Has everybody given up?  I doubt it.  However, it seems that economic reality
> has set in.  People are forced to show practical systems with everyday
> applications.

Good points.  You should check out the book "The AI Business" by ...rats, it
escapes me (possibly Winston or McCarthy?).  I think it was published in late 
'84 or early '85, and makes the same kinds of points that you're making here,
talking about the hype, the history, and the current state of the art and the
business.

> So what is one to do?  Go into cog-psych?  At least psychologists are working
> on the fundamental problems that AI started, but many seem to be grasping at
> straws, trying to find a simple solution (e.g., family resemblance, primary
> attribute analysis, etc.)

The Grass is Always Greener.  I started out going into neurophysiology, then
switched to cog psych because the neuro research is still at a lower level than
I wanted, and then became disillusioned because all of the psych work being 
done seemed to be either super low-level or infeasible to test empirically.  
So, I started looking into computers, longing to get into the world of AI.  
Luckily, I stopped before I got to the point you are at now, and found 
something better (no, besides Amway :-)...

> What seems to be lacking is a cogent combination of theories.  Some attempts
> have been made, but these authors basically punt on the issue, stating things
> like "none of the above theories adequately explain the observed phenomena,
> perhaps the solution is a combination of current hypotheses".  Very good, now
> let's do that research and see if this is true!

And this is exactly what is happening in the new field of Cognitive Science.
While there is still no "cogent combination of theories", things are beginning
to coalesce.  (Pylyshyn described the current state of the field as physics
searching for its Newton.  Everyone agrees that the field needs a Newton to
bring it all together, and everyone thinks that he or she is probably that 
person.  The problem is, no one else agrees, except maybe one's own grad
students.)  Cog sci is still emerging as a separate field, even though its
beginnings can probably be pegged to the late '70s or early '80s.
It is taking material, paradigms, and techniques from AI, neurology, cog psych,
anthropology, linguistics, and several other fields, and forming a new field
dedicated to the study of cognition in general.  This does not mean that 
cognition should be looked at in a vacuum (as is to some degree the case with
AI), but that it can and should be examined in both natural and artificial
contexts, allowing for the difference between them.  It can and should take 
into account all types and levels of cognition, from the low-level neural
processing to the highly plastic levels of linguistic and social cognitive
interaction, researching and applying these areas in artificial settings
as it becomes feasible.

> 						[...]  My real opinion is that
> without "bringing baby up" so to speak, we won't get much accomplished.  The
> ultimate system will have to be able to reach out, grasp (whether visually or
> physically, or whatever) and sense its world around it in a rich manner.  It
> will have to be malleable, but still have certain guidelines built in.  It
> must truly learn, forming a myriad of connections with past experiences and
> thoughts.  In sum, it will have to be a living animal (though made of sand..)

This is one possibility, though not the only one.  Certainly an artificially
cogitating system without many of the abilities you mention would be different
from us, in that its primary needs (food, shelter, sensory input) would not
be the same.  This does not make these things a requirement, however.  If we
wished to build an artificial cogitator with roughly the same sort of world
view as our own, then we would probably have to give it some way of directly
interacting with its environment through sensors and effectors of some sort.  
  I suggest that you find and peruse the last 5 or 6 years of the journal
Cognitive Science, put out by the Cognitive Science Society.  Most of what
has been written there is still fairly up-to-date, as the field is still
reaching "critical mass" in terms of theoretical quantity and quality.  An
article by Norman in that journal, "Twelve Issues for Cognitive Science"
(1980; I'm not sure which issue), discusses many of the things you are
talking about here.  

Let's hear more on this subject!

> Ted Inoue.
> Cornell

-- 

		Mike Sellers
	UUCP: {...your spinal column here...}!tektronix!tekecs!mikes


	   INNING:  1  2  3  4  5  6  7  8  9  TOTAL
	IDEALISTS   0  0  0  0  0  0  0  0  0    1
	 REALISTS   1  1  0  4  3  1  2  0  2    0

craig@think.COM (Craig Stanfill) (09/06/86)

> I find it very interesting that there is so much excitement generated over
> parallel processing computer systems by the AI community.  Interesting in
> that the problems of AI (the intractability of language, vision, and general
> cognition, to name a few) are limited far less by computational
> power than by our lack of understanding. [...]

For the last year, I have been working on AI on the Connection Machine,
which is a massively parallel computer.  Depending on the application,
the CM is between 100 and 1000 times faster than a Symbolics 36xx.  I
have performed some experiments on models of reasoning from memory
(Memory Based Reasoning, Stanfill and Waltz, TMC Technical Report).
Some of these experiments required 5 hours on a 32,000 processor CM.  I,
for one, do not consider a 500-5000 hour experiment on a Symbolics a
practical way to work.

More substantially, having a massively parallel machine changes the way
you think about writing programs.  When certain operations become 1000
times faster, what you put into the inner loop of a program may change
drastically.
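
To make that concrete, here is a minimal, hypothetical sketch of the
memory-based-reasoning style of computation, written in modern Python
with numpy arrays standing in for the CM's data-parallel primitives.
The data, feature weights, and names are invented for illustration;
this is not the actual Stanfill and Waltz code.

    # Hypothetical sketch of memory-based reasoning as one data-parallel
    # step (after Stanfill & Waltz).  numpy stands in for a Connection
    # Machine in which each stored case sits on its own processor; the
    # data and per-feature weights below are invented for illustration.
    import numpy as np

    memory  = np.array([[1, 0, 2],            # one row per stored case,
                        [1, 1, 2],            # one column per feature
                        [0, 1, 0]])
    labels  = np.array(["yes", "yes", "no"])  # outcome stored with each case
    weights = np.array([1.0, 0.5, 2.0])       # assumed feature importances

    def classify(query):
        # On a serial machine the inner loop visits each case in turn,
        # so you build indexes or prune to avoid scanning all of memory.
        # On a parallel machine every case scores itself against the
        # query simultaneously, and the brute-force scan is one step.
        matches    = (memory == query)        # all cases compare at once
        similarity = matches @ weights        # weighted match score per case
        return labels[np.argmax(similarity)]  # global-max reduction

    print(classify(np.array([1, 1, 2])))      # -> yes

The point is the shape of classify: there is no per-case loop left for
the programmer to tune, so a clever index that pays for itself on a
Symbolics may simply not be worth building on a CM.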

rggoebel@watdragon.UUCP (Randy Goebel LPAIG) (09/07/86)

Mike Sellers from Tektronix in Wilsonville, Oregon writes:

| Inordinate amounts of hype have long been a problem in AI; the only difference
| now is that there is actually a small something there (i.e. knowledge based 
| systems), so the hype is rising to truly unbelievable heights.  I don't know
| that AI is returning to its roots in computer science; more likely there is
| just more emphasis on the area(s) where something actually *works* right now.

I would like to remind all who don't know or have forgotten that the notion
of a rational artifact realized as a digital computer does have its roots in
computing, but the more general notion of an intelligent artifact has concerned
scientists and philosophers for much longer than the lifetime of the digital
computer.  John Haugeland's book ``AI: the very idea'' would be good reading
for those who aren't aware that there is a pre-Dartmouth history of ``AI.''

Randy Goebel
U. of Waterloo