[comp.ai] shrdlu

gerry@zds-ux.UUCP (Gerry Gleason) (02/10/90)

In article <MITCHELL.90Jan25115652@tartarus.uchicago.edu> mitchell@tartarus.uchicago.edu (Mitchell Marks) writes:
>. . .  Winograd's recent book
>_Understanding_Computers_and_Cognition_, written with Fernando Flores,
>seems to reflect major changes in his thinking about
>computational/cognitive issues, and he may no longer hold to the
>positions suggested in the earlier book and article mentioned in this
>paragraph.

My understanding of what is presented in this book is that he now takes
shrdlu, and programs like it, to be negative evidence for the strong AI
position.  I find it pretty ridiculous to take the failure of a particular
approach as the failure of all possible approaches, so there must be other
evidence for rejecting strong AI.

My problem is that having read UCaC (significant parts more than once), and
having participated in some of Flores's programs run by his company,
Logonet, where I was exposed to more of the philosophical background
of this work, I still don't get it.  That is, I can interpret much of what
is said as support for strong AI, and I haven't found anywhere that makes
explicit the principles and logic that lead to the other position.  The
argument seems to use a biological/evolutionary foundation in much the same
way as Penrose (_The_Emperor's_New_Mind_) uses QM and physics.  To me,
all this argument says is that if a machine is to be intelligent, it must
be capable of interacting with the world (the real world, not a simulated
one) in a way as sophisticated as the way we do.

Any help understanding this would be appreciated.

Gerry Gleason