LANSKY@SRI-AI.ARPA (Amy Lansky) (06/20/86)
		    WHY PLANNING ISN'T RATIONAL

		  Terry Winograd (TW@SAIL)
	Stanford University (Computer Science, Linguistics, and CSLI)

		11:00 AM, MONDAY, June 23
	SRI International, Building E, Room EK242 (note room change)

Orthodox AI approaches to describing and achieving intelligent action are based on a "rationalistic" tradition in which the focus is on a process of deducing (using a representation of some kind) the consequences of specific acts (operations) and searching for a sequence of acts that will lead to a desired result (goal).  This works reasonably well for some limited domains, but falls far short of being a general theory of intelligent action.  It does not work well in the small (how I operate my finger muscles, or where an amoeba slithers), or in the large (how I conduct my life or where my research is headed).  Even in cases of clearly explicit rational planning (e.g., planning a bank robbery), the relation between plan and execution is not easy to capture (what happens when the teller sneezes?).

In a recent book written jointly with Fernando Flores, I have proposed a different basis for looking at action and cognition, focusing on the "thrownness" of action without reflection, and on the open-endedness of interpretation.  Any alternative such as ours must address several obvious questions:  Why is the naive view of rational decision-making and action so intuitively plausible if it isn't right?  How can we account for the evolution of complex behavior that is effective in an environment?  What implications does it have for AI and the design of computer systems in general?  I will address these and related questions, focusing on some issues different from those raised in my talk to CSLI a couple of weeks ago on "Why language isn't information".

VISITORS:  Please arrive 5 minutes early so that you can be escorted up from the E-building receptionist's desk.  Thanks!