tjhorton@ai.toronto.edu ("Timothy J. Horton") (03/18/88)
--------------------------------------------------------------------------
From gpu.utcs.toronto.edu!yorkvm1.bitnet!COGSCI-L Fri Mar 18 08:58:01 1988
Sender: Cognitive Science Discussion Group <COGSCI-L@yorkvm1.bitnet>
From: Michael Friendly <FRIENDLY@yorkvm1.bitnet>
Subject: March 25th meeting
:::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
|                                                               |
|              Cognitive Science Discussion Group               |
|                                                               |
| Speaker : Peter Roosen-Runge (Computer Science, York)         |
| Title   : Forward-chained vs. Backward-chained Rules:         |
|           A Crucial Polarity in Cognitive Models              |
| Date    : Friday, Mar. 25, 1988 -- 1pm                        |
| Location: Rm 207 Behavioural Science Bldg., York University   |
|                                                               |
:::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
Abstract
The directionality of rules in a program seems to be related to
deep issues in epistemology. I will briefly sketch how the
contrast between forward- and backward-chained rules is manifested
in AI software, and how it corresponds to well-known oppositions
in the structure of scientific theories, in the structure of
computer programs, and in problem-solving strategies.
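The contrast sketched above can be made concrete with a toy rule
interpreter run in both directions. The following is a minimal
sketch in Python, not taken from the talk; the rule base and fact
names are invented for illustration:

```python
# Rules are (premises, conclusion) pairs over simple fact symbols.
# The same rule base supports both directions of inference.
RULES = [
    ({"croaks", "eats_flies"}, "frog"),
    ({"frog"}, "green"),
]

def forward_chain(facts, rules):
    """Data-driven: repeatedly fire every rule whose premises all
    hold, adding its conclusion, until no new facts appear."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

def backward_chain(goal, facts, rules):
    """Goal-driven: a goal holds if it is a known fact, or if some
    rule concludes it and all of that rule's premises can in turn
    be established recursively."""
    if goal in facts:
        return True
    return any(
        conclusion == goal
        and all(backward_chain(p, facts, rules) for p in premises)
        for premises, conclusion in rules
    )

# Forward chaining derives everything derivable from the data;
# backward chaining answers a single query on demand.
derived = forward_chain({"croaks", "eats_flies"}, RULES)
print("green" in derived)                                        # True
print(backward_chain("green", {"croaks", "eats_flies"}, RULES))  # True
```

The forward chainer never looks at a goal, while the backward
chainer never derives a fact that the query does not demand; that
asymmetry is the polarity the talk examines.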
More speculatively, I will suggest that backward-chaining is a
necessary and almost sufficient requirement for a symbol-processing
system to exhibit a minimal form of "intentionality".
(As an argumentative corollary: Prolog is thus much more relevant
to cognitive science than is Lisp.)
If correct, this result refutes a basic premise of the highly
influential Newell/Anderson approach to cognitive modelling, which
has been based entirely on explicitly forward-chained
architectures.