[comp.ai] Old AI

eliot@phoenix.Princeton.EDU (Eliot Handelman) (12/07/89)

In article <7200@pt.cs.cmu.edu> valdes@b.gp.cs.cmu.edu (Raul Valdes-Perez) writes:
;Simon & Kotovsky wrote a program in an attempt to reproduce human
;ability and shortcomings in sequence-extrapolation tasks.  The 
;justification of this research was:  people arrive at answers on
;such tasks, despite the logical impossibility of a unique answer;
;how can one explain this empirical phenomenon?  Read the paper to
;know how well they did.
;
;Later, Klahr & Wallace wrote a program for the same task, without
;any intent to model human problem-solving, just to have it "work"
;well.
;
;The first reference is "Human acquisition of concepts for sequential 
;patterns," Psychological Review, 70, 534-546.  
;The second is "The development of serial completion strategies:  An
;information-processing analysis," British Journal of Psychology, 61(2), 
;243-257.

I went through Simon & Kotovsky, and it occurred to me that in addition
to the much-discussed categories of "strong" and "weak" AI there is actually
a third: "old" AI. The article appeared in 1963; it carries a footnote to
the effect that "human thinking processes are essentially list processes."
Thus they have a program which generates sequences given a few rules, and
they contend that what happens in the mind must be similar to their program.
Apparently Lisp-like languages (their programs were written in IPL-V, the
language of GPS) were at one time advanced as "languages of thought": the
mind would have corresponding units that take CARs and CDRs, do RPLACAs, etc.
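(For those who haven't met these primitives, a minimal illustration in
Common Lisp notation rather than IPL-V; this is my gloss, not anything
from the paper:)

(defvar *cell* (list 'a 'b 'c))

(car *cell*)        ; => A        the first element
(cdr *cell*)        ; => (B C)    the rest of the list
(rplaca *cell* 'z)  ; => (Z B C)  destructively replace the first element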

Why think this way? Because there weren't any other plausible candidates
to support the claim that computer programs can model the mind (my
reasoning here owes something to Nietzsche).

"Old" AI says: the claims of the AI community are supported by the best
programming paradigm available, and vice-versa. This happened, in 1963, to be 
IPL-V, GPS, etc.

No malice intended toward Simon and Kotovsky: I'm sure this isn't how they
saw things. But the lesson of "old" AI is that we, at least, ought to be
able to see through the error: there's no necessary relationship between
these claims and these programming paradigms. The paradigms just happen to
be around; the claims just happen to be around; ergo, the reasoning goes,
these paradigms must support these claims. Possibly one could extend this
fallacy to paradigms of thought in general.

valdes@b.gp.cs.cmu.edu (Raul Valdes-Perez) (12/07/89)

In article <11988@phoenix.Princeton.EDU> eliot@phoenix.Princeton.EDU (Eliot Handelman) writes:
>I went through Simon & Kotovsky ...
>they contend that what happens in the mind must be similar to their program.

I wish you would cite sentences that support this statement.  As I recall,
Simon & Kotovsky provide a model for an empirical phenomenon, compare its
behavior and predictions with observations, and conclude that there is
evidence for believing that the model veridically accounts for the mental
mechanisms involved.  This sounds pretty unobjectionable, being what many,
many empirical scientists do, schematically speaking.
-- 
Raul E. Valdes-Perez

eliot@phoenix.Princeton.EDU (Eliot Handelman) (12/08/89)

In article <7247@pt.cs.cmu.edu> valdes@b.gp.cs.cmu.edu (Raul Valdes-Perez) writes:
;In article <11988@phoenix.Princeton.EDU> eliot@phoenix.Princeton.EDU (Eliot Handelman) writes:
;>I went through Simon & Kotovsky ...
;>they contend that what happens in the mind must be similar to their program.
;
;I wish you would cite sentences that support this statement.  As I recall,
;Simon & Kotovsky provide a model for an empirical phenomenon, compare its
;behavior and predictions with observations, and conclude that there is
;evidence for believing that the model veridically accounts for the mental
;mechanisms involved.  This sounds pretty unobjectionable, being what many,
;many empirical scientists do, schematically speaking.

They say that the sequence "urtustuttu" could be produced as follows (they
spell out the algorithm step by step in prose; I've coded it to make it
easier to read --- how things do change):

(defvar *alphabet*
  ;; the paper's circular alphabet, entered at r (wraps around at z)
  (let ((letters (coerce "rstuvwxyzabcdefghijklmnopq" 'list)))
    (setf (cdr (last letters)) letters)        ; close the circle
    letters))

(defun produce-the-sequence-urtustuttu ()
  (let ((immediate-memory (pop *alphabet*)))   ;; step 1
    (loop                                      ;; step 6
      (princ #\u)                              ;; step 2
      (princ immediate-memory)                 ;; step 3
      (setq immediate-memory (pop *alphabet*)) ;; step 4
      (princ #\t))))                           ;; step 5

This is exactly the sequence of events they give. (Step 6 should really be a
go statement at the end, but I couldn't bear to write it that way.)
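(If you'd rather not interrupt an infinite loop, here's a bounded wrapper;
this is my own convenience, not anything in the paper:)

(defun first-n-letters (n)
  ;; collect the first N letters of the series instead of printing forever
  (let ((alphabet (coerce "rstuvwxyzabcdefghijklmnopq" 'list)))
    (setf (cdr (last alphabet)) alphabet)      ; same circular alphabet
    (let ((immediate-memory (pop alphabet))    ; step 1
          (out '()))
      (loop until (>= (length out) n)          ; bounded stand-in for step 6
            do (push #\u out)                            ; step 2
               (push immediate-memory out)               ; step 3
               (setq immediate-memory (pop alphabet))    ; step 4
               (push #\t out))                           ; step 5
      (coerce (subseq (nreverse out) 0 n) 'string))))

(first-n-letters 10)   ; => "urtustuttu"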

They then write:

"We postulate: (italics) normal adult beings have stored in memory a 
program capable of interpreting and executing descriptions of serial
patterns. In its essential structure, the program is like the one we have 
just described. (end italics)

"Our main evidence for these assertions is that the program we have written,
containing the mechanisms and processes we have described, is in fact
capable of generating and extrapolating letter series from stored 
descriptions. We are not aware that any alternative mechanism has been
hypothesized capable of doing this. Further, the basic processes incorporated
in the program are processes that have already been shown to be efficacious
in simulating human problem solving and memorizing behavior." (pp 539-540)

--Simon & Kotovsky (1963), "Human Acquisition of Concepts for Sequential
Patterns," Psychological Review, Vol. 70, No. 6

They did, as you indicate, match the performance of the extrapolation
algorithm against human performance, but the same problems arise there. At
the point in the article from which I've taken the quote, they haven't
discussed the extrapolation algorithm yet.
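For concreteness, here is the flavor of the other half of the task. This is
a naive relational sketch of my own (SAME / NEXT slots over a trial period),
not S&K's actual pattern-description mechanism, so don't quote it back at
them:

(defun alphabet-next (ch)
  ;; successor of CH in the circular alphabet; z wraps to a
  (if (char= ch #\z) #\a (code-char (1+ (char-code ch)))))

(defun slot-relation (letters)
  ;; SAME if all of LETTERS are equal, NEXT if each is the alphabet
  ;; successor of its predecessor, else NIL (naive: a 1-letter slot is SAME)
  (cond ((every #'char= letters (rest letters)) 'same)
        ((every (lambda (x y) (char= (alphabet-next x) y))
                letters (rest letters))
         'next)))

(defun find-pattern (s &key (max-period 4))
  ;; shortest period P whose every slot is all-SAME or all-NEXT,
  ;; returned as (P RELATIONS), or NIL if none up to MAX-PERIOD
  (loop for p from 1 to max-period
        for slots = (loop for i below p
                          collect (loop for j from i below (length s) by p
                                        collect (char s j)))
        for rels = (mapcar #'slot-relation slots)
        when (notany #'null rels)
          return (list p rels)))

(defun extrapolate (s &optional (n 3))
  ;; extend S by N letters according to the detected pattern, or NIL
  (let ((pattern (find-pattern s)))
    (when pattern
      (let ((p (first pattern))
            (rels (second pattern))
            (out (coerce s 'list)))
        (dotimes (k n (coerce out 'string))
          (let ((prev (elt out (- (+ (length s) k) p))))
            (setf out (nconc out
                             (list (if (eq (elt rels (mod (+ (length s) k) p))
                                           'same)
                                       prev
                                       (alphabet-next prev)))))))))))

(find-pattern "urtustuttu")    ; => (3 (SAME NEXT SAME))
(extrapolate "urtustuttu" 3)   ; => "urtustuttuutu"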

So what's wrong with this argument? Opinions?

--eliot