[comp.lang.lisp] Is this the end of the lisp wave?

aarons@syma.sussex.ac.uk (Aaron Sloman) (01/15/91)

valdes@cs.cmu.edu (Raul Valdes-Perez) writes:

> In article <4178@syma.sussex.ac.uk> aarons@syma.sussex.ac.uk (Aaron Sloman) writes:
> >All the people I talked to in the AI field in the early days were
> >very clear that there was a difference between what they were trying
> >to implement and how they were implementing it, although it was
> >agreed that sometimes making the distinction was not easy (hence the
> >occasional confused person who called a program a theory).
>
> Could Prof. Sloman make clear why computer programs do not merit the status
> of theory?  Would he accept a system of differential or difference equations
> as a theory?

This could take us into a long discussion of issues in the
philosophy of science, about the nature of theories, models,
explanations, etc., which I'd rather not get into and which would
not be appropriate for this newsgroup. But I had in mind only the
relatively simple point that most AI programs intended to model
some bit of reality (like many computer models) contain a great
deal of detail that is there not because it corresponds to anything
in the thing being modelled, but because (a) it is required in
order to get the model going on the particular hardware and
software platform, (b) it is required for coping with the
artificial data simulating the real environment, and/or (c) it is
required for nice glossy user interfaces for demonstrating the
software, etc.

When this happens it is all too easy for people (including me) to be
unclear about the distinction between those aspects of the program
that are essential to the theory being demonstrated and those that
aren't. E.g. think of an AI vision program intended to model some
aspect of human vision that takes input in the form of a regular
rectangular array: much of the code will be geared to the structure
of that array. Will all the edge-detecting algorithms that work on
the array be part of the theory of how animal visual systems work,
or will they be an implementation detail providing input to some
other part of the system that is intended as the real model? If the
input to that other part has an "unrealistic" form because of how
it is derived, does that mean that only certain aspects of the
intermediate mechanisms are intended as part of the theory? Which
aspects? It isn't always easy to be clear about this. Unfortunately
no interesting AI theory about the workings of the human mind or
brain can be expressed in a few simple equations.
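To make the point concrete, here is a minimal sketch (names and
details are my own invention, not from any actual vision system) of
a simple gradient-based edge detector over a rectangular array. Note
how much of the code — the nested row/column loops, the indexing,
the border handling — exists only because the input happens to be a
regular rectangular array, i.e. is implementation detail rather than
theory:

```python
def edge_strength(image):
    """Return a same-shaped grid of gradient magnitudes.

    `image` is assumed to be a rectangular nested list of grey
    levels. The loops, the indexing scheme, and the skipped border
    are all artefacts of the array representation, not claims about
    how animal visual systems work.
    """
    rows, cols = len(image), len(image[0])
    out = [[0] * cols for _ in range(rows)]
    # Skip the outermost ring of cells: a border exists only
    # because the array does.
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            gx = image[r][c + 1] - image[r][c - 1]  # horizontal gradient
            gy = image[r + 1][c] - image[r - 1][c]  # vertical gradient
            out[r][c] = abs(gx) + abs(gy)
    return out

# A vertical light/dark boundary shows up as a band of high values.
img = [
    [0, 0, 9, 9],
    [0, 0, 9, 9],
    [0, 0, 9, 9],
    [0, 0, 9, 9],
]
print(edge_strength(img))
```

Whether any of this — as opposed to, say, the later stages it feeds —
counts as part of a theory of vision is exactly the question raised
above.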

This is closely related to the critique David Marr made of some of
the work in AI in the early 70s, though I think his alternative
approach stressed the study of the nature of abstract problems at
the expense of workable solutions able to cope with real-time
constraints, poor data, malfunctioning sensors, etc. (But let's not
get into that now!)

Aaron Sloman,
School of Cognitive and Computing Sciences,
Univ of Sussex, Brighton, BN1 9QH, England
    EMAIL   aarons@cogs.sussex.ac.uk