[comp.ai] McDermott's analysis of "free will"

ok@quintus.UUCP (Richard A. O'Keefe) (05/21/88)

I have been waiting for someone else to say this better than I can,
and unfortunately I've waited so long that McDermott's article has
expired here, so I can't quote him.

Summary: I think McDermott's analysis is seriously flawed.
Caveat:  I have probably misunderstood him.

I understand his argument to be (roughly)
    an intelligent planner (which attempts to predict the actions of
    other agents by simulating them using a "mental model") cannot
    treat itself that way, otherwise it would run into a loop, so it
    must flag itself as an exception to its normal models of
    causality, and thus perceives itself as having "free will".
[I'm *sure* I'm confusing this with something Minsky said years ago.
 Please forgive me.]
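
To make the alleged regress concrete, here is a toy sketch of my own
(not anything from McDermott's article; every name in it is invented):
a planner that predicts other agents by simulating them, asked to
predict itself.

    # Toy illustration only: a planner predicts an agent's next action
    # by running that agent's decision procedure on a copy of the world.
    def predict(agent, world):
        return agent.decide(dict(world))

    class Planner:
        def decide(self, world):
            # To decide, the planner first predicts what it itself will
            # do, which calls predict(self, ...), which calls decide()
            # again: the regress the argument appeals to.
            return predict(self, world)

    # Planner().decide({}) never returns: it recurses until Python's
    # recursion limit is hit and a RecursionError is raised.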

1.  From the fact that a method would get an intelligent planner into
    serious trouble, we cannot conclude that people don't work that way.
    To start with, people have been known to commit suicide, which is
    disastrous for their future planning abilities.  More seriously,
    people live in a physical world, and hunger, a swift kick in the
    pants, strange noises in the undergrowth &c, act not unlike the
    Interrupt key.  People could well act in ways that would have them
    falling into infinite loops as long as the environment provided
    enough higher-priority events to catch their attention.
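
    As a toy illustration of that last point (my own example, nothing
    from McDermott): a deliberator that would loop forever if left
    alone can still behave sensibly, provided environmental events keep
    preempting it, much as the Interrupt key preempts a runaway
    program.

        import heapq

        def endless_deliberation():
            # A "planner" that, left alone, never finishes thinking.
            step = 0
            while True:
                step += 1
                yield "still thinking (step %d)" % step

        def run(agent, events, max_ticks=6):
            # events are (priority, description) pairs; lower number =
            # more urgent.  Pending events always preempt deliberation.
            heapq.heapify(events)
            for tick in range(max_ticks):
                if events:
                    _, what = heapq.heappop(events)
                    print("tick %d: interrupted by %s" % (tick, what))
                else:
                    print("tick %d: %s" % (tick, next(agent)))

        run(endless_deliberation(),
            [(1, "strange noise in the undergrowth"), (2, "hunger")])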

2.  It is possible for a finite computer program with a sufficiently
    large (but at all times finite) store to act *as if* it were a
    one-way infinite tower of interpreters.  Brian Cantwell Smith
    showed this with his design for 3-Lisp.  Jim des Rivières, for one,
    has implemented 3-Lisp.  So the mere possibility of an agent
    having to appear to simulate itself simulating itself ... doesn't
    show that unbounded resources would be required:  we need to know
    more about the nature of the model and the simulation process to
    show that.
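
    I don't have 3-Lisp to hand, so here is only a caricature of the
    finite-store point (a sketch of mine, not Smith's design): levels
    of the tower are built lazily, only when something actually
    reflects up to them, so the apparently infinite tower never
    occupies more than finite store.

        class Tower:
            # Caricature of a reflective tower: an unbounded stack of
            # interpreter levels, only the touched ones ever built.
            def __init__(self):
                self.levels = {}

            def level(self, n):
                if n not in self.levels:
                    self.levels[n] = "<interpreter for level %d>" % n
                return self.levels[n]

        t = Tower()
        t.level(0)            # the base interpreter
        t.level(3)            # reflect three levels up
        print(len(t.levels))  # prints 2: finite store, "infinite" tower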

3.  In any case, we only get the infinite regress if the planner
    simulates itself *exactly*.  There is a Computer Science topic
    called "abstract interpretation", where you approximate the
    behaviour of a computer program by running it over a simplified,
    abstract model of its data.
    Any abstract interpreter worth its salt can interpret itself
    interpreting itself.  The answers won't be precise, but they are
    often useful.
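
    For instance, a toy abstract interpreter of my own over the usual
    "sign" domain: arithmetic is modelled on the abstract values
    negative / zero / positive / unknown instead of on actual numbers,
    so the answers are approximate but often still informative.

        NEG, ZERO, POS, TOP = "neg", "zero", "pos", "unknown"

        def abs_add(a, b):
            # Abstract addition on signs; TOP means "could be anything".
            if a == ZERO: return b
            if b == ZERO: return a
            if a == b and a in (NEG, POS): return a
            return TOP        # e.g. pos + neg: sign is not determined

        def abs_mul(a, b):
            # Abstract multiplication on signs.
            if ZERO in (a, b): return ZERO
            if TOP in (a, b): return TOP
            return POS if a == b else NEG

        print(abs_mul(POS, NEG))  # 'neg': approximate, but informative
        print(abs_add(POS, NEG))  # 'unknown': imprecise, yet still safe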

4.  At least one human being does not possess sufficient knowledge of
    the workings of his mind to be able to simulate himself anything BUT
    vaguely.  I refer, of course, to myself.  [Well, I _think_ I'm
    human.]  If I try to predict my own actions in great detail, I run
    into the problem that I don't know enough about myself to do it,
    and this doesn't feel any different from not knowing enough about
    President Reagan to predict his actions, or not knowing enough
    about the workings of a car.  I do not experience myself as a
    causal singularity, and the actions I want to claim as free are the
    actions which are in accord with my character, and so are in some
    sense at least statistically predictable.  Some other explanation
    must be
    found for _my_ belief that I have "free will".

Some other issues:

    It should be noted that dualism has very little to do with the
    question of free will.  If body and mind are distinct substances,
    that doesn't solve the problem; it only moves the question of
    determinism/randomness/whatever else from the physical domain to
    the mental domain.  Minds could be nonphysical and still be
    determined.

    What strikes me most about this discussion is not the variety of
    explanations, but the disagreement about what is to be explained.
    Some people seem to think that their freest acts are the ones
    which even they cannot explain; others do not have this feeling.
    Are we really arguing about the same (real or illusory) phenomenon?