[comp.ai.digest] A BDI Approach to Free Will

AIList-REQUEST@AI.AI.MIT.EDU (AIList Moderator Nick Papadakis) (05/25/88)

Return-Path: <@AI.AI.MIT.EDU:LAWS@IU.AI.SRI.COM>
Date: Mon  9 May 88 23:41:40-PDT
From: Ken Laws <LAWS@IU.AI.SRI.COM>
Subject: A BDI Approach to Free Will
To: ailist@AI.AI.MIT.EDU
Mail-System-Version: <VAX-MM(215)+TOPSLIB(126)+PONY(205)@IU.AI.SRI.COM>

I am not trained in philosophy, but the following points seem reasonable:

Let my current beliefs, desires, and intentions be called my BDI state.
It may be a fuzzy state, an algorithm, whatever.  Let the continuous
sequence of all such states from conception to the present be called
my BDI history.
(I gather that these are the situated automata assumptions.  Fine;
I'm willing to view myself as a Markov process.  I just hope I'm not
abusing a standardized vocabulary.) 
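
For concreteness, here is one toy rendering of these definitions as a
Python sketch.  The field names and structures are mine, chosen for
illustration; they are not a standard BDI formalization:

    from dataclasses import dataclass

    @dataclass
    class BDIState:
        beliefs: dict        # what I take to be true
        desires: dict        # e.g. {action: strength of preference}
        intentions: set      # the actions I currently intend

    BDIHistory = list        # of BDIState, conception to the present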

Are my actions fully determined by external context, my BDI state,
and perhaps some random variables?  Yes, of course -- what else is there?
This follows almost directly from my definition of BDI state.
I suppose there could be influence from non-BDI variables
(e.g., from my BDI history, which is not itself a belief, desire,
or intention), but I could fix that by positing a more elaborate
state vector that includes all such influences.
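
Stated as a function signature, the claim is that an action is computed
from external context, the current BDI state, and some random variables
-- and from nothing else.  Continuing the sketch above (the policy here
is an arbitrary stand-in, not a claim about how the computation actually
works):

    import random

    def act(context, state, rng):
        # Action = f(external context, BDI state, random variables),
        # and nothing else.  Here context is (toy encoding) the set
        # of actions the world currently permits.
        feasible = sorted(a for a in state.intentions if a in context)
        if not feasible:
            return None
        # One arbitrary policy: weight each feasible intention by the
        # strength of the corresponding desire, then sample.
        weights = [state.desires.get(a, 1.0) for a in feasible]
        return rng.choices(feasible, weights=weights, k=1)[0]

    # e.g.  act({"walk", "eat"}, my_state, random.Random(0))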

Is my BDI history fully determined by genetics, external context,
and perhaps some random variables?  Yes, of course -- since I'm not
a mind/body dualist.  The dualist position seems to require
a spiritual-domain context, BDI state, and history -- but
that just bumps the problem up one level instead of solving it.

Are my actions predictable?  No.  My BDI history is chaotic and
possibly stochastic, and my BDI state is unknowable.  Even I can't
predict my actions in complete detail, although I can predict dominant
characteristics in familiar situations.

Do I have free will?  What does that mean?  It can't mean that I will
my actions to contradict my BDI state, since that intention would 
itself be part of my BDI state.  It can't mean that I ignore
my BDI state and take random actions, since that surrenders will
to chance.  (The act of surrender is controlled by my BDI state, and
is separate from any random acts that later occur.  It might be an
act of free will, if we can pin down what that means.)

If it means anything, free will must mean the freedom to take the
actions dictated by my BDI state.  Yet that is only the freedom
to follow a program, the antithesis of free will.  So the term itself
is a contradiction, and the discussion is meaningless.  I do what
I do because I am what I am, and the current "I" has no control
over what I am at this moment.

There is one aspect I haven't covered.  Because the BDI process is
recursive, my current actions (including thoughts and other mental
actions) feed back into my future BDI states.  I can shape my own
character and destiny, although my actions in this regard are still
determined by my BDI state.  I can lock in specific goals, then
work toward changing my BDI state to achieve them.  Success may
be almost instantaneous, or may be as difficult as quitting smoking
or losing weight.  It is in these processes that the illusion of
free will is strongest, but it is still an illusion (of the sort
pointed out by Drew McDermott).  It is also in these processes
that we most sense our lack of free will when we fail to achieve
the internal states necessary for our chosen goals.
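
That feedback loop is easy to write down in the same toy terms.  The
update rule below is a placeholder for whatever slow, fallible process
actually reshapes a state; only the shape of the loop matters:

    def update(state, action):
        # Today's action -- mental or physical -- is folded into
        # tomorrow's state; "locking in a goal" is just an action that
        # edits the very intentions later actions will be drawn from.
        new = BDIState(dict(state.beliefs), dict(state.desires),
                       set(state.intentions))
        new.beliefs["last_action"] = action
        return new

    def live(state, contexts, rng):
        # Each action is dictated by the state that produced it, yet
        # the trajectory of states is steered by those same actions.
        history = [state]
        for context in contexts:
            state = update(state, act(context, state, rng))
            history.append(state)
        return history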

I see no reason why we can't build machines with similar mental
architectures, at least in principle.  They will necessarily
experience the same illusion of free will, although they may or
may not believe that
they have it.  We can also believe as we will, but that will is
no more free for us than for the machines.

					-- Ken Laws

-------