[comp.ai.digest] Artificial Free Will -- what's it good for?

NICK@AI.AI.MIT.EDU (Nick Papadakis) (06/06/88)

Date: Sat, 4 Jun 88 14:21 EDT
From: George McKee <mckee%corwin.ccs.northeastern.edu@RELAY.CS.NET>
To: ailist@mc.lcs.mit.edu
Subject: Artificial Free Will -- what's it good for?

Obviously the question of whether or not humans have free
will is important to a lot of people, and thinking about how it
could be implemented in a computer program is an effective way to
clarify exactly what we're talking about.
I think McDermott's contributions show this -- they're
getting pretty close to pseudocode that you could think about
translating into executable programs.  (But just to put in my
historical two cents, I first saw this kind of analysis in a
proceedings of the Pontifical Academy of Sciences article by
D.M.MacKay in about 1968.)  If free will is programmable,
it's appropriate to then ask "why bother?", and "how will we
recognize success?", i.e. to make explicit the scientific
motivation for such a project, and the methodology used to
evaluate it.
	I can see two potential reasons to work on building
free will into a computer system: (1) formalizing free will into
a program will finally show us the structure of an aspect of the
human mind that's been confusing to philosophers and psychologists
for thousands of years.  (2) free-will-competent computer systems
will have some valuable abilities missing from systems without
free will.
	Reason 1 is unquestionably important to the cognitive
sciences, and insofar as AI programs are an essential tool to
cognitive scientists, *writing* a program that includes free will
as part of its structure might be a worthwhile project.  But
*executing* a program embodying free will won't necessarily show
us anything that we didn't know already.  Free will in its sense
as a consequence of the incompleteness of an individual's self-model
has an essentially personal character that doesn't get out into
behavior except as verbal behavior in arguments about whether it
exists at all.  For instance, I haven't noticed in this discussion
any mention of how you recognize free will in anyone other than
yourself.  If you can't tell whether I have free will or not, how
will you recognize if my program has it without looking at the code?
And if you always need to look at the code, what's the point in
actually running the program, except for other, irrelevant reasons?
(This same argument applies to consciousness, and explains why I,
and maybe others out there as well, after sketching out some
pseudocode that would have some conscious notion of its own
structure, decided to leave implementation to the people who
work on the formal semantics of reflective languages like
"3lisp" or "brown". (See the proceedings of the Lisp and FP
conferences, but be careful to avoid thinking about multiprocessing
while reading them.))
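	(A toy sketch of the incompleteness argument, in Python --
entirely my own illustration, not anything proposed in the discussion:
an agent carries a self-model that predicts its next action, but its
actual decision rule consults that model and diagonalizes against it.
No self-model the agent can contain ever correctly predicts its own
choices, which is the sense of "free will as a consequence of the
incompleteness of an individual's self-model" used above.)

```python
def self_model(agent_state):
    """The agent's internal prediction of its own next action."""
    return "cooperate" if agent_state["mood"] > 0 else "defect"

def decide(agent_state):
    """Consult the self-model, then do the opposite (diagonalize)."""
    predicted = self_model(agent_state)
    return "defect" if predicted == "cooperate" else "cooperate"

state = {"mood": 1}
print(self_model(state))   # the model predicts "cooperate"
print(decide(state))       # the agent actually does "defect"
```

Whatever prediction the model emits, the decision rule falsifies it --
so any self-model expressible inside the agent is necessarily incomplete
about the agent's own behavior.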
	Which brings us to Reason 2, and free will from the
perspective of pure, pragmatic AI.  As far as I can tell, the only
way free will can affect behavior is by making it unpredictable.
But since there are many other, easier ways to get unpredictability
without having to invoke the demoniacal (or is it oracular?)
Free Will, I'm back to "why bother?" again.  Unpredictability in
behavior is certainly valuable to an autonomous organism in a
dangerous environment, both as an individual (e.g. a rabbit trying
to outrun a hungry fox) and as a group (e.g. a plant species trying
to find a less-crowded ecological niche), but in spite of my use of
the word "trying" this doesn't need to involve any will, free or
otherwise.  In highly sophisticated systems like human societies,
where statements of ability (like diplomas :-) are often effectively
equivalent to demonstrations of ability, claiming "I have Free Will,
you'll fail if you try to predict/control my behavior!" might well be
quite effective in fending off a coercive challenge.  But computer
systems aren't in this kind of social situation (at least the ones I
work with aren't).  In fact they are designed to be as predictable
as possible, and when they aren't, it indicates a failure either
of understanding or of design.  So again, I don't see the need for
Artificial Free Will, fake or real.
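	(To make the "easier ways to get unpredictability" point
concrete, here's a minimal Python sketch of my own -- the names are
hypothetical.  A seeded pseudo-random generator gives rabbit-style
evasive behavior that an observer who doesn't know the seed cannot
predict, and no will, free or otherwise, is anywhere in it.)

```python
import random

def evasive_move(rng):
    """Pick a dodge direction at random -- unpredictable, but will-less."""
    return rng.choice(["left", "right", "straight"])

rng = random.Random()   # seed unknown to any observer
moves = [evasive_move(rng) for _ in range(5)]
print(moves)            # e.g. five directions no fox could anticipate
```

The behavior is exactly as unpredictable as the fox-evading rabbit's,
yet nobody would credit the generator with Free Will -- which is the
"why bother?" point above.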
	My background is largely psychology, so I think that it's
valuable to understand how it is that people feel that their behavior
is fundamentally unconstrained by external forces, especially social
ones.  But I also don't think that this illusion has any primary
adaptive value, and I don't think there's anything to be gained by
giving it to a computer.  If this is true, then the proper place
for this discussion is some cognitive-science list, which I'd be
happy to read if I knew where to send my subscription request.
	- George McKee
	  NU Computer Science